Security Policy
We strongly recommend using the latest version of Langfuse to receive all security updates.
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
First off, thanks for taking the time to contribute! ❤️
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [[email protected]].
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
Guardrails docs are served as a Docusaurus site. The docs are compiled from various sources.
- "guardrails/version.py"