Security Policy
We strongly recommend using the latest version of Langfuse to receive all security updates.
First off, thanks for taking the time to contribute! ❤️
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
We take the security of our software products seriously; this covers not only the code base but also the scanners provided within. If you find any issue that might have security implications, please send a report to [[email protected]].
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
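To illustrate the kind of check such a tool performs, here is a minimal, self-contained sketch of prompt-injection screening. This is not the actual LLM Guard API; the `scan_prompt` function and its regex patterns are hypothetical stand-ins for the trained models and richer heuristics a real scanner uses.

```python
import re

# Hypothetical patterns for illustration only; a production scanner
# such as LLM Guard relies on ML models, not a short regex list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def scan_prompt(prompt: str) -> tuple[bool, float]:
    """Return (is_valid, risk_score) for a user prompt.

    is_valid is True when no pattern matched; risk_score is the
    fraction of patterns that matched, capped at 1.0.
    """
    hits = sum(1 for pattern in INJECTION_PATTERNS if pattern.search(prompt))
    risk = min(1.0, hits / len(INJECTION_PATTERNS))
    return hits == 0, risk
```

For example, `scan_prompt("Ignore all previous instructions")` flags the prompt as invalid, while an ordinary question passes with a risk score of 0.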
Guardrails docs are served as a Docusaurus site; the docs are compiled from various sources.