Security Policy
We strongly recommend using the latest version of Langfuse to receive all security updates.
Explore
61,659 skills indexed with the new KISS metadata standard.
We strongly recommend using the latest version of Langfuse to receive all security updates.
clickhouse:
First off, thanks for taking the time to contribute! ❤️
<div align="center">
<div align="center">
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
<div align="center">
v24.6.0
Langfuse is an open source LLM engineering platform for developing, monitoring,
.dockerignore
packages/shared/prisma/generated
/node_modules
public-hoist-pattern[]=@aws-sdk/client-s3
skip = .git,*.pdf,*.svg,package-lock.json,*.prisma,pnpm-lock.yaml
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [[email protected]].
site_author: Protect AI, Inc.
MD004: false # Unordered list style
__pycache__/
- repo: https://github.com/pre-commit/pre-commit-hooks
:tada: Thanks for taking the time to contribute! :tada:
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
[*]
<img src="https://raw.githubusercontent.com/guardrails-ai/guardrails/main/docs/dist/img/Guardrails-ai-logo-for-dark-bg.svg#gh-dark-mode-only" alt="Guardrails AI Logo" width="600px">
- "guardrails/version.py"