Security Policy
We strongly recommend using the latest version of Langfuse to receive all security updates.
First off, thanks for taking the time to contribute! ❤️
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [[email protected]].
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
Welcome and thank you for your interest in contributing to Guardrails! We appreciate all contributions, big or small, from bug fixes to new features. Before diving in, let's go through some guidelines to make the process smoother for everyone.