Security Policy
We strongly recommend using the latest version of Langfuse to receive all security updates.
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
site_author: Protect AI, Inc.
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [[email protected]].
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
The Guardrails docs are served as a Docusaurus site and are compiled from various sources.
Welcome and thank you for your interest in contributing to Guardrails! We appreciate all contributions, big or small, from bug fixes to new features. Before diving in, let's go through some guidelines to make the process smoother for everyone.