v0.52.5 (2026-02-23)
- **traceloop-sdk**: Add evaluator config to the evaluator validator (#3706)
services:
  clickhouse:
  langfuse-web:
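A minimal Compose sketch of these two services might look like the following. This is a hypothetical illustration only: the image tags, ports, credentials, and environment variables are assumptions for the sketch, not Langfuse's official configuration, and should be checked against the project's published `docker-compose.yml`.

```yaml
# Hypothetical sketch: service names come from the fragment above;
# images, ports, and env vars are assumptions, not official settings.
services:
  clickhouse:
    image: clickhouse/clickhouse-server:24.3   # assumed tag
    environment:
      CLICKHOUSE_USER: clickhouse              # assumed credentials
      CLICKHOUSE_PASSWORD: clickhouse
    ports:
      - "8123:8123"   # HTTP interface
      - "9000:9000"   # native protocol

  langfuse-web:
    image: langfuse/langfuse:latest
    depends_on:
      - clickhouse
    ports:
      - "3000:3000"   # web UI
```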
We strongly recommend using the latest version of Langfuse to receive all security updates.
- ClickHouse migrations in the `packages/shared/clickhouse/migrations/clustered` directory should include `ON CLUSTER default` and use `Replicated` merge tree table engines.
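As a hedged illustration of this convention, a clustered migration could look like the sketch below. The table name, columns, and ordering key are invented placeholders; only the `ON CLUSTER default` clause and the `ReplicatedMergeTree` engine reflect the rule stated above.

```sql
-- Hypothetical example: "traces" and its columns are placeholders.
-- The ON CLUSTER clause runs the DDL on every node of the cluster;
-- ReplicatedMergeTree keeps the shards' replicas in sync.
CREATE TABLE IF NOT EXISTS traces ON CLUSTER default (
    id         String,
    project_id String,
    created_at DateTime64(3)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/traces', '{replica}')
ORDER BY (project_id, created_at);
```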
Langfuse is an open-source LLM engineering platform that helps teams collaboratively develop, monitor, evaluate, and debug AI applications.
First off, thanks for taking the time to contribute! ❤️
/node_modules
.dockerignore
packages/shared/prisma/generated
public-hoist-pattern[]=@aws-sdk/client-s3
v24.6.0
skip = .git,*.pdf,*.svg,package-lock.json,*.prisma,pnpm-lock.yaml
site_author: Protect AI, Inc.
We take the security of our software products seriously, which includes not only the code base but also the scanners provided within. If you have found any issues that might have security implications, please send a report to [email protected].
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).