.nvmrc
v24.6.0
public-hoist-pattern[]=@aws-sdk/client-s3
/node_modules
.dockerignore
packages/shared/prisma/generated
skip = .git,*.pdf,*.svg,package-lock.json,*.prisma,pnpm-lock.yaml
site_author: Protect AI, Inc.
We take the security of our software products seriously; this covers not only the code base but also the scanners it provides. If you find any issue that might have security implications, please send a report to [[email protected]].
- repo: https://github.com/pre-commit/pre-commit-hooks
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
:tada: Thanks for taking the time to contribute! :tada:
MD004: false # Unordered list style
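The `MD004: false` line above reads as a fragment of a markdownlint rule configuration. A minimal standalone configuration disabling that rule might look like the following (a hypothetical sketch; the actual file name and surrounding keys in the source repo are not shown here):

```yaml
# .markdownlint.yaml — hypothetical sketch
default: true   # enable all rules by default
MD004: false    # don't enforce a single unordered-list marker style
```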
__pycache__/
[*]
Welcome, and thank you for your interest in contributing to Guardrails! We appreciate all contributions, big or small, from bug fixes to new features. Before diving in, let's go through some guidelines to make the process smoother for everyone.
<img src="https://raw.githubusercontent.com/guardrails-ai/guardrails/main/docs/dist/img/Guardrails-ai-logo-for-dark-bg.svg#gh-dark-mode-only" alt="Guardrails AI Logo" width="600px">
- "guardrails/version.py"
Guardrails docs are served as a Docusaurus site. The docs are compiled from various sources.
- repo: https://github.com/astral-sh/ruff-pre-commit
*.pyc
*__pycache__*
NVIDIA is dedicated to the security and trust of our software products and services, including all source code repositories managed through our organization.
All notable changes to this project will be documented in this file.
All notable changes to the Colang language and runtime will be documented in this file.