See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
/node_modules
public-hoist-pattern[]=@aws-sdk/client-s3
.dockerignore
skip = .git,*.pdf,*.svg,package-lock.json,*.prisma,pnpm-lock.yaml
We take the security of our software products seriously; this includes not only the code base but also the scanners it provides. If you find any issue that might have security implications, please send a report to [[email protected]].
site_author: Protect AI, Inc.
__pycache__/
LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).
:tada: Thanks for taking the time to contribute! :tada:
- repo: https://github.com/pre-commit/pre-commit-hooks
MD004: false # Unordered list style
[*]
- "guardrails/version.py"
Welcome and thank you for your interest in contributing to Guardrails! We appreciate all contributions, big or small, from bug fixes to new features. Before diving in, let's go through some guidelines to make the process smoother for everyone.
Guardrails docs are served as a docusaurus site. The docs are compiled from various sources
<img src="https://raw.githubusercontent.com/guardrails-ai/guardrails/main/docs/dist/img/Guardrails-ai-logo-for-dark-bg.svg#gh-dark-mode-only" alt="Guardrails AI Logo" width="600px">
*__pycache__*
- repo: https://github.com/astral-sh/ruff-pre-commit
*.pyc
NVIDIA is dedicated to the security and trust of our software products and services, including all source code repositories managed through our organization.
[Apache-2.0 License](https://opensource.org/licenses/Apache-2.0)
All notable changes to the Colang language and runtime will be documented in this file.
All notable changes to this project will be documented in this file.