This file inherits settings from mkdocs.yml but adds the PDF plugin.
INHERIT: ./mkdocs.yml
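A minimal sketch of what such an alternate config file could look like, assuming the `with-pdf` plugin from the `mkdocs-with-pdf` package (the actual plugin and output path used by this repo are not shown in the fragment above):

```yaml
# mkdocs-pdf.yml — hypothetical sketch, not the repo's actual file.
# INHERIT loads the parent config; keys defined here take precedence.
INHERIT: ./mkdocs.yml

# Note: redefining a list-valued key such as `plugins` replaces the
# parent's list rather than appending to it, so the child file would
# typically restate any parent plugins it still needs.
plugins:
  - with-pdf:
      output_path: pdf/documentation.pdf
```

Building with this file would then use `mkdocs build -f mkdocs-pdf.yml`.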
118,187 skills indexed with the new KISS metadata standard.
We are committed to providing a welcoming and inclusive environment for all people, regardless of age, body size, caste, disability, ethnicity, gender identity and expression, level of experience, family status, gender, immigration status, level of expertise, national origin, personal appearance, po
We take the security of RAGAS seriously. If you discover a security vulnerability in this project, please report it to us privately. **Do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.**
<img style="vertical-align:middle" height="200"
This comprehensive guide covers development workflows for the Ragas monorepo, designed for both human developers and AI agents.
repos:
.DS_Store
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
mkdocs:
test_resources
<img src="https://github.com/confident-ai/deepeval/blob/main/docs/static/img/deepeval.png" alt="DeepEval Logo" width="100%">
For issues in this repo, you can mention one or more of:
__pycache__/
- repo: https://github.com/psf/black
Thanks for thinking about contributing to DeepEval! We accept fixes, improvements, or even entire new features. Some reasons why you might want to contribute:
Version 2.0, January 2004
This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations."
*.ipynb
For a more in-depth look at our security policy, please check out our [Coordinated Vulnerability Disclosure Policy](https://openai.com/security/disclosure/#:~:text=Disclosure%20Policy,-Security%20is%20essential&text=OpenAI%27s%20coordinated%20vulnerability%20disclosure%20policy,expect%20from%20us%20
Copyright (c) 2023 OpenAI
- repo: https://github.com/pre-commit/mirrors-mypy
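The `repos:` entries above are fragments of a `.pre-commit-config.yaml`. A minimal sketch of how they fit together, assuming the standard `black` and `mypy` hook ids from those repositories (the `rev` pins below are illustrative placeholders, not the versions used in this repo):

```yaml
# .pre-commit-config.yaml — hypothetical sketch.
# Each entry pins a hook repository to a tag/commit via `rev`
# and lists the hook ids to run from it.
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2        # placeholder version
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0       # placeholder version
    hooks:
      - id: mypy
```

After `pre-commit install`, these hooks run automatically on `git commit`.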
> You can now configure and run Evals directly in the OpenAI Dashboard. [Get started →](https://platform.openai.com/docs/guides/evals)
evals.egg-info/