CogVLM-SFT-311K: Bilingual Visual Instruction Data in CogVLM SFT
CogVLM-SFT-311K is the primary aligned corpus used in the initial training of CogVLM v1.0. The process of constructing this dataset is as follows:
📗 [README in English](./README.md)
📗 [中文版README](./README_zh.md)
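
As a rough illustration of how a visual-SFT sample from a corpus like this might be consumed, here is a minimal Python sketch. It assumes each sample is a JSON label file holding a `conversations` list of role/content turns, paired with an image of the same basename; the exact field names and directory layout here are assumptions for illustration, not the guaranteed on-disk format of CogVLM-SFT-311K.

```python
import json
from pathlib import Path


def load_sample(label_path):
    """Parse one SFT label file into (image_path, conversations).

    Assumed layout (illustrative, not guaranteed): a JSON file whose
    'conversations' key holds alternating user/assistant turns, with
    the matching image stored next to it under the same basename.
    """
    label_path = Path(label_path)
    record = json.loads(label_path.read_text(encoding="utf-8"))
    image_path = label_path.with_suffix(".jpg")
    return image_path, record["conversations"]
```

A training loop would then iterate over all label files, tokenize each conversation turn, and pair it with the decoded image.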