<img src="assets/logo.png" alt="Stanford-Alpaca" style="width: 50%; min-width: 300px; display: block; margin: auto;">
**Organization developing the model**
To enable more open-source research on instruction-following large language models, we generated 52K instruction-following demonstrations using OpenAI's text-davinci-003 model.
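For context, that data-generation step amounts to sampling completions from the OpenAI API. Below is a minimal sketch, assuming the legacy `openai<1.0` Python SDK; `text-davinci-003` has since been retired, and the prompt shown is an illustrative stand-in, not the project's actual self-instruct prompt.

```python
import openai

openai.api_key = "sk-..."  # your API key here

# One sampled demonstration; the real pipeline batches many such calls
# and filters the results before adding them to the 52K dataset.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Come up with one new instruction and a response that follows it:\n",
    max_tokens=256,
    temperature=1.0,
)
print(response["choices"][0]["text"])
```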
<a href="README_CN.md">中文</a>  |  <a href="README.md">English</a>  |  日本語 |  <a href="README_FR.md">Français</a> |  <a href="README_ES.md">Español</a>
Qwen-7B performs BPE tokenization on UTF-8 bytes with the `tiktoken` package.
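The sketch below shows what byte-level BPE with `tiktoken` looks like in practice. Note that `cl100k_base` is a tiktoken-bundled encoding used here purely for illustration; Qwen-7B ships its own vocabulary and merge table with the model's tokenizer files.

```python
import tiktoken

# Illustrative encoding only -- not Qwen-7B's actual vocabulary.
enc = tiktoken.get_encoding("cl100k_base")

# Byte-level BPE handles arbitrary UTF-8 input, including non-Latin scripts.
tokens = enc.encode("Byte-level BPE handles any UTF-8 text, 例えば日本語も。")
print(tokens)              # list of integer token ids
print(enc.decode(tokens))  # lossless round trip back to the original string
```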
Large language models have recently attracted an enormous amount of attention.
> Note: as a term, "tokenization" has no agreed-upon Chinese equivalent, so this document uses the English expression for clarity.
Flash attention is an option for accelerating training and inference. Only NVIDIA GPUs with the Turing, Ampere, Ada, or Hopper architectures (e.g., H100, A100, RTX 3090, T4, RTX 2080) support flash attention. **You can use our models without installing it.**
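One way to opt in is via the generic `transformers` switch shown below; this is a minimal sketch assuming `transformers >= 4.36` and the `flash-attn` package are installed, and the Qwen repo may expose its own flag instead. `"Qwen/Qwen-7B"` is an illustrative checkpoint id.

```python
import torch
from transformers import AutoModelForCausalLM

# Flash attention additionally requires a half-precision dtype and one of
# the supported GPU architectures listed above.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # drop this line to run without it
    trust_remote_code=True,
)
```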