Contributing to vLLM
You may find information about contributing to vLLM on [docs.vllm.ai](https://docs.vllm.ai/en/latest/contributing).