YAML-configured fine-tuning for LLMs: LoRA, QLoRA, FSDP, and many architectures on top of Hugging Face trainers.
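A minimal sketch of what such a YAML config might look like for a QLoRA fine-tune. The field names loosely follow Axolotl-style conventions and are illustrative assumptions, not this tool's exact schema:

```yaml
# Illustrative QLoRA fine-tune config (assumed field names, not a verified schema)
base_model: meta-llama/Llama-3.1-8B
load_in_4bit: true            # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 16                    # LoRA rank
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: tatsu-lab/alpaca    # example Hub dataset
    type: alpaca
micro_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.0002
num_epochs: 3
```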
Browse & filter
Filter by platform, license text, maturity, maintenance cadence, and editorial tags like privacy-focused or self-hosted. Search matches names, summaries, tags, and use cases.
17 tools match your filters
Hugging Face's SmolLM family (135M–1.7B) from the HuggingFaceTB org, with Apache-2.0 weights aimed at strong on-device and edge quality for their size.
OpenAI’s open-weight GPT-OSS checkpoints (e.g. 20B, 120B) hosted on Hugging Face for local inference and fine-tuning.
OpenAI's historic GPT-2 decoder-only family (124M–1.5B) under `openai-community` on the Hub—still a default tutorial and pipeline-testing target.
Meta’s Open Pretrained Transformer suite (125M–175B) released with reproducible logbooks—canonical Hub org `facebook` / `facebook/opt-*`.
Early open chat models fine-tuned from Llama-class bases by LMSYS—widely mirrored on the Hub (e.g. Vicuna-7B v1.5).
Z.ai GLM-5-generation checkpoints (e.g. FP8 builds) distributed on the Hub for text generation and agent-style use cases.
EleutherAI’s public scaling suite: matched GPT-NeoX–architecture models from 70M–12B with public datasets for interpretability research.
Alibaba’s Qwen2.5 Coder 7B instruct checkpoint on Hugging Face—optimized for code completion, synthesis, and tooling workflows.
Apple’s OpenELM family—openly released efficient language models with layer-wise scaling and Hub-hosted instruct variants.
NVIDIA Nemotron 3 open model checkpoints (dense and MoE) on Hugging Face for reasoning, coding, and agentic workloads at scale.
BigScience instruction-tuned BLOOM derivatives (e.g. BLOOMZ-560M–176B) for multilingual zero-shot instruction following on the Hub.
Hugging Face library to run PyTorch training on CPU, single GPU, multi-GPU, or TPU with minimal code changes.
Hugging Face library for large shared datasets: memory mapping, streaming, Arrow-backed columns, and Hub integration.
AutoTrain Advanced: low-code training flows for classification, LLM fine-tunes, and diffusion tasks tied to the Hub.
Official Python client for the Hugging Face Hub: upload/download models, datasets, and manage tokens and repos.
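As a small illustration, `hf_hub_url` builds the resolve URL for a file in a Hub repo without any network call; `gpt2` is just an example repo id:

```python
from huggingface_hub import hf_hub_url

# Resolve URL for a file at the default revision ("main"); no download happens
url = hf_hub_url(repo_id="gpt2", filename="config.json")
print(url)

# Actually fetching the file (network required) would use:
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="gpt2", filename="config.json")
```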
TypeScript/JavaScript libraries to call Inference API, manage Hub assets, and build browser or Node AI features.
