OpenCatalog, curated by FLOSSK

Browse & filter

Filter by platform, license text, maturity, maintenance cadence, and editorial tags like privacy-focused or self-hosted. Search matches names, summaries, tags, and use cases.


YAML-configured fine-tuning for LLMs: LoRA, QLoRA, FSDP, and many architectures on top of Hugging Face trainers.

llm · fine-tuning · lora · training · huggingface
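The entry above describes a YAML-driven fine-tuner. A minimal sketch of what such a QLoRA config can look like; the key names below are assumptions modeled on common community configs, not a verified schema:

```yaml
# Hypothetical QLoRA fine-tune config. Key names are assumptions
# modeled on common YAML-driven trainers, not a verified schema.
base_model: meta-llama/Llama-2-7b-hf
load_in_4bit: true          # QLoRA: quantize the frozen base weights to 4-bit
adapter: qlora              # train low-rank adapters instead of full weights
lora_r: 16                  # adapter rank
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: tatsu-lab/alpaca  # Hub dataset id (illustrative)
    type: alpaca
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 2.0e-4
output_dir: ./outputs/qlora-run
```

Because only the low-rank adapters are trained and the base weights stay quantized, a config like this targets fine-tuning a 7B model on a single consumer GPU.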

Hugging Face TB small LM family (135M–1.7B) with Apache-2.0 weights, aimed at strong on-device and edge quality for their size.

llm · slm · edge · apache-2 · huggingface

OpenAI’s open-weight GPT-OSS checkpoints (e.g. 20B, 120B) hosted on Hugging Face for local inference and fine-tuning.

llm · huggingface · open-weights · openai · text-generation

Historic decoder-only LM family (124M–1.5B) under `openai-community` on the Hub—still a default tutorial and pipeline test target.

llm · huggingface · gpt-2 · education · text-generation
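The GPT-2 checkpoints really are a common smoke test for the `transformers` pipeline API. A minimal sketch (assumes the `transformers` package is installed; the first run downloads the 124M checkpoint from the Hub):

```python
# Text-generation smoke test with the smallest (124M) GPT-2 checkpoint.
# First run downloads ~500 MB of weights from the Hub.
from transformers import pipeline, set_seed

set_seed(0)  # make sampling reproducible
generator = pipeline("text-generation", model="openai-community/gpt2")
outputs = generator("Hello, world.", max_new_tokens=10, num_return_sequences=1)
print(outputs[0]["generated_text"])  # prompt plus up to 10 sampled tokens
```

The same three-line pattern works for most text-generation checkpoints on the Hub, which is why GPT-2 survives as a cheap default for tutorials and pipeline tests.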

Meta’s Open Pretrained Transformer suite (125M–175B) released with reproducible logbooks—canonical Hub org `facebook` / `facebook/opt-*`.

llm · huggingface · meta · research · text-generation

Early open chat models fine-tuned from Llama-class bases by LMSYS—widely mirrored on the Hub (e.g. Vicuna-7B v1.5).

llm · huggingface · chat · instruction-tuning · lmsys

Z.ai GLM-5–generation checkpoints (e.g. FP8 builds) distributed on the Hub for text generation and agent-style use cases.

llm · huggingface · glm · text-generation · z.ai

EleutherAI’s public scaling suite: matched GPT-NeoX–architecture models from 70M to 12B, trained on public datasets for interpretability research.

llm · huggingface · research · eleutherai · interpretability

Apple’s OpenELM family—openly released efficient language models with layer-wise scaling and Hub-hosted instruct variants.

llm · huggingface · apple · efficient · text-generation

NVIDIA Nemotron 3 open model checkpoints (dense and MoE) on Hugging Face for reasoning, coding, and agentic workloads at scale.

llm · huggingface · nvidia · moe · text-generation

BigScience instruction-tuned BLOOM derivatives (e.g. BLOOMZ-560M–176B) for multilingual zero-shot instruction following on the Hub.

llm · huggingface · multilingual · instruction · bigscience

Hugging Face library to run PyTorch training on CPU, single GPU, multi-GPU, or TPU with minimal code changes.

distributed · training · pytorch · huggingface

Hugging Face library for large shared datasets: memory mapping, streaming, Arrow-backed columns, and Hub integration.

data · nlp · llm · huggingface

AutoTrain Advanced: low-code training flows for classification, LLM fine-tunes, and diffusion tasks tied to the Hub.

fine-tuning · automl · huggingface · taaft-repositories

TypeScript/JavaScript libraries to call Inference API, manage Hub assets, and build browser or Node AI features.

huggingface · javascript · typescript · inference · taaft-repositories