Unsloth
Optimized fine-tuning library claiming roughly 2× faster LoRA/QLoRA training and lower VRAM use via custom Triton kernels, while staying compatible with the Hugging Face ecosystem.
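A minimal sketch of the advertised workflow, assuming the unsloth package's FastLanguageModel API; the checkpoint name and LoRA hyperparameters below are illustrative placeholders, not recommendations:

```python
# Sketch: QLoRA setup with Unsloth (checkpoint and hyperparameters
# are illustrative, not recommendations).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # any HF-compatible checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA: base weights quantized to 4 bits
)

# Attach LoRA adapters; only these low-rank matrices receive gradients.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_gradient_checkpointing="unsloth",  # trades compute for VRAM
)
```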
Why it is included
A popular acceleration layer in open-model fine-tuning notebooks and among startups.
Best for
Fast iteration on single-GPU fine-tunes under tight memory budgets.
Strengths
- Speed
- VRAM savings
- HF Trainer integration (see the sketch after this list)
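As a sketch of that Trainer integration, the model and tokenizer from the snippet above drop into trl's SFTTrainer; the dataset path is a placeholder, and exact keyword names vary across trl versions (newer releases move dataset_text_field into SFTConfig):

```python
# Sketch: training the Unsloth-patched model with trl's SFTTrainer.
# Assumes `model` and `tokenizer` from the previous snippet and a local
# train.jsonl whose records each carry a "text" field (placeholder data).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```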
Limitations
- Kernel support matrix evolves with CUDA versions
Good alternatives
Axolotl · LLaMA Factory
Related tools
Axolotl
YAML-configured fine-tuning for LLMs: LoRA, QLoRA, FSDP, and many architectures on top of Hugging Face trainers.
Hugging Face Transformers
State-of-the-art pretrained models for PyTorch, TensorFlow, and JAX.
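For orientation, the library's canonical entry point is the pipeline API, which wraps model and tokenizer loading behind a task name; the model choice here is illustrative:

```python
# Sketch: one-call inference via the Transformers pipeline API
# (model choice is illustrative; omit `model` to use the task default).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Fine-tuning works by", max_new_tokens=20)[0]["generated_text"])
```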
PEFT
Parameter-efficient fine-tuning methods (LoRA, adapters, prompt tuning) integrated with Transformers models.
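A minimal sketch of the PEFT pattern, wrapping a Transformers model so only small adapter matrices are trained; the base checkpoint and hyperparameters are illustrative:

```python
# Sketch: adding a LoRA adapter to a causal LM with PEFT
# (checkpoint and hyperparameters are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```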
OLMo
Allen AI's fully open LLM pipeline: weights, training code, data mixes, and evaluation; a flagship for research transparency.
GPT-NeoX
EleutherAI's framework and 20B-class models for training large autoregressive LMs with 3D parallelism; an Apache-2.0 training stack.
Ollama
Local LLM runner and model library with simple CLI and API for workstation inference.
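A sketch of calling that API from Python, assuming an Ollama server on its default port with the named model already pulled:

```python
# Sketch: querying a local Ollama server over its REST API
# (model name is illustrative and must already be pulled).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```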
