OpenCatalog, curated by FLOSSK
AI & Machine Learning

Unsloth

Optimized fine-tuning library that claims roughly 2× faster LoRA/QLoRA training with lower VRAM use, achieved through custom kernels while remaining compatible with the Hugging Face ecosystem.

Why it is included

A popular acceleration layer for open-model fine-tuning, widely used in community notebooks and by startups.

Best for

Fast iteration on single-GPU fine-tunes under tight memory budgets.

Strengths

  • Speed
  • VRAM savings
  • HF Trainer integration
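
The HF Trainer integration above can be sketched as follows. This is a minimal, hedged example of the typical Unsloth workflow (load a quantized base model, attach LoRA adapters, then train with TRL's standard trainer); the model id, dataset, and hyperparameters are illustrative assumptions, and running it requires a CUDA GPU with `unsloth`, `trl`, and `datasets` installed.

```python
# Sketch of a typical Unsloth + TRL fine-tuning flow.
# Assumptions: CUDA GPU available; model id and dataset are examples only.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit quantized base model via Unsloth's patched loader
# (this is where the VRAM savings come from).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # example model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth rewrites these layers to use its fast kernels.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Any Hugging Face dataset with a text column works; this one is an example.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

# Train with the standard TRL/Transformers trainer -- no custom loop needed.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Because the trainer is the stock Hugging Face/TRL one, existing logging, checkpointing, and callback setups carry over unchanged; only the model-loading step differs from a vanilla PEFT script.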

Limitations

  • Kernel support matrix evolves with CUDA versions

Good alternatives

Axolotl · LLaMA Factory
