OpenCatalog, curated by FLOSSK

Browse & filter

Filter by platform, license, maturity, maintenance cadence, and editorial tags such as privacy-focused or self-hosted. Search matches names, summaries, tags, and use cases.

2 tools match your filters

Parameter-efficient fine-tuning methods (LoRA, adapters, prompt tuning) integrated with Transformers models.

Tags: fine-tuning, lora, transformers, llm
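To illustrate why low-rank methods like LoRA are called parameter-efficient, here is a minimal sketch of the parameter-count arithmetic behind a LoRA update (plain Python; the function name and dimensions are illustrative, not part of any library API):

```python
# LoRA replaces a full update of a d x k weight matrix with two trainable
# low-rank factors: B (d x r) and A (r x k), so only r * (d + k) parameters
# are trained instead of d * k.
def lora_params(d, k, r):
    full = d * k          # parameters in a full fine-tune of this layer
    lora = r * (d + k)    # parameters in the low-rank update B @ A
    return full, lora

full, lora = lora_params(4096, 4096, 8)
# full = 16777216, lora = 65536 -> the LoRA update trains ~0.39% as many
# parameters for this layer
```

With rank r = 8 on a 4096 x 4096 projection, the trainable count drops by a factor of 256, which is why such adapters fit comfortably alongside a frozen base model.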

NVIDIA library for FP8/FP4 and fused kernels on Hopper/Ada-class GPUs to accelerate Transformer training and inference.

Tags: training, transformers, fp8, nvidia, taaft-repositories
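FP8 training of the kind this library accelerates typically relies on per-tensor scaling: a scale factor is chosen so the tensor's absolute maximum (amax) lands at the top of the FP8 format's representable range (448 for E4M3). A minimal sketch of that scaling arithmetic, with an illustrative function name not taken from any library:

```python
# Per-tensor FP8 scaling sketch: choose a scale so that amax maps to the
# E4M3 representable maximum (448); the inverse scale is used to dequantize.
E4M3_MAX = 448.0

def fp8_scale(amax):
    # Guard against an all-zero tensor, where any scale works.
    return E4M3_MAX / amax if amax > 0 else 1.0

s = fp8_scale(12.5)
# 12.5 * s maps the tensor's largest value to the FP8 E4M3 maximum of 448
```

Keeping amax statistics and rescaling each step is what lets 8-bit formats cover the wide dynamic range of activations and gradients during Transformer training.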