OpenCatalog · curated by FLOSSK
AI & Machine Learning

ONNX Runtime

Cross-platform inference accelerator for ONNX models: CPU, GPU, and mobile execution providers with graph optimizations.

Why it is included

Standard OSS runtime for shipping models across frameworks and hardware without retraining.

Best for

Edge, mobile, and server inference where a single graph bundle must run everywhere.

Strengths

  • Broad execution provider (EP) support
  • ONNX as a framework-neutral interchange format
  • Maintained by Microsoft and the community

Limitations

  • Not every operator combination exports cleanly; validate graphs on each target

Good alternatives

TensorRT · OpenVINO · TVM
