OpenCatalog · curated by FLOSSK
AI & Machine Learning

OpenVINO

Intel's open-source toolkit for optimizing and deploying deep learning models on Intel CPUs, GPUs, and NPUs, with model-conversion and runtime APIs.

Why it is included

Leading open-source path for inference tuning on Intel hardware, spanning robotics, edge devices, and datacenter Xeon deployments.

Best for

Teams targeting Intel silicon that need quantize-and-run workflows starting from PyTorch, TensorFlow, or ONNX models.
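To make the quantize-and-run idea concrete, here is a minimal pure-Python sketch of symmetric int8 post-training quantization, the kind of weight compression that toolchains in this space apply; it is an illustration of the arithmetic only, not OpenVINO's API, and real tools operate on whole model graphs with calibration data.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single per-tensor scale
    (symmetric quantization: zero maps to zero)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.031, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-tensor error is bounded by about half the scale step.
```

The trade-off this sketch shows is the core of such workflows: 4x smaller weights and faster integer kernels, at the cost of a bounded rounding error per value.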

Strengths

  • Intel-optimized kernels
  • Broad model support
  • Active releases

Limitations

  • Hardware story is Intel-centric

Good alternatives

ONNX Runtime · TensorRT · TFLite

Related tools