Get Started with OpenVINO™ toolkit for Linux* OS

About the Intel® Distribution of OpenVINO™ toolkit

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

The Intel® Distribution of OpenVINO™ toolkit for Linux*:

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across Intel® hardware, including Intel® CPUs, Intel® Integrated Graphics, and Intel® Movidius™ Neural Compute Stick
  • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls for computer vision standards, including OpenCV*, OpenCL™, and OpenVX*

Included with the Installation

The following components are installed by default:

  • Model Optimizer: Imports, converts, and optimizes models trained in popular frameworks (Caffe*, TensorFlow*, MXNet*, and ONNX*) to a format usable by Intel tools, especially the Inference Engine.
  • Inference Engine: The engine that runs a deep learning model. It includes a set of libraries for easy integration of inference into your applications.
  • Drivers and runtimes for OpenCL™ version 2.1: Enable OpenCL™ on the GPU/CPU for Intel® processors.
  • Intel® Media SDK: Offers access to hardware-accelerated video codecs and frame processing.
  • OpenCV*: OpenCV* community version compiled for Intel® hardware.
  • OpenVX*: Intel's implementation of OpenVX* optimized for running on Intel® hardware (CPU, GPU, IPU).
  • Pre-trained models: A set of Intel pre-trained models for learning and demo purposes or for developing deep learning software.
  • Sample Applications: A set of simple console applications demonstrating how to use the Inference Engine in your applications. For information about building and running the samples, refer to the Inference Engine Samples Overview.
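To illustrate the Model Optimizer step described above, the sketch below converts a frozen TensorFlow* model to the Inference Engine IR format (an .xml topology file plus a .bin weights file). The installation path, model file name, and output directory are assumptions; substitute your own:

```shell
# Assumes the default installation directory /opt/intel/openvino;
# adjust the paths if you installed elsewhere.
source /opt/intel/openvino/bin/setupvars.sh

# Convert a frozen TensorFlow* model to IR; the model path here is
# hypothetical. --data_type FP16 produces half-precision weights,
# which is a common choice for GPU and VPU targets.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model ~/models/frozen_inference_graph.pb \
    --output_dir ~/models/ir \
    --data_type FP16
```

The resulting .xml and .bin pair is what the Inference Engine and the sample applications load at run time.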

System Requirements

This guide covers the Linux* version of the Intel® Distribution of OpenVINO™ toolkit that does not include FPGA support. For the toolkit that includes FPGA support, see Get Started with OpenVINO™ toolkit for Linux* with FPGA Support.

Hardware

Processor Notes:

Operating Systems

Install Intel® Distribution of OpenVINO™ toolkit

To install and configure the Intel® Distribution of OpenVINO™ toolkit for Linux, follow the instructions from the Installation Guide.

IMPORTANT:

  • All steps in this guide are required unless otherwise stated.
  • In addition to the downloaded package, you must install dependencies and complete configuration steps.
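Among the configuration steps, the toolkit's environment variables must be set in every shell session before its tools are used. A minimal sketch, assuming the default installation directory /opt/intel/openvino:

```shell
# Set the OpenVINO environment variables for the current session.
source /opt/intel/openvino/bin/setupvars.sh

# Optionally make this permanent by appending it to your shell profile:
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
```

If the environment is not set, tools such as the Model Optimizer and the compiled samples fail to find the toolkit's libraries.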

Build and Run Sample Applications

To build and run Inference Engine sample applications and demos, refer to the Inference Engine Samples Overview.
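As an illustration, the commands below sketch one way to build the samples and run one of them. The installation path and the build script's output location are assumptions based on a default installation; the model and image paths are placeholders:

```shell
# Assumes the default installation directory; build_samples.sh places
# the binaries under ~/inference_engine_samples_build by default.
source /opt/intel/openvino/bin/setupvars.sh
cd /opt/intel/openvino/deployment_tools/inference_engine/samples
./build_samples.sh

# Run a built sample, e.g. hello_classification, with an IR model,
# an input image, and a target device (CPU here):
~/inference_engine_samples_build/intel64/Release/hello_classification \
    <path_to_model>.xml <path_to_image>.jpg CPU
```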

Pre-Trained Models

The OpenVINO™ toolkit provides a set of pre-trained models to expedite development of high-performance deep learning inference applications. Use these free pre-trained models instead of training your own to speed up development and production deployment.
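One way to fetch these pre-trained models is the Model Downloader tool that ships with the toolkit. The tool's path and the example model name below are assumptions based on a default installation; check your installation for the exact location:

```shell
# Model Downloader location in a default installation (hypothetical path).
cd /opt/intel/openvino/deployment_tools/tools/model_downloader

# List all models available for download:
python3 downloader.py --print_all

# Download one model into a local directory:
python3 downloader.py --name face-detection-retail-0004 -o ~/models
```

Intel pre-trained models are downloaded directly in IR format, so they can be passed to the Inference Engine samples without a Model Optimizer conversion step.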

To learn about pre-trained models for the OpenVINO™ toolkit, refer to:

Documentation

Find the OpenVINO™ toolkit documentation and how-to guides at https://docs.openvinotoolkit.org.