Get Started with OpenVINO™ toolkit for Linux* with FPGA support

About the Intel® Distribution of OpenVINO™ toolkit

The Intel® Distribution of OpenVINO™ toolkit helps you quickly deploy applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

The Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA support:

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across Intel® CPUs, Intel® Integrated Graphics, and Intel® FPGAs
  • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls for computer vision standards, including OpenCV*, OpenCL™, and OpenVX*

Included with the Installation

The following components are installed by default:

  • Model Optimizer: This tool imports, converts, and optimizes models trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. NOTE: Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.
  • Inference Engine: The engine that runs a deep learning model. It includes a set of libraries for easy integration of inference into your applications (a minimal usage sketch follows this list).
  • Drivers and runtimes for OpenCL™ version 2.1: Enable OpenCL™ on the GPU/CPU for Intel® processors.
  • Intel® Media SDK: Offers access to hardware-accelerated video codecs and frame processing.
  • Pre-compiled FPGA bitstream samples: Pre-compiled bitstream samples for the Intel® Arria® 10 GX FPGA Development Kit, the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, and the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA (preview).
  • Intel® FPGA SDK for OpenCL™ software technology: The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files.
  • OpenCV: OpenCV* community version compiled for Intel® hardware.
  • OpenVX*: Intel's implementation of OpenVX* optimized for running on Intel® hardware (CPU, GPU, IPU).
  • Pre-trained models: A set of Intel pre-trained models for learning and demo purposes or for developing deep learning software.
  • Sample Applications: A set of simple console applications demonstrating how to use the Inference Engine in your applications. For additional information about building and running the samples, refer to the Inference Engine Samples Overview.
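
To make the Model Optimizer and Inference Engine roles concrete, here is a minimal Python sketch, not the toolkit's official sample code. It assumes an Intermediate Representation (IR) pair, model.xml/model.bin, already produced by the Model Optimizer (placeholder file names), a release whose Python API provides IECore.read_network and input_info, and a machine on which the FPGA plugin and a matching bitstream are configured. The HETERO:FPGA,CPU device string lets layers the bitstream does not support fall back to the CPU.

    from openvino.inference_engine import IECore
    import numpy as np

    ie = IECore()
    # Read an IR produced by the Model Optimizer.
    # "model.xml"/"model.bin" are placeholder names for your converted model.
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Load the network onto the FPGA, with CPU fallback for
    # layers the programmed bitstream does not support.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    # Run one inference with a dummy input of the network's expected shape.
    input_blob = next(iter(net.input_info))
    shape = net.input_info[input_blob].input_data.shape
    result = exec_net.infer(inputs={input_blob: np.zeros(shape, dtype=np.float32)})
    print(list(result.keys()))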

System Requirements

The development and target platforms have the same requirements, but you can select different components during installation based on your intended use.

Hardware

Processor Notes:

Operating Systems:

Install Intel® Distribution of OpenVINO™ toolkit

To install and configure the Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA support, follow the instructions in the Installation Guide.

IMPORTANT:

  • All steps in this guide are required unless otherwise stated.
  • In addition to the downloaded package, you must install dependencies and complete configuration steps.
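
After you complete the installation and configuration steps, a quick way to confirm that the Inference Engine can see your devices is to query the plugin list from Python. This is an illustrative sketch rather than part of the official guide; it assumes you have already sourced the toolkit's environment script (setupvars.sh in the install tree) in the current shell so the libraries resolve.

    from openvino.inference_engine import IECore

    # List the devices the Inference Engine can dispatch to.
    # After a successful FPGA setup, expect entries such as CPU and FPGA.
    ie = IECore()
    print(ie.available_devices)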

Build and Run Sample Applications

To build and run Inference Engine sample applications and demos, refer to the Inference Engine Samples Overview.
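
As an illustration only (the binary name, model, and image paths below are placeholders, not prescribed by this guide), a built sample can be pointed at the FPGA through the same device string shown earlier:

    import subprocess

    # Run a built Inference Engine sample against the FPGA, with CPU fallback.
    # "./classification_sample_async", "model.xml", and "car.png" are placeholders.
    subprocess.run([
        "./classification_sample_async",
        "-m", "model.xml",
        "-i", "car.png",
        "-d", "HETERO:FPGA,CPU",
    ], check=True)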

Pre-Trained Models

To learn about the pre-trained models included with the OpenVINO™ toolkit, refer to the Pre-Trained Models Overview in the toolkit documentation.
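
If your installation includes the Model Downloader from the Open Model Zoo, one way to fetch a pre-trained model is sketched below; the script location and the model name are assumptions for illustration.

    import subprocess

    # Fetch a pre-trained model with the Open Model Zoo downloader.
    # Adjust the path to downloader.py for your install tree;
    # "squeezenet1.1" is just an example model name.
    subprocess.run([
        "python3", "downloader.py",
        "--name", "squeezenet1.1",
        "--output_dir", "models",
    ], check=True)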

Documentation

Explore the OpenVINO™ toolkit documentation and how-tos at https://docs.openvinotoolkit.org.