Intel Tutorial:
Accelerate AI workloads using Intel® oneAPI AI Analytics Toolkit

Brief Overview

This workshop aims to familiarize attendees with Intel® optimization techniques for accelerating machine learning and deep learning workloads, with minimal or no code changes, on Intel® CPUs and GPUs using the Intel® oneAPI AI Analytics Toolkit. The toolkit leverages Intel® oneAPI libraries such as the Intel® oneAPI Data Analytics Library (oneDAL) and the Intel® oneAPI Deep Neural Network Library (oneDNN) for low-level compute optimizations.

The Intel® AI Analytics Toolkit consists of Intel-optimized machine learning libraries such as Intel® Extension for Scikit-learn and XGBoost, the data analytics library Intel® Distribution of Modin, the Intel-optimized deep learning frameworks TensorFlow and PyTorch, and the Intel® Distribution for Python. It also provides pre-trained models and benchmarking scripts in the Model Zoo for Intel® Architecture, as well as the low-precision optimization tool Intel® Neural Compressor.

 

All Intel® AI Analytics Toolkit components can also be installed standalone, without installing the full Toolkit.

 

Topics covered

  • Overview of the Intel® oneAPI AI Analytics Toolkit.
  • Intel® Extension for Scikit-learn.
  • Recent advancements in Intel® CPUs.
  • Intel® Optimization for TensorFlow and Intel® Extension for TensorFlow (ITEX) for Intel® CPUs/GPUs.
  • Intel® Extension for PyTorch for Intel® CPUs/GPUs.
  • Intel® Neural Compressor.

 

Agenda

  • An overview of the Intel® oneAPI AI Analytics Toolkit, its components, and performance libraries.
  • An introduction to and hands-on demo of Intel® Extension for Scikit-learn and the oneAPI performance library oneDAL for accelerating machine learning algorithms.
  • Important features of Intel's latest CPUs, such as AVX-512 and AMX, and how they accelerate operations on low-precision data types (INT8 and BF16).
  • An overview of the oneAPI library oneDNN, which optimizes the deep learning frameworks TensorFlow and PyTorch, and of quantization with Intel® Neural Compressor.
  • A live hands-on demo of optimizing and accelerating a deep learning workload using oneDNN with minimal code change, and of quantization using Intel® Neural Compressor, on Intel® DevCloud.
  • Contest details.
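To give a flavor of the scikit-learn acceleration covered in the agenda, the sketch below shows the typical change: calling `patch_sklearn()` from Intel® Extension for Scikit-learn before importing estimators swaps supported ones for oneDAL-backed implementations. This is a minimal sketch, not the workshop's demo code, and the dataset here is synthetic; it falls back to stock scikit-learn if the extension is not installed.

```python
import numpy as np

# Intel Extension for Scikit-learn: patching must happen BEFORE the
# sklearn estimators are imported. Fall back gracefully if unavailable.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # supported estimators now dispatch to oneDAL
except ImportError:
    pass  # stock scikit-learn is used unchanged

from sklearn.cluster import KMeans

# Synthetic data standing in for a real workload (hypothetical example).
X = np.random.default_rng(0).random((1000, 10))
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_.shape)  # (4, 10)
```

The rest of the script is unchanged, which is what "minimal or no code change" means in practice; estimators the extension does not support silently use the stock implementation.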

Who should attend

AI Developers, ML Engineers, Data Scientists, Researchers, or anyone interested in learning about Intel®'s capabilities in AI.

 

Speaker Profiles and Bios:

Speaker 1: Aditya Sirvaiya – AI Software Solutions Engineer, Intel

Bio and Pic:


Aditya Sirvaiya is an AI Software Solutions Engineer in Intel's Artificial Intelligence and Analytics (AIA) group, where he helps customers and partners optimize their AI workloads on Intel platforms using Intel AI software tools and oneAPI high-performance libraries. He holds a bachelor's degree in Engineering Physics from IIT Delhi and a master's degree in Computer Science with a specialization in AI from IIT Bombay.

 

Speaker 2: Vishnu Madhu – AI Software Solutions Engineer, Intel

Bio and Pic:


Vishnu Madhu is an AI Software Solutions Engineer at Intel, based out of Bangalore, India. He is an EEE graduate with more than a decade of technology experience. Over the last few years, his work has involved piping together ML systems for use cases spanning CV, NLP, recommender systems, and more. In his current role, he enables Intel's customers to efficiently utilize their Intel hardware for deploying AI/ML applications. Outside of Intel events like this one, you might still cross paths with him at AI evangelization events, hackathons, and other tech conferences.

 

Pre-requisites

  • A laptop is mandatory for this workshop.
  • All participants should have prior access to DevCloud to participate in the hands-on session (please find the DevCloud URL and setup guide: click here).
  • Basic knowledge of Linux/Windows command-line tools.
  • Basic knowledge of machine learning and deep learning.
  • Experience writing Python scripts and using either of the deep learning frameworks TensorFlow or PyTorch.
  • Understanding of deep learning model training and inference.

 

Key takeaways

  1. Learn about the AI optimization tools and libraries offered by the Intel® oneAPI AI Analytics Toolkit.
  2. Gain basic, useful knowledge of Intel hardware features for an in-depth understanding of the optimizations.
  3. Gain hands-on understanding of how to apply these easy, oneAPI-based optimizations to any of your AI workloads on Intel® platforms.

BoF Intel® oneAPI AI Analytics Toolkit Workshop Contest Details:

 

During the contest, a skeleton IPython/Jupyter notebook will be provided to the participants. The notebook contains the following:

1) details of the contest,

2) sample deep learning model inference code using PyTorch,

3) pointers to Intel® optimizations for reference,

4) the target performance that the participant must achieve by applying Intel® optimizations,

5) code stubs and/or comment hints to aid the participant in applying Intel® optimizations, and

6) optionally, a few quiz questions related to the topics covered during the workshop presentations.

Participants are expected to complete the exercise on Intel® DevCloud and share the notebook with the organizers for evaluation.
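For orientation, the kind of "minimal code change" optimization the contest expects can be sketched with Intel® Extension for PyTorch: `ipex.optimize()` is the one-call entry point for inference. The tiny model below is a hypothetical stand-in, not the contest's sample model, and the snippet falls back to stock PyTorch if the extension is not installed.

```python
import torch

# Hypothetical stand-in for the contest's sample inference model.
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval()

try:
    # Intel Extension for PyTorch: one import + one call, no other changes.
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)  # operator fusion, memory-layout tuning, etc.
except ImportError:
    pass  # stock PyTorch path if the extension is not available

with torch.no_grad():  # inference only, as in the contest task
    out = model(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 4])
```

Further speedups covered in the workshop (e.g. BF16 autocast or INT8 quantization with Intel® Neural Compressor) layer on top of this same pattern.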

 

Scoring criteria

The total score will be computed as follows:

  • 50 points for a fully correct implementation of Intel® optimizations
  • 25 points for a partially correct implementation of Intel® optimizations
  • 10 points for each correctly answered quiz question

 

Results

Submissions will be scored on the above criteria. The winners will be the five highest-scoring submissions, each winning a goodie bag from Intel (Amazon vouchers, a laptop sleeve, and a sipper bottle).

 

The results declared by the organizers will be final and binding.

 

Please send the results to [email protected] and [email protected] no later than end of day on 20th Dec.

Results will be announced by 12 noon on 21st Dec 2022 at the Intel Booth at HiPC.