Intel's One API: What We Know and How to Get Ready

What is One API? This has been a common question since Intel announced its One API vision during Intel Architecture Day back in December 2018. The aim is to deliver a uniform software experience with optimal performance across a broad range of Intel hardware. There have been some press releases and high-level presentations depicting how One API is going to solve the programming challenge for scalar processors (CPUs), vector processors (GPUs), matrix processors (AI accelerators) as well as spatial processing elements (FPGAs). For people waiting for Intel Xe as a Xeon Phi successor, this is exciting. But for application developers the current situation is somewhat confusing, as it is difficult to answer simple questions:


  • What will the One API programming model look like?
  • Will this be Intel's proprietary solution?
  • Will this be fully compatible with AMD or NVIDIA GPUs?
  • Should I port my application to CUDA/HIP/OpenACC/OpenMP or wait for One API?

Obviously Intel engineers and institutes in the early access program can answer these questions. For the rest of us, these are all still unknowns. I am not in a position to directly answer all of them, but I have been looking at related announcements, conference presentations, mailing lists and code repositories. One can try to correlate these to find out what One API might look like. I have put together all this information in this blog post, and I think it gives a good picture of the current state.

What Has Intel Announced?

There have been a few press releases from Intel about One API (see the list below). Intel announced One API during Intel Architecture Day (December 11th, 2018). Not many details were provided, but it was clear that Intel wants to provide a unified programming model to simplify application development across diverse computing architectures. Intel said a public project release would be available in 2019. In June 2019, during Intel's Software Technology Day in London, Intel provided an update about the One API project. It was announced that One API will support both a Direct Programming and an API Programming model. Direct Programming will be based on a new, direct programming language called Data Parallel C++ (DPC++). The API Programming approach will be based on optimized libraries to accelerate workloads from different domains. DPC++ is what is most interesting to many of us; it will be based on the Khronos Group's SYCL heterogeneous programming model (more details in the next section). A developer beta of the One API project is expected to be released in Q4 2019.

If you would like to read these announcements (which are quite vague in my opinion), here are the links:

What Is in the Press, Conferences, Mailing Lists and Repositories?

As Intel has revealed very few details, different tech news portals have summarized the above-mentioned announcements, and there is little (new) information. Over the last year Intel has emphasized that they would like to keep the One API effort open, standards-based and portable. This claim is supported by the RFC that the Intel team submitted to the LLVM mailing list in January 2019. The RFC states that Intel would like to add SYCL programming model support and facilitate collaboration on C++ single-source heterogeneous programming for accelerators like GPUs, FPGAs, DSPs, etc. from different hardware and software vendors. A couple of weeks later, Intel open sourced its SYCL compiler and runtime, which are available on GitHub. This repository is seen as a staging area for upstreaming SYCL support to LLVM.

Once we connect One API with SYCL, a lot of things become clearer. We can find more information about Intel's efforts in the SYCL ecosystem and the possible programming model that Intel is trying to build. During EuroLLVM 2019, Andrew Savonichev from Intel presented the SYCL compiler. During the Embedded Vision Summit 2019, Konstantin Bobrovski from Intel also presented the Intel open-source SYCL project. As the OpenCL driver will be an important component, there is a push for related development as well.

From these developments so far, it is clear that One API will be closely connected to SYCL. Here are some references that will provide more insight:

So What is SYCL?

SYCL is a cross-platform, single-source, C++ abstraction layer on top of OpenCL. It allows developers to leverage the standard C++ language to target heterogeneous devices supported by OpenCL. In contrast to Microsoft's C++ AMP and NVIDIA's CUDA, SYCL is a pure C++ DSEL (domain-specific embedded language) without any C++ extensions. This allows one to develop applications using standard C++ for standard CPUs or for a new architecture without having the hardware and a specific compiler available. The SYCL specification has been around for quite some time; the first specification, SYCL 1.2, was announced back at GDC 2014. There are multiple implementations available: ComputeCpp, triSYCL, hipSYCL and Intel's LLVM SYCL.

There are already good resources/tutorials about SYCL. Instead of repeating them here, I will leave this section with some handy references, which are a good place to get the latest updates about SYCL.

And How Can I Try It?

Although SYCL is based on the standard C++ language, some compiler extensions are required to enable code execution on accelerators (e.g. to annotate functions for device execution). Intel has implemented these changes in LLVM and open sourced its SYCL implementation on GitHub. This has two components: the SYCL compiler and the runtime library. There is a Getting Started guide which is quite straightforward to follow. By the way, I don't think there is support for OSX yet. Below are the steps to set up Intel's SYCL implementation on my Linux box (Ubuntu 18.04).

Step I First we have to install the Intel CPU Runtime for OpenCL Applications with SYCL support provided here. There is a newer release, but it's a source release and a binary distribution is not provided yet. Following these instructions, I installed these libraries as:
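A minimal sketch of that installation, assuming a binary tarball named roughly as below (the exact file name and version depend on the release you download, so take the names here as placeholders):

```shell
# Unpack the CPU runtime under /opt/intel (tarball name is an assumption;
# use the one from the actual release page)
sudo mkdir -p /opt/intel/oclcpuexp
sudo tar -zxvf oclcpuexp-2019.8.tar.gz -C /opt/intel/oclcpuexp

# Register the runtime with the OpenCL ICD loader so that clinfo and
# the SYCL runtime can discover the CPU device
echo /opt/intel/oclcpuexp/x64/libintelocl.so | \
    sudo tee /etc/OpenCL/vendors/intel_expcpu.icd
```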

Step II This step is optional and only required if we want to run on a GPU device. Intel has provided the OpenCL runtime for its GPUs here.

From my understanding, only Intel GPUs are currently supported. There might be a possibility to target other GPUs using the SPIR backend, but I haven't tried that yet.

On my Linux box I have an NVIDIA GPU and hence installed the OpenCL libraries as:
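On Ubuntu 18.04 this amounts to something like the following (package names are from the standard Ubuntu repositories; NVIDIA's OpenCL implementation ships with the proprietary driver, so your driver package version may differ):

```shell
# OpenCL ICD loader, development headers and the clinfo query tool
sudo apt install ocl-icd-libopencl1 ocl-icd-opencl-dev clinfo

# NVIDIA's OpenCL implementation comes with the proprietary driver
sudo apt install nvidia-driver-390
```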

Now we can query all OpenCL-supported devices using the clinfo command:
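For example, filtering clinfo's (rather verbose) output down to platform and device names; the names printed will of course match your own hardware:

```shell
# List only the platform and device names from the full clinfo report
clinfo | grep -E "Platform Name|Device Name"
```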

We now have two OpenCL-enabled devices available: an NVIDIA Quadro GPU and an Intel Haswell CPU.

Step III Next we have to install the SYCL compiler. This is similar to building LLVM from source with some extra projects (which is a bit heavy to build). Assuming you have the necessary build tools, we can download and build LLVM with SYCL support as:
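A sketch of the build following the Getting Started guide at the time of writing; the branch name, CMake flags and build target may change between releases, so double-check them against the guide:

```shell
# Workspace for the SYCL toolchain
export SYCL_HOME=$HOME/sycl_workspace
mkdir -p $SYCL_HOME && cd $SYCL_HOME

# Intel's staging repository, SYCL branch
git clone https://github.com/intel/llvm -b sycl

# Configure LLVM with the SYCL and llvm-spirv external projects
mkdir $SYCL_HOME/build && cd $SYCL_HOME/build
cmake -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_EXTERNAL_PROJECTS="llvm-spirv;sycl" \
      -DLLVM_ENABLE_PROJECTS="clang;llvm-spirv;sycl" \
      -DLLVM_EXTERNAL_SYCL_SOURCE_DIR=$SYCL_HOME/llvm/sycl \
      -DLLVM_EXTERNAL_LLVM_SPIRV_SOURCE_DIR=$SYCL_HOME/llvm/llvm-spirv \
      $SYCL_HOME/llvm/llvm

# Build clang, the SYCL runtime and the SPIR-V translator
make -j `nproc` sycl-toolchain
```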

To use the clang++ that we just built, set the PATH and LD_LIBRARY_PATH environment variables as:
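Assuming the same $SYCL_HOME build directory as in the previous step:

```shell
# Put the freshly built clang++ and the SYCL runtime library on the search paths
export PATH=$SYCL_HOME/build/bin:$PATH
export LD_LIBRARY_PATH=$SYCL_HOME/build/lib:$LD_LIBRARY_PATH
```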

Step IV With the development environment set up, we can now test small SYCL programs. Here is a hello-world program to list all devices. The program is self-explanatory and you can easily guess what is going on:
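A minimal sketch of such a device-listing program, written against the SYCL 1.2.1 API as implemented in Intel's repository at the time of writing:

```cpp
#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // Iterate over all platforms visible through the SYCL runtime
  for (const auto &platform : cl::sycl::platform::get_platforms()) {
    std::cout << "Platform: "
              << platform.get_info<cl::sycl::info::platform::name>() << "\n";
    // ... and over every device each platform exposes
    for (const auto &device : platform.get_devices()) {
      std::cout << "  Device: "
                << device.get_info<cl::sycl::info::device::name>() << "\n";
    }
  }
  return 0;
}
```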

We can compile this program with the clang++ that we built before (make sure it's in $PATH). We can restrict the devices SYCL exposes using the environment variable SYCL_DEVICE_TYPE:
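For example (the source file name is my own choice; -lOpenCL links against the ICD loader installed earlier):

```shell
# -fsycl enables the SYCL device compilation pipeline
clang++ -std=c++11 -fsycl list_devices.cpp -o list_devices -lOpenCL
./list_devices

# Show only CPU devices; other accepted values include GPU, ACC and HOST
SYCL_DEVICE_TYPE=CPU ./list_devices
```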

Here is a more involved example adapted from the SYCL reference guide[^sycl-reference-card]. I have added comments so that you can follow it without much effort:
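A vector-addition sketch in the same SYCL 1.2.1 style; the kernel name and the problem size are my own choices:

```cpp
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  {
    // Buffers manage data movement between host and device
    cl::sycl::buffer<float, 1> buf_a(a.data(), cl::sycl::range<1>(N));
    cl::sycl::buffer<float, 1> buf_b(b.data(), cl::sycl::range<1>(N));
    cl::sycl::buffer<float, 1> buf_c(c.data(), cl::sycl::range<1>(N));

    // The queue picks a device via the default selector
    cl::sycl::queue q{cl::sycl::default_selector{}};

    q.submit([&](cl::sycl::handler &cgh) {
      // Accessors declare how the kernel uses each buffer
      auto A = buf_a.get_access<cl::sycl::access::mode::read>(cgh);
      auto B = buf_b.get_access<cl::sycl::access::mode::read>(cgh);
      auto C = buf_c.get_access<cl::sycl::access::mode::write>(cgh);

      // Kernel: one work-item per element
      cgh.parallel_for<class vec_add>(
          cl::sycl::range<1>(N),
          [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
    });
  } // buffers go out of scope here, so results are copied back to the host

  std::cout << "c[0] = " << c[0] << std::endl; // expect 3 (1 + 2)
  return 0;
}
```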

We can now compile and run this example as:
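Using the same compiler invocation as before (source file name is again my own):

```shell
clang++ -std=c++11 -fsycl vector_add.cpp -o vector_add -lOpenCL
SYCL_DEVICE_TYPE=CPU ./vector_add
```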

All good! We now have a working SYCL installation and you can dive deeper yourself! If you are interested, Codeplay has put together a nice tutorial for their own SYCL implementation called ComputeCpp. Another way to learn more is the SYCL specification
and the tests under the LLVM SYCL source :).


Intel has been putting significant effort into the SYCL ecosystem, and this would be a major contribution to the LLVM/Clang infrastructure.
I think One API won't be a magical solution but rather SYCL with specific extensions and optimized libraries for Intel architectures. It's clear that the new C++ standards (C++11, C++17, C++20) are taking centre stage and different vendors are already pushing in this direction. Based on the above developments, we can try to answer the questions mentioned at the beginning:

  • What will the One API programming model look like? : A SYCL-based C++ programming model with some extensions and optimized libraries?
  • Will this be Intel's proprietary solution? : Not entirely, but there will probably be some Intel-specific extensions?
  • Will this be compatible with AMD or NVIDIA GPUs? : SYCL is open standard, so "theoretically" yes using other implementations?
  • Should I port my application to CUDA/HIP/OpenACC/OpenMP or wait for One API? : It's more a question of whether you can move to future C++17/20 programming models with a SYCL-like interface. Implementations are still going to use OpenMP, ROCm, CUDA etc. underneath.

Until Intel unveils the beta release in Q4 2019, there is sufficient material for us to learn about modern C++ and the SYCL programming model. That's all for this post! Happy weekend!