SYCL

SYCL
Original author(s) Khronos Group
Developer(s) Khronos Group
Initial release March 2014 (2014-03)
Stable release
1.2.1 revision 3 / July 19, 2018 (2018-07-19)
Operating system Cross-platform
Platform Cross-platform
Type High-level programming language
Website www.khronos.org/sycl

SYCL is a higher-level programming model for OpenCL: a single-source domain-specific embedded language (DSEL) based on pure C++11 (for SYCL 1.2.1) and C++14 (for SYCL 2.2), intended to improve programming productivity. The standard is developed by the Khronos Group and was announced in March 2014.

Purpose

SYCL (pronounced "sickle") is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency of OpenCL, enabling code for heterogeneous processors to be written in a "single-source" style using completely standard C++. With single-source development, C++ template functions can contain both host and device code, so complex algorithms that use OpenCL acceleration can be constructed once and then reused throughout the source code on different types of data.

While originally developed for use with OpenCL and SPIR, SYCL is a more general heterogeneous framework that can also target other systems.

Versions

The latest version is SYCL 1.2.1 revision 3 which was published on July 19, 2018 (the first version was published on December 6, 2017[1]).

SYCL was introduced at GDC in March 2014 with provisional version 1.2,[2] and the SYCL 1.2 final specification was introduced at IWOCL 2015 in May 2015.[3]

SYCL 2.2 provisional was introduced at IWOCL 2016 in May 2016.[4]

The public versions are:

  • SYCL 1.2.1 targeting OpenCL 1.2 hardware features with an OpenCL 1.2 interoperability mode;
  • provisional SYCL 2.2 targeting OpenCL 2.2 hardware features with an OpenCL 2.2 interoperability mode.

Example

The following example shows the single-source pure C++ programming model, defining an implicit task graph of three kernels running on a default accelerator.

#include <CL/sycl.hpp>
#include <iostream>
using namespace cl::sycl;
// Size of the matrices
constexpr size_t N = 2000;
constexpr size_t M = 3000;
int main() {
  // Create a queue to work on default device
  queue q;
  // Create some 2D buffers with N×M double values for our matrices
  buffer<double, 2> a{{ N, M }};
  buffer<double, 2> b{{ N, M }};
  buffer<double, 2> c{{ N, M }};
  // Launch a first asynchronous kernel to initialize buffer "a"
  q.submit([&](handler &cgh) {
      // The kernel writes "a", so get a write accessor on it
      auto A = a.get_access<access::mode::write>(cgh);

      // Enqueue a parallel kernel on an N×M 2D iteration space
      cgh.parallel_for<class init_a>(range<2>{ N, M },
                         [=](id<2> index) {
                           A[index] = index[0]*2 + index[1];
                         });
    });
  // Launch an asynchronous kernel to initialize buffer "b"
  q.submit([&](handler &cgh) {
      // The kernel writes "b", so get a write accessor on it
      auto B = b.get_access<access::mode::write>(cgh);
      // Enqueue a parallel kernel on an N×M 2D iteration space
      cgh.parallel_for<class init_b>(range<2>{ N, M },
                         [=](id<2> index) {
                           B[index] = index[0]*2014 + index[1]*42;
                         });
    });
  // Launch an asynchronous kernel to compute matrix addition c = a + b
  q.submit([&](handler &cgh) {
      // In the kernel, "a" and "b" are read, but "c" is written.
      // Since the kernel reads "a" and "b", the runtime will implicitly add
      // a producer-consumer dependency on the previous kernels producing them.
      auto A = a.get_access<access::mode::read>(cgh);
      auto B = b.get_access<access::mode::read>(cgh);
      auto C = c.get_access<access::mode::write>(cgh);

      // Enqueue a parallel kernel on an N×M 2D iteration space
      cgh.parallel_for<class matrix_add>(range<2>{ N, M },
                                     [=](id<2> index) {
                                       C[index] = A[index] + B[index];
                                     });
    });
  /* Request access to read "c" from the host side. The SYCL runtime
     will wait for "c" to be available on the host before returning
     the accessor, so no communication happens in the loop nest below. */
  auto C = c.get_access<access::mode::read>();
  std::cout << std::endl << "Result:" << std::endl;
  for (size_t i = 0; i < N; i++)
    for (size_t j = 0; j < M; j++)
      // Compare the result to the analytic value
      if (C[i][j] != i*(2 + 2014) + j*(1 + 42)) {
        std::cout << "Wrong value " << C[i][j] << " on element "
                  << i << ' ' << j << std::endl;
        return 1;
      }

  std::cout << "Good computation!" << std::endl;
  return 0;
}

Tutorials

There are a few tutorials in the ComputeCpp SYCL guides.[5]

Comparison with CUDA

The open standards SYCL and OpenCL are comparable to the vendor-specific CUDA from Nvidia.

In the Khronos Group realm, OpenCL is the low-level non-single source API and SYCL is the high-level single-source C++ domain-specific embedded language.

By comparison, the single-source C++ domain-specific embedded language version of CUDA, officially named the "CUDA Runtime API", is roughly analogous to SYCL. There is also a lesser-known non-single-source version of CUDA, the "CUDA Driver API", which is similar to OpenCL and is used, for example, by the implementation of the CUDA Runtime API itself.

See also

References

  1. Khronos Group (6 December 2017). "The Khronos Group Releases Finalized SYCL 1.2.1". Khronos. Retrieved 12 December 2017.
  2. Khronos Group (19 March 2014). "Khronos Releases SYCL 1.2 Provisional Specification". Khronos. Retrieved 20 August 2017.
  3. Khronos Group (11 May 2015). "Khronos Releases SYCL 1.2 Final Specification". Khronos. Retrieved 20 August 2017.
  4. Khronos Group (18 April 2016). "Khronos Releases OpenCL 2.2 Provisional Specification with OpenCL C++ Kernel Language". Khronos. Retrieved 18 September 2017.
  5. "Introduction to GPGPU programming with SYCL". Codeplay. Retrieved 3 October 2017.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.