Singularity (software)

Singularity
Singularity running a hello-world container from the command line.
Original author(s) Gregory Kurtzer (gmk), et al.
Developer(s) Community, Gregory Kurtzer
Stable release 2.5.2[1] / 3 July 2018 (2018-07-03)
Repository github.com/singularityware/singularity
Written in C, Go[2]
Operating system Linux
Platform x86-64
Type Operating-system-level virtualization
License 3-clause BSD License[3]
Website www.sylabs.io/singularity/

Singularity is a free, cross-platform and open-source computer program that performs operating-system-level virtualization, also known as containerization[4].

One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world[5].

The need for reproducibility requires the ability to move applications from system to system, which containers make possible[6].

Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms[7].

Usage workflow for Singularity containers
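
A minimal sketch of this workflow, driving the singularity command line from Python's subprocess module; the definition file, image name, and script below are hypothetical, and the syntax shown follows the 2.x command set, where building an image typically requires root privileges:

    import subprocess

    # Build an image from a definition file on a workstation where the user
    # has root privileges.
    subprocess.run(["singularity", "build", "demo.simg", "demo.def"], check=True)

    # After copying demo.simg to another Linux system, such as an HPC login
    # node, the same environment can be executed there unchanged.
    subprocess.run(["singularity", "exec", "demo.simg", "python3", "analysis.py"],
                   check=True)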

History

Singularity began as an open-source project in 2015, when a team of researchers at Lawrence Berkeley National Laboratory, led by Gregory Kurtzer, developed the initial version and released it[8] under the BSD license[9].

By the end of 2016, many developers from different research facilities joined forces with the team at Lawrence Berkeley National Laboratory to further the development of Singularity[10].

Singularity quickly attracted the attention of computing-heavy scientific institutions worldwide[11]:

  • Stanford University Research Computing Center deployed Singularity on their XStream[12][13] and Sherlock[14] clusters
  • National Institutes of Health installed Singularity on Biowulf[15], their 95,000+ core/30 PB Linux cluster[16]
  • Various sites of the Open Science Grid Consortium, including Fermilab, started adopting Singularity[17]; by April 2017, Singularity was deployed on 60% of the Open Science Grid network[18].

For two years in a row, in 2016 and 2017, Singularity was recognized by HPCwire editors as "One of five new technologies to watch"[19][20]. In 2017 Singularity also won first place in the category "Best HPC Programming Tool or Technology"[21].

As of 2018, based on data entered voluntarily in a public registry, Singularity's user base is estimated at more than 25,000 installations[22] and includes users at academic institutions such as Ohio State University and Michigan State University, as well as top HPC centers such as the Texas Advanced Computing Center, the San Diego Supercomputer Center, and Oak Ridge National Laboratory.

Features

Singularity natively supports high-performance interconnects, such as InfiniBand[23] and Intel Omni-Path Architecture (OPA)[24].

As with InfiniBand and Intel OPA devices, Singularity can support any PCIe-attached device within the compute node, such as graphics accelerators[25].
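
For example, GPU devices can be exposed inside a container at run time with the --nv flag, which makes the host's NVIDIA driver and devices available to the containerized application. The sketch below again drives the command line from Python; the image and script names are hypothetical and assume a CUDA-enabled image:

    import subprocess

    # Run a containerized training script with the host's NVIDIA GPUs made
    # visible inside the container via the --nv flag.
    subprocess.run(
        ["singularity", "exec", "--nv", "tensorflow.simg", "python3", "train.py"],
        check=True,
    )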

Singularity also has native support for the Open MPI library, using a hybrid MPI container approach in which Open MPI is installed both inside and outside the container[26].
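
The hybrid approach can be sketched with a small MPI program that runs inside the container while a matching Open MPI launcher on the host starts one container instance per rank; the file and image names below are hypothetical, and the mpi4py package is assumed to be installed in the image:

    # hello_mpi.py -- executed inside the container. A compatible Open MPI on
    # the host launches one container instance per rank, for example:
    #   mpirun -n 4 singularity exec openmpi_app.simg python3 hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print(f"rank {comm.Get_rank()} of {comm.Get_size()} inside the container")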

These features make Singularity increasingly useful in areas such as machine learning, deep learning, and other data-intensive workloads, where applications benefit from the high-bandwidth, low-latency characteristics of these technologies[27].

Integration

HPC systems traditionally already have resource-management and job-scheduling systems in place, so a container runtime environment must be integrated with the existing resource manager.

Using other enterprise container solutions, such as Docker, on HPC systems would require modifications to the software[28].

Singularity integrates seamlessly with many existing resource managers and job schedulers[29].
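
As an illustration, under a batch scheduler such as Slurm the container runtime simply becomes the command executed by a job step, so the scheduler allocates resources exactly as it would for a native binary; the image and program names below are hypothetical:

    import subprocess

    # Inside a batch job, a job step launches the containerized application;
    # resource allocation is handled by the scheduler, not by the container.
    subprocess.run(
        ["srun", "singularity", "exec", "simulation.simg", "./solver"],
        check=True,
    )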


References

  1. "Singularity Releases". singularity.lbl.gov. Singularity Team. 3 July 2018. Retrieved 2018-07-10.
  2. "Singularity+GoLang".
  3. "Singularity License". singularity.lbl.gov. Singularity Team. 3 July 2018. Retrieved 2018-07-10.
  4. "Singularity presentation at FOSDEM 17".
  5. Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W (2017). "Singularity: Scientific containers for mobility of compute". PLOS ONE. 12 (5): e0177459. doi:10.1371/journal.pone.0177459.
  6. "Singularity, a container for HPC". admin-magazine.com. 24 April 2016.
  7. "Singularity Manual: Mobility of Compute".
  8. "Sylabs brings Singularity containers into commercial HPC".
  9. "Singularity License". singularity.lbl.gov. Singularity Team. 19 March 2018. Retrieved 2018-03-19.
  10. "Changes to the AUTHORS.md file in Singularity source code made in April 2017".
  11. "Berkeley Lab's Open-Source Spinoff Serves Science". 7 June 2017.
  12. "XStream online user manual, section on Singularity".
  13. "XStream cluster overview".
  14. "Sherlock Supercomputer: What's New, Containers and Deep Learning Tools".
  15. "NIH HPC online user manual, section on Singularity".
  16. "NIH HPC Systems".
  17. "Singularity on the OSG".
  18. "Singularity in CMS: Over a million containers served" (PDF). }
  19. "HPCwire Reveals Winners of the 2016 Readers' and Editors' Choice Awards at SC16 Conference in Salt Lake City".
  20. "HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver".
  21. "HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver".
  22. "Voluntary registry of Singularity installations".
  23. "Intel Advanced Tutorial: HPC Containers & Singularity – Advanced Tutorial – Intel" (PDF).
  24. "Intel Application Note: Building Containers for Intel® Omni-Path Fabrics using Docker* and Singularity" (PDF).
  25. "Singularity Manual: A GPU example".
  26. "Intel Advanced Tutorial: HPC Containers & Singularity – Advanced Tutorial – Intel" (PDF).
  27. Tallent, Nathan R; Gawande, Nitin A; Siegel, Charles; Vishnu, Abhinav; Hoisie, Adolfy (2018). "Evaluating On-Node GPU Interconnects for Deep Learning Workloads". Lecture Notes in Computer Science. 10724: 3. doi:10.1007/978-3-319-72971-8_1. ISBN 978-3-319-72970-1.
  28. Jonathan Sparks, Cray Inc. (2017). "HPC Containers in use" (PDF).
  29. "Support on existing traditional HPC".

