GeForce 16 series

The GeForce 16 series is a series of graphics processing units developed by Nvidia, based on the Turing microarchitecture and announced in February 2019.[1] The 16 series, sold alongside the 20 series, covers the entry-level to mid-range segments of the market, which the latter does not address.

Release date: February 22, 2019
Codename: TU11x
Architecture: Turing
Models: GeForce GTX series
Transistors:
  • 4.7 billion, 12 nm (TU117)
  • 6.6 billion, 12 nm (TU116)
Fabrication process: TSMC 12 nm (FinFET)
Cards
Entry-level:
  • GeForce GTX 1650
  • GeForce GTX 1650 (GDDR6)
  • GeForce GTX 1650 Super
Mid-range:
  • GeForce GTX 1660
  • GeForce GTX 1660 Super
  • GeForce GTX 1660 Ti
API support
Direct3D: Direct3D 12.0 (feature level 12_1)
OpenCL: OpenCL 1.2
OpenGL: OpenGL 4.6
Vulkan: Vulkan 1.2
History
Predecessor: GeForce 10 series
Variant: GeForce 20 series

Architecture

The GeForce 16 series is based on the same Turing architecture used in the GeForce 20 series, but omits the Tensor (AI) and RT (ray tracing) cores exclusive to the 20 series. The 16 series does, however, retain the dedicated integer cores used for concurrent execution of integer and floating-point operations.[2] On March 18, 2019, Nvidia announced that a driver update in April 2019 would enable DirectX Raytracing on 16 series cards, as well as on certain 10 series cards, a feature reserved for the RTX 20 series up to that point.[3]

Products

The GeForce 16 series launched on February 22, 2019 with the announcement of the GeForce GTX 1660 Ti.[4] The cards are PCIe 3.0 x16 cards produced on TSMC's 12 nm FinFET process. On April 22, 2019, coinciding with the announcement of the GTX 1650, Nvidia announced laptops equipped with built-in GTX 1650 GPUs.[5]

| Model | Launch | Code name(s) | Transistors (billion) | Die size (mm²) | Core config[note 1] | L1 cache (KB) | L2 cache (KB) | Base clock (MHz) | Boost clock (MHz) | Memory (MT/s) | Pixel fillrate (GP/s)[note 2] | Texture fillrate (GT/s)[note 3] | Memory size (GB) | Bandwidth (GB/s) | Memory type | Bus width (bit) | Single precision GFLOPS (boost) | Double precision GFLOPS (boost) | Half precision GFLOPS (boost) | TDP (W) | Launch price (USD) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce GTX 1650[6] | April 23, 2019 | TU117-300-A1 | 4.7 | 200 | 896:56:32:14 | 896 | 1024 | 1485 | 1665 | 8000 | 53.28 | 93.24 | 4 | 128 | GDDR5 | 128 | 2661 (2984) | 83.16 (93.24) | 5322 (5967) | 75 | $149[7] |
| GeForce GTX 1650 (GDDR6)[8][9] | April 3, 2020 | TU117-300-A1 | 4.7 | 200 | 896:56:32:14 | 896 | 1024 | 1410 | 1590 | 12000 | | | 4 | 192 | GDDR6 | 128 | 2527 (2849) | 79 (89) | 5053 (5699) | 75 | $149 |
| GeForce GTX 1650 Super[10] | November 22, 2019 | TU116-250-KA-A1 | 6.6 | 284 | 1280:80:32:20 | 1280 | 1024 | 1530 | 1725 | 12000 | 55.2 | 110.4 | 4 | 192 | GDDR6 | 128 | 3916 (4416) | 122 (138) | 7832 (8832) | 100 | $159 |
| GeForce GTX 1660[4] | March 14, 2019 | TU116-300-A1 | 6.6 | 284 | 1408:88:48:22 | 1408 | 1536 | 1530 | 1785 | 8000 | 73 | 135 | 6 | 192 | GDDR5 | 192 | 4308 (5027) | 135 (157) | 8616 (10053) | 120 | $219 |
| GeForce GTX 1660 Super[11] | October 29, 2019 | TU116-300-A1 | 6.6 | 284 | 1408:88:48:22 | 1408 | 1536 | 1530 | 1785 | 14000 | 73 | 135 | 6 | 336 | GDDR6 | 192 | 4308 (5027) | 135 (157) | 8616 (10053) | 125 | $229 |
| GeForce GTX 1660 Ti[4] | February 22, 2019 | TU116-400-A1 | 6.6 | 284 | 1536:96:48:24 | 1536 | 1536 | 1500 | 1770 | 12000 | 88.6 | 177.1 | 6 | 288 | GDDR6 | 192 | 4608 (5437) | 144 (170) | 9216 (10875) | 120 | $279 |
  1. Shader Processors : Texture mapping units : Render output units : Streaming multi-processors
  2. Pixel fillrate is calculated as the lowest of three limits: the number of ROPs multiplied by the base core clock speed; the number of rasterizers multiplied by the fragments each can generate per clock, multiplied by the base core clock speed; and the number of streaming multiprocessors multiplied by the fragments each can output per clock, multiplied by the base core clock speed.
  3. Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
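The table's fillrate, processing-power, and bandwidth figures can be reproduced from the core configuration and clock speeds. Below is a minimal sketch in Python, using the GeForce GTX 1650 as the worked example. It assumes the usual Turing conventions of 2 FP32 operations per shader per clock (via fused multiply-add), FP64 at 1/32 the FP32 rate, and FP16 at twice the FP32 rate; the function names are illustrative, and note that the 1650's listed fillrates correspond to its boost clock.

```python
# Reproduce the GTX 1650's table figures from its core configuration.
# Config 896:56:32:14 = shaders : TMUs : ROPs : SMs (see note 1).

def pixel_fillrate_gps(rops: int, clock_mhz: int) -> float:
    """ROP-limited pixel fillrate in GP/s (the simplest of note 2's three limits)."""
    return rops * clock_mhz / 1000

def texture_fillrate_gts(tmus: int, clock_mhz: int) -> float:
    """Texture fillrate in GT/s (note 3): TMUs times clock speed."""
    return tmus * clock_mhz / 1000

def fp32_gflops(shaders: int, clock_mhz: int) -> float:
    """Single-precision GFLOPS: 2 FMA operations per shader per clock."""
    return 2 * shaders * clock_mhz / 1000

def memory_bandwidth_gbs(mts: int, bus_bits: int) -> float:
    """Memory bandwidth in GB/s: transfer rate times bus width in bytes."""
    return mts * bus_bits / 8 / 1000

shaders, tmus, rops = 896, 56, 32   # GTX 1650 (TU117)
base, boost = 1485, 1665            # MHz

print(pixel_fillrate_gps(rops, boost))    # 53.28 GP/s, as listed
print(texture_fillrate_gts(tmus, boost))  # 93.24 GT/s, as listed
print(fp32_gflops(shaders, base))         # 2661.12 -> table's 2661 GFLOPS
print(fp32_gflops(shaders, boost))        # 2983.68 -> boost figure 2984
print(fp32_gflops(shaders, base) / 32)    # FP64: 83.16 GFLOPS
print(fp32_gflops(shaders, base) * 2)     # FP16: 5322.24 -> 5322
print(memory_bandwidth_gbs(8000, 128))    # 128.0 GB/s
```

The same arithmetic reproduces the other rows; for example, the 1660 Ti's 4608 GFLOPS is 2 × 1536 shaders × 1500 MHz.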

References

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.