TU117: Tiny Turing

Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.

TU117 is, for most practical purposes, a smaller version of the TU116 GPU, retaining the same core Turing feature set but with fewer resources all around. Altogether, coming from TU116, NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.
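To put rough numbers to that one-third cut, the quick Python sketch below starts from TU116's publicly listed figures (1,536 CUDA cores, 48 ROPs, and six 32-bit memory channels for a 192-bit bus) and trims each by a third. The spec figures come from public listings rather than this article, so treat this as an illustration.

    # Back-of-the-envelope TU116 -> TU117 comparison (spec figures are
    # from public listings, not from NVIDIA's briefing for this article).
    tu116 = {"cuda_cores": 1536, "rops": 48, "mem_channels": 6}  # 6 x 32-bit = 192-bit bus

    # Shave off one-third of each resource:
    tu117 = {key: (value * 2) // 3 for key, value in tu116.items()}

    print(tu117)  # {'cuda_cores': 1024, 'rops': 32, 'mem_channels': 4} -> 128-bit bus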

Still, at 200mm² in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.

By the numbers, the actual CUDA core count differences between GTX 1650 and a theoretical fully-enabled GTX 1650 Ti are quite limited – to the point where I doubt a few more CUDA cores alone would be worth it – however NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill NVIDIA’s current product stack gap.

The bigger question is where performance would land, and if it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect that NVIDIA would like to see this come down first before rolling out a GDDR6-equipped TU117 card.
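To put a rough number on what such a memory upgrade could be worth, here is a short bandwidth calculation. The GTX 1650's shipping configuration pairs 8Gbps GDDR5 with a 128-bit bus; the 12Gbps GDDR6 figure below is an assumption borrowed from the GTX 1660 Ti rather than anything NVIDIA has announced for TU117.

    # Peak memory bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8
    def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
        return bus_width_bits * data_rate_gbps / 8

    gtx_1650_gddr5 = peak_bandwidth_gbs(128, 8)        # shipping 8Gbps GDDR5
    hypothetical_gddr6 = peak_bandwidth_gbs(128, 12)   # assumed 12Gbps GDDR6

    print(gtx_1650_gddr5, hypothetical_gddr6)  # 128.0 vs 192.0 GB/s, a 50% uplift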

Turing’s Graphics Architecture Meets Volta’s Video Encoder

While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 incorporates an older version of NVIDIA’s NVENC video encoder block than the rest of the Turing family. Rather than using the Turing block, it uses the video encoding block from Volta.

But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.

While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) Frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.


I, P, and B-Frames (Petteri Aimonen / PD)

This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality with lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing come from.
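For readers who want to poke at the feature themselves, below is a minimal sketch of driving NVENC's HEVC encoder through ffmpeg from Python. This is my own illustration rather than anything from NVIDIA or the GTX 1650's documentation: it assumes an ffmpeg build with NVENC support, and the B-frame request only takes effect on encoders that actually support HEVC B-frames (Turing's NVENC, but not the Volta-era block in TU117).

    # Minimal sketch: hardware HEVC encode via ffmpeg's hevc_nvenc encoder.
    # Assumes an ffmpeg build compiled with NVENC support and a recent driver.
    import subprocess

    def encode_hevc_nvenc(src, dst, bitrate="6M", b_frames=3):
        """Encode src to HEVC on the GPU, requesting b_frames B-frames.

        On Turing-class NVENC the B-frame request is honored; on the older
        Volta/Pascal block (as in TU117) HEVC B-frames aren't supported, so
        the request may be ignored or rejected depending on the ffmpeg build.
        """
        cmd = [
            "ffmpeg", "-y",
            "-i", src,
            "-c:v", "hevc_nvenc",  # NVIDIA's hardware HEVC encoder
            "-b:v", bitrate,       # target bitrate
            "-bf", str(b_frames),  # number of B-frames between reference frames
            dst,
        ]
        subprocess.run(cmd, check=True)

    encode_hevc_nvenc("gameplay.mkv", "gameplay_hevc.mp4")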

Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions and color depths, the same codecs, etc., so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.

Overall this is an odd difference to bake into a GPU when the other 4 members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation for why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the already decently-sized 200mm² chip from growing even larger. Unfortunately NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say just how much die space this move saved. However, the fact that the difference is apparently enough to materially impact the die size of TU117 makes me suspect it’s bigger than we normally give it credit for.

In any case, the impact to the GTX 1650 will depend on the use case. HTPC users should be fine, as this is solely about encoding and not decoding, so the GTX 1650 is as good for that as any other Turing card. And even in the case of game streaming/broadcasting, this is (still) mostly H.264 for compatibility and licensing reasons. But if you fall into a niche area where you’re doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.

Comments

  • Znaak - Monday, May 6, 2019

    The RX570 won't even draw that much power if you know how to undervolt it. Both cards I've used in builds undervolted without a decrease in performance. Same for my RX580, undervolted it without a loss of performance.
  • timecop1818 - Sunday, May 5, 2019

    Why does this shit still have a fucking DVI connector? Great card and I would totally buy it but I've got only DisplayPort monitors and guess what, this dumb piece of shit wastes half the bracket for a connector that has been dead for the last DECADE. Seriously, who the fuck has a monitor with DVI? Last I saw this was on some Lenovo biz monitor that was still 4:3!

    WTB: GTX1650 with 2x DP, 1x USB-C and 0x or 1x HDMI
  • mm0zct - Monday, May 6, 2019

    I for one still have a high-resolution DVI monitor. I've got no good reason to replace my Dell U2711; it's 2560x1440 and can only be driven at that resolution by DisplayPort or dual-link DVI. Since I have multiple systems, and there are two DVI inputs and only one DisplayPort, it's useful if I can still connect to it via DVI. DisplayPort to dual-link DVI adapters are still ludicrously expensive, so they aren't an option. Since DVI is easily passively adapted to HDMI it's not useless even if you don't want DVI, but you can't passively adapt HDMI or DisplayPort to dual-link DVI. Around the time I got this monitor there were also a few quite popular 2560x1440 options which were DVI only, with no DisplayPort, so it's good that this card would still support them.
  • PeachNCream - Monday, May 6, 2019

    I do agree that DVI is a dead animal. DisplayPort is following suit though as the industry settles into HDMI so I think that's where we'll end up in a few years.
  • Korguz - Monday, May 6, 2019

    timecop1818 and PeachNCream ...
    you 2 are complaining about DVI being on a video card in 2019, and saying it's a dead animal?? what about putting VGA ports on a NEW monitor in 2019? when was the last time a vga port was on a new video card??? IMO.. DVI is a lot more useful than VGA, and instead of VGA on a monitor.. add another hdmi, or display port... along with a single DVI port. timecop1818, vga has been dead for longer... and FYI.. all my monitors have DVI, and are in use :-) if the last monitor you saw was a 4:3 lenovo.. then you haven't bought a new one.. or haven't looked at a new monitor for a while...
  • Xyler94 - Thursday, May 9, 2019

    To be fair, on the VGA front you'd be very hard-pressed to find a server with anything but a VGA port.

    Only consumer/gaming products aren't using VGA.
  • Calista - Monday, May 13, 2019

    DVI is far more common than DP, and far more common than HDMI on monitors. Besides, if you look at the offerings from Zotac and others, they often have very similar cards with different output options. So you're free to pick the one you like.
  • Znaak - Monday, May 6, 2019

    There have been premium GTX 1650s announced with prices higher than stock RX 580s!

    Sure, the AMD card uses a lot of power, but performance-wise it trounces the nVidia card.

    System builders will like this card simply for being fully slot-powered; every consumer building their own system is better off going for AMD. Better performance, better price, and if you get the 8GB version, more future-proofing for even low-end gaming.
  • sonny73n - Monday, May 6, 2019

    Every time there's a review of an Nvidia GPU, AMD fans come crawling out of the woodwork. Just to set the record straight - I'm not an Nvidia fan nor an AMD fan. While I agree with all of the commenters here about Nvidia's pricing - that this card should be around $120 - I do not agree with most people's perceptions of performance. You can talk about performance per dollar all day long, but people like me see efficiency as better on the whole. I don't have this card or the RX 560 to compare, but I do have a Zotac GTX 1060 GDDR5X 6GB (5-star ratings) bought for $200 from Newegg last month, and I have access to an RX 590 8GB which is currently priced at $220 (the cheapest). I was going for the 590 but there were several reasons that held me back. First, all the cheap 590s (less than $250) had terrible reviews (mostly 3 out of 5 stars). Second, I don't want a loud foot heater. Last but not least, I define performance based on efficiency. The RX 590 uses almost twice the power of the GTX 1060 but is only 17% faster. How can you call that better performance? Obviously, AMD haven't got their act together. They need to come up with a better chip architecture and revamp everything. I had hopes for them but they fail me every time. Words from Lisa's mouth are cheap. Until AMD can actually deliver, Nvidia will keep selling their GPUs at whatever price they want.
  • silverblue - Tuesday, May 7, 2019

    I'm running The Division 2 on a marginally under-volted Nitro+ 590, and with a mixture of Ultra and some High settings at 1080p with a manually imposed frame cap of 75, I'm getting 60-75fps for a power consumption figure of 130-160W. The card is audible, but barely.

    It's just the one title, but I really doubt that the 1060 can deliver anywhere close to that level of performance, and certainly not at about half the power.
