TU117: Tiny Turing

Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.

TU117 is, for most practical purposes, a smaller version of the TU116 GPU, retaining the same core Turing feature set but with fewer resources all around. Altogether, coming from the TU116, NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.
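
For a concrete sense of that cut, the quick Python sketch below compares the two chips’ headline resources. The per-chip figures are the commonly cited full-chip specs, assumed here for the sake of the sketch rather than taken from this article’s tables:

```python
# Back-of-the-envelope check of the "one-third shaved off" comparison.
# Figures are the commonly cited full-chip specs for TU116 and TU117
# (assumed for this sketch, not taken from this article's spec tables).
tu116 = {"cuda_cores": 1536, "mem_channels_32bit": 6, "rops": 48}
tu117 = {"cuda_cores": 1024, "mem_channels_32bit": 4, "rops": 32}

for key in tu116:
    cut = 1 - tu117[key] / tu116[key]
    print(f"{key}: {tu116[key]} -> {tu117[key]} ({cut:.0%} removed)")
    # cuda_cores: 1536 -> 1024 (33% removed)
    # mem_channels_32bit: 6 -> 4 (33% removed)
    # rops: 48 -> 32 (33% removed)
```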

Still, at 200mm2 in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.

By the numbers, the actual CUDA core count difference between the GTX 1650 and a theoretical fully-enabled GTX 1650 Ti is quite limited – to the point where I doubt a few more CUDA cores alone would be worth it. However, NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill NVIDIA’s current product stack gap.
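
To illustrate just how limited that gap is, a quick back-of-the-envelope calculation follows. The 896-core count for the GTX 1650 and the 1024-core count for a fully-enabled TU117 are the commonly cited figures, assumed here for the sketch:

```python
# Rough shader-count uplift of a hypothetical fully-enabled TU117 card
# over the GTX 1650. Core counts are assumed from commonly cited specs;
# real-world scaling would be well below linear.
gtx_1650_cores = 896      # GTX 1650 as shipped (cut-down TU117)
full_tu117_cores = 1024   # fully-enabled TU117

uplift = full_tu117_cores / gtx_1650_cores - 1
print(f"Theoretical shader uplift: {uplift:.1%}")  # ~14.3%
```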

The bigger question is where performance would land, and if it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect that NVIDIA would like to see this come down first before rolling out a GDDR6-equipped TU117 card.
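
To put a rough number on that potential bump, here’s a minimal bandwidth calculation. The 128-bit bus and 8 Gbps GDDR5 match the GTX 1650 as shipped; the 12 Gbps GDDR6 speed is purely an assumption, borrowed from the GTX 1660 Ti rather than anything NVIDIA has announced:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
# 128-bit @ 8 Gbps matches the GTX 1650 as shipped; 12 Gbps GDDR6 is an
# assumed speed for a hypothetical refresh (it's what the GTX 1660 Ti uses).
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

gddr5 = bandwidth_gbs(128, 8)   # 128.0 GB/s
gddr6 = bandwidth_gbs(128, 12)  # 192.0 GB/s
print(f"GDDR5: {gddr5:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s (+{gddr6 / gddr5 - 1:.0%})")
# GDDR5: 128 GB/s, GDDR6: 192 GB/s (+50%)
```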

Turing’s Graphics Architecture Meets Volta’s Video Encoder

While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 incorporates an older version of NVIDIA’s NVENC video encoder block than the rest of the Turing family. Rather than using the Turing block, it uses the video encoding block from Volta.

But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.

While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.


I, P, and B-Frames (Petteri Aimonen / PD)

This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality at lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing comes from.
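
For anyone curious what this looks like in practice, the sketch below shows one way to ask NVENC for HEVC B-frames through ffmpeg. This is a hypothetical example rather than anything from NVIDIA’s documentation: it assumes an ffmpeg build compiled with NVENC support, the file names are placeholders, and how gracefully the request degrades on a Volta-era encoder like TU117’s will depend on the software stack.

```python
# Minimal sketch: request HEVC encoding with B-frames from NVENC via ffmpeg.
# Assumes an ffmpeg build with NVENC support on the PATH; "input.mp4" and
# "output.mp4" are placeholder file names. Hardware without HEVC B-frame
# support (Volta-class NVENC, as on TU117) cannot actually produce them.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input.mp4",
    "-c:v", "hevc_nvenc",  # NVIDIA's hardware HEVC encoder
    "-bf", "2",            # allow up to 2 consecutive B-frames
    "-b:v", "6M",          # target bitrate
    "output.mp4",
]
subprocess.run(cmd, check=True)
```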

Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions and color depths, the same codecs, etc., so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.

Overall this is an odd difference to bake into a GPU when the other four members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation of why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the size of the already decently-sized 200mm2 chip from growing even larger. Unfortunately, NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say how much die space this move saved. However, the fact that the difference is apparently enough to materially impact the die size of TU117 makes me suspect it’s bigger than we normally give it credit for.

In any case, the impact to GTX 1650 will depend on the use case. HTPC users should be fine as this is solely about encoding and not decoding, so the GTX 1650 is as good for that as any other Turing card. And even in the case of game streaming/broadcasting, this is (still) mostly H.264 for compatibility and licensing reasons. But if you fall into a niche area where you’re doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.

Comments

  • eva02langley - Sunday, May 5, 2019

    Hey, Turing is a joke. The only thing Turing brought is a different price bracket. Nvidia took two and a half years before releasing Turing... so I don't see the age of Polaris as an issue when new cards are coming in a couple of months.
  • Ryan Smith - Saturday, May 4, 2019

    "This is by far the most 1650 friendly review I have seen online."

    Having finally read the other GTX 1650 reviews (I don't read them beforehand, to avoid coloring my own video card reviews), I agree with you on that. Still, I stand by my article.

    AMD is by no means desperate here. But they are willing to take thinner profit margins than NVIDIA does. And that creates all kinds of glorious havoc in the sub-$200 video card market.

    No one card wins in all categories here; one has better performance, another has better power efficiency. So it makes things a little more interesting for buyers as they now need to consider what they are using a card for - and what attributes they value the most.

    Next to the GTX 1650, the RX 570 is really kind of a lumbering beast. The power consumption difference for the 11% performance advantage is quite high. But at the end of the day it's still 11% faster for the same price, so if you're buying on a pure price/performance basis, then it's an easy call to make.

    As for Navi, AMD will eventually have a successor of some sort for Polaris 11. However I'm not expecting it in Q3; AMD normally launches the low-end stuff later.
  • eva02langley - Sunday, May 5, 2019

    You can stand by your article, but it doesn't mean you are right because of it. You are living in LALA land, Ryan, for even believing that a 75W difference is important. It would be important if the cards were of the same performance at a similar price... but they aren't.

    At this point, you can probably undervolt the RX 570 pretty close to the 1650 if that was sooooo important...

    I made the calculation that it is going to cost you $15-20 of power per year for playing 4 hours per day. You cannot defend this. It is insanity.

    https://www.youtube.com/watch?v=um63-_YPNcA

    https://youtu.be/WTaSIG5Z-HM
  • yannigr2 - Thursday, May 9, 2019

    AMD has spent all these last years defending its position with smaller profit margins. It's not something it is doing just now, and it's not something it is doing only with the RX 570, so there's little reason to question its ability to maintain this price.

    One other thing: while in the review the GTX 1650 is tested against the 4GB RX 570, when there is something to be said about pricing and profit margins and questions about AMD's ability to keep selling the RX 570 under $150, the 8GB model of the RX 580 is used. No mention of the much cheaper 4GB version that is used in the review.

    At the end of the day, the RX 570 is not 11% faster for the same price. It's 11% faster for $30 less, and the only question is whether the GTX 1650's power efficiency and a couple of other features are enough to justify the loss of 11% performance (or more, if the RX 570 model was not overclocked) and a significantly (for this price range) higher price tag.

    And no, we can't assume that in the near future AMD's prices will just jump 20% to make the GTX 1650 less of an expensive card, especially when Navi is not far away, meaning that older hardware will have to be sold off to make room for the new models, or just stay at those low prices so as not to interfere with newer Navi models that could come in at $200 and up.
  • yannigr2 - Thursday, May 9, 2019

    EDIT - clarification: In many tests, there are scores for the RX 570 4GB and not for the 8GB model.
  • catavalon21 - Saturday, May 4, 2019

    In the 1660 and 1660Ti reviews, the RX 570 wasn't included; however, the RX 590 and RX 580 are shown taking 201 and 222 seconds respectively to complete the V-Ray benchmark 1.0.8, where this chart shows the RX 570 only taking 153 seconds. The GTX 1660 is shown taking 109 seconds in both that chart and this one. Since the 570 typically falls short of its 580/590 siblings, how did it manage to stomp them in this benchmark?

    https://www.anandtech.com/show/14071/nvidia-gtx-16...
  • GreenReaper - Saturday, May 4, 2019

    I think this is a reasonable review. Using twice the power at maximum load is not an insignificant factor over the life of the card. But it depends on whether additional heat means a cost or just means you can run your heating less, how often you game, who is paying for your power, etc. Then there are factors such as Linux source driver support, which may or may not matter for a particular person.

    If pressed, I'd get the RX 570 in a heartbeat, but maybe not if I wanted to put it in my microserver (admittedly, I'd also need a low-profile card for that). But I'd rather wait for Navi in an APU. :-)
  • Koenig168 - Saturday, May 4, 2019

    The article tries too hard to make Nvidia look good despite the GTX 1650 being inferior in performance compared to the RX 570 and overpriced for what it is offering.
  • Oxford Guy - Saturday, May 4, 2019

    The last time I remember any major tech news site giving Nvidia any grief was in the Fermi days, with the 480 and especially the 465. As bad as the 480 was, people still bragged about their triple 480 SLI systems, and dual 480 SLI was routinely featured in benchmarks.
  • Haawser - Sunday, May 5, 2019

    The 4GB RX 570 is $130, not $150. And it beats the 4GB 1650 out of sight. It also only draws ~120W, which is not a lot seeing as the majority of 1650s (i.e. those with a 6-pin connector) draw ~90W anyway.

    The actual 75W 1650s should be $99, and the rest shouldn't even exist. Because at $150-160 they are a complete and utter joke.
