TU117: Tiny Turing

Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.

TU117 is, for most practical purposes, a smaller version of the TU116 GPU, retaining the same core Turing feature set but with fewer resources all around. Altogether, coming from TU116, NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.
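
To put rough numbers on that one-third figure, here’s a minimal sketch comparing the commonly reported full-chip configurations of TU116 and TU117; treat the specific counts as assumptions on my part, since a fully-enabled TU117 product hasn’t shipped.

```python
# A quick sanity check of the "one-third shaved off" figure, using the
# commonly reported full-chip configurations (assumptions on my part, since
# NVIDIA has not shipped a fully-enabled TU117 product as of this review).
TU116 = {"cuda_cores": 1536, "memory_channels": 6, "rops": 48}
TU117 = {"cuda_cores": 1024, "memory_channels": 4, "rops": 32}

for key in TU116:
    cut = 1 - TU117[key] / TU116[key]
    print(f"{key}: {TU116[key]} -> {TU117[key]} ({cut:.0%} removed)")
# cuda_cores: 1536 -> 1024 (33% removed)
# memory_channels: 6 -> 4 (33% removed)
# rops: 48 -> 32 (33% removed)
```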

Still, at 200mm² and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.

By the numbers, the actual CUDA core count difference between the GTX 1650 and a theoretical fully-enabled GTX 1650 Ti is quite limited – to the point where I doubt a few more CUDA cores alone would be worth it. However, NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill the gap in NVIDIA’s current product stack.

The bigger question is where performance would land, and if it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect that NVIDIA would like to see this come down first before rolling out a GDDR6-equipped TU117 card.
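
For a rough sense of what such a memory upgrade could be worth, here’s a back-of-the-envelope bandwidth calculation; the 12Gbps GDDR6 figure is an assumption borrowed from the GTX 1660 Ti rather than anything NVIDIA has announced for TU117.

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# The 8Gbps GDDR5 figure matches the shipping GTX 1650; the 12Gbps GDDR6
# figure is borrowed from the GTX 1660 Ti as a plausible (assumed) upgrade.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

gddr5 = peak_bandwidth_gbs(128, 8.0)    # 128.0 GB/s (GTX 1650 as shipped)
gddr6 = peak_bandwidth_gbs(128, 12.0)   # 192.0 GB/s (hypothetical GDDR6 card)
print(f"{gddr5:.0f} GB/s -> {gddr6:.0f} GB/s (+{gddr6 / gddr5 - 1:.0%})")
# 128 GB/s -> 192 GB/s (+50%)
```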

Turing’s Graphics Architecture Meets Volta’s Video Encoder

While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 incorporates an older version of NVIDIA’s NVENC video encoder block than the other Turing GPUs: rather than using the Turing block, it uses the video encoding block from Volta.

But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.

While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.


I, P, and B-Frames (Petteri Aimonen / PD)

This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality at lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing comes from.
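
For anyone who wants to see the difference for themselves, one quick way to exercise the encoder is through ffmpeg’s hevc_nvenc wrapper. The sketch below is an illustration rather than a recommended pipeline: it assumes an ffmpeg build compiled with NVENC support, the file names are placeholders, and on pre-Turing NVENC blocks (TU117’s included) the B-frame request won’t be honored.

```python
# Hedged illustration only: exercising HEVC B-frames through ffmpeg's
# hevc_nvenc encoder. Assumes an ffmpeg build compiled with NVENC support
# and a recent NVIDIA driver; the file names below are placeholders.
import subprocess

def encode_hevc_nvenc(src: str, dst: str, b_frames: int = 0) -> None:
    """Encode src to HEVC via NVENC, optionally requesting B-frames."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "hevc_nvenc",
        "-rc", "vbr", "-b:v", "6M",   # arbitrary target bitrate for comparison
    ]
    if b_frames:
        # Only Turing-generation NVENC (e.g. TU116) accepts HEVC B-frames;
        # TU117's Volta-era block (and Pascal's) will not honor this request.
        cmd += ["-bf", str(b_frames)]
    cmd += ["-c:a", "copy", dst]
    subprocess.run(cmd, check=True)

# On a card with the newer encoder, comparing the two outputs at the same
# bitrate is a simple way to see the quality difference B-frames buy.
# encode_hevc_nvenc("input.mp4", "out_bframes.mkv", b_frames=3)
# encode_hevc_nvenc("input.mp4", "out_no_bframes.mkv", b_frames=0)
```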

Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions, color depths, and codecs, so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.

Overall this is an odd difference to bake into a GPU when the other four members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation of why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the already decently-sized 200mm² chip from growing even larger. Unfortunately NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say how much die space this move saved. However, the fact that the difference is apparently enough to materially impact the die size of TU117 makes me suspect it’s bigger than we normally give it credit for.

In any case, the impact to the GTX 1650 will depend on the use case. HTPC users should be fine, as this change is solely about encoding and not decoding, so the GTX 1650 is as good for that task as any other Turing card. And even in the case of game streaming/broadcasting, most of that is (still) done with H.264 for compatibility and licensing reasons. But if you fall into the niche of doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.

Comments

  • Yojimbo - Saturday, May 4, 2019 - link

    That's true, and I noted that in my original post. But the important thing is that the price/performance comparison should consider the total cost of ownership of the card. Ultimately, the value of any particular increment in performance is a matter of personal preference, though it is possible for someone to make a poor choice because he doesn't understand the situation well.
  • dmammar - Friday, May 3, 2019 - link

    This power consumption electricity savings debate has gone on too long. The math is not hard - the annual electricity cost is equal to (Watts / 1,000) x (hours used per day) x (365 days / year) x (cost per kWh)

    In my area, electricity costs $0.115/kWh so a rather excessive (for me) 3 hours of gaming every day of the year means that an extra 100W power consumption equals only $12.50 higher electricity cost every year.

    So for me, the electricity cost of the higher power consumption isn't even remotely important. I think most people are in the same boat, but run the numbers yourself (a quick sketch of the math follows below) and make your own decision. The only people who should care either live somewhere with expensive electricity or game way too much, in which case they should probably be using a better GPU.
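
For anyone who wants to plug in their own numbers, here's a minimal sketch of that formula; the inputs are just the example figures quoted in the comment above.

```python
# A minimal sketch of the cost formula from the comment above; the 100W,
# 3h/day, and $0.115/kWh inputs are just the example figures quoted there.
def annual_electricity_cost(extra_watts: float, hours_per_day: float,
                            cost_per_kwh: float) -> float:
    """Annual cost of an extra `extra_watts` of draw, in the currency of cost_per_kwh."""
    kwh_per_year = (extra_watts / 1000.0) * hours_per_day * 365
    return kwh_per_year * cost_per_kwh

print(annual_electricity_cost(100, 3, 0.115))   # ~12.59, i.e. the roughly $12.50/year quoted

```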
  • Yojimbo - Saturday, May 4, 2019 - link

    How is $12.50 a year not remotely important? Would you say a card costing $25 less is a big deal? If one card costs $150 and the other $175, would that difference not factor into your purchase at all?
  • OTG - Saturday, May 4, 2019 - link

    How IS $12.50/year even worth thinking about?
    That's less than an hour of work for most people, it's like 3 cents a day, you could pay for it by finding pennies on the sidewalk!
    PLUS you get much better performance! It's a faster card for a completely meaningless power increase.
    If your PSU doesn't have a six pin, get the 1650 I guess, otherwise the price is kinda silly.
  • Yojimbo - Saturday, May 4, 2019 - link

    I like the way you think. Whatever you buy, just buy it from me for $12.50 more than you could otherwise get it, because it's just not worth thinking about. What you say would be entirely reasonable if it didn't apply to every single purchase you make. I mean, if a company comes along and says "Come on, buy this pen for $20. You're only going to buy one pen this year." would you do it? Do you ask the people who are saying NVIDIA's new cards are too expensive because they are $20 more expensive than the previous generation equivalents "How is $10 a year even worth thinking about?"

    Hey, if you are willing to throw money out the window when it's for electricity but not for anything else, that's up to you, but you are making unreasonable decisions that harm yourself.
  • jardows2 - Monday, May 6, 2019 - link

    Using your logic, why don't we all just save bunches of money by using Intel integrated graphics? Since the money we save on power usage is all that matters, we might as well make sure we are only using mobile CPUs as well.
    What you're paying for here is the improved gaming experience provided by the extra performance of the RX 570. For many people, the real-world improvement in the gaming experience is worth the relatively low cost of the extra energy usage. Realistically, the only reason to get one of these over the 570 is if your power supply cannot handle the RX 570.
  • Sushisamurai - Tuesday, May 7, 2019 - link

    Holy crap man! The amount of electricity I spent reading this comment thread, and the number of keyboard clicks consumed out of my mechanical keyboard's 70 million-click lifespan, were totally worth it to read and reply to this.
  • OTG - Tuesday, May 7, 2019 - link

    If you're pinching pennies that hard, you're probably better off not spending 4 hours a day gaming.
    Those games cost money, and you know what they say about time!
    Maybe even set the card mining when you're away; there are profits to be had even now.
  • WarlockOfOz - Saturday, May 4, 2019 - link

    Anyone calculating the total ownership cost of a video card in cents per day should also consider that the slightly higher performance of the 570 may allow it to last a few more months before justifying replacement, allowing the purchase price to be spread over a longer period.
  • Yojimbo - Sunday, May 5, 2019 - link

    "Anyone calculating the total ownership cost of a video card in cents per day should also consider that the slightly higher performance of the 570 may allow it to last a few more months before justifying replacement, allowing the purchase price to be spread over a longer period."

    Sure. It's not that likely, though, because the performance difference isn't that great, so what's more likely to affect the timing of an upgrade is which cards become available. But at the moment, NVIDIA has a big gap between the 1650 and the 1660, so there aren't two more-efficient cards that bracket the 570 well from a price standpoint.

    Of course, some people apparently don't care about $25 at all, so I don't understand why $25 more than that (for a total of $50) would prevent them from getting a 1660, which blows the 570 out of the water performance-wise and would be far more likely to be a factor in the timing of a future upgrade.
