The NVIDIA GeForce GTX 1650 Review, Feat. Zotac: Fighting Brute Force With Power Efficiency
by Ryan Smith & Nate Oh on May 3, 2019 10:15 AM EST
TU117: Tiny Turing
Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.
TU117 is for most practical purposes a smaller version of the TU116 GPU, retaining the same core Turing feature set, but with fewer resources all around. Altogether, coming from the TU116 NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.
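The one-third cut can be sketched numerically. As a rough illustration (using TU116's published specs of 1536 CUDA cores, a 6-channel/192-bit memory bus, and 48 ROPs):

```python
# Rough arithmetic for the one-third cut from TU116 to TU117.
# TU116 figures are NVIDIA's published specs; the uniform 2/3 scaling
# is an illustration of the article's point, not a design rule.
tu116 = {"cuda_cores": 1536, "memory_channels": 6, "rops": 48}  # 6 x 32-bit = 192-bit bus

tu117 = {k: v * 2 // 3 for k, v in tu116.items()}  # shave off one-third of each
print(tu117)  # {'cuda_cores': 1024, 'memory_channels': 4, 'rops': 32}
```

Note that the GTX 1650 itself ships with a further cut-down configuration of TU117, which is why a fully-enabled card remains on the table.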
Still, at 200mm² in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.
Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.
By the numbers, the actual CUDA core count differences between GTX 1650 and a theoretical fully-enabled GTX 1650 Ti are quite limited – to the point where I doubt a few more CUDA cores alone would be worth it – however NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill NVIDIA’s current product stack gap.
The bigger question is where performance would land, and if it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect that NVIDIA would like to see this come down first before rolling out a GDDR6-equipped TU117 card.
Turing’s Graphics Architecture Meets Volta’s Video Encoder
While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 has incorporated an older version of NVIDIA’s NVENC video encoder block than the other Turing cards. Rather than using the Turing block, it uses the video encoding block from Volta.
But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.
While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.
I, P, and B-Frames (Petteri Aimonen / PD)
This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality with lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing come from.
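In back-of-the-envelope terms, the claim can be expressed as follows. Note that the 25% figure is NVIDIA's own marketing number and actual savings will vary with content:

```python
# Back-of-the-envelope for NVIDIA's "25% bitrate savings" claim for
# Turing's HEVC B-frame support. The 25% figure is NVIDIA's marketing
# number; real-world savings depend heavily on the content encoded.
def turing_bitrate(volta_bitrate_kbps, savings=0.25):
    """Bitrate Turing's NVENC would need for roughly the same quality."""
    return volta_bitrate_kbps * (1 - savings)

# A stream that needed 8000 kbps on Volta/Pascal NVENC:
print(turing_bitrate(8000))  # 6000.0 kbps at comparable quality
```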
Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions and color depths, the same codecs, etc., so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.
Overall this is an odd difference to bake into a GPU when the other four members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation for why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the size of the already decently-sized 200mm² chip from growing even larger. Unfortunately NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say just how much die space this move saved. However, that the difference is apparently enough to materially impact the die size of TU117 makes me suspect it’s bigger than we normally give it credit for.
In any case, the impact to GTX 1650 will depend on the use case. HTPC users should be fine as this is solely about encoding and not decoding, so the GTX 1650 is as good for that as any other Turing card. And even in the case of game streaming/broadcasting, this is (still) mostly H.264 for compatibility and licensing reasons. But if you fall into a niche area where you’re doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.
Comments
schujj07 - Friday, May 3, 2019 - linkPricing is even better right now for the RX 570. The 4GB starts at $130 and the 8GB starts at $140, whereas the cheapest GTX 1650 is $150. Unless you need a sub-75W GPU, there is no reason at all to buy the 1650, not when you can get 10-20% better performance for $10-20 less.
Death666Angel - Friday, May 3, 2019 - linkSeems like it. Although I do know some people that run Dell/HP refurbs from years ago (Core i5-750 or i7-860, maybe a Sandybridge if they are lucky) and need the 75W graphics card. They all have GTX 750 still. This may be a card to replace that, since the rest still serves them fine.
Otherwise, this is really kinda disappointing.
I still rock a GTX 960 2GB (from my HTPC, it has to be small), since I sold my 1080 when I saw that I played only a few hours each month. But I won't be upgrading to this. I'd rather get a 580 8GB or save more and get a 2060 that can last me for several years. Oh well, guess someone will buy it. And it'll end up in tons of off-the-shelf PCs.
SaturnusDK - Friday, May 3, 2019 - linkThey don't need a 75W graphics card on an old refurb PC. What they desperately need is to replace the PSU with a modern 80+ certified one. The PSU in those old OEM PCs is typically a 220W-280W unit with 75% maximum efficiency, and probably not over 70% with a 75W graphics card. AnandTech has tests of old OEM PSUs that show this.
Replacing the PSU to a reasonably low cost modern 80+ one gets you at least 50% more power capacity, and they will generally be at or near 90% efficient in the 40-50% load sweet spot which they will be at in gaming with an RX570 for instance.
So they can get a new PSU and an RX 570 for the same price as a 1650, and end up with at least 15% better performance plus a quieter, more power-efficient system.
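The efficiency argument above can be put into rough numbers. A minimal sketch, where the 75% and 90% efficiency figures are the commenter's assumptions (not measurements) and the 150W gaming load is a hypothetical:

```python
# Wall-power comparison for the PSU-efficiency argument.
# Efficiency figures (75% old OEM unit, 90% modern 80+ unit at its
# load sweet spot) are the commenter's assumptions; the 150W DC-side
# gaming load is a hypothetical round number.
def wall_power(dc_load_watts, efficiency):
    """Power drawn at the outlet for a given DC-side load."""
    return dc_load_watts / efficiency

load = 150  # hypothetical DC-side gaming load, watts
old_psu = wall_power(load, 0.75)
new_psu = wall_power(load, 0.90)
print(round(old_psu), round(new_psu), round(old_psu - new_psu))  # 200 167 33
```

In other words, the same load draws roughly 33W less at the wall with the more efficient supply, which is also where the heat and noise savings come from.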
At $150 literally no one should even consider buying this. If the price were in the $100-$110 range it would be another matter; maybe even okay at $120. But at $150 it makes no sense for anyone to buy.
PeachNCream - Friday, May 3, 2019 - linkThe "with compromises" bit could also mean setting the resolution to 1600x900. Power and temps are okay for the performance offered. The typical Nvidia ego-induced, absent-competition Turing price premium isn't as terrible at the low end. However a ~30W replacement for the 1030 would be nice as it would likely fit on a half-height, single slot card.
Flunk - Friday, May 3, 2019 - linkThe name of this card is pretty confusing. GTX 1650 being noticeably slower than a GTX 1060 despite being 590 numbers higher doesn't make much sense. Why didn't Nvidia keep their naming to one scheme (2000 series) instead of having the GTX 16XX cards with confusing names?
serpretetsky - Friday, May 3, 2019 - linkThe last two digits are the performance category; the more significant digits are the generation. It is strange that right now they basically have two generation numbers, 1600 and 2000. But that 50 is slower than 60 is not too confusing (for me anyways). Different performance category.
Death666Angel - Friday, May 3, 2019 - linkThat makes no sense. The 2060 is slower than the 1080 Ti, but it is 980 "numbers higher". A Core i3-8100 is slower than an i5 or i7 of an earlier generation (being some 500 to thousands of "numbers" higher).
Don't get me wrong, Nvidia's naming scheme sucks. But not because of the reason you stated.
guidryp - Friday, May 3, 2019 - link@DeathAngel. Not sure what your problem is. 80>70>60>50>30 etc...
But that obviously only applies within a current generation. When you compare to an older generation then New x80 will be faster than old x80 and so on.
It's about as logical as you can make it.
serpretetsky - Friday, May 3, 2019 - linkDeathAngel was replying to Flunk.
sor - Friday, May 3, 2019 - linkOf these low-mid cards, looks like the 1660 is where it's at. ~70% more cores and ~70% more performance for ~40% more money. I know, they need to have tiers, but as far as value goes it's the better bang for the buck if you can scrape together a bit more cash.
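The commenter's value claim works out like this. A quick sketch, where the ~70% performance and ~40% price deltas are the commenter's estimates rather than measured figures:

```python
# Performance-per-dollar sketch for the GTX 1650 vs GTX 1660 comparison.
# The ~70% performance gain and ~40% price gain are the commenter's
# rough estimates, not benchmarked or MSRP-exact numbers.
def relative_value(perf_gain, price_gain):
    """Perf-per-dollar of the pricier card relative to the cheaper one."""
    return (1 + perf_gain) / (1 + price_gain)

print(round(relative_value(0.70, 0.40), 2))  # 1.21 -> ~21% more perf per dollar
```

So under those estimates the 1660 delivers roughly 21% more performance per dollar, which is the "better bang for the buck" being claimed.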