The Intel Core i3-12300 Review: Quad-Core Alder Lake Shines
by Gavin Bonshor on March 3, 2022 8:30 AM EST
LGA1700: Reports of Bending Sockets
Since the launch of Intel's Alder Lake-based 12th generation Core processors, there have been several reports of abnormally high temperatures, even at stock frequencies. Flattening out the integrated heat spreader (IHS) of a processor is something extreme overclockers have been working on for many years. In a process typically called lapping, they finely sand down the IHS to make it a flatter, more even surface. The aim is to reduce gaps by sanding out imperfections and curvature so that the cold plate of the CPU cooler makes better contact with the IHS, and it has been known to reduce CPU thermals by a decent amount.
Our Core i9-12900K IHS is 'relatively' flat and even.
Fellow enthusiast Igor Wallossek published an article on his website, Igorlabs.de, investigating potential issues with the ILM (independent loading mechanism) that keeps the processor firmly in place within the socket. Doing some investigating myself, our testbed Core i9-12900K, which we've used the most, doesn't show any noticeable gaps or abnormal curvature when checked against a metal ruler. This, however, changes when we install the CPU into an LGA1700 socket on one of the readily available Z690 motherboards.
The rear of the Intel LGA1700 socket with Core i9-12900K installed
There have been many reports that installing an Alder Lake processor into one of the cheaper Z690 or B660 boards causes both the CPU socket and the IHS itself to bend. We saw no bending before installing our Alder Lake processor into the socket of the GIGABYTE Z690 Aorus Master, a premium board priced at around $470. After installing the Core i9-12900K into the socket and locking the ILM into place, we saw noticeable bending on the rear of the board, as our picture above illustrates.
The implications of this are two-fold. Firstly, from a cooling standpoint, it can lead to increased thermals due to the gap this creates between the cold plate of the cooler and the IHS on the CPU. While thermal paste will generally fill some of the gap, the problem is the nature and size of the gap that the uneven pressure from the ILM creates. The second, and perhaps most fundamental, point is that this should NOT be happening.
Buildzoid 'rambles' about the LGA1700 washer mod, a potential fix?
While PCBs can be flexible, heat causing further expansion could lead to damaged sockets, damaged processors, and ultimately an expensive headache for users. There's also the potential to create permanent bends in the PCB area around the socket, which is not a good thing. It should be noted that LGA1700 motherboards use ILMs manufactured by either Lotes or Foxconn, but it's reported that both ILMs are affected by this issue.
Fundamentally, there are a couple of potential workarounds to the issue. One is a large, robust backplate; still, some of the AIO coolers we have seen recently come with flimsy plastic backplates. Another potential fix is installing four washers to alleviate the issue. Both Igorlabs.de and Buildzoid have posted content detailing this, with Igor Wallossek testing washers of different thicknesses to show the variation.
CiccioB - Friday, March 4, 2022 - link
You may be surprised by how many applications still use a single thread or, even if multi-threaded, are bottlenecked on one thread. All office suites, for example, use just a main thread. Use a slow 32-thread-capable CPU and you'll see how slow Word or PowerPoint can become. Excel is somewhat more threaded, but certainly not to the level of using 32 cores, even for complex tables.
Compilers are not multi-threaded. They just spawn many instances to compile more files in parallel, and if you have many cores it just ends up being I/O limited. At the end of the compiling process, however, you'll have the linker, which is a single-threaded task. Run it on a slow 64-core CPU and you'll wait much longer for the final binary than on a fast Celeron CPU.
All graphics retouching applications are single-threaded. What is multi-threaded is just some of the effects you can apply, but the interface and the general data management run on a single thread. That's why Photoshop layer management can be so slow even on a Threadripper.
Printing apps and format converters are single-threaded. CAD programs are as well.
And browsers as well, though they mask it as much as possible. To my surprise, I have found that JavaScript is run on a single thread for all open windows, so if I encounter problems on a heavy JavaScript page, other pages are slowed down as well despite there being spare cores.
In the end, there are many, many tasks that cannot be parallelized. Single-core performance can help much more than having a myriad of slower cores.
Yet there are some (and only some) applications that take advantage of a swarm of small cores, like 3D renderers, video converters and... well, that's it. Unless you count scientific simulations, but I doubt those are interesting for a consumer-oriented market.
BTW, video conversion can be done easily and more efficiently using HW converters like those present in GPUs, so you are left with 3D renderers as the only workloads able to saturate however many cores you have.
mode_13h - Saturday, March 5, 2022 - link
> Compilers are not multi-threaded.
There's been some work in this area, but it's generally a lower priority due to the file-level concurrency you noted.
> if you have many cores it just ends up being I/O limited.
I've not seen this, but I also don't have anything like a 64-core CPU. Even on a 2x 4-core 3.4 GHz Westmere server with a 4-disk RAID-5, I could do a 16-way build and all the cores would stay pegged. You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes.
> At the end of the compiling process, however,
> you'll have the linker, which is a single-threaded task.
There's a new, multi-threaded linker on the block. It's called "mold", which I guess is a play on Google's "gold" linker. For those who don't know, the traditional executable name for a UNIX linker is ld.
> In the end, there are many, many tasks that cannot be parallelized.
There are more that could. They just aren't because... reasons. There are still software & hardware improvements that could enable a lot more multi-threading. CPUs are now starting to get so many cores that I think we'll probably see this becoming an area of increasing focus.
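As a concrete illustration of the kind of software improvement mode_13h alludes to, here is a minimal sketch using the C++17 parallel algorithms: an ordinary per-element loop becomes multi-core with a single extra argument. It's a generic example, not tied to any application discussed in this thread; on GCC/libstdc++ the parallel execution policies are typically implemented on top of TBB, so linking with -ltbb may be needed.

#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

int main() {
    std::vector<double> data(50'000'000, 2.0);

    // Sequential form: std::transform(data.begin(), data.end(), ...).
    // Adding the std::execution::par policy lets the standard library split
    // the independent per-element work across all available cores.
    std::transform(std::execution::par, data.begin(), data.end(), data.begin(),
                   [](double x) { return std::sqrt(x) * 0.5; });
}

Of course, this only helps when the iterations are independent of one another, which is essentially the dividing line drawn in the reply below.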
CiccioB - Saturday, March 5, 2022 - link
You may be aware that there are lots of toolchains that are not "Google based" and are not based on experimental code.
"You just need enough RAM for files to stay in cache while they're still needed, and buffer enough of the writes."
Try compiling something that is not "Hello World" and you'll see that there's no way to keep the files in RAM unless you have put your entire project on a RAM disk.
"There are more that could. They just aren't because... reasons."
Yes, the fact that making them multi-threaded costs a lot of work for a marginal benefit.
Most algorithms ARE NOT PARALLELIZABLE; they run as a contiguous stream of code where each piece of data is the result of the previous instruction.
Parallelizable algorithms are a minority, and most of them require a great deal of work to perform better than a single-threaded one.
You can easily see this in the fact that multi-core CPUs have existed in the consumer market for more than 15 years, and still only a small number of applications, mostly renderers and video transcoders, really take advantage of many cores. Others do not, and mostly like single-threaded performance (either through improved IPC or a faster clock).
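A tiny, hypothetical illustration of the distinction CiccioB is drawing: in the first loop below each iteration depends on the previous one, so it cannot simply be split across cores, while the second loop's iterations are independent and parallelize trivially. Neither function comes from any of the applications mentioned above.

#include <cstddef>
#include <vector>

// Loop-carried dependency: v[i] needs the already-updated v[i-1], so the
// iterations form a chain and cannot simply be handed to different cores.
// (A parallel prefix-scan exists, but it is a substantial rewrite.)
void running_sum(std::vector<double>& v) {
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += v[i - 1];
}

// Independent iterations: each element is updated on its own, so the range
// can be chopped into per-thread chunks with no change to the logic.
void scale_all(std::vector<double>& v, double k) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= k;
}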
mode_13h - Tuesday, March 8, 2022 - link
> Try compiling something that is not "Hello World" and you'll see
My current project is about 2 million lines of code. When I build on a 6-core workstation with a SATA SSD, the entire build is CPU-bound. When I build on an 8-core server with an HDD RAID, the build is probably > 90% CPU-bound.
As for the toolchain, we're using vanilla gcc and ld. Oh and ccache, if you know what that is. It *should* make the build even more I/O bound, but I've not seen evidence of that.
I get that nobody likes to be contradicted, but you could try fact-checking yourself, instead of adopting a patronizing attitude. I've been doing commercial software development for multiple decades. About 15 years ago, I even experimented with distributed compilation and found it still to be mostly compute-bound.
> You can easily see this in the fact that multi-core CPUs have existed in the consumer
> market for more than 15 years, and still only a small number of applications, mostly
> renderers and video transcoders, really take advantage of many cores.
Years ago, I saw an article on this site analyzing web browser performance and revealing they're quite heavily multi-threaded. I'd include a link, but the subject isn't addressed in their 2020 browser benchmark article and I'm not having great luck with the search engine.
Anyway, what I think you're missing is that phones have so many cores. That's a bigger motivation for multi-threading, because it's easier to increase efficient performance by adding cores than any other way.
Oh, and don't forget games. Most games are pretty well-threaded.
GeoffreyA - Tuesday, March 8, 2022 - link
"analyzing web browser performance and revealing they're quite heavily multi-threaded"I think it was round about the IE9 era, which is 2011, that Internet Explorer, at least, started to exploit multi-threading. I still remember what a leap it was upgrading from IE8, and that was on a mere Core 2 Duo laptop.
GeoffreyA - Tuesday, March 8, 2022 - link
As for compilers being heavy on the CPU, amateur commentary on my part, but I've noticed the newer ones seem to be doing a whole lot more (obviously in line with the growing language specification) and take a surprising amount of time to compile. Till recently, I was actually still using VC++ 6.0 from 1998 (yes, I know, I'm crazy), and it used to slice through my small project in no time. Going to VS2019, I was stunned by how much longer it took for the exact same thing. Thankfully, turning on MT compilation, which I believe just duplicates compiler instances, caused it to cut through the project like butter again.
mode_13h - Wednesday, March 9, 2022 - link
Well, presumably you compiled using newer versions of the standard library and other runtimes, which use newer and more sophisticated language features.
Also, the optimizers are now much more sophisticated. And compilers can do much more static analysis, to possibly find bugs in your code. All of that involves much more work!
GeoffreyA - Wednesday, March 9, 2022 - link
On migration, it stepped up the project to C++14 as the language standard. And over the years, MSVC has added a great deal, particularly features that have to do with security. Optimisation, too, seems much more advanced. As a crude indicator, the compiler backend, C2.DLL, weighs in at 720 KB in VC6; in VS2022, round about 6.4-7.8 MB.
mode_13h - Thursday, March 10, 2022 - link
So, I trust you've found cppreference.com? Great site, though it has occasional holes and the very rare error.
Also worth a look is the CppCoreGuidelines on isocpp's GitHub. I agree with quite a lot of it. Even when I don't, I find it's usually worth understanding their perspective.
Finally, here you'll find some fantastic C++ infographics:
https://hackingcpp.com/cpp/cheat_sheets.html
Lastly, did you hear that Google has opened up GSoC to non-students? If you fancy working on an open source project, getting mentored, and getting paid for it, have a look!
China's Institute of Software Chinese Academy of Sciences also ran one, last year. Presumably, they'll do it again, this coming summer. It's open to all nationalities, though the 2021 iteration was limited to university students. Maybe they'll follow Google and open it up to non-students, as well.
https://summer.iscas.ac.cn/#/org/projectlist?lang=...
GeoffreyA - Thursday, March 10, 2022 - link
I doubt I'll participate in any of those programmes (the lazy bone in me talking), but many, many thanks for pointing them out, as well as the references! Also, last year you directed me to Visual Studio Community Edition, and it turned out to be fantastic, with no real limitations. I am grateful. It's been a big step forward.
That cppreference is excellent, for I looked at it when I was trying to find a lock that would replace a Win32 CRITICAL_SECTION in a singleton, and the one I found, I think it was std::mutex, just dropped in and worked. But I left the old version in because there's other Win32 code in that module, and using std::mutex meant no more compiling on the older VS, which still works on the project, surprisingly.
Again, much obliged for the leads and references.
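For anyone curious about the kind of drop-in swap GeoffreyA describes, here is a minimal sketch of a singleton guarded by std::mutex in place of a Win32 CRITICAL_SECTION. The class and member names are illustrative only, not taken from his project.

#include <mutex>

class Settings {
public:
    // Since C++11, initialization of a function-local static is itself
    // thread-safe, so no explicit lock is needed just to create the instance.
    static Settings& instance() {
        static Settings s;
        return s;
    }

    // std::lock_guard locks in its constructor and unlocks in its destructor,
    // standing in for the EnterCriticalSection/LeaveCriticalSection pair.
    void set_value(int v) {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = v;
    }

    int value() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }

private:
    Settings() = default;
    mutable std::mutex mutex_;   // replaces the CRITICAL_SECTION member
    int value_ = 0;
};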