Bulldozer for Servers: Testing AMD's "Interlagos" Opteron 6200 Series
by Johan De Gelas on November 15, 2011 5:09 PM EST

What Makes Server Applications Different?
The large caches and high integer core (cluster) count in one Orochi die (a four-module CMT Bulldozer die) made quite a few people suspect that the Bulldozer design was first and foremost created to excel in server workloads. Reviews like our own AMD FX-8150 launch article have revealed that single-threaded performance has (slightly) regressed compared to the previous AMD CPUs (Istanbul core), while the chip performs better in heavily multi-threaded benchmarks. However, high performance in multi-threaded workstation and desktop applications does not automatically mean that the architecture is server centric.
A more in-depth analysis of the Bulldozer architecture and its performance will be presented in a later article, as it is outside the scope of this one. However, many of our readers are either hardcore hardware enthusiasts or IT professionals who really love to delve a bit deeper than benchmarks simply showing whether something is faster or slower than the competition, so it's good to start with an explanation of what makes an architecture better suited for server applications. Is the Bulldozer architecture a “server centric architecture”?
What makes a server application different anyway?
There have been extensive performance characterizations of the SPEC CPU benchmark, which contains real-world HPC (High Performance Computing), workstation, and desktop applications. Studies of commercial web and database workloads on top of real CPUs are less abundant, but we dug up quite a bit of interesting info. In summary, we can say that server workloads distinguish themselves from workstation and desktop workloads in the following ways.
They spend a lot more time in the kernel. Accessing the network stack and the disk subsystem, handling user connections, synchronizing large numbers of threads, demanding more memory pages for expanding caches--server workloads make the OS sweat. Server applications spend about 20 to 60% of their execution time in the kernel or hypervisor, while in contrast most desktop applications rarely exceed 5% kernel time. Kernel code tends to be very low IPC (Instructions Per Clock cycle) with lots of dependencies.
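You can get a feel for this split on your own Linux box by comparing the user and system CPU time reported by getrusage() around a syscall-heavy loop. The sketch below is purely illustrative (it is not a tool we used in this review); the write() loop to /dev/null is just a stand-in for the real network and disk traffic a server would generate.

```c
/* Illustrative sketch: compare user vs. kernel (system) CPU time.
 * The write() loop to /dev/null stands in for real network/disk I/O. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

static double tv_to_sec(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);
    char buf[4096];
    memset(buf, 'x', sizeof(buf));

    /* Issue many small syscalls, the way a busy server handling
     * connections does, so time accumulates in the kernel. */
    for (int i = 0; i < 1000000; i++)
        write(fd, buf, sizeof(buf));
    close(fd);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    double user = tv_to_sec(ru.ru_utime);
    double sys  = tv_to_sec(ru.ru_stime);
    printf("user: %.2fs  kernel: %.2fs  kernel share: %.0f%%\n",
           user, sys, 100.0 * sys / (user + sys));
    return 0;
}
```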
That is why, for example, SPECjbb, which does not perform any networking or disk access, is a decent CPU benchmark but a pretty bad server benchmark. An interesting fact is that SPECjbb, thanks to the lack of I/O subsystem interaction, typically has an IPC of 0.5-0.9, which is almost twice as high as other server workloads (0.3-0.6), even when those server workloads are not bottlenecked by the storage subsystem.
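The IPC figures above come from published characterization studies, but Linux lets you measure where your own workload lands via perf_event_open(). The minimal sketch below (error handling omitted, and the busy loop is just a placeholder workload; running it may require a permissive perf_event_paranoid setting) counts retired instructions and cycles around a region of code and prints the resulting IPC.

```c
/* Minimal sketch: measure IPC (instructions / cycles) of a code region
 * with Linux hardware performance counters. Error handling omitted. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

static int open_counter(__u64 config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = 1;
    /* count this process on any CPU */
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int cyc = open_counter(PERF_COUNT_HW_CPU_CYCLES);
    int ins = open_counter(PERF_COUNT_HW_INSTRUCTIONS);

    ioctl(cyc, PERF_EVENT_IOC_RESET, 0);
    ioctl(ins, PERF_EVENT_IOC_RESET, 0);
    ioctl(cyc, PERF_EVENT_IOC_ENABLE, 0);
    ioctl(ins, PERF_EVENT_IOC_ENABLE, 0);

    /* region of interest: a stand-in workload */
    volatile long sum = 0;
    for (long i = 0; i < 100000000L; i++)
        sum += i;

    ioctl(cyc, PERF_EVENT_IOC_DISABLE, 0);
    ioctl(ins, PERF_EVENT_IOC_DISABLE, 0);

    long long cycles = 0, instrs = 0;
    read(cyc, &cycles, sizeof(cycles));
    read(ins, &instrs, sizeof(instrs));
    printf("instructions: %lld  cycles: %lld  IPC: %.2f\n",
           instrs, cycles, (double)instrs / cycles);
    return 0;
}
```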
Another aspect of server applications is that they are prone to more instruction cache misses. Server workloads are more complex than most processing intensive applications. Processing intensive applications like encoders are written in C++ using a few libraries; server workloads are developed on top of frameworks like .Net and make use of lots of DLLs--or in Linux terms, they have more dependencies. Not only is the "most used" instruction footprint a lot larger, dynamically compiled software (such as .Net and Java) tends to produce code that is more scattered in the memory space. As a result, server apps have many more L1 instruction cache misses than desktop applications, where instruction cache misses are much lower than data cache misses.
Similar to the above, server apps also have more L2 cache misses. Modern desktop/workstation applications miss the L1 data cache frequently and need the L2 cache too, as their datasets are much larger than the L1 data cache. But once the working set fits in the L2, few desktop applications suffer significant L2 cache misses. Most server applications have higher L2 cache miss rates, as they tend to come with even larger memory footprints and huge datasets.
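The classic way to make this visible is a pointer chase over a working set of increasing size: the time per access jumps each time the set outgrows a cache level. The sketch below is a toy microbenchmark, not something we used in this review; the sizes, iteration count, and the exact shape of the resulting curve will vary per CPU.

```c
/* Toy microbenchmark: nanoseconds per access jump as the working set
 * outgrows each cache level (L1 -> L2 -> L3 -> DRAM). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Link the elements into one random cycle so the chase visits every
 * element in random order and defeats the hardware prefetcher. */
static void build_ring(size_t *ring, size_t n)
{
    size_t *idx = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {         /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
    }
    for (size_t i = 0; i < n; i++)
        ring[idx[i]] = idx[(i + 1) % n];
    free(idx);
}

int main(void)
{
    const long steps = 10 * 1000 * 1000;
    /* sweep working sets from 16KB to 64MB */
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2) {
        size_t n = kb * 1024 / sizeof(size_t);
        size_t *ring = malloc(n * sizeof(size_t));
        build_ring(ring, n);

        struct timespec t0, t1;
        size_t pos = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long s = 0; s < steps; s++)
            pos = ring[pos];                     /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        volatile size_t sink = pos; (void)sink;  /* keep the chase alive */
        double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                     (t1.tv_nsec - t0.tv_nsec)) / steps;
        printf("%6zu KB: %5.1f ns per access\n", kb, ns);
        free(ring);
    }
    return 0;
}
```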
The larger memory footprint and the shrinking and expanding caches can cause more TLB misses too. Virtualized workloads in particular need large and fast TLBs, as they switch between contexts much more often.
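A toy way to isolate the TLB from the data caches is to touch only one byte per 4KB page over a footprint far larger than the TLB can map: the bytes actually read fit easily in the L1, so most of the per-access cost is address translation. The sketch below is illustrative only and assumes 4KB pages without transparent huge pages; huge pages would largely hide the effect.

```c
/* Toy sketch: touch one byte per 4KB page across a 256MB footprint, so
 * most of the per-access cost is the TLB miss / page walk, not the data. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE   4096
#define NPAGES (64 * 1024)           /* 256MB of address space */
#define PASSES 64

int main(void)
{
    unsigned char *buf = calloc(NPAGES, PAGE);   /* 256MB of zeroed memory */
    long long sum = 0;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < PASSES; pass++)
        for (size_t p = 0; p < NPAGES; p++)
            sum += buf[p * PAGE];    /* one byte per page: likely a TLB miss each */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec))
                / ((double)PASSES * NPAGES);
    printf("%.1f ns per page touch (sum=%lld)\n", ns, sum);
    free(buf);
    return 0;
}
```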
As most server applications are easier to multi-thread (for example, a thread for each connection) but are likely to work on the same data (e.g. a relational database), keeping the caches coherent tends to produce much more coherency traffic, and locks are much more frequent.
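The cost of that sharing is easy to reproduce with a toy experiment. In the sketch below (thread and iteration counts are arbitrary; compile with -pthread, GCC/Clang atomic builtins assumed), eight threads increment either a single shared counter--whose cache line ping-pongs between cores--or per-thread counters padded onto separate cache lines.

```c
/* Toy illustration of coherency traffic: many threads updating one shared
 * cache line vs. each thread updating its own (padded) line. */
#include <stdio.h>
#include <time.h>
#include <pthread.h>

#define NTHREADS 8
#define ITERS    2000000L

/* one shared counter: every increment bounces the cache line between cores */
static long shared_counter;

/* per-thread counters padded to 64 bytes so they never share a cache line */
static struct { long v; char pad[64 - sizeof(long)]; } local[NTHREADS];

static void *bump_shared(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        __atomic_fetch_add(&shared_counter, 1, __ATOMIC_RELAXED);
    return NULL;
}

static void *bump_local(void *arg)
{
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        __atomic_fetch_add(&local[id].v, 1, __ATOMIC_RELAXED);
    return NULL;
}

static double run(void *(*fn)(void *))
{
    pthread_t t[NTHREADS];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, fn, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("shared counter:      %.2fs\n", run(bump_shared));
    printf("per-thread counters: %.2fs\n", run(bump_local));
    return 0;
}
```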
Some desktop workloads such as compiling and games have much higher branch misprediction ratios than server applications. Server applications tend to be no more branch intensive than your average integer applications.
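As a concrete illustration of what a branch misprediction ratio means, the toy loop below takes a data-dependent branch: on random data it mispredicts roughly half the time, on sorted data almost never. This is purely an illustration of the concept, not a server workload; an optimizing compiler may also turn the branch into a conditional move and flatten the difference.

```c
/* Toy illustration: the same data-dependent branch is cheap on sorted
 * (predictable) data and expensive on random (unpredictable) data. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

static int cmp(const void *a, const void *b)
{
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

static double run(const unsigned char *data)
{
    struct timespec t0, t1;
    long long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 20; pass++)
        for (size_t i = 0; i < N; i++)
            if (data[i] >= 128)                  /* data-dependent branch */
                sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile long long sink = sum; (void)sink;   /* keep the work alive */
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    unsigned char *data = malloc(N);
    for (size_t i = 0; i < N; i++)
        data[i] = (unsigned char)rand();

    printf("random data: %.2fs\n", run(data));
    qsort(data, N, 1, cmp);                      /* sorted -> predictable branch */
    printf("sorted data: %.2fs\n", run(data));
    free(data);
    return 0;
}
```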
Quick Summary
The end result is that most server applications have low IPC. Quite a few workstation applications achieve an IPC of 1.0-2.0, while many server applications execute 3 to 5 times fewer instructions per cycle on average. Performance is dominated by Memory Level Parallelism (MLP), coherency traffic, and branch prediction, in that order, and to a lesser degree by integer processing power.
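Memory level parallelism is simply how many cache misses the core can keep in flight at once. The toy sketch below (sizes and chain counts are arbitrary) walks the same random ring twice: once as a single dependent chain, where only one miss can be outstanding at a time, and once as four independent chains that an out-of-order core can overlap--the same total number of loads, but usually far less wall-clock time.

```c
/* Toy illustration of memory level parallelism (MLP): one dependent
 * pointer chain vs. four independent chains walked in the same loop. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (1 << 24)              /* 16M elements (128MB), far larger than any cache */
#define STEPS (1 << 22)

static size_t *make_ring(void)
{
    size_t *idx = malloc(N * sizeof(size_t));
    size_t *ring = malloc(N * sizeof(size_t));
    for (size_t i = 0; i < N; i++) idx[i] = i;
    for (size_t i = N - 1; i > 0; i--) {         /* random shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
    }
    for (size_t i = 0; i < N; i++)               /* link into one random cycle */
        ring[idx[i]] = idx[(i + 1) % N];
    free(idx);
    return ring;
}

static double seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    size_t *ring = make_ring();
    struct timespec t0, t1;

    /* one chain: each load depends on the previous one -> MLP ~ 1 */
    size_t a = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < STEPS; s++)
        a = ring[a];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("1 chain : %.2fs\n", seconds(t0, t1));

    /* four independent chains: the core can overlap the cache misses */
    size_t c0 = 0, c1 = N / 4, c2 = N / 2, c3 = 3 * (N / 4);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < STEPS / 4; s++) {
        c0 = ring[c0]; c1 = ring[c1]; c2 = ring[c2]; c3 = ring[c3];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("4 chains: %.2fs (same total loads)\n", seconds(t0, t1));

    volatile size_t sink = a + c0 + c1 + c2 + c3; (void)sink;
    free(ring);
    return 0;
}
```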
So is "Bulldozer" a server centric architecture? We'll need a more in-depth analysis to answer this question properly, but from a high level perspective, yes, it does appear that way. Getting 16 threads and 32MB of cache inside a 115W TDP power consumption envelope is no easy feat. But let the hardware and benchmarks now speak.
Comments
JohanAnandtech - Thursday, November 17, 2011 - link
1) Niagara is NOT a CMT. It is interleaved multithreading with SMT on top.
I haven't studied the latest Niagaras, but the T1 was a fine-grained multi-threaded CPU. It switched like a gatling gun between threads, and could not execute two threads at the same time.
Penti - Thursday, November 17, 2011 - link
SPARC T2 and onwards has additional ALU/AGU resources for a half-physical, two-thread (four logical) solution per core with a shared scheduler/pipeline, if I remember correctly. That's not when CMT entered the picture according to Sun and Sun engineers anyway. They regard the T1 as CMT since it's chip level. It's not just a CMP chip anyhow. SMT is just running multiple threads on the CPUs; CMP is working the same as SMP on separate sockets. It is not the same as AMD's solution however.

Phylyp - Tuesday, November 15, 2011 - link
Firstly, this was a very good article, with a lot of information, especially the bits about the differences between server and desktop workloads.

Secondly, it does seem that you need to tune either the software (power management settings) or the chip (CMT) to get the best results from the processor. So, what advice is AMD offering its customers in terms of this tuning? I wouldn't want to pony up hundreds of dollars and then have to search the web for little titbits like switching off CMT in certain cases, or enabling high-performance power management.
Thirdly, why is the BIOS reporting 32 MB of L2 cache instead of 8 MB?
mino - Wednesday, November 16, 2011 - link
No need for tuning - turbo is OS-independent (unless OS power management explicitly disables it, as Windows does). Just disable the power management on the OS level (= high performance for Windows) and you are good to go.
JohanAnandtech - Thursday, November 17, 2011 - link
The BIOS is simply wrong. It should have read 16 MB (2 Orochi dies with 8 MB of L3 each).

gamoniac - Tuesday, November 15, 2011 - link
Thanks, Johan. I run Hyper-V on Windows Server 2008 R2 SP1 on a Phenom II X6 (my workstation) and have noticed the same CPU issue. I previously fixed it by disabling AMD's Cool'n'Quiet BIOS setting. Switching to high performance increased my overall power usage by 9 watts but corrected the CPU capping issue you mentioned.

Yet another excellent article from AnandTech. Well done. This is how I don't mind spending 1 hour of my precious evening time.
mczak - Tuesday, November 15, 2011 - link
L1 data and instruction cache are swapped (instruction is 8x 64KB 2-way, data is 16x 16KB 4-way). L2 is 8x 2MB 16-way.
JohanAnandtech - Thursday, November 17, 2011 - link
Fixed. My apologies.

hechacker1 - Tuesday, November 15, 2011 - link
Curious if those syscalls for virtualization were improved at all. I remember Intel touting that they improved the latency each generation: http://www.anandtech.com/show/2480/9
I'm guessing it's worse considering the increased general cache latency? I'm not sure how the cache latency and syscall cost are related, if at all.
Just curious as when I do lots of compiling in a guest VM (Gentoo doing lots of checking of packages and hardware capabilities each compile) it tends to spend the majority of time in the kernel context.
hechacker1 - Tuesday, November 15, 2011 - link
Just also wanted to add: before I had a VT-x enabled chip, it was unbearably slow to compile software in a guest VM. I remember measuring latencies of seconds for some operations.

After getting an i7 920 with VT-x, it improved considerably, and most operations are in the hundred or so millisecond range (measured with latencytop).
I'm not sure how the latest chips fare.