DanNeely - Thursday, September 26, 2013 - link
A suggestion for future articles of this type: if the results mostly show that really slow memory is bad but that above a certain point it doesn't really matter, normalizing the data against a reasonably priced option that performs well (set to 1.0) might make that clearer. For the current results, put 1866-C9 at 1.0, with 1333 at 0.9x and 3000 at 1.02x. I think this would help drive home that you're already hitting diminishing returns on the cheap stuff.
superjim - Thursday, September 26, 2013 - link
It looks like the days of 1600 C9 being the standard are over; however, the Hynix fire isn't helping faster memory prices any. 4-5 months ago you could get 2x 4GB of 1600 C9 for $30-35.
Belial88 - Tuesday, October 1, 2013 - link
That's because, just like when HDD prices skyrocketed due to the 2011 Thailand flood, RAM prices have skyrocketed due to the 2013 Hynix factory fire. Prices had started to rise around early 2013 due to market consolidation and some other electronics demand (tablets, consoles, etc.), nothing huge, and they were actually starting to drop until the factory fire.
As for 1600 C9 being some sort of standard, well, what Intel/AMD specifies as the rated RAM speed is no more useful than their rated CPU speed, as we know the chips can go way above that. People who are savvy and know how to buy RAM can easily get RAM capable of 2400MHz CL8 by researching the RAM IC.
PSC/BBSE is easily capable of 2400MHz CL8 and generally costs ~$60 per 8GB (i.e. similar to the cheapest DDR3 RAM). You can find some Hynix CFRs (double sided, unlike MFR, meaning they don't hit the high MHz numbers but have much better 24/7 performance clock for clock, kind of like dual channel vs single channel) for around $65, like the G.Skill Ripjaws X 2400 CL11 (currently around $75 on Newegg), which will easily do ~2800MHz CL13.
RAM speed has always made an impact; the problem with reviews like the above is assuming you can't overclock RAM and have to pay for speed. In reality there are only ~5 different types of RAM ICs (and a few subtypes). If you are smart and purchase Hynix or Samsung instead of Spektek or Qimonda, you can get RAM that easily does 2400MHz+ for the same or similar price as the cheapest Spekteks. If you assume that going from 1600 to 2400 will cost you $100+, of course it's a ripoff...
But if you buy, say, some G.Skill Pis rated at 1600MHz for bargain-bin prices and overclock them to 2400 CL8, you gain a good 10+ fps for almost nothing, and that's an awesome value. All RAM is merely rebranded Spektek/Qimonda/PSC/BBSE/Hynix/Samsung; i.e. the same RAM is sold as 1600 CL9, 1600 CL8 1.65V, 1866 CL10, 2000 CL11, 2133 CL12, etc. ad nauseam.
vol7ron - Monday, September 30, 2013 - link
Relational results are helpful - I think they've been added since your comment - but I also like to see the empirical data listed, as is also being done.
I know both things are currently being done in this article; I just want to make the point that it shouldn't be a decision between one or the other, but both, again in the future.
vol7ron - Monday, September 30, 2013 - link
As an amendment, I want to add that the thing I would change in the future is the colors used. The spectrum should be green = good, red = hazardous/bad. If you have something at 1.00x, perhaps that should be yellow, since it's the neutral starting position.
alfredska - Monday, September 30, 2013 - link
Yes, this is some pretty basic stuff. It seems the article bounces back and forth between green meaning good and bad right now. The author needs to stick to a convention throughout. I'm not really of the opinion that green and red are the best choices, but at least if a convention is used I can train my eyes.
xTRICKYxx - Thursday, September 26, 2013 - link
It would be cool to see other IGPs, including Iris Pro or HD 5000. Also, Richland may see slightly more than the 5% Haswell's HD 4600 has.
Khenglish - Thursday, September 26, 2013 - link
I would expect Richland/Trinity to have larger gains, since their IGP has access to only 4MB of cache instead of the 6MB or 8MB found on Intel processors.
yoki - Thursday, September 26, 2013 - link
Hi, you said that the amount of memory and its placement are the most important factors, but gave no clue how this scales in the real world... For example, I have 6GB of 1600MHz CL7 RAM in an X58 system - should I upgrade it to 12GB, and how much would I gain from that?
IanCutress - Thursday, September 26, 2013 - link
That's ultimately up to the user to determine based on workload, gaming style, etc. I'd always suggest playing it safe, so if you plan on doing anything that would tax a system, 12GB might be a safe bet. That's X58 though; this article is talking about Haswell, whose memory controller can take these high-end kits ;)
MrSpadge - Thursday, September 26, 2013 - link
Is your HDD thrashing because you're running out of RAM? Then an upgrade is worth it; otherwise not.
nevertell - Thursday, September 26, 2013 - link
Why does going from 2933 to 3000, with the same latencies, automatically make the system run slower on almost all of the benchmarks? Is it because of the ratio between CPU, base and memory clock frequencies?
IanCutress - Thursday, September 26, 2013 - link
Moving to the 3000 MHz setting doesn't actually move to a 3000 MHz strap - it puts the memory on the 2933 strap and adds a touch of BCLK, meaning we had to drop the CPU multiplier to keep the final CPU speed (BCLK * multi) constant. At 3000 MHz, all the subtimings in the XMP profile are set by the SPD. For the other MHz settings we set the primaries but left the motherboard on auto for secondary/tertiary timings, and that may have resulted in tighter timings under 2933. There are a few instances where the 3000 kit has a 2-3% advantage, a couple where it's at a disadvantage, but the rest are about the same (within some statistical variance).
Ian
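To put rough numbers on the arithmetic above (a minimal sketch; the 4 GHz CPU target is purely illustrative, not necessarily the review's setting):

```python
# DDR3 has no 3000 strap, so "3000" is reached from the 2933 strap by raising
# BCLK slightly, then lowering the CPU multiplier so BCLK * multi stays constant.
base_bclk = 100.0        # MHz, stock base clock
strap = 2933             # highest DDR3 memory strap at 100 MHz BCLK (MT/s)
target_mem = 3000        # advertised kit speed (MT/s)
cpu_target = 4000.0      # example CPU speed to hold constant (illustrative)

bclk = base_bclk * target_mem / strap        # ~102.3 MHz
multi = round(cpu_target / bclk)             # 39 instead of 40
print(f"BCLK {bclk:.1f} MHz -> memory {strap * bclk / 100:.0f} MT/s, "
      f"CPU {bclk * multi:.0f} MHz at x{multi}")
```

With those inputs the memory lands on 3000 MT/s while the CPU stays within ~0.3% of the target speed.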
mikk - Thursday, September 26, 2013 - link
These iGPU benchmarks are stupid nonsense. Under 10 fps, are you serious? Do it at some usable fps, not in a slide show.
MrSpadge - Thursday, September 26, 2013 - link
Well, that's the reality of gaming on these iGPUs in low "HD" resolution. But I actually agree with you: running at 10 fps is just not realistic and hence not worth much.
The problem I see with these benchmarks is that at maximum detail settings you're putting an emphasis on shaders. By turning details down you'd push more pixels and shift the balance towards needing more bandwidth to do just that. And since in any real world situation you'd see >30 fps, you ARE pushing more pixels in these cases.
RYF - Saturday, September 28, 2013 - link
The purpose was to put the iGPU under strain and explore the impact of faster memory on performance.
You seriously have no idea...
MrSpadge - Thursday, September 26, 2013 - link
Your benchmark choices are nice, but I've seen quite a few "real world" applications which benefit far more from high-performance memory:- matrix inversion in Matlab (Intel MKL), probably in other languages / libs too
- crunching Einstein@Home (BOINC) on all 8 threads
- crunching Einstein@Home on 7 threads and 2 Einstein@Home tasks on the iGPU
- crunching 5+ POEM@Home (BOINC) tasks on a high end GPU
How real the "real world" applications are obviously depends on the user. For me they are far more relevant than my occasional game, which is usually fast enough anyway.
MrSpadge - Thursday, September 26, 2013 - link
Edit: in fact, I have set a maximum of 31 fps in PrecisionX for my nVidia, so that the games don't eat up too much crunching time ;)
Oscarcharliezulu - Thursday, September 26, 2013 - link
Yep, it'd be interesting to understand where extra speed does help, e.g. databases, J2EE servers, CAD, transactional systems of any kind, etc. Otherwise a great read and a great story idea, thanks.
willis936 - Thursday, September 26, 2013 - link
SystemCompute - 2D Ex CPU 1600CL10. Nice.
Nagorak - Thursday, September 26, 2013 - link
Why did memory prices fluctuate so much between the end of last year and now? The Hynix fire looks to have had next to no impact, but the price of memory has nearly doubled since last November/December.
aryonoco - Thursday, September 26, 2013 - link
Ian, I do not want to disparage your work; please take this as nothing but constructive criticism. You do amazing work and the wealth of technical expertise is very clear in your articles.
But you have a terrible writing style. There are so many sentences in your articles that, while technically grammatically correct, are the most awkward ways of saying what you mean. Take a very simple sentence: "In terms of real world usage, on our Haswell platform, there are some recommendations to be made." There are so many simpler, cleaner, shorter ways to say what you said in that sentence.
I really struggle with your writing style. I know journalism isn't really your day job and you have a lot of important things to attend to, but please, if you care about this side job of yours as a technical writer, being technical is only half the story. Please consider improving your writing style to make it more readable.
Bob Todd - Friday, September 27, 2013 - link
I'd wager most of the readers didn't struggle as mightily as you. If you want to critique another's wordsmithery, you might want to find a classier way to do it. Our first exhibit will be sentence one of paragraph two. Surely you could have strung together a couple of words that got your point across without sounding like an ass?
Impulses - Friday, September 27, 2013 - link
Yeah, I'm pretty sure that starting a sentence with a But is something you'd typically avoid...
Dustin Sklavos - Friday, September 27, 2013 - link
"You have a terrible writing style."Constructive!
How would you like it if someone came to your place of business and told you "Look, I don't mean to disparage your work, but it makes my cat's hair fall out in clumps."
ingwe - Friday, September 27, 2013 - link
Yep. This definitely wasn't constructive.Ian, I don't see anything wrong with your writing, and I would rather you concentrate on getting articles out than on spending lots of extra time on editing your work.
jaded1592 - Sunday, September 29, 2013 - link
Your first sentence is grammatically incorrect. Stones and glass houses...
Sivar - Thursday, September 26, 2013 - link
What a great article. Tons of actual data, and the numerous charts weren't stupidly saved as JPG. I love Anandtech.
soccerballtux - Thursday, September 26, 2013 - link
So bandwidth-starved apps with predictable data requests (the h264 pass-1 test) really like it, but when the CPU has enough data to crunch (WinRAR), the lower real-world latency - time in seconds - is worth having.
gandergray - Thursday, September 26, 2013 - link
Ian: Thank you for the excellent article. You provide in-depth and thorough analysis. Your article will undoubtedly serve as a frequently referenced guide.
tynopik - Thursday, September 26, 2013 - link
The colors are reversed on the USB 3.0 Copy Test chart, where green is given to the highest (worst) results and red is given to the lowest (best) results.
Tegeril - Thursday, September 26, 2013 - link
These are the most colorblind-unfriendly images I've seen to date on this site.
Razorbak86 - Friday, September 27, 2013 - link
You tell 'em, bro! Too bad he didn't put actual NUMBERS in the cells, instead of all those non-readable colors. ;)
QChronoD - Friday, September 27, 2013 - link
Please redo the IGP gaming benchmarks with playable settings. All you did was waste your time testing at unreasonably high detail; it proves nothing about whether the extra bandwidth can help increase performance.
pdjblum - Friday, September 27, 2013 - link
Awesome work. Man, this must have taken forever, even with fast memory. Thanks so much.
adityarjun - Friday, September 27, 2013 - link
CAS Latency is given as 6-7-8-9-10-11. What does that mean?http://www.flipkart.com/transcend-jetram-ddr3-8-gb...
Any help on which of these would be better and why?
http://www.flipkart.com/computers/computer-compone...
anton68 - Friday, September 27, 2013 - link
It'd be nice to see how the Iris Pro eDRAM affects compute performance when used as an L4 cache.
pjdaily - Saturday, September 28, 2013 - link
I'd like to see this test too.
MadAd - Friday, September 27, 2013 - link
Hi Ian, thanks for the review, could you explain the thinking behind using only 1360x768 for the gaming tests, especially for the single card benchmarks? Would stretching the single card with a memory intensive game at a high resolution change the results more towards IGPU fractions?This is more the scenario I would expect gamers to be facing and even if the answer turns out to be no, that in itself would be valuable data to learn.
merikafyeah - Friday, September 27, 2013 - link
Please, please, please incorporate some ramdisk benchmarks for these memory tests. It seems like such a given but no one seems to think of this, which is essentially the only test where you'll see some major differences between speed tiers. Things like gaming don't really result in differences worth your money.I recommend Primo Ramdisk for its rock-solid stability but if you're looking for a free alternative I recommend SoftPerfect RAM Disk, which has been noted to be significantly faster than Primo, but may not be as stable under certain circumstances.
ShieTar - Friday, September 27, 2013 - link
I think you would have to propose a software benchmark which benefits from actually running from a Ramdisk. Testing the RD itself with some kind of synthetic HD-Benchmark will not give you much different results than a synthetic memory benchmark, unless the software implementation is rubbish.So if you want to see this happen, I suggest you explain to everybody what kind of software you use in combination with your Ramdisk, and why it benefits from it. And hope that this software is sufficiently relevant to get a large number of people interested in this kind of benchmark.
ShieTar - Friday, September 27, 2013 - link
Two comments on the "Performance Index" used in this article:
1. It is calculated as the inverse of the actual access latency (in nanoseconds). Using the inverse of a physically meaningful number will always make the relationship exhibit much more of a "diminishing return" than using the physical attribute directly (see the worked numbers below this comment).
2. As no algorithm should care directly about the latency, but rather about the combined time to get the full data set it requested, it would be interesting to understand what typical data-set size the benchmarks indicate. If your software randomly picks single bytes from memory, you expect performance to depend only on latency. On the other hand, if the software reads complete rows (512 bytes), bandwidth becomes more relevant than latency.
Of course, figuring out the best performance metric for any kind of review can take a lot of time and effort. But when you do a review generating this large amount of data anyway, would it be possible to make the raw data available to the readers, so they can try to get their own understanding of the matter?
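For concreteness, a minimal sketch of point 1, assuming the article's index is simply frequency divided by CAS latency (the kit list is illustrative):

```python
# For DDR3, the data rate (MT/s) is twice the memory clock, so the CAS access
# time in nanoseconds is t_CAS = CL / (freq/2) * 1000 = 2000 * CL / freq.
# A "Performance Index" of freq/CL is then just 2000 / t_CAS: the reciprocal
# of a physically meaningful time, which bends the curve into an apparent
# "diminishing return".
kits = [(1333, 9), (1600, 9), (1866, 9), (2400, 11), (3000, 12)]
for freq, cl in kits:
    t_cas = 2000 * cl / freq      # ns
    pi = freq / cl                # index used in the article
    print(f"DDR3-{freq} CL{cl}: t_CAS = {t_cas:5.2f} ns, index = {pi:5.1f}")
```

Note in passing that DDR3-2400 CL11 works out to ~9.2 ns against ~11.3 ns for DDR3-1600 CL9, which is the comparison made further down the thread.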
Death666Angel - Friday, September 27, 2013 - link
First of all, great article and really good chart layout, very easy to read! :D
But one thing seems strange: in the WinRAR 3.93 test, 2800MHz/C12 performs better than 2800MHz/C11, but you call out ...C11 in the text as performing well, even though anyone can increase their latencies without incurring stability issues (that's my experience at least). Switched numbers? :)
willis936 - Friday, September 27, 2013 - link
I too thought this was strange. You could see higher latencies clock for clock performing better, which doesn't seem intuitive. I couldn't work out why those results were the way they were.
ShieTar - Friday, September 27, 2013 - link
In reality, there really should be no reason why a longer latency should increase performance (unless you are programming some real-time code which depends on algorithm synchronization). Therefore it seems safe to interpret the difference as the measurement noise of this specific benchmark.
Urbanos - Friday, September 27, 2013 - link
Excellent article! I was waiting for one of these! Great work, masterful :)
jaydee - Friday, September 27, 2013 - link
Great work. I'd like to see a future article look at single-channel vs dual-channel RAM in laptop/mITX/NUC configurations. With only two SO-DIMM slots, people have to really evaluate whether to fill both slots - utilizing dual channel, but knowing they'd have to replace both modules to upgrade - or go with a single SO-DIMM, losing dual channel but keeping an easier memory upgrade path down the road.
Thanks and great work!
Hrel - Friday, September 27, 2013 - link
How do you get such nice screenshots of the BIOS? They look much nicer than when people just use a camera, so what did you use to take them?
merikafyeah - Friday, September 27, 2013 - link
Probably used a video capture card. These are also used to objectively evaluate GPU frame-pacing in a way that software like FRAPS cannot.
Rob94hawk - Saturday, September 28, 2013 - link
Modern BIOSes allow you to save screenshots to USB. My MSI Z87 Gaming does it. No more picture taking. It's a great feature, long overdue!
Rob94hawk - Friday, September 27, 2013 - link
Avoid DDR3 1600 and spend more for that 1 extra fps? No thanks. I'll stick with my DDR3 1600 @ 9-9-9-24 and keep my Haswell overclocked at 4.7 GHz, which is giving me more fps.
Wwhat - Friday, September 27, 2013 - link
I have RAM that has an XMP profile, but I did NOT enable it in the BIOS, the reason being that it will run faster but jumps to 2T and ups the voltage to 1.65V from the default 1.5V, apart from the other latencies going up, of course.
Now 2T is known to not be a great plan if you can avoid it.
So instead I simply tweak the settings to my own needs, because unlike this article's suggestion you can - and overclockers will - do it manually instead of only having the SPD or XMP options.
The difference is that you need to do some testing to see what is stable, which can be quite different from the advised values in the SPD chip.
So it's silly to ridicule people as if they were some uninformed type with no option except letting the SPD/XMP tell them what to do.
Hrel - Friday, September 27, 2013 - link
Not done yet, but so far it seems 1866 CL9 is the sweet spot for bang/buck.
I'd also like to add that I absolutely LOVE that you guys do this kind of in-depth analysis. Remember when one of you did the PSU review? Actually going over how much the motherboard pulled at idle and load, same for memory on a per-DIMM basis - CPU, HDD, add-in cards, everything. I still have the specs saved for reference. That info is getting pretty old though; things have changed quite a bit since back then, when the northbridge was still on the motherboard :P
Hint Hint ;)
repoman27 - Friday, September 27, 2013 - link
Ian, any chance you could post the sub-timings you ended up using for each of the tested speeds?If you're looking at mostly sequential workloads, then CL is indicative of overall latency, but once the workloads become more random / less sequential, tRCD and tRP start to play a much larger role. If what you list as 2933 CL12 is using 12-14-14, then page-empty or page-miss accesses are going to look a lot more like CL13 or CL14 in terms of actual ns spent servicing the requests.
Also, was CMD consistent throughout the tests, or are some timings using 1T and others 2T?
There's a lot of good data in this article, but I constantly struggle with seeing the correlation between real world performance, memory bandwidth, and memory latency. I get the feeling that most scenarios are not bound by bandwidth alone, and that reducing the latency and improving the consistency of random accesses pays bigger dividends once you're above a certain bandwidth threshold. I also made the following chart, somewhat along the lines of those in the article, in order to better visualize what the various CAS latencies look like at different frequencies: http://i.imgur.com/lPveITx.png Of course real world tests don't follow the simple curves of my chart because the latency penalties of various types of accesses are not dictated solely by CL, and enthusiast memory kits are rarely set to timings such as n-n-n-3*n-1T where the latency would scale more consistently.
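To make the page-empty/page-miss arithmetic concrete, here is a minimal sketch using the hypothetical 12-14-14 timings mentioned above:

```python
# Latency in memory clocks for the three DRAM access cases, converted to ns.
freq, cl, trcd, trp = 2933, 12, 14, 14      # MT/s and CL-tRCD-tRP (hypothetical)
cycle_ns = 2000 / freq                      # one memory clock; clock = freq/2

page_hit   = cl * cycle_ns                  # row already open: CAS only
page_empty = (trcd + cl) * cycle_ns         # bank idle: activate, then CAS
page_miss  = (trp + trcd + cl) * cycle_ns   # wrong row open: precharge + activate + CAS
print(f"hit {page_hit:.1f} ns, empty {page_empty:.1f} ns, miss {page_miss:.1f} ns")
```

That prints roughly 8.2 / 17.7 / 27.3 ns, so on random access patterns the tRCD and tRP terms dominate, exactly as described.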
Wwhat - Sunday, September 29, 2013 - link
Good comment I must say, and interesting chart.
Peroxyde - Friday, September 27, 2013 - link
"#2 Number of sticks of memory"Can you please clarify? What should be that number? The highest possible? For example, to get 16GB, what is the best sticks combination to recommend? Thanks for any help.
erple2 - Sunday, September 29, 2013 - link
I think that if you have a dual-channel memory controller and a single DIMM, then you should fill up the controller with a second stick first.
malphadour - Sunday, September 29, 2013 - link
Peroxyde, Haswell uses a dual-channel controller, so in theory (and in some benchmarks I have seen) 2 sticks of 8GB RAM would give the same performance as 4 sticks of 4GB RAM. So go with the 2 sticks, as this allows you to fit more RAM in the future should you want to, without having to throw away old sticks. You could also get 1 16GB stick of RAM, and benchmarks I have seen suggest there is only about a 5% decrease in performance, though for the tiny saving in cost you might as well go dual channel.
lemonadesoda - Saturday, September 28, 2013 - link
I'm reading the benchmarks. And what I see is that in 99% of tests the gains are technical and only measurable in the third significant digit. That means they make no practical, noticeable difference. The money is better spent on a different part of the system.
faster - Saturday, September 28, 2013 - link
This is a great article. This is valuable, useful, and practical information for the system builders on this site. Thank you!
HerrKaLeun - Saturday, September 28, 2013 - link
This was a good review. But I see one major problem for practical applications:
Whoever cares about performance doesn't use 8 GB of memory in the year 2013.
Even for a cheap home build (no gaming, no CAD, etc.) I used 16 GB a year ago, which cost only ~$70. When I run multiple applications in parallel (who doesn't?), W7/8 easily uses all memory for cache. Even with an SSD this is a speed advantage.
So for real world applications (running a virus scan in parallel to work, 18 browser windows, watching movies, etc.) 8 GB is easily used up.
I would imagine a 16 GB PC (let's say ~$100) runs circles around the $700 8 GB PC in the real world.
Right now I run MSE and Malwarebytes while just using IE for browsing and I have none of my 16 GB left. The computer is not sluggish at all. I'm not sure how 8 GB RAM would work out.
One could argue most applications don't require that much memory, but running virus scans frequently is something all users should do.
I think this test should be repeated with 16 GB, or 24 GB for a triple-channel platform. People interested in a few % more also need more RAM.
Wwhat - Sunday, September 29, 2013 - link
@HerrKaLeun, you ask who doesn't use more than 8GB and say you got 16GB for about 70 dollars, but this article covers a lot of extremely highly specced RAM that, as stated, is quite expensive - and if you bought 8GB for several hundred dollars, you obviously aren't going to supplement it with cheap, high-latency, low-speed off-the-shelf stuff.
malphadour - Sunday, September 29, 2013 - link
HerrKaleun, you are talking rubbish!! I have an X58 running 6GB of RAM and I never get anywhere near flooding it. 8GB is more than ample for 99% of users out there. I recently built a 16GB rig for one of our engineers because he demanded it. To prove a point I benchmarked all our software (which includes a juicy construction CAD package) and recorded no more than a 3% performance increase going to 16GB, and I put most of that down to going from a single-channel 8GB stick to dual channel for the 16GB. We tested render times, large drawing copies, plus program open and close times with every piece of software on the machine running. His argument was the same as yours, and incorrect. Hardware is way ahead of the curve at the moment vs software, and it will be a while before the everyday user "needs" more than 8GB.
Wwhat - Monday, September 30, 2013 - link
To be fair, I hear Battlefield 4's suggested setup calls for at least 8GB.
As always, the more RAM people have on average, the more software starts to require.
ShieTar - Monday, September 30, 2013 - link
"So for real world applications (running virus scan in parallel to work, 18 browser windows, watching movies etc) 8 GB re easily used up."Because Windows will fill up all the Memory it has before even starting any garbage collection algorithms. Even today, you should be able to do all those trivial applications on 2GB of memory.
And anybody doing serious work or gaming will probably not run two major software packages at the same time. A few background programs (depending on how paranoid your company's IT department is) and a few trivial programs like a browser, word processor, Excel, or PDF reader may run on the side and use up 1GB to 2GB. But nobody in his right mind will start processing huge images in Photoshop while keeping his CAD models open in CATIA. A few nutjobs out there may run 16 installations of WoW on 16 screens with the same PC, but that's not really relevant to a general review.
So if you have another look at what is tested in this review, and consider that any reviewer worth his salary will not run a dozen pieces of software in parallel with the one he is benchmarking at that moment, it should be clear that repeating the above benchmarks with 16GB would give you absolutely no difference in the results whatsoever.
Chrispy_ - Sunday, September 29, 2013 - link
So the three common scenarios are:
--- 1. You want an IGP ---
Get the cheapest RAM. If you buy significantly better RAM, the cost of APU + RAM becomes more than the cost of a normal CPU + dGPU + cheap RAM, which offers obviously much higher performance.
--- 2. You want a single graphics card ---
Spend the money you're *thinking* about spending on better RAM on a better graphics card instead. If you want a decent dGPU then you're most likely a gamer, and even 1600MHz CL9 is fine, but you'll see a big improvement if you move from a $200 GTX 660 to a $250 660 Ti.
--- 3. You want more than one graphics card ---
Divide RAM frequency by CAS latency to get the actual speed. I've been doing this for years, and I'm glad Ian has finally mentioned it in an article.
ShieTar - Monday, September 30, 2013 - link
I don't think anybody would disagree with the general direction of your comment, but you seem to overestimate the exact differences in cost for 8GB of RAM these days. A quick check (for Germany) gives me the following price differences for RAM frequency (relative to 1333):1600 : -0.50€ (No-Brainer)
1866 : +1€
2000 : +20€
2133 : +10€
2400 : + 8€
2666 : +50€
2933 : +170€
So, for 8€ you can pick 2400 instead of 1600, which would give you a significant increase in performance should you ever find a piece of software that heavily depends on memory transfer rates. You are very unlikely to step up your CPU or GPU model for that kind of price difference.
Latencies can be similar. For DDR3-1600, going from CL11 to CL9 will cost you about 2€ to 3€. Of course, at that point you still have a higher latency than DDR3-2400 at CL11, so the 2400 kit seems to make the most sense right now in terms of price-to-value ratio.
rootheday3 - Sunday, September 29, 2013 - link
The HD 4600 is likely not memory bottlenecked with 20 EUs at stock iGPU frequencies. There is a reason Intel didn't add the eDRAM to SKUs other than the 47W+ GT3e 40-EU SKUs, with 4 samplers and 2 pixel backends. For a GT2 with half those assets, memory is not the issue - 1600MHz in dual channel is plenty. For people who were asking earlier in the thread, dual channel vs single channel is a ~15-30% impact on GT2.
If you want to see more sensitivity/scaling with memory, you would need to OC the iGPU first.
Or, as others said, test on SKUs that are more likely to stress memory - like GT3e (Iris Pro 5200). Note that HD 5000 (15W package TDP) and Iris 5100 (28W TDP) may be TDP bound on most workloads, so even there you may not see scaling with memory beyond ~1600-1866 dual channel.
Note that Trinity/Richland are more sensitive to memory (especially the 65-100W desktop SKUs) because they don't have the LLC to buffer some of the bandwidth demands.
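For scale, the peak-bandwidth arithmetic behind "1600MHz in dual channel is plenty" - a quick sketch assuming standard 64-bit DDR3 channels:

```python
mt_s = 1600              # DDR3-1600: 1600 million transfers per second
bytes_per_xfer = 8       # 64-bit channel width
channels = 2
peak_gb_s = mt_s * 1e6 * bytes_per_xfer * channels / 1e9
print(f"Dual-channel DDR3-1600 peak: {peak_gb_s:.1f} GB/s")  # 25.6 GB/s
```

Whether a 20-EU GT2 can actually saturate 25.6 GB/s in real workloads is the open question this comment answers in the negative.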
malphadour - Sunday, September 29, 2013 - link
I have Mushkin 6-8-6-21 1600MHz, which seems to be almost unique (I don't think I have seen anyone else make CL6 at this speed) - I would be interested to see if CL6 at 1600MHz was a match for much higher MHz.
I think the comment that 1600MHz is bad can be taken with a pinch of salt here. It depends on who the PC is for. If it is for normal use then 1600MHz CL9 is going to be fine all day long. Ian's point is, I think, aimed at the enthusiast who is benchmark chasing, in which case bigger is always better. It would be nice if the price of RAM had not doubled. I was buying 8GB of 1600MHz CL9 for £29.99 not too long ago; for two recent builds it was £54.99, nearly twice the price in the UK :(
Rainman11 - Tuesday, October 1, 2013 - link
The gaming segment was utterly pointless. Show the difference using a resolution of at least 1080p or don't even bother including it.
Anonymuze - Tuesday, October 1, 2013 - link
I'm really curious to see a similar test on HD5000 or (28W) HD5100 - they don't have the benefit of EDRAM like the HD5200 and should be much closer to being memory bandwidth limited than HD4600.
Anonymuze - Tuesday, October 1, 2013 - link
..."should be much closer to being memory bandwidth limited"I meant to say "should be much closer to memory bandwidth limits" or "should be much more memory bandwidth limited" - pick one :P
Hrel - Thursday, October 3, 2013 - link
This is a lot of pages of content that all just tells you to buy 1866 CL9. Good to know.
SetiroN - Friday, October 4, 2013 - link
Ian, you REALLY should include code compilation benchmarks.80% of the people I know who actually need a powerful CPU/RAM/SSD combination use it to build software.
You took the time to test IGP performance (who spends money on RAM to play on an HD 4000?) when you could have provided much more useful data. :)
dreamer77dd - Saturday, October 5, 2013 - link
AMD might like higher-speed RAM more than Intel does. That could be an interesting article too.
Laststop311 - Sunday, October 6, 2013 - link
This article just confirmed my suspicion that this more expensive, faster RAM basically has no effect on your system. Basically anything 1866+ is going to perform relatively the same. I use 2133MHz CAS 8 RAM in my system, am totally happy, and only paid $105 for a 4x4GB kit.
SmokingCrop - Sunday, October 27, 2013 - link
What a useless test... now we don't even know if resolution matters.
No one is going to run CrossFire so (s)he can play on one monitor at 1360x768.
qiplayer - Saturday, November 2, 2013 - link
I don't understand testing a 3000MHz kit and then evaluating gaming performance at that (extremely low) resolution - not even on a single GPU.
I would suggest running this very interesting test once at a triple-HD resolution with 2 or 3 GPUs. Or even better, as we are talking about memory for enthusiasts: the CPU should be overclocked, and there should be at least 2 GPUs, also overclocked.
The title could be: Aiming at 120Hz on 5760x1080, how much should you spend on RAM?
Maybe it turns out that $150 more on memory is enough for 5% higher fps, which is not nothing when you're already spending $$$$ on GPUs to get the best, another $$$ on the CPU, and $$$$ to put it all under water.
gsuburban - Thursday, November 28, 2013 - link
Interesting article; however, what would "Number of Sticks" as noted above mean? Is there a performance gain or loss using the same number of gigabytes of the same RAM - say 16GB in two DIMMs versus 16GB in four DIMMs?
neal.a.nelson - Sunday, December 8, 2013 - link
That is a reasonable inference, and given the age of the article and the date of the last post, probably all you're going to get. For upgradability, it's smart to use the two dual-channel slots instead of filling all four with the same amount.
htwingnut - Monday, January 20, 2014 - link
Thanks for this testing and article. This shows 1366x768 for resolution. While I understand that this will test the RAM fully, it's also not realistic. I'd like to see results running single 1080p or 3x1080p, because that's more real world.
melk - Thursday, January 23, 2014 - link
Am I reading this correctly? That there is literally a 1fps difference at best, in both lowest and avg fps?
melk - Thursday, January 23, 2014 - link
So we are talking about a ~1 fps difference in real world testing? Wow...
dasa43 - Friday, February 28, 2014 - link
To see gains from faster RAM the game needs to be CPU limited, while most console ports are totally GPU limited.
Increasing resolution just stresses the GPU more, further lightening the load on the CPU.
Thief and Arma are two CPU-limited games that can see big gains from faster RAM.
Thief benchmarks
http://forums.atomicmpc.com.au/index.php?showtopic...
Arma benchmarks
http://forums.bistudio.com/showthread.php?166512-A...
NordRack2 - Sunday, June 1, 2014 - link
Quote: "Using the older version of WinRAR shows a 31% advantage moving from 1333 C9 to 3000 C12"That's wrongly calculated.
Correct is: ((213.63-163.11)/213.63) × 100% = 24%
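For what it's worth, both figures fall out of the same two timings (presumably 213.63 s at 1333 C9 and 163.11 s at 3000 C12), just with different baselines:

1 - 163.11/213.63 ≈ 24%, the reduction in completion time relative to the slower run
213.63/163.11 - 1 ≈ 31%, the speed-up relative to the faster run

So the article's 31% reads as a speed-up figure, while 24% is the time saved; either is defensible as long as the baseline is stated.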
cadman777 - Sunday, April 19, 2015 - link
Dear Sir,Do you have an article that explains the basics for RAM, CPU & m/b matching?
I want to learn the basics of this, but all I keep finding are articles like this one, with bits and pieces and general explanations of the various components, but no pragmatic explanation of how they work together, how to match them, and how to overclock them to arrive at a stable system.
Thanx ... Chris
Nickolai - Sunday, August 13, 2017 - link
Is there a similar article for DDR4?