Intel Z68 Chipset & Smart Response Technology (SSD Caching) Review
by Anand Lal Shimpi on May 11, 2011 2:34 AM EST

Application & Game Launch Performance: Virtually Indistinguishable from an SSD
We'll get to our standard benchmark suite in a second, but with a technology like SRT we need more to truly understand how it's going to behave in all circumstances. Let's start with something simple: application launch time.
I set up a Z68 system with a 3TB Seagate Barracuda 7200RPM HDD and Intel's 20GB SSD 311. I timed how long it took to launch various applications both with and without the SSD cache enabled. Note that the first launch of anything with SSD caching enabled doesn't run any faster; it's on the second, third, and subsequent launches that the SSD cache comes into effect. I ran every application once, rebooted the system, then timed how long it took to launch both in the HDD and caching configurations:
Application Launch Comparison (cache: Intel SSD 311 20GB)

| Configuration | Adobe Photoshop CS5.5 | Adobe After Effects CS5.5 | Adobe Dreamweaver CS5.5 | Adobe Illustrator CS5.5 | Adobe Premiere Pro CS5.5 |
| --- | --- | --- | --- | --- | --- |
| Seagate Barracuda 3TB (No cache) | 7.1 seconds | 19.3 seconds | 8.0 seconds | 6.1 seconds | 10.4 seconds |
| Seagate Barracuda 3TB (Enhanced Cache) | 5.0 seconds | 11.3 seconds | 5.5 seconds | 3.9 seconds | 4.7 seconds |
| Seagate Barracuda 3TB (Maximize Cache) | 3.8 seconds | 10.6 seconds | 5.2 seconds | 4.2 seconds | 3.8 seconds |
These are pretty big improvements! Boot time and multitasking immediately after boot also benefit tremendously:
Boot & Multitasking After Boot Comparison

| Configuration | Boot Time (POST to Desktop) | Launch Adobe Premiere + Chrome + WoW Immediately After Boot |
| --- | --- | --- |
| Seagate Barracuda 3TB (No cache) | 55.5 seconds | 37.0 seconds |
| Seagate Barracuda 3TB (Enhanced Cache) | 35.8 seconds | 12.3 seconds |
| Seagate Barracuda 3TB (Maximize Cache) | 32.6 seconds | 12.6 seconds |
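The launch and boot times above were measured by hand. If you want to script a similar before/after comparison yourself, a minimal sketch follows; the application path is hypothetical, the third-party psutil package is assumed, and "launch complete" is approximated as the moment the new process stops reading from disk.

```python
# Rough launch-timing sketch (not the methodology used for the numbers above).
# Assumes the third-party psutil package; the application path is hypothetical.
import subprocess
import time

import psutil

APP = r"C:\Program Files\Adobe\Adobe Photoshop CS5.5\Photoshop.exe"  # hypothetical

def time_launch(exe_path, settle_seconds=2.0, poll_interval=0.1):
    """Launch exe_path and return seconds until its disk reads go quiet."""
    start = time.perf_counter()
    proc = psutil.Process(subprocess.Popen([exe_path]).pid)

    last_reads = -1
    last_change = start
    while True:
        time.sleep(poll_interval)
        reads = proc.io_counters().read_bytes
        now = time.perf_counter()
        if reads != last_reads:
            last_reads, last_change = reads, now
        elif now - last_change >= settle_seconds:
            # No new disk reads for a while: treat the launch as finished.
            return last_change - start

if __name__ == "__main__":
    print(f"Launch took {time_launch(APP):.1f} seconds")
```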
Let's look at the impact on gaming performance; this time we'll also toss in a high-end standalone SSD:
Game Load Comparison (cache: Intel SSD 311 20GB)

| Configuration | Portal 2 (Game Launch) | Portal 2 (Level Load) | StarCraft 2 (Game Launch) | StarCraft 2 (Level Load) | World of Warcraft (Game Launch) | World of Warcraft (Level Load) |
| --- | --- | --- | --- | --- | --- | --- |
| Seagate Barracuda 3TB (No cache) | 12.0 seconds | 17.1 seconds | 15.3 seconds | 23.3 seconds | 5.3 seconds | 11.9 seconds |
| Seagate Barracuda 3TB (Enhanced Cache) | 10.3 seconds | 15.0 seconds | 10.3 seconds | 15.1 seconds | 5.2 seconds | 5.6 seconds |
| Seagate Barracuda 3TB (Maximize Cache) | 9.9 seconds | 15.1 seconds | 9.7 seconds | 15.0 seconds | 4.5 seconds | 5.8 seconds |
| OCZ Vertex 3 240GB (6Gbps) | 8.5 seconds | 13.1 seconds | 7.5 seconds | 14.5 seconds | 4.1 seconds | 4.7 seconds |
While the Vertex 3 is still a bit faster, you can't argue that Intel's SRT doesn't deliver most of the SSD experience at a fraction of the cost—at least when it comes to individual application performance.
Look at what happens when we reboot and run the game load tests a third time:
Game Load Comparison (cache: Intel SSD 311 20GB)

| Configuration | Portal 2 (Game Launch) | Portal 2 (Level Load) | StarCraft 2 (Game Launch) | StarCraft 2 (Level Load) | World of Warcraft (Game Launch) | World of Warcraft (Level Load) |
| --- | --- | --- | --- | --- | --- | --- |
| Seagate Barracuda 3TB (No cache) | 12.0 seconds | 17.1 seconds | 15.3 seconds | 23.3 seconds | 5.3 seconds | 11.9 seconds |
| Seagate Barracuda 3TB (Enhanced Cache) | 10.3 seconds | 15.0 seconds | 10.3 seconds | 15.1 seconds | 5.2 seconds | 5.6 seconds |
| Seagate Barracuda 3TB (Maximize Cache) | 9.9 seconds | 15.1 seconds | 9.7 seconds | 15.0 seconds | 4.5 seconds | 5.8 seconds |
| Seagate Barracuda 3TB (Maximize Cache, Run 3) | 9.9 seconds | 14.8 seconds | 8.1 seconds | 14.9 seconds | 4.4 seconds | 4.3 seconds |
| OCZ Vertex 3 240GB (6Gbps) | 8.5 seconds | 13.1 seconds | 7.5 seconds | 14.5 seconds | 4.1 seconds | 4.7 seconds |
Performance keeps going up. The maximized SRT system is now virtually indistinguishable from the standalone SSD system.
Gaming is actually a pretty big reason to consider using Intel SRT, since games can eat up a lot of storage space. Personally I keep one or two frequently played titles on my SSD; everything else goes on the HDD array. As the numbers above show, however, there's a definite performance benefit to deploying an SSD cache in a gaming environment.
I was curious how high a hit rate I'd see within a game when loading multiple levels rather than the same level over and over again. I worried that Intel's SRT would only cache the most frequently used level and fail to improve performance across the board. I was wrong.
StarCraft 2 Level Loading—Seagate Barracuda 3TB (Maximize Cache)

| Levels Loaded in Order | Load Time |
| --- | --- |
| Agria Valley | 16.1 seconds |
| Blistering Sands | 4.5 seconds |
| Nightmare | 4.8 seconds |
| Tempest | 6.3 seconds |
| Zenith | 6.2 seconds |
Remember that SRT works by caching frequently accessed LBAs, many of which may be reused even across different levels. In the case of StarCraft 2, only the first multiplayer level load took a long time, as its assets and other shared game files were being cached; all subsequent level loads completed much more quickly. Note that this behavior isn't exclusive to SSD caching; some of this data can also end up resident in main memory.
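Intel hasn't published SRT's exact caching policy, but a useful mental model is a fixed-size pool of cached LBAs with least-recently-used eviction. The toy sketch below (illustrative only, not Intel's implementation) shows why a level that shares assets with a previously loaded one mostly hits the cache.

```python
# Toy model of an SRT-style block cache keyed by LBA with LRU eviction.
# Illustrative only; Intel has not published the real SRT policy.
from collections import OrderedDict

class LBACache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()        # LBA -> cached data (placeholder)
        self.hits = self.misses = 0

    def access(self, lba):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)   # mark as most recently used
            self.hits += 1
            return "hit"
        self.misses += 1
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used LBA
        self.blocks[lba] = None              # read from the HDD, then cache it
        return "miss"

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# The second level reuses most of the first level's assets, so it mostly hits.
cache = LBACache(capacity_blocks=4)
first_level = [10, 11, 12, 13]     # cold cache: all misses
second_level = [10, 11, 12, 14]    # shared assets hit, new data misses
for lba in first_level + second_level:
    cache.access(lba)
print(f"hit rate: {cache.hit_rate():.0%}")   # 3 hits in 8 accesses
```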
106 Comments
MrCromulent - Wednesday, May 11, 2011 - link
Thanks for the review! Good to see that Intel's SSD caching actually works quite well.

I'm looking forward to the next generation of SB notebooks with a ~20GB mSATA SSD combined with a 1TB 2.5" hard drive.
dac7nco - Wednesday, May 11, 2011 - link
Indeed. I'd be interested in seeing how a Crucial M4 64GB mated to a pair of short-stroked single-platter Samsung drives in RAID-0 would perform in a dedicated gaming system.

JarredWalton - Wednesday, May 11, 2011 - link
Really? Man, I thought short-stroking drives was all but dead these days. That's the whole point of SSDs: if you're so concerned about storage performance that you're willing to short-stroke an HDD, just move to a full SSD and be done with it. Plus, storage is only a minor bottleneck in a "dedicated gaming system"; your GPU is the biggest concern, at least if you have any reasonable CPU and enough RAM.

My biggest concern with SRT is the reliability stuff Anand mentions. I would *love* to be able to put in a 128GB SSD with a large 2TB HDD and completely forget about doing any sort of optimization. That seems like something that would need to be done at the hardware level, though, and you always run the risk of data loss if the SSD cache somehow fails (though that should be relatively unlikely). Heck, all HDDs already have a 16-64MB cache on them, and I'd like the SSD to be a slower but much larger supplement to that.
Anyway, what concerns me is that we're not talking about caching at the level of, say, your CPU's L1 or L2 or even L3 cache. There's no reason the caching algorithm couldn't look at a much longer history of use so that things like your core OS files never get evicted (i.e. they are loaded every time you boot and accessed frequently, so even if you install a big application all of the OS files still have far higher hit frequency). Maybe that does happen and it's only in the constraints of initial testing that the performance degrades quickly (e.g. Anand installed the OS and apps, but he hasn't been using/rebooting the system for weeks on end).
The "least recently used" algorithm most caching schemes use is fine, but I wonder if the SSD cache could track something else. Without knowing exactly how they're implementing the caching algorithm, it's hard to say would could be improved, and I understand the idea of a newly installed app getting cached early on ("Hey, they user is putting on a new application, so he's probably going to run that soon!"). Still, if installing 30GB of apps and data evicts pretty much everything from the 20GB cache, that doesn't seem like the most effective way of doing things--especially when some games are pushing into the 20+ GB range.
bji - Wednesday, May 11, 2011 - link
It seems like a good way to do it would be for the software to recognize periods of high disk activity and weigh caching of all LBAs during that period much higher.

So for example, system boot, where lots and lots of files are read off of the drive, would be a situation where the software would recognize that there is a high rate of disk I/O going on and weigh all of the files loaded during this time very highly in caching.
The more intense the disk I/O, the higher the weight. This would essentially mean that the periods that you most want to speed up - those with heavy disk I/O - are most likely to benefit from the caching, and disk activity that is typically less intense (say, starting a small application that you use frequently but that is relatively quick to load because of the small number of disk hits) would only be cached if it didn't interfere with the caching of more performance-critical data.
All that being said, I am not a fan of complex caching mechanisms like this to try to improve performance. The big drawback, as pointed out in this well-presented article, is the lack of consistency; sometimes you will get good performance and sometimes not, depending on tons of intangible factors affecting what is and what isn't in the cache. Furthermore, you are always introducing extra overhead with the complexity of the caching scheme; in this case it's driven by a piece of software on the CPU, and data gets shuffled around and written/read more times than it would be with no caching involved.
Then again, it is highly unlikely to *hurt* performance, so if you don't mind sometimes waiting longer than other times for the same thing to happen (this in particular drives me crazy, though; if I am used to a program loading in 5 seconds, a launch that takes 10 seconds sticks out like a sore thumb), and can absorb the extra cost involved, then it's not a totally unreasonable way to try to get a little bit of performance.
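For what it's worth, the intensity-weighted idea above might look something like the sketch below (hypothetical weights and interface, not a real implementation): each cached block remembers the overall I/O rate observed when it was brought in, and eviction targets whatever was cached during the quietest period.

```python
# Toy intensity-weighted cache: a block's priority is the system-wide I/O
# rate observed when it was cached, so data from heavy-I/O periods (boot,
# big game loads) outlives data from quiet periods. Hypothetical sketch.
class IntensityWeightedCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.weight = {}                  # LBA -> I/O rate (MB/s) when cached

    def access(self, lba, current_io_rate_mbps):
        if lba in self.weight:
            # Remember the heaviest I/O period this block was part of.
            self.weight[lba] = max(self.weight[lba], current_io_rate_mbps)
            return "hit"
        if len(self.weight) >= self.capacity:
            victim = min(self.weight, key=self.weight.get)   # quietest period
            del self.weight[victim]
        self.weight[lba] = current_io_rate_mbps
        return "miss"

cache = IntensityWeightedCache(capacity_blocks=2)
cache.access(lba=500, current_io_rate_mbps=120.0)  # cached during boot (heavy I/O)
cache.access(lba=900, current_io_rate_mbps=5.0)    # small app, light I/O
cache.access(lba=901, current_io_rate_mbps=5.0)    # evicts LBA 900, keeps boot data
print(500 in cache.weight, 900 in cache.weight)    # True False
```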
Zoomer - Wednesday, May 11, 2011 - link
Or the filesystem can manage the cache. That would be a much more intelligent and foolproof way to do this.

vol7ron - Wednesday, May 11, 2011 - link
Can you point a RAM Disk to this caching drive?

bji - Wednesday, May 11, 2011 - link
What is the algorithm that the filesystem would use to decide what data to cache in preference to other cacheable data? That is the question at hand, and it doesn't matter at what level of the software stack it's done; the problem is effectively the same.

Mr Perfect - Wednesday, May 11, 2011 - link
<quote>I would *love* to be able to put in a 128GB SSD with a large 2TB HDD and completely forget about doing any sort of optimization.</quote>

I heartily agree with that. Everyone is so gung ho about having an SSD for the OS and applications, a HD for data, and then <b>manually managing the data!</b> Isn't technology supposed to be doing this for us? Isn't that the point? Enthusiast computers should be doing things the consumer-level stuff can't even dream about.
Intel, please, for the love of all that is holy, remove the 64GB limit.
Mr Perfect - Wednesday, May 11, 2011 - link
On a completely unrelated note, why is the AT commenting software unable to do things the DailyTech site can? Quotes, bolding, italics and useful formatting features like that would really be welcome. :)

JarredWalton - Wednesday, May 11, 2011 - link
I'm not sure when they got removed, but standard BBS markup still works, if you know the codes. So...

[ B ]/[ /B ] = Bolded text
[ I ]/[ /I ] = Italicized text
[ U ]/[ /U ] = Underlined text
There used to be an option to do links, but that got nuked at some point. I think the "highlight" option is also gone... but let's test:
[ H ]/[ /H ] = [h]Bolded text[/h]
So why don't we have the same setup as DT? Well, we *are* separate sites, even though DT started as a branch off of AT. They have their own site designer/web programmer, and some of the stuff they have (i.e. voting) is sort of cool. However, we would like to think most commenting on AT is of the quality type so we don't need to worry about ratings. Most people end up just saying "show all posts" regardless, so other than seeing that "wow, a lot of people didn't like that post" there's not much point to it. And limiting posts to plain text with no WYSIWYG editor does reduce page complexity a bit I suppose.