Advatronix Cirrus 1200: a Storage Server Under Your Desk
by Johan De Gelas on June 6, 2014 5:00 AM EST

Low Latency Database Servers
The main use case for the Cirrus 1200 is serving up documents and files, basically a "NAS on steroids". Ganesh specializes in this field, so we'll leave that evaluation to him. There is little doubt in our mind that the combination of a quad "big core" Xeon (versus much weaker ARM or Atom cores), the relatively high-performance RAID controller, and the large amount of memory should make the Cirrus 1200 a very potent file server, especially compared to the usual NAS solutions that rely on much slower ARM SoCs, Atoms, or Celerons, most of which also have just 1-4GB of RAM. But as a file server, the Cirrus 1200 is likely overkill.
As we explained in the introduction, we believe that one of the use cases for the Cirrus 1200 is as a high performance database server, potentially combined with a file server. The idea is that you are in full control of your data (i.e. it's not in the cloud), and you can offer low latency (network) access without hosting costs. Most databases are storage-limited, so the availability of 10 (12 in total) hard drives and 6 SSDs sounds very good in that respect. The other technical specifications (Xeon E3, 32GB RAM max) are not ideal for a database server, but they typically aren't as critical. Thus it seemed very interesting to see what this platform is capable of as a database server, and what the best way to configure it would be.
We used HammerDB to set up a "TPC-C-like" database, but we tested the transaction rate with our vApus stress test, as it is more accurate and closer to the real world than the classic HammerDB test. It also allows us to integrate extensive monitoring while testing, which improves our understanding of what is going on. Throughput (transactions per second) should not be reported without taking response time into account, so we tested with 128 to 1024 connections and report the highest throughput that still keeps response times at or below 100 ms. We chose this limit because a typical database application will issue quite a few requests to the database, and a 100 ms transaction response time should deliver acceptable application response times (< 1 second).
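To illustrate the reporting rule, here is a minimal sketch of how one could pick the reported number from a connection sweep. The figures, the function name best_throughput, and the 90th-percentile choice are all hypothetical placeholders, not our actual vApus tooling or measurements.

```python
# Hypothetical (connections, transactions/sec, response time in ms) results,
# loosely modeled on a 128-1024 connection sweep; not actual measurements.
results = [
    (128,  420, 35),
    (256,  610, 55),
    (512,  740, 90),
    (1024, 790, 160),  # exceeds the 100 ms cap, so it is excluded
]

RESPONSE_TIME_CAP_MS = 100

def best_throughput(measurements, cap_ms=RESPONSE_TIME_CAP_MS):
    """Highest transactions/sec among load levels that stay within the response-time cap."""
    acceptable = [tps for _, tps, rt in measurements if rt <= cap_ms]
    return max(acceptable) if acceptable else None

print(best_throughput(results))  # 740 in this made-up example
```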
39 Comments
thomas-hrb - Friday, June 6, 2014 - link
If you're looking at storage servers under the desk, why not consider something like the DELL VRTX? That at least has a significant advantage in the scalability department. You can start small and re-dimension to many different use cases as you grow.

JohanAnandtech - Friday, June 6, 2014 - link
Good suggestion, although the DELL VRTX is a bit higher in the (pricing) food chain than the servers I described in this article.

DanNeely - Friday, June 6, 2014 - link
With room for 4 blades in the enclosure, the VRTX is also significantly higher in terms of overall capability. Were you unable to find a server from someone else that was a close match in specifications to the Cirrus 1200? Even if it cost significantly more, I think at least one of the comparison systems should've been picked for equivalent capability instead of equivalent pricing.

jjeff1 - Friday, June 6, 2014 - link
I'm not sure who would want this server. If you have a large SQL database, you definitely need more memory and better reliability. Same thing if you have a large amount of business data. Dell, HP, or IBM could all provide a better box with much better support options. This HP server supports 18 disk slots, 2 12-core CPUs, and 768GB of memory.
http://www8.hp.com/us/en/products/proliant-servers...
It'll cost more, no doubt. But if you have a business that's generating TBs of data, you can afford it.
Jeff7181 - Sunday, June 8, 2014 - link
If you have a large SQL database, or any SQL database, you wouldn't run it on this box. This is a storage server, not a compute server.

Gonemad - Friday, June 6, 2014 - link
I've seen U server racks on wheels, with dark glass and a key lock, but that was just an empty "wardrobe" where you would put your servers. It was small enough to be pushed around, but with enough real estate to hide a keyboard and monitor in there, like a KVM solution for a hypervisor. On the plus side, if you ever decided to upgrade, you could just plop your gear into a real rack unit. It felt less cumbersome than that huge metal box you showed there. Then again, a server that conforms to a rack shape is needed.
Kevin G - Friday, June 6, 2014 - link
Actually, I have such a Gator case. It is sold as a portable case for AV hardware but conforms to standard 19" rack mount widths and hole mounts. There is one main gotcha with my unit: it doesn't provide as much depth as a full rack. I have to use shorter server cases, and they tend to be a bit taller. It works out, as the cooling systems of taller rack cases tend to be quieter, which is an advantage when bringing them to other locations. More of a personal preference thing, but I don't use sliding rails in a portable case, as I don't see that as wise for a unit that's going to be frequently moved around and traveling.

martixy - Friday, June 6, 2014 - link
Someone explain something to me please. So this is specifically low-power: 500W on spec. Let's say then that it's non-low-power (e.g. twice that, 1kW). I'm gonna assume we're treading on CRAC territory at that point. So why exactly? Why would a high-powered gaming rig be able to easily handle that load, even under air cooling, but a server with the same power draw require special cooling equipment with fancy acronyms like CRAC?
alaricljs - Friday, June 6, 2014 - link
A gaming rig isn't going to be pushing that much wattage 24x7. A server is considered a constant load, and proper AC calculations even go so far as to consider the number of people expected in a room consistently, so a high-wattage computer is definitely part of the equation.
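For a rough sense of what that kind of constant load means for cooling, here is a back-of-the-envelope sketch. The server wattage, occupant count, and per-person figure are all assumptions for illustration (the per-person number is only a common HVAC rule of thumb), not a sizing guide.

```python
# Back-of-the-envelope heat load: 1 W of sustained draw is roughly 3.412 BTU/hr.
# All inputs below are assumed values, purely for illustration.
WATTS_TO_BTU_PER_HR = 3.412

server_draw_w = 500        # assumed sustained draw of a box like the Cirrus 1200
people = 2                 # occupants; ~400 BTU/hr each is a common HVAC rule of thumb
btu_per_person = 400

heat_load = server_draw_w * WATTS_TO_BTU_PER_HR + people * btu_per_person
print(f"~{heat_load:.0f} BTU/hr")  # ~2506 BTU/hr, well under a typical 5000 BTU/hr window AC
```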
I suspect it's mostly marketing BS. One box, even a high-power one at a constant 100% load, doesn't need special cooling. A CRAC is needed when you've got a data center packed full of servers, because they collectively put out enough heat to overwhelm general-purpose AC units. (With the rise of virtualization, many older data centers' capacity has become thermally limited instead of being limited by the number of racks there's room for.) At the margin they may be saying it was designed with enough cooling to keep temps reasonable in air on the warm side of room temperature, instead of only when it's being blasted with chilled air. OTOH, a number of companies that have experimented with running their data centers 10 or 20F hotter than traditional have found the cooling cost savings came without any major impact on longevity, so...