Random (usually) Apple Related Tidbits
Upgrading An Xserve Cluster Node With Three (Sort Of) Internal Drives
A how-to explaining a low-tech, low-cost way to add two internal hard drives to an Xserve G5 cluster node.
Recently, my trusty Xserve G4 started having an issue with its power supply. Unlike its newer relatives, the G4 doesn't include the luxury of redundant power. After a bit of looking, I discovered that a replacement power supply for even a relatively ancient machine like the Xserve G4 can run in excess of two hundred bucks. If you're anything like me, you probably find the thought of paying $200 for a power supply a bit unpalatable. As such, I turned to other options.
As it turns out, I was able to purchase a very capable Xserve G5 for a very reasonable price. Although I didn't absolutely need the G5's added horsepower for my relatively modest server requirements, a little extra juice is never a bad thing. Unfortunately, to get the sweet deal on the G5, I had to settle for a cluster node model. Luckily, the unit that I purchased was a second-revision G5 cluster node, which means that it's pretty much identical to a "normal" G5 Xserve except that it has only one hot-swap drive bay whereas the regular G5 has three. Needless to say, moving from the Xserve G4 with its four internal drive bays to an Xserve with only one internal drive bay was going to take a little bit of reconfiguration.
Initially, my plan was to use my Mercury Elite-AL, with its relatively speedy FireWire 800 connection, to provide a mirrored RAID startup volume. But, having had a previous power-related failure with the Mercury, I wasn't too thrilled about relying on that unit to provide my primary operating volume. Furthermore, I speculated that the new G5 might actually be able to support a few more internal drives with a bit of creativity. So, I cracked the Xserve open.
As I suspected, the chassis of the G5 Xserve cluster node is identical to that of the normal G5 server. However, as a cost-cutting measure, Apple scrapped two of the (likely costly) hot-swap drive cradles, leaving only blank drive bays. With a little poking around, I was able to see that (predictably) Apple used the same circuit board as in the regular Xserve to provide the motherboard SATA drive connections (which are in turn connected to the hot-swap drive bays). In other words, the G5 cluster node has two empty drive bays and two spare SATA connections. To me, that sure sounded like a recipe for success.
Unfortunately, as it turns out, one circuit board change that Apple did implement is to the power distribution board that feeds the drive modules. The board in the G5 cluster node has a power connection for only the single drive bay. I briefly considered trying to figure out a way to tap into some part of the internals to run power to the empty bays, but given that my Xserve lives in a very low-tech environment (my basement), I opted for a quick, dirty, and decidedly low-tech approach.
As the pictures illustrate, two 18" SATA cables provide ample length to snake out of the front of the Xserve and allow the connection of two drives to the two previously unused internal SATA connections. Add a power adapter with a Molex connector (which I just happened to have on hand) and a $4.99 Radio Shack Molex-to-twin-SATA power connector, and everything is good to go.

So, you might ask, what's the advantage of doing what I've just explained as opposed to simply connecting some fast external storage and calling it good (besides making my machine look completely ghettotastic, of course)? Well, in theoretical terms, the fastest stock external bus on the Xserve G5 is FireWire 800, which tops out at 800 Mbps. SATA, on the other hand, tops out at 1.5 Gbps, or nearly twice the speed of FireWire 800. Not to mention that each of the Xserve's internal SATA ports is on a separate controller. In other words, for two drives, my fastest external option is a shared 800 Mbps; running them on the internal buses, I get a total of 3 Gbps. Although I haven't benchmarked it, I suspect that with today's speedy, high-capacity drives, that can amount to a tangible difference in performance. In addition, I saved the cost of an external drive enclosure.
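For the curious, the back-of-the-envelope math looks like this. Keep in mind these are theoretical signaling rates, not real-world throughput, and the figure assumes both drives can actually saturate their links:

```python
# Back-of-the-envelope bus bandwidth comparison (theoretical signaling
# rates only; real-world throughput will be lower).

FIREWIRE_800_MBPS = 800   # one FireWire 800 bus, shared by all attached drives
SATA_I_MBPS = 1500        # SATA 1.5 Gbps; one link (and controller) per drive

DRIVES = 2

# Externally, both drives would share the single FireWire 800 bus.
external_total = FIREWIRE_800_MBPS

# Internally, each drive gets its own SATA link, so the bandwidth adds up.
internal_total = SATA_I_MBPS * DRIVES

print(f"External (shared FireWire 800): {external_total} Mbps")
print(f"Internal (2 x SATA 1.5 Gbps):   {internal_total} Mbps")
print(f"Theoretical advantage:          {internal_total / external_total:.2f}x")
```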
Ultimately, I'm not suggesting that everyone stack bare drives on top of their Xserve as I've done here (that probably wouldn't go over well at a co-lo). I'm merely documenting the approach I used, chiefly to demonstrate that the Xserve cluster node has some valuable hardware (i.e., two spare SATA buses) that would otherwise go unused. Although I didn't have the patience or motivation to do so, I'm fairly certain that there would be a way to tap into the Xserve's power and install the drives in the unused bays (maybe I'll get tired of my beautiful Xserve looking so ridiculous and get around to figuring that out one day). Or, perhaps an even more practical solution would be to use one of the PCI slots to bring the internal SATA connections out as eSATA ports. In any case, I did what works for me. Hopefully others will be able to use my experience to come up with a solution that works for them.
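One obvious use for the two new drives is the software mirror I'd originally planned to put on the Mercury. Here's a minimal sketch of one way to build it with Mac OS X's diskutil. The device identifiers (disk1, disk2) are hypothetical, so run "diskutil list" and substitute your own; also note that the appleRAID verb is Leopard-era syntax, while Tiger's diskutil uses createRAID instead.

```python
# Minimal sketch: build an Apple software RAID mirror across the two
# newly connected drives by shelling out to diskutil.
# WARNING: creating the set erases both member drives.
import subprocess

# Hypothetical device identifiers -- run `diskutil list` and substitute
# the real ones for your machine.
DEVICES = ["disk1", "disk2"]

# Mac OS X 10.5 (Leopard) syntax; on 10.4 (Tiger) the equivalent is:
#   diskutil createRAID mirror Mirror JHFS+ disk1 disk2
subprocess.run(
    ["diskutil", "appleRAID", "create", "mirror", "Mirror", "JHFS+", *DEVICES],
    check=True,
)
```

Once the set is built, it mounts as a single volume, and losing either bare drive won't take your data with it.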
*Although I don't know this to be the case, I suspect the approach I've outlined here would also work on more current Xserve models.