RAID on the desktop

Review date: 24 June 2000.
Last modified 03-Dec-2011.

 

If you need lots of disk storage in your PC, the obvious solution these days is to just go out and buy a hard drive with ludicrous capacity. They're cheap as chips.

15 gigabyte drives cost less than $AU350, 20 gigabytes'll set you back $AU400 or so, and 30 gigabytes can be had for not much more than $AU600. And there are still bigger drives out there. The average computer store's not likely to have a monster like the IBM Deskstar 75GXP DTLA-307075 (nearly 70 real live formatted gigabytes, kids) sitting on the shelf, but they're out there. And the price per megabyte is roughly the same.

Cheap gargantuan drives, though, have their limits.

They fail, for instance.

All hard drives will fail, if you use them long enough. Through defects or misadventure, some die young. A hard drive may fail in a considerate way - obviously spinning down and up and down and up all the time, for instance. This gives you the chance to swap it out at a convenient time, and even get the current data off it elegantly.

But, now and then, drives just drop dead out of the blue.

Now, because you're a sensible person, you of course make regular backups of all of your important data.

I don't know why so many people start shuffling their feet and whistling tunelessly when I say that.

Anyway, as I was saying, you have backups. So if your monster-drive drops dead, you can get a recent version of your data back. But, in the meantime, whatever you were using that drive for isn't happening any more.

You don't put twenty-plus-gigabytes of storage in a PC you use for freakin' word processing. Well, people do, of course, but people run Windows 98 on dual processor Xeon boxes, too. There's nowt so queer as folk. Sensible people are usually doing Something Important with a great big drive.

Assuming you're sensible, it's therefore a big pain if the drive dies. Your office just lost its file and print server, or its intranet database box, or something else that'll cost you time and money well beyond what you'll have to spend to get the computer running again.

If you can get greater reliability without spending an absolute steaming fortune, then, you probably ought to.

Another shortcoming of cheap big drives is that they're not fast enough. Well, not fast enough for what you may want them to do, anyway.

Most desktop tasks aren't very disk intensive at all. Ordinary Windows tasks, business applications, games - most of what most people do with their computers just won't happen noticeably faster even if the hard drive speed goes through the roof. Double your disk speed, get a 10% overall speed boost. Whoopee.
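
If you want to put rough numbers on that, a little Amdahl's Law arithmetic does the job. The 20% disk share below is just an assumption for illustration, not a measured figure:

    # Back-of-the-envelope Amdahl's Law arithmetic. The disk share of
    # total task time is an assumed figure, purely for illustration.
    disk_fraction = 0.2   # assume disk waits are 20% of the task
    disk_speedup = 2.0    # double your disk speed

    new_time = (1 - disk_fraction) + disk_fraction / disk_speedup
    print(f"Overall speedup: {1 / new_time:.2f}x")  # ~1.11x - call it 10%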

But some tasks are disk intensive. File serving, be it for a local network or Internet purposes. Database work. Digital video. If you've got some task that flogs the drive all the live-long day, then a faster drive will help.

Lower latency - the length of time between ordering the drive to do something and having its mechanism actually get arranged properly to start - will help you if you're doing some file serving-esque task where lots of files are requested simultaneously. A higher data transfer rate will help if you're doing some task where delivering tons of data per second's useful - and that covers pretty much everything disk-intensive.

Now, today's big stand-alone drives, even the cheap ones, have low latency and high transfer rates, by previous standards. But if you want real speed, you've got to make them work in a team.

RAID

RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It lets multiple drives behave like one, bigger, faster drive, by spreading data between them.

If RAID's implemented in hardware, there's a controller that handles the job of making multiple drives look like one. If it's implemented in software, a special device driver does the work. Software RAID's slower, but you don't need the special controller. You can do it with "industrial strength" operating systems like Windows NT and Linux.

The two elementary kinds of RAID are called "striping" and "mirroring". In striping, a.k.a. "RAID Level 0", you use more than one drive, and the data's sent to or read from each of them in turn. Byte one to drive one, byte two to drive two, and so on until you run out of drives and go back to the first one again.

Striping improves performance - you add together the transfer rates of the drives in the set, provided your drive interface has the bandwidth to handle it. But it doesn't provide any redundancy at all. If one drive in a "stripe set" fails, the whole set's hosed. So you shouldn't use it for important data. Well, not without mirroring as well, anyway.

In mirroring, a.k.a. "RAID Level 1", the data's duplicated across different drives. So if one of them fails you can swap in another one and rebuild the data. Better RAID systems even let this happen without interrupting operation of the computer - you can "hot swap" drives, and the RAID array will rewrite data to the mirror drive as and when it has time in between other tasks.

You can mix these two RAID flavours, too. Say, with a stripe set of two drives mirrored to another, identical, pair of striped drives. This is called RAID 0+1.
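
If you'd like to see the data-shuffling in miniature, here's a toy sketch in Python, purely for illustration. Real controllers deal out whole blocks of sectors, not single characters, and do it all in hardware, but the principle's the same:

    # Toy model of RAID data placement.
    def stripe(data, drives):
        # RAID 0: deal the data out to each drive in turn.
        for i, chunk in enumerate(data):
            drives[i % len(drives)].append(chunk)

    def mirror(source, copy):
        # RAID 1: the copy is kept identical to the source.
        copy[:] = source

    # RAID 0+1: a two-drive stripe set, mirrored to an identical pair.
    d1, d2, d3, d4 = [], [], [], []
    stripe("ABCDEFGH", [d1, d2])  # d1 holds A,C,E,G; d2 holds B,D,F,H
    mirror(d1, d3)                # d3 duplicates d1
    mirror(d2, d4)                # d4 duplicates d2

    print(d1, d2, d3, d4)
    # Lose d1? Everything's still on d3. Lose d1 AND d3? The data's gone.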

In this simple four-drive example, the chance of any one drive failing is, of course, four times higher than it would be if you were only using one drive with equal reliability to the four in the RAID array. But the chance of actually losing data because of a drive failure is much lower. You'd have to lose a drive, and its mirror drive, simultaneously. This makes the chance of a drive-failure-related data loss pretty darn minuscule.

Assume you're using crummy drives with, say, a one in one thousand chance of failing on any given day. Further assume that you only check for failed drives once a day, so you've got that much of a window for a double failure to occur.

Assuming one drive's failed, the chance of the matching remaining drive failing is 1/1000 (there's a 1/500 chance that one of the other two drives'll fail, but that won't cause data loss unless its mirror dies as well), so you've got a one in a million chance of a data-loss failure on any given day.
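
The arithmetic, spelled out. Note, for the pedants, that the one-in-a-million figure is for a given mirror pair; with two pairs in the array the true chance of losing something is roughly twice that, which is still minuscule:

    p_one = 1 / 1000        # daily failure chance for one crummy drive
    # Data's lost only if a drive AND its mirror partner die the same day.
    p_pair = p_one * p_one
    print(p_pair)           # 1e-06: the one-in-a-million figure
    print(2 * p_pair)       # chance of EITHER mirror pair dying whole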

One in a thousand, though, is a lot worse than the real failure probability of modern drives, even the alarmingly gigantic ones. Current drives are more likely to have something like a 500,000 hour Mean Time Between Failures (MTBF) score.

This doesn't mean that anybody's tested any of them for 57 years, of course. The real drive lifespan is likely to be ten years, tops. The big number may just mean that the manufacturers set up, say, 1000 drives and tested them for, say, 1000 hours (a rather more manageable 42 days or so). And at the end of that time, they had two dead ones.

The 500,000 hour MTBF figure, therefore, can be arrived at by saying that if 0.2% of your drives fail in 1000 hours, then all of 'em are going to fail by the half-million-hour mark. The drive manufacturers know that isn't true, and so does anybody else who knows how MTBFs are worked out, but it sure impresses the punters.

There are lots of other factors that go into MTBF evaluation for many companies - they figure in real-world data from returns, subtracting devices obviously killed by the user from the stats. Some of them therefore come up with Theoretical MTBF and Operational MTBF figures. But there's no standardised way to calculate any kind of MTBF, so you shouldn't put too much stock in it.

You can, at least, use MTBF numbers to work out the rough probability of failure in any given day over the sensible lifespan of the drive. It's probably going to be replaced, along with the whole system it's in, in a few years anyway.

If you take a 500,000 hour MTBF figure at face value in this way, it gives you a roughly one in 20,000 chance of a drive failure on any given day.

Build your RAID 0+1 four-drive array out of these drives, and now you have a one in 400 million chance that it'll drop dead today. A naive analysis would suggest that you have rather more chance of being struck by lightning than of having your RAID array suffer a dual-drive data loss failure.
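
For the sceptical, here's the whole chain of arithmetic, taking the manufacturer's figure at face value:

    # From a (dubious) MTBF test to the one-in-400-million figure.
    mtbf_hours = (1000 * 1000) / 2   # 1000 drives, 1000 hours, 2 dead
    p_day = 24 / mtbf_hours          # per-drive daily failure chance
    print(1 / p_day)                 # ~20,833 - call it 1 in 20,000
    print(1 / (1 / 20000) ** 2)      # 400,000,000: drive AND mirror, same day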

There are plenty of other bad things that can happen to your storage, of course. Fire, lightning, flood, earthquake, thieves, virus attack, egregious user error; if you want to keep your data, you still need a backup. But if you want fast, reliable storage for a critical machine, some sort of hardware-driven mirrored-and-striped RAID is a great idea.

The only problem is that, until recently, hardware RAID was alarmingly expensive. Because all those lovely cheap drives couldn't do it.

Interface elitism

Every single one of the world's low-cost-per-megabyte hard drives uses the "IDE" interface.

IDE stands for Intelligent (or maybe Integrated) Drive Electronics. It's the interface standard, more correctly referred to as AT Attachment (ATA), which is used by all consumer PC motherboards, and by recent Macintoshes as well. IDE drives are made in vast quantities. Their low price is chiefly because of economies of scale, and also owes something to tight competition in the cost-sensitive consumer market.

These days, there's very little reason not to use IDE, for the vast majority of computing tasks. It's fast, it's compatible, and you can attach up to four IDE devices to modern motherboards - they've got two 40 pin IDE connectors, and each of these connectors is a separate IDE "channel" that can support one or two drives.

In the olden days, motherboards used to have only one IDE connector, and there was no guarantee that you could successfully connect a pair of drives to it. IDE CD-ROMs have been around for some time (they use an extension to the ATA standard called ATAPI, AT Attachment Packet Interface), but you used to have to run them from a secondary controller, usually built into a sound card, to have a good chance of them working.

When there are two IDE devices on a single channel, the one that's set to "Master" (usually with little jumper blocks on the back of the drive) uses its controller circuitry to control itself and the second device, which has to be set to "Slave". If the master drive's controller doesn't work with the slave drive's electronics, problems result.

But now the various manufacturers of IDE gear have well and truly worked out the bugs, and practically any combination of current hard, CD-ROM or tape drives should work fine.

IDE devices also used to be slow. There was nothing especially awful about the drive mechanisms, but the earlier incarnations of the IDE interface required a lot of CPU time for shifting data around, and didn't have a lot of bandwidth.

The bandwidth figure determines the maximum amount of data that can be moved in a given period of time, and for the original IDE it was crummy. The theoretical bandwidth of the fastest transfer mode of the early ATA systems was more than eight megabytes per second, but in practice they had a hard time beating 2Mb/s, and with two devices on a channel, the slower device could hold up the faster one quite a lot. This was another reason not to put your CD-ROM on the same IDE channel as your hard drive.

Up against these underwhelming early IDE systems was SCSI (Small Computer Systems Interface), which in its original version could support up to seven devices per controller, without loading up the CPU. Those devices could be hard drives, scanners, tape drives, CD-ROM drives; you name it. IDE, with ATAPI, can support CD-ROMs and tape drives, but that's where it stops. And the earliest IDE versions worked only with hard drives.

SCSI was faster, and you could use decently long cables (IDE cables can be only about 45 centimetres in length, so it's meant to be an inside-the-computer-only system), and all of the newest, sexiest, fastest drives always came out in SCSI versions first and IDE later, if ever.

SCSI was rather more compatible, as well - SCSI devices tended to actually play nice when plugged into the same cable. It was far from perfect, in early incarnations. Apple made some, um, refreshingly idiosyncratic, um, amendments to the standard in their earlier Macintoshes. We may never know how many died of frustration.

But SCSI was still better than IDE.

And SCSI supported hardware RAID, with an appropriate controller. And, of course, it still does. But it sure as heck isn't cheap.

For a simple four drive hardware SCSI RAID array at the moment, you're looking at about $AU750 for the controller, and the same again for each of the drives, even if you're settling for relatively low capacity ones. You can get 9Gb SCSI drives for a fair bit less than $AU600; if you want 36Gb ones, you're talking $AU1800 per drive.

If that's in your price range, I'm happy for you. Let's assume it's not, and move on to The Innovation That Guarantees That You WILL Save: IDE RAID.

The cheap seats

IDE RAID controllers use a couple of simple two-channel IDE connectors, and cunning control hardware, to allow you to make basic two-to-four-drive RAID arrays for much, MUCH less money than you'd pay for the SCSI alternative. They're no good if you want huge super-arrays, but they're thoroughly adequate for less demanding purposes, and they're real live hardware RAID, with all of its advantages.

You've been able to buy IDE RAID controller cards for a while now, and last year some enterprising hackers discovered that Promise's low-cost Ultra66 card (listing for well under $US30 from various dealers, now that most new motherboards support the standard natively) could, by the addition of a resistor and a new BIOS version, be converted into their rather more expensive FastTrak66 RAID controller (read all about it here).

People with less soldering iron aptitude, of course, just bought the FastTrak, which is now selling for less than $US120, or well under $US100 from the really cheap places.

And there are other options. The Iwill SIDE RAID66, less than $US90, for instance.

And then there are motherboards with IDE RAID built in. Iwill weigh in here, again, with their VD133Pro, a Socket 370 (Intel Pentium III and Celeron, or VIA Cyrix III) motherboard with two IDE connectors. It can do RAID 0+1 using the Ultra ATA/66 standard, which has a theoretical maximum bandwidth of 66 megabytes per second, per channel. No current IDE hard drive can use more than half of that bandwidth, so it makes little difference.

And then there's Abit's RAID motherboard, the KA7-100. I review this board in detail here; in brief, it's an all-singing, all-dancing super-board for AMD's excellent Athlon CPU, and it's got a pair of IDE controllers, each with two connectors. One controller's ATA/66, the other's the new ATA/100, also known as ATA/66+, if you're talking to someone who's got their knickers twisted over the fact that the standard was announced by drive makers Quantum, who came up with the earlier standards, too.

ATA/100 can move a hundred megabytes per second, in theory. But ATA/66 isn't a very large improvement, in the real world, over the earlier ATA/33 (the current baseline IDE standard). For the same reason, ATA/100 is barely faster than ATA/66. And you have to use ATA/100 drives for an ATA/100 controller to work in that mode at all, and not fall back to ATA/66 or 33; ATA/100 drives are not thick on the ground yet.

Incidentally, you can firmware-upgrade some older drives to ATA/100 compatibility, which won't do anything much for their performance but at least means they won't hold the bus speed down if they're sharing a cable with another ATA/100 device. For info on how to do this with some Maxtor DiamondMax models, see here.

The exciting thing about the KA7-100's second IDE controller is therefore not its bandwidth. The exciting thing is RAID.

It's based on a Highpoint HPT370 chip, which is an IDE RAID controller, and if you go into the HPT370 BIOS setup menu during startup - which you can only do if you've got at least one drive connected to that controller - there the RAID options are, plain as day.

Which is something of a surprise, because Abit don't mention RAID in the KA7-100 specs anywhere.

The word from Abit PR organism Eric Boeing about this was that "there are a few BIOS issues to be worked out" before the RAID controller will make it onto the official feature list.

Which is a fair statement. The KA7-100's RAID works. And you don't even need an especially broad definition of the term "works".

But it's got... personality.

The BIOS version that the KA7-100 ships with - well, that my review one shipped with, anyway - includes RAID support, but unofficially leaked later-version BIOSes definitely don't, and it's possible that off-the-shelf KA7-100s will also be RAID-less. If so, just getting hold of a BIOS file saved from a RAID-equipped board and flashing it onto one of the emasculated ones will solve the problem.

Once Abit have beaten the characterful quirks out of the RAID functionality, there'll be an official RAID-equipped BIOS file available for download, and KA7-100s will no doubt be shipping with it, too.

Drive-a-mania

I wanted to see how much performance you could get out of bog standard IDE drives in a RAID 0+1 array, so 'twas time to purchase the penny-pincher's choice in reasonably capacious storage.

Drives, drives, drives...

Four Western Digital Caviar 153AA 15.3Gb Ultra ATA/66 drives, a steal at $AU295 each from m'verygoodfriends at Aus PC Market. These drives support ATA/66, but they only spin at 5400RPM. The 7200RPM version costs a mere $AU30 more.

7200RPM drives, all things being equal, have higher sustained transfer rates than their 5400RPM cousins, which are in turn faster than the older 3600RPM units. Faster rotation also means less rotational latency - the time while you're waiting for the right spot on the drive to spin around to the read/write heads.
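
Rotational latency is easy to figure: on average, the sector you want is half a revolution away from the heads when you ask for it. A quick sketch:

    # Average rotational latency: half a revolution, on average.
    for rpm in (3600, 5400, 7200, 10000):
        seconds_per_rev = 60 / rpm
        print(f"{rpm:>6} RPM: {seconds_per_rev / 2 * 1000:.2f} ms")
    # 3600 RPM: 8.33ms; 5400: 5.56ms; 7200: 4.17ms; 10000: 3.00ms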

But 5400s are cheaper, and run cooler, too - hard drives have air inside them, and friction between the platters and that air makes them quite toasty little things. The high-end 10,000RPM SCSI drives need special cooling if you want them to be reliable; 5400s can be jammed together in an underventilated case and will probably work fine for years and years.

Of course, the 153AAs aren't really 15.3 gigabyte drives. Hard drive specs are written by marketing people who sort of zoned out when the fact that computer "kilo", "mega" and "giga" prefixes relate to powers of two, not powers of ten, was explained to them.

A real gigabyte is two to the power of 30, or 1,073,741,824, bytes. A hard disk marketing gigabyte is ten to the power of nine, a nice round 1,000,000,000, bytes. So a "15.3 gigabyte" hard drive is actually only 14.25 gigabytes, and that's before you format it and lose a few per cent more of your space as you paint the logical lines on the metaphorical data car park.
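
If you want to check the salesman's arithmetic yourself:

    # Marketing gigabytes versus real (power-of-two) gigabytes.
    marketing_gb = 15.3
    total_bytes = marketing_gb * 10**9   # what the sticker means
    real_gb = total_bytes / 2**30        # what the computer means
    print(f"{real_gb:.2f}")              # 14.25, before formatting losses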

Four drives wasn't the end of it, mind you. The KA7-100 isn't capable of booting from its RAID controller. The option to make the "Future ATA" controller the first boot device is there in the BIOS, but it doesn't work yet.

Like I said - personality.

So you need another drive to be C, allowing the RAID array to be D. I opted for an old 2Gb Quantum. This is no big deal if you're running a Proper Operating System, which can be installed on any drive you like; it's even OK with dumb old Windows 95/98, which only needs to put a few megabytes of files on C if you tell it to install on D. Which is what I was going to do.

Of course, you're not going to fit five 3.5 inch hard drives and sundry other storage componentry in a minitower case. Time to get a Symbol of Masculinity.

AOpen HQ08 case

This is AOpen's HQ08, a full-tower case with lots of drive bays (five 5.25 inch, eight 3.5 inch counting the floppy drive bay), slide-out motherboard tray, solid construction, swish looking front panel. Yours for a mere $AU239 including 300 watt power supply.

Most people who buy big butch tower cases don't use anything like all the space available. The manufacturers clearly know this; the HQ08's standard power supply doesn't have nearly enough four pin Molex power plugs for the twelve drives you can install in all those bays. A 300 watt supply should be adequate for even a quite heavily stacked machine, though; modern hard drives draw quite a bit of current when they're spinning up, but in operation they only need about ten watts each.
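
A rough power budget, if you're curious. The running figure comes from the ballpark above; the spin-up figure is my assumption for drives of this vintage, not a quoted spec:

    # Very rough power budget for a case stuffed with twelve drives.
    drives = 12
    running_watts = 10       # per drive, spun up and working
    spinup_amps_12v = 2.0    # ASSUMED per-drive spin-up draw on the 12V rail

    print(drives * running_watts)           # 120 W while running
    print(drives * spinup_amps_12v * 12)    # 288 W if they all spin up at once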

So all you need to add tons of drives is a handful of Molex double-adaptor cables. I hacked the leads off an old, dead power supply and soldered up a connector-squid for all of the drives in the top half of the case, to reduce the plug-and-socket count a bit.

Wiring all of those drives up was a real cabling adventure. Four IDE connectors next to each other and hard up against the RAM slots (which are over towards one side of the board to leave room for raving maniacs like, uh, me, to put brick-sized CPU coolers on their processor) mean that you tend to find yourself asking a lot of molecules to share the same location at the same time.

Soon, my pretties, we will have Serial ATA, and it will be a good thing. It'll have thin, low-conductor-count, round cables that'll be easy to route in a case.

We darn well don't have it yet, though, which means that you've got to deal with a modern art masterpiece of 40 and 80 wire IDE cables (you need the 80 wire ones to use ATA/66 or ATA/100) all over the danged place. They're two inches wide, they're stiff, and the connectors have to be plugged in the right way; once you've got four of them (and, in my case, a SCSI cable for the danged CD writer as well, and a cable for the top-mounted floppy drive) all jammed together, it becomes... upsetting.

In the short term, you can make your cabling a bit less painful by splitting your cables. 40 wire IDE cables don't really have to be the shape they are, even for Ultra ATA/33; you can split them into few-conductor strips in between the connectors, gather the strips together with cable ties or electrical tape or what have you, and end up with a slightly shorter, still stiff but easier to deal with cable.

Sliced and diced IDE cable

Here's one I prepared earlier. This one's been split into ten strips of four wires each, and gathered with cable ties. You start the split with a sharp knife in between the conductors, then just tear the thin insulation down to the next connector. It's actually surprisingly difficult to cut a wire by mistake.

In case you do, though, do the deed on a spare IDE lead. Any computer store should be happy to give you a couple of spare IDE leads for nothing with any significant purchase.

The 80 wire ATA/66-and-higher leads are different. They'd be much more fiddly to split, for a start. And their extra conductors are all earths, and they need to be there for signal isolation purposes. Slice 'em up and bundle 'em together and you could get data corruption.

Built!
The Sunshine Home for Lost Hard Drives.

In order to keep this article slightly shorter than The Decline and Fall of the Roman Empire, I'll gloss over the rest of the computer-assembly process, other than to mention that AOpen clearly still retain the services of a circus strongman whose only job is to tighten every screw in their cases. On to the benchmarking.

Making numbers

I tested a single 153AA from both the plain ATA/66 controller and the Highpoint ATA/100 one, then a two-drive stripe-set, then a full four-drive RAID 0+1 array, which lets you see the following show-off screen during boot-up:

Boot-up status screen

Things are made simpler by the HPT370's restricted capabilities. It can only stripe two disks together, so, if you've got four drives connected, you can only stripe them as two sets of two. And it can't stripe disks that are connected to the same cable. Which is fair enough, because two disks on the one cable are both running from the master drive's controller, which can only issue instructions to one of them at a time.

This also means that RAID 0+1, where there have to be two drives on each IDE channel of the HPT370 and they all have to do stuff at once, can be expected to be a bit slower than a simple two-drive stripe-set. Which is how it turned out.

HPT370 BIOS setup

You don't get any documentation in the KA7-100 package that explains the extra HPT370 features. Fortunately, you don't really need any. There's practically nothing to tweak in the HPT370 setup utility; all you can do is change the access modes of the drives (which default to the fastest mode they can support) and make and break RAID sets. It's all menu driven, and quite easy to understand.

RAID aficionados who want to play around with higher RAID levels, tweak the stripe size or otherwise make like a Serious System Administrator will be disappointed. Two-drive sets, or a four-drive 0+1 set, are all you get with this IDE RAID controller.

Making a RAID 0+1 set with the KA7-100 is easy enough - you simply make a stripe set out of one drive from each IDE channel, mirror it to the other, identical pair, and you're done. Presto, the four drives will collapse into one drive letter as far as the PC's concerned, and you can partition and format again as normal.

I used WinBench® 99 Version 1.1 for the tests, and an AMD "Thunderbird" new-model Athlon CPU, with 256 kilobytes of Level 2 cache, running at 840MHz (which elevated speed is explained in my review of this most excellent processor, here). I only had 64Mb of RAM installed for the tests, and I formatted the drives with the FAT32 file system.

Here are my results, as a WinBench® data file. You can get WinBench® free from here.

The first thing I noticed was that whenever you change the drive configuration on the HPT370 controller, you've got to reinstall the Windows drivers for it. In Windows 98, anyway; I didn't try out the Windows NT and 2000 drivers that are also included with the KA7-100.

If you don't reinstall the driver, the HPT370 will still work, but it'll be painfully slow - the four-drive array, with the driver not reinstalled after switching from the two-drive setup, was between seven and 15 times slower than a single drive with a fresh driver install.

When you reinstall the HPT370 driver, a Windows machine has to restart. Nothing new there. But when the machine reboots, Windows asks you where the drivers are again. So you point it to the appropriate directory on the KA7 CD, or wherever else you've put the drivers. And it looks at it, and then it tells you that it's found a perfectly good driver in the Windows directory itself. Which is the previously-installed version of the driver, which isn't working properly.

I've seen this before (Windows does the same thing with graphics card drivers). If you pick the default option, you'll stick with the driver you had before, and you'll have the same problems. If you tell Windows to list the other drivers it's found, it'll grudgingly show you the one it found in the directory you pointed it to. Select that one, and all will be well.

This isn't a big deal, once you know about it, but it's compelling evidence that the HPT370 setup really isn't ready for prime time yet.

Here's the executive performance summary:

                                          Business Disk   High-End Disk
                                          WinMark™ 99     WinMark™ 99
  ATA/66 controller, single drive              3970            7760
  HPT370 controller, single drive              4120           13800
  HPT370, two-drive RAID 0 stripe set          5230           17800
  HPT370, four-drive RAID 0+1 array            5030           17200

Odd, huh?

For some reason, a single drive on the HPT370 benchmarks, in WinBench® at least, a lot better than the exact same drive hanging off the standard ATA/66 controller. These are only ATA/66 drives, running in the same mode from each controller, so I don't know where the difference is coming from.

RAID 0 is significantly faster again, as you'd expect, and RAID 0+1 takes a bit of the edge off the speed because of the bus-contention problems from the dual drives per channel. But the difference is negligible.

Two 153AAs, striped together, give you 28.65Gb of storage in the Windows FAT32 format. You get exactly the same storage from the four drive RAID 0+1 array, of course. And in comparison with high-end single SCSI drives, the IDE RAID option is rather attractive.

Four 153AAs cost less than $AU1200. You can get them for little more than $US400 if you shop around in the States.

A "36.7 gigabyte" Quantum Atlas 10K II, which when you convert to real megabytes and format it will give you about 34.5 gigabytes of storage, costs a hair under $US1000, street price; here in Australia you're likely to pay nearly $AU2000.

The Atlas 10K II is a 10,000RPM Ultra160 SCSI drive, but it doesn't use anything like all of the 160-megabyte-per-second bandwidth of Ultra160 SCSI, any more than ATA/100 drives can saturate their interface's bandwidth.

The Atlas 10K II can actually deliver about 40 megabytes per second in sustained transfer benchmarks, which is most impressive, but its real world performance is not that exciting, unless you're moving big contiguous files all of the time.

Quantum don't quote an MTBF figure for the 10K II line, but they've got a five year manufacturer's warranty, and you can reasonably expect a single 10K II to out-last any cheap IDE drive. Expecting it to out-last a four drive array, though, is a lot less reasonable.

In the performance department, a single Atlas 10K II running from an Adaptec AHA-2940U2W SCSI controller (yours for $US200 or so) edges out the HPT370-driven RAID setups by a little, according to StorageReview.com's review here. This isn't a straight comparison, though, because the StorageReview test machine is running a lowly 266MHz P-II CPU, which makes it four to five times slower than the rip-snorting Athlon test machine I used.

The Disk WinMark™ results aren't strongly CPU dependent, but it does make a difference; it's plausible to suppose that the really large speed difference between these test machines could translate to roughly 40% and 30% benchmark differences for the Business and High-End results, respectively.

So a single, painfully expensive enterprise-class SCSI drive can be expected to beat out a much cheaper array of IDE drives. Not by miles and miles, but by a noticeable amount. And, of course, a truly inspiringly expensive RAID array of 10,000RPM SCSI drives will stretch the margin even further.

But for the price of a single "36.7Gb" Atlas 10K II and its controller, you could get TEN Western Digital 153AAs. Or, with a bit of a discount for buying all the stuff in a pile, you could get the four 153AAs you need, plus your KA7-100, and a 700MHz Athlon to put on it.

For the money, IDE RAID is clearly a really good option, with excellent performance and reliability, and a surprisingly low price tag. It's thoroughly possible that the full release version of the KA7-100 BIOS and drivers will give enough of a performance boost to beat any single SCSI drive, too; the results I got seem a bit wiggy.

Using it

I've been using the KA7-100 machine with its RAID array as my regular computer for a few days now. Dodgy these drivers may be, but the array works fine. No files have gone west, no transfers have hung, no errors have appeared.

In normal desktop-computer use, there's no big performance difference. This is because, as explained above, most desktop computer tasks aren't disk intensive. And most of the desktop computer tasks that are disk intensive only flog the drive for a little while.

Start up Photoshop, for instance, and it loads all of its extra bits and scans its filter directories and so on, resulting in a big disk hit no matter how much RAM you have.

On my old 5400RPM Quantum Fireball drive, this meant about a 12 second startup time for Photoshop 5.5. The RAID machine chops this down to about 7 seconds.

Now, if you could see this sort of 70% speed improvement across the board, the RAID option would be obviously superior.

But you won't. Only disk-flogging applications can be expected to run faster in normal use, and the rest of your computing life will be very much the same. Rather more impressive disk drive noises, rather less chance of data loss from a drive failure.

In the course of using the KA7-100 as I go about my daily business, I've noticed only one other quirk - the disk subsystem has a strange but surprisingly non-destructive personality disorder.

For instance, for reasons I cannot fathom, it doesn't seem to like having Windows CAB files on any hard drive, no matter what controller it's connected to. Put your Windows 98 setup directory on a hard drive for quick access and, whenever you change something and Windows hits its source files, you'll get a CAB file error with a tremendously helpful message telling you to try cleaning the CD you're not using with a soft cloth.

You can extract the CAB files yourself, making a somewhat larger Windows directory and maybe banishing the problem, but I just went back to using the CD.

And when I tried to patch the stunningly excellent Half-Life mod Counter-Strike to the latest version, I got an error about a quarter of the way through the process saying I had a dud patch file, and was left with a munged Counter-Strike directory that needed a from-the-ground-up reinstall. Reinstalling Half-Life itself from CD was easy; reinstalling Counter-Strike wouldn't work. Didn't matter what drive I tried.

So I built the directory on a different machine and copied it to the KA7-100 box over the LAN.

Now I had a perfectly functional Counter-Strike installation, but it couldn't connect to any servers no matter what drive I put the files on. Whenever I tried to connect to a server running the same version of Counter-Strike as me (Beta 6.6 - you can't connect to servers running earlier versions), I got a nonsense-error saying that a sprite file had the wrong version number. It quoted a titanically ultra-gigantic number as the version number Counter-Strike thought the file had. Heck, that sprite doesn't even exist as a stand-alone file. Bizarre.

I fixed the problem by dumping the whole Half-Life directory onto my nice reliable Windows 2000 file server machine, mapping a network drive letter to the drive it was on and running it from there. Load times aren't much worse, and sidestepping the KA7-100 disk subsystem completely solves the problem perfectly. This is not an option, though, for people who don't have networked PCs throbbing merrily away all over the house.

Now, disk errors like this would appear to suggest a seriously broken disk subsystem, and you'd expect all sorts of other things to stop working, too. But they don't. Just errors with certain data-intensive disk operations, and it doesn't matter what drive they're on, and nothing seems to throw data away the rest of the time. The swap file works perfectly on the RAID array, ordinary file copy operations seem to be fine, no data files go west. I'm keeping running copies of important stuff on other network drives, on general principles, but as broken disk subsystems go, this one's very civilised.

Presumably, all this nonsense will go away when a newer BIOS, or driver set, or both, hits the streets. In the meantime, this is a useable machine. It's certainly very fast. But it's a tad nutty.

Overall

IDE is now fast and compatible enough that no matter how much SCSI fans grumble about upstart interfaces that grew by trial and error, they can't deny that it does all that most people need. And that includes people who want basic RAID.

If you're a normal desktop computer user, there's no reason to bother with fancy storage systems - ultra-fast drives, RAID arrays, whatever. None of it'll make any real difference to your system performance, and your life will not be over if your drive drops dead and you have to go back to last week's backup.

There's all that shuffling and whistling again. I do not know what it means.

There are plenty of data-loss situations that a RAID array won't help you with at all; the data safety gain over a single drive isn't amazing.

But IDE RAID is substantially faster, and it's highly affordable, and it's easy to set up. Not with the KA7-100, yet; there's no guarantee at the moment that an off-the-shelf KA7-100 will even have the RAID-capable BIOS, and until the glitches have been got rid of I wouldn't bet my life on it.

But there are other options for IDE RAID; you can add a stand-alone controller to any old motherboard and go from there. If you need a big, fast storage system on the cheap, IDE RAID is what you want.


