Atomic I/O letters column #76
Originally published in Atomic: Maximum Power Computing.
Last modified 16-Jan-2015.
I am thinking of installing a new Seagate 500Gb SATA HDD in my PC, but I'm not sure whether my power supply can run another drive.
My PC has one HD and two optical drives at the moment (no floppy), a 430W Thermaltake PSU, an Athlon X2 4200+, and a Gigabyte Radeon X1900XTX 512Mb video card.
Will I be able to add the abovementioned 500Gb drive without changing power supplies? Presently my machine is very stable.
Usually, drives of all kinds don't put enough load on your power supply to be worth worrying about. A computer that's flaky when you add a drive was probably running out of juice already. If you're not adding a whole RAID array to your computer, you'll almost always be fine.
If it's working really hard, your current PC probably draws almost 300 watts, but the average power even when playing a game is likely to be more like 250W. Plenty of headroom there, even if your "430W" PSU is, like most PSUs, rather optimistically rated.
There's one way, however, in which adding a new drive can bite you. You'll see it if you go to the trouble of figuring out what the drive actually draws.
Most hard drives have basic current-draw numbers on their label - it'll say something like "5VDC: 0.95A 12VDC: 0.80A". That means 4.75 watts on the 5 volt rail and 9.6 watts on the 12 volt one, but those are only average-draw figures, not worst-case ones.
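The label arithmetic is just watts equals volts times amps, rail by rail. A minimal sketch, using the example label figures above:

```python
# Converting a drive label's current figures to watts per rail.
# These are the example label's numbers, not any particular drive's.
label_amps = {5.0: 0.95, 12.0: 0.80}  # rail voltage -> rated current in amps

watts = {volts: volts * amps for volts, amps in label_amps.items()}
total_w = sum(watts.values())  # average draw, remember, not worst-case

print(watts)    # 4.75 W on the 5 V rail, 9.6 W on the 12 V rail
print(total_w)  # about 14.35 W all up
```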
Download the datasheet from the manufacturer's site and you ought to get the full story, with a proper breakdown of idle, sleep, active and startup power.
For a 500Gb 7200RPM Seagate, you're talking about eight watts total when the drive's spinning but idle, maybe 12 watts when it's active. But it'll draw as much as 30 watts for the brief period when it spins up, at system startup or when it comes out of sleep mode.
That's the bit that can cause problems, especially if you've got a few drives in your computer. If four drives all spin up at once - as they will, when you turn on an ordinary PC - then for the first, highest-draw moment of the spin-up they can want well over a hundred watts between them, which along with everything else in the computer may trip your PSU's overcurrent protection, and prevent the computer from booting properly.
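To put rough numbers on that surge, here's a sketch using the per-drive figures above (30 watts worst-case at spin-up, eight watts spinning idle - your drives' datasheets will give you the real values):

```python
# Assumed per-drive figures, from the discussion above.
SPINUP_W = 30  # worst-case draw during spin-up
IDLE_W = 8     # draw once the platters are spinning

drives = 4
surge_w = drives * SPINUP_W   # all four drives spinning up at once
steady_w = drives * IDLE_W    # the same drives, a few seconds later

print(surge_w)   # 120 W - well over a hundred watts, as above
print(steady_w)  # 32 W once everything's spinning
```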
If you've got multiple drives and have startup problems, don't run them all from one cable coming out of the PSU. Many PSUs these days have two or more separate 12V supplies (I talk about this more in this piece), and their wiring splits the big 12V draws (motherboard, video card, drive connectors) between those outputs. Check the PSU manual to see whether you can choose a different string of drive power connectors that runs from a different 12V rail.
It's also possible to solve this problem by spreading the surge out. PATA drives in a master/slave configuration on the one cable may automatically stagger their spin-up, and some ATA drives have a jumper on the back to activate selective spin-up, too. I don't think current consumer Seagates do, though, and the jumper won't work at all if your ATA controller doesn't know about it. I think some ATA RAID controllers can do staggered spin-up all by themselves, but I wouldn't bet my life on that.
As I said, though, all of this is very unlikely to matter to anybody who's just using one or two drives.
I've never really understood Pi. It equals 3.14, which is the ratio of a circle's circumference to its diameter, and it's used heavily at school in maths classrooms, but there's something odd about it. It never ends. I first caught on to how long it was when mates would recite the first 50 or so digits at lunch time.
I've discovered an easy way to fault Pi. Imagine anything of any size, but for the sake of argument something small, like a cigarette. Now divide it into three equal parts and take a hard look at each part. Each part is of equal size and does not skew off into the eternal distance; you can see each part with your own eyes, pick it up, play with it!
I pick up 1/3 and I'm holding it – how can it be a never ending number? Wouldn't the part I'm holding physically reach the stars and beyond?
As far as I know, the only use for Pi is when you flip it backwards it reads 14.3 which represents the first three digits of crystal clocks you'll most likely find on motherboards.
Point 1: An irrational number is not an infinite number. Pi is equal to three-point-one-four-blah-blah-blah. You can extend that blah-blah as far as you like, but you'll never make it to three-point-one-five-anything. All the extra digits do is refine your definition of the number, not enlarge it. Measuring the length of a cigarette with a micrometer instead of a ruler doesn't make the cigarette any longer.
Point 2: In the physical world, no actual object has any irrational-number dimensions.
In Geometry Land you can take exactly a metre of perfect unstretchable one-dimensional wire, form it into a perfect circle, measure its diameter and end up with one-on-pi metres.
But in the real world the diameter would only be exact to a few decimal places, just as the original length of the wire would only be exact to a few decimal places, no matter how carefully you cut it.
(After this page went up, a reader pointed out that if you measure some quantity to real-world-meaningless arbitrary accuracy, you're actually certain to end up with an irrational number, essentially because there are by definition infinitely more irrationals than rationals in any given range of values. In the material world, though, ruler divisions smaller than the Planck length are very difficult to engrave.)
This is why we don't define lengths based on a piece of platinum-iridium alloy any more. Since 1983, a metre has been defined in terms of light - it's currently defined as the distance travelled by light in 1/299,792,458th of a second. This isn't especially applicable to most real-world measuring tasks, but that's OK, because real-world measuring tasks don't need this kind of precision.
Getting back to pi - as I mentioned above, it's an irrational number. That means it can't be described as any whole number divided by any other whole number. You can roughly approximate pi as 22 divided by 7, or you can more closely approximate it as 355 divided by 113, and you can keep on shooting closer and closer to the real value with more and more digits in your fraction, but you can never quite hit it.
And that, too, is fine, for reasons philosophically analogous to the reasons why no object in the real world ever quite hits any particular exact dimension. A metre-stick isn't exactly a metre long, and neither was the old platinum-iridium metre standard.
It is, of course, theoretically possible for a physical object at some particular time to embody pi as truly perfectly as can be measured. But it'll take surprisingly few decimal places before the length of your pi-metre-long platinum-iridium rod at a perfectly even temperature becomes unquantifiable, as you get down to the inescapably spongy and lumpy atoms that make up its ends.
(As a matter of fact, the ten significant digits of pi that're stamped into the brain of anybody who spent six years of high-school maths stabbing randomly at their scientific calculator pretty closely match how close any real world "pi-metre-long" object can possibly be to a genuine pi metres in length.)
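You can put a rough number on that claim. Assuming atoms are around a tenth of a nanometre across (a handy round figure, not a precise one), a pi-metre rod's length is only meaningful to about this many significant digits:

```python
import math

rod_m = math.pi   # the "pi-metre" rod, in metres
atom_m = 1e-10    # rough atomic diameter, metres (assumption)

# Significant digits before atomic fuzziness swamps the measurement:
sig_digits = math.floor(math.log10(rod_m / atom_m)) + 1
print(sig_digits)  # ten or so - about what a scientific calculator shows
```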
I bought an 8Gb Flash drive ages ago and it's the biggest I've had so far in the portable memory stakes. I finally got around to filling the sucker up, and I've been carrying it with me everywhere I go. At first, when it was empty, it felt pretty light, sort of weightless, but lately, carrying around this 8Gb chunk of data, it feels as if it's gotten heavier.
I don't think my mind's playing tricks on me, but I've never really looked into it too much either. To me, data is something that has weight to it and something that you could physically see, manipulate etc, under the right circumstances and if you knew what you were doing.
Is data really "virtual" like they say it is? Or is data more real than people think it is?
A piece of paper weighs a tiny bit more when you've written on it, since you've added ink or graphite. But computer storage systems store data by changing the state of things that're already there.
Think of an empty Flash drive as a long line of coins all placed with the head side up. Writing data to the device involves turning some of the coins tails up. That doesn't make the coins change in mass, any more than moving the graphite to another part of a piece of paper would change its mass.
I've been trying to think of a computer storage system that does get heavier when you write to it, but I can't come up with one, even going back to mercury delay lines. All I can think of are devices that get lighter when written. Write-once optical discs might get a tiny bit lighter when written to if the vaporised dye manages to seep out of the disc, and good old paper tape and punched cards get lighter when you punch holes out of them.
This isn't because of some mystic "weight of data", of course. You're just removing some of the physical substance of the device to encode information in the pattern of voids.
(A couple of readers have suggested Scantron forms and electrographic mark sense cards as examples of storage that gets heavier when filled, on account of how you write on it. Neither is really a storage technology, though; people write to them, not computers. So they're more like input systems, if you ask me.)
Incidentally, batteries and capacitors don't get heavier when you charge them, either. Charging a battery or cap isn't like filling a bucket; the electrons you pump into a battery to charge it come straight back out again (or, at least, an equal number of other electrons very much like them do...), having done work in the meantime by changing the chemical makeup of the battery's electrodes. The battery may lose a little weight as gas, but this is incidental to its operation, and battery makers try to stop it from happening.
When you charge a capacitor, you're pushing electrons off one of its internal plates and onto the other. The total electron count in the capacitor doesn't change either, and so neither does the cap's mass.
Flash RAM, at base, depends on a capacitive process (each Flash "cell" is a transistor/capacitor hybrid), so it doesn't change in mass when "filled", either.
After this page went up (for some reason, it's attracting an unusually heavy rain of nitpicks...) a reader pointed out that capacitors actually do get very, very slightly heavier when they're charged; E=mc² requires it.
This increase is virtually impossible to measure by any means, much less feel in your shirt pocket. To get a mass increase of only one milligram you'd need 89,875,517,874 joules of energy. That's about 21.5 tons of TNT, or the maximum energy content of some 650 million unusually-large super capacitors.
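The arithmetic behind those figures is just E = mc², run backwards from a one-milligram mass increase (taking a ton of TNT as the conventional 4.184 billion joules):

```python
# Energy equivalent of a one-milligram mass increase, via E = m * c^2.
C = 299_792_458          # speed of light, metres per second
m_kg = 1e-6              # one milligram, in kilograms

energy_j = m_kg * C**2           # about 89,875,517,874 joules
tnt_tons = energy_j / 4.184e9    # conventional joules per ton of TNT

print(energy_j)   # ~8.99e10 J
print(tnt_tons)   # ~21.5 tons of TNT
```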
I've been looking up information on liquid cooling, and was wondering with their high thermal conductivity and specific heat capacity, how effective would diamonds, either low grade, leftovers from cutting, or lab created, be as liquid blocks and heat sinks?
Oh, they'd be very effective, as long as you didn't have too much trouble forming them into the right shapes.
Unfortunately, diamond pieces big enough to be useful for this purpose are also colossally expensive.
This is changing. The great thermal conductivity of diamond - about 2.6 times that of copper - makes it attractive as a heat path inside things like CPU packages. And synthesis techniques to make diamond in appropriate shapes are a growth industry. We're a long way away from being able to make sizeable components out of contiguous diamond, though.
The bulk industrial diamond that you get on hardware-store saw-blade teeth is either made in a factory using low-cost methods that produce tiny stones (which might perhaps be slightly useful as loading material for thermal grease, at the cost of making the grease abrasive...), or it's dug up out of the ground. Those latter stones are the "rejects" of the gem industry - stones which aren't useful for jewellery either because they're not clear and pretty, or because they're too small, or both. Diamond mines produce a lot more industrial diamond than gem diamond.
If there were no demand for large pieces of sub-gem-quality diamond then I suppose it'd be available at reasonable prices for thermal applications - nothing the size of a heat-sink, but fingernail-sized slices to incorporate in chip packages might be possible.
But (comparatively) large industrial diamond pieces are actually in demand - they're useful for things like super-hard bearings, and diamond anvils for ultra-high-pressure lab applications. So they remain really expensive.
A one-gram brown and ugly diamond will cost you considerably less than a one-gram gem-quality stone. But one gram is five carats, so it still won't be cheap.
A piece of industrial diamond with the footprint of a normal CPU heat sink - say, five centimetres square and one centimetre thick - would weigh about 87.5 grams, around four times the mass of the Koh-i-Noor diamond.
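For what it's worth, the numbers check out if you take diamond's density as about 3.5 grams per cubic centimetre and a 25-cubic-centimetre piece, with the Koh-i-Noor at its usually quoted 105.6 carats (a carat being 0.2 grams):

```python
# Rough mass check for a heat-sink-footprint slab of diamond.
DENSITY_G_CM3 = 3.5        # approximate density of diamond
volume_cm3 = 5 * 5 * 1     # five centimetres square, one centimetre thick

mass_g = DENSITY_G_CM3 * volume_cm3   # 87.5 grams
koh_i_noor_g = 105.6 * 0.2            # 105.6 carats at 0.2 g per carat

print(mass_g)                 # 87.5
print(mass_g / koh_i_noor_g)  # around four times the Koh-i-Noor
```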
It'd be worth a great deal less than that stone, but the mathematics becomes a bit complex when you try to divide "priceless" by any number.