Atomic I/O letters column #47
Originally published in Atomic: Maximum Power Computing
Reprinted here July 2005.
Last modified 03-Dec-2011.
I've got a nice Microsoft Wireless IntelliMouse Explorer V2.0, but occasionally it just decides not to track on my Steelpad 3S. Sometimes the buttons don't even work. But when I use it on the desk or on a cloth pad it works fine. Could the pad be screwing up the radio frequencies?
I'm thinking of getting a new mouse instead. Logitech's cordless models seem nice. But what is the "MX Optical Engine"? Are there little pixies that run around inside doing what all pixies do and increasing gaming performance? Or is it some marketing thing?
Yep. Metal mousemats, like your aluminium "Steel"pad, play hob with low power RF from cordless mouses (oh, OK, mice), especially if the mat's pseudo-earthed by your wrist or the desk. Moving the receiver around may help, and some mice are better than others, but nuking the site from orbit, I mean using a regular non-conductive mat, is the only way to be sure.
Soft Trading, the makers of the Steelpads, don't recommend you use their metal mats with cordless rodents.
The MX Optical Engine is also, but less impressively, known as the Agilent A2020 and S2020 sensor chips, which Agilent only sold to Logitech. Those two very similar chips were the highest spec optical mouse sensors on the market before the Razer Diamondback came along; now the Logitech MX518 uses that same 1600dpi sensor. But the STMicro sensors Microsoft are still using aren't rubbish by comparison.
Agilent, Pixart and STMicro are the only names in high-spec sensors today, and there's not a vast amount of difference between their mainstream products, which explains why Agilent and Pixart are suing each other.
The sensor's not the end of the story, though. The MouseMan Dual Optical's two cameras were basically just a gimmick, but the MX 1000's laser illuminator gives much better tracking on lousy surfaces (like, say, a whiteboard, if that matters to you), and different lens designs can do a lot, too. A better quality lens gives the sensor a sharper image; a higher magnification lens gives more resolution, at the price of more susceptibility to skipping.
Sometimes mouse makers make, um, adventurous claims about their products' stats; of late, though, they've been less prone to proudly announcing frame rates or other specs that greatly exceed the hardware's limits.
There's a great page about all this here.
I've come into possession of two very large capacitors. One is 110,000µF 15V and the other is 72,000µF 18V. I've measured the voltage of my UPS float charger and it's around 13.3 volts.
What'd be a rough estimate of the capacity of these caps in parallel? How does their combined 182,000µF capacity compare to lead acid amp-hours?
Would my 600VA UPS last at least say, 5 minutes?
As I mention in this column, the energy stored in a capacitor, in joules, is equal to 0.5*C*V^2, where C is the capacitance in farads and V is the voltage in volts. Your 110,000 microfarad cap (0.11 farads), fully charged to its rated 15V (not just to 13.3V), would therefore store about 12.4 joules. The other one can store about 11.7 joules.
A joule is a watt-second - one watt for one second. One volt-amp doesn't equal one watt for reactive loads like PCs (something I've written about before), but even with a power factor of 1, a total of 24 joules with a following wind means you can power a 600VA load for... about 0.04 seconds.
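You can run the whole back-of-envelope calculation in a few lines. This is just a sketch of the arithmetic above, with the 600W figure assuming a power factor of 1, as mentioned:

```python
# Capacitor energy and "UPS runtime" estimate, using the figures from the
# question: a 110,000µF cap rated for 15V and a 72,000µF cap rated for 18V.

def cap_energy_joules(capacitance_farads, voltage):
    """Energy stored in a capacitor: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_farads * voltage ** 2

e1 = cap_energy_joules(0.110, 15)   # about 12.4 J
e2 = cap_energy_joules(0.072, 18)   # about 11.7 J
total = e1 + e2                     # about 24 J

load_watts = 600                    # assuming power factor of 1
runtime_seconds = total / load_watts

print(f"Total stored energy: {total:.1f} J")
print(f"Runtime at {load_watts}W: {runtime_seconds:.3f} s")
```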
Real capacitors like the ones you've got there, which use only classical electrostatic energy storage, can be charged and discharged very, very quickly. Charge one of those caps up and drop a screwdriver across its terminals and there'll be a spark, a bang, and a neat little two-point weld job. The down side is that capacitor energy density is pathetic, compared with any electrochemical battery.
The other problem with capacitors is that their terminal voltage directly reflects their state of charge. Unlike batteries, caps don't keep much the same voltage for most of their discharge cycle, so you can't drop them in as battery replacements.
Super-hyper-mega-monster capacitors overcome the capacity problem by using a hybrid electrostatic/electrochemical storage method (which makes them more susceptible to damage than real caps), but they don't fix the sliding voltage problem.
Can two computers be connected together in such a way to share all resources - processors, memory, etc - in a way economical to your average home user like me?
I'm asking this because I have read before how one could build a supercomputer by interconnecting several regular PCs and using all their processing power simultaneously. My objective is not to use this for any commercial reasons, just for the sheer learning experience of it all. I have done home networking before, and have a 10BaseT hub that I use on occasion to share files. I also have a homebuilt 1.7GHz Celeron, and an older 200MHz Sony that I would like to use, if possible. The problem is, I don't even know where to start, what I would need to buy, or how to set it up.
Is it economically possible? Yes. Wouldn't cost you anything but time. It won't be easy or useful, though.
You're talking about clustering, which as you say is the technology used to make some of the world's most powerful supercomputers. Clusters are a somewhat specialised kind of supercomputer; they're only useful for doing "parallelisable" tasks that can be split into tons of smaller processes that don't require much intercommunication between nodes, because the network "pipes" between the nodes in the clusters aren't nearly wide enough to shift the gigantic data streams that non-parallelisable supercomputer tasks require. But a lot of supercomputer tasks can be split up this way, which is why so many clusters have been built.
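The parallelisable/non-parallelisable distinction is easy to see even on one machine. Here's a minimal sketch in Python, using the standard multiprocessing module to split an independent-chunks workload across local CPU cores; it's the same shape of problem a cluster distributes across networked nodes:

```python
# A "parallelisable" task: each chunk can be computed independently, so it
# can be farmed out to multiple workers (here, local processes; in a
# cluster, networked nodes) with almost no inter-worker communication.
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for real work: sum of squares over one chunk of numbers.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    # Ten independent chunks of 1000 numbers each.
    chunks = [range(i, i + 1000) for i in range(0, 10000, 1000)]
    with Pool() as pool:
        partials = pool.map(crunch, chunks)  # each chunk needs no others
    print(sum(partials))
```

A task like running a big Photoshop filter, by contrast, needs constant access to the whole image, so splitting it across a network connection costs more than it saves.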
Clustering is also big business in the 3D animation world; most of today's fancy computer-generated movie effects are generated by serried ranks of PCs or Macs, each chewing away at one frame at a time (most render farms used to use heavy duty workstation hardware, but desktop boxes have a better price/performance ratio now).
Clustering is useless for normal home and small business computer applications, though, because those tasks are seldom particularly parallelisable. You can get definite benefit from a dual CPU computer, but those CPUs aren't throttled off from main memory and the other core resources by a network connection.
For this reason, nobody's ever bothered to make a "friendly" general purpose clustering system. 3D render farms are pretty easy to set up, but they can only do one thing. The Beowulf Project is general purpose and popular, but not among regular PC users, who'd have a hard time setting up a cluster and a harder one thinking of something to do with it if they ever got it working. There's certainly no way to just offload, say, half the task of running Photoshop filters to a second PC over a network cable; even if you used gigabit Ethernet, the transfer speeds between machines would kill the advantage.
But if you just want to do it for the experience of doing it, then you can go right ahead with the hardware you've got. The software to make a Beowulf cluster happen will cost you precisely nothing.
A few days ago I decided to do a cable sleeve mod on my Allied Apex 500W PSU (ATX-500W-P4 AL-B500E). Unfortunately, during the process I got distracted and messed up the wiring in one cable arm (set of four wires with two Molex connectors). I removed all four wires (red, black, black and yellow) from both Molex connectors and now I'm stuck without the proper combination for the middle two black wires (no problem in figuring out red and yellow wire positions on the Molex connector). I believe interchanging the two black wire positions is not a problem because they share a common ground in the PSU. I hope you can give some advice regarding this.
Yes, the two black wires are exactly the same. The only reason there's two of them is because that keeps resistance down; for similar reasons, there are multiple wires for each rail on the main ATX connector.
Generally speaking, all PSU wires of the same colour are interchangeable, length permitting. You're actually likely to find that everything in your PC still works fine if you cut one of the black wires going to every Molex plug - not that I'm recommending you do that.
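The resistance argument is easy to put numbers on. This sketch assumes typical figures - 18AWG wire (common in PSU looms) at roughly 0.021 ohms per metre, a 0.4 metre run and a 5 amp load - none of which come from the question's actual PSU:

```python
# Why two ground wires? Parallel conductors halve the resistance, and so
# halve the voltage drop under load. The wire gauge, length and current
# below are assumed typical figures, not specs of the Allied PSU.

OHMS_PER_METRE = 0.021   # roughly 18AWG copper
LENGTH_M = 0.4           # assumed cable run
CURRENT_A = 5.0          # a fairly heavy Molex load

def drop_volts(n_wires):
    resistance = (OHMS_PER_METRE * LENGTH_M) / n_wires  # wires in parallel
    return CURRENT_A * resistance                       # V = I * R

print(f"One ground wire:  {drop_volts(1) * 1000:.0f} mV drop")
print(f"Two ground wires: {drop_volts(2) * 1000:.0f} mV drop")
```

Tens of millivolts either way, which is why cutting one ground wire per plug would probably go unnoticed - and also why the main ATX connector, carrying much more current, doubles and triples up its rails.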
Because all of the grounds in a PC are tied to each other and to the chassis, you can cut some corners if you're using a multimeter to check rail voltages, or whatever. Unless you're actually worried that the ground wires and/or contacts for a given component aren't working right, you can just clip the negative lead of the multimeter to the chassis (I like to clip onto a grille on the PSU) and leave it there, rather than poke the negative probe into a Molex connector or something.
Bear in mind that fancy lacquer-finish PSUs may not make very good electrical contact with the case - the screws that hold them in place are often sufficient, though. It is, of course, easy to see whether this is the case, by probing around with a multimeter in resistance mode.