Atomic I/O letters column #113
Originally published 2010, in Atomic: Maximum Power Computing
Reprinted here February 21, 2011. Last modified 16-Jan-2015.
I built my system about a year ago now and it is nice. 8Gb of 800MHz DDR2 RAM, a 3GHz Core 2 Q9650, a 1Tb Seagate and a Sapphire 4870X2, all plugged into an MSI P45 motherboard.
My question revolves around the virtual memory used by Windows (I use Win7, 64-bit). Why is VM used when I have a stack of RAM?
I know I can turn it off, or lock down a smaller amount of VM, but for whatever reason Windows seems intent on using it. What options are there for speeding up the VM function? SSDs do not hold up to read/write very well, which is what VM does. The extra RAM seems to be ignored.
It seems to me that drive manufacturers may be missing a niche market here - SATA VM drives designed with high read/write and low latency. Is there an elegant solution?
You can do this if you insist...
...but then you're not allowed to complain about this.
First up, putting a swap file on an SSD is not actually a bad idea.
You can indeed probably get away with turning off the swap file. But it's not really a great idea, and may leave you with a slower computer, for some tasks.
There are actually two concepts here - virtual memory, and the paging, or swap, file (or files).
Virtual memory is the translation layer between programs and the real, physical memory. Every program sees a nice flat hole-free block of unshared RAM, and uses that space however it wishes. The OS then maps this idealised, "virtual" memory onto the actual physical memory in the computer - the RAM, and normally also the swap file(s).
You can't disable virtual memory in any remotely modern computer. Doing so would throw you back to the Before Time when a Memory Management Unit (MMU) was something advertised in big text on the product brochure, and wasn't necessarily even part of the CPU. If you wanted an MMU in your Macintosh II, for instance, it came as a Motorola 68851 plugged into the mainboard separately.
Ordinary home computers had no MMU, so they could either do memory-management achingly slowly in software, or give all programs direct access to the physical memory. Those programs had to play nicely together, because nothing was stopping them from writing over another program's data, or tying up all of the computer's RAM and leaving it unable to do basic OS tasks.
(MMU-less computers are still very common, but they're now usually embedded microcontroller systems in cars, MP3 players, basic mobile phones and so on.)
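The translation the MMU does can be sketched in a few lines. This is a toy model only: real MMUs work in hardware, with multi-level page tables and TLB caches, and the page numbers here are made up for illustration.

```python
# Toy sketch of what an MMU does: translate the "virtual" address a
# program sees into a physical one, via a page table.

PAGE_SIZE = 4096  # 4Kb pages, as on x86

# Hypothetical page table: virtual page number -> physical page number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # In a real OS this triggers a page fault, and the kernel either
        # pages the data in from swap or kills the program.
        raise MemoryError("page fault: page %d not mapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> 3*4096 + 4 = 12292
```

Every program gets its own page table, which is how each one can see that flat, private, hole-free block of memory.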
So what are you actually doing when you disable the swap file?
Well, with no swap file, you definitely won't ever be waiting for data to be paged into or out of swap. But the downside of this is that data never can be paged in or out. So you need enough physical RAM to handle everything that every program asks for, all at once.
Which is, demonstrably, doable. Look at ye olde Win98 box with 512Mb of RAM; that was more than most users ever needed, and such a box could run swapless perfectly well.
Today, many common applications have much bigger RAM budgets, and users run more programs at once. But it's now no big deal to have more than 4Gb of RAM in a computer, so you can still, often, get away with disabling swap.
Suppose you do this, on your 8Gb x64 Win7 box. Now, if something decides to allocate half a gigabyte of RAM, the virtual-memory system will, at least, avoid immediately dedicating half a gigabyte of your precious physical RAM to that program. It'll only actually hand over as much physical RAM as the program proceeds to use. (It's this that makes swapless computing practical at all. Programs often over-allocate massively, specifically because they know that virtual memory makes it safe to leave vast amounts of room for dataset growth.)
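That reserve-now, commit-later behaviour can be modelled in a few lines. This is a made-up toy, not how Windows actually tracks pages, but it shows why a half-gigabyte allocation doesn't instantly eat half a gigabyte of RAM:

```python
# Toy model of reserve-vs-commit: a program "allocates" a big block,
# but physical pages are only consumed when it actually touches them.

PAGE = 4096

class ToyVM:
    def __init__(self, physical_pages):
        self.free_physical = physical_pages
        self.committed = {}  # (alloc_id, page_index) -> True

    def reserve(self, alloc_id, nbytes):
        # Reserving address space costs no physical RAM at all.
        pass

    def touch(self, alloc_id, offset):
        # First write to a page is what actually claims physical RAM.
        key = (alloc_id, offset // PAGE)
        if key not in self.committed:
            if self.free_physical == 0:
                raise MemoryError("out of RAM (and no swap to fall back on)")
            self.free_physical -= 1
            self.committed[key] = True

vm = ToyVM(physical_pages=4)
vm.reserve("prog", 512 * 1024 * 1024)  # "allocate" half a gigabyte...
vm.touch("prog", 0)                    # ...but only ever touch one page
print(vm.free_physical)                # 3 pages still free
```

With no swap, the MemoryError branch is the showstopper described below: once physical pages run out, there's nowhere left to put anything.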
You can fit a lot of software, even today, in 8Gb of RAM. Fire up Task Manager (or the fancy-pants Resource Monitor) and check out the un-shareable "Private Working Set" for each program, plus the various overlapping shareable working sets, which hold shared system libraries, memory-mapped files and such.
You'll probably find that your everyday Web browser, music player, mail client, blah blah blah, all together come in under a gigabyte even when you've got lots of browser windows open. Various Windows system tasks including heavyweights like "explorer" and "svchost" may add up to another half-gig. Add at least another half-gig for a modern 3D game running in high res; what the hey, let's call that a whole gigabyte, and run a smattering of other system tasks and small utilities that add up to another whole gigabyte between them. You're still well under 4Gb!
There are ways for single programs to consume vast amounts of memory - video editing or complex Photoshop PSDs, say - but I don't think I'd ever run out of memory on a swapless 8Gb machine, and probably not on the 6Gb one I'm using now.
But I'm still not going to turn swap off. (Well, not on drives other than the boot drive, anyway. If you've got more than one hard drive, not having any swap on your system drive can reduce disk-thrashing a bit.)
I'm leaving swap on because virtual memory uses the swap file as a back-lot warehouse for data that some program put in memory, once, hours or days ago, and hasn't looked at since. Paging that data out to disk gives you more free physical RAM. And modern OSes use that free physical RAM to make the programs you're actually using now faster, mainly via disk caching.
(In Vista and Win7, the actual free RAM figure is usually quite low, because the OS always uses as much RAM as it can for caching, rather than just let it sit idle. When a program needs the RAM, the cache is instantly auto-shrunk. Windows 7 no longer adds the RAM used as cache to the "Physical Memory Usage" graph in Task Manager, by the way, because people complained that their Vista computer was apparently out of RAM all the time.)
There's all sorts of predictive just-in-time cleverness involved in pagefile management these days. People had been thinking pretty hard about page replacement algorithms for about 25 years before personal computers had VM at all, and some tricks that worked on minicomputers in 1980 still work on desktop computers today.
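One of the oldest of those minicomputer-era tricks is least-recently-used (LRU) eviction: when a page has to be brought in and every physical frame is full, throw out the page that hasn't been touched for the longest time. Here's a minimal sketch of it; real kernels use cheaper approximations of LRU, but the idea is the same:

```python
# Count page faults for a reference string under LRU replacement.
from collections import OrderedDict

def lru_faults(references, num_frames):
    frames = OrderedDict()  # page -> None, kept in recency order
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # touched: now most recently used
        else:
            faults += 1                     # page fault: bring it in
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults

print(lru_faults([1, 2, 3, 1, 4, 1], num_frames=3))  # 4 faults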
You also really, really don't want to blow your memory budget, even if you've got enough RAM that it's quite difficult to do that. A hard out-of-memory error (which can also happen if you cap your swap-file size and manage to blow that budget...) is a showstopper. The computer can't just murder some random task to claw back enough memory to tell you that it needs to claw back a lot more memory. Instead, it'll just hang, and you'll probably see one of those malformed errors you get when there aren't even enough system resources left to put text in the error box.
With 8Gb, though, there's an excellent chance you'll blow the budget seldom or never. So feel free to give it a shot.
Don't just fall victim to placebo effect, though. Bust out the stopwatch and see whether tasks that matter to you are really faster with swap turned off.
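The stopwatch test doesn't have to involve an actual stopwatch. A crude timing harness like this, with the placeholder workload swapped for something you actually care about (loading a big save file, exporting a video, whatever), is enough to tell real gains from imaginary ones:

```python
# Time a task a few times and keep the best run; run this before and
# after changing your swap settings and compare. The workload below is
# just a placeholder.
import time

def my_task():
    # Substitute your real task here.
    sum(i * i for i in range(200_000))

runs = []
for _ in range(5):
    start = time.perf_counter()
    my_task()
    runs.append(time.perf_counter() - start)

print("best of 5: %.4f seconds" % min(runs))
```

Taking the best of several runs helps filter out background-task noise, which on a desktop OS is considerable.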
I recently found out that a stuck Molex plug really can rip the whole power connector off a DVD drive. So I'm upgrading some PSU plugs to "Easy-Grip Internal Power Connectors", with a bit that you squeeze to push the plug out of the socket.
The new plugs don't come with pins, but they include a tool to remove the pins from the old plugs, so you can slide them into the new ones. I've noticed that all of the pins are just crimped onto the end of the wire, not soldered in place.
Is this just a money-saving measure? Does anybody make a PSU that has soldered connectors as standard? When I buy my next PSU, I'd prefer that.
Yes, PSU-plug pins are always crimped on. So are other, similar connectors, like the ones in automotive wiring looms and on radio-controlled model battery packs.
This is partly because crimping is indeed usually easier and cheaper to do than soldering, but it's mainly because crimping is electrically superior to soldering.
A good crimped connection won't have resistance any higher than a soldered one - actually, it may be marginally lower - and it shouldn't be any more prone to mechanical failure, either. Solder joints often wick some solder into the wire on either side of the joint, stiffening it and encouraging stress fractures just beyond the solder. Crimped joints leave the wire as flexible as possible.
That said, it's usually fine to solder instead of crimping. For really immense current flows, like in high-powered R/C models, crimping can give you a little more power, but this doesn't matter for even a seriously stacked PC. "Amateur" crimp joints that've been made in a vise or with ordinary pliers are also likely to be of much lower quality than joints made with a proper crimping tool. You're likely to get a better result than this from soldering, unless your soldering procedure involves a Bic lighter.
You'll occasionally also see connections that are crimped and soldered. The usual reason for this is corrosion protection. The solder makes no electrical contribution, but it keeps air and water from getting at the crimped connection.
If you don't have the right crimping tool, crimping with pliers and then soldering the flimsy crimp will also give you a solid connection.
You may have answered this question previously, or you may know of resources/link on the net that answer this, so feel free to be lazy and send me links. But please, Obi-Wan, you're my only hope (of getting this question answered).
I'm an IT guy and I want to build a learn/test/play environment for home. I plan to build a custom chassis and populate it with 3 bare machines (motherboard/CPU/RAM/NICs) + Gb switch + KVM + boatload of HDDs. One machine will operate the drives and provide iSCSI to the other two machines, which will be running clustered Hyper-V with a boatload of VMs. All of the machines will be relatively low power consumption - < 120W max, not kW multi-SLI monsters.
My question involves powering this setup. I'd like to put two beefy (650-850W) PSUs in a redundant (hot/warm) configuration, and I'd like to do it for a minimum of cost/waste. I know I can buy server cases that do this, but the first thing I'd do with this $$$ case is gut it to get the PSUs and electronics out. I'd prefer to buy a custom PCB with 2 inputs/3 outputs, with all the circuitry for cutover already in place. Does such a thing exist?
If I wanted to build it, will I need an EE degree? I'm reasonably familiar with "Eagle PCB" software and basic robotics-controller custom electronics, but I've never attempted something like this. I imagine there are custom ASICs and such that I'd have to integrate to do the cutover, and all sorts of AC filtering mojo that's way beyond my skill set.
If the answers to the above are all negative, could I hack together a 2 input/2 output (hot/hot) solution by using vampire clips to tie together all the wires of two PSUs? Or is this likely to result in lots of magic blue smoke?
I haven't a clue whether anybody makes an off-the-shelf PSU redundant-iser.
Fortunately, I don't think I need a clue in this situation, because you can buy redundant PSUs off the shelf. They're usually priced for the enterprise-server market, and you won't necessarily be able to mount one neatly in a standard PC case, but both of these problems are solvable. This is the sort of thing you're looking for.
If it's got standard output connectors, it should work. And for your relatively undemanding application, you should be able to just use plug adapters if you need to turn 20-pin ATX into 24-pin ATX, or add more EPS12V plugs, or whatever.
In the olden days server PSUs were a lot more likely to have outlandish proprietary connectors (so did some non-server PSUs, for that matter - beware old Dell power supplies...), but that problem's much less serious now. And spare parts for older server models have always had an odd price distribution - super-expensive, if you must have that part to keep your mission-critical hardware running, or super-cheap, if it's a part for which no such suckers can be found any more.
Running three computers in parallel off the one PSU could be a problem in itself, but probably won't be.
If you've got a lot of stuff that all powers up at once then the aggregate draw on power-on could be a problem; you may need more PSU wattage than you expect, especially if you can't stagger HDD spin-up. (SCSI drives should all be able to do staggered spin-up, I think with a simple jumper setting. ATA drives, not so much.)
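The back-of-envelope arithmetic makes the point. The per-drive figures below are rough assumptions (a 3.5-inch drive can briefly pull two to three times its running power while the platters spin up), but the shape of the result holds:

```python
# Rough spin-up power budget: simultaneous versus staggered.
# All per-device wattages here are assumed, illustrative figures.

RUNNING_W = 8.0       # typical 3.5" drive, reading/writing (assumed)
SPINUP_W = 25.0       # brief surge at power-on (assumed)
num_drives = 8
other_load_w = 120.0  # the rest of the storage machine, say

# Everything spins up at once: every drive surges together.
all_at_once = other_load_w + num_drives * SPINUP_W

# Staggered: only one drive surges at a time, the rest just run.
staggered = other_load_w + SPINUP_W + (num_drives - 1) * RUNNING_W

print("simultaneous spin-up: %.0f watts" % all_at_once)  # 320 watts
print("staggered spin-up:    %.0f watts" % staggered)    # 201 watts
```

So a drive-stuffed box can need a PSU sized for a power-on peak it'll only ever see for a couple of seconds.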
Do NOT just connect power supplies in parallel and hope for the best. It's possible to make this work, but it's not as simple as it looks.