Dan's Data letters #97
Publication date: 26 March 2004.
Last modified 03-Dec-2011.
I've been reading things such as the following:
"With the Athlon 64, the memory controller is now on the processor's die, which means memory traffic no longer has to travel out of the CPU to chipset and back. Being that the memory controller is now integrated into the CPU, it will run at the same speed as the host processor."
Last time I checked, I couldn't find memory (much less ECC) that runs at 2.2GHz, the clock speed of the Athlon FX-51. The article above, for instance, goes on to state:
"To test the Athlon 64 FX-51, we used Asus' nForce 3 150 Pro powered SK8N motherboard, with 1GB (2x512MB) of PC3200 registered DDR DRAM equipped with 5ns Infineon chips."
This article was a little more helpful.
I suspect that since everyone's using PC3200, they're simply running more memory in parallel.
What gives? How much memory bandwidth could I get? And does this help with memory latency at all? Judging from bulletin board traffic in researching the answer, I'm not the only one who's confused.
My statistical natural language processing apps (for example this and this) are heavily memory bound. For every token of input, they do half a dozen memory lookups (pretty randomly distributed) in a 10Mb statistical model. Assuming I take even minimal care with I/O, my apps are memory-to-CPU bandwidth bound.
Correct. The memory controller built into AMD's Hammer-core processors (the Athlon 64, Athlon FX and Opteron) is just a DDR400 controller; dual channel in the Opteron and Athlon FX, single channel in the Athlon 64.
The integrated memory controller does give these CPUs faster memory access than they'd otherwise have, and one-memory-controller-per-CPU is one of the things that makes Opterons better than Xeons in multi-CPU boxes. But this is not some magical full-core-speed memory interface. If the Hammer-core chips had that, they'd be the basis for some very serious workstations, shading into pseudo-supercomputer territory - but a couple of gigabytes of memory would cost as much as a couple of gigs of minicomputer RAM. Which is to say, as much as a bulletproof Mercedes.
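The theoretical peak numbers are easy to work out from the DDR400 spec (a back-of-the-envelope sketch; real-world throughput is always lower):

```python
# Theoretical peak bandwidth of a DDR400 (PC3200) memory channel.
# DDR400: 200MHz bus clock, double data rate, 64-bit (8-byte) wide.
transfers_per_sec = 200e6 * 2        # 400 million transfers/second
bytes_per_transfer = 8               # 64-bit channel
per_channel = transfers_per_sec * bytes_per_transfer  # 3.2e9 bytes/sec

print(per_channel / 1e9)       # 3.2 GB/s - Athlon 64 (single channel)
print(2 * per_channel / 1e9)   # 6.4 GB/s - Athlon FX/Opteron (dual channel)
```

Hence the "3200" in PC3200: 3.2 gigabytes per second, per channel, at absolute best.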
Now, you already have a fast PC. I'd be astonished if a new Athlon FX box gave you more than 1.5 times the speed of your current system, and I wouldn't be surprised if you only got 1.25 times. I would not consider even a 1.5X improvement worth the time and effort of a motherboard transplant or whole new system. I'd take a raincheck on the upgrade for at least six months.
If and when you switch to 64 bit AMD, though, then you'll probably have some reason to drop the considerable extra dollars on an Athlon FX-whatever. For most PC purposes, the substantial extra memory speed the FX's dual channel controller gives it over the Athlon 64 amounts to very little (most PC tasks don't lean heavily on RAM, which is why PCs have for so long gotten away with CPU core speeds much, much faster than their RAM clock), but you may find the expense justified.
You might also like to look into a dual processor Opteron system, if your software's multi-threaded or you run more than one app at a time. Quad Opterons would be even tastier, but a quad box will cost you more than twice as much as a dual, which already won't be cheap.
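On the latency question: with half a dozen pretty-much-random lookups per token, a workload like the one described above is dominated by memory latency, not bandwidth, so extra channels won't help as much as the spec sheets suggest. A rough ceiling, assuming something like 100 nanoseconds per random DRAM access (a plausible figure for hardware of this era, not a measured one):

```python
# Rough throughput ceiling for a latency-bound workload:
# each token needs several dependent, randomly-scattered memory reads.
lookups_per_token = 6
access_latency_s = 100e-9  # assumed ~100ns per random DRAM access

tokens_per_sec = 1 / (lookups_per_token * access_latency_s)
print(tokens_per_sec)  # roughly 1.7 million tokens/second, regardless
                       # of how much peak bandwidth the channels offer
```

Dual channel raises the bandwidth ceiling, but it does nothing for the time each individual random access takes.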
I was flicking through an audio magazine when I found an ad for some KEF speakers that claimed their "hyper-tweeter gives full bandwidth response up to 80kHz, to fully exploit new formats like DVD-A and SACD".
But humans can only hear frequencies up to 20kHz at best when young, and that goes down with age anyway.
Am I missing something really obvious here, or is this extra frequency response completely useless to humans?
Also, with noise reduction headphones, like the Bose QuietComfort and these Sennheisers, is it possible to get the noise canceling device on its own? The concept is simple and proven effective, but is there a product that you know of that you can simply bung inline and turn on like you would do with a headphone amplifier?
The idea behind ultra-tweeters is that various instruments produce very high harmonics which produce interference beats down in the audible range, the exact nature of which depends on the acoustics of the place where the recording's made.
So if you record up to 100kHz or whatever, and manage to reproduce that in your listening room, the result is meant to be more like the actual instruments playing in your listening room than you'd get from recording just the interference beats themselves in the original venue - all of which you wouldn't get to record anyway, since interference nodes will be distributed unevenly around the performance venue.
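The beat arithmetic itself is trivial: superimpose two tones and they beat at their difference frequency. Two entirely inaudible ultrasonic harmonics can therefore produce an audible beat (hypothetical frequencies, just to show the sums):

```python
# Two superimposed tones beat at the difference of their frequencies.
def beat_frequency(f1_hz, f2_hz):
    return abs(f1_hz - f2_hz)

# Two inaudible ultrasonic harmonics...
print(beat_frequency(40_000, 41_000))  # ...beat at an audible 1000Hz
```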
That's the theory, anyway.
In the real world, it's questionable whether most allegedly-hyper-wide-range recordings actually faithfully capture ultra-high harmonics, whether there are any useful harmonics to be captured in the first place, and whether the inescapable simultaneous capture of some of the original venue's particular interference beat flavour and other distinctive acoustics (unless the recording's made in an anechoic chamber) invalidates the whole theory.
As usual, there are golden-eared audiophiles firmly convinced that they can hear the difference between a DC-to-daylight recording and a mere 20-20000Hz one, hard-nosed empiricists who think the audiophiles are all smoking crack, and various people with views somewhere in between.
Personally, I wouldn't pay extra for tweeters that can frighten dogs.
I don't think a separate noise cancellation device would work. The problem here is that the noise cancelling system's microphone must be integrated into the headphones, to hear pretty much what your ears are hearing when they hear it. Noise cancelling becomes less and less effective the further away from the listening point you move this pickup; it may hear the right basic sound, but if it's 180 degrees out of phase at a particular frequency, then the system's going to make that noise louder. So a separate noise canceller with some kind of microphone that strapped onto your headphones might work, but not one with a remote mic.
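The phase problem is easy to put numbers on. Sound in air travels at roughly 343 metres per second, so at 1kHz one wavelength is about 34 centimetres; a mic a mere 17 centimetres from your ear delivers "anti-noise" that's actually in phase with the noise, doubling it. A toy calculation, assuming ideal sinusoidal noise:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, roughly

def residual_amplitude(freq_hz, mic_offset_m):
    """Peak amplitude of noise plus its inverted copy, delayed by the
    mic's extra path length. 0.0 = perfect cancellation; 2.0 = the
    "anti-noise" arrives in phase and doubles the noise instead."""
    phase = 2 * math.pi * freq_hz * mic_offset_m / SPEED_OF_SOUND
    # sin(wt) - sin(wt - phase) has peak amplitude 2*|sin(phase/2)|:
    return abs(2 * math.sin(phase / 2))

print(residual_amplitude(1000, 0.0))     # mic at the ear: perfect cancellation
print(residual_amplitude(1000, 0.1715))  # half a wavelength away: noise doubled
```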
I'm looking at all the power boards around my place to check that they all have surge protectors. By and large, they do. What I wanted to ask you is - are new surge protectors worth switching to (at least for my expensive computer gear)?
I'm thinking to get at least one new one with a phone line filter, just to protect my ADSL adapter and computer. But what about the old protectors currently on my TV, et cetera?
All the ones I have are about 10 or so years old (at a guess), and are fairly straightforward - plastic, four to eight sockets, and a red button labelled "Press to Reset". Do surge protectors age and become less reliable? Or do the new ones offer better protection anyway?
This is all inspired by a friend of mine losing a computer to flaky power recently. Looks like his motherboard or CPU is toast.
Is it a Silly Thing To Do to get a single port protector and run a four-socket board off of that?
Regular surge/spike protector powerboards, and lightweight single-socket units, are all pretty close to useless. Always have been, probably always will be, and yes, surge/spike filters do age; the protection components in the old ones you've got are probably all now totally useless. I've talked about this in the past.
The rather more expensive premium surge/spike filter powerboards you can buy these days under brands like APC are probably better, but still don't provide anything like the protection you'll get from a proper, expensive, heavy, line conditioner.
Cheap UPSes can provide decent power filtering, but, again, don't necessarily. Unless you've got a welding shop next door or live in a very lightning-prone area, though, a UPS on all of the low-current gear (regular PCs and monitors, ink jet printers...) and a line conditioner on high surge current stuff (laser printers, photocopiers) should be perfectly adequate.
I read with interest your review of the CMoy headphone amp - what a great little unit! Do you know whether there is a kit available for budding electronic enthusiasts to assemble one themselves? I'm not versed enough in componentry to source the parts myself from various outlets.
A few people have put together short form CMoy kits (you have to supply your own mint tin and 9V battery...), but I don't know of any proper commercial outfits that're doing it. All I've seen are people on headphone-geek forums and such, who put together a dozen kits for their friends.
The CMoy component count's small enough that it's not really difficult to collect the parts yourself. Most of them can be had from any electronics store (provided you don't need the good-enough-for-Men-In-Black zero-tolerance versions of the components), and the amp chips aren't that hard to find either, or expensive to ship from overseas. There are plenty of people on the abovementioned fora (at HeadWize, for instance) that'll help, if things like the parts list at Tangentsoft (part of their excellent CMoy tutorial) aren't enough.
Apart from the convenience of not having to scare up the bits individually, the only real advantage of a kit is that it will (or at least ought to) come with a proper little printed circuit board, avoiding the slight extra hassle of building the circuit on strip-board with wire links, or printing your own board. That's not terribly hard, these days, but it's overkill to invest in PCB-making gear if all you need is one tiny board.
I was wondering if using a program like RefreshForce could damage my monitor (Mitsubishi DiamondView 17 inch) by forcing the refresh rate above 75 Hertz? Ideally I'd like to have it at 85-100 Hertz. There is a disclaimer in the documentation for this program saying it "may" damage the monitor, so I figure it is a matter of probability rather than certainty. Please enlighten me.
Old CRTs could, in theory, be harmed if you ran them at an unsupported refresh rate, but even then it was likely to take a while for damage to happen (you're overheating the flyback transformer, basically), and in the real world pretty much every screen would just display garbage for an arbitrary period of time, unless you were feeding it something really weird.
Modern CRTs just give you the black screen treatment when you send them an out-of-range signal. If yours does that, there should be no risk at all.
This is not directly tech goodies related but does affect them - what do you think about the subject of "peak oil"?
Proponents of the theory say we have now hit our peak ability to produce oil from our dwindling reserves, and that it's a very quick downhill slope from here. The loss of cheap oil, they say, will result in the loss of our ability to manufacture, transport, harvest crops and so on, and we don't have either the time or desire at this stage to prepare alternatives. The argument also states that without these cheap hydrocarbons underpinning our society, it will all fall in a heap. War, anarchy, starvation and the extinction of the majority of the world's population, they say, are the things we have to look forward to over the next 15-odd years.
It scared the hell out of me - what do you think?
I don't know enough about the subject to say anything very confidently, but I've been interested to see responses from some people who ought to know to Thomas Gold's "Deep Hot Biosphere" view. Gold says that most oil is not, actually, fossil fuel, and that there's a very large amount more of it down there, and plenty more being made all the time. If Gold is right, then there's no practical limit to our "fossil fuel" reserves at all.
Mind you, the consensus view on Gold still seems to be that he's a nutcase, but this whole field of study seems to be rife with clashing egos and dogma not necessarily supported by empirical evidence, so I don't know which way it's going to go. And even if we do have oil to spare, of course, that doesn't make it a good idea to keep burning it. With "renewable" energy sources not at all ready for prime time yet, and with nuclear power stigmatised into political impossibility in many countries (like Australia...), though, fossil fuels are still where it's at. Australia is going to keep burning our plentiful coal for a long time yet.
And not to sound too Cato Institute, or anything, but "war, anarchy and starvation" were all supposed to happen in 1980, too, because we were going to run out of oil then. Or overpopulate the planet so severely that we ran out of food instead.
I'm all for environmentalism that's based on real evidence, but "Wolf!" has been cried numerous times in this field.
I was giving this magnetic motor (reached via Gizmodo) the skeptical benefit of the doubt, until I got to the "more power out than in" claim. And the fact that the Japan Patent Office wasn't willing to grant a patent until the US PTO did (given some of the goofy patents awarded in the US, that's not a good sign).
However, it may be possible this guy genuinely has a more efficient motor, and the super-unity power claim is the result of measurement/calculation confusion (simple multiplication of peak values vs. the area under the curve). I could believe the reporter might make this mistake; the fact that the inventor goes along with it is not encouraging.
I only read the Gizmodo precis about that when it was mentioned there the other day, and assumed that when they said it used 20% of the power of a conventional motor they just had the wrong end of the stick, and should have said it was 20% more efficient than some existing not-too-efficient maintenance free long service motor design, or something. Since motors with better than 85% efficiency are common already, a motor that draws a fifth as much power to do the same work will, as you say, be one of those fabled "over-unity devices", a.k.a. perpetual motion machines.
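Put numbers on it (a back-of-the-envelope sketch):

```python
# If a conventional motor is ~85% efficient, a motor doing the same
# mechanical work on one fifth of the electrical input would need to be:
conventional_efficiency = 0.85
power_ratio = 0.20  # "uses 20% of the power of a conventional motor"

implied_efficiency = conventional_efficiency / power_ratio
print(implied_efficiency)  # 4.25 - i.e. "425% efficient", over-unity
```

Any claimed efficiency above 1.0 means more energy out than in, which is where the perpetual-motion alarm bells ought to start ringing.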
On reading the actual article, it seems clear to me (and others...) that this is just another fraudulent "magnetic motor", with the usual explanation that the mystic energy of permanent magnets is somehow making up the shortfall (some such motors are supposed to slowly use up their magnets, the lost mass being somehow converted to energy to keep the thing running).
If this guy actually has orders for his products, from people assuming they do what this article says they do, he will soon end up fleeing angry buyers. I suspect the orders haven't actually been placed, though (or are conditional on working products being delivered, with no payment having yet been made...), since these sorts of scammers are usually in it to fleece small investors, who're the only people who believe their claims. No company with an engineering department will buy this line of bull; it's been tried far too many times before.
A reader kindly found what looks to be the appropriate patent for me. The patent clearly states that it's for a way in which "rotational energy can be efficiently obtained from permanent magnets", which I would have thought would have triggered the USPTO's perpetual-motion-device radar, but apparently not. Maybe they're getting sloppy about more than software patents these days.
It should be noted that, generally speaking, patent offices do not require proof that a device works in order to grant a patent. They often make exceptions in the case of perpetual motion machines, but if you disguise your over-unity patent application as an ultra-high-efficiency motor or something (which Minato has pretty much done in his US patent application), your local patent office would probably be happy to grant you a patent.
As I've observed on previous occasions, (one involving another magic magnetic motor...) the patent office's job is to sell you legal protection for your idea, not to guarantee that the idea is worth protecting.
(Oh, and while I'm at it: Let the linkage circle be unbroken.)