Prolink PixelView XX-Player GeForce3 graphics card

Review date: 5 May 2001.
Last modified 03-Dec-2011.


It's new. It's shiny. It's covered with heat sinks, and to do the benchmarks I put it in my Number One Computer, the one I use for important stuff (like writing things, and playing games). Not one of the various other test boxes that infest the place.

Which, at this point in time, means that it must be a GeForce3.

Video cards based on this chipset, Nvidia's successor to the GeForce2, are now hitting the retail market here in Australia. They ain't cheap, but they also ain't as expensive as people thought they'd be.

So should you run out and buy one, or what?

Well, first, you'll need a fairly large wad of cash.

The damage

When the first preview GF3 cards started bouncing around, the price everybody was talking about was a stratospheric $US600 or so. Maybe $US700, even. At that price - which would have translated to an easy 1200 to 1400 Australian dollars - the new Nvidia-based boards would be comfortably the most expensive consumer PC graphics cards ever. Well, if you don't count the outrageous, but stillborn, 3dfx Voodoo 5 6000.

Part of the alarming price was accounted for by the fact that even these original model GeForce3 boards all have 64Mb of very fast DDR SDRAM built in, just like the previously top-of-the-line GeForce2 Ultra. We can expect to see even faster memory on future GeForce3s, if Nvidia follow the same sort of product roadmap they have with their earlier chipsets. But at the moment it's pretty much impossible to get a decent supply of faster video card memory than the chips that come on these cards as standard. And the stuff is expensive.

But, as I write this, the low price for GeForce2 Ultra cards in the States is only just over $US250. More than twice as much money for the new board would try the patience of the nuttiest early adopters.

Fortunately, the early price estimates turned out to be wrong, and the GeForce3s debuted at a lower price point. Recommended retail prices range up to $US530 or so, depending on the options (video in and out, software bundle, meaningless brand name premium), but the US cheapie-dealers already have GeForce3s from the less famous brands - Gainward, VisionTek and so on - for less than $US400.

Prolink PixelView XX-Player

And this GeForce3, the Prolink PixelView XX-Player, sold here in Australia by Best Buy Computers, should be retailing for a mere $AU849 by the time you read this (it's $879 at the time of writing, but Best Buy tell me the price will drop very shortly).

$AU849's equal to $US443 right now. That's not bloomin' bad at all for a GF3 in Australia at the moment.

Buying a less-than-totally-famous-brand video card with a well-known chipset is a pretty much totally safe move. You are unlikely to get any manufacturing defects. If there's nothing physically wrong with the card, it'll behave exactly like a big-name board with the same feature set (amount of RAM, extra ins and outs, and so on).

If your Recommendable Periwinkle Fabulance brand video card (one day I'm going to have to run out of made-up graphics card company names) comes with dodgy drivers, you can use the reference ones from Nvidia. If its software bundle stinks... um, well. Does anybody who's shopping for a cutting-edge video card ever care even slightly about any of the bundled apps and games? I'm just asking.

So, like I said, it's safe. You pay less, you get a perfectly good product. You just usually have to wait, because the smaller names generally don't have cards with a given chipset out as quickly as do the big boys.

Well, that always used to be the deal, anyway. A fair few smaller manufacturers seem to have GeForce3 boards out already, and Prolink are right there with 'em.

What you get

Heat sinks

The hefty heat sinks on the main GeForce3 chip and on the RAM chips do their job of keeping the hardware cool, but they also make this one of the spiffiest looking GeForce3s, if you ask me. All of these cards have RAM heat sinks - as do all of the GeForce2 Ultras - but I think I prefer Prolink's orange-gold anodising to the blue, green and silver treatments other manufacturers have used.

Thanks for watching. Interior Decorating for the PC Enthusiast will be back, right after these messages.

Oh, one other thing about the heat sinks. All that aluminium makes GeForce3s satisfyingly heavy. You feel as if you're holding a video card of substance.

Back in the world of actual productive features, the XX-Player has a TV out connector. It's an S-Video connector, but you also get a little adapter cable that converts it to a composite RCA jack, and a composite video lead to match. But no S-Video lead.

The driver CD that came with my review card was a CD-R (containing absolutely vanilla Nvidia reference drivers), but apart from that I got the normal retail package. It's your standard Taiwanese graphics card software bundle. Which is to say that you may never use any of it, but at least you can rest assured that it added diddly (a technical term) to the price of the card.

You get the full version of Cyberlink's excellent PowerDVD DVD playback software; great if you've got a DVD-ROM drive and no DVD playback software (which is possible, if a tad unlikely); useless if you don't.

There's also Ulead VideoStudio SE, a standard pack-in video editing package which works pretty well (considering what you pay), but is, once again, useless unless you have some sort of video capture device in your PC. The XX-Player only has a TV out, so it is not such a device.

Why it's better

Every new video chipset comes with a sheaf of impenetrable techno-marketing-speak which implies that now, at last, there exists on this lucky, lucky planet a piece of silicon for which you and all other right-thinking consumers not only should, but definitely and obviously will, fight and die. Or, at the very least, you'll put it in your pants.

When you prune out the features that other things have had but are now a bit faster, and the features that nothing yet uses and maybe nothing will, and the occasional features that have just been made up by the marketroids, you generally end up with a much shorter and less exciting list. Do that with the GeForce2, for instance, and the end result is "it's like the first GeForce, but it runs cooler and it's clocked faster."

The GeForce3, though, is different. Its biggest selling point is that its fancy new rendering capabilities actually seem likely to be used in games, instead of lying fallow.

Every chipset up until now that's had special rendering capabilities - like, for instance, the much-vaunted hardware transform and lighting (T&L) acceleration of the whole GeForce line - has had those features "hard coded". They're a fixed function of the chipset, and to use them, programmers hook into particular commands in the Application Programming Interfaces (APIs) - which, these days, means OpenGL or Direct3D.

If you run the resultant software on a system with hardware acceleration for that particular feature, it'll be fast. If the computer doesn't have one of the hardware accelerated cards, it won't.

Unfortunately, programmers frequently ignore these sorts of features - hardware T&L is a perfect example. Practically nothing uses full hardware T&L. Why? Because programmers can't count on all (or even most) of their market actually having the hardware acceleration. And because the standard commands limit what a graphics engine can do.

A painstakingly bummed piece of bespoke graphics code can be faster than the stock version on systems without hardware acceleration for whatever sort of thing that code does, and prettier than the stock version on any computer. And so even now, plenty of brand new 3D games that don't support, say, hardware T&L are still coming out.

The GeForce3 does everything the GeForce2 did, but it also has truly programmable hardware 3D acceleration. Coders can take advantage of new commands in DirectX 8, and some new OpenGL extensions, to write their own little programs for the GeForce3. They can thus create new and interesting effects, just as they could by cutting their own code before, but they can have 'em hardware accelerated when a system supports it. All sorts of spectacular effects, like the ones shown in the Nvidia "Whole Enchilada" demo movie here, ought to actually be seen in games fairly soon.

If you want to download that demo, by the way, you can currently get it and various others from the Blisware page here, but not from the Nvidia site yet.

Programmable effects are only one of the significant new GeForce3 features. The GeForce3 also has a more efficient memory controller architecture, with four separate 32 bit memory controllers instead of the single 128 bit one in the GeForce2. It can combine the controllers in arbitrary ways when it needs to deal with smaller amounts of data, allowing it to, for instance, do two 64 bit memory operations in half the time a GeForce2 would take at the same clock speed.
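
If a back-of-the-envelope version of that arithmetic helps, here's one as a little Python sketch. The model is mine, not Nvidia's - it just assumes each controller can move its full bus width once per clock, and that independent transfers can share a clock if enough controllers are free - but it shows why the crossbar arrangement wins on smallish transfers.

# Rough sketch of why four 32 bit controllers beat a single 128 bit one
# for small transfers. Assumption (mine, not an Nvidia spec): a controller
# moves at most its bus width per clock, and unused bus width is wasted.

def clocks_single_128(transfers_bits):
    # One 128 bit controller: one transfer per clock, however small.
    return len(transfers_bits)

def clocks_quad_32(transfers_bits):
    # Four 32 bit controllers: each transfer grabs only the controllers
    # it needs, and other transfers can use the leftovers in the same clock.
    clocks, free = 0, 0
    for bits in transfers_bits:
        need = -(-bits // 32)       # controllers needed, rounded up
        if need > free:             # not enough left in this clock?
            clocks += 1             # start a new clock...
            free = 4                # ...with all four controllers free
        free -= need
    return clocks

two_64_bit_ops = [64, 64]
print(clocks_single_128(two_64_bit_ops))   # 2 clocks, GeForce2 style
print(clocks_quad_32(two_64_bit_ops))      # 1 clock, GeForce3 crossbar style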

This makes a difference for things like antialiasing - smoothing out jaggies in slanted rendered lines. But what makes more of a difference is the way in which the GeForce3 actually generates antialiased graphics.

The GF3's Full Screen Anti-Aliasing (FSAA) is based on a "quincunx" sampling pattern. That vaguely rude-sounding word actually just means four things arranged in a rectangle with a fifth thing in the middle - the dots on the five side of a six-sided die are arranged in a quincunx.

When it's doing FSAA, the GeForce3 drops two copies of each frame into memory, and maps them on top of each other with one of them shifted half a pixel diagonally. It then averages each of the many quincunxes of pixels in the resultant matrix to create the final image. This greatly reduces the storage you need for a given FSAA quality, and it also frees the GF3 from meaninglessly rendering textures at a higher resolution. You don't want to antialias textures, just edges.
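
If you'd rather see that as code than as die-dots, here's a toy Python version of the filter. The equal weighting of the five samples is my simplification - Nvidia don't spell out the filter weights - but the sampling pattern is the one described above.

# Toy quincunx filter: two rendered copies of the frame, one offset half
# a pixel diagonally, averaged five samples at a time. Equal weights are
# my simplification; the real hardware's weights aren't documented here.
import numpy as np

def quincunx(frame_a, frame_b):
    # frame_a is the straight copy, frame_b the diagonally offset one.
    h, w = frame_a.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            centre = frame_a[y, x]
            # The four samples around the centre come from the offset
            # copy; clamp indices at the frame edges.
            corners = [
                frame_b[max(y - 1, 0), max(x - 1, 0)],
                frame_b[max(y - 1, 0), x],
                frame_b[y, max(x - 1, 0)],
                frame_b[y, x],
            ]
            out[y, x] = (centre + sum(corners)) / 5.0
    return out

# Only two samples per output pixel are ever rendered and stored, which
# is why the speed hit ends up being roughly that of 2X FSAA.
print(quincunx(np.random.rand(4, 4), np.random.rand(4, 4)))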

Almost all current video cards are solidly limited by their RAM speed, at higher resolutions. When they use simple supersampling for FSAA (just rendering internally at a higher resolution and then scaling down), they lose about half of their frame rate when you use 2X FSAA (rendering twice as many pixels internally), and lose half of that frame rate when they go to 4X.

The GeForce3 only suffers about half the penalty it ought to, by the old rules. Quincunx FSAA looks about as good as 4X, but only carries a 2X-ish speed loss.

The GeForce3 also has something Nvidia call "Visibility Subsystem", which is a Hidden Surface Removal (HSR) technique. It's used to get stuff that's going to end up behind other stuff out of the rendering pipeline as early as possible, so that things you won't be able to see because something else is in front of them don't chew up too much rendering time.
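
The general principle is easy enough to sketch, though. Here's a toy Python version of the early-rejection idea - just the concept, mind you, not Nvidia's actual implementation, which they don't document in any useful detail.

# Toy hidden surface removal: fragments arrive in draw order, each with
# a depth. "Shading" stands in for the expensive texture and blending
# work we'd like to skip for fragments that are already covered.
def render_pixel(fragments):
    # fragments: list of (depth, colour) in draw order; smaller depth is
    # nearer the eye. Returns the visible colour and how many fragments
    # actually had to be shaded.
    nearest = float("inf")
    visible = None
    shaded = 0
    for depth, colour in fragments:
        if depth >= nearest:
            continue              # hidden: rejected before any shading
        shaded += 1               # pretend this is the expensive part
        nearest = depth
        visible = colour
    return visible, shaded

# Three overlapping surfaces drawn front to back: only the nearest one
# costs any shading time, the other two get thrown away early.
print(render_pixel([(1.0, "wall"), (2.0, "crate"), (3.0, "sky")]))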

Visibility Subsystem isn't as effective as the equivalent portion of the tile-based system used by the very impressive, surprisingly cheap Kyro II chipset (check out my review of the Hercules 3D Prophet 4500 here to see how well the Kyro II stacks up against the Nvidia opposition), but it still helps keep the GeForce3's RAM load down.

Benchmarking

I did some light benchmarking of the GeForce3 versus a GeForce2 GTS, on my 1.1GHz Athlon machine. If you've got a bit of a graph fetish - you know who you are - allow me to recommend the Tom's Hardware review here and the AnandTech piece here, which will each give you enough graphs to choke a pig.

So they're especially good if you're troubled by a pig that needs choking with graphs.

If you just want to cut to the chase, though, here's the basics. At resolutions as low as 1024 by 768, running current games, the GeForce3 is genuinely faster than the GeForce2 Ultra. Only around 10% faster at most, mind you; not worth the extra money. Here in Australia you can still find bells-and-whistles GeForce2 Ultra cards on sale for substantially more than this GeForce3 costs, but that'll change fast enough as stocks of this and other GF3s build up locally.

In 1600 by 1200, running current games, the GeForce3 beats the GeForce2 Ultra by about 20 to 25%; a noticeable difference (well, noticeable if your frame rate doesn't end up being too far above your monitor refresh rate, anyway...), but not one worth paying big bucks for if you've already got an Ultra.

Or if you've already got a plain GeForce2 GTS, for that matter; the GeForce2 Ultra beats the ordinary GTS by about 30% in 1600 by 1200. The GeForce2 Pro's somewhere in the middle.

Things get more interesting when you start playing with antialiasing. The quincunx system means the GeForce3 walks away from any GeForce2 - and beats the Kyro II, as well - once you tell it to smooth the jaggies.

Use something that supports the new DirectX 8 features and the GeForce3 suddenly shows you what the big deal is. In Mad Onion's 3DMark 2001 - which, at the moment, is pretty much the only DX8 software out there - the GeForce3 spanks a GeForce2 GTS by a factor of more than two, across the board. And it does things that the GeForce2 just can't do at all. There are some tests in 3DMark 2001, like this one...

Alien test

...and, more spectacularly, this one...

Nature test

...which you'll only see with a GeForce3 or similarly capable card.

By way of an experiment, I tried swapping out the Single Data Rate (SDR) PC133 RAM in my Athlon box and replacing it with PC-2100 Double Data Rate memory, kindly supplied for the purpose by those princes among, uh, people who aren't princes, Aus PC Market. (Australians! Don't just sit there like some sort of anti-capitalist retail rejector! Order something at once from Aus PC Market by clicking here!)

The reason I could just swap one flavour of RAM out and another in is that this PC's based on an Asus A7A266 motherboard, which I talk about in more detail here. The A7A266 is stable and well equipped and well priced, but it's not the fastest of the DDR-capable motherboards, because it uses ALi's ALiMAGiK 1 chipset. The MAGiK 1 has the highest RAM latency of the DDR-capable Socket A offerings on the market at the moment, which means that machines based on it generally lose by a few per cent, in various benchmarks, to otherwise identical AMD-CPU computers. It's not a big deal, but the MAGiK 1 is still not really the tweaker's choice.

Aaaaanyway, I swapped in the DDR memory and, well, whoopee. 5% faster at 1024 by 768 with no antialiasing; even that difference faded away as the resolution went up, or when I used FSAA.

DDR memory does make a difference for real world tasks, and the difference between the older AMD-CPU machines made for PC-100 memory (but often capable of higher speeds) and the newer DDR boxes can be substantial for games as well as power productivity apps. Check out my original DDR Athlon review here for more on that subject.

But if you're a gamer, even if you've got a GeForce3, most of the memory-intensive work will be done on the graphics card, not in the PC. So the speed of your main memory won't make much of a difference.

Buy SDR RAM and enjoy the (large) savings. Or spend the same amount of money on SDR that you might have spent on DDR, and end up with a computer with more than twice as much RAM.

Overclocking

As with pretty much any other current 3D card, you can software-twiddle the clock speed of the GeForce3. There are utilities that do it, but the function's also built right into the standard drivers; you just have to fiddle your registry, add a "Coolbits" DWORD to the HKEY_LOCAL_MACHINE\Software\NVIDIA Corporation\Global\NVTweak key and give it the value "3".

If the idea of running Regedit fills you with nameless horror, you can just download a Coolbits .reg file from any one of a squillion places (like, say, here), and merge it into your registry with a double-click.
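
If you'd rather script the tweak than run Regedit or merge a downloaded file, the same change can be made with a few lines of Python via the standard winreg module (called _winreg back in the Python 2 days). The key path and value are exactly the ones described above; the script itself is just my convenience wrapper, and it wants administrator rights.

# Minimal sketch: set the Coolbits value described above using Python's
# built-in winreg module. Run it with administrator rights.
import winreg

KEY_PATH = r"Software\NVIDIA Corporation\Global\NVTweak"

# CreateKey opens the key if it already exists, or makes it if it doesn't.
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
# The magic DWORD that unlocks the clock speed sliders in the driver panel.
winreg.SetValueEx(key, "Coolbits", 0, winreg.REG_DWORD, 3)
winreg.CloseKey(key)
print("Coolbits set; reopen Display Properties to find the new sliders.")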

Overclocking controls

And voila, now you have an overclocking panel, and you can get to it by the transcendently simple process of going to Display Properties -> Settings tab -> Advanced button -> GeForce3 tab -> Additional Properties button -> Hardware Options tab. There's now but a small amount of rebooting and clicking of the "Test New Settings" button between you and overclocked Nirvana.

I think I now know why people prefer to use utilities like PowerStrip. No matter.

However you do it, the speed adjustment's the same. Move a slider, confirm your decision, fire up something 3D and see if it looks fine (win!), looks all weird and glitchy (lose!), or freezes your computer solid (more annoying lose!).

My tests pretty much lined up with other GF3 overclock attempts I've seen; the core's not significantly overclockable, but the RAM can be speed boosted somewhat profitably.

Boosting the core speed to 215MHz from its stock 200 caused crashes in 3DMark 2001, which gives the GF3 plenty to do. Lesser applications were fine; a core speed of 230MHz worked in good old Quake 2.

Since the percentage performance difference you get from a graphics card core overclock is generally considerably smaller than the percentage of the overclock itself, I just left the core speed at its default 200MHz and concentrated on the RAM instead. If you're a Counter-Strike nut, you might be able to get away with a 10% core overclock, but it'll still probably make no real difference to anything.

The RAM was happy at 525MHz, 14% up from its stock 460MHz. The computer didn't actually hang with the RAM at 540MHz, but there were many and varied glitches in 3DMark 2001. So that may be the level you tweak it to when you're hunting a sOoPeR-sTuDlY 3DMark score to impress your fellow teenage cyberpunks, but it's of no use for, like, real people. Again, that higher RAM speed was fine in old games that give the GeForce3 less to do.

The 14% RAM overclock made a difference, but not much of a one. A big 8% performance gain at 1600 by 1200 in 3DMark 2001, less at lower resolutions.

Hey, if it's stable - and it seems to be - then why not. I can't say that the results turned my crank much, though.

If you're playing older games that higher overclocks will be OK with, and you can afford a GeForce3, then I presume you've got a fairly modern CPU, too. Which means your frame rates in those old games will be so stunning already that there's no point pumping them up further.

On my 1.1GHz Athlon with the DDR RAM, for instance, the old Crusher Quake 2 demo blurs by at better than 95 frames per second with the GeForce3. In 1600 by 1200. 32 bit colour (which doesn't even look any better in Q2, but makes things slower nonetheless). Crusher, canonically, runs slower than any real game; the Massive1 demo, which does reflect the real frame rates you get in a hectic multiplayer game, buzzes past at about 130fps with the same graphics settings.

When you've already got frame rates above the refresh rate, even when you're running a resolution too high for most monitors to clearly display and a colour depth that just throws frame rate away for no reason, tweaking the clock speeds for yet more performance is kinda goofy, if you ask me.

Overall

I like the GeForce3 a lot. But it's not cheap, and it's not supported by any real games yet. It has excellent FSAA performance, and it does a bit better at high resolutions than a GeForce2 Ultra - but, if you ask me, that's not worth shelling out $AU849 plus shipping for.

The good news is that if you give in to temptation and buy a GeForce3 right now, you're likely to see DirectX 8 and, possibly, also OpenGL games coming out which do utterly amazing things with it. It's not going to be a situation like the one that faced early GeForce 256 adopters - even now, getting on for two years later, the big feature of the GeForce still isn't supported by most games.

Of course, if you buy a GeForce3 right now and wait for software that supports it, what you'll have during the waiting period is a card that doesn't yet beat considerably cheaper cards by a very great deal, and whose resale value is plunging at the spectacular rate that is the way of computer hardware. Obviously, the sensible solution is to hold off on the purchase for a while.

If you've got a crummy video card at the moment and you're looking for something better, a GeForce3 sure is better. Better than anything else you can buy.

But, if I were you, I'd get a stopgap GeForce2 MX card, or a Kyro II; the Prophet 4500 is amazingly fast, for its price, and beats rather more expensive Nvidia based cards for some tests.

By the time the DX8 game of your dreams has come out, you'll probably be able to get a GeForce3 for less than the difference between the current GF3 price and the current price of a Kyro II card, and you'll still have your stopgap card to put in another machine, or sell.

But since when has sense played a strong role in the purchasing decisions of the PC gamer?

If you Must Go Faster, then damn the torpedoes and buy one of these things! Now! Right now!

Go! Go now! Spend!

Needless to say, Best Buy Computers aren't selling this card any more. But do feel free to see what modern gear they've got!
