ATI vs NVidia - different architectures
Well, some of that article is incorrect, and it reads more like a Team nVidia rah-rah piece. Benchmarks don't lie: the HD 5850, which has 10% fewer computational units than the HD 5870, still beats the GTX 285 in a lot of actual game testing, and for $80 less.
ATI's only problem is not having a card to compete with the GTX 260/216 at around the $225 price point. The HD 5770 is a bit slower and the HD 5850 is a lot more expensive.
Oh, and one thing they left out of their piece: four out of five of the processors on the ATI card can do 64-bit floating point (FP) math, two operations per clock cycle. The current nVidia G200 based cards have one 64-bit unit for every eight of their 32-bit units, or only 30 on the G200. And if you just consider standard 32-bit FP math, the GTX 285 has a peak of 1116 GFLOPS while the HD 5870 has a peak of 2720 GFLOPS.
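If you want to sanity check those peak numbers, the formula is just units × clock × operations per clock. Here's a quick back-of-the-envelope version in Python (the stock shader clocks are from my memory, so verify before quoting me):

# Peak FP32 throughput, a sketch using assumed stock clocks
hd5870_sp = 1600 * 0.850 * 2   # 1600 SPs, 850 MHz, one MAD = 2 flops/clock
gtx285_sp = 240 * 1.476 * 3    # 240 SPs, 1476 MHz, MAD+MUL = 3 flops/clock
print(hd5870_sp, gtx285_sp)    # 2720.0 and ~1063 (1116 implies a higher clock)

# Peak FP64, same formula with only the units that can do it
hd5870_dp = 320 * 0.850 * 2    # the four "thin" ALUs in each 5-wide unit gang up for one DP op
gtx285_dp = 30 * 1.476 * 2     # one DP unit per eight SPs = 30 on the G200
print(hd5870_dp, gtx285_dp)    # 544.0 vs ~88.6 GFLOPS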
Father Xmas - Level 50 Ice/Ice Tanker - Victory
$725 and $1350 parts lists --- My guide to computer components
Tempus unum hominem manet
Preference for my newest build: ATI.
Why: I can actually buy one.
Otherwise, yeah, my newest build would probably have a GTX 260 in it, as originally specced out.
Well, some of that article is incorrect, and it reads more like a Team nVidia rah-rah piece. Benchmarks don't lie: the HD 5850, which has 10% fewer computational units than the HD 5870, still beats the GTX 285 in a lot of actual game testing, and for $80 less.
Although he has a point that the architectures can be misleadingly compared when it comes to things like shaders, the reality is that the 5870 beats the 285 in most cases, contrary to the undertone of the article and undermining its point. 5870 performance actually falls between the 285 and the 295. Clearly, that "bulky code" the article refers to mostly exists in HPC environments, not gaming.
[Guide to Defense] [Scrapper Secondaries Comparison] [Archetype Popularity Analysis]
In one little corner of the universe, there's nothing more irritating than a misfile...
(Please support the best webcomic about a cosmic universal realignment by impaired angelic interference resulting in identity crisis angst. Or I release the pigmy water thieves.)
Actually, it's a lot like the CISC/RISC debates of the '90s. Both companies have taken different approaches to the same problem, and a lot depends on the driver's ability to "compile" the various shaders to take best advantage of the underlying hardware.
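To make the "compile" part concrete, here's a toy sketch (Python, and nothing like the real driver, just an illustration) of the packing problem ATI's shader compiler faces: each 5-wide unit wants five independent operations per clock, and a chain of dependent operations leaves slots empty.

# Toy VLIW packer: a minimal sketch, not actual GPU ISA or driver code
def pack_vliw(ops, deps, width=5):
    """Greedily bundle ops so nothing shares a bundle with an op it depends on."""
    bundles, done = [], set()
    remaining = list(ops)
    while remaining:
        bundle = []
        for op in list(remaining):
            if len(bundle) == width:
                break
            if deps.get(op, set()) <= done:   # all inputs already computed
                bundle.append(op)
                remaining.remove(op)
        bundles.append(bundle)
        done.update(bundle)
    return bundles

# Five independent ops pack into a single 5-wide bundle...
print(pack_vliw(list("abcde"), {}))                      # [['a','b','c','d','e']]
# ...but a dependent chain wastes 4 of 5 slots every clock:
print(pack_vliw(list("abc"), {"b": {"a"}, "c": {"b"}}))  # [['a'], ['b'], ['c']]

nVidia's scalar shaders sidestep that packing problem, which is why their utilization depends far less on how the shader code happens to be written.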
The thing is, even though ATI was the first to have software that allowed BOINC style projects like Folding@Home to take advantage of their hardware, it appears that nVidia is striving for the HPC (High Performance Computing) market with their new core after its success with the G200 in that market. The market for uber video cards for gaming is relatively small, but the market for inexpensive "super" computers is a growth market.
Father Xmas - Level 50 Ice/Ice Tanker - Victory
$725 and $1350 parts lists --- My guide to computer components
Tempus unum hominem manet
With the impending arrival of Going Rogue and the awesome eye candy known as Ultra mode, there are a lot of people considering - and asking about - upgrading their video cards to take advantage of the new graphics settings in an attempt to send their eyes into diabetic shock.
As much as I'd like to eviscerate my optic nerves with the bleeding edge of video technology, I'm going to have to be satisfied with my GTX260 for the time being. But I like to casually follow technology, see what's coming up, and get an idea about current products. I also like to see what products people prefer, and - most importantly - why.
With the aforementioned inquiries about what a good video card choice would be with these parameters or those, recommendations are coming in for both ATI and NVidia. (Let's face it, these two are the only real players in this market.)
Getting to the point, I'm curious what other tech-heads might glean from this article I read over at Tweaktown, which offers a technical comparison of NVidia and ATI. More specifically, I'm wondering how CoH and its code tie in to this paragraph:
Now, what does all this have to do with your gaming? Well, it comes into play when you consider how game engines are coded. If the game code is all small and simple instructions, then the AMD GPU has a very large upper hand, even considering the faster speed of the NV shaders. If the game code is in complex and bulky blocks then AMD only has 320 stream processors that can execute that code and then at a significantly slower speed. This problem has come to light more and more in the world of GPGPU computing, but is also starting to show up in gaming situations.
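If I'm reading that right, the arithmetic behind the claim looks something like this rough sketch (the utilization figures are ones I made up purely to show the swing, not measurements from anywhere):

# Back-of-the-envelope version of the article's claim; utilization is invented
def effective_gflops(units, clock_ghz, flops_per_clock, utilization):
    return units * clock_ghz * flops_per_clock * utilization

# HD 5870: 1600 simple ALUs grouped into 320 five-wide units.
# Small, independent instructions can fill all five slots:
print(effective_gflops(1600, 0.850, 2, 1.0))   # 2720 GFLOPS, best case
# "Complex and bulky blocks" might fill only one slot per unit (320 effective):
print(effective_gflops(1600, 0.850, 2, 0.2))   # 544 GFLOPS, worst case

# GTX 285: 240 scalar shaders at a higher clock; utilization barely moves:
print(effective_gflops(240, 1.476, 3, 1.0))    # ~1063 GFLOPS either way

So in the best case ATI's raw width wins by better than 2x, and in the article's worst case it falls to about half of nVidia's throughput. Whether CoH's shaders look like the first case or the second is exactly what I'm asking.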
Like I said, I'm curious what people with more technical knowledge might have to say about the issue mentioned in the article, and whether it affirms or conflicts with their preferred video company.
The article is here: http://www.tweaktown.com/articles/30...ame/index.html
Test Subject 42 - lvl 50 Sp/DA Scrapper
Oku No Te - lvl 50 MA/SR Scrapper
Borg Master - lvl 24 Bots/traps MM
Pinnacle
Nyghtfyre - lvl 50 DM/SR Scrapper
Champion