Ultra-mode video card shopping guide


5th_Player

 

Posted

Quote:
Originally Posted by Ozmosis View Post
Do you have any suggestions for websites that would build the PC for me? Keep in mind I'm in Canada.
I didn't even notice where you were located!

I live in Ottawa! If you prefer, I can recommend several good stores.


 

Posted

Quote:
Originally Posted by Father Xmas View Post
It's Turbo Boost and not Hyperthreading that needs to be shut down for maximum OC of an i5/i7.
Separate from what I've read, I've been told by people who experiment with this that if you intend to saturate all active cores with as much work as possible, disabling hyperthreading can reduce the heat load per core enough to let you raise the per-core clock rate. I haven't tested that myself, but the theory seemed logical given how hyperthreading works on the new Core iX processors relative to P4-style hyperthreading.

Although it's possible disabling hyperthreading does absolutely nothing to let you clock the individual cores higher, that would almost defy the laws of physics unless you're running into an uncore bottleneck that makes the advantage moot. I've also heard some rumblings that disabling hyperthreading doesn't just disable the second thread on the core; it disables other things that might affect the performance of the core itself. But that's unsubstantiated rumor as far as I know.
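
If you want to test this yourself rather than take anyone's word for it, a quick way to confirm whether hyperthreading is actually on, and to watch the per-core clocks while your saturating workload runs, is something like the following (Python, using the third-party psutil package; purely a sketch, not a benchmark):

import psutil

# If the logical core count exceeds the physical count, hyperthreading (SMT) is enabled.
physical = psutil.cpu_count(logical=False) or 0   # psutil can report None on some platforms
logical = psutil.cpu_count(logical=True)
print(f"physical cores: {physical}, logical cores: {logical}, "
      f"hyperthreading: {'on' if logical > physical else 'off'}")

# Per-core clock speeds, so you can compare HT-on vs. HT-off runs at the same overclock.
for freq in psutil.cpu_freq(percpu=True) or []:   # may be empty where unsupported
    print(round(freq.current), "MHz")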


[Guide to Defense] [Scrapper Secondaries Comparison] [Archetype Popularity Analysis]

In one little corner of the universe, there's nothing more irritating than a misfile...
(Please support the best webcomic about a cosmic universal realignment by impaired angelic interference resulting in identity crisis angst. Or I release the pigmy water thieves.)

 

Posted

Quote:
Originally Posted by steveb View Post
In a workstation environment, such as what you run, you're absolutely right. But on a pure high-end gaming machine, assuming that your hard drive is not bottlenecking you first, you want to get the load off your CPU as quickly as possible and onto your GPU(s) (technically, you want to bottleneck the monitor with more frames than it can display). Even if a thread is not fully saturated, moving the load to the GPU as quickly as possible allows the GPU to render frames at its maximum capacity.
I'm afraid you lost me here. Once the CPU is outpacing the GPU, going faster can't help by more than a tiny fraction, because the vast majority of workloads, games included, don't require synchronous computing between the CPU and the GPU for most of their work. In fact, I can simulate the reverse and arbitrarily load down a CPU while a game is running, and so long as the game wasn't using much of the CPU to begin with, the extra load doesn't get very close to maximum CPU load, and the extra load doesn't trigger a different bottleneck like disk or network, the frame rate stays basically constant in every case I've ever tested.
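
For anyone who wants to reproduce that experiment, the artificial load I'm describing is nothing fancy: a few busy-loop worker processes started alongside the game. A minimal sketch in Python (the worker count and duration are placeholders you'd tune to your own machine):

import multiprocessing, time

def burn(stop_time):
    # Busy-loop until the deadline; each worker pins roughly one core at 100%.
    while time.time() < stop_time:
        pass

if __name__ == "__main__":
    workers = 2      # how many cores' worth of extra load to add (placeholder)
    seconds = 60     # how long to hold the load while you watch the game's frame rate
    deadline = time.time() + seconds
    procs = [multiprocessing.Process(target=burn, args=(deadline,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()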


[Guide to Defense] [Scrapper Secondaries Comparison] [Archetype Popularity Analysis]

In one little corner of the universe, there's nothing more irritating than a misfile...
(Please support the best webcomic about a cosmic universal realignment by impaired angelic interference resulting in identity crisis angst. Or I release the pigmy water thieves.)

 

Posted

Quote:
Originally Posted by Arcanaville View Post
I'm afraid you lost me here. Once the CPU is outpacing the GPU, going faster can't help by more than a tiny fraction, because the vast majority of workloads, games included, don't require synchronous computing between the CPU and the GPU for most of their work. In fact, I can simulate the reverse and arbitrarily load down a CPU while a game is running, and so long as the game wasn't using much of the CPU to begin with, the extra load doesn't get very close to maximum CPU load, and the extra load doesn't trigger a different bottleneck like disk or network, the frame rate stays basically constant in every case I've ever tested.
CPUs don't run faster than GPUs. GPUs process their information significantly faster than CPUs and frequently have to wait on their instruction sets from the CPU. Creating video graphics essentially comes down to massive number crunching, which GPUs do much more efficiently than CPUs. This is why a mid-level GPU whose core clock is at 500MHz can run ten times the number of Folding@Home calculations in a day than a high-level CPU with a clock speed of 3GHz can, and why the same mid-level GPU can transcode video in far less time than that same high-level CPU.

The laws of physics are really quite simple here. Regardless of how saturated a processing thread is, calculations being run at 3GHz are happening faster than calculations being run at 2GHz, and thus instruction sets get transferred from the CPU to the GPU faster. The more time the GPU spends processing instruction sets from the CPU, the higher a game's frame rate is. The more time a GPU spends waiting on instruction sets, the lower the frame rate is.

GTA 4 is the perfect example here, as it is a heavily CPU-demanding game: a Core i7 920 and GeForce GTX 285, both at stock clocks, will run the game with maxed-out settings at 1920x1200 at an average frame rate of roughly 45-50 frames per second. The game in fact never saturates any thread more than 20 to 30%, and only runs on two threads. At a clock speed of 3.4GHz, the floodgates break open and the frame rate jumps up to a consistent 60 FPS (never tried it without Vsync, so I'm not sure what the max is). Until the CPU reaches that processing speed, the GPU spends the equivalent of 10 to 15 frames per second waiting on instructions from the CPU, meaning that the CPU is the bottleneck despite being nowhere near full load. The exact FPS gained in any given game depends on the CPU and GPU setup, but the principle is consistent: the faster the GPU receives its instructions from the CPU, the higher the frames per second will climb.

Again, this comes down to the difference between a workstation environment and a gaming environment.

A workstation environment is about maximum productivity. Speed, while far from unimportant, is secondary to productivity. Having as many threads as possible as saturated as possible is a good thing: the more threads a CPU can run at a stable clock speed with healthy temperatures at maximum load, the better the workstation is considered to be performing.

A gaming computer is the opposite. On a gaming computer the only thing that is important is speed. Multi-tasking is not only not a concern, it's counter-productive. The CPU has to be blazing fast in order to keep up with the GPU, and that is the only thing that matters. If a single frame per second is lost because the CPU is lagging behind the GPU, then the system is under-performing.
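
To put rough numbers on that, here's a toy model of the relationship (Python; every figure below is invented purely for illustration). The idea is that the GPU can't start a frame until the CPU has finished preparing it, so the frame time is roughly the larger of the two per-frame costs, and speeding up the CPU only raises FPS while the CPU is the larger term:

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # Frame time ~= the larger of the CPU's and the GPU's per-frame cost (ignoring pipelining).
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

gpu_ms = 16.0                    # say the GPU needs 16 ms per frame (~62 FPS on its own)
print(fps(22.0, gpu_ms))         # CPU needs 22 ms per frame -> CPU bound at ~45 FPS
print(fps(22.0 / 1.28, gpu_ms))  # ~28% higher CPU clock (roughly 2.66 -> 3.4GHz) -> ~58 FPS
print(fps(10.0, gpu_ms))         # once the CPU is quicker than the GPU, ~62 FPS and no further gain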


 

Posted

Quote:
Originally Posted by steveb View Post
CPUs don't run faster than GPUs. GPUs process their information significantly faster than CPUs and frequently have to wait on their instruction sets from the CPU. Creating video graphics essentially comes down to massive number crunching, which GPUs do much more efficiently than CPUs. This is why a mid-level GPU whose core clock is at 500MHz can run ten times the number of Folding@Home calculations in a day than a high-level CPU with a clock speed of 3GHz can, and why the same mid-level GPU can transcode video in far less time than that same high-level CPU.
That's not relevant to what I said. CPUs don't *do* the same things GPUs do: it's entirely possible for the CPU to do the work a game demands of it much faster than the GPU of the same computer can do the work the game demands of it, frame by frame. When that happens, the CPU is outpacing the GPU, and a better video card would speed you up: you're GPU bound in that case, not CPU bound.

Core clock is completely worthless as a comparison between standard CPUs and GPUs, because current GPUs are basically SIMD machines these days.

But that has nothing to do with whether the CPU or the GPU is the bottleneck in your performance.


Quote:
The laws of physics are really quite simple here. Regardless of how saturated a processing thread is, calculations being run at 3GHz are happening faster than calculations being run at 2GHz, and thus instruction sets get transferred from the CPU to the GPU faster. The more time the GPU spends processing instruction sets from the CPU, the higher a game's frame rate is. The more time a GPU spends waiting on instruction sets, the lower the frame rate is.

GTA 4 is the perfect example here, as it is a heavily CPU-demanding game: a Core i7 920 and GeForce GTX 285, both at stock clocks, will run the game with maxed-out settings at 1920x1200 at an average frame rate of roughly 45-50 frames per second. The game in fact never saturates any thread more than 20 to 30%, and only runs on two threads. At a clock speed of 3.4GHz, the floodgates break open and the frame rate jumps up to a consistent 60 FPS (never tried it without Vsync, so I'm not sure what the max is). Until the CPU reaches that processing speed, the GPU spends the equivalent of 10 to 15 frames per second waiting on instructions from the CPU, meaning that the CPU is the bottleneck despite being nowhere near full load. The exact FPS gained in any given game depends on the CPU and GPU setup, but the principle is consistent: the faster the GPU receives its instructions from the CPU, the higher the frames per second will climb.

Again, this comes down to the difference between a workstation environment and a gaming environment.
Pointers to the benchmarks, please.


Quote:
A workstation environment is about maximum productivity. Speed, while far from unimportant, is secondary to productivity. Having as many threads as possible as saturated as possible is a good thing: the more threads a CPU can run at a stable clock speed with healthy temperatures at maximum load, the better the workstation is considered to be performing.

A gaming computer is the opposite. On a gaming computer the only thing that is important is speed. Multi-tasking is not only not a concern, it's counter-productive. The CPU has to be blazing fast in order to keep up with the GPU, and that is the only thing that matters. If a single frame per second is lost because the CPU is lagging behind the GPU, then the system is under-performing.
I'm not unfamiliar with the performance issues involved. However, if you're telling me any game can be "CPU intensive," only put 20% load on each of two cores, and be the performance bottleneck holding back GPU computations, I'm afraid I will need a lot more situational information and performance numbers before I accept that as anything other than an aberration.

This is so counter to experience, actually, that I think I'm going to have to borrow a copy of GTA 4 just to analyze its performance. If it's doing what you are saying it's doing, it's worth ripping apart just to find out why. You should never, ever, ever be CPU bound at only 20% utilization of a single core, multiplied by however many cores you have. You really shouldn't see major CPU issues until at least one of those cores gets above 75%-80% utilization, unless another bottleneck is reached (memory IO, for example) or you have incredibly crap code.
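
When I do tear into it, the first thing I'll log is per-core utilization over time rather than the overall number, something along these lines (Python with the third-party psutil package; the file name and sample count are arbitrary):

import time, psutil

# Log per-core utilization once a second while the game runs. A game can be CPU
# bound at a low overall number only if one of these columns is sitting near 100%.
with open("percore_load.csv", "w") as log:
    for _ in range(300):   # roughly five minutes of samples
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        log.write("%d,%s\n" % (time.time(), ",".join(str(c) for c in per_core)))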


[Guide to Defense] [Scrapper Secondaries Comparison] [Archetype Popularity Analysis]

In one little corner of the universe, there's nothing more irritating than a misfile...
(Please support the best webcomic about a cosmic universal realignment by impaired angelic interference resulting in identity crisis angst. Or I release the pigmy water thieves.)

 

Posted

Quote:
Originally Posted by Arcanaville View Post
That's not relevant to what I said. CPUs don't *do* the same things GPUs do: it's entirely possible for the CPU to do the work a game demands of it much faster than the GPU of the same computer can do the work the game demands of it, frame by frame. When that happens, the CPU is outpacing the GPU, and a better video card would speed you up: you're GPU bound in that case, not CPU bound.

Core clock is completely worthless as a comparison between standard CPUs and GPUs, because current GPUs are basically SIMD machines these days.

But that has nothing to do with whether the CPU or the GPU is the bottleneck in your performance.




Pointers to the benchmarks, please.




I'm not unfamiliar with the performance issues involved. However, if you're telling me any game can be "CPU intensive," only put 20% load on each of two cores, and be the performance bottleneck holding back GPU computations, I'm afraid I will need a lot more situational information and performance numbers before I accept that as anything other than an aberration.

This is so counter to experience, actually, that I think I'm going to have to borrow a copy of GTA 4 just to analyze its performance. If it's doing what you are saying it's doing, it's worth ripping apart just to find out why. You should never, ever, ever be CPU bound at only 20% utilization of a single core, multiplied by however many cores you have. You really shouldn't see major CPU issues until at least one of those cores gets above 75%-80% utilization, unless another bottleneck is reached (memory IO, for example) or you have incredibly crap code.
What I'm talking about is the fact that modern GPUs outpace most modern CPUs' ability to keep up with them, and really high-end GPUs outpace even high-end CPUs by a wide margin. I know that comparing core clocks on a GPU versus a CPU is not a fair comparison, as they do completely different tasks, but the point was that when they perform the same task, you can see how a GPU can outpace a CPU.

The best place to see benchmarks of CPU bottlenecks is at guru3d.com. In their review of the Radeon HD 5870 and their more recent review of the GTX 480 and 470, you can see many of the games in their benchmark suites capping out at a certain frame rate. As screen resolution increases and game detail levels remain the same, a GPU that is handling the bulk of the workload should be achieving a lower frame rate. You can see a bottleneck occur when the frame rate remains consistent as resolution increases, which means that the GPU is outpacing the CPU's ability to provide it with instructions.

And the testing rig used at guru3d.com is not some off-the-shelf HP Pavilion, but a custom-built, water-cooled Core i7 960 system running at 3.6GHz (3.7 when turbo boost is active). If you look at most tech sites that review video cards, their CPUs will be outrageously overclocked (guru3d.com is actually the slowest speed I've seen on a reviewer's rig: most have them clocked at 4GHz or higher) in order to remove the possibility that the CPU will be unable to keep up with the GPU and thus give an inaccurate idea of the full capability of the GPU being reviewed. When they're reviewing these cards and seeing the frame rates cap out, they're communicating the same information as I am: the GPU is outpacing the CPU's ability to keep up with it, regardless of how much or how little load it actually puts on the CPU.

If you've never seen this for yourself, I can't make you believe it. It didn't make sense to me when I was running GTA 4 at 45 FPS and such a seemingly low system load. On the recommendation of someone far more experienced than I am, I overclocked my CPU and watched my frame rate jump up significantly. And it wasn't just in GTA 4 that I saw the performance leap: Crysis took off as well, adding a good 7 to 10 frames per second. Like I said, the principle is simple: when a GPU is waiting for its instructions, it's not rendering frames, so frame rates drop; when the GPU isn't waiting, it's rendering frames, so frame rates rise.
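
The resolution-scaling test those reviews rely on is easy to apply to your own numbers. A rough sketch of the rule of thumb (Python; the function name, tolerance and sample figures are mine, purely illustrative):

def looks_cpu_bound(fps_by_resolution, tolerance=0.05):
    # fps_by_resolution: average FPS ordered from lowest to highest resolution,
    # with detail settings held constant. Flat FPS across resolutions suggests the
    # GPU is waiting on the CPU; falling FPS suggests the GPU is the limiter.
    lowest, highest = fps_by_resolution[0], fps_by_resolution[-1]
    return (lowest - highest) / lowest <= tolerance

print(looks_cpu_bound([61, 60, 59]))   # flat across resolutions -> True, likely CPU bound
print(looks_cpu_bound([95, 74, 52]))   # scales with resolution  -> False, the GPU is the limiter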


 

Posted

Well, I'm frustrated. My new ATI Radeon 5870 card (replacing my two nVidia 9800GTX) is great and all, and enables me to use a resolution of 1920x1080 on my 24" wide screen monitor, but CoX on test on Ultramode... if I'm just standing still it looks very pretty, but my frame rate is like 17. (On regular CoH it's 50-60, sometimes higher.) It looks choppy and is quite frustrating. I know earlier in the test run they intentionally dialed down the FPS, but it should be normal now, right? (And no, I'm not using FSAA and occlusion together since I knew about that issue.)

At this point, I feel like I spent $400 for nothing. I'd blame it on my 2.5 GHz Quad CPU, but none of my four cores are anywhere near maxed out. (CPU usage is at 20%, RAM is at about 35%)


My Mission Architect arcs:

Attack of the Toymenator - Arc # 207874

Attack of the Monsters of Legend - Arc # 82060

Visit Cerulean Shadow's Myspace page!

 

Posted

Honestly, I don't think the card's the issue. Or, at least, I HOPE not, as I have the same card in my rig atm. And I'm having the exact same situation you are at present.

I'm pretty sure they're still running the non-optimized client, though.

I hope.


"Iron defenses and a crappy attitude do not, a tanker, make."

Proud Leader and founder of The Gangbusters Super Group and The Madhouse Villain Group: Ask me about becoming a member!

 

Posted

Quote:
Originally Posted by Cerulean_Shadow View Post
Well, I'm frustrated. My new ATI Radeon 5870 card (replacing my two nVidia 9800GTX) is great and all, and enables me to use a resolution of 1920x1080 on my 24" wide screen monitor, but CoX on test on Ultramode... if I'm just standing still it looks very pretty, but my frame rate is like 17. (On regular CoH it's 50-60, sometimes higher.) It looks choppy and is quite frustrating. I know earlier in the test run they intentionally dialed down the FPS, but it should be normal now, right? (And no, I'm not using FSAA and occlusion together since I knew about that issue.)

At this point, I feel like I spent $400 for nothing. I'd blame it on my 2.5 GHz Quad CPU, but none of my four cores are anywhere near maxed out. (CPU usage is at 20%, RAM is at about 35%)
There are issues with CoH/V and the ATI drivers at the moment. The 10.4 Catalyst drivers are supposed to be released next week and apparently should clear up most of these issues. With any luck, they will also re-enable multi-GPU support for CoH/V as well, which will make me a very happy man.

The big issue that you're probably experiencing right now is the obscene drop in frame rates if a graphical setting is changed. I got smashed with that this morning, dropping my FPS from a steady 60 literally down to 1. You just have to log out and log back in to resume normal FPS. I think there's a good chance that the 10.4 drivers will be available in time for I17 to go live.

I think the CPU overclocking argument has been beaten to death here, but you'd probably experience an overall rise in FPS in most games with your CPU clocked to a higher speed. Will it make a difference in CoH/V in particular? I don't know.
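
One quick sanity check on that 20% figure, by the way: the overall CPU usage number averages across all cores, so on a quad core it can hide a single game thread that's nearly pinned. Back-of-the-envelope (Python, just feeding your reported number back in):

logical_cpus = 4        # quad core, assuming no hyperthreading
overall_usage = 0.20    # the "CPU usage is at 20%" figure from Task Manager
print(overall_usage * logical_cpus)   # 0.8 -> up to ~80% of one core could be a single game thread
# So check the per-core graphs: one core sitting near 100% means the CPU is the
# limiter even though the overall number looks low.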


 

Posted

I'm going to give it until ATI releases 10.4 drivers and if things aren't nice and fast by then, I'm sticking my old cards back in and will sell the 5870. (They are selling on eBay for just about what I paid for it on Newegg.) I want pretty shadows and water and such, but I'd rather have a non-choppy, fast frame rate. I'm sure the issue is my 2.5 GHz CPU, but I have no clue how to overclock it and have heard enough horror stories that I don't really want to try.


My Mission Architect arcs:

Attack of the Toymenator - Arc # 207874

Attack of the Monsters of Legend - Arc # 82060

Visit Cerulean Shadow's Myspace page!

 

Posted

Quote:
Originally Posted by Cerulean_Shadow View Post
I'm going to give it until ATI releases 10.4 drivers and if things aren't nice and fast by then, I'm sticking my old cards back in and will sell the 5870. (They are selling on eBay for just about what I paid for it on Newegg.) I want pretty shadows and water and such, but I'd rather have a non-choppy, fast frame rate. I'm sure the issue is my 2.5 GHz CPU, but I have no clue how to overclock it and have heard enough horror stories that I don't really want to try.
Something to keep in mind, that I mentioned earlier, is that if your computer is a pre-built HP/Dell/Acer, you're likely to have a locked BIOS and overclocking won't be possible.

Beyond that, overclocking is not a difficult process, but it can be time-consuming and you do need the proper hardware in your computer for it: most specifically a good CPU heatsink. If you've got the proper equipment, then there are plenty of sites that you can Google up that will show you guides on how to overclock your specific brand/model of CPU. But in order to even begin considering overclocking, you have to know certain facts about your computer that a surprisingly large number of people are unaware of: your CPU model/stepping, RAM amount/type/speed and motherboard chipset.

There is a good reason to be uncomfortable with overclocking if you don't consider yourself tech savvy: doing it literally voids your warranty from Intel or AMD (depending on which company made your CPU). It is nearly impossible to tell if a CPU has been overclocked once it is removed from the motherboard, but nearly impossible does not equal actually impossible, so not risking the damage when you're not sure of yourself is a commendable stance in my opinion. Frankly, the first piece of advice I ever give anyone considering overclocking is this: don't do it if you don't feel comfortable with it.

So if this is the only game you play on a regular basis, jumping through the hoops to squeeze an extra 10 to 15 frames per second out of your rig might not really be worth the trouble to you: I understand and respect that. But if you're playing a wide variety of games, then it's definitely worth at least doing a few Google searches and some reading on the art of overclocking to get the most possible performance out of your computer: even if you don't do it, a little more knowledge about computers never hurts.
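
If you just want to collect those facts without opening the case, a small script can pull most of them (Python, standard library plus the third-party psutil package; it won't report RAM type/speed or the motherboard chipset, which a tool like CPU-Z will fill in):

import platform, psutil

print("CPU:", platform.processor())   # model string as the OS reports it
print("Cores:", psutil.cpu_count(logical=False), "physical /",
      psutil.cpu_count(logical=True), "logical")
print("RAM:", round(psutil.virtual_memory().total / 2**30, 1), "GB")
print("OS:", platform.system(), platform.release())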


 

Posted

OK, so I'm down to choosing between two PCs. One is a modified Alienware Aurora, and the other is a modified "Killer INTIMIDATOR" from a site called Extreme PC. Both are pre-assembled.


Extreme PC:
PROCESSOR: Intel Core i7 920 Quad Core (2.66GHz), 8MB cache
OPERATING SYSTEM: Windows® 7 Home Premium, 64-bit
MEMORY: 3x 2048MB Corsair Xtreme Performance DDR3 PC3-12800
HARD DRIVE: Western Digital Caviar Blue 640GB (7200 RPM), 16MB cache, SATA2
GRAPHICS CARD: Sapphire ATI Radeon HD 5870, 850MHz core, 1GB, 4800MHz
POWER SUPPLY: Thermaltake TR2 RX 850W silent power supply
OPTICAL DRIVE: LG Super Multi DVD Writer 22x DVD±RW/RAM SATA
CASE: Antec Nine Hundred (900) TWO Ultimate Gaming "Stealth"
Linky: http://www.extreme-pc.ca/customize.a...9&custid=55772
Shipping and tax not included. Mouse/keyboard not included.
$1,871.25

Alienware:
PROCESSOR: Intel® Core™ i7-860 processor (8MB cache, 2.8GHz)
OPERATING SYSTEM: Windows® 7 Home Premium, 64-bit
MEMORY: 6GB dual-channel DDR3 SDRAM at 1333MHz
HARD DRIVE: 500GB 7200 RPM SATA II hard drive, 16MB cache
GRAPHICS CARD: ATI Radeon HD 5870 1GB GDDR5
POWER SUPPLY: 875W
OPTICAL DRIVE: 24x CD/DVD burner
CASE: Alienware Aurora black chassis
Mouse/keyboard included; tax included but not shipping.
$1,848

Any thoughts? Comments?

I'm leaning more towards Extreme PC right now because a few of its components are better, but the price tag is bigger (the tax is going to add another $100, not to mention shipping). The Alienware is cheaper and has a keyboard and mouse included, but it has lower specs on a few components, and I don't know the brand of some of the parts, so it makes me nervous...


"You wear a mask to hide who you are, I wear a mask to show who I am"

Arc ID 91456: The Zombie Apocalypse Task Force:poster 1, poster 2


CLICK THE ABOVE LINK TO HELP DO YOUR PART TO SAVE C.O.H!!!!!

 

Posted

The Extreme PC box is the better machine, I'd say. It'll be much easier, I expect, to upgrade down the line, and you know exactly what parts are coming in it. Since I assume you already have a mouse and keyboard, I wouldn't even take that as a measure of comparison between them. Besides, unless you want a fancy set, it's not like they're that expensive. So that's my take.


It is known that there are an infinite number of worlds, simply because there is an infinite amount of space for them to be in. However, not every one of them is inhabited. Therefore, there must be a finite number of inhabited worlds. Any finite number divided by infinity is as near to nothing as makes no odds, so the average population of all the planets in the Universe can be said to be zero. From this it follows that the population of the whole Universe is also zero, and that any people you may meet from time to time are merely the products of a deranged imagination.

 

Posted

I have a GeForce 8400 GS; I hope that works. I also have a 6150.


 

Posted

Amartino, neither card will work with Ultra Mode.


The Story of a Petless MM with a dream
Quote:
Originally Posted by Deus_Otiosus View Post
This entire post should receive some kind of award for being both hysterical and fantastic.
Well done.
I have a 50 in every AT, but Scrappers and Dominators are my favorites.

 

Posted

@ Ozmosis:

Of the two PCs you listed, the Extreme PC is definitely the superior. The Extreme PC has an i7 920, which means the X58 chipset, versus the 860 on the P55 chipset, at only a $30 difference. The X58 chipset is currently Intel's king of the heap and outpaces the P55 in pretty much every respect when it comes to performance.

The only suggestion I'd make for that build is to see if an XFX version of the graphics card is available for close to the same amount. With XFX you'll get a double lifetime limited warranty that is modder-friendly (meaning you could add aftermarket cooling or overclock the card without voiding the warranty) and transferable to the next owner if you choose to sell it off down the line.


 

Posted

Thanks for the feedback guys!

The "Killer INTIMIDATOR" will be in my greedy little hands by Friday :P


"You wear a mask to hide who you are, I wear a mask to show who I am"

Arc ID 91456: The Zombie Apocalypse Task Force:poster 1, poster 2


CLICK THE ABOVE LINK TO HELP DO YOUR PART TO SAVE C.O.H!!!!!

 

Posted

My system is actually a made-to-order system from Cybertron. Two years ago, it was the best of the best. Time flies.


My Mission Architect arcs:

Attack of the Toymenator - Arc # 207874

Attack of the Monsters of Legend - Arc # 82060

Visit Cerulean Shadow's Myspace page!

 

Posted

In the very first post of this thread, Positron gave the GeForce GTX 260 and GTX 285 as good video cards for Ultra Mode. Is that information still accurate? Five months have passed since he posted that.


 

Posted

Quote:
Originally Posted by Cascadian View Post
In the very first post of this thread, Positron gave the GeForce GTX 260 and GTX 285 as good video cards for Ultra Mode. Is that information still accurate? Five months have passed since he posted that.
Cascadian,

My experience with the 8800GTX (pretty much equivalent to the 9800GTX) is that I can run it almost maxed out. Compare the 8800GTX, the GTX 260, and the GTX 285.

So I would say the GTX 285 would be able to run UM full-on just fine (given sufficient supporting components, of course). I have a bid on one right now, as a matter of fact...


 

Posted

I have an EVGA 2GB GTX 285, folks. My FPS shot down to around 20 with shadows, reflections, and ambient occlusion all turned up and my other graphics settings at max as well (minus AA). Playable, but once any powers go off, the lag ensues x_X


 

Posted

Quote:
Originally Posted by JagBlade View Post
I have an EVGA 2GB GTX 285, folks. My FPS shot down to around 20 with shadows, reflections, and ambient occlusion all turned up and my other graphics settings at max as well (minus AA). Playable, but once any powers go off, the lag ensues x_X
But what about the rest of your PC? Does the mobo support PCI-E 2.0? Did you buy the GPU with the rig or upgrade to it after the fact?


 

Posted

Processor: Intel i7 920 @ 2.77GHz

Motherboard: Evga 3x SLI

Video Card: EVGA 2GB GTX 285 (had two of these in SLI; however, one card's memory went bad and it needs an RMA)

Sound Card: Fatal1ty Titanium Professional

RAM: 12GB Corsair XMS3 DDR3

PSU: Corsair 1000W

Hard Drive: 2x 10,000rpm, 300GB Western Digital Velociraptor

OS: Windows 7 Ultimate 64 bit

Had this rig custom made at realmcomputers.com not even a year ago. All my drivers are up to date (aside from the motherboard BIOS).


 

Posted

I run a GT 240 on a quad core with 8GB RAM. The settings are not 'maxed', but it's surely running quite well. That's with Ultra Mode active at 100% in each of its settings, and everything else at default.

I picked the 240 up around Christmas for US$100.