Like the title says, the game is freezing for me while running EVGA Precision or MSI Afterburner GPU overclocking/monitoring software (and most likely would also freeze with RivaTuner, as these two programs are derivatives of it).
CoH will run if I'm running either of those programs, but it will lock up if I attempt to adjust any graphical options or log out of the game. I'm not the only person who's encountered this bug, as there was a thread (here: http://boards.cityofheroes.com/showthread.php?t=231842) that originally gave me the notion that it was Afterburner that was giving me the lock up problems. With Afterburner off, no lock up problems; with it on, I can't adjust any settings or properly log out.
Of course, the obvious solution is simply not to run any of these programs while playing CoH, but that's a pain for me, as I prefer to monitor my system temps and performance while gaming to stay ahead of any problems.
I'm just wondering if anyone has come up with a workaround for this issue, or if they know of a program that monitors video card performance/temperatures in a similar way, with an on-screen display or Logitech LCD display, that doesn't lock up the game?
I haven't actually tried using RivaTuner proper as yet, but my suspicion is that it would also lock up the game. -
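For anyone in the same boat, one workaround is to log temperatures without an overlay at all: it's the overlay hook into the 3D API that these RivaTuner derivatives use, and that's the likely source of the conflict. Here's a minimal sketch, assuming an Nvidia card and the `nvidia-smi` command-line utility (ships with the Nvidia drivers); it just polls, so nothing hooks the game:

```python
# Hypothetical sketch: read GPU temperature from the command line instead of
# an on-screen overlay. A plain polling loop never hooks the 3D API, so it
# can't cause the in-game lockups described above.
import shutil
import subprocess

def parse_temps(csv_text: str) -> list[int]:
    """Parse temperature values from nvidia-smi CSV output,
    one line per GPU, e.g. '62\n58\n' -> [62, 58]."""
    return [int(line.strip()) for line in csv_text.splitlines() if line.strip()]

def read_gpu_temps() -> list[int]:
    """Return current GPU temperatures in Celsius, or [] if nvidia-smi
    is not available (e.g. on an ATI system)."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_temps(out)

if __name__ == "__main__":
    print(read_gpu_temps())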
Thanks for this. I was having this issue as well while running MSI Afterburner, which is essentially the same program: a derivative of RivaTuner. Shutting off Afterburner worked like a charm. Interestingly though, this problem only cropped up when I switched to an Nvidia GTX 460 from an ATI Radeon HD 5870; so I'd have to guess that this is a problem with the way RivaTuner/Precision/Afterburner monitors Nvidia cards, not ATI.
-
Just to add to je_saist's suggestions: a lot of people find it useful to uninstall the video card drivers, boot into safe mode, run Driver Sweeper, and then manually delete anything Driver Sweeper might have left behind while still in safe mode. Usually only the ATI folder will be left, but occasionally you'll find other remnants if you've been installing your drivers over top of one another instead of uninstalling the old ones before installing the new ones.
As for the suggestion of using 10.6, that would be good for checking whether your OpenGL drivers are stable, but I've found that the 10.7 drivers have corrected the AA/AO bug that's been plaguing all of us ATI owners since the launch of I17. -
@ Ozmosis:
Of the two PCs you listed, the ExtremePC is definitely the superior one. The ExtremePC uses an i7 920, which means the X58 chipset, versus the 860 on the P55 chipset, at only a $30 difference. The X58 is currently Intel's king of the heap and outpaces the P55 in pretty much every respect.
The only suggestion I'd make for that build is to see if an XFX version of the graphics card is available for close to the same price. With XFX you'll get a double lifetime limited warranty that is modder friendly (meaning you could add aftermarket cooling or overclock the card without voiding the warranty) and transferable to the next owner if you choose to sell it off down the line. -
Quote:Something to keep in mind, that I mentioned earlier, is that if your computer is a pre-built HP/Dell/Acer, you're likely to have a locked BIOS and overclocking won't be possible.
I'm going to give it until ATI releases 10.4 drivers and if things aren't nice and fast by then, I'm sticking my old cards back in and will sell the 5870. (They are selling on eBay for just about what I paid for it on Newegg.) I want pretty shadows and water and such, but I'd rather have a non-choppy, fast frame rate. I'm sure the issue is my 2.5 GHz CPU, but I have no clue how to overclock it and have heard enough horror stories that I don't really want to try.
Beyond that, overclocking is not a difficult process, but it can be time consuming and you do need the proper hardware in your computer for it: most specifically a good CPU heatsink. If you've got the proper equipment, then there are plenty of sites that you can Google up that will show you guides on how to overclock your specific brand/model of CPU. But in order to even begin considering overclocking, you have to know certain facts about your computer that a surprisingly large number of people are unaware of: your CPU model/stepping, RAM amount/type/speed and motherboard chipset. There is a good reason to be uncomfortable with overclocking if you don't consider yourself tech savvy: doing it literally voids your warranty from Intel or AMD (depending on what company made your CPU). It is nearly impossible to tell if a CPU has been overclocked when it is removed from the motherboard, but nearly impossible does not equal actually impossible, so not risking the damage when you're not sure of yourself is a commendable stance in my opinion. Frankly, the first piece of advice that I ever give anyone considering overclocking is this: don't do it if you don't feel comfortable with it.
So if this is the only game that you play on a regular basis, jumping through the hoops to squeeze an extra 10 to 15 frames per second out of your rig might not really be worth the trouble to you: I understand and respect that. But if you're playing a wide variety of games, then it's definitely worth at least doing a few Google searches and some reading into the art of overclocking to get the most possible performance out of your computer: even if you don't do it, a little more knowledge about computers never hurts. -
Quote:There's issues with CoH/V and ATI drivers at the moment. The 10.4 Catalyst drivers are supposed to be released next week and apparently should clear up most of these issues. With any luck, it will also re-enable multi-GPU support for CoH/V as well, which will make me a very happy man.
Well, I'm frustrated. My new ATI Radeon 5870 card (replacing my two nVidia 9800GTX) is great and all, and enables me to use a resolution of 1920x1080 on my 24" wide screen monitor, but CoX on test on Ultra Mode... if I'm just standing still it looks very pretty, but my frame rate is like 17. (On regular CoH it's 50-60, sometimes higher.) It looks choppy and is quite frustrating. I know earlier in the test run they intentionally dialed down the FPS, but it should be normal now, right? (And no, I'm not using FSAA and occlusion together since I knew about that issue.)
At this point, I feel like I spent $400 for nothing. I'd blame it on my 2.5 GHz Quad CPU, but none of my four cores are anywhere near maxed out. (CPU usage is at 20%, RAM is at about 35%)
The big issue that you're probably experiencing right now is the obscene drop in frame rates if a graphical setting is changed. I got smashed with that this morning, dropping my FPS from a steady 60 FPS literally down to 1 FPS. You just have to logout and log back in to resume normal FPS. I think that there's a good chance that the 10.4 drivers will be available in time for I17 to go live.
I think the CPU overclocking argument has been beaten to death here, but you'd probably experience an overall rise in FPS in most games with your CPU clocked to a higher speed. Will it make a difference in CoH/V in particular? I don't know. -
Quote:What I'm talking about is the fact that modern GPUs outpace most modern CPUs' ability to keep up with them; and really high end GPUs outpace even high end CPUs by a wide margin. I know that comparing core clocks on a GPU versus a CPU is not a fair comparison as they do completely different tasks, but the comparison was about how, when performing the same task, one can see a GPU outpace a CPU.
That's not relevant to what I said. CPUs don't *do* the same things GPUs do: it's entirely possible for a CPU to do the work a game demands of it much faster than the GPU of the same computer can do the work the game demands of it, frame by frame. When that happens, the CPU is outpacing the GPU, and a better video card would speed you up: you're GPU bound in that case, not CPU bound.
Core clock is completely worthless to compare between standard CPUs and GPUs, because current GPUs are basically SIMDs these days.
But that has nothing to do with whether the CPU or the GPU is the bottleneck in your performance.
Pointers to the benchmarks, please.
I'm not unfamiliar with the performance issues involved. However, if you're telling me any game can be "CPU intensive," put only 20% load on each of two cores, and still have the CPU be the performance bottleneck holding back GPU computations, I'm afraid I will need a lot more situational information and performance numbers before I accept that as anything other than an aberration.
This is so counter to experience, actually, that I think I'm going to have to borrow a copy of GTA 4 just to analyze its performance. If it's doing what you are saying it's doing, it's worth ripping apart just to find out why. You should never, ever be CPU bound at only 20% utilization of a single core, multiplied by however many cores you have. You really shouldn't see major CPU issues until at least one of those cores gets above 75%-80% utilization, unless another bottleneck is reached (memory I/O, for example) or you have incredibly crap code.
The best place to see benchmarks of CPU bottlenecks is guru3d.com. In their review of the Radeon HD 5870 and their more recent review of the GTX 480 and 470, you can see many games in their benchmark suites capping out at a certain frame rate. As screen resolution increases and game detail levels remain the same, a GPU that is handling the bulk of the workload should achieve a lower frame rate. You can see a bottleneck occur when the frame rate remains constant as resolution increases, which means the GPU is outpacing the CPU's ability to provide it with instructions. And the testing rig used at guru3d.com is not some off-the-shelf HP Pavilion, but a custom-built, water-cooled Core i7 960 system running at 3.6GHz (3.7 when turbo boost is active). If you look at most tech sites that review video cards, their CPUs will be outrageously overclocked (guru3d.com is actually the slowest speed I've seen on a reviewer's rig: most have them clocked at 4GHz or higher) in order to remove the possibility that the CPU will be unable to keep up with the GPU and thus create an inaccurate idea of the full capability of the GPU being reviewed. When they're reviewing these cards and seeing the frame rates cap out, they're communicating the same information as I am: the GPU is outpacing the CPU's ability to keep up with it, regardless of how much or how little load it actually puts on the CPU.
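The resolution-scaling test described above can be boiled down to a simple rule of thumb. Here's a toy sketch of that logic (the FPS numbers in the example are invented for illustration, not taken from guru3d):

```python
# Toy illustration of the resolution-scaling bottleneck test: if average FPS
# barely moves as resolution rises, the CPU (not the GPU) is the limiter,
# because higher resolution only adds GPU work.
def classify_bottleneck(fps_by_resolution: dict[str, float],
                        tolerance: float = 0.05) -> str:
    """fps_by_resolution maps a resolution string to the average FPS
    measured at that resolution with identical detail settings."""
    fps = list(fps_by_resolution.values())
    spread = (max(fps) - min(fps)) / max(fps)
    # Flat FPS across resolutions => GPU has headroom, CPU is the cap.
    return "CPU-bound" if spread <= tolerance else "GPU-bound"

# Invented example numbers:
print(classify_bottleneck({"1280x1024": 98, "1680x1050": 97, "1920x1200": 96}))
print(classify_bottleneck({"1280x1024": 120, "1680x1050": 90, "1920x1200": 64}))
```

The 5% tolerance is an arbitrary threshold I picked; real benchmark runs have noise, so you'd tune it to your data.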
If you've never seen this for yourself, I can't make you believe it. It didn't make sense to me either when I was running GTA 4 at 45 FPS with such a seemingly low system load. On the recommendation of someone far more experienced than I, I overclocked my CPU and watched my frame rate jump significantly. And it wasn't just in GTA 4 that I saw the performance leap: Crysis took off as well, adding a good 7 to 10 frames per second. Like I said, the principle is simple: when a GPU is waiting for its instructions, it's not rendering frames, so frame rates drop; when the GPU isn't waiting, it's rendering frames, so frame rates rise. -
Quote:CPUs don't run faster than GPUs. GPUs process their information significantly faster than CPUs and frequently have to wait on their instruction sets from the CPU. Creating video graphics essentially comes down to massive number crunching, which GPUs do much more efficiently than CPUs. This is why a mid level GPU whose core clock is at 500MHz can run ten times the number of Folding@Home calculations in a day that a high level CPU with a clock speed of 3GHz can, and why the same mid level GPU can transcode video in far less time than that same high level CPU.
I'm afraid you lost me here. Once the CPU is outpacing the GPU, going faster can't help by more than a tiny fraction, because the vast majority of workloads, games included, don't require synchronous computing between the CPU and the GPU for most of their work. In fact, I can simulate the reverse and arbitrarily load down a CPU while a game is running, and so long as the game wasn't using much of the CPU to begin with, and the extra load doesn't get very close to maximum CPU load (and doesn't trigger a different bottleneck like disk or network), the frame rate stays basically constant in every case I've ever tested.
The principle is really quite simple here. Regardless of how saturated a processing thread is, calculations run at 3GHz happen faster than calculations run at 2GHz, and thus instruction sets get transferred from the CPU to the GPU faster. The more time the GPU spends processing instruction sets from the CPU, the higher a game's frame rate; the more time it spends waiting on them, the lower the frame rate.
GTA 4 is the perfect example here, as it is a heavily CPU-demanding game: a Core i7 920 and GeForce GTX 285, both at stock clocks, will run the game with maxed out settings at 1920x1200 at an average frame rate of roughly 45-50 frames per second. The game in fact will never saturate any thread more than 20 to 30%, and will only run on two threads. At a clock speed of 3.4GHz, the flood gates break open and the frame rate jumps up to a consistent 60 FPS (I've never tried it without Vsync, so I'm not sure what the max is). Until that processing speed is reached by the CPU, the GPU spends the equivalent of 10 to 15 frames per second waiting on instructions from the CPU, meaning that the CPU is the bottleneck despite being nowhere near full load. The exact FPS gained in any given game depends on the CPU and GPU setup, but the principle is consistent: the faster the GPU receives its instructions from the CPU, the higher the frames per second will climb.
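The argument above amounts to a back-of-envelope model: each frame costs some CPU time (which shrinks as the clock rises) and some GPU time, and the slower of the two sets the frame rate. Here's a sketch of that model; all the millisecond numbers are illustrative assumptions, not measurements:

```python
# Toy frame-time model: frame time = max(CPU time, GPU time) per frame.
# CPU time per frame is assumed to scale inversely with clock speed.
def fps(cpu_ms_at_base: float, base_ghz: float, ghz: float, gpu_ms: float) -> float:
    cpu_ms = cpu_ms_at_base * (base_ghz / ghz)  # faster clock -> less CPU time/frame
    return 1000.0 / max(cpu_ms, gpu_ms)        # the bottleneck sets the frame time

# Suppose the CPU needs 22 ms/frame at 2.66 GHz and the GPU needs 16 ms/frame:
print(round(fps(22.0, 2.66, 2.66, 16.0)))  # ~45 FPS, CPU-bound
print(round(fps(22.0, 2.66, 3.4, 16.0)))   # ~58 FPS, still CPU-bound but near the GPU floor
```

Note what the model also shows: once the CPU term drops below the GPU term, further clock increases buy nothing, which is the other poster's point about the bottleneck simply moving.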
Again, this comes down to the difference between a workstation environment and a gaming environment.
A workstation environment is about maximum productivity. Speed, while far from unimportant, is secondary to productivity. Having as many threads as possible as saturated as possible is a good thing: the more threads a CPU can run at a stable clock speed with healthy temperatures at maximum load, the better the workstation is considered to be performing.
A gaming computer is the opposite. On a gaming computer, the only thing that matters is speed. Multi-tasking is not only not a concern, it's counter-productive. The CPU has to be blazing fast in order to keep up with the GPU, and that is the only thing that matters. If a single frame per second is lost due to the CPU lagging behind the GPU, then the system is underperforming. -
Quote:Not trying to be a jerk here, but in a year and a half I've never once read that hyper-threading should be shut off in order to achieve a maximum overclock. That's going to come as quite a shock to the literally hundreds of guys, like myself, running Core i7 860s, 920s and 930s at 4GHz or higher with hyper-threading on (these CPUs all have stock clocks of 2.66 to 2.8GHz). Intel Turbo Boost does need to be shut off in order to have a stable overclock, but there are literally hundreds of guys on the Nvidia, EVGA or XFX forums who would disagree about hyper-threading. Personally, I'm at 4GHz on an air-cooled i7 920 with hyper-threading on, and I've run extended CPU stress tests and Folding@Home for hours without a glitch.
Not necessarily, and probably not in my case. Since none of the cores on my system were maxed out during that load, it's likely there were other bottlenecks constraining performance (disk, for example, or less likely memory I/O).
On an i7-860, it's also likely I would have to disable hyper-threading to get the maximum possible overclock, and that might also be a less than optimal change given my workloads.
For the most part, overclocking only helps if you have a saturated or nearly saturated processor core (even if the others are idle and your net CPU utilization is low). If you don't, overclocking usually can't help.
Your best bet for gaining ground with overclocking on a multicore processor like an i5 or i7 is if you don't do a lot of stuff simultaneously, but you do one or two extremely CPU-intensive tasks for which the CPU speed itself is the critical bottleneck. For people trying to maximize City of Heroes performance, the CPU is probably not going to be the problem if you're starting from an i7-860 or 920, say. There are other games, on the other hand, that could tap out those CPUs, because they would saturate out a single (or pair of) cores. I just don't own any at the moment.
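The "is any single core saturated?" check described above is worth doing before you bother overclocking, since overall CPU% hides one pegged core. A minimal sketch for Linux, assuming the standard /proc/stat field layout (cpuN user nice system idle iowait ...); the two samples here are synthetic strings so the math is visible:

```python
# Per-core busy percentage between two /proc/stat samples (Linux).
# Overall utilization averages across cores, so a single saturated core
# can hide behind a low total -- exactly the case where overclocking helps.
def core_busy_pct(sample1: str, sample2: str) -> dict[str, float]:
    def parse(text):
        cores = {}
        for line in text.splitlines():
            parts = line.split()
            # per-core lines are "cpu0", "cpu1", ...; skip the aggregate "cpu"
            if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
                vals = [int(v) for v in parts[1:]]
                idle = vals[3] + (vals[4] if len(vals) > 4 else 0)  # idle + iowait
                cores[parts[0]] = (sum(vals), idle)
        return cores

    before, after = parse(sample1), parse(sample2)
    busy = {}
    for core in after:
        total = after[core][0] - before[core][0]
        idle = after[core][1] - before[core][1]
        busy[core] = 100.0 * (total - idle) / total if total else 0.0
    return busy

# Synthetic samples: cpu0 is pegged, cpu1 is nearly idle.
s1 = "cpu0 100 0 50 850 0\ncpu1 10 0 10 980 0\n"
s2 = "cpu0 190 0 55 855 0\ncpu1 12 0 11 1077 0\n"
print(core_busy_pct(s1, s2))
```

In real use you'd read /proc/stat twice a second or so apart; on Windows the per-core graphs in Task Manager show the same thing.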
In a workstation environment, such as what you run, you're absolutely right. But on a pure high end gaming machine, assuming that your hard drive is not bottlenecking you first, you want to get the load off your CPU as quickly as possible and onto your GPU(s) (technically, you want to bottleneck the monitor with more frames than it can display). Even if a thread is not fully saturated, moving the load to the GPU as quickly as possible allows the GPU to render frames at its maximum capacity.
Now CoH/V isn't likely to be a huge problem for i3/i5/i7 CPUs at stock clocks, but CoH/V isn't the only game that I play on my PC, and I'm sure the same is true for many others around here. If CoH/V is the only game a person here plays and huge multi-tasking capabilities are not a concern, then they can save a lot of money by going for a good Phenom II X4 with 4GB of RAM and a Radeon HD 5750, as a Core i7 system would never come remotely close to being fully utilized. -
Quote:A common misconception about overclocking is that it will let your computer do more. It won't. It will let you do what you do faster, though. The i7s are great for that sort of multi-tasking, but that kind of load on a system (78%) is unusual for the average gamer. If your CPU were overclocked to, say, 3.4GHz, you'd still be at 78% load with all of that running, but you'd be finishing the tasks faster.
I'll be honest though: I haven't bothered to even check whether any overclocking settings exist in my Dell, because I haven't pegged the CPU in my i7-860 yet. If I really wanted to, I'm sure I could, but when I dump normal workloads (normal for me) onto it, it really doesn't seem to notice. A week ago I had two instances of CoH loaded, VMware running a copy of my old XP workstation, a Python simulation running, Real converting movies to iTunes in the background, and a bunch of miscellaneous foreground apps (like browsers), and I think I was at 78% CPU utilization. Wouldn't want to do that with less than 8 gigs of RAM, though.
These days, with many games simply being ported over from consoles with as few changes as possible, a faster CPU helps a lot, regardless of how many cores/threads you have to work with. As consoles age very quickly, technologically speaking, the way game developers get the most out of them is by putting as much of the workload on the CPU as possible. Since console CPUs usually have very high clock speeds, many games actually seem to run worse on PCs whose CPUs are far more powerful overall but clocked much lower than their console counterparts. Thus, many people think their GPUs simply aren't good enough to handle high settings when in fact the CPU just isn't getting the information to the GPU quickly enough. A faster CPU lets those tasks finish sooner, allowing your GPU to kick in sooner and letting your PC really shine over a console. -
Quote:While it is possible to mix an ATI primary GPU and use an Nvidia one to process PhysX, the workaround is a real hassle with only mixed results. The truth is the game companies have been very slow to get into PhysX, plus with Intel and ATI/AMD both pushing Havok (a competing physics engine), few game developers bother implementing even the most basic PhysX features. If the only game you play on a regular basis is CoH/V, then there's no point in using it, as at the moment CoH isn't utilizing PhysX processors except the old Ageia ones, which will slow a system with a modern high end GPU down to a crawl. There are really only two games out there that heavily use PhysX right now anyway: Cryostasis and Batman: Arkham Asylum; so unless you're also playing one of those two, don't bother trying to get the ATI/Nvidia combo to work.
I currently have two GeForce 9800GTX cards in SLI configuration. I'm going to yank one and see if I can get it to fit inside my son's PC. The other I'll probably sell, though I did read that there is a way to get it to work as a dedicated PhysX card. Does anyone know if that is actually possible?
-
Quote:Well, first, I'm not saying don't buy from Dell at all, but for the stats of the computer you're buying, get one of their Alienware branded PCs instead of a regular Dell one. Alienware rigs are built with enthusiast level parts and an unlocked BIOS, which will allow you to overclock your CPU to get more out of it.
Do you have any suggestions for websites that would build the PC for me? Keep in mind I'm in Canada.
Like many others in this thread, I am a DIY'er, but I get that building a computer yourself might not be an option for you for various reasons. So, to point you in the direction of someone who can build you a great PC, I'd first suggest your local computer parts store: given the location in your profile, you're bound to have more than a few choices in your town, or else in nearby Hull and/or Ottawa. At a local store, they'll be able to help you choose compatible parts that might be significantly better than what the big names use in their PCs, and for less money. Many stores actually have pre-picked bundles with a few customization options that really take the hassle out of choosing parts; plus, with them assembling it for you, you still get the peace of mind that your PC has been tested extensively before you ever turn it on for the first time. And it never hurts to put your money into a local business. -
Quote:Totally in agreement with Blast here.
Looks good on the whole. Nitpick-wise, I'd suggest changing the memory a bit. Right now you have six 1GB sticks. Odds are that's all the memory slots in the machine filled. You might consider changing to, say, three 2GB sticks. That will leave room open for future upgrades if you ever decide you need more memory. 6GB total is good and may well be enough for the life of the system, but just in case... And you could consider a bigger HDD. Storage is generally cheap. But if you don't think there's any chance you'll fill it, don't bother increasing it. As an alternative to bigger, you could also look at a faster HDD.
One thing I do have to mention: if this is a Dell system as opposed to an Alienware system, your BIOS may be locked, and CPU overclocking will not be possible. The Core i3/i5/i7 CPUs are so awesome for overclocking that it would be a shame to spend that kind of money on a system and not be able to get the absolute most out of it. If you're planning on having a PC built for you, I'd check a couple of other websites before committing to a system from Dell, as you might be able to find the same thing for less money and still have the option to overclock it if you want. -
Given the sorts of results you're seeing for UM with the 9800GTX, which is now considered a mid-level mainstream card, it really does prove my theory that the demands of UM will not be anywhere near as great as many in this thread have feared. If the only game that you play, and are planning on playing for the next couple of years, is CoH/V, then you probably don't need to shell out $500 or so on a GTX 480, as it's unlikely that this game will ever require that much graphics power, unless it lives long enough to see yet another Ultra Mode upgrade. In all likelihood, all anyone will need to run UM on high settings at a good frame rate is a card along the lines of a GTX 260 or a Radeon 5770.
-
Quote:Truthfully, most of today's high end cards see absolutely no benefit from overclocking when they're new, because they can generally run most of today's games at the highest settings well above 60 frames per second. Overclocking a GPU can bring some gains in heavily demanding games like Metro 2033 or Crysis, but generally that gain will only work out to an average of 7 to 12 frames per second: that can be significant in a game like Crysis, but you do have to overclock the GPU much higher than a factory overclocked card ever is. Older cards can see significant improvements with overclocking, making them far more able to keep up with today's more demanding games; so while a GPU's ability to overclock isn't usually a real factor when it first hits the market, except in benchmark scoring, it does indicate its ability to keep up once it's a couple of years old. But unless you're finding the card on sale for a price equal to or less than a GTX 480 at standard speeds (not bloody likely for the next four or five months), the factory overclocked version isn't really worth the extra money, unless you're also looking for marginally higher scores in benchmarking suites like 3DMark Vantage.
So what would be the real world benefit of buying the GeForce GTX 480 Superclocked edition as opposed to the regular edition? Is it really that much faster? And I always thought overclocking was supposed to be a bad thing in terms of the life of your graphics card.
I'd be putting it in my 2.85Ghz Quad Core system with 8GB of Ram, 64-bit Windows Vista Ultimate, and a 1000w power supply. Would I benefit from the overclocking?
Thanks!!
So a GTX 480 in your system will see much greater benefits from overclocking the CPU rather than the GPU. Since many of today's games are ported over from consoles, they rely heavily on the CPU for most functions: the faster the CPU runs, the faster the GPU receives its instructions from the drivers and the higher the frame rates you will get. GTA 4 is a perfect example of this: a Core i7 920/GTX 285 will get about 45 to 50 FPS at 1920x1200 on maximum settings at the stock clock speed of 2.66GHz; with the CPU set to 3.4GHz, the FPS can leap up over 60 (since non-3D monitors cap out at 60 FPS, there's not much point in measuring beyond 60, although with vsync off, a ~70 FPS average at that clock speed is possible). Most of today's MMORPGs like fast CPUs, so getting your CPU's clock speed up will definitely benefit your overall performance in CoH/V as well as pretty much any other game on the market today. -
Quote:The GT 330M is a tough one to nail down, as it and the 325M are what Nvidia has aimed at being the most attractive to buyers, so there are a few variants available that will affect performance. It'll probably be able to handle it fairly well at mid-range settings, depending on screen resolution. Don't expect to max out the settings, as the 330M is a mid-level card, but it does depend on the version being used. The 330M comes in both GDDR2 and GDDR3 versions, and the GDDR2 version can be significantly slower. Also, given that this card is in a MacBook, and Apple heavily emphasizes low power consumption, it could also be the low power version, which has a lower clock speed and thus loses some performance. I believe the card should work quite well, but you'll probably get better results by playing at a resolution lower than the monitor's native resolution.
Apple announced their new i7 MacBook Pro computers today with the following video card info:
Any ideas/guesses as to how good the NVIDIA GeForce GT 330M would be at handling Ultra Mode?
Thanks,
Buxley
However, I'd honestly stay away from the current crop of mobile Core i7s. The clock speeds on them are very low. Yes they have a great turbo boost feature, but that only boosts the speed on the first two cores when multi-threaded applications are not coming heavily into play: once you get into using a program optimized for multiple threads, or into multi-tasking, any advantage from the turbo boost disappears and you're left with a very slow quad core processor (not difficult for a gamer to kill turbo boost's advantage: run the game, a web browser and a music program at the same time). I personally feel that the mobile Core i5s are a much better value: higher default clock speeds on a hyper threaded dual core CPU give you the power of a high speed quad core for a much lower price; plus, they still have the great turbo boost for when you're not running anything demanding. -
Quote:Actually, I haven't read a post saying absolutely that SLI/CrossfireX will not work with Ultra Mode when it's launched, although I don't expect it to, as I haven't read anything in Nvidia's or ATI's driver updates that would indicate they've re-optimized their drivers to support CoH/V in a multi-GPU setup. The most recent information I've seen around here on that dates back to February, so that's a long time for changes to have occurred. There's also an equally good chance I've missed an update, as Posi's posts on the subject are buried pretty deeply in the Dev Tracker at this point. I will readily admit to being too lazy right now to go digging past the first three or four pages for more recent news.
Well first off, for the moment, SLI and Crossfire are stated to not work with UM, so until they do, a dual GPU setup is less than optimal for that reason alone. Presumably they will get it working at some point, but we don't know when. Until then, single GPU upgrades are the way to go.
Beyond that, while they haven't specified a CPU/memory requirement for UM, Posi has listed a range of what cards they expect to do what level of UM. And thus it can be taken that the 9800GT and its equivalents are the current minimum spec for UM. Also keep in mind that no one is being run off. If you can play now, you'll still be able to play under I17. UM is an *optional* feature. Anyway, until I17 at least leaves closed beta, I think you're expecting too much.
I think you might have misread my statement about running off 70% of customers due to high system requirements: I was actually referring to why even the most recent cutting edge games tend to have relatively low minimum system requirements. And as I stated, I expect the minimum demands of UM to be along the lines of what it would take to run a DirectX 10 only game, despite the fact that CoH/V actually uses OpenGL: that is, what was considered mid-range hardware circa 2007.
But my point is that even as an optional graphical upgrade, a guaranteed minimum spec should have been posted at least a month ago; all we've seen are "well, you can expect this sort of result with this..." statements, which are not exactly concrete information: I can throw a Radeon 5970 into an eight year old system with a Pentium 4 CPU and will not see anywhere near the same result as someone running the same card with a Phenom II quad core CPU. It's fine for the exact specification to still be in flux at this stage of development, but we should have been told a CPU, RAM and graphics spec that is guaranteed to meet minimum requirements for UM; that gives them room to work toward a less demanding spec, but it also gives their customers a firm idea of what they should be looking for if they are considering an upgrade for UM. Honestly, I'm not concerned about my rig's ability to run UM at maximum settings (or any currently existing game), but not everyone has the money to go out and get a Core i7 system with a Radeon 5870 or GTX 480, and that's why a guaranteed minimum spec should have been posted long before now. -
As stated by some others, if you're building a new rig solely for CoH/V, then, even with Ultra mode coming down the pipe, a dual GPU system probably won't be much help to you, as SLI/CrossfireX tends to yield less impressive results in MMORPGs than in most other games.
Now, I understand the appeal of a dual-card set-up; I have one myself. But SLI or CrossfireX really needs to be treated one of two ways: either as the starting point for a very high-end gaming system, or as an upgrade path for a mid-range system. I see people looking at building something new with dual cards at the mid-range, which doesn't make a lot of sense. For example, the Radeon 5670s mentioned earlier in an Alienware build would be outperformed by a single 5770 or 5830 for less money, with lower power consumption and heat generation. But if you already have a card like a 5670 and you're just looking to add some extra graphics power, then adding a second 5670 makes a lot of sense, as it's probably still cheaper than selling off the used mid-range card and buying a new high-end card.
However, I feel this thread represents a pretty large failure on the devs' part: by February, they should have had a fairly exact specification for running Ultra mode, instead of all this guesswork we players have been doing. While I understand the need to stay flexible during development, considering that many of their customers are weighing a fairly expensive upgrade to coincide with this graphics update, I really think it was their responsibility to nail down and post an exact minimum spec.

And while everyone is panicking over "can my system run Ultra mode?", I'm pretty sure Ultra mode's requirements are going to be far less hefty than most fear: a dual-core CPU at about 2.6GHz and a GeForce 8600 or Radeon 3850 are probably all that will be required for minimum spec (most developers shoot for specs much lower than this, even on very popular games like Mass Effect or Call of Duty; it's foolish to eliminate 70% of potential customers with obscenely high requirements). Cards in the current mid-range, like the GTS 250 or Radeon 5750, will probably be all that's needed to run the game at relatively high settings with a good frame rate.

The bulk of the worried posts in this thread could have been avoided with a simple statement along the lines of: "You WILL need at least CPU XX, X amount of RAM and card XXX from Nvidia or XXXX from ATI to run Ultra mode. We are still shooting for a slightly lower minimum spec, but this is the guaranteed minimum." That hasn't shown up, and so now we have people shopping for an unknown future, quite possibly spending money they could have put to better use elsewhere, simply because the devs haven't nailed down a guaranteed minimum spec for Ultra mode and posted it up in lights with arrows pointing at it for everyone to see. -
Quote:Worth mentioning: COH currently likes fast CPU's. Moreso than normal. I wouldn't necessarily buy until we get minimum/recommended specs for Going Rogue if GR is a major factor for you. If you've got other games you're jonesing to improve, well, that's something else
This is a great point. A lot of older games and console ports prefer powerful CPUs to powerful GPUs. Anyone not too sure about their current set-up's ability to provide decent FPS should see if they can overclock their CPU a bit. You'd be amazed what bumping an AMD Athlon or Intel Pentium dual core from 2.4GHz up to 2.8 or 3.0GHz can get you out of even a mid-range graphics card; and a lot of older tri/quad cores running around 2.0 or 2.2GHz can see a lot more FPS by getting up to about 2.6GHz or above.
-
Quote:How would an ATI 4890 fare? I'd like to be able to turn on all the bells and whistles and still have good performance. Would a RAM upgrade also be needed? I've got 3.25GB of RAM already.
A 4890 is still considered an upper mid-range card and should quite easily tear through Ultra mode on high settings while maintaining 60+ FPS.
I started this thread about general appearance (FSAA, water, colors, etc.), but for anyone concerned about performance with Ultra mode, just keep in mind that CoH is an MMORPG, and as such its system requirements are generally kept undemanding, even for a graphical upgrade like Ultra mode, in order to appeal to the widest array of players possible. Older cards like an Nvidia 8800 or ATI 3870 should quite easily handle Ultra mode at the game's suggested detail levels while maintaining very playable frame rates up to resolutions around 1680x1050; at higher resolutions, you might start to see a definite FPS drop-off. Ultra mode will definitely be more demanding, but it's not as though Paragon is overhauling the graphics demands to anywhere near the levels a game like Crysis requires to run well.
Ultra mode is called such simply because it is an optional graphics update, as opposed to a mandatory upgrade that would leave many players unable to play. The base system requirements for Ultra mode will be higher than those for the regular game, but any mid-range card from the last two Nvidia or ATI generations should handle Ultra mode quite well; I suspect even slightly older cards like Nvidia's 7900 or ATI's 2900 will handle the newer graphics adequately. -
Quote:actually part of the NDA is not mentioning that you are in fact in the beta test. so were the beta test going on and someone said they were in it, that technically would be a violation if i remember right.
Darn. I've seen others in the past mention on the forums that they were in a closed beta, so I honestly didn't think that was a violation of the NDA.
Quote:The biggest trick you'll run into (once the beta goes open and things can be talked about) is that what looks better on one person's system won't necessarily be the same on yours.
CPU will make a difference.
RAM will make a difference.
Display will make a difference (resolution, quality of the screen, etc).
Drivers will make a difference (remember to keep those updated!).
I'm under the impression that Paragon wants it to look equally good on both chipset makers' cards (ie, no favoritism, no "sabotaging" the engine so the other guy's cards have issues, etc). So, hopefully, you'll be good to go with anything you get. Generally speaking, the more powerful the card, the better things will look, but that's pretty much a given for most games (and honestly, doesn't answer your question).
I can tell you that the footage we've seen of GR has been done on Radeons, though. Positron's recommendations seem to indicate they've done a lot of GeForce testing already. We know that neither SLI nor Crossfire is working just at the moment. In short? Hopefully you won't have to worry about it and you'll be able to go with your preferred card maker
Honestly, I am leaning towards ATI at the moment based on reports of the GTX 480's horrible thermals and power consumption (price is not my main concern, although I'm not likely to pay Nvidia's launch prices); but these are really only rumors at the moment, and if reality proves different, then I will put off my upgrade for price drops if they come in relatively short order (many predictions have the 400 series dropping in price as early as mid-May). -
Quote:I'm not a hardware guru but I've been following this issue with some interest and here's my impression: for a given price point, you'll get better performance with ATI right now. The current scuttlebutt concerning the soon to be released next generation nVidia cards is that they won't change that situation. I'm going to wait a month or two, since I understand that nVidia's new cards are going to be released soon and it's rumored that ATI may be doing a refresh on its product line. But ATI looks like the card to buy for CoH at the moment. Check out this summary from Tom's Hardware; it shows that at every price point, ATI is the card to get right now.
I've been following the reports with pretty intense interest over the last few weeks myself. Right now, I am leaning toward ATI as well, but until the 400 series actually launches, performance is pure speculation.
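Price-to-performance arguments like these are easy to sanity-check with a few lines of Python. To be clear, every card name, performance figure and price below is a made-up placeholder for illustration, not a real benchmark result or street price:

```python
# Toy price-to-performance check. Every number here is a hypothetical
# placeholder, NOT a real benchmark result or street price.
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    """Relative performance units delivered per dollar spent."""
    return relative_perf / price_usd

# Hypothetical figures, treating one card as a 100-point baseline.
cards = {
    "Card A (baseline)": (100, 400),  # (relative perf, price in USD)
    "Card B (rumored)":  (110, 500),  # e.g. ~10% faster for $100 more
}

for name, (perf, price) in cards.items():
    print(f"{name}: {perf_per_dollar(perf, price):.3f} perf/$")
```

With those made-up numbers, the baseline card delivers 0.250 perf/$ against 0.220 for the pricier one, which is exactly the kind of gap being argued about in this thread; swap in real benchmark figures and current prices once they exist and draw your own conclusion.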
If the 400 series vastly outperforms the Radeon 5800s (the GTX 470 will have to do at least 25% better than a 5870, and the GTX 480 will have to come close to a 5970), then I'll probably put off my upgrade until mid-May, when the 400s will be more widely available and prices should drop significantly. However, if initial reports hold true and the GTX 470 falls between the 5850 and 5870 in performance (while costing a ridiculous $100 more than a 5870), then, in terms of price to performance, going with ATI becomes a no-brainer. -
Quote:I17 Beta hasn't started yet. And if it had, and if there was an NDA involved, they couldn't tell you anyways. Also, there's still a good chance you'll be invited to the beta.
Actually, I'm really only asking for an opinion without solid examples, which I doubt would violate the NDA; it does so no more than a beta tester confirming that I17 does in fact exist.
As for my asking now, it's because I'm a little absent-minded, so I'm likely to forget to post this for several days, if I remember at all. -
Without breaking the NDA, I would just like to ask any beta testers who have access to both Nvidia and ATI graphics a simple question: on which company's cards does Ultra mode look better?
I know the I17 beta testing doesn't begin until tomorrow, but this question occurred to me now, which is why I'm posting it now.
I'm not going to be receiving a beta invite, at least not to any early stages, due to having let my sub lapse for a few days a couple of times in the last six months. But I'm very interested in Ultra mode, as I'm going to be upgrading my graphics cards in late March or early April: about the time I17 will probably launch, which is also roughly when Nvidia will launch its 400 series and ATI should launch its Radeon HD 5890. I'll be grabbing high-end cards regardless of which company I choose, so frames-per-second performance will probably not be an issue, but I just want to know which company's cards Ultra mode will look better on.
I know that in the past, CoH looked better on Nvidia's cards, having been part of Nvidia's "The Way It's Meant to Be Played" campaign at launch. But in recent months, CoH has dropped the Nvidia banner, and the CoH homepage now displays an ATI logo, which leaves me wondering whether Ultra mode will favor ATI cards, Nvidia cards, or show no appearance/performance preference for either company's cards.
Performance in CoH's Ultra mode will not entirely determine whether I go with Nvidia's 400 series or ATI's 5890, as I'm also heavily weighing factors like cost, overall performance, power consumption and heat generation; but if those factors all even out, then knowing which cards my games will simply look better on can become the tiebreaker for me. -
First, here's a link (http://www.notebookrepairguide.com/c...tebook-laptop/) to a site featuring a bunch of custom system builders. The actual link is about notebooks, but most of these builders also deal in desktop computers. Just beware: the page is not all that well maintained, so some of the companies are gone and some are UK-only, but it's a good place to start looking. One site not listed there that you can also try is digitalstormonline.com.
Next, as far as liquid cooling goes, most of these custom builders just use small closed-loop CPU coolers that are okay for maintaining good temps at stock settings, but no good for overclocking. Unless you plan on doing some fairly extreme overclocking and are willing to spend the money on a mid- to high-end liquid cooling system, you'd do better to stick with air cooling. I'm running a Core i7 920 at 3.6GHz (stock is 2.66GHz) with a Thermalright Ultra 120 Extreme air cooler, and even under full load my core temps never go above 60 degrees Celsius.
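For anyone curious how big that overclock actually is, the jump from the i7 920's stock clock can be worked out in a couple of lines (a trivial sketch using the two clock speeds mentioned above):

```python
# Overclock headroom on a Core i7 920: 2.66 GHz stock vs. 3.6 GHz
# overclocked (the figures mentioned above).
stock_ghz = 2.66
oc_ghz = 3.6

gain_pct = (oc_ghz - stock_ghz) / stock_ghz * 100
print(f"Overclock gain: {gain_pct:.0f}%")  # roughly a 35% clock increase
```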
If you're going to go for a system from a custom builder, I suggest a company that allows you to choose the brands of parts that you want (ibuypower.com, cyberpower.com, digitalstormonline.com, pugetsystems.com) and then study up on the parts so you know what you're getting, as opposed to something like Alienware, which uses entirely unbranded parts so you've got no idea exactly how good/reliable your system components are supposed to be.
Finally, as for dual cards and CoH: dual-card set-ups haven't been supported by this game for quite some time. This is something to be especially wary of if you decide to get a dual-GPU card like the GTX 295, Radeon HD 4870x2 or the upcoming HD 5870x2, as you'll only be able to use half that card's power when playing CoH. When I force any sort of dual-card rendering in the game, I get a noticeable decrease in FPS; however, just one of my cards can max out all the settings in CoH and still maintain 60 FPS with vsync on.