I still think that FPS is not the best way of comparing cards.....
More GeForce FX Details (Benchmarks)!
-
If there's artificial intelligence, there's bound to be some artificial stupidity.
Jeremy Clarkson "806 brake horsepower..and that on that limp wrist faerie liquid the Americans call petrol, if you run it on the more explosive jungle juice we have in Europe you'd be getting 850 brake horsepower..."
-
Originally posted by The PIT
You're missing my point: as graphics cards become more and more powerful, to the point that the eye can't tell which card is running quickest at the most complex part of the game with everything turned on, you're left with which card looks best at the highest 2D quality.
I agree completely once that point is reached... but we're still a few GPU generations away from having enough rendering power to even begin to think about photorealism in a gaming environment running at acceptable performance.
note to self...
Assumption is the mother of all f***ups....
Primary system :
P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...
Comment
-
Even the professional render farms still have problems with "plastic" and "shiny" (it has become better, but we aren't there yet).
If there's artificial intelligence, there's bound to be some artificial stupidity.
Jeremy Clarkson "806 brake horsepower..and that on that limp wrist faerie liquid the Americans call petrol, if you run it on the more explosive jungle juice we have in Europe you'd be getting 850 brake horsepower..."
Comment
-
Unless we get down to one manufacturer, I would say a year and a half to two years. That's presuming ATI and Nvidia keep bashing each other. Someone may yet buy out Matrox, or something keeps them alive.
Comment
-
Originally posted by Wombat
A few generations? At least the better part of a decade.
Just remember that there might be other ways to reach an equally good-looking result that isn't necessarily mathematically "correct".
The flexible programmability in modern GPUs might be enough for developers to come up with "shortcuts" in these areas that weren't possible before.
Those "photorealistic" CGI effects in the movies we see today are not designed to be rendered in realtime, so they do everything the "correct" way, which requires huge numbers of polygons and gigantic textures (which admittedly won't run on any normal GPU in realtime this decade), but maybe the added programmability in those GPUs will allow developers to "fake" those detail levels, or cut corners that couldn't be cut with acceptable performance before.
Just because it won't be fast enough if it is done the traditional way doesn't mean it can't be done any other way.
Last edited by TdB; 6 December 2002, 17:53.
This sig is a shameless attempt to make my post look bigger.
Comment
-
I haven't seen anything yet that was fully rendered and photorealistic.
Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.
Comment
-
Well, it is all subjective of course, but I thought the dinosaurs in Jurassic Park were pretty convincing; the liquid metal robot in Terminator 2 was also OK.
Sadly it seems that Hollywood has been sloppy lately; none of the newer movies have really impressed me.
Or maybe it is because you never get a really clear look at the CGI in those two movies: there was either fast movement or there were smoke effects in the foreground, and it only happened in scenes with poor lighting,
but it was believable.
Making it believable in games will be very hard though, because the gamer can control the action and has the opportunity to take a close and thorough look at the graphics, as opposed to movies, where the only thing you control is the "pause" button.
Last edited by TdB; 6 December 2002, 18:25.
This sig is a shameless attempt to make my post look bigger.
Comment
-
Well... by a few generations, I meant GPUs that actually introduce new 3D features as well as better performance, the kind that get introduced every 12 to 18 months on average, not the intermediate steps in between, which for the most part are just increases in clock speed...
Though truthfully, I don't think the real problem is with the GPUs themselves, but for the most part with actually getting the data to them as fast as possible...
Rendering a scene with the kind of quality we see in movies also requires the ability to move a lot more data around in the first place, something that current PC architecture isn't particularly well suited for, even though it has been improving (higher FSB, faster memory, etc.).
note to self...
Assumption is the mother of all f***ups....
Primary system :
P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...
Comment
-
Those "benchmarks" are just BS. The Radeon 9700 is already 2.5-3.5x faster than the GF4 in UT2k3... yet according to those benches the NV30 is only about 10-25% faster than ATI. Consider that at release the NV30's HIGH end model will carry a HIGH price tag, and you can be sure those benchmarks were run on the BEST hardware they currently have in their labs. NVidia is going to have a REALLY hard time getting people to pay 50% more for the NV30 than for a Radeon 9700.
Btw, those benches were run on a 3GHz P4 [at least the ones published by NVidia were], so go look at the same map in other 3GHz P4 reviews to compare the performance.
Don't you wonder why the Radeon score is shown ONLY in the ALPHA Doom tests? Because the Radeon would get too damn close on ANYTHING ELSE. Doom III has been built on/for NVidia cards, so no wonder an ALPHA release might have some edge. And I'm NOT even counting the fact that nobody knows what drivers were used for ATI in that ONE test.
After all that, add the fact that ATI has the R350 ready for the NV30 launch = NVidia has lost the speed-king crown to ATI permanently. The NV30 would have had to be a LOT faster than it actually is to make any difference.
Pe-Te
Comment
-
The NV30 will be a lot faster than the 9700 as applications start transitioning over to pixel and vertex shaders. From a development standpoint it definitely takes the cake as far as capabilities and flexibility go.
"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
Comment
-
Originally posted by DGhost
NV30 will be a lot faster than the 9700 as applications start transitioning over to pixel and vertex shaders.
The fact that the NV30's pixel shaders can render more complex scenes in one pass doesn't say anything about their performance when more basic shader operations are used.
The not-so-great Nature benchmark (which does make "slight" use of pixel shaders) seems to show that the raw performance for standard-length pixel shader operations is not so different from the R300's.
Last edited by Indiana; 7 December 2002, 03:52.
Comment
-
Listen... the NV30 has a much more flexible and much more powerful shader unit than the 9700 has... it will get better performance than the 9700 in cases where any of the 9700's shader execution units are sitting idle... this includes when you are executing shaders with a length that is anything other than a multiple of 3 (if my sources on the R300 are right, it has 3 shader execution units per pipeline - if it actually has 4 then it's going to be multiples of 4 - but given the other design decisions ATI has made, I would bet it's 3). Never mind the marketing fluff both companies have oozed about maximum shader lengths (although really long shaders are going to be good).
Indiana, you are confusing the number of render passes with the number of pixels you can render in a clock. On the raw number of render passes they will be equal, but on the amount each chip can render per clock they most certainly will not be. If the 9700 only has 3 shader execution units per pipeline, it can execute (theoretically and optimally) 1-3 shader ops on 8 pixels in 1 clock, assuming the shader doesn't require more than 1 texture. If it requires 2 textures you just killed it, and it goes to 8 pixels in 2 clocks. That said, if your shader has 4-6 ops it drops to 8 pixels/2 clocks, and if it is 7-9 ops it goes down to 8 pixels/3 clocks.
On the GeForce FX the performance level will vary a lot more depending on how many shader ops are being executed... because it can allocate shader execution units where necessary (instead of having a fixed pipeline like the 9700), it can achieve much better performance than the 9700 for shaders that are, say, 5 ops long. The GeForce FX can do 6 pixels/clock with a single texture on a shader script that is 5 ops long, whereas the 9700 is forced into 8 pixels/2 clocks. On a shader that's 6 ops long and single-textured, the GeForce FX can do 5 pixels/clock (and probably actually 16 pixels in 3 clocks, because you wind up with an extra 2 shader execution units sitting idle each clock), whereas the 9700 is again forced into 8 pixels in 2 clocks. The most optimal case (again, assuming my information is correct) is going to be 4-op-long shaders: the GeForce FX will operate at 8 pixels/1 clock, whereas the 9700 is still at 8 pixels/2 clocks.
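A quick sketch of that arithmetic, using figures that are guesses inferred from the numbers above rather than confirmed specs: 8 pipelines with 3 fixed shader units each on the R300, and a shared pool of roughly 32 units on the NV30, capped at 8 whole pixels per clock.

from math import ceil, floor

def r300_pixels_per_clock(shader_ops, pipelines=8, units_per_pipe=3):
    # Fixed pipelines: every pixel needs ceil(ops / units) clocks,
    # and all pipelines march in lockstep on the same batch.
    return pipelines // ceil(shader_ops / units_per_pipe)

def nv30_pixels_per_clock(shader_ops, pool=32, max_pixels=8):
    # Flexible pool: each clock, hand units to as many whole pixels
    # as will fit; any leftover units sit idle that clock.
    return min(max_pixels, floor(pool / shader_ops))

for ops in range(1, 10):
    print(f"{ops} ops: R300 {r300_pixels_per_clock(ops)} px/clk, "
          f"NV30 {nv30_pixels_per_clock(ops)} px/clk")

On those assumptions a 5-op shader comes out 6 versus 4 pixels per clock, which is where a "roughly 50% faster clock for clock" figure would come from.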
And my GeForce FX numbers also assume it only does whole pixels per clock. It will probably wind up being able to do more work across multiple clock cycles than it does on each individual cycle, because idle execution units get put to work.
Assuming the same level of efficiency out of the shader execution units, the GeForce FX could wind up being 50% faster clock for clock with shader scripts. Very easily. Especially on those pesky scripts that are not multiples of the number of execution units it has. If you start figuring in the clock speed difference, you will see where I am coming from.
Efficiency is something that has so far not been discussed. I'm not really sure about the efficiency of either of them; it could go either way. I would bet on the architectures being fairly equivalent in the efficiency of each shader execution unit, though.
"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
Comment
-
Originally posted by DGhost
"And my GeForceFX numbers are also assuming.."
"assuming blahblah"
So what are you assuming ATI will have ready by the time NVidia can run one DEMO or benchmark without BSODs? Yeah, that's right... a much FASTER card than the R9700 currently is.
Pe-Te
Comment