AcesHardware


  • #16
    Originally posted by atko

    If there are really a lot of polys in that benchmark, I think it's not the 128 megs of RAM but the two vertex shader pipelines of the GF4 that help double the fps...

    It is said that the Parhelia has 4 of those pipelines...

    That's where the extra memory comes in handy; it seems like the bench stores at least part of the poly data directly in video card memory, to avoid using AGP to transfer such a huge amount of polys on the fly...


    While the extra vertex shader comes in handy and boosts scores, I've seen scores from GF3 Ti 200 128MB cards that are significantly faster than any 64MB GF3 card, by as much as 50~60% in some cases, and both still have only one vertex shader.
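    If the bench really pushes on the order of 500,000~600,000 polys per frame (the figure quoted later in this thread), a rough, assumption-laden estimate shows why a 128MB card would have room to keep the geometry resident while a 64MB card might not:

```python
# Back-of-the-envelope vertex-data footprint. All numbers are assumptions
# for illustration, not measurements from the benchmark: ~600,000 triangles
# per frame, no indexed vertex sharing, and a 32-byte vertex
# (3 floats position + 3 floats normal + 2 floats UV).
TRIS_PER_FRAME = 600_000
VERTS_PER_TRI = 3            # worst case: every triangle gets its own vertices
BYTES_PER_VERTEX = 32

total_bytes = TRIS_PER_FRAME * VERTS_PER_TRI * BYTES_PER_VERTEX
total_mib = total_bytes / (1024 * 1024)
print(f"{total_mib:.1f} MiB of vertex data")  # ~54.9 MiB
```

    With textures and frame/Z buffers on top of that, a 64MB card would be forced to stream part of the data over AGP every frame, which fits the score gap described above.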
    note to self...

    Assumption is the mother of all f***ups....

    Primary system :
    P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...



    • #17
      In that case, a 128MB Radeon 8500 should be able to pull off a good score, considering its poly throughput.



      • #18
        Originally posted by superfly
        While the extra vertex shader comes in handy and boosts scores, I've seen scores from GF3 Ti 200 128MB cards that are significantly faster than any 64MB GF3 card, by as much as 50~60% in some cases, and both still have only one vertex shader.
        In that case you're right: the 128 megabytes of video RAM made the demo run faster on the same GPU.



        • #19
          Originally posted by superfly
          While the extra vertex shader comes in handy and boosts scores, I've seen scores from GF3 Ti 200 128MB cards that are significantly faster than any 64MB GF3 card, by as much as 50~60% in some cases, and both still have only one vertex shader.

          Well, I don't know where you've seen those scores, but I must have missed them; most of the time I only see increases between 0 and 10%.
          Main Machine: Intel Q6600@3.33, Abit IP-35 E, 4 x Geil 2048MB PC2-6400-CL4, Asus Geforce 8800GTS 512MB@700/2100, 150GB WD Raptor, Highpoint RR2640, 3x Seagate LP 1.5TB (RAID5), NEC-3500 DVD+/-R(W), Antec SLK3700BQE case, BeQuiet! DarkPower Pro 530W



          • #20
            I've tried the benchmark and I've got to say I'm not impressed:

            1. It's not really good looking; the grass is quite bad, the sky is bad. Only the trees/leaves and the water are decent, and we've seen this before.

            2. It's not a good gfx-card benchmark; from what I've seen, the results depend mostly on system memory rather than the gfx-card. E.g. a slow-clocked R8500DV in a 1GB rig gives quite a bit better results than an R8500@310/320 in a 256MB, but otherwise equally specced, rig. The CPU also seems to be as important as the gfx-card.

            3. It's a DX8.0 benchmark, not an 8.1 one. They're only using pixel shader 1.1, not 1.3 or even 1.4 - most likely to please all those Geforce3 owners.

            4. It's IMO badly coded. Look at those system requirements and those incredibly slow speeds even on >2GHz rigs with 512MB RAM and a Geforce4600 - then look at 3DMark's nature test, which is IMO better looking overall and runs smoothly. Or take a look at the stunning Chameleon demo from NVidia, which also runs smoothly. Or the ATI pixel shader demos. No, I don't think I need this engine...
            Last edited by Indiana; 6 April 2002, 05:46.
            But we named the *dog* Indiana...
            My System
            2nd System (not for Windows lovers )
            German ATI-forum



            • #21
              Does the benchmark run on a Parhelia? Because it won't run on my MAX.
              Main: Dual Xeon LV2.4Ghz@3.1Ghz | 3X21" | NVidia 6800 | 2Gb DDR | SCSI
              Second: Dual PIII 1GHz | 21" Monitor | G200MMS + Quadro 2 Pro | 512MB ECC SDRAM | SCSI
              Third: Apple G4 450Mhz | 21" Monitor | Radeon 8500 | 1,5Gb SDRAM | SCSI



              • #22
                Originally posted by KeiFront
                Does the benchmark run on a Parhelia , because it won't run on my MAX .
                Yes, it probably will... I think your G400 MAX is lacking hardware pixel shaders.



                • #23
                  Originally posted by Indiana
                  I've tried the benchmark and I've got to say I'm not impressed:

                  1. It's not really good looking; the grass is quite bad, the sky is bad. Only the trees/leaves and the water are decent, and we've seen this before.

                  2. It's not a good gfx-card benchmark; from what I've seen, the results depend mostly on system memory rather than the gfx-card. E.g. a slow-clocked R8500DV in a 1GB rig gives quite a bit better results than an R8500@310/320 in a 256MB, but otherwise equally specced, rig. The CPU also seems to be as important as the gfx-card.

                  3. It's a DX8.0 benchmark, not an 8.1 one. They're only using pixel shader 1.1, not 1.3 or even 1.4 - most likely to please all those Geforce3 owners.

                  4. It's IMO badly coded. Look at those system requirements and those incredibly slow speeds even on >2GHz rigs with 512MB RAM and a Geforce4600 - then look at 3DMark's nature test, which is IMO better looking overall and runs smoothly. Or take a look at the stunning Chameleon demo from NVidia, which also runs smoothly. Or the ATI pixel shader demos. No, I don't think I need this engine...

                  You have to remember one thing though: 3DMark 2001 uses at most about 250,000~300,000 polys in its high-detail test scenes, with the highest amount being used in the Dragothic high-detail run, so poly-wise it isn't as demanding as this bench, not even close...

                  While the nature test in 3DMark looks nice, it has nowhere near the detail this test has; the grass density in this bench is so high you can't even see the ground, period, like you can in the 3DMark nature test...

                  As far as the pixel shader issue goes, there's no real difference in pixel shaders 1.1 through 1.4, save for the fact that if a developer chooses to use 1.3 or 1.4 pixel effects, cards that only support up to 1.1 will need to render the effect in more than one pass, resulting in lower performance...

                  The updated version of 3DMark added an advanced pixel shader test which Radeon 8500 cards run faster than any GF3 card, due to the fact that they can do all the effects in a single pass, even though the final output looks the same on both cards...
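                  The single-pass vs. multipass point can be sketched with a toy cost model (the pass counts and screen size below are purely illustrative; real costs also depend on bandwidth and state changes):

```python
# Toy model of the multipass penalty: an effect that a ps1.4 card renders
# in one pass must be split into several passes on a ps1.1 card, and each
# extra pass re-shades (and re-blends) every covered pixel.
def shading_work(pixels: int, passes: int) -> int:
    """Total pixel-shading operations for one frame of the effect."""
    return pixels * passes

pixels = 1024 * 768                      # assume a screen-sized effect
ps14_cost = shading_work(pixels, 1)      # Radeon 8500 style: single pass
ps11_cost = shading_work(pixels, 3)      # assumed 3-pass fallback on ps1.1
print(ps11_cost / ps14_cost)  # → 3.0: same image, three times the fill work
```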


                  But overall, while 3DMark is still a very nice bench, it's fast becoming outdated as a viable benchmark, since it's getting all too easy to get high scores and see the individual tests run at well over 60+ fps...

                  This bench, however, is quite likely to be a serious challenge for any hardware released this year (and possibly next year as well) to run at 60+ fps, even at 1024×768, let alone 1600×1200...


                  • #24
                    This bench, however, is quite likely to be a serious challenge for any hardware released this year (and possibly next year as well) to run at 60+ fps, even at 1024×768, let alone 1600×1200...
                    heh just wait, we'll see we'll see
                    "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

                    "Always do good. It will gratify some and astonish the rest." ~Mark Twain



                    • #25
                      This benchmark uses only a bit more than double the polygons of 3DMark. And the grass, while dense, is poorly done; the main reason you don't see the floor is the grass's length.
                      It's still too slow and too demanding; there's not much optimising been done, apparently (if at all).
                      The additional polygons don't translate into better looks compared to e.g. 3DMark - not to speak of the newer NVidia tech demos, which are simply amazing (sounding like Steve Jobs already... ) and run smoothly despite very high polygon counts.

                      Besides, your PixelShader argument should be seen vice versa: i.e., by using lower PixelShader revisions this test hinders new gfx-chips, as they are forced to do all the same cycles as an old 1.1-only chip, while they could probably do the same in one pass.

                      And the major weakness of this benchmark is that it's at least as dependent on system RAM and CPU (and most probably on AGP transfers as well) as on the gfx-card.
                      Last edited by Indiana; 6 April 2002, 13:05.


                      • #26
                        Originally posted by Indiana
                        This benchmark uses only a bit more than double the polygons of 3DMark. And the grass, while dense, is poorly done; the main reason you don't see the floor is the grass's length.
                        It's still too slow and too demanding; there's not much optimising been done, apparently (if at all).
                        The additional polygons don't translate into better looks compared to e.g. 3DMark - not to speak of the newer NVidia tech demos, which are simply amazing (sounding like Steve Jobs already... ) and run smoothly despite very high polygon counts.

                        Besides, your PixelShader argument should be seen vice versa: i.e., by using lower PixelShader revisions this test hinders new gfx-chips, as they are forced to do all the same cycles as an old 1.1-only chip, while they could probably do the same in one pass.

                        And the major weakness of this benchmark is that it's at least as dependent on system RAM and CPU (and most probably on AGP transfers as well) as on the gfx-card.

                        Remember now, the poly budgets in this test are doubled relative to the high-detail tests in 3DMark, which most systems run at 60~70 fps on average, not the lower-detail runs, which use about 100,000 polys per frame and where you see current cards getting well over 150 fps at times... So it's only normal in this bench, with poly budgets going to 500,000~600,000 polys per frame, that your average fps drops to the high-20s/low-30s range (GF4 4600)... And with all that geometry data, it necessarily needs lots of memory, preferably on the video card itself, to store it all...
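                        A quick sanity check on those numbers, assuming triangle throughput is the bottleneck (the sustained rate below is an assumption chosen to match the quoted fps, not a measured spec):

```python
# If a card sustains a fixed number of triangles per second, the fps ceiling
# is just throughput divided by the per-frame triangle budget, so doubling
# the budget halves the achievable frame rate.
def fps_limit(tris_per_second: float, tris_per_frame: int) -> float:
    return tris_per_second / tris_per_frame

SUSTAINED = 18_000_000  # assumed sustained triangles/sec for a GF4-class card

print(fps_limit(SUSTAINED, 300_000))  # 3DMark-style budget -> 60.0 fps
print(fps_limit(SUSTAINED, 600_000))  # this bench's budget -> 30.0 fps
```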


                        But it isn't just the amount of polys that accounts for the slowdown either; the draw distances of the environments are much larger than what we see in the nature test of 3DMark, where everything is much closer, with carefully chosen camera angles that view the environment from above, pointed down, not at grass level and straight ahead like this bench does, so z-buffer traffic is much higher than in 3DMark.

                        As far as pixel shaders go, it's all about to be rendered a moot point anyway, since DX9 will go directly to pixel and vertex shaders 2.0, so the most likely thing to happen is that once enough DX9 cards are on the market, developers will switch directly from pixel shaders 1.1 to 2.0...

                        Hey Greebe, you sure about that?...


                        • #27
                          My point still stands: this benchmark looks mediocre at best and is unoptimised. I get 85fps in the Dragothic high-detail test (the one with the highest poly count), but only 15 in this Codecreatures test. This bad performance cannot be explained solely by the doubled polygon count and some extra Z-buffer data (and I'd tend to disagree with your argument about those "carefully chosen camera angles").
                          And this PixelShader argument is typically NVidia-like: anything that NVidia doesn't implement should be skipped anyway, while everything that NVidia implements is an absolute must-have (T&L, e.g., back in GF256 times)...

                          Still, the worst thing about this "benchmark" is that it tests more or less the amount of system RAM and the CPU speed instead of gfx-card speed...


                          • #28
                            oooh pretty colors, and smooth
                            I'm with the ugly guy below me

                            (It's amazing how many threads I kill with that line )



                            • #29
                              Originally posted by Indiana

                              And this PixelShader argument is typically NVidia like: anything that NVidia doesn't implement should be skipped anyways - while everything that NVidia implements is an absolute must have (T&L e.g. at the GF256 times)....
                              I was with you all except for this statement, Indiana. I've had both a GF3 and now a GF4 Ti4600 (with 3 weeks left to return it for a full refund; get on the ball with that announcement, Big M!).

                              If ATI 8500s are the only cards to support DX8.1 shaders, they will most likely be skipped over by most developers, who will work with either NO shaders, DX8.0 shaders (to get ALL shader-capable card owners as part of their audience), or DX9 shaders (since nV, Matrox, and ATI are all rumored to be working on hardware that supports this).

                              It all comes down to how many of your target customers have hardware to support each API. I think nV decided that making radical changes just to add ps1.4 capabilities isn't worth the time and money, especially given that 1) DX9 is rumored to be very close, so a part with support for it should be a priority, and 2) everything that can be done in ps1.4 can be done in ps1.1-1.3, even if it is multipass. Further, I have yet to see real proof that the single pass the Radeons make improves performance over the equivalent output done using multiple passes on non-ps1.4 hardware (basically, on nVidia hardware, since only ATI and nV make ps-capable stuff right now).

                              Anyhow, I couldn't care less about the ATI/nV war. Come on Matrox, give me a reason to return this card (notice: if you announce a revolutionary new card in the next three weeks, even with expected retail as far out as August, I WILL return my Ti4600 and stuff the cash under my mattress for use on the new M card. )
                              "..so much for subtlety.."

                              System specs:
                              Gainward Ti4600
                              AMD Athlon XP2100+ (o.c. to 1845MHz)



                              • #30
                                Originally posted by Indiana
                                My point still stands: this benchmark looks mediocre at best and is unoptimised. I get 85fps in the Dragothic high-detail test (the one with the highest poly count), but only 15 in this Codecreatures test. This bad performance cannot be explained solely by the doubled polygon count and some extra Z-buffer data (and I'd tend to disagree with your argument about those "carefully chosen camera angles").
                                And this PixelShader argument is typically NVidia-like: anything that NVidia doesn't implement should be skipped anyway, while everything that NVidia implements is an absolute must-have (T&L, e.g., back in GF256 times)...

                                Still, the worst thing about this "benchmark" is that it tests more or less the amount of system RAM and the CPU speed instead of gfx-card speed...

                                Listen, "a little z-buffer traffic" in this case deserves the understatement-of-the-year award when you're talking scenes with that much depth complexity and draw distances larger than in the nature test in 3DMark... And if you watch carefully while the bench is running, you'll see that there's no LOD system implemented in the engine, meaning that the scenery you see in the background is just as detailed (poly-wise) and uses the same textures as the objects that are closer...


                                LOD systems are used precisely to increase overall performance in current (and future) games, at the expense of visual quality, allowing games to run on less video card and system memory than they otherwise would without one... just ask any developer if you don't believe me...
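                                A minimal sketch of the kind of distance-based LOD selection the post says this engine lacks (the thresholds are made-up illustrative values):

```python
# Pick a level of detail from camera distance: nearby objects get the full
# mesh, distant ones get progressively coarser (cheaper) meshes.
def pick_lod(distance: float, thresholds=(50.0, 150.0, 400.0)) -> int:
    """Return 0 for full detail, up to len(thresholds) for the coarsest mesh."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

print(pick_lod(10.0))    # → 0: full-detail mesh
print(pick_lod(200.0))   # → 2: mid-distance, coarser mesh
print(pick_lod(1000.0))  # → 3: background object, cheapest mesh
```

                                Without something like this, background geometry costs as much as foreground geometry, which is exactly the slowdown described above.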


                                And it isn't CPU-dependent either: I've seen scores from systems outfitted with the same amount of memory and the same video card, and even when one of those systems uses a much faster CPU, the overall score increases by perhaps 2~3 fps, no more, which actually makes it an excellent test for gauging graphics cards' rendering performance...

                                Even with my own system, I get seriously smoked (twice the score, at least) by other users who have a GF4 card and CPUs running at half the clock speed mine is...

                                The thing to remember is that 3DMark 2001 is yesterday's news and hardly presents a serious challenge; pretty much any system you can get your hands on today, even on a fairly tight budget, gets pretty high scores as it is.

                                And my comment about pixel shaders has nothing to do with being pro-nvidia at all; it's about installed-user-base issues and developers only going to the trouble of using features when the user base is large enough to justify it... As tempting as it is for any given card maker, be it ATI, nvidia, matrox, or SIS, to have one feature that sets it apart from the others, that feature ends up being useless for games, except for the odd tech demo, if other cards don't have it, and you know it; there's ample proof of that over the last couple of years, so there's no need to quote examples either...


                                Card makers should concentrate on having the best possible performance/visual-quality implementation of a fixed set of features common to all of them, which ultimately speeds up their adoption in actual games, which is what this is all about anyhow...
