
G400 MAX can still kick some ass


  • G400 MAX can still kick some ass


    Check this little benchmark from sharkyextreme - the G400 MAX with driver 5.5 managed to kick the Viper 2's ass in some benchmarks...
    ------------------
    Cloudy
    Asus P2B-DS, 2 x Celeron 450 (400@75Mhz), 192Mb Ram, SB Live! Platinum,
    2 x IBM 4.3Gb scsi, IBM 22GB IDE, Pioneer DVD ROM scsi, G400 32MB DH.

  • #2
    We know the G400 Max kicks butt, but let's not compare it to the Viper 2. The Viper 2 features T&L (at least in OpenGL), which, once implemented in games, will give the Viper 2 an advantage over a lot of older cards.

    Then again, the Viper 2 could be faster if only S3 managed to get the hardware bugs out of their silicon! For now that's an advantage in all non-T&L titles, and yes, the G400 Max beats it! =)

    One nice thing to finish with: old cards can run T&L games or games with a lot of polygons, but no card (not even the GeForce) can run EMBM if it lacks the feature! Now who's the winner?
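    For anyone wondering what EMBM actually does under the hood: the bump map stores per-texel (du, dv) offsets which get pushed through a 2x2 matrix and added to the environment map's texture coordinates before the lookup. A rough software sketch of that per-texel step (hypothetical names, just to illustrate the math the hardware does):

    /* EMBM coordinate perturbation - a software sketch of what the hardware
       does per texel.  m00..m11 form the 2x2 bump matrix. */
    typedef struct { float u, v; } TexCoord;

    TexCoord embm_perturb(TexCoord env, float du, float dv,
                          float m00, float m01, float m10, float m11)
    {
        TexCoord out;
        out.u = env.u + m00 * du + m01 * dv;  /* perturbed env-map u */
        out.v = env.v + m10 * du + m11 * dv;  /* perturbed env-map v */
        return out;                           /* sample the env map here */
    }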



    • #3
      Evidently you didn't read HardOCP's T&L study.

      Evidently, T&L (on GeForce cards, not sure about other T&L cards) actually provides LESS FPS on faster machines (PIII 600+) than if T&L is disabled.

      On slower machines, it does provide an increase.

      T&L was a lot of hype...



      • #4
        Well, why is this? It seems we get these results because what has been used until now are static benchmarks which do not really demand the maximum from the T&L unit on the GeForce; they do not use a variety of totally different 3D models, which makes it easy for software T&L to keep up. This does not mean T&L was not hyped - it was - but there are some positive aspects to T&L.
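        To make "software T&L" concrete: with the hardware unit disabled, the CPU runs a loop like the one below for every vertex, every frame. A minimal sketch under that assumption - not any particular engine's actual code:

        /* Software transform: push each model-space vertex through a combined
           4x4 model-view-projection matrix on the CPU (column-major layout).
           Hardware T&L moves exactly this loop onto the graphics chip. */
        typedef struct { float x, y, z, w; } Vec4;

        void transform_vertices(const float m[16], const Vec4 *in, Vec4 *out, int count)
        {
            int i;
            for (i = 0; i < count; ++i) {
                out[i].x = m[0]*in[i].x + m[4]*in[i].y + m[8]*in[i].z  + m[12]*in[i].w;
                out[i].y = m[1]*in[i].x + m[5]*in[i].y + m[9]*in[i].z  + m[13]*in[i].w;
                out[i].z = m[2]*in[i].x + m[6]*in[i].y + m[10]*in[i].z + m[14]*in[i].w;
                out[i].w = m[3]*in[i].x + m[7]*in[i].y + m[11]*in[i].z + m[15]*in[i].w;
            }
        }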

        Take the DMZG demo - doesn't it run faster than on all other cards when hardware T&L is enabled?

        What I wanted to say was: the G400 is great and the Max is splendid, but compared to the GeForce it is an older card, missing a good T&L implementation nowadays! If it were otherwise, why are there plans to implement T&L in the G800?

        This shall not end in a battle, so let me just say that I'm happier with my G400 Max than I was with my Creative Labs Annihilator Pro (a GeForce DDR card)! Happy?! =)



        • #5
          But isn't it also amazing that they keep comparing the others to the G400, and still saying that for the best all-around card the G400 is the way to go?

          Joel
          Libertarian is still the way to go if we truly want a real change.

          www.lp.org

          ******************************

          System Specs: AMD XP2000+ @1.68GHz(12.5x133), ASUS A7V133-C, 512MB PC133, Matrox Parhelia 128MB, SB Live! 5.1.
          OS: Windows XP Pro.
          Monitor: Cornerstone c1025 @ 1280x960 @85Hz.



          • #6
            Well, the G400 may not be the fastest, but how many cards can you say look that great?
            Chief Lemon Buyer no more Linux sucks but not as much
            Weather nut and sad git.

            My Weather Page



            • #7
              You are all right. I also think the G400 is a very good card, and its graphics quality, especially in D3D, is fantastic, BUT it is not the fastest today! We will have to see how exactly it compares to the others in really new, higher polygon count games and apps!

              Besides, Matrox would greatly improve their product if they'd get out a superb, stable and fast OpenGL ICD or MCD! The latest TurboGL does not support stencil shadows, and the ICD (5.41) is too slow. The ICD from 5.50.010 and earlier (BETA) is not stable! Hmmmm... Matrox, however, is not the only company with driver problems.



              • #8
                The GeForce DDR is in most situations the fastest board available. However, this is an early implementation of T&L (for nVidia, at least), and it appears enabling it produces a performance hit when benchmarked with software not specifically optimized for it.

                This has been discussed to death for the last month or so. T&L on the GeForce, while a neat feature, produces no real world benefits. It seems a fast CPU just does the job better than a GPU.

                I have several computers running, and although I favor Matrox, I'm still an nVidia user, and I continue to visit the nVidia fan sites. You will not find many of them extolling the virtues of the T&L feature on the GeForce these days, other than an admiration of nVidia for taking the lead and implementing it on a gaming board.

                I think one buys a GeForce because it is a very fast board, and that alone can enhance gaming. Not because T&L enhances gaming in any way, because, at this point in time, it just doesn't.

                Paul
                paulcs@flashcom.net



                • #9
                  The fact remains that per-polygon lighting is a very cheap alternative to per-pixel lighting. For the next few years we will probably see a trend towards T&L, because per-pixel lighting is too expensive (in terms of fillrate).

                  But the drawback of the per-polygon implementation of T&L is that the number of polygons increases tenfold or even more while not adding any extra detail; the extra polygons are needed purely for lighting purposes. And that makes the CPU too slow in comparison to T&L in hardware.
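                  Why the extra polygons, roughly: with per-polygon (per-vertex) lighting the light term is only evaluated at the vertices and interpolated across the triangle, so a highlight smaller than a triangle simply vanishes unless you tessellate finer. A sketch of that per-vertex diffuse term (hypothetical names):

                  /* Per-vertex diffuse lighting: intensity = max(0, N.L),
                     computed once per vertex and Gouraud-interpolated across
                     the whole triangle - hence the need for more vertices. */
                  typedef struct { float x, y, z; } Vec3;

                  float vertex_diffuse(Vec3 n, Vec3 l)   /* both unit length */
                  {
                      float d = n.x * l.x + n.y * l.y + n.z * l.z;  /* N . L */
                      return d > 0.0f ? d : 0.0f;        /* clamp backfacing */
                  }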

                  When per-pixel lighting is introduced (there's already support in OpenGL and probably even DirectX 7 :-) ) the extra polygons are not needed anymore, because lighting becomes part of the rendering stage instead of the transformation stage. That way fewer polygons are necessary and only transformation is required. And that should be easily handled by a CPU.

                  Although per-pixel lighting is expensive (in terms of fillrate) for current chipsets, it might be much cheaper if a company implemented lighting for, say, 4x4 pixels at a time and used bilinear interpolation to smooth it. That way 16 times fewer calculations are needed, and it would still look many times better than per-polygon lighting. Matrox? Please? As the graphical quality company? ;-)
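                  The 4x4 idea in concrete terms, as I imagine it: evaluate the expensive lighting function only at the corners of each 4x4 block and bilinearly interpolate inside it. A sketch under that assumption (light_at() stands in for whatever the full per-pixel evaluation would be):

                  /* Block lighting: call light_at() only at the corners of a
                     4x4 pixel block and fill the block by bilinear
                     interpolation - roughly 16x fewer full evaluations. */
                  extern float light_at(int x, int y);   /* the expensive part */

                  void light_block(float out[4][4], int bx, int by)
                  {
                      float c00 = light_at(bx,     by);
                      float c10 = light_at(bx + 4, by);
                      float c01 = light_at(bx,     by + 4);
                      float c11 = light_at(bx + 4, by + 4);
                      int x, y;
                      for (y = 0; y < 4; ++y) {
                          for (x = 0; x < 4; ++x) {
                              float fx = x / 4.0f, fy = y / 4.0f;
                              float top    = c00 + (c10 - c00) * fx;
                              float bottom = c01 + (c11 - c01) * fx;
                              out[y][x] = top + (bottom - top) * fy;
                          }
                      }
                  }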

                  Frank

                  [This message has been edited by franksch3 (edited 21 February 2000).]



                  • #10
                    O.K., you might be right; however, T&L is interesting! Maybe not as much as it was hyped, but we will surely see more and better implementations!

                    Maggi: Hey, I know it was just an idea, but yeah, I've got everything set to run the stencil shadows; it's just not working in the latest TurboGL drivers! What I get is the following: you can see the stencil shadow correctly, but you can also see the shadow beams from the object to the shadow on the ground, wall, etc... This is kind of annoying, especially if the light source is behind you, because all you see is some grey stuff!

                    I've been using a visual-quality-optimized config file (no, not the high quality settings in the game, but a personally edited cfg file) and it worked with the earlier TurboGL and the other graphics cards I've tested in this system! It looks perfect, so I'd like to get it back on the G400.

                    Hmmmm... now if only Ant or Kruzin would give me an address where I can be sure the mail is read. I've sent off a mail to Matrox concerning my drivers and other stuff, but nothing in return today!

                    [This message has been edited by ParaKnowYa (edited 21 February 2000).]



                    • #11
                      posted by ParaKnowYa:

                      ... The latest TurboGL does not support stencil shadows and the ICD ...
                      Could it be that your Zbuffer is set to 16bit ???

                      When it's set to 32bit, Q3 - for example - takes 24bit as Z-buffer and 8bit as stencil buffer ... have a look into your PD options.
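                      For reference, this is that split at the pixel-format level. A minimal Win32/OpenGL sketch (assuming an existing device context hdc; not Q3's or PowerDesk's actual code) of asking for 24 bits of Z plus 8 bits of stencil:

                      /* Request a 32-bit mode whose depth buffer splits into
                         24 bits of Z and 8 bits of stencil. */
                      #include <windows.h>

                      int set_stencil_pixel_format(HDC hdc)
                      {
                          PIXELFORMATDESCRIPTOR pfd;
                          int format;

                          ZeroMemory(&pfd, sizeof(pfd));
                          pfd.nSize        = sizeof(pfd);
                          pfd.nVersion     = 1;
                          pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
                          pfd.iPixelType   = PFD_TYPE_RGBA;
                          pfd.cColorBits   = 32;
                          pfd.cDepthBits   = 24;   /* 24-bit Z-buffer ...           */
                          pfd.cStencilBits = 8;    /* ... leaves 8 bits for stencil */

                          format = ChoosePixelFormat(hdc, &pfd);
                          if (format == 0)
                              return 0;                       /* no matching format */
                          return SetPixelFormat(hdc, format, &pfd);
                      }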

                      Despite my nickname causing confusion, I am not female ...

                      ASRock Fatal1ty X79 Professional
                      Intel Core i7-3930K@4.3GHz
                      be quiet! Dark Rock Pro 2
                      4x 8GB G.Skill TridentX PC3-19200U@CR1
                      2x MSI N670GTX PE OC (SLI)
                      OCZ Vertex 4 256GB
                      4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
                      Super Flower Golden Green Modular 800W
                      Nanoxia Deep Silence 1
                      LG BH10LS38
                      LG DM2752D 27" 3D
