iXBT's claims of the G450


  • #16
    CannyOne: Maybe that "theory" thing was a bit over the top, but I really like the idea of a dual memory bus on future Matrox products, and we should all know if this is going to be true on the 25th of April.
    PS: No need to apologize, I understand your gripes with all the Voodoo 5 hype going on, I was only a bit disappointed that nobody else was able to see what I saw :-)

    dneal: You're definitely getting the idea! If true, this will really be some exciting news for Matrox fans (more so than V5 benchs anyway, heh, heh).
    Why do you think memory latency will be cut in half? Because of simultaneous read/write operations? Please enlighten me on that one. If true, that's even better than I thought. Now add FC DDR to that, which itself reduces memory latency significantly, and Matrox is definitely going to have the most advanced memory subsystem around. Combine that with a massive fillrate, advanced T&L, EMBM etc. (G800)...drool.

    AJ: I always believed those numbers you're pointing to referred to the number and size of the memory chips for the 16MB and 32MB cards, but I honestly don't know for certain. Maybe someone can confirm/deny that?
    I think that point about lower signal noise is brilliant! I think that's the reason why we're still stuck with a 128bit memory bus right now, and it makes the idea of a dual memory bus look even better to me (reduced bus noise + increased bandwidth + lower latency!!!). You're definitely right that the info we have is vague, but since ixbt explicitly confirmed the info about the 64bit memory bus, I'm pretty sure it's going to be some sort of dual bus; otherwise it wouldn't make any sense to reduce the bus width for the G450, and Matrox isn't going to shoot itself.

    What really makes me wonder is why nobody noticed the dual memory bus thing in the "Condor" info, since this is pretty much "old news". Maybe those who knew were under NDA, and everybody else was disappointed by the specs, like myself, and therefore nobody read them carefully. It was this thread that made me read them again, because I thought - hmm, weren't they talking about some sort of dual bus in the info? Then this could make some sense - and tadaa: the info clearly stated "64bit dual memory bus".
    "All I can say is that the G450 will be an excellent videocard, but not the favorite of hardcore gamers" (taken from MURC News "Straight From The Horse's Mouth" by Aces Hardware). Wouldn't this statement make perfect sense if my assumptions are true ("excellent videocard" = dual memory bus + "not the favorite of hardcore gamers" = sub-par fillrate compared to V5/NV15/Rage6)? Well, we'll have to wait 'till next week to be sure, but at least for the G800 this would be exciting news, even for the "hardcore gamers".

    Ole

    ------------------
    My System: P3 450@558, Asus P3B-F(Bios 1005), 128 MB PC-100(2,3,3,4, fast), G400 Max@174/218 (MGA-Tweak:2,5/2/2,5,435) PD 5.52wTGL1.30, 13GB Quantum Fireball CR, Pioneer 40*CD-Rom, SBLive!Value (Liveware 3.0)



    • #17
      Jutai, maybe we can get the tekkies around here interested, if we keep the thread at the top, hehe (trying to force fortune). btw: Matrox engineers kick a$$!!!

      ------------------
      My System: P3 450@558, Asus P3B-F(Bios 1005), 128 MB PC-100(2,3,3,4, fast), G400 Max@174/218 (MGA-Tweak:2,5/2/2,5,435) PD 5.52wTGL1.30, 13GB Quantum Fireball CR, Pioneer 40*CD-Rom, SBLive!Value (Liveware 3.0)



      • #18
        Proton: Yes, I believe that a dual bus would definitely have inherent latency-reducing qualities. Memory latency is the delay between read/write accesses, so a dual memory bus would, by definition, have approx. half the latency of a one-way setup. It would never be exactly half - that would be 100% efficiency, and we all know that pretty much never happens in our universe.

        What this all means is..........I want a friggin G800! Gooooood Loooord Maaaaan.........

        And yes, Matrox better ship it with lots of Matrox stickers. I wanna put 'em on everything


        ------------------
        ABIT BF6, Pentium III SL35D 450MHz -> 630MHz, 192MB PC100 SDRAM @ 140MHz, Toshiba 6X DVD, Quantum Fireball Plus KA, Quantum Fireball ST,
        Matrox Millenium G400 DH @ 160/200, Creative SBLive Value, 3Com Fast Etherlink XL PCI
        Supermicro SC701A ATX 300watt TurboCool PC Power & Cooling PS, Panasonic Panasync S17
        Last edited by dneal; 20 May 2022, 08:58.



        • #19
          Look, I'm not saying that bandwidth isn't important, but considering the fairly huge memory bandwidth of the G400 line, the framerates are lower compared to some competitors with less bandwidth.

          Now two other things that can affect frame rates are: (1) the drivers, which have been getting better and it shows, and (2) the max fillrate (efficiency) of the core, which seems to be somewhat lacking in the G400. Now the G450 may improve core efficiency and up the clock rate; then the bandwidth problem would be more evident.

          I need to think a bit more about the dual bus issues, but by the time I understand it, the specs will probably be out!

          As far as the low cost version business goes, I'm suggesting that they may have in mind a low cost version (for motherboards and OEMs) with a lower clock and lower memory bandwidth, plus a "MAX" version for the gamer market with a higher clock rate and greater bandwidth.

          Yeah, it would be nice if a few hardware types got in on this discussion.

          -AJ



          [This message has been edited by AJ (edited 20 April 2000).]
          Trying to figure out what Matrox is up to is like trying to find a road that's not on the map, at night, while wearing welder's goggles!



          • #20
            AJ: It's true that the current G400 series has plenty of bandwidth compared to competitors, especially the Max. I experimented a little with Q3A@1024HQBilinear and overclocking my Max, and found that overclocking the core without the memory (180/200) wouldn't give me any significant result, while overclocking both (174/218) gave a pretty significant performance boost. I'm not really sure if those results were valid, since overclocking the core to 180MHz was definitely pushing the limits of my card (it crashed bigtime in 3DMark2000), but it still indicates that memory bandwidth is a major bottleneck at higher resolutions with 32bit color.
            The SDR GeForce is the perfect example of why memory bandwidth is so important for highres/truecolor gaming. In Q3A@1024HQBilinear the GeForce SDR is only a few fps faster than my overclocked G400 Max, despite its much higher fillrate, due to its memory bandwidth limitation. The G450 should have a higher core clock speed than current G400 boards, and therefore needs more memory bandwidth than those. This could be achieved by using DDR-Ram combined with the current 128bit memory bus, but all rumours we've heard so far indicate that the G450 will have a 64bit memory bus. IMO this leaves us with two possibilities:
            1. The rumours were wrong and the G450 will use a standard 128bit memory bus with DDR-Ram. I think this would be the easiest and the cheapest way for Matrox to avoid bandwidth limitations with the G450.
            2. The rumours are right and the G450 will indeed feature a 64bit dual memory bus. If our speculations are right, this could be the better solution and could improve the overall performance of the G450 quite a bit.
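            For what it's worth, here's a quick back-of-the-envelope comparison of those two options as a sketch. The 166MHz clock is purely an assumed number for illustration, not a rumoured G450 spec:

```python
# Peak memory bandwidth for the two speculated G450 configurations,
# versus the current 128bit SDR bus. Clock speed is an assumption.

def bandwidth_mb_s(bus_width_bits, clock_mhz, ddr=False, buses=1):
    """Peak bandwidth in MB/s: width in bytes * clock * 2-if-DDR * bus count."""
    return bus_width_bits // 8 * clock_mhz * (2 if ddr else 1) * buses

current_sdr = bandwidth_mb_s(128, 166)                    # 128bit SDR bus
option_1    = bandwidth_mb_s(128, 166, ddr=True)          # 128bit DDR
option_2    = bandwidth_mb_s(64, 166, ddr=True, buses=2)  # dual 64bit DDR

print(current_sdr, option_1, option_2)  # 2656 5312 5312
```

            Note that at equal clocks both options give the same *peak* figure; if the dual bus helps beyond that, it would be through two independent transfers in flight at once (concurrency/latency), not raw peak bandwidth.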
            I think we'll have to wait until the official specs are released, but it's fun to speculate a little.

            Ole

            ------------------
            My System: P3 450@558, Asus P3B-F(Bios 1005), 128 MB PC-100(2,3,3,4, fast), G400 Max@174/218 (MGA-Tweak:2,5/2/2,5,435) PD 5.52wTGL1.30, 13GB Quantum Fireball CR, Pioneer 40*CD-Rom, SBLive!Value (Liveware 3.0)



            • #21
              Yes, at high res, 32bit color, every card out now is limited by bandwidth. I think the charts I used to draw the conclusions in my previous post were at lower resolutions and 16bit color. Then some of the competitors had much higher frame rates, but I didn't check to see which driver they were using (e.g. TGL on an SSE or 3DNow CPU).

              I'm curious about the framerate changes when OCing at low res/color settings. Maybe I'll play around a bit when my wounded Sony gets back from the shop (of course, my current CPU in that machine, a Celeron 400, is probably the bottleneck - hence a coming CPU upgrade to a PIII 700). My vanilla G400 has the 6ns SGRAM and seems to overclock nicely.

              Bottom line: for a non-value card with a ~200MHz core, they will likely need the equivalent of a 128bit bus running at 300MHz or more to get decent performance at high res/color depth (since they will have a fillrate just shy of the DDR GeForce in multitexture apps).
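              That "128bit at 300MHz" figure roughly checks out with some simple arithmetic. The pipeline count, bytes-per-pixel and zero-overdraw numbers below are illustrative assumptions, not known G450 specs:

```python
# Rough sanity check of the bandwidth a ~200MHz core would need.
# Assumes a dual-pipeline core and ignores texture reads and overdraw.

core_mhz = 200
pipelines = 2
fill_mpixels = core_mhz * pipelines           # 400 Mpixels/s peak

# Per pixel: 4-byte color write + 4-byte Z read + 4-byte Z write = 12 bytes
bytes_per_pixel = 12
needed_mb_s = fill_mpixels * bytes_per_pixel  # 4800 MB/s

bus_128_at_300 = 128 // 8 * 300               # 128bit bus at 300MHz: 4800 MB/s
print(needed_mb_s, bus_128_at_300)            # 4800 4800
```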

              I think it would be really cool if the frame rates on the top-end G450 actually beat out the NV10. I stick with Matrox because I hate the lousy 2D quality of everybody else's cards; it's really nice when the performance of the Matrox is near the top as well. Maybe if I bought cheap monitors, I wouldn't care (not!).

              -AJ


              [This message has been edited by AJ (edited 20 April 2000).]
              Trying to figure out what Matrox is up to is like trying to find a road that's not on the map, at night, while wearing welder's goggles!



              • #22
                Internally there is never a speed difference between 16bit and 32bit calculations. Most cards (Voodoo1/2/3, G200 and G400; not sure about the TNT and TNT2) calculate internally at 32bit.

                However, memory bandwidth is usually the limiting factor. At 16bit the memory bandwidth is good enough, but 32bit (meaning a 32bit framebuffer, Z-buffer and textures) requires twice the bandwidth.
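                To put a number on that doubling, here's a quick sketch. The per-pixel byte counts are simplified assumptions (one color write, one Z read, one Z write, no overdraw, textures ignored):

```python
# Framebuffer + Z-buffer traffic at 1024x768, 60 fps, for 16bit vs
# 32bit color/Z depth. Simplified: no overdraw, no texture reads.

width, height, fps = 1024, 768, 60
pixels_per_sec = width * height * fps

def traffic_mb_s(color_bytes, z_bytes):
    # color write + Z read + Z write per pixel
    return pixels_per_sec * (color_bytes + 2 * z_bytes) / 1e6

print(traffic_mb_s(2, 2))  # 16bit color, 16bit Z
print(traffic_mb_s(4, 4))  # 32bit color, 32bit Z -- exactly twice
```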

                Most chips have a small internal cache so data can be retrieved much more quickly, just like a CPU's internal cache. The setup looks like this:

                Core <-> Internal cache <-> External memory

                Between core and internal cache is the internal bus. Between internal cache and external memory is the external bus.

                The TNT2 has both an internal and an external bus of 128 bits. The G400 has a 256bit internal bus and a 128bit external bus. The 256bit bus is divided into one 128bit bus for reading and one 128bit bus for writing.

                AJ, the core of the G400 is not that effective at 16bit; at 32bit, however, it is much more effective than, say, a TNT2.

                When benchmarking my G400 at 150 MHz against a TNT2 at 150 MHz, the fillrate of the G400 at 32bit is 200 MTexels/sec. The TNT2, however, only scored 150 MTexels/sec. So the core of the G400 is indeed very effective.
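                Those measured numbers can be turned into an efficiency figure. Treating both chips as 2-pipeline parts at 150 MHz is an assumption for the comparison, not a spec from this thread:

```python
# Measured 32bit fillrate vs a theoretical 2-pipeline peak at 150 MHz.
# The "2 pipelines" figure is an assumed simplification.

clock_mhz, pipes = 150, 2
theoretical = clock_mhz * pipes          # 300 MTexels/s peak

for name, measured in [("G400", 200), ("TNT2", 150)]:
    pct = round(measured / theoretical * 100)
    print(name, pct, "%")                # G400 67 %, TNT2 50 %
```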

                However, the G400's OpenGL drivers in particular aren't as optimized as those for the TNT2, so at lower resolutions, where fillrate isn't the limiting factor, the TNT2 indeed wins. Although the TurboGL is a big step forward, it is still not as fast as nVidia's.



                • #23
                  Franksch3, I'm not sure about the G400 being slower than a TNT2 at 16bit colour OGL (Q3A). The TurboGL drivers improved Q3A performance at low resolutions and 16bit color quite a bit. The way I see it, the G400 and TNT2 are pretty much equal in OGL thanks to Matrox' TGL drivers. Same with the Max and Ultra versions.

                  In Q3AHQ the TNT2 seems to be faster than the G400s (not by much), but the reason for this is a driver trick from NVidia. The TNT2 won't do true trilinear filtering, because this results in a major performance hit. Instead NVidia uses some sort of dithering to emulate trilinear filtering. If you compare both cards in Q3AHQ and turn off trilinear, the G400 wins hands down, because Matrox optimised the G400s for 32bit rendering. Most reviewers simply use the predefined settings in Q3A, but this is somewhat unfair, because at the HQ setting, where the G400s should shine, the trilinear filtering results in a major performance hit for them (~5fps). In contrast, the TNT2 takes close to zero performance hit, thanks to NVidia's driver trick.

                  To be fair though, Matrox uses only a 16bit Z-buffer as the default setting for G400s. When you turn on 32bit color in Q3A, the TNT2 uses a 32bit Z-buffer (24bit Z/8bit stencil), whereas the G400 stays at 16bit Z-buffer accuracy. You can enable a 32bit Z-buffer for the G400s, but when I did this, in 32bit mode I lost about 8fps! That's even more than the performance hit for trilinear.
                  On a side note: I wasn't able to play Q3A with the 32bit Z-buffer enabled and the PD5.52 ICD. In 16bit Z-buffer mode it worked, but the Q3A driver information reported "32bit color/16bit-Z/8bit-stencil". I think the ICD forces 16bit Z-buffer/8bit stencil, because the G400 is slow with 24bit Z/8bit stencil enabled, but this won't work if the 32bit Z-buffer is enabled. It's not really an issue to me, because TGL is faster on my system and there was no visible difference between the 16bit and 32bit Z-buffer in Q3A, but I think it's an indication that the G400s have some sort of problem with 32bit Z-buffering.
                  If you compare OGL (=Q3A) performance of the G400 with the TNT2, it really depends on the game settings you use. There's always the possibility of making one of them look better fps-wise. To me, they are pretty much equal performance-wise, but the G400's visuals are superior to the TNT2's. From my experience, I would recommend using the TGL, a 16bit Z-buffer and bilinear filtering for the best performance in Q3A with G400s. I'm getting 41.1 fps @1024HQ with this setup, which is perfectly playable to me and looks far better than any TNT2 I've seen so far.
                  To get back on topic: with a G450, you should get even better results than that, if my guess about the dual memory bus turns out to be true and the clock speed is around 200MHz. This card looks like a nice upgrade for all G200 users, but I will definitely wait for the G800. Until then, my G400 Max should serve me well.

                  Ole

                  ------------------
                  My System: P3 450@558, Asus P3B-F(Bios 1005), 128 MB PC-100(2,3,3,4, fast), G400 Max@174/218 (MGA-Tweak:2,5/2/2,5,435) PD 5.52wTGL1.30, 13GB Quantum Fireball CR, Pioneer 40*CD-Rom, SBLive!Value (Liveware 3.0)



                  • #24
                    Hi Proton,

                    some clarification about the Z-buffer and stencil buffer:

                    The following possibilities exist for the G400, TNT, TNT2 and GeForce:

                    - 16 bit Z-buffer, no stencil buffer
                    - 24 bit Z-buffer, 8 bit stencil buffer
                    - 32 bit Z-buffer, no stencil buffer.

                    To be exact: for each pixel, 4 bytes are reserved. In the 24/8 mode, the first 3 bytes are used for the Z-buffer and the remaining one for the stencil buffer.
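                    A tiny sketch of that 4-byte layout, with 24 bits of Z and 8 bits of stencil sharing one 32bit word (which byte holds the stencil value is an illustrative assumption; real hardware layouts vary):

```python
# Packing/unpacking a 24bit Z value and an 8bit stencil value into
# one 32bit word, as described above. Stencil lives in the top byte here.

def pack_z_stencil(z24, stencil8):
    assert 0 <= z24 < (1 << 24) and 0 <= stencil8 < (1 << 8)
    return (stencil8 << 24) | z24

def unpack_z_stencil(word):
    return word & 0xFFFFFF, word >> 24

word = pack_z_stencil(0xABCDEF, 0x12)
print(hex(word))                      # 0x12abcdef
print(unpack_z_stencil(word))         # (11259375, 18)
```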



                    • #25
                      Hi Franksch3,

                      That's what I thought in the first place, but with the 5.52 ICD and the 32bit Z-buffer option in PowerDesk unchecked, Q3A reports 16bit Z/8bit stencil in "driver information". With TGL it's 16bit Z/no stencil, or 24bit Z/8bit stencil if I enable the 32bit Z-buffer in PowerDesk. With TGL it works according to your explanation, because Q3A uses stencil buffering, but it makes me wonder what the ICD is doing.

                      Let me see if I can understand what you said. 4 bytes = 32 bits, which can be used for a 32bit Z-buffer (all 4 bytes) or a 24bit Z-buffer (3 bytes) plus an 8bit stencil buffer (1 byte). A 16bit Z-buffer must be 2 bytes for each pixel (2*8=16). Therefore the ICD seems to use 3 bytes per pixel to do what Q3A reports. This seems to be impossible, because I always thought the G400s would only be able to use a 16bit Z-buffer unless you activate the 32bit Z-buffer through PD.

                      I don't know if you have Q3A or if you're using the 5.52 ICD, but if you do, please check the "driver information" (in Q3A) with 32bit Z unchecked (in PD) and tell me if it reports the same thing. Maybe it's just me, but there were reports of corrupted shadows in Q3A with the PD 5.52 ICD on these boards when the drivers were released, and I've seen them as well. Couldn't that be related to the stencil buffer, since Q3A uses stencil shadows? Just curious.

                      Ole

                      ------------------
                      My System: P3 450@558, Asus P3B-F(Bios 1005), 128 MB PC-100(2,3,3,4, fast), G400 Max@174/218 (MGA-Tweak:2,5/2/2,5,435) PD 5.52wTGL1.30, 13GB Quantum Fireball CR, Pioneer 40*CD-Rom, SBLive!Value (Liveware 3.0)



                      • #26
                        I don't have Quake 3 installed, because I'm much more of an Unreal Tournament fan. ;-)

                        But it might be that internally a 24bit Z-buffer is used with an 8bit stencil buffer, and that the driver reports it wrongly to Quake 3.

                        There is no 3D card out there which will access memory with a non-16bit or non-32bit alignment, because of the enormous performance penalties that would incur.

                        I think that it's a driver bug.



                        • #27
                          It's not a 64 bit memory bus, it's a 64 bit DDR memory interface.

                          The G450 will also include a second ramdac, and integrated circuitry for flat-panels and hardware DVD.



                          • #28
                            and when will the damn thing arrive?



                            • #29
                              in the summertime when the weather is hot..



                              • #30
                                You're kidding- August?

                                Or do you live in a desert in the northern hemisphere- or near the equator?



                                Does it ever get "hot" in Quebec?

