'Fusion' cards


  • Rags: I'm not saying we couldn't use more memory/FSB bandwidth, only that video memory bandwidth is the major limitation. The P4's strength is certainly in its platform, not the CPU. However, I like to play Quake 3 with all the details on at 800x600. I'd go higher if I could, and I'd kill for FSAA. Turn all that on, though, and I'd probably get less than 5 fps, and it wouldn't matter if I had a K6-2 300 or a P4 overclocked to 2 GHz; I'd still get 5 fps, because the limitation is the video memory.
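
    As a rough illustration of that claim, here's a back-of-the-envelope sketch in C (my own numbers, not measurements; the overdraw factor and frame-rate target are assumptions) of the raw frame-buffer traffic those settings generate before a single texel is fetched:

        #include <stdio.h>

        /* Back-of-the-envelope: raw color + z traffic at 800x600.
         * Overdraw and the fps target are assumed values; texture
         * fetches and FSAA would come on top of this. */
        int main(void)
        {
            double pixels    = 800.0 * 600.0;
            double overdraw  = 3.0;  /* assumed average depth complexity */
            double bytes_pix = 4.0   /* 32-bit color write               */
                             + 4.0;  /* 16-bit z read + z write          */
            double fps       = 60.0; /* assumed target frame rate        */

            double mbs = pixels * overdraw * bytes_pix * fps
                       / (1024.0 * 1024.0);
            printf("~%.0f MB/s of pure pixel traffic\n", mbs);
            return 0;
        }

    That lands around 660 MB/s on its own, which is a serious chunk of the memory budget once textures are added on top.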

    DGhost: I'm assuming you're referring to the following paragraph:

    Since depth-testing and hidden surface removal are implemented entirely on-chip, an external depth buffer has been dispensed with. In contrast, GeForce accesses external, or off-chip, memory for depth-testing. Another important difference between Kyro and GeForce is the order in which depth-testing takes place. Kyro performs depth-testing and hidden surface removal prior to texturing. As only visible pixels are textured, the savings in texture bandwidth is a function of the extent of overdraw. GeForce performs depth-testing after textures have been applied to the polygon.
    You'll notice that nowhere does it say that the Kyro has a depth (or z-) buffer, only that it does away with an external one. It does say it does HSR and depth-testing on-chip; however, that does not mean it has a depth buffer.

    The depth buffer on all of today's cards is external (to the GPU), but it's still on the video card, in the video memory.

    Actually, I did see later on that they said '[provides] a depth buffer on-chip'. They are wrong. If it truly did have a depth buffer, there would be no need for the workaround they described on the same page.
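
    To make the ordering difference concrete, here's a toy C sketch (mine, not from either card's documentation; the fragment depths are invented) of texture-then-test versus test-then-texture:

        #include <stdio.h>

        #define PIXELS 8

        int main(void)
        {
            float zbuf[PIXELS];
            float frag_z[PIXELS] = {0.9f, 0.2f, 0.5f, 0.1f,
                                    0.7f, 0.3f, 0.8f, 0.4f};
            int i, textured;

            /* GeForce-style immediate mode: texture every fragment,
             * then depth-test it */
            for (i = 0; i < PIXELS; i++) zbuf[i] = 0.5f; /* prior depth */
            textured = 0;
            for (i = 0; i < PIXELS; i++) {
                textured++;                      /* texture fetch first */
                if (frag_z[i] < zbuf[i])         /* ...then z-test      */
                    zbuf[i] = frag_z[i];
            }
            printf("immediate: %d texture fetches\n", textured);

            /* Kyro-style deferred: depth-test first, texture only
             * the survivors */
            for (i = 0; i < PIXELS; i++) zbuf[i] = 0.5f;
            textured = 0;
            for (i = 0; i < PIXELS; i++) {
                if (frag_z[i] < zbuf[i]) {       /* z-test first        */
                    zbuf[i] = frag_z[i];
                    textured++;                  /* texture if visible  */
                }
            }
            printf("deferred:  %d texture fetches\n", textured);
            return 0;
        }

    With these made-up depths, the deferred loop does half the texture fetches for the identical final image; that saving is exactly "a function of the extent of overdraw".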

    You are right about the PCI speed, though. I don't know where that 512 number came from. AGP 2x, which has been around for something like 4 years now, does have 512 MB/s of bandwidth, so it can handle that transfer rate of 240 MB/s. Although I'm sure that after optimization you could lower it enough that the PCI bus could handle it too.



    • Originally posted by DGhost:
      UT is such a piss poor benchmark... notice how most benchmarkers use Q3 instead?

      also, you are referencing Toms Hardware. Toms Hardware is ultimately a joke. he cannot make up his mind on so many things, and it is so obvious that he is getting paid to test the hardware he reviews...

      the rendering engine on UT was designed for glide. on any other API it runs like ass.

      and as far as CPU speed goes, 640x480 is not pushing the video card...

      at higher resolutions it has to process each pixel on the screen... how many pixels it can process is not so much dependent on the speed of the memory as on the speed of the core. Memory speed will make a difference, but as long as it can keep up with the core, it's the core that's the limiting factor...

      at lower resolutions, though, it is the memory and FSB that affect performance the most....

      ..But my whole point was that even when you minimize the dependency on video memory bandwidth, the FSB and memory bandwidth are hardly what's limiting the system!

      As for the core limiting the speed, please explain how a GeForce2 clocked at 240/333 core/mem is slower than when it's clocked at 200/363.
      (You'll notice I didn't use Tom's benchmarks, although I've never seen any major discrepancy between his and anyone else's...)
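
      A toy model of why that happens (my own sketch; the four pixel pipes and 128-bit bus are real GF2 figures, but the 12 bytes of color+z traffic per pixel is an assumed average): the effective fill rate is the smaller of what the core can draw and what the memory can feed.

          #include <stdio.h>

          /* Effective fill rate = min(core-limited, memory-limited).
           * mem_mhz is the effective DDR rate, as in 240/333 vs 200/363. */
          static double effective_mpix(double core_mhz, double mem_mhz)
          {
              double core_fill = core_mhz * 4.0;    /* 4 pipes -> Mpixels/s */
              double bandwidth = mem_mhz * 16.0;    /* 128-bit bus -> MB/s  */
              double mem_fill  = bandwidth / 12.0;  /* assumed bytes/pixel  */
              return core_fill < mem_fill ? core_fill : mem_fill;
          }

          int main(void)
          {
              printf("240/333 -> %.0f Mpix/s\n", effective_mpix(240.0, 333.0));
              printf("200/363 -> %.0f Mpix/s\n", effective_mpix(200.0, 363.0));
              return 0;
          }

      Both configurations come out memory-limited here (444 vs. 484 Mpix/s), so the one with the faster memory wins despite the slower core.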



      • Originally posted by Maggi:
        What might happen when we start talking about the Millennium G² MMS ...

        oops - there you go, now it slipped.

        really?



          • NetSnake, Maggi is only teasing (D'oh!)
            "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

            "Always do good. It will gratify some and astonish the rest." ~Mark Twain



            • The card *has* to have a depth buffer in order to do HSR. There is no way around this.

              In that snippet it says that it does away with an external depth buffer, not that it gets rid of the depth buffer altogether.

              There is also additional bandwidth needed for storing the scene geometry (polygonal data) and its subsequent retrieval to extract only the visible portions of the scene geometry - hidden surface removal. Finally, scene geometry occupies video board memory. By default, ten megabytes have been set aside for this purpose. For the MBTR benchmark, 2.7 megabytes of video board memory was allocated as scene buffer.

              Dividing each scene into tiles (32x16 pixels) and passing each tile individually through the graphics pipeline have made it possible to maintain depth- and frame-buffers on-chip.
              So anyhow, I read 'external' as meaning off the card and 'non-external' as being in the card's memory, when in reality it was relative to the core...
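
              The tiling in that quote is what makes the on-chip buffers possible: a 32x16 tile is small enough for its depth values to live on the chip. A toy sketch (mine; the tile size is from the article, the rest is illustrative):

                  #include <stdio.h>

                  /* Toy illustration of tile-based rendering: one 32x16
                   * tile's depth values fit in a tiny on-chip array, so
                   * the frame never needs an external z-buffer. */
                  #define TILE_W 32
                  #define TILE_H 16

                  int main(void)
                  {
                      float tile_z[TILE_H][TILE_W]; /* on-chip: 512 entries */
                      int x, y;

                      for (y = 0; y < TILE_H; y++)  /* clear to far plane   */
                          for (x = 0; x < TILE_W; x++)
                              tile_z[y][x] = 1.0f;
                      /* ...depth-test all geometry touching this tile, then
                       * write only the visible pixels out to video memory. */

                      printf("1600x1200 = %d tiles of %d pixels\n",
                             (1600 / TILE_W) * (1200 / TILE_H),
                             TILE_W * TILE_H);
                      return 0;
                  }

              512 depth values is a few kilobytes of on-chip storage per tile, instead of megabytes for the whole frame.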


              it doesn't help to have a core running faster than the memory can move data - but there is no reason to increase memory speeds without increasing core speeds.

              If the card is pushed to its max, it becomes fillrate limited, at which point increasing core speed does affect performance more than memory speed. See the bottom of http://www.anandtech.com/showdoc.html?i=1276&p=16 for an example.

              Take the Kyro you were talking about. If you are pushing more polys at it than it can handle (which may happen within a year or two), faster core speed would be a bigger benefit to it than a faster memory bus, even though they are both important and you would have to scale them both.
              "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



              • Matrox said this in an interview (http://fullon3d.com/reports/gfx_q&a/q5.shtml)


                FO3D: If a genie gave you the choice between having your product installed in every business PC in the world or every home PC in the world, which option would you choose and why?

                Matrox: Both, ideally! People underestimate the opportunities in the business market. It is a huge market and is ever growing. However, not everyone can play in this market, as quality, reliability and support issues take precedence over who has 5 fps higher in Quake 3. It's one thing to make a claim on paper regarding having high-quality, unified drivers etc.; it's another thing to actually have them. Matrox understands the corporate market and caters to its diverse needs and stringent quality requirements. Matrox has focused on the commercial desktop for the past 5 years and has become the graphics vendor of choice for the top 10 PC manufacturers in their mainstream high-end business solutions and their entry-level workstations. But we also understand the needs of the enthusiast who wants the kicking home PC with all the bells and whistles, and we have no intention of ignoring this market.



                • This is interesting, as well

                  FO3D: How much life does the single chip approach to graphics design have?

                  Matrox: Quite a bit. The dual chip approach certainly has its advantages, from increased fill rate to added memory bandwidth, but at a cost... a cost not favourable for high volume shipments. It remains a niche product for a niche market, like the hard core gamer for instance. Depending on the user you are targeting and how price conscious they are, a dual chip solution may not be viable. The OEM and mainstream PC market, for example, is comprised of very price conscious consumers, to the extent that the added cost associated with a multi chip solution is not justifiable. Therefore, the single chip approach is here to stay.



                  • This is rather sad for 3dfx, but it seems to be an older interview, so if y'all already know this, I'm sorry, I didn't.

                    FO3D: What makes you think that you can survive in this market when others have found the going so tough?

                    3dfx: We have the best people and the best technology.

                    Matrox: Because we're Matrox. I like that. But seriously, Matrox is renowned for industry-leading 2D, 3D and video/graphics quality, and we are definitely innovators. It is no secret that it has taken over a year for the competition to capitalize on features we had already integrated in our hardware. Matrox has consistently catered to the needs of its market and therefore has a loyal customer base, which continues to grow through innovation and delivery of real-world features. We have extremely talented employees and an excellent support system that brings to light not only our technological prowess but our concern for the end user as well, which makes us a well-rounded company.



                    • Originally posted by Greebe:
                      NetSnake, Maggi is only teasing (D'oh!)
                      Yeah right ... bust me ...


                      Originally posted by Topha:
                      ...

                      FO3D: What makes you think that you can survive in this market when others have found the going so tough?

                      3dfx: We have the best people and the best technology.
                      ...
                      But you know that 3dfx doesn't exist anymore, don't you?

                      Sort of funny to read that response nowadays ...

                      Cheers,
                      Maggi

                      yes, that's right ... it is page 11, post #403
                      Despite my nickname causing confusion, I am not female ...

                      ASRock Fatal1ty X79 Professional
                      Intel Core i7-3930K@4.3GHz
                      be quiet! Dark Rock Pro 2
                      4x 8GB G.Skill TridentX PC3-19200U@CR1
                      2x MSI N670GTX PE OC (SLI)
                      OCZ Vertex 4 256GB
                      4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
                      Super Flower Golden Green Modular 800W
                      Nanoxia Deep Silence 1
                      LG BH10LS38
                      LG DM2752D 27" 3D



                      • Yep, I know that 3dfx is dead; that's why I posted it. Question is: does anyone know when that interview was, and whether Matrox still stands behind those statements about the gaming market?



                        • I guess that leaves some hope that Matrox is gonna release something "G800"-like in the next couple of months (or years).



                          • Originally posted by DGhost:
                            The card *has* to have a depth buffer in order to do HSR. There is no way around this.
                            Not true. Since it has all the data for the scene before it begins rendering, it *knows* which order to draw the triangles. There's simply no need for a buffer that keeps track of the depth of previously drawn triangles.

                            If the Kyro did integrate a depth buffer on-chip, then since it is capable of 1600x1200, the chip would need 30 megabits of onboard memory for 16-bit precision. That's almost 4 megs of integrated memory! Considering the Kyro only has 12 million transistors, it's impossible for the chip to have a full-resolution on-chip depth buffer (as it takes a minimum of 2 transistors to build the flip-flop circuit used to store each bit).
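
                            Spelling out that arithmetic in a quick sketch (the 2-transistors-per-bit figure is the post's own lower bound; real storage cells use more):

                                #include <stdio.h>

                                /* The arithmetic from the paragraph
                                 * above, spelled out. */
                                int main(void)
                                {
                                    long bits = 1600L * 1200L * 16L;
                                    /* = 30,720,000, i.e. ~30 Mbit      */
                                    double megs = bits / 8.0
                                                / (1024.0 * 1024.0);
                                    long xtors = bits * 2L;
                                    /* >= 2 transistors per stored bit  */

                                    printf("%ld bits = %.1f MB -> "
                                           "at least %ld transistors\n",
                                           bits, megs, xtors);
                                    /* vs ~12 million in the whole chip */
                                    return 0;
                                }

                            Over 61 million transistors just for depth storage, on a 12-million-transistor chip, is the contradiction.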


                            Originally posted by DGhost:
                            If the card is pushed to its max, it becomes fillrate limited, at which point increasing core speed does affect performance more than memory speed. See the bottom of http://www.anandtech.com/showdoc.html?i=1276&p=16 for an example.
                            I guess it depends on what card you're talking about. The V5 is very different from other cards in that it uses two chips and two memory banks in order to achieve a true doubling of memory bandwidth; benchmarks with DDR motherboards show that expecting a 50% increase in bandwidth from the DDR memory the Radeon and GF2 use is asking quite a lot.

                            So yes, the Voodoo5 is limited by its cores. However, in a single-chip solution, it's the video memory bandwidth that's the limiting factor. As nvidia has shown us with the GF2, it's quite possible to design a chip with a fillrate that's way too high for the video memory to keep up with.
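
                            To put rough numbers on that (assuming the same ~12 bytes of color+z traffic per pixel as in the toy model earlier in the thread): a GF2 GTS core can rasterize 200 MHz x 4 pipes = 800 Mpixels/s, but its 128-bit bus at 333 MHz effective peaks at about 5.3 GB/s, enough to feed only around 440 Mpixels/s. The core can draw nearly twice what the memory can sustain.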

                            As for future multi-chip video cards, the only time you'll see them is when a manufacturer is desperate to keep up with the current speed leader and doesn't have a better single-chip implementation. No one designs a chip from the ground up to be run in parallel; it's way too costly.

                            Originally posted by DGhost:
                            Take the Kyro you were talking about. If you are pushing more polys at it than it can handle (which may happen within a year or two), faster core speed would be a bigger benefit to it than a faster memory bus, even though they are both important and you would have to scale them both.
                            Yes, that's true. As I said in my first post on page 10, the two solutions to the video memory bandwidth problem are HSR and faster memory. The Kyro is an HSR card. A Kyro with DDR memory and a 200+ MHz core would simply destroy any video card out there right now...

                            [This message has been edited by Rob M. (edited 05 February 2001).]



                            • Originally posted by Greebe:
                              NetSnake, Maggi is only teasing (D'oh!)
                              Pfffff... no wonder this topic is half a year old, with all this teasing going on. I wonder if there is ANY real info on the G800 in these threads.

