'Fusion' cards

  • Originally posted by Ant:
    Keep guessing folks

    Originally posted by Joel:
    NDAs suck.

    $%^$@%&@$%&$#@%#! NDAs suck for those NOT under NDA



    • Believe me, it can also suck BIG TIME for those under it, because there are some things you just want to be able to say, but you just can't

      Rags



      • There is if you look hard enough and read between the lines



        • Why would there be G800 info in this thread? I thought it was all about a 'fusion' something, which appeared in the driver files as a 'g450 fusion' and a 'g800 fusion plus'. So fusion isn't necessarily connected to a specific video core in the first place. And there's plenty of info in this thread, as Ant already said... just be sure to read between the lines of those knowledgeable about what it is (not that I completely realised until AFTER I knew what it was... but that's probably because I'm plain stupid )

          [This message has been edited by dZeus (edited 05 February 2001).]



          • Originally posted by Maggi:
            Electric Amish,

            you want a time frame?

            Well here you go ...

            The G800 will be released before 2002 !!!

            This isn't so funny anymore...



            • Matrox will debut a new video card this year. I don't know what the specs will be, and I don't know when it will be out.

              I just KNOW that they WILL produce a new card this year.

              amish
              Despite my nickname causing confusion, I have no religious affiliations.



              • I made screenshots of all the lines in the topic... scaled them up 16x... nothing between the lines except white & grey pixels...

                perhaps the fault of the crappy TNT I'm using right now...

                regards
                wulfman
                _______________
                G400 is dead... at least mine at the moment... if you can hear me, my beloved G400: have a nice holiday in Canada!
                "Perhaps they communicate by changing colour? Like those sea creatures .."
                "Lobsters?"
                "Really? I didn't know they did that."
                "Oh yes, red means help!"



                • Originally posted by Rags:
                  Believe me, it can also suck BIG TIME for those under it, because there are some things you just want to be able to say, but you just can't

                  Rags

                  Ok then let's trade places



                  • Tim Sweeney wasn't referring to the fact that CPU speeds are increasing faster than bus speeds; everyone here already knows this.

                    He specifically said that PCs' overall performance is deep into diminishing-returns territory because of bus speeds. That's a pretty big difference between the two, no?

                    The P4 does indeed have a 3x advantage in bus speed, but only from the memory to the motherboard chipset to the CPU itself, nothing else.

                    All the other buses are still the same (the AGP and PCI buses), so I'm sure the full advantage of the P4's bus still isn't being used yet, at least in some applications like games.

                    In fact, there was yet another developer (Roman Ribaric, one of the developers of Serious Sam) who specifically wished for higher AGP speeds.

                    And if bus bandwidth isn't holding systems back, then why does DX8 include support for compressed vertex formats for geometry (similar to what S3TC does for textures)?
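
                    A rough back-of-the-envelope (a few lines of Python) for the compressed-vertex point above. Every size and count below is an assumption for illustration, not a vendor or Microsoft figure; the point is just how much AGP traffic raw geometry can generate and what a packed vertex format would save:

                    # Sketch only (assumed sizes, not vendor figures): per-frame geometry
                    # traffic over AGP, and what a packed vertex format saves.

                    FPS = 60                 # assumed target frame rate
                    TRIS_PER_FRAME = 50_000  # assumed scene complexity
                    VERTS_PER_TRI = 3        # worst case: no vertex sharing or strips

                    FULL_VERTEX = 3 * 4 + 3 * 4 + 2 * 4  # float pos + normal + uv = 32 bytes
                    PACKED_VERTEX = 3 * 2 + 4 + 2 * 2    # 16-bit pos, packed normal, 16-bit uv = 14 bytes
                    AGP_4X_PEAK_MB = 1066                # theoretical AGP 4x peak, in MB/s

                    for name, size in (("full float", FULL_VERTEX), ("packed", PACKED_VERTEX)):
                        mb_s = TRIS_PER_FRAME * VERTS_PER_TRI * size * FPS / 1e6
                        print(f"{name:>10}: {mb_s:5.0f} MB/s "
                              f"({100 * mb_s / AGP_4X_PEAK_MB:.0f}% of AGP 4x peak)")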

                    note to self...

                    Assumption is the mother of all f***ups....

                    Primary system :
                    P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...



                    • Originally posted by superfly:
                      Tim Sweeney wasn't referring to the fact that CPU speeds are increasing faster than bus speeds; everyone here already knows this.

                      He specifically said that PCs' overall performance is deep into diminishing-returns territory because of bus speeds. That's a pretty big difference between the two, no?

                      Let's see what he said:

                      "Nowadays, even though processor speeds are going up, real-world performance is deep into diminishing returns territory -- the bottleneck is now memory, hard disk, and bus, which aren't progressing as fast as they need to."

                      It seems to me he's simply saying that increasing CPU speed is leading to diminishing returns on overall system performance. He's not saying that the biggest bottleneck in today's games is the bus speed.

                      Using your same reasoning, one could argue that hard drives are the limitation in today's games. I hardly think anyone will agree with you there: the hard drive increases load times, but once everything is loaded it makes next to no difference to game speed.


                      Originally posted by superfly:
                      The P4 does indeed have a 3x advantage in bus speed, but only from the memory to the motherboard chipset to the CPU itself, nothing else.

                      All the other buses are still the same (the AGP and PCI buses), so I'm sure the full advantage of the P4's bus still isn't being used yet, at least in some applications like games.

                      In fact, there was yet another developer (Roman Ribaric, one of the developers of Serious Sam) who specifically wished for higher AGP speeds.

                      And if bus bandwidth isn't holding systems back, then why does DX8 include support for compressed vertex formats for geometry (similar to what S3TC does for textures)?

                      You're right, the P4 still uses AGP 4x.

                      All that's needed to make fast 3D games is a fast memory-chipset bus, CPU-chipset bus and AGP bus, and of course fast memory, CPU, video card and chipset.

                      Of those, benchmarks in any 3D app at 1024x768 yield similar results (ie Quake 3), regardless of whether it's on a P4 or a PC133 Athlon 1000. So what does that tell us?

                      Well, the P4's faster CPU, memory, memory bandwidth, CPU bandwidth and chipset are not the bottleneck. That leaves two things: the video card, or the AGP bus. Now anyone here who's tried playing with their AGP settings will tell you there's next to NO difference switching between AGP 1x, 2x or 4x; in fact, I still run my G400 at 1x because it's unstable at 2x.

                      So what does this leave us with? The video card isn't fast enough. And as I've said before, it's all because there isn't enough video memory bandwidth.
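
                      For what it's worth, a rough Python sketch of the arithmetic behind that last claim; the overdraw, texel traffic and frame rate are assumptions, not measurements:

                      # Back-of-the-envelope (all numbers assumed, nothing measured): raw memory
                      # traffic for a 32-bit frame at 1024x768 with a Z buffer, versus the
                      # theoretical bandwidth of a GeForce2 GTS-class card.

                      WIDTH, HEIGHT = 1024, 768
                      OVERDRAW = 3       # assumed average depth complexity
                      COLOR_BYTES = 4    # 32-bit colour write per pixel
                      Z_BYTES = 4 + 4    # Z read + Z write per pixel
                      TEXEL_BYTES = 8    # assumed texel traffic per pixel after caching
                      FPS = 100          # assumed target for a Quake 3-style benchmark

                      bytes_per_frame = WIDTH * HEIGHT * OVERDRAW * (COLOR_BYTES + Z_BYTES + TEXEL_BYTES)
                      needed_gb_s = bytes_per_frame * FPS / 1e9
                      GF2_GTS_PEAK_GB_S = 5.3  # 128-bit DDR at 166 MHz, theoretical peak

                      print(f"needed: ~{needed_gb_s:.1f} GB/s, card peak: {GF2_GTS_PEAK_GB_S} GB/s")
                      # Sustained bandwidth sits well below peak, which is where the wall comes from.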



                      • Rob: if you don't need a depth buffer to implement HSR, tell me how to do it without one.

                        And also, I do not believe it was ever mentioned that the depth buffer keeps track of polygons already rendered.

                        Also, saying that the Kyro needs 30 Mbit of memory for the depth buffer to render at 1600x1200 is BS, mostly because, due to the architecture, it doesn't track the whole scene at once. The Sharky Extreme article talks about how it renders the scene and how its architecture differs from the norm.
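
                        For reference, the storage arithmetic for a conventional full-screen depth buffer at that resolution (the bit depths are generic assumptions, not Kyro specifics; the 30 Mbit figure above is in the same ballpark as a 16-bit Z buffer, while a tiler only keeps Z for one tile on-chip at a time):

                        # Storage cost of a conventional full-screen depth buffer at 1600x1200.
                        # Bit depths are generic assumptions, not taken from any particular card.

                        WIDTH, HEIGHT = 1600, 1200
                        for z_bits in (16, 24, 32):
                            total_bytes = WIDTH * HEIGHT * z_bits // 8
                            print(f"{z_bits}-bit Z: {total_bytes / 2**20:4.1f} MB "
                                  f"({total_bytes * 8 / 1e6:4.1f} Mbit)")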

                        And I would not say that all single-chip architectures require faster memory. The Voodoo4/VSA-100 architecture scales better with core speed because of its design. It all depends on how the card is designed. Increasing memory speed on the Kyro would not improve performance all that much - mostly because the Kyro was designed to *not* lean on memory bandwidth as heavily as a GF2 does.

                        nVidia cards are beasts from an architectural standpoint. They are sloppy and thrown together, not to mention horrendously inefficient.
                        "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



                        • The main reason why there isn't a big difference in performance as you move up from AGP 1x/2x, etc., is because game developers WILLINGLY limit themselves so that their games stay playable on the average machine (and don't hurt potential sales), but the bottleneck exists nonetheless if you want the much higher quality graphics that current T&L-enabled cards and GHz+ CPUs are quite capable of.

                          And yes, in my view - and I'm sure it's the opinion most here will have if they read his answer - Sweeney said exactly that bus speeds are holding back performance, to the point that you now need a several-hundred-MHz increase in CPU clock speed to see sometimes no better than an extra 20~30 fps, assuming the video card hasn't hit its fill-rate limits, and you know this very well.

                          But just as those two developers have mentioned bus limitations, so have many others, like Dave Baranec, lead FreeSpace 2 developer, whom I personally talked with in the FreeSpace 2 forums several months ago regarding that very issue, and he stated the exact same thing as the others.

                          Even nVidia has stated that with careful optimization, using an engine built with the GeForce in mind (currently there are none, btw) and assuming 60 fps (for smooth gameplay), regardless of resolution, color depth or depth complexity (overdraw), you'd be limited to 50,000 polys per frame max - and it isn't because that's pushing the T&L capabilities of the card, it's a bus speed limitation.

                          This means that even if you're running a game at low resolution, color depth and depth complexity, where video card bandwidth or fill rate is a non-issue, if the developer uses too many polys in order to make everything look as good as possible, you risk turning what would normally be a smooth game into a slide show, because all that extra vertex data will choke the system bus quite easily.

                          And you still ignored the fact that DX8 now has vertex compression routines built in.

                          Now if there's enough bandwidth to go around, then why go to the trouble of developing them, since their only reason for existing is bandwidth savings, nothing more?

                          You mentioned that at 1024x768 the performance of recent video cards starts leveling out because of insufficient bus bandwidth, and I never disagreed with you on that point in the first place: push a card hard enough (resolution, color depth, depth complexity, etc.) and you'll reach the limits of any video card (for the time being anyway) in terms of either fill rate or local bus bandwidth.
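
                          A rough Python sweep of that 50,000-polys-per-frame figure against AGP throughput; the vertex size and the sustained AGP number are assumptions, only meant to show the shape of the ceiling being described:

                          # Sketch only: per-frame geometry traffic over AGP at 60 fps for a few
                          # polygon counts. Vertex size and the sustained AGP figure are assumptions.

                          FPS = 60
                          VERTS_PER_TRI = 3         # worst case: no sharing between triangles
                          VERTEX_BYTES = 32         # assumed uncompressed vertex (pos + normal + uv)
                          AGP4X_SUSTAINED_MB = 500  # assumed real-world throughput (peak is ~1066 MB/s)

                          for polys in (50_000, 100_000, 200_000):
                              mb_s = polys * VERTS_PER_TRI * VERTEX_BYTES * FPS / 1e6
                              print(f"{polys:>7} polys/frame -> {mb_s:5.0f} MB/s "
                                    f"({100 * mb_s / AGP4X_SUSTAINED_MB:.0f}% of assumed sustained AGP 4x)")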







                          [This message has been edited by superfly (edited 06 February 2001).]
                          note to self...

                          Assumption is the mother of all f***ups....

                          Primary system :
                          P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...



                          • Originally posted by DGhost:
                            Rob: if you don't need a depth buffer to implement HSR, tell me how to do it without one.

                            Let the CPU feed the video card the triangles back-to-front.

                            Originally posted by DGhost:
                            And also, I do not believe it was ever mentioned that the depth buffer keeps track of polygons already rendered.

                            It still hadn't been mentioned until your post. I said it keeps track of the *depth* of rendered triangles. But if that's not what it's for, then please, enlighten us all; the world needs to know exactly what a depth buffer does

                            Originally posted by DGhost:
                            Also, saying that the Kyro needs 30 Mbit of memory for the depth buffer to render at 1600x1200 is BS, mostly because, due to the architecture, it doesn't track the whole scene at once. The Sharky Extreme article talks about how it renders the scene and how its architecture differs from the norm.

                            Even if there were this supposed 'mini depth buffer', then by definition it's not a true depth buffer (as long as no new definitions get made up anytime soon). Why do you think the thing needs 10 megs of video memory to emulate a depth buffer??

                            Originally posted by DGhost:
                            And I would not say that all single-chip architectures require faster memory. The Voodoo4/VSA-100 architecture scales better with core speed because of its design. It all depends on how the card is designed. Increasing memory speed on the Kyro would not improve performance all that much - mostly because the Kyro was designed to *not* lean on memory bandwidth as heavily as a GF2 does.

                            nVidia cards are beasts from an architectural standpoint. They are sloppy and thrown together, not to mention horrendously inefficient.

                            Yes, that's why 3dfx bought out nVidia - because nVidia's inferior designs drove them right out of business. We should now all bow down and praise the pinnacle of all 3D graphics cards, the Voodoo4.

                            Get real: if 3dfx couldn't design a board with two chips that would outperform one of nVidia's chips, and nVidia's chips are 'sloppy and thrown together', then what does that say about the VSA-100? What does it say about every video card manufacturer out there when they can't make a faster card?

                            I'll tell you why the VSA-100 is core limited, and prepare to be shocked: 3dfx's design stunk! After designing the chip, they realized that it performed woefully, and teaming it up with DDR would simply have been a waste of bandwidth with such an underperformer of a chip. That's why they put two chips on the V5. They went with standard SDRAM to keep the costs low, and because the core was too slow to perform any better with faster memory anyway!
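
                            For anyone following the back-and-forth, here is a tiny Python sketch of the two approaches being argued about: a per-pixel depth test versus having the CPU sort triangles back-to-front (the painter's algorithm). Purely illustrative; the data layout is made up.

                            # Minimal sketch of both approaches. Triangles are reduced to single
                            # (x, y, depth, colour) fragments so the idea fits in a few lines.

                            def zbuffer_render(fragments, width, height, far=1e9):
                                """Keep the nearest depth seen per pixel; reject anything farther."""
                                depth = [far] * (width * height)
                                colour = [None] * (width * height)
                                for x, y, z, c in fragments:       # fragments arrive in ANY order
                                    i = y * width + x
                                    if z < depth[i]:               # nearer than what is stored?
                                        depth[i] = z
                                        colour[i] = c
                                return colour

                            def painter_render(fragments, width, height):
                                """No depth buffer: sort back-to-front, nearer writes simply win."""
                                colour = [None] * (width * height)
                                for x, y, z, c in sorted(fragments, key=lambda f: -f[2]):
                                    colour[y * width + x] = c      # farthest drawn first
                                return colour

                            # Two fragments covering the same pixel at different depths:
                            frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "blue")]
                            assert zbuffer_render(frags, 1, 1) == painter_render(frags, 1, 1) == ["blue"]

                            The back-to-front sort handles simple scenes but breaks on intersecting or cyclically overlapping triangles, which is one reason hardware keeps a per-pixel depth buffer anyway.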



                            • Take a look at this schematic I made to explain what I'm talking about below...

                              http://www.geocities.com/frankymail/dual-channel.jpg

                              Hey,
                              Everyone in this forum acknowledges that graphics cards have grown memory-bandwidth limited; I wonder why graphics chip manufacturers haven't used a dual-channel architecture like the one used by Intel's i840 and i850 chipsets (not that they should use RDRAM, although RDRAM and MoSYS are probably the memory designs of mid-future graphics cards...)

                              E.g.: let's take a potential vanilla Millennium G800 (go ahead, flame me, but I would walk through the burning flames of Hell to get one, as deep in my heart I KNOW such a legend is true), sporting a chip with something like 4 pixel pipes with 3 textures per pipe, operating at a modest 200 MHz on a 0.13 µm process (low heat dissipation; read "POSSIBLE MOBILE SOLUTION"!), with a very conservative 32 MB of DDR SDRAM operating at 200 MHz but divided into two memory banks, the G800 having a bus for each of the two banks. That would effectively DOUBLE the memory bandwidth to 12.8 GB/s - yes, TWELVE POINT EIGHT GIGABYTES PER SECOND of memory bandwidth - which is more than the GeForce 3's 8 GB/s. Also, ALL the on-board memory would be accessible; remember the Voodoo 5 5500? It sported 64 MB, yet the effective memory capacity was only 32 MB, as all textures had to be loaded into both 32 MB memory "banks". The only con would be the added 128-something pins on the chip and the added traces... a small price to pay for such bandwidth at that cost!!!
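
                              The arithmetic behind that 12.8 GB/s figure in Python, assuming each of the two banks gets its own 128-bit bus (the bus width is my assumption; the 200 MHz clock and DDR signalling are from the post above):

                              # Sketch of the dual-channel claim. Per-channel bus width is an assumption
                              # (128-bit); the 200 MHz clock and DDR signalling come from the post.

                              BUS_BITS = 128   # assumed width of each memory channel
                              CLOCK_MHZ = 200  # memory clock from the post
                              DDR = 2          # two transfers per clock
                              CHANNELS = 2     # two independent banks, each on its own bus

                              per_channel_gb_s = BUS_BITS / 8 * CLOCK_MHZ * 1e6 * DDR / 1e9
                              print(f"per channel: {per_channel_gb_s:.1f} GB/s, "
                                    f"total: {per_channel_gb_s * CHANNELS:.1f} GB/s")  # 6.4 and 12.8 GB/s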

                              (BTW, I think I'm growing impotent because of all that G800-induced stress; last night, my gal and I only danced the Grand Mambo once and then both fell asleep watching "The Late Late Show with Craig Kilborn"...)

                              Francis Beausejour
                              Frankymail@yahoo.com

                              ----------------------------------------
                              - Intel P3 850 MHz Retail O/C 1.133 GHz
                              - Alpha FC-PAL6030
                              - Asus CUSL2-M
                              - 256 MB TinyBGA PC150 (2 DIMMs)
                              - Matrox Millennium G400 DH 32 MB O/C 175/233
                              - Sony E400 + Sony 210GS (the E400 is a beauty, and very cheap too!!!)
                              - Quantum Fireball LM 10 GB + AS 40 GB
                              - Teac W54E CD-RW + Panasonic 8X DVD-ROM
                              - Sound Blaster Live! Value
                              - 3Dfx VoodooTV 200
                              Francis: 19/Male/5'8"/155Lbs/light blue eyes/dark auburn hair/loves SKA+punk+Hard/Software... just joking, I'm not THAT desperate (again, Marie-Odile, if you ever happen to read this (like THAT could happen...), I'm just joking and I REALLY, REEEAAALLLLLYYY love you!))

                              What was necessary was done yesterday;
                              We're currently working on the impossible;
                              For miracles, we ask for 24 hours' notice ...

                              (Workstation)
                              - Intel - Xeon X3210 @ 3.2 GHz on Asus P5E
                              - 2x OCZ Gold DDR2-800 1 GB
                              - ATI Radeon HD2900PRO & Matrox Millennium G550 PCIe
                              - 2x Seagate B.11 500 GB SATA
                              - ATI TV-Wonder 550 PCI-E
                              (Server)
                              - Intel Core 2 Duo E6400 @ 2.66 GHz on Asus P5L-MX
                              - 2x Crucial DDR2-667 1GB
                              - ATI X1900 XTX 512 MB
                              - 2x Maxtor D.10 200 GB SATA



                              • OK... so you implement it by having the CPU feed the triangles back to front... how do you do that without a depth buffer? And how does feeding them back to front perform HSR on a scene??

                                The whole idea of HSR is that it *doesn't* render the triangles that are behind others. You keep a depth buffer, and when you issue the command that causes it to render, it goes through and eliminates the triangles that are not visible (or it does it as you add them - it's the same math, just done all at once versus as you go) from the depth buffer, and *then* sends the remaining data in the depth buffer to the renderer. That way it's not having to render triangles that are not visible, or even do anything to them.

                                The way the Kyro renders, it stores the poly data on the card and then takes it into the core in chunks, applying FSAA and HSR to each of these chunks as it processes them, reducing the amount of memory required to implement a depth buffer and making it easier to perform - thus the name 'tile-based rendering'. It still has a true depth buffer in the core, but it's designed to handle a relatively small chunk of polys. If nothing else, HSR can be done as it's moving data between the external 'emulated' depth buffer and the internal depth buffer.
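
                                A toy Python sketch of that tile idea: a small, reusable depth buffer per screen tile, so a full-screen Z buffer never has to live in external memory. The tile size and fragment format are made up for illustration, not Kyro specifics.

                                # Toy tile-based renderer: the screen is processed in fixed-size tiles and
                                # only a tile-sized depth buffer ever exists (reused for every tile).

                                TILE = 32  # assumed tile edge in pixels

                                def render_tiled(fragments, width, height, far=1e9):
                                    colour = [None] * (width * height)

                                    # 1) Bin fragments by the tile they land in (the scene-capture step).
                                    bins = {}
                                    for x, y, z, c in fragments:
                                        bins.setdefault((x // TILE, y // TILE), []).append((x, y, z, c))

                                    # 2) Process one tile at a time with a tiny, reusable depth buffer.
                                    for tile_frags in bins.values():
                                        depth = [far] * (TILE * TILE)   # "on-chip" Z for this tile only
                                        for x, y, z, c in tile_frags:
                                            i = (y % TILE) * TILE + (x % TILE)
                                            if z < depth[i]:            # hidden fragments go no further
                                                depth[i] = z
                                                colour[y * width + x] = c
                                    return colour

                                print(render_tiled([(0, 0, 5.0, "red"), (0, 0, 2.0, "blue")], 1, 1))  # ['blue']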


                                You mention DDR memory... DDR is a marketing ploy. It does not deliver anywhere near the peak bandwidth it claims to. There are numerous places elsewhere on the net that will back up this statement...

                                3dfx actually had computers design their core. Maybe that's why their cards actually worked on all platforms. Maybe that's why they don't require heatsinks as big as your processor and use less space per chip, despite using an older manufacturing technology. Maybe that's why they don't have to release drivers every week and have a list of known bugs longer than I am tall. Maybe, despite the fact they were outdated, they had more than hype and a marketing department backing them.

                                I agree that the VSA-100 architecture was too little, too late, but a Voodoo4 will outperform a GF2 MX on an Athlon. Or a P3. Especially at high resolutions. If nVidia is the god you claim them to be, why would they ever release a card that is crippled and gutted, castrated for all purposes?

                                And why are you going to complain when you're arguing about differences beyond what you can actually perceive?
                                "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz

