Parhelia!!!


  • Originally posted by DGhost


    or the pixel shader 1.3 / no memory bandwidth saving features...
    I don't see how this can be a problem when you have nearly double the bandwidth of your competition. As for the pixel shader support... the earliest you'll see a DX9 game using that is two Christmases from now, and I'll bet there'll be a refresh of the Parhelia core next year to support DX9 and a shrink down to .13 micron.

    I think the original break of the NDA by the Far East website wasn't "that bad" compared to X-bit Labs' total trashing of the NDA and spilling of the beans. It might have been the translation of the website, but I remember that in the first paragraph of the article they posted, they alluded to something like: hey, Matrox left us out in the cold with this card, and now that we have this info we're going to "screw" them over by letting the cat out of the bag two days early.

    My 2 cents

    Scott
    Why is it called tourist season, if we can't shoot at them?

    Comment


    • I agree with GT98 on that bit. The tone of the X-bit article does seem quite 'down' on Matrox, so although some of what is there is obviously true, some of it also looks like parts taken out of context so that holes can be picked in them.

      more cents
      hmmmmm

      Comment


      • From the article it sounded as though they hadn't seen the card, just some papers about Parhelia...

        Comment


        • It's much more difficult to hide your guilt than to admit it's your fault

          I'm with the ugly guy below me

          (It's amazing how many threads I kill with that line )

          Comment


          • Originally posted by GT98


            I don't see how this can be a problem when you have nearly double the bandwidth of your competition. As for the pixel shader support... the earliest you'll see a DX9 game using that is two Christmases from now, and I'll bet there'll be a refresh of the Parhelia core next year to support DX9 and a shrink down to .13 micron.

            I think the original break of the NDA by the Far East website wasn't "that bad" compared to X-bit Labs' total trashing of the NDA and spilling of the beans. It might have been the translation of the website, but I remember that in the first paragraph of the article they posted, they alluded to something like: hey, Matrox left us out in the cold with this card, and now that we have this info we're going to "screw" them over by letting the cat out of the bag two days early.

            My 2 cents

            Scott

            Bandwidth-saving tech is always important no matter how much of it is available... Just make scenes with higher depth complexity, and even the best cards can be pushed to their limits spending time rendering objects that you won't see anyhow...


            A good example of this is the difference between, say, a GF2 Ultra and a GF3 Ti 500, which both have the same theoretical bandwidth as well as fill and texturing rates, but the GF3 is easily twice as fast in real-world performance, thanks to early Z-check and crossbar memory tech...


            So essentially, a GF2 Ultra would likely need twice the bandwidth (16 GB/sec) to perform roughly the same as a GF3 does (with only 8 GB/sec) in fill-rate-limited situations...
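
            As a rough back-of-the-envelope sketch of that argument (the resolution, depth complexity, frame rate and byte counts below are picked purely for illustration, not taken from any spec sheet), the colour/Z traffic with and without an early Z-reject might look something like this:

# Rough colour + Z traffic estimate per second, with and without early Z rejection.
# All workload numbers (resolution, overdraw, fps, byte sizes) are assumptions.

def frame_traffic_gb_s(width, height, depth_complexity, fps,
                       bytes_color=4, bytes_z=4, early_z=False):
    pixels = width * height
    if early_z:
        # hidden layers are rejected after the Z read; only the visible
        # layer pays the Z write and the colour write
        per_pixel = depth_complexity * bytes_z + (bytes_z + bytes_color)
    else:
        # every layer reads Z, writes Z and writes colour
        per_pixel = depth_complexity * (2 * bytes_z + bytes_color)
    return pixels * per_pixel * fps / 1e9

brute = frame_traffic_gb_s(1024, 768, depth_complexity=3, fps=60)
smart = frame_traffic_gb_s(1024, 768, depth_complexity=3, fps=60, early_z=True)
print(f"no early Z: {brute:.2f} GB/s   early Z: {smart:.2f} GB/s")

            Texture fetches come on top of that, but the ratio is the point: the card that sheds hidden pixels needs noticeably less raw bandwidth for the same scene.
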
            note to self...

            Assumption is the mother of all f***ups....

            Primary system :
            P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...

            Comment


            • hmm, wonder what the card would be like with some form of bandwidth saving
              I'm with the ugly guy below me

              (It's amazing how many threads I kill with that line )

              Comment


              • Originally posted by GT98


                I don't see how this can be a problem when you have nearly double the bandwidth of your competition. As for the pixel shader support... the earliest you'll see a DX9 game using that is two Christmases from now, and I'll bet there'll be a refresh of the Parhelia core next year to support DX9 and a shrink down to .13 micron.

                I think the original break of the NDA by the Far East website wasn't "that bad" compared to X-bit Labs' total trashing of the NDA and spilling of the beans. It might have been the translation of the website, but I remember that in the first paragraph of the article they posted, they alluded to something like: hey, Matrox left us out in the cold with this card, and now that we have this info we're going to "screw" them over by letting the cat out of the bag two days early.

                My 2 cents

                Scott
                don't get me wrong, with the amount of bandwidth the card has available it's not that big of a deal. plus, with edge AA and the adaptive LOD stuff from DX9, a lot of bandwidth-saving features are there...

                plus there are driver tricks that can be done to help save card bandwidth (and overall processing), like aggressive culling, texture compression, etc...
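
                To put a very rough number on the texture-compression part (the texel and layer counts below are made up for illustration; a DXT1-style S3TC format stores 4 bits per texel versus 32 bits for uncompressed RGBA):

# Rough texture-fetch traffic, uncompressed 32-bit RGBA vs DXT1-style 4 bits/texel.
# The texel workload is an assumption, not a measurement.

def texture_traffic_gb_s(texels_per_frame, fps, bits_per_texel):
    return texels_per_frame * fps * bits_per_texel / 8 / 1e9

texels = 1024 * 768 * 2 * 3   # ~2 texture layers, ~3 texel fetches per pixel (assumed)
print(f"uncompressed: {texture_traffic_gb_s(texels, 60, 32):.2f} GB/s")
print(f"DXT1-style:   {texture_traffic_gb_s(texels, 60, 4):.2f} GB/s")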

                about PS 1.3 - the setup appears flexible enough that through drivers you could do a fair amount with it, and it could always be expanded later. as it is it looks to have a fairly powerful PS unit compared to the current competition, just not feature complete...

                like you said, i don't see people adopting PS2.0 for a while, if only for compatibility purposes... but we have yet to see what DX9 will bring to the table...

                the only problems i see with the card are that scenes with high poly counts (or lots of particles, maybe) could cause a large performance hit with AA, and scenes with polys that have transparent portions could be an issue (it depends on how they implement it... hopefully they will AA those edges too...)...
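
                A minimal sketch of why an edge-only AA scheme gets pricier as geometry density goes up, assuming 16 samples per edge pixel (the sample counts and edge-pixel fractions are assumptions for illustration, not anything Matrox has confirmed):

# Edge-only AA multiplies work only on pixels that sit on polygon edges, so its
# cost grows with how much of the screen is edge.  Sample counts and edge
# fractions below are assumptions for illustration.

def aa_samples(width, height, samples, edge_fraction=None):
    pixels = width * height
    if edge_fraction is None:                  # brute-force supersampling: every pixel
        return pixels * samples
    return int(pixels * (1 - edge_fraction) + pixels * edge_fraction * samples)

w, h = 1024, 768
for edge in (0.05, 0.20, 0.50):                # sparse scene -> dense geometry/particles
    print(f"edge pixels {edge:.0%}: 16x edge AA ~{aa_samples(w, h, 16, edge)/1e6:.1f}M samples, "
          f"4x supersampling ~{aa_samples(w, h, 4)/1e6:.1f}M samples")

                On this toy metric, once roughly a fifth of the pixels are edge pixels, 16x edge AA is already doing as much sample work as 4x supersampling of the whole frame.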

                superfly... part of the GF2/GF3 comparison that you left out... NVidia makes very memory-intensive architectures, probably so that they can scale them more... look at the performance difference you get from overclocking memory on one versus overclocking the core... with ATI cards, overclocking memory doesn't yield nearly the same performance improvement that it does on the GF2/GF3 cards... or the performance hit that you take going from a GF2 to a GF2MX...

                of course, the flip side of the coin is that you can see GF4MXs outperforming GF3s due to a significantly upgraded memory controller...

                it all depends on the arch..
                "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz

                Comment


                • Hard to say really, since I've done some overclocking experiments of my own on my GF3, and overclocking either the core or the memory separately yields about the same improvement in overall performance (about 5~10% in either case), but it mostly depends on the game settings I use...


                  At lower resolutions, the increase in core speed makes the biggest difference, while at higher resolutions it's the memory clock speed that has the biggest impact...


                  Overall, the biggest improvement is when both the core and memory are overclocked (about 15~20% better in that case), relative to the standard GF3 clock speeds (200/460)...
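
                  That pattern falls out of a simple bottleneck toy model, where the frame takes as long as the slower of the core-side work and the memory traffic (the triangle count, cycle cost, overdraw and per-pixel byte figures below are invented for illustration; only the 200/460 stock clocks come from the post above):

# Toy bottleneck model: the frame takes as long as the slower of the core-side
# work (triangle setup + pushing pixels) and the memory-side traffic.
# Triangle counts, cycle costs, overdraw and bytes-per-pixel are invented for
# illustration; 200/460 are the stock GF3 clocks mentioned above.

def frame_time_ms(width, height, core_mhz, mem_mhz, tris=50_000,
                  cycles_per_tri=40, pipes=4, overdraw=2.5,
                  bytes_per_pixel=36, bus_bytes=16):
    pixels = width * height * overdraw
    core_s = (tris * cycles_per_tri + pixels / pipes) / (core_mhz * 1e6)
    mem_s = pixels * bytes_per_pixel / (mem_mhz * 1e6 * bus_bytes)
    return 1000 * max(core_s, mem_s)   # whichever side is the bottleneck

for res in ((640, 480), (1600, 1200)):
    stock = frame_time_ms(*res, 200, 460)
    core_oc = frame_time_ms(*res, 230, 460)   # core +15%
    mem_oc = frame_time_ms(*res, 200, 530)    # memory +15%
    print(res, f"stock {stock:.1f} ms, core OC {core_oc:.1f} ms, mem OC {mem_oc:.1f} ms")

                  Which knob helps depends entirely on which side is the bottleneck at that resolution, which matches the low-res/high-res split described above.
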
                  note to self...

                  Assumption is the mother of all f***ups....

                  Primary system :
                  P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...

                  Comment


                  • that is true about resolutions having an impact on memory requirements... but there is really nothing that can be done about it...

                    of course, driving 3 monitors off of one video card is gonna suck bandwidth up... or even two of them...
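
                    Just the refresh scan-out is a fixed tax before any 3D work starts, since each head has to be read out of the frame buffer every refresh (the resolution, refresh rate and colour depth below are assumptions for illustration):

# Scan-out (refresh) bandwidth per extra monitor; workload numbers are assumed.

def scanout_gb_s(heads, width=1280, height=1024, refresh_hz=85, bytes_per_pixel=4):
    return heads * width * height * refresh_hz * bytes_per_pixel / 1e9

for heads in (1, 2, 3):
    print(f"{heads} monitor(s): ~{scanout_gb_s(heads):.2f} GB/s just for refresh")
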
                    "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz

                    Comment


                    • Originally posted by DGhost
                      that is true about resolutions having an impact on memory requirements... but there is really nothing that can be done about it...

                      of course, driving 3 monitors off of one video card is gonna suck bandwidth up... or even two of them...

                      Yes there is... Adding some form of hidden surface removal tech, even if it likely wouldn't be as efficient as the tiling tech that PowerVR chips use, would definitely help make the most of the available bandwidth... Even more so if 3 monitors are used in a gaming environment...



                      Anyway... we'll see in 2 days' time whether any bandwidth-saving measures are implemented...
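
                      For what it's worth, a minimal toy of that tiling idea (surfaces are plain rectangles here just to keep it short; nothing about this reflects how Matrox or PowerVR actually implement anything):

# Toy tile-based visibility: resolve each tile in a small on-chip buffer and write
# every screen pixel to external memory once, instead of once per overlapping surface.
import random

W, H, TILE = 256, 256, 32
random.seed(1)

rects = []                                   # (x0, y0, x1, y1, depth) opaque stand-ins
for _ in range(40):
    x0, y0 = random.randrange(W - 64), random.randrange(H - 64)
    rects.append((x0, y0, x0 + 64, y0 + 64, random.random()))

immediate_writes = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1, _ in rects)

tiled_writes = 0
for ty in range(0, H, TILE):
    for tx in range(0, W, TILE):
        tile = {}                            # on-chip: pixel -> nearest depth so far
        for x0, y0, x1, y1, z in rects:
            for y in range(max(y0, ty), min(y1, ty + TILE)):
                for x in range(max(x0, tx), min(x1, tx + TILE)):
                    if z < tile.get((x, y), float("inf")):
                        tile[(x, y)] = z
        tiled_writes += len(tile)            # flush the tile to memory exactly once

print(f"immediate-mode colour writes: {immediate_writes}, tiled: {tiled_writes}")

                      The immediate-mode count also ignores the Z reads and writes the tiler keeps on-chip, so the real-world gap would usually be wider.
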
                      note to self...

                      Assumption is the mother of all f***ups....

                      Primary system :
                      P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...

                      Comment


                      • I for one am glad that superfly is defending/debunking any and all claims about Parhelia... something he knows nothing more about than the general populace atm.
                        "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

                        "Always do good. It will gratify some and astonish the rest." ~Mark Twain

                        Comment


                        • But Greebe, superfly knows more than any of us. I mean, he had to do so much to get a system that fast, and he's always correcting me on the things I actually do for a living. He must know what he's talking about....
                          Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.

                          Comment


                          • Originally posted by Wombat
                            But Greebe, superfly knows more than any of us. I mean, he had to do so much to get a system that fast, and he's always correcting me on the things I actually do for a living. He must know what he's talking about....

                            Sigh...
                            note to self...

                            Assumption is the mother of all f***ups....

                            Primary system :
                            P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...

                            Comment


                            • Originally posted by superfly



                              Sigh...
                              Indeed!

                              Rags

                              Comment


                              • Rags, isn't that avatar a bit, um, cheeky?

                                Couldn't help it!

                                MadScot
                                Last edited by MadScot; 12 May 2002, 22:30.
                                Asus P2B-LS, Celeron Tualatin 1.3Ghz (PowerLeap adapter), 256Mb PC100 CAS 2, Matrox Millenium G400 DualHead AGP, RainbowRunner G-series, Creative PC-DVD Dxr2, HP CD-RW 9200i, Quantum V 9Gb SCSI HD, Maxtor 20Gb Ultra-66 HD (52049U4), Soundblaster Audigy, ViewSonic PS790 19", Win2k (SP2)

                                Comment
