New Parhelia Drivers!


  • Originally posted by K6-III
    I like these drivers. 9275 3dmarks at 238/619 in 3dmark2k1: http://service.futuremark.com/compare?2k1=6364627
    Hey, impressive.

    I only reach about 8200 points (at stock), and the new drivers didn't change anything significantly here.

    What were your results with the 1.03 drivers?

    • You have an XP1700 running at 2.5GHz? What's going on there?
      Is a flower best picked in its prime or greater withered away by time?
      Talk about a dream, try to make it real.

      • Yup, over 70% overclock, on air!!!
        Let us return to the moon, to stay!!!

          • What the heck do you use for cooling?

            • Actually managed to take it to 2534MHz @ 1.9V this morning.

              Will bench later with updates...
              Let us return to the moon, to stay!!!

              • But development became harder and harder from the first 'GeForce' releases on. There are now far more transistors to integrate into a chip than before pixel and vertex shaders were realized. These processing units take up most of the die area, much more than the 2D, video, RAMDAC, or TMDS blocks. And they all have to work!
                If you care to search, I've posted it before: Matrox's problem was not so much the "GeForce" pressure as a wrong decision about chip design. Pretty much like Nvidia with the NV35. If they hadn't gone with an anachronistic 4 texture units per pipeline and had instead implemented some nice memory interface features, Parhelia could have been a brilliant chip, pretty much as the G400, or even the G200, were in their time.

                • Originally posted by Nuno
                  If you care to search, I've posted it before: Matrox's problem was not so much the "GeForce" pressure as a wrong decision about chip design. Pretty much like Nvidia with the NV35. If they hadn't gone with an anachronistic 4 texture units per pipeline and had instead implemented some nice memory interface features, Parhelia could have been a brilliant chip, pretty much as the G400, or even the G200, were in their time.
                  You mean higher-clocked, less complex pipes would have been the better choice?

                  • Definitely. 4 texture units per pipeline is overkill and useless. They barely get used, and even in intense multitexturing situations Parhelia can't deliver - see the TempleMark demo from PowerVR, which uses 6 texture layers. It's a no-win situation: the 4 texturing units aren't used most of the time, and when they are required, there's not enough bandwidth to feed them.
                    Look at the R3x0 design: it has 1 texture unit per pipeline and eight pipelines. If you count, that's half of Parhelia's texturing units.

                    So if they had released a 4x2 chip (that was the DX8 trend line), dropped the DX9-ish capabilities that may never be exposed by the drivers anyway, and invested in some bandwidth-saving features, Matrox could have had a smaller, faster, and cheaper-to-manufacture chip. Why design a complex and expensive 256-bit memory bus and then ruin it with a conventional memory interface is beyond me (a rough sketch of the numbers follows below).

                    But that's only my opinion.
                    Last edited by Nuno; 21 April 2003, 08:40.
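                    A rough way to put numbers on the bandwidth point above: compare theoretical texel fillrate against raw memory bandwidth. A minimal Python sketch; the clock and bus figures are assumed from commonly cited specs of the era (Parhelia ~220MHz core with 275MHz DDR, 9700 Pro ~325MHz core with 310MHz DDR), not measurements:

                        # Crude comparison of theoretical texel fillrate vs. raw memory
                        # bandwidth. Clock/bus figures are assumed era specs, not measured.
                        def texel_fillrate_mtexels(core_mhz, pipes, tmus_per_pipe):
                            return core_mhz * pipes * tmus_per_pipe  # Mtexels/s

                        def bandwidth_gb_s(bus_bits, mem_mhz):
                            return bus_bits / 8 * mem_mhz * 2 / 1000  # GB/s, DDR

                        cards = {"Parhelia (4x4)": (220, 4, 4, 256, 275),
                                 "R300 (8x1)":     (325, 8, 1, 256, 310)}
                        for name, (core, pipes, tmus, bus, mem) in cards.items():
                            fill = texel_fillrate_mtexels(core, pipes, tmus)
                            bw = bandwidth_gb_s(bus, mem)
                            print(f"{name}: {fill} Mtex/s, {bw:.1f} GB/s, "
                                  f"{bw * 1000 / fill:.1f} bytes/texel")

                    Under this toy measure the 4x4 design gets noticeably fewer bytes of bandwidth per theoretical texel than the 8x1, which is one way of stating the "not enough bandwidth to feed them" point.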

                    • Just to add: the Xvox demo is NOT DX9 displacement mapping. It emulates displacement mapping using DX8 vertex shader routines. It runs on R8500 and GeForce4 hardware, so it doesn't use DX9 in any way.

                      • Originally posted by K6-III
                        I guess 1002 3DMark03 is OK: http://service.futuremark.com/compare?2k3=635300
                        Something wrong? A 2.5GHz XP should score higher than 492 CPU marks... hmm... FSB? It's only 179! Make it 210*12!!
                        P4 Northwood 1.8GHz@2.7GHz 1.65V Albatron PX845PEV Pro
                        Running two Dell 2005FPW 20" Widescreen LCD
                        And of course, Matrox Parhelia | My Matrox history: Mill-I, Mill-II, Mystique, G400, Parhelia

                        • Originally posted by Nuno
                          If you care to search, I've posted it before: Matrox's problem was not so much the "GeForce" pressure as a wrong decision about chip design. Pretty much like Nvidia with the NV35. If they hadn't gone with an anachronistic 4 texture units per pipeline and had instead implemented some nice memory interface features, Parhelia could have been a brilliant chip, pretty much as the G400, or even the G200, were in their time.
                          I'm gonna argue this one.

                          This argument has very little to do with why the Parhelia performs subpar. It has an extremely balanced architecture capable of maintaining a certain level of performance (unfortunately limited by core clock speed), something the more-pipelines-fewer-TMUs crowd does not enjoy. That said, the major draw of the 8x1 architecture is that it offers a consistent, predictable decrease in performance for every texture you add to it.

                          If ATI had released both a 4x2 and an 8x1 core using identical logic for the TMUs, pixel pipes, memory controller, etc. (where the only change is that one has 4 pixel pipes with 2 TMUs each and the other has 8 pixel pipes with 1 TMU each), you would see that performance would for the most part be similar, with only a few exceptions. The first is single-textured versus dual-textured; that is the biggest difference right there, because you are talking about a 50% improvement in performance. The second exception is any time you have an odd number of textures, where the 8x1 pipeline has a one-render-pass (theoretical) advantage over the 4x2. This difference can easily be washed away by driver quality, how efficient the texture loopback logic is, etc. The complexity of the design inherently leads to more ways in which an unoptimized driver or a piece of core logic can actually have a negative impact on performance.

                          Comparing a 4x4 to an 8x1 is kinda funkier, but once you get past the initial 2 textures (i.e., you are using 7-9 textures like Doom3 was) you will see a definitive advantage of the 4x4 over the 8x1, simply because it has more texture units. How much of an advantage depends on how many textures you use (with multiples of 4 you will always see the largest performance increase over the 8x1); the toy model at the end of this post puts rough numbers on this.

                          To boot, a 4x4 architecture is easier and cheaper to implement than the 8x1. It is an inherently simpler design that takes fewer transistors to implement and fewer driver tricks to get everything working right, and the drivers are not nearly as "twitchy" about being properly optimized.

                          The core design (at least from a diagram standpoint) is fine. Clock for clock it would be capable of beating a 9700 Pro or 9800 Pro in any Doom3-engine game or most of the next-gen Unreal-engine games (Deus Ex 2, for example). Anything that uses all the fun lighting tricks shows an affinity for an architecture that holds an advantage while multitexturing. The only problem you run into is legacy applications.

                          There are too many other factors that feed into the performance problem, though: the level of driver optimization, core clock speed, how "fast" the TMUs and shader units operate, how efficient the memory controller is, etc. Unfortunately, those are the areas where Matrox has traditionally been weaker than NVidia or ATI. Still, the latest driver release showed some impressive performance improvements in a number of areas, so they are definitely getting there.

                          I guarantee that if ATI had chosen to do a 4x4 architecture and Matrox had chosen an 8x1, you would be arguing exactly the opposite of what you are arguing right now.
                          "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz

                          • Aren't those additional TMUs capable of delivering free trilinear filtering when they aren't otherwise used?

                            I have also heard that they should boost anisotropic filtering performance quite a bit (if Matrox bothered to enable it in the drivers).
                            This sig is a shameless attempt to make my post look bigger.

                            • Originally posted by WyWyWyWy
                              Something wrong? A 2.5GHz XP should score higher than 492 CPU marks... hmm... FSB? It's only 179! Make it 210*12!!
                              My RAM doesn't want to go above 185MHz stable, and I don't really feel like pushing my non-PCI-lock Epox 8K3A+ to crazy FSBs.

                              BTW, I've already tried going above 190MHz FSB with no success.

                              As a result, I cut the necessary L3 bridges to get a 14x default multiplier.

                              It now tops out at 182x14 @ 1.9V (the arithmetic is sketched below).
                              Let us return to the moon, to stay!!!
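                              Since the core clock is just FSB x multiplier, a few lines enumerate the combinations under a RAM-limited FSB. A minimal sketch; the ~185MHz ceiling and the 14x multiplier come from this thread, the remaining values are illustrative:

                                  # Core clock (MHz) = FSB x multiplier. The RAM here tops out
                                  # around 185MHz FSB, so higher multipliers are the only way up.
                                  FSB_LIMIT = 185
                                  MULTIPLIERS = (12.5, 13.0, 13.5, 14.0)  # assumed steps after the L3 mod

                                  for mult in MULTIPLIERS:
                                      for fsb in (166, 179, 182, FSB_LIMIT):
                                          print(f"{fsb} x {mult:>4} = {fsb * mult:6.0f} MHz")

                              At the stable 182MHz FSB, 182 x 14 gives roughly 2548MHz, matching the ceiling reported above.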

                              • Which bridge needs to be cut to get multipliers higher than 12.5x? I guess I could look it up elsewhere, but this is easier.
                                We have enough youth - What we need is a fountain of smart!


                                i7-920, 6GB DDR3-1600, HD4870X2, Dell 27" LCD
