are matrox short of engineers?

  • #31
    My opinion is that Matrox's card will probably be at least on par with, or better than, Nvidia's card that is due out at the same time, at least performance-wise. Will the numbers (fillrate, memory bandwidth, etc.) look lower? Probably. But to the nvidiots who say that those are what matters, I offer this: <a href="http://www4.tomshardware.com/graphic/00q4/001213/index.html">The FireGL 2</a>. Yes, the article is on Tom's Hardware. I was quite surprised to see them review a card that beat the Nvidia cards hands down, and then admit that it did. At a 120MHz core, 190MHz geometry core, and 120/240MHz DDR memory speed, it was able to beat the Quadro2 Pro, which if I remember correctly runs at 250MHz core/geometry and 200/400MHz memory.

    The FireGL2's specs claim a fill rate of 410 MPix/sec and 200 MTex/sec versus the Quadro2 Pro's theoretical 1000 MPix/sec and 2000 MTex/sec. The geometry engine claims 27 MPolys/sec versus the Q2P's 35 MPolys/sec. And yet the FireGL2 absolutely beats the Quadro2.
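    Those theoretical peaks are just arithmetic: core clock times the number of parallel pixel pipelines (and times texture units per pipe for the texel rate). A minimal sketch of that calculation; the pipe and TMU counts below are assumptions for illustration, not vendor-published specs:

```python
# Theoretical fill rate: core clock (MHz) x pixel pipelines; the texel
# rate additionally multiplies by texture units per pipeline.
# Pipe/TMU counts here are illustrative assumptions.

def fill_rates(core_mhz, pipes, tmus_per_pipe):
    """Return (MPix/sec, MTex/sec) theoretical peaks."""
    mpix = core_mhz * pipes
    mtex = mpix * tmus_per_pipe
    return mpix, mtex

# A 250MHz part with 4 pipes and 2 TMUs per pipe lands exactly on the
# Quadro2 Pro's quoted numbers:
print(fill_rates(250, 4, 2))  # (1000, 2000)
```

    Which is exactly why paper fillrate says so little: the benchmark decides how much of that peak a card actually delivers.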

    My point is that a well-engineered product can do a lot of things more efficiently than a poorly engineered product. Matrox has always had a history of being very 'elegant' in the way their cards work, and personally I believe they are well engineered. 3dfx was never particularly well engineered, and Nvidia most definitely is not. All Nvidia does, every 12 months, is cram in all the features they think will look good and appeal to people, and crank the clock speed up as fast as they can, because their engineering sucks and otherwise their performance would blow. And in the months between those 12 they take the existing technology and crank the speed up again.

    my $.02

    -Luke
    "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



    • #32
      ... and your $0.02 are worth a lot more ...

      Very interesting read, indeed.


      On a totally different topic, DGhost:

      by any chance ... are you the same Luke that was around here quite some time ago, i.e. are you Lukey (not Lucky) ???

      Cheers,
      Maggi
      Despite my nickname causing confusion, I am not female ...

      ASRock Fatal1ty X79 Professional
      Intel Core i7-3930K@4.3GHz
      be quiet! Dark Rock Pro 2
      4x 8GB G.Skill TridentX PC3-19200U@CR1
      2x MSI N670GTX PE OC (SLI)
      OCZ Vertex 4 256GB
      4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
      Super Flower Golden Green Modular 800W
      Nanoxia Deep Silence 1
      LG BH10LS38
      LG DM2752D 27" 3D



      • #33
        More off topic rambling....

        Maggi, no... at least I don't believe I am... I registered back in May when I grabbed a G400... if "quite some time ago" means before that, then I didn't even know MURC existed...
        "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



        • #34
          OK, you're not him, 'cos you would instantly have picked up on the name Lucky ...
          Despite my nickname causing confusion, I am not female ...

          ASRock Fatal1ty X79 Professional
          Intel Core i7-3930K@4.3GHz
          be quiet! Dark Rock Pro 2
          4x 8GB G.Skill TridentX PC3-19200U@CR1
          2x MSI N670GTX PE OC (SLI)
          OCZ Vertex 4 256GB
          4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
          Super Flower Golden Green Modular 800W
          Nanoxia Deep Silence 1
          LG BH10LS38
          LG DM2752D 27" 3D



          • #35
            haaa
            all those rumors and speculations...
            Kind of reminds me of the old days on the Rendition forums, when everybody was speculating about the mythical V3000....

            Hope that the end will be happier here...
            Athlon64 4800+
            Asus A8N deluxe
            2 gig munchkin ddr 500
            eVGA 7800 gtx 512 in SLI
            X-Fi Fatality
            HP w2207



            • #36
              Originally posted by DGhost:
              but to the nvidiots who say that those are what matters, I offer this: <a href="http://www4.tomshardware.com/graphic/00q4/001213/index.html">The FireGL 2</a>. Yes, the article is on Tom's Hardware. I was quite surprised to see them review a card that beat the Nvidia cards hands down, and then admit that it did. At a 120MHz core, 190MHz geometry core, and 120/240MHz DDR memory speed, it was able to beat the Quadro2 Pro, which if I remember correctly runs at 250MHz core/geometry and 200/400MHz memory.
              Well, they're taking a card built specifically for CAD-type work and comparing it to a card built specifically for games, sold in the same market only because it performs so well there. Yes, they're both 3D cards, but the tests stress totally different parts of the card. If you took those two cards and benchmarked them in some games, you'd probably have the FireGL2 performing worse than a G400...



              My point is that a well-engineered product can do a lot of things more efficiently than a poorly engineered product. Matrox has always had a history of being very 'elegant' in the way their cards work, and personally I believe they are well engineered. 3dfx was never particularly well engineered, and Nvidia most definitely is not. All Nvidia does, every 12 months, is cram in all the features they think will look good and appeal to people, and crank the clock speed up as fast as they can, because their engineering sucks and otherwise their performance would blow. And in the months between those 12 they take the existing technology and crank the speed up again.
              Hmm, does that mean that Intel's and AMD's engineers suck too? Pumping up clock speed, die shrinks, product crippling, and feature hype are all common in the computer industry. Even Matrox did it with their G450, although you're the first person I've seen who calls that elegant.



              • #37
                That should be credited to TSMC's well-executed process. Without TSMC, I don't know how Nvidia could supply such a quantity of high-frequency chips...


                ------------------
                PIII-550E@733/1.65v, P3B-F, G400DH/32MB@140/186
                P4-2.8C, IC7-G, G550



                • #38
                  Totally off topic but....
                  Actually, YES, I think the Intel and AMD engineers suck big time. Here's why:

                  For as long as I can remember, going back to the 286 days, that same x86 architecture has stayed in production right up until now. Why can't they come out with a totally new, radical design that could cut CPU processing time in half?
                  All they're doing is upping the clock speeds and shrinking the dies. HELLO INTEL/AMD, WAKE UP AND LEARN SOME LESSONS FROM THE BIG BOYS (SGI, Cray)!
                  OK, so they added 3DNow! and MMX(2) technology to the chip, but that didn't significantly enhance performance either; I thought that was more a marketing scam than anything.
                  Yes, I know software has to be rewritten, etc., but wouldn't you like to see a PC running at 500MHz beating the crap out of current PIII 1.3GHz machines just from the way it processes?
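                  The arithmetic behind that wish is simple: delivered performance is roughly clock times instructions retired per clock (IPC), so a slower clock can win if the architecture does more per cycle. A toy sketch; the IPC figures are invented for illustration, not measurements of any real chip:

```python
# Rough model: performance ~ clock (MHz) x average instructions retired
# per clock (IPC). The IPC values below are made-up illustrations.

def perf(clock_mhz, ipc):
    return clock_mhz * ipc  # roughly, millions of instructions/sec

legacy_1300 = perf(1300, 1.0)  # high clock, translation-burdened design
radical_500 = perf(500, 3.0)   # lower clock, wider clean-sheet design
print(radical_500 > legacy_1300)  # True: 1500 vs 1300
```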

                  A debate open to all, of course.

                  Cheers,
                  Elie



                  • #39
                    Elie, the reason for not going with a completely new architecture is that the entire installed base of consumers would have to buy all-new hardware, an OS, and programs to support it. When any product is brought to market, the hope of its makers is to sell, sell, sell! With that in mind, bringing out something completely new could indeed be a complete failure.

                    Sell that to investors and stockholders!

                    Especially when developers have such a difficult time keeping up with the already-installed base, and when users are so untrusting of them to even get that straight.

                    These are the risks that Amiga is facing today. It'll either fly or die a quick horrible saddening death.

                    Now, while a small percentage of us would consider it (you or I), that hardly makes up for the millions who never would. Amiga has two small factors going for it, though: a "once upon a time" niche market, with a devoted following. Remember, if it weren't for the Trekkies, NG would never have been made.

                    and...

                    Currently the computer industry is at critical mass, with enough headroom to possibly allow such a success. But this wasn't so, even a few years ago.
                    "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

                    "Always do good. It will gratify some and astonish the rest." ~Mark Twain



                    • #40
                      Hmm, this is getting way OT..

                      Elie, one word: Crusoe. A radical shift in design. But it can't beat the crap out of x86 the way we all want...

                      Actually, I'd love to see the PC go down the drain as well. Its fundamentals were laid down in what, 1981? (the IBM PC) We can STILL run the same software! Argh!

                      The PC has become an evolving system instead of a revolutionary one. Look at consoles: THERE you can see revolutions at work, the PS1 and PS2 to name two. I think a system that stays exactly the same for some years and then gets a new, well-engineered, much better successor would be better than the current PC.
                      Why? Simple: software designers know the exact specs of every machine, and they WILL get the best out of the machine eventually (it takes YEARS to hit the bottom of all the possibilities of a system; yes, I've seen it, since I owned an MSX system).

                      But it's the lack of tight standards that causes the lack of revolution. And it's economy & management that screw it up as well. Look at Nvidia: they seem to NEED to produce a "new" vidcard every 6 months. You can't design a revolution in six months! It's just evolution, small enhancements.

                      I hope Matrox will and can continue what they did before. Apart from some "evolution cards" like the Millennium II and G450, they have produced revolutionary cards ever since the original Millennium. I like that. It makes you want the new card because it is NEW, not just a bit faster...



                      • #41
                        Watch it, Elie, those are fighting words. It's not Intel/AMD/Cyrix/whoever that's holding the design back, but rather your software. The fact is, users need chips that run their old software faster. The chips you run your software on today aren't really x86 machines either: they have a lot of hardware devoted to translation, and are crippled because of it. The Athlon series has an 11-stage pipeline (which is #@#!@# huge), and 4 of those stages are just for translation. If you could compile and run code for the processor that really runs in there, you'd be amazed at what it did.

                        This issue plagues computers constantly. If people didn't have binary-only programs, things would be much easier, but really only Open Source users can switch architectures with ease (I just saw in the news today that Linux will be running on Itanium/Merced as soon as it comes out; MS has said it will be weeks/months later for them).
                        Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                        • #42
                          Isn't it so that Linux has much less (if any) hand-written x86 assembly? Windows has many routines and functions written in x86 assembly. Since Intel is dumping x86 for something else (better?), not only do the compilers have to be rewritten, but the x86 assembly code as well.

                          Reminds me of the MSX turboR computer... an MSX machine with two processors: a Z80, for compatibility with previous MSX versions, and an R800, a faster 16-bit version of the Z80 with some extras.

                          If you do the same trick with PCs somehow, your old software would still run. Perhaps not faster than before, but still fast enough. New, optimized programs for the new processor will come. So you're not stuck with the old design, but you can still run software created for it.

                          (the MSX turboR had one advantage: the R800 could run Z80 code as well, only much faster. Perhaps this is the same as what AMD is doing with their x64 designs?)



                          • #43
                            Ahh, don't forget the G250 in that mix. Funny how nobody whined about that one!
                            "Be who you are and say what you feel, because those who mind don't matter, and those who matter don't mind." -- Dr. Seuss

                            "Always do good. It will gratify some and astonish the rest." ~Mark Twain



                            • #44
                              Randy:
                              They both have their share of x86 assembly. That's not the real problem. There are two problems for closed-source packages: not only the compilers, but sometimes you don't have the source to the libraries you're compiling against.

                              And for backwards compatibility: yes, the IA-64 series will support x86 one way or another, some chips through emulation (slow), others through an additional core. That core is very costly in terms of die area, though, so it still does a good job of hindering the design.
                              Also, the last I heard, MS can't get IA-64 and x86 programs to mix very well, and running one 32-bit application forces the entire OS and every running program into a very slow compatibility mode (this may no longer be accurate, but I haven't heard differently).

                              Yes, that's what AMD is doing.

                              P.S. It may not become obvious for a while, but IA-64, or at least EPIC, is a kickass architecture.

                              Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                              • #45
                                The reason Crusoe does not beat Intel/AMD chips is that it was designed with cost and power draw in mind.

                                Want proof? How about this concept:

                                maybe 4, or even 8, <a href="http://www.xilinx.com/">FPGAs</a>, a dedicated control processor, and a version of something similar to Transmeta's code-morphing software. What's the result? A multi-chip solution that can act like a single processor but can be reconfigured to adjust to the demands placed on it. Granted, reconfiguring would be slow, maybe taking several clock cycles (read: probably stopping one core for a couple of seconds), but in the end you could easily have a virtual processor that could beat an Athlon 1.2 and a GeForce2 in software. Another advantage is that it could emulate the Alpha platform, or the Mac platform, or any number of others at the same time.

                                Impossible? No. Companies are already doing it. If I had two years and the necessary capabilities (i.e., EE degrees, etc.) I could probably get a prototype working.

                                What's the catch? Cost. It would cost several millions, if not billions, of dollars to create something like that. And when it's done, the new architecture would be so insanely expensive that the cost would far outweigh the performance.

                                As for Matrox, every new core they release is a redesign. I never specifically said the G450. Personally, I was thinking of the G200 and G400, mostly because the 450 is just a mod on an existing architecture.

                                Nvidia needed the 6-month product cycles to beat 3dfx. Now that 3dfx is (effectively) no longer in business, I wonder where their product designs will go. Will they slow their product cycles down? Will they start moving into different markets like 3dfx did? Will they ultimately stagnate and let another, more aggressive company beat them, much as 3dfx was beaten? Who knows.

                                -Luke
                                "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
