G400 .18 Micron = G450


  • #16
    (Sigh), I give up with you boys.

    Da-man, can you please tell us what new functions are included in the DX8 T&L API that DX7 doesn't have? Don't give me any of the NDA crap, 'cause I'm sure no one at your work knows your handle.

    I have the specs right in front of my face and I see one MAJOR change which none of today's cards will be able to do through their hardware.


    Quote:

    "T&L support through DX8 on the GeForce (Or ANY other card with T&L) is just a matter of drivers. Let's just say for a moment that they are right about DX8 not being backwards compatible.....All Nvidia would have to do is create some drivers that convert the DX8 API calls to the appropriate hardware commands."

    Oh really? So you are telling me that if there is a new function, like a morph function, all Nvidia has to do is write drivers for this new function and the GeForce will now do morphing via its hardware?

    SPSU, you are right. If the hardware can't do it, then the drivers take over.

    SwAmPy
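
A minimal sketch in C of the fallback SwAmPy is describing, under invented names (the capability bit, the driver entry points and the morph call are all made up for illustration, not real DirectX or NVIDIA interfaces): the application-facing call stays the same, and the driver decides whether the chip or the host CPU does the work.

```c
/* Hypothetical sketch only: a driver exposing a new feature (vertex morphing)
 * even on a chip with no matching hardware unit, by falling back to the CPU.
 * None of these names come from a real driver or from DirectX. */
#include <stddef.h>

#define HW_CAP_MORPH 0x0001u                     /* made-up capability bit */

typedef struct { float x, y, z; } Vertex;

/* Stand-in for reading the chip's capability register; a card without a
 * morphing unit simply never reports the bit. */
static unsigned hw_query_caps(void) { return 0u; }

/* Stand-in for a dedicated hardware path (never taken in this sketch). */
static void hw_morph(const Vertex *a, const Vertex *b, float t,
                     Vertex *out, size_t n)
{
    (void)a; (void)b; (void)t; (void)out; (void)n;
}

/* CPU fallback: blend between two key meshes, one vertex at a time. */
static void cpu_morph(const Vertex *a, const Vertex *b, float t,
                      Vertex *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        out[i].x = a[i].x + t * (b[i].x - a[i].x);
        out[i].y = a[i].y + t * (b[i].y - a[i].y);
        out[i].z = a[i].z + t * (b[i].z - a[i].z);
    }
}

/* The API entry point is identical either way; only the dispatch differs. */
void drv_morph(const Vertex *a, const Vertex *b, float t, Vertex *out, size_t n)
{
    if (hw_query_caps() & HW_CAP_MORPH)
        hw_morph(a, b, t, out, n);               /* chip does it natively */
    else
        cpu_morph(a, b, t, out, n);              /* "the drivers take over" on the CPU */
}
```

Whether that CPU path is fast enough to be worth anything is, of course, the whole argument.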



    • #17
      HeHe
      you guys are all funny!
      i like Da-man. he was funny too!
      im no expert on driver or api development, im working on being an expert on hardware development. but i think you are all silly.
      i read i smile.
      i read i laugh.
      us MURCers are an odd bunch indeed!
      HD

      btw, i knew a couple of ppl who worked at microsoft. and just cuz they worked there, doesnt mean they knew jack from shit.



      • #18
        We sure are silly but we laugh a lot and we smile a lot and that's all that matters



        • #19
          Right on Ant...

          And Da-man can really make things happier and funnier...
          I can't believe this dude... He can't even spell, and he claims he works on the DX development team (although even I fail to see my point here: he's supposed to work for MS, which doesn't make him smart).
          If he works on the development of DirectX, then my real name is Bill Gates and he is already fired for trashing Matrox!
          I think he is just a pissed GeFarse owner who sold his car to buy a card, so he could have a longer peni... erm.. I mean more frames than anybody else....

          [This message has been edited by Goc (edited 21 December 1999).]



          • #20
            Huh? I can't believe this conversation.
            What makes you believe that DX8 is no good for GeForce?
            I have a G200 that worked nicely with DX6, and behold, it works with DX7 as well. It's just a matter of drivers. Why would it be any different for the GeForce?

            _
            B



            • #21
              Hi Buuri,

              this is about the hardware T&L capabilities of the GeForce that could become obsolete/not fully functional once DX8 hits the streets ...

              According to Swamplady there'll be too many changes over DX7, so the GeForce could only handle part of the T&L stuff and the rest would be sent to the main CPU ...


              ------------------
              Cheers,
              Maggi
              ________________________

              Working Rig:
              Asus P2B-DS @ 100MHz FSB
              Double Pentium III-450
              4 x 128MB CAS2 SDRAM
              Matrox Millennium G400 32MB DualHead
              Eye-Q 777 (22" with 127kHz) on primary VGA
              Nokia 445Xi (21") on secondary VGA

              Home Rig:
              Asus P2B-S Bios 1010 @ 100MHz FSB
              Celeron 333A @ 500MHz
              2 x 128MB CAS2 SDRAM
              Matrox Millennium G400 32MB DualHead @ 150/200MHz
              CTX VL710T (17")
              and a brand new Pioneer 303S SCSI-DVD

              Despite my nickname causing confusion, I am not female ...




              • #22
                Umm... where does that leave the G400? You think the G400's organic biotech structure will have 'grown' full DX8 hardware T&L support by that time, leaving the GeForce waaaay behind???
                P3@600 | Abit BH6 V1.01 NV | 256MB PC133 | G400MAX (EU,AGP2X) | Quantum Atlas 10K | Hitachi CDR-8330 | Diamond FirePort 40 | 3c905B-TX | TB Montego A3D(1) | IntelliMouse Explorer | Iiyama VisionMaster Pro 17 | Win2K/NT4



                • #23
                  But that is perfectly understandable to me. I mean, did it come with a guarantee that no CPU would be used anymore or something..

                  Of course there will be new features. G400 is the only card to support EMBM, but that's it. Do you expect it to support some new mapping techniques to be revealed in DX8 as well?

                  _
                  B



                  • #24
                    If Da-Man really is a Microsoft employee developing DirectX, he wouldn't have responded in such an unprofessional way.

                    But if he really is a professional, he would know that if they just changed something as simple as the order of the data needed for a vertex, the underlying hardware would be useless. The drivers would have to remap the data so it can be accepted by the hardware. And he should know as well as I do that that's a HUGE performance hit.

                    And that is just the reordering of data. Changing the type of the data involved requires an even more complex conversion. Changing the way the API works could make the hardware T&L even more useless.
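
To make the reordering point concrete, here is a small C sketch with two invented vertex layouts (neither is a real DX7 or DX8 format): if a new API revision hands vertices to the driver in a different field order than the one the T&L hardware was wired for, the driver has to copy and reshuffle every single vertex before it can submit anything.

```c
/* Hypothetical layouts, invented only to illustrate the remapping cost. */
#include <stddef.h>

/* Order the (imaginary) new API delivers to the driver. */
typedef struct {
    float nx, ny, nz;          /* normal first ...        */
    float x, y, z;             /* ... position second     */
    unsigned int diffuse;      /* packed ARGB colour      */
} ApiVertex;

/* Fixed order the existing hardware expects in its vertex stream. */
typedef struct {
    float x, y, z;             /* position first          */
    unsigned int diffuse;
    float nx, ny, nz;
} HwVertex;

/* The extra pass: one read and one write per vertex, every frame.
 * Hardware designed for the new layout would skip this entirely. */
void remap_vertices(const ApiVertex *in, HwVertex *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        out[i].x  = in[i].x;   out[i].y  = in[i].y;   out[i].z  = in[i].z;
        out[i].diffuse = in[i].diffuse;
        out[i].nx = in[i].nx;  out[i].ny = in[i].ny;  out[i].nz = in[i].nz;
    }
}
```

How big the hit is depends on CPU and bus speed, but it is pure overhead that DX8-native hardware would not have to pay.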



                    • #25
                      Come on guys, if man can build an OGL->D3D wrapper, man can build a T&L mapper. Sure, there will be some kind of a performance hit, but the card will still perform better than a card w/o T&L at all.
                      P3@600 | Abit BH6 V1.01 NV | 256MB PC133 | G400MAX (EU,AGP2X) | Quantum Atlas 10K | Hitachi CDR-8330 | Diamond FirePort 40 | 3c905B-TX | TB Montego A3D(1) | IntelliMouse Explorer | Iiyama VisionMaster Pro 17 | Win2K/NT4
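
The mapper idea can be sketched the same way. Both interfaces below are invented (this is not the D3D API and not any shipping wrapper); the point is only the shape of the thing: a call against a newer-style interface is decomposed into calls the existing driver already accepts, so each call costs a little translation work while the hardware T&L path underneath keeps doing the heavy lifting.

```c
/* Hypothetical wrapper sketch; every interface here is made up. */
#include <stdio.h>

/* Stand-ins for state calls a DX7-class driver already exposes. */
static void old_set_world_matrix(const float m[16])
{
    (void)m;
    puts("world matrix set");
}

static void old_set_light_direction(float x, float y, float z)
{
    printf("light dir %.1f %.1f %.1f\n", x, y, z);
}

/* Imaginary "new API" call that bundles both pieces of state at once. */
typedef struct {
    float world[16];
    float light_dir[3];
} NewTnlState;

/* The wrapper: break the new-style call down into old-style ones. */
void wrap_set_tnl_state(const NewTnlState *s)
{
    old_set_world_matrix(s->world);
    old_set_light_direction(s->light_dir[0], s->light_dir[1], s->light_dir[2]);
}
```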



                      • #26
                        Hi everyone,

                        My thoughts: (hopefully franksch3 is not going to kill me one day)

                        I think that the GeForce will be capable of doing DX8 T&L after a "driver" update, too.
                        But this is not because of the possibility of wrapping APIs that was explained by Da-man (come on, you don't work @ MS R&D, at least not as a programmer, do you?). Wrapping the vertex data would really mean a horrible performance drop.
                        BUT, the GeForce as well as the Matrox G200/G400 includes some sort of RISC CPU (I remember back in the days of the original Mystique 220 that Matrox licensed a small RISC architecture ... guess where it went).
                        These cores (Nvidia calls theirs the GPU, Matrox the WARP) include RAM for code to be executed (I know ... not only for that). And this code "makes" T&L (this is also the reason why T&L COULD be possible on the G-chips).
                        I'm pretty sure that Nvidia did not choose to put this code into ROM (that would be VERY stupid), so when DX8 hits the streets they could write a new T&L pipeline, as it's pure software, and include it in a driver.
                        There may still be some issues that could decrease performance compared to DX8-optimized hardware, but I think we can be pretty sure that the GF will fully "HW"-support DX8 (although I'm still not a GeForce lover!!)

                        cheers ...
                        Bjoern
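
Bjoern's scenario, where the T&L program sits in writable on-chip instruction RAM and a driver release simply loads a new one, could look roughly like the sketch below. Everything in it is invented (the RAM size, the structure, the function names); neither Nvidia nor Matrox has published the real mechanism, so this is only the shape of the idea.

```c
/* Toy model of a microcode-driven T&L engine; nothing here reflects real
 * GeForce or WARP internals. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CODE_RAM_WORDS 1024                 /* size is a pure guess */

typedef struct {
    uint32_t code_ram[CODE_RAM_WORDS];      /* writable instruction store */
    uint32_t entry_point;                   /* where execution starts */
} TnlEngine;

/* Stream a microcode image into code RAM and point execution at it.
 * A later driver release could ship a different image (say, a DX8-style
 * pipeline) and call this same routine at initialisation time. */
int upload_tnl_microcode(TnlEngine *chip, const uint32_t *ucode, size_t words)
{
    if (words > CODE_RAM_WORDS)
        return -1;                          /* program too large for the chip */
    memcpy(chip->code_ram, ucode, words * sizeof *ucode);
    chip->entry_point = 0;
    return 0;
}
```

The open question, raised again just below, is whether the existing core is fast and flexible enough for a full T&L pipeline, not whether the code can be swapped.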



                        • #27
                          Why would I kill you? I only said (meant) that if their design isn't flexible the card is in for a big performance hit if the software has to convert data.

                          OpenGL -> D3D wrappers do have a big performance drop in comparison to an OpenGL driver with hardware that supports that standard.

                          Although I don't think that the WARP engine is fast (or flexible) enough to support T&L on-chip, the fact remains that it is fully programmable.

                          And the rumours that Matrox is working on a .18 micron G400 with the WARP engine optimized for T&L don't sound all that far-fetched.

                          If only Matrox would give out information about the assembly language (OK, machine code) used by the WARP, so we could experiment ourselves with what it can and can't do. And more importantly, how fast it is.



                          • #28
                            I don't think Matrox will reveal the WARP architecture and instruction set.
                            Maybe someone remembers WHICH RISC architecture was licensed by Matrox? This would bring us some knowledge. We already know something about WARP. Maybe this way we will know more.

                            I don't think we will ever be able to use this knowledge for anything practical, but it would at least bring some fun...



                            • #29
                              I should've added a 8-)
                              Thought you'd remember our discussion regarding the different implementations of triangle setup on current HW and whether the Gx00 could be weak in that area.
                              I just didn't want to sound ignorant.

                              True, the current WARP will surely not be capable of handling T&L in a performant manner. But for a next-gen chipset it would probably save a lot of time.


                              cheerio ... Bjoern




                              • #30
                                Maybe they licensed the Alpha RISC architecture. Just think if the 21264 was the WARP engine!

