
It's amazing what a PIII upgrade will do...


  • #31
    DirectX 7.0 supports T&L, that much is sure.

    OpenGL is a VERY OLD standard; I think the most recent version is 1.1 or 1.2... so it doesn't implement these new features like EMBM and GeForce 256 T&L. BUT luckily for us, an OpenGL ICD supports GL extensions (you may see an option called GL Extensions in Q3Test too). Any hardware company can add an extension to its ICD drivers and support new features through it...

    Here we get to a painful thing about OpenGL, and why OpenGL drivers are commonly slower than DirectX drivers (not only Matrox; ATI and 3dfx are also having some trouble).
    In DirectX, MS does most of the work, writing all the general code that deals with new features like EMBM and T&L; this means all the implementation is done by MS. The hardware company only has to write a driver that is practically a translator between the MS Direct3D functions and the graphics card itself.
    That's why we can't expect much improvement in DirectX drivers over time. It's a very simple driver to write, and 99% of the optimizations must be done by MS itself.

    OpenGL is a totally different story... MS doesn't support development of OpenGL drivers, so the hardware company has to write all the code itself... As you can imagine, that's not so easy, to say the least... To get an idea, just look at the size of the OpenGL ICD in the G400 family: it's beyond 2.1 MB, plus 700 KB for opengl32.dll (the file that directs all the OpenGL calls from your game to the ICD). It's a pretty heavy thing...

    I really think OpenGL is staying alive because id Software is supporting it; without that company we wouldn't see this API alive. Sure, it's easier to program against the OpenGL SDK, but it also has fewer features and is aging... DirectX will take over eventually (I think so).

    BTW, MS and SGI (the originator of OpenGL) are working together on a future API called Fahrenheit; it's kind of a salad with Direct3D and OpenGL in it.

    If you want more info about OpenGL vs Direct3D, go to Brian Hook's page at www.voodooextreme.com; he answers all kinds of questions...



    • #32
      Listen, I've read a few reviews myself, and the thing is that T&L is not such a big deal... I tried to explain it before: for today's games the T&L calculations are not so complicated. I admit that to run a T&L demo like the nVidia tree demo it is best to have a T&L card, BUT NO GAME supports that level of detail, and even if one did, the GeForce and even the Savage2000 wouldn't have the fillrate to run it... So for now and the near future, T&L is not the answer to boosting game speed. The only way I can get Q3Test to be CPU-dependent is to run it in 16-bit color at 800x600; anything above that and the graphics card becomes the bottleneck...
      So I simulated in my head having a T&L card that did the CPU's work, and what I got is that I could run faster in low resolutions, where the graphics card helps my CPU... but when I go to higher resolutions I'm still stuck with the same fillrate bottleneck... It seems to me that for the T&L calculations we need today, SSE/3DNow is the answer. I agree that once you have a big enough fillrate, it's nice to have a hardware T&L engine that lets the CPU calculate more sophisticated AI and other stuff.



      • #33
        "how exactly does T&L work, say from within OGL? Do current games use OGL's T&L for all their model animations, if so how would that work wrt T&L on the video card? Collisions? Getting data from the card again? Is there an OGL command to tell the drivers to send the data to the video card to be processed? Not a clue here."

        I certainly won't claim to be an OpenGL expert, but I've read the Red Book, I follow the OpenGL newsgroups, and I'm designing some 3D stuff for fun and experience... So, I'm either going to enlighten a bunch of people or make a big fool of myself. :-) Warning... It's going to be long.

        Anyway... The hard part about trying to answer your question is that APIs like OpenGL are very powerful and very flexible, so it's completely possible to use the API in some very different ways without invalidating its design philosophy.

        To highlight where hardware T&L comes into play, I'll ignore things like texturing, clipping, lighting, etc. for now and just focus on what has to happen to get a polygon onscreen.

        OpenGL is well-structured to handle hierarchical scene graphs (ie: four wheels attached relative to the car, five bolts attached relative to each wheel, etc.). It allows you to describe geometry relative to a local space (ie: bolts positioned within the wheel's local space, a scratch on the bolt positioned within the bolt's local space). To get a poly within the bolt (or a hand at the end of an arm attached to a body, or whatever) onto the screen, it must be transformed into wheel space, then car space, then through whatever space the car was in up into world space, then into the viewing volume (if it hasn't been clipped anywhere along the way). Once up all the way into screen space, it can finally be drawn (or thrown away).

        The current state of this hierarchy is usually maintained in a stack of transformation matrices. (There can be numerous stacks - one for the viewing volume, one for textures, one for the world, etc., but we'll ignore the differentiation for now).

        The topmost item in the stack contains the current transformation matrix that will take vertices you describe in local space and transform them into world space. It is this process which allows you to describe things easily at various nested yet independent levels and still be able to figure out (and draw them) where they exist in the "real" world. You can push a transformation onto this stack to "dive" down into a new local space, or pop the current one off to rewind back up to the previous local space.

        When you actually tell OpenGL to draw something, it must take all the vertices you have described to it, multiply them by a number of these matrices at the tops of various stacks in order to determine the location of the vertices in world space, and do a bunch of other stuff to locate the vertices in screen space to finally draw them (ignoring clipping, etc., as in my disclaimer at the top).

        Here's (finally :-) where hardware T&L can come in. The matrix multiplications to determine how to relate the current local space to the world space aren't that expensive. The fact that you then have to multiply each and every vertex by that transformation matrix is where it gets expensive. And that's one of the jobs that can be done by hardware transformation. A hardware transformation engine contains circuits specifically designed to do this job, and that's why it can do it so well even at a lower clock speed.

        Hardware transformation can even do the multiplications involved in relating local space to world space. However, since the software is rarely just blindly drawing hierarchical scene graphs to the screen, and also has to take care of collision detection, etc., this portion is usually kept in the client application, allowing the application to be able to figure out where everything is when it needs to. That area of the transformation process isn't the "expensive" part in the first place, so it's a double plus to keep it in the application code.

        So, to relate a current OpenGL game to hardware T&L features, it depends on how much of the OpenGL pipeline the developer is using...

        For current games, if the developer is doing all transformation right to screen space through their own code, you will see no benefit from hardware transformation.

        If the developer is transforming to world space through their own code (as in my example), and letting the driver take it from there to the screen (which is the best balance between making life easy on yourself and still being able to do collision detection, etc.) you will see the biggest performance increase.

        This increase comes from handing off the transformations from local space to world space and then to screen space for what could be thousands of vertices to hardware designed to do just that job.

        To use any more of the pipeline (ie: going through the driver for all local-to-world transforms) certainly makes things really easy to code initially, but it makes numerous features much harder (if not impossible) to code later, so the "balance point" somewhere in the middle is probably the best.

        If we were to assume that current developers are at that "balance point" right now, then they are already poised to make the BEST possible use of hardware transformation through OpenGL, because if they made much more use of it, some features (eg: certain forms of dynamic lighting, collision detection, etc.) would become much less efficient, if possible at all.

        I hope that my massively long-winded explanation helps. If any OpenGL gurus on the board find any transformation-specific problems with what I've said (excluding the stuff in my disclaimer :-) please correct me.



        • #34
          Himself wrote:

          Yeah, coprocessing with a good OS. T&L won't fix Windows, that's for sure.


          What the hell do you mean??? Windows has very little effect on game performance!!! Did you see Q3Test run faster on Linux or Mac??? NO, they don't! A game like Quake gets 99% of CPU time, so Windows cannot have much effect on gaming performance.

          I really must say that as a programmer (Win32, ATL and MFC) I find Windows to be a great OS, in spite of what people say; performance is not the issue, bugs are. But it's getting better with every version of Windows.
          No other x86 OS (I haven't worked with MacOS) has given me the functionality that Windows does, and that is the most important thing.
          A dedicated program like a 3D game will take 99% of CPU time anyway, whether it's on a Linux system or on Win98, WinNT or Win2000.



          • #35
            Marmita: "OpenGL is a VERY OLD standard; I think the most recent version is 1.1 or 1.2... so it doesn't implement these new features like EMBM and GeForce 256 T&L."

            OpenGL has supported hardware transform and lighting FOREVER. Without extensions. Period.

            The fact that inexpensive OpenGL-compliant hardware WITH hardware T&L onboard has not existed until now (or soon, as the case may be) is another issue entirely.

            Also, there's a reason that OpenGL is only at 1.2. It works. :-) You also have to remember that OpenGL has been around for ages... If it "lacks features" it's only because the ARB requires that features be evaluated and integrated into the standard by a group, not just one vendor. Currently, the only "feature" missing from OpenGL that is in D3D would be in the area of bump and environment mapping (though those can be handled in OpenGL through other means).

            Simply put, OpenGL is only at revision 1.2 because it hasn't needed more revisions. :-)



            • #36
              That's right, it did include T&L... my mistake. But still, the hardware company has to do a special implementation to take advantage of it in their own hardware.

              The group that decides what goes into the OpenGL standard (and what doesn't) is not taking into account the needs of PC games, because the API wasn't meant for them... so I want to make this clear: OpenGL was not written for gamers, and no feature that is intended for games, like the 3dfx T-Buffer, S3TC texture compression or EMBM, will be included in it... (About anti-aliasing I'm not sure; does OpenGL include support for that?)



              • #37
                Just as long as we agree on the fact that OpenGL has support for hardware T&L natively and WITHOUT extensions I'm happy. :-)

                The effects shown by the T-buffer so far can all be handled by the OpenGL accumulation buffer, though, once again, hardware support for the accumulation buffer at the consumer level is virtually (if not entirely) non-existent at this point.

                EMBM could easily be handled by an extension if it can't already be handled by OpenGL 1.2. If you think about it, perhaps D3D goes through so many revisions because MS keeps taking what should be "vendor extensions" and rolls them into a new release along with some new features.

                The only reason OpenGL isn't visibly directed at gamers is because a 3D API doesn't need to be directed in such a way. I think that D3D appears directed at gamers for two reasons (neither of them involving a technically superior design, an area where OpenGL wins hands down IMHO):

                1. The first consumer vendors of 3D hardware wrote D3D drivers first because MS was bouncing back and forth on the issue of OpenGL support in the OS (for reasons I'll describe below).

                2. Microsoft says so.

                OpenGL never states that it leans towards either professional graphics or games.

                OpenGL is for everything. By saying that D3D is for gamers and that OpenGL is for professional applications, MS makes a claim that is designed to give D3D a purely psychological edge. The number of revisions D3D has required to get where it is today is an obvious sign that the psychological edge is all MS has had, because the technology has obviously been lacking (ie: D3D _7_ vs. OpenGL _1.2_: it's taken 7 releases to even get close to 1.1, let alone 1.2).

                Point 1 only exists because the DirectX team thought they could do better than the entire OpenGL consortium, not to mention that it's standard MS policy to either a) buy technology or b) kill it. :-)



                • #38
                  That's not all true, my friend... The first two DirectX versions (I think 3 too) didn't include 3D hardware acceleration at all, so those shouldn't be counted as revisions. There was no DirectX 4.0, so that's one more down... What's left? 5, 6, 7.

                  I agree that MS is doing the vendor-extension work, BUT that's the way it has to be!

                  MS Direct3D is very well optimized (6 is well optimized; 7 didn't get any more optimization, just more features).

                  MS is doing great work with the Direct3D code, and I really don't care how many revisions they did... You can't compare OpenGL revisions to Direct3D revisions!!! It's unfair... When you say OpenGL 1.2, you mean the API spec didn't change much, but the OpenGL standard doesn't include the implementation of its functions!!! It just specifies what functions are included and what they do... that's not so hard to do. With Direct3D, MS not only writes the specification, it does the implementation too. That is way more complicated and will surely need more revisions to get right... Look how many OpenGL ICD revisions it takes the vendors to get a good driver; that's the work MS is doing with Direct3D...

                  And as I mentioned, it's a great thing that MS is doing the extension work; that way we won't suffer from incompatibility between different vendors' extensions... I think one company should deal with the code implementation, and the vendors should just write a driver that uses those implementations...

                  It's very nice to defend OpenGL, but it's a fact that it isn't working as well as Direct3D for PC gaming. Vendors are having trouble with their ICDs, and even nVidia, which is considered the vendor with the best ICD, gets better performance under Direct3D.

                  I know you guys really like to attack MS, but you must see it objectively. Direct3D and DirectX are a healthier way to use 3D hardware... Isn't it a waste of time that every vendor that makes a card with texture compression has to write its own code to deal with it??? Isn't it simpler for MS to do that once, in the most optimized way???

                  The other way around, no new features would come out at all... EMBM doesn't have even a little chance to survive if MS doesn't support it in Direct3D; the same goes for S3TC texture compression, and the same goes for every new feature a vendor supports...

                  If not for MS, we wouldn't see so many new features popping up these days, because no vendor would start messing around with new OpenGL implementations.




                      • #41
                        Why the hell did it post three times???
                        (And BTW, sorry for the spelling mistakes... I typed it quickly and didn't check it before posting... sorry again.)



                        • #42
                          Weird multiple post, but oh well...

                          Anyway, I do agree with a number of your points. My apologies. I guess I wasn't really trying to compare the two APIs simply by revision count, but wanted to emphasize that D3D has had to run through a lot of revs to catch up with OpenGL because MS decided to go it on their own (Not Invented Here syndrome) instead of adopting the industry standard API. You can see this exact same thing happening in many areas of Microsoft... They have gotten somewhat better recently, but it was getting pretty damn bad for a few years there.

                          It's just truly unfortunate that the DX team (particularly the D3D team) had some of MS's biggest egos (everybody remember Alex St. John?) suffering from the worst cases of NIH the graphics industry has ever seen.

                          We could have all had a single, solid, cross-platform, open 3D graphics API with full driver support from the vendor community. We certainly wouldn't be here complaining about G400 OpenGL support! Oh well.

                          And don't worry, I'm not an MS-basher. I'm a software developer working in an all-MS shop. I have to like this stuff.


                          [This message has been smile-ified by JeremyGray (edited 09-15-1999).]




                          • #43
                            I still think it's better this way... I really don't believe in these cross-platform things. I trust MS to do the best job implementing the API, and after some experience with 3D card vendors (ATI, 3dfx, nVidia and Matrox... I've had them all) I can't say the same about them...

                            I accept your point... MS could have taken the OpenGL standard and enhanced it, and that would have been simple.

                            BUT again, the API spec is not the hard part... and writing endless OpenGL extensions is not an easier path either... I really think Direct3D is the way to go for PC 3D hardware, and I think the near future will prove me right. My guess is that the next id Software game will support Direct3D. It's maybe a little harder to write for, but that's what we pay them for, and the performance advantage IS worth it; so is avoiding the compatibility issues.



                            • #44
                              BTW, I agree that MS finds it hard to adopt non-MS standards; a good example is TCP/IP, which Novell didn't adopt at first either... But you can see that in the new versions of Windows MS has adopted TCP/IP as its network protocol, and it has even adopted the new IPv6 from Internet2... So you see, they know when to admit they were wrong, and that's really something they have begun doing in the last few years... But again, a 3D API shouldn't be cross-platform; every OS should have its own optimized API... You can see the cross-platform advantage in the number of platform ports a game like Q3 has, but you can also see the disadvantage when you look at benchmarks against similar D3D games... I really think benchmarks will always be better for D3D games. As for portability, well, I think it's less important; let the programmers break their heads... if a platform is popular enough, they WILL port their game to it.
