Parhelia - no flame - makes me wonder

  • #31
    Rags:

    "First, that's not a question. It's a statement"

=> My bad: when I wrote "The Parhelia should be faster than the nVidia chip in all cases," I should have written "Should the Parhelia be faster than the nVidia chip in all cases?"


    "FAA is done by the hardware on Parhelia. It has it's own hardware to do the calculations. "

You are saying the Parhelia's FAA is a hardwired function, not a set of available instructions used to perform an algorithm (NB: that is also hardware). I hope and think you are wrong, because if you are right it means that, since it is not programmable, it is not improvable!

    "Nothing nVidia has does it, so your statement is INCORRECT, ASSumption wrong. nVidia has no instruction set for doing FAA."

OK, that is what I wanted to know; I was just expecting an answer a bit less nVidiot-like.

Something like (PS: I'm making the following up off the top of my head; it is just a nonsense example!)

'In order to perform FSAA you need to store the alpha coordinate when drawing a polygon, then check whether those coordinates are common to two polygons; if they are, the vector is stored while the rest of the scene is rendered, and multisampling is performed only on the stored coordinates. To perform those operations you need to be able to poke data before rendering, which is not possible in the GeForce pipe.' Or something like 'In order to do FAA you need a "*" or a shift function working on 512-bit integers, and those functions aren't available on other hardware.'


The Matrox FAA algorithm is proprietary and I doubt I will find any literature about it. But I think FAA is a technique used and documented in other applications; I was hoping someone here knew about it...

I'll try to find answers, which I will gladly post here.
    Last edited by notagain; 20 June 2002, 01:46.



    • #32
      from "http://www.sgi.com/software/opengl/"
      notes = my comments

      "
Remembering the facts:

Because image displays are made up of discrete pixels, we may get an effect called aliasing.
      The Jaggies
      Rasterised line segments and edges of polygons appear with jagged edges (the jaggies) even using high resolutions.
      The colour of pixels is set according to whether the line segment covers the point at the centre of a pixel.
      Each pixel "samples" the line segment (or polygon) at its centre determining whether the object is present there.
      The pixel's colour is set according to the answer to this sampling question.
      Small objects such as distant objects in a 3D scene may appear to disappear entirely if they land between the centres of two or more pixels.
      Small objects (or pixels making up objects) may blink on and off.
      They may cover the centre of a pixel in one frame but miss it in the next.
      Aliasing
      The term aliasing comes from sampling theory in signal processing.
      If a rapidly varying signal is sampled too infrequently, the samples appear to represent a signal that varies at a lower frequency



      Antialiasing Techniques

note: if I'm right, the following is what we call nVidia & ATI FSAA

      Prefiltering
      Prefiltering techniques compute pixel colours depending on an object's coverage (the fraction of the pixel area covered by the object).
      For example, a pixel covered by half a white object on a black background is given the intensity 0.5.
      What would be the intensity of a pixel if it was covered by one quarter of a white object?
      Without antialiasing, a pixel has either the intensity 1.0 if its centre is covered, or the intensity 0.0 if the centre is uncovered. There are a number of efficient prefiltering algorithms.
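A minimal C++ sketch of the prefiltering rule described above (the function name is invented; greyscale intensities in [0,1]):

```cpp
// Prefiltering sketch: the pixel gets the object colour weighted by the
// fraction of the pixel area the object covers.
float prefilter(float coverage, float object, float background)
{
    return coverage * object + (1.0f - coverage) * background;
}
// e.g. prefilter(0.25f, 1.0f, 0.0f) == 0.25f
// (a white object covering a quarter of the pixel on a black background)
```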


      Supersampling
      Supersampling tries to reduce aliasing effects by sampling more often than one sample per pixel.
      It takes more intensity samples of the image than are displayed and the display pixel value becomes the average of several samples.
For example, imagine the square space taken up by one pixel overlaid by 9 equally spaced sampling points (the centre and the 8 surrounding points).
      Some of these points are shared between pixels.
      You may get a number of samples of the background colour and a number of the object.
      If we get 3 background and 6 samples of object colour, the colour of the pixel is set to the sum of 1/3 background colour and 2/3 object colour.
      You only get the exact colour of the object when all 9 sample colours are the same as the object.
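A minimal C++ sketch of that 9-sample supersampling example (the function name is invented):

```cpp
// Supersampling sketch: the display pixel is the plain average of the 9
// samples taken across the pixel's area.
float supersample_3x3(const float samples[9])
{
    float sum = 0.0f;
    for (int i = 0; i < 9; ++i)
        sum += samples[i];
    return sum / 9.0f; // 3 background + 6 object samples -> 1/3 + 2/3 mix
}
```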

      Postfiltering
      Postfiltering computes each display pixel as a weighted average of an appropriate set of neighbouring samples of the scene.

      For example, if we consider the 9 sampling points described in supersampling, we may wish to give the centre sample a lot more weight.
      We could give it weight 1/2 and the other 8 points a weight of 1/16 each.
      So we can regard supersampling as a special case of postfiltering in which each sampling point has an equal weight (eg. 1/9).
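And the corresponding sketch of the 3x3 postfilter with the weights given above (1/2 for the centre sample, 1/16 for each of the 8 neighbours; the name is invented):

```cpp
// Postfiltering sketch: weighted average of the 3x3 sample neighbourhood.
// Weights sum to 1/2 + 8 * 1/16 = 1.
float postfilter_3x3(const float s[3][3])
{
    float result = 0.5f * s[1][1];      // centre sample, weight 1/2
    for (int y = 0; y < 3; ++y)
        for (int x = 0; x < 3; ++x)
            if (x != 1 || y != 1)
                result += s[y][x] / 16.0f; // each neighbour, weight 1/16
    return result;
}
```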

note: nVidia's Quincunx AA appears to be, in fact, a different way of choosing and weighting the samples in the pre/post-filter

      More advanced techniques for antialiasing are used in ray tracing.


note: the following, I think, is more interesting


      Antialiasing in OpenGL (RGBA)

      Primitives are rasterised (converted to a 2D image) by determining which squares of an integer grid in window coordinates are occupied by the primitive, and then assigning colour and other values to each square. Each square of the grid with its associated values of colour, depth (z), and texture coordinates is known as a fragment.

In RGBA mode, OpenGL multiplies a fragment's alpha value by its coverage. We can use the resulting alpha value to blend the fragment with the corresponding pixel already in the framebuffer. The details of calculating coverage values may vary depending on the implementation of OpenGL.
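As a concrete example of the RGBA-mode blending the quote describes, this is the standard OpenGL 1.x recipe for antialiased lines and points (it assumes a current GL context and is not specific to any particular card):

```cpp
// Standard OpenGL 1.x line/point antialiasing in RGBA mode. Coverage is
// folded into alpha and blended with the pixel already in the framebuffer.
#include <GL/gl.h>

void enable_line_antialiasing()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST); // optional quality hint
}
```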

      Antialiasing Polygons
      Antialiasing the edges of filled polygons is similar to antialiasing points and lines. When different polygons have overlapping edges, blend colour values appropriately.

      However, antialiasing polygons in color index mode isn't practical since object intersections are more prevalent and you really need to use OpenGL blending to get decent results.

      To enable polygon antialiasing call glEnable() with GL_POLYGON_SMOOTH. This causes pixels on the edges of the polygon to be assigned fractional alpha values based on their coverage. Also, if you want, you can supply a value for GL_POLYGON_SMOOTH_HINT.

      In order to get the polygons blended correctly when they overlap, you need to sort the polygons in front to back order. Before rendering, disable depth testing, enable blending and set the blending factors to GL_SRC_ALPHA_SATURATE (source) and GL_ONE (destination). The final color will be the sum of the destination color and the scaled source color; the scale factor is the smaller of either the incoming source alpha value or one minus the destination alpha value. This means that for a pixel with a large alpha value, successive incoming pixels have little effect on the final color because one minus the destination alpha is almost zero. Since the accumulated coverage is stored in the color buffer, destination alpha is required for this algorithm to work. Thus you must request a visual or pixel format with destination alpha. OpenGL does not require implementations to support a destination alpha buffer so visual selection may fail
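Putting that quoted recipe into code, a minimal OpenGL 1.x setup might look like this (it assumes a current GL context whose visual/pixel format has destination alpha; the application must still submit polygons sorted front to back):

```cpp
// Polygon antialiasing setup exactly as the quoted text describes.
#include <GL/gl.h>

void enable_polygon_antialiasing()
{
    glDisable(GL_DEPTH_TEST);                   // depth testing off, as described
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE); // accumulate coverage in dest alpha
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);  // optional quality hint
}
```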




My new question is: what is the difference between OpenGL polygon antialiasing and FAA? I think it's more or less the same, isn't it?
      Last edited by notagain; 20 June 2002, 01:59.



      • #33
        Yes and no... they work on a similar principle, and have a similar result, but the level of accuracy might be different. There are 2 key differences (as I understand it):

        1) parhelia has an accuracy of 1/16th of a pixel (hence 16x FAA) - so a fragment pixel can be anything from 1/16th covered to 15/16ths covered. 'True' AA (depending on how you implement it) could give perfect precision (by calculating the actual area covered/uncovered). In practice the difference would be hard to see.

        2) antialiasing as described in the snippet you posted, claims to work by taking the fractional coverage value, and doing a weighted sum with the pixel already in the frame buffer. Parhelia (I believe) works by rendering 16 mini pixels, then averaging the colour values of those 16 mini pixels together (it takes no notice of what is already in the frame buffer).
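A rough C++ illustration of those two strategies (all names are invented; greyscale floats in [0,1]; this only sketches the descriptions above, not the actual hardware):

```cpp
// OpenGL-style: weight the incoming colour by its coverage and blend it with
// whatever is already in the frame buffer.
float blend_with_framebuffer(float incoming, float coverage, float framebuffer)
{
    return coverage * incoming + (1.0f - coverage) * framebuffer;
}

// Parhelia-style, as described above: average 16 sub-pixel colour values and
// ignore the previous frame-buffer contents entirely.
float average_16_subsamples(const float sub[16])
{
    float sum = 0.0f;
    for (int i = 0; i < 16; ++i)
        sum += sub[i];
    return sum / 16.0f;
}
```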

        A few other points: Matrox say they have several outstanding patents for technologies used on the parhelia. I would be very surprised if at least one of these DIDN'T cover their way of implementing FAA.

The FAA engine is a separate piece of silicon, with its own cache (according to the chip-layout diagram, anyway). As such, it may or may not be programmable, but it is certainly not replicated on NvidiATI cards at the moment.

        LEM



        • #34
          Originally posted by Dr Mordrid


          I thought you might be, but I wasn't.

          Dr. Mordrid
It was actually a reply to the_king's post.

Do you really think nVidia shows the same frame twice just to win in benchmarks? That would be really low if nVidia did that.
Cheating in benchmarks is cheating your costumers, and it would also make animations look like complete crap (I think). I have never owned an nVidia card, so I can't really tell how they are, but I have some friends who are very satisfied with their nVidia cards, so I am having some trouble believing that nVidia is cheating like that.
This sig is a shameless attempt to make my post look bigger.



          • #35
I found a whitepaper about edge AA; it is a different technique from 16x FAA.

Apparently it makes all the edges and lines look bloated.
This sig is a shameless attempt to make my post look bigger.



            • #36
              cheers Lemmin & TDB



              • #37
                The recent ExtremeTech interview with Matrox shed some more light on how the Fragment Anti-Aliasing is performed:

                "We're really excited about Fragment Anti-aliasing 16x (FAA16x), which we see as a unique approach to anti-aliasing. FAA16x operates on the principle that only pixels on the edge of a triangle need to be anti-aliased. The algorithm recognizes these pixels and treats them differently then interior pixels of a triangle. Our hardware actually looks at every pixel being rendered and checks to see if the pixel fully covers an entire raster element. If the raster element is fully covered, then the pixel is written to the back buffer as usual. If however, the raster element is only partially covered (which is typically the case on a polygon edge) than the level of coverage is calculated by the chip down to a 1/16th sub-pixel fragment, and this fragment value is written into a special fragment heap that is stored in local memory. As soon as another fragment is created that can be combined with the first to fully cover the raster element, the fragments are combined together to produce the correct result. This approach has the benefit that enabling 16x fragment anti-aliasing is typically only costing about 20-25% performance versus no anti-aliasing.

The other benefit is that very high resolutions can be anti-aliased without producing memory overflow problems. In particular, a 1600x1200 screen would require 3200x2400 32-bit pixels, or ~30 MB of scratch buffer to perform traditional 4x supersampled anti-aliasing. But with only 10% of the pixels needing anti-aliasing, our FAA-16x may only require a few MB to perform higher quality anti-aliasing. Note also that for any given scene, the ratio of edge pixels to interior pixels decreases as the resolution increases. This means that the cost of FAA-16x goes down the higher the resolution, while the cost of other algorithms increases dramatically! "
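For what it's worth, the memory figure checks out: 3200 x 2400 pixels x 4 bytes is roughly 29 MB, matching the "~30 MB" quoted. And here is a rough software sketch of the flow the quote describes (every name and type is invented; this only illustrates the description, not Matrox's actual hardware logic, and it assumes fragments within a pixel do not overlap):

```cpp
// FAA-16x-style flow, as described in the interview quote above.
#include <bitset>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Fragment {
    uint16_t mask;   // which of the 16 sub-pixel positions this fragment covers
    float    colour; // greyscale for simplicity
};

std::vector<float> back_buffer(1600 * 1200, 0.0f);
std::unordered_map<uint32_t, std::vector<Fragment>> fragment_heap;

void emit(uint32_t pixel, Fragment f)
{
    if (f.mask == 0xFFFF) {              // raster element fully covered:
        back_buffer[pixel] = f.colour;   // write to the back buffer as usual
        return;
    }
    auto& frags = fragment_heap[pixel];  // partially covered (an edge): store it
    frags.push_back(f);

    uint16_t combined = 0;
    for (const Fragment& g : frags)
        combined |= g.mask;
    if (combined == 0xFFFF) {            // fragments now cover all 16 sub-pixels:
        float sum = 0.0f;
        for (const Fragment& g : frags)  // coverage-weighted average
            sum += g.colour * std::bitset<16>(g.mask).count();
        back_buffer[pixel] = sum / 16.0f;
        fragment_heap.erase(pixel);
    }
}
```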



                • #38
                  Originally posted by notagain
                  technoid

60 fps today is fine, I agree,
but 60 fps today also means 30 fps in 3 or 4 months.

i.e. my real fear is that the only stunning thing about the card is, in fact, an "algorithm".


                  regards
                  This is not necessarily true.

Today's games may not take full advantage of all of Parhelia's features, but tomorrow's games will.


                  Cheers,
                  Elie



                  • #39
                    Originally posted by TDB

Do you really think nVidia shows the same frame twice just to win in benchmarks? That would be really low if nVidia did that. Cheating in benchmarks is cheating your costumers...
                    That would be really low of them, I agree.. their costumers have done such an excellent job, too. Without them, nVidia never would have been able to masquerade as a high quality oriented company all these years.

                    I love it when misspelling results in creative irony.



                    • #40
I know they do that. Look over in TCB, and you'll have someone saying that when his screen drops from 200fps to 150fps it stutters. Why should it be stuttering if 150 unique, distinct frames are being created every second?
                      Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                      • #41
                        Originally posted by Wombat
I know they do that. Look over in TCB, and you'll have someone saying that when his screen drops from 200fps to 150fps it stutters. Why should it be stuttering if 150 unique, distinct frames are being created every second?
Maybe the frames are unevenly distributed within the second in which the framerate drops to 150 fps: say for the first 3/4 of a second it does 0 fps, and in the last 1/4 of a second it does 150 frames. That would give noticeable stuttering, but the fps counter would stay high.
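A toy C++ example of that point: both runs below report 150 frames over one second, but one delivers every frame in the last quarter of that second, so the worst frame gap is huge. All numbers are made up.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> even, bursty;           // frame timestamps in seconds
    for (int i = 0; i < 150; ++i) {
        even.push_back(i / 150.0);              // evenly spaced: smooth
        bursty.push_back(0.75 + i / 600.0);     // all in the last 0.25 s: stutter
    }

    auto worst_gap = [](const std::vector<double>& t) {
        double worst = t.front();               // gap from t = 0 to the first frame
        for (size_t i = 1; i < t.size(); ++i)
            worst = std::max(worst, t[i] - t[i - 1]);
        return worst;
    };

    std::printf("even   : 150 frames, worst gap %.0f ms\n", worst_gap(even) * 1000.0);
    std::printf("bursty : 150 frames, worst gap %.0f ms\n", worst_gap(bursty) * 1000.0);
}
```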
This sig is a shameless attempt to make my post look bigger.



                        • #42
                          Like I have said before.

                          Just run a Parhelia then a GF4 back to back, and see which one is smoother.

                          Rags



                          • #43
                            Scarily enough, try running a G400Max and a Geforce4 back to back. In some games the G400 is still smoother. Just goes to show that frame rate doesn't always prove a great deal.......



                            • #44
Is there any application or benchmark that counts frames per fraction of a second? Then you could get a breakdown of each quarter of a second and look at minimum and maximum framerates. This could highlight the mystery behind how nVidia manages to make 200 fps seem jerky. Surely it would be quite easy to boost framerates in drivers by making the card render, or simply output, the same frame multiple times without much stress on the card. Also, what does 3DMark use to form its end score? Is it entirely based on framerates?
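A rough C++ sketch of the kind of breakdown being asked for, assuming you already have a list of frame timestamps from some logging tool (the function name is invented):

```cpp
#include <algorithm>
#include <cstdio>
#include <map>
#include <vector>

// Bucket frame timestamps into 0.25 s windows and report min/max counts,
// scaled back up to fps. Quarter-seconds with no frames at all simply will
// not show up as a bin here.
void report_quarter_second_fps(const std::vector<double>& frame_times_s)
{
    if (frame_times_s.empty())
        return;

    std::map<long, int> bins; // quarter-second bucket index -> frames in it
    for (double t : frame_times_s)
        ++bins[static_cast<long>(t * 4.0)];

    int lo = bins.begin()->second, hi = bins.begin()->second;
    for (const auto& b : bins) {
        lo = std::min(lo, b.second);
        hi = std::max(hi, b.second);
    }
    std::printf("min %d fps, max %d fps (measured over 0.25 s windows)\n",
                lo * 4, hi * 4);
}
```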
                              is a flower best picked in it's prime or greater withered away by time?
                              Talk about a dream, try to make it real.



                              • #45
                                The problem is, I guess, that even if you could get framerates for quarters of a second instead of seconds, you could still have all the frames in the first half of that quarter and none in the last ...
                                "That's right fool! Now I'm a flying talking donkey!"

                                P4 2.66, 512 mb PC2700, ATI Radeon 9000, Seagate Barracude IV 80 gb, Acer Al 732 17" TFT

