Alpha Bits (no... not the cereal)

  • Alpha Bits (no... not the cereal)

    So I'm sitting here on a Friday night working on my preview of the Parhelia. I was doing some research around the web and came across someone complaining, without much explanation, that while the Parhelia has 10-bit color, it only has 2 bits reserved for the alpha channel -- which is often used for transparency -- and that this could cause visible banding in scenes containing heavy fog. So that got me wondering... what do other GPUs normally use? 4 bits?

    (The preview will be done tomorrow. I do have other things to do on a Friday night.)
    Last edited by ChronosZero; 17 May 2002, 18:27.

  • #2
    In a traditional 32-bit rendering architecture you have 8 bits for each color channel plus 8 bits (256 shades) of alpha. Parhelia can do either that or 10 bits per color channel plus 2 bits (4 shades) of alpha.
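
    Just to picture the layout (this is only my own illustration, not anything from the Matrox docs), here's how one pixel fits into a 32-bit word in each mode:

    #include <stdint.h>

    /* traditional ARGB 8:8:8:8 -- 8 bits (256 levels) of alpha */
    uint32_t pack_argb8888(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
    {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    /* GigaColor-style ARGB 2:10:10:10 -- only 2 bits (4 levels) of alpha */
    uint32_t pack_argb2101010(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
    {
        return (a << 30) | (r << 20) | (g << 10) | b;
    }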

    I don't know, though, when (or if) this alpha information is actually used. If it's used in games, GigaColor won't be the right choice, but it's intended for DTP/graphics work anyway.

    I believe Photoshop handles transparency internally, independent of the graphics card's color format?

    AZ
    There's an Opera in my macbook.

    Comment


    • #3
      I'm pretty sure that it's only the final framebuffer that would be represented as 10-10-10-2. If I understand it correctly, internal calculations would be done using 10 bits per component including alpha (a total of 40 bits), and then the final render can go to a 10-10-10-2 buffer because you don't need the alpha bits once you've rendered.
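
      Something like this is what I have in mind (pure guesswork on my part, not anything from the white paper): blend at full 10-bit precision, then throw the alpha away when you store the pixel.

      #include <stdint.h>

      /* guesswork sketch: "source over dest" with 10-bit channels (0..1023),
         stored into a 2:10:10:10 word -- the blended alpha never has to survive */
      uint32_t blend_and_store(uint32_t sr, uint32_t sg, uint32_t sb, uint32_t sa,
                               uint32_t dr, uint32_t dg, uint32_t db)
      {
          uint32_t r = (sr * sa + dr * (1023 - sa)) / 1023;
          uint32_t g = (sg * sa + dg * (1023 - sa)) / 1023;
          uint32_t b = (sb * sa + db * (1023 - sa)) / 1023;
          return (r << 20) | (g << 10) | b;   /* alpha field left at zero */
      }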

      On the desktop side of things, the alpha bits are almost worthless anyway. (I think they might have some use with overlays or similar stuff.)

      I've read the white paper, and it's not completely clear, but it'd be suicide to limit source textures in 3D rendering to a 10-10-10-2 format.

      Of course, take all this with a grain of salt, but I think they'd be stupid if two bits were all the alpha you could ever have.


      AlgoRhythm

      Comment


      • #4
        So I'm sitting here drinking my coffee and wiping the sleep away from my eyes while catching up on the posts. It occurs to me why there's only 2 bits of alpha channel data.

        Yes, I should have known it was 8 bits before. As a matter of fact, I even read it somewhere yesterday and forgot. 24-bit color is when you have 8 bits per channel; 32-bit is the same with an added 8 bits for alpha. Somewhere (everywhere?) in the pipeline they must be limited to a total of 32 bits, so when you dedicate 10 bits to each color you only have 2 bits to spare.
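
        Spelled out, the 32-bit budget works like this:

        8 + 8 + 8 + 8    = 32 bits   (8 bits per color + 8-bit alpha, 256 levels)
        10 + 10 + 10 + 2 = 32 bits   (10 bits per color + 2-bit alpha, 4 levels)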

        Comment


        • #5
          I have a Kyro2 card in my gaming PC, and it can accelerate in 24-bit (at least in older drivers). I guess that it is in 8:8:8:0 format, and it works fine in all Quake-engine games and in Rollcage; I couldn't select it in other games. In Quake I used "desktop color depth", and Rollcage appeared to support it out of the box.
          It looked exactly like 32-bit (8:8:8:8), so I guess it doesn't use alpha in a way that could be a problem.

          I'm not sure if it was only the front buffer that was in 24-bit, with the rest rendered in 32-bit internally.

          BTW, I don't think Parhelia renders in 40-bit internally, because 40-bit data transfers are very inefficient in hardware with 32-bit-wide datapaths, which is VERY normal. 40 bits would require 2 transfers: the first 32 bits, then the remaining 8 bits, leaving 24 bits of empty data (noise) in the rest of the second transfer. I don't know why a Kyro2 can do 24-bit, because it contradicts my understanding of efficient hardware use (8 bits wasted per transfer).
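
          To spell out what I mean (assuming one pixel per transfer and no packing across pixels):

          40-bit pixel over a 32-bit path: 2 transfers = 64 bits moved
          bits actually carrying data:     40
          bits wasted per pixel:           24   (37.5% of the bandwidth)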
          Last edited by TdB; 18 May 2002, 09:14.
          This sig is a shameless atempt to make my post look bigger.

          Comment


          • #6
            TdB, you're exactly right. It comes down to efficient use of the data pipes. Powers of 2 are the way to go. So yeah, 2 bits of alpha channel are all you're going to get with a 32-bit bus and 10-bit color. (But I still have no idea where the other 8 bits go on the Kyro2.)

            We should be able to use 10-bit color in games. The Matrox whitepaper on GigaColor states that DirectX 8.1 supports it. Of course, that makes the grand assumption that any game company has produced and included textures in a format higher than 8 bits.

            Comment


            • #7
              Of course, that makes the grand assumption that any game company has produced and included textures in a format higher than 8 bits.
              Not really; the internal interpolation for bilinear filtering and multitexture blending can still be done in 10-bit.
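
              Roughly what I mean (just my own sketch of the idea, not how Matrox actually does it): mix two 8-bit texel values but keep the intermediate precision, so the result fills a 10-bit channel.

              #include <stdint.h>

              /* sketch: blend two 8-bit texel channels with an 8-bit weight,
                 keeping the intermediate precision, and return a 10-bit result */
              uint32_t lerp8_to_10(uint32_t t0, uint32_t t1, uint32_t w)   /* all inputs 0..255 */
              {
                  uint32_t mixed = t0 * (255 - w) + t1 * w;   /* 0..65025, ~16 bits of precision */
                  return (mixed * 1023) / 65025;              /* rescale to 0..1023 (10 bits)    */
              }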
              Last edited by TdB; 18 May 2002, 09:33.
              This sig is a shameless atempt to make my post look bigger.

              Comment


              • #8
                I could see that. I don't know if that's how it's done, but it sounds logical enough for me.

                Comment


                • #9
                  TdB, I don't think it would be a matter of inefficiency. We've been using 24-bit framebuffers for long enough without people complaining about efficiency.

                  For example, if you want to transfer 24-bit pixels but have a 32-bit-wide data path, you just transfer four pixels at a time with three 32-bit words (4*24 = 3*32). So, if you used 40 bits, it would just be slightly different: you transfer four 40-bit pixels with five 32-bit words (4*40 = 5*32).
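
                  Here's the 24-bit case in code, just to show the packing is trivial (my own sketch; I'm not claiming any particular card does it exactly this way):

                  #include <stdint.h>

                  /* pack four 24-bit RGB pixels (each passed as 0x00RRGGBB)
                     into exactly three 32-bit words: 4*24 = 3*32 = 96 bits */
                  void pack4_rgb24(const uint32_t px[4], uint32_t out[3])
                  {
                      out[0] = (px[0] << 8)             | (px[1] >> 16);         /* p0 + top 8 bits of p1     */
                      out[1] = ((px[1] & 0xFFFF) << 16) | (px[2] >> 8);          /* rest of p1 + top 16 of p2 */
                      out[2] = ((px[2] & 0xFF) << 24)   | (px[3] & 0xFFFFFF);    /* last 8 of p2 + all of p3  */
                  }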

                  So, I don't think your argument about efficiency means much.

                  But, the whole thing is conjecture until we get a definitive answer from Matrox.

                  P.S.: Yes, I have written a few graphics algorithms in my day; I'm not completely clueless.


                  AlgoRhythm

                  Comment


                  • #10
                    AlgoRhythm, that makes sense, but if they transferred pixels like that, wouldn't they need an awful lot of instructions, unless they had 40-bit registers on the GPU itself?
                    I mean, if they only had 32-bit registers and got the pixels like that, they would still use 2 registers per pixel,
                    because if they used your approach on the GPU itself, 5 registers for 4 pixels, then the instruction set (for the GPU) would be huge just to cover the different register combinations for a pixel (which bits in which register correspond to which pixel).

                    And if they had 40-bit registers and wanted to avoid a huge instruction set, then pixel throughput would be inefficient at lower bit depths.
                    Now that I think of it, isn't that precisely the case for the G400 in 16-bit color (even 16-bit color is rendered at 32-bit precision internally, resulting in less-than-stellar performance in 16-bit)? So yes, 40-bit registers would probably not be THAT strange; at least it fits the rumors about no performance loss in 10-bit mode.

                    BTW, this is just my own guessing here; I don't really know if it makes sense. I don't know that much about hardware, I just tried to apply my (rather lacking) knowledge of CPUs to the GPU world.
                    This sig is a shameless atempt to make my post look bigger.

                    Comment


                    • #11
                      Thanks for the enlightening post, AlgoRhythm. That does make sense.

                      Unfortunately though, it doesn't appear that Matrox is doing anything like that. Their Feature Summary pdf states "10-bit frame buffer mode for ARGB (2:10:10:10)". That leads me to believe that they're using plain 32-bit pixels.

                      I'm curious to see if this will have much of an effect on transparencies in games.

                      Comment


                      • #12
                        "10-bit frame buffer mode for ARGB (2:10:10:10)", doesn´t mean that internal calculasions is done in (2:10:10:10), the old g400 rendered 16bit 3d internally in 32 bit, yet the frame buffer was still 16 bit AFAIK.
                        This sig is a shameless atempt to make my post look bigger.

                        Comment


                        • #13
                          Hrm. Okay... we've wandered into some really interesting speculation and theory. I don't know how Matrox chose to implement the internals any better than what they've released in the PR diagrams, which obviously lack heaps of detail. I don't know if anyone over there is willing to discuss the chips at such a fine level of detail, but I'll ask anyway.

                          Comment


                          • #14
                            I believe the Matrox specs called out 512-bit registers.
                            The world just changed, Sep. 11, 2001

                            Comment


                            • #15
                              I couldn't find anything on the Matrox site regarding register size, but the FiringSquad article stated it had 512-bit registers:
                              A significant part of its performance comes from using 512-bit registers and being able to process 512-bit chunks of information.
                              The world just changed, Sep. 11, 2001

                              Comment
