Soon graphics cards will have enough onboard RAM that AGP texture transfers will never be needed.
I'm sorry, but this statement is just wrong. Requirements will always increase: textures get larger and more detailed, and every new technique seems to use some form of multi-texturing. You always have to get that data from somewhere, and the road from system memory is the AGP bus. I agree that bandwidth-saving technologies make the current trend of doubling the nominal AGP bus speed every year look pretty ridiculous, but saying it will never be needed is just as ridiculous.
Well, the case for even more onboard memory has less to do with textures, since pretty much every card out there supports texture compression in hardware, which can already save quite a lot in terms of memory footprint.
The texture compression routines built into DX can compress textures as high as 5 to 1. So take any given card with 64 MB of memory total, of which half is already used for the frame buffer, and that only if the user plays at 1600*1200 in 32-bit. That still leaves 32 MB available for textures, which, with those compression routines, could work like roughly 160 MB (5 to 1 compression).
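To put rough numbers on that (a sketch only; the real layout depends on the driver, and I'm assuming front, back and Z buffers at 32 bits per pixel each):

```python
# Rough memory budget for the 64 MB card example above (assumptions:
# front + back + Z buffers, each 32 bits per pixel at 1600x1200).
MB = 1024 * 1024
w, h, bytes_per_pixel = 1600, 1200, 4

frame_buffer = 3 * w * h * bytes_per_pixel   # front + back + Z
texture_pool = 64 * MB - frame_buffer        # what's left for textures
effective    = 5 * texture_pool              # at 5:1 texture compression

print(f"frame buffer: {frame_buffer / MB:5.1f} MB")   # ~22.0 MB
print(f"texture pool: {texture_pool / MB:5.1f} MB")   # ~42.0 MB
print(f"effective   : {effective / MB:5.1f} MB")      # ~210 MB
```

That actually comes out more generous than the "half of 64 MB" figure; triple buffering or AA would close the gap.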
Now imagine a card with 128 MB of RAM built in. I somehow have a hard time believing that card would ever be texture limited, even if developers started using texture resolutions as high as 2048*2048 or beyond, since with texture compression you could potentially fit 250+ MB of textures in the card's memory without having to texture out of the AGP bus.
The higher memory amounts would be more useful for things like higher-quality AA at very high resolutions, 64-tap or even 128-tap anisotropic filtering (which would use way more frame buffer), or even using part of it as a vertex data cache for the graphics card, so that you don't overwhelm the AGP bus with polygon traffic...
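A similar back-of-the-envelope for the AA point, assuming plain supersampling where colour and Z are kept per sample:

```python
# Frame buffer footprint at 1600x1200 with supersampled AA
# (assumption: 32-bit colour + 32-bit Z stored per sample,
#  plus resolved 32-bit front and back buffers).
MB = 1024 * 1024
w, h = 1600, 1200

for samples in (1, 2, 4, 8):
    sample_buffers = samples * w * h * (4 + 4)   # colour + Z per sample
    front_back = 2 * w * h * 4                   # resolved front + back
    print(f"{samples}x AA: {(sample_buffers + front_back) / MB:6.1f} MB")
# 1x ~29 MB, 2x ~44 MB, 4x ~73 MB, 8x ~132 MB:
# 4x alone already overflows a 64 MB card before a single texture is loaded.
```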
note to self...
Assumption is the mother of all f***ups....
Primary system :
P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...
Actually, I think it's up to 6 to 1 compression for S3TC, and I believe it was incorporated into the DX spec without any modifications.
There is one caveat, though: the spec supports up to those compression ratios, but there are a few intermediate ones between no compression at all and the maximum. Those exist to support specific effects, like translucency, for which the maximum compression ratio can't be used, since it would cause serious visual glitches.
But even in cases where you can't compress all the textures at the highest ratio, a lot of memory can be saved with the intermediate ratios, since it's not just the ratio that changes; the method used to achieve that ratio is different too.
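If you want to see where those ratios come from: S3TC/DXTC stores every 4*4 block of pixels at a fixed size, 8 bytes for DXT1 and 16 bytes for the alpha-capable formats (DXT2-5). A quick sketch of the arithmetic, with Python used purely as a calculator:

```python
# Fixed S3TC/DXTC block sizes: a 4x4 block (16 pixels) always becomes
# 8 bytes in DXT1 or 16 bytes in DXT2-5.
BLOCK_PIXELS = 16

for name, block_bytes in (("DXT1", 8), ("DXT2-5", 16)):
    for bpp in (24, 32):
        uncompressed = BLOCK_PIXELS * bpp // 8   # bytes per raw 4x4 block
        print(f"{name} vs {bpp}bpp: {uncompressed // block_bytes}:1")
# DXT1 vs 24bpp -> 6:1 (the "6 to 1" figure)
# DXT1 vs 32bpp -> 8:1
# DXT2-5 vs 32bpp -> 4:1 (the intermediate ratio used for alpha/translucency)
```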
OK, texture compression is nice and all, but do you realize that a compression ratio of 6/1 results in lossy compression, and thus in lousy image quality? The highest obtainable lossless compression ratio is about 2/1, and that's with 8-bit images.
Lossless compression of 32-bit textures likely gains very little, so I don't see the point of compressing textures unless you're willing to give up image quality. But since we all have Matrox cards, I suppose we all DO care about image quality, so texture compression isn't really that important IMHO.
KT7 Turbo Ltd. Ed. ; Athlon XP 1600+ @ 1470 MHz (140*10.5); 512MB Apacer SDRAM ; G400 MAX ; Iiyama VM Pro410
Well, let's examine the lossless/lossy argument for a second. If we really HAVE to lose quality to get decent compression, why is it that I can take a TIFF file (lossless image format) from my scanner, ZIP it using WinZip (compression tool), and the file size is greatly reduced (this is a 32bpp scanned image)? Keep in mind that the decompression of the contents of the ZIP file results in an exact copy of the original picture. (Of course, I understand that certain image formats don't compress nearly as well, but those are usually formats that are ALREADY compressed).
Possibly the real problem is the current compression formats in use, not whether compression is a worthwhile task. Maybe Matrox would like to use a lossless compression format, especially if they really plan to implement displacement mapping (hopefully along with all the other goodies like full vertex/pixel shader capability) and still keep good overall performance and image quality.
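Ace's ZIP experiment is easy to reproduce, by the way. A minimal sketch with Python's zlib module (DEFLATE, the same algorithm inside ZIP): the round trip is bit-exact, and the ratio depends entirely on the content:

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compression ratio of DEFLATE (the algorithm inside ZIP) on raw bytes."""
    return len(data) / len(zlib.compress(data, 9))

w, h = 1024, 1024
flat  = b"\xff" * (w * h * 4)     # blank white page, 32bpp
noisy = os.urandom(w * h * 4)     # worst case: pure noise

# Lossless: decompressing gives back the exact original bytes.
assert zlib.decompress(zlib.compress(flat)) == flat

print(f"flat scan : {ratio(flat):8.1f}:1")   # huge, the content is redundant
print(f"noisy scan: {ratio(noisy):8.1f}:1")  # ~1:1, nothing left to exploit
```

Real scans sit somewhere between those two extremes, which is why both posts can be right at the same time.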
"..so much for subtlety.."
System specs:
Gainward Ti4600
AMD Athlon XP2100+ (o.c. to 1845MHz)
hehehe ... TIFF is a bad example, since a TIFF can also use JPEG compression internally, and that one is definitely not lossless ...
Usually scanners produce uncompressed TIFFs, or maybe LZW-compressed ones, and LZW indeed is lossless.
Now imagine you scanned a completely empty sheet of paper (= all white) and stored it as an uncompressed TIFF. The resulting file size can be calculated with the following term:
(amount of horizontal pixels) * (amount of vertical pixels) * 3 bytes for RGB (in case you scanned in ordinary true color = 24bpp) = file size of scan
Now, using LZW compression will result in a dramatically shrunken file size, since the document contains only one RGB value, used for every pixel, and that can be expressed as something like "use RGB 255/255/255 for 10000 pixels of width and 10000 pixels of height", or something to that extent.
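Plugging numbers into that: a 10000*10000 pixel scan at 24bpp is 10000 * 10000 * 3 = 300,000,000 bytes, roughly 286 MB uncompressed, while a single repeated value collapses to almost nothing. A toy run-length encoder (simpler than real LZW, but it captures the same intuition):

```python
def rle_encode(pixels: bytes) -> list[tuple[int, int]]:
    """Toy run-length encoder: returns (byte value, run length) pairs."""
    runs: list[tuple[int, int]] = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

# A blank white 24bpp scan is one byte value repeated w * h * 3 times
# (a smaller sheet than 10000x10000 here, so the toy loop finishes quickly):
sheet = b"\xff" * (1000 * 1000 * 3)
print(rle_encode(sheet))   # -> [(255, 3000000)]: the whole scan in one run
```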
An actual example: a movie's end-title scroller we just rendered at 3656x1976x48bpp (16 bits per channel). The uncompressed images were well above 28MB each, while enabling lossless compression in the SGI file format shrunk 'em down to 700KB - 4MB, depending on the actual picture's contents.
It all boils down to the compression routine, i.e. either it is lossless or it is not, plain and simple.
The compression ratio is a totally different topic, as my example above illustrates, but as a rule of thumb one could say lossless compression leaves you with a larger amount of data than lossy compression does.
Anybody got a clue what kind of compression is used in DXTC?
Despite my nickname causing confusion, I am not female ...
Originally posted by Ace: Well, let's examine the lossless/lossy argument for a second. If we really HAVE to lose quality to get decent compression, why is it that I can take a TIFF file (lossless image format) from my scanner, ZIP it using WinZip (compression tool), and the file size is greatly reduced (this is a 32bpp scanned image)? [...]
That's because a TIFF file is sometimes LARGER than a BMP file. Believe me when I say that 2/1 is about the best you can get; I've worked on a thesis about lossless image compression for two years.
>>edit<<
For the compression ratio, I'm talking about photo-quality pictures here. If you take a white sheet of paper, the compression ratio will obviously be better than 2/1.
If I'm not mistaken, the type of compression used in DXTC is lossy regardless of which ratio and method are used, since there are a few different methods within the DXTC spec....
I'll have to check to be sure though.....
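For what it's worth, the layout itself shows why it has to be lossy: in DXT1, every 4*4 block is stored as two 16-bit RGB565 endpoint colours plus sixteen 2-bit indices (8 bytes total), so 16 source pixels get quantized to at most 4 palette entries no matter what. A minimal decoder sketch of that block format, as I understand it:

```python
import struct

def rgb565_to_rgb888(c: int) -> tuple[int, int, int]:
    """Expand a 16-bit RGB565 colour to 8 bits per channel."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_dxt1_block(block: bytes) -> list[tuple[int, int, int]]:
    """Decode one 8-byte DXT1 block into 16 RGB pixels (row-major)."""
    c0, c1, bits = struct.unpack("<HHI", block)
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:   # opaque mode: two interpolated in-between colours
        p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
        p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    else:         # punch-through mode: midpoint + transparent black
        p2 = tuple((a + b) // 2 for a, b in zip(p0, p1))
        p3 = (0, 0, 0)
    palette = (p0, p1, p2, p3)
    # 2-bit palette index per pixel, least significant bits first
    return [palette[(bits >> (2 * i)) & 0b11] for i in range(16)]

# Example: endpoints pure red and pure blue, all indices pointing at colour 0.
blk = struct.pack("<HHI", 0xF800, 0x001F, 0)
print(decode_dxt1_block(blk)[0])   # -> (255, 0, 0)
```

The encoder's job (picking the two endpoints that best fit 16 arbitrary pixels) is exactly where the information gets thrown away.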
I don't think texture compression is the most important feature. It is important, but there are other features that are more important. Efficient FSAA is more important, and also some sort of hidden surface removal. AGP 8x is coming out soon, and if Matrox is going to release/announce their card with DirectX 9 (March/April) (CeBIT maybe....), they should support AGP 8x in their card!