What exactly is Matrox up to .... get the <facts>
-
Are you sure?
Originally posted by CHHAS
I'll believe it when I see it.
Despite my nickname causing confusion, I am not female ...

ASRock Fatal1ty X79 Professional
Intel Core i7-3930K@4.3GHz
be quiet! Dark Rock Pro 2
4x 8GB G.Skill TridentX PC3-19200U@CR1
2x MSI N670GTX PE OC (SLI)
OCZ Vertex 4 256GB
4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
Super Flower Golden Green Modular 800W
Nanoxia Deep Silence 1
LG BH10LS38
LG DM2752D 27" 3D
Comment
-
I knew my compression example stunk, heh.
I tend to look at texture compression as just another tool in the battle to make the most efficient use of the graphics card's onboard memory. Besides needing the new DX features to be supported, I still believe that an efficient way of using 64MB (128 soon?) of onboard memory without causing severe bottlenecks (all while actually using those features; without them it's almost a moot point) is going to make or break video cards in the near future.
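Just to put rough numbers on that, here's a back-of-the-envelope sketch in Python; the 1024x1024 texture, the mip-chain overhead, and the ~6:1 compressed ratio (roughly S3TC territory) are my own assumptions for illustration, not anything announced:

    # Rough texture-memory math: how far 64MB goes with and without
    # block compression. Sizes and the 6:1 ratio are assumed, not measured.
    def texture_bytes(width, height, bits_per_pixel, mip_chain=True):
        base = width * height * bits_per_pixel // 8
        # A full mip chain adds roughly one third on top of the base level.
        return base * 4 // 3 if mip_chain else base

    budget = 64 * 1024 * 1024                # 64MB of onboard memory
    tex = texture_bytes(1024, 1024, 32)      # one 1024x1024 32-bit texture + mips

    print("one texture:        %.1f MB" % (tex / 2**20))
    print("fit uncompressed:   %d" % (budget // tex))
    print("fit at ~6:1 ratio:  %d" % (budget // (tex // 6)))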
Using system RAM as an adjunct for video storage (in the sense of textures and such) is an unworkable scheme for now, at least in the context of high-performance real-time 3D rendering (gaming oriented, of course). I refuse to count on any new AGP spec having a major impact on performance, after seeing the minimal impact moving between 1x, 2x, and 4x has had to date. Keep in mind that system memory isn't used and accessed only by the graphics card, so it will incur delays as other devices and the processor also make accesses (except in the case of memory controllers capable of making multiple simultaneous accesses, similar in concept to the TwinBank controller on nVidia's nForce and the memory controller nV uses in the GF3).
All the discussion going on here (and especially Haig's comments) has me hopeful that Matrox might actually have something revolutionary (when compared to other consumer-level products, anyhow) waiting for us. The only thing I can say to that is: "It's about time!" (speaking as an early adopter and still present owner of a G400Max, and also as one that uses a current GF3 in my gaming rig, lol.)
"..so much for subtlety.."
System specs:
Gainward Ti4600
AMD Athlon XP2100+ (o.c. to 1845MHz)
Comment
-
Ladies and gentlemen, take my advice, pull down your pants and slide on the ice.
Comment
-
I just read the last few pages of this thread (sorry, there's not much use in me reading here).
Two things:
JPEG can be lossless. A quality level of 10/10 will result only in lossless compression. Also, TIFFs can use all sorts of compression inside, or none at all. You can take a .RAW file, open it in a text editor, and type a TIFF header on it, and it will be opened just fine as a TIFF (handy when a program says it doesn't support .RAW). I don't believe Iced. I've worked on image compression too, and if you know something about what you're compressing, you can really up your compression ratio. We were doing 20:1 lossless, and 50:1 with no visually perceivable loss.
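To illustrate the "know what you're compressing" point, here's a toy sketch (the radial-gradient image and zlib are just stand-ins, not the codec we actually used): run a trivial left-neighbour predictor first and the residuals of smooth data compress far better than the raw pixels.

    import zlib

    # A smooth 256x256 8-bit image (radial gradient as a stand-in for data
    # we "know something about"), compressed raw vs. after a simple predictor.
    w = h = 256
    img = bytes(int(((x - 128) ** 2 + (y - 128) ** 2) ** 0.5) & 0xFF
                for y in range(h) for x in range(w))

    # Keep only each pixel's difference to its left neighbour (mod 256),
    # which is essentially what the lossless JPEG modes do before entropy coding.
    resid = bytearray(img)
    for y in range(h):
        for x in range(w - 1, 0, -1):
            resid[y * w + x] = (img[y * w + x] - img[y * w + x - 1]) & 0xFF

    print("raw pixels -> %d bytes" % len(zlib.compress(img, 9)))
    print("residuals  -> %d bytes" % len(zlib.compress(bytes(resid), 9)))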
Also, some people just said that speeding up RDRAM would cut its latency. This isn't so. RDRAM is a serial protocol, and some parts of its design simply add latency that can't be avoided. They did get low-voltage differential signalling in there, which was a smart move (see also HyperTransport), but other than that I don't like RDRAM in PCs that much. There's a saying around here: "You can always get more bandwidth, if you're willing to pay for it, but not even God can give you better latency." Yeah, pin count will cost you, and that's part of the reason for RDRAM in consoles.
Also, RDRAM is a lot more feasible in those configurations than it is in the RIMMs that PCs require. Last time I checked, Rambus couldn't test the RIMMs until the heat dissipator was applied, and if a RIMM is bad, chips can't be removed, so the whole RIMM is trash. Puts a cramp on yield. (Even if you can't see it, your SDRAM sticks often have had a chip replaced at manufacturing time.)
Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.
Comment
-
Lossless JPG is called LJPG and is an entirely different method of compressing images. It doesn't have very much in common with the lossy JPG format. But you're right, there *is* lossless JPG.
Originally posted by Wombat
Two things:
JPEG can be lossless. A quality level of 10/10 will result only in lossless compression. Also, TIFFs can use all sorts of compression inside, or none at all. You can take a .RAW file, open it in a text editor, and type a TIFF header on it, and it will be opened just fine as a TIFF (handy when a program says it doesn't support .RAW). I don't believe Iced. I've worked on image compression too, and if you know something about what you're compressing, you can really up your compression ratio. We were doing 20:1 lossless, and 50:1 with no visually perceivable loss.
I'm sure that with certain specific kinds of images you can get a 20:1 ratio in lossless mode, but I dare you to compress a 32-bit image with lots of detail at a ratio better than 2:1.
Of course there are also lots of 'nearly lossless' compression techniques, but I was merely talking about pure lossless.
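A quick sanity check on why detail caps the ratio (my own toy data below, nothing to do with the link that follows): pixel data that looks like noise barely compresses losslessly at all, while smooth data flies.

    import random, zlib

    # Toy data, not a real image: random bytes stand in for "lots of detail",
    # a slow ramp stands in for a smooth, easy image.
    random.seed(1)
    n = 1 << 18                                              # 256KB of samples
    noisy  = bytes(random.randrange(256) for _ in range(n))  # detail ~ noise
    smooth = bytes((i // 1024) & 0xFF for i in range(n))     # slow ramp

    for name, data in (("noisy", noisy), ("smooth", smooth)):
        packed = zlib.compress(data, 9)
        print("%6s: %d -> %d bytes (%.2f:1)"
              % (name, len(data), len(packed), len(data) / len(packed)))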
Check out http://www.bitjazz.com/statistics.html, for example, for some results of lossless compression of detailed images. Apparently there are new techniques that achieve slightly better ratios, but an average of 20:1 is out of the question.
KT7 Turbo Ltd. Ed. ; Athlon XP 1600+ @ 1470 MHz (140*10.5); 512MB Apacer SDRAM ; G400 MAX ; Iiyama VM Pro410

Comment
-
The original JPEG standard has lossless modes. If you crank the quality to the maximum, then there isn't any DCT work done, and it's just Huffman encoding.
We were compressing 15- to 22-bit grayscale images.
Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.
Comment
-
While I agree that the current AGP protocol and current memory technology are still too slow to transfer and access large amounts of data quickly, I wouldn't say that the situation won't ever change...
I mean, like you mentioned, AGP 8x is on the horizon, as well as faster versions of both DDR SDRAM and RDRAM running at higher frequencies and on wider buses.
Fast forward a couple more years and you'll see the first implementations of N3GIO tech, which lays the groundwork for even faster system bus speeds as well as even faster versions of AGP (16x to begin with, at roughly 4 GB/sec), and which can also be applied to the next-generation PCI spec beyond PCI-X (1 GB/sec), due to show up in a few months, mostly on server boards for the time being.
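Those figures line up with the simple port math (a quick sketch, my own rounding): AGP is a 32-bit, 66 MHz port, so every multiplier adds roughly another 266 MB/sec of peak bandwidth.

    # Peak AGP transfer rate: 32-bit bus at ~66 MHz, times the strobe multiplier.
    base_mb_s = 66.67e6 * 4 / 1e6          # ~266 MB/s at 1x

    for mult in (1, 2, 4, 8, 16):
        rate = base_mb_s * mult
        print("AGP %2dx: ~%4.0f MB/s (%.1f GB/s)" % (mult, rate, rate / 1000))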
One of the really cool features of the new AGP 8X spec is that it allows graphics cards with multiple processors to work together without any driver tricks or specialised hardware on the cards themselves to fool the current AGP spec into thinking there's only one graphics processor, which is what it takes for such a card to work properly in an AGP 4x slot.
I do believe that sooner or later card makers won't have any choice but to put multiple processors on their cards. For one thing, you can't keep shrinking chips indefinitely, and smaller process tech is getting harder to develop each year: it takes about 18 to 24 months for a new fab process to become available, so if you're on a six or even twelve month release schedule you're pretty much screwed, since at most you're forced to release slightly higher-clocked versions of a previous generation in order to say that you've got something new.
Hmmmm... that sounds vaguely familiar, doesn't it?
.
note to self...
Assumption is the mother of all f***ups...
.
Primary system :
P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...
Comment
-
Like I said, an entirely different method.
Originally posted by Wombat
The original JPEG standard has lossless modes. If you crank the quality to the maximum, then there isn't any DCT work done, and it's just Huffman encoding.
We were compressing 15- to 22-bit grayscale images.
KT7 Turbo Ltd. Ed. ; Athlon XP 1600+ @ 1470 MHz (140*10.5); 512MB Apacer SDRAM ; G400 MAX ; Iiyama VM Pro410

Comment
-
Hi Wombat,
Originally posted by Wombat
...
JPEG can be lossless. A quality level of 10/10 will result only in lossless compression.
...
I just cannot believe that ...
Example:
- take an uncompressed picture and save it as JPEG at 10/10 (or 12/12, or whatever the highest quality setting is)
- close the document
- open up both documents (JPEG and uncompressed)
- merge them with 'difference' as layer option
- raise color gain (or adjust levels) so that very dark pixels get lighter
In my experience there always was a difference that gets more and more evident the harder you color correct the resulting 'difference merge', but if there was no loss when using JPEG, the difference would always be 0/0/0 RGB, i.e. a totally black picture.
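That round trip is easy to script if anyone wants to repeat it (a small sketch with the Python Imaging Library; "input.png" is just a placeholder for whatever uncompressed source you use):

    from PIL import Image, ImageChops

    # Save through JPEG at the highest quality setting, reload, and take the
    # 'difference' of the two images, as in the steps above.
    # "input.png" stands in for any uncompressed source image.
    orig = Image.open("input.png").convert("RGB")
    orig.save("roundtrip.jpg", quality=100)
    back = Image.open("roundtrip.jpg").convert("RGB")

    extrema = ImageChops.difference(orig, back).getextrema()  # (min, max) per channel
    worst = max(hi for _, hi in extrema)
    print("identical" if worst == 0 else "max per-channel difference: %d" % worst)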
What program do you use for 'lossless' JPEG compression?
Despite my nickname causing confusion, I am not female ...

ASRock Fatal1ty X79 Professional
Intel Core i7-3930K@4.3GHz
be quiet! Dark Rock Pro 2
4x 8GB G.Skill TridentX PC3-19200U@CR1
2x MSI N670GTX PE OC (SLI)
OCZ Vertex 4 256GB
4x2TB Seagate Barracuda Green 5900.3 (2x4TB RAID0)
Super Flower Golden Green Modular 800W
Nanoxia Deep Silence 1
LG BH10LS38
LG DM2752D 27" 3D
Comment
-
LOSSLESS JPEG
Wombat,
The original JPEG standard has lossless modes. If you crank the quality to the maximum, then there isn't any DCT work done, and it's just Huffman encoding.
I have to totally disagree with your statement!!! The whole point of JPEG is the use of the Discrete Cosine Transform to compact the energy of the picture. The Huffman coding is always done on DCT coefficients. In order to achieve LOSSLESS JPEG, you should not perform QUANTIZATION ON THE DCT COEFFICIENTS. That is the whole idea of JPEG: the bigger your quantization, the bigger the loss. If you do not quantize, you will not degrade the quality, yet at the same time you can achieve pretty good compression, since you are not Huffman coding the picture itself but rather its frequency components.
Huffman coding the picture is like zipping a normal data file. If anyone is interested in more than this, I know a few good links with examples that show how JPEG works.
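A bare-bones sketch of that pipeline for a single 8x8 block (using scipy's DCT; the uniform quantization step here is only a stand-in for the real JPEG luminance table): the coarser the step, the bigger the loss, and with a step of 1 essentially nothing is lost beyond integer rounding.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
    def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 block of pixels

    # Quantize the DCT coefficients with a uniform step (stand-in for the
    # real table), then reconstruct and measure the damage.
    for step in (1, 8, 32):
        coeff = np.round(dct2(block) / step) * step
        err = np.abs(np.round(idct2(coeff)) - block).max()
        print("quantization step %2d -> worst pixel error %g" % (step, err))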
As far as AGP8x goes, I am not sure it would improve the performance that much, and there is no point right now anyway. Maybe next year it would really matter, but I guess it's a good point for marketing anyway.
-<= Its all about point of view ....
=>-
Comment
-
I completely agree that for the time being there isn't much of a need for AGP 8X, since developers knowingly limit themselves to what current protocols and memory tech can handle, and for the most part that won't change anytime soon, save for maybe games like DOOM 3, which will stress any system, no matter how high end it may be.
Id is one of the few companies out there with the financial freedom to do whatever they want, and their strong point is making awesome game engines that allow other developers to build great games, so I can't wait to see other companies get their hands on that engine...
.
It's just a shame that even current video cards like the GF3 and Radeon 8500 (and hopefully Matrox) can handle so much more than AGP 4x can dish out that we may never see what those cards can really do.
note to self...
Assumption is the mother of all f***ups...
.
Primary system :
P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...
Comment
-
Just to cover a few points:
Exactly. And most JPEG compression programs do this by default. Maximum quality settings (according to the JPEG definition) do NO quantization of the coefficients. That's what I was talking about, but I didn't know I could/should get into a detailed discussion of the DCT implementation here. Also, I think the "12/12" quality setting may be part of the newer JPEG standards. When I was doing this work in mid '99, they were still choosing the details of the next JPEG standard, as well as JPEG2000, which was something different. My bosses were involved in some of the selection. So, maybe my information is incomplete these days.
Huffman coding the picture is like zipping a normal data file.
To address Maggi's questions, I was often using cjpeg and djpeg, the programs that you get when you compile the libjpeg6b code. My first guess at the differences you see, Maggi, is that they're just palette downsampling. The libjpeg code can be compiled for 8-bit color or 12-bit color; those are the only options. Maybe other engines have more/other features, but I would guess that you're seeing the results of palette changes *before* the encoding was done. I've done the same kind of tests that you're talking about, and if the input image fits the requirements, the output file can be turned back into an identical image.
I do know what I'm talking about here. I read most of the JPEG reference textbook (whose name escapes me, that book stayed with that lab). I doubt if I can still do it, but I used to be able to open a JPEG file in a hex editor, and read the headers, picking out the tags and values for resolution, compression, and palette info. Lame, but true.
Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.
Comment
-
He's right though. It's just that most programs don't implement the lossless version of JPG. That 10/10 value is an indication of visual quality rather than a guarantee of binary invertibility.
Originally posted by Maggi
Hi Wombat,
I just cannot believe that ...
Example:
- take an uncompressed picture and save it as JPEG at 10/10 (or 12/12, or whatever the highest quality setting is)
- close the document
- open up both documents (JPEG and uncompressed)
- merge them with 'difference' as layer option
- raise color gain (or adjust levels) so that very dark pixels get lighter
In my experience there always was a difference that gets more and more evident the harder you color correct the resulting 'difference merge', but if there was no loss when using JPEG, the difference would always be 0/0/0 RGB, i.e. a totally black picture.
What program do you use for 'lossless' JPEG compression?
KT7 Turbo Ltd. Ed. ; Athlon XP 1600+ @ 1470 MHz (140*10.5); 512MB Apacer SDRAM ; G400 MAX ; Iiyama VM Pro410

Comment
