The difference: DMA is Direct Memory Access; DiME is Direct Memory Execute.
This is a generic explanation; I'm leaving out the specifics and details for simplicity's sake.
With Direct Memory Access, the card actually moves the data across the bus: it sends a request, fetches the data, brings it back to local memory, and then uses it.
Direct Memory Execute simply leaves the data where it is and works with it while it's still in system memory.
This nets roughly a 50% increase in throughput over the bus, and allows the bus to do more work.
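Very roughly, the two access patterns look like this (a hypothetical C sketch just to show the idea; Texture, render_dma and render_dime are made up for illustration, not any real driver API):

/* Illustrative only: the difference between copying a texture into local
   video memory first (DMA) and reading it in place over AGP (DiME). */
#include <string.h>

#define TEX_SIZE (128 * 1024)

typedef struct {
    unsigned char texels[TEX_SIZE];
} Texture;

void render_dma(const Texture *system_mem, Texture *local_vram)
{
    /* DMA: pull the whole texture across the bus into local video memory... */
    memcpy(local_vram->texels, system_mem->texels, TEX_SIZE);
    /* ...then the GPU samples the local copy. */
}

void render_dime(const Texture *system_mem)
{
    /* DiME: no bulk copy; the GPU reads texels in place, straight out of
       system memory through the AGP aperture, as it needs them. */
    (void)system_mem;
}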
Originally posted by az: But it won't copy the data into local memory in case it's needed again later, if I get it right - so a mixture of the two would be best?
AZ
We are talking about swapping textures over the AGP bus and using the AGP bus to transfer information to and from the video card. Rather than moving all the data to and from the card, whether or not it's going to be used, the card uses the data right where it sits.
I cannot think of a case where DMA would be more efficient.
There was a very interesting (and long) article about the Parhelia in this month's issue of Atomic MPC. Part of the article covers the use of Higher Order Surfaces and Displacement Mapping. Part of the point of displacement mapping is not only to make surface/terrain design easier, it also greatly reduces the amount of data which needs to be sent over the AGP bus. Once these techniques are being used (remember that DX9 is supposed to contain full support for them), the AGP bus should no longer be the bottleneck. Since none of the games available now are designed to take advantage of 8x AGP (since pretty much no one has it yet), we can hope that future games will base their designs around techniques like DM, rather than depending on cranking up the bus speed.
A quote from the article: "The concept behind displacement mapping is ingenious: two triangles and a 128x128 grey scale texture can express terrain that would otherwise take hundreds if not thousands of triangles to describe."
Since the Parhelia's architecture is designed around these techniques, using them would most likely be faster than sending the card thousands of triangles, so making the card 8x AGP compliant probably wouldn't accomplish much...
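To see why that saves bus bandwidth: the card receives a coarse base mesh plus a small grey-scale map and generates the detailed geometry itself, instead of receiving every displaced vertex over AGP. A minimal sketch in C (displace_patch, Vertex and the heightmap layout are hypothetical, just to show the idea of offsetting a flat grid by grey values; on the Parhelia this happens in hardware):

/* Illustrative only: displace a flat 128x128 patch upwards by the values in
   an 8-bit grey-scale heightmap. The data sent over the bus is the tiny map
   plus a couple of base triangles, not the thousands of resulting vertices. */

#define MAP_DIM 128

typedef struct { float x, y, z; } Vertex;

void displace_patch(const unsigned char heightmap[MAP_DIM][MAP_DIM],
                    Vertex out[MAP_DIM][MAP_DIM],
                    float scale)
{
    for (int i = 0; i < MAP_DIM; i++) {
        for (int j = 0; j < MAP_DIM; j++) {
            out[i][j].x = (float)j;
            out[i][j].y = (float)i;
            /* the vertical displacement comes straight from the grey value */
            out[i][j].z = (heightmap[i][j] / 255.0f) * scale;
        }
    }
}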
All discussion as to the pointlessness of AGP 3.0/8x aside, does anyone actually know if the Parhelia-512 GPU can operate at AGP 8x? That was the original question in this thread, and while the merits of DiME vs DMA and Matrox's use of dmap and dlod to save bandwidth are all worthwhile points, does anyone really have an answer? Matrox appears to be treading very carefully with this issue.
I guess we will just have to wait and see what time has in store for it...
"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
Originally posted by Joel: I bet you also didn't know that Matrox, AFAIK, is still the only manufacturer to do DiME AGP transfers (which are better) instead of DMA AGP transfers.
Joel
I do, thanks. But as we both know, AGP memory is so slow compared to local RAM that it's not really worth bothering with. I've yet to see an application that uses it in a way the user doesn't significantly notice. With 128MB on board there should be little need for it anyway!
nVidia doesn't get blasted here 'cause that'd be rather off topic. They can be poor at marketing their own stuff, but 'luckily' for them they have massive raw speed, which, over the past 2-3 years, has been all people have looked for.
Matrox, on the other hand, have a very light (by comparison to everyone else) product road map, and giving ANY reason to doubt them is not a good idea. If they'd said AGP 4x on the initial specs release, no one would have given a sh**. However, saying one thing one week and then another the next has started this argument. Anyone who knows about such things - like us both - doesn't care either way; those who believe bigger (numbers) is better think they are getting short-changed.
The impression I get from the info released is that the Parhelia chip is 8x capable, but the currently announced cards are not. Every mention of AGP 8x so far has been in relation to Parhelia-512, i.e. the chip. All mentions of Parhelia boards have said AGP 4x.
Can anyone confirm whether the graphics chip itself has to contain AGP 8x support for it to work? We know that chips capable of working at 4x AGP (like the G400) can be limited to 2x by the board design. Does the graphics chip itself have any effect on whether a board can support higher AGP levels (apart from whether it's fast enough to benefit from them)?
DW: "There was some confusion regarding whether or not the Parhelia-512 is a full AGP 8X device. Basically, the answer is no, Parhelia-512 is an AGP 4X device, but - like any other AGP 2.0 compatible device - it will fit into an AGP 8X slot and operate correctly at up to AGP 4x signaling."
Again, I think I quoted that in my very first paragraph at the start of the thread...
"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz