I'll tell you, Nvidia has been so fricken sneaky. After reading this, I have lost all respect for them. They have used practically every trick in the book to make their piece of crap FX card look good.
GFFX - 4x2 not 8x1
-
Looks like they're panicking a little bit. They need to stop the marketing people, let the developers do the work, and then come back with a strong card.
-
that would explain the f*cked up performance

Main Machine: Intel Q6600@3.33, Abit IP-35 E, 4 x Geil 2048MB PC2-6400-CL4, Asus Geforce 8800GTS 512MB@700/2100, 150GB WD Raptor, Highpoint RR2640, 3x Seagate LP 1.5TB (RAID5), NEC-3500 DVD+/-R(W), Antec SLK3700BQE case, BeQuiet! DarkPower Pro 530W
-
Originally posted by Kurt
me wonders if it's not more efficient to have 4 pipes doing multi-texturing instead of 8 doing single textures... what game still uses single textures??? 8x2, that would be nice...
-
Huh?
in the doom3 engine (and others that use similar techniques) they will be extremely limited by the card's ability to do multitexturing... especially considering that it will use up to 7 textures per poly, and i am pretty sure no less than 4 at any given point in time.
the only advantage that an 8x1 provides over a 4x2 design (assuming they use the same pixel pipeline, except that the 4x2 just has a second texture unit per pipe) is that the 8x1 (with drivers/hardware capable of doing loopback texture lookups) can render single-textured pixels 2x as fast. and each extra pixel pipeline takes up die space, probably more than adding a second texture unit to each pipeline would.
the only benefit this offers is that under doom3 (and similar engines) the first pass will be about 2x as fast, since it renders only geometry data... but... that difference can easily be made up by more efficient shader units, higher clock speeds and other little things (like accessing 2 textures per clock on one pixel possibly being a smidgen faster than doing 2 pixels with one texture and then having to rerender both pixels to get the second texture)...
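The pass arithmetic in that argument can be sketched as a toy model (my own back-of-envelope pixel count and cost formula, not anything measured; it ignores shader cost, memory bandwidth and overdraw): a z-only first pass runs at one pixel per pipe per clock, and each lighting pass needs ceil(textures / TMUs) loopback passes per pixel.

```python
import math

def frame_clocks(pipes, tmus_per_pipe, pixels, light_textures):
    # z-only prepass: no texturing, so every pipe emits one pixel per clock
    z_pass = pixels / pipes
    # lighting: each pixel loops through its pipe ceil(textures / TMUs) times
    lighting = pixels * math.ceil(light_textures / tmus_per_pipe) / pipes
    return z_pass + lighting

PIXELS = 1024 * 768  # hypothetical frame, no overdraw
print(frame_clocks(8, 1, PIXELS, 7))  # 8x1 -> 786432.0 clocks
print(frame_clocks(4, 2, PIXELS, 7))  # 4x2 -> 983040.0 clocks, ~25% more
```

Under this naive model the 8x1 comes out roughly 25% ahead on a 7-texture workload, not 2x, which is exactly the kind of gap the poster argues clock speed and shader efficiency can close.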
this is one of the reasons why the parhelia (in theory) should still have been a good performer under next-gen games... it broke away from the "8 textures/clock" limit that everyone has been at for a while... it might not be the most efficient (you get 4 pixels/clock whether you use 1 texture or 4, or 2 pixels/clock if you use 5 to 8 textures) but if used correctly it can easily make up the clock speed difference
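The throughput claims above, for the 8x1, the 4x2 and a Parhelia-style 4 pipes x 4 TMUs, all fall out of one formula; here is a small sketch under the same loopback assumption:

```python
import math

def pixels_per_clock(pipes, tmus_per_pipe, textures):
    # each pixel needs ceil(textures / TMUs) passes through its pipe
    return pipes / math.ceil(textures / tmus_per_pipe)

for t in (1, 2, 4, 7, 8):
    print(t,
          pixels_per_clock(8, 1, t),   # 8x1
          pixels_per_clock(4, 2, t),   # 4x2
          pixels_per_clock(4, 4, t))   # Parhelia-style 4x4
```

It reproduces the numbers in the post: the 8x1 is twice as fast on single-textured pixels, while the 4x4 holds 4 pixels/clock up to 4 textures and 2 pixels/clock from 5 to 8.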
also, about the GeForceFX...
It renders:
8 z pixels per clock
8 stencil ops per clock
8 textures per clock
8 shader ops per clock
4 color + z pixels per clock with 4x multisampling enabled
It is architected to perform those functions.
Basically, it's 8 pipes with the exception of color blenders for traditional ROP operations, for which it has hardware to do 4 pixels per clock for color & Z. It has 8 "full" pipes that can blend 4 pixels per clock with color.
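As a quick sanity check on that list, multiplying the per-clock rates by the core clock gives the theoretical peaks (the 500 MHz figure is the FX 5800 Ultra's core clock; other FX models run lower):

```python
CLOCK_MHZ = 500  # GeForce FX 5800 Ultra core clock; adjust for other models

per_clock = {
    "z pixels": 8,
    "stencil ops": 8,
    "textures": 8,
    "shader ops": 8,
    "color+z pixels (4x MSAA)": 4,
}

for name, rate in per_clock.items():
    # rate per clock * MHz / 1000 = billions of operations per second
    print(f"{name}: {rate * CLOCK_MHZ / 1000:.1f} G/s")
```

On paper that is 4 Gtexels/s of texture fill but only 2 Gpixels/s of antialiased color+Z, which is the asymmetry the thread is arguing about.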
it is neither a 4x2 nor is it an 8x1 design. it is something quite different, following in line with their significantly improved pixel shader units.

"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
-
Originally posted by Novdid
Multi texturing ain't the way of the future, my friend. Don't expect many chips in the future with more than one TMU per pipe.
then again the vidcard makers might come up with a way to do it all for free for only $399 + shipping
-
Originally posted by DGhost
What NVidia had to say about it, taken from The Tech Report.
it is neither a 4x2 nor is it an 8x1 design. it is something quite different, following in line with their significantly improved pixel shader units.
-
Originally posted by Kurt
they've been making their chips more programmable every time, they might someday make one with on-the-fly hardware re-organization (cell-computing). That way you don't have to wonder whether you need more pipes, more TMUs or more whatnot-nextgen-thingamagic.

P4 Northwood 1.8GHz@2.7GHz 1.65V Albatron PX845PEV Pro
Running two Dell 2005FPW 20" Widescreen LCD
And of course, Matrox Parhelia | My Matrox history: Mill-I, Mill-II, Mystique, G400, Parhelia
-
Originally posted by WyWyWyWy
F the GFFX, put an Athlon XP 2000+ on the vid card and let software do things! Oh and some DDR333 as well.

Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.