PAPER LAUNCH!!!
GeForce FX is announced...
-
NVidia clocks the chip at 500MHz, and the R300 is at 325MHz...

Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.
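For context, a hedged back-of-the-envelope comparison of what those clocks imply. The specs below are the commonly reported launch numbers (including the claimed 8 pixel pipelines on both chips), quoted from memory, so treat them as assumptions rather than measurements:

```python
# Back-of-the-envelope: core clock alone doesn't tell the whole story.
# All specs here are assumed launch figures, not verified measurements.

def fillrate_gpix(core_mhz: float, pipelines: int) -> float:
    """Theoretical pixel fillrate in Gpixels/s."""
    return core_mhz * 1e6 * pipelines / 1e9

def bandwidth_gb(bus_bits: int, mem_mhz: float, pumps: int = 2) -> float:
    """Theoretical memory bandwidth in GB/s (pumps=2 for DDR)."""
    return bus_bits / 8 * mem_mhz * 1e6 * pumps / 1e9

# GeForce FX (NV30): 500 MHz core, 8 pipes, 128-bit bus, 500 MHz DDR2
nv30_fill = fillrate_gpix(500, 8)   # 4.0 Gpix/s
nv30_bw = bandwidth_gb(128, 500)    # 16.0 GB/s

# Radeon 9700 Pro (R300): 325 MHz core, 8 pipes, 256-bit bus, 310 MHz DDR
r300_fill = fillrate_gpix(325, 8)   # 2.6 Gpix/s
r300_bw = bandwidth_gb(256, 310)    # 19.84 GB/s

print(f"NV30: {nv30_fill} Gpix/s, {nv30_bw} GB/s")
print(f"R300: {r300_fill} Gpix/s, {r300_bw} GB/s")
```

So under these assumptions the higher-clocked NV30 wins on raw fillrate but actually trails the R300 on raw memory bandwidth, which is what most of the thread below argues about.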
Comment
-
But I'm really impressed by the look of the cooler on that card. It looks like it will give good cooling with very little noise. I really can't understand why the Parhelia didn't include something like this, since its price range is aimed at the kind of buyers who move any other card out of the way if they want a Parhelia.
Cobos

My Specs: AMD XP 1800+, MSI KT3 Ultra1, Matrox G400 32MB DH, IBM 9ES UW SCSI, Plextor 32X SCSI, Plextor 8x/2x CDRW SCSI, Toshiba 4.8X DVD-ROM IDE, IBM 30GB 75GXP, IBM 60GB 60GXP, 120GB Maxtor 540X, Tekram DC390F UW, Santa Cruz sound card, Eizo 17" F56 and Eizo 21" T965, self-modded case with 2 PSUs.
Comment
-
Yeah, that's true. Probably another reason why the G450/G550 did not implement a 128-bit DDR bus.
On the bright side: Matrox is still the only gfx chip maker with 512-bit core logic, and the only one to implement Surround Gaming (and I bet nV and ATI are not going to adopt it, just like DualHead). FAA is a great approach too.
The biggest problem, though, is probably the 128-bit precision pipeline that Matrox will have to implement, which may be tough to develop from the Parhelia. The FP Vertex Shaders (and possibly FP Pixel Shaders soon) will also have to be updated on the Pitou...
P.S. Does anybody think having 1 texturing unit that can do 16 textures per pass is better than 4 texturing units that can do 4 textures per pass?
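One way to think about that P.S.: count the passes each design needs. This is a deliberately crude toy model (real hardware scheduling is far more complicated), but it shows the trade-off:

```python
import math

# Toy model: passes needed to apply n textures, given a per-pass limit.
# Real GPUs overlap work in ways this ignores; illustration only.

def passes_needed(n_textures: int, textures_per_pass: int) -> int:
    return math.ceil(n_textures / textures_per_pass)

for n in (2, 4, 8, 16):
    one_big = passes_needed(n, 16)    # 1 unit, 16 textures per pass
    four_small = passes_needed(n, 4)  # each of 4 units, 4 textures per pass
    print(f"{n:2d} textures: 16-per-pass unit -> {one_big} pass(es), "
          f"4-per-pass units -> {four_small} pass(es)")
```

The single 16-per-pass unit never has to multipass up to 16 textures, while the 4-per-pass design starts multipassing beyond 4; on the other hand, four separate units can work on four fragments in parallel, so for the light texture loads of then-current games the second layout has more raw throughput. Which is "better" depends entirely on how many textures games actually apply.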
Comment
-
I'm gonna buy one, not!
So I take it that thing you guys call a video card generates a whack-load of heat, to have that monstrosity attached to it!
Maybe they should work on that instead of working on being the FPS king.
The gawddamn thing is ridiculously ginormous! And not to mention foogly!
My 2 cents.

Last edited by ZokesPro; 18 November 2002, 17:39.

Titanium is the new bling!
(you heard it from me first!)
Comment
-
It is going to be very ex$pensive with the new high-speed memory tech and massive cooler (it seems to have AGP Pro dimensions).
If they get it out in quantity, at the stated speeds, and at a reasonable price (or slightly unreasonable price) it will do well.
But I think the stated speed will be BS; most parts will be clocked a lot lower.
A wider bus is a good thing: it means you can use slower, less expensive memory to get the same bandwidth. NV's narrow bus may be heading up a Voodoo alley if they are not careful.
And WTF is the third head?
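The wider-bus point can be put in numbers. A hedged sketch, assuming a bandwidth target of ~19.8 GB/s (the R300's ballpark figure):

```python
# Same bandwidth target, two bus widths: what memory clock does each need?
# The 19.8 GB/s target is an assumed figure for illustration.

def required_mem_mhz(target_gb_s: float, bus_bits: int, pumps: int = 2) -> float:
    """Memory clock (MHz) needed to hit target_gb_s on a given DDR bus width."""
    return target_gb_s * 1e9 / (bus_bits / 8 * pumps) / 1e6

target = 19.8  # GB/s
wide = required_mem_mhz(target, 256)    # ~309 MHz: slow, cheap, mature DDR
narrow = required_mem_mhz(target, 128)  # ~619 MHz: exotic, expensive DDR2
print(f"256-bit bus needs ~{wide:.0f} MHz; 128-bit bus needs ~{narrow:.0f} MHz")
```

Doubling the bus width halves the memory clock needed for the same bandwidth, which is exactly the "slower, less expensive memory" argument: the narrow-bus design is forced onto the bleeding edge of memory technology.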
Comment
-
I do still see some problems with the GF FX really beating the R9700 Pro by a larger margin, given its 128-bit wide RAM.
Amid all the hype about Color Compression, you should not forget that the R9700 Pro has bandwidth-saving technology as well (HyperZ III), and apparently uses some sort of color compression too, but only in FSAA operation, while the NV30 uses it all the time.
Still, that probably won't help the NV30, since in non-FSAA modes there are no CPUs out there fast enough to feed even the R300 with today's game titles...
Oh, and to put the NVidia marketing FUD of the quadruple bandwidth (do they really believe this themselves?) in the correct light: NVidia already claims 300% with the GF4's LMA II...

Last edited by Indiana; 18 November 2002, 19:04.
Comment
-
Originally posted by Indiana
I do still see some problems with the GF FX really beating the R9700 Pro with its 128-bit wide RAM.
Amid all the hype about Color Compression, you should not forget that the R9700 Pro has bandwidth-saving technology as well (HyperZ III), and apparently uses some sort of color compression too, but only in FSAA operation, while the NV30 uses it all the time.
Still, that probably won't help the NV30, since in non-FSAA modes there are no CPUs out there fast enough to feed even the R300 with today's game titles...
Oh, and to put the NVidia marketing FUD of the quadruple bandwidth (do they really believe this themselves?) in the correct light: NVidia already claims 300% with the GF4's LMA II...
Personally, having seen what 3dfx was doing with FXT1 before they got bought out, I would bet that NVidia is reusing it, in which case NVidia's claims are pretty much dead on. It is lossless and has no performance impact; it isn't so much a compression as an optimization of color palettes...
Considering that Doom3 and Quake4 will be limited far more by the raw fillrate and multitexturing abilities of the card than by its memory bandwidth, the 128-bit memory bus won't make too much of a difference, especially if they are implementing FXT1...
The GF FX will be able to scale farther than the 9700 ever will, and when the requirement comes for a wider memory bus, the card will be ready for it, I'm sure...
With a more powerful GPU it will be able to do more "for free" while in processor-limited situations. You will be able to add antialiasing, better texture filtering, longer pixel shaders, and even higher precision render modes without impacting overall performance.

"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
Comment
-
ATI: 24:1 on the Z-buffer.
NV: 4:1 for "TEXTURE COMPRESSION".
That does sound impressive, coupled with the fact that the decompression can be combined with their new AA.
But... can anyone say "ATI die shrink"?
NV is burning a lot of money to get ahead of ATI: jumping to .13, using very fast memory, trying to set the next next generation... What's the bet most of NV's exotic DX9+ features barely get used (GeForce T&L, anyone?)
A few tweaks for the 9700, and a die shrink when the process is more mature/cheaper (hehe... NV is doing it for them),
and then NV may be on the back foot again quite quickly. They have money to burn, but for how long?
I am still hanging out for a P2 with .09, HSR, and memory bus optimisations (4 x 64-bit instead of one 256-bit?), but I am not holding my breath.
Comment
-
We will see when NVidia is finally able to show actual cards...
Doom3 might be one of the very few cases where the NV30 could have a real advantage (and no, having 20000 3DMarks instead of 18000 is no real advantage...). And Quake4, now that is really an imminent release...
As for ATI's compression claims: they say up to 24:1 for the Z-buffer, but they do not claim the whole bandwidth is multiplied by this factor and put a big 475 GB/s on their hardware (that would be 24 * 19.8) like NVidia is doing with its ridiculous 48 GB/s claim. If you looked at it that way, even the GF4 Ti4600 would've had >>30 GB/s bandwidth; it just doesn't show...
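The distinction between "up to 24:1 on Z" and multiplying the whole bandwidth figure can be made concrete with an Amdahl-style sketch. The 30% share of Z traffic below is a made-up illustrative number, not a measured one:

```python
# Compression only helps the fraction of memory traffic it applies to.
# The 30% Z-traffic share is hypothetical, chosen purely for illustration.

def effective_bandwidth(raw_gb_s: float, fraction_compressible: float,
                        ratio: float) -> float:
    """Overall effective bandwidth when only part of the traffic is
    compressed at the given ratio (Amdahl's-law style)."""
    return raw_gb_s / ((1 - fraction_compressible) + fraction_compressible / ratio)

raw = 19.8  # GB/s, the R300's ballpark raw bandwidth (assumed)

honest = effective_bandwidth(raw, 0.30, 24)  # 24:1 on Z traffic only
marketing = raw * 24                          # multiplying the whole figure
print(f"24:1 on ~30% of traffic: ~{honest:.1f} GB/s effective")
print(f"24x the whole raw figure: {marketing:.1f} 'GB/s'")
```

Even a very aggressive ratio on one traffic class only lifts the overall figure by a modest factor, which is why quoting "475 GB/s" (or "48 GB/s") as if the whole bus were compressed is marketing rather than arithmetic.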
The R9700 can already do FSAA and anisotropic on nearly all games, even in 16x12, and still be playable. All your longer pixel shaders, higher render modes and the like need new games. Oh yes, I'm sure we will see them as fast as we saw all those fine Hardware T&L games when the original GF256 was released.

the GF Fx will be able to scale farther than the 9700 ever will

I would not be astonished if the NV30 were a bit faster than the R300, but I doubt it will be more than, say, 20-25%.
All in all it is just pure speculation here, since there aren't any NV30 cards out there for testing.

Last edited by Indiana; 18 November 2002, 20:30.
Comment
-
Taken from Beyond3D:
Colour Compression – 4:1 Z compression techniques have been implemented in prior generations of hardware; however, Intellisample now extends that into colour compression. Again, a 4:1 loss-less compression technique is enabled on the pixel colour information. The process is enabled entirely through hardware and is transparent to the application. The biggest benefits of colour compression will be seen when FSAA is enabled.
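A toy model of why that color compression pays off mainly under FSAA (this is an illustration of the general idea, not NVidia's actual Intellisample algorithm): with multisampling, every sample inside a triangle's interior gets the same color, so those pixels compress perfectly, while only edge pixels carry mixed samples:

```python
# Toy multisample color compressor: store one color per distinct sample.
# Illustrative only; the real hardware scheme is not public.

def compressed_samples(pixel_samples: list[str]) -> int:
    """Number of distinct colors that must actually be stored."""
    return len(set(pixel_samples))

interior = ["red"] * 4                # off-edge 4x MSAA pixel: identical samples
edge = ["red", "red", "sky", "sky"]   # edge pixel: mixed triangle coverage

print(4 / compressed_samples(interior))  # 4.0 -> the "4:1" best case
print(4 / compressed_samples(edge))      # 2.0 -> edge pixels compress worse
```

Without FSAA there is only one sample per pixel and nothing redundant to squeeze out, which matches the Beyond3D note that "the biggest benefits of colour compression will be seen when FSAA is enabled."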
As far as whether it will be faster... it all depends. They may be doing the same thing ATI did with the 9700 launch: "it's 2 times faster (when running with these settings...)". That wouldn't surprise me at all. The 500MHz core clock will benefit it a lot when it comes to future games, though; that gives it a lot more fillrate to burn for the next generation of games... and games are pushing for a higher level of detail and will have to apply more than 2 textures per pixel... these are the areas where it will excel over the 9700... same argument everyone was making for the Parhelia... heh...
On a side note... the R250 gibberish that was floating around since the 8500 launch (especially after the GF4 launch) resulted in the Radeon 9000... I'm not gonna hold my breath for something like the R350...

Last edited by DGhost; 18 November 2002, 20:23.

"And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
Comment