First thing: I do not want to incite any flaming.
I've been playing UT with an AIW Radeon lately, and I've noticed that there is in fact a large discrepancy in perceived smoothness at mid-range frame rates (~40ish fps) in comparison to my G400. (My G400 Marvel averages high 50s but remains fluid from the high 20s up, while my Radeon system becomes annoyingly choppy at anything below the mid 40s.) To me, this lends credit to arguments I've read from various people (such as the stereotypically branded nVidiot) about *unplayable* performance below "X" fps, but also to the arguments from others (such as the stereotypically branded M-fanboy) about "Y" fps being perfectly fine (e.g. old GeForce users arguing with the Matrox BBs about "those frame rates = slideshow" vs. "but still more fluid than your higher fps count").
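For what it's worth, the way I'd try to pin this down is to log per-frame times rather than just average fps, since two cards can average the same fps while one delivers much more uneven frame times. Here's a minimal sketch of what I mean (the numbers are made up for illustration, not measurements from my systems):

# Two hypothetical frame-time traces (milliseconds per frame) with the SAME
# average fps but very different spreads, to show why average fps alone
# doesn't capture perceived smoothness. Numbers are invented, not measured.

steady = [25.0] * 40          # 25 ms every frame -> 40 fps, even pacing
spiky  = [15.0, 35.0] * 20    # alternates 15/35 ms -> still 40 fps average

def stats(frame_times_ms):
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_ms = max(frame_times_ms)
    # variance of frame times -- a rough "choppiness" indicator
    var = sum((t - avg_ms) ** 2 for t in frame_times_ms) / len(frame_times_ms)
    return 1000.0 / avg_ms, worst_ms, var

for name, trace in (("steady", steady), ("spiky", spiky)):
    fps, worst, var = stats(trace)
    print(f"{name}: {fps:.1f} avg fps, worst frame {worst:.1f} ms, variance {var:.1f}")

Both traces report 40 fps on average, but the second one would stutter noticeably, which is roughly what I suspect is going on here.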
This discrepancy is something I've seen mentioned here several times, but never with a plausible explanation as to why. I'm simply curious whether there's any technical substantiation for it. Is it dependent on manufacturer bus latencies, memory, GPU architecture, etc.?
Does anyone have any theories or factual information? (Not physiological lectures about cone and synaptic response times in the eye and brain to explain why "X" fps is overkill, but rather reasons why "X" fps on one card does not seem as fluid as the same fps on another.)
Wombat? Doc? Anyone?
PS: Sorry for the annoying verbosity of my post, but I'm back in school for engineering, not that English composition mumbo jumbo.