Have any G800 rumors or "inside info?"


  • #76
    Cybermind - thanks for the info!! Got any more on that?

    MK
    Celeron II 700 @ 1,1 GHz
    ASUS CUSL2-C, Bios 1009 final
    Alpha 6035MFC, 60 -> 80mm adapter
    2 x 80mm Papst Cooler 19/12dB
    256 MB PC133 Crucial 7E (CAS2)
    Maxtor Diamond MAX VL40
    ATI Radeon 8500 64MB @ Catalyst 3.0
    Hauppauge WinTV TV-Card
    Iiyama Vision Master Pro 400
    Plustek Optic Pro U12B
    HP Deskjet 959C
    Plantronics LS1 Headset
    all on W2k Professional SP2



    • #77
      Anyone heard about the Crusoe CPU? Programmable instructions on the CPU core, much like a BIOS...

      Does anyone remember the rumor about the G400 and the empty registers for future instructions?

      Crusoe + the G400 rumor = a G800 that could be programmed with new APIs as DX goes on and on (9, 10, ...), just by flashing the card.


      Possible? I don't know, but...



      • #78
        Reminds me of ATI's RD300 (a 3rd generation RadeOn) with programmable T&L.
        I believe this isn't the only next-gen card rumored to have such a feature.
        PlaneteMatrox.Com
        The site dedicated to Matrox cards.



        • #79
          Well... I guess FCRAM WILL be used, because it wasn't just a rumour: there were FCRAM entries in some of the drivers' DLLs, and there was one G800 entry called 'F800 Fusion'. That is probably the gamer-oriented version of the G800, with dual chips and FC-RAM (I really, really hope so, at least).



          • #80
            From what has already been said in this forum, Matrox is going a different way with the Fusion (than the other players in the graphics card business), so I doubt it'll be a gamer card...
            But I would definitely be more than happy to have a G800 card with two chips and FCRAM.
            PlaneteMatrox.Com
            The site dedicated to Matrox cards.



            • #81
              I highly doubt that the G800 will use FC-RAM. It really doesn't address the problem with graphics memory: graphics hardware needs higher bandwidth, not lower latencies. Lower latency is a good thing, but latency isn't the bottleneck, since a graphics card works on a lot of data in parallel and renders to the screen linearly. FC-RAM would be really nice to have instead of DDR as system memory, though, because the CPU depends more on latency than on bandwidth (which is why Rambus sucks as system memory; it could perhaps be used as graphics memory, but since DDR offers higher bandwidth, lower latencies and a lower price, there's no reason to use it).
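
              A quick way to see why parallelism makes latency matter less: if requests have to wait for each other, every access pays the full latency; if they can be overlapped, you pay the latency roughly once and then stream at full bandwidth. Here's a toy model of that (my own illustration, with made-up numbers, nothing from Matrox or the drivers):

/* Toy model: time to service N memory requests when they are dependent
 * (CPU-like, latency-bound) versus overlapped/pipelined (GPU-like,
 * bandwidth-bound).  All numbers are made up for illustration.          */
#include <stdio.h>

int main(void)
{
    const double latency_ns   = 40.0;    /* assumed access latency         */
    const double bytes_per_ns = 3.2;     /* assumed peak bandwidth (GB/s)  */
    const double request_b    = 32.0;    /* bytes per request              */
    const int    n            = 10000;   /* number of requests             */

    /* Dependent accesses: each one waits for the previous result. */
    double serial_ns = n * (latency_ns + request_b / bytes_per_ns);

    /* Overlapped accesses: pay the latency once, then stream at peak. */
    double pipelined_ns = latency_ns + n * (request_b / bytes_per_ns);

    printf("dependent (latency-bound):    %.0f ns\n", serial_ns);
    printf("overlapped (bandwidth-bound): %.0f ns\n", pipelined_ns);
    return 0;
}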



              • #82
                I agree with Humus. Latency is not the bottleneck for a graphics card. About memory systems, bandwidth and latency, read this nice article from LostCircuits!
                http://www.lostcircuits.com/memory/latency/

                The only way we can go today is DDR (I mean the price/performance ratio is really good).

                The next step will be QDR or DDR-II. JEDEC is working on these new memories and we can (perhaps) expect to see them at the end of 2001 or in 2002.



                [This message has been edited by Barbarella (edited 05 October 2000).]



                • #83
                  Humus - not quite. The real problems associated with how a graphics card stresses the memory subsystem are well beyond me.

                  But in terms of basic concepts, latency affects "effective" bandwidth. I don't have time to do the math right now, but if you have two 200 MHz DDR configs, one with a 4-2-2-2 access cycle and another with a 3-1-1-1 access cycle, the latter will deliver a lot more data to the requester than the former.
                  (The numbers I used are fictitious - I'd have to look them up.)

                  Basically, actual max bandwidth = clock rate * path width * efficiency. Efficiency is dominated by the memory device's access cycle. I used max bandwidth because the memory controller on the GPU most likely won't be able to use an optimal access pattern, due to the varying nature of the requests made of it by the GPU. In actuality, it's probably a lot more complicated than this.
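
                  Here's a minimal sketch of that arithmetic (my own addition: the 4-2-2-2 / 3-1-1-1 figures are the fictitious cycles above, and the 200 MHz / 128-bit DDR parameters are just assumptions):

/* Back-of-the-envelope take on "max bandwidth = clock rate * path width *
 * efficiency".  The access cycle is read as: clocks for the first word of a
 * 4-word burst, then clocks for each of the three following words.  All
 * figures are hypothetical.                                                */
#include <stdio.h>

static void report(const char *name, const int cycle[4])
{
    const double clock_hz    = 200e6;  /* assumed 200 MHz memory clock      */
    const double width_bytes = 16.0;   /* assumed 128-bit data path         */
    const double peak        = clock_hz * 2.0 * width_bytes;  /* DDR peak   */

    int    burst_clocks = cycle[0] + cycle[1] + cycle[2] + cycle[3];
    double burst_bytes  = 4.0 * width_bytes;        /* 4 words per burst    */
    double actual       = burst_bytes * clock_hz / burst_clocks;

    printf("%s: %4.0f MB/s delivered, %.0f%% of the %.0f MB/s DDR peak\n",
           name, actual / 1e6, 100.0 * actual / peak, peak / 1e6);
}

int main(void)
{
    const int slow[4] = {4, 2, 2, 2};  /* 10 clocks per 4-word burst */
    const int fast[4] = {3, 1, 1, 1};  /*  6 clocks per 4-word burst */

    report("4-2-2-2", slow);
    report("3-1-1-1", fast);
    return 0;
}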

                  -AJ

                  PS, the access cycle I used is consecutive access to data words (which varies depending on the memory and the controller). I didn't use CAS, RAS and Precharge per access (which, as LostCircuits alludes to, vary from the first access to consecutive accesses).

                  Thanks for the link Barbarella.



                  [This message has been edited by AJ (edited 05 October 2000).]
                  Trying to figure out what Matrox is up to is like trying to find a road that's not on the map, at night, while wearing welder's goggles!



                  • #84


                    If DDR SDRAM does not have enough bandwidth, and FCRAM is too expensive, what about using an interleaved pair of DDR SDRAM banks? Or would that end up more expensive than FCRAM?
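
                    For what it's worth, the basic idea of two-way interleaving is just to split consecutive accesses across two banks so one bank can be precharging/activating while the other is transferring data. A minimal sketch of the address-to-bank mapping (my own illustration, not anything Matrox-specific; the 32-byte granularity is an assumption):

/* Two-way bank interleaving sketch: the lowest address bit above the access
 * granularity picks the bank, so consecutive 32-byte accesses alternate
 * banks 0,1,0,1,... and their busy times can overlap.                      */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 32u   /* assumed access granularity */

static unsigned bank_of(uint32_t addr)
{
    return (addr / LINE_BYTES) & 1u;
}

int main(void)
{
    for (uint32_t a = 0; a < 8 * LINE_BYTES; a += LINE_BYTES)
        printf("addr 0x%04x -> bank %u\n", (unsigned)a, bank_of(a));
    return 0;
}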



                    • #85
                      You're right AJ, but the performance increase isn't close to the increase in price. You might get a board that's a couple of percent faster, but at something like twice the price (OK, I don't know the price of FC-RAM... but it's not cheap, nor available in volume).

                      A GPU can use the memory a lot more efficiently than a CPU can. A GPU draws a triangle scanline by scanline, and each scanline is rendered from left to right. That means the data is accessed linearly most of the time. Texture accesses aren't linear, but all cards out there have a texture cache that greatly reduces the impact of that.

                      swr:
                      The right way to solve bandwidth problems is not to use faster and faster memory, it's to use smart technology that doesn't waste a lot of bandwidth. ATI's Hyper-Z is a good example of this: the Radeon has much less than half the fillrate of a GF2 but still goes faster at high resolutions in 32-bit, where the GF2's brute-force approach simply waits for data almost all the time.
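
                      To make the "linear access" point concrete, here's a toy sketch of span rendering into a linear framebuffer (my own illustration; the 640x480x32 layout and the usual y*pitch + x addressing are assumptions, nothing G800-specific):

/* Pixels on one scanline span are consecutive words in the framebuffer, so
 * the writes reach memory in linear, burst-friendly order.                 */
#include <stdint.h>
#include <stdio.h>

#define WIDTH  640
#define HEIGHT 480

static uint32_t framebuffer[WIDTH * HEIGHT];   /* 32-bit colour */

/* Fill one horizontal span of a triangle, left to right. */
static void draw_span(int y, int x0, int x1, uint32_t colour)
{
    for (int x = x0; x <= x1; ++x)
        framebuffer[y * WIDTH + x] = colour;   /* consecutive addresses */
}

int main(void)
{
    draw_span(100, 50, 200, 0x00FF0000u);
    draw_span(101, 48, 202, 0x00FF0000u);

    printf("span at y=100 covers addresses %p .. %p\n",
           (void *)&framebuffer[100 * WIDTH + 50],
           (void *)&framebuffer[100 * WIDTH + 200]);
    return 0;
}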

