
'Fusion' cards


  • Compiling a game and playing it are two very different things, and you know it.

    The example I gave still applies, because while getting 100+ fps in today's games is indeed overkill, the fact remains that we have CPUs that are already capable enough to play next-generation games with 60/70/80 polys per frame, but not the bus to match. At least not until most people have systems which no longer rely on standard SDRAM.

    Do you really think that of those 500 megs, developers will use it all up in vertex data??? There are other considerations like texture uploads, sound, AI, physics, gameplay, collision, clipping, etc...

    All those nibble away at those 500 megs of bandwidth, which can't be completely dedicated to feeding just the video card.

    So in real terms, you won't have much more than those 266 megs available for the video card anyway.
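A back-of-the-envelope sketch of that budget (all figures are assumptions for illustration, roughly matching a PC133 SDRAM system, not measurements):

```python
# Bandwidth budget sketch for a PC133 SDRAM system.
# All consumer figures below are hypothetical, for illustration only.
PEAK_MBPS = 133 * 8  # 133 MHz x 64-bit (8-byte) bus ~= 1064 Mbytes/s peak

# Assumed shares claimed by everything besides vertex uploads (Mbytes/s).
other_consumers = {
    "texture_uploads": 200,
    "sound": 50,
    "ai_physics_gameplay": 150,
    "collision_clipping": 100,
}

available_for_vertices = PEAK_MBPS - sum(other_consumers.values())
print(f"peak {PEAK_MBPS} MB/s, left for vertex data: {available_for_vertices} MB/s")
```

With those assumed numbers, roughly half the theoretical peak is already spoken for before a single vertex is pushed to the card.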
    note to self...

    Assumption is the mother of all f***ups....

    Primary system :
    P4 2.8 GHz, 1 GB DDR PC2700 (Kingston), Radeon 9700 (stock clock), Audigy Platinum, and SCSI all the way...



    • Hi,

      --------------------
      With the new G400, Matrox has taken one step further and expanded the dual 64 bit bus to a dual 128 bit bus totaling in as much as 256 bit depth of the I/O interface. Both bus subsystems are unidirectional, meaning that we are looking at one 128 bit input bus and one 128 bit output bus...
      --------------------

      Don't forget that the Dual-Bus is a variant of the dual-channel concept (a less efficient, less versatile variant, since each of the two buses can only read OR write), and it is only a bus to the I/O interface (the input/output of instructions), not to the memory... Dual-Bus is NOT a dual-channel architecture, only a less efficient derived model; the G400's memory path is still a single 128-bit channel.

      --------------------
      Notice even the double gpu 64mb card costs nearly 20% more than the single gpu card. I don't know if you've talked to manafacturers, but 20% is a LOT.
      --------------------

      Yes, a 20% increase in production cost is a lot, and that's one reason why there are no dual-GPU boards out; also, aside from the Radeon, there is no dual-chip-ready GPU in the industry.

      Aside from ATI's upcoming Dual-Stinger board (dual Radeon 2 chips), there are absolutely no dual-GPU cards in sight... (BTW, the dual-G800 chip board was only an early rumor; remember those early rumors also gave the G800 a 64-bit memory data path?). The GeForce 3 will not support a dual-chip configuration, and neither will S3 Graphics' Columbia... Also, the "enthusiast" version of the NV20 (GeForce 3) will NOT be available in a 128 MB configuration... (don't forget that the base 64 MB 128-bit 250 MHz DDR-SDRAM board will already cost $600 US at the official launch). The "professional" NV20 (Quadro 3) will be available in a 128 MB configuration, but with a projected retail price of $1100-1200 US DOLLARS...

      Take the present situation of the GeForce 2 GTS 32 and 64 MB cards... The cheapest 32 MB cards are found for about $170 US, while the cheapest 64 MB boards are $250 US (prices taken from Pricewatch.com). Let's pretend a 32 MB card costs $150 to produce (with the PCB, logic and connectors), which translates roughly into a $20 profit, and a 64 MB card costs $190 (added memory also requires added logic...) and retails for $250, giving the manufacturer a much better $60 profit per card... So the company would like to sell only 64 MB cards... But due to the price delta between the two cards, the demand for 32 MB cards is MUCH bigger than the one for the 64 MB parts... Lowering the 64 MB version's price would mean added demand for that part, but the profit per card would be much lower... Remember when the GeForce 2 GTS came out? It used to cost about $320, while the older GeForce 256 with DDR-SDRAM would still cost $250, but the GeForce 256 cards with SDRAM retailed for a low $125... half the price of the same GPU-based cards equipped with more expensive DDR-SDRAM.

      I know typical SDRAM and DDR-SDRAM prices (the modules used in DIMMs for the system RAM) have plunged to an all-time low, but prices for higher-speed parts (used mostly for graphics purposes) are still amazingly high. So RAM still has the biggest influence on board prices, although much less than the influence it had 5 years ago!
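The margin argument above, using the post's own assumed dollar figures (pretend costs, not manufacturer data):

```python
# Per-card margin comparison for the hypothetical 32 MB vs 64 MB cards.
# Production costs and retail prices are the post's assumed figures.
cards = {
    "32MB": {"cost": 150, "retail": 170},
    "64MB": {"cost": 190, "retail": 250},
}

margins = {name: c["retail"] - c["cost"] for name, c in cards.items()}
for name, margin in margins.items():
    print(f"{name}: ${margin} profit per card")
```

The 64 MB part earns three times the profit per unit, which is exactly why the vendor would rather not cut its price to chase volume.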

      --------------------
      Interesting logic, but you have one flaw. Abit isn't the only mb manafacturer. In fact, they're only a 2nd rate manafacturer. Had you said it wasn't worth it for ASUS or Gigabyte to produce AMD boards,...
      --------------------

      And that's why they have to be more cost-conscious than ASUS. Here's why: ASUS' sales consist mostly of mid-performance boards sold to OEMs; the sales of high-end boards like the P4T and the A7xxxxx series represent only a minimal % of their sales. Abit, on the other hand, sells mostly high-end "enthusiast" boards to the retail market... that's why they have to choose their designs wisely; it's also the reason why you won't see Abit selling first-generation Pentium 4 boards... (they will make some in the future, but they'll have to wait for prices to go down and market demand for such boards to go up.) ASUS, on the other hand, has the resources to produce "experimental" first-generation boards, as the high-end segment does not represent such a big % of their sales...

      --------------------
      Who's manafacturing these chips? What process? How many extra pins does it require to implement a 2nd sdram channel?
      --------------------

      VIA is fabbing its own chips, and I think that AMD does too... Adding a second 128-bit memory bus would not require 250 additional pins; it would need SLIGHTLY more than 128 pins, some of which would replace existing reserved pins on current BGA packages. By the way, "reserved" is the right word to use; if you look at any Intel, NVIDIA or VIA pin-out schematic, you will notice many pins are labelled "reserved", as in "kept for possible future use"... I looked it up in my "Webster's Encyclopedic Dictionary" (the massive 12"x9"x4", 1200-page, 1988 Canadian edition with the golden pages; it was the last gift I received from my grandfather before he died of asthma), and the second of the three possible definitions is "set apart or retained for future use"...

      Oh, and it's spelled "manUfacturer", not "manafacturer"... But you were right, my mother tongue is French (Quebec Rulez!!!), but be not mistaken (I felt a little "Shakespearian"...), my knowledge of English syntax and vocabulary is much better than the average folk's... (I believe that no one in this forum falls in that category, as I believe most computer-savvy folks usually have a good general knowledge.)

      You're right about the numbers in Anandtech's article on Dual-Channel and DDR-SDRAM motherboards, as the numbers were taken on a Pentium 3 platform (GTL+ protocol). Besides, the Pentium 3 cannot even unleash the full potential of those alternatives, as the FSB data bandwidth is much less than the possible memory bandwidth of those motherboards (the 133 MHz FSB limits the bandwidth to 1066 Mbytes/s, where 133 MHz Dual-Channel and PC2100 DDR-SDRAM boast a possible maximum of 2133 Mbytes/s, and PC1600 1600 Mbytes/s). Were those numbers taken on Athlon, Alpha or Sparc systems, they would have been much, much better figures, figures that could have been used for a fair comparison (this is even more true for dual-P3 boards, as the GTL+ protocol SHAREs the FSB between the multiple CPUs, where the P4's Netburst and the Athlon's EV6 grant each CPU its own data path...)
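For reference, the arithmetic behind those figures (peak theoretical numbers; the post's 1066/2133 values come from using 133.33 MHz instead of the rounded 133 used here):

```python
def bus_bandwidth(mhz, bits, transfers_per_clock=1):
    """Peak bus bandwidth in Mbytes/s: clock x width in bytes x transfers/clock."""
    return mhz * (bits // 8) * transfers_per_clock

p3_fsb  = bus_bandwidth(133, 64)      # GTL+ 133 MHz FSB, single data rate
pc2100  = bus_bandwidth(133, 64, 2)   # PC2100 DDR-SDRAM: two transfers per clock
dual_ch = 2 * bus_bandwidth(133, 64)  # two 64-bit SDR channels in parallel
pc1600  = bus_bandwidth(100, 64, 2)   # PC1600: 100 MHz DDR

print(p3_fsb, pc2100, dual_ch, pc1600)  # the FSB is the bottleneck at ~1 GB/s
```

Either way the memory subsystem can deliver twice what the P3's front-side bus can accept, which is why the benchmark deltas looked so small.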

      Finally, I really agree with you on one point: a good argument is always entertaining and stimulating (my former girlfriends always told me that I shouldn't be studying computer engineering, but attending the law faculty; I have to admit I considered law school and law enforcement...)

      Addendum: Id can't be using 16-way Xeons; THIS IS IMPOSSIBLE, as the only chipset for the GTL+ protocol (P2/P3/Xeons) that supports more than 2-CPU SMP is the 450NX, and it only supports 4-CPU SMP, though it can support 8 CPUs by using an additional northbridge and a special northbridge-bridging chip... Compilation benefits more from raw CPU power, yet playing Q3A benefits much more from a faster FSB and more memory bandwidth.

      Francis Beausejour, with the new and improved signature...
      ------------------
      What was necessary was done yesterday;
      We're currently working on the impossible;
      For miracles, we ask for a 24 hours notice ...

      [This message has been edited by frankymail (edited 08 February 2001).]

      (Workstation)
      - Intel - Xeon X3210 @ 3.2 GHz on Asus P5E
      - 2x OCZ Gold DDR2-800 1 GB
      - ATI Radeon HD2900PRO & Matrox Millennium G550 PCIe
      - 2x Seagate B.11 500 GB SATA
      - ATI TV-Wonder 550 PCI-E
      (Server)
      - Intel Core 2 Duo E6400 @ 2.66 GHz on Asus P5L-MX
      - 2x Crucial DDR2-667 1GB
      - ATI X1900 XTX 512 MB
      - 2x Maxtor D.10 200 GB SATA



      • 16-way Xeons are *very* possible. While the Xeon processor only supports 4-way configurations gluelessly, using custom chipsets you can go to at least 32-way. The chipsets basically fool each processor into thinking it is only part of a 4-way config. PPros did this - the PPro had the same limit, yet 8-way PPro boxes were not too uncommon.....

        For 32-way support it would have to implement 8 CPU buses.

        ServerWorks also makes chipsets that do 4-way processing; the ServerWorks III HE supports 4-way SMP on the Xeons.
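The bus math behind those configurations, assuming the glueless GTL+ limit of 4 CPUs per bus mentioned above:

```python
CPUS_PER_GTL_BUS = 4  # glueless GTL+ SMP limit per bus (per the post's claim)

def buses_needed(n_cpus):
    """Number of separate CPU buses a custom chipset must implement."""
    return -(-n_cpus // CPUS_PER_GTL_BUS)  # ceiling division

for n in (4, 8, 16, 32):
    print(f"{n}-way: {buses_needed(n)} buses")
```

So a 32-way box needs the chipset to bridge 8 independent processor buses, which is exactly the kind of glue logic a custom server chipset provides.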

        "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



        • Don't say can't, Franky.
          Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



          • And in case you doubt this, <a href="http://www5.compaq.com/products/servers/proliantml770/index.html">here</a> is a link to a Compaq box that does 32 processors. Sleep deprivation is preventing me from remembering what the other companies that produce them are.
            "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



            • And now for something completely different...
              Matrox has never really ramped up their card speeds (once, for the G400/G400 MAX), unlike NVIDIA, which does it constantly.
              Why? Surely their manufacturing process must improve as they refine their chips. And it would also improve their products' lifetime.
              I guess it may be something to do with OEM stuff. But still, why won't they ramp up speeds on the G400/G450s?
              Or are they having fabrication difficulties?



              • <font face="Verdana, Arial, Helvetica" size="2">Originally posted by frankymail:
                Yes a 20% increase in production cost is a lot, and that a reason why there is no Dual-GPU boards out; also, aside from the Radeon, there no Dual-Chip ready GPU in the industry.</font>
                I'm very pessimistic about the chances of the dual-Radeon board. I remember ATI making it sound like they might make it when they first released the Radeon boards, but they seem much less enthusiastic about it in later interviews..

                <font face="Verdana, Arial, Helvetica" size="2">Asus on the other hand have the resources to produce "experimental" first-generation boards, as the high-end segment do not represent such a big % of their sales...</font>
                There are other mb manufacturers with just as small a market share as Abit that did release 750 boards, though: FIC and MSI.

                Of course, I'm sure you remember the rumors about Intel limiting the number of BX chips to athlon-mb suppliers back then.. Also, the athlon at the time was unproven in the market. I highly doubt that the cost of the pcb was the deciding factor.

                <font face="Verdana, Arial, Helvetica" size="2">VIA is fabbing its own chips, and I think that AMD does too... Adding a second 128-bits memory bus would not require 250 additional pins, it would need SLIGHTLY more than 128 pins, some of which would replace existing reserved pins on current BGA packages.</font>
                Are you sure VIA has their own fab? I thought they had TSMC do their chips.

                Anyway, I still doubt we'll see dual-channel SDRAM on a video card anytime soon. I'm a little surprised serverworks made a chip that could do it, but then again, the board costs $800 frickin' bucks US..

                hm. The only glaring mistake was already handled. Left out the fun for me



                • <font face="Verdana, Arial, Helvetica" size="2">Oh, and it's spelled "manUfacturer", not "manafacturer"... But you were right, my mother language is French (Quebec Rulez!!!), but be not mistaken (I felt a little "Shakespearian"...), my knowledge on the English syntax and vocabulary is much better than the average folk's... (I believe that no one in this forum falls in this category, as I believe most computer-savvy folks usually have a good general knowledge.)</font>
                  r j00 m4k1ng fun 0f my p00r 3ngl1sh sk1llz? <G>

                  I do agree with that, actually... most people involved in computers are usually intelligent enough to speak correctly =)

                  and yes, you are actually better at speaking English than most people I encounter... and I'm smack dab in the middle of the USA...

                  <font face="Verdana, Arial, Helvetica" size="2">
                  Finally, I really agree with you on one point: a good argumentation is always entertaining and stimulating (my former girlfriends always told me that I shouldn't attend computer engineering, but the law faculty; I have to admit I considered law school and law enforcement...)
                  </font>
                  Hmm... sounds very familiar from somewhere.... my parents thought I would be a lawyer when I was young... law school was an entertaining thought for about 5 seconds, whereas law enforcement has actually been an interesting thought....

                  Rob: sorry for stealing all the fun =)

                  "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



                  • Hi,
                    I do know there are servers equipped with more than 8 CPUs... but they are not really computers, they are computer arrays, just like CRAY supercomputers and DEEP/DEEPER BLUE. They consist of special motherboards, each equipped with its own CPUs and RAM; these motherboard modules/daughtercards (generally referred to as "clusters" or "cells") are often housed in rackmount cases. And these beasts are HUGE, even compared to my homemade-all-maplewood-18"x18"x16"-two-sided case...

                    There are three reasons I said Id Software cannot be using 16-way "computers" to compile maps. First, these machines are almost exclusively running Solaris, Unix or Win2K Large Database Server, which makes their use as personal computers usually very, very tough. They're (almost) only suited for the purpose they were designed and built for: being servers, and they are very good at this. Then, there's the fact that Id's map/level designers compile their maps/levels on their own workstations, which range from P3 450s to 1.X GHz Athlons and dual Coppermines... Finally, the main reason why Id Software CANNOT be using 16/32-way SMP machines is that those monsters' price tags run in the low to mid-low 6 DIGITS ! ! ! (Excluding cents, of course . . . )

                    You're right Rob, the dual-Radeon was cancelled, and it has been confirmed by ATI's spokespersons and top executives... But I only said that the Radeon was the only dual-chip-CAPABLE currently available GPU. However, ATI will produce a Dual-Stinger ("Stinger" replaced "R200" as the codename for the Radeon 2 chip), and that part is as firmly planted in ATI's roadmap as the AIW series. What happened is that ATI didn't want to repeat the mistake it made with the Rage Fury MAXX: a late arrival; this time, ATI plans on announcing and releasing the single and dual boards at the same time... (By the way, the Rage Fury was an excellent card that offered superior fillrate and memory bandwidth compared to the GeForce DDR, but suffered from a late arrival and poor drivers... I mean VERY, VERY, VERY poor drivers... the ICD issue with Matrox cards is nothing compared to what some of my friends endured with their Rage 128/128 Pro/MAXX; I was lucky enough to never buy an ATI card in the past... but nowadays, the Radeon 32 MB DDR is the card that makes the most sense: it's available for as little as $115 US, offers better 3D than the GeForce 2 GTS (better image quality, faster at high resolution & high color depth), and its drivers are much better and more stable than in the past!)

                    About the BX chipset issue: Intel did not limit the number of BX chipsets sold to the Athlon-motherboard manufacturers; what happened is that Intel motivated manufacturers not to build Athlon motherboards by selling the 440BX to the Intel-only manufacturers at a slightly lower price. There was a BX limitation policy at one point, but it involved i810/E chipsets: manufacturers had to buy so many i810/E chipsets if they wanted to buy so many BX chipsets... But back in the day, when the Athlon was something totally new, the only compatible chipset was AMD's own AMD750, which called for a 6-layer design; Abit said publicly to many "enthusiast" hardware web sites that it would not produce AMD750-based motherboards unless AMD provided them with a 4-layer design. AMD didn't, so Abit didn't either. Why? Because retail sales make up an enormous % of Abit's sales. They sell mostly mid-priced "enthusiast" motherboards. An AMD750-based motherboard used to cost a lot to manufacture, and was priced much higher than Abit's usual boards... it would not have been very profitable, and small companies like Abit have to make a profit on every product. Larger motherboard manufacturers like ASUS, AOpen, FIC, Gigabyte, MSI and SOYO all have many times the OEM market share that Abit does... (the last three, being smaller than ASUS and AOpen, still dominate Abit in terms of OEM sales) and they can allow themselves to make less profit on retail "enthusiast" motherboards (do you think ASUS will sell millions of P4T or VIA-based dual-P3 boards???)

                    I'm positive they own at least one fab (I actually think they own four, but I'm not sure about this).

                    About the ServerWorks HE-SL-based motherboards: of course they're expensive; have you seen the features those boards are typically equipped with??? They come in dual/quad variants, most of the time with on-board SCSI and nice features that make them perfect for very-high-end workstations. Take the Supermicro S2QE6 motherboard ( http://www.supermicro.com/images/Ima...C_HE/s2qe6.JPG ): it features 4 Slot 2 connectors, 4 64-bit 66 MHz slots, 2 64-bit 33 MHz slots, 16, yes SIXTEEN DIMMs, a DUAL-CHANNEL Ultra160 SCSI controller, a multiport Ethernet controller, an AGP connector and more... Tell me: if this little sweety was YOUR motherboard and price was not important, what would you equip this baby with??? YEAH, 4 P3 Xeons, quite a few 128/256 MB DIMMs, a few IBM 15K rpm SCSI hard drives and the best graphics card that suits your needs!!! Believe me, a situation like this happens quite often; this summer, I worked as a computer consultant for the engineering department of a big corporation, and I played with (assembled and used) an i840 motherboard with two P3 1 GHz, 1 GB of PC800 RDRAM and dual 10K Quantum SCSI drives, and I tell you, for the price the company paid for each workstation, you could have bought a few new cars... (BTW, guess what the display subsystem of that workstation was: two Sony W900 24" monitors connected to a . . . Matrox Millennium G400 Max! Believe me, it was the best thing humanity was granted since Heidi Klum )

                    Finally, I really enjoy talking to knowledgeable guys like you, as the only slightly computer-savvy people I was ever granted to encounter were egocentric show-offs that didn't even know what they were talking about, not knowing the difference between a GeForce 3 MX and a GeForce 2 GTS, etc... A guy I met once (I immediately put him in the "Too stooooooopid to be granted the right to live" or "Let's put an end to his misery" category) told me that he was using the Linux kernel with Win NT 4.0 - he told me he just replaced the Windows kernel with the Linux one (hello... are there any brain cells still alive in there???) - and that his friend has a 4-way Itanium system (Yeah right! And my grandmother is the world record holder for apnea diving, and I went to space on the shuttle three times without anyone even noticing it was gone... I refuelled it before giving it back )

                    "What's a LADA on top of a mountain?"
                    "- God's doing..."
                    "What do two LADAs on top of a mountain mean?"
                    "- Pure Sci-Fi..."
                    "And what are THREE LADAs on top of a mountain???"
                    "- Proof someone had the very stupid idea to build a LADA factory on top of a mountain..."

                    OK, there are moments when I frighten myself, like right now...

                    For those of you who don't know what a LADA is, it's a very, very cheap Russian-made car that's worth less than the gas in its tank...

                    Francis Beausejour

                    ------------------
                    What was necessary was done yesterday;
                    We're currently working on the impossible;
                    For miracles, we ask for a 24 hours notice ...

                    [This message has been edited by frankymail (edited 09 February 2001).]



                    • That 16-CPU machine from Id was sold on eBay sometime last year. It was used to render Quake 1 and 2 maps, and some of the Quake 3 maps.

                      From memory, 8 of the processors didn't work anymore, but they would look into fixing that for extra.

                      I can't remember what the processors were, but they were quite old. Something like Alphas, I think.

                      I think it went for US$16,000 in the end.

                      I tried to find it on eBay, but they only search back 2 weeks, so no luck.

                      Ali



                      • Found it:

                        http://finger.planetquake.com/plan.a...toddh&id=14795

                        id Software's SGI Origin 2000. This system was used to process all of the
                        map data for Quake II and Quake III Arena. This system has 2 banks of 8 x
                        180Mhz R10k processors for a total of 16 processors. The power supply to one
                        bank of processors needs to be replaced (i.e. those 8 processors and the RAM
                        associated with them are not working). We can investigate fixing this bank
                        of processors at buyer’s expense. The system has 1.2 GB RAM (512MB of this
                        RAM is working on the good power supply, the other 768MB will work once the
                        second power supply is replaced). This system also has 4GB of hard drive
                        space, and is running Irix v6.4. We paid approximately $500,000 in December
                        of 1996 for this system. For more pictures see:
                        http://www.idsoftware.com/origin/index.html


                        I was a little off, but memory is never perfect.

                        Ali



                        • But I was right, Id was not using 16-way SMP Xeon-based computer arrays to compile their maps... BTW, SGI's Origin 2400 server is a computer array: http://www.sgi.com/origin/2000/2400.html But I have to agree, $16,000 is very cheap for such a powerful machine. Yet don't forget that it is still considered previous-generation hardware by our technological standards (although its processing power is formidable!!!)

                          I wouldn't mind having one to show off a little, but I wonder where I could put an Origin 3800 server http://www.sgi.com/origin/3000/3800.html ... I guess I would probably put it outside (currently -20 Celsius and there is over 50" of snow on the ground here) and try to overclock it; that's the [H] spirit!!!

                          Let's hope a G800 announcement is going to close this thread for good before it reaches the venerable age of 1 year...

                          Francis Beausejour

                          ------------------
                          What was necessary was done yesterday;
                          We're currently working on the impossible;
                          For miracles, we ask for a 24 hours notice ...

                          [This message has been edited by frankymail (edited 09 February 2001).]




                          • In the field of possibilities, consider this:
                            www.eetimes.com/story/OEG20010207S0012



                            • <font face="Verdana, Arial, Helvetica" size="2">Originally posted by frankymail:
                              There are three reasons I said ID Software cannot be using 16-ways "computers" to compile map. First, these machines are almost exclusively running Solaris, Unix or Win2K Large Database Server, which make their utilisation as personnal computers usually very, very tough. They're (almost) only suited for the purpose they were designed and built for: be servers and they are very good at this. Then, there the fact that Id's maps/levels designers compile their maps/levels on their own workstations, which range from P3 450 to 1.X GHz Athlons and Dually-Cµmines... Finally the main reason why Id Software CANNOT be using 16/32-way-SMP machines is that those monsters price tags run in the low to mid-low 6-DIGITS ! ! ! (Excluding cents, of course . . . )
                              </font>
                              1. id Software used q3map and q3bspc, which they ported for that Solaris 2000 (Linux variants also exist)

                              2&3:
                              I know they cost a lot, but it doesn't seem to be out of id's price range...
                              <font face="Verdana, Arial, Helvetica" size="2">
                              When it's working at full capacity, it can crunch through average map files with full vis and light extra in under half an hour ... usually in no more than 15 to 20 minutes.
                              I seem to remember that it cost us more than a couple hundred grand to acquire.

                              ------------------
                              Paul Jaquays
                              designer
                              id Software
                              Q3W Level Editing Forum Moderator
                              </font>
                              ..And from that they upgraded to the 16-CPU Xeon. (BTW, at 1 GHz, it takes me 5-6 hours to do a light-extra on an average-size map)

                              <font face="Verdana, Arial, Helvetica" size="2"> but nowaday, the Radeon 32 MB DDR is the card that makes the most sense: it's available to buy for as little as $115 US offers better 3D than the GeForce 2 GTS (better images quality, faster at high-resolution & high color depth), and its drivers are much better and more stable than in the past!)</font>
                              Except their drivers still aren't very good. Incompatibility with the new Iwill ALi-based motherboard (although a leaked driver version combined with a beta Iwill BIOS will work if you don't care about the RAID feature), and more importantly to me, OpenGL problems that make Q3Radiant unusable.. At least the G400 works with that program (often)..

                              [This message has been edited by Rob M. (edited 09 February 2001).]



                              • In my mind, it's inefficient to do SMP with more than 4 processors. The biggest problem lies in cache coherence: there will be too much overhead to keep the caches synchronized. That's why cluster architectures are used in massively parallel computers.

                                As far as I know, the SGI Origin 2000 uses a hypercube interconnect and a CC-NUMA architecture. Each board (node) contains 2 MIPS processors, and the controller doesn't even perform cache coherence.

                                Intel had a different cluster product in which each node contains 4 SMP PPros, and all of the nodes are connected via an SCI ring.
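A toy model of why snooping-bus coherence stops scaling (hypothetical write rates; real protocols filter out most snoops, so this is only the worst-case trend):

```python
# Toy snoopy-bus model: every write by one CPU must be snooped by all
# the others, so system-wide snoop traffic grows roughly as N^2.
def snoop_events(n_cpus, writes_per_cpu):
    """Total snoop events per interval on a single shared snooping bus."""
    return n_cpus * writes_per_cpu * (n_cpus - 1)

for n in (2, 4, 8, 16):
    print(f"{n} CPUs: {snoop_events(n, 1000)} snoop events")
```

Doubling the CPU count roughly quadruples the coherence traffic, which is why large machines switch to directory-based schemes or clustered nodes instead of one big snooped bus.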

                                ------------------
                                PIII-550E@733/1.65v, P3B-F, G400DH/32MB@140/186
                                P4-2.8C, IC7-G, G550

