Socket 754 - jheeeeesh!
-
They already pulled the article.
You can't tell me you're surprised. Of course there are a large number of pins... or did you think memory controllers and crossbar links wouldn't add much to the pin count?
-
30% of them are for power
I'd like to know how many are unused, and how many are diagnostic.
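Taking the "30% for power" figure at face value, a quick back-of-envelope pin budget (the 30% split is the claim above; everything else is simple arithmetic on Socket 754's 754 pins):

```python
# Back-of-envelope Socket 754 pin budget, assuming the "30% power" figure above.
total_pins = 754
power_pins = round(total_pins * 0.30)         # ~226 pins for power/ground
signal_and_other = total_pins - power_pins    # what's left for signals,
                                              # unused, and diagnostic pins
print(power_pins)        # 226
print(signal_and_other)  # 528
```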
-
Sounds normal
You need many power rails so that power can be distributed evenly through the chip, without drawing so much through any single wire that it burns out and kills the chip.
The power wires that connect the core to the outside of the chip are very thin, so you have to connect many of them in parallel so that the combined resistance is low enough to let enough power in without overheating any specific wire.
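To put toy numbers on the parallel-wire argument (every value here is invented for illustration, not taken from any datasheet):

```python
import math

# Why power pins are paralleled: a toy calculation (all numbers invented).
wire_resistance = 0.5       # ohms per thin bond wire (assumed)
max_current_per_wire = 0.5  # amps one wire can carry without overheating (assumed)
core_current = 40.0         # amps drawn by the core (assumed)

# Minimum number of parallel wires so no single wire exceeds its limit.
n_wires = math.ceil(core_current / max_current_per_wire)

# Resistors in parallel: N identical wires give R/N effective resistance.
combined_resistance = wire_resistance / n_wires

print(n_wires)              # 80
print(combined_resistance)  # 0.00625
```

Each wire carries only its share of the current, and the combined path has a tiny effective resistance, which is exactly the point of burning so many pins on power and ground.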
-
Hammer, I presume? Has anyone seen the chip? Pins all the way across the package, even underneath the die. Unusual to look at, but it's the way forward.
-
I think Clawhammer has a small area underneath the die that is pinless, but Sledgehammer seems to have the whole base covered, with something in excess of 900 pins. I guess the additional HT buses are the main reason for the higher pin count. Any comments, Wombat?
-
Actually, HT is a low pin-count protocol; even the widest possible HT connection I'm aware of requires about 40 pins, so it's unlikely to account for the difference you're talking about. I'm scanning through this PDF now, but certainly haven't read all of it yet. The document is targeted at vendors wishing to supply the sockets.
Looking through the PDF, the socket itself does not have pin holes directly underneath the die, so what pictures are you looking at, Pace and Novdid?
-
Originally posted by Wombat
They already pulled the article.
You can't tell me you're surprised. Of course there are a large number of pins...or did you think memory controllers and crossbar links wouldn't add much to the pin count?
-
Ah, cool. Thanks for that link.
Loosely, crossbars are the ways that CPUs in large multi-processor setups talk to each other, usually through bus uplink chips that are designed to handle that kind of traffic. Depending on how you define the term, Sledges might actually be considered "crossbar-less," since they only talk through other Sledges.
AMD released a document some months ago that had illustrations of how multi-CPU setups would be accomplished with Sledges. The biggest diagram they had was for an 8-way box. I don't have a link to the diagram, but IIRC, 4 of the processors couldn't even link to anything except other processors, and the other 4 only had one spare link. Also, with the bridges on the die with the processor, AMD hasn't said anything about redundancy or hot maintenance. Losing just one processor has the potential to be disastrous for the whole system. If AMD really had good ways of avoiding such problems with their design, they probably would have been boasting about them for a long time already. The silence is worrisome.
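The reachability worry can be sketched in code. This is not AMD's actual diagram (the link to that is gone); it's a hypothetical 8-way "ladder" of point-to-point links, just to show how you'd check what stays reachable when one CPU dies:

```python
from collections import deque

# Hypothetical 8-way point-to-point topology (NOT AMD's diagram):
# two rows of four CPUs, each CPU with 2-3 links to its neighbors.
links = {
    0: [1, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [2, 7],
    4: [0, 5], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 6],
}

def reachable(graph, start, dead=()):
    """CPUs reachable from `start` by BFS, skipping failed CPUs in `dead`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in seen and nbr not in dead:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print(len(reachable(links, 0)))            # 8: all CPUs connected when healthy
print(len(reachable(links, 0, dead={1})))  # 7: survivors still reach each other
```

In this particular sketch the survivors stay connected after one failure, but all their traffic has to reroute through fewer links; whether a real Sledgehammer box degrades that gracefully is exactly the open question in the post above.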
-
Originally posted by Wombat
AMD released a document some months ago that had illustrations of how multi-CPU setups would be accomplished with Sledges. The biggest diagram they had was for an 8-way box. I don't have a link to the diagram, but IIRC, 4 of the processors couldn't even link to anything except other processors, and the other 4 only had one spare link. Also, with the bridges on the die with the processor, AMD hasn't said anything about redundancy or hot maintenance. Losing just one processor has the potential to be disastrous for the whole system. If AMD really had good ways of avoiding such problems with their design, they probably would have been boasting about them for a long time already. The silence is worrisome.
-
When you start worrying about processor failure (which is exceptionally rare in any case), you really need to be looking at mainframe territory.
The CPU redundancy in mainframes is simply amazing. It isn't just a bolted-on system; the CPUs are designed from the ground up to be fault tolerant. Whenever a CPU detects that it has failed (or is failing), it moves its state to a hot spare (or the spare loads it from a checkpoint), which continues processing in its place.
The mainframe then calls home and reports the failure, and some time later a replacement CPU arrives and the bewildered administrators (WTF is this guy doing at our door with this big-ass CPU?) simply hot-swap the dead CPU for the new one.
This is what I have heard anyway, since I haven't played with mainframes. It sounds right, though.
But even if my blurb about mainframes is not 100% accurate, without actually being designed from the ground up to be fault tolerant (which requires extra work and a performance hit to the CPU), I can't see any Intel or AMD x86 multiprocessor system recovering cleanly from a proc failure.
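The checkpoint-and-hot-spare scheme described above can be sketched as a toy simulation (all class and field names here are invented; real mainframe firmware obviously does this in hardware and microcode, not Python):

```python
# Toy sketch of checkpoint-and-failover, loosely modeled on the mainframe
# behavior described above; all names and structures are invented.
class CPU:
    def __init__(self, name):
        self.name = name
        self.state = {}        # architectural state: registers, progress, ...
        self.failed = False

    def checkpoint(self):
        return dict(self.state)  # snapshot that a spare can resume from

def failover(active, spare, last_checkpoint):
    """Move work from a failed CPU onto a hot spare, resuming at a checkpoint."""
    assert active.failed, "failover only runs after a detected fault"
    spare.state = dict(last_checkpoint)
    return spare  # the spare continues processing in the failed CPU's place

cpu0, spare0 = CPU("cpu0"), CPU("spare0")
cpu0.state = {"pc": 1024, "acc": 7}   # some in-flight work
snap = cpu0.checkpoint()              # periodic checkpoint
cpu0.failed = True                    # fault detected
survivor = failover(cpu0, spare0, snap)
print(survivor.state)                 # {'pc': 1024, 'acc': 7}
```

The point of the sketch is the division of labor: the checkpoint is cheap and continuous, and the expensive part (detecting the fault and trusting the checkpoint) is exactly what commodity x86 parts weren't designed to do.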
-
Yep, it's pretty much right. That's one of the reasons big boxes still stick to SCSI as well: the drives are so easily hot-swapped.
CPU failure is fairly common in large systems. The CPU may not literally blow up, but it may have to shut down due to overheating; things that *cause* CPUs to shut down or lock up are fairly common.
-
Gee,
I wonder if they have the Compaq view on heatsinks.