
Network performance


  • Network performance

    I have recently set up a 100 Mbit network, replacing all my crap coaxial cable and Realtek 8029 cards with UTP, a small cheap switch, and Intel 8255x NICs.

    I am wondering if getting around 7 MB per second is the expected performance between my Athlon XP 2000+ and my older Duron 700. This is with Windows file sharing under XP (both client and server). Under Linux, using FTP, I was able to get around 9-10 MB per second.

    They are both using Intel EtherExpress PRO/100 network cards (one has an 82557 chip, the other an 82558).

    Also, if anyone has any tips for getting better performance, I am all ears.
    80% of people think I should be in a Mental Institute

  • #2
    That's not a bad rate. 100 Mbit = 12.5 MB/sec max, including packet headers and all that. So the FTP numbers aren't too far from the best real-world performance.
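
    A quick back-of-the-envelope sketch, in Python, of where that headroom goes. The frame sizes are standard Ethernet/IP/TCP figures, not anything measured on this LAN:

      # Rough ceiling for TCP payload over 100 Mbit Ethernet.
      # A full-size frame carries 1460 bytes of TCP payload in
      # 1518 bytes on the wire (14 Ethernet + 20 IP + 20 TCP + 4 FCS),
      # plus an 8-byte preamble and a 12-byte inter-frame gap.
      LINE_RATE = 100_000_000 / 8        # 12.5 MB/s raw
      WIRE_BYTES = 1518 + 8 + 12         # bytes per frame on the wire
      PAYLOAD = 1460

      efficiency = PAYLOAD / WIRE_BYTES  # ~0.95
      print(f"max TCP payload: {LINE_RATE * efficiency / 1e6:.1f} MB/s")
      # -> about 11.9 MB/s, so 9-10 MB/s over FTP is close to the wall
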
    Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



    • #3
      7MB is pretty good. Better than what you could claim with any Realtek 100MBps NIC.

      Intel and 3com NICs usually have a rate of about 75-80 MBps. Realtek is more like 30-40 MBps...



      • #4
        Originally posted by Kurt
        7MB is pretty good. Better than what you could claim with any Realtek 100MBps NIC.

        Intel and 3com NICs usually have a rate of about 75-80 MBps. Realtek is more like 30-40 MBps...
        Watch your B's and your b's.
        Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



        • #5
          Just tested, and I averaged 9500-10000 KB/s between two 3Com cards.
          If there's artificial intelligence, there's bound to be some artificial stupidity.

          Jeremy Clarkson "806 brake horsepower..and that on that limp wrist faerie liquid the Americans call petrol, if you run it on the more explosive jungle juice we have in Europe you'd be getting 850 brake horsepower..."



          • #6
            If I open two connections between the two computers, I can transfer at full speed. It seems to be something inside Windows.

            The first connection brings network utilization to about 70%; the second pushes it past 90%.
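
            A plausible sketch of why, in Python arithmetic. The 2 ms round trip is an assumed LAN figure and 17520 bytes is XP's documented default receive window; neither was measured here. One TCP connection can never have more than its window in flight per round trip, so a second connection roughly doubles the in-flight data:

              # Single-connection ceiling = TCP window / round-trip time.
              window = 17520     # bytes; XP's default receive window
              rtt = 0.002        # seconds; assumed LAN round trip
              wire_max = 11.9    # MB/s; payload ceiling of 100 Mbit

              per_conn = window / rtt / 1e6   # ~8.8 MB/s, ~70% of the line
              print(f"one connection:  {per_conn:.1f} MB/s")
              print(f"two connections: {min(2 * per_conn, wire_max):.1f} MB/s")
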
            80% of people think I should be in a Mental Institute



            • #7
              Turn off QoS.



              • #8
                Originally posted by Marshmallowman
                Turn off QoS.
                I have done that (set both computers to use only 1% reserved bandwidth)

                Hasn't made any difference.
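
                For reference, a sketch of making that same change without gpedit, using Python's winreg. The Psched key and NonBestEffortLimit value are the documented locations for XP's "Limit reservable bandwidth" policy, but treat this as an assumption and back up the registry first:

                  import winreg

                  # Equivalent of gpedit's "Limit reservable bandwidth":
                  # NonBestEffortLimit holds the percentage QoS may
                  # reserve. 1 mirrors the 1% setting mentioned above.
                  # Needs admin rights.
                  PATH = r"SOFTWARE\Policies\Microsoft\Windows\Psched"

                  key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, PATH)
                  winreg.SetValueEx(key, "NonBestEffortLimit", 0,
                                    winreg.REG_DWORD, 1)
                  winreg.CloseKey(key)
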
                80% of people think I should be in a Mental Institute



                • #9
                  Originally posted by Wombat
                  Watch your B's and your b's.
                  ooops

                  they're all "b" of course



                  • #10
                    QoS is application-specific: it really doesn't do anything unless a program is smart enough to use the API, and most of the apps I have seen that use it cost into the thousands of dollars (traffic shaping/routing software).

                    12.5 MB/sec is the theoretical maximum throughput for 100 Mbps; the type of data makes a big difference in what you actually get.

                    Most 32-bit bus-mastering 10/100 PCI adapters work fairly well. The PCI bus itself is not much of a factor: a 100 Mbps network connection takes just under 10% of the theoretical PCI bus bandwidth.

                    The hard disk subsystem tends to be the real limiting factor. Most IDE disks have a hard time sustaining speeds above 10 MB per second, especially with non-sequential data.

                    Unfortunately for most Athlons, this is where the PCI bus does come into play: the older VIA chipsets have both poor PCI throughput and poor latency, and the bus gets pretty heavily loaded whenever an IDE adapter and a 100 Mbps network device are using it at the same time.

                    Most benchmarks I have seen put VIA's 686 southbridge at about 75-80% of the 133 MB/s maximum of the 32-bit/33 MHz PCI spec.

                    So your 7-9 MB/sec is about right, and it probably doesn't have anything to do with your network hardware: it's probably your hard disks.
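
                    The bus arithmetic above, spelled out in Python (the 75-80% figure is the benchmark range just quoted, not a new measurement):

                      # Shares of the 32-bit/33 MHz PCI bus (133 MB/s spec max).
                      pci_spec = 133.0            # MB/s
                      nic = 12.5                  # MB/s; a 100 Mbps NIC flat out
                      via_686 = pci_spec * 0.775  # midpoint of the 75-80% range

                      print(f"NIC share of the spec bus: {nic / pci_spec:.1%}")  # ~9.4%
                      print(f"VIA 686 effective ceiling: {via_686:.0f} MB/s")    # ~103
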
                    Hey, Donny! We got us a German who wants to die for his country... Oblige him. - Lt. Aldo Raine



                    • #11
                      I have worked out where the performance bottlenecks are in the network:

                      1) Like you said, the hard drives.
                      2) Network buffer sizes. If I run an FTP server, tweak the buffer size (8191 bytes) in the server, then direct the client computer to dump the data into the bitbucket, I can reach 90% network utilization (about 10 MB per sec) with a single connection under Windows. Unfortunately, I haven't been able to tweak Windows file sharing to offer the same sort of performance.

                      Oh well, I guess I will just have to be happy with the performance as it is (it's pretty damned good, actually).
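
                      For anyone who wants to reproduce the bitbucket test without an FTP server, a minimal Python sketch. The address is a placeholder, and the 8 KB socket buffer stands in for the FTP buffer tweak above:

                        import socket
                        import time

                        # Raw-socket throughput test: one box sends zeros,
                        # the other reads and discards them (the bitbucket).
                        HOST, PORT = "192.168.0.2", 5001   # placeholder address
                        BUFSIZE = 8192                     # cf. the 8191-byte tweak
                        TOTAL = 100 * 1024 * 1024          # send 100 MB

                        def sink():                        # run on the receiver
                            srv = socket.socket()
                            srv.bind(("", PORT))
                            srv.listen(1)
                            conn, _ = srv.accept()
                            while conn.recv(65536):        # read and throw away
                                pass

                        def blast():                       # run on the sender
                            s = socket.socket()
                            # The socket buffer plays the same role as the FTP
                            # server's buffer: too small and the pipe never fills.
                            s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFSIZE)
                            s.connect((HOST, PORT))
                            chunk = b"\0" * BUFSIZE
                            sent, t0 = 0, time.time()
                            while sent < TOTAL:
                                s.sendall(chunk)
                                sent += len(chunk)
                            s.close()
                            print(f"{sent / (time.time() - t0) / 1e6:.1f} MB/s")
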
                      80% of people think I should be in a Mental Institute



                      • #12
                        2) is linked to 1): the less the FTP server needs to access the HDD, the better your transfer speed.



                        • #13
                          Yeah,

                          I increased the TCP window size to about 64K, and now the buffer problem isn't very serious. So that seems to be the main fix!

                          Even with non-optimal buffering, transfer speed is 9 MB per sec.

                          Windows networking (SMB) didn't get much improvement from the increased window size. Oh well.
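
                          For reference, the window-size change can be made permanent on XP through the registry. A sketch with Python's winreg; TcpWindowSize under Tcpip\Parameters is the documented XP value, but it wasn't verified here, and it needs admin rights plus a reboot, so back up the registry first:

                            import winreg

                            # Set XP's TCP receive window to 64 KB (0xFFFF),
                            # the documented TcpWindowSize value. Takes effect
                            # after a reboot; experiment at your own risk.
                            PATH = (r"SYSTEM\CurrentControlSet"
                                    r"\Services\Tcpip\Parameters")

                            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH,
                                                0, winreg.KEY_SET_VALUE) as key:
                                winreg.SetValueEx(key, "TcpWindowSize", 0,
                                                  winreg.REG_DWORD, 0xFFFF)
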
                          80% of people think I should be in a Mental Institute



                          • #14
                            There you go... beware: some NICs don't like having their buffers altered (my Adaptecs sure didn't).

                            I did a little speed test on my LAN with a pair of Intel NICs, transferring between two RAID arrays... averaged 11+ MB/s, but both machines were duallies, and it was a 10K RPM SCSI3-to-IDE RAID transfer. I previously recorded the same speeds with 64-bit Adaptec NICs on the same machines (running in the 64-bit slots).
                            Hey, Donny! We got us a German who wants to die for his country... Oblige him. - Lt. Aldo Raine



                            • #15
                              Originally posted by MultimediaMan
                              There you go... beware: some NICs don't like having their buffers altered (my Adaptecs sure didn't).

                              I did a little speed test on my LAN with a pair of Intel NICs, transferring between two RAID arrays... averaged 11+ MB/s, but both machines were duallies, and it was a 10K RPM SCSI3-to-IDE RAID transfer. I previously recorded the same speeds with 64-bit Adaptec NICs on the same machines (running in the 64-bit slots).
                              Was this Windows file sharing, or FTP/HTTP?

                              Anyway, thanks for all the information.
                              80% of people think I should be in a Mental Institute

