SCSI termination question

  • #46
    Thanks for all your research!

    Adaptec mentions (on the link you supplied)
    -
    SCSI-2 Ultra Wide, 68-pin - 1.5 Meters with four or more devices on the SCSI chain or 3 Meters for three or less devices on the SCSI chain.
    -

    The other sites claim 3 metres for four or less devices (but perhaps Adaptec isn't counting the host adapter).


    While I do appreciate your research - I was merely trying to illustrate that SCSI is not always as well defined as it claims to be (one of the reasons why I was wrong about Patrick's cable length).
    You could be right that the 16-device UW support is due to a change of standards, but few sites (only the manufacturers' own websites) seem to mention this.


    Viewing these specs, I now have another question:
    Are there any tools available that allow one to view the bus usage? I know that, judging by the specs, my system has the ability to max out the UW bus; the two LVD drives present are:
    IBM : http://www.storage.ibm.com/hdd/ultra/36lzxdata.htm
    (Sustained data rate 21.7 - 36.1 MB/sec)
    Quantum:

    (Sustained throughput 18 to 26 MB/sec)

    Now that I look at it, it seems more dramatic than I thought: this is on a 40 MB/s bus..... The drives weren't added at the same time: when I bought the Quantum Atlas 10K, I already had the 2940UW in my system (so I didn't see the need to buy a new controller). When I added the IBM, I just hooked it up to the same controller; I figured I'd upgrade the rest soon afterwards and that maxing out the bus wasn't that likely. But now that I'm increasingly using software such as Photoshop, Visual Studio, ..., maxing out the bus doesn't seem so far-fetched....
    (an upgrade is coming this summer ! )
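    Just as a back-of-the-envelope check (using only the spec-sheet figures quoted above, nothing measured), a few lines of Python show how easily these two drives could exceed the bus:

# Rough saturation check: published sustained rates vs. the UW bus limit.
# Spec-sheet numbers only; real-world throughput will differ.
UW_BUS_MB_S = 40.0                       # Ultra Wide SCSI bus bandwidth

drives = {
    "IBM 36LZX":         (21.7, 36.1),   # sustained MB/s (min, max)
    "Quantum Atlas 10K": (18.0, 26.0),
}

combined_min = sum(lo for lo, hi in drives.values())
combined_max = sum(hi for lo, hi in drives.values())

print(f"combined sustained demand: {combined_min:.1f} - {combined_max:.1f} MB/s")
print(f"UW bus capacity:           {UW_BUS_MB_S:.1f} MB/s")
if combined_min > UW_BUS_MB_S:
    print("even the worst-case combined rate exceeds the bus")
elif combined_max > UW_BUS_MB_S:
    print("the bus can be saturated when both drives run near their peak rates")
else:
    print("the bus still has headroom even at peak rates")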


    Jörg
    pixar
    Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



    • #47
      Originally posted by Verstraete J
      ... The other sites claim 3 metres for four or less devices (but perhaps Adaptec isn't counting the host adapter).
      That's right ... Adaptec didn't include the host controller.

      ... While I do appreciate your research - I was merely trying to illustrate that SCSI is not always as well defined as it claims to be (one of the reasons why I was wrong about Patrick's cable length). ...
      It's not the SCSI standard that's ill-defined but the web sites and manufacturer documentation that don't completely and accurately reflect the standard. That's why I wanted to go to the ultimate authority, the standard, to flush out this info.

      ... You could be right that the 16-device UW support is due to a change of standards, but few sites (only the manufacturers' own websites) seem to mention this. ...
      The UW 16-device support isn't due to a change in the standard but to an implementation of the standard that exceeds its minimum requirements (this ability is documented in the standard, which is also echoed in your Seagate drive documentation et al.). The web sites etc. mostly only reflect the minimum requirements.

      ... Viewing these specs, I now have another question
      Are there any tools available that allow one to view the bus usage? I know that, judging by the specs, my system has the ability to max out the UW bus
      The spec wouldn't cover this except to document commands that a tool might utilize to measure bus utilization. I don't know of a tool off-hand that would report this except for the old trusty SCSI bus usage LED that is commonly provided via your SCSI controller. The OS and applications have to be able to take advantage of overlapped I/O and such and you need to be driving several SCSI devices simultaneously before you need to be concerned about saturating the bus. If your LED is solidly lit for extended periods of time then you may want to consider upgrading the controller ... after all, your controller is two generations behind your drives.
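      Something along these lines ought to work on a Linux box that has /proc/diskstats (newer kernels): sample the per-device counters and sum the throughput of the drives sharing the bus. Just a rough sketch of the idea; it assumes the usual field layout of that file and only shows the host's view, not what a real bus analyzer would see.

#!/usr/bin/env python3
# Crude bus-usage proxy: sample /proc/diskstats once per second and sum the
# read+write throughput of the drives sharing the SCSI bus. Ctrl-C to stop.
import time

DEVICES = ("sda", "sdb")   # the two drives on the UW bus; adjust as needed
SECTOR = 512               # /proc/diskstats counts 512-byte sectors
BUS_MB_S = 40.0            # Ultra Wide bus capacity

def sectors(dev):
    """Return total sectors read + written for one device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[5]) + int(fields[9])
    return 0

while True:
    before = {d: sectors(d) for d in DEVICES}
    time.sleep(1.0)
    after = {d: sectors(d) for d in DEVICES}
    total_mb = sum(after[d] - before[d] for d in DEVICES) * SECTOR / 1e6
    print(f"combined throughput ~{total_mb:5.1f} MB/s "
          f"({100 * total_mb / BUS_MB_S:3.0f}% of a {BUS_MB_S:.0f} MB/s bus)")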

      P.S. Look at pg. 25 of your Seagate drive installation guide for the 16 device info.
      Last edited by xortam; 14 January 2002, 23:54.
      The world just changed, Sep. 11, 2001



      • #48
        The UW 16-device support isn't due to a change in the standard but to an implementation of the standard that exceeds its minimum requirements (this ability is documented in the standard, which is also echoed in your Seagate drive documentation et al.).
        Yes, the Seagate manual does clearly state this (I have a Quantum and an IBM, and neither of their manuals describes it so accurately). Now I completely understand (finally).


        The spec wouldn't cover this except to document commands that a tool might utilize to measure bus utilization. I don't know of a tool off-hand that would report this except for the old trusty SCSI bus usage LED that is commonly provided via your SCSI controller.
        I wasn't referring to the SCSI spec, just to the general spec sheets of my drives (the sum of both transfer rates exceeds the bus transfer rate -> the bus can be maxed out). The LED would indeed indicate whether the bus is maxed out, but it gives no idea of how much is lost (w.r.t. transfer times); is the bus maxed out because 41 MB/s is needed, or because 53 MB/s is needed? (The latter obviously has a greater impact on overall performance.)

        Another idea I received (I asked our sysadmin at work) is to create a software RAID using Linux (which is no problem, Linux is installed as well; I just need to clear 2 partitions). It is then possible to benchmark the software RAID and (prior to creating it) benchmark the two partitions separately. That ought to give an idea of the actual transfer rates.

        If your LED is solidly lit for extended periods of time then you may want to consider upgrading the controller ... after all, your controller is two generations behind your drives.
        Yes, well, the system just "grew". It started with a single CD-RW drive (Plextor) on that controller. After a hard-disk crash (of an IDE drive), I opted for a SCSI disk. And some time after that, I needed more drive space and loved the SCSI speed, so I bought another... I will definitely upgrade the system sometime this summer, but perhaps I could buy a controller in advance...


        Jörg
        pixar
        Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



        • #49
          Your throughput would not improve by modifying your configuration to use RAID (HW or SW) if the bus is already saturated. The only way that you can measure the maximum transfer rate of your drive configuration is if you eliminate the SCSI bus as a bottleneck. Do you see extended periods of time where the bus is busy as indicated by a steady LED light?

          You want a tool that will indicate how much additional throughput your SCSI devices will deliver ("how much is lost") if you alleviate the SCSI bus saturation? The tool would have to know the performance metrics of those devices and simulate your system from a proposed model. There may be such tools available but I'm not aware of any off-hand. Personally, I would just develop my own model and use a simulation language to predict the performance.
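          As a trivial example of what I mean by a model (made-up demand figures, and a naive assumption that the bus hands out its capacity proportionally when oversubscribed; a real model would account for seeks, command overhead and so on):

# Toy contention model: what each drive would get from buses of various widths.
def delivered(demands_mb_s, bus_mb_s):
    total = sum(demands_mb_s)
    if total <= bus_mb_s:
        return list(demands_mb_s)        # no contention: everyone gets their demand
    share = bus_mb_s / total             # oversubscribed: throttle proportionally
    return [d * share for d in demands_mb_s]

demands = [30.0, 22.0]                   # hypothetical sustained demands (MB/s)
for bus in (40.0, 80.0, 160.0):          # UW, U2W, U160 bus widths
    got = delivered(demands, bus)
    rates = ", ".join(f"{g:.1f}" for g in got)
    print(f"{bus:5.0f} MB/s bus -> {rates}  (total {sum(got):.1f} MB/s)")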
          The world just changed, Sep. 11, 2001



          • #50
            Your throughput would not improve by modifying your configuration to use RAID (HW or SW) if the bus is already saturated.
            I know, but the idea was just to be able to perform benchmarks with both drives working at the same time. Comparing these results with the benchmarks of the drives individually could give an idea of how much the bottleneck is slowing the system down.

            Do you see extended periods of time where the bus is busy as indicated by a steady LED light?
            Due to some cabling problems (I purchased a new ATX case for more adequate cooling, and the new LED cable is too short), I still haven't got round to connecting the controller's LED to the case LED. But I'll open the case, run some programs I often use and keep an eye on the LED.

            Regardless, I think I'll just go out and buy a new controller; whether I do this now or in a couple of months won't make that big a difference in price (I'm not that big a fan of mainboards with on-board SCSI).
            The 29160N and the 19160 seem the "cheapest" alternatives:



            The difference between the two controllers is unclear to me. It appears to be only in the drivers:


            What would be best: connect the Plextor 4220 to the new controller, or keep the 2940UW in the system and connect the Plextor to it?


            Jörg
            Last edited by VJ; 16 January 2002, 01:29.
            pixar
            Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



            • #51
              RAID 0 (which really isn't RAID strictly speaking) will stripe the data among multiple drives but it doesn't yield a linear increase in throughput. It is useful for increasing your throughput where there is still headroom on the bus but two drives won't double your transfer rate. RAID is also normally implemented by using identical drives or drives with the same performance metrics. The slowest drive in the array will determine the performance as the system will need to wait on this drive for each interleaved stripe that it accesses.
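              A toy illustration of the slowest-member point (hypothetical rates, ignoring all overhead):

# RAID 0 stripe reads complete only when every member has delivered its chunk,
# so the effective array rate is roughly (members) * (slowest member's rate),
# capped by the bus. Hypothetical numbers for illustration only.
def raid0_rate(drive_rates_mb_s, bus_mb_s):
    slowest = min(drive_rates_mb_s)          # slowest member sets the pace
    return min(slowest * len(drive_rates_mb_s), bus_mb_s)

print(raid0_rate([27.0, 24.0], 40.0))  # mismatched pair on a UW bus  -> 40.0 (bus-capped)
print(raid0_rate([27.0, 24.0], 80.0))  # same pair on a wider bus     -> 48.0, not 27 + 24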

              You can't simply sum the transfer rates of the individual drives and expect this to be your total throughput even while driving them simultaneously in a non-RAID configuration with a non-saturated bus. There is overhead involved throughout the system in handling the overlapped I/O of multiple devices.

              All said, I doubt that you're saturating the bus with just two drives when using your applications. Let us know how that LED is doing.

              I don't have any problem using on-board SCSI as it's a much cheaper solution and I keep my MBs for some time. I use a P2B-S and I was also running RAID 0 under Win98 with my Mylex RAID controller (unfortunately, they don't support W2K so I'm running SW RAID now).

              I think Adaptec controllers are way overpriced but they do offer some of the best compatibility due to their market size. The 19160 doesn't look to be very flexible in regards to its OS support, but it would save you some money over the 29160N if you didn't need that additional support. I haven't researched SCSI equipment in the last 2 1/2 years so I don't know off-hand who has the best solution for you right now.

              As far as device placement with multiple controllers goes ... some SCSI environments can transfer data directly between devices without the CPU being involved. On the other hand, you might benefit from separating the traffic among multiple buses and controllers in other applications. The easiest thing for you to do is run some experiments using your target OS and apps and benchmark the performance.
              The world just changed, Sep. 11, 2001



              • #52
                You can't simply sum the transfer rates of the individual drives and expect this to be your total throughput even while driving them simultaneously in a non-RAID configuration with a non-saturated bus. There is overhead involved throughout the system in handling the overlapped I/O of multiple devices.
                That's true, but it might give some idea. However, I figured it was not necessary to set up a SW RAID for test purposes: using Linux, I could benchmark both disks at the same time and see how they perform. If the bus were saturated, it would show up in their transfer rates when the benchmarks were run simultaneously. Using the commands "hdparm -t -T /dev/sda2" and "hdparm -t -T /dev/sdb7" (see below), I got the following results:

                when tested individually:
                - Quantum, ID0, sda2 : ~24 MB/s
                - IBM, ID1, sdb7 : ~27 MB/s

                when the tests were run simultaneously:
                - Quantum : ~17 MB/s
                - IBM : ~17 MB/s

                (I could not test the entire disks, as some of them contain W2K NTFS-partitions that are not recognised by my Linux)

                So apparently I can theoretically lose up to 7 MB/s on the first disk and 10 MB/s on the second (the drives seem to max out at a combined rate of 34 MB/s, on a 40 MB/s bus). The remaining question - as you state - is how much of the time I need the maximum transfer rates of both drives simultaneously; and as you mention, this can be seen (more or less) on the LED. I'll keep an eye on that LED (connect it to the case LED for starters); if necessary, I'll have to go for the 29160N as I also need to run Linux (the 19160 has no official Linux support - although the distributions themselves might add it).
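                For what it's worth, instead of juggling two shells I could also kick off both runs at exactly the same moment with a small script like this (it needs root and assumes hdparm's usual "... = NN.NN MB/sec" output line):

#!/usr/bin/env python3
# Start "hdparm -t" on both partitions simultaneously and report the rates.
import re
import subprocess
import threading

DEVICES = ["/dev/sda2", "/dev/sdb7"]
results = {}

def bench(dev):
    out = subprocess.run(["hdparm", "-t", dev],
                         capture_output=True, text=True).stdout
    m = re.search(r"=\s*([\d.]+)\s*MB/sec", out)
    results[dev] = float(m.group(1)) if m else None

threads = [threading.Thread(target=bench, args=(d,)) for d in DEVICES]
for t in threads:
    t.start()                 # both benchmarks start (nearly) at the same time
for t in threads:
    t.join()

for dev, rate in results.items():
    print(f"{dev}: {rate} MB/s")
print(f"combined: {sum(r for r in results.values() if r):.1f} MB/s")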

                Adaptec is indeed expensive, but it is about the only brand that can be "easily" obtained here in Belgium. And the bulk-versions are somewhat cheaper...

                I don't have any problem using on-board SCSI as it's a much cheaper solution and I keep my MBs for some time.
                True, but I currently don't know which mainboard (or even CPU: P4 or AMD) I'll upgrade to. I'd like to have a free choice of mainboards; and depending on the system, I don't keep mainboards for that long, which makes a separate controller cheaper in the long run.


                Jörg

                documentation on hdparm:
                -T   Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test. If the -t flag is also specified, then a correction factor based on the outcome of -T will be incorporated into the result reported for the -t operation.

                -t   Perform timings of device reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead. To ensure accurate measurements, the buffer cache is flushed during the processing of -t using the BLKFLSBUF ioctl. If the -T flag is also specified, then a correction factor based on the outcome of -T will be incorporated into the result reported for the -t operation.
                pixar
                Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                • #53
                  I think you understand the limited value of such synthetic benchmarks. Run some real application benchmarks with your current controller and the new one and you probably won't see much difference. Of course once you buy a bigger bus you'll probably take advantage of it by adding more devices and pushing your system harder. Happy shopping!
                  The world just changed, Sep. 11, 2001



                  • #54
                    I think you understand the limited value of such synthetic benchmarks. Run some real application benchmarks with your current controller and the new one and you probably won't see much difference.
                    You're right, but this was just to give the maximum I could theoretically lose. Perhaps I'll wait a bit longer to buy a new controller (and consider on-board SCSI for my upgrade). I'll put the purchase on hold for now, run some more benchmarks and keep an eye on the LED.

                    Of course once you buy a bigger bus you'll probably take advantage of it by adding more devices and pushing your system harder. Happy shopping!
                    That's true, although I had to opt for an IDE DVD drive (the only SCSI models were a Pioneer and a Toshiba, both outperformed by IDE models; the Toshiba isn't even in production any more).

                    Thanks for all the info!


                    Jörg
                    pixar
                    Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)



                    • #55
                      You're welcome VJ.
                      The world just changed, Sep. 11, 2001

