One 7200 RPM SATA II HDD or Two 7200 RPM Hard Drives in Striped Raid Array?


  • #31
    You could take a look here for some more info:

    http://faq.storagereview.com/tiki-in...leDriveVsRaid0

    To summarize, RAID 0 generally offers minimal performance gains, a significantly increased risk of data loss, and greater cost. That said, it lets you have one large partition using the combined space of your identical drives, and there are situations where the benefits outweigh the disadvantages. It is your computer: the choice is up to you.
    "Women don't want to hear a man's opinion, they just want to hear their opinion in a deeper voice."

    Comment


    • #32
      Despite all the comments you all may have and all the articles you all may throw into this thread, I still believe that RAID 0 offers a performance increase over a single-drive setup. The difference in performance is noticeable, because I've actually tried it and I've seen the difference. Ultimately, it's up to you, darthJones. But if you can get 2 identical HDDs, then try it out for a while, see for yourself, and come to your own conclusions.
      Titanium is the new bling!
      (you heard from me first!)

      Comment


      • #33
        Originally posted by Rakido
        You could take a look here for some more info:

        http://faq.storagereview.com/tiki-in...leDriveVsRaid0

        Beat me to it.

        I think Anand has posted results to that effect as well.
        Chief Lemon Buyer no more Linux sucks but not as much
        Weather nut and sad git.

        My Weather Page

        Comment


        • #34
          Originally posted by ZokesPro
          You really firmly believe that raid 0 doesn't offer much performance increase for regular desktop users?

          I think it makes all the difference, no matter what you use it for. And with today's HDD prices and onboard controllers, why not?
          Average desktop users do not sustain high levels of transfer. What they DO do is access their disks for small pieces of information - and that depends on access time, not raw bandwidth. In fact, with RAID 0 seeking across two disks, they're likely to get that first cache fetch slower.
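
          As a rough back-of-the-envelope sketch of that point (every number below is an assumption for illustration, not a measurement of any particular drive): one small read pays almost entirely for the access, so even an ideal 2x sequential speedup barely changes it.

          ```python
          # Rough model: time for one read = average access time + transfer time.
          # All figures are illustrative assumptions, not benchmark results.
          ACCESS_MS = 13.0            # assumed average access time of a 7200 RPM drive
          SINGLE_MBS = 55.0           # assumed sequential rate of a single drive
          RAID0_MBS = 2 * SINGLE_MBS  # ideal two-drive stripe (best case, no overhead)

          def read_ms(size_kb, seq_mbs):
              """Milliseconds to service one read of size_kb at the given rate."""
              return ACCESS_MS + (size_kb / 1024.0) / seq_mbs * 1000.0

          for size_kb in (4, 64, 1024, 65536):
              single, striped = read_ms(size_kb, SINGLE_MBS), read_ms(size_kb, RAID0_MBS)
              print(f"{size_kb:>6} KB: single {single:8.1f} ms, "
                    f"RAID 0 {striped:8.1f} ms, speedup {single / striped:4.2f}x")
          ```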

          Plus, "today's hard drive prices" are still $$$, and having to buy 2x of a drive every time you upgrade can be pricey. If you want speed, you're better off spending that money on a 10k HD or more RAM. If you want capacity, you should just use a JBOD or RAID 5 or something.

          If the average user has two hard drives, the smart thing to do is to keep them totally separate, and have your swap space on the fastest part of one of them, and the most-accessed files on the other disk, so it can be accessed concurrently. This will be MUCH faster than a RAID 0 for 99% of users (Desktop Video guys excepted).

          Also, onboard controllers are a BAD idea. RAID firmware has a bug? You'd better hope the motherboard maker releases a BIOS update with both good board code and good RAID controller code. What happens if you want to change motherboards? Goodbye RAID. What if the RAID controller fails? You have to get a new motherboard, one with the same RAID chipset (and maybe the same RAID firmware level).

          If one drive fails? Goodbye ALL of your data. Slower boot times, too.

          There are plenty of reasons most users should not use RAID 0. There are very few reasons why they should.
          Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.

          Comment


          • #35
            Well, you're right about that, but you can always go with 2x 80GB drives; they aren't all that expensive. Anyway, you make a good argument. I still like RAID 0, though.
            Titanium is the new bling!
            (you heard from me first!)

            Comment


            • #36
              @Wombat:
              Isn't SAS a superset of SATA, allowing it to run both SAS and SATA devices? AFAIK, you can't attach SAS disks to a SATA controller, but you can attach SATA disks to an SAS controller. Correct me if I'm wrong.

              About the 7K500: that drive doesn't look too "hot" unless you really need the storage. I wouldn't be surprised if it were slower than the 7K250; I don't suppose 5 platters actually help get the access times down. The 7K250 is still an excellent drive and the 7K400 is very close. Besides, the 7K500 is 50% (!) more expensive than the 7K400 and three times as expensive as the 7K250 250GB model. I hope this is just the intro price... Who's up for 600GB?

              @ZokesPro:
              There's another potential big problem with RAID 0: when your drives deliver more data than the controller can take, your overall throughput ends up lower than with a single drive. What actually happens is that the controller stops accepting the data while your disks keep on spinning, so they need to be repositioned to access the rest of the data. That SVCKS.

              The max bandwidth available depends on the controller and is not always related to its ATA type (33/66/100/133) - for example, VIA's own ATA66 implementation wouldn't go above ATA33 speed. If the controller is linked via PCI, a safe bet is about 90MB/sec. A simple RAID 0 array with two modern drives can easily go above that value (burst speed from cache, or even sequential transfer).
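
              A minimal sketch of that ceiling, with assumed numbers (a ~90MB/sec usable PCI bus and hypothetical drives doing ~60MB/sec each). Note this is the optimistic bound - the repositioning stalls described above can push real throughput even below a single drive:

              ```python
              # Best-case sequential rate of a stripe behind one PCI-attached controller.
              # Bus and per-drive figures are assumptions for illustration only.
              PCI_USABLE_MBS = 90.0  # rough usable bandwidth of a 32-bit/33MHz PCI bus
              DRIVE_SEQ_MBS = 60.0   # assumed sequential rate of one modern drive

              def raid0_ceiling(n_drives, drive_mbs=DRIVE_SEQ_MBS, bus_mbs=PCI_USABLE_MBS):
                  """The array can never exceed the bus, no matter how many members it has."""
                  return min(n_drives * drive_mbs, bus_mbs)

              for n in (1, 2, 3, 4):
                  print(f"{n} drive(s): could supply {n * DRIVE_SEQ_MBS:.0f} MB/s, "
                        f"ceiling is {raid0_ceiling(n):.0f} MB/s")
              ```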

              Another problem with RAID 0 is data integrity: two drives double the chance of data corruption. If one drive fails, say bye-bye to ALL your data. If read speed is important and you insist on RAID, you can try RAID 1 (most probably with the same controller).
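
              On the integrity point, the usual back-of-the-envelope: if each drive independently fails with probability p over some period, an n-drive stripe loses everything with probability 1 - (1 - p)^n, which is roughly n*p for small p. A quick sketch (the 3% figure is a placeholder, not a real failure rate):

              ```python
              # Probability that a RAID 0 array loses data = probability that ANY member fails.
              # The per-drive rate below is a placeholder, purely for illustration.
              def array_failure_prob(p_single, n_drives):
                  """1 minus the probability that all n independent drives survive."""
                  return 1.0 - (1.0 - p_single) ** n_drives

              p = 0.03  # assumed per-drive failure probability over some period
              for n in (1, 2, 4):
                  print(f"{n} drive(s): {array_failure_prob(p, n):.2%} chance of losing everything")
              ```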

              Comment


              • #37
                Well, I have two 36GB (10K RPM) Raptors in RAID 0, and all I can say is there isn't much difference from a single drive. Games don't load faster (Far Cry, Doom 3...); the only thing that went up is my bandwidth. The average transfer rate went from ~54 MB/s to ~94 MB/s in HD Tach, and that's pretty much what all modern 7200 RPM drives can do, single vs. RAID (even IDE vs. SATA).
                Where the Raptor wins is access time; that's where the lower loading times, faster archiving/unpacking, etc. come from.

                If you want to be fair, get a 36GB Raptor for your OS, a 74GB Raptor for games and applications you run currently and a big 200GB or above drive for your storage needs.
                Personally what I would get is a 74GB Raptor for OS and games and another big drive for storage.

                Comment


                • #38
                  Originally posted by Kurt
                  @Wombat:
                  Isn't SAS a superset of SATA, allowing it to run both SAS and SATA devices? AFAIK, you can't attach SAS disks to a SATA controller, but you can attach SATA disks to an SAS controller. Correct me if I'm wrong.
                  Can't say. Haven't played with SAS yet.

                  About the 7K500: that drive doesn't look too "hot" unless you really need the storage. I wouldn't be surprised if it were slower than the 7K250; I don't suppose 5 platters actually help get the access times down. The 7K250 is still an excellent drive and the 7K400 is very close. Besides, the 7K500 is 50% (!) more expensive than the 7K400 and three times as expensive as the 7K250 250GB model. I hope this is just the intro price... Who's up for 600GB?
                  Why would access times go up? Five platters instead of four, and I believe the tech is the same. Controller board might be better. Largest storage is always expensive, but people will pay for it because they need the density.
                  Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.

                  Comment


                  • #39
                    I have had quite a few RAID 0 setups; in general they are slightly faster in desktop use... not enough for me to keep using them given the reduced reliability.

                    They are very good for bumping up your sustained throughput. For example, they were essential a few years ago to capture uncompressed (or lightly compressed) video, but nowadays, with faster CPUs and HDDs, that's no longer the case.

                    A big downside of RAID 0 is the extra CPU overhead, which can kill any speed benefit - much more so on IDE/ATA than on SCSI.

                    However, I still have one RAID 0 array I use for room heating and noise production... A mate gave me 4 old 7200 RPM 4GB wide SCSI drives from an old work server.

                    Max interface speed for wide SCSI = 20MB/s.

                    4 old 7200 RPM wide Seagate drives in RAID 0 on a single 2940UW channel = approx. 20MB/s sustained throughput... woohoo.

                    And it scares insects and heats rooms as well

                    Comment


                    • #40
                      Originally posted by Wombat
                      Why would access times go up? Five platters instead of four, and I believe the tech is the same. Controller board might be better.
                      With 5 platters the motor and actuators have more weight to move around. "Usually" it translates into worse access times (depends on the drive mechanics of course). Areal density is the same so I don't suppose they'll have changed the electronics (unless to make it cheaper).

                      Apart from the drive mechanics, there's this trend that bigger drives have a worse access time (you can check that on storagereview). I don't know whether it has to do with drive mechanics or electronics.

                      Largest storage is always expensive, but people will pay for it because they need the density.
                      OEMs sure, not me -at least not at that price point.

                      Comment


                      • #41
                        Originally posted by Kurt
                        With 5 platters the motor and actuators have more weight to move around. "Usually" it translates into worse access times (depends on the drive mechanics of course). Areal density is the same so I don't suppose they'll have changed the electronics (unless to make it cheaper).

                        Apart from the drive mechanics, there's this trend that bigger drives have a worse access time (you can check that on storagereview). I don't know whether it has to do with drive mechanics or electronics.



                        OEMs sure, not me -at least not at that price point.
                        I just took a look at StorageReview, and I didn't see that trend.

                        It's true that the extra platter(s) are a higher load on the motor, but that only affects spin-up time.

                        Extra actuator mass could be a problem, but the heads are so lightweight these days that it shouldn't be an issue. Also, it's only a problem if they leave the head driver coils the same size, which is unlikely - i.e., they'll increase the driver power to compensate for the increased head mass.

                        - Steve

                        Comment


                        • #42
                          For ex.:

                          AVG read service time

                          7K250 250GB 12.1ms
                          7K400 400GB 12.7ms

                          DM+9 160GB 13.0ms
                          DM+9 200GB 13.5ms

                          The kicker: Seagate 7200.8 500GB, 15.0ms.

                          These are drives pulled out of the database that are similar enough (this isn't meant to prove in an absolute fashion that bigger drives have higher access times, but it helps). Since bigger drives do tend to have higher access times in general, I guess the mechanics are not beefed up enough relative to drives with fewer platters - probably for cost reasons.
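
                          For context on what those averages are made of: read service time is roughly average seek plus rotational latency, and the rotational half is fixed by spindle speed (half a revolution on average). A quick sketch - the seek figures are assumptions for illustration; the latency term follows from the RPM:

                          ```python
                          # Read service time ~= average seek + rotational latency.
                          # Seek values are assumed; latency follows from the RPM.
                          def rot_latency_ms(rpm):
                              """Average wait for the sector: half a revolution."""
                              return 0.5 * 60_000.0 / rpm

                          def service_ms(seek_ms, rpm):
                              return seek_ms + rot_latency_ms(rpm)

                          print(f"7200 RPM latency:  {rot_latency_ms(7200):.2f} ms")   # ~4.17
                          print(f"10000 RPM latency: {rot_latency_ms(10000):.2f} ms")  # 3.00
                          print(f"assumed 8.5 ms seek @ 7200:  ~{service_ms(8.5, 7200):.1f} ms")
                          print(f"assumed 4.5 ms seek @ 10000: ~{service_ms(4.5, 10000):.1f} ms")
                          ```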

                          Comment


                          • #43
                            Originally posted by Kurt
                            For ex.:

                            AVG read service time

                            7K250 250GB 12.1ms
                            7K400 400GB 12.7ms

                            DM+9 160GB 13.0ms
                            DM+9 200GB 13.5ms

                            The kicker: Seagate 7200.8 500GB, 15.0ms.

                            These are drives pulled out of the database that are similar enough (this isn't meant to prove in an absolute fashion that bigger drives have higher access times, but it helps). Since bigger drives do tend to have higher access times in general, I guess the mechanics are not beefed up enough relative to drives with fewer platters - probably for cost reasons.
                            You've made no point at all here. You just compared three totally different drive families. What's more, you're saying it's because of more platters!

                            Let's clear up your facts here:
                            The 500GB 7200.8 doesn't exist to be reviewed. Your numbers are for the 400GB - which, by the way, is a 3-platter drive, currently holding the record at 133GB/platter. And its access times are beaten by the 4-platter 7K400.
                            Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.

                            Comment


                            • #44
                              My bad, I misread 500GB.

                              (the 7K400 is not a new design, it's a 7K250 with 5 platters, not 4).

                              It comes down to this:
                              1/ Higher areal density means more work for the electronics and motors.
                              2/ More platters means more work for the motors.

                              Both 1 and 2 slow down the access time.

                              1/ seemed obvious and I forgot to mention it.

                              How would you explain the fact that newer, bigger drives have such high access times? Is 15.0ms progress? Or 16+ms for some Maxtor drives? Shouldn't these values be going down instead?

                              Comment


                              • #45
                                Why? Simple: access time doesn't sell, so it's pointless to invest resources into making it better.

                                Comment
