Intel storage configuration


  • Intel storage configuration

    Hello,

    I'm looking to put together a new computer in summer. I am currently looking at the ASUS Z97-A.


    The computer would be an HTPC/storage computer. I was thinking about the following:
    - M.2 SSD as boot drive
    - RAID-5 on 3-4 of the SATA 6Gb/s ports
    Is it then possible to put an SSD on a remaining SATA port, to act as an automatic cache for the RAID-5? That way, when playing back a movie, it could be cached on the SSD while the disks in the RAID spin down.

    I know from my previous build that I can make it quiet enough even with the hard disks spinning, but it would be nice if they could power down when not needed. Of course I can manually copy the file(s) that will be used from the RAID to another disk, but it would be nice if it happened automatically.

    I know a dedicated NAS would be a more elegant solution, but this computer would be the main one using the files on the storage space, so it seems a bit of overkill to additionally get a RAID-5 NAS.

    Thanks!
    Last edited by VJ; 11 June 2014, 02:39.
    pixar
    Dream as if you'll live forever. Live as if you'll die tomorrow. (James Dean)

  • #2
    Originally posted by VJ View Post
    Hello,

    I'm looking to put together a new computer in summer. I am currently looking at the ASUS Z97-A.


    The computer would be an HTPC/storage computer. I was thinking about the following:
    - M.2 SSD as boot drive
    - RAID-5 on 3-4 of the SATA 6Gb/s ports
    Is it then possible to put an SSD on a remaining SATA port, to act as an automatic cache for the RAID-5? That way, when playing back a movie, it could be cached on the SSD while the disks in the RAID spin down.

    I know from my previous build that I can make it quiet enough even with the hard disks spinning, but it would be nice if they could power down when not needed. Of course I can manually copy the file(s) that will be used from the RAID to another disk, but it would be nice if it happened automatically.

    I know a dedicated NAS would be a more elegant solution, but this computer would be the main one using the files on the storage space, so it seems a bit of overkill to additionally get a RAID-5 NAS.

    Thanks!
    Whatever you do, try to avoid fakeRAID solutions; it's better to use software RAID, which allows access to your data from any motherboard rather than only from a specific combination of hardware and firmware.



    • #3
      Originally posted by dZeus View Post
      Whatever you do, try to avoid fakeRAID solutions; it's better to use software RAID, which allows access to your data from any motherboard rather than only from a specific combination of hardware and firmware.
      Or if you go RAID, buy an LSI (Dell/HP/IBM-branded, from a server) controller. Even if the controller breaks, you can buy the same controller and go from there.



      • #4
        Originally posted by dZeus View Post
        Whatever you do, try to avoid fakeRAID solutions; it's better to use software RAID, which allows access to your data from any motherboard rather than only from a specific combination of hardware and firmware.
        Yes... I currently have a hardware RAID on a Promise card, but the card is PCI-X. This ties the controller to the mainboard, as I won't be able to find a PCI-X mainboard if something goes wrong. So that config has many weak points that may prevent access to the RAID: controller failure, mainboard failure, ... But I was aware of this, and the whole RAID is backed up to a hard disk, with important data also kept in multiple copies on DVDs. With the price of storage, I would probably do the same regardless of the type of RAID I get.

        From what I read, Intel RAID allows you to add an SSD to act as a cache, but I don't think it would cache enough to spin the disks down for long. So if I don't use that cache feature, but e.g. manual copying, I'm not tied to using Intel RAID.

        But how do things change if I dual boot? Can Linux also access the Intel fakeRAID? Or can a Windows software RAID also be accessed under Linux?

        Originally posted by UtwigMU View Post
        Or if you go RAID, buy some LSI (Dell/HP/IBM branded from a server) controller. Even if controller breaks, you can buy same controller and go from there.
        Indeed an option...



        • #5
          Originally posted by VJ View Post
          Yes... I currently have a hardware RAID on a Promise card, but the card is PCI-X. This ties the controller to the mainboard, as I won't be able to find a PCI-X mainboard if something goes wrong. So that config has many weak points that may prevent access to the RAID: controller failure, mainboard failure, ... But I was aware of this, and the whole RAID is backed up to a hard disk, with important data also kept in multiple copies on DVDs. With the price of storage, I would probably do the same regardless of the type of RAID I get.

          From what I read, Intel RAID allows you to add an SSD to act as a cache, but I don't think it would cache enough to spin the disks down for long. So if I don't use that cache feature, but e.g. manual copying, I'm not tied to using Intel RAID.

          But how do things change if I dual boot? Can Linux also access the Intel fakeRAID? Or can a Windows software RAID also be accessed under Linux?



          Indeed an option...
          Why dual boot when you can just abstract away the RAID implementation by running the second OS in a virtual machine on the server?



          • #6
            Originally posted by dZeus View Post
            Why dual boot when you can just abstract away the RAID implementation by running the second OS in a virtual machine on the server?
            The question is more to make sure that I know the full benefits and drawbacks of the different options... E.g. if the OS becomes corrupt, or its disk fails, booting from a Linux DVD is always a nice alternative to have. UtwigMU's solution solves it, as long as the chosen controller has drivers for the OS. So I'm just weighing benefits and drawbacks:

            Hardware RAID
            - separate controller (added cost)
            - hardware dependent: need to find the same controller to access the data if the controller fails
            + OS independent (accessible from another OS if drivers are available)

            Software RAID
            - not OS independent (not accessible if the OS fails)
            + no added cost
            + easily moved to a different system (if it runs the same OS)

            fakeRAID
            + no added cost
            - hardware dependent: need to find the same mainboard (more difficult than a controller)
            ? works on other OSes?

            So I need to work out which benefits I need, and which drawbacks I'm willing to suffer... Any idea if the fakeRAID works under dual boot?



            • #7
              This is all over my head, but might I suggest StableBit DrivePool? It does not support Linux, but everything is written as NTFS files. It does duplication (under Windows), which is a RAID-1 lookalike I guess, but also read-striping (if you have duplication set to at least x2, of course).

              I can imagine that if you would run Linux in a VM under a windows OS that it could be transparent.

              If it can't do SSD caching, you might request this as a feature, and in the way you actually want it. Alex is very willing to upgrade and does so based on user feedback a lot.

              I use it on my WHS2011 server and am very satisfied with it. It protects me against a single drive failure and, whatever happens, I can put the remaining HDD in another NTFS-capable machine and it'll see, read and write everything.

              It will set you back about USD 15, I think, after a 30-day trial period.
              Join MURCs Distributed Computing effort for Rosetta@Home and help fight Alzheimers, Cancer, Mad Cow disease and rising oil prices.
              [...]the pervading principle and abiding test of good breeding is the requirement of a substantial and patent waste of time. - Veblen



              • #8
                It is not RAID... but it might be combined with it. However, I don't have the need for large single volumes... You did give me an idea though; perhaps dZeus can comment:

                Can you read a disk from an Intel fakeRAID 1 on another computer?
                It is just duplication, but I don't know if a disk can still be read as a single disk...

                Because then, rather than going with 3 disks in a RAID 5 config, I could consider going with 2x 2 disks in RAID 1 config. It would give me 2 drive letters, which is not a problem for me. I could use the fakeRAID to make the RAID 1, which would be accessible in both Linux and Windows. And if a single disk can be moved to another system, it also solves the issue of controller failure. It would be slightly more expensive (extra disk), but I could start with just 2 disks and expand as drive space gets cheaper.



                • #9
                  Actually, I do not use DrivePool for large volumes (among other reasons, because of the Server Backup limitations of W2008S/WHS2011) but as a RAID-1 alternative: I am protected against a single drive failure (not being the OS drive) and have some read-striping benefits. The/any drive can be read on any NTFS-capable machine. You could do two Pools, each consisting of two drives with x2 duplication; essentially you'd have the same, AFAICS.

                  I found they do not support SSD caching and are not likely to implement it. Rather, they rely on the OS/NTFS caching, and one tip they do have is to run "fsutil behavior set memoryusage 2" (from an elevated command prompt, and then reboot). It may help, as it increases the amount of system RAM used as cache. It'd be helpful to have a lot of internal memory, I guess, but that might be somewhat cheaper than running an additional SSD. It seems clear though that it falls short of your ideal situation (read instruction -> spin up -> read to cache -> spin down).



                  • #10
                    Sorry... I thought it was mainly a drive pooling tool...

                    But your comment made me realize that two RAID-1 arrays may actually be better.

                    Initially, I thought of a 3- or 4-disk RAID-5, as it should be more cost efficient. As a 3 TB disk currently has the best price per GB (approx. 100 EUR per disk), it would give 6 or 9 TB of storage (3 or 4 disks with redundancy, so 300-400 euro). But it limits future expansion: the case I'm looking at holds up to 5 disks, and the mainboard has 6 SATA ports. Also, 6 TB is quite a lot for now, and I won't fill it in one year, let alone 9 TB. The only expansion possible would be from the 3-disk configuration, and it would require adding a disk and rebuilding the whole RAID.

                    If I go with 2x 4 TB (2x 150 euro), I'd have 4 TB of storage, which would suffice for now. When needed, I can add another 2 disks, and by then the 4 TB ones will be cheaper, or bigger might be better.

                    So a similar cost would give me less storage (4 TB instead of 6 TB), but keep more expansion options open...
                    It will always be more expensive, as there is less net storage available than with RAID5, but considering the necessary storage and the upgrade paths, it may be the better option.
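For reference, the trade-off VJ weighs here can be sketched with a quick calculation. The disk prices are the rough 2014 figures quoted in this thread, and the helper names are made up for illustration:

```python
# Net capacity and cost: RAID-5 vs. a mirrored (RAID-1) pair.
# Prices are the approximate 2014 figures mentioned in this thread.

def raid5_net_tb(n_disks, disk_tb):
    """RAID-5 spends one disk's worth of capacity on parity."""
    return (n_disks - 1) * disk_tb

def mirror_net_tb(n_pairs, disk_tb):
    """RAID-1 mirrors each disk, so one disk per pair is usable."""
    return n_pairs * disk_tb

print(raid5_net_tb(3, 3), "TB net for", 3 * 100, "EUR")   # 3x 3 TB RAID-5
print(raid5_net_tb(4, 3), "TB net for", 4 * 100, "EUR")   # 4x 3 TB RAID-5
print(mirror_net_tb(1, 4), "TB net for", 2 * 150, "EUR")  # 2x 4 TB mirror
```

At the same ~300 EUR, RAID-5 yields 6 TB net against 4 TB for the mirror, which is exactly the trade-off described above.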
                    Last edited by VJ; 13 June 2014, 04:15.



                    • #11
                      It is mainly a drive pooling tool, but leave it to me to find some other use for something. And if you use it as such, you can still have duplication, as with RAID-1. Moreover, there is no requirement on drive sizes, i.e., you can have disk pooling and duplication while using differently-sized drives (and, bar specific circumstances, not be limited to number of drives * smallest drive size). If you have a couple of older HDDs hanging around unused, you might not even have to make that investment right now. Just leave 1 SATA port available for when you need to expand. Simply add an HDD, add it to the Pool, and then remove another HDD. Done.

                      Anyway, if you really want RAID, that'll be the way to go, especially if you want and can have the SSD-caching. _IF_ you can, would you not simply do that through the M.2 SSD as opposed to another?



                      • #12
                        Well, I want some storage system with redundancy. It is slightly overkill, as I keep backups, which for my purposes suffice (I don't need 24/7 availability), but I've had too many hard disk problems not to consider redundancy in storage. My first idea was RAID5, as it offers a good price/redundancy balance (e.g. with 4 disks).

                        But the discussion made me realize there are other options I was dismissing too quickly. I overlooked the fact that I don't need huge storage now if I can upgrade easily. The RAID5 limits upgrading and has a higher initial cost (but offers more storage, still at a lower price per GB). But if I settle for less storage now, the upgrade paths look like this:
                        - now get 2x 3 TB (3 TB net storage), one year later add 1x 6 TB HDD to achieve 6 TB net storage
                        - now get 2x 3 TB (3 TB net storage), one year later add 2x 3-4 TB HDDs to achieve 6-7 TB net storage
                        - now get 2x 4 TB (4 TB net storage), one year later add 2x 4 TB HDDs to achieve 8 TB net storage
                        - now get 2x 4 TB (4 TB net storage), one year later add 2x 6 TB HDDs to achieve 10 TB net storage
                        - ...
                        So the price per GB would be higher initially, but the upgrade path should allow it to even out over time.
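Whatever the exact mirroring mechanism, the net figures in this list are consistent with x2 duplication simply halving the raw capacity. A quick sanity check (the function name is just for illustration):

```python
def net_tb(*disk_sizes_tb):
    # With x2 duplication every file is stored twice,
    # so net capacity is half the raw total.
    return sum(disk_sizes_tb) / 2

print(net_tb(3, 3))        # 2x 3 TB now -> 3.0 TB net
print(net_tb(3, 3, 6))     # after adding 1x 6 TB -> 6.0 TB net
print(net_tb(4, 4, 6, 6))  # after adding 2x 6 TB -> 10.0 TB net
```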



                        • #13
                          How would that work with RAID5 exactly? I always thought that RAID5 was block-level striping/duplication and that downsides are:
                          - Lose the controller / MB, lose the data?
                          - Lose one drive and I/O becomes terribly slow until the array has been rebuilt?

                          I may be terribly confused on this. I like things simple and DP does that for me. And yeah, upgrading, either adding disks or, once the case or connectors are full, replacing with larger disks is so easy. In case of the latter, all you need is to have one port and bracket spare. You can even do without but then it becomes a bit more complicated.



                          • #14
                            Originally posted by Umfriend View Post
                            How would that work with RAID5 exactly? I always thought that RAID5 was block-level striping/duplication and that downsides are:
                            - Lose the controller / MB, lose the data?
                            - Lose one drive and I/O becomes terribly slow until the array has been rebuilt?
                            I've only had experience with my hardware RAID. But yes: you cannot move the disks to a controller with a different chipset, so the disks are tied to the controller. I only once experienced a RAID failure, and I/O was not terribly slow. But rebuilding takes time (several hours), and it depends how much priority you give to the rebuild. As long as it is not rebuilt, the RAID is in a critical state (one more disk failure, and there is data loss). On a pure software RAID, I would guess rebuilding takes even more time and CPU cycles (slowing down the whole system).

                            But if you consider that you can have e.g. 4 disks and get the net storage of 3 disks, it is a nice balance between redundancy and cost.
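That balance follows from the RAID-5 capacity formula: one disk's worth of parity regardless of array size, so the usable fraction grows with the number of disks. A minimal sketch:

```python
def raid5_usable_fraction(n_disks):
    # One disk of parity out of n -> (n - 1) / n usable.
    return (n_disks - 1) / n_disks

for n in (3, 4, 5):
    print(f"{n} disks -> {raid5_usable_fraction(n):.0%} usable")
```

With 4 disks, 75% of the raw capacity is usable, i.e. the net storage of 3 disks, as noted above.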
                            Originally posted by Umfriend View Post
                            I may be terribly confused on this. I like things simple and DP does that for me. And yeah, upgrading, either adding disks or, once the case or connectors are full, replacing with larger disks is so easy. In case of the latter, all you need is to have one port and bracket spare. You can even do without but then it becomes a bit more complicated.
                            Upgrading a RAID5 is more of a pain though. Most RAID5 NAS systems offer some way of growing the RAID by swapping the disks for bigger ones (one at a time, rebuilding after each disk), but they generally use a software RAID. If you stick with standard hardware RAID, I'm not aware of upgrade paths that let you keep the data. Perhaps if you do it the same way and have an option to extend the volume... but e.g. my old RAID controller does not allow that.

                            I always used to have this (bad) tactic of buying a nicer system intended to be more future-proof. By that tactic, a RAID5 that is too big seems like a good idea. But prices of storage drop so fast that it is not a good tactic: a few years from now, the RAID will be too small, with no upgrade path left. And then, to keep the same thing, it means again buying 3-4 disks, with no way to keep the old ones in the case (perhaps as external disks or so). I'm really beginning to like the mirroring approach: sufficient storage for the near future, expand with a second mirrored system when needed, replace the oldest one when needed, etc.



                            • #15
                              DP would, of course, also be in a critical state after the loss of 1 HDD (assuming x2 duplication), but it will re-duplicate automatically, given enough drives (at least 2 need to remain) and space. Another nice thing is if you complement it with Scanner. Not only does it read SMART details, it also performs surface scans. If a drive becomes suspect, DP will automatically offload the files to the other drives (given enough drives and space again, of course).

                              There are people who run external enclosures and have like 20 HDDs in a single Pool. Not sure how they back that up though; perhaps there are tape drives that allow for 40-80 TB?

                              edit:
                              expand with a second mirrored system when needed
                              What do you mean with a second mirrored system? With DP, you'd just add one HDD to the Pool and it'd be done. They need not be in pairs or anything.
                              Last edited by Umfriend; 17 June 2014, 22:57.

