> On Wed, Jun 29, 2005 at 01:53:17PM +0100, Niall O Broin wrote:
>> However, if you don't use RAID disks, and a disk fails, your data
>> similarly becomes unavailable - but after you have replaced the disk,
>> it stays unavailable.
> All true. And we do use RAID cards in some systems where we have RAID5
> system disks and so on. But there are some other good reasons why we
> avoid raid cards.
> Number one is the one I covered, we're using RAID1 for our system disks,
> it's not at all processor intensive or any kind of a real overhead. So
> why spend money on introducing a possible single point of failure?
I do not fully agree with this. IDE/SCSI/SATA/whatever controllers on
which the disks reside can fail just as easily as a RAID controller.
Although I fully take your points about chipsets/firmware.
e.g.: LSI is not compatible with Smart Array or Adaptec chipsets. In each
case, you are looking at a RAID rebuild & restore from backup.
With Linux RAID, you just move the disks in the correct order to the new
system. However, LSI (megaraid) controllers are backward compatible with
each other.
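For what it's worth, a rough sketch of what that disk move looks like with
mdadm (this assumes a two-disk RAID1 at /dev/md0 that is listed in
/etc/mdadm.conf; the device names are examples only):

  # On the replacement box, after moving both disks across:
  mdadm --assemble --scan

  # Or name the members explicitly if there is no mdadm.conf:
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

  # Confirm both halves of the mirror came up active.
  cat /proc/mdstat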
> Number two is that we have an on-site cold-standby machine. Which is
> basically a chassis, motherboard, CPU and RAM. In practice, it really is
> very unreliable to try and swap physical disks between RAID cards and
> expect it to work. If a system component dies, be it the motherboard,
> RAM, CPU or whatever, we want to be able to pull the disks and put them
> into the coldspare and have the system boot. At the rate Dell change
> their RAID chipsets, and the odd proprietary way cards tend to differ
> in how exactly they implement RAID1 (yes, even RAID1) - I wouldn't be at
> all confident of it working on a given occasion.
In the good old days, chipsets changed between vendors, and I'd cite the
Adaptec/LSI change as a perfect example.
But now everything is pure LSI based, and those work very well.
> Number 3 is that linux is just better at RAID1 than most raid cards.
Yes, except under load. True hardware RAID controllers will be faster, as
they offload the work to the controller's CPU rather than the host CPU.
In those situations, I would prefer to donate the CPU time to the DBs etc.
> With the cards you usually have to go into the card's BIOS to get it to
> re-initialise the pair, for example, and it's hard to get decent stats on
> what it's doing. On the other hand, linux with RAID1 gives you all sorts
> of hotremove and hotadd abilities and /proc/mdstat for monitoring
> builds. This comes in really handy when doing complete server upgrades.
> We just pull a disk from the pair, insert a new one and let it sync.
> Then we power down the server, put the new disk back in the new server
> (with the other new blank disk) and let it boot, and sync while it's at
> it. Complete chassis swaps with 5 minutes of downtime. I like :)
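For anyone wanting to copy the trick, here's a rough sketch of that swap
with mdadm (assuming a RAID1 /dev/md0 built from /dev/sda1 and /dev/sdb1;
the older raidtools raidhotremove/raidhotadd commands do the same job):

  # Drop the old disk out of the mirror.
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1

  # Swap in the new disk, partition it to match the survivor,
  # then add it back and let the kernel resync the pair.
  mdadm /dev/md0 --add /dev/sdb1

  # Watch the rebuild tick along.
  cat /proc/mdstat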
> Although some lesser OSes can't boot from software RAID1 of course.
Wot? Linux can for sure, you just cannot have /boot on LVM though.
I assume you are talking about doze? I thought it could boot from RAID1, no?
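For the record, the usual Linux arrangement is a small RAID1 for /boot (or
the whole root) using the old 0.90 superblock, which sits at the end of
each member partition so LILO/GRUB can read either disk as a plain
filesystem. A sketch, with device names purely as examples:

  # Mirror the two boot partitions; 0.90 metadata keeps the RAID
  # superblock at the end of the partition, out of the bootloader's way.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --metadata=0.90 /dev/sda1 /dev/sdb1

  mkfs.ext3 /dev/md0

  # Then mount /dev/md0 as /boot in /etc/fstab and install the
  # bootloader on both disks so the box boots with either one missing.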
> --
> Colm MacCárthaigh            Public Key: colm+pgp at stdlib.net