From: Jakma, Paul (Paul.Jakma at domain compaq.com)
Date: Thu 03 Feb 2000 - 18:42:27 GMT
> Interesting. Yeah, I guess if your OS sends data in chunks that are
> big enough to fill a stripe completely, then the card doesn't need
> to do either read. If the card sends data that fills one segment,
> then it only needs to do one read. Fair enough. How well do your
> arrays cope with 1kb or 4kb over all disks in a stripe? What sort
> of OS do you have that sends data to the disk in chunks bigger than
> 4kb? 8kb perhaps?
Well, that's up to the driver for that controller. The driver should know how
to arrange data transfers in such a way that the controller can operate most
efficiently.
I see your point about reading now. If data to be written is below stripe
size, then indeed the controller would need to read in chunks to allow it
to calculate parity, before it could commit the write. Although if the OS
had the unchanged blocks in memory, the driver could use them to make up
a complete stripe for the controller, hence avoiding the reads.
However, those extra data chunks would not need to be written back out, only
the changed chunks and the parity. And still, the driver should try its
best to buffer writes to avoid sub-stripe writes, if the fs hasn't already
done so. E.g. with a 4KB blocksize you only need 16 blocks to fill a 64KB
stripe, which is a very small buffer. (Also, in the case of ext2, you can
tell mke2fs to take the underlying RAID stripe size into account.)
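To make the read-before-write point concrete, here is a small sketch of the parity maths, assuming the usual XOR parity (as in linux-raid). A full-stripe write needs no reads at all; a sub-stripe write can update parity by reading just the old chunk and the old parity. The function names are mine, for illustration only:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def full_stripe_parity(chunks):
    """Full-stripe write: parity is simply the XOR of all new data chunks,
    so no reads from disk are needed."""
    p = chunks[0]
    for c in chunks[1:]:
        p = xor(p, c)
    return p

def rmw_parity(old_parity, old_chunk, new_chunk):
    """Sub-stripe (read-modify-write) update for one changed chunk:
    new_parity = old_parity XOR old_chunk XOR new_chunk.
    Only two reads (old chunk, old parity) instead of the whole stripe."""
    return xor(xor(old_parity, old_chunk), new_chunk)

# Demo: a 3-disk RAID5 stripe with two 4-byte data chunks.
chunks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"]
p = full_stripe_parity(chunks)
new = b"\xff\xff\xff\xff"
# Updating chunk 0 via read-modify-write must give the same parity as
# recomputing the whole stripe from scratch.
assert rmw_parity(p, chunks[0], new) == full_stripe_parity([new, chunks[1]])
```

This is also why the driver buffering up a full stripe's worth of blocks pays off: it turns the read-modify-write path into the read-free full-stripe path.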
Anyway, a lot of this holds for RAID1 as well. E.g. if you want to change 4KB
on a RAID1 array with a 64KB stripe.
Stripe is the total data block; chunk is the stripe/n component of the
stripe that is written out. In the case of RAID1, n=1, so chunk == stripe.
 Very likely. The controller has to read 'stripe size' at a time no
matter what, so the driver most likely reads them into memory too, whether
needed or not.
> > RAID1:
> > stripe must be written to each disk(X)
> Yes, all both of them. :-)
Indeed, but on RAID1 chunk(0) == chunk(1) == stripe. Not so on RAID5, where
all the chunks are different parts of the stripe.
RAID1 effectively does not stripe. It can interleave reads depending on
which scsi host comes back with the data first, but it can't stripe.
> <Hesitates>. Ummm, yeah. Probably not too calculation bound.
> Still remain
> to be convinced, though. Since /I/ haven't offered any numbers, I can
> hardly ask for some from you. :-)
I'll be able to bench a DAC960-PDU, 4MB cache, write-back with 3 disks next
week or the week after. I think you'll be surprised, even though this
controller is very old.
Alternatively, try linux-raid 0.90 on a reasonably fast machine. I've seen
approx. 26MB/s block in, 15MB/s block out on a 3-drive RAID5 array.
Try it out: find a linux machine with 3 scsi disks, and compare linux RAID1
over 2 disks to RAID5 over the 3. I believe the performance ratio of R1/R5
will be comparable to that of a modern controller with a beefy CPU (e.g. Mylex).
> <Strains brain>. Umm. Like, two fives raid0-ed together? Or a load of
> noughts raid5-ed together? Interesting.
multiple RAID5 arrays joined with RAID0.
> However, if raid5 is so good, why bother? Why not just build a bigger
> raid5? At what point would you split a big raid5 into two and join 'em
> together in a raid0?
I wouldn't go higher than 4 disks per array because of redundancy. After that
I'd RAID1 or RAID0 multiple R5 arrays together, if the hardware allowed.
> (I'm assuming one might do this so that two disks
> dying in close succession mightn't kill everything.)
Indeed. But with a big setup you'd have to mirror arrays anyway
(hardware/LVM), whether RAID1 or RAID5 arrays.
> > > Other reasons I like raid1 (and its striped friends) are:
> > > o In theory, Up to half of the disks in an array can fail and
> > > still have the
(Going back to an older comment.) That's slightly misleading. Assume half
the disks fail: it's highly unlikely that only the second disk of every
RAID1 pair will fail. In fact, if half your disks fail on a RAID1+0 setup,
it is very likely that at least one of your R1 pairs will suffer a 2-disk
failure, because the distribution of failed disks will not be even across
the array. And then you lose the whole array.
(but same applies to multi R5)
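That intuition is easy to sanity-check. Here is a hypothetical little simulation (not from the original post): fail exactly half the disks of a RAID1+0 setup at random, and count how often every mirror pair keeps at least one member. A closed form exists too, since the array only survives if each pair loses exactly one disk:

```python
import random
from math import comb

def raid10_survives(n_pairs, failed):
    """failed: set of disk indices; mirror pair i is disks (2i, 2i+1).
    The array survives only if no pair has lost both members."""
    return all(not (2 * i in failed and 2 * i + 1 in failed)
               for i in range(n_pairs))

def survival_rate(n_pairs, trials=100_000, seed=1):
    """Monte Carlo: fail half the disks uniformly at random each trial."""
    rng = random.Random(seed)
    disks = list(range(2 * n_pairs))
    ok = 0
    for _ in range(trials):
        failed = set(rng.sample(disks, n_pairs))
        ok += raid10_survives(n_pairs, failed)
    return ok / trials

def exact_survival(n_pairs):
    """Closed form: each of the n pairs must lose exactly one disk,
    so 2^n favourable outcomes out of C(2n, n) ways to pick the failures."""
    return 2 ** n_pairs / comb(2 * n_pairs, n_pairs)

# With 8 mirror pairs (16 disks), losing any 8 disks at random almost
# always takes out at least one pair -- survival is only about 2%.
print(survival_rate(8), exact_survival(8))
```

So "up to half the disks can fail" is true only for one very lucky pattern of failures, which is the point being made above.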
> Like I've said, I would *love* a card that had
> raid5 performance
> even close to its raid1 performance. If you can show me one, I'll make
> a point of looking for boxes with 'em in in future.
I can't show you one; I don't have the numbers. However IMO, judging from
the difference in performance between linux-raid 0.90 and older controllers,
the older controllers were very calculation bound. And so I would be very
surprised if new high-end controllers using newer, much faster CPUs (e.g.
StrongARM) did not have substantially improved RAID5 performance.
Tell me, would you be able to bench some RAID1 setups?
> I'm saying that once you've decided that you want 500GB (say) and have
> budgeted for a backup system for that, the difference in cost between
> making the 500GB out of raid5 or striped raid1 suddenly isn't
> that big.
Indeed, big-money setups don't care about a couple of thou. But your
RAID1 solution will need 1.33 times as many disks as a setup based on
3-disk RAID5 arrays, and 1.5 times as many disks as a setup based on
4-disk RAID5 arrays.
Would you rather buy 1TB of disks (for RAID1), or 667GB of disks (RAID5) to
make up your 500GB array? Say a 9GB disk is £200 (very optimistic). That makes
it £22k for R1 as opposed to £15k for R5, a difference of £7k. With a more
likely cost of £350 per 9GB disk, that's £38k for R1, compared to £26k for R5!
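The arithmetic above can be reproduced directly (same figures as the post: 500GB usable, 9GB disks, 2-disk mirrors vs 4-disk RAID5 arrays; the small differences from the quoted totals are just rounding):

```python
from math import ceil

def disks_needed(usable_gb, disk_gb, data_disks, total_disks):
    """Disks required to reach usable_gb, building arrays in which
    data_disks out of total_disks hold data (the rest is redundancy)."""
    arrays = ceil(usable_gb / (disk_gb * data_disks))
    return arrays * total_disks

raid1 = disks_needed(500, 9, 1, 2)   # each 2-disk mirror stores 9GB  -> 112 disks
raid5 = disks_needed(500, 9, 3, 4)   # each 4-disk R5 stores 27GB     -> 76 disks

for price in (200, 350):
    print(f"at £{price}/disk: RAID1 £{raid1 * price / 1000:.0f}k, "
          f"RAID5 £{raid5 * price / 1000:.0f}k")
```

Either way the RAID1 setup needs roughly 1.5 times the disks of the 4-disk-array RAID5 setup, which is the whole cost argument in two lines of arithmetic.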
Also, let's look at redundancy. Granted, each RAID1 array has a lower
probability of failure than a RAID5 array. However, the overall RAID1-based
setup has 1.5 times more disks than the RAID5 setup, and hence will probably
suffer more drive failures than the RAID5 setup! So your overall redundancy
is not that much greater than the RAID5 setup's, for 50% extra disks.
> I say that since raid1 is so much "better" than raid5, one
> may as well go
> with raid1. You say that since raid5 is so much better than raid1, go
> with raid5.
I don't say RAID5 is so much better. I say RAID5 is more cost-effective, and
I disagree about the perf. thing: RAID5 definitely has better read
performance. But this needs numbers to settle it.
This archive was generated by hypermail 2.1.6 : Thu 06 Feb 2003 - 13:05:21 GMT