From: Wesley Darlington (wesley at domain yelsew.com)
Date: Thu 03 Feb 2000 - 16:59:42 GMT
On Thu, Feb 03, 2000 at 03:32:49PM -0000, Jakma, Paul wrote:
> i don't agree. RAID5 doesn't need to read in order to write.
> Controller has data of size stripe to write out.
> stripe is divided into n-1 chunks, where n = # of disks.
> parity is calculated.
> chunk(X) is written to disk(X), parity to the remaining disk.
Interesting. Yeah, I guess if your OS sends data in chunks that are
big enough to fill a stripe completely, then the card doesn't need
to do either of the read-modify-write reads. If the card is sent data
that fills only one chunk, then it still has to read (old data and/or
old parity) before it can write. Fair enough. How well do your arrays
cope when a 1KB or 4KB write is spread over all the disks in a stripe?
What sort of OS do you have that sends data to the disk in chunks
bigger than 4KB? 8KB perhaps?
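To pin down what I mean, here's a toy sketch (Python, purely illustrative,
nothing like real controller firmware) of the two cases. The full-stripe
write needs no reads; the partial write is a read-modify-write:

    from functools import reduce

    class Disk:
        """Toy disk: a dict of stripe number -> block."""
        def __init__(self):
            self.stripes = {}
        def write(self, stripe_no, block):
            self.stripes[stripe_no] = block
        def read(self, stripe_no):
            return self.stripes[stripe_no]

    def xor(blocks):
        """Byte-wise XOR of equal-length blocks -- the RAID5 parity."""
        return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

    def full_stripe_write(disks, stripe_no, data):
        """Full-stripe write: zero reads. Parity falls out of the data
        already in hand. (Parity lands on the last disk for every stripe,
        which is really RAID4; real RAID5 rotates it, but that changes
        nothing about the read/write counts.)"""
        n = len(disks)
        size = len(data) // (n - 1)
        chunks = [data[i * size:(i + 1) * size] for i in range(n - 1)]
        for x, chunk in enumerate(chunks):
            disks[x].write(stripe_no, chunk)
        disks[n - 1].write(stripe_no, xor(chunks))

    def small_write(disks, stripe_no, x, new_chunk):
        """Partial write of one chunk: read-modify-write. Two reads
        (old chunk, old parity) before the two writes."""
        n = len(disks)
        old_chunk = disks[x].read(stripe_no)          # read #1
        old_parity = disks[n - 1].read(stripe_no)     # read #2
        disks[x].write(stripe_no, new_chunk)
        disks[n - 1].write(stripe_no, xor([old_parity, old_chunk, new_chunk]))

    disks = [Disk() for _ in range(4)]                # 3 data + 1 parity
    full_stripe_write(disks, 0, bytes(range(48)))     # 48 bytes -> 3 x 16
    small_write(disks, 0, 1, bytes(16))               # rewrite chunk 1: 2 reads

The small_write() case is where my reads come from.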
> stripe must be written to each disk(X)
Yes, all both of them. :-)
> In the old days the parity calculation was quite an overhead i agree. But
> given Moore's law, it's safe to assume that this overhead is now negligible.
> e.g. the SA110 on the Mylex ExtremeRAID is also used by empeg.com to play
> MP3s, so it's no slouch.
> So assuming that modern controllers are not parity-calculation bound, then
> RAID5 should beat RAID1 for performance, due to the inherent striping of
> RAID5 on both read and writes.
<Hesitates>. Ummm, yeah. Probably not too calculation bound. I still remain
to be convinced, though. Since /I/ haven't offered any numbers, I can
hardly ask for some from you. :-) My mind /is/ open, though - if you
can point me to a card that really does stripe raid5 faster than it stripes
raid1, and has half-decent raid1 performance, I'll be one happy dude. :-)
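The raw XOR cost is at least easy to put a rough number on (a throwaway
sketch -- buffer size and loop count plucked from the air, and it says
nothing about any particular card's CPU):

    import time

    def xor_block(a, b):
        # XOR two buffers via big integers -- crude, but quick enough
        # even in pure Python to make the point
        return (int.from_bytes(a, "little") ^
                int.from_bytes(b, "little")).to_bytes(len(a), "little")

    a = bytes(1024 * 1024)            # two 1MB buffers
    b = bytes(range(256)) * 4096
    loops = 100
    t0 = time.time()
    for _ in range(loops):
        xor_block(a, b)
    print(f"~{loops / (time.time() - t0):.0f} MB/s of parity XOR")

If even interpreted code can XOR faster than a disk can stream, a dedicated
controller CPU probably isn't the bottleneck, so I'll grant you that much.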
> > As you say, rack space isn't cheap. And when there are *lots* of disks
> > to be raid-ed, raid5 looks more and more attractive. Especially if the
> > data is pretty much static.
> i don't accept the static bit. like i said, slow RAID5 means you have a crap
> old controller.
I accept that the ones I've used have had crap raid5 implementations. I would
love to play with an implementation whose raid5 performance came close to its
raid1 performance.
> > I don't accept that raid5 is necessarily faster at reading
> > either, though.
> > Any raid1 controller worth its salt will interleave requests onto both
> > disks in a pair.
> but that's on a 'first to the drive head for this stripe' basis. RAID1 can't
> read in parallel from 2 disks. In RAID5 a single stripe is read in parallel.
Raid1 *can* read in parallel from 2 disks. Two "things" (processes, perhaps)
requesting data at different places on the disk => two simultaneous reads,
one from each half of the mirror.
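In other words, the mirror halves are independent servers for reads.
A toy dispatcher (Python, illustrative only):

    def dispatch_reads(requests, queues):
        """Send each read to the mirror half with the shortest queue;
        two outstanding requests land on two different spindles."""
        for req in requests:
            min(queues, key=len).append(req)
        return queues

    print(dispatch_reads(["block 10", "block 90210"], [[], []]))
    # -> [['block 10'], ['block 90210']] : one read per disk, in parallel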
> > Any raid1 controller worth its salt will also trivially do striping
> > (1+0/0+1). (*)
> that's a different ball game, else i can just say 5+0. :)
<Strains brain>. Umm. Like, two fives raid0-ed together? Or a load of
noughts raid5-ed together? Interesting.
However, if raid5 is so good, why bother? Why not just build a bigger
raid5? At what point would you split a big raid5 into two and join 'em
together in a raid0? (I'm assuming one might do this so that two disks
dying in close succession mightn't kill everything.)
Every time I've said "raid1" and implied more than two disks, I've meant
lots of raid1 pairs, striped into a raid0. It just seems normal to me.
Apologies for not making this clear. Most of the points I've made,
however, are applicable to a classical many-disk raid1 array, where the
second pair is only used when the first is full and so on. While this
isn't quite as good as many raid1s striped into a raid0, I would still
have used a classical raid1 where I have in the past used a raid10, if
the controller didn't support raid10. By the same token, when I've
used raid5 in the past, I wouldn't have used either raid1 or raid10,
because of the disk space issues and despite the performance issues.
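To be concrete about the two layouts (a sketch; the chunk size and pair
capacity are made-up figures):

    CHUNK = 64                # blocks per chunk (made up)
    PAIRS = 4                 # four raid1 pairs
    PAIR_BLOCKS = 1000000     # blocks per pair (made up)

    def raid10_pair(block):
        """Striped raid1 (raid10): consecutive chunks rotate across the
        pairs, so one big sequential read keeps every pair busy."""
        return (block // CHUNK) % PAIRS

    def classical_pair(block):
        """Classical many-disk raid1 (concatenation): fill pair 0, then
        pair 1, and so on -- a sequential read exercises one pair at a
        time."""
        return block // PAIR_BLOCKS

    for blk in (0, 64, 128, 192):
        print(blk, "raid10 ->", raid10_pair(blk),
              " classical ->", classical_pair(blk))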
> > Other reasons I like raid1 (and its striped friends) are:
> > o In theory, up to half of the disks in an array can fail and still
> >   have the array accessible with no performance degradation. You can
> >   even get a performance improvement. :-)
> but you pay for that in both disk space and read performance. And also, i
> argue, write performance.
OK, we disagree here. I say I've found it to be a trade-off between the
number of physical disks on the one hand and performance on the other.
You say you've found it to be the number of physical disks *and*
performance on the one hand and no discernible benefit on the other.
Like I've said, I would *love* a card that had raid5 performance even
close to its raid1 performance. If you can show me one, I'll make a
point of looking for boxes with 'em in, in future.
> > o Beyond a certain point, the cost of backup starts to make the cost
> >   of disk *drives* look trivial and the cost of (filled) disk *space*
> >   look more and more expensive. Raid1 treats the cost of disk drives
> >   with contempt and disk *space* with reverence.
> firstly, the backup cost is related to your logical drive space. Whether
> that logical space is made with RAID1 or 5 is irrelevant.
I'm saying that once you've decided that you want 500GB (say) and have
budgeted for a backup system for that, the difference in cost between
making the 500GB out of raid5 or striped raid1 suddenly isn't that big.
I say that since raid1 is so much "better" than raid5, one may as well go
with raid1. You say that since raid5 is so much better than raid1, one
may as well go with raid5.
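To put toy numbers on it (hypothetical 50GB drives; prices left out):

    import math

    usable_gb = 500
    drive_gb = 50             # hypothetical drive size

    raid5_drives = math.ceil(usable_gb / drive_gb) + 1    # one drive of parity
    raid10_drives = 2 * math.ceil(usable_gb / drive_gb)   # every byte twice

    print("raid5: ", raid5_drives, "drives")    # 11
    print("raid10:", raid10_drives, "drives")   # 20

Either way the same 500GB has to be backed up, and the nine-drive gap is
what has to look small next to the cost of that backup system.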
> Secondly, isn't treating the cost of drives with contempt, the same as
> treating disk space with contempt?
Not the way I meant it. :-) The cost of the physical drives isn't
important, but every /usable/ byte has to be backed up and is therefore
expensive and to be revered, not treated with contempt. Sorry for being
a bit flowery...
> Or you can have RAID5, and have nearly the same performance as 0, with
> nearly the same redundancy as 1. :)
s/nearly the same (\w+) as/some of the \1 as/g :-)