From: Wesley Darlington (wesley at domain yelsew.com)
Date: Thu 03 Feb 2000 - 11:40:05 GMT
On Thu, Feb 03, 2000 at 11:02:39AM +0000, Dermot Gorman wrote:
> why does my system on boot up give the error of maximal mount reached
> and then force a check of partitions hda3, hda5, hda6 even when there's
> plenty of space, as you can see from df...
> Filesystem 1k-blocks Used Available Use% Mounted on
> /dev/hda7 253775 62607 178066 26% /
> /dev/hda5 398250 39520 338169 10% /home
> /dev/hda6 101485 214 96031 0% /tmp
> /dev/hda3 809588 672824 95640 88% /usr
> /dev/hda1 2096320 1422016 674304 68% /mnt/hda1
When you've mounted a filesystem N times without an fsck being done on
it, the boot process decides to fsck the filesystem, just for the hell
of it. N is usually 20, IIRC, and is usually the same for all
filesystems, which is why several partitions come due at once.
[ Murphy's law says that this process is initiated when you can least
afford the time for it. :-]
A trick I read about once is to set the maximal mount counts on the
bigger partitions to different values, ideally relatively prime (to
each other). Go to single-user mode, unmount the relevant filesystems
and use "tune2fs -c" to set the counts. In your case, /dev/hda1 is a
prime candidate for a different max mount count from the other
filesystems. The default, IIRC, is 20, so set the mmc of /dev/hda1
to 21.
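If you want to try this out before touching a real partition, here's a
sketch (assuming e2fsprogs is installed) that practises on a throwaway
file-backed ext2 image instead of /dev/hda1; the filename /tmp/test.img
is just for illustration:

```shell
# Create a small ext2 filesystem in a plain file (no root needed;
# -F forces mke2fs to accept a non-block-device, -q keeps it quiet).
dd if=/dev/zero of=/tmp/test.img bs=1k count=1024 2>/dev/null
mke2fs -q -F /tmp/test.img

# Set the maximal mount count to 21, as suggested above.
tune2fs -c 21 /tmp/test.img

# Verify: tune2fs -l dumps the superblock fields, including
# "Maximum mount count" and the current "Mount count".
tune2fs -l /tmp/test.img | grep 'Maximum mount count'
```

On a real system you'd run the same `tune2fs -c 21 /dev/hda1` against
the unmounted partition from single-user mode.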
PS. Of course, making them relatively prime is overkill if you don't
reboot your boxes very often. The point is just that the big
filesystems shouldn't all come due for an fsck at the same boot: with
counts of 20 and 21, for instance, they only coincide every 420
mounts (the lcm).
This archive was generated by hypermail 2.1.6 : Thu 06 Feb 2003 - 13:05:21 GMT