On Thu, 16 Mar 2000, Tony Bolger wrote:
> Inetd created lots of entries like:
> Mar 14 14:12:09 turboapps inetd: fork: Cannot allocate memory
> And Apache:
> [Tue Mar 14 14:15:26 2000] [error] [client 192.168.26.10] (12)Cannot
> allocate memory: couldn't spawn child process:
> It eventually got to the stage that shell programs gave "unable to
> fork" errors too.
> That also seems to be a problem. It occasionally complained about
> max_files reached.
I wonder, would being out of max_files affect fork()? Do children
share the same fd as the parent, or is it a copied fd pointing to the same
file? (i.e. if a child seeks, does it change the offset in the parent's fd?)
If so, running out of max_files might be giving the fork() problem. (But if
so, wouldn't it say "out of file descriptors" instead?)
Bump up all the sysctls anyway; if you have a decent amount of memory
you should be able to double most of the limits without problems.
> Not much, maybe 20 concurrent apache connections. Delivers about 25000
> emails a day on maillists. A fair few pop connections, maybe 10 a
> minute. Same of SMTP. And a little FTP use. Nothing much. Load average
> in the range 0.2-0.4
So would you have 1024 concurrent processes?
The final option is a leak in the kernel - try upgrading to 2.2.14.