"Kenn Humborg" said:
> My thinking is that if I could put a pipe with a BIG
> buffer in between, I could do something like:
>
>   cat /images/test.img | gunzip --to-stdout > /dev/null
>
> If this pipe was big enough, cat could keep pulling data
> across the network and stay ahead of gunzip's appetite,
> thus reducing the total time to something near 20 secs.
> But pipes only have a 4k buffer.
>
> Before I go and write a mega-pipe that uses select() or
> poll() to implement a large buffer between two processes,
> does anyone know of any existing tool to do this?
I doubt the pipe buffer will help much.
The big problem is probably that you're reading over NFS. Try a plain
TCP transport, such as "rsh", if possible, to make it a bit more efficient.
Also, I'd reckon a parallel-copying tool of the kind often used for tape
writes would help. These use a pool of, say, 4 processes, half reading and
half writing, so that reads and writes overlap instead of alternating in
the usual "read a block, write a block" loop.