I'm trying to read a compressed disk image from an
NFS server and write it to a local disk:
gunzip --to-stdout < /images/test.img > /dev/sda1
Let's take the disk writes out of the equation:
gunzip --to-stdout < /images/test.img > /dev/null
This takes about 40 secs wall time and about 20 secs
user CPU. Just pulling it across the network:
cat /images/test.img > /dev/null
takes about 20 secs.
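(All timings here are from the shell's time builtin, e.g. "time cat /images/test.img > /dev/null".)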
So it looks like gunzip can't get its input quickly
enough to keep the CPU busy: the NFS reads and gzip's
decompression are not happening in parallel.
My thinking is that if I could put a pipe with a BIG
buffer in between, I could do something like:
cat /images/test.img | gunzip --to-stdout > /dev/null
If this pipe were big enough, cat could keep pulling data
across the network and stay ahead of gunzip's appetite,
reducing the total time to something near 20 secs.
But pipes only have a 4k buffer.
Before I go and write a mega-pipe that uses select() or
poll() to implement a large buffer between two processes,
does anyone know of an existing tool that does this?
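For reference, what I have in mind is roughly the sketch
below: a big ring buffer fed from stdin and drained to stdout,
with poll() so the two sides never block each other. Untested,
and the 64 MiB size and the "megapipe" name are just placeholders
I've picked for illustration:

/* megapipe.c -- sketch of a large user-space pipe buffer.
 * Build: cc -O2 -o megapipe megapipe.c
 */
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

#define BUFSZ (64 * 1024 * 1024)   /* arbitrary pick, tune to taste */

static char buf[BUFSZ];

int main(void)
{
    size_t head = 0, tail = 0, used = 0;   /* ring buffer state */
    int eof = 0;

    /* Non-blocking fds so a full or empty buffer never stalls
     * the other side. */
    fcntl(0, F_SETFL, O_NONBLOCK);
    fcntl(1, F_SETFL, O_NONBLOCK);

    while (!eof || used > 0) {
        struct pollfd fds[2];
        int n = 0, in = -1, out = -1;

        if (!eof && used < BUFSZ) { fds[n].fd = 0; fds[n].events = POLLIN;  in = n++; }
        if (used > 0)             { fds[n].fd = 1; fds[n].events = POLLOUT; out = n++; }
        if (poll(fds, n, -1) < 0) {
            if (errno == EINTR) continue;
            return 1;
        }

        if (in >= 0 && (fds[in].revents & (POLLIN | POLLHUP))) {
            /* largest contiguous free chunk starting at head */
            size_t room = (head >= tail) ? BUFSZ - head : tail - head;
            ssize_t r = read(0, buf + head, room);
            if (r == 0) eof = 1;
            else if (r < 0 && errno != EAGAIN) return 1;
            else if (r > 0) { head = (head + r) % BUFSZ; used += r; }
        }
        if (out >= 0 && (fds[out].revents & POLLOUT)) {
            /* largest contiguous filled chunk starting at tail */
            size_t avail = (tail < head) ? head - tail : BUFSZ - tail;
            ssize_t w = write(1, buf + tail, avail);
            if (w < 0 && errno != EAGAIN) return 1;
            else if (w > 0) { tail = (tail + w) % BUFSZ; used -= w; }
        }
    }
    return 0;
}

The idea being:

cat /images/test.img | ./megapipe | gunzip --to-stdout > /dev/sda1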