Conor Daly wrote:
> I'm using dump to backup a number of filesystems to a tape hosted on a
> remote machine. The machine being backed up is still in commissioning so
> it hasn't got its full complement of data yet. Last night, one 300Gb
> filesystem had about 21Gb used. It took 3.5 hours to dump at an average
> rate of 1530 kB/s (according to dump). By my calculations, if this
> filesystem fills completely, it will take 57 hours to back up.
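For what it's worth, the 57-hour figure checks out, assuming dump's kB/s uses 1024-byte units and "300Gb" means 300 GiB (both assumptions, not stated in the post):

```shell
# Sanity-check the 57-hour estimate.
# Assumptions: 1024-byte kB and 300 GiB total (not stated in the post).
bytes=$((300 * 1024 * 1024 * 1024))   # 300 GiB in bytes
rate=$((1530 * 1024))                 # 1530 kB/s in bytes/s
secs=$((bytes / rate))
echo "$((secs / 3600)) hours"         # prints: 57 hours
```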
> The dump runs like this:
> Remote machine mounts the tape;
> local machine does:
>     export RSH=ssh
>     dump -0 -f remote:/dev/nst0 /filesystem
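One variant worth trying (a sketch, not something tested here): drive the ssh pipe manually and raise dump's blocking factor, so each write to the remote tape is one large chunk instead of dump's default small records. The hostname and device are from the post; the `-b 64` blocking factor is an assumption to experiment with, not a known-good value:

```shell
# Sketch: pipe dump through ssh ourselves with a larger blocking factor.
# -b 64 (64 kB records) is an assumed tuning value; adjust and measure.
export RSH=ssh
dump -0 -b 64 -f - /filesystem | ssh remote 'dd of=/dev/nst0 bs=64k'
```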
> Local and remote machines are connected by a crossover cable. Their
> connections show up in dmesg output as running at 1000Mbit full duplex. I
> can do:
>     scp /path/to/7.1Gb_file remote:
> and get about 40Mb/s throughput, which translates to about 2 hours for
> the full filesystem.
> Any clue why dump is taking so long? The filesystem it's dumping doesn't
> have many files, and each file is fairly large (~2Gb).
>
> Running dump on RHEL 5.
I've never used dump before, and only had a quick look at the man page,
so take this with a grain of salt. I would rather look at a network trace
to see the packets traversing the network and the latencies involved.

Why are you using tapes anyway? Aren't (removable) hard disks
much more compatible/performant/cheap/sane these days?
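Capturing such a trace could look something like this (a sketch; "eth0" and "remote" are placeholders for your actual interface and host):

```shell
# Sketch: record traffic to the remote host while a dump is running,
# then inspect segment sizes and ACK round-trip times afterwards
# (e.g. with wireshark or tcpdump -r).
tcpdump -i eth0 -w dump-trace.pcap host remote
```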