Hi,
I need to migrate 40T of data (about 180M files) from one storage device to
another; both source and destination will be NFS shares mounted on a local
SUSE Linux box.
The first question is whether there is any risk with such a big number of
files. Should I divide them into groups and rsync them in parallel or in serial?
Hi.
Tue, 11 Aug 2009 16:14:33 +0800, gaomingcn wrote:
The second question is about memory.
How much memory should I install in the Linux box? The rsync FAQ
(http://rsync.samba.org/FAQ.html#4) says each file uses about 100 bytes
to store relevant information, so 180M files will use about 18G.
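That estimate is easy to sanity-check with a bit of shell arithmetic (the 100-bytes-per-file figure is the FAQ's rule of thumb; the real per-file cost varies with path length and rsync version):

```shell
# Rough memory estimate for rsync's in-memory file list:
# ~100 bytes of bookkeeping per file (rsync FAQ #4).
files=180000000
bytes=$((files * 100))
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # 16 GiB (about 18 GB decimal)
```

Note that rsync 3.0 and later use incremental recursion when both ends are new enough, which keeps only part of the file list in memory at a time, so the worst case above mostly applies to older versions or options that disable incremental recursion.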
On Tue, 2009-08-11 16:14:33 +0800, Ming Gao gaomin...@gmail.com wrote:
[snip]
2009/8/11 Jan-Benedict Glaw jbg...@lug-owl.de:
[snip]
On Tue, 2009-08-11 10:58:15 +0200, Michal Suchanek hramr...@centrum.cz wrote:
2009/8/11 Jan-Benedict Glaw jbg...@lug-owl.de:
[snip]
It's almost the same? I once tested on about 7G of data: I rsync'ed it to
another directory, and it took less than a minute when I ran the same
command line again.
The reason I use rsync is that the data will change during the time I
run rsync the first time, so I need to run rsync a second time.
2009/8/11 Ming Gao gaomin...@gmail.com:
[snip]
Did you test it on the two NFS shares, or on something else?
Also, if you have enough memory
Ming Gao wrote:
[snip]
Is there any way you could get local access to the write end of the transfer,
so that you don't have to do this all over NFS?
Ming Gao wrote:
The first question is whether there is any risk with such a big number of
files. Should I divide them into groups and rsync them in parallel or in
serial? If yes, how many groups would be best?
For that amount of data, you ought to use something simple and recursive,
like cp -rp.
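A minimal sketch of the grouping idea, assuming the tree splits naturally at the top level. The temp dirs and projA/projB names are stand-ins for the real NFS mounts; cp -rp does the initial bulk copy as suggested here, and rsync -a can be substituted for later update passes:

```shell
# Demo tree standing in for the source and destination NFS mounts.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/projA" "$src/projB" "$src/projC"
touch "$src/projA/a" "$src/projB/b" "$src/projC/c"

# One copy job per top-level directory, at most 2 running at once.
ls "$src" | xargs -P2 -I{} cp -rp "$src/{}" "$dst/{}"

ls "$dst"   # projA projB projC
```

How many jobs to run in parallel is mostly a question of what the NFS server and network can sustain; a handful is usually enough to keep the link busy before the server becomes the bottleneck.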
Hourly I have an rsync job backup /home to /home/backup. I have 24
directories (one for each hour):
home.0
...
home.23
Here is the script I am running via cron:
#! /usr/local/bin/bash
dest=`date +%k | sed 's/ //g'`
linkdir=`date -v-1H +%k | sed 's/ //g'`
chflags -R noschg /home/backup
rm
And now df is reporting proper usage of 5.4 GiB (which is what I
expected). Maybe I just wasn't being patient enough and there's some
weird df lag or something. Anyway, it seems to be working OK, but if
anyone has any pointers on doing this more efficiently, I'd be more
than happy to hear them.
Maybe I should take up an afternoon coffee habit. I did some reading
on du, and found out that it only disregards a file with multiple hard
links if it has already seen it. Running du -hcd1 on /home/backup
gave all the expected results.
[r...@arthur /home/backup]# du -hcd1 .
2.0K
I think your problem is with reading the correct size of folders that
contain hard links. To do this with the du command, try:
du -sh /home/backup/*
As far as I know, du will only count a hard-linked file once if both
links are within du's scope. Otherwise, running du separately on the
two directories will count the file twice.
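This scope behaviour is easy to demonstrate on a throwaway tree (the home.0/home.1 names here just mimic the snapshot layout):

```shell
# Two snapshot dirs sharing one 1 MiB file via a hard link.
d=$(mktemp -d)
mkdir "$d/home.0" "$d/home.1"
dd if=/dev/zero of="$d/home.0/big" bs=1024 count=1024 2>/dev/null
ln "$d/home.0/big" "$d/home.1/big"

du -sk "$d"          # both links in scope: the file is counted once (~1024K)
du -sk "$d/home.1"   # only one link in scope: counted in full again (~1024K)
```

So summing du over each snapshot individually can vastly overstate real disk usage, while a single du over the whole backup tree matches df.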
Use cp -au src dest, and repeat if it crashes or if you have to stop (it
will skip files already copied); then rsync to update directories'
modification dates and catch anything changed during the copy.
Denis
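Denis's two-phase approach, sketched end-to-end on a throwaway tree (the temp dirs stand in for the NFS mounts):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo one > "$src/a"

cp -au "$src/." "$dst/"   # phase 1: bulk copy; -u makes re-runs skip files already copied
echo two > "$src/b"       # a file appears while the copy is in progress
cp -au "$src/." "$dst/"   # re-running only transfers the new file

rsync -a "$src/" "$dst/"  # final pass: fix directory mtimes, catch stragglers
ls "$dst"                 # a b
```

The point of the split is that cp avoids building rsync's huge file list for the 40T bulk transfer, and rsync only has to reconcile the (much smaller) delta at the end.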
On Aug 11, 2009 5:37am, Ming Gao gaomin...@gmail.com wrote:
[snip]
man rsync says:
If you need to transfer a filename that contains whitespace,
you'll need to either escape the whitespace in a way that the remote
shell will understand, or use wildcards in place of the spaces.
I am regularly doing backups with rsync and notice that file names
FILE NAME WITH SPACES -- this is 4 different space-separated parameters fed
from the shell to the program
"FILE NAME WITH SPACES" -- this is one parameter fed from the shell to the
program
FOLDER-NAME/ -- this is one parameter and means all the files in the
directory FOLDER-NAME/
FOLDER-NAME -- this is one parameter and means the directory itself
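The shell side of this is easy to see with a tiny argument counter (count is a throwaway helper, not part of rsync):

```shell
# How many arguments does the invoked program actually receive?
count() { echo $#; }

count FILE NAME WITH SPACES     # 4: the shell splits on whitespace
count "FILE NAME WITH SPACES"   # 1: quoting keeps it a single argument
```

For remote rsync transfers the name is parsed twice, once by the local shell and once by the remote one, which is why the man page says the whitespace has to be escaped in a way the remote shell understands.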