Steve Bertrand wrote:
> xufengnju wrote:
>> Hi all,
>> I have a storage server that holds one million images in a well-structured
>> directory layout such as
>>
>> /data/user1/2008/09/12/image1.jpg
>> /data/user1/2008/09/12/image2.jpg
>> ...
>> /data/user2/2009/01/01/image1.jpg
>> ...
>>
>> I want to copy them to the /data2 directory on the same server, on another
>> disk partition.
>> I want to keep the directory structure and `chown` && `chmod` the
>> directories and files, much like doing a `cp -rf /data /data2 && chown -R
>> sysuser:sysuser /data2 && chmod -R 755 /data2`.
>>
>> File::Find may be an option.
>> Does anybody have any suggestions?
>>
>> If I do a `cp -rf /data /data2 && chown -R sysuser:sysuser /data2 && chmod
>> -R 755 /data2`, how much time might the job take? (The images number about
>> one million and total roughly 250GB in size.)
> 
> Perhaps I am missing something completely obvious. If not:
> 
> Why-oh-why do you want to use Perl to perform such a task?
> 
> Use dump/restore (which I can't recall a cli sequence for off the top of
> my head), 

...and here are the dump commands I had to look up. Use them at your own risk.

Essentially, it creates a full (level 0) dump of /data and, because of
'-f -', writes it to '-' (i.e. STDOUT).

That stream is piped to the process on the right, which changes directory
to your backup disk, /mnt/data, and then restores the incoming dump, read
from '-' (STDIN), into the current directory, /mnt/data.

# mount your backup disk to /mnt/data

% dump -0f - /data | ( cd /mnt/data; restore -rf - )
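
Note that restore keeps the ownership and permissions recorded in the dump,
so if you still want everything owned by sysuser and set to 755, as in your
original message, you would follow the restore with the same
`chown -R sysuser:sysuser` and `chmod -R 755`, run against /mnt/data (or
wherever the copy finally lives).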

Perhaps there is a Perl way to do it, but otherwise, for 250GB of data,
research dump/restore, and test it out (after making a backup).

imho, you shouldn't use another layer of abstraction for managing such a
large volume of data, unless you are attempting to create some sort of
index for it.
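
That said, if you do want to try the Perl/File::Find route your original
post mentions, a rough, untested sketch might look something like the
following. It assumes the /data -> /data2 paths and the sysuser owner from
your message; test it on a small subtree first.

#!/usr/bin/perl
use strict;
use warnings;

use File::Find;
use File::Copy qw(copy);
use File::Path qw(make_path);

# source/destination and owner taken from the original post
my $src = '/data';
my $dst = '/data2';

my ($uid, $gid) = (getpwnam('sysuser'))[2, 3];   # assumes 'sysuser' exists
die "no such user: sysuser\n" unless defined $uid;

find(sub {
    # $File::Find::name is the full path; strip $src to get the relative part
    (my $rel = $File::Find::name) =~ s/^\Q$src\E//;
    my $dest = $dst . $rel;

    if (-d $File::Find::name) {
        # directories are visited before their contents, so create them first
        make_path($dest) unless -d $dest;
    }
    else {
        copy($File::Find::name, $dest)
            or warn "copy $File::Find::name: $!";
    }

    chown $uid, $gid, $dest or warn "chown $dest: $!";
    chmod 0755, $dest       or warn "chmod $dest: $!";
}, $src);

For a million files this will likely be noticeably slower than dump/restore
or cp, since every file goes through a separate copy/chown/chmod call.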

Steve
