jp wrote:
100GB of 4-40MB files sounds like my home PC full of digital photos I've
taken. It backs up to a Linux PC right beside it with rsync. I don't
really call it that big a project for rsync. Big things for rsync are
millions of files. At 100Mbps, it takes a few seconds to build the list.
I use the
Jamie Lokier wrote:
Hmm. My home directory, on my laptop (a mere 60GB disk), does contain
millions of files, and it takes about 20 minutes to build the list on
a good day. 100Mbps network, but it's I/O bound not network bound.
It looks a lot like the number of files is more significant than
Wayne Davison wrote:
On Mon, Mar 06, 2006 at 07:18:45PM +0200, Shachar Shemesh wrote:
In fact, I know of at least one place where they don't use rsync because
they don't have enough RAM+SWAP to hold the list of files in memory.
As far as future directions for rsync, I think this is the major place
where rsync
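The memory concern above can be made concrete with a toy estimate. This is a back-of-the-envelope sketch, not rsync's actual accounting; the 100 bytes per entry figure is an assumed round number for illustration.

```python
# Rough estimate of the RAM rsync needs to hold its file list.
# BYTES_PER_ENTRY is an assumed placeholder, not a figure taken
# from rsync's source code.
BYTES_PER_ENTRY = 100

def file_list_memory_mb(num_files, bytes_per_entry=BYTES_PER_ENTRY):
    """Approximate file-list memory in megabytes."""
    return num_files * bytes_per_entry / 1e6

# Five million files would need on the order of 500 MB for the
# list alone, which is how RAM+swap can become the limit.
print(file_list_memory_mb(5_000_000))
```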
Jamie Lokier wrote:
While you're there, one little trick I've found that speeds up
scanning large directory hierarchies is to stat() or open() entries in
inode-number order. For some filesystems it makes no difference, but
for others it reduces the average disk seek time as on many common
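The inode-order trick described above can be sketched in a few lines of Python. This is an illustration of the idea, not rsync's implementation; it relies on `os.scandir`, which exposes each entry's inode number without an extra stat() call.

```python
import os

def scan_in_inode_order(directory):
    """List a directory, then stat() entries sorted by inode number.

    readdir() already yields each entry's inode, so sorting on it
    costs almost nothing; on filesystems where inode numbers track
    on-disk layout, issuing the stat() calls in this order reduces
    average seek time.
    """
    with os.scandir(directory) as it:
        entries = sorted(it, key=lambda e: e.inode())
    # Do the expensive per-entry work in inode order.
    return [(e.name, e.stat(follow_symlinks=False)) for e in entries]
```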
Subject: Re: Question about rsync and BIG mirror
Thanks for all your answers and advice. My problem seems to be on the
side of the 2MB line, once the whole 190GB of data has been synchronised.
I will keep in touch and give some feedback.
[EMAIL PROTECTED] wrote:
Hello,
So: each night, from 0:00am to at most 7:00am, the server will have to
check the 100GB of files and see which files have been modified, then
upload them to the clients. Each file is around 4MB to 40MB on average.
Are the clients what you call the mirror?
On Fri, 2006-03-03 08:02:55 +0100, [EMAIL PROTECTED] wrote:
// I wonder if this message has been posted, so I sent it again //
It was, but nobody answered yet.
I'm preparing a plan for a production mode in my company: we need to
mirror around 100GB of data through a special
Flames invited if I'm wrong on any of this, but:
Some (long overdue) backups indicate that network speed
should be much more important than CPU speed.
Your results will depend heavily on your exact mix
and I cannot think of any reasonable way to quantify it.
That said, this may help give you a