https://bugzilla.samba.org/show_bug.cgi?id=5482
Wayne Davison changed:
What       |Removed  |Added
Status     |ASSIGNED |RESOLVED
Resolution |---
https://bugzilla.samba.org/show_bug.cgi?id=13433
Wayne Davison changed:
What       |Removed |Added
Status     |NEW     |RESOLVED
Resolution |---
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #5 from MulticoreNOP ---
might be related to bug #12769
On Thu 14 Feb 2019, Delian Krustev via rsync wrote:
> On Wednesday, February 13, 2019 6:25:59 PM EET Remi Gauvin
> wrote:
> > If the --inplace delta is as large as the filesize, then the
> > structure/location of the data has changed enough that the whole file
> > would have to be written out in
On Wednesday, February 13, 2019 6:25:59 PM EET Remi Gauvin
wrote:
> If the --inplace delta is as large as the filesize, then the
> structure/location of the data has changed enough that the whole file
> would have to be written out in any case.
This is not the case.
If you see my original post
On Wednesday, February 13, 2019 6:20:13 PM EET Remi Gauvin via rsync
wrote:
> Have you run the nilfs-clean before checking this free space comparison?
> Maybe there is just large write amplification created by Rsync's many small
> writes when using --inplace.
nilfs-clean is being suspended for the
On 2019-02-13 10:47 a.m., Delian Krustev via rsync wrote:
>
>
> Free space at the beginning and end of the backup:
> Filesystem       1M-blocks   Used  Available  Use%  Mounted on
> /dev/mapper/bkp     102392  76872      20400   80%  /mnt/bkp
> /dev/mapper/bkp     102392  78768
On 2019-02-13 5:26 p.m., Delian Krustev via rsync wrote:
>
> The copy is needed for the comparison of the blocks as "--inplace" overwrites
> the destination file. I've tried without "--backup" but then the delta
> transfers too much data - close to the size of the backed-up files.
>
It's
It can't do what you want. The closest thing would be --compare-dest.
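A minimal sketch of --compare-dest for this situation, with illustrative paths; rsync skips files that are identical in the comparison directory and writes only the changed ones to the destination:

  # compare against the previous backup tree; only changed files land in /dst/
  rsync -a --compare-dest=/backups/previous/ /src/ /dst/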
On 2/13/19 5:26 PM, Delian Krustev wrote:
> On Wednesday, February 13, 2019 11:29:44 AM EET Kevin Korb via rsync
> wrote:
>> With --backup in order to end up with 2 files it has to write out a
>> whole new file.
>> Sure, it
On Wednesday, February 13, 2019 11:29:44 AM EET Kevin Korb via rsync
wrote:
> With --backup in order to end up with 2 files it has to write out a
> whole new file.
> Sure, it only sent the differences (normally that means
> over the network but there is no network here) but the writing end was
>
, Delian Krustev via rsync wrote:
> Hi All,
>
> For a backup purpose I'm trying to transfer only the changed blocks of
> large files. Thus I've run "rsync" with the appropriate options:
>
> RSYNC_BKPDIR=`mktemp -d`
> rsync \
> --archive
Hi All,
For a backup purpose I'm trying to transfer only the changed blocks of
large files. Thus I've run "rsync" with the appropriate options:
RSYNC_BKPDIR=`mktemp -d`
rsync \
--archive \
--no-whole-file \
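The post is truncated here; a hedged reconstruction of the kind of invocation the rest of the thread implies (the --inplace/--backup pair is mentioned in the follow-ups, and the paths are illustrative):

  RSYNC_BKPDIR=`mktemp -d`
  rsync \
      --archive \
      --no-whole-file \
      --inplace \
      --backup \
      --backup-dir="$RSYNC_BKPDIR" \
      /src/ /mnt/bkp/dst/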
https://bugzilla.samba.org/show_bug.cgi?id=13645
--- Comment #4 from Rob Janssen ---
Ok, you apparently did not understand what I proposed.
However it is not that important as in our use case we can use --append.
https://bugzilla.samba.org/show_bug.cgi?id=13645
Wayne Davison changed:
What       |Removed |Added
Status     |NEW     |RESOLVED
Resolution |---
Doing checksums will cause a noticeable impact to local-file transfers.
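A sketch of the contrast (paths illustrative): the default quick check only compares size and modification time, while -c/--checksum reads and checksums every byte on both sides, which is what hurts local transfers:

  rsync -av  /src/ /dst/   # quick check: size + mtime only
  rsync -avc /src/ /dst/   # full checksums: reads every file on both ends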
On 10/5/2018 10:34 AM, just subscribed for rsync-qa from bugzilla via
rsync wrote:
https://bugzilla.samba.org/show_bug.cgi?id=13645
When transferring large files over a slow network, ...
The command used is: rsync -av --inplace
https://bugzilla.samba.org/show_bug.cgi?id=13645
--- Comment #2 from Rob Janssen ---
Thanks, that helps a lot for this particular use case.
(the files are backups)
https://bugzilla.samba.org/show_bug.cgi?id=13645
--- Comment #1 from Kevin Korb ---
If you are sure the file has not been changed since it was partially copied,
see --append.
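A minimal sketch of that suggestion (host and paths illustrative); --append sends only the bytes past the destination file's current length, so it is only safe when the existing prefix is known to be unchanged:

  rsync -av --append /backups/bigfile.img user@host:/backups/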
https://bugzilla.samba.org/show_bug.cgi?id=13645
Bug ID: 13645
Summary: Improve efficiency when resuming transfer of large
files
Product: rsync
Version: 3.0.9
Hardware: All
OS: All
Status: NEW
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #4 from Ben RUBSON ---
util2.c:#define MALLOC_MAX 0x40000000
Which is 1 GB.
1 GB / 40 bytes x 131072 bytes = 3276 GB,
which is then the maximum file size in protocol_version >= 30.
Did you try to increase
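The same arithmetic as a shell check, using only the figures quoted in this comment (1 GiB allocation cap, 40 bytes per block-checksum entry, 131072-byte maximum block length):

  echo $(( (0x40000000 / 40) * 131072 / (1024 * 1024 * 1024) ))   # prints 3276 (GiB)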
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #3 from Kevin Day ---
Just adding --protocol=29 falls back to the older chunk generator code and
automatically selects 2MB chunks which is enough to at least make this work
without a malloc error.
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #2 from Kevin Day ---
(In reply to Dave Gordon from comment #1)
It looks like that's no longer allowed?
rsync: --block-size=10485760 is too large (max: 131072)
rsync error: syntax or usage error (code 1) at
https://bugzilla.samba.org/show_bug.cgi?id=13433
--- Comment #1 from Dave Gordon ---
Maybe try --block-size=10485760 --protocol=29 as mentioned here:
https://bugzilla.samba.org/show_bug.cgi?id=10518#c8
https://bugzilla.samba.org/show_bug.cgi?id=13433
Bug ID: 13433
Summary: out_of_memory in receive_sums on large files
Product: rsync
Version: 3.1.3
Hardware: All
OS: All
Status: NEW
Severity: normal
Hello,
While restoring a large data backup which contained some big sparse-ish files,
using rsync 3.1.1 (these were VMDK files, to be precise), I found that adding
the --sparse option can permanently wedge the rsync processes.
I performed a few basic checks during the time it happened (at one
https://bugzilla.samba.org/show_bug.cgi?id=8512
--- Comment #5 from Peter van Hooft ho...@natlab.research.philips.com
2014-09-09 07:49:13 UTC ---
We use rsync to copy data from one file server to another using NFS3 mounts
over a 10Gb link. We found that upping the buffer sizes (as a quick test)
Why not enable Jumbo Frames? http://stromberg.dnsalias.org/~strombrg/jumbo.html
For NFS, you can use
http://stromberg.dnsalias.org/~strombrg/nfs-test.html to get some fast
settings. The script could be modified to do CIFS I suppose.
HTH
Hi list,
I've found this post on rsync's expected performance for large files:
https://lists.samba.org/archive/rsync/2007-January/017033.html
I have a related but different observation to share: with files in the
multi-gigabyte-range, I've noticed that rsync's runtime also depends
on how much
, i.e., dissimilarity
increases, the number of computed checksums grows. This relationship
is especially apparent for large files, where many strong (and
expensive) checksums must be computed, due to many false alarms.
On Fri, Apr 11, 2014 at 1:35 PM, Thomas Knauth thomas.kna...@gmx.de wrote:
Hi list
https://bugzilla.samba.org/show_bug.cgi?id=8512
--- Comment #4 from John Wiegley jo...@newartisans.com 2013-11-17 09:02:51
UTC ---
Let me add my voice to the mix here. I'm copying a 1GB VOB file from an Ubuntu
ZFS server running Samba 4.1.1, to my Mac OS X 10.9 box.
iperf reports 112 MB/s
On 2013-11-17 4:02 AM, samba-b...@samba.org samba-b...@samba.org wrote:
I'm using gigabit ethernet, obviously, with mtu set to 1500 and no TCP options
other than the following in smb.conf:
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
First, remove these...
These
https://bugzilla.samba.org/show_bug.cgi?id=7195
--- Comment #2 from Loïc Gomez samba-b...@kyoshiro.org 2012-10-23 11:27:38
UTC ---
I ran into a similar issue recently while transferring large files (40GB).
After a few tests, it seems - in my case at least - to be related to the
delta-xfer
On Fri, Aug 10, 2012 at 9:03 AM, T.J. Crowder t...@crowdersoftware.com wrote:
1. Am I correct in inferring that when rsync sees data for a file in the
--partial-dir directory, it applies its delta transfer algorithm to the
partial file?
2. And that this is _instead of_ applying it to the real
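For reference, a minimal sketch of the --partial-dir usage being asked about (paths illustrative): an interrupted run leaves its partial file in the named directory, and a re-run finds it there and uses it in the delta transfer:

  rsync -av --partial-dir=.rsync-partial /src/big.iso user@host:/dst/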
On Sun, Aug 12, 2012 at 10:41 AM, Wayne Davison way...@samba.org wrote:
I have imagined making the code pretend that the partial file and any
destination file are concatenated together for the purpose of generating
checksums.
Actually, that could be bad if the destination and partial file
Hi,
Thanks for that!
On 12 August 2012 18:41, Wayne Davison way...@samba.org wrote:
I have imagined making the code pretend that the partial file and any
destination file are concatenated together for the purpose of generating
checksums. That would allow content references to both files,
I have recently changed the version of rsync I use from the ancient 2.6.6
to 3.0.7. Ever since then it seems to me that I am getting more rsync
failures than before. Hopefully, other people can share their experiences
and point to the cause which, I acknowledge, might be me doing something
https://bugzilla.samba.org/show_bug.cgi?id=7195
Summary: timeout reached while sending checksums for very large
files
Product: rsync
Version: 3.0.7
Platform: All
OS/Version: All
Status: NEW
Severity
On Sun, 2009-12-13 at 07:21 +, tom raschel wrote:
i have to transfer large files each 5-100 GB (mo-fri) over dsl line.
unfortunately dsl lines are often not very stable and i got a broken pipe
error.
(dsl lines are getting a new ip if they are broken or at least after a
reconnect every
Can anyone suggest a good way to speed up rsync on really large files? In
particular, when I rsync the mail spool directory, I have a few users with
inboxes over 1GB and up and it seems to take a very long time to just
compare the files. Maybe it would be faster to copy from scratch for files
On 01/15/2010 07:22 PM, David Trammell wrote:
Can anyone suggest a good way to speed up rsync on really large
files? In particular, when I rsync the mail spool directory, I have a
few users with inboxes over 1GB and up and it seems to take a very
long time to just compare the files. Maybe
on really large files
On 01/15/2010 07:22 PM, David Trammell wrote:
Can anyone suggest a good way to speed up rsync on really large files?
In particular, when I rsync the mail spool directory, I have a few users
with inboxes over 1GB and up and it seems to take a very long time to
just compare
...@consolejunky.net
To: rsync@lists.samba.org
Sent: Friday, January 15, 2010 12:40 PM
Subject: Re: rsync taking a while on really large files
On 01/15/2010 07:22 PM, David Trammell wrote:
Can anyone suggest a good way to speed up rsync on really large
files? In particular, when I rsync the mail spool
On Fri 15 Jan 2010, David Trammell wrote:
I saw the -W option, but I wasn't sure about how it behaves as the
man pages don't have many details, and I thought there might be
other options I missed. For -W the man page just says copy files
whole (w/o delta-xfer algorithm)
Take a moment to
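A minimal sketch of -W for the mail-spool case above (paths illustrative); it disables the delta-transfer algorithm entirely, which often wins when disk I/O rather than the network is the bottleneck:

  rsync -avW /var/spool/mail/ backup:/var/spool/mail/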
We're having a performance issue when attempting to rsync a very large file.
Transfer rate is only 1.5MB/sec. My issue looks very similar to this one:
http://www.mail-archive.com/rsync@lists.samba.org/msg17812.html
In that thread, a 'dynamic_hash.diff' patch was developed to work around
Eric Cron (ericc...@yahoo.com) wrote on 8 January 2010 12:20:
We're having a performance issue when attempting to rsync a very large file.
Transfer rate is only 1.5MB/sec. My issue looks very similar to this one:
http://www.mail-archive.com/rsync@lists.samba.org/msg17812.html
In
rsync to delete the --backup file after a successful sync.
thx
Tom
tom raschel rasc...@edvantice.de wrote in message
news:loom.20091213t075221-...@post.gmane.org...
Hi,
i have to transfer large files each 5-100 GB (mo-fri) over dsl line.
unfortunately dsl lines are often
is not desirable.
Is there a way to tell rsync to delete the --backup file after a successful
sync.
thx
Tom
tom raschel rasc...@edvantice.de wrote in message
news:loom.20091213t075221-...@post.gmane.org...
Hi,
i have to transfer large files each 5-100 GB (mo-fri) over dsl line.
unfortunately dsl
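rsync has no option to delete --backup files after a successful run, so one hedged workaround sketch (paths illustrative) is to point --backup-dir at a scratch directory and remove it only when rsync exits cleanly:

  BKP=$(mktemp -d)
  rsync -a --inplace --backup --backup-dir="$BKP" /src/ /dst/ && rm -rf "$BKP"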
Tom wrote:
to make things more clear
1.)
first transfer is done either at initial setup or with a usb hdd to get
sender and receiver in sync.
2.)
transfer does not stop because rsync had a timeout, it stops because
the dsl
line is broken (which i could see at dyndns)
3.)
if dsl line
On Sat, Dec 12, 2009 at 11:21 PM, tom raschel rasc...@edvantice.de wrote:
so i had a look at --inplace which I thought could do the trick, but inplace
is updating the timestamp and if the script starts a retransfer after a
broken pipe it fails because the --inplace file is newer than the
Thx to all,
it was the -u option which prevented rsync from resuming the file.
Tom
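A sketch of the shape of the fixed command (paths illustrative): without -u/--update, a restarted run no longer skips the destination file just because the interrupted --inplace write left it with a newer timestamp:

  # no -u here, so a restart can reuse the partially-updated destination
  rsync -a --inplace big.img user@host:/backups/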
Tony Abernethy t...@servasoftware.com wrote in message
news:af5ef1769d564645a9acc947375f0d021567087...@winxbeus13.exchange.xchg...
Tom wrote:
to make things more clear
1.)
first transfer is done either a
Hi,
i have to transfer large files each 5-100 GB (mo-fri) over dsl line.
unfortunately dsl lines are often not very stable and i got a broken pipe error.
(dsl lines are getting a new ip if they are broken or at least after a
reconnect every 24 hours)
i had a script which detect the rsync error
tom raschel wrote:
Hi,
i have to transfer large files each 5-100 GB (mo-fri) over dsl line.
unfortunately dsl lines are often not very stable and i got a broken
pipe error.
(dsl lines are getting a new ip if they are broken or at least after a
reconnect every 24 hours)
i had a script
to make things more clear
1.)
first transfer is done either at initial setup or with a usb hdd to get
sender and receiver in sync.
2.)
transfer does not stop because rsync had a timeout, it stops because the dsl
line is broken (which i could see at dyndns)
3.)
if dsl line is stable the
ehar...@lyricsemiconductors.com wrote:
I thought rsync would calculate checksums of large files that have
changed timestamps or filesizes, and send only the chunks which
changed. Is this not correct? My goal is to come up with a
reasonable (fast and efficient) way for me to daily
Yup, by doing --inplace, I got down from 30 mins to 24 mins... So that's
slightly better than resending the whole file again.
However, this doesn't really do what I was hoping to do. Perhaps it can't
be done, or somebody would like to recommend some other product that is more
well suited for
I thought rsync would calculate checksums of large files that have changed
timestamps or filesizes, and send only the chunks which changed. Is this
not correct? My goal is to come up with a reasonable (fast and efficient)
way for me to daily incrementally backup my Parallels virtual machine
a speed
impact for you:
--inplace, so that rsync doesn't create a tmp-copy that is later moved over
the previous file on the target-site.
--whole-file, so that rsync doesn't use delta-transfer but rather copies
the whole file.
Also you may want to separate the small from the large files with:
--min-size
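A sketch of that split (threshold and paths illustrative): one pass for the big files with the options above, and an ordinary pass for everything else:

  rsync -a --min-size=100M --inplace --whole-file /src/ /dst/   # large files
  rsync -a --max-size=100M /src/ /dst/                          # the rest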
the previous file on the target-site.
Yes, this is useful because it avoids both a second reading and the
full write on the destination (in priciple; I didn't bother to check
the actual implementation). For large files with small changes this
option is probably the best. The problem
Removed |the rsync comparison algorithm specially to .mov and .mp4 files
Added   |look into a hierarchical checksum algorithm that would help to
        |efficiently transmit really large files
--- Comment #6 from
Matt McCutchen wrote:
On Thu, 2008-01-24 at 13:54 +0900, Brendan Grieve wrote:
I had a look at rdiff-backup, but I was trying to get something that
spoke native rsync (IE, not to force any change on the client side).
To achieve this, you can have the client push to an
Matt McCutchen wrote:
On Wed, 2008-01-23 at 13:38 +0900, Brendan Grieve wrote:
Let's say
the file, whatever it is, is a 10Gb file, and that some small amount of
data changes in it. This is efficiently sent across by rsync, BUT the
rsync server side will correctly break the
On Thu, 2008-01-24 at 13:54 +0900, Brendan Grieve wrote:
I had a look at rdiff-backup, but I was trying to get something that
spoke native rsync (IE, not to force any change on the client side).
To achieve this, you can have the client push to an rsync daemon and
then have the daemon call
Matt McCutchen wrote:
On Thu, 2008-01-24 at 13:54 +0900, Brendan Grieve wrote:
I had a look at rdiff-backup, but I was trying to get something that
spoke native rsync (IE, not to force any change on the client side).
To achieve this, you can have the client push to an
Hi There,
I've been toying around with the code of rsync on and off for a while,
and I had a thought that I would like some comments on. It's to do with
very large files and disk space.
One of the common uses of rsync is to use it as a backup program. A
client connects to the rsync server
On Wed, 2008-01-23 at 13:38 +0900, Brendan Grieve wrote:
Let's say
the file, whatever it is, is a 10Gb file, and that some small amount of
data changes in it. This is efficiently sent across by rsync, BUT the
rsync server side will correctly break the hard-link and create a new
file with
Hi,
I'm Jordi and I work at the University of Barcelona. I'm trying to make a
backup of our several clusters. In the past I worked with rsync with very
good results.
When I try to back up some large directories (for example 1.5TB with a lot
of large 20GB files) with this command:
rsync -aulHI
found it.
That thread is about an optimization that makes the delta-transfer
algorithm much faster on large files. The optimization is included in
current development versions of rsync 3.0.0. This probably is not
related to the problem with rsync starting over.
Matt
--
To unsubscribe or change
and it seems that I forgot to comment it :(
Sorry for the inconvenience.
but the second time it makes this re-rsync it takes the same time as
the first time... it seems that it doesn't make an incremental backup with our
large files... Do you think that the cause could be the problem of the new
On Wed, 2007-12-12 at 17:01 +0100, gorka barracuda wrote:
but the second time it makes this re-rsync it takes the same time
as the first time... it seems that it doesn't make an incremental
backup with our large files... Do you think that the cause could be
the problem of the new optimized
large files... Do you think that the cause could be
the problem of the new optimized algorithm that you put in rsync
3.0.0?
You are passing -I, which makes rsync transfer all regular files every
time even if they appear to be identical on source and destination.
Rsync does reduce network
I'm running pre4 on a 77GB file. It seems like the hash table is taking a
long time to be built. I'm not sure what is involved in this step but as an
example the following is logged during a run:
send_files(11, priv1.edb)
send_files mapped priv1.edb of size 79187419136
calling match_sums
On Mon, Jan 08, 2007 at 10:16:01AM -0800, Wayne Davison wrote:
And one final thought that occurred to me: it would also be possible
for the sender to segment a really large file into several chunks,
handling each one without overlap, all without the generator or the
receiver knowing that it
On 10/7/07, Wayne Davison [EMAIL PROTECTED] wrote:
On Mon, Jan 08, 2007 at 10:16:01AM -0800, Wayne Davison wrote:
And one final thought that occurred to me: it would also be possible
for the sender to segment a really large file into several chunks,
handling each one without overlap, all
Evan Harris wrote:
Would it make more sense just to make rsync pick a more sane blocksize
for very large files? I say that without knowing how rsync selects
the blocksize, but I'm assuming that if a 65k entry hash table is
getting overloaded, it must be using something way too small.
rsync
On Mon, Jan 08, 2007 at 01:37:45AM -0600, Evan Harris wrote:
I've been playing with rsync and very large files approaching and
surpassing 100GB, and have found that rsync has excessively poor
performance on these very large files, and the performance appears to
degrade the larger
On Mon, 8 Jan 2007, Wayne Davison wrote:
On Mon, Jan 08, 2007 at 01:37:45AM -0600, Evan Harris wrote:
I've been playing with rsync and very large files approaching and
surpassing 100GB, and have found that rsync has excessively poor
performance on these very large files
Hi,
I'm using rsync to backup my data from my Linux machine (SUSE10.1) and
Windows (XP)
I've mounted a windows share on my linux and I'm trying now to copy files to
the windows with rsync.
The windows share is a NTFS filesystem
It's all working except I have an error on large files
Greetings.
Here's my setup:
On the server -
rsync 2.5.6 protocol version 26
stunnel 4.04 on i686-suse-linux-gnu PTHREAD with OpenSSL 0.9.7b
On the client -
rsync version 2.6.6 protocol version 29
stunnel 4.14 on i686-suse-linux-gnu UCONTEXT+POLL+IPv4+LIBWRAP with OpenSSL
0.9.8a
Both ends
I've experienced cases like that.
I've been able to repair the file with an rsync -I, although this
doesn't address the cause of the problem.
On Tue, 2006-10-03 at 14:42 -0500, Mark Osborne wrote:
Hello,
I have run into an issue with rsync that I’m hoping someone can help
with. We are
Hello,
I have run into an issue with rsync
that I’m hoping someone can help with. We are using rsync to mirror
data between a samba share on an internal staging server and our production
ftp servers. The rsync runs from cron every 15 minutes. Occasionally,
the rsync will run while
I can't seem to get rsync to restart where it left off
when I am syncing a large file (>5GB). Below is some
info on what I have been doing. If someone has the
energy to barrel through my comments that would be
great. Out of curiosity is there an alternative to
rsync for large files?
I believe
On Mon 24 Apr 2006, Panos Koutsoyannis wrote:
- I use rsync over ssh to sync over our wan.
- I sync over an 8 hour window every night.
- After 8 hours if the sync is not complete, it gets
killed and restarts the next evening.
How do you kill it? Via kill -9?
- I do notice a bunch
Ah... that makes sense. I do not stop it politely,
you are right. I will fix up the signal handling and
give it a whirl.
Thanks
Panos
--- Paul Slootman [EMAIL PROTECTED] wrote:
On Mon 24 Apr 2006, Panos Koutsoyannis wrote:
- I use rsync over ssh to sync over our wan.
- I sync over an
Just changed my scripts and that was definitely my
problem and it's fixed.
Thank you
panos
--- Panos Koutsoyannis [EMAIL PROTECTED] wrote:
Ah... that makes sense. I do not stop it politely,
you are right. I will fix up the signal handling and
give it a whirl.
Thanks
Panos
https://bugzilla.samba.org/show_bug.cgi?id=3358
[EMAIL PROTECTED] changed:
What       |Removed |Added
Status     |NEW     |RESOLVED
Resolution |
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #6 from [EMAIL PROTECTED] 2006-01-02 10:21 MST ---
This is weird, there is no network activity during this building file list
phase. However, as soon as it is finished, rsync saturates my network.
I thought rsync worked, if
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #7 from [EMAIL PROTECTED] 2006-01-02 11:02 MST ---
(In reply to comment #6)
This is weird, there is no network activity during this building file list
phase. However, as soon as it is finished, rsync saturates my network.
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #8 from [EMAIL PROTECTED] 2006-01-02 11:42 MST ---
What is weird about that?
You wrote in a previous comment when I asked why rsync is considering a file
for 30 minutes if it is not checksumming it:
Because it is
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #2 from [EMAIL PROTECTED] 2005-12-29 13:47 MST ---
Interesting, didn't know that rsync worked that way - I thought the default
behaviour was to only replace the parts of the file that had changed. Anyway,
this motivates a
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #3 from [EMAIL PROTECTED] 2005-12-29 13:48 MST ---
Btw, I am just trying your suggestions. First I will try the inplace switch and
secondly I will test syncing with twice the amount of space required for the
file available.
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #4 from [EMAIL PROTECTED] 2005-12-29 13:54 MST ---
Sorry for spamming, but I just realised what you meant when you wrote:
You can use the --checksum option to avoid this unneeded update at the expense
of a lot of extra disk
https://bugzilla.samba.org/show_bug.cgi?id=3358
Summary: rsync chokes on large files
Product: rsync
Version: 2.6.6
Platform: PPC
OS/Version: Mac OS X
Status: NEW
Severity: major
Priority: P3
Component
https://bugzilla.samba.org/show_bug.cgi?id=3358
--- Comment #1 from [EMAIL PROTECTED] 2005-12-28 11:21 MST ---
The pertinent error is this:
rsync: write failed on /test: No space left on device (28)
That is an error from your OS that indicates that there was no room to write
out
On Mon, Aug 15, 2005 at 12:11:35PM -0400, Sameer Kamat wrote:
My question is, I am observing that the data being sent over is almost
equal to the size of the file. Would an insertion of a few blocks in a
binary file move the alignment of the entire file and cause this to
happen?
That depends
Hello,
I have a few files of the order
of 50G that get synchronized to a remote server over ssh. These files have
binary data and the
change before the next time they are synchronized over. My
question is, I am observing that the data being sent over is almost
equal to the size of the
On Wed, Jul 27, 2005 at 04:29:46PM -0700, Todd Papaioannou wrote:
Not sure I have the mojo to mess with the patches though!
I applied the --append patch to the CVS source, so if you want to snag
version 2.6.7cvs, you can grab it via the latest nightly tar file:
Hi,
My situation is that I would like to use rsync to copy very large files
within my network/systems. Specifically, these files are
in the order of 10-100GB. Needless to say, I would like to be able
to restart a transfer if it only partially succeeded, but NOT repeat
the work already done
Woops! In my last email, I meant to say the second command
was:
rsync --no-whole-file --progress theFile /path/to/dest
Todd
Hi,
My situation is that I would like to use rsync to copy very large files
within my network/systems. Specifically, these files are
in the order of 10-100GB. Needless
On Wed, Jul 27, 2005 at 01:50:39PM -0700, Todd Papaioannou wrote:
where both theFile and /path/to/dest are local drives. [...]
rsync -u --no-whole-file --progress theFile /path/to/dest
When using local drives, the rsync protocol (--no-whole-file) slows
things down, so you don't want to use it
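So for this local case a sketch of the faster form (path names as in the thread) simply drops --no-whole-file and lets rsync default to whole-file copying:

  rsync -u --progress theFile /path/to/dest   # local: whole-file by default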
Wayne,
Thanks for the swift answers and insight.
However, the stats shown during the progress seem to imply that the
whole transfer is starting again.
Yes, that's what rsync does. It retransfers the whole file, but it
uses the local data to make the amount of data flowing over
the
I'm trying to rsync a very large (62gig) file from one machine to another as
part of a nightly backup. If the file does not exist at the destination, it
takes about 2.5 hours to copy in my environment.
But, if the file does exist and --inplace is specified, and the file
contents differ, rsync