Re: [PATCH] Reduce memory usage

2021-10-02 Thread devzero--- via rsync
>In the exchange I argued that proper use of ram as a buffer would have cut down backup time to minutes instead of days.

could you give an example where rsync slows things down so much due to RAM constraints or inefficient RAM use?


please mind that disk bandwidth and file-copy bandwidth are not the same thing. random I/O and seek time are the culprit.

why should rsync use RAM to buffer the data it copies, if the Linux kernel / VM subsystem already does this via the page cache?
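That kernel-side buffering is easy to see directly. A rough demo (file name and sizes are arbitrary; with a truly cold cache the gap is far larger, and a fair benchmark would drop caches first, which needs root):

```shell
# Rough demo that the kernel page cache already buffers file data:
# the second read of the same file is served from RAM, not disk.
dd if=/dev/urandom of=big.bin bs=1M count=64 2>/dev/null
sync                                   # flush the writes to storage

t0=$(date +%s%N)
cat big.bin > /dev/null                # first read: may touch the disk
echo "first read:  $(( ($(date +%s%N) - t0) / 1000000 )) ms"

t0=$(date +%s%N)
cat big.bin > /dev/null                # second read: warm page cache
echo "second read: $(( ($(date +%s%N) - t0) / 1000000 )) ms"
```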

roland
 

Sent: Saturday, 02 October 2021 at 12:07
From: "Rupert Gallagher via rsync" 
To: makov...@gmail.com, rsync@lists.samba.org
Subject: Re: [PATCH] Reduce memory usage

If you look at my previous exchange on the list, I raised the need for more RAM usage via a tool option. In that exchange I argued that proper use of RAM as a buffer would have cut backup time down from days to minutes. At the time, my proposal was dismissed by someone saying that rsync uses as much RAM as it needs. I still feel the need to free rsync from this mindless constraint, while also welcoming RAM-usage optimisations such as yours in this patch. How hard can it be to allow rsync to use 1 GB of RAM instead of 100 MB? The benefit would be huge. In my case, where a Supermicro server uses a shared bus to transfer data between two disks, the overhead caused by frequent small-buffer I/O is so high that backup time is still huge. And that is on server hardware! PCs and laptops are even worse.

RG




 Original Message 
On Sep 26, 2021, 13:54, Jindřich Makovička via rsync <rsync@lists.samba.org> wrote:
Hi,

When using rsync to back up the file system on my laptop, which contains a pretty much default Linux desktop, I was wondering why rsync uses the more than 100MB of RAM it allocates.

It turned out that most of the memory is used for the arrays of file_struct pointers, most of which end up unused - much more than the actual file_struct entries. In my case, the peak usage was 135MB of pointers, and just 1.5MB of the file_struct entries themselves.

The problem seems to be that the default file_list allocation parameters predate incremental recursion, which allocates a huge number of small file lists, whereas AFAICS rsync originally allocated just one large list.

Applying the attached patch, which reduces the default allocation to 32 pointers, and preallocates 32K pointers only for the main file lists in send_file_list and recv_file_list, reduces the peak memory usage in my case from 142MB to 12MB.
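As a back-of-envelope illustration of why the pointer arrays dwarf the entries themselves (the list count and per-list preallocation sizes below are made-up examples, not rsync's actual historical defaults):

```shell
# Memory held by file-list pointer arrays alone, assuming 8-byte
# pointers and the many small per-directory lists that incremental
# recursion produces. All counts are illustrative.
ptr=8          # bytes per file_struct pointer on a 64-bit host
lists=50000    # per-directory file lists in one run
big=32768      # preallocating 32K pointers for every list
small=32       # preallocating only 32, as in the patched default

echo "32768 pointers/list: $(( lists * big * ptr / 1048576 )) MiB"
echo "32 pointers/list:    $(( lists * small * ptr / 1048576 )) MiB"
```

The arrays scale with the number of lists times the preallocation size, while the file_struct entries scale only with the number of files actually present, which is why shrinking the default preallocation collapses the peak.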

Regards,
--
Jindřich Makovička




-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: rsync remote raw block device with --inplace

2018-12-30 Thread devzero--- via rsync
maybe this could also be useful:

https://github.com/RyanHow/block2file


> Sent: Sunday, 30 December 2018 at 22:52
> From: "Rolf Fokkens via rsync" 
> To: rsync@lists.samba.org
> Subject: Re: rsync remote raw block device with --inplace
>
> It was brought up before indeed: 
> https://lists.samba.org/archive/rsync/2012-June/027680.html
> 
> On 12/30/18 9:50 PM, devzero--- via rsync wrote:
> >> There have been addons to rsync in the past to do that but rsync really
> >> isn't the correct tool for the job.
> > why not the correct tool?
> >
> > if rsync can do a great job of keeping two large files in sync between
> > source and destination (using --inplace), why should it (generally
> > speaking) not also be used to keep two block devices in sync?
> >
> > maybe these links are interesting in that context:
> >
> > https://lists.samba.org/archive/rsync/2010-June/025164.html
> >
> > https://github.com/dop251/diskrsync
> >
> > roland
> >
> >> Sent: Sunday, 30 December 2018 at 19:53
> >> From: "Kevin Korb via rsync" 
> >> To: rsync@lists.samba.org
> >> Subject: Re: rsync remote raw block device with --inplace
> >>
> >> There have been addons to rsync in the past to do that but rsync really
> >> isn't the correct tool for the job.  Neither is dd.
> >>
> >> The right tool is something that understands the filesystem within the
> >> block device such as ntfsclone (what I use) or partimage (if you have
> >> ever used Clonezilla this is what it uses).  These will know how to skip
> >> all the empty parts of the filesystem and will still be capable of
> >> restoring a complete image in a bare metal restore.  You can still use
> >> dd to snag a copy of the MBR since that is outside of any filesystems.
> >>
> >> Also, if you do have to resort to a plain image use ddrescue instead of
> >> dd.  It has a status screen and it can resume as long as you used a log
> >> file when you ran it.
> >>
> >> On 12/30/18 1:45 PM, Steve Newcomb via rsync wrote:
> >>> It would be very nice to be able to rsync the raw data content of, e.g.,
> >>> a non-mounted disk partition, particularly in combination with --inplace.
> >>>
> >>> Our reality: several dual-boot machines running Windows during the day
> >>> and Linux at night, during backups.  Windows is very tedious and iffy to
> >>> reinstall without a raw disk image to start from.  Disks fail, and
> >>> the ensuing downtime must be minimized.
> >>>
> >>> We're using dd for this.  Most of the nightly work is redundant and
> >>> wasteful of elapsed time and storage.  Storage is cheap, but it's not
> >>> *that* cheap.  Elapsed time is priceless.
> >>>
> >>> Rsync refuses to back up raw devices, and even raw character devices,
> >>> with the message "skipping non-regular file" (I think the relevant
> >>> message is in generator.c).
> >>>
> >>> In Linux, anyway, the "raw" command allows a block device to be bound as
> >>> a character device, and then even a "cat" command can read the raw data
> >>> of the block device.  So why does rsync refuse to copy such content, or
> >>> why is it a bad idea, or what rsync doctrine conflicts with it?  I agree
> >>> there are security concerns here, but rsync already disallows some of
> >>> its functions unless the super user is requesting them.
> >>>
> >>>
> >> -- 
> >> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> >>Kevin Korb  Phone:(407) 252-6853
> >>Systems Administrator   Internet:
> >>FutureQuest, Inc.   ke...@futurequest.net  (work)
> >>Orlando, Floridak...@sanitarium.net (personal)
> >>Web page:   https://sanitarium.net/
> >>PGP public key available on web site.
> >> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> >>
> 
> 
> 


Re: rsync remote raw block device with --inplace

2018-12-30 Thread devzero--- via rsync
> There have been addons to rsync in the past to do that but rsync really
> isn't the correct tool for the job.

why not the correct tool?

if rsync can do a great job of keeping two large files in sync between
source and destination (using --inplace), why should it (generally
speaking) not also be used to keep two block devices in sync?

maybe these links are interesting in that context:

https://lists.samba.org/archive/rsync/2010-June/025164.html

https://github.com/dop251/diskrsync

roland

> Sent: Sunday, 30 December 2018 at 19:53
> From: "Kevin Korb via rsync" 
> To: rsync@lists.samba.org
> Subject: Re: rsync remote raw block device with --inplace
>
> There have been addons to rsync in the past to do that but rsync really
> isn't the correct tool for the job.  Neither is dd.
> 
> The right tool is something that understands the filesystem within the
> block device such as ntfsclone (what I use) or partimage (if you have
> ever used Clonezilla this is what it uses).  These will know how to skip
> all the empty parts of the filesystem and will still be capable of
> restoring a complete image in a bare metal restore.  You can still use
> dd to snag a copy of the MBR since that is outside of any filesystems.
> 
> Also, if you do have to resort to a plain image use ddrescue instead of
> dd.  It has a status screen and it can resume as long as you used a log
> file when you ran it.
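The division of labor Kevin describes (dd for the boot sector, a filesystem-aware imager for the partition contents) can be sketched as below. A file-backed stand-in disk is used so the dd step is safe to run anywhere; the ntfsclone/ddrescue invocations are illustrative only and assume hypothetical device names:

```shell
# File-backed "disk" as a safe stand-in for a real /dev/sdX.
DISK=disk.img
truncate -s 1M "$DISK"

# The MBR is the first 512-byte sector, outside any filesystem,
# so plain dd is the right tool for that part:
dd if="$DISK" of=mbr.img bs=512 count=1 2>/dev/null
echo "saved $(wc -c < mbr.img) bytes of MBR"

# On a real machine the partition contents would then be imaged with
# a filesystem-aware tool (illustrative, not run here):
#   ntfsclone --save-image --output ntfs.img /dev/sdX1
#   ddrescue /dev/sdX1 part.img rescue.map   # resumable raw fallback
```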
> 
> On 12/30/18 1:45 PM, Steve Newcomb via rsync wrote:
> > It would be very nice to be able to rsync the raw data content of, e.g.,
> > a non-mounted disk partition, particularly in combination with --inplace.
> > 
> > Our reality: several dual-boot machines running Windows during the day
> > and Linux at night, during backups.  Windows is very tedious and iffy to
> > reinstall without a raw disk image to start from.  Disks fail, and
> > the ensuing downtime must be minimized. 
> > 
> > We're using dd for this.  Most of the nightly work is redundant and
> > wasteful of elapsed time and storage.  Storage is cheap, but it's not
> > *that* cheap.  Elapsed time is priceless.
> > 
> > Rsync refuses to back up raw devices, and even raw character devices,
> > with the message "skipping non-regular file" (I think the relevant
> > message is in generator.c). 
> > 
> > In Linux, anyway, the "raw" command allows a block device to be bound as
> > a character device, and then even a "cat" command can read the raw data
> > of the block device.  So why does rsync refuse to copy such content, or
> > why is it a bad idea, or what rsync doctrine conflicts with it?  I agree
> > there are security concerns here, but rsync already disallows some of
> > its functions unless the super user is requesting them. 
> > 
> > 
> 
> -- 
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>   Kevin Korb  Phone:(407) 252-6853
>   Systems Administrator   Internet:
>   FutureQuest, Inc.   ke...@futurequest.net  (work)
>   Orlando, Floridak...@sanitarium.net (personal)
>   Web page:   https://sanitarium.net/
>   PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> 


Re: link-dest and batch-file

2018-07-20 Thread devzero--- via rsync
But don't forget --inplace; otherwise the snapshots would not be efficient.


> Sent: Wednesday, 18 July 2018 at 21:53
> From: "Kevin Korb via rsync" 
> To: rsync@lists.samba.org
> Subject: Re: link-dest and batch-file
>
> If you are using ZFS then forget --link-dest.  Just rsync to the same
> zfs mount every time and do a zfs snapshot after the rsync finishes.
> Then delete old backups with a zfs destroy.
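The rotation step of that workflow might look like the sketch below. The dataset and snapshot names are made up, and the `zfs list` output is simulated with a shell variable so the selection logic runs without a pool; in real use each printed name would be fed to `xargs -r -n1 zfs destroy`:

```shell
# Oldest-first snapshot list, as
#   zfs list -t snapshot -o name -s creation -H tank/backup
# would print it (names are hypothetical):
snaps='tank/backup@2018-07-15
tank/backup@2018-07-16
tank/backup@2018-07-17
tank/backup@2018-07-18'

keep=2   # how many of the newest snapshots to retain

# head -n -N (GNU coreutils) drops the last N lines, i.e. the newest
# snapshots, leaving only the ones due for destruction:
printf '%s\n' "$snaps" | head -n -"$keep"
```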
> 
> On 07/18/2018 03:42 PM, Дугин Сергей via rsync wrote:
> > Hello.
> > 
> > I need today's backup to save the file metadata to a file, so that
> > tomorrow, when creating a backup, I could pass that metadata file
> > instead of the link-dest option; rsync would then not scan the folder
> > given in link-dest, but simply read the information about it from the
> > metadata file. This would greatly save time and load on the backup
> > server.
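The metadata-file idea can be sketched with GNU find's `-printf` (illustration only; stock rsync has no option to consume such a manifest, so this just shows what the saved metadata could look like):

```shell
# Build a tiny demo tree, then record one line per file with size,
# mtime (epoch) and path - the data that --link-dest would otherwise
# force rsync to re-stat() on every run.
mkdir -p demo/tree
printf 'hello\n' > demo/tree/a.txt

find demo/tree -type f -printf '%s %T@ %p\n' > demo/manifest
cat demo/manifest
```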
> > 
> > I do not delete through rm -rf, but delete the ZFS partition; you can
> > also delete via find -delete, and there are other ways.
> > 
> > On 26 June 2018 at 22:47:56:
> > 
> >> I don't believe there is anything you can do with the batch options for
> >> this.  If you added a --write-batch to each of those you would get 3
> >> batch files that wouldn't be read without a --read-batch.  If you also
> >> did a --read-batch that would contain differences between a backup and
> >> the backup before it so rsync would still have to read the backup before
> >> it to understand the batch (and this would continue on to the oldest
> >> backup making the problem worse).
> > 
> >> Anyway, what you were asking for sounds a lot like rdiff-backup.  I
> >> didn't like it myself but maybe you would.
> > 
> >> BTW, my experience with many millions of files vs rsync --link-dest is
> >> that running the backup isn't a problem.  The problem came when it was
> >> time to delete the oldest backup.  An rm -rf took a lot longer than an
> >> rsync.  If you haven't gotten there yet maybe you should try one and see
> >> if it is going to be as big a problem as I had.
> > 
> >> On 06/26/2018 03:02 PM, Дугин Сергей via rsync wrote:
> >>> Hello.
> >>>
> >>> I am launching a cron bash script that does the following:
> >>>
> >>> Day 1
> >>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-25 
> >>> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-26
> >>>
> >>> Day 2
> >>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-26 
> >>> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-27
> >>>
> >>> Day 3
> >>> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-27 
> >>> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-28
> >>>
> >>> and etc.
> >>>
> >>>
> >>> The backup server experiences a large flow of data when the number
> >>> of files exceeds millions, as rsync scans the previous day's files
> >>> because of the link-dest option. Is it possible to use the
> >>> batch-file mechanism in such a way that, when using the link-dest
> >>> option, the metadata file from the current day could be used the
> >>> following day, without having to scan the folder that is linked in
> >>> link-dest?
> >>>
> >>>
> >>> Yours faithfully,
> >>>   Sergey Dugin   mailto:d...@qwarta.ru
> >>>  QWARTA
> >>>
> >>>
> > 
> > 
> > 
> > 
> 
> -- 
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>   Kevin Korb  Phone:(407) 252-6853
>   Systems Administrator   Internet:
>   FutureQuest, Inc.   ke...@futurequest.net  (work)
>   Orlando, Floridak...@sanitarium.net (personal)
>   Web page:   https://sanitarium.net/
>   PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> 



Re: rsync very very slow with multiple instances at the same time.

2018-03-23 Thread devzero--- via rsync
>The difference is not crazy. But the find itself takes so much time !

 

38m for a find across 2.8M files looks a little slow; I'm getting 14k lines/s when doing "find . | pv -l -a >/dev/null" on my btrfs volume, located via iSCSI on a Synology storage box (ordinary 3.5" SATA disks) - and that while the VM I'm running this in is being backed up at the hypervisor level, i.e. there is additional load on the storage...

 

anyway, you are comparing apples to oranges here. I guess the iSCSI storage isn't SSD, is it? And there is more: iSCSI introduces additional latencies...

 

regards

roland

 

 


Sent: Friday, 23 March 2018 at 17:52
From: "Jayce Piel via rsync" 
To: "Kevin Korb via rsync" 
Subject: Re: rsync very very slow with multiple instances at the same time.


Ok, so i did some tests.



find /path -type f -ls > /dev/null




 

 

First, on my local SSD disk (1.9 million files):

1 find : 

real 2m16.743s
user 0m7.607s
sys 0m45.952s
 

10 concurrent finds (approx same results for each)  :

real 4m48.629s
user 0m11.013s
sys 2m0.288s
 

Almost double the time, which is somewhat logical.

 

 

Now the same test on my server, on the iSCSI disk, when there is no other activity (2.8 million files):

1 find :


real 38m54.964s

user 0m35.626s

sys 4m33.593s


 

10 concurrent finds :


real 76m34.781s

user 0m47.848s

sys 5m42.034s


 

The difference is not crazy. But the find itself takes so much time!

I now see i have a real issue on that server. Transfer time is not a problem, but access time seems to be terribly slow.


 

On 21 March 2018 at 16:59, Jayce Piel wrote:
 

Thanks for the answer.

I will do some tests of the stat() thing at a time when there is nothing else running.
 

For the cipher, I tried to find the lowest common denominator between the clients and the server. The server is the older one for now.

I used to use -c arcfour-128 before it was no longer an option.

 

The 2 ciphers you are mentioning are available on the clients but not on the server, sadly.

But i keep this in mind for when i will upgrade the server (or move the destination backups).

 

 

On 21 March 2018 at 16:39, Kevin Korb via rsync wrote:
 

When rsync has a lot of files to look through but not many to actually
transfer most of the work will be gathering information from the stat()
function call.  You can simulate just the stat call with: find /path
-type f -ls > /dev/null
You can run one then a few of those to see if your storage has issues
with lots of stats all at once.
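The one-vs-several comparison can be wrapped in a small script like this (path and concurrency count are placeholders; wall-clock seconds are coarse, but enough to show contention on a slow store):

```shell
# Time a single stat-heavy find, then several concurrent ones over the
# same tree, to see how the storage copes with parallel metadata load.
path=${1:-.}   # tree to scan
n=${2:-4}      # number of concurrent finds

t0=$(date +%s)
find "$path" -type f -ls > /dev/null
echo "single find: $(( $(date +%s) - t0 ))s"

t0=$(date +%s)
for _ in $(seq "$n"); do
    find "$path" -type f -ls > /dev/null &
done
wait
echo "$n concurrent finds: $(( $(date +%s) - t0 ))s"
```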

Also, why -c aes128-ctr ?  If your OpenSSH is current then the default
of chacha20-poly1...@openssh.com is much faster.  If your systems have
AES-NI in the CPU then aes128-...@openssh.com is much faster.  If your
OpenSSH is too old for chacha to be the default then aes128-ctr was the
default anyway.

On 03/21/2018 09:49 AM, Jayce Piel via rsync wrote:


Here are my options :

/usr/local/bin/rsync3 --rsync-path=/usr/local/bin/rsync3 -aHXxvE --stats
--numeric-ids --delete-excluded --delete-before --human-readable
--rsh="ssh -T -c aes128-ctr -o Compression=no -x" -z
--skip-compress=gz/bz2/jpg/jpeg/ogg/mp3/mp4/mov/avi/vmdk/vmem --inplace
--chmod=u+w --timeout=60 --exclude='Caches' --exclude='SyncService'
--exclude='.FileSync' --exclude='IMAP*' --exclude='.Trash' --exclude='Saved
Application State' --exclude='Autosave Information'
--exclude-from=/Users/pabittan/.UserSync/exclude-list --max-size=1000M
/Users/pabittan/ xserve.local.fftir:./
 



 




-- 


Jayce Piel   —    jayce.p...@gmail.com  --  0616762431

   Responsable Informatique F.F.Tir








 





Re: rsync very very slow with multiple instances at the same time.

2018-03-21 Thread devzero--- via rsync

most likely, you overstrain your NAS with random disk IOPS. furthermore, iSCSI is an additional throttle here, making things worse.

 

your issue is probably centered around metadata reads/latency...

 

have a look at I/O wait on the server/NAS side...

 

regards

roland

 

 

Sent: Wednesday, 21 March 2018 at 14:49
From: "Jayce Piel via rsync" 
To: rsync@lists.samba.org
Subject: rsync very very slow with multiple instances at the same time.



I'm creating a new thread because the issue is not really the same, but I copy below the thread that made me join the list.

 

My issue is not really that it waits before starting to copy, but a general performance problem, especially when multiple rsync instances run at the same time.

 

Here is my situation :

I have multiple clients (around 20) with users, and I want to rsync their home dirs to my server to keep a copy of their local files.

On the server, the files are hosted on an iSCSI volume (on a Thecus RAID) where I never had any performance issues before.

 

When there is only one client, I have no real performance issues. Even with a very large number of files (some users have up to ), the sync is done in a few minutes if there are not too many changed files.

But when there are 3 or more rsync instances running at the same time, they all become very, very slow and can take a few hours to complete.

 

Here are my options :

 

/usr/local/bin/rsync3 --rsync-path=/usr/local/bin/rsync3 -aHXxvE --stats --numeric-ids --delete-excluded --delete-before --human-readable --rsh="ssh -T -c aes128-ctr -o Compression=no -x" -z --skip-compress=gz/bz2/jpg/jpeg/ogg/mp3/mp4/mov/avi/vmdk/vmem --inplace --chmod=u+w --timeout=60 --exclude='Caches' --exclude='SyncService' --exclude='.FileSync' --exclude='IMAP*' --exclude='.Trash' --exclude='Saved Application State' --exclude='Autosave Information' --exclude-from=/Users/pabittan/.UserSync/exclude-list --max-size=1000M /Users/pabittan/ xserve.local.fftir:./

 
 

Here is the version I use (self-compiled):


$ /usr/local/bin/rsync3 --version

rsync  version 3.1.2-jsp  protocol version 31

Copyright (C) 1996-2015 by Andrew Tridgell, Wayne Davison, and others.

Web site: http://rsync.samba.org/

Capabilities:

    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,

    socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,

    append, ACLs, xattrs, iconv, symtimes, no prealloc, file-flags

 

I had to put in place a sort of queue that allows no more than 4 simultaneous rsync runs, to be sure each client is backed up at least once a day. Even limited to 4, some clients wait hours before their backup starts.

 

I'm open to any help to improve performance. (I have put the whole script calling rsync on GitHub: https://github.com/jpiel/UserSync )

 

PS: 

I checked: the CPU is not under pressure; each rsync instance uses between 2 and 5% CPU, and overall CPU usage is about 30%.

I also checked the network, and it isn't an issue either.

Disk usage doesn't seem to be under high load either... (peaks at 300 IO/s)

 


 

On 20 March 2018 at 13:00, rsync-requ...@lists.samba.org wrote:
 


From: Kevin Korb 

Subject: Re: Very slow to start sync with millions of directories and files

Date: 19 March 2018 at 15:33:31 UTC+1

To: rsync@lists.samba.org


The performance of rsync with a huge number of files is greatly
determined by every option you are using.  So, what is your whole
command line?

On 03/19/2018 09:05 AM, Bráulio Bhavamitra via rsync wrote:

Hi all,
 
I'm using rsync 3 to copy all files from one disk to another. The files
were written by Minio, an S3-compatible open-source backend.

The number of files is in the tens of millions, almost each of them in
its own directory.

Rsync takes a long time, sometimes several hours, to even start syncing
files. I already see a few reasons:
- it first creates all the directories to put files in, which could be done
along with the sync
- it needs to generate the list of all files before starting, and cannot
start syncing while keeping the list generation in a separate thread.

Cheers,
bráulio

 

-- 
~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
Kevin Korb   Phone:    (407) 252-6853
Systems Administrator  Internet:
FutureQuest, Inc.  ke...@futurequest.net  (work)
Orlando, Florida  k...@sanitarium.net (personal)
Web page:   http://www.sanitarium.net/
PGP public key available on web site.
~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,


 




-- 


Jayce Piel   —    jayce.p...@gmail.com  --  0616762431

   Responsable Informatique F.F.Tir






Re: some files vanished before... but which?

2017-11-15 Thread devzero--- via rsync
good hint - but not exactly. 

i found that for the OSX host "gonzo", the vanished files (not the warning 
message itself) appear on stdout - for Linux hosts they _both_ appear on stderr, 
and nothing on stdout (rsync.err.#num is stderr, rsync.log is stdout)

maybe it's because backupvm1->spacewalk is 3.0.9->3.0.9 and backupvm1->gonzo is 
3.0.9->3.1.2 (installed from the brew repo)?

[root@backupvm1 backup]# grep -i vanished spacewalk/rsync.*
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/_80a2.cfs"
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/_80a2_6.del"
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/_80a3.cfs"
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/_80a4.cfs"
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/_80a5.cfs"
spacewalk/rsync.err.1:file has vanished: 
"/var/lib/rhn/search/indexes/server/segments_n801"
spacewalk/rsync.err.1:rsync warning: some files vanished before they could be 
transferred (code 24) at main.c(1518) [generator=3.0.9]


[root@backupvm1 backup]# grep -i vanished gonzo/rsync.*
gonzo/rsync.err.1:rsync warning: some files vanished before they could be 
transferred (code 24) at main.c(1518) [generator=3.0.9]
gonzo/rsync.log:file has vanished: 
"/Library/Server/Mail/Data/spool/maildrop/AFD811E2D07C"
gonzo/rsync.log:file has vanished: 
"/Library/Server/Wiki/Database.xpg/Cluster.pg/pg_xlog/0001004F0025"
gonzo/rsync.log:file has vanished: 
"/Library/Server/Wiki/Database.xpg/Cluster.pg/pg_xlog/archive_status/0001004F0025.done"
gonzo/rsync.log:file has vanished: 
"/Library/Server/Wiki/Database.xpg/backup/0001004F0026.partial"




> Sent: Wednesday, 15 November 2017 at 14:56
> From: "Kevin Korb via rsync" <rsync@lists.samba.org>
> To: rsync@lists.samba.org
> Subject: Re: some files vanished before... but which?
>
> That is what rsync says at the end of the run in case you missed the
> file names fly by in the other output.  The names should be in the rest
> of the output from when rsync found the problem.
> 
> On 11/15/2017 04:53 AM, devzero--- via rsync wrote:
> > Hi !
> > 
> > I'm getting "rsync warning: some files vanished before they could be 
> > transferred (code 24) at main.c(1518) [generator=3.0.9]" on one of the 
> > systems I'm backing up with rsync, but rsync doesn't show WHICH files.
> > 
> > Does anybody have a clue under which circumstances rsync doesn't show 
> > these?
> > 
> > regards
> > Roland
> > 
> > 
> 
> -- 
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>   Kevin Korb  Phone:(407) 252-6853
>   Systems Administrator   Internet:
>   FutureQuest, Inc.   ke...@futurequest.net  (work)
>   Orlando, Floridak...@sanitarium.net (personal)
>   Web page:   http://www.sanitarium.net/
>   PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> 



some files vanished before... but which?

2017-11-15 Thread devzero--- via rsync
Hi !

I'm getting "rsync warning: some files vanished before they could be 
transferred (code 24) at main.c(1518) [generator=3.0.9]" on one of the systems 
I'm backing up with rsync, but rsync doesn't show WHICH files.

Does anybody have a clue under which circumstances rsync doesn't show these?

regards
Roland




Re: rsync buffer overflow detected

2017-04-16 Thread devzero--- via rsync
What's the value of "i" when this happens and what are the system ulimit values 
for the user running that?

Roland



> Sent: Friday, 14 April 2017 at 19:22
> From: "Boris Savelev via rsync" 
> To: rsync@lists.samba.org
> Subject: rsync buffer overflow detected
>
> Hello!
> 
> I use rsync from python on my Debian Jessie amd64 and get this error:
> *** buffer overflow detected ***: /rsync terminated
> === Backtrace: =
> /lib/x86_64-linux-gnu/libc.so.6(+0x731af)[0x778971af]
> /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7791caa7]
> /lib/x86_64-linux-gnu/libc.so.6(+0xf6cc0)[0x7791acc0]
> /lib/x86_64-linux-gnu/libc.so.6(+0xf8a17)[0x7791ca17]
> /rsync(+0x30c78)[0x55584c78]
> /rsync(+0x31cfe)[0x55585cfe]
> /rsync(+0x31ef6)[0x55585ef6]
> /rsync(+0x336ed)[0x555876ed]
> /rsync(+0x22417)[0x55576417]
> /rsync(+0x2395e)[0x5557795e]
> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x77845b45]
> /rsync(+0x7f89)[0xbf89]
> 
> I guess the problem is too many open fds.
> Steps to reproduce (STR): a small Python script:
> import os
> import subprocess
> 
> F = 'test'
> OPENS = 1600
> 
> cmd = [
> #'gdb', '--args',
> './rsync',
> '-aviH',
> '/etc/passwd',
> '/tmp/passwd'
> ]
> 
> for i in xrange(OPENS):
>     fd = os.open(F, os.O_WRONLY | os.O_CREAT)
> print(cmd)
> subprocess.check_call(cmd)
> 
> I rebuilt rsync-3.1.1 from the Debian sources with debug and -O1, and got 
> this bt from gdb:
> (gdb) bt
> #0  0x77859067 in __GI_raise (sig=sig@entry=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:56
> #1  0x7785a448 in __GI_abort () at abort.c:89
> #2  0x778971b4 in __libc_message (do_abort=do_abort@entry=2,
> fmt=fmt@entry=0x77989cb3 "*** %s ***: %s terminated\n")
> at ../sysdeps/posix/libc_fatal.c:175
> #3  0x7791caa7 in __GI___fortify_fail
> (msg=msg@entry=0x77989c4a "buffer overflow detected") at
> fortify_fail.c:31
> #4  0x7791acc0 in __GI___chk_fail () at chk_fail.c:28
> #5  0x7791ca17 in __fdelt_chk (d=d@entry=1606) at fdelt_chk.c:25
> #6  0x55584c78 in safe_read (fd=fd@entry=1606,
> buf=buf@entry=0x7fffa810 "\037", len=len@entry=4) at io.c:245
> #7  0x55585cfe in read_buf (f=f@entry=1606,
> buf=buf@entry=0x7fffa810 "\037", len=len@entry=4) at io.c:1815
> #8  0x55585ef6 in read_int (f=f@entry=1606) at io.c:1711
> #9  0x555876ed in setup_protocol (f_out=1605, f_in=1606) at 
> compat.c:158
> #10 0x55576417 in client_run (f_in=1606, f_out=1605,
> pid=24793, argc=1, argv=0x557d5240) at main.c:1128
> #11 0x5557795e in start_client (argv=0x557d5240, argc=1)
> at main.c:1423
> #12 main (argc=2, argv=0x557d5240) at main.c:1651
> 
> It looks like a bug, but I'm not sure)
> 
> --
> Boris
> 
>



Aw: Re: rsync show files changed during transfer - how?

2016-12-20 Thread devzero
interesting.

apparently, there is some logic inside which checks for change during save.

i also found this one:

https://git.samba.org/rsync.git/?p=rsync.git;a=commit;h=e2bc4126691bdbc8ab78e6e56c72bf1d8bc51168

i'm curious why it doesn't handle my test case then.

regards
roland


> Sent: Monday, 19 December 2016 at 17:09
> From: "Fabian Cenedese" 
> To: rsync@lists.samba.org
> Subject: Re: Aw: rsync show files changed during transfer - how?
>
> At 17:00 19.12.2016, devz...@web.de wrote:
> 
> >>http://olstrans.sourceforge.net/release/OLS2000-rsync/OLS2000-rsync.html
> >
> > But the filename twice can happen under other circumstances; if 
> >you've seen this happen, it's almost certainly because the file changed 
> >during transfer. Rsync does no locking. Which means that: if you are 
> >modifying a file while it's being transferred, then probably the checksum 
> >will fail and it'll go round again. And if it goes around twice, and it 
> >still fails, then it prints a message saying; Error, checksum failed, file 
> >changed during transfer? And it's probably a file like a log file that's 
> >being constantly updated and so the checksums didn't match because it's 
> >never going to be able to get it exact; it means that what you've got on the 
> >other end is something which will approximate some snapshot of the file, but 
> >because it's not doing any locking, it can't guarantee that it's got a 
> >particular snapshot of the file, because you can't have an atomic read of 
> >the whole file. [31m, 49s] 
> >nope, this is wrong, there is no message "checksum failed"
> 
> Actually rsync does that (from a log on our server):
> ...
> PUBLIC_SERVER_BACKUP/wiki/data/index/i10.idx
> WARNING: PUBLIC_SERVER_BACKUP/wiki/data/index/i10.idx failed verification -- 
> update retained (will try again).
> ...
> PUBLIC_SERVER_BACKUP/wiki/data/index/i10.idx
> 
> So it did recognize a change and synched it a second time. Don't know what 
> happens
> if the file changes again though.
> 
> >to be a valuable backup solution, i`d like some networker a`like behaviour 
> >which gives "file changed during save"
> > 
> >i wonder what`s so difficult for rsync to "look" at the timestamp or 
> >checksum a second time after transfer and if it changed it should spit out a 
> >warning
> > 
> >isn`t this something which _could_ be implemented, if somebody is able to do 
> >ist ?
> 
> Maybe it's a question of informational flags (-v, --progress, --stats etc).
> 
> bye  Fabi
> 
> 


Aw: rsync show files changed during transfer - how?

2016-12-19 Thread devzero

>http://olstrans.sourceforge.net/release/OLS2000-rsync/OLS2000-rsync.html


But the filename twice can happen under other circumstances; if you've seen this happen, it's almost certainly because the file changed during transfer. Rsync does no locking. Which means that: if you are modifying a file while it's being transferred, then probably the checksum will fail and it'll go round again. And if it goes around twice, and it still fails, then it prints a message saying; Error, checksum failed, file changed during transfer? And it's probably a file like a log file that's being constantly updated and so the checksums didn't match because it's never going to be able to get it exact; it means that what you've got on the other end is something which will approximate some snapshot of the file, but because it's not doing any locking, it can't guarantee that it's got a particular snapshot of the file, because you can't have an atomic read of the whole file. [31m, 49s]



nope, this is wrong, there is no message "checksum failed"

 

i did some testing - on the system to be backed up i'm writing a file like this:

 

"dd if=/dev/urandom of=test.dat"

 

on the server where i back up this file i simply get

 


[root@backupvm1 test]# rsync -av root@backupvm2:/btrfspool/test .
receiving incremental file list
test/test.dat

sent 113417 bytes  received 82924265 bytes  12775028.00 bytes/sec
total size is 329252864  speedup is 3.97


 

to be a valuable backup solution, i'd like some networker-like behaviour which gives "file changed during save"

 

i wonder what's so difficult about having rsync look at the timestamp or checksum a second time after the transfer and spit out a warning if it changed

 

isn't this something which _could_ be implemented, if somebody is able to do it?
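[Editor's note: the re-check described here is easy to prototype around rsync rather than inside it. A sketch; copy_cmd is any argv-style transfer command (in practice an rsync invocation), and the helper names are hypothetical, not anything rsync ships:]

```python
import hashlib
import subprocess

def sha1(path):
    """Checksum a file in 64 KiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_with_check(copy_cmd, src):
    """Run copy_cmd, then warn if src changed while it ran."""
    before = sha1(src)
    subprocess.check_call(copy_cmd)
    changed = sha1(src) != before
    if changed:
        print("WARNING: %s changed during save" % src)
    return not changed
```

This only detects changes on the sending side; a race on a constantly written log file can still slip between the copy and the second checksum.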

 

regards

roland

 

 


Sent: Tuesday, 01 November 2016 at 00:36
From: devz...@web.de
To: rsync@lists.samba.org
Subject: rsync show files changed during transfer - how?

i'm using rsync for backup and, as rsync can detect if files have vanished during transfer, i wonder how rsync can tell which files got modified during transfer (i.e. which are not consistent on the destination side after transfer)

apparently, rsync can't show that information?

wouldn't that be an extremely useful feature if rsync could do another additional mtime or even checksum comparison after each file transfer?

for example emc networker notifies about "changed during save". i'm curious why rsync doesn't

regards
roland
--
Sent from my Android mobile phone with K-9 Mail.





Re: script showing extended stats ( deleted/added ...)

2016-12-17 Thread devzero
for pre-3.0.9 rsync, which is still standard in centos7 even with recent updates, --stats 
shows neither the number of deleted nor of added files

On 17 December 2016 18:06:56 CET, Kevin Korb wrote:
>--stats has most of that information in it.
>
>On 12/17/2016 08:01 AM, devz...@web.de wrote:
>> is there a script which analyses rsync output with --itemize-changes
>?
>> 
>> i.e. i would like to have extended information on number of deleted
>files, created directories, changed files
>> 
>> i know rsync 3.1.x is better with this, but it`s still not in centos
>5/6/7 and i don`t want to update tons of systems to get extended
>statistics, so i wonder if anbody did an analyze script to get that
>information from --itemize-changes afterwards.
>> 
>> if it does not exist, i would try create such script. should not be
>too hard to do...
>> 
>> regards
>> roland
>> 
>
>-- 
>~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>   Kevin Korb  Phone:(407) 252-6853
>   Systems Administrator   Internet:
>   FutureQuest, Inc.   ke...@futurequest.net  (work)
>   Orlando, Floridak...@sanitarium.net (personal)
>   Web page:   http://www.sanitarium.net/
>   PGP public key available on web site.
>~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>
>
>
>
>

-- 
Sent from my Android mobile phone with K-9 Mail.

script showing extended stats ( deleted/added ...)

2016-12-17 Thread devzero
is there a script which analyses rsync output with --itemize-changes ?

i.e. i would like to have extended information on number of deleted files, 
created directories, changed files

i know rsync 3.1.x is better with this, but it's still not in centos 5/6/7 and 
i don't want to update tons of systems just to get extended statistics, so i wonder 
if anybody has written an analysis script to get that information from 
--itemize-changes afterwards.

if it does not exist, i would try to create such a script. it should not be too 
hard to do...
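[Editor's note: a first cut at such an analysis script could classify the itemize codes directly. A sketch; it handles only the patterns documented in rsync(1): "*deleting" messages, sent/received files, and created directories:]

```python
import re
from collections import Counter

def summarize_itemized(lines):
    """Count deletions, transferred files and created dirs in -i output.

    Itemize lines start with an 11-char change summary (YXcstpoguax)
    or a "*deleting" message; see the --itemize-changes docs in rsync(1).
    """
    stats = Counter()
    for line in lines:
        if line.startswith("*deleting"):
            stats["deleted"] += 1
        elif re.match(r"^[<>ch.][fdLDS]", line):
            if line[0] in "<>" and line[1] == "f":
                stats["files transferred"] += 1
            elif line[1] == "d" and "+" in line[:11]:
                stats["dirs created"] += 1
    return stats
```

Feed it the saved stdout of an `rsync -i` run, one line per list element.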

regards
roland


Re: rsyncing from a compressed tarball.

2016-12-08 Thread devzero
try archivemount or squashfs

On 8 December 2016 11:43:07 CET, Simon Hobson wrote:
>Ed Peschko  wrote:
>
>> As it stands right now, we use xz for our compression, so if rsync
>had
>> a similar option for xz that would probably be an improvement.
>
>Have xz as an option for what ?
>As others have already pointed out, rsync works with files on
>filesystems - it does not work with files embedded in other files. In
>the same way, it doesn't work with a disk partition and work out what
>files are held within the filesystem on that partition - you are
>expected to mount that filesystem so rsync can access the files. It
>would be incredibly wasteful (understanding filesystems construction)
>to duplicate that functionality into rsync !
>
>I'd concur with others - if there's something like a fuse module to
>mount the compressed archive as a (read-only) fileystem then use that,
>otherwise you'll need to re-assess the whole process.
>
>

-- 
Sent from my Android mobile phone with K-9 Mail.

Re: segfault at 968 Error

2016-12-01 Thread devzero
does it crash reproducibly at the same file, or randomly?

On 2 December 2016 07:50:20 CET, VigneshDhanraj G wrote:
>Any update on this issue.
>
>On Wed, Nov 30, 2016 at 6:29 PM, VigneshDhanraj G <
>vigneshdhanra...@gmail.com> wrote:
>
>> Hi Team,
>>
>> While Running rsync rsync://username@ip:873 , I am getting following
>> error.
>>
>> rsync: safe_read failed to read 1 bytes [Receiver]: Connection reset
>by
>> peer (104)
>> rsync error: error in rsync protocol data stream (code 12) at
>io.c(276)
>> [Receiver=3.1.1]
>>
>> In Remote pc , i can see segmentation fault, dmesg gives following
>error.
>> My remote PC using debian wheezy.
>>
>> rsync[9022]: segfault at 968 ip 7f90001cd790 sp 7fffe96008a0
>error
>> 4 in libpthread-2.13.so[7f90001c8000+17000]
>>
>> gdb output: bt
>>
>> (gdb) bt
>> #0  0x7fa75e8ab790 in __pthread_initialize_minimal_internal ()
>from
>> /lib/x86_64-linux-gnu/libpthread.so.0
>> #1  0x7fa75e8ab209 in _init () from /lib/x86_64-linux-gnu/
>> libpthread.so.0
>> #2  0x7fa75e6a in ?? ()
>> #3  0x7fa75f967f09 in ?? () from /lib64/ld-linux-x86-64.so.2
>> #4  0x7fa75f9680ce in ?? () from /lib64/ld-linux-x86-64.so.2
>> #5  0x7fa75f96c333 in ?? () from /lib64/ld-linux-x86-64.so.2
>> #6  0x7fa75f967ba6 in ?? () from /lib64/ld-linux-x86-64.so.2
>> #7  0x7fa75f96bb1a in ?? () from /lib64/ld-linux-x86-64.so.2
>> #8  0x7fa75efe5760 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #9  0x7fa75f967ba6 in ?? () from /lib64/ld-linux-x86-64.so.2
>> #10 0x7fa75efe57ff in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #11 0x7fa75efe58f7 in __libc_dlopen_mode () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #12 0x7fa75efbee94 in __nss_lookup_function () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #13 0x7fa75efbf6ab in __nss_next2 () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #14 0x7fa75efc5666 in gethostbyaddr_r () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #15 0x7fa75efcae1d in getnameinfo () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #16 0x00441002 in lookup_name ()
>> #17 0x00440e3b in client_name ()
>> #18 0x0044d566 in start_daemon ()
>> #19 0x0043eb74 in start_accept_loop ()
>> #20 0x0044da7e in daemon_main ()
>> #21 0x00425751 in main ()
>>
>> strace output which shows segmentation fault.
>> 7241  read(5,
>"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3008\0\0\0\0\0\0"...,
>> 832) = 832
>> 7241  fstat(5, {st_mode=S_IFREG|0644, st_size=80712, ...}) = 0
>> 7241  mmap(NULL, 2185864, PROT_READ|PROT_EXEC,
>MAP_PRIVATE|MAP_DENYWRITE,
>> 5, 0) = 0x7fd406a88000
>> 7241  mprotect(0x7fd406a9b000, 2093056, PROT_NONE) = 0
>> 7241  mmap(0x7fd406c9a000, 8192, PROT_READ|PROT_WRITE,
>> MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x12000) = 0x7fd406c9a000
>> 7241  mmap(0x7fd406c9c000, 6792, PROT_READ|PROT_WRITE,
>> MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fd406c9c000
>> 7241  close(5)  = 0
>> 7241  mprotect(0x7fd406c9a000, 4096, PROT_READ) = 0
>> 7241  mprotect(0x7fd406ea2000, 4096, PROT_READ) = 0
>> 7241  set_tid_address(0x7fd4080809d0)   = 7241
>> 7241  --- SIGSEGV (Segmentation fault) @ 0 (0) ---
>> 6080  <... select resumed> )= ? ERESTARTNOHAND (To be
>> restarted)
>> 6080  --- SIGCHLD (Child exited) @ 0 (0) ---
>> 6080  wait4(-1, NULL, WNOHANG, NULL)= 7241
>> 6080  wait4(-1, NULL, WNOHANG, NULL)= -1 ECHILD (No child
>processes)
>> 6080  rt_sigreturn(0x)  = -1 EINTR (Interrupted
>system
>> call)
>> 6080  select(5, [4], NULL, NULL, NULL 
>>
>> so, i have upgraded rsync to latest in my remote pc, but problem not
>get
>> solved.
>> Please help me to solve this.
>>
>> Regards,
>> Vigneshdhanraj G
>>
>
>
>
>

-- 
Sent from my Android mobile phone with K-9 Mail.

rsync show files changed during transfer - how?

2016-10-31 Thread devzero
i'm using rsync for backup and, as rsync can detect if files have vanished 
during transfer, i wonder how rsync can tell which files got modified during 
transfer (i.e. which are not consistent on the destination side after transfer)

apparently, rsync can't show that information? 

wouldn't that be an extremely useful feature if rsync could do another 
additional mtime or even checksum comparison after each file transfer?

for example emc networker notifies about "changed during save". i'm curious why 
rsync doesn't

regards
roland
-- 
Sent from my Android mobile phone with K-9 Mail.

O_NOATIME ?

2016-10-26 Thread devzero
Hello, 

since we are using rsync for backing up millions of files in a virtual 
environment, and most of the virtual machines run on SSD-cached storage, i'd be 
curious how much that hurts the lifetime of the SSDs when we do an rsync run 
every night for backup

my question:
does rsync's normal file-comparison run (to determine whether anything has 
changed) modify the atime of any files?

to me it seems that rsync's stat/lstat calls do NOT modify atime, but i'm not 
sure under which conditions atime is changed. 

grepping the source tree for O_NOATIME, i found this in rsync3.txt:

  - Propagate atimes and do not modify them.  This is very ugly on
Unix.  It might be better to try to add O_NOATIME to kernels, and
call that.

furthermore, O_NOATIME has apparently been in linux kernels for a while:

http://man7.org/linux/man-pages/man2/open.2.html

O_NOATIME (since Linux 2.6.8)
  Do not update the file last access time (st_atime in the
  inode) when the file is read(2).

  This flag can be employed only if one of the following
  conditions is true:

  *  The effective UID of the process matches the owner UID of
 the file.

  *  The calling process has the CAP_FOWNER capability in its
 user namespace and the owner UID of the file has a mapping
 in the namespace.

  This flag is intended for use by indexing or backup programs,
  where its use can significantly reduce the amount of disk
  activity.  This flag may not be effective on all filesystems.
  One example is NFS, where the server maintains the access
  time.


so, maybe someone would like to comment on O_NOATIME!? 

maybe it could be useful to make rsync honour O_NOATIME ?
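[Editor's note: callers that own the files can already opt in per open(2). A sketch of a fallback-friendly wrapper; os.O_NOATIME exists only on Linux builds of Python, hence the getattr:]

```python
import os

def open_noatime(path):
    """Open path read-only without touching its atime, if the kernel permits."""
    flags = os.O_RDONLY | getattr(os, "O_NOATIME", 0)
    try:
        return os.open(path, flags)
    except PermissionError:
        # O_NOATIME requires owning the file or CAP_FOWNER; degrade gracefully
        # by reopening without the flag rather than failing the backup.
        return os.open(path, os.O_RDONLY)
```

On filesystems mounted with relatime (the modern default), reads rarely dirty the inode anyway, so the flag mostly matters on strictatime mounts.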

regards
roland


error code 255 - not mentioned in manpage?

2016-10-25 Thread devzero
hi, 

is there a reason why error code 255 is not mentioned in the manpage, 
and wouldn't it make sense to add "255 Unexplained Error" there 
for completeness?

I'm writing a script which checks exit values, and while testing it i 
got value 255, looked into the manpage, and scratched my head over its 
meaning. 

ok - it's obvious if you take a look at rsync's error output, but i think 
all exit values should be documented.

regards
roland


# cat rsync.err
ssh: connect to host .. port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
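[Editor's note: until the manpage lists it, a checking script can special-case 255 itself. A sketch; the descriptive texts are mine, and the 255 entry reflects the output above, where the remote shell (ssh), not rsync itself, produced the failure:]

```python
import subprocess

# A few exit values from the EXIT VALUES section of rsync(1), plus 255,
# which is typically the remote shell's own exit code passed through.
EXIT_TEXT = {
    0: "success",
    23: "partial transfer due to error",
    24: "partial transfer due to vanished source files",
    255: "unexplained error (usually the ssh transport failed)",
}

def run_and_classify(cmd):
    """Run cmd, print a human-readable verdict, and return the exit code."""
    rc = subprocess.call(cmd)
    print("exit %d: %s" % (rc, EXIT_TEXT.get(rc, "see EXIT VALUES in rsync(1)")))
    return rc
```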


Re: rsync: connection unexpectedly closed

2016-10-19 Thread devzero
what does lsof tell you? does rsync hang on a specific file? 

i would be surprised if this were an rsync problem, as you said you killed all 
processes. 

so, on the second run rsync knows nothing from before...

roland

On 18 October 2016 12:08:00 CEST, Bernd Hohmann wrote:
>On 18.10.2016 07:03, Kip Warner wrote:
>
>> From what I can tell, there are no hardware problems. I also ran fsck
>> on the drive. The machine seems to be fine.
>
>I can confirm the problem.
>
>Situation here: 2 identical HP Microservers (Debian 7, on site compiled
>rsync 3.1.2, connected via OpenVPN).
>
>SSH is used for transport.
>
>Both machines have the correct date/time set via ntpd.
>
>All files on Client/Server are rw and have the right owner and are
>copy'able. oth sides.
>
>The "directory to backup" is a Samba-share (I stopped nmbd and smbd, no
>change). Client: 200GB, 42000 files total. Enough disk-space and memory
>on both sides.
>
>All rsync instances were killed (Client/Server) before starting rsync.
>
>tcpdump shows me a NOP packet every 2 min.
>
>
>I can provoke the error doing this:
>
>1) Start the transfer (rsync scans *all* client files and starts
>sending
>a file)
>
>2) ^C rsync on client
>
>3) "pkill rsync" on server until all rsync-processes are killed. Same
>on
>client (just to be sure)
>
>4) Start the transfer again, now rsync scans the top directories only
>and hangs (see straces below).
>
>
>Commandline:
>
>./rsync-debug -v --archive --progress
>  --human-readable --delete-during \
>  --rsync-path=/home/backup-hugo/bin/rsync-debug \
>  /srv/backup-bernd backup-h...@backup-hugo.vpn:/srv/
>
>
>Client says (PID 5909 = rsync, 5910 = ssh)
>
>[...]
>5910  10:13:50 select(7, [3 4], [3], NULL, {240, 0}) = 1 (in [4], left
>{239, 90})
>5910  10:13:50 read(4, "2010_20120119093643.pdf\0\3740O
>[...]
>\242}\30:V0124160__Nr.036_vom_10.09.2010_2012011"..., 16384) = 3072
>
>loop:
>5910  10:13:50 select(7, [3 4], [3], NULL, {240, 0} 
>5909  10:14:51 <... select resumed> )   = 0 (Timeout)
>5909  10:14:51 select(6, [5], [], [5], {60, 0}) = 0 (Timeout)
>5909  10:15:51 select(6, [5], [], [5], {60, 0}) = 0 (Timeout)
>5909  10:16:51 select(6, [5], [], [5], {60, 0} 
>5910  10:17:51 <... select resumed> )   = 0 (Timeout)
>goto loop
>
>
>Server says (PID 10331 = rsync --server, 10332 = ssh)
>
>[...]
>10331 10:13:50 lstat("backup-bernd/Schreibtisch",
>{st_mode=S_IFDIR|0755,
>st_size=4096, ...}) = 0
>10331 10:13:50 lstat("backup-bernd/VirtualBox", {st_mode=S_IFDIR|0755,
>st_size=4096, ...}) = 0
>10331 10:13:50 lstat("backup-bernd/bin", {st_mode=S_IFDIR|0755,
>st_size=4096, ...}) = 0
>10331 10:13:50 lstat("backup-bernd/projekte", {st_mode=S_IFDIR|0755,
>st_size=4096, ...}) = 0
>10331 10:13:50 lstat("backup-bernd/transfer", {st_mode=S_IFDIR|0755,
>st_size=4096, ...}) = 0
>10331 10:13:50 select(4, [3], [1], [3], {60, 0}) = 1 (out [1], left
>{59,
>91})
>10331 10:13:50 write(1, "\4\0\0\7\3\20\0\0", 8) = 8
>10331 10:13:50 select(4, [3], [], [3], {60, 0} 
>10332 10:14:50 <... select resumed> )   = 0 (Timeout)
>
>loop:
>10332 10:14:50 select(1, [0], [], [0], {60, 0} 
>10331 10:14:50 <... select resumed> )   = 0 (Timeout)
>10331 10:14:50 select(4, [3], [], [3], {60, 0} 
>10332 10:15:50 <... select resumed> )   = 0 (Timeout)
>goto loop
>
>
>-- 
>Bernd Hohmann
>Organisationsprogrammierer
>Höhenstrasse 2 * 61130 Nidderau
>Telefon: 06187/900495 * Telefax: 06187/900493
>Blog: http://blog.harddiskcafe.de
>
>
>

-- 
Sent from my Android mobile phone with K-9 Mail.

Aw: Strange dry-run problem

2016-08-06 Thread devzero
is it still reproducible after a fresh boot? no stale nfs or fuse mounts or similar?
-- 
Sent from my Android mobile phone with WEB.DE Mail. On 06.08.2016, 00:52, Tom Horsley wrote:

  
I was working on a backup script today and doing lots
of runs with the --dry-run option to make sure I
had things the way I wanted them.

One particular filesystem I was backing up always
hung right as it should have been finished. (This
happened every time I did a dry run, it was 100%
reproducible). Several other filesystems I was
backing up had no problems of any kind with the
dry run using the same command with different
source and dest directories (and exclude lists).

Checking all the rsync processes (this was a local
file to local file rsync, so everything was on the
same system), I found all 3 rsync processes sitting
in a select() call, and none of them had any files
open, just a couple of sockets and stdout showed up in
the /proc/pid/fd directories.

I added a --timeout=300 option, and that at least
made it exit after 5 minutes.

Later, when I removed the --dry-run option and
did the real backup, the same filesystem caused
no problems at all. No hang, all the backups
copied OK, no errors of any kind.

I just thought it was weird enough to report,
but I'm unlikely to have any time to do any debugging.
Just wanted to get it on record in case someone else
sees it.

This was on a centos 7.0 system, and I updated
rsync on it just in case that fixed it, but the
problem continued even after the yum update.

  



Aw: man page

2016-07-27 Thread devzero
yes, agreed.

and if the home directory contains ms word files, it's typically on a windows system - which is not the primary platform for using rsync, anyway.
-- 
Sent from my Android mobile phone with WEB.DE Mail. On 27.07.2016, 13:56, Marcus Fonzarelli wrote:

  
Hi,

I've found this in the man page for rsync: "To backup my wife’s home directory, which consists of large MS Word files and mail folders, I use a cron job that runs ...".

I believe it's very inappropriate and inconsiderate to mention Microsoft products unless it's really necessary. I'm mostly concerned with the fact that this careless usage not only gives them more publicity than they deserve, but also gives those programs a legitimacy that they don't have since they are proprietary software (and also promoting proprietary standards).

Therefore I'd like to suggest that the man page be changed to "which consists of large documents and mail folders", or at least mention another software such as LibreOffice.

Thanks for the understanding.

  



Aw: Re: rsync error: error allocating core memory buffers

2015-04-09 Thread devzero
 You should not be using rsync's --checksum during routine backups. 

you know that excel can change a file's contents without changing the file's 
timestamp, don't you? ;)

 Sent: Thursday, 09 April 2015 at 18:37
 From: Kevin Korb k...@sanitarium.net
 To: rsync@lists.samba.org
 Subject: Re: rsync error: error allocating core memory buffers

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 You should not be using rsync's --checksum during routine backups.  It
 is only for very rare use cases not every backup run.
 
 On 04/09/2015 04:43 AM, Hans Kraus wrote:
  Hi,
  
  I've configured 'backuppc' to transfer files via rsyncd, with
  enabled checksums. Whith one of the shares I get the error (in
  syslog): 
  -
 
  
 robbe rsyncd[2183]: ERROR: out of memory in receive_sums [sender]
  robbe rsyncd[2183]: rsync error: error allocating core memory
  buffers (code 22) at util2.c(106) [sender=3.1.2dev] robbe
  rsyncd[9821]: connect from elefant.control.local (192.168.1.200) 
  robbe rsyncd[9821]: rsync on . from backuppc@elefant.control.local 
  (192.168.1.200) 
  -
 
  
  I read that the memory overflow comes from building the checksums list.
  
  Is there a way to find out where in the file tree that overflow
  occurs for determine splitting points?
  
  The OS is Debian 7.4 amd, 24 GB RAM, 32 GB swap.
  
  Kind regards, Hans
 
 - -- 
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
   Kevin Korb  Phone:(407) 252-6853
   Systems Administrator   Internet:
   FutureQuest, Inc.   ke...@futurequest.net  (work)
   Orlando, Floridak...@sanitarium.net (personal)
   Web page:   http://www.sanitarium.net/
   PGP public key available on web site.
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2
 
 iEYEARECAAYFAlUmqrAACgkQVKC1jlbQAQdYnQCfSDNBGlPPbi1T0ATUlNngj3tz
 fTsAn1OwEGeDdkOKf+lCaDTZEBJoS/jg
 =TJEs
 -END PGP SIGNATURE-
 


Aw: Re: Re: rsync 3.0.9 segmentation fault

2015-03-27 Thread devzero

build and about 1.5GB is used during the actual transfer. The client has 16GB of RAM with a peak usage of 8.5GB.



1.5GB + 8.5GB of system memory, including buffers etc.?

take a closer look at the rsync process with ps (as mentioned below)



also have a look at:

https://rsync.samba.org/FAQ.html#4

https://rsync.samba.org/FAQ.html#5



but what is mentioned there does not really fit your problem with 8M files, as that theoretically should sum up to a ~1GB rsync memory requirement (if the figures mentioned in the faq are still valid)
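[Editor's note: the FAQ's old rule of thumb is roughly 100 bytes of file-list memory per file (an assumption; incremental recursion in 3.x lowers it considerably), which for the file count reported in this thread works out to:]

```python
files = 7434013   # the "find . | wc -l" count reported in this thread
per_file = 100    # bytes per file, per the old rsync FAQ rule of thumb (assumption)
mib = files * per_file / 2**20
print(f"estimated file-list memory: {mib:.0f} MiB")  # -> 709 MiB
```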



are your rsync binaries 32- or 64-bit?



regards

roland







Sent: Friday, 27 March 2015 at 12:19
From: Aron Rotteveel rotteveel.a...@gmail.com
To: devz...@web.de
Cc: rsync@lists.samba.org
Subject: Re: Re: rsync 3.0.9 segmentation fault


Hi Roland,



Thanks for the reply. Memory usage on both machines seems fine. The server has 4GB of RAM, of which about 3GB is used during the file-list build and about 1.5GB during the actual transfer. The client has 16GB of RAM with a peak usage of 8.5GB.



I just tried three transfers in a row and it consistently breaks at a certain point, after which I get the ERROR: out of memory in flist_expand [sender] error. There is not much special to mention about the file on which it breaks: it's a 22KB JPEG file with no special attributes.



The backup server is running Debian 7.8, the client runs on CentOS 5.11.



A "find . | wc -l" in the backup directory results in 7434013 files.




--
Best regards / Met vriendelijke groet,

Aron Rotteveel



2015-03-19 20:10 GMT+01:00 devz...@web.de:





Hi Aron,



i hope it's ok with you if i bring this back on-list. your issue and its possible fix may be interesting for others too (including future searchers etc.)



so with 3.1.1 we are a step further



i don't really have a clue what's happening here, but my next step would be taking a closer look at how the memory usage of rsync grows on the client and the server.



you could log it like this:


while true; do ps -eo vsz,rss,sz,cmd | grep rsync | grep -v grep; sleep 10; done > logfile




does it grow continuously? does the oom situation reproducibly happen at a certain size?

what's the client and server platform?

how many files? (-> https://rsync.samba.org/FAQ.html#5 !)



regards

roland





Sent: Thursday, 19 March 2015 at 12:24
From: Aron Rotteveel rotteveel.a...@gmail.com
To: devz...@web.de
Subject: Re: rsync 3.0.9 segmentation fault




In addition to my last message:


	Client (sender) has 16GB of RAM, of which only 6.5GB is used at peak.
	I tried using --no-inc-recursive, but it does not solve the issue.


What currently puzzles me is why I am receiving these errors when my server seems to have plenty of memory to spare.





--
Best regards / Met vriendelijke groet,

Aron Rotteveel



2015-03-19 11:52 GMT+01:00 Aron Rotteveel rotteveel.a...@gmail.com:


Hi Roland,


I just upgrade both the client and host to 3.1.1 and seem to memory related issues now:




ERROR: out of memory in make_file [sender]

rsync error: error allocating core memory buffers (code 22) at util2.c(102) [sender=3.1.1]

[sender] _exit_cleanup(code=22, file=util2.c, line=102): about to call exit(22)

[Receiver] _exit_cleanup(code=22, file=io.c, line=1633): about to call exit(22)



rsnapshot encountered an error! The program was invoked with these options:

/usr/bin/rsnapshot -c 

  /home/remotebackup/hosts/redacted/rsnapshot.conf sync







--
Best regards / Met vriendelijke groet,

Aron Rotteveel





2015-03-18 23:43 GMT+01:00 devz...@web.de:





Hi,



rsync 3.0.9 is quite ancient, more than 3 years old. A lot of bugs have been fixed since then.



Is there a chance to update to the latest rsync version and retry with that ?



regards

Roland



Gesendet:Dienstag, 17. Mrz 2015 um 11:51 Uhr
Von:Aron Rotteveel rotteveel.a...@gmail.com
An:rsync@lists.samba.org
Betreff:rsync 3.0.9 segmentation fault




Hi,



I am experiencing segfaults when transferring files via rsync though sudo.

Setup:



- Backupserver initiates the rsync command with--delete -vvv --no-inc-recursive --numeric-ids --delete-excluded --relative --rsync-path=/home/backupuser/rsync-wrapper.sh

- rsync-wrapper.sh (on the client) contains/usr/bin/sudo /usr/bin/rsync @;

- user backupuser has sudo access to the rsync command

- Both host and client are running 3.0.9



The transfer starts and some files are actually transferred. Once a certain file is reached (plain PHP file, no special characters or any other peculiarities) it segfaults.



rsync host output:




[sender] make_file(redacted/libraries/phputf8/mbstring/strlen.php,*,2)

rsync: connection unexpectedly closed (51261222 bytes received so far) [Receiver]

rsync error: unexplained error (code 139) at io.c(605) [Receiver=3.0.9]

[Receiver] 

Aw: Re: rsync 3.0.9 segmentation fault

2015-03-19 Thread devzero

Hi Aron,



I hope it's OK with you if I bring this back on-list. Your issue and the possible fix for it may be interesting for others too (that may include future searchers, etc.).



So with 3.1.1 we are a step further.



I don't really have a clue what's happening here, but my next step would be to take a closer look at how the memory usage of rsync grows on the client and the server.



You could log it like this:


while true; do ps -eo vsz,rss,sz,args | grep rsync; sleep 10; done >> logfile




Does it grow continuously? Does the OOM situation reproducibly happen at a certain size?

What's the client and server platform?

How many files? (See https://rsync.samba.org/FAQ.html#5!)
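As a rough sanity check against that FAQ entry's rule of thumb (on the order of 100 bytes of file-list memory per file; the exact constant is an approximation), the 7434013-file count reported elsewhere in this thread works out to:

```shell
# Order-of-magnitude estimate of rsync's file-list memory, assuming
# roughly 100 bytes per file as the FAQ suggests.
files=7434013
bytes=$((files * 100))
echo "$((bytes / 1024 / 1024)) MB"   # prints: 708 MB
```

So a multi-million-file transfer can plausibly need hundreds of megabytes for the file list alone.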



regards

roland





Sent: Thursday, 19 March 2015, 12:24
From: Aron Rotteveel rotteveel.a...@gmail.com
To: devz...@web.de
Subject: Re: rsync 3.0.9 segmentation fault


In addition to my last message:


	Client (sender) has 16GB of RAM, of which only 6.5GB is used at peak.
	I tried using --no-inc-recursive, but it does not solve the issue.


What currently puzzles me is why I am receiving these errors when my server seems to have plenty of memory to spare.





--
Best regards / Met vriendelijke groet,

Aron Rotteveel



2015-03-19 11:52 GMT+01:00 Aron Rotteveel rotteveel.a...@gmail.com:


Hi Roland,


I just upgraded both the client and host to 3.1.1 and seem to have memory-related issues now:




ERROR: out of memory in make_file [sender]

rsync error: error allocating core memory buffers (code 22) at util2.c(102) [sender=3.1.1]

[sender] _exit_cleanup(code=22, file=util2.c, line=102): about to call exit(22)

[Receiver] _exit_cleanup(code=22, file=io.c, line=1633): about to call exit(22)



rsnapshot encountered an error! The program was invoked with these options:

/usr/bin/rsnapshot -c 

  /home/remotebackup/hosts/redacted/rsnapshot.conf sync







--
Best regards / Met vriendelijke groet,

Aron Rotteveel





2015-03-18 23:43 GMT+01:00 devz...@web.de:





Hi,



rsync 3.0.9 is quite ancient, more than 3 years old. A lot of bugs have been fixed since then.



Is there a chance to update to the latest rsync version and retry with that?



regards

Roland



Sent: Tuesday, 17 March 2015, 11:51
From: Aron Rotteveel rotteveel.a...@gmail.com
To: rsync@lists.samba.org
Subject: rsync 3.0.9 segmentation fault




Hi,



I am experiencing segfaults when transferring files via rsync through sudo.

Setup:



- The backup server initiates the rsync command with: --delete -vvv --no-inc-recursive --numeric-ids --delete-excluded --relative --rsync-path=/home/backupuser/rsync-wrapper.sh

- rsync-wrapper.sh (on the client) contains: /usr/bin/sudo /usr/bin/rsync "$@";

- user backupuser has sudo access to the rsync command

- Both host and client are running 3.0.9
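The wrapper arrangement described in that list might be reconstructed like this (a sketch only: the /tmp path is for illustration, the sudoers line is an assumption, and "$@" is the usual way for such a wrapper to forward the server-side arguments):

```shell
# Hypothetical reconstruction of rsync-wrapper.sh: re-execute rsync
# under sudo with whatever arguments the remote rsync invocation passes.
cat > /tmp/rsync-wrapper.sh <<'EOF'
#!/bin/sh
exec /usr/bin/sudo /usr/bin/rsync "$@"
EOF
chmod +x /tmp/rsync-wrapper.sh

# The matching sudoers entry would be something along these lines:
#   backupuser ALL=(root) NOPASSWD: /usr/bin/rsync
```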



The transfer starts and some files are actually transferred. Once a certain file is reached (plain PHP file, no special characters or any other peculiarities) it segfaults.



rsync host output:




[sender] make_file(redacted/libraries/phputf8/mbstring/strlen.php,*,2)

rsync: connection unexpectedly closed (51261222 bytes received so far) [Receiver]

rsync error: unexplained error (code 139) at io.c(605) [Receiver=3.0.9]

[Receiver] _exit_cleanup(code=12, file=io.c, line=605): about to call exit(139)



rsnapshot encountered an error! The program was invoked with these options:

/usr/bin/rsnapshot -c 

  /home/remotebackup/hosts/redacted/rsnapshot.conf sync



ERROR: /usr/bin/rsync returned 139 while processing backupuser@redacted:/backup/




Client output when using gdb to debug the coredump:




warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff015fd000

Core was generated by /usr/bin/rsync --server --sender -vvvlogDtprRe.Lsf --numeric-ids . /backup.

Program terminated with signal 11, Segmentation fault.

#0 0x0035cda7b441 in memcpy () from /lib64/libc.so.6




Any help would be greatly appreciated. Please let me know if additional info is required to properly debug this issue.


--
Best regards / Met vriendelijke groet,

Aron Rotteveel




-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


















Aw: rsync 3.0.9 segmentation fault

2015-03-18 Thread devzero

Hi,



rsync 3.0.9 is quite ancient, more than 3 years old. A lot of bugs have been fixed since then.



Is there a chance to update to the latest rsync version and retry with that?



regards

Roland



Sent: Tuesday, 17 March 2015, 11:51
From: Aron Rotteveel rotteveel.a...@gmail.com
To: rsync@lists.samba.org
Subject: rsync 3.0.9 segmentation fault


Hi,



I am experiencing segfaults when transferring files via rsync through sudo.

Setup:



- The backup server initiates the rsync command with: --delete -vvv --no-inc-recursive --numeric-ids --delete-excluded --relative --rsync-path=/home/backupuser/rsync-wrapper.sh

- rsync-wrapper.sh (on the client) contains: /usr/bin/sudo /usr/bin/rsync "$@";

- user backupuser has sudo access to the rsync command

- Both host and client are running 3.0.9



The transfer starts and some files are actually transferred. Once a certain file is reached (plain PHP file, no special characters or any other peculiarities) it segfaults.



rsync host output:




[sender] make_file(redacted/libraries/phputf8/mbstring/strlen.php,*,2)

rsync: connection unexpectedly closed (51261222 bytes received so far) [Receiver]

rsync error: unexplained error (code 139) at io.c(605) [Receiver=3.0.9]

[Receiver] _exit_cleanup(code=12, file=io.c, line=605): about to call exit(139)



rsnapshot encountered an error! The program was invoked with these options:

/usr/bin/rsnapshot -c 

  /home/remotebackup/hosts/redacted/rsnapshot.conf sync



ERROR: /usr/bin/rsync returned 139 while processing backupuser@redacted:/backup/




Client output when using gdb to debug the coredump:




warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff015fd000

Core was generated by /usr/bin/rsync --server --sender -vvvlogDtprRe.Lsf --numeric-ids . /backup.

Program terminated with signal 11, Segmentation fault.

#0 0x0035cda7b441 in memcpy () from /lib64/libc.so.6




Any help would be greatly appreciated. Please let me know if additional info is required to properly debug this issue.


--
Best regards / Met vriendelijke groet,

Aron Rotteveel


-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html




Aw: Re: Re: Re: rsync not copy all information for font file

2014-12-18 Thread devzero

I'm sure that if you use ext3/4 as the destination filesystem (instead of HFS+) and also access it via Netatalk the same way you access the source, then all will be fine again.



I think it's the transition from ext4 to HFS+, and the behaviour of Netatalk accessing and handling metadata differently on ext4 and HFS+.



You may take a look at http://netatalk.sourceforge.net/3.1/htmldocs/afp.conf.5.html to see how Netatalk handles metadata differently on different filesystems; the parameters "appledouble", "ea" and "convert appledouble" may be interesting for you.



Maybe there are others; I have not ever used Netatalk myself...



So, indeed not an rsync issue, but an Apple/Netatalk-specific one...



regards

roland





Sent: Thursday, 18 December 2014, 12:36
From: bryan.pliatsios bryan.pliats...@wellcom.com.au
To: Ram Ballu r...@edom.it, devz...@web.de, rsync rsync@lists.samba.org
Subject: Re: Aw: Re: Re: rsync not copy all information for font file


Ram,

 Look inside the .AppleDouble folder and you will find the resource forks that hold the main part of the font data.



 Not a problem with Netatalk, but just how Netatalk works so that it can represent Apple files on non-Apple filesystems.



 If you confirm the data all matches, this will prove rsync is working properly.



 Bryan


 Original message 
From: Ram Ballu
Date: 18/12/2014 22:11 (GMT+10:00)
To: devz...@web.de, rsync
Subject: Re: Aw: Re: Re: rsync not copy all information for font file

ls -la Source folder
--


root@---:/BKP_SC//FONT# ls -la

totale 2808

drwxr-sr-x 1 nobody 65533  944 giu 30 2010 .

drwxr-sr-x 1 nobody 65533  1440 mar 18 2014 ..

-rw-r--r-- 1 nobody 65533 12292 apr 20 2012 :2eDS_Store

drwxr-sr-x 1 nobody 65533  880 giu 30 2010 .AppleDouble

-rw-r--r-- 1 nobody 65533   0 mar 5 2010 Davys Dingbats 1

-rw-r--r-- 1 nobody 65533   0 mar 5 2010 Davys Dingbats 2

-rw-r--r-- 1 nobody 65533   0 mar 22 2010 Graeca.fam

drwxr-sr-x 1 nobody 65533  144 mar 26 2010 Greca Opentype

-rw-r--r-- 1 nobody 65533   0 lug 28 2007 L Decoration Pi 1

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 LDecPiOne

-rw-r--r-- 1 nobody 65533   0 lug 28 2007 MathematicalPi 3

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 MathePiThr

drwxr-sr-x 1 nobody 65533  536 apr 23 2010 Method

-rw-r--r-- 1 nobody 65533 279376 lug 31 2009 MinionPro-BoldCnIt.otf

-rw-r--r-- 1 nobody 65533 236528 lug 31 2009 MinionPro-BoldCn.otf

-rw-r--r-- 1 nobody 65533 280820 lug 31 2009 MinionPro-BoldIt.otf

-rw-r--r-- 1 nobody 65533 234868 lug 31 2009 MinionPro-Bold.otf

-rw-r--r-- 1 nobody 65533 280924 lug 31 2009 MinionPro-It.otf

-rw-r--r-- 1 nobody 65533 279836 lug 31 2009 MinionPro-MediumIt.otf

-rw-r--r-- 1 nobody 65533 236940 lug 31 2009 MinionPro-Medium.otf

-rw-r--r-- 1 nobody 65533 235436 lug 31 2009 MinionPro-Regular.otf

-rw-r--r-- 1 nobody 65533 281364 lug 31 2009 MinionPro-SemiboldIt.otf

-rw-r--r-- 1 nobody 65533 237800 lug 31 2009 MinionPro-Semibold.otf

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 Monotype Sorts

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 Rapha

-rw-r--r-- 1 nobody 65533   0 lug 28 2007 Raphael

-rw-r--r-- 1 nobody 65533 75020 dic 25 2005 Symbol.dfont

-rw-r--r-- 1 nobody 65533 154081 dic 25 2005 ZapfDingbats.dfont

----



ls -la destination folder

**---


root@---:/BKP_DES# ls -la

totale 2808

drwxr-sr-x 1 nobody 65533  952 giu 30 2010 .

drwxr-xr-x 25 root  root  4096 dic 12 06:36 ..

-rw-r--r-- 1 nobody 65533 12292 apr 20 2012 :2eDS_Store

drwxr-sr-x 1 nobody 65533  884 giu 30 2010 .AppleDouble

-rw-r--r-- 1 nobody 65533   0 mar 5 2010 Davys Dingbats 1

-rw-r--r-- 1 nobody 65533   0 mar 5 2010 Davys Dingbats 2

-rw-r--r-- 1 nobody 65533   0 mar 22 2010 Graeca.fam

drwxr-sr-x 1 nobody 65533  170 mar 26 2010 Greca Opentype

-rw-r--r-- 1 nobody 65533   0 lug 28 2007 L Decoration Pi 1

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 LDecPiOne

-rw-r--r-- 1 nobody 65533   0 lug 28 2007 MathematicalPi 3

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 MathePiThr

drwxr-sr-x 1 nobody 65533  612 apr 23 2010 Method

-rw-r--r-- 1 nobody 65533 279376 lug 31 2009 MinionPro-BoldCnIt.otf

-rw-r--r-- 1 nobody 65533 236528 lug 31 2009 MinionPro-BoldCn.otf

-rw-r--r-- 1 nobody 65533 280820 lug 31 2009 MinionPro-BoldIt.otf

-rw-r--r-- 1 nobody 65533 234868 lug 31 2009 MinionPro-Bold.otf

-rw-r--r-- 1 nobody 65533 280924 lug 31 2009 MinionPro-It.otf

-rw-r--r-- 1 nobody 65533 279836 lug 31 2009 MinionPro-MediumIt.otf

-rw-r--r-- 1 nobody 65533 236940 lug 31 2009 MinionPro-Medium.otf

-rw-r--r-- 1 nobody 65533 235436 lug 31 2009 MinionPro-Regular.otf

-rw-r--r-- 1 nobody 65533 281364 lug 31 2009 MinionPro-SemiboldIt.otf

-rw-r--r-- 1 nobody 65533 237800 lug 31 2009 MinionPro-Semibold.otf

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 Monotype Sorts

-rw-r--r-- 1 nobody 65533   0 ago 18 2006 Rapha

-rw-r--r-- 1 nobody 

Aw: Re: Re: rsync not copy all information for font file

2014-12-14 Thread devzero

But this does not explain why the files on the destination are 0 KB in size. With the background information you delivered, I assume they may already be 0 KB on the source side, but Ram may have overlooked that because he did not look from the shell's point of view but from the OS X point of view via Netatalk? So this is not an rsync problem at all then...



roland



Sent: Sunday, 14 December 2014, 11:12
From: Bryan Pliatsios bryan.pliats...@wellcom.com.au
To: rsync rsync@lists.samba.org
Cc: Ram Ballu r...@edom.it
Subject: Re: Re: rsync not copy all information for font file



Hi Ram,



 In OS X, some font types (not all) put the font payload in the resource fork. Netatalk provides AFP file sharing, imitating the resource forks by creating secondary files in .AppleDouble folders within each folder. Netatalk tracks the resource forks, and other metadata, by keeping a Desktop database at the root of the shared volume: look for .AppleDesktop and .AppleDB.



 You can double check the font still has its payload by looking at the size of ./.Appledouble/fontname.ext

 To use the backup, you have to present that storage on a machine that has Netatalk, AND to rebuild the desktop file.

 To rebuild the desktop: sudo dbd -r /home/AFPshare, or, just to check, use -s for a scan. I usually stop the service, run the scan piped to wc -l, then rebuild, scan and count the errors, rebuild, etc., until the number of error lines stops going down or is zero. If there are still errors (usually codepage issues from moving between platforms) I check the files and work out how to fix them.
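 That scan-and-rebuild loop can be sketched as a shell function (a sketch under the assumptions above: dbd -s prints one line per problem, dbd -r rebuilds, and the share path is passed in as an argument):

```shell
# Scan with "dbd -s", count the reported lines, and rebuild with
# "dbd -r" until the error count reaches zero or stops going down.
rebuild_until_clean() {
    share=$1
    prev=-1
    while :; do
        errs=$(dbd -s "$share" | wc -l | tr -d ' ')
        [ "$errs" -eq 0 ] && break
        [ "$errs" -eq "$prev" ] && break   # no further improvement
        prev=$errs
        dbd -r "$share"
    done
    echo "$errs"
}
```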



 My best recommendation is to move away from Netatalk and go to Samba. It's not perfect, but with OS X 10.10 things are starting to get better. Samba removes the custom .AppleDouble and the database, forcing the client computer to create dot-underscore files (fontname.ext and ._fontname.ext).



 Also worth noting: Apple still ships rsync 2.6.9 with OS X 10.10! The same version since late 10.4. I've followed the suggestion of Mike Bombich, who writes CarbonCopyCloner, and built a universal 3.0.7 executable which I deploy to all my Apple servers.



OSX 10.10 native:


rsync  version 2.6.9  protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.
http://rsync.samba.org/
Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
  inplace, IPv6, 64-bit system inums, 64-bit internal inums




per Bombich instructions:



Capabilities:
 64-bit files, 32-bit inums, 64-bit timestamps, 64-bit long ints,
 socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
 append, ACLs, xattrs, no iconv, symtimes, file-flags



file rsync3
rsync3: Mach-O universal binary with 3 architectures
rsync3 (for architecture ppc): Mach-O executable ppc
rsync3 (for architecture i386): Mach-O executable i386
rsync3 (for architecture x86_64): Mach-O 64-bit executable x86_64






Regards,



 Bryan






On 13 Dec 2014, at 4:04 am, devz...@web.de wrote:


What is the source and destination filesystem?

Here is a report that rsync has some problems with HFS+ filesystems and resource forks: http://quesera.com/reynhout/misc/rsync+hfsmode/

But as you are using Ubuntu and not OS X, I'm curious what the problem is, so I think we need more information here.

regards
roland


Sent: Friday, 12 December 2014, 15:31
From: Ram Ballu r...@edom.it
To: rsync@lists.samba.org
Subject: Re: Aw: Re: rsync not copy all information for font file

Some more details on my problem.
The machine has Ubuntu 12.04 with rsync version 3.0.9.
I am using this server as a data server, using Netatalk to share with Apple OS for my graphics work, so here I save my work containing files created by InDesign, Photoshop, Illustrator, QuarkXPress, MathType, etc.
I have tried to back up my shared folder to an external HDD using rsync, mounting the external HDD locally. But in the backup copy the font files result in zero KB.

Here is the result when I try to back up only the folder containing fonts.
Here PROVA_BKP_SRC is the source directory (a folder on the local HDD) and BKP_DES is a folder on the external HDD mounted locally.

# rsync -av PROVA_BKP_SRC/ /BKP_DES/
sending incremental file list
./
:2eDS_Store
Times Bold Italic/
Times Bold Italic/BI Times BoldItalic
Times Bold Italic/TimesBolIta
Times Bold Italic/.AppleDouble/
Times Bold Italic/.AppleDouble/.Parent
Times Bold Italic/.AppleDouble/BI Times BoldItalic
Times Bold Italic/.AppleDouble/TimesBolIta
Times Bold/
Times Bold/B Times Bold
Times Bold/TimesBol
Times Bold/.AppleDouble/
Times Bold/.AppleDouble/.Parent
Times Bold/.AppleDouble/B Times Bold
Times Bold/.AppleDouble/TimesBol
Times Italic/
Times Italic/I Times Italic
Times Italic/TimesIta
Times Italic/.AppleDouble/
Times Italic/.AppleDouble/.Parent
Times Italic/.AppleDouble/I Times Italic
Times Italic/.AppleDouble/TimesIta
Times Roman/
Times Roman/Times
Times Roman/TimesRom
Times Roman/.AppleDouble/
Times Roman/.AppleDouble/.Parent
Times Roman/.AppleDouble/Times
Times 

Aw: rsync not copy all information for font file

2014-12-12 Thread devzero
You mean, rsync silently creates 0 KB sized files, and only a special type of
file shows this behaviour?

Try increasing rsync verbosity with -v, delete the 0 KB files and retry. If you
don't get a clue from that, you can send the output of rsync to this list if
it's not too long. Mind that it may contain private information.

You should also omit -z because it makes no sense for a local transfer. You do
not want compression here, because it slows things down and burns CPU for
nothing.

regards
roland

PS:
I'm no native English speaker, but I think your English is quite OK.




 Sent: Friday, 12 December 2014, 10:44
 From: Ram Ballu r...@edom.it
 To: rsync@lists.samba.org
 Subject: rsync not copy all information for font file

 Good morning list, 
 this is my first question and I really hope to get an answer; sorry for my bad 
 English :-(
 
 OK, so I have a machine with Ubuntu that I use as a data server for my 
 graphics work. 
 Now I am trying to back up, to an external HDD using rsync, the folder where I 
 save all my data files (files of graphics software, images, fonts, etc.).
 For this I mount the HDD locally and launch the command: rsync -avz 
 source_folder destination_folder.
 I notice that in the backup copy the font files have a size of zero KB rather 
 than the 25-60 KB in the source folder, and I can't use the fonts from the 
 backup folder as they result in zero KB.
 Can someone help me to solve this issue?
 Thanks in advance to all for reading this and spending your time.
 Ram
 
 -- 
 Please use reply-all for most replies to avoid omitting the mailing list.
 To unsubscribe or change options: 
 https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
 
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Aw: Re: Re: rsync not copy all information for font file

2014-12-12 Thread devzero
What is the source and destination filesystem?

Here is a report that rsync has some problems with HFS+ filesystems and 
resource forks: http://quesera.com/reynhout/misc/rsync+hfsmode/

But as you are using Ubuntu and not OS X, I'm curious what the problem is, so I 
think we need more information here.

regards
roland


 Sent: Friday, 12 December 2014, 15:31
 From: Ram Ballu r...@edom.it
 To: rsync@lists.samba.org
 Subject: Re: Aw: Re: rsync not copy all information for font file

 Some more details on my problem.
 The machine have Ubuntu 12.04 with rsync version 3.0.9
 I am using this server as data server using Netatalk for share with apple 
 OS for my graphics work, so here i save my work containing file created by 
 (indesign, photoshop, illustrator, quark xpress, mathtype etc.)
 I have try to backup my shared folder on external HDD using rsync mounting 
 external HDD in local. But in backup copy the font file result Zero kb.
 
 Here is the result i try to backup only the folder containing font.
 Here PROVA_BKP_SRC is source directory(folder on local HDD) and BKP_DES is 
 folder on external HDD mounted in local. 
 
 # rsync -av PROVA_BKP_SRC/ /BKP_DES/
 sending incremental file list
 ./
 :2eDS_Store
 Times Bold Italic/
 Times Bold Italic/BI Times BoldItalic
 Times Bold Italic/TimesBolIta
 Times Bold Italic/.AppleDouble/
 Times Bold Italic/.AppleDouble/.Parent
 Times Bold Italic/.AppleDouble/BI Times BoldItalic
 Times Bold Italic/.AppleDouble/TimesBolIta
 Times Bold/
 Times Bold/B Times Bold
 Times Bold/TimesBol
 Times Bold/.AppleDouble/
 Times Bold/.AppleDouble/.Parent
 Times Bold/.AppleDouble/B Times Bold
 Times Bold/.AppleDouble/TimesBol
 Times Italic/
 Times Italic/I Times Italic
 Times Italic/TimesIta
 Times Italic/.AppleDouble/
 Times Italic/.AppleDouble/.Parent
 Times Italic/.AppleDouble/I Times Italic
 Times Italic/.AppleDouble/TimesIta
 Times Roman/
 Times Roman/Times
 Times Roman/TimesRom
 Times Roman/.AppleDouble/
 Times Roman/.AppleDouble/.Parent
 Times Roman/.AppleDouble/Times
 Times Roman/.AppleDouble/TimesRom
 
 sent 183840 bytes  received 447 bytes  368574.00 bytes/sec
 total size is 183542  speedup is 1.00
 --
 
 
 
 
 
 On 12 Dec 2014, at 13:02, devz...@web.de wrote:
 
  yes
  
   Sent: Friday, 12 December 2014, 12:59
   From: Ram Ballu r...@edom.it
   To: devz...@web.de
   Subject: Re: Aw: rsync not copy all information for font file
  
  Ronald, 
  thanks a lot for your kind suggestion and for compliments on my english(i 
  take it like compliments :-) )
  I will try after sometime as you say me and than update you.
  so now command i have to use is 
  
  rsync -av /source_folder /Destination_folder ?
  
  Thanks again
  Ram
  
   On 12 Dec 2014, at 11:27, devz...@web.de wrote:
  
  you mean, rsync silently creates 0kb sized files and only a special 
  type of file shows this behaviour?
  
  try increasing rsync verbosity with -v , delete the 0kb files and 
  retry. you can send the output of rsync to this list if it`s not too long 
  if you don`t get a clue from that. mind that it may contain private 
  information.
  
  you should also omit -z because it makes no sense for local transfer. 
  you do not want compression here, because it slows things down and burns 
  cpu for nothing.
  
  regards
  roland
  
  ps:
  i`m no native english speaker, but i think your english is quite ok.
  
  
  
  
   Sent: Friday, 12 December 2014, 10:44
   From: Ram Ballu r...@edom.it
   To: rsync@lists.samba.org
   Subject: rsync not copy all information for font file
  
  Good morning list, 
  this is my first question and hope really to get an answer, sorry for my 
  bad english :-(
  
  Ok so i have a machine with ubuntu and use as data server for my 
  graphics works. 
  Now i am trying to backup my folder on an external HDD using rsync 
  where i save all my data file (file of graphics software, image, font 
  etc.).
  For this i mount HDD in local and launch command rsync  -avz  
  source_folder  destination_folder.
  I notice that in backup copy font file have zero kb size rather than 
  25-60 kb in source folder and i can't use the font from backup folder as 
  it results zero kb.
  Can someone help me to solve this issue.
  Thanks in advance to all for reading this and spend your time.
  Ram
  
  -- 
  Please use reply-all for most replies to avoid omitting the mailing list.
  To unsubscribe or change options: 
  https://lists.samba.org/mailman/listinfo/rsync
  Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
  
  
  
 
 -- 
 Please use reply-all for most replies to avoid omitting the mailing list.
 To unsubscribe or change options: 
 https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
 
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: 

Aw: Re: rsync doesn't checksum for local transfers?

2014-12-04 Thread devzero
 You are missing the point of the checksum.  It is a verification that
 the file was assembled on the target system correctly.  The only
 post-transfer checksum that would make any sense locally would be to
 make sure that the disk stored the file correctly which would require
 a flushing of the cache and a re-reading of the file.  Rsync has no
 capability to do this whether remote or not.

Yes, but indeed this could be explained more clearly in the manpage:

  Note that rsync always verifies that each transferred file was 
  correctly reconstructed on the receiving side by checking a
  whole-file checksum that is generated as the file is
  transferred

Let me try to add some lines:

After being written to disk, for both local and remote transfers, the 
destination file as a whole is not re-read for checksumming. 
Checksumming is only done for the reconstruction process: 
the checksum is calculated across the bits being received and the 
bits being read from the target file, so essentially the updated 
target file is being checksummed while it's being written to.

Is that correct?

 
 On 12/03/2014 09:17 PM, Shriramana Sharma wrote:
  Hello. Please see http://unix.stackexchange.com/a/66702. I would
  like to have confirmation whether or not rsync verifies the
  transferred files' integrity at the target location by checksumming
  as advertised in the manpage:
  
  Note that rsync always verifies that each transferred file was 
  correctly reconstructed on the receiving side by checking a
  whole-file checksum that is generated as the file is
  transferred
  
  The word always here seems to indicate that the integrity check
  will happen whether for local or network transfers, but the above
  Stack Exchange post claims otherwise. Please clarify.
  
  Also, once it is assured that the check will happen *really*
  always, it would be useful to advertise the fact about the
  integrity check in the website and description part of the manpage
  itself IMO.
  
  FWIW I'm using rsync 3.1.1 (latest) on openSUSE Tumbleweed.
  
  Thanks.
  
 
 - -- 
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
   Kevin Korb  Phone:(407) 252-6853
   Systems Administrator   Internet:
   FutureQuest, Inc.   ke...@futurequest.net  (work)
   Orlando, Floridak...@sanitarium.net (personal)
   Web page:   http://www.sanitarium.net/
   PGP public key available on web site.
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
 -- 
 Please use reply-all for most replies to avoid omitting the mailing list.
 To unsubscribe or change options: 
 https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
 
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Aw: Re: encrypted rsyncd - why was it never implemented?

2014-12-03 Thread devzero
From a security perspective this is bad. Think of a backup provider who wants 
to make rsyncd modules available to end users so they can push backups to 
the server. Do you think such a server is secure if all users are allowed to 
open up an SSH shell to secure their rsync transfer?

OK, you can restrict the SSH connection, but you open up a hole and you need to 
think twice to make it secure, leaving room for hacking and circumventing SSH 
restrictions.

Indeed, rsyncd with SSL is quite attractive, but adding SSL to rsync adds quite 
some complexity and also increases maintenance work.

For some time there has been an SSL patch in the contrib directory, but I'm 
curious why nobody is aware of RsyncSSL, which is not a perfect but quite an 
elegant solution for wrapping rsyncd with SSL via stunnel:

http://dozzie.jarowit.net/trac/wiki/RsyncSSL
https://git.samba.org/?p=rsync.git;a=commit;h=70d4a945f7d1ab1aca2c3ca8535240fad4bdf06b
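For illustration, a minimal server-side stunnel fragment in the spirit of that approach could look like the following (the file paths and the 874 listen port are assumptions; 873 is the standard rsyncd port):

```
; hypothetical /etc/stunnel/rsync-ssl.conf: terminate SSL on port 874
; and hand the plaintext connection to the local rsync daemon on 873
cert = /etc/stunnel/rsyncd.pem

[rsync-ssl]
accept  = 874
connect = 127.0.0.1:873
```

Clients would then need a matching SSL wrapper on their side, which is what the RsyncSSL script provides.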

regards
roland



 Sent: Wednesday, 03 December 2014, 19:19
 From: Kevin Korb k...@sanitarium.net
 To: rsync@lists.samba.org
 Subject: Re: encrypted rsyncd - why was it never implemented?

 
 You can run rsyncd over ssh as well.  Either with -e ssh host::module
 or you can use ssh's -L to tunnel the rsyncd port.  The difference is
 which user ends up running the rsyncd.
 
 On 12/03/2014 12:40 PM, Tomasz Chmielewski wrote:
  rsync in daemon mode is very powerful, yet it comes with one big 
  disadvantage: data is sent in plain.
  
  The workarounds are not really satisfying:
  
  
  - use VPN - one needs to set up an extra service, not always
  possible
  
  - use stunnel - as above
  
  - use SSH - is not as powerful as in daemon mode (i.e. read only
  access, chroot, easy way of adding/modifying users and modules
  etc.)
  
  
  Why was encrypted communication in rsyncd never implemented? Some 
  technical disagreements? Nobody volunteered?
  
  
 
 - -- 
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
   Kevin Korb  Phone:(407) 252-6853
   Systems Administrator   Internet:
   FutureQuest, Inc.   ke...@futurequest.net  (work)
   Orlando, Floridak...@sanitarium.net (personal)
   Web page:   http://www.sanitarium.net/
   PGP public key available on web site.
 ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~
 -- 
 Please use reply-all for most replies to avoid omitting the mailing list.
 To unsubscribe or change options: 
 https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
 


Aw: Re: Comparing FLAC header before syncing

2014-12-02 Thread devzero
yes, i'd second that.

maybe you just try using metaflac to generate the appropriate list of files to 
sync and then feed that list to rsync.
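a sketch of that idea, assuming the embedded audio MD5s have already been dumped on each side into "md5  path" lists (metaflac's --show-md5sum prints that checksum); the file names and checksums below are made up for the demo:

```shell
# Build an rsync --files-from list from per-side FLAC audio-MD5 dumps.
# Each list line is "<md5>  <path>"; on a real system each side would be
# generated with something like metaflac --show-md5sum per file.
src_list=/tmp/src.md5
dst_list=/tmp/dst.md5
cat > "$src_list" <<'EOF'
aaa  album1/track1.flac
bbb  album1/track2.flac
EOF
cat > "$dst_list" <<'EOF'
aaa  album1/track1.flac
ccc  album1/track2.flac
EOF
# lines unique to either side = files whose audio MD5 differs
sort "$src_list" "$dst_list" | uniq -u | awk '{print $2}' | sort -u > /tmp/files-from.txt
cat /tmp/files-from.txt    # -> album1/track2.flac
# then:  rsync -av --files-from=/tmp/files-from.txt /music/src/ host:/music/dst/
```

this keeps all the FLAC-specific logic outside rsync, as suggested below.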


 Gesendet: Dienstag, 02. Dezember 2014 um 08:37 Uhr
 Von: Fabian Cenedese cened...@indel.ch
 An: rsync@lists.samba.org
 Betreff: Re: Comparing FLAC header before syncing

 At 02:24 02.12.2014, Mike Garey wrote:
 Hi all, I'd like to modify rsync to add a flag to compare the MD5 signature 
 of the unencoded audio data in the header of a FLAC file to determine 
 whether or not to transfer a file.
 
 The reason being that I've got a large number of FLAC files, many of which 
 are corrupted in the destination volume, but many of which are valid but the 
 tags have been modified.  The sizes of both the source and destination 
 files are the exact same, regardless of whether they're corrupted or not.  
 If I were to rsync all files based on the modification date, it would 
 overwrite these valid FLAC files whose tags have been altered, which I want 
 to prevent.  I only want to rsync new files and files whose MD5 signatures 
 don't match.
 
 If anyone could point me in the right direction of where in the source code 
 I should concentrate my efforts to add this modification, it would help 
 greatly.
 
 You'd probably get a result faster, with fewer problems (future updates), if you
 separate your logic into a single program instead of incorporating it into rsync.
 If you have a way of accessing both the source and dest then you can create
 a list of files to sync and feed this to rsync with --files-from. Or you run your
 script/program on the destination, check all files and create the list, which
 you can then transfer to the source and let rsync run. Or you have rsync run
 on the destination if you have a server on, or direct access to, the source.
 
 If you really want to change rsync you can probably look at -c (checksum)
 which you may be able to adjust to use the header's checksum instead
 of generating a file's checksum. But I don't know the source code.
 
 bye  Fabi
 
 
 


Aw: Dealing with an unreliable remote

2014-11-25 Thread devzero
you may have a look here:

http://superuser.com/questions/192766/resume-transfer-of-a-single-file-by-rsync
http://stackoverflow.com/questions/16572066/resuming-rsync-partial-p-partial-on-a-interrupted-transfer
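the usual workaround from those threads is --partial plus a dumb retry loop around the whole transfer; the real command would be something like `rsync -avz --partial --timeout=60 local/ host:/dest/`. a sketch of such a loop (the demo below substitutes a deliberately flaky local command so the loop can be seen working):

```shell
#!/bin/sh
# Retry a command until it succeeds, giving up after 10 attempts.
# In real use "$@" would be the rsync --partial command line above.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 10 ] && return 1   # give up after 10 attempts
        sleep 1                       # back off before retrying
    done
}

# demo stand-in: a command that fails twice before succeeding
attempts=/tmp/retry-demo-count
echo 0 > "$attempts"
flaky() {
    c=$(cat "$attempts"); c=$((c + 1)); echo "$c" > "$attempts"
    [ "$c" -ge 3 ]
}

retry flaky && echo "succeeded after $(cat "$attempts") attempts"
```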

if you use --inplace or --append, as a safety measure you could even run another 
rsync diff afterwards to compare whether source and destination are really 
identical, see:

http://www.techrepublic.com/blog/linux-and-open-source/how-to-compare-the-content-of-two-or-more-directories-automatically/

regards
roland


 Gesendet: Dienstag, 25. November 2014 um 16:02 Uhr
 Von: net.rs...@io7m.com
 An: rsync@lists.samba.org
 Betreff: Dealing with an unreliable remote

 'Lo.
 
 I've run into a frustrating issue when trying to synchronize a
 directory hierarchy over a reliable (but slow) connection to an 
 unreliable remote. Basically, I have the following:
 
   http://mvn-repository.io7m.com/com/io7m/
 
 This is a set of nested directories containing binaries and sources for
 projects I develop/maintain. Every time a new release is made, I deploy
 the binaries and sources to an exact copy of the above hierarchy on my
 local machine, and then rsync that (over SSH) to
 mvn-repository.io7m.com.
 
    $ rsync -avz --delete --progress local/ io7m.com:/home/io7m/mvn-repository.io7m.com/
 
 The problem:
 
 The latest project produces one .jar file that's about 80mb.
 Evidently, the hosting provider I use for io7m.com is using some sort
 of process tracking system that kills processes that have been running
 for too long (I think it just measures CPU time). The result of this is
 that I get about 50% of the way through copying that
 (comparatively) large file, and then the remote rsync process is
 suddenly killed because it has been running for too long.
 
 This would be fine, except that it seems that rsync is utterly refusing
 all efforts to continue copying that file from wherever it left off. It
 always restarts copying of the file from nothing and tries to copy the
 full 80mb, resulting in it being killed halfway through and causing much
 grinding of teeth.
 
 The documentation for --partial states that "Using the --partial option
 tells rsync to keep the partial file which should make a subsequent
 transfer of the rest of the file much faster." Well, for whatever
 reason, it doesn't (or it at least fails to continue using it).
 
 I've tried --partial-dir, passing it an absolute path to the temporary
 directory in my home directory. It created a file in there the first time, 
 but after being killed by the remote side and restarting, it ignored
 that file and instead created a new temporary file (with a random suffix) 
 in the destination directory! Am I doing something wrong?
 
    $ rsync -avz --delete --progress --partial-dir=/home/io7m/tmp/rsync io7m.com:/home/io7m/mvn-repository.io7m.com/
 
 I'm at a loss. How can I reliably get this directory hierarchy up onto
 the server? I don't care if I have to retry the command multiple times
 until the copy has fully succeeded, but I obviously can't do that if
 rsync keeps restarting the failed file from scratch every time.
 
 M
 


Re: Bug-report:rsync may hung if time jumps backwards

2014-11-19 Thread devzero
Hi, 

it seems that this one has already been fixed in 3.1.0, see

https://bugzilla.samba.org/show_bug.cgi?id=9789

and

https://git.samba.org/?p=rsync.git;a=commit;h=2dc2070992c00ea6625031813f2b6c886ddc3ade

you are still using 2.6.9? that's rather old (~8 years) and may have bugs and 
security issues that have already been fixed.

regards
roland


 List:   rsync
 Subject:Bug-report:rsync may hung if time jumps backwards
 From:   yhu2 yadi.hu () windriver ! com
 Date:   2014-11-17 6:44:25
 Message-ID: 54699949.1020503 () windriver ! com
 
 Hello everyone!
 
 With the reproduction steps below, you can observe an rsync hang:
 
 1: configure and start up the rsync service, then:
 
 mkdir /root/a
 mkdir /root/b
 dd if=/dev/zero of=/root/b/1 bs=1M count=1
 dd if=/dev/zero of=/root/b/2 bs=1M count=1
 dd if=/dev/zero of=/root/b/3 bs=1M count=1
 dd if=/dev/zero of=/root/b/4 bs=1M count=1
 dd if=/dev/zero of=/root/b/5 bs=1M count=1
 
 
 2: start testcase
 
 ./change-time-loop.sh > /dev/null &
 ./rsync-loop.sh
 
 
 
 After applying this patch, the issue went away.
 
 
  --- rsync-2.6.9/BUILD/rsync-2.6.9/util.c	2014-11-11 13:02:11.495609639 +0800
  +++ rsync-2.6.9/BUILD/rsync-2.6.9/util.c	2014-11-11 13:01:37.606569696 +0800
  @@ -1174,8 +1174,11 @@
    * Always returns TRUE.  (In the future it might return FALSE if
    * interrupted.)
    **/
  +
   int msleep(int t)
   {
  +
  +#if 0
   	int tdiff = 0;
   	struct timeval tval, t1, t2;
  
  @@ -1192,7 +1195,8 @@
   		tdiff = (t2.tv_sec - t1.tv_sec)*1000 +
   			(t2.tv_usec - t1.tv_usec)/1000;
   	}
  -
  +#endif
  +	usleep(t*1000);
   	return True;
   }
 
 
 Is it a correct fix? Any comments would be appreciated!
 
 
 
 [change-time-loop.sh (application/x-sh)]
 
 #!/bin/bash
 
 while [ 1 ] 
 do 
  date -s "2012-10-30 06:28:04"
  #sleep 3
  date -s "2014-11-04 17:13:04"
 #sleep 3
 done
 
 [rsync-loop.sh (application/x-sh)]
 
 #!/bin/bash
 
 while [ 1 ] 
 do
 rsync -avz --password-file=/root/my.secrets /root/b root@127.0.0.1::logs 
 rm /root/a/* -rf
 done
 
 
 


Re: Adjusting transfer rate on the fly

2014-07-20 Thread devzero
it's probably not exactly what you want, but have a look at this:

https://bugzilla.samba.org/show_bug.cgi?id=7120#c3

regards
Roland




List:   rsync
Subject:Adjusting transfer rate on the fly
From:   mr queue mrqueue32 () gmail ! com
Date:   2014-07-20 0:43:12
Message-ID: CACcQGfYPLTbyg-dfYzTKytxxKmGS5hy+d-pY47yrCzG2-5uPug () mail ! 
gmail ! com

Hello,

Does anyone know if it is possible to adjust the value of --bwlimit= on
the fly using gdb or the like? I've adjusted mysql max_connections in
the past with gdb and was curious if I would be able to make
adjustments to rsync in a similar fashion.

I am very well aware I can stop it and start it again with a new
value. That is not what I'm after here.


Regards,


Re: Alternative rsync client for Windows

2014-04-10 Thread devzero
Great to see that there is a native rsync client now.

This is NOT a derived work of the GPL'ed rsync, but a re-implementation from scratch?

regards
Roland


List:   rsync
Subject:Alternative rsync client for Windows
From:   Gilbert (Gang) Chen gchen () acrosync ! com
Date:   2014-04-08 16:41:36
Message-ID: CAPQn=Q_kMJLcPkWq54f1y6X6-gL_CCv=ozL1dOVK2xxkzdcAMQ () mail ! gmail 
! com


Hi, there,


We're pleased to announce the open beta of Acrosync, a native rsync
client for Windows.  Key features include:

   - Easy install, no more dependency on cygwin
   - Simple GUI, one click to start the sync
   - sync over ssh for security, or directly with an rsync daemon
   - For ssh, no need to set up password-less login
   - Built-in file system monitor to upload changed files instantly (just
   like Dropbox)
   - Support hourly incremental snapshot creation (just like Time Machine)
   - Can invoke Volume Shadow Copy Service to upload locked files

It can be downloaded from http://www.acrosync.com/windows.html.  For
bug reports, comments, suggestions, etc, please visit the forum
http://download.acrosync.com/forum.html


Thanks,

The Acrosync Team

Re: RFC: slow-down option

2014-04-08 Thread devzero
regarding dynamic slowdown, you may also have a look at:

https://bugzilla.samba.org/show_bug.cgi?id=7120

regards
Roland

List:   rsync
Subject:Re: RFC: slow-down option
From:   Marian Marinov mm () yuhu ! biz
Date:   2014-04-03 12:52:53
Message-ID: 533D59A5.4080503 () yuhu ! biz

On 04/03/2014 02:48 PM, Christoph Biedl wrote:
 Marian Marinov wrote...
 
   I've been using rsync on some backup servers for years. In 2011 we
   had a situation where the FS of the backup server was behaving
   strangely: even though there was enough available I/O, the fs (ext4 on a
   16TB partition with a lot of inodes) was lagging. After much testing
   we found that rsync was hammering the fs too hard.
 
 I'd like to learn more about that scenario. Mostly, I'm curious
 whether these file transfers involved creation of a *lot* and probably
 rather small files.

The files were *mostly* small files under 10k. We had around 10 concurrent rsyncs
running at any given time. The fs was built on top of a RAID6 array. But unfortunately
I did not create the journal on a separate device, and in fact did not leave any space
for a separate journal device, which in my opinion would have helped in that
situation.

When I did the patch, I was also thinking about a more dynamic slowdown, one that
would take into account the size of the file that was previously transmitted or the
size of the file that will be transmitted.

Marian
 
 Christoph
 



Re: silent data corruption with rsync

2014-03-13 Thread devzero
What do They recommend instead?

If it's all about copying and network bandwidth is not an issue, you can use 
scp or whatever dumb tool that just shuffles the bits around as-is. rsync is 
used when you want to keep data in sync and want to save bandwidth while doing 
so. You CAN use it for copying only, but then you are somewhat taking a 
sledgehammer to crack a nut.

Anyway, if They care about their data, They use checksumming for storing 
their data on disk, don't They? ;)

The network is not the only place where data corruption can happen, and 
silent bitrot on disks _does_ happen, especially when your harddisks go nuts 
and/or your raid arrays break or your storage controller's firmware gets 
hiccups. It does not happen often, but it happens, and mostly you won't know 
when and where. In my IT job i had one case where some SAN storage lost some 
cache contents, and the only places we really knew data loss/corruption had 
happened were the oracle and exchange databases. For all the other data, we 
don't know if it is in 100% perfect condition.

regards
Roland



List:   rsync
Subject:silent data corruption with rsync
From:   Sig_Pam spam () itserv ! de
Date:   2014-03-11 16:02:28
Message-ID: zarafa.531f3394.439c.5f8c77014439296d () exchange64 ! corp ! 
itserv ! de


Hi everybody!

I'm currently working on a project which has to copy huge amounts of data from
one storage to another. For a reason I cannot validate any longer, there is a
rumour that rsync may silently corrupt data. Personally, I don't believe that.

They explain it this way: rsync does an in-stream data deduplication. It creates a
checksum for each data block to transfer, and if a block with the same checksum has
already been transferred sooner, this old block will be re-used to save bandwidth.
But, for some reason, two different blocks can produce the same checksum even if the
source data is not the same, effectively corrupting the data stream.

Did you ever hear something like this? Has this been a bug in any early version of
rsync? If so, when was it fixed?

Thank you,

  sig 



Fw: Aw: Re: backup-dir over nfs

2014-03-08 Thread devzero

this is a linux kernel or hardware issue, please update (if not yet done) your system to the latest patches, especially the kernel package.

if that does not help, jump onto these bugreports or open a new one for your distro. (the kernel.org bugtracker is typically not the best choice for normal end-users)

https://bugzilla.kernel.org/show_bug.cgi?id=12118
https://bugzilla.novell.com/show_bug.cgi?id=579932

regards
roland


Aw: Re: strange behavior of --inplace on ZFS

2014-03-07 Thread devzero
Hi Pavel, 
could you try whether --inplace --no-whole-file makes a difference?

normally, using rsync --inplace on the local host without a network in between 
makes rsync switch to --whole-file, and i assume that for some reason your 
rsync is rewriting the whole file in place. any rewritten block on zfs means a 
newly allocated block, which quickly eats up your space.

-W, --whole-file
      With this option rsync's delta-transfer algorithm is not used and the
      whole file is sent as-is instead. The transfer may be faster if this
      option is used when the bandwidth between the source and destination
      machines is higher than the bandwidth to disk (especially when the
      disk is actually a networked filesystem). This is the default when
      both the source and destination are specified as local paths, but
      only if no batch-writing option is in effect.

i came across this while trying to recreate your issue - i tested this on 
localhost, and while scratching my head about the weird growth i remembered 
that rsync is too intelligent when not run via network.

regards
Roland


 Gesendet: Freitag, 07. März 2014 um 00:50 Uhr
 Von: Pavel Herrmann morpheus.i...@gmail.com
 An: Hendrik Visage hvj...@gmail.com
 Cc: devz...@web.de, rsync@lists.samba.org rsync@lists.samba.org
 Betreff: Re: strange behavior of --inplace on ZFS

 Hi
 
 apologies for reordering the message parts, and for the long bit at the end
 
 On Thursday 06 March 2014 23:45:22 Hendrik Visage wrote:
 Question: what is the host OS of the source and destination folders - are both sides ZFS?
 
 no, the remote (source) was ext4. However, i plan to use it against at least 
 one NTFS machine as well
 
  I'd like to see some stats that rsync said it transfered, also add the
  -S flag as an extra set of tests.
 
 --sparse had the opposite result: minimal size for the zeroed file, no space saved 
 for the random file.
 combination of --sparse and --inplace is not supported
 
 
  When you are on Solaris, also see the impact of a test case using
  mkfile and not dd if=/dev/zero.
 
 sadly, I have no Solaris in my environment, all test were done on Linux
 
  
  On Thu, Mar 6, 2014 at 11:17 PM,  devz...@web.de wrote:
   Hi Pavel,
   
   maybe that's related to zfs compression?
   
   on compressed zfs filesystem, zeroes are not written to disk.
 
 compression was not turned on on the volume (unless this is enabled even if 
 compression is set to off)
 
   
   # dd if=/dev/zero of=test.dat bs=1024k count=100
   
   /zfspool # ls -la
   total 8
   drwxr-xr-x  3 root root 4 Feb 26 10:18 .
   drwxr-xr-x 27 root root  4096 Mar 29  2013 ..
   drwxr-xr-x 25 root root25 Mar 29  2013 backup
   -rw-r--r--  1 root root 104857600 Feb 26 10:18 test.dat
   
   /zfspool # du -k test.dat
   1   test.dat
   
   /zfspool # du -k --apparent-size test.dat
   102400  test.dat
   
   despite that, space calculation on compressed fs is a difficult thing...
   
 
 space was as reported by 'zfs list', on a volume that was created specifically 
 for this test. I would assume that is the most reliable way to get space usage
 
 
 The other question that would be interesting (both with and without -S)
 is when you use the dd if=/dev/urandom created file, but change some
 places with dd if=/dev/zero (i.e. the reverse of the A test case: creating
 with dd if=/dev/zero and changing with dd if=/dev/urandom)
 
 I just reran all the tests with --sparse and --inplace; results follow 
 (cleanup is done after each 'zfs list', not shown)
 
 thanks
 Pavel Herrmann
 
 
 
 
 
 zero-inited file
 
 remote runs:
 # dd if=/dev/zero of=testfile bs=1024 count=102400
 # dd if=/dev/urandom of=testfile count=1 bs=1024 seek=854 conv=notrunc
 # dd if=/dev/urandom of=testfile count=1 bs=1024 seek=45368 conv=notrunc
 # dd if=/dev/urandom of=testfile count=50 bs=1024 seek=9647 conv=notrunc
 
 
 # rsync -aHAv --stats --delete --sparse root@remote:/test/ .
 receiving incremental file list
 ./
 testfile
 
 Number of files: 2 (reg: 1, dir: 1)
 Number of created files: 1 (reg: 1)
 Number of regular files transferred: 1
 Total file size: 104,857,600 bytes
 Total transferred file size: 104,857,600 bytes
 Literal data: 104,857,600 bytes
 Matched data: 0 bytes
 File list size: 50
 File list generation time: 0.001 seconds
 File list transfer time: 0.000 seconds
 Total bytes sent: 33
 Total bytes received: 104,870,500
 
 sent 33 bytes  received 104,870,500 bytes  41,948,213.20 bytes/sec
 total size is 104,857,600  speedup is 1.00
 # zfs snapshot zraid/test@a
 # rsync -aHAv --stats --delete --sparse root@remote:/test/ .
 receiving incremental file list
 testfile
 
 Number of files: 2 (reg: 1, dir: 1)
 Number of created files: 0
 Number of regular files transferred: 1
 Total file size: 104,857,600 bytes
 Total transferred file size: 104,857,600 bytes
 Literal data: 10,240 bytes
 Matched data: 104,847,360 bytes
 File list size: 50
 File list generation time: 

Aw: Re: backup-dir over nfs

2014-03-07 Thread devzero

im sure this is no rsync issue. i guess rsync is just triggering it.

https://groups.google.com/forum/#!msg/fa.linux.kernel/bxYvmkgvwGo/-gbIAVLz0zAJ

maybe clocksource=jiffies or nohz=off is worth a try to see if it makes a difference.

regards
roland

you mean, when the hang appears, you get a ping timeout from another host, i.e. it's not only userspace which is being locked, but the kernel also locks up completely (i.e. does not respond to ping anymore)?

regards
roland

Gesendet: Freitag, 07. März 2014 um 17:21 Uhr
Von: Philippe LeCavalier supp...@plecavalier.com
An: devz...@web.de
Cc: rsync@lists.samba.org
Betreff: Re: backup-dir over nfs

Hi Roland.

On Thu, Mar 6, 2014 at 4:45 PM, devz...@web.de wrote:

 I cannot really follow how your setup works, can you perhaps describe it a little bit better?

I've tried to keep it as simple as possible considering...

Various remote hosts (running rsync daemon)
-
HOST A (rsync pull / backup-dir=NFS host)

HOST B (NFS export for backup-dir)

When HOST A is done pulling the data from the various hosts, a second script starts and rsyncs (push) to HOST C. Every other time HOST C grabs a fresh copy directly from the remote hosts. Of course, this part is never automatically achieved, since I'm never around to press a key waking the system from the console. So every time I walk by I hit a key and it keeps going.

 please provide the command which works and which not.

hangs with: -avS --exclude-from=exclude.rsync --delete --backup --backup-dir=/path/to/nfs on HOST B/date +todays date

does not hang with: -avS --exclude-from=exclude.rsync --delete --backup --backup-dir=/path/to/local/date +todays date fs on HOST A

 so - resuming from hang is all about keypress?

Yes.

 that sounds like an interrupt issue, i guess it's no rsync problem but it's only being triggered by rsync.

My assumption is also that it's likely not directly related to rsync. I was thinking more NFS though.

 keep an eye on /proc/interrupts.

Interesting. Hadn't thought of that. I will.

 please post the answer to the rsync list, so others can also have a look

Done.

Question: I'm running this in cron as root. Although root can rw to the NFS share while I'm in the console, is it somehow possible I'm encountering a permissions issue and just not seeing a permission denied anywhere?


Re: could rsync support to specify source IP address when current host has multiple NIC?

2014-03-06 Thread devzero
if you use ssh as transport, you can try

rsync -e 'ssh -oBindAddress=<local interface IP address>'

man 5 ssh_config says:

 BindAddress
 Use the specified address on the local machine as the source
 address of the connection.  Only useful on systems with more than
 one address.  Note that this option does not work if
 UsePrivilegedPort is set to 'yes'.

regards
roland


Hi,

Could rsync support specifying the source IP address when the local host has
multiple NICs? In some complex network configurations, we need such a
restriction for this type of rsync usage.

thanks,
Emre

Re: strange behavior of --inplace on ZFS

2014-03-06 Thread devzero
Hi Pavel, 

maybe that's related to zfs compression?

on compressed zfs filesystem, zeroes are not written to disk.

# dd if=/dev/zero of=test.dat bs=1024k count=100

/zfspool # ls -la
total 8
drwxr-xr-x  3 root root 4 Feb 26 10:18 .
drwxr-xr-x 27 root root  4096 Mar 29  2013 ..
drwxr-xr-x 25 root root25 Mar 29  2013 backup
-rw-r--r--  1 root root 104857600 Feb 26 10:18 test.dat

/zfspool # du -k test.dat
1   test.dat

/zfspool # du -k --apparent-size test.dat
102400  test.dat

despite that, space calculation on compressed fs is a difficult thing...

if that gives no pointer, i think this question is better placed on a zfs 
mailing list.

regards
roland



List:   rsync
Subject:strange behavior of --inplace on ZFS
From:   Pavel Herrmann morpheus.ibis () gmail ! com
Date:   2014-02-25 3:26:03
Message-ID: 5129524.61kVAFkjCM () bloomfield

Hi

I am extending my ZFS+rsync backup to be able to handle large files (think 
virtual machine disk images) in an efficient manner. however, during testing I 
have found a very strange behavior of --inplace flag (which seems to be what I 
am looking for).

what I did: create a 100MB file, rsync, snapshot, change 1k in random location, 
rsync, snapshot, change 1K in other random location, repeat a couple times, 
`zfs list` to see how large my volume actually is.

the strange thing here is that the resulting size was wildly different 
depending on how I created the initial file. all modifications were done by the 
same command, namely
dd if=/dev/urandom of=testfile count=1 bs=1024 seek=some_num conv=notrunc

situation A:
file was created by running 
dd if=/dev/zero of=testfile bs=1024 count=102400
the resulting size of the volume is approximately 100MB times the number of 
snapshots

situation B:
file was created by running
dd if=/dev/urandom of=testfile count=102400 bs=1024
the resulting size of the volume is just a bit over 100MB

the rsync command used was
rsync -aHAv --delete --inplace root@remote:/test/ .

rsync on backup machine (the destination) is 3.1.0, remote has 3.0.9

there is no compression or dedup enabled on the zfs volume

anyone seen this behavior before? is it a bug? can I avoid it? can I make 
rsync give me disk IO statistics to confirm?

regards
Pavel Herrmann

Re: file corruption

2013-03-12 Thread devzero
>It seems that Microsoft knows how to change a file without altering 
>the modification time.

yes, they do. 
see https://bugzilla.samba.org/show_bug.cgi?id=1601

but it's not too difficult. 

the issue is that you don't expect a program to actively change a file's 
contents when you just open it for reading/viewing.

regards
roland



List:   rsync
Subject:Re: file corruption
From:   joop g jjge () xs4all ! nl
Date:   2013-03-09 10:33:54
Message-ID: 3286855.GThHFMr6je () n2k6

You said, the diff concerned just one byte, right?
Were the corrupted files all Microsoft Office files? I have seen this 
behaviour 
once, and then it turned out to be the originals that had been changed in the 
meantime. It seems that Microsoft knows how to change a file without altering 
the modification time.


Re: [Bug 7120] Variable bandwidth limit .. bwlimit

2013-03-06 Thread devzero
mhh - interesting question..

what about combining the power of throttle ( 
http://linux.die.net/man/1/throttle ) or similar tools (there are some more 
like this) with rsync ?

via this hint, http://lists.samba.org/archive/rsync/2006-February/014623.html, I
got a clue how to combine rsync and throttle, and gave it a try:

cat throttle-wrap
#!/bin/bash
throttle -k 1 -s 1 | "$@"

rsync --rsh='/tmp/throttle-wrap ssh' -avz /src  user@host:/dest 

seems to work fine for me, but mind that throttle needs -s 1, as this seems to 
circumvent the buffering problem. 

not sure about the maximum bandwidth you can reach with that, or how well you 
can really adjust the bandwidth, so please report your findings.

If that is a useful option, can someone please put this into the bugzilla entry?

regards
Roland 



List:   rsync
Subject:[Bug 7120] Variable bandwidth limit .. bwlimit
From:   samba-bugs () samba ! org
Date:   2013-03-05 15:15:53
Message-ID: E1UCtar-0010tB-0s () samba-bugzilla ! samba ! org

https://bugzilla.samba.org/show_bug.cgi?id=7120

--- Comment #2 from It is me p...@mnet-online.de 2013-03-05 15:15:51 UTC ---
Hi,
just a note, since I am also looking forward to this feature.

I see two options regarding usability and scriptability. :)

1# Piping (not used up to now)

mkfifo /tmp/bla
echo 1000 > /tmp/bla

cat /tmp/bla | rsync ... --bwlimit-stdin ...

changing the limit by just writing to the fifo:
echo 50 > /tmp/bla


2# file and signal

mktemp  /tmp/bla2
echo 1000 > /tmp/bla2
rsync ... --bwlimit-from=/tmp/bla2 &
RSYNCPID=$!
echo 50 > /tmp/bla2
kill -USR1 ${RSYNCPID}

hope the thinking helps.
Me.
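Whatever the front-end interface, the daemon side would just feed the new value into a rate limiter. A rough Python sketch of an adjustable token-bucket limiter (illustrative only, not rsync's actual implementation):

```python
import time

class TokenBucket:
    """Adjustable token-bucket rate limiter: 'rate' may be changed at any
    time, which is all a --bwlimit-stdin / --bwlimit-from front end would do."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.allowance = float(rate_bytes_per_sec)
        self.last = time.monotonic()

    def delay_for(self, nbytes):
        """Seconds the caller should sleep before sending nbytes."""
        now = time.monotonic()
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.allowance:
            self.allowance -= nbytes
            return 0.0
        deficit = nbytes - self.allowance
        self.allowance = 0.0
        return deficit / self.rate
```

Changing the limit at runtime is then just an assignment to `rate`, triggered by whichever mechanism (fifo, file plus signal) wins.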

-- 
Configure bugmail: https://bugzilla.samba.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the QA contact for the bug.


Re: [Bug 7120] Variable bandwidth limit .. bwlimit

2013-03-06 Thread devzero
or better, try pipe viewer: it seems less buggy (kill -SIGUSR1 $PIDOFTHROTTLE 
doesn't work for me), has realtime progress, and there is a homepage and a 
maintainer ( http://www.ivarch.com/programs/pv.shtml )

linux-rqru:/tmp # cat /tmp/pv-wrapper
#!/bin/bash
pv -L 1000 | "$@"

Adjust transfer rate:

pv -R $PIDOFPV -L RATE

linux-rqru:/tmp # rsync --rsh='/tmp/pv-wrapper ssh' -a /proc 
root@localhost:/tmp/test
Password:
   4 B 0:00:01 [3.68 B/s]
file has vanished: /proc/10/exe
file has vanished: /proc/10/task/10/exe
file has vanished: /proc/11/exe
file has vanished: /proc/11/task/11/exe
file has vanished: /proc/12/exe
4.88kiB 0:00:05 [1002 B/s]

regards
roland








Re: rsyncssl

2013-02-04 Thread devzero
Why put that extra effort into rsync, if you can chain things together?

The power of unix is exactly that - it's not about using specialized tools, but 
about combining them in innumerable ways, thus multiplying their 
capabilities.

Another good reason for a SSL-version of rsync: non-Unix clients...

Stunnel probably runs on as many platforms as rsync: 
https://www.stunnel.org/ports.html
Besides that, mind that there is no usable native port of rsync on Windows. 
(The cygwin-based rsync is very slow, btw.)
I think stunnel even runs natively on win32 (MinGW).

I was hoping for SSL in rsync for a long time, but when I saw RsyncSSL, I 
thought it could obsolete an rsync with compiled-in SSL support.

Nobody would have the idea to put ssh into rsync; rsync just uses it as a 
sub-process/pipe (and vice versa).
So does RsyncSSL (with stunnel).

On the server side, with rsync + ssh, the ssh daemon listens for incoming ssh 
connections and then starts rsync, connecting via stdin/stdout.

Analogously, the stunnel daemon listens for incoming SSL connections and then 
starts rsync(d) as a sub-process. The only difference is that RsyncSSL adds 
some missing glue.
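On the server side the glue really is small. A minimal stunnel.conf service of the kind RsyncSSL relies on might look like this (port, paths, and option spelling should be checked against the stunnel version in use):

```
[rsyncd]
accept   = 874
cert     = /etc/stunnel/rsyncd.pem
exec     = /usr/bin/rsync
execArgs = rsync --daemon --config=/etc/rsyncd.conf
```

stunnel accepts the TLS connection and spawns the rsync daemon on its stdin/stdout, just as sshd does for rsync-over-ssh.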

I'd love to see rsync-ssl (with the server having CRL support, client
cert support, and the client/server doing cert validation of course) as
for one thing I think it would make a damn fine laptop backup solution.

It's exactly what RsyncSSL can do for you.

regards
roland


List:   rsync
Subject:Re: rsyncssl
From:   Jason Haar Jason_Haar () trimble ! com
Date:   2013-02-04 2:45:47
Message-ID: 510F20DB.7050003 () trimble ! com

Another good reason for a SSL-version of rsync: non-Unix clients...

It's all well and good to talk about using vpns and ssh tunnels - but
the fact is that a large percentage of rsync clients are non-Unix - like
Windows - and getting them set up for ssh/etc is layering extra software
on top of rsync. I'm not saying it can't work  - but it's not simple.

I'd love to see rsync-ssl (with the server having CRL support, client
cert support, and the client/server doing cert validation of course) as
for one thing I think it would make a damn fine laptop backup solution.
I've run more than my share of Internet-facing services in my time and
the lowest maintenance ones are the SSL/TLS services that require client
certs. The bad guys cannot even knock on the door!

An Internet-based rsync-ssl server that requires client certs would be
brilliant for backing up laptops over the Internet: an enterprise
competitor to all those cloudy services such as Dropbox/etc. :-) [well,
probably need that VSS patch for rsync-win32 too ;-)]


--
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

re: rsync 3.0.9 compile --copy-devices --write-devices not working

2013-02-03 Thread devzero
Anyway, I tried not to give up.

And found

https://rsync.samba.org/ftp/rsync/rsync-patches-3.0.9.tar.gz

In there, I found copy-devices.diff, which could be applied successfully :-)
A write-devices.diff is missing :(

Apparently rsync-patches-$release.tar.gz and rsync-patches.git are not in 
sync for some reason.

I found that there are patches in git which are missing in the tar, and vice 
versa.

I would have thought that patches.tar.gz is a copy of the .git contents at 
release date. (For example, drop-cache.diff is in the tar, but not in current git.)

regards
roland




patches in distros - include upstream ?

2013-02-02 Thread devzero
Hello, 
I have found that major distros (especially openSUSE) ship their rsync 
packages with a lot of patches which I don't find in the official rsync-patches 
git.

Maybe there is a reason for that, or I missed something or looked in the wrong 
place, but for convenience/transparency I have compiled a list of them and ask 
whether it would make sense to put them into the official rsync patches git, as 
most of them don't look very distro-specific to me.

regards
Roland



Opensuse:
 
http://ftp.uni-ulm.de/mirrors/opensuse/source/distribution/openSUSE-current/repo/oss/suse/src/rsync-3.0.9-14.2.1.src.rpm

 dparam.diff
This patch adds a daemon option, --dparam (-M), that lets you override
global/default items in the config file when starting the daemon up.

 drop-cache.diff
From: Tobi Oetiker tobi{at}oetiker.ch
Date: 2007-04-23

I am using rsync for hard-link backup. I found that there is a
major problem with frequent backup filling up the file system cache
with all the data from the files being backed up. The effect is
that all the other 'sensible' data in the cache gets thrown out in
the process. This is rather unfortunate as the performance of the
system becomes very bad after running rsync.
--snip--
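The core of drop-cache.diff is a posix_fadvise(DONTNEED) call issued after the data has been written. The same idea can be sketched in a few lines of Python (a simplification of what the patch does inside rsync; POSIX-only):

```python
import os

def copy_dropping_cache(src, dst, bufsize=1 << 20):
    """Plain file copy that afterwards tells the kernel the pages are not
    needed again, so backup traffic does not evict 'sensible' cached data."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(bufsize)
            if not buf:
                break
            fout.write(buf)
        fout.flush()
        os.fsync(fout.fileno())          # dirty pages must hit disk first
        os.posix_fadvise(fin.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        os.posix_fadvise(fout.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
```

The fsync before the DONTNEED advice matters: the kernel will not drop pages that are still dirty.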

 log-checksum.diff
This patch to rsync adds a %C log escape that expands to the sender's
post-transfer checksum of a file for protocol 30 or above.  This way, if
you need the MD5 checksums of transferred files, you can have rsync log
them instead of spending extra processor time on a separate command to
compute them.
-- Matt McCutchen hashprod...@gmail.com

 munge-links.diff
This patch adds the --munge-links option, which works like the daemon's
munge symlinks parameter.

 preallocate.diff
This patch adds the --preallocate option that asks rsync to preallocate the
copied files.  This slows down the copy, but should reduce fragmentation on
systems that need that.

 remote-option.diff
This patch implements a new --remote-option (-M) option that allows the
user to override certain settings on the remote system.  For instance, it
is now easier to pass -M--fake-super or -M --log-file=/tmp/foo instead of
kluging up an --rsync-path setting.  This also solves the case where we
want local --one-file-system (-x) but not remote (-x -M--no-x), or vice
versa (-M-x).

stdout.diff
This patch adds a --stdout=line|unbuf option that lets the
user change the buffering of stdout.

usermap.diff
This adds a --usermap and a --groupmap option.  See the man page for
more details.


RedHat/CentOS:

http://vault.centos.org/6.3/os/Source/SPackages/rsync-3.0.6-9.el6.src.rpm
(i excluded those which are already listed under opensuse)

daemon-forward-lookup.diff
This patch adds a forward lookup of any hostnames listed in the
hosts allow or hosts deny daemon config options.  Based on
a patch by Paul Williamson.

osx-xattr-nodev.diff
This patch makes the xattr functions skip devices and special files,
because OS X returns the wrong errno when getting/setting xattrs on
them (it returns EPERM instead of ENOTSUP).


Debian:
http://patch-tracker.debian.org/package/rsync/3.0.9-4

+++ rsync-3.0.9/debian/patches/README
@@ -0,0 +1,27 @@
+These are the main patches to the rsync source.
+(The changes to the manpages for correct hyphens
+and quotes is a bit big, so not included.)
+
+If you're wondering about the lack of patches, the
+explanation is that upstream has adopted most of them :-)
+
+logdir.diff - fix the location of the logdir
+ssh-6-option.diff   - call ssh with -6 option if rsync was called with -6,
+  ditto with -4
+rsyncd.conf.5.comment.diff - explain that a hash can only be at the beginning
+ of a line.
+delete-delay.diff  - correct the error message given
+partial-timestamp.diff - update mtime on partially transferred file
+ fixes problem with --update together with --partial
+manpages.GPL.diff  - properly state GNU General Public License
+
+These are patches from the development branch that I consider important
+enough to include now:
+
+cast--1-size_t.diff
+- Explicitly cast a -1 that is being assigned to a size_t.
+progress-cursor-pos.diff
+- The --progress output now leaves the cursor at the end of the line
+  (instead of the start) in order to be extra sure that an error won't
+  overwrite it.  We also ensure that the progress option can't be enabled
+  on the server side.












Aw: Re: rsyncssl

2013-01-28 Thread devzero
> The only place that an SSL would make some sense, is if you are going to do it to/from an rsync daemon

yes, exactly.

> but then how would that be better than a ssh-only account with keys/etc. only allowing the rsync to execute?

I think that's far more secure by design, because you won't allow shell access, which needs hardening afterwards. For me it's like giving someone the key to my house and then trying to keep him in the hallway, hoping all the other doors are properly closed. I feel extremely uncomfortable with allowing other people shell access to a box where they need nothing but file transfer into some dedicated subdir.

regards
roland


Sent: Monday, 28 January 2013 at 10:22
From: Hendrik Visage hvj...@gmail.com
To: devz...@web.de
Cc: rsync@lists.samba.org rsync@lists.samba.org
Subject: Re: rsyncssl


On Sun, Jan 27, 2013 at 12:07 AM, devz...@web.de wrote:
> Hi, [snipped]
> Isn't RsyncSSL (wrap rsync with stunnel via stdin/out) the better solution?
> (as it is using a mature external program for the SSL stuff)

Why SSL when you already have a proper working SSH with certificates etc. that should be as good if not better?
The only place that an SSL would make some sense, is if you are going to do it to/from an rsync daemon, but then how would that be better than a ssh-only account with keys/etc. only allowing the rsync to execute?






list free space via rsyncd ?

2013-01-28 Thread devzero
Hello, 
if you have a backup server with rsync running in daemon mode - is there a way 
for a client to obtain information about free disk space via rsync?

I searched through all the docs, but could not find anything about it.

If there is no way, I guess implementing it would require the rsync protocol to 
be changed/extended!?

regards
roland 
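The protocol indeed has no such query. A common workaround is to publish the numbers out of band: a cron job on the server drops a small status file into a module, and clients fetch it like any other file. A sketch of the server-side half (paths and field names are made up):

```python
import shutil

def write_free_space_report(volume, report_path):
    """Write free-space numbers into a file inside an rsync module, so a
    client can fetch them with a plain rsync of that one file.  (This is a
    workaround: the rsync protocol itself has no free-space query.)"""
    usage = shutil.disk_usage(volume)
    with open(report_path, "w") as f:
        f.write("total=%d used=%d free=%d\n"
                % (usage.total, usage.used, usage.free))
```

The client then runs something like `rsync backupserver::backup/diskfree.txt .` before deciding whether to start the transfer.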


rsyncssl

2013-01-26 Thread devzero
Hi, 

i`m wondering - can't THIS one
http://gitweb.samba.org/?p=rsync-patches.git;a=blob;f=openssl-support.diff

be completely replaced with THIS one ?
http://dozzie.jarowit.net/trac/wiki/RsyncSSL
http://dozzie.jarowit.net/git?p=rsync-ssl.git;a=tree

Isn`t RsyncSSL (wrap rsync with stunnel via stdin/out) the better solution ? 
(as it is using a mature external program for the SSL stuff)

regards
roland


Re: rsync speedup - how ?

2009-08-08 Thread devzero
 I really don't think it's a good idea to sync large data files in use,
 which is modified frequently, e.g. SQL database, VMware image file.
 
 As rsync do NOT have the algorithm to keep those frequently modified
 data file sync with the source file. And this will course data file
 corrupted.
 
 If I'm wrong, please correct me. Thanks.

they are not in use, as I take a snapshot before rsync, 
so the files won't change during transfer.
So I'm doing a sort of crash-consistent copy.

roland





Re: rsync speedup - how ?

2009-08-07 Thread devzero
so, instead of 500M I would transfer 100GB over the network.
That's no option.

Besides that, for transferring complete files I know faster methods than rsync.

One more question: 
how safe is transferring a 100GB file? I.e., as rsync uses checksums 
internally to compare the contents of two files, how can I calculate the risk 
of two files NOT being perfectly in sync after an rsync run?  I assume there IS 
a risk, just as there is a risk that two files may have the same md5 checksum 
by chance.

regards
roland
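The risk can be bounded with the birthday approximation: for a b-bit digest compared over n blocks, P(collision) ≈ n² / 2^(b+1). A quick sanity check in Python (block size and file size are just example figures; rsync additionally verifies a whole-file strong checksum at the end of each transfer):

```python
def collision_probability(n_blocks, digest_bits):
    """Birthday-bound approximation: P ~ n^2 / 2^(bits+1)."""
    return n_blocks ** 2 / 2 ** (digest_bits + 1)

# a 100 GB file split into 8 KB blocks, checked with a 128-bit digest
n = (100 * 2**30) // (8 * 2**10)
p = collision_probability(n, 128)   # roughly 2.5e-25
```

Even with files a thousand times larger the bound grows only with n², so the practical risk stays negligible.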



List:   rsync
Subject:Re: rsync speedup - how ?
From:   Jon Forrest jlforrest () berkeley ! edu
Date:   2009-08-07 0:25:12
Message-ID: h5fs8m$nqj$1 () ger ! gmane ! org

One way I've been trying to speedup rsync may
not apply in every situation. In my situation
when files change, they usually change completely.
This is especially true for large files. So,
the rsync algorithm does me no good. So,
I've been using the W flag (e.g. rsync -avzW)
to turn this off.

I don't know objectively how much difference
this makes but it seems reasonable.

Comments?





sparse files patch question

2009-08-07 Thread devzero
Hello, 

i just came across the sparse-block patch. 

I'm using rsync to store vmware vmdk virtual disks on a zfs filesystem.
vmdk files have large portions of zeroed data, and when thin-provisioned (not 
being used yet) they may even be sparse.
On the target, after writing to zfs, the zeroes are always efficiently 
stored/compressed, i.e. they take no additional space on zfs.

Is this patch worth a try here to speed things up?

I'm a little unsure, but I assume not, as this is to be used when llseek() 
is costly - correct?

I'm asking because I don't want to waste time with experiments at a customer site.

regards
roland



This patch adds the --sparse-block option.  Andrea Righi writes:

  In some filesystems, typically optimized for large I/O throughputs (like
  IBM GPFS, IBM SAN FS, or distributed filesystems in general) a lot of
  lseek() operations can strongly impact on performances. In this cases it
  can be helpful to enlarge the block size used to handle sparse files
  directly from a command line parameter.

  For example, using a sparse write size of 32KB, I've been able to
  increase the transfer rate of an order of magnitude copying the output
  files of scientific applications from GPFS to GPFS or GPFS to SAN FS.

  -Andrea

To use this patch, run these commands for a successful build:

patch -p1 <patches/sparse-block.diff
./configure   (optional if already run)
make



Re: bug? rsync counts inaccesible file as being transferred

2009-08-07 Thread devzero
it's even worse:

 Number of files: 44
 Number of files transferred: 1   
 Total file size: 59 bytes
 Total transferred file size: 27793 bytes

this is wrong. That's the size of the file which failed to transfer, so it 
should not be added to the total transferred file size, should it?

 Literal data: 0 bytes
 Matched data: 0 bytes
 File list size: 1094
 File list generation time: 0.001 seconds
 File list transfer time: 0.000 seconds
 Total bytes sent: 
 Total bytes received: 31




 hello, 
 
 with --stats, shouldn`t we differ between number of files transferred and 
 number of files failed ?
 
 the problem is, that i have files which ALWAYS fail on transfer, and to check 
 for number of files failed=2 would be the best way for me to check if the 
 overall transfer was ok.
 
 if i managed to create a patch to address this, would such a patch be accepted ?
 
 regards
 roland
 
 
 rsync --stats -av * /tmp/test2
 sending incremental file list
 rsync: send_files failed to open 
 /tmp/rsync-3.0.6/patches/openssl-support.diff: Permission denied (13)
 
 Number of files: 44
 Number of files transferred: 1   
 Total file size: 59 bytes
 Total transferred file size: 27793 bytes
 Literal data: 0 bytes
 Matched data: 0 bytes
 File list size: 1094
 File list generation time: 0.001 seconds
 File list transfer time: 0.000 seconds
 Total bytes sent: 
 Total bytes received: 31
 
 sent  bytes  received 31 bytes  2284.00 bytes/sec
 total size is 59  speedup is 516.64
 rsync error: some files/attrs were not transferred (see previous errors) 
 (code 23) at main.c(1040) [sender=3.0.4]
 
 i would expect:
 Number of files: 44
 Number of files transferred: 0
 Number of files failed: 1   
 Total file size: 59 bytes
 ...
 






bug? rsync counts inaccesible file as being transferred

2009-08-07 Thread devzero
hello, 

with --stats, shouldn't we distinguish between the number of files transferred 
and the number of files failed?

the problem is, that i have files which ALWAYS fail on transfer, and to check 
for number of files failed=2 would be the best way for me to check if the 
overall transfer was ok.

if i managed to create a patch to address this, would such a patch be accepted ?

regards
roland


rsync --stats -av * /tmp/test2
sending incremental file list
rsync: send_files failed to open 
/tmp/rsync-3.0.6/patches/openssl-support.diff: Permission denied (13)

Number of files: 44
Number of files transferred: 1   
Total file size: 59 bytes
Total transferred file size: 27793 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 1094
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 
Total bytes received: 31

sent  bytes  received 31 bytes  2284.00 bytes/sec
total size is 59  speedup is 516.64
rsync error: some files/attrs were not transferred (see previous errors) (code 
23) at main.c(1040) [sender=3.0.4]

i would expect:
Number of files: 44
Number of files transferred: 0
Number of files failed: 1   
Total file size: 59 bytes
...





Re: rsync speedup - how ?

2009-08-07 Thread devzero
 devz...@web.de wrote:
  so, instead of 500M i would transfer 100GB over the network.
  that`s no option.
 
 I don't see how you came up with such numbers.
 If files change completely then I don't see why
 you would transfer more (or less) over the network.
 The difference that I'm thinking of is that
 by not using the rsync algorithm then you're
 substantially reducing the number of disk I/Os.

let me explain: all files are HUGE datafiles, and they are of constant size.
They are harddisk images, and their contents are changed in place, i.e. 
specific blocks in the files are accessed and rewritten.

so, the question is:
is rsync's rolling-checksum algorithm the perfect (i.e. fastest) algorithm to 
match changed blocks at fixed locations between source and destination files?
I'm not sure, because I have no in-depth knowledge of the mathematical
background of the rsync algorithm. I assume: no - but it's only a guess...

 The reason I say this, and I could be wrong since
 I'm no rsync algorithm expert, is because when the
 local version of a file and the remote version of
 a file are completely different, and the rsync
 algorithm is being used, the amount of I/O
 that must be done consists of the I/Os that
 compare the two files, plus the actual transfer
 of the bits from the source file to the destination
 file. (That's a very long sentence, isn't it.)
 Please correct this thinking if it's wrong.

yes, that's correct. But what I'm unsure about is whether rsync isn't doing 
too much work detecting the differences. It doesn't need to look back and 
forth (as I read somewhere it would); it just needs to check whether block 1 
in file A differs from block 1 in file B - a sort of dumb comparison, without 
need for complex math or any real intelligence to detect relocation of data.
See this post: 
http://www.mail-archive.com/backuppc-us...@lists.sourceforge.net/msg08998.html
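The fixed-offset comparison described here is simple to express; a Python sketch for equal-sized in-memory images (this is deliberately not rsync's algorithm, whose rolling checksum additionally detects data that has moved):

```python
import hashlib

def changed_blocks(a, b, blocksize=4096):
    """Compare two equal-sized disk images at fixed offsets and return the
    indices of blocks whose digests differ."""
    assert len(a) == len(b), "images must be the same size"
    diffs = []
    for i in range(0, len(a), blocksize):
        if (hashlib.md5(a[i:i + blocksize]).digest()
                != hashlib.md5(b[i:i + blocksize]).digest()):
            diffs.append(i // blocksize)
    return diffs
```

For images where data never moves, only the differing blocks would then be retransmitted; this is essentially what block-level snapshot tools do.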

  besides that, for transferring complete files i know faster methods than 
  rsync.
 
 Maybe so (I'd like to hear what you're referring to) but one reason
 I like to use rsync is that using the '-avzW' flags
 results in a perfect mirror on the destination, which is
 my goal. Do your faster methods have a way of doing that?

no, I have no faster replacement which is as good at perfect mirroring as 
rsync, but there are faster methods for transferring files.
Here is an example: http://communities.vmware.com/thread/29721

  one more question: 
  how safe is transferring a 100gb file, i.e. as rsync
  is using checksums internally to compare the contents
  of two files, how can i calculate the risk of 2 files
  being NOT perfectly in sync after rsync run ?
 
 Assuming the rsync algorithm works correctly, I don't
 see any difference between the end result of copying
 a 100gb file with the rsync algorithm or without it.
 The only difference is the amount of disk and network
 I/O that must occur.

the rsync algorithm uses checksumming to find differences.
Checksums are a sort of data reduction, creating a hash from a larger amount 
of data. I just want to understand what makes sure that there are no hash 
collisions which break the algorithm.
Mind that rsync has existed for some time, and in that time the file sizes 
transferred with rsync may have grown by a factor of 100 or even 1000.

regards
roland
 




rsync speedup - how ?

2009-08-06 Thread devzero
Hello, 

I'm using rsync to sync large virtual machine files from one ESX server to 
another.

rsync is running inside the so-called ESX console, which is basically a 
specially crafted linux VM with some restrictions.

The speed is reasonable, but I guess it's not the optimum - at least I don't 
know where the bottleneck is.

I'm not using ssh as transport but run rsync in daemon mode on the target, as 
this speeds things up when large amounts of data go over the wire.

I read that rsync would be not very efficient with ultra-large files (I'm 
syncing files of up to 80GB).

Regarding the bottleneck: neither cpu, network nor disk is at its limit - 
neither on the source nor on the destination system.
I don't see 100% cpu, 100% network or 100% disk I/O usage.

Furthermore, I wonder:
isn't rsync just too intelligent for such file transfers, as the position of 
data inside those files (containing harddisk images) won't really change?
I.e., we don't need to check for data relocation; we just need to know whether 
some blocks changed inside a block of size x, and if there was a change, we 
could transfer that whole block again. So I wonder if we need a rolling 
checksum at all to handle this. Wouldn't checksums over a fixed block size be 
sufficient for this task?

regards
roland
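For context, the rolling part of rsync's weak checksum is what makes the shifted-data search cheap: the window can slide one byte in O(1) instead of re-summing the whole block. A simplified sketch (real rsync pairs this weak sum with a strong MD4/MD5 digest and uses slightly different constants):

```python
def weak_checksum(block):
    """Adler-style weak checksum over one block (simplified; rsync uses a
    similar pair of 16-bit running sums)."""
    M = 1 << 16
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return (b << 16) | a

def roll(a, b, out_byte, in_byte, blocklen):
    """Slide the window one byte to the right in O(1): drop out_byte,
    take in_byte, and update both sums incrementally."""
    M = 1 << 16
    a = (a - out_byte + in_byte) % M
    b = (b - blocklen * out_byte + a) % M
    return a, b
```

When the data cannot move (fixed-position disk images), this sliding ability buys nothing, which is exactly the point of the question above: fixed-block strong checksums would suffice there.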




ai_family not supported

2009-06-15 Thread devzero
hello, 

i`m trying to use rsync via inetd and having problems.

i`m getting 

2009/06/15 18:25:29 [41082] name lookup failed for : ai_family not supported
2009/06/15 18:25:29 [41082] connect from UNKNOWN ()

when trying to write to the rsync daemon via inetd.
Reading works fine.

inetd.conf looks like this:

rsync stream tcp nowait root /path/rsync rsync --daemon 
--log-file=/tmp/rsync.log --config=/path/rsyncd.conf -vv

rsyncd.conf looks like this:

pid file=/tmp/rsyncd.pid
use chroot = no
list = yes
ignore errors = no
ignore nonreadable = yes
transfer logging = no
uid = root
gid = root
strict modes = no

[backup]
comment = backup
path = /path/backup/
read only = no



what does that error mean and how can this be solved ?

regards
roland



Re: version 3 and glibc

2007-10-11 Thread devzero
> wondering if the only option to have rsync 3 running is to have glibc 2.4+?

who is telling this?
Sure, you can't run rsync on a system with an old glibc if it was compiled on 
a system with a newer glibc - but you can compile it against the old glibc.

regards
roland


 -----Original message-----
 From: [EMAIL PROTECTED]
 Sent: 11.10.07 22:08:44
 To: rsync list rsync@lists.samba.org
 Subject: version 3 and glibc


 
 I did ask before but got no reply... here i go again to see if someone
 could reply :-)
 
 Thanx!
 
 ...
 
 wondering if the only option to have rsync 3 running is to have glibc 2.4+?
 
 I have a backup server and many other servers running cpanel on them, so
 a glibc update is not an option as it could screw up the system. Any
 idea or workaround? Or should i stick with old versions of rsync?
 
 Thanx in advance!
 
 Manuel
 
 
 
 





RE: Mapped Drive

2007-09-30 Thread devzero
for now there is no caching - anyway, how would checksums be cached?
if mtime/size is not a reliable method for detecting file changes and a
checksum is the only method, then to detect whether you need to update the
cache you would have to ... checksum. so a checksum cache is quite pointless,
imho.

 I suppose it could be cached at either storage location.
if you use rsync on a mapped drive, you have no local and remote storage
location from rsync's point of view, because rsync isn't being executed on the
remote node. so if rsync calculates a checksum, it always transfers the whole
file via the mapped drive.
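for contrast, the default quick check that avoids reading file contents at all
can be sketched like this (a simplification of rsync's behavior, not its code):

```python
import os

def quick_check(src: str, dst: str) -> bool:
    # rsync's default heuristic: treat the destination as up to date when
    # size and mtime match, without reading either file's data.
    s, d = os.stat(src), os.stat(dst)
    return s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime)
```

with -c, by contrast, the full contents of both files must be read - and on a
mapped drive that read crosses the network every time.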

regards
roland



 -Ursprüngliche Nachricht-
 Von: Stephen Zemlicka [EMAIL PROTECTED]
 Gesendet: 30.09.07 07:30:04
 An: 'Matt McCutchen' [EMAIL PROTECTED]
 CC: rsync@lists.samba.org
 Betreff: RE: Mapped Drive


 
 The problem is some files don't change in size.  So I was hoping that the
 checksums could be cached.  Perhaps I'm mistaken but I thought the checksum
 determined what actual blocks were transferred.  I suppose it could be
 cached at either storage location.
 
 _
 Stephen Zemlicka
 Integrated Computer Technologies
 PH. 608-558-5926
 E-Mail [EMAIL PROTECTED] 
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matt
 McCutchen
 Sent: Saturday, September 29, 2007 5:03 PM
 To: Stephen Zemlicka
 Cc: rsync@lists.samba.org
 Subject: Re: Mapped Drive
 
 On 9/28/07, Stephen Zemlicka [EMAIL PROTECTED] wrote:
  Is there a way to have rsync cache the checksums for something like this
 and
  would that help?
 
 I'm not sure exactly what you mean.  You said you were using the -c
 (--checksum) option, which makes rsync decide whether to update each
 destination file by reading the file in full and comparing its MD4/MD5
 checksum with that of the source file.  Do you mean you want rsync to
 cache the checksums of the destination files?  On which machine would
 the cache be?
 
 Anyway, if the issue is that you don't want rsync spending the
 bandwidth to read the destination files for the --checksum check, just
 remove -c and rsync will use the default size-and-mtime quick check.
 
 Matt
 
 





Re: Rsync and opened files

2007-09-26 Thread devzero
could this copy correctly opened files?
it's not a question of whether the file is open - it's a question of whether
you get a consistent copy.

with this, there is nothing that makes sure the file doesn't change during the
transfer - so if that happens, on the target side you have a file that differs
from the source.

think of a big database file, gigabytes in size.
now you start rsync. it starts and finds the file different from the backup
copy, so it begins to transfer the changes to the target side. for example,
after rsync has scanned 2 GB of the file and transferred the diffs, the
database process may change some blocks within the first 2 GB, and after that
some blocks within the next 2 GB that rsync has yet to transfer. but rsync
won't see the changes within the first 2 GB; it continues reading to the end
and transfers/syncs the rest, leaving the first 2 GB as is.
so you may end up with a file in a really inconsistent/weird state!
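the failure mode can be shown in miniature (a toy simulation of a writer
racing the reader, not rsync itself):

```python
# Toy simulation of a source file changing mid-transfer.
old = bytearray(b"A" * 8 + b"B" * 8)   # "database file" before the transfer
copy = bytearray(16)                   # what the receiver ends up with

copy[:8] = old[:8]                     # sender has read the first half
old[0:1] = b"X"                        # writer changes the already-read half...
old[8:9] = b"Y"                        # ...and the not-yet-read half
copy[8:] = old[8:]                     # sender reads on to the end

new = bytes(old)
result = bytes(copy)
# The copy matches neither the old nor the new version of the file:
assert result != b"A" * 8 + b"B" * 8
assert result != new
```

the result mixes pre-change and post-change blocks, which is exactly the
inconsistent state described above.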

 -Ursprüngliche Nachricht-
 Von: Egoitz Aurrekoetxea [EMAIL PROTECTED]
 Gesendet: 26.09.07 04:03:22
 An: rsync@lists.samba.org
 Betreff: Rsync and opened files 


 
 
 
 Hello,
 
 I'm trying to determine if rsync is a sure method of backing up servers 
 (Linux and Windows) whose files are constantly being accesed and are not able 
 to be stoped they're services for backing up purposes... I would use it over 
 ssh for making incremental backups... in my tests seem to always have worked 
 backing up from a debian server to the copy server that runs debian too...  
 I'm using the next :
 
 OPTS=--force --ignore-errors --delete --backup 
 --backup-dir=/home/ramattack/pruebas-rsync/$BACKUPDIRMES/$dia -avz
 
 rsync $OPTS $BDIR /home/ramattack/pruebas-rsync/$BACKUPDIRMES/imagen-copia
 
 BDIR is source I want to backup and /home/ramattack/pruebas-rsync... is 
 the destination...
 
 could this copy correctly opened files? Normally I will use it for backing up 
 linux machines normally... and the backup server will be of course a linux 
 machine (debian machine). but how does it behave with linux machines?
 
 P.D. I have googled and searched over there but all posts I've find are 
 old... and I wanted to have a recent answer.
 
 Thanks a lot mates!!!
 
 
 





Re: compression of source and target files

2007-09-23 Thread devzero
oh, this is an interesting patch - thanks for the pointer.

i have tried it, and it looks interesting but somewhat incomplete.

i can transfer remote files to a local dir and they are compressed on the
local side, but (quite logically) this breaks size/content checking.

this is also mentioned in the patch:

+Use of --dest-filter automatically enables --whole-file.
+If your filter does not output the same number of bytes that it
+received on input, you should use --times-only to disable size and
+content checks on subsequent rsync runs.

so deciding transfers based on timestamp alone is a little unsafe for me.

i tried --checksum, but it didn't work as expected.

it seems this option needs to be enhanced to work with
--dest-filter/--source-filter - or maybe size/content checking could even be
implemented by making rsync use the uncompressed size for comparison. that
would use a lot of cpu, though.
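the uncompressed-content comparison suggested above can be sketched as follows
(an illustration with gzip standing in for the dest filter; the patched rsync
options do not do this):

```python
import gzip
import hashlib

def same_uncompressed_content(src_path: str, gz_dest_path: str) -> bool:
    # Compare the source bytes against the *decompressed* destination,
    # since the stored (compressed) size can never match the source size.
    with open(src_path, "rb") as f:
        src_digest = hashlib.md5(f.read()).hexdigest()
    with gzip.open(gz_dest_path, "rb") as f:
        dst_digest = hashlib.md5(f.read()).hexdigest()
    return src_digest == dst_digest
```

this is also where the cpu cost comes from: every comparison has to decompress
the whole destination file before it can checksum it.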

regards
roland


 -Ursprüngliche Nachricht-
 Von: Matt McCutchen [EMAIL PROTECTED]
 Gesendet: 23.09.07 04:11:43
 An: Kenneth Simpson [EMAIL PROTECTED]
 CC: rsync@lists.samba.org
 Betreff: Re: compression of source and target files


 
 On 9/21/07, Kenneth Simpson [EMAIL PROTECTED] wrote:
   Sorry, I neglected to mention the source is uncompressed but
  we need to compress the target file because we're running out
  of disk space and the files are highly compressible.
 
 You might try the experimental patch source-filter_dest-filter.diff
 that comes in patches/ of the rsync source distribution.  It adds
 options that claim to do what you want, but I have never tested it;
 your mileage may vary.
 
 Matt
 





Re: File changed during save....

2007-09-16 Thread devzero
 Handling of concurrent changes to source files is one of rsync's
 weaknesses.  

too bad, but good to know :)

 The rsync sender naively reads each source file from
 beginning to end and sends what it sees; it doesn't detect if source
 files change while being transferred.  

that's what i feared.

 In many cases, the concurrent
 modification will leave the source file with an mtime different from
 the mtime rsync saw when it statted the file during file-list building
 (which gets set on the destination file), 
 so a subsequent rsync run will fix the corrupted destination file.

mhh - if the file is very large and gets stat'ed before the transfer and then
changes during the transfer because it happens to be in use (how is the
receiver/sender supposed to know?) - how is a subsequent rsync run supposed to
fix this reliably? on the next run, rsync would detect that the file changed
and would transfer the differences again - and while doing so, the file could
change again... and so on.
so this is not a reliable workaround, imho.

what about an option to stat the file again _after_ the transfer, so rsync
could at least report "warning: file changed during rsync transfer"? that
would be better than nothing, imho.
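the proposed check can be sketched like this (an illustration of the idea
using a plain file copy, not an rsync patch):

```python
import os
import shutil

def copy_with_change_check(src: str, dst: str) -> bool:
    """Copy src to dst, then stat src again as the message proposes.

    Returns True when size and mtime are unchanged; False means the
    source was modified mid-transfer and the copy may be inconsistent.
    """
    before = os.stat(src)
    shutil.copyfile(src, dst)
    after = os.stat(src)
    return (before.st_size == after.st_size
            and before.st_mtime_ns == after.st_mtime_ns)
```

a caller could then at least warn - or retry the file - when this returns
False, instead of silently keeping a possibly inconsistent copy.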

See this thread for more information:
 http://lists.samba.org/archive/rsync/2006-January/014534.html

thanks for the pointer

roland

ps:
sorry, couldn't cc to [EMAIL PROTECTED] - my webmailer won't let me


 -Ursprüngliche Nachricht-
 Von: Matt McCutchen [EMAIL PROTECTED]
 Gesendet: 16.09.07 17:56:45
 An: roland [EMAIL PROTECTED]
 CC: rsync@lists.samba.org
 Betreff: Re: File changed during save


 
 On 9/15/07, roland [EMAIL PROTECTED] wrote:
  what`s the rsync equivalent to this?
  how can i see which files changed while rsync was transferring them ?
 
 Handling of concurrent changes to source files is one of rsync's
 weaknesses.  The rsync sender naively reads each source file from
 beginning to end and sends what it sees; it doesn't detect if source
 files change while being transferred.  In many cases, the concurrent
 modification will leave the source file with an mtime different from
 the mtime rsync saw when it statted the file during file-list building
 (which gets set on the destination file), so a subsequent rsync run
 will fix the corrupted destination file.  See this thread for more
 information:
 
 http://lists.samba.org/archive/rsync/2006-January/014534.html
 
 Matt
 





RE: File changed during save....

2007-09-16 Thread devzero
 Note that back-to-back rsyncs make the window of opportunity much
 much smaller for things to change during transit.

yes, but it still leaves room for corrupted transfers that probably nobody
would ever know about!?


 -Ursprüngliche Nachricht-
 Von: [EMAIL PROTECTED]
 Gesendet: 16.09.07 18:24:23
 An:  'roland' [EMAIL PROTECTED]
 CC: rsync@lists.samba.org
 Betreff: RE: File changed during save


 
 Matt McCutchen wrote:
  
  On 9/15/07, roland [EMAIL PROTECTED] wrote:
   what`s the rsync equivalent to this?
   how can i see which files changed while rsync was 
  transferring them ?
  
  Handling of concurrent changes to source files is one of rsync's
  weaknesses.  The rsync sender naively reads each source file from
  beginning to end and sends what it sees; it doesn't detect if source
  files change while being transferred.  In many cases, the concurrent
  modification will leave the source file with an mtime different from
  the mtime rsync saw when it statted the file during file-list building
  (which gets set on the destination file), so a subsequent rsync run
  will fix the corrupted destination file.  See this thread for more
  information:
  
  http://lists.samba.org/archive/rsync/2006-January/014534.html
  
  Matt
  -- 
 
 Note that back-to-back rsyncs make the window of opportunity much
 much smaller for things to change during transit.
 
 





RE: File changed during save....

2007-09-16 Thread devzero
sounds interesting - are you speaking about a special rsync version or about 
this helper script:

http://marc.info/?l=rsync&m=115822570129821&w=2

?

 -Ursprüngliche Nachricht-
 Von: Stephen Zemlicka [EMAIL PROTECTED]
 Gesendet: 16.09.07 19:43:54
 An:  'roland' [EMAIL PROTECTED]
 CC: rsync@lists.samba.org
 Betreff: RE: File changed during save


 
 If you're on windows, someone wrote a vss patch for rsync.  I haven't used
 it extensively, but it has worked for in-use outlook pst files so far.
 I plan on testing it with exchange and sql databases in the near future.
 
 _
 Stephen Zemlicka
 Integrated Computer Technologies
 PH. 608-558-5926
 E-Mail [EMAIL PROTECTED] 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of
 Tony Abernethy
 Sent: Sunday, September 16, 2007 11:24 AM
 To: 'Matt McCutchen'; 'roland'
 Cc: rsync@lists.samba.org
 Subject: RE: File changed during save
 
 Matt McCutchen wrote:
  
  On 9/15/07, roland [EMAIL PROTECTED] wrote:
   what`s the rsync equivalent to this?
   how can i see which files changed while rsync was 
  transferring them ?
  
  Handling of concurrent changes to source files is one of rsync's
  weaknesses.  The rsync sender naively reads each source file from
  beginning to end and sends what it sees; it doesn't detect if source
  files change while being transferred.  In many cases, the concurrent
  modification will leave the source file with an mtime different from
  the mtime rsync saw when it statted the file during file-list building
  (which gets set on the destination file), so a subsequent rsync run
  will fix the corrupted destination file.  See this thread for more
  information:
  
  http://lists.samba.org/archive/rsync/2006-January/014534.html
  
  Matt
  -- 
 
 Note that back-to-back rsyncs make the window of opportunity much
 much smaller for things to change during transit.
 
 
 





Re: RSYNC via pipe/socket ?

2006-02-09 Thread devzero
hello matt, 

thank you for your reply.

as i see it, the method you describe is only theoretical, because it won't
work due to the buffering issue.
furthermore, it still needs ssh or some other remote shell.

i'd like to leave out ssh - or any remote shell - entirely, because encryption
is slow and needs cpu.
i read that there is a patch for openssh that lets you use -c none, i.e.
Cipher none, to run ssh without encryption, but this is no option because i
cannot install a patched ssh on all machines.

 The next best thing is to use rsync to generate a batch file on the
 sender, bring the batch file to the receiver by hand (compressing and
 decompressing), and apply the batch file on the receiver using rsync.
 This effectively lets you compress the mainstream file-data part of the
 transmission, but much of the metadata must still go over the wire
 during the creation of the batch file.  See the man page for details.


   $ rsync --write-batch=pfx -a /source/dir/ /adest/dir/
   $ rcp pfx.rsync_* remote:
   $ ssh remote rsync --read-batch=pfx -a /bdest/dir/
   # or alternatively
   $ ssh remote ./pfx.rsync_argvs /bdest/dir/

ah - i understand. quite interesting and cool stuff, but unfortunately it
would need tons of temporary space on the first rsync run, or whenever there
is a significant diff between the two hosts.

conclusion:
neither method seems to be an option for me at the moment.

any chance of seeing pluggable compression in rsync one day? maybe it's a
planned feature?

regards
roland



Matt McCutchen [EMAIL PROTECTED] schrieb am 09.02.06 02:02:47:
 
 On Fri, 2005-12-09 at 00:46 +0100, roland wrote:
  I`m trying to find a way to use lzo compression for the data being
  transferred by rsync.
 
 It's easy to set this up.  Consider this (for gzip, but it's easy to do the
 same for any compression program):
 
 /usr/local/bin/gzip-wrap:
   #!/bin/bash
   gzip | "$@" | gunzip
 
 /usr/local/bin/gunzip-wrap:
   #!/bin/bash
   gunzip | "$@" | gzip
 
 Then run:
   rsync --rsh='gzip-wrap ssh' --rsync-path='gunzip-wrap rsync'
   options source dest
 
 As elegant as this technique is, it fails because compression programs
 perform internal buffering.  One rsync will send a block of data and
 wait for an acknowledgement that the other side has received it, but
 since the end of the data is buffered in the compression program, the
 other side never responds and deadlock results.  There might be a way
 around this, but I can't think of one.
 
 The next best thing is to use rsync to generate a batch file on the
 sender, bring the batch file to the receiver by hand (compressing and
 decompressing), and apply the batch file on the receiver using rsync.
 This effectively lets you compress the mainstream file-data part of the
 transmission, but much of the metadata must still go over the wire
 during the creation of the batch file.  See the man page for details.
 -- 
 Matt McCutchen
 [EMAIL PROTECTED]
 http://mysite.verizon.net/hashproduct/
 


