Re: Is there a better way to transfer data that doesn't use so much cache?

2022-08-03 Thread Dan Stromberg via rsync
On Wed, Aug 3, 2022 at 5:41 PM Robin Lee Powell via rsync <
rsync@lists.samba.org> wrote:

> On Wed, Aug 03, 2022 at 02:04:22PM -0400, Rob Campbell via rsync wrote:
> > The problem isn't that there are many syncs because the problem happens
> on
> > the first one that runs.
>
> You didn't actually say what the problem *is*.
>
> I can infer from the subject that you think it's bad that rsync is
> using a bunch of disk/buffer cache, but that's not rsync, that's
> Linux, and it's by design; Linux uses as much RAM as it possibly can
> for disk cache, always.  This improves performance.  In a
> well-performing Linux system, the "free" column of "free -h" is very
> low, and the "available" column is very high.
>

Linux does indeed try to put your RAM to good use, and often that means
caching data from disk in RAM.

However, if you transfer a large amount of data and do not intend to reread
or retransmit it any time soon, that cached data isn't really put to good
use, and evicting other, hotter data to hold it can slow your system down
significantly.

It is possible, however, to bypass the buffer cache using O_DIRECT, but that
requires O_DIRECT support in your application, or something like
https://stromberg.dnsalias.org/~strombrg/libodirect/
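
As a rough illustration (this is plain dd, not rsync), a copy can bypass the
page cache on both ends with direct I/O; the paths here are hypothetical:

# Direct I/O avoids polluting the cache, but needs reasonably aligned block sizes.
dd if=/source/bigfile of=/dest/bigfile bs=1M iflag=direct oflag=direct status=progress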

HTH.
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Merging three slightly different directories

2022-06-07 Thread Dan Stromberg via rsync
I suspect you want a duplicate finder more than a file transfer tool.

EG: https://stromberg.dnsalias.org/~strombrg/equivalence-classes.html
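
If you just want a quick-and-dirty pass before reaching for a dedicated tool,
a shell sketch along these lines (directory names are placeholders) groups
candidate duplicates by checksum:

# Hash everything once, then print the lines whose checksum occurs more than once.
find /dir1 /dir2 /dir3 -type f -print0 | xargs -0 md5sum | sort > /tmp/sums
awk '{print $1}' /tmp/sums | uniq -d | grep -F -f - /tmp/sums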


On Tue, Jun 7, 2022 at 5:36 PM hput via rsync  wrote:

> I want to merge 3 slightly different directories of mostly images.
>
> Not just mostly but the vast majority are images files.
>
> Each directory has about 285 GB of files.
>
> At first I thought I would just run a straightish rsync from each directory
> in turn, starting with the biggest which is not much bigger ... maybe
> a few MB.
>
> Like:
>
> rsync -vvrptgoD --stats /biggest/ /emptydir/
>
> rsync -vvrptgoD --stats /next-biggest/ /same-dir/
>
> rsync -vvrptgoD --stats /smallest/ /same-dir
>
> But after some thought I'm guessing that might be a wrongheaded way to go.
>
> All three dirs have mostly the same stuff in them and in the same
> places, but a close inspection, given the 285 GB, would be pretty much a
> non-starter.
>
> There will be thousands that have matching names maybe newer or older
> bigger etc.  And maybe some of the same stuff but in slightly  different
> places.
>
> How can I make rsync do the work for me?  So I don't end up losing files.
>
>
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Trying to elevate rsync privileges when connecting over ssh without using NOPASSWD in sudoers

2022-03-12 Thread Dan Stromberg via rsync
On Sat, Mar 12, 2022 at 12:23 PM Dr. Mark Asbach via rsync <
rsync@lists.samba.org> wrote:

> Hi there, hi past me,
>
> > My (non-working) attempt:
> > […]
> > So it seems the "-l" is dropped into the void letting ssh assume USER
> > was the target host? I don’t actually get what I can do.
>
> Turns out, I have to write down the description of my issue and then send
> the email before I magically understand the solution ;-)
>
> Here’s a working example that does not need a wrapper script:
>
> PASSWORD= rsync -vv --delete-after --delay-updates '/bin/sh -c
> "{ echo $PASSWORD; cat - ; } | ssh -i ~/.ssh/id.key $0 $* &"'
> --rsync-path='sudo -S rsync' ./SRCDIR USER@HOST:DSTDIR
>
> The trick was actually to add "$0" because $* will drop the first argument
> from the list as this typically is the name of the script itself (duh!).
>
> Hope this is of help to anyone,
>

Cool, glad you found a solution you're happy with.

Bear in mind that a password placed in an environment variable can be seen by
other users on the same system with "ps auxwwe".
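
For instance (with <PID> as a placeholder), here's one way to inspect what a
running process is exposing in its environment:

tr '\0' '\n' < /proc/<PID>/environ | grep '^PASSWORD='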
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Trying to elevate rsync privileges when connecting over ssh without using NOPASSWD in sudoers

2022-03-11 Thread Dan Stromberg via rsync
Why not rsync directly as root?  Then you can use a passwordless,
passphraseless RSA (or similar) keypair.
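
A minimal sketch of that approach (key path and host are hypothetical, and
sshd on the target has to allow root logins with that key):

# One-time setup: a key with no passphrase, installed for root on the target.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/deploy_key
ssh-copy-id -i ~/.ssh/deploy_key.pub root@target

# After that, no sudo is needed on the remote side.
rsync -a -e 'ssh -i ~/.ssh/deploy_key' ./SRCDIR/ root@target:/DSTDIR/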

On Fri, Mar 11, 2022 at 4:58 AM Dr. Mark Asbach via rsync <
rsync@lists.samba.org> wrote:

> Hi there,
>
> We are using ansible to deploy system configuration and web application
> source code to clusters of Linux computers. One part of this process
> requires transferring large directories to the target hosts, which is done
> using the „synchronize“ command in ansible that is in turn a wrapper around
> rsync. This works great in most scenarios, but we run into an issue with a
> specific (albeit for us: prominent) use case:
>
> - We try to have rsync connect over ssh using a non-privileged user
> account.
> - The account is set up for publickey authentication, so we can use ‚rsync
> -e „ssh -i /home/user/.ssh/some_id“‘.
> - On the target side, we want to escalate privileges for rsync, which we
> try using ‚rsync --rsync-path=„sudo rsync“‘.
>
> This whole scenario works fine, as long as for the ssh account we use for
> logging in, passwordless sudo is set up on the target. For security
> reasons, we do not want to go this route. Instead, we want to supply the
> user’s password for gaining privileges. On the web, I’ve found two
> suggestions for solving this:
>
> a) Using ssh-askpass, we can use the options -e "ssh -X"
> --rsync-path="sudo -A rsync" (see https://askubuntu.com/a/1167758). The
> problem in our scenario is that using ansible, we run the identical rsync
> command on multiple hosts in parallel (we target about 32 VMs in one go).
> So the person running the script would have to enter the password into 32
> dialogs exactly at the time they pop up.
>
> b) Passing the password to sudo via stdin using --rsync-path "echo
> MYPASSWORD | sudo -S rsync" (see https://askubuntu.com/a/1155897). This
> has the potential security implication that if the calling line is stored
> somewhere in a shell history file of the control host, the password will be
> breached, but there’s a couple of measures we can take to mitigate that.
> However, I fail at getting this to run.
>
> Here’s a sample command that I get out of a patched ansible „synchronize“
> command. I’m trying to connect to a Ubuntu 18.04 VM with the user account
> „mark“ that is in the „sudoers“ group but does not have „NOPASSWD“ set, so
> running „sudo“ for the first time in a session will require entering the
> password for „mark“, which here is „test“:
>
> rsync --delay-updates -F --compress --delete-after --archive --no-perms
> --no-owner --no-group --rsh='/usr/bin/ssh -S none -i ~/ssh/some_private_key
> -o Port= -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
> --rsync-path='echo test | sudo -S -u root rsync 2>/dev/null'
> --out-format='<>%i %n%L' ~/test_source_dir mark@127.0.0.1:
> /some/test_target_dir
>
> This is what I get:
> > Warning: Permanently added '[127.0.0.1]:' (ED25519) to the list of
> known hosts.
> > rsync: connection unexpectedly closed (0 bytes received so far) [sender]
>
> As far as I understand, this could be due to "sudo -S" prompting for the
> password and that prompt interfering with the rsync communications.
> However, I’m out of ideas what I could do to get around that.
>
> Help would be greatly appreciated ;-)
>
> Thanks and greetings from Cologne,
> Mark
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Confused as to why rsync thinks time, owner and group of many files differ

2022-02-03 Thread Dan Stromberg via rsync
On Thu, Feb 3, 2022 at 3:50 PM Andy Smith via rsync 
wrote:

> I am tempted to blow away the btrfs filesystem and just do xfs to
> xfs, to rule out weird issues there. It would be a shame though as
> I was hoping to use btrfs's compression here.
>

You might be able to do a partial transfer to a small XFS for testing -
perhaps even a loopback-mounted XFS.
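
A loopback-mounted XFS for testing can be thrown together in a few commands,
run as root (size and paths are just placeholders):

truncate -s 20G /var/tmp/xfs-test.img    # sparse backing file
mkfs.xfs /var/tmp/xfs-test.img
mkdir -p /mnt/xfs-test
mount -o loop /var/tmp/xfs-test.img /mnt/xfs-test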

Last I heard BTRFS wasn't ready for production.
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Windows backups?

2021-12-27 Thread Dan Stromberg via rsync
Can rsync back up an NTFS filesystem using a Windows 10 kernel?  So far I've had good
luck backing up NTFS filesystems on a dual boot system when booted into
Linux, but not when booted into Windows.

I've been bitten in the past by /usr/bin/find (for example) having problems
with Windows junctions over sshfs.  The problem appears to be that Windows'
sftp-server doesn't map junctions to directory symlinks; instead, it treats
junctions as just another directory, which leads to problems for both backups
and find because of a directory loop: eventually they complain about a
too-deep recursion.

I've heard that rsync (in some form; I don't know which one) can skip
junctions, so perhaps I could use an rsync wrapper for backing up Windows
machines?  That suggestion came from
https://www.reddit.com/r/linuxquestions/comments/kr9z2t/any_way_to_make_rsync_follow_junctions_on_ntfs/

I'd prefer to use rsync over ssh, but I perhaps could tunnel the rsyncd
protocol over ssh if it becomes necessary - that is, if it's likely to help.

Any suggestions?  Any guesses which rsync implementation, if any, is able
to treat junctions as symlinks or similar?

Thanks!
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: BackUp with "inverse" increments

2021-10-07 Thread Dan Stromberg via rsync
rsync --link-dest is fast and simple, good stuff.

I recommend using it with some sort of wrapper script for rotations, like
Backup.rsync:
https://stromberg.dnsalias.org/~strombrg/Backup.remote.html
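
The core of such a wrapper is small.  Here's a minimal sketch - paths, host
and retention count are placeholders, and this is not what Backup.rsync
literally does:

#!/bin/sh
set -e
dest=/backups/$(date +%Y-%m-%d_%H%M%S)
# On the very first run /backups/latest won't exist yet; rsync just warns
# about the missing --link-dest directory and does a full copy.
rsync -aHx --link-dest=/backups/latest root@host:/ "$dest"
ln -sfn "$dest" /backups/latest
# Keep only the 14 newest snapshots (GNU head/xargs assumed).
ls -1d /backups/20*_* | head -n -14 | xargs -r rm -rf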

On Wed, Oct 6, 2021 at 7:29 AM Kevin Korb via rsync 
wrote:

> See --link-dest.  That is what makes rsync shine for backups.
>
> On 10/6/21 10:09 AM, Helmut Jarausch via rsync wrote:
> > Hi,
> >
> > I'd like to mirror my root file system (e.g.) to a different disk. The
> > mirror should always be most recent.
> > In addition, I'd like to be able to restore my file system to the state
> > of one or more backups before
> > without storing full size snapshots.
> >
> > The --backup-dir option does most of the work, but what happens if the
> > old backup doesn't contain a file
> > which is part of the current source. How can I restore the old version
> > including deletion of files
> > which didn't exist in the old version?
> >
> > Many thanks for some hints or pointers,
> > Helmut
> >
>
> --
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> Kevin Korb  Phone:(407) 252-6853
> Systems Administrator   Internet:
> FutureQuest, Inc.   ke...@futurequest.net  (work)
> Orlando, Florida        k...@sanitarium.net (personal)
> Web page:   https://sanitarium.net/
> PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Question about rsync -uav dir1/. dir2/.: possib to link?

2021-09-04 Thread Dan Stromberg via rsync
I was thinking --link-dest too.

Sometimes this can be done with cpio too; check out the -pdlv options.
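
A quick sketch of the cpio route (directory names are placeholders; both
trees have to be on the same filesystem for the hard links to work):

# Pass-through mode: -p pass, -d create directories as needed,
# -l hard-link instead of copying, -v be verbose.
cd /path/to/dir1 && find . -print | cpio -pdlv /path/to/dir2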

On Sat, Sep 4, 2021 at 4:57 PM Kevin Korb via rsync 
wrote:

> Rsync does almost everything cp does but since it is designed to network
> it never got that feature.  I was thinking maybe --link-dest could be
> tortured into doing it but if it can I can't figure out how.  BTW, you
> have some pointless dots in there.
>
> On 9/4/21 6:41 PM, L A Walsh via rsync wrote:
> > I noticed in looking at download dirs for a project, that
> > another mirror had "crept-in" for usage (where different mirrors
> > are stored under mirror-URL names). To copy over the diffs,
> > normally I'd do:
> >   rsync -uav dir1/. dir2/.
> > (where dir1="the new mirror that I'd switched
> > to by accident, and dir2=the original dir).
> >
> > The files were "smallish" so I just copied them, BUT I wass
> > wondering if there was an option similar to using 'cp' for
> > a dircopy, but instead of
> >   cp -a dr1 dr2
> > using:
> >   cp -al dr1 dr2
> >
> > to just hard-link over files from "dir1" to "dir2" (both
> > are on the same file system).
> >
> > I looked at (and tried) --link-dest=DIR
> > (hardlink to files in DIR when unchanged), but either I had the syntax
> > wrong, or didn't understand it as it didn't seem to do what I
> > wanted: cp'ing the new files in dir1 into the orig dir.
> >
> > Does rsync have an option to just "copy" over the new
> > files via a hardlink?
> >
> > Tnx!
> >
> >
> >
> >
>
> --
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> Kevin Korb  Phone:(407) 252-6853
> Systems Administrator   Internet:
> FutureQuest, Inc.   ke...@futurequest.net  (work)
> Orlando, Florida        k...@sanitarium.net (personal)
> Web page:   https://sanitarium.net/
> PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Which method (rsync-based) is better for backing up Linux servers to Synology NAS?

2021-08-09 Thread Dan Stromberg via rsync
I suppose I may as well mention:
https://stromberg.dnsalias.org/~dstromberg/Backup.remote

It just does rsync snapshotting with --link-dest, and keeps the last n
snapshots.  It's smart enough to resume a previously interrupted snapshot.

It's pretty simple - both to set up and to use.  I used to use it a lot.  I
still will if I come up with a backup usage that requires high speed.


On Sun, Aug 8, 2021 at 11:46 PM Turritopsis Dohrnii Teo En Ming via rsync <
rsync@lists.samba.org> wrote:

> Subject: Which method (rsync-based) is better for backing up Linux
> servers to Synology NAS?
>
> Good day from Singapore,
>
> Our customer has 2 CentOS 7.9 Linux servers with cPanel web hosting
> control panel installed.
>
> We would like to backup the 2 CentOS 7.9 Linux servers to Synology NAS.
>
> Which method (rsync-based) is better for backing up Linux servers to
> Synology NAS?
>
> We are looking at 2 methods.
>
> Method 1
> 
>
> rsnapshot
>
> Method 2
> 
>
> Guide: Backup a Linux PC to a Synology NAS using Rsync!
> Link:
> https://www.wundertech.net/how-to-backup-a-linux-pc-to-a-synology-nas-using-rsync/
>
> Both methods are based on rsync.
>
> If we choose method 1, we are able to have retention periods. If we
> choose method 2, we can't have retention periods. But rsnapshot is
> much more difficult to configure than method 2.
>
> We are looking forward to your advice.
>
> Thank you very much.
>
> Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 9 August 2021,
> is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
> with a System Integrator (SI)/computer firm in Singapore. He is an IT
> enthusiast.
>
>
>
>
>
> -BEGIN EMAIL SIGNATURE-
>
> The Gospel for all Targeted Individuals (TIs):
>
> [The New York Times] Microwave Weapons Are Prime Suspect in Ills of
> U.S. Embassy Workers
>
> Link:
> https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html
>
>
> 
>
> Singaporean Targeted Individual Mr. Turritopsis Dohrnii Teo En Ming's
> Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts
> at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan
> (5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):
>
> [1] https://tdtemcerts.wordpress.com/
>
> [2] https://tdtemcerts.blogspot.sg/
>
> [3] https://www.scribd.com/user/270125049/Teo-En-Ming
>
> -END EMAIL SIGNATURE-
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Utility of --backup

2021-07-19 Thread Dan Stromberg via rsync
In Backup.rsync, a wrapper around rsync that can be used for backups, I do
not use --backup, but I do use --link-dest:
https://stromberg.dnsalias.org/~strombrg/Backup.remote.html
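
For comparison, the two styles look roughly like this (paths and the dated
names are just placeholders):

# --backup: dest stays current; files that would be overwritten or deleted
# are moved aside into a dated directory instead of being lost.
rsync -a --delete --backup --backup-dir=/backups/changed-2021-07-19 /data/ /mirror/

# --link-dest: each run produces a full-looking snapshot, with unchanged
# files hard-linked to the previous snapshot so they take no extra space.
rsync -a --delete --link-dest=/backups/2021-07-18 /data/ /backups/2021-07-19/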

On Sun, Jul 18, 2021 at 11:13 PM Lisa via rsync 
wrote:

> I would like some feedback about the --backup option in rsync. Is
> it worth using it for backups, or should I just use rsync
> commands that just transfer files without the use of --backup
> option?
>
> -b, --backup  make backups (see --suffix & --backup-dir)
> --backup-dir=DIR  make backups into hierarchy based in DIR
> --suffix=SUFFIX   backup suffix (default ~ w/o --backup-dir)
>
> I am somewhat hesitant to use it because with the backup option,
> preexisting destination files are renamed as each file is
> transferred or deleted. It also says that previously backed-up
> files could get deleted.  Thusly I need some assistance
> understanding all the pros and cons.
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Syntax Question on 'rsyncd.conf' on Windows

2021-04-09 Thread Dan Stromberg via rsync
This is probably more of a Cygwin question than an rsync question.

On Cygwin, E: should show up automatically as /cygdrive/e

You can test that by opening a Cygwin terminal and cd'ing to /cygdrive/e
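
So a second module in rsyncd.conf would just mirror the example above -
something like this (the module name is arbitrary):

[eDrive]
path = /cygdrive/e/
comment = Entire E Drive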

On Tue, Apr 6, 2021 at 1:32 PM Tim Evans via rsync 
wrote:

> Cygwin distribution of rsync for Windows contains an example rsyncd.conf,
> excerpt below:
>
> [cDrive]
> path = /cygdrive/c/
> comment = Entire C Drive
>
> Having trouble setting up a second Windows physical drive.  Is the
> "cygdrive" designation a reference to the full system root, so that  syntax
> for Windows Drive "E:" would therefore be referenced "/cygdrive/e/"?
>
> Or is there a different syntax for non-C: drive?
>
> Thanks.
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: '--address' option on client side.

2021-03-26 Thread Dan Stromberg via rsync
I've not looked into mrsync and multiple interfaces; I'm guessing it'll use
your (multicast) routing table.
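
If it does follow the routing table, one way to steer multicast out a
particular interface is an explicit route for the multicast range - the
interface name below is a placeholder, and this is a guess I haven't tested
with mrsync:

ip route add 224.0.0.0/4 dev wlp3s0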

On Fri, Mar 26, 2021 at 1:13 PM Harry Mangalam via rsync <
rsync@lists.samba.org> wrote:

> mrcast is interesting (Hadn't stumbled across it before) but while it
> handles multicast, it doesn't seem to be able to handle multiple interfaces,
> if I read the docs correctly.
> Am I wrong?
> harry
>
> On Fri, Mar 26, 2021 at 12:29 PM Dan Stromberg 
> wrote:
>
>>
>> Hi Harry.  Are you the person I worked with at UCI a bit?
>>
>> Anyway, you might consider trying mrsync; it's intended to do rsync over
>> multicast.
>>
>> HTH.
>>
>> On Fri, Mar 26, 2021 at 12:22 PM Harry Mangalam via rsync <
>> rsync@lists.samba.org> wrote:
>>
>>> Spent an hour trying to find the answer to this on the various SO, SF,
>>> other usual suspects, but have failed.
>>>
>>> I'm trying to improve a parallel rsync wrapper called parsyncfp (pfp)
>>> in response to a user request.  He wants rsync to emit data on multiple
>>> interfaces (one interface per rsync instance). From the man page it seems
>>> like the '--address' option would do that and in fact using it as such does
>>> not result in an error, but it also does not result in both interfaces
>>> being used, either from pfp or when launched directly from different shells.
>>>
>>> My route (working from home) shows the 2 wlan interfaces up with
>>> different IP #s:
>>> wlp3s0: flags=4163  mtu 1500
>>>inet 192.168.1.223  netmask 255.255.255.0  broadcast
>>> 192.168.1.255
>>> ...
>>>
>>> wlx9cefd5fb0bb5: flags=4163  mtu 1500
>>>inet 192.168.1.186  netmask 255.255.255.0  broadcast
>>> 192.168.1.255
>>> ...
>>> and route shows:
>>> $ route
>>>
>>> Kernel IP routing table
>>> Destination Gateway Genmask Flags Metric RefUse
>>> Iface
>>> default router.asus.com 0.0.0.0 UG60100
>>> wlx9cefd5fb0bb5
>>> default router.asus.com 0.0.0.0 UG60200
>>> wlp3s0
>>> link-local  0.0.0.0 255.255.0.0 U 1000   00
>>> wlp3s0
>>> 192.168.1.0 0.0.0.0 255.255.255.0   U 60100
>>> wlx9cefd5fb0bb5
>>> 192.168.1.0 0.0.0.0 255.255.255.0   U 60200
>>> wlp3s0
>>>
>>> and while the arp results from the rsyncing machine look OK:
>>> $ arp -n
>>>
>>> Address  HWtype  HWaddress   Flags Mask
>>>Iface
>>> 192.168.1.107ether   90:73:5a:f1:23:ee   C
>>> wlx9cefd5fb0bb5
>>> 192.168.1.107ether   90:73:5a:f1:23:ee   C
>>> wlp3s0
>>> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
>>> wlp3s0
>>> 192.168.1.139ether   d8:31:34:64:bc:f0   C
>>> wlp3s0
>>> 192.168.1.139ether   d8:31:34:64:bc:f0   C
>>> wlx9cefd5fb0bb5
>>> 192.168.1.198ether   94:94:26:08:b2:4e   C
>>> wlx9cefd5fb0bb5
>>> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
>>> wlx9cefd5fb0bb5
>>>
>>>
>>> the arp table from another machine on the same net shows this:
>>> $ arp -n
>>> Address  HWtype  HWaddress   Flags Mask
>>>Iface
>>> 192.168.1.203ether   b0:68:e6:3d:58:a7   C
>>> wlp3s0
>>> 192.168.1.107ether   90:73:5a:f1:23:ee   C
>>> wlp3s0
>>> 192.168.1.186ether   9c:ef:d5:fb:0b:b5   C
>>> wlp3s0
>>> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
>>> wlp3s0
>>> 192.168.1.223ether   9c:ef:d5:fb:0b:b5   C
>>> wlp3s0
>>>
>>> and the rsync machine is the .186 and .223 above, indicating that the 2
>>> interfaces are regarded as the same MAC.
>>>
>>> The alternating rsync commands generated from pfp are:
>>> rsync  --address=192.168.1.223 --bwlimit=100 -a -s
>>> --log-file=/home/hjm/.parsyncfp/rsync-logfile-14.34.52_2021-03-25_16
>>>  --files-from=/home/hjm/.parsyncfp/fpcache/f.16  '/home/hjm'
>>>  bridgit:/home/hjm/test
>>>
>>> and
>>>
>>> rsync  --address=192.168.1.186 --bwlimit=100 -a -s
>>> --log-file=/home/hjm/.parsyncfp/rsync-logfile-14.34.52_2021-03-25_17
>>>  --files-from=/home/hjm/.parsyncfp/fpcache/f.17  '/home/hjm'
>>>  bridgit:/home/hjm/test
>>>
>>> But the byte streams show only data flowing on one.  This is the case
>>> whether the rsyncs are started from parsyncfp or via separate rsyncs
>>> started from separate shells.
>>> Before I go further down the rabbit hole and start messing with ARP
>>> tables and network namespaces, was this the intent of the option or am I
>>> misunderstanding it?
>>> On the server side, the --address option seems to be used to bind the
>>> responding IP# and while I haven't tried that, that seems to be
>>> straightforward (but not useful for me).
>>>
>>> thanks in advance for such a magical program

Re: '--address' option on client side.

2021-03-26 Thread Dan Stromberg via rsync
Hi Harry.  Are you the person I worked with at UCI a bit?

Anyway, you might consider trying mrsync; it's intended to do rsync over
multicast.

HTH.

On Fri, Mar 26, 2021 at 12:22 PM Harry Mangalam via rsync <
rsync@lists.samba.org> wrote:

> Spent an hour trying to find the answer to this on the various SO, SF,
> other usual suspects, but have failed.
>
> I'm trying to improve a parallel rsync wrapper called parsyncfp (pfp)  in
> response to a user request.  He wants rsync to emit data on multiple
> interfaces (one interface per rsync instance). From the man page it seems
> like the '--address' option would do that and in fact using it as such does
> not result in an error, but it also does not result in both interfaces
> being used, either from pfp or when launched directly from different shells.
>
> My route (working from home) shows the 2 wlan interfaces up with
> different IP #s:
> wlp3s0: flags=4163  mtu 1500
>inet 192.168.1.223  netmask 255.255.255.0  broadcast 192.168.1.255
> ...
>
> wlx9cefd5fb0bb5: flags=4163  mtu 1500
>inet 192.168.1.186  netmask 255.255.255.0  broadcast 192.168.1.255
> ...
> and route shows:
> $ route
>
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric RefUse
> Iface
> default router.asus.com 0.0.0.0 UG60100
> wlx9cefd5fb0bb5
> default router.asus.com 0.0.0.0 UG60200
> wlp3s0
> link-local  0.0.0.0 255.255.0.0 U 1000   00
> wlp3s0
> 192.168.1.0 0.0.0.0 255.255.255.0   U 60100
> wlx9cefd5fb0bb5
> 192.168.1.0 0.0.0.0 255.255.255.0   U 60200
> wlp3s0
>
> and while the arp results from the rsyncing machine look OK:
> $ arp -n
>
> Address  HWtype  HWaddress   Flags Mask
>Iface
> 192.168.1.107ether   90:73:5a:f1:23:ee   C
> wlx9cefd5fb0bb5
> 192.168.1.107ether   90:73:5a:f1:23:ee   C
> wlp3s0
> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
> wlp3s0
> 192.168.1.139ether   d8:31:34:64:bc:f0   C
> wlp3s0
> 192.168.1.139ether   d8:31:34:64:bc:f0   C
> wlx9cefd5fb0bb5
> 192.168.1.198ether   94:94:26:08:b2:4e   C
> wlx9cefd5fb0bb5
> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
> wlx9cefd5fb0bb5
>
>
> the arp table from another machine on the same net shows this:
> $ arp -n
> Address  HWtype  HWaddress   Flags Mask
>Iface
> 192.168.1.203ether   b0:68:e6:3d:58:a7   C
> wlp3s0
> 192.168.1.107ether   90:73:5a:f1:23:ee   C
> wlp3s0
> 192.168.1.186ether   9c:ef:d5:fb:0b:b5   C
> wlp3s0
> 192.168.1.1  ether   74:d0:2b:5e:32:40   C
> wlp3s0
> 192.168.1.223ether   9c:ef:d5:fb:0b:b5   C
> wlp3s0
>
> and the rsync machine is the .186 and .223 above, indicating that the 2
> interfaces are regarded as the same MAC.
>
> The alternating rsync commands generated from pfp are:
> rsync  --address=192.168.1.223 --bwlimit=100 -a -s
> --log-file=/home/hjm/.parsyncfp/rsync-logfile-14.34.52_2021-03-25_16
>  --files-from=/home/hjm/.parsyncfp/fpcache/f.16  '/home/hjm'
>  bridgit:/home/hjm/test
>
> and
>
> rsync  --address=192.168.1.186 --bwlimit=100 -a -s
> --log-file=/home/hjm/.parsyncfp/rsync-logfile-14.34.52_2021-03-25_17
>  --files-from=/home/hjm/.parsyncfp/fpcache/f.17  '/home/hjm'
>  bridgit:/home/hjm/test
>
> But the byte streams show only data flowing on one.  This is the case
> whether the rsyncs are started from parsyncfp or via separate rsyncs
> started from separate shells.
> Before I go further down the rabbit hole and start messing with ARP tables
> and network namespaces, was this the intent of the option or am I
> misunderstanding it?
> On the server side, the --address option seems to be used to bind the
> responding IP# and while I haven't tried that, that seems to be
> straightforward (but not useful for me).
>
> thanks in advance for such a magical program
> Harry
> --
>
> Harry Mangalam
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync checksum issue

2020-12-15 Thread Dan Stromberg via rsync
On Tue, Dec 15, 2020 at 3:25 AM Laurent B via rsync 
wrote:

> Dear all,
>
> I'm encountering a problem with one of my backup. For some files, the
> checksum calculation is failing leading to the following error :
>
It sounds a little like a bug, but perhaps you could share the command you're
using for the backup?
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-13 Thread Dan Stromberg via rsync
On Sun, Dec 13, 2020 at 11:59 AM Wayne Davison via rsync <
rsync@lists.samba.org> wrote:

> I should also mention that there are totally valid reasons why the dir
> might be huge on day4. For instance, if someone changed the mode on the
> files from 664 to 644 then the files cannot be hard-linked together even if
> the file's data is unchanged. The same goes for differences in preserved
> xattrs, acls, and ownership.  In such a case you could decide that you
> don't care about the change in meta info and tweak it on the earlier files
> to match day4's files and then the suggested re-link command would decide
> it could join them together.  You'd probably then need to keep going and
> re-link day5's pictures (since it was probably linking to the old day4's
> pictures).
>
> ..wayne..
>

I totally get why some folks would prefer to use rsync --link-dest for
backups: It's very fast, and the backup itself is usable as a replacement
filesystem.

If you are open to trying something else, though, there are probably several
tools at
https://stromberg.dnsalias.org/~strombrg/backshift/documentation/comparison/index.html
that can back up permission changes without needing to create a new copy of
the file data.  Sadly, I don't know the details of most of the tools there,
but I know that backshift wouldn't need to.  Backshift is much slower than
rsync, but also takes up quite a bit less storage space, even if you mv a
large hierarchy or change all the file permissions in a hierarchy.

HTH
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Is there any way to restore/create hardlinks lost in incremental backups?

2020-12-10 Thread Dan Stromberg via rsync
Hi.

Is it possible that, if day4 is consuming too much space, day3 was an
incomplete backup?

The rsync wrapper I wrote goes to a little trouble to make sure that
incomplete backups aren't allowed.  It's called Backup.rsync, and can be
found at:
https://stromberg.dnsalias.org/~strombrg/Backup.remote.html
It does this by mv'ing backups to a magic name scheme only after they fully
finish, to distinguish them from partial backups.  If a backup is found
that doesn't have that magic name scheme, it is assumed to be partial, and
is reused as the starting point for the next snapshot.

Feel free to use it, or raid it for ideas.
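
The idea boils down to a rename-on-success pattern, roughly like this (the
names are illustrative, not Backup.rsync's actual scheme):

partial=/backups/partial
final=/backups/$(date +%Y-%m-%d)
# Any leftover partial tree from a crashed run gets reused as the starting
# point; the rename only happens if rsync finished cleanly.
rsync -aH --link-dest=/backups/latest /data/ "$partial" && \
    mv "$partial" "$final" && ln -sfn "$final" /backups/latest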

HTH


On Thu, Dec 10, 2020 at 9:29 AM Chris Green via rsync 
wrote:

> I run a simple self written incremental backup system using rsync's
> --link-dest option.
>
> Occasionally, because I've moved things around or because I've done
> something else that breaks things, the hard links aren't created as
> they should be and I get a very space consuming backup increment.
>
> Is there any easy way that one can restore hard links in the *middle*
> of a series?  For example say I have:-
>
> day1/pictures
> day2/pictures
> day3/pictures
> day4/pictures
> day5/pictures
>
> and I notice that day4/pictures is using as much space as
> day1/pictures but all the others are relatively small, i.e.
> day2 day3 and day5 have correctly hard linked to the previous day but
> day4 hasn't.
>
> It needs a tool that can scan day4, check a file is identical with the
> one in day3 then hardlink it without losing the link from day5.
>
> There's jdupes but that does lose the link from day5 so you'd have to
> apply it to all the directories after the one that's lost the links.
>
>
>
> --
> Chris Green
> ·
>
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options:
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html
>
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: link-dest and batch-file

2018-06-26 Thread Dan Stromberg via rsync
On Tue, Jun 26, 2018 at 12:02 PM, Дугин Сергей via rsync <
rsync@lists.samba.org> wrote:

> I am launching a cron bash script that does the following:
>
> Day 1
> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-25
> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-26
>
> Day 2
> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-26
> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-27
>
> Day 3
> /usr/bin/rsync -aH --link-dest /home/backuper/.BACKUP/009/2018-06-27
> root@192.168.1.103:/home/ /home/backuper/.BACKUP/009/2018-06-28
>
> and etc.
>
This isn't really what you were asking, but with the "dated directories"
scheme, what happens if one or your machines crashes during a backup?
Don't you end up storing a lot more data in the next successful backup?
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

GPL-compatible library implementing the rsync-over-ssh protocol?

2018-06-20 Thread Dan Stromberg via rsync
Is there such a thing?

I saw librsync, which appears to be the right algorithm, but not the
protocol.

And I saw the acrosync-library, which appears to be the protocol, but it's
not GPL-compatible.

Are there others?

Thanks!
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: Rsync between 2 datacenters not working

2018-03-28 Thread Dan Stromberg via rsync
If reducing the MTU is helping, you might look into turning on Path
MTU Discovery.

NFS can be fast for large transfers if you tune it.
http://stromberg.dnsalias.org/~strombrg/nfs-test.html

NFS is not terribly secure though - at least v2 and v3 weren't. Not
sure about v4.
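
For the MTU angle, a couple of quick checks (the address is the one from the
quoted error below; adjust as needed):

# tracepath reports the discovered path MTU ("pmtu") hop by hop.
tracepath 192.168.10.43

# 0 here means the kernel performs Path MTU Discovery; 1 disables it.
sysctl net.ipv4.ip_no_pmtu_disc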

On Wed, Mar 28, 2018 at 12:59 AM, Marc Roos via rsync
<rsync@lists.samba.org> wrote:
>
> Kevin, Dan, thanks for the pointers to workarounds; at the moment I am
> testing with a lower MTU size, which seems to be working. Otherwise I need
> to fall back on mounting the fs maybe even as nfs.
>
>
>
> -Original Message-
> From: Kevin Korb via rsync [mailto:rsync@lists.samba.org]
> Sent: zondag 25 maart 2018 22:42
> To: rsync@lists.samba.org
> Subject: Re: Rsync between 2 datacenters not working
>
> Note that if you do this you are stuck with --whole-file
>
> On 03/25/2018 04:36 PM, Dan Stromberg via rsync wrote:
>> You could try using an automounter, like autofs, in combination with
>> sshfs.  It'll be slower, possibly a lot slower, but it should be more
>> reliable over an unreliable connection.
>>
>> I've been using:
>> remote
>> -fstype=fuse,allow_other,nodev,noatime,reconnect,ServerAliveInterval=1
>> 5,ServerAliveCountMax=40,uid=0,gid=0,ro,nodev,noatime
>> :sshfs\#r...@remote.host.com\:/
>>
>> BTW, I'm not sure it's necessary to escape the #.  I never tried it
> without.
>>
>> Also note that it flattens the remote host's mount tree into a single
>> filesystem - so things like /proc look like they are in the same
>> filesystem as /.  This can lead to backing up /proc's contents (many
>> pseudofiles), if you don't exclude it, even if you use rsync's -x
>> option.
>>
>> On Sun, Mar 25, 2018 at 6:43 AM, Marc Roos via rsync
>> <rsync@lists.samba.org> wrote:
>>>
>>> I still stuck with these errors
>>>
>>> packet_write_wait: Connection to 192.168.10.43 port 22: Broken pipe
>>> rsync: connection unexpectedly closed (534132435 bytes received so
>>> far) [receiver] rsync error: error in rsync protocol data stream
>>> (code 12) at io.c(605) [receiver=3.0.9]
>>> rsync: connection unexpectedly closed (27198 bytes received so far)
>>> [generator] rsync error: unexplained error (code 255) at io.c(605)
>>> [generator=3.0.9]
>>
>
> --
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
> Kevin Korb  Phone:(407) 252-6853
> Systems Administrator   Internet:
> FutureQuest, Inc.   ke...@futurequest.net  (work)
> Orlando, Florida        k...@sanitarium.net (personal)
> Web page:   http://www.sanitarium.net/
> PGP public key available on web site.
> ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,
>
>
>
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options: 
> https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync between 2 datacenters not working

2018-03-25 Thread Dan Stromberg via rsync
You could try using an automounter, like autofs, in combination with
sshfs.  It'll be slower, possibly a lot slower, but it should be more
reliable over an unreliable connection.

I've been using:
remote 
-fstype=fuse,allow_other,nodev,noatime,reconnect,ServerAliveInterval=15,ServerAliveCountMax=40,uid=0,gid=0,ro,nodev,noatime
:sshfs\#r...@remote.host.com\:/

BTW, I'm not sure it's necessary to escape the #.  I never tried it without.

Also note that it flattens the remote host's mount tree into a single
filesystem - so things like /proc look like they are in the same
filesystem as /.  This can lead to backing up /proc's contents (many
pseudofiles), if you don't exclude it, even if you use rsync's -x
option.
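
In that setup I'd also exclude the pseudo-filesystems explicitly, since -x
can't help once everything looks like one filesystem - something along these
lines (the mount point is a placeholder):

rsync -a --exclude=/proc --exclude=/sys --exclude=/dev /mnt/remote/ /backups/remote/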

On Sun, Mar 25, 2018 at 6:43 AM, Marc Roos via rsync
 wrote:
>
> I still stuck with these errors
>
> packet_write_wait: Connection to 192.168.10.43 port 22: Broken pipe
> rsync: connection unexpectedly closed (534132435 bytes received so far)
> [receiver]
> rsync error: error in rsync protocol data stream (code 12) at io.c(605)
> [receiver=3.0.9]
> rsync: connection unexpectedly closed (27198 bytes received so far)
> [generator]
> rsync error: unexplained error (code 255) at io.c(605) [generator=3.0.9]

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: [Bug 13317] rsync returns success when target filesystem is full

2018-03-05 Thread Dan Stromberg via rsync
On Mon, Mar 5, 2018 at 3:09 PM, just subscribed for rsync-qa from
bugzilla via rsync  wrote:
> https://bugzilla.samba.org/show_bug.cgi?id=13317
>
> --- Comment #6 from Rui DeSousa  ---
> (In reply to Rui DeSousa from comment #5)
>
> It looks like no error is returned and result is a sparse file. I think a
> sync() would be required otherwise the file is truncated on close to meet the
> quota.

If I'm not mistaken, to create a sparse file, you have to seek past the
concrete part of a file (if any), and write something.

I'm not sure that a sync would change whether you've gone over your
quota.  At least, if I were designing a quota system, I'd make it
depend on what you've written to the buffer cache and disk, not disk
alone.

Here's an example of creating a sparse file:
$ /usr/local/pypy3-5.10.0/bin/pypy3
below cmd output started 2018 Mon Mar 05 04:01:55 PM PST
Python 3.5.3 (09f9160b643e, Dec 22 2017, 10:10:27)
[PyPy 5.10.0 with GCC 6.2.0 20160901] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> file_ = open('/tmp/sparse-file', 'wb')
>>> file_.seek(1024*1024*8)
8388608
>>> file_.write(b'a')
1
>>> file_.close()

above cmd output done    2018 Mon Mar 05 04:02:14 PM PST
dstromberg@zareason2:~ x86_64-unknown-linux-gnu 4805

$ ls -l /tmp/sparse-file
below cmd output started 2018 Mon Mar 05 04:02:18 PM PST
-rw-rw-r-- 1 dstromberg dstromberg 8388609 Mar  5 16:02 /tmp/sparse-file
above cmd output done    2018 Mon Mar 05 04:02:18 PM PST
dstromberg@zareason2:~ x86_64-unknown-linux-gnu 4805

$ du -sh /tmp/sparse-file
below cmd output started 2018 Mon Mar 05 04:02:22 PM PST
4.0K/tmp/sparse-file

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html