Ah, then it still applies.
Given that you have your stick mounted at /mnt/usbstick...
Use the same files (except /etc/rsyncd.conf, /etc/rsync.pass,
/etc/rsyncd.pass), but your command line should look like:
Here, I'm backing up root to /mnt/usbstick
rsync -aHz --delete --delete-excluded --force
From: ToddAndMargo toddandma...@zoho.com
To: SCIENTIFIC-LINUX-USERS SCIENTIFIC-LINUX-USERS@listserv.fnal.gov
Sent: 7. March 2015 05:40:43
Subject: Re: need rsync exclude help
--exclude='{wine-*,wine-1.7.24}' /home/CDs/Linux /mnt/MyCDs/.
I am not real certain that the {} thingy works
, James Rogers wa...@preternatural.net
wrote:
I always use an exclude file. And I sync different exclude files between
various servers.
I would put my excludes in a file.
My commandline:
rsync -aHz --delete --delete-excluded --force
--password-file=/etc/pootypoo.pwd --exclude-from=/etc/rsync.exclude rsync://
blarg.yashakarant.com/hootyhoo
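James's exclude-from approach can be sketched as follows; the exclude-file path, patterns, daemon URL, and password file below are placeholders, not his actual setup:

```shell
# Hypothetical exclude file -- one pattern per line.
cat > /tmp/rsync.exclude <<'EOF'
wine-*
.~tmp~
*.iso
EOF

# Then point rsync at it (daemon URL and password file are placeholders):
#   rsync -aHz --delete --delete-excluded --force \
#         --password-file=/etc/rsync.pass \
#         --exclude-from=/tmp/rsync.exclude \
#         rsync://backup.example.com/module /srv/backup/
```

Keeping the patterns in a file also lets you sync different exclude files between servers, as James does.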
On 03/07/2015 09:05 PM, James Rogers wrote:
Dig? You should use SSH as the remote shell for rsync so that your
connections are encrypted. As it stands, this only works for a local, trusted
network, since all transfers will be in compressed cleartext. Right?
This is a backup between a local hard drive and a local USB stick.
On 03/04/2015 10:41 PM, ToddAndMargo wrote:
Hi All,
I am trying to do an rsync and exclude a directory called
/home/CDs/Linux/Wine/wine-1.7.37
Problem: rsync syncs it anyway, even when I remove
the * and spell the name out in full.
What am I doing wrong?
rsync -rv --delete --delete-excluded
--exclude='{wine-*,wine-1.7.24}' /home/CDs/Linux /mnt/MyCDs/.
I am not real certain that the {} thingy works correctly.
Anyway, I only needed 'wine-*'
--
~~
Computers are like air conditioners.
They malfunction when you open windows
rsync -rv --delete --delete-excluded --modify-window=1 \
--times --inplace
Hi,
It seems the leading / in the excluded path is interpreted as the root
of the source directory hierarchy, in this case /home/CDs/Linux/Wine. So
you should try
--exclude '/wine-*'
This is explained in the rsync man page, quite a long way down, under
ANCHORING INCLUDE/EXCLUDE PATTERNS
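David's anchoring point is easy to check with a dry run. This is a minimal sketch with made-up throwaway paths, assuming rsync is installed locally:

```shell
# Throwaway layout mirroring the Wine example (names are made up).
mkdir -p /tmp/ax/src/Wine/wine-1.7.37 /tmp/ax/src/docs /tmp/ax/dst
touch /tmp/ax/src/Wine/wine-1.7.37/README /tmp/ax/src/docs/notes.txt

# Unanchored 'wine-*' matches at any depth; '/wine-*' (leading slash)
# would only match directly under the transfer root, i.e. under src/.
rsync -rnv --exclude='wine-*' /tmp/ax/src/ /tmp/ax/dst/ | tee /tmp/ax/out.txt
```

The dry-run listing shows docs/notes.txt but not wine-1.7.37, confirming the unanchored pattern reaches down into Wine/.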
-Original message-
From:Bill Maidment b...@maidment.me
Sent: Saturday 27th September 2014 11:34
To: scientific-linux-de...@fnal.gov
Subject: 7rolling rsync not fully populated
Hi Pat
Thanks for the hard work. You guys have been busy!!!
It appears that the rsync server does
On 15.07.14 07:08, the SCIENTIFIC-LINUX-USERS automatic digest system wrote:
I wiped the stick (set all the charges to zero) and started over.
I am using
rsync -rv --delete --modify-window=1 --times --inplace \
$MyCDsSource/Linux $MyCDsTarget/.; sync; sync
I will have results in a few
I have a question about your test: did you unmount the stick between
runs of rsync? If not, you may already have had all of the information
about the filesystem cached in memory, instead of having to search the
FAT table for it. This could have a huge effect on the speed
of an update.
also
On 07/15/2014 10:44 AM, Paul Robert Marino wrote:
I have a question about your test: did you unmount the stick between
runs of rsync?
The script automatically does that. Same behaviour this morning
after powering off the machine last night. It took about 3 minutes
to catch all the updates I
On 07/15/2014 10:52 AM, Patrick J. LoPresti wrote:
The kernel will still keep a copy of those pages cached in RAM, forever,
or until it needs that RAM for something else.
That copy in RAM can mess up your performance tests. Specifically, if
all file meta-data (specifically file sizes and timestamps) is cached,
rsync will not actually touch either the source or destination disk at all.
for up to an hour
after the rsync command if I don't throw the sync command
sysctl -w vm.drop_caches=3 forces the kernel to forget what it
cached, setting a clean slate for an honest benchmark.
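A minimal sketch of that cold-cache benchmark sequence; dropping caches requires root, so the sketch skips it cleanly otherwise, and the rsync line is illustrative:

```shell
# Cold-cache benchmark sketch. Dropping caches needs root, so skip
# cleanly when we are not root; the final rsync line is illustrative.
sync                                    # flush dirty pages to disk first
if [ "$(id -u)" -eq 0 ]; then
    sysctl -w vm.drop_caches=3          # forget cached pages, dentries, inodes
else
    echo "not root: skipping drop_caches" > /tmp/bench.note
fi
# time rsync -rv --delete /source/ /mnt/usbstick/   # timed against cold caches
```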
- Pat
Got the same results the next day after powering off the machine
for the night. Does
On 07/11/2014 12:58 PM, ToddAndMargo wrote:
Hi All,
I have a bash script for synchronizing a flash drive (target)
with my hard drive (source) that I take to customer sites (with a
read-only switch so I don't spread viruses).
I currently rsync 11 different directories. Each sync line
looks like
Wow, I wish I had gotten into this thread earlier; I could have explained
a lot. I've worked with rsync at a low level for many years and have even
debated writing a C library and possibly a multicast transport
layer for it, so I know it quite well.
I've seen a lot of misinformation and guessing
FAT32 with large clusters can allocate a minimum of 32KB
even for a 10-byte file, while NTFS will probably put the 10 bytes in the
directory entry or use at most 4KB with 4KB clusters.
But I don't see why rsync would care about the unused data. It should
just sync the 10 bytes accessible. I'm ignoring alternate streams here.
This is the usual confusion between the st_size and st_blocks entries
in struct stat returned by lstat() and co.
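The st_size/st_blocks distinction is easy to see with GNU stat, which exposes both fields. A small sketch with throwaway files:

```shell
# st_size (what ls -l shows) vs st_blocks (what the file really occupies).
printf '0123456789' > /tmp/ten.bin          # 10 bytes of real data
truncate -s 1M /tmp/hole.bin                # 1 MiB apparent size, no data blocks
stat -c '%n size=%s blocks=%b' /tmp/ten.bin /tmp/hole.bin
```

The sparse file reports a large size but few (often zero) 512-byte blocks, which is exactly the gap between the two struct stat fields.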
Hi T,
sorry to come late into this discussion (I read the digest only).
You are missing the -t option to rsync. This will keep the time setting
on the transferred file(s). Since you are not using it, rsync does not
recognize that two files are the same, because their timestamps do not
match
One more point: googling "rsync time zone fat32" brings up an excellent
page - http://sabg.tk/wiki/config:vfat - which discusses timezone
problems, mount options and the like, and also suggests using
--modify-window=1.
HTH,
Kay
On Fri, Jul 11, 2014 at 10:24 PM, ToddAndMargo toddandma...@zoho.com wrote:
Hi Pat,
--modify-window=1
3 hr - 9 sec
--modify-window=10
3 hr - 8 sec
Rat! I really thought this sounded right.
Oh, well...
Any way to turn off the checksum testing?
Well, there is the
On 07/12/2014 06:20 PM, Patrick J. LoPresti wrote:
Hi All,
I have a bash script for synchronizing a flash drive (target)
with my hard drive (source) that I take to customer sites (with a
read-only switch so I don't spread viruses).
I currently rsync 11 different directories. Each sync line
looks like this:
rsync -rv --delete $MyCDsSource/Linux $MyCDsTarget/.; sync; sync
Problem: it is slow -- takes three hours. To help the
speed issue, I upgraded from USB 2 to USB 3. Backup went
from 3 hr-15 min to 3 hr-5 min. It is almost
rsync by default is using an encryption method most likely more taxing than
arcfour.
I'd imagine your local disk is unencrypted while you are reading/writing to
it
My USB stick is formatted in EXT4. I wonder if that makes a difference. I
don't recall if rsync checks file timestamps or just the checksum.
On 07/11/2014 01:25 PM, Ken Teh wrote:
My USB stick is formatted in EXT4. I wonder if that makes a
difference. I don't recall if rsync checks file timestamps or just the
checksum.
Most of the computers I stick the thing into are Windows,
so I am stuck with FAT32.
For my own stuff, I wipe
On Fri, Jul 11, 2014 at 1:20 PM, ToddAndMargo toddandma...@zoho.com wrote:
On 07/11/2014 01:04 PM, Stephen John Smoogen wrote:
So rsync is going to have to read every file on target and host and see
if various things have changed.
Not true, at least by default... rsync assumes
On Fri, Jul 11, 2014 at 1:40 PM, Patrick J. LoPresti lopre...@gmail.com wrote:
Try giving the --size-only option to rsync.
Better yet, try --modify-window=1. From the rsync man page:
--modify-window
When comparing two timestamps, rsync treats the timestamps
as being equal if they differ by no more than the modify-window value.
with the
speeds that you see.
With some work, you can find media that writes at 20-30 MB/s, as measured by
timing dd, but drops
severely when you time rsync (it must be inefficient at writing small files).
So when you select a brand of USB flash drive for your workload, as you run
rsync, watch
the output of vmstat 1 (the bo column is Mbytes/sec written to disk) and
the output of iostat -x 1 - you will see %util pegged at 100% and svctm
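A sketch of that monitoring recipe; iostat comes from the sysstat package and may not be installed, so it is guarded here:

```shell
# Sample write throughput for a few seconds while the copy runs elsewhere.
vmstat 1 3 | tee /tmp/vmstat.log        # 'bo' column: blocks written out/sec

# Per-device view needs sysstat; %util near 100 with climbing service
# times means the stick itself is the bottleneck, not rsync.
command -v iostat >/dev/null && iostat -x 1 2
```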
drives. They
just don't get the difference and that one is three times
cheaper than yours. When it finally drives them crazy
waiting on the thing, then I make a sale.
Back to my problem. I am noticing that it takes forever
to get past large files. It is like rsync is doing a checksum
Hi,
Is there a problem with rsyncing from rsync.scientificlinux.org
at the moment? I'm getting:
@ERROR: max connections (60) reached -- try again later
rsync error: error starting client-server protocol (code 5) at main.c(1530)
[receiver=3.0.6]
--
Mark Whidby
Infrastructure Coordinator (Unix
On Tue, 15 Nov 2011, Pat Riehecky wrote:
On 11/15/2011 12:49 AM, g wrote:
On 11/15/2011 05:56 AM, Jon Peatfield wrote:
rsync://rsync.scientificlinux.org/ has been failing with:
@ERROR: max connections (30) reached -- try again later
are too many of us trying to fetch by rsync
On 11/16/2011 01:26 PM, Jon Peatfield wrote:
Thanks, the errors seem to have gone away.
:) Happy to help.
snip
How large do you suggest making the random variation?
I'd say about 30 minutes of randomness should be enough swing to keep it
interesting while not being too all over the place
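One way to sketch that randomized start, assuming the cron job runs under bash (whose $RANDOM this uses); the sleep itself is commented out here:

```shell
# Delay the nightly mirror job by 0-1799 seconds so all sites don't hit
# rsync.scientificlinux.org at the same minute.
delay=$(( RANDOM % 1800 ))
echo "sleeping ${delay}s before starting the mirror sync"
# sleep "$delay"    # enable in the real cron job, then run the usual rsync
```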
For the past few days our nightly cron job to update our local site sl
mirror from rsync://rsync.scientificlinux.org/ has been failing with:
@ERROR: max connections (30) reached -- try again later
are too many of us trying to fetch by rsync or should we have changed to a
different server
Hi:
I just tried to rsync SL 6.1, and found link-missing errors below:
rsync rsync.scientificlinux.org::scientific/6.1/i386/os/images/xen/
rsync: link_stat /6.1/i386/os/images/xen/. (in scientific) failed:
No such file or directory (2)
rsync rsync.scientificlinux.org::scientific/6.1/x86_64/os/images
It seems that version 3.0.6-4.el5 of rsync that was made available from
the sl-security repo yesterday is broken. In my usage:
rsync -rlHptS --delete --rsh=ssh directory machine:directory
ssh is being invoked with no arguments. This is, obviously, causing it
to fail.
Has anyone else seen
Hi Paul,
Yes, this is a known issue by us, and by RHEL.
I have already yanked it from the yum repositories, and sent an email to
the errata list with the following
--
This errata has been pulled from the repositories.
There is a bug in the 3.0.6-4.el5 version of rsync.
rsync commands
We have been using rsync for years as our main backup scheme.
The second rsync form, localdir username@remotehost:remotedir, works
well. On the systems we back up over the network, each machine is listed
in known_hosts. A pain in the posterior to set up, but worth the effort.
My
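A hedged sketch of that localdir-to-remote form; the host, user, and destination path below are placeholders, composed into a string here for illustration rather than run:

```shell
# Sketch of the localdir -> remote form. Host, user, and target path are
# placeholders. -a keeps permissions/times/ownership, -H preserves hard
# links, -z compresses on the wire, and ssh encrypts the transfer.
cmd="rsync -aHz --delete -e ssh /etc /home backup@backupserver.example:/srv/backups/$(hostname)/"
echo "$cmd"    # composed for illustration; the real job runs it from cron
```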
On 11.04.2011 13:08, Federico Alves wrote:
On 4/10/11 11:35 PM, Larry Brower larry-li...@maxqe.com wrote:
On 04/11/2011 05:59 AM, Federico Alves wrote:
I am using rsync to send almost 1 TB of sparse files across the LAN to
another identical Linux box. If I fire only 1 command, I get
around 400 MB speed using FTP transfer, Windows to Windows. There
must be a different way to do this from Linux. The files are sparse files,
and I need to keep them that way; that's why I use rsync.
Have you tried rsync server on the remote side? I've always found
transfers over SSH to be rather
Well, transferring sparse files is going to be slow, and it could be
hardware (unless you are somehow testing with Windows copying
sparse files over). rsync is having
On 29.03.2011 10:08, Ciprian Pinzaru wrote:
On 29.03.2011 01:50, Charles G Waldman wrote:
Steven J. Yellin writes:
Since you have rsync, I assume you have the rsync rpm, in
which case
you can use 'man rsync' to see about running rsync as a daemon.
I don't think this is what
Hello,
Could you tell me if there is an rsync server for SL 5.4?
Until now I did the rsync with the following script:
#!/bin/bash
rsync -vaH --delete --exclude=.~tmp~
rsync://server/mirrors/scientificlinux.org/54/x86_64/
/storage/vol_00/repo.grid.uaic.ro/mirrors/sl/5.4/
Thanks
Since you have rsync, I assume you have the rsync rpm, in which case
you can use 'man rsync' to see about running rsync as a daemon. Use
'man rsyncd.conf' for further information about setting it up. You can
also try googling rsync server. In any case, not only does a server for
rsync
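A minimal sketch of what 'man rsyncd.conf' describes; the module name, path, and connection limit here are hypothetical:

```shell
# Minimal rsyncd.conf sketch (module name, path, and limits are made up).
cat > /tmp/rsyncd.conf <<'EOF'
uid = nobody
gid = nobody
read only = yes

[mirror]
    path = /srv/mirror
    comment = local SL mirror
    max connections = 30
EOF

# Start it:    rsync --daemon --config=/tmp/rsyncd.conf
# Fetch from:  rsync -aH rsync://localhost/mirror/ /some/dest/
```

The "max connections" setting is the knob behind the "@ERROR: max connections (30) reached" messages seen elsewhere in this thread.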
Hello,
I apologize for not sending this out earlier. I had been notified, so
this is my fault for not passing on the information.
The backend filesystem of our distribution servers was down for
maintenance from 6:00 am to 6:45 am Chicago, Illinois USA time.
So, while ftp.scientificlinux.org
Hi,
I've been using rsync as a primitive backup tool on a small cluster of SL45
and SL5 machines. Lately, there is an intermittent error when I run rsync
to back up a large (20GB) directory of mixed file types. The error isn't a
loud failure, but rather just that the file transfer stalls
Volume of data isn't the only measure of large - the number of files
is important too.
I started having problems at about 45K - 50K files per directory, using ext3
(pre-hashed-b-trees). I forget total # files in top level. This particular
issue wasn't rsync-specific, however.
It also used
Hi,
I use mrepo to mirror SL. When recently adding SL5x to the mirror mix, I'm
getting the following errors:
rsync: link_stat /5x/i386/SL/RPMS/. (in scientific) failed: No such file or
directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)
[receiver=2.6.9]
mrepo
had been added recently and a README that states
the symlink is necessary to allow xen installations to work.
Because I don't keep a full mirror of the entire
ftp.scientificlinux.org, I have rsync setup with the -L switch so that
rather than copying symlinks I copy the referent.
Would it be best
If you still want the sites/example I think the best way would to exclude
/i386/sites/example/sites/
/x86_64/sites/example/sites/
Thanks Troy. That will certainly work and should still satisfy the xen
requirement that was listed in the README.
of a failed upgrade is easier that way.
The wisdom I have gathered is not to use dd for such
disk duplication, but to use fdisk for partitioning, and
rsync -ax /orginal/ /target/ for file copy
(be careful with the trailing /'s).
The first reason is that dd copies bad blocks, and
the second
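The fdisk-plus-rsync advice, sketched; the device names are examples only, and cp -a stands in below to show the trailing-slash behaviour locally:

```shell
# Sketch of the clone (device names are examples -- double-check first!):
#   sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table
#   mkfs.ext4 /dev/sdb1; mount /dev/sdb1 /mnt/clone
#   rsync -ax /original/ /mnt/clone/        # -x: don't cross filesystems
#
# The trailing-slash gotcha, shown locally with cp -a standing in:
mkdir -p /tmp/ts/src/sub /tmp/ts/a /tmp/ts/b
touch /tmp/ts/src/sub/f
cp -a /tmp/ts/src/. /tmp/ts/a/     # like 'rsync -a src/ a/'  -> a/sub/f
cp -a /tmp/ts/src   /tmp/ts/b/     # like 'rsync -a src  b/'  -> b/src/sub/f
```

With the trailing slash the contents land directly in the target; without it, the source directory itself is recreated one level down.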