On Fri, Feb 17, 2023 at 01:21:40PM +0100, anubis23 via rsync wrote:
Hi,
I've read through the rsync manpage, this mailing list, asked Google and
studied lots of posts on stackexchange.com (stackoverflow,
superuser...), askubuntu.com and some others, concerning rsync's
capabilities of showing progress information. But all I've found was
what I already knew: --prog
--compress or -z will compress during data transfer.
What about a variable compression rate?
--optimal-compress or -za:
monitor the speed of the transfer and the CPU throughput, and
automatically find the optimum compression in near-real-time, with
the goal of reduced transfer time.
i figure you'd
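A minimal sketch of the decision rule such an option might use. This is purely illustrative: rsync has no --optimal-compress, the closest existing knob is the fixed --compress-level=N, and the function name, rates, and thresholds below are all assumptions.

```shell
#!/bin/sh
# Hypothetical helper: pick a zlib level for --compress-level from two
# measured rates (MB/s).  Compress harder only while the network, not
# the CPU, is the bottleneck.  All names and numbers are illustrative.
pick_level() {  # pick_level NET_MBPS CPU_MBPS
    if [ "$1" -lt "$2" ]; then
        echo 9   # link-bound: spend CPU to shrink the stream
    else
        echo 1   # CPU-bound: compress lightly, keep the pipe full
    fi
}
# A wrapper could re-measure periodically and restart rsync with the
# new level, e.g.:
#   rsync -az --compress-level="$(pick_level "$net" "$cpu")" src/ host:dest/
```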
hi
currently I rsynced several TB of files / backups to other/newer drives
using Fedora 32 x64.
- the data was on ext4 / NTFS drives before.
- the machine has 8 GB RAM and swap
- different external USB3 drives up to 10 TB
bug:
- rsync to an exFAT drive often halted, leaving a defunct rsync
Got it working properly. Many thanks!
Regards,
Matt Stevens
On 8/3/20 10:28 AM, Paul Slootman via rsync wrote:
So I've gotten excluding paths to work as a standalone command. When I
paste this into a script however, it ignores the exclusions. Any advice?
rsync -aXvr --times --links
--exclude={'*.vdi','*.vmdk','*.ova','*.qcow2','.config/discord/'}
/home/path/ user@nas:/NAS/HOME/destination/
Are there
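One common cause (an assumption, since the script itself isn't shown): --exclude={...} relies on *bash* brace expansion, which never happens when a script runs under plain /bin/sh, so the braces reach rsync literally and the exclusions are silently ignored. The demo below shows what bash actually hands to rsync, plus a shell-agnostic spelling:

```shell
#!/bin/bash
# Under bash, the single --exclude={...} token brace-expands into
# several --exclude= arguments; under plain sh it does not.
set -f   # keep the * patterns literal (no filename globbing)
printf '%s\n' --exclude={'*.vdi','*.vmdk'}
# A spelling that works in any shell is to repeat the option:
#   rsync -aXv --exclude='*.vdi' --exclude='*.vmdk' --exclude='*.ova' \
#       --exclude='*.qcow2' --exclude='.config/discord/' SRC/ DEST/
```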
Greetings. I've been using rsync for quite a long time. I have a homedir
I like to back up to a NAS. I find myself having to manually specify
paths rather than letting rsync copy the entire home directory.
I lack development skills. Would there be a way for rsync to be passed
an option to ex
https://bugzilla.samba.org/show_bug.cgi?id=10405
Wayne Davison changed:
What       | Removed | Added
Resolution | ---     | WONTFIX
Status     | NEW
https://bugzilla.samba.org/show_bug.cgi?id=10636
Wayne Davison changed:
What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution | ---
On Fri, Nov 8, 2019 at 8:00 AM Ciprian Dorin Craciun via rsync <
rsync@lists.samba.org> wrote:
> Therefore I've tried to patch `rsync` myself, mainly by copy-pasting the
> code related to `pre-xfer exec` option:
>
> https://github.com/cipriancraciun/rsync/commit/1f85c5f596542ed878d09a60e55ea027934
https://bugzilla.samba.org/show_bug.cgi?id=14390
--- Comment #4 from Wayne Davison ---
Thanks for the gentle prod to remind me about 14338. As things currently
stand, the master branch now has support for both zstd & lz4 compression.
--
You are receiving this mail because:
You are the QA Contact for the bug.
https://bugzilla.samba.org/show_bug.cgi?id=14390
--- Comment #3 from Sebastian A. Siewior ---
(In reply to Wayne Davison from comment #2)
I've sent a zstd patch, what do you want me to do with it (#14338)?
https://bugzilla.samba.org/show_bug.cgi?id=14390
--- Comment #2 from Wayne Davison ---
I've updated the compression code to add a negotiation idiom like I did for
checksums, and then I re-enabled the external zlib's ability to handle both the
old-style compression (now named "zlib") and the newer
https://bugzilla.samba.org/show_bug.cgi?id=14390
Wayne Davison changed:
What       | Removed | Added
Status     | NEW     | ASSIGNED
--- Comment #1 from Wayne Davison
This is great! However, do you have access to a big-endian CPU? I'm
not sure how relevant this still is but I've read at some point that
xxhash might have produced different (reverse?) hashes on different
endian CPUs. It may be prudent to actually test if that is the case
with this implementation o
That's excellent news!
On Sat, 23 May 2020 at 08:11, Wayne Davison via rsync
wrote:
> On Tue, Oct 1, 2019 at 8:02 AM Bill Wichser via rsync <
> rsync@lists.samba.org> wrote:
>
>> Attached is the patch we applied [to add xxhash checksums]
>
>
> Thanks, Bill! I finally got around to finishing up
On Tue, Oct 1, 2019 at 8:02 AM Bill Wichser via rsync
wrote:
> Attached is the patch we applied [to add xxhash checksums]
Thanks, Bill! I finally got around to finishing up some checksum
improvements and have added support for xxhash in the master branch. The
latest version in git now picks th
Just FYI, I decided to create a ticket on bugzilla to make tracking of
the feature request easier; I also used a more descriptive title.
Bug 14390 - Feature request: don't fail if using "-z" transferring to
rsync compiled with --with-included-zlib=no
https://bugzilla.samba.o
https://bugzilla.samba.org/show_bug.cgi?id=14390
Bug ID: 14390
Summary: Feature request: don't fail if using "-z" transferring
to rsync compiled with --with-included-zlib=no
Product: rsync
Version: 3.1.3
deal since the transfer still happens, but
issue number 2 might cause problems for users who don't migrate their
old systems to use the parameter "-zz" instead of "-z".
The error message that I get for issue 2 is:
"rsync: This rsync lacks old-style --c
Hi! I'm a long time user of rsync, and a big fan. First of all; thanks for
creating it and making it FOSS.
I just thought of a feature that I think could be very useful, but I'm not
familiar enough with the rsync code (and probably not skilled enough) to
create it myself. So I thought I'd just post
(I think the subject is quite descriptive; however, for use-cases and
details see below after the mention of the old conversation and the
patch.)
Searching the mailing list about this topic yields an old conversation
about this from 2008:
* https://lists.samba.org/archive/rsync/2008-November/022
Paul,
Thanks. I can see your point for sure. I wasn't suggesting an all out
switch but just an option to use with a flag. Since we're using a GPFS
to GPFS transfer over a high speed link, doing billions of files at the
moment, even a marginal increase in speed helps and is why we were using
On Tue 01 Oct 2019, Bill Wichser via rsync wrote:
>
> Attached is the patch we applied. Since xxhash is in the distro, a
> dependency would be required for this RPM. If nothing else, perhaps the
> developers should just take a look as this could benefit many.
"The distro" is a bit vague for a t
Back in the spring, we started using rsync for a disk to disk backup
system maintaining close to 10PB of data. I am not here to debate the
issue of what is the right tool but only to discuss what we found to be
a problem with rsync when doing so.
We traced the various processes hoping to find
https://bugzilla.samba.org/show_bug.cgi?id=13388
Bug ID: 13388
Summary: Feature request: When deleting files only delete files
that are over a certain age.
Product: rsync
Version: 3.1.3
Hardware: All
OS
Adam Nielsen wrote...
> I'm wondering whether it is feasible to have an option that will make
> rsync spawn a separate thread to close files it has created, to avoid
> the main process blocking while the destination files are flushed during
> the close operation?
While your scenario resembles a p
Can you let rsync do the networking? If rsync isn't doing the networking,
then it isn't much more capable than cp, yet it is significantly slower
than cp.
On 12/18/2016 09:13 AM, Adam Nielsen wrote:
> Hi all,
>
> I'm wondering whether it is feasible to have an option that will make
> rsync spawn a
Hi all,
I'm wondering whether it is feasible to have an option that will make
rsync spawn a separate thread to close files it has created, to avoid
the main process blocking while the destination files are flushed during
the close operation?
The reason I ask is that it is currently very slow to u
https://bugzilla.samba.org/show_bug.cgi?id=11422
Kevin Korb changed:
What       | Removed | Added
Resolution | ---     | INVALID
Status     | NEW
https://bugzilla.samba.org/show_bug.cgi?id=11422
Bug ID: 11422
Summary: Feature request: add support for Linux libcap[-ng]
Product: rsync
Version: 3.1.1
Hardware: All
OS: Linux
Status: NEW
Severity
This post
http://unix.stackexchange.com/questions/153262/get-rsync-to-dereference-symlinked-dirs-presented-on-cmdline-like-find-h
explains most of what I want; basically, I'm looking for a find -H option
for rsync.
Reason is so that I can hit a source (or target!) dir in rsync by making a nice
d
https://bugzilla.samba.org/show_bug.cgi?id=11152
--- Comment #1 from Andre Bruce ---
I believe that I can get our company to make a donation/contribution, using
paypal, to have this feature (or, if you prefer, we may contribute with server
hardware which is not in use anymore). If there is any de
https://bugzilla.samba.org/show_bug.cgi?id=11152
Bug ID: 11152
Summary: Feature Request: Cache Filelist
Product: rsync
Version: 3.1.1
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority
https://bugzilla.samba.org/show_bug.cgi?id=10636
--- Comment #1 from Shawn Heisey ---
I really like this idea. I'm using rsync extensively during a storage
migration, as it's the only tool that can quickly and effectively synchronize a
source tree to a destination tree when there are periodic ch
https://bugzilla.samba.org/show_bug.cgi?id=10799
Wayne Davison changed:
What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution |
https://bugzilla.samba.org/show_bug.cgi?id=10799
Summary: Feature request: detail --dry-run mode when
--debug=exit
Product: rsync
Version: 3.1.1
Platform: All
OS/Version: All
Status: NEW
Severity
https://bugzilla.samba.org/show_bug.cgi?id=10405
--- Comment #3 from Christian Ruppert 2014-08-07 19:17:22 UTC
---
(In reply to comment #2)
> regarding restricted ssh - wouldn't that be a security nightmare if rsync
> could
> exec any additional command?
At least my idea is meant to be client
https://bugzilla.samba.org/show_bug.cgi?id=10405
--- Comment #2 from roland 2014-08-06 21:45:38 UTC ---
regarding restricted ssh - wouldn't that be a security nightmare if rsync could
exec any additional command?
--
Configure bugmail: https://bugzilla.samba.org/userprefs.cgi?tab=email
---
https://bugzilla.samba.org/show_bug.cgi?id=2423
--- Comment #10 from Rainer Glaschick 2014-06-04 06:56:27 UTC
---
I vote for the original proposal.
Using --files-from does not work with --delete, which is absolutely correct,
but --newer could delete all files on the target that are not in the ne
Hi all,
I am currently (ab)using rsync as a backup utility, creating hardlinks
to files that haven't changed since last backup.
Unfortunately, using a rsync daemon connection between different
machines, this is only possible as a pull-type process. I.e. the daemon
needs to run on the source and t
https://bugzilla.samba.org/show_bug.cgi?id=10636
Summary: Feature request: Show current source path on receipt
of Posix signal
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: All
Status: NEW
On 05.02.2014 13:00, Daniel Mare wrote:
> As a safety feature, I would like to see a feature that would prevent rsync
> from syncing when the sync, if it were to go ahead, would result in more than
> a certain number of files being deleted from the destination.
>
> A similar feature, --max-delet
You could do a --dry-run with --max-delete and check for exit code
25. Then only do the real run if the exit code was not 25.
On 02/05/2014 12:00 AM, Daniel Mare wrote:
> As a safety feature, I would like to see a feature that would
> prevent rsync from syn
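The guard described above might look like the sketch below. The paths, limit, and function name are placeholders; exit code 25 is rsync's documented "the --max-delete limit stopped deletions" status.

```shell
#!/bin/sh
# Sketch: probe first with --dry-run and --max-delete, and only run for
# real when the probe did not exit with code 25.
guarded_sync() {  # guarded_sync SRC DEST LIMIT
    rsync -a --delete --max-delete="$3" --dry-run "$1" "$2"
    if [ $? -eq 25 ]; then
        echo "aborted: more than $3 deletions would occur" >&2
        return 25
    fi
    rsync -a --delete --max-delete="$3" "$1" "$2"
}
# usage: guarded_sync /src/ /backup/ 100
```

Note the small race: files deleted between the probe and the real run are still bounded by the second --max-delete.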
As a safety feature, I would like to see a feature that would prevent rsync
from syncing when the sync, if it were to go ahead, would result in more than a
certain number of files being deleted from the destination.
A similar feature, --max-delete, does exist, but does not prevent rsync from
do
https://bugzilla.samba.org/show_bug.cgi?id=10405
--- Comment #1 from Kevin Korb 2014-01-29 21:57:03 UTC
---
Personally, I wouldn't have any use for the processing of the output part but a
pre and post command to create and destroy lvm2 snapshots would be useful.
Especially in keeping rsync from
https://bugzilla.samba.org/show_bug.cgi?id=10405
Summary: Feature request: Add support for pre/post cmds for the
rsync client
Product: rsync
Version: 3.1.1
Platform: All
OS/Version: All
Status: NEW
https://bugzilla.samba.org/show_bug.cgi?id=9041
Kevin Korb changed:
What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution |
https://bugzilla.samba.org/show_bug.cgi?id=9498
Wayne Davison changed:
What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution |
https://bugzilla.samba.org/show_bug.cgi?id=9498
Summary: Feature request for extending --backup-dir
functionality
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: All
Status: NEW
Severity: normal
https://bugzilla.samba.org/show_bug.cgi?id=9041
Summary: Feature request: Better handling of btrfs based
sources
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: Linux
Status: NEW
Severity
https://bugzilla.samba.org/show_bug.cgi?id=8615
--- Comment #3 from itpp11 2011-11-23 21:19:41 UTC ---
Thanks! Yes, it would help if fuzzy had a wider search area; for example,
ntbackup files are single files, while wbadmin backup files are stored inside
their own folder structure, which changes with
https://bugzilla.samba.org/show_bug.cgi?id=8615
Wayne Davison changed:
What       | Removed | Added
Status     | NEW     | RESOLVED
Resolution |
https://bugzilla.samba.org/show_bug.cgi?id=8615
--- Comment #1 from itpp11 2011-11-18 11:35:57 UTC ---
While trying to find a workaround via ssh, like:
plink -ssh -v -C -L 875:localhost:873 -l root %NASDEST% -pw %rootPW% -m
sshcmds.txt
Where sshcmds.txt contains cp(copy) commands to get future co
https://bugzilla.samba.org/show_bug.cgi?id=8615
Summary: feature request 'update by reference'
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Pr
Wow, I can't tell you how many times I've read over the man page, but Batch
Mode never jumped out at me before.
Batch Mode could very well provide a solution, I'm going to give it a run.
Also, I think I found the knob in Ubuntu that's creating the error in rsync.
It's /proc/sys/net/ipv4/tcp_retries2
On 7/15/2011 2:42 PM, Matthias Schniedermeyer wrote:
On 15.07.2011 13:10, Donald Pearson wrote:
Matthias,
A vpn tunnel is an interesting idea. Do you know how long you're able to
keep rsync in limbo before it will give up?
I haven't really tried. But it was about 15 Minutes the one time it
Hi,
On Fri, 15 Jul 2011, Donald Pearson wrote:
I looked in to the -b switch and it's a good idea but I have been unable to
find a way to use it such that a resume can continue where it left off,
without re-checking what has already been completed, *and* continue to use
the same ultimate destina
On 15.07.2011 13:10, Donald Pearson wrote:
>
> Matthias,
>
> A vpn tunnel is an interesting idea. Do you know how long you're able to
> keep rsync in limbo before it will give up?
I haven't really tried. But it was about 15 Minutes the one time it
didn't reconnect in time.
> The issue I think
Eberhard,
I looked in to the -b switch and it's a good idea but I have been unable to
find a way to use it such that a resume can continue where it left off,
without re-checking what has already been completed, *and* continue to use
the same ultimate destination file as a source of diffing.
Looki
On 12.07.2011 11:10, Donald Pearson wrote:
...
A 'trick' I personally use for an unreliable connection is an
OpenSSH tunnel.
Although any VPN solution should do the trick.
That way the connection between the two rsync halves isn't directly tied
to the internet connection.
In my case that means
On 7/12/2011 11:10 AM, Donald Pearson wrote:
@Eberhard: I understand what you're trying to say, but in this
environment the reality is rsync reaches an impasse where it is unable
to get beyond work that has already been completed before link failure
cuts it off again.
@Leen: A combination of
Hi,
On Tue, 12 Jul 2011, Donald Pearson wrote:
[very selective cited, just to lighten one aspect]
@Eberhard: I understand what you're trying to say, but in this environment
the reality is rsync reaches an impasse where it is unable to get beyond
work that has already been completed before li
@Eberhard: I understand what you're trying to say, but in this environment
the reality is rsync reaches an impasse where it is unable to get beyond
work that has already been completed before link failure cuts it off again.
@Leen: A combination of --append with --partial is what I tried, howeve
On 11.07.2011 16:01, Donald Pearson wrote:
> I am looking to do state-full resume of rsync transfers.
>
> My network environment is an unreliable and slow satellite
> infrastructure, and the files I need to send are approaching 10 gigs in
> size. In this network environment often times links c
t destroying the original destination file, so
>>> that its
>>> blocks can be used to minimize transferred data and not need to
>>> always start
>>> from block #1. Such that the aggregate of multiple rsync attempts
>>> are able
>>> to complet
was sent in a single rsync
session.
If this is possible with rsync's current feature set I would be very
appreciative of someone's time to reply with an example.
Or if this is not currently possible, an idea that comes to mind and
ultimately a feature request would be to have a switch that
nc's current feature set I would be very
appreciative of someone's time to reply with an example.
Or if this is not currently possible, an idea that comes to mind and
ultimately a feature request would be to have a switch that tells rsync upon
session drop, to do a memory dump of its checksum l
reciative of someone's time to reply with an example.
Or if this is not currently possible, an idea that comes to mind and
ultimately a feature request would be to have a switch that tells rsync upon
session drop, to do a memory dump of its checksum list, and the last
completed block worked on,
On Wed, 2010-08-11 at 10:18 -0700, travis+ml-rs...@subspacefield.org
wrote:
> I often push files from my user account over SSH to my web server, and
> want them owned by www-user, which may not have a login shell, should
> never accept remote logins, and who may not have a ~/.ssh directory
> (and i
On Wed, Aug 11, 2010 at 02:51:35PM -0700, travis+ml-rs...@subspacefield.org
wrote:
> On Wed, Aug 11, 2010 at 01:32:42PM -0400, Brian Cuttler wrote:
> [Set u+s on directories, don't worry about owners]
>
> It seems to work relatively well. I get an error about not being
> able to chgrp the files
On Wed, Aug 11, 2010 at 01:32:42PM -0400, Brian Cuttler wrote:
[Set u+s on directories, don't worry about owners]
It seems to work relatively well. I get an error about not being
able to chgrp the files owned by other users, and, in my case,
the group ends up wrong because it's not supposed to be
On 08/11/10 13:18, travis+ml-rs...@subspacefield.org wrote:
I often push files from my user account over SSH to my web server, and
want them owned by www-user, which may not have a login shell, should
never accept remote logins, and who may not have a ~/.ssh directory
(and if it did, it would be
On Wed, Aug 11, 2010 at 01:34:44PM -0400, Brian Cuttler wrote:
> As a matter of principle, SOP, we don't like to ssh/rsync as root
> and generally don't allow root ssh/rsync into a box. Better/safer
> to move the security stuff to a lower powered user if you can.
I'm familiar with the argument. L
Travis,
We also use rsync to push our files. While there are several users with
the ability to do the push, the files on the webserver host are set with
the set-gid bit.
No matter which of our web people push the files to the visible
server, the files all end up with a consistent group ownership that allows
Travis,
As a matter of principle, SOP, we don't like to ssh/rsync as root
and generally don't allow root ssh/rsync into a box. Better/safer
to move the security stuff to a lower powered user if you can.
On Wed, Aug 11, 2010 at 10:18:11AM -0700, travis+ml-rs...@subspacefield.org
wrote:
> I oft
I often push files from my user account over SSH to my web server, and
want them owned by www-user, which may not have a login shell, should
never accept remote logins, and who may not have a ~/.ssh directory
(and if it did, it would be under the wwwroot, ack!).
Currently I push as root and then d
> lzma is the default compression in the very good compression program 7-zip,
> which is faster and has a higher compression ratio than bzip2 or rar.
From what I've seen, lzma/xz compression is slower than bzip2, but generally
provides higher compression ratios. Both are markedly slower than gzip,
but again,
Hello,
much Linux software is starting to implement the new lzma compression instead
of the old zlib (gzip) or bzip2.
lzma is the default compression in the very good compression program 7-zip,
which is faster and has a higher compression ratio than bzip2 or rar.
Currently it's probably the best compressor in terms of comp
On Sat, 2008-11-22 at 17:05 +0100, Rene Bartsch wrote:
> when using RSync for mirroring, the checksums of a file are calculated for
> each transfer. But that
> causes a lot of CPU/IO load on the server machine and degrades throughput
> even if there's plenty
> of network bandwidth. So I want to p
Hi,
when using RSync for mirroring, the checksums of a file are calculated for each
transfer. But that
causes a lot of CPU/IO load on the server machine and degrades throughput even
if there's plenty
of network bandwidth. So I want to propose a pre-calculating feature for
checksums.
The easies
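The pre-calculation idea above could be prototyped today as an external helper, outside rsync itself. The sketch below is an assumption-laden illustration (rsync has no checksum cache; the cache layout, function name, and use of md5sum are all invented for the example, and the `-nt` test, while widely supported, is not strictly POSIX):

```shell
#!/bin/sh
# Hypothetical helper: cache one md5 per file and re-hash a file only
# when it is newer than its cache entry, so repeated mirror runs skip
# the expensive re-reads.
CACHE_DIR=${CACHE_DIR:-.cksum-cache}
cached_md5() {  # cached_md5 FILE  -> prints FILE's md5
    # name the cache entry after a hash of the path
    entry="$CACHE_DIR/$(printf '%s' "$1" | md5sum | cut -d' ' -f1)"
    if [ ! -f "$entry" ] || [ "$1" -nt "$entry" ]; then
        mkdir -p "$CACHE_DIR"
        md5sum "$1" | cut -d' ' -f1 > "$entry"   # (re)compute and store
    fi
    cat "$entry"
}
```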
On Sun, 2008-09-28 at 19:55 +0200, Tim Newsome wrote:
> I have a program that copies pictures to a web server using rsync.
> Typically it will do something like copy IMG_1234.JPG to
> server:/www/docroot/YEAR/MONTH/DAY, where the appropriate numbers are
> inserted for YEAR/MONTH/DAY. Currently the
On Sun, 2008-09-28 at 19:52 +0200, Tim Newsome wrote:
> I'm currently in Egypt, and my Internet connection goes up and down a
> lot. rsync --partial --timeout is great, especially when combined with a
> little script that checks whether it failed due to timeout, and in that
> case starts the transf
I have a program that copies pictures to a web server using rsync.
Typically it will do something like copy IMG_1234.JPG to
server:/www/docroot/YEAR/MONTH/DAY, where the appropriate numbers are
inserted for YEAR/MONTH/DAY. Currently the only way to do this with
rsync is to locally create YEAR/MONTH
I'm currently in Egypt, and my Internet connection goes up and down a
lot. rsync --partial --timeout is great, especially when combined with a
little script that checks whether it failed due to timeout, and in that
case starts the transfer up again. I think it would be a good feature
for rsync to s
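A little retry script of the kind described above might be sketched like this. Exit code 30 is rsync's documented "timeout in data send/receive"; the paths, timeout, and function name are placeholders:

```shell
#!/bin/sh
# Sketch: keep restarting the transfer while it fails with exit code 30
# (timeout); give up immediately on any other error.
retry_rsync() {  # retry_rsync SRC DEST
    until rsync -a --partial --timeout=60 "$1" "$2"; do
        rc=$?
        [ "$rc" -eq 30 ] || return "$rc"   # non-timeout failure: stop
        sleep 10                           # brief pause before resuming
    done
}
# usage: retry_rsync pics/ server:/www/docroot/
```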
On 14/09/08 04:53:55, Carlos Carvalho wrote:
Quey ([EMAIL PROTECTED]) wrote on 13 September 2008 07:10:
>Is it possible to request a new feature that will help out some of us
>doing many mirrors, that is each mirror has their own system uid for
>security purposes, it would be of great adva
On Sat, 2008-09-13 at 15:53 -0300, Carlos Carvalho wrote:
> Quey ([EMAIL PROTECTED]) wrote on 13 September 2008 07:10:
> >Is it possible to request a new feature that will help out some of us
> >doing many mirrors, that is each mirror has their own system uid for
> >security purposes, it wo
Quey ([EMAIL PROTECTED]) wrote on 13 September 2008 07:10:
>Is it possible to request a new feature that will help out some of us
>doing many mirrors, that is each mirror has their own system uid for
>security purposes, it would be of great advantage (to I'm sure very
>many) to have an o
Hi Wayne,
Fantastic, patch works a treat, thanks!
+1 to commit to 3.1.0 release :)
On 13/09/08 11:39:46, Wayne Davison wrote:
On Sat, Sep 13, 2008 at 07:10:53AM +1000, Quey wrote:
> maybe something like a --chown user.group
There is a diff in the patches directory that allows you to do
this
On Sat, Sep 13, 2008 at 07:10:53AM +1000, Quey wrote:
> maybe something like a --chown user.group
There is a diff in the patches directory that allows you to do this.
If you apply patches/usermap.diff, you can use a command like this:
rsync -av --usermap=*:someuser --groupmap=*:somegroup /src/ /d
Hi,
Is it possible to request a new feature that will help out some of us
doing many mirrors, that is each mirror has their own system uid for
security purposes, it would be of great advantage (to I'm sure very
many) to have an option to "save as user" rather than
have the files/directo
Hi there,
One of the things that I've been doing for fun is to try to speed up
ext4's fsck time. As you can see here:
http://thunk.org/tytso/blog/2008/08/08/fast-ext4-fsck-times/
Fsck'ing an ext4 filesystem can be between 6-8 times faster than the
equivalent file hierarchy on ext3. In or
Feature request: two layers of exclude/include patterns
=== Logic ===
There are two (or more than one) lists of filters. When checking whether a
file should be included, it goes through both layers:
if (file included in layer1_filter && file included in layer2_filter) {
//it's in
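The described two-layer logic can be sketched as a small standalone helper. Note this is an illustration only: rsync itself has a single filter chain, and the pattern lists and function names below are invented for the example.

```shell
#!/bin/sh
# Sketch: a file is included only if it matches *both* pattern lists.
set -f                       # treat the patterns literally, no globbing
layer1='*.c *.h'             # e.g. "what kind of file"
layer2='src/* include/*'     # e.g. "where it lives"
matches_any() {  # matches_any FILE PATTERN...
    f=$1; shift
    for p in "$@"; do
        case $f in $p) return 0 ;; esac
    done
    return 1
}
included() {     # included FILE -> success iff both layers match
    matches_any "$1" $layer1 && matches_any "$1" $layer2
}
```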
Hello
I have started to use rsync to backup my personal files to a server. I
keep my mirrored files on a server folder called /uptodate and (I use
--backup option) the changes from last backup are moved by rsync to
/changes-20080731 (for example). This is great, and it lets me keep
several v
> What if you 2>&1 |while read x; do echo "`date`: $x"; done ?
I'm sure this would work great under "real" linux, but I'm running rsync
under cygwin. It is running via a command line using the cygwin1.dll. I
could do it within a bash script but would prefer to keep it in the scripts
I use today.
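For reference, the suggested pipe can be wrapped as a plain POSIX function, which should behave the same under cygwin's bash as under Linux (the rsync command in the usage note is a placeholder):

```shell
#!/bin/sh
# The timestamping pipe from above as a small function: prefix every
# line read on stdin with the current date.
stamp() {
    while IFS= read -r line; do
        printf '%s: %s\n' "$(date)" "$line"
    done
}
# usage:
#   rsync -av src/ dest/ 2>&1 | stamp >> rsync.log
```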
I've been trying to find out how to improve performance with large files
(e.g. Exchange databases > 80GB). One thing I've found is that when reviewing
the log output using - it is difficult to pinpoint what is taking the
majority of the time without monitoring the log "live", which isn't really
feasibl