RE: Backup Microsoft Exchange

2008-10-07 Thread Rob Bosch
The main reason I recommended using vshadow is that you don't have to create
the BKF file, so the space used is only what the shadow copy requires (300MB
minimum).  You can delete the shadow copy when you are done, so there is no
meaningful space impact on your drive.  Creating the shadow copy is also very
fast, normally under 20 seconds for a 70GB data store.

Of course, either approach will work, and ntbackup is much easier.
 
Rob 


-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Question on Resource temporarily unavailable error

2008-10-07 Thread Rob Bosch
I've removed the file from the destination machine and still received the
error.  When I disabled the preallocate option, it worked.  I suspect there
is an issue with the preallocate support under cygwin 1.7.  I'll report it to
the cygwin message board.

Rob


-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


sporadic issue with lnk file

2008-10-06 Thread Rob Bosch
I'm having issues with some .LNK files using cygwin 1.7 and rsync 3.0.3.
I'm not sure whether this is an rsync or a cygwin issue.  Here is the error
written to the rsync log:

rsync: rename "/EDrive/Documents and Settings/csams/Application
Data/Microsoft/Office/Recent/.Order requests Lori Kopp.doc.LNK.jJOmON" (in
test.backup) -> "Documents and Settings/csams/Application
Data/Microsoft/Office/Recent/Order requests Lori Kopp.doc.LNK": No such file
or directory (2)

The problem is intermittent, since many LNK files are transferred
successfully, and I do not see any filename conflict that would prevent a
rename on the receiver side.  The file does exist on the receiver, but its
date/time is the time of the transfer rather than matching the source file;
I use the rsync option that preserves date/times, and they normally match.
Just wondering if anyone knows of an issue.

Rob

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Backup Microsoft Exchange

2008-10-06 Thread Rob Bosch
You need to use shadow copies to create a drive you can use with rsync, or
you have to take Exchange offline.  You can use vshadow.exe (use version
3.0) to create the shadow copy and then expose it as a drive, then use
rsync to sync this point-in-time backup to your offsite location across an
ssh tunnel.  Running vshadow with the -w option tells the Exchange writer to
create a consistent backup, so you won't have problems with recovery.

There is plenty of documentation on how to use vshadow to expose a shadow
copy as a drive.  When using rsync to send it offsite, be sure to use the -I
option, since the date/time of the Exchange edb file will not necessarily
change.

You can also use betest (from the VSS SDK) to truncate logs periodically,
and use the -I option only when you truncate the logs.  The rest of the time
you will just transfer the logs, and you can do a soft recovery against your
edb in the event of a disaster.  If you truncate logs or use circular
logging, you have to use -I to force a sync even if the date/time has not
changed.
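
For anyone wanting the concrete steps, here is a minimal sketch of that flow
from a cygwin shell.  The paths, the offsite host, and the SHADOW_COPY_ID
placeholder are assumptions, and the -w flag is taken from the description
above; consult the VSS SDK docs for the exact flags in your vshadow version.

# 1. Create a persistent shadow copy of E:, involving the VSS writers
#    (per the above, -w asks the Exchange writer for a consistent image).
vshadow -p -w E:

# 2. Expose the shadow copy locally as X: (the ID is printed by step 1).
vshadow '-el={SHADOW_COPY_ID},X:'

# 3. Push the point-in-time copy offsite over ssh; -I forces the transfer
#    even though the edb's date/time may not have changed.
rsync -rtv -I -e ssh '/cygdrive/x/exchsrvr/mdbdata/' user@offsite:/backups/exchange/

# 4. Delete the shadow copy when finished.
vshadow '-ds={SHADOW_COPY_ID}'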

Rob 

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Question on Resource temporarily unavailable error

2008-10-03 Thread Rob Bosch
Thanks for the tip.  The destination is on a fibre channel array.  I'm able
to reproduce the issue when rsyncing locally, and I get a read error, so I'm
deleting the file and letting rsync recreate it... then I'll see if the
issue occurs again.  You're probably right; it may be a hardware issue.

Rob

On Tue, 2008-09-30 at 14:20 -0600, Rob Bosch wrote:
 2008/09/30 12:09:55 [12508] rsync: write failed on "/EDrive/testfile.edb"
 (in Test.Backup): Resource temporarily unavailable (11)

That error is coming from the destination filesystem.  What happens if
you copy the files to another place on the destination machine and then
locally (using rsync or another tool) to the directory that is giving
the errors?  If the /EDrive/testfile.edb path is on a network
filesystem, see if the network filesystem has logged any errors or try
taking it out of the loop by rsyncing directly to the machine holding
the destination files.

Matt


-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Question on Resource temporarily unavailable error

2008-09-30 Thread Rob Bosch
I'm using rsync over an ssh tunnel to transfer some large files, and I've
been getting "Resource temporarily unavailable" errors in my rsyncd.log
frequently.  The connections are T1s and very stable, so I don't think it is
a network drop.  The disconnect occurs on the same files every time, although
these files are also the largest ones, taking the most time.  I tried
limiting the bandwidth to make sure that wasn't an issue and still get the
error.  I'm running rsync 3.0.3 on cygwin 1.7, with the preallocate patch
applied.  Does anyone have any suggestions?

Example error in rsyncd.log:

2008/09/30 12:09:55 [12508] rsync: write failed on "/EDrive/testfile.edb"
(in Test.Backup): Resource temporarily unavailable (11)
2008/09/30 12:09:55 [12508] rsync error: error in file IO (code 11) at
receiver.c(321) [receiver=3.0.3]
2008/09/30 12:09:55 [12508] rsync: connection unexpectedly closed (86 bytes
received so far) [generator]
2008/09/30 12:09:55 [12508] rsync error: error in rsync protocol data stream
(code 12) at io.c(644)

Rob



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Large file - match process taking days

2008-07-30 Thread Rob Bosch
I've been trying to figure out why some large files are taking a long time
to rsync (an 80GB file, in this case).  With this file, the match process is
taking days.  I've added logging at verbose level 4.  The output from
match.c is at the point where it is writing out the "potential match at"
messages.  In a 9 hour period the match output has advanced from:

potential match at 14993337175 i=2976 sum=7c07ae74
potential match at 14993834514 i=3517 sum=0956772e
potential match at 14994673480 i=3232 sum=9be33b55
potential match at 14994912897 i=4739 sum=7b87587a
potential match at 14996877980 i=1453 sum=b7715246
potential match at 14999624225 i=906 sum=d9d831c6
potential match at 1451039 i=2235 sum=6ca97091
potential match at 15001174331 i=3866 sum=12f966ee
potential match at 15001209073 i=2080 sum=783c7750
potential match at 15001399336 i=4522 sum=87f122e0
potential match at 15001543265 i=1360 sum=85dee02c
potential match at 15001770789 i=1637 sum=c55912e6
potential match at 15002913113 i=2783 sum=3fdbf408
potential match at 15004011466 i=3552 sum=ea7d0f44
potential match at 15005784863 i=2758 sum=cf9e00d6

To

potential match at 19827231165 i=3880 sum=f0b58ab2
potential match at 19827785238 i=4099 sum=f3338531
potential match at 19827870435 i=1232 sum=6abf175c
potential match at 19829135485 i=4472 sum=1ed3674e
potential match at 19829758278 i=2705 sum=dc796cb7
potential match at 19830224336 i=2959 sum=f0bd8161
potential match at 19830896106 i=3185 sum=6f83947a
potential match at 19832087866 i=1306 sum=14b38acb
potential match at 19832536037 i=1411 sum=3de116db
potential match at 19833817328 i=102 sum=45a8d003
potential match at 19835208508 i=2706 sum=e326d8e4
potential match at 19836927143 i=1591 sum=e357d821
potential match at 19838869812 i=4324 sum=1b113e13
potential match at 19839194857 i=3894 sum=03e116c1
potential match at 19839789868 i=3285 sum=39139716

I believe this means that 4.8GB of the file has been processed in this
9 hour period?  The block size is currently set manually to 1149728, 4 times
the default value.  Any idea why it would take so long to get through this
portion of the sync process?  The rsync version is 3.0.3 on both ends.
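
As a sanity check on that estimate, some quick arithmetic on the offsets in
the two log excerpts above (a sketch; it assumes the match scan advances
linearly between the quoted lines):

# Offsets taken from the first and last quoted "potential match" lines.
start=15005784863
end=19839789868
secs=$((9 * 3600))
echo "bytes scanned: $((end - start))"               # ~4.8GB, matching the estimate
echo "rate: $(( (end - start) / secs )) bytes/sec"   # roughly 150KB/sec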

Rob 



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Large file - match process taking days

2008-07-30 Thread Rob Bosch
The files are very similar: a maximum of about 5GB of data differences over
80GB.  The CPU usage on both sides is low (3-5 percent), and the memory
usage is low (11MB on the client; not sure on the server).

The full rsync options are:

-ruityz --partial --partial-dir=.rsync-partial --links --ignore-case
--preallocate --ignore-errors --stats --del --block-size=1149728 -I 

I'm using the -I option to force a full sync, since date/time changes on
database files are not a reliable indicator of changes.

I'll try a block-size of 1638400, although I did not see a big change in
moving it from about 287000 (the square-root default) to 1149728.

Rob

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Large file - match process taking days

2008-07-30 Thread Rob Bosch
The two servers are connected via a T1 on one end and a 30MB Internet
connection at the other.  Based on the stats I'm looking at, the T1 is only
about 20% utilized, and the other end is not even close to fully utilized.
It would seem to me that the data connection would only be a bottleneck when
sending the literal data, not the checksums.  Based on what I see in the
logs, it is taking forever to do the checksum process.  Or am I
misunderstanding the log?

Rob

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Feature request - timestamp with -v higher than 3

2008-07-30 Thread Rob Bosch
 What if you 2>&1 | while read x; do echo `date`: $x; done ?

I'm sure this would work great under real Linux, but I'm running rsync
under cygwin.  It is invoked from a command line using cygwin1.dll.  I could
do it within a bash script, but I'd prefer to keep it in the scripts I use
today... good idea though.

Rob

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Feature request - timestamp with -v higher than 3

2008-07-29 Thread Rob Bosch
I've been trying to find out how to improve performance with large files
(e.g. Exchange databases > 80GB).  One thing I've found is that in the
verbose log output it is difficult to pinpoint what is taking the majority
of the time without monitoring the log live, which isn't really feasible on
80GB files.  It would be helpful if the additional logging (or all logging)
included an optional timestamp on the output when the verbosity level is
above X, where X is 3?  Especially in the match.c code segment.  Just my 2
cents.
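
In the meantime, something close can be had by wrapping rsync in a shell
pipeline, along the lines of the suggestion in the follow-up above (a
sketch; the rsync arguments and paths are placeholders):

# Prefix every line of rsync's merged output with a wall-clock timestamp.
rsync -vvvv --stats src/ dest/ 2>&1 | while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done > rsync-timestamped.log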

Rob



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


maximum block-size (-B) parameter

2008-07-25 Thread Rob Bosch
I've been trying to test rsync on files over 80GB again, to see if I can
find ways to improve performance further.  I tried passing a manual
block-size value, and I keep getting "Invalid block length" from the
[sender].  Can someone refresh my memory of the valid values that can be
passed with this parameter?

I've tried passing 1048, and the -vvv output shows the file being processed
in 1KB intervals.  When I pass 1048576, I get the error message from the
sender.  I have not modified the define in rsync.h; it is still
((int32)1 << 17), i.e. 131072 (128KB).  Since the value is an int32, passing
16MB or 1MB or 512KB should not be a problem, right?
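
As the follow-up below notes, the accepted maximum turned out to be 128KB.
A hypothetical check of that boundary (paths are placeholders; the error
text is the one quoted above):

rsync -B 131072 bigfile.edb dest/   # 1 << 17 bytes: accepted
rsync -B 131073 bigfile.edb dest/   # one byte over: "Invalid block length" from the sender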

Rob

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: maximum block-size (-B) parameter

2008-07-25 Thread Rob Bosch
The maximum value allowed was 128KB.  I think this is an issue with cygwin
(I'm using 1.7.6 for testing) and how it interprets an int32.  To test, I've
replaced the int32-based define with an explicit upper limit.  Just an FYI
in case someone else hits this issue.

Rob



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Remote Differential Compression comparative run

2008-07-25 Thread Rob Bosch
I ran a comparative test using the RDC sample program that comes with the
Server 2008 SDK.  It is a simple demonstration of Microsoft's Remote
Differential Compression algorithm, using the RDC API to copy a file across
a WAN/LAN connection.  As-is, the program has little functionality and a lot
of limitations, but I thought it might be fun to test.  I compiled the RDC
code unmodified using Visual Studio 2008.

A run on an 80GB file took approximately 14.75 hours to transfer the file
across a gigabit LAN.  The same run using rsync 3.0.3 took 1.62 hours.  The
rsync run used cygwin for both the client and server with the preallocate
patch applied.

I'm not sure this is a fair comparison given the rudimentary application
used for RDC but it sure looks like rsync blows away RDC.

Rob



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Directory with ! in filename

2008-03-05 Thread Rob Bosch
I've been getting the error below, and the only thing I can see that is odd
about the directory is the ! in its name.  Also, there is a space at the end
of the path which doesn't seem like it should be there.  There are no
permission issues with accessing the directory, and the directory does
exist.  Does anyone know if the ! will cause a problem, or will rsync escape
the special character?  Is the space before the closing quote a problem with
the error message, or is something else going on?  I'm going from a Windows
client using cygwin to a Linux receiver/server.  I'm still using 3.0.0pre10.

rsync: opendir "/cygdrive/j/Documents and Settings/All Users/Start
Menu/Programs/ACT! 2006 " failed: No such file or directory (2)

Thanks!

Rob




-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Fragmentation on XFS

2008-02-27 Thread Rob Bosch
Let me know of any additional info or tests you need me to run.  I'll halp any 
way I can.  thanks.

rob


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-27 Thread Rob Bosch
Wayne, thanks for your help on this issue.  It turned out to be user error
(mine): the client was the pre5 client instead of pre10.  I reran the test
with the pre10 client as you suggested, and here are the results.  The only
odd thing I noticed is that even though all the data matched, the file was
recreated on the receiving side.  If there is a 100% match, shouldn't it
just leave the file as-is, even if the -I option is selected?  Or is that
caused by a different option I have set?  FYI, the fragmentation was only
27 extents for a 59GB file... I really like XFS!

false_alarms=53852 hash_hits=460671 matches=460671
sender finished FILENAME
send_files phase=1
send_files phase=2
send files finished
total: matches=460671  hash_hits=460671  false_alarms=53852 data=0

rsync[4552] (sender) heap statistics:
  arena: 524288   (bytes from sbrk)
  ordblks:2   (chunks not in use)
  smblks: 0
  hblks:  0   (chunks from mmap)
  hblkhd:   2686976   (bytes from mmap)
  allmem:   3211264   (bytes from sbrk + mmap)
  usmblks: 22151168
  fsmblks:0
  uordblks: 3202504   (bytes used)
  fordblks:8760   (bytes free)
  keepcost:8624   (bytes in releasable chunk)

Number of files: 1
Number of files transferred: 1
Total file size: 60381007872 bytes
Total transferred file size: 60381007872 bytes
Literal data: 0 bytes
Matched data: 60381007872 bytes
File list size: 33
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 96
Total bytes received: 3685395

sent 96 bytes  received 3685395 bytes  492.61 bytes/sec
total size is 60381007872  speedup is 16383.44
_exit_cleanup(code=0, file=main.c, line=1060): about to call exit(0)

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes

=== rsync.exe -I --no-whole-file --port=888 -vvv --compress-level=9 --stats 

Execution time: 7481.359 s


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
For a meaningful test, you should actually write 77GB of data into a new
file and an ftruncated file and see if there's any difference in the
resulting fragmentation.

In your patch, you should use fallocate in place of ftruncate.  If your
glibc is like mine and doesn't provide direct access to fallocate,
you'll have to use syscall and __NR_fallocate .

Matt

I ran the tests, with the results below.  I couldn't get the fallocate
function to work since my kernel doesn't support it.

Destination file on Ext3:
- ftruncate, 123MB file, execution time 25 secs, 62 extents
- posix_fallocate, 123MB file, execution time 11 secs, 53 extents

Destination file on XFS:
- ftruncate, 123MB file, execution time 11 secs, 1 extent
- posix_fallocate, 123MB file, execution time 10 secs, 1 extent
- without preallocation, 123MB file, execution time 14 secs, 1 extent
- ftruncate, 59GB file, execution time 52776 secs, 1235 extents
- posix_fallocate, 59GB file, execution time 53919 secs, 11 extents
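
For anyone reproducing these numbers: extent counts like these can be read
back with filefrag on ext3 and xfs_bmap on XFS (a sketch; the mount points
and file names are assumptions):

# Count extents of the received file after the transfer completes.
filefrag /mnt/ext3/testfile.bkf          # e.g. "testfile.bkf: 62 extents found"
xfs_bmap /mnt/xfs/testfile.bkf | wc -l   # rough extent count on XFS (one line per extent)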

The tests were run by first rsyncing a single file to the destination and
then using rsync to send the file again.  The source and destination files
are identical, so I used the -I option to force a sync.  The tests seem to
show that ftruncate doesn't do anything to avoid fragmentation, as you
pointed out, and posix_fallocate doesn't seem to have an impact on overall
runtime.  Your patch to use fallocate if available seems like the best way
to go; nice work!

The odd thing is that a huge amount of the file was resent again even though
the files are identical at the source and destination.  Here are the stats
on the transfer:

rsync[7752] (sender) heap statistics:
  arena: 655360   (bytes from sbrk)
  ordblks:3   (chunks not in use)
  smblks: 0
  hblks:  0   (chunks from mmap)
  hblkhd:655360   (bytes from mmap)
  allmem:   1310720   (bytes from sbrk + mmap)
  usmblks: 20250624
  fsmblks:0
  uordblks: 1238408   (bytes used)
  fordblks:   72312   (bytes free)
  keepcost:   62248   (bytes in releasable chunk)

Number of files: 1
Number of files transferred: 1
Total file size: 60381007872 bytes
Total transferred file size: 60381007872 bytes
Literal data: 52482277376 bytes
Matched data: 7898730496 bytes
File list size: 44
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 38928961023
Total bytes received: 3685406

sent 38928961023 bytes  received 3685406 bytes  722051.33 bytes/sec
total size is 60381007872  speedup is 1.55

Both the ftruncate and posix_fallocate runs have exactly the same transfer
stats, which makes sense.  Resending 38GB of the data, however, does not.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 Any idea why Glibc's posix_fallocate makes any difference?

 Doesn't it simply write a lot of zeros?  In that case, why doesn't
 rsync writing lots of data sequentially result in the same number of
 extents?

The destination server had a lot of other processes running at the same
time.  I suspect this concurrency is causing the additional extents, since
many processes were writing to the same disk array.

I thought only 1235 extents without posix_fallocate was pretty good. Under
cygwin and NTFS, an equivalent file was generating around 25000 extents.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 A local transfer needs --no-whole-file if you want it to use the rsync
 algorithm (which uses more disk I/O, so it's not the default).

The transfers occurred across a local network but were on separate machines.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 Though, did I get the right impression that NTFS generates lots of
 extents for small writes even when nothing else is running?

The fragmentation on NTFS was a problem even when nothing else was running
on the server.  The preallocation patch made all the difference on NTFS and
cygwin.  In that world it is a must-have: it increases read performance by
orders of magnitude without impacting the write side, because cygwin's
posix_fallocate is very efficient.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 Was that simply due to writing too-small block to NTFS?  In other
 words, would increasing the size of write() calls have fixed it
 instead, without leaving allocated but unused disk space in the case
 of a user-abort with --partial, --partial-dir or --inplace?

It could have been a function of the block size, but I don't think so; I
never tested the strategies you list.  Under cygwin the posix_fallocate
function is extremely efficient: it immediately allocates the file and does
no writing, yet still produces a single-extent file if one can be provided
(just like fallocate, if the kernel supports it).  Given that this solved
the problem, I didn't pursue any other alternatives.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
The patch truncates the file with ftruncate if a transfer fails in
receiver.c, which should avoid the problem you mention.  Even if it didn't,
the file would exist on the FS with the predefined size: it would be in the
allocation table and exist on disk (you can see it in Windows Explorer).  It
wouldn't have data across the entire file size, but it is still a valid, if
sparse, file.

Writing in larger chunks won't fully solve the problem unless the machine
does little concurrent work.  My Windows machine using NTFS experienced high
fragmentation in ALL files, not just large ones; the server was receiving
about 75 concurrent inbound rsync transfers.  On NTFS I used preallocation
for all files.  This changed the throughput on our SATA RAID array from a
mere 10MB/s to 80MB/s, bursting to 150MB/s.  The only change we made was the
preallocate option, and as our files became defragmented due to updates we
noticed the huge change in performance.  We also never experienced any OS
corruption or other issues with the NTFS machines.

The reason we moved from Windows to Linux for our rsync machines was due to
scalability (CPU and memory usage) and reliability (we were getting crashes
due to driver problems under Windows).

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 A local transfer needs --no-whole-file if you want it to use the rsync
 algorithm (which uses more disk I/O, so it's not the default).

..wayne..

I reran this test with the --no-whole-file option and got exactly the same
results.  Any idea why so much data is being sent when the files are
identical on both sides?  Stats on the transfer:

=== rsync -I --no-whole-file --port=888 -vv --compress-level=9 --stats
'/cygdrive/s/backups/*' 'rsync://[EMAIL PROTECTED]/TestXFS' ===
opening tcp connection to 10.0.0.3 port 888
sending daemon args: --server -vvIz --compress-level=9 . TestXFS
TestFile.bkf

total: matches=60263  hash_hits=-1173174026  false_alarms=887889770
data=52482277376

rsync[6424] (sender) heap statistics:
  arena: 655360   (bytes from sbrk)
  ordblks:3   (chunks not in use)
  smblks: 0
  hblks:  0   (chunks from mmap)
  hblkhd:655360   (bytes from mmap)
  allmem:   1310720   (bytes from sbrk + mmap)
  usmblks: 20250624
  fsmblks:0
  uordblks: 1238408   (bytes used)
  fordblks:   72312   (bytes free)
  keepcost:   62248   (bytes in releasable chunk)

Number of files: 1
Number of files transferred: 1
Total file size: 60381007872 bytes
Total transferred file size: 60381007872 bytes
Literal data: 52482277376 bytes
Matched data: 7898730496 bytes
File list size: 44
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 38928961023
Total bytes received: 3685406

sent 38928961023 bytes  received 3685406 bytes  1174988.20 bytes/sec
total size is 60381007872  speedup is 1.55

Execution time: 33134.571 s


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
I ran rsync on the 59GB file again without preallocate on XFS.  It created
only 383 extents...very low fragmentation.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-25 Thread Rob Bosch
 A local transfer needs --no-whole-file if you want it to use the rsync
 algorithm (which uses more disk I/O, so it's not the default).

..wayne..

Here are the stats on a small file... exactly as expected, with everything
getting matched.  Is this an issue with how much of the file rsync can look
across to find a match?  I noticed the previous run had a huge number of
false_alarms and a negative number for hash_hits.  Does this point to where
the problem may be?

Rob

=== rsync -I --no-whole-file --port=888 -vv --compress-level=9 --stats
'/cygdrive/e/work data/temp/*' 'rsync://[EMAIL PROTECTED]/TestXFS' ===
opening tcp connection to 10.0.0.3 port 888
sending daemon args: --server -vvIz --compress-level=9 . TestXFS
test.exe
total: matches=11225  hash_hits=11225  false_alarms=1 data=0

rsync[360] (sender) heap statistics:
  arena: 524288   (bytes from sbrk)
  ordblks:2   (chunks not in use)
  smblks: 0
  hblks:  0   (chunks from mmap)
  hblkhd:655360   (bytes from mmap)
  allmem:   1179648   (bytes from sbrk + mmap)
  usmblks:  1966080
  fsmblks:0
  uordblks: 1118544   (bytes used)
  fordblks:   61104   (bytes free)
  keepcost:   61040   (bytes in releasable chunk)

Number of files: 1
Number of files transferred: 1
Total file size: 125892318 bytes
Total transferred file size: 125892318 bytes
Literal data: 0 bytes
Matched data: 125892318 bytes
File list size: 27
File list generation time: 0.010 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 81
Total bytes received: 78613

sent 81 bytes  received 78613 bytes  4496.80 bytes/sec
total size is 125892318  speedup is 1599.77

Execution time: 17.693 s

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Fragmentation on XFS

2008-02-23 Thread Rob Bosch
On Sat, 2008-02-23 at 16:43 -0700, Rob Bosch wrote:

In your patch, you should use fallocate in place of ftruncate.  If your
glibc is like mine and doesn't provide direct access to fallocate,
you'll have to use syscall and __NR_fallocate .

I'll run a test with both ftruncate and fallocate using XFS as the target
filesystem.  I have a 75GB file I can rsync across the internal network to
see how it behaves.  My Ext3 system is a little smaller, so I'll run some
quicker tests there first (around 2GB).  I'll post the results as soon as
the tests are complete.  Thanks for the tips.

Rob


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Error code 23, delete failed, error 0

2008-02-06 Thread Rob Bosch
I'm getting errors in my rsync logs using rsync 3.0.0pre8 with the -vv
option.  The client is running under cygwin; the server is CentOS.  The
error in the log is:

rsync: delete_file: rmdir(filepath) failed: No error (0) 

When I look on the server/receiving side, the directory does not exist.
I'm just trying to understand why this would cause a non-zero exit code.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


FW: Error code 23, delete failed, error 0

2008-02-06 Thread Rob Bosch
Just as clarification: when I said the directory does not exist, I meant
that it was properly deleted on the server/receiving side.  It appears as if
rsync did everything it should, but an error code of zero is being returned
and the exit code is set to 23, non-zero.

Full options:
-ruvvityz --compress-level=9 --links --ignore-case --ignore-errors --stats
--del --exclude-from=/cygdrive/c/DiffExclude.log

Patches:
Using the preallocate.diff and ignore-case.diff patches from the
rsync-patches tarball for 3.0.0pre8.

FYI, I was getting this issue on pre5 and still getting it with pre8.  

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Compiling with --disable-iconv

2008-02-05 Thread Rob Bosch
I tried changing HAVE_ICONV_H and HAVE_ICONV_OPEN, but this does not
resolve the issue.  I noticed in rsync.c that there is an #endif without a
matching #if condition at the end of the iconvbufs function.  Should this
function be available if HAVE_ICONV_H is not defined?  I also don't see how
the ic_chck = iconv_open(defset, defset); call (line 83) can be affected by
these options.

I have a feeling some of this is a function of the cygwin compile
environment that I'm using, but it seems I should be able to define these
parameters as 0 and get rsync to compile without iconv.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: Compiling with --disable-iconv

2008-02-05 Thread Rob Bosch
Sorry, I didn't see the matching #if statement.  Commenting out the two
ICONV lines in config.h fixed my issue.  Thanks...

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Compiling with --disable-iconv

2008-02-03 Thread Rob Bosch
I've been trying to compile 3.0.0pre8 under cygwin.  I was having a problem
with the iconv functions, which I think is related to the cygwin environment
and its iconv.h.

To bypass it I wanted to compile with the --disable-iconv option.  I'm still
getting the following errors during compilation:
 
/usr/src/rsync-3.0.0pre8/rsync.c:85: undefined reference to `_iconv_open'
rsync.o: In function `iconvbufs':
/usr/src/rsync-3.0.0pre8/rsync.c:193: undefined reference to `_iconv'
/usr/src/rsync-3.0.0pre8/rsync.c:163: undefined reference to `_iconv'
collect2: ld returned 1 exit status
make: *** [rsync.exe] Error 1
 
It looks like lines 85 and 163 are not conditionalized for the
--disable-iconv option.  I'm not sure why line 193 is showing the error.

Any suggestions?  Am I missing something (as usual)?

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: Compiling with --disable-iconv

2008-02-03 Thread Rob Bosch
The error on 193 is due to the iconv reference in the while loop.

Rob

/usr/src/rsync-3.0.0pre8/rsync.c:85: undefined reference to `_iconv_open'
rsync.o: In function `iconvbufs':
/usr/src/rsync-3.0.0pre8/rsync.c:193: undefined reference to `_iconv'
/usr/src/rsync-3.0.0pre8/rsync.c:163: undefined reference to `_iconv'
collect2: ld returned 1 exit status
make: *** [rsync.exe] Error 1

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

--partial replaces destination file on interrupted transfer

2008-01-30 Thread Rob Bosch
I had a case where I was using the --partial option (the client is running
rsync pre5, the server daemon pre8).  The transfer was interrupted and the
partially transferred file replaced the destination file on the server.  My
understanding was that the partially transferred file would be kept but
would not replace the destination file; that is, both files would be kept so
that rsync could pick up where it left off.

Details:
Client - rsync-pre5 with preallocation.diff, ignore-case.diff patches
applied running on cygwin
Server - rsync-pre8 with ignore-case.diff patch applied, running on CentOS 5
Full client command line options: -ruvity --partial --super --ignore-case
--ignore-errors --stats --delete-during --exclude-from=DiffExclude.txt

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: --partial replaces destination file on interrupted transfer

2008-01-30 Thread Rob Bosch
I guess a quick read of the man page would have resolved this.  Thanks for
the tip.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Workaround for ACL and file attributes

2007-12-17 Thread Rob Bosch
Just thought I'd post a simple workaround for Windows ACLs and file
attributes with rsync.  Some basic scripting allows ACL information to be
stored for future use on files transferred with rsync.  Most will probably
find the information worthless, but I thought I'd post it anyway.

- To get ACL information, use the icacls program provided as part of Server
2003 SP1
(http://technet2.microsoft.com/windowsserver2008/en/library/403edfcc-328a-479d-b641-80c290ccf73e1033.mspx?mfr=true).
This program can store the ACLs for a given path in a file for restoring
them later.  Running it on the path you rsync, and including the output file
in the rsync backup, gives you everything required to restore the ACLs
later.  Just use something like "icacls c:\* /t /save caclfile" and then make
sure caclfile is in your rsync capture.

- You can also use a simple dir command to capture all hidden, system, and
read-only files.  For example, running "dir /s /b /a:h C:\* > chidden.txt"
will provide you with a simple list of the hidden files on C:\.  You can run
this on the directory you are backing up with rsync and make sure the
chidden.txt file is included in the backup.  When you restore, you can use
the command "for /f %a in ('type chidden.txt') do attrib +h %a" to restore
the attributes; it will apply the hidden attribute to the files.  If you run
the dir command for each type (hidden, system, read-only) you will have all
the information required to restore the files to their original state in 3
simple text files.

I use these types of simple steps to work around capturing ACLs and
attributes.  They also provide me with a very reliable way of restoring the
files I backed up; it never hurts to have the additional information
anyway.
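
A consolidated sketch of that capture/restore flow, driven from a cygwin
shell (assumptions: icacls.exe and attrib.exe are on PATH, C: is the tree
being backed up, and the two output files land inside the rsync capture):

# Capture: ACLs for the whole tree, plus a list of hidden files.
icacls 'C:\*' /save caclfile /t
cmd /c 'dir /s /b /a:h C:\*' > chidden.txt

# Restore (after the files are back in place): reapply ACLs and attributes.
icacls 'C:\' /restore caclfile
tr -d '\r' < chidden.txt | while IFS= read -r f; do
    attrib +h "$f"      # reapply the hidden attribute to each captured path
done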

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: preallocate CPU usage - pre4

2007-11-18 Thread Rob Bosch
Matt, I have absolutely no programming skills for developing my own program!
I'd be happy to compile and test a program, however.

Are you going to put the fix for the preallocate patch in the next 3.0
pre-release, or in the release?

I'm pretty sure the function is being called properly, since I see the file
created after the delay... but that is about all I know.  I'll look into
setting up a test to do the trace.

I've been updating my cygwin environment, so I'll repeat some tests to see
if the problem still exists.  If so, I'll try to gather some more data on
it.  If you send me a simple program to compile, I'll test that too.  Thanks
for your response and support on this patch!!

Rob 

--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: preallocate CPU usage - pre4

2007-11-18 Thread Rob Bosch
 Attached is the C++ source for a simple program, allocate, to allocate a
 file.  Call the program like ./allocate thefile 770.

Now that was fun.  I was creating 300GB files with no fragmentation in less
than half a second!  Seriously, the performance using the program matched
Corinna's results: I was able to create a file instantaneously, without
fragmentation.  I tested up to 300GB.  I even did this on the production
system while it was under load and still got the same results.

Then I ran the test to see if rsync took any longer, knowing my cygwin
environment had changed.  Well, of course, the files were created without
any delay.

I'll chalk this entire problem up to a corrupted cygwin environment and
close it out.  Thanks for taking a look at it.  I'll use the allocate
program to test my cygwin environment in the future.

 That's Wayne's job.

Right, thanks.



--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


preallocate CPU usage - pre4

2007-11-05 Thread Rob Bosch
When I use the preallocate patch and create a 77GB file, I get a CPU spike
on the server side.  The spike lasts about 20 minutes, uses about 20%-25% of
the CPU, and is associated with the rsync instance creating the file; it is
directly linked to the time it takes to create the file.

I compiled rsync using cygwin CVS.  I initially suspected the implementation
of posix_fallocate in cygwin.  I posted the same question to their board,
and Corinna indicated she can create a 40GB file in a few milliseconds, so
I'm wondering if something else is going on at this point.

Does anyone else see this issue?  I want to confirm it isn't just a problem
with my environment.  I've replicated it on several machines, but they are
all using the same cygwin build I created.  It would be great to get a
confirmation (positive or negative) from someone running it under Linux.

Preallocation is working; it just takes a while to create the file.

Rob



-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Building hash table times for large files

2007-11-02 Thread Rob Bosch
I'm running pre4 on a 77GB file.  It seems like the hash table is taking a
long time to build.  I'm not sure what is involved in this step, but as an
example the following is logged during a run:

send_files(11, priv1.edb) 
send_files mapped priv1.edb of size 79187419136 
calling match_sums priv1.edb 
f.st.. priv1.edb 
hash search b=131072 len=79187419136 
built hash table for entries 0 - 52427 
built hash table for entries 52428 - 104855 
built hash table for entries 104856 - 157283 
built hash table for entries 157284 - 209711

The first two hash table ranges (0 - 52427 and 52428 - 104855) took only a
few minutes to build.  The next two entries written to the log file have
taken several hours.

Does this make sense?  Should the entries take so long to be built?
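
For scale, some quick arithmetic on the numbers in that log (a sketch; it
assumes the "built hash table" ranges advance linearly through the checksum
list):

# Figures taken from the log excerpt above.
filesize=79187419136
blocklen=131072                        # "hash search b=131072"
blocks=$(( (filesize + blocklen - 1) / blocklen ))
echo "checksum blocks: $blocks"                          # 604153
echo "table windows:   $(( (blocks + 52427) / 52428 ))"  # ~12 windows of 52428 entries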

Rob



-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


rsync error: STATUS_ACCESS_VIOLATION

2007-10-25 Thread Rob Bosch
I received the following error on the client when testing the pre2 release;
I'm not sure what it means or whether it helps.  Rsync 2.6.9 runs the same
backup without error.  I compiled rsync under cygwin CVS, so it may be an
issue there as well.  The server side did not crash.  It did log an error,
but I'm pretty sure that was due to the client-side crash.  Rsync was
running over ssh port forwarding initiated by a separate command.

 

  4 [main] rsync 8728 _cygtls::handle_exceptions: Exception:
STATUS_ACCESS_VIOLATION

Exception: STATUS_ACCESS_VIOLATION at eip=

eax=011BD368 ebx=0040 ecx=61121124 edx= esi=C128
edi=0022D008

ebp=001F2E88 esp=001F2DF0 program=E: \bin\rsync.exe, pid 8728, thread main

cs=001B ds=0023 es=0023 fs=003B gs= ss=0023

Stack trace:

Frame Function  Args

 

 

Server Log Entry:

2007/10/25 18:52:29 [4328] rsync error: error in rsync protocol data stream
(code 12) at io.c(596) [receiver=3.0.0pre2]

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: rsync error: STATUS_ACCESS_VIOLATION

2007-10-25 Thread Rob Bosch
That appears to have been the problem.  I pulled the latest CVS version and
the backup passed the same point and is still running.  If I get another
failure I'll post it back.  Thanks Matt!

Rob

-Original Message-
From: Matt McCutchen [mailto:[EMAIL PROTECTED] On Behalf Of Matt
McCutchen
Sent: Thursday, October 25, 2007 7:03 PM
To: Rob Bosch
Cc: rsync@lists.samba.org
Subject: Re: rsync error: STATUS_ACCESS_VIOLATION

On Thu, 2007-10-25 at 18:55 -0600, Rob Bosch wrote:
 I received the following error on the client when testing the pre2
 release.

   4 [main] rsync 8728 _cygtls::handle_exceptions: Exception:
 STATUS_ACCESS_VIOLATION

This might be the same crash that Erik Jan Tromp reported here:

http://lists.samba.org/archive/rsync/2007-October/018934.html

That crash was fixed in the CVS rsync after rsync 3.0.0.pre2 .  Please
try again with the latest rsync from CVS.

Matt


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Attributes on Windows files - not ACL

2007-10-14 Thread Rob Bosch
I'm wondering why I'm having trouble getting attributes like Hidden or
System to come across in my Windows rsync-under-cygwin setup.  I'm using
rsync 2.6.9 via an ssh tunnel from one Windows machine to the other.  The
options I'm using are -ruvitpAE --super --preallocate --ignore-errors
--stats.  Transfers work great; I just can't get the target files to carry
the same Windows file attributes, such as the hidden flag.  Note, I'm not
talking about the ACL issue commonly discussed in the forums; we extract
those to a text file using fileacl and then capture that file.  I can't find
a similar way to do file attributes, otherwise I'd be happy to take that
approach.  Thanks in advance for any assistance.

 

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: File Fragmentation issue...especially relating to NTFS...

2006-12-18 Thread Rob Bosch
The fragmentation we see on NTFS is due to so many streams writing to the
same disk when multiple rsync clients are sending data to the rsync daemon.
Windows will not reserve space for any of the new files unless the
posix_fallocate function is used; the writes occur as the data comes in,
resulting in extremely high fragmentation.  The testing I've done with
Matt's patch has shown that posix_fallocate nearly eliminates the
fragmentation issue with little to no performance penalty, even with very
large files (50GB+).
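
A quick way to observe the preallocation effect outside rsync, under cygwin
(a sketch; it assumes Windows' fsutil and the Sysinternals contig.exe, which
later posts in this thread use to count fragments, are on PATH):

# Reserve a ~1GB file in a single allocation, then analyze its fragmentation.
fsutil file createnew 'C:\prealloc.bin' 1073741824
contig -a 'C:\prealloc.bin'     # -a: analyze only; reports the fragment count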

I'd be happy to hear the results of additional strategies though so please
post what you find out!

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matt
McCutchen
Sent: Sunday, December 17, 2006 8:52 AM
To: Linda Walsh
Cc: Rob Bosch; rsync
Subject: Re: File Fragmentation issue...especially relating to NTFS...

On 12/2/06, Linda Walsh [EMAIL PROTECTED] wrote:
If rsync wants to easily encourage less fragmentation on NTFS, I'd
 suggest buffering more information in memory before doing the write.
 Ideally, it could buffer each file in full before writing it out, but
 large files may not fit in memory.

The posix_fallocate strategy does seem to work, in Rob's setup at
least.  Perhaps you could implement the buffering strategy and run
some tests to see whether it produces fewer fragments than the
posix_fallocate strategy on various filesystems.

Matt

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: File Fragmentation issue...especially relating to NTFS...

2006-12-18 Thread Rob Bosch
OK, sounds like completely different scenarios.  We haven't been using rsync
locally to do copies or things like that. 

-Original Message-
From: Linda Walsh [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 18, 2006 2:42 PM
To: Rob Bosch
Cc: 'Matt McCutchen'; 'rsync'
Subject: Re: File Fragmentation issue...especially relating to NTFS...

Hey Rob, some clarifications/comments...:-)

I was talking about the undesirable fragmentation that results from using
rsync as a type of smart-copy -- copying files from a source to a
destination with the target rsync invoked by the source-tree process (i.e.
rsync isn't a daemon). For me, that's the most common case.  Rsync and other
cygwin-based utils don't operate as well as the native Windows Copy or Xcopy
commands.

I never use rsync in daemon mode due to security paranoia; I use ssh when
connecting to remote systems, with RSYNC_RSH=/usr/bin/ssh.  It is also rare
that I have multiple rsyncs copying to the same target system, or multiple
rsync commands copying/updating directory trees on the same system locally.

My test case, that produced bad fragmentation, was using only 1 writer in
the case for rsync copying from one source host to another.  It came out of
this same topic (File Fragmentation) arising on the cygwin list, where there
was a general problem with the *nix utils producing poor fragmentation
performance on NTFS.

I did tests on two Linux filesystems that had utilities for listing the
number of fragments (only ext2/3 and XFS had such a utility), and used
Sysinternals' Contig program to show fragmentation on NTFS.  Writing the
entire file in 1 large write (using a 64M test file) produced the fewest
fragments on NTFS.  The write size didn't make as much of a difference on a
heavily fragmented ext2/ext3 disk, with over 500 fragments being produced
for the 64M file in my test cases.  The write size also didn't make much of
a difference on XFS: it reliably put the 64M file in 1 contiguous extent at
all of the block sizes I tested.

These were all cases using 1 writer/disk.

For NTFS, what seemed to be true was that fragmentation could be positively
affected by writing output files in 1 write call -- buffering the entire
file in memory before writing.

For example, if I use rsync to copy 1 file to a target host where the target
host uses NTFS, rsync writes in smallish chunks (better than the gnu-file
utils, though).  As a result, NTFS looks for a contiguous space for each of
the smallish chunks (whatever rsync's file-write size is).

In this simplistic case, on NTFS, rsync can achieve performance as optimal
as the native Windows utilities (the shell Copy and xcopy commands, or
Internet Explorer) by buffering the entire file in memory and doing the
final write in 1 write call (this presumes your file fits in memory, which
is another issue).

In the single-writer case, there doesn't seem to be any performance penalty
that I could easily measure.  As an aside, IE downloads a file first to a
tempdir.  Given slow download speeds compared to other operations, it might
have been presumed that users might engage in other activities while a large
file download was in progress -- this would approximate the multiple-writers
case.

As for multiple writers -- one might have to use the falloc type call, which
may be more effective for a heavily loaded server process writing multiple
files at the same time, as it might be too memory intensive to buffer all
incoming files completely in memory before writing them out.

I do wonder, though: since the falloc-type call (or its Windows equivalent)
is presumably more efficient than downloading to a tmpdir and then copying
to the final destination, why wouldn't the MS IE designers have used it?  MS
has a history of using internal or specialized calls to make their products
perform better than competitors'.  I'd think the falloc call would be a
perfect candidate for speeding up file download performance in their browser
(in the sense that it is a type of file-transfer agent).  Maybe they were
just being lazy?


Rob Bosch wrote:
  The fragmentation we see on NTFS is due to so many streams writing to the
  same disk when multiple rsync clients are sending data to the rsync daemon.
  Windows will not reserve space for any of the new files unless the
  posix_fallocate function is used.  The writes occur as the data comes
  in...resulting in extremely high fragmentation.  The testing I've done with
  Matt's patch has shown the posix_fallocate function nearly eliminates the
  fragmentation issue with little to no performance penalty even with very
  large files (50GB+).

  I'd be happy to hear the results of additional strategies though so please
  post what you find out!
---

I would be too! :-)

Seriously -- I think I need to get some more HW here; my previously used
test disk has gotten a bit too full to be really useful.

I'm not sure I'm quite as interested in the multi

Re: Clarification on the RSYNC_EXIT_STATUS

2006-09-20 Thread Rob Bosch
Wayne answered in my separate post...ignore this message.

Rob

Rob Bosch [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 I'm guessing no one else has seen this issue?  Most exit codes are
 properly reported (e.g. code 12, code 10, code 2).  The specific scenario
 is when a filename too long or directory not present error causes the
 client to exit with a code of 23 (reported as the exit code on the client
 properly and captured in the environment variable there).  The exit code of
 the server still shows an exit code of zero.  Any comments or ideas on what
 I may be doing wrong?

 Also, if anyone has comments on what section of the code sets the
 environment variable errors, I'd appreciate it... thanks.

 Rob

--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Clarification on the RSYNC_RAW_STATUS and RSYNC_EXIT_STATUS

2006-09-20 Thread Rob Bosch
Wayne, thanks for the clarification.  We're going to use a SQL query across
the SSH tunnel to the database where we centralize the errors.  Given this
structure, we can see if the server reported an error such as an interrupt
or a dropped connection; if it didn't, we'll post the client result.  This
covers us if there is a drop in the remote link.  Anyway, thanks!

Rob

Wayne Davison [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 On Mon, Sep 18, 2006 at 11:13:04AM -0600, Rob Bosch wrote:
  I've noticed that even if rsync has a non-zero exit code on the
client that
  the server still reports an exit status of 0.
 
 This can happen if the error occurred on the client side because there
 is no attempt to get the final exit status from the client side to the
 server side.  The only thing this server-side exit status tells you is
 the exit status of the server process.
 
 ..wayne..
 -- 
 To unsubscribe or change options:
https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read:
http://www.catb.org/~esr/faqs/smart-questions.html
 
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Clarification on the RSYNC_EXIT_STATUS

2006-09-19 Thread Rob Bosch
I'm guessing no one else has seen this issue?  Most exit codes are properly
reported (e.g. code 12, code 10, code 2).  The specific scenario is when a
filename too long or directory not present error causes the client to exit
with a code of 23 (reported as the exit code on the client properly and
captured in the environment variable there).  The exit code of the server
still shows an exit code of zero.  Any comments or ideas on what I may be
doing wrong?

Also, if anyone has comments on what section of the code sets the
environment variable errors, I'd appreciate it... thanks.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Clarification on the RSYNC_RAW_STATUS and RSYNC_EXIT_STATUS

2006-09-18 Thread Rob Bosch
I've noticed that even if rsync has a non-zero exit code on the client, the
server still reports an exit status of 0.  For example, I've received an
error 23 with the following output on the client side, "rsync error: some
files could not be transferred (code 23) at main.c(954) [sender=2.6.9cvs]",
while the server-side RSYNC_RAW_STATUS and RSYNC_EXIT_STATUS were still both
zero.  The error code returned by rsync on the client side was correctly 23.
Will the server-side error status report the same exit code the client
reports?  It seems that if either the client or the server experiences an
error, RSYNC_RAW_STATUS and RSYNC_EXIT_STATUS should reflect the same value,
right?

Rob


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: post-xfer exec -- more juice!

2006-09-18 Thread Rob Bosch
The way I have accomplished this is to modify the rsync request slightly
and cache the module name and request in a database.  In my case this
provides me with a unique key in the database, so I know the requests match
up.  On the client side I add the top-level path into the request (e.g.
rsync -optionshere /usr/bin [EMAIL PROTECTED]::modulename/bin/).  This is
the only way I can get the path into the request, but it does work.  On the
pre-xfer side I insert the module name and path, as well as other pertinent
information.  On the post-xfer side I fetch the record from the database
matching the module name and pull in the request value to get the path; a
sketch of the daemon-side wiring follows below.

A bit of a hack, but it has worked for us to track all open connections to
the rsync server and create a database of exit codes by module (a user
surrogate in our world) and path.

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: code 23 error.

2006-09-18 Thread Rob Bosch
Normally rsync will provide more information in the output, not just the
error at the end of the log.  That is, during the transfer there could be a
line that says "Filename too long" or notes some other issue with reading
the file.  We typically see the 23 error code when either a file name is too
long or a directory specified in the backup set doesn't exist.  I'm sure
there are others.  Take a look at the full output of rsync and you should be
able to find the reason.

Rob 



-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: File fragmentation

2006-09-13 Thread Rob Bosch
Wayne…my vote is for a command-line option.  I’ve noticed there is some
penalty for very large files (35GB-50GB).  The penalty is relatively small
based on my ‘intuitive’ measurements…read me watching without running a real
timer.  The difference is very small compared to what happens after a few
weeks without the fragmentation patch.  Our SAN was becoming so fragmented
that we were only getting 10% of normal throughput.  Since we’ve applied
this patch our SAN is continuously getting maximum throughput.  Benchmarks
are numbers such as 10MB/sec throughput due to fragmentation versus
90-150MB/sec without fragmentation (numbers reported by telnet sessions to
the SAN device).

Fragmentation decreases are very significant.  We see most files now created
without any fragmentation (single fragment reported by sysinternals contig
utility) whereas prior to the patch the same files were containing 40,000
fragments and above.  In some cases we are still getting fragmented files
but those files normally contain less than 2000 fragments and are only on
files 10GB and above.

Just my two cents…



--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: File fragmentation

2006-08-25 Thread Rob Bosch
That would be great!  If you need any help testing, let me know.

Rob

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matt
McCutchen
Sent: Friday, August 25, 2006 6:15 PM
To: Rob Bosch
Cc: rsync
Subject: Re: File fragmentation

On 8/25/06, Rob Bosch [EMAIL PROTECTED] wrote:
 This article suggests the ftruncate function is in the cygwin system
 (http://www.cygwin.com/ml/cygwin-cvs/2006-q3/msg00114.html).  Based on the
 Open Group Base Specifications, ftruncate should allow the file to be
 resized to the desired size, achieving what we want after the file is
 created?

I took a look at the change to fhandler_disk_file.cc.diff linked from
that message and found that  posix_fallocate is the function we want.
Cygwin correctly implements it using SetEndOfFile.  So rsync's
configure script should check for posix_fallocate, and the receiver
should call it just after opening each received file.  I might write a
patch for this.
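
Presumably something along these lines on the receiver (a sketch, not the
eventual patch; the function name and error handling are made up):

/* Sketch: preallocate the destination's full size right after opening it.
 * posix_fallocate() returns an errno value directly (0 on success)
 * rather than setting errno. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int open_preallocated(const char *path, off_t total_size)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    int err = posix_fallocate(fd, 0, total_size);
    if (err != 0) {
        /* Preallocation is an optimization; a fallback to plain
         * writes would also be reasonable here. */
        fprintf(stderr, "preallocate failed: %s\n", strerror(err));
    }
    return fd;
}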

Matt

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: File fragmentation

2006-08-19 Thread Rob Bosch
Matt,

I was looking at this a little on the MSDN site and found this information
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/
setfilevaliddata.asp).  It mentions that all files have a File Size and an
Allocation Size, as well as a valid data length.  It isn't clear, however,
how functions such as copy grab the right amount of space for the copied
file (copied files are usually less fragmented if large enough free blocks
exist).  I'm not sure if it is a call to create the file, then move the
pointer, then set the end of file... seems convoluted.  I wonder how the
copy feature does it.

Not sure if this helps at all or not.  I couldn't find anything else in the
create file functions that allow a parameter to define the file size.
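
For what it's worth, one plausible Win32 sequence -- a guess, not confirmed
from the article -- is to extend the file to its final size up front with
SetFilePointerEx plus SetEndOfFile, which sets the sizes without writing any
data:

/* Sketch (Win32): reserve the final file size before writing.  Whether
 * the copy utilities do exactly this is speculation. */
#include <windows.h>
#include <stdio.h>

HANDLE create_preallocated(const wchar_t *path, LONGLONG size)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return h;

    LARGE_INTEGER li;
    li.QuadPart = size;

    /* Move the pointer to the intended end and mark end-of-file. */
    if (!SetFilePointerEx(h, li, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
        fprintf(stderr, "preallocation failed: %lu\n", GetLastError());
        CloseHandle(h);
        return INVALID_HANDLE_VALUE;
    }

    /* Rewind before writing the real data. */
    li.QuadPart = 0;
    SetFilePointerEx(h, li, NULL, FILE_BEGIN);
    return h;
}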

Rob

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matt
McCutchen
Sent: Monday, August 14, 2006 5:33 PM
To: Rob Bosch
Cc: rsync@lists.samba.org
Subject: Re: File fragmentation

On 8/13/06, Rob Bosch [EMAIL PROTECTED] wrote:
 Is this something that should be implemented at the rsync level or cygwin
 level?  It would seem that if the opportunity to pre-allocate is available
 it would be useful to make sure space is available for the new file.

Only rsync knows how much data it plans to write, so rsync would have
to make some sort of call to provide the eventual file size.  I don't
know anything about the Windows call for preallocating space or
whether it is available through Cygwin.  If I get a chance, I will try
to find out.

Matt

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: File fragmentation

2006-08-13 Thread Rob Bosch
Is this something that should be implemented at the rsync level or cygwin
level?  It would seem that if the opportunity to pre-allocate is available
it would be useful to make sure space is available for the new file.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matt
McCutchen
Sent: Thursday, August 10, 2006 6:59 PM
To: Rob Bosch
Cc: rsync@lists.samba.org
Subject: Re: File fragmentation

On 8/6/06, Rob Bosch [EMAIL PROTECTED] wrote:
 I was wondering why rsync doesn't preallocate the space like copy or other
 utilities do under Windows. This would ensure you have the space to create
 the new file, plus it would reduce fragmentation since rsync would be
 writing to a pre-allocated file space.

I don't think there's a reason that rsync shouldn't preallocate the
space.  Since there is no way to preallocate in POSIX, I guess no one
thought about preallocating in Windows.  However, if there is a system
call available through Cygwin, it would be easy to make rsync check
for the call at configure time and use it if it is present.
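
On the C side, that configure-time pattern would presumably boil down to a
guard like this (HAVE_POSIX_FALLOCATE is the macro name a configure check
would typically define; the name is an assumption):

/* Sketch: only call posix_fallocate() when configure found it, so
 * platforms without the call build unchanged. */
#include <fcntl.h>
#include <sys/types.h>

static void try_preallocate(int fd, off_t expected_size)
{
#ifdef HAVE_POSIX_FALLOCATE
    /* Best effort: on failure, just let normal writes allocate blocks. */
    (void) posix_fallocate(fd, 0, expected_size);
#else
    (void) fd;
    (void) expected_size;
#endif
}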

Matt

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Invalid File Index error

2006-08-07 Thread Rob Bosch
Since upgrading to 2.6.8 with the tag 3 patch, I've been consistently
getting an error using rsync. We have about 100 servers using the client to
a central backup server, and the issue is repeatable on the servers having
the problem. About 10 of the servers are having the problem. The error
reported by rsync is:

Invalid file index: -1019607330 (count=8353) [sender]
rsync error: protocol incompatibility (code 2) at sender.c(169) [sender]

The numbers following "index:" and "count=" are different by machine. They
occur consistently in the same place on each machine, for example, following
a particular file backup. Using the -vvv option only shows the file before
the error finishing, and then the same error (no new logging information).

The build was created by Tev at http://www.itefix.no and I've also posted
this to his forum. I was wondering if anyone else was experiencing the
problem.

Version: rsync 2.6.8 with tag 3 patch
Running over OpenSSH tunnel (running in the background, not as part of the
rsync command line)

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: Invalid File Index error

2006-08-07 Thread Rob Bosch
Good plan... I'll test that now and post my results.

From: Tevfik Karagülle [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 07, 2006 3:36 PM
To: 'Rob Bosch'; rsync@lists.samba.org
Subject: RE: Invalid File Index error

cwRsync version 2.0.8 uses newer versions of cygwin (1.5.20) and openssl
(0.9.8b). Problems might be related to them as well. There is also a new
cygwin version 1.5.21 out there now. I recommend checking whether an upgrade
to 1.5.21 or a downgrade to 1.5.19 helps.

Rgrds Tev

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

RE: Invalid File Index error

2006-08-07 Thread Rob Bosch
Testing with an updated version of cygwin (1.5.21) resolved the problem.
Thanks Tev!!!

Rob

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

File fragmentation

2006-08-06 Thread Rob Bosch
I've been running some tests on files created by rsync and noticing
fragmentation issues. I started the testing because our 5TB array started
performing very slowly, and it appears fragmentation was the culprit. The
test I conducted was straightforward:

1. Copy over a 49GB file. Analyzed with contig (from sysinternals): no
fragments.
2. Ran rsync and the file was recreated normally (rsync options were -ruvi
--times --no-blocking-io --compress-level=9 --bwlimit=0 --stats --delete
--modify-window=2).
3. The resulting file had almost 12000 fragments.

It doesn't really surprise me, since rsync grabs blocks as it needs them for
the new file. I was wondering why rsync doesn't preallocate the space like
copy or other utilities do under Windows. This would ensure you have the
space to create the new file, plus it would reduce fragmentation since rsync
would be writing to a pre-allocated file space.

I am running the cygwin version of rsync provided by Tev at www.itefix.no.
His site is a huge help to those of us running rsync under Windows.

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html