Re: pruning old files

2002-10-21 Thread jw schultz
On Tue, Oct 22, 2002 at 08:34:42AM +0200, Michael Salmon wrote:
> On Tuesday, October 22, 2002 09:46:36 AM +0900 Shinichi Maruyama 
> <[EMAIL PROTECTED]> wrote:
> +--
> |
> | jw> In the past i found that using find was quite good for this.
> | jw> Use touch to create a file with a mod_time just before you
> | jw> started the last sync.  Then from inside $src run
> | jw> find .  -newer $touchfile -print|cpio -pdm $dest
> |
| For pruning, how about adding this feature to rsync?
| Is it difficult?
> |
| --exclude-older=SECONDS
| exclude files older than SECONDS ago
| --ignore-older=SECONDS
| ignore any operations on files older than SECONDS ago;
| unlike --exclude-older, these files are not affected
| by --include rules or --delete-excluded
> +-X8
> 
> Wouldn't a better solution be to add a file list option, similar to cpio, 
> to rsync? That would also satisfy those who want complex include and 
> exclude rules. Probably 2 options are required, one for newline terminated 
> names and the other for null terminated names.

A file list option would help for some things, but not for
this particular case.  The file list is too big for rsync to
handle, whereas cpio doesn't have the memory footprint
problem.  In this particular case the rsync algorithm doesn't
buy him anything; in fact it actually hurts performance,
because both source and destination are mounted via NFS.  The
only advantage rsync would have in this situation of a local
sync of NFS mounts is that it would recognize and propagate
file deletions.


-- 

J.W. Schultz            Pegasystems Technologies
email address:  [EMAIL PROTECTED]

Remember Cernan and Schmitt
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: pruning old files

2002-10-21 Thread Michael Salmon
On Tuesday, October 22, 2002 09:46:36 AM +0900 Shinichi Maruyama 
<[EMAIL PROTECTED]> wrote:
+--
|
| jw> In the past i found that using find was quite good for this.
| jw> Use touch to create a file with a mod_time just before you
| jw> started the last sync.  Then from inside $src run
| jw> 	find .  -newer $touchfile -print|cpio -pdm $dest
|
| For pruning, how about adding this feature to rsync?
| Is it difficult?
|
| 	--exclude-older=SECONDS
| 		exclude files older than SECONDS ago
| 	--ignore-older=SECONDS
| 		ignore any operations on files older than SECONDS ago;
| 		unlike --exclude-older, these files are not affected
| 		by --include rules or --delete-excluded
+-X8

Wouldn't a better solution be to add a file list option, similar to 
cpio's, to rsync? That would also satisfy those who want complex include 
and exclude rules. Probably 2 options are required: one for 
newline-terminated names and the other for null-terminated names.
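
For comparison, this is roughly how cpio itself consumes the two list
formats in copy-pass mode (a sketch only; it assumes GNU find and cpio,
with $src, $dest and an absolute $touchfile as placeholders):

	(cd "$src" && find . -newer "$touchfile" -print  | cpio -pdm "$dest")
	(cd "$src" && find . -newer "$touchfile" -print0 | cpio --null -pdm "$dest")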

/Michael
--
This space intentionally left non-blank.
--
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html


Otomasyonda Barkod Zamanı (Time for Barcode in Automation)

2002-10-21 Thread Ozgur Barkod
Özgür Barkod Center

TIME FOR BARCODE IN AUTOMATION

Come and let us show you, too, the conveniences of the BARCODE system.

Özgür 2000, the first BARCODE CENTER in Istanbul, offers:

• 14 years of experience,
• more than 14,000 references,
• sales on installment terms, by trade-in, by Visa, and on consumer credit,
• turnkey warehouse, store, and market automation solutions,
• an in-house emergency service for every software and hardware product
  sold; we are ready to answer any question you may have.

CAMPAIGN: 5 monthly installments at the cash price

DON'T JUST WATCH TECHNOLOGY, USE IT

www.ozgurbarkod.com
[EMAIL PROTECTED]

You can find our company's campaign products on the campaign page of our
web site and buy them on advantageous payment terms.  The site also has
a Flash animation which simulates, in cartoon style, the benefits
barcoding brings to a supermarket setting.

Head office: Fevzipasa cad. No: 271\A Fatih \ Istanbul (opposite the
Edirnekapi Vefa Stadium)
Tel: (212) 635 82 82 pbx - (212) 534 43 50 - (212) 521 23 15
Fax: (212) 534 75 96

To unsubscribe from this e-mail list, click here.
  
  



-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html


Re: pruning old files

2002-10-21 Thread Shinichi Maruyama

bhards> >   --exclude-older=SECONDS
bhards> >   exclude files older than SECONDS ago
bhards> Define "older"?
bhards> Do you mean atime, mtime or ctime?

I think mtime is natural, like traditional find's -newer or -mtime.
Of course it may be good to be able to specify which, if someone needs it.

bhards> >   --ignore-older=SECONDS
bhards> >   ignore any operations on files older than SECONDS ago;
bhards> >   unlike --exclude-older, these files are not affected
bhards> >   by --include rules or --delete-excluded
bhards> Same here. What does "operations" mean?

I mean compare or delete operations.
I think the --exclude-older option would add the files to the exclude
list, just as an --exclude pattern match does, so they would still be
affected by the --include and --delete-excluded options.
--ignore-older files, by contrast, would not be affected: rsync would
simply ignore them and keep them as they are.
Isn't that an effective way to speed up the rsync process?

Think of the case of periodically rsyncing a traditional news spool or
large mailing list archives, where the server and the client have
different expiry policies.

-- 
MARUYAMA Shinichi <[EMAIL PROTECTED]>
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: pruning old files

2002-10-21 Thread Brad Hards
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Tue, 22 Oct 2002 10:46, Shinichi Maruyama wrote:
> jw> In the past i found that using find was quite good for this.
> jw> Use touch to create a file with a mod_time just before you
> jw> started the last sync.  Then from inside $src run
> jw>   find .  -newer $touchfile -print|cpio -pdm $dest
>
> For pruning, how about adding this feature to rsync?
> Is it difficult?
Shouldn't be. rsync already has feeping creaturism, although some 
conservatives have been known to object on principle to adding cool new 
features.

>   --exclude-older=SECONDS
>   exclude files older than SECONDS ago
Define "older"?
Do you mean atime, mtime or ctime?

>   --ignore-older=SECONDS
>   ignore any operations on files older than SECONDS ago;
>   unlike --exclude-older, these files are not affected
>   by --include rules or --delete-excluded
Same here. What does "operations" mean?

Brad
- -- 
http://linux.conf.au. 22-25Jan2003. Perth, Aust. I'm registered. Are you?
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE9tLYXW6pHgIdAuOMRAtkWAJ48aeJJkFswMy6LV+pV2iDhnQdZEQCdESxv
6GSzBILvcWFdLwQmMYswZ6Y=
=il4h
-----END PGP SIGNATURE-----

-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



pruning old files

2002-10-21 Thread Shinichi Maruyama

jw> In the past i found that using find was quite good for this.
jw> Use touch to create a file with a mod_time just before you
jw> started the last sync.  Then from inside $src run
jw> find .  -newer $touchfile -print|cpio -pdm $dest

For pruning, how about adding this feature to rsync?
Is it difficult?

	--exclude-older=SECONDS
		exclude files older than SECONDS ago
	--ignore-older=SECONDS
		ignore any operations on files older than SECONDS ago;
		unlike --exclude-older, these files are not affected
		by --include rules or --delete-excluded
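
In the meantime the effect of --exclude-older can be approximated from
the shell (a rough sketch; find's -mtime counts days rather than
seconds, and since the exclude list is built on the source side, files
that exist only on the destination are not pruned):

	cd "$src" &&
	find . -type f -mtime +30 | sed 's|^\./|/|' > /tmp/older.excl &&
	rsync -a --exclude-from=/tmp/older.excl . "$dest"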

-- 
MARUYAMA Shinichi <[EMAIL PROTECTED]>
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



RE: Any work-around for very large number of files yet?

2002-10-21 Thread Crowder, Mark
JW (and others),
Thanks for the input.  --whole-file did indeed allow it to reach the
failure point faster...
I've been experimenting with find/cpio, and there's probably an answer
there.

Thanks Again,
Mark

-Original Message-
From: jw schultz [mailto:jw@;pegasys.ws]
Sent: Monday, October 21, 2002 4:27 PM
To: [EMAIL PROTECTED]
Subject: Re: Any work-around for very large number of files yet?


On Mon, Oct 21, 2002 at 09:37:45AM -0500, Crowder, Mark wrote:
> Yes, I've read the FAQ, just hoping for a boon...
> 
> I'm in the process of relocating a large amount of data from one nfs server
> to another (Network Appliance filers).  The process I've been using is to
> nfs mount both source and destination to a server (solaris8) and simply use
> rsync -a /source/ /dest .   It works great except for the few that have > 10
> million files.   On these I get the following:
> 
> ERROR: out of memory in make_file
> rsync error: error allocating core memory buffers (code 22) at util.c(232)
> 
> It takes days to resync these after the cutover with tar, rather than the
> few hours it would take with rsync -- this is making for some angry users.
> If anyone has a work-around, I'd very much appreciate it.

Sorry.  If you want to use rsync you'll need to break the
job up into manageable pieces.

If, and only if, mod_times reflect updates (most likely) you
will get better performance in this particular case using
find|cpio.  After it uses the meta-data to pick candidates,
rsync will read both the source and destination files to
generate the checksums.  This means that your changed files
will be pulled in their entirety across the network twice
before even starting to copy them.  --whole-file will
disable that part.  Rsync is at a severe disadvantage when
running on nfs mounts; nfs->nfs is even worse.

In the past I found that using find was quite good for this.
Use touch to create a file with a mod_time just before you
started the last sync.  Then from inside $src run
	find . -newer $touchfile -print|cpio -pdm $dest
Without the -u option, cpio will skip (and warn about) any
files whose mod_times haven't changed, but that is still
faster than transferring the file.

The use of the touchfile is, in my opinion, better than
-mtime and related options, because it can be created as
part of the earlier cycle and it is less prone to
user error.


-- 

J.W. Schultz            Pegasystems Technologies
email address:  [EMAIL PROTECTED]

Remember Cernan and Schmitt
-- 
To unsubscribe or change options:
http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: Any work-around for very large number of files yet?

2002-10-21 Thread jw schultz
On Mon, Oct 21, 2002 at 09:37:45AM -0500, Crowder, Mark wrote:
> Yes, I've read the FAQ, just hoping for a boon...
> 
> I'm in the process of relocating a large amount of data from one nfs server
> to another (Network Appliance filers).  The process I've been using is to
> nfs mount both source and destination to a server (solaris8) and simply use
> rsync -a /source/ /dest .   It works great except for the few that have > 10
> million files.   On these I get the following:
> 
> ERROR: out of memory in make_file
> rsync error: error allocating core memory buffers (code 22) at util.c(232)
> 
> It takes days to resync these after the cutover with tar, rather than the
> few hours it would take with rsync -- this is making for some angry users.
> If anyone has a work-around, I'd very much appreciate it.

Sorry.  If you want to use rsync you'll need to break the
job up into manageable pieces.

If, and only if, mod_times reflect updates (most likely) you
will get better performance in this particular case using
find|cpio.  After it uses the meta-data to pick candidates,
rsync will read both the source and destination files to
generate the checksums.  This means that your changed files
will be pulled in their entirety across the network twice
before even starting to copy them.  --whole-file will
disable that part.  Rsync is at a severe disadvantage when
running on nfs mounts; nfs->nfs is even worse.
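
(For reference, that is just one extra flag on the command line from the
original post:

	rsync -a --whole-file /source/ /dest

which skips the checksum pass and copies changed files outright.)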

In the past I found that using find was quite good for this.
Use touch to create a file with a mod_time just before you
started the last sync.  Then from inside $src run
	find . -newer $touchfile -print|cpio -pdm $dest
Without the -u option, cpio will skip (and warn about) any
files whose mod_times haven't changed, but that is still
faster than transferring the file.

The use of the touchfile is, in my opinion, better than
-mtime and related options, because it can be created as
part of the earlier cycle and it is less prone to
user error.
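
Put together, the whole cycle looks something like this (a sketch only;
the paths are placeholders, and $stamp must already exist from a
previous run or an initial full copy):

	src=/mnt/src dest=/mnt/dest stamp=/var/tmp/last-sync
	touch "$stamp.new"                 # mark the start of this run
	cd "$src" &&
	find . -newer "$stamp" -print | cpio -pdm "$dest" &&
	mv "$stamp.new" "$stamp"           # next run copies only newer files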


-- 

J.W. Schultz            Pegasystems Technologies
email address:  [EMAIL PROTECTED]

Remember Cernan and Schmitt
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



rsync: read error: Connection timed out

2002-10-21 Thread Ryan
I am rsyncing several directories; some of them have over 150,000
files.  I have seen these error messages several times:

rsync: read error: Connection timed out
rsync error: error in rsync protocol data stream (code 12) at io.c(162)
rsync: connection unexpectedly closed (359475 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(150)


I am rsyncing over 100mbit ethernet connections (on different subnets).
There is plenty of disk space on both sides.

Anyone have any idea what the problem is?  I have to re-run the rsync if
this error message pops up; it then gets a bit further syncing up the
directories and breaks again.  Sometimes it takes up to 3 runs of rsync
for it to finally complete.

It didn't start doing this till the list of files got really large.



Ryan

-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: Any work-around for very large number of files yet?

2002-10-21 Thread tim . conway
Mark:  You are S.O.L.  There's been a lot of discussion on the subject, 
and so far the only answer is faster machines with more memory.  For my 
own application I have had to write my own system, which can best be 
described as find, sort, diff, grep, cut, tar, gzip.  It's a bit more 
complicated than that, and the find, sort, diff, grep, and cut are 
implemented in perl code.  It also exploits some assumptions I can make 
about our data concerning file naming, dating, and sizing, and it has no 
replacement for rsync's main magic, the incremental update of a file. 
Nonetheless, a similar approach might do well for you, since chances are 
most of your changes are the addition and removal of files, with changes 
to existing files always entailing a change in size and/or timestamp.
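
The skeleton of that approach, minus the perl and the data-specific
assumptions, looks something like this (a sketch only: it assumes GNU
find and tar, placeholder paths, and filenames without whitespace, and
it only covers additions/changes; removals would use grep '^>'
analogously):

	find /src -type f -printf '%P\t%s\t%T@\n' | sort > /tmp/src.list
	find /dst -type f -printf '%P\t%s\t%T@\n' | sort > /tmp/dst.list
	# lines only in src.list are new or changed files; strip diff's
	# "< " prefix and the size/mtime columns, then ship them with tar
	diff /tmp/src.list /tmp/dst.list | grep '^<' | cut -f1 | cut -c3- |
	    tar -C /src -cf - -T - | tar -C /dst -xpf -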

Tim Conway
[EMAIL PROTECTED] reorder name and reverse domain
303.682.4917 office, 303.921.0301 cell
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, caesupport2 on AIM
"There are some who call me Tim?"




"Crowder, Mark" <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/21/2002 08:37 AM

 
To: [EMAIL PROTECTED]
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject: Any work-around for very large number of files yet?
Classification: 



Yes, I've read the FAQ, just hoping for a boon... 
I'm in the process of relocating a large amount of data from one nfs 
server to another (Network Appliance filers).  The process I've been using 
is to nfs mount both source and destination to a server (solaris8) and 
simply use rsync -a /source/ /dest .   It works great except for the few 
that have > 10 million files.   On these I get the following:
ERROR: out of memory in make_file 
rsync error: error allocating core memory buffers (code 22) at util.c(232) 
It takes days to resync these after the cutover with tar, rather than the 
few hours it would take with rsync -- this is making for some angry users. 
 If anyone has a work-around, I'd very much appreciate it.
Thanks, 
Mark Crowder 
Texas Instruments, KFAB Computer Engineering 
email: [EMAIL PROTECTED] 


-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: Rsync and "ignore nonreadable" and timeout

2002-10-21 Thread tim . conway
All parameters are in parameter/value pairs, joined by '=' characters. 
This is important even for apparently simple assertions, as there is only 
one name for each parameter; i.e. there is no "do not ignore 
nonreadable" or "do not use chroot", but rather "ignore nonreadable = no" 
and "use chroot = no".

ignore nonreadable = yes
timeout = 600
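
In context, a minimal module using both would look like this (a sketch;
the module name and path are made up):

	[archive]
		path = /export/archive
		ignore nonreadable = yes
		timeout = 600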

Tim Conway
[EMAIL PROTECTED] reorder name and reverse domain
303.682.4917 office, 303.921.0301 cell
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, caesupport2 on AIM
"There are some who call me Tim?"




Lachlan Cranswick <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/21/2002 05:35 AM

 
To: [EMAIL PROTECTED]
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject: Rsync and "ignore nonreadable" and timeout
Classification: 




Hi,

Can anyone send me an example config file that makes use of 

   ignore nonreadable
   timeout 600

When I try to put this in a module, rsync seems happy, but nasty
logfile messages appear when a client connects to the server.

Oct 19 20:30:14 4T:sv1 rsyncd[3706636]: params.c:Parameter() - Ignoring badly formed line in configuration file: ignore nonreadable
Oct 19 20:30:14 4T:sv1 rsyncd[3706636]: params.c:Parameter() - Ignoring badly formed line in configuration file: timeout 600

What versions of the client and server of rsync support these two
options?

Cheers,

Lachlan.

---
Lachlan M. D. Cranswick

Collaborative Computational Project No 14 (CCP14)
for Single Crystal and Powder Diffraction
  Birkbeck University of London and Daresbury Synchrotron Laboratory 
Postal Address: CCP14 - School of Crystallography,
Birkbeck College,
Malet Street, Bloomsbury,
WC1E 7HX, London,  UK
Tel: (+44) 020 7631 6850   Fax: (+44) 020 7631 6803
E-mail: [EMAIL PROTECTED]   Room: B091
WWW: http://www.ccp14.ac.uk/

-- 
To unsubscribe or change options: 
http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Any work-around for very large number of files yet?

2002-10-21 Thread Crowder, Mark





Yes, I've read the FAQ, just hoping for a boon...


I'm in the process of relocating a large amount of data from one nfs server to another (Network Appliance filers).  The process I've been using is to nfs mount both source and destination to a server (solaris8) and simply use rsync -a /source/ /dest .   It works great except for the few that have > 10 million files.   On these I get the following:

ERROR: out of memory in make_file
rsync error: error allocating core memory buffers (code 22) at util.c(232)


It takes days to resync these after the cutover with tar, rather than the few hours it would take with rsync -- this is making for some angry users.  If anyone has a work-around, I'd very much appreciate it.

Thanks,
Mark Crowder    
Texas Instruments, KFAB Computer Engineering
email: [EMAIL PROTECTED] 





Re: ERROR: buffer overflow in receive_file_entry

2002-10-21 Thread Craig Barratt
> has anyone seen this error:
> 
> ns1: /acct/peter> rsync ns1.pad.com::acct
> overflow: flags=0xe8 l1=3 l2=20709376 lastname=.
> ERROR: buffer overflow in receive_file_entry
> rsync error: error allocating core memory buffers (code 22) at util.c(238)
> ns1: /acct/peter> 

Either something is wrong with your setup or configuration or this
is a bug.  The packed file list data sent right at the start is
not being decoded correctly.  l1=3 means that 3 bytes of the full
name should be kept, but lastname = "." is just a single character
long.  Also, l2=20709376 looks like ascii, not a small integer.
The flag value 0xe8 may be ok: long file name, same mtime, same
dir, same_uid.
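
(A quick way to stare at the flag bits yourself:

	echo 'obase=2; ibase=16; E8' | bc

prints 11101000; the mapping of the individual bits to flag names is in
the rsync source.)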

It would be great if you could debug this further.  I would first
try to find a small set of files on which you get the error, then
add some debug prints to writefd_unbuffered() to print what the
sender is sending, and to read_unbuffered() to print what the
receiver is reading.  Then look for 0xe8 03 76 93 70 20 in the
output (byte reversed from the error), and see what is a little
before that.

Craig
-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



Re: Path to rsync Binary?

2002-10-21 Thread tim . conway
SunOS 5.7                 Last change: 25 Jan 2002

User Commands                                        rsync(1)

  -e, --rsh=COMMAND   specify rsh replacement
  --rsync-path=PATH   specify path to rsync on the remote machine
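
So from the source machine the command would look something like this
(hypothetical paths and hostname; point --rsync-path at wherever the
binary actually lives on the target):

	rsync -av --rsync-path=/opt/rsync/rsync /usr/local/data/ target:/data/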

Tim Conway
[EMAIL PROTECTED] reorder name and reverse domain
303.682.4917 office, 303.921.0301 cell
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, caesupport2 on AIM
"There are some who call me Tim?"




<[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
10/20/2002 08:57 PM

 
To: [EMAIL PROTECTED]
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject: Path to rsync Binary?
Classification: 



I am using rsync between two Solaris machines.  One has
rsync at /usr/local/bin/rsync and the other
under /opt/rsync.  Is there a way for me to issue the rsync
command from the "source" machine and tell it, as part of
the command, where rsync is on the target?  If not, does
this mean that in order to perform the sync between two
systems I need to have a 1:1 relationship of where the
rsync binary is installed?

Thanks for the help in advance.

Don
-- 
To unsubscribe or change options: 
http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html



-- 
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.tuxedo.org/~esr/faqs/smart-questions.html