Re: contribute to rsync

2013-05-19 Thread Yan Seiner

garvit sharma wrote:

Hello All,

   My name is Garvit Sharma; I work at Hyderabad Central University (HCU)
in the DCIS. I have started using rsync to sync data, and I find it very
interesting. After using rsync a lot, I decided to contribute by adding an
extra feature. Today, every synchronization has to be done manually by
executing a command on the command line; instead, imagine if rsync
automatically synced whenever it found a change in a file.
To do this we would start a daemon on both the source and the
destination. These daemons would check the last-modification timestamp of
the directory we want to watch, and whenever they see a newer timestamp
they would trigger a sync.
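A rough sketch of that polling loop (paths are hypothetical examples;
assumes GNU stat and a POSIX shell):

    #!/bin/sh
    # Poll the watched directory's modification time; sync when it changes.
    SRC=/data/src/                 # hypothetical source directory
    DEST=user@remote:/data/dst/    # hypothetical destination
    last=
    while sleep 60; do
        now=$(stat -c %Y "$SRC")   # last-modification timestamp (GNU stat)
        if [ "$now" != "$last" ]; then
            rsync -a "$SRC" "$DEST"
            last=$now
        fi
    done

Note that a directory's mtime only changes when entries directly inside it
are added, removed, or renamed, so a real implementation would need to walk
the tree (or use inotify) to catch changes in nested files.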


The idea above is my own, and I have already started reading and trying
to understand the rsync source. Please give your comments; they matter a
lot to me.


Look at rsnapshot and unison

http://www.cis.upenn.edu/~bcpierce/unison/
http://www.rsnapshot.org/


--
Engineer for hire
Contract management, administration, training
http://www.seiner.com/engineer/resume.pdf




using iconv - how?

2011-11-14 Thread Yan Seiner
I am trying to use iconv to copy files from a UTF-8 machine to an iso8859 
machine.  The target is an embedded box with no UTF-8 support.


I've tried both --iconv=utf-8,iso88591 and --iconv=., and the result is 
the same:


[sender] cannot convert filename: Chris Botti _ Michael Bubl\#351 
(Invalid argument)
[sender] cannot convert filename: Stan Getz/Ballads and Bossa Nova/\#311 
Preciso Perdoar.mp3 (Invalid or incomplete multibyte or wide character)
[sender] cannot convert filename: Stan Getz/Compact Jazz/So Dan\#347o 
Samba (I Only Dance S.mp3 (Invalid or incomplete multibyte or wide 
character)
[receiver] cannot convert filename: Unknown Artist/Unknown 
Album/\#345\#265\#220- Believe (PV).mp3 (Invalid or incomplete multibyte 
or wide character)
[generator] cannot convert filename: Unknown Artist/Unknown 
Album/\#345\#265\#220- Believe (PV).mp3 (Invalid or incomplete multibyte 
or wide character)
[receiver] cannot convert filename: Yothu Yindi/Tribal 
Voice/Dj\#303\#244pana.mp3 (Invalid or incomplete multibyte or wide 
character)
[generator] cannot convert filename: Yothu Yindi/Tribal 
Voice/Dj\#303\#244pana.mp3 (Invalid or incomplete multibyte or wide 
character)


And the file is not copied.

Can anyone suggest where I should look?  I built rsync with iconv 
support, the libraries are in place, and it's clearly trying to do 
something and failing.
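For reference, the invocation form I'm using looks like this (host and
paths invented for the example):

    # Sender stores UTF-8 names; the embedded box wants ISO-8859-1.
    rsync -a --iconv=utf-8,iso88591 /music/ root@embedded:/music/

(As I read the docs, --iconv=. just takes the charset from each side's
locale, so it only helps if the locales are set correctly.)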


--
Few people are capable of expressing with equanimity opinions which differ from 
the prejudices of their social environment. Most people are even incapable of 
forming such opinions.
   Albert Einstein



Re: order transfers by file size

2009-04-08 Thread Yan Seiner

On Wed, April 8, 2009 8:19 am, Victoria Muntean wrote:
 Is it possible to have rsync order transfers by file size (smallest
 files first) ?

Ooooh, I like that.  I have a client that has a bad habit of creating a
5GB zipfile that, of course, fails to rsync across 3,000 miles.  Since
it's a zip file, rsync can't diff the old and new versions; it ends up
trying to send the whole thing, and the connection just isn't reliable
enough.  It would be nice to be able to transfer everything else first.

As long as we're on that topic, a limit on the size of files to be
transferred would also be nice.
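Unless I'm misremembering, a per-file cap already exists via --max-size;
a sketch with hypothetical paths:

    # Skip any file larger than 100 MB.
    rsync -a --max-size=100m /local/dir/ remote:/backup/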

--Yan


 Would it be a big patch ?

 Thanks
 Viki




-- 
Yan Seiner, PE

Support my bid for the 4J School Board
http://www.seiner.com



Re: New feature: detect and avoid transfering renamed files

2008-09-21 Thread Yan Seiner

Phil Vandry wrote:

On Tue, 9 Sep 2008 07:49:06 -0700, Wayne Davison wrote:
  

Sorry for the slow reply -- I marked your message for more in-depth
study, and failed to get back to it until now.



That's OK, I've done worse :-(

  

drawbacks:

 - It creates a single (potentially really big) directory of files on
   the receiver for the byinode/* files.


[others deleted]

Indeed, it more or less assumes you have a filesystem which handles this
well. Your other observations are also quite correct.

  

I had been thinking of extending the db patch to add the ability to
track files by checksum in a database.  This would allow a run that used
the DB to be an efficient checksum run (reading the checksums from the
DB, not slowly generating them) and look up matching checksums in the DB

I missed the first part of the discussion, and I may be off-base, but 
you may want to look at unison and how it handles its database.


http://www.cis.upenn.edu/~bcpierce/unison/

Where rsync is stateless, unison is stateful.  You're talking about 
making rsync stateful.
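For flavor, here's a toy version of the checksum-index idea - nothing to
do with the actual db patch, just the concept, with hypothetical paths and
md5sum standing in for rsync's checksums:

    # Build an index mapping checksum -> path over the destination tree.
    find /mirror -type f -exec md5sum {} + > /var/tmp/csum.idx
    # Given a source file, look up an existing copy with the same contents.
    sum=$(md5sum /data/some/file | cut -d' ' -f1)
    grep "^$sum " /var/tmp/csum.idx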


--Yan



rsync failure with error 12

2008-08-11 Thread Yan Seiner
, 0) = 4
bind(4, {sa_family=AF_NETLINK, pid=0, groups=}, 12) = 0
getsockname(4, {sa_family=AF_NETLINK, pid=18279, groups=}, [12]) = 0
time(NULL)  = 1218454958
sendto(4, \24\0\0\0\26\0\1\3\256%\240H\0\0\0\0\0\0\0\0, 20, 0, 
{sa_family=AF_NETLINK, pid=0, groups=}, 12) = 20
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=}, 
msg_iov(1)=[{0\0\0\0\24\0\2\0\256%\240HgG\0\0\2\10\200\376\1\0\0\0\10..., 
4096}], msg_controllen=0, msg_flags=0}, 0) = 212
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=}, 
msg_iov(1)=[{@\0\0\0\24\0\2\0\256%\240HgG\0\0\n\200\200\376\1\0\0\0..., 
4096}], msg_controllen=0, msg_flags=0}, 0) = 128
recvmsg(4, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=}, 
msg_iov(1)=[{\24\0\0\0\3\0\2\0\256%\240HgG\0\0\0\0\0\0\1\0\0\0\24\0..., 
4096}], msg_controllen=0, msg_flags=0}, 0) = 20

close(4)= 0
open(/etc/gai.conf, O_RDONLY) = 4
fstat64(4, {st_mode=S_IFREG|0644, st_size=2349, ...}) = 0
fstat64(4, {st_mode=S_IFREG|0644, st_size=2349, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 
0) = 0xb7f83000

read(4, # Configuration for getaddrinfo(..., 1024) = 1024
read(4, ask   value\n#Add another ..., 1024) = 1024
read(4, \n#and 10.3 in RFC 3484.  The..., 1024) = 301
read(4, , 1024)   = 0
close(4)= 0
munmap(0xb7f83000, 4096)= 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET6, sin6_port=htons(873), 
inet_pton(AF_INET6, ::, sin6_addr), sin6_flowinfo=0, 
sin6_scope_id=0}, 28) = 0
getsockname(4, {sa_family=AF_INET6, sin6_port=htons(35626), 
inet_pton(AF_INET6, ::1, sin6_addr), sin6_flowinfo=0, 
sin6_scope_id=0}, [28]) = 0
connect(4, {sa_family=AF_UNSPEC, 
sa_data=\0\0\0\0\0\0\0\0\0\0\0\0\0\0}, 16) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(873), 
sin_addr=inet_addr(0.0.0.0)}, 16) = 0
getsockname(4, {sa_family=AF_INET6, sin6_port=htons(35626), 
inet_pton(AF_INET6, :::127.0.0.1, sin6_addr), sin6_flowinfo=0, 
sin6_scope_id=0}, [28]) = 0

close(4)= 0
socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 4
setsockopt(4, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(4, SOL_IPV6, IPV6_V6ONLY, [1], 4) = 0
bind(4, {sa_family=AF_INET6, sin6_port=htons(873), inet_pton(AF_INET6, 
::, sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0

socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 5
setsockopt(5, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(5, {sa_family=AF_INET, sin_port=htons(873), 
sin_addr=inet_addr(0.0.0.0)}, 16) = 0

listen(4, 5)= 0
listen(5, 5)= 0
select(6, [4 5], NULL, NULL, NULL)  = 1 (in [5])
accept(5, {sa_family=AF_INET, sin_port=htons(48481), 
sin_addr=inet_addr(10.10.0.1)}, [16]) = 6
rt_sigaction(SIGCHLD, {0x80768c0, [], SA_RESTORER|SA_NOCLDSTOP, 
0xb7e4cfa8}, NULL, 8) = 0
clone(child_stack=0, 
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, 
child_tidptr=0xb7e1c6f8) = 18280

close(6)= 0
select(6, [4 5], NULL, NULL, NULL)  = ? ERESTARTNOHAND (To be restarted)
--- SIGCHLD (Child exited) @ 0 (0) ---
waitpid(-1, NULL, WNOHANG)  = 18280
waitpid(-1, NULL, WNOHANG)  = -1 ECHILD (No child processes)
sigreturn() = ? (mask now [])
select(6, [4 5], NULL, NULL, NULL


--
 o__
 ,/'_  o__
 (_)\(_),/'_o__
Yan Seiner  (_)\(_)   ,/'_ o__
  Personal Trainer  (_)\(_),/'_o__
Professional Engineer (_)\(_)   ,/'_
Who says engineers have to be pencil necked geeks?  (_)\(_)

I worry about my child and the Internet all the time, even though she's too young 
to have logged on yet. Here's what I worry about. I worry that 10 or 15 years from now, 
she will come to me and say 'Daddy, where were you when they took freedom of the press 
away from the Internet?'
--Mike Godwin, Electronic Frontier Foundation 




Re: Help

2008-05-14 Thread Yan Seiner

On Wed, May 14, 2008 12:55 pm, Victor Farah wrote:
 Hello there
   I have a question about rsync, and some options.
 I have 10 servers here that all need data from one machine.  It's a
 LARGE number of files - all pictures and such - and every time I
 rsync the directory over, it takes hours to create the file list.
 I'm fine with that, but can rsync save the file list it generates
 and reuse it for the other machines?
 My script is pretty basic:
 rsync -urvlopg --delete /local/dir rsync://remote1/MODULE
 repeated ten times for remote1 through remote10.  I want to speed this
 up so it builds the file list once and copies the needed updates to
 each machine.  Right now the script generates the file list, copies
 the files to the remote, finishes, then goes to the next line and does
 the same thing again, so it takes hours to create the file list and
 hours to copy.  Is there a way to make it create the file list once
 and copy what each machine needs?

rsync is stateless, so AFAIK it doesn't save the state.

You can look into unison, which is stateful - it saves the state of each
replica between runs.  I believe there is a way to get it to act like
rsync - one-directional transfers.
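As an aside, rsync's batch mode might get you most of the way there; a
sketch with hypothetical paths (I haven't tried replaying a batch straight
at a daemon destination, so you may need to copy the batch file to each
mirror and replay it locally):

    # Compute the update once against the first mirror, recording the changes.
    rsync -urvlopg --delete --write-batch=/tmp/batch /local/dir rsync://remote1/MODULE
    # Replay the same changes against the remaining mirrors
    # (only safe if they were identical to remote1 before the run).
    for n in 2 3 4 5 6 7 8 9 10; do
        rsync --read-batch=/tmp/batch rsync://remote$n/MODULE
    done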

--Yan


-- 
  o__
  ,/'_  o__
  (_)\(_),/'_o__
Yan Seiner  (_)\(_)   ,/'_ o__
   Personal Trainer  (_)\(_),/'_o__
 Professional Engineer (_)\(_)   ,/'_
Who says engineers have to be pencil necked geeks?  (_)\(_)

I worry about my child and the Internet all the time, even though she's
too young to have logged on yet. Here's what I worry about. I worry that
10 or 15 years from now, she will come to me and say 'Daddy, where were
you when they took freedom of the press away from the Internet?'
--Mike Godwin, Electronic Frontier Foundation




Re: unlimited backup revisions?

2001-03-22 Thread Yan Seiner

OK, but what I'm trying to do is keep the last n revisions - NOT the last
n weeks.

So what if I have a file that changes once every 6 weeks?  I want to
keep 4 revisions, so that means I have to go back 6 months.

But now the file next to it gets updated daily

You see my problem?

I want to keep a specific depth of old files, not a time increment.  I
have jobs that remain dormant for years, then re-activate and get a
flurry of activity, then go dormant again.

The problem is that if something happens as a job is going dormant, I
may not realize I've lost that particular file until months later.  I
lost an entire client directory (6 projects and hundreds of files)
exactly this way.  So I want to keep the last n revisions, no matter
how old they are.

--Yan

[EMAIL PROTECTED] wrote:
 
 There isn't one?
 rsync has the --backup-dir= option.
 Keep each set of backups in a different directory, then merge them back into
 the main hierarchy if needed.  Since they're already sifted out, it'd be easy
 to archive them as well.
 If it's a daily run, --backup-dir=$(( $(date +%j) % 28 )) will keep 4 weeks'
 worth, then cycle back over the old ones.
 Of course, you'd probably want to compute that integer separately, and use it
 to delete the one you're about to write into, to keep it clean.
 
 The point is, somebody already anticipated your need, and made it easy to
 script it.
 
 Tim Conway
 [EMAIL PROTECTED]
 303.682.4917
 Philips Semiconductor - Colorado TC
 1880 Industrial Circle
 Suite D
 Longmont, CO 80501
 
 [EMAIL PROTECTED]@[EMAIL PROTECTED] on 03/22/2001 03:17:35 AM
 Sent by:[EMAIL PROTECTED]
 To: [EMAIL PROTECTED]@SMTP
 cc:
 Subject:Re: unlimited backup revisions?
 Classification:
 
 I've been trying to come up with a scripting solution for this for some time, and
 I'm convinced there isn't one.
 
 You definitely want to handle the revisions in the same way as logrotate: keep a
 certain depth, delete the oldest, and renumber all the older ones.
 
 If you want to get really ambitious, you could even include an option for
 compressing the backups to save space.
 
 My biggest concern is a disk failure on the primary during sync (unlikely,
 but I had my main raid fail during sync, and it wiped out both my mirrors).
 A managed backup strategy is the only thing that saved my bacon.
 
 Some tools for managing the backups (listing them - a "what's the revision
 history on this file" kind of query - doing a mass recovery, etc.) would be
 useful.
 
 Just some random thoughts,
 
 --Yan
 
 "Sean J. Schluntz" wrote:
 
   That's what I figured.  Well, I need it for a project so I guess you all
   won't mind if I code it and submit a patch ;)
  
   How does --revisions=XXX sound.  --revisions=0 would be unlimited, any other
   number would be the limiter for the number of revisions.
  
  And when it reaches that number, do you want it to delete old
  revisions, or stop making new revisions?
 
  You would delete the old one as you continue rolling down.
 
  Perhaps something like --backup=numeric would be a better name.  In
  the long term it might be better to handle this with scripting.
 
  I don't see a good scripting solution to this.  The closest I could come up
  with was using --backup-dir and then remerging the tree after the copy,
  and that is a real kludge.  The scripting solution I see would be to clean
  up if you had the backup copies set to unlimited, so you don't run out of
  disk space.
 
  -Sean
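
A fleshed-out sketch of the --backup-dir rotation Tim describes above
(hypothetical paths; assumes GNU date):

    #!/bin/sh
    # Rotate among 28 daily backup slots, clearing each slot before reuse.
    slot=$(( $(date +%j) % 28 ))    # day-of-year modulo 28
    rm -rf "/backups/$slot"         # delete the slot we're about to overwrite
    rsync -a -b --backup-dir="/backups/$slot" /data/ /mirror/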





Re: unlimited backup revisions?

2001-03-22 Thread yan seiner

I glanced at the source code, and it appears pretty trivial (a change to
options.c and a change to backup.c) to implement a user-selected backup
scheme.  All it appears to involve is adding an option to the command
line and an extra if statement in backup.c.

I might give it a shot in my ample spare time...  It would make rsync a
lot more useful to me, anyway...  A welcome change from the 5-hour
public meeting I just had to chair - and I have CPU cycles to spare.

--Yan

Martin Schwenke wrote:
 
  "yan" == yan seiner [EMAIL PROTECTED] writes:
 
 yan I've been trying to come up with a scripting solution for
 yan this for some time, and I'm convinced there isn't one.
 
 yan You definitely want to handle the revisions in the same way
 yan as logrotate: keep a certain depth, delete the oldest, and
 yan renumber all the older ones.
 
 Another option that I've implemented is based on amount of free disk
 space rather than number of incremental backups.  I keep all of the
 (date/time based) incrementals on their own filesystem.  Before
 starting a new backup I check whether the disk usage on the filesystem
 is above a certain threshold and, if it is, I delete the oldest
 incremental.  Repeat until disk usage on the incremental filesystem is
 below the threshold and then do the new backup.
 
 In this way I don't have to guess the number of incremental backups
 that I can afford to keep...  it is based purely on free disk space.
 Naturally, if there's an unusual amount of activity on a particular
 day then this system can also be screwed over...  :-)
 
 Someone else noted that it is more useful to keep a certain number of
 revisions of files, rather than a certain number of days worth of
 backups.  It would be relatively easy to implement this sort of scheme
 on top of date/time-based incrementals.  Use "find" on each
 incremental directory (starting at the oldest) and either keep a map
 (using TDB?) of filenames and information about the various copies
 around the place or use locate to find how many copies there are of
 each file...  or a combination of the 2: the map would say how many
 copies there are, but not where they are; if you're over the threshold
 then use locate to find and remove the oldest ones...
 
 It isn't cheap, but what else does your system have to do on a Sunday
 morning?  :-)
 
 I might implement something like that...
 
 peace  happiness,
 martin
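
A sketch of Martin's free-space pruning loop (hypothetical mount point and
threshold; assumes GNU coreutils and date-named incremental directories):

    #!/bin/sh
    # Prune the oldest incrementals until the backup filesystem is under 90% full.
    FS=/backups
    while [ "$(df --output=pcent "$FS" | tail -1 | tr -dc '0-9')" -ge 90 ]; do
        oldest=$(ls -1t "$FS" | tail -1)   # mtime-sorted; oldest listed last
        rm -rf "${FS:?}/$oldest"
    done
    rsync -a /data/ "$FS/$(date +%Y%m%d)/"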




A cause for those hangups

2000-11-06 Thread Yan Seiner

I've been having some serious problems with both rsync and unison
hanging.

I run both in daemon mode, with a dial-up client to an ISDN server.

The setup worked fine from one machine, but would consistently hang on
another.

I changed the hardware on the problem client completely (from a p5/166
to a celeron 333, swapped modems) and still the problem persisted. 
Basically, at some point, rsync or unison would simply quit sending
data.

The problem turned out to be the hardware - both motherboards in the
problem client would lose serial interrupts when the IDE drive was under
heavy load.

I have now replaced the cheap mobo the celeron lived on with a Tyan
Tiger 100 mobo and last night the sync finally ran for the first time in
a week - for 12 hours straight.

Anyway, lesson learned - I wasted an enormous amount of time testing the
various sync packages, my VPN software, phone lines, ISDN lines, ISDN
modems, etc., etc. - when the problem was cheap mobos.

My recommendation if you're seeing freezes on a serial line and are
using IDE drives: check whether the mobo is dropping interrupts.

--Yan




Re: issues with NT port?

2000-10-10 Thread Yan Seiner

I've had similar problems with rsync.  It is definitely sensitive to
large latencies.

I solved the problem by reducing txqueuelen to 0 on the eth interface and
also on the virtual VPN interface that talks to my router.  This appears
to have fixed the problem.

Various gurus have told me that reducing txqueuelen to 0 is NOT A GOOD
THING, but what the hay - it works for me.
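
For reference, the knob in question (interface names are examples from my
setup):

    # Shrink the transmit queue on the Ethernet and the virtual VPN interfaces.
    ifconfig eth0 txqueuelen 0
    ifconfig vpn0 txqueuelen 0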

--Yan

Jason Haar wrote:
 
 New development.
 
 It affects the NT client as well as the Linux rsync client.  It just looks
 like pure luck that the NT client didn't show the same symptoms sooner...
 
 So to rewrite my original message...
 
 We're trying to use rsync-2.4.5 (client and server) to replicate data over
 a busy 128Kb Intercontinental Frame-Relay link from both NT and Linux
 clients to a NT rsync server. It appears to work for a few connections - e.g.
 
  rsync ntserver::
  rsync -a dir ntserver::share/path
 
 ...but then on the 3rd-5th go, the NT rsync server will crash.
 
 This same NT rsync server is being 100% happily used rsync'ing to other NT
 clients over US-to-US T1 links, so this leads me to believe there is some
 NT IP-stack issue that tickles a bug in rsync only when there are large
 latencies (that's the only real difference I can come up with).
 
 Sound possible? [I say it's a rsync bug as obviously other NT network apps
 work fine over this link - maybe I should say "NT-specific rsync bug" :-)]
 
 --
 Cheers
 
 Jason Haar
 
 Unix/Network Specialist, Trimble NZ
 Phone: +64 3 9635 377 Fax: +64 3 9635 417