Re: cross-platform backup tool Duplicate timestamp date after copying rdiff-backup repository.

2021-12-23 Thread Chris Wilson
Hi Reio,

You need to delete files from the destination that have been removed from
the source, especially the current_mirror file.

Use rsync with --delete to do that.
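A minimal sketch of that clean-up, demonstrated on throw-away directories (the real paths would be your repository copies; `--dry-run` first shows what would be deleted):

```shell
# Demonstration of rsync --delete: files that exist only at the
# destination (like a stale current_mirror marker) are removed so the
# copy matches the source exactly. Add -A for ACLs as in the original
# command if your filesystems support them.
command -v rsync >/dev/null || exit 0   # skip where rsync is unavailable
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/kept-file"
echo stale > "$dst/current_mirror.2021-12-22T00:00:00+02:00.data"
rsync -avh --delete "$src/" "$dst/"
ls "$dst"
```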

Thanks, Chris.

On Thu, 23 Dec 2021 at 11:24, Reio Remma wrote:

> Hello!
>
> I'm migrating my backups from an LVM volume to ZFS dataset, however after
> rsyncing the data over, I'm getting the following error:
>
> $ rdiff-backup --verify backup-zfs/hostname
> Warning, two different times for current mirror found
> Fatal Error: Metadata file
> '/mnt/backup-zfs/hostname/rdiff-backup-data/mirror_metadata.2021-12-23T06:17:32+02:00.diff.gz'
> has a duplicate timestamp date, you might not be able to recover files on
> or earlier than this date. Check the man page on how to clean up your
> repository using the '--allow-duplicate-timestamps' option.
>
> I'm unsure what to make of it or how to avoid it.
>
> I used the following rsync command to copy the data:
>
> rsync -avhA --progress --stats backup/ backup-zfs/
>
> It seems that it breaks when I run rsync again after an initial run and
> data has changed at the source in the meantime.
>
> Thanks!
>
> Reio
>


Re: [rdiff-backup-users] Backup freezing destination computer

2019-06-14 Thread Chris Wilson
It's also possible that rdiff-backup on the server is using too much memory
(or space on /tmp, if that's a ramdisk), causing the system to swap to
death. You could run vmstat on the server during the backup, and when it
freezes, look at the recent statistics (the last few lines before it froze).
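A sketch of that kind of logging (the interval and log location are illustrative, not prescribed):

```shell
# Sample vmstat in the background while the backup runs, so the last
# samples before a freeze survive on disk for inspection afterwards.
command -v vmstat >/dev/null || exit 0   # skip on systems without procps
LOG=$(mktemp)                            # e.g. /var/log/vmstat-backup.log
vmstat 1 3 > "$LOG" &                    # short interval for the demo; 5s is plenty
SAMPLER=$!
# ... run rdiff-backup here ...
wait "$SAMPLER"
tail -n 5 "$LOG"                         # the statistics just before the freeze
```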

Thanks, Chris.

On Fri, 14 Jun 2019 at 20:18, Patrik Dufresne wrote:

> I would really continue digging into the IO performance. The CPU should not
> reach 100% during a backup; it's supposed to mostly wait for the disk IO.
> Most likely, your CPU is doing some extra work that should not take place.
>
> You may also take a look at `iostat` to see if the disks are busy. You may
> also run a benchmark on this system. In the past, I used the "Phoronix Test
> Suite", which provides multiple tests. In my case, one of the tests never
> completed and the server died.
>
> --
> Patrik Dufresne Service Logiciel inc.
> http://www.patrikdufresne.com /
> 514-971-6442
> 130 rue Doris
> St-Colomban, QC J5K 1T9
>
>
> On Fri, Jun 14, 2019 at 2:32 PM Robert Farrington wrote:
>
> > Thanks for your response.  I don't think it's a hardware problem, but
> > here's our hardware setup:
> >
> > Pentium 4 3GHz
> > 2 1TB Western Digital hard drives in Raid-1 config, SATA-2 interface
> > It looks like we are using XFS and LVM.
> >
> > I know it's an old cpu, but the disks have been replaced recently.
> >
> > I can't usually look at the load average when the system freezes because
> > I can't do anything. But I have seen the CPU at 100% during a backup.
> >
> > Bob
> > On Wednesday, June 12, 2019, 8:00:03 PM PDT, Patrik Dufresne <i...@patrikdufresne.com> wrote:
> >
> >
> > It's probably not directly related to rdiff-backup but to the high volume
> > of IO on the server. If you have a monitoring system, could you take a
> > look at the load average? It's probably going up when the backup is
> > running. That tells you the IO capacity of the server is probably not
> > sufficient. It might be related to various factors. Could you provide
> > more technical details regarding your setup? Hardware, RAID config, SATA
> > or SAS, filesystem (ext4?), LVM?
> >
> > As an example: on one of our servers, an LXC on Proxmox running on ZFS, I
> > remember I needed to disable a kernel feature related to huge memory
> > pages. Otherwise, the container would die without apparent reason.
> >
> > --
> > Patrik Dufresne Service Logiciel inc.
> > http://www.patrikdufresne.com /
> > 514-971-6442
> > 130 rue Doris
> > St-Colomban, QC J5K 1T9
> >
> >
> > On Wed, Jun 12, 2019 at 8:49 PM Robert Farrington wrote:
> >
> > I am using rdiff-backup 1.2.8 on CentOS 7, backing up a remote CentOS 7
> > computer.  It occasionally freezes up the destination computer, and I
> > can't log in or ssh in to see what's wrong.  If I leave it logged in and
> > it freezes, I still can't do anything to find out what's wrong.  I have
> > to power cycle to get it back.  Any suggestions on how to fix this?
> >
> > Bob
> >
> > ___
> > rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
> > https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
> > Wiki URL:
> > http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
> >
> >

Re: [rdiff-backup-users] Examples for whole-system backup

2015-04-09 Thread Chris Wilson
Hi all,

I have done bare metal restores with duplicity (same author, different tech). 
My previous job used this as the main emergency recovery mechanism. I believe 
that rdiff-backup would work just as well. You'll need to sort the comms 
between client and server (e.g. SSH keys and firewalls), handle databases and 
other files being modified during the backup, and test it of course!

Taking LVM snapshots is a very different approach, relatively fast and low 
impact but requires a lot of storage and bandwidth (no diffs). In the case of 
bare metal, where you normally only need one backup and often keep large disks 
on the same site (depending on your threat model), it could also be a good 
solution for you.

Cheers, Chris. 

Sent from my iPhone

 On 9 Apr 2015, at 12:41, rhkra...@gmail.com wrote:
 
 Not the OP, but what do you recommend (in the Linux world, please, as that is 
 what I use...)?
 
 On Thursday, April 09, 2015 02:41:44 AM you wrote:
 I'm sure that you could devise some scheme to do a full metal restore
 with rdiff-backup, but in my opinion, it's not the tool for the job.
 
 



Re: [rdiff-backup-users] adding --resume back

2014-09-04 Thread Chris Wilson

Hi all,

On Thu, 4 Sep 2014, Dominic Raferd wrote:

If I had enough space for the LVM snapshot, I would probably rsync the
current data and run rdiff-backup locally on the destination every time
rsync succeeds.

This would provide - in our setup - the same protection as LVM with respect
to broken increments, but also resume a partial session after network
outages and server restarts.


You are facing quite a tough situation! You didn't comment on the idea of 
lengthening the ssh timeouts, but given the severity of the situations you 
have to allow for, maybe this can't help. I should point out that using an 
LVM snapshot should not need nearly as much space as rsync because it only 
has to store the differences between the old rdiff-backup archive and the 
new, and it does not have to persist once the backup is complete. Still, rsync 
is a simpler (and more familiar) solution and surely the lack of disk space 
is cheaper to fix than the value of your time recoding rdiff-backup?


That gives me an interesting idea. Since the rdiff-backup destination is a 
mirror of the source, we can rsync over it. Of course if we did this on a 
real repository it would destroy it, but we could safely do this:


* create a writable LVM snapshot containing the rdiff-backup repository,
* delete the rdiff-backup-data directory from it,
* rsync from the target over the snapshot's rdiff directory,
* run rdiff-backup from the snapshot back to the original location,
* then discard the snapshot.
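Those steps might look roughly like this. A sketch only: the volume group, snapshot size, mount point and paths are all assumptions for illustration, not tested commands:

```shell
# Sketch of the snapshot-and-rsync scheme above: plain LVM + rsync
# orchestration, nothing rdiff-backup-specific.
refresh_via_snapshot() {
    vg=vg0                      # assumed volume group
    lv=backups                  # assumed LV holding the repository
    snap=backups-work           # temporary writable snapshot
    mnt=${SNAP_MNT:-/mnt/backup-snap}
    src=remote:/data/host/      # assumed rsync source
    repo=/srv/backups/host      # assumed rdiff-backup repository

    # 1. Writable snapshot containing the rdiff-backup repository.
    lvcreate --snapshot --size 10G --name "$snap" "/dev/$vg/$lv"
    mkdir -p "$mnt" && mount "/dev/$vg/$snap" "$mnt"
    # 2. Drop the increment store so the snapshot is a plain mirror.
    rm -rf "$mnt/host/rdiff-backup-data"
    # 3. Refresh the mirror; a restartable rsync absorbs network failures.
    rsync -aH --delete "$src" "$mnt/host/"
    # 4. Record the refreshed mirror as a new increment in the repository.
    rdiff-backup "$mnt/host" "$repo"
    # 5. Discard the snapshot.
    umount "$mnt" && lvremove -f "/dev/$vg/$snap"
}
# Only attempt this on a machine that actually has the volume group:
if [ -e /dev/vg0/backups ]; then
    refresh_via_snapshot
fi
```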

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson chris+...@qwirx.com Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind & your software |



Re: [rdiff-backup-users] ignore file access error

2014-03-14 Thread Chris Wilson

Hi all,

On Fri, 14 Mar 2014, Dominic Raferd wrote:

On 13/03/2014 20:23, Martin Mazur wrote:


For a few weeks I have been using rdiff-backup and I am very happy with it.

  From time to time cron sends me errors like this:

UpdateError website/logs/ssl_access.log Updated mirror temp file
/var/data/backup/www/website/logs/rdiff-backup.tmp.213 does not match
source

Since there is not much I can do about this error, I don't want it to be
displayed. Is there a way of doing this?


As Dominic (and Ben Escoto) said, the best way to actually fix this is to 
back up from LVM snapshots or an XFS freeze. Other workarounds include:


* exclude files from your backup that change often,

* or make an unchanging copy somewhere for backup (mysqlhotcopy etc.),

* live with the fact that they will sometimes/often/never be backed up.

Personally I hide these errors by piping the output of rdiff-backup into 
grep:


rdiff-backup \
    --exclude-device-files \
    source dest \
    2>&1 | grep -v "^UpdateError .* Updated mirror temp file .* does not match source"

(the grep expression should be all on one line, not wrapped by my email
client).


You can filter out more specific patterns, such as /var/log, so that you 
don't get caught out when your important documents and virtual machine 
images aren't backed up because your applications are modifying them all 
the time.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson chris+...@qwirx.com Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-21 Thread Chris Wilson

Hi Jeff,

On Wed, 19 Feb 2014, sunfun wrote:

I used the --no-compression argument, but can't seem to find a way to 
avoid the restoration process for the .diff version files, and just 
keep the versioned text files as text files!


Am I missing something, or do I have to go through a restore process just 
to take a peek at the text in two versions of a text file?


I'm afraid you do. Rdiff-backup uses rdiff for storing differences between 
versions. Nothing else. The only way to restore old versions (that I 
know of) is rdiff-backup -r.


If you want to store complete copies of old versions, then I'm afraid you 
need a different tool. Perhaps dirvish?


Cheers, Chris.


Re: [rdiff-backup-users] Moving the backup-location to another machine

2014-01-12 Thread Chris Wilson

Hi Ron,

On Sun, 12 Jan 2014, Ron Leach wrote:

It seems that someone reduced the number of reserved blocks on your old 
system to zero.


That was set by the Debian installer, in fact, but that was xfs, 
not ext4.  This new LV is ext4.


Ok, perhaps XFS does not have such a feature. ext* does, and this explains 
why you have some blocks missing/not usable.


It's an extremely bad idea in my view to run an ext4 filesystem over 
80% full, as it results in increasing fragmentation over time and there 
is no defragmentation tool available.


Very interesting point.  We keep the backup increments indefinitely so, 
other than rdiff-backup housekeeping files, the bulk of the filesystem 
stays in existence.  No deletion, in practice.


That doesn't matter. New files, especially large ones, will be forced to 
squeeze into the increasingly small gaps left in each inode group, and 
thus be fragmented from birth.



I'm not sure if the reverse-diffs change on each backup run


No, new ones are created and the old full image files are deleted whenever 
a file changes. (I think).


Chris, that was an extremely helpful post; it has pointed me towards a 
more critical appraising of suitable filesystems.


Thanks, I'm glad. But to be honest, I don't think there's a filesystem 
that can avoid fragmentation when 99% full except for a pure 
log-structured one, and you'll have other problems there. (Can't delete 
files when the filesystem is full; may not be able to resize either.)


Nor have I ever seen a good reason to avoid ext* for any task except:

* huge numbers of very small files (for which the cure at the time was 
reiserfs, now deprecated I think, so no excuse any more);


* needing live snapshots (which only BSD UFS2, ZFS and the experimental 
BTRFS support, afaik);


* needing write-once integrity (for which you need a log-structured 
filesystem such as nilfs);


* needing to write directly to NAND flash (for which you need yaffs or 
ubifs);


* needing live mirroring to another filesystem, dynamic volume pools 
within a filesystem or in-filesystem RAID (for which ZFS is the only 
option that I know of).


Cheers, Chris.


Re: [rdiff-backup-users] Delay between EndTime and process end - followup

2013-08-12 Thread Chris Wilson

Hi Laurent,

On Mon, 12 Aug 2013, Laurent De Buyst wrote:

Two weeks ago I sent a mail to this list asking about some strange 
delays I was seeing between the EndTime as logged by rdiff-backup and 
the actual end of the running process.


Since then, I've been digging deeper and I'd like to ask once more if 
anyone can tell me what's going on and how I might be able to 'fix' 
this.


See, I ran my backups using -v9 for a few days and this has shown in 
greater detail when the delay happens.


2013/08/12 06:00:56   Mon Aug 12 05:56:03 2013  Copying inc attrs from ... to ...
2013/08/12 06:00:56   Mon Aug 12 05:56:03 2013  Setting time of ... to 1376107515
2013/08/12 06:00:56   Mon Aug 12 05:57:36 2013  Renaming ... to .../backup/home/domino/data/webadmin.ntf
...
2013/08/12 06:00:56   Mon Aug 12 06:00:56 2013  Setting time of .../backup/tmp to 1376245594
2013/08/12 06:00:56   Mon Aug 12 06:00:56 2013  Copying attributes from () to .../backup
2013/08/12 07:17:03   Mon Aug 12 06:00:56 2013  Setting time of .../backup to 1371199946
2013/08/12 07:17:03   Mon Aug 12 06:01:07 2013  Touching .../backup/rdiff-backup-data/extended_attributes.2013-08-12T00:53:28+02:00.snapshot
2013/08/12 07:17:03   Mon Aug 12 06:02:41 2013  Touching .../backup/rdiff-backup-data/access_control_lists.2013-08-12T00:53:28+02:00.snapshot
2013/08/12 07:17:03   Mon Aug 12 06:03:26 2013  Writing mirror_metadata diff
2013/08/12 07:17:03   Mon Aug 12 06:04:00 2013  Deleting .../backup/rdiff-backup-data/mirror_metadata.2013-08-11T00:55:43+02:00.snapshot.gz
2013/08/12 07:17:03   --[ Session statistics ]--

[...]

2013/08/12 07:17:03   --
2013/08/12 07:17:03   Mon Aug 12 07:17:02 2013  Deleting .../backup/rdiff-backup-data/current_mirror.2013-08-11T00:55:43+02:00.data



Now, the first timestamp comes from the wrapper I'm using, while the second 
ones are from rdiff-backup.

The first thing I've noticed even before is that output from 
rdiff-backup seems to come in bursts. You can see that it's doing things 
at 05:56 that only get logged at 06:00. This seems to be normal 
operational mode, I assume there's a cache involved somewhere.


Depending on how your wrapper runs rdiff-backup, if you are feeding the 
results into a pipe (for example, rdiff-backup | wrapper-script) then they 
will be buffered by the OS. It's possible to turn that buffering off if 
you need to. But I would say, just don't trust the timestamps added by 
your wrapper script. The timestamps from rdiff-backup contain the time 
when the message was generated.
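If the wrapper reads rdiff-backup's output through a pipe, the producer's buffering can be switched to line mode with stdbuf (GNU coreutils). A stand-in producer shows the idea; with the real tool the pipeline would be `stdbuf -oL rdiff-backup ... | wrapper`:

```shell
# stdbuf -oL forces line buffering, so each message reaches the
# timestamping wrapper as soon as it is printed rather than in bursts.
command -v stdbuf >/dev/null || exit 0   # skip without GNU coreutils
out=$(mktemp)
stdbuf -oL sh -c 'echo starting; echo done' |
while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y/%m/%d %H:%M:%S')" "$line"
done > "$out"
cat "$out"
```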


Bearing that in mind, the delay actually seems to come here:

2013/08/12 07:17:03   Mon Aug 12 06:04:00 2013  Deleting .../backup/rdiff-backup-data/mirror_metadata.2013-08-11T00:55:43+02:00.snapshot.gz

2013/08/12 07:17:03   --[ Session statistics ]--

[...]

2013/08/12 07:17:03   --
2013/08/12 07:17:03   Mon Aug 12 07:17:02 2013  Deleting .../backup/rdiff-backup-data/current_mirror.2013-08-11T00:55:43+02:00.data

Since I doubt that the delete operations take a long time (unless you're 
running on a weird distributed filesystem, or a very heavily loaded 
fileserver, in which case all bets are off), I'd say the delay is in 
calculating the session statistics.


I don't know quite what it does, but scanning the code should help you, or 
strace()ing the process while it's in this delaying mode. It might be 
stat()ing 48,000 files. It shouldn't take hours to do that, but if it's 
over a network, with high latency, it might do.


You could run rdiff-backup manually without any pipes, so that you can see 
the output as it's generated and know when deleting snapshot.gz happens. 
After that, it's calculating session statistics. Then attach strace to the 
process and see what it's doing and which operations take a long time.


And as long as I don't know exactly what it is, I can't really get to 
work on this because I don't know if I need more ram, more CPU power, 
more CPU speed, faster disks, ...


You could look at which resource is most heavily used when the server is 
running the backup jobs. top can help with that. Is the CPU at 100%? Is it 
much less than 100%, but you have a high iowait time? Do you have a lot of 
swap usage? Does the swap usage grow while the backups are running? Does 
vmstat show more blocks in/out or swap in/out while the backups are 
running?


Cheers, Chris.

Re: [rdiff-backup-users] Socket error: AF_UNIX path too long

2012-11-13 Thread Chris Wilson

Hi all,

On Tue, 13 Nov 2012, GabrielNar wrote:


This is a frequent problem that generally occurs when the directory path is
longer than 255 characters (including spaces).
I had the same problem and finally I found a solution:
Long Path Tool
http://PathTooDeep.com
I hope that it will help you!


I see absolutely no value in backing up sockets, so I recommend you use 
the --exclude-sockets option instead.


Cheers, Chris.


Re: [rdiff-backup-users] Backup not identifying changed file, even though hash is different

2011-03-24 Thread Chris Wilson

Hi Scott,

On Thu, 24 Mar 2011, Scott Jilek wrote:

Unfortunately, I can't just touch the file because it's in use and locked by 
the truecrypt process.


In that case, rdiff-backup probably won't be able to open it to back it up 
either? You'll probably get an error message that the file can't be opened 
when you try to back it up? And even if you could back it up, you'd 
probably get an inconsistent and hence useless/corrupt backup if truecrypt 
is really writing to the encrypted filesystem while a backup or rsync is 
in progress.


I could try to install one of the windows rsync variants in this case, 
but the whole idea of a practical backup process to me is getting 
multiple snapshots in time, whereas rsync only gives me efficient 
mirroring.  I'm not too keen on mirroring as a backup strategy, because 
corrupted files completely destroy the usefulness of mirroring.  If it's 
bad in the source, it becomes bad in the singular backup as soon as the 
backup process runs and then you have no recovery option.  With Rdiff, I 
can go back in time to just before the corruption occurred and restore 
the file to the last known working time.  As far as I'm concerned, simple 
mirroring is not a safe method of backup.


You could rsync the truecrypt file to somewhere else, if that works, and 
then back it up with a separate rdiff-backup command?


Or alternatively create a VSS snapshot and back that up? But Truecrypt 
would probably need to be patched to be a VSS writer, or to not 
reset the timestamp when it writes to the file, because you probably 
can't change the timestamp in the VSS snapshot, it being a snapshot and 
hence read-only.


Cheers, Chris.


[rdiff-backup-users] Wasted bandwidth and accounting for it

2010-05-12 Thread Chris Wilson

Hi all,

I have been a pretty happy user of rdiff-backup for many years. Currently 
all servers that I control use rdiff-backup 1.0.x for local and offsite 
backups. The reason for sticking with an ancient version is simply that I 
cannout upgrade my backup servers and all clients at the same time, and 
cross-version compatibility is difficult.


I recently discovered that one of my offsite hosts had been backing up an 
increasing amount of data each day:


  http://i41.tinypic.com/2qajtyx.png

(this host is shown in dark green). When I investigated, the session 
statistics showed that the change in the destination was small:


$ sudo cat rdiff-backup-data/session_statistics.2010-05-07T00:17:43+01:00.data

StartTime 1273187863.00 (Fri May  7 00:17:43 2010)
EndTime 1273229597.92 (Fri May  7 11:53:17 2010)
ElapsedTime 41734.92 (11 hours 35 minutes 34.92 seconds)
SourceFiles 144698
SourceFileSize 11994143167 (11.2 GB)
MirrorFiles 144697
MirrorFileSize 12043475927 (11.2 GB)
NewFiles 5
NewFileSize 12542425 (12.0 MB)
DeletedFiles 4
DeletedFileSize 61886561 (59.0 MB)
ChangedFiles 74
ChangedSourceSize 14252455 (13.6 MB)
ChangedMirrorSize 14241079 (13.6 MB)
IncrementFiles 84
IncrementFileSize 8256201 (7.87 MB)
TotalDestinationSizeChange -41076559 (-39.2 MB)
Errors 0

Yet this backup took 11.5 hours to finish, and in fact transferred 
2.6 GB of data!


I looked into the error log, and this gave me the clue that some huge 
files (e.g. 2GB Argus packet logs) were being transferred and then 
discarded because they were still changing on the sending side.


First of all, the files transferred and then discarded are not logged 
anywhere in the session statistics. I think it would be useful to track 
this as a metric. Could I request this as a feature, if 1.2.x doesn't 
already do it? (we will upgrade eventually).


I think in some cases, for example log files, I would prefer to keep the 
corrupted copy as part of the snapshot, as I know that the file will 
just be growing and I'd rather not discard and transfer it again every 
time. Could I request this as a feature for 1.2.x as well?


Perhaps it would even be possible, if the checksum fails, to compare the 
checksums of the first X bytes (the recorded length when the transfer 
started), and if these match, to truncate the file on the destination to 
that length? Could I request that as another feature for 1.2.x?
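The prefix-comparison idea can be sketched on demo files. To be clear, rdiff-backup does not do this today; LEN stands for the length recorded when the transfer began:

```shell
# If the first LEN bytes of the still-growing source still match the
# transferred copy, keep that consistent prefix instead of discarding
# the whole transfer. Demo files stand in for the real ones.
src=$(mktemp); dst=$(mktemp)
LEN=14                                            # recorded at transfer start
printf 'stable-prefix-SRC-kept-growing' > "$src"  # source changed during transfer
printf 'stable-prefix-DST-raced-tail'   > "$dst"  # copy with an inconsistent tail
src_sum=$(head -c "$LEN" "$src" | sha256sum | cut -d' ' -f1)
dst_sum=$(head -c "$LEN" "$dst" | sha256sum | cut -d' ' -f1)
if [ "$src_sum" = "$dst_sum" ]; then
    truncate -s "$LEN" "$dst"     # keep the consistent prefix as the backup
fi
```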


Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |



Re: [rdiff-backup-users] how to shrink/purge/truncate the metadata file (and the file_statistics file)?

2010-01-05 Thread Chris Wilson

Hi Jeff,

On Sun, 3 Jan 2010, =JeffH wrote:


 My rdiff-backup-data/mirror_metadata.*.snapshot.gz file seems to grow
 without bound, even though I have periodically used --remove-older-than to
 remove a number of prior increments. I.e. I expected --remove-older-than to
 shrink/purge/truncate the metadata file (as well as the file_statistics
 file), but this seems to not be the case. So eventually my backup disk fills
 up.


I think there may be a misunderstanding here. Each mirror_metadata file 
contains the metadata for just one backup: either a complete snapshot, or in 
some cases with more recent versions, a patch against the previous one.


The file is not modified in any way after the snapshot is created (afaik), and 
is deleted when the snapshot is deleted. Therefore it cannot grow without 
bound.


If you mean that each mirror_metadata file is larger than the previous one, 
this must mean that you are backing up more files each time. You could try 
diffing the last two to see why it is growing.
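One way to do that diff, shown here on throw-away files (in a real repository the snapshots live under rdiff-backup-data/):

```shell
# Decompress the two newest mirror_metadata snapshots and diff them to
# see which entries account for the growth.
command -v zcat >/dev/null || exit 0    # skip without gzip tools
meta=$(mktemp -d); cd "$meta"
printf 'File foo\nFile bar\n' | gzip \
    > mirror_metadata.2010-01-01T00:00:00Z.snapshot.gz
printf 'File foo\nFile bar\nFile baz\n' | gzip \
    > mirror_metadata.2010-01-02T00:00:00Z.snapshot.gz
set -- $(ls -1 mirror_metadata.*.snapshot.gz | tail -n 2)
zcat "$1" > old.txt
zcat "$2" > new.txt
diff old.txt new.txt || true      # lines starting with '>' are new entries
```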


In any case the file should not be huge. For example, one of my backups 
currently covers about 65,000 files, and each mirror_metadata is about 800kb 
compressed.



 If not, the only way I know to address the issue is to delete everything on
 my backup disk when it gets near full, and then start all over with an
 effectively full initial backup. This seems suboptimal for various reasons.


If it's really a problem then you could just delete the rdiff-backup-data 
directory and backup with --force next time, which removes all older increments 
but does avoid the need to re-transfer all the data.


Cheers, Chris.




Re: [rdiff-backup-users] win32, multiple drive source , how to? cygdrive ?

2009-12-18 Thread Chris Wilson

Hi Davy,

On Thu, 17 Dec 2009, Davy Stoffel wrote:

I would like to set up a single rdiff-backup instance that backs up two 
sources on two different drives (Windows).


For example, using C:\myshare1 and D:\myshare2 as include statements.


I don't think you can do that with rdiff-backup for Windows/win32. Try 
making two backups to different destinations. It will probably work better 
in the long run anyway. Or install rdiff-backup using cygwin python.
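For instance (destination paths are assumptions, and this is only meaningful on the Windows host itself):

```shell
# One rdiff-backup repository per source drive (paths illustrative).
backup_drive() { rdiff-backup "$1" "$2"; }
if [ -d "C:/myshare1" ]; then      # only on the Windows box
    backup_drive "C:/myshare1" "E:/backups/myshare1"
    backup_drive "D:/myshare2" "E:/backups/myshare2"
fi
```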


Cheers, Chris.


Re: [rdiff-backup-users] option to preserve every nth backup older than a certain age

2009-09-17 Thread Chris Wilson

On Wed, 16 Sep 2009, Dominic Raferd wrote:


Dean Cording wrote:

 What would be more useful is if rdiff-backup could list every backup in
 which a specified file changed, so you didn't need to go searching through
 numerous daily backups to find when it changed.


rdiff-backup --list-increments /mnt/backup/path/to/file

Cheers, Chris.


Re: [rdiff-backup-users] Unclear part in the --remove-older-than section of man rdiff-backup

2009-09-15 Thread Chris Wilson

Hi Cybertinus,

On Wed, 16 Sep 2009, Cybertinus wrote:

I'm reading the manual page of rdiff-backup and I don't understand the 
following part of the manual. It is the last paragraph of the option 
--remove-older-than:


Note that snapshots of deleted files are covered by this operation. 
Thus if you deleted a file two weeks ago, backed up immediately 
afterwards, and then ran rdiff-backup with --remove-older-than 10D 
today, no trace of that file would remain.  Finally, file selection 
options such as --include and --exclude don't affect 
--remove-older-than.


Does this mean that when I delete a file by accident, but I've run
rdiff-backup --remove-older-than in the meantime, I can't get it back
from my backup? Not even from a snapshot that was created before I
deleted that file (but too new to be deleted by --remove-older-than)?


My reading of the passage you quoted is that it describes the behaviour 
that you and I both expect, not the bad behaviour that you describe above.


The sentence in question is:

If you deleted a file two weeks ago, backed up immediately afterwards, 
and then ran rdiff-backup with --remove-older-than 10D today, no trace of 
that file would remain.


I think this sentence counts as explanatory rather than as a definition of 
behaviour, so arguably it is superfluous. It seems clear enough to me, but 
if it's confusing to anyone then I think it ought to be rewritten to 
improve it. Perhaps this version would be easier to understand:


The --remove-older-than option will delete all snapshots older than the 
specified date, including all changes and deleted files contained in them. 
If a file was deleted or corrupted before one of those snapshots was made, 
it will no longer be recoverable.


Does that help you? Does anyone else think it is an improvement, or wish 
to suggest a better version?
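For anyone wondering how the cutoff is actually computed, the man page's interval strings (10D, 3W2D, and so on) boil down to simple arithmetic. Here is a rough sketch in Python; this is my own illustration of the idea, not rdiff-backup's actual code:

```python
import re
import time

# Seconds per interval suffix from the rdiff-backup man page; months
# and years use the documented approximations (30 and 365 days).
UNITS = {"s": 1, "m": 60, "h": 3600,
         "D": 86400, "W": 7 * 86400, "M": 30 * 86400, "Y": 365 * 86400}

def interval_to_seconds(spec):
    """Convert an interval string like '10D' or '3W2D' to seconds."""
    parts = re.findall(r"(\d+)([smhDWMY])", spec)
    if not parts:
        raise ValueError("bad interval: %r" % spec)
    return sum(int(n) * UNITS[u] for n, u in parts)

def increments_to_remove(increment_times, spec, now=None):
    """Timestamps strictly older than now minus the interval."""
    cutoff = (time.time() if now is None else now) - interval_to_seconds(spec)
    return [t for t in increment_times if t < cutoff]
```

Everything older than the cutoff goes, regardless of include/exclude options, which matches the manual's final sentence.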


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson chris+...@qwirx.com Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Re: Re: Will these errors clear on the next pass?

2009-09-02 Thread Chris Wilson

Hi Chris,

On Wed, 2 Sep 2009, Chris G wrote:

/etc/cron.daily/backup: UpdateError nfs/rpc_pipefs/nfs/clnt6/info 
Updated mirror temp file 
/bak/var/lib/nfs/rpc_pipefs/nfs/clnt6/rdiff-backup.tmp.52 does not 
match source UpdateError nfs/rpc_pipefs/nfs/clnt7/info



... and it doesn't go away by itself, I have *exactly* the same errors
today.


On looking harder at it I realised *why* it won't go away.  The "does 
not match" errors are coming from /var/lib/nfs/rpc_pipefs/nfs 
directories and I'm backing up to, guess what, a remote NFS-mounted 
drive, so the files in question really *are* changing during the backup.

I want to back up /var/lib so this is a problem.

All I need is a way to make the error messages go away so a successful
backup is silent.


Try --exclude-other-filesystems or --exclude /var/lib/nfs/rpc_pipefs

There's no reason to back up an rpc_pipefs because you wouldn't want to 
restore it. Same for /proc, /sys, etc.
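For what it's worth, the effect of such excludes is essentially prefix and pattern matching during the scan. A toy sketch of that idea follows; it is not rdiff-backup's real selection code, which also honours option ordering, includes that override excludes, and ** globs:

```python
import fnmatch

def is_excluded(path, excludes):
    """True if path matches, or lies beneath, any exclude pattern.

    Toy version of scan-time file selection: each candidate path is
    tested against the patterns, and subtrees of an excluded
    directory are skipped too.
    """
    for pat in excludes:
        if fnmatch.fnmatch(path, pat):
            return True
        if path.startswith(pat.rstrip("/") + "/"):
            return True
    return False

excludes = ["/var/lib/nfs/rpc_pipefs", "/proc", "/sys"]
skipped = is_excluded("/var/lib/nfs/rpc_pipefs/nfs/clnt6/info", excludes)
kept = is_excluded("/var/lib/mysql", excludes)
```

So excluding /var/lib/nfs/rpc_pipefs silences the errors while the rest of /var/lib is still backed up.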


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] errors installing 1.0.5

2009-02-27 Thread Chris Wilson

Hi Bob,

On Fri, 27 Feb 2009, Bob Mead wrote:


Hello all:
I am having trouble installing v1.0.5 on an old GNU/Linux box: 
when I run 'python setup.py install', I get the output below (with errors).

...
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -Wstrict-prototypes 
-fPIC -I/usr/include/python2.4 -c cmodule.c -o 
build/temp.linux-i686-2.4/cmodule.o

cmodule.c:24:20: error: Python.h: No such file or directory


You need to install the python-dev or python-devel package.

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] errors installing 1.0.5

2009-02-27 Thread Chris Wilson

Hi Bob,

On Fri, 27 Feb 2009, Bob Mead wrote:


 You need to install the python-dev or python-devel package.


Ugh... that brought a whole new list of packages (and problems :-\). It seems 
that my version of GNU/Linux (Feisty era) doesn't have/can't find the 
repositories for several of the required packages/updates/upgrades.


Feisty is no longer supported, it's possible that the repositories have 
been withdrawn by now.


I tried apt's suggestion of adding the --fix-missing option to no avail. 
Is there another way to make this work? I *really* don't want to rebuild 
this server (actually, I'm not sure I *can* rebuild this server...).


Perils of using an unsupported distribution :( You could try to install 
python from source, but your machine is fundamentally busted without 
repositories to download software from. You can still get the ISOs from 
http://old-releases.ubuntu.com/releases/feisty/, or change your apt config 
to use http://old-releases.ubuntu.com/ubuntu/dists/.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] User/Group Permissions Question - IS THIS A BUG?

2009-02-25 Thread Chris Wilson

Hi Francisco,

On Wed, 25 Feb 2009, Francisco M. Marzoa Alonso wrote:


Chris Wilson escribió:

You need to use --preserve-numerical-ids on the restore command to
restore with the correct ownership.

That does not work either:

fmmar...@durruti:~$ ls -la test
total 8
drwxr-xr-x  2 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 .
drwxr-xr-x 78 fmmarzoa fmmarzoa 4096 2009-02-25 09:54 ..
-rw-r--r--  1 fmmarzoa devel   0 2009-02-23 20:07 testfile
fmmar...@durruti:~$ rdiff-backup --preserve-numerical-ids test test_backup
fmmar...@durruti:~$ ls -la test_backup
total 12
drwxr-xr-x  3 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 .
drwxr-xr-x 79 fmmarzoa fmmarzoa 4096 2009-02-25 09:56 ..
drwx--  3 fmmarzoa fmmarzoa 4096 2009-02-25 09:56 rdiff-backup-data
-rw-r--r--  1 fmmarzoa fmmarzoa0 2009-02-23 20:07 testfile
fmmar...@durruti:~$ rm -Rf test
fmmar...@durruti:~$ rdiff-backup -r now --preserve-numerical-ids
test_backup test
fmmar...@durruti:~$ ls -al test
total 8
drwxr-xr-x  2 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 .
drwxr-xr-x 79 fmmarzoa fmmarzoa 4096 2009-02-25 09:57 ..
-rw-r--r--  1 fmmarzoa fmmarzoa0 2009-02-23 20:07 testfile
fmmar...@durruti:~$

:-(


As Damon Timm pointed out, preserving any kind of ownership does not work 
when you run the restore as a non-root user. Unix does not allow non-root 
users (including rdiff-backup when run on your behalf) to change the owner 
of files. They will always be owned by the user that you are running as. 
If you run the restore as root, then ownership should be preserved either 
by name or by UID.
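The rule fits in a couple of lines. This is my own illustration of the underlying Unix semantics, not anything from rdiff-backup itself:

```python
def restored_owner(stored_uid, stored_gid, euid, egid):
    """Ownership a restored file ends up with: the stored ids when
    the restore runs as root (euid 0), otherwise the restoring
    user's own ids, because only root may chown files to other
    users."""
    if euid == 0:
        return stored_uid, stored_gid
    return euid, egid
```

That is why the test above, run as fmmarzoa, produced files owned by fmmarzoa:fmmarzoa no matter what was stored.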


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |

Re: [rdiff-backup-users] User/Group Permissions Question - IS THIS A BUG?

2009-02-24 Thread Chris Wilson

Hi Francisco,

On Mon, 23 Feb 2009, Francisco M. Marzoa Alonso wrote:


It seems like the --preserve-numerical-ids option isn't working right in
rdiff-backup, or perhaps we're misunderstanding how it works, because I
can reproduce the problem you describe: it's not preserving uid and gid
as it should, according to 'man rdiff-backup'. Even using rdiff-backup
-r now it does NOT recover the original uid:gid ownership. See this:

fmmar...@durruti:~$ ls -al test
total 8
drwxr-xr-x  2 fmmarzoa devel4096 2009-02-23 20:07 .
drwxr-xr-x 77 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 ..
-rw-r--r--  1 fmmarzoa devel   0 2009-02-23 20:07 testfile

I have a directory called test with uid:gid of fmmarzoa:devel, and a testfile 
within the directory with the same ownership. Now I back it up with:

fmmar...@durruti:~$ rdiff-backup --preserve-numerical-ids test test_backup
fmmar...@durruti:~$ ls -la test_backup
total 12
drwxr-xr-x  3 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 .
drwxr-xr-x 78 fmmarzoa fmmarzoa 4096 2009-02-23 20:10 ..
drwx--  3 fmmarzoa fmmarzoa 4096 2009-02-23 20:10 rdiff-backup-data
-rw-r--r--  1 fmmarzoa fmmarzoa0 2009-02-23 20:07 testfile

So in fact, ownership information has been lost. Obviously restoring by 
just doing a cp of test_backup to test will not recover the ownership data, 
but the restore command doesn't either:


fmmar...@durruti:~$ rdiff-backup -r now test_backup test
fmmar...@durruti:~$ ls -la test
total 8
drwxr-xr-x  2 fmmarzoa fmmarzoa 4096 2009-02-23 20:07 .
drwxr-xr-x 78 fmmarzoa fmmarzoa 4096 2009-02-23 20:12 ..
-rw-r--r--  1 fmmarzoa fmmarzoa0 2009-02-23 20:07 testfile


You need to use --preserve-numerical-ids on the restore command to restore 
with the correct ownership. As far as I know, rdiff-backup stores both 
uid/gid and corresponding names in the metadata file, so it can restore 
using numeric or named UIDs. I guess that using named UIDs in the 
repository instead of numeric ones is a bug.
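If the metadata really does carry both the numeric ids and the names, the restore-time choice presumably looks something like this sketch. It is illustrative only: the option name is real, but the function and the passwd dictionary are hypothetical stand-ins.

```python
def target_uid(stored_uid, stored_name, dest_passwd, numerical_ids):
    """Pick the uid to restore with: the stored number under
    --preserve-numerical-ids, otherwise look the stored name up on
    the destination, falling back to the number if the name is
    unknown there."""
    if numerical_ids:
        return stored_uid
    return dest_passwd.get(stored_name, stored_uid)

# dest_passwd stands in for the destination's user database.
dest_passwd = {"root": 0, "devel": 3000}
```

Either way, actually applying the chosen uid still requires the restore to run as root, as discussed elsewhere in this thread.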


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] How to manage with old[ish] versions?

2009-02-19 Thread Chris Wilson

Hi Chris,

On Thu, 19 Feb 2009, Chris G wrote:

I am using rdiff-backup from Xubuntu 8.10; the version available in the 
Ubuntu repositories is 1.1.16.  This makes life rather difficult when 
trying to back up to and from systems where I'm building rdiff-backup 
myself.


OK, I can do backups between different versions but it gives warning 
messages and also different versions are sometimes incompatible.


Is there any way to tell a newer version to work compatibly (and 
silently) with older versions?


I find the simplest way is to ignore the packages and install exactly the 
version that I want from source. Works on any distro :)


I find the backwards-incompatibility to be one of the biggest bugbears of 
rdiff-backup and boxbackup (which I maintain). I'm still running 
rdiff-backup 1.0.x on all my machines for this reason.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] rdiff-backup on WinXP, how to set up batch file

2009-01-25 Thread Chris Wilson

Hi Pieter,

On Sun, 25 Jan 2009, Pieter Donche wrote:


- downloaded win32 binary rdiff-backup.1.1.17-cvs.win32-py2.5.exe from
your http://solutionsfirst.com.au/~dave/backup site

[...]

But I have problems with the final step: launching rdiff-backup.exe
the web page states:
rdiff-backup.exe -v5 --no-hard-links --print-statistics --remote-schema 
"plink.exe -i privatekey.ppk %s rdiff-backup --server" c:\sourcedirectory 
usern...@servername.domainname::/destinationdirectory


Nowhere on my WinXP system can I find a file rdiff-backup.exe.

How must rdiff-backup be called, then?


It's the one that you downloaded, rdiff-backup.1.1.17-cvs.win32-py2.5.exe. 
You could rename that to rdiff-backup.exe.


What should be the contents of a batch-file (to be executed at startup 
of the WinXP system) doing the above ?


Exactly the same as above, i.e. doing the same things that you would do 
from the command line to run rdiff-backup manually. That's all a batch 
file is, a series of commands to run.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Easy Way to Find Lost Files ?

2009-01-17 Thread Chris Wilson

Hi Damon,

On Sat, 17 Jan 2009, Damon Timm wrote:

What is the best way to go looking for lost files with rdiff-backup? 
That is, if I backup a file, then delete the file, then unknown 
quantities of time pass (all the while running nightly backups), is 
there an easy way to find the file I deleted X number of days ago? 
Especially if I am not one hundred percent sure of its name and/or 
location?


With rdiff-backup --list-changed-since, you can get a list of all files 
changed within the last 6 weeks, 1 year, etc. with the date and the type 
of change for each one. It is quite slow, however.


Andrew, if I strace this process, it's doing a LOT of futex calls for no 
apparent reason (no threads or sub-processes involved). I imagine it could 
be sped up by at least ten times by removing these. Worth investigating?


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Easy Way to Find Lost Files ?

2009-01-17 Thread Chris Wilson

Hi Andrew,

On Sat, 17 Jan 2009, Andrew Ferguson wrote:

The reason it hasn't been implemented is that it would require either 
figuring out how much space is needed beforehand (which requires 
scanning the whole source repository in advance, something which 
rdiff-backup is currently not setup for; however implementing that 
functionality would allow various other requested features to be 
developed), or rdiff-backup would have to detect that out-of-disk-space 
event, reverse the current session, delete an increment, and start over 
(that, of course, hits a horrible case when the current backup wants to 
add a, say, 40GB file, and deleting each increment only frees a few MB 
or so).


Surely it's possible to delete an old increment without reversing the 
entire backup in progress? As I understand it, a backup creates a new 
increment, it doesn't touch any of the existing ones, therefore it should 
be perfectly safe to delete any increment except the one currently being 
created, while the backup is running. At worst, we'd have to restart the 
backup of the current file, if the out-of-space error left the binary diff 
file hopelessly corrupted.

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] backup misses files with older date

2009-01-10 Thread Chris Wilson

On Fri, 9 Jan 2009, Matthew Flaschen wrote:


Michael wrote:

GNUtar includes a file in the incremental backup if the mtime OR ctime
changes.   Does rdiff-backup check only the mtime and ignores the ctime?


there isn't anywhere to set ctime to an arbitrary date/time?


Apparently, not on POSIX. 
http://www.gnu.org/software/coreutils/manual/html_node/touch-invocation.html 
says "In any case, it is not possible, in normal operations, for a user 
to change the ctime field to a user-specified value."


I'm not sure what "normal operations" means.


Not using a hex editor on /dev/sda :)
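You can verify this on any Linux box: os.utime() will happily backdate mtime, but the kernel bumps ctime to "now" as a side effect of the call itself. A small Python demonstration:

```python
import os
import tempfile
import time

def backdate(path, when):
    """Set atime/mtime to an arbitrary past instant and return the
    resulting (mtime, ctime). Changing the timestamps is itself a
    metadata change, so the kernel updates ctime to 'now'."""
    os.utime(path, (when, when))
    st = os.stat(path)
    return st.st_mtime, st.st_ctime

fd, path = tempfile.mkstemp()
os.close(fd)
past = time.time() - 10 * 86400          # ten days ago
mtime, ctime = backdate(path, past)
os.unlink(path)
# mtime now reads ten days in the past, but ctime still reads 'now'.
```

This is exactly why a backup tool that also checked ctime would catch files whose mtime had been tampered with.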

Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] backup misses files with older date

2009-01-09 Thread Chris Wilson

Hi,

On Wed, 7 Jan 2009, bt101 wrote:

I installed rdiff-backup and used it for a few days.  I was eager to see 
how it would handle a small change to a large (2G) truecrypt file.  I 
added a small file into the truecrypt file and then performed a backup. 
To my surprise, the truecrypt file was completely missed from the 
backup.  I then looked at the modify date of the truecrypt file and it 
was old.  It seems truecrypt will preserve the modify date (there's a 
setting to change it if you wish).


This is an annoying and pointless misfeature of truecrypt. It causes 
problems for other backup software as well. Turn it off if you want to 
back up truecrypt volumes, and complain to the developers of truecrypt. 
Timestamps exist for a reason and subverting them by default is dangerous 
and pointless. I'm fairly sure that the ctime will change whenever the 
volume is accessed as the timestamp is reset, and therefore changing the 
mtime provides only the tiniest shred of obscurity.


Be that as it may, am I to understand that backup programs will 
completely miss files that NEED to be backed-up, simply because their 
dates are old?  Do they not do a checksum or something?


Almost all backup software will honour timestamps. They exist for a 
reason. Software that changes the data without changing the timestamp is 
broken.
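To make the failure mode concrete, here is a toy version of timestamp-based change detection (my own sketch, not any particular tool's code). A content change that preserves both mtime and size is simply invisible to it:

```python
import os
import tempfile
import time

def needs_backup(path, last_run, known_sizes):
    """Naive incremental check: copy only if mtime is newer than the
    previous run or the size changed. Edits that preserve both (as
    truecrypt does for its volumes) go unnoticed."""
    st = os.stat(path)
    return st.st_mtime > last_run or known_sizes.get(path) != st.st_size

fd, vol = tempfile.mkstemp()
os.write(fd, b"A" * 16)
os.close(fd)
old = time.time() - 86400                 # pretend it was written yesterday
os.utime(vol, (old, old))
last_run = time.time() - 3600             # last backup ran an hour ago
sizes = {vol: 16}

before = needs_backup(vol, last_run, sizes)      # untouched file: skipped

with open(vol, "r+b") as f:                      # rewrite the contents...
    f.write(b"B" * 16)
os.utime(vol, (old, old))                        # ...and restore the old mtime
after = needs_backup(vol, last_run, sizes)       # the change goes unnoticed
os.unlink(vol)
```

A full checksum of every file on every run would catch this, but at the cost of reading the entire data set each time, which is why timestamp checks are the norm.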


Incidentally, I am also performing a dar backup of the filesystem as 
well, and it also missed the backup of this modified file.


Seems like a pretty big hole to me!


It is a big hole - in truecrypt.

Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] different versions

2009-01-05 Thread Chris Wilson

Hi Peter,

On Mon, 5 Jan 2009, Peter wrote:

Being a novice at Linux and still learning, I use a few different 
distributions. My main mail server is Fedora, my desktop is Kubuntu, and 
another box is another flavour, etc.


On the Ubuntu unit I use apt-get install rdiff-backup and it installed 
version 1.15.something


On my Fedora I had version 1.05 or something.

And of course they didn't work when my Fedora tried to back up to my 
Ubuntu.


What's the easiest way for me to get and install the same version on all 
boxes?


The easiest way I've found is to download the source code from the 
rdiff-backup website (rdiff-backup.nongnu.org) and build from source on 
each computer.


Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Data Privacy from system administrator with rdiff-backup

2008-12-28 Thread Chris Wilson

Hi Dominic,

On Sun, 28 Dec 2008, Dominic wrote:

Is it possible with rdiff-backup to create backups on a (linux) backup server 
which cannot then be accessed by the system administrator? I think that 
Duplicity was created to provide this privacy, but in other respects 
rdiff-backup seems a much more polished solution and I wondered if this 
problem can be overcome while sticking with rdiff-backup.


For instance, no sort of third-party rdiff-backup-based backup solution is 
really satisfactory if the third party administrator can just read any 
customer's backed-up private files.


Have you looked at Box Backup? It was designed for this situation.

rdiff-backup can do it if you mount an encrypted filesystem over the 
network, but it's not easy and I don't think it's secure.


Otherwise, I think Duplicity (or possibly Amanda with encryption) is your 
best bet.


Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Restructuring an archive

2008-11-14 Thread Chris Wilson
Hi all,

On Thu, 13 Nov 2008, Dominic wrote:

 I'm also curious about the additional overhead (in disk space) that is 
 created by frequent rdiff-backup runs. If one backs up daily, how much 
 more disk space is used than if one backs up weekly? In theory no more 
 because 7 x daily incremental diffs have the same info as 1 x weekly 
 incremental diff.

I was running a backup of about 150 GB of data (three home directories) 
about 3 times a day, and it was taking 8 MB per incremental backup with 
rdiff-backup 1.0.5, even if nothing had changed. It may well be less with 
more recent versions, due to (1) compressed metadata and (2) incremental 
patches to metadata.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | Stop nuclear war http://www.nuclearrisk.org |




Re: [rdiff-backup-users] Help how to restore a single file or directory on windows.

2008-10-16 Thread Chris Wilson
Hi,

On Mon, 13 Oct 2008, weesterweb wrote:

 C:\rdiff-backup.exe -r -v9 
 c:\dest\rdiff-backup-data\increments\newbuild.txt.2008-10-13T15;05813;05857-05;05800.diff.gz
  c:\restore
 Fatal Error: Could not find rdiff-backup repository at 
 c:\dest\rdiff-backup-data\increments\newbuild.txt.2008-10-13T15;05813;05857-05;05800.diff.gz
 
 C:\rdiff-backup.exe -r -v9 
 c:\dest\rdiff-backup-data\increments\newbuild.txt.2008-10-13T15;05814;05855-05;05800.snapshot.gz
  c:\restore
 Fatal Error: Could not find rdiff-backup repository at 
 c:\dest\rdiff-backup-data\increments\newbuild.txt.2008-10-13T15;05814;05855-05;05800.snapshot.gz

There should be a parameter to -r in these commands, e.g. -r now.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Help how to restore a single file or directory on windows.

2008-10-11 Thread Chris Wilson
Hi Joe,

On Fri, 10 Oct 2008, weesterweb wrote:

 I use rdiff-backup 1.2 on Windows XP. I am trying to restore an archived 
 file/directory but have had no luck. Suggestions? Or syntax, please.
 
 I tried this but no luck
 
 rdiff-backup 
 host.net::/remote-dir/rdiff-backup-data/increments/myfile.2008-10-05T12:21:41-07:00.diff.gz
  
 local-dir/file

Try with the -r option, and if it doesn't work, post the error message 
that you get.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Any plans for Amazon S3 support?

2008-08-25 Thread Chris Wilson
Hi Greg,

On Mon, 25 Aug 2008, Greg Freemyer wrote:

 I took a look at S3's pricing today and it is pretty good. 
 ($0.15/GB/month)  http://aws.amazon.com/s3
 
 The trouble from a rdiff-backup perspective is that it has a custom API.
 
 I'd like to see rdiff-backup have support for backing up to S3, but that 
 may be much more difficult than I expect.
 
 So count this as a vote for S3 support.

Have you looked at Amazon EBS? With this, you should be able to back up to 
an EC2 instance running rdiff-backup, and save the resulting filesystem 
images on S3.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Any plans for Amazon S3 support?

2008-08-25 Thread Chris Wilson
Hi Greg,

On Mon, 25 Aug 2008, Greg Freemyer wrote:

 I've been deleting the articles about most of the Amazon Cloud instead 
 of reading them, so a couple questions if you happen to know.
 
 I gather that EC2 is basically a virtual server with a full OS 
 installation?  (I currently have one of those at a provider that I run 
 our company website on.)

Yes, effectively. My understanding is that the storage is NOT persistent, 
unlike most virtual and real servers, but you can put snapshots of the 
system into S3 or EBS, and with EBS you can efficiently update those 
snapshots.

 If so, could I consolidate my current webserver virtual server and
 then add on the EBS storage to back up our fileservers to it?

I think so, yes.

 Not sure how EC2 is priced, but EBS includes per-I/O pricing. Would 
 that be a lot for a rdiff-backup backend server?

http://www.amazon.com/b/ref=sc_fe_c_1_3435361_1?ie=UTF8&node=689343011&no=3435361&me=A36L942TSJ2AJA

$0.50 per GB and $0.10 per million I/O requests. Doesn't sound too bad to 
me.

However, this is not truly persistent storage. For that, you'd need to 
snapshot it to S3 and pay the charges for that as well.

 Also, is anyone already doing something like this?  Personally, I use 
 rdiff-backup to a local drive, then rsync the whole repository offsite 
 to a online storage vendor.  I'm currently paying about $75/month for 
 250GB of repository.

Not yet with rdiff-backup, but I know someone who is using EC2 and testing 
S3 and EBS with Box Backup, and I'm planning to do the same.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Any plans for Amazon S3 support?

2008-08-25 Thread Chris Wilson
Hi Greg,

On Tue, 26 Aug 2008, Chris Wilson wrote:

  Not sure how EC2 is priced, but EBS includes per-I/O pricing. 
  Would that be a lot for a rdiff-backup backend server?
 
 http://www.amazon.com/b/ref=sc_fe_c_1_3435361_1?ie=UTF8&node=689343011&no=3435361&me=A36L942TSJ2AJA
 
 $0.50 per GB and $0.10 per million I/O requests. Doesn't sound too bad 
 to me.

Sorry, that should be $0.10 per GB, much more affordable.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Client dying randomly on 1.1.16

2008-08-22 Thread Chris Wilson
Hi Oliver,

On Fri, 22 Aug 2008, Oliver Hookins wrote:

 We are doing backups by initiating from the backup server, and for some 
 reason clients will die during the night but not every night. Sometimes 
 only once a week or less.
 
 The only message we get from the operation is the following, when the cron
 job ends:
 
 Read from remote host exampleclient.backup: Connection timed out
 Fatal Error: Lost connection to the remote system

My guess is that either your connection to the server is going down, or 
it's being terminated by a firewall in between the client and the server, 
when the connection is idle for some time, e.g. while the client is doing 
work locally, scanning directories or diffing a large file or something.

You might want to check whether, for example, a long-lived SSH session 
between client and server, without TCP or SSH keep-alives, is able to 
survive the night without activity, and not be reset as soon as you try to 
use it in the morning.
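One cheap way to rule out idle-timeout resets is to enable SSH keep-alives on the side that initiates the connection. A client-side OpenSSH config fragment would look roughly like this (ServerAliveInterval, ServerAliveCountMax, and TCPKeepAlive are standard ssh_config options; the host name is just the one from your log):

```
# ~/.ssh/config on the machine that runs ssh (the backup server here)
Host exampleclient.backup
    ServerAliveInterval 60     # probe the peer after 60s of silence
    ServerAliveCountMax 5      # give up after 5 unanswered probes
    TCPKeepAlive yes
```

With keep-alives flowing, a stateful firewall between the two hosts has no long idle period during which to expire the connection.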

Failing that, the last few packets of output of a tcpdump of the 
connection on the client side, with timestamps, might be useful in 
diagnosing the problem.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind  your software |




Re: [rdiff-backup-users] Re: Truncated header string error, how to diagnose?

2008-05-11 Thread Chris Wilson

Hi Chris,

On Sun, 11 May 2008, Chris G wrote:


whereas the 1.0.5 was a root/system installation.

I think that's the problem but I can't work out how to fix it, any 
rdiff-backup command, e.g.:-


   ssh [EMAIL PROTECTED] rdiff-backup --version

simply returns with no output.  All other commands executed like this
(at least all the ones I've tried) work fine.


Since 1.1.15 is installed in your user account, perhaps the environment 
necessary to access it isn't loaded when you ssh with a command (which 
doesn't run a login shell)?


Could you try creating a wrapper script on the remote side that sets 
PYTHONPATH=/home/ibsd/.../lib/python (or wherever rdiff_backup/Main.py is 
located under your home) and then runs rdiff-backup?
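A minimal sketch of such a wrapper (the PYTHONPATH value is a placeholder for wherever rdiff_backup/Main.py really lives under your home; the trailing `sh -n` only syntax-checks the result, it does not run a backup):

```shell
# Create the wrapper on the remote side (path below is hypothetical):
cat > /tmp/rdiff-backup-wrapper <<'EOF'
#!/bin/sh
# Make the per-user install visible to Python, then hand over to rdiff-backup.
PYTHONPATH=/home/ibsd/lib/python
export PYTHONPATH
exec rdiff-backup "$@"
EOF
chmod +x /tmp/rdiff-backup-wrapper
sh -n /tmp/rdiff-backup-wrapper && echo "wrapper syntax OK"
```

The client can then be pointed at it with something like --remote-schema 'ssh -C %s /tmp/rdiff-backup-wrapper --server'.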


Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] rdiff-backup versus rsnapshot

2008-04-28 Thread Chris Wilson
Hi all,

On Mon, 28 Apr 2008, Mike Marseglia wrote:

 A quick google turned up the following from
 http://www.backupcentral.com/components/com_mambowiki/index.php/Rdiff-backup:
...
 Disadvantages
 
 Let’s be honest: rdiff-backup has some disadvantages too:
 
 Speed
 
 rdiff-backup consumes more CPU than rsync and is therefore slower than
 most rsync scripts. This difference is often not noticeable when the
 bottleneck is the network or a disk drive but can be significant for
 local backups.

I would add the following:

rdiff-backup uses a lot more network bandwidth than rsync. About 1GB per 
100GB covered per backup, in my estimate, in addition to the deltas. This 
makes it too slow for us to use for daily offsite backups (uploading over 
a 384kbit DSL, backing up about 500GB daily).

rdiff-backup is a bit fragile. It's easy to corrupt the metadata, for 
example if the store disk gets full, or multiple backups run to the same 
destination at the same time, and usually impossible (i.e. nobody knows 
how) to recover the history after that happens.

rdiff-backup does not allow one to remove an intermediate increment (for 
example if a large file accidentally got backed up that shouldn't be) or 
to remove a subtree of the backup (at least not without risking metadata 
corruption again).

 With rsync scripts, all past backups appear as copies and are thus 
 easy to verify, restore, and delete. With rdiff-backup, only the current 
 backup appears as a true copy. (Earlier backups are stored as compressed 
 deltas.)

For me, this is a mixed blessing. Large numbers of small files take a lot 
of space on the remote server (as with rsync too), and you can trash your 
backup by accidentally modifying files in the remote repository. But it 
has been useful in emergency recovery situations where I have had to boot 
from a recovery CD that didn't have rdiff-backup on it.

I do like rdiff-backup and I use it extensively, but these are things that 
I wish for that would make it even better.

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\ _/_/_/_//_/___/ | We are GNU : free your mind & your software |

Re: [rdiff-backup-users] external hard disk backups - best practice

2008-03-17 Thread Chris Wilson

Hi Greg,

On Mon, 17 Mar 2008, Greg Freemyer wrote:


I also maintain a local and offsite rdiff backup copy.

But I only have rdiff create the first copy.  Then I rsync that to my 
offsite location.


I have about 150GB of data I backup.  rdiff + rsync takes about 45 
minutes most nights if there is not a significant change to my dataset. 
(FYI: I have a number of spindles involved via raid, so I'm not slowed 
by disk i/o as much as one might expect.)


The bad news is that if my first set gets corrupted, then rsync will 
relay the corruption out to the offsite copy.  My reason for two copies 
is disaster recovery, not backup repository corruption.  May need to 
rethink that based on this discussion.


What I've started doing to work around this is to rsync the original data 
(not the rdiff-backup copy) offsite and run rdiff-backup locally on the 
remote server. Not perfect, I end up with two copies of all the data on 
the remote server, but at least it does work and appears to save 
bandwidth.
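As a sketch, the two stages can be driven from cron; hosts, paths and times below are invented for illustration:

```
# 01:00 - mirror the live data offsite (rsync only sends changed blocks)
0 1 * * *  rsync -a --delete /data/ backuphost:/srv/mirror/data/
# 03:00 - snapshot the mirror into an rdiff-backup repository, run
#         locally on the remote host so no rdiff-backup protocol
#         traffic crosses the slow link
0 3 * * *  ssh backuphost rdiff-backup /srv/mirror/data /srv/rdiff/data
```

The price, as noted, is a second copy of the data on the remote server; the gain is that only rsync's deltas travel over the WAN.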


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind & your software |




Re: [rdiff-backup-users] rdiff runs then stops without error output

2008-02-27 Thread Chris Wilson

Hi Jeff,

On Wed, 27 Feb 2008, Latimer, Jeff wrote:


I am new to rdiff and not really strong in Linux so bear with me.

rdiff (1.1.5.4) is configured and running. It seems to run ok, but it 
will run for about 20 minutes then stops moving data. The rdiff-backup 
process is still running when I do a ps aux, but it is not using any 
processor cycles. It just sits there. I increased the verbosity level to 
7 and viewed backup.log and it just shows the last file copied and 
nothing else - no error output or anything. /var/log/messages shows 
nothing, as does syslog. It is as if it is waiting for something.


Any ideas where I can look?


I reckon it's probably trying to back up a mounted filesystem that's not a 
real filesystem and contains virtual files that could confuse it. For 
example, /proc or /sysfs.


You could try running from the command line with -v 9. Pay close attention 
to the last line output, that may tell you what it's trying to back up.


If that doesn't help, try running the strace command on the running 
rdiff-backup process, which should tell you what system call it's hung up 
executing. If that doesn't help you, try posting the output here.
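For reference, attaching to an already-running process looks something like this (pid discovery varies per system; -f also follows child processes):

```
strace -f -p "$(pgrep -f rdiff-backup | head -n1)"
```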


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind & your software |




Re: Re: [rdiff-backup-users] can rdiff-backup be stopped / paused / restarted? - HOWTO?

2008-01-19 Thread Chris Wilson
Hi Maarten and devzero,

On Mon, 14 Jan 2008, Maarten Bezemer wrote:

 On Sun, 13 Jan 2008 [EMAIL PROTECTED] wrote:
 
  when rdiff-backup is really cancelled while running there is no return
  and rdiff-backup will need to recover first on the next run..
 
 Although I can understand why it is done this way, I'm not entirely 
 convinced that it can't be done differently. I mean: resume isn't 
 considered the Right Thing because rdiff-backup is supposed to be a 
 'snapshot' at 1 point in time. But in practice, if you have slow links 
 or a lot of stuff to back up, the data backed up at the end of the run 
 may be from a much later time than the data backup up at the beginning 
 of that run. This wouldn't be much different from starting a backup, 
 network link breaking, and resuming 5 minutes later. Yet, this isn't 
 supported... :-(
 
 Not knowing the python code very well, I'm not sure if saving checkpoint 
 data in the exception handler is feasible. Same goes for reading 
 checkpoint data and continuing from that point. Would be nice to have, 
 but I'm not expecting anything like this to happen before we get 
 increment-merging ;-)

As I see it, the problem is that rdiff-backup saves increment files as it 
goes along updating the remote repository. It does this in such a way that 
it can undo the increments if necessary, with --check-destination-dir, but 
I think it might not be able (currently) to:

* determine which increments have already been applied when restarting the 
backup, and not apply them again; and

* handle the case where a file that was incremented during the last run 
has subsequently changed and needs to be incremented again (merging 
increments); and

* handle the case where the increments created so far do not match the log 
file written so far (because the two cannot be updated atomically in 
step).

These problems are not impossible to solve, but backup software is tricky 
to get right and also very very important to get right, and I can 
understand the authors' reluctance (so far) to try this.

In most cases, it's not actually necessary either. Careful bandwidth 
management (QoS) on your Internet connection can ensure that your backups 
can run for as long as necessary without needing to be interrupted and 
without disturbing other traffic.
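As a very rough sketch of the shaping idea (interface name and rate are assumptions; real setups usually classify backup traffic separately rather than capping the whole link):

```
# Cap outbound traffic on a 384kbit uplink to ~220kbit/s with a token
# bucket filter, so interactive traffic is never completely starved:
tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540
```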

We do this at the company that I work for (I implemented it) and it works 
reasonably well, although rdiff-backup has some other problems as well, so 
we're looking at other solutions. So far I'm not aware of anything else 
that has all the nice properties of rdiff-backup (which I really want), so 
we're stuck with it (and I don't know python to fix rdiff-backup myself).

Cheers, Chris.
-- 
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Rotation?

2007-10-12 Thread Chris Wilson

Hi Chris,

What's the best way to implement a rotation policy? We'd like to keep 
daily backups for a week, then weekly backups for a month, then monthly 
backups. Are there some scripts available, or perhaps docs that explain 
it in simple terms?


Sorry, as far as I know it's not possible to combine old rdiff-backup 
increments to reduce their size and implement such a strategy. You can 
only delete all increments over a certain age. It's something that I would 
really like to see as well.
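That age-based deletion is done with --remove-older-than; a crontab sketch (the path and retention period are assumptions, and --force is needed because more than one increment may be removed in one go):

```
# Sunday 04:30 - drop all increments older than three months
30 4 * * 0  rdiff-backup --remove-older-than 3M --force /backups/hostname
```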


Then again, rdiff-backup reduces the need for this by being quite 
efficient with disk space. You might not save much by deleting 
intermediate increments anyway (probably only the size of the metadata 
files, which can be a few megabytes per increment).


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] reorganizing backup folders

2007-09-24 Thread Chris Wilson

Hi Chuck,

On Mon, 24 Sep 2007, Steven Willoughby wrote:

chuck odonnell wrote:

 i have a fairly large repository that i back up using something like:

 $ rdiff-backup remote::/m /backups/remote/m

 but now i want to add some other folders that are above /m, e.g.:

 $ cat include_file
 /m
 /etc
 /usr/local/etc
 $ rdiff-backup --include-globbing-filelist include_file --exclude / \
   remote::/ /backups/remote

 is there a way to merge the new folders into the existing backup tree
 or do i have to move the old /backups/remote/m folder out of the way
 and start from scratch? i am less concerned about preserving
 historical increments than i am with having to refetch the many gigs
 of data that comprise remote::/m.


Delete the rdiff-backup-data directory and then run your above command with 
the --force option.


You might want to move your existing files (sans rdiff-backup-data 
directory) to the location that they will end up in the new repository, so 
that rdiff-backup will find the existing files where it expects them, and 
rsync them instead of just deleting and re-uploading them.
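A sketch of that reshuffle using throwaway paths (everything here is hypothetical, and the final rdiff-backup invocation is shown only as a comment because it needs the real remote host):

```shell
repo=/tmp/demo/backups/remote
mkdir -p "$repo/m" "$repo/rdiff-backup-data"
touch "$repo/m/file1"            # stand-in for the existing mirror data
rm -r "$repo/rdiff-backup-data"  # discard the old metadata and history
# The mirror files now sit where the wider backup expects them, so the
# next run rsyncs them in place instead of re-fetching many gigabytes:
#   rdiff-backup --include-globbing-filelist include_file --exclude / \
#       --force remote::/ /backups/remote
ls "$repo/m"
```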


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Rdiff large files

2007-09-24 Thread Chris Wilson

Hi Stephen,

On Mon, 24 Sep 2007, Stephen Zemlicka wrote:


Does rdiff transfer the whole file or just the part that changes as in
rsync?


It just transfers the parts that have changed, but it uses a similar 
algorithm to rsync to do this (i.e. calculate the checksums of each 
block in the destination file) ...


My situation is I need to backup over the internet.  I cannot modify the 
server to run the rsync server on it.  Currently, my connectivity is 
limited to a mapped drive.  Since rsync transfers the whole file, I was 
hoping I could use rdiff to only transfer the changes and in a restore, 
the specified change file/folder would be applied to patch the files to 
the current version.


... and if rsync doesn't do what you want, I very much doubt that rdiff 
(or rdiff-backup) will either.


What you're trying to do won't work, because rsync, rdiff-backup and most 
similar tools will need to download the entire current remote file to 
calculate checksums for each block, to determine what needs to be 
uploaded. The backup optimises efficiency over the link between rsync 
client and rsync server, not between rsync server and the disk (which in 
your case is not local). They were not designed to do that.


Unless you can change the remote server to run something smarter than a 
simple file server daemon on it, I'm afraid you're quite stuck. You could 
do something completely different, remember what checksum each file had 
last time you uploaded it (or keep the block checksums locally) and just 
upload the changed blocks that way. But I don't think rdiff-backup will 
help you to do that, sorry.
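A toy sketch of that "remember the checksums locally" idea (paths are invented, and a real tool would track per-block rather than per-file checksums):

```shell
work=/tmp/cksum-demo
mkdir -p "$work" && cd "$work"
echo "hello"   > a.txt
md5sum a.txt   > manifest.old    # state recorded at the last upload
echo "changed" > a.txt           # local edit since then
md5sum a.txt   > manifest.new
# Any file whose checksum moved is a candidate for re-upload:
cmp -s manifest.old manifest.new || echo "a.txt needs upload"
```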


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] why does exclude move files to increments?

2007-09-01 Thread Chris Wilson

Hi Brandon,

I want to use rdiff-backup to do incrementals, but I also only want to 
process files that have changed in the last 24 hours. How come when I 
use exclude directives that the directories excluded in subsequent 
backups are moved to incrementals?


This is by design. The principle is that if you restore from the 
repository to a date before you excluded those files, then the files will 
be restored. Of course they have to be stored somewhere on the server in 
order to be restored. rdiff-backup treats them like any other file that 
disappeared (e.g. was deleted) from the source filesystem, and creates a 
compressed increment that describes the change, or in this case, how to 
recreate the file with its original contents.


This only happens once, when the file is excluded, and after that it won't 
be touched until you purge the snapshot that contains it from the 
repository.



This is causing huge amounts of unnecessary I/O on my disk.


But only once, the first time that you exclude each file.

I want to create a baseline rdiff-backup and then only push updates 
daily by using mtime.


That is what rdiff-backup does for you.

I have 300GB worth of data and it takes an insane 16 hours to run. 
That's fine for a baseline,but not an incremental.


Sorry, rdiff-backup is quite slow in my opinion (I have a similar 
problem). I think it's inefficient in marshaling data to send over the 
network. I've worked around it by rsyncing my data to the backup server 
and running rdiff-backup locally there, but that may not work for you, 
especially if you don't want two copies of your data on the backup server.



Does rdiff-backup to a file compare on every single run?


I think it compares checksums of local file blocks against checksums of 
remote file blocks, to determine which blocks have changed and must be 
sent again. Just as rdiff does.



I create my baseline rdiff-backup with the following:
rdiff-backup --print-statistics
--exclude-other-filesystems --exclude-sockets
--exclude-device-files --exclude-fifos --include /test
--exclude '**' / /backup/testing

...

Now when I run rdiff-backup on just those files that
have changed, it moves the non-changed files to
increments! Why?


Because you excluded all files from the backup with --exclude '**'.


I am simulating the non-changed files
by excluding dir2 from my processing. I only want to
update the rdiff-backup tree for changed files based
on mtime to avoid the processing and I/O caused by
comparing all files' signatures.


I think that is not a valid test.


Here is my command for only processing changed files:



find /test -mtime 0 | rdiff-backup --print-statistics
--exclude-other-filesystems --exclude-sockets
--exclude-device-files --exclude-fifos --exclude
/test/dir2 --include-filelist-stdin --exclude '**' /
/backup/testing


Umm, rdiff-backup will ignore the output of find /test -mtime 0 that you 
pipe into it. It works out which files have changed for itself. Just run 
it the same way each time (without the find command). It will do what you 
want.
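In other words, drop the find pipeline entirely and run the same command every night; reusing the flags from the question, something like:

```
rdiff-backup --print-statistics --exclude-other-filesystems \
    --exclude-sockets --exclude-device-files --exclude-fifos \
    --include /test --exclude '**' / /backup/testing
```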


Hope that helps.

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] why does exclude move files to increments?

2007-09-01 Thread Chris Wilson

Hi Brandon,

On Sat, 1 Sep 2007, Chris Wilson wrote:


 I am simulating the non-changed files by excluding dir2 from my
 processing. I only want to update the rdiff-backup tree for changed
 files based on mtime to avoid the processing and I/O caused by
 comparing all files' signatures.


I think that is not a valid test.


Sorry, thinking about it again I think I understand what you're trying to 
do now. You want rdiff-backup to treat excluded files as though they had 
not changed. Unfortunately that's not what the --exclude option does. It 
treats the excluded files as though they no longer exist, i.e. as though 
they had been deleted. I don't think this is what you want, either for 
testing or to only back up changed files in real life.



 Here is my command for only processing changed files:



 find /test -mtime 0 | rdiff-backup --print-statistics
 --exclude-other-filesystems --exclude-sockets --exclude-device-files
 --exclude-fifos --exclude /test/dir2 --include-filelist-stdin
 --exclude '**' / /backup/testing


Umm, rdiff-backup will ignore the output of find /test -mtime 0 that 
you pipe into it.


Sorry, I missed the --include-filelist-stdin option. It will read your 
list of included files, but all others will be treated as having been 
deleted, which is not what you want.


This still applies though:

It works out which files have changed for itself. Just run it the same 
way each time (without the find command). It will do what you want.


I hope :-)

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] ssh on different port

2007-05-31 Thread Chris Wilson

Hi Rudy

On Thu, 31 May 2007, RudySC wrote:

Yeah thats what i did initially, but there are lots of scanning on port 
22 the reason why i changed default port. I was thinking of editing the 
source, but dont know what file the ssh -C $IP_ADDRESS is associated.


Please re-read my answer. This is what you do _on the client_ when the 
server is NOT on port 22:



What I normally do is edit /etc/ssh/ssh_config to set the default port for
the specified remote host:

Host xxx.yyy.com
Port 


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Install problem? [was: backup to cifs share?]

2007-04-29 Thread Chris Wilson

Hi Morgan,

On Sun, 29 Apr 2007, Morgan Read wrote:


Seems my problem (below) has nothing to do with cifs.  After running a
local test, I get the same error:

[EMAIL PROTECTED] ~]# rdiff-backup --include-globbing-filelist
include-list-test / /home/tmp/testbackup
Traceback (most recent call last):
 File "/usr/local/bin/rdiff-backup", line 20, in ?
   import rdiff_backup.Main
ImportError: No module named rdiff_backup.Main
[EMAIL PROTECTED] ~]#

include-list-test is:
/home/olwyn
- **

Perhaps, this is something to do with the install?  I used the following
command straight from the README to install to /usr/local :
[EMAIL PROTECTED] ~]# python setup.py install --prefix=/usr/local

Might installing to /usr/local have caused some problems


Yes, if you install with a prefix then you have to update your Python 
module search path. Please see the list archives for details.
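Concretely, something like the following on the affected machine (the pythonX.Y directory is an assumption; check where setup.py said it was installing the rdiff_backup package):

```
# Make the --prefix=/usr/local install visible to Python:
export PYTHONPATH=/usr/local/lib/python2.4/site-packages
rdiff-backup --version
```

Adding the export line to the shell profile makes the fix permanent.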


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] SHA1digests missing ?

2007-04-24 Thread Chris Wilson

Hi Roland,

On Sun, 22 Apr 2007, [EMAIL PROTECTED] wrote:


sorry to ask again, but i`m unsure if this is an issue.

could somebody please check, how mirror_metadata looks on their system ? 
are there also digests missing in your metadata file(s) ?


i`m really curious.


I'm using 1.0.5, and I have no SHA1 digests at all.

Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] rdiff-backup error: Mac OS X - Linux

2007-03-06 Thread Chris Wilson

Hi Andrew,

On Tue, 27 Feb 2007, Andrew Ferguson wrote:

 Chris, rdiff-backup will adjust permissions as necessary (eg, adding 100 to 
 Spotlight-V100 during backup), but you have to be the file's owner to do so 
 (or root).


Sorry for the late reply. Do you happen to know where it does that, and since 
what version? I've run into this problem when backing up directories with weird 
permissions using 1.0.4 and 1.0.5.


Also, rdiff-backup shouldn't need any special permissions to alter the 
permissions of the directories _which it creates and owns_ in the mirror. It 
just needs to be careful to ensure that they are created with sane permissions.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |






Re: [rdiff-backup-users] Re: backup project to a quota limited directory

2007-02-12 Thread Chris Wilson

Hi Ahmed,

Yeah, that's what I want to do. Except to put a quota on a 
project/group, all files uploaded to the server by the group of people 
working on a certain project, would have to be owned by a specific Unix 
group. That's what I cannot do? i.e. how to force all files pushed to 
the server, to be owned by a certain Unix group.


I don't think that quotas work on groups, at least on a Linux server. As 
far as I know, you can only set a quota on a user. So make all members of 
a group log in as the same Unix user, and apply a quota to that user.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Re: backup project to a quota limited directory

2007-02-12 Thread Chris Wilson

Hi Ahmed,


oops, having everyone login as the same user is evil. And yes, Linux can
surely impose quota on Unix groups edquota -g. So, any tricks how to force
a Unix group for pushed files? Perhaps something that kinda uses
--remote-schema!


Can you make them all members of just one group on the backup server? Then 
rdiff-backup on the server will be forced to use that group, I think.


But why is it evil to make them log in as the same user anyway? You can 
give them different SSH keys that restrict them to different directories 
so that they can't see each others' files anyway.


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




Re: [rdiff-backup-users] Re: backup project to a quota limited directory

2007-02-12 Thread Chris Wilson

Hi Ahmed,

On Tue, 13 Feb 2007, Ahmed Kamal wrote:

thanks a lot for the help. I'm afraid having people in just one group is 
not really possible, as a group is a project, and people will be working 
on multiple projects, that's just a fact.


Well, rdiff-backup can't do exactly what you want out of the box, that's 
just a fact (afaik).



I wasn't aware ssh keys could restrict users in certain directories like
that, any pointers please?


man sshd and search for authorized key file format

man rdiff-backup and search for --restrict
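For the archives, a sketch of what such a restricted key can look like in ~/.ssh/authorized_keys on the server (key material elided; the path and options are assumptions):

```
command="rdiff-backup --server --restrict /backups/projectA",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... projectA-backup-key
```

Each project gets its own key line pointing --restrict at its own directory.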


It still looks hackish though.


Unix is one big hack, may as well get used to it :-)

Isn't there any client-side option to force running some command 
(newgrp, in this case) on the server, before the actual transfer begins?


The problem is that rdiff-backup won't just create the files and leave 
them alone, it will try to change their group to match the one on the 
client.


Try the --group-mapping-file option (man rdiff-backup) on the SSH key to 
map all groups to the same one. Each key can have its own group mapping, 
so if users log in with different keys, their files get mapped to a 
different group (whatever group they were on the client).


Or you could enforce that all files for a particular project have the same 
group (that of the project) on the client before uploading, with chgrp -R.
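The mapping file itself is one from:to pair per line; a hypothetical example mapping two client-side groups onto one server-side project group, passed with --group-mapping-file:

```
developers:projecta
testers:projecta
```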


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |




[rdiff-backup-users] IOError on missing file

2007-01-21 Thread Chris Wilson

Hi all,

This error suddenly appeared in one of my daily backups:

Previous backup seems to have failed, regressing destination now.
Regressing to Fri Jan 19 01:00:15 2007
Regressing file tmp/.6200.3c4f1
Warning: Could not restore file /mnt/tmp/aidworld-rdiff/tmp/.6200.3c4f1!

A regular file was indicated by the metadata, but could not be constructed 
from existing increments because last increment had type None.  Instead of 
the actual file's data, an empty length file will be created.  This error 
is probably caused by data loss in the rdiff-backup destination directory, 
or a bug in rdiff-backup


Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 23, in ?
    rdiff_backup.Main.Main(sys.argv[1:])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 285, in Main
    take_action(rps)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 255, in take_action
    elif action == "backup": Backup(rps[0], rps[1])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 299, in Backup
    backup_final_init(rpout)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 396, in backup_final_init
    checkdest_if_necessary(rpout)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 911, in checkdest_if_necessary
    dest_rp.conn.regress.Regress(dest_rp)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 70, in Regress
    for rf in iterate_meta_rfs(mirror_rp, inc_rpath): ITR(rf.index, rf)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rorpiter.py", line 285, in __call__
    last_branch.fast_process(*args)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 232, in fast_process
    if rf.metadata_rorp.isreg(): self.restore_orig_regfile(rf)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 260, in restore_orig_regfile
    rf.mirror_rp.write_from_fileobj(rf.get_restore_fp())
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rpath.py", line 977, in write_from_fileobj
    outfp = self.open("wb", compress = compress)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rpath.py", line 957, in open
    else: return open(self.path, mode)

IOError: [Errno 2] No such file or directory: 
'/mnt/tmp/aidworld-rdiff/tmp/.6200.3c4f1'


Now I can't back up that machine any more. It seems that the missing file 
should only have caused a warning, but then rdiff-backup tries to access 
it and fails, which causes it to die.


I could touch the missing file, but does anyone have a better idea?

Cheers, Chris.
--
_ ___ __ _
 / __/ / ,__(_)_  | Chris Wilson  at qwirx.com - Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Perl/SQL/HTML Developer |
\ _/_/_/_//_/___/ | We are GNU-free your mind-and your software |



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Crash due to IOError?

2007-01-03 Thread Chris Wilson

Hi Kristian,

On Mon, 4 Dec 2006, Kristian Rønningen wrote:


On a semi-regular basis I'm getting the following error while backing
up a certain server, it seems to be some kind of IO-Error (disk or
network), and was wondering if there was anything I could do to
prevent this.

The servers are running Debian Sarge, and its rdiff-backup package.
rdiff-backup 0.13.4-5

rdiff-backup --version says 0.13.4


Is that the same version as the client? Could you update both sides to 
1.0.x?



   nice -n 19 /usr/bin/rdiff-backup \


That's very nice. If anything CPU-intensive happens on your client, it 
will starve rdiff-backup for a long time, which could cause the following 
problem:



* Mon Dec  4 11:36:35 CET 2006 :: Backup my.server.net starts

[...]

Read from remote host my.server.net: Connection reset by peer
Exception '[Errno 32] Broken pipe' raised of class
'exceptions.IOError':

[...]

* Mon Dec  4 12:24:38 CET 2006 :: Backup my.server.net ends


The backup has been running for almost an hour. It's possible that 
there was no network activity for a long time between client and 
server, and some mischievous firewall has closed the connection?


Is there a firewall between client and server, and can you do anything to 
remove, replace or tune it so that it doesn't break the TCP connection?


Can you try running the backup without nice?

Can you enable TCP keepalives on the client? 
(/proc/sys/net/ipv4/tcp_keepalive_* on linux)
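For illustration, the kind of settings involved, assuming OpenSSH on the 
client (the host alias and the numbers are illustrative, not tested 
recommendations):

```
# /etc/sysctl.conf on the client -- kernel-level TCP keepalives (seconds);
# apply with `sysctl -p`
net.ipv4.tcp_keepalive_time = 600    # idle time before the first probe
net.ipv4.tcp_keepalive_intvl = 60    # interval between probes
net.ipv4.tcp_keepalive_probes = 5    # failed probes before the kernel gives up

# ~/.ssh/config -- SSH-level keepalives, which also survive firewalls that
# ignore TCP-level keepalives ("backupserver" is a hypothetical host alias)
Host backupserver
    TCPKeepAlive yes
    ServerAliveInterval 60
    ServerAliveCountMax 5
```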


Can you provide a tcpdump for the connection, or at least the last few 
packets exchanged? (from the last successful ack from the server, to the 
reset, with timestamps)


Is it possible that rdiff-backup crashed or was killed on the server? Is 
there anything in the system logs around the time that the backup dies?


Cheers, Chris.

Re: [rdiff-backup-users] Simple nightly script?

2006-10-22 Thread Chris Wilson

Hi Will,

On Sat, 21 Oct 2006, Will Prater wrote:

Thanks, the first line told me it was already loaded and the second appeared 
to mount successfully.  Is there a way to make this happen automatically, or 
do I have to edit your script to mount the /proc?


It should be mounted by your /etc/fstab. This always worked for me on new 
FreeBSD installations, so I don't know why it's not working for you.


Cheers, Chris.


Re: [rdiff-backup-users] Simple nightly script?

2006-10-20 Thread Chris Wilson

Hi Will,


My /proc/ folder is empty on FreeBSD.


Try this:

kldload procfs
mount -t procfs procfs /proc

Or:

kill -0 $OTHERPID

Cheers, Chris.


Re: [rdiff-backup-users] Simple nightly script?

2006-10-19 Thread Chris Wilson

Hi Greg,

On Thu, 19 Oct 2006, Greg Freemyer wrote:


I've looked at the examples at
http://www.nongnu.org/rdiff-backup/examples.html, but none of them seem to
address automated nightly scripts and error handling.

Are there some more complex examples available?


You need to implement locking yourself, in case a backup takes more than 
24 hours. I use this:


LOCKFILE=/var/lock/backup-by-rsync.pid

if [ -r "$LOCKFILE" ]; then
    OTHERPID=`cat "$LOCKFILE"`
    if [ -n "$OTHERPID" -a -d "/proc/$OTHERPID" ]; then
        echo "Another backup is already running:"
        ps auxww | grep "$OTHERPID" | grep -v grep
        exit 1
    else
        echo "Stale lockfile deleted: process $OTHERPID not running"
    fi
fi

echo $$ > "$LOCKFILE"

rdiff-backup ...

rm "$LOCKFILE"

I'm backing up to a local directory on a dedicated backup disk so it is 
easy enough to add "rdiff-backup /src /backup" to my backup script.


But what about catching errors?

Seems like I should be sending output to a log file, grepping thru it and
e-mailing it to myself if anything goes wrong.


I used to let rdiff-backup write errors to the standard output, which 
means that they are included in cron's email. However, I get a load of 
errors of minor importance ("UpdateError /some/file does not match source", 
"socket path too long", etc.).


What I do now is write the output from rdiff-backup to a logfile, and 
check the return code ($?), and if it's greater than zero, I echo 
something which tells me which backup failed, and which logfile to check. 
This ignores the minor errors, but more major ones result in the 
appropriate warning.
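That pattern can be sketched as follows; the backup command is replaced here 
by a stand-in that exits with code 3, purely so the control flow can be 
demonstrated. Substitute a real rdiff-backup invocation and a permanent log 
path:

```shell
LOG=$(mktemp)   # stand-in for e.g. /var/log/backup/host-$(date +%F).log
# Stand-in for: rdiff-backup /src /backup > "$LOG" 2>&1
sh -c 'echo "UpdateError: some minor problem"; exit 3' > "$LOG" 2>&1
RC=$?
if [ "$RC" -gt 0 ]; then
    # Only this one line reaches cron's email; the noisy minor errors
    # stay in the logfile for inspection.
    MSG="backup failed (exit $RC), see $LOG"
else
    MSG="backup succeeded"
fi
echo "$MSG"
rm -f "$LOG"
```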


Cheers, Chris.


Re: [rdiff-backup-users] Lock file

2006-08-24 Thread Chris Wilson

Hi Sebastien,

On Wed, 23 Aug 2006, Sebastien Maret wrote:


I use rdiff-backup to backup my home directory on a desktop computer
with a cron job. From time to time, the backup fails because of a lost
connection, for example when I move my laptop from one place to the
other.

When this occurs, the next backup fails because rdiff-backup believes
that there is still a process running, although that is not the case. Is 
this a bug in the way rdiff-backup checks whether another process is 
running?

That's a bit annoying, because when this happens, I need to regress
the backup directory by hand before starting another backup
process.


You could try changing your backup script so that it automatically runs 
--check-destination-dir after each backup finishes. Then, if the backup 
failed, you won't need to regress by hand. It also prevents you from 
accidentally running two backups at once, which will probably happen and 
corrupt your repository one day.
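A sketch of that arrangement in shell; `backup` and `regress` are stand-in 
functions here, standing for the real `rdiff-backup SRC DEST` and 
`rdiff-backup --check-destination-dir DEST` calls, so that only the control 
flow is shown:

```shell
backup()  { return 1; }    # stand-in: pretend the backup was interrupted
regress() { REGRESSED=1; } # stand-in: would regress the destination dir

REGRESSED=0
if ! backup; then
    # A failed run leaves the repository mid-session; regressing it now
    # means the next scheduled backup does not abort on a dirty destination.
    regress
fi
echo "regressed=$REGRESSED"
```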


Cheers, Chris.


Re: [rdiff-backup-users] stupid, stupid question, but necessary to know answer

2006-08-06 Thread Chris Wilson

Hi Maarten and Ty,

On Fri, 4 Aug 2006, Maarten Bezemer wrote:


It's not always known beforehand what the source contains. Since
rdiff-backup is already checking for other filesystem capabilities, it
would make some sense to check for and escape directories (or
files!!) named rdiff-backup-data as well, so that they wouldn't interfere
with the rdiff-backup program. After all, why should anyone using the
source tree know that it's being backed up by rdiff-backup? Even more so:
how *could* they know, e.g. when an ISP is backing up their customers'
files using rdiff-backup. The contents of the source tree should have no
influence on the way rdiff-backup behaves.


I think there is a misunderstanding here. If you back up directory /foo to 
directory /bar using rdiff-backup, then the rdiff-backup-data directory is 
created under /bar, not /foo.


So if the ISP is backing up someone's home directory using rdiff-backup, 
the user will not see any rdiff-backup-data directory inside their home 
directory.


The presence of an rdiff-backup-data directory inside some directory /bar 
should be taken to mean that the contents of /bar MUST NOT BE MODIFIED IN 
ANY WAY except by running "rdiff-backup /foo /bar". Any changes to /bar 
not made by rdiff-backup will corrupt the mirror and prevent accurate 
restores.


This is one reason why Ty Boyack should not treat his rdiff-backup 
destination directory as an alternative which he can just swap in to 
replace the source directory if it becomes unavailable for some reason.


Not only will his users be confused by the sudden appearance of an 
rdiff-backup directory, but if they make any changes at all to the newly 
appeared filesystem, rdiff-backup will become confused and his backups 
will become worthless.


Depending on the capabilities of the destination filesystem and the rights 
of the user performing the backup, there may be other differences between 
the original and the mirror; for example, some characters in file names 
may be escaped, or file ownership and permissions may be different.


rdiff-backup is a backup tool which preserves history, not a mirroring 
tool, and, like any tool, it can be abused. I would advise Mr Boyack to 
keep his rdiff-backups WELL AWAY from his users, and instead make a 
separate, identical mirror of his main filesystem to use as a swap-in 
backup, perhaps using rsync or an inotify-based tool, for example.


The fact that rdiff-backup creates a mirror, which happens to use the same 
filenames (if possible) and store the latest version of the mirrored data 
verbatim and uncompressed, should be treated solely as a convenience for 
emergency restoration of the latest versions of files without access to 
the rdiff-backup tool. It should NOT be misunderstood as a license to make 
any kind of changes to that directory with impunity, or a guarantee that 
the original and mirror are identical (apart from the rdiff-backup-data 
directory in the mirror).


I'm beginning to feel that this mirroring feature, while nice at first 
sight, actually presents a danger of users making unwarranted and 
unjustified assumptions that the mirror will be an exact mirror, and 
modifiable. I would be somewhat more comfortable if it was less obviously 
similar, for example if every filename was preceded with a special 
character such as '.' or '_'.



Now, when issuing this command, rdiff-backup goes to find its
rdiff-backup-data directory. My question was: How is this done?


I don't have the exact answer, but I guess that rdiff-backup recurses up 
the directory tree until it finds an rdiff-backup-data, or runs out of 
directories to recurse into.


I don't know what happens if you try to back up an rdiff-backup 
destination directory (including rdiff-backup-data) using rdiff-backup, 
but I suspect that it would be unwise to try. In any case, I don't see how 
it would give you any benefit over backing up that directory using rsync.


Cheers, Chris.


Re: [rdiff-backup-users] stupid, stupid question, but necessary to know answer

2006-08-06 Thread Chris Wilson

Hi J,

I think I did a bone-headed thing.  I excluded the rdiff-backup-data 
path in the backup.  I probably want that path in order to do 
incremental restores... is that correct?


I'm sorry, I don't understand. What command did you use, and why are you 
worried about the results?


Cheers, Chris.


Re: [rdiff-backup-users] SpecialFileError, UpdateError

2006-06-06 Thread Chris Wilson

Hi Felix,

I intend to set up nightly backups as well, but often I work on something 
important, and then I want to make quick backups, e.g. before taking my 
laptop out of the house.  They should not take more than about 5 minutes.


You might find it easier to have a different backup mechanism in that 
case, maybe combined with rdiff-backup. Something like Subversion or star 
might suit you very well for small numbers of changed files, while being 
less convenient for whole system backups and restores.



I haven't tried it yet, but it appears to be quite complex.


That's true, but it does include a remote server component. I would like 
to see the possibility to use DAV, FTP, SSH, NFS, local filesystem, etc. 
for the remote server, to reduce the amount of code in Box Backup itself. 
I hope that I can make some of those happen.



star has numerous advantages:

- It's stable and I've a little test suite that I created for it some
 time ago.  I trust it very much.


Box Backup also has a test suite and a lot of users. I think there's no 
substitute for regularly comparing your backups and running test restores, 
whatever application you use.



- Tar files are extremely easy to handle:


I would disagree - my 30 GB system backups would be completely 
unmanageable. I dropped Tar for this reason about 4 years ago and I'm 
still searching for a really good replacement, although rdiff-backup is 
close and Box Backup will be even better when I'm done with it :-)



 - One can compress and encrypt them at will.


If one has time to compress and encrypt those 30 GB files, and also if one 
doesn't mind that a bad disk block will destroy the remainder of the tar 
file.



 - The file system containing the backups can be something simple,
   e.g. a Fat32 file system.


That's true. I'm very close to having Box Backup able to write to a local 
FAT filesystem, at least in principle (it works right now but not all the 
unit tests pass yet). You still need a separate server process for now. I 
hope to eliminate the need for that soon.



 - One can easily write them to DVDs.


That is true, but the bad sector argument applies here too.


 - One can easily send data over the Internet, say every week's dump,
   compressed and encrypted.


But you can't send your full dumps over the Internet unless you don't mind 
tying up your connection for several days.



- Since only the time stamp of the last backup is needed, one can create
 incremental backups without having access to the lower level dumps.
 For example one can create an incremental dump to a little USB memory
 stick when sitting in the train.


True, and not many other solutions offer that option.


- It's quite fast.


Box Backup may be faster over an Internet connection, because it uses the 
rsync algorithm to synchronise the (encrypted) data with the remote store, 
so it sends only changed (encrypted) data within files, and not the whole 
file.



The disadvantage is that it's very cumbersome to get back to old data
since one has to play back, say, three archives.


At least three, possibly many more, depending on your strategy. I find 
differential backups unworkable because if I make a significant change, 
say adding a few GB of files, it will be multiplied by the number of 
differentials. So I used to run a monthly full backup and daily 
incrementals. But in case of a disaster, I would have had to unpack about 
15 archives on average. In case of needing a particular file, I had to 
search 15 archives on average.


Extracting one file from a compressed or encrypted archive requires you to 
uncompress/unencrypt half the archive on average, which can be a lot of 
data, and makes restores very slow as well.


Cheers, Chris.


Re: [rdiff-backup-users] SpecialFileError, UpdateError

2006-06-06 Thread Chris Wilson

Hi Felix,

On Tue, 6 Jun 2006, Felix E. Klee wrote:

So, you think that 5 minutes for rdiff-backup'ing a small incremental of 
a 7 GB file system won't be possible in the near future?


Well, I hate to say it but this has been a well-known issue for at least 
six months and nothing has happened yet. If I knew Python and I was a bit 
more motivated I would fix it myself.


The problem is: I don't want to back up directly to the Internet.  I'd 
like to be able to quickly back up to the LinkStation, and then the 
LinkStation should take care of backing up to a server on the Internet. 
I doubt that this two stage backup process is feasible with BoxBackup, 
but to be honest: Thus far, I've never backed up more than a couple of 
MB to a server on the Internet, so I've close to no experience in this 
area.


Well, you could use Box Backup in a couple of ways: box your machine to 
the LinkStation, and rsync out to the Internet, or rsync to the 
LinkStation and Box out to the Internet. I think the latter would be more 
bandwidth efficient, and possibly easier on the LinkStation, but you'd 
have to measure it to be sure.



At least every two months I want to create a full dump, no matter which
tool I use, better every month.


I'm afraid Box can't do that yet. It's planned for version 0.20, but 
there's no timescale yet.


Of course, I'd only include important data which totals at about 3GB. 
Important data, in my case, includes almost all the data (except emails) 
that I authored, including my first Basic programs created more than 15 
years ago.


I have similar requirements, but also my photos, which take about 20 GB 
(soon to be 40 when I get back from Ghana).


Sorry that neither Box nor rdiff-backup will meet your needs right now. 
Perhaps you are better off with star.


Cheers, Chris.


Re: [rdiff-backup-users] make a copy of rdiff-backup'ed directory

2006-06-04 Thread Chris Wilson

Hi Bas,

On Fri, 2 Jun 2006, Bas van den Heuvel wrote:

I'm using rdiff-backup to backup serveral internet hosts. Now I'm trying 
to backup the rdiff-backup data to another server on a different 
location so that when a server burns down or a nuclear meltdown occurs I 
can recover from the copied directory.


I tried to copy the directory with "rsync -caz /dir remote:/dir", but when 
I try to get an incremental list on the remote I get this error


rdiff-backup -l /dir

Fatal Error: Previous backup to /data/backup/garisson_home/vsic/vsic01 seems
to have failed.
Rerun rdiff-backup with the --check-destination-dir option to revert directory
to state before unsuccessful session.


Did you run this command while an rdiff-backup process was writing to the 
directory? Did you get any errors from rsync? Are you sure that the last 
rdiff-backup completed successfully? What does "rdiff-backup -l /dir" on 
the original destination directory (not the rsync copy) report?


Cheers, Chris.


Re: [rdiff-backup-users] rdiff-backup to LinkStation: *slow*

2006-05-26 Thread Chris Wilson

Hi Felix,

On Fri, 26 May 2006, Felix E. Klee wrote:


You can also change the SSH encryption algorithm on the client host
using the Ciphers option in /etc/ssh/ssh_config. According to this
page, arcfour is faster [http://www.hlrn.de/doc/ssh/index.html]. See
the man page for ssh_config for details.


The server (dropbear) does not support arcfour.


Then try OpenSSH instead? :-)
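If the server did support a faster cipher, the change would be a one-line 
client-side setting (the host alias is illustrative); an equivalent per-run 
form uses rdiff-backup's --remote-schema option, assuming a version that 
supports it:

```
# ~/.ssh/config on the client ("linkstation" is a hypothetical host alias)
Host linkstation
    Ciphers arcfour,blowfish-cbc,aes128-cbc

# or per invocation, via rdiff-backup's remote schema:
#   rdiff-backup --remote-schema 'ssh -c arcfour %s rdiff-backup --server' \
#       /home user@linkstation::/backup/home
```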

Cheers, Chris.


Re: [rdiff-backup-users] Warning, metadata file has entry for ...

2006-05-12 Thread Chris Wilson

Hi all,

I'm getting very worried by this. I have no backups for the last two 
months, because rdiff-backup keeps crashing with this error every day. My 
only option right now is to blow away a year's history with the 
rdiff-backup-data directory. I don't know Python so I can't diagnose it 
myself. Can anyone help, or tell me what more information I need to 
provide to fix this?


Cheers, Chris.

On Sat, 1 Apr 2006, Chris Wilson wrote:


Hi all,


On the machine where the disk filled up, I'm now getting the following
errors every time I run a particular backup:

[...]
The problem seems to be that the current backup will not complete 
successfully, so I get the same warnings next time.


The backup aborts with this error:

Warning, metadata file has entry for 
tmp/home/chris/lex/WEB-INF/classes/com/qwirx/lex/WrongTypeError.class,
but there are no associated files.
Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 23, in ?
    rdiff_backup.Main.Main(sys.argv[1:])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 285, in Main
    take_action(rps)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 257, in take_action
    elif action == "check-destination-dir": CheckDest(rps[0])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 861, in CheckDest
    dest_rp.conn.regress.Regress(dest_rp)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 70, in Regress
    for rf in iterate_meta_rfs(mirror_rp, inc_rpath): ITR(rf.index, rf)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 162, in iterate_meta_rfs
    for raw_rf, metadata_rorp in collated:
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rorpiter.py", line 92, in Collate2Iters
    try: relem1 = riter1.next()
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 140, in helper
    for sub_sub_rf in helper(sub_rf):
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 139, in helper
    for sub_rf in rf.yield_sub_rfs():
  File "/usr/lib/python2.4/site-packages/rdiff_backup/restore.py", line 546, in yield_sub_rfs
    yield self.__class__(mirror_rp, inc_rp, inc_list)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 182, in __init__
    self.set_regress_inc()
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 197, in set_regress_inc
    assert len(newer_incs) <= 1, "Too many recent increments"
AssertionError: Too many recent increments

The same thing happens when I run --check-destination-dir on the target
directory.

I don't know Python and I have no idea what's causing this assertion
failure (although I imagine it's due to corruption in the metadata). Can
anyone help?

I'd really rather not wipe out the rdiff-backup-data directory and lose
all my history, since that's why I use rdiff-backup in the first place.

Cheers, Chris.





[rdiff-backup-users] Warning, metadata file has entry for ...

2006-03-27 Thread Chris Wilson
Hi all, especially Ben,

On the machine where the disk filled up, I'm now getting the following
errors every time I run a particular backup:

Warning: Local version 1.0.1 does not match remote version 1.0.3.
Previous backup seems to have failed, regressing destination now.
Warning, metadata file has entry for tmp/.keep,
but there are no associated files.
Warning, metadata file has entry for tmp/10618/files/1,
but there are no associated files.
Warning, metadata file has entry for tmp/10618/files/10,
but there are no associated files.
Warning, metadata file has entry for tmp/10618/files/11,
but there are no associated files.

In fact, I get so many of these, that the report email is 6.9 MB!
Please, can anyone help me to fix this problem to stop the warnings?

Cheers, Chris.


[rdiff-backup-users] IOError: No such file or directory

2006-03-19 Thread Chris Wilson

Hi all,

I'm using rdiff-backup 1.0.4. Recently my backup server ran out of disk 
space and I left it that way for a few days before fixing it (with backups 
still being run against it, and failing). This seems to have royally 
broken my rdiff-backup repositories.


I managed to recover (I hope) from corrupted gzip metadata files, by 
uncompressing and recompressing them. However, this other problem 
persists:


Processing changed file tmp/home/chris/genesis/MC/Config
Incrementing mirror file 
/mnt/backup/local-rdiff/tmp/home/chris/genesis/MC/Config

Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 23, in ?
    rdiff_backup.Main.Main(sys.argv[1:])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 283, in Main
    take_action(rps)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 253, in take_action
    elif action == "backup": Backup(rps[0], rps[1])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 303, in Backup
    backup.Mirror_and_increment(rpin, rpout, incdir)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/backup.py", line 51, in Mirror_and_increment
    DestS.patch_and_increment(dest_rpath, source_diffiter, inc_rpath)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/backup.py", line 229, in patch_and_increment
    ITR(diff.index, diff)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rorpiter.py", line 285, in __call__
    last_branch.fast_process(*args)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/backup.py", line 612, in fast_process
    inc = self.inc_with_checking(tf, rp, self.get_incrp(index))
  File "/usr/lib/python2.4/site-packages/rdiff_backup/backup.py", line 598, in inc_with_checking
    try: inc = increment.Increment(new, old, inc_rp)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/increment.py", line 40, in Increment
    if not mirror.lstat(): incrp = makemissing(incpref)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/increment.py", line 51, in makemissing
    incrp.touch()
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rpath.py", line 840, in touch
    self.conn.open(self.path, "w").close()
IOError: [Errno 2] No such file or directory: 
'/mnt/backup/local-rdiff/rdiff-backup-data/increments/tmp/home/chris/genesis/MC/Config.2006-03-14T00:29:02Z.missing'
Exception exceptions.TypeError: "'NoneType' object is not callable" in 
<bound method GzipFile.__del__ of <gzip open file 
'/mnt/backup/local-rdiff/rdiff-backup-data/file_statistics.2006-03-19T12:27:23Z.data.gz', 
mode 'wb' at 0xb798acc8 -0x48656d54>> ignored
Exception exceptions.TypeError: "'NoneType' object is not callable" in 
<bound method GzipFile.__del__ of <gzip open file 
'/mnt/backup/local-rdiff/rdiff-backup-data/error_log.2006-03-19T12:27:23Z.data.gz', 
mode 'wb' at 0xb798af50 -0x48657d14>> ignored
Exception exceptions.TypeError: "'NoneType' object is not callable" in 
<bound method GzipFile.__del__ of <gzip open file 
'/mnt/backup/local-rdiff/rdiff-backup-data/mirror_metadata.2006-03-19T12:27:23Z.snapshot.gz', 
mode 'wb' at 0xb798aec0 -0x48656f94>> ignored


I've tried --check-destination-dir already and it completed without 
errors, but I still can't back up to it.


Any ideas?

Cheers, Chris.


Re: [rdiff-backup-users] Crypt backup

2006-01-31 Thread Chris Wilson

Hi Mike,

On Mon, 30 Jan 2006, Mike Bydalek wrote:

For people that wanted secure backups, I used box-backup, but it's 
extremely cumbersome to restore, so frankly, I don't like it that much.


I'm a Box Backup developer as well as an rdiff-backup user, and I'd be 
very interested to know how you found the Box Backup restore cumbersome? 
I'd like to work on improving it.


Cheers, Chris.


Re: [rdiff-backup-users] The rdiff-backup poll!

2005-12-20 Thread Chris Wilson

Hi Ben and all,


How do you think rdiff-backup should be improved?


I voted in the poll, thanks. However, a few things have come to mind 
since:


I normally back up to an unprivileged account on the remote server, and 
I'd like a way to suppress these warning messages:


Warning: ownership cannot be changed on filesystem at
/mnt/backup/.../rdiff-backup-data
Unable to import module xattr.
Extended attributes not supported on filesystem at
/mnt/backup/.../rdiff-backup-data/rdiff-backup.tmp.0
Unable to import module posix1e from pylibacl package.
ACLs not supported on filesystem at
/mnt/backup/.../rdiff-backup-data/rdiff-backup.tmp.0
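Until rdiff-backup grows an option to silence these, a wrapper that filters the known-harmless warnings out of its stderr is one stopgap. A minimal sketch: the warning prefixes are taken verbatim from the messages above, but the wrapper itself is hypothetical, not anything rdiff-backup ships.

```python
import re
import subprocess
import sys

# Known-harmless warning prefixes, copied from the messages quoted above.
NOISY = (
    re.compile(r"^Warning: ownership cannot be changed on filesystem"),
    re.compile(r"^Unable to import module (xattr|posix1e)"),
    re.compile(r"^Extended attributes not supported on filesystem"),
    re.compile(r"^ACLs not supported on filesystem"),
)

def filter_noise(lines):
    """Return only the stderr lines that are not known-harmless warnings."""
    return [ln for ln in lines if not any(rx.search(ln) for rx in NOISY)]

def run_quiet(argv):
    """Run rdiff-backup, echoing only unexpected stderr lines (sketch)."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    for line in filter_noise(proc.stderr.splitlines()):
        print(line, file=sys.stderr)
    return proc.returncode
```

The obvious risk of any such filter is hiding a genuinely new warning, so matching on exact prefixes rather than broad patterns is deliberate.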

Like others on the list, I would like a way to remove old versions of 
certain (large) files from the increments, and to remove (merge) 
older increments.


I'm also considering developing a GUI tool for running and managing 
rdiff-backup, and I'd be interested to know what others think. I'm already 
developing a similar tool for Box Backup, using C++. Ideally I would like 
this tool, Boxi [http://boxi.sourceforge.net], to support all the 
following backup systems:


* Box Backup
* Rdiff-backup
* Duplicity
* Bacula
* any others that the community are interested in

The main issue with supporting Rdiff-backup is the interface to Python 
(from C++). I was wondering if I could/should embed a Python interpreter 
in my code and use that to call rdiff-backup's functions? I would probably 
also need to implement callbacks to report backup progress and errors in 
an appropriate manner for a GUI.
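Whatever the embedding mechanism on the C++ side, it would ultimately drive something equivalent to the sketch below. The `rdiff_backup.Main.Main(...)` entry point shown is the one visible in the 1.x tracebacks on this list; it is an internal, unstable interface, and the graceful-degradation wrapper around it is my own invention.

```python
import importlib
import importlib.util

def run_backup_inprocess(args, package="rdiff_backup"):
    """Drive rdiff-backup through its internal entry point instead of
    parsing CLI output. Sketch only: Main.Main() is the 1.x-era internal
    API (as seen in tracebacks), not a supported, stable interface."""
    if importlib.util.find_spec(package) is None:
        return "missing: fall back to spawning the rdiff-backup CLI"
    main_mod = importlib.import_module(package + ".Main")
    main_mod.Main(list(args))  # e.g. ["--print-statistics", src, dst]
    return "ok"
```

A GUI built this way gets real Python exceptions to catch instead of scraping process output, at the cost of breaking whenever the internal API moves.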


I'd like to know whether anyone else is interested in such support, 
whether there are any objections or suggestions from rdiff-backup users, 
and whether there is any likelihood that the changes I will need to make 
to the core rdiff-backup code might be merged back into the core 
distribution.


I will also need to learn Python, and would appreciate pointers to good 
resources for this.


Has anyone tested rdiff-backup on Windows platforms, with or without 
Cygwin?


By the way, in case you didn't know, SourceForge uses rdiff-backup for 
their backup strategy. Well done!


Deployed an rdiff-backup based backup solution to strengthen our 
tape backup coverage; centralized filesystem-based backups archived to 
tape. Launched 2005-09-21.


[https://sourceforge.net/docman/display_doc.php?docid=19242&group_id=1]

Cheers, Chris.


Re: [rdiff-backup-users] The rdiff-backup poll!

2005-12-20 Thread Chris Wilson

Hi Dean,

why wouldn't you just spawn the rdiff-backup program?  otherwise you'd 
be dependent on the internal apis remaining stable... which doesn't seem 
like a good choice for your gui or for rdiff-backup development...


Well, I'm assuming that sooner or later rdiff-backup will have some kind 
of stable API that I can use. Also, the compiler (and python bytecode 
interpreter) will check that I'm using the correct API and give me a 
warning if not. Trying to parse the output of a process is a nightmare 
that I really don't want to get involved with (again).



progress meters kind of depend on knowing how much there is to be done,
and rdiff-backup doesn't know that initially at least... because it starts
transferring data even before it has scanned all the inodes needing
backup.


Well, I guess I would have to change that :-)

but rdiff-backup could probably be easily mod'd to have an option to 
display how many inodes/bytes it had processed so far... just hook in 
where the file-statistics file is written...


Yeah.
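A hook at the point where the file-statistics entry is written could feed a GUI something as simple as this. The callback and the tracker class are entirely hypothetical: nothing like them exists in rdiff-backup today, and the totals would come from an optional pre-scan.

```python
class ProgressTracker:
    """Accumulate per-file statistics for a GUI progress display.

    total_files/total_bytes come from an optional preliminary pass over
    the source tree; if they are unknown, no fraction is reported."""

    def __init__(self, total_files=None, total_bytes=None):
        self.total_files = total_files
        self.total_bytes = total_bytes
        self.done_files = 0
        self.done_bytes = 0

    def on_file_processed(self, size):
        # A hook where rdiff-backup writes its file_statistics entry
        # would call this once per processed file.
        self.done_files += 1
        self.done_bytes += size

    def fraction(self):
        """Completed fraction by bytes, or None if the total is unknown."""
        if not self.total_bytes:
            return None
        return min(1.0, self.done_bytes / self.total_bytes)
```

Reporting by bytes rather than inodes keeps the bar honest when a few huge files dominate the backup.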

Cheers, Chris.


Re: [rdiff-backup-users] The rdiff-backup poll!

2005-12-20 Thread Chris Wilson

Hi Ben,


Warning: ownership cannot be changed on filesystem at
/mnt/backup/.../rdiff-backup-data



If you run at default verbosity I don't think any of these should show
up.  Also I can't find this ownership cannot be changed warning
anywhere.  What version are you using?


This one I get all the time, even at default verbosity. I'm using 1.0.0 on 
some hosts, 1.0.3 on others.


Cheers, Chris.


Re: [rdiff-backup-users] The rdiff-backup poll!

2005-12-20 Thread Chris Wilson

Hi Ben,


I actually consider this a feature.  Since rdiff-backup only makes one
pass, it saves time making an extra pass through the directory tree.
Also, it doesn't need to fit the entire directory list in memory at
the same time (this is a common complaint against rsync).  Finally, it
doesn't need to worry about the various ways that the first pass can
differ from the second pass if the directory is changed.

You could have your program make a preliminary pass just to count the
number of files in the source directory, but I don't think
rdiff-backup needs to do this as a matter of course.


I'd probably do that then, as long as I can handle rdiff-backup's exclude 
lists and get the same results as it does. I don't need to hold the whole 
file list in memory, just the total file and byte counts.
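The preliminary counting pass could be as small as the sketch below: one walk, two running totals, no file list kept in memory. The glob-based exclude check is a simplification; matching rdiff-backup's real --exclude semantics would take more care.

```python
import fnmatch
import os

def _excluded(path, excludes):
    """Crude stand-in for rdiff-backup's exclude matching (glob patterns)."""
    return any(fnmatch.fnmatch(path, pat) for pat in excludes)

def count_tree(root, excludes=()):
    """Walk root once and return (file_count, byte_count), skipping any
    path that matches an exclude glob. Only the two totals are kept in
    memory, never the full file list."""
    n_files = 0
    n_bytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so we never descend into them.
        dirnames[:] = [d for d in dirnames
                       if not _excluded(os.path.join(dirpath, d), excludes)]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if _excluded(path, excludes):
                continue
            try:
                n_bytes += os.lstat(path).st_size
            except OSError:
                continue  # file vanished between readdir and lstat
            n_files += 1
    return n_files, n_bytes
```

Because the tree can change between this pass and the backup proper, the totals are only an estimate for the progress bar, which is all they need to be.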


Cheers, Chris.


Re: [rdiff-backup-users] Long path problem

2005-11-22 Thread Chris Wilson

Hi Ben,


So anyway, I'm curious when/how the long-path-problem is coming up.  I
wish rdiff-backup could successfully mirror all the files in this
case, but don't see a way of doing this that's not error-prone.
(Treating them the same way as long filename files and sticking them
in a separate directory would be a bit easier.)


I used a simple shell script on a Linux ext3 filesystem:

while mkdir x && cd x; do true; done

The path got to nearly 8000 characters long, and although cd was giving 
warnings about "getcwd: path name too long", it was still going when I 
killed it.
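The same effect can be reproduced without the shell: relative mkdir/chdir calls keep succeeding long after the accumulated absolute path has passed PATH_MAX (about 4096 bytes on Linux), at which point any syscall handed the full path fails with ENAMETOOLONG. A self-contained sketch of that reproduction:

```python
import errno
import os
import tempfile

def build_deep_tree(depth, component="d"):
    """Create `depth` nested directories with relative mkdir/chdir calls
    (mimicking the shell loop above), then check whether the resulting
    absolute path is still usable. Returns (path_length, stat_errno)."""
    top = tempfile.mkdtemp()
    prev = os.getcwd()
    os.chdir(top)
    made = 0
    try:
        for _ in range(depth):
            os.mkdir(component)  # relative call: each step is short, so it works
            os.chdir(component)
            made += 1
        abspath = top + ("/" + component) * made
        try:
            os.stat(abspath)     # absolute call: the full path exceeds PATH_MAX
            err = None
        except OSError as exc:
            err = exc.errno
        return len(abspath), err
    finally:
        for _ in range(made):    # unwind and clean up, again relatively
            os.chdir("..")
            os.rmdir(component)
        os.chdir(prev)
        os.rmdir(top)
```

This is exactly the asymmetry that bites a backup tool: the tree is trivially creatable, but any program that addresses files by absolute path cannot even stat the deep entries.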


When I try to back up the root of this huge tree with rdiff-backup, I get 
the following error:


Processing changed file //.../
Exception '[Errno 36] File name too long: '/.../a'' raised 
of class 'exceptions.OSError':
  File "/usr/lib/python2.3/site-packages/rdiff_backup/robust.py", line 32, in check_common_error
    try: return function(*args)
  File "/usr/lib/python2.3/site-packages/rdiff_backup/rpath.py", line 922, in append
    return self.__class__(self.conn, self.base, self.index + (ext,))
  File "/usr/lib/python2.3/site-packages/rdiff_backup/rpath.py", line 669, in __init__
    else: self.setdata()
  File "/usr/lib/python2.3/site-packages/rdiff_backup/rpath.py", line 693, in setdata
    self.data = self.conn.C.make_file_dict(self.path)

Hope this helps,

Cheers, Chris.


[rdiff-backup-users] Strange error

2005-11-06 Thread Chris Wilson

Hi Ben and all,

I've recently started to get this error for no apparent reason (as far as 
I can see nothing has changed). It happens when backing up a Fedora Core 1 
box (phpeace) onto a Gentoo box (top). Both run rdiff-backup 1.0.0.


--
top root # rdiff-backup --force --exclude '/proc'
--exclude '/var/spool/exim' [EMAIL PROTECTED]::/
/mnt/backup/phpeace-rdiff/
Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 21, in ?
    import rdiff_backup.Main
  File "/usr/lib/python2.2/site-packages/rdiff_backup/Main.py", line 25, in ?
    import Globals, Time, SetConnections, selection, robust, rpath, \
  File "/usr/lib/python2.2/site-packages/rdiff_backup/SetConnections.py", line 30, in ?
    import Globals, connection, rpath
  File "/usr/lib/python2.2/site-packages/rdiff_backup/connection.py", line 534, in ?
    import Globals, Time, Rdiff, Hardlink, FilenameMapping, C, Security, \
  File "/usr/lib/python2.2/site-packages/rdiff_backup/manage.py", line 24, in ?
    import Globals, Time, static, statistics, restore, selection
ValueError: bad marshal data
Fatal Error: Truncated header string (problem probably originated 
remotely)


Couldn't start up the remote connection by executing

ssh -C [EMAIL PROTECTED] rdiff-backup --server

Remember that, under the default settings, rdiff-backup must be
installed in the PATH on the remote system.  See the man page for more
information on this.  This message may also be displayed if the remote
version of rdiff-backup is quite different from the local version (1.0.0).
--

But I can run the ssh command just fine:

--
top root # ssh -C [EMAIL PROTECTED] -v rdiff-backup --server
OpenSSH_3.9p1, OpenSSL 0.9.7e 25 Oct 2004
[...]
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending command: rdiff-backup --server
--

And if I enter some junk data into this session, the remote 
rdiff-backup throws an exception:


--

aa
aaa

debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 24, in ?
    rdiff_backup.Main.Main(sys.argv[1:])
[...]
  File "/usr/lib/python2.2/site-packages/rdiff_backup/connection.py", line 208, in _read
    return self.inpipe.read(length)
OverflowError: long int too large to convert to int
debug1: channel 0: free: client-session, nchannels 1
debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 7.8 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0
debug1: Exit status 1
--
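That OverflowError is consistent with the server interpreting the junk bytes as a huge length header and then trying to read that many bytes. A defensive framing reader caps the claimed length before allocating; the sketch below uses an illustrative 4-byte length-prefixed protocol, not rdiff-backup's actual wire format.

```python
import io
import struct

MAX_FRAME = 64 * 1024 * 1024  # refuse anything claiming to be > 64 MiB

def read_frame(stream):
    """Read one length-prefixed frame: a 4-byte big-endian length, then
    the payload. Garbage input produces a clean ValueError instead of an
    attempt to read gigabytes."""
    header = stream.read(4)
    if len(header) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", header)
    if length > MAX_FRAME:
        raise ValueError("implausible frame length %d: junk input?" % length)
    payload = stream.read(length)
    if len(payload) < length:
        raise ValueError("truncated payload")
    return payload
```

Feeding the reader ASCII junk such as "aaaa" decodes to a length of about 1.6 billion, which the cap rejects immediately, which is roughly the failure mode the traceback above shows without such a cap.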

I'm completely stumped. Any ideas, anyone?

Cheers, Chris.


Re: [rdiff-backup-users] Re: rdiff-backup for changes only

2005-10-07 Thread Chris Wilson

Hi all,

On Fri, 7 Oct 2005, Troels Arvin wrote:

On Fri, 07 Oct 2005 18:11:24 +1000, Herman Lim wrote:

Does anyone know if there's a way to use rdiff-backup to make incremental
'backups' of changes only? ie. without mirroring the original directory
(then the changes subsequently).


Not that I know of.


Have a massive source directory and would only like to keep track of the
changes only.


You could use diff, I guess.


How about rdiff-backup to a local copy on the server, and then backup only 
the rdiff-backup directory with something like rsync?
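Pushing the repository offsite then reduces to a plain rsync. A sketch of building that invocation (paths are hypothetical): --delete matters, so files removed from the source repository, such as a superseded current_mirror marker, don't linger in the copy and confuse rdiff-backup later.

```python
import subprocess

def rsync_repo_cmd(src_repo, dest):
    """Build an rsync command for mirroring an rdiff-backup repository.
    -a preserves times/perms/links, -A copies ACLs, --delete removes
    files that have disappeared from the source repository."""
    return ["rsync", "-aA", "--delete",
            src_repo.rstrip("/") + "/", dest.rstrip("/") + "/"]

def rsync_repo(src_repo, dest):
    """Run the mirror; returns rsync's exit status."""
    return subprocess.call(rsync_repo_cmd(src_repo, dest))
```

The trailing slash on the source is deliberate: it copies the repository's contents rather than nesting the directory one level deeper at the destination.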


Cheers, Chris.


Re: [rdiff-backup-users] Possible python 2.3 dependency?

2005-08-31 Thread Chris Wilson

Hi Ben,


 Yes, rdiff-backup is meant to only require python 2.2.  If you avoided
 backing up device files (--exclude-device-files) you wouldn't get the
 error you got, so that may be an easy workaround.


Thanks, that worked :-)


Sorry, it appears that I spoke too soon. It works for backups to a remote 
machine, but local ones still fail, with the following trace:


Previous backup seems to have failed, regressing destination now.
Exception ''module' object has no attribute 'mknod'' raised of class 
'exceptions.AttributeError':
  File "/usr/lib/python2.2/site-packages/rdiff_backup/robust.py", line 32, in check_common_error
    try: return function(*args)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 213, in copy_with_attribs
    copy(rpin, rpout, compress)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 101, in copy
    rpout.makedev(c, major, minor)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 1042, in makedev
    try: self.conn.os.mknod(self.path, mode, self.conn.os.makedev(major, minor))

Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 24, in ?
    rdiff_backup.Main.Main(sys.argv[1:])
  File "/usr/lib/python2.2/site-packages/rdiff_backup/Main.py", line 282, in Main
    take_action(rps)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/Main.py", line 252, in take_action
    elif action == "backup": Backup(rps[0], rps[1])
  File "/usr/lib/python2.2/site-packages/rdiff_backup/Main.py", line 302, in Backup
    backup.Mirror_and_increment(rpin, rpout, incdir)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/backup.py", line 51, in Mirror_and_increment
    DestS.patch_and_increment(dest_rpath, source_diffiter, inc_rpath)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/backup.py", line 229, in patch_and_increment
    ITR(diff.index, diff)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rorpiter.py", line 279, in __call__
    last_branch.fast_process(*args)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/backup.py", line 611, in fast_process
    inc = self.inc_with_checking(tf, rp, self.get_incrp(index))
  File "/usr/lib/python2.2/site-packages/rdiff_backup/backup.py", line 597, in inc_with_checking
    try: inc = increment.Increment(new, old, inc_rp)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/increment.py", line 44, in Increment
    else: incrp = makesnapshot(mirror, incpref)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/increment.py", line 69, in makesnapshot
    (mirror, snapshotrp, compress)) == 0:
  File "/usr/lib/python2.2/site-packages/rdiff_backup/robust.py", line 32, in check_common_error
    try: return function(*args)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 213, in copy_with_attribs
    copy(rpin, rpout, compress)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 101, in copy
    rpout.makedev(c, major, minor)
  File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py", line 1042, in makedev
    try: self.conn.os.mknod(self.path, mode, self.conn.os.makedev(major, minor))
AttributeError: 'module' object has no attribute 'mknod'
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method 
GzipFile.__del__ of <gzip <open file 
'/mnt/backup/server/rdiff/rdiff-backup-data/file_statistics.2005-08-30T23:59:59+01:00.data.gz', mode 
'wb' at 0x8481ea0> 0x848316c>> ignored
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method 
GzipFile.__del__ of <gzip <open file 
'/mnt/backup/server/rdiff/rdiff-backup-data/mirror_metadata.2005-08-30T23:59:59+01:00.snapshot.gz', 
mode 'wb' at 0x83577e0> 0x8499064>> ignored
Exception exceptions.TypeError: "'NoneType' object is not callable" in <bound method 
GzipFile.__del__ of <gzip <open file 
'/mnt/backup/server/rdiff/rdiff-backup-data/error_log.2005-08-30T23:59:59+01:00.data.gz', mode 'wb' 
at 0x8480d48> 0x847dec4>> ignored

This presumably means that this box isn't being backed up (locally) right 
now, which is worrying. Any ETA on a fix, before I have to downgrade all 
my boxen to 0.12.8 to get my backups working again?
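The failing call is os.mknod, which only appeared in Python 2.3, so on 2.2 the attribute simply does not exist. Any code path that recreates device files therefore needs a guard roughly like the sketch below (my own illustration; rdiff-backup's eventual fix may well differ).

```python
import os

def make_device(path, mode, major, minor):
    """Recreate a device node if the running Python exposes os.mknod
    (added in Python 2.3); otherwise report the file as skipped so the
    rest of the backup can proceed."""
    if not (hasattr(os, "mknod") and hasattr(os, "makedev")):
        return "skipped: os.mknod unavailable on this Python"
    try:
        os.mknod(path, mode, os.makedev(major, minor))
    except OSError as exc:
        # Typically EPERM for unprivileged users, or ENOENT for a
        # missing parent directory.
        return "failed: %s" % exc
    return "created"
```

Degrading to a logged skip rather than an AttributeError crash would at least let the rest of the filesystem keep getting backed up, which is the behaviour --exclude-device-files approximates by hand.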


rdiff-backup was run with the following command line:

rdiff-backup --force --exclude-device-files \
--exclude /dev/log \
--exclude /dev/reboot \
... more excludes ... \
--exclude /var/state \
/ /mnt/backup/server/rdiff

rdiff-backup version 1.0.0 on Fedora Core 1, python 2.2.3.

Cheers, Chris.


Re: [rdiff-backup-users] Possible python 2.3 dependency?

2005-08-27 Thread Chris Wilson

Hi Charles,

   So now I have to upgrade all my boxes to 1.0.0, including Gentoo ones 
   which don't know about 1.0.0 yet


 Creating custom ebuilds is really, really easy -- so getting 1.0.0 in use on 
 your Gentoo systems shouldn't be much trouble at all. Send me personal mail 
 if you need any help -- if I have the time, I might create a 1.0.0 ebuild 
 myself.


Yeah, I already did it, piece of cake. I like Gentoo :-) But I'm just a little 
surprised that they are behind Fedora (even FC2) in updates.


Cheers, Chris.