Re: [rdiff-backup-users] rdiff-backup result is missing text files but no errors, sourcedir too big (>2TB)

2019-07-20 Thread Dominic Raferd
On Fri, 19 Jul 2019 at 20:43, Jelle de Jong 
wrote:

> On 7/18/19 4:45 PM, Lew Wolfgang wrote:
> > On 07/17/2019 04:48 AM, Jelle de Jong wrote:
> >> Hello everybody,
> >>
> >> I am trying to run an rdiff-backup and it keeps missing some documents
> >> compared with the source. We are using a simple test.txt file to check
> >> whether the backups are still working, and I have one big source backup of
> >> +2.4TB. When running rdiff-backup there are no errors, so I cannot
> >> figure out what is going wrong.
> >>
> >> I split the backup to make the source smaller and then it does make
> >> backups of the test.txt file.
> >>
> >> How can rdiff-backup handle backups bigger than 3 TB? The command I am
> >> running is:
> >>
> >> /usr/bin/python /usr/bin/rdiff-backup --tempdir
> >> /srv/storage/backups/tempdir --exclude-device-files --exclude-sockets
> >> --exclude-fifos /mnt/sr8-sdb2/
> >> /srv/storage/backups/libvirt-filesystems/sr8-sdb2/
> >
> > Just a wild guess:  a size limitation in your disk partition,
> > filesystem, or
> > operating system?
> >
> > What operating system:
> > 32 or 64-bits:
> > Partition table:  (MBR or GPT)
> > Filesystem type:  (NTFS, EXT4, ZFS, XFS, etc)
> > Size of destination partition:
> > BIOS type:  (Legacy or UEFI)
>
> Thank you all for responding, here is the information.
>
> # rdiff-backup --version
> rdiff-backup 1.2.8
>
> root@backup:~# uname -a
> Linux backup 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64
> GNU/Linux
>
> root@backup:~# df -hal /srv/storage/backups/libvirt-filesystems/sr8-sdb2
> Filesystem Size  Used Avail Use% Mounted on
> /dev/mapper/lvm0--vol-storage   22T   15T  7.6T  66% /srv/storage
>
> root@backup:~# mount | grep  /dev/mapper/lvm0--vol-storage
> /dev/mapper/lvm0--vol-storage on /srv/storage type xfs
> (rw,relatime,attr2,inode64,logbsize=64k,sunit=128,swidth=1024,noquota)
>
> root@backup:~# mount | grep /boot
> /dev/sdr1 on /boot type ext3 (rw,relatime)
>
> root@backup:~# parted /dev/sdr1 print
> Model: Unknown (unknown)
> Disk /dev/sdr1: 30.7GB
> Sector size (logical/physical): 512B/512B
> Partition Table: loop
> Disk Flags:
>
> Number  Start  End SizeFile system  Flags
>   1  0.00B  30.7GB  30.7GB  ext3
>
> # Legacy bios
> root@backup:~# dmidecode -t bios
> # dmidecode 3.2
> Getting SMBIOS data from sysfs.
> SMBIOS 2.5 present.
>
> Handle 0x0005, DMI type 0, 24 bytes
> BIOS Information
> Vendor: Intel Corp.
> Version: S5500.86B.01.00.0061.030920121535
> Release Date: 03/09/2012
> Address: 0xF
> Runtime Size: 64 kB
> ROM Size: 8192 kB
> Characteristics:
> PCI is supported
> PNP is supported
> BIOS is upgradeable
> BIOS shadowing is allowed
> Boot from CD is supported
> Selectable boot is supported
> EDD is supported
> 3.5"/2.88 MB floppy services are supported (int 13h)
> Print screen service is supported (int 5h)
> 8042 keyboard services are supported (int 9h)
> Serial services are supported (int 14h)
> CGA/mono video services are supported (int 10h)
> ACPI is supported
> USB legacy is supported
> LS-120 boot is supported
> ATAPI Zip drive boot is supported
> Function key-initiated network boot is supported
> Targeted content distribution is supported
> BIOS Revision: 17.18
> Firmware Revision: 0.0
>
> Handle 0x002D, DMI type 13, 22 bytes
> BIOS Language Information
> Language Description Format: Long
> Installable Languages: 1
> en|US|iso8859-1
> Currently Installed Language: en|US|iso8859-1
>
> /usr/bin/python /usr/bin/rdiff-backup --tempdir
> /srv/storage/backups/tempdir --exclude-device-files --exclude-sockets
> --exclude-fifos /mnt/sr8-sda2
> /srv/storage/backups/libvirt-filesystems/sr8-sda2
>
> root@backup:~# dd if=/mnt/sr8-sda2/pagefile.sys
> of=/srv/storage/backups/tempdir/pagefile.sys
> 2621440+0 records in
> 2621440+0 records out
> 1342177280 bytes (1.3 GB, 1.2 GiB) copied, 11.8273 s, 113 MB/s
>
> Is there an option to speed up /usr/bin/rdiff-backup? It runs at about
> 1-10 MB/s, compared to the 113 MB/s the underlying system manages with dd.


Could there be some permission / file lock issues on the source? Have you
tried creating an LVM snapshot of the source and backing up from that
instead?

Regarding speed, you can try turning off compression in rdiff-backup with
the --no-compression switch.
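Both suggestions can be combined into one job. The sketch below is a hedged outline, not a tested recipe: the volume group name, snapshot size, and snapshot mount point are assumptions; only the rdiff-backup options and destination path come from the thread.

```shell
#!/bin/sh
# Snapshot the source LV so rdiff-backup reads a frozen, lock-free view.
# VG/LV names and the 10G copy-on-write size are hypothetical.
lvcreate --size 10G --snapshot --name sr8-sdb2-snap /dev/lvm0-vol/sr8-sdb2
mkdir -p /mnt/sr8-sdb2-snap
mount -o ro /dev/lvm0-vol/sr8-sdb2-snap /mnt/sr8-sdb2-snap

# --no-compression skips gzipping the increments, which often helps
# CPU-bound backups of large, poorly compressible disk images.
rdiff-backup --no-compression \
    --tempdir /srv/storage/backups/tempdir \
    --exclude-device-files --exclude-sockets --exclude-fifos \
    /mnt/sr8-sdb2-snap/ /srv/storage/backups/libvirt-filesystems/sr8-sdb2/

# Drop the snapshot when done.
umount /mnt/sr8-sdb2-snap
lvremove -f /dev/lvm0-vol/sr8-sdb2-snap
```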
___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup result is missing text files but no errors, sourcedir too big (>2TB)

2019-07-19 Thread Jelle de Jong

On 7/18/19 4:45 PM, Lew Wolfgang wrote:

On 07/17/2019 04:48 AM, Jelle de Jong wrote:

Hello everybody,

I am trying to run an rdiff-backup and it keeps missing some documents
compared with the source. We are using a simple test.txt file to check
whether the backups are still working, and I have one big source backup of
+2.4TB. When running rdiff-backup there are no errors, so I cannot
figure out what is going wrong.


I split the backup to make the source smaller and then it does make 
backups of the test.txt file.


How can rdiff-backup handle backups bigger than 3 TB? The command I am running is:

/usr/bin/python /usr/bin/rdiff-backup --tempdir 
/srv/storage/backups/tempdir --exclude-device-files --exclude-sockets 
--exclude-fifos /mnt/sr8-sdb2/ 
/srv/storage/backups/libvirt-filesystems/sr8-sdb2/ 


Just a wild guess:  a size limitation in your disk partition, 
filesystem, or

operating system?

What operating system:
32 or 64-bits:
Partition table:  (MBR or GPT)
Filesystem type:  (NTFS, EXT4, ZFS, XFS, etc)
Size of destination partition:
BIOS type:  (Legacy or UEFI)


Thank you all for responding, here is the information.

# rdiff-backup --version
rdiff-backup 1.2.8

root@backup:~# uname -a
Linux backup 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 
GNU/Linux


root@backup:~# df -hal /srv/storage/backups/libvirt-filesystems/sr8-sdb2
Filesystem Size  Used Avail Use% Mounted on
/dev/mapper/lvm0--vol-storage   22T   15T  7.6T  66% /srv/storage

root@backup:~# mount | grep  /dev/mapper/lvm0--vol-storage
/dev/mapper/lvm0--vol-storage on /srv/storage type xfs 
(rw,relatime,attr2,inode64,logbsize=64k,sunit=128,swidth=1024,noquota)


root@backup:~# mount | grep /boot
/dev/sdr1 on /boot type ext3 (rw,relatime)

root@backup:~# parted /dev/sdr1 print
Model: Unknown (unknown)
Disk /dev/sdr1: 30.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End SizeFile system  Flags
 1  0.00B  30.7GB  30.7GB  ext3

# Legacy bios
root@backup:~# dmidecode -t bios
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.5 present.

Handle 0x0005, DMI type 0, 24 bytes
BIOS Information
Vendor: Intel Corp.
Version: S5500.86B.01.00.0061.030920121535
Release Date: 03/09/2012
Address: 0xF
Runtime Size: 64 kB
ROM Size: 8192 kB
Characteristics:
PCI is supported
PNP is supported
BIOS is upgradeable
BIOS shadowing is allowed
Boot from CD is supported
Selectable boot is supported
EDD is supported
3.5"/2.88 MB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
CGA/mono video services are supported (int 10h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
Function key-initiated network boot is supported
Targeted content distribution is supported
BIOS Revision: 17.18
Firmware Revision: 0.0

Handle 0x002D, DMI type 13, 22 bytes
BIOS Language Information
Language Description Format: Long
Installable Languages: 1
en|US|iso8859-1
Currently Installed Language: en|US|iso8859-1

/usr/bin/python /usr/bin/rdiff-backup --tempdir 
/srv/storage/backups/tempdir --exclude-device-files --exclude-sockets 
--exclude-fifos /mnt/sr8-sda2 
/srv/storage/backups/libvirt-filesystems/sr8-sda2


root@backup:~# dd if=/mnt/sr8-sda2/pagefile.sys 
of=/srv/storage/backups/tempdir/pagefile.sys

2621440+0 records in
2621440+0 records out
1342177280 bytes (1.3 GB, 1.2 GiB) copied, 11.8273 s, 113 MB/s

Is there an option to speed up /usr/bin/rdiff-backup? It runs at about
1-10 MB/s, compared to the 113 MB/s the underlying system manages with dd.


Kind regards,

Jelle de Jong



Re: [rdiff-backup-users] rdiff-backup result is missing text files but no errors, sourcedir too big (>2TB)

2019-07-18 Thread Yves Bellefeuille
"Lew Wolfgang"  wrote:

>  Just a wild guess: a size limitation in your disk partition,
>  filesystem, or operating system?

I'd suspect the same thing. Can you clarify whether 2 TB is the size
of a file, of the entire backup, of the partition, or what?

ext2 and ext3 have a maximum file size of 2 TB (actually 2 TiB).
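As a quick sanity check, that ceiling falls out of 32-bit sector addressing: 2^32 sectors of 512 bytes each. (The exact per-file limit also varies with block size and kernel, but this is where the commonly cited 2 TiB figure comes from.)

```python
# ext3's classic per-file ceiling: a 32-bit count of 512-byte sectors.
SECTOR_SIZE = 512
MAX_SECTORS = 2 ** 32

max_file_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_file_bytes)                 # 2199023255552
print(max_file_bytes == 2 * 2 ** 40)  # exactly 2 TiB -> True
print(max_file_bytes / 10 ** 12)      # ~2.2 in "marketing" TB
```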

-- 
Yves Bellefeuille





Re: [rdiff-backup-users] rdiff-backup result is missing text files but no errors, sourcedir too big (>2TB)

2019-07-18 Thread Lew Wolfgang

On 07/17/2019 04:48 AM, Jelle de Jong wrote:

Hello everybody,

I am trying to run an rdiff-backup and it keeps missing some documents
compared with the source. We are using a simple test.txt file to check
whether the backups are still working, and I have one big source backup of
+2.4TB. When running rdiff-backup there are no errors, so I cannot
figure out what is going wrong.


I split the backup to make the source smaller and then it does make 
backups of the test.txt file.


How can rdiff-backup handle backups bigger than 3 TB? The command I am running is:

/usr/bin/python /usr/bin/rdiff-backup --tempdir 
/srv/storage/backups/tempdir --exclude-device-files --exclude-sockets 
--exclude-fifos /mnt/sr8-sdb2/ 
/srv/storage/backups/libvirt-filesystems/sr8-sdb2/ 


Just a wild guess:  a size limitation in your disk partition, filesystem, or
operating system?

What operating system:
32 or 64-bits:
Partition table:  (MBR or GPT)
Filesystem type:  (NTFS, EXT4, ZFS, XFS, etc)
Size of destination partition:
BIOS type:  (Legacy or UEFI)

Regards,
Lew



Re: [rdiff-backup-users] rdiff-backup under Windows with Cygwin

2018-08-17 Thread Yves Bellefeuille
Dominic Raferd  wrote:

>  Presumably there is some difference between the rdiff-backup command
>  that creates the tage backup and the ones that create the monate and
>  semajne backups. I would look at that difference.

Actually, the commands are identical.

"Tage", "semajne" and "monate" mean daily, weekly, and monthly (in
Esperanto). The difference is in how often the backups are made and
how long they're kept.

Daily backups are made every day and are kept for a week, weekly
backups are made once a week and are kept for a month, and monthly
backups are made once a month and are kept for a year. It's a way to
keep some older backups while acknowledging the limited disk space
available.

The commands are (for example):

rdiff-backup  --remove-older-than 7D  --force
root@192.168.1.6::/data/savkopio/rdiff-backup/financo/tage ;
rdiff-backup  -v5  --exclude  [several directories]  --include
[several directories] --exclude  /cygdrive/c/'**'  /cygdrive/c 
root@192.168.1.6::/data/savkopio/rdiff-backup/financo/tage
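The daily/weekly/monthly rotation described above can be sketched as a crontab fragment. The schedule times are hypothetical, and the weekly/monthly lines simply mirror the daily example with `--remove-older-than 1M` and `1Y` for the retention periods described; the real commands also carry the `--include`/`--exclude` options elided above.

```shell
# m h dom mon dow  command (tage = daily, semajne = weekly, monate = monthly)
30 1 * * *  rdiff-backup --remove-older-than 7D --force root@192.168.1.6::/data/savkopio/rdiff-backup/financo/tage ; rdiff-backup /cygdrive/c root@192.168.1.6::/data/savkopio/rdiff-backup/financo/tage
30 2 * * 0  rdiff-backup --remove-older-than 1M --force root@192.168.1.6::/data/savkopio/rdiff-backup/financo/semajne ; rdiff-backup /cygdrive/c root@192.168.1.6::/data/savkopio/rdiff-backup/financo/semajne
30 3 1 * *  rdiff-backup --remove-older-than 1Y --force root@192.168.1.6::/data/savkopio/rdiff-backup/financo/monate ; rdiff-backup /cygdrive/c root@192.168.1.6::/data/savkopio/rdiff-backup/financo/monate
```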

I've since found the page
https://dadhacker.blogspot.com/2013/03/getting-rdiff-backup-to-work-with.html
, where you're mentioned, and I'll try the suggestion it makes, which
is to modify /etc/fstab (in Cygwin on the Windows computer).

Thanks.

-- 
Yves Bellefeuille








Re: [rdiff-backup-users] rdiff-backup-regress script query - need to be 'nobody' or 'root'

2017-06-23 Thread Dominic Raferd
On 23 June 2017 at 12:53, Ron Leach  wrote:

> List, good morning,
>
> Having messed up the latest backup increment of our quite-large filesystem
> and, having long been a very happy user of Dominic's rdiff-backup-regress
> script, I thought I'd better regress the messed-up increment.
>
> On running the regress script as user 'ron', the script advised that I
> needed to be user 'nobody' or 'root'.  But, after su-ing to root, the
> script warned me:
>
> root@fileserver:/home/ron# ./rdiff-backup-regress.sh -n 1
> /mnt/backserver/Backups
>
> rdiff-backup-regress.sh v1.0 [25 Aug 2016] by Dominic (-h for help)
> ===
>
> You are user 'root', not 'nobody', which may result in changed ownership
> of some files.
> Are you sure you wish to continue (y/-)? n
> Exiting, no changes made
> root@fileserver:/home/ron#
>
> I've not had this warning before.  Many of the files in this backup are
> used by MS Windows clients, and managed by samba; I think this may be where
> the 'nobody' user is coming from.  The files are also served over NFS to
> linux clients, perhaps they are marked as 'nobody'.
>
> I think I ought not mess up the file ownerships, so I wanted to take heed
> of the script's warning.  I don't know how to become user 'nobody' - I
> don't think there is a 'nobody' password.
>
> Before I dig a deeper hole, has someone else bumped into this when using
> rdiff-backup-regress script and (if so) what did they find was the best way
> forward?  I probably do need to regress this increment, really.
>
> Grateful for any comment,


Hi Ron, I'm glad my script has helped you in the past. The warning is from
my script, not from rdiff-backup, and is based on my experience. The safest
way to run the script, IMO, is as user nobody, i.e. the same user that
created the archive; you might have to create a password temporarily for
this user and then remove it afterwards. If you run it as root there is a
risk that some files in the repository are left readable and/or writeable
only by root, so future rdiff-backup sessions by the original user could
fail. If you have to run the regression as root, I advise taking a backup
of the entire repository beforehand. Unless anyone knows different?
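One way to become 'nobody' without creating a password, as a hedged sketch: assuming root access, `sudo -u` or `su -s` can switch user directly (the explicit shell is needed because 'nobody' usually has a nologin shell; the script path and arguments just follow Ron's example).

```shell
# From a root shell: run the regress script as 'nobody', no password needed.
sudo -u nobody ./rdiff-backup-regress.sh -n 1 /mnt/backserver/Backups

# Equivalent with su, forcing a usable shell since 'nobody' has nologin:
su -s /bin/sh nobody -c './rdiff-backup-regress.sh -n 1 /mnt/backserver/Backups'
```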

Re: [rdiff-backup-users] rdiff-backup Errno 5 exceptions.IOError

2016-12-14 Thread Stephen Butler
Hi guys, I'm getting frequent backup failures to our server.


Here's output from the latest error.


I've cleaned the PC of dust.


I've checked the hard drive with relevant WD diagnostic tools, it passed.


Any suggestions welcome.


rdiff-backup --remote-tempdir /mnt/hda1/tmp --ssh-no-compression --force -v5 
--print-statistics B:/Users/Michelle 
tc@10.1.1.150::/mnt/hda1/rdiffbackup.repositorys/Michelle@MICHELLE
Exception '[Errno 5] Input/output error' raised of class '<class 'exceptions.IOError'>':
  File "rdiff_backup\Main.pyc", line 304, in error_check_Main
  File "rdiff_backup\Main.pyc", line 324, in Main
  File "rdiff_backup\Main.pyc", line 280, in take_action
  File "rdiff_backup\Main.pyc", line 343, in Backup
  File "rdiff_backup\backup.pyc", line 51, in Mirror_and_increment
  File "rdiff_backup\connection.pyc", line 450, in __call__
  File "rdiff_backup\connection.pyc", line 370, in reval

Traceback (most recent call last):
  File "rdiff-backup", line 30, in <module>
  File "rdiff_backup\Main.pyc", line 304, in error_check_Main
  File "rdiff_backup\Main.pyc", line 324, in Main
  File "rdiff_backup\Main.pyc", line 280, in take_action
  File "rdiff_backup\Main.pyc", line 343, in Backup
  File "rdiff_backup\backup.pyc", line 51, in Mirror_and_increment
  File "rdiff_backup\connection.pyc", line 450, in __call__
  File "rdiff_backup\connection.pyc", line 370, in reval
IOError: [Errno 5] Input/output error
Fatal Error: Lost connection to the remote system


It may be worth noting that the backup server is freezing up a bit as well. I'm 
currently testing it with MemTest86.


If it's not the hard drive or the RAM, I guess I'll try the power supply
after that.


Thanks,

Stephen.

Re: [rdiff-backup-users] rdiff-backup dies with ENOMEM due to requesting enormous amounts of memory

2016-06-10 Thread Jeff White

On 06/10/2016 04:34 AM, Patrik Dufresne wrote:

Hello Jeff,

Your problem is interesting to me! rdiff-backup doesn't tend to use a lot
of memory (it rarely reaches 100 MB on my systems). It is surprising to
see an out-of-memory error like this in rdiff-backup.

Could you provide some information regarding your current environment:
1. How much free memory do you have?

Should be plenty:

$ free -m
              total   used   free  shared  buff/cache  available
Mem:           1869    172   1169       0         527       1538
Swap:             0      0      0
2. Could you describe how big the data you are trying to back up is
(number of files, average file size, etc.)?
I'm trying to back up the root filesystem of a generic CentOS 7 box. It
doesn't even appear to reach the filesystem; it dies before that, from
what I can tell. It dies in the same way even when I try to back up a
single empty directory.

3. x86 or x86_64?

x86_64

4. What is your Python version?

$ python -V
Python 2.7.5


Can you get your hands on the server logs too? Does it fail when
requesting data from the remote server...
Nothing in any logs other than simply showing someone logged in via 
SSH.  The ENOMEM is /not/ on the remote side, it's on the side starting 
the backup and doing a pull from another server.



--
Patrik Dufresne Service Logiciel inc.
http://www.patrikdufresne.com

514-971-6442
1-114 rue des Hautbois,
St-Colomban, QC J5K 2H6

On Thu, Jun 9, 2016 at 6:54 PM, Jeff White wrote:


I'm trying to get rdiff-backup working for the first time across
SSH (with sudo) on two CentOS 7 hosts.  I'm getting the following
crash:

$ sudo rdiff-backup --restrict-read-only / -v5 --remote-schema
'ssh -l rdiff -i /home/rdiff/.ssh/id_rsa -t -t %s "sudo
/usr/bin/rdiff-backup --server"' bacula-p1n01::/ /dumps/bacula-p1n01/
Thu Jun  9 15:34:42 2016  Using rdiff-backup version 1.2.8
Thu Jun  9 15:34:42 2016  Executing ssh -l rdiff -i
/home/rdiff/.ssh/id_rsa -t -t bacula-p1n01 "sudo
/usr/bin/rdiff-backup --server"
Thu Jun  9 15:34:42 2016  Client sending (0): ConnectionRequest:
Globals.get with 1 arguments
Thu Jun  9 15:34:42 2016  Client sending (0): 'version'
Connection to bacula-p1n01 closed.
Thu Jun  9 15:34:43 2016  Exception '' raised of class '<class 'exceptions.MemoryError'>':
  File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py",
line 304, in error_check_Main
try: Main(arglist)
  File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py",
line 321, in Main
rps = map(SetConnections.cmdpair2rp, cmdpairs)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py",
line 76, in cmdpair2rp
if cmd: conn = init_connection(cmd)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py",
line 150, in init_connection
check_connection_version(conn, remote_cmd)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py",
line 158, in check_connection_version
try: remote_version = conn.Globals.get('version')
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
line 450, in __call__
return apply(self.connection.reval, (self.name,) + args)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
line 368, in reval
result = self.get_response(req_num)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
line 315, in get_response
try: req_num, object = self._get()
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
line 240, in _get
data = self._read(length)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
line 210, in _read
try: return self.inpipe.read(length)

Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 30, in <module>
rdiff_backup.Main.error_check_Main(sys.argv[1:])
  File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py",
line 304, in error_check_Main
try: Main(arglist)
  File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py",
line 321, in Main
rps = map(SetConnections.cmdpair2rp, cmdpairs)
  File
"/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py",
line 76, in cmdpair2rp

Re: [rdiff-backup-users] rdiff-backup dies with ENOMEM due to requesting enormous amounts of memory

2016-06-10 Thread Patrik Dufresne
Hello Jeff,

Your problem is interesting to me! rdiff-backup doesn't tend to use a lot
of memory (it rarely reaches 100 MB on my systems). It is surprising to
see an out-of-memory error like this in rdiff-backup.

Could you provide some information regarding your current environment:
1. How much free memory do you have?
2. Could you describe how big the data you are trying to back up is
(number of files, average file size, etc.)?
3. x86 or x86_64?
4. What is your Python version?

Can you get your hands on the server logs too? Does it fail when
requesting data from the remote server...


--
Patrik Dufresne Service Logiciel inc.
http://www.patrikdufresne.com
514-971-6442
1-114 rue des Hautbois,
St-Colomban, QC J5K 2H6

On Thu, Jun 9, 2016 at 6:54 PM, Jeff White  wrote:

> I'm trying to get rdiff-backup working for the first time across SSH (with
> sudo) on two CentOS 7 hosts.  I'm getting the following crash:
>
> $ sudo rdiff-backup --restrict-read-only / -v5 --remote-schema 'ssh -l
> rdiff -i /home/rdiff/.ssh/id_rsa -t -t %s "sudo /usr/bin/rdiff-backup
> --server"' bacula-p1n01::/ /dumps/bacula-p1n01/
> Thu Jun  9 15:34:42 2016  Using rdiff-backup version 1.2.8
> Thu Jun  9 15:34:42 2016  Executing ssh -l rdiff -i
> /home/rdiff/.ssh/id_rsa -t -t bacula-p1n01 "sudo /usr/bin/rdiff-backup
> --server"
> Thu Jun  9 15:34:42 2016  Client sending (0): ConnectionRequest:
> Globals.get with 1 arguments
> Thu Jun  9 15:34:42 2016  Client sending (0): 'version'
> Connection to bacula-p1n01 closed.
> Thu Jun  9 15:34:43 2016  Exception '' raised of class '<class 'exceptions.MemoryError'>':
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py", line
> 304, in error_check_Main
> try: Main(arglist)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py", line
> 321, in Main
> rps = map(SetConnections.cmdpair2rp, cmdpairs)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 76, in cmdpair2rp
> if cmd: conn = init_connection(cmd)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 150, in init_connection
> check_connection_version(conn, remote_cmd)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 158, in check_connection_version
> try: remote_version = conn.Globals.get('version')
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 450, in __call__
> return apply(self.connection.reval, (self.name,) + args)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 368, in reval
> result = self.get_response(req_num)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 315, in get_response
> try: req_num, object = self._get()
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 240, in _get
> data = self._read(length)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 210, in _read
> try: return self.inpipe.read(length)
>
> Traceback (most recent call last):
>   File "/usr/bin/rdiff-backup", line 30, in <module>
> rdiff_backup.Main.error_check_Main(sys.argv[1:])
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py", line
> 304, in error_check_Main
> try: Main(arglist)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/Main.py", line
> 321, in Main
> rps = map(SetConnections.cmdpair2rp, cmdpairs)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 76, in cmdpair2rp
> if cmd: conn = init_connection(cmd)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 150, in init_connection
> check_connection_version(conn, remote_cmd)
>   File
> "/usr/lib64/python2.7/site-packages/rdiff_backup/SetConnections.py", line
> 158, in check_connection_version
> try: remote_version = conn.Globals.get('version')
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 450, in __call__
> return apply(self.connection.reval, (self.name,) + args)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 368, in reval
> result = self.get_response(req_num)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 315, in get_response
> try: req_num, object = self._get()
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 240, in _get
> data = self._read(length)
>   File "/usr/lib64/python2.7/site-packages/rdiff_backup/connection.py",
> line 210, in _read
> try: return self.inpipe.read(length)
> MemoryError
>
>
> On that host when I strace the process I see:
>
> [pid 11602] mmap(NULL, 18118029061677056, PROT_READ|PROT_WRITE,
> MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 
> ...
> [pid 11602] <... mmap resumed> )= -1 ENOMEM (Cannot allocate
> memory)
>
>
> ... am I mistaken or is rdiff-backup 
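A side note on the enormous mmap size: one plausible mechanism (an assumption, not something established in this thread) is that stray bytes in the ssh pipe, for example from the forced pty (`-t -t`) or a login banner, get parsed as a record-length header, so the client tries to allocate a garbage-sized buffer. A toy illustration with hypothetical noise bytes (rdiff-backup's real wire encoding differs; this just shows the order of magnitude):

```python
import struct

# Pretend 8 bytes of terminal noise arrived where a length header was
# expected, and decode them as a 64-bit big-endian integer.
noise = b"Last log"  # hypothetical banner fragment, exactly 8 bytes
bogus_length = struct.unpack(">Q", noise)[0]

print(bogus_length)            # a number around 5.5e18
print(bogus_length > 2 ** 60)  # True: no real record is this large
```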

Re: [rdiff-backup-users] rdiff-backup, hard links, and multiple configs

2016-06-02 Thread Patrik Dufresne
I notice you haven't got any replies yet. I will try to provide some
answers.

> The actual question:  Does this make sense, or is there some danger in
hard-linking into an active rdiff-backup archive?

Hard-linking is not an issue for rdiff-backup, as long as the files are
not modified afterwards.

> Second question:  is it possible to run rdiff-backup on a subset of the
total more frequently, but with the same target archive?

It's not possible to back up only a subset of files with rdiff-backup, but
it sounds like a nice feature to get implemented! ;-)

Would be nice to hear back from you once you have completed your setup!
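To make the `rsync --link-dest` idea concrete, here is a minimal, self-contained sketch of the underlying mechanism (a hard-link snapshot tree); the file names are illustrative only:

```python
import os
import tempfile

# Build a tiny "archive", then snapshot it with hard links (cp -al style):
# the snapshot costs almost no space, and both names point at one inode.
base = tempfile.mkdtemp()
archive = os.path.join(base, "archive")
snapshot = os.path.join(base, "snapshot")
os.makedirs(archive)

with open(os.path.join(archive, "test.txt"), "w") as f:
    f.write("backup me\n")

os.makedirs(snapshot)
for name in os.listdir(archive):
    os.link(os.path.join(archive, name), os.path.join(snapshot, name))

a = os.stat(os.path.join(archive, "test.txt"))
b = os.stat(os.path.join(snapshot, "test.txt"))
print(a.st_ino == b.st_ino)  # True: one inode, two directory entries
print(b.st_nlink)            # 2
```

This also shows why the "files must not be modified afterwards" caveat matters: rewriting a linked file in place would change every snapshot sharing the inode, whereas rsync normally writes a temporary file and renames it, which safely breaks the link.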


--
Patrik Dufresne


On Sun, May 29, 2016 at 7:39 PM, meeotch <
rdiff-backup-fo...@backupcentral.com> wrote:

> I'm new to rdiff-backup, and I've got a couple questions about a potential
> backup scheme.  Background:  I'm using a package (snapraid) on my server
> that protects its data against bit-rot & drive failure.  I periodically
> rsync my (windows) laptop to that (linux) server in order to have a bit-rot
> protected clone of the laptop.  However, snapraid isn't really meant for
> use on files that are constantly changing (for instance, mail files,
> Firefox profile, etc.), so these snapshots don't happen that frequently.
>
> I'd like to use rdiff-backup to do nightly backups of the laptop, for the
> versioning & accidental delete protection.  The backup target would be a
> non-snapraid-protected area on the same server.  But rather than maintain
> two full copies of all the data, it would be great if the overlaps were
> consolidated using hard links.  So the plan would be:  rdiff-backup nightly
> to a "master" archive (non-snapraid), then snapshot the rdiff-backup master
> weekly, using rsync --link-dest, into a snapraid-protected area.
>
> The actual question:  Does this make sense, or is there some danger in
> hardlinking into an active rdiff-backup archive?
>
> Second question:  is it possible to run rdiff-backup on a subset of the
> total more frequently, but with the same target archive?  For instance, a
> nightly backup of everything, then an hourly backup of certain critical
> files/dirs.  My grasp of rsync --include/exclude filters is marginal, but
> my gut says that some special magic is required to keep the hourly backup
> from blowing away excluded files (while correctly deleting files removed
> from the "hourly" areas).
>
> +--
> |This was sent by mi...@houseofpain.org via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
>

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Derek Atkins
Should I be profiling on the backup server or the target/client (the
system being backed up)?
-derek

On Wed, March 30, 2016 1:47 pm, Patrik Dufresne wrote:
> If you want to spend some time debugging this issue, it might be helpful
> to run a profiler.
>
> python -m cProfile -o data.prof /usr/bin/rdiff-backup source destination
>
> Then send us the data.prof. I will use `snakeviz data.prof` to visualize
> the data.
>
> --
> Patrik Dufresne Service Logiciel inc.
> http://www.patrikdufresne.com
> 514-971-6442
> 1-114 rue des Hautbois,
> St-Colomban, QC J5K 2H6
>
> On Wed, Mar 30, 2016 at 1:30 PM, Greg Troxel  wrote:
>
>>
>> "Derek Atkins"  writes:
>>
>> > ... according to repeated netstat the socket buffers are GENERALLY
>> 0,0.
>> I
>> > did see it get up to about 1000, at one point...  OH, I take that
>> back,
>> > the SEND Queue value was just up to 243816 on the target system
>>
>> If it bursts and comes back, it's probably fine, and many systems
>> dynamically size the buffers.  This is unlikely to be the issue in a LAN
>> environment - more like when you have 100 Mb/s pipes and 70ms latencies.
>>
>> >>> Unfortunately bup is not available on all my target platforms.
>> >>
>> >> bup is python with a little C, and thus seems pretty portable.
>> Where
>> >> isn't it working?
>> >
>> > Didn't say it wasn't working.  I said "it is not available", meaning
>> there
>> > is no prepackaged RPM for some of my systems.  I could certainly try
>> to
>> > piece it together, but of course I prefer to use distro-supplied
>> software
>> > wherever possible.  It makes upgrading much easier.
>>
>> I see.  (It's in pkgsrc, which builds pretty much everywhere, but that
>> gets you into a second packaging system.)
>>
>> >> amanda is basically wrappers around dump and tar.  If you have 50
>> >> machines and want to do level 0/1/2 to tape and take tapes offsite,
>> it
>> >> works great, after you pay for the LTO tape drive.
>> >
>> > I don't have 50 machines.  I do have ~10-15.  Pretty much all Linux
>> boxes.
>> >  Not using tape; backups are all on spinning media.
>>
>> amanda can cope with that (files on disk that are virtual tapes), but
>> I'm not at all sure it's the right approach for you.
>>
>> Good luck figuring out where your bottleneck is.
>>
>>
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant




Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Patrik Dufresne
If you want to spend some time debugging this issue, it might be helpful
to run it under a profiler.

python -m cProfile -o data.prof /usr/bin/rdiff-backup source destination

Then send us the data.prof. I will use `snakeviz data.prof` to visualize
the data.
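If snakeviz isn't handy, the same profile can be inspected with the standard library's pstats module (the entry count shown is arbitrary):

```shell
# Load data.prof (written by the cProfile command above) and print the
# 15 entries with the largest cumulative time:
python -c "import pstats; pstats.Stats('data.prof').sort_stats('cumulative').print_stats(15)"
```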

--
Patrik Dufresne Service Logiciel inc.
http://www.patrikdufresne.com/
514-971-6442
1-114 rue des Hautbois,
St-Colomban, QC J5K 2H6

On Wed, Mar 30, 2016 at 1:30 PM, Greg Troxel  wrote:

>
> "Derek Atkins"  writes:
>
> > ... according to repeated netstat the socket buffers are GENERALLY 0,0.
> I
> > did see it get up to about 1000, at one point...  OH, I take that back,
> > the SEND Queue value was just up to 243816 on the target system
>
> If it bursts and comes back, it's probably fine, and many systems
> dynamically size the buffers.  This is unlikely to be the issue in a LAN
> environment - more like when you have 100 Mb/s pipes and 70ms latencies.
>
> >>> Unfortunately bup is not available on all my target platforms.
> >>
> >> bup is python with a little C, and thus seems pretty portable.   Where
> >> isn't it working?
> >
> > Didn't say it wasn't working.  I said "it is not available", meaning
> there
> > is no prepackaged RPM for some of my systems.  I could certainly try to
> > piece it together, but of course I prefer to use distro-supplied software
> > wherever possible.  It makes upgrading much easier.
>
> I see.  (It's in pkgsrc, which builds pretty much everywhere, but that
> gets you into a second packaging system.)
>
> >> amanda is basically wrappers around dump and tar.  If you have 50
> >> machines and want to do level 0/1/2 to tape and take tapes offsite, it
> >> works great, after you pay for the LTO tape drive.
> >
> > I don't have 50 machines.  I do have ~10-15.  Pretty much all Linux
> boxes.
> >  Not using tape; backups are all on spinning media.
>
> amanda can cope with that (files on disk that are virtual tapes), but
> I'm not at all sure it's the right approach for you.
>
> Good luck figuring out where your bottleneck is.
>

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Greg Troxel

"Derek Atkins"  writes:

> ... according to repeated netstat the socket buffers are GENERALLY 0,0.  I
> did see it get up to about 1000, at one point...  OH, I take that back,
> the SEND Queue value was just up to 243816 on the target system

If it bursts and comes back, it's probably fine, and many systems
dynamically size the buffers.  This is unlikely to be the issue in a LAN
environment - more like when you have 100 Mb/s pipes and 70ms latencies.

>>> Unfortunately bup is not available on all my target platforms.
>>
>> bup is python with a little C, and thus seems pretty portable.   Where
>> isn't it working?
>
> Didn't say it wasn't working.  I said "it is not available", meaning there
> is no prepackaged RPM for some of my systems.  I could certainly try to
> piece it together, but of course I prefer to use distro-supplied software
> wherever possible.  It makes upgrading much easier.

I see.  (It's in pkgsrc, which builds pretty much everywhere, but that
gets you into a second packaging system.)

>> amanda is basically wrappers around dump and tar.  If you have 50
>> machines and want to do level 0/1/2 to tape and take tapes offsite, it
>> works great, after you pay for the LTO tape drive.
>
> I don't have 50 machines.  I do have ~10-15.  Pretty much all Linux boxes.
>  Not using tape; backups are all on spinning media.

amanda can cope with that (files on disk that are virtual tapes), but
I'm not at all sure it's the right approach for you.

Good luck figuring out where your bottleneck is.



Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Derek Atkins

On Wed, March 30, 2016 12:47 pm, Greg Troxel wrote:
>
> "Derek Atkins"  writes:
>
>>> I would up the send/receive socket buffers (because it's easy, not
>>> because I think that's the problem), and watch disk/cpu on both sides,
>>> and also run netstat to see if data is piling up in the transmit socket
>>> buffer.
>>
>> Do you mean within rdiff-backup, or at some other level?
>
> On NetBSD I mean bumping up these sysctls
>
> net.inet.tcp.sendspace = 131072
> net.inet.tcp.recvspace = 131072
> net.inet6.tcp6.sendspace = 131072
> net.inet6.tcp6.recvspace = 131072
>
> and presumably that's similar on other systems.

I'll look for the Linux equivalent, however...
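(For reference, a sketch of the rough Linux equivalents, which live under net.core and net.ipv4; the values below are purely illustrative, not tuned advice:)

```
# Sketch of an /etc/sysctl.d fragment (example values):
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
# min / default / max TCP buffer sizes, in bytes:
net.ipv4.tcp_rmem = 4096 131072 4194304
net.ipv4.tcp_wmem = 4096 131072 4194304
```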

> But, if your socket buffers aren't full, that's probably not your
> problem.

... according to repeated netstat the socket buffers are GENERALLY 0,0.  I
did see it get up to about 1000, at one point...  OH, I take that back,
the SEND Queue value was just up to 243816 on the target system

>>> FWIW, I used to use rdiff-backup but found it to be nonrobust on
>>> machines with limited (only a few GB) RAM and hundreds of GB of backup.
>>> I have switched to bup.
>>
>> Unfortunately bup is not available on all my target platforms.
>
> bup is python with a little C, and thus seems pretty portable.   Where
> isn't it working?

Didn't say it wasn't working.  I said "it is not available", meaning there
is no prepackaged RPM for some of my systems.  I could certainly try to
piece it together, but of course I prefer to use distro-supplied software
wherever possible.  It makes upgrading much easier.

> There is also attic and borg which are similar to bup.
>
>> Maybe I should consider amanda or bacula?
>
> amanda is basically wrappers around dump and tar.  If you have 50
> machines and want to do level 0/1/2 to tape and take tapes offsite, it
> works great, after you pay for the LTO tape drive.

I don't have 50 machines.  I do have ~10-15.  Pretty much all Linux boxes.
 Not using tape; backups are all on spinning media.

> Two things to think about:
>
>   do you care about deduplication?  bup does not only per-file but
>   within-file deduplication, so if multiple boxes have the same data it
>   doesn't take up extra space

No.  At least, not at the block level.  I would care about file-level
dedup, but honestly I only care about that from within one server.  If I
have multiple systems that happen to have the exact same config file I
don't particularly care about dedupping that.  I just don't want a daily
copy of /etc/passwd :)

>   Do you really need to back up all platforms, or could you sync from
>   some (Android?) to a machine with more disk and back that up?  I have
>   been using syncthing, which seems to be pretty solid, for syncing
>   among Android and regular computers (BSD and OS X).  (It is written in
>   go so it's not in practice that portable.)

Pretty much all the systems I want to backup are Linux, but different
vintages and such.

Android is different and I back that up differently.  I'm just trying to
maximize performance.  I've got a GigE network and relatively plenty of
encryption performance; I'd like to leverage that in my backup (and
restore) operations.

Thanks,

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Greg Troxel

"Derek Atkins"  writes:

>> I would up the send/receive socket buffers (because it's easy, not
>> because I think that's the problem), and watch disk/cpu on both sides,
>> and also run netstat to see if data is piling up in the transmit socket
>> buffer.
>
> Do you mean within rdiff-backup, or at some other level?

On NetBSD I mean bumping up these sysctls

net.inet.tcp.sendspace = 131072
net.inet.tcp.recvspace = 131072
net.inet6.tcp6.sendspace = 131072
net.inet6.tcp6.recvspace = 131072

and presumably that's similar on other systems.

But, if your socket buffers aren't full, that's probably not your
problem.

>> FWIW, I used to use rdiff-backup but found it to be nonrobust on
>> machines with limited (only a few GB) RAM and hundreds of GB of backup.
>> I have switched to bup.
>
> Unfortunately bup is not available on all my target platforms.

bup is python with a little C, and thus seems pretty portable.   Where
isn't it working?

There is also attic and borg which are similar to bup.

> Maybe I should consider amanda or bacula?

amanda is basically wrappers around dump and tar.  If you have 50
machines and want to do level 0/1/2 to tape and take tapes offsite, it
works great, after you pay for the LTO tape drive.

Two things to think about:

  do you care about deduplication?  bup does not only per-file but
  within-file deduplication, so if multiple boxes have the same data it
  doesn't take up extra space

  Do you really need to back up all platforms, or could you sync from
  some (Android?) to a machine with more disk and back that up?  I have
  been using syncthing, which seems to be pretty solid, for syncing
  among Android and regular computers (BSD and OS X).  (It is written in
  go so it's not in practice that portable.)



Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Derek Atkins
Hey Greg!

On Wed, March 30, 2016 8:13 am, Greg Troxel wrote:
>
> Dominic Raferd  writes:
>
>> It doesn't sound like rdiff-backup is the culprit here. You could try
>> hpn-ssh https://sourceforge.net/projects/hpnssh/ ?
>
> I would be very surprised if normal people have networks available that
> really need hpn-ssh, and 2 MB/s is not that fast.  Surely rsync and
> rdiff-backup are running over ssh, so that should have the same
> transport properties.

Indeed.  Like I said using scp I see 5+ MB/s with the same data set.  Just
using dd over ssh I get 7 MB/s.  But yes, rsync and rdiff-backup are giving me
the same transport properties.  The fact that it's taking 36-40 hours to
backup a system does not give me confidence in the ability (or timeliness)
to restore!

> I would up the send/receive socket buffers (because it's easy, not
> because I think that's the problem), and watch disk/cpu on both sides,
> and also run netstat to see if data is piling up in the transmit socket
> buffer.

Do you mean within rdiff-backup, or at some other level?

I've already tried increasing the blocksize and conn_blocksize numbers in
Globals.py but didn't see any performance difference.

> FWIW, I used to use rdiff-backup but found it to be nonrobust on
> machines with limited (only a few GB) RAM and hundreds of GB of backup.
> I have switched to bup.

Unfortunately bup is not available on all my target platforms.

Maybe I should consider amanda or bacula?

-derek

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant




Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Derek Atkins
Hi,

You are right that it doesn't seem like rdiff-backup alone is the issue;
but to me it sounds like part of the issue is librsync.

From my original email, I'm getting a good 5 MB/s using raw scp.  But I'm
only seeing 1-2MB/s using rsync and/or rdiff-backup.  So there is clearly
overhead with these two protocols over raw ssh/scp.

Of course scp is itself a bottleneck (compared to the 20MB/s capability
on the back-end).  But even getting rsync/rdiff-backup up to scp's
5MB/s would help.

I'll take a look at hpn-ssh, but building that for all targets might be a
major PITA.  I might also look at alternate ciphers.

Thanks,

-derek

On Wed, March 30, 2016 3:29 am, Dominic Raferd wrote:
> It doesn't sound like rdiff-backup is the culprit here. You could try
> hpn-ssh https://sourceforge.net/projects/hpnssh/ ?
>
> On 29 March 2016 at 21:03, Derek Atkins  wrote:
>
>> Hi,
>>
>> On Mon, March 28, 2016 10:56 am, Derek Atkins wrote:
>> > Hi,
>> >
>> > Thank you for taking the time to look at this..
>> >
>> > On Mon, March 28, 2016 10:41 am, Dominic Raferd wrote:
>> >> Is this really your first rdiff-backup to this location? If you have
>> any
>> >> previous rdiff-backup runs to this repository then the situation is
>> >> complicated by rdiff-backup's need to create a new set of reverse
>> diff
>> >> files to be able to regress to previous file contents.
>> >
>> > Yes, this is really the first rdiff-backup to this location.
>> >
>> > A second backup run shortly after the first one completed finished in
>> 55
>> > minutes.
>> >
>> >> What is your /tmp location? rdiff-backup uses this location for some
>> >> operations though not AFAIK for standard backup runs. Still, if /tmp
>> is
>> >> on
>> >> encfs maybe it could be a culprit; you can override rdiff-backup's
>> >> temporary file location with --tempdir and --remote-tempdir.
>> >
>> > If it is truly /tmp then no; /tmp is a ramdisk on the backup server
>> and
>> is
>> > on the root disk on the target server.  Neither are being run through
>> > encfs.
>> >
>> > If, however, you mean the rdiff-backup-data.tmp files, those ARE being
>> run
>> > through encfs.
>> >
>> >> Might also be worth trying --ssh-no-compression.
>> >
>> > I already have "Compression no" set in ~/.ssh/config so I'm not sure
>> what
>> > this would add?
>> >
>> >> Dominic
>> >> http://www.timedicer.co.uk
>> >
>> > -derek
>> >
>> > PS: I'm running a raw rsync command now just to see how it behaves --
>> so
>> > far I'm only seeing about 2MB/s, but it's only been running for 10
>> minutes
>> > or so.
>>
>> My rsync backup just finished.  It copied 202688912 KB in 1599m55.255s
>> so
>> about 2.1MB/s.  Still significantly slower than SCP, but faster than
>> rdiff-backup.
>>
>> The command I ran was:
>>
>> rsync -art -e "ssh -l root -i /dev/null -o Compression=no"
>> root@server:/var/www/ /backups/server
>>
>> :-(
>>
>> -derek
>>
>> >
>> >>
>> >> On 28 March 2016 at 14:37, Derek Atkins  wrote:
>> >>
>> >>> Just a quick update:  I tried making these changes on both sides and
>> it
>> >>> really didn't make a difference.  Full backup of 202852072 Kbytes
>> >>> required 2267m25.913s (previously it took 2346m57.800s, so it only
>> sped
>> >>> up by about 3%).
>> >>>
>> >>> Only thing I have not yet tried is running a raw rsync to see how
>> fast
>> >>> that runs.  I'll do that next.
>> >>>
>> >>> So, back to my original question: anyone have any idea how to get
>> >>> initial
>> >>> transfers to run faster (or indeed any significant data transfers)?
>> >>>
>> >>> Thanks,
>> >>>
>> >>> -derek
>> >>>
>> >>> "Derek Atkins"  writes:
>> >>>
>> >>> > Hi,
>> >>> >
>> >>> > I'm trying to use rdiff-backup to backup a bunch of servers.  One
>> >>> > particular server contains about 160GB of data, but when I try to
>> >>> perform
>> >>> > the rdiff-backup it's saving the data at a measly 1MB/s.
>> >>> >
>> >>> > Here's my configuration:
>> >>> >
>> >>> >   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
>> >>> >
>> >>> > I ran a bunch of tests to try to figure out my bottlenecks.
>> >>> >
>> >>> > I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000)
>> on
>> >>> the
>> >>> > backup server.  Going directly to FreeNAS via NFS (bypassing
>> encfs) I
>> >>> get
>> >>> > 50.2MB/s.  If I run dd directly on the backup server (through
>> encfs)
>> >>> I
>> >>> get
>> >>> > 20.1MB/s.  If I go over SSH from the backup server to the target
>> >>> server
>> >>> > and run the dd on the target server, then write to FreeNas through
>> >>> encfs
>> >>> > declines to 7.6MB/s.
>> >>> >
>> >>> > Note that in those SSH tests, however, I forgot to turn off
>> >>> compression.
>> >>> > When I do that, the throughput for the dd test reduced to 6.6MB/s.
>> >>> (Note
>> >>> > that this is running simultaneously with a running rdiff-backup,
>> so
>> >>> it's
>> >>> > possible that they are reducing performance).
>> >>> >
>> >>> > Then 

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Greg Troxel

Dominic Raferd  writes:

> It doesn't sound like rdiff-backup is the culprit here. You could try
> hpn-ssh https://sourceforge.net/projects/hpnssh/ ?

I would be very surprised if normal people have networks available that
really need hpn-ssh, and 2 MB/s is not that fast.  Surely rsync and
rdiff-backup are running over ssh, so that should have the same
transport properties.

I would up the send/receive socket buffers (because it's easy, not
because I think that's the problem), and watch disk/cpu on both sides,
and also run netstat to see if data is piling up in the transmit socket
buffer.
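On Linux, for instance, one way to watch those queues is iproute2's ss (assuming it's installed; the port-22 filter just isolates the ssh session):

```shell
# Recv-Q/Send-Q show bytes queued per connection; a Send-Q that stays
# large on the sending host suggests the pipe, not the application,
# is the limit.
ss -tn state established '( sport = :22 or dport = :22 )'
```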

FWIW, I used to use rdiff-backup but found it to be nonrobust on
machines with limited (only a few GB) RAM and hundreds of GB of backup.
I have switched to bup.



Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-30 Thread Dominic Raferd
It doesn't sound like rdiff-backup is the culprit here. You could try
hpn-ssh https://sourceforge.net/projects/hpnssh/ ?

On 29 March 2016 at 21:03, Derek Atkins  wrote:

> Hi,
>
> On Mon, March 28, 2016 10:56 am, Derek Atkins wrote:
> > Hi,
> >
> > Thank you for taking the time to look at this..
> >
> > On Mon, March 28, 2016 10:41 am, Dominic Raferd wrote:
> >> Is this really your first rdiff-backup to this location? If you have any
> >> previous rdiff-backup runs to this repository then the situation is
> >> complicated by rdiff-backup's need to create a new set of reverse diff
> >> files to be able to regress to previous file contents.
> >
> > Yes, this is really the first rdiff-backup to this location.
> >
> > A second backup run shortly after the first one completed finished in 55
> > minutes.
> >
> >> What is your /tmp location? rdiff-backup uses this location for some
> >> operations though not AFAIK for standard backup runs. Still, if /tmp is
> >> on
> >> encfs maybe it could be a culprit; you can override rdiff-backup's
> >> temporary file location with --tempdir and --remote-tempdir.
> >
> > If it is truly /tmp then no; /tmp is a ramdisk on the backup server and
> is
> > on the root disk on the target server.  Neither are being run through
> > encfs.
> >
> > If, however, you mean the rdiff-backup-data.tmp files, those ARE being
> run
> > through encfs.
> >
> >> Might also be worth trying --ssh-no-compression.
> >
> > I already have "Compression no" set in ~/.ssh/config so I'm not sure what
> > this would add?
> >
> >> Dominic
> >> http://www.timedicer.co.uk
> >
> > -derek
> >
> > PS: I'm running a raw rsync command now just to see how it behaves -- so
> > far I'm only seeing about 2MB/s, but it's only been running for 10
> minutes
> > or so.
>
> My rsync backup just finished.  It copied 202688912 KB in 1599m55.255s  so
> about 2.1MB/s.  Still significantly slower than SCP, but faster than
> rdiff-backup.
>
> The command I ran was:
>
> rsync -art -e "ssh -l root -i /dev/null -o Compression=no"
> root@server:/var/www/ /backups/server
>
> :-(
>
> -derek
>
> >
> >>
> >> On 28 March 2016 at 14:37, Derek Atkins  wrote:
> >>
> >>> Just a quick update:  I tried making these changes on both sides and it
> >>> really didn't make a difference.  Full backup of 202852072 Kbytes
> >>> required 2267m25.913s (previously it took 2346m57.800s, so it only sped
> >>> up by about 3%).
> >>>
> >>> Only thing I have not yet tried is running a raw rsync to see how fast
> >>> that runs.  I'll do that next.
> >>>
> >>> So, back to my original question: anyone have any idea how to get
> >>> initial
> >>> transfers to run faster (or indeed any significant data transfers)?
> >>>
> >>> Thanks,
> >>>
> >>> -derek
> >>>
> >>> "Derek Atkins"  writes:
> >>>
> >>> > Hi,
> >>> >
> >>> > I'm trying to use rdiff-backup to backup a bunch of servers.  One
> >>> > particular server contains about 160GB of data, but when I try to
> >>> perform
> >>> > the rdiff-backup it's saving the data at a measly 1MB/s.
> >>> >
> >>> > Here's my configuration:
> >>> >
> >>> >   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
> >>> >
> >>> > I ran a bunch of tests to try to figure out my bottlenecks.
> >>> >
> >>> > I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000) on
> >>> the
> >>> > backup server.  Going directly to FreeNAS via NFS (bypassing encfs) I
> >>> get
> >>> > 50.2MB/s.  If I run dd directly on the backup server (through encfs)
> >>> I
> >>> get
> >>> > 20.1MB/s.  If I go over SSH from the backup server to the target
> >>> server
> >>> > and run the dd on the target server, then write to FreeNas through
> >>> encfs
> >>> > declines to 7.6MB/s.
> >>> >
> >>> > Note that in those SSH tests, however, I forgot to turn off
> >>> compression.
> >>> > When I do that, the throughput for the dd test reduced to 6.6MB/s.
> >>> (Note
> >>> > that this is running simultaneously with a running rdiff-backup, so
> >>> it's
> >>> > possible that they are reducing performance).
> >>> >
> >>> > Then I ran an scp test to the same target server; copying about 1.4GB
> >>> of
> >>> > photos.  Files ranged in size from 10KB to 5MB.  When run in standard
> >>> mode
> >>> > (displaying each file status) I got 4.4MB/s.  Running in quiet mode I
> >>> get
> >>> > 5.1MB/s.
> >>> >
> >>> > So clearly the bottleneck is in rdiff-backup -- performance (IMHO)
> >>> should
> >>> > not be significantly slower than the last dd-over-ssh test.  It
> >>> appears
> >>> > rdiff-backup is slowing me down by a factor of 5x throughput versus
> >>> scp.
> >>> >
> >>> > I found a message from Ben from 2005 where he suggests increasing the
> >>> > blocksize and conn_bufsiz settings in Globals.py:
> >>> >
> >>>
> https://lists.gnu.org/archive/html/rdiff-backup-users/2005-10/msg00062.html
> >>> >
> >>> > What he didn't say was whether this needed to be changed on the
> >>> target
> >>> > server, the backup 

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-29 Thread Derek Atkins
Hi,

On Mon, March 28, 2016 10:56 am, Derek Atkins wrote:
> Hi,
>
> Thank you for taking the time to look at this..
>
> On Mon, March 28, 2016 10:41 am, Dominic Raferd wrote:
>> Is this really your first rdiff-backup to this location? If you have any
>> previous rdiff-backup runs to this repository then the situation is
>> complicated by rdiff-backup's need to create a new set of reverse diff
>> files to be able to regress to previous file contents.
>
> Yes, this is really the first rdiff-backup to this location.
>
> A second backup run shortly after the first one completed finished in 55
> minutes.
>
>> What is your /tmp location? rdiff-backup uses this location for some
>> operations though not AFAIK for standard backup runs. Still, if /tmp is
>> on
>> encfs maybe it could be a culprit; you can override rdiff-backup's
>> temporary file location with --tempdir and --remote-tempdir.
>
> If it is truly /tmp then no; /tmp is a ramdisk on the backup server and is
> on the root disk on the target server.  Neither are being run through
> encfs.
>
> If, however, you mean the rdiff-backup-data.tmp files, those ARE being run
> through encfs.
>
>> Might also be worth trying --ssh-no-compression.
>
> I already have "Compression no" set in ~/.ssh/config so I'm not sure what
> this would add?
>
>> Dominic
>> http://www.timedicer.co.uk
>
> -derek
>
> PS: I'm running a raw rsync command now just to see how it behaves -- so
> far I'm only seeing about 2MB/s, but it's only been running for 10 minutes
> or so.

My rsync backup just finished.  It copied 202688912 KB in 1599m55.255s  so
about 2.1MB/s.  Still significantly slower than SCP, but faster than
rdiff-backup.

The command I ran was:

rsync -art -e "ssh -l root -i /dev/null -o Compression=no"
root@server:/var/www/ /backups/server

:-(

-derek

>
>>
>> On 28 March 2016 at 14:37, Derek Atkins  wrote:
>>
>>> Just a quick update:  I tried making these changes on both sides and it
>>> really didn't make a difference.  Full backup of 202852072 Kbytes
>>> required 2267m25.913s (previously it took 2346m57.800s, so it only sped
>>> up by about 3%).
>>>
>>> Only thing I have not yet tried is running a raw rsync to see how fast
>>> that runs.  I'll do that next.
>>>
>>> So, back to my original question: anyone have any idea how to get
>>> initial
>>> transfers to run faster (or indeed any significant data transfers)?
>>>
>>> Thanks,
>>>
>>> -derek
>>>
>>> "Derek Atkins"  writes:
>>>
>>> > Hi,
>>> >
>>> > I'm trying to use rdiff-backup to backup a bunch of servers.  One
>>> > particular server contains about 160GB of data, but when I try to
>>> perform
>>> > the rdiff-backup it's saving the data at a measly 1MB/s.
>>> >
>>> > Here's my configuration:
>>> >
>>> >   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
>>> >
>>> > I ran a bunch of tests to try to figure out my bottlenecks.
>>> >
>>> > I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000) on
>>> the
>>> > backup server.  Going directly to FreeNAS via NFS (bypassing encfs) I
>>> get
>>> > 50.2MB/s.  If I run dd directly on the backup server (through encfs)
>>> I
>>> get
>>> > 20.1MB/s.  If I go over SSH from the backup server to the target
>>> server
>>> > and run the dd on the target server, then write to FreeNas through
>>> encfs
>>> > declines to 7.6MB/s.
>>> >
>>> > Note that in those SSH tests, however, I forgot to turn off
>>> compression.
>>> > When I do that, the throughput for the dd test reduced to 6.6MB/s.
>>> (Note
>>> > that this is running simultaneously with a running rdiff-backup, so
>>> it's
>>> > possible that they are reducing performance).
>>> >
>>> > Then I ran an scp test to the same target server; copying about 1.4GB
>>> of
>>> > photos.  Files ranged in size from 10KB to 5MB.  When run in standard
>>> mode
>>> > (displaying each file status) I got 4.4MB/s.  Running in quiet mode I
>>> get
>>> > 5.1MB/s.
>>> >
>>> > So clearly the bottleneck is in rdiff-backup -- performance (IMHO)
>>> should
>>> > not be significantly slower than the last dd-over-ssh test.  It
>>> appears
>>> > rdiff-backup is slowing me down by a factor of 5x throughput versus
>>> scp.
>>> >
>>> > I found a message from Ben from 2005 where he suggests increasing the
>>> > blocksize and conn_bufsiz settings in Globals.py:
>>> >
>>> https://lists.gnu.org/archive/html/rdiff-backup-users/2005-10/msg00062.html
>>> >
>>> > What he didn't say was whether this needed to be changed on the
>>> target
>>> > server, the backup server, or both.  Nor do I know if that would
>>> actually
>>> > help this situation.
>>> >
>>> > Do you have any ideas?
>>> >
>>> > Thanks,
>>> >
>>> > -derek
>>> >
>>> > PS: According to rpm, both systems are running version 1.2.8.
>>>
>>> --
>>>Derek Atkins 617-623-3745
>>>de...@ihtfp.com www.ihtfp.com
>>>Computer and Internet Security Consultant
>>>

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-28 Thread Dominic Raferd
Is this really your first rdiff-backup to this location? If you have any
previous rdiff-backup runs to this repository then the situation is
complicated by rdiff-backup's need to create a new set of reverse diff
files to be able to regress to previous file contents.

What is your /tmp location? rdiff-backup uses this location for some
operations though not AFAIK for standard backup runs. Still, if /tmp is on
encfs maybe it could be a culprit; you can override rdiff-backup's
temporary file location with --tempdir and --remote-tempdir.
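For example (paths purely illustrative), the overrides look like:

```
# Hypothetical invocation: keep temporary files off encfs on both sides
# (--tempdir applies locally, --remote-tempdir on the other end).
rdiff-backup --tempdir /var/tmp/rdiff \
    --remote-tempdir /var/tmp/rdiff \
    root@server::/var/www /backups/server
```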

Might also be worth trying --ssh-no-compression.

Dominic
http://www.timedicer.co.uk


On 28 March 2016 at 14:37, Derek Atkins  wrote:

> Just a quick update:  I tried making these changes on both sides and it
> really didn't make a difference.  Full backup of 202852072 Kbytes
> required 2267m25.913s (previously it took 2346m57.800s, so it only sped
> up by about 3%).
>
> Only thing I have not yet tried is running a raw rsync to see how fast
> that runs.  I'll do that next.
>
> So, back to my original question: anyone have any idea how to get initial
> transfers to run faster (or indeed any significant data transfers)?
>
> Thanks,
>
> -derek
>
> "Derek Atkins"  writes:
>
> > Hi,
> >
> > I'm trying to use rdiff-backup to backup a bunch of servers.  One
> > particular server contains about 160GB of data, but when I try to perform
> > the rdiff-backup it's saving the data at a measly 1MB/s.
> >
> > Here's my configuration:
> >
> >   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
> >
> > I ran a bunch of tests to try to figure out my bottlenecks.
> >
> > I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000) on the
> > backup server.  Going directly to FreeNAS via NFS (bypassing encfs) I get
> > 50.2MB/s.  If I run dd directly on the backup server (through encfs) I
> get
> > 20.1MB/s.  If I go over SSH from the backup server to the target server
> > and run the dd on the target server, then write to FreeNas through encfs
> > declines to 7.6MB/s.
> >
> > Note that in those SSH tests, however, I forgot to turn off compression.
> > When I do that, the throughput for the dd test reduced to 6.6MB/s.  (Note
> > that this is running simultaneously with a running rdiff-backup, so it's
> > possible that they are reducing performance).
> >
> > Then I ran an scp test to the same target server; copying about 1.4GB of
> > photos.  Files ranged in size from 10KB to 5MB.  When run in standard
> mode
> > (displaying each file status) I got 4.4MB/s.  Running in quiet mode I get
> > 5.1MB/s.
> >
> > So clearly the bottleneck is in rdiff-backup -- performance (IMHO) should
> > not be significantly slower than the last dd-over-ssh test.  It appears
> > rdiff-backup is slowing me down by a factor of 5x throughput versus scp.
> >
> > I found a message from Ben from 2005 where he suggests increasing the
> > blocksize and conn_bufsiz settings in Globals.py:
> >
> https://lists.gnu.org/archive/html/rdiff-backup-users/2005-10/msg00062.html
> >
> > What he didn't say was whether this needed to be changed on the target
> > server, the backup server, or both.  Nor do I know if that would actually
> > help this situation.
> >
> > Do you have any ideas?
> >
> > Thanks,
> >
> > -derek
> >
> > PS: According to rpm, both systems are running version 1.2.8.
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
>
___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-28 Thread Derek Atkins
Hi,

Thank you for taking the time to look at this..

On Mon, March 28, 2016 10:41 am, Dominic Raferd wrote:
> Is this really your first rdiff-backup to this location? If you have any
> previous rdiff-backup runs to this repository then the situation is
> complicated by rdiff-backup's need to create a new set of reverse diff
> files to be able to regress to previous file contents.

Yes, this is really the first rdiff-backup to this location.

A second backup run shortly after the first one completed finished in 55
minutes.

> What is your /tmp location? rdiff-backup uses this location for some
> operations though not AFAIK for standard backup runs. Still, if /tmp is on
> encfs maybe it could be a culprit; you can override rdiff-backup's
> temporary file location with --tempdir and --remote-tempdir.

If it is truly /tmp then no; /tmp is a ramdisk on the backup server and is
on the root disk on the target server.  Neither are being run through
encfs.

If, however, you mean the rdiff-backup-data.tmp files, those ARE being run
through encfs.
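For anyone wanting to try Dominic's suggestion, the two overrides can be combined in a single invocation. A minimal sketch of the command builder follows; all paths in it are hypothetical examples, not taken from this setup:

```python
def backup_cmd(src, dest, tempdir=None, remote_tempdir=None):
    """Build an rdiff-backup command line that keeps temporary files off a
    slow (e.g. encfs-backed) filesystem. All paths used here are hypothetical."""
    cmd = ["rdiff-backup"]
    if tempdir:
        cmd += ["--tempdir", tempdir]                 # temp dir on the local side
    if remote_tempdir:
        cmd += ["--remote-tempdir", remote_tempdir]   # temp dir on the remote side
    return cmd + [src, dest]

print(backup_cmd("root@server::/home", "/backups/server",
                 tempdir="/fast/tmp", remote_tempdir="/var/tmp"))
```

Pointing --tempdir at a ramdisk-backed location (as /tmp already is here) avoids double-encrypting scratch files through encfs.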

> Might also be worth trying --ssh-no-compression.

I already have "Compression no" set in ~/.ssh/config so I'm not sure what
this would add?

> Dominic
> http://www.timedicer.co.uk

-derek

PS: I'm running a raw rsync command now just to see how it behaves -- so
far I'm only seeing about 2MB/s, but it's only been running for 10 minutes
or so.

>
> On 28 March 2016 at 14:37, Derek Atkins  wrote:
>
>> Just a quick update:  I tried making these changes on both sides and it
>> really didn't make a difference.  Full backup of 202852072 Kbytes
>> required 2267m25.913s (previously it took 2346m57.800s, so it only sped
>> up by about 3%).
>>
>> Only thing I have not yet tried is running a raw rsync to see how fast
>> that runs.  I'll do that next.
>>
>> So, back to my original question: anyone have any idea how to get initial
>> transfers to run faster (or indeed any significant data transfers)?
>>
>> Thanks,
>>
>> -derek
>>
>> "Derek Atkins"  writes:
>>
>> > Hi,
>> >
>> > I'm trying to use rdiff-backup to backup a bunch of servers.  One
>> > particular server contains about 160GB of data, but when I try to
>> perform
>> > the rdiff-backup it's saving the data at a measly 1MB/s.
>> >
>> > Here's my configuration:
>> >
>> >   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
>> >
>> > I ran a bunch of tests to try to figure out my bottlenecks.
>> >
>> > I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000) on
>> the
>> > backup server.  Going directly to FreeNAS via NFS (bypassing encfs) I
>> get
>> > 50.2MB/s.  If I run dd directly on the backup server (through encfs) I
>> get
>> > 20.1MB/s.  If I go over SSH from the backup server to the target
>> server
>> > and run the dd on the target server, then the write to FreeNAS through
>> encfs
>> > declines to 7.6MB/s.
>> >
>> > Note that in those SSH tests, however, I forgot to turn off
>> compression.
>> > When I do that, the throughput for the dd test dropped to 6.6MB/s.
>> (Note
>> > that this is running simultaneously with a running rdiff-backup, so
>> it's
>> > possible that they are reducing performance).
>> >
>> > Then I ran an scp test to the same target server; copying about 1.4GB
>> of
>> > photos.  Files ranged in size from 10KB to 5MB.  When run in standard
>> mode
>> > (displaying each file status) I got 4.4MB/s.  Running in quiet mode I
>> get
>> > 5.1MB/s.
>> >
>> > So clearly the bottleneck is in rdiff-backup -- performance (IMHO)
>> should
>> > not be significantly slower than the last dd-over-ssh test.  It
>> appears
>> > rdiff-backup is slowing me down by a factor of 5x throughput versus
>> scp.
>> >
>> > I found a message from Ben from 2005 where he suggests increasing the
>> > blocksize and conn_bufsiz settings in Globals.py:
>> >
>> https://lists.gnu.org/archive/html/rdiff-backup-users/2005-10/msg00062.html
>> >
>> > What he didn't say was whether this needed to be changed on the target
>> > server, the backup server, or both.  Nor do I know if that would
>> actually
>> > help this situation.
>> >
>> > Do you have any ideas?
>> >
>> > Thanks,
>> >
>> > -derek
>> >
>> > PS: According to rpm, both systems are running version 1.2.8.
>>
>> --
>>Derek Atkins 617-623-3745
>>de...@ihtfp.com www.ihtfp.com
>>Computer and Internet Security Consultant
>>
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant



Re: [rdiff-backup-users] rdiff-backup performance -- slow operation on initial backup?

2016-03-28 Thread Derek Atkins
Just a quick update:  I tried making these changes on both sides and it
really didn't make a difference.  Full backup of 202852072 Kbytes
required 2267m25.913s (previously it took 2346m57.800s, so it only sped
up by about 3%).
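Those figures can be sanity-checked with a few lines of arithmetic; they work out to roughly 1.5 MB/s sustained throughput and an improvement of only about 3.4%:

```python
# Figures quoted above: 202852072 KB backed up in 2267m25.913s,
# versus 2346m57.800s before the Globals.py tuning.
kb = 202852072
after_s = 2267 * 60 + 25.913
before_s = 2346 * 60 + 57.800

rate_mb_s = kb / 1024 / after_s              # sustained throughput in MB/s
speedup = (before_s - after_s) / before_s    # fractional improvement

print(round(rate_mb_s, 2), round(100 * speedup, 1))  # → 1.46 3.4
```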

Only thing I have not yet tried is running a raw rsync to see how fast
that runs.  I'll do that next.

So, back to my original question: anyone have any idea how to get initial
transfers to run faster (or indeed any significant data transfers)?

Thanks,

-derek

"Derek Atkins"  writes:

> Hi,
>
> I'm trying to use rdiff-backup to backup a bunch of servers.  One
> particular server contains about 160GB of data, but when I try to perform
> the rdiff-backup it's saving the data at a measly 1MB/s.
>
> Here's my configuration:
>
>   [server] <--ssh--> [backup-server]{encfs} <--nfs--> [freenas]
>
> I ran a bunch of tests to try to figure out my bottlenecks.
>
> I ran a bunch of dd tests (using dd if=/dev/zero bs=1M count=1000) on the
> backup server.  Going directly to FreeNAS via NFS (bypassing encfs) I get
> 50.2MB/s.  If I run dd directly on the backup server (through encfs) I get
> 20.1MB/s.  If I go over SSH from the backup server to the target server
> and run the dd on the target server, then the write to FreeNAS through encfs
> declines to 7.6MB/s.
>
> Note that in those SSH tests, however, I forgot to turn off compression. 
> When I do that, the throughput for the dd test dropped to 6.6MB/s.  (Note
> that this is running simultaneously with a running rdiff-backup, so it's
> possible that they are reducing performance).
>
> Then I ran an scp test to the same target server; copying about 1.4GB of
> photos.  Files ranged in size from 10KB to 5MB.  When run in standard mode
> (displaying each file status) I got 4.4MB/s.  Running in quiet mode I get
> 5.1MB/s.
>
> So clearly the bottleneck is in rdiff-backup -- performance (IMHO) should
> not be significantly slower than the last dd-over-ssh test.  It appears
> rdiff-backup is slowing me down by a factor of 5x throughput versus scp.
>
> I found a message from Ben from 2005 where he suggests increasing the
> blocksize and conn_bufsiz settings in Globals.py:
> https://lists.gnu.org/archive/html/rdiff-backup-users/2005-10/msg00062.html
>
> What he didn't say was whether this needed to be changed on the target
> server, the backup server, or both.  Nor do I know if that would actually
> help this situation.
>
> Do you have any ideas?
>
> Thanks,
>
> -derek
>
> PS: According to rpm, both systems are running version 1.2.8.

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant
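For reference, the Globals.py tuning from Ben's 2005 message can be applied without editing the installed file, e.g. by patching the module before starting the backup. The snippet below is only a sketch: it uses a stand-in namespace with placeholder numbers, and the attribute names `blocksize` and `conn_bufsize` are assumptions about the 1.2.8 Globals module. Since these settings are not negotiated over the connection (also an assumption), the safest answer to Derek's question is to apply them on both machines.

```python
from types import SimpleNamespace

# Stand-in for "import rdiff_backup.Globals as Globals" on a real install.
# Attribute names and default values below are placeholders/assumptions.
Globals = SimpleNamespace(blocksize=131072, conn_bufsize=98304)

def tune(globals_mod, **overrides):
    """Apply overrides only to attributes that already exist, so that a
    setting renamed between versions fails loudly instead of silently
    doing nothing."""
    for name, value in overrides.items():
        if not hasattr(globals_mod, name):
            raise AttributeError("unknown Globals setting: %s" % name)
        setattr(globals_mod, name, value)
    return globals_mod

tune(Globals, blocksize=512 * 1024, conn_bufsize=1024 * 1024)
print(Globals.blocksize, Globals.conn_bufsize)  # → 524288 1048576
```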



Re: [rdiff-backup-users] rdiff-backup-users Digest, Vol 153, Issue 2

2015-08-14 Thread Carter J. Castor
unsubscribe rdiff-backup-users
Carter J. Castor


On Sat, Aug 15, 2015 at 3:01 AM,  rdiff-backup-users-requ...@nongnu.org wrote:


 Today's Topics:

1. Re: State of the rdiff-backup project (Robert Nichols)
2. Re: State of the rdiff-backup project (Tobias Leupold)
3. Re: State of the rdiff-backup project (Claus-Justus Heine)
4. Re: State of the rdiff-backup project (Tobias Leupold)
5. Re: State of the rdiff-backup project (Dominic Raferd)
6. Re: State of the rdiff-backup project (Claus-Justus Heine)
7. Re: State of the rdiff-backup project (Claus-Justus Heine)


 --

 Message: 1
 Date: Fri, 14 Aug 2015 12:12:18 -0500
 From: Robert Nichols rnicholsnos...@comcast.net
 To: rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] State of the rdiff-backup project
 Message-ID: mql7hj$gre$1...@ger.gmane.org
 Content-Type: text/plain; charset=windows-1252; format=flowed

 On 08/13/2015 02:16 PM, Claus-Justus Heine wrote:
 I'm really no Python guy. But: I really would like to have the
 "regression takes ages" and "crowded directories take ages" issues
 resolved. OR OR OR: I would like to understand this issue. I cannot claim
 that I will be able to fix it. But I would very much like to understand
 what the heck is going on there. It just doesn't feel sane.

 I doubt that the "regression takes ages" problem can be fixed within
 rdiff-backup. It's inherently a complex operation that requires
 searching throughout the archive for things that aren't consistent with
 the previous state. Remember that you can't trust that the latest
 metadata files are consistent with the current state of the mirror and
 increments.  In large part it's due to use of the filesystem as a
 database, with bits of information scattered in file names in the
 increments directory and various metadata files. You're not going to
 change that without a major rewrite.

 I suppose one solution to the regression issue is to store the archive
 in a filesystem or LVM volume that supports snapshots.  Rather than let
 rdiff-backup do the regression, stop it and restore the snapshot. I
 suspect the penalty in space (transient, until the snapshot is deleted)
 and performance for the backup would be serious. And that still leaves
 the issue of regressing more than the last level.

 Like you, I'm no Python guy. Every time I try to study it, I end up as a
 lump in a snake's belly. I think it's because there are some things
 about the language that I hate (starting with the use of whitespace as a
 syntax element) and the incompatibility of major versions. And then
 there is the tendency of Python programmers to believe that stack
 backtraces are an acceptable substitute for meaningful error messages.
 It all leaves a bad taste that I just can't get around.

 --
 Bob Nichols NOSPAM is really part of my email address.
  Do NOT delete it.




 --

 Message: 2
 Date: Fri, 14 Aug 2015 19:30:49 +0200
 From: Tobias Leupold tobias.leup...@web.de
 To: rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] State of the rdiff-backup project
 Message-ID: 1924852.jOvLUf9yh1@skoni
 Content-Type: text/plain; charset=us-ascii

 You're not going to change that without a major rewrite.

 Like you, I'm no Python guy.

 I like Python. And I like rdiff-backup. But perhaps some skilled programmer
 has the energy to seize on the questionable state of rdiff-backup by
 carrying its concept forward and rewriting a similar program from scratch,
 e.g. in C++. Perhaps with some metadata database to solve the regression
 issue (which really sucks). And so on.

 Not that I would be remotely skilled enough in C++ to do so ... I'm only
 thinking about what could be done ...



 --

 Message: 3
 Date: Fri, 14 Aug 2015 20:43:18 +0200
 From: Claus-Justus Heine hims...@claus-justus-heine.de
 To: rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] State of the rdiff-backup project
 Message-ID: 55ce36c6.5030...@claus-justus-heine.de
 Content-Type: text/plain; charset=utf-8


 On 14.08.2015 at 19:30, Tobias Leupold wrote:
 You're not going to change that without a major rewrite.

 Like you, I'm no Python guy.

 I do not have any serious objections against Python. It's widely used,
 and the 

Re: [rdiff-backup-users] rdiff-backup destination on exFAT?

2014-03-11 Thread Ronny Standtke
 Have you tried the --no-acls option? There's also one called --no-eas. 

I did but, unfortunately, this didn't change anything. rdiff-backup
still crashes with the same exception.

It looks like rdiff-backup runs into a failed assertion (assert not
upper_a.lstat()) while checking the exFAT destination in
fs_abilities.py in set_case_sensitive_readwrite because exFAT has
strange file name case-sensitivity:
The file names are created case-preserving (i.e. you can create a file
"Test.txt" and it will show up like this, with lower and upper case
letters), but at the same time exFAT is also case-insensitive: when you
already have a file "Test.txt" you cannot create another file
"test.txt"; to exFAT they are both identical.
I just wonder why rdiff-backup does not crash with the same exception
when using a destination on FAT32, as far as I know it has the same
properties regarding file name cases...
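The failed assertion boils down to a filesystem probe along these lines. This is a simplified sketch, not rdiff-backup's actual fs_abilities code: create a lower-case file and see whether the upper-case name resolves to the same file.

```python
import os
import tempfile

def is_case_sensitive(dirpath):
    """Return True if dirpath lives on a case-sensitive filesystem.
    On a case-preserving but case-insensitive fs such as exFAT or FAT32,
    the upper-case name resolves to the lower-case file just created."""
    lower = os.path.join(dirpath, "case_probe_a")
    upper = os.path.join(dirpath, "CASE_PROBE_A")
    with open(lower, "w"):
        pass
    try:
        return not os.path.exists(upper)
    finally:
        os.remove(lower)  # leave no probe file behind

print(is_case_sensitive(tempfile.mkdtemp()))
```

Running this against an exFAT or FAT32 mount should return False, which matches Ronny's observation that both filesystems share the same case behaviour.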

Cheers

Ronny



Re: [rdiff-backup-users] rdiff-backup destination on exFAT?

2014-03-10 Thread Thomas Harold
On 3/8/2014 12:57 PM, Ronny Standtke wrote:
 Hi all
 
 Is there anybody out there who successfully used an rdiff-backup
 destination on an exFAT filesystem?
 
 Whatever I try, it always fails and just produces the following console
 output, when running rdiff-backup with the -v5 switch on the current
 Ubuntu release:
 
 Using rdiff-backup version 1.2.8
 Cannot change permissions on target directory.
 Unable to import win32security module. Windows ACLs
 not supported by filesystem at /tmp/source
 escape_dos_devices not required by filesystem at /tmp/source
 -

Have you tried the --no-acls option? There's also one called --no-eas.





Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-22 Thread Robert Nichols

On 02/19/2014 06:28 PM, sunfun wrote:

Am I missing something, or do I have to go through a restore process to even
take a peek at the text in two versions of a text file?


There is a FUSE tool rdiff-backup-fs that mounts the backup directory as
a virtual filesystem and hides the restore process needed to view older
versions.  I've never used it myself, but you can find all the particulars
via   https://www.google.com/search?q=rdiff-backup-fs .

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-22 Thread Ronny Standtke
On 22.02.2014 17:57, Robert Nichols wrote:
 On 02/19/2014 06:28 PM, sunfun wrote:
 Am I missing something, or do I have to go through a restore process
 to even
 take a peek at the text in two versions of a text file?


You can also use JBackpack which has an easy version browser including a
preview feature:
http://www.nongnu.org/jbackpack/

Disclaimer: I am (more or less) the maintainer of JBackpack.

Cheers

Ronny



Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-21 Thread Chris Wilson

Hi Jeff,

On Wed, 19 Feb 2014, sunfun wrote:

I used the --no-compression argument, but can't seem to find a way to 
avoid the restoration process of the .diff version files, and just 
keep the versioned text files as text files!


Am I missing something, or do I have to go through a restore process to 
even take a peek at the text in two versions of a text file?


I'm afraid you do. Rdiff-backup uses rdiff for storing differences between 
versions. Nothing else. The only way to restore old versions (that I 
know) is rdiff-backup -r.


If you want to store complete copies of old versions, then I'm afraid you 
need a different tool. Perhaps dirvish?
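The `rdiff-backup -r` restore Chris mentions can be sketched as below; the repository path, timestamp, and output path are made-up examples, and "2D" is rdiff-backup's shorthand for "two days ago":

```python
def restore_cmd(repo_file, when, out_path):
    """Command line to materialise an old version of one file from an
    rdiff-backup repository. 'when' uses rdiff-backup's time syntax,
    e.g. "2D" (two days ago) or an ISO date string."""
    return ["rdiff-backup", "-r", when, repo_file, out_path]

# Hypothetical example; run it with subprocess.check_call() if desired.
print(restore_cmd("/backups/repo/notes.txt", "2D", "/tmp/notes-2-days-ago.txt"))
```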


Cheers, Chris.
--
_ __ _
\  __/ / ,__(_)_  | Chris Wilson chris+...@qwirx.com Cambs UK |
/ (_/ ,\/ _/ /_ \ | Security/C/C++/Java/Ruby/Perl/SQL Developer |
\__/_/_/_//_/___/ | We are GNU : free your mind  your software |



Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-21 Thread moparisthebest
Hello Jeff,

In this particular case, it's a matter of using the right tool for the
right job.  If you want to store previous versions of text files and be
able to go back and view history easily, you want version control.  I
recommend git or mercurial.

Have a good one!

On 02/19/2014 07:28 PM, sunfun wrote:
 Just spent an hour unsuccessfully testing rdiff-backup with an obvious
 and EXTREMELY simple task...

 Create a text file. Save it. Run rdiff-backup. Modify the text file.
 Save it. Run rdiff-backup. Look at the two versions with a text editor.



 I used the --no-compression argument, but can't seem to find a way to
 avoid the restoration process of the .diff version files, and just
 keep the versioned text files as text files!

 Am I missing something, or do I have to go through a restore process
 to even take a peek at the text in two versions of a text file?



 many thanks for all the work put in on the rdiff-backup project!
 Jeff











Re: [rdiff-backup-users] rdiff-backup should have MUCH simpler defaults

2014-02-20 Thread Joschka Tillmanns
Okay, if you have a look at the increments directory inside
rdiff-backup-data you will find a .diff file which contains the difference
between the two versions, created by the rsync algorithm rdiff-backup uses
to store the minimal amount of data.
On 20.02.2014 23:56, sunfun sun...@cfl.rr.com wrote:

 Just spent an hour unsuccessfully testing rdiff-backup with an obvious and
 EXTREMELY simple task...

 Create a text file. Save it. Run rdiff-backup. Modify the text file. Save
 it. Run rdiff-backup. Look at the two versions with a text editor.



 I used the --no-compression argument, but can't seem to find a way to
  avoid the restoration process of the .diff version files, and just keep
 the versioned text files as text files!

  Am I missing something, or do I have to go through a restore process
  to even take a peek at the text in two versions of a text file?



 many thanks for all the work put in on the rdiff-backup project!
 Jeff









Re: [rdiff-backup-users] rdiff-backup.org domain renewed

2013-12-15 Thread Edward Ned Harvey (rdiff-backup)
 From: rdiff-backup-users-bounces+rdiff-
 backup=nedharvey@nongnu.org [mailto:rdiff-backup-users-
 bounces+rdiff-backup=nedharvey@nongnu.org] On Behalf Of Dave
 Kempe
 
 we just renewed rdiff-backup.org domain rego. It redirects
 to http://www.nongnu.org/rdiff-backup/ .
 Anything else anyone wants to do with it? We are happy to continue
 rego/hosting/redirection whatever.

Thank you.   :-)

Re: [rdiff-backup-users] rdiff says i/o error

2013-10-18 Thread Dominic Raferd

The error seems to have been triggered by this file:

/media/b080d187-ad00-4b4a-9721-fcbe5e839827/Fedora15-2Backup/rdiff-backup-data/increments/home/eric/.thumbnails/normal/ad2df772086dcc00f02c870a857435bd.png.2013-04-14T20:24:22-04:00.diff

Perhaps this file is corrupt?

Perhaps a previous run (perhaps the one which generated the 'IOError: 
[Errno 30] Read-only file system' error) damaged the repository, and 
this may in turn have been caused by an imperfect physical connection to 
the backup media. In which case make sure the backup drive is on a solid 
connection and then try regressing the archive with rdiff-backup 
--check-destination-dir.
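That recovery step can be scripted. A sketch follows; the repository path is a made-up example, and the command is only built, not executed, unless run=True:

```python
import subprocess

def regress(repo, run=False):
    """Regress a repository that a failed or interrupted backup left in an
    inconsistent state, using rdiff-backup's --check-destination-dir.
    The repo path used in the example below is hypothetical."""
    cmd = ["rdiff-backup", "--check-destination-dir", repo]
    if run:
        subprocess.check_call(cmd)  # raises CalledProcessError on failure
    return cmd

print(regress("/media/backupdrive/Fedora15-2Backup"))
```

Running it on a flaky USB or SATA connection risks compounding the damage, so reseat the drive (or check dmesg for I/O errors) first, as Dominic advises.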


I also note that the permissions of your Fedora15-2Backup folder are 
shown as 'drwxr-xr-x.' - that trailing dot indicates an SELinux security 
context, but no other alternate access method. Maybe this has something 
to do with your problem?


Dominic

On 18/10/2013 01:56, Eric Beversluis wrote:

I don't understand the following. Can someone help me?

The rdiff-backups were made on a fedora 17 machine. Last week I copied
the latest versions to a Ubuntu 12.04 machine I was using while the
other was out of commission. Tonight I tried to do an update to the
rdiff-backup files from the Ubuntu machine, to move the files back to
the fedora machine, which has been updated, and I'm getting the result
indicated below. A previous attempt resulted in IOError: [Errno 30]
Read-only file system along with lots of python stuff, but this time I
didn't get the read only file system error.



root@eric-lenovo:/media# rdiff-backup
--exclude /home/eric/.evolution/cache/tmp
--exclude /home/eric/.config/google-chrome/SingletonSocket
--exclude /home/eric/.gtk-bookmarks/.gvfs --exclude
/home/eric/VirtualBox VMs --exclude /home/eric/repository
--include /home/eric --include /var/www --exclude
'**' / /media/b080d187-ad00-4b4a-9721-fcbe5e839827/Fedora15-2Backup

Previous backup seems to have failed, regressing destination now.
Exception '[Errno 5] Input/output error:
'/media/b080d187-ad00-4b4a-9721-fcbe5e839827/Fedora15-2Backup/rdiff-backup-data/increments/home/eric/.thumbnails/normal/ad2df772086dcc00f02c870a857435bd.png.2013-04-14T20:24:22-04:00.diff''
 raised of class '<type 'exceptions.OSError'>':
...
   File "/usr/lib/python2.7/dist-packages/rdiff_backup/rpath.py", line
287, in make_file_dict: return C.make_file_dict(filename)
OSError: [Errno 5] Input/output error:
'/media/b080d187-ad00-4b4a-9721-fcbe5e839827/Fedora15-2Backup/rdiff-backup-data/increments/home/eric/.thumbnails/normal/ad2df772086dcc00f02c870a857435bd.png.2013-04-14T20:24:22-04:00.diff'

root@eric-lenovo:/media# cd b*
root@eric-lenovo:/media/b080d187-ad00-4b4a-9721-fcbe5e839827# ls -al
total 561480
drwxr-xr-x  7 root root   4096 Mar 21  2012 .
drwxr-xr-x  5 root root   4096 Oct 17 20:35 ..
-rw-r--r--  1 eric eric  574345847 Sep  8  2011 evolution-backup.tar.gz
drwxr-xr-x. 5 root root   4096 Oct 11 10:25 Fedora15-2Backup
drwx--  5 root root   4096 Dec 21  2011 Fedora15Backup
drwx--  2 eric users 16384 Mar  4  2010 lost+found
drwxr-xr-x  6 root root   4096 Sep 27  2012 openSUSE11.2Backup
drwx--  4 root root   4096 Mar 21  2012 .Trash-0





Re: [rdiff-backup-users] rdiff-backup and rotated external drives

2013-09-04 Thread Dominic Raferd
Dan, with your strategy each external drive will contain 6 sets of daily 
incremental changes and then 1 set of changes which covers 8 days 
(repeated every fortnight) - and they will be different from one 
another. It's workable but not ideal. John's strategy is more logical.


Do you have the possibility of running rsync from the server direct to 
an offsite location? That way you could keep your rdiff-backup 
repository on your server and regularly sync it to the offsite location, 
so no one has to run around with disk drives.


On 03/09/2013 14:56, John Wesorick wrote:
There is no way for rdiff-backup to know what changed if you swap the 
drives, so it would be missing out on a week of changes. A cheap way 
around this (and what I do) is I have rdiff-backup backup to an 
external drive, and then I clone that external drive to another one 
using rsync, and the second external drive becomes the off-site backup.


John Wesorick
IT Systems Engineer | Riders Discount
866.931.6644 x852 | www.RidersDiscount.com



On Tue, Sep 3, 2013 at 9:52 AM, Dan Joyce d...@abiconsulting.us wrote:


Looking at rdiff-backup as an Ubuntu Server solution. It looks
great, but I can't find any documentation on using multiple
external hard drives and how it will handle them being rotated
off-site. I'd like to have at least two externals that will rotate
weekly. Does rdiff work with this setup?  If so, can I keep the
two drives in sync? In other words, if Drive1 is attached this
week and backs up incrementally daily then Friday I rotate in
Drive2 through next Friday, will Drive2 somehow include all the
incremental changes--and therefore deleted/changed files--that
happened in the week it was off-site? Or, will I only have
every-other week's changes on one drive?

Thanks,

Dan J.








--
*TimeDicer* http://www.timedicer.co.uk: Free File Recovery from Whenever



Re: [rdiff-backup-users] rdiff-backup detects that many files has changed - cifs mount

2013-09-04 Thread Teddai

Hi!

On 07/09/2013 03:29 PM, Attila Strba wrote:
 Sorry for the late reply. Unfortunately I was not able to solve
 the problem. Currently we are running a daily full backup, and try
 to live with it (it takes 5 h instead of 3-5 minutes). I guess it
 has to do with the Samba shares. I am still hoping that I can find
 a solution. If I can't find any, perhaps I will move to Unison.

I found a working solution for us - maybe it helps you as well:

The mount option noserverino for the cifs mounts (backup source
directories) did the trick:

//x.x.x.x/share /path/to/mountpoint cifs username=,.,noserverino

Now rdiff-backup only treats files as changed if they were really
changed and the backup takes 1-2 hours instead of 20-22 hours!
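To confirm that a cifs mount actually picked up the option, one can inspect /proc/mounts. A small parser sketch follows; the sample line is illustrative, not from Markus's setup:

```python
def mount_options(mounts_text, mountpoint):
    """Return the mount-option list for a mountpoint, given text in
    /proc/mounts format (device mountpoint fstype options dump pass)."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            return fields[3].split(",")
    return []

sample = "//192.0.2.1/share /path/to/mountpoint cifs rw,relatime,noserverino 0 0"
print("noserverino" in mount_options(sample, "/path/to/mountpoint"))  # → True
```

In real use, pass `open("/proc/mounts").read()` as the first argument.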

HTH,

Markus







Re: [rdiff-backup-users] rdiff-backup and rotated external drives

2013-09-03 Thread John Wesorick
There is no way for rdiff-backup to know what changed if you swap the
drives, so it would be missing out on a week of changes. A cheap way around
this (and what I do) is I have rdiff-backup backup to an external drive,
and then I clone that external drive to another one using rsync, and the
second external drive becomes the off-site backup.
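The clone step John describes can be sketched as an rsync invocation. Mount points are examples; -a preserves permissions and times, -H preserves hard links, and --delete keeps the off-site drive an exact mirror:

```python
def clone_cmd(src_drive, dest_drive):
    """Build the rsync command that mirrors one backup drive onto another.
    The trailing slash on the source copies its contents rather than the
    directory itself. Mount points here are hypothetical."""
    return ["rsync", "-aH", "--delete", src_drive.rstrip("/") + "/", dest_drive]

print(clone_cmd("/mnt/backup1", "/mnt/backup2"))
```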

John Wesorick
IT Systems Engineer | Riders Discount
866.931.6644 x852 | www.RidersDiscount.com


On Tue, Sep 3, 2013 at 9:52 AM, Dan Joyce d...@abiconsulting.us wrote:

 Looking at rdiff-backup as an Ubuntu Server solution. It looks great, but
 I can't find any documentation on using multiple external hard drives and
 how it will handle them being rotated off-site. I'd like to have at least
 two externals that will rotate weekly. Does rdiff work with this setup?  If
 so, can I keep the two drives in sync? In other words, if Drive1 is
 attached this week and backs up incrementally daily then Friday I rotate in
 Drive2 through next Friday, will Drive2 somehow include all the incremental
 changes--and therefore deleted/changed files--that happened in the week it
 was off-site? Or, will I only have every-other week's changes on one drive?

 Thanks,

 Dan J.



Re: [rdiff-backup-users] Rdiff Backup Web Interface

2013-08-23 Thread Øyvind Skaar
 The code can be reviewed at: 
https://bitbucket.org/jostillmanns/rdiff-commander


Screenshots? Demo? :)



--
Øyvind Skaar
Systems consultant, Opoint AS
Akersgata 28A, 0158 Oslo
Email: o...@opoint.com - Mobile: (+47) 48278480


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Rdiff Backup Web Interface

2013-08-22 Thread Edward Ned Harvey (rdiff-backup)
 From: rdiff-backup-users-bounces+rdiff-
 backup=nedharvey@nongnu.org [mailto:rdiff-backup-users-
 bounces+rdiff-backup=nedharvey@nongnu.org] On Behalf Of Joschka
 Tillmanns
 
  I wrote a web interface for rdiff-backup in Google's new language,
  golang. I consider the current pushed state the first usable one.

Thanks, that sounds really cool!   :-)   

Unfortunately these days, rdiff-backup isn't getting a lot of attention ... I 
stepped in as maintainer to correct some broken links on the website and make 
the mailing list more easily accessible...  But I haven't contributed any code, 
so the codebase has been unmaintained for a couple of years now.  This means, 
unfortunately, it's not very likely that I or anyone else here will have time 
to comment on your code ...

Do you do python?  It's a super awesome language...   ;-)   Any chance you'd be 
interested in reviewing the rdiff-backup source, and addressing any outstanding 
bugs which are probably low hanging fruit?   ;-)

No matter what, thanks for the web GUI.  That's awesome.  And what do you think 
of golang?

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread Greg Troxel

Eric Gendron concep...@gmail.com writes:

 I would like to know 2 things...

 How to limit hours of the backup.  I would like to limit backup operations
 between 23h30 and 6h00am to not slow down the daytime bandwidth.

 Can I just kill the process (cron job) at 6h00am ?

You can, but the next one will start regressing and then start over, so
this is really not going to be a useful outcome.

 Also, I think if the job isn't finished 24hours later, a second
 rdiff-backup job will start...  Is there a protection to avoid that?  Is
 running 2 rdiff-backup on the same datas (source and destination) could be
 a problem?

That's really a question of writing scripts around rdiff-backup instead.

But if your backups don't complete in less than half of your interval,
your situation really isn't going to work anyway.
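The overlap protection asked about above is usually done with a lock around the cron job. A hedged sketch (lock path and backup command are hypothetical; an echo stands in for the real rdiff-backup invocation so the example runs as-is):

```shell
# flock -n exits immediately, without running the command, if a previous
# run still holds the lock, so overlapping cron jobs can never both touch
# the same repository. Replace the echo with the real rdiff-backup call.
flock -n /tmp/rdiff-backup.lock \
    echo "rdiff-backup /srv/data /mnt/backup/data would run here"
```

This only prevents stacking; it does not solve the separate problem of a backup that never finishes within its interval.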

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread Eric Gendron
Ho!

So how to limit backup working time ?

Or how to interrupt a working backup that will continue next night?
On Aug 15, 2013 12:29 PM, Greg Troxel g...@work.lexort.com wrote:


 Eric Gendron concep...@gmail.com writes:

  I would like to know 2 things...
 
  How to limit hours of the backup.  I would like to limit backup
 operations
  between 23h30 and 6h00am to not slow down the daytime bandwidth.
 
  Can I just kill the process (cron job) at 6h00am ?

 You can, but the next one will start regressing and then start over, so
 this is really not going to be a useful outcome.

  Also, I think if the job isn't finished 24hours later, a second
  rdiff-backup job will start...  Is there a protection to avoid that?  Is
  running 2 rdiff-backup on the same datas (source and destination) could
 be
  a problem?

 That's really a question of writing scripts around rdiff-backup instead.

 But if your backups don't complete in less than half of your interval,
 your situation really isn't going to work anyway.

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread Greg Troxel
Eric Gendron concep...@gmail.com writes:

 So how to limit backup working time ?

Limit the area being backed up.

 Or how to interrupt a working backup that will continue next night?

rdiff-backup can't do this.

So

  rsync the entire remote system to shadow tree, incrementally when you
  can, and run rdiff-backup locally

or

  go learn about bup (not obviously ok to use right now, but I find it
  interesting)



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread John Wesorick
You could write a wrapper shell script around rdiff-backup like this to
start it:

#!/bin/bash

if [ -n "$(pidof rdiff-backup)" ] ; then
    kill -CONT "$(pidof rdiff-backup)"
else
    rdiff-backup
fi

and then at 6 AM run a cron job with a script like this:

#!/bin/bash

if [ -n "$(pidof rdiff-backup)" ] ; then
    kill -STOP "$(pidof rdiff-backup)"
fi

The first script sees if rdiff-backup is running, and if it is, it uses
kill to unpause it (if it is already unpaused, this has no effect or error);
otherwise it just fires up rdiff-backup. The second script just sees if
rdiff-backup is running, and if so, pauses it.

John Wesorick
IT Systems Engineer | Riders Discount
866.931.6644 x852 | www.RidersDiscount.com


On Thu, Aug 15, 2013 at 1:17 PM, Greg Troxel g...@work.lexort.com wrote:

 Eric Gendron concep...@gmail.com writes:

  So how to limit backup working time ?

 Limit the area being backed up.

  Or how to interrupt a working backup that will continue next night?

 rdiff-backup can't do this.

 So

   rsync the entire remote system to shadow tree, incrementally when you
   can, and run rdiff-backup locally

 or

   go learn about bup (not obviously ok to use right now, but I find it
   interesting)



 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL:
 http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread Eric Gendron
Wow!  Thanks a lot!
I will try it soon.


On Thu, Aug 15, 2013 at 1:35 PM, John Wesorick jo...@ridersdiscount.com wrote:

 You could write a wrapper shell script around rdiff-backup like this to
 start it:

 #!/bin/bash

 if [ -n "$(pidof rdiff-backup)" ] ; then
     kill -CONT "$(pidof rdiff-backup)"
 else
     rdiff-backup
 fi

 and then at 6 AM run a cron job with a script like this:

 #!/bin/bash

 if [ -n "$(pidof rdiff-backup)" ] ; then
     kill -STOP "$(pidof rdiff-backup)"
 fi

 The first script sees if rdiff-backup is running, and if it is, it uses
 kill to unpause it (if it is already unpaused, this has no effect or error);
 otherwise it just fires up rdiff-backup. The second script just sees if
 rdiff-backup is running, and if so, pauses it.

 John Wesorick
 IT Systems Engineer | Riders Discount
 866.931.6644 x852 | www.RidersDiscount.com


 On Thu, Aug 15, 2013 at 1:17 PM, Greg Troxel g...@work.lexort.com wrote:

 Eric Gendron concep...@gmail.com writes:

  So how to limit backup working time ?

 Limit the area being backed up.

  Or how to interrupt a working backup that will continue next night?

 rdiff-backup can't do this.

 So

   rsync the entire remote system to shadow tree, incrementally when you
   can, and run rdiff-backup locally

 or

   go learn about bup (not obviously ok to use right now, but I find it
   interesting)



 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL:
 http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup limit time range

2013-08-15 Thread MasteRTriX
I would assign less bandwidth or a lower priority on the router side during the
interval you require.



On 15/08/13 13:29, Greg Troxel wrote:

Eric Gendron concep...@gmail.com writes:


I would like to know 2 things...

How to limit hours of the backup.  I would like to limit backup operations
between 23h30 and 6h00am to not slow down the daytime bandwidth.

Can I just kill the process (cron job) at 6h00am ?

You can, but the next one will start regressing and then start over, so
this is really not going to be a useful outcome.


Also, I think if the job isn't finished 24hours later, a second
rdiff-backup job will start...  Is there a protection to avoid that?  Is
running 2 rdiff-backup on the same datas (source and destination) could be
a problem?

That's really a question of writing scripts around rdiff-backup instead.

But if your backups don't complete in less than half of your interval,
your situation really isn't going to work anyway.

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup detects that many files has changed - cifs mount

2013-07-09 Thread Attila Strba

Hi Markus,

Sorry for the late reply. Unfortunately I was not able to solve the 
problem. Currently we are running a daily full backup and trying to live 
with it (it takes 5 h instead of 3-5 minutes). I guess it has to do with 
the Samba shares. I am still hoping that I can find a solution. If I 
can't find one, perhaps I will move to Unison.


Greetings
Attila

--
Kind regards
Attila Strba, PhD

iSYS RTS GmbH

Tel: +49 (0) 89 442 30 68-32 | Fax  (0) 89 442 30 68-14
Grillparzerstr. 10 | D-81675 Muenchen
www.isys-rts.de

Registered office: Muenchen | HRB 151897
Managing director: Georg Hutter

On 05.07.2013 09:50, Teddai wrote:

Hi!


I had a thought.  In the manpage for rdiff-backup, see if the paragraph
for
the --no-compare-inode option might explain what is happening.

I have the same problem now since 4 weeks. rdiff-backup is used for many
years on a Debian server to backup a Windows file server over various CIFS
shares.

After I moved the virtual machine of my Windows server from VMware Server to
VMware ESXi all files were always treated as changed on every backup. I can
reproduce this also with a clean backup repository. This only happens if the
backup source is a CIFS mount.

I tried various rdiff-backup flags (--no-compare-inode, --no-acl, --no-eas)
and various mount options (iocharset and so on) with no success. The backup
needs now ~ 20 hours instead of 3-5 hours before...

@Attila Strba: How did you solve this problem?


TIA,

Markus





--
View this message in context: 
http://nongnu.13855.n7.nabble.com/rdiff-backup-detects-that-many-files-has-changed-cifs-mount-tp87635p170011.html
Sent from the rdiff-backup-users mailing list archive at Nabble.com.

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup detects that many files has changed - cifs mount

2013-07-05 Thread Teddai
Hi!

 I had a thought.  In the manpage for rdiff-backup, see if the paragraph
 for
 the --no-compare-inode option might explain what is happening.

I have the same problem now since 4 weeks. rdiff-backup is used for many
years on a Debian server to backup a Windows file server over various CIFS
shares. 

After I moved the virtual machine of my Windows server from VMware Server to
VMware ESXi all files were always treated as changed on every backup. I can
reproduce this also with a clean backup repository. This only happens if the
backup source is a CIFS mount.

I tried various rdiff-backup flags (--no-compare-inode, --no-acl, --no-eas)
and various mount options (iocharset and so on) with no success. The backup
needs now ~ 20 hours instead of 3-5 hours before...
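One hedged diagnostic before trying more flags: capture the fields rdiff-backup compares (size, mtime, inode) for a file on the CIFS mount before and after a remount, and see which column actually changes. The path is hypothetical; a temp file stands in here so the command runs as-is.

```shell
# %n = name, %s = size, %Y = mtime (epoch), %i = inode (GNU stat).
# Run once, remount the share, run again; the differing column is what
# makes every file look modified.
target=${target:-$(mktemp)}   # point this at a file on the CIFS share
stat -c '%n %s %Y %i' "$target"
```

If it is the inode column, the CIFS `serverino`/`noserverino` mount options (see mount.cifs(8)) are worth experimenting with alongside --no-compare-inode; this is a guess at the cause, not a confirmed fix.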

@Attila Strba: How did you solve this problem?


TIA,

Markus





--
View this message in context: 
http://nongnu.13855.n7.nabble.com/rdiff-backup-detects-that-many-files-has-changed-cifs-mount-tp87635p170011.html
Sent from the rdiff-backup-users mailing list archive at Nabble.com.

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup fails check-destination-dir

2013-05-23 Thread Rudi Meyer
unsubscribe


--
Rudi Meyer
Forlaget Systime


On 23 May 2013 09:47, Valerio Pachera siri...@gmail.com wrote:

 Hi this is what I get:

 root@server:/mnt/backup# rdiff-backup --check-destination-dir vm/

 Exception '' raised of class '<type 'exceptions.AssertionError'>':
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304,
 in error_check_Main
 try: Main(arglist)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 324, in
 Main
 take_action(rps)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 282,
 in take_action
 elif action == "check-destination-dir": CheckDest(rps[0])
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 872,
 in CheckDest
 dest_rp.conn.regress.Regress(dest_rp)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/regress.py", line
 65, in Regress
 assert mirror_rp.isdir() and inc_rpath.isdir()

 Traceback (most recent call last):
   File "/usr/bin/rdiff-backup", line 30, in <module>
 rdiff_backup.Main.error_check_Main(sys.argv[1:])
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304,
 in error_check_Main
 try: Main(arglist)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 324, in
 Main
 take_action(rps)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 282,
 in take_action
 elif action == "check-destination-dir": CheckDest(rps[0])
   File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 872,
 in CheckDest
 dest_rp.conn.regress.Regress(dest_rp)
   File "/usr/lib/pymodules/python2.6/rdiff_backup/regress.py", line
 65, in Regress
 assert mirror_rp.isdir() and inc_rpath.isdir()
 AssertionError


 Debian 6.0.
 Python 2.6.
 Linux server 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64
 GNU/Linux

 Any idea?
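The failing assertion appears to check that the repository's metadata directories exist before regress runs. A hedged sanity check, assuming the standard layout where increments live under rdiff-backup-data/increments (a scratch repo stands in for vm/ so the example runs as-is):

```shell
# If either directory is missing (e.g. a partially copied or truncated
# repository), regress -- and hence --check-destination-dir -- cannot run.
repo=${repo:-$(mktemp -d)/vm}
mkdir -p "$repo/rdiff-backup-data/increments"   # remove to see the failure path
if [ -d "$repo/rdiff-backup-data" ] && [ -d "$repo/rdiff-backup-data/increments" ]; then
    echo "layout ok"
else
    echo "missing rdiff-backup-data or increments directory" >&2
fi
```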

 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL:
 http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff on rdiff

2013-02-08 Thread Adrian A. Baumann
On Fri, Feb 08, 2013 at 10:56:48AM +1100, m...@electronico.nc wrote:
 Hello,
 I think Dominic is telling you to exclude the 'rdiff-backup-data'
 folders located in your users directory from the second rdiff.

Good morning everybody,

...uh, no, I don't think so. How I understood Dominic is that you shouldn't 
have a rdiff-backup-data-directory as a direct subdirectory of your 
backup-root. As long as it's at least one level down, it should be fine. 
If you do exclude all the rdiff-backup-data directories, you'll lose the 
complete backup history in their parent directories.

To answer with your example:

/home/eric/repository/budget/index.php: OK
/home/eric/repository/budget/rdiff-backup-data: Not just OK, but needed if you 
want the backup history in .../budget
/home/eric/repository/experiment/rdiff-backup-data: ...same for .../experiment


/home/eric/rdiff-backup-data: Would *not* be OK, as this would conflict with 
the rdiff-backup-data-directory generated in your backup destination.
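A minimal guard for this rule, sketched as a script (the path is hypothetical; a scratch directory stands in for /home/eric so the check actually runs):

```shell
# Nested rdiff-backup-data directories deeper down are fine and carry the
# history; only one sitting directly at the source root collides with the
# directory rdiff-backup creates in the destination.
src=${src:-$(mktemp -d)}
mkdir -p "$src/repository/budget/rdiff-backup-data"   # nested: fine
if [ -d "$src/rdiff-backup-data" ]; then
    echo "source root has rdiff-backup-data: pick a different root" >&2
else
    echo "ok: no rdiff-backup-data at the source root"
fi
```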

Hope that helps, my caffeine level isn't quite up to writing any clearer yet.

Cheers,
Ade Baumann


smime.p7s
Description: S/MIME cryptographic signature
___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff on rdiff

2013-02-07 Thread Dominic Ferard
The only thing to avoid is having a directory called 'rdiff-backup-data' 
directly off the root directory of the source. Because when you use 
rdiff-backup it creates such a directory in the destination and uses it 
to keep old versions of files. Otherwise I *think* it should be fine...


Dominic



On 07/02/2013 11:48, Eric Beversluis wrote:

Hope this makes sense:

I use rdiff-backup as a kind of version control for particular folders
in my Apache document root, rdiff-ing them several times a day to a
folder in my Home directory, which I call repository.

I'm wondering if there's any problem with including that repository
sub-directory when I do a general rdiff backup of my stuff, that is,
whether there's a problem with using rdiff-backup on files that are
themselves rdiff-backup files? (In which case I would exclude that
directory as I also include the document root in my backup.)

Thanks




___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff on rdiff

2013-02-07 Thread Eric Beversluis
Thanks. So I've got:

/home/eric/repository/budget/index.php
/home/eric/repository/budget/rdiff-backup-data
/home/eric/repository/experiment/rdiff-backup-data
etc. 

It should be OK to run rdiff-backup on /home/eric in that case?

On Thu, 2013-02-07 at 16:39 +, Dominic Ferard wrote:
 The only thing to avoid is having a directory called 'rdiff-backup-data' 
 directly off the root directory of the source. Because when you use 
 rdiff-backup it creates such a directory in the destination and uses it 
 to keep old versions of files. Otherwise I *think* it should be fine...
 
 Dominic
 
 
 
 On 07/02/2013 11:48, Eric Beversluis wrote:
  Hope this makes sense:
 
  I use rdiff-backup as a kind of version control for particular folders
  in my Apache document root, rdiff-ing them several times a day to a
  folder in my Home directory, which I call repository.
 
  I'm wondering if there's any problem with including that repository
  sub-directory when I do a general rdiff backup of my stuff, that is,
  whether there's a problem with using rdiff-backup on files that are
  themselves rdiff-backup files? (In which case I would exclude that
  directory as I also include the document root in my backup.)
 
  Thanks
 
 
 
 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff on rdiff

2013-02-07 Thread me

Hello,
I think Dominic is telling you to exclude the 'rdiff-backup-data' 
folders located in your users directory from the second rdiff.

Nicolas

On 08/02/2013 10:45, Eric Beversluis wrote:

Thanks. So I've got:

/home/eric/repository/budget/index.php
/home/eric/repository/budget/rdiff-backup-data
/home/eric/repository/experiment/rdiff-backup-data
etc.

It should be OK to run rdiff-backup on /home/eric in that case?

On Thu, 2013-02-07 at 16:39 +, Dominic Ferard wrote:

The only thing to avoid is having a directory called 'rdiff-backup-data'
directly off the root directory of the source. Because when you use
rdiff-backup it creates such a directory in the destination and uses it
to keep old versions of files. Otherwise I *think* it should be fine...

Dominic



On 07/02/2013 11:48, Eric Beversluis wrote:

Hope this makes sense:

I use rdiff-backup as a kind of version control for particular folders
in my Apache document root, rdiff-ing them several times a day to a
folder in my Home directory, which I call repository.

I'm wondering if there's any problem with including that repository
sub-directory when I do a general rdiff backup of my stuff, that is,
whether there's a problem with using rdiff-backup on files that are
themselves rdiff-backup files? (In which case I would exclude that
directory as I also include the document root in my backup.)

Thanks



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki




___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff on rdiff

2013-02-07 Thread Eric Beversluis
OK. I wasn't clear about what he meant by 'the root directory of the
source.' I thought it was referring to /home/eric. So I'll exclude the
whole 'repository' directory, since I include /var/www/html in my
backup. Then at worst I would lose my diffs of the things in repository.
Can always periodically put those onto a stick.

Thanks.

On Fri, 2013-02-08 at 10:56 +1100, m...@electronico.nc wrote:
 Hello,
 I think Dominic is telling you to exclude the 'rdiff-backup-data' 
 folders located in your users directory from the second rdiff.
 Nicolas
 
 Le 08/02/2013 10:45, Eric Beversluis a écrit :
  Thanks. So I've got:
 
  /home/eric/repository/budget/index.php
  /home/eric/repository/budget/rdiff-backup-data
  /home/eric/repository/experiment/rdiff-backup-data
  etc.
 
  It should be OK to run rdiff-backup on /home/eric in that case?
 
  On Thu, 2013-02-07 at 16:39 +, Dominic Ferard wrote:
  The only thing to avoid is having a directory called 'rdiff-backup-data'
  directly off the root directory of the source. Because when you use
  rdiff-backup it creates such a directory in the destination and uses it
  to keep old versions of files. Otherwise I *think* it should be fine...
 
  Dominic
 
 
 
  On 07/02/2013 11:48, Eric Beversluis wrote:
  Hope this makes sense:
 
  I use rdiff-backup as a kind of version control for particular folders
  in my Apache document root, rdiff-ing them several times a day to a
  folder in my Home directory, which I call repository.
 
  I'm wondering if there's any problem with including that repository
  sub-directory when I do a general rdiff backup of my stuff, that is,
  whether there's a problem with using rdiff-backup on files that are
  themselves rdiff-backup files? (In which case I would exclude that
  directory as I also include the document root in my backup.)
 
  Thanks
 
 
  ___
  rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
  https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
  Wiki URL: 
  http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
 
 
  ___
  rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
  https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
  Wiki URL: 
  http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
 
 
 
 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki

Re: [rdiff-backup-users] rdiff-backup logging in through sshd several times

2012-09-05 Thread weloki


weloki wrote:
 
 I've been receiving logwatch reports each day from my servers and I notice
 after each backup that rdiff-backup connected via SSH many times, like
 from over 100 to 300 times per backup session. Why is this? Does it need
 to do that in order to function properly?
 

Is anyone else seeing this? Is there a way to have rdiff-backup login just
once through SSH to perform a daily backup?
-- 
View this message in context: 
http://old.nabble.com/rdiff-backup-logging-in-through-sshd-several-times-tp34271866p34392070.html
Sent from the rdiff-backup-users mailing list archive at Nabble.com.


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup logging in through sshd several times

2012-09-05 Thread Robert Nichols

On 09/05/2012 07:33 AM, weloki wrote:

weloki wrote:


I've been receiving logwatch reports each day from my servers and I notice
after each backup that rdiff-backup connected via SSH many times, like
from over 100 to 300 times per backup session. Why is this? Does it need
to do that in order to function properly?



Is anyone else seeing this? Is there a way to have rdiff-backup login just
once through SSH to perform a daily backup?


I see just one ssh login per rdiff-backup session.  I am pushing my backups
from the client to the server (source is local, destination is remote).
From your description, it sounds like that is what you are doing, too.
Your situation might be entirely different if you are controlling the
backups from the server and pulling from the clients.  I've never tried
that.

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup logging in through sshd several times

2012-09-05 Thread chuck odonnell
On Wed, Sep 05, 2012 at 10:31:48AM -0500, Robert Nichols wrote:
 On 09/05/2012 07:33 AM, weloki wrote:
weloki wrote:

 I've been receiving logwatch reports each day from my servers and
 I notice after each backup that rdiff-backup connected via SSH
 many times, like from over 100 to 300 times per backup
 session. Why is this? Does it need to do that in order to function
 properly?


 Is anyone else seeing this? Is there a way to have rdiff-backup
 login just once through SSH to perform a daily backup?
 
 I see just one ssh login per rdiff-backup session.  I am pushing my
 backups from the client to the server (source is local, destination
 is remote).  From your description, it sounds like that is what you
 are doing, too.  Your situation might be entirely different if you
 are controlling the backups from the server and pulling from the
 clients.  I've never tried that.
 

In my case the server acts as master and pulls from the clients
(source is remote, destination local).

On each client I see a single extended ssh login per night.





___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] rdiff-backup logging in through sshd several times

2012-09-05 Thread weloki

My setup is such that the backup server logs in to each remote host and
fetches all the data to store centrally (source is remote, destination is
local).

Perhaps the specifics of what I'm doing would give clues...
I set up a regular user's account for rdiff-backup on my backup server as
well as a directory where I save the backed up files to. That directory has
permissions for only the rdiff-backup user and group (chown -R
rdiff-backup:rdiff-backup /dirname). On the servers that I want to be backed
up I also created a user account for rdiff-backup, and in addition to the
entry in /etc/sudoers, in the file at
/rdiff-backup_home/.ssh/authorized_keys I put this on one line:

command="sudo rdiff-backup --server --restrict-read-only
/",from="backup_server_IP_address",no-port-forwarding,no-X11-forwarding,no-pty
ssh-rsa B3NzaC1...long SSH public key here... ==
rdiff-backup@backup_server 

So would each command rdiff-backup issues on the remote hosts require a
separate SSH login session?
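For what it's worth, a single rdiff-backup session is normally carried over one SSH connection, so hundreds of logins per backup suggests something outside rdiff-backup is reconnecting. As a hedged mitigation, SSH connection multiplexing collapses repeated connections to the same host into one TCP/SSH session. A sketch for the backup server's ~/.ssh/config (the host alias and hostname are hypothetical):

```
Host backup-client
    HostName client.example.com
    User rdiff-backup
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With ControlPersist, the master connection stays open for 10 minutes after the last use, so even genuinely separate invocations would show up as a single sshd login.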



Robert Nichols-2 wrote:
 
 On 09/05/2012 07:33 AM, weloki wrote:
 weloki wrote:

 I've been receiving logwatch reports each day from my servers and I
 notice
 after each backup that rdiff-backup connected via SSH many times, like
 from over 100 to 300 times per backup session. Why is this? Does it need
 to do that in order to function properly?


 Is anyone else seeing this? Is there a way to have rdiff-backup login
 just
 once through SSH to perform a daily backup?
 
 I see just one ssh login per rdiff-backup session.  I am pushing my
 backups
 from the client to the server (source is local, destination is remote).
  From your description, it sounds like that is what you are doing, too.
 Your situation might be entirely different if you are controlling the
 backups from the server and pulling from the clients.  I've never tried
 that.
 
 -- 
 Bob Nichols NOSPAM is really part of my email address.
  Do NOT delete it.
 
 
 ___
 rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
 https://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
 Wiki URL:
 http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
 
 

-- 
View this message in context: 
http://old.nabble.com/rdiff-backup-logging-in-through-sshd-several-times-tp34271866p34393613.html
Sent from the rdiff-backup-users mailing list archive at Nabble.com.




Re: [rdiff-backup-users] rdiff-backup detects that many files has changed - cifs mount

2012-07-30 Thread Attila Strba

Hi Guys,

Some more info.  I have checked the following files:
file_statistics.2012-07-27T04_00_16+02_00.data
file_statistics.2012-07-28T04_00_16+02_00.data
file_statistics.2012-07-29T04_00_16+02_00.data
file_statistics.2012-07-30T04_00_16+02_00.data

There are two strange things:
Various files have the Changed flag set and an IncrementSize (though I am
confident they haven't changed recently).


Example:
# Filename Changed SourceSize MirrorSize IncrementSize
Tmp/ProcessExplorer/Eula.txt 1 7005 7005 140

What is also strange is that files which kept changing stop changing in
some backup sessions:

file_statistics.2012-07-27T04_00_16+02_00.data
Fachwissen/Digitale_Signalverarbeitung/TheEngineersGuideToDSP/CH20.PDF 0 
244431 244431 NA

file_statistics.2012-07-30T04_00_16+02_00.data
Fachwissen/Digitale_Signalverarbeitung/TheEngineersGuideToDSP/CH20.PDF 1 
244431 244431 184


error_log.data, mirror_metadata.data, increments.dir have a size of 0.

I also took a diff of
mirror_metadata.2012-07-26T14%3A07%3A12+02%3A00.snapshot.gz
mirror_metadata.2012-07-30T04%3A00%3A16+02%3A00.snapshot.gz

of the text file
Tmp/ProcessExplorer/Eula.txt
The changes are totally the same: I see 9 hex bytes change at the
beginning of the file:

72 73 02 36 46 00 1B 5D 00

Similarly, in other files these bytes are the same, but there are also
more bytes that are indicated as increments.


I am missing the extended_attributes and access_control_lists files. Do I
have to turn these on in some special way?


So I am more and more confused. I would appreciate any further suggestions.


Greetings
Attila


On 30.07.2012 01:13, Robert Nichols wrote:

On 07/26/2012 10:59 AM, Attila Strba wrote:

Hi Bob,

Thanks for the response. It happens every time. I have disabled
SELinux; still the situation didn't improve.
Is there a way I could analyse why rdiff thinks the files are
different (like the security context)?


The only thing I can think of is to go into the rdiff-backup-data
directory and examine the metadata files (mirror_metadata,
extended_attributes, access_control_lists, file_statistics, ...) for
two consecutive dates where the problem occurred and see if you can
see what changed.  These are all ASCII text files (most are
compressed), though file_statistics may have NUL, rather than newline,
separators, so you'd have to run that through tr '\0' '\n' to make it
readable.
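Bob's tr suggestion, sketched with made-up sample data (the field values below are invented; if a real file_statistics file is gzip-compressed, pipe it through zcat first):

```shell
# Fake NUL-separated file_statistics data, then convert the NUL
# separators to newlines with tr so it reads line by line.
sample=$(mktemp)
printf 'Tmp/ProcessExplorer/Eula.txt 1 7005 7005 140\0CH20.PDF 0 244431 244431 NA\0' > "$sample"
tr '\0' '\n' < "$sample"
# In a real repo, if the file ends in .gz:
#   zcat file_statistics.2012-07-27T04_00_16+02_00.data.gz | tr '\0' '\n'
rm -f "$sample"
```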





Re: [rdiff-backup-users] rdiff-backup detects that many files has changed - cifs mount

2012-07-29 Thread Robert Nichols

On 07/26/2012 10:59 AM, Attila Strba wrote:

Hi Bob,

Thanks for the response. It happens every time. I have disabled SELinux, still
the situation didn't improve.
Is there a way I could analyse why rdiff thinks the files are different (like
the security context)?


The only thing I can think of is to go into the rdiff-backup-data directory
and examine the metadata files (mirror_metadata, extended_attributes,
access_control_lists, file_statistics, ...) for two consecutive dates where
the problem occurred and see if you can see what changed.  These are all
ASCII text files (most are compressed), though file_statistics may have
NUL, rather than newline, separators, so you'd have to run that through
tr '\0' '\n' to make it readable.

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup exception

2012-06-15 Thread Tobias Gödderz
On Di, 2012-02-28 at 21:50 +0100, Tobias Gödderz wrote:
 On 27.02.2012 14:52, Dominic Raferd wrote:
  An error message like this has previously been associated with
  backing up to a case-insensitive file system, especially when a
  filename changes its case. What filesystem are you backing up to?
 
 ext4, but I'm backing up *from* NTFS.

Now, I'm having the same problem on a different backup; source and
target are on the same ext4 filesystem (which may seem kinda stupid, but
the source is a mirror of the data I want to backup).

It looks like the assertion
assert not incrp.lstat(), incrp
throws an exception whenever incrp.data['type'] is not None, in other
words, whenever the file already exists.
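The guard can be sketched in plain Python (a hypothetical standalone version; the real check lives in rdiff_backup/increment.py's get_inc):

```python
import os

def assert_increment_absent(incr_path):
    """Sketch of rdiff-backup's guard: the increment file about to be
    written must not already exist on disk."""
    try:
        os.lstat(incr_path)              # succeeds only if the path exists
    except OSError:
        return                           # absent: safe to create the increment
    raise AssertionError(incr_path)      # exists: the failure reported above

assert_increment_absent("/no/such/increment.diff.gz")  # passes silently
```

So the exception above means a leftover increment file with that timestamped name was already present in long_filename_data when rdiff-backup tried to create it.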

I attached the complete output.

Kind regards,

Tobias

 Kind regards,
 
 Tobias
 
  -- *TimeDicer* http://www.timedicer.co.uk: Free File Recovery
  from Whenever
  
  On 27/02/2012 13:08, Tobias Gödderz wrote:
  Hello,
  
  I tried to make my regular backup this weekend, and rdiff-backup
  aborted with the attached output. As there doesn't seem to be a
  meaningful error message but some tracebacks and data dumps, I'm
  somewhat lost and assume that this is not supposed to happen.
  
  How should I proceed with my backup? Just retry running
  rdiff-backup, or should I rather post this on rdiff-backup-bugs?
  
  I didn't touch the backup directory since then, in case it's any
  help to look something up.
  
  I'm using rdiff-backup 1.3.3.
  
  Kind regards,
  
  Tobi

-- 
open STDOUT, |-
and print uJa tsonrehtP  lreahrekc
or  print pack nNx4, unpack vVx4, STDIN

# rdiff-backup --tempdir /mnt/mybook/tmp/ /mnt/mybook/hive_backup/mirror/user 
/mnt/mybook/hive_backup/rdiff-backup/user
Exception 'Path: 
/mnt/mybook/hive_backup/rdiff-backup/user/rdiff-backup-data/long_filename_data/2.2011-12-23T19:09:48+01:00.diff.gz
Index: ('long_filename_data', '2.2011-12-23T19:09:48+01:00.diff.gz')
Data: {'uid': 1002, 'perms': 420, 'type': 'reg', 'gname': None, 'ctime': 
1333056383, 'devloc': 64772L, 'uname': 'chrisy', 'nlink': 1, 'gid': 1027, 
'mtime': 1313059884, 'atime': 1333056383, 'inode': 35725593, 'size': 64}' 
raised of class <type 'exceptions.AssertionError'>:
  File /usr/lib64/python2.7/site-packages/rdiff_backup/robust.py, line 32, in 
check_common_error
try: return function(*args)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 43, 
in Increment
incrp = makediff(new, mirror, incpref)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 79, 
in makediff
if compress: diff = get_inc(incpref, "diff.gz")
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 
123, in get_inc
assert not incrp.lstat(), incrp

Exception 'Path: 
/mnt/mybook/hive_backup/rdiff-backup/user/rdiff-backup-data/long_filename_data/2.2011-12-23T19:09:48+01:00.diff.gz
Index: ('long_filename_data', '2.2011-12-23T19:09:48+01:00.diff.gz')
Data: {'uid': 1002, 'perms': 420, 'type': 'reg', 'gname': None, 'ctime': 
1333056383, 'devloc': 64772L, 'uname': 'chrisy', 'nlink': 1, 'gid': 1027, 
'mtime': 1313059884, 'atime': 1333056383, 'inode': 35725593, 'size': 64}' 
raised of class <type 'exceptions.AssertionError'>:
  File /usr/lib64/python2.7/site-packages/rdiff_backup/Main.py, line 306, in 
error_check_Main
try: Main(arglist)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/Main.py, line 326, in 
Main
take_action(rps)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/Main.py, line 282, in 
take_action
elif action == backup: Backup(rps[0], rps[1])
  File /usr/lib64/python2.7/site-packages/rdiff_backup/Main.py, line 345, in 
Backup
backup.Mirror_and_increment(rpin, rpout, incdir)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/backup.py, line 51, in 
Mirror_and_increment
DestS.patch_and_increment(dest_rpath, source_diffiter, inc_rpath)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/backup.py, line 251, 
in patch_and_increment
ITR(diff.index, diff)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/rorpiter.py, line 281, 
in __call__
last_branch.fast_process(*args)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/backup.py, line 698, 
in fast_process
increment.Increment, (tf, mirror_rp, inc_prefix))
  File /usr/lib64/python2.7/site-packages/rdiff_backup/robust.py, line 32, in 
check_common_error
try: return function(*args)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 43, 
in Increment
incrp = makediff(new, mirror, incpref)
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 79, 
in makediff
if compress: diff = get_inc(incpref, "diff.gz")
  File /usr/lib64/python2.7/site-packages/rdiff_backup/increment.py, line 
123, in get_inc
assert not incrp.lstat(), incrp

Traceback (most recent call last):
  File /usr/bin/rdiff-backup-2.7, line 30, in <module>
rdiff_backup.Main.error_check_Main(sys.argv[1:])
  File 

Re: [rdiff-backup-users] rdiff-backup vs. Back-In-Time

2012-06-10 Thread Dominic Raferd
Thanks for your posting Kshitij, you give us yet another backup program 
to consider - back-in-time.


If it ain't broke, don't fix it:

If your backup routine works for you, why change to rdiff-backup? 
Back-in-time (according to the website) is a GUI and under the hood it 
uses rsync, diff, cp and hardlinks to take snapshots. To rdiff-backup 
users this sounds a bit like reinventing the wheel and it may well be 
much less space-efficient than rdiff-backup. It's not clear to me 
whether a small change in a large file leads back-in-time to store the 
whole file twice; if so, these types of files (databases, mailboxes) 
could eat up a huge amount of space (in theory 24 x their original size 
per day). rdiff-backup only stores the diffs or deltas which in such 
cases are comparatively small. But if you have plenty of space or are 
happy to delete snapshots older than a certain limited period (1 week? 1 
month?) then even in this scenario the space considerations may not be 
important.
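The delta argument can be seen with librsync's rdiff tool, the same delta machinery rdiff-backup builds on. This sketch assumes the rdiff binary is installed (it exits quietly if not) and shows that a one-line change in a large file yields a small delta:

```shell
command -v rdiff >/dev/null || { echo "rdiff (librsync) not installed"; exit 0; }
workdir=$(mktemp -d)
cd "$workdir"
seq 1 100000 > old.txt                     # ~600 KB file
sed 's/^42$/forty-two/' old.txt > new.txt  # change a single line
rdiff signature old.txt old.sig            # block checksums of the old file
rdiff delta old.sig new.txt small.delta    # delta holds only what changed
rdiff patch old.txt small.delta rebuilt.txt
cmp -s new.txt rebuilt.txt && echo "delta reproduces the new file"
ls -l new.txt small.delta                  # small.delta is far smaller
```

A snapshot scheme that stores the whole changed file pays the full file size per change; the delta scheme pays roughly the size of small.delta.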


bug-free = not yet thoroughly tested:

I do think however that anyone reading this mailing list should bear in 
mind that people post here when they have difficulties. For the most 
part rdiff-backup works very well and has proved a reliable backup for 
many years - for me and for others. I recently had to recover an old 
version of a database file from about 9 months ago (i.e. about 230 
reverse-diffs) - rdiff-backup worked perfectly (albeit the recovery was 
quite slow). I mention this not because it is any kind of record but as 
a real-world situation where rdiff-backup saved my bacon.


For users who find the command line too alarming but still like the 
power of rdiff-backup, there is jBackup 
http://sourceforge.net/projects/jbck/, while for Windows users I would 
naturally say that TimeDicer http://www.timedicer.co.uk (which I 
maintain) is the way to go. For restoring, rdiffWeb 
http://www.timedicer.co.uk/rdiffweb/index * works nicely from your browser.


Dominic

* static version of the wiki; the dynamic one is prone to spammer attacks

On 10/06/12 03:28, Kshitij Kotak wrote:

Dear rdiff-backup Experts

Pardon my naïve query, but I need to understand what the difference is between 
rdiff-backup approach and the following steps:

1) we take the remote sync of primary data store on a mirror server using 
rsync. This is automated for every 1 hour using cron.

2) to get a point-in-time restore, we use back-in-time on mirror server to get 
the data stored locally in time slots to recover. My back-in-time runs once 
every day and is cronned using Back-In-time internal switch that allows me to 
define schedule.

This approach has worked flawless (so far). Plus back-in-time has a fantastic 
GUI which, for a non-expert like me is a great relief.

From what I gather on this group, rdiff-backup saves a much larger amount of space 
than my approach. Is that correct? Considering the complexities of command line 
approach, restore issues and the kind of problems you guys report... I am 
petrified to try out rdiff-backup.

Does rdiff-backup offer me any significant benefits over my novice approach? If 
so, is there a better, less complex, more reliable way to implement backup (and 
guarantee restore :D ) for a novice like me?

Await your replies...

Cheers
Kshitiz




Sent from BlackBerry®
Xcuze typos if N E





Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread D. Kriesel
Thanks for mentioning obnam, which I had not known about so far. It seems to
have some very nice features, though I would be interested in how it works
internally (what kind of deltas are used, and how efficient is the
transmission (comparable to rsync?)).

Maybe I will give it a shot :-)



Nicolas Jungers nico...@jungers.net wrote:

Hi Sorvath,

I use rdiff-backup with TB sized data stores, some churning 10's of GB
a 
day (I haven't so far experienced any problem), but if I had to shop
for 
a new backup system I'd investigate obnam. Which I'll probably do
anyway.

Regards,
Nicolas

On 06/08/2012 10:02 PM, shorvath wrote:
 Hi and thanks for the replies.

 Florian, you mention you use rdiff-backup (for max 20G) and rsync for
larger.
 Is it not recommended to use rdiff-backup on large backups?  Mine are
several Tb's with possible gb's daily increments.
 Would it be recommend to stick to rsync?
 How about backuppc?
 What are your thoughts/comments/suggestions regarding that?

 I backup several servers to a local backup server (daily,weekly,
monthly) and then sync that offsite.
 Currently each local backup is checksummed, xfered offsite and
checksummed again.
 We have a set of scripts that rsyncs, rotates, reports and checksums
but unfortunately they are very inefficient and are just not up to the
job.
 I have the choice of rewriting them from scratch or use something
that's already been written (rsnapshot, rdiff-backup, backuppc)
 All are much more efficient than anything I could come up with so why
reinvent the wheel?
 I just need to decide on one that does, at the very least, what we're
currently doing (albeit not very well)

 As for ram errors...All our servers have ecc ram.
 For bit errors... I'm VERY tempted to take the plunge into the zfs on
linux project but despite the many good reports on stability, as this
is a production service, I'm nervous.


+--
 |This was sent by mem...@thehorvaths.co.uk via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.

+--






-- 
D. Kriesel / dkriesel.com



Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread D. Kriesel
If I get it right, there is no such thing as deltas. Every generation seems to 
stand on its own. However, duplicate chunks of data (no matter whether they 
come from different machines or generations or whatever) are deduplicated in a 
generalized way. Nice design! As the check for duplicate data is made before 
transmission, this could also be very efficient (however, one could still 
transmit with rsync and then use obnam).

I'll stop spamming this list now with obnam features, however the delta 
question is the one that arises for sure when other backup systems are 
mentioned here ;)

Cheers
David
-- 
D. Kriesel / dkriesel.com



Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread Florian Kaiser
Fri, 08 Jun 2012 13:02:43 -0700 shorvath

 Florian, you mention you use rdiff-backup (for max 20G) and rsync for
 larger. Is it not recommended to use rdiff-backup on large backups? 
 Mine are several Tb's with possible gb's daily increments. Would it be
 recommend to stick to rsync? 

There are some reasons we do not use rdiff-backup for high
volumes of data that also changes a lot frequently: 

1. rdiff-backup is slow if files get heavily modified or deleted.
Storing the increments needs a lot of I/O and we do not want to spend
that much money on 10,000 rpm disks just for backups. rdiff-backup also
has serious performance issues after incomplete backups, when
it has to revert to the old state.

2. rdiff-backup stores increments, thus it needs constant verification.
This process is extremely time consuming and, again, needs a lot of I/O
and a big tmpfs in RAM, unless you want to make it even more time
consuming. We'd rather put our RAM in production machines.

3. listing and restoring of backups requires command-line knowledge of
the rdiff-backup command and read access to the FULL repository. We
need something that provides easy access, just like a directory
structure where you can just ls or cp, and only if you have appropriate
permissions on the original file.

4. rdiff-backup can only delete the oldest increment(s), not in between.

5. restoring from increments (not the current mirror) is somewhat slow and 
takes a bunch of cpu and I/O.

6. when the repository gets corrupted in some way, it is really hard
to fix this. There is a high risk of losing a lot of files then.

There was a little more, but those were the primary reasons we decided
to use rsync with --link-dest, as this provides everything we need. It
takes somewhat more storage, but 7,200 rpm disks are cheap and with
sequential writes they offer sufficient performance. And it is dead
simple to fix a corrupt repository since it's just plain files and hardlinks.
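Both halves of this tradeoff can be sketched (paths are hypothetical; --remove-older-than and rsync's --link-dest are real options). The rsync part is runnable and shows why a --link-dest snapshot costs no extra space for unchanged files:

```shell
command -v rsync >/dev/null || { echo "rsync not installed"; exit 0; }
# Point 4 above: rdiff-backup can only prune from the oldest end, e.g.
# (hypothetical repository, shown but not run here):
#   rdiff-backup --remove-older-than 4W /srv/backup/repo

# rsync --link-dest: an unchanged file in the new snapshot becomes a
# hardlink to the previous snapshot's copy.
d=$(mktemp -d)
mkdir -p "$d/data"
echo hello > "$d/data/file.txt"
rsync -a "$d/data/" "$d/snap1/"
rsync -a --link-dest="$d/snap1" "$d/data/" "$d/snap2/"
# Equal inode numbers prove the hardlink (GNU stat assumed):
[ "$(stat -c %i "$d/snap1/file.txt")" = "$(stat -c %i "$d/snap2/file.txt")" ] \
    && echo "unchanged file is hardlinked, no extra space used"
```

Deleting any one snapshot directory then frees only the blocks no other snapshot still links, which is why pruning arbitrary dates is trivial in this scheme.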

That said, one big advantage in using rdiff-backup for OS backups is
that you can run it as an unprivileged user on the storage side. It
automatically takes care of saving metadata such as ownership and
permissions even as non-root. This is something rsync cannot
really handle.

 How about backuppc? What are your
 thoughts/comments/suggestions regarding that?

Some of my clients use backuppc and that seems to run fine. It's fast but
not as fast as simple rsync, and if something in the repository goes
wrong, bad things can happen. They do use rsync but spice it up with
some kind of snapshotting and deduplication. They have the same
performance issues with incomplete backup tasks as rdiff-backup. 
All in all, too complicated for me. Backups should be as simple as
possible.


Regards
Florian




Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread Marian 'VooDooMan' Meravy
Hi Robert,

On 6. 6. 2012 17:43, Robert Nichols wrote:
 [...]
 The way I handle it for the dailies is that once a week I do a verify for
 each of the 8 most recent daily backups.  That is enough to verify that the
 most recent part of the increments chain merges properly with the older
 increments.  I do this as part of a weekly process that synchronizes my
 active backup drive with another drive that is kept in more secure
 storage.  What I've found is that on a quad-core machine I can run 8
 simultaneous rdiff-backup --verify-at-time processes in almost exactly
 the
 same time that it takes to run a single verify.  The commonality of file
 access means that one process has to wait for the disk drive to read a
 block, but the other 7 processes find that block already in the kernel's
 cache.  For the most part, the 8 processes stay beautifully in sync.
 
 [...]

Thank you very much, I was not aware of --verify-at-time. And even if I
were, I would not have guessed that I can run more than one check at a
time without the cost of the kernel blocking on I/O, because I had not
realised that it is all actually in the cache when running
simultaneously.

Once again, thank you for the suggestions, I have updated my scripts :-).
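Bob's parallel verification can be sketched as a script fragment (the repository path is hypothetical; the guard makes it a no-op where no repository exists):

```shell
REPO=/srv/backup/repo   # hypothetical repository path; adjust to yours
if [ -d "$REPO/rdiff-backup-data" ]; then
    # One verify per daily increment; shared reads mostly hit the kernel
    # page cache, so 8 parallel verifies cost about as much as one.
    for days in 1 2 3 4 5 6 7 8; do
        rdiff-backup --verify-at-time "${days}D" "$REPO" &
    done
    wait
fi
```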

Best,
vdm
.






Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread Dominic Raferd

  
  
@David: if you try obnam please let us know how you get on with it.

I guess one could store rdiff-backup repositories on a deduplication
file system such as lessfs or zfs and get the de-duplication
benefits. And using the --no-compression switch might or might not
bring further space savings (as well as, presumably, faster
backups/restores/verifies). But I am not sure if any deduplication
file systems are reliable enough for backup?

On 09/06/2012 09:37, D. Kriesel wrote:

  If I get it right, there is no such thing as deltas. Every generation seems to virtually stand for its own. However, double chunks of data (no matter if being caused by different machines or generations or whatever) are deduplicated in a generalized way. Nice design! As check for duplicate data made before transmission, this could be also very efficient (however, one could still transmit with rsync and than use obnam).

I'll stop spamming this list now with obnam features, however the delta question is the one that arises for sure when other backup systems are mentioned here ;)

Cheers
David

  
-- 
TimeDicer - File Recovery from Whenever

  



Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-09 Thread D. Kriesel
After reading the obnam documentation I think a combination of 
1) several rsync --link-dest backup slots (for easy accessibility) and 
2) an additional obnam repository (for more generation flexibility while at the 
same time avoiding nearly all the drawbacks of rdiff-backup mentioned in this 
thread)
might do the job for every one of us.

I'm gonna give it a shot and report back if I experience major caveats.

Cheers
David



shorvath rdiff-backup-fo...@backupcentral.com wrote:

So I've been giving obnam a quick test drive and although it does look
very interesting it seems that the backups are not accessible in the
way an rsync backup is.
The advantage with rsync, at least in my case, is that in the event of a
total disaster, I am able to serve my backups as a usable replacement.
Rdiff-backup is similar in the sense that at least the most recent
backup can be offered to clients in the same manner and with the
rdiff-backup fuse module increments can also be accessed, the same goes
for backuppc with the help of a fuse module.
I can't see any way of doing this with obnam (though I may be wrong).
If this is the case then obnam is not a viable solution for me.
I hope I'm wrong as it does look promising.
What's important to me is
a) backups are verified
b) can be presented at anytime (Even if it's read only) in the event of
a disaster.

Is anyone using zfs-on-linux here? (http://zfsonlinux.org/)
I know I can use solaris/openindiana/eon or freebsd but we are a linux
house and I need to keep things consistent.





-- 
D. Kriesel / dkriesel.com




Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-08 Thread Nicolas Jungers

Hi Sorvath,

I use rdiff-backup with TB sized data stores, some churning 10's of GB a 
day (I haven't so far experienced any problem), but if I had to shop for 
a new backup system I'd investigate obnam. Which I'll probably do anyway.


Regards,
Nicolas

On 06/08/2012 10:02 PM, shorvath wrote:

Hi and thanks for the replies.

Florian, you mention you use rdiff-backup (for max 20G) and rsync for larger.
Is it not recommended to use rdiff-backup on large backups?  Mine are several 
Tb's with possible gb's daily increments.
Would it be recommend to stick to rsync?
How about backuppc?
What are your thoughts/comments/suggestions regarding that?

I backup several servers to a local backup server (daily,weekly, monthly) and 
then sync that offsite.
Currently each local backup is checksummed, xfered offsite and checksummed 
again.
We have a set of scripts that rsyncs, rotates, reports and checksums but 
unfortunately they are very inefficient and are just not up to the job.
I have the choice of rewriting them from scratch or use something that's 
already been written (rsnapshot, rdiff-backup, backuppc)
All are much more efficient than anything I could come up with so why reinvent 
the wheel?
I just need to decide on one that does, at the very least, what we're currently 
doing (albeit not very well)

As for ram errors...All our servers have ecc ram.
For bit errors... I'm VERY tempted to take the plunge into the zfs on linux 
project but despite the many good reports on stability, as this is a production 
service, I'm nervous.









Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-06 Thread Florian Kaiser
Hi shorvath,

 My question is whether or not checksumming the files is even necessary
 (either with rsync or rdiff-backup) at all. Doesn't rsync as
 well as rdiff-backup ensure file integrity in its basic operation, and
 if so can anyone elaborate or point me to some reading material to back
 this up?
I guess we all agree up to the point that you should periodically
live-test your backups, e.g. restore them to a second machine and
verify everything still works.

With rsync (and presumably using the --link-dest option), I do not
think you have to checksum at all, because the underlying algorithm
already works using checksums. The only error you will catch using your
checksumming would be bit errors in ram or bad sectors on the target
disk. And note that both errors probably only cause one single version
of a file to be corrupt, because rsync has no increments, only
hardlinks or full files.

With rdiff-backup the situation is a little different. There is
built-in checksumming as well that almost guarantees you that the
_last_ backup run had no errors whatsoever. Please note the
stressing. But as rdiff-backup stores every change as a compressed
increment to the last run, you will need all the preceding increments
to restore the oldest state of a file. So if you run into bit errors or
bad sectors that did _not_ affect the newest state of a file, you might
not only end up with one corrupt version of a file; instead you will
lose all preceding versions, too. As rdiff-backup's approach will save
you quite some disk space, this is, apart from performance issues,
the most important drawback to consider. 

The creators were aware of this and built in a --verify-at-time TIME
option. Using this after each backup will make a test restore of every
file up to the given timestamp and thus verify each increment, giving
you the guarantee that you will be able to restore up to that state. 
Please note that:
a) this will take _a lot_ of time on big repositories and 
b) you should have /tmp (or whatever your $TEMP points to) mounted as a
big enough RAM disk to fit the biggest file in the repository. 

We are heavy users of both rdiff-backup and rsync and we mainly use the
first for the OS (max 20G) and the latter for all the other (huge amounts of)
data. That gives us reasonable tradeoffs between security, reliability,
storage need and backup/verify speed.


Regards
Florian




Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-06 Thread Robert Nichols

On 06/06/2012 02:10 AM, Florian Kaiser wrote:

The creators were aware of this and built in a --verify-at-time TIME option.
Running it after each backup will make a test restore of every file up
to the given timestamp and thus verify each increment, giving you the
guarantee that you will be able to restore up to that state.


That is dangerously misleading.

A) The test only verifies that the files that existed at TIME can be
   restored to their state at that time.  Such a restoral might or
   might not make use of all the intermediate increments between the
   current mirror and the given TIME.

B) There is no test that a restore to any other time can be performed
   without error.  The only checksum that is verified is the one for
   the given TIME.  The only mirror_metadata file (which is where the
   checksums are stored) that is read is the one for the given TIME.
   The mirror_metadata files for other times could be corrupted or
   missing, and no error will be reported.

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-06 Thread Florian Kaiser
Datum: Wed, 06 Jun 2012 08:54:00 -0500, Robert Nichols
 That is dangerously misleading.
 
 A) The test only verifies that the files that existed at TIME can be
 restored to their state at that time.  Such a restoral might or
 might not make use of all the intermediate increments between the
 current mirror and the given TIME.
 
 B) There is no test that a restore to any other time can be performed
 without error.  The only checksum that is verified is the one for
 the given TIME.  The only mirror_metadata file (which is where the
 checksums are stored) that is read is the one for the given TIME.
 The mirror_metadata files for other times could be corrupted or
 missing, and no error will be reported.

Thank you very much for the clarification. I see that I badly
misinterpreted the function and overestimated the verification process
rdiff-backup offers. I also thought that rdiff-backup always needs all
intermediate increments to restore a file, thus thinking that if I
verify at the oldest state, I also verify that nothing in between is
corrupt. I guess I was wrong.

That said, I understand the only way to actually verify all increments
is to call --verify-at-time TIME successively for every TIME you have
an increment for. Is that correct? It does not really sound
thought-through, and I guess it is a very time-consuming process, even
on small to mid-sized repositories, given the typical number of
increments is likely to be 30 or more.

Regards
Florian
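If one did want to script that loop, the session timestamps can be read from the files rdiff-backup keeps in its rdiff-backup-data directory, e.g. the session_statistics.<timestamp>.data files quoted elsewhere in this archive. A hedged sketch (the filename pattern is taken from listings in these threads; verify it against your own repository before relying on it):

```python
import os
import re

# Matches e.g. session_statistics.2011-12-30T22:16:04-05:00.data
STAMP = re.compile(r"^session_statistics\.(.+)\.data$")

def increment_times(repo):
    """Return the sorted timestamps of all sessions recorded in `repo`."""
    data_dir = os.path.join(repo, "rdiff-backup-data")
    times = []
    for name in os.listdir(data_dir):
        m = STAMP.match(name)
        if m:
            times.append(m.group(1))
    return sorted(times)

def verify_commands(repo):
    """One --verify-at-time invocation per recorded session."""
    return [["rdiff-backup", "--verify-at-time", t, repo]
            for t in increment_times(repo)]
```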




Re: [rdiff-backup-users] rdiff-backup file consistency

2012-06-06 Thread Robert Nichols

On 06/06/2012 09:10 AM, Florian Kaiser wrote:

That said, I understand the only way to actually verify all increments
is to call --verify-at-time TIME successively for every TIME you have
an increment for. Is that correct? It does not really sound
thought-through, and I guess it is a very time-consuming process, even
on small to mid-sized repositories, given the typical number of
increments is likely to be 30 or more.


You'd better believe it.  I keep daily backups going back one year and
monthly backups going back forever.

The way I handle it for the dailies is that once a week I do a verify for
each of the 8 most recent daily backups.  That is enough to verify that the
most recent part of the increments chain merges properly with the older
increments.  I do this as part of a weekly process that synchronizes my
active backup drive with another drive that is kept in more secure
storage.  What I've found is that on a quad-core machine I can run 8
simultaneous rdiff-backup --verify-at-time processes in almost exactly the
same time that it takes to run a single verify.  The commonality of file
access means that one process has to wait for the disk drive to read a
block, but the other 7 processes find that block already in the kernel's
cache.  For the most part, the 8 processes stay beautifully in sync.

Then, once a month I make a monthly backup (separate archive, same drive),
do the synchronize and verify with the second drive, and then rotate that
second drive with one that is kept in offsite storage.  Updating and
verifying that other drive that is now a month out-of-date is literally an
all-night process since a full month's worth of increments for several
systems must be checked.

It's far from ideal, but it's the best I've come up with.  I have the
process automated to the point that I just have to tell it sync drive A to
drive B and go to bed.
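Bob's trick of running the eight verifies simultaneously, so they share the kernel's page cache, can be scripted with a small worker pool. A sketch under the assumption that `commands` is a list of argv lists (swap in real rdiff-backup --verify-at-time invocations); threads suffice because each worker just blocks on an external process:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_verifies(commands, workers=8):
    """Run the verify commands `workers` at a time; return their exit codes
    in the same order the commands were given."""
    def run(cmd):
        return subprocess.run(cmd, capture_output=True).returncode

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run, commands))
```

Any non-zero code in the result marks an increment chain that needs attention.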

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup-users Digest, Vol 112, Issue 2

2012-03-10 Thread Willem Buitendyk
Thanks for the ideas.  I think, after doing some more reading, that
using rsync is the more reliable call.  I will break up my files into
smaller, more manageable pieces for rsync to handle.

Sent from my iPad

On 2012-03-10, at 9:00 AM, rdiff-backup-users-requ...@nongnu.org
rdiff-backup-users-requ...@nongnu.org wrote:



 Today's Topics:

   1. how to push rdiff-backup (Willem Buitendyk)
   2. Re: how to push rdiff-backup (Nicolas Jungers)
   3. Re: how to push rdiff-backup (Jernej Simončič)
   4. Re: how to push rdiff-backup (Dominic Raferd)


 --

 Message: 1
 Date: Fri, 9 Mar 2012 19:07:32 -0800
 From: Willem Buitendyk wil...@pcfish.ca
 To: rdiff-backup-users@nongnu.org
 Subject: [rdiff-backup-users] how to push rdiff-backup
 Message-ID: 6ddc8814-2858-488f-84b5-8b3b2718a...@pcfish.ca
 Content-Type: text/plain; charset=windows-1252

 I'm a little perplexed.  My scenario is that I have data loggers in the 
 field, each with a 3g usb modem on board.  I want to have the data loggers 
 rdiff back to my server on amazon ec2 - or to push the backup.  My data 
 loggers have dynamic ip addresses and the telco company keeps them hidden.  
 In other words I can't have my backup amazon ec2 server poll my data logger.  
 Again, the need to push the backup.  I'm not really finding any information 
 on how to accomplish this.

 I tried something like:

 rdiff-backup --remote-schema 'ssh -i /home/ubuntu/id_rsa.pem %s rdiff-backup 
 --server' /home/ubuntu/data/ fielddata@22.22.222.222::/home/fielddata/data/

 This won't work as it complains about incorrect switches.

 Any suggestions would be greatly appreciated.

 Thanks

 Willem

 --

 Message: 2
 Date: Sat, 10 Mar 2012 09:00:29 +0100
 From: Nicolas Jungers nico...@jungers.net
 To: rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] how to push rdiff-backup
 Message-ID: 4f5b0a1d.7000...@jungers.net
 Content-Type: text/plain; charset=UTF-8; format=flowed

 On 2012-03-10 04:07, Willem Buitendyk wrote:
 I'm a little perplexed.  My scenario is that I have data loggers in the 
 field, each with a 3g usb modem on board.  I want to have the data loggers 
 rdiff back to my server on amazon ec2 - or to push the backup.  My data 
 loggers have dynamic ip addresses and the telco company keeps them hidden.  
 In other words I can't have my backup amazon ec2 server poll my data logger. 
  Again, the need to push the backup.  I'm not really finding any information 
 on how to accomplish this.

 I tried something like:

 rdiff-backup --remote-schema 'ssh -i /home/ubuntu/id_rsa.pem %s rdiff-backup 
 --server' /home/ubuntu/data/ fielddata@22.22.222.222::/home/fielddata/data/

 This won't work as it complains about incorrect switches.

 Any suggestions would be greatly appreciated.

 The way I do it is to establish an openVPN tunnel. Beware that
 rdiff-backup is very sensitive to the link quality.

 Regards,
 Nicolas


 Thanks

 Willem







 --

 Message: 3
 Date: Sat, 10 Mar 2012 11:06:24 +0100
 From: Jernej Simončič jernej+s-non...@eternallybored.org
 To: Willem Buitendyk on [rdiff-backup-users]
rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] how to push rdiff-backup
 Message-ID: 909954891.20120310110...@eternallybored.org
 Content-Type: text/plain; charset=windows-1250

 On Saturday, March 10, 2012, 4:07:32, Willem Buitendyk wrote:

 rdiff-backup --remote-schema 'ssh -i /home/ubuntu/id_rsa.pem %s
 rdiff-backup --server' /home/ubuntu/data/
 fielddata@22.22.222.222::/home/fielddata/data/

 This won't work as it complains about incorrect switches.

 Try putting a = between --remote-schema and 'ssh.
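For reference, the schema string is a template in which rdiff-backup substitutes the host part of host::path for %s before executing the result. A sketch of that expansion (using Python's shlex purely for illustration; this mimics the documented substitution, not rdiff-backup's exact internals):

```python
import shlex

schema = "ssh -i /home/ubuntu/id_rsa.pem %s rdiff-backup --server"
host = "fielddata@22.22.222.222"

# rdiff-backup replaces %s with the remote host, then runs the
# resulting command line to start the server on the other side.
argv = shlex.split(schema % host)
print(argv)
```

This is why --server must stay inside the quoted schema: it is an argument to the remote rdiff-backup, not to the local one.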

 --
  Jernej Simončič  http://eternallybored.org/ 

 The one day you'd sell your soul for something, souls are a glut.
   -- Mason's First Law of Synergism




 --

 Message: 4
 Date: Sat, 10 Mar 2012 11:08:30 +
 From: Dominic Raferd domi...@timedicer.co.uk

Re: [rdiff-backup-users] rdiff-backup exception

2012-02-28 Thread Tobias Gödderz

On 27.02.2012 14:52, Dominic Raferd wrote:
 An error message like this has previously been associated with
 backing up to a case-insensitive file system, especially when a
 filename changes its case. What filesystem are you backing up to?

ext4, but I'm backing up *from* NTFS.

Kind regards,

Tobias

 -- *TimeDicer* http://www.timedicer.co.uk: Free File Recovery
 from Whenever
 
 On 27/02/2012 13:08, Tobias Gödderz wrote:
 Hello,
 
 I tried to make my regular backup this weekend, and rdiff-backup
 aborted with the attached output. As there doesn't seem to be a
 meaningful error message but some tracebacks and data dumps, I'm
 somewhat lost and assume that this is not supposed to happen.
 
 How should I proceed with my backup? Just retry running
 rdiff-backup, or should I rather post this on rdiff-backup-bugs?
 
 I didn't touch the backup directory since then, in case it's any
 help to look something up.
 
 I'm using rdiff-backup 1.3.3.
 
 Kind regards,
 
 Tobi
 
 





Re: [rdiff-backup-users] rdiff-backup-users Digest, Vol 110, Issue 2

2012-01-06 Thread Shane Bywater

Hi,
Thanks for the information.  I think that was the problem.  I
might have used 5D to get this to work after posting this request for
assistance, although I'm not too sure as it was a very stressful day.
On second thought, since it was after midnight by the time I had the correct
command syntax, I probably failed to realize at the time that I needed to
add another day.  Data was restored by 3am, and the flat tire I got going
to the customer was repaired at 3:30am (told you it was a stressful
day/night/morning).


Regards,
Shane

On 1/4/2012 12:00 PM, rdiff-backup-users-requ...@nongnu.org wrote:

--

Message: 1
Date: Tue, 03 Jan 2012 12:25:26 -0600
From: Robert Nichols rnicholsnos...@comcast.net
To: rdiff-backup-users@nongnu.org
Subject: Re: [rdiff-backup-users] user needs assistance in restoring
directory
Message-ID: jdvh6n$q52$1...@dough.gmane.org
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 01/02/2012 11:57 PM, Shane Bywater wrote:

Hi,
I'm having a strange issue when trying to restore a directory. I'm using
rdiff-backup 1.2.8 on both the local (backup) server where the backup is
initiated and the data is saved to and the remote fileserver.
The command I have been using to make the backups is:
On the backup server:
#rdiff-backup user@192.168.1.201::/data /backup
This process has been going along without errors showing up and the /data
directory on the remote fileserver is correctly being stored on /backup on the
local backup server.

On Dec. 30/2011 a user mistakenly deleted the directory /data/data/ACS/Data on
the fileserver.

/backup/data/rdiff-backup-data/session_statistics.2011-12-30T22:16:04-05:00.data
StartTime 1325301364.00 (Fri Dec 30 22:16:04 2011)
EndTime 1325301843.68 (Fri Dec 30 22:24:03 2011)
ElapsedTime 479.68 (7 minutes 59.68 seconds)
SourceFiles 10406
SourceFileSize 48650912083 (45.3 GB)
MirrorFiles 19804
MirrorFileSize 50013609650 (46.6 GB)
NewFiles 0
NewFileSize 0 (0 bytes)
DeletedFiles 9398
DeletedFileSize 1362697877 (1.27 GB)
ChangedFiles 6
ChangedSourceSize 12627 (12.3 KB)
ChangedMirrorSize 12317 (12.0 KB)
IncrementFiles 9404
IncrementFileSize 1195961836 (1.11 GB)
TotalDestinationSizeChange -166735731 (-159 MB)
Errors 0
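Files like the session_statistics one above are plain key/value text, so a script can check them after each run, e.g. to alert when Errors is non-zero. A minimal parser sketch based only on the format shown above:

```python
def parse_session_statistics(text):
    """Parse rdiff-backup session_statistics text into a dict.

    Values look like '479.68 (7 minutes 59.68 seconds)' or plain
    numbers; we keep just the leading numeric field of each line.
    """
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            stats[parts[0]] = float(parts[1])
    return stats
```

For example, `parse_session_statistics(open(path).read()).get("Errors")` being non-zero would flag the session for review.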

I am wanting to restore the /data/data/ACS/Data directory to a temporary
directory on the backup server so that I can FTP it to my Windows machine which
is on the same LAN to burn the files on a DVD.

I've tried the following command:
On the backup server:

#sudo rdiff-backup -v8 -r 4D /backup/data /restore


[root@backupserver data]# sudo rdiff-backup -v8 -r 4D /backup/data /restore

[SNIP]

Restore: must_escape_dos_devices = 0
Starting restore of /backup/data to /restore as it was as of Thu Dec 29 23:58:21
2011.
Processing changed file .
Regular copying () to /rdiff-backup.tmp.1
Removing directory /restore
Restore finished
Cleaning up

As you can see at the end of the process it removes the /restore directory where
the restored directory and files should be. What am I doing wrong? I'm guessing
(hoping) it's something simple but I'm been at this for too long and it's past
midnight and I'm out of options. I'm hoping someone can let me know what command
to use and on what server to use it so that I can restore the deleted
/data/data/ACS/Data directory to a temporary directory on the backup server.

That result is consistent with the directory not existing for the specified
time.  Have you run rdiff-backup -l /backup/data or rdiff-backup -l
/backup/data/ACS/Data to see what increments exist for the directories of
interest?







Re: [rdiff-backup-users] rdiff-backup failed: error 31

2011-12-13 Thread Dominic Raferd

On 13/12/2011 14:31, Jean Pierre Dentone wrote:

Hi

I'm trying to get a backup of a 3TB partition and it failed with this error

Exception '[Errno 31] Too many links:
'/rdiff/newarsvrfs01/htdocs/chase.archinetonline.com/chasedata/project_89381''
raised of class 'exceptions.OSError':

This had been working OK for a while, and then we had a problem with the
RAID being degraded. We fixed the RAID issue, and the backups never
worked again. I have even tried to do a fresh backup on a new
partition/location, but we get the same error.

Any ideas what the problem could be? We haven't changed anything on our
backup server.

PS: I'm using rdiff-backup 1.2.8


Try running rdiff-backup with --no-eas option (unless you need extended 
attributes), and see if this helps.


Dominic
http://www.timedicer.co.uk: File Recovery from Whenever




Re: [rdiff-backup-users] rdiff-backup failed: error 31

2011-12-13 Thread Jean Pierre Dentone

Thanks, but that did not work. I still got the same error.



On 12/13/2011 10:38 AM, Dominic Raferd wrote:

On 13/12/2011 14:31, Jean Pierre Dentone wrote:

Hi

I'm trying to get a backup of a 3TB partition and it failed with this 
error


Exception '[Errno 31] Too many links:
'/rdiff/newarsvrfs01/htdocs/chase.archinetonline.com/chasedata/project_89381'' 


raised of class 'exceptions.OSError':

This had been working OK for a while, and then we had a problem with the
RAID being degraded. We fixed the RAID issue, and the backups never
worked again. I have even tried to do a fresh backup on a new
partition/location, but we get the same error.

Any ideas what the problem could be? We haven't changed anything on our
backup server.

PS: I'm using rdiff-backup 1.2.8


Try running rdiff-backup with --no-eas option (unless you need 
extended attributes), and see if this helps.


Dominic
http://www.timedicer.co.uk: File Recovery from Whenever




Regards,

--
Jean Pierre Dentone
Senior Linux Administrator
FortressITX





Re: [rdiff-backup-users] rdiff-backup failed: error 31

2011-12-13 Thread Jean Pierre Dentone

no errors in the source server

I have two RAID controllers, and both are working OK; it is just a big
partition that I have to back up:


/dev/mapper/volgroup-volume01
   15T  3.0T   11T  22% /data

The backup server had some issues, but they were fixed, so I tried to run
rdiff-backup on a new partition/directory, but it is not working :(


On 12/13/2011 4:12 PM, Dominic Raferd wrote:
Are you sure the source filesystem is error-free?  This error seems to 
be rarely seen with rdiff-backup so I suspect something unusual in 
your setup.


Dominic
--
*TimeDicer* http://www.timedicer.co.uk: Free File Recovery from Whenever


On 13/12/2011 19:11, Jean Pierre Dentone wrote:

Thanks, but that did not work. I still got the same error.


On 12/13/2011 10:38 AM, Dominic Raferd wrote:

On 13/12/2011 14:31, Jean Pierre Dentone wrote:

Hi

I'm trying to get a backup of a 3TB partition and it failed with this
error

Exception '[Errno 31] Too many links:
'/rdiff/newarsvrfs01/htdocs/chase.archinetonline.com/chasedata/project_89381''

raised of class 'exceptions.OSError':

This had been working OK for a while, and then we had a problem with the
RAID being degraded. We fixed the RAID issue, and the backups never
worked again. I have even tried to do a fresh backup on a new
partition/location, but we get the same error.

Any ideas what the problem could be? We haven't changed anything on our
backup server.

PS: I'm using rdiff-backup 1.2.8

Try running rdiff-backup with --no-eas option (unless you need
extended attributes), and see if this helps.

Dominic
http://www.timedicer.co.uk:  File Recovery from Whenever


Regards,






Regards,

--
Jean Pierre Dentone
Senior Linux Administrator
FortressITX



Re: [rdiff-backup-users] rdiff-backup changes .gz files

2011-07-18 Thread Maarten Bezemer


On Sun, 17 Jul 2011, Robert Nichols wrote:


All that matters is that
rdiff-backup records a checksum that is consistent with the file it
_did_ back up, and that is always the case.


Given my example above, this may not always be the case. At least for most 
unix file systems.


You misunderstand me.  All I'm saying is that the recorded checksums are
always consistent with the files that are stored in the mirror and
increments.  That indeed might not match the source files if there were
hidden changes.


Well, maybe you're right and I misread your text. However, even though it
is essential to have the internal mirror state in good order, the fact
that there appear to be situations in which one would be unable to
restore the source to a working and consistent state bothers me more.


In fact, this makes using rdiff-backup an insufficient way of making
backups, which happens to be the primary function of the program :-S


--
Maarten



Re: [rdiff-backup-users] rdiff-backup changes .gz files

2011-07-17 Thread Arno Wagner
Hi,

I found a bit more time and found some .pm files with the same
issue as well. Looking into the files, though, I have a suspicion:
for some reason these files were changed by a Debian update, which
then set the same original metadata after installing the new
files. The suspicion comes from the differences being date
format changes only, no corruption at all.

So, new question: is there a way to make rdiff-backup
actually do a checksum comparison for existing files in
order to determine what to back up? I know that would
be significantly slower, but at least on Debian system
partitions, a metadata comparison is obviously not
enough. As a substitute, can I force rdiff-backup
to back up specific files even if the metadata
is the same?

Regards,
Arno
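As the replies in this thread note, rdiff-backup offers no switch for this, but a checksum sweep outside of rdiff-backup can at least flag files whose content changed while their metadata did not. A hedged sketch comparing two directory trees (MD5 chosen only to match the poster's md5sum workflow; the tree layout is illustrative):

```python
import hashlib
import os

def md5_of(path, chunk=1 << 16):
    """MD5 of a file, read in chunks to keep memory flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def silently_changed(src, mirror):
    """Relative paths whose bytes differ between `src` and `mirror`
    even though size and mtime agree -- the case that a pure
    metadata comparison misses."""
    hits = []
    for dirpath, _dirs, files in os.walk(src):
        for name in files:
            a = os.path.join(dirpath, name)
            b = os.path.join(mirror, os.path.relpath(a, src))
            if not os.path.isfile(b):
                continue
            sa, sb = os.stat(a), os.stat(b)
            if (sa.st_size, int(sa.st_mtime)) == (sb.st_size, int(sb.st_mtime)) \
                    and md5_of(a) != md5_of(b):
                hits.append(os.path.relpath(a, src))
    return hits
```

Anything it reports is a candidate for the "metadata the same, data changed" class of file discussed here.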




On Sat, Jul 09, 2011 at 10:19:59PM +0200, Arno Wagner wrote:
 Hi,
 
 I do full Linux root partition backups with rdiff-backup
 1.2.8 on Debian squeeze. As I have had numerous instances of
 silent corruption, I always verify my backups, in this case with
 
   rdiff-backup --compare-full --exclude-sockets --exclude-other-filesystems 
 
 This verify gives me several 'metadata the same, data changed' 
 errors that do not make sense. Examples are:
 
/usr/share/doc/libssl-dev/demos/tunala/INSTALL.gz
/usr/share/doc/lrzsz/NEWS.gz
/usr/share/doc/openssl/doc/apps/rsa.pod.gz
/usr/share/doc/xfig/LATEX.AND.XFIG.zh_CN.gz
 
 When I compare manually, these files are indeed different, 
 but gzip -tv tells me both are fine and they decompress 
 to bit-identical files. 
 
 One thing I have noticed is that these files are all pretty old.
 
 What is going on here? As I also use md5sum for integrity checks,
 modifying .gz files in a backup is really not a good idea, even
 if they decompress the same. In fact, changing anything when backing 
 up files is not a good idea. Also, with this problem, I have to 
 check manually for each verification error whether it is truly an 
 error or an instance of this not-really corruption.
 
 To me this looks like a bug in rdiff-backup. 
 I have observed this only with .gz files.
 
 Arno
 -- 
 Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: a...@wagner.name 
 GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F
 
 Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans
 
 If it's in the news, don't worry about it.  The very definition of 
 news is something that hardly ever happens. -- Bruce Schneier 

-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: a...@wagner.name 
GnuPG:  ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C  0296 797F 6B50 1E25 338F

Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it.  The very definition of 
news is something that hardly ever happens. -- Bruce Schneier 



Re: [rdiff-backup-users] rdiff-backup changes .gz files

2011-07-17 Thread Robert Nichols

On 07/17/2011 12:35 PM, Arno Wagner wrote:

Hi,

I found a bit more time and found some .pm files with the same
issue as well. Looking into the files, though, I have a suspicion:
for some reason these files were changed by a Debian update, which
then set the same original metadata after installing the new
files. The suspicion comes from the differences being date
format changes only, no corruption at all.

So, new question: is there a way to make rdiff-backup
actually do a checksum comparison for existing files in
order to determine what to back up? I know that would
be significantly slower, but at least on Debian system
partitions, a metadata comparison is obviously not
enough. As a substitute, can I force rdiff-backup
to back up specific files even if the metadata
is the same?


No way I know of to do that.  I find it surprising that .gz and .pm
files would be re-issued without changing the time stamps, but one
place where files do get changed without altering either the size or
modification time is during prelinking of ELF shared libraries and
binaries.  Now, the file size will increase the first time a file is
prelinked, and that change will be noticed by rdiff-backup.  But,
subsequent runs of prelink will just alter internal address fields,
so the size does not change.  In all cases the modification time is
preserved.

Frankly, I view this as a mixed blessing.  If all of the altered
files got backed up every time prelink was run, my incremental
backups would balloon quite seriously, and it matters very little
which prelinked version gets restored.  The loader would detect the
out-of-date prelink information and do the same work it would have
to do with a non-prelinked binary, and everything will get back in
sync the next time prelink is run.  All that matters is that
rdiff-backup records a checksum that is consistent with the file it
_did_ back up, and that is always the case.

One thing that _does_ have a problem with these files is using rsync
to maintain two copies of an rdiff-backup archive, since rsync would
happily update metadata and increment files but fail to notice that
the mirror file itself had changed.  Avoiding use of the -c option
(which would be prohibitively expensive on a large archive) gets a
bit messy.

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup changes .gz files

2011-07-17 Thread Robert Nichols

On 07/17/2011 06:08 PM, Maarten Bezemer wrote:




On 07/17/2011 12:35 PM, Arno Wagner wrote:



I have a suspicion, namely that for some reason these
files were changed by a Debian update, that then did
set the same original metadata after installing new
files. The suspicion comes from the differences being date
format changes only, no corruption at all.



So new question: Is there a way to make rdiff-backup
actually do a checksum comparison for exisitng files in
order to determine what to backup?


Better question: apparently Debian issues files with the same name and
timestamps but with different contents? I'd say that shouldn't happen.

Other question: can rdiff-backup be instructed to record and match a file's
ctime (change time, not creation time) in addition to the mtime for filesystems
that do support that?

I just tried chmod-ing a file, which changes ctime but not mtime, and indeed
rdiff-backup skipped the file.
Next, I copied the file to .blah (cp -a, so preserving metadata), removed the
original, and renamed .blah back to the original name.
rdiff-backup then updated the directory (since its mtime was updated by the file
copy and removal), but the file under test was still skipped.

Maybe we could check how rsync handles these cases?
Maybe we could also store & check the inode number, but that wouldn't catch in-place
modifications followed by metadata resets.
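The chmod observation can be reproduced directly with os.stat: a permissions change bumps ctime while leaving mtime alone, which is exactly the signal a size-and-mtime comparison does not use. A small demonstration sketch (behavior shown is for typical Linux filesystems):

```python
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)

before = os.stat(path)
time.sleep(1.1)          # make sure the timestamps can actually differ
os.chmod(path, 0o644)    # metadata-only change, like the chmod test above
after = os.stat(path)

mtime_changed = after.st_mtime_ns != before.st_mtime_ns
ctime_changed = after.st_ctime_ns != before.st_ctime_ns
print(mtime_changed, ctime_changed)
os.unlink(path)
```

On Linux this prints `False True`: the content timestamp is untouched while the status-change timestamp records the chmod.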


On Sun, 17 Jul 2011, Robert Nichols wrote:


All that matters is that
rdiff-backup records a checksum that is consistent with the file it
_did_ back up, and that is always the case.


Given my example above, this may not always be the case. At least for most unix
file systems.


You misunderstand me.  All I'm saying is that the recorded checksums are
always consistent with the files that are stored in the mirror and
increments.  That indeed might not match the source files if there were
hidden changes.

--
Bob Nichols NOSPAM is really part of my email address.
Do NOT delete it.




Re: [rdiff-backup-users] rdiff-backup changes .gz files

2011-07-17 Thread Patrick Nagel
Hi,

Arno Wagner a...@wagner.name wrote:
I found a bit more time and found some .pm files with the same
issue as well. Looking into the files, though, I have a suspicion:
for some reason these files were changed by a Debian update, which
then set the same original metadata after installing the new
files. The suspicion comes from the differences being date
format changes only, no corruption at all.

So, new question: is there a way to make rdiff-backup
actually do a checksum comparison for existing files in
order to determine what to back up? I know that would
be significantly slower, but at least on Debian system
partitions, a metadata comparison is obviously not
enough. As a substitute, can I force rdiff-backup
to back up specific files even if the metadata
is the same?

Last time we talked about this is just a few weeks back:
http://lists.nongnu.org/archive/html/rdiff-backup-users/2011-03/msg00016.html
The context was a bit different, but same problem.

Patrick
--
Sent from my phone.



Re: [rdiff-backup-users] rdiff-backup fails with forced-command but works through shell

2011-05-25 Thread chuck odonnell
On Tue, May 24, 2011 at 12:50:47PM -0400, Richard Freytag wrote:
 
 The Error
 
 The problem is that if I want to force the command on the server
 by altering the server-side public key so it looks for
 '/usr/local/bin/hard-coded-rdiff' as follows:
 
 from=client,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=/usr/local/bin/hard-coded-rdiff
  ssh-rsa AAasdfaj23jasljjj..etc., etc. 
 
[snip]
 
 chmod 744 /usr/local/bin/hard-coded-rdiff
 
 ...so it should be executable by all. 'user' can execute /user/local/bin/
 hard-coded-rdiff.
 
[snip]
 #! /usr/bin/sh
 
 /usr/bin/rdiff-backup --server --restrict-read-only /

Hi,

You need to 'exec rdiff-backup ...' rather than calling it.

  exec /usr/local/bin/rdiff-backup --server

Not sure why you want your script executable by all; it should only
need to be executable by the backup user?

Also, depending on what you are doing, you may need to export PATH in
the script since .profile is not executed by the sshd in this case,
e.g.,

  #!/bin/sh
  export PATH=/usr/bin:/bin:/usr/sbin:/sbin

Good luck.

Best,

Chuck



Re: [rdiff-backup-users] rdiff-backup --verify 1.2.8 running around in circles

2011-03-01 Thread Dominic Raferd


On 20/02/2011 20:51, Marc Haber wrote:

On Sun, Feb 20, 2011 at 06:11:19PM +, Dominic Raferd wrote:

first step, unless the most recent
backup is critical to you, is to try regressing the repository to get it
back to a clean state, with the --check-destination-dir switch

Fatal Error: Destination dir scyw00225 does not need checking


  If this refuses to regress, claiming that there is no corruption, I
  have a bash script which can 'force' a regression. You can repeat
  this as many times as necessary to get past (hopefully) the original
  corruption.

So it'll be a force regression, try --verify, until --verify runs
through, losing one full, probably uncorrupted increment per iteration?


Exactly. Best to take a backup of the repository before trying it.

Dominic



Re: [rdiff-backup-users] rdiff-backup streaming?

2011-02-20 Thread mark price
So if I have a 10 GB file backed up and want to check for updates, and there is 
an update of only 500 MB, will it do this (which is what I need):

Read a block (let's say 100kb), then generate a signature for THAT block then 
generate a delta file for THAT block and then encrypt and compress THAT block, 
all in one streaming pass? And if so how is this performed? Is there some type 
of buffer function in the rdiff python wrapper?




On Feb 17, 2011, at 12:01 PM, Dominic Raferd domi...@timedicer.co.uk wrote:

 I used rdiff.exe briefly before moving to rdiff-backup. Certainly 
 rdiff-backup does your 2-4 'under the hood' and fast. It is highly optimised 
 both for speed of transfer and for storage space - but does require a 
 reliable connection between source and destination. It does not do your 1, 
 you can achieve this best by backing up from an LVM snapshot (if your source 
 machine is Linux) or VSS (if it is Windows).
 
 I am pretty sure that rdiff-backup does not use rdiff, instead it uses a 
 python module built from the librsync library (which is also called, I think, 
 by rdiff).
 
 Dominic
 http://www.timedicer.co.uk/
 
 On 17/02/2011 17:41, Mark Price wrote:
 Question -
 
 We use rdiff (not rdiff-backup) to do our incremental file backups.
 
 We do:
 
   1. Copy the file to a staging area (so the file won't disappear or
  be modified while we work on it)
   2. Hash the original file, and compute an rdiff signature (used
  for delta differencing)
   3. Compute an rdiff delta difference (if we have no prior version,
  this step is skipped)
   4. Compress & encrypt the resulting delta difference
 
 Our problem is all of these things happen in separate phases, distinctly one 
 from the other.  This means it takes a long time to do its job.  What I am 
 wondering is if rdiff-backup does all of these things in one read/write file 
 pass in a streaming manner, or if it simply calls to the standard rdiff.exe 
 (which isn't working for us)?  I didn't quite see information concerning 
 this in the docs or wiki.  We have considered using xdelta (because it 
 operates in a streaming manner) but the problem with xdelta is that it 
 stores double copies of the deltas and kills storage space.  Any help on 
 this matter would be great!



Re: [rdiff-backup-users] rdiff-backup --verify 1.2.8 running around in circles

2011-02-20 Thread Dominic Raferd

Marc:

I can't help with the python. But first step, unless the most recent 
backup is critical to you, is to try regressing the repository to get it 
back to a clean state, with the --check-destination-dir switch. If this 
refuses to regress, claiming that there is no corruption, I have a bash 
script which can 'force' a regression. You can repeat this as many times 
as necessary to get past (hopefully) the original corruption.
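The regress-then-verify cycle described here could be scripted roughly as follows. This is a hedged sketch: the function name regress_until_clean and the repository path are illustrative, and it uses only the standard --check-destination-dir switch, not the separate force-regression script mentioned above.

```shell
# Sketch of the regress-then-verify cycle; function name and repo path are
# illustrative. Each --check-destination-dir run regresses the repository
# by one increment when it detects corruption.
regress_until_clean() {
  repo=$1
  until rdiff-backup --verify "$repo"; do
    rdiff-backup --check-destination-dir "$repo" || return 1
  done
}
# usage (after taking a copy of the repository first):
#   regress_until_clean /srv/backups/myrepo
```

As noted above, each forced pass loses one (probably uncorrupted) increment, so back up the repository before trying it.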


Secondly, the rdiff-backup --verify switch gives only a partial verification 
of the repository. Daniel Miller wrote a patch, available for 
rdiff-backup 1.2.8, which adds --verify-full and --verify-full-at-time 
switches that can perform a full verification of a repository. I use 
this patched version in TimeDicer Server, for this purpose only, in 
preference to the standard rdiff-backup program.


Dominic
http://www.timedicer.co.uk

On 20/02/11 12:23, Marc Haber wrote:

Hi,

I have an rdiff-backup directory that was corrupted by a file system
crash. As I don't want to lose the backup, I tried rdiff-backup
-v9 --verify on the repository. But already after a few seconds of
running, output stops at:

Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/kdm/backgroundrc
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/kdm/kdm.options
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/kdm/kdmrc
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/kdm/kdmrc.dpkg-old
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/khotnewstuff.knsrc
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kde4/kshorturifilterrc
Sun Feb 20 08:28:51 2011  Verified SHA1 digest of etc/kernel/postinst.d/initr
Sun Feb 20 08:28:51 2011  Warning: Could not restore file etc/kernel/postinst.d/initramfs-tools!

A regular file was indicated by the metadata, but could not be
constructed from existing increments because last increment had type
None.  Instead of the actual file's data, an empty length file will be
created.  This error is probably caused by data loss in the
rdiff-backup destination directory, or a bug in rdiff-backup
Sun Feb 20 08:28:51 2011  Warning: Computed SHA1 digest of 
etc/kernel/postinst.d/initramfs-tools
da39a3ee5e6b4b0d3255bfef95601890afd80709
doesn't match recorded digest of
e69689ccfb34f74c4ba64d1f1e842df083020b31
Your backup repository may be corrupted!

and rdiff-backup started taking 100% CPU and stopped accessing the
disk. This was not the first case of an increment of type None being
found and flagged as such.

 From a cursory look at an strace output, rdiff-backup seems to be
running around in the following loop...




Re: [rdiff-backup-users] rdiff-backup --verify 1.2.8 running around in circles

2011-02-20 Thread Marc Haber
On Sun, Feb 20, 2011 at 06:11:19PM +, Dominic Raferd wrote:
 I can't help with the python. But first step, unless the most recent  
 backup is critical to you, is to try regressing the repository to get it  
 back to a clean state, with the --check-destination-dir switch

Fatal Error: Destination dir scyw00225 does not need checking

  If this refuses to regress, claiming that there is no corruption, I
  have a bash script which can 'force' a regression. You can repeat
  this as many times as necessary to get past (hopefully) the original
  corruption.

So it'll be a force regression, try --verify, until --verify runs
through, losing one full, probably uncorrupted increment per iteration?

 Secondly, rdiff-backup --verify switch gives only a partial verification  
 of the repository. Daniel Miller wrote a patch, available for  
 rdiff-backup 1.2.8, which adds --verify-full and --verify-full-at-time  
 switches which can perform a full verification of a repository.

Debian's version does not seem to be patched. What does --verify omit?

Greetings
Marc

-- 
-
Marc Haber | I don't trust Computers. They | Mailadresse im Header
Mannheim, Germany  |  lose things.Winona Ryder | Fon: *49 621 72739834
Nordisch by Nature |  How to make an American Quilt | Fax: *49 3221 2323190



Re: [rdiff-backup-users] rdiff-backup streaming?

2011-02-18 Thread Alex Pounds
On Thu, Feb 17, 2011 at 02:38:24PM -0500, Greg Freemyer wrote:
 Are you sure the rdiff delta's are encrypted?
 I know the primary backup is not.

If you're interested in encrypted backups, you should check out Duplicity: 
http://duplicity.nongnu.org/

-- 
Alex Pounds .~. http://www.alexpounds.com/
/V\http://www.ethicsgirls.com/
   // \\
Variables won't; Constants aren't   /(   )\
   ^`~'^



Re: [rdiff-backup-users] rdiff-backup streaming?

2011-02-18 Thread Eric Wheeler
On Thu, 2011-02-17 at 10:41 -0700, Mark Price wrote:

 What I am wondering is if rdiff-backup does all of these things in one
 read/write file pass in a streaming manner, or if it simply calls to
 the standard rdiff.exe (which isn't working for us)?  I didn't quite
 see information concerning this in the docs or wiki.  We have
 considered using xdelta (because it operates in a streaming manner)
 but the problem with xdelta is that it stores double copies of the
 deltas and kills storage space.  Any help on this matter would be
 great!

rdiff-backup is a great tool and hides much of the bookkeeping you would
otherwise do with rdiff, and while it does use temporary files under the hood,
this is transparent to the user.

What type of files do you use rdiff-backup for?
Replicating real files, or block devices?

Incidentally, I wrote a blog entry the other day about streaming using
rdiff, ssh, and stdio pipes that you might find useful:

http://www.globallinuxsecurity.pro/blog.php


-- 
Eric Wheeler
President
eWheeler, Inc.
  dba Global Linux Security

www.GlobalLinuxSecurity.pro
503-330-4277
PO Box 14707
Portland, OR 97293

 
 Thanks,
 Mark Price
 



Re: [rdiff-backup-users] rdiff-backup streaming?

2011-02-17 Thread Dominic Raferd
I used rdiff.exe briefly before moving to rdiff-backup. Certainly 
rdiff-backup does your 2-4 'under the hood' and fast. It is highly 
optimised both for speed of transfer and for storage space - but does 
require a reliable connection between source and destination. It does 
not do your 1, you can achieve this best by backing up from an LVM 
snapshot (if your source machine is Linux) or VSS (if it is Windows).


I am pretty sure that rdiff-backup does not use rdiff, instead it uses a 
python module built from the librsync library (which is also called, I 
think, by rdiff).


Dominic
http://www.timedicer.co.uk/

On 17/02/2011 17:41, Mark Price wrote:

Question -

We use rdiff (not rdiff-backup) to do our incremental file backups.

We do:

   1. Copy the file to a staging area (so the file won't disappear or
  be modified while we work on it)
   2. Hash the original file, and compute an rdiff signature (used
  for delta differencing)
   3. Compute an rdiff delta difference (if we have no prior version,
  this step is skipped)
   4. Compress & encrypt the resulting delta difference

Our problem is all of these things happen in separate 
phases, distinctly one from the other.  This means it takes a long 
time to do its job.  What I am wondering is if rdiff-backup does all 
of these things in one read/write file pass in a streaming manner, or 
if it simply calls to the standard rdiff.exe (which isn't working for 
us)?  I didn't quite see information concerning this in the docs or 
wiki.  We have considered using xdelta (because it operates in a 
streaming manner) but the problem with xdelta is that it stores double 
copies of the deltas and kills storage space.  Any help on this matter 
would be great!
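The four phases listed above could be collapsed into one shell function along these lines. This is only a sketch: the function and file names are illustrative, it assumes librsync's rdiff CLI and gpg are installed when actually run, and it still makes separate read passes, which is exactly the cost rdiff-backup's librsync binding is said to avoid.

```shell
# Sketch of steps 1-4 as a single function; names are illustrative and the
# librsync 'rdiff' CLI plus gpg are assumed to be available at run time.
backup_one() {
  src=$1 prev_sig=$2 out=$3
  cp -p "$src" "$out.staged"                  # 1. stage a stable copy
  sha256sum "$out.staged" > "$out.sha256"     # 2a. hash the original
  rdiff signature "$out.staged" "$out.sig"    # 2b. signature for the next run
  if [ -f "$prev_sig" ]; then
    rdiff delta "$prev_sig" "$out.staged" "$out.delta"  # 3. delta vs. prior
  else
    cp "$out.staged" "$out.delta"             # no prior version: full copy
  fi
  # 4. compress & encrypt (gpg -c prompts for a passphrase interactively;
  # unattended key handling is omitted here)
  gzip -c "$out.delta" | gpg -c -o "$out.delta.gz.gpg"
}
```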




Re: [rdiff-backup-users] rdiff-backup streaming?

2011-02-17 Thread Greg Freemyer
Are you sure the rdiff delta's are encrypted?

I know the primary backup is not.

We address this by placing our rdiff backups on encrypted filesystems
(using encfs).

So our process is:

1) Create a LVM snapshot

2) run rdiff-backup sending data to a local encrypted filesystem.

3) use rsync to send the encrypted backup to a remote machine on the cloud.

If you don't feel the need for a local and remote copy, you can likely
combine 2 and 3 and go directly to the remote site, but I have not
looked into that.
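The three steps above might look roughly like this in a cron script. It is a sketch only: the function name, volume group, mount points, and remote target are all illustrative assumptions.

```shell
# Sketch of the snapshot -> rdiff-backup -> rsync flow; the function name,
# volume group, mount points, and remote target are illustrative assumptions.
snapshot_backup() {
  vg=$1 lv=$2 enc_repo=$3 remote=$4
  # 1. create and mount a read-only LVM snapshot so files cannot change mid-run
  lvcreate --snapshot --size 5G --name "${lv}snap" "/dev/$vg/$lv" || return 1
  mount -o ro "/dev/$vg/${lv}snap" /mnt/snap
  # 2. back up the frozen view into a locally mounted encrypted (encfs) tree
  rdiff-backup /mnt/snap "$enc_repo"
  # 3. push the already-encrypted repository offsite
  rsync -a --delete "$enc_repo/" "$remote"
  umount /mnt/snap
  lvremove -f "/dev/$vg/${lv}snap"
}
# usage: snapshot_backup vg0 data /mnt/encrypted-backups/data \
#          backup@remote:/srv/offsite/data/
```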

Greg

On Thu, Feb 17, 2011 at 2:01 PM, Dominic Raferd domi...@timedicer.co.uk wrote:
 I used rdiff.exe briefly before moving to rdiff-backup. Certainly
 rdiff-backup does your 2-4 'under the hood' and fast. It is highly optimised
 both for speed of transfer and for storage space - but does require a
 reliable connection between source and destination. It does not do your 1,
 you can achieve this best by backing up from an LVM snapshot (if your source
 machine is Linux) or VSS (if it is Windows).

 I am pretty sure that rdiff-backup does not use rdiff, instead it uses a
 python module built from the librsync library (which is also called, I
 think, by rdiff).

 Dominic
 http://www.timedicer.co.uk/

 On 17/02/2011 17:41, Mark Price wrote:

 Question -

 We use rdiff (not rdiff-backup) to do our incremental file backups.

 We do:

   1. Copy the file to a staging area (so the file won't disappear or
      be modified while we work on it)
   2. Hash the original file, and compute an rdiff signature (used
       for delta differencing)
   3. Compute an rdiff delta difference (if we have no prior version,
       this step is skipped)
   4. Compress & encrypt the resulting delta difference

 Our problem is all of these things happen in separate phases, distinctly
 one from the other.  This means it takes a long time to do its job.  What I
 am wondering is if rdiff-backup does all of these things in one read/write
 file pass in a streaming manner, or if it simply calls to the standard
 rdiff.exe (which isn't working for us)?  I didn't quite see information
 concerning this in the docs or wiki.  We have considered using xdelta
 (because it operates in a streaming manner) but the problem with xdelta is
 that it stores double copies of the deltas and kills storage space.  Any
 help on this matter would be great!





-- 
Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
CNN/TruTV Aired Forensic Imaging Demo -
   
http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com



Re: [rdiff-backup-users] rdiff-backup old bug appears again?

2011-01-17 Thread Christian Kölpin
seems to be related to this one (ACL Related):
http://www.mail-archive.com/rdiff-backup-users@nongnu.org/msg03069.html

 Original Message 
 Date: Mon, 17 Jan 2011 08:34:42 +0100
 From: Andreas Olsson andr...@arrakis.se
 To: Christian Kölpin raptor2...@gmx.de
 CC: rdiff-backup-users@nongnu.org
 Subject: Re: [rdiff-backup-users] rdiff-backup old bug appears again?

 Mon 2011-01-17 at 07:50 +0100, Christian Kölpin wrote:
  I'm a little bit confused. I searched for this error and found some sources
  saying it is fixed since 1.6.1.
 
 Just to spare others from having to spend time figuring out which bug
 report you are referring to, how about you share a link?
 
 // Andreas



Re: [rdiff-backup-users] rdiff-backup old bug appears again?

2011-01-16 Thread Andreas Olsson
Mon 2011-01-17 at 07:50 +0100, Christian Kölpin wrote:
 I'm a little bit confused. I searched for this error and found some sources
 saying it is fixed since 1.6.1.

Just to spare others from having to spend time figuring out which bug
report you are referring to, how about you share a link?

// Andreas



Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-11 Thread ~D

On 12/08/2010 11:01 AM, Dominic Raferd wrote:

On 07/12/2010 22:40, ~D wrote:

On 12/07/2010 09:33 PM, Dominic Raferd wrote:


On 07/12/10 18:54, ~D wrote:

On 12/07/2010 05:09 PM, Dominic Raferd wrote:

On 07/12/2010 14:42, ~D wrote:

On 12/07/2010 03:26 PM, D. Kriesel wrote:

Do you prevent a shutdown or reboot when rdiff-backup is running?
How?


Since rdiff-backup does not like backup interruptions (and
therefore is not usable on slow or unreliable connections) I rdiff
to a local repository on the server (the server usually doesn't
reboot) and rsync the entire repository to the remote destination.


My backup only runs when there is an internet connection. I use my
laptop at home most of the time. I have to find a way to check whether
rdiff-backup is running and to reboot only after it has finished. I can
use htop, of course, but I am not sure whether rdiff-backup is running
locally on my laptop or only on my server.

Maybe there is a more advanced way to do this?

You can test for running rdiff-backup locally thus:

[ -n "`ps -o pid --no-heading -C rdiff-backup`" ] && echo running || echo not running

and you can query the remote server thus:

[ -n "`ssh u...@remote.server ps -o pid --no-heading -C rdiff-backup 2>/dev/null`" ] && echo running || echo not running

Ah, thanks! So it should be possible to make a script which checks
if it is running; if not, shut down, and if so, wait 15 minutes and check
again, etc. Right?

~D


yes you could write a bash script and put it as a job in your crontab
to run every 15 minutes, say...

My mirror backup server (not my primary) is updated from my primary by
a script which uses rdiff. The script runs on the primary and starts
by waking up the mirror server (over the internet), then runs rsync,
then after checking all is well, powers down the mirror.


Hmm I  just have a small NAS server (qnap 109), no 'mirror backup 
server' here.


What do you mean by 'waking up the mirror server'?

~D




Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-11 Thread Dominic Raferd



yes you could write a bash script and put it as a job in your crontab
to run every 15 minutes, say...

My mirror backup server (not my primary) is updated from my primary by
a script which uses rdiff. The script runs on the primary and starts
by waking up the mirror server (over the internet), then runs rsync,
then after checking all is well, powers down the mirror.


Hmm I  just have a small NAS server (qnap 109), no 'mirror backup 
server' here.


What do you mean by 'waking up the mirror server'?

~D Wiki


Well our main backup is done by all our machines to a server on our LAN 
using rdiff-backup - this I call the 'primary' backup server. Then each 
night this primary machine runs a script which uses rsync to copy its 
contents to an offsite machine, which therefore is maintained as a 
'mirror' of the primary.


The mirror machine is normally switched off. To switch it on, the script 
sends a 'magic packet' to the mirror machine, thus turning it on. Then 
the script runs rsync, and when that is over, it shuts the mirror 
machine down again.


In a simple case, you can use the wakeonlan utility to wake the remote 
machine: see http://gsd.di.uminho.pt/jpo/software/wakeonlan/. If it 
isn't already available on your machine and you use Debian or Ubuntu you 
can add it with apt-get, I think. If the remote machine is behind another 
router then waking it may be more complicated: I have a special script 
to pass through a Netgear DG834G router, for example (see 
http://www.timedicer.co.uk/dg834g.)
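For reference, the 'magic packet' that wakeonlan sends is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times; the MAC below is an illustrative placeholder.

```shell
# Build a Wake-on-LAN magic packet in hex: 6 x 0xff, then the MAC 16 times.
# The MAC address is a placeholder; wakeonlan broadcasts this on UDP port 9.
mac="00:11:22:33:44:55"
machex=$(printf '%s' "$mac" | tr -d ':')
packet="ffffffffffff"
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
  packet="$packet$machex"
done
# 12 hex chars of 0xff + 16 x 12 hex chars of MAC = 204 hex chars = 102 bytes
echo "${#packet}"   # prints 204
```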


Dominic



Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-11 Thread ~D

On 12/11/2010 01:37 PM, Dominic Raferd wrote:



yes you could write a bash script and put it as a job in your crontab
to run every 15 minutes, say...

My mirror backup server (not my primary) is updated from my 
primary by

a script which uses rdiff. The script runs on the primary and starts
by waking up the mirror server (over the internet), then runs rsync,
then after checking all is well, powers down the mirror.


Hmm I  just have a small NAS server (qnap 109), no 'mirror backup 
server' here.


What do you mean by 'waking up the mirror server'?

~D Wiki


Well our main backup is done by all our machines to a server on our 
LAN using rdiff-backup - this I call the 'primary' backup server. Then 
each night this primary machine runs a script which uses rsync to copy 
its contents to an offsite machine, which therefore is maintained as a 
'mirror' of the primary.


The mirror machine is normally switched off. To switch it on, the 
script sends a 'magic packet' to the mirror machine, thus turning it 
on. Then the script runs rsync, and when that is over, it shuts the 
mirror machine down again.


In a simple case, you can use the wakeonlan utility to wake the remote 
machine: see http://gsd.di.uminho.pt/jpo/software/wakeonlan/. If it 
isn't already available on your machine and you use Debian or Ubuntu 
you can add it with apt-get, I think. If the remote machine is behind 
another router then waking it may be more complicated: I have a 
special script to pass through a Netgear DG834G router, for example (see 
http://www.timedicer.co.uk/dg834g.)


Sounds like a professional setup, advanced and interesting; thanks for the info.

For now I keep it the way I have :)

~D



Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-09 Thread ~D

On 12/08/2010 11:01 AM, Dominic Raferd wrote:

On 07/12/2010 22:40, ~D wrote:

On 12/07/2010 09:33 PM, Dominic Raferd wrote:


On 07/12/10 18:54, ~D wrote:

On 12/07/2010 05:09 PM, Dominic Raferd wrote:

On 07/12/2010 14:42, ~D wrote:

On 12/07/2010 03:26 PM, D. Kriesel wrote:

Do you prevent a shutdown or reboot when rdiff-backup is running?
How?


Since rdiff-backup does not like backup interruptions (and
therefore is not usable on slow or unreliable connections) I rdiff
to a local repository on the server (the server usually doesn't
reboot) and rsync the entire repository to the remote destination.


My backup only runs when there is an internet connection. I use my
laptop at home most of the time. I have to find a way to check whether
rdiff-backup is running and to reboot only after it has finished. I can
use htop, of course, but I am not sure whether rdiff-backup is running
locally on my laptop or only on my server.

Maybe there is a more advanced way to do this?

You can test for running rdiff-backup locally thus:

[ -n "`ps -o pid --no-heading -C rdiff-backup`" ] && echo running || echo not running

and you can query the remote server thus:

[ -n "`ssh u...@remote.server ps -o pid --no-heading -C rdiff-backup 2>/dev/null`" ] && echo running || echo not running

Ah, thanks! So it should be possible to make a script which checks
if it is running; if not, shut down, and if so, wait 15 minutes and check
again, etc. Right?

~D


yes you could write a bash script and put it as a job in your crontab
to run every 15 minutes, say...

My mirror backup server (not my primary) is updated from my primary by
a script which uses rdiff. The script runs on the primary and starts
by waking up the mirror server (over the internet), then runs rsync,
then after checking all is well, powers down the mirror. Works well
with rsync - as David pointed out earlier, it is not a good idea to
run rdiff-backup over an unstable connection.

I was thinking about a script on my laptop for shutting down my machine:
instead of running sudo shutdown -h now, run a script which only makes
the laptop shut down when rdiff-backup is not running.

I'm using fluxbox, so I'm used to shutdown my machine via terminal.

~D


Here's a script 'auto-shutdown.sh' I use which shuts down a machine if 
there is no ssh connection and certain other programs are not running:


if [ -z "`netstat -t | grep ':ssh.*ESTABLISHED$'`" ]; then
  [ "`ps -A | grep -Ec 'cp$|rsync$|mv$|rm$'`" = 0 ] && shutdown -h now
fi

add a line to /etc/crontab thus, to run the script hourly:
01 * * * *   root    /opt/auto-shutdown.sh



I made this script to be sure it only shuts down when rdiff-backup isn't 
running:



#!/bin/bash

while true; do
  if pgrep rdiff-backup > /dev/null; then
    sleep 10
  else
    shutdown -h now
  fi
done




Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-08 Thread Dominic Raferd

On 07/12/2010 22:40, ~D wrote:

On 12/07/2010 09:33 PM, Dominic Raferd wrote:


On 07/12/10 18:54, ~D wrote:

On 12/07/2010 05:09 PM, Dominic Raferd wrote:

On 07/12/2010 14:42, ~D wrote:

On 12/07/2010 03:26 PM, D. Kriesel wrote:

Do you prevent a shutdown or reboot when rdiff-backup is running?
How?


Since rdiff-backup does not like backup interruptions (and
therefore is not usable on slow or unreliable connections) I rdiff
to a local repository on the server (the server usually doesn't
reboot) and rsync the entire repository to the remote destination.


My backup only runs when there is an internet connection. I use my
laptop at home most of the time. I have to find a way to check whether
rdiff-backup is running and to reboot only after it has finished. I can
use htop, of course, but I am not sure whether rdiff-backup is running
locally on my laptop or only on my server.

Maybe there is a more advanced way to do this?

You can test for running rdiff-backup locally thus:

[ -n "`ps -o pid --no-heading -C rdiff-backup`" ] && echo running || echo not running

and you can query the remote server thus:

[ -n "`ssh u...@remote.server ps -o pid --no-heading -C rdiff-backup 2>/dev/null`" ] && echo running || echo not running

Ah, thanks! So it should be possible to make a script which checks
if it is running; if not, shut down, and if so, wait 15 minutes and check
again, etc. Right?

~D


yes you could write a bash script and put it as a job in your crontab
to run every 15 minutes, say...

My mirror backup server (not my primary) is updated from my primary by
a script which uses rdiff. The script runs on the primary and starts
by waking up the mirror server (over the internet), then runs rsync,
then after checking all is well, powers down the mirror. Works well
with rsync - as David pointed out earlier, it is not a good idea to
run rdiff-backup over an unstable connection.

I was thinking about a script on my laptop for shutting down my machine:
instead of running sudo shutdown -h now, run a script which only makes
the laptop shut down when rdiff-backup is not running.

I'm using fluxbox, so I'm used to shutdown my machine via terminal.

~D


Here's a script 'auto-shutdown.sh' I use which shuts down a machine if 
there is no ssh connection and certain other programs are not running:


if [ -z "`netstat -t | grep ':ssh.*ESTABLISHED$'`" ]; then
  [ "`ps -A | grep -Ec 'cp$|rsync$|mv$|rm$'`" = 0 ] && shutdown -h now
fi

add a line to /etc/crontab thus, to run the script hourly:
01 * * * *   root    /opt/auto-shutdown.sh




Re: [rdiff-backup-users] rdiff-backup fails, no gzipped file

2010-12-07 Thread Dominic Raferd

On 07/12/2010 14:42, ~D wrote:

On 12/07/2010 03:26 PM, D. Kriesel wrote:

Do you prevent a shutdown or reboot when rdiff-backup is running? How?


Since rdiff-backup does not like backup interruptions (and therefore is not 
usable on slow or unreliable connections) I rdiff to a local repository on the 
server (the server usually doesn't reboot) and rsync the entire repository to 
the remote destination.


My backup only runs when there is an internet connection. I use my laptop
at home most of the time. I have to find a way to check whether
rdiff-backup is running and to reboot only after it has finished. I can
use htop, of course, but I am not sure whether rdiff-backup is running
locally on my laptop or only on my server.

Maybe there is a more advanced way to do this?

You can test for running rdiff-backup locally thus:

[ -n "`ps -o pid --no-heading -C rdiff-backup`" ] && echo running || echo not running

and you can query the remote server thus:

[ -n "`ssh u...@remote.server ps -o pid --no-heading -C rdiff-backup 2>/dev/null`" ] && echo running || echo not running




