On 5/25/2011 5:02 AM, samuel_w...@t-online.de wrote:
> It's not really easy to back up a BackupPC pool; I read in the mailing list that rsync is
> not a good idea because there are a lot of hard links.
>
> What's the best way to create offline backups?
First, if you are changing topics, please create a new thread rat
Matthias Meyer wrote at about 11:43:23 +0200 on Monday, April 25, 2011:
> Hi Jeffrey,
>
> Thanks for sending your perl script.
Hmmm... not sure I even remember which of my scripts you are talking about...
> Unfortunately I can't answer you because:
>
>- The following addresses had
Hi Jeffrey,
Thanks for sending your perl script.
Unfortunately I can't answer you because:
- The following addresses had permanent fatal errors -
- Transcript of session follows -
... while talking to smtp.secureserver.net.:
<<< 550 5.7.1 SPF unauthorized mail is prohibit
Jeffrey J. Kosowsky wrote:
>
> I think there is a 3rd camp:
> 3. Scripts that understand the special structure of the pool and pc
> trees and efficiently create lists of all hard links in pc
> directory.
> a] BackupPC_tarPCCOPY
> Included in standard BackupPC installations. It uses a perl
>
Les Mikesell wrote on 02/24/2011 05:21:27 PM:
> On 2/22/2011 9:17 PM, Dennis Blewett wrote:
> > What I'm also curious about is if I should be rsyncing any other
files,
> > thus allowing me to restore from the offsite backup in the case I lose
> > everything and rebuild a backuppc configuration:
Dennis Blewett wrote on 02/24/2011 04:41:00 PM:
> @ Tim
>
> I did *briefly* search before making this email thread. As I found
> it similar to looking for the right kind of pants in a clothing
> store, I made the thread with particular keywords with the hope that
> Google would extract this em
On 2/22/2011 9:17 PM, Dennis Blewett wrote:
> 13,849 items, totalling 3.8 GB
>
> It would appear that I have a feasible number of files. I'm not sure how
> many more files I will have by the end of April, though.
>
> I've read about that "rsync -H" would be a practical command to use on
> the backu
@ Tim
I did *briefly* search before making this email thread. As I found it
similar to looking for the right kind of pants in a clothing store, I made
the thread with particular keywords with the hope that Google would extract
this email rather than the others that don't so *briefly* get to the p
i've been having good success with a script that selects only the most
recent full and most recent incremental for each backup in the pc
directory, as well as the set of backups last successfully
transferred, and rsyncs that set offsite, with -H. for me, this
still deduplicates, and keeps a reaso
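The selection step described above can be sketched roughly like this. This assumes the BackupPC 3.x on-disk layout where each host's backups live under $TOPDIR/pc/<host>/<n>; a real script would also read each host's "backups" metadata file to tell fulls from incrementals, which this toy helper does not:

```shell
# Hypothetical helper: for each host under $topdir/pc, print the path of
# its highest-numbered backup directory (an approximation of "most recent").
list_latest_backups() {
    topdir=$1
    for host in "$topdir"/pc/*/; do
        [ -d "$host" ] || continue
        # backup directories are plain integers; ignore LOG files etc.
        latest=$(ls "$host" 2>/dev/null | grep -E '^[0-9]+$' | sort -n | tail -n 1)
        [ -n "$latest" ] && printf '%s\n' "${host}${latest}"
    done
    return 0
}
```

The resulting list can then be shipped offsite with something like `list_latest_backups /var/lib/backuppc | rsync -aH --relative --files-from=- / offsite:/mirror/`, so hard links are still preserved (and deduplicated) within the transferred set, while rsync only has to track the links for that subset.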
John Goerzen wrote at about 14:26:33 + on Wednesday, February 23, 2011:
> Carl Wilhelm Soderstrom real-time.com> writes:
>
> >
> > On 02/21 11:00 , Dennis Blewett wrote:
> > > Will I come across many problems in later restoring the pool's data if I
> > > just rsync /var/lib/backuppc to
Carl Wilhelm Soderstrom real-time.com> writes:
>
> On 02/21 11:00 , Dennis Blewett wrote:
> > Will I come across many problems in later restoring the pool's data if I
> > just rsync /var/lib/backuppc to the server?
> > Are there other files and folders I should be rsync'ing to the server?
>
> r
On 2/22/2011 11:13 PM, Timothy J Massey wrote:
> Dennis Blewett wrote on 02/22/2011 10:17:29 PM:
>
> > 13,849 items, totalling 3.8 GB
> >
> > It would appear that I have a feasible number of files. I'm not sure
> > how many more files I will have by the end of April, though.
> >
> > I've read abou
Timothy J Massey wrote at about 23:13:52 -0500 on Tuesday, February 22, 2011:
> Dennis Blewett wrote on 02/22/2011 10:17:29 PM:
>
> > 13,849 items, totalling 3.8 GB
> >
> > It would appear that I have a feasible number of files. I'm not sure
> > how many more files I will have by the end o
Dennis Blewett wrote on 02/22/2011 10:17:29 PM:
> 13,849 items, totalling 3.8 GB
>
> It would appear that I have a feasible number of files. I'm not sure
> how many more files I will have by the end of April, though.
>
> I've read about that "rsync -H" would be a practical command to use
> on
13,849 items, totalling 3.8 GB
It would appear that I have a feasible number of files. I'm not sure how
many more files I will have by the end of April, though.
I've read that "rsync -H" would be a practical command to use on the
backuppc folder.
What I'm also curious about is if I should
gregwm wrote at about 10:26:51 -0600 on Tuesday, February 22, 2011:
> > rsync'ing the BackupPC data pool is generally recommended against. The
> > number of hardlinks causes an explosive growth in memory consumption by
> > rsync and while you may be able to get away with it if you have 20GB of
gregwm wrote on 02/22/2011 11:26:51 AM:
> this issue sure comes up a lot, and perhaps i should just keep quiet
> since i personally am in no position to do it or even go off looking
> for an rsync forum, nor do i have any knowledge of just how convoluted
> the rsync source may be to try to look at
> rsync'ing the BackupPC data pool is generally recommended against. The
> number of hardlinks causes an explosive growth in memory consumption by
> rsync and while you may be able to get away with it if you have 20GB of
> data (depending on how much memory you have); you will likely run out of
> m
On 02/21 11:00 , Dennis Blewett wrote:
> Will I come across many problems in later restoring the pool's data if I
> just rsync /var/lib/backuppc to the server?
> Are there other files and folders I should be rsync'ing to the server?
rsync'ing the BackupPC data pool is generally recommended against
On 8/31/2010 12:09 PM, Josh Malone wrote:
> Farmol SPA wrote:
>> Hi list.
>>
>> I would like to ask which is the simplest yet effective way to dump
>> backuppc stuff (mainly __TOPDIR__) eg to a removable hard disk that will
>> be used in a disaster recovery scenario where the plant were destroyed
>
Farmol SPA wrote:
Hi list.
I would like to ask which is the simplest yet effective way to dump
backuppc stuff (mainly __TOPDIR__), e.g. to a removable hard disk that will
be used in a disaster-recovery scenario where the plant was destroyed
and I need to restore data from this surviving device. Is
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Carl Wilhelm Soderstrom
To: General list for user discussion, questions and support
Date: Mon Aug 23 2010 17:42:43 GMT+0200 (ora Legale Europa Occidentale)
> On 08/20 11:02 , Mirco Piccin wrote:
>>
On Monday 23 Aug 2010 18:27:59 Carl Wilhelm Soderstrom wrote:
> On 08/23 05:04 , Tyler J. Wagner wrote:
> > What was your disk configuration? Any RAID involved?
>
> It was RAIDed, don't remember the details. Possibly RAID 5 (yes, I know,
> poor performance -- but low price, and acceptably fast for
On 08/23 05:04 , Tyler J. Wagner wrote:
> What was your disk configuration? Any RAID involved?
It was RAIDed, don't remember the details. Possibly RAID 5 (yes, I know,
poor performance -- but low price, and acceptably fast for most needs).
--
Carl Soderstrom
Systems Administrator
Real-Time Enter
On Monday 23 Aug 2010 16:42:43 Carl Wilhelm Soderstrom wrote:
> I've tried this in the past. Taking a snapshot of a live backuppc instance
> and then backing up the snapshot (especially to tape, like I tried) turns
> out to be murderously slow due to all the disk updates.
>
> At least to tape (whe
On 08/20 11:02 , Mirco Piccin wrote:
> i usually use an LVM volume (easy to extend) as TOPDIR.
> In this case a FAST solution could be the lvm volume snapshot.
> Of course, you need to save also BackupPC conf files.
I've tried this in the past. Taking a snapshot of a live backuppc instance
and the
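The LVM-snapshot variant mentioned above might look like the following sketch (volume group, LV names, and snapshot size are examples). The snapshot gives a frozen view of the pool while BackupPC keeps running; only the copy out of the snapshot is slow:

```shell
# Create a copy-on-write snapshot of the pool LV; the -L size only needs
# to hold the changes written to the origin while the snapshot exists.
lvcreate -s -L 10G -n backuppc-snap /dev/vg0/backuppc
mkdir -p /mnt/backuppc-snap
mount -o ro /dev/vg0/backuppc-snap /mnt/backuppc-snap
# Copy the frozen view, preserving hard links.
rsync -aH /mnt/backuppc-snap/ /mnt/usbdisk/backuppc/
umount /mnt/backuppc-snap
lvremove -f /dev/vg0/backuppc-snap
```

If the snapshot fills up before the copy finishes it is invalidated, so size it generously relative to how much the live pool changes during the copy window.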
Farmol SPA wrote at about 16:09:02 +0200 on Friday, August 20, 2010:
> Original Message
> Subject: Re: [BackupPC-users] Backup backuppc
> From: Tyler J. Wagner
> To: backuppc-users@lists.sourceforge.net
> Cc: Farmol SPA
> Date: Fri Aug 20 2010 15:0
Les Mikesell wrote at about 07:51:56 -0500 on Friday, August 20, 2010:
> Farmol SPA wrote:
> > Original Message
> > Subject: Re: [BackupPC-users] Backup backuppc
> > From: Mirco Piccin
> > To: General list for user discussion, questions and su
On 8/20/2010 9:46 AM, Farmol SPA wrote:
> Original Message
> Subject: Re: [BackupPC-users] Backup backuppc
> From: Mirco Piccin
> To: General list for user discussion, questions and support
>
> Date: Fri Aug 20 2010 16:28:05 GMT+0200 (ora Legale Europa O
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Mirco Piccin
To: General list for user discussion, questions and support
Date: Fri Aug 20 2010 16:28:05 GMT+0200 (ora Legale Europa Occidentale)
> the point is that a snapshot :
> - is very fast, an
On Friday 20 Aug 2010 15:09:02 Farmol SPA wrote:
> So, whenever I create the snapshot I have a "static" copy of the source
> lv that I can copy with any method (eg rsync or netcat). At this point,
> please forgive me, I don't see the advantage of LVM snapshots over using
> directly the rsync approa
Hi
> So, whenever I create the snapshot I have a "static" copy of the source
> lv that I can copy with any method (eg rsync or netcat). At this point,
> please forgive me, I don't see the advantage of LVM snapshots over using
> directly the rsync approach on the "live" lv provided backuppc is
> sl
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Tyler J. Wagner
To: backuppc-users@lists.sourceforge.net
Cc: Farmol SPA
Date: Fri Aug 20 2010 15:00:37 GMT+0200 (ora Legale Europa Occidentale)
> On Friday 20 Aug 2010 12:32:10 Farmol SPA wrote:
&
On Friday 20 Aug 2010 12:32:10 Farmol SPA wrote:
> A question: the source logical volume and the snapshot one must reside
> in the same volume group for this feature to work?
I believe a snapshot of a logical volume must necessarily reside in the same
volume group, yes. So if you have an LV which
Farmol SPA wrote:
> Original Message
> Subject: Re: [BackupPC-users] Backup backuppc
> From: Mirco Piccin
> To: General list for user discussion, questions and support
>
> Date: Wed Aug 18 2010 14:25:38 GMT+0200 (ora Legale Europa Occidentale)
>>
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Tyler J. Wagner
To: backuppc-users@lists.sourceforge.net
Cc: Farmol SPA
Date: Fri Aug 20 2010 12:48:15 GMT+0200 (ora Legale Europa Occidentale)
> On Friday 20 Aug 2010 10:23:39 Farmol SPA wrote:
>&g
On Friday 20 Aug 2010 10:23:39 Farmol SPA wrote:
> TOPDIR resides on an LVM volume. I know about snapshots but I've never
> tried them in practice.
>
> Is it difficult? How could I take a snapshot of a volume and put it on
> another disk (external USB device not LVM)?
It's easy. I've done it on t
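One way to get a snapshot onto a non-LVM external disk, as asked above, is to stream the snapshot block device into an image file on the USB disk (names and sizes are examples, not from the thread):

```shell
# Freeze the pool with a snapshot, then image the snapshot device onto
# the external disk. Restore later by running dd in the other direction.
lvcreate -s -L 5G -n pool-snap /dev/vg0/backuppc
dd if=/dev/vg0/pool-snap bs=4M | gzip > /mnt/usb/backuppc-pool.img.gz
lvremove -f /dev/vg0/pool-snap
```

A block-level image sidesteps the hard-link problem entirely, at the cost of copying every block of the LV, used or not, and of only being restorable onto a device at least as large as the original.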
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Mirco Piccin
To: General list for user discussion, questions and support
Date: Fri Aug 20 2010 11:02:53 GMT+0200 (ora Legale Europa Occidentale)
>
> i usually use an LVM volume (easy to extend) as
Hi
> I would like to ask which is the simplest yet effective way to dump
> backuppc stuff (mainly __TOPDIR__) eg to a removable hard disk that will
> be used in a disaster recovery scenario where the plant were destroyed
> and I need to restore data from this survivor device. Is a "rsync -aH"
> en
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Mirco Piccin
To: General list for user discussion, questions and support
Date: Wed Aug 18 2010 14:25:38 GMT+0200 (ora Legale Europa Occidentale)
> Hi,
>
> On Wed, Aug 18, 2010 at 2:00 PM, F
Original Message
Subject: Re: [BackupPC-users] Backup backuppc
From: Les Mikesell
To: General list for user discussion, questions and support
Date: Wed Aug 18 2010 14:31:46 GMT+0200 (ora Legale Europa Occidentale)
> If your archive is small and you have plenty of RAM you
Farmol SPA wrote:
> Hi list.
>
> I would like to ask which is the simplest yet effective way to dump
> backuppc stuff (mainly __TOPDIR__) eg to a removable hard disk that will
> be used in a disaster recovery scenario where the plant were destroyed
> and I need to restore data from this survivor d
Hi,
On Wed, Aug 18, 2010 at 2:00 PM, Farmol SPA wrote:
> Hi list.
>
> I would like to ask which is the simplest yet effective way to dump
> backuppc stuff (mainly __TOPDIR__) eg to a removable hard disk that will
> be used in a disaster recovery scenario where the plant were destroyed
> and I ne
I am currently backing up BackupPC using rsync 3.0.6 to a USB2 drive
with no problems so far.
The total amount of data being backed up is about 300 GB with 5M files.
The total time to sync all the BackupPC directories to the USB disk is
about 1 hour.
It's important to notice that the machine
On 10/28/09, Chris Owen wrote:
>
> Would like some advice on the best way to back up backuppc. I have a
> 1TB USB drive that I would like to copy all of our backups to. Has
> anyone done this? How easy is it to roll back from USB?
Hi Chris - I have a backup program in beta, HashBackup, that I've
> I have a setup with /var/lib/backuppc mounted to a 1 TB Firewire 800
> drive. From a standpoint local to BackupPC, it's transparent. With USB
> you'll be limited to 480 Mbps minus overhead, but that's about the
> only real consideration I'm aware of--and that's still 60 MB/s
> theoretical, which
On Wed, Oct 28, 2009 at 09:25:31PM -0600, dan wrote:
> You can tar up the whole pool directory and put it on an external drive
> pretty easily.
Serious question: of what value is a backup of (only) the pool directory?
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953
Hey
Thanks for the posts back, I am going to have a look at the ideas
posted here and will let you know how I get on and how I am going to
do this.
Thanks for your help.
Chris Owen
Sent from my iPhone
On 29 Oct 2009, at 03:47, Shawn Perry wrote:
If you are using LVM, just use pvmove and
If you are using LVM, just use pvmove and vgsplit with only a little
downtime.
On Wed, Oct 28, 2009 at 9:25 PM, dan wrote:
> You can tar up the whole pool directory and put it on an external drive
> pretty easily. Just make sure that backuppc is not running when you do this
> -OR- do an LVM sna
You can tar up the whole pool directory and put it on an external drive
pretty easily. Just make sure that backuppc is not running when you do this
-OR- do an LVM snapshot and then backup the snapshot.
I have been using rsync to sync two servers for a long time but have
recently started experimen
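A toy illustration of the tar approach suggested above (example paths, not a real pool), showing that hard links survive the archive round trip:

```shell
# Archive a directory tree; GNU tar records hard links between members
# by default, and --sparse avoids expanding sparse pool files.
archive_pool() {
    tar -C "$1" --sparse -cf "$2" .
}
# Extract the archive into a fresh directory, recreating the hard links.
restore_pool() {
    mkdir -p "$2" && tar -C "$2" -xf "$1"
}
```

Note that tar only records a hard link when the earlier link to the same inode is also inside the archive, which is another reason to tar the whole pool (pool plus pc trees) in one pass rather than piecemeal.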
I have a setup with /var/lib/backuppc mounted to a 1 TB Firewire 800
drive. From a standpoint local to BackupPC, it's transparent. With USB
you'll be limited to 480 Mbps minus overhead, but that's about the
only real consideration I'm aware of--and that's still 60 MB/s
theoretical, which you probab
On 11/10 06:37 , GATOUILLAT Pierre-Damien wrote:
> > I probably did something like:
> >
> > old-machine# dd if=/dev/vg00/lv00 | gzip | nc newhost
> > new-machine# nc -l -p | gunzip | dd of=/dev/vg00/backup
>
> I'm not sure that gzip works good with dd. I tested a dd / gzip and dd
> alo
> > Or with netcat and dd like that :
> >
> > (new_server)#nc -l -p 1 > /dev/new_part
> >
> > (old_server)#dd if=/dev/old_part | nc -w 5 @IP_new_server 1
>
> this is basically what I did when I had to move a 100GB
> backuppc partition from one machine to another. it only took
> about 1
On 11/10 04:49 , GATOUILLAT Pierre-Damien wrote:
> Perhaps with dd ? Something like :
>
> (on the old server)#dd if=/dev/old_partition | ssh new_server dd
> of=/dev/new_partition (perhaps indicate the bs= ? But with which ?)
>
> Or with netcat and dd like that :
>
> (new_server)#nc -l -p 10
On Fri, 2006-11-10 at 16:49 +0100, GATOUILLAT Pierre-Damien wrote:
> Perhaps with dd ? Something like :
> (on the old server)#dd if=/dev/old_partition | ssh new_server dd
> of=/dev/new_partition (perhaps indicate the bs= ? But with which ?)
That approach will work as long as (a) the partiti
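Fleshed out, the netcat variant quoted above might look like this sketch (device names and the port number are examples; the port in the original post was truncated, and `nc` flag syntax varies between the traditional and OpenBSD versions):

```shell
# On the new server: listen and write the incoming stream to the target
# partition. The target must be at least as large as the source.
nc -l -p 7000 > /dev/new_partition

# On the old server: a large bs= keeps dd from issuing tiny 512-byte
# reads; -w 5 makes nc give up if the listener is unreachable.
dd if=/dev/old_partition bs=4M | nc -w 5 new_server 7000
```

Neither partition should be mounted read-write during the copy, and it is worth running fsck on the target afterwards before putting it into service.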
> Cc : backuppc-users@lists.sourceforge.net
> Objet : Re: [BackupPC-users] Backup backuppc
>
> On Thu, 2006-11-09 at 08:39 -0600, [EMAIL PROTECTED] wrote:
> > On Thursday, November 09, 2006 5:00 AM, daniel berteaud wrote:
> > >
> > >You can use rsync to do that
On Thu, 2006-11-09 at 08:39 -0600, [EMAIL PROTECTED] wrote:
> On Thursday, November 09, 2006 5:00 AM, daniel berteaud wrote:
> >
> >You can use rsync to do that, something like
> >
> >rsync -avP -H /old_location [EMAIL PROTECTED]:/new_location
> >
> >should do the trick
> >
> >
>
> You can also
On Thursday, November 09, 2006 5:00 AM, daniel berteaud wrote:
>
>You can use rsync to do that, something like
>
>rsync -avP -H /old_location [EMAIL PROTECTED]:/new_location
>
>should do the trick
>
>
You can also just tar up the backuppc directory and move it as one chunk. You
are going to h
You can use rsync to do that, something like
rsync -avP -H /old_location [EMAIL PROTECTED]:/new_location
should do the trick
Le Thu, 9 Nov 2006 00:05:18 -0800 (PST),
Ariyanto Dewo <[EMAIL PROTECTED]> a écrit :
> Dear all,
> I have a problem but not with my backuppc. The
> backuppc is running f