Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2021-01-03 Thread Dan Johansson

Thanks for your feedback!

On 03.01.21 16:19, backu...@kosowsky.org wrote:

Dan Johansson wrote at about 14:24:25 +0100 on Sunday, January 3, 2021:
  > On 30.06.20 06:51, backu...@kosowsky.org wrote:
  > > Over the years, many have asked and struggled with backing up remote
  > > Windows shares with shadow copies. Shadow copies are useful since they
  > > allow both the backup to be 'consistent' and allow for reading files
  > > that are otherwise 'busy' and unreadable when part of active Windows
  > > partitions.
  > >

  > I found this old email with the attached script.
  > I have tried to set it up, but sadly I can not get it to work. (:-(

Some details would be helpful...




  > So now I have two questions.
  >
  > a) What is the syntax of the "Conf{ClientShareName2Path}" hash? Examples?

$Conf{ClientShareName2Path} = {
 'backuppc' => '/var/lib/backuppc',
 'boot-efi' => '/boot/efi',
 'home' => '/home',
 'root' => '/',
};



That looks like "Linux" paths and not "Windows" (i.e. C:\) paths.
How would "Windows" paths be defined?

$Conf{ClientShareName2Path} = {
 'C' => 'C:\\',
};

-- or maybe --

$Conf{ClientShareName2Path} = {
 'C' => '/tmp/shadows/C',
};
?


b) How should rsyncd be configured (rsyncd.conf, rsyncd.secrets)?


The script uses 'rsync', not 'rsyncd'.
Not sure why you would ever want to use 'rsyncd', as it is less secure
(it uses a password 'secret' rather than a passphrase) and is more
complicated to set up than just running rsync over ssh against the sshd
daemon already running on the client.
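
As a rough sketch, the ssh+rsync side of a BackupPC 4 host.pl might look
something like this (the 'backuppc' login name and the cygwin rsync path are
assumptions, adjust them to your Windows client):

$Conf{XferMethod} = 'rsync';
# run the transfer over ssh, logging in as a dedicated backup account
$Conf{RsyncSshArgs} = ['-e', '$sshPath -l backuppc'];
# path to the cygwin rsync binary on the client
$Conf{RsyncClientPath} = '/usr/bin/rsync';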


OK, that makes sense and I will change it to ssh+rsync.

Regards,
--
Dan Johansson,
***
This message is printed on 100% recycled electrons!
***


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2021-01-03 Thread Dan Johansson

On 30.06.20 06:51, backu...@kosowsky.org wrote:

Over the years, many have asked and struggled with backing up remote
Windows shares with shadow copies. Shadow copies are useful since they
allow both the backup to be 'consistent' and allow for reading files
that are otherwise 'busy' and unreadable when part of active Windows
partitions.

Various solutions (including one I proposed almost a decade ago) use
additional scripts and hacks to create the shadow copy.
Such solutions are kludgy and require the maintenance of separate
scripts either on the server or client.

I have written a combination of perl and bash code that can be stored
in the host.pl configuration file that does everything you need to
automagically create shadow copies for each share (where possible)
with minimal to no special configuration in host.pl and nothing to
configure on the Windows client (other than having cygwin+ssh+rsync
and an accessible account on your Windows client).

The only thing you need to do is to set up the hash
Conf{ClientShareName2Path} to map share names to their
(unshadowed) Windows paths. The attached script will then set up and
interpolate the appropriate shadow paths.

It should just work...
Just cut-and-paste the attachment into your host.pl code for Windows
clients.

Note: I included a fair amount of debugging & error messages in case
any shadows or links fail to get created or unwound.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


I found this old email with the attached script.
I have tried to set it up, but sadly I can not get it to work. (:-(

So now I have two questions.

a) What is the syntax of the "Conf{ClientShareName2Path}" hash? Examples?
b) How should rsyncd be configured (rsyncd.conf, rsyncd.secrets)?

Regards,

--
Dan Johansson,
***
This message is printed on 100% recycled electrons!
***


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Can't Rsync between FreeNAS and Win10

2019-04-13 Thread dan aneas
Dear all,


I have been trying many things (for 2 weeks now) to make it work,
with no success, so I am hoping to get some help here.

My setup is the following :

A FreeNAS 11.2-U3 with a jail running BackupPC; for this I followed the
well-written tutorial by JM BARDOU here:
https://www.ixsystems.com/community/threads/step-by-step-guide-for-backuppc-4-in-a-jail-on-freenas.74080/


From the Win10 side, I installed rsyncd, set up the config and secrets files,
and added rules to the firewall as described in this other tutorial:
https://www.drivemeca.com/backuppc-client/

Ping works well in each direction.

But I can't manage to make it work. I always get this error:


2019-04-13 10:05:21 Started full backup on 192.168.8.100
<http://192.168.8.200/bpc/backuppc.pl?host=192.168.8.100> (pid=39826,
share=Desktop)

2019-04-13 10:06:42 Backup failed on 192.168.8.100
<http://192.168.8.200/bpc/backuppc.pl?host=192.168.8.100> (rsync
error: error in socket IO (code 10) at clientserver.c(125)
[Receiver=3.1.2.0]
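
A couple of quick checks from inside the BackupPC jail can at least confirm
whether the Windows rsyncd is reachable (the IP is the one from this post,
873 is the default rsyncd port, and the commands assume nc and rsync exist
in the jail):

# is anything listening on the rsyncd port?
nc -vz 192.168.8.100 873

# ask the daemon to list its modules
rsync rsync://192.168.8.100/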


Thanks in advance for any help ;)

Dan
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_tarExtract exited with fail status 256

2017-07-11 Thread Dan LeVasseur
Yeah, I searched and saw your post with no response.  Quite honestly I
didn't know how to reply specifically to your message so I mailed my own
with your same title.  So, sorry if I did things "wrong".

Oddly enough, it seems to only really happen to one or two of my client
machines (my machine specifically being one of them).

On Tue, Jul 11, 2017 at 1:40 PM, Tim Evans <tkev...@tkevans.com> wrote:

> On 07/11/2017 01:53 PM, Dan LeVasseur wrote:
>
>> I too am having this issue again with version 4.1.3 fresh from GIT.  I
>> did check the version of the file in /bin and it does show as being from
>> 4.1.3.
>>
>> Anything I can run to help you diagnose the problem?
>>
>
> I'm the original poster on this, and yours is the only response.
>
> Meanwhile, I have backups running continuously here and they never, EVER
> complete.  Every 24 hours, the current one gets killed with this same error
> and cleaned up, then a new one starts.
>
>
> --
> Tim Evans   |5 Chestnut Court
> 443-394-3864|Owings Mills, MD 21117
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC_tarExtract exited with fail status 256

2017-07-11 Thread Dan LeVasseur
I too am having this issue again with version 4.1.3 fresh from GIT.  I did
check the version of the file in /bin and it does show as being from 4.1.3.

Anything I can run to help you diagnose the problem?
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup Running Forever

2016-11-02 Thread Dan LeVasseur
I actually have this problem using SMB against Windows clients.  2-3 of
them just randomly run forever until I stop them.  The next time,
they'll run fine.


On Wed, Nov 2, 2016 at 10:23 AM, Les Mikesell  wrote:

> On Wed, Nov 2, 2016 at 7:29 AM, Christian Völker 
> wrote:
> > Hi all,
> >
> > BackupPC 3.x backing up Linux hosts. Runs fine since years now.
> >
> > But currently I have an issue with one of my hosts. The backup starts,
> > but runs forever and does never finish. I do not know why this happens.
> > These are the last log entries:
> >
> > 2016-10-24 03:00:07 Output from DumpPreUserCmd: mysqld beenden:  [  OK  ]
> > 2016-10-24 03:00:09 Output from DumpPreUserCmd: mysqld starten:  [  OK  ]
> > 2016-10-24 03:00:09 incr backup started back to 2016-10-18 02:00:05
> (backup #2418) for directory /
> > 2016-10-24 03:14:59 incr backup started back to 2016-10-18 02:00:05
> (backup #2418) for directory /boot
> > 2016-10-24 03:15:00 incr backup 2424 complete, 5450 files, 2233152544
> bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
> > 2016-10-24 03:15:00 removing incr backup 2417
> > 2016-10-25 03:00:05 Output from DumpPreUserCmd: mysqld beenden:  [  OK  ]
> > 2016-10-25 03:00:07 Output from DumpPreUserCmd: mysqld starten:  [  OK  ]
> > 2016-10-25 03:00:07 full backup started for directory / (baseline backup
> #2424)
> > 2016-10-25 04:41:33 Got fatal error during xfer (Child exited
> prematurely)
> > 2016-10-25 04:41:39 Backup aborted (Child exited prematurely)
> > 2016-10-25 04:41:39 Saved partial dump 2425
> > [...]
> > Here it got an error during full backup. Since then none of the
> following backups completed:
> >
> > 2016-10-25 04:42:00 Output from DumpPreUserCmd: mysqld beenden:  [  OK  ]
> > 2016-10-25 04:42:03 Output from DumpPreUserCmd: mysqld starten:  [  OK  ]
> > 2016-10-25 04:42:03 full backup started for directory /; updating
> partial #2425
> > 2016-10-28 04:26:37 Output from DumpPreUserCmd: mysqld beenden:  [  OK  ]
> > 2016-10-28 04:26:39 Output from DumpPreUserCmd: mysqld starten:  [  OK  ]
> > 2016-10-28 04:26:39 full backup started for directory /; updating
> partial #2425
> > [...]
> >
> > This goes on like this. Backups never complete nor do I see any error
> messages.
> >
> > I did an lsof | grep rsync on the target host, but it does not show any
> open files (except libs from rsync). Same on backuppc (except logfiles).
> >
> > What is going on here and how can I recover so I will get proper future
> backups?
> >
>
> It looks like the backuppc side thinks the remote side disconnected.
> If rsync is actually still running on the client it is probably some
> sort of network issue like a nat gateway or stateful firewall timing
> out.   If rsync did exit it could be out of memory or a filesystem or
> disk error.
>
> --
>Les Mikesell
>  lesmikes...@gmail.com
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems: Ubuntu 16.04, Win10

2016-09-27 Thread Dan LeVasseur
It is due to the changes in smbclient.  I'm not sure how GIT works, I
manually edited the files before the commit was available.

On Sep 27, 2016 10:11 AM, "Kent Tenney" <kten...@gmail.com> wrote:

> There's no -N in the smbclient command, perms are OK, the
> configured files are being backed up, but errors are generated.
>
> In the following, the new file was correctly added, but due to
> errors, backup number stays at 0
>
> Contents of file /var/lib/backuppc/pc/systemsadmin/XferLOG.0.z, modified
> 2016-09-27 09:55:43
> ...
> tar:712 Total bytes received: 2478285
> ...
> create 644 0/0 6 Documents/Laserfiche/AddedToTestIncremental.txt
> ...
> tarExtract: Done: 0 errors, 6 filesExist, 2478279 sizeExist, 2151330
> sizeExistComp, 7 filesTotal, 2478285 sizeTotal
> Got fatal error during xfer (No files dumped for share kent.tenney)
> Backup aborted (No files dumped for share kent.tenney)
> Saving this as a partial backup
>
> Thanks,
> Kent
>
>
>
> On Tue, Sep 27, 2016 at 3:53 AM, Jurie Botha <jur...@taprojects.co.za>
> wrote:
>
>> [http://agresso.taprojects.co.za/TAP-Header.jpg]<http://www.
>> taprojects.co.za>
>> [http://agresso.taprojects.co.za/TAP-Line.jpg]
>>
>>
>> Where on the windows 10 machine are you backing up from? (Certain
>> locations have locked down permissions when accessing via share  - C:\Users
>> for example. UAC causes issues in these locations. )
>>
>>
>>
>> From: Jurie Botha [mailto:jur...@taprojects.co.za]
>> Sent: Tuesday, September 27, 2016 9:02 AM
>> To: General list for user discussion, questions and support <
>> backuppc-users@lists.sourceforge.net>
>> Subject: Re: [BackupPC-users] Problems: Ubuntu 16.04, Win10
>>
>>
>> Check the smbclient parameters in the configuration. If there’s a “-N” in
>> there – remove it and run backup again. Lemme know if it works.
>>
>> See my blog post:  http://monklinux.blogspot.com/
>> 2012/02/backuppc-host-configuration-backing-up.html
>>
>>
>>
>> From: Kent Tenney [mailto:kten...@gmail.com]
>> Sent: Monday, September 26, 2016 10:44 PM
>> To: General list for user discussion, questions and support > backuppc-users@lists.sourceforge.net>
>> Subject: Re: [BackupPC-users] Problems: Ubuntu 16.04, Win10
>>
>> Is there doc on installing and configuring the github code?
>> Thanks,
>> Kent
>>
>> On Mon, Sep 26, 2016 at 2:44 PM, Dan LeVasseur > d...@abigmailbox.com> wrote:
>> It does appear the fix was committed, not sure what that means for the
>> version in 16.10
>>
>> https://github.com/backuppc/backuppc/commit/d7a8403b537ed006
>> 8e862abc20065e98209527b7
>>
>>
>> On Mon, Sep 26, 2016 at 2:40 PM, Kent Tenney <mailto:kten...@gmail.com>
>> wrote:
>> Howdy,
>>
>> I'm having problems using on Ubuntu 16.04
>>
>> $ aptitude install backuppc
>> which installs version 3.3.1
>>
>> I can configure and backup a Windows 10 box,
>> but backuppc thinks it has failed:
>> 'No files dumped', 'Not saving this as a partial backup since it has
>> fewer files ...'
>>
>> google says I'm not the only one, seems to be related to recent versions
>> of smbclient
>>
>> Should I be using code from https://github.com/backuppc/backuppc ?
>>
>> Any suggestions, recommendations, welcome.
>>
>> Thanks, Kent
>>
>> 
>> --
>>
>> ___
>> BackupPC-users mailing list
>> mailto:BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
>>
>> 
>> --
>>
>> ___
>> BackupPC-users mailing list
>> mailto:BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
>> 
>> --
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net

Re: [BackupPC-users] Problems: Ubuntu 16.04, Win10

2016-09-26 Thread Dan LeVasseur
It does appear the fix was committed, not sure what that means for the
version in 16.10

https://github.com/backuppc/backuppc/commit/d7a8403b537ed0068e862abc20065e98209527b7


On Mon, Sep 26, 2016 at 2:40 PM, Kent Tenney  wrote:

> Howdy,
>
> I'm having problems using on Ubuntu 16.04
>
> $ aptitude install backuppc
> which installs version 3.3.1
>
> I can configure and backup a Windows 10 box,
> but backuppc thinks it has failed:
> 'No files dumped', 'Not saving this as a partial backup since it has fewer
> files ...'
>
> google says I'm not the only one, seems to be related to recent versions
> of smbclient
>
> Should I be using code from https://github.com/backuppc/backuppc ?
>
> Any suggestions, recommendations, welcome.
>
> Thanks, Kent
>
> 
> --
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
>
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] heads up for newer versions of openssh

2016-05-12 Thread Dan Pritts




Joe Konecny <mailto:jkone...@rmtohio.com>
May 12, 2016 at 3:25 PM

I only used it because the docs suggested it...

http://backuppc.sourceforge.net/faq/ssh.html


What is best, use an IP address or omit "from=" altogether?


Using an IP address is a fine idea.

I do basically the same thing, but via a different method.

I have ssh running on an alternate port, running out of  xinetd.  xinetd 
restricts what IPs can connect.  The normal sshd doesn't allow root 
login; this one does.


You could also accomplish more or less the same thing via, e.g., iptables
rules or TCP wrappers.
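
As a concrete sketch, an authorized_keys entry restricted by address looks
roughly like this (the IP, key type and key are placeholders):

from="192.168.1.10",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3Nza...key... backuppc@server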

--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] heads up for newer versions of openssh

2016-05-12 Thread Dan Pritts




Joe Konecny <mailto:jkone...@rmtohio.com>
May 11, 2016 at 2:46 PM
I found after several days of wrestling that UseDNS now defaults to "no" in
newer versions of openssh. This causes a from="hostname" clause in
authorized_keys to reject the connection. You either have to... 1. use an ip
address from="x.x.x.x", 2. set "UseDNS yes" in sshd_config, or 3. omit
"from=" altogether. Hope this helps someone.


Unless you are using DNSSEC broadly (unless you know all about it, you 
aren't), depending on DNS for security is a bad idea.  This is a 
positive change, but a bummer that it's bitten you.


danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Status on new BackupPC v4

2016-05-11 Thread Dan Pritts


Les Mikesell wrote:
v3 uses its own perl implementation to be able to compare to the
compressed archived copy. v4 uses the server's native version.


The docs for v4 say:

 * A modified rsync 3.0.9, called rsync_bpc, is used on the server
   side, with a C code layer that emulates all the file-system OS calls
   to be compatible with the BackupPC store. That means for rsync, the
   data path is now fully in compiled C, which should mean a
   significant speedup. It also means many (but not all) of the rsync
   options are supported natively.

Is this not correct?

Or perhaps, can v4 use either this, or the native rsync?

thanks
danno


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Status on new BackupPC v4

2016-05-11 Thread Dan Pritts

No, at least by default,  it uses the perl module File::RsyncP.

There is a significant advantage to  this.  The server can look in the 
pool for the file, and avoid transferring it if it's already got the file.




Sorin Srbu wrote:

-Original Message-
From: Moorcroft, Mark (ARC-TS)[Analytical Mechanics Associates, INC.]
[mailto:mark.moorcr...@nasa.gov]
Sent: den 10 maj 2016 20:12
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Status on new BackupPC v4


The problem is that it replaces modern rsync with some ancient embedded
version. If that one thing could be "patched" I would hardly care about any
other changes. The possibility of using modern rsync, and possibly "parsync"
to do multi-threaded transfers is about the only thing I currently have to
work around. I have even looked at commercial alternatives, and so far none
of them make any economic sense to use.


Doesn't BPC use whatever rsync version is available on the BPC-server?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Random Daily Backups remain idle, can be started manually 4.

2016-04-16 Thread Dan Pritts



Adam Goryachev wrote:

On 16/04/2016 07:31, Les Mikesell wrote:

On Fri, Apr 15, 2016 at 3:47 PM, Tony Schreiner
<anthony.schrei...@bc.edu>  wrote:

I will increase
MaxBackups and MaxUserBackups from 2 to 6
to see if it makes any difference.

MaxBackups is the number that will be scheduled to run concurrently;
MaxUserBackups controls what you can start manually.  The optimal
value will depend very much on your hardware.  Often running 2
concurrently will complete the whole set faster than 6 due to less
disk head and RAM contention.



Just to make this clearer, 2 is probably the right value, maybe 3 or 4,
but probably not as high as 6, unless the machines you are backing up
are on the end of very slow links, and/or your backuppc server is
extremely fast (disk IO, and RAM).


mmm, depends.  6 concurrent full backups will probably be too many, but 6
concurrent incrementals probably aren't. Incrementals are limited by the
speed of the tree walk on the client's disk, looking for files to back up.
Unless there are a lot of changes, this won't stress the server.
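
In config.pl terms, the two knobs being discussed are simply (the values
here are just the conservative ones suggested above):

$Conf{MaxBackups} = 2;      # backups scheduled to run concurrently
$Conf{MaxUserBackups} = 2;  # backups users may start manually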


danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backupPC and NFS slow performance

2016-03-25 Thread Dan Pritts
NFS just isn't going to perform very well with lots of small files & 
directories, or other metadata-intensive operations.


I presume that the NFS server is an appliance of some sort (Netapp, 
etc), but if it's just a *nix server, running backuppc on the server 
would be much faster.
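
For reference, the kind of tuned NFS mount described in the message below
usually ends up as an /etc/fstab line roughly like this (server name and
paths are placeholders):

nfsserver:/export/backuppc  /var/lib/backuppc  nfs  noatime,nodiratime,rsize=65536,wsize=65536  0  0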




Dariusz Adamczyk wrote:
Hello, I tested BackupPC and I would like to ask how to get better
performance with NFS.
I have a local server with BackupPC on it, and I want to save the backups to
NFS (mounted on this server). I tweaked the NFS mount with
noatime,nodiratime,rsize=65536,wsize=65536 (I have a gigabit connection) but
the transfer rate is very low. I tried the tar and rsync options and I don't
see any difference. Since NFS doesn't allow me to set owner and group, I
disabled those options (and similar ones) in the config, and I turned off
compression. I have a lot of small files and directories. I have no idea
what else I can tweak.

After all the tweaks the stats are: 16GB in 512 min! It says 1.8MB/s.



--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync Xfer with large file system

2016-03-23 Thread Dan Pritts

try adding more swap.It doesn't have to be real memory.

Also, make sure you're running into actual memory exhaustion, rather 
than a limit.   try "ulimit" as the user who runs rsync on the source 
machine; assuming you're on linux, modify /etc/security/limits.conf as 
necessary.  other OSes will have similar mechanisms.
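
A rough sketch of both checks on a Linux source machine (the account name
and the swap size are only examples):

# check the per-process limits for the account that runs rsync
su - backupuser -c 'ulimit -a'

# if it really is memory exhaustion, a temporary swap file can help
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile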



Laverne Schrock <mailto:schr1...@morris.umn.edu>
March 23, 2016 at 5:22 PM
Hi,

I have a file system that I am trying to backup with BackupPC using 
the rsync Xfer method. We are running BackupPC version 3.2.1. File 
server has rsync version 3.0.9. The logs show that the transfer uses 
protocol 28.


When we tried to run the backup, it started successfully, and then 
died after around 8 hours. The only error message that could be found 
was one in /var/log/BackupPC/LOG that said "out of memory".


From what I've read this is most likely caused by rsync running out of 
memory when it is building the in memory file list. Here are the stats 
on the file system. It contains 924G of data as reported by `df`. 
Running `find . | wc -l` reports 31983221 files. Some have suggested 
breaking the backup down into several runs. This is a home folder 
shared via NFS, so there is no easy division here.


Is there anything we can do to make this work with the rsync method, 
or is this just too much? I'd really prefer it over tar.


Thanks,

L. Schrock
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734) 615-9529

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Non standard installation... failed (resend)

2016-03-20 Thread Dan Pritts
probably the web server doesn't have the right permissions.  see what 
user each is running under.  For simplest case, just make them both run 
under the same user id.  However, depending on what else you have 
running, that might or might not be right for you.
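
One quick way to see which accounts the two sides run as (the process names
below are the usual Apache/BackupPC ones and may differ on a Synology):

ps aux | egrep '[B]ackupPC|[h]ttpd|[a]pache'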

Mauro Condarelli <mailto:mc5...@mclink.it>
March 18, 2016 at 6:05 AM
As said (see below) I managed to get an apparently working installation.
I sat up a test backup on localhost and manually started a backup.
Apparently everything was ok:
2016-03-18 09:44:29 full backup started for directory /volume1/homes
2016-03-18 10:27:23 full backup 0 complete, 704991 files, 26858441879 
bytes, 2019784 xferErrs (0 bad files, 0 bad shares, 2019784 other)

but the backup doesn't show up in "localhost HOME":
Host localhost Backup Summary
This PC has never been backed up!!

Apparently the datadir has been populated with something making sense
.../data/pc/localhost/0 exists and contains a subdir 
"f%2fvolume1%2fhomes" which is consistent with

the requested RsyncShareName(/volume1/homes).

The WEB GUI seems however unable to see this backup.

Note that, in order to test, I changed just the bare minimum: added
host and then XferMethod, RsyncShareName and RsyncClientPath (the

latter because rsync is in a non-standard location on client)

What should I check?

Thanks in Advance
Mauro


Il 18/03/2016 03:07, Mauro Condarelli ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Mauro Condarelli <mailto:mc5...@mclink.it>
March 17, 2016 at 10:07 PM
Thanks Dan,
after fiddling a bit with strace (available as option on my synology) 
I found the problem.


Problem is I installed with "--no-fhs" and in this condition install 
process (configure.pl) and Lib.pm completely disregard whatever I 
define as confdir and force it to be "$topDir/conf" (see: Lib.pm#126).

I see this as a bug.
Same behavior seems present in LogDir definition (next line).

Did I miss something?
What is the right way to report such bugs? (if this indeed is a bug).

Thanks
Mauro


Il 17/03/2016 17:54, Dan Pritts ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Dan Pritts <mailto:da...@umich.edu>
March 17, 2016 at 12:54 PM
if your synology has strace, try

strace -e trace=file -o /tmp/trace.out /blah/blah/blah/BackupPC -d

that will show you all file access attempts by the script.  
Alternately, search through the installed source to see where it is 
looking for config.pl.


This seems unlikely, but also check for selinux failures.

both strace & selinux presume your synology is running  linux - if 
not, well, those won't work.



Mauro Condarelli <mailto:mc5...@mclink.it>
March 17, 2016 at 12:44 PM
Thanks Dan,
I have a config.pl:
backuppc@syno0:~/BackupPC/conf$ ls -la
  total 96
 drwxr-x--- 2 backuppc users  4096 Mar 13 12:31 .
 drwxr-xr-x 9 backuppc users  4096 Mar 13 12:39 ..
 -rw-r- 1 backuppc users 85652 Mar 13 12:31 config.pl
 -rw-r--r-- 1 backuppc users  2214 Mar 13 12:31 hosts
although I did not (yet) configure it in any way (I plan to do it from
the web IF) it already contains the

$Conf{Language} = 'en';
line.

My guess is BackupPC is unable to find (or read) config.pl, but I 
can't understand why.

What should I check?

Any pointer welcome.

Regards
Mauro


Il 17/03/2016 17:15, Dan Pritts ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Dan Pritts <mailto:da...@umich.edu>
March 17, 2016 at 12

Re: [BackupPC-users] Non standard installation... failed (resend)

2016-03-19 Thread Dan Pritts

I know nothing about a synology nas, but i found this in the source:

Lib.pm:    return "No language setting" if ( !defined($bpc->{Conf}{Language}) );


My config.pl file has

$Conf{Language} = 'en';


I wonder if you aren't getting a config.pl file at all, perhaps the 
source installation doesn't give you one, just an example with a 
different name that you are supposed to customize.


hope this helps.

Mauro Condarelli <mailto:mc5...@mclink.it>
March 17, 2016 at 6:58 AM
I already tried sending this, but it never showed up on the ML, so I'm trying
again (sorry if you get a duplicate).

__
Hi,
I am trying to install BackupPC-3.3.1 (the latest, AFAIK) on a 
Synology NAS.
Obviously I have no way to use pre-packaged distributions and thus I 
resorted to install from .tar


I installed using:

==
#!/bin/bash

user=backuppc
home=/opt/backuppc
pdir=$home/BackupPC
ddir=$home/data

cd BackupPC-3.3.1

perl configure.pl --batch --backuppc-user=$user --compress-level=3 \
    --config-dir $pdir/conf \
    --cgi-dir $pdir/cgi-bin --data-dir $ddir --no-fhs \
    --hostname syno0 --html-dir $pdir/images --html-dir-url /images \
    --install-dir $pdir --log-dir $pdir/log --uid-ignore
==

User backuppc exists and its HOME dir is /opt/backuppc.
/opt/backuppc is a mount point for an ext4 filesystem (actually mounted
"-o bind", if it matters).


Trying a very basic:

backuppc@syno0:~/sandbox$ /opt/backuppc/BackupPC/bin/BackupPC
No language setting
BackupPC::Lib->new failed

Same result if I try the command as "root".
This is strange because I checked and Lib->new() always prints 
something to STDERR before returning in error (without a return value).

I also checked permissions, but everything looks good to me:

backuppc@syno0:~/BackupPC/lib/BackupPC$ ls -la
total 172
drwxr-xr-x 8 backuppc users 4096 Mar 13 12:31 .
drwxr-xr-x 4 backuppc users 4096 Mar 13 12:31 ..
-r--r--r-- 1 backuppc users 8794 Mar 13 12:31 Attrib.pm
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 CGI
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 Config
-r--r--r-- 1 backuppc users 11310 Mar 13 12:31 Config.pm
-r--r--r-- 1 backuppc users 12963 Mar 13 12:31 FileZIO.pm
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 Lang
-r--r--r-- 1 backuppc users 42427 Mar 13 12:31 Lib.pm
-r--r--r-- 1 backuppc users 21079 Mar 13 12:31 PoolWrite.pm
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 Storage
-r--r--r-- 1 backuppc users 2764 Mar 13 12:31 Storage.pm
-r--r--r-- 1 backuppc users 19728 Mar 13 12:31 View.pm
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 Xfer
-r--r--r-- 1 backuppc users 5070 Mar 13 12:31 Xfer.pm
drwxr-xr-x 2 backuppc users 4096 Mar 13 12:31 Zip

Also the "No language setting" bit is strange, since I have:

backuppc@syno0:~/BackupPC/lib/BackupPC$ echo $LANG
en_US.UTF-8

I also did some other checks, but nothing looked wrong (to me).
Can someone, pretty please, help me?
TiA
Mauro

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734) 615-9529

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Non standard installation... failed (resend)

2016-03-19 Thread Dan Pritts

Not sure about the suexec & what program needs what permissions.

Hmm. If your HTTP basic auth user name doesn't match a user name that is
listed in the hosts file, you will not have access to the given host.
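
For reference, the user column in the BackupPC hosts file is what ties a CGI
login (and per-host e-mail notifications) to a host; an entry looks roughly
like this (host and user names are made up):

# host         dhcp   user    moreUsers
laptop-mauro   0      mauro   backuppc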



Mauro Condarelli <mailto:mc5...@mclink.it>
March 18, 2016 at 11:30 AM
Thanks Dan,
I am using suexec to run the CGI scripts as user backuppc.
Here follows my backuppc.conf:


<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /opt/backuppc/BackupPC

    LogLevel trace1

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    Alias /images /opt/backuppc/BackupPC/images
    <Directory /opt/backuppc/BackupPC/images>
        Require all granted
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    Alias /cgi-bin/ /opt/backuppc/BackupPC/cgi-bin/
    SuexecUserGroup backuppc users
    <Directory /opt/backuppc/BackupPC/cgi-bin>
        Options +ExecCGI
        SetHandler cgi-script
        AuthType Basic
        AuthName "Authentication Required"
        AuthUserFile /opt/backuppc/BackupPC/.htpasswd
        Require valid-user
        Order deny,allow
    </Directory>
</VirtualHost>




The problem *might* be that only the BackupPC_Admin script is in the suexec
document root. I was under the assumption that all the other scripts were
actually called by BackupPC_Admin, is that so?

If not, what are the scripts that must be suid-backuppc?

In any cases: setting DATADIR as world-readable does *not* cure the 
problem.


TiA
Mauro

Il 18/03/2016 15:50, Dan Pritts ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Dan Pritts <mailto:da...@umich.edu>
March 18, 2016 at 10:50 AM
probably the web server doesn't have the right permissions.  see what 
user each is running under.  For simplest case, just make them both 
run under the same user id.  However, depending on what else you have 
running, that might or might not be right for you.


Mauro Condarelli <mailto:mc5...@mclink.it>
March 18, 2016 at 6:05 AM
As said (see below) I managed to get an apparently working installation.
I set up a test backup on localhost and manually started a backup.
Apparently everything was ok:
2016-03-18 09:44:29 full backup started for directory /volume1/homes
2016-03-18 10:27:23 full backup 0 complete, 704991 files, 26858441879 
bytes, 2019784 xferErrs (0 bad files, 0 bad shares, 2019784 other)

but the backup doesn't show up in "localhost HOME":
Host localhost Backup Summary
This PC has never been backed up!!

Apparently the datadir has been populated with something making sense
.../data/pc/localhost/0 exists and contains a subdir 
"f%2fvolume1%2fhomes" which is consistent with

the requested RsyncShareName(/volume1/homes).

The WEB GUI seems however unable to see this backup.

Note that, in order to test, I changed just the bare minimum: added
host and then XferMethod, RsyncShareName and RsyncClientPath (the

latter because rsync is in a non-standard location on client)

What should I check?

Thanks in Advance
Mauro


Il 18/03/2016 03:07, Mauro Condarelli ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Mauro Condarelli <mailto:mc5...@mclink.it>
March 17, 2016 at 10:07 PM
Thanks Dan,
after fiddling a bit with strace (available as option on my synology) 
I found the problem.


Problem is I installed with "--no-fhs" and in this condition install 
process (configure.pl) and Lib.pm completely disregard whatever I 
define as confdir and force it to be "$topDir/conf" (see: Lib.pm#126).

I see this as a bug.
Same behavior seems present in LogDir definition (next line).

Did I miss something?
What is the right way to report such bugs? (if this indeed is a bug).

Thanks
Mauro


Il 17/03/2016 17:54, Dan Pritts ha scritto:

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforg

Re: [BackupPC-users] Rebuilding a backuppc installation

2016-03-14 Thread Dan Pritts
for normal-ish filesystem usage, ZFS is ram-hungry but not ram-insane.  
Just don't turn on deduplication and you'll be fine with a few GB of RAM.


More importantly though, it's unlikely that you'll be able to transfer 
your pool without (temporarily) having enough disk to store two 
copies.   You might dork around with pulling out one of your raid5 disks 
as the start of your new zfs pool, but that is risky.  make sure to do a 
scrub on the raid5 before you do that.
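
If you do try the pull-a-disk route and the RAID5 is a Linux md array (a
hardware controller would have its own consistency check instead), the scrub
mentioned above is usually kicked off like this (md0 is an example device):

echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat    # watch progress; check mismatch_cnt when it finishes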


It sounds like an awful lot of hassle just to be rid of systemd.  I am 
not a fan either, but the "big four" linux distros all switched to it - 
unless you plan to give up on linux altogether you might as well get 
used to it.



Pasi Koivisto <mailto:pasi+...@me.com>
March 14, 2016 at 8:48 AM
It might be possible to setup Linux with ZFS, create a zpool (1 disk 
might be enough for the transfer) and copy over the bpc pool. Export 
that zpool, setup FreeBSD and import the zpool.


Make sure you have plenty of ram as ZFS works better the more ram it has.

/P



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Brad Alexander <mailto:stor...@gmail.com>
March 13, 2016 at 9:11 PM
I'm getting kinda frustrated with the entire systemd thing on Linux.
So what I'm wondering is what the procedure is (if it is possible) to 
convert the OS from Linux to FreeBSD, and converting the base 
filesystem to ZFS, preferably without losing my pool. The hardware in 
question is a Dell PE1850. If I'm going to use ZFS, I will convert the 
RAID 5 array to JBOD. Any other sage advice from folks running 
backuppc on FreeBSD?


As I type this, I suspect I am going to lose my pool, so I should 
probably archive the older backups. Is there any way to re-import them 
into the pool after the conversion is done?


Thanks,
--b
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734) 615-9529

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Remove

2015-01-08 Thread McGurk, Dan
Remove

 

Thank you,

Dan McGurk

Ransom Memorial Hospital

IT Department

785.229.8444

 


 

 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Files being skipped

2014-05-15 Thread McGurk, Dan

Here is the relevant part of the XferLog

Xfer PIDs are now 3887,3886
[ skipped 1838 lines ]
File size change - truncating \ppart\CTSTATUS.FCS to 6109116 bytes
[ skipped 271465 lines ]
Error reading file \ppart\I002.FCS : NT_STATUS_FILE_LOCK_CONFLICT
NT_STATUS_UNSUCCESSFUL listing \ppart\*
[ skipped 15 lines ]
tarExtract: Done: 0 errors, 269815 filesExist, 267869130169 sizeExist,
91347626064 sizeExistComp, 270525 filesTotal, 269039345975 sizeTotal


My thinking is if I002.FCS is locked, it should just skip and move
to the next file...not sure what I am doing wrong.
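
If that file simply cannot be read while it is locked, one workaround (an
exclude, not a fix for the listing error itself) is to skip it in the host
config, roughly like this (the path follows the XferLog above; the '*' key
applies to every share):

$Conf{BackupFilesExclude} = {
    '*' => ['/ppart/I002.FCS'],
};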


Thank you,
Dan McGurk


-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com] 
Sent: Thursday, May 15, 2014 11:30 AM
To: General list for user discussion,questions and support
Subject: Re: [BackupPC-users] Files being skipped

On Thu, May 15, 2014 at 11:12 AM, McGurk, Dan dmcg...@ransom.org
wrote:
 Hello all.
 I am running a backup of a complete folder, i.e. (Name of Folder)
 Inside that folder are lots of folders as well as a LOT of files.

 It backs up all the files starting with A- H.  It gets to a file name 
 I001.FCS and that is the last one it shows in the backup.  The 
 folder contains a LOT of files with names starting with J-Z, but none 
 of those are getting backed up.

What does the Xfer Error Summary for that backup run  say was the reason
they were skipped?

-- 
   Les Mikesell
lesmikes...@gmail.com


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


_
Scanned on 15 May 2014 16:30:32
Scanned by Erado



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Errors on backup

2014-05-14 Thread McGurk, Dan
I can't figure this one out.  It doesn't provide much information, but
of course I am not sure what I am looking for.

 

These are the lines from when it aborts:

 

tarExtract: Done: 0 errors, 38 filesExist, 25058590 sizeExist, 24507089
sizeExistComp, 364 filesTotal, 466311750 sizeTotal

Backup aborted ()

 

 

I am not sure what all to post for you to help, as I am new to this.

 

 

 

 

 

 

 

Thanks,

Dan

 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Email working, BackupPC not alerting on failure

2014-01-11 Thread Dan Doughty
Alexander,

Thanks, that was exactly what I needed.  In my particular case I had to set 
just the username by the host since I had the email domain set in 
EMailUserDestDomain.

Dan

-Original Message-
From: Alexander Moisseev [mailto:mois...@mezonplus.ru] 
Sent: Wednesday, January 08, 2014 12:14 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] Email working, BackupPC not alerting on failure

 Is this by design?  I mean, I’d expect it to tell me that it hasn’t backed up 
 for a week whether the target/client was offline or not.

This is by design. Add yourself as user in BackupPC hosts file (or through CGI 
Edit Hosts), and you will get messages.
For testing run BackupPC_sendEmail without options, like this:
# su -s /bin/bash backuppc -c /usr/share/BackupPC/bin/BackupPC_sendEmail

BackupPC_sendEmail sends warnings or errors (about BackupPC system-wide
critical events) to the administrative user (i.e. the person responsible for
the BackupPC system) and notifications (about their particular hosts) to the
other users (i.e. the persons responsible for each host). Last backup age is
not considered a system error, so the notification about this event will be
sent to the user, not to the administrative user (i.e. To:
$user$Conf{EMailUserDestDomain}, where $user is the user name from the
hosts file).


Documentation states:

$Conf{EMailAdminUserName} = '';
 Destination address to an administrative user who will receive a nightly 
email with *warnings and errors*. If there are no *warnings or errors* then no 
email will be sent.

$Conf{EMailNotifyOldBackupDays} = 7.0;
 How old the most recent backup has to be before notifying *user*. When 
there have been no backups in this number of days the *user* is sent an email.


P.S. A last backup that is fresh enough doesn't mean it is consistent. You
will get an e-mail notification only if the host hasn't backed up at all in
the specified period of time.

--
Alexander

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Email working, BackupPC not alerting on failure

2014-01-07 Thread Dan Doughty
Alexander:  I have not modified that setting and it is still at 7 days.

 

Peter,

 

When I use the web interface there is a section called Email summary and that 
is totally empty.

 

The Old LOGs have entries like this but they show up every night that there is 
a log:

2014-01-06 01:02:42 BackupPC_nightly now running BackupPC_sendEmail

2013-12-24 07:51:59 BackupPC_nightly now running BackupPC_sendEmail

 

This command, run from the command line, will produce an email if BackupPC
is down:

 

su -s /bin/bash backuppc -c /usr/share/BackupPC/bin/BackupPC_sendEmail -c

 

That command with the –t flag runs with no output:

 

bash-4.1$ /usr/share/BackupPC/bin/BackupPC_sendEmail -t

bash-4.1$

 

I’m really curious if it’s more a designed behavior.  Would it instead send me 
an email if the box was pingable, but hadn’t been backed up for a week?  But 
since it’s not pingable, it’s considered okay that it hasn’t been backed up?  
Like I said, that’s not the behavior I want but I’m not sure how the program is 
actually supposed to work.

 

Dan

 

From: Peter Major [mailto:pe...@major.me.uk] 
Sent: Saturday, January 04, 2014 3:57 AM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] Email working, BackupPC not alerting on failure

 

I had a similar problem 2 years ago when setting up BackupPC.

I made some notes at that time, which I am now trying to interpret ;-)

What I found at the time was that BackupPC_sendEmail uses slightly different 
scripting for sending test messages, admin messages, and messages to users, so 
the successful sending of a test message doesn't guarantee successful sending 
of the other types! However, 

1. I believe you should see some messages in the BackupPC log (through the web 
interface) if it is attempting to send email messages

2. Have you tried (as the BackupPC user) running BackupPC_sendEmail with the -t 
option - it will send the output to STDOUT instead of to the mail program?

Peter Major

-Original Message-
From: Dan Doughty d...@heartofamericait.com 
mailto:dan%20doughty%20%3c...@heartofamericait.com%3e 
Reply-to: General list for user discussion, questions and support 
backuppc-users@lists.sourceforge.net
To: 'General list for user discussion, questions and support' 
backuppc-users@lists.sourceforge.net 
mailto:%22'General%20list%20for%20user%20discussion,%20questions%20and%20support'%22%20%3cbackuppc-us...@lists.sourceforge.net%3e
 
Subject: [BackupPC-users] Email working, BackupPC not alerting on failure
Date: Fri, 3 Jan 2014 18:10:57 -0600

/usr/share/BackupPC/bin/BackupPC_sendEmail will generate a test message.

 

Unfortunately, I'm not getting warning messages telling me that one of my 
clients hasn't backed up for a week.  I've checked /var/log/maillog and there 
is no attempt to send mail.

 

My shell is set to /sbin/nologin for the backuppc user.  Perhaps that’s the 
problem.

 

When I check /var/log/BackupPC/LOG there’s no mention of the client at all.  
The client is offline due to it being a laptop.  Is this by design?  I mean, 
I’d expect it to tell me that it hasn’t backed up for a week whether the 
target/client was offline or not.  But just because I expect it doesn’t mean 
that is what the authors designed.

 

Any ideas?

Dan

 



 


[BackupPC-users] Email working, BackupPC not alerting on failure

2014-01-03 Thread Dan Doughty
/usr/share/BackupPC/bin/BackupPC_sendEmail will generate a test message.

 

Unfortunately, I'm not getting warning messages telling me that one of my
clients hasn't backed up for a week.  I've checked /var/log/maillog and
there is no attempt to send mail.

 

My shell is set to /sbin/nologin for the backuppc user.  Perhaps that's the
problem.

 

When I check /var/log/BackupPC/LOG there's no mention of the client at all.
The client is offline due to it being a laptop.  Is this by design?  I mean,
I'd expect it to tell me that it hasn't backed up for a week whether the
target/client was offline or not.  But just because I expect it doesn't mean
that is what the authors designed.

 

Any ideas?

Dan

 



Re: [BackupPC-users] OT: rsyncd as a service on Windows 8

2013-10-29 Thread Dan Johansson
On 29.10.2013 14:50, Timothy J Massey wrote:
 Dan Johansson dan.johans...@dmj.nu wrote on 10/27/2013 02:26:06 PM:
 
 Have you checked the Event Viewer?  It usually shows you what's going 
 on 
 with rsync...
 
 This seemed to have gotten missed.  Is there anything in there?  Rsync is 
 usually pretty expressive in the Event Viewer...
 
 Also, someone else mentioned about the possibility of suspend/hibernate. I 
 don't back up clients, so all of my servers are up 24/7.  I'd look 
 carefully into that:  I've found way too many applications that are 
 unhappy about power saving.
 
 Tim, do you mind sharing the CMD-file?
 
 Literally about 30 seconds of effort went into it...  My rsync daemon is 
 installed in a directory called rsyncd and contains folders:  bin (which 
 contains rsync.exe and needed DLL's), log (which contains the log, .pid, 
 .lock, etc.), and etc (which contains rsyncd.conf and .secrets)
 
 
 
 @ECHO OFF
 REM Start Rsync Daemon v2
 IF EXIST rsync.exe GOTO PATH_OK
 ECHO You must start this script in the same directory as rsync.exe.
 GOTO END
 :PATH_OK
 DEL ..\log\rsyncd.pid
 START rsync.exe --config=..\etc\rsyncd.conf --daemon --no-detach
 :END
 
 
 I'm not worried about deleting an active .pid file (if it's even possible) 
 because I don't use it for anything, and if a second rsync daemon tries to 
 start it won't because it won't be able to attach to port 873.
 
 Like I said, not much effort in the script, but it works.
 
 I don't use services for starting my rsync daemon.  I use a scheduled task 
 set to start the daemon on startup *and* every hour.  That way if it 
 *does* crash, the daemon will restart automatically.
 
 I certainly could make the script much smarter:  check to see if there's 
 an rsync process already running, etc.  But this little bit of effort 
 works perfectly for my needs.
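
A rough sketch of registering such a wrapper as scheduled tasks (task names and the
path to the CMD file are placeholders, not Tim's actual setup):

schtasks /Create /TN "rsyncd-boot"   /TR "cmd /c cd /d C:\rsyncd\bin && start-rsyncd.cmd" /SC ONSTART /RU SYSTEM
schtasks /Create /TN "rsyncd-hourly" /TR "cmd /c cd /d C:\rsyncd\bin && start-rsyncd.cmd" /SC HOURLY /RU SYSTEM

The cd is there because the CMD file above checks for rsync.exe in the current
directory before starting the daemon.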
 

Thanks Tim,
OK, I thought that your script was a little smarter, that's why I
asked. I do believe that the problem I am having is that this Win8
client does go to sleep. I'll try to disable hibernation and see
what happens.

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***




Re: [BackupPC-users] OT: rsyncd as a service on Windows 8

2013-10-27 Thread Dan Johansson
On 26.10.2013 23:32, Timothy J Massey wrote:
 Dan Johansson dan.johans...@dmj.nu wrote on 10/26/2013 08:37:48 AM:
 
 Any suggestions on
 a)   how to find out why rsyncd dies in the first place
 
 Not really:  I have never run rsync on Windows 8.  I *have* done it on 
 Windows Server 2012 (based on Win8) with zero crashes across 3-4 servers 
 on that OS and maybe 3 months of time.  So it may not be a Windows 8 
 problem exactly, it may be just *your* Windows 8.
 
 Have you checked the Event Viewer?  It usually shows you what's going on 
 with rsync...
 
 b)   how to fix this
 
 If we don't know why it dies how can we fix it?
 
 c)   if this is unfixable how can I make rsyncd restart even if there
 are a .pid and .lock file around
 
 You can't, to my knowledge.  I have wrapped launching rsync in a CMD to 
 delete stale files before launching the daemon.  (I hope I'm wrong:  it 
 would be nice not to need it!)
 
 Tim Massey

Tim, do you mind sharing the CMD-file?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***




[BackupPC-users] OT: rsyncd as a service on Windows 8

2013-10-26 Thread Dan Johansson
Hi List,

This is not strictly a BackupPC problem, but as there are many
knowledgeable people on this list using BackupPC to back up Windows hosts,
I thought I would give it a try.

I have been using rsyncd (cygwin) as a service under Windows XP
successfully for some time for use together with BackupPC. Now I am
trying to do the same with Windows 8. The installation of the service
went fine and the service started, but after some time it crashed and
would not restart. At this point I could also not start the rsyncd
manually. I found that there was still a .pid and a .lock file in the
run directory and if I removed both of them the rsyncd-service could
be started successfully as well as starting the rsyncd manually.

Any suggestions on
a)  how to find out why rsyncd dies in the first place
b)  how to fix this
c)  if this is unfixable how can I make rsyncd restart even if there
are a .pid and .lock file around

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***




[BackupPC-users] Backup of Windows 8 (again)

2013-01-27 Thread Dan Johansson
Hi, it's me again.

I have gotten my backup of my new Windows 8 host to halfway work.
At the moment it is backing up too much (i.e. I cannot get BackupFilesExclude 
to work properly).

e.g. If I put \\AppData in BackupFilesExclude for my share, AppData 
correctly does not get backed up,
but if I put \\AppData\\Local in BackupFilesExclude, all data in AppData, 
including Local, gets backed up.

I have tried some variants of \\AppData\\Local like \\AppData\\Local\\, 
\\AppData\\Local\\* and \\AppData\Local, but they all fail.

Any suggestion on what I am doing wrong and how to solve it?
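
For reference, the per-share hash form of BackupFilesExclude looks roughly like this
(a sketch only; the share name 'cDrive' and the forward-slash, rsync-style pattern are
illustrative assumptions, not a confirmed fix for the nesting problem above):

$Conf{BackupFilesExclude} = {
    'cDrive' => [
        '/Users/*/AppData/Local',   # exclude only Local, keep the rest of AppData
    ],
};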

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] BackupPC and Windows 8

2013-01-21 Thread Dan Johansson
Hi,

I have been using BackupPC for some time now and I am quite happy. I am backing 
up both Linux and Windows (XP using rsyncd) hosts.
Now I have gotten a new Windows 8 host as well and would also like to back it 
up.
Has anyone else successfully configured BackupPC to back up a Windows 8 host?
How have you done it (samba, rsync, rsyncd)?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] backuppcfuse.pl dies with Segmentation fault

2012-04-06 Thread Dan Johansson
Hi,

I have at last gotten around to playing with backuppcfuse.pl.
But when I start it, it fails with a segmentation fault:

$ perl -I /usr/lib /usr/local/scripts/backuppcfs.pl -f /mnt/backuppc/
Segmentation fault

Here are the last few lines of a strace:
brk(0xece000)   = 0xece000
read(3, rn undef;\n}\n($sub)=(cach..., 4096) = 4096
read(3, e ($pos$offset) { # as long a t..., 4096) = 4096
brk(0xeef000)   = 0xeef000
read(3, ew session: $!\;\n}\n\ndaemonize if..., 4096) = 1014
read(3, , 4096)   = 0
close(3)= 0
getgroups(0, NULL)  = 2
getgroups(2, [16, 442]) = 2
stat(/mnt/backuppc/, {st_mode=S_IFDIR|0777, st_size=48, ...}) = 0
--- {si_signo=SIGSEGV, si_code=SI_KERNEL, si_addr=0} (Segmentation fault) ---
+++ killed by SIGSEGV +++
Segmentation fault


The mountpoint is there (owned by backuppc and mode 777):
$ ls -pal  /mnt/backuppc/
total 0
drwxrwxrwx 2 backuppc backuppc  48 Apr  6 14:22 ./
drwxr-xr-x 9 root root 240 Apr  6 14:22 ../

Any suggestions on how to resolve this and get backuppcfuse.pl up & running?

-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Compare backups without restoring

2012-04-01 Thread Dan Johansson

On Saturday 31 March 2012 10.45:25 Les Mikesell wrote:
 On Sat, Mar 31, 2012 at 3:01 AM, Dan Johansson dan.johans...@dmj.nu wrote:
 
  The talk about backuppc-fuse sounds VERY interesting (instead of running 
  BackupPC_tarCreate -h host1 -n 887 -s /usr . | tar xf - to a temporary 
  filesystem),
  I did a quick search on the net and it looks like there are some different 
  versions around (Pieter Wuille's from Nov 2009 and unixtastic.com from Nov 
  2008).
  Are there other newer around? Does the one from Pieter work with BPC 
  3.2.1?
 
 I haven't tried the fuse approach, but note that for single
 directories, you can use the 'history' link to see which files have
 changed and when, and gnu tar has a --compare option so you could
 generate a stream via ssh/BackupPC_tarCreate and compare to the
 current filesystem or at least only have to restore one.

Yes, I know about those two, but what I am trying to do is a 
backup of the backup to a remote location (Amazon S3), and that in a format 
where I _do not_ need BackupPC to restore.
So, once again: Is anyone using BackupPC-fuse with BPC 3.2.1? Which 
BackupPC-fuse version are you using?

-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Compare backups without restoring

2012-03-31 Thread Dan Johansson
On Friday 30 March 2012 09.57:27 Jeffrey J. Kosowsky wrote:
 One simple possibility would be to use backuppc-fuse to mount the pc
 tree and then use normal *nix routines like diff or cmp to find
 differences.
 
 .
 
 N.Trojahn wrote at about 14:09:06 +0200 on Friday, March 30, 2012:
   Hello list,
   
   I'd like to find the differences between two backups of a certain host
   without restoring the two backups (way too large) and diff 'em.
   
   Anyone has an idea how to achieve this using a script or something like
   that which runs over the BackupPC pool?

The talk about backuppc-fuse sounds VERY interesting (instead of running 
BackupPC_tarCreate -h host1 -n 887 -s /usr . | tar xf - to a temporary 
filesystem),
I did a quick search on the net and it looks like there are some different 
versions around (Pieter Wuille's from Nov 2009 and unixtastic.com from Nov 
2008).
Are there any newer ones around? Does the one from Pieter work with BPC 3.2.1?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Automatic redirected restore

2012-02-19 Thread Dan Johansson

On Saturday 18 February 2012 11.08:23 Les Mikesell wrote:
 On Sat, Feb 18, 2012 at 10:53 AM, Dan Johansson dan.johans...@dmj.nu wrote:
   Use the BackupPC_tarCreate command line program and pipe it directly
   to a 'tar -xf -' to extract instead of saving the output in a file. Insert
   ssh in the pipeline appropriately if you want the restore to happen
   elsewhere.
  
  Great - that was what I was looking for. Now I have another (related) problem.
  With BackupPC_archiveStart I get all shares for the host, with
  BackupPC_tarCreate I have to explicitly name all shares. This puts me in the
  situation that I have to dynamically figure out all shares for each host. Is
  there a command that does this for me? If not, then I have to write a small
  perl-script to read the host-config-file.
 
 Something like:
 
 BEGIN { require config.pl; }

I have created a small Perl script (actually most of the code is borrowed 
from other BackupPC programs) to list the shares for a specific host.

The only issue (that I know of / have encountered) is that if it is started with an 
unknown/unconfigured host, the script continues and outputs the default share 
from config.pl. I really do not know how to catch this.

There are probably a few more errors/issues with the script, as it is some 
years since I wrote my last Perl script (and back then I was not using the object 
oriented features).
I have attached the script in case someone needs something like this; feel free to 
use and change it. If you find some errors/issues and have a solution, I would 
be grateful if you posted it here (or sent me a private mail).
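
For anyone who only needs the share list and not the full attached script, a minimal
sketch using the BackupPC libraries might look like this (the library path and the
unknown-host check are assumptions; this is not the attached BackupPC_Get_Shares):

#!/usr/bin/perl
use strict;
use warnings;
use lib "/usr/share/BackupPC/lib";
use BackupPC::Lib;

my $host  = shift @ARGV or die "usage: $0 host\n";
my $bpc   = BackupPC::Lib->new() or die "cannot initialize BackupPC::Lib\n";
my $hosts = $bpc->HostInfoRead();
# refuse unknown hosts instead of falling back to the config.pl defaults
die "unknown host: $host\n" unless defined $hosts->{$host};
$bpc->ConfigRead($host);
my %conf   = $bpc->Conf();
my $shares = $conf{RsyncShareName} || $conf{SmbShareName} || $conf{TarShareName};
$shares = [$shares] unless ref($shares) eq 'ARRAY';
print "$_\n" for @$shares;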

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***

BackupPC_Get_Shares
Description: Perl program


[BackupPC-users] Inconsistent data in BackupPC

2012-02-19 Thread Dan Johansson
During my tests with BackupPC_tarCreate I noticed that some files could not be 
correctly restored from BackupPC. (Shrug!).

Running BackupPC_tarCreate -h host1 -n 887 -s /usr . | tar xf - I get the 
following Error:

Error: padding 
/var/lib/backuppc/pc/host1/887/f%2fusr/fshare/fwebapps/fgallery/f2.3.1/fhtdocs/fmodules/fnewitems/fpo/fru.mo
 to 4154 bytes from  bytes

In XferLOG.887 I see this for the file (and its neighbors) 
same 644   0/03574 
share/webapps/gallery/2.3.1/htdocs/modules/newitems/po/ro.po
 pool 644   0/04154 
share/webapps/gallery/2.3.1/htdocs/modules/newitems/po/ru.mo
same 644   0/05177 
share/webapps/gallery/2.3.1/htdocs/modules/newitems/po/ru.po

For the other files in this directory the first word is same!


When I try to restore the file it gets interesting.
If I select Download Tar archive I get a tar archive (restore.tar) containing 
the ru.mo file which then contains 4154 bytes of 0's:
$ ll restore.tar
-rw-r--r-- 1 dan users 10240 Feb 19 15:33 restore.tar

$ tar xvf restore.tar
./ru.mo

$ ll ru.mo 
-rw-r--r-- 1 dan users  4154 Jan  8  2011 ru.mo

$ hexdump -C ru.mo
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001030  00 00 00 00 00 00 00 00  00 00                    |..........|
0000103a

Selecting Download Zip archive on the other hand works OK:

$ ll restore.zip 
-rw-r--r-- 1 dan users 1768 Feb 19 15:32 restore.zip

$ unzip -v restore.zip
Archive:  restore.zip
 Length   MethodSize  CmprDateTime   CRC-32   Name
  --  ---  -- -   
4154  Defl:N 1644  60% 01-08-2011 17:52 c11e0302  ru.mo
  ---  ------
4154 1644  60%1 file

$ unzip  restore.zip
Archive:  restore.zip
  inflating: ru.mo   

$ ll ru.mo 
-rw-r--r-- 1 dan users 4154 Jan  8  2011 ru.mo

$ hexdump -C ru.mo | head
00000000  de 12 04 95 00 00 00 00  15 00 00 00 1c 00 00 00  |................|
00000010  c4 00 00 00 1d 00 00 00  6c 01 00 00 00 00 00 00  |........l.......|
00000020  e0 01 00 00 1e 00 00 00  e1 01 00 00 07 00 00 00  |................|
00000030  00 02 00 00 5b 00 00 00  08 02 00 00 23 00 00 00  |....[.......#...|
...

Have I stumbled upon a bug in BackupPC_tarCreate?

-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] Automatic redirected restore

2012-02-18 Thread Dan Johansson
Hi,

I am looking for a way to do an automatic redirected restore.
At the moment I am playing with an Archive Host and are doing the following:

1)  run BackupPC_archiveStart with ArchiveDest set to a local directory (at 
the moment /var/tmp).

2)  When the archive process is finished I create a subdirectory in 
/var/tmp 
and extract the tar-file from step one into this subdirectory.

This approach works but uses a lot of disk space (almost twice the backup size), as the 
tar-file from step one must exist together with the extracted data for some 
time (at least until the tar xvzf is finished and I can delete the tar 
file). Is there/do you know of any way to skip the use of a tar file and 
directly export/restore the backup to a redirected location?
With redirected I mean it does not get restored to the original host and/or 
location.

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Automatic redirected restore

2012-02-18 Thread Dan Johansson

On Saturday 18 February 2012 10.18:31 Les Mikesell wrote:
 On Sat, Feb 18, 2012 at 9:17 AM, Dan Johansson dan.johans...@dmj.nu wrote:
  I am looking for a way to do an automatic redirected restore.
  At the moment I am playing with an Archive Host and are doing the
  following:
  
  1)  run BackupPC_archiveStart with ArchiveDest set to a local
  directory (at
  the moment /(var/tmp).
  
  2)  When the archive process is finished I create a subdirectory in
  /var/tmp
  and extract the tar-file from step one into this subdirectory.
  
  This approach works but uses a lot of disk space (almost two times) as
  the tar-file from step one must exist together with the extracted data
  for some time (at least until the tar xvzf is finished and I can
  delete the tar file). Is there/do you know of any way to skip the use
  of a tar file and directly export/restore the backup to a
  redirected location. With redirected I mean it does not get restored
  to the original host and or location.
 
 Use the BackupPC_tarCreate command line program and pipe it directly to a
 'tar -xf -' to extract instead of saving the output in a file.  Insert ssh
 in the pipeline appropriately if you want the restore to happen elsewhere.
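
In concrete terms the suggested pipeline looks roughly like this (host, backup
number, share and destination are placeholders):

BackupPC_tarCreate -h host1 -n 887 -s /usr . | ssh root@target 'cd /restore/usr && tar -xpf -'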

Great - that was what I was looking for. Now I have another (related) problem.
With BackupPC_archiveStart I get all shares for the host, with 
BackupPC_tarCreate I have to explicitly name all shares. This puts me in the 
situation that I have to dynamically figure out all shares for each host. Is 
there a command that does this for me? If not, then I have to write a small 
perl-script to read the host-config-file.

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Backup# in DumpPostUserCmd

2012-01-30 Thread Dan Johansson
On Monday 30 January 2012 11.13:04 Flako wrote:
 2012/1/29 Dan Johansson dan.johans...@dmj.nu:
  Hi,
  
  Is it possible to get the Backup# in the DumpPostUserCmd?
  I have tried with $backup and $backupnumber but none work.
  I also have tried to get the last Backup# from the backups file, but
  this file is written _after_  is DumpPostUserCmd finished.
  
  Any suggestions?
  
  --
  Dan Johansson, http://www.dmj.nu
  ***
  This message is printed on 100% recycled electrons!
  ***
 
 mm variable, do not know ..
 but you could pass $HostIP as a parameter and, within the script, look under
 /data/backupPC/pc/$HostIP to retrieve that data.
 

Thanks for your tip, but I took another approach - I split my script into two 
parts, where part one is called in DumpPostUserCmd; it does some basic checks 
and just adds the client name ($client) to a queue file. The second script is 
called on a regular basis from cron and reads the queue file. Now I can get 
the last backup# from the backups file.
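
A rough sketch of the two parts (paths and the queue-file location are placeholders,
not the actual scripts):

# part 1 - referenced from $Conf{DumpPostUserCmd}, e.g. '... /usr/local/scripts/bpc-queue.sh $client'
echo "$1" >> /var/lib/backuppc/post-backup.queue

# part 2 - run from cron; the per-host "backups" file is tab-separated with the backup number in column 1
while read -r client; do
    num=$(tail -n 1 "/var/lib/backuppc/pc/$client/backups" | cut -f1)
    echo "last backup for $client is #$num"     # real post-processing would go here
done < /var/lib/backuppc/post-backup.queue
: > /var/lib/backuppc/post-backup.queue         # empty the queue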

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] Backup# in DumpPostUserCmd

2012-01-29 Thread Dan Johansson
Hi,

Is it possible to get the Backup# in the DumpPostUserCmd?
I have tried with $backup and $backupnumber but none work.
I also have tried to get the last Backup# from the backups file, but this file 
is written _after_ DumpPostUserCmd has finished.

Any suggestions?

-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] BackupPC and Amazon S3

2012-01-28 Thread Dan Johansson
Hi,

I am playing with the thought of putting some of my backups in BackupPC  on S3 
storage as well as on local storage.

This is the procedure I was thinking about to use:

1)  Create a last-full-copy with the link-full-backup.sh script 
(previously  shown on this mailing-list).
2)  Mount my S3 bucket on  /mnt/s3 using the s3ql filesystem 
(http://code.google.com/p/s3ql/) which provides among other things encryption.
3)  rsync the last-full-copy to /mnt/s3
4)  umount /mnt/s3
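
In shell terms the four steps might look roughly like this (bucket name, mount point
and the location of link-full-backup.sh are placeholders; mount.s3ql/umount.s3ql are
the usual s3ql tools):

/usr/local/scripts/link-full-backup.sh                     # 1) hard-linked copy of the last full
mount.s3ql s3://my-backuppc-bucket /mnt/s3                 # 2) mount the encrypted bucket
rsync -aH --delete /var/lib/backuppc/last-full/ /mnt/s3/   # 3) copy it to S3
umount.s3ql /mnt/s3                                        # 4) unmount cleanly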

Any thoughts on this procedure?
Pros/cons/no-no's?
 
Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] Small issue with trash

2011-11-13 Thread Dan Johansson
I have a small issue with the trash, or more precisely  the trash-cleaner. 
Sometimes a directory will get stuck in the trash-directories and will not 
be cleaned away by the trash-cleaner, it just sits there forever or until 
manually (rm -rf) removed. Restarting BackupPC does not help.
Any suggestions what could be wrong?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] raid Re: Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-10-02 Thread Dan Pritts
Tyler J. Wagner wrote:
 Raid5: Lose power during write operation = silent data corruption, which
 you'll find out about next time you try to rebuild.
depends on your disk array.  good hardware raid has a battery-backed 
write cache which will prevent such issues.  As long as the power comes 
back before your battery dies.

Les Mikesell wrote:
 The real problem is that there are parts of the disk that haven't been
 accessed for a while and the errors already exist on multiple drives
 before you notice them.   For software raid, I thought a cron job was
 supposed to be testing them periodically, but the notification may not
 reach you - and hardware raid may not do the tests.
Depends on your software RAID.  When I last checked, solaris did not 
automatically check your zfs raids.   FreeBSD 8.0 doesn't.
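
On Linux md, the periodic scrub Les mentions can also be kicked off by hand
(device name is a placeholder; many distros ship a cron job that does this):

echo check > /sys/block/md0/md/sync_action   # start a scrub of /dev/md0
cat /proc/mdstat                             # watch its progress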

various folks said:

  RAID10 performs better than RAID6.  Or vice versa.

depends on your use case.

broadly speaking, if you are doing large reads and writes, or doing 
mostly reads, raid5/6 will be faster.  for random i/o raid10 will be 
faster.
-- 
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224



Re: [BackupPC-users] Backup of dual-boot laptop

2011-09-30 Thread Dan Johansson
On Wednesday 28 September 2011 20.26:25 Arnold Krille wrote:
 On Wednesday 28 September 2011 18:59:38 Tim Fletcher wrote:
  On Wed, 2011-09-28 at 17:30 +0200, Dan Johansson wrote:
   I have a laptop that is dual-boot (Linux and WinXP) and gets the same
   IP from DHCP in both OS's. Today I have two entries in BackupPC for
   this laptop (hostname_lnx and hostname_win) with different backup
   methods for each (rsync over ssh for Linux and SMB for WinXP). This
   works good for me with one small exception - I always gets a
   Backup-Failed message for one of them each night.
   Does someone have a suggestion on how to solve this in a more
   beautiful way?
  
  Write a ping script that finds out if the laptop is in Windows or Linux
  so one or the other of the backup hosts won't ping.
 
 Yep, detecting the os with nmap should work. Or if you are not using dhcp
 or only for one of them, you could distinguish by ip-address.
 
  You can also make use of the fact that most desktop distros have avahi
  installed and use short hostname.local as a target host name.
 
 That will work until you install bonjour for windows (which is very nice in
 networks relying on zeroconf).
 
 Using the same archive-method for both would involve either mingw with
 stuff on the windows-machine or exporting / as C$ in samba. But still you
 will have different paths inside these shares which results in files and
 paths only present every other day.
 Better to use two different backup-hosts for the two os'es.

Thanks for the suggestion with nmap. I now have an almost working ping 
script (still got some fine-tuning to do)  to determine which OS is booted.
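
The script is not attached, but a minimal sketch of the idea looks like this (the
port probes 22-means-Linux / 445-means-Windows and the way it is wired into
$Conf{PingCmd} are assumptions, not the actual script):

#!/bin/sh
# usage: dualboot-ping.sh <host> <lnx|win>
# succeed only if the host answers ping AND the expected OS is booted;
# a non-zero exit makes BackupPC treat the host as not on the network.
host="$1"; want="$2"
ping -c 1 -w 3 "$host" >/dev/null 2>&1 || exit 1
if   nmap -Pn -p 22  --open "$host" 2>/dev/null | grep -q '22/tcp open';  then booted=lnx
elif nmap -Pn -p 445 --open "$host" 2>/dev/null | grep -q '445/tcp open'; then booted=win
else exit 1
fi
[ "$booted" = "$want" ] || exit 1
exec ping -c 1 "$host"   # hand back a normal ping run so BackupPC sees the usual output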

Regards
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



[BackupPC-users] Backup of dual-boot laptop

2011-09-28 Thread Dan Johansson
I have a laptop that is dual-boot (Linux and WinXP) and gets the same IP from 
DHCP in both OS's. Today I have two entries in BackupPC for this laptop 
(hostname_lnx and hostname_win) with different backup methods for each (rsync 
over ssh for Linux and SMB for WinXP). This works well for me with one small 
exception - I always get a Backup-Failed message for one of them each 
night.
Does someone have a suggestion on how to solve this in a more beautiful way?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] VMware ESXi performance?

2011-09-22 Thread Dan Pritts

Trey Dockendorf wrote:

 On Sep 22, 2011 6:04 PM, Les Mikesell lesmikes...@gmail.com
 mailto:lesmikes...@gmail.com wrote:
  
   Does anyone have a good estimate of the performance hit from running
   backuppc in a VM under VMware ESXi with nothing else sharing the
   physical disks for the archive?  And are there any tuning tricks to
   optimize the partition alignment, etc.?
  

 There are a few things to make the VM faster.  I've found running the
 virtual disks as either LSI parallel SAS or paravirtual (pvscsi) helps.
 Also using VMXnet3 will lower cpu I/O on network activity.  This will
 require vmware tools for your respective distro.

 A lot would depend also on the disk and RAID level (assuming you use
 RAID). My ESXi server on RAID 10 is doing well with 20 servers being
 backed up and about 600GB of actual disk space used after pooling and
 compression.

I don't run backuppc under ESX but I run a lot of other stuff that way.

You shouldn't need to worry about alignment of the virtual disks.

Whenever running under vmware, set your kernel to use HZ=100 instead of 
the default 1000.  RHEL & derivatives support a kernel boot option 
divider=10 (I think) to do this.
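
For example, appended to a RHEL-style grub.conf kernel line (illustrative only; the
rest of the line depends on the system):

kernel /vmlinuz ro root=/dev/VolGroup00/LogVol00 divider=10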

Bottom line there is measurable I/O overhead with ESX(i) but it's 
generally very low.

vmxnet3 & pvscsi are definitely a win, as is running under vmware in 
general (snapshots ftw!).
-- 
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224



Re: [BackupPC-users] Upgrade BackupPC 2.1.2 to 3.2.1

2011-09-06 Thread Dan Johansson
On Sunday 04 September 2011 19.17:17 Timothy J Massey wrote:
 Dan Johansson dan.johans...@dmj.nu wrote on 09/04/2011 10:04:16 AM:
  As you can see it says that the Pool is 0.00GB. This can not be correct
 
 as
 
  there are data in the pool and I can do a restore. Even after a
  backup does it
  say 0.00GB.
 
 Has BackupPC_nightly run yet?  (It runs at the time each day of the first
 hour listed in the Wakeup variable).  An easier question:  is it past 24
 hours since you completed the upgrade?
 
 The statistics are part of the nightly run, IIRC.

Yes, you are completely right, after the nightly run the statistics are there.
Thanks,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] Upgrade BackupPC 2.1.2 to 3.2.1

2011-09-04 Thread Dan Johansson
Hi,

I have now updated to 3.2.1 and have (at least) one issue.

On the Status-Page in the GUI I see the following:

# Other info:
   ...
* Pool is 0.00GB comprising files and directories (as of 2011-09-04 15:47),
* Pool hashing gives repeated files with longest chain ,
* Nightly cleanup removed 0 files of size 0.00GB (around 2011-09-04 15:47),
* Pool file system was recently at 38% (2011-09-04 15:39), today's max is 
38% (2011-09-04 15:06) and yesterday's max was %. 

As you can see, it says that the pool is 0.00GB. This cannot be correct, as 
there is data in the pool and I can do a restore. Even after a backup it 
still says 0.00GB.

Any suggestions on what could be wrong?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***
On Sunday 04 September 2011 00.23:03 Holger Parplies wrote:
 Hi,
 
 Dan Johansson wrote on 2011-09-03 11:04:22 +0200 [[BackupPC-users] Upgrade 
BackupPC 2.1.2 to 3.2.1]:
  In Gentoo BackupPC 3.2.1 has just gone stable and I want to upgrade from
  2.1.2, and have some questions.
  Will 3.2.1 use the same configfiles as 2.1.2 (do I have to rewrite all my
  configfiles)?
 
 yes (no). Though there are some new variables in config.pl, so you might
 want or need to merge your local changes into the new config.pl.
 
  Will 3.2.1 use the same filesystem structure as 2.1.2 (can I restor a
  file backed up with 2.1.2 with 3.2.1)?
 
 Yes (yes).
 
  Are there some gotchas with this upgrade?
 
 In general, if you've installed a distribution package of BackupPC, it's up
 to the package to handle upgrades (you *did* previously install version
 2.1.2 from a package, too, right?).
 
 As far as the upstream BackupPC code is concerned, there should be no
 issues with the upgrade. For the Gentoo package, I have no idea. In
 theory, it *could* introduce problems (like moving the pool location;
 however, with 3.2.1 you could just set $Conf{TopDir} in config.pl to work
 around that). If you want a definite answer, you'll have to ask in a
 Gentoo forum (or, of course, read the source ;-).
 
 Regards,
 Holger



[BackupPC-users] Upgrade BackupPC 2.1.2 to 3.2.1

2011-09-03 Thread Dan Johansson
Hi,

In Gentoo BackupPC 3.2.1 has just gone stable and I want to upgrade from 
2.1.2, and have some questions.
Will 3.2.1 use the same configfiles as 2.1.2 (do I have to rewrite all my 
configfiles)?
Will 3.2.1 use the same filesystem structure as 2.1.2 (can I restore a file 
backed up with 2.1.2 with 3.2.1)?

Are there some gotchas with this upgrade?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***



Re: [BackupPC-users] dump/restore support?

2011-08-11 Thread Dan Pritts
It's unlikely, because dump/restore do not store things on a file-by-file 
basis.  To do so you'd have to unwrap the dumpfile on the backup server and 
break it up into individual files to store.

On Jul 29, 2011, at 5:16 PM, Rory Toma wrote:
 Are there any plans to add dump/restore as a supported option?
 


Re: [BackupPC-users] serial or parallel backups

2011-06-27 Thread Dan Pritts
 
 What is your i/o subsystem? I have a striped array (raid 0) over two
 raid 6 arrays with 7 drives in each array, so effectively I have 10
 spindles. With this I can handle more i/o load than if I only had one
 drive.

I'm sure that John knows this, but for the benefit of the OP I'll note that 
it's a lot more complicated than that, of course.  For small writes I might say 
that you effectively have 2 spindles, and since each small write needs two disk 
operations I might even say you effectively have only 1 spindle.

OTOH for small reads you have 14 spindles.  

I don't mean to start an argument, and the numbers above are basically pulled 
out of my butt to make my point.  I just want to note for the OP that it gets 
complicated and you should do more research.  

 I would expect the throughput of multiple servers backing up in
 parallel to be less than the total i/o bandwidth of the disk since each
 write will end up moving the heads of the disks to different locations
 dropping the effective bandwidth of the disk.

Multiple I/O streams can probably make better use of your I/O bandwidth than 
can a single serial one, unless you are running on low-end hardware like a USB 
drive.

The OS and/or RAID controller and/or disks will reorder requests that come in 
in parallel so they can be more efficiently performed.  

Good enough hardware would be a SATA disk with NCQ enabled.  NCQ is a function
of the disk AND the controller AND the OS, don't assume that all SATA does it.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224




Re: [BackupPC-users] XFS BackupPC optimal mount options

2011-05-25 Thread Dan Pritts
 Has anyone tuned XFS with its several mount options?

Some useful info below, which I didn't know about (logbsize & delaylog).  I suspect 
it would increase BackupPC performance.

http://xfs.org/index.php/XFS_FAQ:

 Q: I want to tune my XFS filesystems for something
 
 The standard answer you will get to this question is this: use the defaults.
 
 There are few workloads where using non-default mkfs.xfs or mount
 options make much sense. In general, the default values already
 used are optimised for best performance in the first place. mkfs.xfs
 will detect the difference between single disk and MD/DM RAID setups
 and change the default values it uses to configure the filesystem
 appropriately.
 
 There are a lot of XFS tuning guides that Google will find for
 you - most are old, out of date and full of misleading or just plain
 incorrect information. Don't expect that tuning your filesystem for
 optimal bonnie++ numbers will mean your workload will go faster.
 You should only consider changing the defaults if either: a) you
 know from experience that your workload causes XFS a specific problem
 that can be worked around via a configuration change, or b) your
 workload is demonstrating bad performance when using the default
 configurations. In this case, you need to understand why your
 application is causing bad performance before you start tweaking
 XFS configurations.
 
 In most cases, the only thing you need to to consider for mkfs.xfs
 is specifying the stripe unit and width for hardware RAID devices.
 For mount options, the only thing that will change metadata performance
 considerably are the logbsize and delaylog mount options. Increasing
 logbsize reduces the number of journal IOs for a given workload,
 and delaylog will reduce them even further. The trade off for this
 increase in metadata performance is that more operations may be
 missing after recovery if the system crashes while actively making
 modifications.


One other thing that I found while poking around is references to lazy 
counters.  This is a newer feature of XFS and should increase performance from 
what I can tell.  If you make a filesystem with the current version of the XFS 
tools it will be on by default.  Make sure that your system is using the 
current version - see the mkfs.xfs man page for more.

If you have an existing XFS filesystem it probably isn't on, but it appears 
you may be able to change that with xfs_util.

I would make sure I had a backup of the filesystem before dorking with 
xfs_util.
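
To make that concrete, the two knobs the FAQ singles out would show up as mount
options, e.g. in /etc/fstab (values are illustrative, not recommendations; on newer
kernels delaylog is the default), and lazy counters can be toggled on an unmounted
filesystem with xfs_admin, assuming that is the tool meant above:

/dev/sdb1  /var/lib/backuppc  xfs  noatime,logbsize=256k,delaylog  0  2

xfs_admin -c lazy-count=1 /dev/sdb1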

hope this helps

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224




Re: [BackupPC-users] Slow Rsync Transfer?

2011-04-29 Thread Dan Lavu
Resolved. 

After looking at the file list, we found a 102GB log file; rsync doesn't like 
large files and there are a ton of threads about why. 

Troubleshooting steps that actually isolated the issue: 

strace -p $PID (the output looked like it was catting the file) 
lsof -f | grep rsync (to confirm)

I hope this helps anybody else who might have this issue. 

Cheers,

 
___
Dan Lavu
System Administrator  - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645


-Original Message-
From: Adam Goryachev [mailto:mailingli...@websitemanagers.com.au] 
Sent: Friday, April 29, 2011 12:46 AM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Slow Rsync Transfer?


On 29/04/11 04:08, Dan Lavu wrote:
 Gerald,
 
  
 
 Not the case with me, if you look at the host ras03, you see that the 
 average speed is .92MB/s while other host are significantly faster. It 
 is taking 40 hours to do 110GB, while other hosts are doing it in 
 about an hour. I’m about to patch this box and reboot it, it’s been up 
 for
 200+ days and I haven’t had a good backup for over a week now. So any
 input will be helpful, again thanks in advance.

One thing I've seen which can really slow down rsync backups is that a large 
file with changes will be much slower to backup than a number of small files 
(of the same total size) with the same amount of changes.

I back up disk images; the original method was to just back up the image, but this 
was too slow. The new method is:
use split to divide the file into a series of 20M or 100M files, then back up these 
individual files.

I also do the same with database exports and other software backup files larger 
than around 100M ... they just back up quicker, and a failed backup will 
continue from the most recent chunk (in a full backup) instead of restarting 
the whole file. Also, the timeout is shorter because it is reset after each chunk.
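
A sketch of that pre-backup split (paths and chunk size are placeholders; this would
typically run from cron or a DumpPreUserCmd on the client):

mkdir -p /backup/chunks
rm -f /backup/chunks/dbexport_*
split -b 100M /backup/dbexport.dmp /backup/chunks/dbexport_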

Regards,
Adam

- --
Adam Goryachev
Website Managers
www.websitemanagers.com.au


[BackupPC-users] Slow Rsync Transfer?

2011-04-28 Thread Dan Lavu
Hi,

I've been running BackupPC for a while now and I've noticed an odd trend: 
I have 29 servers which are identical in hardware and very similar in 
configuration. 28 of these servers transfer at 30-40MB/s via rsync; one host 
and the database servers transfer between 2-10MB/s, even when run individually 
with no other hosts being backed up. 

I've checked the IO wait on both ends: the BackupPC server, which is a 12-disk RAID 
10 with 2TB of usable space and 15k RPM drives, sits at 5% IO wait while the host is 
at 0%. 

I've done simple sftp transfers and they transfer at 20-30MB/s with no issues. 

Here is a backuppc log on one of the problem hosts, 

Backup#  Type     Filled  Level  Start Date   Duration/mins  Age/days  Server Backup Path
112      full     yes     0      2/5 19:06      62.3         81.6      /backuppc/pc/ras03/112
140      full     yes     0      3/5 22:00      55.8         53.5      /backuppc/pc/ras03/140
154      full     yes     0      3/20 04:35    465.2         39.2      /backuppc/pc/ras03/154
161      full     yes     0      3/27 21:42    142.3         31.5      /backuppc/pc/ras03/161
167      full     yes     0      4/4 18:00     854.0         23.7      /backuppc/pc/ras03/167
173      full     yes     0      4/11 18:00   1032.9         16.7      /backuppc/pc/ras03/173
174      incr     no      1      4/12 18:00    152.2         15.7      /backuppc/pc/ras03/174
175      incr     no      1      4/13 18:00   1019.5         14.7      /backuppc/pc/ras03/175
176      incr     no      1      4/14 18:00   1594.4         13.7      /backuppc/pc/ras03/176
177      incr     no      1      4/15 21:11   2094.5         12.6      /backuppc/pc/ras03/177
178      full     yes     0      4/19 03:00   2037.8          9.3      /backuppc/pc/ras03/178
179      incr     no      1      4/20 18:16   1147.2          7.7      /backuppc/pc/ras03/179
180      incr     no      1      4/21 18:04   1840.7          6.7      /backuppc/pc/ras03/180
181      partial  yes     0      4/25 12:32   2389.1          2.9      /backuppc/pc/ras03/181

I've already added the checksum seed ('--checksum-seed=32761'), but it had no 
impact. How does BackupPC calculate the duration and speed? On my Oracle hosts 
I kick off RMAN prior to the BackupPC run, so that might explain their slow 
transfer speeds if the speed is calculated as size/time inclusive of the RMAN 
(database export) backup. This host (above) boggles me though: these are static 
files being transferred, and all the other 28 hosts transfer just fine.

Thanks in advance for any input or troubleshooting steps. 
 
___
Dan Lavu
System Administrator  - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645





Re: [BackupPC-users] Slow Rsync Transfer?

2011-04-28 Thread Dan Lavu
Gerald, 

 

Not the case with me; if you look at the host ras03, you see that the average speed is 
.92MB/s while the other hosts are significantly faster. It is taking 40 hours to do 
110GB, while other hosts are doing it in about an hour. I'm about to patch this box and 
reboot it; it's been up for 200+ days and I haven't had a good backup for over a week 
now. So any input will be helpful, again thanks in advance.
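
(A back-of-the-envelope check, assuming the summary speed is simply the full size 
divided by the elapsed transfer time; the numbers below are taken from the table 
that follows:)

# Hedged sanity check in Perl: does 112.47 GB at 0.92 MB/s really mean ~40 hours?
my $full_gb   = 112.47;   # "Full Size (GB)" reported for ras03
my $speed_mbs = 0.92;     # "Speed (MB/s)" reported for ras03
my $hours     = $full_gb * 1024 / $speed_mbs / 3600;
printf "ras03: %.1f hours per full at %.2f MB/s\n", $hours, $speed_mbs;   # ~34.8 hours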

 

Host    User  #Full  Full Age (days)  Full Size (GB)  Speed (MB/s)  #Incr  Incr Age (days)  Last Backup (days)  State               Last attempt
ras01         7      3.6              106.97          36.06         6      0.6              0.6                 idle                idle
ras02         7      3.5              122.30          29.25         6      0.5              0.5                 idle                idle
ras03         7      9.4              112.47           0.92         7      3.0              3.0                 backup in progress
ras04         8      0.5              105.04          40.48         6      1.5              0.5                 idle                idle
ras05         7      2.7               29.77          17.19         6      0.7              0.7                 idle                idle
ras06         7      4.7               38.23          14.11         6      0.7              0.7                 idle                idle
ras07         7      1.8              134.75          30.72         6      0.8              0.8                 idle                idle
ras08         7      1.7               98.78          24.94         6      0.6              0.6                 idle                idle
ras09         7      1.7               13.38          21.73         6      0.7              0.7                 idle                idle
ras10         7      1.5              162.20          36.92         6      0.5              0.5                 idle                idle
ras11         7      2.5              100.25          23.62         6      0.5              0.5                 idle                idle
ras12         7      1.7               92.03          36.86         6      0.7              0.7                 idle                idle

 

 Cheers,

 

___
Dan Lavu
System Administrator  - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645


-Original Message-
From: Gerald Brandt [mailto:g...@majentis.com] 
Sent: Thursday, April 28, 2011 1:51 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] Slow Rsync Transfer?

- Original Message -

 Hi,

 I've been running BackupPC for a while now and I've been noticing an
 odd trend, I have 29 servers which are identical in hardware and very
 similar in configuration. 28 of these servers transfer at 30-40MB/s
 via Rsync, one host and the database servers transfer between
 2-10MB/s, even when run individually when no other hosts are being
 backed up.

 I've checked the IOwait on both the BackupPC server which is a 12
 disk raid 10, with 2TB of usable space, 15k RPM drives, it has 5% IO
 while the host has 0%.

 I've done simple sftp transfers and they transfer at 20-30MB/s with no
 issues.

 Here is a backuppc log on one of the problem hosts,

 Backup#  Type  Filled  Level  Start Date   Duration/mins  Age/days  Server Backup Path
 112      full  yes     0      2/5 19:06      62.3          81.6     /backuppc/pc/ras03/112
 140      full  yes     0      3/5 22:00      55.8          53.5     /backuppc/pc/ras03/140
 154      full  yes     0      3/20 04:35    465.2          39.2     /backuppc/pc/ras03/154
 161      full  yes     0

Re: [BackupPC-users] Perl upgrade breaks my BackupPC installation

2010-12-13 Thread Dan Johansson
On Monday 15 November 2010 20.05:31 Les Mikesell wrote:
 On 11/15/2010 12:06 PM, Dan Johansson wrote:
  On Monday 15 November 2010 18.49:47 Dan Johansson wrote:
  Hi, I am new to this list so please be kind if this has already been 
  answered...
 
  After updating perl from 5.8.8 to 5.12.2 my BackupPC installation stopped 
  working as suidperl is no longer provided.
  I have now compiled a wrapper program (as described here: 
  https://bugzilla.redhat.com/show_bug.cgi?id=611009#c3 )
  but it still won't work.
 
  Trying to access BackupPC_Admin via the web interface only gives me the 
  following:
  snip
  In the server-log I see the following:
  snip
  which is about the same as when I start the newly compiled C wrapper on the 
  command line:
  snip
  Any suggestions?
  Regards,
 
  Hi Again,
 
  Argh, I found the problem, the original perl-script still had the 's-bit' 
  set - after removing it everything works again.
 
  By the way, a big THANKS to the developers for a great product!
 
 
 Does the apache version in your distribution offer suEXEC support:
 http://httpd.apache.org/docs/2.2/suexec.html?
 
 That might work as well as a specific wrapper.
Yes, the distro (gentoo) offers suEXEC support, but I have not tried it (yet).
I'll try it some day when I have some more time on my hands.

Thanks for the tip.

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***

--
Lotusphere 2011
Register now for Lotusphere 2011 and learn how
to connect the dots, take your collaborative environment
to the next level, and enter the era of Social Business.
http://p.sf.net/sfu/lotusphere-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-07 Thread Dan Pritts
On Dec 7, 2010, at 2:58 AM, Tyler J. Wagner wrote:
 So, if you wanted to copy the entire /var/lib/backuppc directory to
 another filesystem, what commands would you use, for example?

Not exactly an answer to your question, but I would do this:

umount /var/lib/backuppc
dd if=/dev/onedisk of=/dev/someotherdisk bs=1M

this will overwrite /dev/someotherdisk with a complete copy of the 
/var/lib/backuppc filesystem. 

If it's not its own filesystem, well, you need to do the rest of the filesystem.

In practice, rsync -H is the reasonable way to do what you're after, EXCEPT 
that there are just too many hard links on a backuppc data store for this to 
work. 

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224


--
What happens now with your Lotus Notes apps - do you make another costly 
upgrade, or settle for being marooned without product support? Time to move
off Lotus Notes and onto the cloud with Force.com, apps are easier to build,
use, and manage than apps on traditional platforms. Sign up for the Lotus 
Notes Migration Kit to learn more. http://p.sf.net/sfu/salesforce-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Don't use rsync -H!! (was Re: copying the pool to a new filesystem?!)

2010-12-07 Thread Dan Pritts
On Dec 7, 2010, at 2:44 PM, Robin Lee Powell wrote:
 On Tue, Dec 07, 2010 at 02:18:51PM -0500, Dan Pritts wrote:
 umount /var/lib/backuppc
 dd if=/dev/onedisk of=/dev/someotherdisk bs=1M
 
 Only works if you have identical disks, which is hard when you've
 got a few TiB on a SAN.

The disks do not have to be identical.  The target just has to be as big or 
bigger than the source.

The target disk of course could be another SAN lun, a linux softraid of 
external disks, etc.

Currently, my offsite backup set consists of 2 pairs of disks containing my 
backuppc filesystem and other data.  With 2TB disks, a 10TB disk-based offsite 
would be doable, although it would clearly take quite a while to cut the disks. 
 

If I had that big a pool, I think I'd not be using a single backuppc instance, 
but that's just me.  I'm guessing it works well for you, I'm glad it does.

danno
--
What happens now with your Lotus Notes apps - do you make another costly 
upgrade, or settle for being marooned without product support? Time to move
off Lotus Notes and onto the cloud with Force.com, apps are easier to build,
use, and manage than apps on traditional platforms. Sign up for the Lotus 
Notes Migration Kit to learn more. http://p.sf.net/sfu/salesforce-d2d
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Perl upgrade breaks my BackupPC installation

2010-11-15 Thread Dan Johansson
Hi, I am new to this list so please be kind if this has already been answered...

After updating perl from 5.8.8 to 5.12.2 my BackupPC installation stopped 
working as suidperl is no longer provided.
I have now compiled a wrapper program (as described here: 
https://bugzilla.redhat.com/show_bug.cgi?id=611009#c3 )
but it still won't work.

Trying to access BackupPC_Admin via the web interface only gives me the 
following:
**
Internal Server Error

The server encountered an internal error or misconfiguration and was unable to 
complete your request.

Please contact the server administrator, r...@localhost and inform them of the 
time the error occurred, and anything you might have done that may have caused 
the error.

More information about this error may be available in the server error log.
**

In the server-log I see the following:
**
[Mon Nov 15 18:45:18 2010] [error] [client 192.168.1.11] YOU HAVEN'T DISABLED 
SET-ID SCRIPTS IN THE KERNEL YET!, referer: 
http://torsson.dmj.nu/cgi-bin/backuppc/BackupPC_Admin?action=view&type=LOG
[Mon Nov 15 18:45:18 2010] [error] [client 192.168.1.11] FIX YOUR KERNEL, PUT A 
C WRAPPER AROUND THIS SCRIPT, OR USE -u AND UNDUMP!, referer: 
http://torsson.dmj.nu/cgi-bin/backuppc/BackupPC_Admin?action=view&type=LOG
[Mon Nov 15 18:45:18 2010] [error] [client 192.168.1.11] Premature end of 
script headers: BackupPC_Admin, referer: 
http://torsson.dmj.nu/cgi-bin/backuppc/BackupPC_Admin?action=view&type=LOG
**

which is about the same as when I start the newly compiled C wrapper on the 
command line:
**
# /var/www/localhost/cgi-bin/backuppc/BackupPC_Admin
YOU HAVEN'T DISABLED SET-ID SCRIPTS IN THE KERNEL YET!
FIX YOUR KERNEL, PUT A C WRAPPER AROUND THIS SCRIPT, OR USE -u AND UNDUMP!
**

Any suggestions?

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***

--
Centralized Desktop Delivery: Dell and VMware Reference Architecture
Simplifying enterprise desktop deployment and management using
Dell EqualLogic storage and VMware View: A highly scalable, end-to-end
client virtualization framework. Read more!
http://p.sf.net/sfu/dell-eql-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Perl upgrade breaks my BackupPC installation

2010-11-15 Thread Dan Johansson
On Monday 15 November 2010 18.49:47 Dan Johansson wrote:
 Hi, I am new to this list so please be kind if this has already been 
 answered...
 
 After updating perl from 5.8.8 to 5.12.2 my BackupPC installation stopped 
 working as suidperl is no longer provided.
 I have now compiled a wrapper program (as described here: 
 https://bugzilla.redhat.com/show_bug.cgi?id=611009#c3 )
 but it still won't work.
 
 Trying to access BackupPC_Admin via the web interface only gives me the 
 following:
snip
 In the server-log I see the following:
snip 
 which is about the same as when I start the newly compiled C wrapper on the 
 command line:
snip
 Any suggestions?
 Regards,
 
Hi Again,

Argh, I found the problem, the original perl-script still had the 's-bit' set - 
after removing it everything works again.

By the way, a big THANKS to the developers for a great product!

Regards,
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***

--
Centralized Desktop Delivery: Dell and VMware Reference Architecture
Simplifying enterprise desktop deployment and management using
Dell EqualLogic storage and VMware View: A highly scalable, end-to-end
client virtualization framework. Read more!
http://p.sf.net/sfu/dell-eql-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Irritating problem

2010-10-25 Thread Dan Pritts

On Oct 25, 2010, at 3:39 PM, Rob Poe wrote:

 I'm having an irritating problem with BackupPC for a client of mine.
 
 They're still running some Netware (yes, I know ...), and the 
 File::RsyncP perl module is barfing on the Netware rsync.
 
 https://rt.cpan.org/Public/Bug/Display.html?id=61882


try something like

alarm(0) if ( $rs->{timeout} );
my $errcnt = 0;
until ( $line =~ m/^\@RSYNCD:\s*(\d+)/ ) {
    if ( $errcnt > 10 ) {
        die "unexpected response from rsync";
    }
    $line = $rs->getLine;   # the receiver was mangled by the list archiver; $rs assumed
    $errcnt++;
}

--
Nokia and ATT present the 2010 Calling All Innovators-North America contest
Create new apps  games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] internet traffic/rsync/backuppc

2010-10-07 Thread Dan Pritts
On Oct 1, 2010, at 10:54 AM, Wayne Walker wrote:
 BackupPC uses rsync as a transport.  Does it use any of rsync's smarts
 to prevent downloading unchanged files?  If I run 2 full backups back
 to back, does it pull the entire 90 GB both times?


Make sure to turn on rsync checksum-caching.  It does what you want but may not 
be on by default.   
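
Roughly, "turning it on" means passing the seed in the rsync argument lists in 
config.pl; a minimal sketch only, assuming the rsync XferMethod and a host-level 
override where the global lists are already loaded:

push(@{$Conf{RsyncArgs}},        '--checksum-seed=32761');   # enable checksum caching
push(@{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761');   # keep restores consistent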

RAID5 does suck but may or may not be your issue.  Note that 75MB/sec sustained 
writes to large files do not have much relevance to the kind of small random 
i/o that backuppc does.
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] General Praise.

2010-10-07 Thread Dan Pritts
I agree with your general praise, BackupPC works very well for us in our 
environment, which is maybe half your size.  Due to your large size, I'll leave 
you with one thought:

One concern I've always had with backuppc is what would happen if I had a 
disaster and had to restore everything from backuppc.  

It would take absolutely forever to do this, because backuppc has to seek the 
disks so much (due to the effects of all those hard links).

I haven't done enough testing of this.  

We do send copies of our backuppc drives offsite.  I've always assumed that if 
I had to restore, the first thing I'd do would be to duplicate the drives a couple 
of times so I could do multiple restores in parallel.

On Sep 21, 2010, at 12:57 AM, Robin Lee Powell wrote:

 
 We've got 3 machines with 3.2T, 4.2T, and 851G of backups, all
 gathered via rsync over ssh *across the network between distant data
 centers* (the backups are in a totally different location than the
 servers), each server with 150+ machines to backup every day... and
 it's actually working.
 
 I wasn't sure if BackupPC was going to do OK in Real Production Use
 (tm); I'm really impressed.
 
 Go you all.
 
 -Robin
 
 -- 
 http://singinst.org/ :  Our last, best hope for a fantastic future.
 Lojban (http://www.lojban.org/): The language in which this parrot
 is dead is ti poi spitaki cu morsi, but this sentence is false
 is na nei.   My personal page: http://www.digitalkingdom.org/rlp/
 
 --
 Start uncovering the many advantages of virtual appliances
 and start using them to simplify application deployment and
 accelerate your shift to cloud computing.
 http://p.sf.net/sfu/novell-sfdev2dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync / Tar for a WAN backup of OS X machines

2010-10-07 Thread Dan Pritts
 When tar makes full backups, does-it transfer everything even afer
 the first full backup ? Internet connection being really slow, this is
 not really an option...

I believe it will transfer everything for each full backup.

You can specify lists of files to exclude; I would suggest that you only back 
up home directories, but make sure you and your users understand all the 
implications of that.
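
A minimal sketch of what that could look like in the host's config.pl for an 
rsync-based setup (the share name and exclude patterns are illustrative for an 
OS X client, not prescriptive):

$Conf{RsyncShareName}     = ['/Users'];                      # back up home dirs only
$Conf{BackupFilesExclude} = {
    '/Users' => ['*/Library/Caches', '*/.Trash'],            # examples of things to skip
};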

 - If it does, is it the same with Rsync ? Or with Rsync only the first
 full backup transfers everything ?

only the first full, if you use rsync checksum-caching.

 - With rsync, is there some ressource forks / filename problems for OS
 X / Linux transfers ?

Probably - but it depends on whether your important data has resource forks or 
not.  My limited experience is that most data files don't use resource forks 
any more.  But MOST does not mean ALL.  :)

You may want to try this:

http://www.quesera.com/reynhout/misc/rsync+hfsmode/

I have not tried it with backuppc.  I do not know whether it works with 
checksum-caching.

 - If I use tar, and if tar does transfer everything with each full
 backup, is it possible to have only one yearly full backup, and 52
 weekly incremental backups ? does it sound like a good idea ?

It is possible.  Make sure you understand what each incremental backup backs 
up, though. 

Read the Incremental backup section of the faq at: 
http://backuppc.sourceforge.net/faq/BackupPC.html

In short, there are multiple levels, and a level-n incremental backs up what has 
changed since the most recent backup of a lower level.

you may want to do something like
yearly full
monthly level 1
weekly level 2
monday level 3
tuesday level 4
wednesday level 5
thursday level 3
friday level 4

In practice, I would watch the amount of data sent on incrementals and adjust 
the schedule accordingly.
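
In config.pl terms, that kind of plan is mostly a matter of $Conf{FullPeriod} and 
$Conf{IncrLevels}; a rough sketch only (the values are illustrative, and the 
monthly/weekly cadence above is approximated by the level cycle rather than 
scheduled exactly):

$Conf{FullPeriod}  = 364.97;                   # roughly one full per year
$Conf{IncrPeriod}  = 0.97;                     # attempt an incremental every day
$Conf{IncrLevels}  = [1, 2, 3, 4, 5, 3, 4];    # levels used for successive incrementals
$Conf{IncrKeepCnt} = 52;                       # keep about a year of incrementals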

If you want to get really tricky, search for towers of hanoi backup 
scheduling.  

 Many thanks for your help, and sorry for languages mistakes, english
 is not my primary language, doing my best ;)

Your English is 1000x better than my French (guessing from your name that is 
what you speak natively). 

danno
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Compress::Zlib

2010-10-01 Thread Dan Lavu
Oh, for anybody who might have an issue with Compress::Zlib, here is a
solution that worked for me. 

http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/backuppc-cant-find-compress-zlib-after-recent-update-on-cen-106280/


Dan Lavu
System Administrator
Office: 703.995.6052
Cell: 703.296.0645
dan.l...@rivermine.com
www.rivermine.com

This message may contain information that is proprietary to Rivermine,
its customers, and partners. 
If you have received it in error, please notify Rivermine immediately.


-Original Message-
From: Dan Lavu [mailto:dan.l...@rivermine.com] 
Sent: Monday, September 27, 2010 3:41 PM
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Compress::Zlib

So I patched my systems due to a recent Redhat/Centos 64bit root exploit
and now Compress::Zlib is broken.


###
2010-09-27 13:24:34 User dan.lavu requested backup of server01 (
server01)
2010-09-27 13:24:34 Backup failed on server01 (can't find
Compress::Zlib)
2010-09-27 14:00:00 Next wakeup is 2010-09-27 15:00:00

###

I've tried running CPAN and reinstalling Compress::Zlib, but to no avail;
it refuses to compile. I've checked out the latest DAG repos for an rpm
and did an upgrade, but it still cannot find the library.


###
Bareword Compress::Zlib::zlib_version not allowed while strict subs
in use at t/cz-14gzopen.t line 34.
Bareword ZLIB_VERSION not allowed while strict subs in use at
t/cz-14gzopen.t line 34.
Bareword Z_FINISH not allowed while strict subs in use at
t/cz-14gzopen.t line 108.
Bareword Z_STREAM_END not allowed while strict subs in use at
t/cz-14gzopen.t line 121.
Bareword Z_STREAM_END not allowed while strict subs in use at
t/cz-14gzopen.t line 122.
Bareword Z_STREAM_ERROR not allowed while strict subs in use at
t/cz-14gzopen.t line 469.
Bareword Z_STREAM_ERROR not allowed while strict subs in use at
t/cz-14gzopen.t line 484.
Execution of t/cz-14gzopen.t aborted due to compilation errors.
# Looks like you planned 255 tests but only ran 2.
# Looks like your test died just after 2.
t/cz-14gzopen...dubious

Test returned status 255 (wstat 65280, 0xff00) DIED. FAILED
tests 1, 3-255
Failed 254/255 tests, 0.39% okay
t/globmapperok

Failed Test   Stat Wstat Total Fail  Failed  List of Failed

---
t/000prereq.t2   512302   6.67%  3 30
t/01misc.t   1   256   1181   0.85%  3
t/cz-01version.t   255 65280 23 150.00%  1-2
t/cz-03zlib-v1.t   255 65280   456  907 198.90%  1 4-456
t/cz-05examples.t  255 65280??   ??   %  ??
t/cz-06gzsetp.t255 65280??   ??   %  ??
t/cz-08encoding.t  255 6528029   57 196.55%  1-29
t/cz-14gzopen.t255 65280   255  507 198.82%  1 3-255
3 tests and 29 subtests skipped.
Failed 8/85 test scripts, 90.59% okay. 742/52174 subtests failed, 98.58%
okay.
make: *** [test_dynamic] Error 255
  /usr/bin/make test -- NOT OK
Running make install
  make test had returned bad status, won't install without force

###
 
This is on CentOS 5.5, is anybody else having this issue?

--
Best Regards,

Dan Lavu
Systems Administrator
Rivermine Software
703 995 6052 (o)
703 296 0645 (c)


--
Start uncovering the many advantages of virtual appliances and start
using them to simplify application deployment and accelerate your shift
to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Compress::Zlib

2010-09-27 Thread Dan Lavu
So I patched my systems due to a recent Redhat/Centos 64bit root exploit
and now Compress::Zlib is broken.

###
2010-09-27 13:24:34 User dan.lavu requested backup of server01
( server01)
2010-09-27 13:24:34 Backup failed on server01 (can't find
Compress::Zlib)
2010-09-27 14:00:00 Next wakeup is 2010-09-27 15:00:00
###

I've tried running CPAN and reinstalling Compress::Zlib, but to no avail;
it refuses to compile. I've checked out the latest DAG repos for an rpm
and did an upgrade, but it still cannot find the library.

###
Bareword Compress::Zlib::zlib_version not allowed while strict subs
in use at t/cz-14gzopen.t line 34.
Bareword ZLIB_VERSION not allowed while strict subs in use at
t/cz-14gzopen.t line 34.
Bareword Z_FINISH not allowed while strict subs in use at
t/cz-14gzopen.t line 108.
Bareword Z_STREAM_END not allowed while strict subs in use at
t/cz-14gzopen.t line 121.
Bareword Z_STREAM_END not allowed while strict subs in use at
t/cz-14gzopen.t line 122.
Bareword Z_STREAM_ERROR not allowed while strict subs in use at
t/cz-14gzopen.t line 469.
Bareword Z_STREAM_ERROR not allowed while strict subs in use at
t/cz-14gzopen.t line 484.
Execution of t/cz-14gzopen.t aborted due to compilation errors.
# Looks like you planned 255 tests but only ran 2.
# Looks like your test died just after 2.
t/cz-14gzopen...dubious  
Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 1, 3-255
Failed 254/255 tests, 0.39% okay
t/globmapperok   
Failed Test   Stat Wstat Total Fail  Failed  List of Failed
---
t/000prereq.t2   512302   6.67%  3 30
t/01misc.t   1   256   1181   0.85%  3
t/cz-01version.t   255 65280 23 150.00%  1-2
t/cz-03zlib-v1.t   255 65280   456  907 198.90%  1 4-456
t/cz-05examples.t  255 65280??   ??   %  ??
t/cz-06gzsetp.t255 65280??   ??   %  ??
t/cz-08encoding.t  255 6528029   57 196.55%  1-29
t/cz-14gzopen.t255 65280   255  507 198.82%  1 3-255
3 tests and 29 subtests skipped.
Failed 8/85 test scripts, 90.59% okay. 742/52174 subtests failed, 98.58%
okay.
make: *** [test_dynamic] Error 255
  /usr/bin/make test -- NOT OK
Running make install
  make test had returned bad status, won't install without force
###
 
This is on CentOS 5.5, is anybody else having this issue?

-- 
Best Regards,

Dan Lavu
Systems Administrator
Rivermine Software
703 995 6052 (o)
703 296 0645 (c)

--
Start uncovering the many advantages of virtual appliances
and start using them to simplify application deployment and
accelerate your shift to cloud computing.
http://p.sf.net/sfu/novell-sfdev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC without a pool (but with ZFS) - a sane idea?

2010-08-06 Thread Dan Pritts
On Thu, Aug 05, 2010 at 12:29:50PM +0200, Clemens Kalb wrote:
 Is this a good idea, or will it break BackupPC at some point?

one issue - if you use the checksum-seed option to rsync, files
that already exist in the pool are not transferred, even when you
do a full backup.  If that's OK with you it's OK with me.  :)

If you are already committed to ZFS dedup for other reasons,
more power to you and I'm sure this makes good sense, but 
it doesn't seem to be worth it just for backuppc.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

--
This SF.net email is sponsored by 

Make an app they can't live without
Enter the BlackBerry Developer Challenge
http://p.sf.net/sfu/RIM-dev2dev 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Long backup via rsync over unstable connection

2010-07-15 Thread Dan Pritts
On Tue, Jul 13, 2010 at 12:12:08PM -0500, Les Mikesell wrote:
 On 7/13/2010 11:53 AM, Brian Mathis wrote:
  I am trying to use BPC over a WAN, and the connection I'm using seems
  to be unstable.  The backup is dying every few hours and giving a
  Child exited prematurely message.  I can see a definite pattern in
  my bandwidth usage going up and down.

I had similar issues that I was never able to solve to my satisfaction.

I worked around them by adding a few cnames for the client host and configuring
each of them as a backup client, each covering a specific portion of the filesystem.

If you do this, make sure you have one that includes everything by default and 
excludes the portions covered by the others; that way a new directory will still 
get backed up.
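
A rough sketch of what the split can look like in the per-host config files 
(host and share names are illustrative, not from an actual setup):

# hostA-home.pl -- cname that only backs up /home
$Conf{ClientNameAlias} = 'hostA';
$Conf{RsyncShareName}  = ['/home'];

# hostA-rest.pl -- catch-all cname that excludes what the other one covers
$Conf{ClientNameAlias}    = 'hostA';
$Conf{RsyncShareName}     = ['/'];
$Conf{BackupFilesExclude} = { '/' => ['/home', '/proc', '/sys'] };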

 
 I think incrementals are discarded on failures but the files copied in a 
 partial full are saved.  However saving them may not save that much time 
 since the full runs will verify the contents against the source anyway. 

it probably won't save time but it might save bandwidth, which might help
in his situation.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

ESCC/Internet2 Joint Techs
July 11-15, 2010 - Columbus, Ohio
http://events.internet2.edu/2010/jt-oarnet/

--
This SF.net email is sponsored by Sprint
What will you do first with EVO, the first 4G phone?
Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] NAS

2010-04-09 Thread dan
http://www.globalscaletechnologies.com/p-31-guruplug-server-standard.aspx


With a guruplug you can do SMB, NFS, or iSCSI with debian on the device and
connect your external USB drives via USB2.  Device has 1 Gigabit ethernet
and 2 USB2 connections for $99.  $129 gets you another Gigabit ethernet and
eSATA and a microSD slot.

You also get wifi built in if you REALLY want it to be wireless, but
filesharing over wireless isn't really that fun.

1.2Ghz ARM + 512MB DDR2 RAM.

You can put a hub on here if you like for more than 2 USB drives but you
would get slowdown from that.

Additionally, you can do RAID1 on two external drives or ATA over Ethernet
etc etc since you get a full debian on the device.

oh, and it uses just 5W of power.
--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] high load and stuck processes

2010-03-06 Thread dan
On Sat, Mar 6, 2010 at 1:07 AM, Eric Persson e...@persson.tm wrote:

 dan wrote:
  If you are using EXT3 or XFS then I suggest you use an external journal.
  get yourself a small SSD or a small 15RPM disk.  You could use a regular
  disk if you like but the faster the better.

 This would work with a fast usbstick as well? With quite good results I
 expect, and not much problems if it wears out, and perhaps cheaper than
 a ssd. Or could I put in 4 usbsticks and create a raidset from it, and
 store the journal on there? ;) Perhaps stupid, but worth a shot. Depends
 on what you're using the backupserver for i guess. SSD is probably more
 reliable for a bigger shop.


USB is the weak spot here.  You want something on SATA (or IDE, SCSI).  You
could certainly try it, but USB is pretty weak on IO performance, so I
wouldn't know if it would help or not.   If you try to raid up a few
sticks, make sure you put them on different controllers.  A modern USB
flash drive will hit the USB bus speed limit if you put two devices on one bus.
--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] high load and stuck processes

2010-03-06 Thread dan
Oh, just as a reminder, you can do an external journal on ext3 and ext4 as
well as xfs.

On Sat, Mar 6, 2010 at 9:37 AM, dan danden...@gmail.com wrote:



 On Sat, Mar 6, 2010 at 1:07 AM, Eric Persson e...@persson.tm wrote:

 dan wrote:
  If you are using EXT3 or XFS then I suggest you use an external journal.
  get yourself a small SSD or a small 15RPM disk.  You could use a regular
  disk if you like but the faster the better.

 This would work with a fast usbstick as well? With quite good results I
 expect, and not much problems if it wears out, and perhaps cheaper than
 a ssd. Or could I put in 4 usbsticks and create a raidset from it, and
 store the journal on there? ;) Perhaps stupid, but worth a shot. Depends
 on what you're using the backupserver for i guess. SSD is probably more
 reliable for a bigger shop.


 USB is the weak spot here.  You want something on SATA (or IDE, SCSI).  You
 could certainly try it but USB is pretty weak on IO performance so I
 wouldn't know if it would help performance or not.   Try to raid up a few
 sticks BUT make use you put them on different controllers.  A modern USB
 flash drive will see a USB bus speedlimit if you put two devices on 1 bus.

--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] high load and stuck processes

2010-03-05 Thread dan
If you are using EXT3 or XFS then I suggest you use an external journal.
get yourself a small SSD or a small 15RPM disk.  You could use a regular
disk if you like but the faster the better.

(EXT3)Destroy the journal and re-create it on the extra disk.
#unmount the backuppc disk
#on the journal device
mke2fs -O journal_dev -L name_label /dev/journal_disk
#drop old journal
tune2fs -O ^has_journal /dev/current_disk
#recreate the journal
tune2fs -o journal_data -j -J device=LABEL=name_label /dev/current_disk
-or-
tune2fs -o journal_data -j -J device=/dev/journal_disk /dev/current_disk
#remount the disk

#You can add a directory index to the filesystem for a small gain
tune2fs -O dir_index /dev/current_disk

#also, mount with noatime. (/etc/fstab)

/dev/sdb1   /share   ext3
defaults,noatime,errors=remount-ro 0   1


The external journal will cut your I/O load on the disk/disk set in half
because the filesystem no longer writes the journal on each transaction to
that drive.  It's a small amount of data but it still requires a disk seek
which is what hits the most for many small files (backuppc)




On Fri, Mar 5, 2010 at 7:01 AM, Josh Malone jmal...@nrao.edu wrote:


  It's hard to judge; but basically if there are a lot of processes
 waiting
  for I/O (a 'D' state in 'top'); try cutting down the number of
 concurrent
  backups. You'll have to judge for yourself what the best number for you
 is.
  It may be that things work fastest when there's a certain amount of disk
  contention; but no more and no less.

 Also - you need a good filesystem to handle lots (or even not so many) of
 backups. I recently switched from EXT3 to EXT4 and saw an order of magnitude
 (I kid you not, 10+ hours to 1) reduction in the backup time and system
 load. Unfortunately, I think this introduced some problems in the RHEL5
 ext4 code so I also switched from 32-bit RHEL5 to 64-bit -- that seems to
 have cleared up the problems.

 -Josh


 --
 
   Joshua Malone   Systems Administrator
 (jmal...@nrao.edu)NRAO Charlottesville
434-296-0263 www.cv.nrao.edu
434-249-5699 (mobile)
 BOFH excuse #202:

 kernel panic: write-only-memory (/dev/wom0) capacity
 exceeded.
 


 --
 Download Intel#174; Parallel Studio Eval
 Try the new software tools for yourself. Speed compiling, find bugs
 proactively, and fine-tune applications for parallel performance.
 See why Intel Parallel Studio got high marks during beta.
 http://p.sf.net/sfu/intel-sw-dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/

--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Remote mirror sanity checks

2010-02-28 Thread dan
On Sun, Feb 28, 2010 at 3:06 PM, Johannes H. Jensen
j...@pseudoberries.comwrote:

 Actually, one of the files in question is the fformat file of a .svn/
 directory which appears many times in one of the backed up
 filesystems. Since it will have the same md5 sum, BackupPC links it to
 the same file in cpool...

I highly doubt you are reaching the maximum number of hardlinks for a single
file but rather the maximum for the filesystem.  I can't find any real
numbers on the maximum number of hardlinks for the filesystem or I'd post
them.


 This works fine in the local server since backuppc is aware of the
 link limit, but fails when rsyncing to the remote server since new
 backups would increase this link count...


If the disk size is different on one machine than on the other, you might have
a different hardlink limit, as I suspect that it is based on the number of
inodes; but again, I didn't find any hard data on that floating around.

My suggestion is to split the backup load into smaller chunks on multiple
servers and either have a couple of remote servers *OR* a couple different
partitions/LVs on the remote system.

Either that, or do something else on the remote end like a freebsd/ZFS
storage server, which doesn't have hardlink limitations that are realistic to
hit in the next 20 years.
--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Remote mirror sanity checks

2010-02-27 Thread dan
Are you running into the actual hardlink limit or an inode limit? ext3 has a
hard coded hardlink limit, but hardlinks are also limited by available
inodes.  You can check your available inodes with

tune2fs -l /dev/disk | grep -e "Free inodes" -e "Inode count"

If you have very few or none left then this is your problem.  You can't
change the inode count on an existing ext3 filesystem as far as I know, but
if you re-create the filesystem you can do
mkfs.ext3 -N # /dev/disk
and change the # to suit your needs.  You should know the current number
from the tune2fs command above.  I would just take your current filesystem
usage (let's say 62% for the math) and then take the `current number` * 3 / .62,
so that you have enough inodes for today PLUS you are compensated for when
the disks are fuller.
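
Spelled out, the sizing rule looks like this (a sketch with made-up numbers, just 
to show the arithmetic):

# Assumed example: tune2fs -l reports 30,000,000 inodes and the filesystem is 62% used.
my $current_inodes = 30_000_000;
my $usage          = 0.62;
my $new_inodes     = int($current_inodes * 3 / $usage);
printf "mkfs.ext3 -N %d /dev/disk\n", $new_inodes;   # roughly 145 million inodes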



On Sat, Feb 27, 2010 at 6:12 AM, Johannes H. Jensen
j...@pseudoberries.comwrote:

 Thank you for your input,

 On Sat, Feb 27, 2010 at 3:38 AM, dan danden...@gmail.com wrote:
  if [ -e /var/lib/backuppc/testfile ];
 then rsync ;
 else echo uh oh!;
  fi
 
  should make sure that the filesystem is mounted.

 Yes, that's definitely a good idea. However it does not check to make
 sure that the integrity of the BackupPC pool is okay. If only a small
 subset of the backup pool gets removed/corrupted/etc, this would still
 get reflected in the remote mirror. I would prefer some
 BackupPC-oriented way of doing this (maybe BackupPC_serverMesg status
 info?) if someone could provide me with the details.

  you could also first do a try run
  rsync -avnH --delete /source /destination  /tmp/list
  then identify what will be deleted:
  cat /tmp/list|grep deleting|sed 's/deleting /\//g'
 
  now you have a list of everything that WOULD be deleted with the --delete
  option.  Run your normal sync and save this file for later
 
  You could save take this file list and send it to the remote system
 
  scp /tmp/list remotehost:/list-`date -%h%m%s`
 
  on remote system
 
  cat /list-* | xargs rm
 
  to delete the file list.  You could do this weekly or monthly or whenever
  you needed.

 That's a good idea. My original thought was to manually run the rsync
 with the --delete option once a week or so, but we've already run into
 filesystem (ext3) problems where we exceed the maximum links after a
 few days because we don't --delete... I guess we could use another
 filesystem with a higher limit instead...


 Best regards,

 Johannes H. Jensen



  On Fri, Feb 26, 2010 at 6:27 AM, Johannes H. Jensen 
 j...@pseudoberries.com
  wrote:
 
  Hello,
 
  We're currently syncing our local BackupPC pool to a remote server
  using rsync -aH /var/lib/backuppc/ remote:/backup/backuppc/
 
  This is executed inside a script which takes care of stopping BackupPC
  while rsync is running as well as logging and e-mail notification. The
  script nightly as a cronjob.
 
  This works fairly well, except it won't remove old backups from the
  remote server. Apart from using up unnecessary space, this has also
  caused problems like hitting the remote filesystems hard link limit.
 
  Now I'm aware of rsync's --delete option, but I find this very risky.
  If for some reason the local backup server fails and
  /var/lib/backuppc/ is somehow empty (disk fail etc), then --delete
  would cause rsync to remove *all* of the mirrored files on the remote
  server. This kind of ruins the whole point of having a remote
  mirror...
 
  So my question is then - how to make sure that the local backup pool
  is sane and up-to-date without risking loosing the entire remote pool?
 
  I have two ideas of which I'd love some input:
 
  1. Perform some sanity check before running rsync to ensure that the
  local backuppc directory is indeed healthy. How this sanity check
  should be performed I'm unsure of. Maybe check for existence of some
  file or examine the output of `BackupPC_serverMesg status info'?
 
  2. Run another instance of BackupPC on the remote server, using the
  same pc and hosts configuration as the local server but with
  $Conf{BackupsDisable} = 2 in the global config. This instance should
  then keep the remote pool clean (with BackupPC_trashClean and
  BackupPC_nightly), or am I mistaken? Of course, this instance also has
  to be stopped while rsyncing from the local server.
 
  If someone could provide some more info on how this can be done
  safely, it would be greatly appreciated!
 
 
  Best regards,
 
  Johannes H. Jensen
 
 
 
 --
  Download Intel#174; Parallel Studio Eval
  Try the new software tools for yourself. Speed compiling, find bugs
  proactively, and fine-tune applications for parallel performance.
  See why Intel Parallel Studio got high marks during beta.
  http://p.sf.net/sfu/intel-sw-dev
  ___
  BackupPC-users mailing list
  BackupPC-users@lists.sourceforge.net
  List:https://lists.sourceforge.net/lists/listinfo

Re: [BackupPC-users] Remote mirror sanity checks

2010-02-26 Thread dan
if [ -e /var/lib/backuppc/testfile ];
    then rsync ;
    else echo "uh oh!";
fi

should make sure that the filesystem is mounted.

You could also first do a dry run:
rsync -avnH --delete /source /destination > /tmp/list
then identify what will be deleted:
cat /tmp/list | grep deleting | sed 's/deleting /\//g'

Now you have a list of everything that WOULD be deleted with the --delete
option.  Run your normal sync and save this file for later.

You could then take this file list and send it to the remote system:

scp /tmp/list remotehost:/list-`date +%h%m%s`

and on the remote system run

cat /list-* | xargs rm

to delete the files in the list.  You could do this weekly or monthly or whenever
you needed.




On Fri, Feb 26, 2010 at 6:27 AM, Johannes H. Jensen
j...@pseudoberries.comwrote:

 Hello,

 We're currently syncing our local BackupPC pool to a remote server
 using rsync -aH /var/lib/backuppc/ remote:/backup/backuppc/

 This is executed inside a script which takes care of stopping BackupPC
 while rsync is running as well as logging and e-mail notification. The
 script nightly as a cronjob.

 This works fairly well, except it won't remove old backups from the
 remote server. Apart from using up unnecessary space, this has also
 caused problems like hitting the remote filesystems hard link limit.

 Now I'm aware of rsync's --delete option, but I find this very risky.
 If for some reason the local backup server fails and
 /var/lib/backuppc/ is somehow empty (disk fail etc), then --delete
 would cause rsync to remove *all* of the mirrored files on the remote
 server. This kind of ruins the whole point of having a remote
 mirror...

 So my question is then - how to make sure that the local backup pool
 is sane and up-to-date without risking loosing the entire remote pool?

 I have two ideas of which I'd love some input:

 1. Perform some sanity check before running rsync to ensure that the
 local backuppc directory is indeed healthy. How this sanity check
 should be performed I'm unsure of. Maybe check for existence of some
 file or examine the output of `BackupPC_serverMesg status info'?

 2. Run another instance of BackupPC on the remote server, using the
 same pc and hosts configuration as the local server but with
 $Conf{BackupsDisable} = 2 in the global config. This instance should
 then keep the remote pool clean (with BackupPC_trashClean and
 BackupPC_nightly), or am I mistaken? Of course, this instance also has
 to be stopped while rsyncing from the local server.

 If someone could provide some more info on how this can be done
 safely, it would be greatly appreciated!


 Best regards,

 Johannes H. Jensen


 --
 Download Intel#174; Parallel Studio Eval
 Try the new software tools for yourself. Speed compiling, find bugs
 proactively, and fine-tune applications for parallel performance.
 See why Intel Parallel Studio got high marks during beta.
 http://p.sf.net/sfu/intel-sw-dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/

--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Replacing dead backup drive

2010-02-25 Thread Dan Smisko

After an absence because of personal issues, I have replaced the drive.
The only thing on the drive was the backup data, configuration is in /etc.
The drive is writable by backuppc.

After I start the backuppc daemon the BackupPC_TrashClean process starts.
I started a full dump of the host test.  BackupPC_dump -f test started,
but nothing happened.  The failure message is:

unable to open/create /mnt/b1/pc/test/XferLOG.z

Do I have to run the perl configure.pl command in the distribution
source to set up the backup directories?  Will it remember my old answers?
Will it overwrite the configuration in /etc/backuppc/config.pl?
Thanks very much for any help you can provide.
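
(For what it's worth, a minimal sketch of re-creating an empty data layout by hand, 
with TopDir guessed from the error above and the usual backuppc user assumed; not a 
substitute for whatever configure.pl would do:)

use File::Path qw(mkpath);
my $topdir = '/mnt/b1';                              # assumption: TopDir taken from the XferLOG path
mkpath("$topdir/$_", 0, 0750) for qw(pool cpool pc trash);
my ($uid, $gid) = (getpwnam('backuppc'))[2, 3];      # assumption: daemon runs as 'backuppc'
chown $uid, $gid, $topdir, map { "$topdir/$_" } qw(pool cpool pc trash);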

Dan Smisko
Balanced Audio Technology
1300 First State Blvd.
Suite A
Wilmington, DE  19804
(302)999-8855
(302)999-8818  fax
d...@balanced.com



Les Mikesell wrote:
 On 1/22/2010 4:24 PM, Dan Smisko wrote:
   
 I have a BackupPC 3.0.0 system that has been running well for some time,
 but the backup drive
 has just failed.  I have some replacement hardware that I'm looking at,
 but what I
 would like to do immediately is to plug in a replacement drive and
 continue with the
 current setup.  At this point resuming backups is more important than
 trying to retrieve
 (possibly unrecoverable) old data.

 Is there is simple way to do that?  Can I just plug in a
 new drive in and everything will proceed merrily along?  If not, can I
 manually initialize
 the new drive? Failing that, what is the quickest procedure to get the
 current backup
 system back up and running.  I apologize if there's already a document
 for this, please
 point me to it.
 

 The answer to this is going to depend on how you installed originally 
 and where the configuration lives.  If the only things on the dead drive 
 were the pool/cpool/trash/pc directories you can probably re-create them 
 (or that might happen by itself as long as the mount point is writable 
 by backuppc).

 This would be a good opportunity to think about using a RAID mirror for 
 the replacement, though.

   

--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] experiences with very large pools?

2010-02-19 Thread dan
you would need to move up to 15K rpm drives to have a very large array and
the cost will grow exponentially trying to get such a large array.

As Les said, look at a zfs array with block level dedup.  I have a 3TB setup
right now and have been running a backup against a unix server and 2
linux servers in my main office here to see how the dedup works:

opensolaris:~$ zpool list
NAME  SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
rpool  74G  5.77G  68.2G 7%  1.00x  ONLINE  -
storage  3.06T   1.04T  2.02T 66%  19.03x  ONLINE  -

this is just rsync(3) pulling data over to a directory
/storage/host1 which is a zfs fileset off pool storage for each host.

my script is very simple at this point

zfs snapshot storage/host1@`date +%Y.%m.%d-%M.%S`
rsync -aHXA --exclude-from=/etc/backups/host1excludes.conf host1:/ /storage/host1

to build the pool and fileset
format #gives all available disks
zpool status will tell you what disks are already in pools
zpool create storage mirror disk1 disk2 disk3 etc etc spare disk11 cache
disk12 log disk13
#cache disk is a high RPM disk or SSD, basically a massive buffer for IO
caching,
#log is a transaction log and doesnt need a lot of size but IO is good so
high RPM or smaller SSD
#cache and log are optional and are mainly for performance improvements when
using slower storage drives like my 7200RPM SATA drives
zfs create -o dedup=on (or dedup=verify) -o compression=on storage/host1

dedup is very very good for writes BUT requires a big CPU.  Don't re-purpose
your old P3 for this.
compression is actually going to help your write performance assuming you
have a fast CPU.  it will reduce the IO load and zfs will re-order writes on
the fly.
dedup is all in-line so it reduces IO load for anything with common blocks.
it is also block level not file level so a large file with slight changes
will get deduped.
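
(To make "block level" concrete, a toy sketch of the bookkeeping only -- a fixed 
block size and an in-memory hash index; this is an illustration, not how ZFS 
implements it:)

use Digest::SHA qw(sha256_hex);
my %seen;
my ($written, $deduped) = (0, 0);
open my $fh, '<:raw', $ARGV[0] or die "open: $!";
while (read($fh, my $block, 128 * 1024)) {            # 128K "blocks", illustrative
    my $key = sha256_hex($block);
    if ($seen{$key}) { $deduped++ }                   # already stored: keep a reference only
    else             { $seen{$key} = 1; $written++ }  # new block: would be written out
}
printf "%d blocks written, %d deduplicated\n", $written, $deduped;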

dedup+compression really needs a fast dual core or quad core.

If you look at my zpool list above you can see my dedup at 19x and usage at
1.04T, which effectively means I'm getting about 19TB in 1TB worth of space.  My
servers have relatively few files that change and the large files get
appended to, so I really only store the changes.

snapshots are almost instant and can be browsed at
/storage/host1/.zfs/snapshot/ and are labeled by the @`date xxx` so i get
folders for the dates.  these are read only snapshots and can be shared via
samba or nfs.
zfs list -t snapshot

opensolaris:/storage/host1/.zfs/snapshot# zfs list -t snapshot
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/opensolaris@install   270M      -  3.26G  -
storage/host1@2010.02.19-48.33

zfs set sharesmb=on storage/host1@2010.02.19-48.33
-or-
zfs set sharenfs=on storage/host1@2010.02.19-48.33


If you don't want to go pure opensolaris then look at nexenta.  It is a
functional opensolaris-debian/ubuntu hybrid with ZFS and it has dedup.  It
does not currently share via iscsi, so keep that in mind.  I believe it also
uses a full samba package for samba shares, while opensolaris can use the
native CIFS server, which is faster than samba.

opensolaris can also join Active Directory, but you need to extend your AD
schema.  If you do, you can give a privileged user UID and GID mappings in AD
and then you can access the windows1/C$ shares.  I would create a backup
user and add them to restricted groups in GP to be local administrators on
the machines (but not domain admins).  You would probably want to figure out
how to do a VSS snapshot and rsync that over instead of the active filesystem,
because you will get tons of file lock problems if you don't.

good luck




On Fri, Feb 19, 2010 at 6:51 AM, Les Mikesell lesmikes...@gmail.com wrote:

 Ralf Gross wrote:
 
  I think I've to look for a different solution, I just can't imagine a
  pool with  10 TB.

 Backuppc's usual scaling issues are with the number of files/links more
 than
 total size, so the problems may be different when you work with huge files.
  I
 thought someone had posted here about using nfs with a common archive and
 several servers running the backups but I've forgotten the details about
 how he
 avoided conflicts and managed it.  Maybe this would be the place to look at
 opensolaris with zfs's new block-level de-dup and a simpler rsync copy.


--
Download Intel#174; Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] long rsync backup time

2010-01-31 Thread dan

 Any thoughts on why it is taking so long?


What are the specs on the backup server and the client? CPU and RAM
specifically.

What is their connectivity?

Is the 5GB small files, large files, or a mix?

What is the system load on the backup server and the client?
--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experience hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Replacing dead backup drive

2010-01-26 Thread Dan Smisko

Yes, only the backup data (pool, cpool, etc) was on the dead drive.  The 
config is in /etc/BackupPC.
I guess I will try to re-create the backup directories and try another 
backup.

Certainly a RAID is worth considering, but the next question is what to 
put on a RAID drive.
A dead root drive would not be much fun either.  My preference would be 
software RAID
for portability, but I don't know about performance.  Does software RAID 
work on a Linux
root drive?

Thanks again for your help.

Dan Smisko



Les Mikesell wrote:

The answer to this is going to depend on how you installed originally 
and where the configuration lives.  If the only things on the dead drive 
were the pool/cpool/trash/pc directories you can probably re-create them 
(or that might happen by itself as long as the mount point is writable 
by backuppc).

This would be a good opportunity to think about using a RAID mirror for 
the replacement, though.


On 1/22/2010 4:24 PM, Dan Smisko wrote:
 I have a BackupPC 3.0.0 system that has been running well for some time,
 but the backup drive
 has just failed.  I have some replacement hardware that I'm looking at,
 but what I
 would like to do immediately is to plug in a replacement drive and
 continue with the
 current setup.  At this point resuming backups is more important than
 trying to retrieve
 (possibly unrecoverable) old data.

 Is there is simple way to do that?  Can I just plug in a
 new drive in and everything will proceed merrily along?  If not, can I
 manually initialize
 the new drive? Failing that, what is the quickest procedure to get the
 current backup
 system back up and running.  I apologize if there's already a document
 for this, please
 point me to it.
 

   



[BackupPC-users] Replacing dead backup drive

2010-01-22 Thread Dan Smisko

I have a BackupPC 3.0.0 system that has been running well for some time, 
but the backup drive
has just failed.  I have some replacement hardware that I'm looking at, 
but what I
would like to do immediately is to plug in a replacement drive and 
continue with the
current setup.  At this point resuming backups is more important than 
trying to retrieve
(possibly unrecoverable) old data.

Is there a simple way to do that?  Can I just plug a new drive in and
everything will proceed merrily along?  If not, can I manually initialize
the new drive?  Failing that, what is the quickest procedure to get the
current backup system back up and running?  I apologize if there's already
a document for this, please point me to it.

Thanks very much.

-- 
Dan Smisko
Balanced Audio Technology
1300 First State Blvd.
Suite A
Wilmington, DE  19804
(302)999-8855
(302)999-8818  fax
d...@balanced.com




Re: [BackupPC-users] How big is your backup?

2009-12-31 Thread dan
On Thu, Dec 31, 2009 at 8:36 AM, Peter Vratny usenet2...@server1.at wrote:

 mark k wrote:
  Agreed sas drives are the way to go, just built a backup server with
  10 300gb sas running in a raid 50, going to hopefully replace 2 backup
  servers that were using SATA storage.

 This is just a question of price. We are currently running 3
 Backup-Servers, which is way cheaper than building one with SAS drives.


I agree.  SAS drives at 15,000 rpm are very nice but VERY expensive.  Servers
and disks rarely scale perfectly with more MHz, more RAM, or more disks.
There is a point of diminishing returns where, in my opinion, you should get
a second server.

I think we are nearing the need for a major change in how this data is
managed.  Disks are not getting faster at the rate that data is growing.
Networks are also not able to keep up with the ever-growing need to store
data.  In-line deduplication is looking pretty promising, as a way to save
space obviously, but even more as a way to reduce I/O.

With ZFS supporting deduplication on *solaris platforms and Chris Mason
planning deduplication in Btrfs for Linux (
https://ldn.linuxfoundation.org/blog-entry/a-conversation-with-chris-mason-btrfs-next-generation-file-system-linux)
this could really be a solution to today's backup needs.

With block-level, in-line deduplication in the filesystem, consider the
following: when a file is transferred, a data block is only written if that
block is unique.  If the block is the same as another block already in the
filesystem, a pointer to that block is written rather than the whole block.
With disk caching this can radically reduce I/O when a file is very similar
to another file on the filesystem.  This is all done essentially with hashes
of each block.  These would be very large hash tables, but they would be
sorted and indexed for quick lookup, since each entry is just a hash and a
pointer and the hash is of a known length, which makes indexing pretty quick.
Some fancy algorithms could be used to identify when a file has a very low
deduplication rate, so the dedupe process could be skipped and the file
tagged for dedupe later.
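
To make that concrete, here is a toy userspace sketch of the idea (purely
illustrative; this is not how ZFS or Btrfs implement it, and the block size,
paths and store layout are made up):

  #!/bin/sh
  # store only the unique 128K blocks of a file under ./store, keyed by hash,
  # and record the sequence of block hashes in a manifest
  BLOCKSIZE=131072
  STORE=./store
  f="$1"
  mkdir -p "$STORE"
  : > "$f.manifest"
  split -b $BLOCKSIZE -d "$f" /tmp/blk.
  for blk in /tmp/blk.*; do
      hash=$(sha256sum "$blk" | awk '{print $1}')
      # write the block only if this hash has never been seen before
      [ -e "$STORE/$hash" ] || cp "$blk" "$STORE/$hash"
      echo "$hash" >> "$f.manifest"
      rm "$blk"
  done

A real filesystem keeps that hash table in memory and on disk and does the
lookup on every write, which is exactly where the RAM and CPU cost comes from.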


[BackupPC-users] zfs dedupe test

2009-12-21 Thread dan
an update to my zfs dedupe test.

opensolaris-dev build 129 (first build to enable dedupe)

I created a zfs volume that I export over iSCSI, then ran a gigabit crossover
cable to my test backuppc server running Ubuntu 9.04.  I mount the iSCSI
volume at /var/lib/backuppc.

Let's just say that this has to be seen to be believed.  ZFS dedupe totally
pegs my CPU and used up the 4GB of RAM in my test SAN box, BUT it
drastically improves write performance for files with the same blocks as
other files (which is a lot).  The difference between a Ubuntu 8.04 and 9.04
disk image is only about 30MB with block-level dedupe.  Because the system
does this dedupe in-stream, it doesn't have to write the 'extra' data and
can complete the process in a hurry.  I can copy a disk image of Ubuntu 9.04
in literally 2 seconds; because the dedupe is done online it almost feels
like creating a hardlink, at the expense of pegging the CPU.  I have a dual
core 2GHz Opteron and I can see the need for a quad core Xeon, but the
benefits look to be huge.

If you have some hardware lying around I encourage you to test out zfs
dedupe.  Nexenta Core 3 alpha2 and OpenSolaris b129 both have it and it is
very nice.
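
For anyone who wants to repeat the experiment, a sketch of that kind of
setup (pool/zvol names, sizes and the IP are placeholders; newer builds
export iSCSI through COMSTAR rather than the old shareiscsi property):

  # on the OpenSolaris/Nexenta box: a deduped, compressed zvol over iSCSI
  zpool create tank mirror c0t0d0 c0t1d0
  zfs create -V 500G tank/backuppc
  zfs set dedup=on tank/backuppc
  zfs set compression=on tank/backuppc
  zfs set shareiscsi=on tank/backuppc

  # on the Ubuntu backuppc server (open-iscsi)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node --login
  mkfs.ext4 /dev/sdX        # whatever device the new target shows up as
  mount /dev/sdX /var/lib/backuppc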

I also tested the OpenSolaris CIFS server joined to AD sharing a deduped
disk, and that works very well also; your AD users and ACLs are handled well
by OpenSolaris.


Re: [BackupPC-users] Backing up many, many files over a medium speed link - kills CPU and fails halfway. Any help?

2009-11-25 Thread dan
With 370,000 files rsync should use roughly 370,000 * 100B = 35MB (+/- 10%)
on each side.  How fast is your CPU?  Are you sure that you can process the
checksums fast enough?

Are you compressing the rsync transfer, and if so at what compression level?
rsync compression at level 3 is only slightly worse than level 9, but uses a
tiny fraction of the CPU time that level 9 does.

Also, are you firing off both backups at the same time?  Depending on your
hardware, you could be adding very large I/O delays from disk seeks.  I/O
performance is a big factor in system load; slow I/O can give you a
two-digit system load while leaving plenty of RAM and CPU untouched.

I would definitely try to break down the shares a bit.  You might also
consider not doing a checksum on older files that are just in storage, but
doing checksums on newer files that may be altered.  Most systems will
always update the mtime of a file when it is altered, and unless you have
something special going on, the mtime is a reliable and easy check.  Even if
the file was not actually changed but was opened and re-saved as-is, the
mtime will change (again, there are some special circumstances).

I think the most likely cause here is the checksumming eating up RAM as more
and more (and larger) files are checksummed in parallel, spilling over into
swap.  Swap then adds I/O transactions to the system, which in the best case
adds work for the CPU and disk controller, and in the worst case the swap is
on the same physical drive as TOP_DIR.  It then becomes a traffic jam where
every extra file takes more RAM, therefore more swap, which slows down the
existing checksum processes because they have to wait to write, which keeps
more in the pipe, which causes more memory to be used = more swap = more
I/O, and so on.

Try just trusting mtime on the files and see what happens, something like
the rsync flags sketched below.  370,000 isn't that much for backuppc or
rsync.  I don't see issues until around 1 million files, and at that point I
break the backup set down across separate machines.
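
Plain rsync for illustration (not BackupPC's exact internal invocation;
BackupPC 3.x drives rsync through File::RsyncP and $Conf{RsyncArgs}):

  # default rsync already trusts size+mtime; light compression is cheap
  rsync -a --compress --compress-level=3 /data/ backuppc@server:/backups/data/

  # -c/--checksum forces a full read and checksum of every file on both
  # sides and is what burns CPU, RAM and I/O; only add it if you really
  # distrust the mtimes
  # rsync -a -c --compress-level=3 /data/ backuppc@server:/backups/data/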




On Tue, Nov 24, 2009 at 8:42 PM, GB pse...@gmail.com wrote:

 Thanks Chris. I will give it a shot and see if I can make it behave in any
 way... was hoping for a bit of a magic bullet, I suppose :)


 On Tue, Nov 24, 2009 at 10:41 PM, Chris Bennett ch...@ceegeebee.comwrote:

 Hi,

  Thanks for the reply. The data is, in fact, all time in the sense that
 it
  goes back years, but it's sorted by filename, rather than date; it's
  essentially equivalent to how BackupPC stores data in cpool/, i.e. the
 first
  3 characters of the filename will generate 3 levels of subdirectories.
 The
  best I was able to do, to date, was to make 10 shares, 1-9, and back up
 10
  separate backup trees. But that was before, when I had about 100k
 files... I
  tried this recently, and seem to have made it go under. So I guess I'd
 need
  to make TWO levels of shares, so 1/0-1/9, 2/0-2/9, etc. Then, maybe,
 once I
  go through the full loop, it'll be easier to perform future incrementals
  since the delta will be small.

 Yeah, I've been able to archive large pools of files that have aged,
 so that backuppc doesn't have to consider such a large filelist.  I'm
 not too sure on the mechanics of backuppc and overhead - e.g. what
 amount of work does backuppc perform to perform a full and
 incremental.. how much memory is consumed per considered file.  I
 expect someone else can more succintly answer these kind of questions
 to help you build a more scalable configuration.

  My BackupPC box doesn't swap too much, it doesn't behave like it's under
  massive load at all; but then again, I think my IO subsystem (Dell Perc6
 +
  4x WD Greens in RAID5) hopefully outperforms the speed of the link+any
  overhead :) I haven't tried stracing rsync on the remote server. Any
  suggestions on how to use it? I've never tried it before.

 Get the pid of your rsync process on the source of data.

 Then perform something like
  # -s3000 specifies 3000 characters printed per system call
  strace -p pid -s3000

 This will give you insight into the open/stat/read/close cycle that
 rsync will be doing when copying data.  I would expect it to be
 cycling faster than you can read, although in the case where I've seen
 high swap activity, you'll see batches of the cycle followed by
 pauses.

 Similarly, running:
  vmstat 1

 in another console and looking at the bi/bo columns that represent
 blocks in/out helps you to know whether swap is being heavily used.

 Good luck and let me know if you find a good solution to your problem.

 Regards,

 Chris Bennett
 cgb





Re: [BackupPC-users] BackupPC of an image of a PC

2009-11-25 Thread dan
FYI, try Microsoft's ImageX.  It's free and does shadow-copy based disk
images.  It can also be used like Ghost used to be, in that you can put the
image on bootable media and run the command-line version to re-image a PC
from CD/DVD, which is nice for remote bare-metal restores.
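
For reference, basic ImageX usage looks roughly like this (paths and image
names are just examples):

  rem capture a volume into a WIM image
  imagex /capture C: D:\images\base.wim "Base image"

  rem apply image number 1 back onto a blank volume
  imagex /apply D:\images\base.wim 1 C: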

On Wed, Nov 25, 2009 at 9:06 AM, Boniforti Flavio fla...@piramide.chwrote:

 Hello list,

 I'm back again with a new question...

 I'd like to create an image of my Windows PC (let's assume with Acronis
 TrueImage, or any similar software) and store it on my external USB
 drive. This scheduled operation would be done *every day* and would
 overwrite the same image file. Now, assuming the above statements, am I
 right if using BackupPC every night on that PC (grabbing the image from
 the USB drive), I would:

 A) only transfer the modified bytes/parts of that huge (many Gbytes)
 image file?
 B) get as a result, a daily backup without any big file transfers?

 Any thoughts, comments are very welcome...

 Thanks,
 Flavio Boniforti

 PIRAMIDE INFORMATICA SAGL
 Via Ballerini 21
 6600 Locarno
 Switzerland
 Phone: +41 91 751 68 81
 Fax: +41 91 751 69 14
 URL: http://www.piramide.ch
 E-mail: fla...@piramide.ch




Re: [BackupPC-users] ZFS dedupe

2009-11-25 Thread dan
Don't know how many of you follow Solaris/Nexenta/ZFS, but here is a fresh
tidbit:

The beta of Nexenta Core 3 can install backuppc out of the box, and the
release will have ZFS pool version 21, which includes online dedupe as well
as 'send' dedupe, so you can mirror systems at the block level while only
sending the changes.


Re: [BackupPC-users] Fwd: Deleting all copies of a file from pool

2009-11-22 Thread dan
I have been successfully deleting files from backups for some time.  I use
a basic `find dir -iname name -exec rm {} \;` to hunt down files and delete
them.  I have never had any problems recovering the remaining data
afterwards, as the restore process doesn't go off a file list but just
processes each file in the directory.


To delete all mp3 files from the pc directories, I like to use regular
expressions as it reduces the chance of getting the wrong files:

find /path/to/backuppc/pc/ -regex './.*\.mp3$' -exec rm {} \;

Be careful with find and regex, as you need the ./ at the front.  ./ in this
circumstance specifically means 'begins with the starting directory'; then
.* is any character followed by any number of any characters; \.mp3 says to
use a literal period (instead of the any-character wildcard that . normally
represents) followed by mp3; and the $ means the end of the line, so you
won't match a file like
/path/path/info_on.mp3.files.txt
but would match
/path/path/info_on.mp3

You don't need to do anything with the cpool/pool, as BackupPC_nightly will
clean that up.  Also, the file names there are mangled beyond recognition,
so find wouldn't help you there anyway.

I use -or and search for video files, mp3 files and temp files in one pass.
Abbreviated version:
find /path/to/pc/ -regex './.*\.mp3$' -or -regex './.*\.avi$' -or -regex
'./.*\.mpg$' -or -regex './.*\.mpeg$' -exec rm {} \;

I run it this way because find then only walks the filesystem once, where
running these separately would run it over and over.

Also, you need to be aware of the filename mangling that is done, so you
cannot search for /myfile.ext but instead must search for /.myfile.ext or
/.*myfile.ext (with regex), or /*myfile.ext if you are using the standard
wildcard *.
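
One cheap extra precaution: do a dry run that only prints matches before
letting find delete anything (the TopDir path here is just an example):

  # dry run: list what would be removed
  find /var/lib/backuppc/pc/ -regex './.*\.mp3$' -print

  # same expression with the delete, once the list looks right
  find /var/lib/backuppc/pc/ -regex './.*\.mp3$' -exec rm {} \;

  # BackupPC_nightly will then reclaim the now-unreferenced pool files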

good luck.





On Sun, Nov 22, 2009 at 11:53 AM, Justin Guenther jguent...@gmail.comwrote:

 [reposted from backuppc-devel due to lack of response]

 Hello all,

 I have come across this problem frequently -- backups are running
 great, then one day I notice that backups are bigger than they should
 be. I look into it, and a large file was copied somewhere that got
 backed up but shouldn't have. I've searched around and can't find an
 answer to this one, forgive me if I overlooked a solution.

 I can add an exclusion and wait for the file to expire from the pool,
 but for my configuration this takes months.

 1) is there a current (safe) way to delete a file from the pool/cpool
 and from all backups
 2) if not, what would this take? I would have to delete the entry in
 the pool/cpool, and the link(s) to it from the individual pc/backup#/
 subdirectories, but would this be all I would need to do? I am not too
 worried about updating backup stats (size, compressed size, etc) but
 for completeness sake it would be nice to know exactly where I would
 need to look to update that information.

 Thanks,
 Justin Guenther




[BackupPC-users] ZFS dedupe

2009-11-16 Thread dan
Any thoughts on ZFS and deduplication?  It's coming in build 128 of
OpenSolaris and also in Nexenta Core 3.

It does block-level, online deduplication, but apparently eats up tons of
RAM and needs some CPU power.

It's a pretty interesting thought to rsync data over to a pc/hostname/date/
folder and have the filesystem dedupe it.  That does a lot of backuppc's
logic in the filesystem.


Re: [BackupPC-users] Limit user space

2009-11-05 Thread Dan Pritts
On Thu, Nov 05, 2009 at 08:20:54PM +0100, Tino Schwarze wrote:
  I was wondering if is it possibile to limit the backup space of an
  user like 30Gb or something like that
 
 This is not currently possible with BackupPC. It wouldn't fit the
 pooling scheme either - how would you count a file which is shared among
 5 backups and maybe 3 users?

Probably the requestor really wants to limit the space used by files
ONLY on that one user's computer; pooled across N backups doesn't
matter.  

You're certainly right that it would not be easy to implement in backuppc
even with that caveat.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

SC09: Visit the Internet2 Booth #1355
November 14-20, 2009
Portland, Oregon Convention Center
http://events.internet2.edu/2009/sc09/



Re: [BackupPC-users] entities using BackupPC

2009-10-30 Thread Dan Pritts
On Tue, Oct 27, 2009 at 03:06:41PM -0500, Les Mikesell wrote:
 I've never seen a tape read past an error regardless of what was on it.

Assume one bad tape out of 5 in your backup set.

With a conventional tape backup system that archives on a file by file
basis, you'll be able to read the data on the rest of the tapes.

OTOH, with a tape backup of a filesystem, god knows what you have.
Certainly you'll have some data, but hope your filesystem code is good
at ignoring errors without panicking.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

SC09: Visit the Internet2 Booth #1355
November 14-20, 2009
Portland, Oregon Convention Center
http://events.internet2.edu/2009/sc09/



Re: [BackupPC-users] Backup backuppc

2009-10-30 Thread Dan Pritts
On Wed, Oct 28, 2009 at 09:25:31PM -0600, dan wrote:
 You can tar up the whole pool directory and put it on an external drive
 pretty easily.  

Serious question:  of what value is a backup of (only) the pool directory?

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

SC09: Visit the Internet2 Booth #1355
November 14-20, 2009
Portland, Oregon Convention Center
http://events.internet2.edu/2009/sc09/



Re: [BackupPC-users] Backup backuppc

2009-10-30 Thread Dan Pritts
 I have a setup with /var/lib/backuppc mounted to a 1 TB Firewire 800
 drive. From a standpoint local to BackupPC, it's transparent. With USB
 you'll be limited to 480 Mbps minus overhead, but that's about the
 only real consideration I'm aware of--and that's still 60 MB/s
 theoretical, which you probably won't reach, anyway.

In one throughput test, I got about 25MB/sec over USB2 and 37MB/sec over
eSATA.  Same disk drive; I think it was a 1.5TB Seagate.  I don't remember
what USB bridge I used, and I'm sure that matters.  It might have been one
of those cheap external dongles.

I would imagine FireWire would do about as well as eSATA; it is pretty
efficient compared to USB.

  Would like some advise on the best way to backup backuppc. I have a
  1TB  USB drive that I would like to copy all of our backups to. Has
  anyone done this? How easy is it to roll back from USB?

All you need to do is unmount your backuppc filesystem, and dd from the
raw device containing the filesystem to your new device.  
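
Something like this, roughly (device names are examples; double-check them
and make sure neither filesystem is mounted):

  umount /var/lib/backuppc
  dd if=/dev/sdb1 of=/dev/sdc1 bs=4M
  fsck -f /dev/sdc1          # sanity-check the copy before relying on it

The target partition has to be at least as large as the source for this to
work.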

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

SC09: Visit the Internet2 Booth #1355
November 14-20, 2009
Portland, Oregon Convention Center
http://events.internet2.edu/2009/sc09/



Re: [BackupPC-users] entities using BackupPC

2009-10-28 Thread dan
results of a twisted and desperate mind :)

On Wed, Oct 28, 2009 at 2:45 AM, Tyler J. Wagner ty...@tolaris.com wrote:

 On Wednesday 28 October 2009 02:03:52 dan wrote:
The only issue is that it cannot remove existing
   files in the restore target directory (think rsync -a --delete), so
 be
   sure to restore to a basic OS install with nothing else on it.
  
  I have gotten around this by touching each file in the target, doing the
  restore (which restores timestamps), and the searching for files that
 match
  the timestamp of when I touched the files and deleting those files. ( I
  actually move them to a holding directory just to be sure)  This works
 well
  and is pretty efficient because rsync already knows the timestamp as it
 is
  part of the algorithm.  There is a little i/o overhead but it is
 completely
  hidden by i/o on the server side.

 That's brilliant, Dan.  I'll try that with an unusual specific.  Unix time
 100, here we come!

 Regards,
 Tyler

 --
 Beware of altruism. It is based on self-deception, the root of all evil.
   -- Lazarus Long, Time Enough for Love, by Robert A. Heinlein




Re: [BackupPC-users] Backup backuppc

2009-10-28 Thread dan
You can tar up the whole pool directory and put it on an external drive
pretty easily.  Just make sure that backuppc is not running when you do
this, OR do an LVM snapshot and then back up the snapshot, roughly as
sketched below.
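
A rough sketch of the snapshot variant (VG/LV names, snapshot size and
mount points are examples):

  lvcreate -L 10G -s -n backuppc_snap /dev/vg0/backuppc
  mount -o ro /dev/vg0/backuppc_snap /mnt/snap
  # tar keeps the pool's hardlinks as long as everything goes into one archive
  tar -czf /mnt/usb/backuppc-pool.tar.gz -C /mnt/snap .
  umount /mnt/snap
  lvremove -f /dev/vg0/backuppc_snap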

I have been using rsync to sync two servers for a long time, but have
recently started experimenting with DRBD for the remote sync.  Are you
interested in running two machines, or do you just want to archive the data?

If you just want an archive of the data, maybe consider just pulling off
periodic full backups to your alternate media.  You really have two options:
run an archive or restore and put the resulting tar or zip archive on the
external storage, or copy the specific backups out of the pool/cpool/pc
directories onto the media host by host.

Another option would be to create an md RAID1 mirror with your primary data
as one member and your USB device as another.  You can add the USB device to
the array long enough for it to sync and then remove it for offsite storage.
You can then re-sync it any time you like, or even rotate in other disks.
This can be time consuming, but I see md RAID1 rebuilds running at 30MB/s+
on very modest hardware, so you could do a terabyte in 8 or 10 hours and the
array can remain active (though slow).  My 4-drive array rebuilds at about
80MB/s, and it is just 4 Seagate 7200.11 drives in a RAID10 (two RAID1
mirror pairs), and I *can* add a single USB disk to the array, though I have
only done this in testing and it isn't my standard practice.  I archive to
DVD periodically.

On Wed, Oct 28, 2009 at 12:53 AM, Chris Owen
chriso...@eigersecurities.comwrote:

 Hey

 Would like some advise on the best way to backup backuppc. I have a
 1TB  USB drive that I would like to copy all of our backups to. Has
 anyone done this? How easy is it to roll back from USB?

 Many Thanks

 Chris Owen
 Sent from my iPhone




Re: [BackupPC-users] entities using BackupPC

2009-10-27 Thread Dan Pritts
On Tue, Oct 27, 2009 at 11:18:15AM -0500, Adam Williams wrote:
 I've been running BackupPC in a test environment here, and so far it has 
 been working very well.  I'd like to roll it out into production, but I 
 wanted to know, are there any businesses or government entities using 
 BackupPC in live environments?   Can you give a range of employees or 

Internet2 uses backuppc to back up Linux and Solaris servers with rsync
as the transport.

We built a new backup server 2 or 3 months ago.  On it:

There are 68 hosts that have been backed up, for a total of:

* 255 full backups of total size 3231.73GB (prior to pooling and 
compression),
* 1967 incr backups of total size 2006.21GB (prior to pooling and 
compression). 

We have plenty of critical data stored in backuppc. 

Whenever we've needed to restore we've been successful.  We also do
periodic restore tests, of course.

It has been mostly reliable.  We had a particular backup client that had
a lot of failures; I ended up splitting its backups into two pieces and
that kludged the issue into submission.  It still has some failures, but
not enough that backuppc gives up retrying.  BackupPC could have been more
robust about dealing with these failures (eg, retrying smaller pieces).
I haven't complained on the list since I wasn't in a position to fix it myself.

We chose not to use it for our Windows servers (don't want SMB shares,
rsyncd seems kludgy, etc).  We have access to a backup service that we
use for windows clients.

We chose not to use it for our Mac laptops because they are mobile and
we don't do any dynamic DNS or similar, so we have a host lookup issue.
I believe it would have been solvable with moderate development effort but
management chose to go another direction.  Note that with Mac clients,
you have a resource fork and metadata issue.  You have this issue with
lots of other backup software too.


The big operational issue with backuppc is offsite backups.  We do
this by offsiting disks, as described by various folks on the list.

I would prefer a good network replication option.

Alternately, I'd also prefer to have a better way of offsiting by
tape (tapes are more physically durable than disk).  You can put a
raw filesystem image on tape, but I worry what happens if you have a minor
error on one tape.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

SC09: Visit the Internet2 Booth #1355
November 14-20, 2009
Portland, Oregon Convention Center
http://events.internet2.edu/2009/sc09/



Re: [BackupPC-users] entities using BackupPC

2009-10-27 Thread dan

  The only issue is that it cannot remove existing
 files in the restore target directory (think rsync -a --delete), so be
 sure
 to restore to a basic OS install with nothing else on it.

I have gotten around this by touching each file in the target, doing the
restore (which restores timestamps), and then searching for files that still
carry the timestamp of when I touched them and deleting those files (I
actually move them to a holding directory just to be sure).  A sketch of the
idea is below.  This works well and is pretty efficient, because rsync
already knows the timestamps as they are part of the algorithm.  There is a
little I/O overhead but it is completely hidden by I/O on the server side.
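
Roughly like this (paths are examples, the 1970 stamp is just a value no
real file should carry, and it relies on GNU find; test on scratch data
first):

  # 1. stamp every existing file in the restore target with an ancient mtime
  find /restore/target -type f -exec touch -t 197001010100 {} +

  # 2. run the BackupPC restore into /restore/target; restored files get
  #    their original mtimes back, so only files absent from the backup
  #    keep the stamp

  # 3. list what still carries the stamp, then delete or move it aside
  find /restore/target -type f ! -newermt 1970-01-02 -print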


 I absolutely love BackupPC.  I ran Bacula for 2 years before this, and have
 experience with several basic systems like rsync to disk and dump+tar to
 tape.

Tape sucks.  Unpleasant in every way.  I have to use the Tru64 utility
vdump on AdvFS volumes, as all the Tru64 restore tools rely on it (the good
tools anyway).  I now do infrequent vdumps and frequent data backups with
backuppc, so I can restore the OS and then update that restore from backuppc.


 Regards,
 Tyler




Re: [BackupPC-users] entities using BackupPC

2009-10-27 Thread dan
I have changed a lot of my setup in the past months and am using DPM from
Microsoft on our Server 2003 and 2008 infrastructure.  I used to use
backuppc, but it is not the best tool for Windows servers.  DPM can do
rolling backups at 15-minute intervals and has some nice client-side agents
that do such things with Volume Shadow Copy.

I do have a virtualization cluster running on Proxmox VE with 3 VM hosts
and 1 SAN (migrated off of Citrix XenServer, which works great until it
crashes and requires complete restores of all VMs), and I use backuppc for
the entire cluster (VM hosts and the actual virtual machines).  I am using
OpenVZ containers for services like BIND, DHCP, MySQL and web servers, and
KVM for higher-level stuff like specific Windows apps on XP Pro VMs.  I do
all logins for the whole Linux cluster via LDAP, specifically to make sure
UID and GID are the same across every machine.  With the centralized
authentication and accounting, I set up a backupagent user on each machine,
give it sudo rights to rsync, and limit logins to that user account by host
(the backuppc servers, specifically), roughly as sketched below.
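
A rough sketch of what that looks like (account name and paths are examples;
the config line is BackupPC 3.x syntax):

  # on each client, via visudo (or /etc/sudoers.d/backupagent):
  backupagent ALL=(root) NOPASSWD: /usr/bin/rsync

  # in the per-host .pl config on the BackupPC server, run rsync through sudo:
  $Conf{RsyncClientCmd} = '$sshPath -q -x -l backupagent $host sudo $rsyncPath $argList+';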

I have a single backuppc machine backing up each server, which is my 'stable
setup', and then I have a secondary test cluster of 2 backuppc machines with
DRBD syncing the RAID1 array between the two hosts.  One machine is the
primary, and the other rsyncs the backuppc config over so they are
identical, but does not run backuppc.  This link runs over a 10Mbit WAN link
from my colo to my office (same ISP, nice) and is so far a wildly successful
experiment running DRBD with ssh compression over a WAN link.  DRBD does a
very good job of 'catching up' over the relatively slow WAN link.  I have
found that backuppc uses bandwidth in a pulsing fashion, with high and low
moments, which lets DRBD gain ground during the lulls in bandwidth usage
(measured with iftop on the LAN interface and the WAN interface, watching
the WAN stay saturated while the LAN pulses).

I have another backuppc machine that is backing up Windows XP clients with
DeltaCopy (essentially cwRsync); maybe 25 or so hosts on this machine.  I
back up the entire Documents and Settings folder.  I use Microsoft's ImageX
to pull whole base-level system images, so I can restore that and then dump
the settings and files back over the top.

Anyway, I am successfully using backuppc in production, doing off-site
replication, and I have done remote restores across WAN links.  My setup is
constantly evolving as we migrate away from our old equipment onto more
modern hardware and software.

good luck.


Re: [BackupPC-users] BackupPC and power saving

2009-10-13 Thread dan
Wake-by-BIOS is interesting. Nice tip. I forgot the functionality even

 existed. This makes the setup simpler. So what about:

 1. BIOS wakes up machine.

 2. Wake machine around midnight so BackupPC_nightly can do its thing
 or just trigger manually upon bootup.

 3. Now we'll reverse the direction: BackupPC wakes the clients.

 4. BackupPC shuts itself down after jobs are complete OR after some
 time interval (I don't want it to wait forever if a laptop is out for
 the night, for example). Looks like BackupPC_serverMesg status hosts
 can help here.

 5. We wait for #1.


How about:
1) boot on the BIOS power-on schedule
2) let backuppc process all backups; this should happen automatically due to
the wakeup schedule, though I would shorten that period
3) run a script to determine if backuppc is doing a backup; if so, sleep for
your wakeup schedule + 1 minute, and if not, execute BackupPC_nightly (a
rough sketch of such a script is below)
4) trigger a shutdown when BackupPC_nightly completes.
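
Something like this for steps 3 and 4, as a rough, untested sketch (the
install path and the in-progress state string may differ on your version;
run it as the backuppc user):

  #!/bin/sh
  BIN=/usr/share/backuppc/bin
  # wait while any host still reports a backup in progress
  while $BIN/BackupPC_serverMesg status hosts | grep -q "_in_progress"; do
      sleep 300
  done
  # kick off the nightly pool cleanup, then power the box down
  $BIN/BackupPC_serverMesg BackupPC_nightly run
  sleep 600                  # crude wait; watch the server LOG to be exact
  sudo /sbin/shutdown -h now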


Re: [BackupPC-users] installer backup Pc

2009-10-06 Thread dan
Do you speak English?  I can read some French but cannot compose it well.

You CAN back up Windows servers with backuppc, with either CIFS or rsyncd.
You can do rsyncd on Windows with cwRsync or DeltaCopy, which are
repackagings of Cygwin's rsync; a minimal rsyncd.conf sketch is below.  If
you search the list for VSS and rsync, you can find instructions on how to
use Windows Server's Volume Shadow Copy to take backups as well.
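
If you go the rsyncd route, a minimal rsyncd.conf on the Windows side looks
roughly like this (module name, path and user are examples; pair it with an
rsyncd.secrets file holding user:password entries):

  use chroot = no
  read only = yes

  [cDrive]
      path = /cygdrive/c
      auth users = backuppc
      secrets file = /cygdrive/c/rsyncd/rsyncd.secrets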


Re: [BackupPC-users] BackupPC on OpenSolaris

2009-10-06 Thread dan
Evan, if you don't mind me asking, what is your hardware setup?  How many
hosts, and what type of hosts, do you have, and how do you like ZFS in this
environment?  Do you run into RAM issues with ZFS?  Do you use backuppc
compression or have ZFS do the compression?


Re: [BackupPC-users] security headaches

2009-09-25 Thread dan
I use iptables to allow access to the web interface only from my
workstation, and I disable inbound ssh for root and the backuppc user.  I
also limit inbound traffic with iptables so that the backuppc server must be
the one to open the session to the client.
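
Roughly, with placeholder addresses:

  # only my workstation may reach the BackupPC web interface
  iptables -A INPUT -p tcp --dport 80 -s 192.168.1.50 -j ACCEPT
  iptables -A INPUT -p tcp --dport 80 -j DROP

  # let replies to sessions the server opened come back in, but drop new
  # inbound ssh from the client subnet; the backup server always initiates
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j DROP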


Re: [BackupPC-users] multiple pools

2009-09-25 Thread dan
 is it possible to have multiple pools

 My backuppc has a raid1 pair of 1Tb drives - this is now 750gb used

 I guess its time to think about adding a 2nd 1Tb pair


Did you use LVM?  You could rotate the data: set up the new RAID and put
that device in LVM, create an LV and then move all the old data over.  Then
you can remove the partition from the original drive, put that in the LVM
too, and extend the new LV to include both RAID arrays (roughly the sequence
sketched below).
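
Roughly that sequence (device, VG and LV names are examples; stop backuppc
first, and note that people often prefer dd or rsync -H for a big pool
because of all the hardlinks):

  pvcreate /dev/md1                       # the new raid pair
  vgcreate vg_backup /dev/md1
  lvcreate -l 100%FREE -n pool vg_backup
  mkfs.ext3 /dev/vg_backup/pool
  mount /dev/vg_backup/pool /mnt/newpool
  cp -a /var/lib/backuppc/. /mnt/newpool/ # -a preserves the pool hardlinks

  # later: wipe the old array, add it to the VG and grow the LV across both
  pvcreate /dev/md0
  vgextend vg_backup /dev/md0
  lvextend -l +100%FREE /dev/vg_backup/pool
  resize2fs /dev/vg_backup/pool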

If you can manage to dump the 750GB off to another drive, you could do that
and just create a new RAID10 and then restore onto that instead.


To directly answer your question: no, you cannot have two pools.

