Re: [BackupPC-users] Strange SMB errors

2014-03-13 Thread Arnold Krille
On Thu, 13 Mar 2014 19:19:22 +0100, Marco lista...@gmail.com wrote:
 username = administrator
 password = XXX
 domain = example.com
 
 Is that not enough to ensure I connect as domain admin?

On Windows, users can (and by default do) lock the Administrator out
of their files without any extra work. True, the admin can always
take ownership of non-encrypted files and then change the permissions.
But by default, users' files are set so that the Administrator has no
rights.

For backing up these files you probably have to use a non-admin user
that is in the Backup Operators group...

Have fun,

Arnold




Re: [BackupPC-users] USB dongle backup

2014-03-10 Thread Arnold Krille
Hi,

On Mon, 10 Mar 2014 18:43:09 -0400 Claude cla...@phyto.qc.ca wrote:
 As I've been using BackupPC for many years for all my computers and
 laptops, I'm wondering if it is possible to set up an automated backup
 system to catch my many USB dongles as I'm coming back to the office.
 
 I want to insert these in a USB bay until the next morning so backuppc
 will detect them and make a backup.
 
 How to proceed? Is it possible to format these USB dongles to give
 them a unique name so backuppc will recognise them as they are
 plugged into the bay?

I do something similar with my android phones:

I set up automount to mount the phones at fixed locations. Then I
created a backuppc 'host' with a custom ping command of
sudo /var/lib/backuppc/bin/checkhandy.sh /misc/schieber, where
checkhandy.sh is:

===

#!/bin/bash

echo "Checking for directory of mobile phone (with arguments '$*')" | logger

lockfile=/tmp/backuppc-lshw.lock

# stagger parallel invocations a little before taking the lock
sleep $((RANDOM % 3))

if [ ! -e "$lockfile" ]; then
    touch "$lockfile"
    # rescan the hardware; needed on a headless server (see below)
    lshw > /dev/null
    rm "$lockfile"
else
    while [ -e "$lockfile" ]; do
        sleep 5
    done
fi

if [ -z "$1" ]; then
    dir=/misc/schieber
else
    dir="$1"
fi

# exit 0 (ping ok) only if the mount directory exists
ret=-1
[ -d "$dir" ] && ret=0

echo "Will return $ret for directory $dir" | logger

exit $ret
===

Then it's just a matter of backing up /misc/schieber in this case.

Running lshw in that script is needed on my headless server, as
otherwise it wouldn't recognise the partition-table on the mobile. With
a USB stick you can probably do without it.
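
The automount part is what gives the sticks a stable name. For a USB
stick, a rough autofs sketch would do (assuming the dongle is formatted
with a fixed filesystem label, here the hypothetical BACKUPSTICK):

  # /etc/auto.master: hand the /misc directory to autofs
  /misc   /etc/auto.misc  --timeout=60

  # /etc/auto.misc: one entry per labelled stick
  backupstick  -fstype=vfat  :/dev/disk/by-label/BACKUPSTICK

Then any stick carrying that label appears at /misc/backupstick as soon
as something (like the ping-script above) looks at the directory.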

Hope that helps,

Arnold




Re: [BackupPC-users] Restore not possible

2014-03-03 Thread Arnold Krille
On Mon, 03 Mar 2014 11:02:27 +0100, Pascal Heverhagen p...@inwx.de wrote:
 When I want to do a restore of a backup after 5 seconds this error 
 appears in the log:
 
 Running: /usr/bin/smbclient TARGETSERVER\\ -U  -E -d 1 -c
 tarmode\ full -Tx -
 Running: /usr/share/backuppc/bin/BackupPC_tarCreate -h
 BACKUPPEDSERVER -n 0 -s / -t /
 Xfer PIDs are now 10685,10686
 restore failed: BackupPC_tarCreate failed
 
 
 Why is this happening? Maybe you can help me! :)

I don't know why it's happening. But I am pretty sure that reposting the
same thing two days later, without any further information and without
showing that you did some debugging yourself, will magically resolve
your problem.

irony off

Did you look into the logs? There isn't just the log you see in the
backuppc web-gui; there is also a log on the server you are trying to
restore to.
Did you try to restore your backup into a tar-file or zip-file for
download instead of directly to the target?
Did you try to execute the commands quoted above on the backuppc server,
_as the backuppc user_?

And most important, if you did any or all of the above: what is the
outcome?
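
For example, the tar-creation half alone can be tested as the backuppc
user (a sketch using the paths from the log above; the output file name
is hypothetical, adjust to your setup):

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
      -h BACKUPPEDSERVER -n 0 -s / / > /tmp/restore-test.tar

If that fails, the error printed on the console is usually more helpful
than the web-gui log; if it works, look at the smbclient side instead.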

Have fun,

Arnold




Re: [BackupPC-users] User password to restore AD Windows clients

2014-01-31 Thread Arnold Krille
On Fri, 31 Jan 2014 14:09:47 +0100 Nicolas Cauchie
nico...@franceoxygene.fr wrote:
 Hello all,
 
 I'm trying to restore data to a Windows client. This computer is in an 
 Active Directory environment.
 
 To backup Windows clients who are in this domain, I created a user
 with complex password and select don't ask to change password, and
 I always use this user to backup data with backuppc.
 
 Everything's working fine during backup, as I set DOMAIN\user and 
 password for this user in the CONFIG/XFER tab. My host is set with 
 DOMAIN\user in the EDIT HOSTS\Hosts tab.
 
 When I select files to restore, I have 3 choices to restore. I choose 
 the first option : direct restore to the host.
 
 I set a new folder named restore to restore the files below dir 
 which doesn't exist on the host, and the restore failed. Same thing
 if I create this folder on the host before start the restore process.
 
 The user I use to backup Windows clients has write permissions on the 
 Windows share.

Does that user also have write permission on the disk? With Windows
there are two places where permissions are checked: at network level
when accessing the share (for the whole share), and then at
file/directory level for each (sub-)directory and file involved. It's
probably easier to restore this by downloading a zip-file and having
the user unpack it himself.

- Arnold




Re: [BackupPC-users] Backuppc Restore to Previous Date

2014-01-22 Thread Arnold Krille
On Wed, 22 Jan 2014 05:26:11 -0800, shankarp
backuppc-fo...@backupcentral.com wrote:

 Hello All,
 
 I am evaluating backuppc to implement it on our organization as our
 backup solution. I have noticed that, when restoring it provides the
 latest backup only. Is it possible to restore to some previous dates
 using backuppc (not the latest backup) ?

When you restore, you click on browse files for that node?

Above all the shares and files there is a block of mostly text and
some special things. One of these is a box to select the backup
which you want to browse/restore; the other is a link to the history of
the current folder. Both can be used to select a certain back-version
and restore from that.

And if you check the node's detailed status page and see the list of
backups, the number in front of each backup's line is actually a link to
browse/restore that specific version. It leads to the same place as
browse files and then selecting a version.

Have fun,

Arnold




Re: [BackupPC-users] System disk damaged

2013-12-21 Thread Arnold Krille
On Fri, 20 Dec 2013 15:59:01 -0600, Michael Stowe
mst...@chicago.us.mensa.org wrote:
 Of course, depending on the thoroughness of your backups, if you
 recover /etc/password, /etc/shadow, and /etc/group ... the old user
 and group ids would be back, anyway...  And it's probably worth
 pointing out that if you do this FIRST, the users would already be
 there with the old user ids and group ids and restoration would be
 fairly simple.  You might want to just do that.

Be aware that if you recover passwd and group after you install lots
of stuff on your server, the IDs of the backed-up data might conflict
with the IDs of the already-installed stuff, which would then be wrong.
Better recover passwd and shadow and group right after the basic
install, then install what is needed to make this machine the desired
server, and only then recover the data from backup.
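
As a rough sketch of that order (file name hypothetical; assuming you
first pulled the old /etc out of backuppc as a tar-file):

  # 1. after the basic install, bring back the old identities
  tar -xpf etc-restore.tar -C / ./etc/passwd ./etc/shadow ./etc/group
  # 2. install the packages/services this machine needs
  # 3. only then restore the data from backup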

- Arnold




Re: [BackupPC-users] first time full and then only increamental backup

2013-12-19 Thread Arnold Krille
On Wed, 18 Dec 2013 03:42:35 -0800, niraj_vara
backuppc-fo...@backupcentral.com wrote:

 HI
 
   I have centos 5.6 and backuppc version 3.2.1. It's working fine.
 I want to set it up so that when I add a host, backuppc takes a full
 backup the first time and after that only takes incremental
 backups.

The behaviour you want is already there. Just use the rsync transfer.
(And maybe activate the --checksum option as explained in the docs.)

Then the first full will be a full; the second and following fulls will
be a comparison of the remote filelist and backuppc's filelist with
respect to modification-date _and_ checksum. So files will be
transferred if they have changes in meta-data, and even if they only
have changes in the data itself.
Incrementals will always just transfer files with changed meta-data.

This is different when you use smb- or tar-transfer, where a full is a
full, transferring everything...
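
For reference, the docs' recipe for that checksum caching is to append
--checksum-seed to the rsync arguments, roughly like this in config.pl
(only valid for the rsync/rsyncd methods):

  push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';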

Have fun,

Arnold




Re: [BackupPC-users] Debian / Ubuntu restore solution?

2013-12-11 Thread Arnold Krille

On Wed, 11 Dec 2013 06:15:06 +, Tyler J. Wagner ty...@tolaris.com wrote:

 On 2013-12-10 21:05, Stefan Peter wrote:
  An internal LVM based (software) RAID 2 may have been the better
  solution in this regard.
 
 Stefan,
 
 The software RAID in Linux is MD RAID, or mdadm (the command). LVM2 is
 not RAID. What you want is a software RAID 1 (mirror), although LVM on
 top of that is often useful.

Actually, if you have three or more PVs for LVM (on non-raid disks),
using these for a mirrored LV is more versatile than plain
software-raid, as you can change/move more things around. You just need
three PVs because LVM forces the mirrorlog to be on a different PV than
the mirrors (which makes sense). But it doesn't actually force you to
have the mirrorlog on a different hdd: partitioning a single disk into
[1GB, 1TB, rest] gives you three PVs to create a mirrored LV with the
log on the first and data on the second and third partition :-) Of
course you only gain slower speed with that, no security. But to get
acquainted with the commands it's great.
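
Schematically (device names and sizes are made up):

  pvcreate /dev/sda5 /dev/sda6 /dev/sda7
  vgcreate vg0 /dev/sda5 /dev/sda6 /dev/sda7
  # mirrored LV; the last PV listed is used for the mirrorlog
  lvcreate -m 1 --mirrorlog disk -L 900G -n backup vg0 \
      /dev/sda6 /dev/sda7 /dev/sda5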

Have fun,

Arnold


Re: [BackupPC-users] Keeping a large number of backups? Any drawbacks?

2013-12-05 Thread Arnold Krille
On Thu, 5 Dec 2013 11:30:23 -0600 David Nelson
david.nelso...@gmail.com wrote:
 Hello,
 
 I built a BackupPC server with good older hardware and installed two
 WD RED 3 TB drives and used  LVM to make them one big ~6 TB volume
 for backupPC. The system boots from a separate disk BTW. I am backing
 up teacher and student files for a small school district. The total
 backup before pooling and compression is around 800GB. My pool size
 is around 1TB with a month's worth of daily incrementals and full
 backups on Fridays. I did not realize when I built the server how
 amazingly efficiently BackupPC stores the backups. I did not
 anticipate being able to keep so many backups!
 
 Is there any reason not to just increase the full and incremental keep
 counts to something like the whole school year? Another thought is to
 keep the fulls and incrementals for maybe 3 months, then let the
 incrementals go and just keep the weekly fulls? So that's the
 question, if I have plenty of space is there any reason not to just
 keep a ton of backups?

There are examples in the docs that set FullKeepCnt to a value of [4, 0,
4], for example, meaning: keep 4 fulls at FullPeriod interval (defaults
to 6.97 days, roughly a week), 0 backups at twice the FullPeriod, and 4
backups before that at four times the FullPeriod (which becomes roughly
a month). For a customer we use that to save weekly backups for two
months and then ~monthly for one (or two) years. Works like a charm.
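
In config terms that example is just (a sketch; FullPeriod shown at
its default):

  $Conf{FullPeriod}  = 6.97;      # roughly one full per week
  $Conf{FullKeepCnt} = [4, 0, 4]; # 4 weekly fulls, none at 2 weeks,
                                  # 4 more at 4-week spacing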

- Arnold




Re: [BackupPC-users] Not getting any mail after the backup

2013-11-27 Thread Arnold Krille
On Wed, 27 Nov 2013 12:27:15 +0530 Gopu Krishnan
gopukrishnan...@gmail.com wrote:
 I have deployed a new backuppc in my CentOS 6.4 server and backup is
 running fine except the mails. I am no longer receiving any mails
 after the backups, no matter whether it was successful or not. When I
 test it as below :
 ./BackupPC_sendEmail -u gopu@mailaddress
 Sending test email using /usr/sbin/sendmail -t -f
 backuppc@backup-sever; it sent the test mail successfully and I
 received the test. So the issue is with the backup status mails only. I
 have also tried to send a test mail using: mail emai_address
 as the user backuppc, and it was received fine. Kindly let me know what
 I need to check.

There is no email after a backup. No news is good news. (Except for
when backuppc is stopped altogether.)

You only get mails when you are responsible for a host and the last
backup is too long ago.

There is a script out in the wild that sends you a daily summary, but
for that you can also look at the backuppc summary page. And when you
get an email every day, the days where there is a problem don't stand
out.
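
The 'too long ago' part is a config knob; a sketch of the relevant
options (values are examples):

  $Conf{EMailAdminUserName}       = 'backuppc'; # where admin mails go
  $Conf{EMailNotifyOldBackupDays} = 7.0;        # warn the owner when the
                                                # last backup is older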

Have fun,

Arnold




Re: [BackupPC-users] backuppc on NAS

2013-11-25 Thread Arnold Krille
On Sun, 24 Nov 2013 19:51:59 +0100, Hans Kraus h...@hanswkraus.com
wrote:
 On 24.11.2013 17:35, Arnold Krille wrote:
  On Fri, 22 Nov 2013 09:07:45 -0500, Andy Stetzinger
  andy.stetzin...@riptidesoftware.com wrote:
 
  Depends on what kind of NAS you're buying. If you're getting
  something that's a true linux OS, then yes. If not, then no.
 
  What I do here is run backuppc on a VM, and mount a network drive
  to the NAS. Works perfectly.
 
  So you only get backups when both the NAS and the main machine are
  running. A way better solution would be to a) run backuppc directly
  on the nas (if supported), b) run backuppc on the normal servers and
  just write tapes to the NAS or c) build your own NAS with a
  small machine with big disks and run minimal Linux+backuppc.
 
  - Arnold
 
 But isn't that true in any case if the machine on which the backup
 storage is mounted and the main machine aren't the same one? I mean,
 in that case both computers must be up and running to get a backup,
 regardless where backuppc is really running.

When the NAS is mounted on the machine running backuppc, both devices
must be up to get _any_ backups. That's two single points of failure
in a row compared to one. (Of course the machines-to-be-backed-up have
to be up as well.)

When additionally the machine running backuppc is the main server (as
in many one-server setups) you get funny effects: the main server
will fail to boot, or at least take a very long time, when the NAS is
not reachable; the NAS on the other hand is waiting on the dhcp-server
of the main server to be active so it gets an IP and network
connectivity... In short: don't do that. Run backuppc on a dedicated
(virtual or real) machine and give it all the disk-space it needs
without relying on external stuff. The backups are the most valuable
thing you (or your client) have.

Have fun,

Arnold



Re: [BackupPC-users] backuppc on NAS

2013-11-24 Thread Arnold Krille
On Fri, 22 Nov 2013 09:07:45 -0500, Andy Stetzinger
andy.stetzin...@riptidesoftware.com wrote:

 Depends on what kind of NAS you're buying. If you're getting something
 that's a true linux OS, then yes. If not, then no.
 
 What I do here is run backuppc on a VM, and mount a network drive to
 the NAS. Works perfectly.

So you only get backups when both the NAS and the main machine are
running. A way better solution would be to a) run backuppc directly on
the nas (if supported), b) run backuppc on the normal servers and
just write tapes to the NAS or c) build your own NAS with a small
machine with big disks and run minimal Linux+backuppc.

- Arnold



Re: [BackupPC-users] save backuppc backups on a ultrium tape

2013-11-15 Thread Arnold Krille
On Fri, 15 Nov 2013 11:17:07 -0600 Les Mikesell lesmikes...@gmail.com
wrote:
 On Fri, Nov 15, 2013 at 9:45 AM, i...@newoffice.it
 i...@newoffice.it wrote:
  On 11/15/2013 03:38 PM, i...@newoffice.it wrote:
 
  Or simply can i put /dev/st0 to # The path on the local file system
  where archives will be written:
  $Conf{ArchiveDest} =
 
 That would be part of an 'archivehost' setup.  It may work but you'd
 have to start the run from the web page and you can probably get
 better performance from your tape drive if you pipe the output of
 Backuppc_tarCreate through dd with some appropriate block size.  Or if
 you have extra disk space, write intermediate tar files to some
 holding space so you have a better chance of keeping the tape
 streaming.

We do exactly that: trigger an archive-host to dump encrypted tars of
the last backup of all (or selected) machines into a dir. This
archive-host has a pre-script that mounts the dump-dir via nfs (because
backuppc runs on its own virtual machine and the streamer isn't yet
virtualized). In the post-script the dir is unmounted, and on the
streamer-machine amanda is started to dump the tars to tape in an
orderly fashion. Using amanda and not just dd to dump the tars has the
advantage that you get an index of your tape-contents and can restore
single tars from the middle of the tape.

We do have another type of archive-host that mounts an iscsi-device,
writes encrypted tars of all machines and unmounts the iscsi again...

Probably we should publish these scripts some time. Perhaps together
with the Chef-recipes to configure backuppc and its clients.
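
The shape of such an archive-host, as a sketch (the script paths are
hypothetical):

  $Conf{ArchiveDest}        = '/srv/backuppc-dump';
  $Conf{ArchivePreUserCmd}  = '/usr/local/bin/mount-dumpdir.sh';
  $Conf{ArchivePostUserCmd} = '/usr/local/bin/umount-and-amdump.sh';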

Have fun,

Arnold




Re: [BackupPC-users] BackupPC with ssh

2013-09-15 Thread Arnold Krille
On Sun, 15 Sep 2013 12:52:38 -0700 makgab
backuppc-fo...@backupcentral.com wrote:
 Hi!
 
 I would like to backup a machine over internet with ssh (simple file
 copy over ssh, remote machine a centos linux). How can I setup
 BackupPC for it? Which xfermethod is for simple backup over ssh? And
 how can I set it?
 
 (I set 'smb', 'ftp' method. These are working correctly.)

'rsync' does rsync over ssh.
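
A minimal host config sketch (this RsyncClientCmd is the 3.x default;
it assumes key-based ssh access for root on the client):

  $Conf{XferMethod}     = 'rsync';
  $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';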

Have fun,

Arnold




Re: [BackupPC-users] Need advice on XFS and RAID parameters for new BackupPC server with 25 SAS disks

2013-09-13 Thread Arnold Krille
On Fri, 13 Sep 2013 12:56:30 -0400 Carl Wilhelm Soderstrom
chr...@real-time.com wrote:
 On 09/13 02:12 , Marcel Meckel wrote:
  Debian Wheezy will be running from SD card inside the server,
 My company tried using CF cards as OS storage devices for a while. Our
 experience is that (anecdotally) they aren't any more reliable than
 spinny disk. They still fail sometimes.
 I don't know if SD cards will be any different, or if you might have a
 different way of mounting them which will be better.

When the host is simple (aka single-use) and maybe even configured
automatically with chef/puppet/whatever, the most you lose from a
failing CF-card/SD-card is the time it takes to reinstall.

 2. I always use LVM but it might not be useful in this case.
Would you recommend using LVM when the whole 12 TiB is used as
one big filesystem only? It might be useful if i have to add
another shelf of 25 disks to the system in the future to be
able to resize the DATADIR FS spanning then 2 enclosures.
 I wouldn't bother. I've done it both ways (with, and without the
 LVM). If you *know* that you'll be adding more disks in the future,
 it's a good idea. My experience is that planned expansions usually
 don't happen. ;) Also, if you're going to add more disks for more
 capacity, you're much better off adding a whole new machine. A second
 machine will increase your overall backup throughput as well as
 increasing your disk space. you won't get the benefit of pooling; but
 you will get more hosts backed up in a shorter amount of time.

My advice would actually be a bit different: the main problem with
backuppc isn't necessarily the disk access but the memory-consumption,
as backuppc (especially with the rsync-method) has to keep a rather big
file-tree in memory.
So maybe run a minimal hw-host with two or even three virtual
machines for backuppc, then distribute your hosts-to-backup across
these. That way the file-tree per backuppc-instance should be smaller,
at the cost of a bit less deduplication. But from my experience the
benefit of massive deduplication is in the files in /etc and similar
system-shares with small files. If you have duplicates in big user-data
files, you are either backing up one nas-resource via several clients
or your users are copying data where it shouldn't belong.

Hope that is understandable; at the end of the week, writing in a
foreign language isn't the best way of expressing one's thoughts.

Have a nice weekend,

Arnold




Re: [BackupPC-users] Moving lots of data on a client

2013-08-20 Thread Arnold Krille
On Tue, 20 Aug 2013 14:23:38 -0400 Raman Gupta rocketra...@gmail.com
wrote:
 I have a client on which about 100 GB of data has been moved from one
 directory to another -- otherwise its exactly the same.
 
 As I understand it, since the data has been moved, BackupPC 3 will
 transfer all the data again (and discard it once it realizes the data
 is already in the pool) i.e. it does not skip the transfer of each
 file even though the checksum is identical to an existing file in the
 pool.
 
 I am using the rsync transfer method.
 
 Is there a workaround to prevent all 100 GB of data from being
 transferred again?

I think the workaround is to use rsync as transfer ;-) At least when you
add the checksum-seed= parameter to your config, it should
calculate the checksums on the client, compare them with the server's
database, and only transfer contents that differ.

Otherwise I would not manually fiddle with the dirs on the server; it's
far less stress and risk of error if you just let backuppc do its
thing. Even if that means transferring the files again...

Have fun,

Arnold




Re: [BackupPC-users] Backup PC - Out of memory , BackupPC_dump killed

2013-08-13 Thread Arnold Krille
Hi,

On 2013-08-13 14:24, Florian Streibelt wrote:
 we are trying to back up our mailserver with lots and lots of small
 files using backuppc.
 The backup server already has 8 GB of RAM, but this seems not to be
 enough - anything we can do besides committing more RAM to the machine?

You are probably using rsync as transfer-method. Which makes perfect
sense, since you want to back up many files that don't change, so there
is no need to transfer all files as you would with tar. Unfortunately
that means that both client and server have to build the whole fs-tree
with checksums in memory...

Two solutions:
  - use tar as transfer-method. Shouldn't hurt much, as on incrementals
tar only transfers files that are newer, and normal mails aren't touched
again after initial storage.
  - define two or more hosts in backuppc that back up different parts
of your mail-store (see the sketch below). Then you can also shift
full-backups so that the load distributes over the days.
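
The second solution is config-only: several entries in the hosts file
that all point at the same machine via ClientNameAlias. A sketch (host
and path names hypothetical):

  # mail-a.pl
  $Conf{ClientNameAlias} = 'mailserver.example.com';
  $Conf{RsyncShareName}  = ['/var/vmail/a-m'];

  # mail-b.pl
  $Conf{ClientNameAlias} = 'mailserver.example.com';
  $Conf{RsyncShareName}  = ['/var/vmail/n-z'];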

Have fun,

Arnold



Re: [BackupPC-users] Backup PC - Out of memory , BackupPC_dump killed

2013-08-13 Thread Arnold Krille
On Tue, 13 Aug 2013 18:14:28 +0200 Florian Streibelt
flor...@net.t-labs.tu-berlin.de wrote:
 On Tue, 13 Aug 2013 18:01:25 +0200
  Arnold Krille arn...@arnoldarts.de wrote:
- define two or more hosts in backuppc that backup different
  parts of your mail-store. Then you can also shift full-backups so
  that the load distributes over the days.
 
 that was another idea, I even had the thought of backing up the system
 with one host and then each user maildir as an (automatically created)
 host.
 
 Does it make a difference to define different 'shares' with rsync, or
 is that even possible?

With different shares inside one host you could probably reduce the
memory-problem. But you can't distribute the load of a full backup over
several days.

When you define your backup hosts/resources from some automated system
(like we do via chef), it should be pretty easy to define one
backup-host per user. And maybe one host for each domain's shared
folders, if you have that.

Have fun,

Arnold




Re: [BackupPC-users] Move BackupPC TopDir To New Larger Hard Drive

2013-07-23 Thread Arnold Krille
On 2013-07-23 11:02, Tyler J. Wagner wrote:
 On 2013-07-23 00:54, rblake3 wrote:
 What am I missing? What can I do to efficiently get the data copied
 from olddrive to newdrive without this problem?
 The fastest way to do this is to do a blockwise copy of the old
 drive/partition to the new, then expand the filesystem with resize2fs
 or parted.
 
 Unmount both drives, then:
 
 dd if=/dev/sdb1 of=/dev/sdc1 bs=1M
 resize2fs /dev/sdc1

Having spent nights and weekends copying disks with dd, I find bs sizes
of 4M to 16M to work much better than 1M.

And a second console with watch -n 10 'killall -USR1 dd' gives you
regular status output.

Have fun,

Arnold



Re: [BackupPC-users] security web server questions

2013-06-30 Thread Arnold Krille
On Sun, 30 Jun 2013 01:17:04 -0700 Grant emailgr...@gmail.com wrote:
 On Linux, how is communication between the clients and the backup
 server achieved as far as security?  Does the backup server pull or do
 the clients push and as which user?

This depends on the transfer method:
 - tar and rsync tunnel their data over ssh - secure.
 - rsyncd afair uses its own (simple) authentication, but no transport
   encryption.
 - smb gives you the usual windows-security - nil.

And as reading the basic docs of backuppc tells you, the server pulls
the changes from the clients, automatically or when told via the
web-interface.

 Can the web server reside on a different system than the backup
 server?

Yes, but the web-interface has to have read-access to the
backuppc-filesystem to show you the current state and to select files
for restore and such. If you really must, you can run the daemon and the
web-interface on separate machines and nfs-mount /var/lib/backuppc from
the daemon's machine to the web-server.
But there is no real advantage in that: the web-interface isn't used
that heavily, so it's not slowing the daemon down if you run both on the
same machine. And you save yourself an administration-nightmare when you
just run both on the same machine.

Have fun,

Arnold




Re: [BackupPC-users] Exclude empty drectories

2013-05-28 Thread Arnold Krille
On 2013-05-28 11:20, electropica wrote:
 I am looking to ignore backup of empty sub directories to make it
 lighter and more readable.
 In my case, i backup 300 users documents so there is always my
 pictures or similar empty directories.
 
 Is this possible ?

Empty directories in user profiles on Windows tend to contain
hidden files with system-properties.
Empty directories in user profiles on Linux tend to contain hidden
.directory files with the xdg-properties (like icons and stuff).

Working around that is more difficult than just syncing the
directory-properties.

And when you delete a dir from the backup just because it became empty,
you are removing something from backup that is still present in reality.
So your backup is not complete, which is something you generally don't
want.

Have fun,

Arnold



Re: [BackupPC-users] BackupPC not obeying blackout times

2013-05-22 Thread Arnold Krille
On 2013-05-22 16:05, Zach La Celle wrote:
 I'm running BackupPC 3.2.1 on a Ubuntu 12.04 linux distribution.
 
 I've created a custom perl file for one of my hosts (through the web
 interface) to disable backups during the day, and only allow them to
 be queued at night. However, backups are still being queued in
 disallowed times.
 
 Under Schedule on the host's config options in the web interface, I
 have the following:
 BlackoutPeriods:
 Override (checked)
 hourBegin 8
 hourEnd 17
 weekDays 1, 2, 3, 4, 5
 
 There are spaces after the commas in weekDays
 
 In the perl file for the host, I have the following (direct copy):
 $Conf{BlackoutPeriods} = [
   {
     'hourEnd'   => '17',
     'weekDays'  => [ 1, 2, 3, 4, 5 ],
     'hourBegin' => '8'
   }
 ];
 
 However, I had a backup begin at 10:00 after making these changes.
 
 What do I need to do differently?

BlackoutPeriods will only be honored if there are enough good pings,
if there is no overdue backup scheduled, and if there is at least one
good backup.
Also, it's in the server's timezone; it could be that you are using a
UTC blackout while your office-work-hours are in CEST or something.

And BlackoutPeriods only affect the start-time of backups; no backup
will be interrupted by the beginning of the period.
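
The 'enough good pings' part maps to these options (a sketch; 7 and 3
are the defaults, afair):

  $Conf{BlackoutGoodCnt}      = 7; # consecutive good pings before the
                                   # blackout is honored
  $Conf{BlackoutBadPingLimit} = 3; # failed pings until the blackout is
                                   # lifted again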

Have fun,

Arnold



Re: [BackupPC-users] mysql streaming

2013-04-26 Thread Arnold Krille
On Thu, 25 Apr 2013 14:45:50 -0700 Lord Sporkton
lordspork...@gmail.com wrote:
 I'm currently backing up mysql by way of dumping the DB to a flat
 file then backing up the flat file. Which works well in most cases
 except when someone has a database that is bigger than 50% of the
 hdd. Or really bigger than around say 35% of the hdd if you account
 for system files and a reasonable amount of free space.
 I started thinking, mysqldump streams data into a file and then
 backuppc streams that file for backup. So why not cut out the middle
 man file and just stream right into backuppc? I've been playing with
 the backup commands but I'm getting some unexpected results due to
 what I believe is my lack of total understanding of how backuppc
 actually streams things.

I don't know about your rates, but here in Europe a new 2TB disk costs
less than me thinking about and trying to implement anything like this.

However, the idea seems interesting (hobby isn't always about hourly
rates :). Basically you have to send a stream from the client to the
server that is a valid tar-file. rsync is more for transferring parts,
but you want to dump the whole db every time. And smb is out for obvious
reasons.
But as for how to trick backuppc into thinking the other side is sending
a tar-file, I have no clue...

So unless you have an academic interest in understanding how things
work, it's much cheaper and easier to just push more disk-space into the
servers concerned. Probably it's just putting two more disks in the host
and then increasing several machines' disks?
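
For completeness, the usual compromise (not streaming, but automated)
is a pre-dump hook; a sketch with a hypothetical script name, and it
still needs the disk-space for one dump:

  $Conf{DumpPreUserCmd} =
      '$sshPath -l root $host /usr/local/bin/dump-mysql.sh';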

Have fun,

Arnold




Re: [BackupPC-users] What to backup to backup a backuppc server ?

2013-04-04 Thread Arnold Krille
Hi,

On 2013-04-04 14:33, Nicolas Cauchie wrote:
 I want to backup what backuppc backups to put it on a tape, to be
 safe in case of climatic disaster or anything else.
 I want to backup backups on a tape, using a LTO tape drive, and a
 program who runs on Windows.

Ouch, that last half-sentence hurts...

 When I run du -sh to know the size of my backuppc backups
 directory, on linux, it returns 217G. When I do the same thing on
 Windows (right click on the folder - properties), it sizes this
 folder at more than 500G... I think it's because of the symbolic links...

Yep, windows afair isn't really great with the links. BTW: these links 
are hard-links, that is two file-entries pointing to the same contents, 
not one file-entry pointing to another file-entry as it is with symbolic 
links.

 So, the question is : What must I backup to be sure to be able to
 restore the backuppc server ?

Either you write the whole partition to tape, or:

We have a cron-job that daily in the morning tells backuppc, using the
tape-archiver method, to write tars of certain hosts into a directory
where amanda can find them. And the post-script of the tape-archiver
then runs amdump to actually write the tars to tape and then deletes
the tars. This way only one backup-mechanism is collecting the stuff,
but we have the security of off-site storage.

On a side-note: the tar-files that backuppc writes are sent through gpg
for encryption with a public key, to be decrypted with the private key.
The private key is kept elsewhere; only when restoring from tape does it
have to be copied to one of the machines. So even if you get access to
one of our tapes, you don't see anything except the names of the
archives...
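
The gpg part is schematically just (key id hypothetical):

  # while archiving, on the backuppc server:
  gpg --encrypt --recipient backup@example.org host.tar
  # while restoring, on a machine holding the private key:
  gpg --decrypt host.tar.gpg > host.tar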

Have fun,

Arnold



Re: [BackupPC-users] how to install backuppc on seperate disk, alongwith backups on ubuntu1204 LTS?

2013-03-30 Thread Arnold Krille
On 2013-03-30 00:35, Kris Lou wrote:
 Assuming Ubuntu packages install in the normal locations, create a 
 symlink
 linking /var/lib/backuppc to your desired mount.  Then it should just 
 be
 apt-get install backuppc.

Why not just create /var/lib/backuppc, mount the desired disk there,
and then install backuppc?
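
I.e., roughly (device name hypothetical):

  mkdir -p /var/lib/backuppc
  mount /dev/sdb1 /var/lib/backuppc
  echo '/dev/sdb1 /var/lib/backuppc ext4 defaults 0 2' >> /etc/fstab
  apt-get install backuppc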

Have fun,

Arnold



Re: [BackupPC-users] Stop TrachClean / Return directories backup list

2013-03-22 Thread Arnold Krille
On 2013-03-21 01:21, Phil Kennedy wrote:
 Previous admin had us using monitoring with Zenoss (with SNMP hardware
 monitoring.) I'm in the process of moving us to Nagios with a much, 
 MUCH
 deeper level of monitoring. I've inherited a mess that I was led to
 believe was a finely tuned machine. Somehow I doubt my experiences are
 unique

Been there, seen that.

Actually I am in the same position, working my way through lots of
interestingly-configured machines and legacy stuff (to say it nicely).

But blaming the data-loss on the software-raid and not on the missing
monitoring/care is just plain wrong.
As others noted, with a hardware-raid you trade a proven name for much
less monitoring capability. Reading /proc/mdstat or the output of
mdadm -D /dev/mdX works as soon as you have an ssh-connection. And no
monitoring solution can claim to be usable without monitoring
/proc/mdstat. For hw-raids on the other hand you get a different tool
for each brand (and possibly series), a different way of alarming, and
then incompatibilities between controllers, harddisks and
firmware-revisions. Once you have lost data because a hw-controller only
notified with a led on the back before failing completely, and a newer
controller-release didn't understand the old revision's disk-format, you
will be very glad when you can simply plug your sw-raid's disks into any
other linux machine and access the data. When you use the old meta-data
format, you don't even need sw-raid support; you can simply mount the
partitions to restore your data.

Your story reminds me of one of my first all-nighters here with a
customer's server. The former admin had set up the customer's new
storage-server just fine. But then we wondered why only one of the three
disks was used for the raid. Mainly because he 'didn't yet find the time
to sync'. But also because he did the setup on a broken disk with
read-errors (in currently unused space), and syncing the complete disk
to the other raid members would freeze the system and abort the sync...
Something I noticed just seconds after I started the sync via ssh...

Anyway, stay calm; if in doubt, stop all automatic backups on the
machine concerned (there is a config option for that), and don't blame
human deficits on the software/hardware in use.

Good luck,

Arnold



Re: [BackupPC-users] BackupPC_dump memory usage

2013-03-12 Thread Arnold Krille
On Mon, 11 Mar 2013 18:29:41 +0100 Holger Parplies wb...@parplies.de
wrote:
 Les Mikesell wrote on 2013-03-11 11:29:48 -0500 [Re: [BackupPC-users]
 BackupPC_dump memory usage]:
  [...]
  I am curious about this.  I recall when people first started using
  64-bit perl saying that memory use ballooned much more than expected
  on some programs.   I don't know if there was a bug that has been
  fixed or if those people just bought more RAM.
 
 ok, I wasn't aware of that. I'd be surprised, though, if we have
 nobody using 64-bit BackupPC server systems on the list!? I'm about
 to install a 64-bit system for ZFS on Linux tests ...

Not again.

Please people, come join us in the 21st century. 64bit has been around
long enough. If there is still an app that fails to compile/run on
64bits, it's worthy of being dropped entirely.

If something basic like perl still made problems on 64bits, there
would be a legion of programmers fixing it.

There is no problem running any contemporary app on any contemporary
distribution on a 64bit processor and system. These apps also don't
take up much more memory; they get a very little bit bigger because
memory-addresses are now 64bits instead of 32bits. But they should be
using position-independent code anyway. And now an app can also use
more than 3GB of ram. And your system can also have more than 3.2GB
of ram without ugly crutches like PAE.

Nowadays 64bit isn't an exotic exception. I had trouble last week
installing a box: it took me one attempt to realize that my hw-people
had given me a 32-bit-only machine!

Have fun,

Arnold




Re: [BackupPC-users] Keep backup data when source host/directory changes

2013-03-06 Thread Arnold Krille
On Wed, 6 Mar 2013 10:10:17 -0600 Les Mikesell lesmikes...@gmail.com
wrote:
 On Wed, Mar 6, 2013 at 9:56 AM, Maarten m...@milieudefensie.nl wrote:
 
  Yes - the pool/cpool directory is full of extra hardlinks where the
  hash of the content is the name to quickly match up new items
  (plus a little extra work to deal with collisions).  Rsync just
  follows the existing pc directory tree though, so you do have to
  copy the data from each new location once, but storage is all
  pooled.  If the target system is local, that usually isn't a
  problem.  If it is over a low-bandwidth connection it might be
  worth some extra work to fake the initial tree with a copy of the
  old link tree.
 
  OK, thanks for clarifying this. I feel silly that I have used
  BackupPC for years but never realized this.
 
  The bandwidth is not an issue, so I'm about to start a new full
  backup right away.
 
 I'm not sure how much temp space will be required for the initial run.
  I think backuppc does something clever to handle matching small files
 in memory to avoid any extra disk writes, but large ones may be saved
 then subsequently replaced with links which might be a problem in an
 extreme case.

Which might be a good opportunity to split one big server with one big
backup-share into several 'servers' with maybe even smaller shares.
Then you can distribute the full backups of that one machine over
several days. And the first full backup doesn't need to transfer all
files to the backup machine before deduplicating (which is the link-step
in the status, afaik).

Have fun,

Arnold




Re: [BackupPC-users] BackupPC Pool synchronization?

2013-02-28 Thread Arnold Krille
On Thu, 28 Feb 2013 14:10:13 -0700 Mark Campbell
mcampb...@emediatrade.com wrote:
 So I'm trying to get a BackupPC pool synced on a daily basis from a
 1TB MD RAID1 array to an external Fireproof drive (with plans to also
 sync to a remote server at our collo).  I found the script
 BackupPC_CopyPcPool.pl by Jeffrey, but the syntax and the few
 examples I've seen online have indicated to me that this isn't quite
 what I'm looking for, since it appears to output it to a different
 layout.  I initially tried the rsync method with -H, but my server
 would end up choking at 350GB.  Any suggestions on how to do this?

Create a snapshot of the underlying lvm-volume and then copy/zip that
snapshot directly; see the sketch below.

Or use BackupPC's 'archive' method to write full tar.gz of your hosts
to your external disk. We are using that to write tgz to a directory
where amanda then writes them to tape...
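
The snapshot variant, schematically (volume names and sizes
hypothetical; a blockwise copy keeps all the pool's hardlinks intact):

  lvcreate -s -L 50G -n pool-snap /dev/vg0/backuppc
  dd if=/dev/vg0/pool-snap of=/dev/sdc1 bs=4M
  lvremove -f /dev/vg0/pool-snap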

Have fun,

Arnold




Re: [BackupPC-users] speed Mb/sec

2013-02-23 Thread Arnold Krille
Hi,

On Sat, 23 Feb 2013 07:52:58 -0800 zdravko
backuppc-fo...@backupcentral.com wrote:
 I'm still checking things with BackupPC. Yesterday I switched to tar
 instead of rsync and the result was awesome: 9min for incr. backup.

9min compared to what? Some of my incrementals with rsync only take 0.5
minutes.

 Let's see what will next few days bring.

Before you get your hopes too high: there is a difference in how rsync
and tar work.
 - rsync checks the whole file-tree and only transfers files that have
   changed in attributes and (on incrementals) content. So it puts a bit
   more strain on server and client but transfers fewer files.
 - tar transfers everything where the attributes have changed,
   regardless of whether the contents have changed or not. Less load on
   server and client but more data to transfer.

The transfer-rate shown in backuppc tells you how much data was
transferred in the backup-time. So the higher value for tar doesn't
necessarily mean that your backup transferred faster. It means your new
method of choice had more data to transfer and did so in the same
amount of time.

Have fun,

Arnold




Re: [BackupPC-users] Backing up many small files

2013-02-06 Thread Arnold Krille
On Tuesday 05 February 2013 23:09:13 Adam Goryachev wrote:
  Speeds varies around 7 M(Mbyte? Mbit?)/s. I guess it's good enough for a 
  100Mbps-connection.
 
 This is not relevant, I meant to watch what the bandwidth usage was
 during a backup. BTW, 7MB/s is fine for a 10Mbps connection, but if you
 really have a 100Mbps network, you should see at least 80MB/s transfer
 speeds.

??? 7MB/s (that is, 7 Mbyte/s) is a usable value for a 100Mb/s (that is,
100 Mbit/s) connection! 100Mb/s translates to 12.5MB/s; 10Mb/s would be
1.25MB/s...

On a 1G network you can get rates similar to a local hard disk, that is
~100-120MB/s.

So on a 100Mb/s network, the OP can't get transfer-rates higher than
12.5MB/s. It's the physical limit. 7MB/s is a good value.

Have fun,

Arnold



Re: [BackupPC-users] newbie: backupPC installed ok, but trouble with 1st backup

2012-11-23 Thread Arnold Krille
On Thu, 22 Nov 2012 21:14:53 -0800 Matthew Grob mg...@grobfamily.org
wrote:
 
 THANKS.   I'm not sure I would have figured that one out.  ICMP is
 often blocked these days so I would think this may be a common issue
 for backupPC?

Which is an interesting future we face there.

With IPv6 you have to have ICMP enabled, otherwise you won't get far...

So, if you are blocking ICMP ping to hide, think again and make your
systems secure so you don't have to hide. It's the future and the right
thing.
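
That said, if a client sits behind an ICMP filter you don't control, the ping
command itself is configurable. A sketch (the probed port is an assumption,
and the echoed time is there because some BackupPC versions parse a ping time
out of the command's output):

===
# per-host config.pl: probe the ssh port instead of sending ICMP
$Conf{PingCmd} = 'nc -z -w 3 $host 22 && echo time=1 ms';
===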




Re: [BackupPC-users] Full restore problems.

2012-11-22 Thread Arnold Krille
Hi,

On Thursday 22 November 2012 11:58:16 Gary Roach wrote:
 This is the second time around for restore problems. I need to do a full
 restore and am having problems building the tar file. I am no longer
 willing to fool with the command line method. I spent days trying to get
 things to work. I don't have time to fiddle with it any more. I find the
 instructions vague.
 
 The real problem is with the GUI. Check out the following:
 
  1. I selected the system from the left hand menu that I wish to
 restore.
  2. I selected the full backup #. In this case #169. This listed the
 backed up directories etc, var, root and home.
  3. I selected all and check marks appeared by all of the directories.
  4. I hit Restore selected files. The restore method page appeared.
  5. I selected method 3 (tar file ) and left the Make archive
 relative to / checked since I don't understand what it
 does. The save file screen appeared and I selected save. This created
a restore.tar file in my /home/gary/Download directory.
 
 Now the file that was created was 331MB. Unfortunately it should be
 21GB. Backuppc only restored the directory structure and some of the
 files that were in the first tier of directories. None of the lower
 level files were restored. I have tried breaking up the backup into
 individual directores (ie etc, var, home, root) but with essentially the
 same result. Beyond about the 2nd tier transfer is unreliable. I have
 checked the backup archives and they contain all of the data. The data
 is there but I can't get it back.

This won't help you, but following the steps you write above I just created a
full tarball of one of my hosts' /etc shares. From an incremental backup
(incremental is about file transfer at backup time, not about restore) and via
the gui. And the downloaded restore.tar contains all the files down several
levels and with correct permissions. At least that's what ark tells me.

Did you take a good look at the logfiles during the creation of your restore
file? Did you try to debug the problem? (Apart from I don't understand how
you can work with it, it's not working for me.)

Have fun,

Arnold

PS: I don't want to tease you, but when deploying a backup system, the first 
thing to check is if restore works as expected and has all the needed stuff. 
And one should do that well before one needs it...
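
For reference, the command-line route the GUI wraps looks roughly like this
(a sketch; the host name is a placeholder, 169 is the backup number from
above, and BackupPC_tarCreate must run as the backuppc user):

===
BackupPC_tarCreate -h gary-pc -n 169 -s /home . > /tmp/restore.tar
===

If this tar is complete while the GUI-produced one is short, the problem sits
in the apache/CGI layer rather than in the stored backup.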



Re: [BackupPC-users] Reduce backup load on clients

2012-11-22 Thread Arnold Krille
Hi,

On Thursday 22 November 2012 14:54:54 Jimmy Thrasibule wrote:
 On Thu, 2012-11-22 at 13:35 +, Paulo Almeida wrote:
  I don't use Windows myself, but would it be possible to link the backup
  to the computer shutdown, so that when a user turns off the computer, it
  first performs the backup and then shuts down?
 I really like this idea. But I don't know neither if this possible.

If the machines concerned are capable of WoL, it should be easy to use
/bin/true as ping command and something around etherwake / ssh host
halt(*) as pre- and post-backup commands, as sketched below.

(*) Or its windows version.
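
A sketch of that setup (the MAC address and paths are made up; the variable
names are standard BackupPC config options):

===
# per-host config.pl
$Conf{PingCmd}         = '/bin/true';                  # pretend host is up
$Conf{DumpPreUserCmd}  = '/usr/sbin/etherwake 00:11:22:33:44:55'; # wake it
$Conf{DumpPostUserCmd} = '$sshPath -l root $host shutdown -h now'; # sleep
===

You probably also want a sleep or retry in a wrapper around the pre-command,
since the machine needs a moment to boot before the transfer starts.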

  Would it be reasonable to do the backups during lunch, using backuppc's
  blackout periods?
 It could be and it's already like this. However I noticed that BackupPC
 doesn't take in account blackout periods for the first backup. Is that
 normal?

The first backup is very important. If there is no previous backup, or no
backup recent enough, a backup is scheduled immediately, unless you are quick
enough in the gui to block backups for that machine for the needed hours.

  There may be other solutions, outside the scope of backuppc, like
  traffic shaping and throttling processes.
 I'm also thinking about this. But one other side effect is disk I/O
 usage when making a backup.

The main problem for the machines being backed up is not the network traffic,
it's the disk I/O and disk latency when rsync checks the files and the
directory structure (the same goes for smb/tar).

One of our clients just accepted that every four weeks there is a Monday
morning where the machine is a bit slow and the hdds make a bit of noise. They
had a fatal data loss in the past, maybe that helps with acceptance of regular
backups.
The other clients use diskless thin-clients and work on a terminal-server 
where the backup runs at night...

Have fun,

Arnold



Re: [BackupPC-users] rsync never starts transferring files (but does something)

2012-11-15 Thread Arnold Krille
Hi,

On Thursday 15 November 2012 19:14:53 Markus wrote:
 Your suggestion sounds great. I just found this small how-to on a forum.
 Is this how it works or is there another/better way?
 Create as many client names as you like, eg: client-share1,
 client-share2, client-share3, client-share4 (replace client with the
 real host name and share with the share names). In each
 pc/client-xxx/config.pl file, use;
 
$Conf{ClientNameAlias} = 'client';
 
 (where client is the real host name). Add any other client-specific
 settings (eg: share name). This way all 4 virtual clients will refer to
 the same real client. All backups happen independently.

I think (I didn't yet have to try it) you can get the same result with a
different approach: create one host with many shares to back up. At least one
share per filesystem (add --one-file-system to the rsync arguments) and more
shares for different subdirs on big filesystems.
Then the problem of rsync eating lots of memory should go away, as each share
is backed up independently but all will be done in the same run. And you get
one nice filesystem tree in the backuppc gui.
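
A sketch of that per-host config (the share list is an example; the options
are stock BackupPC 3.x ones):

===
# per-host config.pl: one share per filesystem / big subtree
$Conf{RsyncShareName} = ['/', '/home', '/var'];
# don't let the '/' share wander into the other filesystems
push @{$Conf{RsyncArgs}}, '--one-file-system';
===

The same flag probably belongs in $Conf{RsyncRestoreArgs} too if you ever
restore in place.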

Have fun,

Arnold



Re: [BackupPC-users] backuppc 3.2.1 incremental SMB backup question

2012-11-02 Thread Arnold Krille


Jim Stark jstarkjhea...@gmail.com wrote:
Thanks for the prompt reply.

Machine A is an old Buffalo TeraStation. Ironically, it is running
Linux 
likely even has rsync, but as a practical matter, SMB is the only
available
access mechanism.

1) You can touch the files, or copy them in such a way that their times
change

No shell access to A for touch. No option I can find via the web
interface to have the copies to A take the current time as their
creation/mod time, though this would be great, if possible.

Mount the Buffalo with smbfs or similar on a unix/linux machine and run touch
from there, as sketched below.
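
A minimal sketch of that (the share name and credentials are placeholders):

===
# mount the TeraStation share on a linux box, then bump all mtimes
mount -t cifs //terastation/share /mnt/ts -o username=admin
find /mnt/ts -type f -exec touch {} +
umount /mnt/ts
===

After that, the next incremental sees every file as changed, so you pay the
price of a full once and are back in sync.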

2) You can simply run full backups

Performance hit? I understand pooling will avoid creating multiple
copies,
but cost in backup time?

The performance hit affects Windows workstations that only run in the daytime
and thus can only be backed up during working hours. Your Buffalo will likely
run 24/7 and will be available in the out-of-office times when backuppc
normally runs.

I guess I'm mostly surprised that the incremental backup does not realize
that there are files in the source that do not exist in the destination and
back them up based on that, regardless of modtime.

That is exactly the reason why backuppc runs a full-backup at regular 
intervals. In an ideal world the only time to run a full backup would be the 
first time.

Have fun,

Arnold
-- 
This message was sent from my Android mobile phone with K-9 Mail.



Re: [BackupPC-users] A simple perl script to call Backuppc_tarCreate

2012-08-20 Thread Arnold Krille
Hi,

just a few comments without having tested your script (got my own that
also connects and mounts the iscsi-volume before writing tapes).

On Mon, 20 Aug 2012 13:26:59 +0200
Germano Paciocco germano.pacio...@gmail.com wrote:
 Hi everyone.
 I was not satisfied on how BackupPC_archiveStart works, so I decided
 to write a script that should help me with BackupPC_tarCreate and
 cron, avoiding to edit crontab everytime I edit a host or a share to
 backuppc configuration.
 The script is attached.
 I'm a Perl beginner, so I think it could have be done much better.
 You're welcome for suggestions or to improving.
 
 This is the help of the command line:
 
 ---
 BackupPC_tarCreateAll [--dir|-d /path/to/archive/dir] (default:
 /var/lib/backuppc/)

From looking at the help, I couldn't tell whether this option only specifies
the directory where the tars are written or whether it is also used to find
the backuppc data. And it's very strange to write the tars by default to the
same directory where backuppc stores its files.

Also I noticed you skip localhost when creating a tar for each host. Don't
do that. That hostname might be misleading in a restore, but it's a perfectly
valid host to create backups from.
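
For comparison, a sketch of how the same loop can be driven straight from the
hosts file, with no option parsing at all (the path is the Debian default and
an assumption; -s '*' should mean all shares on reasonably recent versions):

===
#!/bin/bash
# write one tar.gz per configured host, localhost included
for host in $(awk '!/^#/ && $1 != "host" {print $1}' /etc/backuppc/hosts); do
    BackupPC_tarCreate -h "$host" -n -1 -s '*' . \
        | gzip > "/mnt/archive/${host}.tar.gz"
done
===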

Have fun,

Arnold




Re: [BackupPC-users] rsync: full backup more than twice faster than incremental backup

2012-08-16 Thread Arnold Krille
On Thursday 16 August 2012 16:35:49 Tyler J. Wagner wrote:
 On 2012-08-16 15:51, Les Mikesell wrote:
  I've now disabled incremental backups on this server, but maybe someone
  has an idea how to enable incremental backups for this host as well.
  
  I think you've jumped to conclusions here - you need to time full runs
  other than the first.  Other things to keep in mind are:
  Incremental runs copy everything that has changed since the previous full.
 
 One way we've improved this is to make incrementals reference the previous
 incremental. As long as your filesystem timestamps are accurate (system
 dates are synced, file modification times are updated on write), this works
 well.
 $Conf{IncrLevels} = [  '1',  '2',  '3',  '4',  '5',  '6', '7' ];
 We use seven on the off chance that someone runs a manual one that week.

Careful, you may be mixing this up with the keep counts: $Conf{IncrKeepCnt}
and the corresponding $Conf{FullKeepCnt} work differently than you might
expect. As an array they mean: keep one backup at IncrInterval, keep 2 backups
at 2*IncrInterval before that, keep 3 backups at 4*IncrInterval before that,
keep 5 backups at 8*IncrInterval before that, ...

You actually want to set $Conf{IncrKeepCnt} to just a plain 7 to make backuppc
keep seven incremental backups at IncrInterval.

For $Conf{FullKeepCnt} you are well advised to set it to [4, 0, 4] or [4, 0, 6].
Given a FullInterval of 7 days this results in keeping the last four full
backups (which happen to be weekly), keeping zero at a two-week interval, and
keeping 4 (or 6) at a four-week interval, which is approximately a month.
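
A sketch of the two settings side by side (values as discussed above):

===
# keep seven plain incrementals at the incremental interval
$Conf{IncrKeepCnt} = 7;
# keep 4 weekly fulls, none bi-weekly, 4 at roughly monthly spacing
$Conf{FullKeepCnt} = [4, 0, 4];
===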

Have fun,

Arnold



Re: [BackupPC-users] Help setting up backups on HFS+ network drive

2012-08-15 Thread Arnold Krille
On Wed, 15 Aug 2012 12:58:53 -0300 Mike ispbuil...@gmail.com wrote:
 On 12-08-15 12:52 PM, Les Mikesell wrote:
  Does the device offer nfs as an option?  If so, I'd use that
  instead of cifs.
  Am I going about this in the wrong way? Should I be backing up to
  this drive in a different manner? Right now, $Conf{XferMethod} =
  'tar'; but I have tried it set to smb as well with the same
  results.
 I'm suprised nobody has suggested creating an appropriately sized
 file on a CIFS share, making a filesystem on the file, and then
 mounting it with -o loop and running BackupPC on that.

Regardless of whether this loopback file is on a local disk or mounted via
nfs/smb, it has the advantage that you can add encryption via luks to
the equation...
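
A sketch of that loopback-plus-luks variant (sizes and paths are invented):

===
# 200GB sparse container on the CIFS-mounted nas
dd if=/dev/zero of=/mnt/nas/backuppc.img bs=1M count=0 seek=204800
losetup /dev/loop0 /mnt/nas/backuppc.img
cryptsetup luksFormat /dev/loop0          # asks for the passphrase
cryptsetup luksOpen /dev/loop0 backuppc
mkfs.ext4 /dev/mapper/backuppc
mount /dev/mapper/backuppc /var/lib/backuppc
===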

But why do you want to run backuppc on a server with the files on an
external nas? This has too many drawbacks:
 a) If one of these two machines fails, there are no new backups and no
 restores (at least during the outage).
 b) If the server is also the one providing IP addresses via dhcp and
 the nas needs an IP address, you get a nice loop on fresh starts like
 after a power outage: the nas won't be accessible as it doesn't have
 an IP address, and the server will not boot (and start backuppc and
 the dhcp server) because the device isn't reachable for mounting.

We use an internal disk for backuppc and write weekly/daily
encrypted 'tapes' to the nas which gets mounted for that (either via
nfs or iscsi). When the nas isn't there, we don't write the tapes. We
could also make the scripts try a second nas when the first isn't
there. Which gives the advantage of movable tapes with modern (and
cheaper) disks.

Have fun,

Arnold




Re: [BackupPC-users] SMB not copying all files

2012-07-30 Thread Arnold Krille
On Monday 30 July 2012 14:02:27 Matthew Postinger wrote:
 I found the correct log. Its hitting a certain folder, say my music and
 getting an nt access denied. However the permissions appear to be set
 correctly

Well, apparently they aren't correct.

If you'd like to debug this, you should log into your linux machine/server,
become the backuppc user and then run smbclient similar to what backuppc would
do (you can get the command line from the logfiles).
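
A sketch of such a test (user and share are placeholders; take the real ones
from the XferLOG):

===
# on the backuppc server, as the backuppc user
sudo -u backuppc smbclient //client/share -U backupuser -c 'ls Music'
===

If this also returns NT_STATUS_ACCESS_DENIED, the problem really is in the
Windows-side permissions, not in backuppc.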

Have fun,

Arnold



Re: [BackupPC-users] backupPC and backup via VPN connection.

2012-07-26 Thread Arnold Krille
On Tuesday 24 July 2012 02:51:47 bubolski wrote:
 Hello all.
 
 I got a problem with this (topic). When i'm connected to the same wireless
 via cable I can start backup on backuppc. When i got internet from wifi and
 i'm connected to my work wireless via vpn i can ping my computer from
 backuppc but can't start backup. Got information about no ping.
 
 Why i can ping from my - pc backuppc and from backuppc - my pc , but can't
 start backup for my computer ? Via cable is the same situation but I can
 start backup.

Probably backuppc isn't complaining about no ping but about a slow ping.
The problem with wireless connections is that these are slooow. And wireless
usually has higher latency; that's why backuppc checks the answer time of the
ping and then decides whether to treat the connection as slow or fast enough
for backups.
Of course you can define what backuppc considers slow by adjusting the
corresponding value in the config/interface. Finding the value is left as an
exercise to the reader...
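
To spoil the exercise a little: the value in question should be the maximum
ping time, e.g. in the per-host config.pl:

===
# default is 20ms; a VPN over wifi easily needs more
$Conf{PingMaxMsec} = 200;
===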

Have fun,

Arnold



Re: [BackupPC-users] Running backuppc on a raspberry pi - possible ?

2012-07-23 Thread Arnold Krille
On Sun, 22 Jul 2012 21:27:45 +0200
Poul Nielsen p...@langhuse.dk wrote:
 I am considering using a raspberry pi which is a very low power, low 
 spec linux unit.
 http://www.raspberrypi.org/
 CPU:  700 MHz ARM11 ARM1176JZF-S core
 Memory (SDRAM)iB  256 MiB
 USB 2.0 ports:2 (via integrated USB hub)
 Wanted:
 - home use
 - versioned backup from a number of PCs and devices
 - speed is not essential
 - using rsync where possible
 rsync might need more memory than the max 192 Mb available?
 Any experience ?

I can only speak from my experience: I built a home server with
exactly your needs from an intel atom (N270 @1.6GHz, 2GB RAM). The hardware
isn't that much more than the Raspberry, it has real sata, real gigabit
ethernet (*), something like 2-4GB RAM and a faster cpu, at least with
HT, nowadays even with dual-core. And power usage is mostly dictated by
your disks. I have a solid-state drive for the os, the logs and the
incoming folder of the torrent client. Everything else is on one of two hdds
which spin down when not in use (and when outside my home-office hours).

(*) Actually it has two gigabit ports and 6 serial interfaces, as it's an
industrial board which cost something like 120€.

It runs:
 - yate
 - firewall + squid + privoxy + several openvpn instances
 - mpd (music player daemon to have the same audio in all our flat)
 - backuppc
 - apache with php and python
 - mysql (for some small development projects)
 - postgresql (for some other projects and as backend for yate)
 - with its internal wifi-card (mini-pci-express) it also serves as the
   access point.
 - nfs server
 - sometimes test-setups of gluster/ceph/sheepdog

When backuppc runs for two clients and a movie is decoding, music tends
to stutter. But all in all I am very content with the performance of
that baby.

Have fun,

Arnold




Re: [BackupPC-users] encrypted pc and pool directory

2012-05-18 Thread Arnold Krille
On Thursday 17 May 2012 15:46:26 John Hutchinson wrote:
 ok That answers my question.  The issue is that we are looking at
 backing up clients machines and my boss wanted to be able to tell them
 that even we can not see their files.  I did not think it was possible
 but thought it was worth asking.

Encrypting the data on the client side has several consequences:
 - BackupPC is really good at de-duplication. The same file stored on several
clients in several backups only takes up space once in the pool. With
client-side encryption this would be half-way deactivated, as only the same
file from the same client could be de-duplicated.
 - Client-side encryption also enforces client-side decryption. Lose the key
on the client (because you lost the client) and you also lose all the data.
This pretty much counters the whole purpose of a backup.

Yes, your clients have to trust you regarding the backup. But they (hopefully)
already trust you with their system administration.
And it will be easier for them to trust you with the backup while all is well,
than to trust that you can restore at least some of their data from their
fried disk using a clean room and an oscilloscope.
And they should trust you with their backup instead of trusting a thief to
return the data...

What we do:
 - Encrypt the disk backuppc runs on, that helps when someone steals the 
disk/machine.
 - Secure our systems, that helps when someone enter the network.
 - Write gpg-encrypted tars to tape/nas. Helps when someone steals the media.
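
The last point is roughly a one-liner per host (the host name and key are
placeholders; note that only the public key is needed on the backup server):

===
BackupPC_tarCreate -h client1 -n -1 -s '*' . \
    | gpg --encrypt --recipient backup@example.com \
    > /mnt/nas/client1-$(date +%F).tar.gpg
===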

Have fun,

Arnold



Re: [BackupPC-users] encrypted pc and pool directory

2012-05-16 Thread Arnold Krille
On 16.05.2012 22:52, John Hutchinson wrote:
 Is there any way to setup backuppc so that the pc and the pool directory 
 are encrypted so they can only be accessed by the web interface with a 
 valid user?

If you mean encryption: No, not really. You can encrypt the disk where
backuppc stores the data. But anything you do will be un-encrypted as
long as backuppc (and the webinterface via apache) is running.

If you mean authentication/authorization: yes, that's one of the things
apache can do. And that's really what access the web interface with a
valid user means. Note that the definition of a valid user is only
limited by what apache supports for this (which is quite a lot and
includes kerberos and ldap and such things). See the
apache documentation for that.
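
A minimal apache sketch of exactly that basic-auth setup (the file locations
are the usual Debian ones and an assumption):

===
<Location /backuppc>
    AuthType Basic
    AuthName "BackupPC"
    AuthUserFile /etc/backuppc/htpasswd
    Require valid-user
</Location>
===

The user name apache authenticates then has to match $Conf{CgiAdminUsers} or
a host's moreUsers entry to actually see anything.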

Have fun,

Arnold

PS: Is there a reason you didn't start your own thread? - Note that just
hitting reply and editing the subject does _not_ create a new thread; your
mail still contains the in-reply-to: and references: headers and thus still
belongs to a different thread...


Re: [BackupPC-users] apache config?

2012-05-10 Thread Arnold Krille
On 10.05.2012 00:01, Les Mikesell wrote:
 On Wed, May 9, 2012 at 4:56 PM, James Ward jew...@torzo.com wrote:
 It was the firewall!  Sorry for making you wonder!
 So you shouldn't have actually gotten a 500 error back- although these
 days browsers go out of their way to make the real problem hard to
 find...

When the firewall blocks the communication from the CGI to the backuppc
daemon, you will get a 500.



Re: [BackupPC-users] apache config?

2012-05-09 Thread Arnold Krille
On Wednesday 09 May 2012 12:28:48 James Ward wrote:
 Any other ideas?  I've lost control of my BackupPC!

Nope. There is also the command-line interface and the config files. The
webgui is just an addon; backuppc itself can be perfectly fine when the gui
fails...

Please check the install howto of backuppc and check whether the command-line
tools can control/see the daemon.
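
A quick check along those lines (the path is the Debian one and an
assumption):

===
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg status info
===

If that answers, the daemon is fine and the 500 is purely an apache/CGI
problem.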

Have fun,

Arnold



Re: [BackupPC-users] apache config?

2012-05-08 Thread Arnold Krille
On Tuesday 08 May 2012 11:27:47 James Ward wrote:
 So, apache on my BackupPC system responds to both http and https requests,
 but when I try to load the BackupPC interface, it hangs for a long time and
 then comes back with internal server error.  Any ideas?
 
 I'm using Debian Squeeze with current package installed apache
 2.2.16-6+squeeze7 and backuppc 3.1.0-9.1.
 
 This WAS working for EONs.

The daemon for backuppc is not running, or is not reachable where it's
supposed to be.



Re: [BackupPC-users] Bare metal restores

2012-04-27 Thread Arnold Krille
On Friday 27 April 2012 12:16:36 Brad Alexander wrote:
 Grub uses them too...But I changed them in grub.cfg and fstab (and
 /etc/cryptab), and grub was still having issues...I tried both
 update-grub /dev/sda and dpkg-reconfigure linux-image-3.2.0-2-amd64
 (to rebuild the initramfs) and both gave me disk not found.

You also want grub-install --recheck your_drive. That makes grub look at
the actual devices and re-read the uuids.
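
From a rescue system that usually looks something like this (device names
are examples):

===
mount /dev/sda1 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install --recheck /dev/sda
chroot /mnt update-grub
===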

Good luck,

Arnold



Re: [BackupPC-users] Backup aborted

2012-04-22 Thread Arnold Krille
On Saturday 21 April 2012 16:15:51 Les Mikesell wrote:
 On Sat, Apr 21, 2012 at 1:55 PM, Gary Roach gary719_li...@verizon.net wrote:
  Any help will be appreciated
 Personally I just use rsync over ssh for localhost connections just
 like any other linux target so it isn't a special case.   But I use
 rsyncd on windows and it should work too.   You just need to run the
 daemon as a user with sufficient access, connect as an allowed user
 with the right password, and not send any extraneous text first.

Personally I use sudo, rsync and a correct sudoers file to allow backuppc to
run rsync directly, without ssh, for localhost backups. Rather easy to set up,
no unwanted encryption involved and no cpu cycles spent unnecessarily.
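
A sketch of that setup (the config lines follow the stock BackupPC 3.x
variables):

===
# /etc/sudoers (via visudo): let backuppc run rsync as root, nothing else
backuppc ALL=(root) NOPASSWD: /usr/bin/rsync

# per-host config.pl for localhost: replace ssh with sudo
$Conf{RsyncClientCmd}        = '/usr/bin/sudo $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList+';
===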

Arnold



Re: [BackupPC-users] Backup aborted

2012-04-22 Thread Arnold Krille
On Sunday 22 April 2012 09:04:06 Gary Roach wrote:
 I think we are going a bit astray here. I have two other computers that
 need backup but am not talking about them because it is useless until I
 can get localhost to work. I don't want to be using two different
 methods. I think I am fighting a security issue at this point. I'm still
 working on the problem.

Why?

Why do you want to back up all hosts with the same method? Do you also try to
back up windows machines with rsync(d)?
Use the best tool available for the job! When the backup is local, run local
rsync (with sudo to get the rights). When the backup is a remote machine, use
ssh+rsync. When the remote machine is win, use smb... Or use tar if you like
that better...

Arnold



Re: [BackupPC-users] Encryption and long-term storage

2012-03-18 Thread Arnold Krille
Hi,

On Sunday 18 March 2012 16:44:30 Justin Finkelstein wrote:
 I've been reviewing a number of backup solutions to use for off-site
 backup of a number of machines in our office and I've a couple of
 questions regarding backuppc:
 
 1. Is there a way to encrypt the data that's stored?

While we write encrypted tapes to kind-of-remote discs, storing the real 
data encrypted is probably not a good idea because of the pooling. But you can 
put the whole backuppc-filesystem on an encrypted disk.

 2. Is it practical to store a year's worth of data in BackupPC? i.e.
 one full annual backup + daily differentials?
 (i.e. how does this affect the time to restore a recent version
 of a file?)

Read the documentation about the number of full backups to keep. It can
actually be an array where each number denotes double the interval of its
previous number. So [4, 0, 4] means keep 4 weekly, 0 bi-weekly and
4 almost-a-month full backups. [4, 0, 0, 4] means keep 4 weekly and 4
bi-monthly full backups. That is when your full interval actually is 7 days...

Whether you keep full or incremental backups on disk is not important, as
backuppc already de-duplicates the stored files. It's only a matter of which
files are transferred from the client to the server.

Have fun,

Arnold



Re: [BackupPC-users] Restore folder with partially removed files

2012-03-16 Thread Arnold Krille
Hi,

On 16.03.2012 16:23, Les Mikesell wrote:
 On Fri, Mar 16, 2012 at 5:56 AM, Tyler J. Wagner ty...@tolaris.com wrote:
 Restore with the rsync method. It will copy only the lost files.
 But I think the default will overwrite newer changes with older backup
 files.   You could add --update to the RsyncArgs to change that.  I'm
 usually paranoid enough to copy or move the current directory contents
 out before a restore if there is any chance of needing it, though.

We usually restore to a global restore folder on our customers' servers.
You can also download a zip file...

Only when the client has lost a single file/dir do we restore in place.

Have fun,

Arnold


Re: [BackupPC-users] replicating the pool from a NAS

2012-03-16 Thread Arnold Krille
Hi,

On 16.03.2012 12:05, Michael Kuss wrote:
 surely this is the nth question on how to replicate a backup pool.
 However, this time it's from an old 300GB Lacie ethernet disk mini.  I
 tried:
 1) plain rsync with -H.  This worked last year, I remember it took a week
 or so.  This year, either the LAN is worse, or the disk is aging, I get
 regularly timeouts.
 2) rsyncing just cpool, and using tarPCCopy for the single backups.  Works,
 but it is very slow, I'm now in the second week, with another week to go.
 And, I had some timeouts also here.  So, I have to rerun some backups for
sure, and to be prudent I should verify everything anyway.
 3) I tried to find a tool similar to dd which works on cifs mounted NAS and
 just copies the raw device.  I had no success.
Anybody have any advice on how I could speed this process? In case it's
relevant, the disk is formatted xfs.
 Another option is just forget about it, start with a fresh pool, and hope

Two ideas:
  - Take out the disk, plug it directly into your machine via sata. Then
do the dd/rsync/whatever you want to do. 300GB copied locally doesn't take
that long.
  - Start a new backuppc installation where you copy/recreate your setup
with new, bigger disks. Check that all works the same as the old install.
Let them run in parallel for some days. Stop the backups on the old
backuppc install but do not delete it. You can still access the backups
in case you need them. Remove the old installation only when the new
installation starts to deprecate full backups.

Although it works, I can't really recommend using a distant nas for
/var/lib/backuppc. It doesn't give you any of the off-site advantages,
because it's not really off-site and it's always connected. Unless you
stop the backuppc process. But then you don't get the automatic
backup scheduling.
Better use an internal disk (mirrored with lvm, raid or drbd) and write
daily, weekly or monthly archives to a nas. You can then schedule these
archives either by cron or by hand. You can even make cron check whether
the nas is present, so that you or your co-workers regularly take the
nas home. Then it's off-site... One further step would be two nas
devices used in alternation.

Have fun,

Arnold


Re: [BackupPC-users] Archive function

2012-03-16 Thread Arnold Krille
Hi,

even if you intended for this question to be off-list, I think my answer 
could be interesting to others as well;-)

On 16.03.2012 18:36, Timothy J Massey wrote:
 Arnold Krille arn...@arnoldarts.de wrote on 03/15/2012 04:29:45 PM:
 The scripts are in /var/lib/backuppc/bin, so they are present on both
 the cluster machines. Yes, my backuppc runs as (one of many) service on
 a two-node pacemaker-cluster with shared config for the
 backuppc+apache-stack on a small drbd and /var/lib/backuppc on a bigger
 drbd...

 Have you had any issues or shortcomings with this configuration compared
 to a normal single-machine BackupPC installation?  I looked into that same
 configuration (using DRBD and heartbeat), but decided that my BackupPC
 installations (which are on dedicated embedded-style hardware providing
 only BackupPC) were so very reliable that it wasn't worth the hassle, and
 that by far the biggest issue I've experienced has been disk failure and I
 addressed that with RAID (and even that failure was rare, and usually not
 without warning).

 I still feel this way, but would still be very curious to hear about your
 experience.  It is definitely something that I keep in my back pocket:
 with virtualized environments, we want both file- and disk-level backups,
 and my BackupPC servers are now growing into some pretty large servers of
 their own...

My backuppc-pacemaker-drbd setup was just a side-effect of a bigger 
project to use drbd and pacemaker to take our virtual machines and their 
disk-resources into the high-availability world. One of the first 
resources I made HA was a simple apache to display the cluster-status as 
html-page. Then I also added backuppc to that resource group.
So there is one drbd-share for the shared configuration (aka 
sharedconfig) with the config for this ClusterGroup of backuppc, bind, 
isc-dhcp, tftp and apache. That drbd-share is small, the additional 
drbd-resource for backuppc is of course bigger. Constraints then put all 
these resources onto the same node.
And it works rather nicely. (Some other parts of these first experiments
didn't go that well. Let's just say gfs2 isn't my friend for the next year.)

As for disk-mirroring with drbd and/or md and/or lvm:
  - One thing I learned from hard (bad) experience is that when using 
mirroring, you should always use two different disks. Buy the same disks 
at the same time (bonus points for directly incrementing serial 
numbers), put them in the same machine, let them live through the same 
usage and you will find that they also break at the same time.
Don't tell me this doesn't happen because it never happened to you, it 
happened to me and to people I know. And I can show you the statistical 
calculation that I am right.
  - I find mirroring with lvm to be more flexible than just hw- or
sw-raid. With lvm you can decide for each of your dynamic discs whether
they are unimportant and don't need mirroring, or pretty important and
need three copies while everything else gets just two copies (see the
sketch after this list). I tend to use sw-raid only for the system
partition (with metadata format 0.90) because that is also mountable if
your kernel doesn't know about raid.
  - You should stay away from hw-raid. It might give you a bit more 
performance. But if you don't keep a second controller spare, you are 
f***ed when the controller goes to the electronic heaven and your client 
insists that he paid a premium for the raid to have zero downtime.
  - Mirroring discs with drbd works surprisingly well in standard
single-primary mode. The latency will be a bit higher than what you get
from pure local discs, but you get a much better mean time between
failures because you have fewer (or no) single points of failure.
  - When you need more mirrors, either bug/pay the drbd-authors or try 
one of the alternatives like MooseFS, Ceph or (my current favourite) 
GlusterFS. But I didn't yet test any of these with backuppc...
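
For completeness, the per-LV mirroring mentioned above looks roughly like
this (names invented; recent lvm versions do the same with raid1 segments):

===
# two copies of the backuppc LV, mirror log kept in memory
lvcreate --mirrors 1 --mirrorlog core --size 500G --name backuppc vg0
===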

Man, I should make a blog-post out of this. If I had a blog...

Have fun,

Arnold


Re: [BackupPC-users] Archive function

2012-03-15 Thread Arnold Krille
Hi,

On 15.03.2012 15:39, Rob Hasselbaum wrote:
 Attached is a script I execute from cron that deletes old archives in a
 configured directory and then starts new archives for all hosts except
 localhost. Feel free to use it as a starting point. Should run with minimal
 modifications on Ubuntu Server.

Hihi, my script is much simpler.

But then I use pre- and post-archive scripts (a condensed sketch follows
below) to:
pre archive:
 - Attach the iscsi
 - Create the mountpoint in /tmp
 - Mount the iscsi (via its link in /dev/disk/by-path)
 - Delete all old archives older than 30 days
  - output df of the device for the backuppc log
after archive:
  - output df again for the logs
  - umount the iscsi
  - delete the mountpoint
  - detach the iscsi
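
Condensed into a sketch, the pre-archive script is little more than this
(target name, portal and device path are invented):

===
#!/bin/bash
# pre-archive: attach and mount the iscsi target, prune old tapes
iscsiadm -m node -T iqn.2012-03.example:tapes -p nas.example.com --login
sleep 5                                  # give udev a moment
mkdir -p /tmp/tapes
mount /dev/disk/by-path/ip-nas.example.com:3260-*-lun-0-part1 /tmp/tapes
find /tmp/tapes -name '*.tar.gz' -mtime +30 -delete
df -h /tmp/tapes | logger -t backuppc-archive
===

The post-archive script does the df/umount/rmdir/logout dance in reverse.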

The scripts are in /var/lib/backuppc/bin, so they are present on both 
the cluster machines. Yes, my backuppc runs as (one of many) service on 
a two-node pacemaker-cluster with shared config for the 
backuppc+apache-stack on a small drbd and /var/lib/backuppc on a bigger 
drbd...

Have fun,

Arnold



Re: [BackupPC-users] Using dd to copy the BackupPC data

2012-03-15 Thread Arnold Krille
Hi,

On 15.03.2012 22:05, Brad Morgan wrote:
 The BackupPC manual says: The best way to copy a pool file system, if
 possible, is by copying the raw device at the block level (eg: using dd).
 Could someone give an example of how to do this?

dd if=source_partition of=target_partition bs=4M

 Can someone explain why the output file system doesn't get reduced to the
 size of the input file system?

Actually the output filesystem is an exact copy of the source. (Be
careful with xfs filesystems here; they don't like mounting the same
uuid on the same machine twice.)
If your target partition is bigger, then you need to grow the
filesystem after the dd call finishes.
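
For the concrete case above, growing included (ext4 assumed, per the
original mail):

===
dd if=/dev/hdb1 of=/dev/hdd1 bs=4M
e2fsck -f /dev/hdd1        # required before resizing
resize2fs /dev/hdd1        # grow to fill the 200GB partition
===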

 For example, I have a 127GB disk, /dev/hdb1, mounted via fstab on
 /var/lib/backuppc. I have a new 200GB disk, /dev/hdd1, that I used fdisk to
 create a single partition, mkfs.ext4 to create the file system,

That mkfs-step is not necessary.


 and mounted it on /mnt/tempBackupPC.

This you should skip completely.

 If I use dd if=/dev/hdb1 of=/dev/hdd1 doesn't
 this copy the partition table and file system metadata, superblocks, etc.
 such that the output device will now look exactly like the input device
 including being reduced in size from 200GB to 127GB?

Notice the 1's in your command? These say that only the partition is 
copied, not the mbr and partition-table.
But then the whole filesystem is copied exactly.

Be careful with your additional mkfs and mount from above: the os will 
access that filesystem and might destroy the parts you just copied with 
dd. If you copy while that fs is mounted...

Have fun,

Arnold


Re: [BackupPC-users] Archive function

2012-03-14 Thread Arnold Krille
On Wednesday 14 March 2012 13:33:32 Philip Kimgård wrote:
 Hi,
 How can I choose to create an archive of all hosts like I can in the browser
interface? I want to schedule an archive job using cron (or even better,
 in the CGI), but when I try to run BackupPC_archiveStart I can only choose
 one host, right?

You can select any number between 0 and the number of your hosts configured.
There is even a checkbox to select all available hosts with one click.

 Another (not as important though) problem is that when I set a host with
 transfer method to archive, it dosen't show in the Host Summary.

That is because an archive host is not something backuppc schedules
automatically and therefore there is no need to have it in the summary. This
might be questionable, but that's how it is.

BTW: Once you have configured your archive-host, you can schedule automatic
backups (which are actually automatic-tar-write-outs) via cron and the
commandline. We use that here to write (encrypted) tar 'tapes' to a nas on a
weekly basis.
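
The cron side of that is roughly a one-liner; the arguments are the archive
host, the requesting user, then the hosts to write out (names invented, run
as the backuppc user):

===
BackupPC_archiveStart archive backuppc client1 client2 client3
===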

Have fun,

Arnold



Re: [BackupPC-users] Remote Mgmt

2012-02-17 Thread Arnold Krille
On Friday 17 February 2012 13:02:07 Steve Willoughby wrote:
 On 17-Feb-12 12:35, Les Mikesell wrote:
  On Fri, Feb 17, 2012 at 2:24 PM, Zach Lanich zlan...@gmail.com wrote:
  How do I access my backuppc interface from outside the local
  network? I have a webserver set up through isp config and I can get
  to my website I built but isp gives me a 500 error when i try to
  access backuppc.
  
  The packaged apache config probably restricts access to the local host.
  In the EPEL rpm that would be in /etc/httpd/conf.d/BackupPC.conf, but
  the ubuntu version might be somewhere else.   If you can find it, change
  it to allow from all and restart apache (if you are sure you want to do
  that...).
 
 If you are sure you want to do that...
 
 Re-read Les' last sentence a few times and let it really sink in before
 going further with this.
 
 BackupPC has essentially root-level access to all your backed-up
 systems, and certainly has access to all their file data (and can
 restore back onto them, probably).
 
 Do you *really* feel confident that the way you log in to BackupPC over
 the web is secure?  Using only HTTPS?  Checking certificates properly?
 Even with that, you're 100% sure your webserver can't be compromised
 from the outside?
 
 If you really need this, perhaps a better thing to do would be to SSH in
 to the host and set up a tunnel over that SSH connection to reach your
 BackupPC server.

Even if your apache and perl/cgi and backuppc are safe from any hacking, is
the rest of the system invincible too?
Do you really want to store the backups of crucial machines on hardware
physically controlled by someone else?

It is less of a problem when it's data from servers in the same third-party-
operated datacenter. But when the data housed inside your business goes
outside for backups, you should really make sure everything is safe. And with
backuppc it's not only the data itself, but also the access to the backed-up
machines.
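
Steve's tunnel suggestion from above boils down to this (host names
invented):

===
# forward a local port to the backuppc server's web interface
ssh -L 8080:localhost:80 admin@backupserver.example.com
# then browse http://localhost:8080/backuppc
===

That way apache never needs to be exposed beyond localhost, and all
authentication rides on ssh keys.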

Have fun,

Arnold



Re: [BackupPC-users] I've Tried Everything

2012-02-16 Thread Arnold Krille
On Thursday 16 February 2012 02:01:53 Zach Lanich wrote:
 This is all the Log has in it when I try rsync:
 
 2012-02-15 21:59:02 full backup started for directory /Users/zlanich/Sites
 2012-02-15 21:59:35 Got fatal error during xfer (Child exited prematurely)

Oh! A premature exitulation!

You did become the backuppc user on the backup server and then did a first
ssh root@target to accept the host fingerprint and make sure the
passwordless login works?

Have fun,

Arnold



Re: [BackupPC-users] Accessing old backuppc server

2012-02-16 Thread Arnold Krille
On Thursday 16 February 2012 11:08:38 Bowie Bailey wrote:
 On 2/16/2012 10:50 AM, Les Mikesell wrote:
  On Thu, Feb 16, 2012 at 8:45 AM, Bowie Bailey bowie_bai...@buc.com wrote:
  My backup method involves sending a mirrored drive offsite on a
  regular
  basis.  This drive contains the OS, BackupPC program, and the pool.
  
  Now I need to access some information from one of the old backups.  I
  can boot it up normally, but I want to make sure that BackupPC
  does not
  try to run any backups or do any cleanup while I'm working with it.  I
  know there's a flag I can set to prevent backups from running, but
  what
  do I need to do to prevent the cleanup from removing any of the
  existing
  backups?
  
  One option would be to not even start the server - just extract what
  you need with the BackupPC_tarCreate command line program.
 
 Hmm...  not a bad idea.  I would rather be able to use the GUI for
 simplicity, but the command line programs would be safer.

Please check (and report) what the gui shows when the daemon is not running.

Last time I tested with the webgui on a different machine than the fs and
daemon, the gui did show the machines and status but not the actual files.
When the daemon can't be contacted, it could be the other way round :-)

Have fun,

Arnold



Re: [BackupPC-users] I've Tried Everything

2012-02-16 Thread Arnold Krille
On Thursday 16 February 2012 10:49:53 Zach Lanich wrote:
 Yep, that's exactly what I did to test the ssh. I have no issues with my
 network or ssh locally or remotely outside of backuppc.

But you did that as root or as your user. backuppc normally runs as user
backuppc, and that user has its own .ssh dir and has to accept the hosts as
well.

Basically you should take a look at what backuppc is doing, and use the exact 
same commands (running as the same user) and see where it fails.

Have fun,

Arnold



Re: [BackupPC-users] I've Tried Everything

2012-02-16 Thread Arnold Krille
On Thursday 16 February 2012 13:07:01 you wrote:
 backuppc runs as user backuppc on the Server, but logs into the client/host
 as root. so there shouldnt be any permissions issues. and ssh works fine
 and authenticates automtically using ssh keys.

I am not giving up that fast (because I know it's the primary source of
non-working backups):

Have you actually tested that the user backuppc can successfully log into
the client (as root) without a password and with a known host???

There shouldn't be, it should work and I don't see why it shouldn't are
not statements that will make me stop buggering you.

Become the backuppc user on the server (sudo -u backuppc -i bash), then do a
ssh root@client. When that gives you a command prompt without any further
keyboard action from you, you have advanced a step. The next step is to
check whether rsync and/or tar are actually available on the client.
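
A quick test session (the client name is a placeholder) could look like:

===
sudo -u backuppc -i bash
ssh root@client 'rsync --version && tar --version'
===

If that prints both version banners without asking for a password, the
transport side is fine.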

Good luck,

Arnold


Re: [BackupPC-users] different hosts, different schedules, different directories

2012-02-13 Thread Arnold Krille
On Monday 13 February 2012 09:46:31 Les Mikesell wrote:
 On Mon, Feb 13, 2012 at 7:58 AM, Ingo P. Korndoerfer korndoer...@crelux.com
  wrote:
  
  oh ...
  
  o.k. ... got it ...
  
  i was looking in the main menu, went into hosts and did not find any way
  to select that host there.
  
  and did not see the HUGE select a host at the top left.
 
 You can get there from the host links on the 'host summary' page also.  It
 is a good idea to visit that page occasionally and scan down the 'last
 backup' column to make sure the numbers are less than 1.

If you have configured your backup schedule that way.

I do have lots of machines with an incremental every two days and a full
backup every two weeks. And there are some systems with an incremental every
week and a full every month...

Better to check the same page for any yellow or orange lines, as those mark
unreachable machines and errors.

Have fun,

Arnold


Re: [BackupPC-users] offsite storage via laptops?

2012-01-31 Thread Arnold Krille
On Tuesday 31 January 2012 13:36:13 Joe Konecny wrote:
 In an effort to fulfill an offsite backup requirement I had the idea to
 place a password protected tar archive onto one or more of our users
 laptops.  The laptops are taken home each night and it would be quick to
 ftp the archive sometime during the day.
 
 Does anyone think this is a bad idea?

Replace that "password protected" with "gpg encrypted" and you get a good
idea. It becomes a very good idea when the private key is kept only on a
secure machine in the admin's possession (and on an encrypted stick at the
boss's home); only the public key is needed to encrypt the backup, so the
decryption can only be done with the private key. And once you have had to
use that key for a restore, you generate (and use!) a new key pair for the
new backups.
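
A minimal sketch of such an encrypt-on-write step (host name, recipient key
and target path are all assumptions, not a fixed recipe):

===
BackupPC_tarCreate -h laptop01 -n -1 -s / . \
  | gpg --encrypt --recipient backup@example.com \
  > /srv/offsite/laptop01-$(date +%F).tar.gpg
===

Only the public key of backup@example.com has to be present on the backup
server.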

Have fun,

Arnold


Re: [BackupPC-users] offsite storage via laptops?

2012-01-31 Thread Arnold Krille
On Tuesday 31 January 2012 18:22:29 Joe Konecny wrote:
 On 1/31/2012 6:07 PM, Arnold Krille wrote:
  Replace that "password protected" with "gpg encrypted" and you get a good
  idea. It becomes a very good idea when the private key is kept only on a
  secure machine in the admin's possession (and on an encrypted stick at
  the boss's home); only the public key is needed to encrypt the backup, so
  the decryption can only be done with the private key. And once you have
  had to use that key for a restore, you generate (and use!) a new key pair
  for the new backups.
 
 Yes, security will be addressed.  But the idea of transporting the archive
 offsite with a laptop that goes home each day is good, I think.

Taking the tape home is basically your off-site storage. So, yes, that's a
very good idea. Our customers are told to do the same with their tapes (some
even follow this advice).

But if it's your responsibility to care for the security, better not rely on
protection with a simple password; anything below 100 bytes/characters
counts as insecure. So do the only sensible thing and use gpg or SSL
certificates to encrypt the tapes you send off-site via laptops. And do the
same when you send them off-site via tape or USB stick.
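
As a periodic restore drill, run on the machine that actually holds the
private key (the file name here is just an example), one short pipeline
verifies that an archive is still decryptable and listable:

===
gpg --decrypt laptop01-2012-01-31.tar.gpg | tar -tvf - | head
===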

Have fun,

Arnold


Re: [BackupPC-users] simplest way to back-up to a remote share?

2012-01-24 Thread Arnold Krille
On Tuesday 24 January 2012 14:38:28 Ivanus, Radu wrote:
 I configured backuppc on a Ubuntu server machine.
 All fine, but I'm having a hard time with the following scenario:
 I need to back up a network share called \\domain\folder to another
 network share called \\ip\folder (different from the 1st one).
 What is the best way to do this? (I would appreciate if someone can help
 me with clear steps as I'm new to linux environment).

That one is simple: Not with backuppc! :-)

If you think backuppc can solve this problem, you haven't (yet?) used 
backuppc.

A solution for your problem would be anything from good old 'cp' and similar
copy operations (preferably preserving attributes) on up.
I use(d) rsnapshot for a lot of stuff, but that saves to a local disk too
and needs hard links; if at least the target share is NFS, you can use it.
If both shares are CIFS/SMB and you only want a backup of the current state
and maybe one old version, then rsync should help you.
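
A minimal sketch of that rsync variant (the mount points and the credentials
file are assumptions):

===
mount -t cifs //domain/folder /mnt/src -o credentials=/root/.smbcred,ro
mount -t cifs //ip/folder     /mnt/dst -o credentials=/root/.smbcred
rsync -rtv --delete /mnt/src/ /mnt/dst/current/
===

-a would try to preserve owners and permissions, which CIFS usually can't,
so -rtv (recursive, times, verbose) is the safer default here.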

Have fun,

Arnold


Re: [BackupPC-users] Renaming a host in WebGUI?

2012-01-18 Thread Arnold Krille
On Wednesday 18 January 2012 16:03:47 Mark Wass wrote:
 I want to rename a host in the backuppc config (I'm looking at the Edit
 Hosts page in the WebGui) and just wanted to check if all I have to do is
 rename it at this location or if there was anything else that had to be
 done.
 
 In the config of the host I want to rename I have set the ClientNameAlias
 override to the IP address of the host, I don't need to change that I just
 want to change what hosts is called in the WebGui.

Changing the alias will only change how BackupPC tries to reach the remote
machine. That is safe and easy to change in the GUI.

Changing the name of the host (which doesn't always have to be a real
hostname) requires you to also move the corresponding directory in
/var/lib/backuppc/pc, and if you want to keep the special configuration you
should also copy the host's config in /etc/backuppc.
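
A sketch of the rename (old/new names are placeholders; the paths are the
Debian/Ubuntu defaults):

===
mv /var/lib/backuppc/pc/oldname /var/lib/backuppc/pc/newname
cp /etc/backuppc/oldname.pl /etc/backuppc/newname.pl
===

Then rename the entry on the Edit Hosts page and reload BackupPC.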

Have fun,

Arnold


Re: [BackupPC-users] When do files get re-transferred?

2011-12-23 Thread Arnold Krille
On Friday 23 December 2011 17:54:40 Les Mikesell wrote:
 On Fri, Dec 23, 2011 at 5:09 AM, Rahul Amaram ra...@synovel.com wrote:
  Hi,
  I have a query about file transfer while taking backups. I understand
  that backuppc uses de-duplication i.e. only a single copy of the file is
  stored even if multiple copies of it exist on different machines.
  However, what I would like to know is when a file is transferred after
  being backed up once. Is it during the next full-backup? Is it after a
  certain duration has elapsed? Or is it that once a file is copied it is
  never transferred again (even for full-backup) unless it is changed?
 
 The de-dup and xfer are mostly unrelated.  Only the rsync and rsyncd
 xfer methods avoid subsequent transfers and they do it by comparing
 against the previous full of the same host.

Well, actually the comparison is done against the last backup of a lower
level. Full dumps are always level 0, while for incrementals you can freely
cascade any levels greater than zero.
Although we do that on some of our backups, we haven't yet found any logical
or technical reason to actually do so. It only really matters when you save
incremental tapes...

Have fun,

Arnold



Re: [BackupPC-users] 8.030.000, Too much files to backup ?

2011-12-16 Thread Arnold Krille
Hi,

On Friday 16 December 2011 10:42:00 Jean Spirat wrote:
   I use backuppc to save a webserver. The issue is that the application
 used on it is making thousands of little files used by a game to create
 maps and various things. The issue is that we are now at 100GB of data
 and 8.030.000 files, so the backups take 48h and more (to 'help', the
 files are on an NFS share). I think I have come to the point where
 file-based backup is at its limit.

Excuse my off-topic-ness, but with that many small files I kind of expect a
filesystem to reach certain limits. Why is that webapp written to use many
little files? Why not use a database where all that stuff is in blobs?
That would be easier to maintain and easier to back up.

Have fun,

Arnold



Re: [BackupPC-users] Scary problem with USB3...

2011-12-15 Thread Arnold Krille
On Thursday 15 December 2011 19:31:32 Zach La Celle wrote:
 We just upgraded our backup machine and are using an external USB3 hard
 drive for backups.
 Last night, something went wrong, and when I got in this morning I saw
 the following errors on the backup machine:
 [88921.670598] usb 1-1: device not accepting address 0, error -71
 [88921.670665] hub 1-0:1.0: cannot disable port 1 (err = -32)
 [88921.674631] usb 1-1: Device not responding to set address.
 [88921.880971] usb 1-1: Device not responding to set address.
 Could this be caused by BackupPC?  When I unplugged and replugged the
 USB hard drive, it started working, but I'm worried that BackupPC is
 corrupting the drive somehow.

USB has some features built in, like randomly disconnecting and reconnecting
devices. What is merely an annoyance with keyboards and mice not reacting
for a short moment gets more awkward with disks disappearing (and thus
unmounting) and reappearing without remounting...
Needless to say, the feature of reappearing is optional and missing in the
most crucial use-cases.

Even now that USB3 finally has a speed usable for transferring files bigger
than a few KB in an acceptable time, I wouldn't use it for anything of
importance. Use eSATA when you need something external, or use drive bays
for hot-swap when you need to carry the disks around. Of course a NAS device
is also always an option.

Have fun,

Arnold



Re: [BackupPC-users] Upgrade on Ubuntu

2011-12-10 Thread Arnold Krille
On Saturday 10 December 2011 09:18:27 Thomas Nilsson wrote:
 I've inherited a BackupPC installation which hadn't made any backups for
 504.3 days (gulp!) but have managed to set that right (I think).

Nice! Is that the reason why the former admin was made to leave it to you?

 It is on an Ubuntu 8.04 server and now running BackupPC 3.0.0. As the wiki
 is very clear about not mixing package and by hand upgrades, I was
 wondering if there is a way to determine if my installation has been
 performed using apt-get or by hand?

Use aptitude (or synaptic if you prefer a GUI) and see whether backuppc was
installed via package management.
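
From the command line, a quick check (Debian/Ubuntu package name assumed)
might be:

===
dpkg -l backuppc
dpkg -S /usr/share/backuppc
===

If the package shows up as 'ii' and owns the installed files, it came from
apt; files that no package owns point to a by-hand install.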

And then update that 8.04 to 10.04 to get new features, more security fixes and 
less trouble when 12.04 arrives...

Have fun,

Arnold



Re: [BackupPC-users] Backing up from BackupPC to BackupPC

2011-12-09 Thread Arnold Krille
On Friday 09 December 2011 15:28:43 Andrew Schulman wrote:
  On Fri, 2011-12-09 at 13:15 +, member horvath wrote:
   Hi,
   
   I have a requirement where I need to deploy a backuppc installation on
   a site that will connect to several servers and backup their required
   files.
   I need to keep a daily incremental of 30 daily and 6 monthly backups.
   This part is ok and I have no problem setting up (Except getting the
   schedule right - I find this hard to do with backuppc)
   
   As an offsite backup I'd like my onsite backuppc unit to inform my
   offsite backuppc unit that the backup is complete and then the remote
   needs to pull only the most current backup from the onsite.
   So basically a 30 day/6 month onsite backup with the most current
   backup stored offsite
   Can this be done?
  
  I'd look into the archive functionality of backuppc and push the current
  backup as a tarball to the offsite host and not worry about running
  remote backuppc. As you say you are only looking to hold the current
  backup offsite you can simply transfer the current archive with
  scp/rsync/tar over ssh to the offsite host.
 
 Another approach would be to set up an archive host that's mounted on a
 network-mounted drive, or on a local directory that's then rsynced over the
 network.  See Archive functions in the BackupPC docs.  Then
 DumpPostUserCommand etc. might be used to notify the remote server to start
 its rsync job.

I use the pre/post-archive commands to log in/out of and mount/unmount an
iSCSI device when writing archives to it. These archives are scheduled
weekly by cron.
I also extended the script to remove all dirs/files older than 28 days from
the remote disk. But that feature isn't fully debugged and needs to prove
itself in around 1.5 days...
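
The prune-and-release part of such a post-archive script is essentially this
sketch (the mount point and iSCSI target name are assumptions):

===
#!/bin/bash
# remove archives older than 28 days, then release the iSCSI disk
find /mnt/archive -mindepth 1 -mtime +28 -delete
umount /mnt/archive
iscsiadm -m node -T iqn.2011-09.example:archive -u
===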

Have fun,

Arnold



Re: [BackupPC-users] empty files

2011-12-06 Thread Arnold Krille
On Tuesday 06 December 2011 22:41:32 Greer, Jacob - District Tech wrote:
 I greatly appreciate the information I looked and the data directory on the
 server has folders but no files best I can tell. I am new to Linux and
 BackupPC.  This is a OpenSuse11 install that I did not setup.  I am just
 trying to make it work.  I am thinking about re-loading the server and
 starting all over.

When the target directory has just empty folders, that's okay. It means
there wasn't any change since the last full backup.

Please read the documentation, especially the part about restore. And stop
looking at the BackupPC internals before you have done your reading. ;-)

Have fun,

Arnold



Re: [BackupPC-users] Are these folder/file names normal?

2011-12-02 Thread Arnold Krille
On Friday 02 December 2011 17:33:41 Igor Sverkos wrote:
 Hi,
 
 today I browsed through the backup data folder. Is it normal that
 folders look like
 
   /var/lib/BackupPC/pc/foo.example.org/252/f%2f/fetc
   ^
  This is the backed-up /etc folder from the foo.example.org (linux) host.
 
 Every folder/file is prefixed with a f char and I don't understand the
 folder name f%2f. Doesn't look right to me.
 
 Every backed up host shows that...

That's perfectly normal. You will notice that the file attributes look wrong
too. That is because the attributes are stored separately. The f prefix
marks a name as a BackupPC thing, and f%2f is BackupPC's own notation
for /.

Of course this looks strange directly on the filesystem. But you are not
supposed to use these files without the help of BackupPC anyway.

Have fun,

Arnold



Re: [BackupPC-users] Force full backup on weekend

2011-11-25 Thread Arnold Krille
On Friday 25 November 2011 09:57:23 Müfit Eribol wrote:
 Hi All,
 
 Is it possible to force BackupPC to perform the full backup on a certain
 day of week, say on Sunday. During the workdays, full backup of our
 document management system chokes up the resources.

That's what blackout periods are for. They let the backups run only outside
work hours, unless a backup becomes very important because the last one is
far too old.
But you can't distinguish between full and incremental backups with blackout
periods. The only solution there is to log in once on the weekend and
schedule a full backup by hand. With a full period of 7 days, the following
fulls will again fall on a weekend.
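
A blackout entry covering working hours might look like this (stock
config.pl format; hours are decimal and weekDays uses 0 for Sunday):

===
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7.0,
        hourEnd   => 19.5,
        weekDays  => [1, 2, 3, 4, 5],
    },
];
===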

Have fun,

Arnold



Re: [BackupPC-users] backuppc and excluding ip ranges?

2011-11-10 Thread Arnold Krille
On Thursday 10 November 2011 21:56:39 SSzretter wrote:
 I am wondering if it's possible already, or how difficult it might be to
 change the code for backuppc to allow it to skip a machine if it is not in
 a certain subnet mask.
 
 In my network, I have multiple sites.   Some of the sites have slow T1
 connections between them, so I have backuppc servers at them to back up
 directly on the LAN.
 
 This works great, but some of the machines (windows) are laptops, and
 people sometimes bring the laptop to another location.
 
 All these locations are tied together, so backuppc can still see (ping) the
 laptop even though it's in another building.
 
 The only real difference is that machine will pick up a dhcp address for
 that building.   The address will be in a different subnet, for example,
  192.168.2.x instead of 192.168.1.x.
 
 It would be great if a flag could be set to tell backuppc to only backup a
 machine if it is in a specific subnet range (192.168.2.x) and to skip it if
 not.
 
 Is this already possible, or where might a change be made in the code for
 something like that?

Three things I immediately think about:
 - Don't use hostnames, use IP addresses. If you still want the names in the
config (and/or don't want to move the existing directories), set the IP in
the ClientNameAlias.
 - If the ping to the remote sites takes longer than the local ping, reduce
the PingMaxMsec for these hosts.
 - Replace the call to ping for these hosts with a command involving the
correct IP instead of $host (see the sketch below).
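
A custom $Conf{PingCmd} script for that third option could look like this
sketch (the subnet and timeout are assumptions; configure it as something
like '/usr/local/bin/pinglocal $host'):

===
#!/bin/bash
# succeed only when the laptop currently resolves into the local subnet
ip=$(getent hosts "$1" | awk '{print $1}')
case "$ip" in
    192.168.2.*) exec ping -c 1 -w 3 "$ip" ;;
    *)           exit 1 ;;
esac
===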

Have fun,

Arnold



Re: [BackupPC-users] BackupPC to NFS then to tape via DPM

2011-10-12 Thread Arnold Krille
On Wednesday 12 October 2011 21:48:45 rbastedo wrote:
 I've recently been given the task of setting up BackupPC to back up some of
 our servers running RHEL and PgSQL. Management wants me to back up data to
 an NFS where it can then be saved to tape via DPM for offsite storage.

There are three possibilities here, I think:
 1. Use the NFS share as the topdir for BackupPC. This works, but saving
that to tape is a) impractical and b) will not help in any way for restore.
 2.1 Set BackupPC up with a local topdir and use a tape-archive host in
BackupPC to write out complete tars every night to the NFS share, before the
remote tape machine picks these up to write the tape. Easy to set up, good
for restore. And you only need the tapes in case BackupPC fails.
 2.2 Set up BackupPC on the remote machine with the tape, let it get the
data over the network, and make it write the (additional) tapes locally.
 3. If you don't really need the built-in backups of BackupPC and just want
to collect full and/or incremental data to save to tape, use amanda...

In our business we are currently switching our (and our clients') backups
from amanda to BackupPC with less-than-daily write-out to NFS/iSCSI shares
or tapes.

Have fun,

Arnold



Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-07 Thread Arnold Krille
On Friday 07 October 2011 01:41:45 Holger Parplies wrote:
 Hi,
 
 Les Mikesell wrote on 2011-10-06 18:17:06 -0500 [Re: [BackupPC-users] Bad 
md5sums due to zero size (uncompressed) cpool files - WEIRD BUG]:
  On Thu, Oct 6, 2011 at 5:21 PM, Arnold Krille arn...@arnoldarts.de 
wrote:
No, it makes perfect sense for backuppc where the point is to keep
as much history as possible online in a given space.
   
   No, the point of backup is to be able to *restore* as much historical
   data as possible.  Keeping the data is not the important part.
Restoring it is.  Anything that is between storing data and
   *restoring* that data is in the way of that job.
   
   Actually the point of a backup is to restore the most recent version of
   something from just before the trouble (whatever that might be).
  
  Yes, but throw in the fact that it may take some unpredictable amount
  of time after the 'trouble' (which could have been accidentally
  deleting a rarely used file) before anyone notices and you see why you
  need some history available to restore from the version just before
  the trouble.
 
 I think you've all got it wrong. The real *point* of a backup is ...
 whatever the person doing the backup wants it for. For some people that
 might just be being able to say, hey, we did all we could to preserve the
 data as long as legally required - too bad it didn't work out.

No, that case of archiving documents and communications to fulfill legal
requirements is called an _archive_! And while such a thing works well on
paper (and for paper documents, provided you are good friends with the
archive lady), try to access any old electronic mail in a company after they
switched from Lotus to SAP in between. Still, the data is archived according
to the law.

 Usually, it
 seems to be sufficient that the data is stored, but some of us really *do*
 want to be able to *restore* it, too, while others are doing backups mainly
 for watching the progress. Fine. That's what the flexibility of BackupPC is
 for, right?

No one wants a backup. Everyone only wants restore.

Good thing BackupPC can do backups that don't get in your way, restores that
work simply by clicking in a web interface, and additionally write complete
dumps to archive media...

Have fun,

Arnold



Re: [BackupPC-users] Dual-boot backuppc server

2011-10-07 Thread Arnold Krille
On Friday 07 October 2011 17:50:58 Ram Rao wrote:
 I have successfully been using backuppc on my Ubuntu 10.04 system for over
 a year to back up a couple of Linux clients.
 
 I  now desire to install Ubuntu 11.04 (on a different partition) on the
 system which serves as my backuppc server.  I would like to configure
 backuppc on 11.04 to use the backup information I have been using on
 10.04, and have the freedom to reboot between the two systems without
 impacting the backups of clients.  On my 10.04 system, I had created
 /var/lib/backuppc on a separate LVM volume.  Besides mounting this on my
 11.04 system, what else do I need to do enable preserving backuppc data in
 this dual-boot environment?

You also need the settings in /etc/backuppc. And you will definitely want to
back these up with a different method, in case the newer version breaks the
config files.
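
A one-liner that qualifies as "a different method" (the archive path is an
assumption):

===
tar -czf /root/backuppc-etc-$(date +%F).tar.gz /etc/backuppc
===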

Have a nice weekend,

Arnold



Re: [BackupPC-users] Bad md5sums due to zero size (uncompressed) cpool files - WEIRD BUG

2011-10-06 Thread Arnold Krille
On Thursday 06 October 2011 20:04:57 Timothy J Massey wrote:
 Les Mikesell lesmikes...@gmail.com wrote on 10/06/2011 01:21:29 PM:
  On Thu, Oct 6, 2011 at 11:56 AM, Timothy J Massey tmas...@obscorp.com
 
 wrote:
  Personally, I feel that compression has no place in backups.  Back
  when we were highly limited in capacity by terrible analog devices
  (i.e. tape!) I used it from necessity.  Now, I just throw bigger
  hard drives at it and am thankful.  :)
  
  No, it makes perfect sense for backuppc where the point is to keep
  as much history as possible online in a given space.
 
 No, the point of backup is to be able to *restore* as much historical data
 as possible.  Keeping the data is not the important part.  Restoring it
 is.  Anything that is between storing data and *restoring* that data is in
 the way of that job.

Actually the point of a backup is to restore the most recent version of
something from just before the trouble (whatever that might be).

Storing and retrieving historical data is called an archive. Interestingly,
most commercial archive solutions advertise their (certified) long-term
storage but never the ability to get that data back. Makes you wonder...

Have fun,

Arnold



Re: [BackupPC-users] System backup design advice (with amazon s3)

2011-10-03 Thread Arnold Krille
On Monday 03 October 2011 10:03:00 Daniele Menozzi wrote:
 So, this is the situation: backuppc makes daily incremental and weekly full
 backups of my server machine, and stores all data on the sever itself
 (localhost). Now, I also would like to send ONLY one weekly FULL backup to
 amazon s3.
 I know duplicity, and I already set it up to send data to amazon, and all
 works. But the question is: how to set up backuppc to let it prepare the
 data that duplicity will send to s3? I mean, actually backuppc manages full
 and incremental backups on localhost (inside
 /var/lib/backuppc/pc/localhost/ ), and keeps a lot of backups (every full
 bkp is ~10GB): I cannot send all this stuff every time on s3 (bandwidth
 problems, I have a 40KB upload rate...).
 So I need backuppc to create another dir with only one full backup inside
 at a time (but I also want to keep the full/incremental main cycle! ), and
 tell duplicity to take that dir and upload it on s3.

You could run a post-script on your localhost host to create a link
'current' that points to the latest backup, set the option to have filled
backups, and then sync /var/lib/backuppc/pc/localhost/current to S3.

Or use a dedicated partition, untar a weekly tar generated by BackupPC onto
it to get the current representation, and upload that.
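
A DumpPostUserCommand sketch for the first variant (paths assumed; filled
backups make 'current' complete on disk):

===
#!/bin/bash
# point 'current' at the newest backup of localhost
cd /var/lib/backuppc/pc/localhost || exit 1
ln -sfn "$(ls -d [0-9]* | sort -n | tail -n 1)" current
===

duplicity would then use .../localhost/current as its source directory.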

But actually, if I were you (and we are thinking about S3 too), I wouldn't
upload any unencrypted data to Amazon. Especially if it's information about
other people or information that is important.

Have fun,

Arnold



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-09-30 Thread Arnold Krille
On Friday 30 September 2011 06:20:42 Tim Connors wrote:
 Worst case, if you lose one disk, then rebuild, and during rebuild,
 suffer the likely consequence of losing another disk when rebuilding
 raid6, you still have a valid array.
 Worse case, a fairly likely occurrence with raid10: lose that second disk
 and lose all your data.
 Care for your data == don't use raid10.

That's stupid.

Raid5: lose one disk during recovery - you are screwed.
Raid6: lose two disks during recovery - you are screwed.

Given the fact that most people use the same disks from the same vendor and
the same production line (with consecutive serial numbers), the chance of
having several disks fail at the same time is very high.

For Raid1 and Raid10: use two lots of disks from different vendors, or at
least buy them half a year apart. And pair them so that two different kinds
are always together. Then one batch can collectively jump off the cliff
while all your data is still intact.

And don't argue that disks with consecutive serial numbers won't break
together: of the three disk failures I encountered where I had a second
disk of the same type, that second disk broke shortly after.

Have fun,

Arnold



Re: [BackupPC-users] Fairly large backuppc pool (4TB) moved with backuppc_tarpccopy

2011-09-30 Thread Arnold Krille
On Friday 30 September 2011 14:37:20 Les Mikesell wrote:
 On Fri, Sep 30, 2011 at 5:51 AM, Arnold Krille arn...@arnoldarts.de wrote:
  And don't argue that disks with consecutive serial numbers won't break
  together: From the three disk failures I encountered where I had a second
  of the same type, that second broke shortly after.
 
 I'd argue that it is not likely that a working disk is going to fail
 in that one pass that it takes to rebuild a raid/mirror,

That is exactly my point: with one of our clients the second disk broke
(luckily!) shortly after the mirroring to the new second disk had finished.
Had it broken half an hour earlier, the data would have been toast.

Two disks manufactured by the same people and machines at the same time,
used at the same place with the same usage pattern, _will_ fail at the same
time. The common argument is that two disks in a raid1 are statistically
independent. But the problem is that when the disks are the same, they
aren't statistically independent...

Have fun,

Arnold



Re: [BackupPC-users] Search for File

2011-09-28 Thread Arnold Krille
On Wednesday 28 September 2011 16:30:18 Timothy J Massey wrote:
 Gerald Brandt g...@majentis.com wrote on 09/28/2011 10:15:12 AM:
  I need to search for a specific file on a host, via backuppc.  Is
  there a way to search a host backup, so I don't have to manually go
  through all directories via the web interface?
 
 The easiest, most direct way of doing that would be:
 
 cd /path/to/host/pc/directory
 find . | grep ffilename
 
 I'm sure someone with more shell-fu will give you a much better command
 line (and I look forward to learning something!).

Here you are:

find path_where_to_start -iname string_to_search

-iname means case-insensitive matching, so you don't have to care about
case. If you want to search for a combination of directory and filename, you
have to account for the 'f' that BackupPC puts in front of each name (see
the example below).
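
For instance, /etc/passwd in a backup of the share '/' on host 'myhost'
(host name assumed) would be found with:

===
find /var/lib/backuppc/pc/myhost -path '*/f%2f/fetc/fpasswd'
===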

Using find you will realize that it's rather slow and has your disk rattling
away. Better to use an indexing service, for example locate:

locate string_to_search

gives a list of hits, but only from the state when locate last rebuilt its
index (which should happen daily/nightly). That is good enough to find files
last seen two weeks ago, but it doesn't find the file you just downloaded
and can't remember where you saved it.

There are also disk-indexing services with web frontends; htdig comes to
mind. That one even finds stuff inside the files.

Have fun,

Arnold



Re: [BackupPC-users] Search for File

2011-09-28 Thread Arnold Krille
On Wednesday 28 September 2011 17:23:17 Timothy J Massey wrote:
 Arnold Krille arn...@arnoldarts.de wrote on 09/28/2011 11:20:57 AM:
  Using find you will realize that its rather slow and has your disk
 rattling
  away. Better to use the indexing services, for example locate:
  
  locate string_to_search
 
 Yeah, that's great if you update the locate database (as you mention).  On
 a backup server, with millions of files and lots of work to do pretty much
 around the clock?  That's one of the first things I disable!  So no
 locate.

You could limit locate to the paths you want indexed. Or you could exclude
the (c)pool of BackupPC and still get the information.
And adding some minutes of updatedb indexing the filesystem tree (it's not
even indexing the contents) to BackupPC_nightly shouldn't hurt that much.
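
An updatedb.conf sketch for that exclusion (mlocate-style config assumed):

===
PRUNEPATHS="/tmp /var/spool /var/lib/backuppc/pool /var/lib/backuppc/cpool"
===

The pc/ tree stays indexed, so locate still finds the per-host file names.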

Have fun,

Arnold



Re: [BackupPC-users] Backup of dual-boot laptop

2011-09-28 Thread Arnold Krille
On Wednesday 28 September 2011 18:59:38 Tim Fletcher wrote:
 On Wed, 2011-09-28 at 17:30 +0200, Dan Johansson wrote:
  I have a laptop that is dual-boot (Linux and WinXP) and gets the same IP
  from DHCP in both OS's. Today I have two entries in BackupPC for this
  laptop (hostname_lnx and hostname_win) with different backup methods for
  each (rsync over ssh for Linux and SMB for WinXP). This works good for
  me with one small exception - I always gets a Backup-Failed message
  for one of them each night.
  Does someone have a suggestion on how to solve this in a more beautiful
  way?
 Write a ping script that finds out is the laptop is in Windows or Linux
 so one of the other of the backup hosts won't ping.

Yep, detecting the OS with nmap should work. Or, if you are not using DHCP
or are using it for only one of them, you could distinguish by IP address.
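
A sketch of such a 'ping' script for the Linux host entry (the grep pattern
is a heuristic, and nmap -O needs root, hence the sudo):

===
#!/bin/bash
# succeed only when nmap guesses that the target currently runs Linux
sudo nmap -O --osscan-guess "$1" 2>/dev/null | grep -qi linux
===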

 You can also make use of the fact that most desktop distros have avahi
 installed and use short hostname.local as a target host name.

That will work until you install Bonjour for Windows (which is very nice in
networks relying on zeroconf).

Using the same transfer method for both would involve either cygwin/mingw
tooling on the Windows machine or exporting / the way C$ is exported in
Samba. But you will still have different paths inside these shares, which
results in files and paths that are only present every other day.
Better to use two different backup hosts for the two OSes.

Have fun,

Arnold



[BackupPC-users] Request for comments/ideas

2011-09-09 Thread Arnold Krille
Hi all,

we are generally content with the possibilities and features of BackupPC.
Still, here are some ideas we have, which might either be of interest to
some of you too or already be solved somewhere. In either case I would love
to hear your comments:

 - Encrypted tapes: writing weekly full dumps to tape or disk is nice, but
we would love to feed these through gpg at the time of writing. Has anybody
done this? And is this better done by writing the tars to a temporary
(encrypted) partition and then manually feeding them through gpg to tape, or
is there an easy way to extend BackupPC_tarCreate to write gpg-encrypted
tars?
 - Writing to Amazon S3 and other remote storage. Using a tape is nice, but
when the tapes are stored near the drive and therefore near the server
itself, there is no 'offsite' involved. And as a service to our customers we
would love to sell them not real tapes but virtual tape space at Amazon or
any other remote storage (which might be our own machines) where the weekly
tapes get pushed. Of course together with the encryption solved.

Are these ideas way off track? Do they fit into the perspective of BackupPC?
Has anyone already done one of those successfully and would share the
solution? Any hints from the devs on whether these ideas are worth putting
on the feature-request list?

Thanks for your comments,

Arnold



Re: [BackupPC-users] xxShareName = /cygwin ??

2011-09-08 Thread Arnold Krille
On Thursday 08 September 2011 20:54:57 hans...@gmail.com wrote:
 Our users have a variety of storage media from ordinary flash drives
 and SD cards to eSata or firewire HDDs, and even some swappable
 internal HD's. Much of these data is as or sometimes even more
 important than those on the fixed drives.
 Just as the notebook users are only intermittently attached to the
 LAN, these various drives are only occasionally attached to the
 notebooks.
snip
 Or is this a truly idiotic idea that should indeed be prevented by design?

This is a nightmare for backup. And it's even more of a nightmare for
restore, because when you lose a disk (btw: did you ever think about the
impact of a disk lost at the airport or the pub?) and want to restore the
data that was on there, you will find yourself going through all the hosts
to see which one had that disk with those files on its last backup. Only to
find that the last backup is two months old...

If your business really needs this variable working style, I think the disk
usage and therefore the problematic backup situation is a social problem
that is not solvable by technical means.
I propose a social solution:
 - Assign each person some disks he is responsible for.
 - Create one backup-host for each laptop and one for each movable disk on the 
assigned laptop. And make the one responsible the connected user in backuppc.
 - Set the warning intervals and timeouts and let the email function do the 
nagging.
 - Get the responsible managers to stand behind the idea of regular backups,
so they too bug their people when their entries have been red for too long.

Have fun,

Arnold



Re: [BackupPC-users] Would a noob-oriented HowTo be useful on the wiki?

2011-09-05 Thread Arnold Krille
On Monday 05 September 2011 04:49:35 hans...@gmail.com wrote:
 For example - I have extremely detailed notes on my most recent
 step-by-step process - which I'm happy to say is proceeding
 successfully with a matched set of Ubuntu's Lucid 10.04 (current
 LTS) server and its official BackupPC package, rather than mixing the
 latest Natty with the older package.

Here is what I had to do to get BackupPC running on Ubuntu 10.04:

sudo aptitude install backuppc

What more is there in your documentation that is really needed in addition to 
the above and the docs for backuppc?

Have fun,

Arnold



Re: [BackupPC-users] Upgrade BackupPC 2.1.2 to 3.2.1

2011-09-04 Thread Arnold Krille
On Sunday 04 September 2011 16:04:16 Dan Johansson wrote:
 I have now updated to 3.2.1 and have (at least) one issue.
 On the Status-Page in the GUI I see the following:
 # Other info:
...
 * Pool is 0.00GB comprising files and directories (as of 2011-09-04
 15:47), * Pool hashing gives repeated files with longest chain ,
 * Nightly cleanup removed 0 files of size 0.00GB (around 2011-09-04
 15:47), * Pool file system was recently at 38% (2011-09-04 15:39), today's
 max is 38% (2011-09-04 15:06) and yesterday's max was %.
 
 As you can see it says that the Pool is 0.00GB. This can not be correct as
 there are data in the pool and I can do a restore. Even after a backup does
 it say 0.00GB.
 
 Any suggestions on what could be wrong?

The statistics you see come from the BackupPC_nightly run. It could be that
some internal format has changed there. If you updated just this morning and
then checked backup/restore, the nightly statistics run hadn't happened yet.
Wait a day and these should be okay too.

Or you are mixing compressed and uncompressed pool statistics.

Have fun,

Arnold




Re: [BackupPC-users] Backing up slash partition

2011-09-03 Thread Arnold Krille
Hi,

On Sunday 04 September 2011 00:08:38 Holger Parplies wrote:
 Arnold Krille wrote on 2011-09-03 01:32:15 +0200 [Re: [BackupPC-users] 
Backing up slash partition]:
  On Saturday 03 September 2011 00:57:48 Timothy Murphy wrote:
  You should make sure you don't cross mount-points though. Otherwise the
  backup of localhosts '/' will recurse into itself.
 
 How that?

Backing up / (including /var/lib/backuppc/pc) means backing it up into
/var/lib/backuppc/pc/localhost. Unless you exclude that directory via the
excludes or --one-file-system (given that the directory is on another
partition), this results in a recursion.
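
A per-host exclude sketch (stock config syntax; the share here is assumed to
be '/'):

===
$Conf{BackupFilesExclude} = { '/' => ['/var/lib/backuppc'] };
===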

Have fun,

Arnold




Re: [BackupPC-users] Backing up slash partition

2011-09-02 Thread Arnold Krille
On Saturday 03 September 2011 00:57:48 Timothy Murphy wrote:
 Can one sensibly back up / with BackupPC?

As Linux allows reading of locked files (it has no concept of mandatory
exclusive locks), the answer is yes. The result you get is the same as if
you hard power-off your machine and restart it, or pull one drive out of
your raid1 while it is running.

You should make sure you don't cross mount points, though. Otherwise the
backup of localhost's '/' will recurse into itself (see the sketch below).
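
One way to enforce that with the rsync method (a sketch: roughly the stock
3.x argument list from config.pl plus the extra flag, so treat the exact
list as an assumption):

===
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive', '--one-file-system',
];
===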

Have fun,

Arnold

