Re: [BackupPC-users] large archives and scalability issues

2008-02-26 Thread Paul Archer
11:17am, Tomasz Chmielewski wrote:

 Why should any filesystem perform seeks better (when writing) than any
 other filesystem?

 I imagine it could be true only if:

 - kernel would cache a large amount of writes
 - kernel would commit these writes not in a FIFO manner, but whenever it
 sees that the blocks on the underlying device are close to each other

 Can ZFS do it?

zfs should be faster on writes because it never overwrites any data in 
place; instead, it finds an available bit of disk, writes its data, then 
updates the metadata blocks. Still, unless a filesystem is very poorly written, you 
shouldn't see more than a few percent difference between filesystems. The 
physical disk will almost always be your bottleneck.

Paul



Re: [BackupPC-users] Ressources Server (BackupPC + Nagios + Cacti)

2008-01-28 Thread Paul Archer

In that case:
1) find a job where your boss has a clue
and/or
2) ask your boss what happens when Nagios goes down. Does he expect it to 
notify you that it's down?




Paul


9:31am, [EMAIL PROTECTED] wrote:


Hello,

Thanks a lot for your answer. It's interesting.
I'm sorry, but my IT manager doesn't want to have several servers for Linux
software.
I have to configure just one server with Nagios, Cacti and BackupPC.
So I have to propose a server configuration to support it.

Thanks a lot.
Regards,

Romain




Paul Archer [EMAIL PROTECTED]
23/01/2008 18:37

To: Romain PICHARD/Mondeville/VIC/[EMAIL PROTECTED]
Cc: BackupPC users' mailing list backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Ressources Server (BackupPC + Nagios + Cacti)






[original message lost, sorry]

It doesn't really matter what the specs of your box are, putting a
monitoring solution on the same box as a production server is a bad idea.
What happens when that box fails? What's going to notify you? Nagios won't,
since it'll be down, too.
I'm not sure about Cacti, but I can tell you that Nagios can run pretty
comfortably on a fairly low-end machine. We monitor about 100 machines, and
about 350 services with Nagios, on a VM.
Use the big box for BackupPC, and use some hardware that you retired because
it got too slow for Nagios. While you're at it, set up at least two Nagios
servers so you have redundancy. Otherwise you're in the same situation that
I mentioned before: the Nagios box goes down and there's nothing to notify
you about it.

Paul





 Never ascribe to malice what can perfectly
 well be explained by stupidity. -Anonymous


Re: [BackupPC-users] Ressources Server (BackupPC + Nagios + Cacti)

2008-01-23 Thread Paul Archer
[original message lost, sorry]

It doesn't really matter what the specs of your box are, putting a 
monitoring solution on the same box as a production server is a bad idea.
What happens when that box fails? What's going to notify you? Nagios won't, 
since it'll be down, too.
I'm not sure about Cacti, but I can tell you that Nagios can run pretty 
comfortably on a fairly low-end machine. We monitor about 100 machines, and 
about 350 services with Nagios, on a VM.
Use the big box for BackupPC, and use some hardware that you retired because 
it got too slow for Nagios. While you're at it, set up at least two Nagios 
servers so you have redundancy. Otherwise you're in the same situation that 
I mentioned before: the Nagios box goes down and there's nothing to notify 
you about it.

Paul



Re: [BackupPC-users] Ressources Server (BackupPC + Nagios + Cacti)

2008-01-23 Thread Paul Archer
6:20pm, Carl Wilhelm Soderstrom wrote:

 On 01/23 11:37 , Paul Archer wrote:
 Use the big box for BackupPC, and use some hardware that you retired because
 it got too slow for Nagios.

 I see what you're saying. Put big disks in the older hardware.

Actually, I meant what you said below: use the big iron for your backups, and 
the older machine for Nagios.

Paul


 I would actually say that you're better off using the more powerful hardware
 for backuppc; backuppc will use all the CPU and RAM you can get. The more
 hardware you have, the faster backups will go, and the more you can do in
 parallel (tho I've found that even on really fast hardware, more than 2
 backups in parallel causes too much disk contention).

 --



Re: [BackupPC-users] BackupPC on file systems that don't support hard links?

2008-01-16 Thread Paul Archer
8:32am, Greg Smith wrote:

 Thanks to all for the replies.  I've read all the backuppc docs and 
 realized that hard links were an integral part of the design, so I thought 
 the answer to my question would be no, but thought I'd ask.  Since the 
 principal reason for using hardlinks in backuppc is to use space more 
 efficiently, I was hoping that there was some hack (e.g., setting 
 HardLinkMax to 1) that would work but use more disk space.

 There is no h/w interface into the NAS other than two 1GB network links 
 (no iSCSI) and I don't think (I've got a call in to tech support to 
 confirm) that it supports NFS.  The only way that I know of mounting the 
 device is via samba/cifs.  Since both samba and cifs support hard links, it 
 might be that the device does too with some undocumented setting; we'll see 
 what tech support has to say.

iSCSI is software, not hardware (internet SCSI)--so an ethernet port is 
all you need (from a hardware standpoint).
If you can't get hard links over samba/cifs, I doubt you'll be able to have 
them through NFS. It sounds like the backing filesystem doesn't support 
them.
Just what make/model is this creature, anyway?


 In the meantime, I'm taking Dan's suggestion of a loopback fs.

loopback is what my money's on...

Paul



Re: [BackupPC-users] BackupPC on file systems that don't support hard links? - More info

2008-01-15 Thread Paul Archer
If you don't have any choice in the NAS (i.e. you can't reformat it, take it 
apart, etc), then you could create one huge file on it, create an ext3 (or 
reiserfs, or xfs, or zfs) filesystem on it, and loopback mount it.
Or maybe the NAS has the capability to share out space via iSCSI?
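For the loopback route, something along these lines should work (a rough
sketch only; the /mnt/nas mount point and the 500GB size are just
placeholders, and writing the image file over CIFS will take a while):

  dd if=/dev/zero of=/mnt/nas/backuppc.img bs=1M count=500000   # create the image file
  mkfs.ext3 -F /mnt/nas/backuppc.img                            # hard-link-capable fs inside it
  mount -o loop /mnt/nas/backuppc.img /var/lib/backuppc         # mount it where backuppc keeps its pool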

Paul



7:53pm, Greg Smith wrote:

 In reading my previous message (below) I realize I didn't provide much 
 information about what I'm trying to do.

 I have an NAS (that doesn't support hard links) mounted on /var/lib/backuppc. 
  Here's the /etc/fstab entry:

 //nas/backup   /var/lib/backuppc   cifs \ 
 credentials=/var/lib/backuppc/.smbpasswd,uid=backuppc,gid=backups 0 0

 since backuppc uses hard links in pool, cpool, etc., this gives thousands of 
 MakeFileLink errors.  I'm using tar as the transport mechanism.

 I'm not able to think of a way of using this device (which has lots of space) 
 with backuppc, and would appreciate any suggestions.

 Thanks,

 Greg


 Greg Smith [EMAIL PROTECTED] wrote: Date: Tue, 15 Jan 2008 19:04:57 -0800 
 (PST)
 From: Greg Smith [EMAIL PROTECTED]
 To: backuppc-users@lists.sourceforge.net
 Subject: [BackupPC-users] BackupPC on file systems that don't support hard
 links?

 I've been reading the archives and searching the web all day and haven't been 
 able to find an answer to this so I apologize in advance if it's something I 
 should have been able to figure out.

 Is there a way to configure or use backuppc with a file system that DOES NOT 
 support hard links?  The file system in question (a NAS) has gobs of space 
 (3TB) so space is not an issue but I've confirmed multiple times that it 
 doesn't support hard links via either cifs or samba.  I've also verified that 
 the OS I'm running (linux/ubuntu) does correctly create hard links on a 
 windows XP file system via samba.

 Thanks for any help or suggestions.

 Greg








---
--In 1555, Nostradamus wrote:--
-- Come  the  millennium,  month 12, --
-- In  the  home  of greatest power, --
-- The village idiot will come forth --
-- To  be  acclaimed   the   leader. --
---



Re: [BackupPC-users] Newbie: excessive backup size for fist full backup (12GB data becomes a 130GB backup?)

2007-12-24 Thread Paul Archer
11:20am, Les Mikesell wrote:

 Mike Mrozek wrote:
 Greeting BackupPC users:

 I am new to linux (using Ubuntu v7.10) and to using BackupPC (v3.1.0).  I do 
 not understand the BackupPC behavior I observe during a full backup of my 
 client.  This is the first attempt; no other backup data files exist on the 
 BackupPC server.  My client has 12GB of data, and I would expect 12GB of data 
 to be copied and saved onto the BackupPC server.  The backup job, which I 
 started manually rather than scheduled, begins copying files and consumes 130+GB of 
 free space on my BackupPC server before completing.  I needed to manually 
 stop the backup job before running out of disk space on my BackupPC server.  
 I can't imagine that this is normal operation.  I've tried searching for 
 similar problems in this mailing list archive and by googling, 
 without any luck.


 You probably have a large sparse file on the client that isn't handled
 very well by the copy mechanism.  On unix filesystems you can seek past
 the end of a file and write, creating a file that appears to be very
 long but without using the intermediate space where nothing has been
 written.  In particular, many 64-bit linux versions have a
 /var/log/lastlog file that appears to be 1.2 terabytes in size as an
 artifact of indexing it by uid numbers and using -1 for the nfsnobody
 id.  It's generally not important to back this file up, so if that is
 the problem you can just exclude it.


Another possibility is that you have remote mounts that are being backed up 
(--one-file-system for rsync is your friend), or some misconfiguration in 
your config file.
If you don't locate any large sparse files, and you don't have any remote 
mounts, send us the *relevant* parts of your config file(s), and we'll see 
if there's anything messed up there.
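If it does turn out to be the sparse /var/log/lastlog file that Les mentioned,
a per-host exclude is enough. A sketch (assuming an rsync transfer with '/' as
the share name; adjust to your own share names and exclude list):

$Conf{BackupFilesExclude} = {
    '/' => ['/proc', '/sys', '/var/log/lastlog'],
};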

Paul



Re: [BackupPC-users] Idea For BackupPC Improvement

2007-12-20 Thread Paul Archer
12:56pm, Les Mikesell wrote:

 Jon Forrest wrote:

 One reason why I think my idea has promise is because this is how
 all the commercial backup products I've ever used work. Adding
 this feature to BackupPC would just bring it closer to
 the commercial backup products.

 Don't those products all require a client side agent, at least for
 cross-platform operation?  I think it would theoretically be possible to
 get a target directory listing with the transfer methods that backuppc
 supports but it wouldn't be trivial.

 --

As long as you are using ssh for access to the client, you don't need a 
client-side agent.
Personally, I don't know if I would use it myself, but it's a good idea.

Paul



Re: [BackupPC-users] remote replication with unison instead of rsync?

2007-12-07 Thread Paul Archer
5:45pm, Les Mikesell wrote:

 dan wrote:
 i have been experimenting with this and think i have a workable
 solution.  only problem is that nexenta is a bit of a pain to get
 backuppc running on.  just seems to have a lot of little issues with
 backuppc 3.1 (running nexenta a7)

 i just downloaded solaris express to see if i have better luck with it
 and backuppc.

 The problem I have with nexenta is that it doesn't see any of the SATA
 controllers I have.  Is there something else with zfs and better driver
 support?

 -- 
I don't know if it has gotten far enough to be really usable, but the 
zfs-fuse project is out there: http://www.wizy.org/wiki/ZFS_on_FUSE

It's relatively slow, and not all the features are implemented, but it 
should do the job as far as keeping your data safe.



Re: [BackupPC-users] Direct restore / Multiple hosts

2007-12-03 Thread Paul Archer
What version are you running? Ubuntu 7.10 comes with 3.0.0, pretty much 
everything else comes with 2.x.

Paul

5:53pm, Trygve Vea wrote:

 Hi!

 I've just set up backuppc for the first time, and I'm experiencing
 something I consider unexpected behaviour.

 I just want to mention that this is the Debian-packaged version of backuppc.

 If I in /etc/backuppc/hosts have more than one host specified, I will
 get the following error message when I want to execute a direct
 restore (through the web interface):

 Direct restore has been disabled for host hostname. Please select
 one of the other restore options.

 ... this message will always specify the same hostname even though I
 want to restore a different host.

 If I change /etc/backuppc/hosts to contain only one host ---
 regardless which host it contains --- restore works.


 Is this a known bug? Configuration error maybe? Has anyone else
 experienced this?

 -- 
 Trygve Vea






---Brady's First Law of Problem Solving:
When confronted by a difficult problem, you can
solve it more easily by reducing it to the question,
"How would the Lone Ranger have handled this?"




Re: [BackupPC-users] Full backup bandwidth reduction

2007-11-27 Thread Paul Archer
What kind of specs does your server have (besides running ZFS)? That is, 
processor, memory, etc.

I've got a P-III 500MHz with 512MB RAM as my backup server. It is also my 
file server (I want to split those into separate machines, but I can't right 
now), with about 250GB of data. (Most of that is images/videos/mp3s, so I 
leave compression off.) It takes 30 hours to do a full backup doing an rsync to 
itself, and incrementals take about 3 hours.
That's a fair bit of data for a slow machine, so I'm trying to get an idea 
of what I can do to speed things up.
And FWIW, I am a fan of ZFS, but until I get another box, I can't really 
switch to it.

Paul


7:53am, dan wrote:

 I backup about 6-7Gb during a full backup of one of my sco unix servers
 using rsync over ssh and it takes under an hour.

 4-5Gb on an very old unix machine using rsync on an nfs mount takes just
 over an hour.

 a full backup of my laptop is about 8GB and takes about 15 minutes, though it
 is on gigabit and so is the backuppc server, BUT the unix servers are not on
 gigabit, just 100Mb/s ethernet.

 On Nov 27, 2007 12:52 AM, Nils Breunese (Lemonbit) [EMAIL PROTECTED] wrote:

 Toni Van Remortel wrote:

 And I have set up BackupPC here 'as-is' in the first place, but we saw
 that the full backups, which ran every 7 days, took about 3 to 4 days to
 complete, while for the same hosts the incrementals finished in 1 hour.
 That's why I got digging into the principles of BackupPC, as I wanted to
 know why the full backups don't work 'as expected'.

 Well, I can tell you BackupPC using rsync as the Xfermethod is working
 just fine for us. The incrementals don't take days, all seems normal.
 I hope you'll be able to find the problem in your setup.

 Nils Breunese.







---
Perl elegant? Perl is like your grandfather's garage. Sure, he kept
most of it tidy to please your grandmother but there was always one
corner where you could find the most amazing junk. And some days,
when you were particularly lucky, he'd show you how it worked.
--Shawn Corey shawn.corey [at] sympatico.ca--

-10921 days until retirement!-



Re: [BackupPC-users] define destination dir

2007-11-26 Thread Paul Archer
Look in the config.pl file (if Debian, it's probably 
/etc/backuppc/config.pl).

If you have four 160GB drives, I would suggest using MD/LVM to create one 
large logical volume. The best arrangement would probably be something 
like a RAID 5 with all four drives, and maybe an LVM volume on top of that.
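A rough sketch of that layout (the device names, volume names and mount point
are assumptions; adjust for your own drives):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1
  pvcreate /dev/md0                          # make the array an LVM physical volume
  vgcreate backupvg /dev/md0
  lvcreate -l 100%FREE -n backuplv backupvg
  mkfs.ext3 /dev/backupvg/backuplv
  mount /dev/backupvg/backuplv /var/lib/backuppc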

Paul

Tomorrow, Holm Kapschitzki wrote:

 Hello,

 I have 4 older IDE drives of 160 GB each, and I want to back up a few client
 hosts, so I cannot use one single device for backuppc. I read the docs
 and saw something about configuring topdir to set the path where the
 data is backed up. On the other hand, I read that in the Debian package topdir is
 hardcoded.

 So my question is: how do I define a different directory (in the different config
 files) for each host, where I can back up the data?

 greets holm







___
Can't you recognize bullshit? Don't you think it would be a
useful item to add to your intellectual toolkits to be capable
of saying, when a ton of wet steaming bullshit lands on your
head, 'My goodness, this appears to be bullshit'?
_Neal Stephenson, Cryptonomicon__

-10921 days until retirement!-



Re: [BackupPC-users] Hierarchy/Groups

2007-11-15 Thread Paul Archer
12:08pm, Renke Brausse wrote:

 Hello,
 Is it possible to setup a sort of groupings for servers such that there
 is only one set of config per group?
 not directly in one set (as far as I know...) but if you use backuppc 3+
 you can use the copy-host functionality of the web GUI - so you can use
 one model host for each of your defined groups and just copy the
 configuration.

 Renke

Or you could simply symlink the individual machines' config.pl files 
together. Then changing one changes all of them--which would be an issue if 
you're simply copying them with the GUI.
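For example (a sketch, assuming the Debian layout where per-host configs live
in /etc/backuppc as <host>.pl; webgroup.pl is a hypothetical shared file):

  cd /etc/backuppc
  ln -s webgroup.pl host1.pl     # host1 and host2 now share one config
  ln -s webgroup.pl host2.pl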

Paul



Re: [BackupPC-users] No ping possible, what to do?

2007-11-12 Thread Paul Archer
8:59am, Toni Van Remortel wrote:

 Hi all,

 As most of my systems in the backup are not responding to 'ping', I've
 set the $PingCmd to /bin/true (as documented).
 The problem now is that when a system is down, BackupPC still gets
 successful pings and tries to back up the system. After a while, the
 blackout period is reached, and backup attempts stop (because of the
 successful pings). But I want the system to be backed up when it
 is up again (mostly I fix the issue in the morning, so my current
 solution is to start a manual backup).

 Is there another option for the $PingCmd to use? An SSH check from the
 Nagios project? An SMB check?

 -- 
I think you've got a good idea with Nagios. Go to the downloads section of 
www.nagios.org, and get the plugins. You can use check_tcp, which will give 
you an exit status of zero if it connects, and 1 or 2 (configurable) if it 
doesn't. It's also very easy to set the timeout values you want to use.
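Untested, but in config.pl that would look something like this (the plugin
path varies by distro, and port 22 assumes you want an SSH check; as with
/bin/true, BackupPC should only care about the exit status here):

$Conf{PingCmd} = '/usr/lib/nagios/plugins/check_tcp -H $host -p 22 -t 5';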

Paul



Re: [BackupPC-users] backuppc and shh help

2007-11-10 Thread Paul Archer
1:49am, tts wrote:

 Guys, I've really been struggling with ssh and getting backuppc to log in to the
 client with no password. Done it before but just can't remember how (did it
 by luck). I just can't get my head around how the key stuff works and where
 you are supposed to generate the key: server side, where backuppc runs, both
 machines? Place the key where? I found some guides at google but it's hard to
 follow when you don't know how it's supposed to work. I just can't get my head
 around it. If someone could make an animated video of exactly where the keys are
 exchanged, I will donate a reasonable amount to you or whatever organisation
 you like. Please get back to me even if the answer is no; a yes would be
 great ;) I'm very grateful to the backuppc creators. Backuppc is simply the
 best *hands on heart*.

First, before we tackle your question:

Trim your posts. You replied to a digest, and the entire digest (which was 
not relevant to your question at all) was part of your post. That just 
wastes bandwidth, and clutters things up.

Be careful how you send your messages. Somehow you managed things so I got a 
solid four copies of your email. One will suffice.


Now, on to your question:

The 'backuppc' user on your server (S) needs to connect to the root 
account on your client (C).

So su to [EMAIL PROTECTED], and run ssh-keygen -t dsa
When it asks you for a passphrase, just hit enter.
You'll end up with a ~backuppc/.ssh directory with (at least) two files:
id_dsa (your private key)
id_dsa.pub (your public key)

Copy the public key to the .ssh directory of [EMAIL PROTECTED]. Make sure you 
rename it first! Otherwise, you're likely to overwrite root's own public key, 
which would be BAD. I recommend a name like id_dsa.pub-backuppc
Then add this key to the authorized_keys file. Safest way is:
cd ~root/.ssh  #(this is on the client, remember)
cat id_dsa.pub-backuppc >> authorized_keys
(Make sure that's TWO 'greater-than's!)

Now go back to the [EMAIL PROTECTED] account, and run:
ssh -l root client    (where 'client' is your client machine, of course).
When it asks you to accept the client's key, type 'yes'.
You should be logged into the client as root.
If that doesn't work, make sure root logins are not disallowed on the 
client (usually in /etc/ssh/sshd_config). Also, check permissions on root's 
.ssh directory and the authorized_keys file. ssh is picky about perms.


Repeat the copy and connect part for each client you have. Don't regenerate 
your keys!


Now, there's one thing I've kind of glossed over. Doing this means that 
anyone who has or can gain access to the backuppc account on your server 
owns every client you have, since that account has root access to all those 
machines.
You can mitigate this somewhat by using rsyncd on the client, and most 
importantly, by setting up forced commands in your clients' 
authorized_keys file. There has been some discussion on this mailing list 
about that, and you can Google for the relevant terms and find plenty of 
info on the subject.
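For reference, a restricted entry in the client's ~root/.ssh/authorized_keys
looks roughly like this, all on one line (backuppc-server.example.com and the
wrapper script are hypothetical names; the wrapper would check that
$SSH_ORIGINAL_COMMAND is the rsync invocation you expect before running it):

from="backuppc-server.example.com",command="/root/backuppc-rsync-wrapper",no-pty,no-port-forwarding,no-X11-forwarding ssh-dss AAAA...rest-of-backuppc-public-key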


Paul



Re: [BackupPC-users] Question about LVM restore

2007-11-08 Thread Paul Archer
If I understand you correctly (you have LVM volumes on disks other than the 
failed one), then during a reinstall (or even just booting off a live CD) 
those volumes will be recognized. If you provide mount points and don't 
reformat them, the data should be available without a problem.
Try booting off a live CD and you'll see the volumes get scanned and 
recognized.
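If the volumes don't come up automatically after the reinstall, the usual
sequence is (a sketch; 'backupvg' and 'backuplv' stand in for whatever your
volume group and logical volume are actually called):

  pvscan                                   # find the physical volumes on the slave disks
  vgscan                                   # find the volume group(s)
  vgchange -ay backupvg                    # activate its logical volumes
  mount /dev/backupvg/backuplv /var/lib/backuppc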


Paul


11:10am, Gene Horodecki wrote:


Can anyone comment regarding how easy it is to build an Ubuntu volume group
on a new Ubuntu install? Say we have the following scenario:

- Ubuntu native disk
- Slave disk 1, part of a volume group with single LV used for backup
- Slave disk 2, part of a volume group with single LV used for backup

Say your Ubuntu native disk goes and Ubuntu needs to be rebuilt. Is it
possible to import both slave disks back into a volume group and retrieve
the data that was on them before? Are there any precautions prior to the
problem which must be taken in order to allow a restore down the road?
Thanks!






-
witzelsucht (vit'sel-zoocht) [Ger.]
  A mental condition characteristic of frontal lobe lesions
  and marked by the making of poor jokes and puns and
  the telling of pointless stories, at which
  the patient himself is intensely amused.

From Dorland's Illustrated Medical Dictionary, 26th edition.

-

-10940 days until retirement!-


Re: [BackupPC-users] Designing a BackupPC system to include off-sitebackups

2007-11-07 Thread Paul Archer
You should look into partimage (www.partimage.org). It's going to be faster 
than dd for any disk that isn't 100% full.

Another option would be to rsync your backuppc directory somewhere else. 
That has the advantage of lower bandwith (since the OP wanted off-site 
replication). Just don't forget to use the --hard-links option to preserve 
all those hard links.
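Roughly like this (untested; offsite-host is a placeholder, and note that with
millions of hard links rsync can need a lot of memory and time for the file list):

  rsync -aH --delete --numeric-ids /var/lib/backuppc/ offsite-host:/var/lib/backuppc/

-H is the short form of --hard-links; --numeric-ids keeps ownership consistent
between the two boxes.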

pma



10:46am, Christian Lahti wrote:

 I basically have the following, works for me :)  dd_rescue is a version
 of dd that gracefully handles errors.  YMMV, use at your own risk :)

 /Christian


 Setup Details
 /dev/sda1 mounted on /export/backups/ (2TB volume)
 /dev/sdb1 mounted on /export/offsite/ (2TB volume)

 Everything that usually goes in /var/lib/BackupPC is symlinked to
 /export/backups, so both the backups and pool are on the big volume.

 Every Friday morning I run the following script:

 #!/bin/bash
 #
 # updateoffsite.sh
 # This script rsyncs the backups to the removable
 # volume in an image file
 #
 # 2007-06-27 [EMAIL PROTECTED]
 #   Initial version
 # 2007-06-29 [EMAIL PROTECTED]
 #   added logging for stderr and stdout
 # 2007-07-20 [EMAIL PROTECTED]
 #   changed script to use dd_rescue
 #
 #stop BackupPC before touching its data volume
 service BackupPC stop

 umount /export/backups
 mount /export/offsite

 echo begin image
 echo `date`
 #now dd the (unmounted) backups volume, /dev/sda1, to an image file on the offsite disk
 /usr/bin/dd_rescue -a -v /dev/sda1 /export/offsite/backup.img
 echo end image
 echo `date`
 umount /export/offsite
 mount /export/backups
 service BackupPC start

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of
 Christopher Utley
 Sent: Wednesday, November 07, 2007 8:48 AM
 To: backuppc-users@lists.sourceforge.net
 Subject: [BackupPC-users] Designing a BackupPC system to include
 off-sitebackups

 I have setup a couple BackupPC systems now, and they have been working
 great for me.  Now I'm looking to install yet another, but with a
 twist.  I'd like to setup an off-site backup, which would essentially
 be a carbon copy of everything BackupPC maintains locally.  Is there a
 straightforward way to accomplish this?

 Thanks,
 Chris Utley


 -- 
 Christopher J. Utley
 Director of Technology
 Market Intelligence Group
 PH: 513.333.0676
 [EMAIL PROTECTED]

 


 
 
 
 








echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc
(It's safe)


-10941 days until retirement!-



Re: [BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-06 Thread Paul Archer
Looks like a good tip. Unfortunately, since I'm running a reiserfs 
filesystem on Linux, it doesn't help me directly. But it does bring up a 
good point: does anyone know of any filesystem tweaks for reiserfs that 
might bring similar improvements in this situation of accessing millions of 
small files?

Paul


Yesterday, John Pettitt wrote:

 I'm posting this to the list so people searching for FreeBSD optimizations 
 will find it in the archives.
 
 I finally got around to looking at why my FreeBSD server was only backing up 
 at about 2.5MB/sec using tar with clients with lots of
 small files. 
 
 Using my desktop (a Mac PRO) as the test subject backups were running at 
 about 2.5MB/sec or more accurately 25 files a second.   The
 server (FreeBSD 6.2 with a 1.5 TB UFS2 raid 10 on a 3ware card) was disk 
 bound.
 
 Running the ssh / tar combo from the command line directed to /dev/null gave 
 close to 25MB/sec confirming that it wasn't the client or
 the network.  I've done the normal optimization stuff (soft updates, 
 noatime).   After a lot of digging I discovered
 vfs.ufs.dirhash_maxmem
 
 The ufs filesystem hashes directories to speed up access when there are lots 
 of files in a directory (as happens with the pool)
 however the maximum memory allocated to the hash by default is 2 MB! This 
 is way too small and the hash buffers were thrashing on
 almost every pool file open.
 
 (for those who care sysctl -a | egrep dirhash will show the min, max and 
 current hash usage - if current is equal to max you've
 probably got it set too small)
 
 On my box setting the vfs.ufs.dirhash_maxmem to 128M using sysctl did the 
 trick - the system is using 72M for the whole pool tree (2.5
 million files) and backups are now running at about 10 MB/sec and 100 files a 
 second! (this is now compute bound on the server which
 is an old P4 2.6 box).
 
 John
 
 
 





  I've started referring to the action against Iraq as
  Desert Storm 1.1, since it reminds me of a Microsoft upgrade:
  it's expensive, most people aren't sure they want it, and it
  probably won't work. -- Kevin G. Barkes


-10942 days until retirement!-



Re: [BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-06 Thread Paul Archer
4:19pm, Toni Van Remortel wrote:

 Paul Archer wrote:
 Looks like a good tip. Unfortunately, since I'm running a reiserfs
 filesystem on Linux, it doesn't help me directly. But it does bring up a
 good point: does anyone know of any filesystem tweaks for reiserfs that
 might bring similar improvements in this situation of accessing millions of
 small files?

 I mount my /backup raid with noatime and notail options.

Don't forget nodiratime.
I was thinking more of reiserfstune tweaks or similar, but reiserfs is supposed to 
be optimized for a lot of small files to begin with, so there might not be 
anything else (short of buying more disks).
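For reference, those mount options go in /etc/fstab along these lines (the
device and mount point are placeholders):

  /dev/md0   /backup   reiserfs   noatime,nodiratime,notail   0   2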

Paul




Re: [BackupPC-users] Excluding folders per host

2007-11-06 Thread Paul Archer
10:30am, Jake Solid wrote:

 Hello,

 I currently have BackupPC running and backing up 7 different Linux servers.
 Inside the config.pl I have the directive $Conf{BackupFilesExclude} to
 exclude a list of folders and subfolders. In the list I have BackupPC
 exclude /var/log/spooler* for all the servers. The problem is that I
 need to back up the content of /var/log/spooler on 1 of the 7 Linux servers.
 How can I accomplish this using the same config.pl?

 Thanks in advance,

Well, you might write some Perl code that uses the $host variable to do a 
substitution (although I haven't tested that myself). And as Craig Barratt 
has mentioned, if you change your configs through the CGI interface, custom 
Perl in your config file won't survive.
Otherwise, you'll need to create a config.pl for just that host, and put the 
appropriate config(s) in there.
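In that host's own config file, the override would look something like this
(a sketch; the '/' share name and the first two entries just stand in for
whatever your global exclude list contains, minus the spooler entry):

$Conf{BackupFilesExclude} = {
    '/' => ['/proc', '/sys'],    # same as the global list, but without /var/log/spooler*
};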

Paul



Re: [BackupPC-users] Anybody comming to LISA 2007 in Dallas next week?

2007-11-06 Thread Paul Archer
I'll be there (here, actually, since I'm local).

Paul

11:46pm, John Rouillard wrote:

 Hi all:

 Is anybody coming to the LISA conference
 http://www.usenix.org/events/lisa07/index.html in Dallas next week?

 I was thinking of scheduling a BOF (birds of a feather) session where
 we could swap information, tips, wishes etc.

 Anybody interested?

 --
   -- rouilj

 John Rouillard
 System Administrator
 Renesys Corporation
 603-643-9300 x 111







My parents just came back from a planet where the dominant lifeform
had no bilateral symmetry, and all I got was this stupid F-Shirt.


-10942 days until retirement!-



Re: [BackupPC-users] upgrading from 2.1.2 to 3.0.0

2007-11-05 Thread Paul Archer
8:04am, Keith Edmunds wrote:

 On Sun, 4 Nov 2007 21:26:18 -0600 (CST), [EMAIL PROTECTED] said:

 This
 is for home use--important files (to me) but not a mission critical
 situation or anything.

 Files are either a) important and thus backed up b) unimportant but being
 backed up as part of a test or c) unimportant or already backed up.

 I'm not sure how the developers are supposed to deal with files that are
 important but not mission critical. Either you want them backed up or
 you don't, and you need to accept responsibility for determining which.

 Not meaning to be rude, but users of backup software often try to rate the
 importance of files on some kind of sliding scale whereas, in reality,
 it's pretty much black and white: they matter or they don't.


I think you misread me. Of course my files are important, and of course they 
matter. What I meant was that this is for home use. If things go wonky, and 
I can't back anything up for a week (or I have to jump through massive hoops 
to restore something), I don't have my boss screaming at me because a 
mission-critical server isn't protected. In other words, my files are on the 
line, but my job isn't.

And I never said (or even implied) that I wanted the developers to classify 
my data as important but not mission critical--or in any particular way. 
I was looking for a classification of the beta software, i.e. "data-safe, but 
there are some nits", or "getting there, but restores have been a problem", 
or whatever.

Paul



Re: [BackupPC-users] Changing rsync options for different shares.

2007-11-04 Thread Paul Archer
9:08pm, John Rouillard wrote:

 Hi all:

 We tell our users that ~user/bak will be backed up and they can
 symbolically link in any directories/files they want backed up.

 To make this work with backuppc and rsync, it means adding the -L or
 --copy-links option to the rsync command. However I only want it when
 backing up those particular directories (shares) and not say when
 backing up / or /usr.

 Does anybody have any ideas for modifying the rsync options on a per
 share basis? I was thinking of something similar to the structure for:

$Conf{BackupFilesExclude} = {
   'c' => ['/temp', '/winnt/tmp'], # these are for 'c' share
   '*' => ['/junk', '/dont_back_this_up'], # these are for other shares
};

 Does this seem reasonable? Then I could do:
$Conf{RsyncShareArgs} = {
   '/home/user1/bak' => ['--copy-links' ],
   };

 The only workaround I can think of right now is to define a new host
 and override the $Conf{RsyncArgs} in there. However this is less than
 optimal since I can't inherit the RsyncArgs from the main
 configuration file and augment them, I have to maintain them
 separately from the main config file.

 This is also a problem when I have hosts over the WAN and I have to
 bandwidth-limit them. I can't just add --bwlimit=128 in some way; I
 have to duplicate the entire $Conf{RsyncArgs} variable definition.

 Does anybody have another workaround?

 --

I haven't tested this, but I believe the order is to source the main config 
file and then source the config file for the individual host being backed up. 
So you should be able to do something like this:

$Conf{RsyncArgs} = [ @$Conf{RsyncArgs}, '--copy-links' ];

Since you're dealing with Perl, you could probably create a file with a list 
of slow hosts, read that into a hash (say, %slow_hosts, since I don't like 
CamelCase), and then you can say:

$Conf{RsyncArgs} = [ @$RConf{RsyncArgs}, '--bwlimit=128' ] if defined 
$slow_hosts{$host};

Paul



Re: [BackupPC-users] Changing rsync options for different shares.

2007-11-04 Thread Paul Archer
Tomorrow, John Rouillard wrote:

 On Sun, Nov 04, 2007 at 07:41:51PM -0600, Paul Archer wrote:
 9:08pm, John Rouillard wrote:
 We tell our users that ~user/bak will be backed up and they can
 symbolically link in any directories/files they want backed up.

 To make this work with backuppc and rsync, it means adding the -L or
 --copy-links option to the rsync command. However I only want it when
 backing up those particular directories (shares) and not say when
 backing up / or /usr.

 Does anybody have any ideas for modifying the rsync options on a per
 share basis? I was thinking of something similar to the structure for:
   $Conf{BackupFilesExclude} = {
 [...]
 The only workaround I can think of right now is to define a new host
 and override the $Conf{RsyncArgs} in there. However this is less than
 optimal since I can't inherit the RsyncArgs from the main
 configuration file and augment them, I have to maintain them
 separately from the main config file.

 This is also a problem when I have hosts over the WAN and I have to
 bandwidth-limit them. I can't just add --bwlimit=128 in some way; I
 have to duplicate the entire $Conf{RsyncArgs} variable definition.

 I haven't tested this, but I believe the order is to source the main config
 file and then source the config file for the individual host being backed
 up. So you should be able to do something like this:

 $Conf{RsyncArgs} = [ @$Conf{RsyncArgs}, '--copy-links' ];

 You are partly correct. The main config file is sourced first followed
 by the per host config file. However I tried to access
 @$Conf{RsyncShareName} in the per host .pl file and found out the
 %Conf array is undefined. I think what is happening is this:

 In BackuPC::Storage::Text

  use vars qw(%Conf);
  ...
  sub ConfigDataRead
  {
my($s, $host) = @_;
my($ret, $mesg, $config, @configs);

#
# TODO: add lock
#
my $conf = {};
my $configPath = $s-ConfigPath($host);

push(@configs, $configPath) if ( -f $configPath );
foreach $config ( @configs ) {
%Conf = ();
 if ( !defined($ret = do $config) && ($! || $@) ) {
 $mesg = "Couldn't open $config: $!" if ( $! );
 $mesg = "Couldn't execute $config: $@" if ( $@ );
$mesg =~ s/[\n\r]+//;
return ($mesg, $conf);
}
%$conf = ( %$conf, %Conf );
}
   ...

 @configs is the global config followed by the per host config, but
 notice that %Conf is set to the empty array, and appended to %conf
 overwriting any elements in %conf that are the same as options in
 %Conf. This is how it accumulates the config entries.

 Since $conf is a lexically scoped variable, its contents aren't
 available in the do config clause. I was thinking of changing the my
 %conf to local %conf so I could get its values in the config files
 and use:

  $Conf{RsyncArgs} = [ @$conf{RsyncArgs}, '--copy-links' ];

 but I am not sure of the collateral effects of this change.

That does complicate things. Perhaps using Data::Dumper to spit out 
everything that's set when the per-machine config.pl is called would reveal 
something useful.

It might also be possible to set your own variable(s) in the main config 
file that could then be accessed later on.


 Since you're dealing with Perl, you could probably create a file with a
 list of slow hosts, read that into a hash (say, %slow_hosts, since I don't
 like CamelCase), and then you can say:

 $Conf{RsyncArgs} = [ @$RConf{RsyncArgs}, '--bwlimit=128' ] if defined
 $slow_hosts{$host};

 I assume $RConf was a typo and you meant $Conf?

Exactly.

Paul



[BackupPC-users] upgrading from 2.1.2 to 3.0.0

2007-11-04 Thread Paul Archer
I installed backuppc on a (K)Ubuntu 7.04 machine, not realizing that I was 
getting version 2.1.2. I plan on upgrading the machine to 7.10, and 
upgrading backuppc to 3.0.0.
Two questions:

1) Is there anything particular I should worry about or watch out for after 
the upgrade?

2) Should I use the packaged 3.0.0, or is 3.1.0beta1 worth going to? (This 
is for home use--important files (to me) but not a mission critical 
situation or anything.)

Thanks,

Paul



-
Welcome to downtown Coolsville--population: us.
-

-10943 days until retirement!-



Re: [BackupPC-users] Changing rsync options for different shares.

2007-11-04 Thread Paul Archer
Tomorrow, John Rouillard wrote:

 On Sun, Nov 04, 2007 at 06:41:24PM -0800, Craig Barratt wrote:
 Paul writes:
 So you should be able to do something like this:

 $Conf{RsyncArgs} = [ @$Conf{RsyncArgs}, '--copy-links' ];
 Since you're dealing with Perl, you could probably
 create a file with a list of slow hosts, read that
 into a hash (say, %slow_hosts, since I don't like
 CamelCase), and then you can say:

 $Conf{RsyncArgs} = [ @$RConf{RsyncArgs}, '--bwlimit=128' ] if defined 
 $slow_hosts{$host};

 Unfortunately this won't work for two reasons: when the config files
 are parsed, the %Conf hash is empty (it is merged after each file is
 parsed).

 Yup. For those looking for the details see my prior email.


I was thinking: at the bottom of the main config.pl:
%mainConf = %Conf;

That should set a global variable that, as long as the main config.pl is 
sourced (eval'ed, actually, yes?) before the per-machine config.pl, should 
be available for use in the per-machine config. So the above line would 
become: 
$Conf{RsyncArgs} = [ @$mainConf{RsyncArgs}, '--bwlimit=128' ] if defined 
$slow_hosts{$host};
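Spelled out as a sketch (untested; note the dereference needs braces, and
%mainConf and %slow_hosts are names I'm making up here):

# at the very end of the main config.pl
%mainConf = %Conf;

# in the per-host config.pl for a slow host
$Conf{RsyncArgs} = [ @{ $mainConf{RsyncArgs} }, '--bwlimit=128' ];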


Paul



Re: [BackupPC-users] upgrading from 2.1.2 to 3.0.0

2007-11-04 Thread Paul Archer
9:19pm, dan wrote:

 betas are betas.  if this is a test setup, go ahead and run the beta, but if
 this is to hold critical data, use the stable.  if you are moving to ubuntu
 7.10, then just install ubuntu and `apt-get install backuppc` and you will
 be set.

 i will also point out to you that if you will be storing files somewhere
 other than the default(ubuntu) /var/lib/backuppc, maybe consider mounting
 that volume on /var/lib/backuppc, otherwise some of the status info in the
 gui is missing.

Good tip, thanks.


 also, if you want, you can pull bpc3.0 from ubuntu backports instead of
 upgrading.  i personally just moved a few machines to 7.10 server and im
 very happy.

I've upgraded a few machines too, with only a couple minor problems.

I enabled backports on that machine, but aptitude is still showing only the 
old version. Maybe I need to go back and kick it.

Paul


 On 11/4/07, Paul Archer [EMAIL PROTECTED] wrote:

 I installed backuppc on a (K)Ubuntu 7.04 machine, not realizing that I was
 getting version 2.1.2. I plan on upgrading the machine to 7.10, and
 upgrading backuppc to 3.0.0.
 Two questions:

 1) Is there anything particular I should worry about or watch out for
 after
 the upgrade?

 2) Should I use the packaged 3.0.0, or is 3.1.0beta1 worth going to? (This
 is for home use--important files (to me) but not a mission critical
 situation or anything.)



[BackupPC-users] backuppc backing up backup directory

2007-11-03 Thread Paul Archer
First, I'm new to backuppc, so this may be something I've missed in the 
docs.

Setup:  (K)ubuntu 7.10 on an old PIII
backup directory is on /backup filesystem (as /backup/backuppc)
changed backup directory by modifying /etc/init.d/backuppc
machine name is shebop

I'm using rsync to back up the machine itself. Here's the relevant section of 
/backup/backuppc/pc/shebop/config.pl:

$Conf{RsyncShareName} = [
 '/',
 '/export/bedroom',
 '/export/lildell',
 '/data/extra',
 '/data/home_videos',
 '/data/images',
 '/data/mp3s',
 ];

In my main config.pl I have:
$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = '/';
#and
$Conf{RsyncArgs} = [
 '--numeric-ids',
 '--perms',
 '--owner',
 '--group',
 '--devices',
 '--links',
 '--times',
 '--block-size=2048',
 '--recursive',
 '-D',
 '--one-file-system',
];


My problem is that the system is backing up the /backup filesystem for some 
reason:

[EMAIL PROTECTED]:/backup/backuppc/pc/shebop# cd new
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new# l
total 0
drwxr-x--- 3 backuppc backuppc 72 2007-11-03 05:27 f%2f
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new# cd f%2f/
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f# l
total 0
drwxr-x--- 3 backuppc backuppc 80 2007-11-03 05:27 fbackup
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f# cd fbackup/
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup# l
total 0
drwxr-x--- 6 backuppc backuppc 208 2007-11-03 05:27 fbackuppc
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup# cd fbackuppc/
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup/fbackuppc# l
total 8
-rw-r- 2 backuppc backuppc  955 2007-11-02 17:29 f.bash_history
drwxr-x--- 2 backuppc backuppc   48 2007-11-03 05:27 fcpool
drwxr-x--- 2 backuppc backuppc  320 2007-11-03 05:27 flog
drwxr-x--- 3 backuppc backuppc   72 2007-11-03 05:27 fpc
drwxr-x--- 2 backuppc backuppc  200 2007-11-03 05:27 f.ssh
-rw-r- 2 backuppc backuppc 3747 2007-11-02 17:29 f.viminfo
[EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup/fbackuppc# du -sh
3.4G    .


  [EMAIL PROTECTED]:/etc/backuppc# ps auxww |grep rsync
backuppc 26824  0.2  1.2   6040  3240 ?        S    19:21   0:00 /usr/bin/ssh 
-q -x -l root shebop /usr/bin/rsync --server --sender --numeric-ids --perms 
--owner --group --devices --links --times --block-size=2048 --recursive -D 
--one-file-system --exclude=/proc --exclude=/sys --ignore-times . /
root     26828  3.8  3.5  10648  9020 ?        Ss   19:21   0:14 /usr/bin/rsync 
--server --sender --numeric-ids --perms --owner --group --devices --links 
--times --block-size=2048 --recursive -D --one-file-system --exclude=/proc 
--exclude=/sys --ignore-times . /

You can see here that the rsync is being passed two directories: '.' and 
'/'. Is that normal? I think this may be the root of my problem, but I can't 
quite figure out how the . is getting there. Any suggestions?
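
In the meantime, the workaround I'm considering (untested, just a sketch based 
on the config above) is to exclude the backup filesystem from the '/' share 
explicitly in shebop's config.pl:

$Conf{BackupFilesExclude} = {
    '/' => ['/backup'],   # keep the pool filesystem out of the '/' share
};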

Paul

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc backing up backup directory

2007-11-03 Thread Paul Archer
8:01am, Paul Archer wrote:

  [EMAIL PROTECTED]:/etc/backuppc# ps auxww |grep rsync
 backuppc 26824  0.2  1.2   6040  3240 ?        S    19:21   0:00 /usr/bin/ssh 
 -q -x -l root shebop /usr/bin/rsync --server --sender --numeric-ids --perms 
 --owner --group --devices --links --times --block-size=2048 --recursive -D 
 --one-file-system --exclude=/proc --exclude=/sys --ignore-times . /
 root     26828  3.8  3.5  10648  9020 ?        Ss   19:21   0:14 
 /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group 
 --devices --links --times --block-size=2048 --recursive -D --one-file-system 
 --exclude=/proc --exclude=/sys --ignore-times . /

 You can see here that the rsync is being passed two directories: '.' and
 '/'. Is that normal? I think this may be the root of my problem, but I can't
 quite figure out how the . is getting there. Any suggestions?


I should mention that I'm seeing the same behavior (as far as the . and / in 
the arg list) with other clients that don't have custom config.pl files, so 
it shouldn't have anything to do with my custom config.pl for shebop.

Paul

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc backing up backup directory

2007-11-03 Thread Paul Archer
8:01am, Paul Archer wrote:

 First, I'm new to backuppc, so this may be something I've missed in the docs.

 Setup:  (K)ubuntu 7.10 on an old PIII
   backup directory is on /backup filesystem (as /backup/backuppc)
   changed backup directory by modifying /etc/init.d/backuppc
   machine name is shebop

 I'm using rsync to back up the machine itself. Here's the relevant section of 
 /backup/backuppc/pc/shebop/config.pl:

 $Conf{RsyncShareName} = [
'/',
'/export/bedroom',
'/export/lildell',
'/data/extra',
'/data/home_videos',
'/data/images',
'/data/mp3s',
];


I did a couple more tests. First I removed 
/backup/backuppc/pc/shebop/config.pl, and it backed up normally (that is, it 
backed up the root filesystem).

Then I changed the order of $Conf{RsyncShareName}:

$Conf{RsyncShareName} = [
 '/export/bedroom',
 '/',
 '/export/lildell',
 '/data/extra',
 '/data/home_videos',
 '/data/images',
 '/data/mp3s',
 ];

The odd thing here is that it seems to have completely skipped the root 
filesystem. So far it's backed up /export/bedroom, /export/lildell, and 
/data/extra.

Does anyone have any ideas about this?

Paul


 In my main config.pl I have:
 $Conf{XferMethod} = 'rsync';
 $Conf{RsyncShareName} = '/';
 #and
 $Conf{RsyncArgs} = [
'--numeric-ids',
'--perms',
'--owner',
'--group',
'--devices',
'--links',
'--times',
'--block-size=2048',
'--recursive',
'-D',
'--one-file-system',
 ];


 My problem is that the system is backing up the /backup filesystem for some 
 reason:

 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop# cd new
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new# l
 total 0
 drwxr-x--- 3 backuppc backuppc 72 2007-11-03 05:27 f%2f
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new# cd f%2f/
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f# l
 total 0
 drwxr-x--- 3 backuppc backuppc 80 2007-11-03 05:27 fbackup
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f# cd fbackup/
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup# l
 total 0
 drwxr-x--- 6 backuppc backuppc 208 2007-11-03 05:27 fbackuppc
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup# cd fbackuppc/
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup/fbackuppc# l
 total 8
 -rw-r- 2 backuppc backuppc  955 2007-11-02 17:29 f.bash_history
 drwxr-x--- 2 backuppc backuppc   48 2007-11-03 05:27 fcpool
 drwxr-x--- 2 backuppc backuppc  320 2007-11-03 05:27 flog
 drwxr-x--- 3 backuppc backuppc   72 2007-11-03 05:27 fpc
 drwxr-x--- 2 backuppc backuppc  200 2007-11-03 05:27 f.ssh
 -rw-r- 2 backuppc backuppc 3747 2007-11-02 17:29 f.viminfo
 [EMAIL PROTECTED]:/backup/backuppc/pc/shebop/new/f%2f/fbackup/fbackuppc# du -sh
 3.4G    .


 [EMAIL PROTECTED]:/etc/backuppc# ps auxww |grep rsync
 backuppc 26824  0.2  1.2   6040  3240 ?        S    19:21   0:00 /usr/bin/ssh 
 -q -x -l root shebop /usr/bin/rsync --server --sender --numeric-ids --perms 
 --owner --group --devices --links --times --block-size=2048 --recursive -D 
 --one-file-system --exclude=/proc --exclude=/sys --ignore-times . /
 root     26828  3.8  3.5  10648  9020 ?        Ss   19:21   0:14 
 /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group 
 --devices --links --times --block-size=2048 --recursive -D --one-file-system 
 --exclude=/proc --exclude=/sys --ignore-times . /

 You can see here that the rsync is being passed two directories: '.' and '/'. 
 Is that normal? I think this may be the root of my problem, but I can't quite 
 figure out how the . is getting there. Any suggestions?

 Paul




---
If you live in a small town /You might meet a dozen or two/
Young alien types /Who step out /And dare to declare/
We're through being cool.  --  Devo, Through Being Cool
---

-10945 days until retirement!-

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up VMs

2007-11-03 Thread Paul Archer
8:55pm, Bradley Alexander wrote:

 I have a VMware server with a number of virtual servers (as well as a 
 VirtualBox installation on another box).

 Is it better to back up the virtual hosts individually, or just to back up the 
 VMware/VirtualBox installation? From a space perspective, if anything changes 
 inside a virtual machine, does the entire VM image get backed up, as opposed to 
 just the changed file, which would make backing up the guests individually 
 smaller in general?

In general, you're better off backing up the virtual machines as if they 
were real machines. That way you'll be able to do incrementals. If you 
back up the files that represent the virtual disks, you'll end up storing 
(and often transferring) the entire image all over again after even minor 
changes.
You may want to look around for a specialized backup solution that 
understands VMs, something that can do a binary diff on the disk image 
files. Keep in mind that if you do that, you won't really be able to restore 
individual files. (That's another advantage of backing up the machines 
individually, from within the guest OS.)
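
For instance (just a sketch; host names and paths here are hypothetical), each 
guest gets its own line in BackupPC's hosts file and a per-guest config.pl like 
any other client, while the VM host's own config excludes the image directory 
so the guests' data isn't pulled in twice:

# pc/vmguest1/config.pl -- back up the guest over rsync+ssh like a real box
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/'];

# pc/vmhost/config.pl -- keep the big .vmdk/.vdi images out of the host's backup
$Conf{BackupFilesExclude} = { '/' => ['/var/lib/vmware'] };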

Paul

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/