Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-08 Thread Tino Schwarze
On Wed, Apr 07, 2010 at 01:03:18PM +0200, Norbert Schulze wrote:

> > Or just post the output of "vmstat 10 10"
> 
> r...@server:~# vmstat 10 10
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  0  5  14764 6196504 687296 20274400 6 115  2  1 78 18
>  0  5  14764 6184340 688156 21416800  807252 1729 1947  2  1 27 69
>  0  5  14764 6175044 687184 22478400  8106   438 2160 2618  3  2 26 69
>  0  7  14764 6154740 686712 23560000  829728 1638 1814  3  1 16 80
>  0  7  14764 6145812 688652 24305200  8054   109 1602 1744  2  1 14 84
>  2  8  14764 6136036 687120 25358000  8205   170 1782 1997  3  1 16 80
>  0  5  14764 6132008 687580 26611200  778359 1546 1659  3  1 17 79
>  0  6  14764 6123984 687284 27517200  794030 1756 2008  2  1 29 67
>  0  6  14764 6117016 688716 28106800  7420   340 1590 1753  2  1 21 77
>  0  6  14764 6108932 688092 28939600  8068   105 1896 2182  3  1 24 73

Your I/O system seems saturated (70-80% of time is spent waiting for
I/O). Try running only one BackupPC_nightly in parallel.
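In config.pl that would be (a minimal sketch, assuming the stock
BackupPC 3.x option names):

$Conf{MaxBackupPCNightlyJobs} = 1;  # one nightly pass instead of two in parallel
$Conf{BackupPCNightlyPeriod}  = 2;  # optional: spread the pool traversal over two nights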

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-08 Thread Tino Schwarze
On Wed, Apr 07, 2010 at 05:02:33PM -0400, Josh Malone wrote:

> >>> OS is Ubuntu 9.04 32Bit
> >>> IMHO it is better to migrate to a 64Bit-System!?
> >>
> >> I don't see an urgent reason to migrate to 64 bit... I would have
> >> installed this machine 64 bit at the beginning, just because it's a 64
> >> bit machine. You'll lose some performance, but it might be barely
> >> noticeable.
> > 
> > I'm not so sure that's the case. My understanding is that a 32-bit OS
> > can only address a little over 3GB of physical memory, since the
> > system has 8GB, I would think you would want to upgrade to a 64-bit
> > OK.
> > 
> > Richard
> 
> Using PAE, you can have >3.5 G of usable ram on a system. HOWEVER, each
> individual process only has a 4GB virtual address space, so only 4G of ram
> per process. If you have >1 memory-intensive process you can make use of 8G
> of ram on a 32-bit system.

Right. But we're talking about BackupPC_nightly here, which doesn't
require loads of memory. As we can see in the vmstat output, about 6 GB of
RAM are used as disk cache. So we're on the safe side here regarding
32-bit vs. 64-bit.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-07 Thread Tino Schwarze
On Wed, Apr 07, 2010 at 12:11:31PM +0200, Norbert Schulze wrote:
> > take a look at what
> > vmstat 10
> 
> r...@server:/var/www# vmstat
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  4  14764 6235884 688976 13520400 2 104  2  1 78 18
> 
> r...@server:/var/www# vmstat 10
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  4  14764 6217932 687500 15443200 2 104  2  1 78 18

I suppose this is the first line of vmstat output? It's useless - it
shows overall statistics since system boot. Drop the first line, then
wait another 10 seconds for the next line to appear, or just post the
output of "vmstat 10 10".

> > What kind of storage are you using?
> 
> Storage: 6 x WD RE4-GP 2002FYPS 2TB 7k S-ATA-II 24/7 (RAID50)
> RAID controller: Adaptec 3805 RAID SAS/SATA, 8-channel
> 
> 
> OS is Ubuntu 9.04 32Bit
> IMHO it is better to migrate to a 64Bit-System!?

I don't see an urgent reason to migrate to 64 bit... I would have
installed this machine 64 bit at the beginning, simply because it's 64
bit hardware. Staying on 32 bit costs some performance, but it might be
barely noticeable.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] BackupPC_nightly takes too much time

2010-04-07 Thread Tino Schwarze
Hi Norbert,

On Wed, Apr 07, 2010 at 11:34:57AM +0200, Norbert Schulze wrote:

> the BackupPC_nightly takes too much time. Is this too much data for this 
> server?
> 
> Intel(R) Core(TM)2 Quad CPUQ9650  @ 3.00GHz
> Memory: 8GB
> 
> General Server Information
> The servers PID is 15477, on host BACKUPPC-Server, version 3.1.0, started at 
> 3/29 14:35. 
> This status was generated at 4/7 11:29. 
> The configuration was last loaded at 4/7 11:29. 
> PCs will be next queued at 4/7 12:00. 
> Other info: 
> 9 pending backup requests from last scheduled wakeup, 
> 0 pending user backup requests, 
> 10 pending command requests, 
> Uncompressed pool: 
> Pool is 357.26GB comprising 469442 files and 4369 directories (as of 4/6 
> 01:05), 
> Pool hashing gives 87 repeated files with longest chain 12, 
> Nightly cleanup removed 314 files of size 0.24GB (around 4/6 01:05), 
> Compressed pool: 
> Pool is 472.31GB comprising 5608440 files and 4369 directories (as of 4/6 
> 14:03), 
> Pool hashing gives 30090 repeated files with longest chain 8, 
> Nightly cleanup removed 7149 files of size 6.13GB (around 4/6 14:03), 
> Pool file system was recently at 29% (4/7 11:28), today's max is 29% (4/7 
> 01:00) and yesterday's max was 29%
> 
> Currently Running Jobs
> 
> admin    4/7 01:00  BackupPC_nightly -m 0 127    5259   
> admin1   4/7 01:00  BackupPC_nightly 128 255     5260  

take a look at what

vmstat 10

prints (ignore the first line). It will show you the I/O load of the
system. What kind of storage are you using? Maybe it is saturated, then
you might want to try not running BackupPC_nightly in parallel since it
is I/O bound.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Alternative way to check if Host is alive -- can't use Ping

2010-03-30 Thread Tino Schwarze
On Tue, Mar 30, 2010 at 06:22:39PM +0200, Mirco Piccin wrote:
> Hi,
> 
> >> In my setup ICMP packets are dropped by the firewall in front of the
> >> machine I need to backup. I have been searching for alternatives but
> >> haven't found anything yet. Any pointers to a fix? Any other way to let
> >> BackupPC know the machine is alive?
> >
> >What ports are available?
> >could something like httping be used as a substitute for /bin/ping?
> 
> i remember using - not with BackupPC - a Perl script that allows probing any
> host port (both TCP and UDP) - and also ping.
> It uses both Net::Ping and Socket.
> 
> The solution could be something like that, or simply a telnet to a specific
> port, working with the output.
> Just run nmap against the host to see the available ports.

I've been using netcat -z $host $port for easy "is that port open?" tests.
You'll need rsyncd or ssh access anyway, so just check these ports.
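If you want BackupPC itself to do such a check, here is a hedged sketch
of a ping replacement (the wrapper path and port are assumptions, and
note that BackupPC parses the ping output for a round-trip time, so the
wrapper fakes one - verify against your version's CheckHostAlive):

#!/bin/sh
# hypothetical /usr/local/bin/portping <host> - "ping" via a TCP connect to port 22
host="$1"
if nc -z -w 5 "$host" 22 >/dev/null 2>&1 ; then
    # BackupPC greps for "time=... ms", so emit a fake round-trip time
    echo "64 bytes from $host: icmp_seq=1 ttl=64 time=0.5 ms"
    exit 0
fi
exit 1

Then set $Conf{PingCmd} = '/usr/local/bin/portping $host';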

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] web interface issues

2010-03-04 Thread Tino Schwarze
Quick idea: Maybe your hosts file got messed up?

Tino.

On Thu, Mar 04, 2010 at 08:50:04AM -0600, Nick Hadaway wrote:
> Hello backuppc people :)
> 
> I am running BackupPC 3.2.0beta0... and This is the 3rd or 4th time this 
> has happened... and i'm not sure why...
> 
> Log in to the cgi-bin interface...
> Go to add a host (already have about 14 hosts in there)...
> after hitting save from adding the new host... all styling is lost on 
> the web page...
> none of the data is loading up in the interface...
> host summary says 0 hosts have been backed up, etc...
> 
> This persisted through a reboot as well...
> 
> How do I troubleshoot this?  apache is throwing no errors... nothing is 
> reported in backuppc logs... it seems like a problem with mod_perl maybe??
> 
> Any help is appreciated!
> 
> -nick
> 

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Changing the backup data location

2010-02-24 Thread Tino Schwarze
Hi Chris,

On Wed, Feb 24, 2010 at 11:21:02AM -0500, ckandreou wrote:

> I use BackupPC v.3.0 
> I changed the backup data directory to a new location because of space 
> issues. What setting should I set for the pool location to be at the same 
> directory as the backup_data? 

The easiest and most secure way of changing where your data resides is to
just mount your new volume at the old place. E.g. if you had your data
in /var/lib/backuppc, just mount your new volume there.
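A minimal sketch of the switch-over (device name and mount point are
assumptions; the copy is slow on large pools because of the hard links):

/etc/init.d/backuppc stop                  # no backups while the data moves
rsync -aH /var/lib/backuppc/ /mnt/newvol/  # one-time copy, preserving hard links
mount /dev/sdb1 /var/lib/backuppc          # mount the new volume at the old place
/etc/init.d/backuppc start

Add a matching /etc/fstab entry so the mount survives a reboot.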

There is also a HOWTO in the wiki, IIRC:
> Wiki:http://backuppc.wiki.sourceforge.net

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Check-if-alive-pings alternatives

2010-02-08 Thread Tino Schwarze
Hi Sorin,

On Mon, Feb 08, 2010 at 11:29:11AM +0100, Sorin Srbu wrote:

> I have a server that sits behind a router (server is NAT:ed) and allows ssh
> connections in. That is to say, the *only* thing it allows in is ssh
> connections.
> 
> Now, BackupPC uses pings to check if the machine to be backed up is alive.
> Since the router in question doesn’t respond to any pings, it's in a
> pseudo-stealth mode, then BackupPC thinks the machine is down and doesn't
> initiate any backups even though the machine is actually alive and
> responding otherwise.
> 
> Short of making the router visible on the network for pings, is there any
> other way to circumvent this problem? Maybe connecting to the ssh-port or
> something? Ideas and pointers are greatly appreciated!

What's your argument for not making it visible? It will show up anyway,
one way or another.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Tmpfs advantageous for BackupPC?

2010-02-02 Thread Tino Schwarze
On Tue, Feb 02, 2010 at 04:52:37PM +0100, Sorin Srbu wrote:

> Would I have anything to gain, with respect to BackupPC, if I would mount
> /tmp to a ramdrive with tmpfs?

No. As far as I know, BackupPC does not use /tmp at all.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] error -4

2010-02-01 Thread Tino Schwarze
Hi Huw,

On Mon, Feb 01, 2010 at 04:23:42PM +, Huw Wyn Jones wrote:

> Both directories are on the same filesystem.
 
What filesystem type? Which version of BackupPC?

> >> From: "Tino Schwarze"
> >> What happens if you become the BackupPC user and create the hardlink 
> >> yourself by running:
> >> ln /backup/pc/macfs1/1/f%2fetc/fauth-client-config/attrib 
> >> /backup/pool/e/1/c/e1c552f76772a566f5f8f1ac58ce092b
> 
> I get:
> bash-3.2$ ln /backup/pc/macfs1/1/f%2fetc/fauth-client-config/attrib 
> /backup/pool/e/1/c/e1c552f76772a566f5f8f1ac58ce092b
> ln: creating hard link `/backup/pool/e/1/c/e1c552f76772a566f5f8f1ac58ce092b' 
> to `/backup/pc/macfs1/1/f%2fetc/fauth-client-config/attrib': No such file or 
> directory
 
Please check further why this didn't work - what file or directory does
not exist?

> I'm a little confused with this issue. My initial thought was that it might 
> be an issue created by the Windows hosts :-/

No, that shouldn't be connected with Windows...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] error -4

2010-02-01 Thread Tino Schwarze
Hi Huw,

On Mon, Feb 01, 2010 at 03:01:51PM +, Huw Wyn Jones wrote:

> I built a new backup server last week (CentOS if anyone was following my 
> previous thread) and now have everything running nicely. My log files however 
> are full of 'error -4' messages. See example below. Am I right in assuming 
> that I'm getting these errors because it's trying to backup attrib files? 

Are the /backup/pc and /backup/pool directories on the same filesystem?

> File /var/log/BackupPC/LOG
> 
> 2010-01-31 22:34:32 BackupPC_link got error -4 when calling 
> MakeFileLink(/backup/pc/macfs1/1/f%2fetc/fapt/attrib, 
> 5db9f1cb70f1334117b0db5426bc0154, 1)
> 2010-01-31 22:34:32 BackupPC_link got error -4 when calling 
> MakeFileLink(/backup/pc/macfs1/1/f%2fetc/fauth-client-config/attrib, 
> e1c552f76772a566f5f8f1ac58ce092b, 1)

What happens if you become the BackupPC user and create the hardlink
yourself by running:
ln /backup/pc/macfs1/1/f%2fetc/fauth-client-config/attrib 
/backup/pool/e/1/c/e1c552f76772a566f5f8f1ac58ce092b

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Central backup system software?

2010-01-28 Thread Tino Schwarze
Hi Roger,

On Wed, Jan 27, 2010 at 02:57:47PM -0500, toughman wrote:

> I have been looking for a software which will do central backup for our video 
> files from different stores.
> 
> Each store has Raid1 hotswap driver for backup HD and the HD will be pluged 
> into the central backup system once a week.  At central system, there are 
> different subfolders for all stores.  
> 
> I am looking for a software will automatically backup the store HD to central 
> backup system.  The backup software will only copy the data with the date 
> later than the date of the files in the central backup system. 
> 
> Can anyone please let me know the software?

Well, you asked within a particular software's discussion forum
already... If you just want to copy the harddisks, then I'd go for a
custom rsync-based scripting solution.

What operating system are you talking about, BTW? What is the expected
backup volume? How many stores, how large drives etc.? Are there
duplicate files or are they all unique?

It looks like BackupPC is not the kind of software you're looking for.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Thinking aloud about backup rotation

2010-01-27 Thread Tino Schwarze
Hi Nigel,

On Wed, Jan 27, 2010 at 02:40:24PM -, Nigel Kendrick wrote:

> > From: Tino Schwarze [mailto:backuppc.li...@tisc.de] 
> > Sent: 27 January 2010 11:27
> > To: backuppc-users@lists.sourceforge.net
> > Subject: Re: [BackupPC-users] Thinking aloud about backup rotation
> > 
> > Hi,
> > 
> > On Wed, Jan 27, 2010 at 10:33:48AM -, PD Support wrote:
> >  
> > There is an easier solution: Just don't name the files by weekday -
> > backup into the same file every day like "xyz_db.bak". Then you are free
> > to
> > a) copy it somewhere else on the server with a weekday name (and it
> > doesn't need to be backed up there)
> > b) just rely on BackupPC for restores
> > 
> > Why would you want to keep one week's worth of backups on the server
> > itself if BackupPC keeps those backups anyway?
> 
> Hi Tino - I'm just really thinking out loud about options. Local, daily, ZIP
> backups will be handy for minor issues as there will be 30+ sites spread
> over much of the southern half of the UK, backed up to 5 regional
> Linux-based servers. Restoring from a local file (via remote access) will be
> quicker than getting the data back to site through ADSL or by car/courier.
> 
> We already create a generically-named .bak file and back it up via BackupPC,
> but I can see circumstances where the rotation feature might be useful.

If possible, let your job run at, say, 11 p.m., or 10 minutes
before midnight. It could
1. move the last day's backup /backupdir/current.bak to 
/somewhere-else/backup_thisday.bak
2. start a new backup into /backupdir/current.bak

Then let BackupPC only backup /backupdir. That way you get your local
history of backups and provide BackupPC the opportunity to use rsync's
differential transfer optimizations.
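A hedged sketch of such a job (paths and the dump command are
placeholders from this thread, not a tested script):

#!/bin/sh
# from cron shortly before midnight, e.g.: 50 23 * * * /usr/local/bin/rotate_dump.sh
set -e
mv /backupdir/current.bak "/somewhere-else/backup_$(date +%a).bak"  # keep local history
dump_the_database > /backupdir/current.bak                          # placeholder dump command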

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Thinking aloud about backup rotation

2010-01-27 Thread Tino Schwarze
Hi,

On Wed, Jan 27, 2010 at 10:33:48AM -, PD Support wrote:
 
> > From: Keith Edmunds [mailto:k...@midnighthax.com] 
> > Sent: 26 January 2010 14:00
> > To: backuppc-users@lists.sourceforge.net
> > Subject: Re: [BackupPC-users] Thinking aloud about backup rotation
> > 
> > > If I have a source/dest folder with a week's worth of backups in it
> > > labelled Mon_backup.bak, Tue_backup.bak...
> > .
> > > It would be great if BackupPC had some way of 'knowing' that a folder
> > > contained files of a cyclic nature like this
> > 
> > If you use BackupPC for backups, it knows what to backup each time. It is
> > unreasonable to expect BackupPC to guess that you are also (for some
> > unstated reason) carrying out backups via a different mechanism which
> > BackupPC should somehow be aware of.
> > 
> > Why don't you trust BackupPC to backup your data, period? If the
> > Mon_backup.bak files are simply local copies, just exclude them from
> > BackupPC altogether, and have BackupPC backup the source instead.
> > 
> > Keith
> 
> The source is one file; the dump from an SQL database so it cannot be
> directly backed up. In this case, backing up today's dump by comparing it
> with yesterdays (with a different name) would be an extremely useful and
> time/bandwidth saving function - maybe even a 'selling point' for BackupPC!?

> Think about it - as things stand at the moment, backing up
> 'Wed_filename.bak' by comparing it with the currently-backed up version is
> looking at a file on the BackupPC server that has one week of changes in it
> rather than just one day's.
 
There is an easier solution: Just don't name the files by weekday -
back up into the same file every day, like "xyz_db.bak". Then you are free
to
a) copy it somewhere else on the server with a weekday name (and it
doesn't need to be backed up there)
b) just rely on BackupPC for restores

Why would you want to keep one week's worth of backups on the server
itself if BackupPC keeps those backups anyway?

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Strict backup schedule

2010-01-27 Thread Tino Schwarze
On Tue, Jan 26, 2010 at 08:24:06PM -0600, Gerald Brandt wrote:

> > Why expire all of January in the first week of February? That means you 
> > only have one weeks history? Why not just tell backuppc to keep 5 
> > 'weekly' fulls, which means you will always have the ones you want? 
> > 
> > > I found a delete script that can do the deletes for me, but is that the 
> > > only way? I'd hate to have to parse output to figure out what to delete. 
> > 
> > Yes, this is the hard part. Sooner or later you will probably need to 
> > find a way to expire a specific backup number. I don't think this will 
> > really work otherwise. 
> > 
> > Personally, I don't like this solution either... 
> > 
> > > BackupPC isn't meant designed to do this stuff, so I may have to script 
> > > the whole process. Ugh. My perl "ain't so good". 
> > 
> > Nope, it isn't... I don't think there is a 'proper' way to delete a 
> > specific backup either, but it would definitely require a bit of 
> > scripting, and making sure you don't 'mess it up' under any circumstance 
> > is harder. 
> > 
> > So far, I have used two variations: 
> > 1) Use the supported keep values (with various values) 
> > 2) Keep everything for ever. (Use excessively high keep values to cover 
> > at least 1 backups). 
> > 
> > I think your main issue is that backuppc can't use block level 
> > de-duplication. If it could, you could store all your daily SQL backups 
> > with minimal actual storage space consumption. Of course, this would 
> > come in handy for plenty of others too :) 
> > 
> > Maybe someone else will have something more helpful to add. 
> > 
> > Regards, 
> > Adam 
> > 
> 
> I appreciate the help. For now I have a crontab entry that calls a
> bash script every Friday, to perform fulls for the server, and I've
> set $Conf{FullPeriod} to 7.1. That should work for the next while,
> since Jan and Feb's last workday of the month is a Friday. 

> I also have a plan for running a full on the last day of the month,
> which is really no biggie. I'll put that in place in a bit. 

> Now all I need is a smart backup expire plan, so that by December all
> I have is the last day of the month backups for Jan-Nov, and a regular
> slew of incrementals/fulls for Nov and Jan. 

Why are you so strict on keeping _only_ those backups? Why not just keep
some more (which won't cost you a lot of space because of the pooling)?

Or maybe I didn't get the point?
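For instance, a staggered expiry schedule - a sketch only, the exact
ages depend on your $Conf{FullPeriod}:

$Conf{FullKeepCnt} = [4, 0, 12];  # 4 fulls at FullPeriod, 12 more at 4 x FullPeriod

That keeps roughly a month of weeklies plus a year of monthlies without
any external delete scripting.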

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Comments on this backup plan please

2010-01-26 Thread Tino Schwarze
Hi,

On Tue, Jan 26, 2010 at 12:22:45PM -, PD Support wrote:

> We are going to be backing up around 30 MS-SQL server databases via ADSL to
> a number of regional servers running CentOS (about 6 databases per backup
> server). 10 sites are 'live' as of now and this is how we have started...
> 
> The backups are between about 800MB and 5GB at the moment and are made as
> follows:
> 
> 1) A stored procedure dumps the database to SiteName_DayofWeek.bak eg:
> SHR_Mon.bak
> 
> 2) We create a local ZIP copy eg: !BSHR_Mon.zip. The !B means the file is
> EXCLUDED from backing up and is just kept as a local copy, cycled on a
> weekly basis.
> 
> 3) We rename SHR_DayofWeek.bak to SiteName.bak
> 
> 4) We split the .bak file into 200MB parts (.part1 .part2 etc.) and these
> are synced to the backup server via backuppc
> 
> This gives us a generically-named daily backup that we sync
> (backupPC/rsyncd) up to the backup server nightly.
> 
> We split the files so that if there is a comms glitch during the backing up
> of the large database file and we end up with a part backup, the next
> triggering of the backup doesn't have to start the large file again - only
> the missing/incomplete bits.
> 
> Although the zip files are relatively small, we have found that their
> contents varies so much (bit-by-bit wise) on a  weekly cycle basis that they
> take a long time to sync so we leave them as local copies only.
> 
> Seems to work OK at the mo anyway!

You might want to try gzip --rsyncable instead of ZIP and see whether it
makes a difference. Because of the file splitting etc. I'd add a .md5
checksum file, just to be sure. Also, there is a tool whose name I
cannot remember which allows you to split a file and generate an
additional error-correction file, so you get a bit of redundancy and
chances are higher to reconstruct the archive even if a part is lost.

Disabling compression in BackupPC for these hosts might speed things up
since the files cannot be compressed much further anyway.
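A hedged sketch of the post-dump steps (filenames borrowed from the
thread; note that --rsyncable is a distribution patch to gzip and not
available everywhere):

gzip --rsyncable -c SHR.bak > SHR.bak.gz     # rsync-friendly compression
split -b 200m -d SHR.bak.gz SHR.bak.gz.part  # 200MB pieces: .part00, .part01, ...
md5sum SHR.bak.gz.part* > SHR.bak.md5        # verify later with: md5sum -c SHR.bak.md5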

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Backups vanish nightly...

2010-01-21 Thread Tino Schwarze
Hi Don,

On Thu, Jan 21, 2010 at 12:49:28PM -0800, Don Krause wrote:
> 
> On Jan 21, 2010, at 11:30 AM, Craig Barratt wrote:
> 
> > Don writes:
> > 
> >> The numbered directories are gone. I only have "0" from last nights 
> >> backup, which is 0.3 days old, and it replaced the one from the day 
> >> before..
> > 
> > Please look in the host's LOG file.  Do you see messages like:
> > 
> >2005-02-06 11:20:55 removing old full backup 0
> > 
> > If not, then it appears something other than BackupPC is
> > removing or trashing the files.
> > 
> > If so, then BackupPC is removing the backups, and the only
> > explanation I can think of is the $Conf{FullKeepCnt} ends
> > up being 0.  Could you have a syntax error in your main or
> > per-PC config.pl files?  Eg: try running this:
> > 
> >perl PATH_TO_MAIN_CONFIG/config.pl
> >perl PATH_TO_PER_HOST_CONFIG/config.pl
> > 
> > Craig
> > 
> 
> 
> llupbt2:/export/facility/BackupPC/pc/calib # perl /etc/BackupPC/config.pl
> llupbt2:/export/facility/BackupPC/pc/calib # perl /etc/BackupPC/pc/calib.pl 
> llupbt2:/export/facility/BackupPC/pc/calib # 
> 
> No instances of "removing" in the log files either..
> 
> Last nights log is interesting in that at 1am it claims to be starting 
> incremental backups..
> 
> 2010-01-21 01:00:01 Started incr backup on g2disun (pid=31638, share=/)
> 2010-01-21 01:00:01 Started incr backup on gantry3 (pid=31639, share=/)
> 2010-01-21 01:00:01 Started incr backup on delta (pid=31643, share=/)
> 2010-01-21 01:00:04 Started incr backup on gantry2 (pid=31675, share=/)
> 2010-01-21 01:00:09 Started incr backup on gantry3 (pid=31639, share=/usr)
> 2010-01-21 01:00:14 Started incr backup on gantry2 (pid=31675, share=/usr)
> 2010-01-21 01:00:22 Started incr backup on delta (pid=31643, share=/usr)
> 2010-01-21 01:00:48 Started incr backup on gantry3 (pid=31639, share=/var)
> 2010-01-21 01:00:48 Started incr backup on delta (pid=31643, share=/export)
> 2010-01-21 01:00:53 Started incr backup on gantry2 (pid=31675, share=/var)
> 
> Then, 2 hours later..
> 
> 2010-01-21 03:51:12 Started full backup on gantry2 (pid=489, share=/)
> 2010-01-21 03:52:19 Started full backup on gantry2 (pid=489, share=/usr)
> 2010-01-21 03:57:04 Finished crabs (BackupPC_link crabs)
> 2010-01-21 03:58:15 Started full backup on gantry3 (pid=452, share=/export/switchyard)
> 2010-01-21 04:00:00 Next wakeup is 2010-01-21 05:00:00
> 2010-01-21 04:04:54 Started full backup on gantry3 (pid=452, share=/export/ingres_back)
> 2010-01-21 04:04:58 Finished full backup on gantry3
> 2010-01-21 04:04:58 Running BackupPC_link gantry3 (pid=569)
> 2010-01-21 04:04:59 Started full backup on ebl-recovery (pid=571, share=/)
> 2010-01-21 04:05:04 Finished gantry3 (BackupPC_link gantry3)
> 2010-01-21 04:05:06 Backup failed on ebl-recovery (Unable to read 4 bytes)
> 2010-01-21 04:05:09 Started full backup on g1imgreg (pid=579, share=/)
> 2010-01-21 04:11:13 Started full backup on gantry2 (pid=489, share=/var)
> 2010-01-21 04:12:07 Started full backup on gantry2 (pid=489, share=/export/gantry2)

My bets are on "something or someone is wiping your directories".
Try disabling backups for this night, then see what it looks like the
next day. I suppose the directories will look pretty untouched.

Maybe run a cronjob which does an 
ls -l /export/facility/BackupPC/pc/calib > /tmp/watchit.`date +%Y%m%d-%H%M%S`
every minute just to get an idea what's going on.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Backups vanish nightly...

2010-01-21 Thread Tino Schwarze
Hi Don,

On Thu, Jan 21, 2010 at 11:54:17AM -0800, Don Krause wrote:

> > On Thu, Jan 21, 2010 at 10:37:31AM -0800, Don Krause wrote:
> >> Just an update, there are no errors in the logs, and a forced fsck on the 
> >> file system is clean..
> > 
> > Please post the main logfile of BackupPC and a logfile of one affected
> > host - spanning at least 2 days of continuous operation.
> > 
> > Tino.
> > 
> 
> Sure.. The affected hosts (all of them on this installation.) only appear to 
> have a very short log file for the previous night. It's almost as if the log 
> file is being overwritten nightly as well..

Please provide the logfiles of the affected hosts as well - there is a
separate logfile for each host, available via the web interface...
That logfile is rotated, but it shouldn't start anew each day.

(I haven't had time to look through the config yet.)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Backups vanish nightly...

2010-01-21 Thread Tino Schwarze
On Thu, Jan 21, 2010 at 10:37:31AM -0800, Don Krause wrote:
> Just an update, there are no errors in the logs, and a forced fsck on the 
> file system is clean..

Please post the main logfile of BackupPC and a logfile of one affected
host - spanning at least 2 days of continuous operation.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Backup to same computer

2010-01-21 Thread Tino Schwarze
On Thu, Jan 21, 2010 at 10:31:26AM -0600, Carl Wilhelm Soderstrom wrote:

> I came to the conclusion that tar was actually a faster way to do backups on
> the local system. Less CPU usage, and bandwidth is not a problem. YMMV.

But tar comes with a price: if you extract an archive, thereby
creating files with modification times in the past, they won't get
picked up by incremental backups.

I suppose you already knew that...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Backup to same computer

2010-01-21 Thread Tino Schwarze
On Thu, Jan 21, 2010 at 02:43:11AM +, Timothy Murphy wrote:

> What is the best way to backup a folder - 
> in fact, my mail folder /home/tim/Maildir -
> with backuppc running on the same computer?
> 
> I assume that I need to allow backuppc to read this folder?

The easiest approaches are
a) install an rsyncd which allows only local connections, or
b) treat your local machine just like any other server and use
ssh/rsync with public-key authentication
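For option a), a minimal /etc/rsyncd.conf sketch (module name and uid
are assumptions):

[maildir]
    path = /home/tim/Maildir
    uid = tim
    read only = yes
    hosts allow = 127.0.0.1   # local connects only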

HTH,

Tino, using b).

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] compression on quad core

2010-01-13 Thread Tino Schwarze
On Wed, Jan 13, 2010 at 09:25:47AM +0100, Thomas Scholz wrote:

> we are using BackupPC on a quad-core system. Our backup process uses only
> one core for pool compression. Is there a way to get Compress::Zlib working
> multithreaded?

You might want to run multiple backups in parallel... But AFAIK, there
is no widespread support for multithreaded zipping yet. I've just found
pigz recently.
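Since each BackupPC_dump process compresses its own files, running
several backups at once spreads the zlib work across the cores - a
config.pl sketch (stock 3.x option name):

$Conf{MaxBackups} = 4;  # up to 4 simultaneous backups, roughly one per core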

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Tino Schwarze
On Wed, Dec 09, 2009 at 10:57:13AM -0800, Omid wrote:

[...]

> the idea is to schedule an rsync command to an external drive say every
> wednesday morning at 3 am, instruct the office to plug the drive in on
> tuesday, and to replace it on thursday with next week's drive.
> 
> i have the rsync command down pat.  i'm using:
> 
> rsync -aHPp /data/ /mnt/usb/data/
> 
> i've realized that the trailing slash is important, to be
> consistent anyways.
> 
> i've gotten the cronjob down pat, including the mount, stop and umount
> commands.  what i'm having problems with is this.
> 
> if the usb drive does not mount for whatever reason (either because it
> hasn't been plugged in, or for another reason), the copy is going to go to
> the folder that's there, which is going to fill up the native drive very
> quickly.
> 
> how can i avoid this?
 
> i've tried the --no-dir command in rsync, hoping that it would prevent rsync
> from happening if the destination folder doesn't exist.  but it doesn't seem
> to work.
> 
> the only other option i seem to have is to create a script that confirms
> that the mount has occurred before executing the rsync script.  got any
> idea's?

Just create a file called "THIS_IS_THE_USB_DRIVE" on the drive itself,
then let your cronjob check for it like this:

[ -f /mnt/usb/THIS_IS_THE_USB_DRIVE ] && rsync -aHPp /data/ /mnt/usb/data/

Of course, a script would be suitable - it might mail somebody, then
"while [ ! -f /mnt/usb/THIS_IS_THE_USB_DRIVE ] ; do sleep 1m ; done"
to wait for the drive to appear.
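Putting it together, a hedged sketch of the cronjob (mail address and
paths are placeholders):

#!/bin/sh
MARKER=/mnt/usb/THIS_IS_THE_USB_DRIVE
if [ ! -f "$MARKER" ] ; then
    echo "USB backup drive is not mounted" | mail -s "backup drive missing" admin@example.com
fi
while [ ! -f "$MARKER" ] ; do sleep 1m ; done  # wait until someone plugs it in
rsync -aHPp /data/ /mnt/usb/data/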

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] BackupPC migration FC7 to FC10

2009-12-07 Thread Tino Schwarze
On Sun, Dec 06, 2009 at 06:06:32PM -0500, Alan McKay wrote:
> On Thu, Jun 4, 2009 at 10:29 PM, Johan Cwiklinski  wrote:
> > If you've used the official RPM from Fedora, all configuration files are
> > under /etc/BackupPC. This path and /var/lib/BackupPC (where your
> > backups reside) are normally the only ones you have to care about.
> 
> Sorry for the thread necromancy but I'm soon looking at doing the same
> thing and was wondering what is the proper way to copy the data
> accross to account for the hard links in the data de-duplication.
> 
> My disk is getting full so I want to upgrade to a bigger one.

That's one of the FAQs... I'll try to summarize the current consensus: if
your pool is small (i.e. not too many files/links), rsync might work, but
it will take very long. The easiest approach is to copy the whole block
device containing the file system, then resize the file system to the
target device's size.
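A minimal sketch of the block-device route (device names are
assumptions; run it with BackupPC stopped and both filesystems
unmounted, and only if the target device is at least as large):

dd if=/dev/sdX1 of=/dev/sdY1 bs=64M  # copy the whole filesystem image
e2fsck -f /dev/sdY1                  # mandatory check before resizing
resize2fs /dev/sdY1                  # grow the fs to fill the larger device (ext2/3)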

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] how can I find out how much space a given host uses?

2009-12-02 Thread Tino Schwarze
On Wed, Dec 02, 2009 at 01:11:27AM +0100, Pieter Wuille wrote:

> See attachment. You can run eg.:
> 
>./diffsize.pl /var/lib/backuppc/pc/*
> 
> to see values per host, and a total.
> 
> PS: it actually (correctly) divides by (nHardLinks-1) instead of +1 (what i
> claimed earlier).

I suppose it will be difficult to explain to users that their total
used space fluctuates, e.g. because old backups of other hosts
expire. I'd just do accounting on total backup size (say, the last full)
and book the savings from pooling as earnings from using smart systems.
:-)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] decompressing the backuppc pool

2009-11-29 Thread Tino Schwarze
On Fri, Nov 27, 2009 at 05:27:45PM +0100, Sebastiaan van Erk wrote:

> I'm planning to move my backuppc data directory onto ZFS. I will let ZFS 
> do the compression (and deduplication) for me, so I want to make sure 
> that the cpool stores only the plain files, not altered in any way.
> 
> Is there any way to migrate the current cpool (which is compressed) to 
> an uncompressed version?

I'd just disable compression and let the old files expire over time. BTW:
all files in cpool/ are compressed; the uncompressed files go to pool/.
Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] DumpPreUserCmd status returns

2009-11-27 Thread Tino Schwarze
On Fri, Nov 27, 2009 at 09:26:26AM -0500, Jeffrey J. Kosowsky wrote:

>  > > As many know, I use rsync in DumpPreUserCmd to run my shadowmountrsync
>  > > routine on the remote client. This routine has many legitimate reasons
>  > > for not returning success.
>  > > 
>  > > However, the backuppc log often shows on failure:
>  > >  2009-11-24 11:00:04 DumpPreUserCmd returned error 
> status 512... exiting
>  > > 
>  > > I don't understand the 512 status since the rsync man pages lists
>  > > return codes as 0-35. So what does 512 mean?
>  > 
>  > It's 256 * exitStatus, so the exit status is 2.  The lower 8 bits are
>  > the signal number, if any, that killed the process.
> 
> Thanks Craig...
> 
> So, I guess if I want to return the exit code of the remote process I
> am running (shadowmountrsync) then I should have shadowmountrsync
> write the exit code to stdout. Then capture that as the output of the
> remote rsync command and return it.
 
I'm not familiar with how you call the script, but ssh does pass back
the exit codes:

> ssh $server /bin/true && echo jo
jo
> ssh $server /bin/false && echo jo
>

And if you're concerned about your remote script catching some exit
code, it works like this:

$somecommand
saved_exitcode=$?
...
exit $saved_exitcode

Maybe you're talking about Windows - there should be a way as well to
catch the exit code of the last command.

> Which brings to mind a suggestion...
> Why not execute these commands in a shell.
> They are not run that frequently (once per day per host) so the
> overhead of launching a shell would be low while the benefit would be
> high in terms of flexibility.

Then you open the "how to escape things correctly when passing them to a
shell" box.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-25 Thread Tino Schwarze
Hi Holger,

On Thu, Nov 26, 2009 at 01:21:44AM +0100, Holger Parplies wrote:

> Tino Schwarze wrote on 2009-11-24 23:40:13 +0100 [Re: [BackupPC-users] 
> __TOPDIR__ and $Conf{TopDir}]:
> > PS: Maybe there should be a prominent note just at the $Config{TopDir}
> > setting in config.pl? And a like to the wiki article?
> 
> as far as that statement goes, you are correct. There should be, but there
> isn't for older versions of BackupPC. Aside from that, pretty much all points
> made in this thread are obsolete, as BackupPC 3.2.0beta0 fixes the issue.
> Starting with this version, it should in fact be possible to change TopDir
> just by setting $Conf{TopDir} (no, I haven't tried it out ...).

[...]

Thanks for your detailed explanation. It makes sense. Old versions are
out there, we've got to live with their users asking that question. New
versions don't have this problem (and a fine explanation in config.pl).
And I didn't think about upgrades.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-25 Thread Tino Schwarze
On Wed, Nov 25, 2009 at 08:44:02AM -0600, Les Mikesell wrote:

>  I agree - almost every "newbie" that picks up BackupPC makes this
>  mistake - the more experienced you are with old-school config files
>  the MORE likely you are to assume changing this is all you need to do.
>   A note in the docs and/or a link to the "how to change your TOP DIR"
>  page would fix a huge percentage of installation failures.
> 
>  Or just fixing it so changing the config file IS all you have to do :)
> >>> If they need to be the same, why do we need the configuration option in 
> >>> the
> >>> first place?
> >> __TOPDIR__ is a token that gets substituted/replaced during an initial 
> >> install (a step that deb/rpm packagers have already done).  One of the 
> >> places that the actual value is substituted is the value of 
> >> $Conf{TopDir} in the config file.  But that's not the only place and 
> >> that's why you can't change it later.
> > 
> > Well, then we could simply move it out of config.pl into the code which
> > reads config.pl, e.g. have it preloaded into $Config...
> 
> The installer doing the substitution also establishes and moves the code to 
> its 
> runtime location - which the packagers have put elsewhere.  It is 
> approximately 
> the equivalent of wanting to change 'configure' options of some other source 
> package after you've installed a packaged binary.

I'm not talking about installation time.

I figure it works like this:
- someone calls BackupPC::new()
- it takes its patched-in $TopDir to locate config.pl
- it reads config.pl
- so what's $Conf{TopDir} good for except for confusing users?

I'd suggest just removing the setting from config.pl - the code calling
it should already know the correct setting - it's been patched in, hasn't
it? Having it in config.pl will only confuse users, since they expect it
to be easy to change - so let config.pl reflect what's really going on:
you cannot change it just there.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-24 Thread Tino Schwarze
On Tue, Nov 24, 2009 at 05:32:14PM -0600, Les Mikesell wrote:

> >> I agree - almost every "newbie" that picks up BackupPC makes this
> >> mistake - the more experienced you are with old-school config files
> >> the MORE likely you are to assume changing this is all you need to do.
> >>  A note in the docs and/or a link to the "how to change your TOP DIR"
> >> page would fix a huge percentage of installation failures.
> >>
> >> Or just fixing it so changing the config file IS all you have to do :)
> > 
> > If they need to be the same, why do we need the configuration option in the
> > first place?
> 
> __TOPDIR__ is a token that gets substituted/replaced during an initial 
> install (a step that deb/rpm packagers have already done).  One of the 
> places that the actual value is substituted is the value of 
> $Conf{TopDir} in the config file.  But that's not the only place and 
> that's why you can't change it later.

Well, then we could simply move it out of config.pl into the code which
reads config.pl, e.g. have it preloaded into $Config...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-24 Thread Tino Schwarze
PS: Maybe there should be a prominent note right at the $Conf{TopDir}
setting in config.pl? And a link to the wiki article?

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] __TOPDIR__ and $Conf{TopDir}

2009-11-24 Thread Tino Schwarze
On Tue, Nov 24, 2009 at 11:32:09PM +0100, Koen Vermeer wrote:

> What exactly is the difference between __TOPDIR__ and $Conf{TopDir}? It
> seems the former one is set at compile time, while the other is clearly
> a configuration file option, but how do these work combined? For
> example, many of the instructions seem to tell you to make a mount bind
> for the __TOPDIR__ in case you'd like to store the data somewhere else.
> Couldn't I do the same by simply setting $Conf{TopDir}? Or is there some
> subtle difference between the two?

Short answer: They have to be the same, at all times!

Long answer:
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem with backuppc

2009-11-18 Thread Tino Schwarze
On Wed, Nov 18, 2009 at 07:00:09AM -0600, Kameleon wrote:

> I am sure there are others that will chime in on this but as I see it you
> have a few options.
> 
> 1. Setup LVM and use the external disk as a permanent addition to the system
> 2. Mount the external disk as the directory that will house your desktops
> backups
> 
> Honestly, I would be wary about using an external USB disk. Alot of them
> have power saving features that will power it down after a short period and
> could cause issues with your backups. I would invest in another internal
> drive or even mount via NFS or iSCSI another drive in a separate machine.

You might want to consider at least some kind of RAID as well. Otherwise,
if one of your disks breaks, the LVM is broken and _all_ your backups are
gone.

To answer the original question: it is not currently possible to have
BackupPC use multiple storage locations. All the data has to reside on one
file system. How to create a file system spanning multiple disks/RAIDs
and how to manage it is an operating system issue.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Transferring backups to another backuppc installation

2009-11-13 Thread Tino Schwarze
On Fri, Nov 13, 2009 at 12:24:46PM +0200, Peter Peltonen wrote:
> Hi,
> 
> I am trying to access backups created with another BackupPC installation.
> 
> I thought it would just require:
> 
> 1. copying the TopDir with rsync -avzH
> 2. changing the ownership of the TopDir to correct user/group
> 3. adding the hostnames in the hosts config file.
> 
> I get the hosts to appear in web interface, but when accessing them it
> tells me "This PC has never been backed up!!" and I cannot browse the
> backups.
> 
> What am I missing here?

Did you copy the whole pc/ directories? Including the pc/xyz/backups
files?

Note that copying a whole pool (while preserving hardlinks) is still
tough and will only work up to a certain pool size/file count (apart
from copying a whole file system image).

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Initiate backup from client?

2009-11-12 Thread Tino Schwarze
On Thu, Nov 12, 2009 at 12:04:07PM +, Tyler J. Wagner wrote:

> > How is that easier than just sending the single line:
> > BackupPC_serverMsg backup HOSTIP HOST 0/1
> > 
> > You will need to have ssh connection or vpn anyway if you are
> > remote.
> 
> It's not easier, but it is more secure.  Assuming you have a reachable IP 
> link 
> from server to client (IE, no NAT), using HTTP auth as the user is far safer 
> than leaving SSH keys on the client that can SSH into the server.

Well, there is one very safe way to use ssh-keys into the server: Limit
the command to execute via authorized_keys. That way, _only_ the command
you gave within the authorized_keys file will be executed by sshd, no
matter what you try.

For example, we use the following for establishing a one-port ssh-tunnel
with keepalive:
command="while read ; do echo $REPLY ; 
done",no-agent-forwarding,no-X11-forwarding,no-pty,permitopen="127.0.0.1:1234" 
ssh-dss B3...

On the server side we have running
  while read -t 70 ; do echo -n . ; done | ssh -R1234:localhost:abc $targethost
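
A similar entry would let a client trigger its own backup without
getting shell access - a sketch for the backuppc user's authorized_keys
on the server, assuming a standard install path (key, host and
arguments are examples; check the exact serverMesg arguments against
your version):

  command="/usr/share/backuppc/bin/BackupPC_serverMesg backup 192.168.1.10 myhost backuppc 1",no-agent-forwarding,no-X11-forwarding,no-pty ssh-dss B3...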

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Compression Issue

2009-11-11 Thread Tino Schwarze
On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote:

> Excellent it looks that fixed it.
> 
> That's kinda lame you can't just change the TopDir.

Well, it's a typical bootstrap problem: where are you supposed to find
your configuration file if its location is relative to ${TopDir}?
That's why ${TopDir} needs to be patched into certain places upon
installation.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Limit user space

2009-11-05 Thread Tino Schwarze
On Thu, Nov 05, 2009 at 12:32:10PM -0500, Il Neofita wrote:

> I was wondering if is it possibile to limit the backup space of an
> user like 30Gb or something like that

This is not currently possible with BackupPC. It wouldn't fit the
pooling scheme either - how would you count a file which is shared among
5 backups and maybe 3 users?

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Command Line creation of tarball

2009-10-23 Thread Tino Schwarze
Hi Michael,

On Fri, Oct 23, 2009 at 01:15:34PM -0600, Michael Osburn wrote:

>   I am trying to generate some tarballs from a few hosts so that I can
> archive them off the server. Unfortunately, the workstation that I have
> does not have a large enough /tmp partition (where firefox stores the
> download until it is complete on this system) for firefox to download
> the 115 gb file. Is there a way to generate the tarball from the command
> line so I can put it right on the usb brick?

Just use a command like this (run as the backuppc user):

  BackupPC_tarCreate -t -h $host -n $backupnumber -s '*' . | gzip > /mnt/usb/$host-$backupnumber.tar.gz

($backupnumber may be -1 for the last backup). BTW: My Firefox doesn't
do that. It downloads straight where I want it to - maybe just configure
your Firefox to ask for a download destination, then choose your USB
brick?

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Discrepant file size reports of backups

2009-10-16 Thread Tino Schwarze
On Fri, Oct 16, 2009 at 05:16:57PM +0200, Cesar Kawar wrote:

> > s...@pm7.ch wrote on 2009-10-16 02:05:07 -0700 [Re: [BackupPC-users]  
> > Discrepant file size reports of backups]:
> >> [...]
> >> Both directories are in the same filesystem and the path was most  
> >> likely
> >> changed right after the installation.
> >
> > /var/lib/backuppc and /usr/share/backuppc/data ? That would be the  
> > root file
> > system. It's not really a good idea to put your pool on the rootfs.
> >
> >> I don't fully understand why the changed path is a problem in this  
> >> case,
> >
> > Neither do I, but I remember that it *is* a problem. You can have a  
> > link for
> > $TopDir (i.e. $TopDir = '/var/lib/backuppc' and /var/lib/backuppc is a
> > softlink to somewhere), but softlinks below $TopDir don't seem to work
> > (whatever the reason was), even if they remain within one file system.
> 
> I've been using softlink to store BackupPC's storage on other disks  
> and partitions for years with no problems at all.

Note the very important detail: it is SAFE to softlink $TopDir itself,
e.g. have a softlink /var/lib/backuppc which points to
/mnt/somewhere/else.

The actual requirement is: everything below $TopDir needs to be on the
same file system so hardlinks work.
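
A minimal sketch of such a move, assuming the pool currently lives in
/var/lib/backuppc and should end up on /mnt/bigdisk (adjust paths and
init commands to your system):

  /etc/init.d/backuppc stop
  mv /var/lib/backuppc /mnt/bigdisk/backuppc
  ln -s /mnt/bigdisk/backuppc /var/lib/backuppc
  /etc/init.d/backuppc start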

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and power saving

2009-10-15 Thread Tino Schwarze
On Thu, Oct 15, 2009 at 11:20:10AM -0400, Robert Kosinski wrote:

> > 1)boot on bios power-on schedule
> > 2)let backuppc process all backups.  should automatically happen due to the
> > wakeup schedule though I would shorten that period.
> > 3)run a script to determine if backuppc is doing a backup, if so then sleep
> > for your wakeup schedule + 1 minute, if not then execute backuppc nightly
> > 4)trigger a shutdown when backuppc nightly completes.
> 
> I built a surprisingly simple proof of concept around the above flow,
> and it looks like the idea is valid. There are still bugs to track
> down and race conditions to examine, but I don't think it's too early
> to call the idea of "BackupPC Green" a success.
> 
> Some followup questions:
> 
> Does "activeJob" still equal 1 during the link stage? I'm having
> trouble catching a backup when it's doing so.
 
I just caught a host in link stage and it has activeJob=1:

  "myhost" => {"lastGoodBackupTime" => "1255622063","deadCnt" => 0,
    "reason" => "Reason_backup_done","activeJob" => 1,
    "state" => "Status_link_running","aliveCnt" => 575,
    "endTime" => "1255622063","needLink" => 0,
    "startTime" => "1255611675","type" => "incr","userReq" => undef},

> Speaking of "activeJob", I haven't found a way to answer the question
> "Are any hosts active?" Using BackupPC_serverMesg status hosts doesn't
> work because trashClean always reports itself as active. Is there any
> other criterion I could use to examine all possible jobs to see if any
> are active? My proof of concept only examined a single host.

Maybe you'd be better off using "status jobs" instead? It looks like this:

%Jobs = (
" admin " => 
  {"cmd" => "/path/to/BackupPC-3.1.0/bin/BackupPC_nightly -m 192 255",
   "reqTime" => "1255622404","mesg" => "","pid" => 29003,"fh" => *::FH,
   "user" => "BackupPC","startTime" => "1255622404","type" => undef,"fn" => 7},
" trashClean " => 
  {"cmd" => "/path/to/BackupPC-3.1.0/bin/BackupPC_trashClean",
   "reqTime" => "1234604612","mesg" => "","pid" => 6293,"fh" => *::FH,
   "user" => "BackupPC","processState" => "running","startTime" => "1234604612",
   "type" => undef,"fn" => 5},
"myhost" => 
  {"cmd" => "/path/to/BackupPC-3.1.0/bin/BackupPC_link myhost",
   "reqTime" => "1255622063","pid" => 28848,"fh" => *::FH,
"user" => "BackupPC","startTime" => "1255622063","type" => "incr",
"fn" => 6});

-> So just use the result as the Perl hash it is, remove the trashClean
entry (which is not critical to interrupt at any time) and check the
size of the resulting hash.
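
A rough sketch of that check (untested; assumes BackupPC_serverMesg is
in the PATH of the backuppc user and replies with the hash shown above):

  my $reply = `BackupPC_serverMesg status jobs`;
  $reply =~ s/^.*?%Jobs = //s;      # keep just the hash literal
  my %Jobs = eval $reply;           # the reply is plain Perl syntax
  delete $Jobs{" trashClean "};     # always present, safe to interrupt
  printf "%d active job(s)\n", scalar keys %Jobs;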

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and power saving

2009-10-13 Thread Tino Schwarze
Hi Robert,

On Mon, Oct 12, 2009 at 09:11:24PM -0400, Robert Kosinski wrote:

> I have a scenario in my head and am soliciting feedback as to its
> feasibility. Forgive me if this has been discussed before; I searched
> the mailing list and wider Internet but couldn't come up with much
> information. What I'd like to do is:
> 
> 1. Set a schedule on the Windows clients to send Wake-on-LAN packets
> once per day or immediately upon booting if it's been longer than the
> desired interval. Why not the other way around? Well, the backup
> server will be asleep, and I am not aware of a method to schedule a
> self wake. Reading over documentation, it looks like wolcmd used in
> conjunction with Windows' scheduler is up to the task, except I don't
> know at this time how to specify to break schedule and trigger
> immediately if it's been too long.
> 
> 2. BackupPC does its thing and pulls data from the client(s).
> 
> 3. When all jobs are finished, the machine puts itself to sleep,
> shutting off cpus, hard drives, fans, etc.

That might be difficult since there is that BackupPC_nightly job which
needs to run once in a while. I'd rather go for minimizing idle power
consumption, e.g. by allowing disks to spin down, using a CPU with
advanced power management etc. (which might be difficult with RAID
setups which are recommended for backup).

> 4. Machine is off, waiting for the next magic packet before returning to #1.
> 
> I'm open to other power saving suggestions if this scenario sounds
> nutty. Basically I'm looking to avoid 24/7 electricity costs when the
> machine needs to run 30-90 minutes daily. I am not open to anything
> that requires daily human intervention, even if it's just to press the
> power button twice per day. Thanks for any assistance rendered.

How many clients do you have? Are you sure that backups will complete
within those 30-90 minutes?

You could try setting a wakeup schedule in the BIOS (some support
that). Or maybe you've got another server which is running 24/7? It
could then schedule a daily wakeup for the backup server while the
clients schedule wakeups for their backups.

It might be difficult to figure out that BackupPC has nothing to do any
more - it might be easier to do manual scheduling using
BackupPC_serverMesg and disable wakeup intervals - you have to run
BackupPC_nightly manually via BackupPC_serverMesg.

There is a list of possible server messages:
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=ServerMesg_commands

I'm not sure the list is complete, though.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and power saving

2009-10-13 Thread Tino Schwarze
Hi Tyler,

On Tue, Oct 13, 2009 at 08:28:04AM +0100, Tyler J. Wagner wrote:

> That is a perfectly normal design for enterprise backup systems.  Bacula, for 
> instance, allows the execution of arbitrary commands before and after backups 
> run, so you can execute scripts to send wake-on-LAN and shutdown commands.
> 
> I don't think Backuppc has this provision, but it shouldn't be too hard to 
> add.

I suppose you misunderstood Robert's question - he wants it the other
way around: the clients power up the BackupPC server.

Tino.

> Regards,
> Tyler
> 
> On Tuesday 13 October 2009 02:11:24 Robert Kosinski wrote:
> > Hello BackupPC-users,
> > 
> > I have a scenario in my head and am soliciting feedback as to its
> > feasibility. Forgive me if this has been discussed before; I searched
> > the mailing list and wider Internet but couldn't come up with much
> > information. What I'd like to do is:
> > 
> > 1. Set a schedule on the Windows clients to send Wake-on-LAN packets
> > once per day or immediately upon booting if it's been longer than the
> > desired interval. Why not the other way around? Well, the backup
> > server will be asleep, and I am not aware of a method to schedule a
> > self wake. Reading over documentation, it looks like wolcmd used in
> > conjunction with Windows' scheduler is up to the task, except I don't
> > know at this time how to specify to break schedule and trigger
> > immediately if it's been too long.
> > 
> > 2. BackupPC does its thing and pulls data from the client(s).
> > 
> > 3. When all jobs are finished, the machine puts itself to sleep,
> > shutting off cpus, hard drives, fans, etc.
> > 
> > 4. Machine is off, waiting for the next magic packet before returning to
> >  #1.
> > 
> > I'm open to other power saving suggestions if this scenario sounds
> > nutty. Basically I'm looking to avoid 24/7 electricity costs when the
> > machine needs to run 30-90 minutes daily. I am not open to anything
> > that requires daily human intervention, even if it's just to press the
> > power button twice per day. Thanks for any assistance rendered.
> > 
> > ---
> > --- Come build with us! The BlackBerry(R) Developer Conference in SF, CA is
> >  the only developer event you need to attend this year. Jumpstart your
> >  developing skills, take BlackBerry mobile applications to market and stay
> >  ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> >  http://p.sf.net/sfu/devconference
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> > 
> 
> -- 
> "Political language - and with variations this is true of all political
> parties, from Conservatives to Anarchists - is designed to make lies
> sound truthful and murder respectable, and to give an appearance of
> solidity to pure wind."
>-- George Orwell
> 
> --
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup, Lighttp and my mess

2009-10-06 Thread Tino Schwarze
On Mon, Oct 05, 2009 at 01:37:00PM -0400, Laura_marpplet wrote:
> 
> By the way, if I try the same on Apache I get this error message:
> 
> --
> Server error!
> 
> The server encountered an internal error and was unable to complete your 
> request. 
> 
> Error message: 
> Premature end of script headers: BackupPC_Admin.pl
> 
> If you think this is a server error, please contact the webmaster. 
> Error 500
> --

Have a look at Apache's error_log. There might be a more detailed
explanation.
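
For example (the log location depends on your distribution):

  tail -n 50 /var/log/apache2/error_log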

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry® Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Switching backup methods

2009-09-26 Thread Tino Schwarze
On Fri, Sep 25, 2009 at 05:04:24PM -0400, jingai wrote:
> Is it OK to switch between backup methods at any time, or do I need to  
> do something beforehand?

You might have to adjust your excludes. Have a look at the Wiki for more
information. Apart from that, there's nothing special to consider.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry® Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] security headaches

2009-09-25 Thread Tino Schwarze
On Fri, Sep 25, 2009 at 05:51:41AM -0400, Andrew Schulman wrote:

> Here's my problem:  I love having online backups, they're very
> convenient.  But they're a huge security problem.  All of the LAN's
> most sensitive files become readable by user backuppc, who can be
> attacked through the web application.  Worse, all of the files become
> readable by the BackupPC administrative user, and each host's files by
> that host's designated backup owner.  If any of these has a weak
> password, or if the BackupPC login doesn't run over SSL, or if the
> htdigest file is unprotected, then we give away the store.  Root
> security for the whole LAN becomes equivalent to a whole bunch of
> typically weaker links.
> 
> My question for you is, how are people addressing this problem?
> Enforcing strong passwords? Limiting the number of users with restore
> rights?  Segmenting your hosts into sensitive and less-sensitive
> files?

Our setup only has administrator access to the backup machine. It's
considered an isolated system where nobody but administrators has
access. The web interface (which is optional, not necessary, BTW) is
SSL-secured and password protected, of course.

Backup storage is always a very security-sensitive part of the
infrastructure... And it's always a matter of balancing security vs.
ease of use.

Bye,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry® Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-17 Thread Tino Schwarze
On Thu, Sep 17, 2009 at 12:38:10PM -0400, Ian Levesque wrote:

> > ...even though they have more than a mile of physical separation.  I
> > don't currently have good data as to the bandwidth utilization during
> > backups (the DRBD config is set to limit it to 10M, which is about
> > 110Mbit/sec with TCP overhead), but the BackupPC_nightly and
> > BackupPC_trashclean give an average 5Mbit/sec combined.  Over a 24  
> > hour
> > period the servers have passed nearly 80GB of data between them (78GB
> > from the source, 2GB from the target).
> >
> > There has been no discernible effect to the amount of time it takes to
> > backup my hosts.
> >
> > If you have any questions, or feel there is anything I was not clear
> > about, feel free to ask.
> 
> Thanks for your report. With that fiber link, it's no wonder you get  
> LAN-like results. What I'm curious about is if the IO requirements of  
> BackupPC would allow for an offsite replication over a typical WAN  
> link (say, 20-30ms round-trip).

I took a look at the DRBD docs this morning and figured out that there
are several replication modes, including an async mode. If you combine
that with DRBD Proxy, it should be doable over a WAN with little to no
performance penalty. Of course, it depends on your particular setup.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry® Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and DRBD - My experience so far

2009-09-16 Thread Tino Schwarze
On Tue, Sep 15, 2009 at 03:12:28PM -0800, Chris Robertson wrote:

> In short, it works for me.

[...]

Wow, thanks for sharing your experience. I figure that DRBD is a nice
way to do RAID-1 across multiple hosts for failover purposes. I didn't
expect it to perform that well - I'll look into it for a Samba/NFS
backup server... (It nicely integrates with heartbeat, BTW.)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Come build with us! The BlackBerry® Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9-12, 2009. Register now!
http://p.sf.net/sfu/devconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Setting up a new BackupPC server

2009-09-14 Thread Tino Schwarze
On Mon, Sep 14, 2009 at 11:42:42AM -0700, James Ward wrote:

> I'm setting up a new BackupPC server as my current one has gotten  
> full.  This system has 2G RAM, quad Intel(R) Xeon(TM) CPU 3.20GHz and  
> a 3ware 6.5T array.  I believe the array is currently RAID5 with no  
> hot spare.  From what I'm reading, RAID5 is a no-no as is ext3?
> 
> What is the best way to set up the RAID array for BackupPC?

The array for BackupPC should be tuned for access time, mostly.
Therefore: as many disk heads as possible, disks as fast as possible.
If you want to use few big disks, go for RAID-10 since it does not
suffer the write-performance impact of RAID-5. If you aim for more
disks (like 10), you might want to use RAID-5 or -6.

I've got a RAID-5 with only 3 disks and I'm regretting it a lot (I just
didn't find the time to buy that 4th disk and migrate to RAID-10 - I'll
report once I've done the move and what the results are).

Also, ext3 doesn't seem to be a good choice. XFS and ZFS are
recommended. (But ZFS is Solaris-only and needs lots of memory and CPU
power itself.)

Oh, BTW, consider upgrading RAM. It's very, very cheap these days and
helps a lot for having the OS cache metadata (directories etc.).

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] usb slow for random access? (was Re: Using rsync for blockdevice-level synchronisation of BackupPC pools)

2009-09-14 Thread Tino Schwarze
Hi Dan,

On Fri, Sep 11, 2009 at 01:40:02PM -0400, Dan Pritts wrote:

> > I'd say: Replace that USB 2.0 disk by something else like something
> > connected via Firewire or eSATA. USB 2.0 is very, very slow, especially
> > for random access.
> 
> do you have empirical results that show this?  

I did not do benchmarks. It's just my personal experience that I've yet
to see a USB-attached disk which feels fast. Remember: disks do not
speak USB, they are addressed via IDE or SATA. So, if you use USB, you
get an additional translation layer.

Apart from that, it looks like USB is not optimized for fast transfer
and low latency. SATA et al. are designed for addressing hard disks;
they don't care about input devices etc. So there is less overhead.
Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using rsync for blockdevice-level synchronisation of BackupPC pools

2009-09-09 Thread Tino Schwarze
Hi Christian,

On Wed, Sep 09, 2009 at 09:33:03AM +0200, Christian Völker wrote:

> First, my environment:
> 28 hosts to back up. Mostly idle machines with minor services (so no big
> databases and so on). Partially fileserver with only little daily
> changes. So I expected not too much daily changes on the pool.
> I want to copy the pool to a remote location after testing is done.
> 
> So I used an USB 2.0 disk as "second backup device" to store the copied
> pool.

I'd say: replace that USB 2.0 disk with something connected via
Firewire or eSATA instead. USB 2.0 is very, very slow, especially
for random access.

> The pool itself is aprox 540GB in total. And is not growing any more-
> even not all 12 full backups are stored.
> 
> My first attempt was rsync the pool. This tooks ages.
> Second attempt was to "dd" the whole device (with less frequency), but
> for the size of the pool, took ages, too.
> Third try was to use "dump" for this hoping it would transfer only
> changed blocks after the initial dump. No way. After one day it
> transferred 300GB (!). I thought, dump might not bee a good solution...
> Last attempt now was to move the pool device to a VMware virtual disk
> and let rsync run over this file. Thus rsync backing up the block
> device. Best attepmt, I thought. Result: rsync transfers after the first
> run  ~300GB, too.
 
Do you want to try DRBD and see how that works? It might be a more
complex setup, though...

> So what does this mean to me?
> 
> Looks like on my pool BackupPC changes approx two third of the whole
> pool daily. So if I would transfer to a remote site I'd need to transfer
> 300GB daily!

That sounds far too much for a 540 GB pool. Did you try again the next
day?

But there is currently no out-of-the-box solution or best practice for
transferring a BackupPC pool to a remote location.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] What does "Opening " dialog do?

2009-09-08 Thread Tino Schwarze
On Tue, Sep 08, 2009 at 03:27:30PM -0400, Mike Bianchi wrote:
> If you are browsing a backup and click on a filename ("y" in this example)
> you get a pop-up dialog tilted  Opening y  with the text:
> 
>   You have chosen to open
>   y
>   which is a: BIN file
>   from: http://mymachine
> 
>   Would you like to save this file?
>   [ Cancel ][ Save File ]
> 
> If I click [ Save File ]  a window pops up and disappears too fast to see
> anything and the file is not restored in position.

The file is saved to your local hard disk - depending on your browser
configuration.

> I cannot find any documentation that explains clicking on a filename in any
> detail.  The closest I can find, in the man page, is:
> 
>   You can download a single backup file at any time simply by
>   selecting it.  Your browser should prompt you with the file name and
>   ask you whether to open the file or save it to disk.

Which is exactly what happens - your browser asks you to save it.

If you want to restore a file to the host it belongs to, use the
checkbox and the "restore files" button at the bottom of the list.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Tuning for disk contention

2009-09-07 Thread Tino Schwarze
On Sat, Sep 05, 2009 at 06:52:35PM -0600, dan wrote:

[...]

> You make a lot more sence here, but I think you overestimate CPU usage.
> backuppc is so IO bound that after your get a 2Ghz+ Dual core and 2GB RAM
> you can pretty much blame your disks for slow performance.  I have a dual
> core 2Ghz Opteron with 2GB of ram and 8 drives in a linux raid10 and hard
> disk speed is still my bottleneck.  I run 4 concurrent backups on that
> machine and it does give high system load numbers but still handles the
> desktops in the office faster than 3 concurrent while 5 concurrent takes
> quite a bit longer to complete. filesystem choice and io scheduler do make a
> difference but faster disks is the only real cure.

I'd add some memory first. 4 GB is so cheap these days and it helps a lot
with disk caching. I've seen a performance boost from upgrading from 2 GB
to 6 GB on a quad-core Xeon which is also heavily I/O bound. oO(I've got
to switch to RAID-10, the RAID-5 really kills performance...)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using rsync for blockdevice-level synchronisation of BackupPC pools

2009-09-07 Thread Tino Schwarze
On Sat, Sep 05, 2009 at 06:35:16PM -0600, dan wrote:

[...]

> Thinking about the logistics in the method I have thought up a few hurdles.
> The source disks must remain unchanged during the entire sync.  

> You would need to either have a spare disk in a raid1 mirror that you
> could remove from the array and source from that, or you need to do
> some more hacking to bittorrent so that it could update the torrent
> file during the backup to reflect changes(lots of work i think)

You may use LVM snapshots for that purpose. Of course, your device name
might change or something like that, and you'd need to work around
that.
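
A rough sketch of that approach (volume group, names and sizes are made
up; the copy here is a plain dd over ssh):

  lvcreate --snapshot --size 5G --name pool-snap /dev/vg0/pool
  dd if=/dev/vg0/pool-snap bs=1M | ssh backuphost 'dd of=/backup/pool.img bs=1M'
  lvremove -f /dev/vg0/pool-snap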

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Traffic on this mailing list

2009-09-04 Thread Tino Schwarze
Hi Juergen,

On Fri, Sep 04, 2009 at 09:01:07AM +0200, Juergen Harms wrote:

> The traffic on this list is becoming overwhelming. 

Same here.

[...]

> Suggestion:
>   - if this kind of multi-message exchange is necessary, split the list 
> into 2 parts - one for people who have time to play that game, and one 
> for people who dont,
>   - variant: create a list where periodic summaries are published and 
> where discussion is restrained to comments on the summary,
>   - find measures (no suggestions) to improve the discipline on the list.

Let me add another option:

- just wait for the traffic to go away. It's rare for this list to have
  such a controversial discussion. The whole SQL discussion could IMO
  be moved to the devel list, where it now belongs.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and Barracudaware

2009-09-04 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 06:49:09PM -0400, swedishorr wrote:
 
> Thank you for the replies.  I came in this morning to find that
> Yosemite / Barracudaware died on me and now I've got to find a new
> solution that doesn't use third-rate software.  

You might want to try Bacula...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Merge config in per-host config?

2009-09-03 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 04:46:58PM -0500, Les Mikesell wrote:

> > But maybe the global config could be made available as $GlobalConf{xyz}
> > within the per-host config? I remember having tried the same thing -
> > that is, I want a set of default excludes, then extend then for one
> > host. Currently, I have to repeat the whole default config just to
> > append one more directory to the excludes.
> 
> Doesn't that happen by itself if you use the web interface to edit the 
> per-host config?  That is, you see all of the default array elements 
> from the main config file until you insert a new one and then the set 
> becomes an override.  If you have enough machines that you don't want to 
> use the web interface to make a change, you could use a script to make 
> whatever edits you want.

I'm not using the web interface for configuring my hosts, for example
because I've got a
  do "somepath/DefaultRsync.conf";
in several per-host config files. I would just like to be able to say
"add this to the default excludes list" there, which should be pretty
easy given that it is Perl code already.
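
Something like this is what I'd like to work (a sketch - the exclude is
an example, and it assumes the defaults file defines the excludes as a
per-share hash):

  do "somepath/DefaultRsync.conf";                       # shared defaults
  push @{$Conf{BackupFilesExclude}{'*'}}, '/var/cache';  # just extend them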

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] XP rsync SLOW and fails

2009-09-03 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 04:45:48PM -0400, brianbe2 wrote:

> I dunno what happened but the full backup completed in 3 hours and 6
> minutes. 26598.5 MB according to the log. It should run incrementals
> for the next 29 days and back to a full backup on the thirtieth day.

I would do far more full backups - they're not that expensive with
BackupPC. How did you configure your incrementals? Do you have
incremental levels configured? If yes, how? They might cause your
difficulties - if you had 29 incremental levels, the server would have
to traverse a whopping 28 backups on the 29th day!
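
For comparison, a schedule I'd consider saner (a config.pl sketch using
the stock options; adjust the periods to taste):

  $Conf{FullPeriod} = 6.97;   # one full backup per week
  $Conf{IncrPeriod} = 0.97;   # incrementals on the other days
  $Conf{IncrLevels} = [1];    # each incremental relative to the last full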

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Merge config in per-host config?

2009-09-03 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 03:00:37PM -0500, Carl Wilhelm Soderstrom wrote:
> On 09/03 08:45 , Davide Brini wrote:
> > I agree that it should work like that. However, if I'm not mistaken, it 
> > seems 
> > that what's written here still applies:
> > 
> > http://osdir.com/ml/sysutils.backup.backuppc.general/2003-10/msg00010.html
> 
> I think the current behavior is the correct behavior -- where the per-host
> config file variable data completely replaces the defaults set in the
> config.pl.
> 
> The reason is that there are some settings in config.pl where is it not
> sensible to add the value in config.pl to the value in the per-host file;
> such as the rsync command. 

> It would be bad from a discoverability standpoint to have some variables be
> additive, and others ablative. They should all behave the same way; and the
> only sensible way IMHO is current behavior.

But maybe the global config could be made available as $GlobalConf{xyz}
within the per-host config? I remember having tried the same thing -
that is, I want a set of default excludes, then extend them for one
host. Currently, I have to repeat the whole default config just to
append one more directory to the excludes.

Bye,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Really confused with scheduling

2009-09-03 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 01:06:17PM -0400, Jeffrey J. Kosowsky wrote:

>  > I want to schedule one Full backup every Sunday at 21:15 and five
>  > Incremental backups at 00:30 on every Tuesday, Wednesday, Thursday, Friday,
>  > and Saturday.
>  > 
>  > This is something really, really easy to do using NTbackup and Task
>  > Scheduler on Windows computers, but I have a Zimbra/Ubuntu server and can't
>  > use NTbackup.
> 
> Well, BackupPC has a different but arguably more sophisticated and
> robust version of backup scheduling.
> 
> If all you want to do is to run a fixed backup job at the exact same
> time every week, then you don't need to use the sophisticated default
> scheduler. Just turn off all job scheduling in config.pl and run
> simple cron jobs calling: BackupPC_dump [-i] [-f] 
>   where -i = incremental
>   -f = full

I'd rather use BackupPC_serverMesg...

+1 for adding that to BackupPC. If people want exact control, they
should get it.
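
For the schedule above, a crontab sketch for the backuppc user (install
path, host name and serverMesg arguments are examples - check them
against your setup):

  # full backup Sunday 21:15, incrementals Tue-Sat 00:30
  15 21 * * 0   /usr/share/backuppc/bin/BackupPC_serverMesg backup zimbra zimbra backuppc 1
  30 0  * * 2-6 /usr/share/backuppc/bin/BackupPC_serverMesg backup zimbra zimbra backuppc 0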

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Advantages of internal over external hard drive?

2009-09-03 Thread Tino Schwarze
On Thu, Sep 03, 2009 at 11:45:02AM -0400, Jeffrey J. Kosowsky wrote:
> Stephen Joyce wrote at about 10:31:44 -0400 on Thursday, September 3, 2009:
>  > Have you tried ssh -c blowfish?
>  > 
>  > 3des is the default cipher for most ssh implementations and blowfish is 
>  > much faster than 3des.
> 
> Thanks - I wasn't aware of that.
> What (if any) are the downsides to using 'blowfish' vs 3DES?

The NSA will have a harder job of decrypting it. ;->

Seriously: none. Blowfish is faster and generally considered more
secure than 3DES.
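
To use it with BackupPC, something like this in config.pl should do (a
sketch based on the stock rsync-over-ssh command; compare it with your
own setting):

  $Conf{RsyncClientCmd} = '$sshPath -c blowfish -q -x -l root $host $rsyncPath $argList+';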

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 2.x behavior desired on 3.1 install

2009-09-03 Thread Tino Schwarze
On Wed, Sep 02, 2009 at 12:40:45PM -0400, Jeffrey J. Kosowsky wrote:

>  > IMO the easiest approach would be:
>  > - if BackupPC_nightly starts, it acquires a lock, then waits for backups
>  >   to complete
>  > - no new backups start until BackupPC_nightly finished
>  > 
>  > This should be rather easy since it could be implemented at the central
>  > scheduling code - actually BackupPC_nightly wouldn't be started, just
>  > the flag would be set that it wants to.
>  > 
>  > I would like such automatism (configurable, of course) as well since it
>  > just sucks to guess backup periods and to keep some time reserved for
>  > nightly maintenance.
>  > 
> 
> I like this idea and I agree it would be easy to implement.
> And by configurable, I assume you mean at a minimum the ability to
> turn on/off this option.

Exactly. Something like

# If set to 1, wait for running backups to complete before running
# BackupPC_nightly. Also, no new backups will be started until the
# nightly cleanup is finished.
$Conf{NoBackupsDuringBackupPCNightly} = 1;

By the way: the same should be configurable for the trash cleaner. It
does about the same to disk I/O as the nightly - it makes the disk
heads spin around like mad, since a lot of virtually random inodes need
to be touched. Therefore, a config like

# Configure trash cleaning strategy:
# 1 = continuous - wake up every $Conf{TrashCleanSleepSec} and check for
# files to delete (default pre 3.2 behaviour B-) )
# 2 = run only just before BackupPC_nightly - see
# $Conf{NoBackupsDuringBackupPCNightly} to disable backups during
# that operation
$Conf{TrashCleanMethod} = 1;

Hm... does anybody volunteer to implement that? I could take a look and
try to figure out a patch, but I've not yet dug into the BackupPC
scheduling code...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using rsync for blockdevice-level synchronisation of BackupPC pools

2009-09-03 Thread Tino Schwarze
On Wed, Sep 02, 2009 at 02:18:41PM -0600, dan wrote:

> Can I offer an alternative solution?  How about using bittorrent?

I don't see the benefits over using the patched rsync... What am I
missing? After all it's still read-all-blocks - compare checksums -
transfer changes, right?

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and Barracudaware

2009-09-03 Thread Tino Schwarze
On Wed, Sep 02, 2009 at 10:31:21PM -0500, Jim Leonard wrote:

> > I'm using bacula to backup the generated tar files and have them deleted
> > afterwards.
> 
> This is off-topic, I apologize, but if you are using Bacula, then why do 
> you have a BackupPC installation?

There are several reasons:

- BackupPC was deployed first and it works well. Never change a running
  system if it suits all your needs (so far).
- We're backing up multiple hosts over Internet. Rsync saves us a lot
  of bandwidth.
- Bacula has two purposes for us: provide offsite backup to tape and
  store our database dumps (where BackupPC isn't good at because the
  files change each day - about 70Gb each day)
- IMO it is easier to restore files to a server using BackupPC - e.g.
  after reinstall - it just needs ssh running and rsync installed.
- I didn't like the extra bacula client on servers. But I didn't look
  deeply into it either.
- BackupPC has a nice web interface. We had trouble finding a
  working bat when we installed Bacula (it might have improved since, but
  I'm a bit wary of updating our Bacula 2.2.x to 3.0) - we're a SuSE
  shop and had lots of segfaults etc.

Counter-question: Why should I use Bacula instead? (I'm serious, I'm
interested in possible reasons I might have missed.)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 2.x behavior desired on 3.1 install

2009-09-02 Thread Tino Schwarze
On Wed, Sep 02, 2009 at 12:24:09PM -0400, Jon Craig wrote:

> > I tried renicing everything:
> >
> > jew...@kw157:/usr/share/backuppc/bin$ head -2 BackupPC_nightly
> > #!/usr/bin/perl
> > setpriority(0, $$, -20);
> >
> > jew...@kw157:/usr/share/backuppc/bin$ head -2 BackupPC_dump
> > #!/usr/bin/perl
> > setpriority(0, $$, 20);
> >
> 
> I believe that BackupPC_nightly is running as the "backuppc" user and as
> such can only reduce its priority (ie +20), not raise it (ie -20).  Also,
> you should check the sub-processes to see if they properly
> inherited this change.  My guess is it will still not have much effect,
> as you're most likely IO bound and priority affects scheduling on the
> processor, but doesn't prioritize IO.

It might be worth a try to use ionice as well and set either the
nightly or the dump to idle I/O priority (ionice -c3 -p $pid).
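
For instance (a rough sketch - the pgrep pattern assumes a single
running nightly):

# put an already-running BackupPC_nightly into the idle I/O class
ionice -c3 -p $(pgrep -f BackupPC_nightly)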

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC and Barracudaware

2009-09-02 Thread Tino Schwarze
On Tue, Sep 01, 2009 at 04:17:31PM -0400, swedishorr wrote:

> I am currently using BackupPC to backup several servers (20 or so).  BackupPC 
> is running from a linux box running CentOS 5.3.  There is seemingly no issue 
> with the BackupPC operations.
> 
> However, I am trying to get these backups to tape via Yosemite / 
> Barracudaware.  I'm not getting anywhere on my own and Yosemite hasn't been 
> able to help at all.
> 
> Does anyone here have a solution (any solution) for getting their BackupPC 
> backup files onto tape?

If you want to back up your whole pool, you're most likely out of luck
(except using a filesystem dump as suggested). If you just want your
backed-up hosts on tape, you can use BackupPC_tarCreate or the archiving
feature.
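
For example, streaming the most recent backup of one host straight to a
tape device could look like this (a sketch - host name, share name and
tape device are placeholders):

# -n -1 = most recent dump; '.' = everything below the share
BackupPC_tarCreate -h host1 -n -1 -s / . > /dev/st0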

I'm using bacula to backup the generated tar files and have them deleted
afterwards.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 2.x behavior desired on 3.1 install

2009-09-02 Thread Tino Schwarze
On Tue, Sep 01, 2009 at 01:29:45PM -0400, Steve wrote:

> >  > >  > Is there a parameter that sets priority of once backup over another,
> >  > >  > or do all the BackupPC_dump processes start at the same level?  
> > Maybe
> >  > >  > that would be a $Conf that could be added...
> >  > >  > evets
> >  > >
> >  > > That sounds like an interesting suggestion.
> >  > >
> >  > > But for your purposes where it seems like you want to de-prioritize
> >  > > all dumps relative to BackupPC_nightly, maybe just alias the relevant
> >  > > commands in _InstallDir/bin to include the nice.
> >  > >
> >  >
> >  > I don't think priorities will make much difference.  This is much more
> >  > about disk head position than CPU timeslices.  If the nightly process
> >  > runs at all it's going to keep yanking the head away from where the
> >  > backup runs want it to be.
> >  >
> >
> > Well, then you could always wrap BackupPC_nightly in a script that
> > renices priorities of dump processes to 20 - but that is certainly not
> > pretty and also that assumes that you are running nightly outside of
> > your backup window so that no new dumps start...
> 
> Upon reflection, I think Les' point is the most valid - any
> competition is going to slow things down and the nice level won't help
> a lot.  So maybe the $Conf could be a "suspend backups until nightly
> admin completed"; that way you wouldn't have to guess a length of
> "dark" time; backups would just suspend themselves (leaving partials)
> when the admin started, and then resume when it finished...  You could
> still specify the time of the admins as we do now for the slowest time
> of day.

IMO the easiest approach would be:
- if BackupPC_nightly starts, it acquires a lock, then waits for backups
  to complete
- no new backups start until BackupPC_nightly finished

This should be rather easy since it could be implemented in the central
scheduling code - BackupPC_nightly wouldn't actually be started right
away, just a flag would be set indicating that it wants to run.
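
In pseudocode, the idea would be roughly (a sketch of the scheduling
logic, not actual BackupPC code):

# hypothetical fragment of the daemon's main loop
if ($nightlyWanted) {
    $holdNewBackups = 1;               # queue new dumps, don't start them
    if (countRunningBackups() == 0) {  # all running dumps have finished
        startBackupPCNightly();
        $nightlyWanted = 0;            # $holdNewBackups clears when it exits
    }
}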

I would like such an automatism (configurable, of course) as well, since
it just sucks to guess backup periods and to keep some time reserved for
nightly maintenance.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems with hardlink-based backups...

2009-09-01 Thread Tino Schwarze
On Mon, Aug 31, 2009 at 05:14:20PM -0400, Peter Walter wrote:

> I am therefore restricted to copying the primary backup server itself.  
> The intent is not to be able to recover the targets directly - the aim 
> is to recover the primary backup server, and, from there, recover the 
> targets. If I had a method of simply backing up the changed files on the 
> backup server, and a method of dumping the hardlinks in such a manner 
> that they could be reconstituted later, then that would suffice.

I suppose it would be possible to add something like a transaction log
to BackupPC which could be used to script rsync, create a minimal tar
archive etc.

Of course, the transaction log would have to be written/maintained by
some central place within BackupPC. It could consist of just the
NewFileList moved to ${TopDir}/changelog plus an extra file created by
BackupPC_nightly...
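
Such a changelog could then drive a simple replication job, e.g. (a
sketch - it assumes the hypothetical ${TopDir}/changelog lists paths
relative to TopDir, one per line):

# copy only the files named in the changelog to a standby server
rsync -aH --files-from="$TopDir/changelog" "$TopDir/" standby:/var/lib/backuppc/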

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems with hardlink-based backups...

2009-08-31 Thread Tino Schwarze
Hi all,

On Mon, Aug 31, 2009 at 04:32:14PM -0400, Jeffrey J. Kosowsky wrote:

> In a very real sense, the current implementation already uses an
> artificial database structure - albeit it a slow, prorprietary,
> non-extensible, non-optimizable version. To wit, the attrib files
> present in each and every pc directory. The real essence of my
> suggestion is to replace the scattered myriad of attrib linear
> databases with a single relational database that can benefit from all
> the features, speed, tools, and optimizations of modern databases. As
> has been mentioned many times in the past, such a move would solve
> many, many problems though would obviously require some significant
> development work.

I suppose this is the most important argument _for_ trying the SQL
approach - maybe just for storing file attributes?

On the other hand, we're relying on one kind of atomic file system
operation: the hardlink count, which drives file expiration. That would
be more difficult using a database (prone to DB<->filesystem
inconsistencies).

Maybe we should move this discussion to the -devel list? Or somebody
should come up with a database scheme, so we could start discussing
details - possibly figuring out that the requirements are difficult to
meet with a database? 

I'm just skeptical that it is possible to store file system layout more
efficiently than a file system does - and I suppose we'd need to
completely represent the directory structure of backups in the database.
We'd end up with loads of entries pointing to a file.id(int8) which
is equivalent to the inode number in filesystem world. File attributes
would have to be stored in a separate table since they may be different
from host to host while file content is identical (and I'm not sure how
to do that efficiently, taking extended attributes like ACL, resource
forks etc. into account - you'll either get into JOIN hell or you'll
start storing serialized data).

Of course, a database might allow lookups like "which backups reference
file x". Also, standard databases are not good at querying hierarchical
structures. It's more natural for filesystems (but only up to a certain
point - traversing is still expensive).

These are just my random thoughts. I suppose it's worth spending some
time discussing/designing/developing a database layout - we'll learn a
lot, and either:

a) it looks like it's worth trying to implement - hey, then we'd
already have a database layout!

b) we get convinced that it's not worth it or that it's getting too
complicated - hey, then we've at least tried and got something out of
the process to show to people claiming that a database would improve things

Tino.

PS: Another weird thought just crossed my mind: separating pool
data from backups might be worth a try. That is: only store zero-byte
files in the pool (or maybe files with some metadata like MD5 in them)
which get hardlinked to backups, then have a second pool which contains
real data and no hardlinks (the implicit connection being the pool file
name). Creating and changing pool files is a rather central operation
(done by BackupPC_dump/_link/_nightly). That way, we could decouple the
extensive directory lookups done while traversing a backup from the
actual data reading/writing - the file pool could be separated from the
data pool. Without detailed knowledge of the code, I suppose it should
be doable as a proof-of-concept hack.

Of course, this should be a configurable setting since it only makes
sense when there are actually separate physical volumes for metadata and
filedata.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Which FS? (was: Keeping servers in sync)

2009-08-31 Thread Tino Schwarze
On Mon, Aug 31, 2009 at 05:15:19PM +0200, Christian Völker wrote:

> > With backuppc the issue is not so much fragmentation within a file as 
> > the distance between the directory entry, the inode, and the file 
> > content.  When creating a new file, filesystems generally attempt to 
> > allocate these close to each other, but when you link an existing file 
> > into a new directory, that obviously can't be done so you end up with a 
> > lot of long seeks when you try to traverse directories picking up the 
> > inode info.
> Makes sense to me. Is there any FS which would be recommended for best
> performance?

I've used reiserfs and xfs with success. They seem to perform about
equally well (I did not benchmark, though!). I have not tried ext3 yet.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsyncd --sparse flag

2009-08-28 Thread Tino Schwarze
On Fri, Aug 28, 2009 at 11:55:35AM +0100, Nigel Kendrick wrote:

> Does backuppc support the --sparse flag for rsyncd remote backups -
> searching for answers led me to 'probably not' in an old post.

I don't know for sure, but I doubt it since BackupPC_dump will probably
just produce zeroes and compress them.

> If it is supported, any benefit of using it with my famous database backup
> dumps?

No, you wouldn't benefit since your database dumps will probably not
contain long chains of zero bytes.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Cannot stat: No such file or directory

2009-08-27 Thread Tino Schwarze
On Tue, Aug 25, 2009 at 08:59:32PM -0400, huffie wrote:
 
> Not sure if it's due to a config issue (I'm trying to back up localhost) that
> I have this error message. Extracted from XferLOG:
> 
> Running: /bin/gtar -c -f - -C /samba --totals ./samba
> full backup started for directory /samba
> Xfer PIDs are now 17605,17604
> /bin/gtar: ./samba: Cannot stat: No such file or directory
> Total bytes written: 10240 (10KiB, 274KiB/s)
> /bin/gtar: Error exit delayed from previous errors
> Tar exited with error 512 () status
> tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
> filesTotal, 0 sizeTotal

Is there a file called /samba/samba? Since that is what tar is trying to
find. (File names are given relative to the share - it looks like you
specified '/samba' as the share name?)
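
If so, a config along these lines might be what was intended (just a
guess - adjust to your actual setup):

$Conf{TarShareName}    = ['/samba'];  # the share is the directory itself
$Conf{BackupFilesOnly} = {};          # don't list './samba' below it again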

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Solved! (Was: Sub directories of modules impossible?)

2009-08-26 Thread Tino Schwarze
On Wed, Aug 26, 2009 at 09:08:33AM +0100, higuita wrote:

> > Also thank you for that tip! I did configure BackupPC as much as 
> > possible from the web interface. So indeed I did not look very far in 
> > the config.pl.
> 
>   IIRC, you can also setup that via the webinterface, no need to
>   direct edit the config.pl

Of course you can, but then you won't see all the nice and lengthy
explanations in config.pl. ;-)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Sub directories of modules impossible?

2009-08-24 Thread Tino Schwarze
Correcting myself...

> If I remember correctly, this should be
>  $Conf{BackupFilesOnly} = {
>'cle' => [
>  '/stow-1.3.3'
>]
>  };
> 
> And the second one should be:

The following line needs to have a curly brace, of course. But you could
also specify excludes for all shares like this: ['*.flv','/tmp/'] - see
the variant written out below the example.
>  $Conf{BackupFilesExclude} = {
>'cle' => [
>  '*.flv'
>]
>  };
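
For reference, the all-shares variant mentioned above, written out in
full:

$Conf{BackupFilesExclude} = ['*.flv', '/tmp/'];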

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Sub directories of modules impossible?

2009-08-24 Thread Tino Schwarze
On Mon, Aug 24, 2009 at 09:47:30PM +0200, chin wrote:

> Adam Goryachev wrote:
> 
> (...)
> 
> > Is it possible for you to send the debug log of the two backup
> > methods... mainly so we can see the rsync commands sent across to the
> 
> Of course I could do. I have already set
> 
> $Conf{XferLogLevel} = '3';
> 
> For my machine's .pl configuration file under /etc/BackupPC.
> 
> > other side in each case? Basically, I'd like to see why one is much
> > slower than the other...
> 
> And now, I know why it is slower :-( It simply transfers *all* my files 
> of the share. It seems to ignore my settings completely:
> 
> $Conf{RsyncShareName} = [
>   'cle'
> ];
> $Conf{BackupFilesOnly} = {
>   'xotclIDE' => [
> '/stow-1.3.3'
>   ]
> };

If I remember correctly, this should be
 $Conf{BackupFilesOnly} = {
   'cle' => [
 '/stow-1.3.3'
   ]
 };

And the second one should be:
 $Conf{BackupFilesExclude} = [
   'cle' => [
 '*.flv'
   ]
 };

Have a look at the default configuration file, it's explained there very
verbosely.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Quick BackupPC_tarCreate question

2009-08-24 Thread Tino Schwarze
On Sun, Aug 23, 2009 at 11:42:00PM -0400, Mr_T wrote:
> 
> Hi,
> 
> I'm wanting to restore a file via the command line to the machine that is 
> running backuppc.
> 
> The command I'm using is...
> 
> ./BackupPC_tarCreate -t -n 55 -h server1 -s '/etc/mail.rc' > /data/test.tar
> 
> However when I run the above I just get given the following output...
> 
> usage: ./BackupPC_tarCreate [options] files/directories...
>   Required options:
>  -h host host from which the tar archive is created
>  -n dumpNum  dump number from which the tar archive is created
>  A negative number means relative to the end (eg -1
>  means the most recent dump, -2 2nd most recent etc).
>  -s shareNameshare name from which the tar archive is created

> This makes me think that my command is somehow wrong or that I'm 
> misunderstanding the flags...

The share name is usually '/', but it depends on your configuration for
this particular host. How are you backing up that host - using
rsync/rsyncd?
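
Assuming the share for that host is '/', your command would become (the
file moves out of -s and into the files argument):

./BackupPC_tarCreate -t -n 55 -h server1 -s / /etc/mail.rc > /data/test.tar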

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync incremental backup, makes a full backup instead of incremental

2009-08-24 Thread Tino Schwarze
On Mon, Aug 24, 2009 at 11:39:28AM +0300, Roman wrote:

> I have installed and configured BackupPC successfully, thank you for
> this great software.
> One small thing I may not fully understand:
> I have made a full backup, but which contained only a few files (just
> for testing), after this I setup all needed directories for the
> backup, and all incremental backup's afterwards look like a full
> backup. The question is: does backuppc incremental backups compare the
> files only to the last full backup and does not compare also to the
> last incremental backup? Seems like it does it like this.

Look at the configuration setting $Conf{IncrLevels}.

It defaults to 1, so all incrementals will be based on the last full. If
you configure, for example, 1 weekly full and want 6 incrementals, each
based on the previous one, set
$Conf{IncrLevels} = [1,2,3,4,5,6];

It is a tradeoff: the 6th incremental might take very long since a lot
of directories need to be checked, which causes a lot of I/O on the
server. It might be faster to just transfer some more differences.

Have a look at the explanation here:
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_incrlevels_

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-19 Thread Tino Schwarze
On Wed, Aug 19, 2009 at 10:58:19AM -0500, Jim Leonard wrote:
> Tino Schwarze wrote:
> > I'd rule out the network. Samba might be doing fancy things to the TCP
> > level etc. Or you might try establishing an ssh tunnel to the Windows
> > host (or from Windows host to BackupPC server using putty which might be
> > easier), then point rsyncd to the local end of the tunnel.
> 
> I've already ruled out the network; I'm able to prove that rsyncd on the 
> windows side is simply not very fast :-(  In fact, I'm almost willing to 
> bet money that, if everyone checks their Full backup times, none of the 
> rsync ones will be over 10MB/s.  Why is rsyncd performance so bad in 
> Windows?

Alright, I can verify that. I just tried with a fresh install of Cygwin
and rsyncd. Copying an 8GB file via rsyncd yields about 14 MB/s. Using
plain smbclient, I get "average 40119.7 kb/s" which is about the maximum
write speed of the RAID.

Maybe some tuning is possible via the "socket options" configuration
parameter. I couldn't find any recommendations though.
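
If you want to experiment, that would go into rsyncd.conf on the Windows
side, e.g. (the buffer sizes are pure guesses - benchmark before and
after):

# global section or per module
socket options = SO_SNDBUF=65536 SO_RCVBUF=65536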

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-19 Thread Tino Schwarze
On Wed, Aug 19, 2009 at 08:46:50AM -0500, Jim Leonard wrote:

> > I would take a look at a network traffic dump - maybe something is bad
> > there? More suspects: Windows firewall, some other firewall inbeteween?
> > Did you try Windows rsyncd -> Windows rsync (to rule out some strange
> > Linux vs. Windows network stack issue)?
> 
> It's not the network; I did a test using rsync as the client and rsyncd 
> as the server, on the same machine (ie. the network stack was involved 
> but not the network itself), and then I got 22MB/s.  While 22MB/s is an 
> improvement over 5MB/s, it's still a long ways away from the performance 
> I saw doing smb backups with BackupPC (65MB/s using smbclient).

Half the throughput locally is reasonable - you've got twice the disk
activity (and it's usually competing for the same disk). I've just seen
this here on a Linux box where I tested an rsync-based backup of VMware
images. When I copied the images to the box, it maxed out at around
50-60 MB/s, which seems to be the limit of the el-cheapo external RAID.
When I rsynced locally (rsync /raid/somewhere /raid/somewhereelse -
actually just a copy), I saw 25-30 MB/s throughput.

I'd rule out the network. Samba might be doing fancy things to the TCP
level etc. Or you might try establishing an ssh tunnel to the Windows
host (or from Windows host to BackupPC server using putty which might be
easier), then point rsyncd to the local end of the tunnel.
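
A sketch of the tunnel variant (host name, user and local port are
placeholders):

# forward local port 8873 to the rsyncd (port 873) on the Windows host
ssh -N -f -L 8873:windowshost:873 user@windowshost

# then point the host's BackupPC config at the local end of the tunnel:
$Conf{ClientNameAlias}  = 'localhost';
$Conf{RsyncdClientPort} = 8873;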

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC File::RsyncP issues

2009-08-19 Thread Tino Schwarze
Hi Jim,

On Tue, Aug 18, 2009 at 07:45:57PM -0500, Jim Leonard wrote:

>  > first of all, where are you seeing these figures, and what are you 
> > measuring?
> 
> Rather than try to convince you of my competence, I will offer up these 
> benchmarks for the exact same endpoint machines and file (a 2 gigabyte 
> uncompressable *.avi file that did NOT exist on the target):
> 
> Unix rsync -> Unix rsync:        60 MB/s
> Windows SMB -> Unix smbclient:   65 MB/s
> Windows rsyncd -> Unix rsync:     5 MB/s
> Windows rsyncd -> BackupPC_dump:  5 MB/s
> 
> As you can see, something is now clearly wrong with the windows rsyncd 
> source.  I confirmed this by profiling actual rsync in Unix and saw that 
> 77% of its time was spent waiting for data (which mirrors exactly what 
> File::RsyncP::pollsys was doing, wasting 77% of its time waiting for 
> data).  So the problem isn't BackupPC, it's windows rsyncd.

I would take a look at a network traffic dump - maybe something is bad
there? More suspects: Windows firewall, some other firewall in between?
Did you try Windows rsyncd -> Windows rsync (to rule out some strange
Linux vs. Windows network stack issue)?

Bye,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: Backup fails

2009-08-13 Thread Tino Schwarze
On Thu, Aug 13, 2009 at 12:20:20PM +0200, Michael Aram wrote:
> Hello all,
> 
> thank you for your answers. I changed the backupcommand (added -q) and got
> rid of the first error. Thanks.
> 
> However, unfortunately, my backup server wasn't able to successfully back up
> my remote machine.
> 
> I have to mention that I want to back up ~100GB over a "normal 25MBit" xDSL
> connection over the Internet. I use rsync (not rsyncd) between two
> Ubuntu machines.
> 
> The backup always fails after a couple of hours with signal "PIPE". I think
> the machine being backed up just resets the connection or something. How can
> I detect the reason for the problem? There is no /var/log/rsync.log or
> something on the remote machine.

[...]

> Negotiated protocol version 28
> Sent exclude: /proc
> Sent exclude: /tmp
> Xfer PIDs are now 10929,11621
> [ skipped 114214 lines ]
> sys/block/md0/dev: md4 doesn't match: will retry in phase 1; file removed
> 
> Remote[1]: rsync: read errors mapping "/sys/block/md0/dev": No data
> available (61)

You should exclude /sys and /proc from the backup - these are virtual
file systems anyway. Otherwise rsync might end up copying your whole
hard disk (via block device nodes), kernel image etc.
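
Your log shows /proc and /tmp are already excluded; adding /sys could
look like this (array form, applied to all shares):

$Conf{BackupFilesExclude} = ['/proc', '/sys', '/tmp'];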

HTH!

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Delete a file permanently

2009-08-10 Thread Tino Schwarze
On Mon, Aug 10, 2009 at 02:36:38PM +, Peter Hanston wrote:

> Is there any way I can delete a specific file from all backups? For some
> reason I have a file backed up which is 50 gigs in /var/log
> (/var/lib/smac.log) and I'd like to delete it from every backup that has
> been made.
> 
> If I go into a specific backup number directly on the backup disk:
>  pc/db-master/57/f%2fvar/flib/fsmac.log
> 
> Could I just simply recursively search through all backup numbers (pc/db-
> master/1-2-3 etc) and delete all fsmac.log occurrences? Could there be any
> unwanted consequences if this is done? Any better way?

It would work, BackupPC_nightly would remove the files from the pool as
well, but the backup size displayed by the web interface would be wrong.
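
If you do go the manual route, something like this would do it (TopDir
assumed to be /var/lib/backuppc; run it with -print instead of -delete
first to check what it matches):

find /var/lib/backuppc/pc/db-master -type f -name fsmac.log -delete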

Jeffrey J. Kosowsky wrote a nice tool for removing files from backups
called BackupPC_deleteFiles. See
http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg12265.html.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Configure ssh: X11 connection rejected because of wrong authentication.

2009-08-10 Thread Tino Schwarze
On Mon, Aug 10, 2009 at 10:19:32AM -0400, Craig Swanson wrote:
> I have created a new installation of BackupPC, attempting to configure 
> ssh with sudo, per the BackupPC instructions.
>   BackupPC fails, echoing: X11 connection rejected because of wrong 
> authentication.

Add a -x to your ssh command line.
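
For rsync over ssh that means, for example (this is the stock command
with -x in place - adapt whatever command your sudo setup uses):

$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';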

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Multiple backuppc server

2009-07-08 Thread Tino Schwarze
On Tue, Jul 07, 2009 at 01:50:56PM +0100, Andy Brown wrote:
> Hi All,
> We've started to setup a large multiple server backuppc environment, and 
> wanted a few thoughts/ideas/advice.
> We've got a large 2TB nas at the back of it with gig connectivity.
> Filesystem is LVM on top of OCFS2 so we have multiple front-end servers with 
> read/write.
> Backuppc on each host is setup with relevant different hostnames and setup 
> separate logdirectories. The actual top/backup location is shared on the main 
> nas store.
> 
> So
> $Conf{TopDir} = /backups/backuppc/
> $Conf{ConfDir} = '/etc/backuppc';
> $Conf{LogDir}  = '/backups/backup02';
> $Conf{InstallDir}  = '/usr/share/backuppc';
> $Conf{CgiDir}  = '/usr/share/backuppc/cgi-bin';

> Each server has its own list of hosts in /etc/backuppc/hosts as that's
> how I'm splitting the job queues. i.e. a host only exists in one
> server hosts file (at present either backup01 or backup02).
> 
> Can anyone see any pitfalls with this? The only strange thing I've
> noticed is with the trashClean process, it seems to be trying to clean
> things that the other server is creating/working on and failing with
> "Can't read /var/lib/backuppc/trash//home/blah/thing/file: No such
> file or directory". It doesn't seem a major thing so I'm ignoring it
> for now!

You will run into a lot of trouble since BackupPC is not designed to
support multiple instances accessing the same storage. There are
processes like BackupPC_nightly which need to have exclusive access to
the pool (e.g. no BackupPC_link running in parallel).

> Anyone see any pitfalls/problems with what I'm doing here?

What are you trying to accomplish by using multiple BackupPC instances?

For the time being, they need exclusive pool directories, that is, an
exclusive TopDir each.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Enter the BlackBerry Developer Challenge  
This is your chance to win up to $100,000 in prizes! For a limited time, 
vendors submitting new applications to BlackBerry App World(TM) will have
the opportunity to enter the BlackBerry Developer Challenge. See full prize  
details at: http://p.sf.net/sfu/Challenge
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Slow backups; Collision issues

2009-07-01 Thread Tino Schwarze
On Wed, Jul 01, 2009 at 01:04:06PM -0500, James Esslinger wrote:

> [...changing the program to output something else...]
> That won't fly.  The images are being output by a program that is closed
> source and I have no way of changing it.

Drat. And it doesn't support some compressed image format like PNG (or
TIFF or even GIF)? Compressing the images would probably solve your
problem (saving a ton of space as well if these are just plot graphs).
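
If the program itself can't be changed, converting its output before the
backup runs might still help, e.g. with ImageMagick (a sketch - the path
and source format are assumptions):

# writes a .png next to each .bmp; the originals are kept
find /data/plots -name '*.bmp' -exec mogrify -format png {} +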

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Amazon S3 and/or EC2 or other off-site storage ideas

2009-06-27 Thread Tino Schwarze
Hi Mark,

On Fri, Jun 26, 2009 at 08:28:32AM -0700, Mark Phillips wrote:
> I am looking for inexpensive off-site storage that is compatible with
> backuppc.

In my opinion, off-site storage and backup don't go together. After all,
your backups contain all of your important (and possibly secret) data. I
wouldn't want to store those with some online service.

> Is anyone using backuppc to backup files to Amazon S3? I have googled for
> some articles on this topic, and all I have found are old ones. It seems
> 1. S3 does not "allow" hard links and use of rsync. There is a s3sync
> option, but I haven't looked a it.
> 2. Using an EC2 front end running backuppc might work, and then storing to
> s3. Haven't found any backuppc articles about this.

I'm not familiar with S3 or EC2. Currently, BackupPC requires direct
access to a POSIX file system with hardlink support. And it stresses the
file system quite a bit!

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Job details from command line for stats

2009-06-18 Thread Tino Schwarze
On Thu, Jun 04, 2009 at 11:44:47AM +0100, Hereward Cooper wrote:

> I'm looking for a way of extract details of backuppc jobs so that I
> can parse them and produce some graphs of usage and general activity.
> 
> The second part I am happy doing, however getting hold of the data is being a 
> bit more of a challenge.
> 
> I've tried my hand at some PHP DOM scrapping to pull the data from the web 
> interface, however this gets quite messy, and means learning a intermediate 
> step to achieve my goal. What I'm really looking for is a way of getting this 
> kind of information straight from the command line.
> 
> I haven't yet found a way of doing this though, so I'm asking for ideas. The 
> other option I thought of was to pull the data from the backuppc data files 
> directly, however I don't have a clue where to start with that idea!

What data are you interested in, exactly?

You might want to look at the backups file in each host's directory.
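
The file is tab-separated; the first few fields are backup number, type
and start/end time (check the docs of your version for the full field
list; TopDir assumed to be /var/lib/backuppc). A quick sketch:

# print backup number, type and start time for one host
awk -F'\t' '{ print $1, $2, $3 }' /var/lib/backuppc/pc/somehost/backups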

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backup of novel servers

2009-06-11 Thread Tino Schwarze
Hi Benedict,

On Thu, Jun 11, 2009 at 09:05:36AM +0300, Benedict simon wrote:

> i am using BackupPC to succesfully backup up linux client and working fine.
> 
> i also have 2 novell Netware servers which i would like to backup with
> backuppc
> 
> does backuppc support backin up Novell Netware servers
> 
> I have 4.11 and 5 server

(Note: I don't know anything about Netware 4.11 or 5.)

If your Novell servers are reachable by rsync (possibly over ssh), have
tar (and ssh), or export a Samba share, they can be backed up by
BackupPC.

If I remember correctly, Novell Netware has special file systems, so
a simple file-level backup might not be sufficient (ACLs missing or
something) - you should test a restore in any case.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up a USB-disk?

2009-06-10 Thread Tino Schwarze
Hi Magnus,

On Wed, Jun 10, 2009 at 09:05:26PM +, Magnus Larsson wrote:

> What I would like is to have it as a separate host, and then do manual
> backups when I want to. Can I do this even though the host it is
> connected to already is a backuppc host? This would mean defining one
> host as a subdir of another host, in the config.pl. With the same host
> name and ip. 

You may simply configure another host (name it, for example,
myserver-usbdisk), then set $Conf{ClientNameAlias}.

See
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_clientnamealias_
See also $Conf{BackupsDisable} on how to disable automatic backup of a
particular host:
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupsdisable_
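
A sketch of such a setup (paths and names are placeholders):

# add "myserver-usbdisk" to the hosts file, then in myserver-usbdisk.pl:
$Conf{ClientNameAlias} = 'myserver';       # same physical machine
$Conf{BackupsDisable}  = 1;                # no automatic backups, manual ones still work
$Conf{RsyncShareName}  = ['/mnt/usbdisk']; # just the USB disk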

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Upgrading from etch to lenny

2009-06-10 Thread Tino Schwarze
On Wed, Jun 10, 2009 at 01:30:35PM -0400, Jim McNamara wrote:
> 
> > Have you specifically done a dist-upgrade from etch to lenny?

[...90 lines snipped...]

> By the way, top posting (writing above the previous post) is frowned upon by
> most mailing lists. Most mail programs handle it well, but people trying to
> read the thread via archives or on older software have trouble when someone
> writes "above" the older text.

Full-quoting is in about the same league.

SCNR, Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
Crystal Reports - New Free Runtime and 30 Day Trial
Check out the new simplified licensing option that enables unlimited
royalty-free distribution of the report engine for externally facing 
server and web deployment.
http://p.sf.net/sfu/businessobjects
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem in transferring

2009-06-08 Thread Tino Schwarze
On Mon, Jun 08, 2009 at 09:36:33AM -0400, ckandreou wrote:

> The log from the host that I am trying to backup to tape is as follow:
> 2009-06-05 01:00:08 full backup started for directory / (baseline backup #368)
> 2009-06-05 11:23:12 full backup 369 complete, 216652 files, 43976121396 
> bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
> 2009-06-06 01:00:15 incr backup started back to 2009-06-05 01:00:08 (backup 
> #369) for directory /
> 
> === Tape archive log displays the following:
> Executing: /usr/local/BackupPC/bin/BackupPC_archiveHost 
> /usr/local/BackupPC/bin/BackupPC_tarCreate /usr/bin/split  ccdev10 369 
> /usr/bin/bzip2 .bz2 000 /dev/sa0  *
> Xfer PIDs are now 56446
> Writing tar archive for host ccdev10, backup #369 to /dev/sa0
> .
> . unknown files
> bzip2: I/O or other error, bailing out.  Possible reason follows.
> bzip2: Inappropriate ioctl for device
>   Input file = (stdin), output file = (stdout)

Does that error come instantly or does it take a while? I suppose it
takes a while because there are so many "type 8" messages?

> Executing: /bin/csh -cf -cf /usr/local/BackupPC/bin/BackupPC_tarCreate -t -h 
> ccdev10 -n 369 -s \* . | /usr/bin/bzip2 >> /dev/sa0
> Error: /usr/local/BackupPC/bin/BackupPC_tarCreate, compress or split failed
> Archive failed: Error: /usr/local/BackupPC/bin/BackupPC_tarCreate, compress 
> or split failed

Do you get any messages in your system logs? Maybe there's some problem
with the tape? I'd try to archive to an intermediate file first, then
see whether I can copy that to tape.
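
Something along these lines would separate the two suspects (commands
taken from your log; the intermediate file path is a placeholder):

# 1. archive to a file instead of the tape
/usr/local/BackupPC/bin/BackupPC_tarCreate -t -h ccdev10 -n 369 -s '*' . \
    | /usr/bin/bzip2 > /tmp/ccdev10-369.tar.bz2
# 2. if that succeeds, stream the file to the tape
dd if=/tmp/ccdev10-369.tar.bz2 of=/dev/sa0 bs=64k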

BTW: I'm using a custom script to do the archiving since I've got a lot
of hosts to archive and want some control over the time it takes. Apart
from that, I'm using Bacula to write everything to tape.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises 
looking to deploy the next generation of Solaris that includes the latest 
innovations from Sun and the OpenSource community. Download a copy and 
enjoy capabilities such as Networking, Storage and Virtualization. 
Go to: http://p.sf.net/sfu/opensolaris-get
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem in transferring

2009-06-08 Thread Tino Schwarze
> Thanks for the info.  I continue to get the same message. BackUpPC web
> interface for "tape_archive" indicates that the backup failed, with
> the error message the same as above,

You need to look closer. The "type 8" messages are noise (and should
probably be suppressed if possible). Please post your full error message
(copy and paste, remove the "type 8" messages, please), there's some
other hidden issue, I'm sure.
 
> When I issue the command  
> #tar tvf /dev/sa0  I get a listing of directories on the tape. I am not able 
> to see the contents within directories. For example. 
> #tar tvf /dev/sa0/home 
> I get an an error back

What error? Please be more specific. Maybe you need to add a space
between /dev/sa0 and /home?

> My question then is:
> Even though I get the error message on the tape_archive web page, should
> I ignore it and assume the tape has the archived contents?

Never ever assume anything about your backups. Test them.

Tino.oO(Note to self: Need to perform an emergency restore test myself.)

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises 
looking to deploy the next generation of Solaris that includes the latest 
innovations from Sun and the OpenSource community. Download a copy and 
enjoy capabilities such as Networking, Storage and Virtualization. 
Go to: http://p.sf.net/sfu/opensolaris-get
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host Summary - Full Size: Wondering whereitcomes from

2009-06-04 Thread Tino Schwarze
Hi Flavio,

On Thu, Jun 04, 2009 at 12:03:07PM +0200, Boniforti Flavio wrote:

> > My advice: Just account on total amount of data backed up. 
> > And this is the number you get in the host summary page as 
> > "Full Size(GB)" and in host status page in the "File 
> > Size/Count Reuse Summary" table in "Totals" column - here you 
> > can also see the pooling effects nicely.
> 
> As I stated above, I think I will be accounting like you're suggesting
> to: "Host Summary" page will have to be enough, for me and for my
> customers.

There's another nice point of view: that you actually need less space on
your backup server is your reward for being so smart and using BackupPC
for that purpose. You deserve it! ;-)

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises 
looking to deploy the next generation of Solaris that includes the latest 
innovations from Sun and the OpenSource community. Download a copy and 
enjoy capabilities such as Networking, Storage and Virtualization. 
Go to: http://p.sf.net/sfu/opensolaris-get
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Change Archive Directory

2009-06-04 Thread Tino Schwarze
On Wed, Jun 03, 2009 at 10:25:01PM -0400, jbk wrote:

> I am trying to understand this subject heading so that I can 
> correct my backup archive. I have been running backuppc for 
> over a year now and the backed up data appears to be good. I 
> have done individual file restores without issues. What I am 
>   not seeing is any pooled data. I am using the Fedora 
> distribution (10) binary of Backuppc. When I originally set 
> up the archive directory I pointed it to an external usb 
> disk that is mounted in a location that is not in the 
> original "topdir" path. The disk is persistently mounted on 
> reboot via fstab. I changed the "topdir" path in the config 
> file to match the archive location backuppc still uses 
> /var/lib/backuppc/ for the backuppc user home directory 
> which contains the .ssh/   directory etc...
> 
> #fstab mount point
> LABEL=/backupdisk  /data/bilbo/backup  ext3  defaults  0 0

The easiest way is to just mount your USB drive at /var/lib/backuppc/.
No bind mount neccessary, that way.

Bind-Mounting should work like this in /etc/fstab:
/data/bilbo/backup  /var/lib/backuppc  none  bind  0  0

> #/etc/Backuppc/config.pl
> $Conf{TopDir} = '/data/bilbo/backup/BackupPC';

You would also have to patch BackupPC/Lib.pm if I remember correctly.
Which is not a Good Thing(tm) if you installed some RPMs.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
OpenSolaris 2009.06 is a cutting edge operating system for enterprises 
looking to deploy the next generation of Solaris that includes the latest 
innovations from Sun and the OpenSource community. Download a copy and 
enjoy capabilities such as Networking, Storage and Virtualization. 
Go to: http://p.sf.net/sfu/opensolaris-get
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Host Summary - Full Size: Wondering where itcomes from

2009-06-04 Thread Tino Schwarze
Hi Flavio,

On Thu, Jun 04, 2009 at 08:23:36AM +0200, Boniforti Flavio wrote:

> > Q: Does anyone know?
> > A: Yes
> 
> Maybe I'm the only human which thinks a bit more "elastically", but if
> anybody asks:
> 
> "Anybody knows why there's difference between the above 2 values?"
> 
> I'm prone to explain *why* there's a difference and not wasting my time
> and the time of others for typing simply "yes".
> But that may be a weird and complex way-of-thinking that is affecting
> only me :-/

Nope, I tend to think like that as well. :-|

> > Asked and answered.  What you were really looking for was:
> > 
> > Q: Please explain to me the difference between the two number?
> > A: The file structure under  backuppc has little resemblance 
> > to the file structure of the original system.  For each 
> > directory, there is essentially a file with the directory 
> > information as it appears on the original system (to handle 
> > permissions and such).  This will cause the numbers to differ 
> > across multiple directories.  There may be more reasons, but 
> the question seems so arbitrary and pointless I don't care to
> > put a lot of effort into getting a definitive answer.  Maybe 
> > if you have a good reason why you care why the numbers are 
> > different I might be more interested.
> 
> The reason *for me* to know which value I can trust and *why* is the
> fact that I have to account for HDD space usage.
> I'd really be happy using the values that BackupPC shows in the "Host
> Summary", but if they ain't really what HDD space usage should be, I
> just want to know which value to consider. If anybody has already done
> this sort of considerations (HDD space accounting per single host, which
> corresponds to a single customer), please explain to me his or her
> considerations.

The short answer is: You cannot account for single host disk usage
because of pooling.

Suppose you've got three hosts, all with the same operating system. All
common files will be in the pool only once. And they will get hardlinked
every time you do a full backup.

So you get three hosts with, say, 5 backups each. The /boot/vmlinuz
file will be shared by all those 15 backups. How would you account for
that disk space?

And it gets more complicated in practice. Customer 1 installs Firefox,
customers 2 and 3 use Opera. So now only 2 and 3 share the common
Opera files in the pool (which might be compressed after all).

Even if you developed some formula for how to account for disk space, it
is very expensive to figure out who shares a common pool file - you'd
need to scan all pc/ directories and remember inode numbers etc.

My advice: Just account on total amount of data backed up. And this is
the number you get in the host summary page as "Full Size(GB)" and in
host status page in the "File Size/Count Reuse Summary" table in
"Totals" column - here you can also see the pooling effects nicely.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Backing up a BackupPC server

2009-06-04 Thread Tino Schwarze
Hi there,

(I already felt like I was going to look dumb or anxious by writing what
I wrote...)

On Wed, Jun 03, 2009 at 01:09:38PM -0400, Jeffrey J. Kosowsky wrote:
> Tino Schwarze wrote at about 18:39:26 +0200 on Wednesday, June 3, 2009:
>  > > > I recently heard about lessfs, which runs on top of FUSE to provide
>  > > > a file system that does block-level de-duplication.  See:
>  > > > 
>  > > > http://www.lessfs.com
>  > > > https://sourceforge.net/project/showfiles.php?group_id=257120
>  > > > http://tokyocabinet.sourceforge.net/index.html
>  > > > 
>  > > > The actual storage is several very large (sparse?) files on any
>  > > > file system(s) of your choice.  It should provide all the benefits
>  > > > you expect: no issues of local limitations on hardlink counts,
>  > > > meta-data etc, and the database files can be copied or rsynced.
>  > > > I'm corresponding with the author to see if some additional useful
>  > > > features could be added.
>  > 
>  > Well, we've already got MD4 checksums of file blocks. And if I
>  > understand everything correctly, we DO GET collisions, therefore the
>  > hash chains.
> 
> First, the hash chains are based on *partial* file *md5* (not md4)
> sums.
> 
> Second, the collisions only occur because the hash is only done on the first
> and eighth (or last for small files) 128K block. So, obviously you will
> have collisions for large files that have the same first and eighth
> block. 

That was the first flaw in my thinking... So I would have to scan my
pool and compare the first and eighth 128k blocks (i.e. bytes 0-128k
and 1M-1M+128k - or is it 896k-1M?) for matches? Maybe I'll try that,
out of sheer curiosity (if I find the time to script it).
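
If I do, it would probably start out like this (a rough sketch, assuming
the compressed pool lives under /var/lib/backuppc/cpool; it only groups
files whose first and eighth 128k blocks match as stored on disk - it
does not reproduce BackupPC's exact partial-md5 recipe):

  import hashlib, os, sys
  from collections import defaultdict

  BLOCK = 128 * 1024
  POOL = sys.argv[1] if len(sys.argv) > 1 else '/var/lib/backuppc/cpool'

  def partial_digest(path):
      size = os.path.getsize(path)
      with open(path, 'rb') as f:
          first = f.read(BLOCK)
          if size > 8 * BLOCK:
              f.seek(7 * BLOCK)      # guess: "eighth block" = bytes 896k..1M
          else:
              f.seek(max(size - BLOCK, 0))  # small files: use the last block
          return hashlib.md5(first + f.read(BLOCK)).hexdigest()

  groups = defaultdict(list)
  for dirpath, dirnames, filenames in os.walk(POOL):
      for name in filenames:
          path = os.path.join(dirpath, name)
          try:
              groups[partial_digest(path)].append(path)
          except OSError:
              pass                   # vanished or unreadable - skip it

  for digest, paths in groups.items():
      if len(paths) > 1:             # same partial digest = chain candidate
          print(digest, *paths, sep='\n  ')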

>  > Of course, this is for 256k blocks, IIRC. And "only" 128 bit hashes.
>  > But I don't like the idea of relying on probabilities. I've got enough
>  > uncertainties from flaky hardware, bugs etc.
> 
> We rely on probabilities in all aspects of life. Nothing is certain.

I know that. Sometimes I'm paranoid - I just like to get rid of
probabilities (=uncertainties) where possible. 

> It all depends on the probability... I would much prefer to take the
> risk of a mathematically known infinitesimal probability (of the order
> of md5 hash collisions) than what most people in life take for granted
> as "absolute" fact. At least with a mathematically modeled system you
> know the risk which is more than most of us know about most other
> elements of our systems.
> 
>  > I won't trust such a file system for backup data.
> 
> Making blanket statements like that show a lack of understanding of
> probability vs. certainty in the world. 

Well, I just said that *I* won't trust such a file system. It's just a
gut feeling - nothing logical about it.

> If for example, the probability of a collision is many orders of
> magnitude less than the probability of you losing all your backups
> then I wouldn't worry about it. It all depends on the probability...

The bad thing about probabilities is that they don't tell you what will
happen, only what might happen. Even if the probability is very, very
small, that doesn't mean it won't happen the very next second. It's
just very unlikely.
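
Just to put a number on "very unlikely", though: a back-of-envelope
birthday bound, assuming an ideal, uniformly distributed 128 bit hash,
gives

  # P(any collision among n items) ~= n^2 / 2^129 for an ideal 128-bit hash
  n = 10**9                   # a generously large pool: one billion files
  print('%.3g' % (n * n / 2.0**129))
  # -> ~1.5e-21, many orders of magnitude below any hardware failure rate

So yes, the numbers are on Jeffrey's side - my distrust is a gut thing,
not a numbers thing.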

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Backing up a BackupPC server

2009-06-03 Thread Tino Schwarze
> > I recently heard about lessfs, which runs on top of FUSE to provide
> > a file system that does block-level de-duplication.  See:
> > 
> > http://www.lessfs.com
> > https://sourceforge.net/project/showfiles.php?group_id=257120
> > http://tokyocabinet.sourceforge.net/index.html
> > 
> > The actual storage is several very large (sparse?) files on any
> > file system(s) of your choice.  It should provide all the benefits
> > you expect: no issues of local limitations on hardlink counts,
> > meta-data etc, and the database files can be copied or rsynced.
> > I'm corresponding with the author to see if some additional useful
> > features could be added.

Well, we've already got MD4 checksums of file blocks. And if I
understand everything correctly, we DO GET collisions, therefore the
hash chains.

Of course, this is for 256k blocks, IIRC. And "only" 128 bit hashes.
But I don't like the idea of relying on probabilities. I've got enough
uncertainties from flaky hardware, bugs etc.

I won't trust such a file system for backup data.

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] 'Daily' digest isn't daily.

2009-06-03 Thread Tino Schwarze
On Wed, Jun 03, 2009 at 05:04:26PM +0100, G.W. Haywood wrote:

> There have been a few hiccups since I subscribed to the digest list
> but until now they haven't been remarkable.  However yesterday there
> were thirteen daily digests.  So far today there have been four.
> 
> Is there any chance that the daily digest could be, well, daily?

Maybe there's a maximum number of mails per digest or a maximum digest
size? We've had a lot of traffic today and yesterday. I'd check the
subscription options for whether there are some numbers to tune...

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] How to use backuppc with TWO HDD

2009-06-02 Thread Tino Schwarze
Hi,

> Another solution to the two hard drives backing up might be to use Raid
> 0 (striping).  This does not allow redundancy but it does let you
> combine the drives so the system sees them as one drive.

One should only use RAID0 for data one doesn't care about. It might
double throughput, yes, but it doubles the failure probability as well.
RAID0 might be suitable for automated build systems or similar setups
where only temporary data is stored. It's not suitable for a backup
system, IMO.
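
Back-of-envelope (assuming independent drive failures at probability p
over some period): the array dies if either drive dies, so for small p
the risk roughly doubles.

  # RAID0 over two drives: the array fails if either drive fails.
  p = 0.03                      # assumed per-drive failure probability
  print(1 - (1 - p)**2)         # 0.0591 - vs. 0.03 for a single drive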

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Backing up a BackupPC server

2009-06-02 Thread Tino Schwarze
On Tue, Jun 02, 2009 at 10:06:40AM -0500, Les Mikesell wrote:

> >> Still, it would be awesome to combine the simplicity and pooling
> >> structure of BackupPC with the flexibility of a database
> >> architecture...
> >>   
> > I, for one, would be willing to contribute financially and with my very 
> > limited skills if Craig, or others, were willing to undertake such an 
> > effort. Perhaps Craig would care to comment.
> 
> The first thing needed would be to demonstrate that there would be an 
> advantage to a database approach - like some benchmarks showing an 
> improvement in throughput in the TB size range and measurements of the 
> bandwidth needed for remote replication.

In my experience, BackupPC is mainly I/O bound. It produces a lot of
seeks on the block device (for directory and hash lookups). This might
actually benefit from a relational database - you'd just issue the
appropriate SELECT and have some indices in place, etc. Of course,
there's still the problem of how to store and query the directory
hierarchies efficiently.

Maybe someone should propose a real design; then we can check how to
map BackupPC's access patterns onto the database structure. It might
turn out to be really complex - I'm just wondering how to store files,
directories, attributes, the pool, and a particular backup number. We
currently create the directory structure for each backup so we can
store the attrib file (to keep track of deleted files, at least). We'd
have to do that in the database, too. There's no other solution, IMO.

I suppose you could only benchmark anything after implementing a
sufficiently complex part of the solution.

Another idea: Do we have performance metrics of BackupPC? It might be
useful to check what operations take most of the time. Is it pool
lookups? File decompression? Directory traversal for incrementals?

If, for example, we figure out that hash lookups and checksum reading
of hash files etc. are expensive, a little database (actually a
hashtable) might suffice - sort of a memcached which keeps track of
pool files, their sizes and checksums. This might be doable (maybe
disabled by default if it requires additional setup) and work like a
cache.
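
A minimal sketch of what I mean, with SQLite standing in for the
"little database" (hypothetical interface and file name - nothing like
this exists in BackupPC today):

  import sqlite3

  # Hypothetical pool index: digest -> (path, size). It would be filled
  # while writing pool files and consulted before any on-disk chain walk.
  db = sqlite3.connect('/var/lib/backuppc/poolindex.db')
  db.execute('''CREATE TABLE IF NOT EXISTS pool
                (digest TEXT PRIMARY KEY, path TEXT, size INTEGER)''')

  def remember(digest, path, size):
      db.execute('INSERT OR REPLACE INTO pool VALUES (?, ?, ?)',
                 (digest, path, size))
      db.commit()

  def lookup(digest):
      # None means "not pooled yet" - fall back to the slow path.
      return db.execute('SELECT path, size FROM pool WHERE digest = ?',
                        (digest,)).fetchone()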

> Personally I think the way to make things better would be to have a 
> filesystem that does block-level de-duplication internally. Then most of 
> what backuppc does won't even be necessary.   There were some 
> indications that this would be added to ZFS at some point, but I don't 
> know how the Oracle acquisition will affect those plans.

I don't think that belongs in the file system. In my opinion, a file
system should be tuned for one purpose: managing space and files. It
should not care about file contents in any way.

> Meanwhile, if someone has time to kill doing benchmark measurements, 
> using ZFS with incremental send/receive to maintain a remote filesystem 
> snapshot would be interesting.  Or perhaps making a vmware vmdk disk 
> with many small (say 1 or 2 gig) elements and running backuppc in a 
> virtual machine.  Then for replication, stop the virtual machine and 
> rsync the directory containing the disk image files.  This might even be 
> possible without stopping if you can figure out how vmware snapshots work.

You don't want heavy I/O in VMware without directly attached SAN
storage or a similarly expensive setup.

I'd rather propose a patch to rsync adding --treat-blockdev-as-files.
This would require block-level checksum generation on _both_ sides,
though, so it's rather I/O and CPU intensive. Then DRBD might be the
way to go - they already keep track of changed parts of the disk (but
that's a guess).
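
To illustrate the cost: each side would have to read the whole device
to produce a per-block checksum list, roughly like this (a sketch only,
not rsync's actual rolling-checksum scheme):

  import hashlib, sys

  BLOCK = 128 * 1024            # assumed block size; rsync picks its own

  # One digest per block of a block device (or any large file). Both
  # sides must do this full sequential read - hence the I/O and CPU cost.
  with open(sys.argv[1], 'rb') as dev:
      offset = 0
      while True:
          chunk = dev.read(BLOCK)
          if not chunk:
              break
          print(offset, hashlib.md5(chunk).hexdigest())
          offset += len(chunk)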

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de



Re: [BackupPC-users] Backing up a BackupPC server

2009-06-02 Thread Tino Schwarze
On Tue, Jun 02, 2009 at 06:27:35AM -0400, Peter Walter wrote:

> As a Linux newbie, I have only a partial understanding of the technology 
> underlying Linux and BackupPC, but I get the impression that the problem 
> with a rsync-like solution is that processing hardlinks is very 
> expensive in terms of cpu time and memory resources. This may be a 
> stupid question, but, if hardlinks are the problem, has any thought been 
> given to adding to BackupPC an option to use some form of database 
> (text, SQL or otherwise) to associate hashes to files, instead? It seems 
> to me that using hardlinks is in fact using that feature of the file 
> system *as* a database, a use that does not appear to be optimal ... if 
> I have misunderstood, please educate me :-)

An SQL approach would be rather complicated because it would have to
support a directory structure. We would end up with ... a filesystem!
The nice thing about using hardlinks is that the operating system keeps
track of the link count and we can use that link count to check for
superfluous files. This might be doable in a database as well, but
we'd have to keep a file system and a database in sync. Doable, but
error-prone. With the current design, there is only a file system.
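
The link-count trick is even visible from the outside. A toy sketch
(assuming the pool lives under /var/lib/backuppc/cpool; BackupPC_nightly
does the real cleanup):

  import os, sys

  POOL = sys.argv[1] if len(sys.argv) > 1 else '/var/lib/backuppc/cpool'

  # A pool file with st_nlink == 1 is referenced by nothing but the pool
  # itself - no backup links to it any more, so it could be removed.
  for dirpath, dirnames, filenames in os.walk(POOL):
      for name in filenames:
          path = os.path.join(dirpath, name)
          if os.lstat(path).st_nlink == 1:
              print(path)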

Tino, not doing backups of the pool, but archiving hosts to tape.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de


