Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Craig Barratt wrote:
 Rich writes:

   
 I don't think BackupPC will update the pool with the smaller file even
 though it knows the source was identical, and some tests I just did
 backing up /tmp seem to agree.  Once compressed and copied into the
 pool, the file is not updated with future higher compressed copies.
 Does anyone know something otherwise?
 

 You're right.

 Each file in the pool is only compressed once, at the current
 compression level.  Matching pool files is done by comparing
 uncompressed file contents, not compressed files.

 It's done this way because compression is typically a lot more
 expensive than uncompressing.  Changing the compression level
 will only apply to new additions to the pool.

 To benchmark compression ratios you could remove all the files
 in the pool between runs, but of course you should only do that
 on a test setup, not a production installation.

 Craig
   
The other point to keep in mind is that unless you actually need 
compression for disk-space reasons, leaving it off will often be faster 
on a CPU-bound server.  Since a script is provided 
(BackupPC_compressPool) to compress the pool later, you can safely leave 
compression off until you need the disk space.
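
For reference, the knob in question is $Conf{CompressLevel} in config.pl
(0 disables compression; 1-9 are zlib levels, with higher levels costing
more CPU per file):

    $Conf{CompressLevel} = 3;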

John



Re: [BackupPC-users] Backuppc and Amanda

2007-12-05 Thread Sylvain MAURIN
Sorry for the delay.
We are using Amanda for BackupPC archiving purposes. The main
difficulty lies in the kind of filesystem used for the pool.

My own experience comes from using:
Software:
  5 TB pool for 25 TB of hardlinked data, with a lot of
  small files, more than 20% of free filesystem space, plus defragmentation.
  All backups are taken from LVM snapshots (no RAID at that layer).
Hardware:
  3ware RAID5 / 750 GB SATA / 4 Opterons / 16 GB RAM

Tar tends to be too slow. It causes 'killer' interruptions
for the BackupPC rsync jobs: all CPUs end up in I/O wait, or
iostat 'await' climbs over 500.

That left me considering a full binary dump of the device, which
is not handled by Amanda, or some dump/restore process.

I did not test dump/restore on ext3 over LVM; ext3 looks too
slow for my BackupPC pool, and I wasn't confident about hardlinks.

I chose xfsdump after some tuning: DMAPI on an SGI-patched
kernel, plus mount-option optimization.
Then Amanda's xfsrestore became the bottleneck: backup time was
limited by the read/write bandwidth needed for its housekeeping data!
Once I moved those onto tmpfs mounts (8 GB in size!) and disallowed
swapping, I got good performance. Rough averages:
  tar on tuned XFS:        1 MB/s
  ext3 dump:               1-2 MB/s
  xfsdump without DMAPI:   1-2 MB/s
  xfsdump with DMAPI:      20-25 MB/s

My backups go to LTO-3. For Amanda I chose not to use a disk
split buffer; I do the splitting in RAM with 1 GB chunks.

Everything now completes in 2-3 days, but restoring is around 4 days;
I did not try to optimize that.

Sylvain

PS :
fstab extract :
tmpfs on /tmp/amanda type tmpfs (rw,size=8g)
/dev/mapper/vg_array0-BACKUPPC--SNAP on /srv-snap type xfs
(ro,nobarrier,largeio,swalloc,nouuid,sunit=128,swidth=1152,dmapi,mtpt=/srv-snap)
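For illustration, the snapshot-and-dump cycle behind that mount probably looks
something like this (the LV and volume-group names are inferred from the mount
line above; sizes, labels and the tape device are placeholders):

    lvcreate -s -L 50G -n BACKUPPC-SNAP /dev/vg_array0/BACKUPPC
    mount /srv-snap                                   # with the options shown above
    xfsdump -l 0 -L nightly -M lto3 -f /dev/nst0 /srv-snap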
Some reading about my NAS install with BackupPC and Amanda:
http://www.isc.cnrs.fr/informatique/public_notice/Melian

On Mon, 2007-11-26 at 13:28 -0800, Paddy Sreenivasan wrote:
 I'm a developer in Amanda (http://amanda.zmanda.com) project.  Is anyone
 using Amanda and Backuppc together?
 
 I'm interested in integrating BackupPC with Amanda.  Amanda will be
 the media manager (support for tapes and other media) and consolidator of data
 from a group of BackupPC clients.  Amanda's Application API
 (http://wiki.zmanda.com/index.php/Application_API) can be used for 
 integration.
 Lots of details have to be worked out.
 
 If anyone is interested in developing such a solution, please send me an email
 (paddy at zmanda dot com).  Any suggestions on what features would be
 useful in an integrated solution?
 
 Thanks,
 Paddy
 
 
-- 
**
 Sylvain MAURIN - Admin.Sys. Institut des Sciences Cognitives
 CNRS-Universite Claude Bernard Lyon 1
 67, boulevard Pinel. 69675 BRON cedex
 Tel:+33 437911214 Cel:+33 612399929 Mail: [EMAIL PROTECTED]
**
  Cert. CNRS-Plus S/N: 0C-31 - http://igc.services.cnrs.fr/
 SHA-1 fingerprint:
 7A:C7:4D:DE:29:C9:B3:FA:B8:81:5C:6B:80:8C:32:95:72:7A:4C:2B
 MD5  fingerprint:
 EA:B5:90:DC:72:3D:BB:39:FC:36:F5:B3:6D:96:BE:1D
**





Re: [BackupPC-users] how to configure storage?

2007-12-05 Thread dan
My 2 cents:

Having 4 drives, I would suggest RAID5.  RAID5 gives a good blend of
performance and reliability: you get redundancy and an improvement in
filesystem performance.

You get 1.5TB plus a redundant drive.  Rebuild speed for an array of that size on
commodity hardware will be slow but livable; it will probably take a few hours to
rebuild.  I have a 4-drive RAID5 on SATA with Seagate 400GB disks, and a full
rebuild at 50% usage on the drives takes about 2 hours, though that was a test
and not the result of a failure; I'm not sure that really matters though.

I would also suggest you use pure software RAID on Linux.  It decouples you
from any specific controller, and your drives can be pulled and put in any
other Linux setup to get at the data if the machine crashes.
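
For illustration, creating that kind of array with mdadm looks roughly like
this (device names, filesystem choice and config path are illustrative):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    mkfs.ext3 /dev/md0                       # or whichever filesystem you prefer
    mdadm --detail --scan >> /etc/mdadm.conf # /etc/mdadm/mdadm.conf on Debian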

You have one other promising option: ZFS.  You would need to run
OpenSolaris, Solaris or Nexenta (Ubuntu + OpenSolaris) to get it, but it has
a RAID-Z function similar to RAID5, with volume management (like LVM) built in,
so you can add drives as you please.  I seem to be the big ZFS tester on the
mailing list and I have had no problems.  I also use the built-in compression
rather than BackupPC compression.  Additionally, you can remotely replicate a
ZFS volume.  I have just started exploring this option because of some comments
on the list here that rsync gets wacky at larger file counts, and I currently
replicate my BackupPC system to a remote site with rsync.
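
If anyone wants to try it, the ZFS side is only a few commands; pool,
filesystem and device names below are illustrative:

    zpool create backup raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zfs create backup/backuppc
    zfs set compression=on backup/backuppc
    # remote replication of a snapshot:
    zfs snapshot backup/backuppc@nightly
    zfs send backup/backuppc@nightly | ssh remotehost zfs receive tank/backuppc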

With RAID5 you cannot add drives to the array once it is built.  If you add
another volume with LVM, it will not be redundant, and the filesystem will
then require that drive to function, so losing just one drive loses
everything!  You would have to build a second RAID5 array and add that to your
volume group.

oh, one last thing.  good luck :)

On Dec 5, 2007 6:08 AM, Mirco Piccin [EMAIL PROTECTED] wrote:

 Hi all.

  I also have four 500 GB IDE hard disks. I'm wondering what you would
  advise for the best way to configure them. I'm tempted to put them all
  in one lvm volume so that I can have 2 TB of storage. This is also the
  easiest for me to accomplish.

 I'd like to post my experience here.
 I assembled a small server (low power consumption and low noise) to back up
 about 30 PCs and 4 servers (a mix of file server, mail server and application
 server).

 The OS is installed on a 1 GB DOM (Debian etch) and the storage lives on
 SATA disks.  Initially I used two 500 GB SATA disks in RAID1 with LVM
 on top.
 Then (after about 2 months) I added another two 500 GB SATA disks (in RAID1)
 and extended the original LVM volume.

 So now I have 1 TB of total storage space.
 I chose this approach because I did not know how many disks would be added to
 the BackupPC server (I have 2 SATA channels on the motherboard plus 4 SATA
 channels on a PCI card), and because SATA disks are not that expensive.

 For me, RAID1 plus LVM means easy expansion, as sketched below.
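
 A rough sketch of that expansion step (md device, volume group and LV names,
 and sizes are illustrative, not my exact commands):

     mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
     pvcreate /dev/md1
     vgextend vg_backup /dev/md1
     lvextend -L +465G /dev/vg_backup/backuppc
     resize2fs /dev/vg_backup/backuppc    # grow the ext3 filesystem to fill the LV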

 Hope my experience helps someone...
 Bye

 (...and sorry for my poor english)







Re: [BackupPC-users] NetBIOS name lookup - trouble with DHCP

2007-12-05 Thread dan
If you are still able to ping the laptop at the wireless address, then the
wireless interface just needs to be shut down.  After that you should be able
to find the machine at the new IP via nmblookup (see the example below).

I know of no way to have a preferred address that is wired and a fallback
address that is wireless.
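
For example, once the wireless interface is down, something like this should
return the wired address (using the name from the original post):

    nmblookup myNotebook-xyz        # NetBIOS name -> IP
    nmblookup -A 192.168.1.105      # IP -> NetBIOS node status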

On Dec 4, 2007 6:41 AM, Alexander Lenz [EMAIL PROTECTED] wrote:

  Hello guys,



 At first, thanks for the info on user access to backups that one of you gave
 me last time.

 Now, is there any way to change the mapping of NetBIOS name to IP address?

 Especially with DHCP leases involved, and multiple interfaces on laptops
 (i.e. wired and wireless), this can easily get broken.

 In this case, I want the resolver to find 'myNotebook-xyz' under
 IP 192.168.1.105 instead of .154, which was the old wireless IP. How do I
 achieve that?

 I already restarted /usr/sbin/nmbd (-D), but to no avail.

 Still, a 'ping myNotebook-xyz' resolves to .154.

 Where's the crucial knob to turn here?



 Thanks a lot in advance,



 == Newton ==



 --
 Metaversum GmbH
 Geschaeftsfuehrer: Jochen Hummel, Dietrich Charisius, Dr. Mirko Caspar
 Rungestr. 20, D-10179 Berlin, Germany
 Amtsgericht Berlin Charlottenburg HRB 99412 B





Re: [BackupPC-users] backing up localhost fails

2007-12-05 Thread scott
Craig Barratt wrote:

Scott writes:

  

TarClientCmd is

/usr/bin/sudo /bin/tar -c -v -f - -C $shareName+ --totals

and /bin/tar --version gives:

tar (GNU tar) 1.15.92

The backup fails with:

2007-12-04 22:09:50 full backup started for directory /data
2007-12-04 22:27:23 Got fatal error during xfer (No files dumped for
share /data)
2007-12-04 22:27:28 Backup aborted (No files dumped for share /data)



What is in the XferLOG.bad file?


  

It starts with:

Running: /usr/bin/sudo /usr/local/bin/tarCreate -v -f - -C /data --totals .
full backup started for directory /data
Xfer PIDs are now 23492,23491
apache2/
assp/
assp/spam/
assp/spam/3242
assp/spam/2924
assp/spam/1809
assp/spam/899
assp/spam/607
assp/spam/1899

which is confusing, as I've removed the tarCreate wrapper, so I've no idea 
where it's picking that up from.  In desperation I rebooted the server, 
but it still comes up.  The relevant section from the machine.pl file is:

$Conf{XferMethod} = 'tar';
$Conf{TarClientCmd} = '/usr/bin/sudo /bin/tar -c -v -f - -C $shareName+ 
--totals';
$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

Anyway, the XferLOG.bad continues with file names in assp/spam until 
around line 100 where it changes to:

assp/spam/3552
assp/spam/1666
  create d 755   0/0   0 .
  create d 755   0/0   0 apache2
  create d 755  99/0   0 assp
  create d 755  99/0   0 assp/spam
  pool 600 99/99   10066 assp/spam/3242
  pool 600 99/99   10007 assp/spam/2924
  pool 600 99/99   10014 assp/spam/1809
  pool 600 99/991566 assp/spam/899

I get around 60 of these lines beginning with pool, and from there on 
it alternates between just the file names and the pool lines.  I guess 
this is it finding the files in the pool.

The first directory, /data, then finishes and it starts on the second, 
which is where it seems to go wrong:

  pool 554   0/03999 www/backuppc/cgi-bin/BackupPC_Admin
tarExtract: Done: 0 errors, 116182 filesExist, 14267486394 sizeExist, 
11865427525 sizeExistComp, 124095 filesTotal, 14494799736 sizeTotal
Running: /usr/bin/sudo /usr/local/bin/tarCreate -v -f - -C /usr --totals .
full backup started for directory /usr
Xfer PIDs are now 8150,8149
tarExtract: ./
tarExtract: ./bin/
tarExtract: ./bin/X11
tarExtract: ./bin/indent
tarExtract: ./: checksum error at
  create 0   0/0   0
tarExtract: ./bin/
tarExtract: ./bin/X11
tarExtract: ./bin/indent
tarExtract: .
tarExtract: : checksum error at
tarExtract: Can't open /backup/pc/rivera/new/f%2fusr/ for empty output
  create 0   0/0   0 .
tarExtract: : checksum error at
tarExtract: Can't open /backup/pc/rivera/new/f%2fusr/ for empty output
  create 0   0/0   0 .
tarExtract: : checksum error at
tarExtract: Can't open /backup/pc/rivera/new/f%2fusr/ for empty output
  create 0   0/0   0 .

I think I have plenty of space, my backups go to /backup:

df -k:
Filesystem   1k-blocks  Used Available Use% Mounted on
/dev/hda2479731472  43278360 412084116  10% /
/dev/hdc2307402872  27090588 264697104  10% /backup

Thanks for looking.





Re: [BackupPC-users] BackupPC-users Digest, Vol 20, Issue 11

2007-12-05 Thread Ben Blankenberg

Message: 1
Date: Tue, 4 Dec 2007 16:40:53 -0600
From: Carl Wilhelm Soderstrom [EMAIL PROTECTED]
Subject: Re: [BackupPC-users] Backing up Windows Client with spaces in
the network path
To: backuppc-users@lists.sourceforge.net
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=us-ascii

On 12/04 03:44 , Ben Blankenberg wrote:
 Have Tried these configs
 
 $Conf{BackupFilesOnly} = ['*/test?test/*'];
   
   And get this error message returned Last error is No files
 dumped for share C$

Hmm, I don't use the $Conf{BackupFilesOnly} directive myself, so I can
only
guess. Have you tried:
$Conf{BackupFilesOnly} = ['/*/test?test/*'];

or better yet, specifying the full path? (tho I realize it won't be a
catch-all then, it'll be a useful experiment).

$Conf{BackupFilesOnly} = ['/foo/bar/test?test/*'];

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com


I have tried the recommended syntax and it still produces the same
results. Basically I want to back up the My Documents folder on
XP clients; would this be easier using rsyncd?



Re: [BackupPC-users] BackupPC-users Digest, Vol 20, Issue 11

2007-12-05 Thread Carl Wilhelm Soderstrom
On 12/05 08:42 , Ben Blankenberg wrote:
 Have tried the recommended syntax and it still produces the same
 results. I am basically wanting to backup the My Documents folder of
 XP Clients, would this be easier using rsyncd?

It's much better to use rsyncd if at all possible. 
Just download the package from the backuppc website; it's pretty easy to
use.
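
For the My Documents case, the rough shape is: install the cygwin-rsyncd
package on the XP client, define a module for the folder, and point the
per-host config at it.  The module name, paths and credentials below are
illustrative, not defaults:

    # rsyncd.conf on the XP client
    [docs]
        path = /cygdrive/c/Documents and Settings/ben/My Documents
        auth users = backuppc
        secrets file = c:/rsyncd/rsyncd.secrets

    # per-host config on the BackupPC server
    $Conf{XferMethod} = 'rsyncd';
    $Conf{RsyncShareName} = ['docs'];
    $Conf{RsyncdUserName} = 'backuppc';
    $Conf{RsyncdPasswd}   = 'secret';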

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn

John Pettitt wrote:
  

What happens is that the newly transferred file is compared against candidates 
in the pool with the same hash value, and if one exists it's just 
linked; the new file is not compressed again.  It seems to me that if you 
want to change the compression in the pool, the way to go is to modify 
the BackupPC_compressPool script (which compresses an uncompressed pool) 
to re-compress a compressed pool instead.  There is some juggling 
involved to maintain the correct inode in the pool so all the links 
remain valid, and this script already does that. 

  
You're sure?  That isn't my observation.  At least with rsync, the files 
in the 'new' subdirectory of the backup are already compressed, and I 
vaguely recall reading the code and noticing it compresses them during 
the transfer (but on the server side as it receives the data).  After 
the whole rsync session is finished, then the NewFiles hash list is 
compared with the pool.  Identical files (determined by hash code of 
uncompressed data) are then linked to the pool.


If that is all true, then it seems like there is an opportunity to 
compare the size of the existing file in the pool with the new file, and 
keep the smaller one.


Rich


Re: [BackupPC-users] Using NFS share as BackupPC pool

2007-12-05 Thread Rob Morin
I too am using BackupPC to back up over NFS v3 and have never had an issue; I 
back up 5 Linux boxes every day, and it's been running smooth as ice for the 
last 6 months or so...

I have a question: is there any advantage to using NFS v3 over NFS v4?
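
For reference, a pool mounted over NFSv3 usually looks something like this in
/etc/fstab (server name, export path and mount point are illustrative):

    nas:/volume1/backuppc  /var/lib/backuppc  nfs  rw,hard,intr,nfsvers=3  0  0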

Everybody have a great day!

Rob Morin
Dido Internet Inc.
Montreal,Canada
http://www.dido.ca
514-990-



Travis Fraser wrote:
 On Tue, 04 Dec 2007 22:45:40 +0100
 Dan S. Hemsø Rasmussen [EMAIL PROTECTED] wrote:

   
 Hi...

 Yes... the Synology box is running Linux. The box has a mirror 
 (RAID1), and they have just released a firmware with NFS support... so it 
 should support hardlinks.

 Do you have any experience, good or bad, with BackupPC over NFS - any 
 performance issues...?

 
 I back up 6 machines for a total of around 40GB. The NFS performance has
 never appeared to have any issues. One time the NAS box did hang,
 causing some obvious problems. A restart of the NAS box fixed things
 though. Other than that one hang, the BackupPC server has been running
 fine for over two years straight.
   
 Hi...

 Anyone tried to use a NFS mount as pool for the backups...?
 I would like to use a Synology NAS box as storage for my BackupPC 
 server. But will it work.

 
 
 I use a NAS box mounted via NFS. It works fine. The underlying
 filesystem on the NAS must support hardlinks I would think. Does the
 Synology box use Linux for the OS?

   
   

 

   



Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Rich Rauenzahn wrote:


 I know backuppc will sometimes need to re-transfer a file (for instance, 
 if it is a 2nd copy in another location.)  I assume it then 
 re-compresses it on the re-transfer, as my understanding is the 
 compression happens as the file is written to disk.(?)  

 Would it make sense to add to the enhancement request list the ability 
 to replace the existing file in the pool with the new file contents if 
 the newly compressed/transferred file is smaller?  I assume this could 
 be done during the pool check at the end of the backup... then if some 
 backups use a higher level of compression, the smallest version of the 
 file is always preferred (ok, usually preferred, because the transfer is 
 avoided with rsync if the file is in the same place as before.)

 Rich

   
What happens is that the newly transferred file is compared against candidates 
in the pool with the same hash value, and if one exists it's just 
linked; the new file is not compressed again.  It seems to me that if you 
want to change the compression in the pool, the way to go is to modify 
the BackupPC_compressPool script (which compresses an uncompressed pool) 
to re-compress a compressed pool instead.  There is some juggling 
involved to maintain the correct inode in the pool so all the links 
remain valid, and this script already does that. 
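
For anyone who goes that route with the existing script, the usual sequence is
to stop the daemon first and run it as the backuppc user (install paths vary by
distribution, it is slow on a large pool, and the script's usage output lists
its options):

    /etc/init.d/backuppc stop
    su - backuppc -c '/usr/share/backuppc/bin/BackupPC_compressPool'
    /etc/init.d/backuppc start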

John




Re: [BackupPC-users] Skirt a firewall

2007-12-05 Thread Gene Horodecki
The howto is complete, at:

http://backuppc.wiki.sourceforge.net/How+to+backup+through+an+SSH+tunnel

I used PuTTY to create the tunnel in my instructions, but cygwin's ssh
would work just as well.

The most difficult part was figuring out how to ping the client
appropriately, since a normal ping won't work to a non-pinging client.
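
For anyone who wants the shape of it before reading the wiki page: the
reverse-tunnel idea looks roughly like this (port number, user and host names
are illustrative; the howto above has the real details):

    # on the work laptop (cygwin), keep a reverse tunnel open to the BackupPC server:
    ssh -N -R 3322:localhost:22 backup@backuppc.example.com

    # on the BackupPC server, point the host's config at the tunnelled port:
    $Conf{ClientNameAlias} = 'localhost';
    $Conf{RsyncClientCmd}  = '$sshPath -q -x -p 3322 -l backup $host $rsyncPath $argList+';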


Jack [EMAIL PROTECTED] wrote:


It is probably against your company's security policy, but to get around
this you can use cygwin's ssh to connect to your PC at home and build a tunnel,
rather than having your BackupPC server at home try to connect to your work PC.
You need the tunnel to be a stable, consistent connection that BackupPC
can talk through.  This way it looks like your work PC and the BackupPC
machine are on the same LAN.
http://www.openoffice.org/scdocs/ddSSHGuide.html or
a Google search on 'ssh tunnel cygwin' comes up with lots of hits.  Enjoy,
and let us know how things work out for you.  (PS: think about taking notes
and writing a howto for the rest of us :)


From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gene Horodecki
Sent: Saturday, December 01, 2007 12:06 PM
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Skirt a firewall

My place of business provides me with all kinds of firewalls and
encryption on my hard drive but, wonderfully, no backup protection at all.
And so... I am trying to back up my documents folder on my work laptop to my
BackupPC installation.  But I have a problem: the desktop firewall
policy that comes as part of the image doesn't allow any incoming
connections whatsoever.  Is there a way for me to back up this laptop with
BackupPC?  I have cygwin sshd installed, so I'm hoping that with some
creative tunneling or something like that, it can work.  Thanks!




[BackupPC-users] (no subject)

2007-12-05 Thread Carl Keil
Rich Rauenzahn wrote:



 Steve Willoughby wrote:

On Thu, Nov 29, 2007 at 03:22:10PM -0800, Carl Keil wrote:


Hi Folks,

I'm trying to retrieve some deleted files from a BackupPC backup.  The
backup was deleted, but not much has been written to the drive since the
backup.  This is an ext3 filesystem, so I'm forced to use the 'grep an
unmounted drive' method of retrieval.

Does anyone know a way to have grep return everything between two retrieved
strings?  Like everything between 'tuxpaint' and 'end'.  I'm trying to
retrieve PNG files.  Can you describe the tools and syntax I'd
need to employ to accomplish such a feat?  I'm familiar with the command
line, I've gotten grep to return some interesting results, and I know about
piping commands, but I can't quite figure out the steps to extract these
PNGs from the raw hard drive.



instead of grep, how about the command:
   perl -ne 'print if /tuxpaint/../end/'

That would be a filter to print the lines from the one matching the
regexp /tuxpaint/ to the one matching /end/.

It'll work as a filter like grep does; either specify filenames at the
end of the command line or pipe something into it.




 Are you searching backuppc's ext3 filesystem?  Those PNG backup files
are likely compressed with backuppc and gzip, so you're really wanting
to look for backuppc's header information.  Also, realize that
sufficiently large files will not necessarily be contiguous on the
unmounted drive.

 Here's a thread where someone had some limited success with midnight
commander in 2002
http://www.ale.org/archive/ale/ale-2002-08/msg01317.html

 Rich

I'm sorry for the delay, I'm just now getting a chance to try the perl
-ne suggestion.  What do you mean by backuppc's header info?  How would I
search for that?

Thanks for your suggestions,

ck




Re: [BackupPC-users] Bug in Backup File Selection in BackupPC 3.1?

2007-12-05 Thread Jon Forrest
Nils Breunese (Lemonbit) wrote:
 Jon Forrest wrote:
 
 Also, I read the documentation and learned that if the smb
 transport is being used, there are rules about what happens
 if both BackupFilesOnly and BackupFilesExclude are specified.
 Why do these rules only apply when smb is used, and not
 when rsync is used?
 
 This is explained in the documentation: 
 http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupfilesexclude_.
  
 The options are directly passed to the transfer program you're using and 
 smb, tar and rsync happen to use different include/exclude mechanisms.

Thanks for replying.

No doubt, but I was thinking that BackupPC would pass the appropriate
arguments to the transport mechanism since each one is implemented in
its own Perl module.

In any case, the only mention of rsync in that section you mention
has to do with specifying --one-file-system. I don't think that's
relevant in my case because I'm not complaining about /proc
being backed up.

 I'm sure it works, there must be some error in your settings. Could you 
 maybe post your BackupFilesExclude and BackupFilesOnly settings?

Sure. In /etc/BackupPC/config.pl

$Conf{BackupFilesOnly} = {};
 $Conf{BackupFilesExclude} = {};

In /etc/BackupPC/pc/client.pl

$Conf{BackupFilesOnly} = {
    '/etc/namedb' => [
        ''
    ],
    '/etc/named.conf' => [
        ''
    ]
};

The only other thing I can think of is that my share name
is / but even so, I would expect BackupFilesOnly to do
its trick. (I'm not sure what share really means when
backing up Unix systems).

Cordially,

-- 
Jon Forrest
Unix Computing Support
College of Chemistry
173 Tan Hall
University of California Berkeley
Berkeley, CA
94720-1460
510-643-1032
[EMAIL PROTECTED]



Re: [BackupPC-users] Sending email via Lotus Notes Server

2007-12-05 Thread Les Mikesell
[EMAIL PROTECTED] wrote:
 Hello,
 
 I have been using BackupPC 3.0.0 for several months on a test setup, and now 
 I would like to use the email notifications.
 I don't want to use sendmail, because I work with a Lotus Notes server and 
 I would like to use it for sending emails.
 How must I set up BackupPC to use it?

Just tell the sendmail on the BackupPC machine to deliver local mail to 
the Notes server.  The details vary between Linux distributions, but you 
need to add
define(`MAIL_HUB',[ip.of.internal.server])
to sendmail.mc, rebuild sendmail.cf and restart sendmail.  In a Fedora- 
or RedHat-style distribution, this lives in /etc/mail and the rebuild 
happens automatically with a 'service sendmail restart' command.  (Note 
that the []'s around the IP address need to be there.)
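
A minimal sketch of those steps on a Fedora/RHEL-style layout (the IP address
is a placeholder for the Notes server; adjust paths for other distributions):

    define(`MAIL_HUB', `[192.168.1.25]')dnl   # added to /etc/mail/sendmail.mc
    cd /etc/mail && make && service sendmail restart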

-- 
   Les Mikesell
[EMAIL PROTECTED]






Re: [BackupPC-users] (no subject)

2007-12-05 Thread Rich Rauenzahn
Carl Keil wrote:
 I'm sorry for the delay, I'm just now getting a chance to try the perl
 -ne suggestion.  What do you mean by backuppc's header info?  How would I
 search for that?

 Thanks for your suggestions,


   
BackupPC stores the compressed backed-up files in compressed blocks, with 
an MD4 rsync checksum.  For instance, you can't just gunzip a 
file from the pool to examine its contents; you have 
to use BackupPC's zcat utility.  I don't know that the format is 
documented outside of the Perl module that implements it.  It looks like it 
writes zlib blocks and tweaks the first byte...

Take a look at BackupPC::FileZIO for more details...

http://backuppc.cvs.sourceforge.net/backuppc/BackupPC/lib/BackupPC/FileZIO.pm?revision=1.26view=markup
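
For example, to view a file that is still in the pool (the install path and
pool file name are illustrative):

    /usr/share/backuppc/bin/BackupPC_zcat /var/lib/backuppc/cpool/1/2/3/123abc456def > recovered.png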

Rich



Re: [BackupPC-users] Bug in Backup File Selection in BackupPC 3.1?

2007-12-05 Thread Craig Barratt
Jon writes:

  I'm sure it works, there must be some error in your settings. Could you
  maybe post your BackupFilesExclude and BackupFilesOnly settings?
 
 Sure. In /etc/BackupPC/config.pl
 
 $Conf{BackupFilesOnly} = {};
  $Conf{BackupFilesExclude} = {};
 
 In /etc/BackupPC/pc/client.pl
 
 $Conf{BackupFilesOnly} = {
     '/etc/namedb' => [
         ''
     ],
     '/etc/named.conf' => [
         ''
     ]
 };
 
 The only other thing I can think of is that my share name
 is / but even so, I would expect BackupFilesOnly to do
 its trick.

Your BackupFilesOnly is backwards.  It should map each share name
to the list of files/directories to back up for that share:

$Conf{BackupFilesOnly} = {
    '/' => ['/etc/namedb', '/etc/named.conf'],
};

These are relative to the share name, which you said was /.

Instead, you could use /etc as the share name and then set:

$Conf{BackupFilesOnly} = {
    '/etc' => ['/namedb', '/named.conf'],
};

 (I'm not sure what share really means when backing up Unix systems).

It depends on the Xfer method.  BackupPC started with only smb for PCs,
which use the share terminology.  For tar and rsync, share means the
absolute directory path.  For rsyncd it means the rsyncd module name.

Craig
