Re: [BackupPC-users] What things may influence BackupPC to run faster?

2008-03-19 Thread John Pettitt
dan wrote:
 I'd like to add that ZFS is not an experimental filesystem.  It is
 deployed in production environments on Solaris and is very robust for
 its age.  Also, since it did not start life as open source, it lived
 behind the scenes and was tested behind the curtain at Sun for
 years, so its real age is hidden.
True ... but the FreeBSD implementation of ZFS is experimental in 7.0
and will be production in 7.1 - since I run 7-STABLE rather than 7.0 I
may give it six months, then try it the next time I have cause to change
the disk config on the backup box.

It's an interesting change - I have a 3ware controller running eight 500GB
drives - six in a RAID 10 set and two in a RAID 1 set.   To use ZFS
effectively I'd need to back up the data (about 1.2TB between the file
systems), reconfigure the controller as JBOD and let ZFS run the drives
(I'd use raidz2 plus a spare) - not a trivial change to make.
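
For reference, once the controller is in JBOD mode the rebuild would look
roughly like the sketch below (the da0-da7 device names, the pool name and
the mount point are placeholders, not my actual config):

  # seven disks in a double-parity raidz2 vdev, plus one hot spare
  zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 spare da7
  # a dedicated filesystem for the BackupPC pool
  zfs create -o mountpoint=/var/db/backuppc backup/pool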

John




Re: [BackupPC-users] What things may influence BackupPC to run faster?

2008-03-18 Thread John Pettitt
Bernhard Ott wrote:
 John Pettitt wrote:
   
 In the server I just upgraded (Core 2 Quad 2GHz, 2GB, 1.5TB UFS on
 RAID 10, FreeBSD 7.0) my backups run between 3.6 MB/sec for a remote
 server (*) and 56 MB/sec for a volume full of digital media on a gig-e
 connected Mac Pro.  Having a multi-core CPU makes a big difference
 (bigger than I expected).

 (*) rsync is a wonderful thing -  six times the actual line speed.

 John

 

 Just out of curiosity: why not use ZFS?  Is it really to be
 considered experimental?  ZFS could be a reason for me to switch to
 FreeBSD.  I remember dan being the expert on ZFS - any news?

 Bernhard


   
The box is an upgrade from a 6.x machine and I haven't rebuilt with ZFS
yet - I probably won't until 7.1 - backups are not the place to play
with experimental file systems.

John



Re: [BackupPC-users] What things may influence BackupPC to run faster?

2008-03-17 Thread John Pettitt
Bruno Faria wrote:
 Hello to everyone,

 Lately, for some reason, BackupPC has been running very slow on a
 server that we have configured to do backups.  Just so that you guys
 can have an idea of how slow it really is going, it took BackupPC
 10265.2 minutes to back up 1656103 files totaling only 24 gigabytes
 worth of files.  Obviously, I can't really wait a week for a 24 gigabyte
 backup to be done.  Now here's what makes me think that this problem
 with BackupPC could be due to server hardware: I first started
 doing backups for one PC at a time, and it took BackupPC 468.8 minutes
 to back up 2626069 files or 32 gigabytes worth of files for that same
 computer.  But now I have about 45 computers added to BackupPC and
 sometimes BackupPC is backing up 30 of them or more at the same time,
 and that's when the server really goes slow.

 Here's the top command when the BackupPC server is going slow:
 top - 19:06:36 up 15 days,  6:11,  3 users,  load average: 28.76, 
 39.03, 32.14
 Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
 Cpu(s):  3.7% us,  2.2% sy,  0.0% ni,  7.7% id, 86.0% wa,  0.4% hi,  
 0.0% si
 Mem:   1033496k total,  1019896k used,13600k free,   141720k buffers
 Swap:  5116692k total,  2538712k used,  2577980k free,41932k cached

Any time your load average is higher than your number of CPUs your system
is contending for CPU.   You are also using a lot of swap, which makes me
think your box has gone into a thrashing death spiral.   Add RAM and limit
the number of simultaneous backups (I found by trial and error that the
number of spindles in the backup array is a good starting point for how
many backups can be run at once).
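
The concurrency cap is one line in config.pl - something like the sketch
below (the numbers are just a starting point, tune them to your spindle
count and RAM):

  # config.pl (path varies by install)
  $Conf{MaxBackups}     = 4;   # simultaneous scheduled backups - roughly one per spindle
  $Conf{MaxUserBackups} = 2;   # extra user-requested backups allowed on top of that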

In the server I just upgraded (Core 2 Quad 2GHz, 2GB, 1.5TB UFS on
RAID 10, FreeBSD 7.0) my backups run between 3.6 MB/sec for a remote
server (*) and 56 MB/sec for a volume full of digital media on a gig-e
connected Mac Pro.  Having a multi-core CPU makes a big difference
(bigger than I expected).

(*) rsync is a wonderful thing -  six times the actual line speed.

John



Re: [BackupPC-users] Hardware upgrade advice

2008-02-28 Thread John Pettitt
David Rees wrote:
 On Wed, Feb 27, 2008 at 4:38 PM, Stephen Joyce [EMAIL PROTECTED] wrote:
   
  (Mostly) agreed. If you can afford a hardware raid controller, raid 5 is a
  good choice.
 

 To clarify, a hardware raid controller with battery backed RAM is a
 good choice fo RAID 5, otherwise it will either be very slow for small
 random writes or run the risk of data corruption.

 -Dave

   
And creating directory entries very much counts as small random writes,
and BackupPC does a lot of that.

John



Re: [BackupPC-users] Hardware upgrade advice

2008-02-27 Thread John Pettitt
Nils Breunese (Lemonbit) wrote:
 Hello all,

 We're running a BackupPC 3.1.0 installation on CentOS 4 32-bit on a  
 machine with the following specs:

 - Intel Celeron CPU 2.66 GHz
 - 512 MB RAM
 - BackupPC pool on a single 250 GB ATA 133 drive

 We currently running one backup at a time ($Conf{MaxBackups} = 1;).  
 This already maxes out the iowait%. The machine is also swapping  
 sometimes. If we'd want to do more backups simultaneously, how would  
 you prioritize the following possible upgrades:

 - Dual (or more) core CPU
 - More RAM
 - Faster drive (more drives (RAID)?)

 Thanks in advance,

 Nils Breunese.
   
I just upgraded my box - got an Asus P5K WS motherboard, 2GB of memory
and a quad-core 2.2 GHz CPU for well under a grand.   That motherboard
has two gigabit Ethernet ports, 6 SATA ports and a 64-bit PCI slot for my
3ware controller.   A full tower case with a 6-drive cage and a 600W PSU
cost me about $350 a year or two back, and eight 500GB SATA disks and a
3ware 9500S-12 (from eBay) rounded things out to around $2.5K.   It's still
mostly IO bound (I don't use compression since most of my data is JPEG
and video from my photo business).   It runs FreeBSD 6.3 32-bit for
historic reasons (other apps on the machine).  Backups of my photo
archive (gig-e from a Mac Pro) now run at 30-40MB/sec.

So my take: if your box is swapping, that's the #1 upgrade, because
swapping will kill any server's performance and memory is cheap.   Next
I'd look at disk - with the right controller more spindles will give you a
performance boost, however RAID 5 is not a great way to go because of the
cost of doing write splices.   I run RAID 10; again there is some
performance cost because everything gets written twice, but the upside is
that reads come from six different spindles in my setup, so read performance
is improved (and BackupPC is mostly read bound, not write bound, because of
the pooling).   Lastly, CPU: unless you are running with a load average close
to or greater than the number of CPUs it's probably not going to gain you much.


John





Re: [BackupPC-users] Rsyncd transfer speed

2008-02-25 Thread John Pettitt
Alan Orlič Belšak wrote:
 Hello,

 is there a way to speed up the transfer via rsyncd?  The last backup ran at
 0.9 MB/s.  The network is 100Mb, the backup was made on a local disk (no
 USB involved), except that the include list was involved and there are a lot
 of empty directories.

 Bye, Alan

   

It's almost certainly not rsyncd that is the holdup - next time it runs
take a look at CPU load, disk activity and network activity - I'd be
willing to bet that you're disk bound. Do you have rsync checksum
caching enabled? It normally makes a big difference.
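
If it isn't enabled, it's a two-line addition to config.pl for the rsync
methods - a sketch (place it after the stock $Conf{RsyncArgs} definition and
check the config.pl comments for your version first):

  # --checksum-seed=32761 turns on rsync checksum caching on the server side
  push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';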


John



Re: [BackupPC-users] Backup to CD, DVD, USB Drive

2008-02-25 Thread John Pettitt
Damien Hull wrote:
 Here's my situation.
 1. Workstation
 2. Laptop
 3. 3 - 4 Servers

 Backup Server
 1. 750 GB of data storage ( - OS )
 2. Software RAID 1

 Data
 1. Currently have about 200 Gigs to backup
 2. May have an extra 100 Gigs or more to backup in the future

 I like the fact that BackupPC creates smaller backups with hard links. That
 will save me if my data storage needs are more than I expected, assuming the
 hard links work in my case.

 What I would like to do is off site storage. In the past I've backed up to 
 DVD and USB drive. Is this possible?

 I'm also looking into Bacula. 

   
BackupPC will let you create archives of any host (tar files) which you
can store offsite as you were doing before.

I've found the best offsite solution for me (currently a 782GB backup
volume) is to use dump to copy a snapshot of the backup file system to
one of a pair of external USB drives that I store off site.   You can
get a 1TB external drive for about $250 now, and having two that I rotate
gives me a backup of last resort (basically fire/natural disaster
insurance) for $500.
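
On FreeBSD/UFS2 the weekly run is basically the sketch below (the device,
mount point and filesystem names are placeholders for my setup):

  # mount the rotating USB drive
  mount /dev/da1s1d /mnt/offsite
  # level 0 dump; -L snapshots the live filesystem, -a auto-sizes,
  # -u records the run in /etc/dumpdates
  dump -0Lau -f /mnt/offsite/backuppc-`date +%Y%m%d`.dump /backup
  umount /mnt/offsite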

John



Re: [BackupPC-users] Copy pool one PC at a time

2007-12-28 Thread John Pettitt
Bryan Penney wrote:
 We have a server running BackupPC that has filled up its 2TB partition
 (96% full anyway).  We are planning on moving BackupPC to another server
 but would like to bring the history of backups over without waiting the
 extended period of time (days?) for the entire pool to copy.  Is there
 any way to copy pieces of the pool, maybe per PC, at a time?  This
 would allow us to migrate over the course of a few weeks without having
 days at a time with no backups.

   


dd the (unmounted) disk over to the new machine and then grow the filesystem
to fill the new disk - that's the way to go if your OS / filesystem supports
growing filesystems.

Failing that, the way to migrate over a few weeks is to set the old server
to not back up, set the new one to start backing up and just wait.   Keep
the old machine around as an archive/reference for as long as you feel
it's needed and let the new one build its own history over time.
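
If the new box happens to be Linux with ext3, the whole-disk copy is roughly
the sketch below (device names are placeholders and both filesystems must be
unmounted; UFS has its equivalent in growfs):

  # raw copy of the old pool partition onto the (larger) new one
  dd if=/dev/sdb1 of=/dev/sdc1 bs=1M
  # check the copied filesystem, then grow it to fill the new partition
  e2fsck -f /dev/sdc1
  resize2fs /dev/sdc1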

John



Re: [BackupPC-users] Copy pool one PC at a time

2007-12-28 Thread John Pettitt
Bryan Penney wrote:
 The pool consists of about 3.8 million files, so there are a lot of 
 small files.
   

My experience when I had to do this on one of my servers was that it's
not the pool itself that kills you, it's linking all the directory trees
from the pc directories.   The box I was using could link 500 files a
second but it still took five days to complete.

Unless you need to take the old server out of service I'd suggest just
disabling backups for all the hosts and then letting the new server
build its own history.  Wait three months, then take the old server down.

John
 On 12/28/2007 5:37 PM, dan wrote:
   
 then you should be able to rsync that accross in 4 or 5 hours.  is it 
 mostly large files or small files?  if it is large files then you 
 should be fine, but a ton of small files might be rough.  just give it 
 a shot, only way to know for sure :)

 On Dec 28, 2007 4:29 PM, Bryan Penney [EMAIL PROTECTED] wrote:

 Yeah we will have them plugged into the same gigabit switch.

 On 12/28/2007 5:17 PM, Daniel Denson wrote:
  Bryan Penney wrote:
  The original document I quoted was for an older version, but I found
  one for 2.9.1 and it still says it doesn't understand hardlinks:
 
  http://www.seas.upenn.edu/~bcpierce/unison//download/releases/unison-2.9.1/unison-manual.pdf
 
 
  I've copied a much smaller pool (150GB) using rsync when we first
  went to a production server.
 
  Both of the servers have 2GB of RAM.
  After I get the drives for the new server, I will try rsync.
  It will
  be interesting to see how long it takes to copy all of this
 data with
  all of those hardlinks.
 
  thanks for the help.
 
  Bryan
 
 
 
  On 12/28/2007 4:50 PM, dan wrote:
  no it wouldn't, but i thought it did.  is that statement for an older
  version?  it may just not handle it.  rsync should work if you have
  enough RAM
 
  On Dec 28, 2007 3:10 PM, Bryan Penney [EMAIL PROTECTED] wrote:
 
  In reading about Unison I found a statement in the Caveats and
  Shortcomings section that said Unison does not understand
 hard
  links
 
  If this is true, would Unison work in this situation?
 
  On 12/28/2007 2:28 PM, dan wrote:
   no, you will have to copy the entire 'pool' or 'cpool' over.  you
   could copy individual pc backups, BUT when backuppc nightly runs it
   will remove any hardlinks from the pool that are not needed
   elsewhere.  when you copy over pc backups after that, they will
   not use hardlinks and so your filesystem usage will go up a lot.  i
   would very much suggest you do it all in one shot.
  
   i know that time is against you on this and that 2TB even over
   gigabit is 5 hours so i would suggest that you rsync the files over
   once and leave your other machine up running backups, then once it has
   finished, turn backups off and rsync the source to the target again.
   then you will have the bulk of the data over and only have to pull
   changes.  i worry about the file count for 2TB being too much for
   rsync so consider Unison for the transfers.  In my reading i have
   found that though unison has the same issue as rsync (same algorithms)
   for a high number of files, it can handle more files in less memory.
  
   I have done this method to push about 800GB over and it worked well,
   but my backup server has 2GB of RAM and runs gigabit.
  
   maybe consider adding some network interfaces and channel bonding
   them.  i don't know if you have parts lying around but channel bonding
   in linux is pretty easy and you can aggregate each NIC's bandwidth to
   reduce that transfer time, though i suspect that your drives are not
   much faster than 1 gigabit NIC so you might not get much benefit on
   gigabit.
  
  
  
   On Dec 28, 2007 10:17 AM, Bryan Penney [EMAIL PROTECTED] wrote:
  
   We have a server running BackupPC that has filled up its 2TB
   partition (96% full anyway).  We are planning on 

Re: [BackupPC-users] Deleted Host

2007-12-13 Thread John Pettitt
Rob Ogle wrote:
 Thanks!

 I changed those settings, restarted the box, and ran BackupPC_nightly.
 My directory size hasn't changed.  Should I just wait a few days to see if
 the magic happens on its own?



   
Unless your machine changes *a lot* each week the directory size won't
change much.  This is because BackupPC pools the files.   When a new
full backup happens, if the file already exists it just gets linked, taking
no additional disk space - this is why the directory doesn't grow by a
huge amount with each full.   The flip side of this is that when a full gets
deleted, only the files that were unique to that full actually get
removed from the disk - if the same file is used in another full or an
incremental it stays in the pool and the disk space used is not recovered.

So unless this week's full is radically different from last week's full
you're not going to see a big space drop when last week's goes away.

John
 -Original Message-
 From: Les Mikesell [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 13, 2007 3:22 PM
 To: [EMAIL PROTECTED]
 Cc: backuppc-users@lists.sourceforge.net; 'John Pettitt'
 Subject: Re: [BackupPC-users] Deleted Host

 Rob Ogle wrote:
   
 Ok. So...
 If I want to keep one full per week and increment once each day between
 fulls- is this what my settings should be?

 FullPeriod: 6.97
 FullKeepCnt: 1
 FullKeepCntMin: 1
 FullAgeMax: 365

 IncrPeriod: 0.97
 IncrKeepCnt: 8
 IncrKeepCntMin: 8
 IncrAgeMax: 1
 IncrLevels: 1
 


 You'll end up with overlapping fulls, which will probably happen in any
 case since the old one wouldn't be deleted until the new one completes.
 I think that 8th day of incrementals will keep the old one around
 one more day.  Space-wise it probably doesn't matter as long as it is
 gone before the next full happens.

   




[BackupPC-users] Storage, pooling and backuppc - or why you don't always get space back when you expect to.

2007-12-13 Thread John Pettitt





There have been several threads lately about storage issues so I
figured a quick refresher on how unix like systems store files would
shed some light.

When you make a file on a unix system (eg Linux, FreeBSD, solaris etc)
what actually happens is the system allocates an inode (index node) on
the disk to represent the file. The inode contains the info about how
big the file is and where the actual blocks are on disk. The system
also creates a directory entry containing the name of the file and the
number of the inode for the file (plus some other stuff that's not
relevant here.) 

One of the really neat things about unix file systems is that more than one
directory entry can refer to the same inode. This is known as a hard
link. You end up with two file names both pointing at the same
data. When you delete a file you remove the directory entry that
points to the inode.  If another directory entry also points to that
inode nothing else happens. When the last entry that points to the
inode is removed the inode itself is cleared and the disk space used
becomes free.
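
You can watch this happen from a shell (the -li flags to ls print the inode
number and the link count):

  $ echo hello > a
  $ ln a b        # second directory entry pointing at the same inode
  $ ls -li a b    # same inode number, link count of 2 on both names
  $ rm a          # the data stays on disk, still reachable as b
  $ rm b          # last link gone - the inode and its blocks are freed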

BackupPC makes very heavy use of this. When a backup happens the file
data is stored, an inode is allocated for it, and two directory entries
are created that point to it. One is in the directory for the machine
it came from (pc/machine/nnn/whatever). The other is in the pool (or
cpool if you are using compression). Both point to the same inode and
the same blocks of data on disk. When you do another backup, if the same
file shows up, another directory entry is simply linked to the existing
inode. This is why new backups don't take much disk space - mostly they
just point to existing files that haven't changed.

When a backup goes away (because you deleted the machine or because it
ages out) the files gets deleted from the individual pc's directory.
This doesn't actually free any disk space because the inode is still
referenced by the pool and any other backups that have that file.
Eventually the last backup that references a file goes away. When
that happens the only reference to the file is in the pool. Once a day
BackupPC looks for files in the pool that only have one reference (ie
they are not pointed to by any backups just the one reference from the
pool) when it finds one it deletes it from the pool which frees the
inode (because this was the last reference to the inode) and so frees
the disk space.

This is why you can't use an SMB mount as a backuppc directory - it
doesn't support multiple links to the same file. It's also why you
can't split the backuppc pool over multiple filesystems - you need to
be able to point to the same blocks on disk and you can only do that
for inodes in the same filesystem.

John





Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Craig Barratt wrote:
 Rich writes:

   
 I don't think BackupPC will update the pool with the smaller file even
 though it knows the source was identical, and some tests I just did
 backing up /tmp seem to agree.  Once compressed and copied into the
 pool, the file is not updated with future higher compressed copies.
 Does anyone know something otherwise?
 

 You're right.

 Each file in the pool is only compressed once, at the current
 compression level.  Matching pool files is done by comparing
 uncompressed file contents, not compressed files.

 It's done this way because compression is typically a lot more
 expensive than uncompressing.  Changing the compression level
 will only apply to new additions to the pool.

 To benchmark compression ratios you could remove all the files
 in the pool between runs, but of course you should only do that
 on a test setup, not a production installation.

 Craig
   
The other point to keep in mind is that unless you actually need
compression for disk space reasons, leaving it off will often be faster
on a CPU-bound server.   Since there is a script provided
(BackupPC_compressPool) to compress the pool later, you can safely leave
compression off until you need the disk space.

John



Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Rich Rauenzahn wrote:


 I know backuppc will sometimes need to re-transfer a file (for instance, 
 if it is a 2nd copy in another location.)  I assume it then 
 re-compresses it on the re-transfer, as my understanding is the 
 compression happens as the file is written to disk.(?)  

 Would it make sense to add to the enhancement request list the ability 
 to replace the existing file in the pool with the new file contents if 
 the newly compressed/transferred file is smaller?  I assume this could 
 be done during the pool check at the end of the backup... then if some 
 backups use a higher level of compression, the smallest version of the 
 file is always preferred (ok, usually preferred, because the transfer is 
 avoided with rsync if the file is in the same place as before.)

 Rich

   
What happens is that the newly transferred file is compared against candidates
in the pool with the same hash value, and if one exists it's just
linked; the new file is not compressed.   It seems to me that if you
want to change the compression in the pool the way to go is to modify
the BackupPC_compressPool script, which compresses an uncompressed pool,
to instead re-compress a compressed pool.   There is some juggling that
goes on to maintain the correct inode in the pool so all the links
remain valid, and this script already does that.

John




Re: [BackupPC-users] how to configure storage?

2007-12-04 Thread John Pettitt
Matthew Metzger wrote:
 Hello David,

 thanks for the response. I did exactly what you suggested with RAID 1 
 and LVM them together to create a large drive. It was fairly easy to 
 accomplish with Ubuntu's installer.

 However, Les Stott brings up a great point about RAID 5. I would like 
 recovering from a failure to be possible. How I have it now makes it 
 possible, but perhaps it isn't as easy as the RAID 5 option. I also like 
 that RAID 5 gives me more space.

 I think that I'll have the time to experiment with setting both of them up.

 thanks for taking the time to respond!

 -Matthew

   

RAID 5 will give you more space at the expense of performance and at a
slightly increased risk of failure (google "raid 5 write hole") - I have
a similar system to yours, except I have six 500GB drives in three RAID 1
pairs which are then striped to make a 1.5TB volume.

RAID 5 rebuild takes a *long* time on most systems and will significantly
impact system performance if you do it on a live box.   RAID 1 rebuild
should be a bit faster as all that has to happen is a disk copy.

If you really have the time, try both and fail a drive in both configs
(pulling the power works well :-) and see what you have to do to bring it
back on line - go with the one you are most comfortable with.

In the end it's probably a religious decision (strongly held views not 
always supported by facts).

John



Re: [BackupPC-users] Memory leak?

2007-11-29 Thread John Pettitt




John Pettitt wrote:

  
John Pettitt wrote:
  
I'm getting an out of memory on large archive jobs - this in a box with
2GB of ram which makes me thing there is a memory leak someplace ...

Writing tar archive for host jpp-desktop-data, backup #150 to output
file /dumpdir/jpp-desktop-data.150.tar.gz
Out of memory during "large" request for 528384 bytes, total sbrk() is
1023391744 bytes at /usr/local/backuppc/lib/BackupPC/FileZIO.pm line
202.
Executing: /bin/csh -cf /usr/local/backuppc/bin/BackupPC_tarCreate -t
-h jpp-desktop-data -n 150 -s \* . | /usr/bin/gzip >
/dumpdir/jpp-desktop-data.150.tar.gz


jeeves# perl -v
This is perl, v5.8.8 built for i386-freebsd-64int

The servers PID is 550, on host jeeves.localnet, version 3.1.0, started
at 11/15 12:30.
Pool is 830.38GB comprising 3460758 files and 4369 directories (as of
11/25 14:38),


The host in question was 76GB and it contained several multi GB files.

Anybody got any ideas?
  
More info - it seems to be triggered by restoring lots of already
compressed files (in the two cases I've seen, one was an iTunes library
and the other was a photo archive of camera raw compressed .tif files).

I can reproduce it from the command line - the perl instance grows to
363 MB, stays there for a while, then grows to 1GB (the dlimit on this box)
and fails. Other restores with similar numbers of files run to
completion and it stays around 40-50MB.
  
  

Well, some more digging and a recompile of perl set to use the system
malloc() instead of its built-in malloc(), and the problem has gone
away (usemymalloc=n). Something the restore process does really
causes perl's malloc to go crazy on large restore jobs. Playing with
the write buffer size changed how the problem manifested (both smaller
- 65536 - and larger - 2^24, i.e. 16MB - buffers made it better but didn't
eliminate it). The 1MB default buffer seemed to be the worst.

There is a comment in the FreeBSD perl port's makefile about malloc
having problems with threads, but perl was not built with threads on my
box as far as I know and BackupPC does not use threads.
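
If anyone wants to check whether their perl is affected, the build setting is
easy to query, and the fix is to build with the system allocator (a sketch -
the Configure line only applies if you build perl from source):

  $ perl -V:usemymalloc      # usemymalloc='y' means perl's own malloc is in use
  $ sh Configure -des -Dusemymalloc=n   # build with the system malloc instead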

I just managed to get an archive of a 375GB backup set so I'm now much
happier.

John





[BackupPC-users] ZFS anybody?

2007-11-18 Thread John Pettitt






Has anybody tried BackupPC using a ZFS (RAIDZ) filesystem for the
pool?   It's currently a Solaris thing (and Linux?) but it's going to
be in FreeBSD 7.0, and I've been playing with a VMware system with a ZFS
file system and it looks to be pretty fast.   Does anybody have any
real world data?

John





[BackupPC-users] 4 x speedup with one tweak to freebsd server

2007-11-05 Thread John Pettitt




I'm posting this to the list so people searching for FreeBSD
optimizations will find it in the archives.

I finally got around to looking at why my FreeBSD server was only
backing up at about 2.5MB/sec using tar with clients with lots of small
files. 

Using my desktop (a Mac PRO) as the test subject backups were running
at about 2.5MB/sec or more accurately 25 files a second. The server
(FreeBSD 6.2 with a 1.5 TB UFS2 raid 10 on a 3ware card) was disk bound.

Running the ssh / tar combo from the command line directed to /dev/null
gave close to 25MB/sec confirming that it wasn't the client or the
network. I've done the normal optimization stuff (soft updates,
noatime). After a lot of digging I discovered vfs.ufs.dirhash_maxmem.

The UFS filesystem hashes directories to speed up access when there are
lots of files in a directory (as happens with the pool), however the
maximum memory allocated to the hash by default is 2 MB!   This is
way too small, and the hash buffers were thrashing on almost every pool
file open.

(for those who care sysctl -a | egrep dirhash will show the min, max
and current hash usage - if current is equal to max you've probably got
it set too small)

On my box setting the vfs.ufs.dirhash_maxmem to 128M using sysctl did
the trick - the system is using 72M for the whole pool tree (2.5
million files) and backups are now running at about 10 MB/sec and 100
files a second! (this is now compute bound on the server which is an
old P4 2.6 box).
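
In concrete terms it was just the lines below (128MB is simply what worked
for my pool size - watch dirhash_mem and adjust):

  # current usage vs. the cap
  sysctl vfs.ufs.dirhash_mem vfs.ufs.dirhash_maxmem
  # raise the cap to 128MB for this boot
  sysctl vfs.ufs.dirhash_maxmem=134217728
  # make it stick across reboots
  echo 'vfs.ufs.dirhash_maxmem=134217728' >> /etc/sysctl.conf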

John







Re: [BackupPC-users] Tiger rsync?

2007-06-01 Thread John Pettitt
James Ward wrote:
 My understanding is that the current Tiger rsync with the -E flag  
 will do everything needed to make useful backups with BackupPC?  Am I  
 wrong?

 Thanks in advance,

 James



   
The -E flag doesn't work with BackupPC - you can either use tar or
forgo the resource fork data.  I've been meaning to dig into the code to
see what -E does that is upsetting BackupPC but I haven't had time yet.
John



Re: [BackupPC-users] BackupPC running slow due to identical files.

2007-05-04 Thread John Pettitt
Doug Smith wrote:
 I currently use BackupPC to backup 34 servers (two different backup 
 servers).  We have this one development machine that takes more than a 
 day to backup (2100 minutes for 139 gigs on full backup).  We have other 
 servers with the same amount of data (some with more) that backup much 
 faster.  We believe the reason for this is that there are many CVS 
 checkouts which contain the same files. 

 Our theory is that it is taking this long due to comparison of all the 
 identical files when making links.  The hardware in this server matches 
 another machine with the same amount of data, but it only takes around 
 300-400 minutes.  I'm wondering if anyone has run into a similar problem 
 and might have suggestions.  I'm going to look and see if I can skip 
 backing up some of the data, but at this point I'm not sure if that is 
 an option.

 -Doug

   
You may just be looking at a lot of small files - on the other machines,
is the average file size similar to the slow one?   Also, what OS and
file system is the server running?

John



Re: [BackupPC-users] Filling Backups

2007-04-30 Thread John Pettitt
Jason M. Kusar wrote:
 Hi all,

 I'm currently using BackupPC to back up our network.  However, I have 
 one server that has over a terrabyte of data.  I am currently running a 
 full backup that has been running since Friday morning.  I have no idea 
 how much more it has to go.

 My question is if there is a way to avoid ever having to do a full 
 backup again.  Most of this data will never change.  Is there a way to, 
 for example, have every 7th incremental filled to become a full backup 
 instead of actually performing a full backup?
   


I have a similar situation with a file server that is my main photo 
archive - about 800GB of files most of which never change and 2-5GB 
added after each shoot I do.   I eventually settled on a full backup 
every 30 days and keeping 30 incremental backups and 6 full backups. 
The full backups take about 36 hours to run (mostly because the ReadyNAS 
is not very fast) - I can backup similar sized files much faster from my 
Mac Pro.

John



Re: [BackupPC-users] Filling Backups

2007-04-30 Thread John Pettitt
Les Mikesell wrote:
 Jason M. Kusar wrote:

   
 If you use rsync as the transport you never actually transfer unchanged 
 files again - you only make a pass over the files comparing block 
 checksums.  This takes some time/cpu at each end but not a lot of bandwidth.
   
   
 So that means that even a full backup uses the old backup as the basis 
 for the transfer?  I assume also then that files deleted off of the 
 server will not be deleted off the backup since the default rsync 
 options don't include any of the delete options.  Is there a way to fix 
 this?
 

 If you have the time for the file checksumming, just do rsync fulls 
 every time.  This will reconstruct the backup file tree without the 
 deleted files.

   
The problem with doing a full every time is that on the client rsync has 
to read all the data to do the checksums - this is a non-trivial load 
for many systems.

John



Re: [BackupPC-users] New user questions

2007-04-22 Thread John Pettitt
Johan Ehnberg wrote:
 VPNs are not a good idea in my case since they would cross over
 different organizations.

Huh?  Just because it's a VPN it doesn't have to be wide open.  A VPN
with firewall rules that only allow connections from the BackupPC server
to the rsyncd ports on the clients shouldn't create any issues (beyond
the ones you already have backing up from one organization to another).

John




Re: [BackupPC-users] New user questions

2007-04-22 Thread John Pettitt
Johan Ehnberg wrote:
 John Pettitt wrote:
   
 Johan Ehnberg wrote:
 
 VPN:s are not a good idea in my case since they would cross over 
 different organizations. 
   
 Huh?  Just because it's a VPN it doesn't have to be wide open.  A VPN 
 with firewall rules that only allow connections from the BackupPC server 
 to the rsyncd ports on the clients shouldn't create any issues (beyond 
 the ones you already have backup up from one organization to another)
 

 Thank you for your answer.

 True, but there are a few things which make it an inferior solution in my 
 case, even security-wise: a) it's open when it wouldn't need to be, b) 
 it is designed for level 2/3 networking, c) it's less flexible to 
 change. a) and b) make it less reliable than a (s)tunnel given the same 
 resources because you have to work your way up to level 4 yourself and 
 isolate different connections (organizations). c) just makes it harder 
 to adapt to change when all you need is one simple connection.

 In other words, the only reason I see for using a VPN is the lack of a 
 tweak in the configuration (SSH ports with rsync) so far? Or is a VPN 
 superior in some other sense, such as reconnecting transparently?

 This is my current view, of course, and I am open to any solution if it 
 is good. That's where this list may have more experience than me.

   
I think what you are looking for is something that should have been in
DNS (with the benefit of 20/20 hindsight) - the ability to query for a
host/service pair and get an IP/protocol/port back.  Unfortunately it's
not there.

I don't see this as a lack of a config tweak - the tweak is there, you just
have to use a per-host config.pl file to supply the -p flags.  Why not write
a script to generate the files, or write some perl code in the main
config.pl that evaluates to the right port number - after all, config.pl is
code that is evaluated at run time.
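
A per-host override for a non-standard ssh port is only a couple of lines -
something like the sketch below (port 2222 is just an example, and the stock
command strings are in the main config.pl):

  # per-host config file (location depends on your install)
  $Conf{RsyncClientCmd}        = '$sshPath -q -x -p 2222 -l root $host $rsyncPath $argList+';
  $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -p 2222 -l root $host $rsyncPath $argList+';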

John



Re: [BackupPC-users] Next 3.x.y release

2007-04-20 Thread John Pettitt
Craig Barratt wrote:
 Brendan writes:

   
 I want to upgrade to BackupPC 3.x.y, but am not game to upgrade to 
 3.0.0.  I'd like to at least wait until the first bug fix release (3.0.1).

 Does anyone know when that is due for release?
 

 When there are some bugs :).

 Seriously, though, the next release is probably going to be 3.1.0.

 Here's the current CVS ChangeLog since 3.0.0 - some fixes here and
 there, but also some new features.

 Craig

 * Added some performance improvements to BackupPC::Xfer::RsyncFileIO
   for the case of small files with cached checksums.
   
Whatever you did, it makes a big difference for me - I just grabbed the
latest code and a remote FreeBSD mail server host I back up dropped from
240 minutes to 154 minutes for a full rsync backup (524,961 files, 11,264MB).

John



[BackupPC-users] rsync vs Tar - big files

2007-03-31 Thread John Pettitt






Some stats using rsync vs using tar on a file system with big files:

Server: FreeBSD 6.2 box, 2.93GHz Celeron with 768MB RAM, RAID 10 on
a 3ware 9500S controller.
Client: Mac Pro, dual/dual Xeon 2.66, 6GB RAM.
Source drive: 94GB of media files, average file size 10MB, on a 250GB
SATA-300 drive.
Network: switched gig-e.

tar (baseline) 10.7 MB/sec

rsync 1st - 5.63 MB/sec (writing data on server)

rsync 2nd - 10.27 MB/sec (reading data for rsync checksum compare but
checksum cache not yet written)

rsync 3rd - 35.20 MB/sec (rsync with server side checksums in cache)

On all runs CPU was not the limiting factor.

rsync is a big win with large files on machines with enough memory and
CPU.







Re: [BackupPC-users] very slow backup speed

2007-03-28 Thread John Pettitt
Jason Hughes wrote:
 Evren Yurtesen wrote:
 I am saying that it is slow. I am not complaining that it is crap. I 
 think when something is really slow, I should have right to say it right?
   

 There is such a thing as tact.  Many capable and friendly people have 
 been patient with you, and you fail to show any form of respect.  
 You're the one with the busted system; you should be nicer to the 
 important people who are giving you their experience for free.



Ok folks - tomorrow I'm going to run some benchmarks on my FreeBSD box

I plan to run the following:

./bonnie++ -d . -s 3072 -n 10:10:10:10 -x 1

on the following filesystems  FreeBSD 6.2 ufs2

1) an 80 Gb IDE drive
2) a mirror set from 2 300 gb IDE drives
3) a raid 10 1.5 tb made from 6 WD 500gb sata 300 drives on a 3ware 
9500S-12 controller with battery backup

I'll run the benchmark sync, soft updates and async

This is the same benchmark run on the namesys page so it should give a 
reasonable comparison to RaiserFS.

As a data point, I have three backups running right now on the 9500S-12
RAID 10 partition - I'm seeing about 300 disk operations a second - when
I used to run this on one disk I would see about 100 disk ops a second,
and on an IDE RAID 5 (6 disks on a Highpoint controller) I'd see about
150 ops a second.

John






[BackupPC-users] Filesystem benchmarks

2007-03-28 Thread John Pettitt




Following the extended discussion of system benchmarks here are some
actual numbers from a FreeBSD box - if anybody has the time to run
similar numbers on linux boxes I will happily collate the data.

John

2.93 GHz Celeron D, 768 MB ram FreeBSD 6.2

bonnie++ -f 0 -d . -s 3072 -n 10:10:10:10 

Key:
IDE = 80 GB IDE, soft updates, atime on
IDE-R1-atime = 300GB RAID 1 (mirror) IDE, atime on, soft updates
IDE-R1 = 300GB RAID 1 (mirror) IDE, no atime, soft updates
IDE-R1-sync = 300GB RAID 1 (mirror) IDE, no atime, sync
IDE-R1-async = 300GB RAID 1 (mirror) IDE, no atime, async
SATA-R10 = 1.5TB RAID 10 SATA on 3ware 9500S-12, no atime, soft updates

v4 = reiserfs v4 from namesys on a 2.4GHz Xeon
ext3 = ext3 from namesys on a 2.4GHz Xeon

Version 1.93c         -----Sequential Output-----  --Sequential Input--  --Random--
Concurrency 1           --Block---   --Rewrite--       --Block---         --Seeks--
Machine        Size    K/sec  %CP   K/sec   %CP       K/sec   %CP        /sec   %CP
IDE              3G    40181   22   12106     6       36944    12        99.4     7
IDE-R1           3G    34511   20   14857     8       54482    19       121.7     9
IDE-R1-atime     3G    34426   21   14832     8       54402    18       122.1     9
IDE-R1-sync      3G     4904    8    4248     4       53750    18       103.7     8
IDE-R1-async     3G    34405   20   14877     8       53579    18       122.6     9
SATA-R10         3G    85375   53   25188    14       49751    17       454.1    33

v4               3G    37579   19   15657    11       41531    11       105.8     0
ext3             3G    35221   22   10987     4       41105     6        90.9     0

Version 1.93c         ------Sequential Create------   --------Random Create--------
                       -Create--  --Read---  -Delete-  -Create--  --Read---  -Delete-
files:max:min           /sec %CP  /sec  %CP  /sec %CP   /sec %CP  /sec  %CP  /sec %CP
IDE                      236   8  5071   73  9954  35    229   7  4354   68     + +++
IDE-R1                   460  19  5524   76 13606  86    301  10  4576   65     + +++
IDE-R1-atime             377  15  4368   63 11580  70    395  13  5061   78  11642 90
IDE-R1-sync              107  12  6027   86     + +++    112  12  5466   83  19644 82
IDE-R1-async             370  15  4609   66 18905  49    376  12  5583   84  11427 90
SATA-R10                 973  41  7365   98 13281  84   1079  38  5877   79  12875 85

v4                       570  39   746   17  1435  23    513  40   104    2    951 15
ext3                     221   8   364    4   853   4    204   7    99    1    306  2


Notes:
The 3ware card is somewhat bus limited because it's in a 32-bit PCI slot;
in a 64-bit slot I'd expect better sequential read performance.  This also
drives up the CPU numbers due to bus contention.

The read numbers for ext3 and reiserfs look suspect.

Stripe size for the 3ware RAID 10 is 256k.

All file systems were live and had other files on them - virgin file
systems may perform very differently.

Conclusion:
ufs2 is pretty similar to ext3, if not a little better, but not as fast
as reiser4.

sync is a big drag but async makes almost no difference over soft
updates.

atime/noatime doesn't make a whole lot of difference in this test.





Re: [BackupPC-users] Poor backup performance

2007-03-28 Thread John Pettitt
Evren Yurtesen wrote:
 John T. Yocum wrote:
   
 According to the 3ware CLI, the cache is enabled.
 

 I have the same problem with much slower speeds (since I don't use SATA
 or RAID it makes things worse).  My finding is that BackupPC is doing a
 lot of work while checking the files. Can you check if you are seeing
 extreme disk activity on your backup servers at backup time?

 Thanks,
 Evren

   
Yes, you will see a lot of disk activity - I'm seeing around 300 disk
operations a second with 4 backups running on a RAID 10 setup under
FreeBSD (hint: use gstat to see disk activity).

Backup speed is very much a function of file size - one of my backup
clients is all big files (media and HD images); it backs up at close to
10 MB/sec (from a Mac Pro over gigabit), while the system drive on the same
box backs up at around 2 MB/sec.

Which reminds me - I've had good results splitting large machines into
multiple backup jobs, each targeting a different drive on the client.
Once the full backups get out of phase with each other it spreads the
load out quite well.
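
The way I do the split is one BackupPC host entry per drive, all pointing at
the same physical client via ClientNameAlias - roughly the sketch below (host
names, shares and the rsync transport are just examples, not my exact setup):

  # hosts file: two logical hosts for one machine
  #   macpro-sys     0   backuppc
  #   macpro-media   0   backuppc

  # per-host config for macpro-sys
  $Conf{ClientNameAlias} = 'macpro.example.com';
  $Conf{RsyncShareName}  = ['/'];

  # per-host config for macpro-media
  $Conf{ClientNameAlias} = 'macpro.example.com';
  $Conf{RsyncShareName}  = ['/Volumes/Media'];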

John




Re: [BackupPC-users] archive host not showing up in web interface

2007-03-28 Thread John Pettitt
benjamin thielsen wrote:
 hi-

 i'm having what is probably a basic problem, but i'm not sure where  
 to look next in troubleshooting.  i've got a working installation,  
 currently backing up 4 machines, and decided to add an archive host,  
 but it's not showing up.  the log file indicates 2007-03-27 09:31:44  
 Added host freenas to backup list, and if i point my browser  
 directly to index.cgi?host=freenas , it works.

 is the host summary page the right place i should be expecting the  
 archive host to appear?  the documentation is a bit vague: In the  
 web interface, click on the Archive Host you wish to use. You will  
 see a list of previous archives and a summary on each.

 version is 2.1.2pl1 on ubuntu edgy.  here is that particular host's  
 config:

 $Conf{XferMethod} = 'archive';
 $Conf{ArchiveDest} = '/mnt/freenas-backup';

 thanks!
 -b
   
It should be in the select a host box but not on the host summary page.

John



Re: [BackupPC-users] rsync question - why do we check checksums etc?

2007-03-28 Thread John Pettitt
Evren Yurtesen wrote:

 Perhaps it could be a feature if checksum checks could be disabled
 altogether for situations where the bandwidth is cheap but CPU time is
 expensive?

 Thanks,
 Evren

   
That option is called tar   :-)

John



Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread John Pettitt
David Rees wrote:
 On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
   
 Lets hope this doesnt wrap around... as you can see load is in 0.1-0.01
 range.

  1 usersLoad  0.12  0.05  0.01  Mar 27 07:30

 Mem:KBREALVIRTUAL VN PAGER  SWAP PAGER
  Tot   Share  TotShareFree in  out in  out
 Act   260203592   144912 6868   12384 count
 All  2497845456  232789611800 pages
 

 It wrapped pretty badly, but let me see if I'm interpreting this right
 (I'm no BSD expert, either):

 1. Your server has ~250MB of memory.
 2. Load average during backups is only 0.1-0.01? Does BSD calculate
 load average differently than Linux? Linux calculates load average by
 looking at the number of runnable tasks - this means if you have a
 single process waiting on disk IO you will have a load average of 1.
 If BSD calculates the load average the same way, then that means your
 server is not waiting on disk, but waiting for the clients.

 What's the load like on the clients you are backing up?
   
Under FreeBSD if a task is blocked it doesn't contribute to load no 
matter what it's waiting for - so disk bound tasks don't add to load 
average.

John





Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread John Pettitt
Les Mikesell wrote:
 Evren Yurtesen wrote:
   
 Raid5 doesn't distribute disk activity - it puts the drives in 
 lockstep and is slower than a single drive, especially on small writes 
 where it has to do extra reads to re-compute parity on the existing data.
   
 I am confused, when a write is done the data is distributed in the disks 
 depending on the stripe size you are using. When you start reading the 
 file, you are reading from 5 different disks. So you get way better 
 performance for sure on reads.
 

 The stripe effect only comes into play on files large enough to span 
 them and not at all for directory/inode accesses which is most of what 
 you are doing. Meanwhile you have another head tied up checking the 
 parity and for writes of less than a block you have to read the existing 
 contents before the write to re-compute the parity.

   
Actually most (all?) RAID 5 systems I've met don't check parity on read
- they rely on the drive indicating a failed read.   However the write
splice penalty for RAID 5 can still be pretty high (seek, read, insert
new data, compute new parity, write data, seek, write parity) - and
that's on top of the fact that the OS did its own logical read, check
permissions, insert, write to update directories and the like.

John



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread John Pettitt
Evren Yurtesen wrote:
 I am using backuppc but it is extremely slow. I narrowed it down to disk
 bottleneck. (ad2 being the backup disk). Also checked the archives of
 the mailing list and it is mentioned that this is happening because of
 too many hard links.

   
[snip]

The basic problem is that backuppc is using the file system as a database - 
specifically, using the hard link capability to store multiple references 
to an object and the link count to manage garbage collection.  Many 
(all?) filesystems seem to get slow once you get into the range of 
millions of files with thousands of links.  Changing the way it works 
(say, to use a real database) looks like a very non-trivial task.  Adding 
disk spindles will help (particularly if you have multiple backups going 
at once) but in the end it's still not going to be blazingly fast.
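
As a minimal sketch of that hard-link-as-database idea - this is not
BackupPC's actual code, and the pool path, digest choice and file names
are invented for the example - pooling a file and reading back its
reference count might look like:

  #!/usr/bin/perl
  # Toy hard-link pooling: keep one copy of each unique file in a pool
  # keyed by content digest, and hard-link every backup's copy to it.
  # The inode's link count then doubles as the reference count.
  use strict;
  use warnings;
  use Digest::MD5;
  use File::Copy qw(copy);

  my $pool = "/var/backups/pool";    # hypothetical pool directory

  sub pool_file {
      my ($src, $dst) = @_;
      open my $fh, '<', $src or die "open $src: $!";
      binmode $fh;
      my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
      close $fh;
      my $pooled = "$pool/$digest";
      copy($src, $pooled) or die "copy: $!" unless -e $pooled;
      link $pooled, $dst or die "link: $!";    # later copies are just links
      my $nlink = (stat $pooled)[3];           # field 3 of stat is the link count
      print "$dst -> $pooled ($nlink links)\n";
  }

  pool_file("/home/user/report.doc", "/var/backups/pc1/0/report.doc");

Every additional backup of the same content adds one more link and costs
no extra data blocks; the expensive part is that each of those links is
another metadata update on the pool filesystem.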

John







Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread John Pettitt
Evren Yurtesen wrote:


 I know that the bottleneck is the disk. I am using a single ide disk to 
 take the backups, only 4 machines and 2 backups running at a time(if I 
 am not remembering wrong).

 I see that it is possible to use raid to solve this problem to some 
 extent but the real solution is to change backuppc in such way that it 
 wont use so much disk operations.

   


From what I can tell the issue is that each file requires a hard link - 
depending on your file system, metadata like directory entries, hard 
links etc. gets treated differently than regular data - on a BSD ufs2 
system metadata updates are typically synchronous, that is, the system 
doesn't return until the write has made it to the disk.  This is good 
for reliability but really bad for performance, since it prevents the 
out-of-order writes that can save a lot of disk activity.

Changing backuppc would be decidedly non-trivial - from eyeballing it, 
hacking in a real database to store the relationship between the pool 
and individual files would touch just about every part of the system.

What filesystem are you using, and have you turned off atime?  I found 
that makes a big difference.

John



Re: [BackupPC-users] Poor backup performance

2007-03-22 Thread John Pettitt
Have you checked that the 3ware actually has its cache enabled?  It has 
a habit of disabling it if the battery backup is bad or missing, and it 
will make a *huge* difference.

John

John T. Yocum wrote:
 I'm seeing terrible backup performance on my backup servers, the speed 
 has slowly degraded over time. Although, I have never seen speeds higher 
 than 1MB/s. (We have it set to do no more than 2 backups at a time.)

 Here is our setup:

 Our network is all 100Mb between servers, and switches, and 1Gb between 
 switches. So, there is enough network capacity for decent performance.

 The servers being backed up, all have either RAID1 or RAID5 arrays 
 consisting of 15K RPM SCSI drives. All RAID is done in hardware.

 The backup servers are using 7200RPM SATA drives, connected to 3ware 
 8500 or 9000 series controllers. Two of the backup servers are using RAID1, 
 and the other is using RAID5.

 Our backup servers are running CentOS 4.4, with ext3fs for the backup 
 partition. I have noatime enabled, and data=writeback set to hopefully 
 improve performance.

 On our backup servers, they are all showing a very high wait during 
 backups. Here's a screenshot from one of them 
 http://www.publicmx.com/fh/backup2.jpg. At the time it was doing two 
 backups, and a nightly.

 Any advice on improving performance, would be much appreciated.

 Thank you,
 John





[BackupPC-users] Hardware choices for a BackupPC server

2007-03-14 Thread John Pettitt






It's time to build a new server. My old one (a re-purposed Celeron D
2.9Ghz / 768M FreeBSD box with a 1.5 TB raid on a Highpoint card) has
hit a wall in both performance and capacity. gstat on FreeBSD shows
me that the Highpoint raid array is the main bottleneck (partly because
it's in a regular PCI slot and partly because it's really software raid
with crappy drivers) and CPU is a close second. I'm going to build
a new box with SATA disks and a better raid card.

So my question: has anybody done any actual benchmarks on BackupPC
servers?

Which OS & filesystem is best? (I'm leaning toward Ubuntu and
ReiserFS)

RAID cards that work well? Allow for on-the-fly expansion?

RAID mode? 5? 6? 10? 500GB drives seem to be the sweet spot in
the price curve right now - I'd like to get 1.5TB after RAID, so 6
drives in RAID 10.

I'm leaning towards a core 2 duo box with 2Gb of ram.

Any hardware to avoid?

John





Re: [BackupPC-users] Backing up large directories times out with signal=ALRM or PIPE

2007-02-24 Thread John Pettitt
Jason B wrote:
  close to 
 the same way as an incremental, except it's more useful, so to say?

 Incidentally, unrelated, but something that's been bugging me for a while: 
 subsequent full backups hardlink to older ones that have the true copy of the 
 file, correct? That means there is no meaningful way of deleting an older 
 backup, as the parent files may be lost, rendering future links useless?

   
Not quite - if they were symlinks that would be true, but BackupPC uses 
hard links - with a hard link the underlying inode (which describes the 
file data) persists as long as there is at least one link to it.  When 
old backups get purged, what really happens is they get unlinked (doing 
rm -rf on a numbered backup in the directory for an individual pc has 
the same effect).  If all the numbered backups that reference a file 
get removed then only a single link (from the pool tree) will remain.  
The nightly cleanup code looks for files with one link and removes 
them.  So you can safely delete older backups knowing that only files 
that are unique to that backup will disappear (and also knowing that you 
won't get the disk space back until after the nightly cleanup).
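
A stripped-down sketch of that nightly pass, assuming a hypothetical
pool path and ignoring everything the real cleanup code does beyond the
link-count test:

  #!/usr/bin/perl
  # Walk the pool and remove any file whose link count has dropped to 1,
  # i.e. no numbered backup references it any more.
  use strict;
  use warnings;
  use File::Find;

  my $pool = "/var/backups/pool";    # hypothetical pool location

  find(sub {
      return unless -f $_;
      my $nlink = (lstat $_)[3];     # field 3 of stat/lstat is the link count
      if ($nlink == 1) {
          print "removing orphan $File::Find::name\n";
          unlink $_ or warn "unlink $File::Find::name: $!";
      }
  }, $pool);

That is also why the disk space only comes back after the nightly run:
until then the pool copy still holds its last remaining link.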

John



Re: [BackupPC-users] BackupPC 3 load

2007-02-19 Thread John Pettitt
Nils Breunese (Lemonbit) wrote:
 I wrote:

 I recently upgraded to BackupPC 3 and although the idea of being able 
 to run the nightly jobs, the trashClean job and dump jobs all at the 
 same time is nice it seems it's a bit too much for our server (load = 
 10 at the moment). Can I make backup runs and nightly jobs mutually 
 exclusive again or do I have to adjust my WakeupSchedule to make sure 
 no backups are started in the first ten hours after the nightly jobs 
 start?

 Any ideas on how we can reduce the load? More/less nightly jobs? Less 
 concurrent backups? Other tips? We used to backup 15 servers onto one 
 BackupPC server, but now almost all of our backups are failing and the 
 load is through the roof. Can we just go and install BackupPC 2.1.3 
 again?

 Nils.
The #1 way to bring the load down, if you haven't already done it, is to 
mount the pool filesystem with noatime - BackupPC does a lot of 
reading of files, and if you have access-time stamping enabled there are 
lots of writes to update the access time stamp.  Remove that load and 
things get way faster.

John



[BackupPC-users] Wake on lan

2007-02-17 Thread John Pettitt





Has anybody played with using wake-on-lan with BackupPC?

I'm thinking the approach would be to define $Conf{PingCmd} as a script
that sends a wake-on-lan packet, waits a second, then does the regular
ping.  Then use the Pre and Post commands to disable and re-enable
sleeping ... (using pmset on OS X)
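
A rough sketch of such a wrapper, with the target's MAC address
hard-coded as an assumption - this is just an illustration of the idea,
not a tested $Conf{PingCmd} replacement:

  #!/usr/bin/perl
  # Hypothetical $Conf{PingCmd} wrapper: send a wake-on-lan magic packet,
  # give the machine a moment to wake up, then run the normal ping.
  use strict;
  use warnings;
  use Socket;

  my $host = shift or die "usage: $0 host\n";
  my $mac  = "00:11:22:33:44:55";    # MAC of the target machine (assumption)

  # Magic packet: 6 x 0xFF followed by the MAC repeated 16 times.
  (my $hex = $mac) =~ s/://g;
  my $packet = ("\xFF" x 6) . (pack('H*', $hex) x 16);

  socket(my $sock, AF_INET, SOCK_DGRAM, getprotobyname('udp'))
      or die "socket: $!";
  setsockopt($sock, SOL_SOCKET, SO_BROADCAST, 1) or die "setsockopt: $!";
  send($sock, $packet, 0, sockaddr_in(9, INADDR_BROADCAST)) or die "send: $!";
  close $sock;

  sleep 1;                           # give the box a second to wake up

  exec "ping", "-c", "1", $host;     # fall through to the regular ping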

Does that sound like a reasonable approach?

John





Re: [BackupPC-users] OSX extended atribute problems

2007-01-30 Thread John Pettitt
Brien Dieterle wrote:
 The .filename stuff is called AppleDouble and I think it preserves 
 metadata as well in there.

 Are you also getting a ton of xfer errors as below? (I did)
 /usr/bin/tar: /tmp/tar.md.Fif1QE: Cannot stat: No such file or directory
 /usr/bin/tar: /tmp/tar.md.b7XePQ: Cannot stat: No such file or directory
 /usr/bin/tar: /tmp/tar.md.QMKDdp: Cannot stat: No such file or directory

 I decided the extended attributes were not really necessary for 
 disaster recovery and I found you can turn them off via an env variable:

 $Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
     . ' /usr/bin/env LC_ALL=C COPY_EXTENDED_ATTRIBUTES_DISABLE=true'
     . ' $tarPath -c -v -f - -C $shareName+'
     . ' --totals'
     . ' --one-file-system';

 That cleared up all my tar xfer errors on 10.4

 brien

What do I miss if I don't back up extended attributes?

John



Re: [BackupPC-users] every hour backup

2007-01-27 Thread John Pettitt
Phong Nguyen wrote:
 Hi all,

 I just would like to know if it is possible to make an incremental
 backup of a host every hour.
 I don't know how to set the value for $Conf{IncrPeriod}  since it juste
 take a value counted in days.
 Thanks a lot

 Phong Nguyen

 Axone S.A.
 Geneva / Swiss

   


The value is in days but will happily accept numbers less than 1 - just 
set it to a number slightly less than 1/24.  You'll also need to mess 
with the blackout period (make sure there isn't one).  You will also 
need to account for full backups, which may take more than an hour to 
run.  Oh, and make sure you've configured enough simultaneous jobs to 
allow your backup to always run.
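
For illustration, the relevant settings might look something like this
(the numbers are examples, not recommendations):

  # IncrPeriod is in days, so a value a little under 1/24 gives roughly
  # hourly incrementals.  Clear the blackout periods and make sure the
  # server wakes up every hour, with enough slots for the job to start.
  $Conf{IncrPeriod}      = 0.04;        # about 57 minutes
  $Conf{BlackoutPeriods} = [];          # no blackout window
  $Conf{WakeupSchedule}  = [0 .. 23];   # wake up every hour
  $Conf{MaxBackups}      = 4;           # enough simultaneous jobs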

What problem are you actually trying to solve?  Keep in mind that you 
may have open-file contention issues as well.

John



Re: [BackupPC-users] bare metal ?

2007-01-16 Thread John Pettitt
[EMAIL PROTECTED] wrote:

 Aloha,

 Is there any hope for adding bare-metal restore capabilities to 
 BackupPC for Windows clients?


 Thanks,

 Richard


Bare metal restore is tricky.  There are two things needed to make it 
work: 1) a backup of all the files on the box, and 2) a toolset for 
putting it back.

Both are problematic.  A full backup means getting the Windows registry 
hives, which means some sort of open file manager or snapshot driver is 
needed - as far as I know there are no open-source OFM products.  
Putting the data back means reconstructing the filesystem - support in 
Linux/FreeBSD for NTFS is not good, and as far as I'm aware BackupPC 
does not capture all of the extended access control file attributes.

John



Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?

2007-01-01 Thread John Pettitt
Paul Harmor wrote:


 I'm running a 1.3Gig Duron, 512 DDR, LVM and 2x160GB drives, on Ubuntu 6.10.

 I have only 2 machines (at the moment) being backed up, but every time
 the backups start, the server system slows to an UNUSEABLE crawl, until
 I can, slowly, start top, and renice the 4 backuppc processes.

 Any way to make them start nicer, to begin with?
   

If you nice the backuppc parent process when you originally launch it 
(man nice to see how), its children will inherit the nice value.  You 
might also want to look at setting the number of backups BackupPC will 
run at once to 1, so that it does the machines sequentially - if you only 
have one disk spindle in your backup pool filesystem it may actually run 
faster that way.
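
The knob for the one-at-a-time suggestion is $Conf{MaxBackups}; a minimal
example (the second setting only matters if users kick off backups from
the CGI themselves):

  # Run only one backup at a time so a single-spindle pool isn't seeking
  # between two dump streams at once.
  $Conf{MaxBackups}     = 1;
  $Conf{MaxUserBackups} = 1;   # user-requested backups are counted separately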

John




Re: [BackupPC-users] Using backuppc to back up a freebsd server

2006-12-20 Thread John Pettitt
Garith Dugmore wrote:
 Hi,

 I've found backing up any linux server's data works using tar and ssh 
 but when trying to back up a freebsd server it gives the following error:

   
[snip]

I've had good results with rsync on FreeBSD boxes - I chose rsync 
because the box is remote and it minimizes network traffic, and in any 
case I've had no problems with it.  Then again my BackupPC server is 
also a FreeBSD box, so that may help.
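
A minimal per-host rsync-over-ssh setup might look like the following;
the share names are just examples, and the client command shown is close
to the stock default:

  # In the per-PC config file for the FreeBSD box:
  $Conf{XferMethod}     = 'rsync';
  $Conf{RsyncShareName} = ['/etc', '/home', '/usr/local/etc'];
  $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';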

The snapshot feature of ffs is worth exploring (see 
/usr/src/sys/ufs/ffs/README.snapshot on your nearest FreeBSD box).

John





[BackupPC-users] Large deployment questions

2006-11-24 Thread John Pettitt


We're considering using BackupPC at work (~50 desktops, mixed OSX, 
Windows, Linux growing to over 100 machines within 6 months) - I've been 
using it at home (Win, OSX, FreeBSD) for over a year and I'm really 
happy with it.   

Anyway, the point of this post:

Does anybody have any advice on best practices for larger deployments?

What do you back up on each desktop? (at home I back up everything)

We're thinking rsync over ssh for the OSX and Linux boxes and smb for 
Windows - good plan?
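
For illustration, the per-PC overrides for that split might look roughly
like this (share names and credentials are invented for the example):

  # osx-or-linux-host.pl - rsync over ssh:
  $Conf{XferMethod}       = 'rsync';
  $Conf{RsyncShareName}   = ['/Users', '/etc'];

  # windows-host.pl - smb:
  $Conf{XferMethod}       = 'smb';
  $Conf{SmbShareName}     = ['C$'];
  $Conf{SmbShareUserName} = 'backuppc';
  $Conf{SmbSharePasswd}   = 'secret';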

Any reason not to start with 3.0.0 beta2?

What issues are going to bite us unexpectedly?


John Pettitt
