Re: [BackupPC-users] Yet another filesystem thread

2011-07-03 Thread Holger Parplies
Hi,

C. Ronoz wrote on 2011-06-30 12:54:44 +0200 [Re: [BackupPC-users] Yet another 
filesystem thread]:
 [...]
  - How stable is XFS?

unless I missed something, I'd say XFS is perfectly stable - more stable than
reiserfs in any case. The only thing that makes me hesitate with that statement
is Les' remark "XFS should also be OK on 64-bit systems" - why only on 64-bit
systems? [Of course, for really large pools, a 64-bit system would be
preferable with XFS.]

 [Bowie Bailey asked on 2011-06-29 10:43:28 -0400:]
 How much memory do you have on the backup server?  What backup method
 are you using?
 The server has 1GB memory, but a pretty powerful processor.

A powerful processor doesn't help even marginally with memory problems.
See http://en.wikipedia.org/wiki/Thrashing_(computer_science)

 I found out that BackupPC is ignoring my Excludes though, [...]

This is because your syntax is wrong.

 $Conf{BackupFilesOnly} = {};
 $Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};

While the comments in config.pl state

# This can be set to a string, an array of strings, or, in the case
# of multiple shares, a hash of strings or arrays.

that is actually incorrect. A hash of strings makes no sense. In fact, Perl
would turn your example into a hash with '/proc' and '/pub' as keys and
'/blaat' and '/tmp' as respective values - certainly not what you want.
Turn your config value into an array (use '[]' instead of '{}'), and you
should be fine. You'll notice that the examples correctly don't include a
hash of strings.
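
As a hedged illustration (plain Perl, outside of BackupPC, just to show how
the braces are parsed), together with a minimal corrected version:

    # What Perl builds from the braces: two key/value pairs,
    # not a list of four excludes.
    my $wrong = {'/proc', '/blaat', '/pub', '/tmp'};
    # $wrong is { '/proc' => '/blaat', '/pub' => '/tmp' }

    # The corrected array form:
    $Conf{BackupFilesExclude} = [ '/proc', '/blaat', '/pub', '/tmp' ];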

Better yet, use a full hash of arrays. That is easier to read and maintain,
because it makes explicit which excludes apply to which shares:

$Conf{BackupFilesExclude} = { '/' => [ '/proc', '/blaat', '/pub', '/tmp' ] };

The leading '/' on your excludes is just fine, contrary to what has been said.
It anchors them to the transfer root. Without the slashes, you would also be
excluding e.g. /home/user/pub and /home/user/tmp, just as two examples of
things you might *not* want to exclude (well, you might even want to exclude
/home/user/tmp, but really *any* file or directory named 'tmp'? It's your
decision; you can do whatever you want, even things like 'tmp/' (only
directories), '/home/**/tmp/' (only directories somewhere under /home), or
'/home/*/tmp/' (only directories immediately in some user's home directory).
See the rsync man page for details). Just note that if your share name is
*not* '/', you'll need to remove that part from the excludes (e.g. for a share
name '/var', to exclude /var/tmp you'll need to specify '/tmp' as the exclude,
not '/var/tmp', which would try to exclude /var/var/tmp).
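
As a hedged illustration of that last point (the shares and excludes here are
hypothetical):

    $Conf{BackupFilesExclude} = {
        '/'    => [ '/proc', '/blaat', '/pub' ],
        '/var' => [ '/tmp' ],   # relative to the share: excludes /var/tmp
    };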

 This could explain why the run takes longer, but it should still finish
 within an hour?

On the first run (or whenever something is added that does not yet exist in
the pool), compression might slow things down considerably, especially if your
exclude of /proc is not working. Just consider how long compressing a large
file (say 1 GB) takes in comparison to how long reading the file takes. The
host status page should tell you more about how much data your backups
contain and how much of that was already in the pool.

 You can just delete the directory and remove the test host from your
 hosts file.
 That will only remove the hardlinks, not the original files in the pool?

What you mean is correct, but you should note that there is nothing more
"original" about the hardlinks from the pool to the content than about those
from the pc directory to the same content. They are all hardlinks and are
indistinguishable from each other. Every normal file on your Linux system is
a hardlink to some content in the file system; it's just that for files with
only a single hardlink we don't usually think much about it (and for files
with more than one hardlink we don't usually *need* to think much about it -
it just works as intended).

 The space should be released when BackupPC_nightly runs.  If you want to
 start over quickly, I'd make a new filesystem on your archive partition
 (assuming you did mount a separate partition there, which is always a
 good idea...) and re-install the program.

I believe you don't even need to reinstall anything. BackupPC creates most of
the directories it needs, probably excluding $TopDir, which will exist in your
case, because it's the mount point, but which will need to have the correct
permissions (user=backuppc, group=backuppc, perms=u=rwx,g=rx,o= - but check
your installation values before unmounting the existing FS). Reinstalling
BackupPC may or may not be the easier option, depending on your preferences.
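
A minimal sketch of preparing the mount point by hand (assuming the common
backuppc user/group and $TopDir = /var/lib/backuppc - check your own
installation's values first):

    mount /dev/sdb1 /var/lib/backuppc
    chown backuppc:backuppc /var/lib/backuppc
    chmod u=rwx,g=rx,o= /var/lib/backuppc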

 I ran BackupPC_nightly ('/usr/share/backuppc/bin/BackupPC_nightly 0 255')

You shouldn't have. Hopefully, there were no BackupPC_link processes running
during that time. BackupPC_nightly *should* contain a comment something like

# *NEVER* RUN THIS BY HAND WHILE A BackupPC DAEMON IS RUNNING. IF YOU NEED AN
# IMMEDIATE NIGHTLY RUN, TELL THE BackupPC DAEMON TO LAUNCH ONE INSTEAD:
#
# BackupPC_serverMesg
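
If memory serves, the full server message to trigger an immediate nightly run
is (a hedged sketch - run it as the backuppc user):

    BackupPC_serverMesg BackupPC_nightly run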

Re: [BackupPC-users] Yet another filesystem thread

2011-07-03 Thread Doug Lytle
Holger Parplies wrote:
 unless I missed something, I'd say XFS is perfectly stable - more stable than
 reiserfs in any case. The only thing that makes me hesitate with that
 statement is Les' remark "XFS should also be OK on 64-bit systems" - why only
 on 64-bit systems?

I thought the same thing.  I'm running XFS (on LVM) on a 32-bit Mandriva
install with a 1.1TB pool, and have been for years with no issues.

Doug


-- 
Ben Franklin quote:

Those who would give up Essential Liberty to purchase a little Temporary 
Safety, deserve neither Liberty nor Safety.




Re: [BackupPC-users] Yet another filesystem thread

2011-07-03 Thread Les Mikesell
On 7/3/11 12:31 PM, Holger Parplies wrote:

 unless I missed something, I'd say XFS is perfectly stable - more stable than
 reiserfs in any case. The only thing that makes me hesitate with that
 statement is Les' remark "XFS should also be OK on 64-bit systems" - why only
 on 64-bit systems? [Of course, for really large pools, a 64-bit system would
 be preferable with XFS.]

It may not apply to all distributions, but at least Red Hat/CentOS use 4k 
stacks 
in the 32-bit kernel builds and XFS isn't happy with that.


Concerning bare metal recovery, how do you plan to do that? Restoring
to the target host requires an installed and running system; restoring
to a naked new disk mounted somewhere requires a plan for how to do that
with BackupPC, as well as some preparation (partitioning, file systems)
and some modifications afterwards (boot loader, /etc/fstab, ...).
BackupPC is not designed to handle all of that alone, though it will
obviously handle a large part of the task if that is how you want to
use it.

As long as you know the approximate sizes of the partitions you need, you can 
use a Linux livecd to boot on a new machine, make the partitions and 
filesystems, mount them somewhere, then ssh an appropriate BackupPC_tarCreate 
command to the backuppc server and pipe to a local tar to drop it in place. 
But, it's a lot of grunge work and may take some practice.
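
A hedged sketch of such a pipeline (hostnames, paths and the mount point are
placeholders; check the BackupPC_tarCreate documentation for the exact
options):

    # On the new machine, with the target filesystems mounted under
    # /mnt/newroot; -n -1 means "the most recent backup".
    ssh backuppc@backupserver /usr/share/backuppc/bin/BackupPC_tarCreate \
        -h clienthost -n -1 -s / . | tar -xpf - -C /mnt/newroot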

This project: http://rear.sourceforge.net/ seems to have all the missing pieces
to save a description of the disk layout and make a bootable ISO that will
reconstruct it, but it would take some work to integrate the parts with
BackupPC.

-- 
Les Mikesell
 lesmikes...@gmail.com




Re: [BackupPC-users] Yet another filesystem thread

2011-07-03 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 19:31:14 +0200 on Sunday, July 3, 2011:
 
  While the comments in config.pl state
  
  # This can be set to a string, an array of strings, or, in the case
  # of multiple shares, a hash of strings or arrays.
  
  that is actually incorrect. A hash of strings makes no sense. In fact, Perl
  would turn your example into a hash with '/proc' and '/pub' as keys and
  '/blaat' and '/tmp' as respective values - certainly not what you want.
  Turn your config value into an array (use '[]' instead of '{}'), and you
  should be fine. You'll notice that the examples correctly don't include a
  hash of strings.
  

I think by "hash of strings", the following is meant:
$Conf{BackupFilesExclude} = { 'share1' => 'exclude-path1',
                              'share2' => 'exclude-path2',
                              ...
                            };

This is just a simpler case of the hash of arrays that you illustrate
below. While I have not tried that syntax, I imagine that is what the
documentation refers to. Of course, the wording is not terribly clear
except maybe to those who already know what is going on (and
understand Perl)...


  Better yet, use a full hash of arrays. That is easier to read and maintain,
  because it makes explicit which excludes apply to which shares:
  
  $Conf{BackupFilesExclude} = { '/' => [ '/proc', '/blaat', '/pub', '/tmp' ] };
  
  The leading '/' on your excludes is just fine, contrary to what has been
  said. It anchors them to the transfer root. Without the slashes, you would
  also be excluding e.g. /home/user/pub and /home/user/tmp, just as two
  examples of things you might *not* want to exclude (well, you might even
  want to exclude /home/user/tmp, but really *any* file or directory named
  'tmp'? It's your decision; you can do whatever you want, even things like
  'tmp/' (only directories), '/home/**/tmp/' (only directories somewhere
  under /home), or '/home/*/tmp/' (only directories immediately in some
  user's home directory). See the rsync man page for details). Just note
  that if your share name is *not* '/', you'll need to remove that part from
  the excludes (e.g. for a share name '/var', to exclude /var/tmp you'll
  need to specify '/tmp' as the exclude, not '/var/tmp', which would try to
  exclude /var/var/tmp).
  



Re: [BackupPC-users] Yet another filesystem thread

2011-06-30 Thread C. Ronoz
 What filesystem should I use? It seems ext4 and reiserfs are the only viable 
 options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while 
 xfs and btrfs seem to be very unstable.

 - How stable is XFS?
 - Is reiserfs (much) better at hard-link removal?
 - Is reiserfs (much) less stable compared to ext4?

 BackupPC seems to recommend reiserfs although many sites say it's still an 
 unstable file system that does not have much lifespan left.

 My first back-up has been taking 12 hours for a small server and it's still 
 processing... there's only a few gigabytes of data on the Linux machine. 
 There should be more than enough power as rsnapshot back-ups always were 
 done in quick fashion. Even Bacula was able to do back-ups in less than 10 
 minutes.

If you are backing up a few gigabytes and it is taking 12 hours, then
ext3 is not your problem.  It may be slower than some of the other
options, but it is not THAT much slower.  My largest backup is 300GB and
a full backup takes 15 hours.  Both the client and server are running ext3.

How much memory do you have on the backup server?  What backup method
are you using?
The server has 1GB memory, but a pretty powerful processor. Although load seems 
pretty disastrous too: http://images.codepad.eu/v-ISmSn6.png

I found out that BackupPC is ignoring my Excludes though, while I have a 15GB 
/pub partition. 
This could explain why the run takes longer, but it should still finish within 
an hour? 
Rsnapshot runs were always lightning fast, network is 1gbit. 

$Conf{BackupFilesOnly} = {};
$Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};

You can just delete the directory and remove the test host from your
hosts file.
That will only remove the hardlinks, not the original files in the pool?
Running 'du -h --max-depth=2' on /var/lib/backuppc (cpool, pc) does not
complete within 20 minutes, so I can't show a listing.

The space should be released when BackupPC_nightly runs.  If you want to
start over quickly, I'd make a new filesystem on your archive partition
(assuming you did mount a separate partition there, which is always a
good idea...) and re-install the program.

I ran BackupPC_nightly ('/usr/share/backuppc/bin/BackupPC_nightly 0 255') after
removing all but 1 small host, but there are still lots of files left.
root@backuppc:~# df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/sda1 19909500   1424848  17473300   8% /
tmpfs   513604 0513604   0% /lib/init/rw
udev508852   108508744   1% /dev
tmpfs   513604 0513604   0% /dev/shm
/dev/sdb1206422036  24155916 171780500  13% /var/lib/backuppc
root@backuppc:~#



Re: [BackupPC-users] Yet another filesystem thread

2011-06-30 Thread Richard Shaw
On Thu, Jun 30, 2011 at 5:54 AM, C. Ronoz chro...@eproxy.nl wrote:
 I found out that BackupPC is ignoring my Excludes though, while I have a 15GB 
 /pub partition.
 This could explain why the run takes longer, but it should still finish 
 within an hour?
 Rsnapshot runs were always lightning fast, network is 1gbit.

 $Conf{BackupFilesOnly} = {};
 $Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};

You have to set up the excludes to match the transfer method you're
using. In the case of rsync, I believe they must be relative to the
backup root.

Here's a snippet from my config. Since I mainly back up home
directories, I exclude stuff like caches and other folders that don't
need to be backed up.

$Conf{BackupFilesExclude} = {
  '*' => [
    '.cache',
    '.thumbnails',
    '.gvfs',
    '.xsession-errors',
    '.recently-used.xbel',
    '.recent-applications.xbel',
    '.Private',
    '.mozilla'
  ]
};

Notice there are no '/' on the front of my excludes. Until I set
things up like this, my excludes didn't work.

Richard



Re: [BackupPC-users] Yet another filesystem thread

2011-06-30 Thread C. Ronoz
On Thu, Jun 30, 2011 at 07:56:59AM -0500, Richard Shaw wrote:
 On Thu, Jun 30, 2011 at 5:54 AM, C. Ronoz chro...@eproxy.nl wrote:
  I found out that BackupPC is ignoring my Excludes though, while I have a 
  15GB /pub partition.
  This could explain why the run takes longer, but it should still finish 
  within an hour?
  Rsnapshot runs were always lightning fast, network is 1gbit.
 
  $Conf{BackupFilesOnly} = {};
  $Conf{BackupFilesExclude} = {'/proc', '/blaat', '/pub', '/tmp'};
 
 You have to set up the excludes to match the transfer method you're
 using. In the case of rsync, I believe they must be relative to the
 backup root.
 
 Here's a snippet from my config. Since I mainly back up home
 directories, I exclude stuff like caches and other folders that don't
 need to be backed up.
 
 $Conf{BackupFilesExclude} = {
   '*' => [
     '.cache',
     '.thumbnails',
     '.gvfs',
     '.xsession-errors',
     '.recently-used.xbel',
     '.recent-applications.xbel',
     '.Private',
     '.mozilla'
   ]
 };
 
 Notice there are no '/' on the front of my excludes. Until I set
 things up like this, my excludes didn't work.
 
 Richard
 

I see how you use excludes to exclude back-ups of files with specific 
extensions, but then how do I now exclude specific paths per host?

I am planning to back up about 15 Linux webservers with different roles. Some
hosts have specific archives that take up much space but do not require
back-ups, e.g. one server hosts 50GB of downloads in /pub; another server
hosts 10GB of internal downloads (installers, Windows service packs) in
/sites/site/httpdocs/downloads.

Does your set-up back up /proc as well? This seems to make doing a bare metal 
recovery harder, or should I not strive for such a solution?


Re: [BackupPC-users] Yet another filesystem thread

2011-06-30 Thread Richard Shaw
On Thu, Jun 30, 2011 at 9:09 AM, C. Ronoz chro...@eproxy.nl wrote:
 I see how you use excludes to exclude back-ups of files with specific 
 extensions, but then how do I now exclude specific paths per host?

Well, it's not file specific. Some of those are directories.

On a per-host basis, you just override the system excludes and set up
the excludes for that particular host.
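
For example (a hedged sketch; the per-host file name and location depend on
your installation), a host-specific config file might contain just:

    # pc/webserver1.pl - overrides the global excludes for this host only
    $Conf{BackupFilesExclude} = {
        '/' => [ '/proc', '/tmp', '/pub' ],
    };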



 I am planning to back up about 15 Linux webservers with different roles. Some
 hosts have specific archives that take up much space but do not require
 back-ups, e.g. one server hosts 50GB of downloads in /pub; another server
 hosts 10GB of internal downloads (installers, Windows service packs) in
 /sites/site/httpdocs/downloads.

Well, if your backup root / backup share is literally ROOT, '/', then
your excludes should just be relative to that, i.e. 'pub', not
'/pub'. Remember that BackupPC is essentially clientless, so if you
need to do more advanced excludes you need to look at the rsync
documentation, not the BackupPC documentation (other than how to
correctly put it in the config file).


 Does your set-up back up /proc as well? This seems to make doing a bare metal 
 recovery harder, or should I not strive for such a solution?

Well, two things here:
1. I think rsync is smart enough to skip things like /proc, /dev, and such.
2. Although it can be done, BackupPC is not the right tool to do bare
metal restores. It's designed to back up files, not file systems.

There are plenty of mailing list threads, blogs, and wikis on how to
work around that, but that's exactly what it is: a workaround.

Richard



Re: [BackupPC-users] Yet another filesystem thread

2011-06-30 Thread Les Mikesell
On 6/30/2011 9:09 AM, C. Ronoz wrote:

 I see how you use excludes to exclude back-ups of files with specific 
 extensions, but then how do I now exclude specific paths per host?

 I am planning to back up about 15 Linux webservers with different roles. Some
 hosts have specific archives that take up much space but do not require
 back-ups, e.g. one server hosts 50GB of downloads in /pub; another server
 hosts 10GB of internal downloads (installers, Windows service packs) in
 /sites/site/httpdocs/downloads.

If you control such things, it is much better to manage the sources and
configurations of what is installed on production servers with some sort
of version control system, instead of letting anything that needs to be
backed up change on the server itself - other than perhaps upload areas
and databases holding live data.  This becomes increasingly important as
your services start to spread over load-balanced farms.


 Does your set-up back up /proc as well? This seems to make doing a bare metal 
 recovery harder, or should I not strive for such a solution?

/proc appears as a mount point, so you can use the --one-file-system
option with rsync and explicitly add only the filesystems you want to
back up.  If you do that, you do have to be careful to track layout
changes that might move things you want to a newly added filesystem,
though.
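
A hedged sketch of what that can look like with the rsync method (exact
defaults vary by BackupPC version; the share names are placeholders):

    # Back up only these filesystems, explicitly listed:
    $Conf{RsyncShareName} = [ '/', '/home', '/var' ];
    # Keep rsync from crossing into other mounted filesystems:
    push @{ $Conf{RsyncArgs} }, '--one-file-system';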

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Yet another filesystem thread

2011-06-29 Thread Bowie Bailey
On 6/29/2011 10:31 AM, C. Ronoz wrote:
 What filesystem should I use? It seems ext4 and reiserfs are the only viable 
 options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while xfs 
 and btrfs seem to be very unstable.

 - How stable is XFS?
 - Is reiserfs (much) better at hard-link removal?
 - Is reiserfs (much) less stable compared to ext4?

 BackupPC seems to recommend reiserfs although many sites say it's still an 
 unstable file system that does not have much lifespan left. 

 My first back-up has been taking 12 hours for a small server and it's still 
 processing... there's only a few gigabytes of data on the Linux machine. 
 There should be more than enough power as rsnapshot back-ups always were done 
 in quick fashion. Even Bacula was able to do back-ups in less than 10 minutes.

If you are backing up a few gigabytes and it is taking 12 hours, then
ext3 is not your problem.  It may be slower than some of the other
options, but it is not THAT much slower.  My largest backup is 300GB and
a full backup takes 15 hours.  Both the client and server are running ext3.

How much memory do you have on the backup server?  What backup method
are you using?

-- 
Bowie



Re: [BackupPC-users] Yet another filesystem thread

2011-06-29 Thread Jeffrey J. Kosowsky
C. Ronoz wrote at about 16:31:33 +0200 on Wednesday, June 29, 2011:
  What filesystem should I use? It seems ext4 and reiserfs are the only viable 
  options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while 
  xfs and btrfs seem to be very unstable.
  
  - How stable is XFS?
  - Is reiserfs (much) better at hard-link removal?
  - Is reiserfs (much) less stable compared to ext4?
  
  BackupPC seems to recommend reiserfs although many sites say it's still an 
  unstable file system that does not have much lifespan left. 

As far as I know BackupPC doesn't recommend *any*
filesystem. Different users may favor one or another based on personal
experience. If anything, lately I have heard more criticisms of Reiserfs than 
recommendations.

  
  My first back-up has been taking 12 hours for a small server and it's still 
  processing... there's only a few gigabytes of data on the Linux machine. 
  There should be more than enough power as rsnapshot back-ups always were 
  done in quick fashion. Even Bacula was able to do back-ups in less than 10 
  minutes.

12 hours for a few gigabytes sounds like something is wrong.

  
  Also, I removed a few backups via the shell script from the
  wiki... but I still see many references to the old test hosts? How
  can I clean up the entire installation? I don't mind removing all
  data, I just don't want to waste back-up space on previously
  backed-up servers that have been removed.

You can just delete the directory and remove the test host from your
hosts file.



Re: [BackupPC-users] Yet another filesystem thread

2011-06-29 Thread Les Mikesell
On 6/29/2011 9:31 AM, C. Ronoz wrote:
 What filesystem should I use? It seems ext4 and reiserfs are the only viable 
 options. I just hate the slowness of ext3 for rm -rf hardlink jobs, while xfs 
 and btrfs seem to be very unstable.

 - How stable is XFS?
 - Is reiserfs (much) better at hard-link removal?
 - Is reiserfs (much) less stable compared to ext4?

 BackupPC seems to recommend reiserfs although many sites say it's still an 
 unstable file system that does not have much lifespan left.

The backuppc discussions you found on filesystems were probably from 
long ago.  At this point I'd probably use ext4 on a linux distribution 
that included it.  XFS should also be OK on 64-bit systems.

 My first back-up has been taking 12 hours for a small server and it's still 
 processing... there's only a few gigabytes of data on the Linux machine. 
 There should be more than enough power as rsnapshot back-ups always were done 
 in quick fashion. Even Bacula was able to do back-ups in less than 10 minutes.

While there are some differences in filesystem speeds in certain
operations, it isn't on that scale.  Also note that if you are using
rsync-based backups with the --checksum-seed option, the fulls may be
faster after the first two runs have completed.  Until then, a full
involves reading every file on both the target and the server and
uncompressing on the server side to recompute the rsync checksums.
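
If you want to try that, 32761 is the seed value the BackupPC documentation
uses (a hedged sketch; it requires an rsync with checksum-caching support):

    push @{ $Conf{RsyncArgs} },        '--checksum-seed=32761';
    push @{ $Conf{RsyncRestoreArgs} }, '--checksum-seed=32761';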

 Also, I removed a few backups via the shell script from the wiki... but I
 still see many references to the old test hosts? How can I clean up the
 entire installation? I don't mind removing all data, I just don't want to
 waste back-up space on previously backed-up servers that have been removed.

The space should be released when BackupPC_nightly runs.  If you want to
start over quickly, I'd make a new filesystem on your archive partition 
(assuming you did mount a separate partition there, which is always a 
good idea...) and re-install the program.

-- 
   Les Mikesell
lesmikes...@gmail.com
