Re: [BackupPC-users] rsync never starts transferring files (but does something)

2012-11-15 Thread Les Mikesell
On Thu, Nov 15, 2012 at 11:22 AM, Markus unive...@truemetal.org wrote:

 The client is a quad core 2.8 GHz CPU, 8 GB RAM and 1.6 TB of many many
 small files in a RAID0. CPUs 75-95% idle most of the time, load around
 0.3. No swap used.

 rsync 3.0.7 on the client, rsync 3.0.6 on the server. Negotiated
 protocol 28 (hmm).

Backuppc uses its own rsync protocol implementation on the server side
so it can work with the compressed/pooled archive and protocol 28 is
the latest it knows.   This is unfortunate in the 'huge number of
files' case because it has to transfer the entire file list before
starting the comparisons.

 BackupPC server is backing up to a 8 TB iSCSI RAID5 drive.

Raid5 has performance issues with writes, but you haven't gotten that far yet.

 So, when I initiate the full backup manually, and then attach with
 strace to the rsync process on the BackupPC server I see activity, but
 it's so slow. Roughly it takes about 5 seconds for 10 lines of
 read/select/write. On the client I can see via lsof that rsync is going
 through the filesystem and the many mini-files. On the client rsync
 consumes like 0.x-1% CPU and the virt mem size is growing slowly (80M
 after 10 minutes, in the beginning 40M; that's what top says).

 Load on the client goes from 0.3 to about 1.8 a few minutes after I
 start the full backup. But CPUs stay at around 70-90% idle.

That's mostly normal - the client has to walk the entire directory
tree and send it to the server.  But it should happen at a reasonable
speed.

 I'm guessing the client is slowing down after so many hours because
 rsync has used up all memory as it is caching the list of files to be
 transferred?

Yes, both the client and the server will load the directory listing into memory.

 What I don't understand yet is - why does rsync on the client tell rsync
 on the server about the files it is currently going through? I mean, it
 is not even transferring them yet!

The server gets the client's directory list, then walks it comparing
to the existing data, telling the client to send any differences.

 I see these select/read/write outputs
 in strace attached to the rsync server process, and as it appears this
 means Hello, I'm the client, and I'm telling you which files I go
 through now on the client filesystem. But why does the rsync server
 process even need to know about that at this point in time?

 I'm guessing it can't be the comparing mechanism of BackupPC, because
 there are no files to compare yet! There are 0 backups with 0 files.

That's just the way the rsync protocol works - or did up until protocol 30.

 Any suggestions on what I could do or what could go wrong here?

You are probably pushing the client or server into swap which will
slow things down horribly.

If there are top-level directories segregating the files sensibly you
could split it into multiple 'shares'.   Otherwise, you could switch
the xfer method to tar. Also, I would try something like 'time
find / |wc -l' on the target system just to see how long it takes to
walk the directory and how many files are there.
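
As a rough illustration of both suggestions (the /data path and the share
list below are placeholders, not anything from the original setup):

    # on the client: time the directory walk and count the files
    time find /data | wc -l

    # in the BackupPC host config, back up a few top-level trees as
    # separate rsync shares instead of one giant '/' share:
    $Conf{RsyncShareName} = ['/data/projects', '/data/mail', '/home'];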

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] rsync never starts transferring files (but does something)

2012-11-15 Thread Les Mikesell
On Thu, Nov 15, 2012 at 12:28 PM, Steve lepe...@gmail.com wrote:
 Create as many client names as you like, eg: client-share1,
 client-share2, client-share3, client-share4 (replace client with the
 real host name and share with the share names). In each
 pc/client-xxx/config.pl file, use:

 $Conf{ClientNameAlias} = 'client';

 (where client is the real host name). Add any other client-specific
 settings (eg: share name). This way all 4 virtual clients will refer to
 the same real client. All backups happen independently.

 That's what I meant.  Good luck!

The real win from this approach is that you can skew the days the
fulls and incrementals run for different parts of the filesystem.
Once you get past the problem of getting the directory entries in RAM,
full runs have to go back and read all of the file data for the block
checksum comparisons.   Incrementals will quickly skip any items where
the directory timestamps and length match.
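
For instance (a hedged sketch; 'client' and the share path are
placeholders), each pc/client-shareN/config.pl could contain something
like:

    $Conf{ClientNameAlias} = 'client';            # all aliases hit the real host
    $Conf{RsyncShareName}  = ['/data/share1'];    # this virtual client's slice

The fulls for the different client-shareN entries can then be staggered
simply by kicking off their first full backups on different days.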

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Restores only directories and not directory content

2012-11-14 Thread Les Mikesell
On Wed, Nov 14, 2012 at 12:53 PM, Gary Roach gary719_li...@verizon.net wrote:

 Well it finally happened. I got home from vacation, fired up the systems
 and one of the hard drives was trashed. Two days of recovery attempts
 didn't work so I reformatted and reinstalled the Debian Squeeze system.
 I re-established the rsyncd connection to the backup system and started
 a restore from the GUI. The next morning I found all of the proper
 directory structure installed but no data in the directories. I then
 tried to create a tar file. The file created held only the directory
 structure. The data is all there in a full backup of the system. I can
 even open the files on the backup disk. Anyone know what could cause
 this problem. I found one other person that had this problem and solved
 it by switching off the proxy service in the browser. This didn't work
 for me.

I can't think of anything that would cause a problem like that, but
can you make a tar image with the BackupPC_tarCreate command line tool
on the server and restore that on the client machine?

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Restores only directories and not directory content

2012-11-14 Thread Les Mikesell
On Wed, Nov 14, 2012 at 3:19 PM, Gary Roach gary719_li...@verizon.net wrote:

 I tried to try BackupPC_tarCreate and gave up. First the file wouldn't
 run until I appended ./ in front, not obvious to me at least.

That's normal.  Your search PATH for executables does not include the
current directory by default.

 Then I got
 the following:

  BackupPC_tarCreate -n 169 -h the backup computer with the data -s
 / > target.tar
  This returned - Wrong user: my userid is 0, instead of 112 (backuppc)
  Please su backuppc first.

Again, normal and expected...  You need to run as the correct user.

  su backuppc
  $
  $BackupPC_tarCreate -n 169 -h the backup computer with the data
 -s / > target.tar
  sh: 2: cannot create target.tar: Permission denied.

OK, now that you are the backuppc user, you need to direct the output
to a location where the backuppc user has permission to create a file.
 If you don't have one, as root create a directory to hold it on a
filesystem with space and 'chown backuppc /path/to/dir'.

  sh: 2: BackupPC_tarCreate: not found.

And that's the same thing as before.  Give the full path to the
program or use ./ if it is in your current directory.

-- 
   Les Mikesell
   lesmikes...@gmail.com



Re: [BackupPC-users] Restores only directories and not directory content

2012-11-14 Thread Les Mikesell
On Wed, Nov 14, 2012 at 5:42 PM, Gary Roach gary719_li...@verizon.net wrote:

 OK, I did the following:
 mkdir /tar
 chown backuppc:backuppc /tar     (a place to store the tar file)
 cd /usr/share/backuppc/bin       (to get to the BackupPC_tarCreate script)
 su backuppc                      (logged in as backuppc)
 then ran:
 $./BackupPC_tarCreate -n 169 -h localhost -s / > /tar/target.tar

 The script ran, printed out the help menu and created an empty target.tar

Note that the shell creates the file specified for output redirection
before starting the program.

 There were 2 comments about deprecated commands in the script but no
 errors.

You don't get the help message unless the command had an error.  You
omitted the 'files/directories'  that the help message should have
shown as part of the command.   '.' will work for everything - or '/'

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Restores only directories and not directory content

2012-11-14 Thread Les Mikesell
On Wed, Nov 14, 2012 at 6:35 PM, Gary Roach gary719_li...@verizon.net wrote:
 
 Was this from a backup of localhost?


 No. Localhost is the system where backuppc resides and where the stored
 backup files are kept. The actual computer system that I am trying to
 restore - let's call it super2
 for want of a name. The backuppc file for super2 is #169. The actual
 data is kept on a second hard drive on localhost that is accessed thru
 /var/lib/backuppc/ . I found the instructions a bit confusing at this
 point as to what  -h should be. It seemed a bit redundant to be calling
 out the computer that is running the commands.

That should be the hostname as known in the backuppc setup - that is,
what you put in the hosts file.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Restores only directories and not directory content

2012-11-14 Thread Les Mikesell
On Wed, Nov 14, 2012 at 8:35 PM, Gary Roach gary719_li...@verizon.net wrote:
 On 11/14/2012 04:01 PM, Les Mikesell wrote:

 You don't get the help message unless the command had an error.  You
 omitted the 'files/directories'  that the help message should have
 shown as part of the command.   '.' will work for everything - or '/'


 Could you elaborate on my omission of the 'files/directories' and is the
 use of '/' for the top
 directory wrong?

 Assuming that super2 is a computer that was backed up to backup #169,
 that /etc /root /home and /var were backed up, that my backup files are
 on localhost and I wish to create a tar file for backup #169 to be
 transferred manually to super2 then what is wrong with the code string:

 ./BackupPC_tarCreate -n 169 -h super2 -s / > /tar/target.tar

Your / is the argument to the -s option - which should be the share
defined in the backuppc config for the host.  Then after that you
need a pattern to match the files/directories you want to include in
the tar output.   So add a space and another / at the end of the
command (before the > redirection) and it should work.
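
Putting that together, the corrected command would look like:

    ./BackupPC_tarCreate -h super2 -n 169 -s / / > /tar/target.tar

where the first / is the share name and the second / is the file pattern
to include.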

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] want to take backup in human readable format in BackupPC

2012-11-09 Thread Les Mikesell
On Fri, Nov 9, 2012 at 5:51 AM, niraj_vara
backuppc-fo...@backupcentral.com wrote:
 Hi

 I want to change the backed-up data format of BackupPC.  Meaning, when we
 take a backup with BackupPC it stores the files as 'data' files.

 Is it possible to change it to another format?

 i.e.

 [root@fLocal Folders]# file fSent
 fSent: data

 It's showing the file type as 'data'.  I want to change it to tar or
 tar.gz - is that possible?

You can turn off compression with $Conf{CompressLevel} = 0; if you
don't care about the storage space.  Or just run BackupPC_zcat on each
file that you want to uncompress when/if you need it.   You can
generate a standard tar output with BackupPC_tarCreate, but that is to
extract copies - it doesn't replace the archive storage and pooling.

Depending on why you think you need this, one of the fuse filesystem
modules that others have developed might work.
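
For example (a hedged sketch - the host name, backup number and paths are
placeholders):

    # extract a whole tree from backup #12 of host 'client' as a plain tar file
    BackupPC_tarCreate -h client -n 12 -s / /etc > client-etc.tar

    # or uncompress a single stored (f-mangled) file from the pc/ tree to stdout
    BackupPC_zcat /var/lib/backuppc/pc/client/12/f%2f/fetc/fhosts > hosts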

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Afraid of the midnight monster?

2012-11-08 Thread Les Mikesell
On Thu, Nov 8, 2012 at 9:16 AM, Jimmy Thrasibule
thrasibule.ji...@gmail.com wrote:
 Hi,

 I wonder if it is possible to wake BackupPC at midnight. In the
 documentation or on the Internet, the examples all run from 1 to 23.

 $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
 15, 16, 17, 18, 19, 20, 21, 22, 23];


 Can I put '0' in the `WakeupSchedule` list?


Yes, but since BackupPC_nightly runs on the first entry (1), omitting
midnight might be intentional to give the running backups some extra
time to complete.  You might want to rotate the list a bit to make the
nightly run happen later in the morning when everything should be
finished.
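
For example, rotating the list so the first entry (and therefore
BackupPC_nightly) lands at 5am, while still waking up every hour including
midnight, would look like:

    $Conf{WakeupSchedule} = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
                             18, 19, 20, 21, 22, 23, 0, 1, 2, 3, 4];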

  And what about
 `$Conf{BlackoutPeriods}`?

I think 0 should work there too.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Performance reference (linux --(rsync)- linux)

2012-11-07 Thread Les Mikesell
On Wed, Nov 7, 2012 at 10:42 AM, Timothy J Massey tmas...@obscorp.com wrote:

 Cassiano Surek c...@surek.co.uk wrote on 11/06/2012 05:03:44 AM:


  Of course, how could I have missed that! I did find it now, thanks Michał.
 
  Last full backup (of 100 odd Gb) took slightly north of 10 days to
  complete. Incremental, just over 5 days.


 I did not see if you mentioned how *many* files you had.  Are we talking 
 100,000 files or 10 Million files?  That will make a *big* difference in 
 performance.

This will be especially true if the full directory listing sent before
comparison starts fills RAM and pushes into swap.  A couple of other
performance-killers that haven't been mentioned yet:

Running too many backups concurrently - on some systems 2 might be too many.

Having large files that change frequently.  This forces the server to
uncompress the base copy and merge changes, which will thrash the disk
since they are on the same drive.   This may be unavoidable, but it
might be worth excluding things like log files you don't care about,
database files that won't be consistent anyway, etc.
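
Both of those are easy to tune in config.pl; a hedged example (the exclude
paths are only illustrations):

    $Conf{MaxBackups} = 1;     # run only one backup at a time
    $Conf{BackupFilesExclude} = {
        '*' => ['/var/log', '/var/lib/mysql'],   # skip logs and live DB files
    };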

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] backuppc 3.2.1 incremental SMB backup question

2012-11-02 Thread Les Mikesell
On Fri, Nov 2, 2012 at 10:45 AM, Jim Stark jstarkjhea...@gmail.com wrote:

 backuppc documentation says: For SMB and tar, BackupPC uses the
 modification time (mtime) to determine which files have changed since the
 last lower-level backup. That means SMB and tar incrementals are not able to
 detect deleted files, renamed files or new files whose modification time is
 prior to the last lower-level backup.

 The question: is there any way to have backuppc incrementally backup such
 files, perhaps based on the fact that they do not exist in the full backup
 set?

No, rsync/rsyncd are the only xfer methods where the target side has
any knowledge of the previous backup's contents.  However, if you do
more frequent smb fulls, backuppc will pool the duplicate content
anyway, so you might do that instead of incrementals.
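
A hedged example of that for an smb host's config: shortening the full
period so a full comes due roughly every day takes the place of the
incrementals:

    $Conf{FullPeriod} = 0.97;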

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] backuppc 3.2.1 incremental SMB backup question

2012-11-02 Thread Les Mikesell
On Fri, Nov 2, 2012 at 11:37 AM, Jim Stark jstarkjhea...@gmail.com wrote:

 2) You can simply run full backups

 Performance hit? I understand pooling will avoid creating multiple copies,
 but cost in backup time?

You need to do fulls at least once a week or so to keep the tree
structure sane - and if you have time to do them one night you can
probably do them every night, unless you have a large set of targets
and need to skew the full run days.

 I guess I'm mostly surprised that the incremental backup does not realize
 that there are files in the source that do not exist in the destination and
 back them up based on that, regardless of modtime.

It is running smbclient in tar mode.   It doesn't know anything about
previous runs.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] backuppc 3.2.1 incremental SMB backup question

2012-11-02 Thread Les Mikesell
On Fri, Nov 2, 2012 at 2:02 PM, Jim Stark jstarkjhea...@gmail.com wrote:
 My concern is that to verify that a file can be pooled, it first has to be
 brought over from machine A (true?).

 Thus, even if it is ultimately pooled and only costs 1 hard link in the backup
 for the host, there is all of the full backup overhead on machine A and all
 of the network traffic to get it to the backuppc machine to determine this.
 For a full backup of e.g. 100s of GB, this would be a huge overhead. In the
 case where only a few files have been added to A since the last full backup,
 that is almost completely redundant. I think I need to find a different
 solution :)

You don't have to guess about this.  Look at the 'duration' column of
your last full run.  If that time wasn't a problem, the next one
probably won't be either.  Using rsync is usually only critical for
cases of limited network bandwidth.

 Perhaps naive, but I imagined the SMB incremental logic might have included
 something like:

   if ( cannot find this file from A in the host's backup tree ) {
       put it there, i.e. act as if the modtime were recent enough that it
       should be backed up this time.
   }

No, there is no such chat with the client, except in the rsync/rsyncd
xfers.  With those, the client sends the directory contents and the
server can then request the files that differ.

 Maybe there is something about smbclient in tar mode (not familiar with
 this) which prevents this?
 Maybe it is a prohibitively expensive test?

Probably more a matter of being patterned after the tar xfer where the
client sends only a stream back with no option to chat at all.  You
might try doing a cifs mount into the backuppc (or other) server and
doing an rsync backup of the mount point.   Incrementals will then be
able to skip matching directory entries.  However, fulls will probably
be much less efficient because the entire contents will be read over
the network mount for a data comparison that is being done in the
wrong place.
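
A rough sketch of that workaround (the share name, mount point and
credentials file are placeholders, and the mount is assumed to be on the
BackupPC server itself):

    # mount the Windows share on the BackupPC server
    mount -t cifs '//winbox/C$' /mnt/winbox -o credentials=/etc/backuppc/winbox.cred

    # then in the winbox host config, back the mount point up locally via rsync
    $Conf{XferMethod}      = 'rsync';
    $Conf{ClientNameAlias} = 'localhost';
    $Conf{RsyncShareName}  = ['/mnt/winbox'];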

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC suddenly stopped sending email

2012-10-31 Thread Les Mikesell
On Wed, Oct 31, 2012 at 8:30 PM, Sam Barham
backuppc-fo...@backupcentral.com wrote:
 I have several servers running BackupPC.  They were created at different 
 times and in different data centers.  They are all on Debian Lenny or Debian 
 Squeeze.  They've all worked just fine until around the 17th of October when 
 they all simultaneously stopped sending email to me.  Backups are still 
 working, and I can see in the logs where BackupPC_Nightly is calling 
 BackupPC_sendEmail every night, but no email is coming through.  If I 
 manually run BackupPC_sendEmail -u, it works and sends me a test email.

 There's no evidence in the system logs of any attempt to send email, 
 successful or not, so it looks to me like BackupPC_sendEmail is failing to 
 send anything.

 One other thing that I noticed is that recently it's got to the point where 
 running the backups is taking almost a full 24 hours to complete, which could 
 be interfering with things, except that one of the hosts has been full for a 
 while (and sending me an email every night complaining that it can't do 
 anything), and even that one has stopped sending email.

 I didn't notice until now, and the logs have all rolled over far enough that 
 I don't have logs for the time before the email stopped sending.

 Does anyone have any idea of what the problem might be, or more places I can 
 look for clues?

Normally with backuppc, no news is good news - that is, it doesn't
send email if there are no problems to report.   Maybe the backups are
just all up to date.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Need help with interpreting error

2012-10-29 Thread Les Mikesell
On Sat, Oct 27, 2012 at 10:45 AM, Brad Morgan b-mor...@concentric.net wrote:
 I have a Windows 7 host that has started returning the following error
 during backups:

 2012-10-27 09:34:19 incr backup started back to 2012-10-25 19:00:01  (backup
 #32) for share C$
 2012-10-27 09:34:58 DumpPostShareCmd returned error status 13... exiting
 2012-10-27 09:35:00 DumpPostUserCmd returned error status 13... exiting
 2012-10-27 09:35:00 Got fatal error during xfer (DumpPostUserCmd returned
 error status 13)
 2012-10-27 09:35:05 Backup aborted (DumpPostUserCmd returned error status
 13)

 The host specific config file is empty (0 bytes) and the config.pl contains:

 $Conf{DumpPreUserCmd} = undef;
 $Conf{DumpPostUserCmd} = undef;
 $Conf{DumpPreShareCmd} = undef;
 $Conf{DumpPostShareCmd} = undef;
 $Conf{RestorePreUserCmd} = undef;
 $Conf{RestorePostUserCmd} = undef;
 $Conf{ArchivePreUserCmd} = '/bin/bash /home/backuppc/archive_rm $archiveloc
 $HostList';
 $Conf{ArchivePostUserCmd} = undef;

 What does this error really mean?

It doesn't make much sense to get an error where a command was not
specified.  The per-host config files are allowed to be in several
different places - what does the web interface show when you edit that
host's config?
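
A quick way to hunt for a stray setting is to grep every place a config
fragment might live; the exact directories depend on the packaging, so
these paths are only the common ones:

    grep -r 'DumpPostShareCmd\|DumpPostUserCmd' \
        /etc/BackupPC /etc/backuppc /var/lib/backuppc/pc 2>/dev/null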

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Need help with interpreting error

2012-10-29 Thread Les Mikesell
On Mon, Oct 29, 2012 at 10:50 AM, Brad Morgan b-mor...@concentric.net wrote:
 2012-10-27 09:34:58 DumpPostShareCmd returned error status 13... exiting
 2012-10-27 09:35:00 DumpPostUserCmd returned error status 13... exiting
 2012-10-27 09:35:00 Got fatal error during xfer (DumpPostUserCmd
 returned error status 13)
 2012-10-27 09:35:05 Backup aborted (DumpPostUserCmd returned error
 status 13)

 The host specific config file is empty (0 bytes) and the config.pl
 contains:

 $Conf{DumpPreUserCmd} = undef;
 $Conf{DumpPostUserCmd} = undef;
 $Conf{DumpPreShareCmd} = undef;
 $Conf{DumpPostShareCmd} = undef;
 $Conf{RestorePreUserCmd} = undef;
 $Conf{RestorePostUserCmd} = undef;
 $Conf{ArchivePreUserCmd} = '/bin/bash /home/backuppc/archive_rm
 $archiveloc $HostList';
 $Conf{ArchivePostUserCmd} = undef;

 It doesn't make much sense to get an error where a command was not
 specified.
 The per-host config files are allowed to be in several different places -
 what
 does the web interface show when you edit that host's config?

 I agree! The web interface shows that nothing is in the host config file and
 the permissions on the file with 0 bytes is as expected.


Since nothing else makes sense, I guess I would try changing it in the
web interface, then check the file to see if it did what you expected,
then change it back to undef.   There must be something you aren't
seeing somewhere.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Centos Backuppc. Can I install php without breaking Backuppc

2012-10-26 Thread Les Mikesell
On Fri, Oct 26, 2012 at 6:26 AM, robp2175
backuppc-fo...@backupcentral.com wrote:
 I want to have a couple of php web pages on my backuppc server, but I will 
 need to install php in order to do this. However, I am hesitant to install it 
 for fear that this may break my BackupPC installation. Can anyone offer any 
 definitive information regarding what the effect of installing the php 
 packages would be?


It is not a problem in general - you just need to make sure that you
don't clobber the apache config section for backuppc when adding other
applications.Most things that are packaged as rpms or debs for
linux distributions split out their own parts of the apache
configuration into separate files that are included by the main one to
reduce the chance of conflicts.   In RH/Centos these are in
/etc/httpd/conf.d/ - not sure about debian/ubuntu but I think they use
a similar scheme under /etc/apache2/conf.d.
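
So on a RH/CentOS box, something along these lines should leave the
backuppc fragment untouched (the exact package and file names may differ):

    yum install php
    ls /etc/httpd/conf.d/      # the backuppc .conf and php.conf should coexist here
    service httpd restart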

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Any procedure for using 2 external harddrives

2012-10-25 Thread Les Mikesell
On Thu, Oct 25, 2012 at 10:38 AM, Timothy J Massey tmas...@obscorp.com wrote:

 Also, keep in mind that the original question shows a fundamental lack of 
 understanding in the way BackupPC works.  BackupPC is *NOT* a simple 
 replacement for a tape drive.  Swapping the entire pool weekly will undermine 
 the way BackupPC works:  it's not designed for that.  And using preconceived 
 ideas of traditional backup to shape the way BackupPC works is not a path 
 that leads to success.


As far as backuppc goes, it should work fine if you simply swap the
whole archive.  It will just catch up in the same way it would if you
had simply shut the server down for the duration of time that the
swapped-in disk had not been present.   However, you need to
understand the tradeoff that since everything is not redundant, a disk
failure will lose the data not present on the other drive(s) and you
may not always have the thing you want to restore available.  You'd
need to stop the backuppc server and unmount the partition before
removing it.  And if I were doing it I would probably rotate at least
3 drives to be sure they were never all in the same place at the same
time.  If you swap at the end of a week, letting everything catch up
over the weekend, you'd have a pretty good chance of always having a
recent copy handy.
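
A minimal sketch of the swap itself, assuming the pool is mounted at
/var/lib/backuppc and the init script is called backuppc (both vary by
distribution):

    service backuppc stop
    umount /var/lib/backuppc
    # physically swap the external drive, then:
    mount /var/lib/backuppc
    service backuppc start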

 1) Constantly mirroring the pool and occasionally breaking the mirror to take 
 it off-site.  (Option 1 above).  A variation of this would be to take the 
 pool down and make a copy of it, then bring it back up (or use LVM snapshots 
 to reduce the downtime).  This variation is left as an exercise for the 
 reader:  there are a *lot* of unexpected details in that answer:  problems 
 with file-level copy will most likely require block-level copies, LVM 
 snapshots present performance and reliability issues, etc.

This works, but has the down side of taking most of a day to re-sync
the replaced mirror with the drives being too busy to be useful for
much else during that time.

 3) Have two BackupPC systems that *both* back up the same hosts in parallel.  
 A variation of this would have the BackupPC servers in different physical 
 locations.  This variation just about requires the use of rsync, and even 
 then is not always practical (if the data changes per day/week/whatever are 
 too vast, the bandwidth between them too limited, or there is simply too much 
 data to be able to back it all up twice in a reasonable amount of time).

This is really the best solution if you have a host in another
location and enough bandwidth to make it feasible.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] No more free inodes

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 3:23 AM, Frédéric Massot
frede...@juliana-multimedia.com wrote:
 
 Just curious: does the content you back up consist mostly of tiny
 files without much duplication?   It seems odd to run out of inodes
 while still having substantial disk space.

 I used the command df -i on different servers to see which ones ate a
 lot of inodes, and it seems that it is the web server with the Apache
 cache enabled.

 Is it the case that one inode is used by each directory?

Each file and directory will use one, except that hardlinked files
share their inode.

 I'll use the -t option of htcacheclean to remove empty directories, and
 see if it makes a difference.


 On a web server most affected:

 $ df -h
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/vda1  92G   52G   36G  60% /

 $ df -i
 Filesystem       Inodes   IUsed   IFree IUse% Mounted on
 /dev/vda1       6111232 4391859 1719373   72% /

 And the folder /var/cache/apache2/mod_disk_cache contains 2956954
 directories amounting to 13 GB. The limit on the total disk cache size
 (-l option) is 300 MB?!

If it is just a cache, it might be reasonable to exclude it from
backups.   The contents would probably have expired by the time you
could restore anyway, or already have fresh copies reloaded from the
original source.
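
In this case a per-host exclude along these lines (a sketch, assuming the
share is '/') would keep the cache out of future backups:

    $Conf{BackupFilesExclude} = {
        '/' => ['/var/cache/apache2/mod_disk_cache'],
    };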

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Backup plan

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 10:35 AM, Jimmy Thrasibule
thrasibule.ji...@gmail.com wrote:

 My main problem is that I also have some external servers (spread over
 the Internet) to back up. In that case using as little bandwidth as
 possible can be useful. As I understand it, BackupPC downloads all the data
 and does deduplication on the server side. A tool doing deduplication on
 the client side would be better to preserve bandwidth.

If you are using rsync or rsyncd, only the file differences are
transferred.  The server will reconstruct a full copy of the file from
the previous instance plus the differences, then do file-level de-dup
by linking to any other copies of the same content.   For windows
targets you'll have to install cygwin rsync.

 Question:

 I don't want to backup backups and have incremental backup of
 incremental backups. This will make restoration harder.

 What is the best way to get those files? Forget BackupPC and use rsync?
 Use BackupPC and disable incremental backups?

Use backuppc with rsync or rsyncd as the xfer method.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Need help creating a new module or any way to incorporate

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 12:45 PM, Tilde88
backuppc-fo...@backupcentral.com wrote:
 (needless to say, I'm horrible at Perl)


I can't help much with code changes, but note that you already have
the option to run custom commands on the targets with the

$Conf{DumpPreUserCmd} = undef;
$Conf{DumpPostUserCmd} = undef;
$Conf{DumpPreShareCmd} = undef;
$Conf{DumpPostShareCmd} = undef;
$Conf{RestorePreUserCmd} = undef;
$Conf{RestorePostUserCmd} = undef;
$Conf{ArchivePreUserCmd} = undef;
$Conf{ArchivePostUserCmd} = undef;

options.  You don't get a push button, but they run at the appropriate
time relative to the operation.
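
They are set per host; a hedged example that runs hypothetical scripts on
the target over ssh around each dump ($sshPath and $host are standard
substitution variables, the script paths are made up):

    $Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/bin/pre-backup';
    $Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/post-backup';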

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Need help creating a new module or any way to incorporate

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 2:02 PM, Tilde88
backuppc-fo...@backupcentral.com wrote:
 Yeah, I know about the post and pre commands... I actually use them to turn on 
 the rsync daemon on Windows, and snapshot...

 The problem is, the commands that I need cannot be run on every backup...
 I.e., one of the commands would make a virtualizable VHD, and another will 
 convert that VHD to VMDK for VMware.
 So just having the button would be the solution, seeing as how virtualizing 
 and converting take quite some time and space, and I will be backing up 
 systems every hour on the hour.

 I'm working on a complete disaster recovery and business continuity 
 solution... I have the actual project completed, I just need to incorporate 
 all the pieces into BPC (as it is the main backbone).

You might want to look at http://relax-and-recover.org/ if you haven't
already.   It has most of the pieces (for Linux) that backuppc is
missing, particularly dealing with bare-metal recovery.   I think it
could be integrated with backuppc fairly easily but haven't had time
to do it myself.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Need help creating a new module or any way to incorporate

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 2:59 PM, Tilde88
backuppc-fo...@backupcentral.com wrote:
 :\ Turns out this won't be useful for me...

 I do appreciate the suggestion though.

Seemed like a nice idea to have a bootable iso with scripts to
reconstruct your disk layout (raid,lvm, filesytems) with the network
and ssh up.  It should be trivial to do a backuppc restore from that
point - and moderately hard to reproduce what it does.   What I'd like
to see is a minor change to build a directory of the iso's contents
where backuppc could include it instead of pre-creating the iso image.
  That way you could have a large number of them with files that would
pool nicely on the backuppc server.   As a tradeoff you'd have to
download the contents and rebuild the iso at restore time, but you'd
always be able to build the matching one for any backup.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Backup plan

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 3:09 PM, Michael Stowe
mst...@chicago.us.mensa.org wrote:

 My main problem is that I also have some external servers (spread over
 the Internet) to back up. In that case using as little bandwidth as
 possible can be useful. As I understand it, BackupPC downloads all the data
 and does deduplication on the server side. A tool doing deduplication on
 the client side would be better to preserve bandwidth.

 Does each of your clients actually have a lot of duplicate files?  This
 doesn't seem like it would be particularly useful to most people.

And more to the point, do they have duplicate files that change in
duplicate ways so the difference would matter more than once?  After
you get your initial copy, rsync is only going to xfer the changes.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup plan

2012-10-23 Thread Les Mikesell
On Tue, Oct 23, 2012 at 5:32 PM, Jimmy Thrasibule
thrasibule.ji...@gmail.com wrote:
 And more to the point, do they have duplicate files that change in
 duplicate ways so the difference would matter more than once?  After
 you get your initial copy, rsync is only going to xfer the changes.


 Indeed I wasn't aware of the delta transfer feature of rsync. I was
 thinking that it was only file based (not file content). So BackupPC
 might be OK for all my machines.

 I have two other questions:

 Is it possible to run a command before a backup (I cannot remember seeing
 that in the documentation, and SF.net is having troubles). For example,
 stop MySQL before the copy of its catalog files?

Yes, put it in the $Conf{DumpPreUserCmd} setting.
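
For the MySQL example, a minimal sketch (assuming ssh as root to the client
and a standard init script; dumping the database to a file beforehand is
usually preferable to stopping it, but the mechanism is the same):

    $Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /etc/init.d/mysql stop';
    $Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /etc/init.d/mysql start';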

 I have a bunch of rdiff-backup data. Can I easily transform that to the
 BackupPC storing format?

There is no file-level equivalent.  Backuppc does hardlinks, not
deltas and only de-dups at the file level.   The best you could do is
to script a restore from these backups and point backuppc to the host
holding the restored copy with the ClientNameAlias setting.  Then when
you remove the ClientNameAlias and back up the real targets you would
have initial copies for rsync to work against.

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] No more free inodes

2012-10-15 Thread Les Mikesell
On Mon, Oct 15, 2012 at 8:32 AM, Carl Wilhelm Soderstrom
chr...@real-time.com wrote:
 On 10/14 06:30 , Frederic MASSOT wrote:
 I started copying Backuppc data from /var/lib/backuppc (ext4) to the
 temporary directory /mnt/backuppc-new (xfs) with the command cp -a.

 Due to the hardlinks, the only way to copy the backuppc data pool from one
 set of disks to another at anything like the expected speeds, is to use 'dd'
 and copy the whole filesystem.

But that approach won't work when the source and destination are
different filesystem types.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] No more free inodes

2012-10-15 Thread Les Mikesell
On Mon, Oct 15, 2012 at 9:40 AM, Frédéric Massot
frede...@juliana-multimedia.com wrote:

 The problem is that the original file system (ext4) has no inodes
 available, and increasing the size of an ext4 file system does not
 increase the number of inodes. To avoid the inode problem the new
 filesystem is xfs, so I cannot copy the data with dd.

Just curious: does the content you back up consist mostly of tiny
files without much duplication?   It seems odd to run out of inodes
while still having substantial disk space.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Restoring Backuppc itself

2012-10-15 Thread Les Mikesell
On Mon, Oct 15, 2012 at 6:01 AM, M.Baert m.ba...@free.fr wrote:

 I just reinstalled ubuntu (12.04 to replace 10.10) and backuppc
 (.deb 3.2.1-2ubuntu1.1 replacing 3.2.0-3ubuntu4~maverick1).

 I have a .tgz of old /etc/backuppc and /var/lib/backuppc.
 The store is on an external disk, which was mounted from fstab :

UUID=d8a382ae-f0ad-49ca-907b-1e99483ef5aa /var/lib/backuppc/store
 ext4 defaults,noauto,noatime 0 2

 Now I need to restore the configuration, but I'm afraid of damaging my
 store/pool.

 The simple plan I have in mind:
   - stop the service
   - overwrite or merge the files from /etc/backuppc/
   - overwrite or merge the files from /var/lib/backuppc/
   - merge /etc/fstab and mount the disk
   - start backuppc service and pray.

 Is there any specific precautions I should take ?

The only likely problems would be if the backuppc version has changed
or the packaging has changed the locations for things - and these
shouldn't have much chance of harming your existing archive.  What
I usually do in similar cases is save copies of the current files
before the restore and diff them against the restored copies to make sure I
understand the configuration setting differences.
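
Something like this keeps a reference copy to diff against (paths assume
the Debian/Ubuntu layout mentioned above):

    cp -a /etc/backuppc /etc/backuppc.fresh-install
    # ...restore the old /etc/backuppc from the .tgz, then:
    diff -ru /etc/backuppc.fresh-install /etc/backuppc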

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] No more free inodes

2012-10-14 Thread Les Mikesell
On Sun, Oct 14, 2012 at 11:30 AM, Frederic MASSOT
frede...@juliana-multimedia.com wrote:

 But now it is very slow, the speed is closer to 1 GB in 12 hours, or
 more. There are still 200 GB to be copied; I cannot wait 100 days!

 The copy of cpool and log is done; the slowness comes from the
 copy of pc, certainly because of the hardlinks.


 - Will the copy stay this slow, or will it get worse and worse?

 - Is rsync faster than cp for copying data with hardlinks?

There is just no good way to do this.  The problem is that the copy
has to track the hardlinks by matching the inode numbers in massive
tables, and then seek all over the place updating the target entries.
 And you've made it even worse if you have the source and destination
filesystems on the same physical disks.If you don't do a huge
number of restores, it is usually better to just keep the old archive
around and offline for emergencies and let the new copy build its own
history from scratch.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] stumped by ssh

2012-10-11 Thread Les Mikesell
On Thu, Oct 11, 2012 at 11:13 AM, Robert E. Wooden rbrte...@comcast.net wrote:
 Thanks everyone.

 I will add that I am not running SELinux nor am I running ufw (Ubuntu
 firewall.)

 I am an Ubuntu OS house.

 I have even considered an ssh blacklist, but I cannot find one anywhere
 (unless I am looking in the wrong directories).

 I am beginning to consider possible hardware issues, but it worked
 before I re-installed the OS. So that doesn't make much sense.

 What can this issue be???

One other thing that can cause trouble is any output sent before the
rsync program starts (from /etc/profile, .bashrc, etc.).  Stock rsync
will ignore that and work anyway, but the backuppc implementation will
fail.  I didn't see any in your test, but perhaps you didn't paste the
entire results.   Also, you might look at /var/log/secure on the
target system to see if any problem is reported for the ssh
connection.
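
A simple check for stray login-script output, run as the backuppc user (the
host name is a placeholder):

    ssh -x -l root client /bin/true > /tmp/ssh-test.out 2>&1
    cat /tmp/ssh-test.out      # should print nothing at all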

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Exclude list being ignored?

2012-10-11 Thread Les Mikesell
On Thu, Oct 11, 2012 at 4:30 AM, Ian P. Christian poo...@pookey.co.uk wrote:


 Running: /usr/bin/ssh -q -x -l root my.hostname.com /usr/bin/rsync
 --server --sender --numeric-ids --perms --owner --group -D --links
 --hard-links --times --block-size=2048 --recursive --checksum-seed=32761
 --ignore-times . /

 snip

 Sent exclude: /proc
 Sent exclude: /sys


 I've just noticed on this output there is a 'Sent exclude' - I was expecting
 to see this in the rsync arguments.  Perhaps it is working after all!  How
 does this exclude get processed if it's not in the rsync command args?


That looked normal to me - are they not being excluded in the backup results?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] stumped by ssh

2012-10-10 Thread Les Mikesell
On Wed, Oct 10, 2012 at 12:19 PM, Robert E. Wooden rbrte...@comcast.net wrote:
 I have been using Backuppc for a few years now. Recently I upgraded my
 machine to newer, faster hardware. Hence, I have experience exchanging ssh
 keys, etc.

 It seems I have one client that refuses to connect via ssh. When I exchanged
 keys and ran ssh -l root [clientip] whoami the client properly returns
 'root'.

Are you executing this test as the backuppc user?
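
That is, something like (the client IP is a placeholder):

    su -s /bin/bash backuppc -c 'ssh -l root 192.168.1.50 whoami'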

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] No more free inodes

2012-10-05 Thread Les Mikesell
On Fri, Oct 5, 2012 at 3:58 AM, Tyler J. Wagner ty...@tolaris.com wrote:
 On 2012-10-04 19:56, Michael Stowe wrote:

 The inode number of the ext4 is static.

 - How can I do to increase the number of inodes?

 The number of ext4 inodes is set when the ext4 volume is created, so you
 have to recreate the file system.  Perhaps using an alternative to ext4.

 I wonder what caused this. My BackupPC filesystem was created with default
 mkfs.ext4, and has used far more disk space than inodes:

 Filesystem      Size  Used Avail Use% Mounted on
 /dev/md0        3.6T  1.6T  2.1T  43% /var

 Filesystem        Inodes   IUsed     IFree IUse% Mounted on
 /dev/md0       244195328 4966307 239229021    3% /var

There must be a lot of tiny files that are not duplicates being backed
up.  If the source can be identified, maybe they could be tarred or
zipped in a pre-user command instead of backing up those directories
normally.   If it were mine, I'd probably pull the 'mirror' set of
drives out of the raid10 and add a new raid1 of a pair of 2 or 3TB
drives formatted as XFS and start over, leaving the old set so you
could switch back if you had to do a restore before you built
sufficient history in the new archive.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Only two machines out of 3 visible in CGI interface

2012-10-05 Thread Les Mikesell
On Fri, Oct 5, 2012 at 1:26 AM, Fabrice Delente delen...@gmail.com wrote:
 It is set to 'backuppc':

 root@se3:/etc/backuppc # grep CgiAdminUser  *
 config.pl:$Conf{CgiAdminUserGroup} = 'backuppc';
 config.pl:$Conf{CgiAdminUsers} = 'backuppc';

 The /usr/share/backuppc directory is owned by www-se3:backuppc so I added
 www-se3 to the CgiAdminUsers:

 $Conf{CgiAdminUsers} = 'backuppc,www-se3';

 but I still have the same message for the 3rd machine (only privileged users
 can view info on this host).

These 'users' are strictly web logins that normally exist only in what
you specified as  apache's AuthUserFile and AuthGroupFile.   They have
nothing to do with system users unless you have gone out of your way
to make apache use system logins.   And in any case apache should be
prompting you for a login before allowing access to the web page if it
is set up correctly.
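For reference, a packaged install typically ships an Apache stanza along
these lines (paths and names are only examples):

    <Location /backuppc>
        AuthType Basic
        AuthName "BackupPC"
        AuthUserFile /etc/BackupPC/apache.users
        Require valid-user
    </Location>

Whatever login name apache accepts there is what gets passed to the CGI as
REMOTE_USER, and that is the name $Conf{CgiAdminUsers} has to match.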

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only two machines out of 3 visible in CGI interface

2012-10-05 Thread Les Mikesell
On Fri, Oct 5, 2012 at 8:33 AM, Fabrice Delente delen...@gmail.com wrote:
 Ok, it's a bit clearer now, I'm no apache expert! :^)

 I'll look into that, though it seems to me that there is no AuthUserFile and
 AuthGroupFile on our system.

Did you install a distribution-packaged version of backuppc or use the
Sourceforge tarball?  If you did it yourself, you need to configure
apache to require authentication.  The packaged versions should
include that part for you but you have to run htpasswd to add the
users/passwords yourself.

 What's really troubling is that on 2 out of 3 of these machines, everything
 works ok, so why does Apache accept authentication for 2 of them but not the
 3rd?

That part doesn't make any sense to me either.  I thought you said
that you couldn't see the 3rd host in the web interface selection.
How are you providing the URL to access it? You must somehow be
bypassing the path that triggers the authentication requirement.

 Should the AuthUserFile and AuthGroupFile be on this 3rd machine, or
 only on the server where backuppc is installed?

AuthUserFile and AuthGroupFile are apache configuration directives
where you specify the paths to the actual files containing the data.
These can be in the main (or included) config file or in an .htaccess
file in the directory containing the web page or cgi script and
control the web server where backuppc is running.

-- 
   Les Mikesell
lesmikes...@gmail.com

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc Got fatal error during xfer (Unable to read 4 byte

2012-10-05 Thread Les Mikesell
On Fri, Oct 5, 2012 at 2:29 AM, antoine2223
backuppc-fo...@backupcentral.com wrote:
 hello guys ,

 I installed BackupPC on Linux (Ubuntu 12).
 I am really stuck. I get the following errors when I start a backup:

 Got fatal error during xfer (Unable to read 4 bytes)
 2012-10-04 11:21:25 Backup aborted (Unable to read 4 bytes)


 the things i did ,
 * I copied my ssh key to my localhost

your key?

 * I connected with root@localhost; it works without asking for a password
 * I connected with backuppc@localhost; it works without asking for a password

Were you running as the backuppc user when you did these tests?

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only two machines out of 3 visible in CGI interface

2012-10-04 Thread Les Mikesell
On Thu, Oct 4, 2012 at 12:53 PM, Fabrice Delente delen...@gmail.com wrote:
 Hello.

 I have 3 machines to backup, that have been configured for backup in the
 same way (all 3 are linux servers backed up by rsyncd).

 In the CGI interface, only 2 out of them appear in the small scrollable list
 where I can choose the host to survey. However, if I try to go on the status
 page of the third one, I get

 Note: $ENV{REMOTE_USER} is not set, which could mean there is an
 installation problem. BackupPC_Admin expects Apache to authenticate the user
 and pass their user name into this script as the REMOTE_USER environment
 variable. See the documentation.

 How can REMOTE_USER be set for two machines, but not for the third? I'm at a
 loss...

 Thanks for any hint!

I can't think of a reason it would be different, but is it set at all?
  That is, did you log into the web interface  as the configured admin
user or a user specified as the owner of the targets in question?
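For what it's worth, the owner of a host is the third column of the hosts
file, e.g. (user names hypothetical):

    host        dhcp    user    moreUsers
    server3     0       alice   bob,carol

A non-admin login only sees the hosts where it appears in one of those
columns.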

-- 
   Les Mikesell
  lesmikes...@gmail.com

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only two machines out of 3 visible in CGI interface

2012-10-04 Thread Les Mikesell
On Thu, Oct 4, 2012 at 2:35 PM, Fabrice Delente delen...@gmail.com wrote:
 I tried forcing directly REMOTE_USER to backuppc in the apache config file,
 and now I still get reports for two of the machine, but for the third the
 error message is now 'only privileged users can access information on this
 host' (rough translation, my interface is in french)...


What is your $Conf{CgiAdminUsers} set to?

-- 
  Les Mikesell
lesmikes...@gmail.com

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc major restore ... check status

2012-09-27 Thread Les Mikesell
On Thu, Sep 27, 2012 at 2:09 AM, Valics Lehel lval...@grafx.ro wrote:
 Hi all,

 I had a major crash on one server and I started a restore for all
 domains from backuppc.
 I see that something is happening, but I cannot see a status of which
 folders were already restored, how much remains, etc.

 How can I see more information in the log, not only

 2012-09-26 23:35:28 restore started below directory /var/www/vhosts/ to host

I usually just ssh to the target and use 'du', etc. to see what is
actually there and how it is changing.
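Something as simple as this gives a rough progress view (the host name is a
placeholder, and it assumes passwordless ssh to the target):

    watch -n 60 "ssh root@targethost du -sh /var/www/vhosts"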

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://ad.doubleclick.net/clk;258768047;13503038;j?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Got fatal error during xfer (Child exited prematurely)

2012-09-27 Thread Les Mikesell
On Thu, Sep 27, 2012 at 7:25 AM, isubhransu
backuppc-fo...@backupcentral.com wrote:
 I have a backup server which backs up other servers, and a few days ago the
 backup server started experiencing errors and is no longer able to take
 backups. I think the error log below describes the problem better than I can:

 2012-09-10 20:05:40 Aborting backup up after signal PIPE

That just means the transfer program died or disconnected at the other
end, so you will need more information to diagnose it.  Possible
problems could be disk corruption on the target system or running out
of memory.  If you are connecting through a firewall or NAT router,
there may be time limits for either idle periods or total connection
length that you might exceed.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://ad.doubleclick.net/clk;258768047;13503038;j?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc cannot connect to windows 2000 server for backup

2012-09-26 Thread Les Mikesell
On Wed, Sep 26, 2012 at 3:30 AM, sbdcunha
backuppc-fo...@backupcentral.com wrote:

 I have had BackupPC running for some time backing up Linux and Windows 2003
 servers

 but recently there came a need to back up one server which runs Windows 2000.

 when I tried to backup from backuppc admin it gives a error message
 ---

 Running: /usr/bin/smbclient xx.xx.xx.xx\\t -U backupuser -E -N -d 1 -c 
 tarmode\ full -Tc -
 full backup started for share t
 Xfer PIDs are now 16028,16027
 session request to xx.xx.xx.xx failed (Called name not present)

I think this means you tried to connect to an IP address and it only
wants to accept a netbios name.  This may have been fixed in a service
pack or have an option that can be changed, but the quick fix would be
to use the host name instead of the IP and if it doesn't resolve in
DNS, add an entry in the /etc/hosts file on the backuppc server for
it.
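A rough sketch of that workaround (the address and name are placeholders):

    echo '192.0.2.25   kmweb' >> /etc/hosts

Then use kmweb as the backuppc host name so smbclient presents a netbios
name instead of the raw IP.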

-- 
   Les Mikesell
  lesmikes...@gmail.com

--
How fast is your code?
3 out of 4 devs don't know how their code performs in production.
Find out how slow your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219672;13503038;z?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc cannot connect to windows 2000 server for backup

2012-09-26 Thread Les Mikesell
On Wed, Sep 26, 2012 at 2:51 PM, sbdcunha
backuppc-fo...@backupcentral.com wrote:
 Running: /usr/bin/smbclient kmweb\\t -U backupuser -E -N -d 1 -c tarmode\ 
 full -Tc -
 full backup started for share t
 Xfer PIDs are now 18103,18102
 session request to KMWEB failed (Not listening on called name)
 session request to *SMBSERVER failed (Not listening on called name)
 session request to KMWEB failed (Not listening on called name)
 session request to *SMBSERVER failed (Not listening on called name)


Can you connect to that share from smbclient or another windows machine?
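A quick manual test along those lines, using the same credentials as the
backup command:

    smbclient //kmweb/t -U backupuser
    smb: \> dir

If that fails the same way from a shell, the problem is on the Windows 2000
side rather than in backuppc.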

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
How fast is your code?
3 out of 4 devs don't know how their code performs in production.
Find out how slow your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219672;13503038;z?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to prove backups for Audits

2012-09-25 Thread Les Mikesell
On Tue, Sep 25, 2012 at 11:58 AM, Derek Belcher dbelc...@alertlogic.com wrote:
 I have been tasked with taking a screenshot for our auditors showing that we
 are keeping backups for a year.

I'm surprised that the auditors aren't asking you to say you have
tested a restore - or at least are confident that you can do one.

 My current scheduling looks like this:
 FullKeepCnt = 1,0,1,0,0,1   one full week, one full month, one 64 weeks

 Is there a way to display the oldest backup with a time stamp, proving 52+
 weeks? In the command line or GUI?

The gui display for each host shows a list of backups currently
available with the date it was taken.  Doesn't show the year in that
view, but you can tell from the age/days column how old it is.
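If a command-line answer is easier to screenshot, the per-host backups file
records the start time (epoch seconds) in its third field, so something like
this prints the oldest backup's date - assuming the stock /var/lib/BackupPC
top directory and GNU awk:

    head -1 /var/lib/BackupPC/pc/somehost/backups |
        awk '{ print "backup #" $1 " started " strftime("%Y-%m-%d", $3) }'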

-- 
  Les Mikesell
   lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to deactive the file mangling on backups ?

2012-09-24 Thread Les Mikesell
On Mon, Sep 24, 2012 at 2:46 PM, Serge SIMON serge.si...@gmail.com wrote:
 The point is that it would take almost nothing to have a browsable uncompressed
 rsynced backup folder for people wanting to retrieve files from it
 directly.
 It's the default rsync behavior.

You can use rsync directly if what you want is what rsync does...

 As said previously, that would allow these two nice features :
 - allowing one to plug an external drive with a backuppc uncompressed rsync-ed
 backup folder into another computer if the main server crashes (allowing a quick
 recovery of some files - of course, one can unmangle the directory, but
 really, it's an unnecessary manipulation, especially if it can be easily
 avoided from the start)

The price is right for a 2nd install of backuppc code.  Just do it
ahead of time if you are concerned about this.

 - allowing one to publish (sshfs, nfs, smb, ...) a read-only share of this
 uncompressed rsync-ed backup folder.

There is at least one FUSE filesystem for backuppc that has been developed
- maybe more.  Not sure what kind of performance it has or permissions
it can enforce.  But do you really have people who can't use the web
interface for browsing and doing their own restores?

 I really like a lot of features in backuppc; the internal rsync storage
 behavior is the only one I strongly disagree with, mainly because I really
 think it could easily be just a simple and regular rsync backup path.

It can't.  Files with identical content but different attributes are
pooled with hardlinks. There is no way that regular rsync or any other
direct file access approach can reproduce the original
owner/group/timestamp/permissions, etc.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Per host host.pl not found on new CentOS 6.3 install

2012-09-21 Thread Les Mikesell
On Fri, Sep 21, 2012 at 7:17 AM, Daniel J. Doughty
d...@heartofamericait.com wrote:
 Hi,

 I've got BackupPC running and backing up linux boxes fine.  I'm now
 trying to back up a Windows 2003 box.  I've got the SMB shared out and
 now want to set the password in backuppc but just want to do it for this
 host.  So I went searching for the CONFDIR/pc/ subdir and couldn't find
 it.  All I can find is my TOPDIR/pc/ subdir.

 Any idea what I'm missing here?

I'm not sure the per-pc config files are created unless there are
non-default values for them.   Why don't you use the web interface for
these settings?
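If a per-host file is wanted anyway, it is just a matter of creating it
under CONFDIR/pc - a rough sketch with example values:

    # /etc/BackupPC/pc/winhost.pl
    $Conf{XferMethod}       = 'smb';
    $Conf{SmbShareName}     = ['C$'];
    $Conf{SmbShareUserName} = 'backupuser';
    $Conf{SmbSharePasswd}   = 'secret';

Only settings that differ from the global config.pl need to appear there.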

-- 
  Les Mikesell
 lesmikes...@gmail.com

--
Got visibility?
Most devs has no idea what their production app looks like.
Find out how fast your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219671;13503038;y?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Per host host.pl not found on new CentOS 6.3 install

2012-09-21 Thread Les Mikesell
On Fri, Sep 21, 2012 at 10:11 AM, Tony Molloy tony.mol...@ul.ie wrote:

 Incredible I am just doing the same thing myself. Installing
 BackupPC-3.2 on a CentOS-6.3 box replacing 3.1 on an old CentOS-5.8
 box and was just about to ask the same question.

OK, I didn't recall having any trouble with the RPM and the web
interface so I went through the motions on a clean machine and this is
all it takes:

#yum install backuppc
edit /etc/httpd/conf.d/BackupPC.conf
  change allow from 127.0.0.1 to allow from All
Follow the directions at the top of the file and:
htpasswd -c /etc/BackupPC/apache.users yourusername

edit /etc/BackupPC/config.pl
change $Conf{CgiAdminUsers} = ''; to $Conf{CgiAdminUsers} = 'yourusername';

#service backuppc restart
#service httpd restart

Log into http://server_IP/backuppc with a browser

Click edit hosts, add one, save.
Click host summary, click new host
Click edit config (upper one, in host section)
make the changes you want.
When you save, the /etc/BackupPC/pc directory is added with the per host config.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Got visibility?
Most devs has no idea what their production app looks like.
Find out how fast your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219671;13503038;y?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-21 Thread Les Mikesell
On Fri, Sep 21, 2012 at 11:46 AM, Timothy J Massey tmas...@obscorp.com wrote:

 I would not expect the network speed changing from 100Mb to 1Gb to make a
 difference:  this server is literally unchanged from one run to the next.
 BackupPC literally shows zero files changed!  So the network bandwidth
 shouldn't make a difference:  all it should be doing is passing hashes back
 and forth...

 We'll see what happens, though, and I'll let you know.

I wouldn't expect any difference by adding more bandwidth than you
were already filling.   The real work here is that the --ignore-times
option hard-coded into full runs causes the target to have
to read all the files, and without checksum caching the server does
too.  And regardless of what benchmarks might tell you about disks, if
anything else wants the disk head to be somewhere else, this is going
to be slow.  Backups will almost always thrash any disk buffer/cache
that might make other types of access seem faster.

However, I just ran into a situation in an rsync restore where both
ends were sitting in a select() apparently waiting for each other for
so long I gave up and used a tar download.   Maybe there is a bug
somewhere.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Got visibility?
Most devs has no idea what their production app looks like.
Find out how fast your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219671;13503038;y?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-21 Thread Les Mikesell
On Fri, Sep 21, 2012 at 1:02 PM, John Rouillard
rouilj-backu...@renesys.com wrote:

 However, I just run into a situation in an rsync restore where both
 ends were sitting in a select() apparently waiting for each other for
 so long I gave up and used a tar download.   Maybe there is a bug
 somewhere.

 Yup. I have discussed it multiple times:

   http://adsm.org/lists/html/BackupPC-users/2010-09/msg00136.html
   http://adsm.org/lists/html/BackupPC-users/2010-02/msg00075.html
   http://comments.gmane.org/gmane.comp.sysutils.backup.backuppc.general/18179

 but always when doing a backup not while doing a restore.

That used to happen all the time with windows targets running cygwin
1.5 with rsync over ssh.   I wonder if some similar scenario can
happen with recent linux versions?

-- 
   Les Mikesell
lesmikes...@gmail.com

--
Got visibility?
Most devs has no idea what their production app looks like.
Find out how fast your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219671;13503038;y?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-19 Thread Les Mikesell
On Wed, Sep 19, 2012 at 10:45 AM, Timothy J Massey tmas...@obscorp.com wrote:

  Can you repeat those the 3rd time with checksum-caching enabled so we
  have a better idea of how much time that saves?  (If you didn't have
  it on already, you'd need to repeat again to store them first)


 The host I was playing with is a bad one for that:  it's a pretty visible
 host, and about 30% of the data changes constantly.  I've selected another
 host to experiment with.  One issue is that it's three times as large, so
 each full backup is slower.  Good news is that it's an unused server, so
 99.99% of the data is static, and no one cares if I hammer the server for no
 real reason!  :)

 I just ran a first backup of that host with compression disabled, and it
 took the same amount of time as not-first fulls were taking with
 compression.  The backup server is busy right now and probably for the next
 2 or so hours.  Once it's free, I will start a second non-compression full
 and we'll see how it goes.

 Right now, still no hash caching.  Once I gather the stats for
 non-compression only, I will add in hash caching and we'll see from there.

Thanks - I think that will be a useful thing to know, and something
that should have an effect whether the bottleneck on the system is the
CPU or disk.

-- 
   Les Mikesell
  lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 9:53 AM, Timothy J Massey tmas...@obscorp.com wrote:


 All of that was clearly outlined at the top of the e-mail:  4 x 2TB
 Seagate SATA drives in RAID-5 (using md, which I'm not sure I stated
 originally).

Raid5 has its own reasons for bad write performance.  You sort-of make
up for it with 8+ drives in an array, but with 4 you've basically made
all the heads wait for the slowest and forced a read-update-write
cycle on everything less than a raid block in size.  And with md, the
parity update is probably going to be clocked as CPU, not device i/o
time.   Keep this in mind when comparing backuppc vs. native rsync -
backuppc is storing metadata in different places, updating multiple
directory entries, etc.  That translates to more small writes.  For
the extreme case, consider a native rsync of an unchanged tree - no
writes at all, where backuppc will build a new directory tree and
touch the link count in every inode.


 That's fairly bad news for me, then.  These are embedded-style
 motherboards, and upgrading to a 3GHz Xeon processor is not an option...
 :(

There is not just a little bit of difference between a dual-core box
and the new xeons.  I don't have benchmarks, but they are insanely
faster.   Throwing some more RAM at it might help a little by
permitting more read-ahead and write buffering but probably just a
little if you aren't seeing much wait time now.

 That's my next step.  When I upgraded from my old VIA-based servers, I
 (accidentally) left compression on, and thought the new, dual-core, faster
 and more efficient processor would be OK with this.  That may have been my
 biggest mistake.  (Honestly, I've already found other areas that make me
 think that my original disdain for compression might have been
 well-justified!  :) )

I don't think it will matter that much once you have passed the 2
fulls to get cached checksums.  After that you'll only be
compressing/uncompressing content with changes.

-- 
   Les Mikesell
  lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 10:01 AM, Timothy J Massey tmas...@obscorp.com wrote:

 I still have this on the (very) back burner.  It's a Windows system, so
 I'm looking at SMB, and I *hate* backups over SMB.  They are nothing but a
 problem.

I'd expect cygwin tar over ssh to work on windows, but haven't tried
it and wouldn't expect it to beat rsync.

-- 
  Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 10:24 AM, Timothy J Massey tmas...@obscorp.com wrote:

 Fortunately, BackupPC is a backup of the backup right now, and is not
 expected to be used for real.  Yet.  That's why I can take the time and try
 to actually solve the problem, rather than apply band-aids.

 But that will likely end in November, if not sooner.

It is more than a band-aid to have a warm-spare disk ready to pop in
instead of waiting for a 3TB restore even with reasonable performance.
 Be sure everyone knows what they are losing.

 I'm not.  I've got two of these new boxes built.  In both cases, they have
 2-4% wait time when doing a backup.  One is a RAID-5 and one is a RAID-6.

Can you test one as RAID10?  Or something that doesn't make the disks
wait for each other and likely count the time against the CPU?

 Might it have something to do with md?  Could the time that would normally
 be considered wait time for BackupPC be counted as CPU time for md?  That
 doesn't seem logical to me, but I can say that there just isn't any wait
 time on these systems.

Not sure, but I am sure that raid5/6 is a bad fit for backuppc
although good for capacity.

  Mine seem to track the target host disk speed more than anything else.
   The best I see is   208GB with a full time of 148 minutes.  But that
  is with backuppc running as a VM on the East coast backing up a target
  in California and no particular tuning for efficiency.  Compression is
  on and no checksum caching.

 That's the same settings I'm using.  But that's about double the
 performance I'm getting.  247GB in 340 minutes, or about 12MB/s.

I see more like that when backing up slower targets - the rsync
protocol probably isn't very good about overalapping the read/compare
while walking the tree with the block-checksum comparison so you add
up every little delay at either end.  Are you sure the target has no
other activity happening during the backup?

-- 
  Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 9:21 AM, Mark Coetser m...@tux-edo.co.za wrote:

 I am busy running a full clean rsync to time exactly how long it will
 take and will post results compared to a clean full backup with
 backuppc. I can tell you that the network interface on the backup server
 is currently running at a 200Mb/s transfer speed.

What's the target here?  It takes a pretty good disk system at both
ends to sustain rates like that, especially if you have a tree of
small files.

-- 
   Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 1:14 PM, Timothy J Massey tmas...@obscorp.com wrote:

 I have just performed some full backups on a host after disabling
 compression.  Results:

 Not-first backup with compression:  139.9 minutes for 7.6 MB
 First backup without compression:  76.7 minutes for 70181.5 MB.
 Second backup without compression:  36.6 minutes for 70187.4 MB.

 Wow:  Four times the performance.  Looks like compression is a
 *significant* performance-eater.

Can you repeat those the 3rd time with checksum-caching enabled so we
have a better idea of how much time that saves?  (If you didn't have
it on already, you'd need to repeat again to store them first)

-- 
  Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Les Mikesell
On Tue, Sep 18, 2012 at 3:50 PM, Timothy J Massey tmas...@obscorp.com wrote:

 Will do.  You were the one that turned me on to Clonezilla, and I'm always
 open to new tools...  :)

 ReaR is not exactly the most Google-friendly search term (on so many
 levels...), so for others (and to confirm):  http://relax-and-recover.org/

Yes, and that even looks like the current stuff - they recently moved
from sourceforge.  Packages should be available for rpm/deb based
systems in common repositories.

 Sadly, *so* many of my servers are Windows...

Clonezilla works, but you have to shut down to save the image.

 Nope:  these boards have a maximum of 4GB.  Again:  embedded.

 And we've had this debate before.  Of 4GB of RAM, 3.2GB of RAM is cache!
 Do you *really* think that more will help?  I doubt that the entire EXT4
 filesystem on-disk structure takes 3GB of disk space!  I've demonstrated
 that in previous experiments:  going from 512MB to 4GB made *zero*
 difference.  I doubt going beyond 4GB is going to change that, either.

I think the best you could hope for is to get the bulk of the
directories and inodes in cache. But that might save a few million
seeks.

  I always think 'seek time' whenever there is enough delay to notice -
  and anything that concurrently wants the disk head somewhere else is
  going to kill the throughput.

 I think you are way overstating this.  The disks on the clients spend a
 good chunk of their time *idle* even when a backup is going on.

Maybe, but seek times are always orders of magnitude greater than any
other computer operation, so that's usually the place to start.  On
the other hand, maybe your CPUs are worse than anything I've used in a
long time.   My worst current box is a 32-bit xeon with 2 CPUs with
hyperthreading. /proc/cpuinfo shows bogomips: 4791.23 for them.   The
VM where I pulled the previous numbers shows 4 CPUs (not sure if 2 are
hyperthreads or not) with bogomips: 5320.00 but it feels considerably
faster than the 32-bit box.  Also, I generally use Intel server-type
NICs but I'm not really sure if they are better or if there are big
differences in CPU involvement with different types.

 The guest servers are not hurting for resources.  They are not part of the
 problem.  The problem seems to be contained completely inside of the
 BackupPC server.

If you aren't seeing big speed differences among clients you are
probably right.  I do and they seem related to hardware capabilities.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Les Mikesell
On Mon, Sep 17, 2012 at 7:59 AM, Mark Coetser m...@tux-edo.co.za wrote:

 Surely disk io would affect normal rsync as well? Normal rsync and even
 nfs get normal transfer speeds its only rsync within backuppc that is slow.


Backuppc uses its own rsync implementation in perl on the server side
so it will probably not match the native version's speed.  Is this the
first or 2nd full run?  On the first it will have to compress and
create the pool hash file links.  On the 2nd it will read/uncompress
everything for block-checksum verification.  If you have enabled
checksum caching, fulls after the 2nd will not have to read/uncompress
unchanged files on the server side.

-- 
   Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Getting files newer than error

2012-09-17 Thread Les Mikesell
On Mon, Sep 17, 2012 at 8:38 AM, Michael Stowe
mst...@chicago.us.mensa.org wrote:


 That's absolutely true -- it's this line:

 Running: /usr/bin/smbclient ts-1\\Users -c tarmode\ full -TcN
 /var/lib/BackupPC//pc/ts-1/timeStamp.level0 -

 That leads me to believe that the error is caused by different time zones
 between the system to be backed up (Windows, I assume, due to smbclient)
 and the BackupPC process.  Where the disconnect is I can only speculate.


Does everything work OK when it tries the incremental the 2nd day
after the full?

-- 
  Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Les Mikesell
On Mon, Sep 17, 2012 at 10:16 AM, Mark Coetser m...@tux-edo.co.za wrote:
 

 Its the first full run but its taking forever to complete, it was running
 for nearly 3 days!

As long as it makes it through, don't make any judgements until after
the 3rd full, and be sure you have set up checksum caching before
doing the 2nd.   Incrementals should be reasonably fast if you don't
have too much file churn but you still need to run fulls to rebase the
comparison tree.
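For reference, checksum caching in 3.x is enabled (per the BackupPC docs) by
adding the fixed checksum seed to the rsync argument lists in config.pl:

    $Conf{RsyncArgs} = [
        # ... keep the existing options from config.pl ...
        '--checksum-seed=32761',
    ];
    $Conf{RsyncRestoreArgs} = [
        # ... keep the existing options from config.pl ...
        '--checksum-seed=32761',
    ];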

-- 
   Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Les Mikesell
On Mon, Sep 17, 2012 at 11:05 AM, Timothy J Massey tmas...@obscorp.com wrote:


 I'm writing a longer reply, but here's a quick in-thread reply:

 I know exactly what you mean by waiting until after the first full.  Often
 the second full will be faster -- but only *IF* you are bandwidth limited
 will you will see an improvement.  In this case, neither him nor I are
 bandwidth limited.  I don't see an improvement.

The 2nd might even be slower, since the server side has to decompress
and recompute the checksums.

 I am routinely limited to no more than 30MB to 60MB per *minute* as the
 maximum performance for my rsync-based backups.  This is *really* pretty
 terrible.  I also see that the system is at 100% CPU usage when doing a
 backup.  So, my guess is that the Perl-based rsync used by BackupPC is to
 blame.

I'd blame the CPU first.  It's easier to replace with something faster...

 So, I have two CPU-bound tasks and they're both fighting over the same
 core.

 Is there anything that can be done about this?

Not sure about that - I always expected the kernel scheduler to do
something sensible, but maybe not.

 A quick aside about checksum caching:  I very much *want* the ability to
 check to make sure if my backup data is corrupted *before* there is an
 issue, so I do not use checksum caching.  So, yes, this puts much greater
 stress on disk I/O:  both sides have to recalculate the checksums for each
 and every file.  But the client can do it without monopolizing 100% of the
 CPU;  the BackupPC side should be able to, too...

Backuppc is decompressing, and doing it all in perl, so I'd expect
that to be less efficient.  However, there is a setting to control how
much of the data (a random percentage) is checksum-checked even with
caching enabled, so you can tune the timing vs. risk to some extent.
There's little risk of file-level corruption that would still let the
checksums cached at the end of the file match unless you have bad RAM
(which would likely cause crashes) or physical disk block corruption
which you can check relatively quickly with a 'cat /dev/sd? >
/dev/null' or a smartctl test run followed by checking the status.
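The percentage setting mentioned above is $Conf{RsyncCsumCacheVerifyProb};
for example

    $Conf{RsyncCsumCacheVerifyProb} = 0.05;

re-verifies roughly 5% of the cached-checksum files on each full instead of
the 1% default.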

-- 
Les Mikesell
  lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Les Mikesell
On Mon, Sep 17, 2012 at 11:54 AM, Timothy J Massey tmas...@obscorp.com wrote:

 However, I have recently inherited a server that is 3TB big, and 97%
 full, too!  Backups of that system take 3.5 *days* to complete.  I *can't*
 live with that.  I need better performance.

The only quick-fix would be if there are some top-level directories
where it would make sense to divide the target into several separate
runs. (Different 'hosts' in backuppc, with ClientNameAlias pointing
back to the same server and different 'shares' for each).  Then you
can skew the schedule so the fulls happen on different days.   Or,
schedule this to start on Friday evening and hope it is done by Monday
morning...   In any case you have to keep in mind that if you ever
have to restore, there will be considerable downtime.  For the price
of a disk these days, it might be worth keeping a copy up to date with
native rsync that you could just swap into place if you needed it and
back that up with backuppc if you want to keep a history.
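A sketch of the split (host and share names are hypothetical): add entries
such as bigsrv-home and bigsrv-data to the hosts file, each with a per-host
config along the lines of

    # /etc/BackupPC/pc/bigsrv-home.pl
    $Conf{ClientNameAlias} = 'bigsrv';
    $Conf{RsyncShareName}  = ['/home'];

and stagger when each one's full run happens.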


 No matter the size of the system, I seem to top out at about 50GB/hour for
 full backups.  Here is a perfectly typical example:

In most systems, the bottleneck is going to be seek time on the
archive disk.  That's hard to measure because most linux metrics count
device wait time as CPU time,  and it will vary wildly with the size
of the target files.   Also, except for the first full it is hard to
relate the xfer speed to the size of the data since you may be reading
a TB to xfer just a MB.

 It looks like everything is under-utilized.  For example, I'm getting a
 measly 40-50MB of read performance from my array of four drives,

If every read has to seek many tracks (a likely scenario), that's not
unreasonable performance.

 and
 *nothing* is going out over the network.

If everything you read matches, there won't be much network traffic.

  My physical drive and network
 lights echo this:  they are *not* busy.  My interrupts are certainly
 manageable and context switches are very low.  Even my CPU numbers look
 tremendous:  nearly no time in wait, and about 50% CPU idle!

I think you should be seeing wait time.  Unless perhaps you have some
huge files that end up contiguous on the disk, I'd expect the CPU to
be able to decompress and checksum as fast as the disk can deliver -
and there shouldn't be much other computation involved.

 And one more request:  for those of you out there using rsync, can you
 give me some examples where you are getting faster numbers?  Let's say, full
 backups of 100GB hosts in roughly 30-35 minutes, or 500GB hosts in two or
 three hours?  That's about four times faster than what I'm seeing, and would
 work out to be 50-60MB/s, which seems like a much more realistic speed.  If
 you are seeing such speed, can you give us an idea of your hardware
 configuration, as well as an idea of the CPU utilization you're seeing
 during the backups?  Also, are you using compression or checksum caching?
 If you need help collecting this info, I'd be happy to help you.

Mine seem to track the target host disk speed more than anything else.
 The best I see is   208GB with a full time of 148 minutes.  But that
is with backuppc running as a VM on the East coast backing up a target
in California and no particular tuning for efficiency.  Compression is
on and no checksum caching.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Adding client file accesses to AV exceptions

2012-09-12 Thread Les Mikesell
On Wed, Sep 12, 2012 at 7:31 AM, Michael Stowe
mst...@chicago.us.mensa.org wrote:

 So which free AV's offer a way to exclude a process?

 At least that's an easy question:  none of them.

 While there are differences between AV's, there are only a handful of free
 ones, none of which make the distinction between what is reading a file
 when a file is read.  (Nor would it be particularly wise to do so.)

Is there a way to make a volume shadow snapshot and exclude the
snapshot location?

-- 
  Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup Fails on Access Denied Warning

2012-09-10 Thread Les Mikesell
On Mon, Sep 10, 2012 at 3:41 AM, gshergill
backuppc-fo...@backupcentral.com wrote:
 
 By the Samba mail list, do you mean on Samba forums? Or does Backup Central 
 have a Samba mail list?

Forums, I guess - wherever the samba experts hang out...

 How do you set up a user with backup rights? At the moment I am just using 
 the system Administrator (the only accounts on the Windows Server), would 
 that be an issue? It seemed to work well using Administrator on Windows 
 Server 2003.

Windows has a backup operators account.  I thought conceptually that
Administrators were allowed to do anything but sometimes have to take
ownership of files (which shows in an audit log) before being able to
access them, where backup operators are supposed to be able to make
backups without changing ownership.  Not sure if this means they get
read access or if there is some magic about windows backups.  There is
also some weirdness about logging in with a password vs. using ssh
keys, etc. that might be different in 2008.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup Fails on Access Denied Warning

2012-09-06 Thread Les Mikesell
On Thu, Sep 6, 2012 at 9:58 AM, gshergill
backuppc-fo...@backupcentral.com wrote:
 Hi guys,

 The following error keeps coming up when backing up my Windows 2008 server;

 Domain=[2008BACKUPTEST] OS=[Windows Server (R) 2008 Standard 6002 Service 
 Pack 2] Server=[Windows Server (R) 2008 Standard 6.0]
 tarmode is now full, system, hidden, noreset, verbose
 NT_STATUS_ACCESS_DENIED listing \ProgramData\Application Data\*
 NT_STATUS_ACCESS_DENIED listing \ProgramData\*
 This backup will fail because: NT_STATUS_ACCESS_DENIED listing \\*
 NT_STATUS_ACCESS_DENIED listing \\*

 The file keeps failing (with access denied) on the following types of files;

 Windows 2008

 Go to a folder and on the tab with File, Edit, View, Tools, Help do the 
 following;

 Tools  Folder Options  View  Hide protected operating system files 
 (Recommended).

 Apply this, and all the files that then show up are the ones that are failing.

 To get past it on a specific file I need to right click the folder,

 Properties  Security  Edit

 Then Remove the line that has a deny value, and make the rest Full Control.

 This may be okay to do now and then but there are so many of those files it 
 is unreasonable to do it on every single file, and this should be working 
 without the need for this.

 Any idea why it's failing on these file types? Is there a way around it?

I don't know much about windows security, but this should be pretty
much between smbclient and the windows system.   Are you connecting as
a user with backup rights?  You should see the same thing using
smbclient in interactive mode (a dir of the same directory) - maybe
there would be more expertise on the samba mail list.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Functionality to Back Up VSphere box?

2012-09-04 Thread Les Mikesell
On Tue, Sep 4, 2012 at 9:53 AM, gshergill
backuppc-fo...@backupcentral.com wrote:
 Hi BackupPC community,

 I was wondering if there is a functionality to allow someone to back up their 
 box running VSphere (as an OS).

 Ideally, I would run one command and it would sequentially back up every VM 
 on the VSphere box.

No, not only does it not have any special facility to deal with the VM
host (other than generic ssh commands), the large VM image files would
always have changes that would prevent backuppc's pooling from working
for them.   On the other hand you could set up any running vm guest to
be backed up just like a physical machine, and common files would be
pooled in that case.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Functionality to Back Up VSphere box?

2012-09-04 Thread Les Mikesell
On Tue, Sep 4, 2012 at 10:43 AM, gshergill
backuppc-fo...@backupcentral.com wrote:
 Hi Les Mikesell,

 
 No, not only does it not have any special facility to deal with the VM
 host (other than generic ssh commands), the large VM image files would
 always have changes that would prevent backuppc's pooling from working
 for them. On the other hand you could set up any running vm guest to
 be backed up just like a physical machine, and common files would be
 pooled in that case.
 

 Just to confirm, you are saying that there is no simple one command to back 
 them all up, but I can individually back up each VM running on the OS as I 
 normally would?


You can run backups directly from each guest just like you would from
standalone machines.   There just aren't any tools that interact with
the VMware hosts to access the VM image files and even if you could,
they would not pool nicely.

-- 
  Les Mikesell
lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-28 Thread Les Mikesell
On Tue, Aug 28, 2012 at 12:09 AM, martin f krafft madd...@madduck.net wrote:
 also sprach Les Mikesell lesmikes...@gmail.com [2012.08.27.2326 +0200]:
 The only setting I can see that relates to this would be
 PartialAgeMax.  Are the retries happening before this expires?
 Otherwise you'd have to poke though the code to see why it prefers the
 previous full.

 This is a good idea, especially since I found in another log:

   full backup started for directory /; updating partial #91

 I will investigate this and get back to you.

 Why would I not want to build upon a partial backup older than
 3 days?


Not sure about the reasoning there, but I think that incomplete
backups are only kept if there are more files than the previous
partial.  If you haven't succeeded in getting a better partial in some
amount of time it might be better to throw the old one away and start
from scratch.   In the case of a fast network and a very large number
of files, the comparison will take longer than a transfer from
scratch.

The ClientTimeout setting may be your real issue though.  A running
backup should never time out as long as anything is transferring, even
if it takes days to complete but with rsync that value can be for the
whole backup.I'd try raising it a lot and trying to catch up over
a weekend.
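As a concrete example, that is a one-line per-host override (the value here
is an arbitrary three days):

    $Conf{ClientTimeout} = 259200;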

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-28 Thread Les Mikesell
On Tue, Aug 28, 2012 at 8:29 AM, martin f krafft madd...@madduck.net wrote:
 The ClientTimeout setting may be your real issue though.  A running
 backup should never time out as long as anything is transferring, even
 if it takes days to complete but with rsync that value can be for the
 whole backup. I'd try raising it a lot and trying to catch up over
 a weekend.

 I have gone down this path too. I really don't like it because it
 is, as you say, a safety net and I don't want to loosen it.

 I don't quite understand your first comment though. The docs say
 this for ClientTimeout:

   Timeout in seconds when listening for the transport program's
   (smbclient, tar etc) stdout. If no output is received during
   this time, then it is assumed that something has wedged during
   a backup, and the backup is terminated.

   Note that stdout buffering combined with huge files being
   backed up could cause longish delays in the output from
   smbclient that BackupPC_dump sees, so in rare cases you might
   want to increase this value.

   Despite the name, this parameter sets the timeout for all
   transport methods (tar, smb etc).

 Especially the last sentence suggests that this also applies to
 rsync and so I do not understand why an rsync backup would time out,
 unless there was no more data for almost a day (72,000 seconds).

Maybe it is a bug that has been fixed in the current version, but I
know I have seen situations where the backup ended with an Alarm
signal even though files in the backup had been updating not long
before the timeout.  I assumed it was an obscure bug where the rsync
code does not advance the timer on activity.   If that is happening in
your case, raising the timeout at least temporarily might get past your
problem.  I think the default used to be 7200 seconds, so it was much
more common to see the issue.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-28 Thread Les Mikesell
On Tue, Aug 28, 2012 at 11:35 AM, martin f krafft madd...@madduck.net wrote:
 also sprach Les Mikesell lesmikes...@gmail.com [2012.08.28.1650 +0200]:
 Maybe it is a bug that has been fixed in the current version but I
 know I have seen situations where the backup ended with an Alarm
 signal but files in the backup had been updating not long before the
 timeout.  I assumed it was an obscure bug in the rsync code to not
 advance the timer on any activity.

 This sounds like what's happening here (3.1.0). You wouldn't happen
 to have a pointer to the patch, would you? Or is it fixed in 3.2.1
 (next Debian stable)? Then I would consider upgrading ahead of time…

No, I don't actually think it is fixed - but I'm not sure.  The
default was bumped from 2 to 20 hours at some point and I've set some
of mine even higher.

-- 
   Les Mikesell
 lesmikesell#gmail.com



Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-27 Thread Les Mikesell
On Mon, Aug 27, 2012 at 4:19 AM, martin f krafft madd...@madduck.net wrote:


 However, the status quo seems broken to me. If BackupPC times out on
 a backup and stores a partial backup, it should be able to resume
 the next day. But this is not what seems to happen. The log seems to
 suggest that each backup run uses the previous full backup (#506,
 not the partial backup #507) as baseline:

   full backup started for directory / (baseline backup #506)

Rsync should resume a partial, and probably would if you did not have
any other fulls.  I'm not sure how it decides which to use as the base
in that case.

 I only just turned on verbose logging to find out what's actually
 going on and whether there is any progress being made, but to me it
 seems like BackupPC is failing to build upon the work done for the
 last partial backup and keeps trying again and again.

 Has anyone seen this behaviour and do you have any suggestions how
 to mitigate this problem?

If you are running over ssh, you can try adding the -C option for
compression if you haven't already.  You could exclude some of the
new large files so a new run would complete, then include some more, do
another full, and repeat until you have the whole set.  Or use brute
force: take the server to the client LAN, or bring a complete clone of
the client's filesystem to the server LAN and temporarily change
ClientNameAlias to point to it while you do a full backup to get the
base copy.
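
As a sketch, assuming the stock ssh-based rsync command (adjust user and
paths to your setup), the -C just gets added to RsyncClientCmd:

    $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';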

Or, you might try adding it under a different hostname with
ClientNameAlias pointed at the original host to see if it does reuse
the partials when there is no other choice.


-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Archive as mirrored file structure

2012-08-27 Thread Les Mikesell
On Sat, Aug 25, 2012 at 12:52 PM, Trey Dockendorf treyd...@gmail.com wrote:
 I've been using BackupPC for many years now, and am now for the first
 time going to be using the Archive features.  For me the archives will
 not be for off-site or remote storage, but a sort of out-of-band
 copy of the backups.  The idea being that if BackupPC is down, and all
 that is available is the NFS share, I can simply cp or rsync files
 manually from the archive to a host.  I'd like the archive to not be a
 tar or any compressed format, but simply an exact 'mirror' of the
 system's backup.  So if hostA's defined share is '/' I'd like to have
 something like '/var/lib/BackupPC/pc/archived_hosts/hostA-2012-08-25/root
 file system', where root file system is an exact mirror of the
 share '/'.

 I'm not entirely familiar with all the utility scripts packaged with
 BackupPC, so this may be something already possible with BackupPC, but
 before I went and wrote a custom script to handle this I was hoping
 the community had advice or code I could make use of.


If you have enough bandwidth and a long enough backup window, I'd
recommend a simple direct rsync from the target hosts for this
instead.  Otherwise backuppc is still a single point of failure and if
something goes wrong mid-copy you won't have a good version anywhere.
  If you do want to do it through backuppc, you can probably work out
a way to use BackupPC_serverMesg to tell it to restore to a different
location 
(http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg04201.html)
or script an extract from the tar file the archive host generates.
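
If you go the tar route, something along these lines (untested sketch, run
as the backuppc user, paths made up) would unpack the most recent backup of
hostA into a plain directory tree:

    mkdir -p /mnt/nfs/archive/hostA-2012-08-25
    BackupPC_tarCreate -h hostA -n -1 -s / . | tar -xpf - -C /mnt/nfs/archive/hostA-2012-08-25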

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-27 Thread Les Mikesell
On Mon, Aug 27, 2012 at 3:45 PM, martin f krafft madd...@madduck.net wrote:
 also sprach Mike ispbuil...@gmail.com [2012.08.27.1502 +0200]:
 The fix I've used is to --exclude a good chunk of the backup so it does
 finish in a day, then --exclude a little less the next time it runs, and
 so forth.

 I have used this method, which you and Les suggested. And yes, it
 works. However, it's hardly a fix or even an acceptable solution
 in a large-scale deployment. There is no way I can tell this to
 hundreds of users who do occasionally dump larger files to their
 home directories.

The best solution would be to have enough bandwidth to meet your
requirements...   Or at least enough to catch up by running through
weekends.

 BackupPC must be able to cope with this, or it must learn.
 Otherwise, I am afraid, it is unsuitable.

 Do you see a way? How can I force it to use the last partial backup
 as a baseline when attempting a new full backup?

The only setting I can see that relates to this would be
PartialAgeMax.  Are the retries happening before this expires?
Otherwise you'd have to poke through the code to see why it prefers the
previous full.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC failing to back up a host with big files

2012-08-27 Thread Les Mikesell
On Mon, Aug 27, 2012 at 4:26 PM, Les Mikesell lesmikes...@gmail.com wrote:
 
 The best solution would be to have enough bandwidth to meet your
 requirements...   Or at least enough to catch up by running through
 weekends.

Forgot to mention that you might have to bump ClientTimeout way up.
In theory this is supposed to be an idle timer and only quit when
there is no activity, but there are circumstances where it is taken as
the time for the whole backup run.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Error Backing up Servers

2012-08-22 Thread Les Mikesell
On Wed, Aug 22, 2012 at 4:04 AM, gshergill
backuppc-fo...@backupcentral.com wrote:

 ==
 Also, when adding hosts in the web interface you can specify new=old
 to copy an existing host's configuration so once you get a system of
 each type working you can use it as a template when adding more and
 then edit in any specific differences in the new instances.
 ==

 That's the newhost=copyhost option for the hostname right?

 For example, I want server2 to take settings from server 1, so in the web ui 
 I go to Edit hosts, add new, and in the host column I add;

 server2=server1

 Is that correct?


Yes, then go to the server2 page and change any details that differ.
If the shares are the same and you don't set ClientNameAlias, you
might not need to change anything for additions made this way.   Note
that you want to be sure your source settings are correct before
making a lot of copies, though.  Subsequent changes of settings that
override the global config will have to be repeated for each copy.

You probably want the global settings to match your most common target
type because you can make changes there and affect all targets that
don't override it, but the template/copy approach is good where you
have several of a different type (like windows/linux).

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Error Backing up Servers

2012-08-21 Thread Les Mikesell
On Tue, Aug 21, 2012 at 11:51 AM, Olivier Ragain orag...@chryzo.net wrote:

 However, as explained in the documentation, you can only create one
 default config that will be applied to every host. If a host is an
 exception, you have to manually edit its conf file. Usually, I use the
 web interface. And in the main config file, I end up having pretty much
 all the excludes, be it windows / linux or MAC.

Also, when adding hosts in the web interface you can specify new=old
to copy an existing host's configuration so once you get a system of
each type working you can use it as a template when adding more and
then edit in any specific differences in the new instances.

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] rsync: full backup more than twice faster than incremental backup

2012-08-16 Thread Les Mikesell
On Thu, Aug 16, 2012 at 3:47 AM, Udo Rader list...@bestsolution.at wrote:
 Hi,

 after adding one of our customers' server we see some weird backup
 effects with a full backup taking (somewhat expected) 7 hours but
 incremental backups taking more than twice that time.

[...]

 One of the reasons I can think of is the file structure on that host. It
 serves as a special storage pool for a customer developed application
 and as such it has really really many subdirectories with really really
 many subdirectories with really really many subdirectories. And by
 really really many I mean really really many ... So my best guess would
 be that building the file list diff takes much longer than just fetching
 the files as they exist.

Only the first full will 'just fetch', and that's because there is no
local directory tree to compare.  Subsequent runs will not only
compare trees, but the fulls do a block-checksum compare of the data.

 I've now disabled incremental backups on this server, but maybe someone
 has an idea how to enable incremental backups for this host as well.

I think you've jumped to conclusions here - you need to time full runs
other than the first.  Other things to keep in mind are:
 - Incremental runs copy everything that has changed since the previous full.
 - Using checksum caching will avoid the need to uncompress/compare on the
   server, but only after the 2nd full of an unchanged file.
 - The whole directory tree is held in RAM.



Re: [BackupPC-users] Help setting up backups on HFS+ network drive

2012-08-15 Thread Les Mikesell
On Mon, Aug 13, 2012 at 9:50 PM, rdarwish
backuppc-fo...@backupcentral.com wrote:

 I have tried searching for the answer to this, but could not find it, so 
 please do not get mad if the answer is easily located somewhere.

 I have installed and configured BackupPC on a Linux (Debian Squeeze) box, and 
 it works perfectly, however, my true need for the backup is to have the 
 backup actually saved on to a network drive, which is formatted as a HFS+ 
 drive (Apple Time Capsule). Now, my problem is that when starting BackupPC 
 with this location setup, BackupPC complains that hard links are not 
 supported by the backup location:

 Can't create a test hardlink between a file in /mnt/.cifs/backup/pc and 
 /mnt/.cifs/backup/cpool.  Either these are different file systems, or this 
 file system doesn't support hardlinks, or these directories don't exist, or 
 there is a permissions problem, or the file system is out of inodes or full.  
 Use df, df -i, and ls -ld to check each of these possibilities. Quitting...

 Now I know it's the hard link problem, and not a permission issue, because if 
 I issue df -i, I get:

 Filesystem Inodes   IUsed   IFree IUse% Mounted on
 //frank/torgo/crow/backup  0   0  0   -
 /mnt/.cifs/backup


 Now, I know that the drive is capable of handling hard links, but I think 
 that I need either an option to SAMBA or to mount.cifs to turn on that 
 functionality, but I cannot find out which option. I had seen mention of Unix 
 Extensions, and cat /proc/fs/cifs/LinuxExtensionsEnabled produces 1, so I 
 don't really know where else to look for where to put such an option.

Does the device offer nfs as an option?  If so, I'd use that instead of cifs.
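
If it does, a quick manual test (export path and mount point made up here)
will tell you whether hardlinks work before you point BackupPC at it:

    mount -t nfs timecapsule:/backup /mnt/backup
    mkdir -p /mnt/backup/pc /mnt/backup/cpool
    touch /mnt/backup/pc/linktest
    ln /mnt/backup/pc/linktest /mnt/backup/cpool/linktest && echo "hardlinks OK"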

 Am I going about this in the wrong way? Should I be backing up to this drive 
 in a different manner? Right now, $Conf{XferMethod} = 'tar'; but I have tried 
 it set to smb as well with the same results.

The way you back up the target systems won't make any difference
regarding the need to make hardlinks on the storage archive.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC_archiveStart problems

2012-08-14 Thread Les Mikesell
On Tue, Aug 14, 2012 at 6:46 AM, Germano Paciocco
germano.pacio...@gmail.com wrote:
 Hello.
 I'm trying to archive full backuppc pool with BackupPC_archiveStart command,
 and I can't get compressed file with it.
 I searched a lot, but I can't find solutions.

Are you saying that the output is different for the specified
archiveHost target when you use the command line than it is when you
do it through the web interface, or that you don't have the correct
options set for the archiveHost?   If you 'edit config' for the
archiveHost in the web interface, what do you have in
Xfer/ArchiveComp?

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Centos 6.3 install

2012-08-10 Thread Les Mikesell
On Fri, Aug 10, 2012 at 8:05 AM, Kameleon kameleo...@gmail.com wrote:
 Thanks Les. That will give me the latest version of backuppc correct? Also
 all the configs and such should be in the default places like /etc/backuppc
 and /usr/share/backuppc right? I do mount my pool to /var/lib/backuppc so I
 am good there.


The packaging on the EPEL version uses /etc/BackupPC,
/usr/share/BackupPC, and /var/lib/BackupPC respectively.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Centos 6.3 install

2012-08-09 Thread Les Mikesell
On Thu, Aug 9, 2012 at 4:50 PM, Kameleon kameleo...@gmail.com wrote:
 I am trying to find the best way to install backuppc on my centos 6.3 box.
 Everything I read is for CentOS 5 and older so a lot does not apply to the
 new version. Anyone have any pointers or good sites? I tried just following
 the readme in the tarball but backuppc won't start. Our old server is ubuntu
 10.04 so that is no help either.

Add the EPEL repository to your yum config if you haven't already and:
yum install backuppc

It will save some trouble if you mount the archive partition you want
to use at /var/lib/BackupPC before the install.
Then follow the instructions in /etc/httpd/conf.d/BackupPC.conf to add
your web password.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Copy host backups from one BackupPC server to another

2012-07-30 Thread Les Mikesell
On Mon, Jul 30, 2012 at 6:04 AM, Jonas Meurer jo...@freesources.org wrote:
 
 But my problem is with inactive hosts. I have some old hosts which
 aren't available anymore, but for which I keep the last backups in the
 system. I'd like to migrate these backups to the new backup server as
 well, as the old one will be powered off in near future.

If you don't expect to restore, but just want a copy for safekeeping,
use the archive host feature or BackupPC_tarCreate to make a standard
tar archive that can be kept independently from backuppc.   If you
want the data to be pooled with other backups, at this point the best
you can do is restore  (perhaps one at a time) to somewhere with
sufficient disk space, then let the other server back it up.

 I cannot believe that it's impossible to move backups of hosts from one
 BackupPC instance to another. And actually I believe that
 BackupPC_tarPCCopy is the right tool to do so. But I don't get how to
 use it.

The usual approach is to image-copy the partition or drive holding the
data, then optionally resize the filesystem on the new server.
BackupPC_tarPCCopy expects the pool/cpool directories to have been
copied first with rsync and to be exactly in sync (and should be
documented as such).  You can't do that after making other backups on
the new server.  You can simply 'rsync -aH' the host directories under
pc to the corresponding new location and everything will work.
However, those files will not be pooled with other hosts.
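
A minimal sketch of that last option (host and paths made up):

    rsync -aH /var/lib/BackupPC/pc/oldhost/ newserver:/var/lib/BackupPC/pc/oldhost/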

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] SMB not copying all files

2012-07-30 Thread Les Mikesell
On Mon, Jul 30, 2012 at 11:37 AM, Matthew Postinger
matthew.postin...@gmail.com wrote:
 I've implemented backuppc, so far it copies some files, but not
 everything that is in the directory.

 It happens on multiple machines. I've checked permissions. I'm using samba.

 Any ideas?

The error log should show the reason for each file.   Windows normally
locks files if anything else has them open.

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] SMB not copying all files

2012-07-30 Thread Les Mikesell
On Mon, Jul 30, 2012 at 12:58 PM, Matthew Postinger
matthew.postin...@gmail.com wrote:

 2012-07-30 11:37:09 full backup started for share C$
 2012-07-30 11:38:41 full backup 12 complete, 2123 files, 990690847 bytes,
 3 xferErrs (0 bad files, 0 bad shares, 3 other)
 2012-07-30 11:38:41 removing full backup 11
 2012-07-30 12:00:19 full backup started for share C$
 2012-07-30 12:01:33 full backup 13 complete, 2123 files, 990690847 bytes,
 3 xferErrs (0 bad files, 0 bad shares, 3 other)
 2012-07-30 12:01:33 removing full backup 12
 2012-07-30 12:08:47 full backup started for share C$
 2012-07-30 12:13:29 full backup 14 complete, 9096 files, 2368384305 bytes,
 2 xferErrs (0 bad files, 0 bad shares, 2 other)
 2012-07-30 12:13:29 removing full backup 13
 2012-07-30 13:55:43 incr backup started back to 2012-07-30 11:08:47
 (backup #14) for share C$
 2012-07-30 13:55:48 incr backup 15 complete, 4 files, 3191878 bytes, 5
 xferErrs (0 bad files, 0 bad shares, 5 other)
 2012-07-30 13:55:48 removing incr backup 5


 Is all I'm seeing. The log level was set to 1, I just changed it to 2 and
 started a new backup.

That's the main log.  Look at the xferlog or errors in the 'xfer error
summary section' where there is a link for each backup number.

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] backupPC and backup via VPN connection.

2012-07-26 Thread Les Mikesell
On Thu, Jul 26, 2012 at 2:51 PM, Timothy J Massey tmas...@obscorp.com wrote:

 Arthur Darcet arthur.darcet+l...@m4x.org wrote on 07/26/2012 03:20:15 PM:


  You can easily configure the VPN to give static IP to your clients,
  and then just map a dummy name to the VPN IP using /etc/hosts on the
  BackupPC server.


 Static IP addresses that are on the same local broadcast domain as the 
 BackupPC server?  And that will allow the client to use the same IP address
 both when the client is behind the VPN *and* when the client is *not* behind 
 the VPN but connected directly to the office?

 Also, VPN bridging (which is what this technique requires) comes with its own 
 set of difficulties, all so you can avoid making name resolution work over 
 the VPN...

 I'd rather fix the actual problem (name resolution) instead of mangling my VPN
 and ending up with two problems!  :)

If you put the IP in /etc/hosts on the server, it will take care of
name resolution whether the target is on the same subnet or not.  And
for the backuppc application, you can put an IP address in
ClientNameAlias to take care of it.
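
Either way it is a one-liner; for example (address and name are just
placeholders):

    # /etc/hosts on the BackupPC server
    10.8.0.15   laptop1

    # or in the per-host config (pc/laptop1.pl)
    $Conf{ClientNameAlias} = '10.8.0.15';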

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC data pool

2012-07-20 Thread Les Mikesell
On Fri, Jul 20, 2012 at 5:01 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 Sorry for the late reply.

 anything/everything in a VM is an attraction.  :-)

Sure, where you don't mind the loss in performance.

 I had hoped for a pre-configured VM appliance so I could easily evaluate it.

But if someone else did it, it would be approximately what you'd get
from installing or cloning a CentOS or Ubuntu VM and installing the
package.  So it would have saved you a minute or two of package
install time - and might not have been on your preferred platform.

 I did not have such an easy implementation experience (as evidenced by
 apparent exhaustion of this listserv during the process).  Being able to
 download, turn on, and make a few modifications to implement in an
 environment is a good way to promote good software.  Seems that a VM would
 serve just fine in smaller environments (including home use), and should it
 need to grow beyond the performance capacity of a VM, then one could move to
 a physical solution.

Agreed, but besides the package, which your package manager will take
care of by itself, there are just some small config files involved
whether you are virtual or physical.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC data pool

2012-07-13 Thread Les Mikesell
On Fri, Jul 13, 2012 at 10:37 AM, Bryan Keadle (.net)
bkea...@keadle.net wrote:
 Can BackupPC's data pool (/var/lib/BackupPC) be a CIFS mount, or must it be
 a block device?  I'm thinking it requires a block device due to the
 hardlinks/inodes BackupPC depends on, and I'm not sure that a cifs-mounted
 folder gives you that ability.

I think CIFS (as unix extensions to SMB) technically handles hardlinks
when the source system is unix/linux and the underlying filesystem
supports them.   However I wouldn't expect this capability to be very
well tested, because in that scenario everyone would use NFS anyway.
And in the backuppc case it would be much more sensible to just run
the program on the box with the drives.  If you are thinking of one of
those small NAS devices that don't support NFS, I wouldn't count on
it.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC data pool

2012-07-13 Thread Les Mikesell
On Fri, Jul 13, 2012 at 11:38 AM, Bryan Keadle (.net)
bkea...@keadle.net wrote:
 Thanks for your reply.  Yeah, we're using a NAS device, but not necessarily
 those small ones - using this Drobo B800fs.  So NFS would be a
 protocol-based option for the data pool?  Still, iSCSI would be best if not
 DAS?


Yes, NFS or iSCSI will work, but it is really a lot cheaper to just
throw some big drives in a linux box.  And in all the questions you've
asked I don't remember any yet about getting offsite copies of the
archive, which is usually the one hard thing with backuppc that you
need to plan from the start.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC_dump syntax

2012-07-13 Thread Les Mikesell
On Mon, Jul 9, 2012 at 10:50 AM, Bryan Keadle (.net) bkea...@keadle.net wrote:

 How might one suggest such an enhancement request - a trial run feature
 that would show *WHAT* would get backed up, but not actually backup?

I'm not really sure the concept makes sense across the different
transports.   Rsync has a --dry-run option that can be used along with
--verbose to see filenames with changes, but tar/smbtar are either
going to give you the content or not depending on the options you give
them.
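
For the rsync case you can at least get a rough preview outside of BackupPC,
e.g. (sketch; assumes ssh access to the client as root):

    mkdir -p /tmp/empty
    rsync -av --dry-run root@client:/home/ /tmp/empty/ | less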

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC data pool

2012-07-13 Thread Les Mikesell
On Fri, Jul 13, 2012 at 3:07 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:

 I have just recently been introduced to BackupPC and I've been pursuing this as a
 VM appliance to backup non-critical targets.  Thus, as a VM, I would use
 remote storage (iSCSI) to provide capacity instead of local virtual disk.

That can work, but given the ease of apt-get or yum installs on any
linux system, what's the attraction of a VM?  You'll get a lot of
overhead for not much gain.   And if you end up sharing physical media
with the source targets, you shouldn't even call it a backup.

 Les - as for the offsite copies of the archive, you're speaking of a backup
 of the backup.  For the purpose that BackupPC provides, non-critical data,
 I'm not so concerned with backing up the backup.

What's your plan for a building disaster?  If it is 'collect the
insurance and retire' then you probably don't care about offsite
copies...

  However, should BackupPC
 start holding backups that I would need redundancy for, what do you
 recommend?  What are you doing?  Since I'm thinking SAN-based storage for
 BackupPC, I figured I would just use SAN-based replication.

If you aren't sharing the media with the data you are trying to
protect, that would work.   But if you have the site-to-site
bandwidth for that, it would be much cheaper to just run the backups
over rsync from a backuppc server at the opposite site.  I have
mostly converted to that approach now, but an older setup that is
still running has a 3-member RAID1 where one of the drives is swapped
out and re-synced weekly.   These were initially full-sized 750 Gig
drives, but I'm using a laptop-size WD (BLACK - the BLUE version is
too slow...) for the offsite members now.   Other people do something
similar with LVM mirrors or snapshot image copies.
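
The weekly swap is just the usual mdadm fail/remove/add cycle, roughly
(device names made up):

    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # pull the offsite member
    # ...physically swap the drives...
    mdadm /dev/md0 --add /dev/sdc1                       # re-sync the returning member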

  But should that
 not be wise or available, would you just stand up an rsync target and just
 rsync /var/lib/BackupPC to some offsite target?

The number of hardlinks in a typical backuppc archive makes that a
problem or impossible at some point, because rsync has to track and
reproduce them by inode number, keeping the whole table in memory.
I think a zfs snapshot with incremental send/receive might work, but
you'd need FreeBSD or Solaris instead of Linux for that.
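
Roughly (untested sketch; pool and host names made up):

    zfs snapshot tank/backuppc@week28
    zfs send -i tank/backuppc@week27 tank/backuppc@week28 | ssh offsite zfs receive -F tank/backuppc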

-- 
Les Mikesell
   lesmikes...@gmail.com



Re: [BackupPC-users] Improving large rsync backup performance

2012-07-09 Thread Les Mikesell
On Sun, Jul 8, 2012 at 9:59 PM, Kenneth Porter sh...@sewingwitch.com wrote:
 I've got a user running Win7-64 with 3 large drives: C (original boot), D
 (data), and G (Win7 boot). I've configured rsyncd on it to serve them as 3
 modules named for the drive letters. A full backup to my CentOS 5 box
 takes about 40 hours, and an incremental takes about 8, so I've
 configured his system to only back up on weekends. But even then, he often
 works weekends and will reboot his machine because it's running slow from
 the backup load, killing the backup.

 What can I do to optimize the backup to reduce the time taken so it's more
 likely to run to completion? I was thinking I could reconfigure it to be
 three hosts with one drive each, so that there's more chance of running to
 completion.

Splitting the runs should help.  You might even want to look at the
directory contents and split at the subdirectory level.   Other than
sheer volume, the things that can make rsync slow are very large
numbers of files, where handling the directory listing becomes
cumbersome, and very large files with changes, where the server has to
reconstruct a full copy with bits from the old and bits from the
transfer.  An incremental against a fairly recent full should run
about as fast as both systems can read the directory contents, though.
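
A sketch of the split (file names and alias are just examples) - three
"hosts" that all point back at the same machine, each backing up one of the
rsyncd modules you already defined:

    # pc/bigbox-c.pl
    $Conf{ClientNameAlias} = 'bigbox';
    $Conf{RsyncShareName}  = ['C'];
    # pc/bigbox-d.pl and pc/bigbox-g.pl: same, with ['D'] and ['G']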

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Improving large rsync backup performance

2012-07-09 Thread Les Mikesell
On Mon, Jul 9, 2012 at 12:56 PM, Kenneth Porter sh...@sewingwitch.com wrote:

 The total size is 0.5 TB according to the Host Summary page. The network is
 100 Mbit. The server is a Dell PE2900 with 4 GB. Probably the slowest link
 is the external USB drive used to hold the backup on the server, formatted
 as ext3. The client is mostly used for solid modeling.

 I believe I'm using the default rsync args and don't know if there's
 anything I can do to improve things there:

 '--numeric-ids',
 '--perms',
 '--owner',
 '--group',
 '-D',
 '--links',
 '--hard-links',
 '--times',
 '--block-size=2048',
 '--recursive',

 The rsyncd service running on the Windows client is from cwrsync, version
 3.0.8.

See the docs about adding '--checksum-seed=32761'.  It will save
reading/uncompressing the server-side copy for the next comparison,
but note that it doesn't start working until after the 2nd full run
with an unchanged file.
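
In config.pl (or the per-host override) that means keeping the flags you
listed and appending the seed, e.g.:

    $Conf{RsyncArgs} = [
        '--numeric-ids', '--perms', '--owner', '--group', '-D',
        '--links', '--hard-links', '--times', '--block-size=2048',
        '--recursive',
        '--checksum-seed=32761',   # enables checksum caching
    ];

The docs suggest adding the same flag to RsyncRestoreArgs as well.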

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Process for Overriding hosts configuration

2012-07-09 Thread Les Mikesell
On Mon, Jul 9, 2012 at 4:28 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 What is the actual process/command that happens when I click on the Override
 button of a hosts config file - specifically the SmbShareName?

Basically it means that the value set will be saved in the per-pc
config instead of being inherited from the global config.
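
Concretely, the override just ends up as a line in the per-host file, e.g.
(path shown is the EPEL layout; yours may differ):

    # /etc/BackupPC/pc/some-xp-host.pl
    $Conf{SmbShareName} = ['c$'];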

 I have a Self Provisioning solution whereby a user can authenticate to a
 BackupPC web page, and their computer will automatically provision itself
 for backup.  However, I have  a mix of Windows 7 and Windows XP machines.
 As part of the self provisioning process the hosts would be defaulted to the
 global share name of C$ (uppercase) which contains Windows 7-specific
 BackupExcludeFiles specs.  However, I can determine host OS version at the
 time of the provisioning, so if the workstation is XP I'd want to override
 the SmbShareName to be c$ (lowercase) which would contain Windows
 XP-specific BackupExcludeFiles.

Excluding things that don't exist won't hurt anything.  And some small
number of errors in the log files isn't really fatal either.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Process for Overriding hosts configuration

2012-07-09 Thread Les Mikesell
On Mon, Jul 9, 2012 at 5:18 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 The Windows 7 Junctions (which fail) are important directories in Windows
 XP, and I've made an effort to exclude all the junctions from Windows 7.

Not sure what you meant by provisioning, but maybe you can use an
existing config as a template.   For example if you are adding hosts
in the web interface you can specify NEW=OLD in the host field and it
will add the NEW host with a copy of all of the host-specific settings
from the existing OLD host.

--
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC Errors log

2012-07-06 Thread Les Mikesell
On Fri, Jul 6, 2012 at 11:31 AM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 I'm doing an initial install and testing of BackupPC.  On a host Backup
 Summary page, it shows 111 #Xfer errs.  When I click on the Errors link, I
 get a page with a title stating (Extracting only Errors), and yet it's a
 *HUGE* long list, certainly more than 111 errors.  What am I to make of
 this?  Is the filter for extracting only errors broken?

 Contents of file /var/lib/BackupPC/pc/bkeadle-vmw732/XferLOG.0.z, modified
 2012-07-01 20:54:01 (Extracting only Errors)

 Running: /usr/bin/smbclient bkeadle-vmw732\\C\$ -U VOP/BackupPC -E -d 1
 -c tarmode\ full -Tc -
 full backup started for share C$
 Xfer PIDs are now 12592,12591
 [ skipped 10 lines ]
 NT_STATUS_ACCESS_DENIED listing \Documents and Settings\*
 [ skipped 27 lines ]

Not sure why you are seeing items that don't look like errors in the
error listing, but this is the thing you need to be concerned about
since it covers the entire tree below and is probably what you really
want to back up.   Either the credentials you are using don't have
full admin/backup rights or your samba version isn't passing things
correctly.   I'm not much of a windows expert so I'm not sure how to
fix it.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC Errors log

2012-07-06 Thread Les Mikesell
On Fri, Jul 6, 2012 at 12:29 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 In pursuit of answers, where does this function/script exist that I might
 look at the underlying code that is presumably extracting only errors?

Probably somewhere in CGI/View.pm, wherever your package installed it.
But I think it just does some filtering on the same Xferlog you get in
the full view, and I'd be more concerned about eliminating the errors
than why you are seeing some non-error lines.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC Errors log

2012-07-06 Thread Les Mikesell
On Fri, Jul 6, 2012 at 12:49 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:

 But how to eliminate the errors.  The number of errors the summary
 indicates is 111; however, the number of lines referenced is thousands,
 and the lines, as previously shown, doesn't really indicate an error.  What
 about this snippet indicates an error (especially since I see these files
 are successfully backed up)?


The NT_STATUS_ACCESS_DENIED is the one to worry about, especially on
directories because that is probably a permissions issue.   You should
expect some number of locked file conflicts, though.

--
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Send Test email

2012-07-04 Thread Les Mikesell
On Wed, Jul 4, 2012 at 3:39 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 As referenced here, there was talk of providing a way for sending test
 emails.  It would be nice to have this ability to test/verify email setup
 within BackupPC web GUI.

 However, how/where does BackupPC know what SMTP server to use for sending
 email?  Is there a configuration option somewhere to specify an SMTP server
 to use?  Logged in as the backuppc user, I am running:

 BackupPC_sendEmail -t |less


 And it shows the email content, but not whether it was successful sending
 it, or if it was sent at all.  I haven't received the email, so there seems
 to be some configuration I'm missing somewhere.

It should hand it to sendmail, which will deliver it however sendmail
is configured.   Normally sendmail will just follow DNS MX records.
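
You can test that path directly, bypassing BackupPC (address made up):

    printf 'Subject: backuppc mail test\n\ntest body\n' | /usr/sbin/sendmail -v someone@example.com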

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Where does default email content come from?

2012-07-02 Thread Les Mikesell
On Mon, Jul 2, 2012 at 7:13 AM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 I received my first BackupPC admin email.  I'm not sure where the verbiage
 of this content comes from.  For starters, it mentions, ... you should
 contact IS to find out why backups are not working.  I want to change IS
 to I.T. which is how our organization is referred to, having retired IS
 a few years back.  Where would I make this change?


http://backuppc.sourceforge.net/faq/BackupPC.html#email_reminders__status_and_messages
The messages should be in the language-dependent files.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Where does default email content come from?

2012-07-02 Thread Les Mikesell
On Mon, Jul 2, 2012 at 2:16 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:
 I first looked there, through the interface - but it doesn't show any of the
 information I'm seeing as the default message being produced.  Instead of
 creating my own entire message text, I just want to change the reference to
 IS to I.T. - so I'm looking for the source of the default message being
 generated.


Per the doc page in the link:
  These values are language-dependent. The default versions can be
found in the language file (eg: lib/BackupPC/Lang/en.pm). If you need
to change the message, copy it here and edit it.

Your lib/ location may depend on how you installed backuppc.  Mine
(EPEL rpm-based) is under /usr/share/BackupPC/.
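
So rather than editing en.pm in place (an upgrade would overwrite it), copy
the text into config.pl and change the wording there; a sketch only:

    # paste the full default text of $Lang{EMailNoBackupRecentMesg} from
    # lib/BackupPC/Lang/en.pm here, with "IS" changed to "I.T."
    $Conf{EMailNoBackupRecentMesg} = "...edited default text...";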

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] remove-source-files

2012-06-27 Thread Les Mikesell
On Wed, Jun 27, 2012 at 3:26 PM, Ricardo Melendez
r.melen...@lipu.com.mx wrote:
 Hi, I need to make a backup and free disk space on remote backed up
 computer.

 I launch the following via shell with good results, the files are deleted on
 remote computer.

 rsync -av --remove-source-files user@remotepc:/backedup/folder /localfolder

 But I need to do this via the web inteface of backuppc, I insert an extra
 parameter on RsyncArgs with --remove-source-files.

 But when trying to backup I get the following error at log file:

 full backup started for directory /cygdrive/c/tempo (baseline backup #1)

 2012-06-27 15:00:48 Got fatal error during xfer (fileListReceive failed)
 2012-06-27 15:00:53 Backup aborted (fileListReceive failed)

 The backup run successful without the extra parameter
 (--remove-source-files)

 What could be the problem??

The server-side rsync in backuppc is a re-implementation in perl that
knows how to compare against the compressed copies.  It probably
doesn't understand the --remove-source-files option, even though the
work would be done at the other end.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Offsite copy

2012-06-26 Thread Les Mikesell
On Tue, Jun 26, 2012 at 12:20 PM, Timothy J Massey tmas...@obscorp.com wrote:

  What I'm after is a ready-to-use snapshot, as it looks on the
  server I'm backing up, or what it would look like if using the
  archive host feature but just not in tar format.


 Why do you expect BackupPC to provide that?  Other than using the Archive
 function to create a tar and extracting it somewhere, there isn't an option
 to do that within BacukpPC.  It's not something that is included in its
 capabilities.

Theoretically at least, you could add the host where you want the
snapshot as a backuppc target, but disable backups on it.  Then you
could do a restore to it using the data from the original source with
rsync as the transfer method.  However, I don't think there would be
any way to automate that to make the restore run after a new backup.

---
   Les Mikesell
 lesmikes...@gmail.com


