[BackupPC-users] winexe for Windows 8 / 2012 or greater for BackupPC client (Was: Re: A question about partial backups and fatal errors)

2015-12-09 Thread Timothy J Massey
Michael Stowe  wrote on 12/04/2015 06:17:01 
PM:

> I'm *pretty* sure both these things are fixed -- I took a quick look at 
> the code on github and it looks like it'll handle 7 drives in the 
> scripts and at least it thinks it detects 2012 (I'm pretty sure I tested 

> this, but I could be easily convinced it doesn't work if somebody 
> happens to know better.)

It seems that Windows 2012 and higher need a custom-compiled version of 
winexe, different from the stable one linked from the BackupPC client 
page.  Details here: 
https://www.reddit.com/r/linuxadmin/comments/2xsmge/winexe_1001_for_centos_6_x86_64/

I haven't been able to find one for CentOS 6 yet.  I'm surprised that 
there isn't one to be had on the Interwebs;  I'll have to find some time 
to set up the specific dev environment for that.  Unless I'm missing 
something obvious?
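
(For what it's worth, a quick way to sanity-check whether a given winexe 
build can talk to a 2012-era box at all -- the domain, account and host 
name here are placeholders -- is something like:)

    # Quick smoke test: run a trivial command on the Windows host.
    # A winexe build without the newer SMB support typically fails to
    # connect to 2012/8+ at this point.
    winexe -U 'MYDOMAIN\Administrator%password' //win2012-host 'cmd /c ver'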

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] winexe for Windows 8 / 2012 or greater for BackupPC client (Was: Re: A question about partial backups and fatal errors)

2015-12-09 Thread Timothy J Massey
Kris Lou  wrote on 12/09/2015 01:23:05 PM:

> I've had to build it myself for winexe-waf, but unfortunately the 
> lfarkas repo doesn't contain the necessary libraries anymore to do 
> that with CentOS 6. 

Ouch!  You wouldn't happen to still have that binary, would you?  :)

> (Haven't looked into building on CentOS 7 yet.)
> Other than that, I think I found a repo on the opensuse servers 
> seemingly attached to one of the winexe devs 
> (http://download.opensuse.org/repositories/home:/ahajda:/winexe/), but I 
> haven't used it.  The version numbers and datestamps don't really 
> match up for CentOS 6 and "newer" versions of winexe.

And it's not:  1.0.0.  I tested it in my previous searching...  :(

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] A question about partial backups and fatal errors

2015-12-08 Thread Timothy J Massey
Michael Stowe  wrote on 12/04/2015 06:17:01 
PM:

> I'm *pretty* sure both these things are fixed -- I took a quick look at 
> the code on github and it looks like it'll handle 7 drives in the 
> scripts and at least it thinks it detects 2012 (I'm pretty sure I tested 

> this, but I could be easily convinced it doesn't work if somebody 
> happens to know better.)

Excellent!  Your statement above caused me to do some extra searching.  A 
couple of things:

1) I was using a much older version.  I can't find the installer I used at 
the moment, but I'm pretty sure it was 1.1, and almost certain it was from 
the goodjobsucking.com site and not the michaelstowe.com one.  In fact, I 
*did* look for an update before I wrote something:  but I was looking at 
the old site.  I didn't even know about the new one.  So, you *have* 
updated things -- and significantly! Yay!

For others:  Michael seems shy about linking people to his work, so I'll 
do it for him:  http://www.michaelstowe.com/backuppc/

2) I looked for a github site for this software and couldn't find one, 
including searching for various permutations of "backuppc github".  I 
probably don't need it with the updated items linked above, but would you 
mind sharing it?

Again, thank you very much for the updates!  I'm excited about being able 
to lazy-install it on Windows Server 2012 now, as well as maybe using it 
on my multi-drive servers!

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 



Re: [BackupPC-users] A question about partial backups and fatal errors

2015-12-08 Thread Timothy J Massey
Timothy J Massey/OBSCorp wrote on 12/08/2015 01:46:31 PM:

> 2) I looked for a github site for this software and couldn't find 
> one, including searching for various permutations of "backuppc 
> github".  I probably don't need it with the updated items linked 
> above, but would you mind sharing it?

What is it about clicking Send that improves reading comprehension?  Right 
on the page (http://www.michaelstowe.com/backuppc/ ) is a link to the 
Github page:  https://github.com/mwstowe/BackupPC-Client

In my defense, it was way at the end of a blurb about a specific older 
version, but still...  :(  (Paragraphing FTW!  :)  )

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] A question about partial backups and fatal errors

2015-12-04 Thread Timothy J Massey
Michael Stowe  wrote on 12/04/2015 04:07:34 
PM:

> I do use rsync, and winexe to handle shadow copies so I don't have to 
> worry about open files and such.  And I put together an installer 
> package with all the pieces.

Funny you should mention that...

I really like Michael's installer package.  It's very handy.  Except for 
two things:

= It only supports a single drive (as in just C:, and I always have a C: 
and a D: on servers).  That's a fairly tricky hurdle to overcome, and I do 
have servers where I only *need* to back up one drive, so I use it there.

= But the bigger deal is that the installer won't work on Windows Server 
2012 [R2].  The installer can't identify the version of VSS it needs and 
will fail rather than use the 2008 one.  I looked at updating the 
installer, but my installer-scripting-fu is weak and I wasn't able to 
accomplish it.

I've been told that it's possible to take what Michael's installer does 
and actually do it by hand but I haven't had time to dig into that.  If 
anyone has a walk-through it would be *much* appreciated:  I'd love to be 
able to take advantage of VSS on the 2012 servers I have where I only need 
to back up a single drive.

Anyway, just thought I'd take the opportunity to thank Michael for his 
very useful tool and maybe beg a little for an update...  :)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Automated regular archive of latest full backup

2015-12-02 Thread Timothy J Massey
martin f krafft  wrote on 12/02/2015 05:20:31 PM:

> I am looking for a way to automate sending an archive (tarball) of
> the latest backup of each of my hosts to an offsite machine, using
> scp and GnuPG for encryption. Can this be done within BackupPC and
> scheduled regularly, or is this a cronjob I'd need to hack up?

Well...  BackupPC can generate tarballs.  That's about it.  Everything 
else you've described is left as an exercise for the reader:  how to 
run it on a schedule (such as a cron job, as you surmised), how to 
encrypt it, how to get it to the remote machine (scp, or an 
NFS/CIFS/SSHFS mount, etc.), and any other features you'd like.

Personally, I simply archive them to a local machine with a removable 
tray, so I don't have to worry about scp to another machine -- and I 
believe in keeping archives as *simple* (and disaster-recoverable) as 
possible, so no GnuPG.  So I use a cronjob to generate a file and submit 
that file to BackupPC using BackupPC_serverMesg.  This is the same method 
that is used when you run a manual archive job using the Web GUI:  I just 
reverse-engineered it from the source.  Others use BackupPC_tarCreate by 
hand.  That's probably a better choice if you'll be piping the output 
somewhere instead of simply writing it to the filesystem like I do.
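
As a rough sketch of the BackupPC_tarCreate route (the host name, share 
name, destination and GnuPG recipient below are all placeholders -- adjust 
the paths for your install):

    #!/bin/sh
    # Cron job: tar up the most recent backup of one host, encrypt it and
    # push it offsite.  Run as the backuppc user.
    HOST=myhost
    DEST=offsite.example.com
    /usr/share/BackupPC/bin/BackupPC_tarCreate -h $HOST -n -1 -s /share / \
        | gzip \
        | gpg --batch --encrypt --recipient backups@example.com \
        | ssh $DEST "cat > /archive/$HOST-$(date +%F).tar.gz.gpg"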

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Support Question

2015-11-24 Thread Timothy J Massey
Les Mikesell  wrote on 11/24/2015 01:29:24 PM:

> > Also, can anyone explain why when our updates are set to every 6.
> 97 days are still going out after the completition of every backup?
> 
> A stock backuppc system never sends emails about success - only about
> backups that have failed for a configured amount of time.   Maybe you
> have added some other notification script.  I think some have been
> posted to the list in the past.   You might start by looking at
> additional cron jobs.

And so given that fact (and that *is* a fact) that the e-mails are not 
from BackupPC, there are two things to consider:

1) The e-mails are *not* coming from BackupPC, so we can't really help you 
to figure out where they're coming from!  :)  Others have mentioned 
checking the manifold places that cron can run to see if it's there; 
another thing would be to look at the headers of the e-mail to see if 
there are any clues in there as well.

2) You've changed your e-mail frequency to 6.97 in order to avoid the 
annoyance of the daily e-mails.  That means that you will now have to have 
a full *week* of failed backups before BackupPC alerts you to this at 
*all*.  It's your system, of course, but now that you understand that 
being bugged is *not* BackupPC's fault, you may want to put your e-mail 
frequency back to where it was:  2.5 is the factory default.  (By the way, 
that's why sending out daily "I'm alive!" e-mails is bad, and why BackupPC 
doesn't do it:  *everyone* rapidly ignores *all* e-mails from that 
source!)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-23 Thread Timothy J Massey
Holger Parplies wb...@parplies.de wrote on 05/23/2015 09:29:25 AM:

> for the archives: you don't strictly *need* the free space. You can pipe the
> output of BackupPC_tarCreate directly into a 'tar x' and tell tar to only
> extract files named '*.pdf', something like
>
>     BackupPC_tarCreate -h host -n 123 -s /share /path \
>        | tar xf - --wildcards '*.pdf' '*.PDF'

Forget the archives:  I appreciate the tip.  It'd be nice to avoid all the 
I/O of having BackupPC stream all that data just so it can end up in the 
bit bucket, but at least it doesn't end up on my disk again, too!  :)

> This seems to be another good case for using the fuse module. You can
> navigate the backup view (or run a 'find', 'rsync', ...) at a relatively
> low cost and only need to read/decompress the file data you actually act
> on - and you don't need any intermediate storage space.

I completely forgot about that.  I've already done most of the data so 
far, but that's an excellent suggestion, too.
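
(For the archives after all:  assuming one of the community FUSE scripts 
for BackupPC -- backuppcfs.pl or similar -- is mounted, with the mount 
point, host, backup number and share below as placeholders, the per-file 
pull would look something like this:)

    # The FUSE view presents the backup as a normal directory tree, so only
    # the matched files ever get read and decompressed:
    mkdir -p /tmp/pdf-restore
    find /mnt/backuppc/server/918/E/Shares/Shared -iname '*.pdf' \
        -exec cp --parents {} /tmp/pdf-restore/ \;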

Once again, thank you.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-23 Thread Timothy J Massey

Thank you very much for the confirmation that I'm not crazy and you can't
do wildcards with BackupPC_tarCreate.

In this instance, I was able to come up with enough free space to do the
complete restore and then grab the data I needed from that. But thank you
very much for your code: I will experiment with it if I need to use it in
the future.

Timothy J. Massey

Sent from my iPad

> On May 23, 2015, at 7:43 AM, Holger Parplies wb...@parplies.de wrote:
>
> Hi,
>
> Timothy J Massey wrote on 2015-05-22 20:40:52 -0400 [Re: [BackupPC-users]
> BackupPC_tarCreate with only certain types of files]:
>> Les Mikesell lesmikes...@gmail.com wrote on 05/22/2015 04:24:56 PM:
>>
>>>> What am I missing?  How do I get BackupPC_tarCreate to create a
>>>> tar file that contains all PDF's stored in that path?
>>>> [...]
>>>
>>> Can't help with BackupPC_tarCreate's wildcard concepts
>
> I can: there are none. BackupPC_tarCreate gets a list of path names to
> include in the tar. Each of these is looked up verbatim and included (with
> substructure if it happens to be a directory).
>
>> The problem is not that I couldn't figure out how to get the PDF's at all,
>> but how I could avoid restoring 500GB of data for the 500*MB* I actually
>> need!  :)
>
> It should be *fairly* simple to patch BackupPC_tarCreate for the simple
> case you need. A more general case would add an option and include correct
> handling of hardlinks and symlinks. In sub TarWriteFile, after the
> calculation of $tarPath and before the big file type "if" (line 455 in the
> 3.3.0 source), I'd add
>
>     return
>         if $hdr->{type} != BPC_FTYPE_DIR and $tarPath !~ /\.pdf$/i;
>
> (omit the "i" modifier if you really only mean lowercase "pdf"). Note that
> this is once again completely untested ;-).
>
> Hope that helps.
>
> Regards,
> Holger




Re: [BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-22 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 05/22/2015 04:24:56 PM:

  What am I missing?  How do I get BackupPC_tarCreate to create a 
 tar file that contains all PDF's stored in that path?
 
  Thank you very much for any support you can give me.  I've tried 
 different escapings/not-escapings etc to be able to achieve this and
 I'm out of ideas!  :)
 
 
 Can't help with BackupPC_tarCreate's wildcard concepts but at the
 expense of a lot of overhead you could let backuppc generate a tar of
 the whole top-level directory (like your first command above) and
 specify the '*.pdf' selection to the extracting tar to get what you
 want.

Yeah, that one I could figure out!  :)  The problem is not that I couldn't 
figure out how to get the PDF's at all, but how I could avoid restoring 
500GB of data for the 500*MB* I actually need!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


[BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-22 Thread Timothy J Massey
Hello!

I need to restore all PDF files from a particular backup share.  It's 
30,000 files scattered around thousands of locations.  So, I was hoping to 
use BackupPC_tarCreate to do it.  But I'm striking out.

This works:

./BackupPC_tarCreate -l -h server -n 918 -s E /Shares/Shared

But this does not:

./BackupPC_tarCreate -l -h server -n 918 -s E /Shares/Shared/*.pdf


What am I missing?  How do I get BackupPC_tarCreate to create a tar file 
that contains all PDF's stored in that path?

Thank you very much for any support you can give me.  I've tried different 
escapings/not-escapings etc to be able to achieve this and I'm out of 
ideas!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Outlook pst files backup

2015-03-13 Thread Timothy J Massey
Michael Stowe mst...@chicago.us.mensa.org wrote on 03/13/2015 09:47:06 
AM:

 I find that backing up open files is particularly useful:
 
 http://www.michaelstowe.com/backuppc/

I have found his tool quite helpful in every regard except one:  the 
installer will fail to install on Windows Server 2012/R2 (and quite likely 
Windows 8/8.1).  It doesn't detect the VSS version and therefore won't 
install *anything*.  (And needless to say, all of my new servers are 
2012...)

Michael was kind enough to send me the source code for the installer, but 
my installer-fu is weak:  after a weekend of playing with it I was no 
closer to making it work.

I've been told that it's not terribly difficult to do by hand what the 
installer is doing, but I haven't tried.  If there are any users out there 
who have done the manual version and have documented the process, I would 
greatly appreciate seeing that documentation...

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] How to delete backups

2014-12-10 Thread Timothy J Massey
Holger Parplies wb...@parplies.de wrote on 12/10/2014 10:59:17 AM:

> Colin Shorts wrote on 2014-12-10 11:45:41 + [Re: [BackupPC-users]
> How to delete backups]:
>> You might want to press Enter before typing
>> `/usr/share/BackupPC/bin/BackupPC_nightly 0 255', otherwise it will get
>> deleted too.
>
> right, press Enter *before* the command, *not after*. ***Never*** run
> BackupPC_nightly from the command line. ***Never*** advise others to do
> so. Really quite simple.

Why not, exactly?  I do it all the time.  I'm not saying you're wrong:  I 
just want to know where the harm might be.  But you state the "never do 
that" in such strong terms that I'm wondering where the disconnect is.

For the original poster:  The only time I need to manually delete a backup 
is when I need the space, and I need it *right* *now*.  Otherwise I would 
simply change the FullKeepCnt and IncKeepCnt variables and let BPC clean 
it up for me.  (Or where I do a backup at the wrong time and it catches a 
large amount of unimportant temporary files *and* I really need the space 
and I don't want to lose my history, but that's a very, very specific case 
that doesn't happen enough to make this a general practice.)

By the way, that's probably the best way to handle this problem.  Set the 
FullKeepCnt and IncKeepCnt to 1, do a full backup and let BackupPC manage 
deleting everything for you!  :)

(Also, for completeness, if you delete a single backup, you better know 
what you're doing, particularly which backups depend on the backup you're 
deleting.  Or, if you have filled-in backups (like I do), why deleting 
that full doesn't actually free up any space, even *after* running 
nightly...  And don't forget to modify the backups file to match the 
destruction you are doing...  If all of that sounds like too much, simply 
use the FullKeepCnt and IncKeepCnt and let BPC do the cleanup.)
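
(If you really are in the need-the-space-right-now case, the by-hand 
version would look roughly like this -- the paths assume a stock RPM 
layout, and the host and backup number are just examples:)

    # 1. Make sure nothing depends on backup 123 (no later backups filled
    #    from it) before removing it.
    rm -rf /var/lib/BackupPC/pc/myhost/123
    # 2. Edit the backups index for that host and remove the line for 123.
    vi /var/lib/BackupPC/pc/myhost/backups
    # 3. Pool space is only reclaimed the next time BackupPC_nightly runs.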

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Why Child is aborting?

2014-11-30 Thread Timothy J Massey
I believe that there is a timeout in the rsync protocol. If it doesn't 
complete checksumming the file within a certain period of time it will stop.

Reference here: https://lists.samba.org/archive/rsync/2011-August/026640.html

I've never tried running rsync directly on the ESXi host: that's something 
I'll actually have to play with. However, I have seen this on servers with 
very large image files, usually things like very large ISO's.

Timothy J. Massey

Sent from my iPad

 On Nov 30, 2014, at 10:29 AM, Christian Völker chrisc...@knebb.de wrote:
 
 Hi all,
 
 I am trying to troubleshoot why I can not back up my virtual machines running 
 on VMware ESXi 5.5.
 
 This is what I get when I run the ./BackupPC_dump -f -v esxi.evs-nb.de on 
 command line:
 
 mssql-Snapshot29.vmsn: blk=, newData=32768, rxMatchBlk=, rxMatchNext=0
 mssql-Snapshot29.vmsn: writing 32768 bytes new data
 Can't write 33792 bytes to socket
 Sending csums, cnt = 16, phase = 1
 /var/lib/BackupPC/pc/esxi.evs-nb.de/0/f%2fvmfs%2fvolumes%2fdata%2fmssql/fmssql.vmdk
  cache = , invalid = , phase = 1
 Sending csums for mssql.vmdk (size=518)
 Read EOF:
 Tried again: got 0 bytes
 Child is sending done
 Got done from child
 Got stats: 0 0 0 0 ('errorCnt' = 0,'ExistFileSize' = 0,'ExistFileCnt' = 
 0,'TotalFileCnt' = 1,'ExistFileCompSize' = 0,'TotalFileSize' = 313)
 finish: removing in-process file mssql-Snapshot29.vmsn
 attribWrite(dir=f%2fvmfs%2fvolumes%2fdata%2fmssql) - 
 /var/lib/BackupPC//pc/esxi.evs-nb.de/new/f%2fvmfs%2fvolumes%2fdata%2fmssql/attrib
 attribWrite(dir=) - /var/lib/BackupPC//pc/esxi.evs-nb.de/new//attrib
 Child is aborting
 Got exit from child
 Done: 1 files, 313 bytes
 Executing DumpPostShareCmd: /usr/bin/ssh -q -x -l root esxi.evs-nb.de 
 /vmfs/volumes/data/bin/remove_snapshots.sh /vmfs/volumes/data/mssql
 cmdSystemOrEval: about to system /usr/bin/ssh -q -x -l root esxi.evs-nb.de 
 /vmfs/volumes/data/bin/remove_snapshots.sh /vmfs/volumes/data/mssql
 i: mssql  id: 28 snap: 29
 Remove Snapshot:
 cmdSystemOrEval: finished: got output i: mssql  id: 28 snap: 29
 [...]
 
 I have compiled rsync-static for ESXi which is running fine so far. I have a 
 DumpPreShareCmd which starts a script on the ESXi to create a snapshot. I can 
 verify the snapshot gets created. Transfer starts but after I while I am 
 getting the above error.  rsync transfer method.
 
 Anyone having an idea how to troubleshoot further or what to monitor?
 
 Greetings
 
 Christian
 
 
 


Re: [BackupPC-users] Why Child is aborting?

2014-11-30 Thread Timothy J Massey

I think you misunderstand. When rsync is doing checksums on very large
files, it won't just be 60 seconds without I/O; it can be many minutes.
You might want to set the timeout to 3600 seconds and see if your systems
take that long to time out.

Or, if you can, carefully time how long your runs are waiting before they
die. You might be able to alter the timeout in smaller increments and see
if the delay changes along with your changes in timeout.

Or it might be something else completely.
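
(One way to poke at it directly, using the host and path from your log -- 
the path to the rsync binary on the ESXi side is a placeholder:)

    # Pull the problem directory with plain rsync and a generous timeout.
    # If this completes but the BackupPC run doesn't, the timeout (or
    # BackupPC's ClientTimeout setting) is the likely culprit.
    time rsync -av --timeout=3600 \
        --rsync-path=/vmfs/volumes/data/bin/rsync \
        root@esxi.evs-nb.de:/vmfs/volumes/data/mssql/ /tmp/rsync-test/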

Timothy J. Massey

Sent from my iPad

 On Nov 30, 2014, at 1:43 PM, Christian Völker chrisc...@knebb.de wrote:

 Hi Timothy,

 thanks for this hint related to timeout. I do not think it might be
 related as the timeout option refers to I/O timeout. But I gave it a try
 and set it to 60.000 seconds (will try with some lower values, too).

 Same result.

 Any further ideas?

 Thanks/ Christian





Re: [BackupPC-users] Running BackupPC and Nagios web interfaces from the same box

2014-10-14 Thread Timothy J Massey
xpac backuppc-fo...@backupcentral.com wrote on 10/14/2014 03:59:26 PM:

 Ok I found this little tidbit in some documentation:
 
 As mentioned, the BackupPC user created on the system when 
 installing the RPM has to run Apache in order for everything to work
 properly with the CGIs and mod_perl. Go ahead and setup the 
 appropriate values in httpd.conf. 
 
 So BackupPC requires the user backuppc to run properly.  Guess 
 this is a question for the Nagios folks  :(

You could run them on different ports...  (I run Webmin and Apache on the 
same box on different ports.  Of course, they *have* to run on different 
ports:  Webmin supplies its own HTTP server.)

Or, I think you can get away without having Apache run as backuppc if you 
*don't* use mod_perl.  Seeing as the Web interface doesn't really do much, 
I never have it run using mod_perl anyway.  From what I read in the doc, 
you could then run the web interface as whatever you want (say, apache) 
and be fine.  I have never tried myself, so I can't say for sure.

Reference:  
http://backuppc.sourceforge.net/faq/BackupPC.html#Step-9:-CGI-interface

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Best backup system for Windows clients

2014-08-26 Thread Timothy J Massey
Michael Stowe mst...@chicago.us.mensa.org wrote on 08/25/2014 09:16:09 
AM:

> Given that, a VSS/rsync combination is more or less required.  There are
> two methods for coordinating the shadow copy service with rsync -- ssh and
> winexe.  I use the winexe method here, and put together an installer to
> handle the various Windows versions (though I haven't yet updated it for
> 2012/8+.)

Still volunteering to help with this!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Help before giving up on BackupPC

2014-05-15 Thread Timothy J Massey
Marco Nicolayevsky ma...@specialtyvalvegroup.com wrote on 05/14/2014 
08:47:24 PM:

 Hello all,
 
 I am using a pretty vanilla installation of BackupPC and love the 
 simplicity and fact that it just works as it’s supposed to.
 
 My problem arises when trying to back up windows clients over rsyncd.
 
 I have 3 win boxes, all running Win7 and the rsncd server. 
 Functionally, they perform fine. I’m able to backup, restore, etc. 
 My initial tests were with small folders and not the entire drive.
 
 HOWEVER… each win box has somewhere between 20GB and 3TB of data, 
 and despite being on wired gigabit Ethernet, the 3TB machine ran for
 over 8 days and still wasn’t done before I pulled the plug and 
 called it quits. Backing up 3TB over gig-e should be able to be 
 accomplished in under 1 day, so I’m at a loss of what to do next.

Here are some hard numbers for one of my systems.  All targets 
Windows-based, using rsyncd, no SSH, no encryption, no compression. 
Servers are using 7200RPM SATA drives.

Full size:  1372GB, 1114587 files.  Time to complete:  1175.3 minutes.
(Intel Xeon E5 CPU, 16GB RAM, 6-drive Software RAID-6)

Full size:  627GB, 452759 files.  Time to complete:  682.4 minutes.
(Intel Atom D610 (really slow) CPU, 4GB RAM, 5-drive Software RAID-6)

Now, these are established backups.  I've found that initial backups can 
be 2-3 times as long.  And at 3TB, you're nearly triple of the bigger one. 
 So, that could mean that the initial backup can take 6000 minutes or 
more, or four days.  And personally, I've found that compression can 
double or triple the time again.  Worst-case of all of that could be two 
weeks!  So it is possible that you're seeing correct operation for that 
first backup, and that future backups should be much better.

Also, those are fulls.  Incrementals take 4 hours and 1 hour, 
respectively.

 Is rsync “really” that slow?

No:  rsync is only going to be marginally slower than a non-rsync copy, 
even on the first time, assuming you're not limited by something else (CPU 
or RAM) that would not be a limit for a normal copy.

That could be related to the number of files:  that's an area where rsync 
can get tripped up.  As you can see, I've got 1 million files, so the 
definition of too many is pretty big.  But if you had, say, 10M files, 
maybe that's an issue to consider.

 Am I doing something wrong?

*You*?  Who knows?  Is something wrong?  Possibly.

 Is it 
 limited to just the windows client?

No.

 When I was evaluating Bacula, I 
 was able to do the same backup in 1/10th the time (or at least it 
 felt that way since I don’t have hard numbers).

You've got hard numbers above to consider as a comparison and see if they 
will fit your needs.

 Before giving up on BackupPC and considering an alternative, can 
 someone give me some advice?

Yes.  First, you've given us *NOTHING* to go on other than "it's slow." 
It's like telling a mechanic "my car doesn't run right."  Of course, 
you're probably expecting to pay the mechanic, so he's incented to ask 
lots of questions to figure things out.  I think it's pretty telling that 
I can't see a single reply to your request.  While your request includes a 
lot of words, there was almost *NOTHING* of substance in your request 
except a *VERY* brief description of your hardware.  We have almost no 
description of what your backups look like (size and number of files, for 
example), what your backup jobs look like (compression being a very big 
one), or what your backup server is doing (CPU, memory or disk usage). And 
we're not getting paid, so we're not really incented to ask you a lot of 
questions.  But I'll give you a few:

First, is your system limited in some way?  Are you hitting 100% CPU, or 
swapping (not enough RAM), or are your disks overloaded, or something 
else?  Learn to use top and vmstat (or dstat).

Is your infrastructure fundamentally limited in some way?  Have you tried 
doing a straight-up copy from a target to your backup system to make sure 
that the underlying infrastructure is capable of delivering what you 
expect it to?  If you can only get 1-2MB/s copies using SMB, tar, NFS, 
FTP, etc., then that's all you'll get with BackupPC, too.  But if you can 
get 70MB/s copies between the same two systems some other way, then we can 
expect better of BackupPC.  (But all that does is re-ask the question of 
what is limiting you.)
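
(Concretely, something along these lines from the BackupPC box -- the 
share, user and file names are placeholders:)

    # Watch CPU, memory, disk and network while a backup is running:
    dstat -cdngym 5
    # Raw end-to-end throughput check against one target, outside BackupPC:
    time smbclient //winbox/share -U someuser -c 'get somebigfile.iso /dev/null'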

From my e-mail, you know what is possible and reasonable to get.  If 
you're far away from those results, then you need to figure out what is 
different about your system and causing the slowdown.

The second thing to try is to simplify things.  For me, the first thing I 
do is disable compression.  In today's multi-core universe, compression is 
rapidly becoming a bottleneck again.  The compression algorithms in common 
use today do *not* use multiple cores.  On a system with more than a 
couple of disks I can easily max out one core with compression.

Or, try to use SMB instead of rsyncd.  

Re: [BackupPC-users] Centralized storage with multiple hard drives

2014-03-20 Thread Timothy J Massey
Holger Parplies wb...@parplies.de wrote on 03/19/2014 07:26:02 PM:

 Les Mikesell wrote on 2014-03-19 11:25:38 -0500 [Re: [BackupPC-
 users] Centralized storage with multiple hard drives]:
  Throwing RAM at a disk performance problem usually helps.
 
 You've used BackupPC before, Les, right? ;-)
 BackupPC prefers pool reads over writes when possible, and it typically
 accesses large amounts of data almost randomly. Caching metadata will 
help,
 caching data likely won't.

I've given up explaining this to Mr. Mikesell.  I've posted info to the 
list several times showing the performance on BackupPC servers that had 
512MB (megabytes) of RAM.  Zero swapping, and something like 300MB used 
for caching.  I then upgraded them to 4GB of RAM (8 times as much RAM!) 
and saw a whopping 15 or so minutes of savings on a backup that took 10 
to 12 hours.  (In fact, the only real reason I use 4GB minimum across the 
board now is that I ran into a problem where an fsck wouldn't complete 
without more RAM.)

  As does not using raid5.
 
 One disk per client host sort of precludes raid5 ;-).

And I haven't seen that RAID-5 has been that much of an issue.  As you 
mention, BackupPC is *READ* heavy so the RMW penalty doesn't hurt much. My 
experience with both RAID-5 *AND* RAID-6 (even software RAID-6!) is just 
fine.  Of course, I have a minimum of 4 drives in a RAID array (6 minimum 
for RAID-6), so I'm usually bottlenecked somewhere else anyway, such as a 
single 1Gb link.  It's not hard to manage writing 70MB/s of data!

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Got fatal error during xfer after 20 MiB (tar on localhost)

2014-03-20 Thread Timothy J Massey
Jost Schenck jost.sche...@gmx.de wrote on 03/14/2014 07:37:00 AM:

 I guess you're right that backing up the backup servers system 
 config with backuppc itself may not be such a brilliant idea :)

You can *never* back up something with itself.

 What
 I thought was, that in case my home server fails I could restore 
 configuration by first installing a new linux with backuppc and then
 pointing backuppc to my backup disc.

The last part (pointing to the backup disk) won't work:  BackupPC mangles 
file names, for one thing.  You would need to do a proper restore in order 
to use the data.

If you want a copy of your data that is 100% ready to put into production, 
you'd use a different tool.  Rsync by itself would work fine (but no 
versioning), as would things like rdiff-backup (a kind of BackupPC-style 
method of versioning but with unmangled files).

 But your idea sounds easier. Thanks!

His idea is not any easier on the *RESTORE* side.  In either case, you'll 
have to do a restore.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Centralized storage with multiple hard drives

2014-03-20 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 03/20/2014 01:59:47 PM:

 On Thu, Mar 20, 2014 at 12:21 PM, Timothy J Massey 
 tmas...@obscorp.com wrote:
 
  
   You've used BackupPC before, Les, right? ;-)
   BackupPC prefers pool reads over writes when possible, and it 
typically
   accesses large amounts of data almost randomly. Caching metadatawill 
help,
   caching data likely won't.
 
 
  I've given up explaining this to Mr. Mikesell.  I've posted info 
 to the list several times showing the performance on BackupPC 
 servers that had 512MB (Megabytes) of RAM.  Zero swapping, and 
 something like 300MB used for caching.  I have then upgraded them to
 4GB of RAM (8 times as much RAM!) and saw a whopping 15 or so 
 minutes savings on a backup that took 10 to 12 hours.  (In fact, the
 only real reason I use 4GB minimum across the board now is that I 
 ran into a problem where a fsck wouldn't complete without more RAM.)
 
 4GB is still a tiny amount of RAM these days.  Try 20+.  You basically
 want to cache all of your pool directory/inode data plus the pc backup
 trees being traversed by rsync.  And without threshing it out with the
 rsync copy of the remote trees that are being processed.

After this, I *really* give up.

I *have* servers with 32GB of RAM.  I've gone from 4GB to 16GB and seen 
zero performance increase.

Do you *REALLY* think this is some sort of binary on/off enhancement? Have 
4GB?  No benefit.  5GB?  WOW, look at it go!!!  If you see zero 
improvement with 8 TIMES AS MUCH RAM (when you're not swapping to begin 
with!), do you expect me to believe that doubling or tripling it will 
suddenly make a big difference?  Come on now...

  And I haven't seen that RAID-5 has been that much of an issue.  As
 you mention, BackupPC is *READ* heavy so the RMW penalty doesn't 
 hurt much.  My experience with both RAID-5 *AND* RAID-6 (even 
 software RAID-6!) is just fine.
 
 Sure, they work.  But they force all of the disk heads to seek every
 time unless you have a large number of disks, and every short write
 (and most of them are with all of the directory/inode operations
 happening), is going to cost an extra disk revolution time for the
 read/modify/write step.  And that 'read heavy' assessment is somewhat
 optimistic if you have any large files that are modified in place.
 There you get the worst case of alternating small reads/writes in
 different places as the changes delivered by rsync are merged into a
 copy of the old file.

I'm not saying that there isn't a penalty for RAID-5 or -6.  I'm saying 
that the penalty is usually NOT RELEVANT FOR THIS APPLICATION!  Not in 
theory, but in ACTUAL PRACTICE.  You then go on to make my point:

  Of course, I have a minimum of 4 drives in a RAID array (6 minimum
 for RAID-6), so I'm usually bottlenecked somewhere else anyway, such
 as a single 1Gb link.  It's not hard to mange writing 70MB/s of data!
 
 Seriously?  You have rsync pegging a 1Gb link for long periods of time
 anytime but the first run?  Do you have something creating a lot of
 huge new files?  My times are mostly constrained by the read time on
 the targets with a relatively small amount of data transfer.

You have stated that you're limited by the TARGET.  So given that, who 
cares if the disks are 60% busy instead of 30% busy while it waits on 
those targets?  And why WOULDN'T I use something like RAID-5 or -6 and 
give myself a LOT more usable space?

And my point was not that I *am* limited by my 1GbE link (I'm not usually 
with BackupPC but I *am* with other uses of my backup servers in general), 
but that at *BEST* I can move 70MB/s *because* I'm using 1GbE.  Seeing as 
my 6-drive RAID-6 array can copy files from one spot on the array to 
another (so not just simply streaming reads or writes) at 400MB/s 
routinely (and 1,000MB on my 12-drive systems), who *CARES* how much 
faster it would be if it were, say, RAID-10?  This server even has 4 x 
1GbE, but you *never* get that with bonded Ethernet.  You're lucky to get 
150MB/s.  So my drives are NEVER THE ISSUE!

So, yes, I'll take the extra space, which is *ALWAYS* my limiting factor! 
:)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 

Re: [BackupPC-users] Centralized storage with multiple hard drives

2014-03-20 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 03/20/2014 03:48:51 PM:

 It's all about statistics and the odds of having to move the disk head
 to get a directory entry or inode vs already having it in cache for
 instant access (and sometimes even some data...).

In theory, theory and practice are the same.  In practice, they aren't. 
(Or, another way:  theoretical analysis doesn't always make a difference 
in the real world.)

You keep telling me the theory of why more RAM would help.  I keep telling 
you the FACT that it doesn't.  Repeating your theory doesn't change the 
FACT that it doesn't help.

(Why don't you try it sometime?  Either use a kernel parameter or remove 
some RAM from one of your magical 20GB boxes.  You stated that even 4GB 
was not enough to see the effect you're describing, so do that.  I *HAVE* 
done this test.  Several times.  You only have to do it for a single full 
backup of a single large target:  one night and you'll have a FACT to 
analyze, not a theory.)

 I care because _some_ of my targets are relatively fast - and have new
 big files daily.  These will routinely show a 40+ MB speed in the
 backuppc host summary, although I'm not exactly sure what that is
 measuring.  10 or 12 is more common for run-of-the-mill targets, even
 less if the files are mostly small or the server is older and slower.

If you don't know what it's measuring, how can you use it for anything?

  And my point was not that I *am* limited by my 1GbE link (I'm not 
 usually with BackupPC but I *am* with other uses of my backup 
 servers in general),
 
 It seemed like you were advising to expect backuppc to be limited by
 bandwidth - which doesn't match my experience at all once you get the
 initial copy.

Nope.  Merely (and somewhat awkwardly) making the point that it is 
*meaningless* to make your disk performance faster (as in huge RAM caches 
and avoiding parity RAID) if it's already several times faster than some 
other bottleneck.

Encouraging someone not to use RAID-5 or -6 on a BackupPC box is just 
about the biggest waste of money I can possibly imagine for such a 
machine.  Well, maybe encouraging someone to use 15,000 RPM SAS drives not 
in a parity array might be worse...  :)  The cost of hard drives *always* 
exceeds 50% of the cost of my array-based (as in not single-drive) 
BackupPC servers.  Why would I want to make that storage even *more* 
expensive if it buys me no measurable difference in performance?

And make no mistake:  I have not personally seen an application where lack 
of RAM (where that RAM is measured in Gigabytes) or disk performance (with 
4 or more drives, let's say) has led to a bottleneck on my systems.  Now, 
I've not tried backing up more than low double digits of targets to a 
single server, and I've not done much more than 10TB of pool space:  maybe 
if you scale significantly beyond that.  But even then I'd need to 
see it, not just theorize it...

And I'm officially done!  :)  (Unless there are hard facts presented.)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Centralized storage with multiple hard drives

2014-03-19 Thread Timothy J Massey
thorvald backuppc-fo...@backupcentral.com wrote on 03/19/2014 06:53:19 
AM:

 Let's say that the storage is not a problem for me and I can have as
 many TB or PT as I need. However the main assumption is that every 
 box has got a separate disk to be backed up to. So now I faced the
 problem with BackupPC which does use pool or cpool to store files 
 within :/. I don't need any compression or deduplication. Is there 
 any way to backup files directly to pc/HOST/ instead ? 

I am going to give a flat no to this.  You may be able to break things 
within BackupPC to accomplish this (never run the link, for example), but 
you are *breaking* things.  Don't do that if you expect *anyone* to be 
able to help you.

 I'm not going to backup couple of hundreds servers using one 
 BackupPC instance of course but I want to back up at least 100 
 servers per BackupPC instance.
 
 Is there something you could advise me ?

Sure:  use virtualization.  Create your huge datastore (or multiple 
datastores) and create a VM for each unit that needs its own pool.  Each 
VM will get its own (virtual) disk.  Each will be, for better or worse, 
completely separate from each other.

The cost for this is that you'll need more RAM.  You'll have multiple 
copies of the Linux kernel, the Apache web server and BackupPC running. 
However, I doubt that all of that together is even 250MB:  I've run *many* 
entire BackupPC servers in 512MB of RAM.  But there will be resources 
wasted that you wouldn't have with a single instance.

Of course, that doesn't include RAM you'll need for each VM to have enough 
RAM for caching (to give good performance), but that RAM requirement would 
be roughly the same with a single instance anyway, so you can't count that 
as a disadvantage for VM's:  that's just the nature of PC technology.

There are other things that you'll have to worry about, no matter whether 
it's a single instance or multiple VM's.  The screamingly obvious one is 
disk performance.  I've found that running more than one BackupPC job at a 
time destroys performance, even on servers with 3-4 disks, using both 
hardware and software RAID.  I have one server with 6 disks that does OK 
with a couple of jobs, but not nearly as fast as one at a time.  I have 
one server with 12 disks:  that one has so much disk performance that I'm 
bottlenecked at the other end, so I can do 3 or so jobs simultaneously. 
You're talking hundreds.  So disk performance will really, really matter 
here.  Your question will be more like, "How many *shelves* of disks?" 
rather than, "How many disks?"

Another issue will be network bandwidth.  Unless you're running a 
well-designed 10GbE network, forget backing up hundreds (your number) of 
servers in a timely fashion.  I've found that 4-port bonded 1GbE is less 
than 2 times as fast as a *single* GbE.  Maybe I've done something wrong 
every time I've done it, but I've basically washed my hands of bonding 
(for performance).  You want performance, you're gonna need 10GbE.  And if 
you *do* go 10GbE, the switch that your server is plugged into better 
either have a 1GbE connection *directly* to each other switch, or those 
switches better be behind all 10GbE connections upstream.  Otherwise, 
you'll just bottleneck on a downstream 1GbE connection.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Ftp fails

2014-03-18 Thread Timothy J Massey
Søren Brøndsted s...@ufds.dk wrote on 03/17/2014 04:43:16 AM:

 Hi
 
 On 14-03-2014 16:23, Timothy J Massey wrote:
 
  Have you tried performing the same FTP manually from the command line 
of
  the BackupPC box?
 Yes. I have tried as backuppc user and it works.

Obviously, you need to use the same credentials as are spelled out within 
the BackupPC host configuration.  I've *never* used FTP as a XferMethod 
for BackupPC, so I'm not exactly sure how it's configured.  But you need 
to make sure they both work.

  I know QSECOFR is root on System i, but it's not guaranteed that it
  has access to everything.  Are you sure that it has access to the 
files?
Use WRKFLR to verify this.  And again, try from the command line on
  the BackupPC machine and see if you get an error message.
 We are using the IFS subsystem to access the directory and I don't know 
 how to access this from WRKFLR. I am not a champion on the iSeries.

Run WRKFLR.  You will see all of the shared folders.  You can then do 
things like 14 (Authority) and 5 (work with documents) to work with the 
permissions on each folder and file to make sure the permissions are 
correct.

  Are you using shared folders on i?  You could also try to get the
  folders using SMB, which is what I have done when I've needed to. I've
  never used BackupPC to back up an i (System i is the *only* place 
where
  I still use tape!), but I've done other similar things, and I'd rather
  use SMB than FTP...
 I will try and activate this

I think this is the better way to go.  To my knowledge, very few people 
use FTP with BackupPC, and a number of people use SMB.  The tools are 
better understood and better tested.

You still may have System i issues to work out (such as permissions), but 
at least there are more than just me that can help with the BackupPC side! 
 :)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Ftp fails

2014-03-18 Thread Timothy J Massey
Timothy J Massey tmas...@obscorp.com wrote on 03/18/2014 01:09:16 PM:

 Søren Brøndsted s...@ufds.dk wrote on 03/17/2014 04:43:16 AM:
 
  On 14-03-2014 16:23, Timothy J Massey wrote:
  
   Are you using shared folders on i?  You could also try to get the
   folders using SMB, which is what I have done when I've needed to. 
I've
   never used BackupPC to back up an i (System i is the *only* place 
where
   I still use tape!), but I've done other similar things, and I'd 
rather
   use SMB than FTP...
  I will try and activate this 
 
 I think this is the better way to go.  To my knowledge, very few 
 people use FTP with BackupPC, and a number of people use SMB.  The 
 tools are better understood and better tested. 

Of course, the *real* better way to go is rsync.  That's what I use in 95% 
of instances;  only if I can't make rsync work for some reason do I use 
SMB.

Of course, there's no rsync on the i.  However:

http://as400topics.blogspot.com/2012/01/installing-rsync-on-iseries-as400.html

I don't know if you can run it as a *server* (for rsyncd, which is what I 
use), or what would be involved in using SSH (for rsync over ssh, which is 
what others use), but it too is worth a consideration.

How much data are you trying to back up on the i?  It might be easier to 
use rsync to push it out to some other system and grab it from there 
instead of grabbing it from System i directly.
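
If rsync does run on the i (even just as a client), something along these 
lines, pushed from the i on a schedule, would land the data on an ordinary 
Linux staging box that BackupPC can then back up with a normal rsync setup. 
This is only a sketch:  the IFS path and the staging hostname are made up.

    rsync -av /somelib/somedir/ staginghost:/staging/ifs-copy/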

I respect the i a great deal in its proper role:  mission-critical 
database applications.  For everything else, I sometimes handle the system 
like everyone else seems to:  get the data onto something else and deal 
with it there!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Ftp fails

2014-03-14 Thread Timothy J Massey
Søren Brøndsted s...@ufds.dk wrote on 03/14/2014 10:46:37 AM:

 Hi
 
 I am trying to do a ftp backup fra an iSeries 5 (AS400), but I get the 
 following result:

Unneeded log info

 remotels: adding name QIMGCLG, type f, size 272496, mode 33152
 remotels: adding name VOL001, type f, size 56245174433, mode 33152
fail 600   -/-  272496 QIMGCLG
 Unlinking(/var/lib/backuppc/pc/db2/new/f%2fBACKUP_VRT/fQIMGCLG) because 
 of size mismatch (0 vs 272496)
fail 600   -/- 56245174433 VOL001
 Unlinking(/var/lib/backuppc/pc/db2/new/f%2fBACKUP_VRT/fVOL001) because 
 of size mismatch (0 vs 56245174433)
 Full backup of db2 complete

Have you tried performing the same FTP manually from the command line of 
the BackupPC box?

I know QSECOFR is root on System i, but it's not guaranteed that it has 
access to everything.  Are you sure that it has access to the files?  Use 
WRKFLR to verify this.  And again, try from the command line on the 
BackupPC machine and see if you get an error message.

Are you using shared folders on i?  You could also try to get the folders 
using SMB, which is what I have done when I've needed to.  I've never used 
BackupPC to back up an i (System i is the *only* place where I still use 
tape!), but I've done other similar things, and I'd rather use SMB than 
FTP...

(I wonder how many people on this list tuned out as soon as they saw 
AS/400?  I wonder what the Venn diagram is for fans for System i and 
BackupPC...  :)  )

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] rsync building file list takes forever

2014-03-04 Thread Timothy J Massey
Dr. Boris Neubert om...@online.de wrote on 03/04/2014 12:23:15 PM:

 What was irritating to me was the rsyncd log entry Building file
 list... with nothing else afterwards and the empty BackupPC host
 XferLog during backup. In fact the backup was already transferring all
 the files all the time.

That *is* very annoying.  Basically, writes to that file are buffered, and 
it will only write the data to the file in big chunks (kilobytes at 
least).  Those big chunks might reflect *gigabytes* of data in just a few 
dozen lines, and yes, that can mean hours between updates.

Get used to using top/vmstat/dstat...  ;)
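
For example, something like this on the BackupPC server will show you that 
the disks really are busy even while the XferLog looks idle (a rough 
sketch;  your dstat version may want slightly different flags):

    vmstat 5        # memory, swap and block I/O every 5 seconds
    dstat -cdn 5    # CPU, disk and network throughput every 5 seconds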

Glad it worked for you.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


[BackupPC-users] BackupPC Client and Windows Server 2012 / 2012 R2

2014-01-22 Thread Timothy J Massey
Hello!

I tried to install Michael Stowe's BackupPC client (with VSS support) on a 
Windows Server 2012 R2 server today.  Unfortunately, the installer program 
popped up an error:  Cannot determine version of vshadow to use.  There 
is only one choice:  OK.  When you click on it, the installer stays frozen 
there until you click Cancel, when the installer exits half-finished.

Does anyone have any suggestions as to how I might be able to get past 
this?  I am hoping that VSS hasn't changed between Server 2008 and 2012, 
and it won't actually require any code changes, just a change to the 
installer to get it past this.

I'm very willing to test both the installer and the program.  I don't use 
the BPC client much (most of my servers have more than one drive), but 
when I can it's a really handy thing!

Thank you for your help!

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 



Re: [BackupPC-users] What file system do you use?

2013-12-17 Thread Timothy J Massey
Russell R Poyner rpoy...@engr.wisc.edu wrote on 12/17/2013 11:12:07 AM:

 This is a poor comparison since we have different data sets, but it 
 would appear that BackupPC's internal dedupe and compression is 
 comparable to, or only slightly worse than what zfs achieves. This in 
 spite of the expectation that zfs block level dedupe might find more 
 duplication than BackupPC's file level dedupe.

It all depends on the type of files you're backing up.

For my database and Exchange servers, I'd do bodily harm for block-level 
de-dupe.  Exchange is the *worst*:  I end up with huge (tens or hundreds 
of GB) monolithic files that are 99.9% identical to the previous day's 
backup.  BackupPC won't do me a bit of good on those files, but 
block-level dedupe would.

However, with normal files, file-level dedupe (like BackupPC) gives you 
a very high percentage of block-level.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] What file system do you use?

2013-12-16 Thread Timothy J Massey
Sorin Srbu sorin.s...@orgfarm.uu.se wrote on 12/16/2013 09:40:56 AM:

 Anyway, Anaconda objected at my choosing ext4 for the 40 TB raid-array 
when I 
 recently set up a new system, and defaulted to xfs instead.

EXT4 won't support file systems >16TB at all with 4k blocks, and depending 
on OS and tool version, the tools may not support a block size >4k. 
Anaconda (the RHEL/CentOS installer) won't support >8TB during install.

Given those limitations, I consider EXT4 limited to 16TB.  Your 
evaluation may vary, of course.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] What file system do you use?

2013-12-16 Thread Timothy J Massey
Mark Rosedale mrosed...@vivox.com wrote on 12/16/2013 09:06:07 AM:

 I'm working on bringing back a backuppc instance. It is very large 3
 +TB. The issue I'm having is e2fsck is taking an extremely long time
 to finish. It is stuck on the checking directory structure. We are 
 going on 48 hours. 

How much RAM do you have?  This can make a very large difference.  A 3+TB 
drive can need 4GB or more to complete in a reasonable amount of time.

 So I'm wondering what other file systems do you guys use? Any 
 recommendations of one that may be more efficient or better suited 
 for such a large volume on Bakcuppc? Because once I have this 
 machine back up I'm actually going to add more drives. 

I use EXT3 because I'm conservative.  I've had problems with EXT4 crashing 
more often than EXT3 in the event of flaky hardware.  Make no mistake: the 
fault was the hardware, but I found that EXT3 was much more tolerant of 
that (or EXT4 more sensitive).

I have limited experience with XFS.  I have lots of experience with JFS 
(both on Linux and as a long-time OS/2 user in the past:  that's where the 
Linux JFS filesystem came from), but given its relatively small visibility 
I stay away from it in backup scenarios.

So for me, it's humble EXT3 and recently a couple of EXT4 servers.  That 
is getting to be a problem, though:  I already have a couple of servers 
where I have had to partition the server into separate partitions because 
of the 16TB limit with EXT3 (and the practical 16TB limit with EXT4 and 
user-space tools).

Reference for RHEL file size limits as of September, 2013:  
https://access.redhat.com/site/solutions/1532 .  That link has this 
interesting tidbit: The solution for large filesystems is to use XFS. The 
XFS file system is specifically targeted at very large file systems (16 TB 
and above).

Red Hat has already dropped JFS support in the installer, and I understand 
that XFS will be the default in RHEL7.  It has a certified limit in RHEL 
of 100TB.  It's also telling that their Scalable File System add-on uses 
XFS...

In the past, XFS always seemed to be the second choice of people who were 
not happy with the default:  'I use ReiserFS/Reiser4/JFS/Btrfs/Whatever 
because it does [insert narrow, specific need], but if it weren't for 
that, XFS would have been the best choice...'  It always seemed to be a 
bridesmaid for most everyone.  It seems that Red Hat is finally going to 
make it a bride...

One last thing:  everyone who uses ZFS raves about it.  But seeing as (on 
Linux) you're limited to either FUSE or out-of-tree kernel modules (of 
questionable legality:  ZFS' CDDL license is *not* GPL compatible), it's 
not my first choice for a backup server, either.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] What file system do you use?

2013-12-16 Thread Timothy J Massey
Timothy J Massey tmas...@obscorp.com wrote on 12/16/2013 02:35:44 PM:

 Mark Rosedale mrosed...@vivox.com wrote on 12/16/2013 09:06:07 AM:
 
  I'm working on bringing back a backuppc instance. It is very large 3
  +TB. The issue I'm having is e2fsck is taking an extremely long time
  to finish. It is stuck on the checking directory structure. We are 
  going on 48 hours. 
 
 How much RAM do you have?  This can make a very large difference.  a
 3+TB drive can need 4GB or more to complete in a reasonable amount of 
time. 

Just to add a detail:  I've had problems on systems with 1GB RAM fsck'ing 
a file system that was ~3TB:  it failed with OOM, but did not with 2GB. 
I've successfully fsck'ed ~5TB filesystems with 4GB RAM (4 spindles, 20-30 
minutes) and 11TB with 8GB (6 spindles, 40 or so minutes).  You might be 
able to do less, but you can at least do it with that.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-11-01 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 11/01/2013 09:48:23 AM:

  This error shows BackupPC_dump segfault, and pointing to libperl.so
  How do you install your BackupPC ? From source or from RPM?
 
 I did a yum install backuppc, which got it from epel

That's how I do it.

  That tells you it was unmounted cleanly last time, not that 
 everything checks out OK.   
  Try it with the -f option to make it do the actual checks.
 
 bash-4.1$ fsck -f /dev/sda1
 fsck from util-linux-ng 2.17.2
 e2fsck 1.41.12 (17-May-2010)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 /dev/sda1: 20074505/2929688576 files (0.3% non-contiguous), 
 2775975116/2929686016 blocks
 bash-4.1$

Good.  I think we've eliminated a disk or filesystem issue.  I think we're 
pretty comfortable it's a BackupPC corruption issue.  It was hard to tell 
when your error messages said that it could not seek to a particular point 
in a file.

  What distro are you using?  (I use CentOS/RHEL) 
 
 CentosOS release 6.4

Same here.

  I think that segfault in a perl process needs to be tracked 
 down before expecting anything else to make sense.  
  Either bad RAM or mismatching perl libs could break about anything 
else.
 
 I installed perl-libs with yum as well. A yum info perl-libs tells 
 me it was installed from the updates repo
 
 I think what I'm going to try at this point is to delete the bad 
 backups, reinstall perl from epel, and keep an eye on it to see if 
 it balloons up again. Thanks for all your help!

That's a very reasonable, if not very subtle, solution.

I think you need to monitor /var/log/messages for errors that mention 
backup.  See if the crash returns.  Jeff is (justifiably) worried that the 
crash caused your corruption, but it could just as easily be the other way 
around.  Once you clean up from this, you want to make sure that nothing 
comes back.

If you've got the time, running memtest for a weekend might be a good 
idea, too.  The only thing it would cost is the downtime...

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 10/31/2013 08:49:15 AM:

 The du -hs /backup/pool /backup/cpool /backup/pc/* has finished. 
 Basically I had 1 host that was taking up 6.9 TB of data with 2.8 TB
 in the cpool directory and most of the other hosts averaging a GB each.

Well, there's your problem.

 The 1 host was our file server (which I happen to know has a 2 TB 
 volume (1.3 TB currently used) that is our main fileshare. 
 
 I looked through the error log for this pc on backups with the most 
 errors and found thousands of these: 

Just out of curiosity, why hadn't you already done that?!?

 Unable to read 8388608 bytes from /var/lib/BackupPC//pc/
 myfileserver/new//ffileshare/RStmp got=0, seekPosn=1501757440 (0,
 512,147872,1499463680,2422719488)

Interesting.  I'd make sure that the filesystem is OK before I went much 
farther...  Stop BackupPC, unmount /backup and fsck /dev/whatever
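
Roughly (substitute whatever device /backup actually lives on;  the device 
name here is just a placeholder):

    service backuppc stop
    umount /backup
    fsck -f /dev/sdXN
    mount /backup
    service backuppc start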

 du -hs /backup/pool /backup/cpool /backup/pc/myfileserver/* 
 
 to see which backups are doing the most damage. I'll report back 
 once that finishes.

With that, you should be able to find the bakup number(s) that are not 
linked.  You can delete them and free up space.

The big question is, though, why they aren't linking.  I'd really start at 
the bottom of the stack (the physical drives) and work your way up.  Check 
dmesg for any hardware errors.  fsck the filesystem.  Did I read correctly 
that this is connected vis NFSv4?  I sure hope not...  (I'm willing to 
admit it's a phobia, but there's no *WAY* I would trust my backup to work 
across NFS...)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Timothy J Massey
Holger Parplies wb...@parplies.de wrote on 10/30/2013 10:24:05 PM:

 as I understand it, the backups from before the change from smb to 
rsyncd are
 linked into the pool. Since the change, some or all are not. Whether the
 change of XferMethod has anything to do with the problem or whether it
 coincidentally happened at about the same point in time remains to be 
seen.
 I still suspect the link to $topDir as cause, and BackupPC_link is 
independent
 of the XferMethod used (so a change in XferMethod shouldn't have any
 influence).

To add my anecdote, I use a symbolic link for all of my BackupPC hosts:  a 
couple dozen?  And they all work fine.  It's been my standard procedure 
for almost as long as I've been using BackupPC.

Example: 
ls -l /var/lib
lrwxrwxrwx. 1 root root 22 Apr 22  2013 BackupPC -> /data/BackupPC/TopDir/

mount
/dev/sda1 on /data type ext4 (rw)

I understand phobias from earlier problems (see my earlier e-mail about my 
thoughts on NFS and backups...) but I do not think this one is an issue.


 If the log files show nothing, we're back to finding the problem, but I 
doubt
 that. You can't break pooling by copying, as was suggested. Yes, you 
get
 independent copies of files, and they might stay independent, but 
changed
 files should get pooled again, and your file system usage wouldn't 
continue
 growing in such a way as it seems to be. If pooling is currently 
broken,
 there's a reason for that, and there should be log messages indicating
 problems.

You are 100% correct;  but it depends on how you define break.  Making a 
copy of a backup will absolutely break pooling--for the copy you just 
made!  :)

It won't prevent *future* copies from pooling, certainly.  But it sure can 
fill up a drive:  even if pooling *is* working correctly for new copies, 
they can still fill up the drive *and* BackupPC_nightly won't do a thing 
about it.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 10/31/2013 01:33:30 PM:

  Just out of curiosity, why hadn't you already done that?!? 
 
 I didn't know which host was the problem and didn't think of it. 
 Although I'll readily admit it seems painfully obvious to me now. :)

Just so you're sufficiently humble...  :)

For everyone's future reference:  ALWAYS check the server error log *and* 
the per-host logs...  :)

 The big question is, though, why they aren't linking.  I'd really 
 start at the bottom of the stack (the physical drives) and work your
 way up.  Check dmesg for any hardware errors.  
 
 bash-4.1$ grep -i backup /var/log/dmesg*
 bash-4.1$

Nice try, but won't help:  you need to be looking for the correct sd or 
ata device that is used.

Don't bother with a grep like that.  Do a dmesg > dmesg.txt and then vi 
(or whatever) dmesg.txt and look for scary errors...  Look particularly 
for sda (or sdb or whatever), or ata0 (or 1 or whatever) messages, or 
possibly scsi messages (yes, SATA is SCSI to Linux) too.

But if they're there, these should not be hard to find:  there tends to be 
*LOTS* of them.
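
If you want a rough first pass before reading the whole thing (a 
convenience, not a substitute), something like this will pull out most of 
the interesting lines:

    dmesg | egrep -i 'error|fail|sd[a-z]|ata[0-9]' | less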

 bash-4.1$ grep -i backup /var/log/messages*

Mine comes back with nothing.

 messages-20131006:Sep 30 13:53:24 servername kernel: BackupPC_dump
 [15365]: segfault at a80 ip 00310f695002 sp 7fff438c9770 
 error 4 in libperl.so[310f60+162000]
 messages-20131006:Sep 30 13:53:27 servername abrtd: Package 
 'BackupPC' isn't signed with proper key
 messages-20131020:Oct 19 01:24:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:24:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131020:Oct 19 01:30:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:30:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131020:Oct 19 01:32:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:32:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131020:Oct 19 01:32:54 servername kernel: INFO: task 
 BackupPC_nightl:18390 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:32:54 servername kernel: BackupPC_nigh D
 0001 0 18390   1262 0x0080
 messages-20131020:Oct 19 01:48:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:48:54 servername kernel: BackupPC_dump D
 0003 0 11922  10626 0x0080
 messages-20131020:Oct 19 01:52:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:52:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131020:Oct 19 01:52:54 servername kernel: INFO: task 
 BackupPC_nightl:18390 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:52:54 servername kernel: BackupPC_nigh D
 0001 0 18390   1262 0x0080
 messages-20131020:Oct 19 01:56:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 01:56:54 servername kernel: BackupPC_dump D
 0003 0 11922  10626 0x0080
 messages-20131020:Oct 19 02:10:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 02:10:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131020:Oct 19 02:12:54 servername kernel: INFO: task 
 BackupPC_dump:11922 blocked for more than 120 seconds.
 messages-20131020:Oct 19 02:12:54 servername kernel: BackupPC_dump D
 0001 0 11922  10626 0x0080
 messages-20131027:Oct 23 09:00:02 servername abrtd: Package 
 'BackupPC' isn't signed with proper key

I'd try Googling those:  they have no meaning for me (and my servers don't 
have them).

What distro are you using?  (I use CentOS/RHEL)

  fsck the filesystem. 
 
 bash-4.1$ fsck /dev/sda1
 fsck from util-linux-ng 2.17.2
 e2fsck 1.41.12 (17-May-2010)
 /dev/sda1: clean, 20074506/2929688576 files, 2775975889/2929686016 
blocks
 bash-4.1$

Definitely a good sign.

 Did I read correctly that this is connected vis NFSv4?  I sure hope
 not...  (I'm willing to admit it's a phobia, but there's no *WAY* I 
 would trust my backup to work across NFS...) 
 
 The drives are local SATA ones that I set up in a raid 5, directly 
 mounted. Def not NFS. I had an unrelated drive mounted via NFS, but 
 that had nothing to do with my backup system and that's probably the
 source of confusion.

md raid5?  What's the status of /proc/mdstat ?

 So the du command finished, here's the result:
 
 bash-4.1$ du -hs /backup/pool /backup/cpool /backup/pc/fileserver/*
1.4T    /backup/pc/fileserver/529
 1.4T

Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-31 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 10/31/2013 01:54:24 PM:

 On Thu, Oct 31, 2013 at 12:33 PM, Craig O'Brien cobr...@fishman.com 
wrote:
 
  fsck the filesystem.
 
  bash-4.1$ fsck /dev/sda1
  fsck from util-linux-ng 2.17.2
  e2fsck 1.41.12 (17-May-2010)
  /dev/sda1: clean, 20074506/2929688576 files, 2775975889/2929686016 
blocks
  bash-4.1$
 
 That tells you it was unmounted cleanly last time, not that everything
 checks out OK.   Try it with the -f option to make it do the actual
 checks.

Good catch!  This should take a long time:  20 minutes to an hour?  Maybe 
more:  the drives are full.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] rsyncd full backup

2013-10-31 Thread Timothy J Massey
Sharuzzaman Ahmat Raslan sharuzza...@gmail.com wrote on 10/30/2013 
10:06:18 PM:

 Hi Holger,

 Based on short session of troubleshooting, I believe the machine 
 actually suffer from low I/O speed to the disk. Average read is 
 about 3 MB/s, which I considered slow for a SATA disk in IDE emulation.

*REAL* slow:  I consider anything under 20MB/s slow.

But where did that number come from?  The pattern of reads will make a 
*huge* difference...

 I'm planning to suggest to the customer to have a RAID 1 setup to 
 increase the I/O speed. I'm looking at possibilities to speed things
 up by not having to change the overall setup.

I think you might want to have a better idea of what is going on first 
before you just start throwing hardware at it.  If your numbers were 
correct but still too slow I'd say sure.  But your numbers are *broken* 
wrong.  You *might* fix your problem (by accident!) by throwing away some 
pieces and adding others, but you might not, too.  Then you've got a 
client that just spent a bunch of money for nothing...

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] rsyncd full backup

2013-10-31 Thread Timothy J Massey
Sharuzzaman Ahmat Raslan sharuzza...@gmail.com wrote on 10/31/2013 
02:38:01 PM:

 Hi Timothy,

 I got the number by observing the output of iotop while file 
 transfer is running. Also, on BackupPC host summary page, average 
 transfer rate for full backup is also around 3MB/s

 It could be a network bottleneck also, as the customer is using 
 100Mbps switch with around 80 PC, not including network printer and 
 servers. Inclusive should be around 100 network devices.

For file transfers, 100Mb/s is good for 7MB/s transfer rate.  Assuming a 
good quality switch (which is a *big* assumption), the number of computers 
shouldn't matter.

But I would think strongly about buying a good quality Gigabit switch (I 
recommend the HP V1910-24G) as your backbone:  Plug all of your servers 
(including the BackupPC server) into it, as well as each of your 100Mb/s 
switches (even better if they have Gb uplink ports!).  That would 
eliminate the network as a bottleneck and only costs $300.  And improve 
network performance across the board, though your users may not notice it 
if they only work with small files.

 Any idea how to properly troubleshoot network bottleneck? My skill 
 is a little bit lacking on that area.

Sure:  Time the copying of files from one machine to another.  Assuming 
the source and destination hard drives are faster than 7MB/s (and they 
very well *better* be!), then you'll saturate a 100Mb network no problem.

For a more scientific approach, check out iperf.
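
A minimal iperf run looks something like this (assuming it's installed on 
both ends;  run the server side on the BackupPC box and point the client at 
it):

    iperf -s                   # on the BackupPC server
    iperf -c backuppc-server   # on the client;  hostname is just an example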

I'd be *much* more worried about checking out your *disk* performance. You 
can do tests in exactly the same way:  copy files to and from the disk and 
see what happens.  Here are some very simple examples:

sync; time dd if=/dev/zero of=test.fil bs=1M count=1024; sync; sync; sync;
sync; time dd if=test.fil of=/dev/null bs=1M

The first line times the writing of a 1GB file named test.fil.  The second 
one times the reading of the same 1GB file.  Divide 1024 by the number of 
seconds it takes and that will give you the MB/s that you transferred. 
(The sync command is needed for accurate timing;  the three sync commands 
are kind of an old UNIX graybeard joke:  
http://unix.stackexchange.com/questions/5260/is-there-truth-to-the-philosophy-that-you-should-sync-sync-sync-sync
 
)
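
For example, if the write takes 85 seconds, that works out to 1024 / 85, or 
roughly 12MB/s;  if it takes 50 seconds, you're seeing about 20MB/s.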

If you want more scientific disk performance information, check out iozone 
or iometer.

Remember:  always profile before you optimize.  ( 
http://www.phatcode.net/res/224/files/html/ch37/37-02.html )

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-30 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 10/29/2013 08:21:11 PM:

 I'm not sure how I can go about determining if a particular backup 
 is using the pool or just storing the files in the PC folder. What's
 the best way to check if a given backup set is represented in the 
 pool or not? Would knowing the size of all the pc folders help narrow it 
down?

Nope. 

 I'm not sure if this is the best way to check the hard linking, but 
 here's a test I thought might be helpful. I did this command to see 
 if a common file in these backups are pointing to the same inodes.

You want to look for files in the pc directory that have only one 
hardlink.  These files are not in the pool and need to either be deleted 
or connected to files in the pool.

Jeff Kosowsky gave you a command to list files with only one hardlink.  
Adam Goryachev gave you a good du command to find out how much space is 
being taken by the pc directory *after* counting the files in the pool 
separately.  That command will take a long time to run, but it will give 
you a pretty clear idea of where the space is being consumed.
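
I don't have Jeff's exact command in front of me, but the idea is a find 
limited to regular files with a link count of 1, something along these 
lines:

    find /backup/pc -type f -links 1 | head

A few small per-backup log and status files will legitimately show up with 
one link;  it's the big trees of ordinary files you're looking for, because 
those are taking space without being pooled.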

I would not stake my life on this, but I would bet a pretty substantial 
amount of money:  you did something to break the pooling.  Most likely by 
copying backups around.  This undid the hardlinks and left you with 
individual copies of the files.


Or punt completely:  rebuild the BackupPC server and start over.

You could do almost as well by confirming that your latest backups *are* 
hardlinking properly and then deleting all of the old backups except maybe 
a copy or two.  I would not delete the copies by hand, but rather change 
the configuration to only keep 1 full and 1 incremental.  It might be a 
good idea to make some archives to make sure you have a good copy 
somewhere.  In any case, once BackupPC has deleted all of the old backups, 
go into your pc directories and make sure that there is indeed only the 
backups listed in the GUI in the folder structure.  Then, change the 
incremental and full keep counts back to what they should be and allow it 
to rebuild.
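
As a sketch, the per-host settings involved (in that host's config file) 
would be dropped to something like this while the old backups expire, and 
then put back to your normal values afterwards:

    $Conf{FullKeepCnt} = 1;
    $Conf{IncrKeepCnt} = 1;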


The only other thing that I can think of is that you did something wrong 
with archiving and accidentally archived data somewhere within the 
BackupPC tree.  In my case, I archive to a removable hard drive and 
sometimes the drive is not mounted when the archive runs.  The archives 
are then put on the backup drive (because that's where the removable drive 
is mounted).  That's tricky because you can't see the files when the drive 
*is* mounted (which is the vast majority of the time).  I have to unmount 
the drive and then I can see terabytes of archive data that should have 
been written to a removable drive.

I don't know if that might be part of your problem.  But it's the only 
other thing I can think of.

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-30 Thread Timothy J Massey
Adam Goryachev mailingli...@websitemanagers.com.au wrote on 10/30/2013 
09:18:59 AM:

 Not really relevant to this thread, but I have in the past added a 
 empty file to each of the removable drives, then test if the file 
 exists before creating the archives. If the drive isn't mounted, the
 file won't exist. Thus preventing that issue.
 
 I'm sure you've probably considered this previously already :)

Thank you for the suggestion!

My thought was to parse the output of df /path/to/drive and confirm that 
it was mounted correctly.  (I already do that in the scripts I use to 
mount the removable drive and re-format it.)  If it happened more than 
once or twice a year, I probably would!  :)

Your way certainly works, too.  Because it's the root of the removable 
drive, I could simply look for lost+found, too!  :)
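
Either check is only a couple of lines in front of whatever kicks off the 
archive.  A rough sketch (the mount point and marker file name are made 
up):

    # marker-file style, per Adam's suggestion
    [ -e /mnt/archive/.archive-drive ] || { echo "archive drive not mounted" >&2; exit 1; }

    # or ask the kernel directly whether anything is mounted there
    mountpoint -q /mnt/archive || { echo "archive drive not mounted" >&2; exit 1; }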

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] OT: rsyncd as a service on Windows 8

2013-10-29 Thread Timothy J Massey
Dan Johansson dan.johans...@dmj.nu wrote on 10/27/2013 02:26:06 PM:

  Have you checked the Event Viewer?  It usually shows you what's going 
on 
  with rsync...

This seemed to have gotten missed.  Is there anything in there?  Rsync is 
usually pretty expressive in the Event Viewer...

Also, someone else mentioned about the possibility of suspend/hibernate. I 
don't back up clients, so all of my servers are up 24/7.  I'd look 
carefully into that:  I've found way too many applications that are 
unhappy about power saving.

 Tim, do you mind sharing the CMD-file?

Literally about 30 seconds of effort went into it...  My rsync daemon is 
installed in a directory called rsyncd and contains folder:  bin (which 
contains rsync.exe and needed DLL's), log (which contains the log, .pid, 
.lock, etc.), and etc (which contains rsyncd.conf and .secrets)



@ECHO OFF
REM Start Rsync Daemon v2
IF EXIST rsync.exe GOTO PATH_OK
ECHO You must start this script in the same directory as rsync.exe.
GOTO END
:PATH_OK
DEL ..\log\rsyncd.pid
START rsync.exe --config=..\etc\rsyncd.conf --daemon --no-detach
:END


I'm not worried about deleting an active .pid file (if it's even possible) 
because I don't use it for anything, and if a second rsync daemon tries to 
start it won't because it won't be able to attach to port 873.

Like I said, not much effort in the script, but it works.

I don't use services for starting my rsync daemon.  I use a scheduled task 
set to start the daemon on startup *and* every hour.  That way if it 
*does* crash, the daemon will restart automatically.

I certainly could make the script much smarter:  check to see if there's 
an rsync process already running, etc.  But this little bit of effort 
works perfectly for my needs.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-29 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 10/29/2013 01:53:31 PM:

 On the General Server Information page, it says Pool is 2922.42GB 
 comprising 6061942 files and 4369 directories, but our pool file 
 system which contains nothing but backuppc and is 11 TB in size is 100% 
full.

My strong guess is that, while you *think* nothing else is out there, that 
is not the case!  :)

 I'm confused how this happened and even ran the BackupPC_nightly 
 script by hand which didn't seem to clear up any space. Judging by 
 the reported pool size it should be less than 30% full. I could 
 really use some help. Thanks in advance for any ideas on how to go 
 about troubleshooting this.

From the other message, it seems that the filesystem you're worried about 
is /home.  What is the TopDir of BackupPC?  I assume it's something like 
/home/backuppc (or I sure hope it is!).  Go to that path and type:

du -hs

This will take a *long* time:  BackupPC has a *lot* of files.  I would 
also hope that you see a number very similar to the pool size report 
above.

So, if you find that the size of your BackupPC TopDir (what you verified 
with du -hs) matches the GUI's pool report, then you know that there's 
something on the drive but *outside* of the BackupPC TopDir.  Find it and 
delete it.

Because du -hs of the pool takes so long, you could, of course, do it the 
*other* way:  do a du -hs of each directory *besides* your TopDir and see 
how much space is being used by them.  Depends on how many other folders 
you have how hard that would be.  But when you see some other folder using 
8TB, you'll know where the space went!

However, if you find that your du -hs does *not* match your GUI report, 
then you have to look more closely.  Have you *EVER* done anything from 
the command line inside of the TopDir?  Given that you mention running 
BackupPC_nightly by hand, I suspect you of monkeying with things, and that 
very well may have broken things.  Tell us what you did!  :)

(It's not that you can't run BackupPC_nightly by hand;  you can.  It's 
more that if you're brave enough to run it by hand, there's no telling 
what *else* you might have done, and how you might have broken things!  :) 
 ).


If you can't tell, I suspect something outside of BackupPC has used the 
space, *or* that you moved/copied/etc. something using tools outside of 
the BackupPC system and have broken things unintentionally.  It is very 
unlikely that BackupPC is wrong on its pool report.  Most likely it's 
something *else* that is consuming the space, and because of that all the 
BackupPC_nightly in the world isn't going to free up the space.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Disk space used far higher than reported pool size

2013-10-29 Thread Timothy J Massey
Craig O'Brien cobr...@fishman.com wrote on 10/29/2013 03:30:46 PM:

 The topdir is /var/lib/BackupPC which is a link to /backup

I missed that in your previous e-mail.  Stupid proportional fonts...

(And you might want to add a -h to commands like du and df:  the -h is for 
human-readable...  When the numbers are for things in the *terabytes*, 
it's a lot of digits to manage...)

 If I do an   ls -l /var/lib
 I get a bunch of other directories as well as:
 lrwxrwxrwx.  1 root root   7 Dec 17  2011 BackupPC -> /backup
 
 bash-4.1$ ls -l /backup
 total 20
 drwxr-x---. 18 backuppc root 4096 Oct 25 21:01 cpool
 drwx--.  2 root root 4096 Dec 17  2011 lost+found
 drwxr-x---. 76 backuppc root 4096 Oct 24 16:00 pc
 drwxr-x---.  2 backuppc root 4096 Dec 24  2012 pool
 drwxr-x---.  2 backuppc root 4096 Oct 29 01:05 trash
 bash-4.1$

That is much clearer, thank you.

 It's only backuppc stuff on there. I did it this way to give the 
 backuppc pool a really large drive to itself.

Sounds reasonable.

 As far as things done 
 from the command line I've deleted computers inside of the pc 
 directory that I no longer needed to backup. From my understanding 
 that combined with removing the pc from the /etc/BackupPC/hosts file
 would free up any space those backups used to use in the pool.

The way the pooling works is that any files with only one hardlink are 
deleted from the pool.  Given that BackupPC is saying that the pool is 
only 3TB big, then your problem is *not* that there are things in the 
pool that aren't anywhere else.  Your problem is the exact opposite:  you 
have files somewhere else that are *not* part of the pool!

 I've 
 manually stopped and started the backuppc daemon when I've made 
 config changes, or added/removed a pc. At one point I had almost all
 of the pc's being backed up with SMB, and switched them all to using 
rsync.

Again, I could imagine lots of ways this might explode your pool, but you 
have the exact opposite problem:  your pool is too small!

Please run Jeff's command to find out where you have files that are not in 
the pool.  That will be most informative.

 I ran the du -hs command you recommended, I'll post the results when
 it eventually finishes. Thank you.

I doubt that will help if you did it from /backup.  The point of that was 
to isolate other non-BackupPC folders.

Check lost+found and trash while you're at it and see what's in there. 
They should both be empty.

I'm with Jeff:  I think that you have multiple PC trees that are not part 
of the pool.  How you managed that I'm not sure.  But you need to find 
those files and clean them up.  Start with Jeff's command and go from 
there.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] OT: rsyncd as a service on Windows 8

2013-10-26 Thread Timothy J Massey
Dan Johansson dan.johans...@dmj.nu wrote on 10/26/2013 08:37:48 AM:

 Any suggestions on
 a)   how to find out why rsyncd dies in the first place

Not really:  I have never run rsync on Windows 8.  I *have* done it on 
Windows Server 2012 (based on Win8) with zero crashes across 3-4 servers 
on that OS and maybe 3 months of time.  So it may not be a Windows 8 
problem exactly, it may be just *your* Windows 8.

Have you checked the Event Viewer?  It usually shows you what's going on 
with rsync...

 b)   how to fix this

If we don't know why it dies, how can we fix it?

 c)   if this is unfixable how can I make rsyncd restart even if there
 are a .pid and .lock file around

You can't, to my knowledge.  I have wrapped launching rsync in a CMD to 
delete stale files before launching the daemon.  (I hope I'm wrong:  it 
would be nice not to need it!)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


Re: [BackupPC-users] Repeated error on more than one client; rsync timeout

2013-10-23 Thread Timothy J Massey
Hans Kraus h...@hanswkraus.com wrote on 10/23/2013 11:35:47 AM:

 Hi,
 
 that's the strange thing: initial backups worked, only the following
 backups showed the timeout.

I'd get that.  With the initial backup, there are no checksums performed: 
what would you checksum on the backup server?  So rsync is simply copying 
data non-stop, and never hits the timeout.  But the second time you do 
have files to checksum, and that's probably where you hit the timeout: 
while one side or the other was calculating the checksum.

First time I've ever heard of an *rsync* timeout value...  Glad to know of 
it!  :)  I'm actually wondering now if it might be the source of some 
weirdness I've seen on one particular rsync setup I'm using...

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Repeated error on more than one client; rsync timeout

2013-10-22 Thread Timothy J Massey
Hans Kraus h...@hanswkraus.com wrote on 10/22/2013 01:25:30 PM:

 It was very simple: I copied the 'rsyncd.conf' file on the Clients ( all 

 Debian at the moment) from an example file, as I read in
 http://howden.net.au/thowden/2012/11/rsync-on-debian/. Namely I did
 'cp /usr/share/doc/rsync/examples/rsyncd.conf'.
 That example file contains the line:
  timeout = 600
 After changing that value to 3600 it seems to work.

600 is way too small;  that's only 10 minutes, and there will be files 
that take more than 10 minutes to process.

Frankly, I don't believe you should have ClientTimeout set in your client 
files;  having that set as a global default in config.pl should be fine, 
and there the default is 72000 (as in 20 hours, or 20 times longer than what 
you have set now!).

I see and understand the value of the ClientTimeout parameter, but I've 
found that in practice it is almost *never* triggered correctly.  The 
majority of times it is triggered is when a job is legitimately taking a 
long time.  That's why an extreme setting is not so bad.

I've found that there is one place where I *do* set it in the client 
config file:  archive.  Doing archives does not seem to reset the alarm 
setting during the process (and I've asked for help with this and gotten 
none).  So, I've taken to setting my archive ClientTimeout to 720000 (as 
in 200 hours) so that they complete without error.
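
In other words, something like this in the archive host's per-PC config file 
(the exact file location depends on your install):

  $Conf{ClientTimeout} = 720000;    # 200 hours, for the archive host only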

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Writing archives to external drive

2013-10-15 Thread Timothy J Massey
Phil Reynolds phil-backu...@tinsleyviaduct.com wrote on 10/15/2013 
08:05:14 AM:

 I haven't seen BackupPC write an archive during my testing - maybe I'm
 missing some settings?

More likely, you're just expecting too much:  BackupPC does not write an 
archive on a schedule at all.  It only writes one based on a request from 
a user.  And there *is* some configuration that needs to be done by you: 
see the documentation for the section entitled Archive functions.

Please note that the request does not need to come from an actual human: 
You can use a cron entry to script this.  Google BackupPC archive from 
cron for lots of alternatives.
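
As a rough sketch, a cron entry run as the backuppc user might look like the 
following (the install path, archive host name and client list are 
assumptions -- check your own layout, and that your version ships 
BackupPC_archiveStart):

  # m h dom mon dow  command
  7 22 * * 5  /usr/share/backuppc/bin/BackupPC_archiveStart archive backuppc host1 host2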

 Are there any suggestions that could make my use of BackupPC in this
 way easier, or more practical?

The Google mentioned above will help.  Basically, you will mount the 
external drive to a particular path, then script an archive to that path.

I like BackupPC very much, but its archive capabilities are rudimentary at 
best.  No scheduled archives, no ability to run more than one archive at a 
time (which trips me up when one archive runs long and my next scheduled 
one runs!), and no ability to create incremental or differential archives. 
 So you may not have the flexibility that you want.

An alternative solution for places with sufficient bandwidth (which isn't 
as much as you might think if you're using rsync) is to simply run a 
second BackupPC server somewhere else.  That gives you way better offsite 
protection.  If you can accomplish that, the only reason to use archives 
is for long-term archiving, in which case its limitations are much less 
painful.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Cannot use Linux (Was: Re: FW: no rrd graphs in version 3.3.0)

2013-10-15 Thread Timothy J Massey
Charles Belarmino charles.belarm...@maximintegrated.com wrote on 
10/13/2013 10:54:20 PM:

 Hello Everyone,
 
 Do I still first need to do “Connect to Network”….
 
 [image removed] 
 
 
 before I can get my BackupPC Server running like this one? 

First, the positive.  You *did* ask a question.  Thank you!

However, you are demonstrating that you do not know how to use Linux, not 
BackupPC.  We cannot train you on how to use Linux.  It's something you're 
going to have to figure out on your own.  Or, there are lots and lots of 
companies that you can pay to train you (or even to set up a working 
BackupPC solution for you).  A volunteer mailing list is not the right 
place.

I will repeat what I said before, and then I am done with this thread: 
Google for step-by-step instructions on how to set up BackupPC.  Follow 
them.  Make it work according to the instructions.  If you accomplish 
this, most of your questions will probably be answered, and you'll have a 
working system to play with.

Also, in the future:  do not reply to some random message that has nothing 
to do with yours (like you did to start this thread).  It makes it 
difficult to follow when you do that.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 

--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Issues with Include and Exclude for a SMB backup

2013-10-11 Thread Timothy J Massey
Bowie Bailey bowie_bai...@buc.com wrote on 10/11/2013 03:38:29 PM:

 The first entry in the BackupFilesOnly (the key) should be the 
 sharename, Users.  After you add that, use the buttons to the 
 right to add the directories you want to backup within that share.

Or, if there's only one share, you could use * for the key.  That's 
usually what I do even with more than one share because the things I'm 
excluding are things like Documents and Settings or ProgramData or 
something like that, which is very unlikely to be on more than one of my 
shares for a host.

And that way I avoid the problem you're having!  :)
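
A sketch of both forms (the share and directory names here are made up):

  # one key per share...
  $Conf{BackupFilesOnly} = { 'Users' => ['/alice', '/bob'] };
  # ...or '*' as a catch-all key, which is what I usually do for excludes:
  $Conf{BackupFilesExclude} = { '*' => ['/Documents and Settings', '/ProgramData'] };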

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] no rrd graphs in version 3.3.0

2013-10-10 Thread Timothy J Massey
Charles Belarmino charles.belarm...@maximintegrated.com wrote on 
10/10/2013 01:52:26 AM:

 I cannot run my backuppc server installed in Virtual Box using 
 CentOS6.4.  Attached is the error.

That error message told you exactly what was wrong (the CGI script can't 
talk to the BackupPC server) *AND* exactly what you should check (check 
the server's configuration).

The problem is that there are an unlimited number of reasons why your 
server isn't running.  And we don't know what you have done up to this 
point, AND YOU DIDN'T TELL US!

So, why don't you try *telling* us how exactly you got to this point?

Tim Massey (so Holger doesn't have to!)


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] no rrd graphs in version 3.3.0

2013-10-09 Thread Timothy J Massey
Tyler J. Wagner ty...@tolaris.com wrote on 10/09/2013 10:15:57 AM:

 On 2013-10-09 14:09, Holger Parplies wrote:
  Hi,
  
  vano wrote on 2013-10-09 05:17:53 -0700 [[BackupPC-users]  no rrd 
 graphs in version 3.3.0]:
  Found that after upgrade to version 3.3.0, rrd graphs is missing 
 in web interface.
  
  congratulations.
 
 As you may have noticed, Holger doesn't believe the graph is a good 
thing.
 Thankfully, many of us disagree.

No, I think that was more a comment on the fact that the person's 
question was actually a statement, without actually asking for anything. 
He was congratulating him on making such a clear statement of fact!  :)

(I felt exactly the same way reading the initial message.  If you can't 
actually attempt to figure something out on your own, can't you at *least* 
ask a question?!?  God forbid you actually tell us the most basic 
information about your system and what you've tried...  But I seem to have 
just a hair more self-control than Holger... ;)  )

 The rrdtool graph is a Debian-specific patch. Did you install 3.3.0 from
 the Debian/Ubuntu package? If not, get it from my repo:

I have to say that this drives me nuts.  Why does a downstream provider 
bundle these things together as if they were supposed to be together? 
Then, when it breaks, BackupPC people are supposed to maintain it?!?

They may be the coolest things on Earth (wouldn't know;  I use CentOS), 
but it's not right to mangle the package like that and make it the 
default package for the distro...  :(

If it *IS* the coolest thing on Earth, can't we get it upstream for 
*everyone*?

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] simple backup scheme, 1xfull, 6xinc

2013-10-04 Thread Timothy J Massey
µicroMEGAS microme...@mail333.com wrote on 10/04/2013 09:29:22 AM:

 I have red many times the manual and of course I tried many settings, 
 still without luck. I have 30 hosts which have about 1400Megabytes of 
 data in summary. As I am using a 2TB harddisk for my backuppc pool, I 
 would like (am able) to have just one full backup for all of them. If 
 it's possible to have 2 FullBackups due to deduplication, that would be 
 nice. But one full Backup is enough, because I dont have more capacity. 
 So I would like to have:
 
 - full backup, once a week
 - incremental backups, every day until next full backup is done
 
 I have configured:
 
 $Conf{FullPeriod} = '6.97';
 $Conf{IncrPeriod} = '0.97';
 $Conf{FullKeepCnt} = [1];
 $Conf{FullKeepCntMin} = '1';
 $Conf{FullAgeMax} = '6';
 $Conf{IncrKeepCnt} = '6';
 $Conf{IncrKeepCntMin} = '3';
 $Conf{IncrAgeMax} = '14';
 
 But I am facing the problem, that I get up to 3-4 FullBackups for each 
 host and then the other host backups fail.

Check the configuration for the Web GUI for that host.  What does it show 
for FullKeepCnt?

 I cannot explain why this 
 happens.

Me either.  The only thing I can think of is that *you* think it's set 
correctly, but BackupPC does *NOT* and it's doing something else.  I'd use 
the GUI to check the configuration because that's going to show you 
exactly what BackupPC is using for the configuration.  If it thinks 
something other than what you pasted above, then you have a problem with 
your configuration files.

Also, show us the log file for a particular host (the entire thing) for a 
complete month.  It will help all of us to see what BackupPC is thinking. 
(Using October won't be enough.)  For example, notice what a section of 
mine looks like:


2013-09-21 18:00:01 full backup started for directory D (baseline backup 
#69)
2013-09-22 01:56:54 full backup started for directory C (baseline backup 
#69)
2013-09-22 02:22:38 full backup 70 complete, 433302 files, 614928683948 
bytes, 118 xferErrs (0 bad files, 0 bad shares, 118 other)
2013-09-22 02:22:38 removing full backup 49


Notice that it does a new full backup, *then* deletes a now-obsolete full 
backup.  (In my case, it just did full backup 70 and removed backup 49 
because I keep more than one full backup, spread out over months of time.)

We should see something similar from your log files as well.

 I have red, that if an incremental backup is needed for the 
 last FullBackup or something like that, but to be hondest I didnt 
 understand that :( 

I assume you're a non-native English speaker (it would be read, not 
red), so I'm trying to understand that sentence, but it's not a complete 
sentence so it's hard.  But here goes:  I think you have that backwards. 
Incremental backups are based on full backups.  You can't have 
incrementals without fulls.  But full backups stand on their own:  they 
don't need incrementals.

But if there are incrementals that need to be kept, then the full they are 
based on will need to be kept, too.  That means that with 6 incrementals, 
you're going to have to keep *2* fulls around.

 I also tried $Conf{IncrKeepCnt} = '6'; then I stopped 
 BackupPC, manually removed the backups within my pool harddisk, then I 
 did a manually nightly_job cleanup with 
 /usr/share/backuppc/bin/BackupPC_nightly 0 255 and restarted BackupPC.

Don't forget that you have to remove the backups from the backups file, 
too.

 What am I doing wrong? I would really appreciate any assistance for my 
 desired backup scheme. What are the correct values for my purpose? Thank 

 you.

As described above, with incrementals, at some point you're going to have 
two fulls (because when you do a full, the incrementals will be based on a 
*previous* full which will be kept around).

However, it should be possible to have *only* two fulls, and with pooling 
you might be just fine with that.  You're saying you have three or more, 
so something else is wrong.

Like I said:  check what the GUI says the configuration should be. 
FullKeepCnt should be 1, FullKeepCntMin should be 1, FullAgeMax should be 
much bigger (there's no disadvantage in having it be something like 90 or 
180) but is not part of this issue.  The values you have for the variables 
above are fine (except change FullAgeMax).  I'm not sure, though, that 
BackupPC is actually *using* them.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--

Re: [BackupPC-users] simple backup scheme, 1xfull, 6xinc

2013-10-04 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 10/04/2013 02:20:19 PM:

 Are all of your backup runs completing every day?

That is also a great question.  If your incrementals span more than a 
week, there will be more than one full that they depend on.  All of the 
assumptions you've put in place only work if the backups run exactly the 
way they're supposed to;  if you do not actually *complete* 6 incrementals 
between your fulls, you will have more than 2 full backups!

For testing purposes, I would set FullKeepCnt to 1 and IncrKeepCnt to 1; 
then you know that there will be only one full that will be kept by the 
incremental (meaning that you will bounce between one and two full 
backups).

While you're at it, set FullPeriod to 1.97 to accelerate the testing 
process.
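
In config terms, those test settings would look something like this:

  $Conf{FullKeepCnt} = [1];
  $Conf{IncrKeepCnt} = 1;
  $Conf{FullPeriod}  = '1.97';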

Then let it run.  You should start with 1 full after day 1, then 1 full 1 
inc after day 2, then 2 full 1 inc after day 3, then 1 full 1 inc after 
day 4, and then alternate between those last two states.

Another way of testing this would be to back up a *MUCH* smaller chunk of 
data:  say, a few GB.  Then let it run with your normal settings and 
confirm that everything works like it's supposed to.  Then you can expand 
the amount of data you back up and see how the space is used on your 
backup disk to see if everything is going to fit.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] simple backup scheme, 1xfull, 6xinc

2013-10-04 Thread Timothy J Massey
µicroMEGAS microme...@mail333.com wrote on 10/04/2013 02:42:15 PM:

 I think here's my problem: let's say one or two host were not able to be 

 backed up (incremental) Then I miss one or more incremental backups, so 
 BackupPC doesn't delete the last Full after 7 days (when the new Full 
 backup occurs). How can avoid that scenario? Should I decrease the 
 FullIncrKeep to something lower? Any help appreciated.

The only way is to make sure that the number of incrementals you keep will 
fit between the fulls no matter what.  So, with weekly fulls, you could 
shrink the number of incrementals to, say, 5:  that way if one fails, 
you're still OK.  But if two fail, you will have a point where you will 
have THREE fulls remaining.

Or, do only full backups.  Because of pooling, the amount of space used 
for a full backup and an incremental backup is *IDENTICAL*.  The *only* 
advantage of incrementals over fulls is that they run in less time.  If 
you're able to do a full backup in enough time, you could simply switch 
FullPeriod to 0.97 and FullKeepCnt to 7 and let it go at that.
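
That is, something along these lines (values straight from the paragraph 
above):

  $Conf{FullPeriod}  = '0.97';
  $Conf{FullKeepCnt} = [7];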

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] New backup server

2013-05-24 Thread Timothy J Massey
Erik Hjertén erik.hjer...@companion.se wrote on 05/24/2013 12:15:22 PM:

 Hi all
 
 I have invested in a used HP Proliant ML150 G5 server as a new 
 backup server. I have about 500 GB of data in 40 000 files spread 
 over 8 clients to backup. Data doesn't grow fast so I'm aiming at 
 two 1TB disks in a raid 1 configuration. 
 
 Do I go with more expensive, but faster (and more reliable?), SAS-
 disks. Or is cheaper, but slower, S-ATA disks sufficient? I'm 
 guessing that disk speed will be the bottle neck in performance?
 
 Your thoughts on this would be appreciated. 

What is your backup window?  12 hours?  You could do that with a *single* 
7200 RPM SATA drive.  8 hours?  Probably still, but you'd have to do some 
testing to see.  Less than that?  You're going to need to intimately 
understand both your circumstances and the various technologies inside of 
BackupPC to be able to answer that better.

Frankly, a mirrored array isn't gonna buy you much performance increase. 
It won't help write performance at *all*, and I'm not sure you'll need 
enough read performance to matter:  the high amount of seeking that 
BackupPC requires doesn't really help you get sustained high read 
performance.

I will take Les' advice (don't use the Green drives) one step farther:  I 
recommend the drives designed for DVR/Video use.  Normal drives (the 
not-Green drives) are warrantied only for 8x5 usage;  the DVR drives are 
rated for 24x7 usage.  They're a little more expensive, but not much.

There are other questions you will need to ask that will make as much (if 
not more) difference than the speed of the drives you'll be using:

* Would more drives (even if they're slower) give you better performance?

* How fast can the *clients* push the data?  If you're limited by them, 
improving the server won't help.

* What is the speed of the network involved?  Are you talking 100Mb/s or 
slower?  That will severely limit your performance.  Do you have Gigabit 
everywhere between them?  Are there points in between that might cause 
problems (like if the clients and server are in *different* switches)? Can 
you do bonded Ethernet on the server?

* What technique are you using to back up the files?  Rsync over ssh (with 
encryption overhead)?  Rsyncd?  tar/SMB (which are much less intelligent 
in transferring files, but maybe less disk-intensive)?  Will you use 
compression on the server, and what level of CPU do you have?

* What do your files look like?  40,000 files for 500GB of data is a 
pretty high size-per-file.  (Contrast with one of my servers, which is 
800GB, but 400,000 files: 1/5 the data per file.)  Are your files mostly 
small (say, under 10kB), mostly average (10k to 10M)?  Do you have any 
massive files (1GB or larger) to deal with?  Backing up a database server 
and backing up a mail server require *noticeably* different approaches.

(Believe it or not, even all of *this* is not all that's involved!)


Your question is along the lines of "Which is faster, a bicycle or a dump 
truck?"  The answer is:  it depends.  Need to move a mountain of sand 50 
miles away?  A dump truck.  Delivering urgent letters in Manhattan? 
Bicycle!  :)  I know you tried to give us some idea of what you're trying 
to do (500GB and 40,000 files across 8 clients), but not nearly enough to 
accurately answer the question.

But keep this in mind:  the numbers that you supplied are what I would 
consider small and boring:  a relatively small amount of data and a 
relatively small number of clients.  The only thing I find interesting is 
that your 500GB of data is only 40,000 files.  That's a small number of 
files, so I am curios as to the size of your files.  Other than that small 
eybrow-raising item, your application is extremely 
straightforward--assuming that the answer to all of the questions above 
are the typical answers I would expect in a decent office environment.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] New backup server

2013-05-24 Thread Timothy J Massey
Erik Hjertén erik.hjer...@companion.se wrote on 05/24/2013 03:40:36 PM:

 Thanks for your thorough reply Timothy.

No problem.  BackupPC (well, all system backup) is a *lot* more complex 
than people think!

 About 8000 files a photos between 5 and 15 MB each, in total around 
 100 GB. This will perhaps explain the slightly size-per-file 
 numbers.

15MB/file is perfectly manageable.

 There are a also few ISO-files around 7 GB, rest is smaller files.

Large files are a hassle to back up.  Period.  *BUT*:  A file that doesn't 
change doesn't need to be backed up more than once.  So static ISO's are 
fine.  (Or at least if you change them infrequently.)  What is a *major* 
issue are large files that change frequently:  daily database dumps, 
copies of running VM disk images, etc.  Each backup becomes a full backup, 
and it pretty much destroys pooling.  (Yet *another* thing to understand 
that I didn't touch on in the original e-mail:  how do your file size 
*and* usage patterns affect pooling?  :)  )

 I think now SATA drives will do the job. I'll look into costs on 
 different drives.

Depending on budget, I'd go with these:  
http://www.seagate.com/internal-hard-drives/consumer-electronics/sv35/?sku=ST3000VX000

The 3TB drives (what I use) are twice the price of the 1TB, but you can 
*never* have too much storage!  :)  (But then you get into the decision: 
would you be better off with 4 x 1TB than 2 x 3TB for *performance*?  :) )

(And while I'm thinking of it:  1TB of total space for 500GB of target 
backup space is... interesting.  I assume you want more than one backup 
for a time-based history?  (I usually default to between 12 and 16 copies 
to cover 120 or so days of time.)  Yes, BackupPC has pooling, but I would 
not want to put in a new solution that has significantly less than 
50% growth capacity on day one...)

Tim Massey
 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC not obeying blackout times

2013-05-22 Thread Timothy J Massey
Changing the ping command to a different command can be done on a per host 
basis. 
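
A minimal sketch of what that per-host override might look like (the config 
file path and host name here are assumptions -- adjust for your install):

  # /etc/BackupPC/pc/laptop42.pl
  $Conf{PingCmd} = '/bin/true';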

Timothy J. Massey

Sent from my iPhone

On May 22, 2013, at 4:52 PM, Zach lace...@roboticresearch.com wrote:

 This is true only for one host.  This is why I wouldn't want to change ping 
 to /bin/true...I want it to still work correctly for other hosts.
 
 -Zach
 
 On 05/22/2013 04:44 PM, Timothy J Massey wrote:
 Zach lace...@roboticresearch.com wrote on 05/22/2013 03:36:00 PM:
 
  Thank you Arnold.  Is there any way for me to tell BackupPC to ignore
  the preconditions?  I don't care if this guy has backed up before, or if
  he doesn't have an overdue backup, or if he has enough good pings.  I
  just never want a backup to start between 8am and 5pm, period.
 
 I don't know if you can get rid of *all* of them, but you can change ping to 
 /bin/true and change the ping count to 1 (or   maybe even 0, though 
 I'm not sure it's possible) to at least   significantly reduce the 
 possibility of it running. 
 
 Also, is this true for some hosts, or *ALL* hosts for that server?  If so, 
 you could adjust the WakeupSchedule to exclude those times (but remember 
 that you want the *first* WakeupSchedule to be during a time when you are 
 *not* doing backups:  see the documentation for details). 
 
 Tim Massey
 
  
 Out of the Box Solutions, Inc. 
 Creative IT Solutions Made Simple!
 http://www.OutOfTheBoxSolutions.com
 tmas...@obscorp.com  22108 Harper Ave.
 St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796
 
 
 --
 
 
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 
 --
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] mysql streaming

2013-04-26 Thread Timothy J Massey
Arnold Krille arn...@arnoldarts.de wrote on 04/26/2013 04:27:44 PM:

 On Thu, 25 Apr 2013 14:45:50 -0700 Lord Sporkton
 lordspork...@gmail.com wrote:
  I'm currently backing up mysql by way of dumping the DB to a flat
  file then backing up the flat file. Which works well in most cases
  except when someone has a database that is bigger than 50% of the
  hdd. Or really bigger than around say 35% of the hdd if you account
  for system files and a reasonable amount of free space.
  I started thinking, mysqldump streams data into a file and then
  backuppc streams that file for backup. So why not cut out the middle
  man file and just stream right into backuppc? Ive been playing with
  the backup commands but im getting some unexpected results due to
  what I believe is my lack of total understanding of how backuppc
  actually streams things.
 
 I don't know about your rates, but here in europe a new 2TB-disk costs
 less then me thinking and trying to implement anything like this.

Speculation about getting data to BackupPC snipped.

 So unless you have an academic interest in understanding how things
 work, its much cheaper and easier to just push more disk-space into the
 servers concerned. Probably its just putting two more disks on the host
 and then increasing several machines disks?

I second this.  I usually have a Samba share and an NFS share on my 
BackupPC box for just this situation (I'm looking at *you*, Microsoft 
Exchange).  I would have the SQL server dump its data via SMB/NFS to 
BackupPC, and then back up that data using the localhost host on BackupPC. 
Works fine.  The only annoying part is having to copy the data twice (once 
across the network and once when BackupPC backs it up).  Ideally, I would 
**LOVE** a mode where BackupPC simply hardlinks the source data right into 
the pool (obviously, my pool and the shares are on the same partition). If 
I could do that, there would be *ZERO* downsides to my method!  :)
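
For example, a nightly dump from the database box to that share, which the 
localhost backup then picks up.  This is only a sketch -- the mount point, 
schedule and mysqldump options are assumptions:

  # on the SQL server's crontab
  7 1 * * *  mysqldump --single-transaction --all-databases | gzip > /mnt/backuppc-share/dumps/alldb-$(date +\%F).sql.gz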

In fact, I'd pay for that feature...  Jeff or Holger, any interest?  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] mysql streaming

2013-04-26 Thread Timothy J Massey
backu...@kosowsky.org wrote on 04/26/2013 05:04:07 PM:

 If you are indeed talking about files in the 50-200GB range, you are
 not going to fit more than a handful of files per TB disk... even if
 you have a RAID array of multiple disks, you are still probably
 talking about only a small number of files. So, you are probably
 better off writing a simple script that just back up those few DB
 files and rotates them if you want to retain several older copies.

Which is something *else* I do (for, say, NTBACKUP or Windows Server 
Backup), again by using that NFS or Samba share on my BackupPC server!  :) 
 Basically, I often use my BackupPC systems as a combination NAS/Backup 
server.  Again, works very well.

If you're dealing with a bunch of SCSI (Really?  Not SAS?) servers, I 
assume you have *some* budget.  I just spent less than $6,000 for a server 
with 12 3TB SATA hot swap hard drives (rated for 24/7 operation, not 
desktop drives), 2 x Intel Xeon E5 2.4GHz quad-core processors, 16GB RAM, 
an LSI RAID controller and 4 x GbE and 2 x 10GbE.  Configured for RAID-6 
with hot-spare, I still got 24TB of space.  That puppy moves 800MB/s to 
the drives...  All for $6k.  If I had cut some corners on RAM and CPU (and 
skipped the extra GbE ports) it would have been a little under $4k.

That's enough space for more than 100 copies of a 200GB database...  And 
you could still use BackupPC for keeping older copies around over time, 
even if you don't get much help with pooling...

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] mysql streaming

2013-04-26 Thread Timothy J Massey
backu...@kosowsky.org wrote on 04/26/2013 06:27:32 PM:

 My point is that even with o(100) files/copies which assuming you are
 backing up multiple versions means you have far fewer distinct files
 -- you may be better off just writing a script...

I get your point, though I would ask you to define "better"...

 BackupPC is really targeted at backing up large trees of thousands of
 files. If you just have a 100 large databases, why not just use a cron
 script that copies each one and appends a datestamp or stores it in a
 different folder. It will probably be much faster too...

Why not cron?  There's no Web GUI, it doesn't expire old versions easily 
over an increasingly long period of time like BPC ( [2,2,2,2], for 
example), I can't easily start/stop/manage the backup jobs while they're 
running (short of kill -9), there's no built-in way of archiving this data 
to removable storage, and I'm already keeping an close eye on my BackupPC 
server for all of the other backups that it's doing, so why build a 
kluged-together script when I can use the tool I've already GOT?

I agree that it's an environment where many of BackupPC's unique strengths 
are not used to their advantage.  But I already have it for the areas 
where it *does* excel, and other than it's a little slower than a 
straight-up native rsync, I get *all* those other features for free.  And 
as for speed:  I do the localhost during the day, when the BackupPC server 
would be idle anyway, so who cares about the time as long as it completes 
before my backup window starts in the evening?  Why *would* I use cron 
instead?

Perfect is the enemy of good enough.  And BackupPC is *plenty* good 
enough.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Forcing time distribution of backups?

2013-03-13 Thread Timothy J Massey
Koen Vermeer k...@vermeer.tv wrote on 03/13/2013 08:13:45 PM:

 On 2013-03-13 21:20, Brad Alexander wrote:
 So I'm wondering, is there a way to better force a better 
 distribution of backup jobs during the day?
 What about setting MaxBackups to a smaller number than 4?

That's what I'd do:  set the MaxBackups to, say, 1, and they'll be forced 
to spread out.

Or, hand-craft the blackout periods so they don't overlap.  But I'd rather 
just let the system do it for me, and setting MaxBackups to the most you 
want to run simultaneously will accomplish this without you having to 
think about it.
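
In config.pl terms, that is simply:

  $Conf{MaxBackups} = 1;    # run at most one backup job at a time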

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Pool synchronization?

2013-03-06 Thread Timothy J Massey
Mark Campbell mcampb...@emediatrade.com wrote on 03/06/2013 11:01:28 AM:

 I don't mean to bring up another RTFM moment, but I've searched 
 around, and I haven't found the location for enabling/disabling the 
 pooling.  The compression option I've found, but not pooling.

There is no way of doing this.  If you don't want pooling, you don't want 
BackupPC.  BackupPC without pooling equals rsync (or tar or cp or...).

What *exactly* are you trying to accomplish by turning off pooling?

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to disable Outlook Backup?

2012-12-11 Thread Timothy J Massey
Andrew Mark andr...@aimsystems.ca wrote on 12/11/2012 10:20:15 AM:

 Hi all,
 
 We use MS Outlook for its calendar and contact functions; our email is 
 web-based.
 It there a way to disable BackupPC from checking and warning that 
 Outlook is not backed up?
 ie. stop checking the value of $Conf{EmailNotifyOldOutlookDays} or make 
 it infinite.

Is this a trick question?  Is there really a perceptible difference 
between 999999999 days (or about 2 *million* years) and infinite?  Was 
using a really big number just too imaginative for you?  :)

I haven't tried to see if that number will work correctly:  it's just a 
Perl variable, and according to http://www.perlmonks.org/?node_id=718414 , 
it's smaller than Perl's maximum long int, but maybe there's some other 
limit.

So, you could be more conservative:  99999 days is 273 years.  Even 9999 
days is 27 years.  Do you *really* think we'll still be using Outlook in 
27 *years*?  (God, I hope not...)
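
So something like this (the exact value is arbitrary, as long as it is huge):

  $Conf{EmailNotifyOldOutlookDays} = 9999;    # roughly 27 years of "never"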

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] archive aborts after signal ALRM

2012-12-05 Thread Timothy J Massey
Stefan Peter s_pe...@swissonline.ch wrote on 11/18/2012 04:55:49 PM:

 On 18.11.2012 20:44, Till Hofmann wrote:
  But now, when I'm trying to archive more clients at once, the 
 process receives ALRM after exactly 20 hours.
  
  2012-11-09 19:54:29 Starting archive
  2012-11-10 15:54:29 cleaning up after signal ALRM
  2012-11-11 03:16:38 Archive failed (aborted by signal=ALRM)
  
  I figured my system is set up to allow only 20 hours of execution 
 time but I couldn't find any settings like that, all my files in /
 etc/security are untouched.
  
 
 Have a look at the ClientTimeout in the Backup Settings of your
 archive host (or the $Conf{ClientTimeout} variable in
 /etc/backuppc/archive.pl).

This is the solution.  I have archives that take days to create (they're 
about 3TB big before compression) and I have to crank the ClientTimeout 
*way* high.

I've tried figuring out a way to get the archive process to reset the 
ClientTimeout timer while the archive is running (like backups do, I 
believe), but I have yet to hit on a successful way of doing it...  :(

If anyone's interested, I can share details of what I've tried, as well as 
accepting any suggestions to try.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync never starts transferring files (but does something)

2012-12-05 Thread Timothy J Massey
Markus unive...@truemetal.org wrote on 11/19/2012 04:03:03 PM:

 For fun, here's the output of find / | wc -l:
 
 24478753
 
 real 490m35.602s
 user 0m21.013s
 sys  1m23.305s
 
 25 million files! OMG. find took 8 hours to complete. Nice, hm? :-)

Wow.  If a simple find took 8 hours to complete, rsync's gonna take even 
longer to create the file list!  :)

 I started another full backup on Friday 21:00 CET and it's still 
 running, but it started to transfer files after a day or so, finally!

Good for you!

 Another box of the same customer has 2.5 million files and took 29 hours for the 

 first full backup. That means the 25 million box' full backup should be 
 done within 12 days. :-)   But if I understand correctly all future full 

 backups will be faster. No idea what BackupPC will do after these 12 
 days, start directly with another full backup? Well, we will see...

In one case for me, my first full backup took 9000 minutes (almost a full 
week).  Subsequent fulls still took over 4000 minutes, so future fulls were 
only about twice as fast as the first one.  (I ended up 
optimizing things a bit more and getting it down to about 3000 minutes, 
which I'm currently living wth.)

 a) Different profiles with aliases
 b) Different shares

I am looking at doing the same here, just to break up the fulls so they 
can run on different days.  (Thanks, Les!)

 c) tar

Personally, I find that tar doesn't really take less time than rsync for fulls.

 Unforunately almost all of the files are located in /home in thousands 
 of subdirectories, so I can't just say backup /home or /home/1, 
 /home/2, /home/3 in another profile, but if tar won't work I guess I 
 will have to dig deeper into the subdirectories for profiles/shares 
 splitting.

Wow.  25 *million* files saved in home directories?  That kind of defeats 
the purpose of shared data!  I thought my users were bad about that...  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Performance reference (linux --(rsync)- linux)

2012-12-05 Thread Timothy J Massey
Cassiano Surek c...@surek.co.uk wrote on 11/22/2012 05:43:30 AM:

 Dear all,
 
 For reference on the matter I was trying to resolve or improve, I 
 have increased RAM from 2Gb to 4Gb (the max for that machine) and 
 backups reduced by 50% in completion time.
 
 I had wrongly assessed in the past that it was taking 10 days to 
 complete a full backup, when it actually took around 7 days. 
 
 This has now been brought down to 3.5 days.
 
 Incremental is now at 9 hours which is reasonable, albeit still on 
 the slow side.
 
 We will try to improve things further based on all your much 
 appreciated information.

I'm glad you were able to get it down to a usable timeframe.  However, I 
never did see how many files you're backing up.  I have found on my 
server (with 1.3 million files) that going from 2GB to 4GB did not make an 
appreciable difference.  I'm wondering how your results fit in with this.

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Afraid of the midnight monster?

2012-11-08 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 11/08/2012 10:51:53 AM:

 On Thu, Nov 8, 2012 at 9:16 AM, Jimmy Thrasibule
 thrasibule.ji...@gmail.com wrote:
  Hi,
 
  I wonder if it is possible to wake BakupPC at midnight. In the
  documentation or on the Internet, they all start at 1 to 23.
 
  $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 
14,
  15, 16, 17, 18, 19, 20, 21, 22, 23];
 
 
  Can I put '0' in the `WakeupSchedule` list?
 
 Yes but since BackupPC_nightly runs on the first entry (1), omitting
 midnight might be intentional to give the running backups some extra
 time to complete.  You might want to rotate the list a bit to make the
 nightly run happen later in the morning when everything should be
 finished.

I second this suggestion.  Cleaning the pool (what BackupPC_nightly does) 
*thrashes* the disks, pretty much causing your backups to pause during the 
time in which this runs.  I move mine to 10 A.M. (by making the first 
entry in the WakeupSchedule be 10:  10, 1, 2, 3, ...).
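
In config.pl that looks like this (assuming you otherwise want the usual 
hourly schedule):

  $Conf{WakeupSchedule} = [10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23];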

As for leaving 0 (midnight) out:  I'm actually a fan of this.  So many 
things are triggered at the change of the day that I never knowingly 
schedule anything for midnight, for performance reasons.  Plus, there is 
confusion of when exactly midnight is:  is Midnight at the beginning or 
the end of the day?  I've seen it both ways...  So, personally, I simply 
avoid it.

Of course, I often take this farther and never trigger things for the start 
of an hour, even.  I usually go with 7 minutes after the hour just out of 
general principle.  Seeing as BPC can't start on not-hours (or, at least, 
I've never tried a fractional hour within the WakeupSchedule), I don't 
mind skipping midnight...

But don't let my foibles prevent you from starting on that extra hour!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Performance reference (linux --(rsync)- linux)

2012-11-07 Thread Timothy J Massey
Cassiano Surek c...@surek.co.uk wrote on 11/06/2012 05:03:44 AM:

 Of course, how could I have missed that! I did find it now, thanks 
Michał.
 
 Last full backup (of 100 odd Gb) took slightly north of 10 days to 
 complete. Incremental, just over 5 days.

I did not see if you mentioned how *many* files you had.  Are we talking 
100,000 files or 10 Million files?  That will make a *big* difference in 
performance.

For some more numbers:  I have a file server with 700GB of data in 400,000 
files that takes about 5 hours for an incremental, and about 13 hours for 
a full.  I have another server that is 3,000GB (3TB) big with 1.4 Million 
files and it takes about 50 hours for a full and about 20 hours for an 
incremental.  Both are *substantially* faster than yours.

Both servers use an Intel Atom D520 processor (read:  very low 
performance) with 4GB of RAM.  One has a 4-disk RAID5 array and one has a 
6-disk RAID6 array of standard desktop SATA drives.

I'm not happy with the performance I get from these systems (though they 
are acceptable), but they are several orders of magnitude faster than 
yours.

I will echo some of the previous suggestions:

1) Do a test rsync between the two systems to see if you get sufficient 
performance that way (see the sketch after this list)
2) If you're using rsync over ssh, is CPU (for encryption) a problem?  (I 
use rsyncd, so no encryption)
3) If you're using compression, is CPU a problem?  (I had to turn off 
compression for the sake of performance)
4) Do you have a *zillion* little files?  Then RAM or disk seeks may be 
your problem.
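
For the first item, something like this is what I mean (the host and path 
are placeholders; it copies into a scratch directory you can delete 
afterwards, and the transfer rate rsync reports gives you a baseline):

  rsync -av --stats root@client.example.com:/some/data/dir/ /tmp/rsync-speed-test/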

Given the *terrible* numbers you're seeing, I would really emphasize 
making sure that there isn't something wrong with your hardware (such as 
network or disk).  Also, make sure you really look at top/vmstat/dstat 
numbers to see what you're system is doing.  This is not just a little bit 
of slowdown.  Something is fundamentally broken.

Hope that helps...

Tim Massey


 


Re: [BackupPC-users] Any procedure for using 2 external harddrives

2012-10-25 Thread Timothy J Massey
Tyler J. Wagner ty...@tolaris.com wrote on 10/25/2012 09:14:47 AM:

 On 2012-10-25 14:10, Bowie Bailey wrote:
  On 10/24/2012 12:41 PM, dixieadmin wrote:
  I am currently using BackupPC 3.2.1 on SME Server 8.0.  I wanted 
 to know if there is a correct procedure for using 2 different 
 external harddrives as a backup source and to interchange them weekly?
 
  interchange them weekly ... Are you referring to having a 2-drive 
  mirror and replacing one weekly?  Or having one drive connected at a 
  time and swapping them out weekly?  Or having both drives connected 
and 
  swapping both out for 2 different drives weekly?
 
 Because of the pool model that BackupPC uses for storage, there are only
 two useful methods for using two different external drives:
 
 1. Use RAID and remove one member periodically, to take offsite. This is
 best done with 3 drives, actually.
 
 2. Use the external drives as a target for ArchiveHost, and dump 
occasional
 tarballs of each host to them. This is OK for emergency restore but 
won't
 give you a working BackupPC server if you want that.

Also, keep in mind that the original question shows a fundamental lack of 
understanding of the way BackupPC works.  BackupPC is *NOT* a simple 
replacement for a tape drive.  Swapping the entire pool weekly will 
undermine the way BackupPC works:  it's not designed for that.  And using 
preconceived ideas of traditional backup to shape the way BackupPC works 
is not a path that leads to success.

For BackupPC to work effectively, the pool stays in place permanently, 
100% of the time.  You do not interchange anything inside of BackupPC at 
all, with any frequency.  Of course, off-site backups are important, even 
essential.  This is *NOT* achieved by swapping pools.  It's achieved by 
one of three ways:

1) Constantly mirroring the pool and occasionally breaking the mirror to 
take it off-site.  (Option 1 above).  A variation of this would be to take 
the pool down and make a copy of it, then bring it back up (or use LVM 
snapshots to reduce the downtime).  This variation is left as an exercise 
for the reader:  there are a *lot* of unexpected details in that answer: 
problems with file-level copy will most likely require block-level copies, 
LVM snapshots present performance and reliability issues, etc.

2) Exporting a single copy of a specific backup to a removable destination 
and moving that off-site.  (Option 2 above;  see the config sketch after the 
third option below.)  A variation of this would be to make the destination 
for the archive a remote filesystem that is already off-site, so there is no 
physical movement at all.

There is a third option that was not mentioned:

3) Have two BackupPC systems that *both* back up the same hosts in 
parallel.  A variation of this would have the BackupPC servers in 
different physical locations.  This variation just about requires the use 
of rsync, and even then is not always practical (if the data changes per 
day/week/whatever are too vast, the bandwidth between them too limited, or 
there is simply too much data to be able to back it all up twice in a 
reasonable amount of time).
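
For option 2, the BackupPC side amounts to a few settings.  A rough sketch, 
assuming an archive host has already been defined and that the destination 
path below is just a placeholder:

    # config.pl (or the archive host's per-host .pl file):
    $Conf{ArchiveDest}  = '/mnt/offsite';   # removable or remote mount point
    $Conf{ArchiveComp}  = 'gzip';           # 'none', 'gzip' or 'bzip2'
    $Conf{ArchiveSplit} = 0;                # 0 = a single tar file per host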

Tim Massey


 


Re: [BackupPC-users] How to deactive the file mangling on backups ?

2012-09-24 Thread Timothy J Massey
I believe there is a FUSE filesystem that gives you a filesystem-level view of 
unmangled filenames for BackupPC trees.

For the rest of us, making the BackupPC tree directly browsable by rsync 
really does not have much value:  a tar or zip restore is sufficient, and I 
think you will find very, very little interest in your small change.


Timothy J. Massey

Sent from my iPhone

On Sep 24, 2012, at 3:46 PM, Serge SIMON serge.si...@gmail.com wrote:

 The point is that it barely miss nothing to have a browsable uncompressed 
 rsynced backup folder for people willing to retrieve files from that directly.
 It's the default rsync behavior.
 I'm pretty sure every point you are presenting could be dealt in another way.
 Attrib files could have a really proprietary name to avoid collisions 
 (.backuppc_internal_attrib-backup-UUID).
 Attrib files could be stored in another meta directory somewhere else under 
 pc/localhost/ (and, regarding the volume saved on a backup, i really doubt 
 that a few more I/O would have a significant impact).
 ...
 
 As said previously, that would allow these two nice features :
 - allowing to plug an external drive with a backuppc uncompressed rsync-ed 
 backup folder on another computer if the main server crash (allowing a quick 
 recovery of some files - of course, one can unmangle the directory, but 
 really, it's an unnecessary manipulation, especially if it can be easily 
 avoided from the start)
 - allowing to pusblish (sshfs, nfs, smb, ...) a read only share of this 
 uncompressed rsync-ed backup folder.
 
 I really like a lot of features in backuppc, the internal rsync storage 
 behavior is the only one i strongly disagree with, mainly because i really 
 thing it can be easily be just a simple and regular rsync backup path.
 
 Thanks anyway for your previous answer.
 
 Regards.
 
 -- 
 Serge.
 
 
 On Mon, Sep 24, 2012 at 8:35 PM, backu...@kosowsky.org wrote:
 Serge SIMON wrote at about 19:01:54 +0200 on Monday, September 24, 2012:
   Hello,
  
   BackupPC seems a great tool.
  
   The only - main - drawback for me is the file name mangling on (mainly
   rsync) backups (backuppc adds an f to every folder of file backuped).
  
   I've read the few information about that (that it wasn't activated on old
   versions, and why it has been), but i really think it's a bad design
   choice. attrib files could have been stored elsewhere for example, and so
   on.
  
   The problem is that the rsynced folder is not directly usable once
   backuped, one can only restore from the (great) backuppc front-end, whereas
   there are several use cases where this is not possible or pertinent (server
   crash  remount external backup drive elsewhere and willing to immediately
   access backuped files, or just willing to share as read only the backup
   folder on the network).
  
 
 One is not supposed to use rsync directly to restore from the pc tree
 for mulitple reasons, including the fact that if you use a compressed
 pool, the files will be compressed. Also, you lose all the key
 metadata like timestamps, ownership, permissions. Similarly, special
 files, hard/soft links, etc. won't be treated properly. Finally, the
 prefix-f is not the only potential change. Various non-alphanumeric
 characters are also substituted by their hex representations which
 would also mess up rsync. Moreover, there is good reason to keep the
 attrib file in the same tree (it saves on directory reads), hence the
 need for f-mangling.
 
 Basically, the only use case for restoring any file directly from the
 pc tree is if you are an expert user well-aware of the above
 limitations, in which case undoing the extra 'f' should not be a
 problem.
 
   So : how could i deactivate this file mangling thing ? (at first glance in
   the code, it seems to be hard coded !).
 
 It's hard coded. There is no reason to disable it. It would mess
 things up.
 
   If it is not possible to deactivate this behavior, could it be either
   removed, either made configurable in a next version ?
 
 No --  It's a terrible idea that shows a complete lack of
 understanding of how backuppc works and the whole purpose of the pc tree.
 Feel free to rewrite the code to function otherwise but I can't think
 of any reason why the code should be officially changed and I can
 think of lots of reasons not to.
 
  
   Thanks in advance for any answer / clue about this.
  
   Regards.
  
   --
   Serge.
   

Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-21 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/18/2012 07:04:21 PM:

  The guest servers are not hurting for resources.  They are not part of 
the
  problem.  The problem seems to be contained completely inside of the
  BackupPC server.
 
 If you aren't seeing big speed differences among clients you are
 probably right.  I do and they seem related to hardware capabilities.

In the case of the server I'm currently testing, this seems to be the 
case.  The difference between compression and no compression was exactly 
zero:  same time both ways.

I looked into the target more closely:  it has single-spindle drives (no 
RAID arrays), and was connected to the network via 100Mb.  I can't change 
the target's drives, but I moved the server to 1Gb and restarted a full. 
We'll see what happens...

I would not expect the network speed changing from 100Mb to 1Gb to make a 
difference:  this server is literally unchanged from one run to the next. 
BackupPC literally shows zero files changed!  So the network bandwidth 
shouldn't make a difference:  all it should be doing is passing hashes 
back and forth...

We'll see what happens, though, and I'll let you know.

Tim Massey
 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-19 Thread Timothy J Massey
backu...@kosowsky.org wrote on 09/18/2012 09:51:11 PM:

 Timothy J Massey wrote at about 12:54:35 -0400 on Monday, September 17, 
2012:
   I have several very similar configurations.  Here's an example:
   
   Atom D510 (1.66GHz x 2 Cores)
   4GB RAM
   CentOS 6 64-bit
   4 x 2TB Seagate SATA drives in RAID-6 configuration
   I get almost 200 MB/s transfer rate from this array...
   2 x Intel e1000 NICs in bonded mode.
   
 
 I snipped out most of Tim's original post because it seems that nobody
 has referenced the fact that he is using pretty low powered
 chips. Certainly, I could understand how compression would be slow on
 these chips and could be the rate limiting steps.

You are correct.  I've said from the beginning that this is *embedded* 
class hardware.  I'm trying to figure out exactly what I can (or have 
to) do to make this work.

Again, I know that I can throw hardware at this;  I don't want to.  I want 
to find what I can do to tweak my settings for an acceptable level of 
performance.  If I can't, then I will address the hardware, and be 
confident that I understand why I am doing so.

 Tom's Hardware benchmarked the Atom D510 against circa year 2000
 Pentium 4 single core processors and found that for non-multithreaded
 programs that don't take advantage of new instruction set
 enhancements, they are pretty equivalent.

Ouch.  I guess I bought into the Intel hype too much...  :(

I wasn't expecting insane performance.  But I admit I was expecting 
Core-type performance rather than Pentium 4!  It sounds like I'm actually 
getting Pentium III performance, though.  (Which makes sense:  the Pentium 
M was a development of the Pentium III, and it wouldn't surprise me if the 
Atom--designed for ultramobile--had more in common with the Pentium M than 
Core...)

 So, I wouldn't be surprised if the problem is using a netbook
 category performance processor to perform a computationally intensive
 server-type job...

Nor would I.  At all.  But throwing hardware at it is not really answering 
the question, it's more addressing the symptom.

Besides, from someone who has run BackupPC on a wall wart, I would have 
expected more interest in the challenge!  :)

From my limited testing, I was able to get 4x the performance simply by 
disabling compression.  That takes my 4-day fulls and makes them 1-day. If 
I can possibly double the performance again, I get down to 12 hours. 
Mission accomplished.  So, I'll keep tweaking.

As a bit of a sanity check for me, the motherboard I'm using (SuperMicro 
X7SPA-HO-F) has an Atom D510 and is $175.  A SuperMicro X9SPV-LN4F-3LE 
with 3rd Gen Core i7-3612QE is $800.  The lowest-end X9SPV with an i3 is 
$600.  Given that the target price for the entire device is $1200 or so, 
that's a tough fit...  Which is why I'm trying to make the inexpensive, 
compact, cool and quiet Atom-based boards work.

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/17/2012 01:34:33 PM:

 On Mon, Sep 17, 2012 at 11:05 AM, Timothy J Massey 
 tmas...@obscorp.com wrote:
 
 
  I'm writing a longer reply, but here's a quick in-thread reply:
 
  I know exactly what you mean by waiting until after the first full. 
Often
  the second full will be faster -- but only *IF* you are bandwidth 
limited
  will you will see an improvement.  In this case, neither him nor I are
  bandwidth limited.  I don't see an improvement.
 
 The 2nd might even be slower, since the server side has to decompress
 and recompute the checksums.

Interesting possibility.  However, on the big server, it's new enough to 
see the first backup.  First one took 9987 minutes, and the second took 
5558.  The third took   5502.  So there *was* a significant speedup 
between the first and second.  But I'm still only getting 442MB/minute, or 
7MB/s.  That server really should be able to get 5 times as much without 
breaking a sweat.  1100 minutes is still a long time, but manageable. 5500 
minutes is nearly four *days*...  :(

 There's little risk of file-level corruption that would still let the
 checksums cached at the end of the file match unless you have bad RAM
 (which would likely cause crashes) or physical disk block corruption
 which you can check relatively quickly with a 'cat /dev/sd? > 
 /dev/null' or a smartctl test run followed by checking the status.

It's disk block corruption caused by (silently) failing drives I'm most 
worried about.  It wasn't until a data-loss event a few months ago that I 
found that smartd was not set up properly--and I'm not certain that SMART 
would have actually helped in this case.  (Fortunately, the loss of data 
was on the backup server alone, and no one needed that data at that 
moment, and my off-site archives were unaffected, but I still didn't like 
it.)  I *do* scrub the array (the default is weekly, I believe),  and now 
have both SMART and md configured to e-mail alerts.

But I still like the extra protection of *directly* comparing the files. 
But not enough to take 4 days to do a backup!  :)

I'm working on investigating each of these possibilities to improve 
performance.  I will let everyone know what I find.

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
John Rouillard rouilj-backu...@renesys.com wrote on 09/17/2012 01:30:36 
PM:

 AFAIK this is not correct. If checksum caching is enabled, backuppc
 will check the cached checksums against the actual file contents based
 on the setting of the:
 
   $Conf{RsyncCsumCacheVerifyProb} = 0.10; 

Yeah, I probably should have mentioned that.  I knew it was there, but I'd 
rather have that set to 1.00 than 0.10...  :)
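
For anyone following along, this is roughly what that looks like in config.pl 
(a sketch, assuming BackupPC 3.x with checksum caching enabled via the usual 
--checksum-seed option and RsyncArgs already defined):

    # Enable rsync checksum caching (appended to the existing argument lists):
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';
    # Re-verify every cached checksum against the file contents, not just 10%:
    $Conf{RsyncCsumCacheVerifyProb} = 1.00;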

 Also even with checksums turned off, you could have an older copy of
 the file in the pool go bad and you wouldn't know it till you tried to
 restore it, so having checksum off doesn't protect you from bad data
 in the pool except for the most recently used files.

This is perfectly valid.  However, I'm less worried about data that has 
multiple copies (older != newer) than I am about important data (or data 
that *becomes* important) that hasn't been modified recently enough, so that 
there is only a single copy of it stored--the same copy referenced in each 
of, say, six months of backups.

I've had that happen in *more* than one case.  (Another related case is a 
vitally important file that disappeared at least six months ago but 
*must* be brought back...)

Both of these can be dealt with by duplicate, externally managed archives, 
etc., but that is exponentially more annoying to deal with than BackupPC. 
If I can solve a problem there, I would prefer to.  Monolithic 500GB tar 
files (a.k.a. BackupPC archives) are not user-friendly.

 (We also won't discuss having corruption in putting the current bits
 on disks that trashes the curent backup copy. Which frankly there is
 very little short of zfs/btrfs type filesystem that can provide some
 measure of protection/detection.)

Or, like I said, multiple externally managed archives using separate and 
redundant media.  I have that, too.  It's just such a big pain to use that I 
would rather do nearly anything else than depend on it!  :)

Unfortunately, none of this gets us closer to the source of the terrible 
performance we're seeing...  :)

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
John Rouillard rouilj-backu...@renesys.com wrote on 09/17/2012 02:05:28 
PM:

 On Mon, Sep 17, 2012 at 12:54:35PM -0400, Timothy J Massey wrote:
  No matter the size of the system, I seem to top out at about 50GB/hour 
for 
  full backups.  Here is a perfectly typical example:
  
  Full Backup:  769.3 minutes for 675677.3MB of data.  That works out to 
be 
  878MB/min, or about 15MB/s.  For a system with an array that can move 
  200MB/s, and a network system that can move at least 70MB/s.
 
 My last full backup of a 2559463.2 MB backup ran 306.9 minutes. Which
 if I am doing my math right is 138MB/s. This was overlapping in i/o
 with 9 other backups in various backup stages.

Woo.  I like that.  Now let's see why yours is different!

 My backup drive is a 1U linux box exporting its disks as an isci
 system running raid 5 iirc over gig-E. LSI hardware raid with bbu
 write cache I think.

So this backup drive server is simply the iSCSI target that provides the 
storage used by BackupPC running on a different system?  Or is the backup 
drive the client of the backup process?

 4 drive JBOD/raid0, raid 1/0, raid 5, raid 6? I'll assume raid 5.

All of that was clearly outlined at the top of the e-mail:  4 x 2TB 
Seagate SATA drives in RAID-5 (using md, which I'm not sure I stated 
originally).

  My load average is 2, and you can see those two processes:  two 
instances 
  of BackupPC_dump.  *Each* of them are using 100% of the CPU given to 
them, 
  but they're both using the *same* CPU (core), which is why I have 50% 
  idle!
 
 Can you check that with the f J(IIRC) option. I don't see the P column
 in there that would tell us what cpu they are running on.

Certainly!

Thank you very much for your suggestion.  It seems I might have been 
wrong:  my system has not two cores, but two *hyperthreaded* cores--four 
total!  So, the 50% response makes sense for two processes:  they're both 
consuming 100% of a single hyperthread.  Assuming that the Linux scheduler 
is properly handling the HT cores (and I imagine it is...  :) ), then I 
truly am using 100% of each of the two separate cores.

That's fairly bad news for me, then.  These are embedded-style 
motherboards, and upgrading to a 3GHz Xeon processor is not an option... 
:(

I'm going to turn off compression and see what type of results I get. 
Unfortunately, it might take a week to find out:  it'll be like doing the 
backup for the first time again, and it took a week (6.97 days, to be 
exact) the first time...

 Primary backuppc server at the moment is a server providing file
 storage services: 24 core 48G memory (1600 MHz). But I had much the
 same numbers running on a 4 core, 32 GB (or maybe 16) machine with an
 attached SCSI disk array raid 6 with 14 or 15 spindles (Dell
 MD1000). Disk is isci using ext4 noatime.

I'm not sure what this means.  You have a 24-core (what family and MHz?) 
system with 48GB of RAM.  This machine is attached via iSCSI using an 
unknown number of unknown interfaces to an unknown storage unit (it can't 
be that 1U machine you mentioned earlier, could it?  How many drives can 
you have in a 1U unit?) with an unknown number of drives of an unknown 
speed and interface.  This cluster of systems is backing up several 
client systems over an unknown number of GigE interfaces, including the 
system you listed earlier:  2.5TB of data in a full backup that takes 307 
minutes.

 Backuppc is configured with cached checksums.

OK.  What about compression?

 Local clients (we also back up systems over the WAN) are connected
 over Gig-E.

How many GigE?

 When you see high cpu can you use strace to see what you have for i/o
 reads/writes? Also have you tried setting this up with a
 non-compressed pool as an experiment to see if compression is what is
 killing you.

That's my next step.  When I upgraded from my old VIA-based servers, I 
(accidentally) left compression on, and thought the new, dual-core, faster 
and more efficient processor would be OK with this.  That may have been my 
biggest mistake.  (Honestly, I've already found other areas that make me 
think that my original disdain for compression might have been 
well-justified!  :) )

I'll let you know in a week!  :)

Timothy J. Massey


 

Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Tim Fletcher t...@night-shade.org.uk wrote on 09/17/2012 08:50:39 AM:

 You are being hit by disk io speeds, check you dont have atime 
 turned on on the fs.

I agree that noatime is a net win for *very* little pain.  I found a 
system where I had not mounted the datastore noatime and switched it.  In 
my case, while it made a difference, it wasn't monumental:

With atime:  769 minutes for 675677MB.
With noatime:  609 minutes for 648224MB.

So, a 21% improvement.  I'll take it, but it's not going to get my 3.5 day 
backups under 12 hours!  :)

 Also it's worth considering tar instead of 
 rsync for this sort of work load. 

I still have this on the (very) back burner.  It's a Windows system, so 
I'm looking at SMB, and I *hate* backups over SMB.  They are nothing but a 
problem.

Tim Massey
 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
John Rouillard rouilj-backu...@renesys.com wrote on 09/17/2012 02:33:34 
PM:

 I have another system that is lower power:
 
   2632652.8 MB at 662.5 minutes or 66MB/s

That's 2.6TB in 11 hours.  That is perfectly acceptable for me.  And *way* 
better than I'm getting.

 the BackupPC system is a 4 core system with Dual-Core AMD Opteron(tm)
 Processor 2216 (2412.400 MHz).

So 2 CPU sockets, each of which is a dual-core processor?

 16GB of memory with 4x1TB disks. Raid 5
 running 3ware raid card with BBU cache.

So a single block device presented to the OS?

 For the majority of the time
 that backup runs, there are only 3 or so other jobs runing at the same
 time.

OK.

What about checksum caching and compression?

Thank you *very* much for this information.  Your 24-core 48GB monster is 
just too far away from what I've got to be a useful comparison.  But this 
one, while quite a bit bigger, is at least *somewhat* comparable.  And 
you're getting four times the performance, which is what I would have 
estimated that my box was capable of doing, but is not.

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/17/2012 02:44:20 PM:

 On Mon, Sep 17, 2012 at 11:54 AM, Timothy J Massey 
 tmas...@obscorp.com wrote:
 
  However, I have recently inherited a server that is 3TB big, and 97%
  full, too!  Backups of that system take 3.5 *days* to complete.  I 
*can't*
  live with that.  I need better performance.
 

Quick fixes snipped

 In any case you have to keep in mind that if you ever
 have to restore, there will be considerable downtime.  For the price
 of a disk these days, it might be worth keeping a copy up to date with
 native rsync that you could just swap into place if you needed it and
 back that up with backuppc if you want to keep a history.

Fortunately, BackupPC is a backup of the backup right now, and is not 
expected to be used for real.  Yet.  That's why I can take the time and 
try to actually solve the problem, rather than apply band-aids.

But that will likely end in November, if not sooner.

  It looks like everything is under-utilized.  For example, I'm getting 
a
  measly 40-50MB of read performance from my array of four drives,
 
 If every read has to seek many tracks (a likely scenario), that's not
 unreasonable performance.

Which is why I asked if others were getting better performance.  They are. 
 So it's not just inherent to the task.  There's something different.

   My physical drive and network
  lights echo this:  they are *not* busy.  My interrupts are certainly
  manageable and context switches are very low.  Even my CPU numbers 
look
  tremendous:  nearly no time in wait, and about 50% CPU idle!
 
 I think you should be seeing wait time.  Unless perhaps you have some
 huge files that end up contiguous on the disk, I'd expect the CPU to
 be able to decompress and checksum as fast as the disk can deliver -
 and there shouldn't be much other computation involved.

I'm not.  I've got two of these new boxes built.  In both cases, they have 
2-4% wait time when doing a backup.  One is a RAID-5 and one is a RAID-6.

Might it have something to do with md?  Could the time that would normally 
be considered wait time for BackupPC be counted as CPU time for md?  That 
doesn't seem logical to me, but I can say that there just isn't any wait 
time on these systems.

 Mine seem to track the target host disk speed more than anything else.
  The best I see is   208GB with a full time of 148 minutes.  But that
 is with backuppc running as a VM on the East coast backing up a target
 in California and no particular tuning for efficiency.  Compression is
 on and no checksum caching.

That's the same settings I'm using.  But that's about double the 
performance I'm getting.  247GB in 340 minutes, or about 12MB/s.

I've just turned compression off for a couple of hosts.  We'll see how 
this affects their performance.  I'll let everyone know.

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Mark Coetser m...@tux-edo.co.za wrote on 09/18/2012 10:21:42 AM:

 I am busy running a full clean rsync to time exactly how long it will 
 take and will post results compared to a clean full backup with 
 backuppc, I can tell you that the network interface on the backup server 

 is currently running at 200Mbs transfer speed.

Once it is complete, wait a day or so and then re-run the full backup 
*and* the native rsync over the contents of the first rsync.  That will be 
a pretty good comparison of not-first backups.

Frankly, you don't even have to wait.  Once each of them is complete, 
immediately re-run them and compare the results.  There may be a small 
number of new files, but that isn't going to affect the order of 
magnitude of the results, and the difference you will see will be in 
that neighborhood.

That will tell you what I think I already know:  the Perl-based rsync is 
*terrible*.  But it makes the magic of BackupPC work--if you feed it 
enough resources, it seems.

Timothy J. Massey


 


Re: [BackupPC-users] BackupPC is awesome

2012-09-18 Thread Timothy J Massey
Adam Goryachev mailingli...@websitemanagers.com.au wrote on 09/18/2012 
12:58:40 PM:

 I just wanted to put out there, that backuppc can work really well 
 at backing up, and archiving data.

In case there was doubt, I want to echo this:  I have 6-8 BackupPC servers 
in production in various places, some for as much as 5 or 6 years, and 
they have done their job reliably and well.  The only problems I've ever 
had with them have turned out to be hardware problems (failing drives or 
motherboards).  The BackupPC system itself has never let me down.

My current issue is one of scaling and capacity (and desire to not simply 
throw large amounts of hardware at it for the fun of it), not one of lack 
of appreciation for the magic that is BackupPC.

The one thing I wish had not changed over the past few years with BackupPC 
is that Craig Barratt is no longer actively involved with the list.  His 
participation is certainly missed.

Timothy J. Massey
 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Timothy J Massey tmas...@obscorp.com wrote on 09/18/2012 11:07:18 AM:

 John Rouillard rouilj-backu...@renesys.com wrote on 09/17/2012 
02:33:34 PM:
 
  I have another system that is lower power:
  
2632652.8 MB at 662.5 minutes or 66MB/s 
 
 That's 2.6TB in 11 hours.  That is perfectly acceptable for me.  And
 *way* better than I'm getting. 

I have just performed some full backups on a host after disabling 
compression.  Results:

Not-first backup with compression:  139.9 minutes for 7.6 MB
First backup without compression:  76.7 minutes for 70181.5 MB.
Second backup without compression:  36.6 minutes for 70187.4 MB.

Wow:  Four times the performance.  Looks like compression is a 
*significant* performance-eater.
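
For anyone wanting to try the same experiment, the change is a single line.  
A minimal sketch, assuming a per-host override file (the host name here is 
made up):

    # pc/bighost.pl -- disable compression for this host's new pool files;
    # 0 means no compression, 3 is the usual default level.
    $Conf{CompressLevel} = 0;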

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/18/2012 12:42:26 PM:

 On Tue, Sep 18, 2012 at 10:24 AM, Timothy J Massey 
 tmas...@obscorp.com wrote:
 
  Fortunately, BackupPC is a backup of the backup right now, and is 
not
  expected to be used for real.  Yet.  That's why I can take the time 
and try
  to actually solve the problem, rather than apply band-aids.
 
  But that will likely end in November, if not sooner.
 
 It is more than a band-aid to have a warm-spare disk ready to pop in
 instead of waiting for a 3TB restore even with reasonable performance.
  Be sure everyone knows what they are losing.

That is a good point, but if I ever have to do a full 3TB restore from 
BackupPC, the 12 hours a (properly performing) BackupPC will take is not 
my biggest issue.  I don't look at BackupPC as my bare-metal disaster 
recovery plan--or, at least, not my first line of such defense.  That's 
what snapshots and virtualization provides.  BackupPC is for file-level 
restores, and the odds of having to do a full restore from BackupPC is 
small.

  I'm not.  I've got two of these new boxes built.  In both cases, they 
have
  2-4% wait time when doing a backup.  One is a RAID-5 and one is a 
RAID-6.
 
 Can you test one as RAID10?  Or something that doesn't make the disks
 wait for each other and likely count the time against the CPU?

Unfortunately, both of these boxes are in production, so they can't be 
reconfigured;  and I don't have enough parts for another one just yet.  It 
is something worth trying.  I predict that this won't make that much of a 
difference:  I've tested small-write performance differences between 
single-disk, RAID 1, RAID 10 and RAID 5 (but not RAID 6) before, and the 
penalties, while very real, were also very manageable.

But I'll see if I can try it again.

  Might it have something to do with md?  Could the time that would 
normally
  be considered wait time for BackupPC be counted as CPU time for md? 
That
  doesn't seem logical to me, but I can say that there just isn't any 
wait
  time on these systems.
 
 Not sure, but I am sure that raid5/6 is a bad fit for backuppc
 although good for capacity.

And frankly, capacity is what I need more, with a certain minimum amount 
of performance.  I do not need top-performance, and am perfectly willing 
to sacrifice performance for capacity, as long as I could get, say, 50MB/s 
of BackupPC throughput.

50MB/s performance for a RAID-5/6 array should not be difficult, even with 
read/modify/write and small transactions.  I thought I had tested this 
workload on this array successfully, giving me more like 80-100MB/s on 
synthetic benchmarks.

But again, I'll see what testing I can do.

 Are you sure the target has no
 other activity happening during the backup?

I am sure they *are* seeing other activity:  they're file servers, mail 
server, etc.  But their loads are all very low across the board.

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-18 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/18/2012 03:34:56 PM:

 On Tue, Sep 18, 2012 at 1:18 PM, Timothy J Massey tmas...@obscorp.com 
wrote:
 
  That is a good point, but if I ever have to do a full 3TB restore from
  BackupPC, the 12 hours a (properly performing) BackupPC will take is 
not my
  biggest issue.  I don't look at BackupPC as my bare-metal disaster 
recovery
  plan--or, at least, not my first line of such defense.  That's what
  snapshots and virtualization provides.  BackupPC is for 
file-levelrestores,
  and the odds of having to do a full restore from BackupPC is small.
 
 Off topic, but if you haven't already, have a look at ReaR for linux
 bare-metal restores.

Will do.  You were the one that turned me on to Clonezilla, and I'm always 
open to new tools...  :)

ReaR is not exactly the most Google-friendly search term (on so many 
levels...), so for others (and to confirm):  http://relax-and-recover.org/

Sadly, *so* many of my servers are Windows...

 I'm not convinced that benchmarks replicate backuppc activity very
 well.  It seems more likely to have the small writes splattered
 randomly over the disk than a test run creating new files and aside
 from the extra seeking it is likely to have your disk buffers full of
 stuff that isn't what you want next.  If you have spare RAM around
 could you try cramming a lot more in to see how much it helps?

Nope:  these boards have a maximum of 4GB.  Again:  embedded.

And we've had this debate before.  Of 4GB of RAM, 3.2GB is cache! 
Do you *really* think that more will help?  I doubt that the entire EXT4 
filesystem on-disk structure takes 3GB of disk space!  I've demonstrated 
that in previous experiments:  going from 512MB to 4GB made *zero* 
difference.  I doubt going beyond 4GB is going to change that, either.

 Yes, but raw disk space isn't that expensive anymore - just the real
 estate to park them...

Yup.  Hence the embedded.

   Are you sure the target has no
   other activity happening during the backup?
 
  I am sure they *are* seeing other activity:  they're file servers, 
mail
  server, etc.  But their loads are all very low across the board.
 
 I always think 'seek time' whenever there is enough delay to notice -
 and anything that concurrently wants the disk head somewhere else is
 going to kill the throughput.

I think you are way overstating this.  The disks on the clients spend a 
good chunk of their time *idle* even when a backup is going on.  Like I 
said, you can look at the physical drive lights and they pulse in fits and 
starts.  No matter how badly you might think that seeking is hurting 
performance, there is a *ton* more performance to be gotten out of the 
clients' disks.

The guest servers are not hurting for resources.  They are not part of the 
problem.  The problem seems to be contained completely inside of the 
BackupPC server.

Tim Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/17/2012 11:51:09 AM:

 On Mon, Sep 17, 2012 at 10:16 AM, Mark Coetser m...@tux-edo.co.za 
wrote:
  
 
  Its the first full run but its taking forever to complete, it was 
running
  for nearly 3 days!
 
 As long is it makes it through, don't make any judgements until after
 the 3nd full, and be sure you have set up checksum caching before
 doing the 2nd.   Incrementals should be reasonably fast if you don't
 have too much file churn but you still need to run fulls to rebase the
 comparison tree.

I'm writing a longer reply, but here's a quick in-thread reply:

I know exactly what you mean by waiting until after the first full.  Often 
the second full will be faster -- but only *IF* you are bandwidth limited 
will you will see an improvement.  In this case, neither him nor I are 
bandwidth limited.  I don't see an improvement.

I am routinely limited to no more than 30MB to 60MB per *minute* as the 
maximum performance for my rsync-based backups.  This is *really* pretty 
terrible.  I also see that the system is at 100% CPU usage when doing a 
backup.  So, my guess is that the Perl-based rsync used by BackupPC is to 
blame.

The other annoying part of this is that top shows 50% idle CPU.  That's 
because I have two cores.  One of them is sitting there doing *nothing*, 
while the other is at 100%.  The icing on the cake is that there are *two* 
BackupPC_dump processes, each trying to consume as much CPU as they 
can--but they're both on the same core!

A typical top:

top - 13:07:44 up 36 min,  1 user,  load average: 1.97, 1.89, 1.52
Tasks: 167 total,   3 running, 164 sleeping,   0 stopped,   0 zombie
Cpu(s): 46.1%us,  2.4%sy,  0.0%ni, 49.4%id,  2.0%wa,  0.0%hi,  0.2%si, 
0.0%st
Mem:   392k total,  3809232k used,   115212k free,11008k buffers
Swap:0k total,0k used,0k free,  3280072k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 1731 backuppc  20   0  357m 209m 1344 R 100.0  5.5  24:14.07 
BackupPC_dump
 1679 backuppc  20   0  353m 205m 2208 R 92.5  5.4  21:54.89 BackupPC_dump


So, I have two CPU-bound tasks and they're both fighting over the same 
core.

Is there anything that can be done about this?


A quick aside about checksum caching:  I very much *want* the ability to 
check whether my backup data is corrupted *before* there is an 
issue, so I do not use checksum caching.  So, yes, this puts much greater 
stress on disk I/O:  both sides have to recalculate the checksums for each 
and every file.  But the client can do it without monopolizing 100% of the 
CPU;  the BackupPC side should be able to, too...

Tim Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Timothy J Massey
  Even my CPU numbers look tremendous:  nearly no time in wait, and about 50% CPU idle!

Ah, but there's a problem with that.  This is a dual-core system.  Any 
time you see a dual-core system that is stuck at 50% CPU utilization, you 
can bet big that you have a single process that is using 100% of the CPU 
of a single core, and the other core is sitting there idle.  That's 
exactly what's happening here.

Notice what top shows us:

top - 13:21:27 up 49 min,  1 user,  load average: 2.07, 1.85, 1.67
Tasks: 167 total,   2 running, 165 sleeping,   0 stopped,   0 zombie
Cpu(s): 43.7%us,  3.6%sy,  0.0%ni, 50.5%id,  2.1%wa,  0.0%hi,  0.1%si, 
0.0%st
Mem:   392k total,  3774644k used,   149800k free, 9640k buffers
Swap:0k total,0k used,0k free,  3239600k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 1731 backuppc  20   0  357m 209m 1192 R 95.1  5.5  35:58.08 BackupPC_dump
 1679 backuppc  20   0  360m 211m 1596 D 92.1  5.5  32:54.18 BackupPC_dump


My load average is 2, and you can see those two processes:  two instances 
of BackupPC_dump.  *Each* of them is using 100% of the CPU given to it, 
but they're both using the *same* CPU (core), which is why I have 50% 
idle!

Mark Coetser, can you see what top shows for the CPU utilization for your 
system while doing a backup?  Don't just look at the single 'idle' or 
'user' numbers:  look at each BackupPC process as well, and let us know 
what they are--and how many physical (and hyper-threaded) cores you have. 
Additional info can be found in /proc/cpuinfo if you don't know the 
answers.

To everyone:  is there a way to get Perl to allow each of these items to 
run on *different* processors?  From my quick Google it seems that the 
processes must be forked using Perl modules designed for this purpose.  At 
the moment, this is beyond my capability.  Am I missing an easier way to 
do this?

And one more request:  for those of you out there using rsync, can you 
give me some examples where you are getting faster numbers?  Let's say, 
full backups of 100GB hosts in roughly 30-35 minutes, or 500GB hosts in 
two or three hours?  That's about four times faster than what I'm seeing, 
and would work out to be 50-60MB/s, which seems like a much more realistic 
speed.  If you are seeing such speed, can you give us an idea of your 
hardware configuration, as well as an idea of the CPU utilization you're 
seeing during the backups?  Also, are you using compression or checksum 
caching?  If you need help collecting this info, I'd be happy to help you.


To cover a couple of other frequently suggested items, here's what I've 
examined to improve this:

Yes, I have noatime.  From fstab:  UUID=snipped  /data  ext4 
defaults,noatime  1 2
Noatime only makes a difference when you are I/O bound--which ideally a 
BackupPC server would be.  In my case, it made very little difference. I'm 
not I/O bound.

I am using EXT4.  I have gotten very similar performance with EXT3.  Have 
not tried XFS or JFS, but would *really* prefer to keep my backups on the 
extremely well-known and supported EXT series.

I am using compression on this BackupPC server.  Obviously, this may 
contribute to the CPU consumption.  My old servers did not have 
compression, but had terrible VIA C3 single-core processors.  And their 
backup performance was quite similar.  I figured with the Atom D510 I'd be 
OK with compression.  But maybe not.  I'll try to see if I can do some 
testing with some smaller hosts without compression and see what happens.

As for checksum caching:  As I mentioned, I think the protection that 
leaving it off provides is very valuable.  But I look forward to seeing the 
performance others are getting, and comparing, to see at what performance 
cost this protection comes.

Thank you very much for your help!

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Timothy J Massey
Les Mikesell lesmikes...@gmail.com wrote on 09/17/2012 11:01:25 AM:

 On Mon, Sep 17, 2012 at 7:59 AM, Mark Coetser m...@tux-edo.co.za 
wrote:
 
  Surely disk io would affect normal rsync as well? Normal rsync and 
even
  nfs get normal transfer speeds its only rsync within backuppc that is 
slow.
 
 
 Backuppc uses its own rsync implementation in perl on the server side
 so it will probably not match the native version's speed.

Sadly, I think this is where the problem lies.

 Is this the
 first or 2nd full run?  On the first it will have to compress and
 create the pool hash file links.

But doesn't this get done at the link stage, not the backup stage?  I 
didn't think that the link running part even got counted in the backup 
time...

In my case, this is well after the first (or first 100) backups...

  On the 2nd it will read/uncompress
 everything for block-checksum verification.  If you have enabled
 checksum caching, fulls after the 2nd will not have to read/uncompress
 unchanged files on the server side.

I'm going to have to test this...  but I really don't like the fact that 
with checksum caching a file corrupted on the backup server will remain 
undetected--until the user tells me so when I restore it...  :(

Timothy J. Massey


 


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Timothy J Massey
Mark Coetser m...@tux-edo.co.za wrote on 09/17/2012 11:16:29 AM:

 Its the first full run but its taking forever to complete, it was 
 running for nearly 3 days!

*IF* the backup is bandwidth-limited, the first run will take longer than 
subsequent runs.  How much depends on how bandwidth-limited you are!

When I back up clients over the Internet, the initial backups can take a 
*very* long time (more than a week).  Subsequent full backups take maybe 
3-4 hours.

However, for hosts across a fast LAN, this will not be the most 
significant part of your slowdown.  Given your network specs, I doubt that 
this is it.

Timothy J. Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc slow rsync speeds

2012-09-17 Thread Timothy J Massey
Rodrigo Severo rodr...@fabricadeideias.com wrote on 09/17/2012 11:22:23 
AM:

 On Mon, Sep 17, 2012 at 12:16 PM, Mark Coetser m...@tux-edo.co.za 
wrote:
 
 Its the first full run but its taking forever to complete, it was
 running for nearly 3 days!
  
 I'm seeing similar issues here.
 
 Is there any troubleshooting recommended to this kind of problem?

For the first run, pay attention to network utilization.  There are no 
existing files for BackupPC to do anything with:  it's basically absorbing 
a bunch of new files.  So you are most likely going to be limited by the 
speed of the connection--even if it's a Gigabit connection.

For subsequent runs, see my other (very long) e-mail.  Examine the CPU 
usage (and I/O usage!) of your BackupPC server and see what is limiting 
you.

Timothy J. Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Adding client file accesses to AV exceptions

2012-09-12 Thread Timothy J Massey

At a high level, VSS works the same way as LVM snapshots. At an implementation
level, it's completely different. You do not need a separate partition; you
just need enough free space on the volume.
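
If it helps, the way I'd expect this to get wired into BackupPC is a pair of 
per-host hooks (the script names and login below are purely illustrative -- 
the actual snapshot scripts are up to you):

    # Create a shadow copy on the client before the dump, tear it down after:
    $Conf{DumpPreUserCmd}  = '$sshPath -q -x -l backup $host c:/backuppc/shadow-mount.cmd';
    $Conf{DumpPostUserCmd} = '$sshPath -q -x -l backup $host c:/backuppc/shadow-umount.cmd';

The backup share then points at the mounted snapshot rather than the live 
volume.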

Timothy J. Massey

Sent from my iPhone

On Sep 12, 2012, at 2:27 PM, Kenneth Porter sh...@sewingwitch.com
wrote:

 --On Wednesday, September 12, 2012 1:02 PM -0500 Les Mikesell
 lesmikes...@gmail.com wrote:

  Is there a way to make a volume shadow snapshot and exclude the
  snapshot location?

 That sounds interesting. Most of my clients are XP SP3. None were
 partitioned with extra space for a VSS like one would need for a Linux
VSS.
 Can you do a VSS on NTFS without a dedicated partition to hold it?

 (As I understand it, the extra partition is just a tiny space to hold
 filesystem edits while the filesystem is locked, and only has to be big
 enough to hold any filesystem changes for the lifetime of the VSS.)




--

 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and
 threat landscape has changed and how IT managers can respond. Discussions

 will include endpoint security, mobile security and the latest in malware

 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] WinXX file ownership / permissions

2012-09-04 Thread Timothy J Massey
Michael Stowe mst...@chicago.us.mensa.org wrote on 09/03/2012 07:33:39 
PM:

 I'd recommend rsync+vshadow to get all the files, of course -- for a 
bare
 metal restore, if you don't recover the registry, you won't have 
anything
 to map to, anyway, so I'm assuming you're going to do that as well, in
 which case, what you lack is a back up of your ACLs (possibly:  all that
 other stuff, I don't know how important any of that is to you.)

For those that may not know about it:  CACLS.  
http://www.techrepublic.com/article/use-caclsexe-to-view-and-manage-windows-acls/1050976

You can use this in a pre-dump command to record the ACLs of all of your 
files, and then use that record to restore the proper ACLs if you ever need 
to.  It's not the cleanest system I've ever seen... but it's better 
than nothing, if ACLs are important to you.
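
For example, something along these lines as a pre-dump hook (the login, drive 
and output path are illustrative -- adjust to taste):

    # Dump the ACLs of the tree being backed up into a text file that then
    # gets picked up by the backup itself.  /T recurses, /C continues past
    # errors instead of aborting:
    $Conf{DumpPreUserCmd} =
        '$sshPath -q -x -l backup $host cmd /c "cacls C:\Data /T /C > C:\Data\acls.txt"';

On a restore you at least have a record of who was supposed to own what.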

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Backing up Servers

2012-08-21 Thread Timothy J Massey
Ray Frush ray.fr...@avagotech.com wrote on 08/21/2012 11:30:22 AM:

 You need to exclude /proc from your backups.  It's a virtual file 
 system maintained by the kernel, and does not need to be backed up.
 
 Here's the excludes we use for Linux hosts:
 
 $Conf{BackupFilesExclude} = {
   '*' => [
     '/dev',
     '/proc',
     '/tmp_mnt',
     '/var/tmp',
     '/tmp',
     '/net',
     '/var/lib/nfs',
     '/sys'
   ]
 };

Another way of doing this is to add --one-file-system to the rsync 
parameters.  You just need to make sure that you then back up all of the 
filesystems you *do* want!  :)
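
Roughly (the share list here is illustrative, not a recommendation):

    # Keep rsync from crossing mount points...
    push @{$Conf{RsyncArgs}}, '--one-file-system';
    # ...and then explicitly list every filesystem you do want backed up:
    $Conf{RsyncShareName} = ['/', '/boot', '/home', '/var'];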

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backupPC and backup via VPN connection.

2012-07-27 Thread Timothy J Massey

Of course, in the original request, inside the office was already working
perfectly. Therefore, there's no need to do anything.

Of course, if that's what the original person wanted, they wouldn't have
written the message in the first place.

Timothy J. Massey

Sent from my iPhone

On Jul 27, 2012, at 8:39 AM, Doug Lytle supp...@drdos.info wrote:

  If the IP address changes, you have to work out name resolution.

 Or you can do what I do.  Tell them the backup will only occur when
connected to the LAN (For speed reasons) and put their machine name in the
Linux /etc/hosts file along with their IP address.

 BackupPC will use that over their SMB name.

 Doug

 --
 Ben Franklin quote:

 Those who would give up Essential Liberty to purchase a little Temporary
Safety, deserve neither Liberty nor Safety.


--

 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and
 threat landscape has changed and how IT managers can respond. Discussions

 will include endpoint security, mobile security and the latest in malware

 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backupPC and backup via VPN connection.

2012-07-26 Thread Timothy J Massey
bubolski backuppc-fo...@backupcentral.com wrote on 07/24/2012 05:51:47 
AM:

 I've got a problem with this (topic). When I'm connected to the same 
 network via cable, I can start a backup from BackupPC. 
 When I get my Internet from wifi and I'm connected to my work network 
 via VPN, I can ping my computer from BackupPC, but I can't start a backup.
 I get a no ping error.
 
 Why can I ping between my PC and BackupPC in both directions, 
 but can't start a backup for my computer?
 Via cable it's the same situation, but there I can start a backup.

BackupPC does not use IP addresses to talk to your client:  it uses names. 
 Internally, the resolution of the name works by broadcasting on the local 
network.  With a VPN in place, your PC and the BackupPC device are on 
different networks, so the broadcast method for name resolution won't 
work.

You will need to use some sort of dynamic DNS system to be able to get 
BackupPC to be able to resolve your PC's name even when your PC is 
connected via the VPN.  A fixed IP won't work because you won't be able to 
use the same IP for your PC both inside the office and via the VPN.

Bad news:  such dynamic DNS systems are not simple.  Actually, the 
simplest is Microsoft Active Directory:  it mostly works out of the box. 
But then you have to make sure your notebook is logging into the domain 
over the VPN, which has its own issues...
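
Once you *do* have a name that resolves over the VPN, the BackupPC side is 
easy:  point the per-host config at that name (the hostname below is 
illustrative):

    # In the host's .pl file:  resolve this client via its dynamic-DNS
    # record instead of broadcast lookup of the bare host name.
    $Conf{ClientNameAlias} = 'laptop01.dyn.example.com';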

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backupPC and backup via VPN connection.

2012-07-26 Thread Timothy J Massey
Arthur Darcet arthur.darcet+l...@m4x.org wrote on 07/26/2012 03:20:15 
PM:

 You can easily configure the VPN to give static IP to your clients, 
 and then just map a dummy name to the VPN IP using /etc/hosts on the
 BackupPC server.

Static IP addresses that are on the same local broadcast domain as the 
BackupPC server?  And that will allow the client to use the same IP 
address both when the client is behind the VPN *and* when the client is 
*not* behind the VPN but connected directly to the office?

Also, VPN bridging (which is what this technique requires) comes with its 
own set of difficulties, all so you can avoid making name resolution work 
over the VPN...

I'd rather fix the actual problem (name resolution) instead of mangling my 
VPN and ending up with two problems!  :)

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 


--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC data pool

2012-07-21 Thread Timothy J Massey
For most people, a VM is going to make for a much less reliable solution. They 
will be way too tempted to put the VM on the same storage as their production 
hardware.

For someone who understands the dangers, and has disaster-recovery 
replication set up for their VMs already, BackupPC either isn't the right 
solution for them, or they already have the skills necessary to make such a 
solution work.

Why things went differently for you in this case, which might have led to the 
exhaustion of the list, I'm not sure. But personally, while it might be a good 
way to evaluate the software, a VM of BackupPC as a supported solution does 
not seem like a good idea to me.

Timothy J. Massey

Sent from my iPhone

On Jul 20, 2012, at 6:03 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:

 Sorry for the late reply.
 
 anything/everything in a VM is an attraction.  :-)  
 
 I had hoped for a pre-configured VM appliance so I could easily evaluate it.  
 I did not have such an easy implementation experience (as evidenced by 
 apparent exhaustion of this listserv during the process).  Being able to 
 download, turn on, and make a few modifications to implement in an 
 environment is a good way to promote good software.  Seems that a VM would 
 serve just fine in smaller environments (including home use), and should it 
 need to grow beyond the performance capacity of a VM, then one could move to 
 a physical solution.
 
 In the case of an S.B.H. event (Smoking Black Hole), that's why I would have 
 it configured as an off-site iSCSI target (I have the benefit of dark fiber). 
  Similarly, as a VM, I can replicate the BackupPC appliance offsite as well, 
 thus my DR site would have all the backups, and a VM copy to mount and use it.
 
 Good to know about rSync - thanks.
 
 
 On Fri, Jul 13, 2012 at 4:17 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Fri, Jul 13, 2012 at 3:07 PM, Bryan Keadle (.net) bkea...@keadle.net 
 wrote:
 
  I just recently been introduced to BackupPC and I've been pursuing this as a
  VM appliance to backup non-critical targets.  Thus, as a VM, I would use
  remote storage (iSCSI) to provide capacity instead of local virtual disk.
 
 That can work, but given the ease of apt-get or yum installs on any
 linux system, what's the attraction of a VM?  You'll get a lot of
 overhead for not much gain.   And if you end up sharing physical media
 with the source targets, you shouldn't even call it a backup.
 
  Les - as for the offsite copies of the archive, you're speaking of a backup
  of the backup.  For the purpose that BackupPC provides, non-critical data,
  I'm not so concerned with backing up the backup.
 
 What's your plan for a building disaster?  If it is 'collect the
 insurance and retire' then you probably don't care about offsite
 copies...
 
   However, should BackupPC
  start holding backups that I would need redundancy for, what do you
  recommend?  What are you doing?  Since I'm thinking SAN-based storage for
  BackupPC, I figured I would just use SAN-based replication.
 
 If you aren't sharing the media with the data you are trying to
 protect, that would work.   But, if you have the site-to-site
 bandwidth for that, it would be much cheaper to just run the backups
 over rsync from a backuppc server at the opposite site.I have
 mostly converted to that approach now, but an older setup that is
 still running has a 3-member RAID1 where one of the drives is swapped
 out and re-synced weekly.   These were initially full sized 750 Gig
 drives, but I'm using a laptop size WD (BLACK - the BLUE version is
 too slow...) for the offsite members now.   Other people do something
 similar with LVM mirrors or snapshot image copies.
 
   But should that
  not be wise or available, would you just stand up an rSync target and just
  rSync /var/lib/BackupPC to some offisite target?
 
 The number of hardlinks in a typical backuppc archive make that a
 problem or impossible  at some point because rsync has to track and
 reproduce them by inode number, keeping the whole table in memory.
 I think a zfs snapshot with incremental send/receive might work, but
 you'd need freebsd or solaris instead of linux for that.
 
 --
 Les Mikesell
lesmikes...@gmail.com
 
 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and
 threat landscape has changed and how IT managers can respond. Discussions
 will include endpoint security, mobile security and the latest in malware
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net

Re: [BackupPC-users] BackupPC data pool

2012-07-13 Thread Timothy J Massey
The latency that remote storage adds, particularly at a higher level like NFS 
or SMB, can really hurt BackupPC's performance.  Besides, it is going to hammer 
the daylights out of that remote storage, leaving very little performance for 
anybody else.

And while a Drobo is one of the very nicest ones, it is still very much a small 
NAS box.
BackupPC really wants to be set up on a standalone PC with directly attached 
disks.

Timothy J. Massey

Sent from my iPhone

On Jul 13, 2012, at 12:39 PM, Bryan Keadle (.net) bkea...@keadle.net wrote:

 Thanks for your reply.  Yeah, we're using a NAS device, but not necessary 
 those small ones - using this Drobo B800fs.  So NFS would be a 
 protocol-based option for the data pool?  Still, iSCSI would be best if not 
 DAS?
 
 
 
 
 On Fri, Jul 13, 2012 at 11:11 AM, Les Mikesell lesmikes...@gmail.com wrote:
 On Fri, Jul 13, 2012 at 10:37 AM, Bryan Keadle (.net)
 bkea...@keadle.net wrote:
  Can BackupPC's data pool (/var/lib/BackupPC) be a CIFS mount, or must it be
  a block device?  I'm thinking it requires a block device due to the
  hardlinks/inodes BackupPC depends on, and I'm not sure that a cifs-mounted
  folder gives you that ability.
 
 I think CIFS (as unix extensions to SMB) technically handles hardlinks
 when the source system is unix/linux and the underlying filesystem
 supports them.   However I wouldn't expect this capability to be very
 well tested, because in that scenario everyone would use NFS anyway.
 And in the backuppc case it would be much more sensible to just run
 the program on the box with the drives.  If you are thinking of one of
 those small NAS devices that don't support NFS, I wouldn't count on
 it.
 
 --
Les Mikesell
 lesmikes...@gmail.com
 
 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and
 threat landscape has changed and how IT managers can respond. Discussions
 will include endpoint security, mobile security and the latest in malware
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 
 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and 
 threat landscape has changed and how IT managers can respond. Discussions 
 will include endpoint security, mobile security and the latest in malware 
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC data pool

2012-07-13 Thread Timothy J Massey
What problem are you guys trying to solve by separating the BackupPC 
processing from the storage?  Given BackupPC's extremely high storage 
requirements, and the demands it places upon that storage, I can't imagine that 
you're going to try to share that storage with anybody else. Besides, if you're 
using the same storage for both backup and production, it's not a backup!

Keep it simple. After all, if you're using this, something has gone wrong. The 
fewer parts involved the better.  And it should go without saying that every 
part of the backup server should be completely separate and unique from the 
parts used for your production data. Otherwise, it can't act as a backup for 
those parts.

A self-contained box that includes the processing and storage is by far the 
simplest way to achieve this.

Timothy J. Massey

Sent from my iPhone

On Jul 13, 2012, at 1:05 PM, Mike ispbuil...@gmail.com wrote:

 On 12-07-13 01:38 PM, Bryan Keadle (.net) wrote:
 Thanks for your reply.  Yeah, we're using a NAS device, but not necessary 
 those small ones - using this Drobo B800fs.  So NFS would be a 
 protocol-based option for the data pool?  Still, iSCSI would be best if not 
 DAS?
 
 Don't know if the Drobo supports it, but ATA over Ethernet would be a 
 fantastic option. Much lower overhead than iSCSI.
 
 
 -- 
 Looking for (employment|contract) work in the
 Internet industry, preferrably working remotely. 
 Building / Supporting the net since 2400 baud was
 the hot thing. Ask for a resume! ispbuil...@gmail.com
 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and 
 threat landscape has changed and how IT managers can respond. Discussions 
 will include endpoint security, mobile security and the latest in malware 
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Process for Overriding hosts configuration

2012-07-10 Thread Timothy J Massey
Michael Stowe mst...@chicago.us.mensa.org wrote on 07/10/2012 11:14:34 
AM:

  How are you backing the junction points up. AFAIK backuppc treats
  those as actual directories and not junction points (i.e. the concept
  of a junction point doesn't exist in backuppc's universe like a
  symbolic link does.) Pooling will make sure that the files under the
  junction point and the real location of those files are pooled
 
 How BackupPC views them is dependent on the backup method; for rsync
 backups, they look like symbolic links (which they are) and get backed 
up
 as such.

I do not see this in practice on my server.  In my case, the junctions 
create infinite loops (well, until the path gets too long, anyway), and 
give me tens of *thousands* of errors.  I used the list of exclusions on 
the BackupPC wiki to get this down to a manageable number (400, instead of 
80,000 or so).

Have you added a parameter somewhere to handle this?
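
For reference, the exclusions I'm using look roughly like this (a partial, 
illustrative list -- the share name and the full set are per the wiki page):

    $Conf{BackupFilesExclude} = {
        'cDrive' => [
            '/Documents and Settings',                  # junction to /Users on Vista and later
            '/Users/*/AppData/Local/Application Data',  # self-referencing junction
            '/ProgramData/Application Data',            # ditto
            '/System Volume Information',               # access denied to non-SYSTEM accounts
        ],
    };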

Tim Massey


 
Out of the Box Solutions, Inc. 
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tmas...@obscorp.com 
 
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] windows and Rsyncd : secure connexion ?

2012-07-04 Thread Timothy J Massey
Depends on what you mean by secure. If you mean, is the connection 
encrypted?, then no. You want to use rsync over SSH, which is perfectly 
possible on Windows as well.
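
In BackupPC terms that just means the rsync XferMethod over ssh instead of 
rsyncd -- on Windows typically via a cygwin-based rsync plus sshd.  Roughly 
(the login name is illustrative):

    $Conf{XferMethod} = 'rsync';
    # Run rsync on the client over an encrypted ssh session:
    $Conf{RsyncClientCmd}        = '$sshPath -q -x -l backup $host $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l backup $host $rsyncPath $argList+';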

Timothy J. Massey

Sent from my iPhone

On Jul 4, 2012, at 8:34 AM, galemberti greg galembe...@hotmail.com wrote:

 Hi,
 I have a question: if I back up a Windows client via rsync (rsyncd), is 
 the connection secure?
 Thank you for your help.
 
 
 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and 
 threat landscape has changed and how IT managers can respond. Discussions 
 will include endpoint security, mobile security and the latest in malware 
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

