Re: [BackupPC-users] selective copy of backuppc volume

2011-03-19 Thread gregwm
iiuc BackupPC_fixLinks.pl 
(http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=BackupPC_FixLinks)
 
ought not ignore multiply linked files when considering which files in 
the pc tree might need to be attached to the pool.  just because files 
are multiply linked doesn't mean they are linked into the pool.
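
fwiw, a rough way to see which pc-tree files aren't attached to the
pool at all, regardless of their link count, is to compare inodes.
untested sketch, GNU find, run from the top of the backuppc volume
(use pool instead of cpool if you're uncompressed):

# every inode used anywhere in the (compressed) pool
find cpool -type f -printf '%i\n' | sort -u > /tmp/pool.inodes
# pc-tree files whose inode never appears in the pool
find pc -type f -printf '%i %p\n' | sort -k1,1 \
  | join -v1 -1 1 -2 1 - /tmp/pool.inodes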



Re: [BackupPC-users] [newb] ssh rsync with restricted permissions

2011-03-18 Thread gregwm
On 2011-03-18 05:46, Neal Becker wrote:
 I'm interested in setting up linux-linux backup.  I don't like the idea of
 giving permission for machine1 as user backup to ssh to machine2 as root.  
 What
 are the options?

 1. Can ssh be restricted so that the only command user backup can run is 
 rsync?
 2. Is there an easy way (using acls?) to give a user backup read access to
 everything (probably not)
 3. Some other options I haven't thought of?

$Conf{RsyncClientCmd} = '$sshPath -p38134 -q -x $host sudo $rsyncPath $argList+';

/etc/sudoers:
backuppc ALL=NOPASSWD: /usr/bin/rsync --server --sender *
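
for option 1 specifically, another approach (instead of, or on top of,
sudo) is to pin the backuppc key to a single command in authorized_keys
on the client.  sketch only -- the key body, options and the exact
--server argument string are examples; check a test run with ps to see
what your rsync args actually produce:

# /root/.ssh/authorized_keys on machine2, all on one line
command="/usr/bin/rsync --server --sender -vlHogDtprze.iLsf . /",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA...rest-of-key backuppc@machine1

the forced command runs no matter what the client asks for, which is
the point.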



Re: [BackupPC-users] CPOOL, PC directories and backuppc statistics generation -- moving cpool possibly?

2011-03-18 Thread gregwm
 I HAVE moved everything from /var/lib/backuppc to /mnt/backuppc  (a
 different hard drive).

 AHA!!  Doing a tree -a in /var/lib/backuppc I see a cpool there with
 LOTS of directories and files!!!

 So, somewhere I must have to point the cpool and log directories at the
 new location, where is that?

how about just replacing /var/lib/backuppc with a softlink to /mnt/backuppc?
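
something like (sketch; init script name and paths vary by distro,
stop backuppc first):

/etc/init.d/backuppc stop
mv /var/lib/backuppc /var/lib/backuppc.old   # keep the stray cpool around just in case
ln -s /mnt/backuppc /var/lib/backuppc
/etc/init.d/backuppc start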



[BackupPC-users] selective copy of backuppc volume

2011-03-11 Thread gregwm
for my offsite backups i've a script selecting the latest full and 
incremental from each ~backuppc/pc/*, along with the logs and backups 
files.

if/when this is restored, what will be needed?

something to rebuild cpool no doubt.

perhaps also editing of the backups files to reflect what's actually 
there?  without this might the restored backups get pruned in favor of 
backups that aren't actually there?
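
if the backups files do need trimming, the idea would be to keep only
the lines whose backup number still has a directory.  rough, untested
sketch, assuming the backup number is the first tab-separated field:

cd /var/lib/backuppc/pc
for h in *; do
  : > "$h/backups.new"
  while read -r line; do
    num=${line%%$'\t'*}                  # first field = backup number
    [ -d "$h/$num" ] && printf '%s\n' "$line" >> "$h/backups.new"
  done < "$h/backups"
  # inspect backups.new before moving it over backups
done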

(backuppc 3.1.0-9ubuntu2)



Re: [BackupPC-users] Backup backuppc pool with rsync offsite

2011-02-24 Thread gregwm
i've been having good success with a script that selects only the most
recent full and most recent incremental for each backup in the pc
directory, as well as the set of backups last successfully
transferred, and rsyncs that set offsite with -H.  for me, this
still deduplicates, and keeps a reasonable cap on the number of
hardlinks rsync has to grapple with.
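
the guts of it amount to something like this.  simplified, untested
sketch, not my actual script; it assumes field 1 of each
pc/<host>/backups file is the backup number and field 2 the type, and
offsitehost:/offsite/backuppc is a placeholder:

cd /var/lib/backuppc
: > /tmp/offsite.list
for b in pc/*/backups; do
  h=$(dirname "$b")
  # last "full" and last "incr" line per host = most recent of each
  full=$(awk -F'\t' '$2=="full"{n=$1} END{print n}' "$b")
  incr=$(awk -F'\t' '$2=="incr"{n=$1} END{print n}' "$b")
  echo "$h/backups" >> /tmp/offsite.list
  [ -n "$full" ] && echo "$h/$full" >> /tmp/offsite.list
  [ -n "$incr" ] && echo "$h/$incr" >> /tmp/offsite.list
done
# -H preserves the pool hardlinks within the selected set; with
# --files-from, -a does not imply -r, hence the explicit -r
rsync -aHr --files-from=/tmp/offsite.list . offsitehost:/offsite/backuppc/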



Re: [BackupPC-users] Backup backuppc pool with rsync offsite

2011-02-22 Thread gregwm
 rsync'ing the BackupPC data pool is generally recommended against. The
 number of hardlinks causes an explosive growth in memory consumption by
 rsync and while you may be able to get away with it if you have 20GB of
 data (depending on how much memory you have), you will likely run out of
 memory when your amount of data gets larger.

this issue sure comes up a lot, and perhaps i should just keep quiet
since i personally am in no position to do it or even go off looking
for an rsync forum, nor do i have any knowledge of just how convoluted
the rsync source may be.  but as a naive outsider it still seems it
ought not to be such a task to have a go at the rsync source and come
out with a version that sorted its transfer list into
[filesystem:inode] order when preserving hardlinks, or that simply
kept a [filesystem:inode] index of files already transferred, in place
of whatever mangy hardlink table is in there now.
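
just to put a number on how small such an index would be compared to a
full table of file names, something like this counts unique
[filesystem:inode] pairs versus total names (sketch, GNU find):

find /var/lib/backuppc -type f -printf '%D:%i\n' | sort | uniq -c \
  | awk '{ names+=$1; uniq++ } END{ print uniq" unique inodes, "names" file names" }'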



Re: [BackupPC-users] Client disk utilisation

2011-01-05 Thread gregwm
Ymmv.  Your du may match client usage, or not.  Are files excluded
from the backup?  Are duplicate files being found by backuppc (and of
course deduplicated)?  If files of significant size or quantity are
excluded or deduplicated your du will be missing that much.
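
if the goal is per-client usage after compression, counting every file
in every backup of that client even where the client's own backups
share links, a straight sum of compressed file sizes may be closer
than du, since du counts each inode only once per run.  sketch:

cd /var/lib/backuppc/pc
for h in *; do
  find "$h" -type f -printf '%s\n' \
    | awk -v h="$h" '{ s+=$1 } END{ printf "%-20s %.1f GB\n", h, s/1e9 }'
done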

On 2011-01-03, Timothy Omer t...@twoit.co.uk wrote:
 On 3 January 2011 15:59, gregwm backuppc-us...@whitleymott.net wrote:

  When i run a du -hs on the client's folder under the pc dir, is the
 result the amount of file storage the client is using (which could be
 shared by others)?
 
  For example, I have comA and comB that both have the same 2GB file
 backed up.  du -hs on both of their folders will result in...
 
  comA 2GB
  comB 2GB
 
  ...and for the pool...
  Total 2GB (as the two files above point to the same file via hard links)

 also might want to bear in mind that the client may have files that
 are identical but not hardlinked.  backuppc will deduplicate, and
 unless you unravel all those details, your calculations may be off a
 bit.


 thanks gregwm,
 my aim is to understand utilisation per client after compression, without
 worrying about shared hardlinks, so i can report back to each user their
 approx usage - server pool utilisation is not a worry for this.

 To get that information I just want to be sure that my du -hs is giving me
 that info and I'm not misreading it.

 if someone with the required knowledge could confirm I'm correct, that would
 be fantastic.
 thanks all





Re: [BackupPC-users] Sanity check re: pooling.

2010-12-07 Thread gregwm
hmm, i rather expect the pool check doesn't follow all the transfers
but rather is interleaved with them; if i'm right, the temporary
ballooning you describe should not occur, other than a file at a time.

On 2010-12-06, Ed McDonagh ed.mcdon...@rmh.nhs.uk wrote:
 On Mon, 2010-12-06 at 10:47 -0500, Ken D'Ambrosio wrote:
 Hi!  I've got two servers, each around a TB, that I'm backing up with
 BackupPC.  (No, not for real backups, but to be able to not have to
 recall tapes when someone deletes a file.  Darn users.)  I'm planning on
 merging the two Windows servers into one Linux box serving via Samba.
 Assuming pooling works the way I think it does, there shouldn't be any
 significant increase in needed disk space, should there?

 Thanks!

 -Ken



 I think you are right, in the medium term.

 In the short term, if the host is different then everything will be new
 and will be transferred over to the backup machine.

 Once there, it will be checked against the pool and each file will be
 replaced with a hard link to the pooled version from the original
 server, therefore you'll eventually get all the space back to what it
 was with two servers.

 Ed





Re: [BackupPC-users] Am I going about this wrong?

2010-11-09 Thread gregwm

 I'm archiving the BackupPC backup folder (/var/lib/BackupPC) folder to
 external disk with rsync.

 However, it looks like rsync is expanding the hard links into separate copies?

 My total disk usage on the backup server is 407g, and the space used on
 the external drive is up to 726g.

 (using rsync -avh --delete --quiet /var/lib/BackupPC /usbmount/backuppc)


you want rsync -H
i've used rsync -qPHSa with some success.  however, if you have lots of
links, and not terribly much memory, rsync gobbles memory in proportion to
how many hardlinks it's trying to match up.  so, ironically, i use
storebackup to make an offsite copy of my backuppc volume.
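
i.e. roughly, keeping your paths:

# -H preserves hardlinks, -S handles sparse files; expect heavy memory
# use with a big pool
rsync -qPHSa --delete /var/lib/BackupPC /usbmount/backuppc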


[BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread gregwm
 ...saving to an Amazon s3 share...
 ...So you have a nice
 non-redundant repo, and you want to make it redundant before you push it
 over the net??? Talk sense man!

 The main question:
 ==
 He thinks it would be more bandwidth-efficient to tar up and encrypt the
 pool, which accounts for duplicate files and so forth, and send that over to
 s3.  I counter that the pool will contain data concerning the last 2 weeks
 or so of changes, which I'm not interested in for the purposes of disaster
 recovery, and that transferring over that extra data is less efficient.
  Who's right?  And if it's my colleague, which folders should I be
 interested in?  It looks to me like cpool, log, and pc...

to copy my backuppc volume offsite i wrote a script to pick (from
backupvolume/pc/*/backups) the 2 most recent incremental and the 2
most recent full backups from each backup set and rsync all that to
the remote site.  i'm ignoring (c)pool but the hardlinks still apply
amongst the selected backups.  you could do something similar to feed
tar.
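
e.g., assuming you've written the selected paths (relative to the top
of the backuppc volume) one per line to a list file, GNU tar keeps the
hardlinks within the archive.  sketch; recipient and output path are
placeholders:

tar -C /var/lib/backuppc -cf - --files-from=/tmp/offsite.list \
  | gpg --encrypt --recipient backups@example.com \
  > /somewhere/backuppc-$(date +%F).tar.gpg
# then push the resulting file to s3 with whatever s3 client you use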



Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread gregwm

 to copy my backuppc volume offsite i wrote a script to pick
 (from backupvolume/pc/*/backups) the 2 most recent incremental and the
 2 most recent full backups from each backup set and rsync all that to the
 remote site.  i'm ignoring (c)pool but the hardlinks still apply amongst the
 selected backups.  you could do something similar to feed tar.


fwiw my main motivation for this was rsync's consumption of memory in
proportion to the number of hardlinks it needs to process.  rsync was
thrashing forever until i trimmed down to only recent backups, thereby
vastly reducing the amount of ram/swap rsync required to grapple with all
those hardlinks.  more recently, ironically, i've been using storebackup
to transfer my backuppc data, as apparently storebackup handles hardlinks
far more efficiently than rsync.


[BackupPC-users] rsync --max-size parameter not honored in BackupPC

2010-11-02 Thread gregwm
i'd just exclude them by name/pattern until a better answer surfaces
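
e.g. something along these lines in the host's config file (patterns
are made-up examples; the '*' key applies to shares without their own
entry):

$Conf{BackupFilesExclude} = {
    '*' => [ '*.iso', '*.vmdk', '/some/huge/dir' ],
};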

 At our site files larger than 10GB are usually recreated faster than
 restored from backup, therefore we added to the RsyncExtraArgs the
 parameter --max-size=100.

 Although this parameter is visible in the rsync command lines at both
 the sender and the receiver (seen with ps ax|grep rsync - see at the end),
 larger files are backed up nevertheless (max. file was about 65GB).



[BackupPC-users] Cpool nightly clean removed 190 files from where??

2010-10-28 Thread gregwm
umm,
Cpool nightly clean removed 190 files from where??

the mobo died on 10/14, and a new server was purchased, complete with new
discs.  the orig server primary volume was also installed, but not the
original server backuppc volume.  on 10/28 i created a fresh empty
backuppc volume and tried starting backuppc, but hadn't created the
backuppcdata directory; created that and restarted, but hadn't given
ownership of backuppcdata to backuppc; did that and restarted, and
finally it's happily rebuilding the backup volume.

but wait, each time backuppc ran with an EMPTY backup volume, it
claims to have deleted files from Cpool!  eh?  just exactly what did
it delete really, from where?

it sounds about right that the Cpool was 73.32GB on 10/13.  but the
log's claim that Cpool is 64.84GB on 10/28 makes no sense.  the volume
is fresh and empty.  yet the next couple of runs claim that Cpool nightly
clean removed a similar but slightly smaller number of files each time,
and each run claims Cpool is slightly smaller than the last.  like
there's a shadow copy of Cpool somewhere?  umm, i sincerely doubt
that...  so, what's going on here?

my guess:  i have $Conf{BackupPCNightlyPeriod} = 8;  which presumably
means only 1/8 of the pool is traversed each night (the log's "from
10..11 (out of 0..15)" would fit that: 2 of the 16 top-level cpool
directories per run).  i'm guessing the numbers reported for Cpool are
averaged out over the last 8 runs.  weak, if so.  i mean really,
claiming it removed 190 files.

i mean, i'm still rather worried it really did remove 190 files, and
then 149 files, and then 106 more.  i mean, do i need to have my
client go get that original backup volume back from offsite storage,
spin it up, and compare everything to make sure something important
hasn't been deleted?  that would be a major annoyance...

doc/BackupPC.pod says "This documentation describes BackupPC version
3.0.0beta1", so presumably that's the version.

here's the last 4 LOGs:
*  #10uj#Oct28Thu21:58#r...@server1:/e4/var/log/BackupPC#
/e4/v/h/backuppc/bin/BackupPC_zcat LOG.2.z
2010-10-13 23:15:00 Running 1 BackupPC_nightly jobs from 10..11 (out of 0..15)
2010-10-13 23:15:00 Running BackupPC_nightly -m 160 191 (pid=2524)
2010-10-13 23:15:00 Next wakeup is 2010-10-14 23:15:00
2010-10-13 23:15:01 Started full backup on localhost (pid=2525, share=/v/h)
2010-10-13 23:18:58 BackupPC_nightly now running BackupPC_sendEmail
2010-10-13 23:19:49 Finished  admin  (BackupPC_nightly -m 160 191)
2010-10-13 23:19:49 Pool nightly clean removed 0 files of size 0.00GB
2010-10-13 23:19:49 Pool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 1 directories
2010-10-13 23:19:49 Cpool nightly clean removed 212 files of size 0.04GB
2010-10-13 23:19:49 Cpool is 73.32GB, 158406 files (7 repeated, 1 max
chain, 1777 max links), 4369 directories
2010-10-14 01:23:06 Finished full backup on localhost
2010-10-14 01:23:06 Running BackupPC_link localhost (pid=5928)
2010-10-14 01:23:17 Finished localhost (BackupPC_link localhost)
2010-10-28 14:23:32 Reading hosts file
2010-10-28 14:23:32 BackupPC started, pid 27652
2010-10-28 14:23:32 Running BackupPC_trashClean (pid=27655)
2010-10-28 14:23:32 Next wakeup is 2010-10-28 17:15:00
2010-10-28 17:15:01 24hr disk usage: 1% max, 1% recent, 0 skipped hosts
2010-10-28 17:15:01 Removing /var/log/BackupPC/LOG.32.z
2010-10-28 17:15:01 Aging LOG files, LOG -> LOG.0 -> LOG.1 -> ... -> LOG.32
*  #10uj#Oct28Thu21:59#r...@server1:/e4/var/log/BackupPC#
/e4/v/h/backuppc/bin/BackupPC_zcat LOG.1.z
2010-10-28 17:15:01 Running 1 BackupPC_nightly jobs from 10..11 (out of 0..15)
2010-10-28 17:15:01 Running BackupPC_nightly -m 160 191 (pid=32238)
2010-10-28 17:15:01 Next wakeup is 2010-10-29 17:15:00
2010-10-28 17:15:01 localhost: mkdir /bc/backuppcdata/pc: Permission
denied at /v/h/backuppc/bin/BackupPC_dump line 193
2010-10-28 17:15:11 BackupPC_nightly now running BackupPC_sendEmail
2010-10-28 17:15:11  admin : Can't read /bc/backuppcdata/pc: No such
file or directory at /v/h/backuppc/bin/BackupPC_sendEmail line 165.
2010-10-28 17:15:11 Finished  admin  (BackupPC_nightly -m 160 191)
2010-10-28 17:15:11 Pool nightly clean removed 0 files of size 0.00GB
2010-10-28 17:15:11 Pool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 1 directories
2010-10-28 17:15:11 Cpool nightly clean removed 190 files of size 0.03GB
2010-10-28 17:15:11 Cpool is 64.84GB, 138607 files (5 repeated, 1 max
chain, 1777 max links), 3823 directories
2010-10-28 17:15:11 Running BackupPC_link localhost (pid=32269)
2010-10-28 17:15:11 Finished localhost (BackupPC_link localhost)
2010-10-28 21:04:12 Got signal TERM... cleaning up
2010-10-28 21:04:13 Reading hosts file
2010-10-28 21:04:13 BackupPC started, pid 7211
2010-10-28 21:04:13 Running BackupPC_trashClean (pid=7214)
2010-10-28 21:04:13 Next wakeup is 2010-10-29 20:09:00
2010-10-28 21:05:02 Got signal TERM... cleaning up
2010-10-28 21:05:03 Reading hosts file
2010-10-28 21:05:03 BackupPC started, pid 7261
2010-10-28 21:05:03 Running BackupPC_trashClean (pid=7262)
2010-10-28 21:05:03 Next wakeup is 2010-10-28 21:09:00