Two completely separate backup schemes are needed here.
One for full "cold-metal" restores of the boot/OS level stuff, and IMO this
is best done with "imaging" style software, in your case specifically
targeted for windoze/ntfs systems. These don't need to be done very
frequently, as little is changing.
Yes, I see BackupPC as a solution for what I call "data archive" backups,
as opposed to "full host bare-metal".
For the latter wrt physical machines I tend to do relatively infrequent
"image snapshots" of the boot and system partitions, keeping
frequently-changing "working data" on separate partitions.
On Wed, Feb 15, 2012 at 3:03 PM, J. Bakshi wrote:
> Greetings to all of you. I have come to know about backuppc recently
> during my search for a net based backup solution which requires bare
> minimal settings at user end and supports various client OS. backuppc
> surely meets my requirements.
>
>
Either run a second BPC instance over the WAN directly to the target hosts,
or send compressed tar snapshots, whichever is more appropriate for your
combination of bandwidth, volume of data, backup time window, number of
target hosts, degree of duplicated data, etc.
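For the tar route, a rough sketch of what I mean (host "winbox", share
"C" and remote "offsite" are all hypothetical names - adjust to your
setup):

    BackupPC_tarCreate -h winbox -n -1 -s C / | gzip -c | \
        ssh offsite 'cat > /backups/winbox-latest.tar.gz'

BackupPC_tarCreate streams the most recent completed backup (-n -1) as
tar on stdout, so nothing extra has to land on the local disk first.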
On Tue, Feb 14, 2012 at 12:13 AM,
In addition to the comments from the others note
- Using fstab for bind mounting works just as well as, or in some cases
better than, symlinks (see the sketch after this list)
- You can set things up completely plain-vanilla, test and then move
the folders after.
- USB isn't really that great a connection for daily mission-critical backups.
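A minimal /etc/fstab sketch of the bind-mount approach (the
/mnt/bigdisk/backuppc path is hypothetical; /var/lib/backuppc is where
the Debian/Ubuntu package expects TopDir):

    /mnt/bigdisk/backuppc  /var/lib/backuppc  none  bind  0  0

After a "mount /var/lib/backuppc" everything lands where the package
expects it, and there's no symlink for fussy tools to trip over.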
On Tue, Feb 7, 2012 at 2:12 AM, Richard Shaw wrote:
> Here's the tail end of my yum transaction if you're curious to see what
> packages are needed:
>
Have you tried from a clean slate? I'd recommend starting with a complete
from-scratch install via the netinstall and the "minimal" option, get a
On Tue, Jan 24, 2012 at 8:38 PM, Ivanus, Radu
wrote:
>
>
> I need to backup a network share called \\domain\folder to another network
> share called \\ip\folder (different from the 1st one).
>
>
In other words, if a simple single-instance copy to that specific target is
the primary goal,
On Tue, Jan 17, 2012 at 5:31 AM, Timothy J Massey wrote:
> "Tyler J. Wagner" wrote on 01/12/2012 04:53:49 PM:
>
> > So how about FreeNAS with BackupPC installed?
> >
> > http://harryd71.blogspot.com/search/label/backuppc
>
> Honest answer? My prejudice against non-Linux UNIX, especially with
> s
On Wed, Jan 11, 2012 at 8:18 AM, Chris Parsons <
chris.pars...@petrosys.com.au> wrote:
> I'd highly recommend Nexenta. It is much more feature-complete than
> Openfiler and linux. Further to this, with my experiences, BackupPC performs
> much better on Nexenta than it does on linux.
>
> I'm sure it
I highly recommend OpenFiler. The code itself is open-source, but it's not
very diligently supported by the community. However, I was a total newbie to
the world of Linux and have never needed any support - it's been solid as a rock,
and has every possible NAS feature readily available from web-driven GUI,
o
On Fri, Jan 6, 2012 at 10:59 PM, Jim Durand wrote:
> Hey guys! Doing my best to get up to speed with Backuppc, pretty
> impressive so far. Logs are filled with “backuppc_link got error -4”
> errors, and from the research I have done it seems that because my TopDir
> (/mnt/sdb1) and the cpool loca
On Sat, Dec 24, 2011 at 9:34 AM, Les Mikesell wrote:
>> Thanks Les. So my snip above does hold when trying to conserve
>> bandwidth (say over a WAN), but at the potential cost of increasing
>> the time the backup session requires. In a high-speed local
>> environment, processing time can be reduced
On Sat, Dec 24, 2011 at 8:20 AM, Les Mikesell wrote:
> On Fri, Dec 23, 2011 at 6:03 PM, wrote:
>> it only makes sense to compare to the newest one, since in BPCs storage
>> model there isn't any benefit to distinguish between "incremental" vs
>> "differential" sets.
> The distinction is bet
On Sat, Dec 24, 2011 at 5:35 AM, Arnold Krille wrote:
> Well, actually the comparison is done against the last backup of a lower
> level.
Actually actually from my understanding there isn't any difference
at all in BackupPC's filesystem between the two if it hasn't been
modified. In fact you ca
I know this doesn't help for now, but next time make sure your storage
platform doesn't depend on hardware reliability - of which there is no
such thing, long term.
On the low end I recommend LVM over RAID1 for small, RAID6 for bigger
systems, obviously high-end environments have their SANs.
Just
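A bare-bones sketch of the LVM-over-RAID1 layout (the device names
/dev/sdb1 and /dev/sdc1 are hypothetical):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0
    vgcreate vg_backup /dev/md0
    lvcreate -n backuppc -l 100%FREE vg_backup
    mkfs.ext4 /dev/vg_backup/backuppc

The LVM layer is what buys painless growth later - add another md
device, vgextend, lvextend, done.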
I highly recommend **against** using any protocol conversion in the
mix "USB to eSATA" or whatever.
True eSATA is fine - obviously the quality of the hardware is an
issue. Firewire is also OK but getting rarer these days.
Internal SATA to eSATA should also not be a problem, not really doing
any conversion.
On Mon, Dec 12, 2011 at 2:03 AM, member horvath
wrote:
> Thanks very much for the info.
> my backups are many terabytes in size so making local copies over and above
> the onsite backup is not practical.
> To remind you I need a 30day/6month onsite and only the most recent offsite.
> Once the ini
On Sat, Dec 10, 2011 at 7:45 PM, member horvath
wrote:
> I've considered the archive function, however I wasn't aware that
> the changes would be rsync'd.
> I thought it would create a tar archive of the most recent backup then
> xfer that to the archive host.
> Am I wrong in thinking this?
>
You also have the luxury of not worrying about migrating the past data over 8-)
Make sure with your new setup you give Topdir its own dedicated
filesystem, in such a way that it's easy to expand without taking the
system offline for long periods, and ideally also easy to switch from
one host to another.
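With TopDir on its own logical volume, growing it later is just
(assuming an ext3/4 filesystem on a hypothetical /dev/vg0/backuppc):

    lvextend -L +500G /dev/vg0/backuppc
    resize2fs /dev/vg0/backuppc

Both steps work with the filesystem mounted, so BackupPC only needs a
brief pause, if that, rather than a migration.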
On Tue, Nov 22, 2011 at 3:45 AM, Bob Proulx wrote:
> Obviously due to the advantages of a long term operating system release I
> would prefer to remain on the Debian Stable release.
I would advise continuing with the Debian packaging system for all the
prerequisite "infrastructure" dependencies, bu
>> Whatever happens it has to go straight to the NFS as that's the only
>> storage I'll have that's big enough to take anything.
> Big disks are cheap these days. Or use one part of the NFS share for
> backuppc, another for what you send to tape.
Echo Les, what he said.
You will find yo
On Fri, Oct 21, 2011 at 4:03 AM, John Smith wrote:
>> I have dual boot computer with two hard drives. One drive has a
>> Vista install and the other has Debian, Swap and two more partitions.
>>
>> I installed using aptitude. The program now works and backs up /etc
>> to /var/lib/backuppc. I wou
On Wed, Oct 5, 2011 at 5:01 AM, Les Mikesell wrote:
> I've always just done a full install of Cygwin where I needed it, but
> now I'm looking for an installer package that would be easier for
> others to use. Is there anything like cwrsync that also includes
> sshd?
Les,
I've found with recent
On Sat, Sep 10, 2011 at 12:01 AM, Richard Shaw wrote:
> I've only skimmed this thread so I apologize if I miss something but I
> was thinking. Might it be a better idea to make the user at least
> somewhat responsible for the backup?
On Sat, Sep 10, 2011 at 12:00 AM, Les Mikesell wrote:
> In th
On Fri, Sep 9, 2011 at 10:55 PM, Les Mikesell wrote:
> Just in case that wasn't a typo, the top level is /cygdrive..
Ah, no wonder it didn't work. . .
just kidding, I think I only made that mistake here (note to self -
run back and check 8-)
> If I were doing it, I'd try to work out a scheme
On Fri, Sep 9, 2011 at 9:06 PM, Jeffrey J. Kosowsky
wrote:
> hans...@gmail.com wrote at about 20:09:02 +0700 on Friday, September 9, 2011:
> > I realize that, and thought my posting details on my precautionary
> > procedures would sufficiently demonstrate my awareness of the fact
> > that I'm "
On Fri, Sep 9, 2011 at 12:55 PM, Les Mikesell wrote:
>> my main question: can rsync be made to treat the "meta-filesystem root"
>> /cygwin as a ShareName?
> Seems like something you could test easier than explaining the question
> here... Commands like 'ls -R /cygdrive' can recurse over th
On Fri, Sep 9, 2011 at 7:30 PM, Holger Parplies wrote:
> A comma at the end of a Perl list without following elements is a *purely
> cosmetic* thing. If you want one there, fine, put it there.
If you are addressing me, I wasn't proposing that at all. I believe
Bowie was simply pointing out the fact
Started a new thread, originally part of this too-long one:
http://adsm.org/lists/html/BackupPC-users/2011-09/threads.html#00026
On Fri, Sep 9, 2011 at 3:57 AM, Bowie Bailey wrote:
> Just a note that a comma at the end of the last element in an array or
> hash is perfectly fine in Perl.
>
> This
Thanks to all for answering, and particularly Holger for your
thoughtful response. Before approaching the "social/concept" side, I'd
like to be clear - from a purely technical POV - about my main
question: can rsync be made to treat the "meta-filesystem root"
/cygwin as a ShareName?
-
I managed to track down the source of my original problem, and decided
it was worth posting to the end of this ridiculous thread just in case
it's useful for future googlers. The cause was the empty value - two
quotes at the end of this:
$Conf{RsyncShareName} = [
'/cygdrive/c',
''
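    # ^ that empty element (just two quotes) is the culprit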
];
So don'
Our users have a variety of storage media from ordinary flash drives
and SD cards to eSata or firewire HDDs, and even some swappable
"internal" HD's. Much of these data is as or sometimes even more
important than those on the fixed drives.
Just as the notebook users are only intermittently attache
On Mon, Sep 5, 2011 at 9:28 AM, Les Mikesell wrote:
> On Sun, Sep 4, 2011 at 12:41 AM, wrote:
>>
>> Re packaging issues, I'm not trying to figure them out at all, AFAIC they're
>> a "black box" that just works - I plan to just observe their results and
>> stick to their policies (I didn't realiz
To all
I will do my best not to abuse the real generosity I have seen on the
list every day over the years, by not posting unnecessary questions or
ones not relevant to BackupPC.
I will also do the best I can to give back to the project to the
extent I am able to help - certainly mo
On Mon, Sep 5, 2011 at 6:53 AM, Adam Goryachev
wrote:
>
> I think the answer to most of this might have been:
> apt-get --purge remove backuppc
>
> This should remove every trace of the package ever having been
> installed. I only mention this because it might come in handy in the
> future for you
On Sun, Sep 4, 2011 at 10:49 AM, Jeffrey J. Kosowsky
wrote:
> Just a piece of friendly advice... you seem to have posted dozens of
> posts in the past 24 hours or so... you keep making multiple, often
> non-standard or nonsensical changes to a standard
> configuration... and are asking multiple questions
Will a BackupPC 3.2 system "just work" with a conf/log/pool/pc filesystem
moved over from 3.1, or is there an upgrade process run on the data?
If the latter, I imagine that would make it difficult to move that data back
to 3.1?
Just thinking of disaster recovery scenarios, maybe building a custom
On Sat, Sep 3, 2011 at 11:09 AM, Timothy J Massey wrote:
> But would probably be a very good idea. What would be an even better idea
> would be to grab a spare PC (or a virtual guest) and test it from a
> completely clean installation. And document the *heck* out of what you do:
> you *will* be
On Sat, Sep 3, 2011 at 5:03 AM, wrote:
>
> This time I'm planning to delete the backuppc user
>
Is anything more than removing the line from /etc/passwd required for this?
> as well as:
>
> /var/lib/backuppc
> /etc/backuppc
> /var/log/backuppc
> /usr/share/backuppc
>
>
> I'm not going to do an
On Sat, Sep 3, 2011 at 4:46 AM, Les Mikesell wrote:
> On Fri, Sep 2, 2011 at 4:38 PM, wrote:
> >
> > Or is the message "link host-name" in my log when running "_dump
> > -v" manually indicate a hardlinkng problem kicking in **after** the pc
> > filesystem's already been created?
>
> I think the
On Sat, Sep 3, 2011 at 4:38 AM, Les Mikesell wrote:
> In general, backuppc needs rw permission on everything, and apache
> (www-data on debian/ubuntu) needs read access to some of it.
>
Sorry to need such hand-holding, but if I'm above my TOPDIR and execute
chown -R backuppc TOPDIR
chgrp -R www
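Spelling out what I'm proposing, with the Debian-style names Les
mentioned (backuppc user, www-data group, TOPDIR=/var/lib/backuppc):

    chown -R backuppc /var/lib/backuppc
    chgrp -R www-data /var/lib/backuppc

...or is recursive group access more than apache actually needs?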
On Sat, Sep 3, 2011 at 4:27 AM, Les Mikesell wrote:
> I mean try to create a hardlink between a file under the pc directory
> to under the cpool directory.
>
> Backuppc does approximately the same test at startup but in perl and
> you may not see the real error message.
>
> Does the drive in qu
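In concrete terms the test is something like this (file names are
hypothetical - pick any existing pool file):

    su -s /bin/bash backuppc
    ln /var/lib/backuppc/cpool/0/0/0/somefile /var/lib/backuppc/pc/testlink

If the ln fails with "Invalid cross-device link", the pc and cpool
trees aren't on a single filesystem and BackupPC's linking can't work.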
On Sat, Sep 3, 2011 at 3:50 AM, Les Mikesell wrote:
> The ubuntu package should create a backuppc user and that should be
> the owner of everything under TOPDIR. I think you need to diagnose
> why the link fails but trying the same operation from the shell (su -s
> /bin/bash backuppc if it doesn
Sorry, editing mangled the referents of my pronouns:
I've heard of some software/systems being unable to traverse them [SYMLINKS]
- in fact I've read they're [BIND MOUNTS] pretty much transparent right down
to the kernel level.
--
On Sat, Sep 3, 2011 at 3:50 AM, Les Mikesell wrote:
>
> The ubuntu package should create a backuppc user and that should be
> the owner of everything under TOPDIR. I think you need to diagnose
> why the link fails but trying the same operation from the shell (su -s
> /bin/bash backuppc if it doe
On Sat, Sep 3, 2011 at 3:38 AM, Les Mikesell wrote:
> It turns out that a linux raid1 mirror looks just like the non-raid
> filesystem it contains - or enough that you can mount the single drive
> as if it were a normal partition. So you can treat the rotated member
> just the same as your sing
On Sat, Sep 3, 2011 at 3:32 AM, wrote:
>> If you do this before the install, everything should land in the right
>> place and get the right permissions. The critical things are that
>> the pool/cpool/pc directories must all be in the same filesystem so
>> hardlinks can work
>>
>>
>
Just to confi
On Sat, Sep 3, 2011 at 3:14 AM, Les Mikesell wrote:
> I don't see any other message yet, but the way to get it right is to
> just mount the partition you want to use for storage in the place
> where backuppc wants it (should be /var/lib/backuppc with the deb
> package). Or put a symlink there po
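Either variant is a one-liner; sketches of both, assuming the storage
partition is a hypothetical /dev/sdb1:

    mount /dev/sdb1 /var/lib/backuppc
    # or, if the data lives elsewhere (and /var/lib/backuppc doesn't
    # already exist as a directory):
    ln -s /mnt/bigstorage /var/lib/backuppc

Both end up presenting TopDir at the path the deb package expects.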
On Sat, Sep 3, 2011 at 2:59 AM, Les Mikesell wrote:
> There has been a vast amount of discussion on this list covering this
> topic so you should probably wade through the archives.
> My approach is a 3-member software RAID1 where 2 drives are always in
> the server and the 3rd is a set rotated offsite
Was this post in response to mine just now asking for feedback on pretty
much the same topic? If not, pretty amazing example of synchronicity (in
Jung's sense, not regarding data mirroring 8-)
I considered using RAID mirroring or other partition-cloning methods, but at
this point I'm thinking I pre
On Sat, Sep 3, 2011 at 2:23 AM, Les Mikesell wrote:
> The ubuntu package should have set everything up correctly. You
> didn't change TOPDIR or mount something underneath it after the
> install, did you?
>
> --
> Les Mikesell
> lesmikes...@gmail.com
>
8-)
Of course I did Les, precisely as
Here's an idea I have completely unrelated to my problem posting, looking
for feedback.
Goal: using single large HDDs as backup media, rotating them offsite, in as
simple and bullet-proof a way as possible.
Strategy:
two hard drives
one with the base server OS installed, install BackupPC 3.
I just ran the _dump script manually again, this time fully deleting
everything under TOPDIR except the pool directories and with the -v verbose
option.
The ending of the process was the same, except for a "link host_name" just
before the abort message at the end.
I'm thinking maybe a permissions issue
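For future googlers, the manual invocation itself looks like this (the
host name "winbox" is hypothetical; the bin path is where the
Ubuntu/Debian package puts it):

    su -s /bin/bash backuppc -c \
        '/usr/share/backuppc/bin/BackupPC_dump -v -f winbox'

-f forces a full, and the -v output is where the "link host_name" line
showed up just before the abort.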
Running 3.1.0, installed via synaptic on Ubuntu 11.04.
After spending a lot of time refining my excludes, thinking windows open
files were preventing a successful full backup completing, I tried making
the whole target one very small and static directory tree with the same
result.
There isn't any
On Mon, Apr 11, 2011 at 12:43 PM, Saturn2888
wrote:
> But none of that solves the issue we're having now. How in the world do we
> backup the current pool of data?
Sorry I haven't gone back to read the whole thread - have you tried
and failed already with rsync?
If you have too many hardlinks f
On Sun, Apr 10, 2011 at 12:16 PM, Les Mikesell wrote:
> I've never heard of raid sync affecting the original disk(s). I've been doing
> it for years, first with a set of firewire external drives (which also had USB
> but it was slower), then the sata bays. There might be problems in adding
> mo
Forgive me if I'm out of line, but I wanted to let you know that your
HTML email is very hard to read; IMO it's better to just use plain text
on open lists. . .
--
I may be wrong here, more reading than experience in this particular
area, but my understanding is that it would still be recommended to make
sure nothing's writing to the database while the dump is taken.
For heavily used transactional systems with complex relational
structures, IMO it's possible tha
MySQLdump is a good tool, there are others, but usually the whole
process is scripted to fit the local environment.
Just like many mail servers, databases should be quiescent (the server
stopped) while the dump takes place to ensure consistency. If you want
to really minimize the downtime, then us
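One way to read the stop/dump/start advice as a pre-backup script
(Debian-style init script and the default datadir assumed - purely a
sketch):

    /etc/init.d/mysql stop
    tar czf /var/backups/mysql-datadir.tar.gz /var/lib/mysql
    /etc/init.d/mysql start

With the server stopped, the file-level copy is guaranteed consistent,
and the tarball gets swept up by the normal BackupPC run.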
Best to use LVM (over RAID if you like) for future expansion flexibility.
I happen to use OpenFiler as a NAS host for the same reason - can act
as an iSCSI host as well as all the mainstream filesharing protocols,
relatively easy to set up compared to building your own from a generic
distro.
Note
On Sun, Mar 20, 2011 at 4:27 AM, Mark Edwards wrote:
> I moved my TopDir from a USB hard drive mounted at /var/lib/backuppc, to an
> NFS share mounted at /mnt/nfs/backuppc. The share is mounted using autofs
> rather than fstab.
Please confirm so it's clear to us noobs: the clean solution would
h
On Fri, Mar 18, 2011 at 11:05 PM, Timothy Murphy wrote:
>> Many were basically useless, but I have to believe the post-XP ones
>> may be a bit better.
>
> I actually said that there did not seem to be a standard Microsoft
> backup program that did incremental backups.
> I notice that nobody has ac
On Fri, Mar 18, 2011 at 8:22 PM, Joe Konecny wrote:
> On 3/18/2011 5:00 AM, hans...@gmail.com wrote:
> This isn't valid. With today's pipes bandwidth isn't a concern as
> far as forums go. I realize this is a mailing list and not a
I meant the mental bandwidth of the participants.
> forum but
On Fri, Mar 18, 2011 at 7:49 PM, Timothy Murphy wrote:
>>> Incidentally, is there a Windows backup program as good as BackupPC?
> I should have said that there does not seem to be a Microsoft backup program.
> I'm reluctant to run non-MS programs, for the same reason that I do not like
> to run u
On Thu, Mar 17, 2011 at 10:12 PM, Joe Konecny wrote:
> I'm not claiming I can explain the solution to his problem but this is a good
> example of why
> microsoft is so successful. MS has MVPs (Most Valuable Professionals) (and
> they aren't
> necessarily on MS's payroll) that hang out in foru
On Thu, Mar 17, 2011 at 9:29 PM, OldManRiver
wrote:
> I'm not about to read source, as have no time for that. If this is not
> intuitive or the docs can not explain it, then I need another tool as I'm
> rushed to get this done yesterday.
And?
Obi-Wan: This isn't the backup solution you're looking for. . .
On Mon, Mar 14, 2011 at 12:16 AM, César Kawar wrote:
> Yes I'm sure. Without -H option it actually was impossible to sync the pools.
> It worked without -H but didn't fit on the target USB drive.
Just to toss this out there as a possible explanation - if I've got
this wrong, someone please jump in
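For reference, the pool-copy command under discussion is the usual
sketch (paths hypothetical):

    rsync -aH --delete /var/lib/backuppc/ /mnt/usb/backuppc/

-H is what preserves hardlinks across the copy; without it every link
becomes an independent file on the target, which neatly explains a pool
that suddenly doesn't fit on a same-sized drive.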
On Sat, Mar 12, 2011 at 5:07 AM, Jeffrey J. Kosowsky
wrote:
> In particular with regard to metrics you seek, I don't know whether it is
> better/worse to have one file with 2N links or N files with 2 links. Your
> metrics don't distinguish that and depending on how the list of hard links is
> c
On Sat, Mar 12, 2011 at 3:50 AM, Jeffrey J. Kosowsky
wrote:
> Not that I care too much, but that method uses a non-trivial approach
> that I originally developed, including code that is
> *verbatim* copy-and-pasted without any attribution and without
> GPL-license from my original, *copyrighted* r
On Fri, Mar 11, 2011 at 9:05 PM, Les Mikesell wrote:
> It is the number of files with more than one link that matters, not so much
> the total size. But the newer rsync that doesn't need the whole file tree
> loaded at once besides the link table and lots of RAM may permit it to
> scale up more.
I'm not qualified to disagree, Cesar, but my understanding is that the issue:
A - Has nothing to do with the size in TB of the filesystem, but the
number of hardlinks - therefore the number of source files, the
frequency of backups and the number of clients.
B - Wasn't/isn't related to memory leaks,
On Fri, Mar 11, 2011 at 10:56 AM, Rob Poe wrote:
> I'm using RSYNC to do backups of 2 BPC servers. It works swimmingly, you
> plug the USB drive into the BPC server, it auto-mounts, emails that it's
> starting, does an RSYNC dump (with delete), flushes the buffers, dismounts
> and emails.
Sou
On Fri, Mar 11, 2011 at 10:33 AM, Jeffrey J. Kosowsky
wrote:
> I wrote a script BackupPC_copyPcPool that I posted to the list that should be
> a bit more efficient & faster than BackupPC_tarPCCopy
Noted, and thanks
--
On Fri, Mar 11, 2011 at 3:46 AM, Michael Conner wrote:
> That is good to know. Actually things are a little better than I thought, the
> spare machine is Dell Dimension 2400 with a Pentium 4, max 2 gb memory. So I
> guess I could slap a new bigger drive into it and use it. My basic plan is to
On Thu, Mar 10, 2011 at 9:59 PM, Michael Conner wrote:
> and a NAS (and may be adding another). Note that my Linux knowledge is still
> limited but growing as I look at more open source stuff.
So here's another reason to set up that second NAS.
What I've done is set up a separate (bigger) NAS t
Of course it "should" be!
Every child "should" have enough to eat and a good education too, but
few people live in an environment where they'd have the cheek to
demand it as if it's an inherent *right* - someone's got to develop
the skills and put in the time and effort to earn the money to make i
On Mon, Mar 7, 2011 at 11:49 PM, Les Mikesell wrote:
> On the other hand, if you have a specific question or run into a problem
> while trying to follow the installation instructions I think you will find
> that people are more than willing to help.
Absolutely - my point is the "don't complain'
On Mon, Mar 7, 2011 at 10:30 PM, Cesar Kawar wrote:
> I'm sorry if I've been rude in my last mail.
> Again, I don't try to be rude, so if it sound a bit "snarky", I'm really
> sorry.
I for one didn't find your comment "snarky" at all.
Many people are so "religious" about open source that they i
Grml's persistence feature is the same as Debian LiveCDs - just create
an ext3 partition labeled "live-rw" and boot with the kernel option
("cheat code") of "persistence", and everything's automatically
persistent. Can also use a loopback file if you don't want to dedicate
a partition (although thi
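The setup side of that is a one-liner (the device name is hypothetical):

    mkfs.ext3 -L live-rw /dev/sdb2

then boot Grml with "persistence" added to the kernel command line.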
Great as it is, Openfiler is pretty much a single-purpose "appliance",
and as an rPath-based distro (similar to RHEL) its packages are very
out of date and most people aren't Conary package-management wizards.
Grml includes BackupPC 3.2.0-1.1 and in general Grml tends to ship
more recent releases
On Wed, Feb 2, 2011 at 10:58 PM, John Goerzen wrote:
> This *is* the smallest possible chunk, sadly ;-)
>
>> You may want to consider a separate backup profile of the database dumps. So
>> set up one backup for the rest of the machine; and another backup (using
>> $Conf{ClientNameAlias} to point t
Nothing will properly handle backing up files currently being written
to during the backup session without inserting itself somewhere
between the OS and the disk/controller hardware.
COTS "hot imaging" programs (Symantec's (recent versions of) Ghost and
BackupExec System Recovery are examples) do
I have a large but still limited amount of total storage space
available, and am trying to figure out how to optimise dividing it up
between the "primary" NAS and the "BUNAS" that will hosting BackupPC's
volumes.
The least critical are of course media files, and in fact they don't
even really need
> If you consider using ZFS as BackupPC's filesystem
In relation to other projects, I've done a bit of research, and got
interested in Nexenta, however even before the Oracle debacle I wasn't
quite ready to invest the additional learning curve - much less likely
now. . .
And just a comment, but k
I've been investigating how to backup BackupPC's filesystem,
specifically the tree with all the hard links (BTW what's the right
name for it, the one that's not the pool?)
The goal is to be able to bring a verified-good copy of the whole
volume off-site via a big kahuna SATA drive.
I don't have e