replication to push
a copy of the BackupPC data to a remote device.
--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IT |
Systems Engineer
For machines with antique rsync versions, we’ve had success backing them up
with the ’tar’ method instead.
> On 2020Feb 21, at 06:13, Gerald Brandt wrote:
>
> Sadly, no. The machine is due for decommissioning (I've been trying for years,
> but management...). I still have to back it up
controller redundancy) for running VM disk images
so that if we need to take it off line for patching, there’s zero impact.
> On Dec 18, 2019, at 02:08, orsomannaro wrote:
>
> On 16/12/19 16:54, Ray Frush wrote:
>> I run BackupPC as a VM in my environment which backs up all of
, the primary storage becomes corrupted, the backups should
still be accessible once a replacement BackupPC server is built and the
external pool of data is presented to it.
--
Ray Frush
We back up to a ZFS-based appliance, and we let ZFS do the compression while
disabling compression in BackupPC. We do not allow ZFS to de-duplicate.
However, since you're looking at doing ZFS on the same box that's running
BackupPC, it probably doesn't matter which one has compression turned on.
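As a rough sketch of that split (compression in ZFS, deduplication off,
BackupPC's own compression disabled) -- the dataset name tank/backuppc is
only a placeholder, not anything from this thread:

```
# Placeholder dataset name -- substitute your actual pool/dataset.
zfs set compression=lz4 tank/backuppc
zfs set dedup=off tank/backuppc   # BackupPC's hash pool already de-duplicates
zfs get compression,dedup tank/backuppc

# And in BackupPC's config.pl, turn off BackupPC-side compression:
#   $Conf{CompressLevel} = 0;
```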
Ahh, I didn’t install from an RPM. We install from the official tar files as
BackupPC packaging has been spotty in the past, and based on your SELinux
issues, continues to be spotty.
> On Aug 28, 2019, at 14:12, Jamie Burchell wrote:
>
> Ah, perhaps it's due to which package I have. I’m
So, I’m running on a RedHat 7 flavored box. ’semanage fcontext -l’ returns no
items for specific paths for BackupPC on my system. Also my systems have no
content in /usr/share/selinux/packages/, which is why I wrote my own.
> On Aug 28, 2019, at 13:23, Jamie Burchell wrote:
>
> Thanks
Our setup is a little different than yours, but this is the SELinux module I
deploy to my BackupPC server with these steps:
semodule -r backuppc
checkmodule -M -m -o /tmp/backuppc.mod /tmp/backuppc.te
semodule_package -o /tmp/backuppc.pp -m /tmp/backuppc.mod
semodule -i /tmp/backuppc.pp
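The module source itself isn't reproduced here; as a purely illustrative
sketch (not the poster's actual policy), a minimal backuppc.te might grant
the web UI read access to BackupPC's logs -- the types and rules you actually
need depend on your layout and the AVC denials in audit.log:

```
module backuppc 1.0;

require {
    type httpd_t;
    type var_log_t;
    class file { read open getattr };
    class dir { read search open };
}

# Let the CGI running under Apache read log files and directories.
# A real policy will need more rules, driven by observed denials.
allow httpd_t var_log_t:file { read open getattr };
allow httpd_t var_log_t:dir { read search open };
```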
We
I’ll echo Jean-Yves' sentiment and advise against turning on ZFS deduplication.
For a BackupPC pool, which is already significantly deduplicated (via the
hash pool), deduplication probably won’t buy you as much as you’d hope. We
recently moved to ZFS backed storage and rely on using ZFS’s
ion because we didn’t actually run
out of inodes on the backend storage.
Reporting as an FYI to let people know how BackupPC responds to some of the new
threshold checking.
--
Ray Frush
> On Feb 21, 2019, at 15:40, Adam Goryachev
> wrote:
>
&
) and then
hangs again.
--
Ray Frush
o restore some sheets from older days,
> and they need to maintain the new and the old sheet when I restore
> it. Is there any way I can add a prefix or suffix to a restored file?
> i.e. Restore_Sheet001.xls
>
--
Ray Frush
uple of times?
>
> Thanks for your answer in any case ! It’s going to be.. weeks yeah :s
>
> Nino
>
>
>
> From: Ray Frush
> [mailto:fr...@rams.colostate.edu]
>
> Sent: Wednesday, August 22, 2018 7:06 PM
>
> To: backuppc-users@lists.sourc
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: http://backuppc.wiki.sourceforge.net
The incremental period of 0.97 results in a daily backup, so that's
probably what you want to keep.
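In config.pl terms, a schedule along those lines might look like this
(illustrative values, not necessarily the poster's exact configuration):

```
$Conf{FullPeriod}  = 6.97;       # weekly fulls
$Conf{IncrPeriod}  = 0.97;       # daily incrementals
$Conf{IncrKeepCnt} = 30;         # roughly a month of dailies
$Conf{FullKeepCnt} = [4, 0, 2];  # plus a few older fulls, just in case
```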
My schedule ends up giving you something like this: ~32 daily backups + a
couple of older ones just in case you need an older file.
Backup# Type Filled Level Start Date Duration/mins Age/days
0
. The Filled vs. Full
backups are a bit confusing.
--
Ray Frush
On Wed, Nov 15, 2017 at 2:59 PM, Jamie Burchell <ja...@ib3.co.uk> wrote:
> Hi!
>
>
>
> Hoping someone can give me that “ah ha!” moment that I’m so desperately
> craving after poring over the documenta
AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2017-09-20 16:14 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > Question: Just how big is the host you're trying to backup? GB?
> number
> > of files?
>
> From BackupPC web page: 14
server is 770 GB with 6.8
million files. Full backups on that system take 8.5 hours to run.
Incrementals take 20-30 minutes. I have no illusions that the
infrastructure I'm using to back things up is the fastest, but it's fast
enough for the job.
--
Ray Frush
Colorado State University.
On Wed
Gandalf-
Hopefully, someone with more rsyncd experience can step in and help. I
haven't used the rsyncd method for 3-4 years, and don't have any current
examples to help you with.
Good luck!
--
Ray Frush
On Tue, Sep 19, 2017 at 9:32 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.
tpstats --exclude=var/lib/mlocate
> --exclude=var/lib/mysql/* --exclude=var/lib/apt/lists/*
> --exclude=var/cache/apt/archives/* --exclude=usr/local/php55/sockets/*
> --exclude=var/run/* --exclude=var/spool/exim/*
> backuppc@myhost::everything /
>
>
>
> standa
is not, and there is a misconfiguration that will leap out
at you as you work through this.
I had to do the same thing when I was doing an initial install.
--
Ray Frush
Colorado State University.
On Tue, Sep 19, 2017 at 2:52 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Stil
Gandalf-
Sounds like you need a bigger backup server.
BackupPC keeps the transfer logs compressed, even the most recent one.
Here are typical log sizes for my largest host (768 GB, 6.7 million files),
which also has a significant amount of churn. You can see that the Full
(backup 65), even compressed,
Longish answer below...
On Fri, Sep 1, 2017 at 3:22 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2017-08-31 16:33 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > The values you'll want to check:
> > $Conf{IncrKeepCnt} = 26; #
On Thu, Aug 31, 2017 at 10:45 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
> Ok but let's simulate a crash in your example:
>
> On day 2, before the incremental backup, the filled one (day0) is lost.
> Is backup made on day1 still available with "all" files or only with
>
On Thu, Aug 31, 2017 at 10:23 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
>
> So, with a "full" run, the second "full" is still seen as an
> incremental by rsync?
> Let's assume a 100GB host.
> bpc will backup that host for the first time. 100GB are transferred.
> The
On Thu, Aug 31, 2017 at 9:16 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2017-08-31 16:33 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
>
> Thanks for the reply.
> In this case, you are making some full backups.
> I don't want to
is pretty good at self-healing from issues. We had a
number of backups impacted by running out of inodes during a cycle. While
the files lost to the lack of inodes cannot be recovered, BackupPC
recovered gracefully on the next cycle after the file system was expanded.
--
Ray Frush
On Thu, Aug 31
Craig-
On Mon, Aug 14, 2017 at 11:54 AM, Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:
> Ray,
>
> The behavior you are seeing is expected for tar (and smb and ftp)
> XferMethods. Incrementals don't detect deleted or renamed files. So if
> you have a directory
Craig-
Thanks for taking a look at this.
On Sun, Aug 13, 2017 at 8:06 PM, Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:
> Ray,
>
> What is the XferMethod and backup schedule (ie, how often do you do
> incrementals and fulls)? Which backup are you viewing
A snapshot of the BackupPC Filesystem does not protect from gross hardware
failure of the storage that destroys both the data and the snapshots.
--
Ray Frush
On Wed, Aug 9, 2017 at 3:42 PM, Alexander Moisseev via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:
> On 8/9/2
run.
--
Ray Frush
On Wed, Aug 9, 2017 at 2:47 PM, Hannes Elvemyr <hanne...@gmail.com> wrote:
> Hi!
>
> I'm using BackupPC for all my machines and it's great! I would now like to
> protect my BackupPC pool somehow (if my BackupPC server crashes, gets
> stolen or burns up
Jean-Yves-
I believe you may have been looking at v3 documentation. BackupPC V4
does _not_ make extensive use of hard links.
See: https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0
--
Ray Frush
Colorado State University.
On Sun, Aug 6, 2017 at 6:10 PM, B <lazyvi...@gmx.
This is a two part question/problem report.
We back up a file system that has a sub-directory that generally contains
around 39K small files that usually adds up to 16GB. The files see a fair
amount of churn month to month, and we were pulling from a backup about 2
weeks ago.
When we try to
Each night I get a list of warnings about 'missing pool file' in my main
log. I believe this stems from an issue caused by running out of inodes.
I'm wondering if doing something like manually running this command would
clean up the pool and fix the missing pool file messages.
, and
IBM's Storwize V7000 Unified storage, which all do a nice job of
snapshots, and we've never experienced issues with overhead during the
snapshot process. Point is, there are a lot of options out there,
including using modern file system features.
--
Ray Frush
Colorado State University
ve 'hourly' incrementals.
As Les Mikesell just pointed out, the downside would be that for large
instances, you'd be doing a lot of fairly expensive (compute time)
operations every hour to scan the file system for changes.
I believe that FS snapshots are faster and more efficient than BackupPC
could
er end
for BackupPC to manage?
Thanks
--
Ray Frush
Colorado State University.
Do you have SELinux enabled? I encountered significant challenges getting
BackupPC and SELinux to play nicely with each other. Check
/var/log/audit/audit.log for a report.
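To turn those denials into something actionable, the standard audit tooling
can summarize them and draft a candidate module (the module name
backuppc_local here is arbitrary; review the generated rules before loading):

```
# Show recent AVC denials mentioning BackupPC
ausearch -m avc -ts recent | grep -i backuppc

# Draft a policy module from the denials, then install it
ausearch -m avc -ts recent | audit2allow -M backuppc_local
semodule -i backuppc_local.pp
```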
--
Ray Frush
Colorado State University.
On Thu, Jul 13, 2017 at 6:48 AM, Akibu Flash <akibufl...@outlook.com> wrote:
ete didn't report any errors. It also
> didn't delete the XferLOG.[56].z files.
>
> It looks like #3 was correctly deleted, and XferLOG.3.z too.
>
> What's in those directories? Are they owned by the backuppc user?
>
> Craig
>
> On Wed, May 24, 2017 at 4:03 PM, Ra
;wb...@parplies.de> wrote:
> Hi,
>
> Ray Frush wrote on 2017-05-23 15:37:36 -0600 [[BackupPC-users] Question
> about transient inodes]:
> > [...]
> > Can a developer comment on under what conditions BackupPC might be
> > temporarily allocating a lot of extra inodes,
I've encountered an interesting issue:
In $TopDir/pc/host, I have some orphaned directories that don't appear in
$TopDir/pc/host/backups.
For example: this host shows two extra directories, and XferLOGs for
backup #5 and #6 that do not appear in the 'backups' file:
$ ls isxxt004
0 17
t; <https://github.com/backuppc/backuppc/commit/7936184a9ec049fef3d0d67e012b23d79eb336f1>
> for
> this last night.
> Craig
>
--
Ray Frush
On Wed, May 24, 2017 at 10:07 AM, Michael McGregor <mcgrm...@isu.edu> wrote:
> Hello,
>
> I have a problem with the BackupFilesExc
, and then quickly releasing
them?
Thanks.
--
Time flies like an arrow, but fruit flies like a banana.
Ray Frush
Bob-
I found the instructions to run httpd (apache) as the backupPC user
(backuppc) to be a graceful way to get BackupPC to play well with all the
required file ownership.
--
Ray Frush
On Sat, May 13, 2017 at 5:50 PM, Robert Katz <bobk...@digido.com> wrote:
> Guys:
>
> Can
ub.com/backuppc/backuppc/commit/7936184a9ec049fef3d0d67e012b23d79eb336f1>
> for this last night.
>
> Craig
>
> On Fri, May 12, 2017 at 9:56 AM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> I’m seeing this issue too!
>>
>> I noticed that with 4.
up','/export','/software','/oracledba','/syswork']}
Please let me know if there's any additional information we can provide.
--
Ray Frush
On Thu, May 11, 2017 at 5:38 PM, Moorcroft, Mark (ARC-TS)[Analytical
Mechanics Associates, INC.] <mark.moorcr...@nasa.gov> wrote:
> Using the 4.1.2
I spotted two file stored in the cpool totaling 23.5GB, which is about the
same as the discrepancy between the pool size that BackupPC reports and
what I'm counting on the file system.
...
drwxr-x---. 130 backuppc backuppc 8.0K May 4 01:00 18
-rw-rw. 1 backuppc backuppc 16G Apr 27 14:38
u could
> set $Conf{PoolSizeNightlyUpdatePeriod} to 1.
>
> Craig
>
> On Mon, May 1, 2017 at 3:48 PM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> My instance of Backuppc 4.1.1 reports:
>>
>> "Pool is 26.56GiB comprising 764047 files and 16512 di
n NFS file system. Is there something I should
be doing different to get a more accurate report of the pool size? How
does BackupPC calculate the pool size? (I'm trying to grok the source
code, but haven't found the method yet.)
Thanks
--
Ray Frush
Colorado State
I believe the install documentation acknowledges that BackupPC isn't
SELinux-aware and advises you to disable SELinux on the server you're
using to run BackupPC.
An interesting project would be to create a backuppc module for SELinux.
On Fri, Apr 28, 2017 at 9:06 PM, Kenneth Porter
> directory path off, and it was empty (instead of "/") in the case you
> mentioned, causing the restore to the home directory, not /.
>
> Craig
>
>
> On Mon, Apr 24, 2017 at 9:20 AM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> I’ve bee
the correct
location, ’testserver:/opt’.
I’m wondering if anyone else has observed this behavior, or can suggest
what I might be doing incorrectly to get the unexpected result in my first
test case. Otherwise it sounds like I may have hit a bug.
Thanks.
--
Ray Frush
/Terminal
sudo su -
mkdir .ssh
chmod 700 .ssh
echo '[backuppc user public key]' > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
exit
Note the backuppc public key looks something like:
ssh-rsa B3NzaC1yc... [a couple lines of hash] ISXXYosqZQ==
backuppc@server
Ray
Like Kris, we back up a number of MacBooks here using rsync via ssh, and
have never had an issue.
Also like Kris, we only back up /Users, which limits what we're backing up.
--
Ray Frush
package (which was helpfully
provided by Ray Frush).
Craig
On Wed, Jul 31, 2013 at 7:14 AM, Mark Campbell mcampb...@emediatrade.com
wrote:
Apparently, my issue is not as solved as hoped. The service does start up
fine now, and doing an rsync --list-only to another
.
Thanks,

--Mark
*From:* Ray Frush [mailto:ray.fr...@avagotech.com]
*Sent:* Thursday, August 01, 2013 3:14 PM
*To:* General list for user discussion, questions and support
*Subject:* Re: [BackupPC-users] rsync on Windows 8?
The BackupPC
…
Many thanks,
--
Ray Frush Either you are part of the solution
T:970.288.6223 or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Avago Technologies, Inc. | Technical Computing | IT Engineer
to you for some assistance. Attached are various portions of data
that should provide you with a very clear view of what I am attempting to
accomplish.
If you would be so kind, please review the information and respond with
your thoughts/questions...
--
Ray Frush
--
Ray Frush
the new files that were transferred for that host
for that backup. Wrapping it in a for loop to report on all your systems
is a separate exercise.
--
Ray Frush
around). You'll have to see if there's any evidence
that the client becomes unavailable after a backup is started. Also
check that the 'rsync --server' is actually getting started on the client.
--
Ray Frush
: Unable to read 4 bytes'.
--
Ray Frush
be
overrun by all the files.
--
Ray Frush
I can't math today, I have the dumb...
On Fri, Oct 5, 2012 at 9:24 AM, Ray Frush ray.fr...@avagotech.com wrote:
Out of curiosity, I checked some of our primary storage, where we
have a mix of lots (over 1 billion) of really small files and some
large databases, and found we're using about 7
--
Ray Frush
--
Ray Frush
--
Ray Frush
-rw-r- 1 backuppc backuppc 545 Aug 20 15:42 XferLOG.7.z
-rw-r- 1 backuppc backuppc 456 Aug 20 14:00 XferLOG.bad.z.old
Unable to open those files and paste the contents here as they all open
with a collection of symbols.
How should I proceed from here?
--
Ray Frush
On Mon, Aug 20, 2012 at 10:48 AM, Olivier Ragain orag...@chryzo.net wrote:
PS: what is the rule on this group about post responding or pre
responding to emails ^^ ?
Do what makes sense in the context of the discussion. Avoid doing
both at the same time. ;-)
--
Ray Frush