with only 7 days of backups.
Unless you have a huge turnover in data, keeping more backups will not
take a lot more space on the server. There is only one copy kept of
each unique file, no matter how many backups you keep. And since it
is compressed, it will take less space than the original copy.
That
n and works backwards to clean up the old copies so you
should have whatever files are still there even if some were somehow
deleted. Still, if your pool filesystem is totally corrupted all
bets are off - like it would be with any system.
--
Les Mikesell
lesmikes...@gmail.com
you'll only lose the single day that you delete.
>
> And what if I don't have any other filled backup, only incrementals
> made from the deleted "filled" one?
"Filled" backups don't take a lot more space, just more time to build
the directory structure. If you are
use you'd lose things that are pooled from other hosts.
In any case, though, if the next (rsync) run does not find an
existing copy it should fill it back in. Tar/smb backups would take a
full run to recover since they only transfer new files by timestamp on
incrementals.
--
Les Mikesell
Compression and pooling across hosts will
likely at least double the history you can keep online unless your
data is mostly unique and already compressed.
--
Les Mikesell
lesmikes...@gmail.com
where you expect one to move most of
the data so the 2nd one should go very fast and catch anything the 1st
one missed.
--
Les Mikesell
lesmikes...@gmail.com
>
--
led by BackupPC.
>
If your database has a 'backup to stdout' command like mysqldump or
pg_dump, you can pipe through gzip to save the local copy, which may save
enough space to make it practical. And then you can exclude the
uncompressed location from your backup to also save space on the
server.
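For example, a minimal sketch - the database names, credentials and
the /var/dumps path are placeholders, not from the original post:
  mysqldump --all-databases | gzip > /var/dumps/mysql-all.sql.gz
  pg_dump mydb | gzip > /var/dumps/mydb.sql.gz
Then the live (uncompressed) database directory can be excluded in
the host config with something like:
  $Conf{BackupFilesExclude} = ['/var/lib/mysql'];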
rt on a host or two as I was leaving work on
Fridays using the web interface so they would have the weekend to
complete if necessary - and to correct any time-skew that might have
happened. This was slightly before the blackout time would end so
there would not be a problem with concurren
recreate,
unless you have already automated that with tuned kickstart files or
all of your systems are identical. The ReaR tool I mentioned in
another post will create a script to re-create an existing system.
For Windows systems, I'd use Clonezilla images as the base, but that
does take extra time and di
the boot ISOs until you might need to burn a
copy.
--
Les Mikesell
lesmikes...@gmail.com
g it well for so long...).
--
Les Mikesell
lesmikes...@gmail.com
le inactive
branches. The easy way to get this facility is to do your work on a
Mac.
--
Les Mikesell
lesmikes...@gmail.com
On Thu, Jul 13, 2017 at 2:01 PM, Bob Katz wrote:
> >
> It's a mess and I wish I could back out of it and start from scratch.
> Also in the process I tried to create a xinet.d method of initiating
> rsync and I fear a conflict. All I did was install xinetd and create
> some
> stop the daemon as the only
> command I've found to stop the daemon incorporates the nonexistent PID
> file.
>
> Any thoughts, please? Thanks!
I noticed in your rsyncd.conf that you are running it as user
backuppc. That probably will prevent it from accessing ports below
1024.
ort (tcp/873?) to/from localhost in
your firewall. But my opinion has always been that computers are
supposed to do work so that humans don't have to, so I've always just
run rsync over ssh to localhost just like everything else instead of
doing the e
running
>
> So I'm still stuck :-(
Did you get any result from the "rsync localhost.localdomain::" test?
If that doesn't connect, it is probably blocked by the firewall
settings.
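A quick way to check - these are standard firewall commands, not from
the original thread; use whichever matches your setup:
  iptables -L -n | grep 873
  firewall-cmd --list-all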
--
Les Mikesell
lesmikes...@gmail.com
an option in the web interface would be great - and there it
could even be aware of what host(s) the user is allowed to access.
--
Les Mikesell
lesmikes...@gmail.com
The routing is only complicated if the endpoint is on a router or some
other device.
--
Les Mikesell
lesmikes...@gmail.com
something that
will work with a dynamic DNS service.
--
Les Mikesell
lesmikes...@gmail.com
>
Normally the key goes in root's home directory under
.ssh/authorized_keys. That 'ssh-copy-id' command is a shell script
if you want to see what it does. Maybe you can find wherever root's home
directory is in the sandbox environment and make a copy there.
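Roughly, with a placeholder client name and the default key path:
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@clienthost
or doing the same thing by hand:
  cat ~/.ssh/id_rsa.pub | ssh root@clienthost \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'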
If not, and you end up using rsyn
selinux contexts are set correctly during the
package install. Does that work through symlinks? Maybe you could
make the symlink the other direction if you want to call it some other
name.
--
Les Mikesell
lesmikes...@gmail.com
ng life
spans and updates? Seems safer than putting a same-named package in
a different repo where a future update will probably accidentally pull
in some other same-named package and clobber the one you wanted to
keep.
the
differences over the network). Is there any way you can run rsync
remotely against the NFS host instead?
--
Les Mikesell
lesmikes...@gmail.com
In any case, I think the version of
yum in RHEL/CentOS 6 and up will handle URLs directly, so you could
shorten that to:
yum install
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
--
Les Mikesell
lesmikes...@gmail.com
yum --enablerepo=extras install epel-release
That is, the release package that installs the EPEL repo is available
in the CentOS 'extras' repo, which is part of the distribution.
--
Les Mikesell
lesmikes...@gmail.com
stay current.
>
It might be even nicer to add the rpm-building script into the source
repository so anyone could build an rpm of a new version and install
without losing the file management features of rpm.
--
Les Mikesell
to prevent direct logins. You can still
"su -s /bin/bash - backuppc" or "sudo su -s /bin/bash - backuppc"
if that is the case.
--
Les Mikesell
lesmikes...@gmail.com
; xterm
>
> Does anybody have a clue about that?
The remote system is sending some output before starting rsync over
the ssh login. There is probably something being started in
/etc/profile, /etc/bashrc or root's .profile or .bashrc that is
complaining about not having TERM set.
>> BackupPC_Admin.
>>
>> Would it make more sense to put it in /usr/libexec?
>
>
> I thought about that as well but in either case it doesn't solve the apache
> directive issue...
>
Is it possible to use Apache's own
that reason.
Are you planning to try to make one package that upgrades without
breaking or a separate backuppc4 package so the admin can choose
when/if to upgrade?
--
Les Mikesell
lesmikes...@gmail.com
long as v3 will be
maintained and let the sysadmin decide when to switch. I might think
differently if the upgrade could be completely transparent, though.
--
Les Mikesell
lesmikes...@gmail.com
> How can I correct this situation? I'm not very good with the commands
> that change permissions.
You can use chmod with the symbolic form for the permissions.
chmod o+rx /path
for each of the directories will add read and execute permission for
everyone else.
Is it the file itself with incorrect permissions? Maybe everything is
blocked by permissions on a directory above the one where your symlink
points. Use 'ls -ld' on each starting with /media and make sure
there is rx permission for everyone or at least backuppc.
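Using the path from this thread, that would be something like:
  ls -ld /media /media/edith /media/edith/Disque640Go \
    /media/edith/Disque640Go/backuppc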
--
Les Mikesell
lesmikes...@gmail.com
On Sat, Feb 11, 2017 at 7:14 PM, Edith Cloutier
<edith.clout...@mediom.com> wrote:
> None
>
You'll have to explore why you can't su to the backuppc user and cd
into that directory. "sudo su -s /bin/bash backuppc" should work or
give you a hint about what is failing.
th/Disque640Go
> ext4
>
Still looks reasonable - do you have any of the extended security
mechanisms enabled? AppArmor or SELinux?
--
Les Mikesell
lesmikes...@gmail.com
21 11:37 trash
>
That looks OK. What about "ls -ld /media/edith/Disque640Go/backuppc"
for the directory itself?
Also, what filesystem type is this?
--
Les Mikesell
lesmikes...@gmail.com
> What's the bug? Where is it?
What does "ls -l /var/lib/backuppc" show? And if it shows a symlink,
repeat for the target to see ownership and permissions.
--
Les Mikesell
lesmikes...@gmail.com
ted with
> error code. See "systemctl status backuppc.service" and "journalctl -xe" for
> details.
>
> What could I try next to solve the problem ?
Try executing that symlink command that the log says is failing
manually as the backuppc user. You might get
and your
ability to build an efficient raid/LVM volume of an appropriate size.
But even if you need more than one server to handle it, the interface
that lets you pick any file or directory from the history and either
restore back to the source location or download through your browser
is going to b
that backups become faster after the
2nd full run. If you use the --checksum-seed option the rsync block
checksums are cached on the server so the archive copy no longer has
to be uncompressed for the rsync comparisons.
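One way to set that in config.pl - a sketch; 32761 is the seed value
the BackupPC docs suggest for checksum caching:
  push(@{$Conf{RsyncArgs}}, '--checksum-seed=32761');
  push(@{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761');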
--
Les Mikesell
long time.
> sudo mv /var/lib/backuppc /var/lib/backuppc_origine
> sudo ln -s /media/edith/Disque640Go/backuppc /var/lib/backuppc
> sudo chown -h backuppc:backuppc /var/lib/backuppc
This should have been chown -R to recurse down the tree.
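That is, run against the real directory so it recurses into the tree:
  sudo chown -R backuppc:backuppc /media/edith/Disque640Go/backuppc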
--
the archived copy for the rsync comparison to work.
--
Les Mikesell
lesmikes...@gmail.com
just the directories in /usr/local/BackupPC/lib/ which
seem to at least exist, but your error is about a particular file.
The @INC list is the search path where perl will try to find it.
Does /usr/local/BackupPC/lib/BackupPC/Lib.pm exist, and if so is it
and the pat
great
deal of difference bandwidth-wise between an incremental or full
backup, although the full may take a much longer time to complete
since it checks all the files. In either case, only the detected
differences will be transferred.
> what is that limit and how can I work around or fix it? If that's
> not possible, then BackupPC isn't the backup solution for me.
The most likely suspect is that rsync timeout shown in the log snippet
you posted. But you didn't provide any details about why or how your
rsync had timeouts enabled.
Ms, one at a time.
>
> But isn't it much more interesting that I seem to have found some
> kind of limit to BackupPC?
I think you have found some limit in your network or setup.
--
Les Mikesell
lesmikes...@gmail.com
the backup is the same as the one being backed up.)
> Because the read is taking so long, the write pipe has no activity and is
> timing out.
Pipes do not have any timeout.
You mentioned that the backup is actually stored on a NAS device. Is
this NFS mounted and
g and the previous file:
You probably won't see a cause on the backuppc side. Just the PIPE
signal when the connection drops.
--
Les Mikesell
lesmikes...@gmail.com
> appreciated rather than criticizing my mail so strongly. I have been using
> backuppc for 8 years, and it is what I recommend at the organizations that I
> work for, but this kind of hostility would discourage people from adopting
> it.
Holger likes the details to be right. Often that tu
compression with
the sshd configuration on the client side. However, it is only used
with the rsync xfer method. Rsyncd and samba do not use the ssh
layer.
--
Les Mikesell
lesmikes...@gmail.com
connections, but subsequent runs with
rsync will be much faster, only copying the differences. If the
initial full runs are impractical, it might work to initially set the
2nd server up locally, then move it to the offsite location with the
initial data in place.
You don't need to do anything
different from your remote server.
--
Les Mikesell
lesmikes...@gmail.com
hat created the files. The chown that
Adam suggests would fix it if that is the case. Also, I believe
different packaging systems use different ways to make the web
interface run as the backuppc user. You may need the perl-suid
package
The backuppc side thinks the remote side disconnected.
If rsync is actually still running on the client it is probably some
sort of network issue like a nat gateway or stateful firewall timing
out. If rsync did exit it could be out of memory or a filesystem or
disk error.
--
Les Mikesell
lesmikes...@gmail.com
while I OOM just kills it:
Your best shot would be to use dd to copy the partition to the new
disk, then grow the filesystem to fill the space using the filesystem
tools. You don't need lvm for that since the space is already there,
but the details of the command will depend on the type of filesystem.
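A rough sketch with hypothetical device names - double-check both
devices before running dd, and the resize step assumes ext2/3/4:
  dd if=/dev/sda1 of=/dev/sdb1 bs=4M conv=noerror
  resize2fs /dev/sdb1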
--
On Sun, Aug 7, 2016 at 12:45 AM, martin f krafft <madd...@madduck.net> wrote:
> also sprach Les Mikesell <lesmikes...@gmail.com> [2016-08-06 18:19 +0200]:
>> Why is it likely that you would want to read backuppc logs on
>> systems that don't have backuppc installed.
ly easy
to write if you already know some computer language. The hard part
comes when you try to understand someone else's code where they used a
different style - or your
n should not be copied at all with rsync/rsyncd, but would be by other
methods using the timestamp to exclude existing files during
incrementals.
--
Les Mikesell
lesmikes...@gmail.com
it is hanging in the device driver code of the OS
(perhaps waiting for a disk action to complete). But since it is a
VM, I don't know how that really relates to the hardware.
--
Les Mikesell
lesmikes...@gmail.com
ined. Note that these can be
overridden in the per-host configs, though. Normally you would
change only the local host's per-host config to use sudo, leaving the
others with the stock ssh command to work remotely. Also, note that
the RsyncClientRestoreCmd command posted isn't symmetrical with
On Thu, Jun 9, 2016 at 2:07 PM, Carl Wilhelm Soderstrom
<chr...@real-time.com> wrote:
> On 06/09 01:50 , Les Mikesell wrote:
>> Sometimes this is caused by a nat router or stateful firewall
>> (possibly even host firewall software) timing out and breaking a
>> connectio
For ssh you can usually fix it by enabling keepalives - not
sure about the standalone rsyncd options.
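For the rsync-over-ssh method that could look like adding a standard
ssh keepalive option to the client command - a sketch based on the
stock v3 setting:
  $Conf{RsyncClientCmd} = '$sshPath -q -x -o ServerAliveInterval=60'
                        . ' -l root $host $rsyncPath $argList+';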
--
Les Mikesell
lesmikes...@gmail.com
- perhaps like this:
https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-debian-8
--
Les Mikesell
lesmikes...@gmail.com
against allowing it to be linked into backuppc? That would
be the equivalent, say, of gluing a proprietary database client into a
perl module which is a better example of the need for dual-licensing.
Or even if someone created a much better user interface and packaged
it as a product - why would that be a problem?
But it is almost certainly too late to change anything
with backuppc even if the copyright owner(s) wanted to.
--
Les Mikesell
lesmikes...@gmail.com
different operations done in parallel or
sequenced across different machines. I never ran it against GitHub
but with its large user base and set of plugins I'd expect that to be
a common setup.
--
Les Mikesell
lesmikes...@gmail.com
erations. It can be triggered by changes in your source control
system and can collate results from different runs in one place for
you. There are plugins for all sorts of build/test/publish scenarios
if you need them.
--
Les Mikesell
A requirement to transfer ownership of contributions to the official
version so that someone would have the authority to change the license
on future versions if desired - but it may already be too late for
that.
--
Les Mikesell
lesmikes...@gmail.com
It would be great to automate tests across a few OS distributions
and versions - especially tracking changes in the samba and rsync
code. I'm retired now and no longer have access to lab resources or
anything running backuppc so I can't do much but it is such a great
project that I'd like to see it continue.
butions, but it might do more
harm than good to try to change that now - at least for v3. It does
make it difficult to discuss issues on the mail list when different
users will be seeing different things, though.
--
oogles-wireless-effort-is-led-by-a-geeks-geek-1421973826
http://www.recode.net/2016/4/14/11586114/access-google-fiber-ceo-interview
He does have a linkedin page but I don't know if that would work any
better to reach him.
--
Les Mikesell
lesmikes...@gmail.com
ernatives, and so far none
>> of them make any economic sense to use.
>
> Doesn't BPC use whatever rsync version is available on the BPC-server?
>
v3 uses its own Perl implementation to be able to compare to the
compressed archived copy.
to complete. The
approximately-weekly timing for fulls will keep them on the same day
after that.
--
Les Mikesell
lesmikes...@gmail.com
> I believe an individual wakeup will only queue backups that are ready
> to go, i.e. have their IncrPeriod or FullPeriod used up. In most cases
> you need multiple wakeups.
And, the first wakeup in the list starts the nightly cleanup which
should happen at a time when the backup runs are expected to be finished.
the defaults, it might be
worth setting up a virtual machine with a stock install or figuring
out how to extract the config file from the distribution .deb so you
can diff yours against it to find all your local changes.
--
Les Mikesell
shows space available? If it is in some other partition that is 95%
full you would see the symptoms you describe.
--
Les Mikesell
lesmikes...@gmail.com
the ping command failing.
--
Les Mikesell
lesmikes...@gmail.com
On Mon, Apr 11, 2016 at 1:32 PM, tschmid4 <tschm...@utk.edu> wrote:
>
> Sporadic backups will run automatically.
>
Did you mean to ask a question? Does your log file show that
scheduled backups are starting other times but failing?
--
Les Mikesell
lesmikes...@gmail.com
a Friday evening so it - and the subsequent weekly fulls - will have the
weekend to complete, and unless you have a lot of changed files the
weekday incrementals should be much faster.
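That is also the logic behind the default FullPeriod of 6.97:
slightly under 7 days, so a full started Friday evening stays on
Friday evening. A sketch of a matching schedule (values mirror the
stock defaults):
  $Conf{FullPeriod} = 6.97;
  $Conf{IncrPeriod} = 0.97;
  $Conf{BlackoutPeriods} = [
      { hourBegin => 7.0, hourEnd => 19.5, weekDays => [1..5] },
  ];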
--
Les Mikesell
lesmikes...@gmail.com
ched drives,
though. You might look through /var/log/messages for errors around
the times backuppc runs have failed. The PIPE error doesn't tell you
much except that the underlying xfer program quit unexpectedly.
--
Les Mikesell
e backup drive itself is relatively new. Just a few
> months old.
>
Are you sure you have enough RAM? Rsync will load a copy of the
whole directory tree at both ends before starting. If the target is
the same machine as the backup server, you'll need to hold 2 copies.
--
Les Mikesell
me} ?
>
I think I'd try adding a top level directory at a time to
$Conf{BackupFilesOnly}, starting with something small if possible.
That should let you get an idea of what kind of speed you are getting
before hitting whatever is causing your failures.
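For instance, to start with a single small tree (the share name and
path here are placeholders):
  $Conf{BackupFilesOnly} = { '/' => ['/etc'] };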
--
Les Mikesell
lesmikes...@gmail.com
You might try adding one at
a time and repeating the runs as they complete.
--
Les Mikesell
lesmikes...@gmail.com
host?
Add them as a web user. The usual way is to add them to the htpasswd
file but there are many ways to configure Apache authentication. Then
add the appropriate logins in the user or moreUsers field of the hosts
file to tie them to the host(s) they can control.
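A minimal sketch - the htpasswd file location varies by distro, and
'alice' is a placeholder (add -c the first time to create the file):
  htpasswd /etc/BackupPC/apache.users alice
and then in the hosts file:
  somehost        0       alice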
--
Les Mikesell
You don't use the web interface for these tasks?
--
Les Mikesell
lesmikes...@gmail.com
>
Yes there is. If you add a new host with the web interface and use
the newname=oldname syntax it will copy the old host's custom settings
which you can then edit if it needs any changes.
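For example, entering something like 'newpc=oldpc' in the add-host
field creates newpc with a copy of oldpc's per-host settings (both
names are placeholders).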
--
Les Mikesell
lesmikes...@gmail.com
base filesystem to ZFS,
You don't have to rush into this. CentOS 6 is expected to be
supported with maintenance updates until November 2020. And it
doesn't use systemd and should work with your current linux
filesystem.
--
Les Mikesell
lesmikes...@gmail.com
On Wed, Feb 10, 2016 at 12:31 PM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2016-02-10 16:34 GMT+01:00 Les Mikesell <lesmikes...@gmail.com>:
>> If you want to keep more than one full shouldn't FullKeepCnt and maybe
>> FullKeepMin be higher?
On Wed, Feb 10, 2016 at 4:40 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2016-02-07 17:25 GMT+01:00 Les Mikesell <lesmikes...@gmail.com>:
>> How many filled backups are you configured to keep? And have you
>> adjusted the FillCycle va
use them without conversions - but on the other hand there is a
difference between '6.97' and '7.97'. If that isn't a typo, something
odd happened.
--
Les Mikesell
lesmikes...@gmail.com
> nothing that could lead to issues with backup removal.
>
And yet, other people who haven't made your changes don't see that
issue. I don't know why it would happen but I wonder if your
FullPeriod is somehow getting parsed as 2.
--
config that you have
to manually change that entry.
--
Les Mikesell
lesmikes...@gmail.com
ed if it has
been set and it is obviously necessary to manually change it to hit
the right machine.
--
Les Mikesell
lesmikes...@gmail.com
using double quotes instead of single quotes around a string
containing the @ symbol. The web editor should show you how perl
parses the settings.
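For instance, an address is safe in single quotes, but in double
quotes Perl tries to interpolate @example as an array - a sketch with
a real config variable and a placeholder address:
  $Conf{EMailAdminUserName} = 'someuser@example.com';   # OK
  # "someuser@example.com" would lose or reject the @example part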
--
Les Mikesell
lesmikes...@gmail.com
The last backup is always filled and
the older ones are changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But, I still
don't see why any expired already.
--
On Thu, Jan 21, 2016 at 10:13 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2016-01-21 16:30 GMT+01:00 Les Mikesell <lesmikes...@gmail.com>:
>> V4 does it backwards from v3. The last backup is always filled and
>> the older ones are changed to re
r to "Unfilled" backups if you change
the scheduling you may need to adjust the FillCycle setting.
--
Les Mikesell
lesmikes...@gmail.com
On Tue, Jan 19, 2016 at 10:10 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2016-01-19 17:01 GMT+01:00 Les Mikesell <lesmikes...@gmail.com>:
>> I'd bump up FullKeepCntMin and IncrKeepCntMin to the numbers you want
>> to see if that keeps them f
On Mon, Jan 18, 2016 at 12:25 PM, Alexander Moisseev
<mois...@mezonplus.ru> wrote:
> On 18.01.16 19:53, Les Mikesell wrote:
>> Does anyone understand the docs at
>> http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
>> for $Conf{FillCycle} ? It looks li
works for rsync.
Personally I would just start over and only worry about extracting
anything from the old drive if you had to recover some older file.
--
Les Mikesell
lesmikes...@gmail.com