een set and it is obviously necessary to manually change it to hit
the right machine.
--
Les Mikesell
lesmikes...@gmail.com
DNS name didn't match the
windows host's concept of its own netbios name. A quick fix is to use
whatever name you want in the backuppc host setup but configure
ClientNameAlias as the IP address. Just remember later if you add
hosts with the newhost=oldhost syntax to copy the config
The web editor should show you how perl
parses the settings.
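As a sketch (the address below is made up), the per-host config just
carries the alias while the host list keeps whatever display name you
picked:

    # per-host config (pc/HOST.pl or the web editor's host settings)
    $Conf{ClientNameAlias} = '192.168.1.50';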
--
Les Mikesell
lesmikes...@gmail.com
ost. In any case you can use an alias in your email
setup to forward remotely.
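For example (the destination address is invented), a line in /etc/aliases
plus a rebuild of the alias database does it on most MTAs:

    # /etc/aliases
    backuppc: admin@example.com
    # then rebuild the alias database
    newaliases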
--
Les Mikesell
lesmikes...@gmail.com
d the mail logs to see if this was the original
recipient? It might be getting aliased or returned as an error
somewhere along the way.
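Something like this is usually enough to follow the message, assuming a
sendmail/postfix-style log (the address and log path are examples):

    grep -i 'backuppc@myserver' /var/log/maillog   # /var/log/mail.log on Debian/Ubuntu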
--
Les Mikesell
lesmikes...@gmail.com
On Thu, Jan 21, 2016 at 10:13 AM, Gandalf Corvotempesta
wrote:
> 2016-01-21 16:30 GMT+01:00 Les Mikesell :
>> V4 does it backwards from v3. The last backup is always filled and
>> the older ones are changed to reverse deltas. It must move/copy
>> things around to arrange
re changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But, I still
don't see why any expired already.
--
On Tue, Jan 19, 2016 at 10:10 AM, Gandalf Corvotempesta
wrote:
> 2016-01-19 17:01 GMT+01:00 Les Mikesell :
>> I'd bump up FullKeepCntMin and IncrKeepCntMin to the numbers you want
>> to see if that keeps them from being expired early. I always did that
>> with v3 too
lock
and to keep backups after decommissioning a host.
Also, since the convention for expiry parameters is
"FullKeepPeriod/FullKeepCnt" etc refer to *Filled* backups, and
"IncrKeepPeriod/IncrKeepCnt" refer to "Unfilled" backups if you change
the scheduling you may nee
On Mon, Jan 18, 2016 at 12:25 PM, Alexander Moisseev
wrote:
> On 18.01.16 19:53, Les Mikesell wrote:
>> Does anyone understand the docs at
>> http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
>> for $Conf{FillCycle} ? It looks like expiring is really based o
On Mon, Jan 18, 2016 at 10:16 AM, Gandalf Corvotempesta
wrote:
> 2016-01-18 17:07 GMT+01:00 Les Mikesell :
>> Why don't you put back the default settings to see they work as
>> expected?
>
> Because i'm using default settings except posted lines.
>
> I wont
y option to cause
the inode values to be read)
plus some overhead for sending the list to the server (and a lot more
if it fills RAM and swaps...).
--
Les Mikesell
lesmikes...@gmail.com
y I would just start over and only worry about extracting
anything from the old drive if you had to recover some older file.
--
Les Mikesell
lesmikes...@gmail.com
looks like the backup command is not using ssh - or anything to get
root privileges. It should only backup files that are readable by the
backuppc user.
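For the rsync method, for example, the stock v3 client command is roughly
this (check the default in your own config.pl):

    $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';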
--
Les Mikesell
lesmikes...@gmail.com
etting up ssh keys you'll need to do the same for the
restore command.
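That is, the restore side needs the same passwordless root ssh; for the
rsync method the v3 default is approximately:

    $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';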
--
Les Mikesell
lesmikes...@gmail.com
e path to the source
(in the pc/backup_number tree) and redirecting the output to the
destination file.
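If the tool meant here is BackupPC_zcat (a guess from context; the paths
and mangled names below are only illustrative and v3-specific), a single
file can be pulled out like this:

    # v3 stores files under pc/HOST/NUM with mangled names (leading 'f', URI-escaped)
    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat \
        /var/lib/backuppc/pc/myhost/123/f%2fetc/fhosts > /tmp/hosts.restored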
--
Les Mikesell
lesmikes...@gmail.com
test the ssh key setting you
have to be running as the backuppc user, not root or some other user.
Do an 'su -s /bin/bash - backuppc' first, then the ssh root@localhost
should work if your key setup is correct.
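In other words (whoami is only there to prove which account you reach):

    su -s /bin/bash - backuppc
    ssh root@localhost whoami      # should print "root" with no password prompt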
--
Les M
g
something like that where the file in question didn't exist in the
backup run selected but I can't recall the situation that let it
appear as one you could select in the web interface. If that was from
an incremental, can you pick the file from a full run to see if there
is a d
er id since smb mounts typically
just have one set of credentials.
--
Les Mikesell
lesmikes...@gmail.com
ach restore you have attempted - just like there is one for
each backup run.
--
Les Mikesell
lesmikes...@gmail.com
ded for shares from the linux box they might not be what you need
as a client.
In any case you can connect manually with smbclient to check what you
can access faster than waiting for a backup run to hit it.
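For example (share name and credentials made up):

    smbclient //winbox/c_share -U backupuser -c 'ls'   # lists the share top level with the same credentials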
--
Les Mikesell
lesmikes...@gmail.com
y of
a file and then subsequently used so the server side does not have to
uncompress and recompute them.
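In v3 that caching is normally switched on through the rsync
checksum-seed option in both argument lists, something like this (adjust
to whatever RsyncArgs you already have):

    # config.pl is perl, so appending to the existing lists works
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';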
--
Les Mikesell
lesmikes...@gmail.com
redentials, though. I'd recommend setting up rsync anyway because
it does a better job of tracking renames, changed files with old
timestamps, and deletions.
--
Les Mikesell
lesmikes...@gmail.com
should be logs of
each restore attempt. Those may tell you why it failed. If you
aren't using ssh as root for the connection like you would for other
hosts you may not have permission to write.
--
Les Mikesell
- and a speedup from less compression
would make sense in that case. But, that amount of time seems
extreme.
--
Les Mikesell
lesmikes...@gmail.com
r (same partitions used by
> BPC as storage):
>
> # dd if=/dev/zero of=/var/backups/test.img bs=1M count=1
> ^C8815+0 records in
> 8815+0 records out
> 9243197440 bytes (9.2 GB) copied, 142.267 s, 65.0 MB/s
That's not at all like what rsync would be doing when it merges
1-13 20:00 2016-01-13 19:53
> BackupPC_tarCreate failed
>
>
Are you using ssh to connect as root to localhost like you would another
client or have you set up some other mechanism to get the appropriate
privileges?
--
Les
and actually BPC is marking all
> backups as failed as soon as one file change during the rsync procedure.
Changed/removed files are not supposed to be fatal, but you may be the
first person trying to use v4 in an environment like that. It would
not
ckup that hasn't started yet. Did you do something here?
I was going to comment on that, but Adam is in a much better position
to help since he has run v4. Is there any chance you could have
started more than one instance of the backuppc server?
--
Les M
a
'partial'. Those would be discarded when a better one completes.
And one of your log entries mentioned a fatal error.
--
Les Mikesell
lesmikes...@gmail.com
On Mon, Jan 11, 2016 at 1:31 PM, Gandalf Corvotempesta
wrote:
> 2016-01-11 19:10 GMT+01:00 Les Mikesell :
>> Wild guess here, but 'host unknown' usually means something has done a
>> DNS lookup (or reverse, number to name) that has failed. DNS lookups
>> can be slo
'host unknown' usually means something has done a
DNS lookup (or reverse, number to name) that has failed. DNS lookups
can be slow. Maybe sticking the client hosts and IPs in your
/etc/hosts file would help if your reverse DN
issing.
> This new backup is still the #1. Where are the missing backups ?
>
Are you sure the full completely succeeded? If not it might have
been marked as a 'partial' which would be replaced by a subseq
On Mon, Jan 11, 2016 at 10:58 AM, Gandalf Corvotempesta
wrote:
> 2016-01-11 17:28 GMT+01:00 Les Mikesell :
>> I haven't used v4 but it sounds like you have some issue with your
>> network, rsync, or lack of resources. The main difference between a
>> full and increment
On Mon, Jan 11, 2016 at 8:45 AM, Gandalf Corvotempesta
wrote:
> 2016-01-11 15:38 GMT+01:00 Les Mikesell :
>> Also note that with backuppc it is generally better to run fulls more
>> frequently, since unlike other backup programs they don't take much
>> additional space.
unless your timing requirements are extremely
tight it is better to let the server do the scheduling to keep the
load even over the backup window. Generally with the default
approximate 24hr/1week timings, if you force a full run at the time
you want
be substantially buffered.
Otherwise you'll wait for the disk head to bounce around and always be
in the wrong place.
--
Les Mikesell
lesmikes...@gmail.com
itch to read-only.
--
Les Mikesell
lesmikes...@gmail.com
o read about how
> this works in any particular release/version?
On the 2nd full, the server-side copy is read (and uncompressed) to
compute the block checksums for the comparison. After that, the
cached values will be used except for the RsyncCsumCacheVerifyProb
percentage. The files ar
r
incrementals. That has the down side of not tracking deletions and
possibly missing files/directories that are renamed or created in a
way that preserves old timestamps (like unpacking zip files, etc.).
--
Les Mikesell
lf but would try it if I needed a new
setup.
--
Les Mikesell
lesmikes...@gmail.com
who use it don't report to the lists. Or hardly anyone uses it.
>
I don't think a lot of people have tried it - partly because v3 works
so well. But, there have not been a lot of issues reported to the
list. I think I'd trust it if I were setting up a new system -
Craig i
the old version you can delete the old host.
--
Les Mikesell
lesmikes...@gmail.com
> This one – the failing one – is always listing "pool" or "create" maybe,
> but not a single time do I see a "same".
I'm not sure if the xfer always has the same ordering - maybe it is
just hitting different files first. Are you seeing files logged that
alre
a
complete comparison is done (partials quickly skip files where the
timestamp and length match). So, it is still going to take the time
to read those files even though only differences will be transmitted.
At the end of the next failing partial,
On Wed, Dec 2, 2015 at 5:20 PM, martin f krafft wrote:
> also sprach Les Mikesell [2015-12-03 12:08 +1300]:
>> However, you might want to consider running an offsite instance of
>> backuppc to back up the same targets directly, using a vpn for the
>> connection if necessa
you a web interface to
trigger the generation of tar archives. If you want full automation
you are better off using cron with the command line tools.
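A minimal sketch of that (host, share, and paths are placeholders, and
the BackupPC_tarCreate location depends on the install):

    # /etc/cron.d/backuppc-archive: weekly tar of the newest backup of 'myhost'
    0 3 * * 0  backuppc  /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s /home . | gzip > /archive/myhost.tar.gz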
--
Les Mikesell
lesmikes...@gmail.com
pPC_tarCreate to generate the
archive and pipe it through whatever else you need. However, you
might want to consider running an offsite instance of backuppc to back
up the same targets directly, using a vpn for the connection if
necessary. Both the storage and transfer would be much more eff
smb/tar xfers, the files taken on the next
incremental will be based on the file timestamp so changes like copies
that preserve an old timestamp, zip extraction, etc. can be missed.
With rsync/rsyncd the contents of the files are checked so any change
should be detected. It won't make a 2nd t
_CGIDIR__/BackupPC_Admin
> -swxr-x---1 __BACKUPPCUSER__ web 82406 Jun 17 22:58
> __CGIDIR__/BackupPC_Admin
>
Did you use the debian deb package to install this? Otherwise those
__ names look like you might have used the sourceforge version without
running the installer.
you create the zipfile with no compression it should be readable.
At least some versions of zip had size limits that made it not very
suitable for backups, so I've always used tar format even when the
target is for
king in /var/spool/cron/ but they should be modified using
'crontab -e' as that user. Also, cron jobs are usually logged under
/var/log/cron - looking through the log might help see what the server
is doing.
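For example (the log location is distro-dependent):

    crontab -u backuppc -l     # inspect that user's crontab; use -e to edit it
    tail /var/log/cron         # recent cron activity; /var/log/syslog on Debian/Ubuntu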
--
Les Mikesell
lesmikes...@gmail.com
to the list in the past. You might start by looking at
additional cron jobs.
--
Les Mikesell
lesmikes...@gmail.com
would happen if it
doesn't during backups, though. If you need a brute-force
workaround, try using BackupPC_tarCreate to create a tar image (or
download through a web browser) and feed it to tar to do the restore.
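Roughly like this (host, share, and target directory are examples, and
the target directory has to exist first):

    /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s /etc . \
        | ssh root@myhost 'tar -xpvf - -C /tmp/restore'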
--
Les Mikesell
lesmikes..
existing files, rsync will make a hidden new
copy to merge in the changes needing more space than you would expect.
Also if there are sparse files in the backup they may take more
space - and more time - to restore.
--
Les Mikesell
ou'll probably also have to do something about the ping command or
somehow make nmblookup work too. I can't help much more than that
since I don't have access to machines running backup
backing up if it is broken
and you are depending on its own instance of backuppc to do the work.
As for administering backuppc, after the initial setup you can do
most of the management through the web interface w
't maintain that. For a small archive
you may be able to ' rsync -H' the entire archive directory, but this
is likely to fail as the backups grow. The best approach is to
simply run backuppc on a different host and let it make the b
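For a small pool that would be something like the following (source path
varies by distro, destination is an example), but the hard-link count is
what eventually makes it impractical:

    rsync -aH --delete /var/lib/backuppc/ /mnt/offsite/backuppc/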
't see why that
would be different when starting from the server command line vs the
web interface but maybe there is something different about the ssh key
that is used. In any case, the issue seems to be something about
your shell setup rather than in backuppc itself.
--
Les Mikesell
lesm
art I
don't understand is why the web interface is invoking the user's
shell. It is like it is using system() instead of exec() to run the
command at some point. Setting backuppc's shell to /bin/bash would
probably fix it, but I think something else is wrong.
--
you'll see that
it comes from /sbin/nologin. I don't quite understand why it is
running when you start a backup from the web interface but that is
your problem.
uld run the browser on
the backuppc host with remote access.
It's not necessary to use these approaches since you should be able to
configure apache to allow access, but sometimes they are better or
mo
consecutive times, it will
> not be backed up from 8:00 to 18:00 on Mon, Tue, Wed, Thu, Fri."
>
> Will read up some more on the ping command too.
>
> Thanks for the hints!
Note that the next full is going to be based off the time of the last
one.
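For reference, the quoted window corresponds to a config.pl entry along
these lines (weekDays 1-5 are Mon-Fri):

    $Conf{BlackoutPeriods} = [
        { hourBegin => 8.0, hourEnd => 18.0, weekDays => [1, 2, 3, 4, 5] },
    ];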
On Tue, Sep 22, 2015 at 5:48 PM, Timothy Murphy wrote:
> On Tuesday 22 September 2015 14:30:34 Les Mikesell wrote:
>> On Tue, Sep 22, 2015 at 2:20 PM, Brad Alexander wrote:
>
>> I didn't mean there was a problem running 2.4, just that if you put
>> 'Require Lo
On Tue, Sep 22, 2015 at 3:26 PM, Holger Parplies wrote:
> Hi,
>
> Les Mikesell wrote on 2015-09-22 14:30:34 -0500 [Re: [BackupPC-users]
> Forbidden You don't have permission to access /BackupPC on this server.]:
>> On Tue, Sep 22, 2015 at 2:20 PM, Brad Alexander wrote:
just that if you put
'Require Local'
which I believe was in the other config posted, clients other than
from the same host as the server will be denied access.
http://httpd.apache.org/docs/2.4/mod/mod_authz_host.html
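With Apache 2.4 the CGI block typically ends up like this; the subnet is
just an example of what you would add to let other clients in:

    <Location /BackupPC>
        # either line matching grants access (implicit RequireAny)
        Require local
        Require ip 192.168.1.0/24
    </Location>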
--
Les Mikesell
lesmikes...@gmail.com
rom remote clients. But, I don't have systems to
test on anymore.
--
Les Mikesell
lesmikes...@gmail.com
dex.html) found, and server-generated directory index
> forbidden by Options directive" and I do not have an index.html file in
> /var/www/html/ but rather all the files that make up the "BackupPC" webpage.
> (*.gif, *.css, *.png, etc.)
Your DocumentRoot is set to /var/ww
p
timeouts from happening, though.
> How can this timeout be changed? I did not see any option for this
> when I setup the firewall rules for the rsync connection. Also I remember
> testing this once with the firewall on the client shutoff, with the same
> results.
I don't understa
ts welcomed.
>
That scenario is pretty common when there is a nat gateway involved,
but a stateful host firewall could also time out the connection and
start blocking after some amount of idle time.
--
Les Mikesell
lesmikes...@gmail.com
to give that a go and see what happens.
That should work, but note that the space won't be released until a
backuppc_nightly run removes all the pooled files that will then only
have one link in the
On Tue, Jul 14, 2015 at 8:29 PM, wrote:
> Les Mikesell wrote at about 17:39:02 -0500 on Tuesday, July 14, 2015:
> > On Tue, Jul 14, 2015 at 5:31 AM, Jürgen Depicker
> wrote:
> > > Hello,
> > >
> > >
> > >
> > > I wonder what is bes
on?
I think that is a fairly unlikely scenario unless it is an external
drive being optionally automounted. When I've seen drive failures on
running systems, the filesystem goes read-only or causes errors
instead of being unmounted, and on boot, failing to mount a partition
listed in fsta
duplicate copy of the current
checkout and it is next to impossible to ever remove old revisions
from the central repository history so it can grow very large over
time.
--
Les Mikesell
rsion for this instead? You would need to have a process to
ensure that the live system was always updated from a tagged/known
revision, but then updating all the development workspaces to stay in
sync becomes easier and more efficient.
--
Les
upPC_tarCreate's wildcard concepts but at the
expense of a lot of overhead you could let backuppc generate a tar of
the whole top-level directory (like your first command above) and
specify the '*.pdf' selection to the extracting tar to get what you
want.
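Concretely, something along these lines (host, share, and paths invented;
--wildcards is GNU tar):

    /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s /data . \
        | tar -xvf - --wildcards '*.pdf' -C /tmp/pdfs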
--
Les Mikesell
tead of on top of an existing tree. What's the big
picture here? That is, why not just let people restore files directly
from backuppc as needed? Are you trying to emulate a version control
system with the ability to do diffs, etc.?
--
Les Mikesell
lesmikes...@gmail.com
ut it looks like that isn't
happening in this situation. But, unless you care about the
bandwidth used, you could just script something around the command
line BackupPC_tarCreate tool piped though ssh to a remote extract
instead of using rys
ser configuration (although
on windows this can also come from the system configuration).
--
Les Mikesell
lesmikes...@gmail.com
ld work
either. Are you sure it isn't just a browser proxy setting? You
might be able to fix that with an exception in the proxy
configuration.
--
Les Mikesell
lesmikes...@gmail.com
he dhcp-connected client
will work even if automatic runs can't find it. I think it tries to
connect to the IP of the web client, at least if nmblookup isn't
finding it. Is your web browser using a proxy that would make it
appear to be some other address?
--
Les Mikesell
On Thu, May 14, 2015 at 12:17 PM, wrote:
> On Thu, May 14, 2015 11:46 am, Les Mikesell wrote:
> ...
>> So far the only thing we know for sure is that you get a sigpipe - and
>> that just means that the other end of the connection has exited or the
>> tcp connection is bro
st the files as updated on every run, but that may have
been where the destination was a FAT filesystem and I don't know if it
can happen in backuppc. But you should be able to tell by looking at
the timestamps on those files whether they should have been i
Maybe you have odd timestamps/ownership or
something that the server can't match.
> Note the summary page for this client says backup #11 is type full, yet the
> XferLOG says it is an incr.
No idea about that.
--
Les Mikesell
lesmikes...@gmail.com
ave a timeout on idle
connections.
--
Les Mikesell
lesmikes...@gmail.com
is
reconstructed and saved on the server and only files with exactly
identical content can be pooled. Database and VM images generally
need special handling to make sure they don't change while being
copied, so you may want to exclude them from the backuppc run a
>
Storing more copies shouldn't be a problem since all files with
identical content are pooled. If you don't save all the incrementals
you will increase the chance of losing messages that were accidentally
deleted between runs.
--
Les Mikesell
for incrementals, making the next ones more efficient.
--
Les Mikesell
lesmikes...@gmail.com
possible to configure this?
There is no way to specify different blackouts for fulls vs.
incrementals. However fulls won't happen until the FullPeriod time
has expired since the last one, so if you force one at an appropriate
time, subsequent runs will happen at about the same time a week l
vantage in that any new install of OSX
already knows how to restore from it. Backuppc gives you an
approximate equivalent of a tar image but you need a working
installation to read it.
--
Les Mikesell
lesmikes...@gmail.com
hard drives.
Doing your own install instead using a packaged version lets you pick
the location for all of the components so you can make sure everything
involved has separate copies on each drive. And then you'll need
some sort of script to start it up - and maybe you can find a way t
ill restore the
whole system up to the backup point, then use backuppc to drop in the
more recent changes.
--
Les Mikesell
lesmikes...@gmail.com
han about anything else, it won't be a great way
to restore the OS or its own working parts on the host machine. That
is, the storage format is compressed and you'll need a working
instance of backuppc to restore fr
tion expects it before installing the package.
Then you don't have to change or move anything. On a Centos system
that would be /var/lib/BackupPC. Also, when using external drives,
be sure you have reformatted them with a linux filesystem type that
supports hard links.
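For instance (the device name is an example, and mkfs wipes the drive):

    mkfs.ext4 /dev/sdb1                 # any native linux filesystem with hard links is fine
    mount /dev/sdb1 /var/lib/BackupPC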
--
Les Mikesell
l
don't think you
can do any other way is to be able to skew the full runs of different
sections to different days. Fulls are the slowest operation because
even with rsync and checksum-seed, the target side has to do a full
re
> tired at the time [Embarassed]
>
That doesn't really explain a 'no ping response' error message. That
should have been an 'unable to transfer...' or something like that.
--
Les Mikesell
lesmikes...@gmail.com
h the server and client can ping each other via IP address and hostname.
>
In that case you either have a typo in the name in the backuppc
config, or when backuppc expands the configured pingPath and PingCmd
it is doing something different than what you are testing.
--
Les Mikesell
this with an IP address in ClientNameAlias.
But if that isn't the case, what is the PingCmd in the host
configuration and what happens if you execute approximately the same
thing from the command line.
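The v3 default is roughly the following, so the manual test is just its
expanded form (the client name here is an example):

    $Conf{PingCmd} = '$pingPath -c 1 $host';
    # expanded by hand:
    /bin/ping -c 1 clientpc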
--
Les Mikesell
lesmikes...@gmail.com
eekly timing should keep that
schedule as long as your system is up every day.
--
Les Mikesell
lesmikes...@gmail.com
On Tue, Mar 3, 2015 at 4:53 PM, Marko Doda wrote:
> I accidentally saved the credentials for rsync, isn't it better to turn off
> the save password dialog for rsync credentials?
That part is done by the client browser - which should also have a way
to clear saved passwords.
--
L