Re: [BackupPC-users] Getting lost in installation following documentation

2022-02-23 Thread Johan Ehnberg

Hello Christian,

If you prefer not to install manually, the BackupPC Wiki has an 
installer script that works on Debian-based distributions:


https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu

The script source may also help you identify where you got 
stuck when installing manually.
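
For reference, the manual route boils down to running configure.pl from 
the unpacked tarball; a minimal sketch (the version number is an 
example, the flags are the same ones used for upgrades later in this 
list):

  $ tar xzf BackupPC-4.4.0.tar.gz
  $ cd BackupPC-4.4.0
  $ sudo perl configure.pl --batch --config-path /etc/BackupPC/config.pl

That also answers where config.pl ends up: __CONFDIR__ in the manual 
refers to the directory you give as --config-path (here /etc/BackupPC).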


Another option is to build your own packages or download premade ones:

https://github.com/backuppc/backuppc/wiki/Build-Your-Own-Packages

Best regards,

Johan Ehnberg


On 23/02/2022 13.39, Christian Möller wrote:

Hello,

I've tried to install BackupPC v4 following your documentation at

https://backuppc.github.io/backuppc/BackupPC.html

on a Debian 11 machine.

Starting with "Step 1"

https://backuppc.github.io/backuppc/BackupPC.html#Step-1:-Getting-BackupPC 



everything works fine, both packages "backuppc" and "rsync-bpc" get
installed smoothly (as far as I can tell).
So according to the recommendation, I skip to Step 3 ... which gets me
lost in transition, because:

https://backuppc.github.io/backuppc/BackupPC.html#Step-3:-Setting-up-config.pl 



states

    "After running configure.pl, browse through the config file,
__CONFDIR__/config.pl [...]"

Well, which "configure.pl" to ran (and when)? What is "__CONFDIR__"? The
same as with v3, so "/etc/backuppc"? I recognized that this folder is
created freshly during installation (be aware of the file's timestamps),
but contains NO "config.pl" file:

  $ ls -al /etc/backuppc
  total 20
  drwxr-xr-x   2 backuppc www-data  4096 Feb 23 11:42 .
  drwxr-xr-x 130 root root 12288 Feb 23 11:57 ..
  -rw-r-----   1 backuppc www-data    47 Feb 23 11:42 htpasswd
  lrwxrwxrwx   1 root root    13 Feb 23 11:42 pc -> /etc/backuppc

So my attempt stops at this point unfinished.

Any help appreciated. Thanks.

Christian


PS: A little bit of background info: I've been using BackupPC v3 for many
years now. Prior to my attempt to install v4, I uninstalled v3 and
moved the "old" folder /etc/backuppc away by renaming it.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/

--
*Johan Ehnberg*

Founder, CEO

Molnix Oy


jo...@molnix.com

+358 50 320 96 88

molnix.com <https://molnix.com>


/The contents of this e-mail and its attachments are for the use of the 
intended recipient only, and are confidential and may contain legally 
privileged information. If you are not the intended recipient or have 
otherwise received the e-mail in error, please notify the sender by 
replying to this e-mail immediately and then delete it immediately from 
your system. Any dissemination, distribution, copying or use of this 
communication without prior and explicit permission of the sender is 
strictly prohibited./
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Migrate to new server

2021-04-05 Thread Johan Ehnberg

Hello Alan,

There are plenty of previous threads on the topic that you can dig through.

In short, if the installation has no legacy backups from BackupPC 3 
(which were hard-linked), just copy the whole data folder and host 
config (possibly all configs) over to the new installation.
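
As a rough sketch of that copy (paths are typical Debian defaults and 
vary by install method; stop BackupPC on both ends first; -H to 
preserve hard links is only needed if v3 backups remain):

  $ sudo rsync -aH /var/lib/backuppc/ newserver:/var/lib/backuppc/
  $ sudo rsync -a /etc/backuppc/ newserver:/etc/backuppc/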


If that is not the case, have a look at this if it fits your needs:

https://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

Best regards,

Johan Ehnberg


On 5.4.2021 6.07, Alan Taylor wrote:

Hello,

I am running BackupPC 4.3 on a Debian server; the actual physical data 
store is about 360 GB.


I would like to build a new server.
My preference is to copy the 360 GB data store to an external drive, build 
a new server (same box; new drives, CPU, memory) and then import the 
old data store from the external hard drive, hopefully keeping my old 
backups and “history”.


Is this possible, any suggestions or comments ?

BRgds/Alan


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Checking rsync progress/speed/status

2021-01-12 Thread Johan Ehnberg


On 13.1.2021 0.58, Adam Goryachev via BackupPC-users wrote:


On 13/1/21 09:21, Les Mikesell wrote:
On Tue, Jan 12, 2021 at 4:15 PM Greg Harris 
 wrote:
Yeah, that “if you can interpret it” part gets really hard when it 
looks like:


select(7, [6], [], [6], {tv_sec=60, tv_usec=0}) = 1 (in [6], left 
{tv_sec=59, tv_usec=99})
read(6, 
"\0\200\0\0\4\200\0\7\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
32768) = 27748


Scrolling at 32756 lines in around 30 seconds.

That tells you it is not hung up.  You could grep some 'open's out of
the stream to see what files it is examining.  Sometimes the client
side will do a whole lot of reading before it finds something that
doesn't match what the server already has.


I tend to use something like:

strace -e open -p 

Also:

ls -l /proc//fd

Regards,
Adam



Here's another one to run on the host being backed up. It shows the file 
being backed up (read, not necessarily transferred) at any given time. 
You may want to tune the refresh rate; '0.1' is very rapid. '3r' (file 
descriptor 3, opened read-only) can in theory vary between implementations.


sudo watch -n0.1 'lsof -c rsync | grep 3r'


Best regards,
Johan



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Ubuntu upgrade to 4.4.0 not working

2020-12-13 Thread Johan Ehnberg

Hello George,

Well, since it is not a pristine install, I am guessing you will most 
likely need to change your Apache config manually.


The BackupPC manual is a good resource for that. For example, you can 
check how the previous installation was done; it may be enough just to 
add a symlink somewhere. One such fix would be redirecting /etc/backuppc 
from the packaged install to /etc/BackupPC, which is the upstream default 
used in the script.
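
A minimal sketch of that symlink (assuming the packaged config really 
lives in /etc/backuppc; keep the old directory as a backup):

  $ sudo mv /etc/backuppc /etc/backuppc.old
  $ sudo ln -s /etc/BackupPC /etc/backuppc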


Best regards,

Johan


On 14/12/2020 00.05, kingswindsor wrote:

Thanks Johan
That's really helpful.  It runs through to completion successfully but 
frustratingly I can't seem to access the web interface on 
http://backuppc/BackupPC_Admin


I've been using the backuppc username and have tried the old password and 
tried resetting it with

sudo htpasswd /etc/backuppc/htpasswd backuppc but without success.

I have tried an Apache restart and a reboot, but no difference. I get 
the impression I'm missing something obvious. Is there anything else 
to do to get access to the CGI?


Thanks
On 13 Dec 2020, at 12:41, Johan Ehnberg <jo...@molnix.com> wrote:


Hello George,

The script has now been fixed to work when started from any
current directory.

Can you please try again?

Best regards,

Johan Ehnberg




*From:* George King 
*Sent:* Friday, 11 December 2020 12:55
*To:* backuppc-users@lists.sourceforge.net
*Subject:* [BackupPC-users] Ubuntu upgrade to 4.4.0 not working

Hello, I have been using backuppc as my home backup solution
for years and think it is great, thanks.  I'm currently
running Ubuntu 18.04.05 LTS with Backuppc 3.3.1 (as per the
Ubuntu 18.04.05 LTS repository). I would like to upgrade
backuppc to 4.4.0.  I have tried the 'easy' approach
described at

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu
but when I do this it seems to install the packages ok but
then throws up the error chmod: cannot access
'/root/password': No such file or directory and it stops (or
finishes). Any advice to fix this would be gratefully
received.  I've searched the Mailing Lists and haven't come
up with anything similar, sorry if I've missed something
obvious.

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project:https://backuppc.github.io/backuppc/



BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project:https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Ubuntu upgrade to 4.4.0 not working

2020-12-13 Thread Johan Ehnberg

Hello George,

The script has now been fixed to work when started from any 
current directory.


Can you please try again?

Best regards,

Johan Ehnberg




*From:* George King 
*Sent:* Friday, 11 December 2020 12:55
*To:* backuppc-users@lists.sourceforge.net
*Subject:* [BackupPC-users] Ubuntu upgrade to 4.4.0 not working

Hello, I have been using backuppc as my home backup solution for
years and think it is great, thanks.  I'm currently running Ubuntu
18.04.05 LTS with Backuppc 3.3.1 (as per the Ubuntu 18.04.05 LTS
repository). I would like to upgrade backuppc to 4.4.0.  I have
tried the 'easy' approach described at

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu
but when I do this it seems to install the packages ok but then
throws up the error chmod: cannot access '/root/password': No such
file or directory and it stops (or finishes). Any advice to fix
this would be gratefully received.  I've searched the Mailing
Lists and haven't come up with anything similar, sorry if I've
missed something obvious.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-21 Thread Johan Ehnberg

Hi Kris,

Indeed the object storage transformation has not been a hot topic for 
some time. I explored this some years ago and did some proof of concept 
testing. It requires a fairly different architecture:


https://molnix.com/proposal-new-open-source-backup-solution/

Essentially, the object-storage-over-FUSE approach is not workable as-is 
from a performance standpoint, at least in almost any production 
scenario I can imagine.


The compute node needs to independently do at least the checksum 
matching, storage buffering or tiering, and object storage rate limiting.


With the FUSE approach you can get pretty close to that by putting only 
pool or cpool on FUSE and keeping pc local, plus some 
configuration tweaks. However, the object storage needs to be 
asynchronous to the backup for it to really make sense.


I believe the efforts to create FS-to-object layers to transform 
existing software to cloud concepts without actual code changes quieted 
down exactly because of these types of issues.


Best regards,

Johan


On 21/07/2020 02.37, Kris Lou via BackupPC-users wrote:
This hasn't been addressed for a while, and I didn't find anything in 
recent archives.


Anybody have any experience or hypothetical issues with writing the 
BPC4 Pool over s3fs-fuse to S3 or something similar?  Pros, Cons?


Thanks,
-Kris



Kris Lou
k...@themusiclink.net <mailto:k...@themusiclink.net>


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How To Upgrade 4.3.0 to 4.3.1

2019-07-15 Thread Johan Ehnberg

Hello Mark,

There are notes in the script for upgrading.

If anyone has the time, the upgrade parts could easily be made into a 
script of their own or, even better, an argument to the script.


Our orchestration uses a separate approach, but I might contribute that 
as well as an option if I find the time.


Best regards,

Johan Ehnberg


On 7/16/19 6:20 AM, Mark Wass wrote:

Hi Guys

Is there a How-To on upgrading from 4.3.0 to 4.3.1?

I'm running 4.3.0 on Ubuntu 18.04 LTS and I installed originally from 
the script on this page.


https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu

Thanks
Mark


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Fwd: Re: backuppc behind nginx which is on another host

2019-04-15 Thread Johan Ehnberg
I accidentally sent my reply just to Matthew. Here is a copy for future 
reference.




 Forwarded Message 
Subject: 	Re: [BackupPC-users] backuppc behind nginx which is on another 
host

Date:   Mon, 15 Apr 2019 12:25:51 +0300
From:   Johan Ehnberg 
To: Mathew Perry 



Hi Matthew,


I would recommend using the nginx server as a reverse proxy.


So, set up the backuppc server normally (i.e. with its own web server), 
and on nginx use something like the following if you will connect 
remotely over TLS with a Let's Encrypt certificate:



server {
  listen 80;
  server_name backuppc.example.com;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl http2;
  server_name backuppc.example.com;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 
EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;

  ssl_certificate /etc/letsencrypt/live/backuppc.example.com/fullchain.pem;
  ssl_certificate_key 
/etc/letsencrypt/live/backuppc.example.com/privkey.pem;

  ssl_session_cache shared:SSL:5m;
  ssl_session_timeout 1h;
  add_header Strict-Transport-Security max-age=15768000 always;
  location / {
    include /etc/nginx/proxy_params;
    proxy_pass http://10.0.0.2:80;
  }
}

Change domains and IP to taste.
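
After saving the config, the usual check-and-reload cycle applies:

  $ sudo nginx -t && sudo systemctl reload nginx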


Best regards,

Johan Ehnberg



On 4/15/19 11:19 AM, Mathew Perry wrote:


Hi

I'm using the latest backuppc and want to connect remotely to it. I have 
nginx on another host, not on the same host where backuppc is running.


So I'm struggling to get the nginx config working to connect to the 
backuppc host. The configs on the internet all assume that 
backuppc and nginx are on the same host.





___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project:http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Cannot ssh-copy-id

2019-01-21 Thread Johan Ehnberg

Hi Bob,

Thanks for the feedback!

The symptoms you are getting are due to how I designed the script with 
backuppc being a system user without shell etc. As such, it is not 
related to BackupPC version - you can make BackupPC 4 work the way you 
are used to.


There are many options for you to support your routines, here are a few:

- You can do 'sudo -u backuppc ssh-copy-id user@targethost' as it is 
with the current setup


- You can change the shell for backuppc with 'sudo chsh backuppc' (see 
the sketch below)

- You can reinstall fresh and choose another style for the backuppc user 
(change the adduser command by removing --shell /bin/false and --system)
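
For example, minimal sketches of the first two options (the shell path 
in the second is an assumption):

  $ sudo -u backuppc ssh-copy-id user@targethost

or, to get the familiar 'su - backuppc' behaviour back:

  $ sudo chsh -s /bin/bash backuppc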


If you have a lot of hosts, it may make sense to use orchestration tools 
that manage ssh keys instead.


Hope this helps,

Johan


On 1/21/19 6:49 PM, Robert Wooden wrote:
Just rebuilt an Ubuntu 18.04 with the 
"Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu" script that Mr. 
Ehnberg recently updated.


When I try to ssh-copy-id, "su - backuppc" does not work. When I "su - 
backuppc -s /bin/bash" I get a "backuppc@[myhostname]:~$" command 
prompt. That is, I am getting a username@hostname command prompt rather 
than just a "$".


In the days of version 3 I always used "su - backuppc" from root and 
it always gave me a "$" prompt, and so I got used to the way 
"ssh-copy-id" worked.


Now, with version 4, this required activity is different and troublesome.

What is the proper ssh-copy-id command process used by other people 
who have used the Ehnberg script to setup BackupPC to copy the ssh 
keys to the clients?


--
Thank you.
Bob Wooden


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Install 4.3.0 on ubuntu 18.04

2019-01-19 Thread Johan Ehnberg

Hi Mike,

Make sure you have the package libacl1-dev installed. This is a new 
requirement and was added to the script only a week or so ago.


Best regards,

Johan Ehnberg



On 1/18/19 11:38 AM, megaram networks wrote:


Hello,

i am trying to install BackupPC 4.3.0 on Ubuntu 18.04 LTS .

Every time I start the script from GitHub, it stops with “unable to 
locate libacl.devel”.


I thought I had all the necessary repositories, but it looks like I am 
missing one.


Can anyone help me here please ?

Kind regards

Mike



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncSshArgs fail when separated, work when combined

2019-01-16 Thread Johan Ehnberg
I agree that the space escaping of the rsync -e part looks like the 
issue. However, if that is indeed the case, I am surprised no one has 
stumbled on this before.


The RsyncSshArgs are added to the rsyncArgs like other parts. Here's the 
relevant code snippet in lib/BackupPC/Xfer/Rsync.pm:124 and :343


    unshift(@$rsyncArgs, "--rsync-path=$conf->{RsyncClientPath}")
    if ( $conf->{RsyncClientPath} ne "" );
    unshift(@$rsyncArgs, @{$conf->{RsyncSshArgs}})
    if ( ref($conf->{RsyncSshArgs}) eq 'ARRAY' );

    unshift(@$rsyncArgs, "--rsync-path=$conf->{RsyncClientPath}")
    if ( $conf->{RsyncClientPath} ne "" );
    unshift(@$rsyncArgs, @{$conf->{RsyncSshArgs}})
    if ( ref($conf->{RsyncSshArgs}) eq 'ARRAY' );

As I see it,

1) -e as a separate element of the array is redundant, since calling the 
setting "SSH arguments" already implies -e in this context, unless there 
is some other way of passing it through rsync that makes a difference


2) Since -e indeed requires a single argument that is passed on to SSH, 
it may be clearer to quote it or to add automatic space escaping


3) As BackupPC per-client overrides work at the variable level rather 
than array element level, I am uncertain there is a point in using an 
array in the first place



How about having it work like this (again, I am no Perl coder and have 
not tested this code yet; this is just to illustrate):


    unshift(@$rsyncArgs, "--rsh=\"$conf->{RsyncSshArgs}\"")
    if ( $conf->{RsyncSshArgs} ne "" );

And changing the RsyncSshArgs UI elements and configs' format 
respectively. This breaks existing configs unless there is a migration 
helper of course, so keeping the array may make sense for that reason. 
That would also limit the fix to the lines above, just adding a bit more 
mangling instead.



Thinking of it further, this actually implies a potentially serious data 
loss scenario in cases where rsync and ssh arguments coincide with 
different functions. Take for example -C that I used below. Passed to 
SSH, it adds compression. But passed to rsync, it excludes files (short 
for --cvs-exclude). I was preliminarily able to confirm this, and it 
also explains why the errors I was getting were related to -c rather 
than -C which came before.



Best regards,

Johan


On 1/15/19 8:23 PM, Jan Stransky wrote:

My guess is that the -e parameter needs all ssh parameters included as
a single value. That is what the escaped spaces IMHO do. E.g. the whole
"/usr/bin/ssh\ -l\ ubuntu\ -C\ -c\ aes256-...@openssh.com" is a single
value for -e.

It might be a feature of rsync_bpc. I am not sure how this works
with regular rsync. I certainly use very different syntax with normal
rsync :-)

If this is true, then for the clearer config you are asking for, a new
variable $sshArgs or so might be considered?
Cheers,
Jan

On 1/15/19 10:18 AM, Johan Ehnberg wrote:

Yes,

Multiple lines:

Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name test.i.molnix.com --bpc-share-name / --bpc-bkup-num 1 
--bpc-bkup-comp 0 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 75573 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ 
ubuntu -C -c\ aes256-...@openssh.com --rsync-path=sudo\ /usr/bin/rsync --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --delete-excluded --partial --log-format=log:\ 
%o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --timeout=72000 --exclude=/proc 
test.i.molnix.com:/ /
incr backup started for directory /
Xfer PIDs are now 17341
This is the rsync child about to exec /usr/local/bin/rsync_bpc
rsync_bpc: -c aes256-...@openssh.com: unknown option

Combined line, (note also the space escaping around the concerned
arguments):

Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name test.i.molnix.com --bpc-share-name / --bpc-bkup-num 4 
--bpc-bkup-comp 0 --bpc-bkup-prevnum 3 --bpc-bkup-prevcomp 0 --bpc-bkup-inode0 
75725 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ ubuntu\ -C\ -c\ 
aes256-...@openssh.com --rsync-path=sudo\ /usr/bin/rsync --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --delete-excluded --partial --log-format=log:\ %o\ %i\ 
%B\ %8U,%8G\ %9l\ %f%L --stats --timeout=72000 --exclude=/proc 
test.i.molnix.com:/ /
incr backup started for directory /
Xfer PIDs are now 17737
This is the rsync child about to exec /usr/local/bin/rsync_bpc
Xfer PIDs are now 17737,17739

Best regards,
Johan

On 1/15/19 10:42 AM, Jan Stransky wrote:

In the logs, you can see actual commands issued. Have you compared those?

Cheers,

Jan

On 14/01/2019 10:17, Johan Ehnberg wrote:

Re: [BackupPC-users] RsyncSshArgs fail when separated, work when combined

2019-01-15 Thread Johan Ehnberg

Yes,

Multiple lines:

Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name test.i.molnix.com --bpc-share-name / --bpc-bkup-num 1 
--bpc-bkup-comp 0 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 75573 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ 
ubuntu -C -c\ aes256-...@openssh.com --rsync-path=sudo\ /usr/bin/rsync --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --delete-excluded --partial --log-format=log:\ 
%o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --timeout=72000 --exclude=/proc 
test.i.molnix.com:/ /
incr backup started for directory /
Xfer PIDs are now 17341
This is the rsync child about to exec /usr/local/bin/rsync_bpc
rsync_bpc: -c aes256-...@openssh.com: unknown option

Combined line, (note also the space escaping around the concerned 
arguments):


Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name test.i.molnix.com --bpc-share-name / --bpc-bkup-num 4 
--bpc-bkup-comp 0 --bpc-bkup-prevnum 3 --bpc-bkup-prevcomp 0 --bpc-bkup-inode0 
75725 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ ubuntu\ -C\ -c\ 
aes256-...@openssh.com --rsync-path=sudo\ /usr/bin/rsync --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --delete-excluded --partial --log-format=log:\ %o\ %i\ 
%B\ %8U,%8G\ %9l\ %f%L --stats --timeout=72000 --exclude=/proc 
test.i.molnix.com:/ /
incr backup started for directory /
Xfer PIDs are now 17737
This is the rsync child about to exec /usr/local/bin/rsync_bpc
Xfer PIDs are now 17737,17739

Best regards,
Johan

On 1/15/19 10:42 AM, Jan Stransky wrote:


In the logs, you can see actual commands issued. Have you compared those?

Cheers,

Jan

On 14/01/2019 10:17, Johan Ehnberg wrote:


Hello,

I stumbled upon this weirdness when benchmarking offloadable SSH 
ciphers using the rsync transfer method.


In short, splitting up the RsyncSshArgs I use on multiple lines fails:

'-e',

'$sshPath -l ubuntu',

'-C',

'-c aes256-...@openssh.com'


and combining them on one line works (but may be a fluke):

'-e',

'$sshPath -l ubuntu -C -c aes256-...@openssh.com'


Errors vary between these:

No files dumped for share /

rsync error: syntax or usage error (code 1) at main.c(1572) [client=3.1.2.0]

The suspect is some escaping problem due to the @ sign since it 
indicates an array in Perl. However, escaping it (-c 
aes256-gcm\@openssh.com) does not help. I am not into Perl more than 
that. Any thoughts?


Best regards,

Johan Ehnberg





___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project:http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] RsyncSshArgs fail when separated, work when combined

2019-01-14 Thread Johan Ehnberg

Hello,

I stumbled upon this weirdness when benchmarking offloadable SSH ciphers 
using the rsync transfer method.


In short, splitting up the RsyncSshArgs I use on multiple lines fails:

'-e',

'$sshPath -l ubuntu',

'-C',

'-c aes256-...@openssh.com'


and combining them on one line works (but may be a fluke):

'-e',

'$sshPath -l ubuntu -C -c aes256-...@openssh.com'


Errors vary between these:

No files dumped for share /

rsync error: syntax or usage error (code 1) at main.c(1572) [client=3.1.2.0]

The suspect is some escaping problem due to the @ sign since it 
indicates an array in Perl. However, escaping it (-c 
aes256-gcm\@openssh.com) does not help. I am not into Perl more than 
that. Any thoughts?


Best regards,

Johan Ehnberg



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC v4 installation: 404 error

2019-01-08 Thread Johan Ehnberg

Case sensitiveness now noted in the documentation.

Best regards,

Johan

On 1/8/19 6:55 PM, Jan Stransky wrote:

So, it turned out to be a user error... I was surprised that the
address in the browser is case sensitive. Also, the move from /backuppc
to /BackupPC_Admin (between v3 and v4) was a bit surprising, and
honestly, I was a little bit puzzled about this in the documentation.

Cheers,

Jan

On 31/12/2018 09:32, Jan Stransky wrote:

Dear Craig,
on first attempt, I have just pulled this docker image:
https://github.com/adferrand/docker-backuppc
On the second attempt, I created an Ubuntu 18.04 container and used
the script you linked as root (there is normally no other user in
containers). Obviously I had to remove the "sudo" prefixes.
The result is the same. /var/www/html/BackupPC is not completely
empty; there are some png, gif, js and css files. Please see the complete
list below. Stdout of configure.pl is attached.
Jan
P.S. I successfully ran BackupPC v3 using Docker, and there are
apparently many people running the above-mentioned image.

000.gif  001.gif  0011001.gif  1001000.gif  1010001.gif
1100100.gif  1100111.gif  1101101.gif  111.gif  1110101.gif
000.gif  101.gif  BackupPC_retro_v2.css  favicon.ico
icon-hardlink.png  logo320.png
011.gif  0010001.gif  100.gif  1001100.gif  1011000.gif
1100101.gif  1101000.gif  1101110.gif  1110001.gif  1110110.gif
001.gif  110.gif  BackupPC_retro_v3.css  icon-dir.png
icon-symlink.png   sorttable.js
0001000.gif  0011000.gif  1000100.gif  101.gif  110.gif
1100110.gif  1101100.gif  110.gif  1110100.gif  1110111.gif
100.gif  111.gif  BackupPC_stnd.css  icon-file.png  logo.gif

On 12/31/18 3:02 AM, Craig Barratt via BackupPC-users wrote:

You'll need to be more explicit about what you installed and all the
steps you took.

Specifically, did you follow these instructions
<https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu>?
After you run configure.pl near the middle of the
script, what output does it generate? Can you confirm after it runs
that /var/www/html/BackupPC is empty?

Craig



On Mon, Dec 31, 2018 at 8:49 AM Jan Stransky
<jan.stransky.c...@gmail.com> wrote:

 Hi,
 I am trying to install BackupPC v4.3.0 from Github (more or less).
 I have tried to use docker image (adferrand/docker-backuppc), and ubuntu
 installation script from BPC Docs, and after the installation, I am
getting a 404 error from the web server. I have checked the web server
configuration, but the targeted folder
(/var/www/html/BackupPC) doesn't contain any index or other
HTML-like file.
 Is it a bug, or am I doing something wrong?
 Cheers,
 Jan


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 <mailto:BackupPC-users@lists.sourceforge.net>
 List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:    http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] some stupid questions about backuppc?

2018-10-15 Thread Johan Ehnberg

Hello Eero,


- Is encryption of backups nowdays supported?


Encryption in transit is supported by select protocols (e.g. rsync uses 
SSH).


Encryption at rest is supported by underlying filesystems or block layers.

There is no built-in encryption.
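
As an illustration of the block layer option, a hedged sketch with LUKS 
(device, mapping name and mount point are placeholders; luksFormat 
destroys existing data on the device):

  $ sudo cryptsetup luksFormat /dev/sdX1
  $ sudo cryptsetup open /dev/sdX1 bpc_data
  $ sudo mkfs.xfs /dev/mapper/bpc_data
  $ sudo mount /dev/mapper/bpc_data /var/lib/backuppc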



- Is s3 backend supported in any way?


In short, no. The long answer is that it would be not too hard to 
implement. For a proof of concept you might try mounting S3. However, 
BackupPC does not quite have a buffer suitable for object storage, so 
the result would be very slow backups at present. Additionally, some S3 
implementations follow the eventual consistency paradigm, which 
requires even more intelligence on the compute node.



- How about replication of backuppc to another host? Is rsync the 
best way? (my backuppc is version 4.2.1)


There are several options for this. Search the mailing list for details. 
A short list is:


- zfs/btrfs incrementals

- rsync with BackupPC v4 backups only (see the sketch below)

- DRBD

- parallel BackupPC server
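
As a sketch of the rsync option (paths assumed; run while no backups 
are in progress):

  $ sudo rsync -a --delete /var/lib/backuppc/ replica:/var/lib/backuppc/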



- What is the best way to implement offline backups?


With BackupPC being a network backup system, the only option for it to 
work offline is to install it locally. Or, if you meant cold/air 
gapped/archival backups, the list of options is quite long.




br,
Eero


Hope this helps,

Johan




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size requirements?

2018-08-24 Thread Johan Ehnberg




On 08/24/2018 04:52 PM, Mike Hughes wrote:
I think I’ve discovered a new level of failure. It started off with 
these errors when attempting to rsync larger files:


rsync_bpc: failed to open 
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: 
No space left on device (28)


rsync_bpc: mkstemp 
"/home/localuser/mysql/.hostname-srv-sql.our_database.sql.00" 
failed: No space left on device (28)


Now all backups are failing and I see these in the Xfer Error logs:

BackupPC_refCountUpdate: can't write new pool count file 
/var/lib/BackupPC//pc/hostname/22/refCnt/poolCntNew.1.a8


BackupPC_refCountUpdate: given errors, redoing host hostname #22 with 
fsck (reset errorCnt to 0)


bpc_poolRefFileWrite: can't open/create pool delta file name 
/var/lib/BackupPC//pc/hostname/22/refCnt/tpoolCntDelta_1_-1_0_70674 
(errno 28)


bpc_poolRefRequestFsck: can't open/create fsck request file 
/var/lib/BackupPC//pc/hostname/22/refCnt/needFsck70674 (errno 28)


Can't write new host pool count file 
/var/lib/BackupPC//pc/hostname/refCnt/poolCntNew.1.02


My guess is that the /var/lib/BackupPC/pc partition is the problem. I 
took advice from some rando on the interwebs [1] to put the pc folder on 
an ssd but perhaps it needs more headroom than suggested:


“… Split the storage up on SSD for the pc folder and something more cost 
efficient for cpool such as SMR drives. ...The pc folder in version 4 
essentially only contains the directory structures and references of 
files that they should contain, so it stays very small. However, it is 
often read from and speeds things up remarkably when it is served fast. 
Much more so than speeding up the cpool.”


The “pc” folder lives in “/var/lib/BackupPC/pc” on a local SSD with 5 GB 
of headroom available. The storage pools live on a platter with 70+ GB 
free. Am I short-changing the “pc” folder? Does it need to grow 
significantly during backup runs? If so, how much space is suggested?


Unfortunately my monitoring software (NewRelic Infrastructure) does not 
provide much granularity and has previously hidden similar spikes in 
usage so I don’t trust its reports, which does not show any capacity 
violations.


Thank you!

[1] - 
https://molnix.com/backuppc-version-4-development-allows-better-scaling/




Hi Mike,

Post author here, nice to hear it is of use!

I was intrigued by your report and decided to look at the Zabbix logs that 
follow the disk usage. I can see a change in disk usage of about 3% of 
the pc directory size that may be happening around the time of 
reference count runs. With more headroom on the storage, I have not run 
into the issue you mention, but it may indeed be as you suspect under the 
right conditions.


Can you check the size of the files that trigger the errors? What does 
df -h tell you during backups? If the failed refcount files linger on 
your drive, how big do they grow?


I will follow this thread with interest and hopefully I have the time to 
closely monitor what happens in the pc folder during reference counts.


Best regards,
Johan

--
Johan Ehnberg
Founder, CEO
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Which file system for data pool?

2018-08-14 Thread Johan Ehnberg




Regarding the location of the "pc" folder... I think I need to re-visit that since I created a 
symlink from /var/lib/BackupPC and moved the entire contents of that folder to my platters. It sounds like I 
need to undo that to keep the "pc" folder on the SSD. Would a better move be to create a symlink 
just to the "cpool" folder? ie:
ln -s /mnt/backup-volume/BackupPC/cpool /var/lib/BackupPC/cpool
and leave "pool" and "pc" on the SSD with the operating system?

Either way of linking works with the same results when the actual 'pc' 
folder is on SSD. However, I would keep both 'pool' and 'cpool' on the 
platters since they serve the same purpose (one when using compression, 
the other when not - usually only one of them is being used anyway).


Best regards,
Johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Which file system for data pool?

2018-08-14 Thread Johan Ehnberg



On 08/14/2018 02:44 PM, Tapio Lehtonen wrote:
I'm building a BackupPC host, with two SSD disks and two 4 TB rotating 
disks connected to LSI Logic / Symbios Logic MegaRAID SAS 2108 
[Liberator] (rev 05). Operating system is Debian GNU/Linux 9.5. The plan 
is to put OS on SSD disks with RAID1, and BackupPC data pool to rotating 
disk with RAID1. SSD disks are 240 GB each, I'm open to suggestions how 
to use part of them for cache or journal device or something.


That disk controller has reasonably performant RAID with battery backup, 
so I prefer using those features. Thus ZFS is not a good fit; my 
understanding is that ZFS should be used with plain host bus adapters.


I'm thinking XFS, so inode allocation is not a problem (previously I 
asked in this mailing list how to recover from out of inodes). What I 
read indicate XFS is equal or better than Ext4 for most features.


I could not find recent recommendations for file system used with 
BackupPC. Those old ones I found say ReiserFS is good, it probably is 
but not much maintained recently.


So, any recommendation for file system?




Hello Tapio,

I would also choose XFS in your case since ZFS is not a good option for you.
Furthermore, if you want to put the SSDs to good use, you can put the 
'pc' folder on them if the following conditions are met:


- You are running BackupPC 4
- You have no BackupPC 3 backups left (no hardlinks between pc and cpool 
or pool)

- $Conf{PoolV3Enabled} is off

That will give you a huge performance boost for indexing etc.
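
A minimal sketch of moving an existing 'pc' directory onto the SSD 
(paths and service name are assumptions; ownership must stay with the 
backuppc user):

  $ sudo systemctl stop backuppc
  $ sudo rsync -a /var/lib/backuppc/pc/ /ssd/backuppc/pc/
  $ sudo mv /var/lib/backuppc/pc /var/lib/backuppc/pc.old
  $ sudo ln -s /ssd/backuppc/pc /var/lib/backuppc/pc
  $ sudo systemctl start backuppc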

Best regards,
Johan Ehnberg

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] offsite server

2018-04-10 Thread Johan Ehnberg

On 04/10/2018 01:06 PM, Philip Parsons (Velindre - Medical Physics) wrote:

Dear list,

I’m sure this has been discussed previously, but I couldn’t see anything 
in recent archives that specifically related to v4 of BackupPC.


I am using BackupPC v4.1.5 to back up hospital data to an on-site 
server. We would, however, like to back up this data to another off-site 
server. Has anyone had any experience of this?


I was wondering if it would be a good idea to set up another instance of 
backuppc on the remote server, turn off all the backup functions, copy 
the config settings, rsync the pool and just have that instance as a 
restorative method (should something happen to our on-site copy).  Is 
this feasible?


I guess there are a number of ways that this could be achieved. The cpool 
size is currently approximately 10 TB. Off-site network speed is going 
to be pretty good (apologies for the vagueness here).


I’d be very interested in anyone’s thoughts, or experiences of setting 
up an off-site replication server with BackupPC v4.


Thanks,

Phil



Hi Phil,

Yes, this is quite feasible and straightforward to set up as long as you 
have control over the selection of filesystem.


Using a Copy-on-Write filesystem with snapshots allows you to very 
efficiently replicate backups from the main server offsite. No rsync 
needed, just 'zfs send' or equivalent tool for the selected filesystem. 
It will not have to search for and detect the differences between the 
repositories like rsync, instead it transfers the incremental changes 
from the filesystem since last snapshot. Since the filesystem already 
keeps track of these, the effort is minimal. You also have more control 
over the transfer data stream than with rsync (read: multithreaded high 
compression ratio algos, use accelerated VPN instead of SSH etc.).
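
A minimal sketch of the idea with ZFS (pool/dataset names, snapshot 
naming and the offsite host are assumptions; the previous snapshot must 
already exist on both sides):

  $ sudo zfs snapshot tank/backuppc@2018-04-10
  $ sudo zfs send -i tank/backuppc@2018-04-09 tank/backuppc@2018-04-10 | \
      ssh offsite sudo zfs receive tank/backuppc

Each run only transfers the blocks changed since the previous snapshot.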


The dual active server solution proposed in other responses has other 
advantages and is a very good option if the added traffic and burden on 
clients is acceptable.


Best regards,
Johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC vs ZFS compression

2018-01-24 Thread Johan Ehnberg


On 01/24/2018 07:42 PM, Patrik Janoušek wrote:

Hello,
I'd like to ask whether it is better to use compression in BackupPC or in
ZFS. I don't want to use dedup, so the only criterion is CPU
time vs. compression ratio.
I usually back up files like photos and docs.

--
Patrik


Hello Patrik,

ZFS gives you a lot of different compression options and it will always 
run in a separate process. This gives you more flexibility and better 
ratios for most cases. If you already have ZFS, go with that, since it 
is transparent whereas BackupPC compression is not.
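
For example, enabling compression on the pool dataset and checking the 
resulting ratio (dataset name assumed; BackupPC's own compression would 
then be left off via $Conf{CompressLevel} = 0):

  $ sudo zfs set compression=lz4 tank/backuppc
  $ sudo zfs get compressratio tank/backuppc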


That said, no algorithm will give you a compression ratio that makes 
compression sensible for most photo formats. ZFS will 
automatically skip compressing such parts of the data.


Best regards,
Johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] SCGI and BackupPC

2017-07-06 Thread Johan Ehnberg
Hi,

Try /BackupPC_Admin as the address.
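
With the SCGIMount line from your config below, that would be something 
like:

  http://10.10.10.32/BackupPC_Admin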

Best regards,
Johan Ehnberg

On Jul 7, 2017 9:07 AM, Akibu Flash wrote:
>
> All, I have installed BackupPC version 4.1.3 and have started it from the 
> command line using “systemctl start backuppc.service” on a Debian 9 system.  
> I am now attempting to setup the SCGI web interface.  I have done the 
> following:
>
>  
>
> In my backuppc config file, I have specified the SCGI Server Port:
> $Conf{SCGIServerPort} = ; 
>
> In my /etc/apache2/apache2.conf file, I have added the following lines:
>
>     LoadModule scgi_module modules/mod_scgi.so
>
> SCGIMount /BackupPC_Admin 127.0.0.1:
>
>  
>
> The scgi module appears to have been loaded as it is listed in the 
> mods-enabled directory under the /etc/apache2 directory.  However, when I try 
> and access backuppc in a web browser, I get the following error. 
>
> Not Found
>
> The requested URL /BackupPC was not found on this server.
>
> 
>
> Apache/2.4.25 (Debian) Server at 10.10.10.32 Port 80
>
>  
>
>  
>
> My apache2 server works fine: if I type the IP address of the server into 
> the web browser, I get confirmation that “It Works”. It is probably 
> something simple that I am missing. Do you have any suggestions? Thanks in 
> advance.
>
>  
>
> Akibu
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrate to new server

2017-06-14 Thread Johan Ehnberg

On 06/14/2017 06:08 PM, Tim Fora wrote:

Hi,

I have BackupPC-3.3.1-5.el6.x86_64 running on a VM (CentOS 6) using
external storage. We have a new storage box now running CentOS 7 with
local disks. I want to install BackupPC on the new box then migrate data
while both are running. What are some options to get this done?

Thanks,
Tim



Hi Tim,

There are three levels you can do this at while keeping both instances 
mostly running:


Block level: set up DRBD or similar.

Pool level: rsync your pool (slow) or send your filesystem (if supported).

Latest file level: use something like my script but sacrificing history:
https://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script

BackupPC does not have a facility to manage clustering though, so you 
will not be able to have both instances actively doing backups if you 
want to keep the pool.


If your clients are fast and your transfer method supports it, I would 
simply set up another instance that runs independently and in parallel, 
by only copying the configs to the new instance. Turn off automatic 
backups on the old and retire it according to your retention requirements.
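
A minimal sketch of that approach (paths assume an /etc/BackupPC
layout):

  # copy only the configuration to the new server
  rsync -a /etc/BackupPC/ newserver:/etc/BackupPC/

  # old server's config.pl: stop automatic backups, but keep manual
  # backups and the web UI available while the history ages out
  $Conf{BackupsDisable} = 1;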


Best regards,
Johan Ehnberg
--
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Upgrade from v4.0 to V4.1.3

2017-06-14 Thread Johan Ehnberg

On 06/14/2017 04:51 PM, Molly Hoffman wrote:
Can someone tell me the steps to upgrade from 4.0 to 4.1.3 on Ubuntu 
16.04.2?  Do I need to remove v4.0?  When I installed v4.0 I used the 
attached script. I am having some issues with v4.0 and want to upgrade.




Hi Molly,

I updated the wiki page with upgrading notes:

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS

Essentially, the relevant difference is:
configure.pl --batch --config-path /etc/BackupPC/config.pl

There are also many steps you can skip since they do not relate to 
the BackupPC version. See the comments for each script block.
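
Roughly, the in-place upgrade then boils down to this (assuming you
run it from the unpacked 4.1.3 source tree and your config lives in
/etc/BackupPC as per the wiki script):

  sudo systemctl stop backuppc
  sudo perl configure.pl --batch --config-path /etc/BackupPC/config.pl
  sudo systemctl start backuppc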


Best regards,
Johan Ehnberg

--
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] How to backup a laptop over internet

2017-05-28 Thread Johan Ehnberg

Hi Xuo,

>> What should I do to perform these backups from my server (located at
>> home) to my laptop located either on my local network or remotely (on
>> internet).

There are many approaches:

- Dynamic DNS
- VPN
- Reverse tunnel
- Mirror files on server with rsync or unison, backup server instead
- Use a cloud storage system, backup the cloud storage instead

>> I'm not in some hotels, ... but in another flat (that I rent), where I
>> have a static IP and where I could open some ports if necessary (ssh,
>> ...).

Additionally, in your case, you could simply set up two hosts, one for 
when connected locally, one for remotely.
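
As a sketch, that is simply two host entries whose per-host configs
point at different addresses via ClientNameAlias (names and addresses
hypothetical):

  # per-host config for "laptop-local", used when on the LAN
  $Conf{ClientNameAlias} = '192.168.1.50';

  # per-host config for "laptop-remote", used over the internet
  $Conf{ClientNameAlias} = 'laptop.example.dyndns.org';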


Best regards,
Johan Ehnberg

--
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Replication

2017-05-22 Thread Johan Ehnberg
Hi,

Indeed that is currently not implemented in BackupPC, since this would 
require all backups to stop. That in turn may not be feasible in a 
large environment or where backup schedules are tight. But as an 
optional feature, it is interesting.

Basically any previous backup will be coherent, so snapshotting at any 
time is fine when you are aware of this limitation.

Of course you can always check for running processes in a while loop and 
wait until there are no backups running.
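
For example, a crude sketch of such a wait loop before snapshotting
(dataset name hypothetical):

  # block until no BackupPC jobs are touching the pool, then snapshot
  while pgrep -u backuppc -f 'BackupPC_(dump|restore|link|nightly)' > /dev/null; do
      sleep 60
  done
  zfs snapshot tank/backuppc@replica-$(date +%Y%m%d)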

These are the kinds of details I might be interested in documenting.

Best regards,
Johan


On 05/22/2017 09:49 AM, Philippe Maladjian wrote:
> Hello,
> 
> Ok, but how to be sure not to replicate the datastore of backuppc when 
> this one realizes operations on this same datastore?
> 
> Philippe.
> 
> Le 19/05/2017 à 18:55, Johan Ehnberg a écrit :
>> Hi Philippe,
>>
>> Yes. Using for example ZFS or rsync with BackupPC 4 you can replicate
>> everything, even offsite. There are a few short threads on this from
>> before. Let us know if your datacenter has any specifics that this
>> should fit into.
>>
>> If there is broader demand, I'll consider writing some documentation on
>> how to set it up.
>>
>> Best regards,
>> Johan Ehnberg
>>
>>
>> On 05/19/2017 05:10 PM, Philippe Maladjian wrote:
>>> Hello,
>>>
>>> It's possible to replicate backuppc on datacenter ?
>>>
>>> Philippe.
>>>
> 
> 



Re: [BackupPC-users] Replication

2017-05-19 Thread Johan Ehnberg
Hi Philippe,

Yes. Using for example ZFS or rsync with BackupPC 4 you can replicate 
everything, even offsite. There are a few short threads on this from 
before. Let us know if your datacenter has any specifics that this 
should fit into.
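
As a rough sketch of the ZFS variant (dataset, snapshot and host
names hypothetical), incremental replication would look like:

  zfs snapshot tank/backuppc@today
  zfs send -i tank/backuppc@yesterday tank/backuppc@today | \
      ssh offsite.example.com zfs receive -F tank/backuppc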

If there is broader demand, I'll consider writing some documentation on 
how to set it up.

Best regards,
Johan Ehnberg


On 05/19/2017 05:10 PM, Philippe Maladjian wrote:
> Hello,
> 
> It's possible to replicate backuppc on datacenter ?
> 
> Philippe.
> 

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] BackupPC v4.1.1-Problem restoring backups

2017-04-27 Thread Johan Ehnberg
Hi Ib,

Another approach that I came to think of is to try to emulate the 
environment of your previous working setup. Ie. install BackupPC 3 using 
the same OS that you had before. VMs or containers are a good way to do 
this.

> I wonder how do these programs find the backups - for tarCreate for
> instance you nowhere give a path to the backups - so how does this work?

All of that comes from config.pl, under /etc/backuppc or /etc/BackupPC. 
The upgrade process also does some mangling of that file, so not 
everything is used as-is from a V3 install.

Best regards,
Johan



Re: [BackupPC-users] BackupPC v4.1.1-Problem restoring backups

2017-04-25 Thread Johan Ehnberg
Hi Ib,

This sounds more like a problem with BackupPC internals, if the 
backup number is indeed correct. Maybe someone more familiar with the 
code in question can help? Maybe something got corrupted in the pool or 
metadata as well (you mentioned data loss)?

Specifically, both manual and automatic jobs are failing while the web UI 
works; finding where the working and failing code paths diverge should 
help pinpoint the error.

Meanwhile, if you have the extra space, another route to take is to 
convert the pool. I mention extra space because I recommend keeping the 
original pool available and instead doing a partition copy of it. On 
this mailing list, there are a few notes about the new tool to convert 
the pool from V3 to V4 - I have not used it myself, though. My thinking 
here is that maybe V4 is better geared for restoring V4 pool files. If 
that is the case and you succeed this way, there is room for improvement 
in the compatibility layers for V3.

Let's see if more people pitch in here, and I'll be back with further 
ideas if I come up with any.

Best regards,
Johan


On 04/25/2017 10:11 AM, Ib H. Rasmussen wrote:
> Hi Johan,
>
> I tried your suggestion of using tarCreate, but with no luck - I keep
> getting the error "bad backup number" for host.
>
> In order to keep it simple, I decided to just make a list of the backup,
> and leave the tar-part out for a start.
>
> it's all run as the backuppc-user (backuppc)
>
> I entered: /usr/local/BackupPC/bin/BackupPC_tarCreate -h ihrsrv31 -n -0
> -s /datard1/documentation -l *
>
> and got the error "bad backup number -0 for host ihrsrv31".
>
> no matter what I enter as backup number - a direct number like 1189
> (which is a previous full V3 backup), or a relative number like -0 / -1
> / -2 as you suggest, I get the bad number error.
>
> Have I misunderstood something?
>
> Best Regards
>
> Ib H. Rasmussen
>
>
> On 04/24/2017 09:14 PM, Johan Ehnberg wrote:
>> Hi,
>>
>>> I can browse and restore single files via the browser, but as it
>>> concerns about 2TB of data it is some job!!
>> The tarCreate option can be a doable solution as a one-off since the
>> rsync route may require more time to set up but web restores work.
>> Basically it creates a tar that you can pipe back to your restore location:
>>
>> BackupPC_tarCreate -h YOURHOSTNAME -n -0 -s / / | tar -x -C /tmp
>>
>> Or, more elaborately for a remote host over a slow link running as
>> another user than backuppc and ensuring all file attributes can be set
>> when extracting, something like:
>>
>> sudo sudo -i -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h
>> YOURHOSTNAME -n -0 -s / / 2> /dev/null |pv -Cq -B 256 |plzip -1 |ssh
>> SOMEOTHERHOST -- 'plzip -d | sudo tar -x -C /tmp'
>>
>>
>>> About the syntax for executing the restore from the command-line i'm not
>>> quite sure, maybe you could elaborate a bit about that (If you still
>>> think it's worth while).
>> They are actually documented well in the file itself (it is perl so not
>> binary, i.e. you can read the file with 'less
>> /usr/share/backuppc/bin/BackupPC_restore').
>>
>> Let us know how this round goes.
>>
>> Best regards,
>> Johan
>>
>
>

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] BackupPC v4.1.1-Problem restoring backups

2017-04-24 Thread Johan Ehnberg
Hi,

> I can browse and restore single files via the browser, but as it
> concerns about 2TB of data it is some job!!

The tarCreate option is a doable one-off solution: the rsync route may 
require more time to set up, but web restores already work. Basically 
it creates a tar stream that you can pipe back to your restore location:

BackupPC_tarCreate -h YOURHOSTNAME -n -0 -s / / | tar -x -C /tmp

Or, more elaborately for a remote host over a slow link running as 
another user than backuppc and ensuring all file attributes can be set 
when extracting, something like:

sudo sudo -i -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h 
YOURHOSTNAME -n -0 -s / / 2> /dev/null |pv -Cq -B 256 |plzip -1 |ssh 
SOMEOTHERHOST -- 'plzip -d | sudo tar -x -C /tmp'


> About the syntax for executing the restore from the command-line i'm not
> quite sure, maybe you could elaborate a bit about that (If you still
> think it's worth while).

They are actually documented well in the file itself (it is perl so not 
binary, i.e. you can read the file with 'less 
/usr/share/backuppc/bin/BackupPC_restore').

Let us know how this round goes.

Best regards,
Johan

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] BackupPC v4.1.1-Problem restoring backups

2017-04-24 Thread Johan Ehnberg
Hi Ib,

I am assuming you are running a restore job from the web UI.

First, make sure that PoolV3Enabled is on since you are accessing a V3 
pool from a V4 installation. This is found under Server settings general 
parameters.

As a quick solution just to get access to the data or to do a manual 
restore, you can of course browse the web UI for the required files and 
download them through the browser. On the command line, you can use 
BackupPC_tarCreate for a faster approach directly on the server, 
especially when using large files. Do the files show up? Does this work 
or are you getting errors?

In order to get the proper restore functions working, can you please 
post your Rsync Paths/Commands/Args? Can you make a successful backup 
using those? If the backups are working, comparing the backup and 
restore args side by side is the essential next step.

If it is not working, do you still have the V3 equivalents for these 
settings at hand to compare against?

Also ensure that SSH keys are installed on the new server and that SSH 
is accepting the new host key (of the client) automatically or add it 
manually.
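
A quick way to verify that part is the classic test from the BackupPC
documentation, run as the backuppc user on the server:

  sudo -u backuppc ssh -l root CLIENTHOST whoami    # should print: root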

To debug the action itself, you can run BackupPC_restore on the command 
line on the server with the -v flag to get verbose output.

Post these details and we should be able to find out what is not working 
properly.

Best regards,
Johan Ehnberg


On 04/24/2017 11:21 AM, Ib H. Rasmussen wrote:
> I have just installed BackupPc v4.1.1 from Github on a new CentOS7 server.
>
> The backup-server is at the same time my file-server. Unfortunately I
> have lost a number of my data-directories, but I do still have several
> backup's from a previous BackupPC v3 installation.
>
> So my priority is to restore the backup's of the missing data. I'm using
> rsync (which was also used to originally backup the data), and the data
> directory is world writeable to rule-out any access-right problem.
>
> Selinux has been deactivated for the same reason.
>
> When restoring, I get the following error in the BackupPC LOG-file:
>
> 2017-04-20 11:38:30 User ihr requested restore to ihrsrv31-documentation
> (ihrsrv31-documentation)
> 2017-04-20 11:38:30 Started restore on ihrsrv31-documentation (pid=6542)
> 2017-04-20 11:38:32 Restore failed on ihrsrv31-documentation (rsync
> error: unexplained error (code 255) at io.c(629) [sender=3.0.9.6])
>
> BackupPC::XS and Rsync-bpc are both installed from Github
>
> How can I remedy this problem, and get my data back?
>
> Best Regards
>
> i...@tdcadsl.dk
>
>

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Backuppc 4.x for Ubuntu

2017-04-06 Thread Johan Ehnberg


On 04/06/2017 08:13 PM, David Williams wrote:
> I have just upgrade to Ubuntu 16.4.2 (I think that was the version,
> maybe it was 16.0.4).  Anyway, I don’t see a backuppc 4.x version to
> upgrade to.  I think the version that it was showing was 3.3.1.  Going
> off memory as I am not in front of the machine right now.
>
> So, am I missing a source to download from or there just isn’t a
> packaged version of 4.x for Ubuntu 16.x?
>

If installing from source is OK (i.e. not using packages) you can use this:

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS

Best regards,
Johan Ehnberg



Re: [BackupPC-users] Regarding "merges", am I correct?

2017-03-26 Thread Johan Ehnberg
Hi Bob,

You are essentially correct. It pulls the master branch, which is the 
latest development version. For example, as of this writing, master has 
seven commits that are not in v4_1_0.

For releases, there are now the tarballs. Craig added notes on how to 
use those in the script, too. If there is demand for it, I can develop 
the script to allow easy selection for development or stable.

Best regards,
Johan


On 03/26/2017 06:10 PM, Bob of Donelson Trophy wrote:
> Still getting use to github and how things happen here . . . am I
> correct that Johan's script is pulling "thee" latest version of BackupPC
> (currently v4.1.0, I think) with all the latest "merges" applied?
>
> --
>
> ___
>
> Bob Wooden of Donelson Trophy
>
>
>

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Issues with sendmail

2017-03-03 Thread Johan Ehnberg
On 03/03/2017 06:40 PM, David Williams wrote:
> All,
>
> I have recently installed ssmpt as it was needed to send emails through
> my smtp provider.  This works just fine but I noticed that BackupPC has
> stopped sending email.  I went through the following link
> (https://wiki.archlinux.org/index.php/SSMTP#Security) but still no joy.
>
> When I try to use the following I get the error on the last line.
> $ /usr/share/backuppc/bin/BackupPC_sendEmail -u
> dwilli...@dtw-consulting.com <mailto:dwilli...@dtw-consulting.com>
> Sending test email using /usr/sbin/sendmail -t -f BackupPC
> sendmail: 550 5.7.60 SMTP; Client does not have permissions to send as
> this sender
>
>
> However, still as backuppc, if I do the following it works just fine and
> I got an email.  So, seems to me that the user backuppc does have
> permission.
> echo "Test: Sendmail" | sendmail -v dwilli...@dtw-consulting.com
> <mailto:dwilli...@dtw-consulting.com>
>
> Any ideas on how I can further troubleshoot this?
>
> Regards,


Hi Dave,

This may be due to the sender address. Have you had a look at ssmtp's 
FromLineOverride=Yes setting? Alternatively, you can try tuning 
EMailFromUserName and/or EMailAdminUserName to match the allowed 
addresses, depending on how you are routing the email.
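
For example (values hypothetical, depending on your provider):

  # /etc/ssmtp/ssmtp.conf
  FromLineOverride=YES

  # config.pl: use sender addresses the provider accepts
  $Conf{EMailFromUserName}  = 'backuppc@example.com';
  $Conf{EMailAdminUserName} = 'admin@example.com';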

Best regards,
Johan Ehnberg

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] BackupPC won't delete old full backup

2017-02-27 Thread Johan Ehnberg
On 02/27/2017 12:50 PM, Björn Daunfeldt wrote:
> Hello!
>
> I'm trying out BackupPC to decide whether or not I'm going to use it on
> my production servers/desktops.
> Only one problem occurs for me that i can't figure out, and that is that
> the backup server running BackupPC won't delete my (old)full backup. Ie.
> there is more than one full backup(even tho my configuration
> specifically tells it to only keep only one).
>
> My backup machine:
> CentOS Linux release 7.3.1611 (Core)
> BackupPC-3.3.1-5
> I use rsync+ssh for backup to another CentOS 7 machine.
>
> The config for the hosts in separate file:
> $Conf{IncrPeriod} = 0.97;
> $Conf{IncrKeepCnt} = 4;
> $Conf{IncrKeepCntMin} = 1;
> $Conf{IncrAgeMax} = 30;
>
> $Conf{FullPeriod} = 6.97;
> $Conf{FullKeepCnt} = 1;
> $Conf{FullKeepCntMin} = 1;
> $Conf{FullAgeMax} = 90;
>
> Logs shows no error messages whatsoever when backup/restore is done. It
> works fine in that regard.
>
> Output looks as follows:
> Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days  Server Backup Path
> 0        full  yes     0      2/20 10:00  12.6           7.0       /var/lib/BackupPC//pc/centos.local/0
> 1        incr  no      1      2/21 10:11  8.8            6.0       /var/lib/BackupPC//pc/centos.local/1
> 2        incr  no      1      2/22 10:00  10.9           5.0       /var/lib/BackupPC//pc/centos.local/2
> 3        incr  no      1      2/23 14:00  7.3            3.9       /var/lib/BackupPC//pc/centos.local/3
> 4        incr  no      1      2/24 14:00  9.9            2.9       /var/lib/BackupPC//pc/centos.local/4
> 5        full  yes     0      2/27 10:00  13.6           0.0       /var/lib/BackupPC//pc/centos.local/5
>
> So why is full backup #0 not deleted?
> Have I missed something or is there something else going on here?
>
> Regards,
> -Björn
>

Hi Björn,

The incremental backups depend on it, so BackupPC ensures that they work 
by keeping the full until the incrementals also expire.

For example, if you were to decrease IncrKeepCnt and IncrKeepCntMin to 
0, all but the latest backups should be deleted automatically.
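
In config terms that would be:

  $Conf{FullKeepCnt}    = 1;
  $Conf{IncrKeepCnt}    = 0;
  $Conf{IncrKeepCntMin} = 0;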

Not sure why one would do that except for testing purposes, though :)

Best regards,
Johan Ehnberg

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] backuppc 3.3.1-2ubuntu3 breaks pool graphs on the Status page.

2017-02-23 Thread Johan Ehnberg
Hi,

For reference, the link to the issue is here:
https://bugs.launchpad.net/ubuntu/+source/backuppc/+bug/1612600

I made a remark about the fix not being released, contrary to what the 
bug status says.

Best regards,
Johan

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Looking for some comments on sizing.

2017-02-09 Thread Johan Ehnberg
Hi Scott,

I've been looking into scaling BackupPC to some extent.

Object storage is not yet feasible in v4, due to some database-like 
files being kept in the parts that would go into object storage (cpool). 
Also, in the current development builds, there are several fairly heavy 
operations on the cpool that would slow it all down too much. Some may 
be avoidable in future builds.

Running BackupPC as a symmetric/same-config scale-out solution is not 
possible for many reasons. Splitting up the work is perfectly doable, 
though, if you can sacrifice deduplication between the instances.

If we are talking PB's of data, node-based storage is likely not 
feasible. If your files are mostly large, then network based storage is 
a good option. My experiences with ceph remote block devices in this 
kind of setting are positive. If the files are mostly small, you may be 
hit by random access latency with any network-based solution.

If your data is a mix of large and small, consider splitting them up if 
the small can fit on local storage where small random I/O is orders of 
magnitude faster.

In order to scale the filesystem itself, I would opt for a copy-on-write 
filesystem, since these fit the case of storing static files well. Most 
also give you data integrity verification out of the box. For example, 
key benefits of ZFS are:
- Data integrity verification
- Automatic bit rot fixing if you run ZFS built-in RAID
- Snapshot send/receive for remote replication faster than rsync
- Transparent compression
- Dynamic, on-line resizing
So with ZFS, you can handle both compression and data integrity checking 
outside of BackupPC, speeding things up a lot.
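
A hypothetical starting point for such a dataset (disk layout and
names are up to you; the built-in RAID is what enables the automatic
bit rot fixing mentioned above):

  zpool create tank raidz2 /dev/sd[b-g]
  zfs create -o compression=lz4 -o atime=off tank/backuppc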

I have not tried ZFS on ceph yet, though. Balancing the redundancy 
optimally is an issue in this scenario, since taking advantage of both 
ceph and ZFS easily costs 4 times (twice for each, or ~1.3 times for ZFS 
and three times for ceph) the space of the data to be stored plus plenty 
of RAM. (Note that googling 'ZFS on ceph' gives a lot of issues about 
ceph on ZFS OSD's, which is irrelevant here.)

Best regards,
Johan Ehnberg

-- 
Johan Ehnberg
jo...@molnix.com
+358503209688

Molnix Oy
molnix.com



Re: [BackupPC-users] Problems using the wiki script

2017-01-30 Thread Johan Ehnberg
On 01/30/2017 04:58 PM, Kent Tenney wrote:
> The error msg has changed with current version:
> commit edbd1a4613e0125ed65372738abff935230db075
>
> when the script executes 'makeDist ...'
>
> Unexpected Conf var RefCntFsck in bin/BackupPC_refCountUpdate
> Exiting because of errors
>
> (also added to issue 45)
>

Looks like there is a lot of development going on, which is good. V4 is 
still not finished after all, so I guess these transitional issues are 
to be expected.

Best regards,
Johan



Re: [BackupPC-users] Problems using the wiki script

2017-01-27 Thread Johan Ehnberg
On 01/28/2017 09:38 AM, Johan Ehnberg wrote:
> On 01/27/2017 08:01 PM, Les Mikesell wrote:
>> On Fri, Jan 27, 2017 at 11:38 AM, Kent Tenney  wrote:
>>> https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS
>>>
>>> After running the wiki script,
>>>  /usr/loca/BackupPC/bin/BackupPC
>>> fails with
>>> Can't locate BackupPC/Lib.pm in @INC (you may need to install the
>>> BackupPC::Lib module) (@INC contains: /usr/local/BackupPC/lib /etc/perl
>>> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1
>>> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5
>>> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22
>>> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at
>>> /usr/local/BackupPC/bin/BackupPC line 57.
>>> BEGIN failed--compilation aborted at /usr/local/BackupPC/bin/BackupPC line
>>> 57.
>>>
>>> It seems the lib files weren't installed,
>>> # tree /usr/local/BackupPC/lib
>>> /usr/local/BackupPC/lib/
>>> ├── BackupPC
>>> │   ├── CGI
>>> │   ├── Config
>>> │   ├── Lang
>>> │   ├── Storage
>>> │   ├── Xfer
>>> │   └── Zip
>>> └── Net
>>> └── FTP
>>
>> I haven't installed v4 yet so I can't be more specific, but what you
>> are showing are just the directories in /usr/local/BackupPC/lib/ which
>> seem to at least exist, but your error is about a particular file.
>> The @INC list is the search path where perl will try to find it.
>> Does /usr/local/BackupPC/lib/BackupPC/Lib.pm exist, and if so is it
>> and the path to it readable by the backuppc user?
>>
>
> Hi Kent and Les,
>
> Thanks for the report! I maintain the mentioned instructions, and can
> confirm this error.
>
> It seems something has changed in the latest code, and while the
> installer says:
>
>Installing library in /usr/local/BackupPC/lib
>
> The lib folders remain empty. The exit status is also OK, so the
> installer may not even see the error. As such it is a bug in the build
> automation.
>
> The lib folder was changed a few days ago by Craig Barratt, mentioning
> significant changes.
>
> I will create an issue with these notes, and link it to the wiki.
>
> Best regards,
> Johan Ehnberg

The issue report:
https://github.com/backuppc/backuppc/issues/45



Re: [BackupPC-users] Problems using the wiki script

2017-01-27 Thread Johan Ehnberg
On 01/27/2017 08:01 PM, Les Mikesell wrote:
> On Fri, Jan 27, 2017 at 11:38 AM, Kent Tenney  wrote:
>> https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS
>>
>> After running the wiki script,
>>  /usr/loca/BackupPC/bin/BackupPC
>> fails with
>> Can't locate BackupPC/Lib.pm in @INC (you may need to install the
>> BackupPC::Lib module) (@INC contains: /usr/local/BackupPC/lib /etc/perl
>> /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1
>> /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5
>> /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22
>> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at
>> /usr/local/BackupPC/bin/BackupPC line 57.
>> BEGIN failed--compilation aborted at /usr/local/BackupPC/bin/BackupPC line
>> 57.
>>
>> It seems the lib files weren't installed,
>> # tree /usr/local/BackupPC/lib
>> /usr/local/BackupPC/lib/
>> ├── BackupPC
>> │   ├── CGI
>> │   ├── Config
>> │   ├── Lang
>> │   ├── Storage
>> │   ├── Xfer
>> │   └── Zip
>> └── Net
>> └── FTP
>
> I haven't installed v4 yet so I can't be more specific, but what you
> are showing are just the directories in /usr/local/BackupPC/lib/ which
> seem to at least exist, but your error is about a particular file.
> The @INC list is the search path where perl will try to find it.
> Does /usr/local/BackupPC/lib/BackupPC/Lib.pm exist, and if so is it
> and the path to it readable by the backuppc user?
>

Hi Kent and Les,

Thanks for the report! I maintain the mentioned instructions, and can 
confirm this error.

It seems something has changed in the latest code, and while the 
installer says:

   Installing library in /usr/local/BackupPC/lib

The lib folders remain empty. The exit status is also OK, so the 
installer may not even see the error. As such it is a bug in the build 
automation.

The lib folder was changed a few days ago by Craig Barratt, mentioning 
significant changes.

I will create an issue with these notes, and link it to the wiki.

Best regards,
Johan Ehnberg



Re: [BackupPC-users] incremental forever - synthetic backup using ssh

2017-01-26 Thread Johan Ehnberg
On 01/26/2017 07:10 PM, Andreas Roth wrote:
> Hi mailinglist,
>
> I need to backup a couple of hosts using a low-bandwidth connection. The 
> daily diff on the filesystem is pretty small.
>
> Is it possible to have some kind of "synthetic backup" or "incremental 
> forever" - like some commercial products name it? They using differencial 
> backups to build a new full backup. Using this I could backup the host once 
> and then only doing differencial backups.
>
> At 4.0alpha3 documentation I found
>
> "The reverse deltas allow "infinte incrementals" - no need for a full backup 
> if you are willing to trade speed for the risk that a file change will not be 
> detected if the mtime or size don't change." which seems to be the what I was 
> looking for.
>
> Is there any way to use this feature also with v3 - as I would doubt that I 
> want to use v4 for production?!
>
> Thanks in advance and Best Regards,
>
> Andreas
>

Hi Andreas,

When using rsync, both v3 and v4 will always only transfer changes. In 
v3 you can speed the process up further by using checksum caching. There 
is even no need to run incrementals, since the difference is not in the 
data transferred but in the fact that file integrity is double-checked 
by checksumming on the client during a full. In v4 this is even 
smarter, as it is able to do pool matching before transferring, checking 
if the new file already exists in the pool from a different name or host.

In other words, use rsync/rsyncd and you are all set.
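
For completeness, v3 checksum caching is just the documented magic
seed appended to the rsync argument lists in config.pl:

  $Conf{RsyncArgs} = [
      # ... keep the stock arguments, then add:
      '--checksum-seed=32761',
  ];
  # and the same for $Conf{RsyncRestoreArgs}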

The option in v4 relates to how the pool is checked and managed rather 
than what is transferred. This is reflected in the fact that it is 
somewhat feasible to never checksum the files. Note the word risk :)

Best regards,
Johan Ehnberg



Re: [BackupPC-users] off site replication of backuppc

2017-01-20 Thread Johan Ehnberg
On 01/20/2017 05:41 PM, Philip Parsons (Velindre - Medical Physics) wrote:
> Hi,
>
>
>
> We have been using backuppc to backup a few of our systems.  We would
> like to have a replication server off site.  Is this possible? Can I
> just use rsync to copy the pool?
>
>
>
> Thanks,
>
> Phil.
>

Hi Phil,

In short, your options are to use BackupPC v3 with ZFS or btrfs (or even 
block level replication such as RDBD) and replicate the filesystem 
through snapshots, or to use BackupPC v4 with any of the above or rsync. 
This is due to rsync being slow with hardlinks, and BackupPC v4 no 
longer depending on them.

Check the list archives for more detailed answers, this question has 
been answered many times before.

A different approach is to run a separate, secondary instance of 
BackupPC with the same settings.

Best regards,
Johan Ehnberg



Re: [BackupPC-users] Using previous backup system as an initial backup

2016-12-21 Thread Johan Ehnberg
On 12/20/2016 11:10 PM, Jan Stransky wrote:
> Hi,
> i am going to install fresh BackupPC. I have used my own system for
> backup by know. Since data created by BackupPC will be on the same HDD,
> I am wondering if it is possible to force BackupPC use for initial
> backup hardlinking the old backup. The data in old backup are stored as
> basicaly simple copy of backupped folders.
> Best regards,
> Jan
>


Hi Jan,

Forging a pool by hand from a set of files is likely not feasible. There 
are at least two ways that could work for you though.

You could use BackupPC 3 and a script to seed the pool with local files, 
making sure that the path matches that of the host to be backed up:

http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

See the thread "Migrate local data into BackupPC pool for remote client" 
from earlier this year.

Or, you can use BackupPC 4 (with rsync) and backup the local storage 
first. It will match any remote files that were already backed up from 
the local storage.

Best regards,
Johan




Re: [BackupPC-users] version 4 gif files only

2016-12-18 Thread Johan Ehnberg
Hi,

I think you missed the point; it is likely working correctly but under 
another path in your browser. Did you try YOURIP/BackupPC_Admin?

> No, I did not change anything. I simply ran the script, I got what I got
> and now see lots of gif files.

This is expected under YOURIP/BackupPC.


> So, I am familar with Bpc4 from a *.tar file installation. I am not that
> familar with 'sed' replacement strings. Could some portion of your
> script have failed and that is why I am seeing gif files?

Since the script contains 'set -e' it should not complete if something 
fails along the way. The easiest way to change settings is to edit 
config.pl and the apache files after the install if you want to change 
the path.


> Please do not be offended by my suggestion. I am trying to eliminate
> possible reasons to figure out why Bpc4 is not running correctly.

No worries - these are related to BackupPC rather than the install script ;)

Best regards,
Johan



Re: [BackupPC-users] version 4 gif files only

2016-12-16 Thread Johan Ehnberg
Hi Bob,

Did you change the setting in config.pl? Otherwise the default is 
/BackupPC_Admin. I'll improve the wiki page with a note about that.

Best regards,
Johan

On 12/15/2016 08:54 AM, Bob of Donelson Trophy wrote:
> This is a BackupPC version 4 question.
>
> Attention Johan Ehnberg (author of "Installing BackupPC 4 from git on
> Ubuntu Xenial 16.04 LTS" script.)
>
> I have a VM running Ubuntu 16.04.1LTS and decided to try the script
> posted at
> https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS.
>
> Install appeared to process correctly.
>
> When I access the //[ip address]/BackupPC I get a list of mostly *.gif
> files. (Clearly the graphics 'pieces' that make the web interface.)
>
> I believe this may be a permissions issue.
>
> Did the script fail in some fashion I am not seeing?
>
>
>
> --
>
> ___
>
> Bob Wooden of Donelson Trophy
>
>
>

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688



Re: [BackupPC-users] Second Destination

2016-11-20 Thread Johan Ehnberg

On 11/20/2016 05:08 PM, Bob of Donelson Trophy wrote:
> On 2016-10-24 10:21, Johan Ehnberg wrote:
>
>>>
>>> I, like the OP, have a second location that is connected via VPN tunnel.
>>> You insinuating that the second machine can connect and backup clients
>>> through the VPN tunnel from the second location to the first?
>>>
>>> If so, any docs reading suggestions would be greatly appreciated.
>>>
>>
>> Hi Bob,
>>
>> A simple way to do this is simply to add the hop to your RsyncClientCmd,
>> for example with all kinds of compression and tuning:
>> $sshPath -C -t -q -x -l USERATMAINSERVER MAINSERVER.example.com ssh -q
>> -x -o BatchMode=yes -o StrictHostKeyChecking=no -l USERATCLIENT $host
>> sudo $rsyncPath $argList+
>>
>> Best regards,
>> Johan
>>
>> --
>> Johan Ehnberg
>> jo...@ehnberg.net <mailto:jo...@ehnberg.net>
>> johan.ehnberg.net
>> +358503209688
>>
>
>
>
> Johan,
>
> Thanks for your suggestion, I really appreciate it.
>
> It has been a while but, I finally hard to to look at this and think it
> over what you said.
>
> One question. Did you change the RsyncClientRestoreCmd in the same manner?
>
>
>

Yes.



Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Johan Ehnberg
On 11/01/2016 02:07 PM, Adam Goryachev wrote:
> On 1/11/16 21:59, Gandalf Corvotempesta wrote:
>> 2016-11-01 11:35 GMT+01:00 Johan Ehnberg :
>>> Changes in BackupPC 4 are especially geared towards allowing very high
>>> full periods. The most recent backup being always filled (as opposed to
>>> rsnapshots hardlink pointing to the first), a full backup is not
>>> required to maintain a recent and complete representation of all the
>>> files and folders.
>> So, with the current v4, deleting a full backup doesn't break the
>> following incrementals?
>> In example, with Bacula, if you delete a "full" backup, all following
>> backups are lost.
>> In rsnapshot, you can delete whatever you want, it doesn't break
>> anything as long as you keep at least 1 backup, obviosuly

Correct, if by full you mean what in v4 is called filled (and this holds 
for both full and incremental backups). BackupPC_backupDelete should 
merge it properly. This is specifically mentioned in the documentation.


> Ummm, silly question, but why would you want to delete a backup?
> BackupPC supports automatic removal of old backups based on the schedule
> you provide, you shouldn't be manually messing with the backups. If you
> need a different schedule, then adjust the config, and let backuppc
> handle it for you.
>
> So, can you explain the need to delete random backups manually?
> Generally, if you need to do something weird like that, then either you
> are doing something wrong, or you are using the wrong tool.

I can think of many scenarios:
- An otherwise small client has an error that fills the logs with 
gigabytes or terabytes, the error was fixed, new backup run manually, no 
need to save the bad one for an eternity
- When setting up a client, maybe the first backup contains a lot of 
installation files that unecessarily were backed up - you want to delete 
just that one, but not change the overall retention periods
- A mount was temporarily unavailable on the client when the backup ran 
so the backup is not representative of how the client is normally
- You shorten the retention times and want to free up some space asap
- With v3, I've seen some partial backups that had problems causing 
everything to be transferred again over a slow link and thus needed to 
be deleted. It seems v4 won't have that thanks to pool matching before 
transfer but deleting sure would be handy if it does.
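
A minimal sketch of such a manual deletion with the stock v4 tool
(host and backup number hypothetical):

  sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_backupDelete -h somehost -n 123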

Best regards,
Johan



Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Johan Ehnberg
Hi,

On 11/01/2016 10:37 AM, Gandalf Corvotempesta wrote:
> 2016-11-01 9:26 GMT+01:00 Adam Goryachev:
>> Easy, configure your keepfull/incremental and your full/incremental
>> periods so that they will match your desired retention periods for your
>> needs.
>>
>> Make sure you use rsync (ie, same as rsnapshot).
>>
>> Use checksum caching

This is not yet implemented in BackupPC 4, so fulls are still slow. See:
https://github.com/backuppc/backuppc/issues/32


>> Then, after your second full backup, you see the time to complete a
>> backup is similar to an incremental.
>
> Any drawback doing this like rsnapshot ?
> What happens if I delete the "full" backup ?

In BackupPC 4, deleting a full backup is the same as deleting an 
incremental one - it is merged to the next backup. It should be very 
fast unless it is a filled one. You need to use BackupPC_backupDelete 
for this to manage the merging properly. Note that in v4, full and 
filled concepts are decoupled, where full means that all file content is 
checked and filled means that the BackupPC pc/[backupnumber] folder for 
the host carries the complete list of files and folders in it.


> With rsnapshot nothing happens, by using hardlinks, the only way to
> loose backus (and data)
> is to delete all hardlinks pointing to a file, so I have to delete the
> whole backup pool
>
> What about backuppc ? a new full is made if the previous full is lost
> or the first "useful" backup is promoted to full ?

I'd consider this a pool integrity thing. If you somehow manage to 
delete the actual files in your BackupPC pool, its integrity is 
compromised in the same way as tampering with hard links for rsnapshot. 
BackupPC v4 works quite differently though, since file content (cpool) 
and backups (pc) are not hardlinked. It is more of a "database and 
shared storage objects" model, with the "database" being a mix of 
directory structures, file indexes and reference counts.

Currently v4 does seem to check the pool file on every backup and thus 
will recover nicely. All backups are promoted to filled when running anyway.

However, I am a proponent of having BackupPC v4 _dump never checking the 
pool (cpool), unless there is an unavoidable condition (such as a 
checksum collision detection, block comparison or a v3 style checksum 
recheck probability). The reason is that this would immensely speed up 
backups and enable the use of higher latency cloud storage systems 
(object/API based or mounted) such as swift or glusterfs. Checking could 
be decoupled from backups through the use of a separate storage 
integrity checker (think a nightly job or a slow crawler). This way, 
cpool growth I/O + possible integrity checks, rather than every backup, 
would cap BackupPC performance.

Considering the lack of checksum caching, the extra test fscks mentioned 
earlier, and the above mentioned extra pool checks during backup, 
BackupPC v4 is still a bit slow with large backups.


> I'm asking this because based on the answer I can try to get the best
> config for my environment. If BPC is smart enough
> to work even without the original full, I can use a full period very
> very high (like 1 year)
>

Changes in BackupPC 4 are especially geared towards allowing very high 
full periods. With the most recent backup always being filled (as opposed 
to rsnapshot's hardlinks pointing to the first), a full backup is not 
required to maintain a recent and complete representation of all the 
files and folders.

Best regards,
Johan



Re: [BackupPC-users] Version 4 vs 3

2016-10-27 Thread Johan Ehnberg


On 10/27/2016 07:23 PM, Alain Mouette wrote:
>
> On 27-10-2016 13:16, Bowie Bailey wrote:
>> The BackupPC project has a single developer who tends to be rather busy
>> most of the time, so development happens in bursts with months or years
>> between releases.
> Thank you very much for your grat work!
>
>> but I know
>> there are some people on the list who have been using it version 4 for
>> some time now.
>
> Hi guys, could you give us some information about it?

Have a look at the v4 documentation intro for an overview of features:
https://github.com/backuppc/backuppc/blob/master/doc-src/BackupPC.pod

And the issues for an idea of the status of development:
https://github.com/backuppc/backuppc/issues

For me using rsync, v4 is promising to become cloud capable where v3 is 
not. V4 has some excellent new features such as checksum matching before 
transfer. However, some features that make BackupPC great, such as 
checksum caching, are not yet implemented, making v4 still much slower 
in those cases.

Best regards,
Johan Ehnberg



Re: [BackupPC-users] Version 4 vs 3

2016-10-27 Thread Johan Ehnberg


On 10/27/2016 05:43 PM, Jan Novak wrote:
> Hi all,
>
> in standard enviroment (only linux enviroment), what ist the right
> version to use?
> At sourceforge i can see latest version4 from 2013 -will it be developed
> or ist this version dead?
>
> Jan
>


Hi,

Development has moved to GitHub; get the latest version from there. 
It has stalled a few times but is still getting there.

Best regards,
Johan



Re: [BackupPC-users] Second Destination

2016-10-25 Thread Johan Ehnberg

On 10/25/2016 12:50 PM, Jan Novak wrote:
> Am 24.10.2016 um 16:59 schrieb Johan Ehnberg:
>> yes, also another topic: I use zfs. Will backuppc handle snapshots in
>>> the future?
>> If someone implements it, sure. However, many features of ZFS are
>> already handled in BackupPC, so that is perhaps redundant work. My
>> suggestion would be to look into using zfs tools natively to transfer
>> the pool without BackupPC's knowledge.
>
> Hmmm... but this implements the knowledge of zfs and all related things.
> Thats the reason why i like Backuppc - easy to use, easys interface,
> quick and ready - nice overviews.
>
> You wrote, that backuppc already handle many futures of zfs!? How can i
> use that?
>

It already does this automatically: deduplication, compression, 
versioning and so forth. Like you said - it's ready and works as it is, 
not depending on a specific filesystem.

Best regards,
Johan



Re: [BackupPC-users] Second Destination

2016-10-24 Thread Johan Ehnberg
>
> I, like the OP, have a second location that is connected via VPN tunnel.
> You insinuating that the second machine can connect and backup clients
> through the VPN tunnel from the second location to the first?
>
> If so, any docs reading suggestions would be greatly appreciated.
>

Hi Bob,

A simple way to do this is simply to add the hop to your RsyncClientCmd, 
for example with all kinds of compression and tuning:
$sshPath -C -t -q -x -l USERATMAINSERVER MAINSERVER.example.com ssh -q 
-x -o BatchMode=yes -o StrictHostKeyChecking=no -l USERATCLIENT $host 
sudo $rsyncPath $argList+

Best regards,
Johan

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688



Re: [BackupPC-users] Second Destination

2016-10-24 Thread Johan Ehnberg
Hi again,

> yes, also another topic: I use zfs. Will backuppc handle snapshots in
> the future?

If someone implements it, sure. However, many features of ZFS are 
already handled in BackupPC, so that is perhaps redundant work. My 
suggestion would be to look into using zfs tools natively to transfer 
the pool without BackupPC's knowledge.


> What is the different to v4?

V4 does not use hardlinks in the pool, so rsyncing the pool is very fast 
and thus quite feasible.

Best regards,
Johan

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688



Re: [BackupPC-users] Second Destination

2016-10-24 Thread Johan Ehnberg

On 10/24/2016 05:08 PM, Jan Novak wrote:
> Hi,
>
> I have BackupPC running very well, backing up data to a local hard disk
> available to the BackupPC machine (a virtual one). I would now like to
> back up to a second destination outside my house for a dual strategy. I
> have a tunnel connection to that destination and SSH access as well.
> Is this possible?
>
> Jan

Hi Jan,

I will assume you are running BackupPC v3 and want the whole pool 
synchronized to another host, and the pool has so many hard links that 
rsync is not feasible.

Your options are basically running another, standalone BackupPC server 
on that other host, or running some fairly heavy block-level replication 
with a RAID 1 mirror, such as nbd/drbd in async mode. Look through 
the list archives to find more discussions on this topic.

Even ZFS or btrfs snapshot transfers are possible, but those filesystems 
are perhaps not as optimal for BackupPC, and that is another topic on 
its own.

If my assumptions were off, there are more options, such as running 
BackupPC v4 (not yet stable but the pool is easy to rsync), rsync/unison 
etc. on a small pool, or simply saving backup archives to the other host.

Best regards,
Johan

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying the pool / filesystem migration

2016-10-19 Thread Johan Ehnberg
On 10/19/2016 09:59 PM, Michael Stowe wrote:
> On 2016-10-19 12:38, Nick Bright wrote:

>> The real problem I have is in converting the ext3 filesystem to xfs.


> You have not asked this question, and I apologize for offering this
> unsolicited advice:  don't.


> I'd recommend simply moving to ext4, which doesn't have such issues --
> and this you can do by moving the entire image, then converting the
> filesystem.

I agree. Ext4 is usually fine in production.

Another, less practical alternative in production would be to convert 
the pool itself by moving to BackupPC 4. Since v4 does not use 
hardlinks, moving the pool is trivial. However, beyond version 4 not yet 
being stable, this approach takes the full length of your backup 
retention to complete: the conversion is not retroactive, so both pools 
are in use at the same time in the meanwhile. The benefit is that there 
would be no gaps.
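
For reference, the in-place ext3-to-ext4 conversion Michael describes is 
typically done along these lines (a sketch only; the device name is a 
placeholder, the filesystem must be unmounted, and a verified backup 
should exist first):

tune2fs -O extents,uninit_bg,dir_index /dev/sdX1
e2fsck -fD /dev/sdX1

Note that only newly written files get the ext4 extent layout; existing 
files keep their old block mapping.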

Best regards,
Johan Ehnberg

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Installation script for BackupPC 4 on Ubuntu Xenial 16.04 LTS

2016-10-05 Thread Johan Ehnberg
Dears,

To encourage more participation in the BackupPC 4 development on GitHub, 
I posted a simple script in the Wiki for installing BackupPC 4 on the 
Ubuntu Xenial 16.04 LTS cloud image. I found the process to be rather 
tedious and stumbled on a couple of bugs along the way. The script works 
around those issues and will be updated as they are resolved.

You can find it here:
https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS

Best regards,
Johan Ehnberg

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Moving /var/lib/BackupPC to a new disk fails with rsync and OOM

2016-09-06 Thread Johan Ehnberg
On 09/06/2016 04:46 PM, Colin wrote:
> Thank you Benjamin.
>
> I ended up doing the dd like this:
> dd if=/dev/sdb1 | pv -s 400G | dd of=/dev/sdc1 bs=16M
>
> That took around 5h total, which was great!
> Being stuck with the same filesystem sucks, but it seems that without a
> bigger window of downtime there isn't any tool that could have been
> used.
>
> Thank you all!
>
> Cheers,
> Colin
>

For future reference, there's Holger's script and this one I created for 
migrations such as your case:

http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

Giving up the pool and backup history may not be feasible though, so 
beware: my script starts a fresh pool.

Best regards,
Johan Ehnberg

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BacupPC -How not to start backup when lock file is present ?

2016-06-27 Thread Johan Ehnberg
Hi,

You can do it the other way around: run the database dump with 
$Conf{DumpPreUserCmd}. Read the docs here:

http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_dumppreusercmd_
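
A minimal sketch of that approach, with a hypothetical dump script path 
(adjust users and paths to your setup; UserCmdCheckStatus makes a failed 
dump abort the backup):

$Conf{DumpPreUserCmd}     = '$sshPath -q -x -l root $host /usr/local/bin/dump-db.sh';
$Conf{UserCmdCheckStatus} = 1;  # skip the backup if the dump script fails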

Best regards,
Johan Ehnberg


On 06/27/2016 03:54 PM, phil123456 wrote:
> hello,
>
> I am exporting my database before backup
> but the time it takes may vary, and I want things to be automatic
> so I create a lock file when export starts and remove it when it is finished
>
> I would like BackupPC to start the backup only when everything is done
>
> Is there a way to achieve this?
>
> thanks
>
> +--
> |This was sent by philippe.flor...@edenred.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FTPS support

2016-05-26 Thread Johan Ehnberg


On 2016-05-26 18:10, Johan Ehnberg wrote:
> On 2016-05-26 18:01, Jure Špik wrote:
>> Hi, recently one of my hosts started using FTPS (FileZilla's FTP over TLS).
>>
>> Backups fail since then. Trying to manually login to the site with a
>> normal FTP client fails with "530 This server does not allow plain FTP.
>> You have to use FTP over TLS".
>>
>> Is there any inbuilt support for this kind of service or is my only
>> option for backing up this server to hack perl code?
>>
>> Thank you, Jure
>>
>> P.S. Backup error message is "
>> Connected to ___.com
>> Login successful to __@__.com
>> Binary command successful
>> xfer start failed: xfer start failed: Can't change working directory to
>> ___: Couldn't get directory after cwd
>> "
>>
>
> Hi Jure,
>
> Have you looked into using stunnel?
>
> Best regards,
> Johan
>

And just as soon as I sent that, I remembered: FTP does not work over one 
connection only, so stunnel alone will not be enough. My guess is that 
unless the host supports some other protocol that BackupPC can use, it 
will not work - unless you use some external helper scripts etc.

Best regards,
Johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FTPS support

2016-05-26 Thread Johan Ehnberg
On 2016-05-26 18:01, Jure Špik wrote:
> Hi, recently one of my hosts started using FTPS (FileZilla's FTP over TLS).
>
> Backups fail since then. Trying to manually login to the site with a
> normal FTP client fails with "530 This server does not allow plain FTP.
> You have to use FTP over TLS".
>
> Is there any inbuilt support for this kind of service or is my only
> option for backing up this server to hack perl code?
>
> Thank you, Jure
>
> P.S. Backup error message is "
> Connected to ___.com
> Login successful to __@__.com
> Binary command successful
> xfer start failed: xfer start failed: Can't change working directory to
> ___: Couldn't get directory after cwd
> "
>

Hi Jure,

Have you looked into using stunnel?

Best regards,
Johan



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrate local data into BackupPC pool for remote client

2016-05-17 Thread Johan Ehnberg

> Tar complained unless I added the 'c' but it's still erroring out for me. 
> Were you thinking something like this?
>
> tar c --strip-components=4 /home/backup/my_web_server_backups/todays_backup/
>
> There has got to be an easier way to import local data into a backup pool for 
> a specific host...

What is the error from tar?

Yes, I forgot the c, that is required.

Since the files are easily available, you can always try other ways 
around it. For example, creating SMB shares may be feasible. This may 
not be as easy to automate, though.

Best regards,
Johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrate local data into BackupPC pool for remote client

2016-05-17 Thread Johan Ehnberg
Hi,

Glad to hear!

To do it directly from directories, you could try changing 'zcat' to 
'tar --strip-components=X' and pointing it at the directory. Set X to 
match the number of leading path components that should be excluded, so 
that the archive structure equals that of the actual host to be backed 
up. For example, 1 would strip home from /home/backup, so that the 
archive looks like it starts at backup/.

I have not tried that myself so any results would be of interest.
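
As a sketch, with HOSTNAME as a placeholder: note that at archive 
creation time GNU tar drops leading directories with -C (or --transform), 
while --strip-components applies when extracting, so a directory-based 
command could look like:

$Conf{TarClientCmd} = '/bin/tar -c -C /home/backup/HOSTNAME .';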

Best regards,
Johan

On 2016-05-17 19:22, cardiganimpatience wrote:
> It worked! Thanks so much for spelling it out for me!
>
> My previous backups are not tar'd so I'm having to first compress them into a 
> tar.gz to make it work. Is it possible to import them directly from the 
> (uncompressed) source? Could I change the entry for TarClientCmd from 'zcat' 
> to 'cat'? If so what would follow? Just the path to the root?
>
> Thanks,
> Mike
>
>
> On 2016-05-14 07:38, Johan Ehnberg wrote:
> Hi,
>
> You are correct. The script as it is, expects a .tar.gz file in
> $FILEAREA/target. However, this is the file, not a directory. The script
> manages it as a symlink to the actual file so that you do not have to
> manually input in BackupPC for every host separately.
>
> Looking at your results (zcat complaining about the directory) I would
> assume all you have to do is point zcat at the tar.gz file
> ($FILEAREA/target) instead of the directory containing it ($FILEAREA).
>
> The file is for the whole host run as a partial full dump, so it's all
> or nothing in a single run. Any subsequent runs with different mounts
> will either replace the previous ones or not be used depending on your
> BackupPC settings.
>
> The script itself simply runs BackupPC_dump -f -v HOSTNAME. That works
> nicely manually as well for a single host, just set the symlink to point
> at your tar.gz file:
> ln -s HOSTNAME.tar.gz target
>
> Using the script with the variables you propose should work with one change:
> FILEAREA=/tmp/bpctar
>
> Furthermore, that works only for a single host since you are pointing at
> the actual .tar.gz file instead of $FILEAREA/target symlink. You can
> literally set TarClientCmd to 'zcat /tmp/bpctar/target' and the script
> will handle it for you, enabling you to run it for many hosts also.
>
> Some further notes that may be of interest:
>
> You can also use .tar files, simply change 'zcat' to 'cat'. It has to be
> in tar format for BackupPC to understand the input, though. This can be
> faster if your files already exist in directories.
>
> Beyond that, if you are seeding in a batch manner for many hosts or
> large amounts of files which already exist in directories, and do not
> want to create tar.gz files, you can also try using 'tar' instead of
> 'cat'. This requires you to use tar's various flags to tune the paths to
> match the actual host to be backed up.
>
> I assume the tar.gz file you use was not created by BackupPC. Thus, the
> next likely thing to do after a successful seed, is to ensure that the
> paths that you get in BackupPC from the seeding match those that you get
> when backing up the actual host.
>
> I updated the script with improved documentation with the help of your
> experiences. Thanks!
>
> Best regards,
> Johan
>
> On 2016-05-14 00:03, cardiganimpatience wrote:
> Hey thanks a lot for the response Johan! It's taken me a while to figure out 
> how this is supposed to work and I'm getting closer but still not there.
>
> The file refers to 'tar dumps' but it's unclear to me what that means. Does 
> it assume that my local backups are in a tar.gz format? They are not. They're 
> uncompressed and simply live in a folder named after the hostname.
>
> So I created a tar.gz of one of my folders and tried to work on that but 
> BackupPC_dump doesn't seem to find it:
>
> Running: zcat /home/backup/my_sql_server/my_sql_server_copy
> full backup started for directory /samba_shares/
> started full dump, share=/samba_shares/
> Xfer PIDs are now 6134,6133
> xferPids 6134,6133
> cmdExecOrEval: about to exec zcat 
> /home/backup/my_sql_server/my_sql_server_copy
> gzip: /home/backup/my_sql_server/my_sql_server_copy is a directory -- ignored
> Tar exited with error 512 () status
>
> What is the expected value of TarClientCmd? Is it the name of the .gz file? 
> Or a folder which contains the .gz file(s)
> Same question for FILEAREA.
>
> If I were to run BackupPC_dump directly what values can I pass to it?
>
> It appears the script is looking for a .tar.gz file named after the hostname. 
> Is that accurate?

Re: [BackupPC-users] Status on new BackupPC v4

2016-05-14 Thread Johan Ehnberg
Hi all,

Going back a bit, also check this message:
https://sourceforge.net/p/backuppc/mailman/message/34753750/

On 2016-05-14 11:18, Mauro Condarelli wrote:
> Il 13/05/2016 23:13, David Cramblett ha scritto:
>> If the group was interested in GitHub, then services/platform for items 1 - 
>> 4 would be available for free with good tools. Also, GitHub is somewhat of 
>> an industry standard for open source projects now. It also allows for easy 
>> patching from interested parties via pull request, forking for those who 
>> need a custom version for their special situation, etc.
> IMHO github is a very good choice.

It is certainly a sane place to start. If the project's needs ever grow 
beyond that it can always be revised.

>>
>> On Fri, May 13, 2016 at 12:55 PM, Mauro Condarelli <mc5...@mclink.it> wrote:
>>
>>  IMHO we need (at least):
>>
>>   1. maintained web site
>>   2. online documentation
>>   3. online bugtracking / wishlist
>>   4. source repository
>>   5. steering committee (managing releases)
>>   6. IRC channel (or equivalent)
>>
>>
>> I think if one person (or persons) could step up and take a lead on this, 
>> others like myself would be willing to pitch in. Even if your not a perl 
>> expert (I'm certainly not), if you have some spare time to dedicate to 
>> managing the project each week, others could be delegated to for specific 
>> technical tasks.
> Thanks for the confidence.
> I will not rush to grab the gavel as I strongly suspect there are many people 
> more suited.
> Another problem is I work as a contractor, so my "free time" is 
> unpredictable, with relatively long periods (months) where I hardly have any 
> of it.

Has there been any communication through any channel from Craig on this 
since his last announcement message reply on 2015-01-20? I think the 
natural way for this to go would be for Craig to announce if he means to 
retire and hand over responsibilities for this project. (Or to fork the 
project if we really couldn't reach him.)

>> I have been using BackupPC for at least 12-14 years, I can't remember when I 
>> started using it for certain. Currently using BackupPC v3 where I'm 
>> employed, and then I use BackupPC v4 at home. I would really like to see 
>> another source tree setup that we all can support. I'm a software engineer, 
>> but I also teach classes at the local community college, do contract work, 
>> and have three busy teenagers. I don't have the time to lead, but would 
>> gladly follow someone(s) who has the best intentions for the community.
>>
>> David
> I am mostly in the same situation.
> I am available to the community (i.e.: I could set-up the github account, 
> migrating whatever to it, do some patch review, ecc.), but I don't think I'm 
> really suited to lead it.
>
> Mauro

Same here: as a very long-time BackupPC user and minor contributor, I'd 
be happy to contribute to keeping the project healthy. My strengths are 
in service architecture and documentation as well as deployment and 
maintenance automation. BackupPC's Perl and C are not in my repertoire, 
though.

Best regards,
Johan

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrate local data into BackupPC pool for remote client

2016-05-14 Thread Johan Ehnberg
Hi,

You are correct. The script as it is expects a .tar.gz file at 
$FILEAREA/target. However, this is the file, not a directory. The script 
manages it as a symlink to the actual file so that you do not have to 
enter the path manually in BackupPC for every host separately.

Looking at your results (zcat complaining about the directory) I would 
assume all you have to do is point zcat at the tar.gz file 
($FILEAREA/target) instead of the directory containing it ($FILEAREA).

The file is for the whole host run as a partial full dump, so it's all 
or nothing in a single run. Any subsequent runs with different mounts 
will either replace the previous ones or not be used depending on your 
BackupPC settings.

The script itself simply runs BackupPC_dump -f -v HOSTNAME. That works 
nicely manually as well for a single host, just set the symlink to point 
at your tar.gz file:
ln -s HOSTNAME.tar.gz target

Using the script with the variables you propose should work with one change:
FILEAREA=/tmp/bpctar

Furthermore, that works only for a single host, since you are pointing 
at the actual .tar.gz file instead of the $FILEAREA/target symlink. You 
can literally set TarClientCmd to 'zcat /tmp/bpctar/target' and the 
script will handle it for you, enabling you to run it for many hosts also.

Some further notes that may be of interest:

You can also use .tar files, simply change 'zcat' to 'cat'. It has to be 
in tar format for BackupPC to understand the input, though. This can be 
faster if your files already exist in directories.

Beyond that, if you are seeding in a batch manner for many hosts or 
large amounts of files which already exist in directories, and do not 
want to create tar.gz files, you can also try using 'tar' instead of 
'cat'. This requires you to use tar's various flags to tune the paths to 
match the actual host to be backed up.

I assume the tar.gz file you use was not created by BackupPC. Thus, the 
next likely step after a successful seed is to ensure that the paths 
that you get in BackupPC from the seeding match those that you get when 
backing up the actual host.

I updated the script with improved documentation with the help of your 
experiences. Thanks!

Best regards,
Johan

On 2016-05-14 00:03, cardiganimpatience wrote:
> Hey thanks a lot for the response Johan! It's taken me a while to figure out 
> how this is supposed to work and I'm getting closer but still not there.
>
> The file refers to 'tar dumps' but it's unclear to me what that means. Does 
> it assume that my local backups are in a tar.gz format? They are not. They're 
> uncompressed and simply live in a folder named after the hostname.
>
> So I created a tar.gz of one of my folders and tried to work on that but 
> BackupPC_dump doesn't seem to find it:
>
> Running: zcat /home/backup/my_sql_server/my_sql_server_copy
> full backup started for directory /samba_shares/
> started full dump, share=/samba_shares/
> Xfer PIDs are now 6134,6133
> xferPids 6134,6133
> cmdExecOrEval: about to exec zcat 
> /home/backup/my_sql_server/my_sql_server_copy
> gzip: /home/backup/my_sql_server/my_sql_server_copy is a directory -- ignored
> Tar exited with error 512 () status
>
> What is the expected value of TarClientCmd? Is it the name of the .gz file? 
> Or a folder which contains the .gz file(s)
> Same question for FILEAREA.
>
> If I were to run BackupPC_dump directly what values can I pass to it?
>
> It appears the script is looking for a .tar.gz file named after the hostname. 
> Is that accurate? Am I able to import one folder/mount at a time or is it an 
> all-or-nothing deal?
>
> If I'm guessing correctly would the following import local files into the 
> BackupPC pool for the server named "my_web_server"?
>
> # - Set TarClientCmd to 'zcat /tmp/bpctar/my_web_server.tar.gz' (as set in 
> FILEAREA below)
> ...
> TARGETS="my_sql_server" # Manual target list
> ...
> # Your environment
> FILEAREA=/tmp/bpctar/my_web_server.tar.gz
> ### unused for seed ### NEWBPC=/mnt/backuppc # Where new backuppc dir is 
> mounted, if moving
> ### unused for seed ### OLDBPC=/srv/backuppc # Where current backuppc dir is 
> mounted, if moving
> ### unused for seed ### BPCLNK=/var/lib/backuppc # Where config.pl to in the 
> config, if moving
> BPCBIN=/usr/share/BackupPC/bin # Where BackupPC_* scripts are located
> BPCUSR=backuppc # User that runs BackupPC
>
> Thanks again for your help!
>
> On 2016-05-08 08:24, Johan Ehnberg wrote:
> Migrate local data into BackupPC pool for remote client
> Hi,
>
> Version 4 supports matching files from the pool.
>
> If you are using version 3, the path has to be the same, so you would
> have to process the tar file to match the host to be backed up.

Re: [BackupPC-users] Migrate local data into BackupPC pool for remote client

2016-05-08 Thread Johan Ehnberg
Hi,

Version 4 supports matching files from the pool.

If you are using version 3, the path has to be the same, so you would 
have to process the tar file to match the host to be backed up. This 
works fine, I used a similar method here:

http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

You may be able to use tar's path options (--strip-components on 
extraction, or -C on creation) to work around the extra paths on the fly.

Good luck!

Johan


On 2016-05-06 17:19, cardiganimpatience wrote:
> BackupPC is installed and working great for new hosts. Is there a way to take 
> the hundreds of GB from old hosts that exist on the backup server and import 
> them into the BackupPC storage pool?
>
> The old backup system uses rsync to dump all files to a local disk on the 
> same server where BackupPC is installed, albeit with incorrect file 
> ownership. I don't want to re-transfer that data over our narrow bandwidth 
> connection if I don't have to. I believe rsync will sort out permissions and 
> timestamps on its own.
>
> So far I've created a host in BackupPC and changed the transfer type to 'tar' 
> and successfully imported one of its mount points, but now the tar sharename 
> is called "/home/backup//cur//", where the actual share on 
> the host is simply called /.
>
> My intention is to flip the Xfer method from 'tar' to 'rsync' after I get 
> most of the larger shares imported via local tar. Is it necessary to 
> associate the imported files with a specific host or does hard-linking take 
> care of all that?
>
> Thanks!
>
> +--
> |This was sent by itism...@gmail.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

-- 
Johan Ehnberg
jo...@ehnberg.net
johan.ehnberg.net
+358503209688

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using Amazon AWS S3/Glacier with BackupPC

2016-03-19 Thread Johan Ehnberg
Hi,

I also find this increasingly attractive, with more services moving into 
virtualized environments. Currently there are fairly few redundant, 
high-availability, cloud-oriented storage options that BackupPC can use.

BackupPC 4 should be better suited to object storage since it no longer 
relies on hardlinks. It would be great if BackupPC someday managed cold 
archival too, but that may actually turn out to be more expensive, since 
deduplication benefits may be hard to maintain.

If someone has experiences with this, I would also greatly appreciate 
hearing any results.

Best regards,
Johan


On 2016-03-17 14:16, Marcel Meckel wrote:
> Hi there,
>
> Amazon offers amongst other services one named S3* (Simple Storage
> Service, moderate
> price with low latency) and Glacier* (extremely cheap storage, retrieval
> can take hours,
> perfect for backups only needed when disaster strikes).
>
> With the correct config rules in place, files uploaded to S3 can be
> moved to Glacier
> automatically, e.g. when the file's age is >= 14 days.
>
> I'm curious if anybody managed to use Amazon S3 or Glacier with
> BackupPC,
> e.g. as an additional safeguard against RAID failure or file system
> corruption?
>
> I'm not sure if uploading each and every single file to S3 is the right
> way to
> do this, performance-wise.
>
> With BackupPC's feature BackupPC_archiveStart one could let BackupPC
> generate
> an archive of the most recent backup and send this to S3 instead of many
> litte
> files.
>
> Did anyone try this in any way?
>
> Any suggestions on how to implement this with BackupPC?
>
> Thanks for your feedback
>
> Marcel.
>
>
> * https://en.wikipedia.org/wiki/Amazon_S3
> * https://en.wikipedia.org/wiki/Amazon_Glacier
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rebuilding a backuppc installation

2016-03-14 Thread Johan Ehnberg
Hi Brad,

If you are prepared to lose your pool, here's one way to move the latest 
backup as a basis for the new pool:

http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

Best regards,
Johan

On 2016-03-14 03:11, Brad Alexander wrote:
> I'm getting kinda frustrated with the entire systemd thing on Linux. So
> what I'm wondering is what the procedure is (if it is possible) to
> convert the OS from Linux to FreeBSD, and converting the base filesystem
> to ZFS, preferably without losing my pool. The hardware in question is a
> Dell PE1850. If I'm going to use ZFS, I will convert the RAID 5 array to
> JBOD. Any other sage advice from folks running backuppc on FreeBSD?
>
> As I type this, I suspect I am going to lose my pool, so I should
> probably archive the older backups. Is there any way to re-import them
> into the pool after the conversion is done?
>
> Thanks,
> --b
>
>
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] check backuppc file

2016-01-15 Thread Johan Ehnberg
If you use the script, it moves the files for you; there is no need to 
touch the pool yourself.

I have not tried the script with SMB. But unless BackupPC structures SMB 
backups differently from tar or rsync backups, it should work.

And you are right about the missing files: they will be fetched at the 
next backup. Since you are using SMB, the next full backups will simply 
replace any corrupt files with newer versions. This means old backups 
will remain corrupt.

/johan

On 2016-01-15 18:22, Nicola Scattolin wrote:
> I use the SMB transfer method, so I'm not sure it works.
> I copied almost the full cpool folder, even though many files are missing:
> cp: cannot stat ‘cpool/4/0/7/407a0d08be5776e7c2091d48a050640f’:
> Input/output error
> I think the files that were not copied will be stored at the next
> full backup (which I hope to run tonight), but I'm not sure about the ones
> that MAY be corrupted.
> If I read correctly, the script has to be run before starting to move the
> files.
>
>
> Il 15/01/2016 16:29, Johan Ehnberg ha scritto:
>> Hi Nicola,
>>
>> We were in this exact situation not long ago. We came up with a simple
>> method to use any intact files as the base for new backups.
>>
>> You can find the details here:
>> http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/
>>
>> Or in my post to this mailing list.
>>
>> Basically, it creates a new partial backup on your new drive and when
>> backuppc backs it up the first time, any files that do not match the
>> client's corresponding files are discarded. This is verified with
>> checksum by backuppc.
>>
>> Let me know if you need any help.
>>
>> Best regards,
>> Johan
>>
>> On 2016-01-15 15:41, Nicola Scattolin wrote:
>>> Hi to all,
>>> I got a problem about a week ago and found that my BackupPC drive was
>>> dying.
>>> Now I'm copying my old pool to the new drive, but I get a lot of I/O
>>> errors on the old drive. Is there a way to force BackupPC to check that
>>> the files are still OK? I know that if a file has already been copied by
>>> BackupPC it will not be copied again, but I don't want to lose files when
>>> BackupPC thinks a file is already in the pool but it's corrupted.
>>> If it's not possible I will erase the old backups and start new ones,
>>> but I have a 1.4 TB backup to do and it will take a long time.
>>>
>>> Thanks
>>> Nicola
>>>
>>> ___
>>> BackupPC-users mailing list
>>> BackupPC-users@lists.sourceforge.net
>>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>>> Wiki:http://backuppc.wiki.sourceforge.net
>>> Project:http://backuppc.sourceforge.net/
>>>
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project:http://backuppc.sourceforge.net/
>
> --
> Nicola
> Ser.Tec s.r.l.
> Via E. Salgari 14/D
> 31056 Roncade, Treviso
>
> Follow us on  YouTube <https://www.youtube.com/user/Sertectube>
> Facebook <https://www.facebook.com/sertecsrl> Twitter
> <https://twitter.com/sertecsrl>   Google+
> <https://plus.google.com/+Dpidgprinting/posts>
>
>
> Ser-Tec <http://dpidgprinting.com>
>
>

Re: [BackupPC-users] check backuppc file

2016-01-15 Thread Johan Ehnberg


On 2016-01-15 18:31, Les Mikesell wrote:
> On Fri, Jan 15, 2016 at 9:57 AM, Nicola Scattolin  wrote:
>>
>> i'm using smb xfermethod, where do i set backuppc to set the checksum to 1?
>>
>
> With smb you are going to get a complete new copy transferred on the
> next full run anyway.  The checksum cache only works for rsync.
> Personally I would just start over and only worry about extracting
> anything from the old drive if you had to recover some older flle.

I agree. If the clients to be backed up are on the local network, simply 
starting a fresh pool is easiest, and likely fastest for 1.4 TB compared 
to working with recovery.

/johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] check backuppc file

2016-01-15 Thread Johan Ehnberg
Hi Nicola,

We were in this exact situation not long ago. We came up with a simple 
method to use any intact files as the base for new backups.

You can find the details here:
http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

Or in my post to this mailing list.

Basically, it creates a new partial backup on your new drive, and when 
BackupPC backs it up the first time, any files that do not match the 
client's corresponding files are discarded. This is verified by BackupPC 
with checksums.

Let me know if you need any help.

Best regards,
Johan

On 2016-01-15 15:41, Nicola Scattolin wrote:
> Hi to all,
> I got a problem about a week ago and found that my BackupPC drive was
> dying.
> Now I'm copying my old pool to the new drive, but I get a lot of I/O
> errors on the old drive. Is there a way to force BackupPC to check that
> the files are still OK? I know that if a file has already been copied by
> BackupPC it will not be copied again, but I don't want to lose files when
> BackupPC thinks a file is already in the pool but it's corrupted.
> If it's not possible I will erase the old backups and start new ones,
> but I have a 1.4 TB backup to do and it will take a long time.
>
> Thanks
>Nicola
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] check backuppc file

2016-01-15 Thread Johan Ehnberg


On 2016-01-15 17:15, Gandalf Corvotempesta wrote:
> 2016-01-15 14:41 GMT+01:00 Nicola Scattolin :
>> now i'm copying my old pools on the new drive but i got a lot of I/O
>> errors on old drive, there is a way to force backuppc to check if the
>> files are still ok? Because i know that if a file is alredy been copy by
>> backuppc it will not copy it again, but i don't want to lose files when
>> backuppc think a file is already in the pool, but it's corrupted.
>
> I think BPC is smart enough to detect differences.
> AFAIK, BPC checks file checksums between the backed-up file and the
> "fresh" file. If they differ, the file will be transferred.

When using checksum caching, corrupt files on the BackupPC server are 
only detected with the configured verification probability. Corrupt 
files may therefore go unnoticed for quite a while. Setting the 
verification probability to 1 for a complete round of full backups will 
check every file, to be extra sure.
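
In config.pl terms that means temporarily setting (and reverting 
afterwards):

$Conf{RsyncCsumCacheVerifyProb} = 1;  # verify every cached checksum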

The script I mentioned goes through tar, so checksums are not carried 
over. I had some issues (duplicated files that would cause an error on 
each transfer) that BackupPC was unable to solve in the corrupt pool, 
even after the pool was moved to a good drive. They went away when 
pre-seeding the new pool.

/johan

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc restore freezing

2015-11-22 Thread Johan Ehnberg
Hi,

Are you using sudo with rsync? As described here:

http://backuppc.sourceforge.net/faq/ssh.html#how_can_client_access_as_root_be_avoided

The permission issue in the log below, and the fact that temp 
directories work, point to this possibility.

Best regards,
Johan

On 2015-11-21 21:56, donkeydong69 wrote:
> I found that trying to do a full restore of / seems to break down here:
> Sending /vmlinuz (remote=vmlinuz) type = 2
>restore l 777   0/0  30 /vmlinuz -> 
> boot/vmlinuz-3.19.0-33-generic
> Remote[1]: rsync: mknod "/dev/pts/1" failed: Operation not permitted (1)
> Remote[1]: rsync: mknod "/dev/pts/11" failed: Operation not permitted (1)
> Read EOF:
> Tried again: got 0 bytes
> Done: 215256 files, 5884382843 bytes
> restore failed: Unable to read 7237443 bytes
>
> It seems I need to do more research as to what folders in the / directory can 
> be restored and which shouldn't.
>
> I've also discovered that a restore of, say, a particular file or
> directory may finish long before the BackupPC GUI has updated the status. In
> addition, I have found little to no issues restoring large directories to
> temporary folders on the client computer, rather than overwriting the existing
> directories during the restore process. I am eager to see what happens when I
> replace the folders on the system with the ones sent by my BackupPC.
>
> +--
> |This was sent by vur...@me.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
If it is a one-time move, you may be able to do some tricks with the 
rsync sharename. Or, use a restore tar file to seed the new location. 
See my post from earlier today for details; I'd be happy to hear your 
results. You can use archivemount to change the paths of the tar file 
you create.

For continuous moves, this is not feasible.

Best regards,
Johan

On 2015-11-19 15:23, Yoann David wrote:
> My problem is not finding the tool, backuppc is great, but how to avoid
> the 13 days of file transfer due to the move of the files...
> Apparently it is not possible.
>
> I'm sad ;)
>
>
>
> Le 19/11/2015 14:17, Johan Ehnberg a écrit :
>> Hello Yoann,
>>
>> Now I understand better what you are attempting. No, unfortunately
>> BackupPC will not detect moves. This is a feature in upcoming version 4.
>> Checksum caching will offload the server, and rsync allows transferring
>> just changes in current versions of BackupPC, but move detection (or
>> rather, opportunistic pool matching based on full-file checksum) is not
>> yet available.
>>
>> I think unison may work for you, or some live sync tools such as
>> owncloud, they detect moves.
>>
>> Best regards,
>> Johan
>>
>> On 2015-11-19 15:04, Yoann David wrote:
>>> Hello Johan,
>>>
>>> thanks for your answer, unfortunately checksum caching is not configured
>>> on my backuppc.
>>>
>>> With this system rsync/backuppc can detect file moving ?
>>>
>>> ie : in my case, backuppc wil detect that
>>> /var/opt/gitolite/repositories/aa.git folder is now
>>> /home/git/repositories/aa.git
>>> (the rsync share name was /var/opt/gitolite/repositories and move also
>>> to /home/git/repositories)
>>>
>>> In the doc you linked, it said that the full performance benefit will be
>>> noticed on third full backup, so it may be to late to activate it ?
>>>
>>> Yoann
>>>
>>> --
>>> ___
>>> BackupPC-users mailing list
>>> BackupPC-users@lists.sourceforge.net
>>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>>> Wiki:http://backuppc.wiki.sourceforge.net
>>> Project:http://backuppc.sourceforge.net/
>>>
>> --
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project:http://backuppc.sourceforge.net/
>
>
>
> --
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
Hello Yoann,

Now I understand better what you are attempting. No, unfortunately 
BackupPC will not detect moves. This is a feature in the upcoming version 
4. Checksum caching will offload the server, and rsync allows transferring 
just the changes in current versions of BackupPC, but move detection (or 
rather, opportunistic pool matching based on full-file checksums) is not 
yet available.

I think unison may work for you, or some live sync tools such as 
owncloud, they detect moves.

Best regards,
Johan

On 2015-11-19 15:04, Yoann David wrote:
> Hello Johan,
>
> thanks for your answer, unfortunately checksum caching is not configured
> on my backuppc.
>
> With this system rsync/backuppc can detect file moving ?
>
> ie : in my case, backuppc wil detect that
> /var/opt/gitolite/repositories/aa.git folder is now
> /home/git/repositories/aa.git
> (the rsync share name was /var/opt/gitolite/repositories and move also
> to /home/git/repositories)
>
> In the doc you linked, it said that the full performance benefit will be
> noticed on the third full backup, so it may be too late to activate it?
>
> Yoann
>
> --
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large directory moved on backuped server, how can avoid the complete re-synchronization

2015-11-19 Thread Johan Ehnberg
Hello Yoann,

Rsync checksum caching will help in transferring only the changes. Are 
you using it? Here are the details:

http://backuppc.sourceforge.net/faq/BackupPC.html#Rsync-checksum-caching
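
For reference, caching is enabled by adding the checksum-seed option to 
the rsync argument lists in config.pl, roughly like this (a sketch; it 
requires checksum-seed support in the rsync setup on the clients):

# appended at the end of config.pl, after the existing definitions
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';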

Best regards,
Johan Ehnberg

On 2015-11-19 13:09, Yoann David wrote:
> Hello,
>
> mostly everything is in the title.
>
> On a target server, we moved a quite large directory (90 GB); as the
> target changed in BackupPC, it tries to re-sync everything (the whole
> 90 GB), not only the difference.
> Our bandwidth between the backed-up target server and the BackupPC
> server is low (80 kB/s), so it will take more than 13 days to transfer
> all the data!!!
>
>
> What can we do ?
>
> Yoann DAVID
>
> --
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC pre-loading / seeding and migrating / moving script

2015-11-18 Thread Johan Ehnberg
Hi all,

A recurring theme I've noticed is the pre-loading and migration 
challenge. We just finished work on a corrupt pool on one of our offsite 
instances, which BackupPC was unable to fix properly despite a checksum 
verify probability of 1.0. We decided to move to a fresh pool and keep 
the old one in cold storage until retention runs out. In the process, I 
created a script that proved useful for other situations as well.

This script can save (restore to tar), seed (pre-load) and move 
(migrate, switching between old and new backuppc partitions on the same 
server) BackupPC backups. It works at least for tar and rsync transfer 
modes and with BackupPC 3.3.0. For migration, it is considerably faster 
and more portable than migrating the pool, but previous backups will not 
be included. Migrating to another server is done using save on the old 
and seed on the new.

Some use cases I can think of for this script are:
- Preloading secondary servers at a remote location
- Moving to another partition when hardlink preservation takes too long
- Taking the usable parts of a corrupt pool when starting a fresh one

The details are on:
http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/

The latest version of the script is found here:
https://owncloud.molnix.com/index.php/s/t879KKwnHg7nHDu

Best regards,
Johan

-- 
Johan Ehnberg
jo...@ehnberg.net
+358503209688

Molnix Oy
molnix.com

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] patch for auto-detection of Bonjour (apple) clients for 3.2

2009-08-13 Thread Johan Ehnberg


Jeffrey J. Kosowsky wrote:
> James Kyle wrote at about 12:00:46 -0700 on Thursday, August 13, 2009:
>  > I patched BackupPC_dump to look for bonjour clients. My apologies if  
>  > this is not the most correct way to do so.
>  > 
>  > What this allows is:
>  > 
>  > If you have an apple client with hostname foo and Bonjour name  
>  > foo.local, you can enter the client's name as "foo" and backuppc_dump  
>  > will auto-detect its bonjour name.
>  > 
>  > This makes it so that your clients don't have to run an smb service if  
>  > you don't want to without redundantly entering .local for all your  
>  > clients.
>  > 
>  > untyped binary data: patch-bin-backuppc_dump.diff [save to a file]
>  > 
> 
> I'm not sure I would want this patch rolled into the sources since all
> it really does is check 'hostname'.local and assumes that if it exists then
> it must be a Bonjour name. But in *nix world, 'foohost.local' itself is a
> valid name which may or may not be related to 'foohost'. So this in
> general seems more like a hack than a robust, general solution.
> 
> I'm not sure what the problem is with just appending '.local' to the
> names of Bonjour hosts in the Backuppc 'hosts' file. Alternatively,
> just create the aliases in the /etc/hosts file or equivalents.
> 

This should be even easier:
echo "search local" >> /etc/resolv.conf

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Where is the BackupPC-Wiki?

2009-08-06 Thread Johan Ehnberg
Hello!

Same problem here. I was going to browse it for Mac OS X information 
yesterday but had to fall back on the old docs.

/johan

p...@backup-and-restore.de wrote:
> Hello folks,
> 
> I wanted to add some stuff to the wiki (common configuration) but it seems
> that the wiki is not there?
> 
> Before it was at:
> 
>http://backuppc.wiki.sourceforge.net/
> 
> and in the Docs it points to the same address:
> 
>http://backuppc.sourceforge.net/faq/BackupPC.html#resources
> 
> When entering the above link (http://backuppc.wiki.sourceforge.net/)
> I get a redirect to:
> 
>http://sourceforge.net/projects/backuppc/develop
> 
> :-/
> 
> Question:
> - What is going on?
> - Where is the Wiki?
> 
> Kind regards from Berlin
> 
> - Phil
> 

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Ubuntu Stability

2009-04-30 Thread Johan Ehnberg
Hi!

A freeze sounds like a kernel problem = not BackupPC; it only triggers 
it because it uses resources intensively.

In my experience, cases similar to yours have been caused by either 
faulty hardware, a faulty config or poor drivers:

Faulty hardware can be detected by stress-testing the drive (dd, 
urandom, shred and others) and RAM (memtest86) with random reads and 
writes in a hot environment.
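Hedged examples of those stress tests - all of these are destructive, so
only point them at a scratch disk:

dd if=/dev/urandom of=/dev/sdX bs=1M   # raw random writes (destroys data!)
shred -v -n 2 /dev/sdX                 # repeated random overwrite passes
badblocks -svw /dev/sdX                # full read/write surface test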

A common faulty config on RAID is that there has been a problem in 
allocating the space, which results in bad accesses (if your system is 
on the same partition as your pool, that is a good explanation for your 
symptoms) and the drive ends up at least read-only.

Finally some drivers for IDE/SATA chips are poor (often because the 
vendor has not released any specs) causing errors in some situations. 
This can often be remedied by switching SATA from "IDE Mode" to "AHCI" 
or similar in your BIOS.

In general, dmesg is your friend - try following the last seconds before 
the system freezes, both locally and remotely, since by the time you 
want to log in it is already too late :).
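For the remote part, a hedged one-liner (assuming the usual Ubuntu
kernel log location):

ssh root@host 'tail -f /var/log/kern.log'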

Good luck!

/johan


Christopher Derr wrote:
> I'm currently running the latest backuppc version that Ubuntu officially 
> supports (it's behind Debian as far as I can tell and I haven't tried to 
> use the Debian version): 3.0.0.  Apt-get shows it's the latest available 
> through Stable.  Anyway, the system (Tyan 2912G2NR, 8 GB memory, 4 TB in 
> RAID 5) crashes often.  Becomes untouchable, I go to the console, hit 
> enter to bring up a logon prompt, then the machine is officially 
> frozen.  I figure it's a kernel panic, but I'm not seeing anything 
> telling in any log I can find.  This happens almost exclusively when 
> backing up a one of our Windows fileservers using rsync with 700 GB+ 
> data over our 1 Gb link. 
> 
> My thoughts are it could be Ubuntu or it could be the hardware.  They 
> system doesn't seem to have any issues except on this one machine's 
> backups, and even then it's not every time (just most of the time).  I'm 
> considering moving to Debian and the latest version of Backuppc (3.1...I 
> realize 3.2 is still in beta).  I think I just need to backup my 
> /etc/backuppc config files but if I keep the /var/lib/backuppc mount, I 
> should be able to reinstall the system without affecting the backups.  
> Not sure if the 3.1 upgrade is going to talk with the 3.0 backup files 
> though.
> 
> Thoughts about Ubuntu or my upgrade in general?
> 
> Thanks,
> 
> Chris
> 

--
Register Now & Save for Velocity, the Web Performance & Operations 
Conference from O'Reilly Media. Velocity features a full day of 
expert-led, hands-on workshops and two days of sessions from industry 
leaders in dedicated Performance & Operations tracks. Use code vel09scf 
and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] What is it doing? How to see progess?

2009-04-08 Thread Johan Ehnberg
Boniforti Flavio wrote:
> 
> 
> Il 08.04.09 19:14, "Carl Wilhelm Soderstrom"  ha
> scritto:
> 
>> On 04/08 05:02 , Boniforti Flavio wrote:
>>> Also, when looking into /var/lib/backuppc/pc/mail.omvsa.ch I see a "new"
>>> folder, and in it "fprofilo": why has there been an "f" added in front
>>> of the rsync module name?
>> it's just part of the storage format. the files there are compressed, and
>> not in their original form.
> 
> Ah, well... So I can't copy or access those files when they're synced over?
>  
>>> Last but not least: how do I check the progress status of my backup?
>> it's not easy; but you can sort of keep an eye on it by watching new
>> directories appear under /var/lib/backuppc/pc//new; and monitoring
>> how much data has appeared there with 'du -sch'.
> 
> The problem is that there are *no new* directories created under "new",
> simply this (out of more than 300MB):
> 
> storebox:/var/lib/backuppc/pc/mail.omvsa.ch/new# tree
> .
> `-- fprofilo
> `-- fangelo
> `-- fApplication Data
> `-- fMicrosoft
> `-- fForms
> 
> That leads me to believe, something is *not* working. No rsync process is
> running on my Linux box, but after 4 hours, the ssh tunnel is still open!
> 
> Need help...

Try monitoring network traffic with something like bwm for bandwidth, 
iptraf for connections and sniffit for data streams. Maybe it's stuck on 
a large file?
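On the server side, a hedged way to keep an eye on progress along the
lines Carl suggested (host path from your log):

watch -n 60 du -sh /var/lib/backuppc/pc/mail.omvsa.ch/new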

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC, rsync and ssh

2009-04-08 Thread Johan Ehnberg
Jeffrey J. Kosowsky wrote:
> Johan Ehnberg wrote at about 10:16:05 +0300 on Wednesday, April 8, 2009:
>  > Actually it should be possible to use any protocol. However, because of
>  > a bug that only now is being fixed, Windows clients with SSH server and
>  > rsync have not been possible.
> 
> From my experimentation, the issue seems to be associated with the
> rsync protocol used which now downgrades to protocol 28 (if I recall
> correctly). The problem doesn't occur with protocol 30 but backuppc
> won't be able to use protocol 30 unless and until perl-File-RsyncP is
> updated.
> 
> Are you saying that there is another way to "fix" the problem that is
> currently in the works or are you assuming that perl-File-RsyncP is
> being updated?
> 

Referring to Craig Barratt's mail on this list maybe a week or two ago, 
yes. And no, I have not seen any other fix. He and others have been 
experimenting with either using FUSE to get around this, or updating to 
version 30.

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Here my log

2009-04-08 Thread Johan Ehnberg
(I invite Craig Barratt and any developer to check out my last paragraph 
below.)

> it doesn't seem like ssh actually *is* backgrounded. I didn't read
> the tutorial and haven't played around with this, so I'm not quite
> sure *what* you would want to background to keep the tunnel open
> [long enough]. You would obviously need to background *something*,
> otherwise BackupPC would wait for the tunnel to close and the
> DumpPreUserCmd to finish before proceeding with the backup, wouldn't
> it?

SSH is backgrounded with the -f switch.

Perl/BackupPC is tricked to understand this by redirecting stdout and
stderr. Otherwise it still waits until they stop even when SSH is
backgrounded.


> - The "sleep" on the client? That wouldn't help, because "ssh"
> wouldn't terminate before "sleep" is done.

As above, SSH goes to the background just before command execution; it
is not supposed to terminate at this point.


> - The "ssh" command? Then how do you know whether it started
> successfully? - The script which starts ssh (or, equivalently, the
> "ssh && echo" sequence)? How do you tell BackupPC whether the "ssh"
> was successful (return code of the DumpPreUserCmd, determining
> whether to proceed with the backup or not)?

By using "&&". It's a shell syntax meaning about 'and if successful', in 
other words I believe it checks the exit status of the previous command. 
This makes the BackupPC log show "SSH started successfully." as we can't 
use SSH itself to tell us that.
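For reference, the wrapper script itself is essentially a one-liner (the
same one quoted elsewhere in this thread):

#!/bin/sh
# tunnel.sh: background ssh (-f arrives via "$@") and report success
ssh "$@" 1>/dev/null 2>/dev/null && echo "SSH started successfully."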

The question of actually proceeding with the backup or not is a good 
one. I haven't explored any integration with BackupPC, so far having the 
backup run and fail on its own has been sufficient. One way would be to 
create a ping replacement/multifunction wrapper for the tunnel.

Now that you brought this up, I actually realized that integrating the 
whole tunnel functionality into BackupPC would be pretty easy in theory. 
A single conf line, or one split up into simple parts (GATE, CLIENT, and 
an "enable tunnel" switch), would perhaps suffice, as the ports could be 
handled dynamically.

Thanks for the input!

-- 
Johan Ehnberg

Email: jo...@ehnberg.net
GSM:   +358503209688
WWW:   http://ehnberg.net/johan.html

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Here my log

2009-04-08 Thread Johan Ehnberg

Boniforti Flavio wrote:
> [cut]
> 
>>> How do I debug this further?
>> For testing, you can remove the "1>/dev/null 2>/dev/null" 
>> from the script to see the output of SSH. Note that the 
>> backup will not proceed like this as SSH can't be 
>> backgrounded without redirecting the output. 
>> You could perhaps also redirect it to a file.
> 
> Now it seems better (I had mistyped the remote hostname):
> 
> Executing DumpPreUserCmd: /var/lib/backuppc/tunnel.sh -fC
> administra...@mail.omvsa.ch -L 8874:127.0.0.1:873 sleep 20
> SSH Tunnel started successfully.
> full backup started for directory data
> Connected to localhost:8874, remote version 30
> Negotiated protocol version 28
> Error connecting to module data at localhost:8874: auth required, but
> service data is open/insecure
> Got fatal error during xfer (auth required, but service data is
> open/insecure)
> Backup aborted (auth required, but service data is open/insecure)
> Not saving this as a partial backup since it has fewer files than the
> prior one (got 0 and 0 files versus 0)
> --
> 
> Still, there remains "auth required, but service data is open/insecure":
> to what does it refer?

The tunnel is working well now. I bet you'll find answers in the links 
below or by googling for more. This may have to do with the variant of 
rsync on the client.

http://backuppc.wiki.sourceforge.net/ErrorMessages
http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg13703.html
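As far as I can tell, that message appears when BackupPC has
RsyncdUserName/RsyncdPasswd configured but the module on the client does
not require auth (or vice versa). A minimal sketch of a matching module
in the client's rsyncd.conf - the module name is from your log, the rest
are assumptions:

[data]
    path = /cygdrive/c/data            # cwRsync-style path, an assumption
    auth users = backuppc              # must match $Conf{RsyncdUserName}
    secrets file = /etc/rsyncd.secrets # plain "user:password" lines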


--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Here my log

2009-04-08 Thread Johan Ehnberg

> Executing DumpPreUserCmd: /var/lib/backuppc/tunnel.sh -fC
> administra...@mail.omv.ch -L 8874:127.0.0.1:873 sleep 20

The SSH tunnel is not working yet.


> full backup started for directory data
> Error connecting to rsync daemon at localhost:8874: inet connect:
> Connessione rifiutata
> Got fatal error during xfer (inet connect: Connessione rifiutata)
> Backup aborted (inet connect: Connessione rifiutata)
> Not saving this as a partial backup since it has fewer files than the
> prior one (got 0 and 0 files versus 0)
> --
> 
> If I do "ssh administra...@mail.omv.ch" I get logged in without password
> request, therefore the key-based authentication is working.
> 
> I would like to get some more detailed log about the "DumpPreUserCmd"
> (which reads: ssh $@ 1>/dev/null 2>/dev/null && echo "SSH started
> successfully.") because I don't know if the ssh tunnel effectively gets
> opened and *remains* open.
> 
> How do I debug this further?

For testing, you can remove the "1>/dev/null 2>/dev/null" from the 
script to see the output of SSH. Note that the backup will not proceed 
like this as SSH can't be backgrounded without redirecting the output. 
You could perhaps also redirect it to a file.

Then run the backup manually using BackupPC_dump -v -f CLIENTNAME.
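If BackupPC's files are owned by the backuppc user, a hedged way to run
that (the install path is a Debian-style assumption):

su -s /bin/sh backuppc -c '/usr/share/backuppc/bin/BackupPC_dump -v -f CLIENTNAME'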


--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Here my log

2009-04-08 Thread Johan Ehnberg
Boniforti Flavio wrote:
>> Hi again.
>>
>> You need to change the method from rsync to rsyncd.
> 
> Well, I changed it now and here's what I got:
> 
> Executing DumpPreUserCmd: /root/tunnel.sh -f administra...@mail...
> -L 8874:127.0.0.1:873 sleep 20

First you have to make sure the SSH tunnel works with key-based 
authentication (no password prompt; a key file instead) for the user 
running BackupPC. That is explained in the BackupPC FAQ, the Wiki and 
elsewhere on the internet. When it works, the log will say "SSH started 
successfully." at this point.
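A minimal key-setup sketch for that, assuming the server-side user is
backuppc and the client account is the administrator one from your log
(commands and paths vary by distro):

su -s /bin/sh backuppc           # become the user running the backups
ssh-keygen -t rsa -N ''          # create a passwordless key
ssh-copy-id administrator@CLIENT # install it on the client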


> full backup started for directory data
> Error connecting to rsync daemon at localhost:873: inet connect:
> Connessione rifiutata
> Got fatal error during xfer (inet connect: Connessione rifiutata)
> Backup aborted (inet connect: Connessione rifiutata)
> Not saving this as a partial backup since it has fewer files than the
> prior one (got 0 and 0 files versus 0)
> 
> 
> Attached a screenshot of my config. Do I have to set RsyncdUserName and
> RsyncdPasswd?

Only if they are in use on the client's rsync daemon, e.g. with cwRsync.

> 
> My attention is at the line "Error connecting to rsync daemon at
> localhost:873: inet connect: Connessione rifiutata", because it seems to
> me that the rsync doesn't actually go through the ssh tunnel...

Exactly. You need to check my HOWTO again, as the rsyncd port is 
probably meant to be 8874 on the server, not 873.


> Do you need some config files (both Windows client and Linux server)?

After key-based SSH is set up, all the configs needed go in backuppc, 
and on the client I assume it's already set up as you were using rsnapshot.

Please use "Reply all" to get the mailing list included at least for 
archival.

Best regards,
Johan

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC, rsync and ssh

2009-04-08 Thread Johan Ehnberg

>>> What should I put in the $Conf{RsyncClientCmd} parameter?
>> Just keep the default, or add a -C for compression if you want.
> OK, as I've modified the default, could you please tell me what that
> command should be?
> And, two questions: 
> 1) where should I be adding -C and how would this be affecting transfers
> over WAN?
> 2) what's the meaning of "+" sign at the end of $argList?

As I understand it, this client uses rsyncd, not rsync, so 
$Conf{RsyncClientCmd} is not used. The -C would then serve you better in 
the tunnel itself.

1) Put it next to the -f switch in $Conf{DumpPreUserCmd}.
2) If I recall the docs correctly, a trailing "+" on a variable like 
$argList+ means the substituted arguments are shell-escaped.


>>> do I have to set it? Is "rsync" the right choice? I guess yes, but 
>>> please give me confirmation and eventually update your howto...
>>> ;-)
>> Actually it should be possible to use any protocol. However, 
>> because of a bug that only now is being fixed, Windows 
>> clients with SSH server and rsync have not been possible. 
>> Instead, it is necessary to use SSH server and rsyncd, which 
> 
> I deduce this is *indeed* the configuration I’m using: I do ssh from my
> Linux server to the Windows client, on which rsyncd is running. Do you
> confirm?

Yes, your command was: "ssh -f administra...@remotehost.com -L 
8875:127.0.0.1:873 sleep 10", indicating a rsyncd server running on the 
client.


>> i assume? This is exactly the kind of setup my HOWTO is 
>> designed for by default. "CLIENT" 
>> in the HOWTO is then the same "127.0.0.1" as you are using 
>> now. You should be able to start using BackupPC alongside 
>> your current configuration, and complete the migration when 
>> you are happy with the results.
> 
> I'm still having troubles, but first I'd like to be sure of what
> RsyncClientCmd to use (and eventually also RsyncClientRestoreCmd). After
> you tell me about this, I'll be doing some more tests and get back to
> you, as long as you're willing to help me in this "journey" ;-)
> 
> Many thanks so far!
> 

No problem. If you use the CGI interface, then you will only have to 
bother with those configurations which are relevant for your method. In 
fact, for rsyncd, RsyncClientCmd is not visible at all. Instead, you can 
think of the rsyncd method as 'integrated in BackupPC' and only concern 
yourself with the host, port and files to be backed up.

--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC, rsync and ssh

2009-04-08 Thread Johan Ehnberg
Hi!

> What should I put in the $Conf{RsyncClientCmd} parameter?

Just keep the default, or add a -C for compression if you want.
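For reference, the stock default (BackupPC 3.x, from memory - verify
against your own config.pl) and a hedged compressed variant:

$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
$Conf{RsyncClientCmd} = '$sshPath -q -x -C -l root $host $rsyncPath $argList+';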


> I’m trying to switch from rsnapshot (tunneled with ssh) to backuppc,
> still using ssh to tunnel rsync. I stumbled across 
> http://ehnberg.net/johan/files/docs/backuppc-ssh-tunnel-howto.html
> and was about to change my config.pl, when I stopped and wanted to
> ask you about the XferMethod. You don’t mention it in your howto, but
> how do I have to set it? Is “rsync” the right choice? I guess yes,
> but please give me confirmation and eventually update your howto...
> ;-)

Actually it should be possible to use any protocol. However, because of
a bug that only now is being fixed, Windows clients with SSH server and
rsync have not been possible. Instead, it is necessary to use SSH server
and rsyncd, which I have made the default for the HOWTO. Linux clients
can use rsync. Again, other methods should work as well, but I have not
tried.


> As I’m writing to you, I’d like to have your advice about my
> migration. Following is the command I’m using with rsnapshot:
> 
> ssh -f administra...@remotehost.com -L 8875:127.0.0.1:873 sleep 10; 
> rsnapshot -c /root/snap.conf daily > /var/log/rsnapshot/snap.log
> 
> As you can see, I’m opening an SSH tunnel and keep it open with
> “sleep 10”, so I can start rsnapshot in it. Its “unwrapped” command
> is:
> 
> /usr/bin/rsync -rtv --port=8875 --delete --ignore-errors
> --numeric-ids --delete-excluded --stats --progress
> localhost::Profili/
> 
> Do you think I can achieve the same with BackupPC? The
> “remotehost.com” is a Windows 2003 Server, on which I’m running
> copSSH (ssh server) and cwRsync (rsync daemon) and I simply opened
> the SSH port, not any other port (no 873 for rsync).

You mean you opened only the SSH port at some firewall, and the rsyncd
port is open for loopback (localhost) connections, I assume? This is
exactly the kind of setup my HOWTO is designed for by default. "CLIENT"
in the HOWTO is then the same "127.0.0.1" as you are using now. You
should be able to start using BackupPC alongside your current
configuration, and complete the migration when you are happy with the
results.


> Any help/advice will be appreciated.
> 
> Many thanks in advance and kind regards,
> 
> Flavio Boniforti
> 
> PIRAMIDE INFORMATICA SAGL Via Ballerini 21 6600 Locarno Switzerland 
> Phone: +41 91 751 68 81 Fax: +41 91 751 69 14
> Url: http://www.piramide.ch E-mail: fla...@piramide.ch

Your feedback is greatly appreciated, and I will update the HOWTO
accordingly around the time when BackupPC 3.2.0 is released.

Good luck!

Johan

-- 
Johan Ehnberg

Email: jo...@ehnberg.net
GSM:   +358503209688
WWW:   http://ehnberg.net/johan.html


--
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
Download a free trial of Rational Requirements Composer Now!
http://p.sf.net/sfu/www-ibm-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc and emails

2009-03-31 Thread Johan Ehnberg
Enrique Jiménez Campos wrote:
> How can I make BackupPC send an email when a backup job is done? I 
> already checked the "BackupPC_sendEmail" script and it works, but 
> when I then try a full/incr backup the system doesn't send me an email :S

Hi!

I created a wrapper script:
#!/bin/sh
# Wrapper for $Conf{DumpPostUserCmd}, arguments as passed in from BackupPC:
#   $1 = transfer status (0 = failed, 1 = OK), $2 = backup type, $3 = host
if [ "$1" = "0" ]; then
   STATUS=Failed
elif [ "$1" = "1" ]; then
   STATUS=OK
else
   echo "No input status!"
   exit 1
fi

# mail the tail of the host's monthly log to the backuppc user
LOGDATE=`date +%m%Y`
tail /var/lib/backuppc/pc/$3/LOG.$LOGDATE | \
   mail -s "Backup $STATUS for $2 on $3" backuppc


And start it in $Conf{DumpPostUserCmd} (see the docs for the arguments). 
I have seen more advanced variants on this list, too.
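For instance, a hedged config.pl line to wire it up - $xferOK, $type and
$host are standard DumpPostUserCmd substitutions, while the script path
is just an example:

$Conf{DumpPostUserCmd} = '/usr/local/bin/backup_mail.sh $xferOK $type $host';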

/johan

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Next release

2009-03-30 Thread Johan Ehnberg

> * Added more options to server backup command: rather than just forcing
>   an incremental or full backup, a regular (auto) backup can be queued
>   (ie: do nothing/incr/full based on schedule), as well as doing just
>   an incremental or full or nothing based on the client schedule.
>   Based on patches submitted by Joe Digilio.

Documentation is underway for using this in mobile and/or client 
initiated setups over SSH and rsync/rsyncd (maybe other methods too). As 
soon as the command line syntax is final I'll get to publishing it somehow.

So far I've had three variants, none of which were really scalable in 
tests (although they worked), and this feature solves it all:

Reverse tunnel with one of these:
a) The current alternative, the queue-all function, can trigger any PC 
in need of backup, but with many PCs it becomes unreliable and 
inefficient = does not scale.
b) Running BackupPC_dump against the host is not smart enough for 
client-initiated backups, as it either does not use schedules or relies 
on only one chance per day, depending on client scripts. It also does 
not respect simultaneous job limits and ignores the UI = does not scale.
c) Sitting and waiting with a sleep command. This gets ugly on the 
client side and should really be done with a VPN. It also has a greater 
risk of dead SSH sessions blocking working ones, requiring manual 
intervention = does not scale too well.

Clearly queuing only the relevant client solves a), using the common 
queue and schedule solves b) and having that command in the first place 
solves c).

A big thanks to those who work on the code for this functionality.

/johan

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Revamping the Wiki

2009-02-13 Thread Johan Ehnberg
Cody Dunne wrote:
> Johan Ehnberg wrote:
>> I for one would be happy to contribute with several complete pages of 
>> documentation that I have generated for my clients, once the Wiki is 
>> clear enough to start with.
> 
> That's fantastic! I'm glad Craig supports it as well. I've marked a few 
> of the recent mailing list messages to copy over, but a more consistent 
> approach to making a FAQ would be excellent.
> 
> What sorts of documentation do you have? It might help in planning out 
> the structure, as well as what stuff of mine to contribute as well.
> 
> Cody

The only one currently available is on:
http://www.ehnberg.net/johan/files/docs/backuppc-ssh-tunnel-howto.html

The rest is mostly in Swedish but can quickly be translated. I mostly 
deal with stuff relating to SSH keys, tunneling, reaching clients behind 
firewalls, mobile clients and two-tiered backups. Stuff that doesn't 
belong in an FAQ (maybe linked under scripting/extending/flexibility of 
BackupPC questions), but in Tips&Tricks, written in a HOWTO style.

/johan

--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Revamping the Wiki

2009-02-13 Thread Johan Ehnberg
Craig Barratt wrote:
> Cody writes:
> 
>> I'd be willing to do a lot of the cleaning myself, though I don't want
>> to step on anyone's toes without talking with you first. Also, my
>> knowledge of BackupPC is fairly limited to my setup (XP/Vista clients &
>> Ubuntu server).
> 
> I agree it isn't very well organized.  I don't think anyone has
> been keeping up the overall structure.  I'd be happy for you to
> make the changes you want.
> 
> Craig

Hi Cody!

It would be great to have the Wiki up to par with the quality of 
BackupPC in general. I believe many of the topics which reappear on the 
mailing list could be summarized for easy reference if the Wiki is 
clearly structured. It would also offload a lot of resources from those 
more closely involved in managing the project!

I for one would be happy to contribute with several complete pages of 
documentation that I have generated for my clients, once the Wiki is 
clear enough to start with.

Good luck!

Regards,
Johan



--
Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco, CA
-OSBC tackles the biggest issue in open source: Open Sourcing the Enterprise
-Strategies to boost innovation and cut costs with open source participation
-Receive a $600 discount off the registration fee with the source code: SFAD
http://p.sf.net/sfu/XcvMzF8H
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to backup the backuppc server (off site)

2009-02-10 Thread Johan Ehnberg
Scurry7 wrote:
> Hello all, I have been looking for some documentation on how to
> backup my backup-PC server.  What on my backup server needs to be
> duplicated so I can restore my backups once the server is back up?
> 
> I can tell that the data and stuff is stored in /var/lib/backuppc/
> but is that all that I need to sync up? (when restoring do I only
> need to re-install backuppc then rsync that directory back?)
> 
> Also, does anyone use a really good really cheap service for online
> data storage?
> 
> I have about 900G used on my server.
> 
> man i love backup-PC, thanks to all that help create awesome
> software!
> 

You will want to back up /etc/backuppc (or wherever your conf files
are); these are necessary to do anything useful with BackupPC, such as
restoring files. Other than that, there should be nothing else to copy.

However, you should be aware of how hard links work and restore them
correctly. Lots of recent threads about that here. In short, replicating
a complete backup archive is often not feasible off site because of the
high number of hard links.
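If you do need to replicate the pool itself, two hedged options (device
and host names are placeholders):

# per-file copy preserving hard links - works, but slow and memory-hungry
rsync -aH /var/lib/backuppc/ offsite:/var/lib/backuppc/
# block-level image of the pool filesystem, sidestepping hard links entirely
dd if=/dev/vg0/backuppc bs=4M | gzip -1 | ssh offsite 'cat > backuppc.img.gz'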

Regards,
Johan

--
Create and Deploy Rich Internet Apps outside the browser with Adobe(R)AIR(TM)
software. With Adobe AIR, Ajax developers can use existing skills and code to
build responsive, highly engaging applications that combine the power of local
resources and data with the reach of the web. Download the Adobe AIR SDK and
Ajax docs to start building applications today-http://p.sf.net/sfu/adobe-com
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] errors in cpool after e2fsck corrections

2009-01-18 Thread Johan Ehnberg
Matthias Meyer wrote:
>> Matthias Meyer wrote:
>>> Thanks for your sympathy :-)
>>> I would believe the filesystem should be ok in the meantime. e2fsck needs
>>> to run 3 or 4 times and need in total more than 2 days. After this
>>> lost+found contains approximately 10% of my data :-( No chance to
>>> reconstruct all of them.
>>>
>>> 1) So you would recommend:
>>> mv /var/lib/backuppc/cpool /var/lib/backuppc/cpool.sav
>>> mkdir /var/lib/backuppc/cpool
>>> I would believe that the hardlinks
>>> from /var/lib/backuppc/pc// than will point to
>>> cpool.sav instead cpool?
>>> The disadvantage is that up to now every file have to be created in the
>>> new cpool. No one of the existing files (in cpool.sav) can be reused.
>>> By deleting of old backups during the next month, the cpool.sav should be
>>> empty and can be deleted than.
>>>
>>> 2) I would believe that every backuped file will be checked against
>>> cpool. Is it not identical than a new file will be created in cpool.
>>> During the deletion of old backups also old, (maybee corrupt) files in
>>> cpool will be deleted. So possible corrupt files in cpool will disappear
>>> automaticly during the next month.
>>>
>>> Which strategy would you prefer?
>>>
>>> Thanks
>> In 1) I was a bit vague: I meant moving all data (to be used only if
>> needed, including cpool) and making fresh backups altogether. And
>> exactly that will make it effortless for you - the new pool is clean.
>>
>> In 2) you are correct unless you are using checksum caching. To clean
>> unused files you need nightly, and to use that you want a clean pool.
>>
>> Go for 2) if there are few errors that you can correct yourself to keep
>> BackupPC running smoothly with an unbroken line of backups.
>>
>> However, 10 GB sounds like you'll save time and trouble by allowing
>> backuppc to make new backups - if you can afford the bandwidth. At the
>> same time you won't have to worry about many factors that could go wrong.
>>
>> Regards,
>> Johan
>>
> ok.I wil give 2) a chance and will test it for at least one month.
> 
> Should I delete all directories in /var/lib/backuppc/cpool/?/?/?/* or would
> BackupPC_nightly do this job?
> Should I reactivate BackupPC_nightly?
> 
> Regards
> Matthias

In 2) you should not delete anything manually - only if filesystem 
errors are causing trouble. And you do need the nightly job running.

Other than that - read the other posts too; they have good pointers to 
actually dealing with the problem behind all this, as well as some ideas 
about how to get the pool in order! If your data is not critical you are 
of course at liberty to play around. In a production system, though, I 
would assume a month's testing on loose grounds is not acceptable.

Good luck!

/johan

--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] errors in cpool after e2fsck corrections

2009-01-18 Thread Johan Ehnberg


Matthias Meyer wrote:
> Johan Ehnberg wrote:
> 
>> Quoting Matthias Meyer :
>>
>>> After a system crash and tons of errors in my ext3 filesystem I have to
>>> run e2fsck.
>>> During this I lost some GB of data in /var/lib/backuppc.
>>> For the time being I have disabled BackupPC_nightly by renaming it to
>>> BackupPC_nightly.disabled ;-)
>>>
>>> The rest of the backuppc system should run as well as possible.
>>>
>>> Now I get some errors from BackupPC_link:
>>> BackupPC_link got error -4 when calling
>>>
> MakeFileLink(/var/lib/backuppc/pc/firewall/0/f%2f/fvar/flog/flogwatch/f2008-12-26,
>>> 845a684e4a8c9fe22d11484dc13e24fc, 1)
>>>
>>> I believe the reason is that
>>> /var/lib/backuppc/cpool/8/4/5/845a684e4a8c9fe22d11484dc13e24fc
>>> is a directory and not a file. Probably during e2fsck created.
>>>
>>> What should I do?
>>> Should I delete all directories in /var/lib/backuppc/cpool/?/?/?/*
>>> or would BackupPC_nightly do this job?
>>>
>>> Is it a problem to disable BackupPC_nightly?
>>>
>>> Thanks
>>> Matthias
>>> --
>>> Don't Panic
>>>
>> Matthias,
>>
>> Sorry to hear about that. I would recommend the following:
>> - Consider all the backed up data corrupt (don't build any new backups on
>> it) - Start a fresh pool, saving the old one for the duration of your
>> normal cycle - Look for the reason for the crash/corruption and prevent it
>> from happening - After that, there's no need to disable nightly
>>
>> If a full backup is not possible (remote rsync), it may be feasible to
>> check the
>> integrity of the pool (VerifyProb=1 for a while) and manually correct any
>> problems. But beware, if there are many errors it could take ages.
>>
>> Hth,
>> Johan Ehnberg
>>
> Thanks for your sympathy :-)
> I would believe the filesystem should be ok in the meantime. e2fsck needs to
> run 3 or 4 times and need in total more than 2 days. After this lost+found
> contains approximately 10% of my data :-( No chance to reconstruct all of
> them.
> 
> 1) So you would recommend:
> mv /var/lib/backuppc/cpool /var/lib/backuppc/cpool.sav
> mkdir /var/lib/backuppc/cpool
> I would believe that the hardlinks
> from /var/lib/backuppc/pc// than will point to
> cpool.sav instead cpool?
> The disadvantage is that up to now every file have to be created in the new
> cpool. No one of the existing files (in cpool.sav) can be reused.
> By deleting of old backups during the next month, the cpool.sav should be
> empty and can be deleted than.
> 
> 2) I would believe that every backuped file will be checked against cpool.
> Is it not identical than a new file will be created in cpool.
> During the deletion of old backups also old, (maybee corrupt) files in cpool
> will be deleted. So possible corrupt files in cpool will disappear
> automaticly during the next month.
> 
> Which strategy would you prefer?
> 
> Thanks

In 1) I was a bit vague: I meant moving all the data aside (including 
the cpool, to be used only if needed) and making fresh backups 
altogether. Exactly that is what makes it effortless for you - the new 
pool is clean.

In 2) you are correct unless you are using checksum caching. To clean 
unused files you need nightly, and to use that you want a clean pool.

Go for 2) if there are few errors that you can correct yourself to keep 
BackupPC running smoothly with an unbroken line of backups.

However, 10 GB sounds like you'll save time and trouble by allowing 
backuppc to make new backups - if you can afford the bandwidth. At the 
same time you won't have to worry about many factors that could go wrong.

Regards,
Johan

--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] errors in cpool after e2fsck corrections

2009-01-18 Thread Johan Ehnberg
Quoting Matthias Meyer :

> After a system crash and tons of errors in my ext3 filesystem I have to run
> e2fsck.
> During this I lost some GB of data in /var/lib/backuppc.
> For the time being I have disabled BackupPC_nightly by renaming it to
> BackupPC_nightly.disabled ;-)
>
> The rest of the backuppc system should run as well as possible.
>
> Now I get some errors from BackupPC_link:
> BackupPC_link got error -4 when calling
> MakeFileLink(/var/lib/backuppc/pc/firewall/0/f%2f/fvar/flog/flogwatch/f2008-12-26,
> 845a684e4a8c9fe22d11484dc13e24fc, 1)
>
> I believe the reason is that
> /var/lib/backuppc/cpool/8/4/5/845a684e4a8c9fe22d11484dc13e24fc
> is a directory and not a file. Probably during e2fsck created.
>
> What should I do?
> Should I delete all directories in /var/lib/backuppc/cpool/?/?/?/*
> or would BackupPC_nightly do this job?
>
> Is it a problem to disable BackupPC_nightly?
>
> Thanks
> Matthias
> --
> Don't Panic
>

Matthias,

Sorry to hear about that. I would recommend the following:
- Consider all the backed up data corrupt (don't build any new backups on it)
- Start a fresh pool, saving the old one for the duration of your normal cycle
- Look for the reason for the crash/corruption and prevent it from happening
- After that, there's no need to disable nightly

If a full backup is not possible (remote rsync), it may be feasible to 
check the
integrity of the pool (VerifyProb=1 for a while) and manually correct any
problems. But beware, if there are many errors it could take ages.
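"VerifyProb=1" above refers to the rsync checksum-cache verify
probability; a hedged config.pl line for it:

$Conf{RsyncCsumCacheVerifyProb} = 1.0;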

Hth,
Johan Ehnberg


--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Custom schedule

2009-01-14 Thread Johan Ehnberg
tagore wrote:
> Hi!
> 
> I'd like a custom schedule per host (host*.pl).
> 
> Example backup start:
> host1.pl:   19.00
> host2.pl:   20.00
> host3.pl:   21.00
> 
> How can I do this?
> 
> Which $Conf parameter?
> 
> 
> Thanks
> 

This has been asked a few times before. In short you have a few options:

- WakeupSchedule (for specifying when NOT to run anything)
- MaxBackups (from what it seems, maybe you want to spread out the load)
- Blackouts (for specifying when not to run unless necessary)
- cron jobs (for forcing a run outside BackupPC, beware - not ideal)

Cron is the only way to accomplish exactly what you're looking for.
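A hedged crontab sketch for the backuppc user - the install path and the
serverMesg argument order are from memory, so check the docs:

# force a full backup of host1 at 19.00 every day
0 19 * * * /usr/share/backuppc/bin/BackupPC_serverMesg backup host1 host1 backuppc 1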

Regards,
Johan

--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-13 Thread Johan Ehnberg
Les Mikesell wrote:
> Johan Ehnberg wrote:
>>> OK. I can see now why this is true. But it seems like one could
>>> rewrite the backuppc rsync protocol to check the pool for a file with
>>> same checksum  before syncing. This could give some real speedup on
>>> long files. This would be possible at least for the cpool where the
>>> rsync checksums (and full file checksums) are stored at the end of
>>> each file.
>> Now this would be quite the feature - and it fits perfecty with the idea 
>> of smart pooling that BackupPC has. The effects are rather interesting:
>>
>> - Different incremental levels won't be needed to preserve bandwidth
>> - Full backups will indirectly use earlier incrementals as reference
>>
>> Definite whishlist item.
> 
> But you'll have to read through millions of files and the common case of 
> a growing logfile isn't going to find a match anyway.  The only way this 
> could work is if the remote rsync could send a starting hash matching 
> the one used to construct the pool filenames - and then you still have 
> to deal with the odds of collisions.
> 

Sure, you are pointing to something real there. What I don't see is why 
we'd have to do an (extra?) read through millions of files: that is done 
with every full anyway, and in the case of an incremental it would only 
be necessary for new/changed files. It would in fact also speed up those 
logs because of rotation: an old log changes name but is still found on 
the server. The same applies to big moved directories, which are painful 
for remote backups.

I suspect there is no problem in getting the hash with some tuning to 
File::RsyncP? It's just a command, as long as the protocol allows it.

And collisions aren't exactly a performance problem? BackupPC handles 
them nicely from what I've seen.

/johan

--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying in a file instead of backing up?

2009-01-13 Thread Johan Ehnberg
> OK. I can see now why this is true. But it seems like one could
> rewrite the backuppc rsync protocol to check the pool for a file with
> same checksum  before syncing. This could give some real speedup on
> long files. This would be possible at least for the cpool where the
> rsync checksums (and full file checksums) are stored at the end of
> each file.

Now this would be quite the feature - and it fits perfectly with the idea 
of smart pooling that BackupPC has. The effects are rather interesting:

- Different incremental levels won't be needed to preserve bandwidth
- Full backups will indirectly use earlier incrementals as reference

Definite wishlist item.

/johan

--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: apparent rsync error

2009-01-02 Thread Johan Ehnberg


Terri Kelley wrote:
> reference my previous message shown below:
> 
> I deleted the host, removed the files under /var/backuppc/pc/HOST for 
> this host, then re-entered the host info in BackupPC. I still get the 
> same error.
> 
> I really need some help here. Anyone have any ideas?
> 
> Terri Kelley
> 
> Begin forwarded message:
> 
>> *From: *Terri Kelley
>> *Date: *December 30, 2008 10:49:53 PM CST
>> *To: *questions and support
>> *Subject: **apparent rsync error*
>>
>> I have setup BackupPC on a Centos box and am attempting to backup 
>> another centos server. Looking in Last bad XferLog I get the following:
>>
>> Remote[1]: rsync: push_dir#3 "/home/backuppc/15" failed: No such file 
>> or directory (2)
>> Remote[1]: rsync error: errors selecting input/output files, dirs 
>> (code 3) at main.c(602) [sender=2.6.8]
>> Read EOF:
>> Tried again: got 0 bytes
>> fileListReceive() failed
>>
>> I cannot find anywhere on the backuppc server where I have 
>> /home/backuppc/ for anything. I have other directories listed but 
>> nothing there.
>>
>> Any clues where else I look?
>>
>> Thanks,
>>
>> Terri Kelley
>> FMB
>> Network Engineer
>>

Hi!

Have you tried running the backup manually, and double checked your 
settings, especially includes/excludes?

Other than that I can't say. Yours is the only report with that exact 
error that I can find anywhere. I'd also try an rsync forum, mailing 
list, or the rsync developers.

Good luck,
Johan

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC working for a year, all of a sudden nothing works

2009-01-02 Thread Johan Ehnberg
> 1) increased my RAM from 1G to 3G
> 2) upgraded from ubuntu 7.10 to ubuntu 8.10

Faulty RAM can give you random unexplained problems, perhaps like the 
ones you describe. Cryptography is one of the things that triggers many 
faults (failing a kernel compile is another good indicator). Ubuntu 
comes with Memtest (chosen at boot time in GRUB) - run a full set of 
tests and see what happens.

Check the most recent files in /var/log/ for any errors. In your case 
the computer reboots - check /var/log/dmesg, look just above the bootup 
messages.

Make sure you have enough disk space left by running 'df -h'.

Good luck!

/johan

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


  1   2   >