Re: [BackupPC-users] Missing backup files

2022-11-04 Thread Adam Goryachev via BackupPC-users



On 5/11/2022 01:04, G.W. Haywood via BackupPC-users wrote:

Hi there,

On Fri, 4 Nov 2022, Mark Murawski wrote:


...
This is the most recently finished full backup [51] for /etc/ssl/private
...
There's no files in there!! Just directories!? Everything is missing

And it looks like the *entire* backup system looks like this. I didn't
even know that my backups are completely broken and missing all files.


Incidentally I'm not sure that I'd want the 'backuppc' user to be able
to read private data normally only readable by root, but it's your call
and it might even be that you have it set up that way - I don't know.
FTAOD I'm just trying to help.


I just had to comment here...

I don't understand why you would NOT want backuppc to have at least read 
access to ALL data, including data only accessible to root. I assume you 
would not be suggesting that you run a separate backup system for each 
user, so why would you want to either:


1) Not backup root data
2) Run a separate backup solution just for root data

I guess this will come back to how you set up your data security etc., but 
regardless of what you do, I would strongly suggest you ensure ALL data 
is backed up (because it is always the supposedly unimportant file that 
turns out to be critical and needs to be restored most urgently).


So, for me: I use SSH + rsync to back up ALL target systems, connecting as 
the root user on the destination, and I simply use the same method 
for localhost.
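
As an illustration only (the share list is an assumption, and this assumes a v4-style rsync-over-ssh setup as described in the BackupPC docs), the relevant per-host settings for that approach look roughly like:

$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = [
  '/'
];
$Conf{RsyncSshArgs} = [
  '-e',
  '$sshPath -l root'
];

The BackupPC server's SSH key then needs to be accepted for root logins on each target (e.g. in root's authorized_keys), which is exactly the trade-off being discussed above.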


As for advice: definitely test your backups, make sure they work, and 
verify by restoring a large enough sample of files and comparing that the 
actual content matches what you would expect. One neat "feature request" 
would be to have BPC perform a "verify" where it would simply show all 
files that have changed since the last backup, i.e. it does everything 
except add changed/new files to the pool.



So, while I haven't followed the whole thread, consider posting your log 
and/or config for the host in question, along with output such as:


$ sudo ls -ld /etc /etc/ssl /etc/ssl/private /etc/ssl/private/*

Then we could provide additional guidance/suggestions.

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] When does compression and de-duplication happen?

2022-09-19 Thread Adam Goryachev via BackupPC-users
It depends on the version of BackupPC (v3 or v4) as to the exact 
sequence of events, but in either case the files are processed one at a 
time as they are received. So if there is an existing matching file from 
another host in the pool, the new file will only require additional space 
during the transfer of that file (I think BPC v4 with rsync will avoid 
transferring the file as well).


On 19/9/2022 18:20, Kenneth Porter wrote:

When backing up a new system that's similar to an existing system, do 
I need enough space on the backup media for the entire new system, or 
just what's different? Will the entire client be pulled over and then 
de-duped, or does that happen as each file is pulled, comparing it to 
what's already in the pool? - 



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-18 Thread Adam Goryachev via BackupPC-users


On 17/2/2022 23:43, Bruno Rogério Fernandes wrote:

Maybe I've got a solution

Instead of modifying backuppc behavior, I'm planning to disable 
compression setting at the server and create a FUSE filesystem that 
transparently compresses all the images using jpeg-xl format and put 
backuppc pool on top of that.


The only problem I can think of is that every time backuppc has to do 
some reading the FUSE will also need to decompress images on the fly. 
I have to do some testing because my server is not very powerful, just 
a dual-core system. 


I was thinking of something sort of similar...

Why not use a FUSE filesystem on the client, which acts as a kind of 
overlay? All directory operations are transparently passed through to 
the native storage location. Reads/writes, however, are filtered by the 
"compression" before being transferred to the server. The saved bytes at 
the end are converted to nulls, which keeps the file length the same as 
the server expects, but will compress well with pretty much any 
compression algorithm.


By not modifying the directory information, all the rsync comparisons 
will work without any modification. There is no added load for backuppc, 
and in addition, there is no change to the client when accessing images 
since it would access the real location, not the FUSE mounted version.


Just my thoughts...




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-13 Thread Adam Goryachev via BackupPC-users



On 14/9/21 02:00, Juergen Harms wrote:
This is not the place to fight for being right, but to understand and 
document help for users who hit this kind of problem.


Trying to understand: how do you define separate and different 
profiles ("per-host override configs") for each of your 18 different 
PCs in one single .pl file (i.e. your file at 
/etc/backuppc/hostname.pl)? Or do you mean by hostname.pl a list of 
specific files, where hostname.pl stands for an enumeration of 18 files 
with PC-specific names?


I suspect he means 18 files, with each file representing the specific 
config for that specific host. This is the way BackupPC is designed: 
global config in config.pl and specific config for a single host in 
hostname.pl. There are extensions to this whereby you could have config 
which is specific to a group of hosts in a group.pl, which is then 
"included" into each of the hosts within the group, but that is outside 
the scope of this discussion.
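
As a rough sketch (the file names and settings here are hypothetical, not taken from the earlier posts), a per-host override file typically contains only the settings that differ from config.pl, and a shared group file can be pulled in with a plain Perl include:

# /etc/backuppc/hostname.pl -- hypothetical per-host override
$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = [
  '/etc',
  '/home'
];
# optionally pull in settings shared by a group of hosts
# (the included file must end with a true value for require):
# require '/etc/backuppc/webservers-group.pl';
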
If the latter is the case, our disagreement is very small: each of 
these files in /etc/backuppc provides config info for one PC, and the 
pc/ directory does no harm, but is not used (I tried both variants - 
with and without specifying pc/ - both work)


The "pc" symlink (it's not a directory within the Debian package) is a 
compatibility layer to make the Debian package compatible with the 
standard BackupPC documentation and user expectations outside of the 
Debian community. So if you asked for help on the list, you might be 
advised to create a host config file BPC as etc/pc/hostname.pl, assuming 
you are unaware of any specific details, you might navigate to 
/etc/backuppc/pc/ and create a hostname.pl file. This will work as 
expected. If you were aware, you might navigate to /etc/backuppc and 
create the hostname.pl file, which would have the exact same result 
(working as expected).
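
A quick way to see this for yourself (paths assume the Debian layout described above) is to check where the symlink points and confirm both spellings reach the same file:

readlink -f /etc/backuppc/pc
ls -l /etc/backuppc/hostname.pl /etc/backuppc/pc/hostname.pl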


IMHO, it would appear that you had some config issue, and because things 
were not working you looked for something to blame; the pc symlink 
looked strange, and so you blamed that (at least, that is what I did in 
the past). Once you understand that this is a non-issue and not relevant 
to whatever problem you are actually seeing, you can ignore it and 
move on.


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Moving to 4.4.0, sudo on FreeBSD

2021-07-23 Thread Adam Goryachev via BackupPC-users
It sounds to me like you might have restricted your source IP in the *BSD 
host's .ssh/authorized_keys file. Maybe double-check those restrictions.
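
For reference (the address and key below are made up), such a restriction is a from= prefix on the key in ~/.ssh/authorized_keys, and it silently rejects connections from any other source:

from="192.0.2.10" ssh-rsa AAAAB3... backuppc@backupserver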


Regards,
Adam

On 24/7/21 14:27, Brad Alexander wrote:


I ran across what appears to be the reason for the issue that I am 
having. I found the following issue in my console:


/var/log/console.log:Jul 23 23:52:11 danube kernel: Jul 23 23:52:11 danube sudo[2866]: backuppc : command not allowed ; PWD=/usr/home/backuppc ; USER=root ; COMMAND=/usr/bin/rsync --server --sender -slHogDtprcxe.iLsfxC

I don't quite understand it. It appears that

$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';

in my config.pl  is overriding

$Conf{RsyncClientPath} = 'sudo /usr/local/bin/rsync';

in my freebsd hosts .pl files. Are per-host config files no longer 
supported? Is there another way to specify the path for the rsync 
command on a per-host or per-OS basis?


Thanks,
--b


On Fri, Jul 23, 2021 at 4:28 PM Brad Alexander wrote:


I have been running BackupPC 3.x for many years on a Debian Linux
box. I just expanded my TrueNAS box with larger drives, grew my
pool, and am in the process of converting from BackupPC 3.3.1 on
the dedicated server (that has gotten a bit snug on the drive
space) to a 4.4.0 install in a FreeBSD jail on my TrueNAS box,
using the guide at

https://www.truenas.com/community/threads/quickstart-guide-for-backuppc-4-in-a-jail-on-freenas.74080/,
and another page for the changes needed for rsync. I am backing up
both FreeBSD and Linux boxes.

So at this point, the linux boxes are backing up on the 4.4
installation, but the FreeBSD boxes are not. Both are working on
the 3.3.1 machine. I transferred all of my .pl files
from the old backup box to the 4.4.0 jail, and they are identical
to the old configs. So does anyone have any ideas about what could
be happening? I have a log of an iteration of the backup test at

https://pastebin.com/KLKxGYT1 
It is stopping to ask for a password, which it shouldn't be doing,
unless it is looking for rsync_bpc on the client machines.

Thoughts?

Thanks,
--b



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run VPN script before backup - PingCmd?

2021-07-13 Thread Adam Goryachev via BackupPC-users


On 14/7/21 00:18, Rob Morin wrote:

Wow, ok so that worked!

I put the Connect2Office.sh in the DumpPreUserCmd
It returns a zero as exit status too
And drops the vpn when done with DumpPostUserCmd


The DumpPreUserCmd script looks like this:

#!/bin/bash
sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
sleep 10
echo $?
/bin/true

DumpPostUserCmd looks like this:

#!/bin/bash
sudo killall openvpn
echo $?
/bin/true

You might consider a better method to ensure you close the "right" 
openvpn tunnel (there could be cases where you have more than one). 
Usually the simplest is to have the config create a pid file on start; 
then you can simply kill `cat /run/openvpn/mytunnel.pid`. It might also 
be an idea to confirm that the pid in this file still points to openvpn 
before killing it, but at least it's working for you now. The rest are 
just potential improvements you or someone else might need/want in 
future.
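
As a sketch of that idea (the pid file path is an assumption, and this is untested): the start script asks openvpn to record its pid, and the post-dump script only kills that specific process after checking it is still openvpn.

#!/bin/bash
# DumpPreUserCmd sketch: start the tunnel and record its pid
sudo openvpn --daemon --writepid /run/openvpn/office.pid \
    --config /etc/openvpn/gateway.hardent.com.ovpn
sleep 10
/bin/true

#!/bin/bash
# DumpPostUserCmd sketch: kill only the tunnel we started, if it is still openvpn
PIDFILE=/run/openvpn/office.pid
if [ -f "$PIDFILE" ] && [ "$(ps -p "$(cat "$PIDFILE")" -o comm=)" = "openvpn" ]; then
    sudo kill "$(cat "$PIDFILE")"
fi
/bin/true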

Thanks a bunch Adam.
I hope this helps others!
Have a great day everyone!


Regards,
Adam


On Mon, Jul 12, 2021 at 7:52 PM Adam Goryachev via BackupPC-users 
<backuppc-users@lists.sourceforge.net> wrote:



On 13/7/21 05:32, Rob Morin wrote:
> Hello all...
>
> I was looking at a way to start up my vpn from our remote backup
site
> to the office when backuppc starts a job.
>
> I googled around for quite a bit and saw some people were using  a
> script in place of the pingcmd parameter.
>
> I have tried that but i cant get it to work, as well as stop the
> connection when done using the PostDumpCmd.
>
> In a host, where the PingCmd text box is located i entered:
> /usr/local/bin/Connect2Office.sh
> And made sure the check mark was there and I saved it.
>
> The script itself is below, not much at all, really.
>
> #!/bin/bash
> /bin/true
> sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
> echo $?
>
> I added the --daemon in order to put the process in the
> background while running
> /bin/true is there because i thought the exit status had to be
something
> and the echo $? is there for same exit status reason.
>
> The user backuppc is allowed to sudo that script via the sudoers
file.
>
> Now , when I manually run this command as the user backuppc or
> root,  from the command line, all works well, and I can
manually start
> a backup and it completes fine.
>
> However, when I click on the start incremental job from GUI for the
> same host, as a test, the log file simply shows the below and
nothing
> gets backed up.
>
> 2021-07-12 14:49:45 incr backup started back to 2021-07-12 14:33:53
> (backup #0) for directory /etc
>
> Then after several minutes of nothing i dequeue the backup and
get the
> below, which is of course normal.
>
> 2021-07-12 14:51:26 Aborting backup up after signal INT
>
> I am sure I am doing something stupid
> Any help would be appreciated.
>
> Have a great day!
>
>
I'm not sure of using PingCmd for this, but why not use the
DumpPreUserCmd
http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_dumppreusercmd_
<http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_dumppreusercmd_>

? The stdout will be sent to the log for you to see what is happening.

As for the script, usually you would run the /bin/true as the last
command so that it will ignore any other exit status and always show
"successful". So based on the current script, that line is pointless
unless you moved it to after the openvpn command.

You might also need to check any capabilities, probably backuppc
doesn't
have NET_CAP or ability to create tunnels etc, so once you are
sure the
script is being run (maybe add a touch /tmp/myscript) then you might
want to define a openvpn log file so you can see what it is doing
and/or
why it fails

You might also need a sleep or some other test to ensure the
tunnel is
actually working/passing traffic, as openvpn will return before the
tunnel is up, and then backuppc will attempt to start the backup.

Regards,
Adam




Re: [BackupPC-users] job queue order

2021-07-12 Thread Adam Goryachev via BackupPC-users



On 13/7/21 09:02, Kenneth Porter wrote:

How can I change the order in the queue?

I just added 18 new "hosts" (actually 6, but with 3 backup jobs per 
host). How can I push them to the front of the queue to initialize 
their first backup? Is there some UI to rearrange the queue order? I 
don't want to force a new job to start running immediately, to avoid 
loading down the network. I just want to make sure those jobs run next.


Pretty sure you could manually start a backup on those hosts you want 
done first, but it will only actually start up to your configured 
$Conf{MaxUserBackups} value. I'm not sure, but I suspect the rest will 
simply sit at the "top of the queue".


Regards,
Adam

--



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run VPN script before backup - PingCmd?

2021-07-12 Thread Adam Goryachev via BackupPC-users


On 13/7/21 05:32, Rob Morin wrote:

Hello all...

I was looking at a way to start up my vpn from our remote backup site 
to the office when backuppc starts a job.


I googled around for quite a bit and saw some people were using a 
script in place of the PingCmd parameter.


I have tried that but I can't get it to work, nor can I stop the 
connection when done using the PostDumpCmd.


In the host config, in the PingCmd text box, I entered:
/usr/local/bin/Connect2Office.sh
and made sure the check mark was there and saved it.

The script itself is below, not much at all, really.

#!/bin/bash
/bin/true
sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
echo $?

I added the --daemon in order to put the process in the 
background while running

/bin/true is there because i thought the exit status had to be something
and the echo $? is there for same exit status reason.

The user backuppc is allowed to sudo that script via the sudoers file.

Now, when I manually run this command as the user backuppc or 
root from the command line, all works well, and I can manually start 
a backup and it completes fine.


However, when I click on the start incremental job from GUI for the 
same host, as a test, the log file simply shows the below and nothing 
gets backed up.


2021-07-12 14:49:45 incr backup started back to 2021-07-12 14:33:53 
(backup #0) for directory /etc


Then after several minutes of nothing, I dequeue the backup and get the 
below, which is of course normal.


2021-07-12 14:51:26 Aborting backup up after signal INT

I am sure I am doing something stupid
Any help would be appreciated.

Have a great day!


I'm not sure about using PingCmd for this, but why not use 
DumpPreUserCmd 
(http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_dumppreusercmd_)? 
The stdout will be sent to the log so you can see what is happening.


As for the script, usually you would run /bin/true as the last 
command so that it ignores any other exit status and always reports 
"successful". So based on the current script, that line is pointless 
unless you move it to after the openvpn command.


You might also need to check capabilities; the backuppc user probably 
doesn't have NET_CAP or the ability to create tunnels etc. So once you 
are sure the script is being run (maybe add a touch /tmp/myscript), you 
might want to define an openvpn log file so you can see what it is doing 
and/or why it fails.


You might also need a sleep or some other test to ensure the tunnel is 
actually working/passing traffic, as openvpn will return before the 
tunnel is up, and then backuppc will attempt to start the backup.
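
Pulling those suggestions together, a revised script might look roughly like this (a sketch only; the sleep length and log path are assumptions, and a real check that the tunnel passes traffic would be better than a fixed sleep):

#!/bin/bash
# Connect2Office.sh sketch, intended for DumpPreUserCmd
touch /tmp/Connect2Office.ran          # prove the script was actually invoked
sudo openvpn --daemon --log /var/log/openvpn-backuppc.log \
    --config /etc/openvpn/gateway.hardent.com.ovpn
sleep 15                               # crude wait for the tunnel to come up
/bin/true                              # always report success to BackupPC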


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Problem with WakupSchedule and Backupplan

2021-05-04 Thread Adam Goryachev via BackupPC-users


On 4/5/21 22:00, Ralph Sikau wrote:

On Wednesday, 28.04.2021 at 16:30 +, backuppc-users-requ...@lists.sourceforge.net wrote:

However, I have a suspicion that you are on the right track.  If a backup is 
missed, then it is put into the queue and it will start as soon as it is able 
to.  You may be able to use the blackout hours to help.

Greg,
I think it would help if I could get answers to these
questions:
1. For how long does the backup system STAY awake after
having been awakened according to the WakeupSchedule?

Until the queue is empty.

2. When a backup starts at 11:30 pm will it go on then over
midnight during the following night or will it go on hold
at midnight?
A started backup will continue until finished regardless of blackout 
periods etc.

3. Is there a possibility to see what is hold in the backup
queue?

Yes, on the web interface, click "Current Queues"

Maybe you have the answers.


More information:

When backuppc wakes up, it will put all hosts that are due to be backed 
up on the queue (i.e. their backup schedule and last completed backup 
times are too far apart). It will then take the number of jobs from the 
queue that your config says it can run in parallel, and start them. If 
a backup starts but is inside the blackout window, then it is 
immediately stopped (i.e. it never really starts the xfer) and is 
removed from the queue. The same happens if the ping time is too long, 
or whatever other constraint suggests the backup has failed/can't 
start. The next backup on the queue will then start. Eventually, all 
backups on the queue complete, and backuppc goes back to sleep.


If the backups took too long and continued past the next wakeup period, 
then all due hosts not already on the queue will be added to the queue. 
This is why you can set the wakeup schedule to every 5 minutes without 
causing a problem. The wakeup schedule basically just defines the minimum 
amount of time after a backup becomes due before the backup will be 
placed on the queue.
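
For reference (the values below are illustrative, not from this thread), $Conf{WakeupSchedule} in config.pl is just a list of hours since midnight at which BackupPC checks for due hosts:

$Conf{WakeupSchedule} = [1..23];   # wake hourly; fractional hours (e.g. 4.25 = 4:15am) are also accepted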


Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC permission issue?

2021-03-31 Thread Adam Goryachev via BackupPC-users


On 1/4/21 00:08, Joseph Bishay wrote:

Hello Adam and everyone,

Thank you for the reply.  I've responded below:

On Tue, Mar 30, 2021 at 10:42 PM Adam Goryachev via BackupPC-users 
<mailto:backuppc-users@lists.sourceforge.net>> wrote:


On 31/3/21 12:26, Joseph Bishay wrote:



I have BackupPC backing up a Linux client and it appears to only
back up certain files.  The pattern seems to be that if the
directory has permissions of -rw-r--r-- BackupPC can enter, read
the files and back them up correctly, but if the directory has
permissions of drwx------ it creates that directory but cannot
enter and read the files within it.

The error log file shows multiple lines of:
Remote[1]: rsync: opendir "/directory/with/files" failed:
Permission denied (13)

Other parts of the filesystem are being backed up correctly it
appears.  The BackupPC automatically connects as the user
BackupPC on the client and that backupPC user has the ability to
run rsync as root. On the client I have:

$ cat /etc/sudoers.d/backuppc giving:
backuppc ALL=NOPASSWD: /usr/bin/rsync
backuppc ALL=NOPASSWD: /usr/bin/whoami  #added this one for debugging

From BackupPC running the command:
ssh -l backuppc client_IP "whoami"
returns backuppc

and running the command
ssh -l backuppc client_IP "sudo whoami"
returns root

so it seems to be working correctly.

In the client config file on BackupPC, variable is set as:
RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath
$argList+"


Aren't you missing a sudo somewhere in the command? not sure how
you have defined rsyncPath, but that looks like it could be the issue.

Maybe you could post the logs which will show the actual commands
being run after variable expansion.

Regards,
Adam


I am not sure if there should be a sudo somewhere or how that works, 
unfortunately - I do not understand this very well.  rsyncClientPath 
is defined as /usr/bin/rsync.  It appears rsync is working since I am 
getting part of the drive backed up, just not certain folders.


The Xferlog file shows:

Contents of file /var/lib/backuppc/pc/client_IP/XferLOG.0.z, modified 
2021-03-28 21:25:06


full backup started for directory /
Running: /usr/bin/ssh -q -x -l backuppc client_IP /usr/bin/rsync 
--server --sender --numeric-ids --perms --owner --group -D --links 
--hard-links --times --block-size=2048 --recursive --ignore-times . /



You are definitely missing a "sudo" in there. If you look at what you have, 
you are calling ssh with some flags ("-q -x"), using the account backuppc 
(-l backuppc), to log in to the remote machine "client_IP", and once 
logged in, running /usr/bin/rsync with some options etc.



ssh -l backuppc client_IP "whoami"
This is the same example you posted above, as you can see, it is running 
as the user backuppc



ssh -l backuppc client_IP "sudo whoami"
As you can see, adding the "sudo" means you are going to end up running 
the command as root.



RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath $argList+"

I would suggest changing this to:

RsyncClientCmd = "$sshPath -q -x -l backuppc $host /usr/bin/sudo 
$rsyncPath $argList+"


Assuming your sudo is in /usr/bin/sudo. To check, login and run:

which sudo

Pretty sure that should solve the permissions problem, although I don't 
use sudo with backuppc, so there could be other issues that I'm not 
aware of.


Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC permission issue?

2021-03-30 Thread Adam Goryachev via BackupPC-users


On 31/3/21 12:26, Joseph Bishay wrote:

Hello,

I hope you are all doing very well today.

I have BackupPC backing up a Linux client and it appears to only back 
up certain files.  The pattern seems to be that if the directory has 
permissions of -rw-r--r-- BackupPC can enter, read the files and back 
them up correctly, but if the directory has permissions of drwx------ 
it creates that directory but cannot enter and read the files within it.


The error log file shows multiple lines of:
Remote[1]: rsync: opendir "/directory/with/files" failed: Permission 
denied (13)


Other parts of the filesystem are being backed up correctly it 
appears.  The BackupPC automatically connects as the user BackupPC on 
the client and that backupPC user has the ability to run rsync as 
root.  On the client I have:


$ cat /etc/sudoers.d/backuppc giving:
backuppc ALL=NOPASSWD: /usr/bin/rsync
backuppc ALL=NOPASSWD: /usr/bin/whoami  #added this one for debugging

From BackupPC running the command:
ssh -l backuppc client_IP "whoami"
returns backuppc

and running the command
ssh -l backuppc client_IP "sudo whoami"
returns root

so it seems to be working correctly.

In the client config file on BackupPC, variable is set as:
RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath $argList+"

Aren't you missing a sudo somewhere in the command? I'm not sure how you 
have defined rsyncPath, but that looks like it could be the issue.


Maybe you could post the logs which will show the actual commands being 
run after variable expansion.


Regards,
Adam

I am not sure if the issue is a file / directory permission issue, or 
a BackupPC configuration issue, or something else. Any help would be 
greatly appreciated!


Thank you,
Joseph

P.S. I sent this email before to the mailing list but it did not go 
through as I was not a member.  I subscribed and am re-sending it.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Adam Goryachev via BackupPC-users


On 12/3/21 00:03, Dave Sherohman wrote:


If I were to set $Conf{MaxBackups} = 1 for one specific host, how 
would that be handled?  Would it prevent that specific host from 
running backups unless there are no other backups in progress?  Would 
it prevent any other backups from being started before that host 
finished?  Would it do both?  Or is that an inherently-global setting 
that has no effect if set for a single host?


My use-case here is that I've got a lot of linux hosts and a handful 
of windows machines.  The linux hosts work great with standard 
ssh/rsync configuration, no problems there.


The windows machines, on the other hand, are using a windows backuppc 
client that our windows admin found on sourceforge and it's having... 
problems... with handling shadow volumes.  As in it appears to be 
failing to create them, which causes backup runs to take many hours as 
it waits for "device or resource busy" files to time out.  Which ties 
up available slots in the MaxBackups limit and prevents the linux 
machines from being scheduled.


So I'm thinking that it might work to temporarily set the windows 
hosts to MaxBackups = 1, if that would prevent multiple windows hosts 
from running at the same time and free up slots for the linux hosts to 
run.  If it would also prevent linux hosts from running when a windows 
host is in progress, though, then that would just make things worse.


Or is there some other way I could specify "run four backups at once, 
BUT only one of these six can run at a time (alongside three others 
which aren't in that group)"?


I'm pretty sure this has been discussed before, and it is not possible. 
However, I would suggest spending a bit more time to resolve the issues 
with the Windows server backups. There is an updated set of instructions 
posted recently to the list (check the archives), and if you need some 
help to get something working, the list is a great place to ask. Once it 
works, the Windows machines will back up just as well as the Linux ones.


HTH

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users


On 27/2/21 08:23, backu...@kosowsky.org wrote:

Adam Goryachev wrote at about 05:48:56 +1100 on Saturday, February 27, 2021:

  > >
  > >   > I was missing the ClientShareName2Path. I've added that in, but now I
  > >   > get another error:
  > >   >
  > >   > No such NTFS drive 'c:' skipping corresponding shadow setup...
  > >   >     'c' => /cygdrive/c/shadow/c-20210226-234449
  > >   > Eval return value: 1
  > >   >
  > >   > I'm thinking it might be a case sensitive issue, so am waiting for it 
to
  > >   > finish before adjusting the config and retrying:
  > >   > $Conf{RsyncShareName} = [
  > >   >    'C'
  > >   > ];
  > >   > $Conf{ClientShareName2Path} = {
  > >   >      'C' => '/C',
  > >   > };
  > >   >
  > >   > ie, using all capital C instead of the lower case c. Or are there any
  > >   > other hints?
  > >   >
  > > It shouldn't be case sensitive.
  > > And personally, I think I use lower case 'c'
  > >
  > > Tell me what the following commands give:
  > >
  > > # cygpath -u C:
  > > # cygpath -u c:
  > >
  > > # ls $(cygpath -u C:)/..
  > > # ls $(cygpath -u c:)/..
  > >
  > > # mount -m | grep "^C: "
  > > # mount -m | grep "^c: "
  > >
  > Results:
  >
  > $ cygpath -u C:
  > /cygdrive/c
  > $ cygpath -u c:
  > /cygdrive/c
  > $ ls $(cygpath -u C:)/..
  > c  d
  > $ ls $(cygpath -u c:)/..
  > c  d
  > $ mount -m | grep "^C: "
  > $ mount -m | grep "^c: "
  > $ mount -m
  > none /cygdrive cygdrive binary,posix=0,user 0 0
  >
  > $ mount
  > C:/cygwin64/root/bin on /usr/bin type ntfs (binary,auto)
  > C:/cygwin64/root/lib on /usr/lib type ntfs (binary,auto)
  > C:/cygwin64/root on / type ntfs (binary,auto)
  > C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
  > D: on /cygdrive/d type udf (binary,posix=0,user,noumount,auto)
  >
  > So all seem to work with lowercase or uppercase, but for some reason,
  > neither works when from the script.
  >
  > The only "non-standard" thing I've done is all the cygwin tools are
  > installed to C:\cygwin64\root instead of the default which installs them
  > to C:\cygwin64\
  >
  > OK, from re-checking the error and the script, it looks like it's
  > failing because mount -m doesn't show the c: ...
  >

Yup.
On my machine, "mount -m" gives the letter drives...
You could try substituting the following

-if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: \S+ ntfs " <(mount 
-m); then
+if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: on \S+ type ntfs " 
<(mount); then


That seems to have fixed it, at least the shadow was created, and backup 
is starting. Will have to wait a while for the backup to complete, but 
looks good so far.


   my $sharenameref=$bpc->{Conf}{ClientShareName2Path};
   foreach my $key (keys %{$sharenameref}) { #Rewrite 
ClientShareName2Path

      $sharenameref->{$key} = "$shadowdir$2-$hosttimestamp$3" if
    $sharenameref->{$key} =~ 
m#^(/cygdrive)?/([a-zA-Z])(/.*)?$#; #Add shadow if letter drive

   }
   print map { "   '$_' => $sharenameref->{$_}
" } sort(keys %{$sharenameref}) unless $?;
}}
Junction created for C:\shadow\C-20210227.133140-keep <<===>> 
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy5\\

   'C' => /cygdrive/c/shadow/C-20210227.133140-keep
Eval return value: 1
__bpc_progress_state__ backup share "C"
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name C --bpc-bkup-num 2 
--bpc-bkup-comp 3 --bpc-bkup-prevnum 1 --bpc-bkup-prevcomp 3 
--bpc-bkup-inode0 608083 --bpc-log-level 1 --bpc-attrib-new -e 
/usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.133140-keep/ /
full backup started for directory C (client path 
/cygdrive/c/shadow/C-20210227.133140-keep)

started full dump, share=C
Xfer PIDs are now 25288
xferPids 25288
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name C 
--bpc-bkup-num 2 --bpc-bkup-comp 3 --bpc-bkup-prevnum 1 
--bpc-bkup-prevcomp 3 --bpc-bkup-inode0 608083 --bpc-log-level 1 
--bpc-attrib-new -e /usr/bin/ssh\ -l\ BackupPC 
--r

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users


On 27/2/21 03:31, backu...@kosowsky.org wrote:

Adam Goryachev via BackupPC-users wrote at about 00:41:40 +1100 on Saturday, 
February 27, 2021:

  > Also, I was thinking it should be possible to have this script in a
  > single file, and then just include or require it for each host, does
  > that work? That would make the config file look a lot cleaner, and
  > updating the script in a single file is better than updating for each host.


Specifically I do things like the following:

my $jhost = $_[1];
$Conf{BlackoutPeriods} = []
 if $jhost =~ /^(machineA|machine[0-9]|othermachine)$/;

if($jhost =~ /^machineA)$/) {
 $Conf{BackupsDisable} = 0; #Scheduled/automatic
}elsif($jhost =~/^ABCD$/) { #Specify hosts to disable
 $Conf{BackupsDisable} = 2; #Disable
}elsif($jhost =~/machine[0-9]*$/) { #
$Conf{BackupsDisable} = 2; #CHANGE TO 1 to enable manual
}   

etc.
Such logic can be continued for any differences between machines...


Ouch, that looks overly complex... it means mixing configs from 
different hosts into the same "script". I'll look into these more 
advanced options after I get the simple version working. I'm thinking 
something as simple as:


require ./windows_shadow.pl;



  > I was missing the ClientShareName2Path. I've added that in, but now I
  > get another error:
  >
  > No such NTFS drive 'c:' skipping corresponding shadow setup...
  >     'c' => /cygdrive/c/shadow/c-20210226-234449
  > Eval return value: 1
  >
  > I'm thinking it might be a case sensitive issue, so am waiting for it to
  > finish before adjusting the config and retrying:
  > $Conf{RsyncShareName} = [
  >    'C'
  > ];
  > $Conf{ClientShareName2Path} = {
  >      'C' => '/C',
  > };
  >
  > ie, using all capital C instead of the lower case c. Or are there any
  > other hints?
  >
It shouldn't be case sensitive.
And personally, I think I use lower case 'c'

Tell me what the following commands give:

# cygpath -u C:
# cygpath -u c:

# ls $(cygpath -u C:)/..
# ls $(cygpath -u c:)/..

# mount -m | grep "^C: "
# mount -m | grep "^c: "


Results:

$ cygpath -u C:
/cygdrive/c
$ cygpath -u c:
/cygdrive/c
$ ls $(cygpath -u C:)/..
c  d
$ ls $(cygpath -u c:)/..
c  d
$ mount -m | grep "^C: "
$ mount -m | grep "^c: "
$ mount -m
none /cygdrive cygdrive binary,posix=0,user 0 0

$ mount
C:/cygwin64/root/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin64/root/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin64/root on / type ntfs (binary,auto)
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
D: on /cygdrive/d type udf (binary,posix=0,user,noumount,auto)

So all seem to work with lowercase or uppercase, but for some reason, 
neither works when called from the script.


The only "non-standard" thing I've done is all the cygwin tools are 
installed to C:\cygwin64\root instead of the default which installs them 
to C:\cygwin64\


OK, from re-checking the error and the script, it looks like it's 
failing because mount -m doesn't show the c: ...


Thanks,
Adam


  > I've also updated the script based on the new version you posted
  > recently, though I'm assuming that won't make much difference to this issue.
  >
  > So, nope, that didn't work, I'll post more of the output below. I can
  > manually login to the machine and run the command (from bash shell)
  >
  > $ wmic shadowcopy call create Volume=C:\\
  > Executing (Win32_ShadowCopy)->create()
  > Method execution successful.
  > Out Parameters:
  > instance of __PARAMETERS
  > {
  >      ReturnValue = 0;
  >      ShadowID = "{2EB3E2AF-D099-44BA-8D43-A48B1760C73F}";
  > };
  >
  > So it seems to suggest that it should work, most likely I'm again
  > missing some obvious config, or doing something wrong, but seems it
  > should be pretty close...
  >
  > Config file now has:
  >
  > $Conf{ClientNameAlias} = [
  >    '10.1.1.119'
  > ];
  > $Conf{XferMethod} = 'rsync';
  > $Conf{RsyncdUserName} = 'BackupPC';
  > $Conf{RsyncShareName} = [
  >    'C'
  > ];
  > $Conf{ClientShareName2Path} = {
  >      'C' => '/C',
  > };
  > $Conf{RsyncSshArgs} = [
  >    '-e',
  >    '$sshPath -l BackupPC'
  > ];
  > $Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
  > $Conf{PingMaxMsec} = 100;
  >
  > Plus of course a copy of your script config file, updated today.
  >
  >
  > Backup type: type = full, needs_full = , needs_incr = , lastFullTime =
  > 1614263640, opts{f} = 1, opts{i} = , opts{F} =
  > cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 10.1.1.119
  > cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 10.1.1.119
  > CheckHostAlive: ran '/bin/ping -c 1 -w 3 10.1.1.119'; returning 0.209
  > XferLOG file /var/lib/

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users
my $sharenameref=$bpc->{Conf}{ClientShareName2Path};
   foreach my $key (keys %{$sharenameref}) { #Rewrite 
ClientShareName2Path

      $sharenameref->{$key} = "$shadowdir$2-$hosttimestamp$3" if
    $sharenameref->{$key} =~ 
m#^(/cygdrive)?/([a-zA-Z])(/.*)?$#; #Add shadow if letter drive

   }
   print map { "   '$_' => $sharenameref->{$_}
" } sort(keys %{$sharenameref}) unless $?;
}}
No such NTFS drive 'C:' skipping corresponding shadow setup...
   'C' => /cygdrive/c/shadow/C-20210227.003013-keep
Eval return value: 1
__bpc_progress_state__ backup share "C"
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name C --bpc-bkup-num 1 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 608082 --bpc-log-level 1 --bpc-attrib-new -e 
/usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.003013-keep/ /
full backup started for directory C (client path 
/cygdrive/c/shadow/C-20210227.003013-keep)

started full dump, share=C
Xfer PIDs are now 4016
xferPids 4016
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name C 
--bpc-bkup-num 1 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 
--bpc-bkup-prevcomp -1 --bpc-bkup-inode0 608082 --bpc-log-level 1 
--bpc-attrib-new -e /usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.003013-keep/ /
rsync: [sender] change_dir "/cygdrive/c/shadow/C-20210227.003013-keep" 
failed: No such file or directory (2)






Adam Goryachev via BackupPC-users wrote at about 17:04:21 +1100 on Friday, 
February 26, 2021:
  > Hi,
  >
  > I've just setup a new Win10 machine, and thought I'd try this solution
  > to do the backup...
  >
  > So far, I have installed the MS SSH server, using the powershell command
  > line installation method, copied the backuppc ssh public key across,
  > used a powershell script to fix permissions on the file. Confirmed I
  > could login from the backuppc host as a new backuppc user
  > (administrative access).
  >
  > I then downloaded cygwin, ran the setup, and installed rsync plus all
  > other defaults (did not install SSH).
  >
  > I then changed the default SSH shell to bash instead of powershell
  > (registry key).
  >
  > Fixed the PATH variable in the .bashrc to ensure cygwin's /bin was included
  >
  > Copied the below script to my new hosts.pl config file, along with the
  > following host specific config:
  >
  > $Conf{ClientNameAlias} = [
  >    '10.1.1.119'
  > ];
  > $Conf{XferMethod} = 'rsync';
  > $Conf{RsyncdUserName} = 'BackupPC';
  > $Conf{RsyncShareName} = [
  >    '/cygdrive/C/'
  > ];
  > $Conf{RsyncSshArgs} = [
  >    '-e',
  >    '$sshPath -l BackupPC'
  > ];
  > $Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
  > $Conf{PingMaxMsec} = 100;
  > $Conf{BlackoutPeriods} = [];
  >
  > However, when I try to run the backup, I get the following:
  >
  > Executing DumpPreUserCmd: &{sub {
  > #Load variables
  > my $timestamp = "20210226-012400";
  > my $shadowdir = "/cygdrive/c/shadow/";
  > my $shadows = "";
  >
  > my $bashscript = "DAYS=2\
  >
  > etc (cut)
  >
  >print map { "   '$_' => $sharenameref->{$_}
  > " } sort(keys %{$sharenameref}) unless $?;
  > }}
  > Eval return value: 1
  > Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name /cygdrive/C/ --bpc-bkup-num 0 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 5 
--bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L 
--stats --checksum --one-file-system --timeout=72000 10.1.1.119:/cygdrive/C/ /
  > full backup started for directory /cygdrive/C/
  > Xfer PIDs are now 31043
  > This is the rsync child about to exec /usr/l

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-25 Thread Adam Goryachev via BackupPC-users

Hi,

I've just setup a new Win10 machine, and thought I'd try this solution 
to do the backup...


So far, I have installed the MS SSH server, using the powershell command 
line installation method, copied the backuppc ssh public key across, 
used a powershell script to fix permissions on the file. Confirmed I 
could login from the backuppc host as a new backuppc user 
(administrative access).


I then downloaded cygwin, ran the setup, and installed rsync plus all 
other defaults (did not install SSH).


I then changed the default SSH shell to bash instead of powershell 
(registry key).


Fixed the PATH variable in the .bashrc to ensure cygwin's /bin was included

Copied the below script to my new hosts.pl config file, along with the 
following host specific config:


$Conf{ClientNameAlias} = [
  '10.1.1.119'
];
$Conf{XferMethod} = 'rsync';
$Conf{RsyncdUserName} = 'BackupPC';
$Conf{RsyncShareName} = [
  '/cygdrive/C/'
];
$Conf{RsyncSshArgs} = [
  '-e',
  '$sshPath -l BackupPC'
];
$Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
$Conf{PingMaxMsec} = 100;
$Conf{BlackoutPeriods} = [];

However, when I try to run the backup, I get the following:

Executing DumpPreUserCmd: &{sub {
   #Load variables
   my $timestamp = "20210226-012400";
   my $shadowdir = "/cygdrive/c/shadow/";
   my $shadows = "";

   my $bashscript = "DAYS=2\

etc (cut)

  print map { "   '$_' => $sharenameref->{$_}
" } sort(keys %{$sharenameref}) unless $?;
}}
Eval return value: 1
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name /cygdrive/C/ --bpc-bkup-num 0 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 5 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ 
BackupPC --rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/C/ /
full backup started for directory /cygdrive/C/
Xfer PIDs are now 31043
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name /cygdrive/C/ 
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 5 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ 
BackupPC --rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/C/ /
Xfer PIDs are now 31043,31172
xferPids 31043,31172
rsync: [sender] send_files failed to open "/cygdrive/C/DumpStack.log.tmp": 
Device or resource busy (16)
newrecv cd+ ---r-x---   328384,  328384 0 .
rsync: [sender] send_files failed to open "/cygdrive/C/hiberfil.sys": Device or 
resource busy (16)
rsync: [sender] send_files failed to open "/cygdrive/C/pagefile.sys": Device or 
resource busy (16)
rsync: [sender] send_files failed to open "/cygdrive/C/swapfile.sys": Device or 
resource busy (16)


As far as I can tell, this would suggest that we are not actually doing 
the backup from the shadow copy... so, good news, I got a full backup of 
the machine (excluding open files), but bad news is I don't know why it 
didn't work.


I can login from the backuppc host as the backuppc user on the windows 
machine, and I can then create a shadow volume and delete it, but not 
sure what else to test, or where to get additional logs from


Any suggestions greatly appreciated

Regards,
Adam

On 26/2/21 07:31, Greg Harris wrote:
Okay, I was just making things way harder than they needed to be. 
 Sorry Jeff.  Doug, from my understanding DeltaCopy is nearly just an 
alternative version of cygwin-rsyncd.  I think all you need to do is 
dump these scripts into the bottom of the .pl file for the host. 
 Otherwise, all of the other setup you normally do should be the same.


Thanks,

Greg Harris

On Feb 23, 2021, at 10:58 AM, backu...@kosowsky.org wrote:


Yes. SSH needs to be minimally configured just as you do when using
the 'rsync' method (over ssh) for any other system.

And SSH is pretty basic for any type of communication, login, file
transfer between machines in the 20th century (with the exception
maybe of pure Windows environments)

Technically, SSH may not be a dependency for rsync in that you can
use 'rsyncd' without SSH but the vast majority of rsync usage between
local and remote machines (with or without backuppc) is over ssh.

Greg Harris wrote at about 15:51:26 + on Tuesday, February 23, 2021:
I was hoping that I could reply with at 

Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Adam Goryachev via BackupPC-users


On 11/2/21 10:14, Felix Wolters wrote:

Jeff,

I appreciate your detailled discussion of the topic, and I consider your
arguments to be strong.

But this …


Finally, while the sudoer code I shared in my previous note was just
aimed at restricting the sudoer power to rsync with specific flags,
I'm pretty sure that it could be easily expanded to
also limit access to only certain files/directories but just extending
the sudoer line to add the paths desired, thereby further restricting
the reach of the sudo command allowed.

seems to be the critical point to me. Have your tried that? (I haven’t
yet; a quick search at least doesn’t show up manifestations of this
approach.)


At the end of the day, with rrsync, you are still allowing root
access to ssh and that just doesn't feel right.

Well … any time you administrate a remote machine, you gain root access
over ssh to it, so this alone is a danger we are used to dealing with. On the
other hand, with the rsync-via-sudoers approach – don’t we open rsync to
the full system, so basically an attacker on the currupted server would
be able to basically rsync the whole machine to himself? So, at the end
of the day, aren’t we trading a potential security vulnerability
(rrsync) with a heavy real one (rsync via sudoers)?


It seems that both approaches add some security; some of that security 
is overlapping, and some is unique to each approach. If you really want 
to protect as much as possible, why not use both? Have a non-root user 
call sudo, which calls rrsync.


This is based on my (minimal) understanding that rrsync is simply a script 
which checks the arguments given to the real rsync before calling it.
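
To make that concrete, here is a rough, untested sketch of the combined approach on the client (the key, paths and user names are made up): the forced command in authorized_keys runs rrsync read-only via sudo, and sudoers permits only that exact command. Note that rrsync reads SSH_ORIGINAL_COMMAND to learn the rsync arguments, and sudo's default env_reset strips that variable, so it would need to be kept.

# ~backuppc/.ssh/authorized_keys on the client (one line):
command="sudo /usr/bin/rrsync -ro /",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3... backuppc@backupserver

# /etc/sudoers.d/backuppc-rrsync:
Defaults:backuppc env_keep += "SSH_ORIGINAL_COMMAND"
backuppc ALL=(root) NOPASSWD: /usr/bin/rrsync -ro /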


PPS, also keep in mind that avoiding sudo avoids security complications 
in sudo, just as avoiding rrsync avoids potential security bugs in rrsync 
(e.g. the ability to exploit argument processing to get remote code 
execution), both of which might have been avoided with plain rsync and 
ssh alone.


Just my 0.02c



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Adam Goryachev via BackupPC-users


On 10/2/21 02:56, Felix Wolters wrote:

Hello!

Let me first thank you for providing BackupPC as open source software. I
appreciate it a lot and consider it to be one of the most usefull backup
systems out there!

I’d like to use it with restricted access to the client, so a
potentially corrupted BackupPC server wouldn’t be able to damage the
client machine and data. Using rsync for transfer with a Linux client,
rrsync (restricted rsync – as part of the rsync package) would be a
straigt forward solution to restrict an incoming ssh connection to only
rsync and only a given folder which I will set read only – which would
perfectly do the trick. Unfortunately, this doesn’t seem to work with
BackupPC over rsync, as far as I can see. I’m positive rrsync generally
works on the client as I use it successfully with plain rsync over ssh
on the same machine.

I’ve seen rare information on the internet about this, and it wouldn’t
help me so far.

Thank you for some help or instruction!


Hi Felix,

I'm not familiar with rrsync, but perhaps the first step would be to try 
it and see. If it doesn't work, then include some logs and what debug 
steps you have taken, or other information that might help us to help you.


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Checking rsync progress/speed/status

2021-01-12 Thread Adam Goryachev via BackupPC-users


On 13/1/21 09:21, Les Mikesell wrote:

On Tue, Jan 12, 2021 at 4:15 PM Greg Harris  wrote:

Yeah, that “if you can interpret it” part gets really hard when it looks like:

select(7, [6], [], [6], {tv_sec=60, tv_usec=0}) = 1 (in [6], left {tv_sec=59, 
tv_usec=99})
read(6, 
"\0\200\0\0\4\200\0\7\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
32768) = 27748

Scrolling at 32756 lines in around 30 seconds.

That tells you it is not hung up.  You could grep some 'open's out of
the stream to see what files it is examining.  Sometimes the client
side will do a whole lot of reading before it finds something that
doesn't match what the server already has.


I tend to use something like:

strace -e open -p <pid>

Also:

ls -l /proc/<pid>/fd

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Imnproving backup speed

2021-01-07 Thread Adam Goryachev via BackupPC-users



On 8/1/21 06:30, Alexander Kobel wrote:

Hi Sorin,

On 1/7/21 9:39 AM, Sorin Srbu wrote:

Hello all!

Trying to improve the backup speed with BPC and looked into setting 
noatime

in fstab.

But this article states some backup programs may bork if noatime is set.

https://lonesysadmin.net/2013/12/08/gain-30-linux-disk-performance-noatime-nodiratime-relatime/ 



What will BPC in particular do if noatime is set?


exactly what it's supposed to do. noatime or at least relatime (or 
perhaps recently lazytime) is the recommended setting:
https://backuppc.github.io/backuppc/BackupPC.html#Optimizations 



I think it depends on whether you are applying this setting change on 
the BPC server, and specifically the BPC pool drive, or if you are 
applying it to the clients and/or root FS of the BPC server.


If you have a separate filesystem for the BPC pool, then using this 
setting on that filesystem will not have any adverse impact, but will 
likely reduce overhead. Changing this setting elsewhere will have the 
documented impacts, and you would need to assess the results of those 
impacts based on your own requirements (or provide a lot more 
information for anyone else to comment on).
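
For what it's worth, on a dedicated pool filesystem that just means adding noatime (or relatime) to the mount options; the device, mount point and filesystem type below are placeholders:

# /etc/fstab (sketch)
UUID=xxxxxxxx-xxxx  /var/lib/backuppc  ext4  defaults,noatime  0  2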


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc in large environments

2020-12-01 Thread Adam Goryachev via BackupPC-users


On 2/12/20 10:35, G.W. Haywood via BackupPC-users wrote:

Hi there,

On Tue, 1 Dec 2020, backuppc-users-requ...@lists.sourceforge.net wrote:


How big can backuppc reasonably scale?


Remember you can scale vertically or horizontally: either get a bigger 
machine for your backups, or get more small machines. If you had 3 (or 
more) small machines, you could set 2 to back up each target, which gives 
you some additional redundancy in your backup infrastructure, as long 
as your backup windows can support this, or the backups don't add enough 
load to interfere with your daily operations.


I guess at some point using too small machines would be more painful to 
manage, but there are a lot of options for scaling. Most people (vague 
observations) I think just scale vertically and add enough RAM or IO 
performance to handle the load.




... daily backup volume is running around 750 GB per day, with two
database servers providing the majority of that volume (400 GB/day
from one and 150 GB/day from the other).


That's the part which bothers me.  I'm not sure that BackupPC's ways
of checking for changed files marry well with database files.  In a
typical relational database server you'll have some *big* files which
are modified by more or less random accesses.  They will *always* be
changed from the last backup.  The backup of virtual machines is not
dissimilar at the level of the partition image.  You need to stop the
machine to get a consistent backup, or use something like a snapshot.

I just want to second this, my preference is to snapshot the VM (a pre 
backup script from backuppc) and then backup the content of the VM (the 
actual target I use is the SAN server rather than the VM itself). For 
the DB, you should exclude the actual DB files, and have a script 
(either called separately or from BPC pre backup) which will export/dump 
the DB to another consistent file. If possible, this file should be 
uncompressed (allows rsync to better see the unchanged data), and with 
the same filename/path each day (again so rsync/BPC will see this as a 
file with some small amount of changes instead of a massive new file).


If you do that, you might see your daily "changes" reduce compared to 
before.
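
As a rough sketch only (the database type, paths and script name are my
assumptions, not anything from the original posts), a pre-backup dump script
hooked in via $Conf{DumpPreUserCmd} could be as simple as:

#!/bin/sh
# hypothetical /usr/local/bin/dump-db-for-bpc.sh
# Write an uncompressed dump to the same path every run, so rsync/BPC sees
# one file with small daily changes rather than a brand new file each day.
mysqldump --single-transaction --all-databases > /var/backups/db/all-databases.sql

The live DB data directory itself would then go into $Conf{BackupFilesExclude}.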



... I have no idea what to expect the backup server to need in the
way of processing power.


Modest.  I've backed up dozens of Windows workstations and five or six
servers with just a 1.4GHz Celeron which was kicking around after it
was retired from the sales office.  The biggest CPU hog is likely to
be data compression, which you can tune.  Walking directory trees can
cause rsync to use quite a lot of memory.  You might want to look at
something like Icinga/Nagios to keep an eye on things.

FYI, I backup 57 hosts, my current BPC pool size is 7TB, 23M files. Some 
of my backup clients are external on the Internet, some are windows, 
most are linux.


My BPC server has 8G RAM and a quad core CPU:
Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz

As others have said, you are most likely to be IO bound after the first 
couple of backups. You are probably advised to grab a spare machine, 
setup BPC, run a couple of backups against a couple of smaller targets, 
once you have it working (if all goes smoothly, under 2 hours), target a 
larger server, you will soon start to see how it performs in your 
environment, and where the relevant bottlenecks are.


PS, All you need to think about is the CPU requirement to compress 750GB 
per backup cycle (you only need to compress the changed files), and the 
disk IO to write the 750GB (plus a lot of disk IO to do all the 
comparisons, which is probably the main load, which is why you also want 
a lot of RAM to cache the directory trees).


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Latest backuppc fuse version

2020-11-20 Thread Adam Goryachev via BackupPC-users

Hi,

I was looking on the backuppc github project, but can't seem to find the 
current version of the backuppc fuse program. I only find old versions 
attached to the mailing list.


Can anyone advise where the current version is maintained please?

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] What to do with a restore that's going on far too long?

2020-11-17 Thread Adam Goryachev via BackupPC-users



On 17/11/20 23:47, Adam Hardy wrote:

Which strace output do you monitor to see the process is hung up?
Sorry, I've only little experience with low level stuff.

I usually start with a general strace of the process (strace -p <pid>); if I 
see the process doing "things" then I know it's not truly stuck. Sometimes I 
will limit it to only showing file opens, so I can see it progress through a 
backup (or restore in your case), or I might use ls -l /proc/<pid>/fd to 
see which files are currently open/in use; if I see the files changing, 
then I know it's not stuck, and I can get some idea of the progress.


Regards,
Adam



-Original Message-----
From: Adam Goryachev via BackupPC-users <
backuppc-users@lists.sourceforge.net>
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: backuppc-users@lists.sourceforge.net
CC: Adam Goryachev 
Subject: Re: [BackupPC-users] What to do with a restore that's going on
far too long?
Date: Tue, 17 Nov 2020 22:53:02 +1100

On 17/11/20 22:39, Adam Hardy wrote:

OK, I just saw Raoul's message.

Backuppc_zcat is the tool I need.

Thanks Raoul


Personally, I prefer strace.

Regards,
Adam


-Original Message-
From: Adam Hardy 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] What to do with a restore that's going
on
far too long?
Date: Tue, 17 Nov 2020 10:50:26 +

Thanks Brad but linux complains it's zlib and can't handle it.

adam@gondolin:~$ sudo zcat
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z

gzip: /media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z:
not
in gzip format
adam@gondolin:~$ sudo file
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z: zlib
compressed data
adam@gondolin:~$

I can't find a command line tool package with cat or less either :(

What do you do then? Just WTF & kill it?

Cheers
Adam

-Original Message-
From: Brad Alexander 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] What to do with a restore that's going
on
far too long?
Date: Mon, 16 Nov 2020 23:11:19 -0500

You could try zless or zmore, e.g. zless RestoreLOG.z

--b

On Mon, Nov 16, 2020 at 2:19 PM Adam Hardy <
adam.ha...@cyberspaceroad.com> wrote:

Hi

I'm using 3.3.0 on Linux Mint, to restore to a linux laptop.

I'm trying to access the restore log for a restore that is now
running
for about 12 hours and surely should be done. I can see there's a
substantial RestoreLOG.z but I can't tail it because it's
compressed.

Is there a way?

I'd like to know what it's trying to do before I kill it.

Assuming it is frozen, I'd also appreciate it if someone can tell
me
the best way to kill the job without losing the log.

Thanks!
Adam
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:
https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] What to do with a restore that's going on far too long?

2020-11-17 Thread Adam Goryachev via BackupPC-users



On 17/11/20 22:39, Adam Hardy wrote:

OK, I just saw Raoul's message.

Backuppc_zcat is the tool I need.

Thanks Raoul


Personally, I prefer strace.

Regards,
Adam



-Original Message-
From: Adam Hardy 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] What to do with a restore that's going on
far too long?
Date: Tue, 17 Nov 2020 10:50:26 +

Thanks Brad but linux complains it's zlib and can't handle it.

adam@gondolin:~$ sudo zcat
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z

gzip: /media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z: not
in gzip format
adam@gondolin:~$ sudo file
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z: zlib
compressed data
adam@gondolin:~$

I can't find a command line tool package with cat or less either :(

What do you do then? Just WTF & kill it?

Cheers
Adam

-Original Message-
From: Brad Alexander 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] What to do with a restore that's going on
far too long?
Date: Mon, 16 Nov 2020 23:11:19 -0500

You could try zless or zmore, e.g. zless RestoreLOG.z

--b

On Mon, Nov 16, 2020 at 2:19 PM Adam Hardy <
adam.ha...@cyberspaceroad.com> wrote:

Hi

I'm using 3.3.0 on Linux Mint, to restore to a linux laptop.

I'm trying to access the restore log for a restore that is now
running
for about 12 hours and surely should be done. I can see there's a
substantial RestoreLOG.z but I can't tail it because it's compressed.

Is there a way?

I'd like to know what it's trying to do before I kill it.

Assuming it is frozen, I'd also appreciate it if someone can tell me
the best way to kill the job without losing the log.

Thanks!
Adam
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Problems migrating backups from CentOS 6 to CentOS 7

2020-10-14 Thread Adam Goryachev via BackupPC-users
Since both are ext4 filesystems, I'd prefer a dd copy from one to the 
other. See this link for some sample command line suggestions on how to do 
this over the network:


https://www.ndchost.com/wiki/server-administration/netcat-over-ssh

Just make sure the source and destination LVs are unmounted during the 
copy. It is likely to be significantly faster than the rsync method (this 
could be many days faster, though if the total filesystem size is only 
200G, then it may not make that much difference).
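
A minimal sketch of that approach (LV names and the hostname are placeholders;
both filesystems must be unmounted first):

dd if=/dev/vg_old/backuppc bs=64M | ssh root@newserver 'dd of=/dev/vg_new/backuppc bs=64M'

Afterwards run e2fsck -f on the destination, and resize2fs if the new LV is
larger than the old one, before mounting it.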


Regards,
Adam



 While trying to transfer the backups in /var/lib/BackupPC from the C6
 to the C7 machine, I run out of space.  On C6, the file system is a
 175 GiB LV in ext4 holding about 137 GB.  On C7, I started with a 200
 GiB LV in ext4 and ran out of space.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] First backup failing Host Key and No files dumped

2020-10-01 Thread Adam Goryachev


On 2/10/20 03:26, David Hoskinson wrote:

Investigating your tip but here is what i get

Last login: Thu Oct  1 13:09:08 2020 from 10.60.2.25
[root@asbackup1 ~]# su -s /bin/bash backuppc
bash-4.4$ ssh-copy-id -i .ssh/id_rsa.pub 'root@vpnsvr'

/usr/bin/ssh-copy-id: ERROR: failed to open ID file '.ssh/id_rsa.pub': 
Permission denied

bash-4.4$ ll
bash: ll: command not found
bash-4.4$ ls -al
ls: cannot open directory '.': Permission denied
bash-4.4$



To accept the host key, all you need to do is:

su -s /bin/sh backuppc

ssh root@client

You don't actually need to authenticate. However, the reason you 
couldn't copy the ssh key across (which is what ssh-copy-id would do), 
is that you are probably not in the right directory.


su -s /bin/sh backuppc

cd ~

ssh-copy-id root@client

That should work better, though maybe you already copied the key across 
using some other method.
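
Once the key is in place, a quick way to confirm it works non-interactively
(run as the backuppc user; 'client' is whatever name BackupPC will use):

ssh -o BatchMode=yes root@client whoami

If that prints root without prompting for anything, BackupPC should be able to
connect the same way.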


Regards,
Adam



*From: *"Greg Harris" 
*To: *"backuppc-users" 
*Sent: *Thursday, October 1, 2020 1:06:28 PM
*Subject: *Re: [BackupPC-users] First backup failing Host Key and No 
files dumped


CAUTION: This email originated from outside your organization. 
Exercise caution when opening attachments or clicking links, 
especially from unknown senders.


David,

The backuppc user generally doesn’t have shell access.  You should be 
able to use something like:


su -s /bin/bash backuppc

ssh-copy-id -i .ssh/id_rsa.pub 'root@client.to.backup'


exit

On the server, you can add shell access to the backuppc user with:

usermod -s /bin/bash backuppc


Thanks,

Greg Harris

On Oct 1, 2020, at 12:00 PM, David Hoskinson
<david.hoskin...@astroshapes.com> wrote:

Hello all

I am doing a new install of BackupPC 4.4.0 and am having issues
with my first backup


XferLOG file /backup/astrobackup//pc/namesvr1/XferLOG.0.z created
2020-10-01 11:45:54
Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0,
newBkupNum = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx =
 (FillCycle = 0, noFillCnt = )
Running: /usr/bin/rsync_bpc --bpc-top-dir /backup/astrobackup/
--bpc-host-name namesvr1 --bpc-share-name /etc/named/
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
--bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-log-level 6
--bpc-attrib-new -e /usr/bin/ssh\ -l\ root
--rsync-path=/usr/bin/rsync --numeric-ids --perms --owner --group
-D --links --hard-links --times --block-size=2048 --recursive
--checksum --timeout=72000 namesvr1:/etc/named/ /
full backup started for directory /etc/named/
Xfer PIDs are now 2766
This is the rsync child about to exec /usr/bin/rsync_bpc
bpc_path_create(/backup/astrobackup//pc/namesvr1/0)
bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0,
KeepOldAttribFiles = 0
Host key verification failed.
rsync_bpc: connection unexpectedly closed (0 bytes received so
far) [Receiver]
bpc_sysCall_cleanup: doneInit = 1
RefCnt Deltas for new backup
Uncompressed HT:
Compressed HT:
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0
filesTotal, 0 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 2 inode
Parsing done: nFilesTotal = 0
rsync error: unexplained error (code 255) at io.c(226)
[Receiver=3.1.2.0]
rsync_bpc exited with fatal status 255 (65280) (rsync error:
unexplained error (code 255) at io.c(226) [Receiver=3.1.2.0])
Xfer PIDs are now
Got fatal error during xfer (No files dumped for share /etc/named/)
Backup aborted (No files dumped for share /etc/named/)


I have created root ssh keys and added them to authorized_keys on
the client.


We have an older 3.3 version that is running, but this is a new
install on a new box.  I can't seem to find anything.  I think the
host key verification is the clue.  Should I be able to log in to
the client from the server as backuppc?  Currently no passwd is
set on that account so I can't ssh to the client and accept the
fingerprint.

Thanks for any help or leads

David Hoskinson
Systems Administrator
Ext 226


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net

List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/







Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread Adam Goryachev



On 17/6/20 22:43, Stefan Schumacher wrote:

Yes, with backuppc 3.3, you can safely delete any incremental and
full
prior to the full backup that you want to keep. You can't just keep
the
latest incremental though (there are some options if that is what
you
really need).

Keep in mind though, that:
a) websites tend to be a lot of text (php, html, css, etc) which all
compresses really well
b) website content may not change a lot, and with the dedupe, you
may
not save a lot of space anyway

Hello,

thanks for your input. I already have found out that I should not
delete the log files under /var/lib/backuppc/pc/example.netfed.de/
because now it shows zero backups. Good that I tried it on an
unimportant system. Do I assume correctly that I can delete the
directories themselves safely and they will not be shown in the
Webinterface anymore?


It's been a long time since V3, but from memory, you would need to edit 
the "backups" file to remove the old entries, and prevent them showing 
up on the web interface. You might be able to delete the folders, and I 
think there is some script/process to attempt to "repair" the backups 
file, but I just edited it by hand the small number of times it was needed.
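
For what it's worth, a very rough sketch of that hand edit, from memory of the
V3 layout (the per-host backups file is tab-separated with the backup number in
the first column - check your own file before touching it):

cd /var/lib/backuppc/pc/example.netfed.de
cp -a backups backups.orig
awk -F'\t' '$1 != 12' backups.orig > backups   # drop the row for deleted backup 12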


I would guess you can delete log files for backups you delete, but you 
must keep log files for the backups you are keeping


Definitely better to ensure you keep a backup of any changes you make

No responsibility taken for any errors caused by the information 
provided, so be careful


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-16 Thread Adam Goryachev



On 16/6/20 21:29, Stefan Schumacher wrote:

Hello,

I use Backuppc to backup VMs running mostly Webservers and a few custom
services. As everyone knows, Websites have a lifetime and at a certain
point the customer wishes for the site to be taken offline. We have one
Backuppc which we use for one big, special customer who wants a
FullKeepCnt of 4,0,12,0,0,0,10.

Now I have multiple websites for which I have deactivated the
backup, but which still have multiple full and incremental backups
stored - up to 17 full backups to be exact.

Is there a way to delete all but the latest full backup and still be
able to restore the website on demand? Is this technically possible or
will this clash with the pooling and deduplication functions of
backuppc? How should I proceed? I am still using Backuppc 3.3, because
of problems with backuppc4. (No need to go into details here)

Yes, with backuppc 3.3, you can safely delete any incremental and full 
prior to the full backup that you want to keep. You can't just keep the 
latest incremental though (there are some options if that is what you 
really need).


Keep in mind though, that:
a) websites tend to be a lot of text (php, html, css, etc) which all 
compresses really well
b) website content may not change a lot, and with the dedupe, you may 
not save a lot of space anyway


Just my comments, you might be talking about a website like youtube with 
mostly video content, and massive amounts of it, so YMMV.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does BackupPc handle renamed files

2020-06-02 Thread Adam Goryachev



On 2/6/20 16:29, user655362...@outlook.com wrote:

Are then renamed files transferred again ?
Are the renamed files stored again ? (thus consuming 2x disk space)



What documentation have you read so far? What did that say?

What configuration do you have? What version do you have?

The answer to your first question will depend on a bunch of variables, 
you will need to provide a lot more information. The answer to your 
second is available on all the documentation, and almost never varies 
regardless of your configuration.


Regards,
Adam

--



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Odd question - probably user error

2020-01-12 Thread Adam Goryachev


On 13/1/20 11:19, Laurence Hurst wrote:
Ahhh, I think I know what's going on now. I run Debian stable on all 
my systems so still on version 3, yes.


It’s been several years since I copied the files over, before the old 
disks were sent for shredding, so they’ve been sitting there as 
disabled hosts and I’ve only now got around to finally sorting it out. 
Reading back over the documentation and searching the list archive, I 
think what I did was to just copied (preserving hardlinks - I am very 
familiar with the issues of copying entire, but not partial, BackupPC 
stores having done this many times, and currently do so monthly for 
offsite/DR purposes) the pc directories, so although the duplicated 
files were still hardlinked together (as they came from the same 
backuppc filesystem) nothing I copied over will be in this system’s 
pool (this process was suggested on this mailing list circa. 2012). I 
know I followed someone else’s advice on this for precisely the 
reasons you describe and this is the first, and I expect will be the 
only time, I’ve tried to partially or selectively access a subset of 
data in a BackupPC store.


What I was trying to do at the time was resurrect access to these 
specific hosts, so preserving hardlinks within their pc directories 
but not adding to the live system pool was sufficient (and within my 
understanding of the storage structure ;) ). You're spot on that 
trying to merge the pools myself is beyond my current understanding of 
the storage structure, which I don't "really really understand".


Assuming this is what I did, that makes sense of what I’m seeing now, 
if I’m getting an approx. 50% hit rate on files already inthe pool, on 
this live BackupPC, from other hosts and their backups - 50% is higher 
than I would have presumed but still plausible. Having found the old 
mailing list post suggesting just copying the pc directories (with the 
caveat that it loses the benefits of pooling for new backups) I’m 90% 
sure this is what’s going on, I may try and confirm it tomorrow (if 
that’s right, nothing from these old pc directories will exist in the 
pool).


I’ll have to modify my plan and work through one old host at a time, 
instead of restoring all the data at once then sorting it. I was 
expecting (having clearly forgotten how I got these old systems 
backups into my current BackupPC install) restoring all the old 
backups to my current desktop system would have negligible impact on 
the pool’s freespace and I could tidy up the defunct pc’s 
configuration and pc directories before I’d made a significant dent in 
sorting through the restored files. Hopefully doing it a host at a time 
then removing that host's configuration and pc directory will release 
enough space from files not shared between the old hosts to keep at 
least one backup copy (old or new) of all of the data as I go, just in 
case.


Thanks for taking the time to reply. I wouldn't have found the old 
message and realised the above hypothesis, which makes sense of the 
situation, without it.


I would suggest the following should work, with minimal issues/disk 
space issues.


1) Copy (with hardlinks preserved) the required pc directories from your old 
backup server to the new server (a rough sketch follows after this list). 
Ensure that none of the backup numbers conflict with existing backup numbers 
on the new server. Ideally, you would have no old backups on the new server 
to begin with.


2) Do the restore from BPC as needed.

3) Delete everything you copied in step 1 above

4) Do a full backup on the server the files were restored to in step 2
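
For step 1, a hedged sketch only (the old pool mount point, host name and
paths are made up; rsync -H preserves the hard links within the set being
copied):

rsync -aH /mnt/old-backuppc/pc/olddesktop/ /var/lib/backuppc/pc/olddesktop/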

You could do step 4 before step 3 but you will need sufficient space on 
the new BPC server, which you don't have based on your original post. 
Also note that the default BPC config will stop doing backups when the 
used disk space is 95% or higher, so be careful/check that scheduled 
backups of all your other live servers are still happening as needed.


Regards,
Adam



Laurence



On 12 Jan 2020, at 22:57, backu...@kosowsky.org wrote:

Assuming you are using BackupPC 3.x... which is based on hardlinks...
You can't "just" copy over and merge backups... Rather there are links
and pool/cpool chains and several other complexities.

I did write and post some routines that can do this by essentially
looking up each file, searching the pool for a match, and then either
replacing the 'pc' file with a hard link to an existing pool file or
creating a new link depending on whether the file already exists in
the pool. There are also several edge cases to be careful of.
This is of course a slow process and not recommended unless
you really really understand the structure of the backup storage and
know what you are doing (which clearly you don't :)


Laurence Hurst wrote at about 19:11:02 + on Sunday, January 12, 2020:

Hi I'm hoping someone can help me figure out what I've done and/or where
my expectations are wrong.

Bit of background: I've found some old disks that belonged to a BackupPC
pool, which 

Re: [BackupPC-users] Multiple CPU usage for compression?

2019-10-02 Thread Adam Goryachev



On 3/10/19 5:26 am, Daniel Berteaud wrote:

- Le 2 Oct 19, à 18:51,  p2k-...@roosoft.ltd.uk a écrit :


On 02/10/2019 15:46, Daniel Berteaud wrote:

- Le 1 Oct 19, à 10:51,  p2k-...@roosoft.ltd.uk a écrit :


Hmmm I am not so sure about that... because it appears the time it takes
to compress files also slows down the transfer of them. I was getting like
6Mb/s from a server on the same switch as the backup machine. One CPU
out of 16 was pegged at 100% under compression.

How do you know compression is the bottleneck ?


I happened to be watching htop at the time. I was surprised to see only
one core pegged at 100%

That doesn't mean this process is busy only doing compression (it might, but it 
could be doing something else)



Surely it would be
trivial to replace gzip with pigz and bzip2 with pbzip2?

BackupPC does not use an external binary to compress data so no, it wouldn't be
as trivial as s/gzip/pigz/


Oh? Then why is there a config variable for the gzip path? What is it
used for if not compression?

Curious.

It's for compression of archives, not pooled files

++



You could always disable compression, and then see if it solves your CPU 
issue
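
If I remember correctly that is just, in the per-host or global config:

$Conf{CompressLevel} = 0;

Note that (as far as I recall) compressed and uncompressed files live in
separate pools, so existing cpool data won't be re-used for new uncompressed
backups.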


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup fails after three days, possibly millions of files

2019-07-21 Thread Adam Goryachev


On 20/7/19 07:08, David Koski wrote:



On 7/16/19 4:27 PM, Adam Goryachev wrote:

On 17/7/19 4:22 am, David Koski wrote:


Regards,
David Koski
dko...@sutinen.com

On 7/8/19 6:16 PM, Adam Goryachev wrote:

On 9/7/19 10:23 am, David Koski wrote:
I am trying to back up about 24TB of data that has millions of 
files.  It takes a day or two before it starts backing up and then 
stops with an error. I did a CLI dump and trapped the output and 
can see the error message:


Can't write 32780 bytes to socket
Read EOF: Connection reset by peer
Tried again: got 0 bytes
finish: removing in-process file 
Shares/Archives//COR_2630.png

Child is aborting
Done: 589666 files, 1667429241846 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal
Not saving this as a partial backup since it has fewer files than 
the prior one (got 589666 and 589666 files versus 4225016)

dump failed: aborted by signal=PIPE

This backup is doing rsync over ssh.  I enabled SSH keepalive but 
it does not appear to be due to an idle network.  It does not 
appear to be a random network interruption because the time it 
takes to fail is pretty consistent, about three days. I'm stumped. 



Did you check:

$Conf{ClientTimeout} = 72000;

Also, what version of rsync on the client, what version of BackupPC 
on the server, etc?


I think BPC v4 handles this scenario significantly better, in fact 
a server I used to have trouble with on BPC3.x all the time has 
since been combined with 4 other servers (so 4 x the number of files 
and total size of data) and BPC4 handles it easily.





Thank you all for your input.  More information:

rsync version on client: 3.0.8 (Windows)
rsync version on server: 3.1.2 (Debian)
BackupPC version: 3.3.1
$Conf{ClientTimeout} = 604800

I just compared the output of two verbose BackupPC_dump runs and it 
looks like the files are reported to be backed up even though they 
are not.  For example, this appears in logs of both backup runs:


create   644  4616/545  1085243184 
/3412.zip


I checked and the file time stamp is year 2018.  The log files are 
full of these.  I checked the real time clock on both systems and 
they are correct.  There are also files that have been backed up 
that are not in the logs.


I suspect there are over ten million files but I don't have a good 
way of telling now.  Oddly, there are about 500,000 files backed up 
according to the log captured from BackupPC_dump and almost the same 
number actually backed up and found in pc//0, but they are 
different subsets of files.  I have been tracking memory and swap 
usage on the server and see no issues.


Is this a possible bug in BackupPC 3.3.1?


Please don't top-post if you can avoid it, at least not on mailing 
lists.


I just realised:

Read EOF: Connection reset by peer

This is a networking issue, not BackupPC. In other words, something 
has broken the network connection (in the middle of transferring a 
file, so I would presume it isn't due to some idle timeout, dropped 
NAT entry, etc). BackupPC has been told by the operating system that 
the connection is no longer valid, and so it has "cleaned up" by 
removing the in-progress file (partial).


I just completed another backup cycle that failed in the same manner 
but this time with a continuous ping with captured output. It didn't 
miss a beat.


A "continuous ping" doesn't prove a lack of a network connection issue. 
You would need to record a complete wireshark copy of the network 
interface, that would then tell you which machine "broke" the 
connection. Either way, see below, it could be windows that is causing 
the problem rather than your network.
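
Something along these lines would do it (the interface and client IP are
placeholders; adjust them, and leave it running until the next failure):

tcpdump -i eth0 -w /tmp/backup-session.pcap host 192.168.1.50 and port 22

Then open the capture in wireshark and look at which end sent the first
RST/FIN when the transfer died.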


It takes a day to start (presumably reading ALL the files on the 
client takes this long, you could improve disk performance, or 
increase RAM on the client to improve this).


You might be right.  But it's not a show stopper.



"and then stops with an error" - is that on the first file, or are 
some files successfully transferred? Is that the first large file? 
Does it always fail on the same file (seems not, since it previously 
got many more).


Good points.  Confirmed: Not the first file (over 600,000 files 
transferred first), not a large file (less than 20Meg), does not 
always fail on the same file or directory.




I'm thinking you need to check and/or improve network reliability, 
make sure both client and server are not running out of RAM/etc 
(mainly the backuppc client, the OOM might kill the rsync process), 
etc. Check your system logs on both client and server, and/or watch 
top output on both systems during the backup.


The network did not miss a beat and generally appears responsive. It 
has been checked.  The client and server RAM usage are tracked in 
Zabbix and not close to running out.  Only curious thing is swap is 
running out on the client (Windows Server 2016) even with 10GB RAM 
available, but still has about 2GB before crash.  Ser

Re: [BackupPC-users] Backup fails after three days, possibly millions of files

2019-07-16 Thread Adam Goryachev

On 17/7/19 4:22 am, David Koski wrote:


Regards,
David Koski
dko...@sutinen.com

On 7/8/19 6:16 PM, Adam Goryachev wrote:

On 9/7/19 10:23 am, David Koski wrote:
I am trying to back up about 24TB of data that has millions of 
files.  It takes a day or to before it starts backing up and then 
stops with an error.  I did a CLI dump and trapped the output and 
can see the error message:


Can't write 32780 bytes to socket
Read EOF: Connection reset by peer
Tried again: got 0 bytes
finish: removing in-process file 
Shares/Archives//COR_2630.png

Child is aborting
Done: 589666 files, 1667429241846 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal
Not saving this as a partial backup since it has fewer files than 
the prior one (got 589666 and 589666 files versus 4225016)

dump failed: aborted by signal=PIPE

This backup is doing rsync over ssh.  I enabled SSH keepalive but it 
does not appear to be due to an idle network.  It does not appear to 
be a random network interruption because the time it takes to fail 
is pretty consistent, about three days. I'm stumped. 



Did you check:

$Conf{ClientTimeout} = 72000;

Also, what version of rsync on the client, what version of BackupPC 
on the server, etc?


I think BPC v4 handles this scenario significantly better, in fact a 
server I used to have trouble with on BPC3.x all the time has since 
been combined with 4 other servers (so 4 x the number of files and 
total size of data) and BPC4 handles it easily.





Thank you all for your input.  More information:

rsync version on client: 3.0.8 (Windows)
rsync version on server: 3.1.2 (Debian)
BackupPC version: 3.3.1
$Conf{ClientTimeout} = 604800

I just compared the output of two verbose BackupPC_dump runs and it 
looks like the files are reported to be backed up even though they are 
not.  For example, this appears in logs of both backup runs:


create   644  4616/545  1085243184 /3412.zip

I checked and the file time stamp is year 2018.  The log files are 
full of these.  I checked the real time clock on both systems and they 
are correct.  There are also files that have been backed up that are 
not in the logs.


I suspect there are over ten million files but I don't have a good way 
of telling now.  Oddly, there are about 500,000 files backed according 
to the log captured from BackupPC_dump and almost the same number 
actually backed up and found in pc//0, but they are different 
subsets of files.  I have been tracking memory and swap usage on the 
server and see no issues.


Is this a possible bug in BackupPC 3.3.1?


Please don't top-post if you can avoid it, at least not on mailing lists.

I just realised:

Read EOF: Connection reset by peer

This is a networking issue, not BackupPC. In other words, something has 
broken the network connection (in the middle of transferring a file, so 
I would presume it isn't due to some idle timeout, dropped NAT entry, 
etc). BackupPC has been told by the operating system that the connection 
is no longer valid, and so it has "cleaned up" by removing the 
in-progress file (partial).


It takes a day to start (presumably reading ALL the files on the client 
takes this long, you could improve disk performance, or increase RAM on 
the client to improve this).


"and then stops with an error" - is that on the first file, or are some 
files successfully transferred? Is that the first large file? Does it 
always fail on the same file (seems not, since it previously got many more).


I'm thinking you need to check and/or improve network reliability, make 
sure both client and server are not running out of RAM/etc (mainly the 
backuppc client, the OOM might kill the rsync process), etc. Check your 
system logs on both client and server, and/or watch top output on both 
systems during the backup.


Try backing up other systems, try backing up a smaller subset (exclude 
some large directories, and then add them back in if you complete a 
backup successfully).


Overall, I would advise to upgrade to BPC v4.x, it handles backups of 
systems with huge number of files much better.


This doesn't look like a BPC bug, maybe a network driver, kernel, or 
something else, but not BPC (IMHO).


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


___
BackupPC-users mailing

Re: [BackupPC-users] Backup fails after three days, possibly millions of files

2019-07-08 Thread Adam Goryachev

On 9/7/19 10:23 am, David Koski wrote:
I am trying to back up about 24TB of data that has millions of files.  
It takes a day or two before it starts backing up and then stops with 
an error.  I did a CLI dump and trapped the output and can see the 
error message:


Can't write 32780 bytes to socket
Read EOF: Connection reset by peer
Tried again: got 0 bytes
finish: removing in-process file 
Shares/Archives//COR_2630.png

Child is aborting
Done: 589666 files, 1667429241846 bytes
Got fatal error during xfer (aborted by signal=PIPE)
Backup aborted by user signal
Not saving this as a partial backup since it has fewer files than the 
prior one (got 589666 and 589666 files versus 4225016)

dump failed: aborted by signal=PIPE

This backup is doing rsync over ssh.  I enabled SSH keepalive but it 
does not appear to be due to an idle network.  It does not appear to 
be a random network interruption because the time it takes to fail is 
pretty consistent, about three days.  I'm stumped. 



Did you check:

$Conf{ClientTimeout} = 72000;

Also, what version of rsync on the client, what version of BackupPC on 
the server, etc?


I think BPC v4 handles this scenario significantly better, in fact a 
server I used to have trouble with on BPC3.x all the time has since been 
combined with 4 other servers (so 4 x the number of files and total size 
of data) and BPC4 handles it easily.


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Files not taken from pool even when they are identical

2019-06-30 Thread Adam Goryachev




On 28/6/19 9:46 pm, Jean-Louis Biasini via BackupPC-users wrote:

Hi all,

I have a working installation on a centos7 server. I'm backing up 30+ 
linux server hosts with rsync. Since I was a bit surprised by the 
growing space taken on the backup server, I started to investigate the 
pooling mechanism. So I created simple identical text files on 2 servers 
to see if the file was shown as already in the pool while backing up the 
second server after having backed up the first. First I just created the 
same text file twice (i.e. same content, same md5sum, same permissions, 
same selinux context but different creation time); second I created it on 
the first server then rsynced it to the second to have it absolutely 
identical (rsync -aAXv). In both cases the file is shown as created by the 
second server's backup. Then I tried a bigger file created with fallocate 
-l 2M test.img and rsynced it the same way. In all cases my file is 
created again on the second backup. I also checked the hard link limit, 
which I increased from 32000 to 64000 (ext4 file-system here), with no 
improvement. Am I missing something?



You didn't mention which version of backuppc you are using, so I can't 
be sure, but from memory, with BPC 3.x that is the correct/expected 
behaviour. However, while the file is transferred from the second host 
to the BPC server, it *will* get pooled correctly, so there will only be 
a single on-disk version of the file. You can check this by confirming 
the hard link entry on the BPC server for the 2 hosts in question.


eg:

ls -i /var/lib/backuppc/pc//123/f%2f/ftest.txt 
/var/lib/backuppc/pc//124/f%2f/ftest.txt


If the inode number is the same for both files, then they are correctly 
de-duplicated/pooled.



This might be the same for BPC 4.x, although I recall (and have 
confirmed ages ago) that BPC 4.x doesn't transfer the file if the 
duplicate file is located on the same host. So, it may still transfer 
the file in your example (different hosts), but again should not be 
duplicated in the pool (I don't know how to verify this on BPC 4.x).


Maybe something like this:

/usr/lib/backuppc/bin/BackupPC_attribPrint 
/var/lib/backuppc/pc//123/f%2f/attrib_*


Check the inode entry, and do the same for your second host, make sure 
the inode entry is the same, and I assume this would mean that the 
pooling is working correctly.


PS, just because the xferlog shows a file is transferred, doesn't mean 
it isn't pooled after BPC receives the full file, and confirms the 
content is a match for the other file it already has.


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BACKUPPC V4 - rsync-bpc error connecting to client

2019-06-10 Thread Adam Goryachev

Apologies, the last email was truncated...


Hi David,



On 9/6/19 5:19 am, David Wynn via BackupPC-users wrote:
Well ... I checked on the NETSTORE box via PUTTY and it says that 
'rsync is
a tracked alias for /usr/sbin/rsync' which is why I guess that this 
works on

the old version of BackupPC (V3.3.1) ... that's all I have in the
RsyncClientPath field as an override. Anyways, I changed the V4 
override to
be the /usr/sbin/rsync field and tested . SAME result - it 
prepends the

command with the IP address of the client and of course dies on the 'not
found error'  I have attached a picture of that part of the log 
file to

show it is still happening 

The only place the IP address shows up is in the very last field that is
passed to the rsync_bpc command (192.168.1.6:/nfs/Public). It is also in
the ssh config file as the HostName field but without it, ssh will not 
run.

So whatever/whoever is generating the commands to be passed BY ssh is
somehow using the IP address as part of the info -- but danged if I 
can find

out which module/program/subroutine is performing that function.

At this point I seem to have two options --- either keep my old system up
and running since RSYNC works fine there, or change all my backups for the
NETSTORE back to SMB and run the new system. I would have thought that
this would have been an easy thing for the developers to determine the 
cause

of but it seems they don't follow this forum.

Thanks for the help and suggestions.


I think you might be rather frustrated at this point, and this just 
might aggravate you further, but you do seem like an experienced and 
knowledgeable person, so hopefully you will recognise it as intended.


Your issue seems rather simple, and almost certainly is not a developer 
issue, so I doubt it would attract the attention of a developer. Given 
that BackupPC 4.x is successfully used by a large number of other people 
(myself included), it's pretty unlikely to have such a significant 
problem (not backing up hosts via rsync). So, I would suggest to go back 
and take another look at your config files, and perhaps throw them away 
(move them out of the way), and start from the original distribution 
samples, and then customise towards your required config.


One significant issue is to try to migrate your old v3 config file to 
the new v4, but there are a lot of changes to the config, and while it 
might mostly work, there are going to be some surprises that can get you.


So, it looks like somewhere, you are building the ssh and rsync commands 
(which are done from multiple places), and ending up with adding the 
$host twice, and/or missing the definition of the rsync variable.


You might have posted the complete config file before (I haven't been 
paying too much attention to this thread, sorry), but could you post the 
sections of your config file that define:


$Conf{SshPath}
$Conf{RsyncArgs}
$Conf{RsyncArgsExtra}
$Conf{RsyncFullArgsExtra}
$Conf{RsyncSshArgs}
$Conf{RsyncClientPath}

Also, a full copy of the client config file (feel free to adjust/remove 
any username/password, though there shouldn't be any for this file).


Finally, the one single command I've found to be the *most* helpful in 
debugging any such issues is this:


/usr/lib/backuppc/bin/BackupPC_dump -f -vvv hostname

Which will just try to do a full backup, but show you on the console 
what it is doing through each step. You should make sure there is no 
scheduled backup for this host, and no in-progress backup for this host 
when you run this command. Under normal operation, you shouldn't use 
this command.


Hope that helps.

Regards,
Adam

--
--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Nightly Clean removed 0 files

2019-05-06 Thread Adam Goryachev



On 6/5/19 21:40, Stefan Schumacher wrote:

Hello,

I have a new backup-server - this time using a classic RAID setup
without ZFS and with much, much more storage (22TB net). I added the
hosts to backup last week on Thursday and just now checked the logs. It
seems that the Nightly Clean does not remove any duplicate files. These
are the relevant lines:

2019-05-06 01:02:57 Pool nightly clean removed 0 files of size 0.00GB
2019-05-06 01:02:57 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0
max links), 0 directories
2019-05-06 01:02:57 Cpool nightly clean removed 0 files of size 0.00GB
2019-05-06 01:02:57 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2019-05-06 01:02:57 Pool4 nightly clean removed 0 files of size 0.00GB
2019-05-06 01:02:57 Pool4 is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2019-05-06 01:02:57 Cpool4 nightly clean removed 0 files of size 0.00GB
2019-05-06 01:02:57 Cpool4 is 1948.57GB, 8107866 files (0 repeated, 0
max chain, 1842328 max links), 16512 directories

I have checked the older logs and there is no instance in which nightly
clean removed any files at all. The only thing shown in the logs is
cpool4 growing to a size of nearly 2TB in less than a week.

Am I correct in assuming that nightly deduplication is not working as
it should? How can I fix this? At this speed even the new server will
fill up in an inaceptably short amount of time.


The nightly clean will only remove files that no longer exist in any 
backup that needs to be saved. If the server has only just started 
backing up the hosts, then I would assume your retention period is 
longer than a week, therefore there are no old backups to expire, and 
even if there were, possibly no files have been deleted from the clients 
in that time and hence, there is nothing to remove.


It would be expected that initial backups would use significantly more 
data (per day) as the first backup of each server is completed. Later, 
once the number of saved backups is at the maximum (ie, each daily 
backup results in an old backup being removed) you should see disk space 
required is relatively stable, with slow growth (because users always 
use more and more disk over time).


What is the total size of all your clients used disk space? If that is 
around 2TB then there is clearly nothing to worry about.


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only Getting empty directory structure

2019-04-10 Thread Adam Goryachev

On 10/4/19 12:38 am, - Storm wrote:
is there any way to change that. i would like the option to be able to 
manually copy files.



Not the way you think. You can use BackupPC_restore or 
BackupPC_tarCreate to do that.
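
For example, a hedged sketch of pulling some files out of backup number 123 of 
host "myhost" (the host, backup number, share and paths are placeholders, and 
the binary may live elsewhere on your distro):

/usr/lib/backuppc/bin/BackupPC_tarCreate -h myhost -n 123 -s / /home/user | tar -xvf - -C /tmp/restore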



also if i check the folder size it's like 100k. if i take that drive to 
another machine will i be able to access the files?



Yes, as long as you have a working installation of BackupPC on the 
destination. Keep in mind, exactly like BPC3, all the data is in the 
(c)pool directories, the pc directory just contains "pointers" to the 
pool. In BPC4 this is extended, there is no longer a hardlink, so you 
need to look at the attrib files to find the correct pool file. BPC has 
all the tools and utilities to do this for you.



Perhaps someone will update the fuse plugin to work with BPC4, which 
could make this a lot easier.



Regards,

Adam




thanks for all the help
MJ

----
*From:* Adam Goryachev 
*Sent:* April 9, 2019 9:55 AM
*To:* backuppc-users@lists.sourceforge.net
*Subject:* Re: [BackupPC-users] Only Getting empty directory structure


On 9/4/19 23:32, - Storm wrote:
ah, in version 3.2 i could just go to the folder and type ls and all 
the files were viewable. you're saying that has been changed and you can 
only view the files through the web interface or using this 
backuppc_ls tool. is that correct?



Yes.

 >
 > In BPC4 there are only dirs and attrib files in the pc dirs, you can use
 > the CLI tool BackupPC_ls to view the files backed up from the CLI, or
 > you can use the web interface.



Regards,

Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/



--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.
--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only Getting empty directory structure

2019-04-09 Thread Adam Goryachev


On 9/4/19 23:32, - Storm wrote:
ah, in version 3.2 i could just go to the folder and type ls and all 
the files were viewable. you're saying that has been changed and you can 
only view the files through the web interface or using this backuppc_ls 
tool. is that correct?



Yes.

 >
 > In BPC4 there are only dirs and attrib files in the pc dirs, you can use
 > the CLI tool BackupPC_ls to view the files backed up from the CLI, or
 > you can use the web interface.



Regards,

Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Only Getting empty directory structure

2019-04-08 Thread Adam Goryachev


On 9/4/19 05:02, - Storm wrote:


Hi,

Using backuppc 4.2.1 and it is only backing up the folder structure, fresh
install, tried fedora 27-29 and all the same. no error; if i check the
backup info, it sees the files and says they were backed up, but when i check
the folders there are no files in any folder.

Been looking online for the past several days, can find nothing. have any
ideas on what i might have done wrong? same setup with fedora 24 and
backuppc 3.2 works fine.



Where are you checking for the files? On the web interface, or from the CLI?


In BPC4 there are only dirs and attrib files in the pc dirs, you can use 
the CLI tool BackupPC_ls to view the files backed up from the CLI, or 
you can use the web interface.



Your best bet is to test restoring some file to confirm that the backups 
are working properly.



Regards,
Adam


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Incremental fails after full

2019-03-17 Thread Adam Goryachev

On 17/3/19 1:44 am, Erik via BackupPC-users wrote:

From: "G.W. Haywood"


Like you, I've seen similar but not quite this.  At a guess rsync(d)
is having trouble with its algorithms.  I'd break the failing backup
into multiple shares and back them up at different times to see if it
makes any difference.


I will try that, thanks.

I am curious what started this, since there are no apparent changes to 
the setup. Except for regular system updates.


The most recent failure did not succeed manually either. The only real 
clue I have so far is from Tcpdump, which showed that the tcp link 
stalled. It ends with a number of retransmissions, which indicates the 
other end of the link has stopped responding. Sadly, the rsync logs 
have revealed nothing yet.


Erik

Are you using rsync over ssh? You could try adding ssh keepalives to 
help avoid NAT timeout issues, and also to "notice" a dropped 
connection. The other scenario I've seen causing problems is a large 
number of files in one directory.
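For the keepalives, on BPC v4 the ssh options live in $Conf{RsyncSshArgs}, so
a sketch would be something like this (the interval/count values are just
examples):

$Conf{RsyncSshArgs} = [
    '-e', '$sshPath -l root -o ServerAliveInterval=60 -o ServerAliveCountMax=5',
];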


I've found BPC v4 seems a lot more reliable with large numbers of files.

Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't write config.pl files through CGI interface.

2019-02-21 Thread Adam Goryachev

On 22/2/19 8:36 am, Hubert SCHMITT wrote:

Thanks for your answer Jean Yves,

But I really don't understand what's wrong.

The rights are the same on my side :
-rw-r-   1 backuppc apache  85K 21 févr. 20:31 config.pl

-rw-r-   1 backuppc apache  82K 27 déc.   2014 config.pl_20141227_OK
-rw-r-   1 backuppc apache  82K 17 avril  2016 config.pl.old
-rw-r-   1 backuppc apache  86K 19 févr. 14:16 config.pl.pre-4.3.0

Apache is running with : User backuppc and Group apache in httpd.conf

I think you will need to confirm your apache settings, because if the 
user is backuppc and group apache, you should have write access to the 
above file.


One other thing to confirm is the permissions of the directory, and also 
whether the web interface is attempting to write to the same file you 
think it is. To check directory permissions:


ls -ld /path/to/check
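If the directory itself turns out to be the problem, the usual fix is along
these lines (the path and the apache group name are assumptions for a typical
install, so adjust them to match your setup):

ls -ld /etc/BackupPC
chgrp apache /etc/BackupPC
chmod u+rwx,g+rx /etc/BackupPC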

Regards,
Adam


--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 4.2.1 Rsync and BackupFilesOnly

2018-09-27 Thread Adam Goryachev

On 28/09/18 08:22, Kris Lou via BackupPC-users wrote:

Hey guys,

This is driving me nuts.  I just installed a new instance of 4.2.1, 
and for whatever reason, I can't get the 
BackupFilesOnly/BackupFilesExclude to work correctly.  Need More Eyes!


$Conf{BackupFilesOnly} = {
  '*' => [
    '/SysData'
  ]
};
$Conf{RsyncShareName} = [
  '/sharedfolders'
];
$Conf{XferMethod} = 'rsync';
$Conf{DumpPostUserCmd} = undef;
$Conf{DumpPreUserCmd} = undef;
$Conf{RestorePostUserCmd} = undef;
$Conf{RestorePreUserCmd} = undef;
$Conf{XferLogLevel} = 3;
$Conf{BackupZeroFilesIsFatal} = '0';
$Conf{BackupFilesExclude} = {};
~

As seen above, I've got 1 client share defined (/sharedfolders), and 
I'm trying to get "/SysData" and its content.  BackupFilesExclude is 
currently blank. (BackupFilesExclude usually has stuff;  this is an 
override to clear it out).


From XferLog:

/usr/bin/rsync_bpc --bpc-top-dir /data/BackupPCPool 
--bpc-host-name shares.axlmusiclink.biz 
--bpc-share-name /sharedfolders --bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum 
-1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 3 -e 
/usr/bin/ssh\ -l\ root --rsync-path=/usr/bin/rsync --super --recursive --protect-args 
--numeric-ids --perms --owner --group -D --times --links --hard-links --delete 
--delete-excluded --one-file-system --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --timeout=72000 --include=/SysData --exclude=/\* 
shares.axlmusiclink.biz:/sharedfolders/ /
So, I think that the "--exclude=/\*" is what's causing this to dump 0 
files, but I have no idea where it came from.


If I change the share name to "/sharedfolders/SysData" and blank out 
the Only/Excludes, then it'll dump it correctly.  The rsync-bpc 
command then doesn't append the --include and --exclude strings.


So, what's wrong with my syntax?


Have you tried this instead:
$Conf{BackupFilesOnly} = {
  '*' => [
    '/SysData'
  ]
};
$Conf{RsyncShareName} = [
  '/sharedfolders'
];
$Conf{XferMethod} = 'rsync';
$Conf{DumpPostUserCmd} = undef;
$Conf{DumpPreUserCmd} = undef;
$Conf{RestorePostUserCmd} = undef;
$Conf{RestorePreUserCmd} = undef;
$Conf{XferLogLevel} = 3;
$Conf{BackupZeroFilesIsFatal} = '0';
$Conf{BackupFilesExclude} = undef;

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full quota for a client

2018-05-10 Thread Adam Goryachev

On 11/05/18 01:18, Keith Edmunds wrote:

Hi, is there a way of finding the full quota of diskspace used by one
client system? That's the total space used by all the backups held for
that client system.

I'd prefer the uncompressed size, but compressed would be OK. I realise
that some files will have been de-duplicated within the backups for that
client, and we only need to count such files once. Files that are
de-duplicated with other clients need to counted for each.

Thanks,
Keith


For a v3 backuppc, I think it will work to use du on the pc directory:

du -sh backuppc/pc/clienthostname

This will include the size of a few logs/etc as well, but that data is 
also specific to the client. It will only count a file once if it is 
stored in multiple backups, and will also count files that are shared 
with other clients.


Keep in mind, it could take a long time to generate the report and 
require significant memory, depending on the number of files in the 
backup, and the number of backups for that client.


Not sure how this works with v4, I haven't considered that.

Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem connecting backups to new backuppc-install

2017-11-19 Thread Adam Goryachev

On 20/11/17 03:10, Gustav Almstrom wrote:
I am in a situation where I need to connect an existing BackupPC
data drive (/var/lib/backuppc) to a new BackupPC 4 install.


I had a crash here, but I have reinstalled a new machine with BackupPC
4, mounted the NFS share with the backups to the same place, and if I
add a new host the backups kind of turn up. They are listed in the
web GUI, but when I try to browse a backup I get an error message:


Error: Directory /var/lib/backuppc/pc/pihole/117 is empty (for example)

Is there something else I need to do to make the new install of 
backuppc pick up the old files?


Make sure the permissions and owner/group are set correctly between the 
old files and the new server/configuration.
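For example (assuming /var/lib/backuppc is the topdir and that backuppc is the
user and group the new server runs as; adjust to your install):

ls -ld /var/lib/backuppc /var/lib/backuppc/pc /var/lib/backuppc/pc/pihole
chown -R backuppc:backuppc /var/lib/backuppc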


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BPC/rsync fails on huge directory

2017-11-03 Thread Adam Goryachev



On 3/11/17 22:40, infos+backu...@ba-cst.net wrote:

On 31/10/17 14:44, Jenkins, Wade William wrote:

What kind of failure are you seeing? What method are you using?

In the client bpc log, we have

- incr backup started for directory racine
- Got fatal error during xfer (rsync error: error in rsync protocol data
stream (code 12) at io.c(1556) [receiver=3.0.9.3])
- Backup aborted (rsync error: error in rsync protocol data stream (code
12) at io.c(1556) [receiver=3.0.9.3])

In the server's syslog, it's

kernel: [566863.920163] rsync_bpc[6209]: segfault at 7f6b29599e28 ip
0044ad0d sp 7ffdb7b005c0 error 4 in rsync_bpc[40+74000]


That sounds similar to the problem that I’ve been having since March that I 
can’t seem to sort out.  I’m using rsync backups, but trying tar backups has 
run into the same issue.  My backups will hang, seemingly indefinitely—I’ve had 
a backup sit unmoving for 7 days before.  Sometimes, killing that backup and 
running a new full will give me a reprieve.  But I’ve not been able to find or 
fix the underlying cause to this point.

On 31/10/17 18:01, Michael Stowe wrote:

Well, yes. rsync 3.1.x has a bug which might be directly affecting your 
transfer, and sounds related to your use case:

https://bugzilla.samba.org/show_bug.cgi?id=13109

If this is the case, either upgrading (or downgrading) should take care of the 
issue.

It's not a process hanging, just plain crashing (which may come from the
same bug or not). Since it's rsync_bpc that goes over the cliff, it may
not be related to an rsync bug.

We learned that we are using an old release of BPC4 (4.0.0). We're going
to upgrade it to the latest release, just to keep things clean.

This looks like a known issue that I hit with BPC 4.0alpha (I think that
was the version); it only impacted directories with a large number of
entries, or servers with a large number of total files/dirs (I forget
which). In any case, it has been fixed for a long time, so please upgrade
to the latest version.


Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Adam Goryachev

2017-09-22 17:24 GMT+02:00 Gandalf Corvotempesta
:

2017-09-22 17:20 GMT+02:00 Les Mikesell :

How does your overall CPU and RAM use look while this is happening?
Remember, your situation is unusual in that you are competing with the
ZFS compression activity.

CPU almost idle, RAM used at 70%, due to ZFS ARC cache (50% of ram)
No swap.


On 23/9/17 02:50, Gandalf Corvotempesta wrote:

Just removed "--checksum" from the BackupPC arguments.
Now is... FAAAST

What I backed up in about 40 hours now took 60 minutes.

YES: 40 hours => 60 minutes.

Is --checksum really needed? (--checksum is also missing from the rsnapshot
arguments; that's why rsnapshot is rocket fast.)
I can't be sure for BPC4, but maybe you need more than one full backup
to get the checksum information available to BPC. I think on v3 you
needed two full backups before this would happen.
The other point to consider, is that this shows you *DID* have a 
performance issue, but you didn't seem to find it. checksum will 
increase the read load and CPU load on at least the client (and possibly 
BPC server depending on where it gets the checksum info from). So you 
should have seen that you were being limited by disk IO or CPU on either 
BPC server, or the client. I'm not sure of the memory requirement for 
the checksum option, but this too might have been an issue, especially 
if BPC tried to uncompress the file into memory. Also, all of this would 
trash your disk read cache on both systems, further increasing demands 
on the disks.


Whether you need to use --checksum or not will depend on whether you are
happy to potentially skip backing up some files without knowing about it
until you need to do a restore. Of course, this is a little contrived,
as it still requires:

a) size doesn't change
b) timestamp doesn't change
c) content *does* change
That is not a normal process, but it is the corner case that always ends 
up being the most important file ;)
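For reference, if I remember the v4 parameter correctly, the flag is added for
full backups via $Conf{RsyncFullArgsExtra}, so dropping it looks something
like:

$Conf{RsyncFullArgsExtra} = [];    # the default is ['--checksum']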




Regards,
Adam


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Adam Goryachev



On 21/9/17 01:20, Gandalf Corvotempesta wrote:

2017-09-20 17:15 GMT+02:00 Ray Frush :

You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
today's world.  I get bothered when storage is slower than a single 10K RPM
drive (~100-120MB/sec).  I wonder how fast metadata operations are.
bonnie++ benchmarks might indicate an issue here as BackupPC is metadata
intensive, and has to read a lot of metadata to properly place files in the
CPOOL.   Compare those results with other storage to gauge how well your ZFS
is performing.   I'm not a ZFS expert.

Yes, it is not very fast, but keep in mind that I'm using SATA disks.
But the issue is not the server performance, because all other software is
able to back up in a very short time with the same hardware.

Except it probably is or else it wouldn't take so long ;)


A 4x 1Gbps network link will look exactly like a single 1Gbps per network
channel (stream) unless you've got some really nice port aggregation
hardware that can spray data at 4Gbps across those.   As such, unless you
have parallel jobs running (multithreaded), I wouldn't expect to see any
product do better than 1Gbps from any single client in your environment.
The BackupPC server, running multiple backup jobs could see a benefit from
the bonded connection, being able to manage 4 1Gpbs streams at the same
time, under optimal conditions, which never happens.

I'm running 4 concurrent backups, with plain rsync/rsnapshot i'm able to run 8.

Except, as you know, BPC demands a few more resources to do the
backup, and running concurrent backups multiplies those demands. When
you are close to, or slightly exceed, your system capacity, you will
see a massive decrease in overall performance. The more the system has to
"pretend" it has capacity that it doesn't (more swapping, more seeking on
the HDDs, less useful caching), the more sharply useful activity drops.


Find the performance bottleneck first, then you can decide how best to 
proceed, whether that's increase hardware, modify the config, or change 
the code.


Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Adam Goryachev



On 21/9/17 00:34, Gandalf Corvotempesta wrote:

2017-09-20 16:14 GMT+02:00 Ray Frush :

Question:   Just how big is the host you're trying to backup?  GB?  number
of files?

 From BackupPC web page: 147466.4 MB, 3465344 files.

FYI, I have a single host with 926764MB, 5327442 files.

What is the network connection between the client and the backup
server?

4x 1GbE bonded on both sides.

Remote VPN, one end is 100M the other is 40M

Full backup is about 11 hours, incr is around 8 hours, up to 10 hours.

These stats simply say "It works for me". I see you are saying "It
doesn't work for me", but I guess my statement helps you about as much as
your statement helps us.


I would guess that you have a bottleneck/performance limitation 
somewhere in your stack, and I'd suggest finding it and then hitting it 
with something appropriate. Check CPU utilisation on the client and 
server, memory usage, swap usage, and of course bandwidth usage. Once 
you rule out all of those, then you get to the fun stuff, and start 
looking at disk performance. While everyone says it is not accurate, 
personally, I've found that "/usr/bin/iostat -dmx" and watching the util 
column is a pretty good indicator.
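Running it with an interval shows current activity rather than averages since
boot, e.g.:

/usr/bin/iostat -dmx 5

(Watch the %util column; sustained values near 100% mean the disks are
saturated.)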


Apologies, I certainly haven't followed the full thread, but at this 
point, throwing your hands in the air and name calling isn't going to 
improve the situation. Either you are interested in solving the problem, 
in which case you will probably need to get your hands dirty and work 
with us to find and fix it, or you don't really care, in which case we 
don't either (because it works fine for us).


Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] DumpPre/DumpPost scripting with $type incr

2017-08-04 Thread Adam Goryachev



On 5/8/17 00:39, Greg Harris wrote:


I’m having difficulty wrapping my brain around the DumpPre/DumpPost 
command on a backup.  I’ve got a database service that I need to shut 
down, run an incremental backup, then start the service again.  I’ve 
got it successfully SSH’ing in and shutting down the service, running 
the backup, and then restarting the service.  However, how do I use 
the $type incr command to specify only incremental backups?


Currently I have:

$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /root/serviceDown';

$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /root/serviceUp';

Problem #2:

I need the incremental to run after the full does, on the same day.  I 
don’t want the database service to be shut down for the length of time 
it takes to run a full backup, so do the full, then run a followup 
incremental that shuts down the service.  Additionally, I assume that 
if I do a full once a week that I need to keep 7 incrementals, rather 
than the previous 6, correct?




I would suggest that you instead use the pre-backup script to:
1) shut down the DB
2) copy the DB files to some other location
3) start the DB

Then, your incremental and full will both back up the DB, and the DB is
only down for the minimum time required.
Alternative option: configure cron to shut down the DB, do a local
copy/backup, and start the DB. Then whatever time the backup runs, it will
back up the latest copy.


Personally, I choose the second option, and keep two local copies of the
DB to rotate; this way I have a very simple and quick copy to restore
from if needed, plus the backuppc copies if I need something older/etc.
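As a rough sketch of that second option (the service name, paths and schedule
here are all made up, so adapt them to your database):

#!/bin/sh
# /root/db-copy.sh - stop the DB, take a local copy, start it again
systemctl stop mydb
rsync -a --delete /var/lib/mydb/ /var/backups/mydb-latest/
systemctl start mydb

plus a crontab entry such as "0 1 * * * /root/db-copy.sh", scheduled well
before BackupPC's wakeup time, so BackupPC just picks up
/var/backups/mydb-latest/ as normal files.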


Regards,
Adam
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc 4 ignore directive Conf{RsyncClientCmd} for the user to use for ssh

2017-08-02 Thread Adam Goryachev

On 03/08/17 08:55, Romain Pelissier wrote:

Hi,
I have a backuppc setup just upgrade with my fedora 26 server. Now for 
some unknown reason, the connection to a host is made with the user 
"backuppc" instead of root that should come from the setting
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath 
$argList+';


I have tested to set this at server level and host level with no luck. 
Everytime a backup is started, it is connecting with the backuppc 
account. I can confirm the login attempt in the secure log ans also 
that after a while I don't see any traffic coming from the backup server.
Of course, creating a user backuppc on the remote host, giving it the 
sudo for rsync and add the option $Conf{RsyncClientCmd} = '$sshPath -q 
-x -l backuppc $host $rsyncPath $argList+'; to the host fix the issue 
but I prefer to find the root (!) cause of the issue if possible.

Thanks!


Check your log file for the backup, it will show you the command that 
backuppc is using. Then check your host config and main config.pl to 
find out why it isn't using the config you think it should be.


We can't see any of that information, so we can't help you.
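One thing worth checking while you are in there: if I recall correctly, BPC v4
ignores $Conf{RsyncClientCmd} entirely, and the ssh user is instead taken from
$Conf{RsyncSshArgs}, so the v4 equivalent of the old setting is something like:

$Conf{RsyncSshArgs} = [
    '-e', '$sshPath -l root',
];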

Regards,
Adam


--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] I'd like to make a suggestion to the BPC devsm

2017-07-21 Thread Adam Goryachev



On 22/7/17 01:38, B wrote:

On Fri, 21 Jul 2017 09:27:41 -0500
Les Mikesell  wrote:


The quick fix here is to use a Mac with an external or network drive
for time machine.  If you aren't familiar with it, it does exactly

Among many other, Apple products are to stay out of the company.


what you suggested with easy access for the user and filesystem tricks
for efficiency.  For a more enterprise flavor, NetApp fileservers have

I do not use that, all of our servers are home build, using such
things as Debian Linux, ZFS, XFS, GlusterFS, etc; this holds the staff
technical level to a very good skills level and avoid being
stuck/proprietary dependent/contract dependent  when really bad things
happen.

My first goal was to avoid current separated servers for snapshots, but
all of the given answers are driving me toward a simple switch between
snapshots and BPC on the same servers.
Sometimes, you need other's view to see what was obvious!

Thanks to all for your answers/comments ;)

I think you want both snapshots on the local server, as well as BPC on a
remote server. They each serve a different need. You might also want an
image copy on a remote server, which is yet another, different requirement.


Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up the server computer

2017-07-15 Thread Adam Goryachev



On 15/7/17 13:00, Paul Fox wrote:

B wrote:
  > On Fri, 14 Jul 2017 18:56:19 -0400
  > Paul Fox  wrote:
  >
  > > i confess i haven't been following this thread in all its gory detail,
  >
  > The BackupPC god absolves you (although, it is the BPC v.3x god, so
  > you'll need to upgrade the confessionnal if you want to also be absolved
  > by the v.4.x one.)

:-)

  > > but i suspect that many folks do their backups onto a separately
  > > mounted disk.  if you do that, then adding "--one-file-system" to the
  > > rsync args takes care of it:  you can back up from '/', but only the
  > > root filesystem will be backed up.  any other filesystems on that
  > > machine will also need to be backed up as separate shares, of course.
  >
  > But this way, you still backup unwanted directories, such as /tmp, /dev,
  > /proc, etc.
  > Starting on the disk root and excluding these allows for a tight control
  > over what you want and the rest, providing you need almost the whole
  > system to be saved for whatever reason.

i didn't say i don't also have some excludes.  i exclude /proc and
/sys.  /dev is a separate filesystem.  /tmp, believe it or not, i do
back up, to help with the morning-after regret of having lost a file i
thought i didn't need.  i think we're ending up in the same place -- i
just need to specifically include mounted filesystems (as separate
share), which is how i prefer it.
Actually, I think you will find that /proc, /dev, /sys, etc. are separate
filesystems, and so will automatically be excluded by --one-file-system.


Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up the server computer

2017-07-05 Thread Adam Goryachev

On 06/07/17 05:46, Kenneth Porter wrote:
--On Wednesday, July 05, 2017 12:26 PM -0400 Bob Katz 
<bobk...@digido.com> wrote:



rsync: failed to connect to localhost.localdomain (::1): Connection
refused (111)


Is rsyncd listening on the IPv6 interface? Or only the IPv4 interface? 
The error message says that rsync is attempting to connect to the IPv6 
loopback.


Also, your test command is a push command. The arguments are source 
and destination. My test command is a pull command that pulls from the 
"remote" (rsyncd) end a list of modules. Your command tries to copy a 
local file to the remote rsyncd server. 


BTW, I think you want to check this command:
root@pm10:~# netstat -anp|grep LISTEN
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN      793/sshd
tcp        0      0 127.0.0.1:25    0.0.0.0:*       LISTEN      4269/exim4


Here you can see two daemons listening for network connections. sshd 
will accept a connection from anywhere/any interface (0.0.0.0:22) while 
exim4 will ONLY accept connections on 127.0.0.1. In your case, rsyncd 
might be configured to only accept connections on the ethernet 
interface, and may not be listening for connections on 127.0.0.1. 
Equally, some daemons can be configured to only use IPv6 and not IPv4 
(or vice versa). In any case, please send the results of this command to 
the list so we can see what it is listening for:

netstat -anp|grep :873

Not sure, but have you sent the config file yet? There are options in 
the config file to restrict IP addresses that will be accepted/etc.
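For example, the global section of rsyncd.conf can restrict both the listening
address and the allowed clients (illustrative values only):

# /etc/rsyncd.conf
address = 0.0.0.0                      # bind to all IPv4 interfaces
hosts allow = 127.0.0.1 192.168.1.0/24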


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems Starting BackupPC

2017-05-31 Thread Adam Goryachev



On 31/5/17 22:36, Jeffrey West wrote:


I am not quite sure why it worked.  I didn’t actually run the command, 
but my colleague did and was able to get the service to start after 
running


kill -9 1289

ps -ax did not show a process with that ID, and I also could not find
a pid file in /var/run/BackupPC.


Also, rebooted multiple times and always showed as running on PID 1289 
when the service failed to start, saying BackupPC was already running, 
which it wasn’t.


Very odd! This occurred on Fedora 25 with BackupPC installed via DNF.


Broken quoting, so I've just cut it...

If the process id was not in use, then:
1) You would have got an error when you tried to kill it
2) It wouldn't have changed any behaviour, since running kill on a pid 
that doesn't exist will not have any impact


Therefore you did kill something, and it did have an effect on 
starting backuppc, OR you did something else other than kill something, 
and that solved the problem, but you think the kill is what solved it.


Don't worry, I've also changed or done multiple things to try and solve a
problem, and then afterwards not really known which thing actually
fixed it. Equally, sometimes it is just a matter of time, and all the
changes or things you do are nothing more than time wasting, and
eventually the problem is (or appears to be) fixed. (This is one reason
why telling people to reboot often helps: it gives you time to fix the
server while they are waiting for their desktop to reboot. :)


Regards,
Adam
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems Starting BackupPC

2017-05-31 Thread Adam Goryachev



On 31/5/17 16:44, Craig Barratt wrote:
Michael is correct that BackupPC does use $Info->{pid} (which is read 
from status.pl ).  But it does check that that 
process exists before printing the error message (using perl's kill(0, 
pid) that doesn't really send a signal, but tells you whether the 
process is alive).  So I don't know why it claims process 1289 is 
running; that would mean "kill(0, 1289)" returns 1 (success)...


I assume after a reboot it is very possible that some other process is 
running and has used pid 1289, and isn't a short-lived process (ie, some 
other daemon or similar).


Does BPC check the name of the process that is found? This would help to 
avoid this confusion (pid exists but is not a BPC pid).
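Something along these lines is the sort of check I mean (a sketch only,
assuming Linux procfs; this is not what BackupPC currently does):

my $pid = $Info->{pid};
if ( kill(0, $pid) && open(my $fh, "<", "/proc/$pid/cmdline") ) {
    local $/;                      # slurp the whole (NUL-separated) command line
    my $cmdline = <$fh>;
    warn "pid $pid is alive but does not look like BackupPC\n"
        if $cmdline !~ /BackupPC/;
}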



P.S.: Yes, it might ultimately turn out that you need to break
down the door,
  but, in my experience, you usually don't break down a
*random* door
  even then.

Which means the OP broke down a random door, and didn't even need to get 
into that room anyway (ie, killed a random unknown process, which may or 
may not be causing other problems, since the expected process is not 
running correctly).


Regards,
Adam
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to backup a laptop over internet

2017-05-29 Thread Adam Goryachev

On 30/05/17 05:33, B wrote:

On Mon, 29 May 2017 20:56:01 +0200
Xuo <x...@free.fr> wrote:


Hi,

Hi Xuo,


My pc is running Mageia5.
I don't understand how a VPN connection could help solving my problem.
Could you please explain more in details ?

A VPN means either a roadwarrior (your itinerant laptop) can connect and
benefit from all machines of your LAN, or connect 2 LANs together (eg:
enterprise branches.)

This means, when you're connected to it, that your backuppc server can
reach your laptop in a secure mode (encrypted and possibly compressed
mode) as easily as if it was connected on the LAN @home.

As formerly said and because of the VPN nature (no same IP segment
messing), you'll be obliged to create 2 accounts on the server:

one with the DNS laptop name for LAN connections -
ie: mylaptop.zatiluvsomuch, let's say it == 192.168.0.25,
 (or directly 192.168.0.25 if you do not have a home DNS),

and one based on the (fixed !) VPN IP address you use when away from home
ie: 172.16.0.25.

Provided you backup daily @2000 AND your laptop is always connected at
this time whether you're home or away:

192.168.0.25 will be saved if you're @home,

172.16.0.25 will be saved when you're away from home,
Wasn't there a recent extension to ClientNameAlias which allows multiple 
addresses to be used, which will be tried (in order), and the first 
found would run the backup?
This seems a perfect use case for that: adding the local IP and the
remote IP as the two aliases gives you only a single "host" in backuppc,
with consistent, ordered backups all in one place.
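Something like this, if that feature is in your version (using the example
addresses B gave above):

$Conf{ClientNameAlias} = ['192.168.0.25', '172.16.0.25'];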


PS, this probably only applies to BPC4.x and I forget what version the 
OP is using.


Regards,
Adam

--
--
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Late to the Game: Upgrading from Version 3.x on Fedora 25

2017-05-24 Thread Adam Goryachev

On 25/05/17 02:57, Richard Shaw wrote:
On Wed, May 24, 2017 at 10:53 AM, Tim Evans <tkev...@tkevans.com> wrote:


I have BackupPC-3.3.1-7.fc25.x86_64 installed on Fedora 25. Stopped
using it a good while back when the Samba changes screwed things up.
Could never get the patches to 3.x to work for me.  My backup pool is
now more than a year out of date.

(I have been running an alternative backup mechanism (NetGear ReadyNAS
built-in backups), with a separate destination, so the old pool isn't
important.)

I'd like to install version 4.x, but must've missed any step-by-steps
that have been published. I've found the Hobbes repo, but there's
nothing there in the way of docs.


The main thing is to merge config.pl. Your old one should not be
overwritten, and the new one from the 4.x package should be renamed to
config.pl.rpmnew.


You can diff the two and merge what you need from the old one into the 
new one. The main difference being (IIRC) that the rsync path and 
rsync arguments being split into two different variables.


Something like:
# cd /etc/BackupPC
# mv config.pl config.pl.old
# mv config.pl.rpmnew config.pl
# diff -Nau config.pl.old config.pl > changes.diff

Then review changes.diff and merge what you need over to the new
config.pl


It is a bit time consuming because the supplied default config.pl and the
changes made the first time you click save from the CGI interface create
a lot of false positives in the diff.


Thanks,
Richard
In relation to this, it would be great if the original shipped config.pl
was never modified, but there was a line at the bottom to either include
a config_local.pl or all files contained in a directory named config.d.
This would make upgrades *much* easier, since all your local edits are
contained safely in a different file, so all you need to do is
rename/adjust your custom settings if a variable name has changed/etc.
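Since config.pl is plain Perl, the sort of thing I mean is a single line at
the bottom of it (config_local.pl is a made-up name, and you should test that
your BackupPC version's config loader actually honours a nested include
before relying on it):

do "/etc/BackupPC/config_local.pl" if -r "/etc/BackupPC/config_local.pl";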


PS, this is already easy to do, as long as you never modify settings 
from the web, but it shouldn't be difficult to add this for the web 
editor to support the same concept.


I'm not sure what other distros do, but it appears this is being done 
for a lot of debian/ubuntu packages.


Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Request: 'Notes' section in GUI

2017-05-19 Thread Adam Goryachev


On 19/5/17 22:35, l, rick wrote:
> I setup a VPN to each server to be backed up, therefore when I setup a
> host, I use the VPN subnet as host name. This can be slightly
> problematic at times. Knowing which host is what simply at a glance with
> so many hosts.
>
> Has anyone got an idea as to getting a small non parsed notes section in
> the GUI for such reasons among others?
>
>
Sure, random notes could be useful.
For me, I also use VPN, but I use the actual customer hostname (or some 
variation that has meaning to me), and then use the alias to point to 
the IP address.
Not sure if you could use that method to help you?

Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Exchanging keys

2017-05-14 Thread Adam Goryachev
The public key goes in root's home directory under
.ssh/authorized_keys. That 'ssh-copy-id' command is a shell script
if you want to see what it does. Maybe you can find wherever root's home
directory is in the sandbox environment and make a copy there.

If not, and you end up using rsyncd instead, just change the
$Conf{XferMethod} to rsyncd instead of rsync.

--
  Les Mikesell
lesmikes...@gmail.com









--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/




--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] My BackupPC can not backup root folder

2017-04-11 Thread Adam Goryachev

On 11/4/17 23:49, Nathanaël Belvo wrote:

Hello again,

Half an hour after my new try, I have excluded the files you advised 
me to, but it is not working.


In the LOG.040217 file, the last line shows: 2017-04-11 15:18:55
full backup started for directory / (baseline backup #0)


I'll wait until it times out. Any other idea ?

Thanks a lot.

2017-04-11 15:17 GMT+02:00 Nathanaël Belvo >:


Hello,

First, let me apologize because I don't really know what reply
form is expected on this mailing list, so I'll just leave it like
this for this mail.

Thanks for your reply, I will try it now on a single server to
test it, I'll come back later to tell if it succeeded.

Nate,

2017-04-11 14:47 GMT+02:00 Doug Lytle >:

>>> My intuition is that there are things mounted and that ssh
can not resolve mounted stuff, but I'm not sure of anything.

I've never tried to backup root before, but my guess would be
that you'd need to exclude:

/proc
/dev
/sys
/run


Doug



Hi,

Please don't top post, please provide the full log output, and your full 
host config file (or whatever config settings are not default).


In the meantime, I would add:
$Conf{RsyncArgsExtra} = [
'--one-file-system',
];

Which will automatically resolve any issues with large mount points, 
including virtual filesystems like /proc or /sys etc...


Regards,
Adam





--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: Re: BackupPC v4 for Fedora / EPEL Update

2017-04-04 Thread Adam Goryachev

On 4/4/17 21:19, Bob of Donelson Trophy wrote:


On 2017-04-03 07:06, Adam Goryachev wrote:


Perfect, there is your "smoking gun".
So, do the following:
ps aux|grep apache
See what user apache is running as (you also need the group, probably 
just get that from the apache config files).


Now do the following commands:
ls -ld /etc/BackupPC
ls -ld /etc/BackupPC/apache.users

Lets assume apache is running with the user apache
Lets assume the /etc/BackupPC directory is owned by backuppc, and is 
secured so that it is not world readable (a pretty good idea since it 
is likely to contain passwords in the various config files).
Lets assume the /etc/BackupPC/apache.users file is owned by backuppc 
as well.


You would need to do the following commands to "fix" it (ie, allow 
apache to read the passwd file).

chgrp apache /etc/BackupPC
chmod g+rx /etc/BackupPC (edit config probably won't work, but it 
should fix your current problem).


You might also need the following:
chgrp apache /etc/BackupPC/apache.users
chmod g+r /etc/BackupPC/apache.users

The other option is to simply do this:
mv /etc/BackupPC/apache.users /etc/apache2/backuppc.users
chown apache /etc/apache2/backuppc.users
Update the apache config file to point to the new location.

Then, restart apache and you should be fine.

Regards,
Adam

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net 
<mailto:BackupPC-users@lists.sourceforge.net>

List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


These made sense and I ran your suggestions. On the first login attempt
I received the appropriate login box and entered the user name and passwd,
but still only five (5) entries in the navigation menu. Log off, close
the browser tab, start a new tab, enter the backuppc IP address and it just
logs in, with no request for a user name or passwd.


Thanks for your suggestions, Adam. These match (in the CentOS way) to 
what I was seeing on Ubuntu running v3.3.1. I still think it is a 
permissions issue.


I don't think it is a permissions issue (at least not a unix permissions 
issue). Check your apache error log to ensure there is nothing there.
I expect the problem is that you haven't told backuppc that your 
username is the administrator.
You need to make sure the username you enter matches the backuppc 
configuration:

$Conf{BackupPCUser} = 'backuppc';

So for me, I login with username backuppc.
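If you want to log in with a different username than the server user, the CGI
admin rights come from $Conf{CgiAdminUsers} (if I remember the parameter
correctly), e.g. with a made-up username:

$Conf{CgiAdminUsers} = 'bob';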

Regards,
Adam
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: Re: BackupPC v4 for Fedora / EPEL Update

2017-04-03 Thread Adam Goryachev



On 3/4/17 21:47, Bob of Donelson Trophy wrote:


On 2017-04-03 05:17, Richard Shaw wrote:

It should not be necessary to run apache as the backuppc user. When I
have some time I may have to set up a CentOS 7 box from scratch.
I'm not sure at this point what I've tweaked, but I'm not having the
same problem.


To Adam,

I checked the apache error_log and found "(13)Permission denied: 
[client 192.168.242.29:43348] AH01620: Could not open password file: 
/etc/BackupPC/apache.users". Hum-m-m-m??



Perfect, there is your "smoking gun".
So, do the following:
ps aux|grep apache
See what user apache is running as (you also need the group, probably 
just get that from the apache config files).


Now do the following commands:
ls -ld /etc/BackupPC
ls -ld /etc/BackupPC/apache.users

Lets assume apache is running with the user apache
Lets assume the /etc/BackupPC directory is owned by backuppc, and is 
secured so that it is not world readable (a pretty good idea since it is 
likely to contain passwords in the various config files).
Lets assume the /etc/BackupPC/apache.users file is owned by backuppc as 
well.


You would need to do the following commands to "fix" it (ie, allow 
apache to read the passwd file).

chgrp apache /etc/BackupPC
chmod g+rx /etc/BackupPC (edit config probably won't work, but it should 
fix your current problem).


You might also need the following:
chgrp apache /etc/BackupPC/apache.users
chmod g+r /etc/BackupPC/apache.users

The other option is to simply do this:
mv /etc/BackupPC/apache.users /etc/apache2/backuppc.users
chown apache /etc/apache2/backuppc.users
Update the apache config file to point to the new location.

Then, restart apache and you should be fine.

Regards,
Adam
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: Re: BackupPC v4 for Fedora / EPEL Update

2017-04-02 Thread Adam Goryachev
On 03/04/17 06:50, Bob of Donelson Trophy wrote:
> I have updated to your latest COPR and v4.1.1. Thank you for that.
>
> I have been researching signing into the web gui. Even before I 
> upgraded to v4.1.1 I was having resistance to signing in with the 
> backuppc user with the htpasswd I have set up.
>
> I mentioned in a previous email that the backuppc user appears in the 
> /etc/BackupPC/apache.users file, as it should. But, When I sign into 
> the [ipaddress]/BackupPC it gives an "Internal Server Error". If I 
> modify the /etc/httpd/conf.d/BackupPC.conf file and add "Require all 
> granted" and comment out the "Require valid-user" I can access the gui 
> however, I cannot do much of anything. Understandable as BackupPC does 
> not "know" the user I am accessing it with. When I invert this back to 
> the correct configuration I am NOT presented with a user login but 
> rather the "Internal Server Error".
>
> What file permissions should the /etc/BackupPC/apache.users
> file have?
>
>  [root@localhost ~]# ls -alh /etc/BackupPC/
> total 192K
> drwxr-x---.  2 backuppc backuppc  112 Apr  1 06:55 .
> drwxr-xr-x. 80 root root 8.0K Apr  1 07:42 ..
> -rw-r--r--.  1 root root   47 Apr  1 18:14 apache.users
> -rw-r--r--.  1 backuppc backuppc  83K Mar 31 13:22 config.pl
> -rw-r--r--.  1 backuppc backuppc  83K Mar 31 13:22 config.pl.sample
> -rw-r--r--.  1 backuppc backuppc 2.2K Mar 31 13:22 hosts
> -rw-r--r--.  1 backuppc backuppc 2.2K Mar 31 13:22 hosts.sample
> -rw-r-.  1 backuppc backuppc0 Apr  1 17:53 LOCK
>
> Does your /etc/BackupPC/ directory have these rights?
>
> I am not sure I am looking in the correct place but, I think I am 
> dealing with a permissions issue.
>
> Thoughts?
>

Yes, please look at your apache error log file; it will tell you what
the problem is.
Basically, Apache is saying "I don't like this, and I don't know what
else to do, please get the sys admin to fix it". That applies whether it is
a permissions error on any of the related files (.htaccess, the password
file), a syntax error, or an option that is not permitted, etc.

It is also possible to get an internal server error if the
script/program itself can't run; usually the output of the script (the
error message) will be recorded in the apache error log, so again, that is
where you will find out what the problem is.
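On a Fedora-style httpd install that is usually something like (the path is an
assumption; check the ErrorLog setting in your apache config):

tail -n 50 /var/log/httpd/error_log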

Regards,
Adam



-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Incr BU back up everything?

2017-03-08 Thread Adam Goryachev


On 9/3/17 15:46, chrisc...@knebb.de wrote:
> Hi guys,
>
>
> from time to time I have a strange behaviour. Usually my BackupPC 3.x
> (latest) does full backups on weekends and incr during weekdays. And
> usually they are done within 10 or 20 minutes (except huge file changes
> of course).
>
> But from time to time it takes days, as it seems to transfer every file
> no matter if it has changed or not, during an incremental rsync backup.
>
> This is what I see in logfile from the previous backup:
>
> 2017-02-27 20:03:16 removing incr backup 584
> 2017-02-28 18:54:19 full backup started for directory / (baseline backup #591)
> 2017-02-28 19:49:48 full backup started for directory /srv/ (baseline backup 
> #591)
> 2017-02-28 21:27:36 full backup 592 complete, 475953 files, 536626090243 
> bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
> 2017-02-28 21:27:36 removing full backup 527
>
> 2017-03-01 18:03:32 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /
> 2017-03-01 18:34:34 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /srv
> 2017-03-06 09:14:13 Aborting backup up after signal INT
> 2017-03-06 09:14:14 Got fatal error during xfer (aborted by user (signal=INT))
> # Here I aborted the backup as it caused slow traffic...
>
> 2017-03-06 17:05:24 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /
> 2017-03-06 17:22:21 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /boot
> 2017-03-06 17:23:49 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /home
> 2017-03-06 17:35:24 incr backup started back to 2017-02-28 18:50:08 (backup 
> #592) for directory /srv
>
> Since then it has been running. Using "lsof | grep rsync" on the target server
> shows open files being backed up which have not changed for ages!
>
> Any idea why this is happening?
I vaguely recall this type of scenario with BPC 3.x a few years ago. 
Without seeing your detailed backup logs, I can't tell if it is the same 
or not, and my memory is a little vague. I think the circumstance was:
1) the backup was interrupted or there was some error during the backup
2) BPC marked the backup as complete, and so deleted the rest of the 
files after the backup point
3) The next backup now needed to re-download all files

The above would not happen with BPC v4 because even if a file didn't 
exist in the previous backup of this client, as long as it is in the 
pool (and unmodified) then it won't be downloaded. I don't recall what 
the fix was, or which version (if any) this was released in. I used to 
go back and delete the "partial" backup (and any newer backup) so that 
the newest backup actually contained all the files. Then I would start a 
backup and make sure it completed properly. This would happen once every 
few months for one or two servers (but never for other servers for some 
reason).

Sorry I can't be much help, but please provide some more details, and/or 
search the archives for the error message (if you find it in the details 
xferlog)...

Regards,
Adam


--
Announcing the Oxford Dictionaries API! The API offers world-renowned
dictionary content that is easy and intuitive to access. Sign up for an
account today to start using our lexical data to power your apps and
projects. Get started today and enter our developer competition.
http://sdm.link/oxford
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Looking for some comments on sizing.

2017-02-08 Thread Adam Goryachev
On 09/02/17 09:52, Scott Walker wrote:
> Has anyone used BackupPC in an enterprise environment?
>
> I'm talking PBs of data, 100's of servers, hybrid environment. Mac, 
> Solaris, BSD, Linux, Windows.
>
> Did it work well? Any gotcha's? When you see PB of data does it make 
> your gut feeling go uhh yeah no.
>
> I'm just fact finding and investigating. I find a lot of people 
> talking about using it for their home network or small business but 
> not much about the enterprise.

It would be interesting to hear if anyone is using it at that scale...

I expect that if you were going to try that, you should use BPC v4 
(because it can avoid transferring a file if another client has already 
backed it up), and that assumes you are going to use rsync everywhere.

I would also think that you will need to use multiple servers, otherwise 
you will spend a huge amount on getting enough CPU performance in the 
machine (all that encryption, compression, etc takes CPU), and RAM 
(storing lots of data and cache in RAM) and also the massive IOPS 
required. Consider the number of clients, you say 100s, so assume 300. 
You need a backup every day, and assume each backup takes 45 minutes. 
You need a concurrency of at least 30 to complete all the backups in 
under 8 hours (assuming an 8 hour backup window). OTOH, you could have 6 
servers each handling 5 concurrent backups, and hence have 50 clients on 
each server. This loses some of the de-dupe, but it makes everything a 
lot easier.
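
As a quick sanity check on that arithmetic (all of the figures are the 
assumptions above, not measurements):

# 300 clients x 45 minutes each, inside an 8 hour window
echo $(( 300 * 45 / 60 / 8 ))   # => 28 concurrent backups, so round up to 30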

Personally, you could extend this further (depending on budget, risk, 
etc) and have 12 servers, such that each client is backed up by two 
different servers. A failure of one server or its underlying storage/etc 
will mean you do not lose all the backups.

Someone (on the list earlier) suggested that it would be possible to 
have BPC v4 use an object storage backend, so as long as that backend 
could perform fast enough, then it may be one way to get a larger "farm" 
of BPC servers but still keep de-dupe across all of them.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Interrupted rsyncd backups on Backuppc4 and rsyncTMP.XXXX files, these files never get cleaned out.

2017-01-31 Thread Adam Goryachev
On 01/02/17 07:45, Mike Dresser wrote:
> One thing I've noticed on backuppc 4.0.0alpha3 is that if an rsyncd
> backup is interrupted(server crash, someone rebooted the server, etc),
> it leaves behind a rsyncTmp.pid.y.z file in the main host directory,
> like ./hostname/309/rsyncTmp.8869.0.4
>
> On the next backup, this file gets copied to 310, but never gets
> deleted.  If you back up a 200gb file and it fails, and then do another
> 10 incrementals you'll now have 2 TB of disk space used that will never
> come back since each new backup copies the old backup as a starting
> point.  If a future backup fails, you'll now have multiple rsyncTMP files.
>
> I had a server with 22TB disk space that had an extra 16 TB of files
> like this...  Couldn't figure out why my pool stats were nothing like
> what the disk actually had.
>
> Suggestion would be to have the copy command that runs when a backup is
> started delete any leftover files that are named like that.
>

Thank you, this is so true...
I just found:
client 1: 20 x 1125MB
client 2: 38 x 8639MB
client 3: 20 x 3565MB
client 4: 19 x 8639MB
client 5: 97 x 100MB
client 6: 42 x 34MB
client 7: 7 x 4101MB
client 8: 73 x 124MB

Total: 635GB

Ouch. Considering my backup server is reasonably stable, uptime is 
currently 99 days...
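
For anyone wanting to check their own pool, a minimal sketch (assuming the 
default TopDir of /var/lib/backuppc; review the list before deleting 
anything, and don't delete while a backup of that host is running):

find /var/lib/backuppc/pc -name 'rsyncTmp.*' -printf '%s\t%p\n' | sort -n
# once you are happy the files really are leftovers:
# find /var/lib/backuppc/pc -name 'rsyncTmp.*' -delete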

PS, would this work better if the rsyncTmp filename was pre-determined 
based on the file being transferred? This could also lead to some 
starting point for continuing a partially transferred file. eg, a 4GB 
file is backed up, and is modified daily. One day, there are a lot of 
changes, or some other network issue interrupts the backup. Why not 
continue the transfer the next day instead of starting from the 
beginning? The argument that it will corrupt the backup is moot, since 
the tmp file is never actually "part" of the backup files.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] ssh and Mac 10.12 Sierra

2016-11-29 Thread Adam Goryachev
On 30/11/16 07:57, Michael Conner wrote:
> On Nov 29, 2016, at 2:22 PM, Michael Stowe 
> <mst...@chicago.us.mensa.org> wrote:
>> On 2016-11-29 13:54, Michael Conner wrote:
>>> It was just an upgrade from 10.11, with which I had no problem. It was
>>> running 10.11 when I redid the server to Centos 7 and the key exchange
>>> worked ok. What I don’t get is why I can’t manually ssh into root but
>>> I can into another user. Until that changes, I don’t know if I also
>>> have a key problem. This is beyond me and I have yet to find anything
>>> on web with this problem. I have no Linux background, I’ve just picked
>>> stuff up as I needed it for getting BPC to work.
>>> Mike
>>>> On Nov 29, 2016, at 11:58 AM, Phil Kennedy 
>>>> <phillip.kenn...@yankeeairmuseum.org> wrote:
>>>> How far of a jump in upgrade did you make to get to Sierra?
>>>> Apple switched over the sshd_config to use Authorized_keys rather than 
>>>> Authorized_keys2 as the home for trusted keypairs several versions ago. 
>>>> Verify that your sshd config is really doing what you are expecting it to 
>>>> do. WRT key based authentication.
>>>> ~Phil
>>>> On Nov 29, 2016 11:42 AM, "Michael Conner" <mdc1...@gmail.com> wrote:
>>>> I maintain a BackupPC system for our small museum, backing up about 10 
>>>> computers, mostly Windows. BPC is version 3.3.1, running on Centos 7. I 
>>>> just upgraded to Centos 7 earlier this year and got everything working 
>>>> again ok. The one Mac I backup (mine) was just upgraded to 10.12 Sierra 
>>>> and I can no longer get BPC to connect to it. In the past I’ve been able 
>>>> to copy a key using ssh-copy-id -i .ssh/id_dsa.pub root@host_to_backup (at 
>>>> least I think this is the command I’ve used, its from from a tutorial on 
>>>> setting up BPC in Centos).
>>>> I’ve seen some stuff on the web about differences in keys in Sierra, but 
>>>> what puzzles me most is that I can’t ssh to root on the Mac now. When I 
>>>> try to ssh to it from the BPC server, it keeps asking for the password and 
>>>> ultimately fails. I can ssh into another user, just not root. Has anyone 
>>>> successfully gotten BPC to work with Sierra using rsync and a key?
>>>> Mike Conner
>> Yes, of course.  I run Sierra and BackupPC -- I suspect the clue is in your 
>> "id_dsa.pub."  DSA keys have been deprecated in favor of RSA keys.  I can't 
>> say that this is definitively your issue, as I haven't bothered trying to 
>> set up old DSA keys just to test the theory, but I'd recommend trying it 
>> with RSA keys instead, since that certainly works as expected.
> After permitting root log in the sshd_config file, I changed "ssh-keygen -t 
> dsa” to "ssh-keygen -t rsa” and did the key copy with "ssh-copy-id -i 
> .ssh/id_rsa.pub root@host_to_backup” Then I could do an ssh from the BPC 
> server to the mac without password. At first I thought everything was ok as I 
> started a full backup and it got an xfer PID. However, it quit quickly, with 
> an “Unable to read 4 bytes error.” However, when I set the client name alias 
> to the ip address, it seems to have taken. It has been running a backup for 
> 10 minutes, so hopefully that is it.
>
> Thanks for your help.
>
> Mike

Hi Mike,
I would suggest that you revert the sshd_config change so that you can 
benefit from the improved security of rejecting password-based root 
logins. BackupPC will use key-based logins, and so is not affected by 
this configuration.
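
In case it helps, a minimal sketch of the check/change on the Mac client 
(the directive is standard OpenSSH; older releases spell the value 
"without-password"):

sudo grep -n '^PermitRootLogin' /etc/ssh/sshd_config
# set it to:  PermitRootLogin prohibit-password
# root can then still log in with a key, but not with a password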

Regards,
Adam


-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Second Destination

2016-11-21 Thread Adam Goryachev

On 21/11/16 23:29, Bob of Donelson Trophy wrote:


On 2016-11-20 17:58, Les Mikesell wrote:


On Sun, Nov 20, 2016 at 4:32 PM, Bob of Donelson Trophy
<b...@donelsontrophy.net <mailto:b...@donelsontrophy.net>> wrote:
I am currently testing backing up one client machine from the other 
end of
the vpn. I used the client machines ipaddress instead of it's 
hostname. The
ssh key was copied and the backup began as expected. So, the 
different ip
subnet did not matter to the backuppc machine. (First time I had 
ever tried
backing up thru the vpn.) I also expected the backup to be slower 
thru the
vpn than the local backuppc machine there and it appears to be that, 
slower.


I figured, for now, keep it simple and just try this and see what 
happens.


When the backup completes I will compare the two backuppc machine client
data. I'm looking for redundancy with two backups per client machine 
in two
different geographical locations, one local lan and the other at my 
house

(other end of vpn.) Backup has been running about three hours now.

Like you said, the two subnets are well translated by the vpn link.

As far as his "string", I'm still studying what the "options" mean 
and might
his way be a faster backup method or not. These are only thoughts at 
this

point.



The -C enables compression for the ssh and is likely to help
considerably - unless your VPN is also doing compression.   The
initial copy of each client is likely to take a long time, depending
on the speed of your network connections, but subsequent runs with
rsync will be much faster, only copying the differences.   If the
initial full runs are impractical, it might work to initial set the
2nd server up locally, then move it to the offsite location with the
initial data in place - or ship a drive for a similar machine.


(First my apologizes, my webmail client has a glitch. Not a story for 
this mailing list but my reply will look like an extension of the 
previous message and can cause confusion during reading . . . you'll 
see what I mean if the "glitch" happens.)


Thanks for the info, Les.

I backed up a small Active Directory Domain Controller (Samba4). It is 
about 5Gb in size and (full) backed up in about an hour. Slower than 
the local Backuppc but, not a lot slower.


Then I started the backup of one of the ADDC member file servers . . . 
about 250Gb (I think) . . . Backuppc (thru vpn) has been running now 
for about 21 hours for this first, full backup. This backup took about 
five or six hours locally.


I'm guessing it will finish soon . . . I hope.

This idea of doing the initial backup (of the second machine) on the 
local lan and then moving it to the second location, I do not see this 
as very practical. Every week (6.97 days) Backuppc does a full backup, 
does this mean I need to move the machine every 6.97 days to the local 
lan?


Surely there is some compression options that could be put in place to 
better stream the data thru the vpn and improve the backup speed?


So I add "-C" to the "RsyncClientCmd sshpath" or "RsyncArgs"?

Aren't the "-q -x -l" default options in the "RsyncClientCmd sshpath" 
ssh options? (Therefore the "-C" addition goes with them?




I think the point you are missing is that backuppc using rsync for a 
full backup won't actually transfer all of the content of all the files. 
This will only happen on the first backup.
Future backups will only transfer the portion of the files which are 
modified since the last incremental (or full, depending on which is more 
recent), and the content of new files.


You should use SSH compression if your VPN doesn't do compression by 
itself, or you could experiment to see which combination of compression 
works best for your system (ssh compression + vpn compression). 
Generally, I'd simply enable compression on vpn, since I would have that 
on for other clients/needs anyway, and then trust that rsync will 
already be reducing the amount of data that needs to be transferred.
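
If you do want to try ssh compression, a rough sketch of where it goes 
(paths assume a Debian-style BPC 3.x install; check your own config.pl 
first):

sudo -u backuppc grep -n 'RsyncClientCmd' /etc/backuppc/config.pl /etc/backuppc/pc/*.pl
# add -C to the ssh part of that command, e.g. change '$sshPath -q -x -l root ...'
# to '$sshPath -q -x -C -l root ...', then reload backuppc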


I hope that helps clear up why you only need the backuppc server on the 
same lan for the initial backup (or just be really patient for the first 
remote backup).


If you want a very rough estimate of the time it will take, look at the 
disk space in use on the backup target (150GB I think you said), then 
calculate the length of time to transfer that amount of data across your 
link, and add another 25%.
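
For example, at an assumed 20 Mbit/s uplink (substitute your own link speed):

echo $(( 150 * 1024 * 8 / 20 / 3600 ))   # => ~17 hours for 150GB, call it ~21 with the extra 25%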


Regards,
Adam



--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Adam Goryachev


On 1/11/16 21:59, Gandalf Corvotempesta wrote:
> 2016-11-01 11:35 GMT+01:00 Johan Ehnberg :
>> Changes in BackupPC 4 are especially geared towards allowing very high
>> full periods. The most recent backup being always filled (as opposed to
>> rsnapshots hardlink pointing to the first), a full backup is not
>> required to maintain a recent and complete representation of all the
>> files and folders.
> So, with the current v4, deleting a full backup doesn't break the
> following incrementals?
> In example, with Bacula, if you delete a "full" backup, all following
> backups are lost.
> In rsnapshot, you can delete whatever you want, it doesn't break
> anything as long as you keep at least 1 backup, obviosuly
>
>
Ummm, silly question, but why would you want to delete a backup? 
BackupPC supports automatic removal of old backups based on the schedule 
you provide; you shouldn't be manually messing with the backups. If you 
need a different schedule, then adjust the config and let backuppc 
handle it for you.

So, can you explain the need to delete random backups manually? 
Generally, if you need to do something weird like that, then either you 
are doing something wrong, or you are using the wrong tool.

Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Adam Goryachev


On 1/11/16 19:03, Gandalf Corvotempesta wrote:
> How can I accomplish this with backuppc 4 ? I don't want to create
> "full" backups, as this woìuld require 2 days for some servers, except
> for the very fist backup for that host or if the full backup is
> missing
>
>
Easy: configure your full/incremental keep counts and your 
full/incremental periods so that they match your desired retention.
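
As a rough sketch of what to look at (the setting names are the standard 
BackupPC ones; the values are only an illustration, not a recommendation):

sudo -u backuppc grep -nE 'FullPeriod|IncrPeriod|FullKeepCnt|IncrKeepCnt' /etc/backuppc/config.pl
# e.g. FullPeriod = 29.97 (a full roughly monthly), IncrPeriod = 0.97 (daily
# incrementals), FullKeepCnt = 4, IncrKeepCnt = 30 -- adjust to your retention needs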

Make sure you use rsync (ie, same as rsnapshot).

Use checksum caching

Then, after your second full backup, you'll see that the time to complete 
a backup is similar to an incremental.

You really will need to try it to see how it works, someone telling you 
it works well for them won't answer your questions. I'd suggest 
targeting a "small" host first, so you can practice more easily, when 
you are more comfortable with it, then move on to bigger and more 
ambitious goals.

Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Version 4 vs 3

2016-10-27 Thread Adam Goryachev
On 28/10/16 03:36, Johan Ehnberg wrote:
> On 10/27/2016 07:23 PM, Alain Mouette wrote:
>> On 27-10-2016 13:16, Bowie Bailey wrote:
>>> The BackupPC project has a single developer who tends to be rather busy
>>> most of the time, so development happens in bursts with months or years
>>> between releases.
>> Thank you very much for your grat work!
>>
>>> but I know
>>> there are some people on the list who have been using it version 4 for
>>> some time now.
>> Hi guys, could you give us some information about it?
> Have a look at the v4 documentation intro for an overview of features:
> https://github.com/backuppc/backuppc/blob/master/doc-src/BackupPC.pod
>
> And the issues for an idea of the status of development:
> https://github.com/backuppc/backuppc/issues
>
> For me using rsync, v4 is promising to become cloud capable where v3 is
> not. V4 has some excellent new features such as checksum matching before
> transfer. However, some features that make BackupPC great, such as
> checksum caching, are not yet implemented, making v4 still much slower
> in those cases.
FYI I've been using v4 for over a year now, I recently patched my 
install to include a solution to a bug/issue I raised a couple years 
ago, but even without that patch it would appear to have been working 
well. From my experience, I would advise to start v4 with a clean pool, 
and avoid trying to re-use the old v3 pool. A lot of my issues were 
caused by that.
Doing some work recently (adding new hosts) I realised that performance on 
v4 is hit hard by a couple of "bugs" (undeveloped sharp edges) which make 
it do a full fsck on all existing backups after every new backup (or 
partial). If you have a large number of backups, and/or a lot of files on 
the machines, this will cause a significant drop in performance.

So, technically, it seems to work well, in practice, it could do with 
some improvements/polishing and it will become an even more awesome 
solution.

Regards,
Adam


-- 
Adam Goryachev
Website Managers
P: +61 2 8304     a...@websitemanagers.com.au
F: +61 2 8304 0001 www.websitemanagers.com.au


-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Strange ssh problems trying to do restore to localhost

2016-10-09 Thread Adam Goryachev


On 10/10/2016 00:20, adam...@cyberspaceroad.com wrote:
> There is plenty of traffic showing in the archives here, but it does
> appear that there is no-one willing to help newbie-level problems.
>
> I asked a similar question a few months ago here and got no response then
> either.
>
> Sometimes with these online communities I guess it happens that there is
> currently no-one on the list willing to help. It's a bit crap for newbies
> especially since the documentation doesn't help - at least not in my case.
>
> For instance now my problem is with the documentation about restores which
> says "You can optionally change the target host name, target share name,
> and target path prefix for the restore, allowing you to restore the files
> to a different location."
>
> However it doesn't say what to enter when there is no target share name
> (I'm restoring to a local file system, not a share). If I leave the field
> blank, the restore fails.
>

Follow the documentation, trust the docs, they are right. If you really 
know what you are doing, then you can use the CLI tools to do a restore 
to the local filesystem (or even a remote one). If you don't know how to 
use the basic Linux tools like tar, ssh, etc, then you should stick to 
the amazing GUI that is provided. Except, you will also need to live 
with the restrictions that the GUI imposes, like only being able to 
restore to a directory that is under a share which is on a host that is 
already setup/working (or downloading the restore as a tar/zip file and 
then restoring it yourself, but that comes back to knowing how to use 
basic Linux CLI tools).
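
For the record, a minimal sketch of a CLI restore straight into a local 
directory, reusing the host/share/backup number that appear in your own 
log below (adjust them, and make sure the target directory exists):

sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h fangorn -n 72 \
    -s /cygdrive/d/Documents / | sudo tar -xvpf - -C /home/adam/fangorn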

I understand that being a newbie is tough, we have all been there once, 
and still come across new things that we are not experts in. However, 
when you are a newbie, at least read the documentation that is provided, 
and when you have problems, remember how to ask for help 
(http://www.catb.org/~esr/faqs/smart-questions.html#rtfm for all the 
gory detail, lots of very helpful guidance there).

Again, apologies for my rant.

Regards,
Adam
>
>
>> Hi,
>> I am new here and I am still experimenting with BackupPC, but I am worried
>> too about this problem...
>>
>> This is a common scenario where a machine goes down, nobody can help?
>>
>>
>> -
>> Alain Mouette
>>
>>
>>
>> A 7 de outubro de 2016 14:16:37 adam...@cyberspaceroad.com escreveu:
>>
>>> One of my hosts died completely and needs replacing, but in the meantime
>>> I
>>> had to restorer some of the files to the localhost where backuppc is
>>> running.
>>>
>>> I used the GUI to set up the restore job, but it immediately failed with
>>> the usual suspect: ssh problems. Since I'd had to solve that problem
>>> before, making sure that the passwordless ssh login between host and
>>> client worked.
>>>
>>> In this case, backuppc is trying to ssh to localhost.
>>>
>>> I went back to the command line and set up ssh so that it can now
>>> passwordless-ssh to itself either as root or as backuppc. I tested both
>>> cases with all combinations:
>>>
>>> sudo -u backuppc ssh backuppc@127.0.0.1 whoami
>>> sudo -u backuppc ssh backuppc@localhost whoami
>>> sudo -u backuppc ssh backuppc@gondor whoami
>>>
>>> and they all work nicely.
>>>
>>> However this seems to have made backuppc's problems worse, and now I get
>>> the error appearing not just once as before, but 3 times as copied from
>>> the log file below:
>>>
>>> Contents of file
>>> /media/adam/WDPassport2T/backuppc/pc/localhost/RestoreLOG.8.z, modified
>>> 2016-10-07 16:34:22
>>>
>>> Running: /usr/bin/ssh -q -x -l root localhost env LC_ALL=C /bin/tar -x
>>> -p
>>> --numeric-owner --same-owner -v -f - -C
>>> Running: /usr/share/backuppc/bin/BackupPC_tarCreate -h fangorn -n 72 -s
>>> /cygdrive/d/Documents -t -r / -p /home/adam/fangorn/ /
>>> Xfer PIDs are now 25659,25660
>>> No protocol specified
>>>
>>> (ssh-askpass:25663): Gtk-WARNING **: cannot open display: :0.0
>>> No protocol specified
>>>
>>> (ssh-askpass:25664): Gtk-WARNING **: cannot open display: :0.0
>>> No protocol specified
>>>
>>> (ssh-askpass:25665): Gtk-WARNING **: cannot open display: :0.0
>>> Tar exited with error 65280 () status
>>> tarCreate: Unable to write to output file (Connection reset by peer)
>>> restore failed: BackupPC_tarCreate failed
>>>
>>> So has anyone seen this before? Is there some backuppc-foo that I need
>>> to
>>> do, or is ssh just not set up properly still?
>>>
>>> Regards
>>> Adam
>>>
>>>
>>>
>>>
>>> --
>>> Check out the vibrant tech community on one of the world's most
>>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot
>>> ___
>>> BackupPC-users mailing list
>>> BackupPC-users@lists.sourceforge.net
>>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>>> Wiki:http://backuppc.wiki.sourceforge.net
>>> Project: http://backuppc.sourceforge.net/
>>
>>
>>
>> 

Re: [BackupPC-users] Strange ssh problems trying to do restore to localhost

2016-10-09 Thread Adam Goryachev
Haha, if you read the error messages, you will see the problem:

> (ssh-askpass:25663): Gtk-WARNING **: cannot open display: :0.0
> No protocol specified
>
> (ssh-askpass:25664): Gtk-WARNING **: cannot open display: :0.0
> No protocol specified
>
> (ssh-askpass:25665): Gtk-WARNING **: cannot open display: :0.0
> Tar exited with error 65280 () status
> tarCreate: Unable to write to output file (Connection reset by peer)
> restore failed: BackupPC_tarCreate failed
Ask yourself, why is ssh-askpass running if you already have 
passwordless logins working? Answer... you don't have passwordless login 
working.
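
A quick check that reproduces what BackupPC actually runs for the restore 
(note the -l root; your tests below all log in as the backuppc user):

sudo -u backuppc /usr/bin/ssh -q -x -l root localhost whoami
# if this prompts for a password (or pops up ssh-askpass), the backuppc user's
# key is not in root's authorized_keys on localhost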


On 9/10/2016 23:21, Alain Mouette wrote:
> Hi,
> I am new here and I am still experimenting with BackupPC, but I am worried
> too about this problem...
>
> This is a common scenario where a machine goes down, nobody can help?
Sure, but if you are responsible for managing multiple linux machines, 
including the backup/restore of those machines, then you really should 
have some idea on what you are doing, and how to diagnose simple problems.

Have you looked at the logs like /var/log/auth.info which probably has a 
record of the failed login?
No conspiracy, just many people get tired of trying to help people that 
can't read the documentation. While a lot of OSS has really poor, old or 
no documentation, BackupPC is very well documented.

For all those watching on the side lines, test your backup *and* restore 
on a regular basis, and make sure you have documented your own 
environment. Get it working before you need it. When the crap hits the 
fan, there will be enough panic without also having to try and work out 
how your own backup system works.

Sorry, my rant is over ... must be having a bad night...

Regards,
Adam

>
> -
> Alain Mouette
>
>
>
> A 7 de outubro de 2016 14:16:37 adam...@cyberspaceroad.com escreveu:
>
>> One of my hosts died completely and needs replacing, but in the meantime I
>> had to restorer some of the files to the localhost where backuppc is
>> running.
>>
>> I used the GUI to set up the restore job, but it immediately failed with
>> the usual suspect: ssh problems. Since I'd had to solve that problem
>> before, making sure that the passwordless ssh login between host and
>> client worked.
>>
>> In this case, backuppc is trying to ssh to localhost.
>>
>> I went back to the command line and set up ssh so that it can now
>> passwordless-ssh to itself either as root or as backuppc. I tested both
>> cases with all combinations:
>>
>> sudo -u backuppc ssh backuppc@127.0.0.1 whoami
>> sudo -u backuppc ssh backuppc@localhost whoami
>> sudo -u backuppc ssh backuppc@gondor whoami
>>
>> and they all work nicely.
>>
>> However this seems to have made backuppc's problems worse, and now I get
>> the error appearing not just once as before, but 3 times as copied from
>> the log file below:
>>
>> Contents of file
>> /media/adam/WDPassport2T/backuppc/pc/localhost/RestoreLOG.8.z, modified
>> 2016-10-07 16:34:22
>>
>> Running: /usr/bin/ssh -q -x -l root localhost env LC_ALL=C /bin/tar -x -p
>> --numeric-owner --same-owner -v -f - -C
>> Running: /usr/share/backuppc/bin/BackupPC_tarCreate -h fangorn -n 72 -s
>> /cygdrive/d/Documents -t -r / -p /home/adam/fangorn/ /
>> Xfer PIDs are now 25659,25660
>> No protocol specified
>>
>> (ssh-askpass:25663): Gtk-WARNING **: cannot open display: :0.0
>> No protocol specified
>>
>> (ssh-askpass:25664): Gtk-WARNING **: cannot open display: :0.0
>> No protocol specified
>>
>> (ssh-askpass:25665): Gtk-WARNING **: cannot open display: :0.0
>> Tar exited with error 65280 () status
>> tarCreate: Unable to write to output file (Connection reset by peer)
>> restore failed: BackupPC_tarCreate failed
>>
>> So has anyone seen this before? Is there some backuppc-foo that I need to
>> do, or is ssh just not set up properly still?
>>
>> Regards
>> Adam
>>
>>


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] root cannot access pool directory

2016-10-05 Thread Adam Goryachev


On 5/10/2016 18:41, orsomannaro wrote:
> Hi all.
>
> I'm using BackupPC on Ubuntu 16.04 server.
>
> The pool directory is mounted on a NFS share.
>
> If I try to access the pool directory as root user I have a "Permission
> denied" error.
>
> This usually occur with FUSE filesystems.
>
> Is that the case?
>
>
Look into the NFS no_root_squash option on your NFS server.
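
Roughly, on the NFS server (a sketch only; the export path and client name 
are placeholders for whatever you actually use):

# /etc/exports
#   /export/backuppc   backuppc-server(rw,sync,no_subtree_check,no_root_squash)
sudo exportfs -ra   # re-export after editing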

Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Rsync Unexpected empty share name skipped

2016-10-03 Thread Adam Goryachev

On 04/10/16 02:36, Orazio Di nino wrote:

Hi,

I have BackupPC-3.3.1 running on Ubuntu server 16.04 LTS

The XferMethod is rsync

I have this error in the log /var/lib/backuppc/pc/192.168.2.80/LOG102016:

-full backup started for directory /tecno
-unexpected empty share name skipped
-Backup aborted

and this in/var/lib/backuppc/pc/192.168.2.80/XferLOG.bad.z  (only error)
Executing DumpPreUserCmd: /usr/local/bin/startbkpemail.sh  0 192.168.2.80 full 
DumpPreUserCmd
full backup started for directory /tecno
Running: /usr/bin/ssh -q -x -l root 192.168.2.80 /usr/bin/rsync --server 
--sender --numeric-ids --perms --owner --group -D --links --hard-links --times 
--block-size=2048 --recursive --ignore-times . /tecno/
Xfer PIDs are now 16298
Got remote protocol 30
Negotiated protocol version 28
Xfer PIDs are now 16298,16299
[ skipped 47 lines ]
Done: 37 files, 85325611 bytes
Executing DumpPostUserCmd: /usr/local/bin/endbkpemail.sh  0 192.168.2.80 full 
DumpPostUserCmd
Backup aborted ()
Not saving this as a partial backup since it has fewer files than the prior one 
(got 37 and 37 files versus 373201)
some help? I have no idea of the error thanks Orazio

I think the first thing to do is get rsync to talk in US English; I 
don't think backuppc will understand any other language. You can do this 
from the init script before starting backuppc, which is probably best 
because it will then apply to all the utilities it runs.


See this website for some more information on setting the environment:
https://perlgeek.de/en/article/set-up-a-clean-utf8-environment
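
A minimal sketch of what I mean (assuming a sysvinit-style startup; adapt 
to however backuppc is started on your Ubuntu 16.04 box):

# near the top of /etc/init.d/backuppc (or a service override):
export LC_ALL=C
export LANG=C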

Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't write len=1048576 to /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp

2016-09-22 Thread Adam Goryachev
On 22/09/16 19:54, Peter Thurner wrote:
> Hi Guys,
>
> I'm running a backuppc Server Installation on Debian 8. I'm using rsync
> + filling method on all clients. I'm backing up several Debian 8
> Clients, one of which makes problems. When I start the the backup, the
> hook Script works fine, the rsync seems to run through - I strace it on
> the client and see that it does stuff, also the new directory fills up.
> After the rsync however "nothing" more happens and the client hangs. If
> I wait for two days, the backup aborts. When I abort myself I get the
> following errors in the bad log (I set log level to 2 and added
> --verbose as rsync option)
>
>skip 600   0/0  934080 var/log/installer/syslog
>skip 600   0/0 1162350 var/log/installer/partman
> Can't write len=1048576 to
> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
> Can't write len=1048576 to
> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
> [...]
> lots of those cant write
> [...]
> Parent read EOF from child: fatal error!
> Done: 0 files, 0 bytes
> Got fatal error during xfer (Child exited prematurely)
> Backup aborted by user signal
>
>
> I tried writing to the RStmp file during a backup - if I touch or echo
> fo > RStmp it, I can write to it. If I dd if=/dev/zero of=RStmp bs=1M
> count=1000, the file disappears right away, as in:
>
> dd if=/dev/zero of=RStmp ... ; ls RStmp
> no such file or directory
>
> Any Ideas to what might cause this?
Ummm, backuppc is in the process of backing up data, and you want to 
start stepping on its toes by writing to its temp file? That doesn't 
make any sense to me, but I guess you have your reasons.

I'm assuming you are using BPC v3, but you didn't let us know.

I would guess that there is a large (possibly sparse) file that is in 
the process of being backed up, and it takes a long time. I think you 
might see more information by examining the client rsync process with 
strace, or, when it is "stalled" (ie, backing up the large file), 
look at ls -l /proc/<pid>/fd which will show which file it has 
open. Then you can check what is wrong with the file (unexpected large 
file, or sparse file, or whatever). Once identified, you can either 
arrange for the file to be deleted, or excluded from the backup, or be 
more patient and/or extend your timeout to allow this large file to 
complete.
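
A quick way to find and inspect that process on the client (this assumes a 
single rsync process; otherwise substitute the right pid):

pgrep -a rsync                          # find the client-side rsync pid
sudo ls -l /proc/$(pgrep -n rsync)/fd   # the large file being read will show up here
sudo strace -p $(pgrep -n rsync) -e trace=read,write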

If you are unable to solve the issue, please provide some additional 
details. Especially a look at strace of rsync on the client while it is 
"stalled" will help identify what it is doing.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] question about backuppc topdir

2016-09-14 Thread Adam Goryachev


On 14/09/2016 20:39, Juan Manuel wrote:
> Hello Stefan Peter/Adam before of all regards for you reply.
>
> My problem is that I have 3 disk of 1 TB each, 2 disk are in RAID 1 (mirror 
> with mdadm) and is near to full, the TopDir is mounting on this RAID via LVM 
> (PV and LV).
> The server is an ordinary PC desktop, so can't put much more disk.
>
>   From the first disk only I use a litle partition (LV) to operating system 
> and backuppc binaries, the rest of the disk (1 TB) in not used.
>
>
> My question is: it is posible to mount another space to do backups (like 
> another TopDir on backuppc) using the rest available of the first disk, 
> but independent of the RAID 1 that I have? because this disk are not in
> RAID 1 mirrored, so can I put not critical backups there. Regards. Juan
> Manuel.
>
> PD: the other way is to drop all an make a RAID 5 with the 3 disk (1 TB)
> and redo all installations.

Personally, this whole setup is a bad idea. You are wasting space 
(1TB) for a "small" OS drive (probably 50GB is more than ample, I 
commonly use 20 or 30GB), and on top of that you are limiting the space 
for backups to just 1TB.

I always do a three-way RAID1 mirror for the OS (sdx1), then a small 
chunk from each drive (sdx2) for three swap partitions (maybe 2GB each), 
and then use the rest of each drive (sdx3) to build a RAID5.

In your case, I would probably try to reduce the space used on the third 
drive to the minimum possible (allow some spare space for OS growth/etc).
Then, reduce the amount of space consumed on the RAID1 mirror (maybe 
delete the oldest backup if necessary). Then reduce the FS (assuming 
your FS supports reduction) and then reduce the RAID array size so it 
matches the spare space on the third drive.
Finally, grow your RAID1 array into a RAID5 array.
mdadm does support RAID1 to grow to a RAID5
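
Very roughly, and only as a sketch (device names are assumptions; read the 
mdadm man page and have backups elsewhere before reshaping):

sudo mdadm --grow /dev/md0 --level=5             # convert the 2-disk RAID1 to RAID5
sudo mdadm --manage /dev/md0 --add /dev/sdc3     # add the partition freed on the third disk
sudo mdadm --grow /dev/md0 --raid-devices=3      # reshape across all three disks
# then grow the PV/LV/filesystem on top (pvresize, lvextend, resize2fs/xfs_growfs)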

Regards,
Adam


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] question about backuppc topdir

2016-09-13 Thread Adam Goryachev
On 14/09/16 08:24, Stefan Peter wrote:
> Dear Juan Manuel,
>
> On 13.09.2016 16:57, Juan Manuel wrote:
>> Hello we have a backuppc server and work perflectly, but it near to disk
>> full.
>>
>> So it is posible to add another resource/filesystem that to use
>> backuppc, another TopDir ?
>>
>> We dont have more disk space on the same disk, but we can add another disks.
>>
>> The resource (TopDir) /backuppc is actually in a logical volume LVM. We
>> try to find a solution that not imply using expanding logical volume
>> with other physical volume on LVM.
>
> Why don't you want to expand the LVM volume? The ability to expand,
> shrink, migrate and taking snap shoots are the only valid reason to use
> LVM, IMHO. And all these operations are more or less painless (I just
> migrated a 12TByte BackupPC volume from an old raid to a new one, which
> is possible without even unmounting the volume!).
>
> You can find more information about LVM at
> http://tldp.org/HOWTO/LVM-HOWTO/
> by the way.
>
> With kind regards
>
> Stefan Peter
>
I would have to agree, why use LVM if you aren't planning on ... using it?
Although, I would suggest that if the underlying physical device is 
full, there are two ways to expand that:
1) Assuming the underlying physical is RAID, then expand the RAID by 
adding additional devices, or replacing the drives with larger ones
2) Assuming you can't do the above because it's hardware RAID or some 
other constraint, then add a second RAID array, and extend your existing 
VG onto this second array, and then extend your LV to use both. This is 
basically equivalent to RAID 10 (assuming your underlying PV's are RAID 
1) or RAID 50 (assuming the underlying PV's are RAID5).
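
For option 2, the LVM side is roughly (a sketch; the device and VG/LV names 
are assumptions):

sudo pvcreate /dev/md1                           # the new second array
sudo vgextend backuppc_vg /dev/md1
sudo lvextend -l +100%FREE /dev/backuppc_vg/topdir
sudo resize2fs /dev/backuppc_vg/topdir           # or xfs_growfs <mountpoint> for xfs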

Personally, I tend to use linux software RAID to build up the PV as big 
as needed, and then use LVM to divide a single PV into the multiple LV's 
and snapshots, though it is possible to just use all the physical drives 
as PV's and let LVM handle the RAID across them.

Regards,
Adam


-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] access backups created by different server

2016-09-07 Thread Adam Goryachev
On 08/09/16 02:37, Michele Zarri wrote:
> Hello,
>
> Is it possible to access backups created with a different machine?
>
> My backuppc server died, but /var/lib/backuppc was on a different disk
> so I still have full backups of localhost.
> Ideally I would like to restore the full system but it would already be
> very good to recover the mysql databases and selected system
> configuration files.
>
> I reinstalled ubuntu, reinstalled backuppc, relinked /var/lib/backuppc but
> - No joy with the GUI (old backups do not show)
> - No joy using the command line.
>
> (as backuppc user)
> $ /usr/share/backuppc/bin/BackupPC_tarCreate -h localhost -n 224 -s '/'
> /var/lib/mysql > /tmp/dbs.tar
> /usr/share/backuppc/bin/BackupPC_tarCreate: bad backup number 224 for
> host localhost
>
> - BackupPC_zcat works but I have not figured out how to restore
> user:group and permissions
>
> Is there a way to get the GUI to read the backups (I am lazy)? If not is
> there a way to trick BackupPC_tarCreate?
>
> Note that when I re-installed backuppc it probably wrote new files in
> /var/lib/backuppc/pc/ and I do not have backups of those.

It would help if you provided the error messages you get with the gui in 
the logs...
However, my guess is that the userid/groupid for the new install doesn't 
match the old. You have two choices:
1) Uninstall backuppc, remove the user/group backuppc, manually create 
the user/group backuppc but make sure you match the userid/groupid on 
the files in /var/lib/backuppc
2) Re-mount /var/lib/backuppc and map all existing files to the new 
userid/groupid that the backuppc user has.
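
For option 2, a minimal sketch (with backuppc stopped, and only after 
checking that nothing else owns files under there):

sudo systemctl stop backuppc     # or: service backuppc stop
sudo chown -R backuppc:backuppc /var/lib/backuppc
sudo systemctl start backuppc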

Another possibility: you need to re-create the config files and/or hosts 
file. The contents probably don't matter a lot, as long as the 
hostnames match the old ones (see /var/lib/backuppc/pc/ for the right 
names).

Hope that helps

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] SMB-based backup failing after upgrading from 3.3.0 into 3.3.1

2016-08-29 Thread Adam Goryachev

On 29/08/16 23:59, Andriy Fomenko wrote:
I had a working BackupPC configuration (based on CentOS 7) performing 
backup of Windows share over SMB, then with minor BackupPC version 
upgrade it broke in samba part, still operattes normally for linux hosts.


What is interesting, for the "full backup" of Windows host it shows 
partial backup with ALL FILES PRESENT, but then declares this backup 
to be aborted.


As I understand it:

  * samba performs files copy: OK
  * tar extracts all files: OK
  * looks like some checks after that fail

Here is version change on my server:

Packages Altered:
Updated BackupPC-3.3.0-4.el7.nux.x86_64  @nux-dextop
Update   3.3.1-5.el7.x86_64  @epel

Here is an error log:

Contents of file /var/lib/BackupPC//pc/tp2.dev.videonext.net/XferLOG.483.z, modified 2016-08-29 09:24:47 (Extracting only Errors)

Running: /usr/bin/smbclient tp2.dev.videonext.net\\C -U BACKUP -E -d 1 -c tarmode\ full -Tc - \\BACKUP
full backup started for share C
Xfer PIDs are now 25728,25727
[ skipped 1 lines ]
tar:316  tarmode is now full, system, hidden, noreset, quiet
[ skipped 971 lines ]
tar:711  Total bytes received: 1042800896
[ skipped 33 lines ]
tarExtract: Done: 0 errors, 944 filesExist, 201159397 sizeExist, 156596094 
sizeExistComp, 997 filesTotal, 104286 sizeTotal
Got fatal error during xfer (No files dumped for share C)
Backup aborted (No files dumped for share C)
Saving this as a partial backup, replacing the prior one (got 997 and 0 files 
versus 0)
Thank you,

Did you also upgrade the version of samba, or any other packages on the 
backup server at the same time?


Regards,
Adam



--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How do you monitor a backup process?

2016-08-04 Thread Adam Goryachev
On 05/08/16 04:52, martin f krafft wrote:
> also sprach Adam Goryachev <mailingli...@websitemanagers.com.au> [2016-08-04 
> 16:04 +0200]:
>> I've used ls -l /proc/pid/fds or strace or lsof etc... all work,
>> some are better on the client rather than the backuppc server.
> In fact, I found none of those useful on the server.
>
>> I've also used tail -f XferLOG | Backuppc_zcat which does work,
>> but doesn't update in real time (ie, you have to wait for a number
>> of lines of log output before you see the update.
> I've tried this, but I get:
>
>/usr/share/backuppc/bin/BackupPC_zcat: can't uncompress stdin
>
> This is using BackupPC 3.3.0 (Debian stable)

Sorry, I've not used BPC 3.x in years...
Maybe try this:
tail -f -n +0 blah.log | /usr/share/backuppc/bin/BackupPC_zcat -
You need to include the beginning of the file or else it won't detect 
the compression header. Also, the - frequently means "use stdin" when a 
filename parameter is required; it may or may not be needed.
>
>> Not sure of a "better" way Backuppc 4.0 includes a counter for
>> number of files xfered though that doesn't help for BPC 3.x
> The counter isn't really that useful, I think, especially not if it
> doesn't have a "X of Y files" total that doesn't change (cf. rsync,
> which is kinda useless, as the total keeps increasing).
It includes the total number of files from the previous backup... so 
generally it is pretty useful (unless the client has added a huge number 
of files in between backups, or you are stuck backing up a single huge 
file, and then it looks like there is no progress). Perhaps a better 
indicator would be based on MB's processed compared to the size of the 
previous backup. I'm sure patches are welcomed :)
> The more I think about it, the more I want XferLOG
> uncompressed/unbuffered, but also structured in a way so that it
> starts a new line when it inspects a file, and then finishes the
> line with details and the verdict (same, create, link, …)
Feel free to write a patch to do what you want, but I expect patches to 
BPC v3.x are unlikely to be added at this stage, unless they fix actual 
problems (ie, preventing backups from working).

Remember in the majority of cases, you won't be watching backups, they 
are something that *just happens*, and later you will come along and 
verify they did happen, or restore some files. So "watching" a backup in 
progress isn't a high priority...
>
> also sprach Tony Schreiner <anthony.schrei...@bc.edu> [2016-08-04 15:52 
> +0200]:
>> Also on the backup host, you can get the  process id of the current dump
>> processes (there will be two per host during file transfer), and do
>>
>> (sudo) ls -l /proc/{pid1,pid2}/fd
>>
>> if a file is being written to backup it will show up in this list. But be
>> aware that there are times (sometimes long) when files are not being written
> What happens during those times?
Backing up a single large (modified) file requires the server to 
de-compress the original file, and then add the changes from the remote. 
I'm not sure why, but BPC v3 seems to be rather inefficient at this 
process. This is one of the reasons I tend to split large remote files 
on the remote side prior to BPC (eg, VM images, sql dumps, etc); the 
other reason is that most chunks will be the same, and so it saves disk 
space on the BPC side, improves rsync bandwidth consumption, etc.
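
For what it's worth, the splitting I mentioned is nothing fancier than 
something like this on the client (a sketch; the chunk size and paths are 
arbitrary):

# e.g. for a nightly SQL dump:
mysqldump --all-databases | split -b 512M -d - /srv/dumps/alldbs.sql.
# to restore: cat /srv/dumps/alldbs.sql.* | mysql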

Regards,
Adam



-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unnecessary reads with rsync?

2016-08-04 Thread Adam Goryachev
On 05/08/16 06:31, martin f krafft wrote:
> also sprach martin f krafft <madd...@madduck.net> [2016-08-04 22:16 +0200]:
>> Right now, I am staring at the lsof output of the rsync process on
>> a backup client, spawned by BackupPC. It's processing a 3.5G file
>> that has not been touched in 5 years and has been backed up numerous
>> times. According to strace, the entire file is being read, and it's
>> taking a toll:
> I also can't help but notice that the pool file is open on the
> server, and that the corresponding dump process does continuious
> reading (according to strace) on a socket, presumably linked to the
> SSH process connected with the client.
>
> Maybe reliance on file metadata isn't good enough for a backup.
> After all, a backup should care about file content, not metadata.
>
> But instead of (what seems to be) chunk-wise checksum transmission,
> why don't we (also) store the whole-file checksum on the server (can
> be computed in the same pass) and at least give people the option to
> risk reading every file once to compute this checksum, if it means
> being able to skip files without further ado or large data
> transfers?
A couple of possibilities:
a) you haven't enabled checksum caching (--checksum-seed=32761)
b) you haven't completed at least 2 full backups including this file
c) you haven't configured backuppc the way you want it 
(RsyncCsumCacheVerifyProb)
d) something else, provide more information and we might be able to 
comment further

PS, backuppc v4 does store full file checksums, but you probably still 
want to verify the block checksums of the file from time to time on the 
slim chance that the full file checksum matches but the content is 
different.
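
To be explicit about (a) and (c) above, a sketch for BPC 3.x (the option 
names are from the standard docs; adjust the path for your install):

sudo -u backuppc grep -nE 'checksum-seed|RsyncCsumCacheVerifyProb' /etc/backuppc/config.pl
# RsyncArgs and RsyncRestoreArgs should both include '--checksum-seed=32761';
# RsyncCsumCacheVerifyProb (default 0.01) controls how often the cached
# checksums are re-verified. The benefit only appears after a couple of
# full backups of the file.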

Regards,
Adam


-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Choice of compression algorithm for logs

2016-08-04 Thread Adam Goryachev
On 05/08/16 05:33, martin f krafft wrote:
> Hello,
>
> the fact that BackupPC compresses log files using zlib and requires
> /usr/share/backuppc/bin/BackupPC_zcat for their uncompression is
> a bit of a nuisance, not only when log files are being
> sync'd/analysed on a system where there is no BackupPC installed.
>
> I also can't find a suitable decompressor in a Debian package,
> especially not one supporting reading from stdin.
>
> Why aren't we just using standard gzip or bzip2 or xz, for which
> decompressors exist on pretty much every Unix system?
I'm pretty sure there is a backuppc package for debian :)

If you really want a third party tool, and want that packaged for 
debian, then I think you will need to be the one to do it/make it 
happen. So far, everyone else has been happy without it (or happy enough 
to not actually do it).

PS, we are using BackupPC's own compression tools because:

a) We know they will exist
b) We need to use them for the data, so might as well also use them for 
the logs which will compress really well

You are free to make a patch that will use a different compression tool, 
and allow that to be configured, perhaps others will appreciate it also.

Regards,
Adam


-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Still unable to "resume" a partial backup

2016-08-04 Thread Adam Goryachev
On 05/08/16 04:42, martin f krafft wrote:
> also sprach Adam Goryachev <mailingli...@websitemanagers.com.au> [2016-08-04 
> 15:47 +0200]:
>> On 4/08/2016 23:43, martin f krafft wrote:
>
>> It should work as you said, but if you never have enough time to
>> transfer the second file, then you won't actually proceed.
>> BackupPC will still check every file "before" the second file in
>> case there have been changes there, but ultimately, if the second
>> file is too big to transfer within the allotted time, then it
>> can't succeed.
> I do wonder if it woulnd't make sense to
>
>(a) randomise the order of files
>(b) update partial backups
>
> for the combination of those two will mean that over time, even
> a partial backup will become more and more useful, don't you think?
>
The order that files are processed in depends on the client; without using 
non-standard client tools, we can't influence/change that.
We do update a partial backup, as long as the new partial contains more 
files than the previous partial, but it doesn't even save a partially 
transferred file. In some ways, (I've also asked for this a few years 
ago), it would be nice, because as you said, over time you would get 
more and more of the file, and eventually complete it (and each time it 
would be "more useful"). However, the decision was made to ensure that 
either the file is correct (complete) or missing, and that argument does 
also make sense (I see the reasons for both options, just neither option 
is 100% right for everybody).

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How do you monitor a backup process?

2016-08-04 Thread Adam Goryachev
On 4/08/2016 22:59, martin f krafft wrote:
> Hey,
>
> some of the backup processes here run for hours, and there are often
> reasons why I want to check on what's going on.
>
> How do you monitor backups in real-time? XferLOG.z can't be tail'd,
> and attaching strace or lsof to the running processes just isn't
> very sexy.
>
> Can you fathom a good method by which I can keep a good eye on
> what's going on, e.g. a way to have BackupPC write to the host's log
> file something like
>
I've used ls -l /proc/<pid>/fd or strace or lsof etc... all work, some 
are better on the client rather than on the backuppc server.
I've also used tail -f XferLOG | BackupPC_zcat which does work, but 
doesn't update in real time (ie, you have to wait for a number of lines 
of log output before you see the update).

Not sure of a "better" way... BackupPC 4.0 includes a counter for the 
number of files transferred, though that doesn't help for BPC 3.x

Regards,
Adam

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] three hundred thousand directories 'created' during backup

2016-08-04 Thread Adam Goryachev


On 4/08/2016 23:56, cardiganimpatience wrote:
> Backups are taking about three hours for a particular fileserver and records 
> indicate that over 300k new directories are being created every run.
>
> I opened the XferLOG in a browser and searched for the word "create d" which 
> catches every newly-created directory. The count was 348k matches. But the 
> file count in the summary is only 4724 files [sorry for the formatting]:
I can't comment on the rest, but directory entries are always created, 
because backuppc needs them for the backup structure (directories can't 
be hard-linked, so there is no disk saving to be had)...


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Still unable to "resume" a partial backup

2016-08-04 Thread Adam Goryachev


On 4/08/2016 23:43, martin f krafft wrote:
> 3) Ensure that you can backup any file within the ClientTimeout,
> Is this necessary? Isn't ClientTimeout about killing the connection
> after a period of time without any traffic?
Almost, but the timer is only updated after each file has been 
transferred, not after each chunk/byte of a file.
>> The problem is backuppc will not accept a partial file, either the
>> full file is received/saved in the partial backup, or none of the
>> file is saved. I don't expect this will change.
> I don't think it can change. But if I have three huge files, and the
> backup always dies half way through the second, then I would hope
> that BackupPC learnt to reuse the first, already transferred file
> instead of repeating the same thing as the day before and failing
> half way through the second file again… I think this is the core of
> the problem, somehow… it does not seem like partial backups get
> updated on resume…
>
It should work as you said, but if you never have enough time to 
transfer the second file, then you won't actually proceed. BackupPC will 
still check every file "before" the second file in case there have been 
changes there, but ultimately, if the second file is too big to transfer 
within the allotted time, then it can't succeed.

Hope that helps.

Regards,
Adam

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Postfix problems

2016-07-26 Thread Adam Goryachev



On 26/07/2016 20:30, Kacper Kowalski wrote:

Hi,

I have issues with email notifications. I checked mailing list, found 
some tips and advises, but those are not working.


The problem is, that if I try to send test mail from backuppc user 
like this:


root@backup:/etc# sudo -u backuppc 
/usr/share/backuppc/bin/BackupPC_sendEmail -u 
kacper.s.kowal...@gmail.com 


I can see in the mail.log file that my mail wasn't sent, because of 
timing out (sometimes "Network unreachable" is the reason):


Jul 26 12:12:20 backup postfix/smtp[3496]: 133C643A41: 
to=>, 
relay=none, delay=3347, delays=3256/0.02/90/0, dsn=4.4.1, 
status=deferred (connect to alt2.gmail-smtp-in.l.google.com 
[64.233.189.26]:25: Connection 
timed out)


I tried to change smtp port to 587 in master.cf  
file, but after restarting postfix service and sending test email, the 
25 port is used, not 587 (in log file the ip is followed by ":25").


What can I do to start notifications working?

You might find it easier to configure your system to send all mail to a 
smarthost instead of trying to deliver directly; you may or may not need 
user/password authentication for your ISP's mail server. BackupPC itself 
is working correctly, so you should look at the configuration of postfix, 
and perhaps discuss with the postfix list how best to resolve the issue. 
I would help further, but unfortunately I know nothing at all about 
postfix; I tend to use either nullmailer or exim, for which Debian 
provides some simple configuration scripts.
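
As a rough illustration of the smarthost approach (the hostname, port and 
credentials below are placeholders, and your ISP's requirements may differ):

  # /etc/postfix/main.cf
  relayhost = [mail.example-isp.com]:587
  smtp_tls_security_level = encrypt
  smtp_sasl_auth_enable = yes
  smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
  smtp_sasl_security_options = noanonymous

  # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
  [mail.example-isp.com]:587  username:password

  # finally reload postfix, e.g.: systemctl reload postfix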


Please consult your postfix docs, and/or the postfix mailing list for 
further help.


Regards,
Adam
--
What NetFlow Analyzer can do for you? Monitors network bandwidth and traffic
patterns at an interface-level. Reveals which users, apps, and protocols are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity planning
reports.http://sdm.link/zohodev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] localhost Can't cd to /root: Permission denied

2016-07-21 Thread Adam Goryachev



On 21/07/2016 21:43, Robert Wooden wrote:

Thanks for your suggestions.

Disabling apparmor made no difference.

"sestatus" returns "command not found" and the only other search 
engine references I can find about selinux is how to install it on 
16.04. And other references that it been dead since Karmic.


The command string I am running is from an old wiki page I printed years 
ago (I cannot find it in the BackupPC documentation anymore). I'm not 
sure what you mean when you asked "What is the current working 
directory when you ran the above test?" It was run via sudo by an 
administrative user on the BackupPC machine.


And, last question, I am not sure what you mean by "by using a locale 
other than C" . . . like 'C' the language? How would I " . . . change 
the OS default locale to C, and then try it again"? (OS is Ubuntu 16.04)


Please google "ubuntu locale" and read the first link from the Ubuntu 
community help wiki. While you are there, please google "top post" and 
read the first link (Wikipedia).


It is important that you learn to use all resources made available to 
you, including the ample documentation and information that many people 
have spent time producing. Google will help you find what you want out 
of that massive amount.
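
For what it's worth, a minimal sketch of checking and changing the system 
default locale on Ubuntu (the value shown is only an example; pick whatever 
suits your environment):

  $ locale                      # locale currently in effect for this shell
  $ cat /etc/default/locale     # the system-wide default on Ubuntu
  $ sudo update-locale LANG=C   # example: set the system default to the C locale
  (log out/in or reboot so BackupPC and its child processes pick up the change)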


Regards,
Adam

--
What NetFlow Analyzer can do for you? Monitors network bandwidth and traffic
patterns at an interface-level. Reveals which users, apps, and protocols are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity planning
reports.http://sdm.link/zohodev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] localhost Can't cd to /root: Permission denied

2016-07-19 Thread Adam Goryachev



On 20/07/2016 01:45, Brad Alexander wrote:

It is. farragut is my backuppc host.

Is there anything of interest in the logs, specifically the error log?

And do you have your sudoers set up properly?


On Tue, Jul 19, 2016 at 10:22 AM, Bob of Donelson Trophy wrote:


Thanks for the response, Brad.

I need to clarify that I can backup other hosts just NOT the
localhost and I am confused as to why I cannot.

I see, by your "backup command" that your using rsync. Is this the
command to backup your localhost?

I just tried ssh-ing into the locahost (first time I can remember
that experience) and it worked.

Re-ran my test command "sudo -u backuppc
/usr/share/backuppc/bin/BackupPC_dump -v -f localhost" and got the
same "Backup aborted (No files dumped for share /)" and "Can't cd
to /root: Permission denied".

Scratching my head! What is it?


Not sure if scratching your head is helping; you might consider sending 
the complete output from your debug commands to the list, so that, if 
nothing else, we can scratch our heads while looking at the same cryptic 
details.
You might also want to check /var/log to see what information is logged 
there.
Generally, you can simply configure the localhost backup identically to 
any other machine; the only difference is the name of the machine. 
Personally, I always use rsync over ssh.
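
As a sketch only, a BackupPC 3.x style per-host config for rsync over ssh 
might look like the following (the share names are just examples, and the 
backuppc user's ssh key needs to be in root's authorized_keys on the 
target, here localhost):

  # pc/localhost.pl
  $Conf{XferMethod}     = 'rsync';
  $Conf{RsyncShareName} = ['/etc', '/home', '/root'];
  $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';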


Finally, a copy of the relevant configuration would also be useful...

Regards,
Adam
--
What NetFlow Analyzer can do for you? Monitors network bandwidth and traffic
patterns at an interface-level. Reveals which users, apps, and protocols are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity planning
reports.http://sdm.link/zohodev2dev___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up BackupPC

2016-07-13 Thread Adam Goryachev
On 13/07/16 08:23, Falko Trojahn wrote:
> bpb21 wrote on 11.07.2016 at 20:28:
>> I've got BackupPC running on a CentOS server (working fine with Windows 10 
>> PCs, by the way!).  I'd like to back up, on occasion, the data BackupPC 
>> stores.
>>
>> For other servers, network shares, and CCTV footage, I use LTO-5 Ultrium 
>> tapes and just use tar on the CentOS server connected to the tape drive; 
>> nothing proprietary going on.
>>
>> But, where does BackupPC store it's data?  (I could probably figure that one 
>> out pretty easily.)  More of a question is, how would backing up the pooled 
>> data to an external source work out?  I have approx. 3.8 TB of data before 
>> pooling and compression, approx 1 TB of data after pooling and compression.
>>
>> So, I'd need to plan on 3.8 TB of external storage were I to back up 
>> BackupPC's data, correct?
>>
>> If I just used the regular tar commands to back up the data directory for 
>> BackupPC, would it be able to preserve the user permissions?  As in, could I 
>> still tell what came from what PC if I just copied the data directory?
>>
>> (I'm probably making this more complex than it is...)
>>
> you have several possibilities:
>
> - use the archivehost feature, so last backup of each host can be saved
> to tape or e.g. destination directory on e.g. external usb drive for
> offline storage
>
> - use rsync to sync the whole pool (usually /var/lib/backuppc) to other
> hard disk, with special parameters it's possible even over ssh
> we do our sync of about 2TB in one  and a half day over 1gb ethernet
> to remote location; if possible, do initial sync locally.
> backuppc service must be shutdown during the sync or at least not
> doing backups, for consistency.
>
> - instead of rsync, if backuppc pool is on btrfs, one could use btrfs'
> send-receive feature, or e.g. btrbk. didn't try that out, though.

You could also use dd, assuming that you have some method to ensure a 
consistent state throughout the tape backup period.

1) Store the backuppc volume on LVM, take a snapshot and stream the 
snapshot to tape
2) Unmount the backuppc volume, stream to tape, remount the volume
etc...

Although I'm not sure that is really a smart idea (I've never tried it), 
I just thought I'd suggest it; someone else with more experience of 
tapes might be able to comment.

PS, dd will consume the space on tape equal to the filesystem capacity
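
A rough sketch of option 1, assuming the pool sits on a hypothetical LVM 
volume vg0/backuppc and the tape drive is /dev/nst0 (adjust the names and 
snapshot size to suit your setup):

  $ lvcreate --snapshot --size 20G --name backuppc_snap /dev/vg0/backuppc
  $ dd if=/dev/vg0/backuppc_snap of=/dev/nst0 bs=1M   # stream the frozen image to tape
  $ lvremove -f /dev/vg0/backuppc_snap                # drop the snapshot once the write finishes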

Regards,
Adam
-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
What NetFlow Analyzer can do for you? Monitors network bandwidth and traffic
patterns at an interface-level. Reveals which users, apps, and protocols are 
consuming the most bandwidth. Provides multi-vendor support for NetFlow, 
J-Flow, sFlow and other flows. Make informed decisions using capacity planning
reports.http://sdm.link/zohodev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_trashClean (?) freezes system

2016-07-06 Thread Adam Goryachev
On 07/07/16 01:40, Witold Arndt wrote:
> Hei,
>
> On Mittwoch, 29. Juni 2016 20:56:55 CEST Holger Parplies wrote:
>
>> Witold Arndt wrote on 2016-06-27 08:53:40 +0200 [Re: [BackupPC-users]
> BackupPC_trashClean (?) freezes system]:
>>> On Sonntag, 26. Juni 2016 22:21:45 CEST Adam Goryachev wrote:
>>>> Can you login to the server after it has "hung"? I'm assuming yes since
>>>> you can try to kill the process.
>>>> I'd strongly suggest checking the various logs, starting with dmesg
>>>> Also, check the physical "host" to see what it thinks the status of the
>>>> VM is.
>>> Jep, I can login to the vm and everything besides backuppc is running and
>>> instantly responsive. Other processes which use the disk have no problem
>>> reading or writing and iotop shows no hangups.
>> are they using the same file system? Can you show us a 'df -T' and perhaps
>> 'df -i' of your BackupPC VM?
> Yes, everything is on /dev/vda1, storage is on /san:
>
> $ df -T
> Filesystem        Type       1K-blocks      Used  Available Use% Mounted on
> udev              devtmpfs     2013336         4    2013332   1% /dev
> tmpfs             tmpfs         404824       364     404460   1% /run
> /dev/vda1         ext4         4391408   2543372    1601920  62% /
> none              tmpfs              4         0          4   0% /sys/fs/cgroup
> none              tmpfs           5120         0       5120   0% /run/lock
> none              tmpfs        2024120         0    2024120   0% /run/shm
> none              tmpfs         102400         0     102400   0% /run/user
> san:/vol1/storage nfs4      2879636864 188612096 2690922368   7% /san
>
> $ df -i
> Filesystem           Inodes    IUsed     IFree IUse% Mounted on
> udev                 503334      411    502923    1% /dev
> tmpfs                506030      333    505697    1% /run
> /dev/vda1            287424   141971    145453   50% /
> none                 506030        2    506028    1% /sys/fs/cgroup
> none                 506030        5    506025    1% /run/lock
> none                 506030        1    506029    1% /run/shm
> none                 506030        2    506028    1% /run/user
> san:/vol1/storage 182853632 25729421 157124211   15% /san
>   
>>>> Almost every time I've tried to kill a process and seen it turn into a
>>>> zombie, it's because the process was sleeping / waiting for disk IO, and
>>>> it won't die until after the OS decides the disk IO has failed or
>>>> succeeded.>
>>> This is consistent with the 85% waiting usage, but there are no errors any
>>> log (dmesg, syslog, backuppc/log/*) whatsoever.
>>>
>>> I'm a bit lost since there were no configuration changes (besides removal
>>> and addition of backup clients) and this setup has been running since
>>> 04/2014.
>> I would suspect file system corruption. Is the trash directory empty when
>> the freeze occurs? In general, I'd suggest an 'fsck', but with a BackupPC
>> pool that might not work. You *could* try moving the trash directory out of
>> the way and recreating it with the same permissions. This would avoid
>> accessing a problematic file within it, supposing this is causing the
>> problems. Though, normally, I'd expect something in the system log files in
>> case of a file system panic. Well, 'df -T' might tell us more.
> fsck was done already and didn't show any errors. Since I didn't have any
> outages in the last days I'm not sure about the contents of trash/, but I will
> keep an eye on this.

I see; now that you have let us know it is an NFS mount point, I suspect 
you are seeing an NFS-related issue. To confirm, simply log in while the 
system is "hung" and run ls /san: if you see the directory listing, then 
NFS is fine; if it hangs, then NFS is the problem. Once you know that, you 
can focus on solving the NFS issue and forget about BackupPC.

I suspect you are hitting some performance issue which you didn't see 
before. Try tuning your NFS mount options, and/or check both the NFS 
server and NFS client for relevant statistics/logs/etc (hint: resource 
exhaustion is happening somewhere).

You might find it better to get your NFS server to deal with the trash 
folder locally instead of the BackupPC server: all you need is a 
cron-based script that regularly removes the trash contents (see the 
sketch below), plus a look at the BackupPC script to comment out or 
disable its own trash cleanup step.
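
A minimal sketch, assuming the pool's top directory on the NFS server is 
/vol1/storage (a hypothetical path; adjust to your layout), placed in 
something like /etc/cron.d/backuppc-trash on the NFS server:

  */15 * * * *  root  find /vol1/storage/trash -mindepth 1 -delete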

PS, actually, you will probably find it better to install local drives 
onto the backuppc server instead of using NFS!


Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--

Re: [BackupPC-users] Backup of virtual machines

2016-07-04 Thread Adam Goryachev



On 4/07/2016 22:01, Smith, Graham - Computing Technical Officer wrote:


I don't use xenserver but my generic advice as a strategy would be to 
backup the contents of the VMs rather than the raw virtual disks, 
effectively treating the guests as if they were physical systems. The raw 
virtual disk files will be large but also non-unique, so they will cost a 
lot more in storage space in your backups and potentially take a lot more 
time to backup. Backing up the files contained within the VMs should save 
space with BackupPC by taking full advantage of single-instance storage of 
files repeated in your pool, common to many users or host systems.

Invariably it is the data rather than the operating system that is usually 
most important to keep, so you may choose to only keep a subset of the 
unique data and configs etc. within each VM. However, it may be no harm to 
include a single backup of a "template" of your base operating system of 
the VM to which you may subsequently restore data files, particularly if 
your guest OSes are heavily customised and rebuilds are complex or time 
consuming to recreate from scratch.

That may aid a quicker recovery and gives you a hypervisor (and potentially 
OS) agnostic recovery route should you need it, compared to, say, taking VM 
system state snapshots and backing those up.



FYI, I use BackupPC to back up the contents of my VMs as above, except 
that I make use of my storage backend to assist (I use Xen instead of 
XenServer, so some of this may not apply to your environment).


I use the pre-backup script from backuppc to connect to my SAN and run a 
script which:

1) Takes a LVM snapshot of the VM (disk) to be backed up
2) Uses kpartx to scan for partitions
3) Uses mount to mount each partition on the SAN server at a VM specific 
mountpoint


Then, backuppc will backup the VM specific mountpoint of the SAN
Finally, I use the post-backup script from backuppc to connect to my SAN 
and run a cleanup script which reverses the above.
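
As a rough sketch of what such a pre-backup script might look like on the 
SAN (all names, sizes and paths here are hypothetical, and the exact 
/dev/mapper partition name kpartx creates may differ on your system, so 
check ls /dev/mapper):

  #!/bin/bash
  VM="$1"
  # 1) snapshot the VM's logical volume so we back up a consistent image
  lvcreate --snapshot --size 5G --name "${VM}_snap" "/dev/vg0/${VM}"
  # 2) create device-mapper entries for the partitions inside the snapshot
  kpartx -av "/dev/vg0/${VM}_snap"
  # 3) mount the first partition read-only at a VM-specific mountpoint
  mkdir -p "/mnt/backup/${VM}"
  mount -o ro "/dev/mapper/${VM}_snap1" "/mnt/backup/${VM}"
  # the post-backup script reverses these steps: umount, kpartx -dv, lvremove -f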


So far, this is working well for me.

PS, I also do remote off-site image backups, but I only have two copies 
of the remote images, yesterday and current, where one of those can be 
incomplete.


IMHO, you should do both: file-level backups of the contents, in case a 
user accidentally deletes a file or a crypto-locker virus strikes, as well 
as image-level backups in case of crypto-locker, malicious damage, or a 
serious system crash/disaster.


Emphasis placed on the crypto-locker scenarios; I have been bitten, and 
saved, more than once thanks to backups.


*From:*Elias Pereira [mailto:empbi...@gmail.com]
*Sent:* 04 July 2016 02:30
*To:* General list for user discussion questions and support 


*Subject:* [BackupPC-users] Backup of virtual machines

Hello guys,

What the best way to backup VMs from xenserver?

Thanks in advance!


The contents and any attachment of this e-mail are private and 
confidential.
They are intended only for the use of the intended addressee. If you 
are not the intended addressee, or the person responsible for 
delivering it to the intended addressee, you are notified that any 
copying, forwarding, publication, review or delivery of this e-mail or 
any attachments to anyone else or any other use of its contents is 
strictly prohibited. You are prohibited from reading any part of this 
e-mail or any attachments.  If you have received this e-mail in error, 
please notify the system manager.  Unauthorised disclosure or 
communication or other use of the contents of this e-mail or any part 
thereof may be prohibited by law and may constitute a criminal 
offence. Internet e-mails are not necessarily secure. The Institute 
does not accept responsibility for changes made to this message after 
it was sent.  Unless stated to the contrary, any opinions expressed in 
this message are personal to the author and may not be attributed to 
the Institute.




--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.

Re: [BackupPC-users] backuppc w/ ubuntu 16.10 server

2016-06-27 Thread Adam Goryachev
On 28/06/16 12:56, donkeydong69 wrote:
> Recently did a fresh install of ubuntu 16.10. Backuppc runs but completely 
> ignores directory exclusions in settings. Same settings worked before with 
> 14.10.
>
Where did you get Ubuntu 16.10 from? Maybe you should try using 16.04 
instead: the current stable release, rather than a future, unreleased version.

Other than that, you could also try providing a copy of your config 
files, and a copy of the logs.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_trashClean (?) freezes system

2016-06-26 Thread Adam Goryachev


On 24/06/2016 23:28, Witold Arndt wrote:
> * killing the process or its parents has no effect, it just goes to zombie
> mode
>
> So, where should I start to debug this?
>
Can you login to the server after it has "hung"? I'm assuming yes since 
you can try to kill the process.
I'd strongly suggest checking the various logs, starting with dmesg
Also, check the physical "host" to see what it thinks the status of the 
VM is.
Almost every time I've tried to kill a process and seen it turn into a 
zombie, it's because the process was sleeping / waiting for disk IO, and 
it won't die until after the OS decides the disk IO has failed or succeeded.

Regards,
Adam

--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] R: Re: Full period

2016-06-23 Thread Adam Goryachev


On 23/06/2016 20:48, absolutely_f...@libero.it wrote:
>> Da: "Adam Goryachev" <mailingli...@websitemanagers.com.au>
>>
>> Other than that, I don't think there is any way to automatically do more
>> incrementals, short of increasing the FullPeriod to a higher value
>> PS, I assume day6 is nothing because even the incremental took 2 days to
>> complete
> No, as I said, it tooks about 3 days to complete backup for my entire pool
> (+100 server)
>
No, you said it took 3 days to backup one server, not 100 servers.
We have no idea what you have done, but you will need to do some 
homework and decide:
1) Is it reasonable/OK to only get one backup every 3 days
2) Based on your systems, amount of data, performance of everything, is 
that a reasonable time to complete a backup
3) Have you configured your system in a reasonable (normal) method

Without access to a whole stack of information, the rest of us on the 
list can't comment. ie, configs, description of the network, description 
of the servers (backup server and 100 servers), etc

Regards,
Adam

--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full period

2016-06-21 Thread Adam Goryachev


On 21/06/2016 17:28, absolutely_f...@libero.it wrote:
> Hi,
>
> currently I have default value for:
>
> $Conf{FullPeriod} = '6.97';
> $Conf{IncrPeriod} = '0.97';
>
> I noticed that I am unable to complete backups for each server in one 
> day; probably it takes 3 days to finish.
> So, I have this situation (let's say this is backup list for ONE server)
>
> day1: full backup
> day2: nothing
> day3: nothing
> day4: nothing
> day5: incremental
> day6: nothing
> day7: another full backup
>
> Is there a way to configure BackupPC to run Full backup every X 
> backups (and NOT every X days)?
> I think that probably I'll save some bandwidth, am I wrong?
>
What method are you using to do the backup?
Why does the backup take so long? Is it bandwidth between the two 
machines or something else?

Potentially, using rsync or rsyncd will reduce the time to complete a 
backup, produce more accurate backups, and also reduce bandwidth 
consumption. Result is more backups, so better ability to restore 
if/when needed.

Other than that, I don't think there is any way to automatically do more 
incrementals between fulls, short of increasing FullPeriod to a higher value.
PS, I assume day 6 is nothing because even the incremental took 2 days to 
complete.
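
To make the suggestion concrete, a hedged sketch of a per-host config along 
those lines (the file name and values are only examples to illustrate the 
idea, not recommendations):

  # e.g. /etc/backuppc/pc/myserver.pl
  $Conf{XferMethod} = 'rsync';   # rsync over ssh usually moves far less data than tar or smb
  $Conf{FullPeriod} = 13.97;     # aim for a full roughly every two weeks...
  $Conf{IncrPeriod} = 0.97;      # ...with incrementals attempted daily in between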

Regards,
Adam


--
Attend Shape: An AT&T Tech Expo July 15-16. Meet us at AT&T Park in San
Francisco, CA to explore cutting-edge tech and listen to tech luminaries
present their vision of the future. This family event has something for
everyone, including kids. Get more information and register today.
http://sdm.link/attshape
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Does BackupPC need a bug tracker?

2016-05-23 Thread Adam Goryachev
I think there are a lot of things that could be better, but up until 
now, we only had the mailing list, and that wasn't managed well at all. 
I would suggest we focus as much as possible on using the tools that are 
available, because adding workload to manage additional tools won't help 
the development of BackupPC. If we find that we have people actually 
working on BackupPC, and that the number of issues is not being managed 
well, is making life difficult, or is hurting some other aspect of managing 
the project, then we should have that discussion at that point.

ie, certainly, solving problems before they become problems is a great 
idea, but at this point I feel (for what my opinion is worth) that we 
should focus on getting BackupPC 3.x into a well-maintained state, and on 
working towards a beta release of 4.0.

PS, also, just because the devel list is low volume doesn't mean that 
everything should go to the users list.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Mobile security can be enabling, not merely restricting. Employees who
bring their own devices (BYOD) to work are irked by the imposition of MDM
restrictions. Mobile Device Manager Plus allows you to control only the
apps on BYO-devices by containerizing them, leaving personal data untouched!
https://ad.doubleclick.net/ddm/clk/304595813;131938128;j
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copyright protection

2016-05-22 Thread Adam Goryachev
I'd strongly 
advise that we simply carry on. Allow code submissions from anyone; if 
something comes up on the list that makes some code contribution 
questionable, then ask the contributor to email the list/issue 
tracker/etc to confirm that they have their employer's permission.

The rest is simply a waste of time. Let's move on and start discussing 
the issues, reviewing and committing the patches already submitted.

IANAL either... in fact, probably none of us are, and I doubt any of us 
are going to pay one for actual advice.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au

--
Mobile security can be enabling, not merely restricting. Employees who
bring their own devices (BYOD) to work are irked by the imposition of MDM
restrictions. Mobile Device Manager Plus allows you to control only the
apps on BYO-devices by containerizing them, leaving personal data untouched!
https://ad.doubleclick.net/ddm/clk/304595813;131938128;j
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

