Re: [BackupPC-users] backuppc to server on vmware

2020-08-07 Thread lu lu
hi everyone, I solved the problem and I'm sharing it with you; it was much
simpler than it seemed.
In practice, on Windows Server 2012/2016/2019 the SMB service in question is
disabled (not installed) by default. It was enough to install it and restart
the server, and now everything seems to work perfectly.
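
For anyone hitting the same symptom: the feature involved is presumably the
SMB 1.0/CIFS compatibility support (the only dialect the old Debian 7
smbclient speaks). On a Server edition, something along these lines from an
elevated PowerShell should add it; the feature name is an assumption on my
part, and a reboot is needed afterwards:

  Install-WindowsFeature FS-SMB1   # add SMB 1.0/CIFS file sharing support
  Restart-Computer                 # the change only takes effect after a restart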

thank you all

On Fri, 7 Aug 2020 at 10:27, marki 
wrote:

> Hello,
>
> What is the error message?
> Please ask good questions. Please don't ask question of the type "it
> doesn't work" and make assumptions.
>
> It doesn't matter if the host is physical or virtual. But clearly your
> infrastructure has changed. Have you asked your admins if they changed
> anything else while virtualizing?
>
> Marki
>
> On August 7, 2020 9:48:00 AM GMT+02:00, lu lu  wrote:
>>
>> hello to the group
>> i have a backuppc installed on debian 7.
>> I backup 3 physical servers with windows server 2008, with backuppc I
>> backup every night of the folder on the disk called DATA with SMB system,
>> and it has been working like this for some time and it works very well.
>> but two of the three servers have been replaced with 2019 servers but on
>> VMWARE and no longer physical, and from this moment I can no longer make
>> backups of the DATA folder, why? someone can tell me if by chance with
>> Backuppc to backup a server installed on vmware some particular
>> configuration must be done? doesn't work n SMB?
>>
>> ps: I don't want to back up the virtual machine, I want to back up some
>> folders in the virtual machine HDD every night
>>
>> thanks to anyone who answers me
>>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] backuppc to server on vmware

2020-08-07 Thread lu lu
The problem is that it does not give any error message. The infrastructure has
not changed; I am the network administrator, and I reconfigured the host
with the parameters of the new server.
Sorry, you're right, I didn't post the error log:

Running: /usr/bin/smbclient \\poseidone\E\$ -U SEAT\administrator -E -d 1 -c tarmode\ full -Tc -
full backup started for share E$
Xfer PIDs are now 25662,25661
protocol negotiation failed: NT_STATUS_CONNECTION_RESET
protocol negotiation failed: NT_STATUS_CONNECTION_RESET
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0
filesTotal, 0 sizeTotal
Got fatal error during xfer (No files dumped for share E$)
Backup aborted (No files dumped for share E$)
Not saving this as a partial backup since it has fewer files than the prior
one (got 0 and 0 files versus 0)
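
The same negotiation failure can be reproduced outside of BackupPC by running
smbclient by hand from the backup server; the host, share and user below are
simply the ones from the log above, and it will prompt for the password:

  /usr/bin/smbclient '//poseidone/E$' -U 'SEAT\administrator' -d 3 -c ls

If the Debian 7 smbclient can only offer SMB1 and the new Windows Server 2019
guest no longer accepts it, the same NT_STATUS_CONNECTION_RESET shows up here
too.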

On Fri, 7 Aug 2020 at 10:27, marki 
wrote:

> Hello,
>
> What is the error message?
> Please ask good questions. Please don't ask question of the type "it
> doesn't work" and make assumptions.
>
> It doesn't matter if the host is physical or virtual. But clearly your
> infrastructure has changed. Have you asked your admins if they changed
> anything else while virtualizing?
>
> Marki
>
> On August 7, 2020 9:48:00 AM GMT+02:00, lu lu  wrote:
>>
>> hello to the group
>> i have a backuppc installed on debian 7.
>> I backup 3 physical servers with windows server 2008, with backuppc I
>> backup every night of the folder on the disk called DATA with SMB system,
>> and it has been working like this for some time and it works very well.
>> but two of the three servers have been replaced with 2019 servers but on
>> VMWARE and no longer physical, and from this moment I can no longer make
>> backups of the DATA folder, why? someone can tell me if by chance with
>> Backuppc to backup a server installed on vmware some particular
>> configuration must be done? doesn't work n SMB?
>>
>> ps: I don't want to back up the virtual machine, I want to back up some
>> folders in the virtual machine HDD every night
>>
>> thanks to anyone who answers me
>>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] backuppc to server on vmware

2020-08-07 Thread marki
Hello,

What is the error message?
Please ask good questions. Please don't ask questions of the type "it doesn't 
work", and don't just make assumptions.

It doesn't matter if the host is physical or virtual. But clearly your 
infrastructure has changed. Have you asked your admins if they changed anything 
else while virtualizing?

Marki

On August 7, 2020 9:48:00 AM GMT+02:00, lu lu  wrote:
>hello to the group
>i have a backuppc installed on debian 7.
>I backup 3 physical servers with windows server 2008, with backuppc I
>backup every night of the folder on the disk called DATA with SMB
>system,
>and it has been working like this for some time and it works very well.
>but two of the three servers have been replaced with 2019 servers but
>on
>VMWARE and no longer physical, and from this moment I can no longer
>make
>backups of the DATA folder, why? someone can tell me if by chance with
>Backuppc to backup a server installed on vmware some particular
>configuration must be done? doesn't work n SMB?
>
>ps: I don't want to back up the virtual machine, I want to back up some
>folders in the virtual machine HDD every night
>
>thanks to anyone who answers me
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] backuppc to server on vmware

2020-08-07 Thread lu lu
Hello to the group,
I have BackupPC installed on Debian 7.
I back up 3 physical servers running Windows Server 2008; every night BackupPC
backs up a folder on the disk called DATA over SMB, and it has been working
like this for some time and works very well.
But two of the three servers have been replaced with Windows Server 2019
servers running on VMware instead of physical hardware, and since then I can
no longer back up the DATA folder. Why? Can someone tell me whether, to back
up a server installed on VMware with BackupPC, some particular configuration
must be done? Doesn't it work over SMB?

PS: I don't want to back up the virtual machine itself, I want to back up some
folders on the virtual machine's disk every night.

Thanks to anyone who answers.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc Errors

2020-08-04 Thread Michael Stowe

On 2020-08-04 03:55, s.chimere--- via BackupPC-users wrote:





Hello, Good day 

Hope everyone is doing okay amid the pandemic, 

I have been using backuppc since last year and it has been working okay, but recently pings from the backuppc started failing 

I fixed that using this https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6t4hto13icElebmTufqZ1T1YnGHZ-2FjD3nLfmUtHCO97XgoUxTh8s-2FfxzfziVTV04vLrzkdnfHsSUREcSKK59i-2BGkE2k246AzNnwCwWIvb8hWpt7J_ukiVZyKkp5Cjvx76jsH50UVUlXkseXbKCRkPqSLeHuHGH3U7CETSxfS6wuOOBk00ZHxb76gvYWlcaRBYHBhk8Mz2U5ddB1JP1P0Z4ZOguEfTHjysl-2F3g-2F-2FJVHTQQE8-2F5Jrdos3X0c4-2F5a9Jgsf9uk83-2BiHAbhbZxGXGWnN49JBzFg5k9NoJ4nNQh3GXXpxzflVnPzTE8v9tx4BH-2BeWtEz-2BSQISZlm2cn7BIagtJz04SCQ10-2FvwO3GfCStyiKKHZV [1] 

But now I get this error instead.  

Connection to x.x.x.x failed (Error NT_STATUS_IO_TIMEOUT) 

Connection to x.x.x.x failed (Error NT_STATUS_IO_TIMEOUT) 

tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal 

Got fatal error during xfer (No files dumped for share Virtual hard disks) 

Backup aborted (No files dumped for share Virtual hard disks) 

Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 14) 

Does anyone have an idea how I can quickly resolve this, please? 

Thanks 

Symbol Chimere 


Team Lead Infrastructure


Replacing the ping with an echo is pretty terrible advice.  While
Windows blocks pings by default, it's easy enough to turn on, and
there's rarely a good reason not to. 
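
For what it's worth, allowing echo requests back in is a one-liner from an
elevated prompt on the client (the rule name is arbitrary):

  netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow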


Of course, the most likely explanation is that the system you're trying
to back up is _unreachable_, which is impossible to give a blanket
solution for, but fix that first.  Chances are, the IP changed, or the
same thing that made the system stop responding to pings is making
it no longer respond to CIFS traffic. 


Links:
--
[1]
https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6t4hto13icElebmTufqZ1T1YnGHZ-2FjD3nLfmUtHCO97XgoUxTh8s-2FfxzfziVTV04vLrzkdnfHsSUREcSKK59i-2BGkE2k246AzNnwCwWIvb8hW9QYB_ukiVZyKkp5Cjvx76jsH50UVUlXkseXbKCRkPqSLeHuHGH3U7CETSxfS6wuOOBk00ZHxb76gvYWlcaRBYHBhk8FGWGdJ3lrtCD5wscVs9lowlub8Ag1UcY64kBw1zO2ZES1DiWsGgiwm8FEm8Z2tE2CL4PAmCcL92cP5DGKtFLT-2Bx3UHOdfA-2FcALw8aaPXW892MdLmneKCHx2OOLDqb7-2BF0eAxdCHzCkJPnBC-2Fkz4dnLTkqEBKf5njawEMHkjJ6uM___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Windows 10 Subsystem For Linux

2020-07-30 Thread r7

https://github.com/backuppc/backuppc/issues/259


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC Service Status "Count" Column

2020-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 23 Jul 2020, Akibu Flash wrote:


In the CGI Interface on the BackupPC Service Status Page there is a
Column labelled "Count".  What exactly is that determining? Is it
the number of files that have been backed up from that share?


It's the count of files transferred.

Look for $jobStr in .../lib/BackupPC/CGI/GeneralInfo.pm for more.
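
A quick grep will show it; the install path below is only a guess, since it
depends on how your distribution packages BackupPC:

  grep -n 'jobStr' /usr/share/backuppc/lib/BackupPC/CGI/GeneralInfo.pm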

I don't normally see a row of data below that line, because it's for
currently running jobs, and I'm not normally looking at the BackupPC
GUI at two o'clock in the morning.


The reason I ask is because mine has been stuck on 58721 for quite
some time.  What could be causing this and how can I determine what
is happening? There is nothing in the log file currently that I can
see which could be causing a problem.


What logs are you looking at, and what do you see in them which makes
you think everything is normal?  I'd expect it to be obvious from the
logs what's going on.

It's not a silly browser page-caching thing, is it?

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Fw: Windows 10 Subsystem For Linux

2020-07-22 Thread Stan Larson
Thanks, these changes fixed it.  I should have RTFM.

$Conf{RsyncSshArgs} = ['-e', '$sshPath -p  -l root'];# Fixed 
alternative port  for ssh

$Conf{RsyncShareName} = ['/mnt/c/Users/'];   # Fixed issue with hopping across 
/mnt/c filesystem mount point on Win 10 Linux Subsystem


Stan Larson  |  Systems Administrator
Freedom  |  www.freedomsales.com
11225 Challenger Avenue Odessa, FL 33556
PH: 813-855-2671 x206  |
Direct Line: 1-727-835-1157





From: backu...@kosowsky.org 
Sent: Tuesday, July 21, 2020 4:48 PM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] Windows 10 Subsystem For Linux

Stan Larson wrote at about 12:57:40 -0400 on Tuesday, July 21, 2020:
 > We've been successfully using BackupPC 3.3 and the Windows 10 WSL feature to 
 > access Windows 10 PCs using rsync (without the cgwin plugin).  We've been 
 > using this method on our production BackupPC server to back up about 30 
 > Win10 Pro clients.  We just back up the C:/Users folder, which picks up the 
 > User's Desktop, Documents, AppData folders, etc.  This method has proven to 
 > be very reliable using BackupPC 3.3.
 >
 > We are testing with BackupPC 4.4 so that we can overcome the filesystem 
 > issues that BackupPC 3.3  hard links present.
 >
 > We are running into a couple of problems that seem to be related to 
 > rsync_bpc.
 >
 > 1.  On our BackupPC 3.3 server, we are able to use an alternate ssh port for 
 > our Windows 10 clients.  We actually run ssh on port  on the clients 
 > with no problem.  With BackupPC 4.4 (rsync_bpc), we get errors when trying 
 > to run on alternate ports.  The errors seem to indicate that even though we 
 > are specifying a different port, rsync_bpc is ignoring the alternate port 
 > and trying to use port 22.  Here's the config declaration, which works on 
 > 3.3 but not 4.4... $Conf{RsyncClientCmd} = '$sshPath -p  -q -x -l root 
 > $host $rsyncPath $argList+';

RsyncClientCmd is not a configurable variable for 4.x so not
surprising that you are having a problem...

You probably want to use: RsyncSshArgs.
For example:
$Conf{RsyncSshArgs} = ['-e', '$sshPath -p  -q -x -l root'];
Though not sure you need '-q -x'

 >
 > 2.  When we use port 22 on the client instead of port  (see above), we 
 > get a successful backup, but we have a different problem.  On the Win 10 WSL 
 > client, the C:\ drive is a separate filesystem presented as /mnt/c.  On our 
 > BackupPC 3.3 server, we are able to cross this mount point successfully with 
 > no special configurations, using the config declaration...   
 > "$Conf{BackupFilesOnly} = ['/mnt/c/'];".  On our BackupPC 4.4 server, the 
 > backup will run successfully, but no files below /mnt/c are included.  It's 
 > as if BackupPC is refusing to cross from the / filesystem to the /mnt/c 
 > filesystem.
 >

Suggest you test manually by running from the command line:

  sudo -u backuppc rsync -navxH -p  -l root :/mnt/c

 > For the new server, we are using CentOS 8 and the default BackupPC yum 
 > packages.
 >
 > Any thoughts on either problem would be much appreciated.
 >
 > --
 > [Freedom] 
 > Stan Larson  |  IT Manager
 > Freedom  |  
 > www.freedomsales.com>
 > 11225 Challenger Avenue Odessa, FL 33556
 > PH: 813-855-2671 x206  |
 > Direct Line: 1-727-835-1157
 >
 >
 > All commodities purchased from Freedom Sales are to be handled in accordance 
 > with US law including but not limited to the Export Administration 
 > Regulations, International Traffic in Arms Regulations, US Department of 
 > State, US Department of Homeland Security, US Department of Commerce, and US 
 > Office of Foreign Assets Control. Diversion Contrary to US law is prohibited.
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
All commodities purchased from Freedom Sales are to be handled in accordance 
with US law including but not limited to the Export Administration Regulations, 
International Traffic in Arms Regulations, US Department of State, US 
Department of Homeland Security, US Department of Commerce, and US Office of 
Foreign Assets Control. Diversion Contrary to US law is prohibited.

[BackupPC-users] BackupPC Service Status "Count" Column

2020-07-22 Thread Akibu Flash
All,

In the CGI Interface on the BackupPC Service Status Page there is a Column 
labelled "Count".  What exactly is that determining? Is it the number of files 
that have been backed up from that share?  The reason I ask is because mine has 
been stuck on 58721 for quite some time.  What could be causing this and how 
can I determine what is happening? There is nothing in the log file currently 
that I can see which could be causing a problem.  I have included a snippet 
below.

For context, I am running BackupPC version 4 on an Arch Linux server and am 
backing up a Windows 10 machine via rsyncd.

On a related note, is there a way to see what specifically is being copied at 
any one time... in other words, is there a way to pipe out to the screen what 
BackupPC is copying at the moment?  And if so, can someone walk me through how 
to do that?

Thanks in advance,

Akibu

bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0, KeepOldAttribFiles = 0
2020-07-21 22:12:00 Renaming /var/lib/backuppc/pc/mark-desktop/XferLOG.0.z -> 
/var/lib/backuppc/pc/mark-desktop/XferLOG.0.z.tmp
2020-07-21 22:15:45 full backup started for directory Sabrent
2020-07-21 22:36:26 full backup started for directory HGST_HDD_6T
2020-07-22 01:35:29 full backup started for directory HGST_HDD_8T1
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-22 Thread Kris Lou via BackupPC-users
All,

Thanks for the suggestions and links.  There's a lot of interesting reading
to be done.  But as noted, checksum matching and storage latency are
probably prohibitive.

I hope to have access to a colo with gigabit bandwidth in the near future.
Maybe I'll spin up an instance just to see how it goes -- especially since
BPC4 should be less dependent upon bare-metal installations.

Thanks,
-Kris
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Sign out

2020-07-21 Thread Craig Barratt via BackupPC-users
I appreciate the feedback.  Please use the "List
" link in the
footer of any email to unsubscribe.

Mailing to the list mails everyone on the list.  It doesn't help you
unsubscribe.

Craig

On Tue, Jul 21, 2020 at 2:22 PM Ants Mark  wrote:

> ThNK YOU 4 providing me good info. But i don't wanna received any mor
> mails. Congrats for having such useful program
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Sign out

2020-07-21 Thread Ants Mark
Thank you for providing me with good info, but I don't want to receive any
more mails. Congrats on having such a useful program.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-21 Thread Marcelo Ricardo Leitner
Hi,

Maybe a solution composed by backuppc + rclone is the way forward.
I never used rclone myself, but AFAICT it handles all the issues
you raised, plus more:

From their page: "Virtual backends wrap local and cloud file systems
to apply encryption, caching, chunking and joining."

https://rclone.org/

Been willing to try it, but didn't have the chance so far.
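
Just as an untested sketch of what I mean (the remote name is made up): let
the backups finish locally, then push the compressed pool to object storage
out of band:

  rclone sync /var/lib/backuppc/cpool s3remote:backuppc-cpool --transfers 8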

Best regards,
Marcelo

On Tue, Jul 21, 2020 at 10:42:30AM +0300, Johan Ehnberg wrote:
> Hi Kris,
> 
> Indeed the object storage transformation has not been a hot topic for some
> time. I explored this some years ago and did some proof of concept testing.
> It requires a fairly different architecture:
> 
> https://molnix.com/proposal-new-open-source-backup-solution/
> 
> Essentially, the object storage to fuse approach is not workable as-is from
> a performance standpoint, at least in almost any production scenario I can
> imagine.
> 
> The compute node needs to independently do at least the checksum matching,
> storage buffering or tiering, and object storage rate limiting.
> 
> With the fuse approach you can get pretty close to that by putting only pool
> or cpool on fuse, and keeping pc locally, added with some configuration
> tweaks. However, the object storage needs to be asynchronous to the backup
> for it to really make sense.
> 
> I believe the efforts to create FS-to-object layers to transform existing
> software to cloud concepts without actual code changes quieted down exactly
> because of these types of issues.
> 
> Best regards,
> 
> Johan
> 
> 
> On 21/07/2020 02.37, Kris Lou via BackupPC-users wrote:
> > This hasn't been addressed for a while, and I didn't find anything in
> > recent archives.
> > 
> > Anybody have any experience or hypothetical issues with writing the BPC4
> > Pool over s3fs-fuse to S3 or something similar?  Pros, Cons?
> > 
> > Thanks,
> > -Kris
> > 
> > 
> > 
> > Kris Lou
> > k...@themusiclink.net 
> > 
> > 
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> -- 
> Signature
> *Johan Ehnberg*
> 
> Founder, CEO
> 
> Molnix Oy
> 
> 
> jo...@molnix.com 
> 
> +358 50 320 96 88
> 
> molnix.com 
> 
> 
> /The contents of this e-mail and its attachments are for the use of the
> intended recipient only, and are confidential and may contain legally
> privileged information. If you are not the intended recipient or have
> otherwise received the e-mail in error, please notify the sender by replying
> to this e-mail immediately and then delete it immediately from your system.
> Any dissemination, distribution, copying or use of this communication
> without prior and explicit permission of the sender is strictly prohibited./
> 
> /*Please consider the environment - do not print this e-mail unless you
> really need to.*/
> 


> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Windows 10 Subsystem For Linux

2020-07-21 Thread backuppc
Stan Larson wrote at about 12:57:40 -0400 on Tuesday, July 21, 2020:
 > We've been successfully using BackupPC 3.3 and the Windows 10 WSL feature to 
 > access Windows 10 PCs using rsync (without the cgwin plugin).  We've been 
 > using this method on our production BackupPC server to back up about 30 
 > Win10 Pro clients.  We just back up the C:/Users folder, which picks up the 
 > User's Desktop, Documents, AppData folders, etc.  This method has proven to 
 > be very reliable using BackupPC 3.3.
 > 
 > We are testing with BackupPC 4.4 so that we can overcome the filesystem 
 > issues that BackupPC 3.3  hard links present.
 > 
 > We are running into a couple of problems that seem to be related to 
 > rsync_bpc.
 > 
 > 1.  On our BackupPC 3.3 server, we are able to use an alternate ssh port for 
 > our Windows 10 clients.  We actually run ssh on port  on the clients 
 > with no problem.  With BackupPC 4.4 (rsync_bpc), we get errors when trying 
 > to run on alternate ports.  The errors seem to indicate that even though we 
 > are specifying a different port, rsync_bpc is ignoring the alternate port 
 > and trying to use port 22.  Here's the config declaration, which works on 
 > 3.3 but not 4.4... $Conf{RsyncClientCmd} = '$sshPath -p  -q -x -l root 
 > $host $rsyncPath $argList+';

RsyncClientCmd is not a configurable variable for 4.x so not
surprising that you are having a problem...

You probably want to use: RsyncSshArgs.
For example:
$Conf{RsyncSshArgs} = ['-e', '$sshPath -p  -q -x -l root'];
Though not sure you need '-q -x'

 > 
 > 2.  When we use port 22 on the client instead of port  (see above), we 
 > get a successful backup, but we have a different problem.  On the Win 10 WSL 
 > client, the C:\ drive is a separate filesystem presented as /mnt/c.  On our 
 > BackupPC 3.3 server, we are able to cross this mount point successfully with 
 > no special configurations, using the config declaration...   
 > "$Conf{BackupFilesOnly} = ['/mnt/c/'];".  On our BackupPC 4.4 server, the 
 > backup will run successfully, but no files below /mnt/c are included.  It's 
 > as if BackupPC is refusing to cross from the / filesystem to the /mnt/c 
 > filesystem.
 > 

Suggest you test manually by running from the command line:

  sudo -u backuppc rsync -navxH -p  -l root :/mnt/c
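
i.e. something along these lines, with the ssh port and client hostname
filled in for your site (both below are placeholders); with only a source
argument, rsync just lists what it would transfer, so you can see whether
anything under /mnt/c shows up:

  sudo -u backuppc rsync -navxH -e 'ssh -p 2222 -l root' winclient:/mnt/c/ | head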

 > For the new server, we are using CentOS 8 and the default BackupPC yum 
 > packages.
 > 
 > Any thoughts on either problem would be much appreciated.
 > 
 > --
 > [Freedom] 
 > Stan Larson  |  IT Manager
 > Freedom  |  www.freedomsales.com
 > 11225 Challenger Avenue Odessa, FL 33556
 > PH: 813-855-2671 x206  |
 > Direct Line: 1-727-835-1157
 > 
 > 
 > All commodities purchased from Freedom Sales are to be handled in accordance 
 > with US law including but not limited to the Export Administration 
 > Regulations, International Traffic in Arms Regulations, US Department of 
 > State, US Department of Homeland Security, US Department of Commerce, and US 
 > Office of Foreign Assets Control. Diversion Contrary to US law is prohibited.
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Windows 10 Subsystem For Linux

2020-07-21 Thread Stan Larson

We've been successfully using BackupPC 3.3 and the Windows 10 WSL feature to 
access Windows 10 PCs using rsync (without the cgwin plugin).  We've been using 
this method on our production BackupPC server to back up about 30 Win10 Pro 
clients.  We just back up the C:/Users folder, which picks up the User's 
Desktop, Documents, AppData folders, etc.  This method has proven to be very 
reliable using BackupPC 3.3.

We are testing with BackupPC 4.4 so that we can overcome the filesystem issues 
that BackupPC 3.3  hard links present.

We are running into a couple of problems that seem to be related to rsync_bpc.

1.  On our BackupPC 3.3 server, we are able to use an alternate ssh port for 
our Windows 10 clients.  We actually run ssh on port  on the clients with 
no problem.  With BackupPC 4.4 (rsync_bpc), we get errors when trying to run on 
alternate ports.  The errors seem to indicate that even though we are 
specifying a different port, rsync_bpc is ignoring the alternate port and 
trying to use port 22.  Here's the config declaration, which works on 3.3 but 
not 4.4... $Conf{RsyncClientCmd} = '$sshPath -p  -q -x -l root $host 
$rsyncPath $argList+';

2.  When we use port 22 on the client instead of port  (see above), we get a 
successful backup, but we have a different problem.  On the Win 10 WSL client, the C:\ 
drive is a separate filesystem presented as /mnt/c.  On our BackupPC 3.3 server, we are 
able to cross this mount point successfully with no special configurations, using the 
config declaration...   "$Conf{BackupFilesOnly} = ['/mnt/c/'];".  On our 
BackupPC 4.4 server, the backup will run successfully, but no files below /mnt/c are 
included.  It's as if BackupPC is refusing to cross from the / filesystem to the /mnt/c 
filesystem.

For the new server, we are using CentOS 8 and the default BackupPC yum packages.

Any thoughts on either problem would be much appreciated.

--
[Freedom] 
Stan Larson  |  IT Manager
Freedom  |  www.freedomsales.com
11225 Challenger Avenue Odessa, FL 33556
PH: 813-855-2671 x206  |
Direct Line: 1-727-835-1157


All commodities purchased from Freedom Sales are to be handled in accordance 
with US law including but not limited to the Export Administration Regulations, 
International Traffic in Arms Regulations, US Department of State, US 
Department of Homeland Security, US Department of Commerce, and US Office of 
Foreign Assets Control. Diversion Contrary to US law is prohibited.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-21 Thread Johan Ehnberg

Hi Kris,

Indeed the object storage transformation has not been a hot topic for 
some time. I explored this some years ago and did some proof of concept 
testing. It requires a fairly different architecture:


https://molnix.com/proposal-new-open-source-backup-solution/

Essentially, the object storage to fuse approach is not workable as-is 
from a performance standpoint, at least in almost any production 
scenario I can imagine.


The compute node needs to independently do at least the checksum 
matching, storage buffering or tiering, and object storage rate limiting.


With the fuse approach you can get pretty close to that by putting only 
pool or cpool on fuse, and keeping pc locally, added with some 
configuration tweaks. However, the object storage needs to be 
asynchronous to the backup for it to really make sense.
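
For concreteness, the split could look roughly like this (bucket name and
cache directory are placeholders); only the pool goes to object storage,
while pc/ stays on local disk:

  s3fs backuppc-cpool /var/lib/backuppc/cpool -o use_cache=/var/cache/s3fs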


I believe the efforts to create FS-to-object layers to transform 
existing software to cloud concepts without actual code changes quieted 
down exactly because of these types of issues.


Best regards,

Johan


On 21/07/2020 02.37, Kris Lou via BackupPC-users wrote:
This hasn't been addressed for a while, and I didn't find anything in 
recent archives.


Anybody have any experience or hypothetical issues with writing the 
BPC4 Pool over s3fs-fuse to S3 or something similar?  Pros, Cons?


Thanks,
-Kris



Kris Lou
k...@themusiclink.net 


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

--
*Johan Ehnberg*

Founder, CEO

Molnix Oy


jo...@molnix.com 

+358 50 320 96 88

molnix.com 


/The contents of this e-mail and its attachments are for the use of the 
intended recipient only, and are confidential and may contain legally 
privileged information. If you are not the intended recipient or have 
otherwise received the e-mail in error, please notify the sender by 
replying to this e-mail immediately and then delete it immediately from 
your system. Any dissemination, distribution, copying or use of this 
communication without prior and explicit permission of the sender is 
strictly prohibited./


/*Please consider the environment - do not print this e-mail unless you 
really need to.*/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-20 Thread Michael Huntley
I found S3 slower than my dead grandma trying to cross the street.

No offense grandma D!

Cheers,

Michael

> On Jul 20, 2020, at 5:37 PM, Kris Lou via BackupPC-users 
>  wrote:
> 
> 
> This hasn't been addressed for a while, and I didn't find anything in recent 
> archives.
> 
> Anybody have any experience or hypothetical issues with writing the BPC4 Pool 
> over s3fs-fuse to S3 or something similar?  Pros, Cons?
> 
> Thanks,
> -Kris
> 
> 
> 
> Kris Lou
> k...@themusiclink.net
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-20 Thread Kris Lou via BackupPC-users
This hasn't been addressed for a while, and I didn't find anything in
recent archives.

Anybody have any experience or hypothetical issues with writing the BPC4
Pool over s3fs-fuse to S3 or something similar?  Pros, Cons?

Thanks,
-Kris



Kris Lou
k...@themusiclink.net
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup installation of BackupPC

2020-07-20 Thread daveinredm...@excite.com
All's well. It turned out that I had forgotten that, since those old backups 
were made, I had renamed the hosts from their original name/function (i.e. 
DevBox-Code) to the generic server name (S-Code). Once I created the proper 
host name everything worked just fine.

Thanx for the prompt reply!



-Original Message-
From: "Craig Barratt via BackupPC-users" [backuppc-users@lists.sourceforge.net]
Date: 07/18/2020 16:02
To: "General list for user discussion,
 questions and support" 
CC: "Craig Barratt" 
Subject: Re: [BackupPC-users] Backup installation of BackupPC

Note: Original message sent as attachment
--- Begin Message ---
From the web interface, can you see the old hosts information?

What happens when you select one of the hosts?

The most likely issue is that $Conf{TopDir} in the config file isn't
pointing to the top-level store directory on the old disk.

If you need the file urgently, rather than just testing the 4.3.2 standby
installation, you can do that from the command-line just by navigating to
the relevant host and directory.  If you know the directory where the file
is stored, but not the backup it changed in, just use a shell wildcard for
the backup number.  In 3.x the file paths are mangled (each entry starts
with "f"), but every full backup's directory tree will have all the files.

Craig

On Sat, Jul 18, 2020 at 2:16 PM daveinredm...@excite.com <
daveinredm...@excite.com> wrote:

> I am currently running BackupPC 4.3.2. I have created a second
> installation of BackupPC on a spare machine to have the capability of using
> my backups if the server hosting the main installation dies. I also have
> several older backup disks from several years back that was made on
> BackupPC 3.x. I chown'd an old disk to ensure proper rights and copied the
> current hosts file to the test server but when I run BackupPC on the test
> machine it doesn't see any of the backups. I am trying to find a fairly old
> file that was damaged at an unknown time and used the test server to verify
> functionality. What am I missing? I've Googled "move backuppc to new
> server" but none of the responses seems relevant.
>
> TIA,
> Dave
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
--- End Message ---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup installation of BackupPC

2020-07-18 Thread Craig Barratt via BackupPC-users
From the web interface, can you see the old hosts information?

What happens when you select one of the hosts?

The most likely issue is that $Conf{TopDir} in the config file isn't
pointing to the top-level store directory on the old disk.

If you need the file urgently, rather than just testing the 4.3.2 standby
installation, you can do that from the command-line just by navigating to
the relevant host and directory.  If you know the directory where the file
is stored, but not the backup it changed in, just use a shell wildcard for
the backup number.  In 3.x the file paths are mangled (each entry starts
with "f"), but every full backup's directory tree will have all the files.

Craig

On Sat, Jul 18, 2020 at 2:16 PM daveinredm...@excite.com <
daveinredm...@excite.com> wrote:

> I am currently running BackupPC 4.3.2. I have created a second
> installation of BackupPC on a spare machine to have the capability of using
> my backups if the server hosting the main installation dies. I also have
> several older backup disks from several years back that was made on
> BackupPC 3.x. I chown'd an old disk to ensure proper rights and copied the
> current hosts file to the test server but when I run BackupPC on the test
> machine it doesn't see any of the backups. I am trying to find a fairly old
> file that was damaged at an unknown time and used the test server to verify
> functionality. What am I missing? I've Googled "move backuppc to new
> server" but none of the responses seems relevant.
>
> TIA,
> Dave
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Backup installation of BackupPC

2020-07-18 Thread daveinredm...@excite.com
I am currently running BackupPC 4.3.2. I have created a second installation of 
BackupPC on a spare machine to have the capability of using my backups if the 
server hosting the main installation dies. I also have several older backup 
disks from several years back that was made on BackupPC 3.x. I chown'd an old 
disk to ensure proper rights and copied the current hosts file to the test 
server but when I run BackupPC on the test machine it doesn't see any of the 
backups. I am trying to find a fairly old file that was damaged at an unknown 
time and used the test server to verify functionality. What am I missing? I've 
Googled "move backuppc to new server" but none of the responses seems relevant.

TIA,
Dave


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Annoyance/Bug? Backup number increments on failed backups...

2020-07-09 Thread
I noticed that when backups repeatedly fail, leaving a partial backup,
the backup number gets sequentially incremented each time.

This leaves potentially large gaps in the backup number sequence,
particularly if the same failure occurs on each hourly backup attempt.

Backups can fail for minor reasons such as a slight misconfiguration
in the config file, missing backup directory/share, a network
disconnect, and many others.

It seems like the *right* behavior would be to *not* increment each
partial... and then when a success does occur it should inherit the
same number as the most recent (failed) partial which in turn should
be one more than the last successful backup.

Jeff


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-02 Thread backuppc
usermail wrote at about 20:48:44 +1000 on Thursday, July 2, 2020:
 > On 30/6/20 2:51 pm, backu...@kosowsky.org wrote:
 > Wow great work! This would be fantastic functionality!
 > I copied it into my client .pl file but i dont know if ive stuffed it up?
 > My XferLOG starts like this:
 > 
 > XferLOG file /var/lib/backuppc/pc/charlotte/XferLOG.71.z created 2020-07-02 
 > 12:00:00
 > Backup prep: type = incr, case = 4, inPlace = 0, doDuplicate = 0, newBkupNum 
 > = 71, newBkupIdx = 7, lastBkupNum = 70, lastBkupIdx = 6 (FillCycle = 0, 
 > noFillCnt = 5)
 > Executing DumpPreUserCmd: &{sub {
 > my $timestamp = "20200702-12";
 > my $shadowdir = "/cygdrive/c/shadow/";
 > my $shadows = "";
 > 
 > my $bashscript = "function\ errortrap\ \ \{\ #NOTE:\ Trap\ on\ 
 > error:\ unwind\ shadows\ and\ exit\ 1.\
 > \ \ echo\ \"ERROR\ setting\ up\ shadows...\"\;\
 > \ \ \ \ #First\ delete\ any\ partially\ created\ shadows\
 > \ \ if\ \[\ -n\ \"\$SHADOWID\"\ \]\;\ then\
 > \ \ \ \ \ \ unset\ ERROR\;\
 > \ \ \ \ \ \ \(vssadmin\ delete\ shadows\ /shadow=\$SHADOWID\ /quiet\ \|\|\ 
 > ERROR=\"ERROR\ \"\)\ \|\ tail\ +4\;\  \   \   \ \ \ \ \ \
 > \ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ copy\ for\ 
 > \'\$\{I\^\^\}:\'\ \$SHADOWID\"\;\
 > \ \ fi\
 > \ \ if\ \[\ -n\ \"\$SHADOWLINK\"\ \]\;\ then\
 > \ \ \ \ \ \ unset\ ERROR\;\
 > \ \ \ \ \ \ cmd\ /c\ rmdir\ \$SHADOWLINK\ \|\|\ ERROR=\"ERROR\ \"\;\
 > \ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ link\ for\ 
 > \'\$\{I\^\^\}:\'\ \$SHADOWLINK\"\;\
 > \ \ fi\
 > 
 > The same on the client config page, is this likely an encoding or copy paste 
 > issue?

The backslashes are all painfully necessary to 'escape' variables,
special characters, and white space when passing to the shell.
> 
 > Second question, I dont use cygwin I use deltacopy (basically rsync compiled 
 > for windows I think)
 > and my RsyncShareName is /
 > I dont know perl but it looks like you trim the last slash off of $cygdrive, 
 > so will it be possible to
 > set $cygdrive to /

Yes. Or just set $cygdrive="";
Having this set wrong would explain why it is not automatically
finding your drive letters :)

> 
 > Thanks again for sharing your script,
 > Dean
 > 
 > 
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-02 Thread backuppc
Michael Stowe wrote at about 05:43:03 + on Thursday, July 2, 2020:
 > On 2020-06-30 19:35, backu...@kosowsky.org wrote:
 > > Michael Stowe wrote at about 23:09:55 + on Tuesday, June 30, 2020:
 > >  > On 2020-06-29 21:51, backu...@kosowsky.org wrote:
 > > Not sure why you would want to use a custom version of rsync when my
 > > pretty simple scripts do all that with a lot more transparency to how
 > > they are setup.
 > 
 > I think because it's just one binary (vsshadow not needed, nor anything 
 > else)

I don't need to add any binaries beyond rsync/ssh. I use the native
Win7/Win10 VSS tooling to generate/unwind shadows (vssadmin, wmic,
fsutil, mklink, rmdir). They were present even on the "Home" edition
of Windows.
 
 > > I believe it's far simpler and cleaner than either:
 > > - My old approach for WinXP (using a client-side triggered script,
 > >   rsyncd setup, dosdev, 'at' recursion to elevate privileges, etc.)
 > > - Your version requiring win.exe
 > > - Other versions requiring a custom/non-standard rsync
 > 
 > N.B.: my version works using ssh now, it doesn't require winexe
Good.
 > 
 > > My version only requires a basic cygwin install with rsync/ssh and
 > > basic linux utils plus built-in windows functions.
 > > 
 > > BTW, I still need to add back in the ability to dump all the acl's
 > > (using subinacl) since rsync only syncs POSIX acls and I believe ntfs
 > > has additional acl's.
 > > 
 > > In any case, my ultimate holy-grail is to be able to use BackupPC to 
 > > allow for
 > > a full bare-metal restore by combining:
 > > - Full VSS file backup
 > > - Restore of all ACLs from a subinacl dump
 > > - Anything else I may need to recreate the full NTFS filesystem for
 > >   windows (maybe disk signatures???)
 > 
 > I fully support this notion; NTFS has a lot of weirdness that doesn't 
 > translate well to rsync, like junction points.  Last time I tried these, 
 > rsync would convert them to symlinks, and restore them as symlinks.  
 > YMMV

Yes, you are right about junctions.
My plan would be to use 'fsutil' to get a list of reparsepoints that
could theoretically be reconstructed with 'mklink'.
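
Roughly along these lines; the junction used below is just the stock
"Documents and Settings" one, as an illustration:

  rem dump the reparse data for one known entry
  fsutil reparsepoint query "C:\Documents and Settings"
  rem enumerate junctions/symlinks across the volume
  dir /AL /S C:\
  rem recreate a junction on restore
  mklink /J "C:\Documents and Settings" "C:\Users"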

Though perhaps fully recreating all the NTFS bells & whistles (or
oddities) is a fool's errand.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-02 Thread usermail

On 30/6/20 2:51 pm, backu...@kosowsky.org wrote:

Over the years, many have asked and struggled with backing up remote
Windows shares with shadow copies. Shadow copies are useful since they
allow both the backup to be 'consistent' and allow for reading files
that are otherwise 'busy' and unreadable when part of active Windows
partitions.

Various solutions (including one I proposed almost a decade ago) use
additional scripts and hacks to create the shadow copy.
Such solutions are kludgy and require the maintenance of separate
scripts either on the server or client.

I have written a combination of perl and bash code that can be stored
in the host.pl configuration file that does everything you need to
automagically create shadow copies for each share (where possible)
with minimal to no special configuration in host.pl and nothing to
configure on the Windows client (other than having cygwin+ssh+rsync
and an accessible account on your Windows client).

The only thing you need to do is to set up the hash
Conf{ClientShareName2Path} to map share names to their
(unshadowed) Windows paths. The attached script will then set up and
interpolate the appropriate shadow paths.

It should just work...
Just cut-and-paste the attachment into your host.pl code for Windows
clients.

Note: I included a fair amount of debugging & error messages in case
any shadows or links fail to get created or unwound.


Wow, great work! This would be fantastic functionality!
I copied it into my client .pl file, but I don't know if I've stuffed it up.
My XferLOG starts like this:

XferLOG file /var/lib/backuppc/pc/charlotte/XferLOG.71.z created 2020-07-02 
12:00:00
Backup prep: type = incr, case = 4, inPlace = 0, doDuplicate = 0, newBkupNum = 
71, newBkupIdx = 7, lastBkupNum = 70, lastBkupIdx = 6 (FillCycle = 0, noFillCnt 
= 5)
Executing DumpPreUserCmd: &{sub {
   my $timestamp = "20200702-12";
   my $shadowdir = "/cygdrive/c/shadow/";
   my $shadows = "";

   my $bashscript = "function\ errortrap\ \ \{\ #NOTE:\ Trap\ on\ error:\ 
unwind\ shadows\ and\ exit\ 1.\
\ \ echo\ \"ERROR\ setting\ up\ shadows...\"\;\
\ \ \ \ #First\ delete\ any\ partially\ created\ shadows\
\ \ if\ \[\ -n\ \"\$SHADOWID\"\ \]\;\ then\
\ \ \ \ \ \ unset\ ERROR\;\
\ \ \ \ \ \ \(vssadmin\ delete\ shadows\ /shadow=\$SHADOWID\ /quiet\ \|\|\ 
ERROR=\"ERROR\ \"\)\ \|\ tail\ +4\;\   \   \   \ \ \ \ \ \
\ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ copy\ for\ \'\$\{I\^\^\}:\'\ 
\$SHADOWID\"\;\
\ \ fi\
\ \ if\ \[\ -n\ \"\$SHADOWLINK\"\ \]\;\ then\
\ \ \ \ \ \ unset\ ERROR\;\
\ \ \ \ \ \ cmd\ /c\ rmdir\ \$SHADOWLINK\ \|\|\ ERROR=\"ERROR\ \"\;\
\ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ link\ for\ \'\$\{I\^\^\}:\'\ 
\$SHADOWLINK\"\;\
\ \ fi\

It looks the same on the client config page; is this likely an encoding or 
copy-paste issue?

Second question: I don't use cygwin, I use DeltaCopy (basically rsync compiled 
for Windows, I think) and my RsyncShareName is /.
I don't know Perl, but it looks like you trim the last slash off of $cygdrive, 
so would it be possible to set $cygdrive to /?

Thanks again for sharing your script,
Dean




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-01 Thread Michael Stowe

On 2020-06-30 19:35, backu...@kosowsky.org wrote:

Michael Stowe wrote at about 23:09:55 + on Tuesday, June 30, 2020:
 > On 2020-06-29 21:51, backu...@kosowsky.org wrote:
 > > Over the years, many have asked and struggled with backing up 
remote
 > > Windows shares with shadow copies. Shadow copies are useful since 
they
 > > allow both the backup to be 'consistent' and allow for reading 
files
 > > that are otherwise 'busy' and unreadable when part of active 
Windows

 > > partitions.
 > >
 > > Various solutions (including one I proposed almost a decade ago) 
use

 > > additional scripts and hacks to create the shadow copy.
 > > Such solutions are kludgy and require the maintenance of separate
 > > scripts either on the server or client.
 > >
 > > I have written a combination of perl and bash code that can be 
stored

 > > in the host.pl configuration file that does everything you need to
 > > automagically create shadow copies for each share (where possible)
 > > with minimal to no special configuration in host.pl and nothing to
 > > configure on the Windows client (other than having 
cygwin+ssh+rsync

 > > and an accessible account on your Windows client).
 > >
 > > The only thing you need to do is to set up the hash
 > > Conf{ClientShareName2Path} to map share names to their
 > > (unshadowed) Windows paths. The attached script will then set up 
and

 > > interpolate the appropriate shadow paths.
 > >
 > > It should just work...
 > > Just cut-and-paste the attachment into your host.pl code for 
Windows

 > > clients.
 > >
 > > Note: I included a fair amount of debugging & error messages in 
case

 > > any shadows or links fail to get created or unwound.
 >
 > What ssh do you use?

I just use stock cygwin ssh & rsync.


 > When I updated my server-side scripts to work with ssh as well as 
the

 > venerable winexe, and was alerted to this:
 >
https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6n1L3gGqkO4CpkrIpm-2FMXVdh7clCIBIMiVaW-2Fu4zJf1mu24301LW0nUuAE3Gedwg9vUvdiSjYTlleQn605qsb0FRbW6N7V7wfRFUjfSABVFV0RoZJ-2FyEfjb4YYEr-2F8znrwZSnSJzHNGDagX1i-2B8oxVVwIZ-2B-2BI-2B7fMNzIKJC5kqSEW-2F9D0DtlKHitvY1YisbM5pu2luUF-2B-2BItSFT8NuOX9Vq2dCJdKdjvFvOyTyyHxp1wW32cXKbNHSuWQQmnAIy9oD35Wau2-2B7WszpGgfR0cY-2B0pevu2W-2B-2B6fJJEznZrhlRitkK4PIwEk6rpHv5pNEE3zE6plvEiSsBUAzjqpeaPD-2BxvaMsO6aQJ9CXFTtDn8dUS5m6iatIhO0OYIAr3rlIP8HOuM8UxphdnUS-2Bx3cno2jam0S-2B-2B5lEewh0xiHIFUJ6ol867CfD3znalR13b0QgQk4L2VHC3hqf6DGABwIsLeze9isvhCs7r1nk59mpMhuTky2t-2BRoH9Mdm1KhWkz0czqEeXpfbtZjuSwPZeYgTcqMM-3D_4d7_ukiVZyKkp5Cjvx76jsH50UVUlXkseXbKCRkPqSLeHuHGH3U7CETSxfS6wuOOBk00pkNGxGGLZoFxeZBePnNNfQwoEtLwJoSFE-2FLY1sVDBZbSrzu7m0AdClCUvxzYM7KNHA9ZIEYhwveViz3D0A1qxuZ0VdWwC2Z2fJVSIP9x42x5tUhxpcmHt-2FjoSKP46YtZux-2F159UIJgD7q6ONjcVYeYEQdVYxxGrBNusnkQIrUHReWk9DtR-2B30KsApgurmA19

I don't use winexe - it's also not particularly secure.


No, nor do I.  But I do use the ssh that comes with Windows, which 
required some special handling to get the permissions right.  I don't 
know if your script will work with it, but I can give it a whirl within 
the week.



I pipe a server-side bash script onto ssh as part of the Dump Pre/Post
User Commands to setup and takedown the shadows.

Not sure why you would want to use a custom version of rsync when my
pretty simple scripts do all that with a lot more transparency to how
they are setup.


I think because it's just one binary (vsshadow not needed, nor anything 
else)



Can you test out my script?


I'll give it a shot (see above)


I believe it's far simpler and cleaner than either:
- My old approach for WinXP (using a client-side triggered script,
  rsyncd setup, dosdev, 'at' recursion to elevate privileges, etc.)
- Your version requiring win.exe
- Other versions requiring a custom/non-standard rsync


N.B.: my version works using ssh now, it doesn't require winexe


My version only requires a basic cygwin install with rsync/ssh and
basic linux utils plus built-in windows functions.

BTW, I still need to add back in the ability to dump all the acl's
(using subinacl) since rsync only syncs POSIX acls and I believe ntfs
has additional acl's.

In any case, my ultimate holy-grail is to be able to use BackupPC to 
allow for

a full bare-metal restore by combining:
- Full VSS file backup
- Restore of all ACLs from a subinacl dump
- Anything else I may need to recreate the full NTFS filesystem for
  windows (maybe disk signatures???)


I fully support this notion; NTFS has a lot of weirdness that doesn't 
translate well to rsync, like junction points.  Last time I tried these, 
rsync would convert them to symlinks, and restore them as symlinks.  
YMMV



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Updated debian package folder for rsync_bpc 3.1.3 branch

2020-07-01 Thread backuppc
I have been building my own rsync_bpc packages for the 3.0.9 branch by
copying over the 'debian' folder that Raoul Bhatia provides in his
original packages.

However, these no longer work for 3.1.2 and 3.1.3.

I am not a debian package expert so I don't know what needs to be done
to update the debian folder so that 'fakeroot dpkg-buildpackage -uc
-us' completes without error.

Any ideas?
Has anybody successfully built debian packages for 3.1.3 who can share
their 'debian' folder?

Thanks,
Jeff


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] rsync error backing up Windows 10 computer

2020-07-01 Thread backuppc
The log file shows multiple instances of the following error when rsync_bpc is 
run:
unpack_smb_acl: warning: entry with unrecognized tag type ignored


Any idea what may be going on here?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread
Michael Stowe wrote at about 23:09:55 + on Tuesday, June 30, 2020:
 > On 2020-06-29 21:51, backu...@kosowsky.org wrote:
 > > Over the years, many have asked and struggled with backing up remote
 > > Windows shares with shadow copies. Shadow copies are useful since they
 > > allow both the backup to be 'consistent' and allow for reading files
 > > that are otherwise 'busy' and unreadable when part of active Windows
 > > partitions.
 > > 
 > > Various solutions (including one I proposed almost a decade ago) use
 > > additional scripts and hacks to create the shadow copy.
 > > Such solutions are kludgy and require the maintenance of separate
 > > scripts either on the server or client.
 > > 
 > > I have written a combination of perl and bash code that can be stored
 > > in the host.pl configuration file that does everything you need to
 > > automagically create shadow copies for each share (where possible)
 > > with minimal to no special configuration in host.pl and nothing to
 > > configure on the Windows client (other than having cygwin+ssh+rsync
 > > and an accessible account on your Windows client).
 > > 
 > > The only thing you need to do is to set up the hash
 > > Conf{ClientShareName2Path} to map share names to their
 > > (unshadowed) Windows paths. The attached script will then set up and
 > > interpolate the appropriate shadow paths.
 > > 
 > > It should just work...
 > > Just cut-and-paste the attachment into your host.pl code for Windows
 > > clients.
 > > 
 > > Note: I included a fair amount of debugging & error messages in case
 > > any shadows or links fail to get created or unwound.
 > 
 > What ssh do you use?

I just use stock cygwin ssh & rsync.


 > When I updated my server-side scripts to work with ssh as well as the 
 > venerable winexe, and was alerted to this:
 > https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6hSUAGKA-2FZ4EXu0KbZUYtfPMmjaEDrGQFZ-2BTO1Kw4YUsENnB-2BYHtkE8jsm5y9ZKsZw-3D-3D0WeR_ukiVZyKkp5Cjvx76jsH50UUtEtCgMsyWtxVptJl-2FKE9RHuXXjDv46hulGquMiCHqO1cUX7lUb0JGPDBkdKULRgIzBYzygPWOLMnToJEwWlkFgSpuyvyRIoFh6g46IkD4hDv8q0iNShGrbLZ-2FWY-2FJ1bf-2Br0AUhR4II3jmqK8V6zW-2BcNS3HWYTOsSxlK1I13DnsJSLHNRELiUUl7zLG4k9qlz2FMSSKC2P8WJDhyso0MU-3D

I don't use winexe - it's also not particularly secure.

I pipe a server-side bash script onto ssh as part of the Dump Pre/Post
User Commands to setup and takedown the shadows.
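
(A minimal sketch of that wiring, with made-up script paths, account name
and a passwordless ssh key assumed, just to show the shape; the real
scripts do quite a bit more:

$Conf{DumpPreUserCmd}     = '/etc/backuppc/scripts/shadow-pre.sh $host $hostIP';
$Conf{DumpPostUserCmd}    = '/etc/backuppc/scripts/shadow-post.sh $host $hostIP';
$Conf{UserCmdCheckStatus} = 1;    # fail the backup if the pre-command fails

Here shadow-pre.sh would be a small server-side wrapper that essentially runs
"ssh backuppc@$2 bash -s < /etc/backuppc/scripts/shadow-setup.bash"; some kind
of wrapper is needed because BackupPC execs these commands directly, without
a shell.)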

Not sure why you would want to use a custom version of rsync when my
pretty simple scripts do all that with a lot more transparency to how
they are setup.
> 
 > Which points to this:
 > https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6ixXxMXwHczMOYAFDcPlTkEXRK7t-2BnyUlyzDQoG5GZdN-2B0DCDb-2Fa6IOvkPW8bMfXFEiMGZijC6vabaR5CPfjOJA-3DAR1c_ukiVZyKkp5Cjvx76jsH50UUtEtCgMsyWtxVptJl-2FKE9RHuXXjDv46hulGquMiCHqEFL1iPvXcZ1RYzdaFNSnxeGPM-2Fl3J4b4K5FvLtwaS73kqtSTQPDxssk7g0TT-2BdnOGQvFBTfIuEx4PzfTiMDRUEgRPM9AXS4gEYEwBmCRXNkWAR5zS58ZQzEUZA2uzhKrR9gkTQZhcItZB2-2Fa897XTKax6LE2qXFlfEyJyh-2FGjhc-3D
 > 
 > Fundamentally, it's a customized copy of rsync that automatically 
 > handles the VSS side.  At any rate, I haven't finished testing it (have 
 > a large project backlog) but thought you (or others here) might be 
 > interested.

Can you test out my script?
I believe it's far simpler and cleaner than any of:
- My old approach for WinXP (using a client-side triggered script,
  rsyncd setup, dosdev, 'at' recursion to elevate privileges, etc.)
- Your version requiring winexe
- Other versions requiring a custom/non-standard rsync

My version only requires a basic cygwin install with rsync/ssh and
basic linux utils plus built-in windows functions.

BTW, I still need to add back in the ability to dump all the ACLs
(using subinacl), since rsync only syncs POSIX ACLs and I believe NTFS
has additional ACLs.

In any case, my ultimate holy-grail is to be able to use BackupPC to allow for
a full bare-metal restore by combining:
- Full VSS file backup
- Restore of all ACLs from a subinacl dump
- Anything else I may need to recreate the full NTFS filesystem for
  windows (maybe disk signatures???)


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread Michael Stowe

On 2020-06-29 21:51, backu...@kosowsky.org wrote:

Over the years, many have asked and struggled with backing up remote
Windows shares with shadow copies. Shadow copies are useful since they
allow both the backup to be 'consistent' and allow for reading files
that are otherwise 'busy' and unreadable when part of active Windows
partitions.

Various solutions (including one I proposed almost a decade ago) use
additional scripts and hacks to create the shadow copy.
Such solutions are kludgy and require the maintenance of separate
scripts either on the server or client.

I have written a combination of perl and bash code that can be stored
in the host.pl configuration file that does everything you need to
automagically create shadow copies for each share (where possible)
with minimal to no special configuration in host.pl and nothing to
configure on the Windows client (other than having cygwin+ssh+rsync
and an accessible account on your Windows client).

The only thing you need to do is to set up the hash
Conf{ClientShareName2Path} to map share names to their
(unshadowed) Windows paths. The attached script will then set up and
interpolate the appropriate shadow paths.

It should just work...
Just cut-and-paste the attachment into your host.pl code for Windows
clients.

Note: I included a fair amount of debugging & error messages in case
any shadows or links fail to get created or unwound.


What ssh do you use?

When I updated my server-side scripts to work with ssh as well as the 
venerable winexe, and was alerted to this:

https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6hSUAGKA-2FZ4EXu0KbZUYtfPMmjaEDrGQFZ-2BTO1Kw4YUsENnB-2BYHtkE8jsm5y9ZKsZw-3D-3DOkUp_ukiVZyKkp5Cjvx76jsH50UVUlXkseXbKCRkPqSLeHuHGH3U7CETSxfS6wuOOBk00j-2B342fKCvTk2gVMjxlLeGQjGpYP9XfaFhqsh78hDEKLdOUOH4lHptqZI7uoYJ7-2BtzZLqYUcVXdPlHNtfoD3QBSgt3UEs1MGWPQzmfb3fOH7HbGctnY54CDgrSwj0qmmrglWL3EjG0PDiNOHaMCd6Ca1oincEGz6gUk00qqpETnyNz9sH0ZleDvOZCq6-2BLfbW

Which points to this:
https://u2182357.ct.sendgrid.net/ls/click?upn=UlfI6r-2FmuicX-2BnC5-2BZ3I6ixXxMXwHczMOYAFDcPlTkEXRK7t-2BnyUlyzDQoG5GZdN-2B0DCDb-2Fa6IOvkPW8bMfXFEiMGZijC6vabaR5CPfjOJA-3DZinR_ukiVZyKkp5Cjvx76jsH50UVUlXkseXbKCRkPqSLeHuHGH3U7CETSxfS6wuOOBk00j-2B342fKCvTk2gVMjxlLeGZHu7cPKYWx5IlIr5yRpXtHWrn5iGQubBaMv62yVZDPCU8oYXWCq30DGKiQ6yzxPxdV1ygz7XdIc-2FN9v6lUIeB9MDuiLXLcnutCzb-2FQPd-2Fcg-2FefTLzrbRTi1OxMVO1MSFVoLv4y1kwBN0s-2FUA7JfoepEk2hS1-2BVqr4yp7NwB4uvW

Fundamentally, it's a customized copy of rsync that automatically 
handles the VSS side.  At any rate, I haven't finished testing it (have 
a large project backlog) but thought you (or others here) might be 
interested.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread backuppc
G.W. Haywood via BackupPC-users wrote at about 15:38:42 +0100 on Tuesday, June 
30, 2020:
 > Hi there,
 > 
 > On Tue, 30 Jun 2020, Jeff Kosowsky wrote:
 > 
 > > It should just work...
 > > [snip]
 > > -- next part --
 > > A non-text attachment was scrubbed...
 > > Name: BackupPCShadowConfig.pl
 > > Type: application/octet-stream
 > > Size: 8533 bytes
 > > Desc: not available
 > > 
 > > --
 > 
 > Don't you just hate it when that happens? :)
 > 

Try it and let me know your feedback... :)


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 30 Jun 2020, Jeff Kosowsky wrote:


It should just work...
[snip]
-- next part --
A non-text attachment was scrubbed...
Name: BackupPCShadowConfig.pl
Type: application/octet-stream
Size: 8533 bytes
Desc: not available

--


Don't you just hate it when that happens? :)

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] FEATURE REQUEST: Add ability to see all file attribs when browsing the web gui

2020-06-30 Thread
Currently the web gui shows the type/mode/size/mod-date for backed up
files.
It would be great if there were a way to (optionally) show other
attribs stored in the relevant attrib file, including:
- GID/UID
- xattrs
- ACLs
- digest (md5sum)
- Nlinks
- inode
- compress

This would be more natural than having to switch back-and-forth
between the GUI and BackupPC_attribPrint.

Could be done as a hover pop-up or as a an info button...


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-29 Thread
Over the years, many have asked and struggled with backing up remote
Windows shares with shadow copies. Shadow copies are useful since they
allow both the backup to be 'consistent' and allow for reading files
that are otherwise 'busy' and unreadable when part of active Windows
partitions.

Various solutions (including one I proposed almost a decade ago) use
additional scripts and hacks to create the shadow copy.
Such solutions are kludgy and require the maintenance of separate
scripts either on the server or client.

I have written a combination of perl and bash code that can be stored
in the host.pl configuration file that does everything you need to
automagically create shadow copies for each share (where possible)
with minimal to no special configuration in host.pl and nothing to
configure on the Windows client (other than having cygwin+ssh+rsync
and an accessible account on your Windows client).

The only thing you need to do is to set up the hash
Conf{ClientShareName2Path} to map share names to their 
(unshadowed) Windows paths. The attached script will then set up and
interpolate the appropriate shadow paths.
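
For example (share names and paths here are made up purely for
illustration), the host.pl entry might look like:

$Conf{ClientShareName2Path} = {
    'Cdrive' => 'C:/',
    'Data'   => 'D:/Data',
};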

It should just work...
Just cut-and-paste the attachment into your host.pl code for Windows
clients.

Note: I included a fair amount of debugging & error messages in case
any shadows or links fail to get created or unwound.



BackupPCShadowConfig.pl
Description: Binary data
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-29 Thread Mike Hughes
For what it's worth, I was able to resolve this (for now) by updating CPAN 
itself. I noticed when running certain commands that it complained that CPAN 
was at version 1.x and version 2.28 was available. It suggested running:
install CPAN
reload cpan

But those are clearly not bash commands. They need to be run in the CPAN shell:
# perl -MCPAN -e shell

After completing, I again ran the following as the backuppc user and it 
reported the correct version:
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.62

This is likely not the best way to resolve a mismatched cpan module version but 
it does appear to have worked for me, for now. I promise not to complain next 
time an update comes through and I end up having to rebuild from .iso 

From: Craig Barratt via BackupPC-users 
Sent: Thursday, June 25, 2020 5:43 PM
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

You can install the perl module Module::Path to find the path for a module.

After installing, do this:
perl -e 'use Module::Path "module_path"; 
print(module_path("BackupPC::XS")."\n");'

Example output:
/usr/local/lib/x86_64-linux-gnu/perl/5.26.1/BackupPC/XS.pm

Now try as root and the BackupPC user to see the difference.  Does the BackupPC 
user have permission to access the version root uses?

You can also print the module search path with:
perl -e 'print join("\n", @INC),"\n"'

Does that differ between root and the BackupPC user?

Craig

On Thu, Jun 25, 2020 at 9:48 AM Les Mikesell <lesmikes...@gmail.com> wrote:
> The system got itself into this state from a standard yum update.

That's why you want to stick to all packaged modules whenever
possible.   Over time, dependencies can change and the packaged
versions will update together.  You can probably update a cpan module
to the correct version manually but you need to track all the version
dependencies yourself.   There are some different approaches to
removing modules: https://www.perlmonks.org/?node_id=1134981


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc process will not stop

2020-06-29 Thread Mark Maciolek
hi,
  You'd think I would have run across this with all the Linux systems I
administer, but I had not. You are correct, systemctl stop worked.

Thank you.

Mark

> Mark,
>
> Perhaps systemd is being used to run BackupPC?
>
> What output do you get from:
>
> systemctl status backuppc
>
> If it shows as active/running, then the correct command to stop BackupPC
> is:
>
> systemctl stop backuppc
>
>
> Craig
>
> On Mon, Jun 29, 2020 at 11:05 AM Mark Maciolek  wrote:
>
>> hi,
>>
>> Running BackupPC v4.3.2 on Ubuntu 18.04 LTS. I want to upgrade to 4.4.0
>> but I can't get the backuppc process to stop. I can do
>> /etc/init.d/backuppc and it starts again. If I do kill -9  it
>> also just restarts.
>>
>> I have several other BackupPC servers and yet this is the only one that
>> does this.
>>
>> Does anyone have a clue to where I should start troubleshooting this
>> issue?
>>
>> Mark
>>
>>
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
>



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc process will not stop

2020-06-29 Thread Craig Barratt via BackupPC-users
Mark,

Perhaps systemd is being used to run BackupPC?

What output do you get from:

systemctl status backuppc

If it shows as active/running, then the correct command to stop BackupPC is:

systemctl stop backuppc


Craig

On Mon, Jun 29, 2020 at 11:05 AM Mark Maciolek  wrote:

> hi,
>
> Running BackupPC v4.3.2 on Ubuntu 18.04 LTS. I want to upgrade to 4.4.0
> but I can't get the backuppc process to stop. I can do
> /etc/init.d/backuppc and it starts again. If I do kill -9  it
> also just restarts.
>
> I have several other BackupPC servers and yet this is the only one that
> does this.
>
> Does anyone have a clue to where I should start troubleshooting this issue?
>
> Mark
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] backuppc process will not stop

2020-06-29 Thread Mark Maciolek
hi,

Running BackupPC v4.3.2 on Ubuntu 18.04 LTS. I want to upgrade to 4.4.0
but I can't get the backuppc process to stop. I can do
/etc/init.d/backuppc and it starts again. If I do kill -9  it
also just restarts.

I have several other BackupPC servers and yet this is the only one that
does this.

Does anyone have a clue to where I should start troubleshooting this issue?

Mark


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread Phil Kennedy
I'll clarify a bit: the BackupPC.sock and LOCK file deletions were a long
shot that just happened to work. I noticed that those files were there with
the service stopped, and my suspicion / reasoning was that perhaps backuppc
was having trouble generating the socket because those represented a hung /
locked / otherwise old session. Deleting them worked a couple times (stop
service, delete, start service) but that seems to have stopped working as
well.

I will check the NFS version; perhaps the NAS has started defaulting to v4, but
I can't be sure. In the past, mounting without nolock on a Synology NAS (I've
gone through a couple, and the issue happened on an old WD NAS as well) would
cause a load average spike that would make BackupPC appear unstable, so my
default for many years has been nolock.
~Phil

On Sun, Jun 28, 2020, 6:45 PM Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> The CGI script is trying to connect to the BackupPC server using the
> unix-domain socket, which is at $Conf{LogDir}/BackupPC.sock. From your
> email, on your system that appears to be /var/lib/log/BackupPC.sock.
>
> Are you running nfs v3 or v4?  I have had experience with v3 not working
> reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
> does rely on lock files working, so it's definitely not recommended to turn
> locking off.
>
> You said you deleted the BackupPC.sock file.  That would explain why the
> CGI script can't connect to the server.  Why did you delete it?  You said 
> "deleting
> those files doesn't always let the service restart" - deleting those files
> should not be used to get the server to restart.
>
> Craig
>
> On Sat, Jun 27, 2020 at 9:26 AM Phil Kennedy <
> phillip.kenn...@yankeeairmuseum.org> wrote:
>
>> I've hit my wits end on an issue with my backuppc instance. The system
>> ran fine, untouched, for many months. This is an Ubuntu 16.04 system,
>> running BackupPC 3.3.1, installed via apt. When accessing the index (or any
>> other pages), I get the following:
>> Error: Unable to connect to BackupPC server
>> This CGI script (/backuppc/index.cgi) is unable to connect to the
>> BackupPC server on pirate port -1.
>> The error was: unix connect: Connection refused.
>> Perhaps the BackupPC server is not running or there is a configuration
>> error. Please report this to your Sys Admin.
>>
>> The backuppc & apache services are running, and restarting without error.
>> The backuppc pool (and other important folders, such as log) lives on an
>> NFS mount, and /var/lib/backuppc is symlinked to /mnt/backup. Below is the
>> fstab entry that I use:
>>
>> 10.0.0.4:/backup /mnt/backup nfs users,auto,nolock,rw 0 0
>>
>> (I'm specifically using nolock, since that can cause a similar issue.
>> Mounting an NFS mount via some of the off the shelf NAS's out there can
>> have performance issues without nolock set.)
>>
>> I've been able to get the instance to start and run briefly by deleting
>> the BackupPC.sock and LOCK files from /var/lib/log, but the instance
>> doesn't stay running for very long (minutes to an hour or two), and the LOG
>> isn't giving me much data. On top of that, deleting those files doesn't
>> always let the service restart. Thoughts? This box lives a pretty stagnant
>> life; nothing tends to change configuration-wise.
>> ~Phil
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread backuppc
G.W. Haywood via BackupPC-users wrote at about 14:16:15 +0100 on Monday, June 
29, 2020:
 > Hi there,
 > 
 > On Mon, 29 Jun 2020,  Craig Barratt wrote:
 > 
 > > ...
 > > Are you running nfs v3 or v4?  I have had experience with v3 not working
 > > reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
 > > does rely on lock files working, so it's definitely not recommended to turn
 > > locking off.
 > > ...
 > 
 > I would go further than that.  My feeling is that NFS is not suitable
 > for something so important as your backups.
 > 

That being said, I ran BackupPC successfully for almost a decade using
NFSv3 to store the backups on a small, under-powered, low-memory
ARM-based NAS. I never had a problem with lock files...


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread Norman Goldstein
For what it is worth, I have been using an nfs mount to hold the pool 
files for both BPC V3 and V4.  This is the fstab entry:


192.168.1.80:/mnt/HD/HD_a2/POOLS /home/disks/nasPOOLS nfs nolock,rw,suid 0 0

The man page nfs(5) explains that nolock does not turn off locking 
completely.



On 2020-06-29 6:16 a.m., G.W. Haywood via BackupPC-users wrote:

Hi there,

On Mon, 29 Jun 2020,  Craig Barratt wrote:


...
Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour). BackupPC
does rely on lock files working, so it's definitely not recommended 
to turn

locking off.
...


I would go further than that.  My feeling is that NFS is not suitable
for something so important as your backups.






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 29 Jun 2020,  Craig Barratt wrote:


...
Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.
...


I would go further than that.  My feeling is that NFS is not suitable
for something so important as your backups.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unable to connect on port -1

2020-06-28 Thread Craig Barratt via BackupPC-users
The CGI script is trying to connect to the BackupPC server using the
unix-domain socket, which is at $Conf{LogDir}/BackupPC.sock. From your
email, on your system that appears to be /var/lib/log/BackupPC.sock.

Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.

You said you deleted the BackupPC.sock file.  That would explain why the
CGI script can't connect to the server.  Why did you delete it?  You
said "deleting
those files doesn't always let the service restart" - deleting those files
should not be used to get the server to restart.

Craig

On Sat, Jun 27, 2020 at 9:26 AM Phil Kennedy <
phillip.kenn...@yankeeairmuseum.org> wrote:

> I've hit my wits end on an issue with my backuppc instance. The system ran
> fine, untouched, for many months. This is an Ubuntu 16.04 system, running
> BackupPC 3.3.1, installed via apt. When accessing the index (or any other
> pages), I get the following:
> Error: Unable to connect to BackupPC server
> This CGI script (/backuppc/index.cgi) is unable to connect to the BackupPC
> server on pirate port -1.
> The error was: unix connect: Connection refused.
> Perhaps the BackupPC server is not running or there is a configuration
> error. Please report this to your Sys Admin.
>
> The backuppc & apache services are running, and restarting without error.
> The backuppc pool (and other important folders, such as log) lives on an
> NFS mount, and /var/lib/backuppc is symlinked to /mnt/backup. Below is the
> fstab entry that I use:
>
> 10.0.0.4:/backup /mnt/backup nfs users,auto,nolock,rw 0 0
>
> (I'm specifically using nolock, since that can cause a similar issue.
> Mounting an NFS mount via some of the off the shelf NAS's out there can
> have performance issues without nolock set.)
>
> I've been able to get the instance to start and run briefly by deleting
> the BackupPC.sock and LOCK files from /var/lib/log, but the instance
> doesn't stay running for very long (minutes to an hour or two), and the LOG
> isn't giving me much data. On top of that, deleting those files doesn't
> always let the service restart. Thoughts? This box lives a pretty stagnant
> life; nothing tends to change configuration-wise.
> ~Phil
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Unable to connect on port -1

2020-06-27 Thread Phil Kennedy
I've hit my wits end on an issue with my backuppc instance. The system ran
fine, untouched, for many months. This is an Ubuntu 16.04 system, running
BackupPC 3.3.1, installed via apt. When accessing the index (or any other
pages), I get the following:
Error: Unable to connect to BackupPC server
This CGI script (/backuppc/index.cgi) is unable to connect to the BackupPC
server on pirate port -1.
The error was: unix connect: Connection refused.
Perhaps the BackupPC server is not running or there is a configuration
error. Please report this to your Sys Admin.

The backuppc & apache services are running, and restarting without error.
The backuppc pool (and other important folders, such as log) lives on an
NFS mount, and /var/lib/backuppc is symlinked to /mnt/backup. Below is the
fstab entry that I use:

10.0.0.4:/backup /mnt/backup nfs users,auto,nolock,rw 0 0

(I'm specifically using nolock, since that can cause a similar issue.
Mounting an NFS mount via some of the off the shelf NAS's out there can
have performance issues without nolock set.)

I've been able to get the instance to start and run briefly by deleting the
BackupPC.sock and LOCK files from /var/lib/log, but the instance doesn't
stay running for very long (minutes to an hour or two), and the LOG isn't
giving me much data. On top of that, deleting those files doesn't always
let the service restart. Thoughts? This box lives a pretty stagnant life;
nothing tends to change configuration-wise.
~Phil
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Craig Barratt via BackupPC-users
You can install the perl module Module::Path to find the path for a module.

After installing, do this:

perl -e 'use Module::Path "module_path";
print(module_path("BackupPC::XS")."\n");'

Example output:

/usr/local/lib/x86_64-linux-gnu/perl/5.26.1/BackupPC/XS.pm

Now try as root and the BackupPC user to see the difference.  Does the
BackupPC user have permission to access the version root uses?

You can also print the module search path with:

perl -e 'print join("\n", @INC),"\n"'


Does that differ between root and the BackupPC user?

Craig

On Thu, Jun 25, 2020 at 9:48 AM Les Mikesell  wrote:

> > The system got itself into this state from a standard yum update.
>
> That's why you want to stick to all packaged modules whenever
> possible.   Over time, dependencies can change and the packaged
> versions will update together.  You can probably update a cpan module
> to the correct version manually but you need to track all the version
> dependencies yourself.   There are some different approaches to
> removing modules: https://www.perlmonks.org/?node_id=1134981
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

2020-06-25 Thread Daniel Berteaud
- On 25 June 2020, at 15:42, backu...@kosowsky.org wrote:

> 
> Helpful configurable options would include:
> - *Days* since last successful backup - *per host* configurable - as you
>  may want to be more paranoid about certain hosts versus others while
>  others you may not care if it gets backed up regularly and you want
>  to avoid the "nag" emails
> 
> - *Number* of errors in last backup - *per host/per share*
>  configurable - Idea being that some hosts may naturally have more
>  errors due to locked files or fleeting files while other shares may
>  be rock stable. (Potentially, one could even trigger on types of errors
>  or you could exclude certain types of errors from the count)
> 
> - *Percent* of files changed/added/deleted in last backup relative to
>  prior backup - *per host/per share* configurable - idea here being
>  that you want to be alerted if something unexpected has changed on
>  the host which could even be dramatic if a share has been damaged or
>  deleted or not mounted etc.
> 
> Just a thought starter... I'm sure others may have other ideas to add...
> 


I do all of this with Zabbix and some custom scripts : 
https://git.fws.fr/fws/zabbix-agent-addons/src/branch/master/zabbix_scripts

  * Alert if no backup since $Conf{EMailNotifyOldBackupDays}
  * Alert if Xfer error > threshold
  * Alert if new file size seems abnormal (too small or too big)
  * Graph some data about space consumption/compression efficiency

Maybe it can help ;-)

++

-- 
[ https://www.firewall-services.com/ ]  
Daniel Berteaud 
FIREWALL-SERVICES SAS, La sécurité des réseaux 
Société de Services en Logiciels Libres 
Tél : +33.5 56 64 15 32 
Matrix: @dani:fws.fr 
[ https://www.firewall-services.com/ | https://www.firewall-services.com ]



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Les Mikesell
> The system got itself into this state from a standard yum update.

That's why you want to stick to all packaged modules whenever
possible.   Over time, dependencies can change and the packaged
versions will update together.  You can probably update a cpan module
to the correct version manually but you need to track all the version
dependencies yourself.   There are some different approaches to
removing modules: https://www.perlmonks.org/?node_id=1134981


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Mike Hughes
The system got itself into this state from a standard yum update. I only 
intervened once the BackupPC service failed to start after a reboot.
From what I found it looked like updating to .62 was the right direction. And
now I learned that there is no way to cleanly uninstall a cpan module. Ugh.
So am I looking at a purge and a reinstall? If so, is there a guide on how to 
do that?
Thanks for any tips!

From: Richard Shaw 
Sent: Thursday, June 25, 2020 10:08 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Thu, Jun 25, 2020 at 10:02 AM Mike Hughes <m...@visionary.com> wrote:
Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How can I 
fix this while waiting for the patch to roll out?

Well, I'm not sure how to clean up the mess, but the problem is simple. You 
don't want to mix manual cpan installs with packages. There's no reason to use 
cpan at all if you're using my packages.

My guess is the cpan installs are going into /usr/local which is overriding the 
package installs.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

2020-06-25 Thread Mike Hughes
Daily report:
https://github.com/moisseev/BackupPC_report


From: backu...@kosowsky.org 
Sent: Thursday, June 25, 2020 8:42 AM
To: General list for user discussion 
Subject: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

It would be great if there could be a way to have a host/share
configurable way to trigger emails based on certain types of errors or
changes.

The goal being to avoid the "complacency" of backups continuing to run
but not being aware of either continuing backup errors or unexpected
changes to the underlying system.

Otherwise, one must rely on regular and pretty detailed review of logs
and stats.

Helpful configurable options would include:
- *Days* since last successful backup - *per host* configurable - as you
  may want to be more paranoid about certain hosts versus others while
  others you may not care if it gets backed up regularly and you want
  to avoid the "nag" emails

- *Number* of errors in last backup - *per host/per share*
  configurable - Idea being that some hosts may naturally have more
  errors due to locked files or fleeting files while other shares may
  be rock stable. (Potentially, one could even trigger on types of errors
  or you could exclude certain types of errors from the count)

- *Percent* of files changed/added/deleted in last backup relative to
  prior backup - *per host/per share* configurable - idea here being
  that you want to be alerted if something unexpected has changed on
  the host which could even be dramatic if a share has been damaged or
  deleted or not mounted etc.

Just a thought starter... I'm sure others may have other ideas to add...



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Richard Shaw
On Thu, Jun 25, 2020 at 10:02 AM Mike Hughes  wrote:

> Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How
> can I fix this while waiting for the patch to roll out?
>

Well, I'm not sure how to clean up the mess, but the problem is simple. You
don't want to mix manual cpan installs with packages. There's no reason to
use cpan at all if you're using my packages.

My guess is the cpan installs are going into /usr/local which is overriding
the package installs.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Mike Hughes
Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How can I 
fix this while waiting for the patch to roll out?

# head /usr/share/BackupPC/bin/BackupPC -n1
#!/usr/bin/perl
# grep "use\ lib" /usr/share/BackupPC/bin/BackupPC
use lib "/usr/share/BackupPC/lib";
# which cpan
/bin/cpan
# /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57
# cpan install BackupPC::XS
...
# /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.62
# su backuppc -
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57
$ cpan install BackupPC::XS
...
ERROR: Can't create '/root/perl5/lib/perl5/x86_64-linux-thread-multi/BackupPC'
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57

From: Richard Shaw 
Sent: Wednesday, June 24, 2020 12:42 PM
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Wed, Jun 24, 2020 at 12:19 PM Craig Barratt via BackupPC-users <backuppc-users@lists.sourceforge.net> wrote:
Mike,

It's possible you have two different versions of perl installed, or for some 
reason the BackupPC user is seeing an old version of BackupPC::XS.

Try some of the suggestions here: 
https://github.com/backuppc/backuppc/issues/351.

Yes, I just did a fresh install and didn't have any issues (with BackupPC-XS). 
I DID find that /var/run/BackupPC is not created by the package and cannot be 
created automatically since BackupPC is run as the backuppc user. Looking into 
that now.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

2020-06-25 Thread
It would be great if there could be a way to have a host/share
configurable way to trigger emails based on certain types of errors or
changes.

The goal being to avoid the "complacency" of backups continuing to run
but not being aware of either continuing backup errors or unexpected
changes to the underlying system.

Otherwise, one must rely on regular and pretty detailed review of logs
and stats.

Helpful configurable options would include:
- *Days* since last successful backup - *per host* configurable - as you
  may want to be more paranoid about certain hosts versus others while
  others you may not care if it gets backed up regularly and you want
  to avoid the "nag" emails

- *Number* of errors in last backup - *per host/per share*
  configurable - Idea being that some hosts may naturally have more
  errors due to locked files or fleeting files while other shares may
  be rock stable. (Potentially, one could even trigger on types of errors
  or you could exclude certain types of errors from the count)

- *Percent* of files changed/added/deleted in last backup relative to
  prior backup - *per host/per share* configurable - idea here being
  that you want to be alerted if something unexpected has changed on
  the host which could even be dramatic if a share has been damaged or
  deleted or not mounted etc.

Just a thought starter... I'm sure others may have other ideas to add...



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Is there a reason that DumpPreUserCmd (and its analogs) are executed without a shell?

2020-06-24 Thread
I ended up using the perl embedded code approach... it gave me more
flexibility. 

I now have a robust routine that uses DumpPreUserCmd/DumpPostUserCmd
to automagically set up Windows shadow copies on remote Windows 7
machines (should presumably also work for Windows 8/10).

It basically uses your current $Conf{ClientShareName2Path} hash to
determine automatically what shadow copies to create. Then it:
- Creates the shadow copy
- Creates a junction for each shadow copy in a directory of your
  choosing labeled by drive-letter and timestamp (so that you could
  have multiple backups going at once on separate shadow copies)
- Modifies on the fly $Conf{ClientShareName2Path} to interpolate the
  shadow copy junction location (without changing the share name)

Then it proceeds to back up the relevant shadow copy for each share.

Then after the backup is complete, everything is unwound by deleting
the junctions and removing the shadow copies.
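
To make the junction-naming and interpolation step concrete, here is a tiny
standalone illustration of just that bookkeeping (the C:/shadow junction
directory, the share names and the timestamp format are invented for the
example; the actual shadow creation, junction creation and ssh plumbing are
omitted):

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(strftime);

# unshadowed share map, as it would appear in host.pl
my %share2path = ( 'Cdrive' => 'C:/', 'Data' => 'D:/Data' );
my $stamp      = strftime("%Y%m%d-%H%M%S", localtime);

for my $share ( sort keys %share2path ) {
    my ($drive, $rest) = $share2path{$share} =~ m{^([A-Za-z]):/*(.*)$};
    # one junction per drive, e.g. C:/shadow/D-20200629-213000 pointing at the shadow of D:
    my $junction = "C:/shadow/$drive-$stamp";
    $share2path{$share} = $rest eq '' ? $junction : "$junction/$rest";
    printf("%-6s => %s\n", $share, $share2path{$share});
}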

It's quite a hack combining escaped perl code that runs on the server,
ssh, and bash code that runs on the client.

The beauty of this approach is that *no* local code or setup is
required on the Windows client beyond making sure you have a basic
install of ssh and cygwin.

All the bash code is included in the host.pl file so it's all compact
and easy to manage -- even if the code is hairy. Indeed I wrote it so
that multiple Windows hosts can all point to the same canonical
host.pl file...

IMO this is much superior to other approaches that require separate
scripts either on the server or client -- including my old
'shadowmountrsync' approach that used all types of hacks and ran on
the client.

When I get it tested and cleaned up, I will share with the group..
I also still want to add the option to run 'subinacl' and/or 'getfacl'
on the local machine and store the results (again using a combination
of perl/ssh/bash code) so that one can have as complete a copy of the
disk as possible. Note that rsync doesn't capture all the acl detail
that NTFS uses.

I also hope to test whether the above plus disk partition/signature is
enough to get essentially a bare-metal restore capability for Windows.

The only thing that might be missing would be rarely used NTFS
functionality like alternate filestreams and perhaps some challenges
with junctions, though I could add something to back up the junctions too...

Craig Barratt via BackupPC-users wrote at about 21:00:09 -0700 on Wednesday, 
June 24, 2020:
 > Jeff,
 > 
 > The reason BackupPC avoids running shells for sub-commands is security, and
 > the extra layer of argument escaping or quoting.  It's easy to
 > inadvertently have some security weakness from misconfiguration or misuse.
 > 
 > Can you get what you need by starting the command with "/bin/bash -c"?  You
 > can alternatively set $Conf{DumpPreUserCmd} to a shell script with the
 > arguments you need, and then you can do whatever you want in that script.
 > 
 > Craig
 > 
 > On Wed, Jun 24, 2020 at 10:20 AM  wrote:
 > 
 > > I notice that in Lib.pm, the function 'cmdSystemOrEvalLong'
 > > specifically uses the structure 'exec {$cmd->[0]} @$cmd;' so that no
 > > shell is invoked.
 > >
 > > I know that technically it's a little faster to avoid calling the
 > > shell, but in many cases it is very useful to have at least a
 > > rudimentary shell available.
 > >
 > > For example, I may want to read in (rather than execute a script).
 > >
 > > Specifically say,
 > > (1)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
 > > $Conf{RsyncdUserName} \$hostIP bash -s <
 > > /etc/backuppc/scripts/script-\$hostIP)
 > > would allow me to run a hostIP specific script that I store in
 > > /etc/backuppc/scripts.
 > >
 > > - This is neater and easier to maintain than having to store the script
 > >   on the remote machine.
 > > - This also seems neater and nicer than having to use an executable
 > >   script that would itself need to run ssh -- plus importantly it
 > >   removes a layer of indirection and messing with extra quoting.
 > >
 > >
 > > Similarly, it would be great to be able to support:
 > > (2)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
 > > $Conf{RsyncdUserName} \$hostIP bash -s <<EOF
 > > EOF)
 > >
 > > Or similarly:
 > > (3)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
 > > $Conf{RsyncdUserName} \$hostIP bash -s <<< $bashscript
 > > where for example
 > > my $bashscript = <<'EOF'
 > > 
 > > EOF
 > >
 > > Though this latter form is a bash-ism and would not work in /bin/sh
 > >
 > > The advantage of the latter examples is that it would allow me to
 > > store the bashscript in the actual host.pl config scripts rather than
 > > having to have a separate set of scripts to load.
 > >
 > > Note that I am able to roughly replicate (3) using perl code, but it
 > > requires extra layers of escaping of metacharacters making it hard to
 > > write, read, and debug.
 > >
 > > For example something like:
 > > my $bashscript = <<'EOF';
 

Re: [BackupPC-users] Is there a reason that DumpPreUserCmd (and its analogs) are executed without a shell?

2020-06-24 Thread Craig Barratt via BackupPC-users
Jeff,

The reason BackupPC avoids running shells for sub-commands is security, and
the extra layer of argument escaping or quoting.  It's easy to
inadvertently have some security weakness from misconfiguration or misuse.

Can you get what you need by starting the command with "/bin/bash -c"?  You
can alternatively set $Conf{DumpPreUserCmd} to a shell script with the
arguments you need, and then you can do whatever you want in that script.

Craig

On Wed, Jun 24, 2020 at 10:20 AM  wrote:

> I notice that in Lib.pm, the function 'cmdSystemOrEvalLong'
> specifically uses the structure 'exec {$cmd->[0]} @$cmd;' so that no
> shell is invoked.
>
> I know that technically it's a little faster to avoid calling the
> shell, but in many cases it is very useful to have at least a
> rudimentary shell available.
>
> For example, I may want to read in (rather than execute a script).
>
> Specifically say,
> (1)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <
> /etc/backuppc/scripts/script-\$hostIP)
> would allow me to run a hostIP specific script that I store in
> /etc/backuppc/scripts.
>
> - This is neater and easier to maintain than having to store the script
>   on the remote machine.
> - This also seems neater and nicer than having to use an executable
>   script that would itself need to run ssh -- plus importantly it
>   removes a layer of indirection and messing with extra quoting.
>
>
> Similarly, it would be great to be able to support:
> (2)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <<EOF
> EOF)
>
> Or similarly:
> (3)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <<< $bashscript
> where for example
> my $bashscript = <<'EOF'
> 
> EOF
>
> Though this latter form is a bash-ism and would not work in /bin/sh
>
> The advantage of the latter examples is that it would allow me to
> store the bashscript in the actual host.pl config scripts rather than
> having to have a separate set of scripts to load.
>
> Note that I am able to roughly replicate (3) using perl code, but it
> requires extra layers of escaping of metacharacters making it hard to
> write, read, and debug.
>
> For example something like:
> my $bashscript = <<'EOF';
> 
> EOF
>
> $bashscript =~ s/([][;&()<>{}|^\n\r\t *\$\\'"`?])/\\$1/g;
> $Conf{DumpPreUserCmd} = qq(&{sub {
> open(my \$out_fh, "|-", "\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s")
> or warn "Can't start ssh: \$!";
> print \$out_fh qq($bashscript);
> close \$out_fh or warn "Error flushing/closing pipe to ssh: \$!";
> }})
>
> Though it doesn't quite work yet...
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Richard Shaw
On Wed, Jun 24, 2020 at 12:19 PM Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Mike,
>
> It's possible you have two different versions of perl installed, or for
> some reason the BackupPC user is seeing an old version of BackupPC::XS.
>
> Try some of the suggestions here:
> https://github.com/backuppc/backuppc/issues/351.
>

Yes, I just did a fresh install and didn't have any issues (with
BackupPC-XS). I DID find that /var/run/BackupPC is not created by the
package and cannot be created automatically since BackupPC is run as the
backuppc user. Looking into that now.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Is there a reason that DumpPreUserCmd (and its analogs) are executed without a shell?

2020-06-24 Thread
I notice that in Lib.pm, the function 'cmdSystemOrEvalLong'
specifically uses the structure 'exec {$cmd->[0]} @$cmd;' so that no
shell is invoked.

I know that technically it's a little faster to avoid calling the
shell, but in many cases it is very useful to have at least a
rudimentary shell available.

For example, I may want to read in (rather than execute a script).

Specifically say,
(1)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l 
$Conf{RsyncdUserName} \$hostIP bash -s < /etc/backuppc/scripts/script-\$hostIP)
would allow me to run a hostIP specific script that I store in 
/etc/backuppc/scripts.

- This is neater and easier to maintain than having to store the script
  on the remote machine.
- This also seems neater and nicer than having to use an executable
  script that would itself need to run ssh -- plus importantly it
  removes a layer of indirection and messing with extra quoting.


Similarly, it would be great to be able to support:
(2)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l 
$Conf{RsyncdUserName} \$hostIP bash -s <<EOF
EOF)

Or similarly:
(3)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l 
$Conf{RsyncdUserName} \$hostIP bash -s <<< $bashscript
where for example
my $bashscript = <<'EOF'

EOF

Though this latter form is a bash-ism and would not work in /bin/sh

The advantage of the latter examples is that it would allow me to
store the bashscript in the actual host.pl config scripts rather than
having to have a separate set of scripts to load.

Note that I am able to roughly replicate (3) using perl code, but it
requires extra layers of escaping of metacharacters making it hard to
write, read, and debug.

For example something like:
my $bashscript = <<'EOF';
 
EOF

$bashscript =~ s/([][;&()<>{}|^\n\r\t *\$\\'"`?])/\\$1/g;
$Conf{DumpPreUserCmd} = qq(&{sub {
open(my \$out_fh, "|-", "\$sshPath -q -x -i $BackupPCsshID -l 
$Conf{RsyncdUserName} \$hostIP bash -s")
or warn "Can't start ssh: \$!";
print \$out_fh qq($bashscript);
close \$out_fh or warn "Error flushing/closing pipe to ssh: \$!";
}})

Though it doesn't quite work yet...



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Craig Barratt via BackupPC-users
Mike,

It's possible you have two different versions of perl installed, or for
some reason the BackupPC user is seeing an old version of BackupPC::XS.

Try some of the suggestions here:
https://github.com/backuppc/backuppc/issues/351.

Craig

On Wed, Jun 24, 2020 at 10:12 AM Richard Shaw  wrote:

> On Wed, Jun 24, 2020 at 11:58 AM Mike Hughes  wrote:
>
>> I'm getting a service startup failure claiming my version of BackupPC-XS
>> isn't up-to-snuff but it appears to meet the requirements:
>>
>> BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s
>>
>
> I don't have a CentOS 7 machine handy so I'm downloading the minimal ISO
> for boxes...
>
> Thanks,
> Richard
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Richard Shaw
On Wed, Jun 24, 2020 at 11:58 AM Mike Hughes  wrote:

> I'm getting a service startup failure claiming my version of BackupPC-XS
> isn't up-to-snuff but it appears to meet the requirements:
>
> BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s
>

I don't have a CentOS 7 machine handy so I'm downloading the minimal ISO
for boxes...

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Mike Hughes
I'm getting a service startup failure claiming my version of BackupPC-XS isn't 
up-to-snuff but it appears to meet the requirements:

BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s

# rpm -qa | grep -i backuppc
BackupPC-XS-0.62-1.el7.x86_64
BackupPC-4.4.0-1.el7.x86_64


From: Richard Shaw 
Sent: Tuesday, June 23, 2020 10:47 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Tue, Jun 23, 2020 at 8:24 AM Mike Hughes <m...@visionary.com> wrote:
Thanks so much Richard! Will COPR installations auto-update via yum repository 
updates or do we need to specifically run a COPR update manually?

Yes, as long as you install the repo file it will work just like any other 
repository.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-23 Thread Richard Shaw
On Tue, Jun 23, 2020 at 8:24 AM Mike Hughes  wrote:

> Thanks so much Richard! Will COPR installations auto-update via yum
> repository updates or do we need to specifically run a COPR update manually?
>

Yes, as long as you install the repo file it will work just like any other
repository.
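
For reference, on CentOS 7 that might look roughly like this (a sketch; the
plugin and package names are assumptions, so adjust to your setup):

yum install -y yum-plugin-copr             # provides the "yum copr" subcommand
yum copr enable hobbes1069/BackupPC        # drops the repo file into /etc/yum.repos.d/
yum install -y BackupPC BackupPC-XS
yum update                                 # later releases then arrive like any other update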

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-23 Thread Robert E. Wooden

On 6/23/2020 8:22 AM, Mike Hughes wrote:
Thanks so much Richard! Will COPR installations auto-update via yum 
repository updates or do we need to specifically run a COPR update 
manually?




I have been using COPR for a few years and yes, it will update when you "yum 
update".

--

Bob Wooden

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-23 Thread Mike Hughes
Thanks so much Richard! Will COPR installations auto-update via yum repository 
updates or do we need to specifically run a COPR update manually?

From: Richard Shaw 
Sent: Monday, June 22, 2020 7:01 PM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

Builds complete and updates submitted for Fedora and CentOS 8

https://bodhi.fedoraproject.org/updates/?packages=BackupPC

CentOS 7 builds available via COPR:

https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-22 Thread Richard Shaw
Builds complete and updates submitted for Fedora and CentOS 8

https://bodhi.fedoraproject.org/updates/?packages=BackupPC

CentOS 7 builds available via COPR:

https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC 4.4.0 released

2020-06-22 Thread Craig Barratt via BackupPC-users
BackupPC 4.4.0 has been released on Github.

This release contains several new features and some bug fixes. New features
include:

   - any full/filled backup can be marked for keeping, which prevents any
   expiry or deletion
   - any backup can be annotated with a comment (eg, "prior to upgrade of
   xyz")
   - added metrics CGI (thanks to @jooola) that replaces RSS and adds
   Prometheus support
   - tar XferMethod now supports xattrs and acls
   - rsync XferMethod now correctly supports xattrs on directories and
   symlinks
   - nightly pool scanning now verifies the md5 digests of a configurable
   fraction of pool files
   - code runs through perltidy so format is now uniform (thanks to @jooola,
   with help from @shancock9 and @moisseev)

New versions of BackupPC::XS (0.62) and rsync-bpc (3.0.9.15, 3.1.2.2 or
3.1.3beta0) are required.

Thanks to Jeff Kosowsky for extensive testing and debugging for this
release, particularly around xattrs.

Enjoy!

Craig

Here are the more detailed changes:

   - Merged pull requests #325, #326, #329, #330, #334, #336, #337, #338,
   #342, #343, #344, #345, #347, #348, #349
   - Filled/Full backups can now be marked as "keep", which excludes them
   from any expiry/deletion. Also, a backup-specific comment can be added to
   any backup to capture any important information about that backup (eg,
   "pre-upgrade of xyz").
   - Added metrics CGI, which adds Prometheus support and replaces RSS, by
   @jooola (#344, #347)
   - Tar XferMethod now supports xattrs and acls; xattrs should be
   compatible with rsync XferMethod, but acls are not
   - Sort open directories to top when browsing backup tree
   - Format code using perltidy, and included in pre-commit flow, by @jooola
   (#334, #337, #342, #343, #345). Thanks to @jooola and @shancock9 (perltidy
   author) for significant effort and support, plus improvements in perltidy,
   to make this happen.
   - Added $Conf{PoolNightlyDigestCheckPercent}, which checks the md5
   digests of this fraction of the pool files each night.
   - $Conf{ClientShareName2Path} is saved in the backups file, and the share
   to client path mapping is now displayed when you browse a backup, so you
   know the actual client backup path for each share, if different from the
   share name
   - configure.pl now checks the per-host config.pl in a V3 upgrade to warn
   the user if $Conf{RsyncClientCmd} or $Conf{RsyncClientRestoreCmd} are used
   for that host, so that the new settings $Conf{RsyncSshArgs} and
   $Conf{RsyncClientPath} can be manually updated.
   - Fixed host mutex handling for dhcp hosts; shifted initial mutex
   requests to client programs
   - Updated webui icon, logo and favicon, by @moisseev (#325, #326, #329,
   #330)
   - Added $Conf{RsyncRestoreArgsExtra} for host-specific restore settings
   - Language files now all use utf8 charsets
   - Bumped required version of BackupPC::XS to 0.62 and rsync-bpc to
   3.0.9.15.
   - Ping failure message is written to stdout only if verbose
   - BackupPC_backupDelete removes partial v3 backup in HOST/new; fixes #324
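A minimal sketch of checking and then setting the new pool-verification and
restore-args options (the config.pl location varies by distro, and the values
shown are illustrative, not defaults):

# Show the current settings, then edit them in the main config file
grep -E 'PoolNightlyDigestCheckPercent|RsyncRestoreArgsExtra' /etc/BackupPC/config.pl
# Illustrative values:
#   $Conf{PoolNightlyDigestCheckPercent} = 1;   # verify ~1% of pool md5 digests each night
#   $Conf{RsyncRestoreArgsExtra}         = [];  # extra rsync arguments for restores, per host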

Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-18 Thread
There was a 3rd party script for 3.x contributed by Matthias Meyer that I
have used successfully.  I made a few small changes to the script...



BackupPC_deleteBackup
Description: Binary data

Craig Barratt via BackupPC-users wrote at about 08:58:57 -0700 on Thursday, 
June 18, 2020:
 > Stefan,
 > 
 > BackupPC_backupDelete is only available in 4.x.
 > 
 > Craig
 > 
 > On Thu, Jun 18, 2020 at 2:33 AM Stefan Schumacher <
 > stefan.schumac...@net-federation.de> wrote:
 > 
 > >
 > > > If you want to remove a backup, best to use a script built to do it
 > > > right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x
 > > > but
 > > > it exists out there.
 > > >
 > > >
 > >
 > > Hello,
 > >
 > > in that case I would be grateful if someone could share the link to
 > > this script with me.
 > >
 > > Thanks in advance
 > > Stefan
 > >
 > >
 > > Stefan Schumacher
 > > Systemadministrator
 > >
 > > NetFederation GmbH
 > > Sürther Hauptstraße 180 B -
 > > Fon:+49 (0)2236/3936-701
 > >
 > > E-Mail:  stefan.schumac...@net-federation.de
 > > Internet:   http://www.net-federation.de
 > > Besuchen Sie uns doch auch auf facebook, twitter, Google+, flickr,
 > > Slideshare, XING oder unserem Blog. Wir freuen uns!
 > >
 > >
 > >
 > >
 > >
 > >
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-18 Thread Craig Barratt via BackupPC-users
Stefan,

BackupPC_backupDelete is only available in 4.x.

Craig

On Thu, Jun 18, 2020 at 2:33 AM Stefan Schumacher <
stefan.schumac...@net-federation.de> wrote:

>
> > If you want to remove a backup, best to use a script built to do it
> > right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x
> > but
> > it exists out there.
> >
> >
>
> Hello,
>
> in that case I would be grateful if someone could share the link to
> this script with me.
>
> Thanks in advance
> Stefan
>
>
> Stefan Schumacher
> Systemadministrator
>
> NetFederation GmbH
> Sürther Hauptstraße 180 B -
> Fon:+49 (0)2236/3936-701
>
> E-Mail:  stefan.schumac...@net-federation.de
> Internet:   http://www.net-federation.de
> Besuchen Sie uns doch auch auf facebook, twitter, Google+, flickr,
> Slideshare, XING oder unserem Blog. Wir freuen uns!
>
>
>
>
>
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-18 Thread Stefan Schumacher

> If you want to remove a backup, best to use a script built to do it
> right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x
> but
> it exists out there.
>
>

Hello,

in that case I would be grateful if someone could share the link to
this script with me.

Thanks in advance
Stefan


Stefan Schumacher
Systemadministrator

NetFederation GmbH
Sürther Hauptstraße 180 B -
Fon:+49 (0)2236/3936-701

E-Mail:  stefan.schumac...@net-federation.de
Internet:   http://www.net-federation.de
Besuchen Sie uns doch auch auf facebook, twitter, Google+, flickr, Slideshare, 
XING oder unserem Blog. Wir freuen uns!







___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] pings to ... have failed 114 consecutive times

2020-06-17 Thread Norman Goldstein

I managed to resolve the problem, but I don't understand the solution:
In the hosts file, I deselected "dhcp", even though the host is assigned 
an address by my router (although I did give the host a permanent 
192.168.* address in the router).


The reason I tried this is that I stepped through the BackupPC script in the
Perl debugger into the method QueueOnePC, and because $Hosts->{$host}{dhcp} is
non-zero, the backup request was not being queued (line 1885).


??
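
For anyone hitting the same thing, the dhcp flag is the second column of the
hosts file; a sketch of what a working entry looks like (the path and host
name are placeholders):

grep -v '^#' /etc/BackupPC/hosts
#   host        dhcp    user        moreUsers
#   melodic     0       backuppc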




On 2020-06-17 9:36 a.m., Tim Evans wrote:

On 6/17/20 12:28 PM, Norman Goldstein wrote:
Sorry for possible re-posting -- am having trouble with this mailing 
list.

Rats, previous email sent to "users-owner" by mistake.

--

Am running BPC 4.3.2 on fedora 32 x86-64.  I am able to run /ping/ 
from the command line as backuppc and as root (and as myself), and 
can successfully do backups from the command line using 
BackupPC_dump, but the GUI interface always puts a backup request to 
be idle, and shows ping requests always failing (which, I assume, is 
what puts the backup request to be idle).


My file system is not near 95% full, and I have full privilege in the 
GUI to edit all the Server and pc-specific config files. There are a 
fair number of posts on the net re "ping" issues with BPC, but I 
haven't been able to resolve this on my machine.  When making a 
manual backup request from the GUI, the only relevant log entry is in 
the Server LOG file:


2020-06-12 11:07:43 User backuppc requested backup of melodic (melodic)


Make sure your full path to the ping executable is correct in the setup.






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Supported Windows versions

2020-06-17 Thread backuppc
All Windows versions are supported via multiple modalities: rsync,
rsyncd, SMB, etc.
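
For example, a quick SMB connectivity check from the BackupPC server before
configuring a host (host, share and credentials are placeholders):

smbclient //winhost/DATA -U 'DOMAIN\administrator' -c 'ls'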

Fernando Miranda wrote at about 18:27:19 +0100 on Wednesday, June 17, 2020:
 > Hi,
 > 
 > I'm starting an analysis to choose an open source sw backup, so I have some
 > basic doubts (sorry if these are very simple questions).
 > 
 > As for BackuPC I read that Windows supported versions are 95, 98, 2000 and
 > XP clients, but is really only this? What about for later versions (and
 > server versions), any information even from "user experience" only?
 > 
 > Thanks,
 > Fernando Miranda
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Supported Windows versions

2020-06-17 Thread Fernando Miranda
Hi,

I'm starting an analysis to choose an open source sw backup, so I have some
basic doubts (sorry if these are very simple questions).

As for BackupPC, I read that the supported Windows client versions are 95, 98,
2000 and XP, but is it really only these? What about later versions (and server
versions)? Any information, even from "user experience" only, is welcome.

Thanks,
Fernando Miranda
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] pings to ... have failed 114 consecutive times

2020-06-17 Thread Tim Evans

On 6/17/20 12:28 PM, Norman Goldstein wrote:

Sorry for possible re-posting -- am having trouble with this mailing list.
Rats, previous email sent to "users-owner" by mistake.

--

Am running BPC 4.3.2 on fedora 32 x86-64.  I am able to run /ping/ from 
the command line as backuppc and as root (and as myself), and can 
successfully do backups from the command line using BackupPC_dump, but 
the GUI interface always puts a backup request to be idle, and shows 
ping requests always failing (which, I assume, is what puts the backup 
request to be idle).


My file system is not near 95% full, and I have full privilege in the 
GUI to edit all the Server and pc-specific config files.  There are a 
fair number of posts on the net re "ping" issues with BPC, but I haven't 
been able to resolve this on my machine.  When making a manual backup 
request from the GUI, the only relevant log entry is in the Server LOG file:


2020-06-12 11:07:43 User backuppc requested backup of melodic (melodic)


Make sure your full path to the ping executable is correct in the setup.
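
A minimal check of both the configured path and whether the backuppc user can
actually reach the client (the host name is a placeholder):

grep -E 'PingPath|PingCmd' /etc/BackupPC/config.pl
sudo -u backuppc /usr/bin/ping -c 1 melodic    # use the exact binary that PingPath points to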


--
Tim Evans   |5 Chestnut Court
443-394-3864|Owings Mills, MD 21117


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread backuppc
Stefan Schumacher wrote at about 12:43:44 + on Wednesday, June 17, 2020:
 > 
 > > Yes, with backuppc 3.3, you can safely delete any incremental and
 > > full
 > > prior to the full backup that you want to keep. You can't just keep
 > > the
 > > latest incremental though (there are some options if that is what
 > > you
 > > really need).

You can keep incrementals so long as the preceding fulls (and lower-level
incrementals) remain.
> >
 > 
 > > Keep in mind though, that:
 > > a) websites tend to be a lot of text (php, html, css, etc) which all
 > > compresses really well
 > > b) website content may not change a lot, and with the dedupe, you
 > > may
 > > not save a lot of space anyway
 > 
 > Hello,
 > 
 > thanks for your input. I already have found out that I should not
 > delete  the log files unter /var/lib/backuppc/pc/example.netfed.de/
 > because now it shows zero backups. Good that I tried it on an
 > unimportant system. Do I assume correctly that I can delete the
 > directories themselves safely and they will not be shown in the
 > Webinterface anymore?

You really need to understand how BackupPC 3.x works.
Deleting the backups alone will not recover a *single* byte of storage
as you will only be removing a hard link to the pool file. Plus it
will mess up the web interface etc.

I *strongly*, *STRONGLY* recommend against manually messing with
deleting/copying/moving/renaming etc. raw backup directories unless
you truly know what you are doing.

If you want to remove a backup, best to use a script built to do it
right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x but
it exists out there.
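
To see why removing pc-tree files alone frees nothing in 3.x, look at the
hard-link count of any backed-up file (the path here is hypothetical):

stat -c '%h links: %n' /var/lib/backuppc/pc/somehost/123/f%2fdata/fsomefile
# "2 links: ..." means the pool still holds a second link, so the data stays on disk.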



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] pings to ... have failed 114 consecutive times

2020-06-17 Thread Norman Goldstein

Sorry for possible re-posting -- am having trouble with this mailing list.
Rats, previous email sent to "users-owner" by mistake.

--

Am running BPC 4.3.2 on fedora 32 x86-64.  I am able to run /ping/ from 
the command line as backuppc and as root (and as myself), and can 
successfully do backups from the command line using BackupPC_dump, but 
the GUI interface always puts a backup request to be idle, and shows 
ping requests always failing (which, I assume, is what puts the backup 
request to be idle).


My file system is not near 95% full, and I have full privilege in the 
GUI to edit all the Server and pc-specific config files.  There are a 
fair number of posts on the net re "ping" issues with BPC, but I haven't 
been able to resolve this on my machine.  When making a manual backup 
request from the GUI, the only relevant log entry is in the Server LOG file:


2020-06-12 11:07:43 User backuppc requested backup of melodic (melodic)

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Merging instances of backuppc

2020-06-17 Thread
In v4, I wanted to confirm that the following approach would work for
merging together backups for 2 different instances of backuppc.

For convenience, assume we are copying instance #1 onto instance #2

1. Copy over Cpool/pool
   Copy cpool/pool from instance #1 to instance #2 using 'cp -an' to
   avoid clobbering.
   I am willing to "assume" the very low risk of md5sum collisions --
   (which frankly exists even with rsync-bpc).


2. Copy over pc tree
   For non-overlapping machines, just copy over the machine.

   For machines existing on both instances, renumber before copying as
   necessary to avoid conflicting numbers (plus make sure that
   incremental/full and filled/unfilled chains are not interrupted)

3. Run 'BackupPC_fsck -f' to adjust refCnt's and poolCnt's


Any thoughts?
Any better ways?
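
A rough sketch of those steps as shell commands, assuming instance #1's storage
is mounted read-only at /mnt/instance1 and a default TopDir on instance #2 (all
paths are assumptions):

# 1. Merge the pools without clobbering existing files
cp -an /mnt/instance1/cpool/. /var/lib/backuppc/cpool/
cp -an /mnt/instance1/pool/.  /var/lib/backuppc/pool/

# 2. Copy pc trees for hosts that exist only on instance #1
cp -a /mnt/instance1/pc/hostX /var/lib/backuppc/pc/

# 3. Rebuild the reference counts
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_fsck -f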


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread Systems
I made a simple guide from my walkthrough of upgrading BackupPC from 3 to 4.

I'm still learning, so please follow at your own risk (and feel free to correct
any of my commands).

https://technical.network/how-to-upgrade-backuppc-v3-3-1-to-v4-3-0-in-centos-6/ 

Thanks
Ibrahim

-Original Message-
From: G.W. Haywood via BackupPC-users  
Sent: 17 June 2020 14:04
To: backuppc-users@lists.sourceforge.net
Cc: G.W. Haywood 
Subject: Re: [BackupPC-users] Keep only one Full Backup as Archive

Hi there,

On Wed, 17 Jun 2020, J.J. Kosowsky wrote:

> ...
> FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
> ...
> To all those out there still using 3.x, if you haven't tried upgrading 
> to 4.x yet, I suggest you do. If you have, I suggest you try again.
> ...

For the record, I'm one of those who had tried 4.x a few years ago and been 
bitten by it.  So I put it back in the tar.gz and stayed with 3.x for a while 
longer.  About a year ago I did try again, and things went very much better.  I 
now believe that Jeff is right in all he says, so

+1

-- 

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread Adam Goryachev



On 17/6/20 22:43, Stefan Schumacher wrote:

Yes, with backuppc 3.3, you can safely delete any incremental and
full
prior to the full backup that you want to keep. You can't just keep
the
latest incremental though (there are some options if that is what
you
really need).

Keep in mind though, that:
a) websites tend to be a lot of text (php, html, css, etc) which all
compresses really well
b) website content may not change a lot, and with the dedupe, you
may
not save a lot of space anyway

Hello,

thanks for your input. I already have found out that I should not
delete  the log files unter /var/lib/backuppc/pc/example.netfed.de/
because now it shows zero backups. Good that I tried it on an
unimportant system. Do I assume correctly that I can delete the
directories themselves safely and they will not be shown in the
Webinterface anymore?


It's been a long time since V3, but from memory, you would need to edit 
the "backups" file to remove the old entries, and prevent them showing 
up on the web interface. You might be able to delete the folders, and I 
think there is some script/process to attempt to "repair" the backups 
file, but I just edited it by hand the small number of times it was needed.
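
As a concrete (and entirely at-your-own-risk) sketch of that hand edit,
assuming the usual tab-delimited layout with the backup number in the first
column, default paths, and that backups 0-15 were the ones removed:

cd /var/lib/backuppc/pc/example-host
cp backups backups.orig                       # keep a copy first, as suggested below
awk -F'\t' '$1 >= 16' backups.orig > backups  # keep only the rows for the remaining backups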


I would guess you can delete log files for backups you delete, but you 
must keep log files for the backups you are keeping


Definitely better to ensure you keep a backup of any changes you make

No responsibility taken for any errors caused by the information 
provided, so be careful


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 17 Jun 2020, J.J. Kosowsky wrote:


...
FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
...
To all those out there still using 3.x, if you haven't tried upgrading
to 4.x yet, I suggest you do. If you have, I suggest you try again.
...


For the record, I'm one of those who had tried 4.x a few years ago and
been bitten by it.  So I put it back in the tar.gz and stayed with 3.x
for a while longer.  About a year ago I did try again, and things went
very much better.  I now believe that Jeff is right in all he says, so

+1

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread Stefan Schumacher

> Yes, with backuppc 3.3, you can safely delete any incremental and
> full
> prior to the full backup that you want to keep. You can't just keep
> the
> latest incremental though (there are some options if that is what
> you
> really need).
>

> Keep in mind though, that:
> a) websites tend to be a lot of text (php, html, css, etc) which all
> compresses really well
> b) website content may not change a lot, and with the dedupe, you
> may
> not save a lot of space anyway

Hello,

thanks for your input. I have already found out that I should not
delete the log files under /var/lib/backuppc/pc/example.netfed.de/
because now it shows zero backups. Good that I tried it on an
unimportant system. Am I correct in assuming that I can safely delete
the backup directories themselves and that they will no longer be shown
in the web interface?

Yours sincerely
Stefan


Stefan Schumacher
Systemadministrator

NetFederation GmbH
Sürther Hauptstraße 180 B -
Fon:+49 (0)2236/3936-701

E-Mail:  stefan.schumac...@net-federation.de
Internet:   http://www.net-federation.de
Besuchen Sie uns doch auch auf facebook, twitter, Google+, flickr, Slideshare, 
XING oder unserem Blog. Wir freuen uns!







___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-16 Thread backuppc
Stefan Schumacher wrote at about 11:29:14 + on Tuesday, June 16, 2020:
 > Hello,
 > 
 > I use Backuppc to backup VMs running mostly Webservers and a few custom
 > services. As everyone knows, Websites have a lifetime and at a certain
 > point the customer wishes for the site to be taken offline. We have one
 > Backuppc which we use for one big, special customer who wants a
 > FullKeepCnt of 4,0,12,0,0,0,10.
 > 
 > Now I have multiple websites which for which I have deactivated the
 > backup, but which still have multiple full and incremental backups
 > stored - up to 17 full backups to be exact.
 > 
 > Is there a way to delete all but the latest full backup and still be be
 > able to restore the website on demand? Is this technically possible or
 > will this clash with the pooling and deduplication functions of
 > backuppc? How should I proceed? I am still using Backuppc 3.3, because
 > of problems with backuppc4. (No need to go into details here)

FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
- Getting rid of all the hard-links and using full-file md5sums for the
  pool digests is infinitely cleaner, simpler, and faster.
- It makes it so much easier and faster to archive a copy of your backups.
- It can reliably back-up and restore xattributes (e.g., SELinux) as
  well as ACLs -- making a perfect restore possible.
- Specifically, if rsync can make a perfect copy, then BackupPC can do
  a perfect restore.
- It also seems quite stable.

Finally, all the new dev work is being done on 4.x so 3.x is
effectively end-of-life other than perhaps simple/critical bug fixes.

To all those out there still using 3.x, if you haven't tried upgrading
to 4.x yet, I suggest you do. If you have, I suggest you try again.
BackupPC_migrateV3toV4 does a great job of converting 3.x backups to
4.x backups.
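
A minimal sketch of a typical run, assuming a packaged install under
/usr/share/backuppc and that the server is stopped first (the -a / -h options
are from memory, so confirm them against the script's usage output):

systemctl stop backuppc
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_migrateV3toV4 -a         # all hosts
# or one host at a time:
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_migrateV3toV4 -h myhost
systemctl start backuppc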

If you have questions or need help, I suggest you ask for assistance
on the group as people are always willing and able to answer
questions.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-16 Thread Adam Goryachev



On 16/6/20 21:29, Stefan Schumacher wrote:

Hello,

I use Backuppc to backup VMs running mostly Webservers and a few custom
services. As everyone knows, Websites have a lifetime and at a certain
point the customer wishes for the site to be taken offline. We have one
Backuppc which we use for one big, special customer who wants a
FullKeepCnt of 4,0,12,0,0,0,10.

Now I have multiple websites which for which I have deactivated the
backup, but which still have multiple full and incremental backups
stored - up to 17 full backups to be exact.

Is there a way to delete all but the latest full backup and still be be
able to restore the website on demand? Is this technically possible or
will this clash with the pooling and deduplication functions of
backuppc? How should I proceed? I am still using Backuppc 3.3, because
of problems with backuppc4. (No need to go into details here)

Yes, with backuppc 3.3, you can safely delete any incremental and full 
prior to the full backup that you want to keep. You can't just keep the 
latest incremental though (there are some options if that is what you 
really need).


Keep in mind though, that:
a) websites tend to be a lot of text (php, html, css, etc) which all 
compresses really well
b) website content may not change a lot, and with the dedupe, you may 
not save a lot of space anyway


Just my comments, you might be talking about a website like youtube with 
mostly video content, and massive amounts of it, so YMMV.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Keep only one Full Backup as Archive

2020-06-16 Thread Stefan Schumacher
Hello,

I use Backuppc to backup VMs running mostly Webservers and a few custom
services. As everyone knows, Websites have a lifetime and at a certain
point the customer wishes for the site to be taken offline. We have one
Backuppc which we use for one big, special customer who wants a
FullKeepCnt of 4,0,12,0,0,0,10.

Now I have multiple websites for which I have deactivated the
backup, but which still have multiple full and incremental backups
stored - up to 17 full backups, to be exact.

Is there a way to delete all but the latest full backup and still be
able to restore the website on demand? Is this technically possible or
will this clash with the pooling and deduplication functions of
backuppc? How should I proceed? I am still using Backuppc 3.3, because
of problems with backuppc4. (No need to go into details here)

Thanks in Advance
Stefan


Stefan Schumacher
Systemadministrator

NetFederation GmbH
Sürther Hauptstraße 180 B -
Fon:+49 (0)2236/3936-701

E-Mail:  stefan.schumac...@net-federation.de
Internet:   http://www.net-federation.de
Besuchen Sie uns doch auch auf facebook, twitter, Google+, flickr, Slideshare, 
XING oder unserem Blog. Wir freuen uns!







___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Simple Bash shell function for locating and reading pool files

2020-06-14 Thread
I corrected a small error in the script and allowed for setting TopDir
with an optional second argument



function BackupPC_zcatPool ()
{
    local BACKUPPC_ZCAT=$(which BackupPC_zcat)
    local TOPDIR=/var/lib/backuppc
    [ -n "$2" -a -d "$2" ] && TOPDIR=$2
    local CPOOL="$TOPDIR/cpool"
    local POOL="$TOPDIR/pool"
    [ -n "$BACKUPPC_ZCAT" ] || BACKUPPC_ZCAT=/usr/share/backuppc/bin/BackupPC_zcat

    local file=${1##*/} #Strip the path prefix
    #If attrib file...
    file=${file/attrib[0-9a-f][0-9a-f]_/attrib_} #Convert inode format attrib to normal attrib
    file=${file##attrib_} #Extract the md5sum from the attrib file name

    local ABCD=$(printf '%04x' "$(( 0x${file:0:4} & 0xfefe ))")
    local prefix="${ABCD:0:2}/${ABCD:2:2}"
#   echo $prefix

    if [ -e "$CPOOL/$prefix/$file" ]; then #V4 - cpool
        $BACKUPPC_ZCAT $CPOOL/$prefix/$file
    elif [ -e "$POOL/$prefix/$file" ]; then #V4 - pool (uncompressed, so plain cat)
        cat $POOL/$prefix/$file
    elif [ -e "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - cpool
        $BACKUPPC_ZCAT "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    elif [ -e "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - pool
        cat "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    else
        echo "Can't find pool file: $file" >/dev/stderr
    fi
}
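
Example invocations, after sourcing the function in your shell (the digest and
the alternate TopDir are placeholders):

BackupPC_zcatPool 0123456789abcdef0123456789abcdef
BackupPC_zcatPool attrib_0123456789abcdef0123456789abcdef /mnt/otherTopDir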

backu...@kosowsky.org wrote at about 11:16:53 -0400 on Thursday, June 11, 2020:
 > 
 > I frequently use the following Bash shell function to allow me to locate and
 > read pool/cpool files as I got tired of manually converting to the
 > pool heirarchy decoding. It's not very complicated, but helpful
 > 
 > 1. You can enter either:
 >- 32 hex character digest (lower case): <32hex>
 >- attrib file name with/without preceding path and either in the
 >  "normal" or inode form. i.e..
 > [/]attrib_<32hex>
 > [/]attrib<2hex>_<32hex>
 > 
 > 2. It works with pool or cpool
 > 3. It works with v3/v4
 > 
 > #
 > function BackupPC_zcatPool ()
 > {
 > local BACKUPPC_ZCAT=$(which BackupPC_zcat)
 > [ -n "$BACKUPPC_ZCAT" ] || 
 > BACKUPPC_ZCAT=/usr/share/backuppc/bin/BackupPC_zcat
 > [ -n "$CPOOL" ] || local CPOOL=/var/lib/backuppc/cpool
 > [ -n "$POOL" ] || local POOL=/var/lib/backuppc/pool
 > 
 > local file=${1##*/} #Strip the path prefix
 > #If attrib file...
 > file=${file/attrib[0-9a-f][0-9a-f]_/attrib_} #Convert inode format 
 > attrib to normal attrib
 > file=${file##attrib_} #Extract the md5sum from the attrib file name
 > 
 > local ABCD=$(printf '%x' "$(( 0x${file:0:4} & 0xfefe ))")
 > local prefix="${ABCD:0:2}/${ABCD:2:2}"
 > #   echo $prefix
 > 
 > if [ -e "$CPOOL/$prefix/$file" ]; then #V4 - cpool
 >  $BACKUPPC_ZCAT $CPOOL/$prefix/$file
 > elif [ -e "$POOL/$prefix/$file" ]; then #V4 - pool
 >  cat $CPOOL/$prefix/$file
 > elif [ -e "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 
 > - cpool
 >  $BACKUPPC_ZCAT "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
 > elif [ -e "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 
 > - pool
 >  cat "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
 > else
 >  echo "Can't find pool file: $file" >/dev/stderr
 > fi
 > }
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Simple Bash shell function for locating and reading pool files

2020-06-11 Thread backuppc


I frequently use the following Bash shell function to locate and
read pool/cpool files, as I got tired of manually decoding the pool
hierarchy. It's not very complicated, but it is helpful.

1. You can enter either:
   - 32 hex character digest (lower case): <32hex>
   - attrib file name with/without preceding path and either in the
 "normal" or inode form. i.e..
[/]attrib_<32hex>
[/]attrib<2hex>_<32hex>

2. It works with pool or cpool
3. It works with v3/v4

#
function BackupPC_zcatPool ()
{
    local BACKUPPC_ZCAT=$(which BackupPC_zcat)
    [ -n "$BACKUPPC_ZCAT" ] || BACKUPPC_ZCAT=/usr/share/backuppc/bin/BackupPC_zcat
    [ -n "$CPOOL" ] || local CPOOL=/var/lib/backuppc/cpool
    [ -n "$POOL" ] || local POOL=/var/lib/backuppc/pool

    local file=${1##*/} #Strip the path prefix
    #If attrib file...
    file=${file/attrib[0-9a-f][0-9a-f]_/attrib_} #Convert inode format attrib to normal attrib
    file=${file##attrib_} #Extract the md5sum from the attrib file name

    local ABCD=$(printf '%x' "$(( 0x${file:0:4} & 0xfefe ))")
    local prefix="${ABCD:0:2}/${ABCD:2:2}"
#   echo $prefix

    if [ -e "$CPOOL/$prefix/$file" ]; then #V4 - cpool
        $BACKUPPC_ZCAT $CPOOL/$prefix/$file
    elif [ -e "$POOL/$prefix/$file" ]; then #V4 - pool (uncompressed, so plain cat)
        cat $POOL/$prefix/$file
    elif [ -e "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - cpool
        $BACKUPPC_ZCAT "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    elif [ -e "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - pool
        cat "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    else
        echo "Can't find pool file: $file" >/dev/stderr
    fi
}


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to disable full backups and keep just incremental

2020-06-10 Thread daggs
Greetings Craig,

I use version 3.3.2

Dagg.

Sent: Wednesday, June 10, 2020 at 6:58 PM
From: "Craig Barratt via BackupPC-users" 
To: "General list for user discussion, questions and support" 
Cc: "Craig Barratt" 
Subject: Re: [BackupPC-users] how to disable full backups and keep just incremental


Dagg,
 

What version of BackupPC are you using?

Craig

 


On Wed, Jun 10, 2020 at 7:24 AM daggs  wrote:

Greetings,

I have two large (several hundreds gigabytes) backups that keep failing due to abort or child end unexpectedly. I ran it three times already.
I want to disable the full backup and keep the incremental, what is the proper way do that?

Thanks,

Dagg.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/





___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to disable full backups and keep just incremental

2020-06-10 Thread Craig Barratt via BackupPC-users
Dagg,

What version of BackupPC are you using?

Craig

On Wed, Jun 10, 2020 at 7:24 AM daggs  wrote:

> Greetings,
>
> I have two large (several hundreds gigabytes) backups that keep failing
> due to abort or child end unexpectedly. I ran it three times already.
> I want to disable the full backup and keep the incremental, what is the
> proper way do that?
>
> Thanks,
>
> Dagg.
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] how to disable full backups and keep just incremental

2020-06-10 Thread daggs
Greetings,

I have two large (several hundred gigabyte) backups that keep failing due to
an abort or the child ending unexpectedly. I have run them three times already.
I want to disable the full backups and keep only the incrementals; what is the
proper way to do that?

Thanks,

Dagg.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-10 Thread
Craig Barratt via BackupPC-users wrote at about 22:31:28 -0700 on Tuesday, June 
9, 2020:
 > Jeff,
 > 
 > The first method seems simpler.  Don't you just have to mv the file based
 > on BackupPC_zcat file | md5sum?  BackupPC_nightly shouldn't need to run
 > (other than to check you no longer get the missing error).

Yeah - that's exactly what I was saying... it's a 1 or 2 liner script.
 > 
 > Btw, where did you find the missing pool files?

I kept a copy of my old v3 pool... I also edited
BackupPC_migrateV3toV4 to save (i.e. rename) the old pc tree rather than delete
it... Then I could copy the v4 store over to a new disk, since there are no
messy hard links, leaving me with my intact v3...

Actually, it may not be a bad idea to add that as an option to
BackupPC_migrateV3toV4 to allow preservation of the old...

 > 
 > For the benefit of people on the list, Jeff and I are addressing the other
 > issues off-list.
 > 
 > Craig
 > 
 > On Tue, Jun 9, 2020 at 6:48 PM  wrote:
 > 
 > > Of course, the unanswered interesting question is why did this small
 > > number of 37 files out of about 3.5M pool files fail to migrate
 > > properly from v3 to v4...
 > >
 > > Note: I ran as many checks before and after as possible on the pool
 > > and pc heirarchy integrity (using my old v3 routines I had written) as
 > > well as checked error messages from the migration itself. I also of
 > > course had the BackupPC service off...
 > >
 > > "" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
 > >  > I found some of the missing v4 pool files (mentioned in an earlier
 > >  > post) in a full-disk backup of my old v3 setup.
 > >  >
 > >  > I would like to add them back to the v4 pool to eliminate the missing
 > >  > pool file messages and thus fix my backups.
 > >  >
 > >  > I can think of several ways:
 > >  >
 > >  > - Method A.
 > >  >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
 > >  >  file into a new file named by its uncompressed md5sum and then move
 > >  >  it appropriately into the v4 cpool 2-layer directory heirarchy.
 > >  >
 > >  >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
 > >  >  counts to coincide with the now correct pc-branch ref count
 > >  >
 > >  > - Method B
 > >  >   1. BackupPC_zcat the recovered files from the v3 pool into a new
 > >  >  directory. Naming of the files is immaterial.
 > >  >   2. Create a new temporary host and use that to backup the folder
 > >  >   3. *Manually* delete the host by deleting the entire host folder
 > >  >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
 > >  >
 > >  > - Method C
 > >  >   1. Use some native code or routines that Craig may already have
 > >  >  written that do most or all of the above
 > >  >
 > >  > Any thoughts on which of these work and which way is preferable?
 > >  >
 > >  > Jeff
 > >  >
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] attrib_0 files?

2020-06-10 Thread
I agree it doesn't cause any "harm" - it just seems somewhat random or
awkward to have some attrib_0 files and others not have any.

Before writing back an attrib file, why not just check if it is empty
and delete it rather than overwriting it with an attrib_0.
You probably make that check anyway, since a zero-length attrib file
would have an md5sum of "d41d8cd98f00b204e9800998ecf8427e".

In the end, less clutter and fewer inodes used should be better and faster...

Craig Barratt via BackupPC-users wrote at about 21:23:24 -0700 on Tuesday, June 
9, 2020:
 > Jeff,
 > 
 > I don't think there's much different whether a directory has an empty
 > attrib file or not.  The reason they exist is when a directory ends up
 > being empty after updating the directory.  The reason a directory might
 > exist without one is when reverse deltas require a change deeper in the
 > directory tree, which causes the unfilled backup to create the intermediate
 > directories, which won't get attrib files unless rsync needs to make
 > changes at that level too.
 > 
 > Craig
 > 
 > On Mon, Jun 8, 2020 at 9:34 PM  wrote:
 > 
 > > I have some empty attrib files, labeled attrib_0.
 > > Note that the directory it represents, has no subdirectories. So, I would
 > > have
 > > thought that no attrib file was present/necessary -- which seems to be
 > > the case in most of my empty directories.
 > >
 > > So what is the difference (and rationale) for attrib_0 vs no attrib
 > > file.
 > > Does that have to do with a prior/subsequent file deletion?
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

The first method seems simpler.  Don't you just have to mv the file based
on BackupPC_zcat file | md5sum?  BackupPC_nightly shouldn't need to run
(other than to check you no longer get the missing error).
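
That boils down to a couple of lines per recovered file, reusing the same
prefix computation as the BackupPC_zcatPool shell function posted on this list
(paths are assumptions):

f=/path/to/recovered_v3_pool_file
d=$(/usr/share/backuppc/bin/BackupPC_zcat "$f" | md5sum | cut -d' ' -f1)
ABCD=$(printf '%04x' "$(( 0x${d:0:4} & 0xfefe ))")
mv "$f" "/var/lib/backuppc/cpool/${ABCD:0:2}/${ABCD:2:2}/$d"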

Btw, where did you find the missing pool files?

For the benefit of people on the list, Jeff and I are addressing the other
issues off-list.

Craig

On Tue, Jun 9, 2020 at 6:48 PM  wrote:

> Of course, the unanswered interesting question is why did this small
> number of 37 files out of about 3.5M pool files fail to migrate
> properly from v3 to v4...
>
> Note: I ran as many checks before and after as possible on the pool
> and pc heirarchy integrity (using my old v3 routines I had written) as
> well as checked error messages from the migration itself. I also of
> course had the BackupPC service off...
>
> "" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
>  > I found some of the missing v4 pool files (mentioned in an earlier
>  > post) in a full-disk backup of my old v3 setup.
>  >
>  > I would like to add them back to the v4 pool to eliminate the missing
>  > pool file messages and thus fix my backups.
>  >
>  > I can think of several ways:
>  >
>  > - Method A.
>  >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
>  >  file into a new file named by its uncompressed md5sum and then move
>  >  it appropriately into the v4 cpool 2-layer directory heirarchy.
>  >
>  >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
>  >  counts to coincide with the now correct pc-branch ref count
>  >
>  > - Method B
>  >   1. BackupPC_zcat the recovered files from the v3 pool into a new
>  >  directory. Naming of the files is immaterial.
>  >   2. Create a new temporary host and use that to backup the folder
>  >   3. *Manually* delete the host by deleting the entire host folder
>  >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
>  >
>  > - Method C
>  >   1. Use some native code or routines that Craig may already have
>  >  written that do most or all of the above
>  >
>  > Any thoughts on which of these work and which way is preferable?
>  >
>  > Jeff
>  >
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] attrib_0 files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

I don't think there's much difference whether a directory has an empty
attrib file or not.  The reason they exist is when a directory ends up
being empty after updating the directory.  The reason a directory might
exist without one is when reverse deltas require a change deeper in the
directory tree, which causes the unfilled backup to create the intermediate
directories, which won't get attrib files unless rsync needs to make
changes at that level too.

Craig

On Mon, Jun 8, 2020 at 9:34 PM  wrote:

> I have some empty attrib files, labeled attrib_0.
> Note that the directory it represents, has no subdirectories. So, I would
> have
> thought that no attrib file was present/necessary -- which seems to be
> the case in most of my empty directories.
>
> So what is the difference (and rationale) for attrib_0 vs no attrib
> file.
> Does that have to do with a prior/subsequent file deletion?
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread backuppc
Of course, the interesting unanswered question is why this small number
of files (37 out of about 3.5M pool files) failed to migrate properly
from v3 to v4...

Note: I ran as many checks before and after as possible on the pool
and pc hierarchy integrity (using my old v3 routines I had written) as
well as checked error messages from the migration itself. I also of
course had the BackupPC service off...

"" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
 > I found some of the missing v4 pool files (mentioned in an earlier
 > post) in a full-disk backup of my old v3 setup.
 > 
 > I would like to add them back to the v4 pool to eliminate the missing
 > pool file messages and thus fix my backups.
 > 
 > I can think of several ways:
 > 
 > - Method A.
 >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
 >  file into a new file named by its uncompressed md5sum and then move
 >  it appropriately into the v4 cpool 2-layer directory hierarchy.
 > 
 >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
 >  counts to coincide with the now correct pc-branch ref count
 > 
 > - Method B
 >   1. BackupPC_zcat the recovered files from the v3 pool into a new
 >  directory. Naming of the files is immaterial.
 >   2. Create a new temporary host and use that to backup the folder
 >   3. *Manually* delete the host by deleting the entire host folder
 >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
 > 
 > - Method C
 >   1. Use some native code or routines that Craig may already have
 >  written that do most or all of the above
 > 
 > Any thoughts on which of these work and which way is preferable?
 > 
 > Jeff
 >


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread
I found some of the missing v4 pool files (mentioned in an earlier
post) in a full-disk backup of my old v3 setup.

I would like to add them back to the v4 pool to eliminate the missing
pool file messages and thus fix my backups.

I can think of several ways:

- Method A. (a rough sketch of step 1 follows this list)
  1. Create a script to first BackupPC_zcat each recovered old v3 pool
     file into a new file named by its uncompressed md5sum and then move
     it appropriately into the v4 cpool 2-layer directory hierarchy.

  2. Run BackupPC_nightly, assuming that it will clean up the cpool ref
     counts to coincide with the now correct pc-branch ref counts

- Method B
  1. BackupPC_zcat the recovered files from the v3 pool into a new
 directory. Naming of the files is immaterial.
  2. Create a new temporary host and use that to backup the folder
  3. *Manually* delete the host by deleting the entire host folder
  4. Run BackupPC_nightly to correct the ref counts (assuming needed)

- Method C
  1. Use some native code or routines that Craig may already have
 written that do most or all of the above
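
For what it's worth, a very rough, untested sketch of what Method A step 1
might look like (the /recovered path and TopDir location are made up, and
the comments spell out the pool-layout assumptions):

  #!/bin/bash
  # Untested sketch of Method A step 1 only. Assumes BackupPC's bin
  # directory is on PATH, the recovered (still v3-compressed) files live
  # in /recovered, TopDir is /var/lib/backuppc, and the v4 cpool uses the
  # usual two-level layout where each level name is the corresponding
  # digest byte with the low bit cleared. Verify those assumptions against
  # a known pool file, and check whether the data needs recompressing for
  # the cpool, before actually moving anything.
  TOPDIR=/var/lib/backuppc
  for f in /recovered/*; do
      digest=$(BackupPC_zcat "$f" | md5sum | awk '{print $1}')
      d1=$(printf '%02x' $(( 0x${digest:0:2} & 0xfe )))
      d2=$(printf '%02x' $(( 0x${digest:2:2} & 0xfe )))
      echo "would place $f at $TOPDIR/cpool/$d1/$d2/$digest"
  done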

Any thoughts on which of these work and which way is preferable?

Jeff
   


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Discrepancy in *actual* vs. *reported* missing pool files

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

We've discussed at least one issue off-list - making sure you consider
inodes too.

It looks like BackupPC_fsck -f only rebuilds the last two backup refcounts
for each host.  It should use the -F option instead of -f when it calls
BackupPC_refcountUpdate (see line 630).  So you should try changing that
and re-running.

Craig

On Tue, Jun 9, 2020 at 8:44 AM  wrote:

> For the longest time my log files have warned about 37 missing pool
> files.
> E.g.
> admin : BackupPC_refCountUpdate: missing pool file
> 718fc4796633702979bb5edbd20e27a6
>
> So, I decided to find them to see what is going on...
>
> I did the following:
>
> 1. Stopped the running of further backups
>    Ran 'BackupPC_fsck -f' to do a full checkup
>    Ran 'BackupPC_nightly' to prune the pool fully
>
> 2. Created a sorted, unique list of all the cpool files, using 'find'
>    and 'sort -u' on TopDir/cpool
>
> 3. Created a program to iterate through all the attrib files in all my
>    backups and print out the digest and name of each file (plus also
>    size and type). I also included the md5sum encoded in the name of
>    each attrib file itself.
>    Ran the program on all my hosts and backups
>    Sorted and uniquified the list of md5sums
>
> 4. Used 'comm -1 -3' and 'comm -2 -3' to find missing ones from each
>    listing
>
> Result:
> 1. Relative to the attrib listing, the pool was missing *105* files
>    including the 37 that were found in the LOG
>
>    INTERESTINGLY, all 105 were from previously migrated v3 backups.
>    Actually, from the last 3 backups on that machine (full, incr, incr)
>
> 2. Relative to the pool listing, there were *1154* files in the pool
>    that were not mentioned in the attrib file digests (including the
>    digest of the attrib itself)
>
> So,
> - Why is BackupPC_fsck not detecting all the missing pool files?
> - Why is BackupPC_nightly not pruning files not mentioned in the
>   attrib listing?
> - Any suggestions on how to further troubleshoot?
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Discrepancy in *actual* vs. *reported* missing pool files

2020-06-09 Thread
For the longest time my log files have warned about 37 missing pool files.
E.g.:
admin : BackupPC_refCountUpdate: missing pool file 718fc4796633702979bb5edbd20e27a6

So, I decided to find them to see what is going on...

I did the following:

1. Stopped the running of further backups
   Ran 'BackupPC_fsck -f' to do a full checkup
   Ran 'BackupPC_nightly' to prune the pool fully

2. Created a sorted, unique list of all the cpool files, using 'find'
   and 'sort -u' on TopDir/cpool

3. Created a program to iterate through all the attrib files in all my
   backups and print out the digest and name of each file (plus also
   size and type). I also included the md5sum encoded in the name of
   each attrib file itself.
   Ran the program on all my hosts and backups
   Sorted and uniquified the list of md5sums

4. Used 'comm -1 -3' and 'comm -2 -3' to find missing ones from each
   listing (a simplified sketch of this comparison follows below)
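
For anyone wanting to reproduce the comparison, a simplified sketch of
steps 2 and 4 (the TopDir path and the attrib-digests.txt file produced by
step 3 are hypothetical):

  # Sketch of steps 2 and 4 only; step 3 (walking the attrib files and
  # writing one digest per line to attrib-digests.txt) is not shown.
  find /var/lib/backuppc/cpool -type f -printf '%f\n' \
      | grep -E '^[0-9a-f]{32}' | sort -u > pool-digests.txt
  sort -u attrib-digests.txt > attrib-sorted.txt
  # digests referenced by backups but absent from the pool:
  comm -13 pool-digests.txt attrib-sorted.txt
  # pool files not referenced by any attrib digest:
  comm -23 pool-digests.txt attrib-sorted.txt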

Result:
1. Relative to the attrib listing, the pool was missing *105* files
   including the 37 that were found in the LOG

   INTERESTINGLY, all 105 were from previously migrated v3 backups.
   Actually, from the last 3 backups on that machine (full, incr, incr)

2. Relative to the pool listing, there were *1154* files in the pool
   that were not mentioned in the attrib file digests (including the
   digest of the attrib itself)

So,
- Why is BackupPC_fsck not detecting all the missing pool files?
- Why is BackupPC_nightly not pruning files not mentioned in the
  attrib listing?
- Any suggestions on how to further troubleshoot?



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] attrib_0 files?

2020-06-08 Thread
I have some empty attrib files, labeled attrib_0.
Note that the directory it represents has no subdirectories. So I would have
thought that no attrib file was present/necessary -- which seems to be
the case in most of my empty directories.

So what is the difference (and rationale) between attrib_0 and no attrib
file?
Does that have to do with a prior/subsequent file deletion?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
I pushed a commit that implements nightly pool checking on a configurable
portion of the pool files.  It needs the latest version of backuppc-xs, 0.61.
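
Conceptually the check just verifies that each pool file still hashes to
the digest in its own name.  A minimal hand-rolled sketch of the same idea
for a compressed pool (hypothetical TopDir path, and far slower than the
built-in version) would be:

  # Hand-rolled sketch only: confirm each cpool file's content still
  # matches the md5 digest encoded in its filename (made-up path).
  find /var/lib/backuppc/cpool -type f -printf '%f %p\n' \
      | grep -E '^[0-9a-f]{32}' \
      | while read -r name path; do
          sum=$(BackupPC_zcat "$path" | md5sum | awk '{print $1}')
          # collision suffixes can make names longer; compare the md5 prefix
          [ "${name:0:32}" = "$sum" ] || echo "MISMATCH: $path"
        done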

Craig

On Mon, Jun 8, 2020 at 4:22 PM Michael Huntley  wrote:

> I’m fine with both action items.
>
> I back up millions of emails and so far the restores I’ve performed have
> never been an issue.
>
> mph
>
>
>
> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
> 
> Jeff & Guillermo,
>
> Agreed - it's better to scan small subsets of the pool.  I'll add that
> to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
> unused files and update stats).
>
> Craig
>
> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix
>> any errors.
>>  >
>>  > Sure. Actually I've put off interpreting the error and the file
>>  > involved until I find an actual error (so I hope to never need that
>>  > information! :) )
>>  >
>>  > > So it's probably best to have rsync-bpc implement the old
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
>> skipping the --checksum short-circuit during a full.  For that fraction of
>> files, it would do a full rsync check and update, which would update the
>> pool file if they are not identical.
>>  >
>>  > That would be a good compromise. It makes the fulls a bit slower on
>>  > servers with poor network and slow disks, but it's clearer what to
>>  > do in case of an error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differ for
>>  > those files?
>>  >
>>
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>>
>> That is why I suggested a nightly troll through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>>
>>
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Michael Huntley
I’m fine with both action items.

I back up millions of emails and so far the restores I’ve performed have never 
been an issue.

mph



> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users 
>  wrote:
> 
> 
> Jeff & Guillermo,
> 
> Agreed - it's better to scan small subsets of the pool.  I'll add that to 
> BackupPC_refCountUpdate (which does the nightly pool scanning to delete 
> unused files and update stats).
> 
> Craig
> 
>> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix any 
>> errors.
>>  > 
>>  > Sure. Actually I've put off interpreting the error and the file
>>  > involved until I find an actual error (so I hope to never need that
>>  > information! :) )
>>  > 
>>  > > So it's probably best to have rsync-bpc implement the old 
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly 
>> skipping the --checksum short-circuit during a full.  For that fraction of 
>> files, it would do a full rsync check and update, which would update the 
>> pool file if they are not identical.
>>  > 
>>  > That would be a good compromise. It makes the fulls a bit slower on
>>  > servers with poor network and slow disks, but it's clearer what to
>>  > do in case of an error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differ for
>>  > those files?
>>  > 
>> 
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>> 
>> That is why I suggested a nightly troll through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>> 
>> 
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Automated script to restore and compare backups

2020-06-08 Thread
I wrote the attached script to automate testing the complete fidelity
of BackupPC backups... so that I could test the round-trip of dump &
restore as broadly, easily, and accurately as possible

In my case, since I use btrfs snapshots for the source of my backups,
I am able to ensure that the source is unchanged allowing for full
compare of restores against the original source.

Working with Craig, this has uncovered several bugs in ACLs and
xattrs... but now I am able to get 100% accurate backups against
'rsync -niacXAH --delete' -- which is about as close as it gets :)
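
As a stripped-down illustration of that compare step (the snapshot and
restore paths here are made up), the core check amounts to something like:

  # Made-up paths; shows only the final compare, not the restore itself.
  # -n (dry run) plus -i (itemize) means any output line is a difference.
  diffs=$(rsync -niacXAH --delete /mnt/snapshots/data/ /tmp/restore/data/)
  if [ -z "$diffs" ]; then
      echo "restore matches source"
  else
      printf '%s\n' "$diffs"
  fi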

The attached BASH script automates the restore and compare process
including retrieving the appropriate shares, merges, compression
levels, etc. needed to execute the compare. I should have written it
in perl but it started as just a series of CLI commands that kept
growing until they became a program...

This script should allow others to validate their own rsync backups,
both for their own sake and to identify other bugs that may
still persist.

Enjoy and please report back any errors in either this script or
BackupPC...



[Attachment: BackupPC_restoreTest script]

Best,
Jeff
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Jeff & Guillermo,

Agreed - it's better to scan small subsets of the pool.  I'll add that
to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
unused files and update stats).
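
One simple way to spread such a scan out would be to verify only the files
whose digest falls in that night's bucket; a purely hypothetical sketch of
the idea (made-up TopDir path):

  # Hypothetical slicing sketch: check roughly 1/16 of the pool per night,
  # keyed on the first hex digit of each pool file name.
  slice=$(( 10#$(date +%j) % 16 ))      # bucket rotates daily
  find /var/lib/backuppc/cpool -type f -printf '%f %p\n' \
      | grep -E '^[0-9a-f]{32}' \
      | while read -r name path; do
          [ $(( 0x${name:0:1} % 16 )) -eq "$slice" ] || continue
          BackupPC_zcat "$path" | md5sum | grep -q "^${name:0:32} " \
              || echo "MISMATCH: $path"
        done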

Craig

On Mon, Jun 8, 2020 at 2:35 PM  wrote:

> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>  > > While it's helpful to check the pool, it isn't obvious how to fix any
> errors.
>  >
>  > Sure. Actually I've put off interpreting the error and the file
>  > involved until I find an actual error (so I hope to never need that
>  > information! :) )
>  >
>  > > So it's probably best to have rsync-bpc implement the old
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
> skipping the --checksum short-circuit during a full.  For that fraction of
> files, it would do a full rsync check and update, which would update the
> pool file if they are not identical.
>  >
>  > That would be a good compromise. It makes the fulls a bit slower on
>  > servers with poor network and slow disks, but it's clearer what to
>  > do in case of an error. Maybe also add a "warning of possible pool
>  > corruption" if the stored checksum and the new checksum differ for
>  > those files?
>  >
>
> The only problem with this approach is that it never revisits pool
> files that aren't part of new backups.
>
> That is why I suggested a nightly troll through the cpool/pool to
> check md5sums going sequentially through X% each night...
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


  1   2   3   4   5   6   7   8   9   10   >