Re: [BackupPC-users] BackupPC Service Status "Count" Column

2020-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 23 Jul 2020, Akibu Flash wrote:


In the CGI Interface on the BackupPC Service Status Page there is a
Column labelled "Count".  What exactly is that determining? Is it
the number of files that have been backed up from that share?


It's the count of files transferred.

Look for $jobStr in .../lib/BackupPC/CGI/GeneralInfo.pm for more.
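For example, on a typical install (the library path is an assumption; adjust
for your distribution):

  grep -n jobStr /usr/share/backuppc/lib/BackupPC/CGI/GeneralInfo.pm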

I don't normally see a row of data below that line, because it's for
currently running jobs, and I'm not normally looking at the BackupPC
GUI at two o'clock in the morning.


The reason I ask is because mine has been stuck on 58721 for quite
some time.  What could be causing this and how can I determine what
is happening? There is nothing in the log file currently that I can
see which could be causing a problem.


What logs are you looking at, and what do you see in them which makes
you think everything is normal?  I'd expect it to be obvious from the
logs what's going on.

It's not a silly browser page-caching thing is it?

--

73,
Ged.




Re: [BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-22 Thread Kris Lou via BackupPC-users
All,

Thanks for the suggestions and links.  There's a lot of interesting reading
to be done.  But as noted, checksum matching and storage latency are
probably prohibitive.

I hope to have access to a colo with gigabit bandwidth in the near future.
Maybe I'll spin up an instance just to see how it goes -- especially since
BPC4 should be less dependent upon bare-metal installations.

Thanks,
-Kris


Re: [BackupPC-users] Sign out

2020-07-21 Thread Craig Barratt via BackupPC-users
I appreciate the feedback.  Please use the "List
<https://lists.sourceforge.net/lists/listinfo/backuppc-users>" link in the
footer of any email to unsubscribe.

Mailing to the list mails everyone on the list.  It doesn't help you
unsubscribe.

Craig

On Tue, Jul 21, 2020 at 2:22 PM Ants Mark  wrote:

> Thank you for providing me good info. But I don't want to receive any more
> mails. Congrats for having such a useful program.


Re: [BackupPC-users] Windows 10 Subsystem For Linux

2020-07-21 Thread backuppc
Stan Larson wrote at about 12:57:40 -0400 on Tuesday, July 21, 2020:
 > We've been successfully using BackupPC 3.3 and the Windows 10 WSL feature to 
 > access Windows 10 PCs using rsync (without the cgwin plugin).  We've been 
 > using this method on our production BackupPC server to back up about 30 
 > Win10 Pro clients.  We just back up the C:/Users folder, which picks up the 
 > User's Desktop, Documents, AppData folders, etc.  This method has proven to 
 > be very reliable using BackupPC 3.3.
 > 
 > We are testing with BackupPC 4.4 so that we can overcome the filesystem 
 > issues that BackupPC 3.3  hard links present.
 > 
 > We are running into a couple of problems that seem to be related to 
 > rsync_bpc.
 > 
 > 1.  On our BackupPC 3.3 server, we are able to use an alternate ssh port for 
 > our Windows 10 clients.  We actually run ssh on port 2222 on the clients 
 > with no problem.  With BackupPC 4.4 (rsync_bpc), we get errors when trying 
 > to run on alternate ports.  The errors seem to indicate that even though we 
 > are specifying a different port, rsync_bpc is ignoring the alternate port 
 > and trying to use port 22.  Here's the config declaration, which works on 
 > 3.3 but not 4.4... $Conf{RsyncClientCmd} = '$sshPath -p  -q -x -l root 
 > $host $rsyncPath $argList+';

RsyncClientCmd is not a configurable variable in 4.x, so it's not
surprising that you are having a problem...

You probably want to use RsyncSshArgs instead.
For example:
$Conf{RsyncSshArgs} = ['-e', '$sshPath -p  -q -x -l root'];
Though I'm not sure you need '-q -x'.
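As a minimal sketch with the port from your description filled in (your
clients run sshd on port 2222; '-q -x' kept only to mirror the old 3.3
command):

$Conf{RsyncSshArgs} = ['-e', '$sshPath -p 2222 -q -x -l root'];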

 > 
 > 2.  When we use port 22 on the client instead of port  (see above), we 
 > get a successful backup, but we have a different problem.  On the Win 10 WSL 
 > client, the C:\ drive is a separate filesystem presented as /mnt/c.  On our 
 > BackupPC 3.3 server, we are able to cross this mount point successfully with 
 > no special configurations, using the config declaration...   
 > "$Conf{BackupFilesOnly} = ['/mnt/c/'];".  On our BackupPC 4.4 server, the 
 > backup will run successfully, but no files below /mnt/c are included.  It's 
 > as if BackupPC is refusing to cross from the / filesystem to the /mnt/c 
 > filesystem.
 > 

I suggest you test manually by running from the command line. Note that to
rsync, -p and -l mean --perms and --links, so the ssh port and login belong
in the remote-shell option:

  sudo -u backuppc rsync -navxH -e 'ssh -p <port>' root@<host>:/mnt/c

 > For the new server, we are using CentOS 8 and the default BackupPC yum 
 > packages.
 > 
 > Any thoughts on either problem would be much appreciated.
 > 
 > --
 > [Freedom] <http://www.freedomsales.com>
 > Stan Larson  |  IT Manager
 > Freedom  |  www.freedomsales.com<https://www.freedomsales.com>
 > 11225 Challenger Avenue Odessa, FL 33556
 > PH: 813-855-2671 x206  |
 > Direct Line: 1-727-835-1157
 > 
 > 
 > All commodities purchased from Freedom Sales are to be handled in accordance 
 > with US law including but not limited to the Export Administration 
 > Regulations, International Traffic in Arms Regulations, US Department of 
 > State, US Department of Homeland Security, US Department of Commerce, and US 
 > Office of Foreign Assets Control. Diversion Contrary to US law is prohibited.


[BackupPC-users] Necro'd: Use S3 buckets for the pool

2020-07-20 Thread Kris Lou via BackupPC-users
This hasn't been addressed for a while, and I didn't find anything in
recent archives.

Anybody have any experience or hypothetical issues with writing the BPC4
Pool over s3fs-fuse to S3 or something similar?  Pros, Cons?

Thanks,
-Kris



Kris Lou
k...@themusiclink.net


Re: [BackupPC-users] Backup installation of BackupPC

2020-07-18 Thread Craig Barratt via BackupPC-users
From the web interface, can you see the old hosts information?

What happens when you select one of the hosts?

The most likely issue is that $Conf{TopDir} in the config file isn't
pointing to the top-level store directory on the old disk.

If you need the file urgently, rather than just testing the 4.3.2 standby
installation, you can retrieve it from the command line just by navigating to
the relevant host and directory.  If you know the directory where the file
is stored, but not the backup it changed in, just use a shell wildcard for
the backup number.  In 3.x the file paths are mangled (each entry starts
with "f"), but every full backup's directory tree will have all the files.

Craig

On Sat, Jul 18, 2020 at 2:16 PM daveinredm...@excite.com <
daveinredm...@excite.com> wrote:

> I am currently running BackupPC 4.3.2. I have created a second
> installation of BackupPC on a spare machine to have the capability of using
> my backups if the server hosting the main installation dies. I also have
> several older backup disks from several years back that were made on
> BackupPC 3.x. I chown'd an old disk to ensure proper rights and copied the
> current hosts file to the test server but when I run BackupPC on the test
> machine it doesn't see any of the backups. I am trying to find a fairly old
> file that was damaged at an unknown time and used the test server to verify
> functionality. What am I missing? I've Googled "move backuppc to new
> server" but none of the responses seems relevant.
>
> TIA,
> Dave
>
>


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-02 Thread backuppc
usermail wrote at about 20:48:44 +1000 on Thursday, July 2, 2020:
 > On 30/6/20 2:51 pm, backu...@kosowsky.org wrote:
 > Wow great work! This would be fantastic functionality!
 > I copied it into my client .pl file but I don't know if I've stuffed it up.
 > My XferLOG starts like this:
 > 
 > XferLOG file /var/lib/backuppc/pc/charlotte/XferLOG.71.z created 2020-07-02 
 > 12:00:00
 > Backup prep: type = incr, case = 4, inPlace = 0, doDuplicate = 0, newBkupNum 
 > = 71, newBkupIdx = 7, lastBkupNum = 70, lastBkupIdx = 6 (FillCycle = 0, 
 > noFillCnt = 5)
 > Executing DumpPreUserCmd: &{sub {
 > my $timestamp = "20200702-12";
 > my $shadowdir = "/cygdrive/c/shadow/";
 > my $shadows = "";
 > 
 > my $bashscript = "function\ errortrap\ \ \{\ #NOTE:\ Trap\ on\ 
 > error:\ unwind\ shadows\ and\ exit\ 1.\
 > \ \ echo\ \"ERROR\ setting\ up\ shadows...\"\;\
 > \ \ \ \ #First\ delete\ any\ partially\ created\ shadows\
 > \ \ if\ \[\ -n\ \"\$SHADOWID\"\ \]\;\ then\
 > \ \ \ \ \ \ unset\ ERROR\;\
 > \ \ \ \ \ \ \(vssadmin\ delete\ shadows\ /shadow=\$SHADOWID\ /quiet\ \|\|\ 
 > ERROR=\"ERROR\ \"\)\ \|\ tail\ +4\;\  \   \   \ \ \ \ \ \
 > \ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ copy\ for\ 
 > \'\$\{I\^\^\}:\'\ \$SHADOWID\"\;\
 > \ \ fi\
 > \ \ if\ \[\ -n\ \"\$SHADOWLINK\"\ \]\;\ then\
 > \ \ \ \ \ \ unset\ ERROR\;\
 > \ \ \ \ \ \ cmd\ /c\ rmdir\ \$SHADOWLINK\ \|\|\ ERROR=\"ERROR\ \"\;\
 > \ \ \ \ \ \ echo\ \"\ \ \ \$\{ERROR\}Deleting\ shadow\ link\ for\ 
 > \'\$\{I\^\^\}:\'\ \$SHADOWLINK\"\;\
 > \ \ fi\
 > 
 > It looks the same on the client config page; is this likely an encoding or
 > copy-paste issue?

The backslashes are all painfully necessary to 'escape' variables,
special characters, and white space when passing to the shell.
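For readability, the same fragment with the escaping undone looks roughly
like this (reconstructed from the log excerpt above, which breaks off before
the end of the function):

function errortrap { #NOTE: Trap on error: unwind shadows and exit 1.
    echo "ERROR setting up shadows...";
    #First delete any partially created shadows
    if [ -n "$SHADOWID" ]; then
        unset ERROR;
        (vssadmin delete shadows /shadow=$SHADOWID /quiet || ERROR="ERROR ") | tail +4;
        echo "   ${ERROR}Deleting shadow copy for '${I^^}:' $SHADOWID";
    fi
    if [ -n "$SHADOWLINK" ]; then
        unset ERROR;
        cmd /c rmdir $SHADOWLINK || ERROR="ERROR ";
        echo "   ${ERROR}Deleting shadow link for '${I^^}:' $SHADOWLINK";
    fi
    ...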
> 
 > Second question: I don't use cygwin, I use DeltaCopy (basically rsync
 > compiled for Windows, I think) and my RsyncShareName is /.
 > I don't know perl, but it looks like you trim the last slash off of
 > $cygdrive, so would it be possible to set $cygdrive to /?

Yes. Or just set $cygdrive="";
Having this set wrong would explain why it is not automatically
finding your drive letters :)

> 
 > Thanks again for sharing your script,
 > Dean
 > 
 > 
 > 
 > 


Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-07-02 Thread backuppc
Michael Stowe wrote at about 05:43:03 + on Thursday, July 2, 2020:
 > On 2020-06-30 19:35, backu...@kosowsky.org wrote:
 > > Michael Stowe wrote at about 23:09:55 + on Tuesday, June 30, 2020:
 > >  > On 2020-06-29 21:51, backu...@kosowsky.org wrote:
 > > Not sure why you would want to use a custom version of rsync when my
 > > pretty simple scripts do all that with a lot more transparency to how
 > > they are setup.
 > 
 > I think because it's just one binary (vsshadow not needed, nor anything 
 > else)

I don't need to add any binaries beyond rsync/ssh. I use the native
Win7/Win10 VSS tooling to generate/unwind shadows (including vssadmin, wmic,
fsutil, mklink, and rmdir). They are present on even the "Home" edition
of Windows.
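For context, here is a rough sketch of the flow those built-ins imply, run
from an elevated prompt on the client (drive letter, link path, and shadow
number are illustrative assumptions, not the actual script):

  rem create a shadow of C: and note the ShadowID in the output
  wmic shadowcopy call create Volume=C:\
  rem find the shadow's device path
  vssadmin list shadows
  rem expose the shadow where cygwin/rsync can read it
  mklink /d C:\shadow\c \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
  rem ... back up via /cygdrive/c/shadow/c, then unwind ...
  cmd /c rmdir C:\shadow\c
  vssadmin delete shadows /shadow={ShadowID} /quiet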
 
 > > I believe it's far simpler and cleaner than either:
 > > - My old approach for WinXP (using a client-side triggered script,
 > >   rsyncd setup, dosdev, 'at' recursion to elevate privileges, etc.)
 > - Your version requiring winexe
 > > - Other versions requiring a custom/non-standard rsync
 > 
 > N.B.: my version works using ssh now, it doesn't require winexe
Good.
 > 
 > > My version only requires a basic cygwin install with rsync/ssh and
 > > basic linux utils plus built-in windows functions.
 > > 
 > > BTW, I still need to add back in the ability to dump all the acl's
 > > (using subinacl) since rsync only syncs POSIX acls and I believe ntfs
 > > has additional acl's.
 > > 
 > > In any case, my ultimate holy-grail is to be able to use BackupPC to 
 > > allow for
 > > a full bare-metal restore by combining:
 > > - Full VSS file backup
 > > - Restore of all ACLs from a subinacl dump
 > > - Anything else I may need to recreate the full NTFS filesystem for
 > >   windows (maybe disk signatures???)
 > 
 > I fully support this notion; NTFS has a lot of weirdness that doesn't 
 > translate well to rsync, like junction points.  Last time I tried these, 
 > rsync would convert them to symlinks, and restore them as symlinks.  
 > YMMV

Yes, you are right about junctions.
My plan would be to use 'fsutil' to get a list of reparse points that
could theoretically be reconstructed with 'mklink'.

Though perhaps fully recreating all the NTFS bells & whistles (or
oddities) is a fool's errand.




[BackupPC-users] Updated debian package folder for rsync_bpc 3.1.3 branch

2020-07-01 Thread backuppc
I have been building my own rsync_bpc packages for the 3.0.9 branch by
copying over the 'debian' folder that Raoul Bhatia provides in his
original packages.

However, these no longer work for 3.1.2 and 3.1.3.

I am not a debian package expert so I don't know what needs to be done
to update the debian folder so that 'fakeroot dpkg-buildpackage -uc
-us' completes without error.

Any ideas?
Has anybody successfully built debian packages for 3.1.3 who can share
their 'debian' folder?

Thanks,
Jeff




[BackupPC-users] rsync error backing up Windows 10 computer

2020-07-01 Thread backuppc
The log file shows multiple instances of the following error when rsync_bpc is 
run:
unpack_smb_acl: warning: entry with unrecognized tag type ignored


Any idea what may be going on here?




Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread backuppc
G.W. Haywood via BackupPC-users wrote at about 15:38:42 +0100 on Tuesday, June 
30, 2020:
 > Hi there,
 > 
 > On Tue, 30 Jun 2020, Jeff Kosowsky wrote:
 > 
 > > It should just work...
 > > [snip]
 > > -- next part --
 > > A non-text attachment was scrubbed...
 > > Name: BackupPCShadowConfig.pl
 > > Type: application/octet-stream
 > > Size: 8533 bytes
 > > Desc: not available
 > > 
 > > --
 > 
 > Don't you just hate it when that happens? :)
 > 

Try it and let me know your feedback... :)




Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 30 Jun 2020, Jeff Kosowsky wrote:


It should just work...
[snip]
-- next part --
A non-text attachment was scrubbed...
Name: BackupPCShadowConfig.pl
Type: application/octet-stream
Size: 8533 bytes
Desc: not available

--


Don't you just hate it when that happens? :)

--

73,
Ged.




Re: [BackupPC-users] backuppc process will not stop

2020-06-29 Thread Craig Barratt via BackupPC-users
Mark,

Perhaps systemd is being used to run BackupPC?

What output do you get from:

systemctl status backuppc

If it shows as active/running, then the correct command to stop BackupPC is:

systemctl stop backuppc


Craig

On Mon, Jun 29, 2020 at 11:05 AM Mark Maciolek  wrote:

> hi,
>
> Running BackupPC v4.3.2 on Ubuntu 18.04 LTS. I want to upgrade to 4.4.0
> but I can't get the backuppc process to stop. I can do
> /etc/init.d/backuppc and it starts again. If I do kill -9  it
> also just restarts.
>
> I have several other BackupPC servers and yet this is the only one that
> does this.
>
> Does anyone have a clue to where I should start troubleshooting this issue?
>
> Mark
>
>


Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread backuppc
G.W. Haywood via BackupPC-users wrote at about 14:16:15 +0100 on Monday, June 
29, 2020:
 > Hi there,
 > 
 > On Mon, 29 Jun 2020,  Craig Barratt wrote:
 > 
 > > ...
 > > Are you running nfs v3 or v4?  I have had experience with v3 not working
 > > reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
 > > does rely on lock files working, so it's definitely not recommended to turn
 > > locking off.
 > > ...
 > 
 > I would go further than that.  My feeling is that NFS is not suitable
 > for something so important as your backups.
 > 

That being said I ran BackupPC successfully for almost a decade using
NFS-v3 to store the backups on a small under-powered/under-memory
ARM-based NAS. Never had a problem with lock files...




Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 29 Jun 2020,  Craig Barratt wrote:


...
Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.
...


I would go further than that.  My feeling is that NFS is not suitable
for something so important as your backups.

--

73,
Ged.




Re: [BackupPC-users] Unable to connect on port -1

2020-06-28 Thread Craig Barratt via BackupPC-users
The CGI script is trying to connect to the BackupPC server using the
unix-domain socket, which is at $Conf{LogDir}/BackupPC.sock. From your
email, on your system that appears to be /var/lib/log/BackupPC.sock.

Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.

You said you deleted the BackupPC.sock file.  That would explain why the
CGI script can't connect to the server.  Why did you delete it?  You
said "deleting
those files doesn't always let the service restart" - deleting those files
should not be used to get the server to restart.

Craig

On Sat, Jun 27, 2020 at 9:26 AM Phil Kennedy <
phillip.kenn...@yankeeairmuseum.org> wrote:

> I've hit my wits' end on an issue with my backuppc instance. The system ran
> fine, untouched, for many months. This is an Ubuntu 16.04 system, running
> backuppc 3.3.1, installed via apt. When accessing the index (or any other
> pages), I get the following:
> Error: Unable to connect to BackupPC server
> This CGI script (/backuppc/index.cgi) is unable to connect to the BackupPC
> server on pirate port -1.
> The error was: unix connect: Connection refused.
> Perhaps the BackupPC server is not running or there is a configuration
> error. Please report this to your Sys Admin.
>
> The backuppc & apache services are running, and restarting without error.
> The backuppc pool (and other important folders, such as log) lives on an
> NFS mount, and /var/lib/backuppc is symlinked to /mnt/backup. Below is the
> fstab entry that I use:
>
> 10.0.0.4:/backup /mnt/backup nfs users,auto,nolock,rw 0 0
>
> (I'm specifically using nolock, since that can cause a similar issue.
> Mounting an NFS mount via some of the off the shelf NAS's out there can
> have performance issues without nolock set.)
>
> I've been able to get the instance to start and run briefly by deleting
> the BackupPC.sock and LOCK files from /var/lib/log, but the instance
> doesn't stay running for very long (minutes to an hour or two), and the LOG
> isn't giving me much data. On top of that, deleting those files doesn't
> always let the service restart. Thoughts? This box lives a pretty stagnant
> life, nothing tends to change configuration-wise.
> ~Phil


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Craig Barratt via BackupPC-users
You can install the perl module Module::Path to find the path for a module.

After installing, do this:

perl -e 'use Module::Path "module_path"; print(module_path("BackupPC::XS")."\n");'

Example output:

/usr/local/lib/x86_64-linux-gnu/perl/5.26.1/BackupPC/XS.pm

Now try as root and the BackupPC user to see the difference.  Does the
BackupPC user have permission to access the version root uses?

You can also print the module search path with:

perl -e 'print join("\n", @INC),"\n"'


Does that differ between root and the BackupPC user?
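For instance, to run the same checks as the BackupPC user (assuming the
account is named backuppc):

sudo -u backuppc perl -e 'use Module::Path "module_path"; print(module_path("BackupPC::XS")."\n");'
sudo -u backuppc perl -e 'print join("\n", @INC),"\n"'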

Craig

On Thu, Jun 25, 2020 at 9:48 AM Les Mikesell  wrote:

> > The system got itself into this state from a standard yum update.
>
> That's why you want to stick to all packaged modules whenever
> possible.   Over time, dependencies can change and the packaged
> versions will update together.  You can probably update a cpan module
> to the correct version manually but you need to track all the version
> dependencies yourself.   There are some different approaches to
> removing modules: https://www.perlmonks.org/?node_id=1134981
>
>


Re: [BackupPC-users] Is there a reason that DumpPreUserCmd (and its analogs) are executed without a shell?

2020-06-24 Thread Craig Barratt via BackupPC-users
Jeff,

The reason BackupPC avoids running shells for sub-commands is security, and
avoiding the extra layer of argument escaping or quoting.  It's easy to
inadvertently introduce a security weakness through misconfiguration or misuse.

Can you get what you need by starting the command with "/bin/bash -c"?  You
can alternatively set $Conf{DumpPreUserCmd} to a shell script with the
arguments you need, and then you can do whatever you want in that script.
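For example, a minimal, untested sketch of that first suggestion, reusing the
host-specific script layout from example (1) below (ssh identity options
trimmed for brevity):

$Conf{DumpPreUserCmd} = '/bin/bash -c "$sshPath -q -x -l root $hostIP bash -s < /etc/backuppc/scripts/script-$hostIP"';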

Craig

On Wed, Jun 24, 2020 at 10:20 AM  wrote:

> I notice that in Lib.pm, the function 'cmdSystemOrEvalLong'
> specifically uses the structure 'exec {$cmd->[0]} @$cmd;' so that no
> shell is invoked.
>
> I know that technically it's a little faster to avoid calling the
> shell, but in many cases it is very useful to have at least a
> rudimentary shell available.
>
> For example, I may want to read in (rather than execute) a script.
>
> Specifically say,
> (1)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <
> /etc/backuppc/scripts/script-\$hostIP)
> would allow me to run a hostIP specific script that I store in
> /etc/backuppc/scripts.
>
> - This is neater and easier to maintain than having to store the script
>   on the remote machine.
> - This also seems neater and nicer than having to use an executable
>   script that would itself need to run ssh -- plus importantly it
>   removes a layer of indirection and messing with extra quoting.
>
>
> Similarly, it would be great to be able to support:
> (2)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s < 
> EOF)
>
> Or similarly:
> (3)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <<< $bashscript
> where for example
> my $bashscript = <<'EOF'
> 
> EOF
>
> Though this latter form is a bash-ism and would not work in /bin/sh
>
> The advantage of the latter examples is that it would allow me to
> store the bashscript in the actual host.pl config scripts rather than
> having to have a separate set of scripts to load.
>
> Note that I am able to roughly replicate (3) using perl code, but it
> requires extra layers of escaping of metacharacters making it hard to
> write, read, and debug.
>
> For example something like:
> my $bashscript = <<'EOF';
> 
> EOF
>
> $bashscript =~ s/([][;&()<>{}|^\n\r\t *\$\\'"`?])/\\$1/g;
> $Conf{DumpPreUserCmd} = qq(&{sub {
> open(my \$out_fh, "|-", "\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s")
> or warn "Can't start ssh: \$!";
> print \$out_fh qq($bashscript);
> close \$out_fh or warn "Error flushing/closing pipe to ssh: \$!";
> }})
>
> Though it doesn't quite work yet...
>
>
>


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Craig Barratt via BackupPC-users
Mike,

It's possible you have two different versions of perl installed, or for
some reason the BackupPC user is seeing an old version of BackupPC::XS.

Try some of the suggestions here:
https://github.com/backuppc/backuppc/issues/351.

Craig

On Wed, Jun 24, 2020 at 10:12 AM Richard Shaw  wrote:

> On Wed, Jun 24, 2020 at 11:58 AM Mike Hughes  wrote:
>
>> I'm getting a service startup failure claiming my version of BackupPC-XS
>> isn't up-to-snuff but it appears to meet the requirements:
>>
>> BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s
>>
>
> I don't have a CentOS 7 machine handy so I'm downloading the minimal ISO
> for boxes...
>
> Thanks,
> Richard


[BackupPC-users] BackupPC 4.4.0 released

2020-06-22 Thread Craig Barratt via BackupPC-users
BackupPC 4.4.0 <https://github.com/backuppc/backuppc/releases/tag/4.4.0> has
been released on Github.

This release contains several new features and some bug fixes. New features
include:

   - any full/filled backup can be marked for keeping, which prevents any
   expiry or deletion
   - any backup can be annotated with a comment (eg, "prior to upgrade of
   xyz")
   - added metrics CGI (thanks to @jooola <https://github.com/jooola>) that
   replaces RSS and adds Prometheus support
   - tar XferMethod now supports xattrs and acls
   - rsync XferMethod now correctly supports xattrs on directories and
   symlinks
   - nightly pool scanning now verifies the md5 digests of a configurable
   fraction of pool files
   - code runs through perltidy so format is now uniform (thanks to @jooola
   <https://github.com/jooola>, with help from @shancock9
   <https://github.com/shancock9> and @moisseev
   <https://github.com/moisseev>)

New versions of BackupPC::XS (0.62
<https://github.com/backuppc/backuppc-xs/releases/tag/0.62>) and rsync-bpc (
3.0.9.15 <https://github.com/backuppc/rsync-bpc/releases/tag/3.0.9.15>,
3.1.2.2 <https://github.com/backuppc/rsync-bpc/releases/tag/3.1.2.2> or
3.1.3beta0 <https://github.com/backuppc/rsync-bpc/releases/tag/3.1.3beta1>)
are required.

Thanks to Jeff Kosowsky for extensive testing and debugging for this
release, particularly around xattrs.

Enjoy!

Craig

Here are the more detailed changes:

   - Merged pull requests #325
   <https://github.com/backuppc/backuppc/pull/325>, #326
   <https://github.com/backuppc/backuppc/pull/326>, #329
   <https://github.com/backuppc/backuppc/pull/329>, #330
   <https://github.com/backuppc/backuppc/pull/330>, #334
   <https://github.com/backuppc/backuppc/pull/334>, #336
   <https://github.com/backuppc/backuppc/pull/336>, #337
   <https://github.com/backuppc/backuppc/pull/337>, #338
   <https://github.com/backuppc/backuppc/pull/338>, #342
   <https://github.com/backuppc/backuppc/pull/342>, #343
   <https://github.com/backuppc/backuppc/pull/343>, #344
   <https://github.com/backuppc/backuppc/pull/344>, #345
   <https://github.com/backuppc/backuppc/pull/345>, #347
   <https://github.com/backuppc/backuppc/pull/347>, #348
   <https://github.com/backuppc/backuppc/pull/348>, #349
   <https://github.com/backuppc/backuppc/pull/349>
   - Filled/Full backups can now be marked as "keep", which excludes them
   from any expiry/deletion. Also, a backup-specific comment can be added to
   any backup to capture any important information about that backup (eg,
   "pre-upgrade of xyz").
   - Added metrics CGI, which adds Prometheus support and replaces RSS, by
   @jooola <https://github.com/jooola> (#344
   <https://github.com/backuppc/backuppc/pull/344>, #347
   <https://github.com/backuppc/backuppc/pull/347>)
   - Tar XferMethod now supports xattrs and acls; xattrs should be
   compatible with rsync XferMethod, but acls are not
   - Sort open directories to top when browsing backup tree
   - Format code using perltidy, and included in pre-commit flow, by @jooola
   <https://github.com/jooola> (#334
   <https://github.com/backuppc/backuppc/pull/334>, #337
   <https://github.com/backuppc/backuppc/pull/337>, #342
   <https://github.com/backuppc/backuppc/pull/342>, #343
   <https://github.com/backuppc/backuppc/pull/343>, #345
   <https://github.com/backuppc/backuppc/pull/345>). Thanks to @jooola
   <https://github.com/jooola> and @shancock9
<https://github.com/shancock9> (perltidy
   author) for significant effort and support, plus improvements in perltidy,
   to make this happen.
   - Added $Conf{PoolNightlyDigestCheckPercent}, which checks the md5
   digest of this fraction of the pool files each night.
   - $Conf{ClientShareName2Path} is saved in backups file and the share to
   client path mapping is now displayed when you browse a backup so you know
   the actual client backup path for each share, if different from the share
   name
   - configure.pl now checks the per-host config.pl in a V3 upgrade to warn
   the user if $Conf{RsyncClientCmd} or $Conf{RsyncClientRestoreCmd} are used
   for that host, so that the new settings $Conf{RsyncSshArgs} and
   $Conf{RsyncClientPath} can be manually updated.
   - Fixed host mutex handling for dhcp hosts; shifted initial mutex
   requests to client programs
   - Updated webui icon, logo and favicon, by @moisseev
   <https://github.com/moisseev> (#325
   <https://github.com/backuppc/backuppc/pull/325>, #326
   <https://github.com/backuppc/backuppc/pull/326>, #329
   <https://github.com/backuppc/backuppc/pull/329>, #330
   <https://github.com/backuppc/backuppc/pull/330>)
   - Added $Conf{RsyncRestoreArgsExtra} for host-specific restore settings
   - Language 

Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-18 Thread Craig Barratt via BackupPC-users
Stefan,

BackupPC_backupDelete is only available in 4.x.

Craig

On Thu, Jun 18, 2020 at 2:33 AM Stefan Schumacher <
stefan.schumac...@net-federation.de> wrote:

>
> > If you want to remove a backup, best to use a script built to do it
> > right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x
> > but
> > it exists out there.
> >
> >
>
> Hello,
>
> in that case I would be grateful if someone could share the link to
> this script with me.
>
> Thanks in advance
> Stefan
>
>
> Stefan Schumacher
> Systemadministrator
>
> NetFederation GmbH
> Sürther Hauptstraße 180 B -
> Fon:+49 (0)2236/3936-701
>
> E-Mail:  stefan.schumac...@net-federation.de
> Internet:   http://www.net-federation.de
> Visit us on facebook, twitter, Google+, flickr, Slideshare, XING or our
> blog. We look forward to it!
>
>
>
> ***
> Sustainability remains a trending topic: the new CSR Benchmark is live!
>
> ***
> How well does digital CSR communication work across the German corporate
> landscape?
> You can find answers, along with numerous good practices, at
> www.csr-benchmark.de
>
>
>
>
>
> *
>
> NetFederation GmbH
> Geschäftsführung: Christian Berens, Thorsten Greiten
> Amtsgericht Köln, HRB Nr. 32660
>
> *
>
> The information in this e-mail is confidential and may be legally
> privileged. It is intended solely for the addressee and access to the
> e-mail by anyone else is unauthorised. If you are not the intended
> recipient, any disclosure, copying, distribution or any action taken or
> omitted to be taken in reliance on it, is prohibited and may be unlawful.
> If you have received this e-mail in error please forward to:
> post...@net-federation.de
>
>
>
>
>
>
>


Re: [BackupPC-users] Supported Windows versions

2020-06-17 Thread backuppc
All Windows versions are supported via multiple modalities: rsync,
rsyncd, smb, etc.

Fernando Miranda wrote at about 18:27:19 +0100 on Wednesday, June 17, 2020:
 > Hi,
 > 
 > I'm starting an analysis to choose open source backup software, so I have some
 > basic doubts (sorry if these are very simple questions).
 > 
 > As for BackupPC, I read that the supported Windows versions are 95, 98, 2000 and
 > XP clients, but is it really only these? What about later versions (and
 > server versions), any information even from "user experience" only?
 > 
 > Thanks,
 > Fernando Miranda


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread backuppc
Stefan Schumacher wrote at about 12:43:44 + on Wednesday, June 17, 2020:
 > 
 > > Yes, with backuppc 3.3, you can safely delete any incremental and
 > > full
 > > prior to the full backup that you want to keep. You can't just keep
 > > the
 > > latest incremental though (there are some options if that is what
 > > you
 > > really need).

You can keep incrementals so long as the preceding fulls (and lower-level
incrementals) remain.
> >
 > 
 > > Keep in mind though, that:
 > > a) websites tend to be a lot of text (php, html, css, etc) which all
 > > compresses really well
 > > b) website content may not change a lot, and with the dedupe, you
 > > may
 > > not save a lot of space anyway
 > 
 > Hello,
 > 
 > thanks for your input. I already have found out that I should not
 > delete the log files under /var/lib/backuppc/pc/example.netfed.de/
 > because now it shows zero backups. Good that I tried it on an
 > unimportant system. Do I assume correctly that I can delete the
 > directories themselves safely and they will not be shown in the
 > Webinterface anymore?

You really need to understand how BackupPC 3.x works.
Deleting the backups alone will not recover a *single* byte of storage
as you will only be removing a hard link to the pool file. Plus it
will mess up the web interface etc.
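As a quick illustration (TopDir assumed to be /var/lib/backuppc; the path
placeholders are hypothetical):

  # each 3.x backup file is one more hard link to a pool file; a link
  # count above 1 means the pool copy still holds the data
  stat -c '%h %n' /var/lib/backuppc/pc/<host>/<n>/f%2f/fetc/fhosts

Space is only reclaimed when BackupPC_nightly later removes pool files whose
link count has dropped back to 1.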

I *strongly*, *STRONGLY* recommend against manually messing with
deleting/copying/moving/renaming etc. raw backup directories unless
you truly know what you are doing.

If you want to remove a backup, best to use a script built to do it
right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x but
it exists out there.





Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 17 Jun 2020, J.J. Kosowsky wrote:


...
FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
...
To all those out there still using 3.x, if you haven't tried upgrading
to 4.x yet, I suggest you do. If you have, I suggest you try again.
...


For the record, I'm one of those who had tried 4.x a few years ago and
been bitten by it.  So I put it back in the tar.gz and stayed with 3.x
for a while longer.  About a year ago I did try again, and things went
very much better.  I now believe that Jeff is right in all he says, so

+1

--

73,
Ged.




Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-16 Thread backuppc
Stefan Schumacher wrote at about 11:29:14 + on Tuesday, June 16, 2020:
 > Hello,
 > 
 > I use BackupPC to back up VMs running mostly webservers and a few custom
 > services. As everyone knows, Websites have a lifetime and at a certain
 > point the customer wishes for the site to be taken offline. We have one
 > Backuppc which we use for one big, special customer who wants a
 > FullKeepCnt of 4,0,12,0,0,0,10.
 > 
 > Now I have multiple websites which for which I have deactivated the
 > backup, but which still have multiple full and incremental backups
 > stored - up to 17 full backups to be exact.
 > 
 > Is there a way to delete all but the latest full backup and still be
 > able to restore the website on demand? Is this technically possible or
 > will this clash with the pooling and deduplication functions of
 > backuppc? How should I proceed? I am still using Backuppc 3.3, because
 > of problems with backuppc4. (No need to go into details here)

FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
- Getting rid of all the hard-links and using full-file md5sums for the
  pool digests is infinitely cleaner, simpler, and faster.
- It makes it so much easier and faster to archive a copy of your backups.
- It can reliably back-up and restore xattributes (e.g., SELinux) as
  well as ACLs -- making a perfect restore possible.
- Specifically, if rsync can make a perfect copy, then BackupPC can do
  a perfect restore.
- It also seems quite stable.

Finally, all the new dev work is being done on 4.x so 3.x is
effectively end-of-life other than perhaps simple/critical bug fixes.

To all those out there still using 3.x, if you haven't tried upgrading
to 4.x yet, I suggest you do. If you have, I suggest you try again.
BackupPC_migrateV3toV4 does a great job of converting 3.x backups to
4.x backups.

If you have questions or need help, I suggest you ask for assistance
on the group as people are always willing and able to answer
questions.




[BackupPC-users] Simple Bash shell function for locating and reading pool files

2020-06-11 Thread backuppc


I frequently use the following Bash shell function to locate and
read pool/cpool files, as I got tired of manually decoding the
pool hierarchy. It's not very complicated, but helpful.

1. You can enter either:
   - 32 hex character digest (lower case): <32hex>
   - attrib file name with/without preceding path and either in the
 "normal" or inode form, i.e.,
[/]attrib_<32hex>
[/]attrib<2hex>_<32hex>

2. It works with pool or cpool
3. It works with v3/v4

#
function BackupPC_zcatPool ()
{
    local BACKUPPC_ZCAT=$(which BackupPC_zcat)
    [ -n "$BACKUPPC_ZCAT" ] || BACKUPPC_ZCAT=/usr/share/backuppc/bin/BackupPC_zcat
    [ -n "$CPOOL" ] || local CPOOL=/var/lib/backuppc/cpool
    [ -n "$POOL" ] || local POOL=/var/lib/backuppc/pool

    local file=${1##*/} #Strip the path prefix
    #If attrib file...
    file=${file/attrib[0-9a-f][0-9a-f]_/attrib_} #Convert inode-format attrib to normal attrib
    file=${file##attrib_} #Extract the md5sum from the attrib file name

    #%04x keeps leading zeros so the two-level split below stays aligned
    local ABCD=$(printf '%04x' "$(( 0x${file:0:4} & 0xfefe ))")
    local prefix="${ABCD:0:2}/${ABCD:2:2}"
#   echo $prefix

    if [ -e "$CPOOL/$prefix/$file" ]; then #V4 - cpool
        $BACKUPPC_ZCAT "$CPOOL/$prefix/$file"
    elif [ -e "$POOL/$prefix/$file" ]; then #V4 - pool
        cat "$POOL/$prefix/$file"
    elif [ -e "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - cpool
        $BACKUPPC_ZCAT "$CPOOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    elif [ -e "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file" ]; then #V3 - pool
        cat "$POOL/${file:0:1}/${file:1:1}/${file:2:1}/$file"
    else
        echo "Can't find pool file: $file" >/dev/stderr
    fi
}
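For example, given a digest from a "missing pool file" warning, or an attrib
file path (host and backup number are placeholders):

  BackupPC_zcatPool 718fc4796633702979bb5edbd20e27a6
  BackupPC_zcatPool /var/lib/backuppc/pc/<host>/<n>/attrib_<32hex>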




Re: [BackupPC-users] how to disable full backups and keep just incremental

2020-06-10 Thread Craig Barratt via BackupPC-users
Dagg,

What version of BackupPC are you using?

Craig

On Wed, Jun 10, 2020 at 7:24 AM daggs  wrote:

> Greetings,
>
> I have two large (several hundred gigabytes) backups that keep failing
> due to abort or child end unexpectedly. I ran it three times already.
> I want to disable the full backup and keep the incremental, what is the
> proper way to do that?
>
> Thanks,
>
> Dagg.
>
>


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

The first method seems simpler.  Don't you just have to mv the file based
on BackupPC_zcat file | md5sum?  BackupPC_nightly shouldn't need to run
(other than to check you no longer get the missing error).
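A rough sketch of that for a batch of recovered v3 pool files (the
./recovered directory and paths are assumptions; the prefix math mirrors the
v4 pool layout):

  for f in ./recovered/*; do
      d=$(BackupPC_zcat "$f" | md5sum | awk '{print $1}')
      p=$(printf '%04x' "$(( 0x${d:0:4} & 0xfefe ))")
      mv "$f" "/var/lib/backuppc/cpool/${p:0:2}/${p:2:2}/$d"
  done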

Btw, where did you find the missing pool files?

For the benefit of people on the list, Jeff and I are addressing the other
issues off-list.

Craig

On Tue, Jun 9, 2020 at 6:48 PM  wrote:

> Of course, the unanswered interesting question is why did this small
> number of 37 files out of about 3.5M pool files fail to migrate
> properly from v3 to v4...
>
> Note: I ran as many checks before and after as possible on the pool
> and pc hierarchy integrity (using my old v3 routines I had written) as
> well as checked error messages from the migration itself. I also of
> course had the BackupPC service off...
>
> "" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
>  > I found some of the missing v4 pool files (mentioned in an earlier
>  > post) in a full-disk backup of my old v3 setup.
>  >
>  > I would like to add them back to the v4 pool to eliminate the missing
>  > pool file messages and thus fix my backups.
>  >
>  > I can think of several ways:
>  >
>  > - Method A.
>  >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
>  >  file into a new file named by its uncompressed md5sum and then move
>  >  it appropriately into the v4 cpool 2-layer directory hierarchy.
>  >
>  >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
>  >  counts to coincide with the now correct pc-branch ref count
>  >
>  > - Method B
>  >   1. BackupPC_zcat the recovered files from the v3 pool into a new
>  >  directory. Naming of the files is immaterial.
>  >   2. Create a new temporary host and use that to backup the folder
>  >   3. *Manually* delete the host by deleting the entire host folder
>  >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
>  >
>  > - Method C
>  >   1. Use some native code or routines that Craig may already have
>  >  written that do most or all of the above
>  >
>  > Any thoughts on which of these work and which way is preferable?
>  >
>  > Jeff
>  >
>
>


Re: [BackupPC-users] attrib_0 files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

I don't think there's much difference whether a directory has an empty
attrib file or not.  An empty attrib file is written when a directory ends up
being empty after updating the directory.  A directory can exist without one
when reverse deltas require a change deeper in the directory tree, which
causes the unfilled backup to create the intermediate directories; those
won't get attrib files unless rsync needs to make changes at that level too.

Craig

On Mon, Jun 8, 2020 at 9:34 PM  wrote:

> I have some empty attrib files, labeled attrib_0.
> Note that the directory it represents, has no subdirectories. So, I would
> have
> thought that no attrib file was present/necessary -- which seems to be
> the case in most of my empty directories.
>
> So what is the difference (and rationale) for attrib_0 vs no attrib
> file.
> Does that have to do with a prior/subsequent file deletion?
>
>


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread backuppc
Of course, the unanswered interesting question is why did this small
number of 37 files out of about 3.5M pool files fail to migrate
properly from v3 to v4...

Note: I ran as many checks before and after as possible on the pool
and pc hierarchy integrity (using my old v3 routines I had written) as
well as checked error messages from the migration itself. I also of
course had the BackupPC service off...

"" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
 > I found some of the missing v4 pool files (mentioned in an earlier
 > post) in a full-disk backup of my old v3 setup.
 > 
 > I would like to add them back to the v4 pool to eliminate the missing
 > pool file messages and thus fix my backups.
 > 
 > I can think of several ways:
 > 
 > - Method A.
 >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
 >  file into a new file named by its uncompressed md5sum and then move
 >  it appropriately into the v4 cpool 2-layer directory hierarchy.
 > 
 >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
 >  counts to coincide with the now correct pc-branch ref count
 > 
 > - Method B
 >   1. BackupPC_zcat the recovered files from the v3 pool into a new
 >  directory. Naming of the files is immaterial.
 >   2. Create a new temporary host and use that to backup the folder
 >   3. *Manually* delete the host by deleting the entire host folder
 >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
 > 
 > - Method C
 >   1. Use some native code or routines that Craig may already have
 >  written that do most or all of the above
 > 
 > Any thoughts on which of these work and which way is preferable?
 > 
 > Jeff
 >




Re: [BackupPC-users] Discrepancy in *actual* vs. *reported* missing pool files

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

We've discussed at least one issue off-list - making sure you consider
inodes too.

It looks like BackupPC_fsck -f only rebuilds the last two backup refcounts
for each host.  It should use the -F option instead of -f when it calls
BackupPC_refcountUpdate (see line 630).  So you should try changing that
and re-running.

Craig

On Tue, Jun 9, 2020 at 8:44 AM  wrote:

> For the longest of time my log files have warned about 37 missing pool
> files.
> E.g.
> admin : BackupPC_refCountUpdate: missing pool file
> 718fc4796633702979bb5edbd20e27a6
>
> So, I decided to find them to see what is going on...
>
> I did the following:
>
> 1. Stopped the running of further backups
> Ran: BackupPC_fsck -f' to do a full checkup
> Ran: BackupPC_nightly to prune the pool fully
>
> 2. Created a sorted, uniq list of all the cpool files, using 'find'
>and 'sort -u' on TopDir/cpool
>
> 3. Created a program to iterate through all the attrib files in all my
>backups and print out the digest and name of each file (plus also
>size and type). I also included the md5sum encoded in the name of
>each attrib file itself.
> Ran the program on all my hosts and backups
> Sorted and uniquified the list of md5sum
>
> 4. Used 'comm -1 -3' and 'comm -2 -3' to find missing ones from each
>listing
>
> Result:
> 1. Relative to the attrib listing, the pool was missing *105* files
>including the 37 that were found in the LOG
>
>INTERESTINGLY, all 105 were from previously migrated v3 backups.
>Actually, from the last 3 backups on that machine (full, incr, incr)
>
> 2. Relative to the pool listing, there were *1154* files in the pool
>that were not mentioned in the attrib file digests (including the
>digest of the attrib itself)
>
> So,
> - Why is BackupPC_fsck not detecting all the missing pool files?
> - Why is BackupPC_nightly not pruning files not mentioned in the
>   attrib listing?
> - Any suggestions on how to further troubleshoot?
>
>
>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
I pushed a commit
<https://github.com/backuppc/backuppc/commit/7273cd15b0414c6551df4066b52ecc042e1716b6>
that implements nightly pool checking on a configurable portion of the pool
files.  It needs the latest version of backuppc-xs, 0.61.
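A minimal sketch of the corresponding setting (the parameter name is from the
4.4.0 release notes; the value is only an illustration):

$Conf{PoolNightlyDigestCheckPercent} = 1;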

Craig

On Mon, Jun 8, 2020 at 4:22 PM Michael Huntley  wrote:

> I’m fine with both action items.
>
> I back up millions of emails and so far the restores I’ve performed have
> never been an issue.
>
> mph
>
>
>
> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
> 
> Jeff & Guillermo,
>
> Agreed - it's better to scan small subsets of the pool.  I'll add that
> to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
> unused files and update stats).
>
> Craig
>
> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix
>> any errors.
>>  >
>>  > Sure. Actually I've put aside to interpret the error and the file
>>  > involved until I find an actual error (so I hope to never need that
>>  > information! :) )
>>  >
>>  > > So it's probably best to have rsync-bpc implement the old
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
>> skipping the --checksum short-circuit during a full.  For that fraction of
>> files, it would do a full rsync check and update, which would update the
>> pool file if they are not identical.
>>  >
>>  > That would be a good compromise. It makes the fulls a bit slower in
>>  > servers with poor network and slow disks, but it's more clear what to
>>  > do in case of error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differs for
>>  > those files?
>>  >
>>
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>>
>> That is why I suggested a nightly troll through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>>
>>
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Jeff & Guillermo,

Agreed - it's better to scan small subsets of the pool.  I'll add that
to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
unused files and update stats).

Craig

On Mon, Jun 8, 2020 at 2:35 PM  wrote:

> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>  > > While it's helpful to check the pool, it isn't obvious how to fix any
> errors.
>  >
>  > Sure. Actually I've put aside to interpret the error and the file
>  > involved until I find an actual error (so I hope to never need that
>  > information! :) )
>  >
>  > > So it's probably best to have rsync-bpc implement the old
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
> skipping the --checksum short-circuit during a full.  For that fraction of
> files, it would do a full rsync check and update, which would update the
> pool file if they are not identical.
>  >
>  > That would be a good compromise. It makes the fulls a bit slower in
>  > servers with poor network and slow disks, but it's more clear what to
>  > do in case of error. Maybe also add a "warning of possible pool
>  > corruption" if the stored checksum and the new checksum differs for
>  > those files?
>  >
>
> The only problem with this approach is that it never revisits pool
> files that aren't part of new backups.
>
> That is why I suggested a nightly troll through the cpool/pool to
> check md5sums going sequentially through X% each night...
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Guillermo,

Yes, that's an excellent point.  Actually v3 suffers from this too since,
with cached block and full-file checksums, it doesn't recheck the file
contents either.  However, v3 had a
parameter $Conf{RsyncCsumCacheVerifyProb} (default 0.01 == 1%) that caused
rsync to verify that random fraction of the file contents.  Other xfer
methods (eg, tar and smb) always do a full-file compare during a full, so
there shouldn't be undetected server-side corruption with those XferMethods.

Thanks for the script.  While it's helpful to check the pool, it isn't
obvious how to fix any errors.  So it's probably best to have rsync-bpc
implement the old $Conf{RsyncCsumCacheVerifyProb} setting.  It could do
that by randomly skipping the --checksum short-circuit during a full.  For
that fraction of files, it would do a full rsync check and update, which
would update the pool file if they are not identical.

If folks agree with that approach, that's what I'll implement.

Craig

On Mon, Jun 8, 2020 at 10:16 AM Guillermo Rozas 
wrote:

> I've attached the script I'm using. It's very rough, so use at your own
> risk!
>
> I run it daily checking 4 folders of the pool per day, sequentially,
> so it takes 32 days to check them all. You can modify the external
> loop to change this. The last checked folder is saved in an auxiliary
> file.
>
> The checksum is done uncompressing the files in the pool using
> zlib-flate (line 25), but it can be changed to pigz or BackupPC_zcat.
> On my severely CPU-limited server (Banana Pi) both pigz and zlib-flate
> are much faster than BackupPC_zcat, they take around a quarter of the
> time to check the files (pigz is marginally faster than zlib-flate).
> On the other hand, BackupPC_zcat puts the lowest load on the CPU,
> zlib-flate's load is 30-35% higher, and pigz's is a whooping 80-100%
> higher.
>
> However, as BackupPC_zcat produces slightly modified gzip files, there
> is a (very) small chance that a BackupPC_zcat compressed file is not
> properly uncompressed by the other two (line 28 in the script). If
> that happens, you need to re-check every zlib-flate or pigz failure
> with BackupPC_zcat before calling it a real error. I think this gets
> the best balance between load on the system and time spent checking
> the pool (at least for my server and pool...).
>
> Best regards,
> Guillermo
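
The per-file test the script performs amounts to the sketch below, assuming (as discussed above) that a v4 pool file's name is the md5 digest of its uncompressed contents; the directory path is illustrative:

    # check every file in one cpool subdirectory
    for f in /var/lib/backuppc/cpool/00/00/*; do
        digest=$(basename "$f")
        sum=$(zlib-flate -uncompress < "$f" | md5sum | cut -d' ' -f1)
        # per the caveat above, re-check mismatches with BackupPC_zcat
        # before treating them as real corruption
        [ "$sum" = "$digest" ] || echo "possible mismatch: $f"
    done
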
>
>
> On Mon, Jun 8, 2020 at 1:28 PM  wrote:
> >
> > Good point...
> > Craig - would it make sense to add a parameter to BackupPC_nightly
> > that would check a user-settable percentage of the files each night,
> > say NightlyChecksumPercent. So if set to 3%, the pool would be checked
> > (sequentially) over the period of ~1 month
> >
> > Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
> >  > Yes, I wouldn't worry about collisions by chance.
> >  >
> >  > However, there is a second aspect that is not covered here: if you
> >  > rely only on saved checksums in the server, it will not check again
> >  > unmodified pool files. This risks you missing file system corruption
> >  > or bit rot in the backup files that were previously caught by the V3
> >  > behaviour (which periodically checksummed the pool files).
> >  >
> >  > Two solutions:
> >  > - put the pool in a file system with checksum verification included
 > >  > - use a script to periodically traverse the pool and checksum the files
> >  >
> >  > Best regards,
> >  > Guillermo
> >  >
> >  >
> >  >
> >  > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
> >  >  wrote:
> >  > >
> >  > > Hi there,
> >  > >
> >  > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
> >  > >
> >  > > > ... presumably a very rare event ...
> >  > >
> >  > > That's putting it a little mildly.
> >  > >
> >  > > If it's really all truly random, then if you tried random
> collisions a
> >  > > million times per picosecond you would (probably) need of the order
> of
> >  > > ten trillion years to have a good chance of finding one...
> >  > >
> >  > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
> >  > > 10.79
> >  > >
> >  > > I think it's safe to say that it's not going to happen by chance.
> >  > >
> >  > > If it's truly random.
> >  > >
> >  > > --
> >  > >
> >  > > 73,
> >  > > Ged.
> >  > >
> >  > >
> >  > > ___
 > >  > > BackupPC-users mailing list

Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread backuppc
Good point...
Craig - would it make sense to add a parameter to BackupPC_nightly
that would check a user-settable percentage of the files each night,
say NightlyChecksumPercent. So if set to 3%, the pool would be checked
(sequentially) over the period of ~1 month

Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
 > Yes, I wouldn't worry about collisions by chance.
 > 
 > However, there is a second aspect that is not covered here: if you
 > rely only on saved checksums in the server, it will not check again
 > unmodified pool files. This risks you missing file system corruption
 > or bit rot in the backup files that were previously caught by the V3
 > behaviour (which periodically checksummed the pool files).
 > 
 > Two solutions:
 > - put the pool in a file system with checksum verification included
 > - use a script to periodically traverse the pool and checksum the files
 > 
 > Best regards,
 > Guillermo
 > 
 > 
 > 
 > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
 >  wrote:
 > >
 > > Hi there,
 > >
 > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
 > >
 > > > ... presumably a very rare event ...
 > >
 > > That's putting it a little mildly.
 > >
 > > If it's really all truly random, then if you tried random collisions a
 > > million times per picosecond you would (probably) need of the order of
 > > ten trillion years to have a good chance of finding one...
 > >
 > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
 > > 10.79
 > >
 > > I think it's safe to say that it's not going to happen by chance.
 > >
 > > If it's truly random.
 > >
 > > --
 > >
 > > 73,
 > > Ged.
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 8 Jun 2020, Jeff Kosowsky wrote:


... presumably a very rare event ...


That's putting it a little mildly.

If it's really all truly random, then if you tried random collisions a
million times per picosecond you would (probably) need of the order of
ten trillion years to have a good chance of finding one...

$ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
10.79

I think it's safe to say that it's not going to happen by chance.

If it's truly random.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-07 Thread Craig Barratt via BackupPC-users
Jeff,

Yes, that's correct.

In v4 a full backup using --checksum will compare all the metadata and
full-file checksum.  Any file that matches all those will be presumed
unchanged.  In v4 the server load for a full is very low, since all that
meta data (including the full-file checksum) is stored and easily accessed
without needing to look at the file contents at all.  An incremental backup
just checks all the metadata and not the full-file checksum, which is fast
on both the server and client side.  V4 also supports incremental-only (by
periodically filling a backup), in cases where that is sufficient.
However, that's more risky and not the default.

In v3, a full backup checks the block-based deltas and full-file checksum
for every file.  That's a lot more work and seems unnecessary.  You can get
that behavior in v4 too by replacing --checksum with --ignore-times, but
it's a lot more expensive on the server side since v4 doesn't cache the
block and full-file checksums.
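
The two behaviors map onto a single config.pl line, using the stock v4 option names quoted elsewhere in this thread:

    # v4 default: skip files whose metadata and full-file checksum match
    $Conf{RsyncFullArgsExtra} = ['--checksum'];

    # v3-like, more conservative: block-compare every file during fulls
    # $Conf{RsyncFullArgsExtra} = ['--ignore-times'];
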

While md5 collisions can be constructed with various properties, the chance
of a random file change creating a hash collision is 2^-128, as you note.

Craig


On Sun, Jun 7, 2020 at 9:11 PM  wrote:

> Silly me... the '--checksum' is only for 'Full' so that explains the
> difference between 'incrementals' and 'fulls'... along with presumably
> why my case wasn't caught by an incremental.
>
> I still don't fully understand the comment referencing V3 and replacing
> --checksum with --ignore-times.
>
> Is the point that v3 compared both full file and block
> checksums while in v4 --checksum only compares full file checksums?
> And so v3 is more conservative since there might be checksum
> collisions of 2 non-identical files at the file-checksum level that
> would be unmasked by checksum differences at the block level?
> (presumably a very rare event -- presumably < 2^-128 since the hash
> itself is 128 bits and the times and size are also checked)
>
> "" wrote at about 23:54:14 -0400 on Sunday, June 7, 2020:
>  > Can someone clarify how --checksum works in v4?
>  > And specifically, when could it get 'fooled' thinking 2 files are
>  > identical when they really aren't...
>  >
>  > According to config.pl:
>  >
>  >The --checksum argument causes the client to send full-file
>  >checksum for every file (meaning the client reads every file and
>  >computes the checksum, which is sent with the file list).  On the
>  >server, rsync_bpc will skip any files that have a matching
>  >full-file checksum, and size, mtime and number of hardlinks.  Any
>  >file that has different attributes will be updating using the block
>  >rsync algorithm.
>  >
>  >In V3, full backups applied the block rsync algorithm to every
>  >file, which is a lot slower but a bit more conservative.  To get
>  >that behavior, replace --checksum with --ignore-times.
>  >
>  >
>  > While according to the 'rsync' man pages:
>  >-c, --checksum
>  >This changes the way rsync checks if the files have been changed
>  >and are in need of a transfer.  Without this option, rsync uses a
>  >"quick check" that (by default) checks if each file’s size and time
>  >of last modification match between the sender and receiver.  This
>  >option changes this to compare a 128-bit checksum for each file
>  >that has a matching size.  Generating the checksums means that both
>  >sides will expend a lot of disk I/O reading all the data in the
>  >files in the transfer (and this is prior to any reading that will
>  >be done to transfer changed files), so this can slow things down
>  >significantly.
>  >
>  >
>  > Note by default:
>  > $Conf{RsyncFullArgsExtra} = ['--checksum'];
>  >
>  > So in v4:
>  > - Do incrementals and fulls differ in how/when checksums are used?
>  > - For each case, what situations would cause BackupPC to be fooled?
>  > - Specifically, I don't understand the comment of replacing --checksum
>  >   with --ignore-times since the rsync definition of --checksum
 >  >   says that it doesn't look at times but a 128-bit file checksum.
>  >
>  > The reason I ask is that I recompiled a debian package (happens to be
>  > libbackuppc-xs-perl) to pull in the latest version 0.60. But I forgot
>  > to change the date in the Changelog. When installing the package, the
>  > file dates were the same even though the content and file md5sums for
>  > some files had changed.
>  >
>  > Specifically,
>  > /usr/lib/x86_64-linux-gnu/perl5/5.26/auto/BackupPC/XS/XS.so
>  > had the same size (and date due to my mistake) but a different file
 >  > md5sum.

Re: [BackupPC-users] How does --checksum work in v4?

2020-06-07 Thread backuppc
Silly me... the '--checksum' is only for 'Full' so that explains the
difference between 'incrementals' and 'fulls'... along with presumably
why my case wasn't caught by an incremental.

I still don't fully understand the comment referencing V3 and replacing
--checksum with --ignore-times.

Is the point that v3 compared both full file and block
checksums while in v4 --checksum only compares full file checksums?
And so v3 is more conservative since there might be checksum
collisions of 2 non-identical files at the file-checksum level that
would be unmasked by checksum differences at the block level?
(presumably a very rare event -- presumably < 2^-128 since the hash
itself is 128 bits and the times and size are also checked)

"" wrote at about 23:54:14 -0400 on Sunday, June 7, 2020:
 > Can someone clarify how --checksum works in v4?
 > And specifically, when could it get 'fooled' thinking 2 files are
 > identical when they really aren't...
 > 
 > According to config.pl:
 > 
 >The --checksum argument causes the client to send full-file
 >checksum for every file (meaning the client reads every file and
 >computes the checksum, which is sent with the file list).  On the
 >server, rsync_bpc will skip any files that have a matching
 >full-file checksum, and size, mtime and number of hardlinks.  Any
 >file that has different attributes will be updating using the block
 >rsync algorithm.
 > 
 >In V3, full backups applied the block rsync algorithm to every
 >file, which is a lot slower but a bit more conservative.  To get
 >that behavior, replace --checksum with --ignore-times.
 > 
 > 
 > While according to the 'rsync' man pages:
 >-c, --checksum
 >This changes the way rsync checks if the files have been changed
 >and are in need of a transfer.  Without this option, rsync uses a
 >"quick check" that (by default) checks if each file’s size and time
 >of last modification match between the sender and receiver.  This
 >option changes this to compare a 128-bit checksum for each file
 >that has a matching size.  Generating the checksums means that both
 >sides will expend a lot of disk I/O reading all the data in the
 >files in the transfer (and this is prior to any reading that will
 >be done to transfer changed files), so this can slow things down
 >significantly.
 > 
 > 
 > Note by default:
 > $Conf{RsyncFullArgsExtra} = ['--checksum'];
 > 
 > So in v4:
 > - Do incrementals and fulls differ in how/when checksums are used?
 > - For each case, what situations would cause BackupPC to be fooled?
 > - Specifically, I don't understand the comment of replacing --checksum
 >   with --ignore-times since the rsync definition of --checksum
 >   says that it doesn't look at times but a 128-bit file checksum.
 > 
 > The reason I ask is that I recompiled a debian package (happens to be
 > libbackuppc-xs-perl) to pull in the latest version 0.60. But I forgot
 > to change the date in the Changelog. When installing the package, the
 > file dates were the same even though the content and file md5sums for
 > some files had changed.
 > 
 > Specifically,
 > /usr/lib/x86_64-linux-gnu/perl5/5.26/auto/BackupPC/XS/XS.so
 > had the same size (and date due to my mistake) but a different file
 > md5sum.
 > 
 > And an incremental backup didn't detect this difference...
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Regarding Transport mechanism limitations

2020-06-02 Thread Craig Barratt via BackupPC-users
Ok, I'll remove that.

There isn't an 18GB limit.  Kris was simply saying that 18GB files work
fine.

Craig

On Tue, Jun 2, 2020 at 12:41 PM  wrote:

> The url in "Some Limitations" section of updated docs are pointing to old
> docs
>
> Thanks. Is there a way to work around the 18 GB limit?
>
> On June 3, 2020 12:13:12 AM GMT+05:30, Kris Lou via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
> >Updated Docs (4.x) are here:   https://backuppc.github.io/backuppc/
> >
> >And no problems here with larger files (18GB) using rsync.exe from
> >Cygwin.
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Thank you far an easier installation

2020-06-02 Thread Craig Barratt via BackupPC-users
Thanks to folks who have developed packages for several linux flavors, and
also for updating the wiki <https://github.com/backuppc/backuppc/wiki>.

It would be great to get additional volunteers to develop and maintain
packages for remaining linux distros, and submitting them upstream.  If you
are interested please let me know.

Craig

On Tue, Jun 2, 2020 at 11:14 AM Bob Wooden  wrote:

> For those of you who may not be aware of this.
>
> In the process of building a replacement machine (hardware update) I
> have discovered that CentOS 8, the EPEL repo offers BackupPC 4.3.2. (The
> CentOS 7 EPEL repo offers BackupPC 3.3.2.)
>
> After years of building from source, building *.deb packages and various
> other build processes, one of the Linux distros has "caught up" to a
> current version of BackupPC (4.3.2) in one of their repos.
>
> (Not throwing "stones" at any of the other distros. Just sharing
> information.)
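
For anyone following along, the install on CentOS 8 comes down to two commands (package name as it appears in EPEL):

    sudo dnf install epel-release
    sudo dnf install BackupPC
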
>
> --
> ^^
>
> Bob Wooden
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Regarding Transport mechanism limitations

2020-06-02 Thread Craig Barratt via BackupPC-users
Kris is correct - the limits in 4.x (and 3.x) are much higher than
suggested by that very old FAQ, certainly for rsync.

Craig

On Tue, Jun 2, 2020 at 11:44 AM Kris Lou via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Updated Docs (4.x) are here:   https://backuppc.github.io/backuppc/
>
> And no problems here with larger files (18GB) using rsync.exe from Cygwin.
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Regarding Transport mechanism limitations

2020-06-02 Thread Kris Lou via BackupPC-users
Updated Docs (4.x) are here:   https://backuppc.github.io/backuppc/

And no problems here with larger files (18GB) using rsync.exe from Cygwin.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to install SCGI , exactly??

2020-05-26 Thread Craig Barratt via BackupPC-users
Is there any chance you have two different perls installed?  Check the
first line of bin/BackupPC to see what version it is running.

What is the output from these commands?

which perl
perl -e 'use SCGI; print $SCGI::VERSION;'

Craig
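
A quick way to run those checks against the exact interpreter BackupPC uses; the install path below is illustrative, so substitute whatever shebang your bin/BackupPC actually shows:

    head -1 /usr/local/BackupPC/bin/BackupPC              # shows the interpreter
    /usr/bin/perl -MSCGI -e 'print $SCGI::VERSION, "\n"'  # can that perl load it?
    /usr/bin/perl -e 'print join("\n", @INC), "\n"'       # where it searches
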

On Tue, May 26, 2020 at 12:51 PM Michael Walker - Rotech Motor Ltd. <
m...@rotech.ca> wrote:

> Well... CPAN certainly wasn't working. So I tried installing from the
> tarball:
>
>
> root@redacted:/usr/share/SCGI# perl Build.PL
> Created MYMETA.yml and MYMETA.json
> Creating new 'Build' script for 'SCGI' version '0.6'
> root@redacted:/usr/share/SCGI# ./Build
> Building SCGI
> root@redacted:/usr/share/SCGI# ./Build test
> t/blocking.t .. ok
> t/non-blocking.t .. ok
> t/pod-coverage.t .. skipped: Test::Pod::Coverage 1.00 required for testing
> POD coverage
> t/pod.t ... skipped: Test::Pod 1.00 required for testing POD
> All tests successful.
> Files=4, Tests=20,  1 wallclock secs ( 0.03 usr  0.01 sys +  0.52 cusr
> 0.10 csys =  0.66 CPU)
> Result: PASS
> root@redacted:/usr/share/SCGI# ./Build install
> Building SCGI
> Installing /usr/local/share/perl/5.26.1/SCGI.pm
> Installing /usr/local/share/perl/5.26.1/SCGI/Request.pm
> Installing /usr/local/man/man3/SCGI.3pm
> Installing /usr/local/man/man3/SCGI::Request.3pm
> root@redacted:/usr/share/SCGI#
>
> So.. it is installed, and system rebooted, but I continue to get the error:
>
> 2020-05-26 12:37:47  scgi : BackupPC_Admin_SCGI: can't load perl SCGI module 
> - install via CPAN; exiting in 60 seconds
> 2020-05-26 12:37:47 Running BackupPC_Admin_SCGI (pid=1751)
>
> Is there any information published detailing how to get BackupPC to find and 
> load the freshly-installed SCGI module?
>
> The configuration instructions online are rife with options, and frankly it's 
> mindbogglingly complex for someone like me who is trying just to install a 
> 'ready-made backup system'. (Right now I am looking at 
> https://backuppc.github.io/backuppc/BackupPC.html#Step-9:-CGI-interface )
>
> The amount of time spent analyzing all this stuff might be better spent just 
> building something from scratch :(
>
>
> -- Original Message --
> From: "Craig Barratt via BackupPC-users" <
> backuppc-users@lists.sourceforge.net>
> To: "General list for user discussion, questions and support" <
> backuppc-users@lists.sourceforge.net>
> Cc: "Craig Barratt" 
> Sent: 5/23/2020 2:51:54 PM
> Subject: Re: [BackupPC-users] how to install SCGI , exactly??
>
> There are two different components that have to be installed, one for perl
> (the client end) and another for apache (the server end).
>
> The perl module SCGI needs to be installed, which can be done via cpan.
> If cpan doesn't work you can install it manually from the tarball, which
> can be found in many places (eg, here
> <http://www.namesdir.com/mirrors/cpan/authors/id/V/VI/VIPERCODE/SCGI-0.6.tar.gz>
> ).
>
> Second, apache needs the scgi module (typically called mod-scgi) installed
> and enabled.  As Doug mentions that can be done using your favorite package
> manager.
>
> Craig
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to install SCGI , exactly??

2020-05-26 Thread Kris Lou via BackupPC-users
https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu

Alternatively, for CentOS 7 you can use the Hobbes COPR:
https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/

There's another maintained CentOS repo:
http://repo.firewall-services.com/centos/

Both maintainers are on the mailing list.

Kris Lou
k...@themusiclink.net


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread Craig Barratt via BackupPC-users
I'm not sure why you are having problems with bpc-rsync 3.1.2 and
3.1.3-beta0.  Maybe try higher levels of logging and adding -vv to the
remote rsync.  How quickly does it fail?  Is it some ssh-related issue?
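
A config.pl sketch of that suggestion, assuming the stock v4 option names:

    $Conf{XferLogLevel}   = 6;        # much more detail in the XferLOG
    $Conf{RsyncArgsExtra} = ['-vv'];  # extra rsync verbosity on both ends
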

I finally was able to get ubuntu to boot on my machine with selinux turned
on in permissive mode, although it's clearly not configured correctly.  I
can see default selinux file attributes with ls -Z.  And I can change them
with chcon or semanage fcontext.  But the actual files I set don't have any
xattr settings.  I assume all those commands are doing is updating files
below /etc/selinux/targeted/contexts/files.  Directly running rsync -X to
copy those files doesn't preserve their selinux attributes.

Anyhow, I separately reconfirmed that a user xattr setting on a regular
file is correctly backed up and restored.

Craig

On Mon, May 25, 2020 at 3:43 PM  wrote:

> Thanks Craig.
> This problem was bothering me for the longest of times... but I always
> assumed it was due to files changing or some other spurious factors...
> But now that I am backing up against fixed snapshots, it has become
> easier to one-by-one track down unexpected bugs & error messages...
>
> The only remaining issue I see now is with SELinux extended attributes
> :)
>
> Plus, the challenges with hangs on rsync-bpc 3.1.2 and 3.1.3-beta0
>
> Craig Barratt via BackupPC-users wrote at about 13:08:31 -0700 on Monday,
> May 25, 2020:
>  > Jeff,
>  >
>  > Thanks for figuring that out.  I pushed a fix
>  > <
> https://github.com/backuppc/rsync-bpc/commit/96e890fc3e5bb53f6618bd8650e8400f355b243a
> >
>  > so the warning doesn't get printed on zero-length files.
>  >
>  > Craig
>  >
>  > On Mon, May 25, 2020 at 8:08 AM  wrote:
>  >
>  > > Presumably the problem is in rsync-bpc: bpc_sysCalls.c
>  > >
>  > > int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct
>  > > file_struct *rsyncFile,
>  > >char *file_sum, off_t fileSize)
>  > > {
>  > > bpc_attrib_file *fileOrig, *file;
>  > > char poolPath[BPC_MAXPATHLEN];
>  > >
>  > > if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0,
> 0)) ) {
>  > > /*
>  > >  * Hmmm.  The file doesn't exist, but we got deltas
> suggesting the
>  > > file is
>  > >  * unchanged.  So that means the generator found a matching
> pool
>  > > file.
>  > >  * Let's try the same thing.
>  > >  */
>  > > if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
>  > > || !(fileOrig = bpc_attribCache_getFile(&acNew,
> fileName,
>  > > 0, 0)) ) {
>  > > bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't
>  > > exist\n", fileName);
>  > > return -1;
>  > > }
>  > > }
>  > >     ...
>  > >
>  > > But the zero length file (with md5sum
>  > > d41d8cd98f00b204e9800998ecf8427e) is not in the pool.
>  > >
>  > > Presumably, one should add a check to eliminate...
>  > >
>  > > backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May
> 25,
>  > > 2020:
>  > >  > Seems like these are all the zero length files...
>  > >  > Could it be that backuppc is checking for file length rather than
> file
>  > > existence???
>  > >  >
>  > >  >
>  > >  > Note: when I delete the backup and run it again, the *exact* same
>  > >  > "file doesn't exist' errors reappears (even though a new btrfs
> snapshot
>  > >  > has been created).  So I am pretty sure it is not a filesystem
> issue
>  > >  > but rather likely a bug in backuppc...
>  > >  >
>  > >  >
>  > >  > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday,
> May 25,
>  > > 2020:
>  > >  >  > I am still occasionally but not at all consistently getting
> errors
>  > > of form:
>  > >  >  >
>  > >
> Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
>  > > file doesn't exist
>  > >  >  >
>  > >  >  > Here is some background:
>  > >  >  > - Only seems to occur on full backups
>  > >  >  > - These messages correspond to files that have changed or been
> added
>  > > since the previous backup
>  > >  >  > - However, they only occur on some incrementals and only for a
> small
>  > > subset of

Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread backuppc
Thanks Craig.
This problem was bothering me for the longest of times... but I always
assumed it was due to files changing or some other spurious factors...
But now that I am backing up against fixed snapshots, it has become
easier to one-by-one track down unexpected bugs & error messages...

The only remaining issue I see now is with SELinux extended attributes
:)

Plus, the challenges with hangs on rsync-bpc 3.1.2 and 3.1.3-beta0

Craig Barratt via BackupPC-users wrote at about 13:08:31 -0700 on Monday, May 
25, 2020:
 > Jeff,
 > 
 > Thanks for figuring that out.  I pushed a fix
 > <https://github.com/backuppc/rsync-bpc/commit/96e890fc3e5bb53f6618bd8650e8400f355b243a>
 > so the warning doesn't get printed on zero-length files.
 > 
 > Craig
 > 
 > On Mon, May 25, 2020 at 8:08 AM  wrote:
 > 
 > > Presumably the problem is in rsync-bpc: bpc_sysCalls.c
 > >
 > > int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct
 > > file_struct *rsyncFile,
 > >char *file_sum, off_t fileSize)
 > > {
 > > bpc_attrib_file *fileOrig, *file;
 > > char poolPath[BPC_MAXPATHLEN];
 > >
 > > if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0, 0)) ) {
 > > /*
 > >  * Hmmm.  The file doesn't exist, but we got deltas suggesting the
 > > file is
 > >  * unchanged.  So that means the generator found a matching pool
 > > file.
 > >  * Let's try the same thing.
 > >  */
 > > if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
 > > || !(fileOrig = bpc_attribCache_getFile(&acNew, fileName,
 > > 0, 0)) ) {
 > > bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't
 > > exist\n", fileName);
 > > return -1;
 > > }
 > > }
 > > ...
 > >
 > > But the zero length file (with md5sum
 > > d41d8cd98f00b204e9800998ecf8427e) is not in the pool.
 > >
 > > Presumably, one should add a check to eliminate...
 > >
 > > backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May 25,
 > > 2020:
 > >  > Seems like these are all the zero length files...
 > >  > Could it be that backuppc is checking for file length rather than file
 > > existence???
 > >  >
 > >  >
 > >  > Note: when I delete the backup and run it again, the *exact* same
 > >  > "file doesn't exist' errors reappears (even though a new btrfs snapshot
 > >  > has been created).  So I am pretty sure it is not a filesystem issue
 > >  > but rather likely a bug in backuppc...
 > >  >
 > >  >
 > >  > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday, May 25,
 > > 2020:
 > >  >  > I am still occasionally but not at all consistently getting errors
 > > of form:
 > >  >  >
 > >  
 > > Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
 > > file doesn't exist
 > >  >  >
 > >  >  > Here is some background:
 > >  >  > - Only seems to occur on full backups
 > >  >  > - These messages correspond to files that have changed or been added
 > > since the previous backup
 > >  >  > - However, they only occur on some incrementals and only for a small
 > > subset of the changed files, even when they do occur
 > >  >  > - The files are still backed up properly
 > >  >  > - The files never 'vanished' or changed since I am using read-only
 > > btrfs snapshots
 > >  >  > - My system is rock-solid and I have not had any other file system
 > > troubles
 > >  >  >
 > >  >  > Is this really an error?
 > >  >  > What is causing it?
 > >  >  > Why does it happen seemingly randomly?
 > >  >  >
 > >  >  >
 > >  >  > ___
 > >  >  > BackupPC-users mailing list
 > >  >  > BackupPC-users@lists.sourceforge.net
 > >  >  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > >  >  > Wiki:http://backuppc.wiki.sourceforge.net
 > >  >  > Project: http://backuppc.sourceforge.net/
 > >  >
 > >  >
 > >  > ___
 > >  > BackupPC-users mailing list
 > >  > BackupPC-users@lists.sourceforge.net
 > >  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > >  > Wiki:http://backuppc.wiki.sourceforge.net
 >  > Project: http://backuppc.sourceforge.net/

Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread Craig Barratt via BackupPC-users
Jeff,

Thanks for figuring that out.  I pushed a fix
<https://github.com/backuppc/rsync-bpc/commit/96e890fc3e5bb53f6618bd8650e8400f355b243a>
so the warning doesn't get printed on zero-length files.

Craig

On Mon, May 25, 2020 at 8:08 AM  wrote:

> Presumably the problem is in rsync-bpc: bpc_sysCalls.c
>
> int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct
> file_struct *rsyncFile,
>char *file_sum, off_t fileSize)
> {
> bpc_attrib_file *fileOrig, *file;
> char poolPath[BPC_MAXPATHLEN];
>
> if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0, 0)) ) {
> /*
>  * Hmmm.  The file doesn't exist, but we got deltas suggesting the
> file is
>  * unchanged.  So that means the generator found a matching pool
> file.
>  * Let's try the same thing.
>  */
> if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
> || !(fileOrig = bpc_attribCache_getFile(&acNew, fileName,
> 0, 0)) ) {
> bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't
> exist\n", fileName);
> return -1;
> }
> }
> ...
>
> But the zero length file (with md5sum
> d41d8cd98f00b204e9800998ecf8427e) is not in the pool.
>
> Presumably, one should add a check to eliminate...
>
> backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May 25,
> 2020:
>  > Seems like these are all the zero length files...
>  > Could it be that backuppc is checking for file length rather than file
> existence???
>  >
>  >
>  > Note: when I delete the backup and run it again, the *exact* same
>  > "file doesn't exist' errors reappears (even though a new btrfs snapshot
>  > has been created).  So I am pretty sure it is not a filesystem issue
>  > but rather likely a bug in backuppc...
>  >
>  >
>  > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday, May 25,
> 2020:
>  >  > I am still occasionally but not at all consistently getting errors
> of form:
>  >  >
>  
> Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
> file doesn't exist
>  >  >
>  >  > Here is some background:
>  >  > - Only seems to occur on full backups
>  >  > - These messages correspond to files that have changed or been added
> since the previous backup
>  >  > - However, they only occur on some incrementals and only for a small
> subset of the changed files, even when they do occur
>  >  > - The files are still backed up properly
>  >  > - The files never 'vanished' or changed since I am using read-only
> btrfs snapshots
>  >  > - My system is rock-solid and I have not had any other file system
> troubles
>  >  >
>  >  > Is this really an error?
>  >  > What is causing it?
>  >  > Why does it happen seemingly randomly?
>  >  >
>  >  >
>  >  > ___
>  >  > BackupPC-users mailing list
>  >  > BackupPC-users@lists.sourceforge.net
>  >  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  >  > Wiki:http://backuppc.wiki.sourceforge.net
>  >  > Project: http://backuppc.sourceforge.net/
>  >
>  >
>  > ___
>  > BackupPC-users mailing list
>  > BackupPC-users@lists.sourceforge.net
>  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > Wiki:    http://backuppc.wiki.sourceforge.net
>  > Project: http://backuppc.sourceforge.net/
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread backuppc
Presumably the problem is in rsync-bpc: bpc_sysCalls.c

int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct 
file_struct *rsyncFile,
   char *file_sum, off_t fileSize)
{
bpc_attrib_file *fileOrig, *file;
char poolPath[BPC_MAXPATHLEN];

if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0, 0)) ) {
/*
 * Hmmm.  The file doesn't exist, but we got deltas suggesting the file 
is
 * unchanged.  So that means the generator found a matching pool file.
 * Let's try the same thing.
 */
if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
|| !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0,
0)) ) { 
bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't exist\n", 
fileName);
return -1;
}
}
...

But the zero length file (with md5sum
d41d8cd98f00b204e9800998ecf8427e) is not in the pool.

Presumably, one should add a check to eliminate...
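
Something along these lines, as a sketch only (the actual change is Craig's commit linked in the follow-up): bail out of the warning path when the file is empty, since zero-length files are never written to the pool.

    if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0, 0)) ) {
        if ( fileSize == 0 ) {
            /* empty file: no pool entry is expected, so don't warn */
            return 0;
        }
        /* ... existing pool-check and warning logic ... */
    }
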

backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May 25, 2020:
 > Seems like these are all the zero length files...
 > Could it be that backuppc is checking for file length rather than file 
 > existence???
 > 
 > 
 > Note: when I delete the backup and run it again, the *exact* same
 > "file doesn't exist' errors reappears (even though a new btrfs snapshot
 > has been created).  So I am pretty sure it is not a filesystem issue
 > but rather likely a bug in backuppc...
 > 
 > 
 > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday, May 25, 2020:
 >  > I am still occasionally but not at all consistently getting errors of 
 > form:
 >  > 
 > Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
 >  file doesn't exist
 >  > 
 >  > Here is some background:
 >  > - Only seems to occur on full backups
 >  > - These messages correspond to files that have changed or been added 
 > since the previous backup
 >  > - However, they only occur on some incrementals and only for a small 
 > subset of the changed files, even when they do occur
 >  > - The files are still backed up properly
 >  > - The files never 'vanished' or changed since I am using read-only btrfs 
 > snapshots
 >  > - My system is rock-solid and I have not had any other file system 
 > troubles
 >  > 
 >  > Is this really an error?
 >  > What is causing it?
 >  > Why does it happen seemingly randomly?
 >  > 
 >  > 
 >  > ___
 >  > BackupPC-users mailing list
 >  > BackupPC-users@lists.sourceforge.net
 >  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 >  > Wiki:http://backuppc.wiki.sourceforge.net
 >  > Project: http://backuppc.sourceforge.net/
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread backuppc
Seems like these are all the zero length files...
Could it be that backuppc is checking for file length rather than file 
existence???


Note: when I delete the backup and run it again, the *exact* same
"file doesn't exist' errors reappears (even though a new btrfs snapshot
has been created).  So I am pretty sure it is not a filesystem issue
but rather likely a bug in backuppc...


backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday, May 25, 2020:
 > I am still occasionally but not at all consistently getting errors of form:
 > 
 > Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
 >  file doesn't exist
 > 
 > Here is some background:
 > - Only seems to occur on full backups
 > - These messages correspond to files that have changed or been added since 
 > the previous backup
 > - However, they only occur on some incrementals and only for a small subset 
 > of the changed files, even when they do occur
 > - The files are still backed up properly
 > - The files never 'vanished' or changed since I am using read-only btrfs 
 > snapshots
 > - My system is rock-solid and I have not had any other file system troubles
 > 
 > Is this really an error?
 > What is causing it?
 > Why does it happen seemingly randomly?
 > 
 > 
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread backuppc
I am still occasionally but not at all consistently getting errors of form:

Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
 file doesn't exist

Here is some background:
- Only seems to occur on full backups
- These messages correspond to files that have changed or been added since the 
previous backup
- However, they only occur on some incrementals and only for a small subset of 
the changed files, even when they do occur
- The files are still backed up properly
- The files never 'vanished' or changed since I am using read-only btrfs 
snapshots
- My system is rock-solid and I have not had any other file system troubles

Is this really an error?
What is causing it?
Why does it happen seemingly randomly?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems with latest rsync-bpc 3.1.3 - zlib??

2020-05-25 Thread backuppc
Something is really weird here...
I could have sworn that 3.1.2.1 worked fine last night...
But when I tried it again this morning it hung.
I'm pretty sure the same thing happened with 3.1.3.beta0.

Reverting to 3.0.9.14 worked fine...

Nothing else changed on my system...

I am at a loss to explain this...

Craig Barratt via BackupPC-users wrote at about 21:13:59 -0700 on Sunday, May 
24, 2020:
 > Thanks for the updates.  Yes, rsync's included zlib isn't compatible with
 > system zlib.  However, since you are not using the -z option, I don't think
 > that's the issue.
 > 
 > Can you try rsync-bpc 3.1.2.1?  It has more testing than 3.1.3.beta0.
 > 
 > Craig
 > 
 > On Sun, May 24, 2020 at 7:43 PM  wrote:
 > 
 > > Upgrading to the latest rsync-bpc 3.1.3 fixed the problem with
 > > specials.
 > > And restores all seemed to work last night, until I tried dumps today.
 > >
 > > Now all my scheduled backups fail with error message:
 > > rsync error: error in rsync protocol data stream (code 12) at
 > > io.c(226) [Receiver=3.1.3.beta0]
 > >
 > > Also, when I run BackupPC_dump, it hangs at the beginning:
 > >   Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc
 > > --bpc-host-name testmachine --bpc-share-name /usr/local/bin --bpc-bkup-num
 > > 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
 > > --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 6 -e /usr/bin/sudo\ -h
 > > --rsync-path=/usr/bin/rsync --super --recursive --protect-args
 > > --numeric-ids --perms --owner --group -D --times --links --hard-links
 > > --delete --delete-excluded --one-file-system --partial --log-format=log:\
 > > %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls --xattrs
 > > --timeout=72000 myhost:/usr/local/bin/ /
 > >   full backup started for directory /usr/local/bin
 > >   started full dump, share=/usr/local/bin
 > >   Xfer PIDs are now 7793
 > >   xferPids 7793
 > >   This is the rsync child about to exec /usr/bin/rsync_bpc
 > >   cmdExecOrEval: about to exec /usr/bin/rsync_bpc --bpc-top-dir
 > > /var/lib/backuppc --bpc-host-name testmachine --bpc-share-name
 > > /usr/local/bin --bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
 > > --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level
 > > 6 -e /usr/bin/sudo\ -h --rsync-path=/usr/bin/rsync --super --recursive
 > > --protect-args --numeric-ids --perms --owner --group -D --times --links
 > > --hard-links --delete --delete-excluded --one-file-system --partial
 > > --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls
 > > --xattrs --timeout=72000 myhost:/usr/local/bin/ /
 > > bpc_path_create(/var/lib/backuppc/pc/testmachine/0)
 > >   bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0,
 > >   KeepOldAttribFiles = 0
 > >
 > > This problem resolves when I downgrade back to rsync-bpc 3.0.9.
 > >
 > > Googling suggest this might have something to do with internal
 > > vs. external zlib.h
 > >
 > > I tried configuring with --with-included-zlib=yes (default) and =no.
 > > But both had the same error.
 > >
 > > Note that when =yes, in order to compile, I had to change:
 > >  #include <zlib.h> --> #include "zlib/zlib.h"
 > > in token.c (and also changed for consistency in batch.c and options.c)
 > > since the symbol Z_INSERT_ONLY was not defined in my /usr/include/zlib.h
 > >
 > > Any thoughts on what I need to do to make this work?
 > >
 > >
 > > _______
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems with latest rsync-bpc 3.1.3 - zlib??

2020-05-24 Thread Craig Barratt via BackupPC-users
Thanks for the updates.  Yes, rsync's included zlib isn't compatible with
system zlib.  However, since you are not using the -z option, I don't think
that's the issue.

Can you try rsync-bpc 3.1.2.1?  It has more testing than 3.1.3.beta0.

Craig

On Sun, May 24, 2020 at 7:43 PM  wrote:

> Upgrading to the latest rsync-bpc 3.1.3 fixed the problem with
> specials.
> And restores all seemed to work last night, until I tried dumps today.
>
> Now all my scheduled backups fail with error message:
> rsync error: error in rsync protocol data stream (code 12) at
> io.c(226) [Receiver=3.1.3.beta0]
>
> Also, when I run BackupPC_dump, it hangs at the beginning:
>   Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc
> --bpc-host-name testmachine --bpc-share-name /usr/local/bin --bpc-bkup-num
> 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 6 -e /usr/bin/sudo\ -h
> --rsync-path=/usr/bin/rsync --super --recursive --protect-args
> --numeric-ids --perms --owner --group -D --times --links --hard-links
> --delete --delete-excluded --one-file-system --partial --log-format=log:\
> %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls --xattrs
> --timeout=72000 myhost:/usr/local/bin/ /
>   full backup started for directory /usr/local/bin
>   started full dump, share=/usr/local/bin
>   Xfer PIDs are now 7793
>   xferPids 7793
>   This is the rsync child about to exec /usr/bin/rsync_bpc
>   cmdExecOrEval: about to exec /usr/bin/rsync_bpc --bpc-top-dir
> /var/lib/backuppc --bpc-host-name testmachine --bpc-share-name
> /usr/local/bin --bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
> --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level
> 6 -e /usr/bin/sudo\ -h --rsync-path=/usr/bin/rsync --super --recursive
> --protect-args --numeric-ids --perms --owner --group -D --times --links
> --hard-links --delete --delete-excluded --one-file-system --partial
> --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls
> --xattrs --timeout=72000 myhost:/usr/local/bin/ /
> bpc_path_create(/var/lib/backuppc/pc/testmachine/0)
>   bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0,
>   KeepOldAttribFiles = 0
>
> This problem resolves when I downgrade back to rsync-bpc 3.0.9.
>
> Googling suggest this might have something to do with internal
> vs. external zlib.h
>
> I tried configuring with --with-included-zlib=yes (default) and =no.
> But both had the same error.
>
> Note that when =yes, in order to compile, I had to change:
>  #include <zlib.h> --> #include "zlib/zlib.h"
> in token.c (and also changed for consistency in batch.c and options.c)
> since the symbol Z_INSERT_ONLY was not defined in my /usr/include/zlib.h
>
> Any thoughts on what I need to do to make this work?
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Problems with latest rsync-bpc 3.1.3 - zlib??

2020-05-24 Thread backuppc
Upgrading to the latest rsync-bpc 3.1.3 fixed the problem with
specials.
And restores all seemed to work last night, until I tried dumps today.

Now all my scheduled backups fail with error message:
rsync error: error in rsync protocol data stream (code 12) at io.c(226) 
[Receiver=3.1.3.beta0]

Also, when I run BackupPC_dump, it hangs at the beginning:
  Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name testmachine --bpc-share-name /usr/local/bin --bpc-bkup-num 0 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 6 -e /usr/bin/sudo\ -h 
--rsync-path=/usr/bin/rsync --super --recursive --protect-args --numeric-ids 
--perms --owner --group -D --times --links --hard-links --delete 
--delete-excluded --one-file-system --partial --log-format=log:\ %o\ %i\ %B\ 
%8U,%8G\ %9l\ %f%L --stats --checksum --acls --xattrs --timeout=72000 
myhost:/usr/local/bin/ /
  full backup started for directory /usr/local/bin
  started full dump, share=/usr/local/bin
  Xfer PIDs are now 7793
  xferPids 7793
  This is the rsync child about to exec /usr/bin/rsync_bpc
  cmdExecOrEval: about to exec /usr/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name testmachine --bpc-share-name /usr/local/bin 
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 6 -e /usr/bin/sudo\ -h 
--rsync-path=/usr/bin/rsync --super --recursive --protect-args --numeric-ids 
--perms --owner --group -D --times --links --hard-links --delete 
--delete-excluded --one-file-system --partial --log-format=log:\ %o\ %i\ %B\ 
%8U,%8G\ %9l\ %f%L --stats --checksum --acls --xattrs --timeout=72000 
myhost:/usr/local/bin/ /
bpc_path_create(/var/lib/backuppc/pc/testmachine/0)
  bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0,
  KeepOldAttribFiles = 0

This problem resolves when I downgrade back to rsync-bpc 3.0.9.

Googling suggest this might have something to do with internal
vs. external zlib.h

I tried configuring with --with-included-zlib=yes (default) and =no.
But both had the same error.

Note that when =yes, in order to compile, I had to change:
 #include <zlib.h> --> #include "zlib/zlib.h"
in token.c (and also changed for consistency in batch.c and options.c)
since the symbol Z_INSERT_ONLY was not defined in my /usr/include/zlib.h
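
For reference, the rebuild sequence being described is the usual one from the rsync-bpc source tree (adjust install paths to taste):

    ./configure --with-included-zlib=yes   # or =no to link the system zlib
    make
    sudo make install
    rsync_bpc --version                    # confirm which build is now on PATH
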

Any thoughts on what I need to do to make this work?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-24 Thread Craig Barratt via BackupPC-users
Jeff,

I did set the policy to permissive.  If I get some time I'll try again.

Craig

On Sat, May 23, 2020 at 10:30 PM  wrote:

> Thanks Craig.
> The --specials now works (and I agree with both you and Michael that
> it is not useful... but it validates that the restore is 'perfect' as
> far as rsync is concerned)
>
> Regarding selinux, you can turn it on in 'permissive' (non-enforcing)
> mode in which case it shouldn't do anything other than create messages
> of selinux policy violations... but it shouldn't block (or otherwise
> affect any parts of your running system)
>
> Check out the following for some details:
>
> https://docs.fedoraproject.org/en-US/quick-docs/changing-selinux-states-and-modes/
>
> Craig Barratt wrote at about 18:26:34 -0700 on Saturday, May 23, 2020:
>  > While I agree with Michael that restoring sockets isn't that useful
> (since
>  > they are only created by a process that is receiving connections on a
>  > unix-domain socket), I did fix the bug
>  > <
> https://github.com/backuppc/rsync-bpc/commit/3802747ab70c8d1a41f051ac9610b899352b5271
> >
>  > that causes them to be incorrectly restored by rsync_bpc.
>  >
>  > I'm quite unfamiliar with selinux attributes.  Is it possible to add
>  > selinux attributes to a file (with setfilecon) when selinux is disabled?
>  > Unfortunately my attempt to turn selinux on didn't go well - my machine
>  > didn't boot into a usable state, so I'm not willing to turn on selinux.
>  >
>  > Craig
>  >
>  > On Fri, May 22, 2020 at 8:26 PM Michael Stowe <
>  > michael.st...@member.mensa.org> wrote:
>  >
>  > > On 2020-05-22 16:49, backu...@kosowsky.org wrote:
>  > > > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
>  > > >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
>  > > >  > > 1. Sockets are restored as regular files not special files -->
>  > > > BUG?
>  > > >  >
>  > > >  > Why would one back up a socket?
>  > > > I am testing the fidelity of the backup/restore cycle..
>  > > >>
>  > > >  > If you really think this is sensible, you should be able to
>  > > > accomplish
>  > > >  > it with "--devices --specials" as part of your rsync command
> lines.
>  > > >  >  From the symptoms, you have this in backup but not restore.
>  > > >
>  > > > Actually, in the original text (which you snipped), I shared my
>  > > > rsync_bpc commands for both 'dump' and 'restore', which include the
>  > > > '-D' flag (actually it's the default in the config.pl for both
> rsync
>  > > > dump and restore)... and '-D' is *equivalent* to '--devices
>  > > > --specials'
>  > > >
>  > > > And since I suspected some readers might miss that, I even noted in
>  > > > the text that:
>  > > >"Also, special files (--specials) should be included under the -D
>  > > >flag that I use for both rsync dump and restore commands (see
>  > > >below)"
>  > > >
>  > > > Hence, why I suggested this is a *BUG* vs. user error or lack of
>  > > > knowledge :)
>  > >
>  > > You've mistaken my point -- sure, the -D flag is there, but it's
>  > > behaving like it isn't.  Let's review:
>  > >
>  > > --devices
>  > >     This option causes rsync to transfer character and block
> device
>  > > files  to  the  remote  system  to recreate these devices.
> This
>  > > option has no effect if the receiving rsync is not  run  as
> the
>  > > super-user (see also the --super and --fake-super options).
>  > >
>  > > Naturally this begs the question as to whether you're running it as
> the
>  > > super-user, and if you've seen the options as referred to in the man
>  > > page, which I've quoted above.
>  > >
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BUG? Using --omit-dir-times in rsync backup sets all dir dates to beginning of Epoch

2020-05-24 Thread backuppc
Regarding your comment:
>  However, I don't think it makes sense to
> "fix" it, since a backup shouldn't add metadata that changes each time you
> backup some data that hasn't changed.

Actually, the whole point of --omit-dir-times is that the dir mod
time is not changed each time you rsync.

Behavior is as follows:
1. When created, the mod time is set to 'rsync' run time
2. This mod time is *not* changed on subsequent 'rsyncs' unless a
   change to the directory contents occurs in which case the
   directory timestamp is updated to the *current* time

Indeed, one reason to use --omit-dir-times is to avoid updating the
destination directory's mod time every time the source directory's mod
time changes.

This is different from omitting --times altogether, which changes the mod
time of files/links/devices to the current time on every rsync run, since
those files show up as changed (unless you also use --ignore-times).

This behavior can be verified by using rsync.
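
For example (a quick sketch using throwaway paths):

  mkdir -p /tmp/src/d /tmp/dst
  rsync -a --omit-dir-times /tmp/src/ /tmp/dst/
  stat -c '%y %n' /tmp/dst/d   # mtime = time of this first run
  rsync -a --omit-dir-times /tmp/src/ /tmp/dst/
  stat -c '%y %n' /tmp/dst/d   # unchanged by a no-op run
  touch /tmp/src/d/newfile
  rsync -a --omit-dir-times /tmp/src/ /tmp/dst/
  stat -c '%y %n' /tmp/dst/d   # updated, because the directory contents changed
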
Also, it can be gleaned from the following excerpt from the man pages:

o A t means the modification time is different and is being
  updated to the sender’s value (requires --times).  An
  alternate value of T means that the modification time will
  be set to the transfer time, which happens when a
  file/symlink/device is updated without --times and when a
  symlink is changed and the receiver can’t set its time.
  (Note: when using an rsync 3.0.0 client, you might see the
  s flag combined with t instead of the proper T flag for
  this time-setting failure.)

Craig Barratt wrote at about 18:34:54 -0700 on Saturday, May 23, 2020:
 > I wasn't previously familiar with the --omit-dir-times option.  As you
 > discovered, the low-level parts of rsync_bpc don't 100% faithfully mimic
 > the native linux system calls.  In particular, the mkdir() in rsync_bpc
 > doesn't set the mtime, since rsync updates it later (assuming
 > --omit-dir-times is not specified).
 > 
 > It would be a one-line change to set the mtime to the current time in
 > bpc_mkdir() in bpc_sysCalls.c.  However, I don't think it makes sense to
 > "fix" it, since a backup shouldn't add metadata that changes each time you
 > backup some data that hasn't changed.
 > 
 > Craig
 > 
 > On Fri, May 22, 2020 at 8:17 PM Michael Stowe <
 > michael.st...@member.mensa.org> wrote:
 > 
 > > On 2020-05-22 16:52, backu...@kosowsky.org wrote:
 > > > Michael Stowe wrote at about 23:46:54 + on Friday, May 22, 2020:
 > > >  > On 2020-05-22 16:19, backu...@kosowsky.org wrote:
 > > >  > > Michael Stowe wrote at about 22:24:13 + on Friday, May 22,
 > > > 2020:
 > > >  > >  > On 2020-05-22 09:15, backu...@kosowsky.org wrote:
 > > >  > >  > What it does is omit directories from the modification times
 > > > that it
 > > >  > >  > sets.  In other words, you're telling it not to set the times
 > > > on
 > > >  > >  > directories it copies.  The beginning of the epoch is pretty
 > > >  > > reasonable
 > > >  > >  > for directories which have no specific time set.
 > > >  > >  >
 > > >  > >
 > > >  > > Actually, at least the manpage is unclear.
 > >  > > And *differs* from the default behavior of native rsync (at least
 > > > on
 > > >  > > Ubuntu) that sets the dir time to the current time -- which is
 > > > more
 > > >  > > reasonable than some arbitrary epoch = 0 time.
 > > >  > >
 > > >  > > That is what I would have expected and I believe should be the
 > > > default
 > > >  > > behavior...
 > > >  > >
 > > >  > >  > This option has no implications for which directories are
 > > > selected
 > > >  > > to be
 > > >  > >  > copied.
 > > >  >
 > > >  > Unset is unset, it's not the option to use if you want the directory
 > > >  > modification time set.
 > > >
 > > > Regardless, behavior should be consistent with normal rsync...
 > > >
 > > > If you can show me a standard *nix version of rsync that uses Epoch as
 > > > the default then I would retract my point... but otherwise Epoch is
 > > > totally arbitrary and illogical... while at least the current time has
 > > > a good rationale... Choosing 1/1/1970 not so much...
 > >
 > > It's not that "epoch is the default" it's that that's what a timestamp
 > > of 0 is.  When you tell rsync not to set the timestamps, it doesn't.
 > >
 > > If you want to touch the directories and update their timestamps to the
 > > current time, you can do that, but it's an odd thing to expect rsync to
 > > take care of for you when you explicitly tell it not to.

Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-23 Thread backuppc
Thanks Craig.
The --specials now works (and I agree with both you and Michael that
it is not useful... but it validates that the restore is 'perfect' as
far as rsync is concerned)
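
(The comparison itself was a dry-run rsync along these lines -- the same
flags as the restore test earlier in the thread, with placeholder paths:)

  sudo rsync -navcxXAOH --delete /snapshot/root/ /tmp/tmprestore/root/

An empty file list from the dry run means rsync considers the two trees
identical, ACLs and xattrs included.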

Regarding selinux, you can turn it on in 'permissive' (non-enforcing)
mode in which case it shouldn't do anything other than create messages
of selinux policy violations... but it shouldn't block (or otherwise
affect any parts of your running system)

Check out the following for some details:
https://docs.fedoraproject.org/en-US/quick-docs/changing-selinux-states-and-modes/
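
(In practice the switch is a one-liner, assuming the standard SELinux
userland tools are installed:)

  sudo setenforce 0   # permissive until reboot; "sudo setenforce 1" goes back
  getenforce          # should now report "Permissive"
  # persistent change: set SELINUX=permissive in /etc/selinux/config
  # (a reboot is needed if SELinux was fully disabled at boot)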

Craig Barratt wrote at about 18:26:34 -0700 on Saturday, May 23, 2020:
 > While I agree with Michael that restoring sockets isn't that useful (since
 > they are only created by a process that is receiving connections on a
 > unix-domain socket), I did fix the bug
 > <https://github.com/backuppc/rsync-bpc/commit/3802747ab70c8d1a41f051ac9610b899352b5271>
 > that causes them to be incorrectly restored by rsync_bpc.
 > 
 > I'm quite unfamiliar with selinux attributes.  Is it possible to add
 > selinux attributes to a file (with setfilecon) when selinux is disabled?
 > Unfortunately my attempt to turn selinux on didn't go well - my machine
 > didn't boot into a usable state, so I'm not willing to turn on selinux.
 > 
 > Craig
 > 
 > On Fri, May 22, 2020 at 8:26 PM Michael Stowe <
 > michael.st...@member.mensa.org> wrote:
 > 
 > > On 2020-05-22 16:49, backu...@kosowsky.org wrote:
 > > > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
 > > >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
 > > >  > > 1. Sockets are restored as regular files not special files -->
 > > > BUG?
 > > >  >
 > > >  > Why would one back up a socket?
 > > > I am testing the fidelity of the backup/restore cycle..
 > > >>
 > > >  > If you really think this is sensible, you should be able to
 > > > accomplish
 > > >  > it with "--devices --specials" as part of your rsync command lines.
 > > >  >  From the symptoms, you have this in backup but not restore.
 > > >
 > > > Actually, in the original text (which you snipped), I shared my
 > > > rsync_bpc commands for both 'dump' and 'restore', which include the
 > > > '-D' flag (actually it's the default in the config.pl for both rsync
 > > > dump and restore)... and '-D' is *equivalent* to '--devices
 > > > --specials'
 > > >
 > > > And since I suspected some readers might miss that, I even noted in
 > > > the text that:
 > > >"Also, special files (--specials) should be included under the -D
 > > >flag that I use for both rsync dump and restore commands (see
 > > >below)"
 > > >
 > > > Hence, why I suggested this is a *BUG* vs. user error or lack of
 > > > knowledge :)
 > >
 > > You've mistaken my point -- sure, the -D flag is there, but it's
 > > behaving like it isn't.  Let's review:
 > >
 > > --devices
 > > This option causes rsync to transfer character and block  device
 > > files  to  the  remote  system  to recreate these devices.  This
 > > option has no effect if the receiving rsync is not  run  as  the
 > > super-user (see also the --super and --fake-super options).
 > >
 > > Naturally this begs the question as to whether you're running it as the
 > > super-user, and if you've seen the options as referred to in the man
 > > page, which I've quoted above.
 > >


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc-fuse hard links

2020-05-23 Thread Craig Barratt via BackupPC-users
Jeff,

I remember looking into this long ago, and I recall that fuse makes up its
own fake inode numbers, which creates exactly the problem you noticed -
hardlinked files don't show the same inode number.  The Git issue you
mentioned reports that problem.

Craig

On Sat, May 23, 2020 at 8:52 PM  wrote:

> It seems like backuppc-fuse correctly lists the number of hard links
> for each file *but* the corresponding inodes are not numbered the
> same.
>
> For example:
>
> #Native file system
> ls -il /usr/bin/pigz /usr/bin/unpigz
> 564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/pigz*
> 564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/unpigz*
>
> #Backuppc-fuse version
> ls -il /mnt/backuppc/consult/root/{/usr/bin/pigz,/usr/bin/unpigz}
> 386328 -rwxr-xr-x 2 root root 116944 Dec 27  2017
> /mnt/backuppc/myhost/root/usr/bin/pigz*
> 827077 -rwxr-xr-x 2 root root 116944 Dec 27  2017
> /mnt/backuppc/myhost/root/usr/bin/unpigz*
>
> Is there any way to fix this???
>
> I couldn't find much on Google, but it seems like there is a low and a
> high level inode notion in fuse filesystems and that the low-level one
> has the right inode number. See:
> https://github.com/libfuse/libfuse/issues/79
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] backuppc-fuse hard links

2020-05-23 Thread backuppc
It seems like backuppc-fuse correctly lists the number of hard links
for each file *but* the corresponding inodes are not numbered the
same.

For example:

#Native file system
ls -il /usr/bin/pigz /usr/bin/unpigz
564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/pigz*
564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/unpigz*

#Backuppc-fuse version
ls -il /mnt/backuppc/consult/root/{/usr/bin/pigz,/usr/bin/unpigz}
386328 -rwxr-xr-x 2 root root 116944 Dec 27  2017 
/mnt/backuppc/myhost/root/usr/bin/pigz*
827077 -rwxr-xr-x 2 root root 116944 Dec 27  2017 
/mnt/backuppc/myhost/root/usr/bin/unpigz*

Is there any way to fix this???

I couldn't find much on Google, but it seems like there is a low and a
high level inode notion in fuse filesystems and that the low-level one
has the right inode number. See:
https://github.com/libfuse/libfuse/issues/79


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BUG? Using --omit-dir-times in rsync backup sets all dir dates to beginning of Epoch

2020-05-23 Thread Craig Barratt via BackupPC-users
I wasn't previously familiar with the --omit-dir-times option.  As you
discovered, the low-level parts of rsync_bpc don't 100% faithfully mimic
the native linux system calls.  In particular, the mkdir() in rsync_bpc
doesn't set the mtime, since rsync updates it later (assuming
--omit-dir-times is not specified).

It would be a one-line change to set the mtime to the current time in
bpc_mkdir() in bpc_sysCalls.c.  However, I don't think it makes sense to
"fix" it, since a backup shouldn't add metadata that changes each time you
backup some data that hasn't changed.

Craig

On Fri, May 22, 2020 at 8:17 PM Michael Stowe <
michael.st...@member.mensa.org> wrote:

> On 2020-05-22 16:52, backu...@kosowsky.org wrote:
> > Michael Stowe wrote at about 23:46:54 + on Friday, May 22, 2020:
> >  > On 2020-05-22 16:19, backu...@kosowsky.org wrote:
> >  > > Michael Stowe wrote at about 22:24:13 + on Friday, May 22,
> > 2020:
> >  > >  > On 2020-05-22 09:15, backu...@kosowsky.org wrote:
> >  > >  > What it does is omit directories from the modification times
> > that it
> >  > >  > sets.  In other words, you're telling it not to set the times
> > on
> >  > >  > directories it copies.  The beginning of the epoch is pretty
> >  > > reasonable
> >  > >  > for directories which have no specific time set.
> >  > >  >
> >  > >
> >  > > Actually, at least the manpage is unclear.
 > >  > > And *differs* from the default behavior of native rsync (at least
> > on
> >  > > Ubuntu) that sets the dir time to the current time -- which is
> > more
> >  > > reasonable than some arbitrary epoch = 0 time.
> >  > >
> >  > > That is what I would have expected and I believe should be the
> > default
> >  > > behavior...
> >  > >
> >  > >  > This option has no implications for which directories are
> > selected
> >  > > to be
> >  > >  > copied.
> >  >
> >  > Unset is unset, it's not the option to use if you want the directory
> >  > modification time set.
> >
> > Regardless, behavior should be consistent with normal rsync...
> >
> > If you can show me a standard *nix version of rsync that uses Epoch as
> > the default then I would retract my point... but otherwise Epoch is
> > totally arbitrary and illogical... while at least the current time has
> > a good rationale... Choosing 1/1/1970 not so much...
>
> It's not that "epoch is the default" it's that that's what a timestamp
> of 0 is.  When you tell rsync not to set the timestamps, it doesn't.
>
> If you want to touch the directories and update their timestamps to the
> current time, you can do that, but it's an odd thing to expect rsync to
> take care of for you when you explicitly tell it not to.
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-23 Thread backuppc
Michael Stowe wrote at about 03:25:45 + on Saturday, May 23, 2020:
 > On 2020-05-22 16:49, backu...@kosowsky.org wrote:
 > > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
 > >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
 > >  > > 1. Sockets are restored as regular files not special files --> 
 > > BUG?
 > >  >
 > >  > Why would one back up a socket?
 > > I am testing the fidelity of the backup/restore cycle..
 > >> 
 > >  > If you really think this is sensible, you should be able to 
 > > accomplish
 > >  > it with "--devices --specials" as part of your rsync command lines.
 > >  >  From the symptoms, you have this in backup but not restore.
 > > 
 > > Actually, in the original text (which you snipped), I shared my
 > > rsync_bpc commands for both 'dump' and 'restore', which include the
 > > '-D' flag (actually it's the default in the config.pl for both rsync
 > > dump and restore)... and '-D' is *equivalent* to '--devices
 > > --specials'
 > > 
 > > And since I suspected some readers might miss that, I even noted in
 > > the text that:
 > >"Also, special files (--specials) should be included under the -D
 > >flag that I use for both rsync dump and restore commands (see
 > >below)"
 > > 
 > > Hence, why I suggested this is a *BUG* vs. user error or lack of
 > > knowledge :)
 > 
 > You've mistaken my point -- sure, the -D flag is there, but it's 
 > behaving like it isn't.  Let's review:
 > 

So we both agree it's a BUG... :)

I also tried just using --specials which also didn't work..
And '-D' *does* back up & restore devices... so it seems like the issue is
that 'specials' are not being restored (but they are dumped
properly)... which is the BUG that I originally reported at the start
of the thread...

 > --devices
 > This option causes rsync to transfer character and block  device
 > files  to  the  remote  system  to recreate these devices.  This
 > option has no effect if the receiving rsync is not  run  as  the
 > super-user (see also the --super and --fake-super options).
 > 
 > Naturally this begs the question as to whether you're running it as the 
 > super-user, and if you've seen the options as referred to in the man 
 > page, which I've quoted above.

Again, referencing the  rsync_bpc command I shared (which you
clipped from the thread), you would see that I ran it as 'sudo' (plus
the default includes the options '--super' anyway)


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-23 Thread Craig Barratt via BackupPC-users
While I agree with Michael that restoring sockets isn't that useful (since
they are only created by a process that is receiving connections on a
unix-domain socket), I did fix the bug
<https://github.com/backuppc/rsync-bpc/commit/3802747ab70c8d1a41f051ac9610b899352b5271>
that causes them to be incorrectly restored by rsync_bpc.

I'm quite unfamiliar with selinux attributes.  Is it possible to add
selinux attributes to a file (with setfilecon) when selinux is disabled?
Unfortunately my attempt to turn selinux on didn't go well - my machine
didn't boot into a usable state, so I'm not willing to turn on selinux.

Craig

On Fri, May 22, 2020 at 8:26 PM Michael Stowe <
michael.st...@member.mensa.org> wrote:

> On 2020-05-22 16:49, backu...@kosowsky.org wrote:
> > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
> >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
> >  > > 1. Sockets are restored as regular files not special files -->
> > BUG?
> >  >
> >  > Why would one back up a socket?
> > I am testing the fidelity of the backup/restore cycle..
> >>
> >  > If you really think this is sensible, you should be able to
> > accomplish
> >  > it with "--devices --specials" as part of your rsync command lines.
> >  >  From the symptoms, you have this in backup but not restore.
> >
> > Actually, in the original text (which you snipped), I shared my
> > rsync_bpc commands for both 'dump' and 'restore', which include the
> > '-D' flag (actually it's the default in the config.pl for both rsync
> > dump and restore)... and '-D' is *equivalent* to '--devices
> > --specials'
> >
> > And since I suspected some readers might miss that, I even noted in
> > the text that:
> >"Also, special files (--specials) should be included under the -D
> >flag that I use for both rsync dump and restore commands (see
> >below)"
> >
> > Hence, why I suggested this is a *BUG* vs. user error or lack of
> > knowledge :)
>
> You've mistaken my point -- sure, the -D flag is there, but it's
> behaving like it isn't.  Let's review:
>
> --devices
> This option causes rsync to transfer character and block  device
> files  to  the  remote  system  to recreate these devices.  This
> option has no effect if the receiving rsync is not  run  as  the
> super-user (see also the --super and --fake-super options).
>
> Naturally this begs the question as to whether you're running it as the
> super-user, and if you've seen the options as referred to in the man
> page, which I've quoted above.
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] how to install SCGI , exactly??

2020-05-23 Thread Craig Barratt via BackupPC-users
There are two different components that have to be installed, one for perl
(the client end) and another for apache (the server end).

The perl module SCGI needs to be installed, which can be done via cpan.  If
cpan doesn't work you can install it manually from the tarball, which can
be found in many places (eg, here
<http://www.namesdir.com/mirrors/cpan/authors/id/V/VI/VIPERCODE/SCGI-0.6.tar.gz>
).

Second, apache needs the scgi module (typically called mod-scgi) installed
and enabled.  As Doug mentions that can be done using your favorite package
manager.
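
On a Debian/Ubuntu box the whole thing boils down to roughly this (a
sketch; package and module names assumed):

  # client end: the SCGI perl module
  sudo cpan SCGI
  # server end: apache's scgi module
  sudo apt-get install libapache2-mod-scgi
  sudo a2enmod scgi
  sudo systemctl restart apache2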

Craig

On Fri, May 22, 2020 at 10:39 AM Doug Lytle  wrote:

> >>> I am currently running BackupPC  version 4.3.2 on Ubuntu 18.04.4 LTS .
> >>> Everything seems to be working perfectly, except this pesky
> "2020-05-22 10:02:30 scgi : BackupPC_Admin_SCGI: can't load perl SCGI
> module - install via CPAN; exiting in 60 seconds" error
>
> Mike,
>
> The only thing I have on my backuppc server is the same as yours
>
> dpkg -l|grep -i scgi
>
> ii  libapache2-mod-scgi   1.13-1.1amd64
> Apache module implementing the SCGI protocol
>
>
> How are you trying to access the admin page?
>
> I don't use scgi in my URL.  I use
>
> http://192.168.145.99/backuppc
>
> The description of the SCGI admin is
>
> # BackupPC_Admin_SCGI: An SCGI implementation of the BackupPC
> #  admin interface.
>
> Which is something I don't use, just the CGI version.
>
> Doug
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BUG? Using --omit-dir-times in rsync backup sets all dir dates to beginning of Epoch

2020-05-22 Thread backuppc
Michael Stowe wrote at about 23:46:54 + on Friday, May 22, 2020:
 > On 2020-05-22 16:19, backu...@kosowsky.org wrote:
 > > Michael Stowe wrote at about 22:24:13 + on Friday, May 22, 2020:
 > >  > On 2020-05-22 09:15, backu...@kosowsky.org wrote:
 > >  > What it does is omit directories from the modification times that it
 > >  > sets.  In other words, you're telling it not to set the times on
 > >  > directories it copies.  The beginning of the epoch is pretty 
 > > reasonable
 > >  > for directories which have no specific time set.
 > >  >
 > > 
 > > Actually, at least the manpage is unclear.
 > > And *differs* from the default behavior of native rsync (at least on
 > > Ubuntu) that sets the dir time to the current time -- which is more
 > > reasonable than some arbitrary epoch = 0 time.
 > > 
 > > That is what I would have expected and I believe should be the default
 > > behavior...
 > > 
 > >  > This option has no implications for which directories are selected 
 > > to be
 > >  > copied.
 > 
 > Unset is unset, it's not the option to use if you want the directory 
 > modification time set.

Regardless, behavior should be consistent with normal rsync...

If you can show me a standard *nix version of rsync that uses Epoch as
the default then I would retract my point... but otherwise Epoch is
totally arbitrary and illogical... while at least the current time has
a good rationale... Choosing 1/1/1970 not so much...


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-22 Thread backuppc
"" wrote at about 14:42:10 -0400 on Friday, May 22, 2020:
 > Craig,
 > Using rsync (rather than tar) to restore, I think I confirmed several
 > bugs with the handling of sockets and SELinux attributes
 > 
 > Hopefully, I have provided enough info to debug...
 > 
 > In summary:
 > 0.  All my ACLs are dumped & restored properly (with rsync) --> GOOD
 > 1. Sockets are restored as regular files not special files --> BUG?
 > 2. SELinux attributes are not dumped for directories and links --> BUG?
 > 3. SELinux attributes for regular files are generally handled OK, but
 > sometimes dump generates an SELinux entry where none exists
 > in the source. This happens for a few files where the
 > SELinux entry previously existed in earlier backups on the
 > host... I don't understand this... and need to investigate
 > it further 

Regarding #3, it goes away when I erase all previous backups... so
there may be a merging-inheritance problem here as I alluded to
before...
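
(For anyone repeating the test: v4 ships a helper for deleting old
backups, so pruning them one at a time looks roughly like this -- the
flags are assumed from its usage text, and NNN is a placeholder:)

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_backupDelete -h myhost -n NNN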

However, #1 & #2 seem to be clear issues with implementation of rsync-bpc...
> 
 > Here are some examples showing the behavior
 > 1. Sockets are dumped as 'sockets' but restored as 'regular' files
 >srw-rw-rw- 1 postfix postfix ? 0 May 21 03:39 
 > /mnt/backuppc/all/myhost/256/root/var/spool/postfix/private/scan
 >-rw-rw-rw- 1 postfix postfix ? 0 May 21 03:39 
 > /tmp/tmprestore/root/var/spool/postfix/private/scan
 >srw-rw-rw- 1 postfix postfix ? 0 May 21 03:39 
 > /var/spool/postfix/private/scan
 > 
 >Note that the first listing shows a backuppc-fuse mounting of the dump,
 >confirming that the dump is stored properly.
 > 
 >Note that rsync itself has no problem copying sockets and special
 >files.
 >Also, special files (--specials) should be included under the -D
 >flag that I use for both rsync dump and restore commands (see below)
 > 
 > 2. SELinux for 'links' & 'directories' fail to Dump the SELinux entry but
 >otherwise dump & restore properly
 >(see the /mnt/backpupc/all/myhost line which shows the backuppc-fuse 
 > version)
 > 
 >drwxrwxr-x  6 root root ?   1024 Jan 28  2019 
 > /mnt/backuppc/all/myhost/256/root/usr/local/lib/mac/
 >drwxrwxr-x  1 root root ?   428 Jan 28  2019 
 > /tmp/tmprestore/root/usr/local/lib/mac/
 >drwxrwxr-x. 1 root root system_u:object_r:lib_t:s0  428 Jan 28  2019 
 > /usr/local/lib/mac/
 > 
 >lrwxrwxrwx  1 root root ?  17 Nov 20  2009 
 > /mnt/backuppc/all/myhost/256/root/usr/local/etc/motd -> motd.good
 >lrwxrwxrwx  1 root root ?  17 Nov 20  2009 
 > /tmp/tmprestore/root/usr/local/etc/motd -> motd.good
 >lrwxrwxrwx. 1 root root system_u:object_r:etc_t:s0 17 Nov 20  2009 
 > /usr/local/etc/motd -> motd.good
 > 
 >(note that it succeeds though for the corresponding link target (see 
 > below)
 > 
 > 3. SELinux for 'regular files' generally dumps and restore properly
 >  For a handful of files, dump creates an SELinux entry even though
 >  the source didn't have such entry. Perhaps it was wrongly
 >  inherited/merged from an earlier backup I need to investigate
 >  this further... as it is quite strange
 > 
 >  Dump & Restore succeeds here (most common)
 >  -rw-r--r--. 1 root root system_u:object_r:etc_t:s0 11 Jan 17 2008 
 > /mnt/backuppc/all/myhost/257/root/usr/local/etc/motd.good
 >  -rw-r--r--. 1 root root system_u:object_r:etc_t:s0 11 Jan 17 2008 
 > root//usr/local/etc/motd.good
 >  -rw-r--r--. 1 root root system_u:object_r:etc_t:s0 11 Jan 17 2008 
 > /usr/local/etc/motd.good
 > 
 >  Dump seems to create a new  SELinux entry where none existed
 >  previously in the source; Restore then restores it...
 >  -rw-rw-r--. 1 root root system_u:object_r:lib_t:s0 5889 Aug 12 2002 
 > /mnt/backuppc/all/myhost/257/root/usr/local/lib/emacs/site-lisp/gin-mode.el
 >  -rw-rw-r--. 1 root root system_u:object_r:lib_t:s0 5889 Aug 12 2002 
 > root/usr/local/lib/emacs/site-lisp/gin-mode.el
 >  -rw-rw-r--  1 root root ?  5889 Aug 12 2002 
 > /usr/local/lib/emacs/site-lisp/gin-mode.el
 >  
 >
 > -
 > Note the command I use to restore is:
 > /usr/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc --bpc-host-name \
 > myhost --bpc-share-name root --bpc-bkup-num 257 --bpc-bkup-comp 3 \
 > --bpc-bkup-merge 257/3/4 --bpc-attrib-new --bpc-log-level 1 -e \
 > /usr/bin/sudo\ -h --rsync-path=/usr/bin/rsync --recursive --super \
 > --protect-args --numeric-ids --perms --owner --group -D --times \
 > --links --hard-links --delete --partial

Re: [BackupPC-users] BUG? Using --omit-dir-times in rsync backup sets all dir dates to beginning of Epoch

2020-05-22 Thread backuppc
Michael Stowe wrote at about 22:24:13 + on Friday, May 22, 2020:
 > On 2020-05-22 09:15, backu...@kosowsky.org wrote:
 > > If I add '--omit-dir-times' to $Conf{RsyncArgsExtra}, then the backups
 > > set all the directory dates to the beginning of the Epoch.
 > > 
 > > For example
 > > drwxr-xr-x 3 backuppc www-data  1024 Dec 31  1969 pc/
 > > 
 > > (note this is 1/1/70 00:00:00 GMT)
 > > 
 > > This is inconsistent with normal rsync which just uses --omit-dir-times
 > > to omit directories from --times when looking for *changes*
 > > 
 > > I was expecting and would have liked the normal behavior of
 > > --omit-dir-times to speed up backups...
 > > 
 > > Is this a bug???
 > 
 > No, it's expected behavior, you seem to have misunderstood what this 
 > rsync option does.
 > 
 > What it does is omit directories from the modification times that it 
 > sets.  In other words, you're telling it not to set the times on 
 > directories it copies.  The beginning of the epoch is pretty reasonable 
 > for directories which have no specific time set.
 > 

Actually, at least the manpage is unclear.
And *differs* from the default behavior of native rsync (at least on
Ubuntu) that sets the dir time to the current time -- which is more
reasonable than some arbitrary epoch = 0 time.

That is what I would have expected and I believe should be the default
behavior...

 > This option has no implications for which directories are selected to be 
 > copied.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... [private version]

2020-05-22 Thread Craig Barratt via BackupPC-users
Jeff,

The tar XferMethod doesn't capture acls and xattrs during backup.

Direct Restore in the CGI interface uses the XferMethod setting.

Craig

On Thu, May 21, 2020 at 10:05 PM  wrote:

> I also assume that tar doesn't capture ACLs and XATTRs for backup
> either then
>
> What transfer mechanism does the CGI restore use?
> Because when I use the direct download mode, it also doesn't restore
> the ACLs and XATTRs.
>
> In any case, I guess I really need to figure out how to use rsync for
> restore...
>
>
> Craig Barratt via BackupPC-users wrote at about 21:50:40 -0700 on
> Thursday, May 21, 2020:
>  > Jeff,
>  >
>  > Unfortunately BackupPC_tarCreate doesn't support acls.  Over the years
>  > different flavors of tar supported different archive formats for certain
>  > extensions (eg, long file names etc).  The POSIX standard for PAX
> headers
>  > unified some of those disparate formats, but didn't define acl or
> xattr
>  > support.
>  >
>  > Over the last few years it does look like GNU tar provides support for
>  > acls, but using PAX headers that are not standard.  Looking at the tar
>  > source, it uses headers like SCHILY.acl.access, SCHILY.xattr etc.
>  > Supporting those headers appears to require the acls and xattrs to be
>  > converted to descriptive strings.  Currently BackupPC rsync treats acls
> and
>  > xattr as binary blobs of data that it doesn't need to interpret.  So
>  > unfortunately it would be quite difficult to add acl and xattr support to
>  > BackupPC_tarCreate.
>  >
>  > Craig
>  >
>  > On Tue, May 19, 2020 at 11:49 PM  wrote:
>  >
>  > >
>  > > Now that I have btrfs snapshots set up, I decided to test a full
>  > > backup and restore by comparing the snapshot with the backup-restore
>  > > via rsync, using the following command:
>  > > sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h
> myhost
>  > > -n -1 -s myshare . | sudo tar --acls --selinux --xattrs -xvf -
>  > >
>  > > Interestingly, I found that everything worked *except* that it failed
>  > > to copy any sockets or any extended attributes.
>  > >
>  > > 1. Sockets were not copied at all - but that is seemingly just a tar
>  > >limitation since tar can't copy 'special' files.
>  > >Indeed, backuppc-fuse shows that the files are actually backed up
> by
>  > > backuppc
>  > >
>  > > 2. Extended attributes (ACLs and SELinux context) were *never*
> restored
>  > >
>  > >This seems to be a problem with 'BackupPC_tarCreate" since:
>  > >a] Using tar alone, I can copy the files with all their extended
>  > > attributes
>  > > (cd ; tar --acls --selinux --xattrs -cf - mac) | tar
> xf -
>  > >b] Similarly, raw rsync copies all the files faithfully
>  > >rsync -navcxXAOH --delete  .
>  > >b] Backuppc-fuse shows the extended attributes
>  > >   (though that being said backuppc-fuse adds SELinux context
> attributes
>  > >   to files that don't have them... perhaps there is something
> wrong
>  > >   with the inheritance??
>  > >
>  > > Note: I tried adding ' --xargs --acls --selinux --xattrs'
>  > > to $Conf{TarClientRestoreCmd} but that didn't help.
>  > >
>  > > So, 2 questions:
>  > > 1. Why doesn't BackupPC_tarCreate restore the extended attributes?
>  > > 2. Why does backuppc-fuse show extended attributes for files that
>  > >don't have them originally?
>  > >
>  > > --
>  > > Note: I am running ubuntu 18.04 with rsync 3.1.2 and backuppc 4.3.2
>  > >
>  > >
>  > > ___
>  > > BackupPC-users mailing list
>  > > BackupPC-users@lists.sourceforge.net
>  > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > > Wiki:http://backuppc.wiki.sourceforge.net
>  > > Project: http://backuppc.sourceforge.net/
>  > >
>  > ___
>  > BackupPC-users mailing list
>  > BackupPC-users@lists.sourceforge.net
>  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > Wiki:http://backuppc.wiki.sourceforge.net
>  > Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... [private version]

2020-05-21 Thread backuppc
I also assume that tar doesn't capture ACLs and XATTRs for backup
either then

What transfer mechanism does the CGI restore use?
Because when I use the direct download mode, it also doesn't restore
the ACLs and XATTRs.

In any case, I guess I really need to figure out how to use rsync for restore...


Craig Barratt via BackupPC-users wrote at about 21:50:40 -0700 on Thursday, May 
21, 2020:
 > Jeff,
 > 
 > Unfortunately BackupPC_tarCreate doesn't support acls.  Over the years
 > different flavors of tar supported different archive formats for certain
 > extensions (eg, long file names etc).  The POSIX standard for PAX headers
 > unified some of those disparate formats, but didn't define acl or xattr
 > support.
 > 
 > Over the last few years it does look like GNU tar provides support for
 > acls, but using PAX headers that are not standard.  Looking at the tar
 > source, it uses headers like SCHILY.acl.access, SCHILY.xattr etc.
 > Supporting those headers appears to require the acls and xattrs to be
 > converted to descriptive strings.  Currently BackupPC rsync treats acls and
 > xattr as binary blobs of data that it doesn't need to interpret.  So
 > unfortunately it would be quite difficult to add acl and xattr support to
 > BackupPC_tarCreate.
 > 
 > Craig
 > 
 > On Tue, May 19, 2020 at 11:49 PM  wrote:
 > 
 > >
 > > Now that I have btrfs snapshots set up, I decided to test a full
 > > backup and restore by comparing the snapshot with the backup-restore
 > > via rsync, using the following command:
 > > sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost
 > > -n -1 -s myshare . | sudo tar --acls --selinux --xattrs -xvf -
 > >
 > > Interestingly, I found that everything worked *except* that it failed
 > > to copy any sockets or any extended attributes.
 > >
 > > 1. Sockets were not copied at all - but that is seemingly just a tar
 > >limitation since tar can't copy 'special' files.
 > >Indeed, backuppc-fuse shows that the files are actually backed up by
 > > backuppc
 > >
 > > 2. Extended attributes (ACLs and SELinux context) were *never* restored
 > >
 > >This seems to be a problem with 'BackupPC_tarCreate" since:
 > >a] Using tar alone, I can copy the files with all their extended
 > > attributes
 > > (cd ; tar --acls --selinux --xattrs -cf - mac) | tar xf -
 > >b] Similarly, raw rsync copies all the files faithfully
 > >rsync -navcxXAOH --delete  .
 > >b] Backuppc-fuse shows the extended attributes
 > >   (though that being said backuppc-fuse adds SELinux context attributes
 > >   to files that don't have them... perhaps there is something wrong
 > >   with the inheritance??
 > >
 > > Note: I tried adding ' --xargs --acls --selinux --xattrs'
 > > to $Conf{TarClientRestoreCmd} but that didn't help.
 > >
 > > So, 2 questions:
 > > 1. Why doesn't BackupPC_tarCreate restore the extended attributes?
 > > 2. Why does backuppc-fuse show extended attributes for files that
 > >don't have them originally?
 > >
 > > --
 > > Note: I am running ubuntu 18.04 with rsync 3.1.2 and backuppc 4.3.2
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Testing full restore of backuppc... [private version]

2020-05-21 Thread Craig Barratt via BackupPC-users
Jeff,

Unfortunately BackupPC_tarCreate doesn't support acls.  Over the years
different flavors of tar supported different archive formats for certain
extensions (eg, long file names etc).  The POSIX standard for PAX headers
unified some of those disparate formats, but didn't define acl or xattr
support.

Over the last few years it does look like GNU tar provides support for
acls, but using PAX headers that are not standard.  Looking at the tar
source, it uses headers like SCHILY.acl.access, SCHILY.xattr etc.
Supporting those headers appears to require the acls and xattrs to be
converted to descriptive strings.  Currently BackupPC rsync treats acls and
xattr as binary blobs of data that it doesn't need to interpret.  So
unfortunately it would be quite difficult to add acl and xattr support to
BackupPC_tarCreate.
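
A practical workaround in the meantime is to capture the ACLs separately
and re-apply them after a tar-based restore (getfacl/setfacl assumed to be
available on the client):

  # on the client, before the backup runs:
  getfacl -R -p /path/to/share > /path/to/share/.acls.dump
  # after restoring the tar archive:
  setfacl --restore=/path/to/share/.acls.dump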

Craig

On Tue, May 19, 2020 at 11:49 PM  wrote:

>
> Now that I have btrfs snapshots set up, I decided to test a full
> backup and restore by comparing the snapshot with the backup-restore
> via rsync, using the following command:
> sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost
> -n -1 -s myshare . | sudo tar --acls --selinux --xattrs -xvf -
>
> Interestingly, I found that everything worked *except* that it failed
> to copy any sockets or any extended attributes.
>
> 1. Sockets were not copied at all - but that is seemingly just a tar
>limitation since tar can't copy 'special' files.
>Indeed, backuppc-fuse shows that the files are actually backed up by
> backuppc
>
> 2. Extended attributes (ACLs and SELinux context) were *never* restored
>
>This seems to be a problem with 'BackupPC_tarCreate" since:
>a] Using tar alone, I can copy the files with all their extended
> attributes
> (cd ; tar --acls --selinux --xattrs -cf - mac) | tar xf -
>b] Similarly, raw rsync copies all the files faithfully
>rsync -navcxXAOH --delete  .
>b] Backuppc-fuse shows the extended attributes
>   (though that being said backuppc-fuse adds SELinux context attributes
>   to files that don't have them... perhaps there is something wrong
>   with the inheritance??
>
> Note: I tried adding ' --xargs --acls --selinux --xattrs'
> to $Conf{TarClientRestoreCmd} but that didn't help.
>
> So, 2 questions:
> 1. Why doesn't BackupPC_tarCreate restore the extended attributes?
> 2. Why does backuppc-fuse show extended attributes for files that
>don't have them originally?
>
> --
> Note: I am running ubuntu 18.04 with rsync 3.1.2 and backuppc 4.3.2
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] List of All Files Being Backed Up

2020-05-21 Thread Craig Barratt via BackupPC-users
DW,

I'd recommend using BackupPC_ls -R to recursively list all the files in a
particular backup.  You'd need to run it once for each of your hosts,
against the latest backup.
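
Looped over the hosts, something along these lines would produce one list
per host (the path and -n -1 convention mirror the BackupPC_tarCreate
examples earlier in this digest; the exact BackupPC_ls flags are assumed):

  for h in HOST1 HOST2 HOST3; do
      sudo -u backuppc /usr/share/backuppc/bin/BackupPC_ls -R -h "$h" -n -1 -s SHARE / > "filelist-$h.txt"
  done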

Craig

On Tue, May 19, 2020 at 4:41 PM David Wynn via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> I’ve tried to search using some different keywords and combinations but
> have had no luck in finding an answer.  Here’s my situation … I am backing
> up around 150 different files from a NAS but using multiple hosts to do so
> in order to keep the file transfer size down to <20GB at a time.  This is
> across the ‘internet’ so bandwidth is not under my control and I want to
> make sure the jobs don’t crap out at bad times.  I’ve found that <20GB at a
> time usually works great.
>
>
>
> But, in order to manage the sizes I first created the individual lists
> based on the size of the files/directories and input them manually into my
> config files for each host.  For example, HOST1 may have 40 smaller files
> to get to the 20GB limit whereas HOST10 may only have 1 file/directory to
> get to the limit.
>
>
>
> Now I have the problem of trying to find an individual file/directory from
> around 18 different HOSTx setups.
>
>
>
> Is there an easy way to get/create a listing that would show the HOSTx
> and the files/directories that are being backed up under it?  I have
> thought of trying to write a ‘script’ to traverse the individual HOSTx.pl
> files and extract the info – but my scripting is purely of a W10 nature and
> even at that is poor and wordy.   (Oh for the days of COBOL and PL/1.)
>
>
>
> Just wondering if there is something I have missed in the documentation or
> in trying to search the forums.  Someone must have had this problem before
> me.
>
>
>
> Thanks for your help
>
>
>
> DW
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] replication of data pool

2020-05-21 Thread Craig Barratt via BackupPC-users
Mike,

The cpool isn't structured in a way that makes it possible to just copy
recently backed-up files.

Ged's suggestion (just run another BackupPC instance offsite) is
worth considering.  That also provides more robustness to certain failures
(eg undetected filesystem corruption on the primary BackupPC server).

Craig

On Wed, May 20, 2020 at 12:37 PM Mike Hughes  wrote:

> Hi, we're currently syncing our cpool to an off-site location on a weekly
> basis. Would it be feasible to only sync the latest of each backup rather
> than the entire pool?
>
> To elaborate, on Saturdays we run an rsync of the entire cpool to another
> server to provide disaster recovery options. Is it possible/reasonable to
> just copy the data from the night before? Or, with de-duplication and
> compression, would we really save much space/transfer time? If so, what is
> the best way to grab just one night's worth of backups while still
> preserving a full recovery?
>
> Just curious if someone is already doing this and how you sorted it out.
>
> Thanks!
> Mike
>
> _______
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync restore from the command line

2020-05-21 Thread Craig Barratt via BackupPC-users
Jeff,

Sure, that's possible.  But given that it was necessary to use a file to
pass along the list of files to restore, it seemed easier to just use that
one mechanism.

It would be pretty easy to write a wrapper script that takes command-line
arguments and writes the request file and passes it along to
BackupPC_restore ((see lib/BackupPC/CGI/Restore.pm) .

Craig

On Wed, May 20, 2020 at 11:59 AM  wrote:

> Couldn't there be an option to read the to-be-restored files from a
> file (similar to what tar and rsync allow), allowing basic restores
> to be done from the command line?
> Other parameters could either be passed on the command line or added as
> config.pl settings if more permanent.
> Craig Barratt via BackupPC-users wrote at about 11:31:37 -0700 on
> Wednesday, May 20, 2020:
>  > Jeff,
>  >
>  > For restores, there could be a long list of specific files or
> directories
>  > to restore, which might not fit on the command line, so that's what
>  > triggered putting everything in a request file and just passing its
> name.
>  > There are also several other settings specific to the restore (eg, the
> path
>  > to restore to etc), none of which are in config files.
>  >
>  > Craig
>  >
>  > On Wed, May 20, 2020 at 10:29 AM  wrote:
>  >
>  > > Thanks Craig,
>  > >
>  > > Why is restore inherently that much more complicated than dump?
>  > > It seems like config.pl already has a number of parameters built-in
>  > > for both including rsync args and pre/post restore commands.
>  > >
>  > > Conceptually, I would think that what one needs to specify each time
>  > > is:
>  > > 1. Host
>  > > 2. Backup number
>  > > 3. Share
>  > > 4. Additional includes/excludes to determine what to restore
>  > > 5. Option to "delete" files no longer found
>  > > 6. Path to root of restore.
>  > >
>  > > Otherwise, existing includes/excludes would be assumed...
>  > >
>  > > I'm sure one could make it more complicated but am I missing something
>  > > basic???
>  > >
>  > > The reality is that rsync + backuppc is really awesome... and I can do
>  > > (and automate) so much more with CLI and scripts than with a CGI.
>  > >
>  > > Jeff
>  > >
>  > > Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on
>  > > Wednesday, May 20, 2020:
>  > >  > Jeff,
>  > >  >
>  > >  > BackupPC_restore takes quite a few parameters, so many years ago I
>  > > decided
>  > >  > to pass those parameters via a file rather than command-line
> arguments.
>  > >  > Probably a bad choice...
>  > >  >
>  > >  > There are two alternatives:
>  > >  >
>  > >  >- write a script that creates the restore request parameter
> file (see
>  > >  >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
>  > >  >- directly run rsync_bpc using an example of the many arguments
> from
>  > > a
>  > >  >successful restore log file
>  > >  >
>  > >  > The drawback of the 2nd approach is that information about the
> restore
>  > >  > isn't saved or logged, and you have to make sure it doesn't run
>  > >  > concurrently with a backup.
>  > >  >
>  > >  > Craig
>  > >  >
>  > >  >
>  > >  > On Wed, May 20, 2020 at 6:31 AM  wrote:
>  > >  >
>  > >  > >
>  > >  > > Is it possible to do an rsync restore from the command line using
>  > >  > > BackupPC_restore?
>  > >  > > If so, it's not clear from the (limited) documentation how to use
>  > >  > > it. For example, how do you specify the desired backup number,
> share,
>  > >  > > paths, etc.
>  > >  > >
>  > >  > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
>  > >  > >
>  > >  > > Said another way, rather than first creating a tar file via
>  > >  > > BackupPC_tarCreate and then untarring (or piping through tar), is
>  > >  > > there a way to hook directly into rsync to restore from the
> command
>  > > line?
>  > >  > >
>  > >  > > This would have several advantages:
>  > >  > > 1. It would automatically incorporate the ssh and compression
> features
>  > >  > >to restore seamlessly, efficiently, and securely across
> platforms
>  > >  > >
>  

Re: [BackupPC-users] replication of data pool

2020-05-21 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 21 May 2020, Mike Hughes wrote:


we're currently syncing our cpool to an off-site location on a
weekly basis. Would it be feasible to only sync the latest of each
backup rather than the entire pool?



To elaborate, on Saturdays we run an rsync of the entire cpool to
another server to provide disaster recovery options. Is it
possible/reasonable to just copy the data from the night before? Or,
with de-duplication and compression, would we really save much
space/transfer time? If so, what is the best way to grab just one
night's worth of backups while still preserving a full recovery?


Why not simply run a second BackupPC instance on the off-site server?

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread backuppc
Couldn't there be an option to read the to-be-restored files from a
file (similar to what tar and rsync allow), allowing basic restores
to be done from the command line?
Other parameters could either be passed on the command line or added as
config.pl settings if more permanent.
Craig Barratt via BackupPC-users wrote at about 11:31:37 -0700 on Wednesday, 
May 20, 2020:
 > Jeff,
 > 
 > For restores, there could be a long list of specific files or directories
 > to restore, which might not fit on the command line, so that's what
 > triggered putting everything in a request file and just passing its name.
 > There are also several other settings specific to the restore (eg, the path
 > to restore to etc), none of which are in config files.
 > 
 > Craig
 > 
 > On Wed, May 20, 2020 at 10:29 AM  wrote:
 > 
 > > Thanks Craig,
 > >
 > > Why is restore inherently that much more complicated than dump?
 > > It seems like config.pl already has a number of parameters built-in
 > > for both including rsync args and pre/post restore commands.
 > >
 > > Conceptually, I would think that what one needs to specify each time
 > > is:
 > > 1. Host
 > > 2. Backup number
 > > 3. Share
 > > 4. Additional includes/excludes to determine what to restore
 > > 5. Option to "delete" files no longer found
 > > 6. Path to root of restore.
 > >
 > > Otherwise, existing includes/excludes would be assumed...
 > >
 > > I'm sure one could make it more complicated but am I missing something
 > > basic???
 > >
 > > The reality is that rsync + backuppc is really awesome... and I can do
 > > (and automate) so much more with CLI and scripts than with a CGI.
 > >
 > > Jeff
 > >
 > > Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on
 > > Wednesday, May 20, 2020:
 > >  > Jeff,
 > >  >
 > >  > BackupPC_restore takes quite a few parameters, so many years ago I
 > > decided
 > >  > to pass those parameters via a file rather than command-line arguments.
 > >  > Probably a bad choice...
 > >  >
 > >  > There are two alternatives:
 > >  >
 > >  >- write a script that creates the restore request parameter file (see
 > >  >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
 > >  >- directly run rsync_bpc using an example of the many arguments from
 > > a
 > >  >successful restore log file
 > >  >
 > >  > The drawback of the 2nd approach is that information about the restore
 > >  > isn't saved or logged, and you have to make sure it doesn't run
 > >  > concurrently with a backup.
 > >  >
 > >  > Craig
 > >  >
 > >  >
 > >  > On Wed, May 20, 2020 at 6:31 AM  wrote:
 > >  >
 > >  > >
 > >  > > Is it possible to do an rsync restore from the command line using
 > >  > > BackupPC_restore?
 > >  > > If so, it's not clear from the (limited) documentation how to use
 > >  > > it. For example, how do you specify the desired backup number, share,
 > >  > > paths, etc.
 > >  > >
 > >  > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
 > >  > >
 > >  > > Said another way, rather than first creating a tar file via
 > >  > > BackupPC_tarCreate and then untarring (or piping through tar), is
 > >  > > there a way to hook directly into rsync to restore from the command
 > > line?
 > >  > >
 > >  > > This would have several advantages:
 > >  > > 1. It would automatically incorporate the ssh and compression features
 > >  > >to restore seamlessly, efficiently, and securely across platforms
 > >  > >
 > >  > > 2. It would allow for restoring special file types that tar doesn't
 > >  > >support
 > >  > >
 > >  > > 3. It would be able to better and more exactly mirror the parameters
 > >  > >given to Rsync dump (for example the same format of 'includes' and
 > >  > >'excludes'
 > >  > >
 > >  > >
 > >  > > ___
 > >  > > BackupPC-users mailing list
 > >  > > BackupPC-users@lists.sourceforge.net
 > >  > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > >  > > Wiki:http://backuppc.wiki.sourceforge.net
 > >  > > Project: http://backuppc.sourceforge.net/
 > >  > >
 > >  > ___
 > >  > Ba

Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread Craig Barratt via BackupPC-users
Jeff,

For restores, there could be a long list of specific files or directories
to restore, which might not fit on the command line, so that's what
triggered putting everything in a request file and just passing its name.
There are also several other settings specific to the restore (eg, the path
to restore to etc), none of which are in config files.

Craig

On Wed, May 20, 2020 at 10:29 AM  wrote:

> Thanks Craig,
>
> Why is restore inherently that much more complicated than dump?
> It seems like config.pl already has a number of parameters built-in
> for both including rsync args and pre/post restore commands.
>
> Conceptually, I would think that what one needs to specify each time
> is:
> 1. Host
> 2. Backup number
> 3. Share
> 4. Additional includes/excludes to determine what to restore
> 5. Option to "delete" files no longer found
> 6. Path to root of restore.
>
> Otherwise, existing includes/excludes would be assumed...
>
> I'm sure one could make it more complicated but am I missing something
> basic???
>
> The reality is that rsync + backuppc is really awesome... and I can do
> (and automate) so much more with CLI and scripts than with a CGI.
>
> Jeff
>
> Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on
> Wednesday, May 20, 2020:
>  > Jeff,
>  >
>  > BackupPC_restore takes quite a few parameters, so many years ago I
> decided
>  > to pass those parameters via a file rather than command-line arguments.
>  > Probably a bad choice...
>  >
>  > There are two alternatives:
>  >
>  >- write a script that creates the restore request parameter file (see
>  >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
>  >- directly run rsync_bpc using an example of the many arguments from
> a
>  >successful restore log file
>  >
>  > The drawback of the 2nd approach is that information about the restore
>  > isn't saved or logged, and you have to make sure it doesn't run
>  > concurrently with a backup.
>  >
>  > Craig
>  >
>  >
>  > On Wed, May 20, 2020 at 6:31 AM  wrote:
>  >
>  > >
>  > > Is it possible to do an rsync restore from the command line using
>  > > BackupPC_restore?
>  > > If so, it's not clear from the (limited) documentation how to use
>  > > it. For example, how do you specify the desired backup number, share,
>  > > paths, etc.
>  > >
>  > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
>  > >
>  > > Said another way, rather than first creating a tar file via
>  > > BackupPC_tarCreate and then untarring (or piping through tar), is
>  > > there a way to hook directly into rsync to restore from the command
> line?
>  > >
>  > > This would have several advantages:
>  > > 1. It would automatically incorporate the ssh and compression features
>  > >to restore seamlessly, efficiently, and securely across platforms
>  > >
>  > > 2. It would allow for restoring special file types that tar doesn't
>  > >support
>  > >
>  > > 3. It would be able to better and more exactly mirror the parameters
>  > >given to Rsync dump (for example the same format of 'includes' and
>  > >'excludes'
>  > >
>  > >
>  > > ___
>  > > BackupPC-users mailing list
>  > > BackupPC-users@lists.sourceforge.net
>  > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > > Wiki:http://backuppc.wiki.sourceforge.net
>  > > Project: http://backuppc.sourceforge.net/
>  > >
>  > ___
>  > BackupPC-users mailing list
>  > BackupPC-users@lists.sourceforge.net
>  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > Wiki:    http://backuppc.wiki.sourceforge.net
>  > Project: http://backuppc.sourceforge.net/
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread backuppc
Thanks Craig,

Why is restore inherently that much more complicated than dump?
It seems like config.pl already has a number of parameters built-in
for both including rsync args and pre/post restore commands.

Conceptually, I would think that what one needs to specify each time
is:
1. Host
2. Backup number
3. Share
4. Additional includes/excludes to determine what to restore
5. Option to "delete" files no longer found
6. Path to root of restore.

Otherwise, existing includes/excludes would be assumed...

I'm sure one could make it more complicated but am I missing something
basic???

The reality is that rsync + backuppc is really awesome... and I can do
(and automate) so much more with CLI and scripts than with a CGI.

Jeff

Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on Wednesday, 
May 20, 2020:
 > Jeff,
 > 
 > BackupPC_restore takes quite a few parameters, so many years ago I decided
 > to pass those parameters via a file rather than command-line arguments.
 > Probably a bad choice...
 > 
 > There are two alternatives:
 > 
 >- write a script that creates the restore request parameter file (see
 >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
 >- directly run rsync_bpc using an example of the many arguments from a
 >successful restore log file
 > 
 > The drawback of the 2nd approach is that information about the restore
 > isn't saved or logged, and you have to make sure it doesn't run
 > concurrently with a backup.
 > 
 > Craig
 > 
 > 
 > On Wed, May 20, 2020 at 6:31 AM  wrote:
 > 
 > >
 > > Is it possible to do an rsync restore from the command line using
 > > BackupPC_restore?
 > > If so, it's not clear from the (limited) documentation how to use
 > > it. For example, how do you specify the desired backup number, share,
 > > paths, etc.
 > >
 > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
 > >
 > > Said another way, rather than first creating a tar file via
 > > BackupPC_tarCreate and then untarring (or piping through tar), is
 > > there a way to hook directly into rsync to restore from the command line?
 > >
 > > This would have several advantages:
 > > 1. It would automatically incorporate the ssh and compression features
 > >to restore seamlessly, efficiently, and securely across platforms
 > >
 > > 2. It would allow for restoring special file types that tar doesn't
 > >support
 > >
 > > 3. It would be able to better and more exactly mirror the parameters
 > >given to Rsync dump (for example the same format of 'includes' and
 > >'excludes'
 > >
 > >
 > > ___
 > > BackupPC-users mailing list
 > > BackupPC-users@lists.sourceforge.net
 > > List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > > Wiki:http://backuppc.wiki.sourceforge.net
 > > Project: http://backuppc.sourceforge.net/
 > >
 > ___
 > BackupPC-users mailing list
 > BackupPC-users@lists.sourceforge.net
 > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 > Wiki:http://backuppc.wiki.sourceforge.net
 > Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread Craig Barratt via BackupPC-users
Jeff,

BackupPC_restore takes quite a few parameters, so many years ago I decided
to pass those parameters via a file rather than command-line arguments.
Probably a bad choice...

There are two alternatives:

   - write a script that creates the restore request parameter file (see
   lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
   - directly run rsync_bpc using an example of the many arguments from a
   successful restore log file

The drawback of the 2nd approach is that information about the restore
isn't saved or logged, and you have to make sure it doesn't run
concurrently with a backup.
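
For what it's worth, the tar-based route can already be driven entirely
from the command line; a minimal sketch, assuming the usual install
path (host name, backup number, share and restore target are examples
only):

  # stream backup #123 of share /home on host "pc1" straight into tar
  sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_tarCreate \
      -h pc1 -n 123 -s /home . | tar -xvpf - -C /tmp/restore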

Craig


On Wed, May 20, 2020 at 6:31 AM  wrote:

>
> Is it possible to do an rsync restore from the command line using
> BackupPC_restore?
> If so, it's not clear from the (limited) documentation how to use
> it. For example, how do you specify the desired backup number, share,
> paths, etc.
>
> Alternatively, is there an rsync analog of BackupPC_tarCreate?
>
> Said another way, rather than first creating a tar file via
> BackupPC_tarCreate and then untarring (or piping through tar), is
> there a way to hook directly into rsync to restore from the command line?
>
> This would have several advantages:
> 1. It would automatically incorporate the ssh and compression features
>to restore seamlessly, efficiently, and securely across platforms
>
> 2. It would allow for restoring special file types that tar doesn't
>support
>
> 3. It would be able to better and more exactly mirror the parameters
>given to Rsync dump (for example the same format of 'includes' and
>    'excludes'
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] List of All Files Being Backed Up

2020-05-19 Thread David Wynn via BackupPC-users
I've tried to search using some different keywords and combinations but have
had no luck in finding an answer.  Here's my situation . I am backing up
around 150 different files from a NAS but using multiple hosts to do so in
order to keep the file transfer size down to <20GB at a time.  This is
across the 'internet' so bandwidth is not under my control and I want to
make sure the jobs don't crap out at bad times.  I've found that <20GB at a
time usually works great.

 

But, in order to manage the sizes I first created the individual lists based
on the size of the files/directories and input them manually into my config
files for each host.  For example, HOST1 may have 40 smaller files to get to
the 20GB limit whereas HOST10 may only have 1 file/directory to get to the
limit.

 

Now I have the problem of trying to find an individual file/directory from
around 18 different HOSTx setups.

 

Is there an easy way to get/create a listing that would show the HOSTx and
the files/directories that are being backed up under it?  I have thought of
trying to write a 'script' to traverse the individual HOSTx.pl files and
extract the info - but my scripting is purely of a W10 nature and even at
that is poor and wordy.   (Oh for the days of COBOL and PL/1.)

 

Just wondering if there is something I have missed in the documentation or
in trying to search the forums.  Someone must have had this problem before
me.
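
A rough starting point for such a script, assuming the per-host configs
live in /etc/backuppc and follow the HOSTx.pl naming (adjust CONFDIR and
the glob to match your installation):

  #!/bin/sh
  # Print each host's BackupFilesOnly setting by scanning the
  # per-host config files.
  CONFDIR=/etc/backuppc
  for f in "$CONFDIR"/HOST*.pl; do
      echo "== $f =="
      # print from the BackupFilesOnly line down to its closing ';'
      awk '/BackupFilesOnly/ {p=1} p {print} p && /;/ {p=0}' "$f"
  done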

 

Thanks for your help

 

DW

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Restore only corrupted files?

2020-05-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 15 May 2020, Richard Shaw wrote:


Funny enough (well, actually I'm still kinda pissed ... with
Seagate, ... this is my SECOND RMA for the same drive.


Let me guess - Barracuda?

I stopped buying Seagate drives years ago when it became clear that,
running 24/7, if they lasted more than six months we'd been pretty
lucky and they ALL failed inside a couple of years.

They might have improved since then of course, but after reading an
article about reliability based on experience at Google (who obviously
buy orders of magnitude more drives than we ever will) I bought a bunch
of HGST drives (in the halcyon days before WD bought *them*) and have
never needed to buy another drive since!

It wasn't necessarily poor manufacturing at Seagate.  There were some
scary firmware problems with many Seagate drives; 'smartctl' has a
database which might tell you if an upgrade is available for a drive
(I don't know how complete the database might be, but I have seen it
warn about that for some old spare drives kicking around here) and
there will be other ways of finding out e.g. the Seagate Website.
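
Checking a drive against that database looks something like this
(device node is an example; needs root):

  smartctl -i /dev/sda
  # drives with known firmware problems get a "==> WARNING" block
  # in the output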

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Restore only corrupted files?

2020-05-14 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 14 May 2020, Richard Shaw wrote:


...
Is it possible to do a conditional restore? Something like:

Only restore files which are the same date (mtime?) and the hashes don't
match.

Thoughts?


Assuming that it's worth recovering the data, the data presumably must
have some value.  In your situation I'd be reluctant to do anything
like that to valuable data, since I might unnecessarily be overwriting
it with something old or even corrupt.  I think I'd restore my backup
to a scratch partition, then use something like 'rsync --dry-run' to
show me the differences.

OTOH I run two backup servers, so the situation is most unlikely here.
Consider that a hint. :)

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large rsyncTmp files

2020-05-01 Thread Craig Barratt via BackupPC-users
It could also be a sparse file (eg, below /proc or /var/log/wtmp) that
isn't being excluded.
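
One quick way to hunt for large sparse files on the client (GNU find
only; the size threshold is arbitrary):

  find / -xdev -type f -size +100M -printf '%S\t%s\t%p\n' 2>/dev/null |
      awk '$1 < 0.5'   # a sparseness ratio well below 1.0 means holes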

Craig

On Fri, May 1, 2020 at 10:14 AM Alexander Kobel  wrote:

> Hi Marcelo,
>
> On 5/1/20 4:15 PM, Marcelo Ricardo Leitner wrote:
> > Hi,
> >
> > Is it expected for rsync-bpc to be writting such large temporary files?
>
> If and only if there is such a big file to be backed up, AFAIK.
>
> > It seems they are as big as the full backup itself:
> > # ls -la */*/rsync*
> > -rw--- 1 112 122 302598406144 May  1 10:54
> HOST/180/rsyncTmp.4971.0.29
>
> Did you double-check whether there really is no file of that size on the
> HOST? (Try running `find $share -size +10M` on it, or something like
> that.)
>
> Do you use the -x (or --one-file-system) option for rsync?
> I recently ran into a similar issue because I didn't. A chrooted process
> suddenly received its own copy of /proc under
> /var/lib//proc after a system update, and proc has the
> 128T-huge kcore. Not a good idea trying to back up that directory.
> (Running dhcpcd on Arch by any chance?)
> It also got other mounts, like sysfs and some tmpfs, but those were
> mostly harmless.
>
> > That's a 300GB file, it filled the partition, and the full size for
> > this host is 337GB.
> >
> > Thanks,
> > Marcelo
>
>
> HTH,
> Alex
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, Apr 28, 2020 at 1:02 PM Andrew Maksymowsky wrote:


I have no strong preference for either xfs or zfs (our team is
comfortable with either) was mainly just curious to hear about what
folks were using and if they've run into any major issues or found
particular file-system features they really like when coupled with
backuppc.


Data volumes of the systems I back up approach those with which you're
working, and I have had no issues with ext4.  Being very conservative
about filesystem choice now (after a disastrous outing with ReiserFS,
a little over a decade ago) I haven't yet taken the plunge with any of
the more modern filesystems.  It's probably past time for me to put a
toe in the water once more, but there are always more pressing issues
and I *really* don't need another episode like that with Reiser.

At one time I routinely used to modify the BackupPC GUI to display the
ext4 inode usage on BackupPC systems, but happily I no longer need to
do that. :)  Although I'd have said my systems tend to have lots of
small files, typically they're only using a few percent of inode
capacity at a few tens % of storage capacity; I have no clue what the
fragmentation is like, and likely won't unless something bites me.

There's no RAID here at all, but there are LVMs, so snapshots became
possible whatever the filesystem.  Although at one time I thought I'd
be using snapshots a lot, and sometimes did, now I seem not to bother
with them.  Large databases tend to be few in number and can probably
be backed up better using the tools provided by the database system
itself; directories containing database files and VMs are specifically
excluded in my BackupPC configurations; some routine data collection
like security camera video is treated specially in the config too, and
what's left is largely configuration and users' home directories.  All
machines run Linux or similar, thankfully no Windows boxes any more.

Just to state one possibly obvious point, the ability to prevent the
filesystem used by BackupPC from writing access times would probably
be important to most, although I'm aware that you're interested more
in the reliability of the system and this is a performance issue.  On
1GBit/s networks I see backup data rates ranging from 20MByte/s for a
full backup to 3GByte/s for an incremental.  Obviously the network is
not the bottleneck and from that point of view I think the filesystem
probably doesn't matter; you're looking at CPU, I/O (think SSDs?) and
very likely RAM too, e.g. for rsync transfers which can be surprising.
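
For example, an fstab entry for the pool filesystem might look like
this (device and mount point are placeholders):

  /dev/mapper/vg0-backuppc  /var/lib/backuppc  ext4  noatime  0  2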

HTH

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-26 Thread Craig Barratt via BackupPC-users
Sorry, the correct form should be "$@":

#!/bin/sh -f
exec /bin/tar -c "$@"

(Note that the -c option belongs to tar, not to exec; your earlier
script passed -c to exec itself).
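
A quick way to see the difference in any POSIX shell:

  set -- '--newer=2020-04-22 21:18:10' .
  printf '[%s]\n' $*      # unquoted: resplit at the space (three words)
  printf '[%s]\n' "$*"    # one single argument containing everything
  printf '[%s]\n' "$@"    # the original two arguments, space preserved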

Craig



On Sun, Apr 26, 2020 at 5:14 AM Graham Seaman  wrote:

> Hi Craig
>
> I set sudoers to allow backuppc to run tar as root with no password, and
> incremental backups work fine.
>
> This is only marginally less secure than the old setup, which allowed
> backuppc to run the script which called tar, so I guess I can live with
> this.
>
> But in case you have any other ideas, here's my tiny script that's now
> definitely what's causing the problem (the quote marks are double quotes,
> not two single quotes):
>
> #!/bin/sh -f
>
> exec -c /bin/tar "$*"
>
>
> Graham
>
>
> On 26/04/2020 04:09, Craig Barratt via BackupPC-users wrote:
>
> It would be helpful if you included the edited script in your reply.  Did
> you use double quotes, or two single quotes?
>
> I'd recommend trying without the script, just to make sure it works
> correctly.  Then you can be sure it's an issue with how the script
> handles/splits arguments.
>
> Craig
>
> On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman 
> wrote:
>
>> Craig
>>
>> Quoting $* gives me a new error:
>>
>> /bin/tar: invalid option -- ' '
>>
>> (I get exactly the same error whether I use $incrDate or $incrDate+)
>>
>> That script is to avoid potential security problems from relaxing the
>> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
>> no-one else has the same problems (and that it apparently used to work for
>> me once)
>>
>> Graham
>>
>>
>> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>>
>> Graham,
>>
>> Your script is the problem.  Using $* causes the shell to resplit
>> arguments at whitespace.  To preserve the arguments you need to put that in
>> quotes:
>>
>> exec /bin/tar -c "$*"
>>
>> Craig
>>
>> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
>> wrote:
>>
>>> Thanks Craig
>>>
>>> That's clearly the problem, but I'm still mystified.
>>>
>>> I have backuppc running on my home server; the storage is on a NAS NFS
>>> mounted on the home server. Backing up other hosts on my network (both
>>> full and incremental) over rsync works fine.
>>>
>>> The home server backs up using tar. The command in the log is:
>>>
>>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>>> /etc --totals --newer=2020-04-22 21:18:10 .
>>>
>>> If I set
>>>
>>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>>
>>>
>>> then incremental backups of the home server fail with:
>>>
>>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>>> ‘2020-04-22\\’
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> If instead I set:
>>>
>>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>>
>>> then incremental backups fail with:
>>>
>>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>>> 00:00:00
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> Could it be to do with my localtar/tar_create.sh? (I created this so
>>> long ago I no longer remember where it came from).
>>>
>>> This is just:
>>>
>>> #!/bin/sh -f
>>> exec /bin/tar -c $*
>>>
>>> Thanks again
>>>
>>> Graham
>>>
>>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>>> > Graham,
>>> >
>>> > This is a problem with shell (likely ssh) escaping of arguments that
>>> > contain a space.
>>> >
>>> > For incremental backups a timestamp is passed as an argument to tar
>>> > running on the client.  The argument should be a date and time, eg:
>>> >
>>> > --after-date 2020-04-22\ 21:18:10'
>>> >
>>> > Notice there needs to be a backslash before the space, so it is part of
>>> > a single argument, not two separate arguments.
>>> >
>>> > You can tell BackupPC to escape an argument (to protect it from passing
>>> > via ssh) by adding a "+" to the end of the argument name, eg:
>>> >
>>> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>> >
>>> >

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
It would be helpful if you included the edited script in your reply.  Did
you use double quotes, or two single quotes?

I'd recommend trying without the script, just to make sure it works
correctly.  Then you can be sure it's an issue with how the script
handles/splits arguments.

Craig

On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman  wrote:

> Craig
>
> Quoting $* gives me a new error:
>
> /bin/tar: invalid option -- ' '
>
> (I get exactly the same error whether I use $incrDate or $incrDate+)
>
> That script is to avoid potential security problems from relaxing the
> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
> no-one else has the same problems (and that it apparently used to work for
> me once)
>
> Graham
>
>
> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>
> Graham,
>
> Your script is the problem.  Using $* causes the shell to resplit
> arguments at whitespace.  To preserve the arguments you need to put that in
> quotes:
>
> exec /bin/tar -c "$*"
>
> Craig
>
> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
> wrote:
>
>> Thanks Craig
>>
>> That's clearly the problem, but I'm still mystified.
>>
>> I have backuppc running on my home server; the storage is on a NAS NFS
>> mounted on the home server. Backing up other hosts on my network (both
>> full and incremental) over rsync works fine.
>>
>> The home server backs up using tar. The command in the log is:
>>
>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>> /etc --totals --newer=2020-04-22 21:18:10 .
>>
>> If I set
>>
>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>
>>
>> then incremental backups of the home server fail with:
>>
>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>> ‘2020-04-22\\’
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> If instead I set:
>>
>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>
>> then incremental backups fail with:
>>
>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>> 00:00:00
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> Could it be to do with my localtar/tar_create.sh? (I created this so
>> long ago I no longer remember where it came from).
>>
>> This is just:
>>
>> #!/bin/sh -f
>> exec /bin/tar -c $*
>>
>> Thanks again
>>
>> Graham
>>
>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>> > Graham,
>> >
>> > This is a problem with shell (likely ssh) escaping of arguments that
>> > contain a space.
>> >
>> > For incremental backups a timestamp is passed as an argument to tar
>> > running on the client.  The argument should be a date and time, eg:
>> >
>> > --after-date 2020-04-22\ 21:18:10'
>> >
>> > Notice there needs to be a backslash before the space, so it is part of
>> > a single argument, not two separate arguments.
>> >
>> > You can tell BackupPC to escape an argument (to protect it from passing
>> > via ssh) by adding a "+" to the end of the argument name, eg:
>> >
>> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>> >
>> >
>> > Craig
>> >
>> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman > > <mailto:gra...@theseamans.net>> wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Ok, I guess its this (from the start of XferLOG.bad):
>> >
>> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
>> 2020-04-22
>> > 00:00:00
>> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>> >
>> > which is kind of confusing, as it goes on to copy the rest of the
>> > directory and then says '0 Errors'. Anyway, its correct that there
>> is no
>> > file called '21:18:10'. Any idea why it thinks there should be?
>> >
>> > Graham
>> >
>> >
>> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
>> > > Graham,
>> > >
>> > > Tar exit status of 512 means it encountered some sort of error
>> > (eg, file
>> > > read error) while it was running on the target client.  Please
>> look at
>> > > the XferLOG.bad file carefully to see 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
Graham,

Your script is the problem.  Using $* causes the shell to resplit
arguments at whitespace.  To preserve the arguments you need to put that in
quotes:

exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman  wrote:

> Thanks Craig
>
> That's clearly the problem, but I'm still mystified.
>
> I have backuppc running on my home server; the storage is on a NAS NFS
> mounted on the home server. Backing up other hosts on my network (both
> full and incremental) over rsync works fine.
>
> The home server backs up using tar. The command in the log is:
>
> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
> /etc --totals --newer=2020-04-22 21:18:10 .
>
> If I set
>
>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> then incremental backups of the home server fail with:
>
> /bin/tar: Substituting -9223372036854775807 for unknown date format
> ‘2020-04-22\\’
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> If instead I set:
>
> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>
> then incremental backups fail with:
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> Could it be to do with my localtar/tar_create.sh? (I created this so
> long ago I no longer remember where it came from).
>
> This is just:
>
> #!/bin/sh -f
> exec /bin/tar -c $*
>
> Thanks again
>
> Graham
>
> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > This is a problem with shell (likely ssh) escaping of arguments that
> > contain a space.
> >
> > For incremental backups a timestamp is passed as an argument to tar
> > running on the client.  The argument should be a date and time, eg:
> >
> > --after-date 2020-04-22\ 21:18:10'
> >
> > Notice there needs to be a backslash before the space, so it is part of
> > a single argument, not two separate arguments.
> >
> > You can tell BackupPC to escape an argument (to protect it from passing
> > via ssh) by adding a "+" to the end of the argument name, eg:
> >
> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
> >
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  > <mailto:gra...@theseamans.net>> wrote:
> >
> >
> >
> >
> >
> >
> >
> >
> > Ok, I guess its this (from the start of XferLOG.bad):
> >
> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
> 2020-04-22
> > 00:00:00
> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
> >
> > which is kind of confusing, as it goes on to copy the rest of the
> > directory and then says '0 Errors'. Anyway, its correct that there
> is no
> > file called '21:18:10'. Any idea why it thinks there should be?
> >
> > Graham
> >
> >
> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > > Graham,
> > >
> > > Tar exit status of 512 means it encountered some sort of error
> > (eg, file
> > > read error) while it was running on the target client.  Please
> look at
> > > the XferLOG.bad file carefully to see the specific error from tar.
> > >
> > > If you are unable to see the error, please send me the entire
> > > XferLOG.bad file?
> > >
> > > Craig
> > >
> > > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
> > mailto:gra...@theseamans.net>
> > > <mailto:gra...@theseamans.net <mailto:gra...@theseamans.net>>>
> wrote:
> > >
> > > I have a persistent problem with backing up one host: I can do
> > a full
> > > backup, but an incremental backup fails on trying to transfer
> > the first
> > > directory:
> > >
> > > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist,
> 18122
> > > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > > Got fatal error during xfer (Tar exited with error 512 ()
> status)
> > > Backup aborted (Tar exited with error 512 () status)
> > >
> > > All other hosts work ok. So I'm guessing it must be a file
> > permission
> > > error. Looking at the files, everything seems t

Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread Craig Barratt via BackupPC-users
In the example you showed, the file contents have
digest 4b544ad7b8992fbbc0fafe34ae6ab5d5.  You can pass that directly to
BackupPC_zcat if you want, which will uncompress the file to stdout, eg:

BackupPC_zcat 4b544ad7b8992fbbc0fafe34ae6ab5d5 | wc

The pool directory tree is described in the documentation
<https://backuppc.github.io/backuppc/BackupPC.html#Storage-layout>:

For V4+, the digest is the MD5 digest of the full file contents (the length
is not used). For V4+ the pool files are stored in a 2 level tree, using 7
bits from the top of the first two bytes of the digest. So there are 128
directories at each level, numbered evenly in hex from 0x00, 0x02, to 0xfe.

For example, if a file has an MD5 digest of
123456789abcdef0123456789abcdef0, the uncompressed file is stored in
__TOPDIR__/pool/12/34/123456789abcdef0123456789abcdef0.


In your example, the file will be at (assuming compression is
on): __TOPDIR__/cpool/4a/54/4b544ad7b8992fbbc0fafe34ae6ab5d5.  The two
directory entries are the first two bytes (4b and 54) of the filename,
rounded down to the nearest even number (ie, 4b -> 4a, 54 -> 54).
Numerically it's anding with 0xfe.
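
Expressed as shell arithmetic (bash needed for the substring syntax;
the __TOPDIR__ value below is an example):

  d=4b544ad7b8992fbbc0fafe34ae6ab5d5
  d1=$(printf '%02x' $(( 0x${d:0:2} & 0xfe )))   # 4b -> 4a
  d2=$(printf '%02x' $(( 0x${d:2:2} & 0xfe )))   # 54 -> 54
  echo "/var/lib/backuppc/cpool/$d1/$d2/$d"
  # -> /var/lib/backuppc/cpool/4a/54/4b544ad7b8992fbbc0fafe34ae6ab5d5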

Craig

On Fri, Apr 24, 2020 at 4:57 AM R.C.  wrote:

>
> Il 24/04/2020 02:53, Craig Barratt via BackupPC-users ha scritto:
> > The attrib file contains the meta data (mtime, permissions etc) for all
> > the files in that directory, including the md5 digest of the contents of
> > each file.
> >
> > You can use BackupPC_attribPrint to print the contents of the attrib
> > file, which will show the meta data for each file.
> >
> > Craig
> >
>
> Thank you Craig.
>
> I'm sorry I can't still figure out the right way to get to the file.
>
> If I issue:
>
> sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_attribPrint
> attrib_c5cda251876d069be82cd87feef573be |head -n 15
>
> the first file's metadata returned is:
>
> Attrib digest is c5cda251876d069be82cd87feef573be
> $VAR1 = {
>'0001E9891510415CBBFA53F685D8FF2C.Zip' => {
>  'compress' => 3,
>  'digest' => '4b544ad7b8992fbbc0fafe34ae6ab5d5',
>  'gid' => 0,
>  'inode' => 9,
>  'mode' => 484,
>  'mtime' => 1320069331,
>  'name' => '0001E9891510415CBBFA53F685D8FF2C.Zip',
>  'nlinks' => 0,
>  'size' => 101121,
>  'type' => 0,
>  'uid' => 0
>},
>
> How to retrieve the actual path of that file in the cpool tree?
> I cannot use the digest to walk down the cpool tree. Octets lead to a
> non-existent path.
> Using the inode to find the file is cumbersome and requires the use of
> low level fs tools.
>
> I'm referring to V4
>
> Thank you
>
> Raf
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

This is a problem with shell (likely ssh) escaping of arguments that
contain a space.

For incremental backups a timestamp is passed as an argument to tar running
on the client.  The argument should be a date and time, eg:

--after-date 2020-04-22\ 21:18:10'

Notice there needs to be a backslash before the space, so it is part of a
single argument, not two separate arguments.

You can tell BackupPC to escape an argument (to protect it from passing via
ssh) by adding a "+" to the end of the argument name, eg:

$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


Craig

On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  wrote:

>
>
>
>
>
>
>
> Ok, I guess its this (from the start of XferLOG.bad):
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> which is kind of confusing, as it goes on to copy the rest of the
> directory and then says '0 Errors'. Anyway, its correct that there is no
> file called '21:18:10'. Any idea why it thinks there should be?
>
> Graham
>
>
> On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > Tar exit status of 512 means it encountered some sort of error (eg, file
> > read error) while it was running on the target client.  Please look at
> > the XferLOG.bad file carefully to see the specific error from tar.
> >
> > If you are unable to see the error, please send me the entire
> > XferLOG.bad file?
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman  > <mailto:gra...@theseamans.net>> wrote:
> >
> > I have a persistent problem with backing up one host: I can do a full
> > backup, but an incremental backup fails on trying to transfer the
> first
> > directory:
> >
> > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > Got fatal error during xfer (Tar exited with error 512 () status)
> > Backup aborted (Tar exited with error 512 () status)
> >
> > All other hosts work ok. So I'm guessing it must be a file permission
> > error. Looking at the files, everything seems to be owned by
> > backuppc.backuppc, so I don't know where/what else to look for. Any
> > suggestions?
> >
> > Thanks
> > Graham
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> >     <mailto:BackupPC-users@lists.sourceforge.net>
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

Tar exit status of 512 means it encountered some sort of error (eg, file
read error) while it was running on the target client.  Please look at the
XferLOG.bad file carefully to see the specific error from tar.

If you are unable to see the error, please send me the entire XferLOG.bad
file?

Craig

On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman 
wrote:

> I have a persistent problem with backing up one host: I can do a full
> backup, but an incremental backup fails on trying to transfer the first
> directory:
>
> tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> sizeExistComp, 2 filesTotal, 81381 sizeTotal
> Got fatal error during xfer (Tar exited with error 512 () status)
> Backup aborted (Tar exited with error 512 () status)
>
> All other hosts work ok. So I'm guessing it must be a file permission
> error. Looking at the files, everything seems to be owned by
> backuppc.backuppc, so I don't know where/what else to look for. Any
> suggestions?
>
> Thanks
> Graham
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 24 Apr 2020, R.C. wrote:


How to retrieve the actual path of that file in the cpool tree?  I
cannot use the digest to walk down the cpool tree. Octets lead to
a non-existent path.


Have you just missed the little wrinkle that the subdirectories are
all even numbers?

You need to clear the least significant bit of the least significant
octet in each two-digit directory name as you go down the tree.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 23 Apr 2020, Robert Sommerville wrote:


You can use the locate [...] command ...


That's likely to be unreliable - not only because many systems don't
have 'locate' installed by default but also because many systems will
exclude BackupPC databases from indexing.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to find files in the pool?

2020-04-23 Thread Craig Barratt via BackupPC-users
The attrib file contains the meta data (mtime, permissions etc) for all the
files in that directory, including the md5 digest of the contents of each
file.

You can use BackupPC_attribPrint to print the contents of the attrib file,
which will show the meta data for each file.

Craig

On Mon, Apr 20, 2020 at 9:58 AM R.C.  wrote:

> Hi all
>
> given the following folder attrib file:
> attrib_cdc5cda251876d069be82cd87feef573be
>
> in which subpath of the cpool folder are the contained files?
> I've tried walking the path starting from the first and then from the last
> octet of the MD5 digest, but an octet goes missing at some point.
>
> Thank you
>
> Raf
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to find files in the pool?

2020-04-23 Thread Robert Sommerville via BackupPC-users

You can use the “locate ‘file name’” command but that will return all matches. 
It does report the full path so you can grep for known path strings to narrow 
the results.
If the updatedb command is not in root's cron you'll have to execute it as root
first. The very first run will take a while to finish, as the BackupPC data
space is large.
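
Something like this, run as root (the digest here is the example from
Craig's reply elsewhere on this list; substitute your own):

  updatedb    # the first run is the slow one
  locate 4b544ad7b8992fbbc0fafe34ae6ab5d5 | grep /cpool/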

From: R.C.
Date: 04/20/2020, 12:57 PM
Subject: [BackupPC-users] How to find files in the pool?

Hi all

given the following folder attrib file:
attrib_cdc5cda251876d069be82cd87feef573be

in which subpath of the cpool folder are the contained files?
I've tried walking the path starting from the first and then from the last
octet of the MD5 digest, but an octet goes missing at some point.

Thank you

Raf

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net 
(mailto:BackupPC-users@lists.sourceforge.net)
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using a jump host to backup via rsync over SSH

2020-04-22 Thread Falko Trojahn via BackupPC-users

Hi Pim,


The reason for this had nothing to do with any "remote host", it was that the 
"backuppc" user had no shell configured!

    # grep backuppc /etc/passwd
backuppc:x:994:990::/var/lib/BackupPC:/sbin/nologin

After changing the shell to /bin/bash for backuppc, all errors disappeared. 
Running backups automatically and from the web application succeeded with the 
jump host.



Glad you got it to work.


Apparently a shell is required to use a jump host from the ssh command in this 
situation?


There's an analysis about apparently same problem:

https://unix.stackexchange.com/questions/457692/does-ssh-proxyjump-require-local-shell-access

They mention that setting the "SHELL" variable is sufficient.



I will further investigate if this is really required and will report back in 
this list.

@Falko: since you got it working without changes, is a shell set for the 
"backuppc" user?


Yes, it's /bin/sh or /bin/bash.
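
If yours is /sbin/nologin as Pim's was, one way to change it (run as
root; editing /etc/passwd directly also works):

  usermod -s /bin/bash backuppc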

Greetings,
Falko


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using a jump host to backup via rsync over SSH

2020-04-22 Thread Falko Trojahn via BackupPC-users

Hi Pim,


Without DumpPreUserCmd I get the following error when initiating an incremental 
backup through the web interface:

Got fatal error during xfer (rsync error: unexplained error (code 255) 
at io.c(226) [Receiver=3.1.2.0])

that reminds me that I'm still on rsync-bpc version 3.0.9.14.
Dunno why any more, sorry. Maybe there weren't that many differences
between the versions.




And the following error when initiating a full backup through the web interface:

Got fatal error during xfer (No files dumped for share /)

Usually there is some more information in the XferLOG. What does the
XferLOG of a failed attempt show?



Also, both backup attempts do not yield any visible outgoing traffic via 
tcpdump. So apparently it is not even initiating the outbound SSH connections.


That's weird, indeed.


Thanks. Already tried a manual backup through the CLI, and this works.


You mean by starting BackupPC_dump? Then, that sounds great, at least 
this is working. You are using the same backuppc user to start the 
backup manually, right?




So, I am only seeing this issue with backups automatically initiated from the 
hourly schedule or manually initiated through the web interface.

That's weird, right? I guess since you guys are able to get it working without 
any issues, I must be doing something awfully stupid or I am hitting a weird 
bug in my CentOS environment.



 Can I increase logging of the scheduling processes or of the web 
interface so I can see more details about which exact commands are being 
executed?


Do you see any difference in the XferLOG between a manual run and a
scheduled one?



I am still thinking the SSH command is somehow getting mixed up when started 
from the scheduler or web application.


ok, could you please try and put a
  /usr/share/BackupPC/bin/BackupPC_dump -f -vvv (hostname)
in the crontab of your backuppc user?

Perhaps set it a few minutes ahead, to run one time only. Maybe something
like this (adjust the time); if you get no cron mails, redirect to a file:


0 1 * * * /usr/share/BackupPC/bin/BackupPC_dump -f -vvv hostname > /tmp/my-backup-log.txt 2>&1


If this fails too, please check what is missing from PATH compared to
your normal shell use.


Greetings,
Falko


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using a jump host to backup via rsync over SSH

2020-04-21 Thread Falko Trojahn via BackupPC-users

Hello Pim,

using a jumphost here for backing up a remote host and its VMs without 
any problems. What BackupPC version do you use?
I am using BackupPC 4.3.1-3 from the yum repository for CentOS 7. Very 
good to hear that you got it working on your installation.


ok, so I'll try it on an 4.3.2 installation, too, and give you some 
information if it works there.


working here as described, without any problems. No changes to the BackupPC 
configuration needed.


So, the differences seem to be:

* you're using sudo and a special backup user

* as your DumpPreUserCmd errors out, did you try without?

> DumpPreUserCmd returned error status 65280... exiting
  *  what is this error status from?
  *  does the backup work without DumpPreUserCmd?

May be this helps, too:

On Tue, 11 Jun 19, Adam Goryachev wrote:
> Finally, the one single command I've found to be the *most* helpful in
> debugging any such issues is this:
>
> /usr/lib/backuppc/bin/BackupPC_dump -f -vvv hostname
>
> Which will just try to do a full backup, but show you on the console
> what it is doing through each step. You should make sure there is no
> scheduled backup for this host, and no in-progress backup for this host
> when you run this command. Under normal operation, you shouldn't use
> this command.

(on Debian, this is in another location:
  /usr/share/backuppc/bin/BackupPC_dump
 don't know about CentOS)


> Also, not trying to be cheeky here: for added security you don't
> actually need AgentForwarding
Yes, you're right - this was a copy-paste leftover.

> nor root logins when using a jump host.
Yes, you can use another user. Is it a normal sshd that you use for the
jump host, or something like sshportal or ssh-bastion?


There are different ways for ssh proxy/jump, too:
https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts#Jump_Hosts_--_Passing_Through_a_Gateway_or_Two
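
For what it's worth, newer OpenSSH (7.3+) can do the same without nc
using ProxyJump; reusing the host names from this thread:

  # ad hoc:
  ssh -J jumphostuser@jumphostname root@client-machine

  # or in ~/.ssh/config:
  Host client-machine
      ProxyJump jumphostuser@jumphostname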

Best regards,
Falko


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync vs rsyncd speed for huge number of small files

2020-04-21 Thread Craig Barratt via BackupPC-users
What version of BackupPC are you running?  4.x will likely be a good deal
faster than 3.x for both rsync+ssh and rsyncd.

The penalty of rsync+ssh vs rsyncd is likely modest, although it depends on
how much data is changing between backups.

Craig

On Tue, Apr 21, 2020 at 1:33 AM R.C.  wrote:

> Hi
>
> What is the expected difference in performance between rsync+shh and
> rsyncd?
> I would use it over a private LAN, so no concerns about security.
> Currently rsync+ssh is way too slow for a huge number of very small
> files (about 700K email files in an imap server tree), even without
> --checksum.
>
> Thank you
>
> Raf
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using a jump host to backup via rsync over SSH

2020-04-21 Thread Falko Trojahn via BackupPC-users

Hi Pim,


using a jumphost here for backing up a remote host and its VMs without any 
problems. What BackupPC version do you use?

I am using BackupPC 4.3.1-3 from the yum repository for CentOS 7. Very good to 
hear that you got it working on your installation.


ok, so I'll try it on an 4.3.2 installation, too, and give you some 
information if it works there.


Greetings,
Falko


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Using a jump host to backup via rsync over SSH

2020-04-20 Thread Falko Trojahn via BackupPC-users

Hi,


Option B: using an SSH client config file

Alternatively I have tried using an implicit jump host through SSH client 
config with a slightly different way of setting up the jump host (through 
netcat). This results in exactly the same errors.

Host client-machine
   ProxyCommand ssh jumphost nc %h %p 2> /dev/null

Host jumphost
   Hostname jumphostname
   User jumphostuser



using a jumphost here for backing up a remote host and its VMs without 
any problems. What BackupPC version do you use?


Doing it like this on Debian 10 buster with BackupPC 3.3.2-2:

:~# su - backuppc
$ ssh target  # confirm fingerprint
$ cat ~/.ssh/config

#
Host your-real-host
  HostName your-real-ip-here
  Port 22   # or whatever you use
  ForwardAgent yes
  ForwardX11 no
  User root
#
Host first-vm-on-real-host
  HostName first-vm-ip-here
  ForwardAgent no
  ForwardX11 no
  User root
  Port 22
  ProxyCommand ssh root@your-real-host nc %h %p
#
Host 2nd-vm-on-real-host
  HostName 2nd-vm-ip-here
  ForwardAgent no
  ForwardX11 no
  User root
  Port 22
  ProxyCommand ssh root@your-real-host nc %h %p

$ ssh-keygen -t rsa -b 4096
$ ssh-copy-id your-real-host
$ ssh-copy-id first-vm-on-real-host
$ ssh-copy-id 2nd-vm-on-real-host

Perhaps you have to adjust /etc/ssh/sshd_config to allow ssh-key only 
access.


If you use any backuppc-wrapper-script on the real host, maybe adapt it 
to the ssh forwarding.


When trying manually, make sure you do not use your own loaded ssh key
thru ssh-agent, but really use the ssh key of the backuppc user. Prove 
that by:

ssh-add -l

HTH
Falko


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-19 Thread Craig Barratt via BackupPC-users
No, those options aren't used in 4.x.

Craig

On Sun, Apr 19, 2020 at 9:55 AM Ghislain Adnet  wrote:

> On 4/19/20 8:45 AM, Craig Barratt via BackupPC-users wrote:
> > Yes, the rsync settings have changed in 4.x.  You'll need to
> set $Conf{RsyncSshArgs} and $Conf{RsyncClientPath}.  You
> > should be able to put the chroot into $Conf{RsyncClientPath}.
> >
> > Craig
>
> ok so i guess
>
> #$Conf{RsyncClientCmd}=
> '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'" /usr/sbin/chroot
> "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
>
> convert to
>
> $Conf{RsyncClientPath}  =
> 'sudo vnamespace -e "'.$nomduvserveur.'" /usr/sbin/chroot
> "/vservers/'.$nomduvserveur.'" /usr/bin/rsync ';
>
> $Conf{RsyncSshArgs} = [
>  '-e', '$sshPath  -T -q -x -l aqbackup',
> ];
>
>
> seems to work, is it still necessary to have things like
>
>'--block-size=2048',
>'--checksum-seed=32761',
>
> to help the backups ?
>
> Regards,
> Ghislain.
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-19 Thread Craig Barratt via BackupPC-users
Yes, the rsync settings have changed in 4.x.  You'll need to
set $Conf{RsyncSshArgs} and $Conf{RsyncClientPath}.  You should be able to
put the chroot into $Conf{RsyncClientPath}.

Craig

On Sat, Apr 18, 2020 at 11:39 PM Ghislain Adnet  wrote:

> On 4/18/20 6:49 AM, Craig Barratt via BackupPC-users wrote:
> > Are you using rsync?  The default in 4.x is --one-file-system.  You can
> edit the config file to remove that if you
> > prefer. I do realize you said "the client machine has only one
> partition", so that might not be the issue.
> >
>
> oh sorry, yes i use rsync via ssh.
>
> I think I found the issue: the files being backed up live in a
> "container"-like system. So backuppc connects
> to the host and then backs up the guest:
>
>
> http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_rsyncclientcmd_
>
> # rsync client commands
> $Conf{RsyncClientCmd}   = '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'"
> /usr/sbin/chroot "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
> $Conf{RsyncClientRestoreCmd}= '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'"
> /usr/sbin/chroot "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
>
>
> it seems this part is not working as it was, and it is trying to back up the
> host instead of the guest.
>
> I don't find those in 4.0, have they disappeared?
>
> regards,
> Ghislain.
>
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-17 Thread Craig Barratt via BackupPC-users
Are you using rsync?  The default in 4.x is --one-file-system.  You can
edit the config file to remove that if you prefer. I do realize you said
"the client machine has only one partition", so that might not be the issue.

It would be helpful for you to include the exact rsync_bpc command that is
being run from the XferLOG file.

Craig

On Fri, Apr 17, 2020 at 8:56 AM Ghislain Adnet  wrote:

> hi,
>
>   I am testing backuppc4 after quite some time using backuppc3. It is
> quite a pain as there are no packages. I used the
> commands from the source (debian buster):
>
>
> cpan install SCGI
> apt-get install  libcgi-pm-perl apache2-utils
>
>
> configure.pl\
>  --batch    \
>  --cgi-dir /usr/lib/backuppc/cgi-bin\
>  --data-dir /var/lib/backuppc\
>  --hostname $(hostname -f)  \
>  --html-dir /usr/share/backuppc/image\
>  --html-dir-url /backuppc   \
> --run-dir /var/run/backuppc \
> --log-dir /var/log/backuppc \
> --config-dir /etc/backuppc  \
> --scgi-port 3000\
>  --install-dir /usr/share/backuppc;
>
>
>   To try to mimic the debian package of the 3.x, I use nginx with the
> example config to connect to the admin interface.
>
>   Using the same configuration as the 3.x version, I have backups that
> just miss files; entire directories are skipped
> and I don't see why.
>
>   So I removed my old configuration completely and put
>
> BackupFilesOnly  to /var/backups/mysql
> and removed all exclusions
>
> inside this directory /var/backups/mysql there are just some gzipped
> mysql dump files.
>
>   The backups end well and tell me I have 0 files backed up.
>
>   If I add /etc to the BackupFilesOnly list then /etc is perfectly
> backed up, but still no /var/backups/mysql :(
>
>   So I am quite at a loss here, any idea what could be going on? This
> configuration works like a charm on backuppc3;
> the client machine has only one partition.
>
> regards,
> Ghislain.
>
>
>
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


  1   2   3   4   5   6   7   8   >