Re: [BackupPC-users] Backuppc in large environments

2020-12-02 Thread Dave Sherohman
Thanks, everyone!  Looks like backuppc should be able to handle my 
network, no problem.  To hit on specific points, in threaded order:


- I'll be sure to get plenty of RAM.  We're going to be buying a new, 
probably Dell, rackmount system for this and I wouldn't have been 
getting any less than 64G RAM anyhow, but bumping it up to 256 should be 
no problem.


- I haven't looked at the Debian docs for backuppc yet, but it is 
packaged in the main Debian stable repo and there should be 
Debian-specific install instructions in the package.  They're usually 
pretty good, so I don't anticipate any major setup hassles.


- Budget is finite, but this is to replace an existing Tivoli backup 
solution, so organizational accounting rules mean I can probably spend 
the equivalent of 5 years of TSM license fees with few or no questions 
asked.  And IBM's licensing fees ain't cheap.


- I'm definitely backing up the VMs as individual hosts, not as disk 
image files.  Aside from minimizing atomicity concerns, it also makes 
single-file restores easier and, in the backuppc context, I doubt that 
deduplication would work well (if at all) with disk images.


- For the database servers, I was already considering a cron job to do 
SQL dumps of everything and backing that up instead of the raw database 
files.  But there's something fishy with the server that's sending 
400G/day anyhow...  It only has about 650G used on it and /var/lib/mysql 
is under 100G, so there's no reason it should have 400G of changes 
daily.  I'm in the process of looking into that.
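(For concreteness, the kind of thing I have in mind is a nightly cron 
job along these lines -- purely a sketch, assuming MySQL, credentials in 
/root/.my.cnf, and an existing /var/backups/sql directory that bpc backs 
up while /var/lib/mysql is excluded:

# /etc/cron.d/sql-dumps (hypothetical)
# dump all databases at 01:30, one file per weekday
30 1 * * * root mysqldump --all-databases --single-transaction | gzip > /var/backups/sql/all-$(date +\%a).sql.gz

That keeps a rolling week of dumps without any rotation logic.)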


- Thanks for the tips on zfs settings.  I tend to use ext4 by default 
and planned to look at btrfs as an alternative, but I'll check zfs out, too.


- I'm already running icinga, so monitoring is handled.  (Or will be, 
once the backup server is installed.)


- I hadn't considered the possibility of horizontal scaling. Thanks for 
bringing that up.  I'll have a chat with the other admins tomorrow and 
see what they think about that, although I think I personally prefer 
vertical scaling just for the simplicity of single-point administration.


And another question which came to mind from the zfs point:  Is anyone 
familiar with VDO (Virtual Data Optimizer)?  It's an abstraction layer 
which sits between the kernel and the filesystem and does on-the-fly 
data compression and disk-block-level deduplication.  A friend uses a 
homegrown rsync-based backup system and says it cuts his disk usage 
significantly, but I'm wondering whether it would help much in a 
backuppc setting, since bpc already does its own file-level deduplication.


On 12/1/20 5:37 PM, Richard Shaw wrote:
So long story short, a lot of it will depend on how fast your data 
changes/grows, but it doesn't necessarily require a high end computer. 
You really just need something beefy enough not to be the 
bottleneck. If you can make the client I/O the bottleneck, then you're 
good. Depending on your budget (or what you have lying around) a 
decent AMD budget Ryzen system would work quite nicely.


If you're familiar with Debian then I'm sure it's well documented how 
to install and setup. I maintain the Fedora EPEL version and run it on 
CentOS 8 quite nicely.


Thanks,
Richard


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Backuppc in large environments

2020-12-01 Thread Dave Sherohman

Hey, all!

I've been looking at setting up amanda as a backup solution for a fairly 
large environment at work and have just stumbled across backuppc.  While 
I love the design and scheduling methods of amanda, I'm also a big fan 
of incremental-only reverse-delta backup methods such as that used by 
backuppc, so now I'm wondering...


How big can backuppc reasonably scale?

The environment I'm dealing with includes around 75 various servers 
(about 2/3 virtual, 1/3 physical), mostly running Debian, with a few 
machines running other linux distros and maybe a dozen Windows 
machines.  Total data size that we want to maintain backups for is 
around 70 TB.  Our current backup system is using Tivoli Storage 
Manager, a commercial product that uses an incremental-only strategy 
similar to backuppc's, and the daily backup volume is running around 750 
GB per day, with two database servers providing the majority of that 
volume (400 GB/day from one and 150 GB/day from the other).


Is this something that backuppc could reliably handle?

If so, what kind of CPU resources would it require?  I've already got a 
decent handle on the network requirements from observing the current TSM 
backups and can calculate likely disk storage needs, but I have no idea 
what to expect the backup server to need in the way of processing power.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Getting file list from freenas

2021-01-14 Thread Dave Sherohman
I have a test install of backuppc up and running, and backing up a 
half-dozen Debian servers with no problems.  Now our NAS admin has asked 
me to add a freenas machine to the test, and it's just giving me 
"fileListReceive failed" whenever I try to run a backup.


I've verified that I can ssh in as the bpc user without problems, and 
determined that freenas puts rsync in a different place than debian 
does, so I created a file at /etc/backuppc/thisnas.pl containing


$Conf{RsyncClientPath} = '/usr/local/bin/rsync';

then forced bpc to re-read the configuration and tried again, but still 
"fileListReceive failed".


What's my next step for resolving this?  Is it possible to get more 
detailed information in the bpc logs than just "fileListReceive failed"?


Note that I do *not* have root/sudo access on the freenas machine, so 
there are logs I can't check there.  I instructed the nas admin to add 
the line


backuppc  ALL=NOPASSWD: /usr/local/bin/rsync --server --sender *

to sudoers, and he has done so, so bpc should have sufficient sudo 
capability to run, even though I'm not an admin there myself.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Getting file list from freenas

2021-01-18 Thread Dave Sherohman
Made a little progress on this...  I finally noticed the "Last bad 
XferLOG" link in the CGI interface.


It seems to be a sudo-related problem - it's prompting for a password.  
Either


a) FreeBSD uses a different syntax/version in sudoers, so my NOPASSWD 
line isn't working, or


b) FreeNAS does some kind of weird shuffling of the root filesystem, 
apparently running it off a ramdisk or something like that and keeping 
the "real" persistent root filesystem somewhere else, so perhaps the 
sudoers change is getting lost somewhere in the insanity and not taking 
effect as it should.
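
One thing worth having him try (FreeBSD uses the same upstream sudo, so 
this should work there too) is listing the backuppc user's effective 
sudo rights directly on the NAS:

sudo -l -U backuppc

If the NOPASSWD rsync entry doesn't appear in that output, the sudoers 
change didn't persist, which would point at theory (b).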


On 1/14/21 3:25 PM, Dave Sherohman wrote:
I have a test install of backuppc up and running, and backing up a 
half-dozen Debian servers with no problems.  Now our NAS admin has 
asked me to add a freenas machine to the test, and it's just giving me 
"fileListReceive failed" whenever I try to run a backup.


I've verified that I can ssh in as the bpc user without problems, and 
determined that freenas puts rsync in a different place than debian 
does, so I created a file at /etc/backuppc/thisnas.pl containing


$Conf{RsyncClientPath} = '/usr/local/bin/rsync';

then forced bpc to re-read the configuration and tried again, but 
still "fileListReceive failed".


What's my next step for resolving this?  Is it possible to get more 
detailed information in the bpc logs than just "fileListReceive failed"?


Note that I do *not* have root/sudo access on the freenas machine, so 
there are logs I can't check there.  I instructed the nas admin to add 
the line


backuppc  ALL=NOPASSWD: /usr/local/bin/rsync --server --sender *

to sudoers, and he has done so, so bpc should have sufficient sudo 
capability to run, even though I'm not an admin there myself.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Prevent "EmailNoBackupEverMesg" messages from being sent

2021-01-26 Thread Dave Sherohman
When I ran into a similar case while leaving a system 
partially-configured, I handled it by (temporarily) blanking out the 
"user" for that host, so there was no address for the mail to be sent 
to.  Worked fine, and then I re-set "user" once the host was 
successfully running backups.
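
For reference, "blanking out the user" just means leaving the user 
column empty in the hosts file, along these lines (host names here are 
made up for illustration):

host        dhcp    user        moreUsers
oldserver   0       dave
newserver   0

With no user listed for newserver, there's simply no address for the 
reminder mail to go to.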


On 1/26/21 4:43 PM, backu...@kosowsky.org wrote:

I have some hosts set up that for now have no backups -- and I am OK
with that.

Is there any way (on a per-host basis) to prevent the "no backup ever"
message from being sent?

Note, for the other messages, one can prevent them by just setting the
notify period to be very long, but I don't see how to prevent this
message from being sent.

If that is not possible now, one simple hack would be to add a check
to the backuppc code so that if the corresponding $Conf{EmailSubj}
value is set to -1, then message sending is skipped (i.e., message is
never sent).

This would also avoid having to set the period to very large numbers
for other messages to avoid them ever being sent.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Disk space used by single host

2021-05-07 Thread Dave Sherohman
I think you'd first have to define what you mean by "how much disk space 
is used by a single host's backups".  Because of BPC's deduplication 
functions, the answer will be very different if you mean "how much space 
would I need to make a full copy of this host's data" vs. if you mean 
"how much space would be freed on the backup server if I deleted all of 
this host's backups".


On 5/6/21 6:18 PM, Gerald Brandt wrote:

Hi,

Is there an easy way to find out how much disk space is used by a 
single hosts backups?



Gerald




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Problem with WakupSchedule and Backupplan

2021-05-06 Thread Dave Sherohman
Daily schedule seems to work, too.  I've got a NAS with 20 T of 
backed-up data which takes a little over 3 days to do a full backup and 
its daily incrementals will patiently wait for that to finish before 
they try to run.  A couple other hosts in the 7-8 T range also take over 
24 hours to complete a full, and they have no problems with backup 
windows stepping on each other either; no attempts are made to start an 
additional backup on a host which already has a backup in progress.


And then the daily incrementals for all of these machines usually take 
only about 5-10 minutes to complete because, even though they hold a lot 
of data, it doesn't change frequently - which also seems to be the case 
for most "large media libraries".


On 5/5/21 3:54 PM, G.W. Haywood via BackupPC-users wrote:

Hi there,

On Wed, 5 May 2021, Ralph Sikau wrote:


I have a large media library which is too big to be backed
up on a single day.


Does it matter that it's too large to be backed up in a single day?

You could run a weekly or even monthly schedule.

It doesn't have to be daily.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Public information page

2021-03-29 Thread Dave Sherohman
I've just added a "guest" user to my bpc htpasswd, with the intention of 
allowing coworkers to view the overall status of the system without 
needing to go through me and... well... it doesn't show anything at all, 
since "guest" doesn't (and shouldn't!) own any of the machines that are 
being backed up.


What I would like to have displayed for guest (and any other logged-in 
user, of course) is:


- On the Status page, the full General Server Information section 
(including, perhaps most importantly, the pool usage graphs) and ideally 
also the full list of Currently Running Jobs.


- On the Host Summary page, the full table of host information (most 
importantly the last backup dates).


- Ideally also the historical information on the host detail pages, but 
with no buttons to trigger actions and no ability to browse the 
backed-up filesystems.


Is there any way to configure the CGI interface to allow this, or some 
other option for allowing users to see when backups were made without 
also giving them access to view the content of those backups?




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Dave Sherohman
I'm just the linux admin around here, but the windows admin is working 
on it.  At this point, I'm mostly just looking for a workaround for 
until he figures it out.  Plan B for dealing with it is to reduce the 
backup frequency for the windows machines and run some manually to 
spread them out so they run on different days.  But that's undesirable, 
since it means they're going multiple days between backups, of course.


At this point, the extent of what I know about the underlying problem is 
that the xferlog shows:


Creates event in log
backuppc_start couldn't register event source, error code 5
  Logevent not completed after trying for 6 milliseconds.
  The target server or the domain controller might be unavailable.
  Hint: Increase the TimeOut parameter or try again later.
Waits 30 s while the shadow copies are created and the file 
"C:\cygwin\backuppc\rsyncd.pid" is created
(repeated a variable number of times)

from the prerun script, then a number of "device or resource busy" 
errors while doing the actual backup, and finally a repeat of the 
"couldn't register event source, error code 5" and a lot of "Waits 60 s 
while the file "C:\cygwin\backuppc\shadow_del.pid" is created" when the 
postrun script executes.


This is using the windows client from 
https://sourceforge.net/p/backuppc-windows-client/code/ci/master/tree/


On 3/11/21 2:29 PM, Adam Goryachev via BackupPC-users wrote:


On 12/3/21 00:03, Dave Sherohman wrote:


If I were to set $Conf{MaxBackups} = 1 for one specific host, how 
would that be handled?  Would it prevent that specific host from 
running backups unless there are no other backups in progress?  Would 
it prevent any other backups from being started before that host 
finished?  Would it do both?  Or is that an inherently-global setting 
that has no effect if set for a single host?


My use-case here is that I've got a lot of linux hosts and a handful 
of windows machines.  The linux hosts work great with standard 
ssh/rsync configuration, no problems there.


The windows machines, on the other hand, are using a windows backuppc 
client that our windows admin found on sourceforge and it's having... 
problems... with handling shadow volumes.  As in it appears to be 
failing to create them, which causes backup runs to take many hours 
as it waits for "device or resource busy" files to time out.  Which 
ties up available slots in the MaxBackups limit and prevents the 
linux machines from being scheduled.


So I'm thinking that it might work to temporarily set the windows 
hosts to MaxBackups = 1, if that would prevent multiple windows hosts 
from running at the same time and free up slots for the linux hosts 
to run.  If it would also prevent linux hosts from running when a 
windows host is in progress, though, then that would just make things 
worse.


Or is there some other way I could specify "run four backups at once, 
BUT only one of these six can run at a time (alongside three others 
which aren't in that group)"?


I'm pretty sure this has been discussed before, and is not possible. 
However, I would suggest spending a bit more time to resolve the 
issues with the windows server backups. There is an updated set of 
instructions posted recently to the list (check the archives), if you 
need some help to get something working, the list is a great place to 
ask. Once it works, the windows machines will backup equally as well 
as the Linux ones.


HTH

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Dave Sherohman
If I were to set $Conf{MaxBackups} = 1 for one specific host, how would 
that be handled?  Would it prevent that specific host from running 
backups unless there are no other backups in progress? Would it prevent 
any other backups from being started before that host finished?  Would 
it do both?  Or is that an inherently-global setting that has no effect 
if set for a single host?


My use-case here is that I've got a lot of linux hosts and a handful of 
windows machines.  The linux hosts work great with standard ssh/rsync 
configuration, no problems there.


The windows machines, on the other hand, are using a windows backuppc 
client that our windows admin found on sourceforge and it's having... 
problems... with handling shadow volumes.  As in it appears to be 
failing to create them, which causes backup runs to take many hours as 
it waits for "device or resource busy" files to time out.  Which ties up 
available slots in the MaxBackups limit and prevents the linux machines 
from being scheduled.


So I'm thinking that it might work to temporarily set the windows hosts 
to MaxBackups = 1, if that would prevent multiple windows hosts from 
running at the same time and free up slots for the linux hosts to run.  
If it would also prevent linux hosts from running when a windows host is 
in progress, though, then that would just make things worse.


Or is there some other way I could specify "run four backups at once, 
BUT only one of these six can run at a time (alongside three others 
which aren't in that group)"?



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Dave Sherohman

On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:

I don't see how this would make sense at a per-host level. And any
behavior to have it differ by host is undocumented and not necessarily
predictable.


That's why I asked - I can't predict what it would do if it were 
allowed.  :D




Look at the code that I recently submitted to the group to streamline
creation/deletion of shadow backups.


I saved those posts, but, honestly, I don't see the advantage of using a 
large script for the per-host config files over having two lines to set 
the pre/postdump commands.  Yes, the pre/postdump commands require 
scripts to be installed on each target host, but those scripts come 
along as part of the overall backuppc client installation that they 
(presumably) need anyhow, in order for rsync and such to be available.




  > So I'm thinking that it might work to temporarily set the windows hosts
  > to MaxBackups = 1, if that would prevent multiple windows hosts from
  > running at the same time and free up slots for the linux hosts to run.
  > If it would also prevent linux hosts from running when a windows host is
  > in progress, though, then that would just make things worse.


To me this sounds backasswards.
Why not just INCREASE MaxBackups to allow for a few hung Windows
machines? It's not like they are consuming any server bandwidth or cpu.


Because if I increase it to, say, 20, then that will allow 20 linux 
backups (which are all using bandwidth) just as happily as 20 stalled 
windows backups.


If there were a solution so say "this pool of resources is only for 
linux clients and that pool is only for windows clients" (short of 
having two completely separate bpc installs), then I'd have my 
workaround, but it appears that no such capability exists.



Alternatively, I might divide the blackout periods into one period for Linux
and one for Windows machines. That way Linux machines don't compete
with Windows machines for slots. You can then write a script to kill
any hanging Windows backups that bleed into your Linux slot.

That could work, yes.
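
For example, I could give the windows hosts a per-host override along 
these lines (a sketch; hourBegin/hourEnd are decimal local-time hours 
and weekDays uses 0 for Sunday):

$Conf{BlackoutPeriods} = [
    {
        hourBegin => 6,
        hourEnd   => 23,
        weekDays  => [0, 1, 2, 3, 4, 5, 6],
    },
];

so they could only start during a night window that the linux hosts 
don't use.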

But the only real solution is to figure out why and where the Windows
backups are hanging...

Of course.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Dave Sherohman

On 3/11/21 4:40 PM, backu...@kosowsky.org wrote:

Sounds like the shadow creation script or your implementation of it is
broken.

The precmd script fails to create the shadow volume when it is run from 
the backuppc user account on the backup server, but works when it's run 
from just about any other account.  So I'm thinking user admin rights, 
but the windows admin hasn't yet found exactly which rights are the 
issue (assuming my theory is even correct; I don't really know the admin 
side of windows).

Also, why are you using rsyncd? rsync itself is cleaner and generally
more secure as it uses rsa/dsa keys vs. unencrypted secrets.

I'll ask the windows admin, but I suspect the primary reason is "because 
that's how the backuppc client for windows on sourceforge says to do it".



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Adding a max and warning line to the backup pool size?

2021-03-15 Thread Dave Sherohman

On 3/13/21 5:24 PM, Sorin Srbu wrote:

Is it possible to add a red max and yellow warning line to the BackupPC pool
size chart, reading from the df or OS partition size?

Speaking of the pool size chart, was that removed in BPC 4.x?  I did a 
test install on Debian 10 (bpc 3.3.1), then set up my final production 
install on Debian 11 (bpc 4.4.0).  The deb10 version had the graphs 
showing by default, but I'm not seeing them on the deb11 server.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] "error in rsync protocol data stream"

2021-03-22 Thread Dave Sherohman
The latest new beast to be added to my backup zoo is a synology NAS 
device.  It is being uncooperative, and the only error message it 
provides is


rsync error: error in rsync protocol data stream (code 12) at io.c(226) 
[Receiver=3.1.3.0]


which is rather less than helpful.

Online searches have turned up all kinds of suggestions that "error in 
rsync protocol data stream" means your ssh keys aren't in order or there 
are network problems or what not, but none of those have proven helpful, 
as we are quite certain that passwordless ssh and 'sudo rsync' are both 
working correctly when used manually.


Looking through auth.log on the synology, we've found the entry

2021-03-22T11:00:01+01:00 hacluster01 sudo: backuppc : TTY=unknown ; 
PWD=/volume1/homes/backuppc ; USER=root ; COMMAND=/usr/bin/rsync 
--server --sender -lHogDtpre.iLsfxC -B2048 --timeout=72000 --numeric-ids . /


so, based on that, I tried going to the bpc server and (as the bpc user) 
running


ssh hacluster01 nice sudo /usr/bin/rsync --server --sender 
-lHogDtpre.iLsfxC -B2048 --timeout=72000 --numeric-ids . /


...and that gives no error messages when I run it manually.  It doesn't 
actually *do* anything (it just sits there until I hit ctrl-C), but it 
does not produce an "rsync protocol data stream" error.


However, corresponding to the auth.log entry above, there's also a 
simultaneous entry in /var/log/messages:


2021-03-22T11:00:01+01:00 hacluster01 rsync: User uid (0) is disabled.

which certainly seems likely to be relevant, but, again, that doesn't 
cause any error when running 'sudo rsync' manually, either locally or 
over ssh from the bpc server.



Does anyone know offhand what the solution to this would be? Failing 
that, any tips on how to proceed further with debugging this?  Is there 
some way to get bpc to log every command that it runs, so that I can try 
to reproduce the sequence manually?
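
(The closest thing I've found myself is turning up the transfer log 
verbosity for just this host with a per-host override like

$Conf{XferLogLevel} = 6;

which should at least make the XferLOG chattier, though I don't know 
whether it goes as far as logging the exact commands being run.)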




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-12 Thread Dave Sherohman

On 3/11/21 4:49 PM, Dave Sherohman wrote:

On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:

Look at the code that I recently submitted to the group to streamline
creation/deletion of shadow backups.


I saved those posts, but, honestly, I don't see the advantage of using 
a large script for the per-host config files over having two lines to 
set the pre/postdump commands.  Yes, the pre/postdump commands require 
scripts to be installed on each target host, but those scripts come 
along as part of the overall backuppc client installation that they 
(presumably) need anyhow, in order for rsync and such to be available.

Despite having said that, I did still forward your mail to the windows 
admin, and he said that it looked like your method of handling the 
shadow volumes looked better than how the sourceforge client scripts 
were doing it, so we gave your script a shot and it looks like it worked 
without a hitch.  Thanks for the tip, and for the script!



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Handling machines too large to back themselves up

2021-04-09 Thread Dave Sherohman
Do you know offhand of any online documentation on using pre/postcmd in 
this way?  Using that and ClientNameAlias could be a solution, although 
it always makes me uneasy to use any backup target other than the root 
of a filesystem, due to the possibility of things falling through the 
cracks if someone creates a new directory outside of any existing 
targets and forgets to add it to the list.



On 4/8/21 10:33 PM, Bowie Bailey via BackupPC-users wrote:
You can use the DumpPreUserCmd and DumpPostUserCmd settings to manage 
lockfiles and make sure backups from the aliased hosts cannot run at 
the same time.


You can separate them in the scheduler by manually starting them at 
different times, or by disabling the automatic backups and using cron 
to start the backups at particular times.
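
A rough sketch of the idea (paths and script names are hypothetical; 
you'd also want $Conf{UserCmdCheckStatus} = 1 so a failing pre-command 
actually aborts the backup, and may need to refine this if the 
post-command also runs after an aborted dump):

# per-alias config:
$Conf{UserCmdCheckStatus} = 1;
$Conf{DumpPreUserCmd}  = '$sshPath -q -x $host /usr/local/bin/bpc-lock.sh';
$Conf{DumpPostUserCmd} = '$sshPath -q -x $host /usr/local/bin/bpc-unlock.sh';

# /usr/local/bin/bpc-lock.sh on the client (mkdir is atomic, so only
# one of the aliased jobs can hold the lock at a time):
#!/bin/sh
mkdir /var/run/bpc-backup.lock 2>/dev/null || exit 1

# /usr/local/bin/bpc-unlock.sh on the client:
#!/bin/sh
rmdir /var/run/bpc-backup.lock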



On 4/8/2021 9:47 AM, Mike Hughes wrote:

Hi Dave,

You can always break a backup job into multiple backup 'hosts' by 
using the ClientNameAlias setting. I create hosts based on the share 
or folder for each job, then use the ClientNameAlias to point them to 
the same host.



*From:* Dave Sherohman 
*Sent:* Thursday, April 8, 2021 8:22 AM
*To:* General list for user discussion, questions and support
*Subject:* [BackupPC-users] Handling machines too large to back 
themselves up


I have a server which I'm not able to back up because, apparently, 
it's just too big.


If you remember me asking about synology's weird rsync a couple weeks 
ago, it's that machine again.  We finally solved the rsync issues by 
ditching the synology rync entirely and installing one built from 
standard rsync source code and using that instead.  Using that, we 
were able to get one "full" backup, but it missed a bunch of files 
because we forgot to use sudo when we did it.  (The synology rsync is 
set up to run suid root and is hardcoded to not allow root to run it, 
so we had to take sudo out for that, then forgot to add it back in 
when we switched to standard rsync.)


Since then, every attempted backup has failed, either full or 
incremental, because the synology is running out of memory:


This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
Xfer PIDs are now 1228998,1229014
xferPids 1228998,1229014
ERROR: out of memory in receive_sums [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(118) 
[sender=3.2.0dev]
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 
sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 32863617 inode
rsync_bpc: [generator] write error: Broken pipe (32)

The poor little NAS has only 6G of RAM vs. 9.4 TB of files 
(configured as two sharenames, /volume1 (8.5T) and /volume2 (885G)) 
and doesn't seem up to the task of updating that much at once via rsync.


Adding insult to injury, even a failed attempt to back it up causes 
the bpc server to take 45 minutes to copy the directory structure 
from the previous backup before it even attempts to connect, and then 
12-14 hours doing reference counts after it finishes backing up 
nothing.  Which makes trial-and-error painfully slow, since we can 
only try one thing, at most, each day.


In our last attempt, I tried flipping the order of the 
RsyncShareNames to do /volume2 first, thinking it might successfully 
back up the smaller share successfully before running out of memory 
trying to process the larger one.  It did not run out of memory... 
but it did sit there for a full 24 hours with one CPU (out of four) 
running pegged at 99% handling the rsync process before we finally 
put it out of its misery.  The bpc xferlog recorded that the 
connection was closed unexpectedly (which is fair, since we killed 
the other end) after 3182 bytes were received, so the client clearly 
hadn't started sending data yet.  And now, after that attempt, the 
bpc server still lists the status as "refCnt #2" another 24 hours 
after the client-side rsync was killed.


So, aside from adding RAM, is there anything else we can do to try to 
work around this?  Would it be possible to break this one backup down 
into smaller chunks that are still recognized as a single host (so 
they run in sequence and don't get scheduled concurrently), but don't 
require the client to diff large amounts of data in one go, and maybe 
also speed up the reference counting a bit?


An "optimization" (or at least an option) to completely skip the 
reference count updates after a backup fails with zero files received 
(and, therefore, no new/changed references to worry about) might also 
not be a bad idea.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/

Re: [BackupPC-users] Handling machines too large to back themselves up

2021-04-09 Thread Dave Sherohman

On 4/8/21 8:46 PM, Les Mikesell wrote:

On Thu, Apr 8, 2021 at 8:25 AM Dave Sherohman  wrote:

rsync error: error allocating core memory buffers (code 22) at util2.c(118) 
[sender=3.2.0dev]
This is more about the number of files than the size of the drive.  Do
you happen to know if there are directories containing millions of
tiny files that could feasibly be archived as a tar or zip file
instead of stored separately?


I don't know details of the filesystem contents on this machine, but our 
earlier not-quite-full (some files missed because we didn't use sudo) 
backup contained 24,058,239 files according to the bpc host status page, 
so that is a possibility.


The synology's admin replied faster than I expected and says there's a 
directory where scanned files are dropped which "contains a lot of 
files", so I'm looking into whether that can be archived (or skipped - 
most of the scans we deal with tend to be temporary files that are 
discarded after we do OCR on them).



Also, rsync versions newer than 3.x are supposed to handle it better.
  Is your server side extremely old?


The rsync on the synology NAS is version 3.0.9.  The bpc server is a 
brand-new Debian 11 install, with rsync 3.2.3.


The samba FAQ link mentions that the memory optimization "only works 
provided that both sides are 3.0.0 or newer and certain options that 
rsync currently can't handle in this mode are not being used." Any idea 
what those "certain options" might be?  The client-side rsync commands 
look pretty basic, so it's probably not using any of them, but it's a 
possibility.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Handling machines too large to back themselves up

2021-04-08 Thread Dave Sherohman
I have a server which I'm not able to back up because, apparently, it's 
just too big.


If you remember me asking about synology's weird rsync a couple weeks 
ago, it's that machine again.  We finally solved the rsync issues by 
ditching the synology rync entirely and installing one built from 
standard rsync source code and using that instead. Using that, we were 
able to get one "full" backup, but it missed a bunch of files because we 
forgot to use sudo when we did it.  (The synology rsync is set up to run 
suid root and is hardcoded to not allow root to run it, so we had to 
take sudo out for that, then forgot to add it back in when we switched 
to standard rsync.)


Since then, every attempted backup has failed, either full or 
incremental, because the synology is running out of memory:


This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
Xfer PIDs are now 1228998,1229014
xferPids 1228998,1229014
ERROR: out of memory in receive_sums [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(118) 
[sender=3.2.0dev]
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 
sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 32863617 inode
rsync_bpc: [generator] write error: Broken pipe (32)

The poor little NAS has only 6G of RAM vs. 9.4 TB of files (configured 
as two sharenames, /volume1 (8.5T) and /volume2 (885G)) and doesn't seem 
up to the task of updating that much at once via rsync.


Adding insult to injury, even a failed attempt to back it up causes the 
bpc server to take 45 minutes to copy the directory structure from the 
previous backup before it even attempts to connect, and then 12-14 hours 
doing reference counts after it finishes backing up nothing.  Which 
makes trial-and-error painfully slow, since we can only try one thing, 
at most, each day.


In our last attempt, I tried flipping the order of the RsyncShareNames 
to do /volume2 first, thinking it might successfully back up the smaller 
share successfully before running out of memory trying to process the 
larger one.  It did not run out of memory... but it did sit there for a 
full 24 hours with one CPU (out of four) running pegged at 99% handling 
the rsync process before we finally put it out of its misery.  The bpc 
xferlog recorded that the connection was closed unexpectedly (which is 
fair, since we killed the other end) after 3182 bytes were received, so 
the client clearly hadn't started sending data yet. And now, after that 
attempt, the bpc server still lists the status as "refCnt #2" another 24 
hours after the client-side rsync was killed.


So, aside from adding RAM, is there anything else we can do to try to 
work around this?  Would it be possible to break this one backup down 
into smaller chunks that are still recognized as a single host (so they 
run in sequence and don't get scheduled concurrently), but don't require 
the client to diff large amounts of data in one go, and maybe also speed 
up the reference counting a bit?


An "optimization" (or at least an option) to completely skip the 
reference count updates after a backup fails with zero files received 
(and, therefore, no new/changed references to worry about) might also 
not be a bad idea.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-10 Thread Dave Sherohman
I installed BPC4 from the pre-release debian 11 repo back in March and 
it Just Worked(TM), no problems at all.  I'm currently backing up 89 
hosts with it and haven't had to touch any of the BPC infrastructure 
aside from setting up appropriate configs. It's solid.


I'm not sure why Debian decided to do the pc/ symlink thing, but it 
works.  Debian was doing that with the BPC 3.x packages in debian10 and 
it never caused any problems that I'm aware of.


On 9/9/21 6:18 PM, Juergen Harms wrote:

Hallo

I had planned to migrate from my present distro (my present system 
runs a correct backuppc 4 installation), and had delayed my move until 
the release of Debian-11.


I am disappointed - the backuppc package in Debian-11 does not look solid:
- /etc/backuppc/pc (immediately after installation) is a link to 
/etc/backuppc: an infinite loop
- if, to make a fresh start, I remove the backuppc package I had 
installed (remove backuppc followed by autoremove and rm -rf 
/etc/backuppc) and do a fresh install, the directory /etc/backuppc 
will have incorrect contents (only htpasswd and pc).


That looks like packaging has not been followed by thorough testing.

The recent post ("backuppc 4 fails without any log") might fall under 
the same heading (rpi4 probably means raspberry, i.e. Debian).


Did anybody have success making this Debian package work on his 
installation?


Juergen


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-13 Thread Dave Sherohman

I say "it works" because (for me, at least) it. does. work.

I have per-host override files at /etc/backuppc/hostname.pl (which is 
the same as /etc/backuppc/pc/hostname.pl, thanks to the symlink that you 
seem to loathe so much).  They work.


Hosts which do not have per-host override files in /etc/backuppc use the 
settings in /etc/backuppc/config.pl.  They *also* work.


As I mentioned earlier, I am backing up 89 hosts via backuppc. 71 of 
them are linux hosts using the default settings from config.pl.  The 
other 18 are a mix of Windows, Macs, and various BSD-based appliances, 
and these 18 use per-host override configs. All 89 work, with no special 
handling beyond (for the 18 non-linux machines) creating the per-host 
configs.


On 9/11/21 9:23 AM, Juergen Harms wrote:

On 10/09/2021 09:18, Dave Sherohman wrote:


I'm not sure why Debian decided to do the pc/ symlink thing, but it 
works.  Debian was doing that with the BPC 3.x packages in debian10 
and it never caused any problems that I'm aware of.


I am coming back to the symlink question, I did not take it seriously 
enough. According to backuppc documentation ( 
http://backuppc.sourceforge.net/BackupPC-4.0.0.html, Config and Log 
Directories) it is meant to point to a directory that contains per-pc 
definitions - they serve to override definitions originally made in 
config.pl but which are to be different for specific hosts.


As it is in the present Debian package, this symlink does more or less 
nothing and does not allow defining per-pc configurations at all - a 
very important drawback. You say "it works" - what makes it look as if 
it works is that, in the absence of such per-pc definitions, the 
definition in config.pl is used.


Juergen


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-14 Thread Dave Sherohman

On 9/13/21 6:00 PM, Juergen Harms wrote:
This is not the place to fight for being right, but to understand and 
document help for users who hit this kind of problem.

Agreed.  But what kind of response is expected when you say "what makes 
it *look as if* your installation works is that...", other than "no, it 
*actually does* work"?  (Rhetorical question.)
Trying to understand: how do you define separate and different 
profiles ("per-host override configs") for each of your 18 different 
PCs in one single .pl file (i.e. your file at 
/etc/backuppc/hostname.pl)? or do you mean by hostname.pl a list of 
specific files, where hostname.pl stands for an enumeration of 18 files 
with PC-specific names?

`ls /etc/backuppc/*.pl` lists 19 files, "config.pl" and 18 separate 
"[hostname].pl" files.


The (non-commented-out) contents of the [hostname].pl files range from 
the very brief


$Conf{RsyncClientPath} = '/usr/bin/nice /usr/local/bin/sudo 
/usr/local/bin/rsync';


(for a BSD-based box that just has rsync in a different location than 
where debian puts it) to the slightly-more-complex


$Conf{RsyncShareName} = '/cygdrive/c/';
$Conf{PingPath} = '/bin/echo';
$Conf{RsyncClientPath} = '/usr/bin/nice /usr/bin/rsync';
$Conf{RsyncSshArgs} = ['-e', '$sshPath -p 2022'];
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l backuppc -p 2022 $host 
"/backuppc/pre-cmd.cmd"';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l backuppc -p 2022 $host 
"/backuppc/post-cmd.cmd"';


(used by most of the Windows hosts, although some omit the lines setting 
an alternate ssh port or disabling ping checks)


If the latter is the case, our disagreement is very small: each of 
these files in /etc/backuppc provides config info for one pc, and the 
pc/ directory does no harm, but is not used (I tried both variants - 
with and without specifying pc/ - both work)

I'm mildly surprised by the last parenthetical there.  I had expected 
BPC to look only in /etc/backuppc/pc, with the symlink allowing admins 
to place the configs directly in /etc/backuppc (where I believe most 
debian-familiar admins would expect them to go).  I hadn't expected the 
symlink to be entirely superfluous.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Debian 11 + apache2 initial problems

2021-09-16 Thread Dave Sherohman

On 9/16/21 4:25 PM, Kimmo Hedman via BackupPC-users wrote:

Thank you, those warnings now did go away.
But this still exists (attached image).


Now that those packages are installed, run `sudo systemctl start 
backuppc` to start the BPC service.


If it still says it's not running, `systemctl status backuppc` should at 
least give some hints as to the problem.


You may also want to run `sudo systemctl enable backuppc` so that it 
will be automatically started when the system boots; I'm pretty sure 
this is already handled automatically when you install the package, but 
I'm not 100% certain, and running "enable" again won't hurt anything if 
it already is enabled.


And maybe whoever is responsible for the backuppc debian11 packages 
should add dependencies on smb and rrdtool.


I would actually say it's the other way around - if you're not doing 
backups via samba, then backuppc should be able to start without needing 
valid samba paths (because they're not going to be used anyhow).  I 
believe that rrdtool is only used to create backup disk pool usage 
graphs, so that should also be optional.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] idle versus done state and host summary

2021-07-13 Thread Dave Sherohman
Timing, basically.  The row is only green for a certain amount of time 
(not positive, but probably 1.2 hours - as long as the "Last Backup 
(days)" number rounds to 0.0) after a backup is completed, then it 
reverts to the white "idle" state.  Think of green/"done" as "a backup 
just finished" and white/"idle" as "nothing to do at the moment".


On 7/13/21 12:57 AM, Kenneth Porter wrote:
Using v4.4.0. I was comparing the Host Summary page for two servers 
with just a few clients and one with a lot of clients. The one with 
lots of clients has nice color bands in the table indicating success 
with light green. (There are also yellow and red rows for problems and 
grey for disabled clients. New clients awaiting a first backup are 
white.) The small servers have only white backgrounds for all clients. 
Looking closer, I realized the "Last attempt" column in the small 
servers has only "idle" while the big server has "done" in that 
column. Why don't the small servers say "done"?




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] rsync error

2021-09-23 Thread Dave Sherohman
According to a quick web search, it looks like rsync error code 5 
indicates authorization problems.  Here are a couple links which may 
provide solutions:


https://unix.stackexchange.com/questions/71719/rsync-error-starting-client-server-protocol
https://bovitron.com/blogostu/2019/04/14/rsync-error-starting-client-server-protocol-code-5/

Beyond that, I'm not really sure, because I've only used the rsync 
XferMethod, not rsyncd.


Also, FYI, I didn't receive an attachment with your mail; I suspect the 
list manager software may be set up to remove them.


On 9/22/21 11:14 PM, Gary L. Roach wrote:


Hi All,

After setting up SSH, Rsyncd and Backuppc, I tried to run a backup on 
two different computers and got the following log file error:


2021-09-22 13:22:08 Created directory /var/lib/backuppc/pc/supercrunch/refCnt
2021-09-22 13:22:08 full backup started for directory /etc
2021-09-22 13:22:09 Got fatal error during xfer (rsync error: error starting 
client-server protocol (code 5) at main.c(1683) [Receiver=3.1.3.0])

My config.pl file for the host is attached.

Any help will be sincerely appreciated.

Gary R.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] "mutt" and "/etc/aliases"

2021-09-30 Thread Dave Sherohman
This part, at least, I can explain:  mutt (and mail) knows nothing about 
the aliases in /etc/aliases.  It only knows its own aliases (defined in 
.muttrc).


Any mail sent to an address without a hostname is for the local system 
by default, so mail to just "root" gets @localhost appended.


The aliases in /etc/aliases are handled by the MTA (postfix, in your 
case), so it attempts to deliver the mail to root@localhost, but checks 
its own list of aliases and sees that mail for root@localhost should be 
delivered to to@recipient.domain and forwards it there.  It doesn't 
touch the To: header in the process, because, if it did, then you 
wouldn't have any way of knowing what address your mail was sent to, 
only where it was finally delivered to after all forwarding, etc.


If you want the address in the To: header to match the final delivery 
address, you'll need to create a mutt alias for "root" which maps to 
that address, either manually or (as already suggested) with a script 
that translates aliases from /etc/aliases to .muttrc.
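
(A quick-and-dirty converter, assuming plain "name: address" lines in 
/etc/aliases with no continuations or multi-address entries, might be:

awk -F':[ \t]+' '!/^#/ && NF == 2 { print "alias", $1, $2 }' /etc/aliases >> ~/.muttrc

but eyeball the output before trusting it; real aliases files are often 
messier than that.)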


On 9/29/21 7:43 PM, orsomannaro wrote:
Both mutt and mail (from mailutils) correctly translate the alias from 
/etc/aliases, but the incoming email's "To:" header turns out to be the 
local alias (root@localhost) instead of the final delivery address.


On Wed, 29 Sep 2021 at 17:49, orsomann...@gmail.com wrote:


> It seems to me that this is a mail client question, not a BackupPC
> question.

I know and ... honestly: I posted here in the hope that someone could
suggest an alternative way to send notifications from BackupPC,
because
I went insane trying to send attachments with the sendmail provided by
Postfix ...


> Perhaps you will need to create a script which takes your
/etc/aliases
> and outputs text suitable for mutt

Thank you so much for your answer!



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] V4 Infinite Incrementals

2021-09-24 Thread Dave Sherohman

On 9/23/21 7:09 PM, Stan Larson wrote:

A few of the servers that are being backed up take many hours to run a 
full backup, which can modestly impact the end users of those 
servers.  Currently my FullPeriod value is set at the default 6.97 
days.  Since V4 uses reverse-delta, I should be able to stretch out 
the FullPeriod to a much longer length of time. What is a reasonable 
FullPeriod value?


You didn't state what your XferMethod is, but, based on what I've seen 
with BPC4, this kind of optimization is not necessary when using 
ssh+rsync backups.  I have hosts which took 5-6 days to complete their 
initial full backups, but subsequent full backups finish in a matter of 
minutes.


My assumption is that this is because, even though the fulls examine and 
compare checksums for every file on the host, rsync still only transfers 
changed data, unlike other transport methods which need to send the 
entire content over the network regardless of whether it is changed or not.


BTW, FillPeriod is set to 0.  My understanding is that the only 
difference between using FullPeriod and FillPeriod is 
cosmetic/verbiage.  Is that correct?


Not really.  A full backup and a filled backup aren't the same thing.  
Every full backup is filled, but the reverse is not the case.


A full backup makes a complete copy of the target host to the backup 
server (modulo rsync optimizations for unchanged data).


A filled backup has entries in the backup server's index for every file 
on the target host.  This naturally happens with a full, because it gets 
every file as part of the backup, but an incremental can also be filled 
by adding index entries for any files which aren't included in the 
incremental, to produce an index that "looks like" a full backup, even 
though it isn't.  This is basically a way to address the tradeoff you 
mentioned between backup times and restore times, since restoring from 
an incremental only needs to look back to the most recent filled backup 
instead of the most recent full.
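
So if the goal is "fulls rarely, but restores stay cheap", my 
understanding is that the v4 way to express that is something like:

$Conf{FullPeriod} = 29.97;   # full backups roughly monthly
$Conf{FillCycle}  = 7;       # but fill every 7th backup regardless

with FillCycle left at 0 (the default) meaning that filled backups 
simply track the fulls.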




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Moving to 4.4.0, sudo on FreeBSD

2021-07-26 Thread Dave Sherohman
Per-host config files are working fine for me.  Maybe your 
RsyncClientCmd has the rsync path hardcoded in it instead of referencing 
$rsyncPath?


My environment is mostly-linux with a few BSD-based hosts and, to just 
pick a .pl that contains "local" at random-ish, I have:


$Conf{RsyncClientPath} = 'sudo /usr/local/bin/rsync';
# $Conf{RsyncClientPath} = 'sudo /tmp/rsync';
$Conf{RsyncClientCmd} = '$sshPath $host nice $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = '$sshPath $host nice $rsyncPath $argList+';


On 7/24/21 6:27 AM, Brad Alexander wrote:


I ran across what appears to be the reason for the issue that I am 
having. I found the following issue in my console:


/var/log/console.log:Jul 23 23:52:11 danube kernel: Jul 23 23:52:11 
danube sudo[2866]: backuppc : command not allowed ; 
PWD=/usr/home/backuppc ; USER=root ; 
COMMAND=/usr/bin/rsync --server --sender -slHogDtprcxe.iLsfxC

I don't quite understand it. It appears that

$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';

in my config.pl  is overriding

$Conf{RsyncClientPath} = 'sudo /usr/local/bin/rsync';

in my freebsd hosts .pl files. Are per-host config files no longer 
supported? Is there another way to specify the path for the rsync 
command on a per-host or per-OS basis?


Thanks,
--b


On Fri, Jul 23, 2021 at 4:28 PM Brad Alexander wrote:


I have been running BackupPC 3.x for many years on a Debian Linux
box. I just expanded my TrueNAS box with larger drives, grew my
pool, and am in the process of converting from BackupPC 3.3.1 on
the dedicated server (that has gotten a bit snug on the drive
space) to a 4.4.0 install in a FreeBSD jail on my TrueNAS box,
using the guide at

https://www.truenas.com/community/threads/quickstart-guide-for-backuppc-4-in-a-jail-on-freenas.74080/

,
and another page for the changes needed for rsync. I am backing up
both FreeBSD and Linux boxes.

So at this point, the linux boxes are backing up on the 4.4
installation, but the FreeBSD boxes are not. Both are working on
the 3.3.1 machine. I transferred all of my .pl files
from the old backup box to the 4.4.0 jail, and they are identical
to the old configs. So does anyone have any ideas about what could
be happening? I have a log of an iteration of the backup test at

https://pastebin.com/KLKxGYT1 
It is stopping to ask for a password, which it shouldn't be doing,
unless it is looking for rsync_bpc on the client machines.

Thoughts?

Thanks,
--b



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Is all in order?

2021-11-02 Thread Dave Sherohman via BackupPC-users
What seems suspicious to you about it?  It's not currently in the 
process of backing up files or doing anything with the host right this 
minute; it is idle.


On 11/2/21 7:47 AM, Norman Goldstein wrote:

My question is in the last line of this email.

I was having errors backing up, so I decided to start with a fresh, 
empty /var/lib/BackupPC , and started a manual full backup.  
Afterwards, I was able to access various backup files, as a sanity 
check.  Also ...


The server log file looks good:

2021-11-01 22:24:50 User backuppc requested backup of melodic (melodic)
2021-11-01 22:24:51 Started full backup on melodic (pid=14098, share=/)
2021-11-01 22:24:52 Started full backup on melodic (pid=14098, share=/home)
2021-11-01 22:55:52 Finished full backup on melodic

The host log file looks good:

2021-11-01 22:24:51 full backup started for directory /
2021-11-01 22:24:52 full backup started for directory /home
2021-11-01 22:55:35 full backup 0 complete, 20167 files, 5521522076 bytes, 0 
xferErrs (0 bad files, 0 bad shares, 0 other)

The server Status page looks good (no running jobs).

The server Host Summary page has an idle entry:

Host      User      Comment   #Full   Full Age (days)   Full Size (GiB)   Speed (MiB/s)   #Incr   Incr Age (days)   Last Backup (days)   State   #Xfer errs   Last attempt
melodic   backuppc            1       0.1               5.14              2.86            0                         0.1                  idle    0            idle

This idle entry looks suspicious to me. Is this something I can ignore?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Share subdirectories not shown in web gui

2021-12-02 Thread Dave Sherohman via BackupPC-users
What file transfer method are you using with the Windows hosts? I've 
seen a couple mentions on the mailing list that BPC4 added 
--one-file-system to the default set of flags for rsync file transfers, 
which would prevent rsync from crossing over onto a remotely-mounted 
filesystem, such as an SMB share.  It may have also added an equivalent 
setting for SMB-based transfers, although I haven't seen anyone say 
anything one way or the other about that.


On 12/2/21 9:06 AM, Kārlis Irmejs via BackupPC-users wrote:


Hi, I have a problem and cannot solve it by myself.

I'm running BackupPC 4.4.0 on Ubuntu 20.04. Backups are made for 2 
Windows hosts with a couple of SMB shares. The configuration is mostly 
default, at least regarding the server and CGI config.


Here is my problem. In the web GUI, under the host's 'Browse backups', 
there are no folders, only the files in the share root. Backups run 
without errors, the logs show files being transferred, the disk is 
filling, and there are files in topdir/pc, pool and cpool. Under 
topdir/pc/hostname all the subfolders are present as they should be.


What I have done: I deleted everything related to BackupPC, including 
the topdir contents (I reformatted the partition mounted there), 
reinstalled BackupPC using the script from the Wiki page 
'Installing-BackupPC-4-from-tarball-or-git-on-Ubuntu', and recreated the 
configuration from scratch. No luck.


I have another BackupPC in another location, the same version, same 
config, no problems at all.


Localhost tar backups are shown properly. I have no other Linux 
machines at that location to test with.


Perhaps someone has ideas on what to do? Or is more info needed?
--

Kārlis Irmejs




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Real Time View of What Is Being Copied

2022-03-07 Thread Dave Sherohman via BackupPC-users

On 3/5/22 14:36, G.W. Haywood via BackupPC-users wrote:

On Sat, 5 Mar 2022, Les Mikesell wrote:

Unix/Linux has something called 'sparse' files used by some types of
databases where you can seek far into a file and write without
using/allocating any space up to that point.  The file as stored may
not be large but most tools to copy it will act as though the empty
parts were filled with nulls.


I can't remember the last time I saw a sparse file used *anywhere* in
'real life', although they are occasionally found in malicious mail.


One common legitimate use case for sparse files is virtual disk images.  
I run a fair number of virtual machines at work (I think we're currently 
at around 90 of them) and sparse files allow me to have a VM with a "1 
TB" virtual disk that only takes up a few dozen GB of real disk space 
and grows as needed to hold additional data.


As Les said, copying these files (with, e.g., `cp`) takes time 
proportional to their virtual size, not their physical size, even though 
the smaller physical size is preserved by the operation.  I don't know 
whether this applies to rsync, though; rsync does at least have a 
--sparse (-S) option for recreating the holes on the destination.  Might 
need to try that next time I need to duplicate one, just to see whether 
rsync handles sparse files smarter/faster than other tools.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How to disable to backup a client PC

2022-02-22 Thread Dave Sherohman via BackupPC-users
In the directory where config.pl lives, there should be a pc/ 
subdirectory.  The per-host config files go in that subdirectory. (Note 
that, on debian, pc/ is a symlink back to the same directory, so the 
global config.pl and the per-host configs are all in the same place.)



The per-host config files are named [host].pl and don't exist until you 
create them, which is why you couldn't find one.



You should also be able to manage them through the web interface by 
using the "Edit Config" sidebar option on the individual host's detail 
page, but I've only ever used that to view per-host customizations, not 
to change anything, so I don't know if there are any gotchas to watch 
out for.
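
As a minimal sketch, a per-host override file would look like this (the 
hostname is hypothetical; the two values are the ones from the docs you 
quoted):

# pc/somehost.pl -- lives beside (or symlinked beside) the global config.pl
$Conf{BackupsDisable} = 1;   # 1 = no scheduled backups, manual ones still work
# $Conf{BackupsDisable} = 2; # 2 = ignore manual backup requests as well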



On 2/21/22 21:50, Chris Wu wrote:

Hi,

    As per the doc of BackupPC, it says below

    "To disable backups for a client$Conf{BackupsDisable} 
can 
be set to two different values in that client's per-PC config.pl file:


1.

Don't do any regular backups on this machine. Manually requested
backups (via the CGI interface) will still occur.

2.

Don't do any backups on this machine. Manually requested backups
(via the CGI interface) will be ignored.

This will still allow the client's old backups to be browsable and 
restorable."


   Does anyone know where the "per-PC config.pl file" is? I can only 
find the file /etc/BackupPC/config.pl, but that is the global config 
file rather than a per-PC config file specific to the PC that is to be 
disabled for backups.


      Thanks.


Kind regards,

Chris



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backing up trash

2022-04-04 Thread Dave Sherohman via BackupPC-users
The first thing I'd try is adding it as \@Recycle.  Much (all?) of the 
BPC code is written in Perl, which will (in some contexts) interpret 
"@Recycle" to mean "an array named Recycle" rather than the literal text 
"@Recycle".  Adding the backslash prevents the @ from being interpreted 
as an array signifier.
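
A minimal sketch of what I mean, assuming the exclude goes into 
$Conf{BackupFilesExclude} (the share key and placement here are 
illustrative):

$Conf{BackupFilesExclude} = {
    '*' => [ "\@Recycle" ],   # backslash keeps Perl from treating @Recycle as an array
};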


On 4/2/22 09:43, gen...@wp.pl wrote:


Hi,


there is a problem backing up volumes from QNAP. I get a tar archive 
error while backing up the "@Recycle" folder. If it is empty, there is no 
problem. I cannot add "@Recycle" to the exclusions. Any idea how to add 
it as an exclusion?



Above does not work.


Thanks and regards.

Andrzej





___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Some directories stay empty (The directory ... is empty).

2022-03-22 Thread Dave Sherohman via BackupPC-users
This is basically what I've done, with the addition (which may have been 
an unstated assumption) that the backuppclogin user on each client 
machine has a disabled password, so that it can only be accessed via ssh 
public key login, or by "sudo su [user]" on the local machine.


This is, IMO, "secure enough" for all practical purposes. Although the 
backuppclogin user can, as you said, be used to read any information on 
the machine via a properly-crafted "sudo rsync" command line, the only 
ways to run a command as that user are if you either have root on the 
client machine (in which case you can already read all its files) or 
have cracked the master backuppc account on the BPC server and gained 
access to its private ssh key (in which case you can already read the 
client's files from the backup pool).  In neither case does the 
"read everything" rsync command give you anything you don't already have.


On 3/22/22 03:37, backu...@kosowsky.org wrote:

There are some things you can do to *partially* harden the situation.
While this might seem particularly dangerous, if you are going to back up
a machine fully then you will need at least root-like read access to all the
files on that machine.

Things to consider include:
1. Use sudo for the backuppc login user (say: 'backuppclogin'), restricted
to that specific user and the exact /usr/bin/rsync command strings that are
sent when backing up:

backuppclogin ALL=NOPASSWD: /usr/bin/rsync --server --sender -slHogDtpAXrxe.iLsf, /usr/bin/rsync --server --sender -slHogDtpAXrcxe.iLsf

(note: this is not perfect, as you are still able to read *everything*
root can, and there might be ways to overload the above strings to get
even more access)

2. Use ssh-agent so that you can use an ssh key with a passphrase, though
you will need to add the key to the backuppc user's keychain

3. I'm sure there are other things you can do with SELinux, ACLs etc
to be more restrictive of privileges...

Would be good to hear what others do here...






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/





Re: [BackupPC-users] rsync/File::RsyncP conflict

2022-04-12 Thread Dave Sherohman via BackupPC-users

On 4/11/22 18:22, G.W. Haywood via BackupPC-users wrote:

Looking at

https://metacpan.org/dist/File-RsyncP/changes

it seems that there is only one later version (0.76) so your options
seem to be somewhat limited. :)


Is it even still being used?  My BPC server is running 4.4.0, installed 
from the debian 11/bullseye package, and `locate RsyncP` reports no 
matching files exist on the system.


If it's been removed from current BPC versions, that would explain why 
there's only one newer version of the module.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC failed after upgrading client to Debian 11

2022-07-11 Thread Dave Sherohman via BackupPC-users
Since you mentioned different versions of rsync, I assume you already 
checked this, but, just in case:  Double-check that rsync is still 
installed on the deb11 system.  I upgraded a ton of systems from deb9 to 
deb11 earlier in the summer, and apt decided to uninstall rsync during 
the upgrades on most of them.


On 7/10/22 12:23, Taste-Of-IT wrote:

Hi all,

I have the latest BackupPC running on Debian 10. I upgraded one system to 
Debian 11. Backups ran well and without problems before; after upgrading 
the client, they fail with these errors:

Got fatal error during xfer (rsync error: unexplained error (code 255) at io.c(820) [generator=3.1.3beta1])
Backup aborted (rsync error: unexplained error (code 255) at io.c(820) [generator=3.1.3beta1])

I searched and found different explanations. One is a speed difference 
between BPC and the client, but that's not the case here. Another is the 
differing rsync versions, which I do have, but I didn't find a solution 
for that.

Has anyone a solution for that?

thx

Taste



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/





Re: [BackupPC-users] Determine heavy-weight clients

2022-09-22 Thread Dave Sherohman via BackupPC-users
While there may be a better way, my first thought would be to check the 
backup summary page for each host and look at backup durations (longer 
time should correlate with more data transferred) and/or the "new files" 
columns in the "File Size/Count Reuse Summary" section.
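
Failing that, the sizeNew numbers in $TopDir/pc/*/backups can be totalled 
per host with a few lines of Perl. A rough sketch (the tab-separated 
layout and the position of the sizeNew field are assumptions from my 
reading of the format; verify them against your version before trusting 
the output):

#!/usr/bin/perl
# Sum sizeNew across all recorded backups for each host.
use strict;
use warnings;

my $topdir = '/var/lib/backuppc';          # adjust to your TopDir
my %sizenew;
for my $file (glob "$topdir/pc/*/backups") {
    my ($host) = $file =~ m{/pc/([^/]+)/backups$};
    open my $fh, '<', $file or next;
    while (my $line = <$fh>) {
        my @f = split /\t/, $line;         # assumption: fields are tab-separated
        $sizenew{$host} += $f[9] // 0;     # assumption: field 9 is sizeNew
    }
    close $fh;
}
printf "%10.1f GiB  %s\n", $sizenew{$_} / 2**30, $_
    for sort { $sizenew{$b} <=> $sizenew{$a} } keys %sizenew;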


On 9/22/22 11:03, martin f krafft via BackupPC-users wrote:


Hello,

has anyone written a tool to identify the hosts that contribute most
to recent pool size increase for v4? I see the data in pc/*/backups 
and I could extract the size of new files for each backup, then
correlate with the time of the backup and then… With v3, it was 
possible to use command-line tools on the filesystem to identify the 
big directories, but v4 has changed the filesystem storage format and 
no longer allows this.


Background: we've been running BackupPC in a non-profit setup for
many years. Over the last couple of months, the backup pool size has 
sharply increased, and we're running out of space now, while there's 
no money to stock up on disk space… :/


We can't figure out why. So we'd love to figure out which host this
is on, and which directories are adding to the pool increase.

Any clues how to easily find this out, so we can assess the
situation, use BackupPC_deleteBackup as appropriate and possibly add
excludes?

--
@martinkrafft | https://matrix.to/#/#madduck:madduck.net
"an intellectual is someone who has found something more interesting
than sex." -- edgar wallace
spamtraps: madduck.bo...@madduck.net



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Import pool or clients

2022-08-04 Thread Dave Sherohman via BackupPC-users

On 8/4/22 07:44, backu...@kosowsky.org wrote:


On Wed, Aug 3, 2022 at 2:31 PM backuppc--- via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

I pulled a V3 pool off an old hard disk that I had wrongly assumed was
broken. Now I would like to import as much data as possible into my current
V4 installation.

- What are you trying to accomplish?
- Do you only have the pool files or do you also have the pc backup
   directories?

Really hard to answer the "how" if you don't explain the "what" and
"why" that you are seeking to accomplish...


I have to agree here.  You didn't mention the age of the backups on this 
"old" disk, but importing it into your current pool strikes me as 
something which would carry a very high risk while providing very 
little reward.


If the backups on the "old" disk are from three days ago and taking new 
backups of that data would take over a week to complete, then importing 
them (or at least making them temporarily available to restore in some 
fashion) seems reasonable; if they're from three years ago, then there's 
considerably less potential benefit.




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-18 Thread Dave Sherohman via BackupPC-users
Thanks, that did get me additional information.  With those settings, my 
XferLOG now shows, for example:


log: recv >f.s... rw-r--r--    6, DEFAULT   1081344 var/cache/man/index.db


However, when I get a tar archive of /var/cache/man, `tar tvf` shows:

-rw-r--r-- root/root   1081344 2022-11-08 11:23 ./man/index.db

So it appears that rsync is sending the file with uid 6 (user 'man', as 
it should be), but BPC is storing it as owned by root, if I correctly 
understand what was said on ticket #171 about using a tar restore to 
check the status of things in the backup storage.


On 11/16/22 19:05, Kris Lou via BackupPC-users wrote:




XferLOG.[backup number] does not contain any individual file
names/paths, so I'm not able to check the permissions there.  Latest
test XferLOG attached for reference.


I have "--log-format=log: %o %i %B %8U,%8G %9l %f%L" and "--stats" as 
RsyncArgs, by default.  Perhaps those will give you more verbosity?




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-18 Thread Dave Sherohman via BackupPC-users
Am I understanding correctly that backuppcfs would just provide a more 
convenient way to check the ownership of backed-up files instead of 
getting a tarball to see their ownership?  Or would it also provide 
additional functionality for helping to track down why they have 
incorrect ownership?


On 11/16/22 20:56, backu...@kosowsky.org wrote:

'backuppcfs' is a (read-only) FUSE filesystem that allows you to see
the contents/ownership/perms/dates/Xattrs etc. of any file in your
backup.

It is great for troubleshooting as well as for partial restores...

Dave Sherohman via BackupPC-users wrote at about 09:54:27 +0100 on Wednesday, 
November 16, 2022:
  > I've just encountered the same problem from issue #171 on github, "on
  > restore all files owner root"
  > https://github.com/backuppc/backuppc/issues/171
  >
  > My RsyncRestoreArgs are the same as reported there, with the exception
  > that I don't have --ignore-times.  I have done the tar restore check
  > there and confirmed that the files in the tarball show root/root ownership.
  >
  > The troubleshooting in the ticket ended with the suggestion "Can you
  > look in the most recent full backup XferLOG file and see what
  > permissions are transferred for that file? You might need to increase
  > the XferLogLevel to 2 or 3 and then do a new full backup to see
  > permissions on unchanged files." which the user in that case did not
  > respond to.  I have attempted that step, but, even with XferLogLevel set
  > to 3 (as a per-host override) and doing a full backup of my test host,
  > XferLOG.[backup number] does not contain any individual file
  > names/paths, so I'm not able to check the permissions there.  Latest
  > test XferLOG attached for reference.
  >
  > (I'm actually a little skeptical at this point as to whether
  > XferLogLevel even does anything these days, as there are no obvious
  > differences in the XferLOG files at log level 1 vs. log level 3.)
  >
  > What's my next step, and is there any further documentation on this
  > problem aside from the ticket for #171?
  >
  > My overall setup is a Debian 11 server running debianized BPC 4.4.0-3
  > (the latest version from the stable repo) with a mixed pool of clients -
  > mostly Debian, but also some Windows, TrueNAS, OS X, and Centos machines.
  > [...]

Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-18 Thread Dave Sherohman via BackupPC-users
OK.  I got the script from the link that Mike Hughes found, got it 
running, and it concurs with my earlier check via tar:


/mnt/koha-dsa/latest/var/cache/man# ls -l index.db
-rw-r--r-- 1 root root 1081344 Nov  8 11:23 index.db

while the XferLOG for the latest backup (a manual full) showed

log: recv >f.s... rw-r--r--    6, DEFAULT   1081344 var/cache/man/index.db


when using "--log-format=log: %o %i %B %8U,%8G %9l %f%L" as suggested by 
Kris Lou.


On 11/18/22 14:17, backu...@kosowsky.org wrote:

The former which might help with the latter...
Dave Sherohman via BackupPC-users wrote at about 14:02:09 +0100 on Friday, 
November 18, 2022:
  > Am I understanding correctly that backuppcfs would just provide a more
  > convenient way to check the ownership of backed-up files instead of
  > getting a tarball to see their ownership?  Or would it also provide
  > additional functionality for helping to track down why they have
  > incorrect ownership?
  >
  > On 11/16/22 20:56, backu...@kosowsky.org wrote:
  > > 'backuppcfs' is a (read-only) FUSE fileystem that allows you to see
  > > the contents/ownership/perms/dates/Xattrs etc. of any file in your
  > > backup.
  > >
  > > It is great for troubleshooting as well as for partial restores...
  > >
  > > Dave Sherohman via BackupPC-users wrote at about 09:54:27 +0100 on 
Wednesday, November 16, 2022:
  > >   > I've just encountered the same problem from issue #171 on github, "on
  > >   > restore all files owner root"
  > >   > https://github.com/backuppc/backuppc/issues/171
  > >   >
  > >   > My RsyncRestoreArgs are the same as reported there, with the exception
  > >   > that I don't have --ignore-times.  I have done the tar restore check
  > >   > there and confirmed that the files in the tarball show root/root 
ownership.
  > >   >
  > >   > The troubleshooting in the ticket ended with the suggestion "Can you
  > >   > look in the most recent full backup XferLOG file and see what
  > >   > permissions are transferred for that file? You might need to increase
  > >   > the XferLogLevel to 2 or 3 and then do a new full backup to see
  > >   > permissions on unchanged files." which the user in that case did not
  > >   > respond to.  I have attempted that step, but, even with XferLogLevel 
set
  > >   > to 3 (as a per-host override) and doing a full backup of my test host,
  > >   > XferLOG.[backup number] does not contain any individual file
  > >   > names/paths, so I'm not able to check the permissions there.  Latest
  > >   > test XferLOG attached for reference.
  > >   >
  > >   > (I'm actually a little skeptical at this point as to whether
  > >   > XferLogLevel even does anything these days, as there are no obvious
  > >   > differences in the XferLOG files at log level 1 vs. log level 3.)
  > >   >
  > >   > What's my next step, and is there any further documentation on this
  > >   > problem aside from the ticket for #171?
  > >   >
  > >   > My overall setup is a Debian 11 server running debianized BPC 4.4.0-3
  > >   > (the latest version from the stable repo) with a mixed pool of 
clients -
  > >   > mostly Debian, but also some Windows, TrueNAS, OS X, and Centos 
machines.
  > >   > [...]

Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-21 Thread Dave Sherohman via BackupPC-users
Ah, thanks - adding --super appears to have resolved the problem.  Ran 
another full of the test system, and backuppcfs now shows correct file 
ownerships.


I'm still missing --protect-args, --delete-excluded, and --partial from 
the defaults, so I'll probably also add those after checking the man 
page to see what they do.  (...and also missing --one-file-system, but 
that's deliberate.)



For the record, my original settings were:

$Conf{RsyncArgs} = [
  '--numeric-ids',
  '--perms',
  '--owner',
  '--group',
  '-D',
  '--links',
  '--hard-links',
  '--times',
  '--block-size=2048',
  '--recursive',
  '--delete'
];

It's not something I would have been likely to modify manually (aside 
from removing --one-file-system), but I don't recall whether the base 
settings were migrated from my original test install (which was BPC3) or 
if Debian chose a different set of default settings than upstream.


On 11/18/22 19:31, Kris Lou via BackupPC-users wrote:
Look again at your RsyncArgs; they don't match the defaults [1], 
though obviously your system may justify the difference:


-o, --owner
    This option causes rsync to set the owner of the destination file to be
    the same as the source file, but only if the receiving rsync is being run
    as the super-user (see also the --super and --fake-super options). Without
    this option, the owner of new and/or transferred files are set to the
    invoking user on the receiving side...

-g, --group
    This option causes rsync to set the group of the destination file to be
    the same as the source file. If the receiving program is not running as
    the super-user (or if --no-super was specified), only groups that the
    invoking user on the receiving side is a member of will be preserved.
    Without this option, the group is set to the default group of the
    invoking user on the receiving side...

And correspondingly:

--super
    This tells the receiving side to attempt super-user activities even if
    the receiving rsync wasn't run by the super-user. These activities
    include: preserving users via the --owner option, preserving all groups
    (not just the current user's groups) via the --groups option, and copying
    devices via the --devices option. This is useful for systems that allow
    such activities without being the super-user, and also for ensuring that
    you will get errors if the receiving side isn't being run as the
    super-user.
    To turn off super-user activities, the super-user can use --no-super.


Defaults are (--one-file-system is often overlooked):

$Conf{RsyncArgs} = [
    '--super',
    '--recursive',
    '--protect-args',
    '--numeric-ids',
    '--perms',
    '--owner',
    '--group',
    '-D',
    '--times',
    '--links',
    '--hard-links',
    '--delete',
    '--delete-excluded',
    '--one-file-system',
    '--partial',
    '--log-format=log: %o %i %B %8U,%8G %9l %f%L',
    '--stats',
    #
    # Add additional arguments here, for example --acls or --xattrs
    # if all the clients support them.
    #
    #'--acls',
    #'--xattrs',
];

[1] 
https://github.com/backuppc/backuppc/blob/174e707c0f64d9fe6eb699612be35fa214cafc3f/conf/config.pl#L1276-L1300




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/





[BackupPC-users] Lost linux file ownership on restore

2022-11-16 Thread Dave Sherohman via BackupPC-users
I've just encountered the same problem from issue #171 on github, "on 
restore all files owner root" 
https://github.com/backuppc/backuppc/issues/171


My RsyncRestoreArgs are the same as reported there, with the exception 
that I don't have --ignore-times.  I have done the tar restore check 
there and confirmed that the files in the tarball show root/root ownership.


The troubleshooting in the ticket ended with the suggestion "Can you 
look in the most recent full backup XferLOG file and see what 
permissions are transferred for that file? You might need to increase 
the XferLogLevel to 2 or 3 and then do a new full backup to see 
permissions on unchanged files." which the user in that case did not 
respond to.  I have attempted that step, but, even with XferLogLevel set 
to 3 (as a per-host override) and doing a full backup of my test host, 
XferLOG.[backup number] does not contain any individual file 
names/paths, so I'm not able to check the permissions there.  Latest 
test XferLOG attached for reference.


(I'm actually a little skeptical at this point as to whether 
XferLogLevel even does anything these days, as there are no obvious 
differences in the XferLOG files at log level 1 vs. log level 3.)


What's my next step, and is there any further documentation on this 
problem aside from the ticket for #171?


My overall setup is a Debian 11 server running debianized BPC 4.4.0-3 
(the latest version from the stable repo) with a mixed pool of clients - 
mostly Debian, but also some Windows, TrueNAS, OS X, and Centos machines.
XferLOG file /var/lib/backuppc/pc/koha-dsa/XferLOG.634 created 2022-11-16 09:47:03
Backup prep: type = full, case = 3, inPlace = 1, doDuplicate = 1, newBkupNum = 634, newBkupIdx = 28, lastBkupNum = , lastBkupIdx =  (FillCycle = 7, noFillCnt = 0)
Executing /usr/share/backuppc/bin/BackupPC_backupDuplicate -m -h koha-dsa
Xfer PIDs are now 3702528
Copying backup #633 to #634
Xfer PIDs are now 3702528,3702594
BackupPC_refCountUpdate: computing totals for host koha-dsa
BackupPC_refCountUpdate: host koha-dsa got 0 errors (took 3 secs)
BackupPC_refCountUpdate total errors: 0
Xfer PIDs are now 3702528
BackupPC_backupDuplicate: got 0 errors and 0 file open errors
Finished BackupPC_backupDuplicate (running time: 6 sec)
Running: /usr/libexec/backuppc-rsync/rsync_bpc --bpc-top-dir /var/lib/backuppc --bpc-host-name koha-dsa --bpc-share-name / --bpc-bkup-num 634 --bpc-bkup-comp 0 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 569756 --bpc-log-level 3 --bpc-attrib-new --rsync-path=/usr/bin/nice\ /usr/bin/sudo\ /usr/bin/rsync --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --delete --timeout=72000 --exclude=/var/lib/elasticsearch --exclude=/var/lib/mysql --exclude=/proc --exclude=/sys --exclude=/var/lib/postgresql --exclude=/usr/local/ezproxy/cookies --exclude=/run --exclude=/dev --exclude=/tmp --exclude=/var/local/palantir --exclude=/var/local/brick\* --exclude=/var/local/arbiter\* --exclude=/var/local/bpc --exclude=/var/lib/backuppc --exclude=/var/lib/libvirt/images koha-dsa:/ /
full backup started for directory /
Xfer PIDs are now 3707471
This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
Xfer PIDs are now 3707471,3707473
xferPids 3707471,3707473
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal, 7 filesNew, 17755669 sizeNew, 17755669 sizeNewComp, 569757 inode
Parsing done: nFilesTotal = 0
DoneGen: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 155253 filesTotal, 35349671498 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 569756 inode
Parsing done: nFilesTotal = 155253
Xfer PIDs are now 
full backup 634 complete, 155253 files, 35349671498 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
BackupExpire: cntFull = 17, cntIncr = 12, firstFull = 0, firstIncr = 12, oldestIncr = 13.5728125, oldestFull = 81.6116550925926
Running BackupPC_refCountUpdate -h koha-dsa on koha-dsa
Xfer PIDs are now 3711195
BackupPC_refCountUpdate: processing host koha-dsa #634 (fsck = 0)
BackupPC_refCountUpdate: processing host koha-dsa #634 deltaFile /var/lib/backuppc/pc/koha-dsa/634/refCnt/poolCntDelta_0_1_0_3707471 with 22 entries
BackupPC_refCountUpdate: computing totals for host koha-dsa
BackupPC_refCountUpdate: host koha-dsa got 0 errors (took 2 secs)
BackupPC_refCountUpdate total errors: 0
Xfer PIDs are now 
Finished BackupPC_refCountUpdate (running time: 2 sec)
Xfer PIDs are now 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Incorrect number of hosts "skipped"?

2023-07-03 Thread Dave Sherohman via BackupPC-users

111 = 37 * 3

Presumably it made three attempts to back up each host, and all three 
attempts per host (111 attempts in total) were skipped due to insufficient 
space to store the backups.  For simplicity, the code apparently just 
counted the attempts rather than deduplicating multiple attempts for the 
same host.


On 7/3/23 10:34, Jamie Burchell wrote:


I received an email this morning to tell me:

“Yesterday 111 hosts were skipped because the file system containing 
/var/lib/BackupPC/ was too full.”


Aside from the obvious issue, it says 111 hosts when I only have 37?

“There are 37 hosts that have been backed up, for a total of:”

Is this a bug?

Jamie



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Cant Backup Mount Point

2023-07-24 Thread Dave Sherohman via BackupPC-users
By default, the RsyncArgs for BPC4 includes --one-file-system, which 
tells it not to descend into mounted filesystems.  To include other 
filesystems in your backups, you can either add a second "share" to the 
machine for /var/files so that it's explicitly backed up, or you can 
remove --one-file-system from RsyncArgs - but note that, if you remove 
--one-file-system, then you'll probably need to add exclude rules for 
whatever virtual filesystems you might have mounted.  (e.g., /dev, /run, 
any nfs or samba mounts...)
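
As a minimal per-host sketch of the first option (using the path from 
the mail below):

$Conf{RsyncShareName} = [ '/', '/var/files' ];   # back up the mounted drive as a share of its own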


On 7/23/23 22:06, kont...@taste-of-it.de wrote:

Hi,

I use BPC 4.4.0 with rsync for a Linux machine. I mounted a second drive 
(sdc1) via fstab at /var/files/, but this folder isn't backed up. I see 
/var/files in the backup folder tree, but it's empty. I have no exclusion 
for /var/files or /var/. I assume there is a built-in setting for 
non-root disks, but I can't find a solution.


Has anybody an idea?

thx

Taste






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Different strategies on same host

2023-06-30 Thread Dave Sherohman via BackupPC-users

On 6/29/23 15:11, Guillermo Rozas wrote:


*I* don't do it, simply because there's little practical difference
between rsync'ing directories that don't change and not rsync'ing
them.


Just to give reason *I* use it:
- the cost/benefit of doing full backups for different folders is 
different (fulls take considerably longer than incrementals, and some 
folders like "Downloads" are not worth that extra time)


What's your transfer method?  While retention policy is a good point, 
the post you replied to is correct that there is no extra time required 
for fulls when using rsync.


The largest machine I'm backing up has 24T on it.  I currently have two 
level 0 backups stored for that machine, which took 12.5 minutes and 1.8 
minutes to complete.  I also have a rather large number (I'd estimate 
30ish, but too lazy to count them) of level 1 backups, which range from 
2.0 to 18.4 minutes, with a pair of outliers at 23.9 and 70.4 minutes.  
Aside from those two outliers (which are incrementals, not fulls!), 
there's no real difference in the time taken.


Fulls and incrementals take roughly the same amount of time when using 
an rsync transfer method, because rsync only sends changed data over the 
wire either way.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Progress indicator for longer backups

2024-02-22 Thread Dave Sherohman via BackupPC-users
When I was debugging an issue some time ago, I added a setting to RsyncArgs 
which causes all files to be listed in the XferLOGs.  I believe the relevant 
setting was:

--log-format=log: %o %i %B %8U,%8G %9l %f%L

but I can't find documentation to confirm that.  In any case, I now get entries 
in XferLOG while it works, which look like


new    recv >f.st.. rw-r-----       33,       4    157414 var/log/nginx/access.log
new    recv >f.st.. rw-r-----       33,       4    185251 var/log/nginx/access.log.1
pool   recv >f.st.. rw-r-----       33,       4      4652 var/log/nginx/access.log.10.gz
pool   recv >f.st.. rw-r-----       33,       4      4721 var/log/nginx/access.log.11.gz



From: david.pearce--- via BackupPC-users 
Sent: Wednesday, February 21, 2024 18:34
To: backuppc-users@lists.sourceforge.net 
Cc: david.pea...@l3harris.com 
Subject: [BackupPC-users] Progress indicator for longer backups


I have one backup host defined that has about 333 GB of data. The “full” job, 
when it completes, takes about 9 hours to run.



Right now I see no progress indicators. Even if the log file showed a list of 
files, that would be great. The log level is set to five but no files are listed.



While a backup job is running, is there a log file being written somewhere?



David Pearce

Systems Administrator


CONFIDENTIALITY NOTICE: This email and any attachments are for the sole use of 
the intended recipient and may contain material that is proprietary, 
confidential, privileged or otherwise legally protected or restricted under 
applicable government laws. Any review, disclosure, distributing or other use 
without expressed permission of the sender is strictly prohibited. If you are 
not the intended recipient, please contact the sender and delete all copies 
without reading, printing, or saving.

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Outdated Files in Backup?

2023-11-15 Thread Dave Sherohman via BackupPC-users
Beyond the safety precaution you mentioned, rsync doesn't delete files 
at all unless your RsyncArgs include "--delete".  There have been a few 
previous people mailing the list who didn't have --delete in there, so I 
wonder whether that might be the problem here.
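
If it does turn out to be missing, one minimal way to add it back 
without rewriting the whole list is the $Conf{RsyncArgsExtra} hook 
(assuming your version has it; its contents are appended to RsyncArgs):

$Conf{RsyncArgsExtra} = [ '--delete' ];   # restore deletion of files removed on the client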


On 11/15/23 12:42, Guillermo Rozas wrote:


For testing I did a new "full" backup.
It has the same old files in it :\


If you check the full log of that backup, does it show any errors or 
anything suspicious? I've had that kind of thing happen sporadically 
when there is a read error on the client side, and rsync disables 
deletions for safety. The "read error" in my case was usually just a 
symlink whose target was missing.


Best regards,
Guillermo



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC remembers _everything_

2024-05-07 Thread Dave Sherohman via BackupPC-users
By default, rsync will only copy new files to the destination and update 
existing ones; it does not delete files which are no longer present on the 
source unless it is given the --delete parameter.  Sounds like you're 
probably missing that.

From: Kirby 
Sent: Monday, May 6, 2024 17:55
To: backuppc-users@lists.sourceforge.net 
Subject: [BackupPC-users] BackupPC remembers _everything_

BackupPC has been covering up my stupid mistakes since 2005.
Fortunately, I had never done the 'rm -r *' until last week. The good
thing was that I was in my home directory, so the system itself was
untouched, and I caught myself before too much could get deleted.

'Not a problem!' I thought. I would just restore from last night's backup
and be on my way. I selected the missing files and directories, started
the restore, and went for a walk. When I got back, things were in a sorry
state. My ~/Downloads directory had filled up my drive and included
stuff that had been deleted 5 years ago.

Am I misunderstanding how fill works? I thought a backup was filled from
the last backup going back to the last non-filled backup. Instead it looks
like it is pulling in everything it has ever backed up.

I am running BackupPC-4.4.0-1.el8.x86_64.

Thank you.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/