Hello again :)

* Patch https://github.com/backuppc/backuppc/issues/541
  - My patch is on GitHub as issue 541. I'm sure I misunderstood your "If no
    Github issue yet exists, you could for example add your suggestion to
    https://github.com/backuppc/backuppc/issues/5".
  - I didn't implement a "YetAnotherDumpPreUserCmd" because then I'd have to
    implement it in the GUI and in the documentation too. On the other hand,
    "DumpPreUserCmd" is just called somewhat earlier. I believe it can affect
    other users only if they expect a limited time between "DumpPreUserCmd"
    and "DumpPreShareCmd".
  - But if there were a chance to get this patch into the normal BackupPC
    source, I'd implement it as you asked and add it to issue 541 above.
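For anyone who wants to try the patch before it lands upstream, the two-directory diff/patch round trip I use can be sketched like this. The directory and file names below are illustrative stand-ins, not the real BackupPC source tree:

```shell
# Sketch of the two-directory patch workflow; paths and file contents are
# illustrative placeholders, not the real BackupPC tree.
set -e
rm -rf backuppc backuppc.orig apply my.patch

# 1. Keep a pristine copy next to the tree you edit.
mkdir -p backuppc/bin
printf 'old code\n' > backuppc/bin/BackupPC_dump
cp -r backuppc backuppc.orig
printf 'new code\n' > backuppc/bin/BackupPC_dump     # the actual change

# 2. Produce the patch from the two directories.
diff -ruN backuppc.orig backuppc > my.patch || true  # diff exits 1 when trees differ

# 3. On a pristine tree, rehearse with --dry-run first, then apply for real.
mkdir apply
cp -r backuppc.orig apply/backuppc
cd apply
patch -p0 --dry-run --reject-file=oops.rej < ../my.patch
patch -p0 < ../my.patch
cd ..
```

With -p0, GNU patch picks whichever of the two header names actually exists in the tree being patched, so applying against a pristine checkout named `backuppc` just works.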
The others are just to give you an answer. I know what I have to do next:

* I'll provide my script at
  https://github.com/backuppc/backuppc/wiki/How-to-find-which-backups-reference-a-particular-pool-file
  as soon as testing is finished.
* Patches: What you propose is (similar to) what I've done. Two directories,
  then running:
  - diff -ruN ...
  - patch -p0 ...
* "bpc_attrib_dirRead: can't open" for an existing file:
  - I have no 1014 in my /etc/passwd and no 544 in my /etc/group. But there
    are a lot of files, readable by BackupPC_ls, with 1014/513 or 544/513.
    I don't know where these uid/gid values come from. Before the migration
    the files were owned by backuppc:backuppc (106/106).
  - What I'm wondering about:
    + /home/Backup4U/.ssh/authorized_keys has digest e86ae4879765b4579bb3deee0626e88b
    + bpc_attrib_dirRead: can't open /var/lib/backuppc/pool/74/fa/74faf6dde97ccc2439ee042541197853
    + That's another digest, probably another file or directory that has not
      been outlined?
  - But don't worry about it. I have a V3 backup of the already migrated V4
    backup. I'll migrate it to V4 using "ulimit -n 10000". I'm very sure the
    migrate job will not have any problems then. If not, it is a really old
    backup and I'll just remove it 🤭️
* 11,000 out of 30,000,000:
  - No, I didn't ask for your opinion. I found out that these missing files
    are from 2009 to 2023, so no relevant backup is affected. Just historical
    data.
* e2fsck: Me too. I haven't had issues with ext4 for several years, thanks to
  journaling 🙏️. I probably lost files several years ago: e2fsck repaired
  something and I removed the files from lost+found because I'm not able to
  restore files from there 🤣️

Have a great day
Matthias

On Tuesday, 19.08.2025 at 18:42 +0100, G.W. Haywood wrote:
> Hi there,
>
> On Sun, 17 Aug 2025, matth...@gmx.at wrote:
>
> > ...
> > ...
> > BTW1:
> > * is there a place where I could provide this (bash) script?
>
> The scripts section at https://github.com/backuppc/backuppc/wiki
>
> > * sometimes I'm developing patches to BackupPC perl scripts (applicable
> > with patch -p0 --dry-run --reject-file=oops.rej <file.patch) and better
> > documented than below. Is there a place where I could provide them?
>
> Github is the place.
>
> Being an Old School greybeard I'm very comfortable with patches, but I
> guess most people never use 'patch' any more. The latest thing is
> 'pull requests' on Github. Unfortunately more or less every time I
> try to use Github's User Interface it breaks in some new and, to be
> frank, very uninteresting way, so I tend to use git from the command
> line to do any code changes on Github (and gitk to look at history).
> I'm not above downloading two different versions of the whole thing as
> an archive, extracting them to a temporary directory, and then running
> 'diff -r -U3 ...' on the two subdirectories.
>
> The point buried beneath all this is that if you can recruit a few
> people to test your patches, that will help enormously. Despite my own
> reservations about and dreadful experiences of Github, I think you will
> get better traction if (1) you use Github's facilities to publish your
> changes for testing and (2) after you've made the changes available on
> Github you send a message to this mailing list announcing them. In my
> view it's much better to keep discussion on this mailing list than to
> try to use Github as some kind of forum.
>
> > - BackupPC_backupDuplicate can need some time and the client could go
> > to sleep in the meantime. A patch to BackupPC_dump moves the call of
> > DumpPreUserCmd before execution of BackupPC_backupDuplicate, and a
> > user's DumpPreUserCmd can disable hibernation on the client side.
> > - BackupPC_restore could need some time between calculation of $lastNum
> > and using it for RestoreInfo and RestoreLOG. A patch moves the
> > calculation after the call of the RestorePostUserCmd, and if someone,
> > like me, is calling BackupPC_restore from a program several times in
> > parallel for different shares and dirs of a host, each single call can
> > use another $lastNum.
>
> All good stuff. :)
>
> Your patch for BackupPC_restore is on Github at
>
> https://github.com/backuppc/backuppc/issues/541
>
> I know that at some point I've at least seen something describing the
> rationale behind the BackupPC_backupDuplicate patch, but I have been
> unable to find it (for my TODO list:). Did you mention it on Github,
> or on this mailing list, or somewhere...?
>
> In general I'm much happier with changes which add functionality as an
> *option* and which won't affect existing users in any way unless they
> deliberately ask for the option. In both patches I'd worry much less
> if you added YetAnotherDumpPreUserCmd, so if YetAnotherDumpPreUserCmd
> is undefined there would be no change to the current operation. If no
> Github issue yet exists, you could for example add your suggestion to
>
> https://github.com/backuppc/backuppc/issues/5
>
> which I want to look into more carefully when I can get to it.
>
> > BTW2:
> > ...
> > ...
> > -rwxrwx--- 1014/544 415 2008-12-14 14:12:10 /home/Backup4U/.ssh/authorized_keys
> > ...
> > ... I don't understand the second one because
> > "/home/Backup4U/.ssh/authorized_keys" exists.
>
> To be able to make a backup of a file, the user 'backuppc' (or
> whatever you have set in the variable $Conf{BackupPCUser} in
> /etc/BackupPC/config.pl) needs to be able to read the file when it
> tries to make the backup. Can your $Conf{BackupPCUser} read a file
> which has UID 1014 and GID 544, but no 'world' read permission?
>
> > I think the main reason for these corruptions in my system was an
> > insufficient maximum number of open file descriptors.
> > As soon as I recognized this and set "ulimit -n 10000", all remaining
> > migrations went well.
>
> Useful information. It seems to me that it should be possible to add
> a check in the conversion script to check ulimit, maybe warn about it.
> I added it to my TODO, but don't hold your breath, it's low priority.
>
> > > It seems likely that your conversion of V3 backups to V4 backups did
> > > not go very well. You said that there were 'issues', and that now the
> > > count of missing pool files is nearly two thousand. (1.800 - is that
> > > right? in the UK we use comma , not decimal point . as the thousands
> > > separator.) You are right to want to investigate. Are you able to
> > > recover a few files successfully? Perhaps choose some at random, and
> > > some because they're big/small/valuable/new/old?
>
> > 1,800 missing pool files found last night, in 1/16 of the pool. In
> > total it is more than 11,000 out of 30,000,000. So only a few files
> > are affected.
>
> I'm not sure if you're asking if I suggested that 11,000 files is only
> "a few files". For the avoidance of doubt, I did not. I asked if you
> were able to restore a few files, but I suggested choosing the files in
> a number of different ways to get a hopefully representative sample of
> your success rate, as a way of checking that your backup recovery can
> "sort of work" on a good day. Obviously it isn't an exhaustive test,
> but I'd probably try something like that before I tried something more
> exhaustive.
>
> > Probably because of filesystem corrections made by e2fsck in the
> > past or because of some aborted migrations, mentioned above.
>
> I have no idea what will happen to a V3 BackupPC pool if e2fsck was
> obliged to make corrections to it, but I wouldn't feel that I could
> trust it without making careful tests. My personal view is that the
> filesystem for your backup system must be completely beyond reproach,
> and if it starts to need maintenance of that kind then it's probably
> time to replace it unless there's some obvious explanation with an
> equally obvious and easy fix. There was a time, decades ago, when I
> spent many hours every year fixing filesystems, but in general they
> are all a lot more reliable nowadays, and now I can't remember the
> last time I had to run any kind of filesystem fixing tool - even on
> the USB attached hard drives on the several Raspberry Pis which we
> use here.

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/