Re: [BackupPC-users] Zfs Deduplication vs Backuppc Pooling
On Wed, 3 Apr 2019 14:20:34 +0100 (BST) "G.W. Haywood via BackupPC-users" wrote:

Hi G.W.,

> You will need to test the performance yourself. Performance can be
> improved by avoiding disc writes, which will take orders of magnitude
> longer than reading RAM. ZFS checksums are in RAM, so you might need
> a lot of it. ZFS deduplication takes place at disc block level, not
> at file level, so if you have for example files which grow from backup
> to backup where the first parts of files are identical, then you might
> see performance improvements from _both_ kinds of deduplication. It
> will obviously depend on your data profile,

Only if your modifications exceed the recordsize setting.

> and it may also depend on encryption; I have no idea what impact that
> might have for example on deduplication of files which have identical
> blocks before encryption. I'd expect any sensible encryption system
> to use something like salts, so that blocks stored on disc would be
> different after encryption even if they were identical before it.
> Otherwise, interesting attacks on the encrypted data can become
> possible. There's a lot of literature.

Yup, this is why (for both reasons) full-disk encryption is not a good solution compared to file encryption (and it can open the way to cryptographic side-channel attacks.)

> In any event, in my view, the stability of the filesystem is a much
> more important consideration. I should be reluctant to move any of my
> backups from ext4 to ZFS simply because I have very little information
> about ZFS to work with and (call it my disclaimer) I have no personal
> experience of it at all. Certainly using the ZFS encryption feature
> would for me be a risk too far.

There are precautions to take (such as one infamous _release_ that made some files disappear), but not many more than usual.
Notice that this kind of adventure is really the icing on the cake when it happens in a project, because it makes the devs react by tightening (a lot) the regression test suite to avoid a bis repetita. The main precaution, the one w$ users usually prefer to ignore, is the pro IT motto: "if it is working as expected, don't fix it" ;-p)

Other than that, it is pretty stable and I know a lot of labs in every field that use it, either on workstations or on backup servers, especially because they keep very long-term data (some studies can last more than half a century) that they don't want to see corrupted. This is the great ZFS advantage over conventional RAID, which only ensures data redundancy, not consistency.

Speaking of consistency, please also note that as of mid-2018, spinning rust and SSDs were already dead and buried, which is a good thing for data persistence (but only IF the industry adopts the right replacement, which is far from being a purely logical choice), see:
https://www.servethehome.com/carbon-nanotube-nram-exudes-excellence-in-persistent-memory/
and:
https://www.servethehome.com/fujitsu-nram-production-to-start-in-2019/
Provided the marketing stays out of the way and the license is cheap, the mature process node used (55 nm) could let it tsunami the storage market in less than a year, to what looks like everybody's benefit (not to mention the huge energy savings.)

JY

___ BackupPC-users mailing list BackupPC-users@lists.sourceforge.net List:https://lists.sourceforge.net/lists/listinfo/backuppc-users Wiki:http://backuppc.wiki.sourceforge.net Project: http://backuppc.sourceforge.net/
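An aside on the recordsize remark above: block-level dedup can only share records that are bit-identical at the same alignment. A rough sketch with plain files and md5sum (not ZFS itself; the 4096-byte record size is an assumption for the demo — ZFS defaults to 128K):

```shell
# Two files that share a 32 KiB prefix, then diverge: chop them into
# fixed-size "records" and count the record hashes they have in common,
# which is roughly what a block-level dedup table could share.
d=$(mktemp -d)
head -c 32768 /dev/urandom > "$d/prefix"       # 8 records' worth
cat "$d/prefix" > "$d/a"; printf 'tail-A' >> "$d/a"
cat "$d/prefix" > "$d/b"; printf 'tail-B' >> "$d/b"
split -b 4096 "$d/a" "$d/a_"                   # assumed recordsize
split -b 4096 "$d/b" "$d/b_"
shared=$(md5sum "$d"/a_* "$d"/b_* | cut -c1-32 | sort | uniq -d | wc -l)
echo "records shared between a and b: $shared" # 8 of the 9 per file
```

Prepend a single byte to one of the copies instead and the shared count drops to zero — every record shifts out of alignment, which is JY's point that dedup only pays off when modifications line up with the recordsize.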
Re: [BackupPC-users] Zfs Deduplication vs Backuppc Pooling
On Tue, 2 Apr 2019 12:25:21 + Stefan Schumacher wrote:

Hi

> I want to set up a new zfs volume as storage for Backuppc. I plan on
> using the zfs features encryption, deduplication and compression.

Unless you have an _absolute_ need, do NOT use deduplication in ZFS, and if you persist, do not carelessly take the 5 GB/TB _lower_ bound that is given in the docs for dedup; if your files are numerous and small, you hit the jackpot and can raise this figure to 10 GB/TB. Not to mention the loss of performance, especially on writes.

> According to my understanding activating these feature on the
> filesystem level should be followed by disabling them on the
> application level, meaning Backuppc. I have found an option to
> deactive compression in the configuration, but none for pooling.
>
> My questions are:
>
> 1) Am I correct in assuming that I should disable pooling and
> compression in Backuppc?

Compression yes, because LZ4 in ZFS comes at almost no cost, provided you have a sufficient CPU.

Pooling no (though this one needs another, perhaps more technical, answer than mine), as it is the base of the BPC process - whether you use v3 or v4, you still need the files… pooled somewhere.

> The information in this e-mail is confidential and may be legally
> privileged. It is intended solely for the addressee and access to the
> e-mail by anyone else is unauthorised. If you are not the intended
> recipient, any disclosure, copying, distribution or any action taken
> or omitted to be taken in reliance on it, is prohibited and may be
> unlawful. If you have received this e-mail in error please forward to:
> post...@net-federation.de

This particular clause has been an urban legend for way too long, because in any reasonable system of law (including international law), you cannot reverse cause and effect and hold somebody who did nothing responsible for your own error.
In short, if you goof, it's on you, not the receiver - bad lawyer, change lawyer…

Jean-Yves
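For a sense of scale, the 5-10 GB/TB figures above translate directly into RAM for the dedup table (DDT). A back-of-the-envelope sketch using those numbers (the 20 TB pool size is made up; real sizing should come from `zdb -S <pool>`, which simulates dedup against the actual data):

```shell
# DDT RAM estimate from the rule-of-thumb figures in the post.
pool_tb=20                              # hypothetical pool size
floor_gb=$(( pool_tb * 5 ))             # documented lower bound: 5 GB/TB
small_files_gb=$(( pool_tb * 10 ))      # many small files: 10 GB/TB
echo "budget ${floor_gb}-${small_files_gb} GB of RAM for the dedup table"
```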
Re: [BackupPC-users] Can't write config.pl files through CGI interface.
On Thu, 21 Feb 2019 22:36:26 +0100 Hubert SCHMITT wrote:

> But i really don't understand what's wrong.
>
> The rights are the same on my side :
> -rw-r- 1 backuppc apache 85K 21 févr. 20:31 config.pl
> -rw-r- 1 backuppc apache 82K 27 déc. 2014 config.pl_20141227_OK
> -rw-r- 1 backuppc apache 82K 17 avril 2016 config.pl.old
> -rw-r- 1 backuppc apache 86K 19 févr. 14:16 config.pl.pre-4.3.0
>
> Apache is running with : User backuppc and Group apache in httpd.conf
>
> The umask is set in config.pl with :
>
> # Permission mask for directories and files created by BackupPC.
> # Default value prevents any access from group other, and prevents
> # group write.
> $Conf{UmaskMode} = 027;
>
> Since three days now i'm searching a solution but without any success.
>
> Think i'll throw in the towel now and only modify config files through

PLS don't top-post.

The problem is: as your file 'config.pl' only grants read permission to the group 'apache', you CAN'T write to it through the CGI, because the write is made under the HTTP server's identity (hence, group 'apache'), NOT by BPC. The umask is also correct: it assigns rw-r----- permissions to the files BPC writes (the backup data files, nothing to do w/ the configuration file.)

Do a:

chmod 660 /etc/backuppc/config.pl

and that'll do the trick.

JY
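The arithmetic behind that fix can be checked on a scratch file (a temp file, not the real config.pl; `stat -c %a` assumes GNU coreutils): 027 masked against the 666 file-creation default gives 640, and 660 is what lets the server's group write.

```shell
# Scratch demo of the fix above (on a temp file, not /etc/backuppc).
umask 027                       # BPC's $Conf{UmaskMode}
f=$(mktemp)                     # mktemp forces 600, so re-create via umask:
rm -f "$f"; : > "$f"            # 666 & ~027 = 640 = rw-r-----
before=$(stat -c %a "$f")
chmod 660 "$f"                  # rw-rw---- : group (apache) may now write
after=$(stat -c %a "$f")
echo "before=$before after=$after"
```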
Re: [BackupPC-users] Can't write config.pl files through CGI interface.
On Wed, 20 Feb 2019 22:22:48 +0100 Hubert SCHMITT wrote:

> Good evening to all.

Good dayght

> I updated my Gentoo recently and since it's done i can't write any
> BackupPC configuration changes through the CGI, i always get the same
> message : *TextFileWrite: Failed to
> write /etc/BackupPC/pc/myhost.pl.new (errno = Read-only file system,
> uids = 1005,1005, gids = 81 81,81 81, umask = 027, ver = v5.26.2, prog

Your problem seems to lie in the "umask = 027": 027 == rwxr-x--- (the inverse of the mask), so the apache user can't write to your file, even if the file's owner/group is backuppc:www-data (or whatever group the apache user is in.)

Mine is: -rw-rw 1 backuppc www-data 99613 2019-02-20 00:13 config.pl

Jean-Yves
Re: [BackupPC-users] Curious Message to PC user
On Mon, 19 Nov 2018 22:45:35 -0800 Craig Barratt via BackupPC-users wrote:

This is what's very good with BPC, when you talk, the kreatuor is listening and reacting fast ;-)

Jean-Yves

> I pushed a fix
> <https://github.com/backuppc/backuppc/commit/46168f8f0a843aa819218ce31de2f70c882d4e8c>
> for this issue.
>
> Craig
Re: [BackupPC-users] Curious Message to PC user
On Mon, 19 Nov 2018 21:48:30 + Jaime Fenton wrote:

> Where in the conf file will that be? (yes we had a chuckle at how far
> back that time/date was).

http://backuppc.sourceforge.net/faq/BackupPC.html
Look for: $Conf{EMailNotifyOldBackupDays} = 7.0; and the following entries.

Ditto for v4, in: http://backuppc.sourceforge.net/BackupPC-4.1.3.html

Jenkins had an idea, so I checked it against PostgreSQL:

select now() - interval '17851.4 days';

and provided you received the e-mail 3 days ago, it matches (@ ~23:01 local time (+1)):

            ?column?
-------------------------------
 1970-01-04 13:25:12.889537+01
(1 row)

Ziziz quite strange (I'm not even sure a BIOS could go back such a long time backwards, never tried.)

Wild guess: this was formerly a machine powered under m$ window$ and the soul of Paul Allen is punishing you for switching to Linux *<;-{p)

Jiff
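The same arithmetic works without PostgreSQL, using GNU date (the 23:01 UTC timestamp is an assumption taken from the post): 17851.4 days before the notification lands just past the Unix epoch, i.e. a last-backup time that was never initialized.

```shell
# 17851.4 days before the e-mail's (assumed) timestamp.
end=$(date -u -d '2018-11-19 23:01 UTC' +%s)
ago=$(( end - 178514 * 86400 / 10 ))    # 17851.4 days, in seconds
date -u -d "@$ago" '+%F %T'             # 1970-01-04 13:25:00
```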
Re: [BackupPC-users] Curious Message to PC user
On Mon, 19 Nov 2018 17:55:31 + Jaime Fenton wrote:

> "Your PC (COMPUTERNAME.GOES.HERE) has not been successfully backed up
> for 17851.4 days. Your PC has been correctly backed up 1 times from
> 0.3 to 17851.4 days ago. PC backups should occur automatically when
> your PC is connected to the network.

That's more than 48 years; did you check the SRV timestamp?

…

> This was for a brand new computer that was setup on Friday. Has anyone
> else had wildly high numbers for "not backed up successfully" emails
> previously?

Yup, but only when going beyond the number of days specified in the configuration file (and NOT 48+ yrs.)

Jiff
Re: [BackupPC-users] Browsing backups: view files directly instead of downloading
On Fri, 16 Nov 2018 19:11:08 + Steve Richards wrote:

Hi Steve,

> I took that to mean that I would have the option to view the contents
> of the file, either directly in the browser for content it can render
> (text, pdf etc.) or by opening the application associated with the
> relevant MIME type. I don't get that option though, I just get a File
> Save dialog box. That allows me to save the file locally, after which
> I can successfully open it. For those times when you just want to take
> a peep at a previous version though, it's not quite as convenient as
> opening it directly.

Not at all; you just have to remember 2 things:
* BackupPC is a _backup_ software, not a viewer of any kind,
* Better is the enemy of good. When you have a good piece of software that does what it is meant to do and has no bugs, the beginning of lots of trouble is when you decide to "improve" it with useless "functionalities".

> My uses Nginx rather than Apache (because the machine already runs
> Nginx). Could that be source of the glitch, or have I perhaps
> misunderstood how it's supposed to work?

You have just been excommunicated by the Apache foundation.

Jean-Yves
Re: [BackupPC-users] considerations on i-nodes
On Thu, 19 Apr 2018 16:55:59 + (UTC) Michael Stowe wrote:

> While there's nothing inherently wrong with selecting an older
> filesystem, ext4's design decision of backward compatibility has
> essentially set some of its limitations in stone. (Your article below
> elaborates on this point; it's not a next generation filesystem, it's
> just something that works.)

IIRC, EXT4 was launched almost entirely to counter ReiserFS, which was rising fast at the time and had users' favour, as opposed to what kernel people thought was best for everyone (as you see, development democrature isn't really new and takes its roots at the source ;)

Jean-Yves

-- Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! http://sdm.link/slashdot
Re: [BackupPC-users] Cryptic logs
On Sat, 14 Apr 2018 17:31:06 -0700 Craig Barratt via BackupPC-users wrote:

Whoops, I forgot an important thing: the BPC server is running on unstable (sid)!

JY
Re: [BackupPC-users] Cryptic logs
On Sat, 14 Apr 2018 17:31:06 -0700 Craig Barratt via BackupPC-users wrote:

> It's probably benign (sorry I can't be very definitive). Just to be
> safe, you could browse the backup tree to make sure those files are
> stored correctly.

Yup, that's the first thing I did (and forgot to mention :/) All logged files are present and accounted for. If it matters, these were files from the server itself (accessed by its FQDN, like any other machine.)

> This is pretty old code; you should consider upgrading to 4.x.

I did (and it was a reaaal PITA to erase the whole arborescence), but I came back to v3 very fast, because:
* it took about twice the time v3 takes for the first backup,
* incrementals were incredibly long compared to v3,
* I had a problem on a machine for which I stopped incrementals twice in a row for some reason: the process that runs before a new backup can take place (I don't remember its name) failed and prevented any further backups for this machine.

OTOH, v3 was faster than light compared to v4, and stopping incrementals did not produce anything bad - as a matter of fact, v3 is really rock solid (judging from everything it bore at home without a single failure.)

This was on a home installation, and the only modification to the server between versions was a re-formatting of the backup repo disk from XFS to EXT4 (it stays in EXT4); but after that, I'll never flip to v4 in production before being absolutely sure this kind of behavior has been eradicated from BPC. (And hardlinks won't be a PB for long, as a flip to ZFS for the BPC repo will take place after summer.)

Sooo, it was quite a bad experience - maybe it was "something" I didn't see, maybe it was "bad luck", but I never had this kind of problem with v3.

Jean-Yves
Re: [BackupPC-users] No restore checkboxes
On Sat, 14 Apr 2018 19:07:25 -0400 Steve wrote:

> Turns out it was the browser. I was using a browser called Web that
> comes with Devuan and the checkboxes don't shown up but when I switched
> to Firefox they were there.

Good to know.
[BackupPC-users] Cryptic logs
Hi list,

The digest gives 1 error for a machine, but its logs show several cryptic lines I don't understand very well, such as:

…
Xfer PIDs are now 10411,10498
[ skipped 6075 lines ]
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(usr/lib/python2.7/dist-packages/xlwt/BIFFRecords.py)
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(usr/lib/python2.7/dist-packages/xlwt/Bitmap.py)
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(usr/lib/python2.7/dist-packages/xlwt/Cell.py)
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(usr/lib/python2.7/dist-packages/xlwt/Column.py)
…
[~a good dozen such lines]

What does it really mean? Is it something I should fear?

Jean-Yves
Re: [BackupPC-users] No restore checkboxes
On Sat, 14 Apr 2018 11:24:14 -0400 Steve wrote:

…

> The problem I have is that I cannot restore. When I look at old backups
> there are no checkboxes to check to select something to restore.

Debian and its derivatives center _all_ web operations around the www-data group, so, to avoid useless contortions, I changed the group of the whole BPC repo:

chgrp -R www-data /BPC/BACKUPS

and the rights of its directories to 6750 (setuid & setgid):

find /BPC -type d -print0 | xargs -0 chmod 6750

On an existing repo, as you may already know, you can prepare a pack of coffee, another of cigars, some pizzas, many beers, a bunch of films, a pillow and a jar of rollmops for the time it takes…

I use fcgiwrap with nginx and it's working ferpectly :)

Jean-Yves
[BackupPC-users] BPC v4 cryptic error
Hi list,

BPC v4.1.5 (Debian pkg) produced a cryptic error:

2018-03-29 03:00:04 incr backup started for directory /
2018-03-29 03:00:06 Got fatal error during xfer (rsync error: unexplained error (code 255) at io.c(629) [Receiver=3.0.9.12])
2018-03-29 03:00:11 Backup aborted (rsync error: unexplained error (code 255) at io.c(629) [Receiver=3.0.9.12])

After testing, it appears it was because the backuppc user couldn't ssh to the machine: ssh was complaining about an offending ECDSA key in ~/.ssh/known_hosts (I was rotating ssh keys.)

A better error message (or even better: capturing what ssh complains about) would be welcome (well, it's a change from the 4 bytes of v4 anyway ;-p)

Jean-Yves
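When a stale host key is the culprit, `ssh-keygen -R` is the usual cleanup. A sketch against a scratch known_hosts file ('myclient' is a made-up host name; in practice you'd point -f at the real ~/.ssh/known_hosts of the backuppc user, or just omit -f):

```shell
# Build a scratch known_hosts with one throwaway host key, then look
# it up and remove it the way one would purge a stale/rotated entry.
d=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$d/key"
awk '{ print "myclient", $1, $2 }' "$d/key.pub" > "$d/known_hosts"
found=$(ssh-keygen -F myclient -f "$d/known_hosts" >/dev/null && echo yes || echo no)
ssh-keygen -R myclient -f "$d/known_hosts" >/dev/null 2>&1
gone=$(ssh-keygen -F myclient -f "$d/known_hosts" >/dev/null && echo yes || echo no)
echo "found=$found found-after-R=$gone"
```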
Re: [BackupPC-users] RGDP, is backuppc still usable for non hobby backups after may 25
On Sun, 25 Mar 2018 14:46:18 +0200 Pelle Hanses wrote:

> In my
> discussions with lawyers, you should have some form of backup filters
> so that the data requested deleted is not restored. For BackupPC you
> probably have to write some scripts and store all documents names that
> should not be restored in some file or database and run all restored
> documents through the script.

That would be using a cannon to shoot a mosquito, not to mention the difficulties if the doc's date changes for whatever reason. You'd better use a document manager that automatically deletes documents at the right time, and back up its data ;-)

Jean-Yves
Re: [BackupPC-users] BackupPC vs ZFS compression
On Wed, 24 Jan 2018 23:51:51 +0100 Patrik Janoušek wrote:

> I've read compression in BackupPC is much better than in ZFS. So is it
> not true? Should I definitely use compression in ZFS or is there any
> reason to use BackupPC compression?
> I've found BackupPC use zlib and ZFS use lz4 that is much faster, but
> doesn't have so good compression ratio.

Neither has a universally good compression ratio; the right question is: is THIS compression type good for MY needs, thus for MY type of data? To settle that, the best way is to run tests, as YMMV (tip: if you're a professional in pictures &| films, do not even think about it.)

Or, a quick shortcut: keep in mind that lz4 is so fast and so light on CPU that it is quite a good compromise in many situations. (AFAIK, lz4 first makes a trial run, checks whether compression would actually add to the original size, and ditches it if so.)

> So... where is the truth?

Mox Fulder & Dany Sculla told you before: it is out there ;-p)

Jean-Yves
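The "test it on YOUR data" advice is cheap to follow. A stand-in sketch using gzip (zlib's CLI face) on synthetic, highly repetitive data — it only shows the level trade-off, lz4 itself isn't invoked here:

```shell
# Compare fastest vs strongest gzip settings on repetitive sample data.
data=$(mktemp)
yes 'some repetitive log line' | head -n 20000 > "$data"
orig=$(wc -c < "$data")
fast=$(gzip -1 -c "$data" | wc -c)
best=$(gzip -9 -c "$data" | wc -c)
echo "orig=$orig fast(-1)=$fast best(-9)=$best"
```

On already-compressed media (pictures, films) both outputs come out at or above the original size — which is the caveat for media professionals above.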
Re: [BackupPC-users] File System containing backup was too full
On Tue, 19 Dec 2017 13:10:44 +0100 Adrien Coestesquis wrote:

> i don't think so, the BPC arborescence is somewhere else

Like the truth, apparently :/

Hmm, devs, could it be something weird in the code, like the use of a signed int that would overflow?
Re: [BackupPC-users] File System containing backup was too full
On Tue, 19 Dec 2017 12:15:05 +0100 Adrien Coestesquis wrote:

…

> another weird thing backups seems to be repeated :
>
> 2017-12-19 05:00:00 incr backup started back to 2017-12-17 05:00:03
> (backup #274) for directory /var/lib/jenkins/jobs
>
> 2017-12-19 08:49:39 incr backup started back to 2017-12-17 05:00:03
> (backup #274) for directory /var/lib/jenkins/jobs

Could it be that:
* you back up /var entirely,
* your BPC arborescence is in /var/lib/backuppc,
which might create (?) an infinite loop?
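If that is indeed the layout, the standard guard is to exclude the pool from its own backup. A hypothetical config.pl fragment (the share name and path are assumptions about this poster's setup; $Conf{BackupFilesExclude} is the real BackupPC option, keyed per share):

```perl
# Keep the BackupPC data tree out of its own /var backup (assumed paths).
$Conf{BackupFilesExclude} = {
    '/var' => ['/lib/backuppc'],   # path is relative to the share root
};
```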
Re: [BackupPC-users] File System containing backup was too full
On Mon, 18 Dec 2017 16:43:35 +0100 Adrien Coestesquis wrote:

> Hey !

Indeed, it solves problems sometimes :)

> I reboot the machine right now, and I will see tomorrow

It may also be tied to another cause, which appears quite silly:
https://serverfault.com/questions/482173/is-there-any-other-reason-for-no-space-left-on-device
(see the comment scored 4, mid-page.)

This is old, but this behavior may not have been corrected since.
Re: [BackupPC-users] File System containing backup was too full
On Mon, 18 Dec 2017 16:43:35 +0100 Adrien Coestesquis wrote:

> Hey !

Indeed, it solves problems sometimes :)

> I reboot the machine right now, and I will see tomorrow

If it still goes on, it might also be for this reason:
https://access.redhat.com/solutions/2316
which you may be able to visualize using:
https://serverfault.com/questions/232525/df-in-linux-not-showing-correct-free-space-after-file-removal
(although this one addresses a _df_ reporting problem.)

NB: Now using XFS @home, I never had this problem again.
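The deleted-but-still-open case behind those links is easy to reproduce; on a live box, `lsof +L1` is the classic way to spot the culprit processes. A sketch on a temp file:

```shell
# A deleted file held open by a descriptor keeps its blocks allocated;
# df only recovers the space once the last fd goes away.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=512 2>/dev/null
exec 3<"$f"             # keep the file open on fd 3
rm -f "$f"              # the name is gone from the filesystem...
held=$(wc -c <&3)       # ...but all 524288 bytes are still readable
exec 3<&-               # closing the fd is what actually frees them
echo "bytes still held after rm: $held"
```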
Re: [BackupPC-users] File System containing backup was too full
On Mon, 18 Dec 2017 16:15:16 +0100 Adrien Coestesquis wrote:

> Thanks for your replay Stefan.
>
> So this is related to the 95% and that's why the DfMaxUsagePct change
> is not taken ?
>
> this is the output of tune2fs -l /dev/sda1:

…

doesn't seem there's any problem.

…

> But today i have 4.5TB left and this is sufficient to make backups
> with my retention configuration. So why backuppc complains about
> this ? how 63% (today's disk utilisation) is superior to 95% ?

When I was still using extN FS, I sometimes had problems with newly freed space not being reported correctly for a random while (from minutes to sometimes days.) Forcing a reboot was one solution - anyway, if you can do so, go for it, just to see whether the next backup still complains or not.
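BackupPC's check boils down to comparing df's use% column against $Conf{DfMaxUsagePct}. A rough shell equivalent (the 95 ceiling and the /tmp mount point are placeholders, not BPC's actual code):

```shell
# Poor man's DfMaxUsagePct check against one mount point.
max_pct=95
pct=$(df -P /tmp | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$pct" -ge "$max_pct" ]; then
    echo "too full: ${pct}% >= ${max_pct}%"
else
    echo "ok: ${pct}% < ${max_pct}%"
fi
```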
Re: [BackupPC-users] question about catalog
On Fri, 15 Dec 2017 12:16:50 -0500 David Owens wrote:

> Hello,

Hi

> I want to backup my backuppc server. Questions:

Shall we imagine it runs under Linux?

> 1. Where is the catalog, and what is the best practice for backing
> it up.

There isn't one; the list is determined by the root of your backup (could be /, or /DATA, or /whatever) and the eventual exclusions you set in config.pl.

> 2. I was just going to tar the config. What are the essential
> files / file systems needed to capture.

Assuming it runs under Linux, don't be stingy with space: back up the whole thing except the usual suspects (/dev, /proc, /tmp, /var/tmp, …) This way, if you meet a catastrophe, you'll only need to reinstall a very minimal system including rsync, then restore the last backup to recover a fully operating system (NB: impossible with w$.) Otherwise, you'll "save" a little space but bang your head against the wall the day you discover you forgot, say, something like /var/lib.

Of course, for the BPC server, an up-to-date tar of /etc/backuppc saved on some other machines can help.
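The "tar of /etc/backuppc" bit, as a sketch (a scratch directory stands in for /etc/backuppc here; the dated file name is just a convention, not a BackupPC feature):

```shell
# Dated tarball of the BPC configuration directory (stand-in paths).
src=$(mktemp -d)                         # pretend this is /etc/backuppc
echo '$Conf{UmaskMode} = 027;' > "$src/config.pl"
out=$(mktemp -d)
tar -czf "$out/backuppc-etc-$(date +%F).tar.gz" -C "$src" .
listed=$(tar -tzf "$out"/backuppc-etc-*.tar.gz | grep -c 'config.pl')
echo "files named config.pl in archive: $listed"
```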
Re: [BackupPC-users] Trouble with restore
On Wed, 13 Dec 2017 15:21:09 +0100 "Jens Potthast" wrote:

> Ok, I don't understand, what you are talking about. Sorry.

From some Samba ML posts, this error code means the server command either cannot create the destination directory OR cannot write into it. But that is Samba, not m$.

> BackupPC does create a directory, if it does not already exists. If it
> does, it stops with an error. It restores the first file and then it
> stops.

Hmm, have a look at the w$ logs, if they are correctly set (? dunno if it is even possible to change the log level in nsa self-service); they may reflect what the real problem is on your client, as m$ is also known to sometimes group several errors under only one error code.

> So, if neither directory nor files do exists, why would restore
> fail?

I had some permission problems in the past on a friend's Linux/w$ installation (but didn't take notes, as I do not use w$ anymore and he switched to a w$ freeware solution :/) - IIRC, there was a user substitution to the "system" user (or from it, I don't remember) that clobbered the whole restoration process, or something close to that.
Re: [BackupPC-users] Trouble with restore
On Wed, 13 Dec 2017 14:44:25 +0100 "Jens Potthast" wrote:

Correction: the Samba ML links this code to an inability to _create_ the destination DIR (that would be the m$ meaning; they love Doublespeak :/)

> No, that's not the problem. Files should be overwritten. *But* even
> restoring to a new and empty directory fails with the same error.
>
> -Original Message-
> From: B [mailto:lazyvi...@gmx.com]
> Sent: Mittwoch, 13. Dezember 2017 14:25
> To: backuppc-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] Trouble with restore
>
> On Wed, 13 Dec 2017 14:01:46 +0100
> "Jens Potthast" wrote:
>
> > tar:1596 Can't mkdir Path to filename:
> > NT_STATUS_OBJECT_NAME_COLLISION
>
> You obviously already have a file having the same name in this restore
> DIR.
Re: [BackupPC-users] Trouble with restore
On Wed, 13 Dec 2017 14:01:46 +0100 "Jens Potthast" wrote:
> tar:1596 Can't mkdir Path to filename: NT_STATUS_OBJECT_NAME_COLLISION
                                                          ^^
You obviously already have a file with the same name in this restore DIR.
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Thu, 16 Nov 2017 12:22:00 -0600 Les Mikesell wrote:
> No, but when doing a restore for any reason other than accidental complete deletion of a file or directory I nearly always restore to a different location and compare things instead of overwriting the existing current versions anyway.
Ok, I clearly see your point and value it as a wise thing to do when you only have BPC as a safety net.
> Your hypothetical educated user […]
Well, this is a bit different, as BPC is run overnight, once per 24 hrs. In fact, they only use it for older docs they mess with; current ones, wrecked during the same day, must be handled by an admin, as they must be extracted from an hourly FS snapshot. But surprisingly, this is a very rare operation, which may be tied to the fact that admin interventions are strictly limited to the morning, no matter what (except for server breakdowns.)
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Thu, 16 Nov 2017 17:57:59 +0100 Holger Parplies wrote:
Whoops, wrong From: (and strange setup), putting this back on the list.
> Bzzzz wrote on 2017-11-16 00:50:52 +0100 [Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)]:
> > [...]
> > In short: being root and (especially) removing directories is bad; on the other hand, using root as part of a controlled process doesn't mean that you'll be hacked or whatever - furthermore, doing some stuff as root is compulsory for some maintenance work.
> wrong. Not understanding a concept and giving advice about it is bad.
OK, so elaborate; that way I might understand what's so terribly wrong.
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Thu, 16 Nov 2017 10:45:40 -0600 Les Mikesell wrote:
> Yes, but things have to be very, very screwed up to get to the point where the user can't fix it with a tar download through a browser followed by an appropriate restore command. When things have been broken that badly it may be time to let someone else fix it. And if you are restoring a whole system you have to configure that part again anyway.
Well, I can concede this to you, but it is a tiny bit extreme. (Do you climb mountains bare-handed, wearing pitch-black glasses, with a chopper waiting for you at the top? ;)
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Thu, 16 Nov 2017 10:21:49 -0600 Les Mikesell wrote:
> damaging) direct restore. But the admin should know what to tweak if he does need that massive restore.
Yup, and the problem is: in this configuration, you *need* an admin intervention to fix a restore, whereas the other solution easily leads to: user = BPC user <=> each user can back up/restore anything from a single file to his whole $HOME (if needed) without having to ask the admin to do so. This means informed users, but IMHO the time taken to train them is, by far, less pain than being disturbed all the time. Of course YMMV, as this depends on company policy; some don't want any direct contact between users and data (well, it also depends on the admins: some want to control everything, others not ;-p)
Jiff
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Wed, 15 Nov 2017 23:29:58 - Jamie Burchell wrote:
> Because you'll seldom find any good advice that advocates doing anything as root.
You misread it. Doing stuff as root when it can be done by a regular user or a sudo user _can_ be a risk (although I don't know any real admin who hasn't at least one or two consoles open as root.) Everybody with long practice of Linux has goofed at least once - I did a rm -r * as root at the root of the master disk (thanks BPC !), but you can also do tough shit as a user (such as rm -r ~ /dir - watch the space between ~ and /dir, meaning you're removing your whole $HOME + /dir !) And in the particular scheme of BPC, where's the risk? root launches rsync and recovers files or parts of files into a controlled location, so what? This isn't a risky _console_ command line, this is part of a known automatic process ! In short: being root and (especially) removing directories is bad; on the other hand, using root as part of a controlled process doesn't mean that you'll be hacked or whatever - furthermore, doing some stuff as root is compulsory for some maintenance work. Rule of thumb: don't get creative with a well-known (and described) process.
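The `rm -r ~ /dir` trap above is plain word splitting: with the stray space, the shell hands rm two separate arguments. A harmless way to convince yourself (hypothetical paths, and `echo` stands in for `rm` so nothing gets deleted):

```shell
#!/bin/sh
# Print the argument list a command would receive, instead of running rm.
show_args() {
    echo "rm would receive $# target(s):"
    for arg in "$@"; do
        echo "  -> $arg"
    done
}

# Intended: remove only ~/dir (ONE argument)
show_args ~/dir

# Typo: "~ /dir" is TWO arguments - your whole $HOME, plus /dir
show_args ~ /dir
```

The second call is the disaster case: `~` expands to your home directory as its own argument.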
Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)
On Wed, 15 Nov 2017 22:48:01 - Jamie Burchell wrote:
> I followed the instructions to make a restricted backuppc user on client machines with limited sudo permission thus:
> backuppc ALL=NOPASSWD: /usr/bin/rsync --server --sender *
Why on earth did you use that instead of leaving it to root!? In that case, restoration by a user doesn't cause any problem.
Re: [BackupPC-users] Help with monthly schedule configuration
On Wed, 15 Nov 2017 22:52:26 - Jamie Burchell wrote:
> I've gone with the schedule that looks correct for now and will see what happens!
Terrible things: you'll lose confidence in closed-source software (hence, in m$ "products"), your HD will fill with many little gremlins, and finally you'll end up with usable backups covering almost a full month ! As I said, terrible !!
Re: [BackupPC-users] Help with monthly schedule configuration
On Wed, 15 Nov 2017 21:59:28 - Jamie Burchell wrote:
> Hi!
Ho¡
> Hoping someone can give me that “ah ha!” moment that I’m so desperately craving after poring over the documentation, mailing lists and various forum posts.
Jiff chops an “ah ha!” moment from the (near) field, cooks it and => Jamie Burchell a hot “ah ha!” moment.
> I want to move from BackupPC’s default schedule to keeping ~1 month’s worth of backups, but I cannot fathom if I should:
> - Do a full backup every day and keep 30 of them
> - Do a full backup every week and keep 4 of them, with incrementals in between
> - Do a full backup each month and keep 30 incrementals.
For what it's worth (I'm not a BPC specialist, just a doc reader), I keep several filled incrementals each day (~a month's worth), plus some fulls every Sunday (5, the last of them a real SOS copy as it is extra-old.) This way, as incrementals look complete (and fulls look… full ;), I just use the last backup, pick what I need and restore it when needed.
> BackupPC is so efficient with storage and transferring only what is needed between backups that I don’t understand the difference between the three approaches.
IIRC, fulls are unconditionally full backups, meaning they do not care about whatever took place before, meaning you waste disk space for nothing if you do one per day (links are cheap, but not free in terms of disk space.)
> All backups can be browsed like full backups,
Yep, but only if you said so; part of BPC black magic.
> BackupPC only ever transfers files it doesn’t have, all storage is deduplicated and rsync can detect changes, new files and deletions, so why does it matter?
All this takes time, when an incremental usually takes (much) less time (NB: we're NOT talking about very busy and large databases with a short rotation of all their rows here, just regular data.)
> FullPeriod 6.97
> FullKeepCnt 4
> IncrPeriod 0.97
> IncrKeepCnt 24
> I **think** this will give me 4 full backups with incrementals in between, but I think I could equally have gone with:
Looks correct.
> FullPeriod 30
> FullKeepCnt 1
> IncrPeriod 0.97
> IncrKeepCnt 29
No, as it will keep more fulls than necessary.
> I don’t understand what is meant by a “filled backup” either.
Reading the doc helps, a lot.
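The arithmetic behind the first schedule quoted above can be checked quickly (a sketch; the variable names just mirror the config values under discussion):

```shell
#!/bin/sh
# Rough coverage of the first proposed schedule:
# FullPeriod 6.97, FullKeepCnt 4 (plus ~24 incrementals in between)
full_period=6.97
full_keep=4

# The oldest retained full is roughly FullKeepCnt * FullPeriod days old,
# so 4 fulls ~7 days apart give about a month of history.
awk -v p="$full_period" -v n="$full_keep" \
    'BEGIN { printf "fulls cover ~%.1f days\n", p * n }'
```

Which is why 6.97/4/0.97/24 lands close to the "~1 month" target, while FullPeriod 30 makes every point-in-restore depend on one very long incremental chain.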
Re: [BackupPC-users] Phantom host
On Fri, 27 Oct 2017 14:18:28 -0700 Ethan Tarr wrote:
> I resolved the issue by deleting the laptop’s DHCP entries on the PDC, then running ipconfig /release -> /flushdns/ -> /renew -> /registerdns on the laptop itself. Now it has a different IP assigned through DHCP, which is being properly registered in the DNS forward lookup zone, and the BackupPC machine can now resolve the name.
Ah, where would admins be without ze windoze touch?!! *<;-p) Anyway, ziziz nice!
> Sent from 10 Windows for horse Mail
Re: [BackupPC-users] Phantom host
On Fri, 27 Oct 2017 10:00:11 -0700 Ethan Tarr wrote:
> Oh, I forgot to mention, I use rsyncd as the transport method, so even if samba has any issues it should still back up fine.
Whatever the method, your problem most likely lives around the DNS servers, as you can't resolve (both forward and reverse?) your laptop from them, which is abnormal.
JY
> Sent from Anymanymanimoooh-2100 Telepathic Mail in 2028 for Windows 25.7
Re: [BackupPC-users] Phantom host
On Fri, 27 Oct 2017 09:23:36 -0700 Ethan Tarr wrote:
> nslookup mylaptop
> ;; Got SERVFAIL reply from 10.1.10.3, trying next server
> Server:  10.1.10.4
> Address: 10.1.10.4#53
> ** server can't find mylaptop: SERVFAIL
>
> nslookup mylaptop.mydomain.com
> Server:  10.1.10.3
> Address: 10.1.10.3#53
> ** server can't find mylaptop.mydomain.com: NXDOMAIN
>
> I checked DNS and DHCP on the PDC back when this first happened and the proper entries are still in place.
Test, test and re-test, especially to see whether name resolution is still failing when the offending laptop is offline. If your DHCP server is supposed to update your DNS server, check that this is still happening; if the DNS server isn't being updated by the DHCP server, then you have a DNS problem. Other useful thing: both with and without the culprit online, use 'arp' (pkg net-tools on Debian) as root to get a list of all resolved (or not!) IP addresses and MACs on your network. Oops, forgot the main issue: check that the laptop isn't behind a router, as those usually do not let broadcast packets pass through.
> The one unusual thing I can think of is that over the summer this laptop had a boot drive fail, so I had to wipe it and reinstall Win10. Maybe somehow that resulted in “two machines” sharing the same MAC address? I don’t know.
Neither do I, as next March it will be 19 years since I dumped Windows for Debian, and I never ever regretted it.
> smbclient -U user%pass -L //mylaptop
> OS=[Windows 10 Home 15063] Server=[Windows 10 Home 6.3]
> Sharename                              Type     Comment
> ---------                              ----     -------
> ADMIN$                                 Disk     Remote Admin
> C$                                     Disk     Default share
> Canon iR-ADV C5030C5035 Class Driver   Printer  Canon iR-ADV C5030/C5035 Class Driver
> D$                                     Disk     Default share
> IPC$                                   IPC      Remote IPC
> print$                                 Disk     Printer Drivers
> Users                                  Disk
> OS=[Windows 10 Home 15063] Server=[Windows 10 Home 6.3]
> Server               Comment
> ---------            -------
> Workgroup            Master
Hm, you don't have any share?
This can be a real problem, at least for C:\
JY
Re: [BackupPC-users] BPC4: checksum
On Fri, 27 Oct 2017 17:11:26 +0200 Gandalf Corvotempesta wrote:
> I'm using ZFS, so checksumming is done by ZFS itself, is not an issue for me to skip any data corruption check, as zfs does this automatically
ZFS is very good at this, but for data I'd like to have both belt and suspenders (note that there's still an important pending issue about whether or not to rewrite when hitting a bad sector; it's mitigated if you're using mirrors (which you should), but with RAIDZ-n it raises the possibility of data loss.) But from your other post (10x slower w/ chksum), I think there's no question that removing it is the way to go in your case.
> What I would like is to keep load as low as possible on clients and checksumming every file is slowing down everything
As always in IT, the best compromise for your own case is always the best of all ;-)
JY
Re: [BackupPC-users] BPC4: checksum
On Fri, 27 Oct 2017 10:03:45 -0500 Les Mikesell wrote:
> I thought in v4 this mechanism is also related to the ability to match copied, moved or renamed files to existing matching content in the pool, so removing it might be a bad idea aside from eliminating the check for corruption or changes in content that don't update the directory/inode.
Yep, I agree with you.
JY
Re: [BackupPC-users] BPC4: checksum
On Fri, 27 Oct 2017 16:24:51 +0200 B wrote:
Correction (as often, I read much too fast):
> This is going against: "I don't think so, because on incrementals BPC doesn't use "--checksum" at all." (v.4.x doc):
The doc doesn't speak about incrementals (only fulls), but to be sure about this, you should look at the rsync_bpc source.
JY
Re: [BackupPC-users] BPC4: checksum
On Fri, 27 Oct 2017 12:56:36 +0200 Gandalf Corvotempesta wrote:
> What happens if I remove "--checksum" from "full" backups ?
Monstrosities:
* an A380 will holographically crash onto your house,
* your dog/cat/children/wife/goldfish will turn gay,
* you'll awake one morning and all your machines will be reinstalled with DOS-2.0,
* you'll dream of Bill Gates every night until you pass away,
etc…
And apart from that, maybe: http://backuppc.sourceforge.net/faq/BackupPC.html#Rsync-checksum-caching can help as a base; in v.4.x, there are some slight differences: http://backuppc.sourceforge.net/BackupPC-4.1.3.html
This is going against: "I don't think so, because on incrementals BPC doesn't use "--checksum" at all." (v.4.x doc):
$Conf{RsyncFullArgsExtra} = [ ... ];
Additional arguments for a full rsync or rsyncd backup. The --checksum argument causes the client to send a full-file checksum for every file (meaning the client reads every file and computes the checksum, which is sent with the file list). On the server, rsync_bpc will skip any files that have a matching full-file checksum, and size, mtime and number of hardlinks. Any file that has different attributes will be updated using the block rsync algorithm. In V3, full backups applied the block rsync algorithm to every file, which is a lot slower but a bit more conservative. To get that behavior, replace --checksum with --ignore-times.
The server may not send any chksum command, but this states that the client will use them anyway. So I'll join "l, rick" in saying that if you deactivate it, your full backups will take "a while" - test it, but you won't love it.
Jean-Yves
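To make the cost quoted above concrete: with --checksum, the client must read every file end to end to produce a full-file digest, even when size/mtime already match. A tiny illustration (the files are made up on the fly, and md5 merely stands in for whatever digest rsync negotiates):

```shell
#!/bin/sh
# With --checksum, rsync reads every file fully to hash it.
# count_hashed does the equivalent full per-file read over a tree.
count_hashed() {
    find "$1" -type f -exec md5sum {} \; | wc -l
}

tree=$(mktemp -d)
echo "hello" > "$tree/a.txt"
echo "world" > "$tree/b.txt"

count_hashed "$tree"   # every byte of both files is read just to hash them

rm -rf "$tree"
```

That full read per file is exactly the extra I/O an incremental (mtime/size comparison only) never pays.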
Re: [BackupPC-users] Phantom host
On Thu, 26 Oct 2017 15:34:38 -0700 Ethan Tarr wrote:
> Finally had a few minutes this afternoon to sort some lists of MAC addresses, and I couldn’t find any duplicates. This is very mysterious. Not a huge problem, but annoying.
OK, from the server, can you test the answers to:
* a DNS request (I suppose your machines use their regular DNS names when resolving to NETBIOS)
$ nslookup mymachineicantreach
* an SMB listing request?
$ smbclient -Umyuser%mypassword -L //mymachinename
JY
Re: [BackupPC-users] Phantom host
On Wed, 25 Oct 2017 11:52:23 -0700 Ethan Tarr wrote:
> Nothing has changed, as far as I know.
As far as you know, or you know for sure it hasn't? Make several arp checks anyway, with all machines online, 'cos this looks like a MAC conflict.
JY
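A quick way to spot that kind of conflict is to check whether one hardware address shows up behind two different IPs in the `arp -an` output. A sketch (the addresses below are invented sample data; on a live box you would pipe `arp -an` in instead of the heredoc):

```shell
#!/bin/sh
# Flag MAC addresses that appear behind more than one IP
# in `arp -an`-style output ("? (IP) at MAC [ether] on IF").
find_dup_macs() {
    awk '{ print $4, $2 }' \
    | sort \
    | awk '
        $1 == prev { print "duplicate MAC " $1 ": " ips ", " $2 }
        { prev = $1; ips = $2 }
    '
}

# Hypothetical sample: .121 and .123 claim the same MAC.
cat <<'EOF' | find_dup_macs
? (10.1.10.121) at aa:bb:cc:dd:ee:01 [ether] on eth0
? (10.1.10.122) at aa:bb:cc:dd:ee:02 [ether] on eth0
? (10.1.10.123) at aa:bb:cc:dd:ee:01 [ether] on eth0
EOF
```

Run it with all the machines online, then again with the suspect laptop off, and compare.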
Re: [BackupPC-users] Phantom host
On Wed, 25 Oct 2017 10:23:29 -0700 Ethan Tarr wrote:
> I’m having a funny issue with a host. A few weeks ago, the machine running BackupPC lost the ability to find the NetBIOS name of a laptop (10.1.10.121) on the network. I eventually just resolved the issue by adding an entry to the hosts file, but one of the first things I tried while troubleshooting was to check the DHCP box for that laptop in Hosts.
>
> From then on, even after unchecking the box, and restarting BackupPC as well as the machine it’s running on, there is a phantom 10.1.10.121 host that appears in “Failures that need attention” and in the daily admin email. I can’t find any mention of this “host” in any of the configs or the directory structure. How should I go about clearing this out?
Did you add something to this network, or did you change/replace an Ethernet card on any of the machines/devices/IoT/etc?
Jean-Yves
Re: [BackupPC-users] Backing up a NFS share on Windows using BackupPC
On Mon, 16 Oct 2017 15:03:08 + (UTC) Michael Stowe wrote:
> needs, you may choose to ignore this, or implement the Windows-recommended solution of shadow-copies.
That seems to be the right answer, as many (if not all?) w$ backup programs (open-source or commercial) use shadow copies to ensure no locking problems will hamper the operation.
JY
Re: [BackupPC-users] Upgrading from Ubuntu 14.04 to 16.04
On Tue, 12 Sep 2017 10:29:34 -0500 "Gerald Brandt" wrote:
Oops, I forgot an important point: if you use System Rescue CD on a USB key, do NOT mount anything on /mnt, as it is used by the USB burner to mount the ISO9660 image file (it took me some time to figure out why my burns were non-operational :/)
JY
Re: [BackupPC-users] Upgrading from Ubuntu 14.04 to 16.04
On Tue, 12 Sep 2017 10:29:34 -0500 "Gerald Brandt" wrote:
> Hi,
Ho
> Has anyone done an upgrade from Ubuntu 14.04 to 16.04 on an active BackupPC system?
This is ZE problem w/ your distro: cycles are too short, meaning you have to upgrade often (every 6 months IIRC) to avoid further problems if you skip several upgrades; this is why it is almost never used for servers, except for LTS versions…
> Normally, I'd clonezilla the system drive before I did an upgrade, but it's not working right on my Linux raid 1 boot drives.
You could use something like: http://www.system-rescue-cd.org/ which has a very good manual and can also be burned onto a USB key (it supplies the binary for that.) It has driveimage - do NOT use it if your FS isn't in the list, see the package info, as it is part of Debian - and FSArchiver, usable in any circumstance as it does a block backup and is therefore FS-agnostic.
Jean-Yves
Re: [BackupPC-users] rsyncd method does not work: (unexpected response: '')
On Sun, 10 Sep 2017 23:37:52 +0300 Anton Torkunov wrote:
> Hi all!
Hi you!!
> I forgot to say that I have 3.3.0 version installed from Ubuntu 14.x repositories.
Hmm, memory lapses are precursors to Alzheimer's disease…
> Thanks for suggestions: I asked Robert Duval tomb and his say that i need to install last version of BackupPC from GitHub. I did it and now everything is fine!
Glad to read that! You should be careful with ubuntu because they package non-stable versions of many programs, leading to "some (direct) problems" and also, sometimes, bad interactions. For server stability's sake, you'd be better off using the original: Debian.
> Jean-Yves, just for clarify - orange.vpn - it just name inside my private VPN ;)
Who knows what's in orange people's heads…
JY
Re: [BackupPC-users] rsyncd method does not work: (unexpected response: '')
On Wed, 6 Sep 2017 17:18:40 +0300 Anton Torkunov wrote:
> Hello everyone!
Hi alone!!
> I've tried to backup host via rsyncd, but got error: Backup aborted (unexpected response: '')
>
> rsyncd is well configured, because If I try to run just rsync, everything is fine:
>
> rsync rsync://orange.vpn/root/
I suspect some very bad black magic here. As you know, or should know, 'orange' is the name of a monopolistic telecom company formerly called 'france telecom' in France; as such, these people are very angry and resentful because they lost their privileges when the company was privatized. So, in order to get back their former position and take revenge, they swore to devote themselves indefinitely to the mighty god "unanswered call" and goddess "twisted pair copper" (very filthy, this one). So, as my BPC crystal balls say, there is a very high possibility they took offense at your machine name, stole from you the digital equivalent of a beard hair or hair strand in the form of a byte from your browser, and stuffed it into a tux puppet representing your computer. To achieve their miserable goal, they hired a very cruel and powerful Russo-Chinese warlock, who is in fact half Béninois (from Bénin) by the cousin of the half-sister of his grand-aunt's cleaning lady. As you know, or should know, Bénin was formerly called Dahomey and is considered to be the mother country of voodoo. So, each time you try to BPC your machine, he plunges little darts into his tux puppet and mutters a devilish incantation to make your life miserable. So, your only solution is to jump on a plane to Haïti and, once there, ask for Robert Duval's tomb - but you can only ask a 34-year-old virgin ginger girl born on February 29th, and you must do this at exactly midnight on a night of full moon.
Once you know where it is, you'll have to wait for the next red moon and be standing in front of the grave at 11:47:29PM precisely; then you'll have to sing the Salve Regina three times in a row, not missing a single note or word, and in the right key - DO NOT accelerate the tempo, it would amplify the problem, making it almost impossible to solve. Then you'll kneel and wait for Robert Duval to rise from the dead (NB: this might take some time, depending on his mood, YMMV.) When he's out, you will ask him how to get rid of this curse, but most important is to NOT forget to offer him a little goat (less than 1 year old but more than 11 months and 25 days) named Patricia that you brought with you, because not doing so would nullify your quest. Once you get your answer, you'll only have 2 hours to go back home and ward the curse off, otherwise you'll have to do it all over again from the beginning. Your chances of success can be evaluated at ~8%; there is another way to achieve this, but it is so terrible that nobody wants to talk about it.
> 64 bytes from orange (192.168.220.27): icmp_seq=1 ttl=64 time=8.25 ms
> --- orange.vpn ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 8.259/8.259/8.259/0.000 ms
I guess this terrible timing is due to the VPN link
> At the client side (rsyncd):
Hmm, did you try the rsync method instead of the rsyncd one? I remember, a long time ago, I had some problems with rsyncd and BPC in the same LAN - maybe it was my fault, maybe not; my notes just say not to use rsyncd.
> Could you help me to fix the issue???
Nope, it is in Robert Duval's hands from now on.
> Anton
Jean-Yves
Re: [BackupPC-users] pc files ereadable (encoding?)
On Wed, 6 Sep 2017 00:24:00 +0200 Maxime Chupin wrote:
> Yes, I received Les' mail after. Sorry. zcat works for one file, thank you !
> Now I'm looking for restoring a directory...
PLS do not top post. As Les said, it would be much more efficient to (re)install BPC and use your copy as its base directory, thus being able to recover whatever you want very easily via the web I/F - otherwise, you'll have to uncompress each file one by one.
JY
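For the record, the odd names in the pc/ tree (f%2f, fhome, f.zshrc) are BackupPC's name mangling: each path component gets an "f" prefix and special characters are %-escaped. If you do end up walking the tree by hand, a little sketch to turn a mangled path back into the real one (pure string work, no BackupPC tools needed; the sample path is hypothetical):

```shell
#!/bin/sh
# Demangle a BackupPC 3.x pc/ path: strip the leading "f" on each
# component and undo the %XX escapes for "/" and "%".
demangle() {
    echo "$1" | awk -F/ '
    {
        out = ""
        for (i = 1; i <= NF; i++) {
            c = $i
            sub(/^f/, "", c)        # drop the "f" prefix
            gsub(/%2f/, "/", c)     # unescape "/"
            gsub(/%25/, "%", c)     # unescape "%"
            out = (i == 1 ? c : out "/" c)
        }
        print out
    }'
}

demangle "fhome/fmc/f.zshrc"
```

Combined with BackupPC_zcat on each f-file, that is essentially what a by-hand restore of a whole directory amounts to - which is why reinstalling BPC and using the web I/F is so much less painful.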
Re: [BackupPC-users] pc files ereadable (encoding?)
On Wed, 6 Sep 2017 00:08:34 +0200 Maxime Chupin wrote:
> Ah ouf :). And how can I uncompress them ?
> Thanks a lot !
Never tried it, but from what Les told you, I'd say something like:
BackupPC_zcat my_compressed_backup_file > original_file
JY
Re: [BackupPC-users] pc files ereadable (encoding?)
On Tue, 5 Sep 2017 23:55:50 +0200 Maxime Chupin wrote:

> Hi everyone,
>
> I'm facing a problem. I want to get back the files of a backup of a
> down server. I have mounted my disk on another machine, and I want to
> get the files.
> The files are stored (if I understand) in
> /PATHBACKUP/server/pc/localhost/147/f%2f/fhome/...
>
> For example, I want my .zshrc of my home. But the file is unreadable...
>
> cat /raid/server/pc/localhost/147/f%2f/fhome/fmc/f.zshrc
> […binary garbage snipped…]
>
> Whereas the file
>
> cat /raid/server/pc/localhost/backups
> 142full15016968001501709990713644369473241081
> 7128803680074561981041146628726210003
> 33055878370252865757601tar03.3.0
> 146incr15027336001502734326447742917546613780
> 17824273828312509390171000031393614606
> 141940527611tar13.3.0
> 147full15029928011503005137713977372254295106
> 71338537128108270774597363832210003
> 33342823872017135127601tar03.3.0
> 148incr150325200015032526089961905033762377
> 28809946882816169966981000390516146
> 75723115711tar13.3.0
> 149incr1503511200150351181711392022125506507
> 903089424756111909543810003677071250
> 38470970011tar13.3.0
> 150incr1503770400150377104811922164079898600
> 1158840374743100530033210003897366045
> 17562859711tar13.3.0
> 151
>
> has no problem. Is there a way to do what I want ?

Looks normal: the backups schedule file is in clear text, as it is read by BPC, while your file is compressed.

JY
Re: [BackupPC-users] Scheduling advice
On Fri, 1 Sep 2017 11:50:25 -0500 Les Mikesell wrote:

> Large, changing files can be a problem, but log files tend to be
> highly compressible.

Yup, I confirm: a large VM file (50G) on one laptop extends the backup time from a few hours to 2 days when it has been used :/

JY
Re: [BackupPC-users] Backing up the BackupPC pool
On Fri, 11 Aug 2017 07:12:30 +0200 Hannes Elvemyr wrote:

> I'm talking about BPC itself. Why? Let's take an example.
>
> 00:30 My nightly rsync script starts to sync pool to off-site storage.
> This night, there happens to be a lot of new data and it takes time.
> 01:00 BackupPC_Nightly starts cleaning the pool! Rsync is still
> running!
> 01:15 My nightly rsync script is done syncing

(disclaimer: I'm using v3) This isn't a problem; I've several machines that keep BPC busy for 22+ hrs with BackupPC_Nightly running behind; the only thing is that it makes the unfinished backups longer to achieve.

> This night I would end up with a corrupt copy since BPC wrote to the
> pool while I was copying it.

I never saw any corruption in former backups. From what I've put on the back of BPC, I can tell you it is _very_ reliable in any normal situation (power outage included; I exclude any hardware fault or failure, of course.)

> One suggestion to prevent BPC to touch
> the pool while copying it was to stop the BPC service temporarily.

I don't see why, unless you use v4 and it is known to corrupt in such a situation.

> Sounds good, but would that lead to new problems for BPC?

As long as your backups have ended, that shouldn't be the case. At home, my BPC server is an old machine that eats a lot of energy, so it is off most of the time. I do a backup every 4 days, happily colliding with running backups and nightly, and, as written above, I never saw any corruption occur (I recently had to reinstall 2 machines completely from a minimal installation then a BPC restore, so if there were any problems I would have met them.)

JY
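For anyone who does want the belt-and-braces version anyway, the "stop BPC while copying" idea is a few lines of shell; a sketch only - the service name, pool path, and off-site destination are assumptions to adapt:

```shell
# offsite_sync: quiesce BPC, copy the pool off-site, restart BPC.
# Meant to run from cron at 00:30, i.e. before the 01:00
# BackupPC_Nightly window of the example above.
offsite_sync() {
  systemctl stop backuppc || return 1
  rsync -aH --delete /var/lib/backuppc/ offsite:/backup/bpc/
  rc=$?
  systemctl start backuppc      # restart even if the copy failed
  return $rc
}
# e.g. crontab entry: 30 0 * * * root /usr/local/sbin/offsite_sync
```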
Re: [BackupPC-users] Backing up the BackupPC pool
On Fri, 11 Aug 2017 00:16:27 +0200 Hannes Elvemyr wrote:

> Would it be wise to stop/start BackupPC every night? Can it introduce
> any problem? What if my copy process (for instance rsync over Internet
> to an off-site storage) takes 1 hour, maybe 2 hours some days, will
> that interfere with BackupPC_Nightly?

Do you speak about BPC itself or its machine? If BPC: why, as it eats almost nothing while sleeping. If the machine (energy saving, I guess): it isn't recommended, as electronics and HDz really hate shocks, either electric (starting draws more current, as you've got to charge the capacitors, which produces an overall peak) or thermal, especially HDz.

Jean-Yves
Re: [BackupPC-users] Backing up the BackupPC pool
On Thu, 10 Aug 2017 11:26:26 +0800 Alexey Safonov wrote:

> you can use for example FreeNAS (ZFS based) which can sync snapshots.

You can even do better (depending on your IT dept size), using GlusterFS on top of ZFS, and synchronize with the remote site :) However, these solutions imply transmitting the whole shebang, so the available inet bandwidth might be the final judge (difficult to elaborate an answer with only scarce parts of the equation…)

JY
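To make the "sync snapshots" part concrete, here is a minimal sketch of ZFS incremental replication, the primitive FreeNAS builds on (the dataset name, snapshot names, and remote host are assumptions, not anyone's actual setup):

```shell
# replicate_pool PREV CUR: snapshot the (hypothetical) tank/backuppc
# dataset, then send only the delta since PREV to an off-site receiver.
replicate_pool() {
  prev=$1 cur=$2
  zfs snapshot "tank/backuppc@$cur"
  zfs send -i "tank/backuppc@$prev" "tank/backuppc@$cur" \
    | ssh offsite zfs receive -F tank/backuppc
}
# e.g.: replicate_pool 2017-08-09 2017-08-10
```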
Re: [BackupPC-users] Backing up the BackupPC pool
On Wed, 9 Aug 2017 16:29:49 -0600 Ray Frush wrote:

> A snapshot of the BackupPC Filesystem does not protect from gross
> hardware failure of the storage that destroys both the data and the
> snapshots.

I think he meant making a snapshot, then conveying it to his 2nd site.

JY
Re: [BackupPC-users] Backing up the BackupPC pool
On Wed, 9 Aug 2017 22:47:25 +0200 Hannes Elvemyr wrote:

I forgot: for either solution, you want to have a deep look at https://www.wireguard.com/ , a VPN solution faster than any of the competition, very easy to set up, and protected by PFS and elliptic-curve crypto.

JY
Re: [BackupPC-users] Backing up the BackupPC pool
On Wed, 9 Aug 2017 22:47:25 +0200 Hannes Elvemyr wrote:

> Hi!

Biscotte Hannes,

> I'm using BackupPC for all my machines and it's great! I would now
> like to protect my BackupPC pool somehow (if my BackupPC server
> crashes, gets stolen or burns up I don't want to lose the data).

Use an encrypted partition/HD to house the BPC pool|cpool.

> As I
> see it, I have at least two options:
>
> 1. Run a second instance of BackupPC off-site
>
> This of course creates a new second pool, but that could actually be
> an advantage if one of them somehow gets corrupted.

This would be ZE wise (and first) thing to do; think about the little inconveniences that will happen one day or another to your work building: fire, lightning strike, flooding, gas (or bomb) explosion, dragon attack, rabid customer, and worst of all: a mad Craig Barratt attack (well, he's already mad, so it's not a matter of "if" but really a matter of "when" ;-p) It will also give you the main benefit of rsync by transmitting only what's necessary (only your first backup will take more time.) Think about the whole pool size and the time needed to tunnel it elsewhere…

Jean-Yves
Re: [BackupPC-users] Question about v4
On Mon, 7 Aug 2017 10:17:41 -0700 Craig Barratt via BackupPC-users wrote:

> Sorry that it isn't very clearly stated

Nooo, there's no more sorry in stock… maybe a little forgiveness, at the lowest rate of $.91 (sub-prime credit allowed) ;-p)

> and it doesn't depend on the real client's inode (and
> inode numbers) on the file system being backed up.

Ahhh, NOW this is crystal clear; however, it wasn't in the aforementioned doc page. I may sound dumb/punctilious, but on my reading I really understood what I asked about (maybe it is because I always strive to see things the way a user will.)

> Therefore it's
> portable across different underlying file systems on the server.

Yeah, now let's port it to the Apple ][ CP/M card, then build a cluster of 120KB 5"1/4 diskettes and conquer the (known) world by backing it up ! :)) (yeah, I'm not that young anymore; maybe that's what explains the dumb/punctilious way of life.)

Thanks for your answer, Craig; it is now clear that BPC v4 "inode storing" is way different from the OS one.

Jean-Yves
Re: [BackupPC-users] Question about v4
On Mon, 7 Aug 2017 08:50:30 -0600 Ray Frush wrote:

> Jean-Yves-

Ray

> I believe you may have been looking at v3 documentation. BackupPC V4
> does _not_ make extensive use of hard links.
>
> See: https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0

Na, I'm on: http://backuppc.sourceforge.net/BackupPC-4.1.1.html

Searching "inodes" leads to this paragraph:

"Inodes for hardlinked files are stored in each backup tree. This makes backing up hardlinks accurate, compared to V3, and provides for consistent inode numbering across backups."

As there's a comparison with v3, it is of course targeting v4. This is why I ask, 'cos it collides with what you said. Sooo, I don't know what to think, 'cos moving a regular hardlink isn't a problem, as the FS takes care of the inode swapping; but IF v4 is doing its own not-hardlinking-but-much-like-it, moving a file whose inode reference is contained in another one would break this system (?)

JY

> --
> Ray Frush
> Colorado State University.
>
> On Sun, Aug 6, 2017 at 6:10 PM, B wrote:
>
> > Hi backuppcers,
> >
> > I'm gonna switch from v3 to v4 and have a question about it:
> >
> > * doc says hardlinked files' inodes are stored in each backup tree;
> > I guess that, the BPC partition being formatted in XFS, any
> > optimization (that might move any file) of this partition is out of
> > the question ?
> >
> > Jean-Yves
[BackupPC-users] Question about v4
Hi backuppcers,

I'm gonna switch from v3 to v4 and have a question about it:

* doc says hardlinked files' inodes are stored in each backup tree; I guess that, the BPC partition being formatted in XFS, any optimization (that might move any file) of this partition is out of the question ?

Jean-Yves
Re: [BackupPC-users] backuppc 4: rsync error: unexplained error (code 255) at io.c(629) [Receiver=3.0.9.8])
On Tue, 1 Aug 2017 22:20:23 -0400 Romain Pelissier wrote:

> Hi,
> I have checked the setting:
>
> $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath
> $argList+';
>
> and all seems to be fine but every backup tries to login on a host
> with the backuppc account... I am lost...

Beginning by not crossposting and multi-posting would be a nice start…

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devsm
On Fri, 21 Jul 2017 13:12:25 -0500 Les Mikesell wrote:

> There is a missing piece in terms of recreating the
> partitions/filesystems/raids to match the system you want to recreate,

OMFG, ZE missing link ? ;-p)

> unless you have already automated that with tuned kickstart files or
> all of your systems are identical. The ReaR tool I mentioned in
> another post will create a script to re-create an existing system.

As our server structures are always identical, local daemons excepted, we use another way to do so: boot on a FAI server, install a minimal system including rsync, restore the whole system using BPC, reboot: online & operational. This stays an exceptional procedure, as all server hardware goes through a 2-week burn-in before being put into production - most of the breakage comes from HDz &| SSDz. Just in case, partition tables are extracted and dumped on long-term redundant backup servers, and also printed to be inserted into our local paper documentation.

> For Windows systems, I'd use Clonezilla images as the base, but that
> does take extra time and disk space to maintain.

The PITA of a few w$ systems will be definitively eradicated as of 2018-Q3.

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devsm
On Sat, 22 Jul 2017 02:40:57 +1000 Adam Goryachev wrote:

> I think you want both snapshots on the local server, as well as BPC on
> a remote server. They each serve a different need.

Hmm, I'm not that sure; for the time being, snapshots will be kept and doubled with other BPC servers for daily "snapshots" of only the work directories. Time will tell if ZFS snapshots are mandatory or completely replaced by daily BPC, the current architecture being a 3-2-1 with redundant level 2.

> You might also want
> a image copy on a remote server, which is yet another different
> requirement.

I guess you mean for total reconstruction (?) At this time I have not found any significant gain of time between an image (which, BTW, takes a lot of time/bandwidth/disk space to achieve) and a BPC reconstruction done from a minimal OS installation. Not to mention that with a release in its last months of life, it is sometimes important to use backports to avoid a hard transition to the next release (hence forcing you to multiply images.)

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devsm
On Fri, 21 Jul 2017 09:27:41 -0500 Les Mikesell wrote:

> The quick fix here is to use a Mac with an external or network drive
> for time machine. If you aren't familiar with it, it does exactly

Among many others, Apple products are to stay out of the company.

> what you suggested with easy access for the user and filesystem tricks
> for efficiency. For a more enterprise flavor, NetApp fileservers have

I do not use that; all of our servers are home-built, using such things as Debian Linux, ZFS, XFS, GlusterFS, etc; this keeps the staff at a very good skill level and avoids being stuck/proprietary-dependent/contract-dependent when really bad things happen. My first goal was to avoid the current separate servers for snapshots, but all of the given answers are driving me toward a simple switch between snapshots and BPC on the same servers. Sometimes you need others' views to see what was obvious!

Thanks to all for your answers/comments ;)

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs
On Fri, 21 Jul 2017 06:29:46 -0700 Kenneth Porter wrote:

> A better solution would be a change to rsyncd to monitor its
> filesystem and remember which files were touched since the last
> backup.

Yep, this is much closer to what I imagined.

> Some filesystems have a "backdoor" like inotify that lets you
> get notified of files getting touched. rsyncd could log these and only
> consult this list, not the whole filesystem. You'd need some way to
> reset the list and force a full filesystem check.

Not necessarily, as files other than work files aren't important in this matter; the idea (which I forgot to allude to formerly :/) is to confine the modified version of BPC to ~/WORK or ~/Documents + ~/Pictures in order to lower the hourly resource consumption (and because that's all that's needed:)

In fact, I just realized that, given what I wrote above, the only things needed would be: a secondary list of directories to back up, and secondary schedule parms to achieve that. So maybe there's no need to modify anything: just launch (and stop) another instance of BPC from a crontab while feeding it different configuration files (IF it doesn't check for an already running instance before starting ?)

@GW Haywood: this would be limited to executive people who usually know what they're doing and are the only ones working on not-to-lose docs, i.e. big spreadsheets - the idea is to entirely pull any admin out of the restoration process, which isn't the case with snapshots.

JY
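The "check for an already running instance" part can live in plain shell around whatever launches the second instance; a hypothetical sketch - the pidfile path is made up, and the actual launch line is a placeholder, since a real second BPC instance would need its own config/log/run directories:

```shell
# hourly_guard [PIDFILE]: refuse to start a new hourly run while the
# previous one is still alive; record our pid otherwise.
hourly_guard() {
  pidfile=${1:-/var/run/bpc-hourly.pid}
  if [ -e "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    return 1                    # previous hourly run still active
  fi
  echo $$ > "$pidfile"
  # ... start the hourly BackupPC instance here ...
}
```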
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs
On Thu, 20 Jul 2017 17:27:30 +0200 Daniel Berteaud wrote:

> Because what you want to achieve is not really backups, but some kind
> of rotating snapshots. There are lots of different ways to do this
> (LVM, LVM-thin, btrfs, zfs etc..), and this is very dependent on the
> system hosting your data, which is not controlled by BackupPC.
> BackupPC is just a backup tool. You could configure more frequent incr
> (every hour), but the performance impact won't be the same as
> snapshots. This can be a solution depending on the amount of data you
> have to manage, but it's already possible, without any modification to
> BackupPC

Fair enough, I'm gonna watch this closely - thanks.

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs
On Thu, 20 Jul 2017 08:18:44 -0600 Ray Frush wrote:

> I believe you could do something like you propose with the current
> BackupPC by setting "IncrPeriod" to 0.04 (1/24 of a day). You'd
> have to make some interesting settings for "FillCycle" and
> "FullKeepCnt" to make it keep a usable schedule, but you could then
> have 'hourly' incrementals.

"Interesting settings" is the cornerstone of this.

> As Les Mikesell just pointed out, the downside would be that for large
> instances, you'd be doing a lot of fairly expensive (compute time)
> operations every hour to scan the file system for changes.
>
> I believe that FS snapshots are faster, and more efficient than
> BackupPC could ever be for this.

OK, fair explanation this time; too bad. I'll dive into the snapshot code to see if a timestamp check can be easily implemented - thanks.

JY
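For reference, those knobs sit in config.pl and would look something like this; a sketch only - apart from the IncrPeriod value Ray gives, the numbers are illustrative, not a tested schedule:

```perl
# ~hourly incrementals: IncrPeriod is in days, 1/24 ~ 0.04.
$Conf{IncrPeriod}  = 0.04;
$Conf{FullPeriod}  = 6.97;      # weekly fulls
$Conf{FillCycle}   = 24;        # v4: fill roughly once per day of hourly incrs (illustrative)
$Conf{FullKeepCnt} = [4, 0, 4]; # keep 4 recent fulls + 4 older, sparser ones
```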
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs
On Thu, 20 Jul 2017 09:17:29 -0500 Les Mikesell wrote:

> You can set the schedule to run as often as you like, but the
> underlying tools are going to have to traverse the whole directory
> tree to find the touched files, which you probably don't want to
> happen while you are working.

This stage just slows the machines down for about a minute, which is acceptable.

> The easy way to get this facility is to do your work on a
> Mac.

If I didn't want serious security, I'd use w$, which is about the same level…

JY
Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs
On Thu, 20 Jul 2017 10:26:18 +0200 Daniel Berteaud wrote:

> I think this is out of BackupPC's scope

Please develop, don't drop me dry: why is that? Why would adding kinda-Xtuple-fugitive-daily-snapshots of only the touched files be out of BPC's scope ? On the other hand, I see this as the missing complement to get a professional ubiquitous backup system.

JY
[BackupPC-users] I'd like to make a suggestion to the BPC devs
Hi Backuppcers,

My suggestion is a way to avoid using such things as FS snapshots during the day to prevent work losses. An addition to BPC could do the trick, preferably saving the result in another directory than the main one, by checking which files have been touched during the present day and saving them automatically; it could be triggered from a crontab. And before each complete backup, BPC would empty this daily directory for the next day.

This way, on the hypothesis of an hourly crontab, clumsy users would be able to recover their work very rapidly with at most one hour of loss - and the hourly backups, being confined to only the touched files, should be quite transparent/invisible/lightweight for them.

How about that ?

Jean-Yves
Re: [BackupPC-users] Backing up the server computer
On Sun, 16 Jul 2017 10:12:39 -0400 Bob Katz wrote:

> I appreciate that. Well, if I have to do a bare metal restore I would
> use a clone, it's safer, if it's recent.

I don't see where it could be safer; the worst you can get is some dangling files, if the install image has changed a lot between your original system installation and the restore. And as you said about the clone: "IF it's recent" - which is almost always not the case. This is where BPC takes its whole value: at worst, you'll restore yesterday's backup, which usually doesn't take more time than an image restore. In a production environment where today's work _must_ be saved, a combination of BPC and several snapshots a day does the trick, provided the FS / network core / network speed allows it in a reasonable time slice.

> My object of backing up the
> server is that hopefully I could get myself out of a disaster by
> digging through incrementals.

? My object in backing up the server is to recover it fully functional, with the minimal loss of data, in a minimum of time…

JY
Re: [BackupPC-users] Ran out of inodes
On Fri, 14 Jul 2017 15:11:36 +0300 Tapio Lehtonen wrote:

> Running BackupPC 3 on Debian Wheezy. Ran out of inodes on 250 GB
> filesystem, max inodes was 15 million.

Use the force, change to a better FS: XFS (with the inode64 switch on). E.g. a laptop 500GB HD filled @ 80% with many small pictures and dev files; df -i returns:

Filesystem       Inodes   IUsed     IFree IUse% Mounted on
/dev/sda2     388630464 1265847 387364617    1% /

XFS is also capable of raising the inode quantity in one command while the partition is mounted.

> Can the nightly cleanup now run
> and maybe release some inodes from the oldest backups?

Best way to know: test it…

> Since the filesystem is Ext4 I can not increase max inodes. Would it
> reduce the need of inodes if I reduced the number of backups to keep?

Google 'inode' to know exactly what it is.

> My guess is users have lots of e-mails stored and since those tend to
> be small they eat up the inodes.

This is entirely YOUR fault, 'cos before establishing a backup system, an admin has to analyze what the data is made of, with an eye on what he's gonna do with it. Counting the number of files and their sizes is backup's 1.0.1, while checking that you have enough room & inodes on the backup device is 1.0.2. Not to mention that some self-research & reading can help.

Jean-Yves
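The XFS side of this boils down to a couple of commands; a sketch, where the mount point is an example and the commented lines need root on an actual XFS filesystem:

```shell
# show_inode_headroom [MNT]: report inode usage for a mount point
# (defaults to /), same output as the df -i excerpt above.
show_inode_headroom() {
  df -i "${1:-/}"
}
# The two XFS knobs, runnable on a mounted filesystem:
#   mount -o remount,inode64 /var/lib/backuppc  # allow inodes anywhere on disk
#   xfs_growfs -m 25 /var/lib/backuppc          # raise inode space ceiling to 25%
```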
Re: [BackupPC-users] Backing up the server computer
On Sat, 15 Jul 2017 16:20:32 +1000 Adam Goryachev wrote:

> Actually, I think you will find that /proc, /dev, /sys, etc are
> actually different filesystems, and so will automatically be excluded
> by --one-file-system.

On 2nd thought, that looks logical from a FS point of view, and good to know.

JY
Re: [BackupPC-users] Backing up the server computer
On Fri, 14 Jul 2017 18:56:19 -0400 Paul Fox wrote:

Just a precision for B. Katz: I also have a script that creates an _INSTALLED_PKGS.txt file from the usual command (Debian), launched as a pre-backup command, to be able to easily reconstruct the full, exact working system from a minimal install.

JY
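A sketch of what such a pre-backup script might look like on Debian (the output path is an assumption; the file name matches the one mentioned above, and it could be wired to something like BackupPC's $Conf{DumpPreUserCmd}):

```shell
# dump_pkg_list [OUT]: record the installed-package list where the
# next backup will pick it up.
dump_pkg_list() {
  out=${1:-/root/_INSTALLED_PKGS.txt}
  dpkg --get-selections > "$out"
}
# Later, on the freshly reinstalled minimal system:
#   dpkg --set-selections < _INSTALLED_PKGS.txt && apt-get dselect-upgrade
```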
Re: [BackupPC-users] Backing up the server computer
On Fri, 14 Jul 2017 18:56:19 -0400 Paul Fox wrote:

> i confess i haven't been following this thread in all its gory detail,

The BackupPC god absolves you (although it is the BPC v3.x god, so you'll need to upgrade the confessional if you want to also be absolved by the v4.x one.)

> but i suspect that many folks do their backups onto a separately
> mounted disk. if you do that, then adding "--one-file-system" to the
> rsync args takes care of it: you can back up from '/', but only the
> root filesystem will be backed up. any other filesystems on that
> machine will also need to be backed up as separate shares, of course.

But this way you still back up unwanted directories, such as /tmp, /dev, /proc, etc. Starting at the disk root and excluding these allows tight control over what you want versus the rest, provided you need almost the whole system to be saved for whatever reason.

Jean-Yves
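The "start at / and exclude" variant can be written down in a few lines; a hedged sketch - the exclude list is an example rather than a complete one, and the destination is hypothetical:

```shell
# backup_root DEST: whole-system rsync minus pseudo/volatile trees.
backup_root() {
  rsync -aH --one-file-system \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp --exclude=/run \
    / "$1"
}
# Note: with --one-file-system, separately mounted trees (/home etc.)
# still need their own shares/runs, as Paul says above.
```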
Re: [BackupPC-users] Backing up the server computer
On Fri, 14 Jul 2017 18:22:54 -0400 Bob Katz wrote: > Oh boy I get it!!! I can't believe how stupid I was about that. Me too ;-p) > Well, doesn't this mean I have to establish a whole bunch of modules > with a different path for each module, in order to back up everything > EXCEPT the backup location? Maybe I should try a different method than > rsyncd You can still use '/', but that means you'll have to exclude all unwanted directories - I use BPC this way 'cos I really need the whole system to be backed up. Jean-Yves
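A sketch of that whole-system setup for the rsync method (the exclude list is illustrative; adjust it to what you really want kept out, and above all exclude the pool itself):

```perl
# Back up '/' but keep pseudo and volatile filesystems out of the pool.
$Conf{RsyncShareName}     = ['/'];
$Conf{BackupFilesExclude} = {
    '/' => ['/proc', '/sys', '/dev', '/tmp', '/run', '/var/lib/backuppc'],
};
```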
Re: [BackupPC-users] How to backup a laptop over internet
On Tue, 30 May 2017 09:13:14 +1000 Adam Goryachev wrote: > Wasn't there a recent extension to ClientNameAlias which allows > multiple addresses to be used, which will be tried (in order), and the > first found would run the backup? Dunno, Adam, I'm still on v3 for production use and have no time to explore v4 (is it rock-stable now?); this might change when Debian Stretch becomes the new stable (around mid-June?), but I must look into a solid version migration before any change can take place. > This seems a perfect use case for that, adding the local IP and the > remote IP as the two aliases, hence only a single "host" in backuppc, > consistent ordered backups all in one place. This is really a nice feature, especially in this case. > PS, this probably only applies to BPC4.x and I forget what version the > OP is using. He did not state it, just that he uses Mageia 5, which, from their site, is a fork of Mandriva - it might work the same as RH: one click and it installs a hexabyte "package" with no other choice *<;-{p) Jean-Yves
Re: [BackupPC-users] How to backup a laptop over internet
On Mon, 29 May 2017 14:21:18 -0500 Les Mikesell wrote: Hi Les, … > to access the web interface. A good starting point would be looking > at point-to-point configurations of OpenVPN. You'll need either a > stable IP address for your home network or at least something that > will work with a dynamic DNS service. Yup, I forgot this one! Also note that using WireGuard instead of OpenVPN can give you speeds that OpenVPN can only dream of:
=== EXCERPT FROM WG ML (20170513)
Using the Infiniband network directly, iperf's performance is 21.7 Gbit/s (iperf maxes out the CPU at the receiver, even when using 8 threads).
Hardware used:
- Xeon E5520 @2.27GHz (2 CPUs, 4 cores each)
- Mellanox ConnectX IB 4X QDR MT26428
Versions used:
- Debian jessie
- Linux 3.16.43-2
- Wireguard 0.0.20170421-2
- iperf 2.0.5
- Mellanox ConnectX InfiniBand driver v2.2-1
=== /EXCERPT FROM WG ML (20170513)
JY
Re: [BackupPC-users] How to backup a laptop over internet
On Mon, 29 May 2017 20:56:01 +0200 Xuo wrote: > Hi, Hi Xuo, > My pc is running Mageia5. > I don't understand how a VPN connection could help solving my problem. > Could you please explain more in details ? A VPN means either that a roadwarrior (your itinerant laptop) can connect and benefit from all the machines of your LAN, or that 2 LANs are connected together (eg: enterprise branches.) This means, when you're connected to it, that your backuppc server can reach your laptop in a secure mode (encrypted and possibly compressed) as easily as if it were connected to the LAN @home. As said before, because of the VPN's nature (no messing within the same IP segment), you'll have to create 2 accounts on the server: one with the laptop's DNS name for LAN connections - ie: mylaptop.zatiluvsomuch, let's say it == 192.168.0.25 (or directly 192.168.0.25 if you do not have a home DNS) - and one based on the (fixed!) VPN IP address you use when away from home, ie: 172.16.0.25. Provided you back up daily @2000 AND your laptop is always connected at this time whether you're home or away: 192.168.0.25 will be saved if you're @home, 172.16.0.25 will be saved when you're away from home, and nada will be saved, wherever you are, if you're not connected. (Although there are rumors of the backuppc team working on a way to back up data by telepathy, we can't give them much credit as they were issued by the nsa - furthermore, this would require you to read all files line by line.) In short, since a VPN gives you a transparent connection, the only difference from a pure LAN setup is the 2nd account needed. > I understand the proposal from Johan Ehnberg setting 2 hosts, but I'd > prefer to avoid this. You can't, or more likely, if you do so, you will enter the "dark side" of routing (routing only part of the same IP segment), which is far from easy, prone to (huge) errors and absolutely discouraged for beginners.
(but it can be a nice way to learn how to route correctly; if you choose this way, make sure nothing important gets out of your LAN.) Jean-Yves
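With BackupPC 3.x, the two accounts boil down to two entries in the hosts file, each per-host config then pinning its own address (host names below are made up; the addresses are the examples from this thread):

```
# /etc/backuppc/hosts -- one entry per reachable address;
# each host's own config.pl then sets $Conf{ClientNameAlias} to
# 192.168.0.25 (LAN) or 172.16.0.25 (VPN) respectively.
# host           dhcp  user
mylaptop-lan     0     backuppc
mylaptop-vpn     0     backuppc
```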
Re: [BackupPC-users] How to backup a laptop over internet
On Sun, 28 May 2017 21:21:32 +0200 B wrote: > And this won't work, even if you know in advance the IP where your > laptop can be reached, unless you open a hole in the firewall &| adsl > box/modem in the home that hosts you. Forget about that, I was thinking about a very different thing; my bad. JY
Re: [BackupPC-users] How to backup a laptop over internet
On Sun, 28 May 2017 20:50:55 +0200 Xuo wrote: > Le 22/051/2217 à 198.44, Xuo a écrit : > > Hi, Hi, > > I am using BackupPC on my laptop when it is connected to my local > > network at home. > > But, when I'm not at home (during the work week), I don't know what > > to do. I'm not in some hotels, ... but in another flat (that I rent), > > where I have a static IP and where I could open some ports if > > necessary (ssh, ...). If you're under Linux, use a VPN, such as: https://www.wireguard.io/ ; under m$, dunno (& don't care very much.) > > I think I could define my laptop ip (on my BackupPC server > > configuration) using my remote ip address, but this wouldn't work > > when I come back home. This means you'll be obliged to reach your server through the VPN connection even when in a home-home configuration; so your VPN MUST be able to connect like that, and preferably without eating your CPU. And this won't work, even if you know in advance the IP where your laptop can be reached, unless you open a hole in the firewall &| adsl box/modem in the home that hosts you. > > What should I do to perform these backups from my server (located at > > home) to my laptop located either on my local network or remotely > > (on internet). Notice that if you usually run backuppc @1600Z, you'd better already be connected at that particular time, unless you wanna shift it. Under m$, this should be the same, however it depends greatly on your user's permissions & groups, which can be quite a PITA depending on your setup and user type - and notice that almost all m$ VPNs have big security problems. Other possibility: in case of breakdown, ask the nsa for a copy taken from their last incursion ;-p) Jean-Yves
Re: [BackupPC-users] XferLOG.z is all trashed
On Wed, 22 Mar 2017 23:57:56 -0500 Les Mikesell wrote: Hi Les, sorry for the delayed follow-up. You were absolutely right: my (bad) habit of inserting echoes into root's .bashrc (to remind me I have things to do on the machine when I ssh into it) was the reason it failed; commenting out all the echo lines restored normal behavior, thanks! :) Jean-Yves > The remote system is sending some output before starting rsync over > the ssh login. There is probably something being started in > /etc/profile, /etc/bashrc or root's .profile or .bashrc that is > complaining about not having TERM set - which it won't with ssh trying > to start a command remotely. > On Wed, Mar 22, 2017 at 11:48 PM, B wrote: > > > > Got remote protocol 1297237332 > > Fatal error (bad version): TERM environment variable not set. > > > > Sent exclude: /1.8EB_01 > > … > > > > > > The bad listing issued is all trashed (many characters like little > > boxes w/ points in, no text alignement, almost unreadable). > > > > I see the "Got remote protocol" is wrong, but can't figure why; I > > do not understand either the "TERM envvar not set" as it is set! > > > > as the BPC user: > > $ echo $TERM > > xterm > > > > Does anybody have a clue about that ?
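For anyone keeping the reminder habit, a guard like this in root's .bashrc (the message text is a placeholder) keeps the echoes for interactive logins while leaving BackupPC's `ssh host rsync ...` invocations silent:

```shell
# Only print reminders when the shell is interactive: $- contains 'i'
# for interactive shells, and not for 'ssh host command' invocations.
case $- in
    *i*) echo "TODO: finish the RAID swap on this box" ;;
    *)   : ;;   # non-interactive: stay silent so rsync's protocol survives
esac
```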
[BackupPC-users] XferLOG.z is all trashed
debian jessie === Hi folks, I had this problem some time ago; it disappeared I don't know how, and now it's back :/ The machine was left alone more than a month, but I don't recall any major intervention on it. ssh is working normally and is ok w/ BPC. When I run BPC from the command line:

/usr/share/backuppc/bin/BackupPC_dump -v -f computer.domain

on a good machine, I get this:

CheckHostAlive: returning 0.196
full backup started for directory / (baseline backup #82)
started full dump, share=/
Running: /usr/bin/ssh -q -x -l root srv0ae1.local /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --checksum-seed=32761 --ignore-times . /
Xfer PIDs are now 11441
xferPids 11441
Got remote protocol 31
Negotiated protocol version 28
Checksum caching enabled (checksumSeed = 32761)
Sent exclude: /1.8EB_01
…

on the bad computer (the BPC srv itself), that:

CheckHostAlive: returning 0.059
full backup started for directory / (baseline backup #35)
started full dump, share=/
Running: /usr/bin/ssh -q -x -l root srv0860.local /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --checksum-seed=32761 --ignore-times . /
Xfer PIDs are now 11524
xferPids 11524
Got remote protocol 1297237332
Fatal error (bad version): TERM environment variable not set.
Sent exclude: /1.8EB_01
…

The bad listing issued is all trashed (many characters like little boxes w/ dots in them, no text alignment, almost unreadable). I see the "Got remote protocol" is wrong, but can't figure out why; I don't understand the "TERM envvar not set" either, as it is set! As the BPC user:

$ echo $TERM
xterm

Does anybody have a clue about that? Jean-Yves
Re: [BackupPC-users] Archive backup with encryption
On Fri, 9 Dec 2016 14:01:01 +0100 Peter Viskup wrote: > Dear all, Dear alone, > would like to ask whether it would be possible to use BackupPC to > store encrypted archives of sensitive directories from the clients > *only*. The easiest way I found to do so is to have each client encrypt its own sensitive data and back up the encrypted directory. … > Is it possible to achieve that with BackupPC? Dunno. Jiff
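A minimal sketch of that per-client scheme (paths and the passphrase handling are assumptions; a real setup would use a key file or public-key encryption): the client encrypts the sensitive tree into a staging directory, and BackupPC is pointed at the staging directory only.

```shell
# Encrypt a sensitive directory before BackupPC ever sees it.
SRC="${SRC:-/tmp/demo-secret}"        # stand-in for the real data dir
DST="${DST:-/tmp/demo-encrypted}"     # the only share BackupPC backs up
mkdir -p "$SRC" "$DST"
echo "confidential" > "$SRC/note.txt" # demo payload for this sketch
tar -C "$(dirname "$SRC")" -cz "$(basename "$SRC")" \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:changeme \
  > "$DST/secret.tar.gz.enc"
```

Restoring is the same pipe reversed: `openssl enc -d ... | tar -xz`. The server only ever pools opaque blobs, so BackupPC's own deduplication across clients is lost for these files - the price of server-blind encryption.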
Re: [BackupPC-users] Garbage into XferLog.z - [SOLVED]
On Thu, 14 Jan 2016 10:14:59 +1100 Adam Goryachev wrote: > On 14/01/16 07:22, Bzzzz wrote: > > Hi list, > > > > First, I already have had this problem for this particular machine > > when I wanted it to be a client (garbage into NewFileList). > > It runs under Debian sid 64bits, just like half of my park > > (1/2: sid 64bits & 32bits, 2/2: jessie 32bits). > > > > Today, it becomes the backuppc svr and I still have problems > > (NewFileList is now empty). > > > > The main problem is: > > /usr/share/backuppc/bin/BackupPC_zcat pc/backuppcsvr.domain/XferLOG.z > > only returns garbage (a bit as if the file had a different encoding, > > which is not possible as all machines work with UTF-8); so, > > no backup is made from the server itself:( > > > > * Where could it come from? > > > > * How to fix that? > > > Does this happen for all machines you are backing up, or just one? > Can you provide a copy of the "garbage" file (ie, attach the xferlog.z > if it isn't too big). Apparently I spoke too fast: I rebooted the backuppc svr after a maintenance stop this morning, and to my surprise I just noticed that all machines are ok on the backuppc I/F. So this weird problem was probably caused by an electric glitch while installing the perl pkgs &| the backuppc pkg, as I reinstalled them all five days ago. JY
Re: [BackupPC-users] Garbage into XferLog.z
On Thu, 14 Jan 2016 10:14:59 +1100 Adam Goryachev wrote: Hi Adam, I just reinstalled perl and all of backuppc's dependencies; unfortunately, this doesn't change anything about XferLOG.z :( (no sign of HD degradation.) JY
[BackupPC-users] Garbage into XferLog.z
Hi list, First, I already have had this problem for this particular machine when I wanted it to be a client (garbage into NewFileList). It runs under Debian sid 64bits, just like half of my park (1/2: sid 64bits & 32bits, 2/2: jessie 32bits). Today it becomes the backuppc svr and I still have problems (NewFileList is now empty). The main problem is: /usr/share/backuppc/bin/BackupPC_zcat pc/backuppcsvr.domain/XferLOG.z only returns garbage (a bit as if the file had a different encoding, which is not possible as all machines work with UTF-8); so no backup is made from the server itself:( * Where could it come from? * How to fix that? Jean-Yves
Re: [BackupPC-users] Automatically spot-checking backups?
On Thu, 29 Oct 2015 13:31:34 -0400 Dave Sill wrote: > Is there any reasonable way to do this for BackupPC? Is there an API? Maybe a better way to get a deep check without esoteric fiddling would be to raise $Conf{RsyncCsumCacheVerifyProb} and enable checksum caching (if your distro doesn't set it up by default). Jean-Yves
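In config.pl terms the suggestion looks like this (the probability is illustrative):

```perl
# With checksum caching enabled (the --checksum-seed=32761 flag seen in
# the rsync command lines elsewhere in this thread), re-verify a larger
# fraction of cached checksums against the real file data on each full.
$Conf{RsyncCsumCacheVerifyProb} = 0.05;
```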
Re: [BackupPC-users] Slow transfer via rsync?
On Mon, 28 Sep 2015 14:55:35 +0200 Christian Völker wrote: > Hi all, Hi alone ;-p) … > Why is it still so slow? I recently ran into such a problem with an old laptop used as an HTTP svr that had a dead battery. That was the cause, and the problem was gone after a reboot (sshing into it took several minutes instead of 2s). Most probably an electrical glitch messed it up. So maybe a reboot could fix your problem as well. JY
Re: [BackupPC-users] Slow transfer via rsync?
On Mon, 28 Sep 2015 05:41:31 +0200 Christian Völker wrote: > >> When calculating with a 5Mbit/s link 27GB should be transferred > >> within ~15hours. Here, the initial full backup is not yet done and > >> is already running more than 24hours! > > Make sure you enable compression in only one place (either > > backuppc or openvpn, NOT both). > How should this do any harm? Simply because trying to compress an already compressed stream makes it bigger. > The router is a totally different pc. See above. > Shouldn't the cpu usage be high at least if OpenVPN settings would > interfere here? Depends on CPU power; do as Patrick suggested: check your backuppc svr RAM (& CPU) usage. > > And as you have a fixed IP address, why not simply use a regular SSH > > connection instead of openvpn? (I've seen it ridiculed by a simple > > remote desktop connection on w$…) > What do you mean? I do not want to use direct connections as I have to > do a port forward in this case. So, you have a Ferrari in the garage but keep using an old sedan… Sweet memories, or to pass the time somehow? ;-p) JY
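The double-compression point is easy to check on incompressible input (exact sizes will vary; only the direction matters):

```shell
# A second gzip pass finds no redundancy left to remove, so it can
# only add container overhead and the file grows slightly.
head -c 1000000 /dev/urandom > /tmp/blob   # already-incompressible data
gzip -c /tmp/blob    > /tmp/blob.gz
gzip -c /tmp/blob.gz > /tmp/blob.gz.gz
wc -c /tmp/blob /tmp/blob.gz /tmp/blob.gz.gz
```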
Re: [BackupPC-users] Slow transfer via rsync?
On Sun, 27 Sep 2015 22:48:43 +0200 Christian Völker wrote: > Hi guys, Hi Chris, … > The connection is now a 5Mbit/s (uplink) leased line with fixed IP > (connected through OpenVPN to the BackuPC host). The backup takes > ages! How much is the downlink? (rsync needs a good deal of bi-directional exchanges) > When calculating with a 5Mbit/s link 27GB should be transferred within > ~15hours. Here, the initial full backup is not yet done and is already > running more than 24hours! Make sure you enable compression in only one place (either backuppc or openvpn, NOT both). And as you have a fixed IP address, why not simply use a regular SSH connection instead of openvpn? (I've seen it ridiculed by a simple remote desktop connection on w$…) Jean-Yves
Re: [BackupPC-users] [CentOS 7] backuppc is not working
On Thu, 06 Aug 2015 20:31:27 -0700 yashiahru wrote: Hi, > 1st time > after installation and config a apache user for backuppc > when i visit http://localhost/BackupPC > the login page is downloaded Your browser offering a download usually means the script file is sent by the http server instead of being executed (so either the fastcgi server is not started or there is a configuration problem [often the socket perms].) You could also try using nginx (much better memory management and lower consumption than apache) with fcgiwrap. Jean-Yves
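A minimal nginx location for that setup might look like this (the socket path and CGI path vary by distro, so treat both as assumptions to verify locally):

```nginx
location /BackupPC {
    # execute the CGI through fcgiwrap instead of serving its source
    include        fastcgi_params;
    fastcgi_pass   unix:/run/fcgiwrap.socket;
    fastcgi_param  SCRIPT_FILENAME /usr/share/backuppc/cgi-bin/BackupPC_Admin;
}
```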
Re: [BackupPC-users] prevent full OS partition when data disk fails
On Tue, 14 Jul 2015 11:40:58 + Jürgen Depicker wrote: > Unmounting /dev/sdb1 while Backuppc > service is running is not possible either, > so my other option would be > to shutdown one of the virtual disk servers which I don't really want > to do... Greetings, J Simpler answer: IF you left the original /var/lib/backuppc as is (read: with its original directories), then you're doomed and need to stop backuppc, umount the HD (virtual or not) and remove those directories manually, so that backuppc cannot start when the other HD isn't mounted. The only other solution I see would be to run a script that checks whether your HD is mounted (before starting any backup) and returns a value ≠ 0, which will prevent backuppc from launching any backup (see: $Conf{DumpPreUserCmd} & $Conf{UserCmdCheckStatus}.) So your script would be something like (mine is encrypted and not alone, hence the "encfs" first search):

#!/bin/sh
MYHDTHATISMOUNTEDFORBACKUPPC=`grep encfs /etc/mtab | grep 1.8TB_USB_01`
if [ "$MYHDTHATISMOUNTEDFORBACKUPPC" = "" ] ; then
    # Operator is a FBFH that forgot to mount the backuppc HD
    exit 1
else
    # Operator is not a lousy drunk
    exit 0
fi

JY
Re: [BackupPC-users] prevent full OS partition when data disk fails
On Tue, 14 Jul 2015 10:31:42 + Jürgen Depicker wrote: > My setup: all virtualized; all backups stored in /var/lib/backuppc , > but that is /dev/sdb1 mounted there. So if that drive fails, I'm > pretty sure Backuppc will fill up my / partition recreating the backup > in the then empty /var/lib/backuppc . How can I prevent this? Hi Jürgen, It'll fail - and it's easy to test: unmount your HD, make sure you do not have the original directories in /var/lib/backuppc, and restart backuppc. JY
[BackupPC-users] Weird failure
Debian sid Backuppc V. 3.3.0-2 = Hi list, The BackupPC svr (Debian sid) backs up 3 machines besides the server: 1 is Debian oldstable (squeeze), OK; 2 are Debian sid, but one of them is failing. The message is: backup failed (fileListReceive failed) All machines have exactly the same locale setup (a mix between fr_FR.UTF-8 and C). The problem is, everything's alright: the key's fingerprint has been accepted and /usr/bin/ssh -l root machine.domain ls is working well, but I've got garbage into $TOPDIR/pc/machine.domain/XferLOG.z and therefore into XferLOG.bad.z instead of regular text, such as:

/usr/share/backuppc/bin/BackupPC_zcat XferLOG.bad.z
…
/changelog.gz]8�h:U:opyrightC.AU�)hangelog.Debian.gza:gmp10/README.DebianN��S� copyright�:hangelog.Debian.gz%2 �S:samba-libs/NEWS.Debian.gz��>U� copyright�:hangelog.Debian.gz�Q��>U:exo-utils/changelog.gz]�!D�P:NEWS.Debian.gz'z1gN: copyright� 7y;R:hangelog.Debian.gz�n T:geoip-database/copyright��+U�hangelog.Debian.gzG:libbind9-90/copyright'i�T�hangelog.Debian.gz�K:ves/lives-O�weede�GETTING.STARTED.gz,�README.multi_encoder7
fileListReceive() failed
Done: 0 files, 0 bytes
Got fatal error during xfer (fileListReceive failed)
Backup aborted by user signal
Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

As I also use some other Perl programs, I suspect the svr & the good machine have more recent Perl module(s) (I used CPAN on them), but I don't know which ones need to be upgraded - well, this is just a first guess, as the older machine can be backed up without problem. Jean-Yves