Re: [Bacula-users] Block checksum mismatch on file storage
Hello,

Yes, it is clear that one can do read-only tests that do not destroy data. However, in this case, it seems to me more useful to do read/write (it is actually write/read) tests, as it appears that the problem is more likely in the write ... I have never heard of a non-destructive read/write test, which I assume reads and then rewrites the disk. Although that is clever and could be useful, in this case it sounds risky to me on a disk that seems to be failing.

Best regards,
Kern

On 06/29/2014 09:04 PM, John Stoffel wrote:
> Kern> 3. Run read/write disk tests on your USB disk (note: this will
> Kern> destroy any existing data).
>
> This isn't quite right. You can run read-write tests on a quiescent
> filesystem (i.e. unmounted) without problems:
>
>     badblocks -svn /dev/sd?
>
> will scan the entire disk using non-destructive read-write mode. But as
> Kern said, check your logs as well.
>
> John

--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
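As a hedged sketch of the non-destructive badblocks run John describes: the device name /dev/sdX below is a placeholder, and the guard keeps the script from touching anything unless you point it at a real, unmounted disk.

```shell
#!/bin/sh
# Sketch of the non-destructive read-write test discussed above.
# /dev/sdX is a PLACEHOLDER -- point DEV at your (unmounted!) USB disk.
DEV=/dev/sdX

if [ ! -b "$DEV" ]; then
    echo "no block device at $DEV - edit DEV before running"
else
    # Make sure no filesystem on the disk is mounted.
    umount "$DEV"?* 2>/dev/null

    # -s show progress, -v verbose, -n non-destructive read-write mode:
    # each block is read, overwritten with test patterns, verified, and
    # the original contents are then written back.
    badblocks -svn "$DEV"
fi
```

Note that `-n` still refuses to run on a mounted device, which is badblocks' own safeguard against corrupting a live filesystem.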
Re: [Bacula-users] How big is an Enterprise?
On 06/30/2014 06:42 AM, keel...@spamcop.net wrote:
> Quoting Kern Sibbald k...@sibbald.com:
>> Hello Graham,
>>
>> Thanks for your full disclosure. To answer your question: a *well*
>> tuned Bacula community Director can probably handle between 1000 and
>> 1500 normal-size jobs per 12-hour backup period. A normal job is a
>> normal Linux or Microsoft server that would include a few terabytes
>> (say up to 10) of data per server and perhaps, on average, 1 million
>> files per server. If the data volume goes up significantly, one may
>> need to consider multiple Storage daemons, as there is a certain data
>> throughput that an SD will support. The issues with the Director are
>> mostly about the total number of files (i.e. the catalog). A Bacula
>> Enterprise Director can probably handle 5000 similar jobs per day.
>>
>> For others on this list: I have been well aware of the Burp fork since
>> its beginning, and I have no problems with it. I have never considered
>> it a hostile fork, as is the case with Bareos. That said, there are
>> some minor issues with the Burp fork. If you (Graham) are now planning
>> to enter the Enterprise market or compete head to head with Bacula or
>> Bacula Enterprise, we should discuss. Nothing really serious and
>> nothing to worry about.
>>
>> It is also curious to say that Burp started out as a fork of Bacula.
>> Is there a point where a fork stops being a fork?
>
> Hello Kern,
>
> Thanks for your reply and the information. The question about how big
> an Enterprise is arose because somebody emailed me to suggest that 1000
> burp clients might cause a problem on the server due to the way they
> poll. It turns out that he doesn't actually have 1000 clients; it was
> just a number he made up, though it happens to be in the same area that
> you have suggested. I can't imagine any possibility of burp-1.x.x
> handling that amount of data, but I have been working on burp-2.x.x,
> which may (or equally may not) be able to approach the lower end. I
> have no way of testing it at that level.
>
> It will probably use more memory and CPU cycles on the client than
> bacula does, because it does variable-length chunking and checksumming.
> And burp does backups to disk only; I think it would be impossible to
> use it to back up directly to tape like bacula does. So I don't think
> that bacula and burp really occupy quite the same space.
>
> I don't have a plan to compete head to head or enter the Enterprise
> market; I just make my software by myself in my spare time. However, it
> is possible that somebody more business-minded than myself would want
> to help me do so in the future (no such person exists right now). In
> case that ever happens, I would like to know about the minor issues
> that you mention.
>
> Re: fork - I was using 'started out' to mean 'this is how burp
> started'. But I would imagine that it would stop being a fork once the
> last piece of the original code was removed. So you are right, it is
> still a bacula fork because some bacula code remains. Off the top of my
> head, that is: a) the Windows API parts; b) some of findlib (though I
> changed it a lot); c) base64.c.

Hello Graham,

Yes, if Burp polls, it might be hard to handle 1000 clients with a single Director. 1000 and more clients is a lot, and, like you, not something that I have running here. However, there are quite a few Bacula as well as Bacula Enterprise installations at that number. I have always seen Burp as a different way of backing up, with different objectives, as you point out on your web site. Given what you say above about your plans, it seems to me that it remains much the same. I will send my comments about the minor issues off-list ...
Best regards,
Kern
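For readers unfamiliar with the technique Graham mentions above, here is a minimal illustrative sketch of variable-length (content-defined) chunking with a simple rolling sum. This is not burp's actual algorithm; all the constants and names are made up for illustration. The point is that chunk boundaries depend on the data itself, so inserting bytes early in a file shifts only a few chunks instead of re-aligning every fixed-size block.

```python
# Illustrative content-defined chunking -- NOT burp's real algorithm.
# A rolling sum over the last WINDOW bytes decides chunk boundaries,
# so identical data regions produce identical chunks even after shifts.
import hashlib

WINDOW = 16        # bytes in the rolling window
MASK = 0x3F        # cut when (rolling & MASK) == MASK (~64-byte average)
MIN_CHUNK = 32     # never cut chunks smaller than this
MAX_CHUNK = 1024   # force a cut at this size

def chunks(data: bytes):
    start = 0
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i - start >= WINDOW:
            rolling -= data[i - WINDOW]   # slide the window forward
        size = i - start + 1
        if (size >= MIN_CHUNK and (rolling & MASK) == MASK) or size >= MAX_CHUNK:
            yield data[start:i + 1]
            start = i + 1
            rolling = 0
    if start < len(data):
        yield data[start:]                # final partial chunk

data = bytes(range(256)) * 8
pieces = list(chunks(data))
# Each chunk would then be checksummed and deduplicated by its hash:
sums = [hashlib.sha1(c).hexdigest() for c in pieces]
assert b"".join(pieces) == data
```

This also illustrates Graham's point about client cost: the client hashes every window position and every chunk, which is more CPU work than fixed-block backup.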
Re: [Bacula-users] Block checksum mismatch on file storage
I have seen this before with both disk and tape media, where a backup job with no errors cannot later be restored due to I/O errors. The simple answer is that media can fail, even when offline, which is one of the reasons we make more than one backup. It is possible, if cumbersome and expensive, to write to RAID-1 storage, which would practically eliminate this issue. If restoring from a secondary backup is not acceptable for whatever reason, then more fault-tolerant hardware is the only answer.

The alternative I would recommend, where restoring from a secondary backup is acceptable, is to set a volume size limit for disk volumes. Disk media usually fails in a small area, meaning that if there are multiple volumes on the disk, then only one (or a few) are likely to be affected. Huge volumes are at greater risk. A smaller volume size does not eliminate the problem, but it mitigates the risk at the expense of a somewhat larger database.

On 6/28/2014 3:30 AM, Kern Sibbald wrote:
> It is unlikely that this is a Bacula problem, especially considering
> your remark that you have used it for years and never had any problems.
> My best guess is that you have a bad medium or a bad connector. When
> writing, unless the OS reports an error, Bacula assumes the write is
> good. That is, it does not re-read the data. If you want to verify,
> then you must run a Bacula verify job after the backup job. I suspect
> that there is no difference between Bacula and rsync except that rsync
> is writing on a part of the media that is good and Bacula is writing
> elsewhere.
>
> There are several solutions (this is not exhaustive):
>
> 1. Get new media.
> 2. Use a more reliable form of backup device (USB is relatively
>    unreliable compared to SATA, ...).
> 3. Run read/write disk tests on your USB disk (note: this will
>    destroy any existing data).
> 4. Check your OS logs. They may show low-level errors that are not
>    reported to Bacula.
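The volume-size limit recommended above is set in the Pool resource of the Director configuration (bacula-dir.conf). A rough sketch follows; the specific values and the pool name are illustrative only, not a recommendation:

```conf
# Illustrative Pool resource in bacula-dir.conf -- values are examples.
Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"
  Maximum Volume Bytes = 5G     # cap each disk volume at ~5 GB
  Maximum Volumes = 200         # bound the total space used by the pool
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days
}
```

With a cap like this, a localized media failure typically costs one small volume rather than one huge one, at the price of more volume records in the catalog.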
> If you have such errors, you must eliminate them to have reliable
> backups (or, said the other way around: reliable backups *never*
> generate any OS device errors).
>
> Best regards,
> Kern
>
> On 06/27/2014 04:36 PM, advan...@posteo.de wrote:
>> Hi list,
>>
>> I have been using Bacula for years now and have had no trouble so far,
>> but now it has really hit me. It worked smoothly ... until restore (on
>> Ubuntu 12 LTS and Ubuntu 14, Bacula version 5.2.6). The files were on
>> a USB disk. To be on the safe side, I recreated everything on local
>> SATA again; same result. I do tons of rsync on that disk with no
>> problem, checked it with SMART, and upgraded the system, with no
>> change. If I run bacula-sd with -p, the restore is pulled through, but
>> the files are really corrupted. Luckily I have another backup, but
>> this is a really bad situation. How can I rely on Bacula's backups
>> now? (Rsync, for instance, tells me at once if a file is corrupt.) Do
>> I really have to do a checking restore on every job now? Could you
>> give me a hint what the problem might be?
>>
>> Thanks,
>> G.
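Kern's suggestion of a verify job after the backup could look roughly like the following in bacula-dir.conf; the job, client, fileset, and storage names here are placeholders that must match your own resources:

```conf
# Illustrative Verify job for bacula-dir.conf -- names are placeholders.
Job {
  Name = "VerifyBackup"
  Type = Verify
  Level = VolumeToCatalog   # re-read the volume, compare to the catalog
  Client = myclient-fd
  FileSet = "Full Set"
  Storage = File
  Pool = File
  Messages = Standard
}
```

A VolumeToCatalog verify re-reads what was actually written to the volume and checks it against the catalog records of the most recent matching backup, which is exactly the "checking restore" the original poster was asking about, without writing any files to disk.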
Re: [Bacula-users] How big is an Enterprise?
On 6/29/2014 11:07 AM, Kern Sibbald wrote:
> It is also curious to say that Burp started out as a fork of Bacula.
> Is there a point where a fork stops being a fork?

It is a philosophical question, of course, but I would say when it is no longer easily recognizable as a fork. It will always be a fork, technically, but may not always be recognizable as one without an investigation of its history.
[Bacula-users] Unintended effect of bconsole's reload command?
Hi list, hi Kern,

Recently I have been noticing that jobs I have temporarily disabled at the bconsole prompt have been 'randomly' showing back up in the list of scheduled nightly jobs. I just noticed that this happened to me again this morning, and determined that I had just modified a config file (I made some indentation and formatting changes; no actual backup configuration changes were made) and then issued the reload command.

Is the reload command supposed to enable all disabled jobs, or is this an unintended consequence?

Using v7.04

Thanks!

Bill
--
Bill Arlofski
Reverse Polarity, LLC
http://www.revpol.com/
-- Not responsible for anything below this line --
Re: [Bacula-users] Unintended effect of bconsole's reload command?
Hello,

2014-06-30 15:23 GMT+02:00 Bill Arlofski waa-bac...@revpol.com:
> Recently, I have been noticing that jobs I have temporarily disabled at
> the bconsole prompt had been 'randomly' showing back up in the list of
> scheduled nightly jobs. I just noticed that this happened again to me
> this morning, and determined that I had just modified a config file (I
> made some indents and formatting changes, no actual backup
> configuration changes were made), and then I issued the reload command.
>
> Is the reload command supposed to enable all disabled jobs,

Yes, it is.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net
Re: [Bacula-users] Block checksum mismatch on file storage
Kern> Yes, it is clear that one can do read-only tests that do not destroy
Kern> data. However, in this case, it seems to me more useful to do
Kern> read/write (it is actually write/read) tests as it appears that the
Kern> problem is more likely in the write ...

Absolutely. And hopefully, this way you don't corrupt the existing data on the disk, but you do force the disk to do a low-level re-allocation of bad blocks and sectors. But if you are seeing bad blocks on the disk, then it's time to start thinking about retiring it.

Kern> I have never heard of a non-destructive read/write test, which I assume
Kern> reads then rewrites the disk. Although that is clever and could be
Kern> useful, in this case it sounds to me risky on a disk that seems to be
Kern> failing.
Kern>
Kern> Best regards,
Kern> Kern
Kern>
Kern> On 06/29/2014 09:04 PM, John Stoffel wrote:
Kern> > Kern> 3. Run read/write disk tests on your USB disk (note: this will
Kern> > Kern> destroy any existing data).
Kern> >
Kern> > This isn't quite right. You can run read-write tests on a quiescent
Kern> > filesystem (ie unmounted) without problems:
Kern> >
Kern> >     badblocks -svn /dev/sd?
Kern> >
Kern> > will scan the entire disk using non-destructive read-write mode. But
Kern> > as Kern said, check your logs as well.
Kern> >
Kern> > John
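A hedged sketch of the log and SMART checks mentioned in this thread (smartctl comes from the smartmontools package; /dev/sdX is a placeholder, and the guard keeps the script inert until you point DEV at a real disk):

```shell
#!/bin/sh
# Sketch: check kernel logs and SMART health for a suspect disk.
# /dev/sdX is a PLACEHOLDER -- substitute your USB disk's device node.
DEV=/dev/sdX

if [ ! -b "$DEV" ]; then
    echo "no block device at $DEV - edit DEV before running"
else
    # Low-level I/O errors often appear in the kernel log even when the
    # application (Bacula, rsync) reports none:
    dmesg | grep -iE "$(basename "$DEV")|I/O error" | tail -n 20

    # Overall SMART health, plus the attributes that matter most for
    # failing media (reallocated and pending sectors):
    smartctl -H "$DEV"
    smartctl -A "$DEV" | grep -iE 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
fi
```

Non-zero and growing values for the reallocated or pending sector counts are a strong sign the disk should be retired, matching John's advice above.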