Re: [Bacula-users] How does Bacula determine which files have been backed up?

2020-12-20 Thread Phil Stracchino
On 12/20/20 2:54 AM, y...@kozinski.com wrote:
> 
>> On Dec 19, 2020, at 10:49, Phil Stracchino  wrote:
>>
>> If a volume errors during a job, Bacula will write an EOF and continue
>> on the next tape.  Everything SUCCESSFULLY written to that tape volume
>> prior to the error should still be restorable, but you should consider
>> manually flagging the errored tape read-only.
>>
> 
> Thanks, Phil. This is helpful. Is there any way to validate a completed 
> backup against the contents on disk? As it turns out, I think the errors I 
> have been getting have been related to a faulty LTO drive. I’ve switched out 
> the drive mid-backup and things seem to be proceeding normally, but I’m 
> wondering now if I should just restart the backup from the very beginning, 
> since I don’t think I can trust the data written by the faulty drive to be 
> correct without some way of verifying it.


What you probably want is a VERIFY job, which can be set up to re-read
all data stored by a Job and optionally verify the checksum and
modification date of every file.  This should at least TELL you
whether you need to restart the job.  Remember that if Bacula hits a
tape error while writing the blocks of a file, it marks the entire
block as failed and restarts from that block on the next volume.  It
is designed to be robust.
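
A minimal sketch of such a job, assuming resource names like "nas-fd"
and "LTO-Drive" that you would replace with your own:

```
# Sketch of a Verify job; all resource names here are placeholders.
Job {
  Name = "VerifyLastBackup"
  Type = Verify
  Level = VolumeToCatalog   # re-read the volume, compare to catalog records
  Client = nas-fd
  FileSet = "Full Set"
  Storage = LTO-Drive
  Pool = Default
  Messages = Standard
}
```

Level = DiskToCatalog instead compares the files currently on disk
against the catalog (checksums and modification dates), which answers
the "validate against the contents on disk" question directly.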


For my part, repeated LTO drive failures are the reason I abandoned
LTO as a backup medium.  I was spending more on replacing drives than
on replacing tapes.


> So if a volume within a backup set were lost or damaged, would I be able to 
> rebuild the catalog from the remaining tapes using bscan (or some other way), 
> and recover at least some of the files in the backup? Or would the entire 
> backup become useless?

If a volume in a backup set is lost or unusable, AND you have lost the
Catalog database, then yes, you can bscan the remaining volumes of the
backup set.  Otherwise, if the job completed successfully, but later one
of its volumes became unavailable, as long as you don't delete the job
from the Catalog you can still restore all of the data from all BUT the
unavailable volume.
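
If you do end up needing bscan, the invocation looks roughly like this
(volume names, config path, and device are examples for your setup):

```shell
# Rebuild catalog records from the surviving volumes.
# -s stores the records in the database, -m updates media records,
# -V takes the volume names separated by "|".
bscan -s -m -v -c /etc/bacula/bacula-sd.conf -V "Full-0001|Full-0003" /dev/nst0
```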

> Assuming I could somehow partially rebuild the catalog, are you saying that I 
> would not be able to do an incremental backup from the recovered catalog, and 
> would have to start again with a new full backup?

Well, you're conflating two different things there.  Yes, you can bscan
the remaining media to get a catalog of what is readable from them, in
order to do partial restores from the remaining good volumes.
 (Although, as noted, you should only have to do that if the job failed
or was deleted.  As long as it *completed*, all of its catalog records
are still there even if some volumes errored.)  You would probably have
to create your Restore job, then, before actually running it, edit its
BSR file to delete all references to the unavailable volume (and
possibly the last file before that volume, if it spans onto the missing
one).
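
For reference, a BSR file is plain text and each volume gets its own
stanza, so the edit amounts to deleting the stanza naming the missing
volume.  The values below are purely illustrative:

```
Volume="Full-0002"        # stanza for the unavailable volume: delete it
MediaType="LTO-6"
VolSessionId=7
VolSessionTime=1608451200
VolFile=0-128
FileIndex=1-3542
Count=3542
```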

But if the backup job was invalidated or deleted, or if it *failed*,
then you still can't use the bscan records as the basis for a future
incremental backup, because the records of what is on those volumes do
not constitute a completed Job.

However, in this case you may be able to solve your problem with a
VIRTUAL FULL job, which, where possible, combines the records of
multiple existing completed jobs into a "virtual" Full job record that
refers to the backup media of those jobs.
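
In bconsole that would look something like the following (the job name
is a placeholder, and note that a Virtual Full needs a Next Pool
configured for the job's pool to write the consolidated copy into):

```
*run job=BackupNAS level=VirtualFull yes
```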


-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] deb and rpm Bacula 9.6.7 ????

2020-12-20 Thread Mario Pranjic


On 12/20/20 2:41 PM, Jose Alberto wrote:

Hi.
Will they update the binaries soon?

https://www.bacula.org/packages/ 


Hi,

For Debian, try backports.

# dpkg -l | grep bacula
ii  bacula-bscan           9.6.7-1~bpo10+1  amd64  network backup service - bscan tool
ii  bacula-client          9.6.7-1~bpo10+1  all    network backup service - client metapackage
ii  bacula-common          9.6.7-1~bpo10+1  amd64  network backup service - common support files
ii  bacula-common-pgsql    9.6.7-1~bpo10+1  amd64  network backup service - PostgreSQL common files
ii  bacula-console         9.6.7-1~bpo10+1  amd64  network backup service - text console
ii  bacula-director        9.6.7-1~bpo10+1  amd64  network backup service - Director daemon
ii  bacula-director-pgsql  9.6.7-1~bpo10+1  all    network backup service - PostgreSQL storage for Director
ii  bacula-fd              9.6.7-1~bpo10+1  amd64  network backup service - file daemon
ii  bacula-sd              9.6.7-1~bpo10+1  amd64  network backup service - storage daemon
ii  bacula-server          9.6.7-1~bpo10+1  all    network backup service - server metapackage



Best regards,

--
Mario.




[Bacula-users] deb and rpm Bacula 9.6.7 ????

2020-12-20 Thread Jose Alberto
Hi.
Will they update the binaries soon?

https://www.bacula.org/packages/

Saludos.


-- 
#
#   Sistema Operativo: Debian  #
#Caracas, Venezuela  #
#


[Bacula-users] About the situation of Centos Stream ...

2020-12-20 Thread Jose Alberto
Hi.

Most of us know about the current situation with CentOS, Red Hat, and
CentOS Stream, and the possible fork by the CentOS maintainers (Rocky).

My question: how is Bacula evaluating this situation regarding the RPM
binaries?

https://www.bacula.org/packages/.../rpms/




-- 
#
#   Sistema Operativo: Debian  #
#Caracas, Venezuela  #
#


Re: [Bacula-users] Improving Bacula backup performance

2020-12-20 Thread Josh Fisher



On 12/18/20 4:26 PM, Philip Pemberton via Bacula-users wrote:

Hi all,

I'm trying to improve the performance of my Bacula backups. I have a
configuration with two machines:

   - "A" - a small server running a few web services.
(Intel Celeron J1800 2.4GHz dual-core)

   - "B" - a 9TB NAS with a Quantum Superloader LTO-6 SAS tape robot
(Intel Q6600 3GHz quad-core)


My issues are twofold:

   - Backups of "B" are done by the local Bacula FD/SD/DIR and spooled
onto disk to reduce shoe-shining. The spool limit is 50GB, on a
solid-state disk.
   It takes about 6 minutes to fill the spool file, and between 5 and 7
minutes to write it out to tape.
   This gives an effective data rate (quoted in the log) of about 50MB/s,
but the tape write rate (again, from the log) is closer to 100-120MB/s.

   - Backups from A to B take a long time to spool to disk, but the tape
phase goes as fast as the local backup. Bacula reports about 7MB/sec. I
assume something is slowing down the network traffic.


I have a couple of questions --

   - Re. local "B" backups. Bacula seems to be writing to the spool file,
then dumping it to tape. There's no spooling happening when the tape is
being written to.
   Is there any way I can set Bacula up to do "A/B" or "ping-pong"
buffering, or something better than its current 50% duty cycle?
   Otherwise it seems my only


I don't think so. It is best to make the data spool as large as 
possible. Spooling can slow down jobs that are larger than the spool 
storage size.
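
For what it's worth, the relevant directives live in the SD's Device
resource; the directory and size below are examples only:

```
Device {
  Name = "LTO-6-Drive"                 # example name; other required
  ...                                  # directives omitted for brevity
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200G            # as large as the SSD allows
}
```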

   - Re. slow transfers from "A" to "B". What can I do to speed up the
network transfers between the two?
   I find that SMB and NFS from my workstation to/from "A" or "B" is
quite fast, certainly far higher than the ~7MB/s I'm seeing (quoted in
the Bacula log). I'm not expecting to hit 100MB/s, but I was expecting
better than 7MB/s!


It is not likely a network issue. 'A' does not have a strong processor. 
When compression is enabled for a job, it is the client that performs 
the compression. Likewise for data encryption. Try disabling both 
compression and encryption for the 'A' job, if enabled.
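
For example, client-side compression is set in the FileSet's Options
block; commenting it out takes the load off the weak CPU on 'A' (all
names and paths below are examples):

```
FileSet {
  Name = "A-FileSet"            # example name
  Include {
    Options {
      signature = MD5
      # compression = GZIP      # runs on the client; disable to ease
    }                           # CPU load on "A"
    File = /srv/www
  }
}
```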


Also, the rate is based on the total run time for the job, including 
de-spooling attributes at the end. If attribute de-spooling is taking a 
long time, then database performance may be the bottleneck.

Both A and B are on the same gigabit network switch.
"A" (small server) has an Intel 82574L Ethernet controller.
"B" (NAS) has a Marvell 88E8056 Ethernet controller.


Thanks,
Phil
phil...@philpem.me.uk

