[Bacula-users] bpipe problems on ver. 15.0.2

2024-06-14 Thread Žiga Žvan
Hi!
I'm using Bacula to back up some virtual machines from my ESXi hosts. It worked 
on version 9.6.5 (CentOS), but I'm having problems on version 15.0.2 
(Ubuntu). The backup job ends successfully, yet the bacula-fd service gets 
killed in the process...
Has anybody experienced similar problems?
Any suggestions on how to fix this?

Kind regards,
Ziga Zvan


### Relevant part of conf ###
Job {
Name = "esxi_donke_SomeHost-backup"
JobDefs = "SomeHost-job"
ClientRunBeforeJob = "sshpass -p 'SomePassword' ssh -o StrictHostKeyChecking=no 
SomeUser@esxhost.domain.local /ghettoVCB-master/ghettoVCB.sh -g 
/ghettoVCB-master/ghettoVCB.conf -m SomeHost"
ClientRunAfterJob = "sshpass -p 'SomePassword' ssh -o StrictHostKeyChecking=no 
SomeUser@esxhost.domain.local rm -rf /vmfs/volumes/ds2_raid6/backup/SomeHost"
}


FileSet {
Name = "SomeHost-fileset"
Include {
Options {
signature = MD5
Compression = GZIP1
}
Plugin = "bpipe:/mnt/bkp_SomeHost.tar:sshpass -p 'SomePassword' ssh -o 
StrictHostKeyChecking=no SomeUser@esxhost.domain.local /bin/tar -c 
/vmfs/volumes/ds2_raid6/backup/SomeHost:/bin/tar -C 
/storage/bacula/imagerestore -xvf -"
}
Exclude {
}
}
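
As a sanity check, the bpipe "reader" command from the Plugin line above can be
run by hand outside Bacula; a minimal sketch using the same placeholder
credentials/host (the pipe into wc -c only confirms that tar streams data to
stdout):

# run the bpipe reader side manually and count the bytes it streams
sshpass -p 'SomePassword' ssh -o StrictHostKeyChecking=no \
  SomeUser@esxhost.domain.local \
  /bin/tar -c /vmfs/volumes/ds2_raid6/backup/SomeHost | wc -c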

### Bacula-fd state after backup finished ###

× bacula-fd.service - Bacula File Daemon service
 Loaded: loaded (/lib/systemd/system/bacula-fd.service; enabled; vendor 
preset: enabled)
 Active: failed (Result: signal) since Tue 2024-06-11 13:00:08 CEST; 20h ago
Process: 392733 ExecStart=/opt/bacula/bin/bacula-fd -fP -c 
/opt/bacula/etc/bacula-fd.conf (code=killed, signal=SEGV)
   Main PID: 392733 (code=killed, signal=SEGV)
CPU: 3h 33min 48.142s

Jun 11 13:00:08 bacula bacula-fd[392733]: Bacula interrupted by signal 11: 
Segmentation violation
Jun 11 13:00:08 bacula bacula-fd[393952]: bsmtp: bsmtp.c:508-0 Failed to 
connect to mailhost localhost
Jun 11 13:00:08 bacula bacula-fd[392733]: The btraceback call returned 1
Jun 11 13:00:08 bacula bacula-fd[392733]: LockDump: 
/opt/bacula/working/bacula.392733.traceback
Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
Orphaned buffer: bacula-fd 280 bytes at 55fad3bdf278>
Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
Orphaned buffer: bacula-fd 280 bytes at 55fad3bdff08>
Jun 11 13:00:08 bacula bacula-fd[392733]: bacula-fd: smartall.c:418-1791 
Orphaned buffer: bacula-fd 536 bytes at 55fad3beb678>
Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Main process exited, 
code=killed, status=11/SEGV
Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Failed with result 
'signal'.
Jun 11 13:00:08 bacula systemd[1]: bacula-fd.service: Consumed 3h 33min 48.142s 
CPU time.


### Trace output ###
Check the log files for more information.

Please install a debugger (gdb) to receive a traceback.
Attempt to dump locks
threadid=0x7f16f1023640 max=2 current=-1
threadid=0x7f16f1824640 max=2 current=-1
threadid=0x7f16f202d640 max=0 current=-1
threadid=0x7f16f2093780 max=0 current=-1
Attempt to dump current JCRs. njcrs=0
List plugins. Hook count=1
Plugin 0x55fad3b0bf28 name="bpipe-fd.so"
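
Since the dump complains that gdb is missing, installing it lets Bacula's
btraceback script capture a full backtrace the next time the FD hits the SEGV;
a minimal sketch for the Ubuntu host (paths follow the systemd unit above,
debug level 200 is an arbitrary choice):

# install gdb so the btraceback script shipped with Bacula can attach on the next crash
sudo apt-get update && sudo apt-get install -y gdb

# optionally reproduce with the FD in the foreground and debug output enabled
sudo systemctl stop bacula-fd
sudo /opt/bacula/bin/bacula-fd -f -d 200 -c /opt/bacula/etc/bacula-fd.conf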

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula and Oracle IAAS object storage

2022-05-09 Thread Žiga Žvan
Hi,
I implemented Bacula 9.6.5 two years ago. At that time there was no 
native support for Oracle Cloud as a storage backend, due to Oracle's 
implementation of the S3 API. Therefore I write to a local filesystem and let 
the Oracle Storage Gateway (an application provided by Oracle) upload the data 
to the cloud. I would be glad if you could tell me whether anything has changed 
in this regard.

I'm looking at this from Bacula Systems: 
https://www.baculasystems.com/wp-content/uploads/2020/Bacula_Enterprise_Edition_for_the_Cloud_s.pdf
The cloud storage backend for the Storage Daemon uses the S3, S3-IA, 
Azure, Google Cloud, AWS, Glacier and Oracle Cloud protocols, including https 
transport encryption, for a wide range of public and private cloud services.

Does the community version support Oracle object storage?
Has anyone tested this or used it in production?
Are there any plans regarding this?

Thx, best regards
Ziga Zvan

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-sd - file driver - cloud resource

2021-01-04 Thread Žiga Žvan
Sorry, this message was too large to get to the list. Just a suggestion 
to update the documentation regarding the purge/truncate commands...

Regards, Ziga


On 30.12.2020 21:32, Žiga Žvan wrote:


Hello Heitor,

I have an admin job (output below). I have followed the instructions in the 
Bacula 9.6.x manual (page 228 of 
https://www.bacula.org/9.6.x-manuals/en/main/main.pdf).

After some testing I discovered that:
a) the command "prune expired volume yes" changes the status of a volume to 
Purged and allows the volume to be used again (the data on disk does not change);
b) the command "purge volume action=all allpools 
storage=FSOciCloudStandardChanger" does not do anything.

I have changed this command to "truncate AllPools 
storage=FSOciCloudStandardChanger", as indicated in the link you provided. 
Now the data on disk actually gets truncated.


Thank you for all your help.
Perhaps someone could update the documentation; I think it is quite 
misleading.


Regards,
Ziga

PS: Because of my changes to the SD conf, I got an error on the truncate command 
("Is a directory"...). I reinitialized the catalog database to solve this.



Job {
  Name = "BackupCatalog"
  JobDefs = "CatalogJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl 
  RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob  = "/opt/bacula/scripts/delete_catalog_backup"
  #Prune
  RunScript {
    Console = "prune expired volume yes"
    RunsWhen = Before
    RunsOnClient= No
  }
  #Purge
  RunScript {
    RunsWhen=After
    RunsOnClient=No
    Console = "purge volume action=all allpools 
storage=FSOciCloudStandardChanger"

  }
  Write Bootstrap = "/opt/bacula/working/%n.bsr"
  Priority = 11   # run after main backup
}
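
For reference, a minimal sketch of the purge RunScript rewritten to use the
truncate command that actually freed the disk space (the console command
mirrors the one quoted above; everything else stays as in the job):

  #Truncate purged volumes instead of the purge action=all command
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "truncate allpools storage=FSOciCloudStandardChanger"
  }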

*truncate pool=jhost05-weekly-pool storage=FSOciCloudStandardChanger
Using Catalog "MyCatalog"
Connecting to Storage daemon FSOciCloudStandardChanger at 
192.168.66.35:9103 ...
3929 Unable to open device ""FSOciCloudStandard1" (/mnt/ocisg/bacula/backup)": 
ERR=file_dev.c:190 Could not 
open(/mnt/ocisg/bacula/backup/jhost05-weekly-vol-0475,CREATE_READ_WRITE,0640): 
ERR=Is a directory




On 30.12.2020 14:01, Heitor Faria wrote:


Hello Ziga,

If you are using AutoPrune=No, you must automate some sort of routine 
to prune expired volumes, otherwise it will affect Bacula's 
capability of recycling volumes.
You can achieve that (e.g.) using an Admin Job and the prune expired 
command, such as in: 
https://www.bacula.lat/truncate-bacula-volumes-to-free-disk-space/?lang=en


Regards,
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]

...



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-sd - file driver - cloud resource

2020-12-30 Thread Žiga Žvan
... |    0 | 0 | File1 |  14 |    4 | 2020-12-25 23:09:39 | 2,628,574 |   (earlier rows cut off in the archive)
| 635 | jhost05-weekly-vol-0635 | Used | 1 | 2,293,497,620 | 0 | 3,024,000 | 1 | 0 | 0 | File1 | 1 | 0 | 2020-12-30 12:55:26 | 3,023,721 |
| 636 | jhost05-weekly-vol-0636 | Used | 1 | 2,293,497,620 | 0 | 3,024,000 | 1 | 0 | 0 | File1 | 1 | 0 | 2020-12-30 12:59:05 | 3,023,940 |

+-----+-------------------------+------+---+---------------+---+-----------+---+---+---+-------+---+---+---------------------+-----------+

#Example of job output:

30-Dec 12:25 bacula-dir JobId 2304: Start Backup JobId 2304, 
Job=dc1-monthly-backup.2020-12-30_12.25.21_08
30-Dec 12:25 bacula-dir JobId 2304: Pruning oldest volume 
"dc1-weekly-vol-0354"
30-Dec 12:25 bacula-dir JobId 2304: Using Device "FSOciCloudStandard2" 
to write.
30-Dec 12:25 bacula-dir JobId 2304: Pruning oldest volume 
"dc1-weekly-vol-0354"
30-Dec 12:25 bacula-dir JobId 2304: Pruning oldest volume 
"dc1-weekly-vol-0354"
30-Dec 12:25 bacula-sd JobId 2304: Job 
dc1-monthly-backup.2020-12-30_12.25.21_08 is waiting. Cannot find any 
appendable volumes.

Please use the "label" command to create a new Volume for:
    Storage:  "FSOciCloudStandard2" (/mnt/ocisg/bacula/backup)
    Pool: dc1-weekly-pool
    Media type:   File1

#Relevant part of bacula-dir conf:

Pool {

  Name = dc1-weekly-pool
  Pool Type = Backup
  Recycle = yes                    # Bacula can automatically recycle Volumes
  AutoPrune = no                   # Prune expired volumes (the catalog job handles this)
  Action On Purge = Truncate       # Allow volume truncation
  Volume Use Duration = 3 days     # Create a new volume for each backup
  Volume Retention = 35 days       # one month
  Maximum Volume Bytes = 500G      # Limit Volume size to something reasonable
  Maximum Volumes = 6              # Limit number of Volumes in Pool
  Label Format = "dc1-weekly-vol-" # Auto label
  Cache Retention = 1 days         # Cloud specific (delete the local cache after one day)
  Maximum Volume Jobs = 1          # Write each backup to a new volume
  Recycle Oldest Volume = yes      # If Maximum Volumes is reached, prune the oldest backup

}

#Relevant part of bacula-sd conf:

Device {
  Name = FSOciCloudStandard2
  Device type = File
  Media Type = File1
  Archive Device = /mnt/ocisg/bacula/backup
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}






On 28.12.2020 22:18, Heitor Faria wrote:


Hello Ziga,

If you are using the Oracle Gateway there is no reason on Earth to use 
the S3 driver. It will only mess up your system, since I believe the SD 
even tries to connect to the bucket at startup.
If you want smaller volumes, just tune Maximum Volume Bytes and 
maybe Maximum Volume Jobs (e.g. one).


Regards,
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]



 Original Message 
From: Žiga Žvan 
Sent: Monday, December 28, 2020 04:07 PM
To: Heitor Faria ,bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] bacula-sd - file driver - cloud resource

Hello Heitor,

I am aware of that. I'm using the Oracle Storage Gateway to upload data to
the cloud (to an S3 bucket). For bacula-sd, the backup destination is an
ordinary filesystem mounted over NFS. However, I'm using the File driver with
a cloud resource (in order to have each volume broken down into file parts
and therefore optimize the amount uploaded).

Everything worked until Bacula decided to reuse an old volume. I'm getting
a strange error (cannot download Volume). In my opinion this is some sort
of bug, because the data is available on the NFS destination (see example
output from the cloudcache and backup folders below).

Could you check my configuration (below)? Could you confirm that this
is a bug and that I did not misconfigure something?
Regards,
Ziga


On 28.12.2020 14:46, Heitor Faria wrote:
>
> Hello Ziga,
>
> The Bacula Community S3 Plugin was built for Amazon S3 protocol.
> I read that Oracle built some sort of S3 emulator for its cloud, but
> since Bacula Community is a free software, it comes without warranties
> or support.
> You could consider running the bacula-sd in debug mode to troubleshoot
> the problem. The Driver is not working properly.
> Bacula Systems already developed a specific plugin for Oracle Object
> Storages, available only in the Enterprise edition.
>
> Regards,
> --
> MSc Heitor Faria
> CEO Bacula LatAm
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
>
> América Latina
> [ 

Re: [Bacula-users] bacula-sd - file driver - cloud resource

2020-12-28 Thread Žiga Žvan

Hello Heitor,

I am aware of that. I'm using the Oracle Storage Gateway to upload data to 
the cloud (to an S3 bucket). For bacula-sd, the backup destination is an 
ordinary filesystem mounted over NFS. However, I'm using the File driver with 
a cloud resource (in order to have each volume broken down into file parts 
and therefore optimize the amount uploaded).


Everything worked until Bacula decided to reuse an old volume. I'm getting 
a strange error (cannot download Volume). In my opinion this is some sort 
of bug, because the data is available on the NFS destination (see example 
output from the cloudcache and backup folders below).


Could you check my configuration (below)? Could you confirm that this 
is a bug and that I did not misconfigure something?

Regards,
Ziga


On 28.12.2020 14:46, Heitor Faria wrote:


Hello Ziga,

The Bacula Community S3 plugin was built for the Amazon S3 protocol.
I read that Oracle built some sort of S3 emulator for its cloud, but 
since Bacula Community is free software, it comes without warranties 
or support.
You could consider running bacula-sd in debug mode to troubleshoot 
the problem; the driver is not working properly.
Bacula Systems has already developed a specific plugin for Oracle Object 
Storage, available only in the Enterprise edition.
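
For reference, the usual ways to get SD debug output are the bconsole setdebug
command or running the daemon in the foreground; a minimal sketch, with assumed
paths and an arbitrary debug level of 200:

# option 1: raise the debug level of the running SD from bconsole (trace=1 writes a trace file)
*setdebug level=200 trace=1 storage=FSOciCloudStandardChanger

# option 2: stop the service and run the SD in the foreground with debug output
systemctl stop bacula-sd
/opt/bacula/bin/bacula-sd -f -d 200 -c /opt/bacula/etc/bacula-sd.conf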


Regards,
--
MSc Heitor Faria
CEO Bacula LatAm
mobile1: + 1 909 655-8971
mobile2: + 55 61 98268-4220

América Latina
[ http://bacula.lat/]



 Original Message 
From: Žiga Žvan 
Sent: Monday, December 28, 2020 08:22 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] bacula-sd - file driver - cloud resource

Hi,
What am I doing wrong? I would be grateful for a hint as to why there is no
reply to my emails.
I believe that my problems are a bit too complex for an email. Is there any
way to discuss this in a different way?
Whom should I contact in case I would like to support the development of an
Oracle cloud plugin?

Regards,
Ziga

On 21.12.2020 20:38, Žiga Žvan wrote:
> Hello,
> I'm using file driver with cloud resource. Bacula was able to backup
> data in this way until it wrote data to new volumes. Now, after
> retention period,  I'm getting error: Fatal error: cloud_dev.c:983
> Unable to download Volume (see output below). Data on cloud path looks
> ok but data in local cache contains only part.1 without any data.
>
> Is this expected?
> Has anybody tested this scenario?
> Should I avoid file driver in production environment?
>
> Regards,
> Ziga
>
>
> [root@bacula db-01-weekly-vol-0365]# ls -la
> /mnt/ocisg/bacula/backup/db-01-weekly-vol-0365
> total 0
> drwxr-. 2 bacula disk   0 Oct 24 07:45 .
> drwxr-xr-x. 2 bacula bacula 0 Dec 18 23:38 ..
> -rw-r--r--. 1 bacula disk 256 Oct 24 07:43 part.1
> -rw-r--r--. 1 bacula disk   35992 Oct 24 07:44 part.2
> -rw-r--r--. 1 bacula disk   35993 Oct 24 07:44 part.3
> -rw-r--r--. 1 bacula disk   381771773 Oct 24 07:45 part.4
>
> [root@bacula db-01-weekly-vol-0365]# ls -la
> /storage/bacula/cloudcache/db-01-weekly-vol-0365
> total 20
> drwxr-.   2 bacula disk  28 Dec 11 23:10 .
> drwxr-xr-x. 344 bacula bacula 16384 Dec 18 23:26 ..
> -rw-r--r--.   1 bacula disk   0 Dec 11 23:10 part.1
>
> SD config (autochanger)
>
> Device {
>   Name = FSOciCloudStandard2
>   Device type = Cloud
>   Cloud = OracleViaStorageGateway
>   Maximum Part Size = 1000 MB
>   Media Type = File1
>   Archive Device = /storage/bacula/cloudcache
>   LabelMedia = yes;   # lets Bacula label unlabeled 
media

>   Random Access = Yes;
>   AutomaticMount = yes;   # when device opened, read it
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Autochanger = yes;
> }
> ...
> Device {
>   Name = FSOciCloudStandard4
>   Device type = Cloud
>   Cloud = OracleViaStorageGateway
>   Maximum Part Size = 1000 MB
>   Media Type = File1
>   Archive Device = /storage/bacula/cloudcache
>   LabelMedia = yes;   # lets Bacula label unlabeled 
media

>   Random Access = Yes;
>   AutomaticMount = yes;   # when device opened, read it
>   RemovableMedia = no;
>   AlwaysOpen = no;
>   Autochanger = yes;
> }
>
> Cloud {
>   Name = OracleViaStorageGateway
>   Driver = "File"
>   HostName = "/mnt/ocisg/bacula/backup"
>   BucketName = "DummyBucket"
>   AccessKey = "DummyAccessKey"
>   SecretKey = "DummySecretKey"
>   Protocol = HTTPS
>   UriStyle = VirtualHost
>   Truncate Cache = AtEndOfJob
> }
>
>
> 21-Dec 19:14 bacula-dir JobId 2073: Start Backup JobId 2073,
> Job=db-01-backup.2020-12-21_19.14.14_48
> 21-Dec 19:14 bacula-dir JobId 2073: Using Device "FSOciCloudStandard2"
> to write.
> 21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable
> to download 

Re: [Bacula-users] bacula-sd - file driver - cloud resource

2020-12-28 Thread Žiga Žvan

Hi,
What am I doing wrong? I would be grateful for a hint as to why there is no 
reply to my emails.
I believe that my problems are a bit too complex for an email. Is there any 
way to discuss this in a different way?
Whom should I contact in case I would like to support the development of an 
Oracle cloud plugin?


Regards,
Ziga

On 21.12.2020 20:38, Žiga Žvan wrote:

Hello,
I'm using the File driver with a cloud resource. Bacula was able to back up 
data this way as long as it was writing to new volumes. Now, after the 
retention period, I'm getting the error: Fatal error: cloud_dev.c:983 
Unable to download Volume (see output below). The data on the cloud path looks 
OK, but the local cache contains only part.1 without any data.


Is this expected?
Has anybody tested this scenario?
Should I avoid the File driver in a production environment?

Regards,
Ziga


[root@bacula db-01-weekly-vol-0365]# ls -la 
/mnt/ocisg/bacula/backup/db-01-weekly-vol-0365

total 0
drwxr-. 2 bacula disk   0 Oct 24 07:45 .
drwxr-xr-x. 2 bacula bacula 0 Dec 18 23:38 ..
-rw-r--r--. 1 bacula disk 256 Oct 24 07:43 part.1
-rw-r--r--. 1 bacula disk   35992 Oct 24 07:44 part.2
-rw-r--r--. 1 bacula disk   35993 Oct 24 07:44 part.3
-rw-r--r--. 1 bacula disk   381771773 Oct 24 07:45 part.4

[root@bacula db-01-weekly-vol-0365]# ls -la 
/storage/bacula/cloudcache/db-01-weekly-vol-0365

total 20
drwxr-.   2 bacula disk  28 Dec 11 23:10 .
drwxr-xr-x. 344 bacula bacula 16384 Dec 18 23:26 ..
-rw-r--r--.   1 bacula disk   0 Dec 11 23:10 part.1

SD config (autochanger)

Device {
  Name = FSOciCloudStandard2
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}
...
Device {
  Name = FSOciCloudStandard4
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"
  HostName = "/mnt/ocisg/bacula/backup"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = AtEndOfJob
}


21-Dec 19:14 bacula-dir JobId 2073: Start Backup JobId 2073, 
Job=db-01-backup.2020-12-21_19.14.14_48
21-Dec 19:14 bacula-dir JobId 2073: Using Device "FSOciCloudStandard2" 
to write.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 db-01.prod.kr.cetrtapot.si JobId 2073: Fatal error: 
job.c:3013 Bad response from SD to Append Data command. Wanted 3000 OK 
data

, got len=25 msg="3903 Error append data:  "
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Marking Volume 
"db-01-weekly-vol-0365" in Error in Catalog.

21-Dec 19:14 bacula-sd JobId 2073: Fatal error: Job 2073 canceled.


On 06.12.2020 20:52, Žiga Žvan wrote:

Dear all,
I'm using bacula 9.6.5 in a production for a month now. I'm 
experiencing random backup failures from my clients. Specific hosts 
report errors like the outputs attached. The same host is able to 
perform backup at some other time. The error is more often at large 
backups (more errors at full backups than incremental, more errors at 
hosts with large data sets).


I have tried to implement a heartbeat interval 
(https://www.bacula.org/9.6.x-manuals/en/main/Client_File_daemon_Configur.html#SECTION00221) 
but there is no improvement.
The error also occurs on hosts in the same zone as bacul

Re: [Bacula-users] Offsite S3 backup

2020-12-21 Thread Žiga Žvan

Hi,
I'm using a dummy S3 bucket and upload the data with the Storage Gateway (Oracle 
cloud is not supported directly); however, I have some trouble with this 
setup.

Directions are here: https://blog.bacula.org/whitepapers/CloudBackup.pdf
Regards, Ziga


On 17.12.2020 18:42, Satvinder Singh wrote:

Hi,

Has anyone tested doing offsite backups to an S3 bucket? If yes, can someone 
point me in the right direction on how to?

Thanks

  


Satvinder Singh / Operations Manager
ssi...@celerium.com / Cell: 703-989-8030

Celerium
Office: 703-418-6315
www.celerium.com 

        



Disclaimer: This message is intended only for the use of the individual or 
entity to which it is addressed and may contain information which is 
privileged, confidential, proprietary, or exempt from disclosure under 
applicable law. If you are not the intended recipient or the person responsible 
for delivering the message to the intended recipient, you are strictly 
prohibited from disclosing, distributing, copying, or in any way using this 
message. If you have received this communication in error, please notify the 
sender and destroy and delete any copies you may have received.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula-sd - file driver - cloud resource

2020-12-21 Thread Žiga Žvan

Hello,
I'm using the File driver with a cloud resource. Bacula was able to back up 
data this way as long as it was writing to new volumes. Now, after the 
retention period, I'm getting the error: Fatal error: cloud_dev.c:983 
Unable to download Volume (see output below). The data on the cloud path looks 
OK, but the local cache contains only part.1 without any data.


Is this expected?
Has anybody tested this scenario?
Should I avoid the File driver in a production environment?

Regards,
Ziga


[root@bacula db-01-weekly-vol-0365]# ls -la 
/mnt/ocisg/bacula/backup/db-01-weekly-vol-0365

total 0
drwxr-. 2 bacula disk   0 Oct 24 07:45 .
drwxr-xr-x. 2 bacula bacula 0 Dec 18 23:38 ..
-rw-r--r--. 1 bacula disk 256 Oct 24 07:43 part.1
-rw-r--r--. 1 bacula disk   35992 Oct 24 07:44 part.2
-rw-r--r--. 1 bacula disk   35993 Oct 24 07:44 part.3
-rw-r--r--. 1 bacula disk   381771773 Oct 24 07:45 part.4

[root@bacula db-01-weekly-vol-0365]# ls -la 
/storage/bacula/cloudcache/db-01-weekly-vol-0365

total 20
drwxr-.   2 bacula disk  28 Dec 11 23:10 .
drwxr-xr-x. 344 bacula bacula 16384 Dec 18 23:26 ..
-rw-r--r--.   1 bacula disk   0 Dec 11 23:10 part.1

SD config (autochanger)

Device {
  Name = FSOciCloudStandard2
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}
...
Device {
  Name = FSOciCloudStandard4
  Device type = Cloud
  Cloud = OracleViaStorageGateway
  Maximum Part Size = 1000 MB
  Media Type = File1
  Archive Device = /storage/bacula/cloudcache
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Autochanger = yes;
}

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"
  HostName = "/mnt/ocisg/bacula/backup"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = AtEndOfJob
}


21-Dec 19:14 bacula-dir JobId 2073: Start Backup JobId 2073, 
Job=db-01-backup.2020-12-21_19.14.14_48
21-Dec 19:14 bacula-dir JobId 2073: Using Device "FSOciCloudStandard2" 
to write.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 db-01.prod.kr.cetrtapot.si JobId 2073: Fatal error: 
job.c:3013 Bad response from SD to Append Data command. Wanted 3000 OK data

, got len=25 msg="3903 Error append data:  "
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Fatal error: cloud_dev.c:983 Unable 
to download Volume="db-01-weekly-vol-0365" label.
21-Dec 19:14 bacula-sd JobId 2073: Warning: label.c:398 Open Cloud 
device "FSOciCloudStandard2" (/storage/bacula/cloudcache) Volume 
"db-01-weekly-vol-0365" failed: ERR=
21-Dec 19:14 bacula-sd JobId 2073: Marking Volume 
"db-01-weekly-vol-0365" in Error in Catalog.

21-Dec 19:14 bacula-sd JobId 2073: Fatal error: Job 2073 canceled.


On 06.12.2020 20:52, Žiga Žvan wrote:

Dear all,
I have been using Bacula 9.6.5 in production for a month now. I'm 
experiencing random backup failures from my clients. Specific hosts 
report errors like the outputs attached. The same host is able to 
perform a backup at some other time. The errors are more frequent on large 
backups (more errors on full backups than incrementals, and more errors on 
hosts with large data sets).


I have tried to implement a heartbeat interval 
(https://www.bacula.org/9.6.x-manuals/en/main/Client_File_daemon_Configur.html#SECTION00221) 
but there is no improvement.
The error also occurs on hosts in the same zone as the Bacula server (no 
router/firewall in between).
The storage daemon is installed on the same server as the Bacula director. I'm 
using the File cloud driver (backup to local disk via a cloud resource).


Could you please suggest a solution or a way to troubleshoot this 
further?

Thx!

Regards, Ziga Zvan

Backup from linux hosts (on 05-dec 3 hosts failed, 20 hosts completed 
wit

[Bacula-users] bacula - connection reset by peer

2020-12-06 Thread Žiga Žvan

Dear all,
I have been using Bacula 9.6.5 in production for a month now. I'm experiencing random 
backup failures from my clients. Specific hosts report errors like the outputs 
attached. The same host is able to perform a backup at some other time. The errors 
are more frequent on large backups (more errors on full backups than incrementals, 
and more errors on hosts with large data sets).

I have tried to implement a heartbeat interval 
(https://www.bacula.org/9.6.x-manuals/en/main/Client_File_daemon_Configur.html#SECTION00221)
 but there is no improvement.
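
For reference, a minimal sketch of where the Heartbeat Interval directive goes
(the directive names are standard; the 60-second value and resource names are
assumptions):

# bacula-fd.conf on the client
FileDaemon {
  Name = somehost-fd
  ...
  Heartbeat Interval = 60
}

# bacula-sd.conf on the storage daemon
Storage {
  Name = bacula-sd
  ...
  Heartbeat Interval = 60
}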
The error also occurs on hosts in the same zone as the Bacula server (no 
router/firewall in between).
The storage daemon is installed on the same server as the Bacula director. I'm using 
the File cloud driver (backup to local disk via a cloud resource).

Could you please suggest a solution or a way to troubleshoot this further?
Thx!
 
Regards, Ziga Zvan


Backup from linux hosts (on 05-dec 3 hosts failed, 20 hosts completed without 
error):
05-Dec 03:26 bacula-dir JobId 1721: Fatal error: Network error with FD during 
Backup: ERR=Connection reset by peer
05-Dec 03:27 bacula-dir JobId 1721: Fatal error: No Job status returned from FD.
05-Dec 03:27 bacula-dir JobId 1721: Error: Bacula bacula-dir 9.6.5 (11Jun20):

Backup from windows hosts (on 05-dec 2 hosts failed, 5 hosts completed without 
error):
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: Error: lib/bsock.c:383 
Write error sending 57172 bytes to Storage daemon:192.168.66.35:9103: 
ERR=Input/output error
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: Fatal error: 
filed/backup.c:848 Network send error to SD. ERR=Input/output error
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"System Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"ASR Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 00:40 iwhost01.kranj.cetrtapot.si-fd JobId 1726: VSS Writer (BackupComplete): 
"Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
05-Dec 01:01 bacula-dir JobId 1726: Error: bsock.c:551 Read error from Client: 
iwhost01.kranj.cetrtapot.si-fd:iwhost01.kranj.cetrtapot.si:9102: ERR=Connection 
timed out
05-Dec 01:01 bacula-dir JobId 1726: Fatal error: Network error with FD during 
Backup: ERR=Connection timed out
05-Dec 01:02 bacula-dir JobId 1726: Fatal error: No Job status returned from FD.
05-Dec 01:02 bacula-dir JobId 1726: Error: Bacula bacula-dir 9.6.5 (11Jun20):

Similar output from 21-Nov:
21-Nov 05:30 dc1.kranj.cetrtapot.si-fd JobId 1393: Error: lib/bsock.c:383 Write 
error sending 4 bytes to Storage daemon:192.168.66.35:9103: ERR=Input/output 
error
21-Nov 05:30 dc1.kranj.cetrtapot.si-fd JobId 1393: Error: lib/bsock.c:271 
Socket has errors=1 on call to Storage daemon:192.168.66.35:9103
21-Nov 05:30 dc1.kranj.cetrtapot.si-fd JobId 1393: Error: lib/bsock.c:271 
Socket has errors=1 on call to Storage daemon:192.168.66.35:9103
21-Nov 05:30 dc1.kranj.cetrtapot.si-fd JobId 1393: Fatal error: 
filed/backup.c:607 Network send error to SD. ERR=Input/output error
21-Nov 05:49 bacula-dir JobId 1393: Fatal error: Network error with FD during 
Backup: ERR=Connection timed out




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] performance challenges

2020-10-07 Thread Žiga Žvan
Thanks Joe for this info. It looks like it is a client issue, as described 
in the document (many small files; operations like stat() and 
fstat() consume 100% CPU on the client).


I think that implementing an autochanger solves my problems (multiple 
clients will write at the same time and utilize the bandwidth).
I have installed and tested version 9.6.5 of the Bacula client. It does not 
show any better performance (still 3.5 hours for 166 GB, 5 files), 
however the bconsole status dir command now shows progress (files and bytes 
written). With the old client, this reporting did not work (it showed 0 
files until the end of the backup).
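
For reference, a minimal sketch of a virtual disk autochanger in bacula-sd.conf
that lets several jobs write at the same time (the name and device list are
assumptions modelled on the FSOciCloudStandard devices shown elsewhere in these
threads):

Autochanger {
  Name = FSOciCloudStandardChanger
  Device = FSOciCloudStandard1, FSOciCloudStandard2, FSOciCloudStandard3, FSOciCloudStandard4
  Changer Command = ""           # no physical changer for disk/cloud devices
  Changer Device = /dev/null
}

On the Director side, the Storage resource pointing at this changer also needs
Maximum Concurrent Jobs raised above 1 before jobs actually run in parallel.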


Regards,
Ziga

On 07.10.2020 12:29, Joe GREER wrote:

Ziga,

It is sad to hear you're having issues with Bacula. Some of your 
concerns have been here since 2005. The only thing you can do to speed 
things up is to spool the whole job to very fast disk (SSD), break up 
your large job (number of files), make sure your database is on very 
fast disk (SSD), and have a person who is very familiar with Postgres 
look at your DB to see if it needs some tweaking.


Here is a post from Andreas Koch back in 2017 with similar issues and, 
at the time, VERY powerful hardware getting poor performance:


"It appears that such trickery might be unnecessary if the Bacula FD could
perform something similar (hiding the latency of individual meta-data
operations) on-the-fly, e.g. by executing in a multi-threaded fashion. 
This

has been proposed as Item 15 in the Bacula `Projects' list since November
2005 but does not appear to have been implemented yet (?)."

https://sourceforge.net/p/bacula/mailman/message/36021244/

Thanks,
Joe




This message and any documents attached are confidential - without any 
specifications - created for the exclusive use of its intended 
recipient(s), and may be legally privileged. Any modification, 
printing, use, or distribution of this email that is not authorised is 
prohibited. If you have received this email in error, please notify us 
immediately, delete it from your system and destroy any attachments.



>>> Žiga Žvan  10/6/2020 9:56 AM >>>
Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:
-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottleneck.
b) testing file copy from linux centos 6 server to bacula server with
rsync (eg. rsync --info=progress2 source destination)
-writing to local storage: 82 MB/sec
-writing to IBM storage: 85 MB/sec
I guess this is ok for a 1 GB network link.
c) using bacula:
-linux centos 6 file server: 13 MB/sec on IBM storage, 16 MB/sec on
local SSD storage (version of client 5.2.13).
-windows file server:  around 18 MB/sec - there could be some additional
problem, because I perform a backup from a deduplicated drive (version
of client 9.6.5)
d) I have tried to manipulate encryption/compression settings, but I
believe there is no significant difference

I think that the Bacula rate (15 MB/sec) is quite slow compared to the file
copy results (85 MB/sec) from the same client/server. It should be
better... Do you agree?

I have implemented autochanger in order to perform backup from both
servers at the same time. We shall see the results tomorrow.
I have not changed the version of the client on linux server yet. My
windows server uses new client version, so that was not my first idea...
Will try this tomorrow if needed.

What about retention?
I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

At the moment I use different job/schedule for monthly backup, but that
triggers full backup also on Monday after monthly backup (I would like
to run incremental then). Is there a better way? Relevant parts of conf
below...

Regards,
Ziga

JobDefs {
Name = "bazar2-job"
Schedule = "WeeklyCycle"
...
}

Job {
   Name = "bazar2-backup"
   JobDefs = "bazar2-job"
   Full Backup Pool = bazar2-weekly-pool
   Incremental Backup Pool = bazar2-daily-pool
}

Job {
   Name = "bazar2-monthly-backup"
   Level = Full
   JobDefs = "bazar2-job"
   Pool = bazar2-monthly-pool
   Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf (monthly
pool with longer retention)
}





Example output:

06-Oct 12:19 bacula-dir JobId 714: Bacula bacula-dir 9.6.5 (11Jun20):
   Build OS: 

Re: [Bacula-users] Backup from client with deduplication

2020-10-07 Thread Žiga Žvan

Hi,
I'm able to back up a Windows 2012 server with deduplication enabled. My 
Bacula server is also CentOS, Bacula version 9.6.5 (on client and 
server). I have tested only individual file restores, but it has always 
worked for me. The backup is very small (54 GB for approx. 700 GB of raw data). 
I'm not sure how Bacula manages to back up and encrypt only the deduplicated 
amount (I would expect 700 GB on the wire). I'm 
having some performance problems (see the thread 
"performance challenges").


I'm at the end of testing the Bacula software, so I'm not an experienced user. 
I would also appreciate it if someone could confirm that this should work. If 
it helps, here is the log from my last backup.

Kind regards,
Ziga



02-Oct 23:05 bacula-dir JobId 692: Start Backup JobId 692, 
Job=dc1-monthly-backup.2020-10-02_23.05.00_09
02-Oct 23:05 bacula-dir JobId 692: Created new Volume="dc1-monthly-vol-0297", 
Pool="dc1-monthly-pool", MediaType="File" in catalog.
02-Oct 23:05 bacula-dir JobId 692: Using Device "FSLocalBackup" to write.
02-Oct 23:05 bacula-sd JobId 692: Labeled new Volume "dc1-monthly-vol-0297" on File 
device "FSLocalBackup" (/storage/bacula/backup).
02-Oct 23:05 bacula-sd JobId 692: Wrote label to prelabeled Volume "dc1-monthly-vol-0297" 
on File device "FSLocalBackup" (/storage/bacula/backup)
02-Oct 23:05 dc1.kranj.cetrtapot.si-fd JobId 692: Generate VSS snapshots. 
Driver="Win64 VSS"
02-Oct 23:05 dc1.kranj.cetrtapot.si-fd JobId 692: Snapshot mount point: D:\
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "Task 
Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "VSS 
Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"System Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "ASR 
Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "IIS 
Config Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "WMI 
Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "IIS 
Metabase Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"Dedup Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"Registry Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "DFS 
Replication service writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "NPS 
VSS Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"Certificate Authority", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "Dhcp 
Jet Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): "COM+ 
REGDB Writer", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 dc1.kranj.cetrtapot.si-fd JobId 692: VSS Writer (BackupComplete): 
"NTDS", State: 0x1 (VSS_WS_STABLE)
03-Oct 13:42 bacula-sd JobId 692: Elapsed time=14:37:01, Transfer rate=1.081 M 
Bytes/second
03-Oct 13:42 bacula-sd JobId 692: Sending spooled attrs to the Director. 
Despooling 462,280,104 bytes ...
03-Oct 13:42 bacula-dir JobId 692: Bacula bacula-dir 9.6.5 (11Jun20):
  Build OS:   x86_64-redhat-linux-gnu-bacula redhat (Core)
  JobId:  692
  Job:dc1-monthly-backup.2020-10-02_23.05.00_09
  Backup Level:   Full
  Client: "dc1.kranj.cetrtapot.si-fd" 9.6.5 (11Jun20) Microsoft 
Standard Edition (build 9200), 64-bit,Cross-compile,Win64
  FileSet:"dc1-fileset" 2020-09-25 18:20:26
  Pool:   "dc1-monthly-pool" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"FSLocalBackup" (From Job resource)
  Scheduled time: 02-Oct-2020 23:05:00
  Start time: 02-Oct-2020 23:05:02
  End time:   03-Oct-2020 13:42:34
  Elapsed time:   14 hours 37 mins 32 secs
  Priority:   10
  FD Files Written:   1,706,949
  SD Files Written:   1,706,949
  FD Bytes Written:   55,666,483,955 (55.66 GB)
  SD Bytes Written:   56,910,661,569 (56.91 GB)
  Rate:   1057.3 KB/s
  Software Compression:   None
  Comm Line 

Re: [Bacula-users] performance challenges

2020-10-06 Thread Žiga Žvan

Hi,
I have done some testing:
a) testing storage with dd command (eg: dd if=/dev/zero 
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:

-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec.
I guess storage is not a bottleneck.
b) testing file copy from linux centos 6 server to bacula server with 
rsync (eg. rsync --info=progress2 source destination)

-writing to local storage: 82 MB/sec
-writing to IBM storage: 85 MB/sec
I guess this is ok for a 1 GB network link.
c) using bacula:
-linux centos 6 file server: 13 MB/sec on IBM storage, 16 MB/sec on 
local SSD storage (version of client 5.2.13).
-windows file server:  around 18 MB/sec - there could be some additional 
problem, because I perform a backup from a deduplicated drive (version 
of client 9.6.5)
d) I have tried to manipulate encryption/compression settings, but I 
believe there is no significant difference


I think that the Bacula rate (15 MB/sec) is quite slow compared to the file 
copy results (85 MB/sec) from the same client/server. It should be 
better... Do you agree?


I have implemented autochanger in order to perform backup from both 
servers at the same time. We shall see the results tomorrow.
I have not changed the version of the client on linux server yet. My 
windows server uses new client version, so that was not my first idea... 
Will try this tomorrow if needed.


What about retention?
I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

At the moment I use a different job/schedule for the monthly backup, but that 
triggers a full backup on the Monday after the monthly backup as well (I would 
like to run an incremental then). Is there a better way? Relevant parts of the 
conf are below...


Regards,
Ziga

JobDefs {
Name = "bazar2-job"
Schedule = "WeeklyCycle"
...
}

Job {
  Name = "bazar2-backup"
  JobDefs = "bazar2-job"
  Full Backup Pool = bazar2-weekly-pool
  Incremental Backup Pool = bazar2-daily-pool
}

Job {
  Name = "bazar2-monthly-backup"
  Level = Full
  JobDefs = "bazar2-job"
  Pool = bazar2-monthly-pool
  Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf (monthly 
pool with longer retention)

}
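
One hedged way around the separate monthly job (so incrementals keep finding a
prior Full in the same job) is a single job per client with Level and Pool
overrides in the Run directives of a per-client Schedule; a minimal sketch
reusing the bazar2 pool names above (the schedule name and exact times are
assumptions):

Schedule {
  Name = "bazar2-cycle"
  Run = Level=Full Pool=bazar2-monthly-pool 1st fri at 23:05
  Run = Level=Full Pool=bazar2-weekly-pool 2nd-5th fri at 23:05
  Run = Level=Incremental Pool=bazar2-daily-pool sat at 23:05
  Run = Level=Incremental Pool=bazar2-daily-pool sun-thu at 23:05
}

Job {
  Name = "bazar2-backup"
  JobDefs = "bazar2-job"
  Schedule = "bazar2-cycle"
}

The catch, as noted elsewhere in this thread, is that because the pools are per
client, such a schedule also has to be defined per client.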





Example output:

06-Oct 12:19 bacula-dir JobId 714: Bacula bacula-dir 9.6.5 (11Jun20):
  Build OS:   x86_64-redhat-linux-gnu-bacula redhat (Core)
  JobId:  714
  Job:bazar2-monthly-backup.2020-10-06_09.33.25_03
  Backup Level:   Full
  Client: "bazar2.kranj.cetrtapot.si-fd" 5.2.13 (19Jan13) 
x86_64-redhat-linux-gnu,redhat,(Core)
  FileSet:"bazar2-fileset" 2020-09-30 15:40:26
  Pool:   "bazar2-monthly-pool" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"FSTestBackup" (From Job resource)
  Scheduled time: 06-Oct-2020 09:33:15
  Start time: 06-Oct-2020 09:33:28
  End time:   06-Oct-2020 12:19:19
  Elapsed time:   2 hours 45 mins 51 secs
  Priority:   10
  FD Files Written:   53,682
  SD Files Written:   53,682
  FD Bytes Written:   168,149,175,433 (168.1 GB)
  SD Bytes Written:   168,158,044,149 (168.1 GB)
  Rate:   16897.7 KB/s
  Software Compression:   36.6% 1.6:1
  Comm Line Compression:  None
  Snapshot/VSS:   no
  Encryption: no
  Accurate:   no
  Volume name(s): bazar2-monthly-vol-0300
  Volume Session Id:  11
  Volume Session Time:1601893281
  Last Volume Bytes:  337,370,601,852 (337.3 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK


On 06.10.2020 14:28, Josh Fisher wrote:


On 10/6/20 3:45 AM, Žiga Žvan wrote:
I believe that I have my spooling attributes set correctly in the JobDefs 
(see below): Spool Attributes = yes; Spool Data defaults to no. Any 
other ideas for the performance problems?

Regard,
Ziga



The client version is very old. First try updating the client to 9.6.x.

For testing purposes, create another storage device on local disk and 
write a full backup to that. If it is much faster to local disk 
storage than it is to the s3 driver, then there may be an issue with 
how the s3 driver is compiled, version of s3 driver, etc.


Otherwise, with attribute spooling enabled, the status of the job as 
given by the status dir command in bconsole will change to "despooling 
attributes" or something like that when the client has finished 
sending data. That is the period at the end of the job when the 
spooled attributes are being written to the catalog database. If 
despooling is taking a long time, then database

Re: [Bacula-users] performance challenges

2020-10-06 Thread Žiga Žvan
I believe that I have my spooling attributes set correctly in the JobDefs 
(see below): Spool Attributes = yes; Spool Data defaults to no. Any 
other ideas for the performance problems?

Regard,
Ziga



JobDefs {
  Name = "bazar2-job"
  Type = Backup
  Level = Incremental
  Client = bazar2.kranj.cetrtapot.si-fd  # Client name: must match the Name in bacula-fd.conf on the client side

  FileSet = "bazar2-fileset"
  Schedule = "WeeklyCycle" #schedule : see in bacula-dir.conf
  Storage = FSTestBackup
#  Storage = FSOciCloudStandard
  Messages = Standard
  Pool = bazar2-daily-pool
  Spool Attributes = yes   # Better for backup to disk
  Max Full Interval = 15 days # Ensure that full backup exist
  Priority = 10
  Write Bootstrap = "/opt/bacula/working/%c.bsr"
}

status dir not showing files transferred:

 JobId  Type Level Files Bytes  Name  Status
==
   714  Back Full  0 0  bazar2-monthly-backup is running


On 06.10.2020 09:14, Žiga Žvan wrote:


Thanks Josh for your reply and sorry for my previous duplicate email.

I will try to disable data spooling and report back the results.

What about manipulating retention? Currently I have different jobs for the 
weekly full and monthly full backups (see below), but that triggers a 
full backup instead of an incremental on Monday (because I use a different 
job resource). Is there a better way to have a monthly backup with a 
longer retention?


Kind regards,
Ziga

#For all clients

Schedule {
  Name = "MonthlyFull"
  Run = Full 1st fri at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Full sun-sat at 23:10
}

#Job for each client

Job {
  Name = "oradev02-backup"
  JobDefs = "oradev02-job"
  Full Backup Pool = oradev02-weekly-pool
  Incremental Backup Pool = oradev02-daily-pool
}

Job {
  Name = "oradev02-monthly-backup"
  JobDefs = "oradev02-job"
  Pool = oradev02-monthly-pool
  Schedule = "MonthlyFull"  #schedule : see in bacula-dir.conf 
(monthly pool with longer retention)

}



On 05.10.2020 16:30, Josh Fisher wrote:



On 10/5/20 9:20 AM, Žiga Žvan wrote:


Hi,
I'm having some performance challenges. I would appreciate some 
educated guess from an experienced bacula user.


I'm changing old backup sw that writes to tape drive with bacula 
writing  to disk. The results are:
a) windows file server backup from a deduplicated drive (1.700.000 
files, 900 GB data, deduplicated space used 600 GB). *Bacula: 12 
hours, old software: 2.5 hours*
b) linux file server backup (50.000 files, 166 GB data).*Bacula 3.5 
hours, old software: 1 hour*.


I have tried to:
a) turn off compression The result is the same: backup 
speed around 13 MB/sec.
b) change destination storage (from a new ibm storage attached over 
nfs, to a local SSD disk attached on bacula server virtual machine). 
It took 2 hours 50 minutes to backup linux file server (instead of 
3.5 hours). Sequential write test tested with linux dd command shows 
write speed 300 MB/sec for IBM storage and 600 MB/sec for local SSD 
storage (far better than actual throughput).




There are directives to enable/disable spooling of both data and the 
attributes (metadata) being written to the catalog database. When 
using disk volumes, you probably want to disable data spooling and 
enable attribute spooling. The attribute spooling will prevent a 
database write after each file backed up and instead do the database 
writes as a batch at the end of the job. Data spooling would rarely 
if ever be needed when writing to disk media.


With attribute spooling enabled, you can make a rough guess as to 
whether DB performance is the problem by judging how long the job is 
in the 'attribute despooling' state, The status dir command in 
bconsole shows the job state.



The network bandwidth is 1 GB (1 GB on client, 10 GB on server) so I 
guess this is not a problem; however I have noticed that bacula-fd 
on client side uses 100% of CPU.


I'm using:
-bacula server version 9.6.5
-bacula client version 5.2.13 (original from centos 6 repo).

Any idea what is wrong and/or what performance should I expect?
I would also appreciate some answers on the questions bellow (I 
think this email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested bacula sw (9.6.5) and I must say I'm quite happy with 
the results (eg. compression, encryption, configureability). 
However I have some configuration/design questions I hope, you can 
help me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using dummy cloud driver that writes to local file storage.  
Vo

[Bacula-users] performance challenges

2020-10-06 Thread Žiga Žvan

Hi,

I'm having some performance challenges. I would appreciate an educated 
guess from an experienced Bacula user.


I'm replacing old backup software that writes to a tape drive with Bacula 
writing to disk. The results are:
a) Windows file server backup from a deduplicated drive (1,700,000 
files, 900 GB of data, 600 GB of deduplicated space used). *Bacula: 12 hours, 
old software: 2.5 hours*
b) Linux file server backup (50,000 files, 166 GB of data). *Bacula: 3.5 
hours, old software: 1 hour*.


I have tried to:
a) turn off compression. The result is the same: backup speed 
around 13 MB/sec.
b) change the destination storage (from new IBM storage attached over NFS 
to a local SSD disk attached to the Bacula server virtual machine). It took 
2 hours 50 minutes to back up the Linux file server (instead of 3.5 hours). 
A sequential write test with the Linux dd command shows a write speed of 
300 MB/sec for the IBM storage and 600 MB/sec for the local SSD storage (far 
better than the actual throughput).


The network bandwidth is 1 GB (1 GB on the client, 10 GB on the server), so I 
guess this is not a problem; however, I have noticed that bacula-fd on the 
client side uses 100% of the CPU.


I'm using:
-bacula server version 9.6.5
-bacula client version 5.2.13 (original from centos 6 repo).

Any idea what is wrong and/or what performance I should expect?
I would also appreciate some answers to the questions below.

Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested the Bacula software (9.6.5) and I must say I'm quite happy with 
the results (e.g. compression, encryption, configurability). However, I 
have some configuration/design questions I hope you can help me with.


Regarding job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)
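
As a rough illustration of those three retention tiers, a minimal sketch of
per-tier Pool resources (names and exact values are assumptions; Volume
Retention is the directive that drives expiry):

Pool {
  Name = client1-daily-pool
  Pool Type = Backup
  Volume Retention = 7 days
  Action On Purge = Truncate
}
Pool {
  Name = client1-weekly-pool
  Pool Type = Backup
  Volume Retention = 1 month
  Action On Purge = Truncate
}
Pool {
  Name = client1-monthly-pool
  Pool Type = Backup
  Volume Retention = 1 year
  Action On Purge = Truncate
}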

I am using the dummy cloud driver that writes to local file storage. 
A volume is a directory with file parts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on disk 
after the retention period expires. If possible, I would like to delete 
just the file parts belonging to expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and a 
central schedule definition for all my clients. I have noticed that my 
incremental job gets promoted to full after the monthly backup ("No prior 
Full backup Job record found", because the monthly backup is a separate 
job and Bacula searches for full backups inside the same job). Could 
you please suggest a better configuration? If possible, I would like 
to keep the central schedule definition (if I manipulate pools in a 
schedule resource, I would need to define them per client).


b) I would like to delete expired backups on disk (and in the catalog 
as well). At the moment I'm using one volume in a daily/weekly/monthly 
pool per client. In a volume, there are file parts belonging to expired 
backups (e.g. parts 1-23 in the output below). I have tried to solve 
this with purge/prune scripts in my BackupCatalog job (as suggested in 
the whitepapers), but the data does not get deleted. Is there any way 
to delete file parts? Should I create separate volumes after the retention 
period? Please suggest a better configuration.


c) Do I need a restore job for each client? I would just like to 
restore a backup on the same client, defaulting to a /restore folder... When 
I use the bconsole "restore all" command, the wizard asks me all the 
questions (e.g. 5 - last backup for a client; which client, fileset...), 
but at the end it asks for a restore job, which changes all the previously 
defined things (e.g. the client).


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send it to the Bacula server, 
which writes it to one central storage system. Jobs are processed in 
sequential order (one at a time). Do you expect any significant 
performance gain if I implement an autochanger in order to have jobs run 
simultaneously?


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example *(fileparts 1-23 should be deleted)*:*
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk   262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 35988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 35992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 36000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 35981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 35988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 35995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 35981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 35992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 35995 Jul 30 23:09 part.12
-rw-r--r--. 1 bacula disk 35993 Jul 30 23:10 part.1

[Bacula-users] performance challenges

2020-10-05 Thread Žiga Žvan

Hi,
I'm having some performance challenges. I would appreciate an educated 
guess from an experienced Bacula user.


I'm replacing old backup software that writes to a tape drive with Bacula 
writing to disk. The results are:
a) Windows file server backup from a deduplicated drive (1,700,000 
files, 900 GB of data, 600 GB of deduplicated space used). *Bacula: 12 hours, 
old software: 2.5 hours*
b) Linux file server backup (50,000 files, 166 GB of data). *Bacula: 3.5 
hours, old software: 1 hour*.


I have tried to:
a) turn off compression. The result is the same: backup speed 
around 13 MB/sec.
b) change the destination storage (from new IBM storage attached over NFS 
to a local SSD disk attached to the Bacula server virtual machine). It took 
2 hours 50 minutes to back up the Linux file server (instead of 3.5 hours). 
A sequential write test with the Linux dd command shows a write speed of 
300 MB/sec for the IBM storage and 600 MB/sec for the local SSD storage (far 
better than the actual throughput).


The network bandwidth is 1 GB (1 GB on the client, 10 GB on the server), so I 
guess this is not a problem; however, I have noticed that bacula-fd on the 
client side uses 100% of the CPU.


I'm using:
- bacula server version 9.6.5
- bacula client version 5.2.13 (the stock package from the CentOS 6 repo).

Any idea what is wrong and/or what performance I should expect?
I would also appreciate answers to the questions below (I think that 
email went unanswered).


Kind regards,
Ziga Zvan




On 05.08.2020 10:52, Žiga Žvan wrote:


Dear all,
I have tested the Bacula software (9.6.5) and I must say I'm quite 
happy with the results (e.g. compression, encryption, configurability). 
However, I have some configuration/design questions I hope you can help 
me with.


Regarding the job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using a dummy cloud driver that writes to local file storage. A 
volume is a directory with fileparts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on disk 
after the retention period expires. If possible, I would like to delete 
just the fileparts belonging to expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and 
a central schedule definition for all my clients. I have noticed that my 
incremental job gets promoted to full after the monthly backup ("No prior 
Full backup Job record found"), because the monthly backup is a separate 
job, but bacula searches for full backups inside the same job. Could 
you please suggest a better configuration? If possible, I would like 
to keep a central schedule definition (if I manipulate pools in a 
schedule resource, I would need to define them per client).


b) I would like to delete expired backups on disk (and in the catalog 
as well). At the moment I'm using one volume in a daily/weekly/monthly 
pool per client. In a volume, there are fileparts belonging to expired 
backups (e.g. parts 1-23 in the output below). I have tried to solve 
this with purge/prune scripts in my BackupCatalog job (as suggested in 
the whitepapers), but the data does not get deleted. Is there any way 
to delete fileparts? Should I create separate volumes after the 
retention period? Please suggest a better configuration.


c) Do I need a restore job for each client? I would just like to 
restore a backup on the same client, defaulting to the /restore 
folder... When I use the bconsole "restore all" command, the wizard 
asks me all the questions (e.g. 5 - last backup for a client, which 
client, fileset...), but at the end it asks for a restore job which 
changes all previously defined things (e.g. client).


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send it to the bacula server, 
which writes it to one central storage system. Jobs are processed in 
sequential order (one at a time). Do you expect any significant 
performance gain if I implement an autochanger in order to have jobs 
run simultaneously?


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example* (fileparts 1-23 should be deleted):
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk   262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 35988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 35992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 36000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 35981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 35988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 35995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 35981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 35992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 35995 Jul 30 23:09 part.12
-rw-r--r--. 1 b

[Bacula-users] design challenges - file-cloud backup

2020-08-05 Thread Žiga Žvan


Dear all,
I have tested the Bacula software (9.6.5) and I must say I'm quite happy with 
the results (e.g. compression, encryption, configurability). However, I have 
some configuration/design questions I hope you can help me with.


Regarding the job schedule, I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

I am using a dummy cloud driver that writes to local file storage. A volume 
is a directory with fileparts. I would like to have separate 
volumes/pools for each client. I would like to delete the data on disk 
after the retention period expires. If possible, I would like to delete just 
the fileparts belonging to expired backups.


Questions:
a) At the moment, I'm using two backup job definitions per client and 
a central schedule definition for all my clients. I have noticed that my 
incremental job gets promoted to full after the monthly backup ("No prior 
Full backup Job record found"), because the monthly backup is a separate job, 
but bacula searches for full backups inside the same job. Could you 
please suggest a better configuration? If possible, I would like to keep 
a central schedule definition (if I manipulate pools in a schedule 
resource, I would need to define them per client).
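
One common layout that avoids this kind of promotion is to run all levels 
inside a single backup Job per client: a shared Schedule only sets the 
level, and the per-client pools are selected with the per-level pool 
directives in the Job itself. A minimal sketch, with all resource names as 
placeholders:

# One shared schedule; the level comes from the Run lines.
Schedule {
  Name = "CentralCycle"
  Run = Level=Full 1st sun at 23:05          # monthly full
  Run = Level=Full 2nd-5th sun at 23:05      # weekly full
  Run = Level=Incremental mon-sat at 23:05   # daily incremental
}

# One job per client; pools stay per client without touching the schedule.
Job {
  Name = "someclient-backup"                 # placeholder
  JobDefs = "DefaultJob"                     # placeholder
  Client = "someclient-fd"                   # placeholder
  Schedule = "CentralCycle"
  Pool = "someclient-daily"
  Full Backup Pool = "someclient-full"
  Incremental Backup Pool = "someclient-daily"
}

Because full and incremental runs now belong to the same Job, the monthly 
run no longer triggers "No prior Full backup Job record found" for the 
incremental level. Giving weekly and monthly fulls different retentions 
would still require Pool overrides on the Run lines (or a second schedule), 
so this sketch only addresses the promotion issue.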


b) I would like to delete expired backups on disk (and in the catalog as 
well). At the moment I'm using one volume in a daily/weekly/monthly pool 
per client. In a volume, there are fileparts belonging to expired 
backups (e.g. parts 1-23 in the output below). I have tried to solve this 
with purge/prune scripts in my BackupCatalog job (as suggested in the 
whitepapers), but the data does not get deleted. Is there any way to 
delete fileparts? Should I create separate volumes after the retention 
period? Please suggest a better configuration.
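
For reference, the usual building blocks for reclaiming space are volume 
retention plus truncation of purged volumes; a rough sketch follows 
(resource names are placeholders, and whether truncation actually removes 
the cloud-format fileparts written by the File driver is something that 
needs to be verified in a test):

# Per-client daily pool that allows purged volumes to be truncated.
Pool {
  Name = "someclient-daily"          # placeholder
  Pool Type = Backup
  Volume Retention = 7 days
  AutoPrune = yes
  Action On Purge = Truncate         # mark purged volumes as truncatable
}

# Hypothetical addition to the existing BackupCatalog job: after the run,
# prune expired volumes and truncate whatever has been purged.
Job {
  Name = "BackupCatalog"
  # ... existing directives ...
  RunScript {
    RunsWhen = After
    RunsOnClient = no
    Console = "prune expired volume yes"
    Console = "purge volume action=truncate allpools storage=SomeStorage"  # placeholder storage name
  }
}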


c) Do I need a restore job for each client? I would just like to restore 
a backup on the same client, defaulting to the /restore folder... When I use 
the bconsole "restore all" command, the wizard asks me all the questions 
(e.g. 5 - last backup for a client, which client, fileset...), but at the 
end it asks for a restore job which changes all previously defined things 
(e.g. client).
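
As far as I can tell, a single generic restore job is normally enough; its 
Client, FileSet and Where values are only defaults and can be changed in 
the bconsole wizard (at the final "mod" prompt, or up front with the 
client=, restoreclient= and where= keywords of the restore command). A 
minimal sketch with placeholder names:

# One shared restore job; the actual file selection and target client
# come from the bconsole restore wizard.
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = "someclient-fd"         # placeholder default
  Storage = "SomeStorage"          # placeholder
  FileSet = "someclient-fileset"   # placeholder
  Pool = Default
  Messages = Standard
  Where = /restore                 # default restore location
}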


d) At the moment, I have not implemented autochanger functionality. 
Clients compress/encrypt the data and send it to the bacula server, which 
writes it to one central storage system. Jobs are processed in 
sequential order (one at a time). Do you expect any significant 
performance gain if I implement an autochanger in order to have jobs run 
simultaneously?
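
For what it is worth, running jobs in parallel is mostly a matter of the 
Maximum Concurrent Jobs directive, which has to be raised in every resource 
involved (the lowest value wins), plus enough Device resources (or a 
virtual autochanger) so that concurrent jobs do not interleave on a single 
volume. A rough sketch with illustrative values only:

# bacula-dir.conf
Director {
  # ... existing directives ...
  Maximum Concurrent Jobs = 10
}
Storage {
  # ... existing directives ...
  Maximum Concurrent Jobs = 4
}

# bacula-sd.conf
Device {
  # ... existing directives ...
  Maximum Concurrent Jobs = 4    # or define several Device resources,
                                 # one concurrent job per device
}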


The relevant part of the configuration is attached below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example* (fileparts 1-23 should be deleted):
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk   262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 35988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 35992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 36000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 35981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 35988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 35995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 35981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 35992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 35995 Jul 30 23:09 part.12
-rw-r--r--. 1 bacula disk 35993 Jul 30 23:10 part.13
-rw-r--r--. 1 bacula disk 36000 Jul 30 23:11 part.14
-rw-r--r--. 1 bacula disk 35984 Jul 30 23:12 part.15
-rw-r--r--. 1 bacula disk 580090514 Jul 30 23:13 part.16
-rw-r--r--. 1 bacula disk 35994 Aug  3 23:09 part.17
-rw-r--r--. 1 bacula disk 35936 Aug  3 23:12 part.18
-rw-r--r--. 1 bacula disk 35971 Aug  3 23:13 part.19
-rw-r--r--. 1 bacula disk 35984 Aug  3 23:14 part.20
-rw-r--r--. 1 bacula disk 35973 Aug  3 23:15 part.21
-rw-r--r--. 1 bacula disk 35977 Aug  3 23:17 part.22
-rw-r--r--. 1 bacula disk 108461297 Aug  3 23:17 part.23
-rw-r--r--. 1 bacula disk 35974 Aug  4 23:09 part.24
-rw-r--r--. 1 bacula disk 35987 Aug  4 23:10 part.25
-rw-r--r--. 1 bacula disk 35971 Aug  4 23:11 part.26
-rw-r--r--. 1 bacula disk 36000 Aug  4 23:12 part.27
-rw-r--r--. 1 bacula disk 398437855 Aug  4 23:12 part.28

*Cache (deleted as expected):*

[root@bacula cetrtapot-daily-vol-0022]# ls -ltr 
/mnt/backup_bacula/cloudcache/cetrtapot-daily-vol-0022/

total 4
-rw-r-. 1 bacula disk 262 Jul 28 23:05 part.1

*Relevant part of central configuration*

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "CatalogJob"
  Level = Full
  FileSet="Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob  = "/opt/bacula/scripts/delete_catalog_backup"
  

Re: [Bacula-users] bacula - optimize storage for cloud sync

2020-07-09 Thread Žiga Žvan



Hello Kern,

Thank you for all the info. I needed to install the cloud plugin (yum install 
bacula-cloud-storage) in order to use this configuration. The relevant 
part of bacula-sd.conf is pasted below. The files did get uploaded to 
Oracle object storage.


I intend to use a lifecycle policy on the cloud bucket. It will archive 
files older than 60 days to storage that is ten times cheaper. For bacula 
this means that some fileparts will not be available until I manually 
restore them (after 60 days their status will change from available to 
archived).


Do you expect that this will cause any kind of problem for the backup 
jobs? Perhaps the fileparts of the last full backup need to be available 
in the local folder in order to run an incremental backup?


What about the restore procedure (e.g. from a 90-day-old incremental 
backup)? Will bacula notify me which fileparts are missing from the local 
folder so that I can complete the restore?


I will continue with my tests and hopefully move to production in a 
month or two. I intend to post on the bacula-users list if everything 
works for me. If I forget, do not hesitate to ask me about it.


Kind regards,
Ziga

Relevant part of bacula-sd.conf:

Device {
  Name = FSOciCloudStandard
  Device type = Cloud
  Cloud = OracleViaStorageGateway
#  Maximum Part Size = 100 MB
  Media Type = File1
  Archive Device = /mnt/backup_bacula/backup
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Cloud {
  Name = OracleViaStorageGateway
  Driver = "File"
  HostName = "/mnt/baculatest_standard/backup"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
}



On 08/07/2020 16:32, Kern Sibbald wrote:


Hello Ziga,

Yes, you might be able to do what you want using a "debug" feature of 
the Bacula Cloud driver.  It is not very well documented, but there 
is one section, "3 File Driver for the Cloud", in the "Bacula Cloud 
Backup" whitepaper that mentions it.


Basically, instead of using the "S3" driver in the Cloud resource of 
your Storage Daemon, you use "File", and the HostName becomes the path 
where the Cloud volumes (directories + parts) will be written.  For 
example, I use the following for writing to disk instead of an S3 cloud.


Cloud {
  Name = DummyCloud
  Driver = "File"
  HostName = "/home/kern/bacula/k/regress/tmp/cloud"
  BucketName = "DummyBucket"
  AccessKey = "DummyAccessKey"
  SecretKey = "DummySecretKey"
  Protocol = HTTPS
  UriStyle = VirtualHost
}

The Device resource looks like:

Device {
  Name = FileStorage1
  Media Type = File1
  Archive Device = /home/kern/bacula/k/regress/tmp
  LabelMedia = yes;   # lets Bacula label unlabelled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Device Type = Cloud
  Cloud = DummyCloud
}

I know the code runs and produces correct output, but I am not sure 
how it will work in your environment.  If it works, great.  If it 
doesn't work, I unfortunately cannot provide support for the near 
future, but at some point (probably 3-6 months) the project may 
support this feature.


Note: the next version of Bacula coming in a few months will have a 
good number of new features and improvements to the Cloud driver 
(more bconsole commands for example).


Good luck, and best regards,

Kern

PS: If it does work for you, I would appreciate it if you would 
document it and publish it on the bacula-users list so others can use 
the Oracle cloud.




On 7/8/20 3:41 PM, Žiga Žvan wrote:


Hi Mr. Kern,

My question was a bit different. I have noticed that Oracle S3 is 
not compatible, therefore I have implemented the Oracle Storage Gateway 
(a docker image that uses the local filesystem as a cache and moves the 
data automatically to the Oracle cloud). I have this filesystem mounted 
(NFSv4) on the bacula server and I am able to back up data to this 
storage (and hence to the cloud).


I have around 1 TB of data daily and I'm a bit concerned about the 
bandwidth. It will take approx. 4 hours to sync to the cloud, and I need 
to account for future growth. As long as bacula writes data to one 
file/volume, where it stores full and incremental backups, this is 
not optimal for the cloud (the file will change and all the data 
will be uploaded each day). I have noticed that bacula stores data 
differently in the cloud configuration: a volume is not a file, but a 
folder with fileparts. This would be better for me, because only 
some fileparts would change and move to the cloud via the Storage 
Gateway. So the question is:
Can I configure bacula-sd to store data in fileparts, without actual 
cloud sync? Is this possible? I have tried severa

Re: [Bacula-users] bacula - optimize storage for cloud sync

2020-07-08 Thread Žiga Žvan

Hi Mr. Kern,

My question was a bit different. I have noticed that Oracle S3 is not 
compatible, therefore I have implemented the Oracle Storage Gateway (a 
docker image that uses the local filesystem as a cache and moves the data 
automatically to the Oracle cloud). I have this filesystem mounted (NFSv4) 
on the bacula server and I am able to back up data to this storage (and 
hence to the cloud).


I have around 1 TB of data daily and I'm a bit concerned about the 
bandwidth. It will take approx. 4 hours to sync to the cloud, and I need to 
account for future growth. As long as bacula writes data to one 
file/volume, where it stores full and incremental backups, this is not 
optimal for the cloud (the file will change and all the data will be 
uploaded each day). I have noticed that bacula stores data differently in 
the cloud configuration: a volume is not a file, but a folder with 
fileparts. This would be better for me, because only some fileparts would 
change and move to the cloud via the Storage Gateway. So the question is:
Can I configure bacula-sd to store data in fileparts, without actual 
cloud sync? Is this possible? I have tried several configurations of a 
bacula-sd device with no luck. Should I configure some dummy cloud 
resource?


Kind regards,

Ziga Zvan


On 07/07/2020 14:40, Kern Sibbald wrote:


Hello,

Oracle S3 is not compatible with Amazon S3, or at least not with the libs3 
library that we use to interface with AWS and other S3-compatible cloud 
offerings.


Yes, Bacula Enterprise has a separate Oracle cloud driver that they 
wrote.  There are no plans at the moment to backport it to the 
community version.


Best regards,

Kern

On 7/7/20 8:43 AM, Žiga Žvan wrote:



Dear all,

I'm testing the community version of bacula in order to replace the backup 
software for approx. 100 virtual and physical hosts. I would like to move 
all the data to local storage and then move it to the public cloud (Oracle 
Object Storage).


I believe that the community version of the software suits our needs. I 
have installed:
- version 9.6.5 of bacula on a CentOS 7 computer
- Oracle Storage Gateway (similar to the AWS Storage Gateway: it moves data 
to object storage and exposes it locally over NFSv4; for bacula this is the 
backup destination).


I have read these two documents regarding bacula and the cloud:
https://blog.bacula.org/whitepapers/CloudBackup.pdf
https://blog.bacula.org/whitepapers/ObjectStorage.pdf

It is mentioned in the documents above that Oracle Object Storage is 
not supported at the moment.
Is it possible to *configure* a bacula Storage device in a way that 
uses the *Cloud media format* (a directory with file parts as a volume, 
instead of a single file as a volume) *without actual cloud sync* 
(the Storage Gateway does this in my case)? I am experimenting with 
variations of the definition below, but I am unable to solve this 
issue for now (it either tries to initialize the cloud plugin or it 
writes to a file instead of a directory).


Device {
  Name = FSOciCloudStandard
#  Device type = Cloud
  Device type = File
#  Cloud = OracleViaStorageGateway
  Maximum Part Size = 100 MB
#  Media Type = File
  Media Type = CloudType
  Archive Device = /mnt/baculatest_standard/backup
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Is there any plan to support Oracle object storage in the near future? It 
has an S3-compatible API and Bacula Enterprise supports it...

Kind regards,
Ziga Zvan


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula - optimize storage for cloud sync

2020-07-07 Thread Žiga Žvan



Dear all,

I'm testing the community version of bacula in order to replace the backup 
software for approx. 100 virtual and physical hosts. I would like to move 
all the data to local storage and then move it to the public cloud (Oracle 
Object Storage).


I believe that the community version of the software suits our needs. I 
have installed:
- version 9.6.5 of bacula on a CentOS 7 computer
- Oracle Storage Gateway (similar to the AWS Storage Gateway: it moves data 
to object storage and exposes it locally over NFSv4; for bacula this is the 
backup destination).


I have read these two documents regarding bacula and the cloud:
https://blog.bacula.org/whitepapers/CloudBackup.pdf
https://blog.bacula.org/whitepapers/ObjectStorage.pdf

It is mentioned in the documents above that Oracle Object Storage is 
not supported at the moment.
Is it possible to *configure* a bacula Storage device in a way that uses 
the *Cloud media format* (a directory with file parts as a volume, instead 
of a single file as a volume) *without actual cloud sync* (the Storage 
Gateway does this in my case)? I am experimenting with variations of 
the definition below, but I am unable to solve this issue for now (it 
either tries to initialize the cloud plugin or it writes to a file instead 
of a directory).


Device {
  Name = FSOciCloudStandard
#  Device type = Cloud
  Device type = File
#  Cloud = OracleViaStorageGateway
  Maximum Part Size = 100 MB
#  Media Type = File
  Media Type = CloudType
  Archive Device = /mnt/baculatest_standard/backup
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Is there any plan to support Oracle object storage in the near future? It 
has an S3-compatible API and Bacula Enterprise supports it...

Kind regards,
Ziga Zvan
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users