[Bacula-users] Which version works with GCC 4.8.5

2018-02-23 Thread Hicks, Daniel CTR OSD DMEA
Hello all

I am working on installing Bacula on a new system running RHEL 7.4, but
unfortunately that RHEL version still ships only GCC 4.8.5.

I have Bacula 9.0.3 installed on another network and would like to install a
newer version on this one. Which Bacula versions still build with GCC 4.8.5?

Before everyone points to an online repository, let me say that both networks
are isolated from the internet, so moving data in and out is very difficult.

As always thanks for the help.


Daniel Hicks
Senior Systems Analyst
FutureWorld Technologies Inc.
DMEA IT Support

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Testing VirtualFull; job created but not running; why?

2018-02-23 Thread Martin Simmons
Do you have multiple storage devices configured?  As Bill mentioned, a
VirtualFull job needs two devices.  If you are backing up to disk, then you
could use a "virtual autochanger" within a single bacula-sd for this.
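For example, a minimal sketch of such a virtual autochanger, loosely following
the FileChgr1 example shipped in the stock bacula-sd.conf (device names, paths,
address, and password below are placeholders, not your actual config):

```conf
# bacula-sd.conf -- one autochanger wrapping two file devices, so a
# VirtualFull job can read from one device and write to the other.
Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = FileChgr1-Dev1
  Media Type = File
  Archive Device = /backup          # placeholder path
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
  Maximum Concurrent Jobs = 5
}

Device {
  Name = FileChgr1-Dev2
  Media Type = File
  Archive Device = /backup          # same directory is fine for file devices
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
  Maximum Concurrent Jobs = 5
}

# bacula-dir.conf -- point the Director's Storage resource at the
# autochanger (not at a single device) and raise its concurrency.
Storage {
  Name = File1
  Address = backup.example.com      # placeholder
  Password = "xxx"                  # placeholder
  Device = FileChgr1
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 10
}
```

If MaximumConcurrentJobs is left at its default of 1 on any of the resources
in Bill's list below, jobs queue up with exactly the "waiting on max Storage
jobs" status shown in that thread.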

__Martin
  

> On Mon, 19 Feb 2018 17:18:07 +, Mike Eggleston said:
> 
> Hi Bill,
> 
> In the Pool definitions I have "#Maximum Volume Jobs = 1" and in 
> bacula-sd.conf I have "Maximum Concurrent Jobs = 20". I also have "Maximum 
> Concurrent Jobs = 20" in bacula-dir.conf. What else do I need to check? I 
> know you presented a list of files below, but I'm confused about this. I have 
> the autochangers commented out.
> 
> Thanks,
> Mike
> 
> -Original Message-
> From: Bill Arlofski [mailto:waa-bac...@revpol.com] 
> Sent: Monday, February 19, 2018 10:50 AM
> To: Mike Eggleston ; 
> bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Testing VirtualFull; job created but not running; 
> why?
> 
> On 02/19/2018 08:57 AM, Mike Eggleston wrote:
> > Sorry it took me a while to get this answer... life is...
> 
> [...snip...]
> > Running Jobs:
> > Console connected at 19-Feb-18 09:05
> >  JobId  Type Level Files Bytes  Name  Status
> > ==
> >112  Back Virt  0 0  dvlnx107-vf   is waiting on max Storage jobs
> >132  Back Virt  0 0  dvlnx107-vf   is waiting on max Storage jobs
> > 
> [...snip...]
> 
> 
> Hi Mike,
> 
> Consider that when a Virtual Full is run, it needs one device to read from 
> and one device to write to.
> 
> Notice the "is waiting on max Storage jobs"  messages?   It is probable that
> you have not added a "MaximumConcurrentJobs" (MCJ) setting to your Director's 
> Storage{} resource, or in the bacula-sd.conf file's Main Storage{} resource 
> where the SD is defined.
> 
> It is possible that you are trying to use a DIR Storage resource that points 
> to a single device in the SD... You will want to set up an Autochanger with 
> more than one device in it.
> 
> Also, if you do not explicitly set an MCJ, it defaults to 1 for just about 
> every resource you can think of:
> 
> - DIR
> - SD
> - FD
> - Job
> - Dir Storage resource
> - SD Storage/Autochanger resource
> - SD Devices
> 
> 
> Hope this helps you track this down.
> 
> 
> Best regards,
> 
> Bill
> 
> 
> --
> Bill Arlofski
> http://www.revpol.com/bacula
> -- Not responsible for anything below this line --



Re: [Bacula-users] Backup of large volumes never completes - keeps restarting

2018-02-23 Thread Wanderlei Huttel
Hello Fourie

Which Bacula version are you running?

Regards

*Wanderlei Hüttel*
http://www.huttel.com.br

2018-02-23 5:32 GMT-03:00 Fourie Joubert :

> Hi Folks
>
> I know similar topics have been addressed before in many posts, but none
> of them provide me with a workable solution…
>
> - We are backing up fairly large volumes: 250TB up to 1.5 PB over 40Gbps
> Infiniband
> - Bacula backs up to a target PB-scale ZFS pool
> - The volumes created are each 100GB in size
>
> We are having trouble getting the first full backups to finish
> successfully in one job (due to various IT issues that we do not have
> control over).
>
> The result is that although the backups are configured as incrementals,
> there is never a successful full backup in a single job, and the next job
> starts backing up everything all over again. This happens over and over so
> we never get a backup of all the content and we fill up our ZFS backup
> target pool with all the uncompleted attempts.
>
> Is there a way to prevent this, so that despite a backup job being flagged
> as unsuccessfully terminated, the next session will be forced to only be
> incremental?
>
> Any advice would be sincerely appreciated!
>
> Best regards,
>
> Fourie
>
>
>
> This message and attachments are subject to a disclaimer.
> Please refer to http://upnet.up.ac.za/services/it/documentation/docs/
> 004167.pdf for full details.
>
> 


[Bacula-users] Backup of large volumes never completes - keeps restarting

2018-02-23 Thread Fourie Joubert
Hi Folks

I know similar topics have been addressed before in many posts, but none of
them provide me with a workable solution…

- We are backing up fairly large datasets: 250 TB up to 1.5 PB over 40 Gbps
Infiniband
- Bacula backs up to a target PB-scale ZFS pool
- The volumes created are each 100 GB in size

We are having trouble getting the first full backups to finish successfully
in one job (due to various IT issues that we do not have control over).

The result is that although the backups are configured as incrementals,
there is never a successful full backup in a single job, and the next job
starts backing up everything all over again. This happens over and over so
we never get a backup of all the content and we fill up our ZFS backup
target pool with all the uncompleted attempts.

Is there a way to prevent this, so that even when a backup job terminates
unsuccessfully, the next run is forced to remain an Incremental instead of
being upgraded to a Full?

Any advice would be sincerely appreciated!

Best regards,

Fourie

-- 
This message and attachments are subject to a disclaimer.
Please refer to 
http://upnet.up.ac.za/services/it/documentation/docs/004167.pdf for full 
details.