Re: [Bacula-users] Workaround to get Bacula FD running on Ubuntu 22.04 Jammy using Debian 11 Bullseye packages

2022-09-29 Thread THAF (Thomas Alexander Frederiksen) via Bacula-users
On 2022-09-28 18:06, Josip Deanovic wrote:
> I don't understand what the problem was with the Bacula FD on Ubuntu.
> Is that something Ubuntu specific?
>
> For Debian 10.12 and Bacula 9.6.7, all one would need to do is use the
> Debian buster-backports repository and install the needed bacula-*
> packages as usual.
There are no bacula-fd packages in any Ubuntu 22.04 repo, not even in backports.

--
/Thomas
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] WG: Bacula 11.0.6 - Migration from file archive to lto-5 tape ends with fatal error after 15 minutes waiting

2022-05-01 Thread Thomas Bölscher
Hi Marcin,

 

thank you very much for your help.

 

First I want to mention that I used the "new migration job" wizard of Baculum to create the migration job.

 

As far as I know I have never touched the "DefaultJob", so I guess it is completely default. Here it is:

 

JobDefs {
  Name = "DefaultJob"
  Type = "Backup"
  Level = "Incremental"
  Messages = "Standard"
  Storage = "File1"
  Pool = "File"
  Client = "bacula-backup-fd"
  Fileset = "Full Set"
  Schedule = "WeeklyCycle"
  WriteBootstrap = "/opt/bacula/working/%c.bsr"
  SpoolAttributes = yes
  Priority = 10
}

 

So I see no "Tape1" entry in the "DefaultJob", and when I run the migration job manually in Baculum I select "File1" as the source storage for the migration to tape, which should be correct!?

 

When I select "Run Job" for the migration job in Baculum, these are the settings I choose / the wizard has set:

*   Level: Incremental
*   Client: bacula-backup-fd
*   FileSet: i7-11700k-fileset (<- not sure if that is correct, but this is the setting I used for the original file-daemon-to-storage-daemon backup)
*   Pool: File
*   Storage: Tape1
*   Priority: 10

 

Best regards

Thomas

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula 11.0.6 - Migration from file archive to lto-5 tape ends with fatal error after 15 minutes waiting

2022-05-01 Thread Thomas Bölscher
 

Hi and greetings from Germany,

 

I hope someone has an idea why I always run into trouble using the "migration" job in Bacula 11.0.6.

 

I am running Bacula 11.0.6 with the Baculum GUI on Ubuntu Server 20.04 (latest updates/patches installed) with a single HP LTO5 drive in an old HP ProLiant ML350 G6.

The server was freshly installed from the 11.0.6 DEB packages available today.

 

I am able to back up my Windows clients/files directly to the Bacula file storage and directly to LTO5 tape storage without a hassle.

So direct backup from some Windows-based servers to disk, or direct backup to tape, works like a charm.

 

But when I try to migrate a former disk backup to a tape backup, the whole process does not even start and I don't see why that is.

The "messages" log states it is using the device "LTO5-Drive1" to READ (which is kind of strange, because the migration job should use the LTO drive to WRITE and a file device to read). After 15 minutes of waiting there is the message "Fatal error: Storage daemon didn't accept Device "LTO5-Drive1" command." (The job then hangs forever, because the migration never comes to an end and never aborts; I have to restart all processes/the server.)

 

Before installing this on Ubuntu Server 20.04 I had another Bacula 11.0.6 setup running under Debian 11, and I had exactly the same problem there. That was the reason I wiped my whole Debian 11 server and started from scratch with Ubuntu 20.04, only to run into exactly the same "migration job" problem as on Debian 11 before.

So I have not been able to get Bacula "migration" jobs running on two different setups (Debian 11 / Ubuntu 20.04) so far. ☹

 

As I am a Bacula newbie, having fiddled around with it for only about a week, I am struggling to find the cause of this misbehavior.

 

Here are some snippets from my Bacula Director config:

 

Storage {
  Name = "File1"
  SdPort = 9103
  Address = "bacula-backup"
  Password = "XX"
  Device = "FileChgr1"
  MediaType = "File1"
  Autochanger = "File1"
  MaximumConcurrentJobs = 10
}

Storage {
  Name = "Tape1"
  SdPort = 9103
  Address = "bacula-backup"
  Password = "X"
  Device = "LTO5-Drive1"
  MediaType = "LTO3000"
  MaximumConcurrentJobs = 10
}

 

 

Pool {
  Name = "File"
  PoolType = "Backup"
  LabelFormat = "Vol-"
  MaximumVolumes = 100
  MaximumVolumeBytes = 53687091200
  VolumeRetention = 31536000
  AutoPrune = yes
  Recycle = yes
}

Pool {
  Name = "Tape1"
  PoolType = "Backup"
  LabelFormat = "Tape-"
  LabelType = "Bacula"
  MaximumVolumes = 100
  MaximumVolumeBytes = 53687091200
  VolumeRetention = 31536000
  Storage = "Tape1"
  AutoPrune = yes
  Recycle = yes
  Catalog = "MyCatalog"
}

 

 

Job {
  Name = "Migrate-to-tape"
  Type = "Migrate"
  NextPool = "Tape1"
  Fileset = "i7-11700k-fileset"
  JobDefs = "DefaultJob"
  SelectionPattern = "i7-11700k-backupjob"
  SelectionType = "Job"
}
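One detail that might be worth checking (this is an assumption based on how Migrate jobs usually pick their read storage, not a confirmed fix): the Director normally takes the read storage for a migration from the source pool, so it can help if the "File" pool itself names its storage via a Storage directive. A minimal sketch:

Pool {
  Name = "File"
  PoolType = "Backup"
  Storage = "File1"        # read side of the migration: the file autochanger
  NextPool = "Tape1"       # write side: migrated jobs go to the tape pool
  LabelFormat = "Vol-"
  MaximumVolumes = 100
  MaximumVolumeBytes = 53687091200
  VolumeRetention = 31536000
  AutoPrune = yes
  Recycle = yes
}

With the pool wired up like this (NextPool can live in the pool or stay in the Job as above), the Migrate job should read from "File1" and write to "Tape1".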

 

 

And here are some Storage Daemon config snippets:

 

 

Device {
  Name = "FileChgr1-Dev1"
  MediaType = "File1"
  ArchiveDevice = "/media/backups"
  RemovableMedia = no
  RandomAccess = yes
  AutomaticMount = yes
  LabelMedia = yes
  AlwaysOpen = no
  MaximumConcurrentJobs = 5
}

Device {
  Name = "LTO5-Drive1"
  MediaType = "LTO3000"
  DeviceType = "Tape"
  ArchiveDevice = "/dev/nst0"
  RemovableMedia = yes
  RandomAccess = no
  AutomaticMount = yes
  LabelMedia = yes
  AlwaysOpen = yes
  MaximumFileSize = 100
  MaximumConcurrentJobs = 5
  LabelType = "Bacula"
}

 

Thank you very much in advance

Thomas

 

 

 

 

 

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtual tapes or virtual disks

2022-01-26 Thread Thomas Lohman


I have a RAID5 array of about 40TB in size. A separate RAID controller card handles the disks. I'm planning to use the normal ext4 file system. It's standard and well known, though most probably not the fastest. That will not have any great impact, as there is a 4TB NVMe SSD drive, which takes the edge off the slow physical disk performance.



Hi,

I'd recommend, if you're going to use RAID, that you at least use a RAID-6 configuration. You don't want to risk losing all your backups if you have a drive fail and then, during the rebuild of the RAID-5, you happen to have another drive failure/error.


cheers,

--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] New Bacula Server with multiple disks

2021-12-16 Thread Thomas Lohman
>Can Bacula use my 4 disks in the same way, filling up backup1 and then using backup2, etc.?


The short answer is yes. We've been doing this for over a decade, using symlinks to create one logical Bacula storage area that then points off to 40-50 disks' worth of volume data on each server. In general, I would agree with the RAID recommendation given the few drives that you have. One option, if you can afford it, would be to double your disk count and create a RAID 10.


Since, at the time, we could not afford RAID setups for the number of disks and backup servers that we have, I created an application that "stripes" our completed backup volume data across all the JBOD disks on a given server; thus, if we lose one disk, it lessens the likelihood that we lose an entire sequence of backup data. It also helps to exercise the drives and root out suspect drives before they totally fail, which allows us to copy all the good backup volumes off a suspect disk and take it out of circulation.
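For illustration, a minimal sketch of what such a single logical storage area can look like on the storage daemon side; the paths and names here are hypothetical, and a symlink layout is only one way of spreading volumes across disks:

Device {
  Name = "FileStorage"
  Media Type = "File"
  Archive Device = "/bacula/volumes"   # directory whose volume files are symlinks,
                                       # e.g. Vol-0001 -> /disk03/Vol-0001
  Random Access = yes
  Removable Media = no
  Automatic Mount = yes
  Label Media = yes
}

Bacula only ever sees one Archive Device directory, while each volume file actually lives on whichever physical disk its symlink points to.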


cheers,


--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Baculum Volume Error

2020-06-11 Thread Jeff Thomas
Thanks. That fixed it.   I must have missed that part in the installation
instructions.

On Thu, Jun 11, 2020 at 9:36 AM Marcin Haba  wrote:

> Hello Jeff,
>
> The timezone setting is missing in your PHP configuration. You need
> to add a timezone setting to the php.ini file in /etc, for example:
>
> date.timezone = "Europe/Warsaw"
>
> You can find the full list of timezones here:
>
> https://www.php.net/manual/en/timezones.php
>
> At the end you need to restart or reload the web server.
>
> Best regards,
> Marcin Haba (gani)
>
> On Thu, 11 Jun 2020 at 16:16, Jeff Thomas  wrote:
> >
> > Error 1000 - Internal error. [Warning] strtotime(): It is not safe to
> rely on the system's timezone settings. You are *required* to use the
> date.timezone setting or the date_default_timezone_set() function. In case
> you used any of those methods and you are still getting this warning, you
> most likely misspelled the timezone identifier. We selected the timezone
> 'UTC' for now, but please set date.timezone to select your timezone. (@line
> 135 in file
> /usr/share/baculum/htdocs/protected/API/Class/VolumeManager.php).
> >
> >
> >
> >
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
>
> --
> "Greater love hath no man than this, that a man lay down his life for
> his friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie
> za przyjaciół swoich." Jezus Chrystus
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Baculum Volume Error

2020-06-11 Thread Jeff Thomas
Error 1000 - Internal error. [Warning] strtotime(): It is not safe to rely
on the system's timezone settings. You are *required* to use the
date.timezone setting or the date_default_timezone_set() function. In case
you used any of those methods and you are still getting this warning, you
most likely misspelled the timezone identifier. We selected the timezone
'UTC' for now, but please set date.timezone to select your timezone. (@line
135 in file
/usr/share/baculum/htdocs/protected/API/Class/VolumeManager.php).
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] areas for improvement?

2020-05-27 Thread Thomas Lohman

Bacula DOES NOT LIKE and does not handle network interruptions _at all_
if backups are in progress. This _will_ cause backups to abort - and
these aborted backups are _not_ resumable


Hi,

My feeble two cents is that this has been a bit of an Achilles heel for 
us even though we are a LAN backup environment (e.g. backups don't leave 
our local network).  We are still running an older "somewhat/slightly" 
customized/modified version of community bacula so I have not explored 
the restarting of stopped jobs option that has come with newer versions. 
Given that, I can recall when we initially deployed our "backups to 
disk" setup, I would see backups of large file systems/data (e.g. 1TB) 
write 3/4ths of their data to volumes and then error out due to some 
random network interruption.  I didn't like the idea that this meant 
e.g. 750GBs worth of our volume space was taken up by an 
errored/incomplete job that would never be used.  Because of this, I had 
to implement spooling which typically people would only do if their 
backups were then being written to sequential media (tape).  So, we now 
spool all jobs to dedicated spool disks and then bacula writes that data 
to the disk data volumes.  It fixed the "cruft" issue and made large 
backups more stable (along with other options).  But I can imagine a 
scenario where we would not have had to do this if Bacula could more 
easily recover from network glitches and automatically restart jobs 
where it last left off (thinking along the lines of the concept of 
checkpointing in an RDBMS).
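For reference, a minimal sketch of how spooling to a dedicated disk is usually wired up; the paths, sizes, and resource names below are assumptions, not taken from our actual configuration:

# bacula-sd.conf: give the file device a spool area on a dedicated disk
Device {
  Name = "FileStorage"
  Media Type = "File"
  Archive Device = "/bacula/volumes"
  Spool Directory = "/spool/bacula"    # dedicated spool disk
  Maximum Spool Size = 500G
  Random Access = yes
  Removable Media = no
  Automatic Mount = yes
  Label Media = yes
}

# bacula-dir.conf: turn spooling on for the jobs
JobDefs {
  Name = "SpooledDefaults"
  Spool Data = yes
  Spool Attributes = yes
}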


As someone else said, this would require non-trivial changes to Bacula 
(i.e. I won't be making those changes to our version - :) ) and the 
devil would be in the details in practice.  Still, if it was put to a 
vote, I'd probably vote for this as "a nice feature to have."


cheers,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Baculum Install on Centos 7

2020-05-22 Thread Jeff Thomas
Apparently yum doesn't flush and re-read the repos.d files every time?
The solution was:

yum clean all
rm -rf /var/cache/yum/*




On Thu, May 21, 2020 at 10:36 PM Marcin Haba  wrote:

> Hello Jeff,
>
> How is it possible that the repository contains packages with the el7 suffix
> while your packages have fc31? Please look at the repository address:
>
> http://bacula.org/downloads/baculum/stable/centos
>
> There aren't any packages marked 'fc31' there, yet your packages have it:
>
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-9.6.3-1.fc31.noarch
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-httpd-9.6.3-1.fc31.noarch
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-common-9.6.3-1.fc31.noarch
>
> I don't know how it could be possible.
>
> Best regards,
> Marcin Haba (gani)
>
> On Fri, 22 May 2020 at 00:06, Jeff Thomas  wrote:
> >
> > I'm using the following repos.d file:
> >
> > [root@costello jrthomas]# cat /etc/yum.repos.d/baculum.repo
> > [baculumrepo]
> > name=Baculum CentOS repository
> > baseurl=http://bacula.org/downloads/baculum/stable/centos
> > gpgcheck=1
> > enabled=1
> >
> >
> >
> >
> > On Thu, May 21, 2020 at 4:40 PM Marcin Haba  wrote:
> >>
> >> Hello Jeff,
> >>
> >> It looks you are trying to install Fedora 31 packages on CentOS.
> >>
> >> I would propose to use Baculum packages dedicated for CentOS 7. You
> >> can take them from here:
> >>
> >>
> https://www.bacula.org/9.6.x-manuals/en/console/Baculum_API_Web_GUI_Tools.html#SECTION0034
> >>
> >> Best regards,
> >> Marcin Haba (gani)
> >>
> >> On Thu, 21 May 2020 at 23:04, Jeff Thomas  wrote:
> >> >
> >> > I'm following the instructions to the letter and then 'WHAM!'
> >> >
> >> > Running transaction check
> >> > ERROR You need to update rpm to handle:
> >> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-9.6.3-1.fc31.noarch
> >> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-httpd-9.6.3-1.fc31.noarch
> >> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-common-9.6.3-1.fc31.noarch
> >> >
> >> > I don't get it.   From what I can tell zstd enabled RPM is a fiction
> for some future release of Centos 8?
> >> >
> >> > I'd appreciate some help.
> >> >
> >> > Thanks!
> >> >
> >> >
> >> > ___
> >> > Bacula-users mailing list
> >> > Bacula-users@lists.sourceforge.net
> >> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >>
> >>
> >>
> >> --
> >> "Greater love hath no man than this, that a man lay down his life for
> >> his friends." Jesus Christ
> >>
> >> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie
> >> za przyjaciół swoich." Jezus Chrystus
>
>
>
> --
> "Greater love hath no man than this, that a man lay down his life for
> his friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie
> za przyjaciół swoich." Jezus Chrystus
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Baculum Install on Centos 7

2020-05-21 Thread Jeff Thomas
I'm using the following repos.d file:

[root@costello jrthomas]# cat /etc/yum.repos.d/baculum.repo
[baculumrepo]
name=Baculum CentOS repository
baseurl=http://bacula.org/downloads/baculum/stable/centos
gpgcheck=1
enabled=1




On Thu, May 21, 2020 at 4:40 PM Marcin Haba  wrote:

> Hello Jeff,
>
> It looks you are trying to install Fedora 31 packages on CentOS.
>
> I would propose to use Baculum packages dedicated for CentOS 7. You
> can take them from here:
>
>
> https://www.bacula.org/9.6.x-manuals/en/console/Baculum_API_Web_GUI_Tools.html#SECTION0034
>
> Best regards,
> Marcin Haba (gani)
>
> On Thu, 21 May 2020 at 23:04, Jeff Thomas  wrote:
> >
> > I'm following the instructions to the letter and then 'WHAM!'
> >
> > Running transaction check
> > ERROR You need to update rpm to handle:
> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-9.6.3-1.fc31.noarch
> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-httpd-9.6.3-1.fc31.noarch
> > rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-common-9.6.3-1.fc31.noarch
> >
> > I don't get it.   From what I can tell zstd enabled RPM is a fiction for
> some future release of Centos 8?
> >
> > I'd appreciate some help.
> >
> > Thanks!
> >
> >
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
>
> --
> "Greater love hath no man than this, that a man lay down his life for
> his friends." Jesus Christ
>
> "Większej miłości nikt nie ma nad tę, jak gdy kto życie swoje kładzie
> za przyjaciół swoich." Jezus Chrystus
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Baculum Install on Centos 7

2020-05-21 Thread Jeff Thomas
I'm following the instructions to the letter and then 'WHAM!'

Running transaction check
ERROR You need to update rpm to handle:
rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
baculum-api-9.6.3-1.fc31.noarch
rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
baculum-api-httpd-9.6.3-1.fc31.noarch
rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
baculum-common-9.6.3-1.fc31.noarch

I don't get it. From what I can tell, a zstd-enabled RPM is a fiction until some future release of CentOS 8?

I'd appreciate some help.

Thanks!
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Installation/Startup Issues

2020-05-15 Thread Jeff Thomas
No, same issue.


On Fri, May 15, 2020 at 5:43 AM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:

> Hello,
>
> Thu, 14 May 2020 at 18:03, Jeff Thomas  wrote:
>
>> Problem solved.   FQDN needed in bconsole.conf
>>
>>
> I'm very glad that you've solved your issue! Great!
>
> So, it seems you had a different issue than the one previously reported, right?
>
> 13-May 11:23 bacula-dir JobId 0: Fatal error: Could not open Catalog
> "MyCatalog", database "bacula".
> 13-May 11:23 bacula-dir JobId 0: Fatal error: postgresql.c:332 Unable to
> connect to PostgreSQL server. Database=bacula User=bacula
> Possible causes: SQL server not running; password incorrect;
> max_connections exceeded.
> 13-May 11:23 bacula-dir ERROR TERMINATION
> Please correct configuration file: bacula-dir.conf
>
> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Installation/Startup Issues

2020-05-14 Thread Jeff Thomas
Problem solved. The FQDN was needed in bconsole.conf.
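For anyone hitting the same thing, a minimal bconsole.conf sketch with the Director address written as an FQDN; the host name and password below are placeholders, not the actual values:

Director {
  Name = bacula-dir
  DIRport = 9101
  Address = costello.example.com    # full FQDN instead of the short host name
  Password = "console-password"
}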

On Thu, May 14, 2020 at 8:55 AM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:

> Hello,
>
> First of all, you should respond to the list and not to me directly,
> please.
>
> Thu, 14 May 2020 at 15:30, Jeff Thomas  wrote:
>
>> The director is running as 'bacula'
>>
>> bacula   27410 1  0 May13 ?00:00:00 /opt/bacula/bin/bacula-sd
>> -fP -c /opt/bacula/etc/bacula-sd.conf
>> root 27420 1  0 May13 ?00:00:00 /opt/bacula/bin/bacula-fd
>> -fP -c /opt/bacula/etc/bacula-fd.conf
>> bacula   31156 1  0 May13 ?00:00:00
>> /opt/bacula/bin/bacula-dir -fP -c /opt/bacula/etc/bacula-dir.conf
>>
>> I'm not sure what you mean by 'Catalog Resource Configuration'?
>>
>
> Your bacula-dir.conf -> Catalog{} resource configuration. This is a place
> where you configure information about your catalog database. The database
> name, user, password, server, etc.
>
> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Installation/Startup Issues

2020-05-14 Thread Jeff Thomas
# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = "bacula"; dbuser = "bacula"; dbpassword = ""
}



On Thu, May 14, 2020 at 8:55 AM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:

> Hello,
>
> First of all, you should respond to the list and not to me directly,
> please.
>
> Thu, 14 May 2020 at 15:30, Jeff Thomas  wrote:
>
>> The director is running as 'bacula'
>>
>> bacula   27410 1  0 May13 ?00:00:00 /opt/bacula/bin/bacula-sd
>> -fP -c /opt/bacula/etc/bacula-sd.conf
>> root 27420 1  0 May13 ?00:00:00 /opt/bacula/bin/bacula-fd
>> -fP -c /opt/bacula/etc/bacula-fd.conf
>> bacula   31156 1  0 May13 ?00:00:00
>> /opt/bacula/bin/bacula-dir -fP -c /opt/bacula/etc/bacula-dir.conf
>>
>> I'm not sure what you mean by 'Catalog Resource Configuration'?
>>
>
> Your bacula-dir.conf -> Catalog{} resource configuration. This is a place
> where you configure information about your catalog database. The database
> name, user, password, server, etc.
>
> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Installation/Startup Issues

2020-05-13 Thread Jeff Thomas
Greetings all,

I would greatly appreciate some pointers to resolve this issue.   I
installed using the Community Installation Guide with:

Centos 7, Bacula 9.6.3 and Postgresql 9.2

The bconsole command  returns to the shell immediately.

[root@costello working]# sudo -u bacula ../bin/bconsole
Connecting to Director costello:9101
[root@costello working]#

And the director, running in the foreground shows:

13-May 11:23 bacula-dir JobId 0: Fatal error: Could not open Catalog
"MyCatalog", database "bacula".
13-May 11:23 bacula-dir JobId 0: Fatal error: postgresql.c:332 Unable to
connect to PostgreSQL server. Database=bacula User=bacula
Possible causes: SQL server not running; password incorrect;
max_connections exceeded.
13-May 11:23 bacula-dir ERROR TERMINATION
Please correct configuration file: bacula-dir.conf

Thanks in advance!!
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula and memory usage

2020-01-28 Thread Thomas Lohman

Hello

I am used to this principle with Linux, but I don't understand why it just takes it all when Bacula is working and slows the server down so much that I can no longer access it over ssh.


How is your storage allocated on the server? i.e. how are things 
partitioned with regard to your backup disks and your database? If your 
DB is located on the same physical disks as your OS and/or your actual 
backup data then you could see such "freeze ups" while Bacula is running 
due to I/O limitations.  I find it helps to separate the OS, DB data and 
any Bacula storage volumes so they are all on separate disk devices if 
possible - separate controllers even better.



--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula and memory usage

2020-01-28 Thread Thomas Lohman




%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 52.9 id, 46.5 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 29987532 total,   220092 free,   697356 used, 29070084 buff/cache
KiB Swap: 15138812 total, 15138812 free,0 used. 28880936 avail Mem


It looks like your memory is being used by the Linux file cache. This is 
typical and if the system needs the memory for something else, it will 
use it.


As mentioned in my previous e-mail, can you run status within the 
director (bconsole) and see what the clients are doing when the backups 
are running?  Is bacula actually backing anything up?  The first thing 
to determine is if there is a problem/malfunction or if possibly your 
backups are simply taking too long to run (due to data total, # of 
files, etc.).



--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula and memory usage

2020-01-27 Thread Thomas Lohman

Hi,

How many files and total space on each client?  6 TB is not necessarily 
a huge total amount but you may want to consider splitting each client 
job into smaller chunks.  Also, what does the status of the jobs show?  
Does it show that it is indeed backing up data?  Unfortunately, if they 
are not close to finishing, you most likely are going to run into the 
hard limit on job run time (6 days?) and the jobs will be canceled.  I'm 
assuming that this hard coded limitation is still in the 7.0.5 code base.


Also, to avoid queuing up additional backup runs for the same job, you may want to look into the various options that allow one to cancel jobs if they are already running, already queued, etc.



--tom


On 1/27/20 2:11 PM, Jean Mark Orfali wrote:

Hello,

Thank you for your reply. Here is the missing information. My Bacula server and the four clients are on Linux CentOS 7 servers. I use Webmin version 1.941 to access Bacula. The Bacula version is 7.0.5. The SQL server is MariaDB version 5.5.64. The server has 30TB of hard drive and 30GB of memory. Backups are saved in a directory directly on the backup server. No backup is kept on the client side. At the moment there is 6 TB of data to back up. On each of the 4 clients I have an incremental backup task scheduled every day at 11 p.m. Right now I have 4 backups that have been running for 5 days and 14 waiting.

Here is the server configuration information:

Thank you so much!

Bacula-dir.conf

#
# Default Bacula Director Configuration file
#
#  The only thing that MUST be changed is to add one or more
#   file or directory names in the Include directive of the
#   FileSet resource.
#
#  For Bacula release 7.0.5 (28 July 2014) -- redhat Enterprise release
#
#  You might also want to change the default email address
#   from root to your address.  See the "mail" and "operator"
#   directives in the Messages resource.
#

Director {# define myself
   Name = bacula-dir
   DIRport = 9101
   QueryFile = "/etc/bacula/query.sql"
   WorkingDirectory = /var/spool/bacula
   PidDirectory = "/var/run"
   Maximum Concurrent Jobs = 100
   Password = "" # Console password
   Messages = Daemon
}



#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /tmp

#Job {
#  Name = "BackupClient2"
#  Client = bacula2-fd
#  JobDefs = "DefaultJob"
#}

#Job {
#  Name = "BackupClient1-to-Tape"
#  JobDefs = "DefaultJob"
#  Storage = LTO-4
#  Spool Data = yes# Avoid shoe-shine
#  Pool = Default
#}

#}

# Backup the catalog database (after the nightly save)

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#


# List of files to be backed up
FileSet {
   Name = "Full Set"
   Include {
 Options {
   signature = MD5
   compression = GZIP
 }
#
#  Put your list of files here, preceded by 'File =', one per line
#    or include an external list with:
#
#    File = <file-list
   mailcommand = "/usr/sbin/bsmtp -h 51.79.119.27 -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
   operatorcommand = "/usr/sbin/bsmtp -h 51.79.119.27 -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
   mail = root@51.79.119.27 = all, !skipped
   operator = root@51.79.119.27 = mount
   console = all, !skipped, !saved
#
# WARNING! the following will create a file that you must cycle from
#  time to time as it will grow indefinitely. However, it will
#  also keep all your messages if they scroll off the console.
#
   append = "/var/log/bacula/bacula.log" = all, !skipped
   catalog = all, !skipped, !saved
}


#
# Message delivery for daemon messages (no job).
Messages {
   Name = Daemon
   mailcommand = "/usr/sbin/bsmtp -h 51.79.119.27 -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
   mail = root@51.79.119.27 = all, !skipped
   console = all, !skipped, !saved
   append = "/var/log/bacula/bacula.log" = all, !skipped
}

# Default pool definition
Pool {
   Name = Default
   Pool Type = Backup
   Recycle = yes   # Bacula can automatically recycle Volumes
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 365 days # one year
   Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
   Maximum Volumes = 100   # Limit number of Volumes in Pool
}

# File Pool definition
Pool {
   Name = File
   Pool Type = Backup
   Label Format = Local-
   Recycle = yes   # Bacula can automatically recycle Volumes
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 365 days # one year
   Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
   Maximum Volumes = 100   # Limit number of Volumes in Pool
   #Label Format = "Vol-"   # Auto label
}


# Scratch pool definition
Pool {
   Name = Scratch
   Pool Type = Backup
}

#
# Restricted console used by tray-monitor to get the status of the director

[Bacula-users-fr] Demande d'aide

2019-03-20 Thread Thierry THOMAS

  
  
Hello,

I am trying to install Bacula 9.4.2 on CentOS 6 from the sources and I am running into a problem during the make. It ends with this error:

  -soname libbaccats-9.4.2.so -L/usr/lib64/ -lmariadb -lz -ldl -lm -lpthread -lssl -lcrypto

  /usr/bin/ld: cannot find -lmariadb

  collect2: ld returned 1 exit status

  make[1]: *** [libbaccats-mysql.la] Error 1

  make[1]: *** Waiting for unfinished jobs

  make[1]: Leaving directory '/root/bacula/bacula-9.4.2/src/cats'

    == Error in /root/bacula/bacula-9.4.2/src/cats ==


Has anyone already run into this problem and, above all, found a solution?
Regards,

-- 
Thierry THOMAS
Service Informatique LBCMCP - UMR5088 CNRS
Centre de Biologie Intégrative (CBI) de Toulouse FR3743
Université Toulouse III P. Sabatier
118 route de Narbonne - Bât 4R3B1
31062 TOULOUSE Cedex 9
FRANCE
Support : http://support.lbcmcp.lab/
@ : lbcmcp.support-i...@univ-tlse3.fr
Tél.: 05 61 55 68 87


___
Bacula-users-fr mailing list
Bacula-users-fr@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users-fr


[Bacula-users] Cannot find registration for "access-key"

2019-01-08 Thread Thomas Johanns
Hi;

The community install guide says at 4.3: For example, for Debian Jessie use:

# Bacula Community
deb http://www.bacula.org/packages/@access-key@/debs/9.2.0/jessie/amd64 jessie main

where @access-key@ refers to your personalized access key. This is the trailing path component sent in the registration email. Copying the URI from that email will be one of the simplest ways to set this up correctly.
https://blog.bacula.org/whitepapers/CommunityInstallationGuide.pdf

But I cannot find where to register to get that access-key.

Can somebody help me?

Thomas
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Ubuntu 18.04 / Bacula 9.0.6 and Postgres 10

2018-09-07 Thread Thomas Lohman
Hi Kern, yes, I know - I should have mentioned that we're still running 
an earlier version of Bacula.  But my main point was that Postgres 10 
doesn't seem to have any issues for us.


cheers,


--tom

On 09/07/2018 02:41 PM, Kern Sibbald wrote:

On 09/07/2018 12:05 PM, Thomas Lohman wrote:



FWIW we have not seen any compatibility problems in v.10, but we're not
using it with bacula. All I can see in bacula is
/usr/libexec/bacula/create_postgresql_database:


We've been using Bacula with Postgres 10.x on RH Enterprise 7.5 for a 
few months now with no issues.  The only change to Bacula I made was 
adding a 10 option to the above mentioned file.


Bacula version 9.2.x corrects the option issue you mentioned.

Best regards,
Kern






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Ubuntu 18.04 / Bacula 9.0.6 and Postgres 10

2018-09-07 Thread Thomas Lohman




FWIW we have not seen any compatibility problems in v.10, but we're not
using it with bacula. All I can see in bacula is
/usr/libexec/bacula/create_postgresql_database:


We've been using Bacula with Postgres 10.x on RH Enterprise 7.5 for a 
few months now with no issues.  The only change to Bacula I made was 
adding a 10 option to the above mentioned file.



--tom


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] BSCAN - what to expect?

2018-04-07 Thread Thomas plancon

Hi Dan,

Thanks for your interest!

So I let bscan run  for almost 48 hours and then thought I should try 
and see if any records were added to the database: nothing was added 
except a job number and the volume name! I cancelled bscan and went into 
bconsole to verify and yes, nothing except as mentioned above.


I tried bls, and that was actually pulling file names off of the tape 
and I redirected it to a text file. From that info I ran bextract with 
the name of a project retrieved with bls. Bextract successfully restored 
the project to a local directory!!!


So now I'm letting bls run on the tapes and collecting the lists in text files, hoping I'll see the project I need to recover. Unfortunately, bls is also a very slow process; 48 hours and it is still not through listing an LTO-2 (400 GB) tape, and I've got 3 tapes to go through!


It is frustrating because I'm not even sure the needed project is even 
on these tapes. The backup was full, so I'm just deducing, and hoping, 
(that could be on a t-shirt: deducing and hoping), that it is there 
somewhere.


I'll post again in a day or so with the status.

Tom Plancon



On 4/7/2018 3:57 PM, Dan Langille wrote:

On Apr 3, 2018, at 11:56 AM, Tom Plancon  wrote:

Hi folks,

I'm running BSCAN to recover old data from LTO-2 tapes. The backup job spanned 
3 tapes/volumes. I ran the BSCAN command listing the volumes as required with 
the first tape in the drive - this is NOT an autochanger. BSCAN seemed to start 
OK, found the tape, dbase etc., and has been running now for about 24hrs with 
the first tape still in the drive!

bscan does take a very long time. I have no personal experience with it though.


So, my questions, will BSCAN ask for the next tape when the first is done? Or, 
how will I know how to change it? Should it be taking this long for a 400Gb 
tape? The tape drive moves for a few seconds, pauses a few seconds and moves 
again, now for a full day!

I'm trying to recover a specific project in this old data, there is probably 
over 100 projects saved on these tapes. Is there a better way to do this?

Any help appreciated! Thanks much!

It's been a few days. Any progress to report?





--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] two SD on a server

2017-05-15 Thread Thomas Franz
Hello,

In our opinion it is best to have one storage daemon for each storage device (more stable if one device is lost, etc.).
Our configuration (see below) works, but I have a problem with bconsole: in this configuration the status command of bconsole only shows the first storage!
The second storage is only shown if I change the "Address", for example to the domain of a second network device (Address = backup.seconddomain).
I don't understand this behavior and I don't like my hack.

director.conf:

Storage {
   Name = LTO-3-Changer
   Address = backup.domain
   SDPort = 9103
   Password = "XXX"
   Device = LTO-3
   Media Type = LTO-3
}

Storage {
   Name = LTO-7-Changer
   Address = backup.domain
   SDPort = 9104
   Password = "XXX"
   Device = LTO-7
   Media Type = LTO-7
}
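For completeness, this is roughly how the second daemon's own bacula-sd.conf would differ only in name and port; the names and paths below are assumptions, not taken from the actual setup:

Storage {
   Name = backup-lto7-sd
   SDPort = 9104
   WorkingDirectory = "/var/db/bacula-lto7"
   Pid Directory = "/var/run"
   Maximum Concurrent Jobs = 20
}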

Any ideas?

Best regards

Thomas






-- 
Thomas Franz

Data-Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Paepcke, Dr. Andreas Longwitz, Josef Flatau


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Incremental backups stacking up behind long-running job

2017-02-08 Thread Thomas Lohman
> One of the queued backups is the next incremental backup of "archive".
> My expectation was that the incremental backup would run only some hours
> after the full backup finishes, so the difference is really small and it
> only takes some minutes and only requires a small amount of tape
> storage. The problem now is that bacula does its check if there already
> is a full backup of "archive" available when adding the job to the queue
> and not when running it. Since the full backup has not been finished
> yet, there is none and bacula turns the second incremental backup (and
> probably the third one) into a full backup as well.
>
> I'm currently running bacula 5.2.6, so my question is if anybody knows
> a solution to this problem (apart from manually cancelling the queued
> incremental jobs) or if an upgrade to bacula 7 might solve the problem.
> The upgrade to 7.4 is planned for the future already.

I believe that the problem that you're describing is the same one I had 
a number of years ago when running 5.2.x.  I had fixed it and submitted 
a patch I believe.  So my guess is that this should now be fixed and 
should not be an issue in 7.4.x.

http://bugs.bacula.org/view.php?id=1882

In addition, there are options to cancel new jobs if there are already 
running jobs, etc.  Please see the following job options (a short sketch of where they go follows the list):

   Allow Duplicate Jobs = yes/no
   Cancel Lower Level Duplicates = yes/no
   Cancel Queued Duplicates = yes/no
   Cancel Running Duplicates = yes/no
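As a minimal sketch (the job name, the JobDefs, and the particular yes/no choices are illustrative only), these directives sit in the Job or JobDefs resource:

Job {
  Name = "archive"
  JobDefs = "DefaultJob"
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = no
}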


--tom


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula btape multivolume fill test help requested

2016-11-24 Thread Thomas Franz

Hello Peter,

If you are still reading here and still have the problem: we had the same problem with FreeBSD, and now we have found it. It is a logical error in the handling of the last block of a tape.

I filed a bug report a year ago, but it was solved only partially.

This little patch solves the problem. Maybe there is a more elegant way, but it works.

Finally we can switch to Bacula 7.


Best regards,

Thomas


--- src/stored/block_util.c.orig2016-07-06 21:03:41.0 +0200
+++ src/stored/block_util.c 2016-11-11 20:57:49.36519 +0100
@@ -205,7 +205,6 @@
Dmsg3(200, "empty len=%d block=%p set binbuf=%d\n",
  block->buf_len, block, block->binbuf);
block->bufp = block->buf + block->binbuf;
-   block->buf[0] = 0;/* clear for debugging */
block->bufp[0] = 0;   /* clear for debugging */
block->read_len = 0;
block->write_failed = false;







On 04.08.2016 at 13:44, Kern Sibbald wrote:

Hello,

I suggest you go back to the default tape configuration.  Then
make sure you are using the Linux st kernel driver and not the
kernel driver that IBM supplies.  The IBM driver is not compatible
with Bacula.

If that does not solve your problems, either others on this list
can help you, or you will need professional help.

Best regards,
Kern

On 08/03/2016 09:24 PM, Peter Szaban wrote:

Hello,

  I'm having trouble installing bacula 7.4.3 on a SuSE LEAP 42.1
64 bit system with kernel 4.1.26-21-default on an LTO6 tape
library.

  Btape's "test" and "autochanger" tests successful, but the multi-volume
"fill" test is failing.  I was hoping some kind person could make some
suggestions about how to fix this.

  I interpret btape to be  saying it is having trouble re-reading the
last block it wrote to a tape after loading a tape.  Maybe it's having
trouble positioning to the correct spot on the tape.

  I decided this isn't an issue with the physical end of tape, because
I tried setting "Maximum Volume Size = 10G" (which makes the fill test
run a lot faster,) and encountered the exact same problem.

  At one point, I also unsuccessfully tried using stinit to change tape
drive properties with guidance from:

  
http://www.bacula.org/5.2.x-manuals/en/problems/problems/Testing_Your_Tape_Drive.html#SECTION00434000

manufacturer=IBM model = "ULTRIUM-TD6" {
scsi2logical=1
sysv=0
read-ahead=1
buffering=1
async-writes=1
mode1 blocksize=0 compression=1 }


  I google'd this problem and found that some people with FreeBSD
fixed a similar problem by setting "BSF at EOM = yes".  That isn't my
problem however, because When I set that option in bacula-sd.conf,
btape's "test" fails.

Here's some output from mt on the tape drive:

mt -f /dev/st0 status
drive type = Generic SCSI-2 tape
drive status = 1509949440
sense key error = 0
residue count = 0
file number = 0
block number = 0
Tape block size 0 bytes. Density code 0x5a (unknown).
Soft error count since last status=0
General status bits on (4101):
   BOT ONLINE IM_REP_EN


  Here's the output from btape fill:
.
.
.
Wrote block=960, file,blk=504,6280 VolBytes=2,516,582,137,856 rate=144.8 
MB/s
Wrote block=9605000, file,blk=504,11280 VolBytes=2,517,892,857,856 rate=144.8 
MB/s
02-Aug 21:20 btape JobId 0: End of Volume "TestVolume1" at 504:14969 on device 
"TAPE01" (/dev/nst0). Write of 262144 bytes got -1.
02-Aug 21:20 btape JobId 0: Re-read of last block succeeded.
btape: btape.c:2712-0 Last block at: 504:14968 this_dev_block_num=14969
btape: btape.c:2747-0 End of tape 504:0. Volume Bytes=2,518,859,907,072. Write 
rate = 144.7 MB/s
02-Aug 21:20 btape JobId 0: End of medium on Volume "TestVolume1" 
Bytes=2,518,859,907,072 Blocks=9,608,688 at 02-Aug-2016 21:20.
02-Aug 21:20 btape JobId 0: 3307 Issuing autochanger "unload slot 1, drive 0" 
command for vol TestVolume1.
02-Aug 21:22 btape JobId 0: 3304 Issuing autochanger "load slot 2, drive 0" 
command for vol TestVolume2.
02-Aug 21:23 btape JobId 0: 3305 Autochanger "load slot 2, drive 0", status is 
OK for vol TestVolume2.
Wrote Volume label for volume "TestVolume2".
02-Aug 21:24 btape JobId 0: Wrote label to prelabeled Volume "TestVolume2" on tape device 
"TAPE01" (/dev/nst0)
02-Aug 21:24 btape JobId 0: New volume "TestVolume2" mounted on device "TAPE01" 
(/dev/nst0) at 02-Aug-2016 21:24.
btape: btape.c:2315-0 Wrote 1000 blocks on second tape. Done.
Done writing 0 records ...
Wrote End of Session label.
btape: btape.c:2384-0 Wrote state file last_block_num1=14968 
last_block_num2=1001
btape: btape.c:2402-0

21:24:11 Done filling tapes at 0:1003. Now beginning re-read of first tape ...
btape: btape.c:2480-0 Enter do_unfill
02-Aug 21:24 btape JobId 0: 3307 Issuing autochanger "unload slot 2, drive 0"

Re: [Bacula-users] Problem with DB permissions after update to 7.2

2015-10-02 Thread Thomas Eriksson
On 10/02/2015 04:54 PM, Thomas Eriksson wrote:
> Hi,
> 
> I updated my director from 7.0.5 to 7.2.0 on a CentOS 7 box, using
> Simone's COPR repository. I ran the database update script and have
> successfully run some backups after the upgrade.
> 
> However, the catalog backup (using the make_catalog_backup.pl script)
> fails with the following error:
> 
> pg_dump: [archiver (db)] query failed: ERROR:  permission denied for
> relation snapshot
> pg_dump: [archiver (db)] query was: LOCK TABLE public.snapshot IN ACCESS
> SHARE MODE
> 
> Anyone know how to correct the permissions?
> 
> thanks,
> 
>   Thomas
> 
Facepalm!

I ran the grant_bacula_privileges script, and it works.

sorry about the noise.


Thomas

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Problem with DB permissions after update to 7.2

2015-10-02 Thread Thomas Eriksson
Hi,

I updated my director from 7.0.5 to 7.2.0 on a CentOS 7 box, using
Simone's COPR repository. I ran the database update script and have
successfully run some backups after the upgrade.

However, the catalog backup (using the make_catalog_backup.pl script)
fails with the following error:

pg_dump: [archiver (db)] query failed: ERROR:  permission denied for
relation snapshot
pg_dump: [archiver (db)] query was: LOCK TABLE public.snapshot IN ACCESS
SHARE MODE

Anyone know how to correct the permissions?

thanks,

Thomas

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] nodump flag

2015-07-06 Thread Thomas Franz
Hello,

Using the option "honor nodump flag = yes" we get a lot of messages like "...NODUMP flag set - will not process", because we use this flag extensively.
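For context, a minimal sketch of how that option is set in a FileSet; the fileset name and path are hypothetical:

FileSet {
  Name = "DataSet"
  Include {
    Options {
      Signature = MD5
      Honor NoDump Flag = yes   # files flagged with "chflags nodump" are skipped
    }
    File = /data
  }
}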

I think these messages are of type info, and it is not a good idea to exclude all info messages in the Messages resource.

Why are these messages not of type skipped?

Any suggestions on how to suppress this message?

best regards


Thomas




-- 
Thomas Franz

Data-Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Paepcke, Dr. Andreas Longwitz, Josef Flatau


--
Don't Limit Your Business. Reach for the Cloud.
GigeNET's Cloud Solutions provide you with the tools and support that
you need to offload your IT needs and focus on growing your business.
Configured For All Businesses. Start Your Cloud Today.
https://www.gigenetcloud.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple full backups in same month

2015-06-25 Thread Thomas Lohman
 The question now is: bacula decides if it will upgrade jobs when it
 queues the jobs or when it starts the jobs? According to the logs
 above I think it is when it starts.


 To my mind it's upgraded when it's queued... I hope I'm wrong :)

Hi, it is done when the job is queued to run.  So, if you see it listed 
under Running jobs in bconsole then it's already been decided.  Queued 
to run isn't necessarily the same as when the job actually starts due to 
other factors/settings.

hope this helps,


--tom



--
Monitor 25 network devices or servers for free with OpManager!
OpManager is web-based network management software that monitors 
network devices and physical  virtual servers, alerts via email  sms 
for fault. Monitor 25 devices for free with no restriction. Download now
http://ad.doubleclick.net/ddm/clk/292181274;119417398;o
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple full backups in same month

2015-06-25 Thread Thomas Lohman
 On 25/06/15 13:21, Silver Salonen wrote:

 But why it upgraded the other incrementals in the queue if the first
 incremental was upgraded to full?

 Because the algorithm is broken. It should only make that decision when
 the job exits the queue.

 I filed a bug against this a long time ago, It still isn't fixed.

I believe Alan is right and you're experiencing this bug or something 
similar depending on what configuration parameters you have set.

http://bugs.bacula.org/view.php?id=1882

I fixed this particular issue, described in the bug report referenced above, that we ran into in 5.2.13, along with some other things, but never got those into the main code base. We're still running 5.2.13 and I have not had the time to port my changes to 7.0.x, but you might be able to look at my changes to 5.2.13 and make the equivalent changes in 7.0.x.


--tom



--
Monitor 25 network devices or servers for free with OpManager!
OpManager is web-based network management software that monitors 
network devices and physical  virtual servers, alerts via email  sms 
for fault. Monitor 25 devices for free with no restriction. Download now
http://ad.doubleclick.net/ddm/clk/292181274;119417398;o
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple full backups in same month

2015-06-25 Thread Thomas Lohman
 Ok, so the option Allow Duplicate Job=no can at least prevent multiple
 full backups of the same server in a row as stated before?

As others mentioned, I think it may help in your case but it may not 
completely solve the problem that you saw.  It looks like you had 5 
instances of the same job queued up at the same time.  Disallowing 
duplicate jobs would mean the last 4 would be canceled once queued (but 
after being upgraded to Full).  Now, if we assume your original Full job 
actually ended up running and completed successfully, your next instance 
of this job will still get upgraded to Full I suspect since it's going 
to see the canceled jobs as newer than that successful Full.  The 
problem, I think, is what I described here in bug 1882

The original 5.2.13 behavior when determining if a failed job needs to 
be rerun was to look at the start time of the most recent successful 
backup. From there it would then see if any job had started since then 
and failed. As pointed out, this creates an issue when you have FULL 
jobs that tend to run longer than the time period between normal backups 
for those jobs. i.e. the job laps itself so to speak. Any new jobs would 
be upgraded to FULLs and then canceled since the original FULL was still 
running (this assumes that duplicate jobs are not allowed). But once the 
original FULL finished, Bacula was grabbing it's start time and then 
seeing those canceled FULL jobs that happened since the successful FULL 
was started. To me, it seems like looking at the end time of that 
successful job makes more sense.

The change I made was to have Bacula look at the real end time of the 
last successful job and then see if any jobs have failed since that 
time.  This fixed these type of issues for us.  Sorry that this probably 
doesn't help you with fixing it right now if you're running 7.0.x, but I 
think it does explain the behavior that you're seeing and also says that 
it is still there in 7.0.x

And just for completeness, these are the related settings that we run with:

Allow Duplicate Jobs = no
Cancel Lower Level Duplicates = yes
Cancel Queued Duplicates = yes
Cancel Running Duplicates = no
Rerun Failed Levels = yes

hope this helps,


--tom



--
Monitor 25 network devices or servers for free with OpManager!
OpManager is web-based network management software that monitors 
network devices and physical  virtual servers, alerts via email  sms 
for fault. Monitor 25 devices for free with no restriction. Download now
http://ad.doubleclick.net/ddm/clk/292181274;119417398;o
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple full backups in same month

2015-06-25 Thread Thomas Lohman
No, because the end time of Full job #1 occurred after the end time of 
the failed job #2.  Bacula doesn't see any failed jobs occurring after 
the end time of successful job #1 which is all it cares about - at least 
in our patched version.


--tom

 Wouldn't this changed behavior run into the problem that cancelled
 duplicates are still seen as failed jobs and therefore jobs would be
 upgraded still?

 Eg:

  1. Full starts
  2. Incr is queued, upgraded to Full and cancelled.
  3. Full ends
  4. Incr is queued, checks that Full job no. 1 finished OK, but then
 checks that Incr-Full job no. 2 failed - thus it's still upgraded
 to Full and started.

 --
 Silver


--
Monitor 25 network devices or servers for free with OpManager!
OpManager is web-based network management software that monitors 
network devices and physical  virtual servers, alerts via email  sms 
for fault. Monitor 25 devices for free with no restriction. Download now
http://ad.doubleclick.net/ddm/clk/292181274;119417398;o
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] how to debug a job

2015-01-23 Thread Thomas Lohman
 Even though, IMHO, spooling disks backup is just muda (Japanese
 Term): http://en.wikipedia.org/wiki/Muda_(Japanese_term)

Not necessarily. If you have a number of backups that tend to flake out halfway through for whatever reason (network, client issues, user issues, etc.), then by spooling backups and de-spooling sequentially to disk you keep your disk volumes from filling up with unnecessary cruft, which, depending on how everything is configured, could cause problems for you. If the community version could restart backups from the aborted point, then this probably wouldn't be a potential issue.

cheers,


--tom




--
New Year. New Location. New Benefits. New Data Center in Ashburn, VA.
GigeNET is offering a free month of service with a new server in Ashburn.
Choose from 2 high performing configs, both with 100TB of bandwidth.
Higher redundancy.Lower latency.Increased capacity.Completely compliant.
http://p.sf.net/sfu/gigenet
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Schedule question

2014-12-12 Thread Thomas Lohman
 is there a quick way to set the schedule to be every other week
 (to create full backups every 14 days i.e. on even weeks since
 01.01.1971 for example)

 If there is no predefined keyword, is there a way to trigger this
 based on the result of an external command?

Hi, you may also want to look at the Max Full Interval option, which allows one to specify the maximum number of days between Fulls for a job.
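For example, a minimal sketch; the job name, schedule name, and times are hypothetical:

Job {
  Name = "BackupClient1"
  JobDefs = "DefaultJob"
  Schedule = "NightlyIncr"
  Max Full Interval = 14 days   # incrementals are upgraded to a Full once the last Full is older than two weeks
}

Schedule {
  Name = "NightlyIncr"
  Run = Incremental sun-sat at 23:05
}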

hope this helps,


--tom



--
Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server
from Actuate! Instantly Supercharge Your Business Reports and Dashboards
with Interactivity, Sharing, Native Excel Exports, App Integration  more
Get technology previously reserved for billion-dollar corporations, FREE
http://pubads.g.doubleclick.net/gampad/clk?id=164703151iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] pruning of virtual full jobs

2014-12-11 Thread Thomas Lohman
This is probably a question for Kern or perhaps should be better posted 
to bacula-devel but I'll send it here since others may have experienced 
or have comments on this.

Assume you are running Virtual Fulls every x days (aka the Max Full 
Interval for Virtual Fulls) and also have retention periods for 
clients/volumes set.  When a client that comes and goes is ready for a 
new Virtual Full, it's possible that there have been no new 
Incremental/Differential backups since the last Virtual Full.  So, it 
simply makes a new copy of the last Virtual Full which makes sense. 
When you then run a prune of that client, it will look at the JobTDate 
of the Virtual Full job and see the date of the original last real 
backup for that client and depending on the retention defined will 
delete the job information which then leads to an error on it's next 
backup attempt.  At this point, you have to get the client in and do a 
new Full for that job.

The issue really seems to be whether or not for Virtual Fulls, pruning 
should use the real job termination time and not the job termination 
time that gets dragged forward from the last real backup that was 
done.  It seems to me that it should but I can see an argument the other 
way as well since the actual data you're storing has aged past your 
retention periods.


--tom


--
Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server
from Actuate! Instantly Supercharge Your Business Reports and Dashboards
with Interactivity, Sharing, Native Excel Exports, App Integration  more
Get technology previously reserved for billion-dollar corporations, FREE
http://pubads.g.doubleclick.net/gampad/clk?id=164703151iu=/4140/ostg.clktrk
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] get files of list incr

2014-12-05 Thread Thomas Manninger
Hello!



is it possible to get a list of all saved files of my last incr backup and the size of the files?

I need it because my incr backup of a server is greater than 10gb every day, and I want to know which files are so big.



Thanks



Regards

Thomas

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to do Cross Replication Site1=Site2 (DR)

2014-11-26 Thread Thomas Lohman
 First let me thank you all for your responses, i really appreciate
 them. As Joe, i think the problem here are the bacula jobids, ¿ is
 there any way to say bacula to start from (let say) job id 900 ?
 i think that's an easy way to fix all the problem as i will be able

I am not familiar enough with mysql and its workings, but with postgres 
the jobid column in the job table is populated from a sequence - 
job_jobid_seq.  When this is first created it can be seeded with 
whatever starting value you wish.

e.g.

\d job_jobid_seq
        Sequence "public.job_jobid_seq"
    Column     |  Type   |        Value
---------------+---------+---------------------
 sequence_name | name    | job_jobid_seq
 last_value    | bigint  | 328864
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 31
 is_cycled     | boolean | f
 is_called     | boolean | t

So, you could have one server start at 1 and another start at some 
number that you know the first server will never reach (assuming you 
want them to have unique job id sets forever).
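
On the Postgres side that seeding is just an ALTER SEQUENCE; e.g. something 
like this on the second server (sketch only, the 500000 is a made-up number):

  ALTER SEQUENCE job_jobid_seq RESTART WITH 500000;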

hope this helps,


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula 7.0.5 compression

2014-11-17 Thread Thomas Manninger
Hello,



i installed bacula-dir on a debian 7 HOST A, bacula-sd on debian 7 HOST B.



The compression is not working:

backup job protocol:

Software compression: none



hostA bacula-dir.conf:

...


FileSet {
  Name = LinuxAllSet
  Include {
    Options {
      Compression = gzip
      Signature = MD5
      OneFS = no
      Accurate = ipmc
    }
    File = /
  }
  Exclude {
    File = /proc
    File = /sys
    File = /var/lib/mysql # We make a mysqldump..
  }
}

...


Storage {
 Name = FileStorage
 Address = host-b.test.local
 Device = FileStorage
 Media Type = File
 Password = RkjDn0IrqN833OgxzIqtQPy/DxyCfcJL9Ahog9GQgbdl
 Maximum Concurrent Jobs = 20
 AllowCompression = yes
}



hostB bacula-sd.conf:

Device {
 Name = FileStorage
 Media Type = File
 Archive Device = /bacula-storage
 LabelMedia = yes; # lets Bacula label unlabeled media
 Random Access = Yes;
 AutomaticMount = yes; # when device opened, read it
 RemovableMedia = no;
 AlwaysOpen = no;
 Maximum Concurrent Jobs = 20
}



On both host, zlib is compiled.



What is the problem??


Thanks

Thomas



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] File volumes and scratch pool

2014-10-07 Thread Thomas Lohman
 My volumes are of type files so using new volumes vs recycling expired
 ones just fills up the file system with old data. It makes it hard to
 manage and forecast filesystem space needs.

 I have never understood Bacula's desire to override my policy and insist
 on preserving data that I already defined as useless.

If one of the issues is getting rid of old data that goes beyond the 
retention period, then one should be able to use the truncate volume on 
purge directive and then set up a way to ask Bacula to purge those 
volumes once they are moved into your recycle pool (via a separate 
job/script that runs the appropriate bconsole commands).  As far as I 
understand things, Bacula won't do the truncate automatically when it 
marks the volume as purged and moves it into the recycle pool.
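
Roughly, that would look something like this (just a sketch, pool/storage 
names made up): in the pool holding the recycled volumes set

  Pool {
    Name = File-Recycle
    ...
    Action On Purge = Truncate
  }

and then have the job/script feed bconsole something like

  purge volume action=truncate pool=File-Recycle storage=FileStore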

Bacula will still use new never before used volumes when it grabs one 
from the recycle pool (although I suspect if you knew what you were 
doing you could get around that by updating the proper time 
stamps/attributes on the media records for the truncated volumes so they 
would appear as new) but if the used volumes are truncated then they 
won't fill up the file system and the backup data should be deleted.

hope this helps,


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] v7.0.4 migrate: StartTime older than SchedTime

2014-07-23 Thread Thomas Lohman
 StartTime does not get updated when migrating a job. Is this a bug or
 is it the way it is supposed to be?


I believe that this is the way it is supposed to work.  When 
copying/migrating a job or when creating a virtual Full job from 
previous jobs, the start time of the new job gets set to the start time 
of the copied/migrated job or in the case of a Virtual Full to the start 
time of the last backup used to create the Virtual Full.  This, I 
believe, is because that start time is used when looking to see what 
needs to be backed up if you're doing another backup that will be based 
off of that job.  This can cause issues if you're assuming start time is 
the real start time of a job as you've discovered.  I went ahead and 
added a realstarttime attribute to a job as part of some of my 
patches/extensions but those were for 5.2.13 and not the latest release 
7.0.x.


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Socket terminated message after backup complete

2014-07-08 Thread Thomas Lohman
 According to
 http://www.baculasystems.com/windows-binaries-for-bacula-community-users,
 6.0.6 is still the latest version. Does this mean the bug was never
 fixed there, or is it the text on that page that needs updating? Or
 is there still something else entirely, and is it not this bug that's
 hitting me?

Hi,

it's possible that there may be other scenarios where that particular 
bug occurs or it's also possible that the patch to the community code 
did not make it into the enterprise version that you're using.  I am not 
sure.  Kern may be able to answer.


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Socket terminated message after backup complete

2014-07-07 Thread Thomas Lohman

 Because traffic is going through those firewalls, I had already
 configured keepalive packets (heartbeat) at 300 seconds. In my first
 tests, backups *did* fail because that was missing.  Now they don't
 seem to fail anymore, but there's that socket terminated message
 every now and then that doesn't belong there.


Hi,

This seems like the problem that you're having.

http://bugs.bacula.org/view.php?id=1925

I believe this was fixed in community client version 5.2.12 and I can 
verify that we no longer see these warning/error messages on clients 
that have been upgraded to >= 5.2.12.  We still see it on Windows 
machines that are running 5.2.10.  I don't know which version of the 
Enterprise client has this fix in it.

The messages themselves are mainly harmless so you can ignore them if 
you want to.


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fatal error: Authorization key rejected by Storage daemon

2014-06-12 Thread Thomas Lohman
I've seen this error before on and off on one particular client. 
Nothing changes with regard to the configuration and yet the error will 
crop up.  Usually a combination of the following fixes it - 
cancel/restart the job, restart the Bacula client, or restart the Bacula 
storage daemon.  Since it only happens with this one client, I haven't 
bothered to try and figure out why exactly.  I'd be interested if anyone 
has any thoughts on what causes this error to randomly occur.


--tom

 I've problem with my Bacula server and my FD on my client-test server
 (centos6-fd). When I try to run a job with BAT I've the following
 error :

 centos6-fd Fatal error: Authorization key rejected by Storage
 daemon. Please see
 http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION0026
 for help. bacula.local-dir Start Backup JobId 156,
 Job=BackupCentos6.2014-06-03_16.19.34_08 Using Device LTO-4 to
 write. bacula.local-dir Fatal error: Bad response to Storage command:
 wanted 2000 OK storage , got 2902 Bad storage

 From my server I can telnet the client on port 9102 and 9103. From
 my client I can telnet my server on 9101,9102 and 9103.

 So I thought it was a password mistake but I use the same password
 everywhere.

 Any idea/suggestion please ?

 Benjamin




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Delete files from failed jobs

2014-05-20 Thread Thomas Lohman
 thank you, so the only way is to configure the volume to be used in only
 1 job, So if a job fail i can delete the entire volumen. I try this.

Hi, you can also choose to spool jobs before they are written to your 
actual volumes.  This way if jobs tend to fail in the middle for 
whatever reason, no space will be wasted inside your volumes.
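
e.g. in the Job (or JobDefs) resource, something like this (sketch):

  Job {
    Name = ...
    Spool Data = yes
    Spool Attributes = yes
  }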


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore from an incremental job: No Full backup before ... found

2014-05-12 Thread Thomas Lohman
 I guess I will go with Sven's suggestion, or does anyone have any
 other recommendation on running a weekly backup with 7 days archive?

Hi, this may be the same as Sven's recommendation but if you want to
guarantee the ability to restore data as it was 7 days ago then
you'll need to set your retention period to 14 days.  An example may
illustrate best:

May 3rd - Full
May 4th-9th - Incrementals
May 10 - Full
May 11- Incremental
May 12 - restore request for the data as it was on May 9th.

With only a 7 day retention period, by the time May 12 comes around, 
you've lost your May 3rd Full potentially.
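
In configuration terms that means something along these lines (sketch only; 
numbers and resource names made up):

  Client {
    Name = myclient-fd
    ...
    File Retention = 14 days
    Job Retention = 14 days
    AutoPrune = yes
  }

  Pool {
    Name = FilePool
    ...
    Volume Retention = 14 days
  }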

Whether or not you've actually lost the data depends on whether the 
volume that it resides on actually has been overwritten/re-used yet. 
How things behave, of course, will depend on your exact configuration. 
If it has not been overwritten, then you do have options.   I have never 
used it but you could try using a volume scanning tool (i.e. bscan) to 
re-create the DB meta-data for the jobs on that volume.  Another option 
would be to restore your DB back to May 9th on another computer (i.e. a 
spare/test Bacula server) and then use it to get at the data.  I've done 
the latter with success when someone wanted some data that was older 
than our restore window.

cheers,


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] SOLVED: catalog problem: duplicate key value violates unique constraint fileset_pkey

2014-01-16 Thread Thomas Lohman
 It did.  Thanks a lot for your help - I highly appreciate it.
 If we ever should run into each other in real life please remember me
 that I owe you some beer...

No problem :) - glad that you got it working.


--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] catalog problem: duplicate key value violates unique constraint fileset_pkey

2014-01-15 Thread Thomas Lohman
 I tried that, but it fails:

  Enter SQL query: alter sequence fileset_filesetid_seq restart with 76;
  Query failed: ERROR:  must be owner of relation fileset_filesetid_seq

 I ran this under bconsole, i. e. as user bacula - is this not the
 right thing to do?

Wolfgang,

As someone I think already pointed out, it sounds like the owner of your 
bacula database sequences is another user - more than likely the 
Postgres super user which is probably named something like 'postgres' 
on your system I'm guessing.  You will need to connect to the database 
as that user in order to have update privileges on the sequences.
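
e.g. assuming the superuser is called postgres and the catalog database is 
called bacula (adjust for your install):

  sudo -u postgres psql bacula
  bacula=# ALTER SEQUENCE fileset_filesetid_seq RESTART WITH 76;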

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Help binding scsi address to tape library

2014-01-15 Thread Thomas Eriksson


On 01/15/2014 12:48 PM, Luis G. Alford G. wrote:
 Hi 
 
 I have installed a bacula server and it works ok, but when I reboot the
 server the tape drive changes its scsi address (it is connected via FC).
 Any idea what I can do to avoid this from happening?
 

There are probably several ways to solve this, but one of them is to use
the /dev/tape/by-id/*  names instead of the /dev/nst0 naming.

From my bacula-sd.conf:

Autochanger {
  Name = HP-ML4048
  Device = LTO4-Drive-1, LTO4-Drive-2
  Changer Command = /opt/bacula/scripts/mtx-changer %c %o %S %a %d
  Changer Device = /dev/tape/by-id/scsi-3500110a0008dde58
}

Device {
  Name = LTO4-Drive-1
  Drive Index = 0
  Media Type = LTO-4
  Archive Device = /dev/tape/by-id/scsi-3500110a0008dde59-nst
  ...
}

Device {
  Name = LTO4-Drive-2
  Drive Index = 1
  Media Type = LTO-4
  Archive Device = /dev/tape/by-id/scsi-3500110a0008dde5f-nst
  ...
}

The by-id name will not change between reboots.

 -Thomas

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] catalog problem: duplicate key value violates unique constraint fileset_pkey

2014-01-14 Thread Thomas Lohman
My guess is that during the migration from MySQL to Postgres, the 
sequences in Bacula did not get seeded right and probably are starting 
with a seed value of 1.

the filesetid field in the fileset table is automatically populated by 
the fileset_filesetid_seq sequence.

Run the following two queries and see what the results are - in 
particular, see what the last_value is for the sequence.  This should be 
equal to the max value from the fileset table which it is in my Bacula 
database.  If not, you'll need to manually fix it via a sql update 
command to the sequence.

select max(filesetid) from fileset;

select * from fileset_filesetid_seq;


hope this helps,


--tom

 Hello,

 I've tried to switch a bacula configuration that has been running for
 years using from MySQL to PostgreSQL.  Everything worked apparently
 fine (I did the same before with two other instalations, where the
 very same steps worked, too), but when trying to run jobs in the new
 PostgreSQL environment, some jobs fail with errors like this:

 13-Jan 22:13 XXX-dir JobId 1: Error: sql_create.c:741 Create DB FileSet 
 record INSERT INTO FileSet (FileSet,MD5,CreateTime) VALUES ('YYY 
 root','zD/PtXx6xx/IEHZH8X5OJB','2014-01-13 22:13:59') failed. ERR=ERROR:  
 duplicate key value violates unique constraint fileset_pkey
 DETAIL:  Key (filesetid)=(1) already exists.

 13-Jan 22:13 XXX-dir JobId 1: Error: Could not create FileSet YYY root 
 record. ERR=sql_create.c:741 Create DB FileSet record INSERT INTO FileSet 
 (FileSet,MD5,CreateTime) VALUES ('YYY 
 root','zD/PtXx6xx/IEHZH8X5OJB','2014-01-13 22:13:59') failed. ERR=ERROR:  
 duplicate key value violates unique constraint fileset_pkey
 DETAIL:  Key (filesetid)=(1) already exists.


 Not all jobs are faliling like this, only some.


 Is there a way to check the DB for consistence (or, even better, to repair 
 it)?

 What could cause such issues, and what could be done to fix these?



 I don;t know if it's related, but maybe I should note that in the old
 setup (with a MySQL DB) I had occasionally jobs failing with errors
 like this:

 30-Dec 00:05 XXX-dir JobId 70535: Start Backup JobId 70535, 
 Job=AAA-Root.2013-12-30_00.05.02_02
 30-Dec 00:05 XXX-dir JobId 70535: Using Device LTO3-1 to write.
 30-Dec 00:19 ZZZ-sd JobId 70535: Fatal error: askdir.c:340 NULL Volume name. 
 This shouldn't happen!!!
 30-Dec 00:19 ZZZ-sd JobId 70535: Spooling data ...
 30-Dec 00:06 AAA-fd JobId 70535:  /work is a different filesystem. Will 
 not descend from / into it.
 30-Dec 00:21 ZZZ-sd JobId 70535: Elapsed time=00:01:13, Transfer rate=0  
 Bytes/second
 30-Dec 00:06 AAA-fd JobId 70535: Error: bsock.c:429 Write error sending 8 
 bytes to Storage daemon:ZZZ:9103: ERR=Connection reset by peer
 30-Dec 00:06 AAA-fd JobId 70535: Fatal error: xattr.c:98 Network send error 
 to SD. ERR=Connection reset by peer

 Out of 30+ jobs running each night, only one would fail about once
 per week, and this was one out of 2 candidates - all others never
 showed any such problem. I have been wondering if there was some DB
 issue for these jobs, which is one of the reasons for switching to
 PostgreSQL.   But maybe this is totally unrelated...


 Any help welcome.  Thanks in advance.

 Best regards,

 Wolfgang Denk



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] catalog problem: duplicate key value violates unique constraint fileset_pkey

2014-01-14 Thread Thomas Lohman
Wolfgang,

 Dear Thomas,

 In message 52d555c5.9070...@mtl.mit.edu you wrote:
 My guess is that during the migration from MySQL to Postgres, the
 sequences in Bacula did not get seeded right and probably are starting
 with a seed value of 1.

 Do you have any idea why this would happen?  Is this something I can
 influence?
 Are there any other variables that might hit by similar issues?

I can't say exactly why it happened to you but my guess would be that 
this problem could hit anyone porting from mysql to postgres.  I'm not 
familiar with the Bacula procedure for doing that (if you used one) but 
any Postgres sequence creations during the Postgres DB setup would more 
than likely be created with a default starting value of 1 - but if 
you've already got data in your database (migrated over from Mysql) then 
all sequences would need to be seeded properly.  The bad news for you 
may be that almost all of the Bacula tables have sequences to generate 
their id fields.

client
file
filename
path
job
jobmedia
fileset
media
pool

I believe in each case, the 'id' field is the primary key which means it 
will be unique - thus any inserts should fail with an error and thus 
ensure that your database doesn't get into a strange funky state with 
multiple records having the same id.  It may also be that you get lucky 
and avoid that for tables such as file, job, filename because if your 
database had been around awhile, it may be that re-starting those 
counters back to 1 may not overlap with any existing/current data (e.g. 
if the newest job before migration had an id of 1 and all old jobs 
have been purged then restarting at 1 shouldn't cause problems depending 
on your configuration of course).  With that said, if it was me, I'd 
re-seed all the sequences to where the id left off for each of the 
tables to avoid possible future insert errors/conflicts.

 select max(filesetid) from fileset;

 select * from fileset_filesetid_seq;

 This is what I get:

 Enter SQL query: select max(filesetid) from fileset;
 +------+
 | max  |
 +------+
 |   75 |
 +------+
 Enter SQL query: select * from fileset_filesetid_seq;
   sequence_name | fileset_filesetid_seq
   last_value    | 4
   start_value   | 1
   increment_by  | 1
   max_value     | 9223372036854775807
   min_value     | 1
   cache_value   | 1
   log_cnt       | 32
   is_cycled     | f
   is_called     | t
 Enter SQL query:


 Sorry, my DB / sql knowledge is somewhat limited (read: non-existient).
 Could you please be so kind and tell me how I could fix that?

Well, if your DB knowledge is limited then you may want to consult 
someone in your location who may be able to assist.  Given that, I'll 
say the next part with the usual "use at your own risk" disclaimer.  To 
change the last_value field of a Postgres sequence, you need to use the 
Postgres alter sequence command

e.g.

alter sequence fileset_filesetid_seq restart with 76;

After that, the next fileset record created should be created with an id 
value of 76.

This may be dependent on your version of Postgres.  I am using 9.1.x and 
am looking at the following documentation:

http://www.postgresql.org/docs/9.1/static/sql-altersequence.html

I would then redo that above procedure for each of the sequences for 
each of the Bacula tables (querying to get the max value currently used 
and then resetting the last_value field to max value + 1).
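
With setval() that is one line per table; a sketch, assuming the stock 
Bacula Postgres schema for the sequence/column names:

  SELECT setval('client_clientid_seq', (SELECT max(clientid) FROM client));
  SELECT setval('media_mediaid_seq',   (SELECT max(mediaid)  FROM media));
  SELECT setval('pool_poolid_seq',     (SELECT max(poolid)   FROM pool));
  SELECT setval('job_jobid_seq',       (SELECT max(jobid)    FROM job));

and so on for jobmedia, fileset, file, filename and path.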

hope this helps and good luck,


--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restores fail because of multiple storages

2013-12-17 Thread Thomas Lohman
 That seems a working solution, but creating a symbolic link for every
 volume required by a restore job introduces a manual operation that
 would be better to avoid, especially if a lot of incremental volumes are
 being considered.

We use symbolic links here and have never had any problems.  All volumes 
are created ahead of time so links are created at the same time. It may 
not be the most elegant solution but it's certainly a workable solution 
and for us, it eliminated issues/problems we were having with vchanger 
mistakenly marking volumes in error which then had to be corrected manually.
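
The links themselves are nothing special; e.g. (paths made up):

  ln -s /mnt/disk2/Vol-0123 /srv/bacula/volumes/Vol-0123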

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restores fail because of multiple storages

2013-12-13 Thread Thomas Lohman
  10-dic 17:46 thisdir-sd JobId 762: acquire.c:121 Changing read
 device. Want Media Type=JobName_diff have=JobName_full
device=JobName_full (/path/to/storage/JobName_full)

I think that you want to make sure the Media Type for each Storage 
Device is File.  It looks like you've defined them to be different. 
It might help if you were to post your storage configuration, which would 
allow folks to see the details of your configuration.
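
i.e. both sides should agree on something like this (sketch; only the 
relevant lines shown):

  # bacula-dir.conf
  Storage {
    Name = JobName_full
    Device = JobName_full
    Media Type = File
    ...
  }

  # bacula-sd.conf
  Device {
    Name = JobName_full
    Media Type = File
    Archive Device = /path/to/storage/JobName_full
    ...
  }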

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Console - shows conflicting info

2013-12-02 Thread Thomas Lohman
 25-Nov 13:38 home-server-dir JobId 144: Fatal error: Network error with FD 
 during Backup: ERR=Connection reset by peer
 25-Nov 13:38 home-server-dir JobId 144: Fatal error: No Job status returned 
 from FD.
 25-Nov 13:38 home-server-dir JobId 144: Error: Bacula home-server-dir 5.2.5 
 (26Jan12):

I am not sure what your exact configuration is but my guess/hunch is 
that your jobs are being spooled to the server, but while they are then 
being de-spooled to your volumes, the connection back to the client is 
cut off for whatever reason (Connection reset by peer error).  This I 
think would explain why the client may in fact think it finished ok but 
the server doesn't. That is probably technically a bug and not a 
feature. :)  Look at the Bacula Heartbeat Interval option if you are 
not using this already and see if that helps to keep the connection alive.

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] ERROR Spooling/Backups with large amounts of data from windows server 2012

2013-11-21 Thread Thomas Lohman
 - heartbeat: enabling on the SD (60 seconds) and
 net.ipv4.tcp_keepalive_time also set to 60

In glancing at your error (Connection reset by peer) and your config 
files, I didn't see the Heartbeat Interval setting in all the places 
that it may need to be.  Make sure it is in all the following locations:

Director definition for the server Director daemon.
Storage definition for the server Storage daemon.
FileDaemon definition for the Client File daemon
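
For reference, those three spots look roughly like this (a sketch; 60 is 
just an example value):

  # bacula-dir.conf
  Director {
    ...
    Heartbeat Interval = 60
  }

  # bacula-sd.conf
  Storage {
    ...
    Heartbeat Interval = 60
  }

  # bacula-fd.conf
  FileDaemon {
    ...
    Heartbeat Interval = 60
  }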

That error typically means the network/socket connection between the 
file daemon and the storage daemon was closed unexpectedly at one end or 
by something in between blocking/dropping it.  I have also seen that 
error suddenly pop up on Windows clients for no obvious reason but a 
reboot of the Windows box has fixed it.


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] FW: Client last backup time report

2013-11-18 Thread Thomas Lohman
We do something like this by running a job within Bacula every morning 
that scans all client configuration files, builds a list of expected 
current jobs/clients and then queries the Bacula DB to see when/if 
they've been successfully backed up or not (i.e. marked with a T).  If 
it's been more than the specified number of days, then they are added to 
a list which is then mailed to whatever address is specified (e.g. the 
IT system folks).  The content of the message looks something like this:

WARNING -- Bacula has not backed up:

(1) Job: foobar for Client: foobar-host in the past 10 days


I suspect that this utility is fairly specific to our configuration 
structure so not sure if it could be of direct help to you but I figured 
I'd throw it out there as an example that it is pretty straightforward 
to do what you want to do and that there are a lot of ways to implement it. :)
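
For what it's worth, the core of it is a query along these lines against the 
catalog (rough Postgres sketch, untested; column names as in the standard 
schema, 7 days as an example threshold):

  SELECT c.name, MAX(j.endtime) AS last_good
  FROM client c LEFT JOIN job j
       ON j.clientid = c.clientid AND j.jobstatus = 'T' AND j.type = 'B'
  GROUP BY c.name
  HAVING MAX(j.endtime) IS NULL
      OR MAX(j.endtime) < now() - interval '7 days';

A query like that only knows about clients already in the catalog, which is 
why we also scan the configuration files for the expected list.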


--tom

 I need to create a report with the last time a good backup was run of
 each client.

 We are looking for anyone who has not backed up recently, so it would be
 nice if the report could be set for clients that have not had a
 successful backup in 1 week, or even a variable amount of time.

 I am assuming this would be SQL. Grepping (or anything else) Bacula's
 'List Jobs' would not work, since if a client has not even started a
 backup it would not be listed there. (Our backups are kicked off by
 remotely calling a script on each client that starts the FD. We have
 several 'waves' of backups when departments are not here or would be
 least affected by the backup.)

 We envision the report being something like:

 Name Last Backup  F/D/I  JobFiles   JobBytes JobStatus

 COMPUTER12013-11-06 23:59   I  29129,056 T

 LAPTOP2  2013-10-20 10:30   D  17 89,423 T

 COMPUTER22013-10-19 17:05   I   0  0 E

 Anyone else doing something like this, or can point me to some examples?

 Thanks in advance






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem with Bacula 5.2.5 and Windows client 5.2.10.

2013-11-15 Thread Thomas Lohman
 We are having a problem between a Bacula server version 5.2.5
 (SD and
 Dir) and a Windows client running Bacula-fd 5.2.10.

While this may not be your problem, in general, I recall it is best to 
keep the client versions <= the server versions.


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fwd: Spooling attrs takes forever

2013-11-15 Thread Thomas Lohman

 Yes, for disk storage, it does not make much sense to have data spooling
 turned off.
 I would suggest to always turn attribute spooling on (default off) so
 that attributes
 will be inserted in batch mode (much faster), and if possible ensure
 that the
 working directory, where attributes are spooled is on a different drive
 from the
 Archive Directory.  Of course this last suggestion is most often not
 possible.

One reason to turn on spooling even if you use disk storage for your 
volumes if you tend to have hosts that abruptly get pulled off the 
network during backups or otherwise have hiccups that cause backups to 
fail.  With spooling, you shouldn't get volumes filling up with backup 
data from the partially completed failed backups.


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Is anyone using 128K blocks with LTO-4 or LTO-5 drives?

2013-09-20 Thread Thomas
Hi Andreas,

we are also using LTO-5 with a 2M blocksize and without any problems.

Drives and Kernel are:

Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
Medium ChangerOVERLAND NEO Series
IBM  ULTRIUM-TD5
IBM  ULTRIUM-TD5

the btape tests fail like in your example, but backup and restore are working 
fine.

Best regards
Thomas

-- 
[:O]###[O:]


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Is anyone using 128K blocks with LTO-4 or LTO-5 drives?

2013-09-20 Thread Thomas
it seems that bacula's limit is <= 400
from src/stored/block.c :

if (block_len > 400) {
   Dmsg3(20, "Dump block %s 0x%x blocksize too big %u\n", msg, b,
         block_len);
   return;
}


another limit i found is this one from the output of dmesg | grep st :

 [3.695203] st0: Block limits 1 - 8388608 bytes.
 [3.697814] st1: Block limits 1 - 8388608 bytes.



Am 20.09.2013 16:01, schrieb Andreas Koch:
 On 09/20/2013 03:43 PM, Alan Brown wrote:
  On 20/09/13 13:22, Andreas Koch wrote:
 
  Many thanks for the data point! When we use Bacula (not just btape)
  with larger block sizes (512 KB), our backups abort when bacula fails
  to read the tape's header block.
 

  Did you attempt to mix blocksizes on the same physical tape?

 No. I initially tried writing with 512 KB to our existing tapes for
 differentials (that were written on LTO-4 drives onto the same tapes, also
 with 512 KB).

  That will not work. If you change block sizes all existing tapes holding
  data must be marked used and new ones labelled.

 That was my second attempt: Freshly labelled tapes. All tries with 512KB
 failed as described previously, only 128 KB succeeded.

 As with btape, the problem with the larger block sizes did not come up
 during writing, but during reading. This lead to all backups failing, since
 Bacula appears to read the on-tape header block before writing the actual
 backup.

  I run 2Mb block size quite happily on HP LTO5 drives and have done for a
  few years. They are capable of 16Mb - we are limited to 2Mb by Bacula.

 Hmm, curiouser and curiouser! I'll wait and see whether btape gets fixed to
 accept the larger block sizes and then proceed to run more tests.

 Many thanks for all the feedback!

 Best,
   Andreas

-- 
[:O]###[O:]
Better to be able to drive along where you don't have to, than to have to 
drive along where you can't.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] USB tape drives?

2013-09-17 Thread Thomas Harold
On 9/16/2013 12:52 PM, Greg Woods wrote:
 On Sat, 2013-09-14 at 14:02 -0600, Greg Woods wrote:

 My question is whether there is any such thing as a USB tape drive that
 is known to work with Bacula.

 It's clear from the responses I got that I left out an important detail,
 since all the responses were telling me why I should use something other
 than a USB tape drive.

 This is a cheap home setup. My storage server is a Raspberry Pi. So I do
 not have SATA bays,  SCSI interfaces, or eSATA interfaces available. The
 only connection for peripherals is USB. My current storage device is a
 4TB USB drive, of the green type that shuts itself down automatically
 when inactive. But it is still connected, because I want to be able to
 fire off an incremental backup for a laptop or desktop whenever I want,
 without having to fiddle with hardware connections.


I'd recommend:

1) If this wasn't a Raspberry Pi, I'd say get a USB3 card.  Instead of 
25-30MB/s, you'll be able to push as much as 75-85MB/s over the wire to 
the drive.  Even if your host is USB2, you should still make sure to get 
a USB3 drive.

2) Look into autofs. The autofs daemon is designed to automatically 
mount a volume at a mount point when requested (and it is available), 
then dismount it after a period of inactivity.

(While we don't yet use bacula at the office for our offsite backups, we 
do use external USB3 drives with autofs + LUKS encryption.)

3) External 2.5" USB3 drives come in 1TB and 2TB capacities.  How big is 
your backup set?

Given the prices of LTO drives capable of storing 2-3TB, for a small setup, 
3-6 USB drives are very attractive.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] From NAS to NAS .... and far beyond!

2013-06-27 Thread Florent THOMAS

Hy folks,

I explain my context. I have a web agency as customer that store its 
production on a NAS. They are working from their IMac and sharing files 
on the NAS. They want to be more secure and make some increments 
backups. Of course they need some GUI because they are not as geeks as 
me ;-)
I know that there also will be other needs behind this single one, 
that's why I think bacula could be a great solution.


Yet, some questions are not totally clear from now.

I have a question regarding to backing up files _from a NAS_ with Bacula 
and storing backups to _another NAS_.
I read many documentations and I understand that backupfiles could be 
store to a NAS instead of a Tape. Is this point correct?
On another part, I would like to save the files tahta are on a 
production NAS. As bacula need client (If I understand well) does that 
meens that I need to find a client for my specific device?


Thansk for reading my poor french level of english,

regards


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] From NAS to NAS .... and far beyond!

2013-06-27 Thread Florent THOMAS

Hi,

Thanks for your answer, I was expecting this kind of answer. It confirms 
what I was thinking.

rsync will be a good solution.

I still have a question, probably due to my misunderstanding. You wrote
/You will need a bacula daemon on a NAS otherwise _(unworkable)_./

I don't know how I should understand that:
- It is totally impossible to run a bacula daemon on a NAS
- There is no bacula daemon for NAS and I will have to compile it (I 
found some resources for compiling bacula on a NAS)
- You could do everything you were thinking of but it's far far away 
from good practice.


Which interpretation of your sentence is the right one?

Anyway, a great thanks for your fast answer.

regards


On 27/06/2013 12:27, Philip Gaw wrote:

Hi,

Dont use bacula and use rsync instead. most (all?) NAS devices will 
allow you to rsync from device to device. You will need a bacula 
daemon on a NAS otherwise (unworkable).


 On 27/06/2013 11:00, Florent THOMAS wrote:

Hy folks,

I explain my context. I have a web agency as customer that store its 
production on a NAS. They are working from their IMac and sharing 
files on the NAS. They want to be more secure and make some 
increments backups. Of course they need some GUI because they are 
not as geeks as me ;-)
I know that there also will be other needs behind this single one, 
that's why I think bacula could be a great solution.


Yet, some questions are not totally clear from now.

I have a question regarding to backing up files _from a NAS_ with 
Bacula and storing backups to _another NAS_.
I read many documentations and I understand that backupfiles could be 
store to a NAS instead of a Tape. Is this point correct?
On another part, I would like to save the files tahta are on a 
production NAS. As bacula need client (If I understand well) does 
that meens that I need to find a client for my specific device?


Thansk for reading my poor french level of english,

regards








___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] From NAS to NAS .... and far beyond!

2013-06-27 Thread Florent THOMAS

Ok Philip,

If I'm used to compiling and all the development stuff, would using 
bacula still be a good solution in your view? Would it still be 
good practice?


I mean, for just one NAS to another, rsync would be enough, but I know 
that my customer will ask for more and more and I won't be able to match 
all his needs with rsync.


Regards


On 27/06/2013 13:00, Philip Gaw wrote:

On 27/06/2013 11:52, Florent THOMAS wrote:

Hy,

Thanks for your answer, I was expecting this kind of answer. It 
confirms what I was thinking.

rsync will be a good solution.

I still have a question probably due to my misunderstanding. You wrote
/You will need a bacula daemon on a NAS otherwise _(unworkable)_./

I don't know what I must understand :
- It is totally impossible to run bacula daemon on a NAS
- There is no bacula dameon for NAS and I will have to compile it (I 
found some ressources for compiling bacula on NAS)
- You could do everything you were thinking of but it's far far away 
from a good practice.


Which interpretation of you're sentence should be the right one?

#2 (There is no bacula daemon on the NAS, so you will need to compile 
it - which won't be easy)



Anywa, a great thanks for your fast answer.

regards


On 27/06/2013 12:27, Philip Gaw wrote:

Hi,

Dont use bacula and use rsync instead. most (all?) NAS devices will 
allow you to rsync from device to device. You will need a bacula 
daemon on a NAS otherwise (unworkable).


 On 27/06/2013 11:00, Florent THOMAS wrote:

Hy folks,

I explain my context. I have a web agency as customer that store 
its production on a NAS. They are working from their IMac and 
sharing files on the NAS. They want to be more secure and make some 
increments backups. Of course they need some GUI because they are 
not as geeks as me ;-)
I know that there also will be other needs behind this single one, 
that's why I think bacula could be a great solution.


Yet, some questions are not totally clear from now.

I have a question regarding to backing up files _from a NAS_ with 
Bacula and storing backups to _another NAS_.
I read many documentations and I understand that backupfiles could 
be store to a NAS instead of a Tape. Is this point correct?
On another part, I would like to save the files tahta are on a 
production NAS. As bacula need client (If I understand well) does 
that meens that I need to find a client for my specific device?


Thansk for reading my poor french level of english,

regards












___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] rescheduling jobs and max sched time

2013-03-04 Thread Thomas Lohman
Hi,

We have jobs whose time, either sitting and waiting or running, we want 
to limit to a certain number of hours.  In addition, we want these jobs 
to reschedule on error - essentially, start the job at X time, keep 
trying to run, but after Y hours end no matter what.  I've found that if 
you use reschedule on error and max run sched time, the latter will 
use the latest scheduled time as opposed to when the job initially was 
scheduled.  The database schedule time seems to stay the originally 
scheduled time since it's really the same job as far as that is 
concerned.  This seems to all make sense but doesn't accomplish what we 
want to do.  I was wondering if I'm missing existing options or will 
need to extend Bacula with a new Max Run Init Sched Time option which 
will use that initial scheduled time when determining if the job should 
be ended.
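
For reference, the kind of job setup described above looks roughly like this 
(sketch; the values are made up):

  Job {
    Name = ...
    Reschedule On Error = yes
    Reschedule Interval = 1 hour
    Reschedule Times = 6
    Max Run Sched Time = 8 hours
  }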

thanks,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Client side FS detection

2013-01-16 Thread Thomas Lohman
 One idea I can think of is using a list of filesystem types that matter.
 That way you can handle most things and also exclude cluster
 filesystems like ocfs2 that should best be backed up with a different
 job and separate fd.

This is what we do for our UNIX systems.  We actually define each file 
system as its own job and have things set up so that if a mismatch between 
what is found on a client and what is being backed up occurs, it is 
reported and can be fixed.  You're right in that a problem with this 
approach is if your clients may be attaching storage that uses 
unexpected file system types.  For us, that isn't really a problem since 
the policy is that we back up what is fixed on the computer and each 
computer is set up by us as well.
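
Each of those per-filesystem jobs just gets its own small FileSet, roughly 
like this (sketch; names made up):

  FileSet {
    Name = "foobar-host-var"
    Include {
      Options {
        Signature = MD5
        One FS = yes
      }
      File = /var
    }
  }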

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-10-04 Thread Thomas Lohman

 Yesterday I waited for the job to finish the first tape and then wait
 for me to insert the next one.

 I opened wireshark to see if there is a heartbeat during waiting -
 and there was none. During the job the heartbeat was active.

 From what you wrote the heartbeat should be active when waiting for
 a tape. Could you try to confirm that (have a look at the code)?

Marcus,

I think that you should be seeing heartbeats in this case.  What version 
of the Storage Daemon server are you running?  I am looking at 5.2.10 
and up as far as the code.  Can you run it in debug mode?  If so, set 
the debug level to 400 and you should get some messages in the output if 
the heartbeat logic is working.
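
e.g. from bconsole (storage resource name made up):

  setdebug level=400 storage=MySD trace=1

or start the SD by hand with -d 400.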

The heartbeat is sent from inside this method:

/*
 * Wait for SysOp to mount a tape on a specific device.
 *   Returns: W_ERROR, W_TIMEOUT, W_POLL, W_MOUNT, or W_WAKE
 */
int wait_for_sysop(DCR *dcr)

Inside that method, there is a particular debug line:

Dmsg0(dbglvl, "Send heartbeat to FD.\n");

Anyhow, if you're not seeing this debug output then it is not sending a 
heartbeat for whatever reason.  If you see it then it is sending it so 
the problem lies elsewhere if you're still not seeing it arriving at 
its destination.

hope this helps,


--tom





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-10-03 Thread Thomas Lohman

 I now could check if bacula fd to sd connection timed out because of
 the network switches. This was not the case. My job still cancels.

My experience is that the heartbeat setting has not helped us with our 
Connection Reset by Peer issues that occur occasionally.  Something 
more is going on than a typical network timeout.

 Can someone tell me how and when the heartbeat should occur? Is it
 active when no job is running? In my config I set the following line
 for dir, sd and fd: Heartbeat Interval = 5 This should result in a
 heartbeat every 5 sec?

The heartbeats are only set up when a job with a client is initiated. 
So, there should be no activity when no job is running.  When you 
initiate a job with the client, the director sets up a connection with 
the client telling the client what storage daemon to use.  The client 
then initiates a connection back to that storage daemon.  If you have 
the heartbeat settings in place as you do then you should see heartbeat 
packets sent from the client back to the director in order to keep that 
connection alive while the data is being sent back to the storage 
daemon.  In addition, you may see heartbeat packets sent from the 
storage daemon to the client.  I'd have to re-look at the code but I 
believe this is used in the scenario where the storage daemon is waiting 
for a volume to write the data to (i.e. operator intervention).  If the 
heartbeat setting is on then the storage daemon will send heartbeats 
back to the client in order to keep the connection alive while it waits.

Also of note, 5 seconds is the minimum feasible setting you can have. 
The heartbeat thread wakes up every 5 seconds to check to see if it 
needs to send a heartbeat to the director.  So, anything less than that 
really isn't going to do anything.

hope this helps,


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-09-27 Thread Thomas Lohman
 Tom: How did you restart the job. Did you have a script or do you do it
 by hand?

There are Job options to reschedule jobs on error:

Reschedule On Error = yes
Reschedule Interval = 30 minutes
Reschedule Times = 18

The above will reschedule the job 30 minutes after the failure and it'll 
try and do that 18 times before finally giving up.  These options come 
in handy if you're backing up laptops or other computers that may not be 
on your network 24x7.

hope this helps,


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-09-26 Thread Thomas Lohman

 2012-09-19 22:58:45   bacula-dir JobId 13962: Start Backup JobId 13962, 
 Job=nina_systemstate.2012-09-19_21.50.01_31
 2012-09-19 22:58:46   bacula-dir JobId 13962: Using Device FileStorageLocal
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:03:40   bacula-dir JobId 13962: Sending Accurate information.
 2012-09-19 23:05:12   bacula-dir-sd JobId 13962: Job write elapsed time = 
 00:01:21, Transfer rate = 2.517 M Bytes/second
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:05:17   bacula-dir JobId 13962: Fatal error: Network error with 
 FD during Backup: ERR=Connection reset by peer

We have seen that same error (Connection reset by peer) occasionally 
for many months.  Some are normal - Mac/Windows desktops/laptops that 
either get rebooted or removed from the network during a backup, etc. 
But sometimes we see this error with UNIX servers that are up 24x7.  We 
suspect that it is network related since we've had similar errors with 
print servers and non-Bacula backup servers, but we have yet to pin it 
down.  We restart failed jobs in Bacula, so typically the job 
completes OK even after initially getting this error on the first try. 
I'd be curious to know whether others get these errors occasionally and 
what version of Bacula you're running.


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] backup through firewall - timeout

2012-09-10 Thread Thomas Lohman
 Hi folks.

 I've got a problem whereby my email and web servers sometimes fail to backup.

 These two servers are inside the DMZ and backup to the server inside my LAN.

 The problem appears to be the inactivity on the connection after the data has
 been backed up while the database is being updated. Does anyone have any
 suggestions on what I can do?

 Gary

Gary,

Take a look at the Heartbeat Interval options for the client and storage 
configurations.  More than likely your firewall/router is dropping the 
connection due to inactivity.  How fast it's doing this will depend on 
the configuration and the network load so you may need to experiment 
with different interval settings.

hope this helps,


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] backup truecrypt container

2012-08-31 Thread Thomas Lau
Dear All,

I am wondering how we could back up TrueCrypt containers with Bacula. We have 3 
containers for which the boss keeps the key; every day the boss opens a container 
image and sends files, after which the container is closed again. We need to back 
up every day, and each container is a 600GB image. I tested incremental backups, 
but it seems Bacula can't copy just the delta within a file; instead it redoes a 
full backup of those containers because they have changed.

Any solution to deal with this?

Thomas Lau
Senior Technology Analyst
Principle One Limited
27/F Kinwick Centre, 32 Hollywood Road, Central, Hong Kong
T  +852 3555 2217 F  +852 3555   M  +852 9880 1217
Hong Kong   .   Singapore   .   Tokyo

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Heartbeat Interval errors

2012-08-19 Thread Thomas Lohman
Since adding Heartbeat Interval (set to 15 seconds) on our clients' 
FileDaemon definition as well as the Director definition in 
bacula-dir.conf and the Storage definition in bacula-sd.conf, it has 
fixed some of the firewall timeout issues that we've had backing up some 
clients but we've also started getting some of the following errors 
during each backup cycle (even though the backup finishes OK each time).

client-fd JobId 79326:Error: bsock.c:346 Socket is terminated=1 on call 
to client:xx.xx.xx.xx:36387

My best guess is that the client is trying to send a ping down the 
connection but in the time that it decided to do this, the backup 
finished and the connection was closed.

I was wondering if anyone else who uses this option has seen this error 
and if it should be considered a bug perhaps or if there is something 
we can do in our configuration to fix it.

thanks,


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem with bat from Bacula 5.2.10

2012-08-15 Thread Thomas Lohman
 bat ERROR in lib/smartall.c:121 Failed ASSERT: nbytes > 0

This particular message is generated because some calling method is 
passing in a 0 to the SmartAlloc methods as the number of bytes to 
allocate.  This is not allowed via an ASSERT condition at the top of the 
actual smalloc() method in the smartall.c file.  I'd think that you'd 
need to do some kind of trace to see where the problem is originating.


--tom




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem with bat from Bacula 5.2.10

2012-08-15 Thread Thomas Lohman
 bat ERROR in lib/smartall.c:121 Failed ASSERT: nbytes > 0

 This particular message is generated because some calling method is
 passing in a 0 to the SmartAlloc methods as the number of bytes to
 allocate.  This is not allowed via an ASSERT condition at the top of the
 actual smalloc() method in the smartall.c file.  I'd think that you'd
 need to do some kind of trace to see where the problem is originating.

 Hm, the question is what should i trace and how? Bat, the director or
 something other?

The bat executable is the one that you'd trace to see what it is doing. 
I don't know how much info bat may put out if you run it in some kind of 
debug mode, but that may be enough assuming there is such a mode.  I 
suspect you'll need to somehow find out what exactly it is doing that is 
causing it to try and allocate 0 bytes of memory.  If you can get a 
specific cause, then the Bacula bug folks may be able to track it 
down and fix it more easily.
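
For what it's worth, bat takes the usual Bacula -c/-d switches (check
its usage output to confirm on your build), so something along these
lines should at least dump some debug output before the assert fires -
the config path and debug level here are just examples:

bat -c /etc/bacula/bat.conf -d 100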

hope this helps,


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] BAT and qt vesrion

2012-08-09 Thread Thomas Lohman
I downloaded the latest stable QT open source version (4.8.2 at the 
time) and built it before building Bacula 5.2.10.  Bat seems to work 
fine with it.  If you do this, just be aware that the first time you 
build it, it will probably find the older 4.6.x RH QT libraries and 
embed their location in the shared library path, so when you go to use 
it, it won't work.  The first time I built it, I told it to explicitly 
look in its own source tree for its libraries (by setting LDFLAGS), 
installed that version, and then re-built it again telling it to now look 
in the install directory.
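
Roughly what I mean, as a sketch (the Qt install path and prefix are
examples only; adjust the configure options to match your own build):

# make the freshly built Qt the one that configure and the linker see
export PATH=/opt/qt-4.8.2/bin:$PATH
export LDFLAGS="-L/opt/qt-4.8.2/lib -Wl,-rpath,/opt/qt-4.8.2/lib"

./configure --enable-bat --prefix=/opt/bacula
make && make install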


--tom

 I tried to compile bacula-5.2.10 with BAT on a RHEL6.2 server. I
 found that BAT did not get installed because it needs qt version
 4.7.4 or higher but RHEL6.2 has version qt-4.6.2-24 as the latest.  I
 would like to know what the others are doing about this issue?

 Uthra

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula working state job listing

2012-08-02 Thread Thomas Lohman
This may be a stupid question, but is the working state data, which is 
cached on the client and used to display the recent job history of a 
client from the tray monitor, limited to the most recent 10 job events? 
Or is there a way to configure this to show and/or cache more than 
just 10?

thanks,


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] restores to Windows machines

2012-07-25 Thread Thomas Lohman
Hi,

We're running 5.2.10 for both Windows 7 clients and our servers.  My 
system admins have noticed that during restores of files to a 
Windows 7 client, the restored files are all hidden, which requires 
them to then go in and uncheck the hide protected operating system files 
option.  At that point, the files are then visible to the user. 
Typically, they do a restore and specify a restore directory of 
C:/RestoredFiles or something along those lines.  So, in that directory 
on the client, one sees a C and then the rest of the restored 
path/files underneath it.  The problem seems to be that the permissions 
on that C sub-directory in C:\RestoredFiles are what cause everything to 
be hidden.

Of the folks here who back up Windows clients, have you seen this 
problem and does anyone know of any fixes for it on the Bacula side?

thanks,


--tom

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] max run time

2012-07-15 Thread Thomas Lohman
This actually is a hardcoded sanity check in the code itself.  Search 
the mailing lists from the past year; I'm pretty sure I posted where in 
the code this was and what needed to be changed.  We have no jobs that 
run more than a few days, so we have not made such changes ourselves, 
and I can't guarantee it'll fix your problems completely - all I know is 
that overcoming the 6 day limit definitely will mean making a few tweaks 
to the code.  You may want to submit a bug report and make the case that 
such a sanity check should be removed or have a configurable way to 
override it.

hope this helps,


--tom


 but still they are terminated after 6 days:

 14-Jul 20:27 cbe-dir JobId 39969: Fatal error: Network error with FD
   during Backup: ERR=Interrupted system call
 14-Jul 20:27 cbe-dir JobId 39969: Fatal error: No Job status returned from FD.
 14-Jul 20:27 cbe-dir JobId 39969: Error: Watchdog sending kill after
   518426 secs to thread stalled reading File

 I like to know how to fix this.

 I've seen the comments in the mailing list in the past that running
 backups that take more than 6 days is insane. They're wrong in my
 environment. I don't want to hear that again. I have a genuine reason for
 running very long backups and I need to know how to make it work.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restores to Windows host fail and file daemon crashes on 5.2.9

2012-06-19 Thread Thomas Lohman

 I am running version 5.2.9 on my director and file daemon. I am
 able to backup successfully but when I attempt to restore data onto
 the 32bit Windows 2003 file daemon the bacula service terminates on
 the 2003 server and the restore job fails. I can choose a Linux
 file daemon as the target for the data and the data is restored but
 if I choose the Windows 2003 32bit file daemon the file daemon
 crashes. What can I do to troubleshoot this further?

Yes, this sounds like the same problem a number of sites, including us, 
have had.  I suspect it will work fine if you put 5.0.3 on the Windows 
client.  Also, looking at the bug tracker e-mails, I believe Kern may 
have fixed this issue in 5.2.10 which will be the next minor release.


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore dies every time

2012-06-15 Thread Thomas Lohman
 Restores to the Windows client systematically crash the FD on the
 client without restoring anything. This seems to be a known, as
 yet unsolved problem. There are several posts on this on the list.

Yes, we have the same problem.  For now, we have rolled back our Windows 
clients to 5.0.3 which works fine.  I opened a bug report for this but I 
don't think that they were able to reproduce it so they wanted a 
complete stack trace of the dying client which I don't have time to do 
at the moment.  I believe the bug was closed but I'd be happy to re-open 
it if anyone has a complete trace of the dead FD.  Or feel free to open 
a new report since there is obviously a bug in there somewhere given the 
number of people experiencing this.


--tom



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bad interaction between cancel duplicates and rerun failed jobs

2012-06-04 Thread Thomas Lohman
Jon,

I believe I posted this same issue back in April and didn't get any 
replies.  I never did submit it as a bug but it does seem to be a bug to me.

http://sourceforge.net/mailarchive/forum.php?thread_name=4F8ECD71.8080203%40mtl.mit.edu&forum_name=bacula-users

Perhaps I'll go ahead and post a bacula bug report and see what they say 
about this scenario.

cheers,


--tom

 So I've got a full backup job that takes more than a day to complete. To
 keep a second full backup from getting started while the first one is still
 completing I've set the following in the Job definitions:
Allow Duplicate Jobs = no
Cancel Queued Duplicates = yes

 However to handle network connection issues or clients being missing when
 their scheduled backup times comes around I have this setting in the Job
 definitions as well:
Rerun Failed Levels = yes

 It seems that the duplicate job handling marks the level as failed, so that
 when the first backup finishes, the next backup that wants to run should be
 an incremental, but gets upgraded to a full because of the duplicate jobs
 that were canceled.

 Anyone know a way around this?


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula 5.2 in Debian

2012-05-04 Thread Thomas Mueller


On Fri, 04 May 2012 15:07:17 +0200, mailing wrote:

 Hey folks...
 
 can youtell me when Debian takes Bacula 5.2 in in the repository?
 

guess this is more a question for the debian bacula packaging group:

http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-bacula-devel

and try googling. there were discussions about 5.2 on this mailinglist.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] question on re-running of failed levels

2012-04-18 Thread Thomas Lohman
Before I submit this as a possible bug, I just wanted to see if perhaps 
it is the expected behavior for Bacula.

We have a few long running jobs that take > 24 hours to do a Full 
backup.  Because of this, we have the following set:

Allow Duplicate Jobs = no
Cancel Lower Level Duplicates = yes
Cancel Queued Duplicates = yes

In addition, we also have Rerun Failed Levels set to 'yes' since 
sometimes our computers are not accessible when a Differential or Full runs.

So, what I have seen happen recently is the following scenario:

April 16th 5am - Full runs for Job X
April 17th 5am - Job X runs again and is canceled
April 17th 3pm - Original job X finishes successfully
April 18th 5am - Job X runs again and does a Full again

The April 18th job should only run an Incremental, but it appears that 
because we have Rerun Failed Levels set to 'yes', it sees the April 
17th 5am failure and decides that it needs to rerun the Full even though 
the April 16th 5am job did successfully finish after the April 17th 5am 
failure/cancellation.

Given these settings, should one expect it to see that successful job 
and not rerun the Full?  Has anyone else seen this behavior?

FYI, we are running Bacula 5.2.6 on the director/storage side and 5.0.3 
on this particular client.

thanks,


--tom


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] PostgreSQL - pg_dump

2012-03-28 Thread Thomas Bennett
I am having some hardware issues because of several HVAC outages.  So if I am 
doing a pg_dump of the bacula database on PostgreSQL, do I need to include oids?

From the pg_dump man page:

-o, --oids
    Dump object identifiers (OIDs) as part of the data for every
    table. Use this option if your application references the OID
    columns in some way (e.g., in a foreign key constraint).
    Otherwise, this option should not be used.

This is Bacula 3 with PostgreSQL 8.



Thanks,

Thomas


Thomas McMillan Grant Bennett   Appalachian State University
Operations  Systems AnalystP O Box 32026
University LibraryBoone, North Carolina 28608
(828) 262 6587
Library Systems  http://www.library.appstate.edu


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Full backup forced if client changes

2012-03-25 Thread Thomas Mueller
On Sat, 24 Mar 2012 16:52:32 -0400, Steve Thompson wrote:

 Bacula 5.0.2. For the following example job:
 
...
 }
 
 more than one client is available to backup the (shared) storage. If I
 change the name of the client in the Job definition, a full backup
 always occurs the next time a job is run. How do I avoid this?

I would run the bacula-fd (maybe a second instance so it doesn't interfere 
with other backups on the node) with the same configuration on every shared 
storage node from which you would like to back up the storage, and set up a 
DNS entry to point at the desired node.
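
A rough sketch of the second-instance idea (the name, port and paths are
made up, and the Director resource needed for authentication is omitted -
the point is simply a separate config listening on its own FDport):

# /etc/bacula/bacula-fd-shared.conf
FileDaemon {
  Name = shared-storage-fd
  FDport = 9112                      # the regular instance keeps 9102
  WorkingDirectory = /var/lib/bacula-shared
  Pid Directory = /var/run
}

# start it alongside the normal daemon
bacula-fd -c /etc/bacula/bacula-fd-shared.conf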

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Backing up Outlook PST files.

2012-03-02 Thread Thomas Mueller
On Fri, 02 Mar 2012 12:38:52 +0200, Wassim Zaarour wrote:

 Hello List,
 
 I was wondering if anyone is able to back Outlook PST files in an
 efficient way with Bacula.
 For my understanding if the PST file will be modified everyday, than
 Bacula will be backing it up everyday with any level of Backup
 (Incremental,
 Differential, Full) while we only need to backup the changes To save
 backup time and network bandwidth.
 
 Any tips or hints on how to backup a PST file in a block level way?
 

These PST files are really annoying!

There is the delta plugin which could be used (Bacula Enterprise version):

http://www.bacula.org/en/dev-manual/main/main/Enterprise_Bacula_New_Featu.html#SECTION0035

or you could use software to export the PST files to small files 
(example: http://www.mailstore.com/de/mailstore-home.aspx)

or you just live with the fact that you need to back up the whole file 
every day.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Server disaster recovery.

2012-03-02 Thread Thomas Mueller
On Fri, 02 Mar 2012 13:02:23 +0200, Wassim Zaarour wrote:

 Hi List,
 
 We are backing up to disks not tapes, now I need to set up a plan in
 case the bacula server (director) crashes.
 What to do in case the MySQL catalog is lost? Can we recover? I guess it
 is easy to keep copy of configuration files to install on a new system
 but what about the catalog?
 
 What is the best plan for the server's disaster recovery?
 Thanks.



If the catalog has gone away and no SQL dump is available, you could use 
bscan to repopulate the catalog, but it will take a looong time if 
you have lots of tapes. And IMHO not all information is restored. The last 
time I was forced to use bscan, the restore from the tape read the 
whole tape to restore some files instead of jumping directly to the right 
place.

Better to back up your bacula configs and a catalog dump somewhere easily 
accessible (another server (in another building), Amazon S3, whatever) to 
allow a fast recovery of the bacula backup service.
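
For the bscan route, something along these lines (the volume and device
names are examples only; read the bscan documentation before pointing it
at a production catalog):

# re-create catalog records for volume Vol-0001 on device FileStorage
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Vol-0001 FileStorage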

- Thomas



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Server disaster recovery.

2012-03-02 Thread Thomas Mueller
On 02.03.2012 14:32, Wassim Zaarour wrote:
  So if I take a backup of MySQL and I have the configuration files and the
  volumes of the data, I could simply export the MySQL database to a new
  server, install bacula, use the backed up conf files and put the volumes
  in the same configured location - would that make me up and running?

Yes. But I would recommend testing the procedure and writing it down 
somewhere. :)
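
A sketch of the procedure I mean (the database name and paths are
assumptions, and Bacula also ships a make_catalog_backup script that can
do the dump for you):

# on the current server: dump the catalog and save the configs
mysqldump --single-transaction bacula > /backup/bacula-catalog.sql
tar czf /backup/bacula-configs.tar.gz /etc/bacula

# on the replacement server: restore configs, recreate and load the catalog
tar xzf /backup/bacula-configs.tar.gz -C /
mysql -e "create database bacula"
mysql bacula < /backup/bacula-catalog.sql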

- Thomas



 

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Client Password based data encryption.

2012-02-16 Thread Thomas Mueller
On 16.02.2012 12:32, Wassim Zaarour wrote:
 What I was thinking of is a way to have some password-based
 encryption, where only the user knows his password, but I didn't find any
 solution that can work like this.
 I guess for now we have to settle for the IT admin or the sys admin having
 access to the encrypted data.

As you describe it, it's not possible. Even if it were supported, how 
would it be encrypted on the client without the sys admins knowing the 
password?

I've tested the other method of removing the private key from the PKI 
Keypair, but it fails with "Failed to load private key for File daemon."

- Thomas






  On 2/16/12 12:01 PM, Thomas Mueller tho...@chaschperli.ch wrote:

  On Wed, 15 Feb 2012 11:07:40 +0200, Wassim Zaarour wrote:

 Hello,

  Currently the data encryption option in Bacula is based on certificates,
  meaning that if the person creating the certificates for the client
  keeps his copy of the certs, he is able to restore and decrypt the data
  without the user's approval. Since some people want their data
  undecryptable by absolutely anyone but them, I was wondering if there is
  a way to encrypt data using a password that only the client knows, or if
  there are any ideas how to achieve this.


 let the user himself create the encryption cert and try to use only the
 public-key in the sd.

 In theory encryption does just need the public-key. Encrypting needs the
 private key. but I don't know if it is possible to provide only the
 public key to bacula-sd.

 - Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Space reclamation

2012-02-16 Thread Thomas Mueller
On Thu, 16 Feb 2012 13:36:45 +0100, Demeter Tibor wrote:

 Hi,
 
 
 Are there any option in bacula for the tape space reclamation ? I know
 and use this option from Tivol storage manager.
 
 
 How can I defragment my tapes?

IMHO there is no such thing as defragmentation for tapes.

You could use a migration job to move the data away from the tapes so 
that the old ones can be recycled.
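
A sketch of what such a migration job could look like (all names are
placeholders; the destination comes from the Next Pool directive of the
source pool, and the Client/FileSet here are only required by the config
parser, not actually used for the migration itself):

# source pool: Next Pool names where migrated data is written
Pool {
  Name = OldTapes
  Pool Type = Backup
  Storage = Autochanger
  Next Pool = NewTapes
}

Job {
  Name = "migrate-oldest-volume"
  Type = Migrate
  Pool = OldTapes
  Selection Type = OldestVolume
  Client = some-client-fd
  FileSet = "SomeFileSet"
  Messages = Standard
}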

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Client Password based data encryption.

2012-02-16 Thread Thomas Mueller
On Thu, 16 Feb 2012 08:01:57 -0500, Phil Stracchino wrote:

 On 02/16/2012 05:01 AM, Thomas Mueller wrote:
 In theory encryption does just need the public-key. Encrypting needs
 the private key. but I don't know if it is possible to provide only the
 public key to bacula-sd.
 
 I believe you mean that DEcrypting requires the private key.

oh yes, I've meant DEcrypting needs the private key. 

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Encrypting Data on Tape

2012-01-10 Thread Thomas Mueller
On Tue, 10 Jan 2012 16:12:26 -0500, Craig Van Tassle wrote:

 I'm sorry if this has been asked before.
 
 I'm running a Scalar 50 with HP LTO-4 Drives. I want to encrypt the data
 that is put on the tape, We already have encryption going between the
 Dir/SD and FD's. I just want to encrypt the data that will be placed on
 Tape for OffSite storage.
 
 Has anyone done that or know some pointers to point me to so I can get
 this working?
 
 Thanks!

Bacula encryption takes place on the file daemon. Encryption on the 
storage daemon is not supported (... yet; it's on the projects list, 
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.2)

As Steve said, (some?) LTO drives support on-drive encryption. Never used 
it myself.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] catalog pg_dump fails after 5.2.2 upgrade

2011-12-23 Thread Thomas Lohman
The update postgres script for 5.2.x is missing these two lines which 
you can run manually from within psql (connect to the bacula db as your 
Postgres admin db user):

grant all on RestoreObject to ${bacula_db_user};
grant select, update on restoreobject_restoreobjectid_seq to 
${bacula_db_user};

That should solve your problem, I think.
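
For example, with the database user literally called bacula and the
admin user pgsql (as in the \l output quoted below), the session would
look roughly like this - adjust the user names if yours differ:

psql -U pgsql bacula
bacula=# grant all on RestoreObject to bacula;
bacula=# grant select, update on restoreobject_restoreobjectid_seq to bacula;
bacula=# \q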


--tom


 At this point I'm unclear where the permissions problem exists.

 Within PostgreSQL.  The PostgreSQL user does not have permissions on that 
 table…

 This is not a Unix permissions issue.


 Thanks in advance for further clues.

 dn




 I am not using 5.2.2, so I did the version table as an example of what it 
 should look like.


 bacula-# \l
   List of databases
     Name    | Owner  | Encoding
  -----------+--------+-----------
   bacula    | bacula | SQL_ASCII
   postgres  | pgsql  | UTF8
   template0 | pgsql  | UTF8
   template1 | pgsql  | UTF8
 (4 rows)

 User bacula's shell is defined as /sbin/nologin, so I think it's user
 pgsql that's doing the work (at least it was prior to the upgrade). User
 bacula cannot launch psql nor can I su to that user because of the
 nologin setting.

 What permissions do I need to change to get this dump working?

 Thanks again!

 dn



 I have restarted all bacula and postgresql daemons since the upgrade. I
 have not changed any permissions in the /home/bacula directory.

 Thanks in advance for troubleshooting clues.

 dn









___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Permanently disabling clients without deleting them

2011-12-19 Thread Thomas Mueller
On Tue, 20 Dec 2011 13:31:09 +1100, Gary R. Schmidt wrote:

 Hi,
 Bacula 3.0.3.
 
 I've had some machines be removed from use, but I don't want their
 backups to go away until we are sure their replacements are complete.
 
 I can disable them individually from the console, but is there something
 I can do in the .conf file that will turn them off?

disable the job with 

Enabled = no

in the conf file.
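
For example (the job name is just an illustration; everything else in
the Job resource stays as it is):

Job {
  Name = "retired-host-backup"
  Client = retired-host-fd
  # ... existing directives unchanged ...
  Enabled = no
}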


Bacula Main Reference - The Job Resource

http://bacula.org/5.2.x-manuals/en/main/main/Configuring_Director.html#SECTION00233


- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Restore with data encryption?

2011-12-19 Thread Thomas Mueller
On Mon, 19 Dec 2011 17:14:15 +0100, Oliver Hoffmann wrote:

 Hi all,
 
 I do backups with data encryption. Backups as well as restores on the
 clients work without problems.
 Now I want to be able to do restores with the server (or another one)
 only. The doc says that adding the following line would be enough.
 
 PKI Keypair = /etc/bacula/keys/master.keypair
 
 So my working bacula-fd.conf on the server looks like this (just the PKI
 part):
 
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = /etc/bacula/keys/server-fd.pem
  PKI Master Key = /etc/bacula/keys/master.cert
 
 Next I replaced server-fd.pem with master.keypair like mentioned in the
 doc. I made the master.keypair accordingly.
  That doesn't work. Neither does putting the client-fd.pem in place.
 
 I got this error:
 
 Error: restore.c:944 Missing cryptographic signature for
 /path/to/my/file
 


I had problems restoring files from an encrypted backup if Replace: 
always was not selected on the restore job.

But I do not remember the exact error message.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Virtualfull

2011-12-01 Thread Thomas Mueller
On 01.12.2011 13:42, Miikael Havelock Nilson wrote:
 Hello,


  I have a small question. When a VirtualFull is made, will the data pass 
  through the director? The question is, in the case where the storage is on 
  low bandwidth, will data move from storage to director and back to storage, 
  or storage to storage?


The data is copied only on the storage daemon.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Volume Use Duration not working in 5.2.1

2011-11-24 Thread Thomas Mueller
On 24.11.2011 17:01, Fahrer, Julian wrote:
 Hi,

 Am I missing something or is the Volume Use Duration parameter in the
 pool not working in 5.2.1?

 I defined the pool like this:
 ---
 Pool {
Name = NEO200S_Weekly_Pool
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 31 days
Volume Use Duration = 72h
Storage = NEO200S
Cleaning Prefix = CLN
 }

 ---

 In bconsole a list volumes shows

 ---

 Pool: NEO200S_Weekly_Pool
 +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | mediaid | volumename | volstatus | enabled | volbytes          | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
 +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 |       4 | ANO623L4   | Full      |       1 |   969,400,986,624 |      196 |    2,678,400 |       1 |    0 |         0 | LTO-4     | 2011-11-14 11:28:51 |
 |       5 | ANO650L4   | Append    |       1 |   315,867,396,096 |       64 |    2,678,400 |       1 |    0 |         0 | LTO-4     | 2011-11-14 13:26:54 |
 |       7 | ANO622L4   | Full      |       1 | 1,014,171,669,504 |      205 |    2,678,400 |       1 |    0 |         0 | LTO-4     | 2011-11-21 06:42:40 |
 |       8 | HFO402L4   | Append    |       1 |   739,355,000,832 |      148 |    2,678,400 |       1 |    0 |         0 | LTO-4     | 2011-11-21 11:36:12 |
 +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+

 ---

 Media IDs 5 & 8 should actually be in status Used. It is 2011-11-24
 17:00 right now...
 Same thing for other pools with Volume Use Duration = 12 days.

IMHO the status only gets updated when a medium is considered for use.

If you start a backup job, does it take media ID 5 or 8?

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Force Bacula to retain at least one Full backup per job?

2011-11-23 Thread Thomas Mueller
On 23.11.2011 15:01, Ralph Kutschera wrote:
  Hello!

  On 16.04.2011 00:43, CDuv wrote:
 Hello,

  While testing Bacula, I've had this bad "No full backup before
  2011-04-15 13:37:00 found" surprise when trying to restore files of a
  Job.

 The client status shows that jobs were correctly done (both full and
 incremental btw) but restore command says he can't.

 I think my problem comes from retention times: files/jobs of a full
 backup getting pruned after the defined File/Job Retention setting.

  Let's say I set a retention of 10 days and schedule my full backup
  every week: this should run fine, 10-day-old backups will be
  pruned 3 days after the last full backup. Right?

  But what if that Client isn't available for two weeks? The Job won't
  run, and neither will the full backup. But I think pruning will occur and
  will leave my job without any backup (either full or incremental). Am
  I still right?

 So here is my question: Is there a way to prevent Bacula from pruning
 files/jobs that are part of the last full backup Job done?

 Thank you

 Is there a solution yet for Bacula 2.0.3?

 Thanks,
 Ralph


IMHO this is a new 5.2.x feature.

- Thomas


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Excluding files and directories recursive

2011-11-04 Thread Thomas Schweikle
Hi!

I have defined:

FileSet {
  Name = bacula
  @/etc/bacula/clients/include/linux.inc
  @/etc/bacula/clients/include/fileset.exc
}

where linux.inc is:
  Include {
Options {
  signature = MD5
  xattrsupport = yes
  onefs = yes
}
File = \\|/bin/sh -c 'df -TlP | tail -n +2 | grep
\ext\\|reiserfs\\|ufs\\|xfs\\|zfs\ | grep -v \/BACKUP/\ | awk
\{print \\$7}\'
Exclude Dir Containing = .exclude
  }

and fileset.exc:
  Exclude {
File = /var/lib/bacula
File = /var/lib/mysql
File = /var/lib/nfs
File = /var/lib/postgresql
File = /var/lock
File = /var/run
File = /lib/init/rw
File = /BACKUP
File = /HOME
File = /home
File = /dev
File = /proc
File = /sys
File = /tmp
File = /.journal
File = /.fsck
  }


now, backing up bacula itself I get errors:
2011-11-04 01:10:09   bacula-fd JobId 8007:  Could not stat
/var/lib/postgresql/9.1/main/base/19190/27367: ERR=No such file or
directory

Since /var/lib/postgresql is an excluded path, I'd assume it would be
excluded, but it is included again by the script creating the
includes, leading to:

FileSet {
  Name = bacula
  Include {
Options {
  signature = MD5
  xattrsupport = yes
  onefs = yes
}
File = / /boot /var/lib/postgresql
Exclude Dir Containing = .exclude
  }
  Exclude {
File = /var/lib/bacula
File = /var/lib/mysql
File = /var/lib/nfs
File = /var/lib/postgresql
File = /var/lock
File = /var/run
File = /lib/init/rw
File = /BACKUP
File = /HOME
File = /home
File = /dev
File = /proc
File = /sys
File = /tmp
File = /.journal
File = /.fsck
  }
}

Any idea why /var/lib/postgresql is descended into?
And how do I exclude it the right way?


-- 
Thomas

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

