Re: [Bacula-users] restore raid array

2018-12-12 Thread Jerry Lowry
Thank you Kern and fellow Bacula users,

I had a suspicion that was the case, but thought I would ask. I appreciate
the help and commend the community for its endeavors to work with other
users.

thank you again,

jerry

On Wed, Dec 12, 2018 at 12:25 AM Kern Sibbald  wrote:

> Hello,
>
> Bacula has no way to reconstruct the data in an original backup Volume.
> If you lose the Volume, it is gone.  About the only mitigating factors
> after you delete the original volumes are: as is your case, switch to using
> the Copy volume in place of the original Backup volumes;  begin re-creating
> new volumes (obviously the history for prior backups will not be available
> in those volumes).
>
> There is one other possibility that might work.  After deleting the
> original Volumes, your Copy Volumes will be promoted to being the Backup
> Volumes.  You could possibly then do a sort of reverse Copy of your Copy
> volumes (now promoted to Backup) to your on-site location.  Then by some
> SQL magic, you might be able to swap your Backup volumes and Copy volumes
> so that you will be back to the original configuration, except that your
> original Backups will be reconstructed from copies of your Copy volumes.
> To the best of my knowledge no one has ever done this, and without a lot of
> technical knowledge of the catalog formats, it is not possible.
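For anyone curious what that "SQL magic" would have to touch: in the standard Bacula catalog, the distinction lives in the Job.Type column ('B' for a backup, 'C' for a copy, with PriorJobId linking a copy to its source job). A read-only query such as the following is a sketch against the stock MySQL catalog schema, shown only to illustrate which records would be involved, not a procedure:

```sql
-- Sketch: list Copy jobs alongside the original jobs they copied.
-- Assumes the stock Bacula catalog schema (Job.Type, Job.PriorJobId).
SELECT JobId, Name, Type, PriorJobId, JobBytes
FROM Job
WHERE Type IN ('B', 'C')
ORDER BY JobId;
```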
>
> I have never been fully happy with how Copy volumes are handled, or more
> precisely the lack of control the user has from bconsole to manipulate Copy
> volumes.  Perhaps some additional future code could make situations like
> yours easier to manage.
>
> Perhaps someone else has a good idea to solve this problem.
>
> Best regards,
> Kern
>
>
>
> On 12/11/18 6:13 PM, Jerry Lowry wrote:
>
> Josh,
> Yes, I understand how copy jobs work when the original job is
> deleted.  What I need to do is rebuild the client directories with the
> backup database using the latest offsite backup, if possible.  I don't want
> to restore the backup for the client; I want to rebuild the data in the
> directories so that a restore can be done from there.  Hope that makes
> sense.  I know that I can use the offsite backups to restore the client
> data. I want to know if I can rebuild what would be the initial backup of
> the volume.
>
> thanks,
> jerry
>
> On Tue, Dec 11, 2018 at 6:18 AM Josh Fisher  wrote:
>
>>
>> On 12/11/2018 1:09 AM, Jerry Lowry wrote:
>>
>> Well, the raid was (8) 6TB disks attached to an ATTO Tech raid
>> controller in a Supermicro cabinet. It was set up as a RAID-5 disk array.
>> The ATTO support group knows how it was configured, as they have been helping
>> me since Thursday.  The raid setup is not the problem; that can be rebuilt
>> to duplicate the on-disk file structure. Bacula was using this raid array
>> as storage for different clients in my network.  Each client had a
>> directory on the array, and within each directory there were anywhere from
>> 3 to 8 Bacula volumes. Each volume held anywhere from 250 GB to
>> 320 GB. Two of the clients would have an offsite backup done each week. A
>> Bacula copy job would run each week and copy that week's backups onto a hot
>> swap raid disk running on the same system. The system was the Director and
>> Storage Daemon combined.
>> What I want to do is to restore the daily volumes into the new raid array
>> from the offsite disks from the most recent offsite backups. Will any of
>> the bacula utilities enable me to do this?
>>
>>
>> No special tools are needed. If the original volume files no longer
>> exist, then the volumes (and their jobs) can be deleted from the catalog
>> using 'delete volume'. When a Job that has a Copy is deleted from the
>> catalog, bacula will automatically promote the Copy to be the real
>> backup for that job, so that subsequent restores use the promoted
>> copy rather than the original. The Copy literally replaces the original and
>> the original ceases to exist. At that point, a normal restore will
>> automatically use those promoted volumes. You should ensure that those
>> promoted volumes are marked as Used so that no jobs will attempt to write
>> to them.
>>
>> If auto-labeling is being used, then jobs should create new volumes as
>> needed when they run. If not, then you will manually create new empty
>> volume files in the client directories and label them using the Label
>> command from bconsole.
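Sketched in bconsole, the procedure described above might look like the following; the volume, storage, and pool names are placeholders loosely based on the directory layout discussed in this thread, not taken from a real configuration:

```
# 1. Delete a lost original volume; Copies of its jobs get promoted.
delete volume=tools-3 yes
# 2. Mark a promoted (former Copy) volume Used so no job writes to it.
update volume=tools-copy-1 volstatus=Used
# 3. If auto-labeling is off, label fresh volumes for new backups.
label volume=tools-9 storage=File pool=Default
```

The `delete volume`, `update volume`, and `label` commands are standard bconsole; verify the exact resource names against your own Director configuration before running anything.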
>>
>>
>>
>> Thanks,
>> jerry
>>
>> On Mon, Dec 10, 2018 at 6:01 PM Phil Stracchino 
>> wrote:
>>
>>> On 12/10/18 7:44 PM, Jerry Lowry wrote:

Re: [Bacula-users] restore raid array

2018-12-11 Thread Jerry Lowry
Josh,
Yes, I understand how copy jobs work when the original job is
deleted.  What I need to do is rebuild the client directories with the
backup database using the latest offsite backup, if possible.  I don't want
to restore the backup for the client; I want to rebuild the data in the
directories so that a restore can be done from there.  Hope that makes
sense.  I know that I can use the offsite backups to restore the client
data. I want to know if I can rebuild what would be the initial backup of
the volume.

thanks,
jerry

On Tue, Dec 11, 2018 at 6:18 AM Josh Fisher  wrote:

>
> On 12/11/2018 1:09 AM, Jerry Lowry wrote:
>
> Well, the raid was (8) 6TB disks attached to an ATTO Tech raid controller
> in a Supermicro cabinet. It was set up as a RAID-5 disk array.  The ATTO
> support group knows how it was configured, as they have been helping me since
> Thursday.  The raid setup is not the problem; that can be rebuilt to
> duplicate the on-disk file structure. Bacula was using this raid array as
> storage for different clients in my network.  Each client had a directory
> on the array, and within each directory there were anywhere from 3 to 8
> Bacula volumes. Each volume held anywhere from 250 GB to 320 GB. Two
> of the clients would have an offsite backup done each week. A Bacula copy
> job would run each week and copy that week's backups onto a hot swap raid
> disk running on the same system. The system was the Director and Storage
> Daemon combined.
> What I want to do is to restore the daily volumes into the new raid array
> from the offsite disks from the most recent offsite backups. Will any of
> the bacula utilities enable me to do this?
>
>
> No special tools are needed. If the original volume files no longer exist,
> then the volumes (and their jobs) can be deleted from the catalog using
> 'delete volume'. When a Job that has a Copy is deleted from the catalog,
> bacula will automatically promote the Copy to be the real backup for that
> job, so that subsequent restores use the promoted copy
> rather than the original. The Copy literally replaces the original and the
> original ceases to exist. At that point, a normal restore will
> automatically use those promoted volumes. You should ensure that those
> promoted volumes are marked as Used so that no jobs will attempt to write
> to them.
>
> If auto-labeling is being used, then jobs should create new volumes as
> needed when they run. If not, then you will manually create new empty
> volume files in the client directories and label them using the Label
> command from bconsole.
>
>
>
> Thanks,
> jerry
>
> On Mon, Dec 10, 2018 at 6:01 PM Phil Stracchino 
> wrote:
>
>> On 12/10/18 7:44 PM, Jerry Lowry wrote:
>> > Hi,
>> > Last Thursday I was adding a disk to the bacula raid array when the
>> > system decided to fail. When it rebooted my raid array was gone. This
>> > raid was where all of my daily backups were held, which is where I do my
>> > offsite backups from.  The database is fine, catalog is in working
>> > order.  I rebuilt the server and per Raid Support I have all new disks.
>> >
>> > I need to recreate the physical backup volumes for each of the clients
>> > back on the raid array. I have looked at the utility document. Is bcopy
>> > what I need to use?  I want to recreate, for example, the file structure on
>> > disk called /engineering/tools with the volumes "tool-3, tools-4,
>> tools5...".
>>
>>
>> In order to be able to answer this question, anyone here would need to
>> know a lot more about how your RAID was set up.  But you're probably a
>> lot better off asking for help from whoever made the tools you used to
>> build it, or community forums for them.  They'll need to know what you
>> built it with too.
>>
>>
>>
>> --
>>   Phil Stracchino
>>   Babylon Communications
>>   ph...@caerllewys.net
>>   p...@co.ordinate.org
>>   Landline: +1.603.293.8485
>>   Mobile:   +1.603.998.6958
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] restore raid array

2018-12-10 Thread Jerry Lowry
Well, the raid was (8) 6TB disks attached to an ATTO Tech raid controller
in a Supermicro cabinet. It was set up as a RAID-5 disk array.  The ATTO
support group knows how it was configured, as they have been helping me since
Thursday.  The raid setup is not the problem; that can be rebuilt to
duplicate the on-disk file structure. Bacula was using this raid array as
storage for different clients in my network.  Each client had a directory
on the array, and within each directory there were anywhere from 3 to 8
Bacula volumes. Each volume held anywhere from 250 GB to 320 GB. Two
of the clients would have an offsite backup done each week. A Bacula copy
job would run each week and copy that week's backups onto a hot swap raid
disk running on the same system. The system was the Director and Storage
Daemon combined.
What I want to do is to restore the daily volumes into the new raid array
from the offsite disks from the most recent offsite backups. Will any of
the bacula utilities enable me to do this?

Thanks,
jerry

On Mon, Dec 10, 2018 at 6:01 PM Phil Stracchino 
wrote:

> On 12/10/18 7:44 PM, Jerry Lowry wrote:
> > Hi,
> > Last Thursday I was adding a disk to the bacula raid array when the
> > system decided to fail. When it rebooted my raid array was gone. This
> > raid was where all of my daily backups were held, which is where I do my
> > offsite backups from.  The database is fine, catalog is in working
> > order.  I rebuilt the server and per Raid Support I have all new disks.
> >
> > I need to recreate the physical backup volumes for each of the clients
> > back on the raid array. I have looked at the utility document. Is bcopy
> > what I need to use?  I want to recreate, for example, the file structure on
> > disk called /engineering/tools with the volumes "tool-3, tools-4,
> tools5...".
>
>
> In order to be able to answer this question, anyone here would need to
> know a lot more about how your RAID was set up.  But you're probably a
> lot better off asking for help from whoever made the tools you used to
> build it, or community forums for them.  They'll need to know what you
> built it with too.
>
>
>
> --
>   Phil Stracchino
>   Babylon Communications
>   ph...@caerllewys.net
>   p...@co.ordinate.org
>   Landline: +1.603.293.8485
>   Mobile:   +1.603.998.6958
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>


[Bacula-users] restore raid array

2018-12-10 Thread Jerry Lowry
Hi,
Last Thursday I was adding a disk to the bacula raid array when the system
decided to fail. When it rebooted my raid array was gone. This raid was
where all of my daily backups were held, which is where I do my offsite
backups from.  The database is fine, catalog is in working order.  I
rebuilt the server and per Raid Support I have all new disks.

I need to recreate the physical backup volumes for each of the clients back
on the raid array. I have looked at the utility document. Is bcopy what I
need to use?  I want to recreate, for example, the file structure on disk
called /engineering/tools with the volumes "tool-3, tools-4, tools5...".

thanks,

jerry


[Bacula-users] building 9.2 with Qt5 failing to find QT

2018-08-13 Thread Jerry Lowry
Hi,
I am trying to build bacula 9.2 with Qt 5.  The configure script is failing
to find the Qt directories.  I have these defined:

QTINC=/usr/local/Qt/5.11.1/gcc_64/include/QtGui
QTDIR=/usr/local/Qt/5.11.1/gcc_64

I have tried a number of different iterations of these, but they all fail.
What is the bacula configure script looking for?  I was thinking it is the
include files and binaries; am I wrong?

Anyone care to lend a helping variable?

thanks,

jerry
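For what it's worth, Bacula's configure tends to locate Qt through qmake and pkg-config on the search path rather than through QTDIR/QTINC alone, so one thing worth trying is the following sketch (paths assumed from the variables quoted above; adjust to your install):

```shell
# Sketch: put Qt's tools and pkg-config metadata where configure can see them.
export PATH=/usr/local/Qt/5.11.1/gcc_64/bin:$PATH
export PKG_CONFIG_PATH=/usr/local/Qt/5.11.1/gcc_64/lib/pkgconfig:$PKG_CONFIG_PATH
./configure --enable-bat
```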
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot


[Bacula-users] Problems with client connecting

2018-06-27 Thread Jerry Lowry
Hi,

I had to rebuild the domain a couple weeks ago. Everything works for Bacula
except for two systems.  One is a log server and the other is my work
station.  Both are getting the same errors. I am not sure why, because all
of the other systems in the domain are being backed up without any problems.
I have changed nothing in the configuration of Bacula for any of the
systems. Most started working after the domain was rebuilt.

I have started the FD on the client from the command prompt with the
following:
bacula-fd -d 200

which gives me a trace file with this output:
vigo-fd: filed/fd_plugins.c:943-0 plugin dir is NULL
vigo-fd: filed/filed.c:276-0 filed: listening on port 9102
vigo-fd: lib/bnet_server.c:112-0 Addresses host[ipv4:0.0.0.0:9102]
bacula-fd: filed/filed_conf.c:452-0 Inserting director res: vigo-mon
vigo-fd: lib/bsys.c:510-0 Could not open state file. sfd=-1 size=192:
ERR=No such file or directory

The working directory is there with the same permissions as before. What
would be preventing the state file from being created?  If I change the
debug level to a higher number, will that give me more information? (I am
assuming it will.) I have not tried that yet.
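The usual suspects for that state-file error can be checked quickly; the working-directory path below is a placeholder for whatever WorkingDirectory your bacula-fd.conf actually names, and the daemon user is assumed to be bacula:

```shell
# Sketch: confirm the FD's working directory exists and is writable.
ls -ld /opt/bacula/working
sudo -u bacula touch /opt/bacula/working/probe && echo writable
# On SELinux systems, check for denials against the daemon as well:
ausearch -m avc -ts recent 2>/dev/null | grep bacula
```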

pondering,

jerry


[Bacula-users] ongoing problems with v 9.0.3

2018-06-13 Thread Jerry Lowry
Hi,

Each time I have to change a disk during my offsite backups, I get errors
from the running job and it fails. My storage and pool definitions
follow below; they have not changed for the last 10 years and had been
working without any errors or problems until I upgraded to Bacula 9.0.3
and migrated the database to MariaDB 10.2.8-1.
If any other configuration files are needed I can add them.  I lose data
on each of these backups because of these errors.

Any help with this would be great,

thanks,
jerry

# Definition of file storage device
Storage {
  Name = midswap  # offsite disk
# Do not use "localhost" here
  #Address = kilchis# N.B. Use a fully qualified name here
  Address = kilchis  # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = ""
  Device = MidSwap
  Media Type = File
}
# File Pool definition
Pool {
  Name = OffsiteMid
  Pool Type = Copy
  Next Pool = OffsiteMid
  Storage = midswap
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 30 years # thirty years
  Maximum Volume Bytes = 1800G   # Limit Volume to disk size
  Maximum Volumes = 10   # Limit number of Volumes in Pool
}

---

emails sent at disk full message:


13-Jun 17:52 kilchis JobId 37853: Job BackupUsers.2018-06-12_23.47.07_32 is
waiting. Cannot find any appendable volumes.

Please use the "label" command to create a new Volume for:

Storage:  "MidSwap" (/MidSwap)

Pool: OffsiteMid

Media type:   File


13-Jun 17:52 kilchis JobId 37851: Fatal error: Out of freespace caused End
of Volume "homeMS-5" at 981661189531 on device "MidSwap" (/MidSwap). Write
of 64512 bytes got 10853.

13-Jun 17:52 kilchis JobId 37851: Elapsed time=02:59:41, Transfer
rate=67.40 M Bytes/second

12-Jun 23:47 kilchis-dir JobId 37850: Copying using JobId=37780
Job=BackupUsers.2018-06-09_20.05.00_18

13-Jun 14:52 kilchis-dir JobId 37850: Start Copying JobId 37850,
Job=CopyHMDiskToDisk.2018-06-12_23.47.07_29

13-Jun 14:52 kilchis-dir JobId 37850: Using Device "Home" to read.

13-Jun 14:52 kilchis JobId 37850: Ready to read from volume "home-6" on
File device "Home" (/engineering/Home).

13-Jun 14:52 kilchis JobId 37850: Forward spacing Volume "home-6" to
addr=824369125834

13-Jun 17:39 kilchis JobId 37850: End of Volume "home-6" at
addr=1503238496266 on device "Home" (/engineering/Home).

13-Jun 17:39 kilchis JobId 37850: Ready to read from volume "home-7" on
File device "Home" (/engineering/Home).

13-Jun 17:39 kilchis JobId 37850: Forward spacing Volume "home-7" to
addr=215

13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:649 Write error sending
65540 bytes to client:10.20.10.21:9103: ERR=Connection reset by peer

13-Jun 17:52 kilchis JobId 37850: Fatal error: read.c:277 Error sending to
File daemon. ERR=Connection reset by peer

13-Jun 17:52 kilchis JobId 37850: Elapsed time=02:59:42, Transfer
rate=67.39 M Bytes/second

13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:537 Socket has errors=1 on
call to client:10.20.10.21:9103

13-Jun 17:52 kilchis JobId 37850: Error: bsock.c:537 Socket has errors=1 on
call to client:10.20.10.21:9103

13-Jun 17:52 kilchis-dir JobId 37850: Error: Bacula kilchis-dir 9.0.6 (20Nov17):




Build OS:   x86_64-pc-linux-gnu redhat

  Prev Backup JobId:  37780

  Prev Backup Job:BackupUsers.2018-06-09_20.05.00_18

  New Backup JobId:   37851

  Current JobId:  37850

  Current Job:CopyHMDiskToDisk.2018-06-12_23.47.07_29

  Backup Level:   Full

  Client: kilchis-fd

  FileSet:"Mid Set" 2011-04-11 13:13:32

  Read Pool:  "HomePool" (From Command input)

  Read Storage:   "home" (From Job resource)

  Write Pool: "OffsiteMid" (From Command input)

  Write Storage:  "midswap" (From Command input)

  Catalog:"MyCatalog" (From Client resource)

  Start time: 13-Jun-2018 14:52:20

  End time:   13-Jun-2018 17:52:04

  Elapsed time:   2 hours 59 mins 44 secs

  Priority:   10

  SD Files Written:   1,784,587

  SD Bytes Written:   726,665,971,203 (726.6 GB)

  Rate:   67383.7 KB/s

  Volume name(s): homeMS-5

  Volume Session Id:  82

  Volume Session Time:1528397911

  Last Volume Bytes:  981,661,189,531 (981.6 GB)

  SD Errors:  3

  SD termination status:  Error

  Termination:*** Copying Error ***
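One hedged observation on the report above: the job died when the volume hit the physical end of the disk (981 GB written) well before the configured Maximum Volume Bytes of 1800G, and with Maximum Volumes = 10 the pool is allowed far more data than one disk can hold. If that is the situation, capping volumes so the whole set fits the disk lets Bacula mark a volume Full cleanly and move on, instead of failing mid-write. An illustrative Pool fragment (the sizes are assumptions, not a recommendation):

```
Pool {
  Name = OffsiteMid
  Pool Type = Copy
  Maximum Volume Bytes = 150G   # e.g. ~10 volumes fitting on a ~1.8T hot-swap disk
  Maximum Volumes = 10
}
```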

Re: [Bacula-users] problems with storage daemon

2018-01-15 Thread Jerry Lowry
Martin,

Yeah, I forgot to create a new output file for the second run.  The first
run the client info scrolled off the window buffer, so I ran it again.

Yes it does affect all of the clients, the only thing that runs is the
catalog backup.
Yes, 10.10.10.3 is the correct IP.
No firewall... well, I checked, and apparently the kernel update turned on
the firewall service.  I did not check because I have never had this happen
after an update before. I did not even think to check.
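On CentOS 7, that re-enabled firewall can be confirmed, and the standard Bacula ports opened, roughly as follows (assuming firewalld; narrow the range if only the SD port 9103 needs to be reachable on this host):

```shell
systemctl status firewalld
# Open the standard Bacula ports: Director 9101, FD 9102, SD 9103.
firewall-cmd --permanent --add-port=9101-9103/tcp
firewall-cmd --reload
```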

Thank you much,



On Mon, Jan 15, 2018 at 3:07 AM, Martin Simmons <mar...@lispworks.com>
wrote:

> The storage log shows no sign of any connection from the client (though it
> looks like you sent logs from two different jobs).
>
> Does the problem affect all clients?
>
> Is 10.10.10.3 the correct IP address of distress?
>
> Is distress running some firewall (iptables etc)?
>
> You could try
>
> telnet distress 9103
>
> on the client machine to see if that can connect (see
> https://technet.microsoft.com/en-us/library/cc771275(v=ws.10).aspx).
>
> __Martin
>
> >>>>> On Fri, 12 Jan 2018 10:32:10 -0800, Jerry Lowry said:
> >
> > Well, I recompiled the source for bacula 9.0.6 on the offending server
> but
> > it is still failing to work.  I have started each of the processes with
> the
> > debug flag set at 100 and am attaching the output for each system (
> > director, storage,client).  It looks to me like the client gets started
> but
> > is not connecting to the storage daemon.  Not sure why because I can ping
> > it from the client, it is not blocked by a firewall because it's on the
> > same subnet.  The windows firewall is turned off as well.  Hopefully,
> this
> > information will help figure out the problem.
> > centos 7.3
> > mariadb 10.2.12
> > bacula 9.0.6
> >
> > thanks
> >
> > On Thu, Jan 11, 2018 at 2:08 PM, Jerry Lowry <michaiah2...@gmail.com>
> wrote:
> >
> > > Dan,
> > > Yes, bacula did not change at all. This bacula server is both director
> and
> > > storage.  All of the clients are windows systems running the latest 64
> bit
> > > release of community client ( 5.2.10).
> > >
> > > I am getting ready to rebuild bacula based on the new kernel and hoping
> > > this will fix the problem.  I have already done a database check with
> the
> > > new version of MariaDB and the bacula database is fine for its point of
> > > view.
> > >
> > > jerry
> > >
> > > On Thu, Jan 11, 2018 at 11:51 AM, Dan Langille <d...@langille.org>
> wrote:
> > >
> > >> > On Jan 11, 2018, at 11:56 AM, Jerry Lowry <michaiah2...@gmail.com>
> > >> wrote:
> > >> >
> > >> > Hi,  Last weekend I ran through a bunch of updates for my backup
> > >> server.  Of the many updates it updated the kernel and MariaDB (
> 10.2.8 ->
> > >> 10.2.12 ). I have recently upgraded bacula to 9.0.6.  The problem is
> that
> > >> the following error:
> > >> >  10-Jan 20:36 distress-dir JobId 34724: Fatal error: Bad response to
> > >> Storage command: wanted 2000 OK storage
> > >> > , got 2902 Bad storage
> > >> >
> > >> > I also get errors saying that the client can not connect to the
> storage
> > >> server:
> > >> >
> > >> > 11-Jan 01:37 Denial-fd: BackupDenial.2018-01-11_00.05.01_44 Fatal
> > >> error: ../../lib/bnet.c:870 Unable to connect to Storage daemon on
> > >> distress.ACCOUNTING.EDT.LOCAL:9103. ERR=No error
> > >> > 11-Jan 01:38 distress-dir JobId 34726: Fatal error: Socket error on
> > >> Storage command: ERR=No data available
> > >> > 11-Jan 01:38 distress-dir JobId 34726: Fatal error: No Job status
> > >> returned from FD.
> > >> >
> > >> > nothing was changed in the config files, just the updates were
> > >> installed. I can ping the storage server from all of the clients.
> > >> >
> > >> > The jobs get to the point where they display "waiting for client"
> but
> > >> the jobs never complete.
> > >> >
> > >>
> > >> When you did those update, did you keep all bacula-dir and all
> bacula-sd
> > >> on the same exact version?
> > >>
> > >> The general rule for software versions is:
> > >>
> > >> bacula-dir = bacula-sd >= bacula-fd
> > >>
> > >> --
> > >> Dan Langille - BSDCan / PGCon
> > >> d...@langille.org
> > >>
> > >>
> > >>
> > >>
> > >>
> > >
>


Re: [Bacula-users] problems with storage daemon

2018-01-12 Thread Jerry Lowry
Well, I recompiled the source for bacula 9.0.6 on the offending server but
it is still failing to work.  I have started each of the processes with the
debug flag set at 100 and am attaching the output for each system (
director, storage,client).  It looks to me like the client gets started but
is not connecting to the storage daemon.  Not sure why because I can ping
it from the client, it is not blocked by a firewall because it's on the
same subnet.  The windows firewall is turned off as well.  Hopefully, this
information will help figure out the problem.
centos 7.3
mariadb 10.2.12
bacula 9.0.6

thanks

On Thu, Jan 11, 2018 at 2:08 PM, Jerry Lowry <michaiah2...@gmail.com> wrote:

> Dan,
> Yes, bacula did not change at all. This bacula server is both director and
> storage.  All of the clients are windows systems running the latest 64 bit
> release of community client ( 5.2.10).
>
> I am getting ready to rebuild bacula based on the new kernel and hoping
> this will fix the problem.  I have already done a database check with the
> new version of MariaDB and the bacula database is fine for its point of
> view.
>
> jerry
>
> On Thu, Jan 11, 2018 at 11:51 AM, Dan Langille <d...@langille.org> wrote:
>
>> > On Jan 11, 2018, at 11:56 AM, Jerry Lowry <michaiah2...@gmail.com>
>> wrote:
>> >
>> > Hi,  Last weekend I ran through a bunch of updates for my backup
>> server.  Of the many updates it updated the kernel and MariaDB ( 10.2.8 ->
>> 10.2.12 ). I have recently upgraded bacula to 9.0.6.  The problem is that
>> I get the following error:
>> >  10-Jan 20:36 distress-dir JobId 34724: Fatal error: Bad response to
>> Storage command: wanted 2000 OK storage
>> > , got 2902 Bad storage
>> >
>> > I also get errors saying that the client can not connect to the storage
>> server:
>> >
>> > 11-Jan 01:37 Denial-fd: BackupDenial.2018-01-11_00.05.01_44 Fatal
>> error: ../../lib/bnet.c:870 Unable to connect to Storage daemon on
>> distress.ACCOUNTING.EDT.LOCAL:9103. ERR=No error
>> > 11-Jan 01:38 distress-dir JobId 34726: Fatal error: Socket error on
>> Storage command: ERR=No data available
>> > 11-Jan 01:38 distress-dir JobId 34726: Fatal error: No Job status
>> returned from FD.
>> >
>> > nothing was changed in the config files, just the updates were
>> installed. I can ping the storage server from all of the clients.
>> >
>> > The jobs get to the point where they display "waiting for client"  but
>> the jobs never complete.
>> >
>>
>> When you did those update, did you keep all bacula-dir and all bacula-sd
>> on the same exact version?
>>
>> The general rule for software versions is:
>>
>> bacula-dir = bacula-sd >= bacula-fd
>>
>> --
>> Dan Langille - BSDCan / PGCon
>> d...@langille.org
>>
>>
>>
>>
>>
>


director.out
Description: Binary data


storage.out
Description: Binary data


client.out
Description: Binary data


[Bacula-users] problems with storage daemon

2018-01-11 Thread Jerry Lowry
Hi,  Last weekend I ran through a bunch of updates for my backup server.
Of the many updates it updated the kernel and MariaDB ( 10.2.8 -> 10.2.12
). I have recently upgraded bacula to 9.0.6.  The problem is that I get the
following error:
 10-Jan 20:36 distress-dir JobId 34724: Fatal error: Bad response to
Storage command: wanted 2000 OK storage
, got 2902 Bad storage

I also get errors saying that the client can not connect to the storage
server:

11-Jan 01:37 Denial-fd: BackupDenial.2018-01-11_00.05.01_44 Fatal error:
../../lib/bnet.c:870 Unable to connect to Storage daemon on
distress.ACCOUNTING.EDT.LOCAL:9103. ERR=No error
11-Jan 01:38 distress-dir JobId 34726: Fatal error: Socket error on Storage
command: ERR=No data available
11-Jan 01:38 distress-dir JobId 34726: Fatal error: No Job status returned
from FD.

nothing was changed in the config files, just the updates were installed. I
can ping the storage server from all of the clients.

The jobs get to the point where they display "waiting for client"  but the
jobs never complete.

thanks


[Bacula-users] updated kernel and other apps now bacula is not working

2018-01-09 Thread Jerry Lowry
Hi,
I ran  the update over the weekend and it updated the kernel and mariadb as
well as many others.  Bacula is not connecting to the storage daemon now.
Can someone tell me what the following error means:

Fatal error: Bad response to Storage command: wanted 2000 OK storage, got
2902 Bad storage
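No certainty here, but "2902 Bad storage" is usually the client reporting back to the Director that it could not reach or authenticate with the storage daemon it was pointed at, so connectivity and matching passwords between the Director's Storage resource and the SD's configuration are the first things to check. A sketch, with placeholder names:

```shell
# From the client: can it reach the SD's port at all?
telnet storageserver 9103
# Then, inside bconsole, confirm the SD itself answers the Director:
#   status storage=File
```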

thanks,
jerry


[Bacula-users] problems with copy job

2018-01-03 Thread Jerry Lowry
I spent a couple days over the last week upgrading a bacula server from
centos 6.9 to 7.3 and bacula from 9.0.3 to 9.0.6.
The backup jobs are working just fine, but the offsite job is failing with
the following error:

03-Jan 14:12 distress JobId 34620: Volume "dcBS-105" previously written,
moving to end of data.
03-Jan 14:12 distress JobId 34619: Warning: acquire.c:235 Read open File
device "Workstations" (/accounting/Workstations) Volume "dcBS-103" failed:
ERR=file_dev.c:190 Could not
open(/accounting/Workstations/dcBS-103,OPEN_READ_ONLY,0640): ERR=No such
file or directory

First, the volume dcBS-105 is not full; it is sitting at 79%.
The Workstations device does not have a volume "dcBS-103" in it, and never
has.  It is an offsite backup volume that is 100% full.

Why is bacula trying to open this volume when it is not in the Workstations
pool?

The job has been running with the following configuration for years.
Job {
Name = "CopyWKDiskToDisk"
Type = Copy
Level = Full
FileSet = "Bottom Set"
Client = distress-fd
Messages = Standard
Storage = workstations
Pool = WorkstationPool
Maximum Concurrent Jobs = 4
Selection Type = PoolUncopiedJobs
Selection Pattern = "DC-*"
}

Ran a mysqlcheck on the database and all is okay.

Any ideas as to why it is looking in the wrong pool?
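One way to see why the read side wants dcBS-103 is to ask the catalog which volumes the jobs being copied are actually recorded on; the JobId below is taken from the error messages above:

```
# In bconsole
list volumes pool=WorkstationPool
list jobmedia jobid=34619   # volumes holding the data of the job being read
```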


Re: [Bacula-users] Problems with version 9.0.3 failing since the upgrade

2017-12-01 Thread Jerry Lowry
list,

I am reiterating my problem, due to no response.  As a side note, the
storage server and director are running on the same server.  This happens
on a copy job from one disk to another disk on the same system.
It is a definite problem, as I am losing backup data!

bacula 9.0.3
mariadb 10.2.8
centos 6.9

I upgraded bacula from 5.2.13, which worked very well, to version 9.0.3.
Basically installed new version from source and then upgraded the database
structure.  The source was compiled with the following:

./configure --sbindir=/usr/bacula/bin --sysconfdir=/usr/bacula/bin
--with-pid-dir=/var/run/bacula --with-subsys-dir=/var/run/bacula/working
--enable-smartalloc --with-mysql --with-working-dir=/usr/bacula/bin/working
--with-dump-email=u...@domain.com --with-job-email=u...@domain.com
--with-smtp-host=smtp.googlemail.com --enable-bat

The problem started with my offsite backups. I get the following error:

13-Nov 01:18 distress JobId 33429: Fatal error: Socket error on Data
received command: ERR=No data available
13-Nov 01:18 distress JobId 33429: Fatal error: fd_cmds.c:157 Read
data not accepted

And the backup fails. Most of the time it is on a backup that spans
multiple disks.  So, I chatted with the ATTO raid support folks, and they
suggested that I use a different hotswap raid enclosure, because the one I
was using was not very reliable in their opinion.  Although this enclosure
had worked very reliably for well over 10 years without a problem!  So, I
moved the system to a completely new system (Supermicro with ATTO raid).
The problem still persists!  I have rebuilt the raid disk structure and
changed the workings of the backups. To no avail!
My backups worked flawlessly before the upgrade!  Since going to v9 I
cannot count how many offsite backups have failed to complete with this
type of error.  I also get

13-Nov 01:23 distress JobId 33430: Warning: mount.c:210 Open of File device "BottomSwap" (/BottomSwap) Volume "dcBS-104" failed: ERR=file_dev.c:190 Could not open(/BottomSwap/dcBS-104,OPEN_READ_WRITE,0640): ERR=No such file or directory
and
15-Nov 17:20 kilchis JobId 35825: Error: bsock.c:849 Read error from Storage daemon:kilchis:9103: ERR=Connection reset by peer
15-Nov 17:20 kilchis JobId 35825: Fatal error: append.c:271 Network error reading from FD. ERR=Connection reset by peer

All of this happens on one storage server (well, actually two storage
servers, but they service two different subnets/domains).  It all
started with the upgrade!

Please tell me that you have fixed this in a new version!

thanks


On Mon, Nov 27, 2017 at 7:45 AM, Jerry Lowry <michaiah2...@gmail.com> wrote:

> list,
>
> bacula 9.0.3
> mariadb 10.2.8
> centos 6.9
>
> I upgraded bacula from 5.2.13 which worked very well, to version 9.0.3.
> Basically installed new version from source and then upgraded the database
> structure.  The source was compiled with the following:
>
> ./configure --sbindir=/usr/bacula/bin --sysconfdir=/usr/bacula/bin
> --with-pid-dir=/var/run/bacula --with-subsys-dir=/var/run/bacula/working
> --enable-smartalloc --with-mysql --with-working-dir=/usr/bacula/bin/working
> --with-dump-email=u...@domain.com --with-job-email=u...@domain.com
> --with-smtp-host=smtp.googlemail.com --enable-bat
>
> The problem started with my offsite backups. I will get the following
> error:
>
> 13-Nov 01:18 distress JobId 33429: Fatal error: Socket error on Data received 
> command: ERR=No data available
> 13-Nov 01:18 distress JobId 33429: Fatal error: fd_cmds.c:157 Read data not 
> accepted
>
> And the backup fails. Most of the time it is on a backup that spans
> multiple disks.  So, I chatted with the ATTO raid support folks and they
> suggested that I use a different hotswap raid enclosure due to the one I
> was using was not very reliable in their opinion.  Although this enclosure
> had worked very reliably for well into 10 years without a problem!  So, I
> moved the system to a completely new system ( Supermicro with ATTO raid ).
> The problem still persists!  I have rebuilt the raid disk structure and
> changed the working of the backups. To no avail!
> My backups worked flawlessly before the upgrade!  Once going to v 9 I can
> not count how many offsite backups the have failed to complete with this
> type of error.  I also get
>
> 13-Nov 01:23 distress JobId 33430: Warning: mount.c:210 Open of File device 
> "BottomSwap" (/BottomSwap) Volume "dcBS-104" failed: ERR=file_dev.c:190 Could 
> not open(/BottomSwap/dcBS-104,OPEN_READ_WRITE,0640): ERR=No such file or 
> directory
> and
> 15-Nov 17:20 kilchis JobId 35825: Error: bsock.c:849 Read error from Storage 
> daemon:kilchis:9103: ERR=Connection reset by peer
> 15-Nov 17:20 kilchis JobId 35825: Fatal error: append.c:271 Network error 
> reading from FD. ERR=Connection 

[Bacula-users] Problems with version 9.0.3 failing since the upgrade

2017-11-27 Thread Jerry Lowry
list,

bacula 9.0.3
mariadb 10.2.8
centos 6.9

I upgraded Bacula from 5.2.13, which worked very well, to version 9.0.3.
I basically installed the new version from source and then upgraded the
database structure.  The source was compiled with the following:

./configure --sbindir=/usr/bacula/bin --sysconfdir=/usr/bacula/bin
--with-pid-dir=/var/run/bacula --with-subsys-dir=/var/run/bacula/working
--enable-smartalloc --with-mysql --with-working-dir=/usr/bacula/bin/working
--with-dump-email=u...@domain.com --with-job-email=u...@domain.com
--with-smtp-host=smtp.googlemail.com --enable-bat

The problem started with my offsite backups. I will get the following error:

13-Nov 01:18 distress JobId 33429: Fatal error: Socket error on Data received command: ERR=No data available
13-Nov 01:18 distress JobId 33429: Fatal error: fd_cmds.c:157 Read data not accepted

And the backup fails.  Most of the time it is on a backup that spans
multiple disks.  So I chatted with the ATTO RAID support folks, and they
suggested that I use a different hot-swap RAID enclosure because, in
their opinion, the one I was using was not very reliable.  Although this
enclosure had worked very reliably for well over 10 years without a
problem!  So I moved everything to a completely new system (Supermicro
with ATTO RAID).  The problem still persists!  I have rebuilt the RAID
disk structure and changed the workings of the backups, all to no avail!
My backups worked flawlessly before the upgrade!  Since going to v9 I
cannot count how many offsite backups have failed to complete with this
type of error.  I also get

13-Nov 01:23 distress JobId 33430: Warning: mount.c:210 Open of File device "BottomSwap" (/BottomSwap) Volume "dcBS-104" failed: ERR=file_dev.c:190 Could not open(/BottomSwap/dcBS-104,OPEN_READ_WRITE,0640): ERR=No such file or directory
and
15-Nov 17:20 kilchis JobId 35825: Error: bsock.c:849 Read error from Storage daemon:kilchis:9103: ERR=Connection reset by peer
15-Nov 17:20 kilchis JobId 35825: Fatal error: append.c:271 Network error reading from FD. ERR=Connection reset by peer

All of this happens on one storage server (well, actually two storage
servers, but they service two different subnets/domains).  It all
started with the upgrade!

Please tell me that you have fixed this in a new version!

thanks


Re: [Bacula-users] Incomplete backup - due to bsock error

2017-09-22 Thread Jerry Lowry
Yes, kilchis is a bona fide hardware server.  The only VMs I have are
test systems running on my desktop.

There are 2 copy jobs on this system.  This particular job is the one
that typically runs long enough that it needs a new volume during the
night.  The other one does so only if it runs late in the day and the
current volume does not have much space left on it.  The other daily
backup jobs wait until the copy job is finished, and there is nothing
else running on the system that uses the network except VNC traffic.
This problem happened two weeks in a row, and this last week it worked
just fine.  The one thing that is different is that I dropped all of the
current backup files and purged them from the DB, then recreated new
files to back up to.  I am just wondering if one of the files was
writing to a questionable sector on disk.  There is nothing in the logs,
and SMART does not give any details on that.

I think I will call it a fluke and keep a watch on it in the future.
Thanks!

On Fri, Sep 22, 2017 at 10:27 AM, Martin Simmons <mar...@lispworks.com>
wrote:

> That's odd -- the reading side looks normal to me until the error is
> detected.
>
> Also, "Connection reset by peer" doesn't normally occur when connected to
> the
> current machine.
>
> Is kilchis a real computer (not a VM)?
>
> Is this the only copy job that waits overnight for someone to label a new
> volume?
>
> Maybe something happens overnight on the system that causes networking to
> be
> disrupted in some subtle way, causing "Connection reset by peer" when the
> connection is closed cleanly?
>
> __Martin
>
>
> >>>>> On Tue, 19 Sep 2017 15:31:46 -0700, Jerry Lowry said:
> >
> > The reading side is the same system.  It is a copy job setup to backup
> > daily backups to the offsite backup disk.
> > The attachment is the bacula jobid 35202.
> >
> > jerry
> >
> > On Tue, Sep 19, 2017 at 10:08 AM, Martin Simmons <mar...@lispworks.com>
> > wrote:
> >
> > > The email below is from the writing side of the copy job and the
> message:
> > >
> > > 13-Sep 08:43 kilchis JobId 35203: Error: bsock.c:849 Read error from
> > > Storage daemon:kilchis:9103: ERR=Connection reset by peer
> > >
> > > shows that the connection to the reading side of the job was closed
> > > unexpectedly from the reading end.
> > >
> > > Do you have the corresponding email from the reading side?  It will
> have a
> > > different JobId (but should mention JobId 35203) and should start with
> > > something like "Using Device ... to read."
> > >
> > > __Martin
> > >
> > >
> > > >>>>> On Mon, 18 Sep 2017 13:42:19 -0700, Jerry Lowry said:
> > > >
> > > > Martin,
> > > > Here is the complete email that was sent just before the "Copy Error"
> > > > message:
> > > >
> > > > 12-Sep 15:09 kilchis-dir JobId 35203: Using Device "MidSwap" to
> write.
> > > > 12-Sep 15:09 kilchis JobId 35203: Volume "homeMS-200" previously
> > > written, moving to end of data.
> > > > 12-Sep 15:27 kilchis JobId 35203: End of medium on Volume
> "homeMS-200"
> > > Bytes=1,932,735,274,146 Blocks=29,959,317 at 12-Sep-2017 15:27.
> > > > 12-Sep 15:28 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.
> 09_50
> > > is waiting. Cannot find any appendable volumes.
> > > > Please use the "label" command to create a new Volume for:
> > > > Storage:  "MidSwap" (/MidSwap)
> > > > Pool: OffsiteMid
> > > > Media type:   File
> > > > 12-Sep 15:36 kilchis JobId 35203: Wrote label to prelabeled Volume
> > > "homeMS-201" on File device "MidSwap" (/MidSwap)
> > > > 12-Sep 15:36 kilchis JobId 35203: New volume "homeMS-201" mounted on
> > > device "MidSwap" (/MidSwap) at 12-Sep-2017 15:36.
> > > > 12-Sep 19:54 kilchis JobId 35203: End of medium on Volume
> "homeMS-201"
> > > Bytes=1,932,735,281,790 Blocks=29,959,315 at 12-Sep-2017 19:54.
> > > > 12-Sep 19:54 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.
> 09_50
> > > is waiting. Cannot find any appendable volumes.
> > > > Please use the "label" command to create a new Volume for:
> > > > Storage:  "MidSwap" (/MidSwap)
> > > > Pool: OffsiteMid
> > > > Media type:   File
> > > > 12-Sep 20:57 kilchis JobId 35203: Job B

Re: [Bacula-users] encrypting backup jobs

2017-09-21 Thread Jerry Lowry
Ah ha! I had forgotten that point.  The system it is backing up is
bigger still: two Xeon CPUs with 16 processors and 24 GB of memory.  I
don't run this concurrently with other backups on this system, as I had
to split them into 2 different backups due to the size.  It was reaching
5 TB of data, and at the time I didn't have the space to back up
multiple days of data.
I think it's time to watch some system resources on the FD.

Thanks,


On Thu, Sep 21, 2017 at 4:51 AM, Josh Fisher <jfis...@pvct.com> wrote:

>
> On 9/20/2017 2:15 PM, Jerry Lowry wrote:
>
>> Hi,
>>
>> I have just started encrypting my backup jobs.  I have one full backup
>> that went from completing in
>>   Scheduled time: 12-Aug-2017 20:05:00
>>   Start time: 13-Aug-2017 06:16:47
>>   End time:   13-Aug-2017 15:26:35
>>   Elapsed time:   9 hours 9 mins 48 secs
>> to running in
>>   Scheduled time: 16-Sep-2017 20:05:00
>>   Start time: 17-Sep-2017 20:08:34
>>   End time:   18-Sep-2017 18:45:36
>>   Elapsed time:   22 hours 37 mins 2 secs
>>
>> This job is backing up 2.3 TB of data.
>>
>> These are running over the weekend so I have not checked on system
>> performance during this time. (other things to do ). But I am wondering if
>> these jobs will use more memory / cpu ?  The system has 24 Gb of memory and
>> 1 Xeon processor with 8 cpu's. The disk drives are on a raid 5 using an
>> ATTO 6G raid card.
>>
>> What would cause this increase of duration?
>>
>
> The encryption is not performed by the bacula-dir machine, but rather by
> the bacula-fd machine. If you are not already doing so, running the jobs
> concurrently may help, since it appears that the clients are spending much
> of their time in encryption routines.
>
>
>


[Bacula-users] encrypting backup jobs

2017-09-20 Thread Jerry Lowry
Hi,

I have just started encrypting my backup jobs.  I have one full backup that
went from completing in
  Scheduled time: 12-Aug-2017 20:05:00
  Start time: 13-Aug-2017 06:16:47
  End time:   13-Aug-2017 15:26:35
  Elapsed time:   9 hours 9 mins 48 secs
to running in
  Scheduled time: 16-Sep-2017 20:05:00
  Start time: 17-Sep-2017 20:08:34
  End time:   18-Sep-2017 18:45:36
  Elapsed time:   22 hours 37 mins 2 secs

This job is backing up 2.3 TB of data.

These are running over the weekend, so I have not checked on system
performance during this time (other things to do).  But I am wondering
whether these jobs use more memory/CPU?  The system has 24 GB of memory
and one Xeon processor with 8 CPUs.  The disk drives are on a RAID 5
using an ATTO 6G RAID card.

What would cause this increase of duration?
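A rough throughput comparison from the two job reports above (a back-of-the-envelope sketch, treating 2.3 TB as decimal bytes and using elapsed time rather than start-to-end wall clock):

```shell
# Throughput of the 2.3 TB full backup, before and after enabling
# encryption (times taken from the job reports above).
BYTES=2300000000000                      # 2.3 TB
BEFORE=$(( 9*3600 +  9*60 + 48 ))        # 9h 9m 48s  -> 32988 s
AFTER=$(( 22*3600 + 37*60 +  2 ))        # 22h 37m 2s -> 81422 s
echo "before: $(( BYTES / BEFORE / 1000000 )) MB/s"   # roughly 69 MB/s
echo "after:  $(( BYTES / AFTER  / 1000000 )) MB/s"   # roughly 28 MB/s
```

A drop from roughly 69 MB/s to 28 MB/s, with the same disks and network, would be consistent with the job becoming CPU-bound somewhere, which is worth checking while the job runs.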

thanks,


Re: [Bacula-users] Incomplete backup - due to bsock error

2017-09-19 Thread Jerry Lowry
The reading side is the same system.  It is a copy job setup to backup
daily backups to the offsite backup disk.
The attachment is the bacula jobid 35202.

jerry

On Tue, Sep 19, 2017 at 10:08 AM, Martin Simmons <mar...@lispworks.com>
wrote:

> The email below is from the writing side of the copy job and the message:
>
> 13-Sep 08:43 kilchis JobId 35203: Error: bsock.c:849 Read error from
> Storage daemon:kilchis:9103: ERR=Connection reset by peer
>
> shows that the connection to the reading side of the job was closed
> unexpectedly from the reading end.
>
> Do you have the corresponding email from the reading side?  It will have a
> different JobId (but should mention JobId 35203) and should start with
> something like "Using Device ... to read."
>
> __Martin
>
>
> >>>>> On Mon, 18 Sep 2017 13:42:19 -0700, Jerry Lowry said:
> >
> > Martin,
> > Here is the complete email that was sent just before the "Copy Error"
> > message:
> >
> > 12-Sep 15:09 kilchis-dir JobId 35203: Using Device "MidSwap" to write.
> > 12-Sep 15:09 kilchis JobId 35203: Volume "homeMS-200" previously
> written, moving to end of data.
> > 12-Sep 15:27 kilchis JobId 35203: End of medium on Volume "homeMS-200"
> Bytes=1,932,735,274,146 Blocks=29,959,317 at 12-Sep-2017 15:27.
> > 12-Sep 15:28 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.09_50
> is waiting. Cannot find any appendable volumes.
> > Please use the "label" command to create a new Volume for:
> > Storage:  "MidSwap" (/MidSwap)
> > Pool: OffsiteMid
> > Media type:   File
> > 12-Sep 15:36 kilchis JobId 35203: Wrote label to prelabeled Volume
> "homeMS-201" on File device "MidSwap" (/MidSwap)
> > 12-Sep 15:36 kilchis JobId 35203: New volume "homeMS-201" mounted on
> device "MidSwap" (/MidSwap) at 12-Sep-2017 15:36.
> > 12-Sep 19:54 kilchis JobId 35203: End of medium on Volume "homeMS-201"
> Bytes=1,932,735,281,790 Blocks=29,959,315 at 12-Sep-2017 19:54.
> > 12-Sep 19:54 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.09_50
> is waiting. Cannot find any appendable volumes.
> > Please use the "label" command to create a new Volume for:
> > Storage:  "MidSwap" (/MidSwap)
> > Pool: OffsiteMid
> > Media type:   File
> > 12-Sep 20:57 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.09_50
> is waiting. Cannot find any appendable volumes.
> > Please use the "label" command to create a new Volume for:
> > Storage:  "MidSwap" (/MidSwap)
> > Pool: OffsiteMid
> > Media type:   File
> > 12-Sep 23:03 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.09_50
> is waiting. Cannot find any appendable volumes.
> > Please use the "label" command to create a new Volume for:
> > Storage:  "MidSwap" (/MidSwap)
> > Pool: OffsiteMid
> > Media type:   File
> > 13-Sep 03:15 kilchis JobId 35203: Job BackupUsers.2017-09-12_09.05.09_50
> is waiting. Cannot find any appendable volumes.
> > Please use the "label" command to create a new Volume for:
> > Storage:  "MidSwap" (/MidSwap)
> > Pool: OffsiteMid
> > Media type:   File
> > 13-Sep 08:23 kilchis JobId 35203: Wrote label to prelabeled Volume
> "homeMS-202" on File device "MidSwap" (/MidSwap)
> > 13-Sep 08:23 kilchis JobId 35203: New volume "homeMS-202" mounted on
> device "MidSwap" (/MidSwap) at 13-Sep-2017 08:23.
> > 13-Sep 08:43 kilchis JobId 35203: Error: bsock.c:849 Read error from
> Storage daemon:kilchis:9103: ERR=Connection reset by peer
> > 13-Sep 08:43 kilchis JobId 35203: Fatal error: append.c:271 Network
> error reading from FD. ERR=Connection reset by peer
> > 13-Sep 08:43 kilchis JobId 35203: Elapsed time=04:56:15, Transfer
> rate=125.6 M Bytes/second
> > 13-Sep 08:43 kilchis JobId 35203: Sending spooled attrs to the Director.
> Despooling 1,533,148,574 bytes ...
> >
> > I don't have the job log. Interestingly, I did not have any problems with
> > this or any other copy job before I upgraded.  I went from 5.2.13 to
> 9.0.3
> > of Bacula and latest version of MySql to Mariadb.  Not saying that this
> is
> > a problem, because I have 5 other copy jobs that work without error
> still.
> > This one just happens to be the biggest one.
> >
> > thanks,
> > jerry
> >
> > On Mon, Sep 18, 2017 at 7:55 AM, Martin Simmons <mar...@lis

Re: [Bacula-users] Incomplete backup - due to bsock error

2017-09-13 Thread Jerry Lowry
No, the only thing that shows in the messages file is that I changed the
disk 3 times as they filled up.

jerry

On Wed, Sep 13, 2017 at 10:51 AM, Josip Deanovic <djosip+n...@linuxpages.net
> wrote:

> On Wednesday 2017-09-13 09:35:07 Jerry Lowry wrote:
> > Kern,
> > My Offsite Backup just failed again on the same drive, different disk.
> > It failed with the same bsock error.  If the backup is working on the
> > same system using the copy function, how far out of the network stack
> > does it go.  My thinking is it does not get out of the application
> > layer.  Is this right?  Why would I get a bsock error?
> >
> > I have taken a look at the smart data for the disk and they seem to be
> > running okay. I am getting some sector relocation errors, would that
> > cause the bsock error during a remap?  This procedure has been running
> > flawlessly for many years ( except for human error ).  I am wondering
> > if I should delete the present disk files and let bacula recreate new
> > ones.
> >
> > thanks for your help!
>
>
> Did you get any disk/file system related error messages in the dmesg
> output?
>
> The same question goes for the system logs (usually /var/log/messages).
>
> --
> Josip Deanovic
>


Re: [Bacula-users] Incomplete backup - due to bsock error

2017-09-13 Thread Jerry Lowry
Kern,
My offsite backup just failed again on the same drive, different disk.
It failed with the same bsock error.  If the backup is working on the
same system using the copy function, how far out of the network stack
does it go?  My thinking is that it does not get out of the application
layer.  Is this right?  Why would I get a bsock error?

I have taken a look at the SMART data for the disk, and it seems to be
okay.  I am getting some sector relocation errors; would that cause the
bsock error during a remap?  This procedure has been running flawlessly
for many years (except for human error).  I am wondering if I should
delete the present disk files and let Bacula recreate new ones.

thanks for your help!

jerry


On Wed, Sep 6, 2017 at 11:26 PM, Kern Sibbald <k...@sibbald.com> wrote:

> Hello,
>
> If the job is marked as Incomplete in the catalog ("I" I think), then you
> can simply restart it and it should pickup where it left off.  If not you
> must run it again from the beginning.
>
> If you are switching devices when one is full during a Job, it is unlikely
> you can restore that job when it terminates. I recommend carefully testing
> restores on your system.
>
> Best regards,
>
> Kern
>
> On 09/06/2017 05:38 PM, Jerry Lowry wrote:
>
> List,
> I am running, bacula 9.0.3, Mariadb 12.2.8 on Centos 6.9.  I got notice
> last night that my Offsite backup failed due to a bsock error.  My offsite
> drives are attached to an ATTO raid card which gives me hot swap
> capability. This configuration works great as it allows me to hot swap a
> drive when it fills up with a new drive to continue with.  The problem is
> included below. The backup that I was doing is to the OffsiteMid drive
> which is mounted as /dev/sde. Is there a way to restart this backup job or
> am I left with an incomplete backup going forward.
>
> thanks for your help,
>
> jerry
>
>
> Sep  5 08:46:01 kilchis bat[4339]: bsock.c:147 Unable to connect to Director daemon on kilchis:9101. ERR=Connection refused
> Sep  5 10:37:20 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteTop
> Sep  5 10:39:06 kilchis kernel: scsi 5:0:1:0: Direct-Access ATTO Offsite Top00 0001 PQ: 0 ANSI: 5
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: Attached scsi generic sg6 type 0
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Write Protect is off
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
> Sep  5 10:39:06 kilchis kernel: sdd: unknown partition table
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
> Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Attached SCSI disk
> Sep  5 10:39:35 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
> Sep  5 10:39:35 kilchis kernel: sdd:
> Sep  5 10:44:54 kilchis kernel: EXT4-fs (sdd): mounted filesystem with ordered data mode. Opts:
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
> Sep  5 13:45:48 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteMid
> Sep  5 13:45:53 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteTop
> Sep  5 13:47:52 kilchis kernel: scsi 5:0:1:0: Direct-Access ATTO Offsite Mid00 0001 PQ: 0 ANSI: 5
> Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: Attached scsi generic sg6 type 0
> Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
> Sep  5 13:47:52 kil

[Bacula-users] Incomplete backup - due to bsock error

2017-09-06 Thread Jerry Lowry
List,
I am running Bacula 9.0.3 and MariaDB 10.2.8 on CentOS 6.9.  I got
notice last night that my offsite backup failed due to a bsock error.
My offsite drives are attached to an ATTO RAID card, which gives me
hot-swap capability.  This configuration works great, as it allows me to
swap in a new drive and continue when one fills up.  The problem is
included below.  The backup that I was doing is to the OffsiteMid drive,
which is mounted as /dev/sde.  Is there a way to restart this backup
job, or am I left with an incomplete backup going forward?

thanks for your help,

jerry


Sep  5 08:46:01 kilchis bat[4339]: bsock.c:147 Unable to connect to Director daemon on kilchis:9101. ERR=Connection refused
Sep  5 10:37:20 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteTop
Sep  5 10:39:06 kilchis kernel: scsi 5:0:1:0: Direct-Access ATTO Offsite Top00 0001 PQ: 0 ANSI: 5
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: Attached scsi generic sg6 type 0
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Write Protect is off
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 10:39:06 kilchis kernel: sdd: unknown partition table
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 10:39:06 kilchis kernel: sd 5:0:1:0: [sdd] Attached SCSI disk
Sep  5 10:39:35 kilchis kernel: sd 5:0:1:0: [sdd] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 10:39:35 kilchis kernel: sdd:
Sep  5 10:44:54 kilchis kernel: EXT4-fs (sdd): mounted filesystem with ordered data mode. Opts:
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 11:02:38 kilchis bacula-dir[4373]: bsock.c:537 Socket has errors=1 on call to client:10.20.10.21:9101
Sep  5 13:45:48 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteMid
Sep  5 13:45:53 kilchis attocfgd: [CRIT] [ExpressSAS R608,50:01:08:60:00:57:3d:c0] [FW] RAID Group state now Offline: OffsiteTop
Sep  5 13:47:52 kilchis kernel: scsi 5:0:1:0: Direct-Access ATTO Offsite Mid00 0001 PQ: 0 ANSI: 5
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: Attached scsi generic sg6 type 0
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] Write Protect is off
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 13:47:52 kilchis kernel: sde: unknown partition table
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] 488366336 4096-byte logical blocks: (2.00 TB/1.81 TiB)
Sep  5 13:47:52 kilchis kernel: sd 5:0:1:0: [sde] Attached SCSI disk
Sep  5 13:48:01 kilchis kernel: EXT4-fs error (device sdd): __ext4_get_inode_loc: unable to read inode block - inode=2, block=1057
Sep  5 13:48:01 kilchis kernel: Buffer I/O error on device sdd, logical block 0
Sep  5 13:48:01 kilchis kernel: lost page write due to I/O error on sdd
Sep  5 13:48:01 kilchis kernel: EXT4-fs error (device sdd) in ext4_reserve_inode_write: IO failure
Sep  5 13:48:01 kilchis kernel: EXT4-fs (sdd): previous I/O error to superblock detected
Sep  5 13:48:01 kilchis kernel: Buffer I/O error on device sdd, logical block 0
Sep  5 13:48:01 kilchis kernel: lost page write due to I/O error on sdd
Sep  5 13:48:06 kilchis kernel: Aborting journal on device sdd-8.
Sep  5 13:48:06 kilchis kernel: Buffer I/O error on device sdd, logical block 243826688
Sep  5 13:48:06 kilchis kernel: lost page write due to I/O error on sdd
Sep  5 13:48:06 kilchis kernel: JBD2: I/O error detected when updating journal superblock for sdd-8.
Sep  5 13:48:08 kilchis kernel: EXT4-fs error (device sdd): ext4_put_super: Couldn't clean up the journal
Sep  5 13:48:08 kilchis kernel: EXT4-fs (sdd): Remounting filesystem read-only
Sep  5 13:48:44 kilchis kernel: sd 5:0:1:0: [sde] 488366336 4096-byte logical blocks: (2.00

Re: [Bacula-users] [Non-DoD Source] Re: database upgrade problem

2017-09-01 Thread Jerry Lowry
Okay, on the one system that I upgraded to 9.0.3, I also upgraded the
database engine to MariaDB 10.2.8.  I was still getting the error.  I
then went back to the July email and saw that they removed
STRICT_TRANS_TABLES
<https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_strict_trans_tables>
from the sql_modes.  I removed this and, yes, it does work now.

How will this affect Bacula as a whole, though?  Will it cause problems
down the road?
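For reference, the check and change can be sketched in SQL (run in the mysql/MariaDB client as an admin user; the mode list below is an example only, so keep whatever other modes your own SELECT reports, minus STRICT_TRANS_TABLES):

```sql
-- See what the running server is actually using:
SELECT @@GLOBAL.sql_mode;

-- Example: re-set the mode list with STRICT_TRANS_TABLES dropped,
-- so INSERTs that omit StartTime fall back to the column default.
SET GLOBAL sql_mode = 'NO_ENGINE_SUBSTITUTION';
```

SET GLOBAL only lasts until the server restarts; to make it permanent, set sql_mode under the [mysqld] section of my.cnf as well.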

thanks,

Jerry

Now on to the other system, half way to a weekend :)

On Fri, Sep 1, 2017 at 5:15 PM, Phil Stracchino <ph...@caerllewys.net>
wrote:

> On 09/01/17 19:10, Jerry Lowry wrote:
> >
> > I am running 5.6.28 on one of my updated servers.  Once I updated the
> > bacula database and got it all running.  I tried to start a job and it
> > failed with this error.  Do you know if it is 5.6 or 5.6.x where the
> > problem started?  There are no sql_modes for date specified any where in
> > the configuration.
>
> Jerry, I *thought* it was 5.7 but my memory could be in error.
>
> If in doubt, check what the global SQL_MODE is *in your running DB*.  If
> you haven't got it set explicitly in your configuration, you're running
> on the compiled-in default.
>
>
> --
>   Phil Stracchino
>   Babylon Communications
>   ph...@caerllewys.net
>   p...@co.ordinate.org
>   Landline: +1.603.293.8485
>   Mobile:   +1.603.998.6958
>


[Bacula-users] database upgrade problem

2017-09-01 Thread Jerry Lowry
Hi,
I finally got my backup server upgraded to 9.0.3.  I have run the
database update, and it succeeded without any errors.  But I get this
error any time I start a job:

01-Sep 00:20 kilchis-dir JobId 0: Fatal error: sql_create.c:84 Create DB
Job record INSERT INTO Job
(Job,Name,Type,Level,JobStatus,SchedTime,JobTDate,ClientId,Comment) VALUES
('BackupCatalog.2017-09-01_00.20.18_03','BackupCatalog','B','F','C','2017-09-01
00:20:16',1504250416,15,'') failed. ERR=Field 'StartTime' doesn't have a
default value

Does this mean the update did not work?

thanks for the help!

jerry
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bat install questions

2017-08-31 Thread Jerry Lowry
That is correct, it is not BAT.

To solve this, check out this issue:
https://github.com/marazmista/radeon-profile/issues/8

thanks,

jerry
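
The workaround referenced in that radeon-profile issue amounts to telling Qt
not to use the X11 MIT-SHM extension; a hedged sketch (QT_X11_NO_MITSHM is
honored by X11 builds of Qt):

```shell
# Disable MIT-SHM shared-memory pixmaps for this session, then launch bat.
export QT_X11_NO_MITSHM=1
# ./bat   (run from the Bacula bin directory as before)
echo "QT_X11_NO_MITSHM=$QT_X11_NO_MITSHM"
```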

On Thu, Aug 31, 2017 at 8:17 AM, Phil Stracchino <ph...@caerllewys.net>
wrote:

> On 08/31/17 10:37, Jerry Lowry wrote:
> > List,
> > So I installed qt as Heitor suggested.  compiles just fine.  But when I
> > run bat I get a blank screen with the following output on the terminal
> > window:
> > [root@tech bin]# ./bat
> > X Error: BadAccess (attempt to access private resource denied) 10
> >   Extension:131 (MIT-SHM)
> >   Minor opcode: 1 (X_ShmAttach)
> >   Resource id:  0x15a
> > X Error: BadShmSeg (invalid shared segment parameter) 128
> >   Extension:131 (MIT-SHM)
> >   Minor opcode: 5 (X_ShmCreatePixmap)
> >   Resource id:  0x269
>
>
> You have an X11 permissions/forwarding/authorization problem of some
> kind.  This isn't BAT's fault.  You need to make sure X11 is working
> correctly before you try to run BAT.
>
>
>
> --
>   Phil Stracchino
>   Babylon Communications
>   ph...@caerllewys.net
>   p...@co.ordinate.org
>   Landline: +1.603.293.8485
>   Mobile:   +1.603.998.6958
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bat install questions

2017-08-31 Thread Jerry Lowry
List,
So I installed qt as Heitor suggested.  It compiles just fine.  But when I run
bat I get a blank screen with the following output on the terminal window:
[root@tech bin]# ./bat
X Error: BadAccess (attempt to access private resource denied) 10
  Extension:131 (MIT-SHM)
  Minor opcode: 1 (X_ShmAttach)
  Resource id:  0x15a
X Error: BadShmSeg (invalid shared segment parameter) 128
  Extension:131 (MIT-SHM)
  Minor opcode: 5 (X_ShmCreatePixmap)
  Resource id:  0x269
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2a00012

 and it goes on for pages

thanks,

jerry


On Wed, Aug 30, 2017 at 12:30 AM, Simone Caronni 
wrote:

> On Wed, Aug 30, 2017 at 9:28 AM, Simone Caronni 
> wrote:
>
>> Hi,
>>
>> you might as well use pre-built binaries:
>>
>
> Sorry, keyboard error :/
> Damn gmail.
>
> https://copr.fedorainfracloud.org/coprs/slaanesh/Bacula/
>
> Those are exactly the same builds in Fedora that will eventually go into
> RHEL. I make sure that every release builds on all supported Fedora and
> CentOS/RHEL releases.
> Upgrades in place from bundled packages in the distribution are supported,
> just make sure to update the database.
>
> Regards,
> --Simone
>
>
>
> --
> You cannot discover new oceans unless you have the courage to lose sight
> of the shore (R. W. Emerson).
>
> http://xkcd.com/229/
> http://negativo17.org/
>
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bat install questions

2017-08-29 Thread Jerry Lowry
List,

I am still trying to resolve my 9.0.3 install from a week or so ago.  I
have had to rebuild the server due to numerous problems.  I am using the
same configure command that I have in the past.
CentOS 7 build 1611, mariadb 5.5.52

./configure --sbindir=/usr/bacula/bin --sysconfdir=/usr/bacula/bin
--with-pid-dir=/var/run/bacula --with-subsys-dir=/var/run/bacula/working
--enable-smartalloc --with-mysql --with-working-dir=/usr/bacula/bin/working
--with-dump-email=email.com --with-job-email=email.com --with-smtp-host=
email.com --enable-bat

I have the depkgs-qt installed in the bacula-9.0.3 directory and have the
qt path set up like so:

QTDIR=/usr/local/bacula-9.0.3/depkgs-qt/qt4
OLDPWD=/usr/local/bacula-9.0.3/depkgs-qt
QTINC=/usr/local/bacula-9.0.3/depkgs-qt/qt4/include
PATH=/usr/local/bacula-9.0.3/depkgs-qt/qt4/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
PWD=/usr/local/bacula-9.0.3
QTLIB=/usr/local/bacula-9.0.3/depkgs-qt/qt4/lib
PKG_CONFIG_PATH=/usr/local/bacula-9.0.3/depkgs-qt/qt4/lib/pkgconfig
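
The variables above can be set for a build shell in one place; a hedged
sketch mirroring the listed values (paths copied from the environment
listing, adjust to your tree):

```shell
# Point the build at the bundled depkgs-qt qt4 tree so ./configure and
# qmake resolve the right Qt.
QTDIR=/usr/local/bacula-9.0.3/depkgs-qt/qt4
export QTDIR
export QTINC="$QTDIR/include"
export QTLIB="$QTDIR/lib"
export PKG_CONFIG_PATH="$QTDIR/lib/pkgconfig"
export PATH="$QTDIR/bin:$PATH"
echo "qmake resolved from: $QTDIR/bin"
```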

QT finished its make without any errors. But this is my configure output:
checking for true... /bin/true
checking for false... /bin/false
configuring for Bacula 9.0.3 (08 August 2017)
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether gcc and cc understand -c and -o together... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc needs -traditional... no
checking for g++... /bin/g++
checking for a BSD-compatible install... /bin/install -c
checking for mv... /bin/mv
checking for rm... /bin/rm
checking for cp... /bin/cp
checking for sed... /bin/sed
checking for echo... /bin/echo
checking for cmp... /bin/cmp
checking for tbl... /bin/tbl
checking for ar... /bin/ar
checking for openssl... /bin/openssl
checking for mtx... mtx
checking for dd... /bin/dd
checking for mkisofs... /bin/mkisofs
checking for python... /bin/python
checking for growisofs... /bin/growisofs
checking for dvd+rw-mediainfo... /bin/dvd+rw-mediainfo
checking for dvd+rw-format... /bin/dvd+rw-format
checking for pkg-config... /bin/pkg-config
checking for qmake... /usr/local/bacula-9.0.3/depkgs-qt/qt4/bin/qmake
checking for gmake... /bin/gmake
checking for pidof... /sbin/pidof
checking for gawk... gawk
checking for gawk... /bin/gawk
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking how to print strings... printf
checking for a sed that does not truncate output... (cached) /bin/sed
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /bin/ld
checking if the linker (/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /bin/nm -B
checking the name lister (/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking how to convert x86_64-pc-linux-gnu file names to
x86_64-pc-linux-gnu format... func_convert_file_noop
checking how to convert x86_64-pc-linux-gnu file names to toolchain
format... func_convert_file_noop
checking for /bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for dlltool... no
checking how to associate runtime and link libraries... printf %s\n
checking for archiver @FILE support... @
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /bin/nm -B output from gcc object... ok
checking for sysroot... no
checking for mt... no
checking if : is a manifest tool... no
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/bin/ld -m elf_x86_64) supports shared

Re: [Bacula-users] version 9.0.3 bconsole looking for old libbac library

2017-08-17 Thread Jerry Lowry
Kern, et al,
I would concur with you but this is a brand new install, on a brand new
system, with centos 7.3.  Never had bacula installed on it before.  The
only thing that came from a previous version is the mysqldump file, and it
upgraded just fine.

I am in the midst of rebuilding all of bacula, I will let you know what
happens.

thanks,

jerry

On Thu, Aug 17, 2017 at 9:56 AM, Kern Sibbald <k...@sibbald.com> wrote:

> Hello,
>
> You probably have multiple bconsoles from multiple Bacula versions
> loaded.  This is most likely the cause of your problems.  The best solution
> is to find and remove all bconsoles that are not the one installed for
> 9.0.3.
>
> Best regards,
> kern
>
>
>
> On 17/08/2017 17:25, Jerry Lowry wrote:
>
> Hi list,
> I am getting ready to upgrade from 5.2.6 to 9.0.3.  I have installed 9.0.3
> from source on a test system and moved the bacula database over using
> mysqldump. Upgraded the database to version 16 and set all the
> permissions.  Everything looks good until I run bconsole.
> When I run bconsole I get the following error:
> bconsole: error while loading shared libraries: libbaccfg-5.2.13.so:
> cannot open shared object file: No such file or directory
>
> when I look at bconsole with ldd, it sees this:
> linux-vdso.so.1 =>  (0x7ffce948c000)
> libtinfo.so.5 => /usr/lib64/libtinfo.so.5 (0x7fea49537000)
> libbaccfg-9.0.3.so => /usr/lib64/libbaccfg-9.0.3.so (0x7fea49327000)
> libbac-9.0.3.so => /usr/lib64/libbac-9.0.3.so (0x7fea490c)
> libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7fea48ea4000)
> libdl.so.2 => /usr/lib64/libdl.so.2 (0x7fea48ca)
> libssl.so.10 => /usr/lib64/libssl.so.10 (0x7fea48a31000)
> libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x7fea48647000)
> libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x7fea4833e000)
> libm.so.6 => /usr/lib64/libm.so.6 (0x7fea4803b000)
> libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x7fea47e25000)
> libc.so.6 => /usr/lib64/libc.so.6 (0x7fea47a64000)
> libcap.so.2 => /usr/lib64/libcap.so.2 (0x7fea4785e000)
> libz.so.1 => /usr/lib64/libz.so.1 (0x7fea47648000)
> /lib64/ld-linux-x86-64.so.2 (0x7fea49762000)
> libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2
> (0x7fea473fa000)
> libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x7fea47112000)
> libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x7fea46f0e000)
> libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x7fea46cdc000)
> libattr.so.1 => /usr/lib64/libattr.so.1 (0x7fea46ad6000)
> libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0
> (0x7fea468c7000)
> libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x7fea466c3000)
> libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x7fea464a8000)
> libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x7fea46281000)
> libpcre.so.1 => /usr/lib64/libpcre.so.1 (0x7fea4601f000)
>
> Why is bconsole looking for an older version of this library?
>
> thanks for the help!
>
> jerry
>
>
>
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
>
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] version 9.0.3 bconsole looking for old libbac library

2017-08-17 Thread Jerry Lowry
Hi list,
I am getting ready to upgrade from 5.2.6 to 9.0.3.  I have installed 9.0.3
from source on a test system and moved the bacula database over using
mysqldump. Upgraded the database to version 16 and set all the
permissions.  Everything looks good until I run bconsole.
When I run bconsole I get the following error:
bconsole: error while loading shared libraries: libbaccfg-5.2.13.so: cannot
open shared object file: No such file or directory

when I look at bconsole with ldd, it sees this:
linux-vdso.so.1 =>  (0x7ffce948c000)
libtinfo.so.5 => /usr/lib64/libtinfo.so.5 (0x7fea49537000)
libbaccfg-9.0.3.so => /usr/lib64/libbaccfg-9.0.3.so (0x7fea49327000)
libbac-9.0.3.so => /usr/lib64/libbac-9.0.3.so (0x7fea490c)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7fea48ea4000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x7fea48ca)
libssl.so.10 => /usr/lib64/libssl.so.10 (0x7fea48a31000)
libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x7fea48647000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x7fea4833e000)
libm.so.6 => /usr/lib64/libm.so.6 (0x7fea4803b000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x7fea47e25000)
libc.so.6 => /usr/lib64/libc.so.6 (0x7fea47a64000)
libcap.so.2 => /usr/lib64/libcap.so.2 (0x7fea4785e000)
libz.so.1 => /usr/lib64/libz.so.1 (0x7fea47648000)
/lib64/ld-linux-x86-64.so.2 (0x7fea49762000)
libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2
(0x7fea473fa000)
libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x7fea47112000)
libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x7fea46f0e000)
libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x7fea46cdc000)
libattr.so.1 => /usr/lib64/libattr.so.1 (0x7fea46ad6000)
libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0
(0x7fea468c7000)
libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x7fea466c3000)
libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x7fea464a8000)
libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x7fea46281000)
libpcre.so.1 => /usr/lib64/libpcre.so.1 (0x7fea4601f000)

Why is bconsole looking for an older version of this library?

thanks for the help!

jerry
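
One way to act on Kern's later suggestion is to confirm which bconsole is
actually being run and whether stale 5.2.x libraries are still on disk; a
hedged sketch (paths are typical Linux locations, adjust to the install):

```shell
# Confirm which bconsole the shell finds and which Bacula libraries it
# resolves at load time.
b=$(command -v bconsole || true)
if [ -n "$b" ]; then
    echo "bconsole at: $b"
    ldd "$b" | grep -i libbac || true
else
    echo "no bconsole on PATH"
fi
# Leftover 5.2.x libraries an old binary might still reference:
find /usr/lib /usr/lib64 /usr/local/lib -name 'libbac*5.2*' 2>/dev/null || true
# After deleting stale copies, refresh the linker cache with: ldconfig
```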
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] version of qt

2016-09-16 Thread Jerry Lowry
I installed bacula version 5.2.13 and I am having trouble getting bat
to compile.  What version of qt is this version of bacula looking for?

thanks
jerry
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] compile of bacula 5.2.13 failing

2016-05-21 Thread Jerry Lowry
Well, I went back and checked.  zlib-devel is installed.  When I look at
the config.log file it does show that zlib is included:
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/u
sr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla
--enable-bootstr
ap --enable-shared --enable-threads=posix --enable-checking=release
*--with-system-zlib* --enable-__cxa_atexit --disable-libunwind-exceptions
--enable-gnu-unique-
object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
--enable-java-awt=
gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre
--en
able-libgcj-multifile --enable-java-maintainer-mode
--with-ecj-jar=/usr/share/ja
va/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog
--with-tun
e=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix

But I don't see a config parameter that lets me set it.

The basis for this test system is to build a base level bacula install.  I
then want to look at the database to see if there is actually a table
RestoreObject. I have two working systems, but one fails doing the
mysqldump of the catalog because it says this table is not found.  All that
shows up in the directory for this table is the .frm and .ibd files.  If I
try to run the mysqlcheck on the database all the tables are okay except
this table.
What is this table used for?

Can you help me figure this out?

thanks.

On Sat, May 21, 2016 at 12:21 AM, Davide Franco <bacula-...@dflc.ch> wrote:

> Hi Jerry,
>
> Install zlib-devel package then run again the combo ./configure && make
> ...everything should be fine after this.
>
> Regards
>
> Davide
> On May 21, 2016 09:18, "Kern Sibbald" <k...@sibbald.com> wrote:
>
>> Hello,
>>
>> Most likely you have something out of sync with your ./configure.
>> According to the config.out file lzo is not enabled, but the link is trying
>> to include it.  So, you probably should redo your ./configure and do a make
>> again, and it will probably work.  On the other hand if you want lzo
>> compression (possibly used for the mysql client interface), then you must
>> make sure the lzo development libraries are loaded on your system.
>>
>> Best regards,
>> Kern
>>
>>
>> On 05/20/2016 06:33 PM, Jerry Lowry wrote:
>>
>> List,
>>  I am setting up a test server for bacula and I am getting the following
>> error:
>>
>> Making libbaccats-mysql.la...
>> /usr/local/bacula-5.2.13/libtool --silent --tag=cxx --mode=link
>> /usr/bin/g++ -D_BDB_PRIV_INTERFACE_  -o libbaccats-mysql.ls mysql.lo
>> -export-dynamic -rpath /usr/lib64 -release 5.2.13 -soname
>> libbaccats-5.2.13.so -R /usr/lib64/mysql -L/usr/lib64/mysql
>> -lmysqlclient_r -lz
>> /usr/bin/ld: cannot find -lz
>> collect2: ld returned 1 exit status
>>
>> I have used this same configure script on a couple other systems that are
>> running in production right now without any problems.
>>
>> Here is the config.out file.
>>
>> Configuration on Mon May 16 15:17:12 PDT 2016:
>>
>>Host: x86_64-unknown-linux-gnu -- redhat
>>Bacula version: Bacula 5.2.13 (19 February 2013)
>>Source code location: .
>>Install binaries: /usr/bacula/bin
>>Install libraries: /usr/lib64
>>Install config files: /usr/bacula/bin
>>Scripts directory: /usr/bacula/bin
>>Archive directory: /tmp
>>Working directory: /usr/bacula/bin/working
>>PID directory: /var/run/bacula
>>Subsys directory: /var/run/bacula/working
>>Man directory: ${datarootdir}/man
>>Data directory: /usr/share
>>Plugin directory: /usr/lib64
>>C Compiler: gcc 4.4.7
>>C++ Compiler: /usr/bin/g++ 4.4.7
>>Compiler flags:  -g -O2 -Wall -fno-strict-aliasing
>> -fno-exceptions -fno-rtti
>>Linker flags:
>>Libraries: -lpthread -ldl -ldl
>>Statically Linked Tools:  no
>>Statically Linked FD: no
>>Statically Linked SD: no
>>Statically Linked DIR:no
>>Statically Linked CONS:   no
>>Database backends: MySQL
>>Database port:
>>Database name: bacula
>>Database user: bacula
>>
>>Job Output Email: n...@domain.com
>>Traceback Email: n...@domain.com
>>SMTP Host Address: mail.domain.com
>>
>>Director Port: 9101
>>File daemon Port: 9102
>>Storage daemon Port: 

[Bacula-users] compile of bacula 5.2.13 failing

2016-05-20 Thread Jerry Lowry
List,
 I am setting up a test server for bacula and I am getting the following
error:

Making libbaccats-mysql.la...
/usr/local/bacula-5.2.13/libtool --silent --tag=cxx --mode=link
/usr/bin/g++ -D_BDB_PRIV_INTERFACE_  -o libbaccats-mysql.ls mysql.lo
-export-dynamic -rpath /usr/lib64 -release 5.2.13 -soname
libbaccats-5.2.13.so -R /usr/lib64/mysql -L/usr/lib64/mysql -lmysqlclient_r
-lz
/usr/bin/ld: cannot find -lz
collect2: ld returned 1 exit status

I have used this same configure script on a couple other systems that are
running in production right now without any problems.

Here is the config.out file.

Configuration on Mon May 16 15:17:12 PDT 2016:

   Host: x86_64-unknown-linux-gnu -- redhat
   Bacula version: Bacula 5.2.13 (19 February 2013)
   Source code location: .
   Install binaries: /usr/bacula/bin
   Install libraries: /usr/lib64
   Install config files: /usr/bacula/bin
   Scripts directory: /usr/bacula/bin
   Archive directory: /tmp
   Working directory: /usr/bacula/bin/working
   PID directory: /var/run/bacula
   Subsys directory: /var/run/bacula/working
   Man directory: ${datarootdir}/man
   Data directory: /usr/share
   Plugin directory: /usr/lib64
   C Compiler: gcc 4.4.7
   C++ Compiler: /usr/bin/g++ 4.4.7
   Compiler flags:  -g -O2 -Wall -fno-strict-aliasing
-fno-exceptions -fno-rtti
   Linker flags:
   Libraries: -lpthread -ldl -ldl
   Statically Linked Tools:  no
   Statically Linked FD: no
   Statically Linked SD: no
   Statically Linked DIR:no
   Statically Linked CONS:   no
   Database backends: MySQL
   Database port:
   Database name: bacula
   Database user: bacula

   Job Output Email: n...@domain.com
   Traceback Email: n...@domain.com
   SMTP Host Address: mail.domain.com

   Director Port: 9101
   File daemon Port: 9102
   Storage daemon Port:  9103

   Director User:
   Director Group:
   Storage Daemon User:
   Storage DaemonGroup:
   File Daemon User:
   File Daemon Group:

   Large file support: yes
   Bacula conio support: no
   readline support: no
   TCP Wrappers support: no
   TLS support:  no
   Encryption support: no
   ZLIB support: no
   LZO support:  no
   enable-smartalloc: yes
   enable-lockmgr: no
   bat support:  no
   enable-gnome: no
   enable-bwx-console: no
   enable-tray-monitor:  no
   client-only:  no
   build-dird: yes
   build-stored: yes
   Plugin support: yes
   AFS support:  no
   ACL support:  no
   XATTR support: yes
   Python support: no
   systemd support: no
   Batch insert enabled: None

  What am I missing?

thanks
--
Mobile security can be enabling, not merely restricting. Employees who
bring their own devices (BYOD) to work are irked by the imposition of MDM
restrictions. Mobile Device Manager Plus allows you to control only the
apps on BYO-devices by containerizing them, leaving personal data untouched!
https://ad.doubleclick.net/ddm/clk/304595813;131938128;j
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Catalog backup failing after system restore

2016-05-16 Thread Jerry Lowry
thanks for the suggestion.
Checked the database, bacula has all permissions granted.  Ran that script
before starting bacula.
I do have another server that is running the same configuration.  It shows
the same table in the database, and it cannot be accessed there either.  The
difference is that the catalog backup works on that system.

thanks

On Sun, May 15, 2016 at 11:38 PM, Davide Franco <bacula-...@dflc.ch> wrote:

> Hi,
>
> Have you tried to run the grant permissions script on your database?
>
> If the table does exist, it sounds to me a permissions issue.
>
> Good luck with recovering your catalog.
>
> Regards
>
> Davide
>
> On May 16, 2016 05:45, "Jerry Lowry" <michaiah2...@gmail.com> wrote:
> >
> > Hi all,
> >
> > I have just finished recovering from a system disk boot failure on one of my
> backup servers.  It is running Centos (6.6/6.7 now); Mysql 5.6.28/5.6.30
> (now) and Bacula 5.2.13.
> > Fortunately the system disk did not die, just some boot problem and I
> could tell that it was not spinning like it should.  So I was able to pull
> the Bacula database directory from the old disk. I went ahead and
> reinstalled all of the products and then was able to get Mysql configured the
> same way for Bacula.  All of the system backups are working without any
> problems, but the catalog backup is failing with the following errors:
> > 15-May 13:25 kilchis-dir JobId 30402: shell command: run BeforeJob
> "/usr/bacula/bin/make_catalog_backup.pl MyCatalog"
> > 15-May 13:25 kilchis-dir JobId 30402: BeforeJob: Warning: Using unique
> option prefix database instead of databases is deprecated and will be
> removed in a future release. Please use the full name instead.
> > 15-May 13:25 kilchis-dir JobId 30402: BeforeJob: Warning: mysqldump:
> ignoring option '--databases' due to invalid value 'bacula'
> > 15-May 13:25 kilchis-dir JobId 30402: BeforeJob: mysqldump: Got error:
> 1146: Table 'bacula.RestoreObject' doesn't exist when using LOCK TABLES
> > 15-May 13:41 kilchis-dir JobId 30402: BeforeJob: Error: Couldn't read
> status information for table RestoreObject ()
> > 15-May 13:41 kilchis-dir JobId 30402: BeforeJob: mysqldump: Couldn't
> execute 'show create table `RestoreObject`': Table 'bacula.RestoreObject'
> doesn't exist (1146)
> > 15-May 13:41 kilchis-dir JobId 30402: Error: Runscript: BeforeJob
> returned non-zero status=2. ERR=Child exited with code 2
> > 15-May 13:41 kilchis-dir JobId 30402: Error: Bacula kilchis-dir 5.2.13
> (19Jan13)
> >
> > When I look at the database and try to select from that table, I get an
> error saying the table does not exist.  But it shows up when I do a 'show
> tables;' in mysql.
> >
> > Does any one have any words or thoughts as to why this is failing?  I
> have not looked at the perl script yet, but it did not change as I am
> running the same version as before.
> >
> > thanks for the help!
> >
> > jerry
> >
> >
> >
> --
> > Mobile security can be enabling, not merely restricting. Employees who
> > bring their own devices (BYOD) to work are irked by the imposition of MDM
> > restrictions. Mobile Device Manager Plus allows you to control only the
> > apps on BYO-devices by containerizing them, leaving personal data
> untouched!
> > https://ad.doubleclick.net/ddm/clk/304595813;131938128;j
> > ___
> > Bacula-users mailing list
> > Bacula-users@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bacula-users
> >
>
>
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Catalog backup failing after system restore

2016-05-15 Thread Jerry Lowry
Hi all,

I have just finished recovering from a system disk boot failure on one of my
backup servers.  It is running Centos (6.6/6.7 now); Mysql 5.6.28/5.6.30
(now) and Bacula 5.2.13.
Fortunately the system disk did not die, just some boot problem and I could
tell that it was not spinning like it should.  So I was able to pull the
Bacula database directory from the old disk. I went ahead and reinstalled
all of the products and then was able to get Mysql configured the same way for
Bacula.  All of the system backups are working without any problems, but
the catalog backup is failing with the following errors:
15-May 13:25 kilchis-dir JobId 30402: shell command: run BeforeJob
"/usr/bacula/bin/make_catalog_backup.pl MyCatalog"
15-May 13:25 kilchis-dir JobId 30402: BeforeJob: Warning: Using unique
option prefix database instead of databases is deprecated and will be
removed in a future release. Please use the full name instead.
15-May 13:25 kilchis-dir JobId 30402: BeforeJob: Warning: mysqldump:
ignoring option '--databases' due to invalid value 'bacula'
15-May 13:25 kilchis-dir JobId 30402: BeforeJob: mysqldump: Got error:
1146: Table 'bacula.RestoreObject' doesn't exist when using LOCK TABLES
15-May 13:41 kilchis-dir JobId 30402: BeforeJob: Error: Couldn't read
status information for table RestoreObject ()
15-May 13:41 kilchis-dir JobId 30402: BeforeJob: mysqldump: Couldn't
execute 'show create table `RestoreObject`': Table 'bacula.RestoreObject'
doesn't exist (1146)
15-May 13:41 kilchis-dir JobId 30402: Error: Runscript: BeforeJob returned
non-zero status=2. ERR=Child exited with code 2
15-May 13:41 kilchis-dir JobId 30402: Error: Bacula kilchis-dir 5.2.13
(19Jan13)

When I look at the database and try to select from that table, I get an
error saying the table does not exist.  But it shows up when I do a 'show
tables;' in mysql.

Does any one have any words or thoughts as to why this is failing?  I have
not looked at the perl script yet, but it did not change as I am running
the same version as before.
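
A table that appears in 'show tables' but cannot be selected from usually
means the .frm file survived while InnoDB's data dictionary lost the table.
A hedged sketch of one recovery path, assuming the rest of the catalog dumps
cleanly (the CREATE statement itself must come from the make_mysql_tables
script shipped with your Bacula version):

```sql
-- First dump everything except the damaged table (from a shell:
--   mysqldump bacula --ignore-table=bacula.RestoreObject > partial.sql).
-- Then try to drop the orphaned definition:
DROP TABLE IF EXISTS RestoreObject;
-- and recreate it by re-running the CREATE TABLE RestoreObject block
-- from Bacula's make_mysql_tables script for your version.
```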

thanks for the help!

jerry
--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] relabeled offsite copydisk

2016-04-19 Thread Jerry Lowry
Hello,  I have unfortunately relabeled one of my offsite backup disks using
‘parted’.  I would like to go back and recopy the jobs that were on it.  Is
there a way to reset the copy flag in the database for these specific jobs?



Thanks for the help!
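
Bacula records a copy as a Job row whose PriorJobId references the original
job, and PoolUncopiedJobs selects originals that have no such row.  A hedged
sketch against a 5.x-era catalog schema (inspect before deleting anything,
and back up the catalog first; the JobIds in the DELETE are placeholders):

```sql
-- List copy records and the originals they point at.  Copy activity is
-- stored with Type 'C' (copy of a job) and 'c' (copy control job).
SELECT JobId, Name, Type, PriorJobId FROM Job WHERE Type IN ('C', 'c');
-- Removing the copy rows for the lost disk should make those originals
-- eligible for copying again, e.g.:
-- DELETE FROM Job WHERE Type = 'C' AND PriorJobId IN (1234, 1235);
```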
--
Find and fix application performance issues faster with Applications Manager
Applications Manager provides deep performance insights into multiple tiers of
your business applications. It resolves application problems quickly and
reduces your MTTR. Get your free trial!
https://ad.doubleclick.net/ddm/clk/302982198;130105516;z
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Copy job with PoolUncopiedJobs

2015-10-30 Thread Jerry Lowry
Hi,

Centos 5.11 64bit OS
Bacula 5.2.6 on all directors and clients

I have run across a problem with one of my copy jobs.  The job is set up
with the PoolUncopiedJobs parameter.  The jobs are failing with the
following:

30-Oct 11:05 distress JobId 28325: Error: block.c:291 Volume data
error at 3:87310201! Wanted ID: "BB02", got "Í". Buffer discarded.

(not all of them get the same "got")

I can understand the error because I had problems with the backup disks
during the date it is trying to copy.

Question is,  How can I get around these bad backups to the date where the
disks were functioning properly and get them copied to my offsite disk?

thanks for the pointers.

jerry
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] disk errors during backup

2015-09-23 Thread Jerry Lowry
I started an offsite copy yesterday and found this morning that I have been
getting the following error:

23-Sep 10:52 kilchis JobId 28178: Error: block.c:1045 Read error on fd=5 at
file:blk 23:3783749357 on device "Hardware" (/Hardware). ERR=Input/output
error.
23-Sep 10:52 kilchis JobId 28178: Error: read_record.c:151 block.c:1045
Read error on fd=5 at file:blk 23:3783749357 on device "Hardware"
(/Hardware). ERR=Input/output error.

Is there any way to get past this error?  There are a few copy jobs
remaining after this one.

thanks for your help

jerry
--


Re: [Bacula-users] bsmtp failing to connect to mail server

2015-08-11 Thread Jerry Lowry
Thank you all for the pointers and information regarding postfix and Google
mail.  I have not determined which way I am going to proceed yet.  Too many
other fires at the moment, which means I am checking on the backups rather
than the email.

Kind regards,

jerry

On Sun, Aug 9, 2015 at 11:37 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu
wrote:

 On 2015-08-08 14:23, Jerry Lowry wrote:
  Heitor,  Sorry for not saying this in the original text.  It does the
  same thing when I specify 'localhost'.
 
  I don't really need 'tls' that is just what gmail is looking for when I
  try to use them as the sending mail server.

 Send-only postfix setup that also accepts mail on 127.0.0.1:

   /etc/postfix/main.cf:

 myorigin = $mydomain
 inet_interfaces = localhost
 inet_protocols = ipv4
 mydestination =
 relayhost = $mydomain
 local_transport = error:local mail delivery is disabled

 -- change myorigin and relayhost as appropriate. Plus,

   /etc/postfix/master.cf:

 #local unix  -   n   n   -   -   local

 (i.e. the local line commented out).

 Make sure iptables isn't blocking port 25 on localhost.

 To test:

 telnet localhost 25

 -- if you get connected and 220 code from postfix you should be good
 to go. Note that sending mail from command line is not a useful test
 since it doesn't necessarily work the same way. It's calling the
 sendmail binary directly instead of talking to the server on port 25.

 And then there's gmail. It delivers some messages to all mail instead
 of inbox, messages whose From and To addresses are the same just vanish,
 and so on. So check the postfix log after running a test job: it may
 well be there's nothing wrong with your bsmtp or postfix config.

 Dima




Re: [Bacula-users] bsmtp failing to connect to mail server

2015-08-08 Thread Jerry Lowry
Heitor,  Sorry for not saying this in the original text.  It does the same
thing when I specify 'localhost'.

I don't really need 'tls' that is just what gmail is looking for when I try
to use them as the sending mail server.

thanks

On Sat, Aug 8, 2015 at 5:16 AM, Heitor Faria hei...@bacula.com.br wrote:

 Since the move to gmail I have not been able to get the bsmtp
 configuration to work.  I have tried to use gmail but it requires tls.

 Jul 20 10:26:59 distress bacula-dir: 20-Jul 10:26  Message delivery ERROR:
 Mail
 prog: bsmtp: bsmtp.c:145 Fatal malformed reply from smtp.googlemail.com:
 530 5.7
 .0 Must issue a STARTTLS command first. b9sm14093516ioj.6 - gsmtp

 It's pretty easy to set Postfix to use TLS:
 http://bacula.us/authenticated-mail-sending/

 So, I have configured postfix/cyrus on the backup server just to send
 email.  The problem I get from bsmtp now is that it can not connect to the
 mail server.

 [root@kilchis bin]# /usr/bacula/bin/bsmtp -d 25 -h kilchis.server.com \
   -f "(Bacula) no-re...@edt.com" -s "Bacula daemon message" r...@server.com

 If you pretend to use postfix in this machine as your mail relay service,
 -h (mail server address) should be localhost.

 test message
 
 bsmtp: bsmtp.c:338-0 Debug level = 25
 bsmtp: bsmtp.c:346-0 host=kilchis.server.com
 bsmtp: bsmtp.c:356-0 subject=Bacula
 bsmtp: bsmtp.c:432-0 My hostname is: kilchis
 bsmtp: bsmtp.c:456-0 From addr=(Bacula)
 bsmtp: bsmtp.c:514-0 Failed to connect to mailhost kilchis.server.com

 I can send mail from the command prompt and it works just fine.

 What am I missing?

 Regards,
 ===
 Heitor Medrado de Faria - LPIC-III | ITIL-F |  Bacula Systems Certified
 Administrator II
 Do you need Bacula training?
 https://www.udemy.com/bacula-backup-software/?couponCode=bacula-list
 +55 61 2021-8260 / +55 61 8268-4220
 Site: http://bacula.us FB: heitor.faria
 http://www.facebook.com/heitor.faria
 ===




[Bacula-users] bsmtp failing to connect to mail server

2015-08-07 Thread Jerry Lowry
All,

I have recently setup two separate bacula servers on two subnets.  They
were using one server and going through the firewall for one of the
subnets.  The previous configuration worked well but for security reasons
and speed I separated them.
Along with this change the company decided to move from an inhouse email
server to gmail.

Since the move to gmail I have not been able to get the bsmtp configuration
to work.  I have tried to use gmail but it requires tls.

Jul 20 10:26:59 distress bacula-dir: 20-Jul 10:26  Message delivery ERROR:
Mail
prog: bsmtp: bsmtp.c:145 Fatal malformed reply from smtp.googlemail.com:
530 5.7
.0 Must issue a STARTTLS command first. b9sm14093516ioj.6 - gsmtp

So, I have configured postfix/cyrus on the backup server just to send
email.  The problem I get from bsmtp now is that it can not connect to the
mail server.

[root@kilchis bin]# /usr/bacula/bin/bsmtp -d 25 -h kilchis.server.com \
  -f "(Bacula) no-re...@edt.com" -s "Bacula daemon message" r...@server.com
test message

bsmtp: bsmtp.c:338-0 Debug level = 25
bsmtp: bsmtp.c:346-0 host=kilchis.server.com
bsmtp: bsmtp.c:356-0 subject=Bacula
bsmtp: bsmtp.c:432-0 My hostname is: kilchis
bsmtp: bsmtp.c:456-0 From addr=(Bacula)
bsmtp: bsmtp.c:514-0 Failed to connect to mailhost kilchis.server.com

I can send mail from the command prompt and it works just fine.

What am I missing?

thanks


[Bacula-users] Question on restoring files using bscan with 2 storage servers

2013-11-26 Thread Jerry Lowry
hello list,
I am hoping you can help me solve this.

I have one director and two storage servers.  Both storage servers have
disks attached, but only one SD has a tape drive.  The tape drive is not on
the director.  I need to load some old tapes into the database, but when I
run bscan from the SD where the tape is, it does not see the database.  If
I run bscan from the director, where there isn't a tape drive, it does not
see the tape drive on the other SD.

How can I resolve this?
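Is it just a matter of pointing bscan at the catalog database over the
network?  This is what I was considering (untested sketch; the host name,
credentials, volume, and device names are made up):

```
# run on the SD host that owns the tape drive; -n/-u/-P/-h point at the
# director's catalog database (the DB server must allow remote connections)
./bscan -c /etc/bacula/bacula-sd.conf \
        -n bacula -u bacula -P dbpassword -h director.example.com \
        -V OldTape-0001 -s -v /dev/nst0
```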

thanks
--


[Bacula-users] Offsite disk file system corrupt after copy job

2012-10-11 Thread Jerry Lowry

Hello,

I just ran into a problem with one of my offsite disks.  After rebooting
the storage director, one of the disks that I run a copy job to became
corrupt.  I would like to know if there is a way to recreate the copy
job with the same jobs so that I don't lose any data.  The job that was
running used 3 different disks; the last disk is the one that became
corrupt.
I can get all of the jobs that were copied from the log, but how would I
set up the copy job to do this?
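My first thought was a one-off copy job that selects the JobIds I pull out
of the log, something like this (untested sketch; the names and JobIds are
placeholders, and the rest would follow my normal copy-job setup):

```
Job {
  Name = "RecopyLostJobs"        # hypothetical one-off job
  Type = Copy
  Pool = DailyPool               # read pool that still holds the originals
  Client = distress-fd           # required by the parser, not used for copies
  FileSet = "Full Set"
  Messages = Standard
  Selection Type = SQLQuery
  Selection Pattern = "SELECT JobId FROM Job WHERE JobId IN (2291,2328,2335)"
}
```

The write pool would come from the read pool's Next Pool, as with my normal
copy jobs.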


thanks,

jerry


--


[Bacula-users] Problems configuring 5.0.1 with bat

2011-10-10 Thread jerry lowry
 Hi,  I moved the DIR/SD to a new motherboard and Linux version.  Before,
it was running FC 14; now it is running CentOS 5.7.  The motherboard was a
much-needed upgrade.


I am using the configure script that was used previously to build Bacula 
( it was also used to build a test system ).  Both of these builds 
were/are configured to use BAT.  I have installed Qt, meaning I can run 
the designer from the prompt without any problems.  I have changed my 
PATH to include the Qt installation.  But, when I run configure I am 
getting the following error:


configure: error: Unable to find Qt4 installation needed by bat

I have included the configure output; it shows that configure is able to 
find gmake and the Qt directory, but I don't know what it is looking for 
at this point.


Any help is appreciated, thanks.


checking for true... /bin/true
checking for false... /bin/false
configuring for Bacula 5.0.1 (24 February 2010)
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking whether gcc and cc understand -c and -o together... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking whether gcc needs -traditional... no
checking for g++... /usr/bin/g++
checking for a BSD-compatible install... /usr/bin/install -c
checking for mv... /bin/mv
checking for rm... /bin/rm
checking for cp... /bin/cp
checking for sed... /bin/sed
checking for echo... /bin/echo
checking for cmp... /usr/bin/cmp
checking for tbl... /usr/bin/tbl
checking for ar... /usr/bin/ar
checking for openssl... /usr/bin/openssl
checking for mtx... mtx
checking for dd... /bin/dd
checking for mkisofs... /usr/bin/mkisofs
checking for python... /usr/bin/python
checking for growisofs... /usr/bin/growisofs
checking for dvd+rw-mediainfo... /usr/bin/dvd+rw-mediainfo
checking for dvd+rw-format... /usr/bin/dvd+rw-format
checking for pkg-config... /usr/bin/pkg-config
checking for qmake... /usr/local/qtsdk-2010.04/qt/bin/qmake
checking for gmake... /usr/bin/gmake
checking for wx-config... wx-config
checking for cdrecord... /usr/bin/cdrecord
checking for pidof... /sbin/pidof
checking for gawk... gawk
checking for gawk... /bin/gawk
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a sed that does not truncate output... (cached) /bin/sed
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 98304
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands +=... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for ar... /usr/bin/ar
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether /usr/bin/g++ accepts -g... (cached) yes
checking how to run the C++ preprocessor... /usr/bin/g++ -E
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared 
libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static 

Re: [Bacula-users] Problems configuring 5.0.1 with bat

2011-10-10 Thread jerry lowry

 On 10/10/2011 04:51 PM, jerry lowry wrote:
 Hi,  I moved the DIR/SD to a new motherboard and Linux version.  Before, 
it was running FC 14; now it is running CentOS 5.7.  The 
motherboard was a much-needed upgrade.


I am using the configure script that was used previously to build 
Bacula ( it was also used to build a test system ).  Both of these 
builds were/are configured to use BAT.  I have installed Qt, meaning I 
can run the designer from the prompt without any problems.  I have 
changed my PATH to include the Qt installation.  But, when I run 
configure I am getting the following error:


configure: error: Unable to find Qt4 installation needed by bat

I have included the configure output; it shows that configure is able to 
find gmake and the Qt directory, but I don't know what it is looking for 
at this point.


Any help is appreciated, thanks.




Okay, so I forgot the configure statement... it's been that kind of a day.


./configure \
--sbindir=/usr/bacula/bin \
--sysconfdir=/usr/bacula/bin \
--with-pid-dir=/var/run/bacula \
--with-subsys-dir=/var/run/bacula/working \
--enable-smartalloc \
--with-mysql \
--with-working-dir=/usr/bacula/bin/working \
--with-dump-email=jlo...@edt.com \
--with-job-email=jlo...@edt.com \
--with-smtp-host=mailhost.edt.com \
--enable-tray-monitor \
--enable-bat \
--enable-bwx-console \
--with-python=/usr/lib/python2.4 \
--with-x


Re: [Bacula-users] Firewall traversal

2011-06-20 Thread jerry lowry
I have a similar setup.  Can you add a rule in the firewall that will 
allow the FD access to the SD?  That's what I did in order to get my 
backups to work.
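In my case it was just a forwarding rule for the SD port (iptables sketch;
the addresses are examples, and 9103 is the default SD port):

```
# allow the remote FD to reach the SD through the firewall
iptables -A FORWARD -s 203.0.113.10 -p tcp --dport 9103 -j ACCEPT
# and, when the SD sits behind NAT, forward the port to it
iptables -t nat -A PREROUTING -p tcp --dport 9103 \
         -j DNAT --to-destination 192.168.1.5:9103
```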



On 6/20/2011 9:11 AM, Kevin O'Connor wrote:

My setup is as follows:

Bacula Server (DIR, SD) - Firewall/NAT - Server to be backed up (FD)

The FD is accessible from anywhere, but the DIR/SD is not (NAT/FW).

When I start the backup, the Director connects to the FD without a 
problem, but then when the Director tells the FD to connect back to 
the SD it fails because of the NAT.  I'm in a situation where I can't 
get the ports forwarded, but it would seem that there should be a way 
to have the SD connect out to the FD or something along those lines to 
get this working.  Is there a way to do that that I've missed in the 
docs or is really the only way to get this working is to expose the SD?





[Bacula-users] Quick question regarding migrating jobs

2011-06-03 Thread jerry lowry
Hi,

I just have a quick question regarding the migration of jobs.  If I have 
migrated jobs to another volume that I have sent off site, can I delete 
the original volume once all the jobs on that volume have been migrated?  
The real reason for the question is that I am trying to clean up some 
volume names that have been mislabeled.
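Assuming the answer is yes, this is what I would plan to do (bconsole plus
shell sketch, untested; the volume name and path are placeholders for one of
the mislabeled volumes):

```
# in bconsole, after confirming every job on the volume shows
# Type 'M' (migrated):
*delete volume=daily-0003
# then remove the volume file from the disk itself:
rm /backup/daily/daily-0003
```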

thanks,

jerry



[Bacula-users] Questions regarding the bcopy utility

2011-05-17 Thread Jerry Lowry

Hi,

I am trying to use the bcopy utility to copy corrupt volumes to new 
volumes that I can bscan into Bacula.  I looked at the documentation and 
it does not say whether I need to create the new volumes in Bacula or not; I 
assume I do.  But the docs say that the new volume is not recorded in 
the catalog.
I created a volume in Bacula and tried the copy, but it failed with the 
errors attached.  Also, I have loaded patch 1560 for bcopy.

Any help is wonderful.

thanks,
--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com


[jlowry@distress-sd bin]$ ./bcopy -v -c /etc/bacula/bacula-sd.conf -i 
hardware-0007 -o hardware-0007a /Hardware /Hardware
bcopy: butil.c:281 Using device: /Hardware for reading.
17-May 12:59 bcopy JobId 0: Ready to read from volume hardware-0007 on device 
Hardware (/Hardware).
bcopy: butil.c:284 Using device: /Hardware for writing.
17-May 12:59 bcopy JobId 0: Wrote label to prelabeled Volume hardware-0007a 
on device Hardware (/Hardware)
Volume Label Record: VolSessionId=109 VolSessionTime=1298660169 JobId=1 
DataLen=176
bcopy: bcopy.c:259 Volume label not copied.
17-May 12:59 bcopy JobId 0: End of Volume hardware-0007a at 0:215 on device 
Hardware (/Hardware). Write of 64512 bytes got 3880.
17-May 12:59 bcopy JobId 0: End of medium on Volume hardware-0007a Bytes=216 
Blocks=0 at 17-May-2011 12:59.
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

Mount Volume  on device Hardware (/Hardware) and press return when ready: 
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 12:59 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

Mount Volume  on device Hardware (/Hardware) and press return when ready: 
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 13:01 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

Mount Volume  on device Hardware (/Hardware) and press return when ready: 
qqq
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 13:01 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not open file device Hardware 
(/Hardware). No Volume name given.

Mount Volume  on device Hardware (/Hardware) and press return when ready: 
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
bcopy: label.c:308 === ERROR: write_new_volume_label_to_dev called with NULL 
VolName
17-May 13:01 bcopy JobId 0: Warning: mount.c:221 Open device Hardware 
(/Hardware) Volume  failed: ERR=Could not 

Re: [Bacula-users] Questions regarding migration job failure

2011-05-13 Thread Jerry Lowry

thanks for your help and input.

I don't think the controller was/is causing the corruption.  The problem 
stems from my initial configuration of the storage and volumes, which caused 
the disks to fill up.  In order to (hopefully) not lose any backup data, I 
moved some of the volumes to another disk while reconfiguring the pools 
and storage.  As I was working on the reconfiguration, Bacula got to the 
point where it wanted to write to the volumes that I had moved; hence the 
volume was deemed corrupt because Bacula could not find it.
So long as recycling the volumes clears the corrupt part of the 
volume, I think I should be okay.  I will just have to be more intelligent 
in my configuration of volumes and storage.


Thanks

On 5/13/2011 2:25 AM, Martin Simmons wrote:

On Thu, 12 May 2011 09:58:14 -0700, Jerry Lowry said:

thanks for the help.  Looks like I have some digging to do to figure out
what is actually happening.  I know that at one time I had some problems
with the raid controller.  I have since gotten that resolved.

If the volume has been recycled will the corruption remain with the
volume or will it go by the wayside once the volume recycles?  Just
curious as to whether I should drop the corrupt volumes ( files ) and
create new ones.

I would consider reformatting the whole partition -- if the raid controller
was corrupting things, then there is no way to be sure that the filesystem is
OK.

__Martin



--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com




Re: [Bacula-users] Questions regarding migration job failure

2011-05-12 Thread Jerry Lowry
thanks for the help.  Looks like I have some digging to do to figure out 
what is actually happening.  I know that at one time I had some problems 
with the raid controller.  I have since gotten that resolved.


If the volume has been recycled will the corruption remain with the 
volume or will it go by the wayside once the volume recycles?  Just 
curious as to whether I should drop the corrupt volumes ( files ) and 
create new ones.


On 5/12/2011 12:31 AM, Graham Keeling wrote:

On Wed, May 11, 2011 at 02:06:44PM -0700, Jerry Lowry wrote:

another mistake on my part.  You have to give bls the correct spelling
of the volume ( sometimes I wonder )

Once I corrected the volume name this is the results I get:

Volume Record: File:blk=0: 206 Sessid=16 SessTime=1303843290 Jobid=3
DataLen=171
11-May 13:42 bls JobId 0: Error: block.c:318 Volume data error at 0:206!
Block checksum mismatch in block=6010112 len=64512: calc=c6a6912d
blk=50a7d773

Well, that's the problem right there.
Your migration doesn't work when volumes that are not corrupted are being read.

As to how your volumes got corrupted, that's a much harder question.

If it were me, I would start everything from scratch, and after every backup
run your 'bls' command on any volume that changed. This will let you catch
the problem just after it happened, and you might be able to spot anything
strange that happened before that.

(assuming that it is a bacula bug, rather than you having a disk or a file
system problem)


I ran this again with debug at level 200. I have attached the file with
the output.

thanks for all your help!

On 5/11/2011 12:11 PM, Jerry Lowry wrote:

Hi,

No, the migration job is occurring on the same storage daemon.  This
storage daemon has 6 raid devices setup as jbod, 3 are for daily use
and 3 are setup as hotswap devices for off-site backups.  The problem
is when I run bls on the storage daemon where the disks are located I
get a message asking me to mount the disk, which is already mounted
according to the director, as well as being mounted by the OS.



On 5/11/2011 11:26 AM, Phil Stracchino wrote:

On 05/11/11 13:48, Jerry Lowry wrote:

Ok, I am trying to run bls on the specified volume file that is
associated with this job. But the problem I am having is that bls is
failing trying to stat the device.

I have one director and two storage directors.  The volume I am trying
to run against is on the second SD.  Do I run bls on the system where
the 'director' is or on the system thats running the stand alone 'sd'
where the volume is located?

Jerry,
If I'm understanding you correctly, you have two storage daemons, and
you're trying to do a migration from a device on one SD to a device on
the other.  Is this correct?

If this understanding is correct, sorry, it won't work.  Copy and
migration can currently only be done between devices controlled by the
same SD.  (This is in large part a result of there being no current
capability for direct communication between one storage daemon and another.)



--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com




--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com


[jlowry@distress-sd bin]$ ./bls -d 200 -j -v -v -V home-0006 -c 
/etc/bacula/bacula-sd.conf /Home
bls: stored_conf.c:698-0 Inserting director res: distress-mon
bls: stored_conf.c:698-0 Inserting device res: DBB
bls: stored_conf.c:698-0 Inserting device res: Hardware
bls: stored_conf.c:698-0 Inserting device res: Swift
bls: stored_conf.c:698-0 Inserting device res: Home
bls: stored_conf.c:698-0 Inserting device res: Workstations
bls: stored_conf.c:698-0 Inserting device res: TopSwap
bls: stored_conf.c:698-0 Inserting device res: MidSwap
bls: stored_conf.c:698-0 Inserting device res: BottomSwap
bls: stored_conf.c:698-0 Inserting device res: FileStorage
bls: stored_conf.c:698-0 Inserting device res: FileStorage1
bls: stored_conf.c:698-0 Inserting device res: Drive-1
bls

Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry
Is there anyone that can help me with this problem?  Surely someone is 
using the migration job.



On 5/9/2011 2:51 PM, jerry lowry wrote:

Hi,

I am frequently getting errors on my migration jobs and I need some 
help trying to figure out what the problem is.


I have three migration jobs that migrate data from a daily disk to a 
raid disk that is setup as a hotswap disk.  Once this is full I pull 
the disk and move it to an offsite facility.  About half of the time 
the migration jobs work with out any problems, the other half I get 
errors on many of the jobs that are being migrated.  Example:  I start 
a migrate job and it starts to migrate 6 jobs to the offsite disk.  It 
will get through two of the jobs successfully and then the last four 
jobs will fail with the error below.  Each of the media are created 
using  BAT or BConsole without errors.


I have no clue as to what the problem might be, so any help is great.

Below you will find the config files and job output.

thanks,
jerry

Job error:
09-May 12:55 distress-dir JobId 2549: The following 3 JobIds were chosen to be 
migrated: 2335,2328,2291
09-May 12:55 distress-dir JobId 2549: Job queued. JobId=2550
09-May 12:55 distress-dir JobId 2549: Migration JobId 2550 started.
09-May 12:55 distress-dir JobId 2549: Job queued. JobId=2552
09-May 12:55 distress-dir JobId 2549: Migration JobId 2552 started.
09-May 12:55 distress-dir JobId 2549: Migration using JobId=2291 
Job=BackupHardware.2011-04-17_20.05.00_17
09-May 12:55 distress-dir JobId 2549: Bootstrap records written to 
/var/run/bacula/working/distress-dir.restore.53.bsr
09-May 13:59 distress-dir JobId 2549: Start Migration JobId 2549, 
Job=CopyHWDiskToDisk.2011-05-09_12.55.37_45
09-May 13:59 distress-dir JobId 2549: Using Device TopSwap
09-May 13:59 distress-sd-sd JobId 2549: Ready to read from volume hardware-0007 on 
device Hardware (/Hardware).
09-May 13:59 distress-sd-sd JobId 2549: Volume hardwareBS-2 previously 
written, moving to end of data.
09-May 13:59 distress-sd-sd JobId 2549: Ready to append to end of Volume 
hardwareBS-2 size=240021666918
09-May 13:59 distress-sd-sd JobId 2549: Forward spacing Volume hardware-0007 
to file:block 0:215.
09-May 13:59 distress-sd-sd JobId 2549: Error: block.c:275 Volume data error at 0:215! Wanted ID: 
BB02, got 2. Buffer discarded.
09-May 13:59 distress-dir JobId 2549: Error: Bacula distress-dir 5.0.1 
(24Feb10): 09-May-2011 13:59:15
   Build OS:   x86_64-unknown-linux-gnu redhat
   Prev Backup JobId:  2291
   Prev Backup Job:BackupHardware.2011-04-17_20.05.00_17
   New Backup JobId:   2554
   Current JobId:  2549
   Current Job:CopyHWDiskToDisk.2011-05-09_12.55.37_45
   Backup Level:   Full
   Client: distress-sd-fd
   FileSet:Top Set 2011-03-30 10:42:47
   Read Pool:  HardwarePool (From Job resource)
   Read Storage:   hardware (From command line)
   Write Pool: OffsiteTop (From Job Pool's NextPool resource)
   Write Storage:  topswap (From Storage from Pool's NextPool 
resource)
   Catalog:MyCatalog (From Client resource)
   Start time: 09-May-2011 13:59:15
   End time:   09-May-2011 13:59:15
   Elapsed time:   0 secs
   Priority:   10
   SD Files Written:   0
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Volume name(s):
   Volume Session Id:  27
   Volume Session Time:1304722130
   Last Volume Bytes:  0 (0 B)
   SD Errors:  1
   SD termination status:  Running
   Termination:*** Migration Error ***


Configuration files: (This is one of three, they are all setup the same way)

Job {
 Name = CopyHWDiskToDisk
 Type = Migrate
 Level = Full
 FileSet = Top Set
 Client = distress-sd-fd
 Messages = Standard
 Storage = hardware
 Pool = HardwarePool
 Maximum Concurrent Jobs = 4
 Selection Type = Pool Time
 Selection Pattern = hardwareTS-*
}

# File Pool definition
Pool {
   Name = OffsiteTop
   Pool Type = Migrate
   Next Pool = OffsiteTop
   Storage = topswap
   Recycle = yes   # Bacula can automatically recycle Volumes
   AutoPrune = yes # Prune expired volumes
   Volume Retention = 6 months # six months
   Maximum Volume Bytes = 1800G   # Limit Volume size to something reasonable
   Maximum Volumes = 10   # Limit number of Volumes in Pool
}

FileSet {
Name = Top Set
Include {
Options {
signature = MD5
}
#
#  Put your list of files here, preceded by 'File =', one per line
#or include an external list with:
#
#File =file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitions such as /usr or /home
#you will probably want to add them too.
#
File

Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry
I have not tried to restore from that particular job yet, but the 
next question would be: if the restore fails, that would mean that 
anything backed up in that job is not valid, correct?


thanks

On 5/11/2011 8:54 AM, Graham Keeling wrote:

On Wed, May 11, 2011 at 08:44:18AM -0700, Jerry Lowry wrote:

Is there anyone that can help me with this problem?  Surely someone is
using the migration job.

I'm not using migration jobs, but maybe I can give you a hint...


On 5/9/2011 2:51 PM, jerry lowry wrote:

09-May 13:59 distress-sd-sd JobId 2549: Forward spacing Volume hardware-0007 
to file:block 0:215.
09-May 13:59 distress-sd-sd JobId 2549: Error: block.c:275 Volume data error at 0:215! Wanted ID: 
BB02, got 2. Buffer discarded.

It seems to me that the error is not with the write to the new volume, but with
the read from the existing volume hardware-0007.

I've seen similar errors before, when I found bugs in bacula that trashed the
data on my disk volumes.

One thing to try is a restore from hardware-0007. I predict that you will
get the same error.


--
Achieve unprecedented app performance and reliability
What every C/C++ and Fortran developer should know.
Learn how Intel has extended the reach of its next-generation tools
to help boost performance applications - including clusters.
http://p.sf.net/sfu/intel-dev2devmay
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: _www.edt.com http://www.edt.com/_




Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry
Ok, I am trying to run bls on the volume file that is 
associated with this job, but the problem I am having is that bls is 
failing trying to stat the device.


I have one director and two storage daemons.  The volume I am trying 
to run against is on the second SD.  Do I run bls on the system where 
the 'director' is, or on the system that's running the standalone 'sd' 
where the volume is located?


thanks

On 5/11/2011 9:32 AM, Graham Keeling wrote:

On Wed, May 11, 2011 at 09:19:49AM -0700, Jerry Lowry wrote:

I have not tried to restore from that particular job as yet, but the
next question would be, if it fails on the restore that would mean that
anything backed up in that job would not be valid, correct?

I think that depends upon what you mean by valid.

For example, it might be possible to skip over the bad area of the volume and
restore some files past that bad area.

If it were me, I have to say that I would indeed be treating the whole job as
suspicious. And the others too, probably.

But let's not get ahead of ourselves. Perhaps the volume is actually fine and
the problem is something else.

Rather than doing a restore, maybe it would be worth running commands like
'bls' on the volume first. It would probably give a quicker diagnosis, if
there is a problem.








Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry

Sorry, forgot to add this:
When I run bls on the second SD it asks me to mount the volume on the 
specified device. But when I go to the director and try to mount the 
device, it says that it is always mounted because the device is a disk.


bls -j -V Home-0006 /Home

This uses the bacula-sd.conf in the current directory.

Device {
   Name = Home
   Media Type = File
   Archive Device = /Home
   LabelMedia = yes;
   Random Access = yes;
   AutomaticMount = yes
   Removable Media = no;
   AlwaysOpen = no;
}
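
For reference, a bls invocation against a File device like the one above generally needs the storage daemon's config passed explicitly with -c, and the volume name must match the catalog label exactly (volume names are case-sensitive, which matters in this thread). A sketch, assuming the SD config lives at /etc/bacula/bacula-sd.conf and the volume file is /Home/home-0006:

```sh
# Point bls at the SD config; the last argument is the device's archive path.
# -j lists job records only; -V names the volume (case must match the label).
./bls -j -V home-0006 -c /etc/bacula/bacula-sd.conf /Home

# Add verbosity and a debug level when diagnosing mount/stat problems:
./bls -d 200 -v -v -j -V home-0006 -c /etc/bacula/bacula-sd.conf /Home
```

Run it on the machine where the volume file physically lives (the SD host), not on the Director host.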



On 5/11/2011 10:48 AM, Jerry Lowry wrote:
Ok, I am trying to run bls on the volume file that is 
associated with this job, but the problem I am having is that bls is 
failing trying to stat the device.


I have one director and two storage daemons.  The volume I am trying 
to run against is on the second SD.  Do I run bls on the system where 
the 'director' is, or on the system that's running the standalone 'sd' 
where the volume is located?


thanks

On 5/11/2011 9:32 AM, Graham Keeling wrote:

On Wed, May 11, 2011 at 09:19:49AM -0700, Jerry Lowry wrote:

I have not tried to restore from that particular job as yet, but the
next question would be, if it fails on the restore that would mean that
anything backed up in that job would not be valid, correct?

I think that depends upon what you mean by valid.

For example, it might be possible to skip over the bad area of the volume and
restore some files past that bad area.

If it were me, I have to say that I would indeed be treating the whole job as
suspicious. And the others too, probably.

But let's not get ahead of ourselves. Perhaps the volume is actually fine and
the problem is something else.

Rather than doing a restore, maybe it would be worth running commands like
'bls' on the volume first. It would probably give a quicker diagnosis, if
there is a problem.













Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry

Hi,

No, the migration job is occurring on the same storage daemon.  This 
storage daemon has six RAID devices set up as JBOD: three are for daily use and 
three are set up as hot-swap devices for off-site backups.  The problem is that 
when I run bls on the storage daemon where the disks are located, I get a 
message asking me to mount the disk, which is already mounted according 
to the director, as well as being mounted by the OS.




On 5/11/2011 11:26 AM, Phil Stracchino wrote:

On 05/11/11 13:48, Jerry Lowry wrote:

Ok, I am trying to run bls on the specified volume file that is
associated with this job. But the problem I am having is that bls is
failing trying to stat the device.

I have one director and two storage directors.  The volume I am trying
to run against is on the second SD.  Do I run bls on the system where
the 'director' is, or on the system that's running the standalone 'sd'
where the volume is located?

Jerry,
If I'm understanding you correctly, you have two storage daemons, and
you're trying to do a migration from a device on one SD to a device on
the other.  Is this correct?

If this understanding is correct, sorry, it won't work.  Copy and
migration can currently only be done between devices controlled by the
same SD.  (This is in large part a result of there being no current
capability for direct communication between one storage daemon and another.)
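
Phil's point can be seen in configuration terms: for a copy or migration to work, the read device and the write device must both be resources in one bacula-sd.conf, served by a single SD. A sketch of the required shape (the device names, paths, and media types here are illustrative, not taken from this thread):

```conf
# bacula-sd.conf -- ONE storage daemon owning both devices
Device {
  Name = DailyDisk          # read side: where daily backups land
  Media Type = File
  Archive Device = /daily
}
Device {
  Name = OffsiteSwap        # write side: the hot-swap off-site disk
  Media Type = FileOffsite  # a distinct media type keeps the pools separate
  Archive Device = /offsite
}
```

If the source device lives on one SD host and the destination on another, the Director has no way to stream data between the two daemons, so the job cannot run.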








Re: [Bacula-users] Questions regarding migration job failure

2011-05-11 Thread Jerry Lowry
Another mistake on my part: you have to give bls the correct spelling 
of the volume name (sometimes I wonder).


Once I corrected the volume name this is the results I get:

Volume Record: File:blk=0: 206 Sessid=16 SessTime=1303843290 Jobid=3 
DataLen=171

11-May 13:42 bls JobId 0: Error: block.c:318 Volume data error at 0:206!
Block checksum mismatch in block=6010112 len=64512: calc=c6a6912d 
blk=50a7d773


I ran this again with debug at level 200. I have attached the file with 
the output.


thanks for all your help!

On 5/11/2011 12:11 PM, Jerry Lowry wrote:

Hi,

No, the migration job is occurring on the same storage daemon.  This 
storage daemon has six RAID devices set up as JBOD: three are for daily use and 
three are set up as hot-swap devices for off-site backups.  The problem is that 
when I run bls on the storage daemon where the disks are located, I get a 
message asking me to mount the disk, which is already mounted according 
to the director, as well as being mounted by the OS.




On 5/11/2011 11:26 AM, Phil Stracchino wrote:

On 05/11/11 13:48, Jerry Lowry wrote:

Ok, I am trying to run bls on the specified volume file that is
associated with this job. But the problem I am having is that bls is
failing trying to stat the device.

I have one director and two storage directors.  The volume I am trying
to run against is on the second SD.  Do I run bls on the system where
the 'director' is, or on the system that's running the standalone 'sd'
where the volume is located?

Jerry,
If I'm understanding you correctly, you have two storage daemons, and
you're trying to do a migration from a device on one SD to a device on
the other.  Is this correct?

If this understanding is correct, sorry, it won't work.  Copy and
migration can currently only be done between devices controlled by the
same SD.  (This is in large part a result of there being no current
capability for direct communication between one storage daemon and another.)











[jlowry@distress-sd bin]$ ./bls -d 200 -j -v -v -V home-0006 -c 
/etc/bacula/bacula-sd.conf /Home
bls: stored_conf.c:698-0 Inserting director res: distress-mon
bls: stored_conf.c:698-0 Inserting device res: DBB
bls: stored_conf.c:698-0 Inserting device res: Hardware
bls: stored_conf.c:698-0 Inserting device res: Swift
bls: stored_conf.c:698-0 Inserting device res: Home
bls: stored_conf.c:698-0 Inserting device res: Workstations
bls: stored_conf.c:698-0 Inserting device res: TopSwap
bls: stored_conf.c:698-0 Inserting device res: MidSwap
bls: stored_conf.c:698-0 Inserting device res: BottomSwap
bls: stored_conf.c:698-0 Inserting device res: FileStorage
bls: stored_conf.c:698-0 Inserting device res: FileStorage1
bls: stored_conf.c:698-0 Inserting device res: Drive-1
bls: match.c:250-0 add_fname_to_include prefix=0 gzip=0 fname=/
bls: butil.c:281 Using device: /Home for reading.
bls: dev.c:284-0 init_dev: tape=0 dev_name=/Home
bls: vol_mgr.c:162-0 add read_vol=home-0006 JobId=0
bls: butil.c:186-0 Acquire device for read
bls: acquire.c:95-0 Want Vol=home-0006 Slot=0
bls: acquire.c:109-0 MediaType dcr= dev=File
bls: acquire.c:189-0 dir_get_volume_info vol=home-0006
bls: bls.c:486-0 Fake dir_get_volume_info
bls: mount.c:546-0 Must load Home (/Home)
bls: autochanger.c:120-0 Device Home (/Home) is not an autochanger
bls: acquire.c:220-0 bstored: open vol=home-0006
bls: dev.c:360-0 open dev: type=1 dev_name=Home (/Home) vol=home-0006 
mode=OPEN_READ_ONLY
bls: dev.c:369-0 call open_file_device mode=OPEN_READ_ONLY
bls: dev.c:2089-0 Enter mount
bls: dev.c:542-0 open disk: mode=OPEN_READ_ONLY open(/Home/home-0006, 0x0, 0640)
bls: dev.c:557-0 open dev: disk fd=3 opened, part=0/0, part_size=0
bls: dev.c:373-0 preserve=0x0 fd=3
bls: acquire.c:228-0 opened dev Home (/Home) OK
bls: acquire.c:231-0 calling read-vol-label
bls: label.c:81-0

[Bacula-users] Questions regarding migration job failure

2011-05-09 Thread jerry lowry

Hi,

I am frequently getting errors on my migration jobs and I need some help 
trying to figure out what the problem is.


I have three migration jobs that migrate data from a daily disk to a 
RAID disk that is set up as a hot-swap disk.  Once this disk is full I pull 
it and move it to an offsite facility.  About half of the time the 
migration jobs work without any problems; the other half I get errors 
on many of the jobs being migrated.  Example:  I start a migrate job 
and it starts to migrate 6 jobs to the offsite disk.  It will get 
through two of the jobs successfully and then the last four jobs will 
fail with the error below.  Each of the media was created using BAT or 
bconsole without errors.


I have no clue as to what the problem might be, so any help is great.

Below you will find the config files and job output.

thanks,
jerry

Job error:

09-May 12:55 distress-dir JobId 2549: The following 3 JobIds were chosen to be 
migrated: 2335,2328,2291
09-May 12:55 distress-dir JobId 2549: Job queued. JobId=2550
09-May 12:55 distress-dir JobId 2549: Migration JobId 2550 started.
09-May 12:55 distress-dir JobId 2549: Job queued. JobId=2552
09-May 12:55 distress-dir JobId 2549: Migration JobId 2552 started.
09-May 12:55 distress-dir JobId 2549: Migration using JobId=2291 
Job=BackupHardware.2011-04-17_20.05.00_17
09-May 12:55 distress-dir JobId 2549: Bootstrap records written to 
/var/run/bacula/working/distress-dir.restore.53.bsr
09-May 13:59 distress-dir JobId 2549: Start Migration JobId 2549, 
Job=CopyHWDiskToDisk.2011-05-09_12.55.37_45
09-May 13:59 distress-dir JobId 2549: Using Device TopSwap
09-May 13:59 distress-sd-sd JobId 2549: Ready to read from volume hardware-0007 on 
device Hardware (/Hardware).
09-May 13:59 distress-sd-sd JobId 2549: Volume hardwareBS-2 previously 
written, moving to end of data.
09-May 13:59 distress-sd-sd JobId 2549: Ready to append to end of Volume 
hardwareBS-2 size=240021666918
09-May 13:59 distress-sd-sd JobId 2549: Forward spacing Volume hardware-0007 
to file:block 0:215.
09-May 13:59 distress-sd-sd JobId 2549: Error: block.c:275 Volume data error at 0:215! Wanted ID: 
BB02, got 2. Buffer discarded.
09-May 13:59 distress-dir JobId 2549: Error: Bacula distress-dir 5.0.1 
(24Feb10): 09-May-2011 13:59:15
  Build OS:   x86_64-unknown-linux-gnu redhat
  Prev Backup JobId:  2291
  Prev Backup Job:BackupHardware.2011-04-17_20.05.00_17
  New Backup JobId:   2554
  Current JobId:  2549
  Current Job:CopyHWDiskToDisk.2011-05-09_12.55.37_45
  Backup Level:   Full
  Client: distress-sd-fd
  FileSet:Top Set 2011-03-30 10:42:47
  Read Pool:  HardwarePool (From Job resource)
  Read Storage:   hardware (From command line)
  Write Pool: OffsiteTop (From Job Pool's NextPool resource)
  Write Storage:  topswap (From Storage from Pool's NextPool resource)
  Catalog:MyCatalog (From Client resource)
  Start time: 09-May-2011 13:59:15
  End time:   09-May-2011 13:59:15
  Elapsed time:   0 secs
  Priority:   10
  SD Files Written:   0
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Volume name(s):
  Volume Session Id:  27
  Volume Session Time:1304722130
  Last Volume Bytes:  0 (0 B)
  SD Errors:  1
  SD termination status:  Running
  Termination:*** Migration Error ***


Configuration files: (This is one of three, they are all setup the same way)

Job {
Name = CopyHWDiskToDisk
Type = Migrate
Level = Full
FileSet = Top Set
Client = distress-sd-fd
Messages = Standard
Storage = hardware
Pool = HardwarePool
Maximum Concurrent Jobs = 4
Selection Type = Pool Time
Selection Pattern = hardwareTS-*
}

# File Pool definition
Pool {
  Name = OffsiteTop
  Pool Type = Migrate
  Next Pool = OffsiteTop
  Storage = topswap
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 6 months # six months
  Maximum Volume Bytes = 1800G   # Limit Volume size to something reasonable
  Maximum Volumes = 10   # Limit number of Volumes in Pool
}

FileSet {
Name = Top Set
Include {
Options {
signature = MD5
}
#
#  Put your list of files here, preceded by 'File =', one per line
#or include an external list with:
#
#File =file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitions such as /usr or /home
#you will probably want to add them too.
#
File = /Workstations/
}

#
# If you backup the root directory, the following two excluded
#   files can be useful
#
Exclude {
#File = /var/run/bacula/working
#File = /tmp
#File = /proc

[Bacula-users] Question regarding migrate job failing

2011-04-14 Thread jerry lowry

Hi List,

I have a migrate job set up that will move JobIds to an offsite disk 
based on Pool Time.  This has been working on some of the JobIds, but 
on some I have gotten a couple of errors: Volume data Block checksum 
mismatch, and Volume data error at 13:2932697021! wanted id: BB02 got 
INIT.  This happened on a number of the JobIds but not all of them.
These errors happened on only two of the volumes that it used in the 
migrate job: hardware-003 and hardware-007.  The big 
question I have is: if these two volumes were copied from one disk to 
another disk and then back to the original disk, would this create a 
problem for Bacula?  From a hardware point of view, none of the disks are 
showing any errors.  I just had a problem with the RAID volume that I 
set up.  They are running as RAID 0 across two 1 TB disks.


Thanks for your input.

Config definitions:

# File Pool definition
Pool {
  Name = HomePool
  Pool Type = Backup
  Next Pool = OffsiteMid
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 7 days # one week
  Maximum Volume Bytes = 200G   # Limit Volume size to something reasonable
  Migration Time = 5 days   # Migrate data older than period of time
  Maximum Volumes = 5   # Limit number of Volumes in Pool
  Label Format = home-
}
# File Pool definition
Pool {
  Name = OffsiteMid
  Pool Type = Migrate
  Next Pool = OffsiteMid
  Storage = midswap
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 6 months # six months
  Maximum Volume Bytes = 1800G   # Limit Volume size to something reasonable
  Maximum Volumes = 10   # Limit number of Volumes in Pool
}
Job {
Name = CopyHMDiskToDisk
Type = Migrate
Level = Full
FileSet = Mid Set
Client = distress-sd-fd
Messages = Standard
Storage = home
Pool = HomePool
Maximum Concurrent Jobs = 4
Selection Type = Pool Time
Selection Pattern = homeMS-*
}
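
When a Pool Time selection doesn't pick up the jobs you expect, it can help to drive the migration by hand from bconsole and inspect what was selected. A sketch of that kind of session (the commands are standard bconsole; the job and pool names are the ones from the config above):

```shell
# Start the migration control job manually:
* run job=CopyHMDiskToDisk yes

# Watch what the Director selected and queued:
* status dir

# Inspect the source pool's volumes, including last-write times
# (Pool Time selection is driven by the Migration Time in HomePool):
* list volumes pool=HomePool
* list jobs
```

This won't fix a checksum error on a source volume, but it narrows down whether a failure is in job selection or in reading the data.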



Re: [Bacula-users] Questions regarding the copy job 'selection type'

2011-04-01 Thread jerry lowry

Anyone have any answers, ideas 

Based on the Documentation:

OldestVolume This selection keyword selects the volume with the oldest 
last write time in the Pool
to be migrated. The Pool to be migrated is the Pool defined in the 
Migration Job resource. The
migration control job will then start and run one migration backup job 
for each of the Jobs found

on this Volume. The Selection Pattern, if specified, is not used.

If the above is true, will the copy job go on to the next volume to 
complete the JobId from the oldest volume?


jerry

On 3/30/2011 2:20 PM, jerry lowry wrote:

Hello list,

I have set up a copy job that will copy with a 'selection type = oldest 
volume' to a hot-swappable disk so that I can move it offsite.  The job 
worked just fine up until it got to the end of the source volume, then 
it failed.  My question is:  if it hits the end of the volume, will it 
automatically go to the next volume if the last jobid spans between 
two volumes?  If it does not follow to the next volume, does the error 
below make sense?  (jobid 1661 spans to the next volume)


thanks,
jerry
--
30-Mar 13:10 distress-dir JobId 1964: The following 11 JobIds were chosen to be 
copied: 1566,1573,1579,1585,1597,1610,1620,1628,1634,1641,1661
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1965
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1965 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1967
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1967 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1969
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1969 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1971
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1971 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1973
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1973 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1975
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1975 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1977
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1977 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1979
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1979 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1981
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1981 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1983
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1983 started.
30-Mar 13:10 distress-dir JobId 1964: Copying using JobId=1661 
Job=BackupHardware.2011-02-21_23.05.00_11
30-Mar 13:10 distress-dir JobId 1964: Bootstrap records written to 
/var/run/bacula/working/distress-dir.restore.11.bsr
30-Mar 13:14 distress-dir JobId 1964: Start Copying JobId 1964, 
Job=CopyHWDiskToDisk.2011-03-30_13.10.58_05
30-Mar 13:14 distress-dir JobId 1964: Using Device TopSwap
30-Mar 13:14 distress-sd-sd JobId 1964: Ready to read from volume hardware-0001 on 
device Hardware (/Hardware).
30-Mar 13:14 distress-sd-sd JobId 1964: Volume OffHwDisk-1 previously 
written, moving to end of data.
30-Mar 13:14 distress-sd-sd JobId 1964: Ready to append to end of Volume 
OffHwDisk-1 size=5348997922
30-Mar 13:14 distress-sd-sd JobId 1964: Forward spacing Volume hardware-0001 
to file:block 1:1054031476.
30-Mar 13:14 distress-sd-sd JobId 1964: Error: block.c:318 Volume data error at 
1:1850819116!
Block checksum mismatch in block=12351 len=64512: calc=3385a118 blk=fbddd6bb
30-Mar 13:14 distress-dir JobId 1964: Error: Bacula distress-dir 5.0.1 
(24Feb10): 30-Mar-2011 13:14:50




[Bacula-users] Questions regarding the copy job 'selection type'

2011-03-30 Thread jerry lowry

Hello list,

I have set up a copy job that will copy with a 'selection type = oldest 
volume' to a hot-swappable disk so that I can move it offsite.  The job 
worked just fine up until it got to the end of the source volume, then 
it failed.  My question is:  if it hits the end of the volume, will it 
automatically go to the next volume if the last jobid spans between 
two volumes?  If it does not follow to the next volume, does the error 
below make sense?  (jobid 1661 spans to the next volume)


thanks,
jerry
--

30-Mar 13:10 distress-dir JobId 1964: The following 11 JobIds were chosen to be 
copied: 1566,1573,1579,1585,1597,1610,1620,1628,1634,1641,1661
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1965
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1965 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1967
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1967 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1969
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1969 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1971
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1971 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1973
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1973 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1975
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1975 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1977
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1977 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1979
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1979 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1981
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1981 started.
30-Mar 13:10 distress-dir JobId 1964: Job queued. JobId=1983
30-Mar 13:10 distress-dir JobId 1964: Copying JobId 1983 started.
30-Mar 13:10 distress-dir JobId 1964: Copying using JobId=1661
Job=BackupHardware.2011-02-21_23.05.00_11
30-Mar 13:10 distress-dir JobId 1964: Bootstrap records written to 
/var/run/bacula/working/distress-dir.restore.11.bsr
30-Mar 13:14 distress-dir JobId 1964: Start Copying JobId 1964, 
Job=CopyHWDiskToDisk.2011-03-30_13.10.58_05
30-Mar 13:14 distress-dir JobId 1964: Using Device TopSwap
30-Mar 13:14 distress-sd-sd JobId 1964: Ready to read from volume hardware-0001 on 
device Hardware (/Hardware).
30-Mar 13:14 distress-sd-sd JobId 1964: Volume OffHwDisk-1 previously 
written, moving to end of data.
30-Mar 13:14 distress-sd-sd JobId 1964: Ready to append to end of Volume 
OffHwDisk-1 size=5348997922
30-Mar 13:14 distress-sd-sd JobId 1964: Forward spacing Volume hardware-0001 
to file:block 1:1054031476.
30-Mar 13:14 distress-sd-sd JobId 1964: Error: block.c:318 Volume data error at 
1:1850819116!
Block checksum mismatch in block=12351 len=64512: calc=3385a118 blk=fbddd6bb
30-Mar 13:14 distress-dir JobId 1964: Error: Bacula distress-dir 5.0.1 
(24Feb10): 30-Mar-2011 13:14:50




Re: [Bacula-users] Director crash- again with traceback

2011-01-14 Thread jerry lowry
No one has any ideas on what could have caused this?  Based on the trace 
dump it looks like there is a problem in the scheduler.  Any pointers 
as to what I can look at?


thanks,

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com (http://www.edt.com/)



On 1/11/2011 9:12 AM, jerry lowry wrote:

I really hate when I do that!!!

[Thread debugging using libthread_db enabled]
[New Thread 0x7f8362bfd710 (LWP 9002)]
[New Thread 0x7f8363fff710 (LWP 3111)]
[New Thread 0x7f8368c49710 (LWP 3110)]
0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
$1 = '\000' <repeats 29 times>
$2 = 0x1fe2068 "bacula-dir"
$3 = 0x1fe20a8 "/usr/bacula/bin/bacula-dir"
$4 = 0x7f834c004328 "MySQL"
$5 = 0x7f836eadbd9e "5.0.1 (24 February 2010)"
$6 = 0x7f836eadbdb7 "x86_64-unknown-linux-gnu"
$7 = 0x7f836eadbdd0 "redhat"
$8 = 0x7f836eadba7c ""
$9 = "distress", '\000' <repeats 41 times>
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338

Thread 4 (Thread 0x7f8368c49710 (LWP 3110)):
#0  0x0033772d7393 in select () from /lib64/libc.so.6
#1  0x7f836eab0ad4 in bnet_thread_server (addrs=<value optimized out>,
    max_clients=<value optimized out>, client_wq=<value optimized out>,
    handle_client_request=<value optimized out>) at bnet_server.c:161
#2  0x004468fc in connect_thread (arg=0x1fe3ee8) at ua_server.c:82
#3  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#4  0x0033772de62d in clone () from /lib64/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 3 (Thread 0x7f8363fff710 (LWP 3111)):
#0  0x003377a0b3b9 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x7f836ead402c in watchdog_thread (arg=<value optimized out>)
    at watchdog.c:308
#2  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#3  0x0033772de62d in clone () from /lib64/libc.so.6
#4  0x0000000000000000 in ?? ()

Thread 2 (Thread 0x7f8362bfd710 (LWP 9002)):
#0  0x003377a0ec8d in waitpid () from /lib64/libpthread.so.0
#1  0x7f836eacb7ad in signal_handler (sig=11) at signal.c:229
#2  <signal handler called>
#3  0x003377a0c280 in pthread_kill () from /lib64/libpthread.so.0
#4  0x00420eba in cancel_storage_daemon_job (jcr=0x7f834c01c2f8)
    at job.c:515
#5  0x00410b50 in wait_for_job_termination (jcr=0x7f834c01c2f8,
    timeout=<value optimized out>) at backup.c:538
#6  0x004116f0 in do_backup (jcr=0x7f834c01c2f8) at backup.c:456
#7  0x00421fd4 in job_thread (arg=0x7f834c01c2f8) at job.c:314
#8  0x00423624 in jobq_server (arg=0x673b40) at jobq.c:450
#9  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#10 0x0033772de62d in clone () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()

Thread 1 (Thread 0x7f836ea7b7e0 (LWP 3106)):
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
61         stat = nanosleep(timeout, NULL);
        timeout = {tv_sec = 60, tv_nsec = 0}
        tv = {tv_sec = 90194313216, tv_usec = 140202474247679}
        tz = {tz_minuteswest = 372, tz_dsttime = 0}
        stat = <value optimized out>
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
131        bmicrosleep(next_check_secs, 0); /* recheck once per minute */
        jcr = <value optimized out>
        job = <value optimized out>
        run = <value optimized out>
        now = <value optimized out>
        prev = <value optimized out>
        first = false
        next_job = <value optimized out>
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338
338        while ( (jcr = wait_for_next_job(runjob)) ) {
        jcr = <value optimized out>
        test_config = false
        ch = <value optimized out>
        no_signals = false
        uid = 0x0
        gid = 0x0
        mode = <value optimized out>
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.


 Original Message 
Subject

[Bacula-users] Director crash

2011-01-11 Thread jerry lowry

Hi list,

I came in this morning and found that my director had died last night 
after doing two of the backups.  The traceback follows at the end.

This is the scenario:

I noticed yesterday that the only two jobs scheduled to run last night 
were a monthly backup and the catalog backup.  Since I did not have time 
to research why the other 5 backups were not scheduled, I started BAT and 
selected those jobs to run at the times they normally run each night 
(or are supposed to, anyway).  When I then looked at the director status 
I saw the two that were scheduled, plus 5 jobs waiting for their selected 
start times.


The two scheduled jobs ran without any errors.  The director crashed 
when running the first job that I had selected from BAT.  In BAT I 
selected the Jobs tab, selected the job I wanted to run, and modified 
only the 'when' (start) time by highlighting the hour and minute and 
inserting the time I wanted the job to run.  I did this for each of 
the jobs that did not get scheduled.


Made sure they were all showing up in the DIRECTOR tab and went on home.

Restarted bacula this morning and all the jobs were scheduled as normal.

Any clues or ideas as to the problem would be great.

OS:  Fedora 12 ( 2.6.32.11-99.fc12)
MySQL: 5.1.45 ( source distribution )
Bacula: 5.0.1

--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com (http://www.edt.com/)




[Bacula-users] Director crash- again with traceback

2011-01-11 Thread jerry lowry

I really hate when I do that!!!

[Thread debugging using libthread_db enabled]
[New Thread 0x7f8362bfd710 (LWP 9002)]
[New Thread 0x7f8363fff710 (LWP 3111)]
[New Thread 0x7f8368c49710 (LWP 3110)]
0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
$1 = '\000' <repeats 29 times>
$2 = 0x1fe2068 "bacula-dir"
$3 = 0x1fe20a8 "/usr/bacula/bin/bacula-dir"
$4 = 0x7f834c004328 "MySQL"
$5 = 0x7f836eadbd9e "5.0.1 (24 February 2010)"
$6 = 0x7f836eadbdb7 "x86_64-unknown-linux-gnu"
$7 = 0x7f836eadbdd0 "redhat"
$8 = 0x7f836eadba7c ""
$9 = "distress", '\000' <repeats 41 times>
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338

Thread 4 (Thread 0x7f8368c49710 (LWP 3110)):
#0  0x0033772d7393 in select () from /lib64/libc.so.6
#1  0x7f836eab0ad4 in bnet_thread_server (addrs=<value optimized out>,
    max_clients=<value optimized out>, client_wq=<value optimized out>,
    handle_client_request=<value optimized out>) at bnet_server.c:161
#2  0x004468fc in connect_thread (arg=0x1fe3ee8) at ua_server.c:82
#3  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#4  0x0033772de62d in clone () from /lib64/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 3 (Thread 0x7f8363fff710 (LWP 3111)):
#0  0x003377a0b3b9 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x7f836ead402c in watchdog_thread (arg=<value optimized out>)
    at watchdog.c:308
#2  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#3  0x0033772de62d in clone () from /lib64/libc.so.6
#4  0x0000000000000000 in ?? ()

Thread 2 (Thread 0x7f8362bfd710 (LWP 9002)):
#0  0x003377a0ec8d in waitpid () from /lib64/libpthread.so.0
#1  0x7f836eacb7ad in signal_handler (sig=11) at signal.c:229
#2  <signal handler called>
#3  0x003377a0c280 in pthread_kill () from /lib64/libpthread.so.0
#4  0x00420eba in cancel_storage_daemon_job (jcr=0x7f834c01c2f8)
    at job.c:515
#5  0x00410b50 in wait_for_job_termination (jcr=0x7f834c01c2f8,
    timeout=<value optimized out>) at backup.c:538
#6  0x004116f0 in do_backup (jcr=0x7f834c01c2f8) at backup.c:456
#7  0x00421fd4 in job_thread (arg=0x7f834c01c2f8) at job.c:314
#8  0x00423624 in jobq_server (arg=0x673b40) at jobq.c:450
#9  0x003377a06a3a in start_thread () from /lib64/libpthread.so.0
#10 0x0033772de62d in clone () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()

Thread 1 (Thread 0x7f836ea7b7e0 (LWP 3106)):
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338
#0  0x003377a0e91d in nanosleep () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x7f836eaae6f7 in bmicrosleep (sec=60, usec=0) at bsys.c:61
61         stat = nanosleep(timeout, NULL);
        timeout = {tv_sec = 60, tv_nsec = 0}
        tv = {tv_sec = 90194313216, tv_usec = 140202474247679}
        tz = {tz_minuteswest = 372, tz_dsttime = 0}
        stat = <value optimized out>
#2  0x0042e1d5 in wait_for_next_job (
    one_shot_job_to_run=<value optimized out>) at scheduler.c:131
131        bmicrosleep(next_check_secs, 0); /* recheck once per minute */
        jcr = <value optimized out>
        job = <value optimized out>
        run = <value optimized out>
        now = <value optimized out>
        prev = <value optimized out>
        first = false
        next_job = <value optimized out>
#3  0x0040d93d in main (argc=<value optimized out>,
    argv=<value optimized out>) at dird.c:338
338        while ( (jcr = wait_for_next_job(runjob)) ) {
        jcr = <value optimized out>
        test_config = false
        ch = <value optimized out>
        no_signals = false
        uid = 0x0
        gid = 0x0
        mode = <value optimized out>
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.
#0  0x0000000000000000 in ?? ()
No symbol table info available.



 Original Message 
Subject:Director crash
Date:   Tue, 11 Jan 2011 09:11:17 -0800
From:   jerry lowry jlo...@edt.com
To: bacula-users@lists.sourceforge.net



Hi list,

I came in this morning and found that my director had died last night 
after doing two of the backups.  The traceback follows at the end.

This is the scenario:

I noticed yesterday that the only two jobs that were scheduled to 
be performed last night were a monthly backup and the catalog backup.  
Given that I did not have the time to research why the other 5 backups 
were not scheduled I started BAT and selected

Re: [Bacula-users] Problem with backup

2010-07-29 Thread jerry lowry

On 7/29/2010 1:46 PM, Dan Langille wrote:
 On 7/29/2010 11:21 AM, Carlo Filippetto wrote:

 Hi all,
 I have a problem when I do a FULL backup of all the servers in the DMZ. These
 jobs stop with the error "Network error with FD during Backup:
 ERR=Connection reset by peer".

 I have configured the firewall (DMZ - LAN on a Cisco ASA) so that the
 bacula ports (9101/2/3) are open between the servers and the bacula
 server, and they work fine when I do incrementals.

 Have you got any idea how to overcome the problem?
  
 Try looking at the Heartbeat Interval options in bacula-sd.conf and bacula-fd.conf


The other thing to look at is the timeout period set on the 
firewall.  I had the same problem: I changed the default setting on the 
firewall and everything works fine now.
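Putting the two suggestions together: a heartbeat keeps the otherwise-idle control connection alive so a stateful firewall does not drop it mid-backup. A minimal sketch follows; the 300-second interval and resource names are placeholders, and these are fragments, not complete resources:

```
# bacula-fd.conf fragment -- client side
FileDaemon {
  Name = client-fd                # hypothetical name
  Heartbeat Interval = 300        # send a keepalive every 5 minutes
}

# bacula-sd.conf fragment -- storage daemon side
Storage {
  Name = server-sd                # hypothetical name
  Heartbeat Interval = 300
}
```

The interval should be shorter than the firewall's idle-connection timeout.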



Re: [Bacula-users] Problems restoring data

2010-06-07 Thread Jerry Lowry
Well, the difference between the systems is only about 7 minutes, so I 
don't think that would cause it. 

Any other ideas?  I don't want to lose anything at some point down the 
road.  We were able to get past it this time.

jerry

Doug Forster wrote:

Jerry,

I would check your backup server to ensure the correct time is set 
there.  There is usually an error from the bacula client if the time is off 
on Linux hosts; I do not know if the same is true for Windows 
systems.  That is the only thing I can think of that would give you 
this discrepancy, especially since the time stamps are more like 4 PM 
on June 3rd and not like 12:01 AM, which is more than a timezone 
offset would explain. 


On 06/04/2010 12:21 PM, jlowry wrote:

Hello users,

I have a user that wants to restore one particular directory as of  
June 2.  Not a problem.  using bat with the version browser I go out 
to the users backup and based on the 'End Time' date I selected the 
correct 'JobID'.  I then walked thru the directory structure and 
select the directory to restore.  When I restore I always restore to 
a new directory structure, that way I don't make a mistake and wipe 
anything out.


When I look at the directory structure for the date '2010-06-02 
23:19:28' (end time of the backup) I find the directory restored, but the 
dates are wrong.  The dates on the most recent files are from June 3.


-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53a.frm
-rw-r--r-- 1 root root  68128 Jun  1 14:02 p53a21L.frm
-rw-r--r-- 1 root root  68128 Jun  1 14:02 p53a21.frm
-r--r--r-- 1 root root  50877 Jun  2 17:00 pdb.c
drwxr-xr-x 2 mark root   4096 Jun  2 17:02 SCCS
-rw-r--r-- 1 root root  10650 Jun  3 16:38 p53b_lnx.c
-rw-r--r-- 1 root root  11658 Jun  3 16:38 p53bioctl.c
-rw-r--r-- 1 root root 153803 Jun  3 16:38 p53bdep.cpp
-rw-r--r-- 1 root root  19874 Jun  3 16:38 edt_set.c
-r--r--r-- 1 root root   4263 Jun  3 16:39 trace.c
-r--r--r-- 1 root root   1116 Jun  3 16:39 synctest.c
-r--r--r-- 1 root root   2216 Jun  3 16:39 syncsec.c
-r--r--r-- 1 root root   4553 Jun  3 16:39 syncp53b.c
-r--r--r-- 1 root root  14133 Jun  3 16:39 setdebug.c
-r--r--r-- 1 root root  17251 Jun  3 16:39 rttest.c
-r--r--r-- 1 root root   2936 Jun  3 16:39 rtmodecode.c
-r--r--r-- 1 root root   3643 Jun  3 16:39 rtblktest.c
-r--r--r-- 1 root root  31813 Jun  3 16:39 pdbold.c
-rw-r--r-- 1 root root  51232 Jun  3 16:39 p53dbg.c
-rw-r--r-- 1 root root  87524 Jun  3 16:39 p53btest.c
-r--r--r-- 1 root root   3749 Jun  3 16:39 modecode.c
-r--r--r-- 1 root root  15391 Jun  3 16:39 libp53b.c
-r--r--r-- 1 root root 215758 Jun  3 16:39 libedt.c
-r--r--r-- 1 root root   9135 Jun  3 16:39 gtestsim.c
-r--r--r-- 1 root root   9471 Jun  3 16:39 gtest_hwint.c
-r--r--r-- 1 root root  11797 Jun  3 16:39 gtest.c
-r--r--r-- 1 root root   3839 Jun  3 16:39 embselect.c
-r--r--r-- 1 root root  43560 Jun  3 16:39 edt_trace.c
-rw-r--r-- 1 root root  69117 Jun  3 16:39 edt_lnx_kernel.c
-r--r--r-- 1 root root  19447 Jun  3 16:39 edt_error.c
-r--r--r-- 1 root root   1806 Jun  3 16:39 checkp53b.c
-r--r--r-- 1 root root  19865 Jun  3 16:39 bm.c
-r--r--r-- 1 root root   5988 Jun  3 16:39 bm2bc.c
-r--r--r-- 1 root root  12583 Jun  3 16:39 bctest.c
-r--r--r-- 1 root root   2920 Jun  3 16:39 bcsim.c
-r--r--r-- 1 root root   5143 Jun  3 16:39 bcblktest.c
-rw-r--r-- 1 root root  66332 Jun  3 16:46 p53rnc.frm
-rw-r--r-- 1 root root  66332 Jun  3 16:46 p53bnc.frm
-rw-r--r-- 1 root root  66332 Jun  3 16:46 p53anc.frm

Why do these files show up with these dates when the 'end time' of 
the backup is 06-02 23:19:28?


If I restore the files from the previous days backup the dates are fine:

-r--r--r-- 1 root root   3839 Jun  1 13:33 embselect.c
-r--r--r-- 1 root root  43560 Jun  1 13:33 edt_trace.c
-rw-r--r-- 1 root root  69117 Jun  1 13:33 edt_lnx_kernel.c
-r--r--r-- 1 root root  19447 Jun  1 13:33 edt_error.c
-r--r--r-- 1 root root   5217 Jun  1 13:33 checkp53b.c
-r--r--r-- 1 root root  19865 Jun  1 13:33 bm.c
-r--r--r-- 1 root root   5988 Jun  1 13:33 bm2bc.c
-r--r--r-- 1 root root  12583 Jun  1 13:33 bctest.c
-r--r--r-- 1 root root   2920 Jun  1 13:33 bcsim.c
-r--r--r-- 1 root root   5143 Jun  1 13:33 bcblktest.c
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53rnc.frm
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53r.frm
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53bnc.frm
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53b.frm
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53anc.frm
-rw-r--r-- 1 root root  66332 Jun  1 14:02 p53a.frm
-rw-r--r-- 1 root root  68128 Jun  1 14:02 p53a21L.frm
-rw-r--r-- 1 root root  68128 Jun  1 14:02 p53a21.frm
drwxr-xr-x 2 mark root   4096 Jun  1 14:12 SCCS
-rw-r--r-- 1 root root  87350 Jun  1 14:23 p53btest.c
-rw-r--r-- 1 root root 155061 Jun  1 17:08 p53bdep.cpp

And when I restore the same directory from the backup of June 3 the 
dates are the same as June 2.


Is bacula pulling in both dates even though I only select the backup 
for June 2?


I'm at a loss right now.

[Bacula-users] problem mounting volume on disk for restore

2010-06-03 Thread Jerry Lowry
I need to restore a directory from last night's backup.  I have the 
restore running, but it is waiting to mount the volume.


When I try to mount the volume, I can only mount the storage resource 
File1, not the volume hardware-0047 that is on this disk.


I need this quickly, so any help would be great.

03-Jun 10:47 distress-sd JobId 351: Please mount Volume hardware-0047 for:
   Job:  RestoreHardware.2010-06-03_10.19.41_17
   Storage:  FileStorage (/backup0/DBB)
   Pool: Restore
   Media type:   File

--
the volume is in this storage definition:


Storage {
 Name = File1  # used for home/hardware
# Do not use localhost here
 Address = distress# N.B. Use a fully qualified name here

 SDPort = 9103
 Password = 
 Device = FileStorage1
 Media Type = File
}

How do you mount the file on the storage volume?


thanks,
--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com (http://www.edt.com/)






Re: [Bacula-users] problem mounting volume on disk for restore

2010-06-03 Thread Jerry Lowry
Okay, I finally got it.  When I tried to mount it on the storage 
volume, I got an error:


JobId 351: Warning: acquire.c:224 Read open device FileStorage 
(/backup0/DBB) Volume hardware-0047 failed: ERR=dev.c:548 Could not 
open: /backup0/DBB/hardware-0047, ERR=No such file or directory


After really looking at it, I figured it out.  The volume 
hardware-0047 is on /backup1, but the restore is looking for it on 
/backup0/DBB (which is an error in my config).  I created a soft link 
to the volume and away the restore went.
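The workaround can be sketched against a throwaway directory tree; the real paths were /backup1 and /backup0/DBB, and the ones below are stand-ins:

```shell
# Recreate the mismatch in a temp tree and bridge it with a symlink.
# All paths here are stand-ins for the real ones.
set -e
root=$(mktemp -d)
mkdir -p "$root/backup1" "$root/backup0/DBB"
: > "$root/backup1/hardware-0047"        # where the volume really lives
ln -s "$root/backup1/hardware-0047" "$root/backup0/DBB/hardware-0047"
# The restore can now open the volume at the path the config expects:
test -e "$root/backup0/DBB/hardware-0047" && echo "volume reachable"
rm -rf "$root"
```

The symlink is a stopgap; correcting the Device's Archive Device path in bacula-sd.conf is the durable fix.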


I was doing the restore using bat, but the only way I saw the error was 
when I tried the mount in bconsole.


thanks



Jerry Lowry wrote:
I need to restore a directory from last night's backup.  I have the 
restore running, but it is waiting to mount the volume.


When I try to mount the volume, I can only mount the storage resource 
File1, not the volume hardware-0047 that is on this disk.


I need this quickly, so any help would be great.

03-Jun 10:47 distress-sd JobId 351: Please mount Volume hardware-0047 for:
Job:  RestoreHardware.2010-06-03_10.19.41_17
Storage:  FileStorage (/backup0/DBB)
Pool: Restore
Media type:   File

--
the volume is in this storage definition:


Storage {
  Name = File1 # used for home/hardware
# Do not use localhost here
  Address = distress# N.B. Use a fully qualified name here

  SDPort = 9103
  Password = 
  Device = FileStorage1
  Media Type = File
}

How do you mount the file on the storage volume?
  


thanks,
--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com (http://www.edt.com/)

 







--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: www.edt.com (http://www.edt.com/)






Re: [Bacula-users] Problems getting restore to work

2010-04-14 Thread Jerry Lowry
Craig, actually I found out too late that the restore was doing exactly 
what the documentation said it would do.  It restored to the 'root' 
directory of the client.  So, I had just gotten home last night when I got a 
call from one of the engineers that the root directory of the file 
server was full: do you know anything about this directory /home...?
OOPS!!!   So we deleted the errant restore and I think they rebooted the 
file server.  
Goes to show that sometimes we can't remember everything!


Hmm, how many times did I try that?  4, 5, 6 times with different options.

Anyway, it restored.

thanks,
jerry

Craig Ringer wrote:

On 14/04/10 00:19, Jerry Lowry wrote:


I also get an error saying that it can not create a directory on the
same disk for the same reason  ERR= No space left on device. But I
think this is the same type of error.


Betcha you've run out of free inodes on the target file system.

Run df -i to see.

--
Craig Ringer
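Both exhaustion modes report the same "No space left on device" error, so it is worth checking blocks and inodes together; "." below stands in for the actual restore target such as /backup0/bacula-restores:

```shell
# Check both ways a file system can be "full".
df -h .    # free blocks
df -i .    # free inodes -- ENOSPC is also raised when these run out
```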


Re: [Bacula-users] Problems getting restore to work

2010-04-13 Thread Jerry Lowry

Craig,
The file systems are definitely not full, especially /backup0.  The disk 
backup is on the /backup1 volume; its size is below.


FilesystemSize  Used Avail Use% Mounted on
/dev/sda1 241G  6.5G  222G   3% /
tmpfs 1.7G  236K  1.7G   1% /dev/shm
/dev/sda3 963G   31G  884G   4% /backup0
/dev/sdb1 917G  232G  640G  27% /backup1
/dev/sde1 917G  201G  670G  24% /backup2
/dev/sdf1 917G   72M  871G   1% /backup3
/dev/sdc1 917G  210G  661G  25% /backup4
/dev/sdd1 917G  759G  112G  88% /backup5
/dev/sda4 165G  1.5G  155G   1% /database
total 232G
-rw-r----- 1 root root 231G 2010-04-10 14:00 hardware-0014

I don't think any of the files are bigger than 2GB, as they are all pdf 
documents and tool updates.  At most maybe 30 MB, but nothing in the GB 
range.


As for restoring to the right client: I walk through the 'bat' restore and 
select the client (I only have three); it walks through and creates the 
build list.  The 'bacula-restores' directory is owned by 'root' but is 
wide open as far as privileges.
I have only two different full backups, this one and another one.  So 
there aren't a lot of .bsr files to select from, and they match the client 
that was backed up.


I also get an error saying that it cannot create a directory on the 
same disk for the same reason, ERR=No space left on device.  But I 
think this is the same type of error.


Still no joy in restore.
thanks


Craig Ringer wrote:

On 13/04/10 05:15, Jerry Lowry wrote:

Hi,  I am still tweaking a new installation of bacula 5.0.1 on CentOS
5.4.  The backups work fine but I am trying to get the restore to work
and I keep getting the following errors:

Volume hardware-0014 to file:block 7:2933114700.
10-Apr 12:51 swift-fd JobId 118: Error: restore.c:1133 Write error on 
/backup0/bacula-restores/home/hardware/pdf/altera/quartus/81_linux/81_nios2eds_linux.tar: 
No space left on device


The volume is a disk drive and I am trying to restore it to a dedicated
restore directory on a different disk.  I have checked the config files
against an old set that I was running under 1.38 and they look very similar
(i.e. names of pools and disks were changed).


No space left on device can also mean, with some file systems, that the 
file is bigger than the maximum file size permitted by that file system.


Is 81_nios2eds_linux.tar bigger than 2GB? What file system is being 
restored onto?


Are you *SURE* that /backup0/bacula-restores has the free space 
required (according to df -h /backup0/bacula-restores )?


Are you restoring to the right client?

--
Craig Ringer


Re: [Bacula-users] Problems getting restore to work

2010-04-13 Thread Jerry Lowry
Martin, I am trying to restore the files to the file system on the 
bacula server.  The client 'swift-fd' definitely does NOT have room on 
the disk to restore all the pdfs.  That is why my restore is configured 
with where=/backup0/bacula-restores.


No,
jlowry:swift 61> ls /home/hardware/backup0
/home/hardware/backup0: No such file or directory

When it tries the restore it fails to create the directory structure on 
the backup server.  This is based on the error message that I get.


12-Apr 13:54 swift-fd JobId 137: Error: restore.c:1133 Write error on 
/backup0/bacula-restores/home/hardware/pdf/rca/sarah.tv.pdf: No space left on 
device

thanks,
jerry


Martin Simmons wrote:

On Tue, 13 Apr 2010 09:19:11 -0700, Jerry Lowry said:


Craig,
The file systems are definitely not full, especially /backup0.  The disk 
backup is on the /backup1 volume, it size is below.


FilesystemSize  Used Avail Use% Mounted on
/dev/sda1 241G  6.5G  222G   3% /
tmpfs 1.7G  236K  1.7G   1% /dev/shm
/dev/sda3 963G   31G  884G   4% /backup0
/dev/sdb1 917G  232G  640G  27% /backup1
/dev/sde1 917G  201G  670G  24% /backup2
/dev/sdf1 917G   72M  871G   1% /backup3
/dev/sdc1 917G  210G  661G  25% /backup4
/dev/sdd1 917G  759G  112G  88% /backup5
/dev/sda4 165G  1.5G  155G   1% /database
++
total 232G
-rw-r- 1 root root 231G 2010-04-10 14:00 hardware-0014

I don't think any of the files are bigger that 2GB as they are all pdf 
documents and tool updates.  At most maybe 30 MB but nothing in the GB 
region.


As for restoring the right client.  I walk through the 'bat' restore and 
select the client ( only have three ) it walks through and creates the 
build list.  The 'bacula-restores' directory is owned by 'root' but is 
wide open as far as privileges.
I have only two different full backups this one and another one.  So 
there aren't alot of .bsr files to select from and they match the client 
that was backed up.


I also get an error saying that it can not create a directory on the 
same disk for the same reason  ERR= No space left on device.  But I 
think this is the same type of error.



When you do a restore, there are two clients to consider: the one from the
original backup and the one where the restore occurs.  By default, they are
the same.

Just to be doubly sure, was that df output generated on the machine running
swift-fd, which is where the restore is occurring?

Can you see the
/backup0/bacula-restores/home/hardware/pdf/altera/quartus/81_linux/ directory
on that machine?

__Martin



  

Still no joy in restore.
thanks


Craig Ringer wrote:


On 13/04/10 05:15, Jerry Lowry wrote:
  

Hi,  I am still tweaking a new installation of bacula 5.0.1 on Centos
5.4.  The backups work fine but I am trying to get the restore to work
and I keep getting the following errors:

Volume hardware-0014 to file:block 7:2933114700.
10-Apr 12:51 swift-fd JobId 118: Error: restore.c:1133 Write error on 
/backup0/bacula-restores/home/hardware/pdf/altera/quartus/81_linux/81_nios2eds_linux.tar: 
No space left on device


The volume is a disk drive and I am trying to restore it to a dedicated
restore directory on a different disk. I have checked the config files
with an old set that I was running under 1.38 and it look very similar(
ie names of pools and disks were changed).

No space left on device can also mean with some file systems this 
file is bigger than the maximum file size permitted by this file system.


Is 81_nios2eds_linux.tar bigger than 2GB? What file system is being 
restored onto?


Are you *SURE* that /backup0/bacula-restores has the free space 
required (according to df -h /backup0/bacula-restores )?


Are you restoring to the right client?

--
Craig Ringer
  


--
Download Intel® Parallel Studio Eval
Try the new software tools for yourself. Speed compiling, find bugs
proactively, and fine-tune applications for parallel performance.
See why Intel Parallel Studio got high marks during beta.
http://p.sf.net/sfu/intel-sw-dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problems getting restore to work

2010-04-13 Thread Jerry Lowry
thanks Ralf,  after playing around with the setting, that was what it 
was.  It seems to be restoring now... I say "seems" because 'st dir' 
shows the job waiting on the storage 'File1', although when I look at 
the disk it has directories and files and is growing in used space.


Any ideas why it says waiting on storage 'File1' and yet it is running?

jerry

Ralf Gross wrote:

Jerry Lowry schrieb:
  
Martin,  I am trying to restore the files to the file system on the  
bacula server.  The client 'swift-fd' definitely does NOT have room on  
the disk to restore all the pdfs.  That is why my restore is configured  
with - where= /backup0/bacula-restores.


No,
jlowry:swift 61ls /home/hardware/backup0
/home/hardware/backup0: No such file or directory

When it tries the restore it fails to create the directory structure on  
the backup server.  This is based on the error message that I get.


12-Apr 13:54 swift-fd JobId 137: Error: restore.c:1133 Write error on 
/backup0/bacula-restores/home/hardware/pdf/rca/sarah.tv.pdf: No space left on 
device



You are trying to restore to the client swift-fd; I guess this is not
what you want. You have to change this in your restore settings.

Ralf
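For reference, the restore client can be changed interactively from bconsole before the job is submitted. A rough sketch of the interaction (prompts paraphrased from memory, not verbatim, and the menu number varies by version):

```
* restore
(select files as usual)
OK to run? (yes/mod/no): mod
Select parameter to modify:
   ...
   n: Restore Client    <- pick this to write the files to a
                           different machine (e.g. the director host)
```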



[Bacula-users] Problems getting restore to work

2010-04-12 Thread Jerry Lowry
Hi,  I am still tweaking a new installation of bacula 5.0.1 on Centos 
5.4.  The backups work fine but I am trying to get the restore to work 
and I keep getting the following errors:


Volume hardware-0014 to file:block 7:2933114700.
10-Apr 12:51 swift-fd JobId 118: Error: restore.c:1133 Write error on 
/backup0/bacula-restores/home/hardware/pdf/altera/quartus/81_linux/81_nios2eds_linux.tar:
 No space left on device

The volume is a disk drive and I am trying to restore it to a dedicated 
restore directory on a different disk.  I have checked the config files 
against an old set that I was running under 1.38 and they look very similar 
(i.e. names of pools and disks were changed).


thanks

Here are my config files ( directory and storage):
#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
 Name = RestoreHardware
 Type = Restore
 Client=distress-fd
 FileSet = "Swift Hardware Set"
 Storage = File1 
 Pool = Restore

 Messages = Standard
 Where = /backup0/bacula-restores
}
# List of files to be backed up
FileSet {
 Name = "Swift Hardware Set"
 Include {
   Options {
 signature = MD5
   }
#   
#  Put your list of files here, preceded by 'File =', one per line

#or include an external list with:
#
#File = file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitions such as /usr or /home
#you will probably want to add them too.
#
   File = /home/hardware
 }

#
# If you backup the root directory, the following two excluded
#   files can be useful
#
 Exclude {
#File = /var/run/bacula/working
#File = /tmp
#File = /proc
#File = /tmp
#File = /.journal
#File = /.fsck
 }
}
Storage {
 Name = File0   # used for database and restores only
# Do not use localhost here   
 Address = distress# N.B. Use a fully qualified name here

 SDPort = 9103
 Password = 
 Device = FileStorage0
 Media Type = File
}


# Definition of file storage device
Storage {
 Name = File1   # used for home/hardware
# Do not use localhost here   
 Address = distress# N.B. Use a fully qualified name here

 SDPort = 9103
 Password = 
 Device = FileStorage1
 Media Type = File
}
# Default tape pool definition
Pool {
 Name = Restore
 Pool Type = Backup
 Recycle = yes                  # Bacula can automatically recycle Volumes

 AutoPrune = yes # Prune expired volumes
 Volume Retention = 30 days # one month
}

# File Pool definition
Pool {
 Name = Pool0
 Pool Type = Backup
 Recycle = yes                  # Bacula can automatically recycle Volumes

 AutoPrune = yes # Prune expired volumes
 Volume Retention = 7 days # one week
 Maximum Volume Bytes = 500G    # Limit Volume size to something reasonable

 Maximum Volumes = 2   # Limit number of Volumes in Pool
}

# File Pool definition
Pool {
 Name = Pool1
 Pool Type = Backup
 Recycle = yes                  # Bacula can automatically recycle Volumes

 AutoPrune = yes # Prune expired volumes
 Volume Retention = 7 days # one week
 Maximum Volume Bytes = 500G    # Limit Volume size to something reasonable

 Maximum Volumes = 2   # Limit number of Volumes in Pool
 Label Format = hardware-
}
**
bacula-sd
Device {
 Name = Restore
 Media Type = File
 Archive Device = /backup0/bacula-restores
 LabelMedia = yes;   # lets Bacula label unlabeled media
 Random Access = Yes;
 AutomaticMount = yes;   # when device opened, read it
 RemovableMedia = no;
 AlwaysOpen = no;
}

Device {
 Name = FileStorage0
 Media Type = File
 Archive Device = /backup0/DBB
 LabelMedia = yes;   # lets Bacula label unlabeled media
 Random Access = Yes;
 AutomaticMount = yes;   # when device opened, read it
 RemovableMedia = no;
 AlwaysOpen = no;
}

Device {
 Name = FileStorage1
 Media Type = File
 Archive Device = /backup1
 LabelMedia = yes;   # lets Bacula label unlabeled media
 Random Access = Yes;
 AutomaticMount = yes;   # when device opened, read it
 RemovableMedia = no;
 AlwaysOpen = no;
}



Re: [Bacula-users] Fatal error: backup.c:892 Network send error to SD. ERR=Connection reset by peer

2010-04-09 Thread jerry lowry
On 4/10/2010 3:30 AM, Jon Schewe wrote:
 On 04/08/2010 07:04 AM, Matija Nalis wrote:

 On Wed, Apr 07, 2010 at 02:15:14PM +0100, Prashant Ramhit wrote:

  
 06-Apr 12:54 client-fd JobId 299: Fatal error: backup.c:892 Network send 
 error to SD. ERR=Connection reset by peer

 Is it possible to tell me how to enable more debug on the client and
 storage so that I can find more clues to this issue?


 You can use -d number to increase debug level; but in your case it
 should be pretty clear -- something (usually router or firewall)
 between SD and FD (or even local firewalls on themselves) is killing
 TCP connection (usually because it was idle for too long).

 See http://tinyurl.com/y8wapdu
 Maybe adding a Heartbeat Interval helps you.


  
 I have heartbeat intervals set at the following:
 bacula-dir.conf:
 client {
Heartbeat interval = 15 Seconds
 }
 storage {
Heartbeat interval = 1 minutes
 }

 bacula-sd.conf
 storage {
Heartbeat interval = 1 minute
 }

 bacula-fd.conf
 FileDaemon {
Heartbeat Interval = 5 seconds
 }


 --
 Download Intel#174; Parallel Studio Eval
 Try the new software tools for yourself. Speed compiling, find bugs
 proactively, and fine-tune applications for parallel performance.
 See why Intel Parallel Studio got high marks during beta.
 http://p.sf.net/sfu/intel-sw-dev
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

Hi,  are you backing up through a firewall?  I had this same problem and 
it turned out that the firewall had a limit set on how long a session will 
last.  I reset the limit and all my backups work as planned.
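Besides raising the firewall's session limit, idle-kill problems like this are sometimes worked around at the OS level by making TCP keepalives fire more often than the firewall's timeout. A hedged sketch for Linux (the values are illustrative examples, not a recommendation):

```
# /etc/sysctl.d/ fragment (example values only)
net.ipv4.tcp_keepalive_time = 600    # start probing after 10 min idle
net.ipv4.tcp_keepalive_intvl = 60    # probe interval once probing starts
net.ipv4.tcp_keepalive_probes = 5    # give up after 5 unanswered probes
```

Bacula's own Heartbeat Interval directive, as discussed above, achieves much the same thing at the application layer.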

jerry



[Bacula-users] Problem with backup failing after 2 hours

2010-03-16 Thread Jerry Lowry
Hi,
I have a new installation that I am tweaking the backups on.  One of my 
backups goes through the firewall from a public IP address to where the 
backup server sits on a private IP address.  The backup works just fine 
for the first 2h 11m 15s and then it fails.  After the first test failed 
I inserted the 'Heartbeat Interval' option and set it to 30 sec.  This 
slowed down the backup but I wanted to make sure that it continued 
through the entire disk.  The disk that I am backing up has close to 
250 GB of data on it.  I get approx. 50 GB backed up before it fails.  Is 
there any other setting that will help this finish?
The logs are here:

First attempt:

15-Mar 10:26 distress-sd JobId 38: Labeled new Volume hardware-0010 on device 
FileStorage1 (/backup1).
15-Mar 10:26 distress-sd JobId 38: Wrote label to prelabeled Volume 
hardware-0010 on device FileStorage1 (/backup1)
15-Mar 12:37 distress-dir JobId 38: Fatal error: Network error with FD during 
Backup: ERR=Connection timed out
15-Mar 12:37 distress-sd JobId 38: JobId=38 
Job=BackupHardware.2010-03-15_10.26.42_06 marked to be canceled.
15-Mar 12:37 distress-sd JobId 38: Job write elapsed time = 02:11:15, Transfer 
rate = 8.120 M Bytes/second
15-Mar 12:37 distress-sd JobId 38: Error: bsock.c:529 Read expected 65536 got 
14596 from client:70.99.222.36:36643
15-Mar 12:37 distress-dir JobId 38: Fatal error: No Job status returned from FD.
15-Mar 12:37 distress-dir JobId 38: Error: Bacula distress-dir 5.0.1 (24Feb10): 
15-Mar-2010 12:37:59
  Build OS:   x86_64-unknown-linux-gnu redhat 
  JobId:  38
  Job:BackupHardware.2010-03-15_10.26.42_06
  Backup Level:   Full (upgraded from Incremental)
  Client: swift-fd 5.0.1 (24Feb10) 
sparc-sun-solaris2.10,solaris,5.10
  FileSet:Swift Hardware Set 2010-03-10 11:30:53
  Pool:   Pool1 (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:File1 (From command line)
  Scheduled time: 15-Mar-2010 10:26:34
  Start time: 15-Mar-2010 10:26:44
  End time:   15-Mar-2010 12:37:59
  Elapsed time:   2 hours 11 mins 15 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   707,122
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   63,952,326,117 (63.95 GB)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): hardware-0010
  Volume Session Id:  1
  Volume Session Time:1268673669
  Last Volume Bytes:  64,023,383,638 (64.02 GB)
  Non-fatal FD errors:0
  SD Errors:  1
  FD termination status:  Error
  SD termination status:  Canceled
  Termination:*** Backup Error ***

Second attempt:

15-Mar 17:14 distress-dir JobId 40: Fatal error: Network error with FD during 
Backup: ERR=Connection timed out
15-Mar 17:14 distress-sd JobId 40: JobId=40 
Job=BackupHardware.2010-03-15_15.02.58_10 marked to be canceled.
15-Mar 17:14 distress-sd JobId 40: Job write elapsed time = 01:57:35, Transfer 
rate = 7.432 M Bytes/second
15-Mar 17:14 distress-dir JobId 40: Fatal error: No Job 15-Mar 17:14 
distress-dir JobId 40: Error: Bacula distress-dir 5.0.1 (24Feb10): 15-Mar-2010 
17:14:15
  Build OS:   x86_64-unknown-linux-gnu redhat 
  JobId:  40
  Job:BackupHardware.2010-03-15_15.02.58_10
  Backup Level:   Full (upgraded from Incremental)
  Client: swift-fd 5.0.1 (24Feb10) 
sparc-sun-solaris2.10,solaris,5.10
  FileSet:Swift Hardware Set 2010-03-10 11:30:53
  Pool:   Pool1 (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:File1 (From command line)
  Scheduled time: 15-Mar-2010 15:02:57
  Start time: 15-Mar-2010 15:03:00
  End time:   15-Mar-2010 17:14:15
  Elapsed time:   2 hours 11 mins 15 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): hardware-0012
  Volume Session Id:  3
  Volume Session Time:1268673669
  Last Volume Bytes:  51,996,669,624 (51.99 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Running
  Termination:*** Backup Error ***
status returned from FD.

thanks,
-- 

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315

Re: [Bacula-users] Problems with storage device not being seen

2010-03-12 Thread Jerry Lowry
I found the problem.  The director had the tape devices defined with
'Autochanger' commented out.  The storage daemon had 'Autochanger' 
uncommented and set to 'yes'.  I commented out 'Autochanger' in the SD 
and it works just fine.
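In other words, the Autochanger setting has to agree between the Director's Storage resource and the SD's Device resource. A sketch of the consistent standalone-drive form, based on the resources quoted below (password is a placeholder):

```
# bacula-dir.conf -- Director side
Storage {
  Name = Tape-Left
  Address = distress
  SDPort = 9103
  Password = "xxx"             # placeholder
  Device = Drive-2
  Media Type = DLT-V4
  # Autochanger = yes          # commented out: standalone drive
}

# bacula-sd.conf -- Storage daemon side
Device {
  Name = Drive-2
  Drive Index = 1
  Media Type = DLT-V4
  Archive Device = /dev/nst1
  RemovableMedia = yes
  RandomAccess = no
  # AutoChanger = yes          # must match the Director: also commented out
}
```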

See I knew it was easy.


Jerry Lowry wrote:
 I am setting up a new installation and I am having difficulty with the 
 tape drive not being seen. Here is the error message:

 Job=TapeBackup.2010-03-11_14.34.23_05
 11-Mar 14:34 distress-dir JobId 20: Fatal error: 
  Storage daemon didn't accept Device Drive-2 command.
 11-Mar 14:34 distress-dir JobId 20: Error: Bacula distress-dir 5.0.1 
 (24Feb10): 11-Mar-2010 14:34:26
   Build OS:   x86_64-unknown-linux-gnu redhat 
   JobId:  20
   Job:TapeBackup.2010-03-11_14.34.23_05
   Backup Level:   Full (upgraded from Incremental)
   Client: distress-fd 5.0.1 (24Feb10) 
 x86_64-unknown-linux-gnu,redhat,
   FileSet:Tape Set 2010-03-10 23:05:00
   Pool:   Default (From Job resource)
   Catalog:MyCatalog (From Client resource)
   Storage:Tape-Left (From Job resource)
   Scheduled time: 11-Mar-2010 14:34:14

 -
 I have checked both the dir-conf and the sd-conf and either I am blind due 
 to the length of time I've spent on it or I am just missing it.
 Probably an easy fix.

 Here are the snapshots of the config files.
 DIR.conf
 # Definition of DLT tape storage device
 Storage {
 Name = Tape-Right # Do not use localhost here
 Address = distress # N.B. Use a fully qualified name here
 SDPort = 9103
 Password =  # password for Storage daemon
 Device = Drive-1 # must be same as Device in Storage daemon
 Media Type = DLT-V4 # must be same as MediaType in Storage daemon
 # Autochanger = yes # enable for autochanger device
 }

 # Definition of DLT tape storage device
 Storage {
 Name = Tape-Left # Do not use localhost here
 Address = distress # N.B. Use a fully qualified name here
 SDPort = 9103
 Password =  # password for Storage daemon
 Device = Drive-2 # must be same as Device in Storage daemon
 Media Type = DLT-V4 # must be same as MediaType in Storage daemon
 # Autochanger = yes # enable for autochanger device
 }
 SD.conf
 Device {
 Name = Drive-1 #
 Drive Index = 0
 Media Type = DLT-V4
 Archive Device = /dev/nst0
 AutomaticMount = no; # when device opened, read it
 AlwaysOpen = yes;
 RemovableMedia = yes;
 RandomAccess = no;
 AutoChanger = yes
 }
 Device {
 Name = Drive-2 #
 Drive Index = 1
 Media Type = DLT-V4
 Archive Device = /dev/nst1
 AutomaticMount = no; # when device opened, read it
 AlwaysOpen = yes;
 RemovableMedia = yes;
 RandomAccess = no;
 AutoChanger = yes
 }

 also get a GDB traceback each time the job errors:

 [?1034h[Thread debugging using libthread_db enabled]
 [New Thread 0x7fbb1e655710 (LWP 22578)]
 [New Thread 0x7fbb1f056710 (LWP 22530)]
 0x003666cd6ca3 in select () from /lib64/libc.so.6
 $1 = '\000' repeats 29 times
 $2 = 0xa16058 bacula-sd
 $3 = 0xa16098 /usr/bacula/bin/bacula-sd
 $4 = 0x0
 $5 = 0x7fbb258cbd9e 5.0.1 (24 February 2010)
 $6 = 0x7fbb258cbdb7 x86_64-unknown-linux-gnu
 $7 = 0x7fbb258cbdd0 redhat
 $8 = 0x7fbb258cba7c 
 $9 = distress, '\000' repeats 41 times
 #0  0x003666cd6ca3 in select () from /lib64/libc.so.6
 #1  0x7fbb258a0ad4 in bnet_thread_server (addrs=value optimized out, 
 max_clients=value optimized out, client_wq=value optimized out, 
 handle_client_request=value optimized out) at bnet_server.c:161
 #2  0x0040763e in main (argc=value optimized out, 
 argv=value optimized out) at stored.c:312

 Thread 3 (Thread 0x7fbb1f056710 (LWP 22530)):
 #0  0x00366740b3b9 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
 #1  0x7fbb258c402c in watchdog_thread (arg=value optimized out)
 at watchdog.c:308
 #2  0x003667406a3a in start_thread () from /lib64/libpthread.so.0
 #3  0x003666cddf3d in clone () from /lib64/libc.so.6
 #4  0x in ?? ()

 Thread 2 (Thread 0x7fbb1e655710 (LWP 22578)):
 #0  0x00366740eb3d in waitpid () from /lib64/libpthread.so.0
 #1  0x7fbb258bb7ad in signal_handler (sig=11) at signal.c:229
 #2  signal handler called
 #3  0x00431bb3 in is_vol_in_autochanger (vol=value optimized out, 
 rctx=value optimized out) at reserve.c:389
 #4  find_suitable_device_for_job (vol=value optimized out, 
 rctx=value optimized out) at reserve.c:456
 #5  0x0043269a in use_storage_cmd (jcr=value optimized out)
 at reserve.c:317
 #6  use_cmd (jcr=value optimized out) at reserve.c:71
 #7  0x0041e80f in handle_connection_request (arg=0xa1aad8)
 at dircmd.c:233
 #8  0x7fbb258c44a9 in workq_server (arg=0x651e20) at workq.c:346
 #9  0x003667406a3a in start_thread () from /lib64/libpthread.so.0
 #10 0x003666cddf3d in clone () from /lib64/libc.so.6
 #11 0x in ?? ()

 Thread 1 (Thread

[Bacula-users] Problems with storage device not being seen

2010-03-11 Thread Jerry Lowry
I am setting up a new installation and I am having difficulty with the 
tape drive not being seen. Here is the error message:

Job=TapeBackup.2010-03-11_14.34.23_05
11-Mar 14:34 distress-dir JobId 20: Fatal error: 
 Storage daemon didn't accept Device Drive-2 command.
11-Mar 14:34 distress-dir JobId 20: Error: Bacula distress-dir 5.0.1 (24Feb10): 
11-Mar-2010 14:34:26
  Build OS:   x86_64-unknown-linux-gnu redhat 
  JobId:  20
  Job:TapeBackup.2010-03-11_14.34.23_05
  Backup Level:   Full (upgraded from Incremental)
  Client: distress-fd 5.0.1 (24Feb10) 
x86_64-unknown-linux-gnu,redhat,
  FileSet:Tape Set 2010-03-10 23:05:00
  Pool:   Default (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:Tape-Left (From Job resource)
  Scheduled time: 11-Mar-2010 14:34:14

-
I have checked both the dir-conf and the sd-conf and either I am blind due 
to the length of time I've spent on it or I am just missing it.
Probably an easy fix.

Here are the snapshots of the config files.
DIR.conf
# Definition of DLT tape storage device
Storage {
Name = Tape-Right # Do not use localhost here
Address = distress # N.B. Use a fully qualified name here
SDPort = 9103
Password =  # password for Storage daemon
Device = Drive-1 # must be same as Device in Storage daemon
Media Type = DLT-V4 # must be same as MediaType in Storage daemon
# Autochanger = yes # enable for autochanger device
}

# Definition of DLT tape storage device
Storage {
Name = Tape-Left # Do not use localhost here
Address = distress # N.B. Use a fully qualified name here
SDPort = 9103
Password =  # password for Storage daemon
Device = Drive-2 # must be same as Device in Storage daemon
Media Type = DLT-V4 # must be same as MediaType in Storage daemon
# Autochanger = yes # enable for autochanger device
}
SD.conf
Device {
Name = Drive-1 #
Drive Index = 0
Media Type = DLT-V4
Archive Device = /dev/nst0
AutomaticMount = no; # when device opened, read it
AlwaysOpen = yes;
RemovableMedia = yes;
RandomAccess = no;
AutoChanger = yes
}
Device {
Name = Drive-2 #
Drive Index = 1
Media Type = DLT-V4
Archive Device = /dev/nst1
AutomaticMount = no; # when device opened, read it
AlwaysOpen = yes;
RemovableMedia = yes;
RandomAccess = no;
AutoChanger = yes
}

also get a GDB traceback each time the job errors:

[?1034h[Thread debugging using libthread_db enabled]
[New Thread 0x7fbb1e655710 (LWP 22578)]
[New Thread 0x7fbb1f056710 (LWP 22530)]
0x003666cd6ca3 in select () from /lib64/libc.so.6
$1 = '\000' repeats 29 times
$2 = 0xa16058 bacula-sd
$3 = 0xa16098 /usr/bacula/bin/bacula-sd
$4 = 0x0
$5 = 0x7fbb258cbd9e 5.0.1 (24 February 2010)
$6 = 0x7fbb258cbdb7 x86_64-unknown-linux-gnu
$7 = 0x7fbb258cbdd0 redhat
$8 = 0x7fbb258cba7c 
$9 = distress, '\000' repeats 41 times
#0  0x003666cd6ca3 in select () from /lib64/libc.so.6
#1  0x7fbb258a0ad4 in bnet_thread_server (addrs=value optimized out, 
max_clients=value optimized out, client_wq=value optimized out, 
handle_client_request=value optimized out) at bnet_server.c:161
#2  0x0040763e in main (argc=value optimized out, 
argv=value optimized out) at stored.c:312

Thread 3 (Thread 0x7fbb1f056710 (LWP 22530)):
#0  0x00366740b3b9 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x7fbb258c402c in watchdog_thread (arg=value optimized out)
at watchdog.c:308
#2  0x003667406a3a in start_thread () from /lib64/libpthread.so.0
#3  0x003666cddf3d in clone () from /lib64/libc.so.6
#4  0x in ?? ()

Thread 2 (Thread 0x7fbb1e655710 (LWP 22578)):
#0  0x00366740eb3d in waitpid () from /lib64/libpthread.so.0
#1  0x7fbb258bb7ad in signal_handler (sig=11) at signal.c:229
#2  signal handler called
#3  0x00431bb3 in is_vol_in_autochanger (vol=value optimized out, 
rctx=value optimized out) at reserve.c:389
#4  find_suitable_device_for_job (vol=value optimized out, 
rctx=value optimized out) at reserve.c:456
#5  0x0043269a in use_storage_cmd (jcr=value optimized out)
at reserve.c:317
#6  use_cmd (jcr=value optimized out) at reserve.c:71
#7  0x0041e80f in handle_connection_request (arg=0xa1aad8)
at dircmd.c:233
#8  0x7fbb258c44a9 in workq_server (arg=0x651e20) at workq.c:346
#9  0x003667406a3a in start_thread () from /lib64/libpthread.so.0
#10 0x003666cddf3d in clone () from /lib64/libc.so.6
#11 0x in ?? ()

Thread 1 (Thread 0x7fbb25889720 (LWP 22522)):
#0  0x003666cd6ca3 in select () from /lib64/libc.so.6
#1  0x7fbb258a0ad4 in bnet_thread_server (addrs=value optimized out, 
max_clients=value optimized out, client_wq=value optimized out, 
handle_client_request=value optimized out) at bnet_server.c:161
#2  0x0040763e in main (argc=value optimized out, 
argv=value optimized out) at 

[Bacula-users] bacula 5.0.1 installation problems

2010-03-05 Thread Jerry Lowry
OK, I have Bacula installed without OpenSSL and without bat. But I would 
like to use bat for management of Bacula.


Qt make problem:
When I run the Qt make I am getting errors from the OpenSSL modules. I 
went looking through the configure files in the Qt directory but did not 
see anything that specified OpenSSL.  I have attached the std output 
showing the errors.  They are coming from the SSL modules.


How does one turn off OpenSSL in Qt? Which configure file is it hiding in?
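If memory serves, the Qt 4 configure script takes an explicit switch for this rather than requiring a file edit; something like the following (flag name from the Qt 4 era, worth verifying against ./configure -help in the depkgs-qt tree):

```
# inside the depkgs-qt Qt source tree
./configure -no-openssl
make
```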

thanks,

jerry
--

qnativesocketengine_unix.cpp:167: note: initialized from here
In file included from qsslcertificate.cpp:124:
qsslsocket_openssl_symbols_p.h:271: error: variable or field ‘q_sk_free’ 
declared void
qsslsocket_openssl_symbols_p.h:271: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:271: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:272: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:272: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: expected primary-expression before 
‘int’
qsslsocket_openssl_symbols_p.h:273: error: initializer expression list treated 
as compound expression
qsslcertificate.cpp: In member function 
‘QMultiMap<QSsl::AlternateNameEntryType, QString> 
QSslCertificate::alternateSubjectNames() const’:
qsslcertificate.cpp:371: error: ‘STACK’ was not declared in this scope
qsslcertificate.cpp:371: error: ‘altNames’ was not declared in this scope
qsslcertificate.cpp:371: error: expected primary-expression before ‘)’ token
qsslcertificate.cpp:371: error: expected ‘;’ before ‘q_X509_get_ext_d2i’
qsslcertificate.cpp:383: error: ‘q_sk_free’ was not declared in this scope
make[2]: *** [.obj/release-static/qsslcertificate.o] Error 1
make[2]: *** Waiting for unfinished jobs
make[1]: *** [sub-network-make_default-ordered] Error 2
In file included from qsslcertificate.cpp:124:
qsslsocket_openssl_symbols_p.h:271: error: variable or field ‘q_sk_free’ 
declared void
qsslsocket_openssl_symbols_p.h:271: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:271: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:272: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:272: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: ‘STACK’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: ‘a’ was not declared in this 
scope
qsslsocket_openssl_symbols_p.h:273: error: expected primary-expression before 
‘int’
qsslsocket_openssl_symbols_p.h:273: error: initializer expression list treated 
as compound expression
qsslcertificate.cpp: In member function 
‘QMultiMap<QSsl::AlternateNameEntryType, QString> 
QSslCertificate::alternateSubjectNames() const’:
qsslcertificate.cpp:371: error: ‘STACK’ was not declared in this scope
qsslcertificate.cpp:371: error: ‘altNames’ was not declared in this scope
qsslcertificate.cpp:371: error: expected primary-expression before ‘)’ token
qsslcertificate.cpp:371: error: expected ‘;’ before ‘q_X509_get_ext_d2i’
qsslcertificate.cpp:383: error: ‘q_sk_free’ was not declared in this scope
make[2]: *** [.obj/release-static/qsslcertificate.o] Error 1
make[1]: *** [sub-network-install_subtargets-ordered] Error 2



Re: [Bacula-users] bacula 5.0.1 installation errors

2010-03-05 Thread Jerry Lowry
That would work, but it really is a pain to get all the dependent 
software working with the old version.  I don't use SSL for my 
backups, so just turning off OpenSSL for Qt would be fine.

thanks 




Hugh Brown wrote:
 Jerry Lowry wrote:
   
 Hi, I am trying to install bacula 5.0.1 on a new backup server.  I have
 attached the error that I am getting during the make and the config.out.
 The server configuration is:

 fedora 12 x86_64
 mysql 5.1.42
 bacula src 5.0.1
 

 Perhaps old version of OpenSSL libs?

 --
 Hugh Brown, Systems Manager
 The Centre for High-Throughput Biology
 hbr...@chibi.ubc.ca
   



[Bacula-users] bacula 5.0.1 installation errors

2010-03-02 Thread Jerry Lowry
Hi, I am trying to install bacula 5.0.1 on a new backup server.  I have 
attached the error that I am getting during the make and the 
config.out.  The server configuration is:


fedora 12 x86_64
mysql 5.1.42
bacula src 5.0.1

Any ideas as to why this is failing?  Also, if I include '--enable-bat' 
after installing depkgs-qt and sourcing the qt directories, the bacula 
configure does not see that Qt is installed.
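One common cause of configure missing Qt is that the qmake from depkgs-qt is not first in PATH when Bacula's configure runs. A hedged sketch (the paths are illustrative; the actual depkgs-qt layout may differ):

```
# after building depkgs-qt (paths illustrative)
export QTDIR=/path/to/depkgs-qt/qt4
export PATH=$QTDIR/bin:$PATH
which qmake      # should point into the depkgs-qt tree
./configure --enable-bat ...
```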

--

---
Jerold Lowry
IT Manager / Software Engineer
Engineering Design Team (EDT), Inc. a HEICO company
1400 NW Compton Drive, Suite 315
Beaverton, Oregon 97006 (U.S.A.)
Phone: 503-690-1234 / 800-435-4320
Fax: 503-690-1243
Web: http://www.edt.com/





Configuration on Tue Mar  2 10:02:32 PST 2010:

   Host:x86_64-unknown-linux-gnu -- redhat 
   Bacula version:  Bacula 5.0.1 (24 February 2010)
   Source code location:.
   Install binaries:/usr/bacula/bin
   Install libraries:   /usr/lib64
   Install config files:/usr/bacula/bin
   Scripts directory:   /usr/bacula/bin
   Archive directory:   /tmp
   Working directory:   /var/run/bacula/working
   PID directory:   /var/run/bacula
   Subsys directory:/var/run/bacula/working
   Man directory:   ${datarootdir}/man
   Data directory:  /usr/share
   Plugin directory:/usr/lib64
   C Compiler:  gcc 4.4.2
   C++ Compiler:/usr/lib64/ccache/g++ 4.4.2
   Compiler flags:   -g -O2 -Wall -fno-strict-aliasing -fno-exceptions 
-fno-rtti
   Linker flags: 
   Libraries:   -lpthread -ldl 
   Statically Linked Tools: no
   Statically Linked FD:no
   Statically Linked SD:no
   Statically Linked DIR:   no
   Statically Linked CONS:  no
   Database type:   MySQL
   Database port:
   Database lib:-L/usr/lib64/mysql -lmysqlclient_r -lz
   Database name:   bacula
   Database user:   bacula

   Job Output Email:jlo...@edt.com
   Traceback Email: jlo...@edt.com
   SMTP Host Address:   mailhost.edt.com

   Director Port:   9101
   File daemon Port:9102
   Storage daemon Port: 9103

   Director User:   
   Director Group:  
   Storage Daemon User: 
   Storage DaemonGroup: 
   File Daemon User:
   File Daemon Group:   

   SQL binaries Directory   /usr/bin

   Large file support:  yes
   Bacula conio support:yes -ltermcap
   readline support:no 
   TCP Wrappers support:no 
   TLS support: yes
   Encryption support:  yes
   ZLIB support:yes
   enable-smartalloc:   yes
   enable-lockmgr:  no
   bat support: no
   enable-gnome:no 
   enable-bwx-console:  no 
   enable-tray-monitor: yes
   client-only: no
   build-dird:  yes
   build-stored:yes
   Plugin support:  yes
   ACL support: yes
   XATTR support:   yes
   Python support:  no 
   Batch insert enabled:yes

  

CLFLAGS=-g -02 ./configure --sbindir=/usr/bacula/bin 
--sysconfdir=/usr/bacula/bin --with-pid-dir=/var/run/bacula 
--with-subsys-dir=/var/run/bacula/working --enable-smartalloc --with-mysql 
--with-working-dir=/var/run/bacula/working --with-dump-email=jlo...@edt.com 
--with-job-email=jlo...@edt.com --with-smtp-host=mailhost.edt.com 
--docdir=/var/run/bacula/doc --enable-tray-monitor



make[1]: Entering directory `/database/bacula-5.0.1/src/lib'
Compiling attr.c
Compiling base64.c
Compiling berrno.c
Compiling bsys.c
Compiling bget_msg.c
Compiling bnet.c
Compiling bnet_server.c
Compiling runscript.c
Compiling bsock.c
Compiling bpipe.c
Compiling bsnprintf.c
Compiling btime.c
Compiling cram-md5.c
Compiling crc32.c
Compiling crypto.c
crypto.c: In function ‘ASN1_OCTET_STRING* openssl_cert_keyid(X509*)’:
crypto.c:333: error: invalid conversion from ‘const X509V3_EXT_METHOD*’ to 
‘X509V3_EXT_METHOD*’
crypto.c: In function ‘CRYPTO_SESSION* crypto_session_new(crypto_cipher_t, 
alist*)’:
crypto.c:1102: error: cannot convert ‘unsigned char*’ to 
‘EVP_PKEY_CTX*’ for argument ‘1’ to ‘int 
EVP_PKEY_encrypt(EVP_PKEY_CTX*, unsigned char*, size_t*, const unsigned char*, 
size_t)’
crypto.c: In function ‘crypto_error_t crypto_session_decode(const u_int8_t*, 
u_int32_t, alist*, CRYPTO_SESSION**)’:
crypto.c:1226: error: cannot convert ‘unsigned char*’ to 
‘EVP_PKEY_CTX*’ for argument ‘1’ to ‘int 
EVP_PKEY_decrypt(EVP_PKEY_CTX*, unsigned char*, size_t*, const unsigned char*, 
size_t)’
make[1]: *** [crypto.lo] Error 1
make[1]: Leaving directory `/database/bacula-5.0.1/src/lib'


  == Error in /database/bacula-5.0.1/src/lib ==


[Bacula-users] Problem adding a new tape drive

2008-01-10 Thread Jerry Lowry
I have been trying to add another tape drive to my server.  These are not in
an autochanger; they are separate tape drives.  I can access the tape drive
from Linux with mt, and as you will see in the documentation below, btape
worked as well.  But when I try to access the tape from bacula it does not
see it.  I thought I had the configuration set up correctly, but I must have
missed something.  I need another pair of eyes to look at the SD and DIR
config.

thanks


---

label
The defined Storage resources are:
 1: File
 2: Quantum-DLT-V4
 3: Quantum-DLT-V4-1
Select Storage resource (1-3): 3
Enter new Volume name: distress-mon0208
Defined Pools:
 1: Default
 2: Monthly
Select the Pool (1-2): 2
Connecting to Storage daemon Quantum-DLT-V4-1 at
distress.ACCOUNTING.EDT.LOCAL:9103 ...
Sending label command for Volume distress-mon0208 Slot 0 ...
3999 Device Quantum-DLT-1 not found or could not be opened.
Label command failed for Volume distress-mon0208.
Do not forget to mount the drive!!!


st stor
The defined Storage resources are:
 1: File
 2: Quantum-DLT-V4
 3: Quantum-DLT-V4-1
Select Storage resource (1-3): 3
Connecting to Storage daemon Quantum-DLT-V4-1 at
distress.ACCOUNTING.EDT.LOCAL:9103

distress-sd Version: 2.0.3 (06 March 2007) i686-pc-linux-gnu redhat (Zod)
Daemon started 09-Jan-08 11:52, 11 Jobs run since started.
 Heap: bytes=19,491 max_bytes=152,863 bufs=84 max_bufs=103

Running Jobs:
No Jobs running.

Jobs waiting to reserve a drive:


Terminated Jobs:
 JobId  Level    Files  Bytes   Status   Finished        Name
===
  1800  Incr20211.02 G  OK   09-Jan-08 19:24 Futility
  1801  Incr114995.0 M  OK   09-Jan-08 20:07 Denial
  1802  Incr174266.9 M  OK   09-Jan-08 21:07 nestucca
  1803  Incr4471.864 G  OK   09-Jan-08 22:10 Bagby
  1804  Full 35,51215.28 G  OK   09-Jan-08 22:41 Destruction
  1805  Full  0 0   Other09-Jan-08 23:11 Doom
  1806  Incr  1,3491.758 G  OK   10-Jan-08 08:47 Gloom
  1807  Incr663777.3 M  OK   10-Jan-08 08:49 Distress
  1808  Full  1162.4 M  OK   10-Jan-08 08:49 BackupCatalog
  1809  Full  0 0   Cancel   10-Jan-08 14:43 Doom


Device status:
Device FileStorage (/backupdb/backup) is not open.
Device Quantum-DLT (/dev/nst0) is mounted with Volume=distress-010908
Pool=Default
Total Bytes=32,169,424,896 Blocks=498,657 Bytes/block=64,512
Positioned at File=36 Block=0

In Use Volume status:
distress-010908 on device Quantum-DLT (/dev/nst0)


[EMAIL PROTECTED] bin]# btape -c /usr/local/bacula/bin/bacula-sd.conf /dev/nst1
Tape block granularity is 1024 bytes.
btape: butil.c:286 Using device: /dev/nst1 for writing.
btape: btape.c:368 open device Quantum-DLT-1 (/dev/nst1): OK
*test
I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:825 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:499 Wrote 1 EOF to Quantum-DLT-1 (/dev/nst1)
btape: btape.c:841 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:499 Wrote 1 EOF to Quantum-DLT-1 (/dev/nst1)
btape: btape.c:850 Rewind OK.
1000 blocks re-read correctly.
Got EOF on tape.
1000 blocks re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===

The test ran through without errors... only the first part is copied here for brevity.


bacula-sd.conf

Device {
  Name = Quantum-DLT  # First tape - daily and weekly
  Drive Index = 0
  Media Type = DLT-V4
  Archive Device = /dev/nst0
  AutomaticMount = no;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = no
}

Device {
  Name = Quantum-DLT-1  # Second tape - monthly and yearly
  Drive Index = 1
  Media Type = DLT-V4
  Archive Device = /dev/nst1
  AutomaticMount = no;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = no
}


***bacula-dir.conf - storage device section

# Definition of DDS tape storage device
Storage {
  Name = Quantum-DLT-V4
#  Do not use localhost here
  Address = distress.ACCOUNTING.EDT.LOCAL  # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = Dn91BxJ/wNTn+9u3OX4ef1iRBXh1/Ch2GBaNiQzoQ9ny  # password for Storage daemon
  Device = Quantum-DLT # must be same as Device in Storage daemon
  Media Type = DLT-V4  # must be same as MediaType in Storage daemon
#  Autochanger = yes   # enable for 
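
Two things are worth cross-checking against the transcripts above. First, the SD's device status lists only FileStorage and Quantum-DLT, which suggests the running storage daemon was started before the Quantum-DLT-1 Device resource was added and needs a restart to re-read bacula-sd.conf. Second, the Director needs a Storage resource whose Device names the new SD device exactly; only the first Storage resource is shown in the excerpt, so the following is a sketch of what the second one would look like (address and password assumed identical to the first):

```
# Hypothetical second Storage resource for bacula-dir.conf (a sketch --
# the real Quantum-DLT-V4-1 resource is not shown in the excerpt above).
Storage {
  Name = Quantum-DLT-V4-1
  Address = distress.ACCOUNTING.EDT.LOCAL
  SDPort = 9103
  Password = Dn91BxJ/wNTn+9u3OX4ef1iRBXh1/Ch2GBaNiQzoQ9ny  # same SD password
  Device = Quantum-DLT-1   # must match the Device Name in bacula-sd.conf
  Media Type = DLT-V4      # must match the Media Type in bacula-sd.conf
}
```

If the Device directive here named anything other than the exact SD Device Name, the SD would answer the label command with the same "3999 Device ... not found" error seen above.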

Re: [Bacula-users] Problem with socket errors on new workstation: Backup Fatal Error of Disaster-fd Full

2008-01-08 Thread Jerry Lowry

I am setting up several new workstations on my backup schedule.  I am running
bacula 2.0.3 on both server and clients.  I copied a working configuration in
the bacula-dir.conf to create the new workstation configurations, but each one
of them is getting the following error:

**note**  the node names 'disaster', 'distress', and the like come from the
accounting system they belong to, which runs on MS Windows; they have nothing
to do with how bacula works.  The server is running on a Linux system.
No problems with the server... just the clients.

Thanks

jerry

Subject: Bacula: Backup Fatal Error of Disaster-fd Full

04-Jan 13:26 Distress-dir: No prior Full backup Job record found.
04-Jan 13:26 Distress-dir: No prior or suitable Full backup found in catalog.
Doing FULL backup.
04-Jan 13:26 Distress-dir: Start Backup JobId 1732,
Job=Disaster.2008-01-04_13.26.49
04-Jan 13:38 Disaster-fd: Disaster.2008-01-04_13.26.49 Fatal error: Failed to
connect to Storage daemon: distress.ACCOUNTING.EDT.LOCAL:9103
04-Jan 13:38 Disaster-fd: Disaster.2008-01-04_13.26.49 Error:
./../lib/bnet.c:779 gethostbyname() for host distress.ACCOUNTING.EDT.LOCAL
failed: ERR=Valid name, no data record of requested type.
04-Jan 13:26 Distress-dir: Disaster.2008-01-04_13.26.49 Fatal error: Socket
error on Storage command: ERR=No data available
04-Jan 13:26 Distress-dir: Disaster.2008-01-04_13.26.49 Error: Bacula 2.0.3
(06Mar07): 04-Jan-2008 13:26:58
  JobId:  1732
  Job:Disaster.2008-01-04_13.26.49
  Backup Level:   Full (upgraded from Incremental)
  Client: Disaster-fd 2.0.3 (06Mar07)
Linux,Cross-compile,Win32
  FileSet:Disaster Set 2008-01-04 09:26:45
  Pool:   Default (From Job resource)
  Storage:Quantum-DLT-V4 (From Job resource)
  Scheduled time: 04-Jan-2008 13:26:48
  Start time: 04-Jan-2008 13:26:51
  End time:   04-Jan-2008 13:26:58
  Elapsed time:   7 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): 
  Volume Session Id:  253
  Volume Session Time:1196712213
  Last Volume Bytes:  24,438,693,888 (24.43 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  
  SD termination status:  Error
  Termination:*** Backup Error ***
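
The ERR string above ("Valid name, no data record of requested type") is the resolver's NO_DATA case: the name exists in DNS, but there is no address record of the type gethostbyname() asked for. The lookup can be reproduced outside Bacula with a small test program (a sketch; "localhost" stands in here for the real SD hostname distress.ACCOUNTING.EDT.LOCAL, which is an assumption so the example runs anywhere):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Sketch: ask the resolver for an IPv4 address record -- the same kind
 * of lookup Bacula's bnet.c performs before connecting to the SD. */
int main(void) {
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* demand an A record, as gethostbyname() does */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("localhost", "9103", &hints, &res);
    if (rc != 0) {
        /* NO_DATA-style failures surface here as EAI_* codes */
        printf("lookup failed: %s\n", gai_strerror(rc));
        return 1;
    }
    printf("lookup ok\n");
    freeaddrinfo(res);
    return 0;
}
```

If the same check fails from the client machines but succeeds on the server, the fix is on the clients' resolver side (an A record or hosts-file entry for the SD name), not in the Bacula configuration.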



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users