[Bacula-users] Tuning SD/FD/DIR Maximum Concurrent Jobs

2016-05-12 Thread Mingus Dew
Dear All,
 I've been using Bacula for about 7 years and have never quite
understood how to properly configure these values in relation to each
other. In terms of an FD on a normal client I will typically use a Maximum
Concurrent Jobs = 5 setting in bacula-fd.conf.

On my "Backup Server" I run the FD, SD, and DIR. The FD I've set that
parameter to 30, and 45 in the DIR and FD. I use the FD on the server to
backup files from NFS mounts and locally written backups from client types
that can only ftp/scp/sftp files (can't install bacula-fd on them)

What I'm really after is some best practices or guidelines for setting
Max Concurrent Jobs in all the resources based on the number of clients, backup
jobs, and storage devices. It seems that every now and then I've got too many
jobs in the queue waiting for available storage devices. Sometimes the devices
go out to lunch: they think they are reserved, but don't actually have anything
mounted and won't mount any new volumes.
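
A rough sketch of how these limits are commonly related (illustrative numbers only, not an official formula): the Director's value is the global ceiling, the SD's daemon-level value should cover the sum of its per-Device limits, and each FD only needs to allow as many jobs as will ever target that one client:

# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 45     # global ceiling: no more than this many jobs at once
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 45     # should be >= the sum of the per-Device limits below
}
Device {
  ...
  Maximum Concurrent Jobs = 5      # jobs allowed to interleave on this one device
}

# bacula-fd.conf on the backup server itself (it also backs up NFS mounts)
FileDaemon {
  ...
  Maximum Concurrent Jobs = 30
}

# bacula-fd.conf on an ordinary client
FileDaemon {
  ...
  Maximum Concurrent Jobs = 5
}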

Thanks,
Shon


[Bacula-users] SchedTime Versus StartTime

2016-04-15 Thread Mingus Dew
Dear All,
  I am looking for a solution to a specific problem. Occasionally, and
for a variety of reasons, the Job queue will have many Jobs waiting to
start while other Jobs complete and resources become available. This will
sometimes cause a Job to have a StartTime hours after its SchedTime. This
doesn't happen very often, but it has caused problems for application
availability when a Job is delayed until peak business hours.

 Is there a way to have Bacula not Start a queued job if the real
StartTime is outside a defined window?
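
One way this is often handled (a sketch only; the exact directive names and semantics should be checked in the Job resource documentation for your release) is Max Start Delay, which cancels a job that is still waiting a given time after its SchedTime, optionally combined with Max Run Sched Time to bound the total time measured from the scheduled time:

Job {
  Name = Nightly_App_Backup        # hypothetical job name
  ...
  # Cancel the job if it has not actually started within 3 hours of SchedTime,
  # so a long queue can never push it into business hours.
  Max Start Delay = 3 hours
  # Optionally also bound the total run, counted from the scheduled time.
  Max Run Sched Time = 8 hours
}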

Yours,
Shon


Re: [Bacula-users] Failing Windows Backups

2016-04-15 Thread Mingus Dew
I also wanted to include the Job info I have for an example failure

2016-04-15 04:10:48 mt-back4.director JobId 711684: Start Backup JobId 711684, Job=egv-fdcpss-vm04_Daily_Disk.2016-04-15_04.00.00_26
2016-04-15 04:10:48 mt-back4.director JobId 711684: Using Device "Mentora_Incr_Device-5" to write.
2016-04-15 04:10:49 egv-fdcpss-vm04.fdcclient JobId 711684: shell command: run ClientRunBeforeJob "start /w wbadmin delete systemstatebackup -backuptarget:C: -keepversions:0 -quiet"
2016-04-15 04:10:51 egv-fdcpss-vm04.fdcclient JobId 711684: shell command: run ClientRunBeforeJob "start /w wbadmin start systemstatebackup -backuptarget:C: -quiet"
2016-04-15 04:28:33 mt-back4.director JobId 711684: There are no more Jobs associated with Volume "Mentora_Incr_Disk-13390". Marking it purged.
2016-04-15 04:28:33 mt-back4.director JobId 711684: All records pruned from Volume "Mentora_Incr_Disk-13390"; marking it "Purged"
2016-04-15 04:28:33 mt-back4.director JobId 711684: Recycled volume "Mentora_Incr_Disk-13390"
2016-04-15 04:28:33 mt-back4.storage JobId 711684: Recycled volume "Mentora_Incr_Disk-13390" on device "Mentora_Incr_Device-5" (/mnt/backup1/mentora/bacula/incremental), all previous data lost.
2016-04-15 04:28:34 mt-back4.director JobId 711684: Max Volume jobs=1 exceeded. Marking Volume "Mentora_Incr_Disk-13390" as Used.
2016-04-15 04:28:34 egv-fdcpss-vm04.fdcclient JobId 711684: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="C"
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "System Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "ASR Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "IIS Config Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "Registry Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "COM+ REGDB Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "BITS Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:39 egv-fdcpss-vm04.fdcclient JobId 711684: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 (VSS_WS_STABLE)
2016-04-15 04:36:44 egv-fdcpss-vm04.fdcclient JobId 711684: Error: lib/bsock.c:382 Socket is terminated=1 on call to client:10.0.1.65:9102
2016-04-15 04:36:45 mt-back4.storage JobId 711684: Elapsed time=00:08:11, Transfer rate=35.40 M Bytes/second
2016-04-15 04:36:44 egv-fdcpss-vm04.fdcclient JobId 711684: shell command: run ClientAfterJob "start /w wbadmin delete systemstatebackup -backuptarget:C: -keepversions:0 -quiet"
2016-04-15 04:36:47 mt-back4.director JobId 711684: Fatal error: Network error with FD during Backup: ERR=Connection reset by peer
2016-04-15 04:36:47 mt-back4.director JobId 711684: Error: Bacula mt-back4.director 5.2.13 (19Jan13):

  JobId:  711684

On Fri, Apr 15, 2016 at 9:12 AM, Mingus Dew <shon.steph...@gmail.com> wrote:

> Dear All,
>  I'm assuming that setting "Heartbeat Interval = 300" is the same as 5
> minutes. I want to be clear on its usage though... it would be for when a
> Firewall or Router is not honoring KeepAlive?
>
>  I'm having issues with backup failures on a mix of Windows clients.
> These all seem to have bsock.c timeout issues. Sometimes the jobs complete,
> mostly the FD completes writing files to SD, but loses connection to the
> Director and doesn't write the job to the database. It's become a hot
> button issue for me that I can't seem to resolve.
>
>  I've added the Heartbeat Interval to the Director and FD configs.
> I've verified that the security team is not seeing any dropped connections
> in their FW logs, and the timeout is 4h on connections. I've curated the
> FileSet to minimize the data transferred. The systems are all VMs and some
> version of Windows (Windows 2k8, 2k12). Other than that I haven't been
> able to find anything common.
[Bacula-users] Failing Windows Backups

2016-04-15 Thread Mingus Dew
Dear All,
 I'm assuming that setting "Heartbeat Interval = 300" is the same as 5
minutes. I want to be clear on its usage though... it would be for when a
Firewall or Router is not honoring KeepAlive?
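
For reference, a minimal sketch of where the directive can go (the value is a time directive, so a bare 300 is 300 seconds, i.e. 5 minutes); it makes the daemons send periodic keepalives on otherwise idle control connections so stateful firewalls do not drop them:

# bacula-dir.conf
Director {
  ...
  Heartbeat Interval = 300
}

# bacula-sd.conf
Storage {
  ...
  Heartbeat Interval = 300
}

# bacula-fd.conf on each Windows client
FileDaemon {
  ...
  Heartbeat Interval = 300
}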

 I'm having issues with backup failures on a mix of Windows clients.
These all seem to have bsock.c timeout issues. Sometimes the jobs complete,
mostly the FD completes writing files to SD, but loses connection to the
Director and doesn't write the job to the database. It's become a hot
button issue for me that I can't seem to resolve.

 I've added the Heartbeat Interval to the Director and FD configs. I've
verified that the security team is not seeing any dropped connections in
their FW logs, and the timeout is 4h on connections. I've curated the
FileSet to minimize the data transferred. The systems are all VMs and some
version of Windows (Windows 2k8, 2k12). Other than that I haven't been able
to find anything common.

 I'd appreciate any ideas. The server is Bacula 7.0.5 and clients are
all 5.x

Yours,
Shon


[Bacula-users] Bacula Windows Status?

2016-03-02 Thread Mingus Dew
Dear All,
 I'm just wondering if there are binaries newer than 5.x available for Windows?

Thanks,
Shon


Re: [Bacula-users] Issue with In-Changer Media after replacing Tape Library

2016-03-01 Thread Mingus Dew
Dear Bryn,
Thank you. I forgot you can update the InChanger status via update volume.
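
For anyone searching later, the sequence looks roughly like this in bconsole (a sketch; the exact prompts vary by version):

*update slots storage=TL4000     # re-read barcodes/slots from the new library
*update volume                   # then pick each stale TL2000 volume and set its
                                 # "InChanger flag" to 0 from the menu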

Yours,
Shon

On Tue, Mar 1, 2016 at 9:20 AM, Bryn Hughes <li...@nashira.ca> wrote:

> Appears you never did an 'Update Slots' on the old library after emptying
> the tapes from it...
>
> You can manually change the 'InChanger' flag from a '1' to a '0' for the
> affected tapes; at least it doesn't look like there are too many of them.
> You can also delete those volumes altogether if you no longer need to get
> any data from them.
>
> Bryn
>
>
> On 2016-03-01 05:52 AM, Mingus Dew wrote:
>
> Dear All,
>   I recently replaced a malfunctioning LTO5 TL2000 tape library with a
> new LTO6 TL4000 tape library. I've been having a lot of failing tape jobs
> since then, and I think I've finally rooted out the problem, but am not
> sure where to begin fixing it.
>
> Essentially, when I do a query to show what Volumes Bacula thinks are in
> the changer, it is still showing me Volumes and Slots from the previous
> library mixed with Volumes and Slots from the new, higher capacity library.
>
> I'm using Bacula Community Edition 7.0.5 on CentOS 6.7 x86_64. Here is the
> output from bconsole query. The old library entries all have Storage
> "TL2000" and Media Type "LTO5". Any guidance is very appreciated.
>
> Yours,
> Shon
>
>
> +---------+------------+-----------+---------+------+---------------------+-----------+-----------+
> | MediaId | VolumeName | GB        | Storage | Slot | Pool                | MediaType | VolStatus |
> +---------+------------+-----------+---------+------+---------------------+-----------+-----------+
> |   2,065 | 15         | 300.6804  | TL2000  |    1 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
> |   2,109 | 05         | 1100.4370 | TL2000  |    2 | Synnefo_LTO5_Tapes  | LTO-5     | Append    |
> |   2,113 | 08         | 0.0001    | TL2000  |    3 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
> |   2,098 | 51         | 411.7481  | TL2000  |    4 | Mentora_LTO5_Tapes  | LTO-5     | Append    |
> |   2,115 | 04         | 0.0001    | TL2000  |    6 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
> |   2,111 | 07         | 258.3787  | TL2000  |    7 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
> |   2,108 | 12         | 1452.4568 | TL2000  |    8 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
> |   2,110 | 02         | 1664.2599 | TL2000  |    9 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
> |   2,116 | 03         | 0.0001    | TL2000  |   10 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
> |   2,117 | 06         | 0.0001    | TL2000  |   11 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
> |   2,112 | 09         | 3882.6981 | TL2000  |   12 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
> |   2,704 | 21L6       | 0.        | TL4000  |    3 | Mentora_LTO6_Tapes  | LTO-6     | Recycle   |
> |   2,738 | 37L6       | 0.0001    | TL4000  |   37 | Calgon_LTO6_Tapes   | LTO-6     | Append    |
> |   2,739 | 40L6       | 0.0001    | TL4000  |   38 | Synnefo_LTO6_Tapes  | LTO-6     | Append    |
> |   2,740 | 43L6       | 0.0001    | TL4000  |   39 | KillerIT_LTO6_Tapes | LTO-6     | Append    |
> |   2,741 | 46L6       | 0.0001    | TL4000  |   40 | Voith_LTO6_Tapes    | LTO-6     | Append    |
> |   2,742 | 38L6       | 0.0001    | TL4000  |   41 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,743 | 41L6       | 0.0001    | TL4000  |   42 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,744 | 44L6       | 0.0001    | TL4000  |   43 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,745 | 47L6       | 0.0001    | TL4000  |   44 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,746 | 39L6       | 0.0001    | TL4000  |   45 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,747 | 42L6       | 0.0001    | TL4000  |   46 | Scratch_LTO6        | LTO-6     | Append    |
> |   2,748 | 45L6       | 0.0001    | TL4000  |   47 | Scratch_LTO6        | LTO-6     | Append    |
> +---------+------------+-----------+---------+------+---------------------+-----------+-----------+

[Bacula-users] Issue with In-Changer Media after replacing Tape Library

2016-03-01 Thread Mingus Dew
Dear All,
  I recently replaced a malfunctioning LTO5 TL2000 tape library with a
new LTO6 TL4000 tape library. I've been having a lot of failing tape jobs
since then, and I think I've finally rooted out the problem, but am not
sure where to begin fixing it.

Essentially, when I do a query to show what Volumes Bacula thinks are in
the changer, it is still showing me Volumes and Slots from the previous
library mixed with Volumes and Slots from the new, higher capacity library.

I'm using Bacula Community Edition 7.0.5 on CentOS 6.7 x86_64. Here is the
output from bconsole query. The old library entries all have Storage
"TL2000" and Media Type "LTO5". Any guidance is very appreciated.

Yours,
Shon

+---------+------------+-----------+---------+------+---------------------+-----------+-----------+
| MediaId | VolumeName | GB        | Storage | Slot | Pool                | MediaType | VolStatus |
+---------+------------+-----------+---------+------+---------------------+-----------+-----------+
|   2,065 | 15         | 300.6804  | TL2000  |    1 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
|   2,109 | 05         | 1100.4370 | TL2000  |    2 | Synnefo_LTO5_Tapes  | LTO-5     | Append    |
|   2,113 | 08         | 0.0001    | TL2000  |    3 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,098 | 51         | 411.7481  | TL2000  |    4 | Mentora_LTO5_Tapes  | LTO-5     | Append    |
|   2,115 | 04         | 0.0001    | TL2000  |    6 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,111 | 07         | 258.3787  | TL2000  |    7 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,108 | 12         | 1452.4568 | TL2000  |    8 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
|   2,110 | 02         | 1664.2599 | TL2000  |    9 | Calgon_LTO5_Tapes   | LTO-5     | Full      |
|   2,116 | 03         | 0.0001    | TL2000  |   10 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,117 | 06         | 0.0001    | TL2000  |   11 | KillerIT_LTO5_Tapes | LTO-5     | Append    |
|   2,112 | 09         | 3882.6981 | TL2000  |   12 | Mentora_LTO5_Tapes  | LTO-5     | Full      |
|   2,704 | 21L6       | 0.        | TL4000  |    3 | Mentora_LTO6_Tapes  | LTO-6     | Recycle   |
|   2,738 | 37L6       | 0.0001    | TL4000  |   37 | Calgon_LTO6_Tapes   | LTO-6     | Append    |
|   2,739 | 40L6       | 0.0001    | TL4000  |   38 | Synnefo_LTO6_Tapes  | LTO-6     | Append    |
|   2,740 | 43L6       | 0.0001    | TL4000  |   39 | KillerIT_LTO6_Tapes | LTO-6     | Append    |
|   2,741 | 46L6       | 0.0001    | TL4000  |   40 | Voith_LTO6_Tapes    | LTO-6     | Append    |
|   2,742 | 38L6       | 0.0001    | TL4000  |   41 | Scratch_LTO6        | LTO-6     | Append    |
|   2,743 | 41L6       | 0.0001    | TL4000  |   42 | Scratch_LTO6        | LTO-6     | Append    |
|   2,744 | 44L6       | 0.0001    | TL4000  |   43 | Scratch_LTO6        | LTO-6     | Append    |
|   2,745 | 47L6       | 0.0001    | TL4000  |   44 | Scratch_LTO6        | LTO-6     | Append    |
|   2,746 | 39L6       | 0.0001    | TL4000  |   45 | Scratch_LTO6        | LTO-6     | Append    |
|   2,747 | 42L6       | 0.0001    | TL4000  |   46 | Scratch_LTO6        | LTO-6     | Append    |
|   2,748 | 45L6       | 0.0001    | TL4000  |   47 | Scratch_LTO6        | LTO-6     | Append    |
+---------+------------+-----------+---------+------+---------------------+-----------+-----------+


Re: [Bacula-users] Storage Devices not displaying in bconsole status

2016-02-10 Thread Mingus Dew
Dear Ben,
  Thank you very much. Part of my mind had wondered if it was a change
in display behavior, as I only recently upgraded to 7.0. As you can see from
the configs, they are all on the same SD instance. I think it doesn't
deduplicate the other Disk storage because it's using a different local IP
in its config.
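
To illustrate the behaviour Ben describes (only TL4000, Mentora_Full_Files, and the two addresses come from the configs in this thread; the third resource and its names are made up, and whether a different listen address is really what keeps it separate is only the poster's own hypothesis above): director-side Storage resources that share one SD address collapse into a single entry in the status list, while one pointing at another address stays on its own:

# bacula-dir.conf (sketch)
Storage {
  Name = TL4000
  Address = 10.0.50.164      # same SD address/port ...
  SDPort = 9103
  Password = "###"
  Device = TL4000_Library
  Media Type = LTO-6
  Autochanger = yes
}
Storage {
  Name = Mentora_Full_Files
  Address = 10.0.50.164      # ... so this one is de-duplicated with TL4000 in the list
  SDPort = 9103
  Password = "###"
  Device = Mentora_Full_Device-1
  Media Type = Mentora_File
}
Storage {
  Name = Other_Disk_Files    # hypothetical
  Address = 10.0.91.164      # different address, so it shows up as its own entry
  SDPort = 9103
  Password = "###"
  Device = Other_Disk_Device # hypothetical
  Media Type = Other_File    # hypothetical
}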

Yours,
Shon

On Wed, Feb 10, 2016 at 4:36 AM, Roberts, Ben wrote:

> Hi Shon,
>
>
>
> > I am having an issue where when I run a status command in bconsole,
> select "Storage",
>
> > I am only presented with the option for status on 3 of my defined
> storage resources.
>
> > I am trying to figure out why this is, but am being left with a blank.
> Backups do seem
>
> > to be running at present, but I should have a lot more devices to status.
>
> This was a documented change for 7.0 (
> http://www.bacula.org/7.2.x-manuals/en/main/New_Features_in_7_0_0.html#SECTION00414000)
> . When multiple storage definitions point at the same SD instance, Bacula
> de-duplicates the entries that show up in the list; since selecting any
> one of them shows the status of the entire SD, multiple entries in the list
> would give the same output.
>
>
>
> I agree it’s confusing when a seemingly arbitrary subset of your defined
> storages show up, and you have to remember which storage resource points at
> which SD, and if the one you wanted is not there, which of the entries
> listed is on the same SD instance.
>
>
>
> Regards,
>
> Ben Roberts


[Bacula-users] Storage Devices not displaying in bconsole status

2016-02-09 Thread Mingus Dew
Dear All,
 I am having an issue where, when I run a status command in bconsole and
select "Storage", I am only presented with the option for status on 3 of my
defined storage resources. I am trying to figure out why this is, but am
drawing a blank. Backups do seem to be running at present, but I should
have many more devices to status.

mt-atl-back1-dir Version: 7.0.5 (28 July 2014) x86_64-unknown-linux-gnu
redhat
Daemon started 09-Feb-16 10:18. Jobs: run=2, running=0 mode=0,0
 Heap: heap=548,864 smbytes=592,095 max_bytes=833,177 bufs=1,954
max_bufs=3,683

*status storage
The defined Storage resources are:
 1: TL4000
 2: Mentora_Full_Files
 3: ITAnalytics_Full_Files
Select Storage resource (1-3):


I am not receiving any errors when restarting or reloading Bacula, and all
components (FD, SD, DIR) appear to start and work. Here are my bacula-dir,
bacula-sd, and bacula-dir_storage config files:

bacula-dir.conf:
Director {
  Name = mt-atl-back1-dir
  DIRport = 9101
  QueryFile = "/opt/bacula/etc/query.sql"
  WorkingDirectory = "/mnt/backup/bacula/var/lib/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 60
  Password = "###"
  Messages = Daemon
  FD Connect Timeout = 5 minutes
  SD Connect Timeout = 5 minutes
  Statistics Retention = 90 days
}
Catalog {
  Name = Mentora
  dbname = "bacula"
  dbuser = "bacula"
  dbpassword = "##"
  DB Socket = /var/lib/mysql/mysql.sock
}
Messages {
  Name = Standard
  mailcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = ?? = all, !skipped
  operator = ?? = mount
  console = all, !skipped, !saved
  append = "/mnt/backup/bacula/var/lib/bacula/log" = all, !skipped
}
Messages {
  Name = Daemon
  mailcommand = "/opt/bacula/sbin/bsmtp -h localhost -f \"\(Bacula\)
\<%r\>\" -s \"Bacula daemon message\" %r"
  mail = ?= all, !skipped
  console = all, !skipped, !saved
  append = "/mnt/backup/bacula/var/lib/bacula/log" = all, !skipped
}
Console {
  Name = mt-atl-back1-mon
  Password = "##"
  CommandACL = status, .status
}
## Included config files for Storage, Pools, Filesets, Schedules, Clients, and Jobs
@|"sh -c 'for f in /opt/bacula/etc/conf.d/*.conf; do echo @${f}; done'"
@|"sh -c 'for f in /opt/bacula/etc/conf.d/customer.d/*/*.conf; do echo
@${f}; done'"

bacula-sd.conf:
Storage {
  Name = mt-atl-back1.storage
  WorkingDirectory = /mnt/backup/bacula/var/lib/bacula
  Pid Directory = /var/run
  SDAddresses  = { ip = {
addr = 10.0.50.164; port = 9103; }
ip = {
addr = 10.0.91.164; port = 9103; }
ip = {
addr = 10.0.50.166; port = 9103; }
ip = {
addr = 10.0.70.166; port = 9103; }
  }
   Maximum Concurrent Jobs = 60
   Heartbeat Interval = 5
}
Director {
  Name = mt-atl-back1-dir
  Password = "##"
}
Autochanger {
  Name = TL4000_Library
  Device = TL4000_DTE0, TL4000_DTE1, TL4000_DTE2, TL4000_DTE3
  Changer Command = " /opt/bacula/etc/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/changer
}
Device {
  Name = TL4000_DTE0
  Drive Index = 0
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst2
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE1
  Drive Index = 1
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst1
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE2
  Drive Index = 2
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst0
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = TL4000_DTE3
  Drive Index = 3
  Media Type = LTO-6
  Changer Device = /dev/changer
  Archive Device = /dev/nst3
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = Yes
  Alert Command = "sh -c 'smartctl -H -l error -d scsi %c'"
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
}
Device {
  Name = Mentora_Full_Device-1
  Device Type = File
  Media Type = Mentora_File
  Archive Device = /mnt/backup/bacula/mentora/full
  LabelMedia = Yes
  Random Access = Yes
  AutomaticMount = yes
  

[Bacula-users] Error compiling Bacula 5.2.13

2013-12-19 Thread Mingus Dew
Dear All,
 I am compiling Bacula 5.2.13 on Solaris 10. I am getting the following
error and can't figure out what's causing it. I can't even find where '-mt'
is coming from.

Compiling mysql.c
Making libbaccats-mysql.la ...
/root/Bacula/src/bacula-5.2.13/libtool --silent --tag=CXX --mode=link
/opt/csw/bin/g++ -D_BDB_PRIV_INTERFACE_   -o libbaccats-mysql.la mysql.lo
-export-dynamic -rpath /usr/lib -release 5.2.13 \
   -soname
libbaccats-5.2.13.so -R /opt/mysql/mysql/lib -L/opt/mysql/mysql/lib
-lmysqlclient_r -lz
g++: error: unrecognized command line option '-mt'
*** Error code 1
make: Fatal error: Command failed for target `libbaccats-mysql.la'
Current working directory /root/Bacula/src/bacula-5.2.13/src/cats


  == Error in /root/Bacula/src/bacula-5.2.13/src/cats ==


*** Error code 1
The following command caused the error:
for I in src scripts src/lib src/findlib src/filed   src/console
src/plugins/fd src/cats src/dird src/stored src/tools manpages; \
  do (cd $I; echo ==Entering directory `pwd`; \
  make DESTDIR= all || (echo ; echo ; echo   == Error in `pwd`
==; \
echo ; echo ; exit 1;)); \
done
make: Fatal error: Command failed for target `all'


Re: [Bacula-users] Error compiling Bacula 5.2.13

2013-12-19 Thread Mingus Dew
I was compiling against 64-bit MySQL and it didn't like that... derp.
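
For anyone who hits the same '-mt' failure: that flag typically leaks in from the MySQL client's own link flags (it is a Sun Studio option that g++ rejects), so checking mysql_config and pointing configure at a client library that matches the compiler/ABI is one way out. A sketch, with the 32-bit library path assumed:

# See which extra flags this MySQL install injects into the link line
/opt/mysql/mysql/bin/mysql_config --libs

# Rebuild against a matching (here: 32-bit, gcc-friendly) client library
./configure --with-mysql=/path/to/32bit/mysql CC=/opt/csw/bin/gcc CXX=/opt/csw/bin/g++
gmake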

-Shon


On Thu, Dec 19, 2013 at 4:19 PM, Mingus Dew shon.steph...@gmail.com wrote:

 Dear All,
  I am compiling Bacula 5.2.13 on Solaris 10. I am getting the following
 error and can't figure out what's causing it. Can't even find where -mt
 is coming from.

 Compiling mysql.c
 Making libbaccats-mysql.la ...
 /root/Bacula/src/bacula-5.2.13/libtool --silent --tag=CXX --mode=link
 /opt/csw/bin/g++ -D_BDB_PRIV_INTERFACE_   -o libbaccats-mysql.la mysql.lo
 -export-dynamic -rpath /usr/lib -release 5.2.13 \
-soname
 libbaccats-5.2.13.so -R /opt/mysql/mysql/lib -L/opt/mysql/mysql/lib
 -lmysqlclient_r -lz
 g++: error: unrecognized command line option '-mt'
 *** Error code 1
 make: Fatal error: Command failed for target `libbaccats-mysql.la'
 Current working directory /root/Bacula/src/bacula-5.2.13/src/cats


   == Error in /root/Bacula/src/bacula-5.2.13/src/cats ==


 *** Error code 1
 The following command caused the error:
 for I in src scripts src/lib src/findlib src/filed   src/console
 src/plugins/fd src/cats src/dird src/stored src/tools manpages; \
   do (cd $I; echo ==Entering directory `pwd`; \
   make DESTDIR= all || (echo ; echo ; echo   == Error in
 `pwd` ==; \
 echo ; echo ; exit 1;)); \
 done
 make: Fatal error: Command failed for target `all'




[Bacula-users] bconsole status is VERY slow

2013-12-05 Thread Mingus Dew
Dear All,
 I'm trying to figure out why it sometimes takes 40 minutes for the
status command to return results. It wasn't always this way, and I'm not
sure even remotely what might be happening. This seems to be occurring even
when there are only a few (fewer than 10) jobs in the queue. Any ideas?

Yours,
Shon


[Bacula-users] How to use multiple volumes from the same pool, simultaneously?

2013-11-19 Thread Mingus Dew
Dear All,
 I would like some advice on how to configure Bacula so that volumes
from my Disk pools can be read for Copying at the same time that other
volumes in the same pools are being written for backups.
 Currently, if there is a backup writing, none of my copy jobs will run,
since the device is mounted with the volume being written. I've seen
reference to a way to get Storage Pools to use multiple devices, but can't
find it now or figure out how to configure it properly.
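
The usual pattern (a sketch with made-up names, and not the only way) is to give the SD several File devices that share one Media Type and directory, so one device can keep the volume being written mounted while another mounts the volume the copy job wants to read:

# bacula-sd.conf (sketch)
Device {
  Name = Disk_Full_Dev-1
  Device Type = File
  Media Type = Company_File
  Archive Device = /backups/full
  LabelMedia = yes; Random Access = yes; AutomaticMount = yes
  Maximum Concurrent Jobs = 1
}
Device {
  Name = Disk_Full_Dev-2
  Device Type = File
  Media Type = Company_File          # same Media Type and directory as above
  Archive Device = /backups/full
  LabelMedia = yes; Random Access = yes; AutomaticMount = yes
  Maximum Concurrent Jobs = 1
}

The director side then needs a Storage definition able to reach both devices (how best to group them varies by version), but the key point is that a second mounted volume no longer blocks on the first.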


Yours,
Shon


[Bacula-users] Remove media while Job Dir inserting Attributes?

2013-08-01 Thread Mingus Dew
Dear All,
 I have a job that takes 7 days to run. It's currently in the
"Dir inserting Attributes" state. Can I remove the tapes from the library
while it's in that state? It's overlapping my normal tape swap and I can't
adjust today's pickup.

Yours,
Shon


Re: [Bacula-users] Is Truncate just broken?

2013-07-30 Thread Mingus Dew
Dear Kern,
 Thanks for the explanation. I'll have to dig deeper on my end to see
what else could be at the root of the issue. As explained, my disk volumes
are on ZFS, but again this should not be an issue and was working for me
previously.
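
For context, the knobs involved on the configuration side look roughly like this (a sketch; the exact keywords have shifted a little between releases, so check the ActionOnPurge notes for your version), and truncation only ever applies to volumes that are already purged:

# Pool resource (bacula-dir.conf)
Pool {
  Name = SomeDiskPool                # illustrative name
  ...
  Action On Purge = Truncate
}

# bconsole, after the pool's volumes have been purged
*purge volume action=truncate pool=SomeDiskPool storage=SomeFileStorage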

Sincerely,
Shon


On Wed, Jul 24, 2013 at 3:35 PM, Kern Sibbald k...@sibbald.com wrote:

  Hello,

 ftruncate did not seem to work over NFS or at least on certain versions,
 so I implemented code that if after doing an ftruncate, the file size (via
 an
 fstat) is non-zero, Bacula will delete and recreate the Volume.  This is
 not
 ideal, because it may create it with different permissions and/or owner,
 group but it seemed to work.  If you drop the SD's user from root to
 bacula, depending on how you have defined your NFS volume permissions,
 the SD may be unable do the delete and recreate.

 Nothing has changed in that code for many years.

 Best regards,
 Kern


 On 07/22/2013 04:06 PM, Mingus Dew wrote:

 Thanks for the reply. Specifically I'm using Bacula 5.2.13 on Solaris 10
 x86. My disk backups are stored on ZFS. I saw some old threads about
 truncate not working right on NFS but then some code changes were made that
 actually deleted, then recreated the volume instead of using ftruncate. I
 know this used to work for me, but doesn't seem to anymore. I'm wondering
 if something happened in more recent Bacula versions that changed this
 again.
 Yours,
 Shon


  On Fri, Jul 12, 2013 at 3:53 AM, Radosław Korzeniewski 
 rados...@korzeniewski.net wrote:

 Hello,

  2013/7/11 Mingus Dew shon.steph...@gmail.com

 I don't think I've ever gotten it to work under any circumstance, either
 manually or automated.


  It is working fine, automatically or manually. I use it on a lot of Bacula
 deployments without problems.

  best regards
  --
 Radosław Korzeniewski
 rados...@korzeniewski.net









[Bacula-users] Maximum Concurrent Jobs for Devices

2013-07-22 Thread Mingus Dew
Dear All,
 I'm trying to understand how to properly use this directive and whether
it's possible to configure Bacula the way I think I can using it.
I'm using Version: 5.2.13 (19 February 2013) i386-pc-solaris2.10 solaris 5.10.

The manual at
http://www.bacula.org/en/dev-manual/main/main/New_Features_in_5_2_x.html#SECTION00361000
says:

Maximum Concurrent Jobs is a new Device directive in the Storage Daemon
configuration that permits setting the maximum number of Jobs that can run
concurrently on a specified Device. Using this directive, it is possible to
have different Jobs using multiple drives, because when the Maximum
Concurrent Jobs limit is reached, the Storage Daemon will start new Jobs on
any other available compatible drive. This facilitates writing to multiple
drives with multiple Jobs that all use the same Pool.

I think this means that if you have Device A and it's running 5 jobs from
Pool A, and you've set Max Concurrent Jobs = 5 for Device A, then the
next job from Pool A should use Device A1.

Where this all breaks down for me is how to configure the Device, Storage,
Pool, and Job resources (what directives to put where) so that Bacula
automatically switches devices but keeps the right initial storage and stays
in the right pool and on the right devices.

The way I envision using this: currently I have a Full Device and an
Incremental Device for disk backups. I use MaxJobs=1 so that each Job is
its own Volume. As we know, only one volume can be mounted at a time, which
limits me to running one job per Pool at a time in my current configuration.
I'm hoping I can add multiple Devices and Bacula will then be able to
run more Jobs in each Pool.
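
A sketch of the layout this usually implies (illustrative names and paths, and worth testing against a scratch pool first): two or more identical File devices, so that once one device hits its job limit the SD can start the next job on another compatible device instead of queueing it:

# bacula-sd.conf (sketch)
Device {
  Name = Full_Device_A
  Device Type = File
  Media Type = Full_File
  Archive Device = /backups/full
  LabelMedia = yes; Random Access = yes; AutomaticMount = yes
  Maximum Concurrent Jobs = 5        # once 5 jobs are running here ...
}
Device {
  Name = Full_Device_A1
  Device Type = File
  Media Type = Full_File             # same Media Type is what makes it a "compatible drive"
  Archive Device = /backups/full
  LabelMedia = yes; Random Access = yes; AutomaticMount = yes
  Maximum Concurrent Jobs = 5        # ... the SD can start job 6 on this one
}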

Any help understanding if this will work and how to configure it, or just a
push in the right direction to understand this directive is very
appreciated.

Thankfully Yours,
Shon


Re: [Bacula-users] Is Truncate just broken?

2013-07-22 Thread Mingus Dew
Thanks for the reply. Specifically I'm using Bacula 5.2.13 on Solaris 10
x86. My disk backups are stored on ZFS. I saw some old threads about
truncate not working right on NFS but then some code changes were made that
actually deleted, then recreated the volume instead of using ftruncate. I
know this used to work for me, but doesn't seem to anymore. I'm wondering
if something happened in more recent Bacula versions that changed this
again.
Yours,
Shon


On Fri, Jul 12, 2013 at 3:53 AM, Radosław Korzeniewski 
rados...@korzeniewski.net wrote:

 Hello,

 2013/7/11 Mingus Dew shon.steph...@gmail.com

 I don't think I've ever gotten it to work under any circumstance, either
 manually or automated.


 It is working fine, automatically or manually. I use it on a lot of Bacula
 deployments without problems.

 best regards
 --
 Radosław Korzeniewski
 rados...@korzeniewski.net



Re: [Bacula-users] Need help understanding and applying some Retention/Recycle concepts

2013-07-22 Thread Mingus Dew
Dear Ana,
Thanks for the response. It took a few days to find this in the Inbox.
I see what you're saying. What else besides Max Volumes tells Bacula not to
label any more? I kind of wanted it to manage that on its own; otherwise
I'll be changing that every time I add a new client. Then of course if I
remove clients I'll have to update the pool, delete the volumes I want
gone, and update the pool again. That doesn't seem a very efficient way to
use something that autolabels. I don't know, but I do understand what you
linked to. Thanks.
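
For anyone finding this thread later, the kind of bound Ana describes can be expressed in the Pool itself (a sketch based on the pool from the original post; the cap of 30 is only an example, and directive behaviour should be checked against the manual for your release):

Pool {
  Name = Archimedes-FS_Full
  Pool Type = Backup
  Storage = Archimedes_Full_Files
  Recycle = Yes
  AutoPrune = Yes
  Volume Retention = 14d
  Maximum Volume Jobs = 1
  Maximum Volumes = 30              # example cap: roughly two weeks of one-job volumes
  Recycle Oldest Volume = Yes       # lets Bacula prune/reuse the oldest volume when none is appendable
  Label Format = Archimedes-FS_Full-
  Next Pool = Archimedes_LTO5_Tapes
}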

Yours,
Shon


On Fri, Jul 12, 2013 at 12:43 PM, Ana Emília Machado de Arruda 
emiliaarr...@gmail.com wrote:

 Hi Shon,

 The question is that Bacula needs more than just the retention period to
 recycle volumes when you're using disk volumes and automatic labeling. If
 you don't have a limit on the volumes in your pool, or something else that
 tells Bacula not to label any more volumes, it won't just reuse an older
 one that can be recycled (whose retention period was reached).

 Take a look at the link I posted answering you.

 Regards,
 Ana






Re: [Bacula-users] bconsole - connection refused

2013-07-11 Thread Mingus Dew
Just wanted to update everyone on what happened.

I found that I was hitting the ulimit, which was set very low at 256.
I tried to set it higher but found it was being ignored. I then found it's
an issue with Solaris stdio.h limiting processes to 256 descriptors.
To work around it, I modified the Bacula startup scripts to set the ulimit
and set an ENV variable to overcome the stdio.h limitation.
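
In case it helps someone else on Solaris 10, the workaround looks roughly like this (a sketch; the extendedFILE preload is my assumption about which mechanism was meant, since the post doesn't name the variable):

# added near the top of the bacula-dir / bacula-sd start script
ulimit -n 4096                              # raise the per-process descriptor limit
LD_PRELOAD_32=/usr/lib/extendedFILE.so.1    # let 32-bit stdio use descriptors above 255
export LD_PRELOAD_32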

Thanks
Shon


On Wed, Jun 12, 2013 at 3:32 PM, Marcin Haba ganius...@gmail.com wrote:



 On Mon, 2013-06-10 at 15:55 -0400, Mingus Dew wrote:
  :-) At first I thought you were trolling me a little, but then I
  looked and sure enough, my Director has died.
  How can I find out what happened?

 Hi,

 For example you can start on Bacula Bug Tracker :)

 http://bugs.bacula.org/view.php?id=1943

 Do you think that it is connected with your Director crash problem?

 Regards.
 Marcin Haba (gani)




[Bacula-users] Need help understanding and applying some Retention/Recycle concepts

2013-07-11 Thread Mingus Dew
Dear All,
 I am running Bacula Community Edition 5.2.13 on Solaris 10 x86. I am
trying to understand why Bacula doesn't seem to recycle when I think it
should.
 I have recently moved to a configuration where each Volume contains
only a single Job. I had envisioned that when the Volume Retention period
was reached, the Volume would be set to Recycle and reused. Instead what
I'm finding is that Bacula is taking much longer to Recycle Volumes than I
think it should and is instead creating more new Volumes.

This is a relevant Pool Config

Pool {
  Name = Archimedes-FS_Full
  Pool Type = Backup
  Storage = Archimedes_Full_Files
  Recycle = Yes
  AutoPrune = Yes
  Volume Retention = 14d
  Maximum Volume Jobs = 1
  Label Format = Archimedes-FS_Full-
  Next Pool = Archimedes_LTO5_Tapes
}

This is a standard Job Config using Volumes in that Pool

Job {
  Name = ar-web1_Daily_Disk
  Type = Backup
  Client = ar-web1.archimedes
  FileSet = Archimedes_Solaris_FileSystems
  Schedule = Incr_0300_Wed_Full
  Pool = Archimedes-FS_Incr
  JobDefs = DiskDefs
  Full Backup Pool = Archimedes-FS_Full
  Incremental Backup Pool = Archimedes-FS_Incr
  Messages = Standard
  Write Bootstrap = /mnt/backup1/archimedes/bacula/bootstrap/%n.bsr
}

This is the Client definition

Client {
  Name = ar-web1.archimedes
  Address = 10.0.6.101
  FDPort = 9102
  Catalog = Mentora_Catalog
  Password = m7pJY91cdgeYjknTffyj3/vBd+vpRzh6briOhZV5yWWN
  AutoPrune = yes
}


As you can see, the only place where I define a retention period is in the
Pool. Nothing should be overriding this. Yet here are the Volumes. I can
see that the period between LastWritten times is 7 days, coinciding with my
Full Backup Schedule; however, Bacula should not be retaining 4 weeks of
these files, only 2 (14d):
| MediaId | VolumeName              | VolStatus | Enabled | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType       | LastWritten         |
|     108 | Archimedes-FS_Full-0108 | Recycle   |       1 |              1 |        0 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-06 03:07:21 |
|     128 | Archimedes-FS_Full-0128 | Used      |       1 | 33,862,178,634 |        7 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-06 06:53:05 |
|     129 | Archimedes-FS_Full-0129 | Used      |       1 | 39,038,099,750 |        9 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-06 10:01:46 |
|     130 | Archimedes-FS_Full-0130 | Used      |       1 |  5,130,340,345 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-06 10:04:51 |
|     131 | Archimedes-FS_Full-0131 | Used      |       1 |  4,744,597,690 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-06 10:07:48 |
|     132 | Archimedes-FS_Full-0132 | Used      |       1 |  5,376,150,353 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-12 03:09:06 |
|     133 | Archimedes-FS_Full-0133 | Used      |       1 |  4,790,685,951 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-12 03:11:47 |
|     134 | Archimedes-FS_Full-0134 | Used      |       1 |  2,878,371,042 |        0 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-12 03:16:04 |
|     135 | Archimedes-FS_Full-0135 | Used      |       1 | 33,834,286,944 |        7 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-12 07:03:00 |
|     136 | Archimedes-FS_Full-0136 | Used      |       1 | 39,201,368,421 |        9 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-12 10:10:53 |
|     137 | Archimedes-FS_Full-0137 | Used      |       1 |  4,820,516,075 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-19 03:06:47 |
|     138 | Archimedes-FS_Full-0138 | Used      |       1 | 40,170,205,038 |        9 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-19 06:22:59 |
|     139 | Archimedes-FS_Full-0139 | Used      |       1 |  5,126,884,169 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-19 06:25:25 |
|     140 | Archimedes-FS_Full-0140 | Used      |       1 | 33,890,965,413 |        7 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-19 10:13:47 |
|     141 | Archimedes-FS_Full-0141 | Used      |       1 |  2,920,752,161 |        0 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-06-19 10:17:54 |
|     142 | Archimedes-FS_Full-0142 | Used      |       1 |  5,567,216,031 |        1 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-07-03 03:14:42 |
|     143 | Archimedes-FS_Full-0143 | Used      |       1 | 33,858,531,390 |        7 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-07-03 07:02:09 |
|     144 | Archimedes-FS_Full-0144 | Used      |       1 | 31,142,987,941 |        7 |    1,209,600 |       1 |    0 |         0 | Archimedes_File | 2013-07-03 10:11:46 |
|     145 | Archimedes-FS_Full-0145 | Used      |       1 |  3,086,752,333 |        0 |    1,209,600 |       1 |    0 |         0 |

[Bacula-users] bconsole - connection refused

2013-06-10 Thread Mingus Dew
Dear All,
 I'm having an issue with bconsole not being able to connect to the
Director

bconsole[18490]: [ID 702911 daemon.error] bsock.c:138 Unable to connect to
Director daemon on mt-back4.mentora.:9101. ERR=Connection refused

I *think* what has happened is that so many jobs are in the queue that I've
exceeded the number of connections the Director allows? This actually
happened to me a couple of weeks ago after an incredibly large backup job
backed up the queue.

I was wondering how to more accurately troubleshoot this, if I can recover
without killing, etc...

PLEASE - do not respond about authentication, etc... This is not an
authentication issue.

Thanks,
Shon


Re: [Bacula-users] bconsole - connection refused

2013-06-10 Thread Mingus Dew
:-) At first I thought you were trolling me a little, but then I looked and
sure enough, my Director has died.
How can I find out what happened?

Thanks,
Shon


On Mon, Jun 10, 2013 at 2:06 PM, Radosław Korzeniewski 
rados...@korzeniewski.net wrote:

 Hello,

 2013/6/10 Mingus Dew shon.steph...@gmail.com

 Dear All,
  I'm having an issue with bconsole not being able to connect to the
 Director

 bconsole[18490]: [ID 702911 daemon.error] bsock.c:138 Unable to connect
 to Director daemon on mt-back4.mentora.:9101. ERR=Connection refused

 I *think* what has happened is that so many jobs are in queue that I've
 exceeded the number of connections the Directory allows? This actually
 happened to me a couple weeks ago after an incredibly large backup job
 backed up the queue.

 I was wondering how to more accurately troubleshoot this, if I can
 recover without killing, etc...

 PLEASE - do not respond about authentication, etc... This is not an
 authentication issue.


 Yes, you are right. It is not an authorization issue, because the error
 is: ERR=Connection refused. :)

 There is no Bacula Director listening at: mt-back4.mentora.:9101. Check it.

 best regards
 --
 Radosław Korzeniewski
 rados...@korzeniewski.net



[Bacula-users] bacula-dir died...what now?

2013-06-10 Thread Mingus Dew
Dear All,
 I posted under another topic earlier, but since discovering that my
Director crashed, I was wondering what I can do before restarting it to
find out what happened.

Thanks,
Shon


[Bacula-users] Where can Catalog be used?

2013-06-06 Thread Mingus Dew
Dear All,
 I've been reading the manual, but can only find one place where
Catalog is used: in the Client definition. Obviously it's defined in the
Director definition, but I can only find it used in Client.

 Does anyone know anywhere else it can be used? I tried it in Pool, but
that violates Bacula's constraint that all Pools be defined in all
Catalogs.

Yours,
Shon


Re: [Bacula-users] Multiple Catalog Configuration

2013-06-04 Thread Mingus Dew
Thanks very much for the reply and clarification. I'll be able to get to
work on this with peace of mind.

Yours,
Shon


On Tue, Jun 4, 2013 at 4:49 AM, Uwe Schuerkamp uwe.schuerk...@nionex.net wrote:

 On Mon, Jun 03, 2013 at 03:45:48PM -0400, Mingus Dew wrote:
  Does anyone have an example they can share of using multiple catalogs? I
  have a particular set of NAS backups that has over 500,000,000 file records
  and pretty much need to use a separate catalog to hopefully overcome 3 days
  of table locking during inserts that prevents other jobs from completing.
 
  Thanks,
  Shon

 Hi Shon,

 just define your new catalogs in the director's config and assign the
 catalog to the new clients:

 Catalog {
   Name = catalog1
   dbname = catalog1; user = bacula; password = XXX
 }

 Catalog {
   Name = catalog2
   dbname = catalog2;user = bacula; password = YY
 }

 .

 Client {
   Name = client1
   Address = client1.example.com
   FDPort = 9102
   Catalog = catalog1
 ...


 Client {
   Name = client2
   Address = client2.example.com
   FDPort = 9102
   Catalog = catalog2

 Cheers, Uwe

 --
 NIONEX --- A company of Bertelsmann SE & Co. KGaA





[Bacula-users] Question on Copy Jobs - Selection Type PoolUncopiedJobs

2013-06-04 Thread Mingus Dew
Dear All,
 I have an existing Bacula installation that uses separate disk pools
for Incremental and Full backups. I have set up a Copy Job to a tape pool
using the PoolUncopiedJobs Selection Type, for the Full disk pool only.

 I now wish to set this up for the Incremental pool, but only from now
forward. My concern is that the first run of the Copy Job will select all
the current Jobs in the Incremental Pool, which would put 3 previous weeks
of Incrementals onto one week's tapes.

Does anyone have any suggestions on how to avoid this?
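
One way people handle the first run (a sketch; the query assumes the standard catalog column names and should be tested with the sqlquery command before relying on it) is to run the new Copy Job once with Selection Type = SQLQuery restricted by StartTime, then switch it to PoolUncopiedJobs afterwards:

Job {
  Name = Copy_Incr_To_Tape          # hypothetical job
  Type = Copy
  ...
  Selection Type = SQLQuery
  Selection Pattern = "SELECT Job.JobId FROM Job, Pool WHERE Pool.Name = 'Company-FS_Incr' AND Pool.PoolId = Job.PoolId AND Job.Type = 'B' AND Job.JobStatus IN ('T','W') AND Job.StartTime >= '2013-06-04 00:00:00' ORDER BY Job.StartTime"
}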

Sincere Thanks,
Shon


[Bacula-users] Multiple Catalog Configuration

2013-06-03 Thread Mingus Dew
Does anyone have an example they can share of using multiple catalogs? I
have a particular set of NAS backups that has over 500,000,000 file records
and pretty much need to use a separate catalog to hopefully overcome 3 days
of table locking during inserts that prevents other jobs from completing.

Thanks,
Shon


Re: [Bacula-users] Pools not updating Attributes

2013-04-02 Thread Mingus Dew
Dear Uwe,
 This in fact did work. I had to remove the volumes I no longer
wanted to retain, that were over the Max Volumes I set for the pool;
the Max Volumes then finally updated after stopping and restarting Bacula.
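
Spelled out as bconsole steps, the sequence that worked was roughly this (the volume name is a placeholder):

*reload                               # pick up the edited Maximum Volumes
*delete volume=Company-FS_Full-0005   # repeat for each volume over the new cap
*update                               # choose "Pool from resource", then the affected pool
*list pools                           # MaxVols should now match the config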

Thanks much,
Shon

On Thu, Mar 21, 2013 at 2:43 AM, Uwe Schuerkamp
uwe.schuerk...@nionex.net wrote:
 On Wed, Mar 20, 2013 at 05:18:37PM -0400, Mingus Dew wrote:
 Dear All,
  I am running bacula-5.2.10 on Solaris 10 x86. I've noticed an
 issue recently where I've wanted to further restrict the
 Maximum Volumes in a couple of Pools.
 I edited the config and changed Maximum Volumes = 3. I then reloaded
 and updated the Pools from Resource. It says it successfully updates.
 I've also tried restarting bacula-dir and then updating the Pools from
 Resource. Here is my config, names changed to protect the innocent.


 Hi Mingus,

 check the list archives, I recently also started a discussion on this
 subject. For now, I've found the only way to shrink a pool is to

 - change  reload your bacula config

 - delete extra volumes (hopefully the oldest ones) manually from the
  pool in bconsole using the delete volume command

 - run update pool in bconsole for the affected pool

 Only then will the pool attributes actually update and reflect the
 changed number of max. volumes.

 Naturally all of this can be scripted, and I hope I will eventually get
 around to it (I recently changed the incr pool max volumes for ~120
 clients and don't feel like repeating all of the above steps for each
 client manually ;)

 I've asked around whether this is a Bacula bug, but apparently it is
 considered a feature ;-)

 All the best, Uwe


 --
 NIONEX --- A company of Bertelsmann SE & Co. KGaA





[Bacula-users] Broken Job History

2013-04-02 Thread Mingus Dew
Dear All,
 I am running Bacula 5.2.10. I have the following configured in
bacula-dir.conf

Statistics Retention = 90 days

However, Bacula seems to have stopped updating the JobHisto table well over
a year ago. I am uncertain as to why this wouldn't be working.

Any Ideas?
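
One thing worth checking (this is my understanding of how the statistics tables are fed, not something stated in the thread): JobHisto only gets rows when the update stats command is run, usually from a scheduled Admin job, and Statistics Retention merely controls how long those rows are kept when they are pruned. For example, in bconsole:

*update stats
*prune stats yes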

Yours,
Shon



[Bacula-users] Pools not updating Attributes

2013-03-20 Thread Mingus Dew
Dear All,
 I am running bacula-5.2.10 on Solaris 10 x86. I've noticed an
issue recently where I've wanted to further restrict the
Maximum Volumes in a couple of Pools.
I edited the config and changed Maximum Volumes = 3. I then reloaded
and updated the Pools from Resource. It says it successfully updates.
I've also tried restarting bacula-dir and then updating the Pools from
Resource. Here is my config, names changed to protect the innocent.

Pool {
  Name = Company-FS_Full
  Pool Type = Backup
  Storage = Company_Files
  Recycle = Yes
  AutoPrune = Yes
  Volume Retention = 14d
  Maximum Volume Jobs = 25
  Maximum Volumes = 3
  Label Format = Company-FS_Full-
  Next Pool = Company_LTO5_Tapes
}

Pool {
  Name = Company-FS_Incr
  Pool Type = Backup
  Storage = Company_Files
  Recycle = Yes
  AutoPrune = Yes
  Volume Retention = 14d
  Maximum Volume Jobs = 150
  Maximum Volumes = 3
  Label Format = Company-FS_Incr-
  Next Pool = Company_LTO5_Tapes
}

If I run list pools in bconsole, I see this:
+--------+-----------------+---------+---------+----------+------------------+
| PoolId | Name            | NumVols | MaxVols | PoolType | LabelFormat      |
+--------+-----------------+---------+---------+----------+------------------+
|     20 | Company-FS_Full |       5 |       5 | Backup   | Company-FS_Full- |
|     21 | Company-FS_Incr |       4 |       5 | Backup   | Company-FS_Incr- |
+--------+-----------------+---------+---------+----------+------------------+

So not only is Bacula not updating the Maximum Volumes, but it's still
creating additional volumes (I had removed some volumes).

Can anyone suggest how I can even begin to troubleshoot this?

Sincerely Yours,
Shon

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Pools not updating Attributes

2013-03-20 Thread Mingus Dew
Dear John,
 Thanks for the quick response.

On Wed, Mar 20, 2013 at 5:26 PM, John Drescher dresche...@gmail.com wrote:
 On Wed, Mar 20, 2013 at 5:18 PM, Mingus Dew shon.steph...@gmail.com wrote:
 Dear All,
  I am running bacula-5.2.10 on Solaris 10 x86. I've noticed an
 issue recently where I've wanted to further restrict the
 Maximum Volumes in a couple of Pools.
 I edited the config and changed Maximum Volumes = 3. I then reloaded
 and updated the Pools from Resource. It says it successfully updates.
 I've also tried restarting bacula-dir and then updating the Pools from
 Resource. Here is my config, names changed to protect the innocent.

 Pool {
   Name = Company-FS_Full
   Pool Type = Backup
   Storage = Company_Files
   Recycle = Yes
   AutoPrune = Yes
   Volume Retention = 14d
   Maximum Volume Jobs = 25
   Maximum Volumes = 3
   Label Format = Company-FS_Full-
   Next Pool = Company_LTO5_Tapes
 }

 Pool {
   Name = Company-FS_Incr
   Pool Type = Backup
   Storage = Company_Files
   Recycle = Yes
   AutoPrune = Yes
   Volume Retention = 14d
   Maximum Volume Jobs = 150
   Maximum Volumes = 3
   Label Format = Company-FS_Incr-
   Next Pool = Company_LTO5_Tapes
 }

 If I run list pools in bconsole, I see this:
 +--------+-----------------+---------+---------+----------+------------------+
 | PoolId | Name            | NumVols | MaxVols | PoolType | LabelFormat      |
 +--------+-----------------+---------+---------+----------+------------------+
 |     20 | Company-FS_Full |       5 |       5 | Backup   | Company-FS_Full- |
 |     21 | Company-FS_Incr |       4 |       5 | Backup   | Company-FS_Incr- |
 +--------+-----------------+---------+---------+----------+------------------+

 So not only is Bacula not updating the Maximum Volumes, but it's still
 creating additional volumes (I had removed some volumes).

 Can anyone suggest how I can even begin to troubleshoot this?


 This is by design. The pool resource is a template used to create new
 volumes, not to alter existing volumes. To fix this, open bconsole and

 update pool from resource

This is precisely what I did. I'm not trying to alter existing
volumes; I'm trying to limit volume creation in the pool. It did not
affect the display of Max Vols in the output, nor did it limit volume
creation in the Pool, which is what Maximum Volumes is supposed to do
by design.




 then

 update all volumes in pool

 John

--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Strange Device Status - Device is BLOCKED. User unmounted

2012-08-07 Thread Mingus Dew
Thanks Patti, I mounted a volume and then used release as suggested by
John. That cleared it up.

On Mon, Aug 6, 2012 at 12:34 PM, Clark, Patricia A. clar...@ornl.gov wrote:
 I've encountered this whenever I've used an unmount command.  It will stay
 that way until I manually mount a tape volume.  It does not matter if I
 use the volume or not, but it clears the BLOCKED status.

 Patti Clark
 Information International Associates, Inc.
 Linux Administrator and subcontractor to:
 Research and Development Systems Support Oak Ridge National Laboratory


 On 8/6/12 11:53 AM, Mingus Dew shon.steph...@gmail.com wrote:

Device TL2000_DTE1 (/dev/rmt/1cbn) is not open.
Device is BLOCKED. User unmounted.
Drive 1 status unknown.


I'm not sure why this is in this status or how to unblock.

Yours,
Shon

--

Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and
threat landscape has changed and how IT managers can respond. Discussions
will include endpoint security, mobile security and the latest in malware
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


 --
 Live Security Virtual Conference
 Exclusive live event will cover all the ways today's security and
 threat landscape has changed and how IT managers can respond. Discussions
 will include endpoint security, mobile security and the latest in malware
 threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to specify drive of Autochanger?

2012-08-07 Thread Mingus Dew
Dear John,
 Can you send me a copy of the relevant parts of your config? I'd
like to compare them to mine. I think what you're saying is that you
can send jobs to either drive, or the autochanger, but the robot will
handle the tapes in either case?

Yours,
Shon

On Mon, Aug 6, 2012 at 10:44 AM, John Drescher dresche...@gmail.com wrote:
 On Mon, Aug 6, 2012 at 10:32 AM, Mingus Dew shon.steph...@gmail.com wrote:
 Dear All,
 Does anyone know if its possible to run a job from Command line
 that specifies the drive of the autochanger to be used?


 I have a device for each of my tape drives along with the autochanger
 itself in bacula-dir.conf. That way I can send jobs to the autochanger
 or either drive of my 2 drive exabyte magnum. With that said I have to
 warn you that although this has worked for me since 2006 the
 developers (including Kern) say that this is an unsupported
 configuration.

 John
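For reference, a minimal sketch of that kind of setup in bacula-dir.conf
(the per-drive Storage names and the job name are made up; the Device,
Address and Media Type values are taken from the TL2000 config I posted,
and the whole thing is the "unsupported but working" layout John describes):

Storage {
  Name = TL2000_Drive0
  Address = backup1.internal
  SDPort = 9103
  Password = "xxx"          # same password as the existing TL2000 Storage
  Device = TL2000_DTE0
  Media Type = LTO-5
  Autochanger = Yes         # the drive still lives inside the changer
}

Storage {
  Name = TL2000_Drive1
  Address = backup1.internal
  SDPort = 9103
  Password = "xxx"
  Device = TL2000_DTE1
  Media Type = LTO-5
  Autochanger = Yes
}

A job could then be pointed at one drive from the command line with
something like:

  run job=SomeJob storage=TL2000_Drive1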

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Perpetual Issues w/ 2 Drive Autochangers not utilizing 2nd drive

2012-08-06 Thread Mingus Dew
Dear Bacula Users,
I have been using Bacula for about 4 years now. I recently moved
from a single drive LTO-3 Exabyte Autochanger to a 2 drive LTO-5
TL-2000 Autochanger. All the devices are recognized properly and I can
read/write with both drives. The issue I'm having is that Bacula almost
never uses the 2nd drive. It does occasionally, but I can't figure out
a rhyme or reason for it.
 I've tried setting Prefer Mounted Volumes = No in a JobDefs
resource for Tape jobs, but still no luck. It does work, just randomly
and not in a way that seems configurable, predictable, or useful.
Can anyone help?
Yours,
Shon


# Media storage devices
Storage {
  Name = TL2000
  Address = backup1.internal
  SDPort = 9103
  Password = UFHNdDMDfuihpmPXxTJfTXYt63MBxQnSeA2cEgZOx1ht
  Device = TL2000_Library
  Media Type = LTO-5
  Autochanger = Yes
}

Autochanger {
  Name = TL2000_Library
  Device = TL2000_DTE0, TL2000_DTE1
  Changer Command = /opt/csw/etc/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/scsi/changer/c3t2002000E111471E5d1
}

Device {
  Name = TL2000_DTE0
  Media Type = LTO-5
  Archive Device = /dev/rmt/2cbn
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = yes
  Spool Directory = /mnt/backup1/bacula/spool
  Alert Command = sh -c '/opt/csw/sbin/tapeinfo -f %c |grep TapeAlert|cat'
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
  Drive Index = 0
}

Device {
  Name = TL2000_DTE1
  Media Type = LTO-5
  Archive Device = /dev/rmt/1cbn
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = yes
  Spool Directory = /mnt/backup1/bacula/spool
  Alert Command = sh -c '/opt/csw/sbin/tapeinfo -f %c |grep TapeAlert|cat'
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
  Drive Index = 1
}

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How to specify drive of Autochanger?

2012-08-06 Thread Mingus Dew
Dear All,
Does anyone know if it's possible to run a job from the command line
that specifies the drive of the autochanger to be used?

Yours,
Shon

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Strange Device Status - Device is BLOCKED. User unmounted

2012-08-06 Thread Mingus Dew
Device TL2000_DTE1 (/dev/rmt/1cbn) is not open.
Device is BLOCKED. User unmounted.
Drive 1 status unknown.


I'm not sure why this is in this status or how to unblock.

Yours,
Shon

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Synthetic, Copy, Restore from either?

2012-07-30 Thread Mingus Dew
Dear All,
 I was hoping someone could clarify for me a few points regarding
Synthetic Jobs, Copy Jobs, and restoring from Copy Jobs

The Synthetic job uses the last Full backup and subsequent Incremental
backups to create a new Full backup from current Volume data into a
new Pool/Volume. Does this new Job become the Full backup that new
Incremental or Full jobs refer to for change deltas and determining
which files to back up?

If I have Copy Jobs configured to copy the week's Incremental backups
to tape from disk volume, does the Synthetic backup consider these
jobs at all, or only the actual Incremental jobs?

In the case of Copy Jobs, I imagine the purpose to be having the data
on disk and on tape. Is it possible to restore directly from a Copy
Job?
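If I understand the docs, the feature behind this is the VirtualFull
level; a minimal sketch of the moving parts, with purely illustrative
names (only the Next Pool directive and level=VirtualFull matter here):

Pool {
  Name = Clients_Disk_Full
  ...
  Next Pool = Clients_Tape        # the consolidated Full is written here
}

# in bconsole, or via a Run line in a Schedule:
  run job=SomeClient_Backup level=VirtualFull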

Yours,
Shon

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Right way to clean up from old clients, jobs?

2012-05-16 Thread Mingus Dew
Dear All,
 When decommissioning a client, what is the best practice to clean
up old client and job data?

In trying to do some reporting from the database, I still see clients
that I removed from the config long ago, and job data for clients and
jobs that haven't existed in years.
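To make the question concrete, the kind of cleanup I have in mind looks
roughly like this (client and volume names are invented; dbcheck is the
utility that ships with Bacula for pruning orphaned catalog records):

  # in bconsole, after removing the client from bacula-dir.conf:
  * purge jobs client=old-client-fd
  * delete volume=Old-Client-Disk-0001

  # then, from the shell, let dbcheck fix orphaned records:
  dbcheck -f -c /opt/csw/etc/bacula/bacula-dir.conf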

Thanks,
Shon

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Running Jobs Table?

2012-05-16 Thread Mingus Dew
Dear All,
 I was wondering if bacula keeps current state information in a
SQL table (running jobs, status) or any other place that can be
accessed without bconsole? I'd like to monitor different aspects of the
running job queue (how long a job has been running, how many jobs are
waiting for resources, etc.)
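In case it helps frame the question, a rough query of the kind I have in
mind against the catalog (MySQL; column names as in the 5.x schema,
where 'R' = running and 'C' = created/queued):

  SELECT JobId, Name, Level, JobStatus, SchedTime, StartTime
  FROM Job
  WHERE JobStatus IN ('R', 'C')
  ORDER BY SchedTime;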

Thanks,
Shon

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Why does installing WinBacula NEVER work?

2012-03-16 Thread Mingus Dew
Dear All,
 I realize it's kind of a strong subject line, but I seriously
haven't gotten the Windows FD to run on any platform in a very long
time. It installs, the config looks good, but I ALWAYS, on any platform
(2k, 2k3, 2k8), get error 1067 (process quit unexpectedly).

Any ideas why it doesn't run?

Shon

--
This SF email is sponsosred by:
Try Windows Azure free for 90 days Click Here 
http://p.sf.net/sfu/sfd2d-msazure
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] NDMP Plugin coming to community release?

2012-03-05 Thread Mingus Dew
Just wondering if anyone knows if the NDMP plugin will be coming to
the community release of Bacula anytime soon.

Yours,
Shon

--
Try before you buy = See our experts in action!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-dev2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula Events and Python Scripting

2012-02-14 Thread Mingus Dew
Dear All,
 I am currently truncating volumes on disk manually. In my disk
pools I keep 5 volumes (max volumes = 5). Each of these volumes is
limited with max jobs = 6 * N-clients (incremental volumes) and max
jobs = 1 * N-clients (full volumes). All the volumes have the Action
On Purge value set to Truncate. Each week I purge and truncate the
oldest volume manually.

 I intend to write a Python script to do this, but was hoping to
get some advice on Bacula events and feedback on my plan of attack.

The documentation states that "A Bacula event is a point in the Bacula
code where Bacula will call a subroutine (actually a method) that you
have defined in the Python StartUp script. Events correspond to some
significant event such as a Job Start, a Job End, Bacula needs a new
Volume Name, ... When your script is called, it will have access to
all the Bacula variables specific to the Job (attributes of the Job
Object), and it can even call some of the Job methods (subroutines) or
set new values in the Job attributes, such as the Priority. You will
see below how the events are used."

However, is there some other documentation that lists the actual
defined Events, or is an event anything we can define in the
method?

I envision that the Python code will work something like:

When a Full Job ends,
Check Volumes in the Pool for status Used
Select the oldest written Volume and Purge it with Action = Truncate

When an Incr Job ends,
Check how many previous Incr Jobs have run since last Full
If the Incr Job is the last before next Full,
Check Volumes in the Pool for status Used
Select the oldest written Volume and Purge it with Action = Truncate
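In the meantime, a very rough sketch of the shape I have in mind,
driving bconsole from outside rather than the in-director Python events
(the bconsole path is an assumption, the pool and storage names are just
two of mine used as examples, and the output parsing is deliberately
naive):

#!/usr/bin/env python
# Sketch only: purge-and-truncate the oldest "Used" volume in a pool.
import subprocess

BCONSOLE = "/opt/csw/sbin/bconsole"          # assumed install path

def bconsole(commands):
    """Feed a list of commands to bconsole and return its text output."""
    proc = subprocess.Popen([BCONSOLE], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, universal_newlines=True)
    out, _ = proc.communicate("\n".join(commands) + "\nquit\n")
    return out

def truncate_oldest_used(pool, storage):
    # Parse 'list volumes' very naively; a real script would query the
    # catalog directly.  Assumes bconsole lists the oldest volumes first.
    listing = bconsole(["list volumes pool=%s" % pool])
    used = [line.split("|")[2].strip()
            for line in listing.splitlines() if "| Used " in line]
    if used:
        bconsole(["purge volume=%s action=truncate storage=%s"
                  % (used[0], storage)])

if __name__ == "__main__":
    truncate_oldest_used("Advisen-FS_Incr", "Advisen_Files")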

Yours,
Shon

--
Keep Your Developer Skills Current with LearnDevNow!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-d2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Dell TL2000 (IBM Ultrium HH LTO-5) and Solaris

2012-01-09 Thread Mingus Dew
Dear Bacula Community,
 I am seeking help from anyone who has a TL2000/4000 or other
autochanger that utilizes the IBM Ultrium HH LTO-5 drives or similar
on a Solaris 10 system. I am seeking the correct st.conf definition for
this drive. I have an existing definition for an LTO-3 drive and the
system is able to configure and use the LTO-5 drives, but the speed is
horrendous.

Yours,
Shon

--
Ridiculously easy VDI. With Citrix VDI-in-a-Box, you don't need a complex
infrastructure or vast IT resources to deliver seamless, secure access to
virtual desktops. With this all-in-one solution, easily deploy virtual 
desktops for less than the cost of PCs and save 60% on VDI infrastructure 
costs. Try it free! http://p.sf.net/sfu/Citrix-VDIinabox
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Using Multi-Drive Autochangers - My Autochanger is only utilizing 1 Tape Drive

2011-12-08 Thread Mingus Dew
Dear All,
 I am running bacula 5.0.3 on Solaris 10 x86. I have a Dell TL2000
24-slot Autoloader Library with 2 HH LTO5 Drives. The changer works
and the device will load tapes and write backups. However, I'm kind of
in the dark as to how to get it to use the 2nd drive. The
documentation is really thin in this regard. I've set up the Drive
Index parameter and listed both devices in the Autochanger config. I
was reading about PreferMountedVolumes but that doesn't seem to be
the solution to my issue. I'd like to know if

Job A in Pool A writing to Tape 0 can run at the same time as Job B
in Pool B writing on Tape 1

How to make Bacula use both Drives? I purchased the unit because
the Bacula docs say it can use multi-drive autochangers. I also purchased
it because a single-tape drive just isn't enough to capture everything
daily to tape.

Thanks for the help,
Shon


Autochanger {
  Name = TL2000_Library
  Device = TL2000_DTE0, TL2000_DTE1
  Changer Command = /opt/csw/etc/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/scsi/changer/c3t2002000E111471E5d1
}

Device {
  Name = TL2000_DTE0
  Media Type = LTO-5
  Archive Device = /dev/rmt/2cbn
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = yes
  Spool Directory = /mnt/backup1/bacula/spool
  Alert Command = sh -c '/opt/csw/sbin/tapeinfo -f %c |grep TapeAlert|cat'
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
  Drive Index = 0
}

Device {
  Name = TL2000_DTE1
  Media Type = LTO-5
  Archive Device = /dev/rmt/1cbn
  LabelMedia = Yes
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  AutoChanger = yes
  Spool Directory = /mnt/backup1/bacula/spool
  Alert Command = sh -c '/opt/csw/sbin/tapeinfo -f %c |grep TapeAlert|cat'
  Maximum Changer Wait = 900
  Maximum Rewind Wait = 900
  Maximum Open Wait = 900
  Drive Index = 1
}

--
Cloud Services Checklist: Pricing and Packaging Optimization
This white paper is intended to serve as a reference, checklist and point of 
discussion for anyone considering optimizing the pricing and packaging model 
of a cloud services business. Read Now!
http://www.accelacomm.com/jaw/sfnl/114/51491232/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Using Multi-Drive Autochangers - My Autochanger is only utilizing 1 Tape Drive

2011-12-08 Thread Mingus Dew
Thank you

On Thu, Dec 8, 2011 at 10:20 AM, John Drescher dresche...@gmail.com wrote:
 Job A in Pool A writing to Tape 0 can run at that same time as Job B
 in Pool B writing on Tape 1

 Yes. That should work (and does for me) without much fuss. You get
 into a more difficult situation if you want both drives to use the
 same pool at the same time.

Can you look at my config and see if there are any missing or incorrect
elements? I was also wondering if MaxConcurrentJobs for Devices might
need to be set. Though this seems to apply to getting 2 drives running
jobs from the same pool.
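For completeness, the knobs I mean (a sketch only; the values are
examples, and whether the Device-level setting is needed at all is
exactly my question):

  # bacula-dir.conf -- let the autochanger Storage accept two jobs at once
  Storage {
    Name = TL2000
    ...
    Maximum Concurrent Jobs = 2
  }

  # bacula-sd.conf -- one job per physical drive
  Device {
    Name = TL2000_DTE0
    ...
    Maximum Concurrent Jobs = 1
  }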


 John

--
Cloud Services Checklist: Pricing and Packaging Optimization
This white paper is intended to serve as a reference, checklist and point of 
discussion for anyone considering optimizing the pricing and packaging model 
of a cloud services business. Read Now!
http://www.accelacomm.com/jaw/sfnl/114/51491232/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Help - Client Status shows Backup OK - Job not in Catalog

2011-08-18 Thread Mingus Dew
Dear All,
     I am running Bacula 5.0.3 (04 August 2010) on i386-pc-solaris2.10
(Solaris 5.10).
I have a situation where backups are running and the client status
reports the backups as OK, but the jobs do not appear when I query
"List all backups for a Client". I know that the jobs aren't making it
into the Catalog because I have a Nagios check that runs against the
Catalog DB for Client Job status. The backup ran, as the client
reports, but the check and SQL query don't find it in the Catalog.

This seems to happen when my very large backups are running
with a status of "Dir inserting Attributes". However, I don't get any
error in the job status from Bacula and am not getting any MySQL, DIR,
SD, or FD errors. The very large job backs up 239,897,443 files.
This just might be more than MySQL can handle. I'm just wondering why
I'm not getting any errors from Bacula about it, or where to find the
errors that I'm not picking up on.
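(For my own notes, one way to try to surface errors that never make it
into the job report is to raise the debug level and turn on tracing; the
storage name below is just one of mine as an example, and the trace
output ends up in the daemons' working directory:

  * setdebug level=100 trace=1 dir
  * setdebug level=100 trace=1 storage=Shopbop_Files
)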

Also, right now my Director is hung. The very large job is "Dir
inserting Attributes" and there is a job with a fatal error. Any time
I've ever had a fatal error, I've had to restart the director. It just
won't recover, cancel the job, process other queued jobs, nothing.
Trying to cancel the job results in the message that it's marked to be
canceled, but bconsole then never returns to a prompt.


Console connected at 18-Aug-11 11:52
 JobId Level   Name   Status
==
 95559 FullAdvisen_NFS2_Documents_Tape.2011-08-17_23.00.00_50 Dir
inserting Attributes
 95560 FullAdvisen_NFS2_AMX_Tape.2011-08-17_23.00.00_51 is waiting
on max Storage jobs
 95561 FullAdvisen_NFS2_SEC_Tape.2011-08-17_23.00.00_52 is waiting
on max Storage jobs
 95565 FullMentora_NAS_Weekly_Tape.2011-08-18_00.00.00_57 is
waiting on max Storage jobs
 95593 Fullbop-prod-dw-ts_Daily_Disk.2011-08-18_02.30.00_25 has a
fatal error
 95608 Increme  bop-prod-bm06_Daily_Disk.2011-08-18_03.00.00_40 is
waiting on Storage Shopbop_Files
 95609 Increme  bop-prod-web01_Daily_Disk.2011-08-18_03.00.00_41 is
waiting on Storage Shopbop_Files
 95610 Increme  bop-prod-adm2_Daily_Disk.2011-08-18_03.00.00_42 is
waiting on Storage Shopbop_Files
 95614 Increme  bop-prod-app06_Daily_Disk.2011-08-18_04.00.00_46 is
waiting on Storage Shopbop_Files


Select Job:
 1: JobId=95559 Job=Advisen_NFS2_Documents_Tape.2011-08-17_23.00.00_50
 2: JobId=95560 Job=Advisen_NFS2_AMX_Tape.2011-08-17_23.00.00_51
 3: JobId=95561 Job=Advisen_NFS2_SEC_Tape.2011-08-17_23.00.00_52
 4: JobId=95565 Job=Mentora_NAS_Weekly_Tape.2011-08-18_00.00.00_57
 5: JobId=95593 Job=bop-prod-dw-ts_Daily_Disk.2011-08-18_02.30.00_25
 6: JobId=95608 Job=bop-prod-bm06_Daily_Disk.2011-08-18_03.00.00_40
 7: JobId=95609 Job=bop-prod-web01_Daily_Disk.2011-08-18_03.00.00_41
 8: JobId=95610 Job=bop-prod-adm2_Daily_Disk.2011-08-18_03.00.00_42
 9: JobId=95614 Job=bop-prod-app06_Daily_Disk.2011-08-18_04.00.00_46
Choose Job to cancel (1-9): 5
2001 Job bop-prod-dw-ts_Daily_Disk.2011-08-18_02.30.00_25 marked to be canceled.

Any help with diagnosing and solving this issue is very much appreciated.
I'm almost wondering if I've reached the end of the capabilities of
Bacula (probably not Bacula's fault, but just too much for MySQL or
PostgreSQL to bear, with no other viable backend available). I tuned for 15
million rows originally; I don't know if I can tune for what is probably
a billion rows or more.

Yours,
Shon

--
Get a FREE DOWNLOAD! and learn more about uberSVN rich system, 
user administration capabilities and model configuration. Take 
the hassle out of deploying and managing Subversion and the 
tools developers use with it. http://p.sf.net/sfu/wandisco-d2d-2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Help w/ failed Jobs while Dir Inserting Attributes

2011-07-01 Thread Mingus Dew
Dear Bacula Users,
 I am running v5.0.3 on Solaris 10 x86 platform with MySQL 5.1 as the
backend. I am currently having a problem where a few very large (15 million+
files) backups to the tape device are causing disk backup jobs to fail.

 89787 FullAdvisen_NFS2_AMX_Tape.2011-06-29_23.00.00_51 Dir inserting
Attributes
 89788 FullAdvisen_NFS2_SEC_Tape.2011-06-29_23.00.00_52 is waiting on
max Storage jobs
 89973 Increme  ad-data5_Daily_Disk.2011-07-01_05.00.01_08 Dir inserting
Attributes
 89974 Increme  ad-data6_Daily_Disk.2011-07-01_05.00.01_09 Dir inserting
Attributes

You can see that the first job is "Dir inserting Attributes"; this is the
large tape job. The disk jobs that are also inserting attributes fail:
they run, start to insert attributes, and then fail.

1-Jul 08:24 mt-back4.director JobId 89970: Error: Bacula mt-back4.director
5.0.3 (04Aug10): 01-Jul-2011 08:24:29
  Build OS:   i386-pc-solaris2.10 solaris 5.10
  JobId:  89970
  Job:ad-ora2_Daily_Disk.2011-07-01_05.00.01_05
  Backup Level:   Incremental, since=2011-06-30 05:00:02
  Client: ad-ora2.advisen 2.0.0 (04Jan07)
x86_64-redhat-linux-gnu,redhat,
  FileSet:LinuxDB_FileSystems 2011-02-20 10:59:25
  Pool:   Advisen-FS_Incr (From Job IncPool override)
  Catalog:Mentora_Catalog (From Client resource)
  Storage:Advisen_Files (From Pool resource)
  Scheduled time: 01-Jul-2011 05:00:01
  Start time: 01-Jul-2011 05:00:05
  End time:   01-Jul-2011 08:24:29
  Elapsed time:   3 hours 24 mins 24 secs
  Priority:   10
  FD Files Written:   96
  SD Files Written:   96
  FD Bytes Written:   164,038,683 (164.0 MB)
  SD Bytes Written:   164,047,185 (164.0 MB)
  Rate:   13.4 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): Advisen-FS_Incr-0144
  Volume Session Id:  1002
  Volume Session Time:1308776503
  Last Volume Bytes:  535,777,189,031 (535.7 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:*** Backup Error ***


As you can see, the FD and SD termination statuses are OK, but the job still fails.
I have to believe this is a DB locking issue, but I have no idea how to fix
this. Does anyone in the community have any ideas?
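For anyone willing to dig in, this is the kind of thing I can check on
the MySQL side while a job is in that state (plain MySQL commands,
nothing Bacula-specific is assumed):

  SHOW FULL PROCESSLIST;            -- long-running INSERTs into File / batch?
  SHOW TABLE STATUS LIKE 'File';    -- engine (MyISAM vs InnoDB) and row count
  SHOW ENGINE INNODB STATUS\G       -- lock waits, if the tables are InnoDB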

Thank you,
Shon
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] NDMP Support

2011-06-29 Thread Mingus Dew
Dear All,
 I recently saw that NDMP support is/was added to the Bacula Systems
Enterprise edition. When will this feature become available in the public
version?

Yours,
Shon
--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Help Troubleshooting Windows FD

2011-05-05 Thread Mingus Dew
Dear Konstantin,
 Thanks very much for your response. With yours and others' help I was able to
get the latest version of the client for Windows installed and working.

Sincerely Yours,
Shon

On Wed, May 4, 2011 at 8:55 AM, Konstantin Khomoutov 
flatw...@users.sourceforge.net wrote:

 On Wed, 4 May 2011 08:17:18 -0400
 Mingus Dew shon.steph...@gmail.com wrote:

   I am trying to install the latest Win64 client downloaded from
  bacula.org on a Windows 2008 server. The application does appear to
  install correctly, but when trying to start the service it fails to
  start and generates Error 1067: The process terminated unexpectedly.
 
   I actually have gotten this error, on multiple client versions,
  on all my different version (2k, 2k3, 2k8) Windows servers for over 2
  years now. I've not EVER been able to run Bacula on Windows and
  really need some help figuring out why this is.

 Start with debugging.
 Assuming you have installed it under %programfiles%\bacula, do this
 (supposedly you will need to do this using an administrative account):
 1) Run cmd.exe
 2) cd %programfiles%\bacula
 3) Run: bacula-fd.exe -c %programfiles%\bacula\bacula-fd.conf -t
   This should check your config file (specified via -c; you have to
   specify it explicitly because otherwise Bacula tends to look for it
   somewhere under your current user's %appdata%).
 If Bacula reports any errors, fix them.
 If it still doesn't work after fixing any errors detected,
 4) Run: bacula-fd.exe -c %programfiles%\bacula\bacula-fd.conf -d 100
   Which will create a trace file (called whatever.trace in the
   bacula's installation directory with whatever being the name of
   this FD instance read from the configuration file).
   Look at that file to see what happens when bacula-fd starts up.

--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Help Troubleshooting Windows FD

2011-05-04 Thread Mingus Dew
Dear All,
 I am trying to install the latest Win64 client downloaded from
bacula.org on a Windows 2008 server. The application does appear to install
correctly, but when trying to start the service it fails to start and
generates Error 1067: The process terminated unexpectedly.

 I actually have gotten this error, on multiple client versions, on all
my different versions of Windows servers (2k, 2k3, 2k8) for over 2 years now.
I've not EVER been able to run Bacula on Windows and really need some help
figuring out why this is.

Thank you,
Shon
--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Hardware Encryption?

2011-01-05 Thread Mingus Dew
Fellow Bacula Users,
 I am in search of a hardware-based encryption solution that is
OS/storage independent, whether it be a new drive or something else, even
something Bacula-compatible. The encryption provided within Bacula itself is not
sufficient for my needs, and during my last search I was unable to find an
LTO-4 drive with encryption that was capable of encrypting tapes without
interaction from the backup program (essentially encryption supported only by the
vendor's own software).

So some responses from fellow users on what you're doing for encryption
of Tapes outside of Bacula would be great.

Thanks,
Shon
--
Learn how Oracle Real Application Clusters (RAC) One Node allows customers
to consolidate database storage, standardize their database environment, and, 
should the need arise, upgrade to a full multi-node Oracle RAC database 
without downtime or disruption
http://p.sf.net/sfu/oracle-sfdevnl___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Need some Solaris advice - Bacula, MySQL memory leak?

2010-11-02 Thread Mingus Dew
Hi all,
 I recently upgraded to MySQL 5.1.57 from Blastwave packages. I then
upgraded to Bacula 5.0.3.
I read somewhere about a memory leak on Solaris and think I'm encountering
it. I was wondering if this is a leak in Bacula, or in MySQL.
I'm running Solaris 10 x86_64 127128-11

Basically, the system just slowly dies, less memory is available, the memory
isn't released when I stop Bacula or MySQL and I have to reboot the box.

If it's MySQL, I was hoping someone could point me to a fix or suggest a
version that I can compile myself that isn't subject to this. If it's Bacula,
what is
the recommended solution?

Thanks,
Shon
--
Nokia and ATT present the 2010 Calling All Innovators-North America contest
Create new apps  games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Synthetic to Tape

2010-11-02 Thread Mingus Dew
I want to setup Synthetic backups and write them to tape. I suspect that I
can use the Next Pool directive to point to a pool containing tape
volumes.
My intention is that I will be taking the last Full and subsequent
Incrementals from the Disk-To-Disk backup jobs and writing them to tape as a
Synthetic backup.

I was wondering if I will be able to restore from the synthetic job, or
exactly how a restore from this backup would work.

Thanks,
Shon
--
Nokia and ATT present the 2010 Calling All Innovators-North America contest
Create new apps  games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Job, File, Volume Retention?

2010-10-25 Thread Mingus Dew
All,
 I recall reading about this long ago, when I read the manual like a
book. I'm having a hard time finding the information now,

I have configured Job and File Retention periods in the Job Definitions and
then Volume Retention periods in the Pool Definitions

Job Retention = 30d
File Retention = 30d

Volume Retention = 14d

Which retention period will be the one implemented, or will this produce
some resultant Retention that I'm not aware of?

Thank you,
Shon
--
Nokia and ATT present the 2010 Calling All Innovators-North America contest
Create new apps  games for the Nokia N8 for consumers in  U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store 
http://p.sf.net/sfu/nokia-dev2dev___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-14 Thread Mingus Dew
Henrik,
 Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about
http://bugs.bacula.org/view.php?id=1472 specifically, and considering
that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big job yet.

Just curious as a fellow Solaris deployer...

Thanks,
Shon

On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:

 'Mingus Dew' wrote:

 All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
 MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible version of MySQL 5, but migrating to PostgreSQL isn't an
 option at this time.

 I am trying to backup to tape a very large number of files for a
 client. While the data size is manageable at around 2TB, the number of
 files is incredibly large.
 The first of the jobs had 27 million files and initially failed because
 the batch table became Full. I changed the myisam_data_pointer size
 to a value of 6 in the config.

 This job was then able to run successfully and did not take too long.

 I have another job which has 42 million files. I'm not sure what that
 equates to in rows that need to be inserted, but I can say that I've
 not been able to successfully run the job, as it seems to hang for
 over 30 hours in a Dir inserting attributes status. This causes
 other jobs to backup in the queue and once canceled I have to restart
 Bacula.

 I'm looking for way to boost performance of MySQL or Bacula (or both)
 to get this job completed.


 You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
 way in hell that MySQL 4 + MyISAM is going to perform decent in your
 situation.
 Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
 always available from www.mysql.com in the native pkg format so there
 really
 is no excuse.

 We run our Bacula Catalog MySQl servers on Solaris (OpenSolaris) so
 perhaps I can give you some pointers.

 Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

 Since you are using Solaris 10 I assume that you are going to run MySQL
 off ZFS - in that case you need to adjust the ZFS recordsize for the
 filesystem that is going to hold your InnoDB datafiles to match the
 InnoDB block size.
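 A sketch of that adjustment (assuming the /tank/db/ datafile path in
 the my.cnf below is its own dataset; InnoDB's default page size is 16K):

   zfs set recordsize=16k tank/db
   # the redo logs are written sequentially, so the default 128k
   # recordsize is usually left alone on the log dataset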

 If you are using ZFS you should also consider getting yourself a fast
 SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
 writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
 terms of write / transaction speed.

 If you have enough CPU power to spare you should try turning on
 compression for the ZFS filesystem holding the datafiles - it also can
 accelerate DB writes / reads but YMMV.

 Lastly, our InnoDB related configuration from my.cnf :

 # InnoDB options
 skip-innodb_doublewrite
 innodb_data_home_dir = /tank/db/
 innodb_log_group_home_dir = /tank/logs/
 innodb_support_xa = false
 innodb_file_per_table = true
 innodb_buffer_pool_size = 20G
 innodb_flush_log_at_trx_commit = 2
 innodb_log_buffer_size = 128M
 innodb_log_file_size = 512M
 innodb_log_files_in_group = 2
 innodb_max_dirty_pages_pct = 90



 Thanks,
 Shon


 --
 Beautiful is writing same markup. Internet Explorer 9 supports
 standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
 Spend less time writing and  rewriting code and more time creating great
 experiences on the web. Be a part of the beta today.
 http://p.sf.net/sfu/beautyoftheweb


  ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-12 Thread Mingus Dew
Henrik,
 I really appreciate your reply, particularly as a fellow
Bacula-on-Solaris user. I do not have my databases on ZFS, only my Bacula
storage. I'll probably have to tune for local disk.

Thanks very much,
Shon

On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:

 'Mingus Dew' wrote:

 All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
 MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible version of MySQL 5, but migrating to PostgreSQL isn't an
 option at this time.

 I am trying to backup to tape a very large number of files for a
 client. While the data size is manageable at around 2TB, the number of
 files is incredibly large.
 The first of the jobs had 27 million files and initially failed because
 the batch table became Full. I changed the myisam_data_pointer size
 to a value of 6 in the config.

 This job was then able to run successfully and did not take too long.

 I have another job which has 42 million files. I'm not sure what that
 equates to in rows that need to be inserted, but I can say that I've
 not been able to successfully run the job, as it seems to hang for
 over 30 hours in a Dir inserting attributes status. This causes
 other jobs to backup in the queue and once canceled I have to restart
 Bacula.

 I'm looking for way to boost performance of MySQL or Bacula (or both)
 to get this job completed.


 You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
 way in hell that MySQL 4 + MyISAM is going to perform decent in your
 situation.
 Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
 always available from www.mysql.com in the native pkg format so there
 really
 is no excuse.

 We run our Bacula Catalog MySQl servers on Solaris (OpenSolaris) so
 perhaps I can give you some pointers.

 Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

 Since you are using Solaris 10 I assume that you are going to run MySQL
 off ZFS - in that case you need to adjust the ZFS recordsize for the
 filesystem that is going to hold your InnoDB datafiles to match the
 InnoDB block size.

 If you are using ZFS you should also consider getting yourself a fast
 SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
 writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
 terms of write / transaction speed.

 If you have enough CPU power to spare you should try turning on
 compression for the ZFS filesystem holding the datafiles - it also can
 accelerate DB writes / reads but YMMV.

 Lastly, our InnoDB related configuration from my.cnf :

 # InnoDB options
 skip-innodb_doublewrite
 innodb_data_home_dir = /tank/db/
 innodb_log_group_home_dir = /tank/logs/
 innodb_support_xa = false
 innodb_file_per_table = true
 innodb_buffer_pool_size = 20G
 innodb_flush_log_at_trx_commit = 2
 innodb_log_buffer_size = 128M
 innodb_log_file_size = 512M
 innodb_log_files_in_group = 2
 innodb_max_dirty_pages_pct = 90



 Thanks,
 Shon


 --
 Beautiful is writing same markup. Internet Explorer 9 supports
 standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
 Spend less time writing and  rewriting code and more time creating great
 experiences on the web. Be a part of the beta today.
 http://p.sf.net/sfu/beautyoftheweb


  ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-08 Thread Mingus Dew
Bruno,
 Not so rude at all :) You've made me think of 2 questions

How difficult is it (or what is the procedure for) converting to InnoDB, and what exactly
will this gain in performance?

Also, you mention Postgresql and batch inserts. Does Bacula not use batch
inserts with MySQL by default?
I'm assuming I'm using batch inserts because Bacula uses a table called
'batch'
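For the record, my understanding of the conversion itself is just an
ALTER per table, run against the bacula database after a full catalog
dump as a safety net (the big tables below are from the 5.x schema):

  ALTER TABLE File ENGINE=InnoDB;
  ALTER TABLE Filename ENGINE=InnoDB;
  ALTER TABLE Path ENGINE=InnoDB;
  ALTER TABLE Job ENGINE=InnoDB;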

-Shon

On Fri, Oct 8, 2010 at 2:07 AM, Bruno Friedmann br...@ioda-net.ch wrote:

 On 10/07/2010 11:03 PM, Mingus Dew wrote:
  All,
   I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
  MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible
  version of MySQL 5, but migrating to PostgreSQL isn't an option at this
  time.
 
   I am trying to backup to tape a very large number of files for a
  client. While the data size is manageable at around 2TB, the number of
 files
  is incredibly large.
  The first of the jobs had 27 million files and initially failed because
 the
  batch table became Full. I changed the myisam_data_pointer size to a
 value
  of 6 in the config.
  This job was then able to run successfully and did not take too long.
 
  I have another job which has 42 million files. I'm not sure what that
  equates to in rows that need to be inserted, but I can say that I've not
  been
  able to successfully run the job, as it seems to hang for over 30 hours
 in a
  Dir inserting attributes status. This causes other jobs to backup in
 the
  queue and
  once canceled I have to restart Bacula.
 
  I'm looking for way to boost performance of MySQL or Bacula (or both)
 to
  get this job completed.
 
  Thanks,
  Shon
 
 Rude answer :

 If you really want to use MySQL, switch from MyISAM to InnoDB.
 But you don't want to use MySQL for that job; just use PostgreSQL, fine-
 tuned with batch insert enabled.

 :-)

 --

 Bruno Friedmann (irc:tigerfoot)
 Ioda-Net Sàrl www.ioda-net.ch
  openSUSE Member
User www.ioda.net/r/osu
Blog www.ioda.net/r/blog
  fsfe fellowship www.fsfe.org
 GPG KEY : D5C9B751C4653227
 vcard : http://it.ioda-net.ch/ioda-net.vcf


 --
 Beautiful is writing same markup. Internet Explorer 9 supports
 standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
 Spend less time writing and  rewriting code and more time creating great
 experiences on the web. Be a part of the beta today.
 http://p.sf.net/sfu/beautyoftheweb
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Mingus Dew
I have been using this setup for a while. You absolutely must disable Bacula
compression for the devices within the SD, or for the specific Pools, that
have volumes on ZFS. Doubling up compression can actually increase file
sizes and also lead to data errors.
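In other words, something like this on the FileSet side (the FileSet
name and path are only examples):

FileSet {
  Name = "ZFS_Backed_Clients"
  Include {
    Options {
      signature = MD5
      # no "compression = GZIP" line here -- let ZFS compress the volumes
    }
    File = /export/home
  }
}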

-Shon

On Thu, Oct 7, 2010 at 2:11 PM, Bruno Friedmann br...@ioda-net.ch wrote:

 On 10/07/2010 07:47 PM, Roy Sigurd Karlsbakk wrote:
  Hi all
 
  I'm planning a Bacula setup with ZFS on the SDs (media being disk, not
 tape), and I just wonder - should I use a smaller recordsize (aka largest
 block size) than the default setting of 128kB?
 
  Also, last I tried, with ZFS on a test box, I enabled compression, the
 lzjb algorithm (very lightwegith and quite decent compression). For 'normal'
 data, I usually get 30%+ compression with this, but for the data backed up
 with bacula, it didn't look that good, compression being down to 3-5%. Any
 idea what might cause this?

 If the data coming from bacula are already compressed by the bacula-fd,
 there's little room for improvement.
 In your type of setup, I would disable compression on bacula-fd, increasing
 the speed of the backup, and let the SD do it via the ZFS mechanism.


 
  Vennlige hilsener / Best regards
 
  roy
  --
  Roy Sigurd Karlsbakk
  (+47) 97542685
  r...@karlsbakk.net
  http://blogg.karlsbakk.net/
  --
  In all pedagogy it is essential that the curriculum be presented intelligibly.
 It is an elementary imperative for all pedagogues to avoid excessive
 use of idioms of foreign origin. In most cases adequate and relevant
 synonyms exist in Norwegian.
 
 
 --
  Beautiful is writing same markup. Internet Explorer 9 supports
  standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
  Spend less time writing and  rewriting code and more time creating great
  experiences on the web. Be a part of the beta today.
  http://p.sf.net/sfu/beautyoftheweb
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users


 --

 Bruno Friedmann (irc:tigerfoot)
 Ioda-Net Sàrl www.ioda-net.ch
  openSUSE Member
User www.ioda.net/r/osu
Blog www.ioda.net/r/blog
  fsfe fellowship www.fsfe.org
 GPG KEY : D5C9B751C4653227
 vcard : http://it.ioda-net.ch/ioda-net.vcf


 --
 Beautiful is writing same markup. Internet Explorer 9 supports
 standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
 Spend less time writing and  rewriting code and more time creating great
 experiences on the web. Be a part of the beta today.
 http://p.sf.net/sfu/beautyoftheweb
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Disabled Jobs Question

2010-10-07 Thread Mingus Dew
I think at one point Bacula did complain. Thanks for answering my question
though. I think in order to disable the job from being run, I'll have to
make the scripts that call bconsole commands read-only instead of
executable.

-Shon

On Thu, Oct 7, 2010 at 8:12 AM, Phil Stracchino ala...@metrocast.netwrote:

 On 10/07/10 04:10, Ralf Gross wrote:
  Phil Stracchino schrieb:
  On 10/06/10 14:35, Mingus Dew wrote:
  John,
   I think I had to create a bogus schedule, that bacula wouldn't
  accept the job config without a schedule. I think I'll disable the job
  in bconsole and try to start it remotely. Just see what happens...
 
  Mingus, this is why I always create an empty schedule named NEVER.  To
  disable automatic run of any job, long-term, without deleting the Job, I
  then simply set its Schedule to NEVER.
 
  bacula doesn't complain if the job resource has no schedule.

 No, it doesn't.  But if you have a Schedule directive in the relevant
 JobDefs resource, and you want to disable scheduling of a job based on
 that JobDefs, then you need a null Schedule to override it with.
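 A minimal sketch of that trick (the job and JobDefs names are examples):

 Schedule {
   Name = "NEVER"                # intentionally empty: no Run lines
 }

 Job {
   Name = "OnDemandOnly"
   JobDefs = "DefaultJob"
   Schedule = "NEVER"            # overrides the Schedule from the JobDefs
 }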


 --
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.


 --
 Beautiful is writing same markup. Internet Explorer 9 supports
 standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
 Spend less time writing and  rewriting code and more time creating great
 experiences on the web. Be a part of the beta today.
 http://p.sf.net/sfu/beautyoftheweb
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Tuning for large (millions of files) backups?

2010-10-07 Thread Mingus Dew
All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
version of MySQL 5, but migrating to PostgreSQL isn't an option at this
time.

 I am trying to backup to tape a very large number of files for a
client. While the data size is manageable at around 2TB, the number of files
is incredibly large.
The first of the jobs had 27 million files and initially failed because the
batch table became Full. I changed the myisam_data_pointer size to a value
of 6 in the config.
This job was then able to run successfully and did not take too long.

I have another job which has 42 million files. I'm not sure what that
equates to in rows that need to be inserted, but I can say that I've not
been
able to successfully run the job, as it seems to hang for over 30 hours in a
"Dir inserting attributes" status. This causes other jobs to back up in the
queue, and
once it is canceled I have to restart Bacula.

I'm looking for ways to boost the performance of MySQL or Bacula (or both) to
get this job completed.

Thanks,
Shon
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Disabled Jobs Question

2010-10-06 Thread Mingus Dew
I am running Bacula 5.0.1 and have a question about Disabled Jobs.

I have some jobs configured that do not have an active schedule (i.e.,
bacula never automatically queues or runs them).
Instead, these jobs are run by a remote user who logs into the backup
server and runs a script to run the configured job on demand.

If I disable these jobs via console, will this prevent them from being run
by bconsole?

Thanks,
Shon
--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code and more time creating great
experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Disabled Jobs Question

2010-10-06 Thread Mingus Dew
John,
 I think I had to create a bogus schedule because bacula wouldn't accept
the job config without a schedule. I think I'll disable the job in bconsole
and try to start it remotely. Just see what happens...

-Shon

On Wed, Oct 6, 2010 at 2:30 PM, John Drescher dresche...@gmail.com wrote:

 2010/10/6 Mingus Dew shon.steph...@gmail.com:
  I am running Bacula 5.0.1 and have a question about Disabled Jobs.
 
  I have some jobs configured that do not have an active schedule (i.e.,
  bacula never automatically queues or runs them)
  Instead, these jobs are run by a remote user that logs into the backup
  server and runs a script to run the configured job on demand.
 
  If I disable these jobs via console, will this prevent them from being
 run
  by bconsole?
 

 I believe disabling a job via bconsole just temporarily disables its
 schedule until you restart the director, at which point the job returns
 to its normal schedule. At least that is how it has worked for me: I
 have been trying to kill a job for months, and I keep disabling it in
 bconsole, forgetting to change the schedule, only to have the job come
 back after I restart the director. I know it's easy to edit the config
 file; I just never thought of doing exactly what you did and removing
 the schedule (or perhaps my problem was that I used the default job as
 a base).

 John
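
For what it's worth, a small sketch of the two ways this is usually handled
(the job name below is a placeholder, and note John's caveat that a bconsole
disable does not survive a director restart in these versions):

# From bconsole -- takes effect immediately:
*disable job=OnDemandJob
*enable job=OnDemandJob

# In bacula-dir.conf -- persists until the config is changed again:
Job {
  Name = "OnDemandJob"
  Enabled = no
  # ...the rest of the Job definition as before...
}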



[Bacula-users] High Resource Utilization with Bacula FD on VMWare Virtual Machines

2010-07-17 Thread Mingus Dew
All,
 I have ESXi 4.0 servers clustered and hosting a variety of Virtual
Machines. The VMs are a mix of RHEL, Solaris, Windows, and Debian. I run
various versions of bacula-fd (5.x, 2.x, 3.x) on these VMs for taking OS
backups. No matter what the FD version, I am seeing high iowait on the VM
that the backup is running on, and also problems with resource availability
on other VMs while the FD is active.

 Just wondering if anyone else has experienced similar issues, what the
details of your issue were, and what workarounds you found.

Thanks,
Shon


Re: [Bacula-users] Bacula 5.0.2 Truncate feature not working

2010-06-23 Thread Mingus Dew
Marc,
 Thanks for the tip. Is it recommended to run this daily as a runscript,
or after each job?

Thanks,
Shon

On Mon, Jun 21, 2010 at 6:22 PM, Marc Schiffbauer m...@schiffbauer.netwrote:

 * Mingus Dew schrieb am 21.06.10 um 19:43 Uhr:
  I am running 5.0.2 on Solaris 10 x86 edition. I upgraded to 5.0.2
  specifically to use the Truncate volumes on purge feature.
  I have setup Pool and updated volumes and followed the manual, but still
 my
  volumes do not seem to be truncating.
 
  I was wondering if anyone has this setup and working and could share
 their
  relevant config and experiences.


 Hi Shon,

 I was having the same problem.

 To actually truncate volumes you have to manually run the bconsole
 command mentioned in the docs or let bacula run it with a RunScript.

 For example:
  *purge volume action=truncate storage=File allpools

 -Marc
 --
 8AAC 5F46 83B4 DB70 8317  3723 296C 6CCA 35A6 4134


 --
 ThinkGeek and WIRED's GeekDad team up for the Ultimate
 GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the
 lucky parental unit.  See the prize list and enter to win:
 http://p.sf.net/sfu/thinkgeek-promo
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
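
To the daily-vs-per-job question: a common pattern (only a sketch -- names
and schedule are placeholders) is to let the Pool carry the truncate action
and run Marc's console command once a day from a small Admin job:

Pool {
  Name = File-Pool
  Action On Purge = Truncate
  # ...rest of the Pool definition...
}

Job {
  Name = "TruncatePurgedVolumes"
  Type = Admin
  Schedule = "DailyAfterBackups"
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Console = "purge volume action=truncate storage=File allpools"
  }
  # ...plus the usual required Job directives (Client, FileSet, Pool,
  # Storage, Messages), typically pulled in from a JobDefs.
}

Running it after every job also works, but once a day is usually enough,
since truncation only matters once volumes have actually been purged.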



[Bacula-users] Bacula 5.0.2 Truncate feature not working

2010-06-21 Thread Mingus Dew
I am running 5.0.2 on Solaris 10 x86 edition. I upgraded to 5.0.2
specifically to use the Truncate volumes on purge feature.
I have set up the Pool and updated the volumes and followed the manual, but
my volumes still do not seem to be truncating.

I was wondering if anyone has this setup and working and could share their
relevant config and experiences.

Thanks,
Shon


Re: [Bacula-users] Bug or Feature Request - Config token too long?

2009-12-10 Thread Mingus Dew
Thank you, great tip. I won't implement it, because I'd need to update first,
but it did lead me to the right place.

-Shon

On Tue, Nov 24, 2009 at 10:16 AM, Arno Lehmann a...@its-lehmann.de wrote:

 Hi,

 24.11.2009 15:47, Mingus Dew wrote:
  All,
   Running Bacula 3. on Solaris 10 x86. I'm setting up some Copy jobs
  using the Selection Type = SQLQuery. When I try to test the config file
  I get the error:
 
  bac...@backup_svr: bacula-dir -t -c /opt/csw/etc/bacula/bacula-dir.conf
  24-Nov 01:51 bacula-dir: ERROR TERMINATION at lex.c:270
  Config token too long, file: /opt/csw/etc/bacula/bacula-dir_jobs.conf,
  line 757, begins at line 756

 Looks like a config token must not span lines.

  My SQLQuery is actually 2 queries on the same line. It works fine in
  MySQL and I need 2 queries to select jobs from 2 pools in the same Copy
  Job. Here is the SQL statement
 
  select distinct Job.JobId,Job.Name from Job,Pool where Pool.Name =
  'FS-Incremental_Pool' and Pool.PoolId = Job.PoolId and Job.Type = 'B'
  and Job.JobStatus in ('T','W') and Job.jobBytes > 0 and Job.StartTime >=
  date_sub(current_timestamp(),interval 7 day) and Job.StartTime <=
  current_timestamp() order by Job.StartTime; select distinct
  Job.JobId,Job.Name from Job,Pool where Pool.Name = 'FS-Full_Pool' and
  Pool.PoolId = Job.PoolId and Job.Type = 'B' and Job.JobStatus in
  ('T','W') and Job.jobBytes > 0 and Job.StartTime >=
  date_sub(current_timestamp(),interval 7 day) and Job.StartTime <=
  current_timestamp() order by Job.StartTime;
 
  This is the only method I can use as the other methods for selecting Job
  ID's for Copying don't fit my environment. I'd prefer not to split the
  copying of Incrementals and Fulls into different Jobs.

 It should be possible to create one query, but I don't know how right
 now :-)

 I haven't checked the actual length of your queries, but if they
 really exceed Baculas built-in limit, you might want a function
 defined in the catalog database. Should be possible with PostgreSQL
 and MySQL, as far as I know.

 Cheers,

 Arno

  Thanks,
  Shon
 
 
  
 
 
 --
  Let Crystal Reports handle the reporting - Free Crystal Reports 2008
 30-Day
  trial. Simplify your report design, integration and deployment - and
 focus on
  what you do best, core application coding. Discover what's new with
  Crystal Reports now.  http://p.sf.net/sfu/bobj-july
 
 
  
 
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users

 --
 Arno Lehmann
 IT-Service Lehmann
 Sandstr. 6, 49080 Osnabrück
 www.its-lehmann.de


 --
 Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day
 trial. Simplify your report design, integration and deployment - and focus
 on
 what you do best, core application coding. Discover what's new with
 Crystal Reports now.  http://p.sf.net/sfu/bobj-july
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] Question on RunAfterJob directive

2009-12-10 Thread Mingus Dew
I'd like to run a Bacula Job after the successful run of another Bacula job.

Does RunAfterJob only work for external commands, or do I have to fully
define using a RunScript directive?

Thanks,
Shon
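
In general terms, Run After Job (and the equivalent RunScript Command) runs
an external program, so the usual way to chain a second Bacula job is to
have that program feed a run command to bconsole. A sketch, with assumed
paths and placeholder job names:

# In the first Job's resource:
RunScript {
  RunsWhen = After
  RunsOnSuccess = yes
  RunsOnFailure = no
  RunsOnClient = no
  Command = "/opt/bacula/scripts/run-next-job.sh"
}

# /opt/bacula/scripts/run-next-job.sh (script path and bconsole.conf
# location are assumptions -- adjust to your install):
#!/bin/sh
# Queue the follow-up job through the console.
echo "run job=SecondJob yes" | bconsole -c /opt/csw/etc/bacula/bconsole.conf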


Re: [Bacula-users] Questions on Copy Jobs

2009-12-10 Thread Mingus Dew
Thanks for the help. This got me going in just the right direction.

-Shon

On Tue, Oct 27, 2009 at 11:49 AM, Brian Debelius 
bdebel...@intelesyscorp.com wrote:

 The copy job copies all uncopied jobs, one job at a time. If you need to
 limit this, you need to use your own SQL command. I pulled the SQL
 statement that does the selection of uncopied jobs from the source,
 modified it with a date range, and run it to copy uncopied jobs in
 batches when the need arises.

 Watch for typos

  SELECT DISTINCT Job.JobId FROM Job,Pool WHERE Pool.Name = 'MyPoolName'
  AND Pool.PoolId = Job.PoolId AND Job.Type = 'B' AND Job.JobStatus IN
  ('T','W') AND Job.jobBytes > 0 AND (Job.StartTime > '2009-09-01 00:00:00'
  AND Job.StartTime < '2009-10-27 00:00:00') AND Job.JobId NOT IN (SELECT
  PriorJobId FROM Job WHERE Type IN ('B','C') AND Job.JobStatus IN ('T','W')
  AND PriorJobId != 0) ORDER BY Job.StartTime;

 Mingus Dew wrote:

 In the documentation, there is an example for a Copy Job using the
 Selection Type PoolUncopiedJobs. I'd like to implement Copy Jobs using
 this selection type. I currently have about 30 clients backed up into the
 same Disk Pool. There are 28 days of backups on Disk for each client.

 If I setup a Copy Job like the one below, will it Copy all the previous
 Jobs in that Pool in one run? Is there a way to limit this?
 If I use a SQLquery selection to Copy the previous 14 days, will the next
 Copy using PoolUncopiedJobs only copy the Jobs since then, or will it go
 back beyond as well?
 For Pools like mine that contains many Jobs, will there only be 1 Copy Job
 Id or does it create and ID for each Job being copied?

 #
 # Default template for a CopyDiskToTape Job
 #
 JobDefs {
  Name = CopyDiskToTape

  Type = Copy
  Messages = StandardCopy
  Client = None
  FileSet = None
  Selection Type = PoolUncopiedJobs
  Maximum Concurrent Jobs = 10
  SpoolData = No
  Allow Duplicate Jobs = Yes
  Allow Higher Duplicates = No

  Cancel Queued Duplicates = No
  Cancel Running Duplicates = No
  Priority = 13
 }

 Schedule {
   Name = DaySchedule7:00
   Run = Level=Full daily at 7:00
 }

 Job {
  Name = CopyDiskToTapeFullBackups

  Enabled = Yes
  Schedule = DaySchedule7:00
  Pool = FullBackupsVirtualPool
  JobDefs = CopyDiskToTape
 }

 Thanks,
 Shon
 


 --
 Come build with us! The BlackBerry(R) Developer Conference in SF, CA
 is the only developer event you need to attend this year. Jumpstart your
 developing skills, take BlackBerry mobile applications to market and stay
 ahead of the curve. Join us from November 9 - 12, 2009. Register now!
 http://p.sf.net/sfu/devconference

 

 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users






[Bacula-users] Bug or Feature Request - Config token too long?

2009-11-24 Thread Mingus Dew
All,
 Running Bacula 3. on Solaris 10 x86. I'm setting up some Copy jobs
using the Selection Type = SQLQuery. When I try to test the config file I
get the error:

bac...@backup_svr: bacula-dir -t -c /opt/csw/etc/bacula/bacula-dir.conf
24-Nov 01:51 bacula-dir: ERROR TERMINATION at lex.c:270
Config token too long, file: /opt/csw/etc/bacula/bacula-dir_jobs.conf, line
757, begins at line 756

My SQLQuery is actually 2 queries on the same line. It works fine in MySQL
and I need 2 queries to select jobs from 2 pools in the same Copy Job. Here
is the SQL statement

select distinct Job.JobId,Job.Name from Job,Pool where Pool.Name =
'FS-Incremental_Pool' and Pool.PoolId = Job.PoolId and Job.Type = 'B' and
Job.JobStatus in ('T','W') and Job.jobBytes > 0 and Job.StartTime >=
date_sub(current_timestamp(),interval 7 day) and Job.StartTime <=
current_timestamp() order by Job.StartTime; select distinct
Job.JobId,Job.Name from Job,Pool where Pool.Name = 'FS-Full_Pool' and
Pool.PoolId = Job.PoolId and Job.Type = 'B' and Job.JobStatus in ('T','W')
and Job.jobBytes > 0 and Job.StartTime >=
date_sub(current_timestamp(),interval 7 day) and Job.StartTime <=
current_timestamp() order by Job.StartTime;

This is the only method I can use as the other methods for selecting Job
ID's for Copying don't fit my environment. I'd prefer not to split the
copying of Incrementals and Fulls into different Jobs.

Thanks,
Shon
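
For what it's worth, the two selections can probably be collapsed into a
single statement (untested sketch, using the same pool names and 7-day
window as above), which at least halves the length and avoids the
two-queries-in-one-string construct:

select distinct Job.JobId, Job.Name
  from Job, Pool
 where Pool.Name in ('FS-Incremental_Pool', 'FS-Full_Pool')
   and Pool.PoolId = Job.PoolId
   and Job.Type = 'B'
   and Job.JobStatus in ('T','W')
   and Job.jobBytes > 0
   and Job.StartTime >= date_sub(current_timestamp(), interval 7 day)
   and Job.StartTime <= current_timestamp()
 order by Job.StartTime;

(For the Selection Pattern it would still have to be written on one line,
of course.)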


[Bacula-users] Verify Jobs

2009-10-27 Thread Mingus Dew
There doesn't seem to be much in the docs on Verify Jobs, unless I'm missing
a section somewhere.
Can a Verify Job be run from the console, or does it have to be configured
in a Job Definition?
What are best practices for running Verify Jobs?

Thanks,
Shon
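
A Verify job does need its own Job resource (Type = Verify); once defined it
can be started from the console like any other job. A minimal sketch, with
placeholder names (the Client and FileSet should match the backup job you
want to check):

Job {
  Name = "VerifyLastBackup"
  Type = Verify
  Level = VolumeToCatalog   # re-read the volume and compare it to the catalog
  Client = some-client-fd
  FileSet = "Some FileSet"
  Storage = Exabyte_224
  Pool = Tapes
  Messages = Standard
}

Then, in bconsole:

*run job=VerifyLastBackup

By default it checks the most recent matching backup; other levels such as
DiskToCatalog or Catalog compare the client's files against the catalog
instead of re-reading the volume.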


[Bacula-users] Questions on Copy Jobs

2009-10-27 Thread Mingus Dew
In the documentation, there is an example for a Copy Job using the Selection
Type PoolUncopiedJobs. I'd like to implement Copy Jobs using this
selection type. I currently have about 30 clients backed up into the same
Disk Pool. There are 28 days of backups on Disk for each client.

If I set up a Copy Job like the one below, will it copy all the previous Jobs
in that Pool in one run? Is there a way to limit this?
If I use an SQLQuery selection to copy the previous 14 days, will the next
Copy using PoolUncopiedJobs only copy the Jobs since then, or will it also go
back further?
For Pools like mine that contain many Jobs, will there be only one Copy JobId,
or does it create an ID for each Job being copied?

#
# Default template for a CopyDiskToTape Job
#
JobDefs {
  Name = CopyDiskToTape
  Type = Copy
  Messages = StandardCopy
  Client = None
  FileSet = None
  Selection Type = PoolUncopiedJobs
  Maximum Concurrent Jobs = 10
  SpoolData = No
  Allow Duplicate Jobs = Yes
  Allow Higher Duplicates = No
  Cancel Queued Duplicates = No
  Cancel Running Duplicates = No
  Priority = 13
}

Schedule {
   Name = DaySchedule7:00
   Run = Level=Full daily at 7:00
}

Job {
  Name = CopyDiskToTapeFullBackups
  Enabled = Yes
  Schedule = DaySchedule7:00
  Pool = FullBackupsVirtualPool
  JobDefs = CopyDiskToTape
}


Thanks,
Shon


[Bacula-users] Testing Backups?

2009-10-26 Thread Mingus Dew
I have a need to perform a full restore and verification of the backups of 2
hosts. I'm not sure of the best way to do this, or rather what best practices
are for verifying backups, their readability, and completeness.
Can anyone suggest some methods for accomplishing this?

Thanks,
Shon
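
One common approach is a full test restore to an alternate location,
followed by a compare against the live tree (plus, for tape readability, a
Verify job at level VolumeToCatalog). A rough bconsole sketch with
placeholder client names and paths:

*restore client=host1-fd where=/tmp/bacula-restore select current all done
# ...confirm the generated restore job when prompted...

# afterwards, on the client, compare the restored tree to the original:
diff -r /export/home /tmp/bacula-restore/export/home

Restoring with where=/tmp/bacula-restore keeps the exercise from touching
anything in place on the host.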


[Bacula-users] Adding used tapes back to the Scratch Pool?

2009-10-06 Thread Mingus Dew
I'd like to know the recommended way to add previously used tapes back to
the Scratch Pool.

I have already purged the jobs that were on the tapes and updated the Pool
to Scratch for each tape Volume.
For the tapes that contained data, should I rewind them manually? What about
the status? Most of them have a status of Purged currently.

Thanks,
Shon
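
For reference, the whole round trip can usually be done from bconsole; there
is no need to rewind the tapes by hand, since a Purged volume sitting in the
Scratch pool is recycled (and overwritten from the start) the next time a
pool pulls media from Scratch. A sketch with a placeholder volume name:

*purge volume=A00031              # only if it is not already Purged
*update volume=A00031 pool=Scratch
*list media pool=Scratch          # VolStatus should show Purged or Recycle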


Re: [Bacula-users] Problem - All Media records for Volumes are missing!

2009-10-05 Thread Mingus Dew
Yes. I'd quite forgotten about that. I do recall reading it in the ChangeLog
or README...

Thanks,
Shon

On Fri, Oct 2, 2009 at 11:32 AM, John Drescher dresche...@gmail.com wrote:

 On Fri, Oct 2, 2009 at 10:32 AM, Christian Gaul christian.g...@otop.de
 wrote:
  Mingus Dew schrieb:
  Enter *MediaId or Volume name: 151
  there is your problem.
 
  *151 instead of 151
 

 Yes there was a change in the input format to allow volume names to be
 a number. So now to enter a MediaId you have to prepend an *

 John



[Bacula-users] Need help w/ Volume Status Error

2009-10-05 Thread Mingus Dew
All,
 Using Bacula 3.0.2 on Solaris 10_x86.  I have a question regarding how
to properly recover tapes from a status of Error. I also have some
questions on reassigning tapes from one Storage Pool to a different Pool.

I am not sure how this status came to be, but I currently have a few tapes
with this status:

|  27 | A00024 | 56.15 | Exabyte_224 |   12 | TSN_Tapes |
LTO-3 | Error

In this state, I can't use them. I don't particularly care about the data
that is on them, but I would like to get them back into the Pool and usable.
For example, can I just delete the Volume from Bacula and rewind/relabel the
tape?  How would I do so?

Really this same question would apply to the tapes in my Storage Pool. I
have discontinued backups for a customer and can erase their tapes. I'd like
to reuse these tapes and put them into a new Storage Pool. What is the best
method of doing so?

Thanks in advance,
Shon
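
A sketch of the two options usually suggested here (volume, storage, and pool
names are placeholders, and the exact command syntax is from memory, so
double-check it against bconsole's help output):

# Option 1: keep the catalog entry and just clear the Error flag
*update volume=A00024 volstatus=Append
# (or volstatus=Used if it should only be reused after retention expires)

# Option 2: purge it and give the tape a fresh label in the new pool
*purge volume=A00024
*relabel storage=Exabyte_224 oldvolume=A00024 volume=N00001 pool=NewCustomer_Tapes

For the discontinued customer's tapes the second form is the usual route,
since relabel rewrites the tape label and moves the volume into the target
pool in one step.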


[Bacula-users] Problem - All Media records for Volumes are missing!

2009-10-02 Thread Mingus Dew
All,


I'm running Bacula version 3.0.2 for Solaris 10_x86. Since upgrading from
2.x, I cannot perform any delete or purge operations on volumes.
I need to, because for some reason I now have many volumes with status
Error, and some volumes need to be purged to reclaim database and filesystem
space because they are no longer needed for retention.

Select the Pool (1-22): 18
+-+--+---+-+-+--+--+-+--+---+---+-+
| MediaId | VolumeName   | VolStatus | Enabled | VolBytes|
VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType |
LastWritten |
+-+--+---+-+-+--+--+-+--+---+---+-+
| 138 | TSN-FS_Full-0138 | Append|   1 |   7,579,119,384
|1 |1,209,600 |   1 |0 | 0 | TSN_File  |
2009-07-18 07:11:11 |
| 150 | TSN-FS_Full-0150 | Used  |   1 | 438,608,525,689 |
102 |1,209,600 |   1 |0 | 0 | TSN_File  | 2009-07-05
04:00:21 |
| 151 | TSN-FS_Full-0151 | Used  |   1 | 481,044,391,206 |
112 |1,209,600 |   1 |0 | 0 | TSN_File  | 2009-07-12
03:32:32 |
+-+--+---+-+-+--+--+-+--+---+---+-+
Enter *MediaId or Volume name: 151
sql_get.c:1030 Media record for Volume 151 not found.

This is one example, but it's happened to all of my Volumes since upgrading.
Please help.

Thanks,
Shon


[Bacula-users] What to do with Error Disk Volumes

2009-07-15 Thread Mingus Dew
All,
 I have a disk volume with a status of Error

 149 | ShopCart-FS_Full-0149 | Error |   1 |  97,820,348,233
|   22 |1,814,400 |   1 |0 | 0 | ShopCart_File |
2009-07-02 02:59:40

Just wondering: what is the proper method to clear or recover from this
error?

-Shon


[Bacula-users] Hardware Encryption

2009-05-12 Thread Mingus Dew
I checked out the survey and voted for features. However, one feature I
didn't see was interfacing Bacula with Hardware Encryption capable LTO-4
drives.

This is the latest in tape tech and absolutely should be supported for a
full-featured solution.

-Shon


Re: [Bacula-users] Bacula not finding SD resource properly defined and used by other jobs

2009-04-24 Thread Mingus Dew
Arno,
 I'm definitely affected by that Terminated with Error. As for what's
wrong with the jobs...

I think the batch table may be filling before the job completes.

24-Apr 01:00 mt-back4.storage JobId 27642: Error: Bacula cannot write on
disk Volume Advisen-FS_Full-0131 because: The sizes do not match!
Volume=239975205030 Catalog=239634130188 24-Apr 07:38 mt-back4.director
JobId 27642: Fatal error: sql_create.c:731 sql_create.c:731 insert INSERT
INTO batch VALUES
(24239508,27642,'/mnt/exports/documents/NEWSEDGE/2005-09-09/','40667475.200509091680.2_1e6200137e6e8940.xml','HgR
ELu2p IGk B Pq Pq A B51 BAA Q BJvdNX BDIiGg BEmMek A A
G','9Sp1QNBJqIPo87GeVcSAzg') failed:
The table 'batch' is full
24-Apr 07:38 mt-back4.director JobId 27642: Fatal error: catreq.c:490
Attribute create error. sql_update.c:453 Update failed: affected_rows=0 for
UPDATE Media SET InChanger=0, Slot=0 WHERE Slot=6 AND StorageId=6 AND
MediaId!=88 24-Apr 07:49 ad-nfs2.advisen JobId 27642: Fatal error:
backup.c:892 Network send error to SD. ERR=Broken pipe 24-Apr 07:50
mt-back4.director JobId 27642: Error: Bacula mt-back4.director 2.4.4
(28Dec08): 24-Apr-2009 07:50:08

-Shon

On Fri, Apr 24, 2009 at 3:16 AM, Arno Lehmann a...@its-lehmann.de wrote:

 Hi,

 24.04.2009 01:21, Mingus Dew wrote:
  All,
   I'm running Bacula 2.4.4 (not gone to 3.0 just yet) on Solaris
  10_x86. I am having an issue where a couple of new jobs that I've
  created are hanging indefinitely (why doesn't they watchdog catch
 these?).

 Hard to say without further information - how are the jobs configured,
 what is actually stalled, and what's the state of the affected daemon?

  I eventually have to cancel the jobs or all the other jobs get stuck
  waiting execution. When I cancel I get the following error:
 
  23-Apr 19:13 mt-back4.storage JobId 27584: Fatal error:
   Device Advisen_ZFS_Storage with MediaType Advisen_File
  requested by DIR not found in SD Device resources.

 Sounds surprisingly similar to the thread Canceling jobs in 3.0.0
 results in Terminated with Error.

  23-Apr 19:13 mt-back4.director JobId 27584: Fatal error:
   Storage daemon didn't accept Device Advisen_ZFS_Storage because:
   3924 Device Advisen_ZFS_Storage not in SD Device resources.
 
  However, these resources do exist, are defined, and are written to daily
  by other jobs. Also I've run bacula-dir -t -c bacula-dir.conf just to
  see if any errors come up, but nothing does.
 
  Any ideas?

 At least this seems be a problem affecting both 2.4.4 and 3.0.0. Might
 be worth a bug report.

 Arno

  Thanks,
  Shon
 
 
  
 
 
 --
  Crystal Reports #45; New Free Runtime and 30 Day Trial
  Check out the new simplified licensign option that enables unlimited
  royalty#45;free distribution of the report engine for externally facing
  server and web deployment.
  http://p.sf.net/sfu/businessobjects
 
 
  
 
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users

 --
 Arno Lehmann
 IT-Service Lehmann
 Sandstr. 6, 49080 Osnabrück
 www.its-lehmann.de


 --
 Crystal Reports #45; New Free Runtime and 30 Day Trial
 Check out the new simplified licensign option that enables unlimited
 royalty#45;free distribution of the report engine for externally facing
 server and web deployment.
 http://p.sf.net/sfu/businessobjects
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] Bacula not finding SD resource properly defined and used by other jobs

2009-04-23 Thread Mingus Dew
All,
 I'm running Bacula 2.4.4 (not gone to 3.0 just yet) on Solaris 10_x86.
I am having an issue where a couple of new jobs that I've created are
hanging indefinitely (why doesn't the watchdog catch these?).

I eventually have to cancel the jobs, or all the other jobs get stuck
waiting for execution. When I cancel I get the following error:

23-Apr 19:13 mt-back4.storage JobId 27584: Fatal error:
 Device Advisen_ZFS_Storage with MediaType Advisen_File requested by
DIR not found in SD Device resources.
23-Apr 19:13 mt-back4.director JobId 27584: Fatal error:
 Storage daemon didn't accept Device Advisen_ZFS_Storage because:
 3924 Device Advisen_ZFS_Storage not in SD Device resources.

However, these resources do exist, are defined, and are written to daily by
other jobs. Also I've run bacula-dir -t -c bacula-dir.conf just to see if
any errors come up, but nothing does.

Any ideas?

Thanks,
Shon


Re: [Bacula-users] gdb trace output?

2009-02-25 Thread Mingus Dew
Bruno,
 Not rude at all. I wouldn't have known that to be in the Manual.

Thanks,
Shon

On Tue, Feb 24, 2009 at 2:30 PM, Bruno Friedmann br...@ioda-net.ch wrote:

 Mingus Dew wrote:
  I've seen from time to time other people posting gdb trace output from
  Bacula. It looks as though this was automatically generated after a
 problem
  is encountered and then sent via bsmtp.
 
  Is this true? If so how can I set this up?
 
  Thanks,
  Shon
 
 
 

 See the kaboom chapter in the manual, and search the mailing list.
 Sorry to be rude, but these two are the best sources of information.


 --

 Bruno Friedmann


 --
 Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco,
 CA
 -OSBC tackles the biggest issue in open source: Open Sourcing the
 Enterprise
 -Strategies to boost innovation and cut costs with open source
 participation
 -Receive a $600 discount off the registration fee with the source code:
 SFAD
 http://p.sf.net/sfu/XcvMzF8H
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Jobs not completing, but not erroring?

2009-02-24 Thread Mingus Dew
So when I find these jobs that are hung, they all have a Sleep state and a
long Time value. I'm just unsure what that means from a MySQL perspective:
the thread is in a sleep state, which to my way of thinking means that
Bacula has opened the connection but isn't using it.

What could the director be waiting on?

-Shon

On Thu, Feb 19, 2009 at 6:12 PM, Mingus Dew shon.steph...@gmail.com wrote:

 I checked mysql during one of these jobs thats just running. For one thing,
 I can see that other jobs start, run, complete, terminate all while this
 particular job is just hanging.

 Writing: Incremental Backup job Canopy_OLTPA_Lvl1_Tape JobId=22789
 Volume=B00046
 pool=Canopy_Tapes device=Ultrium-TD3 (/dev/rmt/0cbn)
 spooling=0 despooling=0 despool_wait=0
 Files=158 Bytes=50,216,398,373 Bytes/sec=5,228,151
 FDReadSeqNo=767,589 in_msg=767117 out_msg=5 fd=5


 r...@mt-back4: mysqladmin processlist

 +--+-+---++-+-+---+--+
 | Id   | User| Host  | db | Command | Time|
 State |
 Info |

 +--+-+---++-+-+---+--+
 | 2| system user |   || Connect | 3134516 | Has read
 all relay log; waiting for the slave I/O thread to update it   |
 | 6177 | bacula  | localhost | bacula | Sleep   | 20
 |
 |   |
 | 6179 | bacula  | localhost | bacula | Sleep   | 44
 |
 |   |

 +--+-+---++-+-+---+--+

 So its got a long sleep time (6179). So what? That doesn't really
 illuminate anything. Its not like MySQL is starved for resources. I'm not
 buying that this is a MySQL issue though.

 -Shon


 On Thu, Feb 19, 2009 at 10:04 AM, Ryan Novosielski novos...@umdnj.eduwrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 You should check it out with 'mysqladmin processlist' -- you may learn
 that something is going on.

 =R

 Mingus Dew wrote:
  Sorry. I forgot to mention MySQL 4. Its still responding. I've tested it
  while the jobs were hung. Also, if I cancel the hung job, the next tape
  job in queue starts and completes just fine.
 
  -Shon
 
  On Wed, Feb 18, 2009 at 4:13 PM, Ryan Novosielski novos...@umdnj.edu
  mailto:novos...@umdnj.edu wrote:
 
  Mingus Dew wrote:
  Hi all,
   Been using Bacula 2.4.2 on Solaris 10_x86 for almost 2 years now.
  Recently tape backups have been entering into a state that I can only
  describe as limbo.
 
  If I check the status of the director, I may see something like
 
  Running Jobs:
   JobId Level   Name   Status
  ==
   22649 Increme  RMAN_A_Lvl1_Tape.2009-02-17_13.30.36 is running
   22650 Increme  RMAN_B_Lvl1_Tape.2009-02-17_13.30.38 is waiting on max
  Storage jobs
   22651 Increme  RMAN_PROD_Lvl1_Tape.2009-02-17_14.00.40 is waiting on
  max Storage jobs
   22652 Increme  RMAN_BI_Lvl1_Tape.2009-02-17_14.00.42 is waiting
  on max
  Storage jobs
   22653 Increme  RMAN_COG_Lvl1_Tape.2009-02-17_14.00.44 is waiting
  on max
  Storage jobs
 
  If I check the status of the running jobid or the tape device, it will
  show this:
 
  Used Volume status:
  B00046 on device Ultrium-TD3 (/dev/rmt/0cbn)
  Reader=0 writers=0 devres=0 volinuse=1
  
 
  Data spooling: 0 active jobs, 0 bytes; 80 total jobs,
  47,799,329,608 max
  bytes/job.
  Attr spooling: 0 active jobs, 0 bytes; 80 total jobs, 40,616 max
  bytes.
 
  Basically, tape is mounted and reserved, job is showing a is running
  status, but nothing is happening. Because I lack any monitoring of how
  long jobs have been running,
  these have sat for as many as 3 days without changing status,
  erroring,
  or completing. This backs up subsequent jobs that have been
  waiting for
  the tape device.
  The only commonality that I've seen is that they are tape jobs. Other
  than that, the level, fileset, etc. are different.
 
  On one occasion when I cancelled one of these long running jobs, I got
  an error
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  unregister_watchdog_unlocked called before start_watchdog
 
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  bacula-dir[20200]: [ID 702911 daemon.error] backup4.director: ABORTING
  due to ERROR in watchdog.c:206
 
  If anyone has any advice on what might be happening, I would really
  appreciate your responses.
 
  Check to see what, if anything, your backend database is doing. You
  don't tell us what it is, so I can't be any more specific

[Bacula-users] gdb trace output?

2009-02-24 Thread Mingus Dew
I've seen from time to time other people posting gdb trace output from
Bacula. It looks as though this was automatically generated after a problem
is encountered and then sent via bsmtp.

Is this true? If so how can I set this up?

Thanks,
Shon


[Bacula-users] Help with alert configuration

2009-02-24 Thread Mingus Dew
Hello All,

 I've been scratching my brain but haven't figured out how to do this.
I'd like to suppress email alerts from Bacula for jobs that are manually
canceled. Can anyone help?

Thanks,
Shon


Re: [Bacula-users] Jobs not completing, but not erroring?

2009-02-19 Thread Mingus Dew
I checked MySQL during one of these jobs that's just sitting there "running".
For one thing, I can see that other jobs start, run, complete, and terminate,
all while this particular job is just hanging.

Writing: Incremental Backup job Canopy_OLTPA_Lvl1_Tape JobId=22789
Volume=B00046
pool=Canopy_Tapes device=Ultrium-TD3 (/dev/rmt/0cbn)
spooling=0 despooling=0 despool_wait=0
Files=158 Bytes=50,216,398,373 Bytes/sec=5,228,151
FDReadSeqNo=767,589 in_msg=767117 out_msg=5 fd=5


r...@mt-back4: mysqladmin processlist
+--+-+---++-+-+---+--+
| Id   | User| Host  | db | Command | Time|
State |
Info |
+--+-+---++-+-+---+--+
| 2| system user |   || Connect | 3134516 | Has read all
relay log; waiting for the slave I/O thread to update it   |
| 6177 | bacula  | localhost | bacula | Sleep   | 20
|
|   |
| 6179 | bacula  | localhost | bacula | Sleep   | 44
|
|   |
+--+-+---++-+-+---+--+

So it's got a long sleep time (6179). So what? That doesn't really illuminate
anything. It's not like MySQL is starved for resources. I'm not buying that
this is a MySQL issue, though.

-Shon

On Thu, Feb 19, 2009 at 10:04 AM, Ryan Novosielski novos...@umdnj.eduwrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 You should check it out with 'mysqladmin processlist' -- you may learn
 that something is going on.

 =R

 Mingus Dew wrote:
  Sorry. I forgot to mention MySQL 4. Its still responding. I've tested it
  while the jobs were hung. Also, if I cancel the hung job, the next tape
  job in queue starts and completes just fine.
 
  -Shon
 
  On Wed, Feb 18, 2009 at 4:13 PM, Ryan Novosielski novos...@umdnj.edu
  mailto:novos...@umdnj.edu wrote:
 
  Mingus Dew wrote:
  Hi all,
   Been using Bacula 2.4.2 on Solaris 10_x86 for almost 2 years now.
  Recently tape backups have been entering into a state that I can only
  describe as limbo.
 
  If I check the status of the director, I may see something like
 
  Running Jobs:
   JobId Level   Name   Status
  ==
   22649 Increme  RMAN_A_Lvl1_Tape.2009-02-17_13.30.36 is running
   22650 Increme  RMAN_B_Lvl1_Tape.2009-02-17_13.30.38 is waiting on max
  Storage jobs
   22651 Increme  RMAN_PROD_Lvl1_Tape.2009-02-17_14.00.40 is waiting on
  max Storage jobs
   22652 Increme  RMAN_BI_Lvl1_Tape.2009-02-17_14.00.42 is waiting
  on max
  Storage jobs
   22653 Increme  RMAN_COG_Lvl1_Tape.2009-02-17_14.00.44 is waiting
  on max
  Storage jobs
 
  If I check the status of the running jobid or the tape device, it will
  show this:
 
  Used Volume status:
  B00046 on device Ultrium-TD3 (/dev/rmt/0cbn)
  Reader=0 writers=0 devres=0 volinuse=1
  
 
  Data spooling: 0 active jobs, 0 bytes; 80 total jobs,
  47,799,329,608 max
  bytes/job.
  Attr spooling: 0 active jobs, 0 bytes; 80 total jobs, 40,616 max
  bytes.
 
  Basically, tape is mounted and reserved, job is showing a is running
  status, but nothing is happening. Because I lack any monitoring of how
  long jobs have been running,
  these have sat for as many as 3 days without changing status,
  erroring,
  or completing. This backs up subsequent jobs that have been
  waiting for
  the tape device.
  The only commonality that I've seen is that they are tape jobs. Other
  than that, the level, fileset, etc. are different.
 
  On one occasion when I cancelled one of these long running jobs, I got
  an error
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  unregister_watchdog_unlocked called before start_watchdog
 
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  bacula-dir[20200]: [ID 702911 daemon.error] backup4.director: ABORTING
  due to ERROR in watchdog.c:206
 
  If anyone has any advice on what might be happening, I would really
  appreciate your responses.
 
  Check to see what, if anything, your backend database is doing. You
  don't tell us what it is, so I can't be any more specific.
 

 -

 --
 Open Source Business Conference (OSBC), March 24-25, 2009, San
 Francisco, CA
 - -OSBC tackles the biggest issue in open source: Open Sourcing the
 Enterprise
 - -Strategies to boost innovation and cut costs with open source
 participation
 - -Receive a $600 discount off the registration fee with the source
 code: SFAD
 http://p.sf.net/sfu

Re: [Bacula-users] Jobs not completing, but not erroring?

2009-02-18 Thread Mingus Dew
Sorry, I forgot to mention: MySQL 4. It's still responding; I've tested it
while the jobs were hung. Also, if I cancel the hung job, the next tape job
in the queue starts and completes just fine.

-Shon

On Wed, Feb 18, 2009 at 4:13 PM, Ryan Novosielski novos...@umdnj.eduwrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Mingus Dew wrote:
  Hi all,
   Been using Bacula 2.4.2 on Solaris 10_x86 for almost 2 years now.
  Recently tape backups have been entering into a state that I can only
  describe as limbo.
 
  If I check the status of the director, I may see something like
 
  Running Jobs:
   JobId Level   Name   Status
  ==
   22649 Increme  RMAN_A_Lvl1_Tape.2009-02-17_13.30.36 is running
   22650 Increme  RMAN_B_Lvl1_Tape.2009-02-17_13.30.38 is waiting on max
  Storage jobs
   22651 Increme  RMAN_PROD_Lvl1_Tape.2009-02-17_14.00.40 is waiting on
  max Storage jobs
   22652 Increme  RMAN_BI_Lvl1_Tape.2009-02-17_14.00.42 is waiting on max
  Storage jobs
   22653 Increme  RMAN_COG_Lvl1_Tape.2009-02-17_14.00.44 is waiting on max
  Storage jobs
 
  If I check the status of the running jobid or the tape device, it will
  show this:
 
  Used Volume status:
  B00046 on device Ultrium-TD3 (/dev/rmt/0cbn)
  Reader=0 writers=0 devres=0 volinuse=1
  
 
  Data spooling: 0 active jobs, 0 bytes; 80 total jobs, 47,799,329,608 max
  bytes/job.
  Attr spooling: 0 active jobs, 0 bytes; 80 total jobs, 40,616 max bytes.
 
  Basically, tape is mounted and reserved, job is showing a is running
  status, but nothing is happening. Because I lack any monitoring of how
  long jobs have been running,
  these have sat for as many as 3 days without changing status, erroring,
  or completing. This backs up subsequent jobs that have been waiting for
  the tape device.
  The only commonality that I've seen is that they are tape jobs. Other
  than that, the level, fileset, etc. are different.
 
  On one occasion when I cancelled one of these long running jobs, I got
  an error
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  unregister_watchdog_unlocked called before start_watchdog
 
 
  Hostname: BUG!
  Date: 2009-02-11 14:00:30
  Severity: err
 
  bacula-dir[20200]: [ID 702911 daemon.error] backup4.director: ABORTING
  due to ERROR in watchdog.c:206
 
  If anyone has any advice on what might be happening, I would really
  appreciate your responses.

 Check to see what, if anything, your backend database is doing. You
 don't tell us what it is, so I can't be any more specific.

 - --
   _  _ _  _ ___  _  _  _
  |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer II
  |$| |__| |  | |__/ | \| _| |novos...@umdnj.edu - 973/972.0922 (2-0922)
  \__/ Univ. of Med. and Dent.|IST/CST - NJMS Medical Science Bldg - C630
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iEYEARECAAYFAkmcee8ACgkQmb+gadEcsb7DFwCgsSkpcfe1yenkadAjrZwH0nhf
 hVcAoNI3/Xjl7F59nl/uIEQE5/qDQfmx
 =l5KJ
 -END PGP SIGNATURE-


 --
 Open Source Business Conference (OSBC), March 24-25, 2009, San Francisco,
 CA
 -OSBC tackles the biggest issue in open source: Open Sourcing the
 Enterprise
 -Strategies to boost innovation and cut costs with open source
 participation
 -Receive a $600 discount off the registration fee with the source code:
 SFAD
 http://p.sf.net/sfu/XcvMzF8H
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




[Bacula-users] Bacula estimate listing shows the files and attributes, backup job error is Could not stat FILE=: ERR=No such file or directory

2008-12-11 Thread Mingus Dew
All,
 I am having a very strange problem. My backup server is running Bacula
2.4.2 (Solaris 10_x86) and my client is Bacula 2.2.8 (Solaris 10_x86). This
client is having intermittent job failures that I am at a loss to explain.
Here is the log output from one of the failed jobs. It completes with an
OK status, but does not back up any files. I know it says "ERR=No such file
or directory", but the files are there and an estimate listing shows them.

2008-12-11 11:32:04mt-back4.director JobId 17587: Start Backup JobId 17587,
Job=GCRS_Lvl0_Tape.2008-12-11_11.32.41
2008-12-11 11:32:04mt-back4.director JobId 17587: Using Device Ultrium-TD3
2008-12-11 11:32:04mt-back4.storage JobId 17587: 3301 Issuing autochanger
loaded? drive 0 command.
2008-12-11 11:32:07mt-back4.storage JobId 17587: 3302 Autochanger loaded?
drive 0, result: nothing loaded.
2008-12-11 11:32:07mt-back4.storage JobId 17587: 3304 Issuing autochanger
load slot 18, drive 0 command.
2008-12-11 11:34:30mt-back4.storage JobId 17587: 3305 Autochanger load slot
18, drive 0, status is OK.
2008-12-11 11:34:30mt-back4.storage JobId 17587: Volume B00041 previously
written, moving to end of data.
2008-12-11 11:35:19mt-back4.storage JobId 17587: Ready to append to end of
Volume B00041 at file=142.
2008-12-11 11:35:19mt-back4.storage JobId 17587: Spooling data ...
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
11-Dec 11:35 adm9. JobId 17587:  Could not stat FILE=: ERR=No such file
or directory
2008-12-11 11:35:19mt-back4.storage JobId 17587: Job write elapsed time =
00:00:01, Transfer rate = 0  bytes/second
2008-12-11 11:35:19mt-back4.storage JobId 17587: Committing spooled data to
Volume B00041. Despooling 462 bytes ...
2008-12-11 11:35:19mt-back4.storage JobId 17587: Despooling elapsed time =
00:00:01, Transfer rate = 462  bytes/second
2008-12-11 11:35:24mt-back4.storage JobId 17587: Sending spooled attrs to
the Director. Despooling 0 bytes ...
2008-12-11 11:35:24mt-back4.director JobId 17587: Bacula mt-back4.director
2.4.2 (26Jul08): 11-Dec-2008 11:35:24
  Build OS:   i386-pc-solaris2.10 solaris 5.10
  JobId:  17587
  Job:GCRS_Lvl0_Tape.2008-12-11_11.32.41
  Backup Level:   Incremental, since=2008-12-08 22:53:45
  Client: adm9. 2.2.8 (26Jan08)
i386-pc-solaris2.8,solaris,5.8
  FileSet:Gcrs_Lvl0 2008-11-16 08:03:52
  Pool:   Tapes (From Job resource)
  Storage:Exabyte_224 (From Pool resource)
  Scheduled time: 11-Dec-2008 11:31:57
  Start time: 11-Dec-2008 11:32:04
  End time:   11-Dec-2008 11:35:24
  Elapsed time:   3 mins 20 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Storage Encryption: no
  Volume name(s): B00041
  Volume Session Id:  1391
  Volume Session Time:1227027150
  Last Volume Bytes:  140,914,916,352 (140.9 GB)
  Non-fatal FD errors:21
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:

[Bacula-users] Please help me with Bacula tape load issues

2008-11-18 Thread Mingus Dew
Bacula 2.4.2 has been running generally well for me for over a year now. I've
upgraded to 2.4.2 from 2 previous versions.

Lately I have been having multiple issues with Bacula losing track of tapes
in the autochanger, not being able to load tapes
that are in the changer, and just generally not working as advertised.

I haven't made any changes, so I kind of need to know what's going on.

Currently I have this issue:

18-Nov 06:33 mt-back4.storage JobId 16098: Please mount Volume A00046 or
label a new one for:
Job:  Oracle_Weekly_Tape.2008-11-16_22.00.49
Storage:  Ultrium-TD3 (/dev/rmt/0cbn)
Pool: Advisen_Tapes
Media type:   LTO-3


Seems pretty normal. The director wants us to mount a new Volume. But
wait

|  37 | A00033 | Full  |   1 | 1,140,108,770,304 |1,142
|2,332,800 |   1 |9 | 0 | LTO-3 | 2008-11-07
00:27:55 |
|  45 | A00040 | Full  |   1 |   638,830,983,168 |  639
|2,332,800 |   1 |4 | 1 | LTO-3 | 2008-11-14
03:45:31 |
|  47 | A00042 | Full  |   1 |   465,565,750,272 |  466
|2,332,800 |   1 |6 | 1 | LTO-3 | 2008-11-14
07:13:40 |
|  48 | A00043 | Full  |   1 |   594,564,784,128 |  596
|2,332,800 |   1 |7 | 1 | LTO-3 | 2008-11-17
15:21:46 |
|  51 | A00046 | Recycle   |   1 | 1 |0
|2,332,800 |   1 |   10 | 1 | LTO-3 | 2008-10-17
05:42:21 |
|  52 | A00047 | Full  |   1 |   468,591,298,560 |  469
|2,332,800 |   1 |   11 | 1 | LTO-3 | 2008-11-14
10:28:58 |
|  53 | A00048 | Full  |   1 |   909,908,342,784 |  911
|2,332,800 |   1 |   12 | 1 | LTO-3 | 2008-11-16
20:37:24 |
|  78 | B00026 | Full  |   1 | 1,102,732,904,448 |1,104
|2,332,800 |   1 |   14 | 0 | LTO-3 | 2008-11-09
20:03:27 |
|  79 | B00025 | Append|   1 |   636,516,292,608 |  638
|2,332,800 |   1 |   13 | 0 | LTO-3 | 2008-11-10
11:31:24 |
|  85 | B00030 | Recycle   |   1 | 1 |0
|2,332,800 |   1 |   18 | 0 | LTO-3 | 2008-10-13
17:04:46 |
|  87 | B00043 | Full  |   1 |   928,518,893,568 |  929
|2,332,800 |   1 |   19 | 1 | LTO-3 | 2008-10-17
10:06:32 |
|  88 | B00044 | Full  |   1 | 1,273,995,813,888 |1,277
|2,332,800 |   1 |   20 | 1 | LTO-3 | 2008-11-13
22:15:21 |
| 109 | B00014 | Full  |   1 |   705,744,635,904 |  706
|2,332,800 |   1 |   14 | 0 | LTO-3 | 2008-10-10
05:26:14 |
+-++---+-+---+--+--+-+--+---+---+-+

A00046 is in the changer. Its status is Recycle. When I check the Storage
status I see

Device status:
Autochanger Magnum_224 with devices:
   Ultrium-TD3 (/dev/rmt/0cbn)
Device Ultrium-TD3 (/dev/rmt/0cbn) open but no Bacula volume is currently
mounted.
Device is BLOCKED waiting for mount of volume A00046,
   Pool:Oracle_Tapes
   Media type:  LTO-3
Slot 10 is loaded in drive 0.
Total Bytes Read=0 Blocks Read=0 Bytes/block=0
Positioned at File=0 Block=0

If I use MTX I can see that A00046 is actually loaded in the drive

mtx -f /dev/scsi/changer/c6t4d1 status
  Storage Changer /dev/scsi/changer/c6t4d1:1 Drives, 23 Slots ( 0
Import/Export )
Data Transfer Element 0:Full (Storage Element 10 Loaded):VolumeTag =
A00046
  Storage Element 1:Full :VolumeTag=A00037
  Storage Element 2:Full :VolumeTag=A00038
  Storage Element 3:Full :VolumeTag=A00039
  Storage Element 4:Full :VolumeTag=A00040
  Storage Element 5:Full :VolumeTag=A00041
  Storage Element 6:Full :VolumeTag=A00042
  Storage Element 7:Full :VolumeTag=A00043
  Storage Element 8:Full :VolumeTag=A00044
  Storage Element 9:Full :VolumeTag=A00045
  Storage Element 10:Empty:VolumeTag=

  Storage Element 11:Full :VolumeTag=A00047

  Storage Element 12:Full :VolumeTag=A00048

  Storage Element 13:Full :VolumeTag=B00037

  Storage Element 14:Full :VolumeTag=B00038

  Storage Element 15:Full :VolumeTag=B00039

  Storage Element 16:Full :VolumeTag=B00040

  Storage Element 17:Full :VolumeTag=B00041

  Storage Element 18:Full :VolumeTag=B00042

  Storage Element 19:Full :VolumeTag=B00043

  Storage Element 20:Full :VolumeTag=B00044

  Storage Element 21:Full :VolumeTag=B00045

  Storage Element 22:Full :VolumeTag=B00046

  Storage Element 23:Full :VolumeTag=B00047

This and similar issues have been plaguing me recently. I have NO clue what
might be happening. Can anyone help?

Thanks,
Shon
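
When the SD and the changer disagree like this, the usual first step is to
resynchronize the slot inventory and then mount the loaded tape by hand; a
bconsole sketch (storage name as defined in your Director config, slot and
drive numbers taken from the output above):

*unmount storage=Exabyte_224
*update slots scan storage=Exabyte_224
*mount storage=Exabyte_224 slot=10 drive=0

The scan variant reads each tape label rather than trusting the barcodes, so
it is slow, but it is the surest way to get the catalog's InChanger and Slot
information back in line with reality.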

Re: [Bacula-users] Cannot find any appendable volumes. (but they are there!)

2008-11-10 Thread Mingus Dew
When I have this issue I do the following.

1. Make sure that the request for an appendable volume isn't legitimate.
It happens: tapes fill up and no more are available in the changer.
2. Cancel any tape jobs. Let any disk jobs complete
3. Stop bacula-dir, bacula-sd
4. Remove bacula-dir.state, bacula-sd.state
5. Restart bacula-dir, bacula-sd
6. Run canceled tape jobs.

-Shon
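
A shell sketch of steps 3-5 above (the working directory and init script
paths are assumptions -- adjust them to your install):

# stop the daemons, clear their saved state, then restart them
/etc/init.d/bacula-dir stop
/etc/init.d/bacula-sd stop
rm /var/bacula/working/bacula-dir.state /var/bacula/working/bacula-sd.state
/etc/init.d/bacula-sd start
/etc/init.d/bacula-dir start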

On Tue, Nov 4, 2008 at 7:31 AM, Joerg Wunsch [EMAIL PROTECTED] wrote:

 As Mingus Dew wrote:

  I hope someone answers your question. I've seen this same issue reported
 my
  me and others with no admission that there is a bug. I personally can't
 find
  anything wrong with my configuration. Could you check your bacula.log for
  any errors in the tape jobs prior. I've noticed that library error (mtx
  output) is not making it into the failure emails, only the request for an
  appendable volume

 That might explain it, but then I'd still like to know how to recover.

 The night before, one backup was aborted since the autochanger script
 somehow got confused about the slots to unload the tape into, and then
 the ch(4) device (this is FreeBSD which has a kernel device driver for
 media changer devices) refused to unload the medium because the
 desired destination slot was not empty.  Prompted by this, I modified
 the autochanger script to always use the chio return drive 0 command
 rather than even trying to trust the slot number passed in from Bacula
 -- I found my L280 can always exactly remember which slot the medium
 has been loaded from, even after a power cycle, so there's no point in
 trying to move the tape to anything else but its source slot.

 After fixing this, I did not restart the bacula daemons though.  The
 last messages from the previous night's failed jobs were:

 03-Nov 03:13 uriah-sd JobId 35: 3304 Issuing autochanger load slot 3,
 drive 0 command.
 03-Nov 03:14 uriah-sd JobId 35: Fatal error: 3992 Bad autochanger load
 slot 3, drive 0: ERR=Child exited with code 1.
 Results=chio: /dev/ch0: CHIOMOVE: No space left on device

 03-Nov 03:14 uriah-fd JobId 35: Fatal error: job.c:1817 Bad response to
 Append Data command. Wanted 3000 OK data
 , got 3903 Error append data

 One day later (i.e. this morning), the messages were:

 04-Nov 03:03 uriah-dir JobId 36: Start Backup JobId 36,
 Job=UriahHome.2008-11-04_03.03.24
 04-Nov 03:03 uriah-dir JobId 36: Using Device Drive-1
 04-Nov 03:03 uriah-sd JobId 36: Job UriahHome.2008-11-04_03.03.24 waiting.
 Cannot find any appendable volumes.

 I.e. it did not even attempt to touch the changer.

 So how would I have to tell the Bacula daemons my media changer
 problems have been fixed?  Remember, even a full update slots scan
 did not unblock it.

 --
 cheers, Jorg   .-.-.   --... ...--   -.. .  DL8DTL

 http://www.sax.de/~joerg/ http://www.sax.de/%7Ejoerg/
  NIC: JW11-RIPE
 Never trust an operating system you don't have sources for. ;-)



Re: [Bacula-users] Cannot find any appendable volumes. (but they are there!)

2008-11-04 Thread Mingus Dew
I hope someone answers your question. I've seen this same issue reported by
me and others, with no admission that there is a bug. I personally can't find
anything wrong with my configuration. Could you check your bacula.log for
any errors in the prior tape jobs? I've noticed that the library error (mtx
output) is not making it into the failure emails, only the request for an
appendable volume.

-Shon

On Tue, Nov 4, 2008 at 3:28 AM, Joerg Wunsch [EMAIL PROTECTED] wrote:

 I see there's another message with that subject already.  My problem's
 a little different: the appendable volume /is/ there, but Bacula
 refuses to use it.  The drive is BLOCKED, and I don't know how to make
 it proceed.  I eventually gave up, and killed the job so the other
 outstanding jobs could at least proceed, but would like to know why
 Bacula didn't really use the volumes, to avoid the problem in the
 future.

 The message is:

 04-Nov 09:03 uriah-sd JobId 36: Job UriahHome.2008-11-04_03.03.24 waiting.
 Cannot find any appendable volumes.
  Please use the "label" command to create a new Volume for:
Storage:  Drive-1 (/dev/nsa0)
Pool: Home
Media type:   DLT-7000

  "status stor" says:

 Device status:
 Autochanger L280 with devices:
   Drive-1 (/dev/nsa0)
 Device Drive-1 (/dev/nsa0) is not open.
Device is BLOCKED waiting to create a volume for:
   Pool:Home
   Media type:  DLT-7000
Drive 0 status unknown.

 Yet there are two media available in the pool Home that are marked
 Append:

 *list media
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 Pool: Default

 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | MediaId | VolumeName | VolStatus | Enabled | VolBytes    | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | 1       | uriah-001  | Append    | 1       | 14103097344 | 20       | 31536000     | 1       | 1    | 1         | DLT-7000  | 2008-10-29 23:25:47 |
 | 3       | uriah-003  | Append    | 1       | 31426375680 | 35       | 31536000     | 1       | 3    | 1         | DLT-7000  | 2008-11-02 19:06:51 |
 | 5       | uriah-005  | Append    | 1       | 64512       | 0        | 31536000     | 1       | 5    | 1         | DLT-7000  | 0                   |
 | 7       | uriah-007  | Append    | 1       | 64512       | 0        | 31536000     | 1       | 7    | 1         | DLT-7000  | 0                   |
 | 8       | uriah-008  | Append    | 1       | 64512       | 0        | 31536000     | 1       | 8    | 1         | DLT-7000  | 0                   |
 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 Pool: Home
 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | MediaId | VolumeName | VolStatus | Enabled | VolBytes    | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 | 2       | uriah-002  | Full      | 1       | 19375727616 | 22       | 31536000     | 1       | 2    | 1         | DLT-7000  | 2008-11-02 03:29:38 |
 | 4       | uriah-004  | Append    | 1       | 690213888   | 4        | 31536000     | 1       | 4    | 1         | DLT-7000  | 2008-11-03 03:09:00 |
 | 6       | uriah-006  | Append    | 1       | 64512       | 0        | 31536000     | 1       | 6    | 1         | DLT-7000  | 0                   |
 +---------+------------+-----------+---------+-------------+----------+--------------+---------+------+-----------+-----------+---------------------+

 I had to mark uriah-002 artificially Full because the drive lost power
 before it finished writing the last job that went onto the tape, so the
 tape is currently not really appendable anymore.  But both uriah-004 and
 uriah-006 should be OK to be written to, so what's missing?

 Since the drive was BLOCKED, unmount/mount did not do anything at all
 (the changer script was not started), and even update slots scan did
 not unblock it, even though it thoroughly scanned all the available
 volume headers (and thus confirmed the volume status).

 I did not attempt to relabel uriah-006 since I want it to continue
 filling up medium uriah-004 first, until that one is really full.

 --
 cheers, Joerg   .-.-.   --... ...--   -.. .  DL8DTL

 http://www.sax.de/~joerg/
  NIC: JW11-RIPE
 Never trust an operating system you don't have sources for. ;-)

 -
 This SF.Net email is sponsored by 

[Bacula-users] Need help with Bacula losing track of tapes

2008-10-30 Thread Mingus Dew
I'm not entirely sure if it's a Bacula-specific issue, but I could use some
advice (as usual).

I'm running Bacula 2.4.2 on Solaris 10_x86 with an Exabyte LTO-3 tape
library. What is happening is that at some point an error occurs on the
device, and then Bacula can no longer load or even determine the correct tape
to load for subsequent jobs. My only solution has been to stop Bacula and
remove the bacula-dir.state file. If I don't do this, no matter how many
times I update slots, I can't get Bacula to locate the right slots.

This is the log output

2008-10-16 22:00:00 mt-back4.director JobId 14221: Start Backup JobId 14221, Job=NAS_Weekly_Tape.2008-10-16_22.00.40
2008-10-16 22:00:00 mt-back4.storage JobId 14221: 3307 Issuing autochanger unload slot 1, drive 0 command.
2008-10-16 22:01:45 mt-back4.director JobId 14221: Using Device Ultrium-TD3
2008-10-16 22:01:45 mt-back4.storage JobId 14221: 3301 Issuing autochanger loaded? drive 0 command.
2008-10-16 22:01:47 mt-back4.storage JobId 14221: 3302 Autochanger loaded? drive 0, result: nothing loaded.
2008-10-16 22:01:47 mt-back4.storage JobId 14221: 3304 Issuing autochanger load slot 7, drive 0 command.
2008-10-16 22:04:06 mt-back4.storage JobId 14221: 3305 Autochanger load slot 7, drive 0, status is OK.
2008-10-16 22:04:06 mt-back4.storage JobId 14221: Volume A00043 previously written, moving to end of data.
2008-10-16 22:05:32 mt-back4.storage JobId 14221: Ready to append to end of Volume A00043 at file=663.
2008-10-17 00:16:36 mt-back4.storage JobId 14221: End of Volume A00043 at 1048:9866 on device Ultrium-TD3 (/dev/rmt/0cbn). Write of 64512 bytes got 0.
2008-10-17 00:16:39 mt-back4.storage JobId 14221: Re-read of last block succeeded.
2008-10-17 00:16:39 mt-back4.storage JobId 14221: End of medium on Volume A00043 Bytes=1,047,677,137,920 Blocks=16,240,034 at 17-Oct-2008 00:16.
2008-10-17 00:16:39 mt-back4.storage JobId 14221: 3307 Issuing autochanger unload slot 7, drive 0 command.
2008-10-17 00:18:17 mt-back4.director JobId 14221: There are no more Jobs associated with Volume A00046. Marking it purged.
2008-10-17 00:18:17 mt-back4.director JobId 14221: All records pruned from Volume A00046; marking it Purged
2008-10-17 00:18:17 mt-back4.director JobId 14221: Recycled volume A00046
2008-10-17 00:18:17 mt-back4.storage JobId 14221: 3301 Issuing autochanger loaded? drive 0 command.
2008-10-17 00:18:20 mt-back4.storage JobId 14221: 3302 Autochanger loaded? drive 0, result: nothing loaded.
2008-10-17 00:18:20 mt-back4.storage JobId 14221: 3304 Issuing autochanger load slot 10, drive 0 command.
2008-10-17 00:20:39 mt-back4.storage JobId 14221: 3305 Autochanger load slot 10, drive 0, status is OK.
2008-10-17 00:20:40 mt-back4.storage JobId 14221: Recycled volume A00046 on device Ultrium-TD3 (/dev/rmt/0cbn), all previous data lost.
2008-10-17 00:20:40 mt-back4.storage JobId 14221: New volume A00046 mounted on device Ultrium-TD3 (/dev/rmt/0cbn) at 17-Oct-2008 00:20.
2008-10-17 05:42:21 mt-back4.storage JobId 14221: End of Volume A00046 at 727:0 on device Ultrium-TD3 (/dev/rmt/0cbn). Write of 64512 bytes got 0.
2008-10-17 05:42:25 mt-back4.storage JobId 14221: Error: Backspace record at EOT failed. ERR=I/O error
2008-10-17 05:42:25 mt-back4.storage JobId 14221: End of medium on Volume A00046 Bytes=726,953,472,000 Blocks=11,268,499 at 17-Oct-2008 05:42.
2008-10-17 05:42:25 mt-back4.storage JobId 14221: 3307 Issuing autochanger unload slot 10, drive 0 command.
2008-10-17 05:44:08 mt-back4.director JobId 14221: There are no more Jobs associated with Volume B00043. Marking it purged.
2008-10-17 05:44:09 mt-back4.director JobId 14221: All records pruned from Volume B00043; marking it Purged
2008-10-17 05:44:09 mt-back4.director JobId 14221: Recycled volume B00043
2008-10-17 05:44:09 mt-back4.storage JobId 14221: 3301 Issuing autochanger loaded? drive 0 command.
2008-10-17 05:44:11 mt-back4.storage JobId 14221: 3302 Autochanger loaded? drive 0, result: nothing loaded.
2008-10-17 05:44:11 mt-back4.storage JobId 14221: 3304 Issuing autochanger load slot 19, drive 0 command.
2008-10-17 05:46:35 mt-back4.storage JobId 14221: 3305 Autochanger load slot 19, drive 0, status is OK.
2008-10-17 05:46:35 mt-back4.storage JobId 14221: Recycled volume B00043 on device Ultrium-TD3 (/dev/rmt/0cbn), all previous data lost.
2008-10-17 05:46:35 mt-back4.storage JobId 14221: New volume B00043 mounted on device Ultrium-TD3 (/dev/rmt/0cbn) at 17-Oct-2008 05:46.
2008-10-17 10:06:31 mt-back4.storage JobId 14221: End of Volume B00043 at 928:8964 on device Ultrium-TD3 (/dev/rmt/0cbn). Write of 64512 bytes got 0.
2008-10-17 10:06:34 mt-back4.storage JobId 14221: Re-read of last block succeeded.
2008-10-17 10:06:34 mt-back4.storage JobId 14221: End of medium on Volume B00043 Bytes=928,518,893,568 Blocks=14,392,963 at 17-Oct-2008 10:06.
2008-10-17 10:06:34 mt-back4.storage JobId 14221: 3307 Issuing autochanger unload slot 19, drive 0 command.
2008-10-17 10:12:06 mt-back4.storage JobId 14221: 3995 Bad autochanger unload slot 

Re: [Bacula-users] How does a change to Volume Use Duration take effect?

2008-03-03 Thread Mingus Dew
Thanks Arno. I just wanted confirmation from someone more experienced.


On Fri, Feb 29, 2008 at 11:55 AM, Arno Lehmann [EMAIL PROTECTED] wrote:

 Hi,

 29.02.2008 17:30, John Drescher wrote:
  By changing the Volume Use Duration in the Pool config, Bacula will
 pick up
  the change after a reload and I don't need to update the volume in the
  catalog?
  Is there a better way to address this issue?
 
 
  I may be wrong on this case but I have found that any changes in the
  pool resource do not affect currently labeled tapes. To change
  currently labeled tapes you need to use the update volume command and
  follow the prompts.

 You are absolutely right - I like to think of the pool definition as
 the template for newly created volumes.


 Arno

  John
 
 
 -
  This SF.net email is sponsored by: Microsoft
  Defy all challenges. Microsoft(R) Visual Studio 2008.
  http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users
 

 --
 Arno Lehmann
 IT-Service Lehmann
 www.its-lehmann.de

 -
 This SF.net email is sponsored by: Microsoft
 Defy all challenges. Microsoft(R) Visual Studio 2008.
 http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How does a change to Volume Use Duration take effect?

2008-03-03 Thread Mingus Dew
This is great too. Thanks, Brian. This actually answers a question I was
going to pose about tapes: I have some tapes that I changed pool definitions
for, but wasn't quite sure how to update that on the tapes.

Thanks again

On Fri, Feb 29, 2008 at 1:27 PM, Brian Debelius [EMAIL PROTECTED]
wrote:

  If I understand you correctly, this is how I do it:

 *reload
 You have messages.
 *update
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 Update choice:
  1: Volume parameters
  2: Pool from resource
  3: Slots from autochanger
 Choose catalog item to update (1-3): 2
 The defined Pool resources are:
  1: Default
  2: Catalog
  3: Scratch
  4: Cleaning
  5: Tape-daily
  6: Tape-weekly
 Select Pool resource (1-6): 5

 +++-+-+-++-

 +---+-+--+---+-+-+-
 ---+
 | PoolId | Name   | NumVols | MaxVols | UseOnce | UseCatalog |
 AcceptAnyVol
 | AutoPrune | Recycle | PoolType | LabelType | LabelFormat | Enabled |
 ScratchP
 rationTime |

 +++-+-+-++-

 +---+-+--+---+-+-+-
 ---+
 |  5 | Tape-daily |   9 |   0 |   0 |  1 |
 | 1 |   1 | Backup   | 0 | *   |   1 |
  0 |

 +++-+-+-++-

 +---+-+--+---+-+-+-
 ---+
 Pool DB record updated from resource.
 *update
 Update choice:
  1: Volume parameters
  2: Pool from resource
  3: Slots from autochanger
 Choose catalog item to update (1-3): 1
 Parameters to modify:
  1: Volume Status
  2: Volume Retention Period
  3: Volume Use Duration
  4: Maximum Volume Jobs
  5: Maximum Volume Files
  6: Maximum Volume Bytes
  7: Recycle Flag
  8: Slot
  9: InChanger Flag
 10: Volume Files
 11: Pool
 12: Volume from Pool
 13: All Volumes from Pool
 14: Enabled
 15: RecyclePool
 16: Done
 Select parameter to modify (1-16): 13
 The defined Pool resources are:
  1: Default
  2: Catalog
  3: Scratch
  4: Cleaning
  5: Tape-daily
  6: Tape-weekly
 Select Pool resource (1-6): 5
 All Volume defaults updated from Tape-daily Pool record.

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Too many open reload requests?

2008-03-03 Thread Mingus Dew
I am running Bacula 2.2.7 on Solaris 10_x86. When I make changes I usually
do the following:

Edit configuration file
Test with bacula-dir -t -c bacula-dir.conf
Launch bconsole and issue reload command
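
That workflow condenses to roughly this one-liner (the config path is an
assumption, and bconsole may also need its own -c bconsole.conf):

  bacula-dir -t -c /etc/bacula/bacula-dir.conf && echo "reload" | bconsole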

Recently I started getting this error message after issuing a reload:

03-Mar 16:16 mt-back4.director JobId 0: Error: Too many open reload
requests. Request ignored.

I have not been able to find any reference to this error or why it's
happening. A status command against the director doesn't show any pending
jobs or requests.

Thanks
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How does a change to Volume Use Duration take effect?

2008-02-29 Thread Mingus Dew
Hi all,
 I recently made a change in my Pool configs to change the Volume Use
Duration from 7d to 6d 23h.

Pool {
  Name = My_Disks
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 27d
  Volume Use Duration = 6d 23h
  LabelFormat = ${Pool}-${NumVols}
}

I made this change to compensate for the fact that a job may be scheduled to
start at 08:00, but the SD might not actually start writing the job until
08:09. The following week, the use duration wouldn't have expired yet, and the
current week's job would begin running. Then the volume would expire and the
next job would fail. Something like this:

10-Jan 20:01 mt-back4.director JobId 541: Max configured use duration
exceeded. Marking Volume Mentora_Disks-1 as Used.
10-Jan 20:01 mt-back4.director JobId 541: There are no more Jobs associated
with Volume Mentora_Disks-2. Marking it purged.
10-Jan 20:01 mt-back4.director JobId 541: All records pruned from Volume
Mentora_Disks-2; marking it Purged
10-Jan 20:01 mt-back4.director JobId 541: Recycled volume Mentora_Disks-2
10-Jan 20:01 mt-back4.storage JobId 541: Fatal error: acquire.c:366 Wanted
to append to Volume Mentora_Disks-2, but device Mentora_ZFS_Storage
(/mnt/backup1/mentora/bacula) is busy writing on Mentora_Disks-1 .
10-Jan 02:31 mt-www1.mentora JobId 541: Fatal error: ../../filed/job.c:1811
Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

I believe it's because the first job is still writing, while the second job,
which started after the period expired, can't append to the volume it wants.
So I'd like to know a couple of things:

By changing the Volume Use Duration in the Pool config, Bacula will pick up
the change after a reload and I don't need to update the volume in the
catalog?
Is there a better way to address this issue?

Thanks
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How to permanently remove client?

2008-02-29 Thread Mingus Dew
I am running Bacula 2.2.7 on Solaris 10_x86 w/ MySQL4 database. There are
some systems which are being decommissioned and I'd like to remove all
traces of these clients.
I removed their client and jobs from the director config. However, when I
list clients in bconsole, I still see them. How can I get rid of these
persistent client listings and also make sure my database no longer has
these systems in it either?

Thanks
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How does a change to Volume Use Duration take effect?

2008-02-29 Thread Mingus Dew
I figured out the Duration part, but I'm still not sure if this change is
the correct way to address the larger issue of jobs failing.

*query
Available queries:
 1: List up to 20 places where a File is saved regardless of the
directory
 2: List where the most recent copies of a file are saved
 3: List last 20 Full Backups for a Client
 4: List all backups for a Client after a specified time
 5: List all backups for a Client
 6: List Volume Attributes for a selected Volume
 7: List Volumes used by selected JobId
 8: List Volumes to Restore All Files
 9: List Pool Attributes for a selected Pool
10: List total files/bytes by Job
11: List total files/bytes by Volume
12: List Files for a selected JobId
13: List Jobs stored on a selected MediaId
14: List Jobs stored for a given Volume name
15: List Volumes Bacula thinks are in changer
16: List Volumes likely to need replacement from age or errors
Choose a query (1-16): 9
Enter Pool name: Shopbop_Disks
+---------+--------------+----------------+------------+-------------+-------------+
| Recycle | VolRetention | VolUseDuration | MaxVolJobs | MaxVolFiles | MaxVolBytes |
+---------+--------------+----------------+------------+-------------+-------------+
|       1 |    2,332,800 |        601,200 |          0 |           0 |           0 |
+---------+--------------+----------------+------------+-------------+-------------+

The VolUseDuration is stored with the Pool values, not the Volume values.
This does show the new time as well.
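
(For reference -- my arithmetic, not part of the query output -- those figures
match the configured values: VolUseDuration 6d 23h = 6*86400 + 23*3600 =
601,200 seconds, and VolRetention 27d = 27*86400 = 2,332,800 seconds.)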

Thanks.

On Fri, Feb 29, 2008 at 11:30 AM, John Drescher [EMAIL PROTECTED]
wrote:

  By changing the Volume Use Duration in the Pool config, Bacula will pick
 up
  the change after a reload and I don't need to update the volume in the
  catalog?
  Is there a better way to address this issue?
 

 I may be wrong on this case but I have found that any changes in the
 pool resource do not affect currently labeled tapes. To change
 currently labeled tapes you need to use the update volume command and
 follow the prompts.

 John

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Exabyte 224 - Additional magazine not properlyrecognized

2008-02-21 Thread Mingus Dew
I was able to figure out the issue.

The Exabyte Magnum 224 LTO has to be configured via LCD for the number of
slots. After I set the slots to 24, it worked.

As for Bacula not using the IE slot: I was able to get Bacula to recognize
the slot and use tapes from it by modifying mtx-changer. Let me know if
you'd like to see what I've done.

Thanks

On Tue, Feb 12, 2008 at 10:23 AM, John Drescher [EMAIL PROTECTED]
wrote:

  For what it's worth, we use the same model and have had no problem with
 the
  expansion slots being recognized. Maybe you just need to power cycle the
  unit?
 
  One difference is that we disabled the import/export slot on the
 changer.
  Not sure if that matters though.
 
 I have the IE slot enabled on my unit but my unit came with both slots
 so I always had 24 slots. BTW, bacula will not use the IE slot
 properly so I may switch to disabling it.

 John

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Exabyte 224 - Additional magazine not properlyrecognized

2008-02-21 Thread Mingus Dew
All,
     Here is the output of the mtx status command and of mtx-changer run from
the command line, for comparison. I am running Bacula 2.2.7 on Solaris 10_x86.
Also, I still have to run the mtx status command twice from the command line
for the additional slot to be seen before updating slots in bconsole. I think
this is an Exabyte issue, but it is low on my fix list right now. Also notice
that the IE slot is still numbered 24, even though it's in the RH magazine,
not the LH.

This is the most significant change in the mtx-changer, but I can't recall
if there were others:

${MTX} -f $ctl status | grep " *Storage Element [0-9].*:.*Full" | sed "s/ IMPORT\/EXPORT//" | awk "{print \$3 \$4}" | sed "s/Full:VolumeTag=//"
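
For reference (my annotation, not part of the original script): the grep keeps
only the occupied "Storage Element N:Full ..." lines, the first sed drops the
" IMPORT/EXPORT" tag so the I/E slot is listed like an ordinary slot, the awk
prints fields 3 and 4 ("N:Full" and ":VolumeTag=XXXX") glued together, and the
final sed strips "Full:VolumeTag=" so only slot:barcode pairs remain -- which
is the 1:A00013 ... 24:A00024 style listing shown below.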

[EMAIL PROTECTED]: mtx -f /dev/scsi/changer/c6t4d1 status
  Storage Changer /dev/scsi/changer/c6t4d1:1 Drives, 24 Slots ( 1 Import/Export )
Data Transfer Element 0:Full (Storage Element 7 Loaded):VolumeTag = A00019
  Storage Element 1:Full :VolumeTag=A00013
  Storage Element 2:Full :VolumeTag=A00014
  Storage Element 3:Full :VolumeTag=A00015
  Storage Element 4:Full :VolumeTag=A00016
  Storage Element 5:Full :VolumeTag=A00017
  Storage Element 6:Full :VolumeTag=A00018
  Storage Element 7:Empty:VolumeTag=
  Storage Element 8:Full :VolumeTag=A00020
  Storage Element 9:Full :VolumeTag=A00021
  Storage Element 10:Full :VolumeTag=A00022
  Storage Element 11:Full :VolumeTag=A00023
  Storage Element 12:Empty:VolumeTag=
  Storage Element 13:Empty:VolumeTag=
  Storage Element 14:Empty:VolumeTag=
  Storage Element 15:Empty:VolumeTag=
  Storage Element 16:Empty:VolumeTag=
  Storage Element 17:Empty:VolumeTag=
  Storage Element 18:Empty:VolumeTag=
  Storage Element 19:Empty:VolumeTag=
  Storage Element 20:Empty:VolumeTag=
  Storage Element 21:Empty:VolumeTag=
  Storage Element 22:Empty:VolumeTag=
  Storage Element 23:Empty:VolumeTag=
  Storage Element 24 IMPORT/EXPORT:Full :VolumeTag=A00024

[EMAIL PROTECTED]: ./mtx-changer /dev/scsi/changer/c6t4d1 list 0 0
1:A00013
2:A00014
3:A00015
4:A00016
5:A00017
6:A00018
8:A00020
9:A00021
10:A00022
11:A00023
24:A00024

#!/bin/bash
#
# /bin/sh isn't always compatible so use /bin/bash
#
# Bacula interface to mtx autoloader
#
#  $Id: solaris-mtx-changer 2805 2006-02-22 18:43:55Z kerns $
#
#  If you set in your Device resource
#
#  Changer Command = "path-to-this-script/mtx-changer %c %o %S %a %d"
#you will have the following input to this script:
#
#  mtx-changer changer-device command slot archive-device drive-index
#                   $1           $2     $3       $4            $5
#
#  for example:
#
#  mtx-changer /dev/sg0 load 1 /dev/nst0 0 (on a Linux system)
#
#  If you need to an offline, refer to the drive as $4
#e.g.   mt -f $4 offline
#
#  Many changers need an offline after the unload. Also many
#   changers need a sleep 60 after the mtx load.
#
#  N.B. If you change the script, take care to return either
#   the mtx exit code or a 0. If the script exits with a non-zero
#   exit code, Bacula will assume the request failed.
#

# Sun sed/awk etc are not sufficient, working versions are in /usr/xpg4/bin
#export PATH=/usr/local/bin:/usr/sfw/bin:/usr/xpg4/bin:/usr/bin
# Add all paths
export PATH=/usr/sbin:/usr/bin:/usr/dt/bin:/usr/openwin/bin:/usr/ccs/bin:/usr/local/bin:/usr/local/sbin:/opt/csw/bin:/opt/csw/sbin:/opt/csw/mysql4/bin:/usr/xpg4/bin

MTX=mtx


#
# The purpose of this function to wait a maximum
#   time for the drive. It will
#   return as soon as the drive is ready, or after
#   waiting a maximum of 180 seconds.
# Note, this is very system dependent, so if you are
#   not running on Linux, you will probably need to
#   re-write it.
#
# If you have a FreeBSD system, you might want to change
#  the $(seq 180) to $(jot 180) -- tip from Brian McDonald
#
wait_for_drive() {
  #for i in $(seq 180); do             # Wait max 180 seconds
  # Solaris has no seq, use perl instead
  for i in $(perl -e '$,="\n"; print 0 .. 180;'); do   # Wait max 180 seconds
    if ( mt -f $1 status | grep "0x0" ) >/dev/null 2>&1; then
      #echo "Device $1 READY"
      break
    fi
    #echo "Device $1 - not ready, retry ${i}..."
    sleep 1
  done
}


if test $# -lt 2 ; then
  echo "usage: mtx-changer ctl-device command slot archive-device drive"
  echo "  Insufficient number of arguments given."
  echo "  Minimum usage is first two arguments ..."
  exit 1
fi

# Setup arguments
ctl=$1
cmd=$2
slot=$3
#device=$4
# Set $device to the correct tape device, unless otherwise defined
if test "x$4" = "x" ; then
  # change this to your device
  device=/dev/rmt/0
else
  device=$4
fi
# If drive not given, default to 0
if test $# = 5 ; then
  drive=$5
else
  drive=0
fi

#
# Check for special cases where only 2 arguments are needed,
#  all others are a minimum of 3
case $cmd in
   loaded)
 ;;
   unload)
 ;;
   list)
 ;;
   slots)
 ;;
   *)
 if test $# -lt 3; then
echo usage: mtx-changer 

[Bacula-users] Exabyte 224 - Additional magazine not properly recognized

2008-02-11 Thread Mingus Dew
Hi all,
  This is slightly off topic. I've been using an Exabyte Magnum 224
autoloader with a 12-slot magazine and a single LTO-3 drive with Bacula 2.2.7
for several months now.
I recently purchased an additional magazine to increase the number of tapes
in a given cycle.

According to Exabyte I should only have to insert the magazine for the
autoloader to recognize it. Which it seemed to do. However, as a precursor
to using this with Bacula, I wanted to test the mtx status output.

[EMAIL PROTECTED]: mtx -f /dev/scsi/changer/c6t4d1 status
  Storage Changer /dev/scsi/changer/c6t4d1:1 Drives, 13 Slots ( 1 Import/Export )
Data Transfer Element 0: Empty
  Storage Element 1:Full :VolumeTag=A00037
  Storage Element 2:Full :VolumeTag=A00038
  Storage Element 3:Full :VolumeTag=A00039
  Storage Element 4:Full :VolumeTag=A00040
  Storage Element 5:Full :VolumeTag=A00041
  Storage Element 6:Full :VolumeTag=A00042
  Storage Element 7:Full :VolumeTag=A00043
  Storage Element 8:Full :VolumeTag=A00044
  Storage Element 9:Full :VolumeTag=A00045
  Storage Element 10:Full :VolumeTag=A00046
  Storage Element 11:Full :VolumeTag=A00047
  Storage Element 12 IMPORT/EXPORT:Full :VolumeTag=A00048
  Storage Element 13:Empty:VolumeTag=

So only 1 additional slot is seen, rather than the 12 more (24 total)
expected.

I'm asking if anyone else has run into this issue with their Exabyte Magnum
224.

Thanks
-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

