[Bacula-users] Anyone out there using an LSI20320-R for your tape drives/libs?

2011-11-26 Thread Jesse Molina

Hi all.  New Bacula user here doing my setup and install.

I am trying to get a new tape drive working with an LSI20320-R card.  The -R 
means it is the RAID variant, but it only does RAID 0 or 1, so I assumed it 
would be fine for a tape drive.  After some drive errors, I am not so sure.

I am having various troubles, which I currently blame on either a bad 
drive, bad tapes, the cable, or the HBA card.

The LSI20320-R is a single channel Ultra320 PCI-X card.  EBay is flush 
with cheap units; $10 each, shipped.  I bought two to have a spare.  =)

If anyone out there is using this card and has feedback, it would make 
me feel better.

My host is Debian GNU/Linux AMD64.



Here's what lspci says about the card:

03:06.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X 
Fusion-MPT Dual Ultra320 SCSI (rev 08)

Linux driver module is mptspi
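
For anyone curious, here is roughly how I am trying to isolate the HBA 
from the drive and tapes.  This is just a sketch; the config path and 
device name are assumptions for my Debian host:

# Check what the mptspi driver negotiated for the HBA and the drive:
dmesg | grep -i -e mptspi -e st0
cat /proc/scsi/scsi

# Exercise the drive directly with Bacula's own tape test tool,
# bypassing the director entirely:
btape -c /etc/bacula/bacula-sd.conf /dev/nst0
# then run "test" (and optionally "fill") at the btape prompt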



-- 
# Jesse Molina
# Mail = je...@opendreams.net
# Page = page-je...@opendreams.net
# Cell = 1.602.323.7608
# Web  = http://www.opendreams.net/jesse/





[Bacula-users] Error writing final EOF to tape. Bad tape suspected.

2011-11-26 Thread Jesse Molina

This looks like a bad tape.  I'm new to Bacula.  Can I get a second 
opinion, please?

Here is some info about my setup:

Debian GNU/Linux, kernel 3.0.0-2-amd64.

Debian packaged Bacula version 5.0.3-1+b1.

This is a Quantum DLT S4 drive attached to an LSI20320-R card in a 32-bit 
PCI slot on a Gigabyte GA-MA78G-DS3H desktop motherboard.  The 
hardware has been in use for a few years and is stable.

The thing is, tar isn't having this problem with the tape in question. 
Yes, I restored and compared all of the files; the backup fileset was 
the same.

Quantum also has a nice toolset called xTalk.  I have run extensive 
read/write tests against the tape, plus a 23-minute device health 
check, and neither indicated any kind of error.

Below are three separate backup attempts on a fairly small set of data. 
  You will see the Bacula log info, and then the kernel log info.  This 
is the same tape.  When I tried another tape, the backup succeeded 
without the error.  Note that /var/log was included, so the fileset 
size changed (probably grew) slightly between attempts.

It looks like the error happens when Bacula tries to write the final 
EOF mark at the end of the job.
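
For what it's worth, that final step can be reproduced outside of Bacula. 
This is just a sketch, with the device name taken from the logs below:

# Space to the end of the recorded data and try to append a filemark,
# which is roughly what Bacula does when it writes the final EOF:
mt -f /dev/nst0 rewind
mt -f /dev/nst0 eod
mt -f /dev/nst0 weof 1
mt -f /dev/nst0 status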



26-Nov 00:05 parts-sd JobId 28: Error: block.c:577 Write error at 
1:22939 on device DLT-S4-1 (/dev/nst0). ERR=Input/output error.
26-Nov 00:05 parts-sd JobId 28: Error: Error writing final EOF to tape. 
This Volume may not be readable.
dev.c:1745 ioctl MTWEOF error on DLT-S4-1 (/dev/nst0). 
ERR=Input/output error.
26-Nov 00:05 parts-sd JobId 28: End of medium on Volume tape1 
Bytes=6,479,778,816 Blocks=100,442 at 26-Nov-2011 00:05.

Nov 26 00:05:06 parts kernel: [ 7387.339293] st0: Sense Key : Medium 
Error [current]
Nov 26 00:05:06 parts kernel: [ 7387.339298] Info fld=0x1243c800
Nov 26 00:05:06 parts kernel: [ 7387.339300] st0: Add. Sense: Write error
Nov 26 00:05:06 parts kernel: [ 7387.420619] st0: Sense Key : Illegal 
Request [current]
Nov 26 00:05:06 parts kernel: [ 7387.420624] st0: Add. Sense: Write 
append position error



26-Nov 00:23 parts-sd JobId 29: Error: block.c:577 Write error at 
1:22983 on device DLT-S4-1 (/dev/nst0). ERR=Input/output error.
26-Nov 00:23 parts-sd JobId 29: Error: Error writing final EOF to tape. 
This Volume may not be readable.
dev.c:1745 ioctl MTWEOF error on DLT-S4-1 (/dev/nst0). 
ERR=Input/output error.
26-Nov 00:23 parts-sd JobId 29: End of medium on Volume tape1 
Bytes=6,482,617,344 Blocks=100,486 at 26-Nov-2011 00:23.

Nov 26 00:23:50 parts kernel: [ 8511.562014] st0: Sense Key : Medium 
Error [current]
Nov 26 00:23:50 parts kernel: [ 8511.562021] Info fld=0x1243c800
Nov 26 00:23:50 parts kernel: [ 8511.562023] st0: Add. Sense: Write error
Nov 26 00:23:50 parts kernel: [ 8511.564405] st0: Sense Key : Illegal 
Request [current]
Nov 26 00:23:50 parts kernel: [ 8511.564410] st0: Add. Sense: Write 
append position error



26-Nov 01:42 parts-sd JobId 30: Error: block.c:577 Write error at 
1:23171 on device DLT-S4-1 (/dev/nst0). ERR=Input/output error.
26-Nov 01:42 parts-sd JobId 30: Error: Error writing final EOF to tape. 
This Volume may not be readable.
dev.c:1745 ioctl MTWEOF error on DLT-S4-1 (/dev/nst0). 
ERR=Input/output error.
26-Nov 01:42 parts-sd JobId 30: End of medium on Volume tape1 
Bytes=6,494,745,600 Blocks=100,674 at 26-Nov-2011 01:42.

Nov 26 01:42:24 parts kernel: [13225.859089] st0: Sense Key : Medium 
Error [current]
Nov 26 01:42:24 parts kernel: [13225.859096] Info fld=0x1243c800
Nov 26 01:42:24 parts kernel: [13225.859097] st0: Add. Sense: Write error
Nov 26 01:42:24 parts kernel: [13225.872664] st0: Sense Key : Illegal 
Request [current]
Nov 26 01:42:24 parts kernel: [13225.872669] st0: Add. Sense: Write 
append position error

--




-- 
# Jesse Molina
# Mail = je...@opendreams.net
# Page = page-je...@opendreams.net
# Cell = 1.602.323.7608
# Web  = http://www.opendreams.net/jesse/





[Bacula-users] SQL Failure while upgrading from 5.0.3 to 5.2.1

2011-11-26 Thread Armin Tueting
All,

26-Nov 12:43 sydney-dir JobId 10: Error: sql_update.c:255 sql_update.c:255 
update UPDATE Counters SET 
MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='DiffFileVolumeCounter' failed: You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version for the right 
syntax to use near 
'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='' at line 1
26-Nov 12:43 sydney-dir JobId 10: Error: Count not update counter 
DiffFileVolumeCounter: ERR=sql_update.c:255 update UPDATE Counters SET 
MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='DiffFileVolumeCounter'
failed: You have an error in your SQL syntax; check the manual that corresponds 
to your MySQL server version for the right syntax to use near 
'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='' at line 1
26-Nov 12:43 sydney-dir JobId 10: Created new Volume DiffFile-0001 in 
catalog.
I'm getting the error message when upgrading from 5.0.3 to 5.2.1...
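
For reference, the Counters row and table definition can be inspected by 
hand; this assumes the default 'bacula' MySQL database and a user with 
access to it:

mysql -u bacula -p bacula -e "DESCRIBE Counters;"
mysql -u bacula -p bacula -e "SELECT * FROM Counters WHERE Counter='DiffFileVolumeCounter';"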



Re: [Bacula-users] Volume Use Duration not working in 5.2.1

2011-11-26 Thread Fahrer, Julian


 -Original Message-
 From: Phil Stracchino [mailto:ala...@metrocast.net]
 Sent: Thursday, 24 November 2011 18:15
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Volume Use Duration not working in 5.2.1
 
 On 11/24/11 11:01, Fahrer, Julian wrote:
  Hi,
 
  Am I missing something or is the Volume Use Duration parameter in
  the pool not working in 5.2.1?
 
  I defined the pool like this:
  ---
  Pool {
Name = NEO200S_Weekly_Pool
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 31 days
Volume Use Duration = 72h
Storage = NEO200S
Cleaning Prefix = CLN
  }
 
 It's working for me.  However, keep in mind that the volume status may
 not update until Bacula next examines the volume to see whether it's
 usable.
 
 

Thanks for the replies. I did not know that Bacula updates the volume status 
only when it examines the volume. Since the volumes are currently not in the 
changer, they won't be used by Bacula and their volume status won't be 
updated.
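
If I need to close those volumes out without loading them, I guess marking 
them by hand from bconsole should work; the volume name below is just an 
example:

echo "update volume=NEO200S_W_0001 volstatus=Used" | bconsole
# or run "update volume" interactively in bconsole and pick Volume Status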



Re: [Bacula-users] backing up Zimbra on-the-fly

2011-11-26 Thread Silver Salonen
On Fri, 25 Nov 2011 13:08:40 -0500 (EST), Bill Arlofski wrote:
 Hi.

 Is anyone backing up Zimbra on-the-fly? I don't think taking the server
 offline for a pure file-based copy is a modern method of doing things.
 Neither do I want to use zmbackup, because as I understand it, that
 dumps all the mailboxes (which are on disk anyway) to separate files,
 which would just waste so much space.

 Hi Silver... The Network Edition (i.e. the commercial/pay-for version)
 of Zimbra supports internal full and incremental backups that it does
 on-the-fly and automatically once configured.

 At our client sites, we use Bacula to back up the automatic Zimbra
 backup directory structure.

 It's a pretty reliable method of backing up Zimbra, and I have
 unfortunately had the experience of having to fully test this process
 when a client's Zimbra server lost 4 drives in a 6-drive RAID5 array
 at the same time. :(

 The good news, though, is that we were able to rebuild the Zimbra server
 (virtual this time), install the Zimbra software, restore Zimbra's
 automatic full and incremental backups from our Bacula backup, and then
 re-import all Zimbra accounts/emails/calendars etc.

 I think with the non-commercial Community Edition (assuming that is
 what you are using) you are best off running a live rsync of the
 /opt/zimbra directory structure, then shutting down Zimbra services
 (zmcontrol stop), running an offline rsync of the /opt/zimbra directory
 structure to the same place, restarting Zimbra services (zmcontrol start),
 THEN running a Bacula backup of the rsync'ed directory.

 On smaller sites using the non-commercial edition of Zimbra, we do
 those steps in a RunBefore script for the Zimbra job.

 Does this cost you a few minutes of Zimbra downtime each night?
 Yes, but only a few at most while the offline rsync runs.

 But if you are running the non-commercial version, the benefit of this
 method is in your cost savings - IMHO.

 Hope this helps.

Thanks for the tips. I'm running the Network Edition, so I do have that 
backup possibility, but I'd prefer using Bacula, especially because I 
want to do backups to a remote server. Zimbra's backup scripts are meant 
for storing backups locally, right? Also, the backups take more-or-less 
the same amount of space as the data itself.

As for the rsync method, the downside is that it needs the same amount 
of disk space for the backup as for the data itself. This is what I 
meant by non-modern in the initial e-mail.

Anyway, would it suffice to make a MySQL dump and an LDAP dump, and just 
back up the whole /opt/zimbra with Bacula from an LVM snapshot or something 
like the sketch below?
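
Roughly what I have in mind for the LVM part; just a sketch, and the volume 
group, LV name, snapshot size and mount point are all made up:

# RunBefore: create and mount a read-only snapshot of the LV holding /opt/zimbra
lvcreate --snapshot --size 10G --name zimbra-snap /dev/vg0/zimbra
mkdir -p /mnt/zimbra-snap
mount -o ro /dev/vg0/zimbra-snap /mnt/zimbra-snap

# ... the Bacula FileSet would then point at /mnt/zimbra-snap ...

# RunAfter: tear the snapshot down again
umount /mnt/zimbra-snap
lvremove -f /dev/vg0/zimbra-snap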

--
Silver



[Bacula-users] How to set up bacula daemons on Win7 + cygwin?

2011-11-26 Thread Tim Saker
Has anyone broken ground on this yet?

I have successfully compiled Bacula 5.0.3 under cygwin on Windows 7 (64-bit
arch). The Director and Storage daemons run beautifully from a cygwin bash
shell and I am able to run backup jobs, so I am confident that there are
no build issues.

So now I want to set up the Director and Storage daemons as Windows
services so the whole thing is hands free. I have tried the following use
of cygrunsrv with no success:

cygrunsrv -I Bacula-dir -d "Bacula Director Service" -p
/home/htpc/bacula/bin/bacula-dir.exe -a
/home/htpc/bacula/bin/bacula-dir.conf

The service installs nicely in the Windows services registry, but when I
attempt to start the service I get the following modal warning dialog:

The Bacula Director Service service on Local Computer started and then
stopped. Some services stop automatically if they are not in use by other
services or programs.

... and the Windows application event log shows this:

The description for Event ID 0 from source Bacula-dir cannot be found.
Either the component that raises this event is not installed on your local
computer or the installation is corrupted. You can install or repair the
component on the local computer.

If the event originated on another computer, the display information had to
be saved with the event.

The following information was included with the event:

Bacula-dir: PID 8156: `Bacula-dir' service stopped, exit status: 255
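
One thing I plan to try next (just a guess at this point): bacula-dir 
normally forks into the background, so cygrunsrv may see the parent exit 
and report the service as stopped.  Keeping the daemon in the foreground 
with -f might help, e.g.:

cygrunsrv -I Bacula-dir -d "Bacula Director Service" \
    -p /home/htpc/bacula/bin/bacula-dir.exe \
    -a "-f -c /home/htpc/bacula/bin/bacula-dir.conf"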