Re: [Bacula-users] How many active processes should Bacula spawn? Seeing over 400+

2011-09-22 Thread Michael Galloway
that's all quite normal for a system with a large number of cpus. on my 48 core boxes, there are more than 700 processes running when it's idling :-) this is just the kernel managing IO etc. between the cpus and the other subsystems.
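
a quick way to see how many of those are kernel threads versus ordinary userspace processes (just a sketch; kernel threads show up with bracketed names in ps):

ps -eo cmd= | grep -c '^\['     # count kernel threads
ps -eo cmd= | grep -vc '^\['    # count everything else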


--- michael

On 09/22/2011 09:30 AM, R. Leigh Hennig wrote:

I have Bacula 5.0.2 installed with a number of clients (15 in total) and
storage resources (5 in total), and the system that my director is installed
on has about 433 active processes. This seems very excessive. The vast
majority of them are these processes:

watchdog/n
ksoftirqd/n
migration/n
events/n
kblockd/n
cqueue/n
aio/n
ata/n
kmpathd/n
xfslogd/n
ib_cm/n
rpciod/n

Where n is some number between 0 and 23. Is this normal, or does something
look wrong to you guys? Thanks for your input,


Re: [Bacula-users] Hp MSL4048

2009-05-29 Thread Michael Galloway
On Fri, May 29, 2009 at 05:44:45PM +0300, Ismail OZATAY wrote:
 Hi all,
 
 I have used bacula for 3 years as a disk-based backup solution. Now I want to
 buy an HP MSL4048 SAS tape library and use bacula as the backup software. Has
 anybody ever used this hardware with bacula? I have no experience with it.
 
 Thanks


i have this library (SAS) running with bacula 2.4.4, dual lto-4 drives. no issues so far.

-- michael
 



Re: [Bacula-users] HP msl5030 misfunctions ...

2009-03-04 Thread Michael Galloway
On Wed, Mar 04, 2009 at 08:29:37AM +, mimmo lariccia wrote:
 
 Hi Michael, and thanks for the fast answer...
 Hi all,

 I hope you're right.
 But these are the results of my tests:
 
 # lsscsi 
 [0:0:0:0]mediumx COMPAQ   MSL5000 Series   0423  -   
 [0:0:0:1]tapeHP   Ultrium 1-SCSI   E38W  /dev/st0
 [0:0:0:2]storage HP   NS E1200-160 530b  -   


you indicated that there are two drives in your library. do both drives show up in the scsi bios when you boot the system? it would seem that the os is not seeing the first drive for some reason.
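
if the second drive does show up in the bios but not in linux, a scsi bus rescan sometimes brings it in without a reboot -- a sketch only, the host number below is a guess, so check /sys/class/scsi_host for your HBA first:

echo "- - -" > /sys/class/scsi_host/host0/scan
lsscsi     # re-check what the os sees now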

-- michael
 



Re: [Bacula-users] HP msl5030 misfunctions ...

2009-03-03 Thread Michael Galloway
On Tue, Mar 03, 2009 at 10:29:24PM +, mimmo lariccia wrote:
 
 Hi all.
 Sorry to annoy you with another strange request, but...
 it's driving me crazy!
 
 Using: 
 - Centos 5.2 Final
 - bacula 2.4.2
 - I'm not able to manage the autochanger at all, but btape test
 works perfectly (after I've loaded the drive manually with mtx-changer); also,
 an update slots gives me this answer:
  3999 Device "Autochanger" not found or could not be opened.
 
 Of course backup on FileStorage works perfectly...
 
 From my point of view it may be strange that linux detects only a /dev/nst0
 instead of the 2 LTO1 drives in the tape library, but it's not the first time I've seen
 an HP MSL type library work with only one drive instead of two or more...
 
 ls /dev 
 - 
 [...]
 /dev/sg0 
 /dev/sg1 
 /dev/sg2 
 [...]
 /dev/nst0
 [...]
 
 bacula-sd.conf
 
 Autochanger {
 Name = HPMSL5030
 Device = Drive1
 Changer Device = /dev/sg0
 Changer Command = /usr/lib/bacula/mtx-changer %c %o %S %a %d
 } 
 
 Device {
 Name = Drive1
 Device Type = Tape
 Media Type = Ultrium1
 Archive Device = /dev/nst0
 Removable Media = yes
 Random Access = no
 }
 
 Can anyone help me please?
 Thanks...


clearly /dev/sg0 is not your changer device. have a look at lsscsi and then test with mtx to see which device is your changer. mine:

krait:~ # lsscsi
[0:0:0:0]diskATA  ST980813AS   3.AA  /dev/sda
[1:0:0:0]diskATA  ST980813AS   3.AA  /dev/sdb
[6:0:2:0]tapeHP   Ultrium 4-SCSI   U26W  /dev/st0
[6:0:2:1]mediumx HP   MSL G3 Series6.30  -   
[6:0:3:0]tapeHP   Ultrium 4-SCSI   U26W  /dev/st1
[7:0:0:0]diskAdaptec  Device 0 V1.0  /dev/sdc

and /dev/sg3 is my changer 

krait:~ # mtx -f /dev/sg3 status
  Storage Changer /dev/sg3:2 Drives, 48 Slots ( 0 Import/Export )
Data Transfer Element 0:Full (Storage Element 3 Loaded):VolumeTag = BR0728L4

Data Transfer Element 1:Empty
  Storage Element 1:Full :VolumeTag=BR0729L4
  Storage Element 2:Full :VolumeTag=BR0725L4
  Storage Element 3:Empty
  Storage Element 4:Full :VolumeTag=BR0727L4
 etc
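
if it's not obvious which /dev/sg node the mediumx line maps to, lsscsi can print it directly; the output below is just illustrative:

krait:~ # lsscsi -g
[6:0:2:1]    mediumx HP       MSL G3 Series    6.30  -          /dev/sg3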

-- michael


 
 




Re: [Bacula-users] Tape handling with an autoloader

2009-02-24 Thread Michael Galloway
On Tue, Feb 24, 2009 at 07:23:31AM -0800, Christopher Dick wrote:
 I have an LTO-2 library with 20 slots and have a question about tape handling 
 in Bacula.  I have been running backups for about two weeks now, and I've 
 about filled my tapes.  I have two free slots yet, and then I am going to 
 have to check out tapes and refill the library.  The first fulls were quite 
 large and got minimal compression.
 
 In my library, the 20th slot is the mail slot, for checking in and out tapes. 
  My question is, does Bacula have a facility for doing this?  I put a new 
 tape in the mail slot, but I couldn't find any bat or bconsole command to 
 tell bacula to take that tape and label it according to the barcode and then 
 move it to an available slot.  I had to use command line mtx to actually do a 
 full re-inventory of the library and move the tape myself so that Bacula
 could say "oh, hey, that tape in that slot isn't in my db."  At that point, I
 could issue a "label barcodes" command.
 
 So, in short, does bacula have an internal facility for checking in and out 
 tapes from the library and I am just missing it?
 
 Thanks!
 
 Chris

i do this manually. in bconsole, i umount the drives, then at the library i export the tapes i want to remove and import new or recycled tapes. then in bconsole, i run update slots and label any new tapes if needed.
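
roughly, the bconsole side of that looks like this (storage and drive names are placeholders for whatever is in your config):

*unmount storage=Autochanger drive=0
*unmount storage=Autochanger drive=1
   (export/import the tapes at the library's mail slot or front panel)
*update slots storage=Autochanger
*label storage=Autochanger pool=Scratch barcodes
*mount storage=Autochanger drive=0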

-- michael


 
 
 
   
 



Re: [Bacula-users] Tape handling with an autoloader

2009-02-24 Thread Michael Galloway
On Tue, Feb 24, 2009 at 07:59:08AM -0800, Christopher Dick wrote:
 
 
 
 
 Thanks for the quick response!
 
 Just to ease my discomfort, how will Bacula deal with restoring
 from a volume that is no longer in the library?  Will it just prompt for the
 volume and wait until I have inserted it and moved it to an available slot?
 Does it get confused about that sort of thing?
 
 Thanks!
 
 Chris
 


if bacula wants a volume that's not in the library it will request that you mount it. in that case i simply do the same procedure, importing the volumes that bacula requests.

-- michael 



Re: [Bacula-users] Bacula autochanger uses only 1 drive on 2

2009-02-18 Thread Michael Galloway
On Wed, Feb 18, 2009 at 04:45:31PM +0100, Diego Roccia wrote:
 On Wed, 18 Feb 2009 09:56:16 -0500
 John Drescher dresche...@gmail.com wrote:
 
   Hi John,
Thanks for the reply. Obviously 1 backup uses 1 drive, I know this.
   The problem is that I run 16 backup jobs (4 of them run
   simultaneously and the other ones stay in queue), and all of them
   run on the same drive
  
  
  Are they all going to the same pool?
  
  John
 
 yes, same pool


and you have 

Prefer Mounted Volumes = no

in the job definition?
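
for reference, it goes in the Job resource, something like this (job name is a placeholder, other directives omitted):

Job {
  Name = example-backup
  Prefer Mounted Volumes = no   # lets a queued job grab the idle second drive
}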

-- michael 



Re: [Bacula-users] Bacula autochanger uses only 1 drive on 2

2009-02-18 Thread Michael Galloway
On Wed, Feb 18, 2009 at 03:20:51PM +0100, blues...@bluesman.it wrote:
 
 Hi All,
   I'm having a non-critical problem with Bacula in my production
   environment. I have 2 DELL TL-4000 libraries with 2 drives each.
   For each library on the Storage daemon I defined the 2 resources
   for the drives and 1 for the changer:
 
 --
 Autochanger {
   Name = DELL-TL4000
   Device = DELL-TL4000-Drive-1 , DELL-TL4000-Drive-2
   Changer Command = /etc/bacula/scripts/mtx-changer %c %o %S %a %d
   Changer Device = /dev/dell-tape
 }
 
 Device {
   Name = DELL-TL4000-Drive-1  #
   Drive Index = 0
   Media Type = LTO-4
   Archive Device = /dev/dell-tape-drive-n0 
   AutomaticMount = yes;
   AlwaysOpen = yes;
   LabelMedia = yes;
   RandomAccess = no;
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Autochanger=yes
 }
 
 Device {
   Name = DELL-TL4000-Drive-2
   Drive Index = 1
   Media Type = LTO-4
   Archive Device = /dev/dell-tape-drive-n1
   AutomaticMount = yes;
   AlwaysOpen = yes;
   LabelMedia = yes;
   RandomAccess = no;
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Autochanger=yes
 }
 --
 
 On the director I configured only the changer resource:
 
 --
 Storage {
   Name = DELL-TL4000
   Address = 10.1.5.109
   SDPort = 9103
   Password = *
   Device = DELL-TL4000-Drive-1 , DELL-TL4000-Drive-2 
   Media Type = LTO-4
   Autochanger = yes
   Maximum Concurrent Jobs = 4 
 }

maybe try something like this; this is how i have storage configured in my director config:

# Definition of DDS tape storage device
Storage {
  Name = HP4048
  SDPort = 9103
  Media Type = LTO-4        # must be same as MediaType in Storage daemon
  Autochanger = yes         # enable for autochanger device
  Device = LTO4-1
  Maximum Concurrent Jobs = 4
}

Storage {
  Name = LTO4-1
  Address = krait           # N.B. Use a fully qualified name here
  SDPort = 9103
  Media Type = LTO-4        # must be same as MediaType in Storage daemon
  Device = LTO4-1
  Autochanger = yes         # enable for autochanger device
  Maximum Concurrent Jobs = 2
}
Storage {
  Name = LTO4-2
  Address = krait           # N.B. Use a fully qualified name here
  SDPort = 9103
  Media Type = LTO-4        # must be same as MediaType in Storage daemon
  Device = LTO4-2
  Autochanger = yes         # enable for autochanger device
  Maximum Concurrent Jobs = 2
}





[Bacula-users] windows client auth problems

2009-02-07 Thread Michael Galloway
i'm having a bit of what i imagine is an authentication issue to a windows client that is behind a firewall.
i am doing backups of my linux clients in the same network segment, for what that's worth.
my config on the bacula server for the client is:

Job {
  Name = seahorse
  Client = seahorse-fd
  Type = Backup
  Level = Incremental
  FileSet = seahorsefileset
  Schedule = standardsched
  Storage = HP4048
  Messages = Standard
  Pool = Full
  SpoolData = Yes
  Priority = 10
  Write Bootstrap = /bacula/bin/working/seahorse.bsr
}

Client {
  Name = seahorse-fd
  Address = seahorse.ornl.gov
  FDPort = 9102
  Catalog = MyCatalog
  Password = U4r..Nzv+  # password for FileDaemon
  File Retention = 30 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
}

and the fd config on the client:

#
# Default  Bacula File Daemon Configuration file
#
#  For Bacula release 2.4.4 (01/03/09) -- Windows MVS
#
# There is not much to change here except perhaps the
# File daemon Name
#

#
# Global File daemon configuration specifications
#

FileDaemon {                   # this is me
  Name = seahorse-fd
  FDport = 9102                # where we listen for the director
  WorkingDirectory = "C:\\Documents and Settings\\All Users\\Application Data\\Bacula\\Work"
  Pid Directory = "C:\\Documents and Settings\\All Users\\Application Data\\Bacula\\Work"
  Maximum Concurrent Jobs = 2
  Heartbeat Interval = 15
}

#
# List Directors who are permitted to contact this File daemon
#

Director {
  Name = krait
  Password = U4r.Nzv+
}



#
# Restricted Director, used by tray-monitor to get the
#   status of the file daemon
#
Director {
  Name = krait-mon
  Password = LHM..mlHQ
  Monitor = yes
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = krait-dir = all, !skipped, !restored
}

i've restarted the client on the windows system a couple of times and the client is running. telnet to the client gives some response:

krait:/bacula/bin # telnet seahorse  9102
Trying 128.xxx.xxx.xxx...
Connected to seahorse.
Escape character is '^]'.

so i think the client is at least answering on that port. i cannot see anything relevant in the event log. what am i missing here? this is my first attempt at windows clients, and i'm not really a windows person, so i'm unsure what to look for.

-- michael





Re: [Bacula-users] windows client auth problems

2009-02-07 Thread Michael Galloway
ok, upon rereading (for the nth time) the config on the client, i found the issue.

-- michael

On Sat, Feb 07, 2009 at 01:42:38PM -0500, Michael Galloway wrote:
 i'm having a bit of what i imagine is an authentication issue to a windows client that is behind a firewall.
 i am doing backups of my linux clients in the same network segment, for what that's worth.
 my config on the bacula server for the client is:
 
 Job {
   Name = seahorse
   Client = seahorse-fd
   Type = Backup
   Level = Incremental
   FileSet = seahorsefileset
   Schedule = standardsched
   Storage = HP4048
   Messages = Standard
   Pool = Full
   SpoolData = Yes
   Priority = 10
   Write Bootstrap = /bacula/bin/working/seahorse.bsr
 }
 
 Client {
   Name = seahorse-fd
   Address = seahorse.ornl.gov
   FDPort = 9102
   Catalog = MyCatalog
   Password = U4r..Nzv+  # password for FileDaemon
   File Retention = 30 days# 30 days
   Job Retention = 6 months# six months
   AutoPrune = yes # Prune expired Jobs/Files
 }
 
 and the fd config on the client:
 
 #
 # Default  Bacula File Daemon Configuration file
 #
 #  For Bacula release 2.4.4 (01/03/09) -- Windows MVS
 #
 # There is not much to change here except perhaps the
 # File daemon Name
 #
 
 #
 # Global File daemon configuration specifications
 #
 
 FileDaemon {# this is me
   Name = seahorse-fd
   FDport = 9102# where we listen for the director
   WorkingDirectory = C:\\Documents and Settings\\All Users\\Application 
 Data\\Bacula\\Work
   Pid Directory = C:\\Documents and Settings\\All Users\\Application 
 Data\\Bacula\\Work
   Maximum Concurrent Jobs = 2
   Heartbeat Interval = 15
 }
 
 #
 # List Directors who are permitted to contact this File daemon
 #
 
 Director {
   Name = krait
   Password = U4r.Nzv+
 }
 
 
 
 #
 # Restricted Director, used by tray-monitor to get the
 #   status of the file daemon
 #
 Director {
   Name = krait-mon
   Password = LHM..mlHQ
   Monitor = yes
 }
 
 # Send all messages except skipped files back to Director
 Messages {
   Name = Standard
   director = krait-dir = all, !skipped, !restored
 }
 
 i've restarted the client on the windows system a couple of times and the 
 client is running. telnet to the client
 gives some response:
 
 krait:/bacula/bin # telnet seahorse  9102
 Trying 128.xxx.xxx.xxx...
 Connected to seahorse.
 Escape character is '^]'.
 
 so i think the client is at least answering on that port. i cannot see 
 anything relevant in the event log. 
 what am i missing here? this is my first attempt at windows clients, and i'm 
 not really a windows person,
 so i'm unsure what to look for.
 
 -- michael
 
 
 



Re: [Bacula-users] windows client auth problems

2009-02-07 Thread Michael Galloway
On Sat, Feb 07, 2009 at 11:32:15AM -0800, Kevin Keane wrote:
 So, what was the issue, if you don't mind? I'm having a similar one 
 (probably not the same one, though), and am looking for inspiration ;-)

issue was this:
 
Director {
Name = krait
Password = U4r.Nzv+
}

should have been:

Director {
Name = krait-dir
Password = U4r.Nzv+
}
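
for anyone hitting the same thing: the Name in the client's Director{} block has to match the Name of the Director resource in bacula-dir.conf on the server (and the passwords have to match too), which in this setup is presumably something like:

Director {
  Name = krait-dir      # must equal the Name in the server's Director resource, not just the hostname
}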



Re: [Bacula-users] why can t running multiples jobs ?

2009-02-06 Thread Michael Galloway
On Fri, Feb 06, 2009 at 09:57:08PM +0100, gui...@free.fr wrote:
 hello
 
 i have the following conf:
 
 Device status:
 Autochanger 136T with devices:
Drive-0 (/dev/st0)
Drive-1 (/dev/st1)
Drive-2 (/dev/st2)
 Device Drive-0 (/dev/st0) is not open.
 Drive 0 status unknown.
 Device Drive-1 (/dev/st1) is not open.
 Drive 1 status unknown.
 Device Drive-2 (/dev/st2) is mounted with:
 Volume:  LT1008L3
 Pool:Default
 Media type:  LTO-3
 Slot 39 is loaded in drive 2.
 Total Bytes=23,531,268,096 Blocks=364,757 Bytes/block=64,512
 Positioned at File=36 Block=11,641
 
 3 drives, in an autoloader.
 And always... only one drive working.
 Not 3 at the same time (with 3 different jobs...). Why?
 Where is the mistake?
 
 regards


how do you have concurrency configured?
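
for what it's worth, to get three drives going at once you generally need Maximum Concurrent Jobs raised in several places; a rough sketch with only the relevant directives shown (resource names are placeholders):

# bacula-dir.conf
Director { Maximum Concurrent Jobs = 10 }
Storage  { Name = Autochanger-136T; Maximum Concurrent Jobs = 3 }
Job      { Maximum Concurrent Jobs = 3 }

# bacula-sd.conf
Storage  { Maximum Concurrent Jobs = 10 }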

-- michael
 



Re: [Bacula-users] why can t running multiples jobs ?

2009-02-06 Thread Michael Galloway
On Fri, Feb 06, 2009 at 04:11:11PM -0500, John Drescher wrote:
 On Fri, Feb 6, 2009 at 3:57 PM,  gui...@free.fr wrote:
  hello
 
  i ve the following conf :
 
  Device status:
  Autochanger 136T with devices:
Drive-0 (/dev/st0)
Drive-1 (/dev/st1)
Drive-2 (/dev/st2)
  Device Drive-0 (/dev/st0) is not open.
 Drive 0 status unknown.
  Device Drive-1 (/dev/st1) is not open.
 Drive 1 status unknown.
  Device Drive-2 (/dev/st2) is mounted with:
 Volume:  LT1008L3
 Pool:Default
 Media type:  LTO-3
 Slot 39 is loaded in drive 2.
 Total Bytes=23,531,268,096 Blocks=364,757 Bytes/block=64,512
 Positioned at File=36 Block=11,641
 
  3 drives , in an autoloader .
  And , always ... only one , drive working .
  not 3 as the same time  (with 3 differents jobs ... ) why ?
  where is the mistake ?
 
  regards
 
 
 Probably because your jobs all go to the same pool and bacula already
 has the pool loaded in one drive so it does not think it needs to find
 more volumes of the same pool and use them on different jobs in
 different drives.


then in that case Prefer Mounted Volumes = no will perhaps help?

-- michael
 



[Bacula-users] clients on firewalled network segment

2009-01-25 Thread Michael Galloway
good day all, 

i'm moving a few clients to a network segment that's behind a firewall from my bacula servers. does the firewall need more than port 9102 open to the clients' FDs?

-- michael



Re: [Bacula-users] clients on firewalled network segment

2009-01-25 Thread Michael Galloway
On Sun, Jan 25, 2009 at 04:38:03PM +0100, Bruno Friedmann wrote:
 Michael Galloway wrote:
  good day all, 
  
  i'm moving a few clients to a network segment thats behind a firewall from 
  my bacula servers. does the firewall need more than port 9102 FD's for the 
  clients?
  
  -- michael
  
 
 Clients should contact the sd and dir should contact fd.
 
 You could also take time and read the
 http://www.bacula.org/en/dev-manual/Dealing_with_Firewalls.html
 
 Or the long list of pages talking about firewall  bacula
 http://www.google.com/search?q=firewallsa=Searchdomains=www.bacula.orgsitesearch=www.bacula.org
 


yes, of course, thank you for the references, i should have looked in the manual. sorry for the inconvenience.
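
for the archives, with the default ports it boils down to:

  console -> dir   tcp/9101
  dir     -> fd    tcp/9102   (director contacts each client)
  fd      -> sd    tcp/9103   (client sends its data to the storage daemon)

so clients behind the firewall need 9102 allowed in to them, plus 9103 allowed out from them to the storage daemon.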

-- michael 



Re: [Bacula-users] Temporary Files?

2009-01-25 Thread Michael Galloway
On Sun, Jan 25, 2009 at 04:11:58PM +, Ricardo Duarte wrote:
 
 Hi.
  
 Does Bacula store a temporary file that then is written to tape, or is the 
 data from FD copied directly to tape?
  
 Thanks.

ricardo, my understanding is that bacula only writes to a temporary file if you have data spooling enabled (assuming you are not backing up to file volumes); otherwise the data from the FD goes straight to tape.
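
if you do want spooling, it's roughly this (directory and size are placeholders):

# in the Job resource (bacula-dir.conf)
SpoolData = yes

# in the tape Device resource (bacula-sd.conf)
Spool Directory = /var/bacula/spool
Maximum Spool Size = 100 GB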

-- michael



Re: [Bacula-users] How do I back up the DB server that hosts my bacula DB?

2009-01-19 Thread Michael Galloway
On Mon, Jan 19, 2009 at 06:28:35PM -0500, Ed Barrett wrote:
 Sorry for the obtuse question, but how do I back up the postgresql
 server that my bacula database is stored on?  I'm planning on stopping
 the database services on all of the servers that we have so that backups
 are as clean as possible, but that would make my bacula DB unavailable
 during the time that bacula is working on that DB server.  Is this an
 issue?  I'm looking at section 21.12 of the pdf manual for bacula 2.4.4,
 but it doesn't seem to answer my question.  
 
 Thanks,
 Ed Barrett


ed, this was discussed in a recent thread; here is the link:

http://www.nabble.com/Bacula-for-backing-up-postgres-td21268085.html#a21268264

hope that helps some.
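
the short version of what that thread suggests: dump the catalog to a file before the job runs and back the dump up instead of the live database. the stock setup ships helper scripts along these lines (a sketch only; the script path follows the install prefix used elsewhere in this thread, adjust to yours, and Client/Storage/Pool/Schedule/Messages are omitted):

Job {
  Name = BackupCatalog
  Level = Full
  FileSet = Catalog                # a FileSet that includes the dump file
  RunBeforeJob = "/share/bacula/bin/make_catalog_backup bacula bacula"
  RunAfterJob  = "/share/bacula/bin/delete_catalog_backup"
  Write Bootstrap = "/share/bacula/bin/working/BackupCatalog.bsr"
}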

-- michael




Re: [Bacula-users] Bacula giving slow speed

2009-01-14 Thread Michael Galloway
On Wed, Jan 14, 2009 at 10:53:13AM -0500, John Drescher wrote:
  3. There is no database backup as such. What we do is just take the
  system level full backup.
 
 
  I believe he was suggesting that the Bacula catalog database might be stored
  on the same file system that is being backed up. Since Bacula must write to
  its catalog database frequently during a backup, that would cause disk
  thrashing and greatly affect performance.
 
 
 Exactly.


so, would it be considered a 'best practice' to have the catalog database server on a separate machine than the bacula server?

-- michael
 



Re: [Bacula-users] scsi help

2009-01-10 Thread Michael Galloway
On Sat, Jan 10, 2009 at 12:53:59PM +0200, Timo Neuvonen wrote:
 
 If the library was replaced with another one (factory replacement, as above),
 is there any chance that the scsi addresses (14 vs 15) in the drive
 configuration were swapped? It's also possible you (or someone else at your
 site) had done that years ago to fix this kind of inconsistency in the old 
 unit, so there doesn't have to be anything wrong with the new, replaced 
 unit.
 
 I don't know this specific hardware, so I don't know if the addresses are set
 with jumpers/switches, or through some more modern configuration system. But
 for me it just looks like this could be fixed with swapping the address 
 settings of the two drives.
 
 
 --
 TiN 

Yes! following your suggestion, i started digging around in the library configuration, and buried in the library's partition panels were the scsi id settings; it looks 'sane' now. thanks!

[r...@molbio ~]# lsscsi
[0:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sda
[1:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sdb
[4:0:0:0]diskAMCC 9650SE-16M DISK  3.08  /dev/sdc
[5:0:14:0]   tapeIBM  ULTRIUM-TD4  89B8  /dev/st0
[5:0:14:1]   mediumx SPECTRA  PYTHON   2000  -   
[5:0:15:0]   tapeIBM  ULTRIUM-TD4  89B8  /dev/st1

-- michael



Re: [Bacula-users] scsi help

2009-01-09 Thread Michael Galloway
On Fri, Jan 09, 2009 at 05:51:26PM +1100, Glen Davison wrote:
 Michael Galloway m...@ornl.gov wrote on 09/01/2009 09:56:08 AM:
 
  any help appreciated.
  
  -- michael
 
 I had a similar situation awhile back.  I think my st0 corresponded to 
 nst1, st1 didn't have nst equiv.  Something like that.
 
 I don't know what O/S you're running there?  It's obviously different to 
 mine.


its centos5:
[r...@molbio ~]# cat /etc/redhat-release 
CentOS release 5 (Final)

[r...@molbio ~]# uname -a
Linux molbio.ornl.gov 2.6.18-8.1.14.el5 #1 SMP Thu Sep 27 19:05:32 EDT 2007 
x86_64 x86_64 x86_64 GNU/Linux

 
 You could try looking here:
 
 ls -l /dev/st[0-9] /dev/nst[0-9]
 
 Look at the major and minor numbers.


[r...@molbio ~]# ls -l /dev/st*
crw-rw 1 root disk 9,  0 Jan  9 07:28 /dev/st0
crw-rw 1 root disk 9, 96 Jan  9 07:28 /dev/st0a
crw-rw 1 root disk 9, 32 Jan  9 07:28 /dev/st0l
crw-rw 1 root disk 9, 64 Jan  9 07:28 /dev/st0m
crw-rw 1 root disk 9,  1 Jan  9 07:28 /dev/st1
crw-rw 1 root disk 9, 97 Jan  9 07:28 /dev/st1a
crw-rw 1 root disk 9, 33 Jan  9 07:28 /dev/st1l
crw-rw 1 root disk 9, 65 Jan  9 07:28 /dev/st1m
[r...@molbio ~]# ls -l /dev/nst*
crw-rw 1 root disk 9, 128 Jan  9 07:28 /dev/nst0
crw-rw 1 root disk 9, 224 Jan  9 07:28 /dev/nst0a
crw-rw 1 root disk 9, 160 Jan  9 07:28 /dev/nst0l
crw-rw 1 root disk 9, 192 Jan  9 07:28 /dev/nst0m
crw-rw 1 root disk 9, 129 Jan  9 07:28 /dev/nst1
crw-rw 1 root disk 9, 225 Jan  9 07:28 /dev/nst1a
crw-rw 1 root disk 9, 161 Jan  9 07:28 /dev/nst1l
crw-rw 1 root disk 9, 193 Jan  9 07:28 /dev/nst1m

st0 and nst0 have lower minor numbers.

 
 On linux, st? and nst? are created in order of discovery; while mtx is 
 talking about the library's internal concept of drive numbering.  Why they 
 are out of sync: various possibilities.  It could be that your SCSI buses 
 are wired up in an unusual way, although your lsscsi above looks OK.  Or 
 something in the O/S config...?


this same configuration worked as expected before i swapped out the library (it had been making backups for a year, then the library was replaced with a factory replacement).

-- michael
 
 
 Glen
 
 --
 Glen Davison  d...@sirca.org.au
 SIRCA Pty Ltd  Ph (02) 9236 9133





Re: [Bacula-users] Another strangeness on 2.4.4 - upgrading to FULL after FULL backup

2009-01-08 Thread Michael Galloway
On Thu, Jan 08, 2009 at 06:21:50AM -0500, John Drescher wrote:
 On Thu, Jan 8, 2009 at 6:08 AM, Frank Altpeter frank.altpe...@gmail.com 
 wrote:
  My bad, just detected it by myself... the FileSet has been modified
  and so I assume the Incremental backup has been upgraded to Full
  because the FileSet has been changed.
 
  But IMHO there should be a better notification for that, something
  like FileSet has been modified, upgrading to FULL backup.
 
 
 
 I think there is a way to have fileset changes not force a full. Check
 the docs if anyone does not reply with the instructions.
 
 John


yup, been there, done that :-), it's here:

http://www.bacula.org/en/rel-manual/Configuring_Director.html#SECTION00147

Ignore FileSet Changes = yes|no
Normally, if you modify the FileSet Include or Exclude lists, the next 
backup will be forced to a Full so that Bacula can guarantee that any additions 
or deletions are properly saved.

We strongly recommend against setting this directive to yes, since doing so 
may cause you to have an incomplete set of backups.

If this directive is set to yes, any changes you make to the FileSet 
Include or Exclude lists, will not force a Full during subsequent backups.

The default is no, in which case, if you change the Include or Exclude, 
Bacula will force a Full backup to ensure that everything is properly backed 
up. 
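
for completeness, the directive lives in the FileSet resource, e.g. (names and paths are placeholders, and note the warning above):

FileSet {
  Name = example-fs
  Ignore FileSet Changes = yes     # not recommended, per the manual text above
  Include {
    Options {
      signature = MD5
    }
    File = /home
  }
}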
 
 



[Bacula-users] scsi help

2009-01-08 Thread Michael Galloway
could i get a sanity check please? i've replaced my spectralogic T50 (it had some hardware issues) and i'm rebuilding the backup server. i'm working through the basic tape and changer tests. scsi looks like this:

[r...@molbio bin]# lsscsi
[0:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sda
[1:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sdb
[4:0:0:0]diskAMCC 9650SE-16M DISK  3.08  /dev/sdc
[5:0:14:0]   tapeIBM  ULTRIUM-TD4  89B8  /dev/st0
[5:0:15:0]   tapeIBM  ULTRIUM-TD4  89B8  /dev/st1
[5:0:15:1]   mediumx SPECTRA  PYTHON   2000  -   

mtx status looks as i would expect:

[r...@molbio bin]# mtx -f /dev/sg5 status
Storage Changer /dev/sg5:2 Drives, 30 Slots ( 0 Import/Export )
Data Transfer Element 0:Empty
Data Transfer Element 1:Empty
Storage Element 1:Full :VolumeTag=000290L4
Storage Element 2:Full :VolumeTag=012477L4 
etc 

however, when i use mtx to load a tape i get somewhat unexpected results:

[r...@molbio bin]# mtx -f /dev/sg5 load 1 0

should load slot 1 into /dev/st0, i think, but what i get is this:

[r...@molbio bin]# mt -f /dev/nst0 status
SCSI 2 tape drive:
File number=-1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x0 (default).
Soft error count since last status=0
General status bits on (5):
 DR_OPEN IM_REP_EN
[r...@molbio bin]# mt -f /dev/nst1 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x46 (no translation).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN

and if i add:
[r...@molbio bin]# mtx -f /dev/sg5 load 2 1
[r...@molbio bin]# mt -f /dev/nst0 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x46 (no translation).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN
[r...@molbio bin]# mt -f /dev/nst1 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x46 (no translation).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN

now both are loaded.

it seems the drives are reversed somehow. i don't really understand. 

any help appreciated.

-- michael



Re: [Bacula-users] Bacula for backing up postgres

2009-01-03 Thread Michael Galloway
On Sat, Jan 03, 2009 at 01:56:49PM -0500, Eduardo J. Ortega U. wrote:
 Hi, all:
 
 I am considering bacula for backing up a server whose primary role is
 PostgreSQL. My question is, is bacula appropriate for taking online
 postgres backups (considering that a filesystem backup when the
 database is running is a bad solution)?
 
 Thanks,


it can be as simple as backing up postgres dump files, but this may be of help:

http://wiki.bacula.org/doku.php?id=application_specific_backups:postgresql
 
 -- 
 Eduardo J. Ortega U.
 



Re: [Bacula-users] enable batch inserts 2.4.4-b2

2009-01-01 Thread Michael Galloway
On Wed, Dec 31, 2008 at 09:19:09PM -0500, Dan Langille wrote:
 Michael Galloway wrote:
  On Wed, Dec 31, 2008 at 09:50:14AM -0500, Michael Galloway wrote:
 
  am i doing something incorrect?
 
  -- michael
 
  
  ok, looks like configure simply fails to consider shared libraries for
  postgres. is this a bug or an issue?
  i'd installed postgres from rpm. i can build from source statically if i
  need to.
 
 Known and documented issue: 
 http://www.bacula.org/en/rel-manual/Installi_Configur_PostgreS.html
 
 Well, so I think.  Do you mean shared libraries?  Or do you mean the
 --enable-thread-safety option when doing the ./configure for PostgreSQL?


no, i mean the configure script only considers a statically linked libpq (libpq.a) and not one that is dynamically linked (libpq.so):

(from the script:)

  SQL_INCLUDE=-I$POSTGRESQL_INCDIR
  SQL_LFLAGS=$POSTGRESQL_LFLAGS
  SQL_BINDIR=$POSTGRESQL_BINDIR
  SQL_LIB=$POSTGRESQL_LIBDIR/libpq.a

in my SLES rpm-based postgres install i get only dynamically linked libs.
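
a blunt, untested workaround sketch would be to point the script at the shared lib before running configure (no guarantee the batch-insert thread-safety check is then satisfied):

sed -i 's|libpq\.a|libpq.so|g' configure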

-- michael 



[Bacula-users] enable batch inserts 2.4.4-b2

2008-12-31 Thread Michael Galloway
good day all, 

i'm working on a build of 2.4.4-b2 using postgresql, and i'm trying to enable batch inserts:

  $ ./configure --sbindir=/share/bacula/bin --sysconfdir=/share/bacula/bin \
      --with-pid-dir=/share/bacula/bin/working --with-subsys-dir=/share/bacula/bin/working \
      --enable-batch-insert --enable-smartalloc --with-postgresql --with-readline \
      --with-python --enable-bat --with-working-dir=/share/bacula/bin/working \
      --with-dump-email=...@ornl.gov --with-job-email=...@ornl.gov --with-smtp-host=smtp.ornl.gov

but when configure finishes, it reports:

Configuration on Wed Dec 31 09:24:46 EST 2008:

  Host:   x86_64-unknown-linux-gnu -- suse 10
  Bacula version: 2.4.4-b2 (24 December 2008)
  Source code location:   .
  Install binaries:   /share/bacula/bin
  Install config files:   /share/bacula/bin
  Scripts directory:  /share/bacula/bin
  Archive directory:  
  Working directory:  /share/bacula/bin/working
  PID directory:  /share/bacula/bin/working
  Subsys directory:   /share/bacula/bin/working
  Man directory:  ${datarootdir}/man
  Data directory: ${prefix}/share
  C Compiler: gcc 4.1.2
  C++ Compiler:   /usr/bin/g++ 4.1.2
  Compiler flags:  -g -Wall -fno-strict-aliasing -fno-exceptions 
-fno-rtti
  Linker flags:
  Libraries:  -lpthread -ldl 
  Statically Linked Tools:no
  Statically Linked FD:   no
  Statically Linked SD:   no
  Statically Linked DIR:  no
  Statically Linked CONS: no
  Database type:  PostgreSQL
  Database lib:   -L/usr/lib64 -lpq -lcrypt
  Database name:  bacula
  Database user:  bacula

  Job Output Email:   m...@ornl.gov
  Traceback Email:m...@ornl.gov
  SMTP Host Address:  smtp.ornl.gov

  Director Port:  9101
  File daemon Port:   9102
  Storage daemon Port:9103

  Director User:  
  Director Group: 
  Storage Daemon User:
  Storage DaemonGroup:
  File Daemon User:   
  File Daemon Group:  

  SQL binaries Directory  /usr/bin

  Large file support: yes
  Bacula conio support:   yes -lncurses
  readline support:   no 
  TCP Wrappers support:   no 
  TLS support:no
  Encryption support: no
  ZLIB support:   yes
  enable-smartalloc:  yes
  bat support:yes 
  enable-gnome:   no 
  enable-bwx-console: no 
  enable-tray-monitor:
  client-only:no
  build-dird: yes
  build-stored:   yes
  ACL support:no
  Python support: yes -L/usr/lib64/python2.4/config -lpython2.4 
-lutil -lrt 
  Batch insert enabled:   no

i think my postgres is ok,

 nm /usr/lib64/libpq.so | grep pthread_mutex_lock
 U pthread_mutex_lock@@GLIBC_2.2.5

am i doing something incorrect?

-- michael

i must have missed 



[Bacula-users] cancel all jobs from bconsole?

2008-12-29 Thread Michael Galloway
is there anyway to cancel all jobs from bconsole in bacula 2.4.3?

-- michael



[Bacula-users] pool/tape problems

2008-12-05 Thread Michael Galloway
good day all, looks like bacula has repeated this issue for me. it needed a tape for the catalog backup, which goes into the default pool; it did not like any of the volumes in the default pool, pulled in four empty tapes from the scratch pool, and did not find any of them acceptable either. the media list is now:

Pool: Default
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| mediaid | volumename | volstatus | enabled | volbytes          | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | 002048L4   | Append    |       1 |   393,860,146,176 |      449 |   31,536,000 |       1 |    2 |         0 | LTO4      | 2008-03-14 17:36:03 |
|      29 | 012476L4   | Append    |       1 |   131,792,467,968 |      140 |   31,536,000 |       1 |   29 |         0 | LTO4      | 2008-04-03 23:33:41 |
|      45 | 020784L4   | Append    |       1 | 1,276,820,858,880 |    1,353 |   31,536,000 |       1 |    1 |         1 | LTO4      | 2008-12-02 01:02:51 |
|      49 | 020780L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   14 |         1 | LTO4      |                     |
|      50 | 020742L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   15 |         1 | LTO4      |                     |
|      51 | 023304L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   21 |         0 | LTO4      |                     |
|      52 | 023303L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   22 |         0 | LTO4      |                     |
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
Pool: Scratch
No results to list.

yet it still leaves the backup pending with:

05-Dec 06:14 molbio-sd JobId 1989: Job BackupCatalog.2008-12-04_23.10.13 waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
    Storage:      "LTO4" (/dev/nst1)
    Pool:         Default
    Media type:   LTO4

the default pool definition looks like this. i've not had any problem with it until after my upgrade to 2.4.3.
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes   # Bacula can automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Volume Retention = 365 days # one year
}

-- michael

bacula is:

./bconsole 
Connecting to Director molbio:9101
1000 OK: molbio-dir Version: 2.4.3 (10 October 2008)
Enter a period to cancel a command.




[Bacula-users] move tapes back to scratch pool

2008-12-01 Thread Michael Galloway
happy holidays all!

bacula moved several tapes from the scratch pool into the default pool this morning after i swapped out some tapes in the library:

*list media
Pool: Default
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| mediaid | volumename | volstatus | enabled | volbytes          | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       1 | 002048L4   | Append    |       1 |   393,860,146,176 |      449 |   31,536,000 |       1 |    2 |         0 | LTO4      | 2008-03-14 17:36:03 |
|      29 | 012476L4   | Append    |       1 |   131,792,467,968 |      140 |   31,536,000 |       1 |   29 |         0 | LTO4      | 2008-04-03 23:33:41 |
|      45 | 020784L4   | Append    |       1 | 1,272,702,154,752 |    1,348 |   31,536,000 |       1 |    1 |         1 | LTO4      | 2008-12-01 12:25:03 |
|      49 | 020780L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   14 |         1 | LTO4      |                     |
|      50 | 020742L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   15 |         1 | LTO4      |                     |
|      51 | 023304L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   21 |         0 | LTO4      |                     |
|      52 | 023303L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   22 |         0 | LTO4      |                     |
+---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+

how can i move those tapes back into the scratch pool? update volume status? 
thanks!

-- michael

bacula version:

Connecting to Director molbio:9101
1000 OK: molbio-dir Version: 2.4.3 (10 October 2008)




Re: [Bacula-users] move tapes back to scratch pool

2008-12-01 Thread Michael Galloway
On Mon, Dec 01, 2008 at 02:26:22PM -0500, John Drescher wrote:
 On Mon, Dec 1, 2008 at 2:15 PM, Michael Galloway [EMAIL PROTECTED] wrote:
  happy holidays all!
 
  bacula moved several tapes from scratch pool into the default pool this 
  morninig
  after i swapped out some tapes in the library:
 
  *list media
  Pool: Default
  +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
  | mediaid | volumename | volstatus | enabled | volbytes          | volfiles | volretention | recycle | slot | inchanger | mediatype | lastwritten         |
  +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
  |       1 | 002048L4   | Append    |       1 |   393,860,146,176 |      449 |   31,536,000 |       1 |    2 |         0 | LTO4      | 2008-03-14 17:36:03 |
  |      29 | 012476L4   | Append    |       1 |   131,792,467,968 |      140 |   31,536,000 |       1 |   29 |         0 | LTO4      | 2008-04-03 23:33:41 |
  |      45 | 020784L4   | Append    |       1 | 1,272,702,154,752 |    1,348 |   31,536,000 |       1 |    1 |         1 | LTO4      | 2008-12-01 12:25:03 |
  |      49 | 020780L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   14 |         1 | LTO4      |                     |
  |      50 | 020742L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   15 |         1 | LTO4      |                     |
  |      51 | 023304L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   21 |         0 | LTO4      |                     |
  |      52 | 023303L4   | Append    |       1 |            64,512 |        0 |   31,536,000 |       1 |   22 |         0 | LTO4      |                     |
  +---------+------------+-----------+---------+-------------------+----------+--------------+---------+------+-----------+-----------+---------------------+
 
  how can i move those tapes back into the scratch pool? update volume 
  status? thanks!
 
 
 update volume pool
 
 John


yup, great! thanks john!
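
for the archives, the one-liner ends up looking something like this (volume name taken from the listing above):

*update volume=020780L4 pool=Scratch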

-- michael
 



Re: [Bacula-users] Large maildir backup

2008-11-30 Thread Michael Galloway
On Thu, Nov 27, 2008 at 04:03:50PM +0100, Daniel Betz wrote:
 Hi!
 
 I have the same problem with a large amount of files on one filesystem
 (Maildir).
 Now I have 2 concurrent jobs running and the backups need half
 the time.
 I haven't tested 4 concurrent jobs yet .. :-)
 
 
 Greetings,


would you mind posting what your config is for concurrency? i'm in the same boat here, i have several filesystems with more than 10 million files per filesystem.

-- michael
 



[Bacula-users] postgres 8.2.5 issue

2008-10-31 Thread Michael Galloway
hmmm, good day all, my catalog backup last night failed with this error:

31-Oct 09:04 molbio-dir JobId 1820: BeforeJob: pg_dump: SQL command failed
31-Oct 09:04 molbio-dir JobId 1820: BeforeJob: pg_dump: Error message from server: ERROR:  could not access status of transaction 1109420146
31-Oct 09:04 molbio-dir JobId 1820: BeforeJob: DETAIL:  Could not open file "pg_clog/0422": No such file or directory.
31-Oct 09:04 molbio-dir JobId 1820: BeforeJob: pg_dump: The command was: COPY public.file (fileid, fileindex, jobid, pathid, filenameid, markid, lstat, md5) TO stdout;
31-Oct 09:04 molbio-dir JobId 1820: Error: Runscript: BeforeJob returned non-zero status=1. ERR=Child exited with code 1

this is bacula 2.2.6 and postgres 8.2.5.

there was a warning on the raid controller that holds the postgres data space about a completed sector repair. i guess this implies that the pg tables are somehow corrupt. any hints on how to correct this?

-- michael



[Bacula-users] restoring corrupt db

2008-10-31 Thread Michael Galloway
ok, my bacula db (postgres) is corrupt due to a disk problem on the raid array it resides on.
i have a complete database dump (from pg_dumpall) from last wednesday that i plan to drop back to.
my backup environment is bacula 2.2.x and postgres 8.2.5. my plan is to install current bacula 2.4.3
and postgres 8.3.2, initialize the db, make bacula tables for 2.4.3 and then import the postgres dump
from last wednesday. i realize that the file counts on the tapes will be incorrect, and i will have to
update the volume info, file count, etc.

is this a reasonable course of action or am i asking for trouble?
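
for what it's worth, a rough sketch of that plan (paths and the dump file name are placeholders; check the release notes in case the catalog schema changed between 2.2.x and 2.4.3):

# fresh 8.3 cluster, then pull the old dump back in
initdb -D /var/lib/pgsql/data
psql -U postgres -f pg_dumpall-wednesday.sql postgres

# if the schema did change, the source tree ships an update script
cd bacula-2.4.3/src/cats && ./update_postgresql_tables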

-- michael



Re: [Bacula-users] Installing bacula-fd on Mac OS/X

2008-02-15 Thread Michael Galloway
On Fri, Feb 15, 2008 at 09:21:12PM -0500, Dan Langille wrote:
 
 
 ports seems to have 2.2.6, not sure what fink has. download it and
 build it i guess :-)
 
 
 I have macports installed... where do I go from here?  Total lack of  
 knowledge here...
 


sudo port selfupdate
sudo port search bacula
sudo port install bacula

then a bunch of stuff to get it to launch at boot. let me look that up ...

-- michael 



Re: [Bacula-users] Installing bacula-fd on Mac OS/X

2008-02-15 Thread Michael Galloway
On Fri, Feb 15, 2008 at 09:30:40PM -0500, Michael Galloway wrote:
 
 sudo port selfupdate
 sudo port search bacula
 sudo port install bacula
 
 then a bunch of stuff to get it to launch at boot. let me look that up ...
 
 -- michael 


hah, it even says how to do it:

sudo port install bacula
---  Fetching cdrtools
---  Attempting to fetch cdrtools-2.01.01a37.tar.bz2 from

.

---  Attempting to fetch bacula-2.2.6.tar.gz from
http://downloads.sourceforge.net/bacula
---  Verifying checksum(s) for bacula
---  Extracting bacula
---  Configuring bacula
---  Building bacula with target all
---  Staging bacula into destroot
---  Creating launchd control script
###
# A startup item has been generated that will aid in
# starting bacula with launchd. It is disabled
# by default. Execute the following command to start it,
# and to cause it to launch at startup:
#
# sudo launchctl load -w /Library/LaunchDaemons/org.macports.bacula.plist
###
---  Installing bacula 2.2.6_0
---  Activating bacula 2.2.6_0
---  Cleaning bacula

cool ... 



Re: [Bacula-users] Installing bacula-fd on Mac OS/X

2008-02-15 Thread Michael Galloway
On Fri, Feb 15, 2008 at 08:51:39PM -0500, Dan Langille wrote:
 I have a Macbook running Tiger...
 
 How do I get a bacula-fd 2.2.8 on there?
 
 -- 
 Dan Langille -- http://www.langille.org/
 [EMAIL PROTECTED]
 


hmmm ...

ports seems to have 2.2.6; not sure what fink has. download it and build it, i guess :-)

-- michael

 



Re: [Bacula-users] help simplifying

2008-02-07 Thread Michael Galloway
On Thu, Feb 07, 2008 at 09:59:38PM +0100, Arno Lehmann wrote:
 
 Just issue 'restore' in bconsole and see what happens!
 


indeed, and i do one now and then just to make sure things are working the way i expect.

-- michael 



Re: [Bacula-users] help with autochanger/barcodes

2008-02-01 Thread Michael Galloway
what i do is put my tapes in the scratch pool, then let bacula assign them
to my working pools:

label storage=MYSTORAGEDEVICE slots=STARTSLOTNUMBER-ENDSLOTNUMBER pool=Scratch 
barcodes
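
and in bacula-dir.conf the working pools can send recycled volumes back to
Scratch, something like this (a sketch from memory, names are examples;
check the Pool resource docs):

Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months
  RecyclePool = Scratch      # volumes move back to Scratch when recycled
}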

-- michael

On Fri, Feb 01, 2008 at 01:54:17PM -0500, Robin Blanchard wrote:
 Trying to label the 20 tapes with their corresponding barcodesIs the
 problem with my device (sd) configuration or is this stemming from the
 mtx-changer script ?
 
 *label barcodes
 Automatically selected Catalog: MyCatalog
 Using Catalog MyCatalog
 Automatically selected Storage: Drive-1
 Connecting to Storage daemon Drive-1 at lewis.itos.uga.edu:9103 ...
 Connecting to Storage daemon Drive-1 at lewis.itos.uga.edu:9103 ...
 3306 Issuing autochanger slots command.
 Device Drive-1 has 0 slots.
 No slots in changer to scan.
 
 # mtx -f /dev/pass0 status
 
   Storage Changer /dev/pass0:2 Drives, 20 Slots ( 0 Import/Export )
 Data Transfer Element 0:Empty
 Data Transfer Element 1:Empty
   Storage Element 1:Full :VolumeTag=B00023L3
   Storage Element 2:Full :VolumeTag=B00039L3
   Storage Element 3:Full :VolumeTag=B00029L3
   Storage Element 4:Full :VolumeTag=B00025L3
   Storage Element 5:Full :VolumeTag=B00028L3
   Storage Element 6:Full :VolumeTag=B00022L3
   Storage Element 7:Full :VolumeTag=B00026L3
   Storage Element 8:Full :VolumeTag=B00031L3
   Storage Element 9:Full :VolumeTag=B00030L3
   Storage Element 10:Full :VolumeTag=B00037L3
 
   Storage Element 11:Full :VolumeTag=B00034L3
 
   Storage Element 12:Full :VolumeTag=B00035L3
 
   Storage Element 13:Full :VolumeTag=B00036L3
 
   Storage Element 14:Full :VolumeTag=B00038L3
 
   Storage Element 15:Full :VolumeTag=B00032L3
 
   Storage Element 16:Full :VolumeTag=B00024L3
 
   Storage Element 17:Full :VolumeTag=B00021L3
 
   Storage Element 18:Full :VolumeTag=B00020L3
 
   Storage Element 19:Full :VolumeTag=B00027L3
 
   Storage Element 20:Full :VolumeTag=B00033L3
 
 
 Device {
   Name = Drive-1
   Drive Index = 0
   Media Type = LTO3
   Description = IBM Ultrium
   Archive Device = /dev/nsa0
   SpoolDirectory = /export/bacula/spool;
   AutomaticMount = yes;   
   AlwaysOpen = yes
   RemovableMedia = yes;
   RandomAccess = no;
   AutoChanger = yes
   Offline On Unmount = no
   Hardware End of Medium = no
   BSF at EOM = yes
   Backward Space Record = no
   Fast Forward Space File = no
   TWO EOF = yes
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
 }
 
 
 Robin P. Blanchard
 Systems Administrator
 Information Technology Outreach Services
 Carl Vinson Institute of Government
 The University of Georgia
 fon 706.542.6295 // fax 706.542.6535
 
 
 
 
 -
 This SF.net email is sponsored by: Microsoft
 Defy all challenges. Microsoft(R) Visual Studio 2008.
 http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multiple catalogs on one client

2008-01-20 Thread Michael Galloway
On Sun, Jan 20, 2008 at 08:29:37PM +0100, Thomas Lundin wrote:
 Hi
 
 I have a single client bacula system up and running where I store the
 backups off-site. Now I want to have a set of parallell jobs that
 backups to a local disk for easy access. I want to separate the two
 completely, including the catalog.
 
 The only way I've come up with to configure this is to have two clients
 which points to the same server but have different catalogs. When I do
 that says bacula-fd -c -t that only one client resource is permitted
 in the bacula-fd.conf file.
 
 In the new installation and configuration guide on the homepage it is
 mentioned in the catalog resource chapter that it should be possible to
 have different catalogs for backup and verify jobs, which must be on the
 same client.
 
 How do you configure mutliple catalogs for one client?
 
 I'm running bacula 2.0.3 under ubuntu.
 
 /Thomas


there is some general direction for multiple catalogs in this thread:

http://news.gmane.org/find-root.php?message_id=%3c20070220162408.GA11909%40p15145560.pureserver.info%3e
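
the short version, as i recall it (an untested sketch, all names and
passwords made up): keep the single Client resource in bacula-fd.conf, but
define two Client resources in bacula-dir.conf that point at the same
address, each tied to its own Catalog, and give each Job the matching
client:

Catalog { Name = OffsiteCatalog; dbname = bacula_offsite; user = bacula; password = "xxx" }
Catalog { Name = LocalCatalog;   dbname = bacula_local;   user = bacula; password = "xxx" }

Client {
  Name = myhost-offsite-fd
  Address = myhost.example.org
  Catalog = OffsiteCatalog
  Password = "fd-password"      # same FD password in both
}

Client {
  Name = myhost-local-fd
  Address = myhost.example.org
  Catalog = LocalCatalog
  Password = "fd-password"
}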

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] very slow netapp nfs backup

2008-01-17 Thread Michael Galloway
On Fri, Dec 28, 2007 at 03:40:51PM +0100, Svein-Erik Lund wrote:
 On Friday 28 December 2007 15:31:48 Michael Galloway wrote:
  could this be a bacula or postgres issue (my db has grown to over 2GB in 
  size)? 
 
 I'm not shure if it will have any effect or not, but you could  try a VACUUM 
 FULL; of the database. 
 
 --
 Svein-Erik Lund
 


yes, i did vacuum the db (and enabled batch inserts and spooling) and it's
still quite slow. i think it's simply the large number of small files.

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] beginner with bacula; help w/autochanger

2008-01-17 Thread Michael Galloway
On Mon, Dec 17, 2007 at 11:46:34AM -0500, Robin Blanchard wrote:
 Bacula 2.2.6 on RHEL5 (2.6.18-53.1.4.el5xen)
 
 # cat /proc/scsi/scsi 
 Attached devices:
 Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi1 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi2 Channel: 00 Id: 00 Lun: 00
   Vendor: QUALSTAR Model: RLS-8204-20  Rev: 006D
   Type:   Medium Changer   ANSI SCSI revision: 02
 Host: scsi2 Channel: 00 Id: 01 Lun: 00
   Vendor: IBM  Model: ULTRIUM-TD3  Rev: 73P5
   Type:   Sequential-AccessANSI SCSI revision: 03
 
 
 Snippet from bacula-sd.conf:
 
 
 Autochanger {
   Name = Autochanger
   Device = Drive-1
   #Device = Drive-2
   Changer Command = /usr/local/bacula-2.2.6/scripts/mtx-changer %c %o
 %S %a %d
   Changer Device = /dev/sg0
 }
 
 Device {
   Name = Drive-1  #
   Drive Index = 0
   Media Type = DLT-8000
   Archive Device = /dev/nst0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   AutoChanger = yes
   #
   # Enable the Alert command only if you have the mtx package loaded
   # Note, apparently on some systems, tapeinfo resets the SCSI
 controller
   #  thus if you turn this on, make sure it does not reset your SCSI 
   #  controller.  I have never had any problems, and smartctl does
   #  not seem to cause such problems.
   #
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   #If you have smartctl, enable this, it has more info than tapeinfo 
   #Alert Command = sh -c 'smartctl -H -l error %c'  
 }
 
 
 # ./btape -c ../etc/bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/nst0 for writing.
 17-Dec 11:45 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 17-Dec 11:45 btape JobId 0: 3991 Bad autochanger loaded? drive 0
 command: ERR=Child exited with code 1.
 Results=mtx: Request Sense: Long Report=yes
 mtx: Request Sense: Valid Residual=no
 mtx: Request Sense: Error Code=0 (Unknown?!)
 mtx: Request Sense: Sense Key=No Sense
 mtx: Request Sense: FileMark=no
 mtx: Request Sense: EOM=no
 mtx: Request Sense: ILI=no
 mtx: Request Sense: Additional Sense Code = 00
 mtx: Request Sense: Additional Sense Qualifier = 00
 mtx: Request Sense: BPV=no
 mtx: Request Sense: Error in CDB=no
 mtx: Request Sense: SKSV=no
 READ ELEMENT STATUS Command Failed
 
 17-Dec 11:45 btape: Fatal Error at device.c:296 because:
 dev open failed: dev.c:433 Unable to open device Drive-1 (/dev/nst0):
 ERR=Input/output error
 
 17-Dec 11:45 btape JobId 0: Fatal error: butil.c:194 Cannot open
 Drive-1 (/dev/nst0)


hey! you got scsi working ok? :-)

what does mtx -f /dev/sg0 status report? i suspect /dev/sg0 is not your
changer and it's really /dev/sg2.
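
a quick way to check which sg node is which (device names here are just
examples):

cat /proc/scsi/sg/device_strs    # vendor/model for each /dev/sgN, in order
mtx -f /dev/sg2 inquiry          # the changer should report "Medium Changer"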

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO-3 / scsi woes

2008-01-15 Thread Michael Galloway
On Mon, Jan 14, 2008 at 04:12:22PM -0500, Robin Blanchard wrote:
 I've been around the block with LSI and with adaptec: tried an LSI
 SYMC101, an adaptec 2940U2W and a 39160. I've removed the
 library/exchange from the equation, using only the LTO-3 drive (and have
 actually swapped that drive out for another as well), swapped SCSI
 cables, and terminators, and still am getting scsi errors. Anyone got
 any tips/ideas here ?
 
 target2:0:1: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 31)
   Vendor: IBM   Model: ULTRIUM-TD3   Rev: 73P5
   Type:   Sequential-Access  ANSI SCSI revision: 03
  target2:0:1: Beginning Domain Validation
  target2:0:1: asynchronous
  target2:0:1: wide asynchronous
  target2:0:1: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 62)
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
  target2:0:1: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 62)
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
  target2:0:1: Domain Validation detected failure, dropping back
  target2:0:1: control msgout: c.
 sym0: TARGET 1 has been reset.
  target2:0:1: Domain Validation detected failure, dropping back
  target2:0:1: FAST-40 WIDE SCSI 66.0 MB/s DT (30.3 ns, offset 62)
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
  target2:0:1: FAST-40 WIDE SCSI 66.0 MB/s DT (30.3 ns, offset 62)
 sym0: SCSI parity error detected: SCR1=1 DBC=1502 SBCL=ae
 sym0: SCSI parity error detected: SCR1=1 DBC=1500 SBCL=2d
 sym0:1: ERROR (0:8) (d-ae-5) (3e/38/8c) @ (scripta 98:870b).
 sym0: script cmd = 800a
 sym0: regdump: da 10 c0 38 47 3e 01 0a 03 0d 00 ae 80 00 0e 00 00 1c 6f
 08 2a 00 00 00.
 sym0: SCSI BUS reset detected.
 sym0: SCSI BUS has been reset.
  target2:0:1: Domain Validation detected failure, dropping back
  target2:0:1: FAST-20 WIDE SCSI 40.0 MB/s DT (50 ns, offset 62)
  target2:0:1: FAST-20 WIDE SCSI 40.0 MB/s DT (50 ns, offset 62)
  target2:0:1: FAST-20 WIDE SCSI 40.0 MB/s DT (50 ns, offset 62)
  target2:0:1: FAST-20 WIDE SCSI 40.0 MB/s DT (50 ns, offset 62)
  target2:0:1: FAST-20 WIDE SCSI 40.0 MB/s DT (50 ns, offset 62)
  target2:0:1: Domain Validation skipping write tests
  target2:0:1: Ending Domain Validation
 scsi 2:0:1:0: Attached scsi generic sg2 type 1
 ACPI: PCI Interrupt :05:02.1[B] - GSI 101 (level, low) - IRQ 22
 sym1: 1010-33 rev 0x1 at pci :05:02.1 irq 22
 sym1: Symbios NVRAM, ID 7, Fast-80, SE, parity checking
 sym1: open drain IRQ line driver, using on-chip SRAM
 sym1: using LOAD/STORE-based firmware.
 sym1: handling phase mismatch from SCRIPTS.
 sym1: SCSI BUS has been reset.
 scsi3 : sym-2.2.3

what i would try, being the brute-force kind of guy i am: move the
drive/adapter/cabling to a different machine, maybe a different os, or at
least try different slots on the existing machine.
what os is this on again?

-- michael

-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Remote backup question

2008-01-15 Thread Michael Galloway
On Tue, Jan 15, 2008 at 04:08:21PM -0500, Paul Stewart wrote:
 Hi folks...
 
 Just started using Bacula recently - have it up and working on one machine -
 like it so far..;)
 
 I'm trying to get another machine backing up to my 'host' machine but
 confused over what needs to be actually installed on the remote machine
 (both host and remote are Linux).  The docs talk about moving one binary
 over to the remote machine and a conf file and it should work... is this
 correct?
 
 Can someone provide a bit more detail on this?
 
 Thanks very much,
 
 Paul
 


i generally build bacula from source, and try and keep my clients at the same
release as the server. the client builds easily with the --enable-client-only 
flag.

try it and see.
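
the client-only build is roughly this (version and prefix are just what i
use, adjust to taste):

tar xzf bacula-2.2.6.tar.gz
cd bacula-2.2.6
./configure --enable-client-only --prefix=/usr/local/bacula
make
make install

then drop in a bacula-fd.conf whose Director name and password match what
the director's Client resource expects.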

-- michael 


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Newbie Help Configuring Bacula with Dell 132T

2008-01-14 Thread Michael Galloway
On Mon, Jan 14, 2008 at 10:34:45PM +, Nick - wrote:
 I'm having difficulty posting so I apologize if this is a repost.
  
  
  
  
 Hello Everyone,
  
 I'm new to using Bacula and am having a little trouble setting up 
 the configurations.
  
  
 When I run cat /proc/scsi/scsi I get the following back
  
 Attached devices:
 Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: DELL Model: PV-132T  Rev: 308D
   Type:   Medium Changer   ANSI SCSI revision: 02
 Host: scsi0 Channel: 00 Id: 01 Lun: 00
   Vendor: IBM  Model: ULTRIUM-TD2  Rev: 53Y3
   Type:   Sequential-AccessANSI SCSI revision: 03
 Host: scsi0 Channel: 01 Id: 02 Lun: 00
   Vendor: IBM  Model: ULTRIUM-TD2  Rev: 37RH
   Type:   Sequential-AccessANSI SCSI revision: 03
 Host: scsi1 Channel: 00 Id: 08 Lun: 00
   Vendor: DP   Model: BACKPLANERev: 1.05
   Type:   EnclosureANSI SCSI revision: 05
 Host: scsi1 Channel: 02 Id: 00 Lun: 00
   Vendor: DELL Model: PERC 5/i Rev: 1.03
   Type:   Direct-AccessANSI SCSI revision: 05
  
 Next I run mtx -f /dev/sg0 status
  
   Storage Changer /dev/sg0:2 Drives, 23 Slots ( 1 Import/Export )
 Data Transfer Element 0:Empty
 Data Transfer Element 1:Empty
   Storage Element 1:Full :VolumeTag=13  
   Storage Element 2:Full :VolumeTag=19  
   Storage Element 3:Full :VolumeTag=20  
   Storage Element 4:Full :VolumeTag=21  
   Storage Element 5:Full :VolumeTag=17  
   Storage Element 6:Full :VolumeTag=18  
   Storage Element 7:Full :VolumeTag=22  
   Storage Element 8:Full :VolumeTag=01  
   Storage Element 9:Full :VolumeTag=08  
   Storage Element 10:Full :VolumeTag=04  
   Storage Element 11:Full :VolumeTag=07  
   Storage Element 12:Full :VolumeTag=11  
   Storage Element 13:Full :VolumeTag=09  
   Storage Element 14:Full :VolumeTag=10  
   Storage Element 15:Full :VolumeTag=16  
   Storage Element 16:Full :VolumeTag=02  
   Storage Element 17:Empty:VolumeTag=
   Storage Element 18:Full :VolumeTag=14  
   Storage Element 19:Full :VolumeTag=15  
   Storage Element 20:Full :VolumeTag=03  
   Storage Element 21:Full :VolumeTag=05  
   Storage Element 22:Full :VolumeTag=06  
   Storage Element 23 IMPORT/EXPORT:Full :VolumeTag=12  
  
 /mtx -f /dev/sg0 inquiry shows:
  
 Product Type: Medium Changer
 Vendor ID: 'DELL'
 Product ID: 'PV-132T '
 Revision: '308D'
 Attached Changer API: No
  
 /mtx -f /dev/sg1 inquiry shows:
 Product Type: Tape Drive
 Vendor ID: 'IBM '
 Product ID: 'ULTRIUM-TD2 '
 Revision: '53Y3'
 Attached Changer API: No
  
 /mtx -f /dev/sg2 inquiry shows:
 Product Type: Tape Drive
 Vendor ID: 'IBM '
 Product ID: 'ULTRIUM-TD2 '
 Revision: '37RH'
 Attached Changer API: No
  
 /mt -f /dev/nst0 status shows:
 SCSI 2 tape drive:
 File number=-1, block number=-1, partition=0.
 Tape block size 0 bytes. Density code 0x0 (default).
 Soft error count since last status=0
 General status bits on (5):
  DR_OPEN IM_REP_EN
  
  
  
 In bacula-sd.conf I have the following setup:
  
 Autochanger {
   Name = Autochanger
   Device = DellPowerVault132T
   Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg0
 }
  
 Device {
   Name = DellPowerVault132T
   LabelMedia = yes
   Drive Index = 0
   Media Type = LTO-3
   Archive Device = /dev/nst0
   AutomaticMount = yes;
   AlwaysOpen = yes;
   RemovableMedia = yes;
 RandomAccess = no;
 AutoChanger = yes
 Hardware End of Medium = No
 BSF at EOM = yes
 # Enable the Alert command only if you have the mtx package loaded 
 Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
 TWO EOF = yes
 }
  
  
 Also in the bacula-dir.conf I have the following setup:
 Storage {
   Name = File
 # Do not use localhost here
   Address = server_name# N.B. Use a fully qualified name 
 here
   SDPort = 9103
   Password = xxx
   Device = DellPowerVault132T
   Media Type = LTO-3
 }
  
  
 Does everything look correctly setup? 
  
 When I try to mounting the storage using the Webmin Bacula console I get the 
 following error:
  
 Mounting volume on storage device File .. Automatically selected Catalog: 
 MyCatalogUsing Catalog MyCatalog3301 Issuing autochanger 

Re: [Bacula-users] tape volume capacity exceeds

2008-01-08 Thread Michael Galloway

On Tue, Jan 08, 2008 at 03:54:38PM +0100, renatn oblak wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 dear list!
 
 - --problem:
 the capacity of the tapes are 400GB, but the Last Volume Bytes of the
 bacula-message says 535.8 GB already!
 how is this possible?
 i tried to restore a file from this tape, no problem.
 and as you can see in the following output, there are no
 errors.
 nevertheless i'm worried about this.
 
 === Last Volume Bytes: 535,821,986,209 (535.8 GB)
   Non-fatal FD errors: 0
   SD Errors: 0

i'd guess the drive is using hardware compression. on my LTO4 (800/1600GB) i 
get:

|  11 | 000299L4   | Append|   1 | 1,520,676,154,368 |1,527 |   
10,368,000 |   1 |   11 | 1 | LTO4  | 2008-01-02 18:00:07 |

on an 800GB tape ...
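
if you want to check or change the drive's compression setting, something
like this should work (syntax from memory; needs the mt-st and mtx packages
installed, and the sg/st paths are examples):

tapeinfo -f /dev/sg1 | grep -i comp    # shows DataCompEnabled / DataCompCapable
mt -f /dev/nst0 compression 0          # turn hardware compression off
mt -f /dev/nst0 compression 1          # turn it back on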

-- michael

-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Why this ...

2008-01-07 Thread Michael Galloway
On Mon, Jan 07, 2008 at 05:30:20PM -0500, Reynier Perez Mira wrote:
 Dan, soury for not reply before but I solve this problem. The problem now is 
 other. See the details:
 
 bacula-dir -t -c bacula-dir.conf
 07-ene 17:34 bacula-dir:  Fatal error: A user name for MySQL must be supplied.
 07-ene 17:34 bacula-dir:  Fatal error: Could not open Catalog MyCatalog, 
 database bacula.
 07-ene 17:34 bacula-dir ERROR TERMINATION
 Please correct configuration file: bacula-dir.conf
 
 I see inside bacula-dir.conf file and setup the parameters as follow:
 
 Catalog {
   Name = MyCatalog
   dbname = bacula; 
   password = baculapool
 }
 
 What's missing here?
 Cheers
 Ing. Reynier Pérez Mira  


should look more like this:

Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = ""   # fill in the bacula db password
}

-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Why this ...

2008-01-07 Thread Michael Galloway
On Mon, Jan 07, 2008 at 05:41:48PM -0500, Reynier Perez Mira wrote:
 Catalog {
   Name = MyCatalog
   dbname = bacula;
   user = bacula;  
   password = X
 }
 
 Wich is correct because DB, user and password are Ok but when I test the 
 config file I get this error again and again:
 
 bacula-dir -t -c bacula-dir.conf
 07-ene 17:44 bacula-dir:  Fatal error: Could not open Catalog MyCatalog, 
 database bacula.
 07-ene 17:44 bacula-dir:  Fatal error: mysql.c:188 Unable to connect to MySQL 
 server.
 Database=bacula User=bacula
 It is probably not running or your password is incorrect.
 07-ene 17:44 bacula-dir ERROR TERMINATION
 Please correct configuration file: bacula-dir.conf
 
 Looks like a MySQL problem but I don't know why. Can any help me


make sure mysql is running and you can connect via command line.
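
e.g. something like:

mysql -u bacula -p bacula
mysql> select count(*) from Job;

if that fails too, fix the mysql service or the grants/password first;
bacula-dir.conf just has to match whatever works on the command line.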

-- michael 

-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
happy new year all!

my backups of network appliance nfs mounts have gotten intolerable. i have
3 FAS250's i'm working with: masspec, birch, aspen. all are connected to the
same switch via gigE, with single hops to the bacula server (2.2.6 patched).
the first full i did was this one:

  Job:mspec.2007-12-18_21.29.07
  Backup Level:   Full
  Client: molbio-fd 2.2.6 (10Nov07) 
x86_64-unknown-linux-gnu,redhat,
  FileSet:Mspec Set 2007-12-18 20:45:51
  Pool:   Full (From Job resource)
  Storage:LTO4 (From Job resource)
  Scheduled time: 18-Dec-2007 21:29:24
  Start time: 18-Dec-2007 21:53:40
  End time:   19-Dec-2007 15:55:36
  Elapsed time:   18 hours 1 min 56 secs
  Priority:   10
  FD Files Written:   863,458
  SD Files Written:   863,458
  FD Bytes Written:   1,825,660,355,131 (1.825 TB)
  SD Bytes Written:   1,825,879,267,061 (1.825 TB)
  Rate:   28123.4 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): 002045L4|002042L4
  Volume Session Id:  3
  Volume Session Time:1198028560
  Last Volume Bytes:  960,677,286,912 (960.6 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

an adequate backup rate of 28MB/s. the next filer that got a full was aspen:

  Job:aspen.2007-12-20_23.25.27
  Backup Level:   Full (upgraded from Incremental)
  Client: molbio-fd 2.2.6 (10Nov07) 
x86_64-unknown-linux-gnu,redhat,
  FileSet:Aspen Set 2007-12-20 23:25:00
  Pool:   Inc (From Run pool override)
  Storage:LTO4 (From Job resource)
  Scheduled time: 20-Dec-2007 23:25:00
  Start time: 20-Dec-2007 23:25:02
  End time:   21-Dec-2007 23:02:33
  Elapsed time:   23 hours 37 mins 31 secs
  Priority:   10
  FD Files Written:   8,069,999
  SD Files Written:   8,069,999
  FD Bytes Written:   990,743,048,680 (990.7 GB)
  SD Bytes Written:   992,311,396,359 (992.3 GB)
  Rate:   11648.8 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): 002040L4|002049L4
  Volume Session Id:  16
  Volume Session Time:1198028560
  Last Volume Bytes:  15,757,378,560 (15.75 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

slower at 12MB/s but still tolerable. the last started on christmas day, birch:

  Job:birch.2007-12-25_16.47.08
  Backup Level:   Full
  Client: molbio-fd 2.2.6 (10Nov07) 
x86_64-unknown-linux-gnu,redhat,
  FileSet:Birch Set 2007-12-22 09:56:56
  Pool:   Full (From Job resource)
  Storage:LTO4 (From Job resource)
  Scheduled time: 25-Dec-2007 16:47:11
  Start time: 25-Dec-2007 16:47:25
  End time:   31-Dec-2007 23:09:01
  Elapsed time:   6 days 6 hours 21 mins 36 secs
  Priority:   10
  FD Files Written:   16,679,881
  SD Files Written:   16,679,881
  FD Bytes Written:   1,105,891,427,122 (1.105 TB)
  SD Bytes Written:   1,108,951,797,447 (1.108 TB)
  Rate:   2043.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): 000299L4
  Volume Session Id:  5
  Volume Session Time:1198587778
  Last Volume Bytes:  1,504,677,113,856 (1.504 TB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

not acceptable at 2MB/s. i cannot find any real difference in the network
config or nfs mount config on these filesystems. i suspect it has to do with
the nature of the filesystems: masspec has less than a million files, aspen
has around 8 million files, and birch has nearly 17 million.

has anyone had similar experience with nfs backups of this nature? is there
anything i can do to improve performance so the filer gets backed up in a
reasonable time window?

-- michael




-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
On Wed, Jan 02, 2008 at 04:20:12PM +0100, Bruno Friedmann wrote:
 
  
  With the purpose of gathering facts: are these results repeatable?
  
 
 In the same way of idea that Dan.
 
 This could be have to do with the database server having to much record to 
 store.
 What and where are the db server ?
 How is the load on it, did you use the delay-insert feature (if so where the 
 tmp file is created ? )
 


the db server is the bacula server; it's postgres at:

# /usr/local/pgsql/bin/postgres -V
postgres (PostgreSQL) 8.2.5

and i did not use the delay-insert feature. is there a reference url for
this? the load on things seemed reasonable, running a load average of around
1.5 or so. the server is a dual dual-core opteron, 8GB ram, lots of swap.

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
On Wed, Jan 02, 2008 at 10:29:36AM -0500, Dan Langille wrote:
 
 the db server is the bacula server, its postgres at:
 
 # /usr/local/pgsql/bin/postgres -V
 postgres (PostgreSQL) 8.2.5
 
 and i did not use the delay-insert feature. is there a reference url to 
 this? the load on things
 seemed resonable. running a load average of around 1.5 or so. server is 
 dual dual core opteron,
 8GB ram, lot of swap.
 
 I think delay-insert refers to batch insert.  It is enabled by default.
 
   http://www.bacula.org/rel-manual/Installi_Configur_PostgreS.html


thanks,
 
 Has a vacuum analyse been run on the Bacula database, either manually or 
 through auto-vacuum?


nope. what does vacuum analyse do?
 
 What OS is that running on?


centOS5:

cat /etc/redhat-release 
CentOS release 5 (Final)

 Is it on the same machine as bacula-sd/bacula-dir?


yes.

thanks 
 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
On Wed, Jan 02, 2008 at 10:38:22AM -0500, Dan Langille wrote:
 
 I meant to say: Bacula enables it by default.  It relies upon the 
 PostgreSQL client having the thread safe option.  See the above URL for 
 some detail.
 
 On a related issue: Are you spooling attributes (Bacula feature)?  See docs.
 


no, i did not enable attribute spooling. i have disk space available. i can 
enable
that and see if it helps. 
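
(i.e. add SpoolAttributes = yes to the Job resource, if i'm reading the
docs right)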

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
On Wed, Jan 02, 2008 at 09:54:57AM -0500, Dan Langille wrote:
  
  slower at 12MB/s but still tolerable. the last started on christmas day, 
  birch:
  
Job:birch.2007-12-25_16.47.08
Backup Level:   Full
Client: molbio-fd 2.2.6 (10Nov07) 
  x86_64-unknown-linux-gnu,redhat,
FileSet:Birch Set 2007-12-22 09:56:56
Pool:   Full (From Job resource)
Storage:LTO4 (From Job resource)
Scheduled time: 25-Dec-2007 16:47:11
Start time: 25-Dec-2007 16:47:25
End time:   31-Dec-2007 23:09:01
Elapsed time:   6 days 6 hours 21 mins 36 secs
Priority:   10
FD Files Written:   16,679,881
SD Files Written:   16,679,881
FD Bytes Written:   1,105,891,427,122 (1.105 TB)
SD Bytes Written:   1,108,951,797,447 (1.108 TB)
Rate:   2043.0 KB/s
Software Compression:   None
VSS:no
Encryption: no
Volume name(s): 000299L4
Volume Session Id:  5
Volume Session Time:1198587778
Last Volume Bytes:  1,504,677,113,856 (1.504 TB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  OK
SD termination status:  OK
Termination:Backup OK
  
  not acceptable at 2MB/s. i cannot find any real difference in the network 
  config or nfs mount
  config on these filesystms. i suspect it has to do with the nature of the 
  filesystems. masspec
  has less than a millon files, aspen has around 8 million files and birch 
  has nearly 17 million.
  
  has anyone had similar experience working with nfs backups of this nature? 
  anything i can do
  to improve performance to get the filer backed up in a reasonable time 
  window?
 
 With the purpose of gathering facts: are these results repeatable?


hmmm 

these are the only complete level 0's i've taken for these filers. i made an 
attempt at birch
a few days earlier and cancelled it because of the slowness:

  Build OS:   x86_64-unknown-linux-gnu redhat
  JobId:  71
  Job:birch.2007-12-22_09.56.36
  Backup Level:   Full
  Client: molbio-fd 2.2.6 (10Nov07) 
x86_64-unknown-linux-gnu,redhat,
  FileSet:Birch Set 2007-12-22 09:56:56
  Pool:   Full (From Job resource)
  Storage:LTO4 (From Job resource)
  Scheduled time: 22-Dec-2007 09:56:44
  Start time: 22-Dec-2007 09:56:58
  End time:   25-Dec-2007 07:05:52
  Elapsed time:   2 days 21 hours 8 mins 54 secs
  Priority:   10
  FD Files Written:   6,206,323
  SD Files Written:   6,206,323
  FD Bytes Written:   444,458,726,167 (444.4 GB)
  SD Bytes Written:   445,477,079,297 (445.4 GB)
  Rate:   1785.4 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Volume name(s): 002042L4|000299L4
  Volume Session Id:  24
  Volume Session Time:1198028560
  Last Volume Bytes:  305,032,863,744 (305.0 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Canceled
  SD termination status:  Canceled
  Termination:Backup Canceled

i cancelled the job, checked networking, restarted postgres and bacula. then 
restarted the backup.

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] postgres enable thread safety

2008-01-02 Thread Michael Galloway
hmmm, insert foot into appropriate orifice for me. while reading up on
bacula and postgres for my netapp issue, i realized i'd not built postgres
with --enable-thread-safety. so, should i assume all data on tapes now (about
12TB) is useless? i'm rebuilding postgres now.
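
for the record, the rebuild is basically this (paths and versions are mine,
adjust as needed):

cd postgresql-8.2.5
./configure --prefix=/usr/local/pgsql --enable-thread-safety
make
make install

# then rebuild bacula against the new libpq:
cd bacula-2.2.6
./configure --with-postgresql=/usr/local/pgsql   # plus your usual options
make
make install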

-- michael (aka doofus)

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] network appliance backups - large number of files very very slow

2008-01-02 Thread Michael Galloway
On Wed, Jan 02, 2008 at 10:59:08AM -0500, Dan Langille wrote:
 
 no, i did not enable attribute spooling. i have disk space available. i 
 can enable
 that and see if it helps. 
 
 I've lost track of whether you are using tape or not, so this post may 
 not be relevant.
 
 Another issue to consider: is your tape drive being constantly fed with 
 enough data?  Is is stopping and starting all the time?  Start/stop can 
 affect throughput.   However, the only two ways I know to avoid this is:
 
 1 - steady stream of data from the FD to the SD
 2 - data spooling, which means writing the data to local HDD then to the
 tape.
 
 I listen to my DLT drives.  I can tell when they are streaming from one 
 end of the tape to the other from the sounds.  I'm sure you can too. 

ok, i've rebuilt postgres and bacula to enable batch inserts and thread
safety. i've set up a 500G spool directory and enabled data and attribute
spooling on the slowest of the netapps; i'm running a level 0 now to see if
it helps.
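
for reference, the relevant settings are roughly this (my paths and sizes,
not gospel):

# bacula-sd.conf, in the Device resource:
  Spool Directory = /bacula/spool
  Maximum Spool Size = 500gb

# bacula-dir.conf, in the Job (or JobDefs) resource:
  SpoolData = yes
  SpoolAttributes = yes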

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Error: Watchdog sending kill after 518416 secs to thread stalled reading File daemon.

2007-12-31 Thread Michael Galloway
ok, what does this message imply?

31-Dec 16:47 molbio-dir JobId 92: Error: Watchdog sending kill after 518416 
secs to thread stalled reading File daemon.
*

this job has been running for nearly a week and is almost finished:

*status client=molbio-fd
Connecting to Client molbio-fd at molbio:9102

molbio-fd Version: 2.2.6 (10 November 2007)  x86_64-unknown-linux-gnu redhat 
Daemon started 25-Dec-07 08:02, 4 Jobs run since started.
 Heap: heap=1,441,792 smbytes=899,137 max_bytes=935,145 bufs=1,199 
max_bufs=1,264
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

Running Jobs:
JobId 92 Job birch.2007-12-25_16.47.08 is running.
Backup Job started: 25-Dec-07 16:47
Files=15,566,337 Bytes=1,019,121,545,680 Bytes/sec=1,964,745 Errors=0
Files Examined=15,566,337
Processing file: /birch_vol0/prot_data/update/pdb_tmp/pdb1i9b.ent
SDReadSeqNo=5 fd=5
Director connected at: 31-Dec-07 16:52

the fd, sd, and dir are all running on the same machine, bacula 2.2.6
patched. it looks like it's still running; it would really be a shame if it
exited now.

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] very slow netapp nfs backup

2007-12-28 Thread Michael Galloway
On Thu, Dec 27, 2007 at 10:13:14AM -0500, Michael Galloway wrote:
 i'm having trouble getting a netapp nfs mount backed up to my local 
 bacula server. this is bacula 2.2.6 patched. i think the problem is
 the large number of files on the netapp. the backup just slows to a
 crawl:
 
 molbio-fd Version: 2.2.6 (10 November 2007)  x86_64-unknown-linux-gnu redhat 
 Daemon started 25-Dec-07 08:02, 4 Jobs run since started.
  Heap: heap=1,306,624 smbytes=808,085 max_bytes=829,451 bufs=414 max_bufs=447
  Sizeof: boffset_t=8 size_t=8 debug=0 trace=0
 
 Running Jobs:
 JobId 92 Job birch.2007-12-25_16.47.08 is running.
 Backup Job started: 25-Dec-07 16:47
 Files=2,655,026 Bytes=373,175,385,406 Bytes/sec=2,508,371 Errors=0
 Files Examined=2,655,026
 Processing file: 
 /birch_vol0/GC/organism/human_old/chromosome/8/contig/NT_007995.13/gene/grailexp/mrna/14.1.fna
 SDReadSeqNo=5 fd=5
 Director connected at: 27-Dec-07 10:06
 
 my mount options on the nfs mount are:
 
 birch:/vol/vol0 on /birch_vol0 type nfs 
 (ro,rsize=32768,wsize=32768,tcp,addr=xxx.xxx.xxx.xxx)
 
 the job has been running nearly 48 hours and its only about a third done, 
 there is around 1.1T on
 this filer and around 8 million files. 
 
 anyone had any experience working with nfs backups like this? my other netapp 
 nfs mount backups
 seem to run at adequate rates (20+MB/s) 
 


getting even slower. at this rate it will take several days to finish this
backup:

*status client=molbio-fd
Connecting to Client molbio-fd at molbio:9102

molbio-fd Version: 2.2.6 (10 November 2007)  x86_64-unknown-linux-gnu redhat 
Daemon started 25-Dec-07 08:02, 4 Jobs run since started.
 Heap: heap=1,306,624 smbytes=794,986 max_bytes=829,451 bufs=389 max_bufs=447
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

Running Jobs:
JobId 92 Job birch.2007-12-25_16.47.08 is running.
Backup Job started: 25-Dec-07 16:47
Files=4,704,033 Bytes=425,852,376,675 Bytes/sec=1,830,591 Errors=0
Files Examined=4,704,033
Processing file: 
/birch_vol0/database/iprscan/tmp/bpse_E254_04feb03/cnk_39/hmmpfam.out
SDReadSeqNo=5 fd=5
Director connected at: 28-Dec-07 09:24


i've checked the nfs mount, the ethernet connections on the server and
client (both at 1000Mb/s full duplex), ran a traceroute, etc. could this be
a bacula or postgres issue (my db has grown to over 2GB in size)? or do you
reckon it's just due to the netapp and the number of files? the netapp is
working, but not working very hard:

birch sysstat 5
 CPUNFS   CIFS   HTTP  Net kB/s Disk kB/s  Tape kB/sCache
   in   out read  writeread write age
 53%601  0  0 605 2062926112285   0 0   5s
 42%487  0  0 439 1458316787  0   0 0   7s
 38%437  0  0 388 1208315080128   0 0   9s
 40%457  0  0 415 1328216845 94   0 0   9s
 35%371  0  0 342 1126714839  0   0 0   9s
 34%363  0  0 321  991013795205   0 0  11s

of course some of that is traffic not associated with the backup. is there
any way to speed this up, or to otherwise organize the backup so that it
gets done without blocking days' worth of other backups?

-- michael 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] very slow netapp nfs backup

2007-12-27 Thread Michael Galloway
i'm having trouble getting a netapp nfs mount backed up to my local 
bacula server. this is bacula 2.2.6 patched. i think the problem is
the large number of files on the netapp. the backup just slows to a
crawl:

molbio-fd Version: 2.2.6 (10 November 2007)  x86_64-unknown-linux-gnu redhat 
Daemon started 25-Dec-07 08:02, 4 Jobs run since started.
 Heap: heap=1,306,624 smbytes=808,085 max_bytes=829,451 bufs=414 max_bufs=447
 Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

Running Jobs:
JobId 92 Job birch.2007-12-25_16.47.08 is running.
Backup Job started: 25-Dec-07 16:47
Files=2,655,026 Bytes=373,175,385,406 Bytes/sec=2,508,371 Errors=0
Files Examined=2,655,026
Processing file: 
/birch_vol0/GC/organism/human_old/chromosome/8/contig/NT_007995.13/gene/grailexp/mrna/14.1.fna
SDReadSeqNo=5 fd=5
Director connected at: 27-Dec-07 10:06

my mount options on the nfs mount are:

birch:/vol/vol0 on /birch_vol0 type nfs 
(ro,rsize=32768,wsize=32768,tcp,addr=xxx.xxx.xxx.xxx)

the job has been running nearly 48 hours and it's only about a third done.
there is around 1.1T on this filer and around 8 million files.

has anyone had any experience working with nfs backups like this? my other
netapp nfs mount backups seem to run at adequate rates (20+MB/s).

thanks and happy new year!

-- michael



-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] postgres/netapp woes

2007-12-25 Thread Michael Galloway
merry christmas all!

having some postgres issues. i was trying to back up another one of my
netapps via nfs and getting a very slow backup rate:

  Elapsed time:   2 days 21 hours 8 mins 54 secs
  Priority:   10
  FD Files Written:   6,206,323
  SD Files Written:   6,206,323
  FD Bytes Written:   444,458,726,167 (444.4 GB)
  SD Bytes Written:   445,477,079,297 (445.4 GB)
  Rate:   1785.4 KB/s

i finally cancelled the backup. this is nominally the same kind of mount as
my other netapp, which gets around 25MB/s; the mounts look like this:

aspen:/vol/vol0 on /aspen type nfs 
(ro,rsize=32768,wsize=32768,tcp,addr=xxx.xxx.xxx.xxx)
birch:/vol/vol0 on /birch_vol0 type nfs 
(ro,rsize=32768,wsize=32768,tcp,addr=xxx.xxx.xxx.xxx)

aspen got 25MB/s, birch got 1.8MB/s. i cannot see any difference in the network 
interfaces,
both are running at 1000Mb/s:

# ssh birch ifstat -a
-- interface  e0a  (194 days, 9 hours, 33 minutes, 23 seconds) --

RECEIVE
 Frames/second:  21  | Bytes/second:14478  | Errors/minute:   0 
 Total frames: 5559m | Total bytes:  3196g | Total errors:0 
 Multi/broadcast:   114m | No buffers:  0  | Non-primary u/c: 6 
 Tag drop:0  | Vlan tag drop:   0  | Vlan untag drop: 0 
 Runt frames: 0  | Long frames: 0  | CRC errors:  0 
 Length Error:0  | Code Error:  0  | Dribble Error:   0 
TRANSMIT
 Frames/second: 955  | Bytes/second: 1413k | Errors/minute:   0 
 Total frames:14652m | Total bytes: 19612g | Total errors:0 
 Multi/broadcast: 55847  | Queue overflows: 0  | No buffers:  0 
 CRC errors:  0  | Abort Error: 0  | Runt frames: 0 
 Long frames: 0  | Single collision:0  | Late collisions: 0 
 Deferred:0 
LINK_INFO
 Current state:   up | Up to downs: 2  | Speed:1000m
 Duplex:full | Flowcontrol:   full

# ssh aspen ifstat -a
-- interface  e0a  (194 days, 9 hours, 33 minutes, 40 seconds) --

RECEIVE
 Frames/second:  26  | Bytes/second:22388  | Errors/minute:   0 
 Total frames: 2155m | Total bytes:  1578g | Total errors:0 
 Multi/broadcast:   114m | No buffers:  0  | Non-primary u/c: 6 
 Tag drop:0  | Vlan tag drop:   0  | Vlan untag drop: 0 
 Runt frames: 0  | Long frames: 0  | CRC errors:  0 
 Length Error:0  | Code Error:  0  | Dribble Error:   0 
TRANSMIT
 Frames/second:   8  | Bytes/second: 1107  | Errors/minute:   0 
 Total frames: 4949m | Total bytes:  5903g | Total errors:0 
 Multi/broadcast:   213k | Queue overflows: 0  | CRC errors:  0 
 Abort Error: 0  | Runt frames: 0  | Long frames: 0 
 Single collision:0  | Late collisions: 0  | Deferred:0 
LINK_INFO
 Current state:   up | Up to downs: 3  | Speed:1000m
 Duplex:full | Flowcontrol:   full

so i figured i'd just restart bacula and postgres and see if that had any
effect. when trying to shut down postgres i get this:

$ /usr/local/pgsql/bin/pg_ctl -D /var/lib/pgsql/data -l logfile stop
waiting for server to shut 
down... failed
pg_ctl: server does not shut down

and there are a lot of errors in the log like this:

NOTICE:  CREATE TABLE will create implicit sequence basefiles_baseid_seq for 
serial column basefiles.baseid
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index basefiles_pkey 
for table basefiles
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
unsavedfiles_pkey for table unsavedfiles
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index cdimages_pkey 
for table cdimages
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index status_pkey 
for table status
ERROR:  role bacula already exists
STATEMENT:  create user bacula;
ERROR:  table delcandidates does not exist
STATEMENT:  DROP TABLE DelCandidates
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  table delcandidates does not exist
STATEMENT:  DROP TABLE DelCandidates
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  table delcandidates does not exist
STATEMENT:  DROP TABLE DelCandidates
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  table delcandidates does not exist
STATEMENT:  DROP TABLE DelCandidates
ERROR:  index delinx1 does not exist
STATEMENT:  DROP INDEX DelInx1
ERROR:  index delinx1 does not exist

why won't postgres shut down? the only db in it is bacula. is it safe to
kill it and restart?
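
if it keeps hanging, the faster shutdown modes seem safer than a plain kill
(from the postgres docs, if i read them right):

/usr/local/pgsql/bin/pg_ctl -D /var/lib/pgsql/data -m fast stop       # aborts sessions, clean shutdown
/usr/local/pgsql/bin/pg_ctl -D /var/lib/pgsql/data -m immediate stop  # last resort, recovers WAL on next start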

-- michael


[Bacula-users] bsock ... broken pipe error

2007-12-22 Thread Michael Galloway
is this error anything to worry about?

22-Dec 03:41 molbio-dir JobId 68: Begin pruning Jobs.
22-Dec 03:41 molbio-dir JobId 68: No Jobs found to prune.
22-Dec 03:41 molbio-dir JobId 68: Begin pruning Files.
22-Dec 03:41 molbio-dir JobId 68: No Files found to prune.
22-Dec 03:41 molbio-dir JobId 68: End auto prune.

22-Dec 03:41 molbio-dir JobId 68: AfterJob: run command 
/bacula/bin/delete_catalog_backup
22-Dec 03:41 molbio-dir JobId 0: Error: bsock.c:306 Write error sending 19 
bytes to client:xxx.xx.xx.xxx:36131: ERR=Broken pipe

at the end of the job, last line in the log file. the client in question is the 
client daemon on the local bacula server.

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] LTO4 backup rates

2007-12-20 Thread Michael Galloway
good day all, 

out of curiosity, i'm wondering what other folks are getting for backup rates
to LTO4. i seem to be getting these sorts of rates:

local disk backups (3ware raid6 9650SE sata disks/xfs filesystem):
  Elapsed time:   8 hours 22 mins
  Priority:   10
  FD Files Written:   432,602
  SD Files Written:   432,602
  FD Bytes Written:   1,038,925,243,806 (1.038 TB)
  SD Bytes Written:   1,038,995,304,809 (1.038 TB)
  Rate:   34492.9 KB/s

linux client, (3ware sata raid 5, ext3):
  Elapsed time:   1 day 37 mins 15 secs
  Priority:   10
  FD Files Written:   1,214,678
  SD Files Written:   1,214,678
  FD Bytes Written:   2,159,368,051,425 (2.159 TB)
  SD Bytes Written:   2,159,551,222,342 (2.159 TB)
  Rate:   24362.5 KB/s

netapp client via nfs mount to bacula server:
  Elapsed time:   18 hours 1 min 56 secs
  Priority:   10
  FD Files Written:   863,458
  SD Files Written:   863,458
  FD Bytes Written:   1,825,660,355,131 (1.825 TB)
  SD Bytes Written:   1,825,879,267,061 (1.825 TB)
  Rate:   28123.4 KB/s

clients are all on local lan via gigE connections.

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO4 backup rates

2007-12-20 Thread Michael Galloway
sorry, i should have added: this is bacula 2.2.6 patched, with a 2.2.6
client.

On Thu, Dec 20, 2007 at 11:28:24AM -0500, Michael Galloway wrote:
 good day all, 
 
 out of curiosity, i'm wondering what other folks are getting for backup rates
 to LTO4. i seem to be getting these sorts of rates:
 
 local disk backups (3ware raid6 9650SE sata disks/xfs filesystem):
   Elapsed time:   8 hours 22 mins
   Priority:   10
   FD Files Written:   432,602
   SD Files Written:   432,602
   FD Bytes Written:   1,038,925,243,806 (1.038 TB)
   SD Bytes Written:   1,038,995,304,809 (1.038 TB)
   Rate:   34492.9 KB/s
 
 linux client, (3ware sata raid 5, ext3):
   Elapsed time:   1 day 37 mins 15 secs
   Priority:   10
   FD Files Written:   1,214,678
   SD Files Written:   1,214,678
   FD Bytes Written:   2,159,368,051,425 (2.159 TB)
   SD Bytes Written:   2,159,551,222,342 (2.159 TB)
   Rate:   24362.5 KB/s
 
 netapp client via nfs mount to bacula server:
   Elapsed time:   18 hours 1 min 56 secs
   Priority:   10
   FD Files Written:   863,458
   SD Files Written:   863,458
   FD Bytes Written:   1,825,660,355,131 (1.825 TB)
   SD Bytes Written:   1,825,879,267,061 (1.825 TB)
   Rate:   28123.4 KB/s
 
 clients are all on local lan via gigE connections.
 
 -- michael
 
 -
 This SF.net email is sponsored by: Microsoft
 Defy all challenges. Microsoft(R) Visual Studio 2005.
 http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO4 backup rates

2007-12-20 Thread Michael Galloway
On Thu, Dec 20, 2007 at 06:05:03PM +0100, Ralf Gross wrote:
 
 For full backups I get 70-75MB/s write speed to LTO-4 tape. Spooling
 seems to make not much difference here. The overall backup speed
 (whole job) drops with spooling enabled, because it's asynchron for
 single jobs (spooling - despooling - spooling...). Therefor I don't
 use spooling for large jobs that run 20 hours.
 
 Are the above values with spooling or is this the 'Transfer rate' to
 tape?
 
 Ralf

that is straight to tape, no spooling. i think my raid system is limiting 
the local backup rate.
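
(a crude way to check that is to time a big sequential read off the array
and compare it with what bacula reports, e.g.

  dd if=/raid/some-large-file of=/dev/null bs=1M

where the path is just an example; any multi-GB file that isn't already
cached will do)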

-- michael

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] catalog error?

2007-12-19 Thread Michael Galloway
ok, got this error last night during a new backup:

18-Dec 21:55 molbio-sd JobId 50: 3301 Issuing autochanger loaded? drive 0 
command.
18-Dec 21:55 molbio-sd JobId 50: 3302 Autochanger loaded? drive 0, result is 
Slot 8.
18-Dec 21:55 molbio-sd JobId 50: Volume 002042L4 previously written, moving 
to end of data.
18-Dec 22:22 molbio-sd JobId 50: Error: Bacula cannot write on tape Volume 
002042L4 because:
The number of files mismatch! Volume=215 Catalog=214
18-Dec 22:22 molbio-sd JobId 50: Marking Volume 002042L4 in Error in Catalog.
18-Dec 22:22 molbio-dir JobId 50: Recycled volume 002045L4

what sort of problem does this indicate? is this a postgres error? how do i
resolve it?

-- michael

bacula 2.2.6, patched and postgresql 8.2.5


-
SF.Net email is sponsored by:
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services
for just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] catalog error?

2007-12-19 Thread Michael Galloway
On Wed, Dec 19, 2007 at 08:03:13AM -0500, Dan Langille wrote:

 
 It is a data inconsistency.  It means Bacula thought the Volume had 214 
 files on it (as documented in the Catalog).  However, upon inspection, 
 Bacula only 215 files.
 
 Easily fixed.
 
 Use the update volume command in bconsole to adjust the Catalog. Set the 
 File Number to the actual value (215) in this case.
 
 Why did this occur: I don't know.  There was at one time a bug related 
 to this issue, but it has been fixed by your version.  I have not been 
 following this thread.


thanks dan. the volume got marked Error; i assume i can set it back to
Append again after i fix the catalog issue?
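
roughly, from memory:

*update volume=002042L4
    (pick the entry for the volume file count and set it to 215)
*update volume=002042L4 volstatus=Append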

-- michael 

-
SF.Net email is sponsored by:
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services
for just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] FW: new to bacula; help w/autochanger

2007-12-18 Thread Michael Galloway
could you please post the entire output from btape test? 

-- michael


On Tue, Dec 18, 2007 at 08:59:30AM -0500, Robin Blanchard wrote:
   [a lot of old stuff snipped]
  
   I think you are now asking for help with the above messages, but you
   haven't explicitly asked.
  
   When running btape, be sure bacula-sd is not running.  That is your
   firstcheck.
  
  The second check would be the permissions. If the SD runs as user
  bacula, make sure that user can write to /dev/nst0. Running a test as
  root is more likely to succeed :-)
  
 
 Progress ! Thanks for the tips thus far. What next ? It does sort of
 look like a permissions issue (despite all of this being performed as
 uid 0):
 
 # ps ax |fgrep -i acula
 16835 pts/0S+ 0:00 fgrep -i acula
 
 # whoami
 root
 
 # ls -ald {/dev/sg2,/dev/st0,/dev/nst0}
 crw-rw 1 root disk  9, 128 Dec 17 11:13 /dev/nst0
 crw-rw 1 root disk 21,   2 Dec 17 11:13 /dev/sg2
 crw-rw 1 root disk  9,   0 Dec 17 11:13 /dev/st0
 
 # mtx -f /dev/sg2 load 1
 
 # mtx -f /dev/sg2 status
   Storage Changer /dev/sg2:1 Drives, 20 Slots ( 0 Import/Export )
 Data Transfer Element 0:Full (Storage Element 1 Loaded):VolumeTag =
 B00023L3
   Storage Element 1:Empty
   Storage Element 2:Full :VolumeTag=B00039L3
   Storage Element 3:Full :VolumeTag=B00029L3
   Storage Element 4:Full :VolumeTag=B00025L3
   Storage Element 5:Full :VolumeTag=B00028L3
   Storage Element 6:Full :VolumeTag=B00022L3
   Storage Element 7:Full :VolumeTag=B00026L3
   Storage Element 8:Full :VolumeTag=B00031L3
   Storage Element 9:Full :VolumeTag=B00030L3
   Storage Element 10:Full :VolumeTag=B00037L3
 
   Storage Element 11:Full :VolumeTag=B00034L3
 
   Storage Element 12:Full :VolumeTag=B00035L3
 
   Storage Element 13:Full :VolumeTag=B00036L3
 
   Storage Element 14:Full :VolumeTag=B00038L3
 
   Storage Element 15:Full :VolumeTag=B00032L3
 
   Storage Element 16:Full :VolumeTag=B00024L3
 
   Storage Element 17:Full :VolumeTag=B00021L3
 
   Storage Element 18:Full :VolumeTag=B00020L3
 
   Storage Element 19:Full :VolumeTag=B00027L3
 
   Storage Element 20:Full :VolumeTag=B00033L3
 
 
 # mt -f /dev/nst0 status
 SCSI 2 tape drive:
 File number=0, block number=0, partition=0.
 Tape block size 0 bytes. Density code 0x44 (no translation).
 Soft error count since last status=0
 General status bits on (4101):
  BOT ONLINE IM_REP_EN
 [EMAIL PROTECTED] ~]# /usr/local/bacula-2.2.6/sbin/btape -v -c
 /usr/local/bacula-2.2.6/etc/bacula-sd.conf /dev/nst0 
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/nst0 for writing.
 18-Dec 08:54 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 18-Dec 08:54 btape JobId 0: 3302 Autochanger loaded? drive 0, result
 is Slot 1.
 18-Dec 08:54 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 18-Dec 08:54 btape JobId 0: 3302 Autochanger loaded? drive 0, result
 is Slot 1.
 btape: btape.c:368 open device LTO3-1 (/dev/nst0): OK
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 18-Dec 08:55 btape JobId 0: Error: block.c:569 Write error at 0:1 on
 device LTO3-1 (/dev/nst0). ERR=Input/output error.
 18-Dec 08:56 btape JobId 0: Error: Backspace record at EOT failed.
 ERR=Input/output error
 btape: btape.c:823 Error writing block to device.
 
 -
 SF.Net email is sponsored by:
 Check out the new SourceForge.net Marketplace.
 It's the best place to buy or sell services
 for just about anything Open Source.
 http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 

-
SF.Net email is sponsored by:
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services
for just about anything Open Source.
http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] FW: new to bacula; help w/autochanger

2007-12-18 Thread Michael Galloway
On Tue, Dec 18, 2007 at 09:15:09AM -0500, Robin Blanchard wrote:
  [EMAIL PROTECTED] ~]# /usr/local/bacula-2.2.6/sbin/btape -v -c
  /usr/local/bacula-2.2.6/etc/bacula-sd.conf /dev/nst0
  Tape block granularity is 1024 bytes.
  btape: butil.c:285 Using device: /dev/nst0 for writing.
  18-Dec 08:54 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
  command.
  18-Dec 08:54 btape JobId 0: 3302 Autochanger loaded? drive 0, result
  is Slot 1.
  18-Dec 08:54 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
  command.
  18-Dec 08:54 btape JobId 0: 3302 Autochanger loaded? drive 0, result
  is Slot 1.
  btape: btape.c:368 open device LTO3-1 (/dev/nst0): OK
  *test
  
  === Write, rewind, and re-read test ===
  
  I'm going to write 1000 records and an EOF
  then write 1000 records and an EOF, then rewind,
  and re-read the data to verify that it is correct.
  
  This is an *essential* feature ...
  
  18-Dec 08:55 btape JobId 0: Error: block.c:569 Write error at 0:1 on
  device LTO3-1 (/dev/nst0). ERR=Input/output error.
  18-Dec 08:56 btape JobId 0: Error: Backspace record at EOT failed.
  ERR=Input/output error
  btape: btape.c:823 Error writing block to device.
 
 
 Hmmm
 
 # mt -f /dev/st0 rewind
 
 # tar cvf /dev/st0 /root/
 tar: Removing leading `/' from member names
 /root/
 /root/.my.cnf
 /root/anaconda-ks.cfg
 /root/.rnd
 /root/.cshrc
 /root/.tcshrc
 /root/scripts/
 /root/scripts/bootstrap2.sh
 tar: /dev/st0: Cannot write: Input/output error
 tar: Error is not recoverable: exiting now
 
 # tail /var/log/messages
 Dec 18 09:13:23 lewis kernel: st0: Current: sense key: Aborted Command
 Dec 18 09:13:23 lewis kernel: Add. Sense: Data phase error


i suggest loading a different tape and retrying the tar. make sure that tar
can write and read correctly. i went through something similar with my new
spectra T50, and it turned out i needed to modify the scsi bios settings on
my scsi controller.

if tar can write/read correctly, then proceed to the btape test without the
autochanger part; if that works correctly, integrate the changer.

-- michael 
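
for reference, a minimal sketch of that manual check (the slot/drive numbers
and the /dev/sg2 and /dev/nst0 device names are assumptions -- substitute
whatever your system reports):

# mtx -f /dev/sg2 load 2 0            # put a known-good tape in drive 0
# mt -f /dev/nst0 rewind
# mt -f /dev/nst0 status              # should report a tape at BOT
# tar cvf /dev/nst0 /etc              # small write test
# mt -f /dev/nst0 rewind
# tar tvf /dev/nst0 > /dev/null && echo "tar read back OK"
# ./btape -c bacula-sd.conf /dev/nst0 # then run 'test' at the * prompt

an i/o error from tar points at the drive, cabling, or scsi settings rather
than at bacula; a clean tar run followed by a failing btape test points at the
storage daemon configuration.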



Re: [Bacula-users] FW: new to bacula; help w/autochanger

2007-12-18 Thread Michael Galloway
On Tue, Dec 18, 2007 at 11:24:30AM -0500, Robin Blanchard wrote:
 
 Thanks for the tip. It does indeed look like an underlying SCSI issue here
 (see below). Any suggestions as to what to look for in the adaptec
 controller bios?


have a look around this thread:

http://thread.gmane.org/gmane.comp.sysutils.backup.bacula.general/39443/focus=39873

-- michael 



Re: [Bacula-users] purging volumes

2007-12-17 Thread Michael Galloway

On Mon, Dec 17, 2007 at 06:20:22PM +0100, belen wrote:
 Hi, I have some questions for you all. Does  Bacula support LTO-4 
 autochangers ?.
 I see in this list media with mediatype LTO-4. Could you tell me which model 
 and Manufacturer supports LTO-4?
 


yes, i've got bacula working with a spectralogic T50, single LTO-4 drive.

-- michael  



Re: [Bacula-users] new to bacula; help w/autochanger

2007-12-17 Thread Michael Galloway
clearly /dev/sg0 is not your changer. i think yours is /dev/sg3, try this:

tapeinfo -f /dev/sg3 

or 

mtx -f /dev/sg3 

and see if it looks like the changer device. 

-- michael
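
a quick way to see which generic scsi node is the changer and which is the
drive is lsscsi with the -g flag (from the lsscsi package; the output below is
illustrative, not taken from this system):

# lsscsi -g
[2:0:0:0]  mediumx  QUALSTAR  RLS-8204-20  006D  -         /dev/sg2
[2:0:1:0]  tape     IBM       ULTRIUM-TD3  73P5  /dev/st0  /dev/sg3

the "mediumx" row is the changer -- its /dev/sgN goes in Changer Device --
while the tape row's /dev/nstN is what belongs in Archive Device.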

On Mon, Dec 17, 2007 at 01:29:02PM -0500, Robin Blanchard wrote:
 
 Bacula 2.2.6 on RHEL5 (2.6.18-53.1.4.el5xen)
 
 # cat /proc/scsi/scsi 
 Attached devices:
 Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi1 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi2 Channel: 00 Id: 00 Lun: 00
   Vendor: QUALSTAR Model: RLS-8204-20  Rev: 006D
   Type:   Medium Changer   ANSI SCSI revision: 02
 Host: scsi2 Channel: 00 Id: 01 Lun: 00
   Vendor: IBM  Model: ULTRIUM-TD3  Rev: 73P5
   Type:   Sequential-AccessANSI SCSI revision: 03
 
 
 Snippet from bacula-sd.conf:
 
 
 Autochanger {
   Name = Autochanger
   Device = Drive-1
   #Device = Drive-2
   Changer Command = /usr/local/bacula-2.2.6/scripts/mtx-changer %c %o
 %S %a %d
   Changer Device = /dev/sg0
 }
 
 Device {
   Name = Drive-1
   Drive Index = 0
   Media Type = LTO-3
   Archive Device = /dev/nst0
   AutomaticMount = yes;
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   AutoChanger = yes
   #Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   ##If you have smartctl, enable this, it has more info than tapeinfo 
   ##Alert Command = sh -c 'smartctl -H -l error %c'  
 }
 
 # /usr/local/bacula-2.2.6/sbin/btape -c
 /usr/local/bacula-2.2.6/etc/bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/nst0 for writing.
 17-Dec 13:27 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 17-Dec 13:27 btape JobId 0: 3991 Bad autochanger loaded? drive 0
 command: ERR=Child exited with code 1.
 Results=mtx: Request Sense: Long Report=yes
 mtx: Request Sense: Valid Residual=no
 mtx: Request Sense: Error Code=0 (Unknown?!)
 mtx: Request Sense: Sense Key=No Sense
 mtx: Request Sense: FileMark=no
 mtx: Request Sense: EOM=no
 mtx: Request Sense: ILI=no
 mtx: Request Sense: Additional Sense Code = 00
 mtx: Request Sense: Additional Sense Qualifier = 00
 mtx: Request Sense: BPV=no
 mtx: Request Sense: Error in CDB=no
 mtx: Request Sense: SKSV=no
 READ ELEMENT STATUS Command Failed
 
 17-Dec 13:27 btape: Fatal Error at device.c:296 because:
 dev open failed: dev.c:433 Unable to open device Drive-1 (/dev/nst0):
 ERR=Input/output error
 
 17-Dec 13:27 btape JobId 0: Fatal error: butil.c:194 Cannot open
 Drive-1 (/dev/nst0)
 
 
 
 Robin P. Blanchard
 Systems Administrator
 Information Technology Outreach Services
 Carl Vinson Institute of Government
 The University of Georgia
 fon 706.542.6295 // fax 706.542.6535
 
 
 
 



Re: [Bacula-users] new to bacula; help w/autochanger

2007-12-17 Thread Michael Galloway
sorry, i meant /dev/sg2 ...

On Mon, Dec 17, 2007 at 01:29:02PM -0500, Robin Blanchard wrote:
 
 Bacula 2.2.6 on RHEL5 (2.6.18-53.1.4.el5xen)
 
 # cat /proc/scsi/scsi 
 Attached devices:
 Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi1 Channel: 00 Id: 00 Lun: 00
   Vendor: 3wareModel: Logical Disk 0   Rev: 1.2 
   Type:   Direct-AccessANSI SCSI revision: 
 Host: scsi2 Channel: 00 Id: 00 Lun: 00
   Vendor: QUALSTAR Model: RLS-8204-20  Rev: 006D
   Type:   Medium Changer   ANSI SCSI revision: 02
 Host: scsi2 Channel: 00 Id: 01 Lun: 00
   Vendor: IBM  Model: ULTRIUM-TD3  Rev: 73P5
   Type:   Sequential-AccessANSI SCSI revision: 03
 
 
 Snippet from bacula-sd.conf:
 
 
 Autochanger {
   Name = Autochanger
   Device = Drive-1
   #Device = Drive-2
   Changer Command = /usr/local/bacula-2.2.6/scripts/mtx-changer %c %o
 %S %a %d
   Changer Device = /dev/sg0
 }
 
 Device {
   Name = Drive-1
   Drive Index = 0
   Media Type = LTO-3
   Archive Device = /dev/nst0
   AutomaticMount = yes;
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   AutoChanger = yes
   #Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   ##If you have smartctl, enable this, it has more info than tapeinfo 
   ##Alert Command = sh -c 'smartctl -H -l error %c'  
 }
 
 # /usr/local/bacula-2.2.6/sbin/btape -c
 /usr/local/bacula-2.2.6/etc/bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/nst0 for writing.
 17-Dec 13:27 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 17-Dec 13:27 btape JobId 0: 3991 Bad autochanger loaded? drive 0
 command: ERR=Child exited with code 1.
 Results=mtx: Request Sense: Long Report=yes
 mtx: Request Sense: Valid Residual=no
 mtx: Request Sense: Error Code=0 (Unknown?!)
 mtx: Request Sense: Sense Key=No Sense
 mtx: Request Sense: FileMark=no
 mtx: Request Sense: EOM=no
 mtx: Request Sense: ILI=no
 mtx: Request Sense: Additional Sense Code = 00
 mtx: Request Sense: Additional Sense Qualifier = 00
 mtx: Request Sense: BPV=no
 mtx: Request Sense: Error in CDB=no
 mtx: Request Sense: SKSV=no
 READ ELEMENT STATUS Command Failed
 
 17-Dec 13:27 btape: Fatal Error at device.c:296 because:
 dev open failed: dev.c:433 Unable to open device Drive-1 (/dev/nst0):
 ERR=Input/output error
 
 17-Dec 13:27 btape JobId 0: Fatal error: butil.c:194 Cannot open
 Drive-1 (/dev/nst0)
 
 
 
 Robin P. Blanchard
 Systems Administrator
 Information Technology Outreach Services
 Carl Vinson Institute of Government
 The University of Georgia
 fon 706.542.6295 // fax 706.542.6535
 
 
 
 



[Bacula-users] Inc promoted to Full i can't see why

2007-12-15 Thread Michael Galloway
i'm still working on getting my backup schedules right on this new T50 library.
at this point i'm backing up two largish file servers, molbio and moldyn. both
recently had incremental backups upgraded to full because bacula could find no
prior full backup:

14-Dec 21:05 molbio-dir JobId 35: No prior Full backup Job record found.
14-Dec 21:05 molbio-dir JobId 35: No prior or suitable Full backup found in 
catalog. Doing FULL backup.
14-Dec 21:05 molbio-dir JobId 35: Start Backup JobId 35, 
Job=moldyn.2007-12-14_21.05.13

i think this is in the catalog though:

*list job=moldyn
Using Catalog MyCatalog
+---++-+--+---+--+-+---+
| jobid | name   | starttime   | type | level | jobfiles | jobbytes 
   | jobstatus |
+---++-+--+---+--+-+---+
| 4 | moldyn | 2007-12-06 22:21:11 | B| F |  128,194 | 
257,942,793,788 | T |
|10 | moldyn | 2007-12-07 23:44:04 | B| I |   92 |   
8,037,685,364 | T |
|13 | moldyn | 2007-12-08 21:54:38 | B| I |   27 |   
4,499,370,405 | T |
|20 | moldyn | 2007-12-11 10:45:07 | B| I |  202 |   
6,173,334,158 | T |
|23 | moldyn | 2007-12-11 21:05:03 | B| I |   31 |   
4,406,679,712 | T |
|27 | moldyn | 2007-12-13 07:08:01 | B| I |   60 |   
5,481,061,655 | T |
|31 | moldyn | 2007-12-13 22:01:00 | B| I |   97 |   
2,353,854,922 | T |
|35 | moldyn | 2007-12-14 21:05:02 | B| F |  221,768 | 
280,941,935,090 | T |
+---++-+--+---+--+-+---+

which indicates a full from 12/06. my file and job retention settings are:

# Client (File Services) to backup
Client {
  Name = moldyn-fd
  Address = moldyn.ornl.gov
  FDPort = 9102
  Catalog = MyCatalog
  Password =   # password for FileDaemon
  File Retention = 45 days# 30 days
  Job Retention = 6 months# six months
  AutoPrune = yes # Prune expired Jobs/Files
}

and i've not seen anything about pruning jobs or files in the messages or logs.
i clearly seem to be missing something, but i cannot see what. any ideas?

this is bacula 2.2.6 patched. 

-- michael
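
one thing worth checking -- a guess, not a diagnosis: bacula will also upgrade
a job to Full, regardless of retention, if the FileSet changed since the last
Full. comparing what the catalog recorded for the old and new jobs can show
that quickly:

*llist jobid=4
*llist jobid=35
*list filesets

if the fileset recorded for jobid 4 differs from the one for jobid 35, that
would explain the promotion even with the Full from 12/06 still in the catalog.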



[Bacula-users] purging volumes

2007-12-15 Thread Michael Galloway
ok, since my two large incremental backups just got promoted to Full, i'd like
to reuse the volumes from the previous Full backups. my understanding from the
manual is to use 'purge volume=volumename' from the console. as near as i can
tell, the two Fulls in question are contained on two volumes, and those two
volumes don't contain any other jobs, so i feel ok with purging them. do i have
to do anything else to make the volumes available for future backups?

-- michael
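
a minimal sketch of that sequence in bconsole (the volume name is just an
example):

*purge volume=002030L4
*list media pool=Full

the volume should then show VolStatus=Purged; as long as its Recycle flag is
set (it is 1 in the listings above), bacula will recycle and overwrite a
Purged volume automatically the next time it needs an appendable volume in
that pool, so normally nothing else is required.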



Re: [Bacula-users] Bacula with StorEdge L280

2007-12-15 Thread Michael Galloway

On Sat, Dec 15, 2007 at 05:44:20PM -0300, Daniel Bareiro wrote:
 
 The test autochanger worked!!
 --
 
 All the tests seems finished successfully!!
 
 Well, it's a good step in the correct direction :-)
 
 Now I'm trying to perform backups operations. I've noticed when I try
 label a volume, Bacula uses the slot 0 and then it causes a mount error.
 
 --
 *label
 Automatically selected Storage: DLTDrive
 Enter new Volume name: DLT-15Dic07
 Defined Pools:
  1: Default
  2: SundayPool
 Select the Pool (1-2): 1
 Connecting to Storage daemon DLTDrive at sparky.educ.gov.ar:9103 ...
 Sending label command for Volume DLT-15Dic07 Slot 0 ...
 Invalid slot=0 defined, cannot autoload Volume.
 3301 Issuing autochanger loaded drive 0 command.
 3302 Autochanger loaded drive 0, result: nothing loaded.
 3301 Issuing autochanger loaded drive 0 command.
 3302 Autochanger loaded drive 0, result: nothing loaded.
 3912 Failed to label Volume: ERR=dev.c:678 Rewind error on Drive-1
 (/dev/nst0). ERR=No medium found.
 
 Label command failed for Volume DLT-15Dic07.
 Do not forget to mount the drive!!!
 --
 
 I have seen some cases where Bacula asks on the slot for the volume. How
 I can obtain this behavior?
 
 Thanks for your response.
 
 Regards,
 Daniel
 -- 

excellent!

try this from the console:

update slots (or update slots scan)
mount

-- michael
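
if the library has a barcode reader, a sketch of the usual way to get slot
numbers recorded for every cartridge in one pass (storage/pool names taken
from the output above):

*label barcodes storage=DLTDrive pool=Default
*update slots storage=DLTDrive
*mount storage=DLTDrive

label barcodes reads the changer inventory and creates a catalog entry with
the correct slot for each cartridge, which avoids the "Invalid slot=0" error
seen when labelling by hand without giving a slot.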




Re: [Bacula-users] Bacula with StorEdge L280

2007-12-14 Thread Michael Galloway
daniel, were you able to successfully complete the btape test
command from the bconsole? if it encounters errors it can 
suggest modifications to you configuration which may help.

-- michael

On Thu, Dec 13, 2007 at 12:51:46PM -0300, Daniel Bareiro wrote:
 
 I modified the autochanger resource definition. Now, does it look
 better? :-)
 
 Autochanger {
   Name = Autochanger
   Device = Drive-1
   Changer Command = /etc/bacula/scripts/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg2
 }
 
 Device {
   Name = Drive-1  #
   Media Type = DLT-7000
   Archive Device = /dev/nst0
   Autochanger = yes
   LabelMedia = no
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = yes
 }
 
 This is the output of 'status storage':
 
 *status storage
 The defined Storage resources are:
  1: respaldadora-sd
  2: File
 Select Storage resource (1-2): 1
 Connecting to Storage daemon respaldadora-sd at sparky.educ.gov.ar:9103
 
 respaldadora-sd Version: 1.38.11 (28 June 2006) sparc-unknown-linux-gnu 
 debian 4.0
 Daemon started 13-dic-07 06:33, 0 Jobs run since started.
 
 Running Jobs:
 No Jobs running.
 
 
 Jobs waiting to reserve a drive:
 
 
 Terminated Jobs:
 JobId  Level   Files  Bytes Status   FinishedName
 ==
 1  Full  0  0 Other11-dic-07 15:20 Backup_Usuarios
 2  Full  0  0 Cancel   13-dic-07 01:17 Backup_Usuarios
 
 
 Device status:
 Autochanger Autochanger with devices:
Drive-1 (/dev/nst0)
 Device FileStorage (/tmp) is not open or does not exist.
 Device Drive-1 (/dev/nst0) open but no Bacula volume is mounted.
Slot 1 is loaded in drive 0.
Total Bytes Read=0 Blocks Read=0 Bytes/block=0
Positioned at File=0 Block=0
 
 
 In Use Volume status:
 
 
 Then, I try label the volumes but I got an input/output error message
 after a delay. Performing the operation over another slot shows the same
 error.
 
 The defined Storage resources are:
  1: respaldadora-sd
  2: File
 Select Storage resource (1-2): 1
 Enter autochanger drive[0]:
 Enter new Volume name: TestVolume1
 Enter slot (0 or Enter for none): 1
 Automatically selected Pool: Default
 Connecting to Storage daemon respaldadora-sd at sparky.educ.gov.ar:9103...
 Sending label command for Volume TestVolume1 Slot 1 ...
 3301 Issuing autochanger loaded drive 0 command.
 3302 Autochanger loaded drive 0, result: nothing loaded.
 3304 Issuing autochanger load slot 1, drive 0 command.
 3305 Autochanger load slot 1, drive 0, status is OK.
 3301 Issuing autochanger loaded drive 0 command.
 3302 Autochanger loaded drive 0, result is Slot 1.
 3301 Issuing autochanger loaded drive 0 command.
 3302 Autochanger loaded drive 0, result is Slot 1.
 3912 Failed to label Volume: ERR=dev.c:678 Rewind error on Drive-1
 (/dev/nst0). ERR=Error de entrada/salida.
 
 Label command failed for Volume TestVolume1.
 Do not forget to mount the drive!!!
 
 Any idea? The permissions are correct, I think.
 
 sparky:/etc/bacula# ll /dev/st0 /dev/sg2
 crw-rw 1 root tape 21, 2 2007-12-12 11:54 /dev/sg2
 crw-rw 1 root tape  9, 0 2007-12-12 11:54 /dev/st0
 
 sparky:/etc/bacula# id bacula
 uid=105(bacula) gid=105(bacula) grupos=105(bacula),26(tape)
 
 Regards,
 Daniel
 -- 
 Daniel Bareiro - System Administrator
 Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
 Powered by Debian GNU/Linux Etch - Linux user #188.598






Re: [Bacula-users] Bacula with StorEdge L280

2007-12-14 Thread Michael Galloway
On Fri, Dec 14, 2007 at 01:39:40PM -0300, Daniel Bareiro wrote:
 Hi Michael. I've performed the btape test according to Bacula's manual
 but I got a message saying Bacula doen't found the medium.
 
 sparky:~# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:272 Using device: /dev/nst0 for writing.
 14-Dec 13:11 btape: 3301 Issuing autochanger loaded drive 0 command.
 14-Dec 13:11 btape: 3302 Autochanger loaded drive 0, result: nothing
 loaded.
 14-Dec 13:11 btape: 3301 Issuing autochanger loaded drive 0 command.
 14-Dec 13:11 btape: 3302 Autochanger loaded drive 0, result: nothing
 loaded.
 btape: btape.c:338 open device Drive-1 (/dev/nst0): OK
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 14-Dec 13:12 btape: 3301 Issuing autochanger loaded drive 0 command.
 14-Dec 13:12 btape: 3302 Autochanger loaded drive 0, result: nothing
 loaded.
 btape: btape.c:775 Bad status from rewind. ERR=dev.c:678 Rewind error on
 Drive-1 (/dev/nst0). ERR=No medium found.
 
 

looks like either the changer script or mtx cannot get a tape from the library
into the drive. i'd back up a step or two and make sure mtx works as you would
expect (i.e., can 'mtx -f /dev/sg2 next' load the next tape, etc.). if it works
as you expect it should, load a tape into the drive with mtx, then run the
btape test without the autochanger part and make sure it runs without error.
then we can work on putting the autochanger part back in.

-- michael
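
a sketch of those manual mtx checks (changer at /dev/sg2 as in the config
above; the slot and drive numbers are just examples):

# mtx -f /dev/sg2 status        # inventory: which slots are full, is the drive empty?
# mtx -f /dev/sg2 load 1 0      # move the tape from slot 1 into drive 0
# mt -f /dev/nst0 status        # the drive should now report a tape loaded
# mtx -f /dev/sg2 unload 1 0    # put it back in slot 1
# mtx -f /dev/sg2 next 0        # or step to the next tape

if any of these fail, the problem is below bacula (changer, cabling, driver);
if they all work, leave a tape loaded and rerun the btape test with the
autochanger lines commented out of bacula-sd.conf.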




Re: [Bacula-users] Bacula with StorEdge L280

2007-12-13 Thread Michael Galloway
in addition to what others have already indicated, i would suggest
working thru the tape and library section of the manual (i just
went through this exercise myself with a new library). have a look
here:

http://bacula.org/dev-manual/Testing_Your_Tape_Drive.html

make sure the btape test works correctly. in my case it recommended i add some
directives to my SD configuration.

-- michael
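
the exact directives btape suggests depend on the drive and OS, so treat the
following only as an illustration of where they go -- the Device resource in
bacula-sd.conf -- and take the actual values from btape's own output:

Device {
  Name = Drive-1
  Archive Device = /dev/nst0
  Media Type = DLT-7000
  Autochanger = yes
  AutomaticMount = yes
  AlwaysOpen = yes
  # examples of the kind of tuning directives btape may recommend:
  Hardware End of Medium = no
  Fast Forward Space File = no
  BSF at EOM = yes
}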

On Wed, Dec 12, 2007 at 10:52:41PM -0300, Daniel Bareiro wrote:
 Hi all!
 
 This is my first mail to the list. I'm newbie with Bacula. I'm using
 Bacula version (1.38.11-8) from Debian Etch for Sparc respositories.
 
 The operating system detects the tape library and I can operate it with
 mt and mtx:
 



Re: [Bacula-users] Bacula AutoChanger example config request

2007-12-13 Thread Michael Galloway
On Thu, Dec 13, 2007 at 06:18:54AM -0800, Gary Danko wrote:
 I think I am doing something wrong in configuring my autochanger. Would
 someone who successfully uses a changer with more than one tape engine
 please post their configs so I can examine them? The field that I am not sure
 about is the Changer Device in the Device section.
 
 From the documentation I read:  The specified *name-string* must be
 the *generic
 SCSI* device name of the autochanger that corresponds to the normal
 read/write *Archive Device* specified in the Device resource. This generic
 SCSI device name should be specified if you have an autochanger or if you
 have a standard tape drive and want to use the *Alert Command* (see below).
 
 Does this mean that in the Device section, my Changer Device needs to be
 the generic SCSI device name of the changer itself or of the individual tape
 drive?
 
 My setup looks like this:
 
 tape drive 0 = /dev/nst0 = /dev/sg2
 tape drive 1 = /dev/nst1 = /dev/sg3
 tape drive 2 = /dev/nst2 = /dev/sg5
 tape drive 3 = /dev/nst3 = /dev/sg6
 changer = /dev/sg4
 
 The configuration for the changer and two of its drives is below. I am not
 sure if I should change the Changer Device to /dev/sg4 for each of the
 Device entries. Any ideas?


i don't have multiple drives but i think it should be like this:

 
 Autochanger {
   Name = Dell
   Device = ML6020-D0
   Device = ML6020-D1
   Device = ML6020-D2
   Device = ML6020-D3
   Changer Command = /usr/local/bacula/etc/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg4
 }


 
 Device {
   Name = ML6020-D0
   Archive Device = /dev/nst0
   Device Type = Tape
   Media Type = LTO4
   Autochanger = Yes
   Changer Device = /dev/sg4
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Drive Index = 0
   Autoselect = Yes
   AlwaysOpen = Yes;
   RemovableMedia = Yes;
   RandomAccess = No;
   RequiresMount = No;
 }
 
  Device {
   Name = ML6020-D1
   Archive Device = /dev/nst1
   Device Type = Tape
   Media Type = LTO4
   Autochanger = Yes
   Changer Device = /dev/sg4
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Drive Index = 1
   Autoselect = Yes
   AlwaysOpen = Yes;
   RemovableMedia = Yes;
   RandomAccess = No;
   RequiresMount = No;
 }

  Device {
   Name = ML6020-D2
   Archive Device = /dev/nst2
   Device Type = Tape
   Media Type = LTO4
   Autochanger = Yes
   Changer Device = /dev/sg4
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Drive Index = 2
   Autoselect = Yes
   AlwaysOpen = Yes;
   RemovableMedia = Yes;
   RandomAccess = No;
   RequiresMount = No;
 }



etc 
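
whichever node ends up as the Changer Device, it is worth exercising the
mtx-changer script by hand with the same argument order bacula passes
(%c %o %S %a %d = changer device, command, slot, archive device, drive
index); a sketch, with paths and numbers as examples only:

# /usr/local/bacula/etc/mtx-changer /dev/sg4 slots
# /usr/local/bacula/etc/mtx-changer /dev/sg4 list 0 /dev/nst0 0
# /usr/local/bacula/etc/mtx-changer /dev/sg4 load 3 /dev/nst1 1
# /usr/local/bacula/etc/mtx-changer /dev/sg4 loaded 3 /dev/nst1 1

if those behave, the single /dev/sg4 changer device can safely be shared by
all four Device resources.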





Re: [Bacula-users] update slots from manual fails

2007-12-12 Thread Michael Galloway
On Wed, Dec 12, 2007 at 11:12:17AM -0500, John Drescher wrote:
  unmount
  (remove magazine)
  (insert new magazine)
  update slots
  mount
 
  however, when i run update slots i get this:
 
  *update slots
  The defined Storage resources are:
   1: LTO4
   2: File
  Select Storage resource (1-2): 1
  Connecting to Storage daemon LTO4 at molbio:9103 ...
  3306 Issuing autochanger slots command.
  Device LTO4 has 0 slots.
  No slots in changer to scan.
 
 Did you issue this command before the archive was done with the
 inventory? On my archive this takes 3 to 5 minutes.


yup, that was exactly it. i just added a couple more tape cartridges and
waited a bit, works correctly per manual. thanks!

-- michael 



Re: [Bacula-users] update slots from manual fails

2007-12-12 Thread Michael Galloway
On Wed, Dec 12, 2007 at 11:12:17AM -0500, John Drescher wrote:
  *update slots
  The defined Storage resources are:
   1: LTO4
   2: File
  Select Storage resource (1-2): 1
  Connecting to Storage daemon LTO4 at molbio:9103 ...
  3306 Issuing autochanger slots command.
  Device LTO4 has 0 slots.
  No slots in changer to scan.
 
 Did you issue this command before the archive was done with the
 inventory? On my archive this takes 3 to 5 minutes.


hmm 

thats possible, i will try it again tomorrow and give it more time.

-- michael 



Re: [Bacula-users] Volume Error

2007-12-11 Thread Michael Galloway
also, i should have noted, this is bacula 2.2.6 with these
patches:

2.2.6-add.patch
2.2.6-backup-restore-socket.patch
2.2.6-queued-msg.patch
2.2.6-status.patch

running on centos 5. 

-- michael




Re: [Bacula-users] Volume Error

2007-12-11 Thread Michael Galloway
here is what was in the log file:

Dec 10 09:34:04 molbio kernel: st0: Current: sense key: Medium Error
Dec 10 09:34:04 molbio kernel: Additional sense: Recorded entity not found
Dec 10 09:34:04 molbio kernel: Info fld=0x1

i've marked it as used, and will see what happens. if it errors out again
i will replace it. 

thanks again!

-- michael
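
for reference, marking the volume out of rotation that way is a one-liner in
bconsole (volume name from the log above):

*update volume=002045L4 volstatus=Used

if it errors again, pulling the cartridge and replacing it, as suggested, is
the safer option.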

On Tue, Dec 11, 2007 at 10:20:38PM +0100, Arno Lehmann wrote:
 
 A look in the system log - you've got the date and time, so you should 
 find the time range easily - might tell you more. It could be a 
 one-time hardware problem you'll never encounter again, it could be a 
 tape where the final EOF wasn't written for whatever reason, or it's 
 really an unusable tape.
 
 Personally, I'd mark the volume as used and take a not to observe this 
 tape. If, during normal operations, more errors happen, remove it.



Re: [Bacula-users] Volume Error

2007-12-11 Thread Michael Galloway
thanks for helping arno, here is what i've found:

On Tue, Dec 11, 2007 at 09:36:06PM +0100, Arno Lehmann wrote:
 
 First of all, it's quite important to know why it's been marked as 
 Error. You can look that up in the log file Bacula, by default, 
 writes. A grep for the volume name should tell you something.


10-Dec 09:03 molbio-sd JobId 18: Volume 002045L4 previously written, moving 
to end of data.
10-Dec 09:34 molbio-sd JobId 18: Error: Unable to position to end of data on 
device LTO4 (/dev/nst0): ERR=dev.c:1355 ioctl MTFSF error on LTO4 
(/dev/nst0). ERR=Input/output error.

10-Dec 09:34 molbio-sd JobId 18: Marking Volume 002045L4 in Error in Catalog.
10-Dec 09:34 molbio-dir JobId 18: Using Volume 002044L4 from 'Scratch' pool.

is this a tape failure? these are new LTO4 tapes.

-- michael

 



[Bacula-users] volstatus error

2007-12-10 Thread Michael Galloway
good day all, 

the second tape of a large level 0 backup got marked with an error in volstatus:

Pool: Full
+-++---+-+-+--+--+-+--+---+---+-+
| mediaid | volumename | volstatus | enabled | volbytes| volfiles | 
volretention | recycle | slot | inchanger | mediatype | lastwritten |
+-++---+-+-+--+--+-+--+---+---+-+
|   4 | 002045L4   | Error |   1 | 334,484,075,520 |  336 |   
10,368,000 |   1 |5 | 1 | LTO4  | 2007-12-07 01:20:05 |

this is the volume that i got an error from the other day when i tried a test
restore. how do i go about moving the data off this volume and onto another?
thanks!

-- michael
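
one possible route -- an untested sketch, so check the migration chapter of
the manual for your bacula version -- is a migration job that selects
everything on the suspect volume and rewrites it via the pool's Next Pool:

Pool {
  Name = Full
  Pool Type = Backup
  Next Pool = Full-Copy          # destination pool; the name is an assumption
}

Job {
  Name = MigrateBadVolume
  Type = Migrate
  Pool = Full
  Client = molbio-fd
  FileSet = "Full Set"
  Messages = Standard
  Selection Type = Volume
  Selection Pattern = "002045L4"
}

the simpler alternative, if the source data is still on disk, is to purge the
volume and let the next full backup rewrite it.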



[Bacula-users] restore fails

2007-12-07 Thread Michael Galloway
hmmm ...

ok, working with backups on my new T50/LTO4 box. i've got some data backed up,
but when i tried to restore a file today i got this in the console:

07-Dec 11:44 molbio-dir JobId 6: Bacula molbio-dir 2.2.6 (10Nov07): 07-Dec-2007 
11:44:31
Bootstrap records written to /bacula/bin/working/molbio-dir.restore.1.bsr

The job will require the following
   Volume(s) Storage(s)SD Device(s)
===
   
   002045L4  LTO4  LTO4 

1 file selected to be restored.

Run Restore job
JobName: RestoreFiles
Bootstrap:   /bacula/bin/working/molbio-dir.restore.1.bsr
Where:   /tmp/bacula-restores
Replace: always
FileSet: Full Set
Backup Client:   moldyn-fd
Restore Client:  moldyn-fd
Storage: LTO4
When:2007-12-07 11:39:20
Catalog: MyCatalog
Priority:10
OK to run? (yes/mod/no): yes
Job queued. JobId=6
*
*
*
You have messages.
*
 .

07-Dec 11:44 molbio-sd JobId 6: End of Volume at file 0 on device LTO4 
(/dev/nst0), Volume 002045L4
07-Dec 11:44 molbio-sd JobId 6: End of all volumes.
07-Dec 11:44 molbio-sd JobId 6: Alert: smartctl version 5.36 
[x86_64-redhat-linux-gnu] Copyright (C) 2002-6 Bruce Allen
07-Dec 11:44 molbio-sd JobId 6: Alert: Home page is 
http://smartmontools.sourceforge.net/
07-Dec 11:44 molbio-sd JobId 6: Alert: 
07-Dec 11:44 molbio-sd JobId 6: Alert: TapeAlert Not Supported
07-Dec 11:44 molbio-sd JobId 6: Alert: 
07-Dec 11:44 molbio-sd JobId 6: Alert: Error Counter logging not supported
07-Dec 11:44 molbio-dir JobId 6: Bacula molbio-dir 2.2.6 (10Nov07): 07-Dec-2007 
11:44:31
  Build OS:   x86_64-unknown-linux-gnu redhat 
  JobId:  6
  Job:RestoreFiles.2007-12-07_11.39.04
  Restore Client: moldyn-fd
  Start time: 07-Dec-2007 11:39:33
  End time:   07-Dec-2007 11:44:31
  Files Expected: 1
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK -- warning file count mismatch

07-Dec 11:44 molbio-dir JobId 6: Begin pruning Jobs.
07-Dec 11:44 molbio-dir JobId 6: No Jobs found to prune.
07-Dec 11:44 molbio-dir JobId 6: Begin pruning Files.
07-Dec 11:44 molbio-dir JobId 6: No Files found to prune.
07-Dec 11:44 molbio-dir JobId 6: End auto prune.

  Build OS:   x86_64-unknown-linux-gnu redhat 
  JobId:  6
  Job:RestoreFiles.2007-12-07_11.39.04
  Restore Client: moldyn-fd
  Start time: 07-Dec-2007 11:39:33
  End time:   07-Dec-2007 11:44:31
  Files Expected: 1
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK -- warning file count mismatch

with no file restored. this is bacula 2.2.6 on centos 5. i patched with:

2.2.6-add.patch
2.2.6-backup-restore-socket.patch
2.2.6-queued-msg.patch
2.2.6-status.patch

i noticed that in dmesg there was this:

st0: Current: sense key: Medium Error
Additional sense: Recorded entity not found
Info fld=0x1

i'm trying another restore from a different volume to see how it goes. 
is this a hardware or software problem?

-- michael




[Bacula-users] 2.2.6 clients on 2.2.5 server?

2007-11-30 Thread Michael Galloway
i have version 2.2.5 server up and running, any reason to not use 2.2.6 clients
with 2.2.5 server?

-- michael



[Bacula-users] scratch pool usage (Cannot find any appendable volumes)

2007-11-29 Thread Michael Galloway
good day all, i'm working thru getting backups working on my new
library. i labeled 10 tapes in the library from inside bconsole
using barcodes and put them all into the scratch pool:

Pool: Scratch
+-++---+-+--+--+--+-+--+---+---+-+
| mediaid | volumename | volstatus | enabled | volbytes | volfiles | 
volretention | recycle | slot | inchanger | mediatype | lastwritten |
+-++---+-+--+--+--+-+--+---+---+-+
|   1 | 002048L4   | Append|   1 |   64,512 |0 |   
31,536,000 |   1 |2 | 1 | LTO4  | |
|   2 | 002047L4   | Append|   1 |   64,512 |0 |   
31,536,000 |   1 |3 | 1 | LTO4  

etc 

when i try and run a backup using the scratch pool via the command line:

Run Backup job
JobName:  Client1
Level:Full
Client:   molbio-fd
FileSet:  Full Set
Pool: Scratch (From Job resource)
Storage:  LTO4 (From Job resource)
When: 2007-11-29 10:16:59
Priority: 10
OK to run? (yes/mod/no): yes
Job queued. JobId=7

i get:

29-Nov 10:17 molbio-dir JobId 7: Start Backup JobId 7, 
Job=Client1.2007-11-29_10.17.11
29-Nov 10:17 molbio-dir JobId 7: Using Device LTO4
29-Nov 10:17 molbio-sd JobId 7: Job Client1.2007-11-29_10.17.11 waiting. Cannot 
find any appendable volumes.
Please use the label  command to create a new Volume for:
Storage:  LTO4 (/dev/nst0)
Pool: Scratch
Media type:   Ultrium-4

although all volumes in the scratch pool are listed as appendable.

i must be misunderstanding the use of the scratch pool; i understood that
bacula would select volumes from it as needed. my bacula version is:

*version
molbio-dir Version: 2.2.5 (09 October 2007) x86_64-unknown-linux-gnu redhat 

is my approach to this incorrect?

-- michael



Re: [Bacula-users] scratch pool usage (Cannot find any appendable volumes)

2007-11-29 Thread Michael Galloway
On Thu, Nov 29, 2007 at 11:19:43AM -0500, Michael Galloway wrote:
 On Thu, Nov 29, 2007 at 11:06:05AM -0500, Flak Magnet wrote:
  Don't configure your jobs to USE the scratch pool.  That's the root of the 
  issue.
  
  Instead, set your job's pool to be something else OTHER than Scratch, 
  let's 
  say it's Client1_Pool just for example.  Of course, you'll have to define 
  the pool to the director and reload the config before you run the job.  
  It's 
  a good idea to test the config before you actually issue the reload 
  command.  Speak up if you need help with that.
  
  When your job Client1 runs, if there are valid volumes in Client1_Pool 
  then bacula will use a volume in the Client1_Pool.  
  
  If there is NOT a valid volume in the Client1_Pool THEN bacula will 
  re-assign an appropriate volume from the Scratch pool into Client1_Pool 
  and use that one.
  
  So your jobs should never be explicitly told to use the Scratch pool.  
  Bacula 
  does it automagically.
  
  I hope that helps it make sense.
  
  --Tim
 
 
 like this:
 
 *list pools
 ++-+-+-+--+-+
 | poolid | name| numvols | maxvols | pooltype | labelformat |
 ++-+-+-+--+-+
 |  1 | Default |   0 |   0 | Backup   | *   |
 |  2 | Scratch |  10 |   0 | Backup   | *   |
 |  3 | Full|   0 |   0 | Backup   | *   |
 ++-+-+-+--+-+
 
 and the job:
 
 Run Backup job
 JobName:  Client1
 Level:Full
 Client:   molbio-fd
 FileSet:  Full Set
 Pool: Full (From Job resource)
 Storage:  LTO4 (From Job resource)
 When: 2007-11-29 11:17:32
 Priority: 10
  
 and again:
 
 29-Nov 11:18 molbio-dir JobId 9: Start Backup JobId 9, 
 Job=Client1.2007-11-29_11.18.03
 29-Nov 11:18 molbio-dir JobId 9: Using Device LTO4
 29-Nov 11:18 molbio-sd JobId 9: Job Client1.2007-11-29_11.18.03 waiting. 
 Cannot find any appendable volumes.
 Please use the label  command to create a new Volume for:
 Storage:  LTO4 (/dev/nst0)
 Pool: Full
 Media type:   Ultrium-4
 


sorry, i should have included this: bacula-dir.conf and bacula-sd.conf. this is
bacula version 2.2.5:


 
#
# Default Bacula Director Configuration file
#
#  The only thing that MUST be changed is to add one or more
#   file or directory names in the Include directive of the
#   FileSet resource.
#
#  For Bacula release 2.2.5 (09 October 2007) -- redhat 
#
#  You might also want to change the default email address
#   from root to your address.  See the mail and operator
#   directives in the Messages resource.
#

Director {# define myself
  Name = molbio-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = /opt/bacula/bin/query.sql
  WorkingDirectory = /opt/bacula/bin/working
  PidDirectory = /opt/bacula/bin/working
  Maximum Concurrent Jobs = 1
  Password = KEI+uWMRWamvrL7luIICAgj8UnNg0XFfNvGyz5/LgT3d # Console 
password
  Messages = Daemon
}

JobDefs {
  Name = DefaultJob
  Type = Backup
  Level = Incremental
  Client = molbio-fd 
  FileSet = Full Set
  Schedule = WeeklyCycle
  Storage = LTO4
  Messages = Standard
  Pool = Full
  Priority = 10
}


#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /tmp
Job {
  Name = Client1
  JobDefs = DefaultJob
  Write Bootstrap = /opt/bacula/bin/working/Client1.bsr
}

#Job {
#  Name = Client2
#  Client = molbio2-fd
#  JobDefs = DefaultJob
#  Write Bootstrap = /opt/bacula/bin/working/Client2.bsr
#}

# Backup the catalog database (after the nightly save)
Job {
  Name = BackupCatalog
  JobDefs = DefaultJob
  Level = Full
  FileSet=Catalog
  Schedule = WeeklyCycleAfterBackup
  # This creates an ASCII copy of the catalog
  RunBeforeJob = /opt/bacula/bin/make_catalog_backup bacula bacula
  # This deletes the copy of the catalog
  RunAfterJob  = /opt/bacula/bin/delete_catalog_backup
  Write Bootstrap = /opt/bacula/bin/working/BackupCatalog.bsr
  Priority = 11   # run after main backup
}

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
  Name = RestoreFiles
  Type = Restore
  Client=molbio-fd 
  FileSet=Full Set  
  Storage = File  
  Pool = Default
  Messages = Standard
  Where = /tmp/bacula-restores
}


# List of files to be backed up
FileSet {
  Name = Full Set
  Include {
Options {
  signature = MD5
}
#
#  Put your list of files here, preceded by 'File =', one per line
#or include an external list with:
#
#File = file-name
#
#  Note: / backs up everything on the root partition.
#if you have other partitons such as /usr or /home
#you will probably want to add them too.
#
#  By default this is defined to point to the Bacula build

Re: [Bacula-users] scratch pool usage (Cannot find any appendable volumes)

2007-11-29 Thread Michael Galloway
On Thu, Nov 29, 2007 at 01:16:30PM -0500, John Drescher wrote:
 On Nov 29, 2007 10:50 AM, Michael Galloway [EMAIL PROTECTED] wrote:
  good day all, i'm working thru getting backups working on my new
  library, i labled 10 tapes in the library from inside bconsole
  using barcodes, i put them all into the scratch pool:
 
  Pool: Scratch
  +-++---+-+--+--+--+-+--+---+---+-+
  | mediaid | volumename | volstatus | enabled | volbytes | volfiles | 
  volretention | recycle | slot | inchanger | mediatype | lastwritten |
  +-++---+-+--+--+--+-+--+---+---+-+
  |   1 | 002048L4   | Append|   1 |   64,512 |0 |   
  31,536,000 |   1 |2 | 1 | LTO4  | |
  |   2 | 002047L4   | Append|   1 |   64,512 |0 |   
  31,536,000 |   1 |3 | 1 | LTO4
 
 For some reason the Media Type for these tapes in the Scratch Pool is
 set to LTO4, which is not Ultrium-4, so bacula will ignore these tapes.
 You need to correct that for the tapes. I am not exactly sure how to
 do that. You might need to delete the volumes and get bacula to
 relabel them.
 
 John


doh! yes, i see that now, thank you very much. i reset the media type in the
storage definition and the job is running now.

-- michael 
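
the rule underneath this: the Media Type string in the director's Storage
resource has to match the Media Type in the storage daemon's Device resource
character for character, and each volume records that string when it is
labeled. a sketch of the matching pair, using the names from this setup
(values illustrative):

# bacula-dir.conf
Storage {
  Name = LTO4
  Address = molbio
  SDPort = 9103
  Password = "..."
  Device = LTO4
  Media Type = Ultrium-4      # must be identical on both sides
  Autochanger = yes
}

# bacula-sd.conf
Device {
  Name = LTO4
  Archive Device = /dev/nst0
  Media Type = Ultrium-4      # same string here
  ...
}

volumes already labeled under the old string may need to be deleted and
relabeled (as suggested above) before bacula will consider them again.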



Re: [Bacula-users] Btape test command fails

2007-11-18 Thread Michael Galloway
none of the scsi cards i had btape trouble with were low end; all were new
U320 cards from the big vendors (LSI and Adaptec).

-- michael

On Thu, Nov 15, 2007 at 12:48:05AM +0200, Michael Lewinger wrote:
 Hi,
 
 It just seems to me that SCSI tapes and bacula don't perform well on
 low-budget SCSI cards... ? I'm also having problems finishing btape
 successfully with a DDS2 tape on a tekram dc315U controller, and I'm
 waiting for a friend to send me an adaptec 2940 controller to see if
 it happens as well. I'll update the list.
 
 Michael
 
 



Re: [Bacula-users] Btape test command fails

2007-11-09 Thread Michael Galloway
i had somewhat similar problems with btape that i ended up resolving by 
modifying the 
configuration of my scsi controller. see this thread:

http://sourceforge.net/mailarchive/forum.php?thread_name=20071030014308.GA31765%40sif.lsd.ornl.govforum_name=bacula-users

-- michael



On Fri, Nov 09, 2007 at 01:07:13PM -0800, Brad M wrote:
 Hi there, I am having some problems trying to get my Tape drive working with
 Bacula. I am very new to the world of Bacula as I was only introduced to it
 back at the BSDCan 2007 gathering. Basically I am setting up a new file
 server and need to back up the data on a daily basis but can't get past the
 point of simply testing my drive. When running the btape test command, it
 seems that it is able to write to the device but unable to read from it. Here
 is the error output:
 
 # ./btape -c bacula-sd.conf /dev/sa0
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/sa0 for writing.
 btape: btape.c:368 open device HP_Ultrium (/dev/sa0): OK
 *test
 === Write, rewind, and re-read test ===
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 This is an *essential* feature ...
 btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to HP_Ultrium (/dev/sa0)
 btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to HP_Ultrium (/dev/sa0)
 btape: btape.c:501 Wrote 1 EOF to HP_Ultrium (/dev/sa0)
 btape: btape.c:852 Rewind OK.
 1000 blocks re-read correctly.
 07-Nov 15:28 btape JobId 0: Error: block.c:995 Read error on fd=3 at file:blk
 0:1000 on device HP_Ultrium (/dev/sa0). ERR=Operation not permitted.
 btape: btape.c:864 Read block 1001 failed! ERR=Operation not permitted
 
 Here is a copy of my bacula-sd.conf file:
 
 Storage {                    # definition of myself
   Name = quagmire-sd
   SDPort = 9103              # Director's port
   WorkingDirectory = /usr/local/bacula/bin/working
   Pid Directory = /usr/local/bacula/bin/working
   Maximum Concurrent Jobs = 20
 }
 Director {
   Name = quagmire-dir
   Password = "something random"
 }
 Director {
   Name = quagmire-mon
   Password = "something random"
   Monitor = yes
 }
 Device {
   Name = HP_Ultrium
   Media Type = LTO
   Archive Device = /dev/sa0
   AutomaticMount = yes;
   Device Type = Tape
   AlwaysOpen = yes
   Removable Media = yes
   Random Access = no;
   AutoChanger = no
   # FreeBSD Settings
   Offline On Unmount = no
   Hardware End of Medium = no
   BSF at EOM = yes
   Backward Space Record = no
   Backward Space File = no
   Fast Forward Space File = no
   TWO EOF = yes
 }
 Messages {
   Name = Standard
   director = quagmire-dir = all
 }
 
 My software setup is as follows:
 FreeBSD 5.5 x86
 Bacula 2.2.5 (Installed from source)
 MySQL 5.0.45
 
 My hardware setup is as follows:
 CPU - AMD AM2 5600+
 Motherboard - Asus M2N-LR
 SCSI Card - Adaptec 2130SLP (PCIX Ultra320)
 Tape Drive - HP StorageWorks Ultrium 448
 Data Cartridge - HP LTO2 Ultrium 400GB
 
 I have been trying various FreeBSD suggested configuration changes but in the
 end I'm always getting the same error. I manually tar'd 40GB of data to the
 tape drive then extracted it back and it worked flawlessly. So I am guessing
 it's a configuration setting that I either got wrong or am missing. Any help
 would be very appreciated. Thanks! Brad.
  
 (Sorry if this gets posted twice as I had a message awaiting approval but it 
 disappeared)




Re: [Bacula-users] scsi problems

2007-11-08 Thread Michael Galloway
progress with this issue. i submitted a bug report to adaptec and they
finally provided some suggestions to help resolve this. here is what
they recommended:

   Go to Configure/View Host Adapter Settings.

   If the SCSI Controller does not have the system boot device attached,
   disable the BIOS. On SCSI Controllers with 2 channels, the BIOS of the
   channel that does not have the boot device, can be disabled.

   To do this, go to Advanced Configuration and set SCSI Controller Int
   13 Support to Disabled. If you boot from a SCSI device attached with
   the SCSI controller, leave the SCSI Controller Int 13 Support at
   Enabled.

   Under Advanced Configuration set Domain Validation to Disabled.

   Press Esc to exit.

   Go to SCSI Device Configuration.

   For the SCSI ID of the tape drive or tape library, set Initiate Wide
   Negotiation to No. This will automatically change the Sync Transfer
   Rate to 40MB/s, Packetized to No, QAS to No, and BIOS
   Multiple LUN Support to No. BIOS Multiple LUN Support can be changed
   back to Yes if needed.

   For the SCSI ID of the tape drive or tape library, set Enable
   Disconnection to No.

   For the SCSI ID of the tape drive or tape library, set Send Start Unit
   Command to No.

   Press Esc twice to exit, save the changes.

   Press Esc again, exit the utility and reboot the system.

with these changes implemented, the btape test passes (with a couple of
modifications to bacula-sd.conf) and the autochanger test passes. 

out of curiosity, what are others with LTO-4 drives using for scsi adapters?

-- michael

On Mon, Oct 29, 2007 at 09:43:09PM -0400, Michael Galloway wrote:
 seem to be having some scsi problems with btape test. this test is with
 a spectra T50/LTO-4 attached via an LSI LSIU320 controller. i ran 100GB
 of data onto the drive with tar with no issue. but when i run this:
 
 ./btape -c bacula-sd.conf /dev/nst0
 
 test
 
 i get:
 
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:852 Rewind OK.
 1000 blocks re-read correctly.
 29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at file:blk 
 0:1000 on device LTO4 (/dev/nst0). ERR=No such device or address.
 btape: btape.c:864 Read block 1001 failed! ERR=No such device or address
 
 and i the kernel ring buffer log:
 
 st0: Block limits 1 - 16777215 bytes.
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
 st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
 mptscsih: ioc0: attempting task abort! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptbase: Initiating ioc0 recovery
 mptscsih: ioc0: task abort: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
 mptscsih: ioc0: attempting target reset! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptscsih: ioc0: target reset: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
 mptscsih: ioc0: attempting bus reset! (sc=81011bf35240)
 st 5:0:15:0: 
 command: Read(6): 08 00 00 fc 00 00
 mptscsih: ioc0: bus reset: SUCCESS (sc=81011bf35240)
 mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
 mptscsih: ioc0: Attempting host reset! (sc=81011bf35240)
 mptbase: Initiating ioc0 recovery
 mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
 st 5:0:15:0: scsi: Device offlined - not ready after error recovery
 st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 
 i've reseated my cables and terminator. reseated the scsi card. any idea
 where the problem is? this is centOS 5, kernel is:
 
 2.6.18-8.1.14.el5 #1

Re: [Bacula-users] scsi problems

2007-11-08 Thread Michael Galloway
not yet, i just got this late yesterday and only ran some preliminary testing
to be sure it's working. i will start undoing the changes today and see if
i can find the relevant factor. 

-- michael

On Thu, Nov 08, 2007 at 02:28:55PM +0200, Michael Lewinger wrote:
 Hi Michael.
 
 Firstly, I'm glad you solved the problem and shared it with the list.
 
 I'm also having problems with the SCSI tape I'm trying to use, so maybe I'll
 profit from your experience as well. Have you succeeded in pinpointing the
 relvant change ?
 
 Michgael
 
 
 



Re: [Bacula-users] scsi problems

2007-11-05 Thread Michael Galloway
i'm going to just add this bit of data into the mix. dd onto and
off the tape device:

# dd if=/dev/zero of=/dev/nst0 bs=65536 count=100000
100000+0 records in
100000+0 records out
6553600000 bytes (6.6 GB) copied, 61.687 seconds, 106 MB/s
# mt -f /dev/nst0 rewind
# dd of=/dev/null if=/dev/nst0 bs=65536 count=100000
100000+0 records in
100000+0 records out
6553600000 bytes (6.6 GB) copied, 58.3182 seconds, 112 MB/s
# mt -f /dev/nst0 rewind
# dd if=/dev/zero of=/dev/nst0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes (20 GB) copied, 181.502 seconds, 108 MB/s
# mt -f /dev/nst0 rewind
# dd of=/dev/null if=/dev/nst0 bs=65536 count=300000
300000+0 records in
300000+0 records out
19660800000 bytes (20 GB) copied, 215.185 seconds, 91.4 MB/s

so at least it would seem that the drivers/adapter can move data
at an acceptable rate. 

i've done an strace of btape test and it hangs when the scsi bus
hangs, and would be glad to make the file available to anyone
that thinks they can help.
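
for anyone wanting to reproduce the trace, a minimal sketch (assuming the same
bacula-sd.conf and /dev/nst0 paths used above, and that strace is installed):

# timestamped system-call trace of the btape run; -f follows forked
# children and -o writes the trace to a file instead of stderr
strace -f -tt -o btape.trace ./btape -c bacula-sd.conf /dev/nst0

# after the hang, the last few lines show the call that never returned
tail -n 50 btape.trace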

-- michael



On Fri, Nov 02, 2007 at 09:15:08PM +0100, Arno Lehmann wrote:
 Hi,
 
 02.11.2007 17:55,, Michael Galloway wrote::
  ok, i'd like to revisit this issue. i changed scsi cards and i still
  get scsi crashes from btape test command. new card is adaptec 29320:
 
 Bad.
 
  03:06.0 SCSI storage controller: Adaptec ASC-29320A U320 (rev 10)
  
  i spent the morning tar onto and off the LTO-4 drive:
  
  [5:0:15:0]   tapeIBM  ULTRIUM-TD4  7950  /dev/st0
  
  with no issues. then i erased the tape and started with btape again:
  
  ./btape -c bacula-sd.conf /dev/nst0
  Tape block granularity is 1024 bytes.
  btape: butil.c:285 Using device: /dev/nst0 for writing.
  btape: btape.c:368 open device LTO4 (/dev/nst0): OK
  *test
  
  === Write, rewind, and re-read test ===
  
  I'm going to write 1000 records and an EOF
  then write 1000 records and an EOF, then rewind,
  and re-read the data to verify that it is correct.
  
  This is an *essential* feature ...
  
  btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
  btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
  btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
  btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
  btape: btape.c:852 Rewind OK.
  1000 blocks re-read correctly.
  
  hangs there with this in the dmesg log:
  
  dmesg
 ...snipped. I can't understand that stuff easily.
 ...
  so, in the end i cannot make a successful btape test run with this LTO-4 
  drive
  with two different scsi cards. i guess my question is, is this a bacula 
  btape 
  issue or an LTO or spectralogic scsi issue?
 
 I'm really not sure... I know that btape works correctly; at least it 
 did so everytime I used it.
 
 LTO tapes, too, can be used without problems by Bacula, and btape 
 testing them works for me and many others, too.
 
 Finally, I'm quite sure that Bacula is run on Spektralogic hardware 
 somewhere out there.
 
 Currently, I can only recommend to ensure you've got the latest 
 firmware for HBA and tape device, a proven driver and kernel on your 
 system, and run a current version of btape.
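
A quick sketch of how those versions could be checked on a setup like this,
assuming the mpt Fusion driver and that the sg3_utils package is installed;
/dev/sg1 below is only a placeholder, and lsscsi -g shows the real sg node
for the drive:

# kernel and loaded mpt driver versions
uname -r
cat /proc/mpt/version

# drive vendor, model and firmware revision via a SCSI INQUIRY
lsscsi -g
sg_inq /dev/sg1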
 
 If the problem persists (which I assume) you should file a bug report 
 at bugs.bacula.org and perhaps also contact the developers of the SCSI 
 driver you're running.
 
 Apart from that I can only wish good luck...
 
 Arno
 
 -- 
 Arno Lehmann
 IT-Service Lehmann
 www.its-lehmann.de
 
 -
 This SF.net email is sponsored by: Splunk Inc.
 Still grepping through log files to find problems?  Stop.
 Now Search log events and configuration files using AJAX and a browser.
 Download your FREE copy of Splunk now  http://get.splunk.com/
 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users
 

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] scsi problems

2007-11-02 Thread Michael Galloway
ok, i'd like to revisit this issue. i changed scsi cards and i still
get scsi crashes from btape test command. new card is adaptec 29320:

03:06.0 SCSI storage controller: Adaptec ASC-29320A U320 (rev 10)

i spent the morning tar onto and off the LTO-4 drive:

[5:0:15:0]   tapeIBM  ULTRIUM-TD4  7950  /dev/st0

with no issues. then i erased the tape and started with btape again:

./btape -c bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
btape: btape.c:368 open device LTO4 (/dev/nst0): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
btape: btape.c:852 Rewind OK.
1000 blocks re-read correctly.

hangs there with this in the dmesg log:

dmesg
scsi5:A:15: no active SCB for reconnecting target - issuing BUS DEVICE RESET
SAVED_SCSIID == 0xf7, SAVED_LUN == 0x0, REG0 == 0x ACCUM = 0xc0
SEQ_FLAGS == 0xc0, SCBPTR == 0xc0, BTT == 0x, SINDEX == 0x1c0
SELID == 0xf0, SCB_SCSIID == 0x0, SCB_LUN == 0x0, SCB_CONTROL == 0x0
SCSIBUS[0] == 0x2, SCSISIGI == 0xc6
SXFRCTL0 == 0x88
SEQCTL0 == 0x0
 Dump Card State Begins 
scsi5: Dumping Card State at program address 0x161 Mode 0x33
Card was paused
INTSTAT[0x0] SELOID[0xf] SELID[0xf0] HS_MAILBOX[0x0] 
INTCTL[0x80] SEQINTSTAT[0x0] SAVED_MODE[0x11] DFFSTAT[0x33] 
SCSISIGI[0xc6] SCSIPHASE[0x20] SCSIBUS[0x2] LASTPHASE[0xc0] 
SCSISEQ0[0x0] SCSISEQ1[0x12] SEQCTL0[0x0] SEQINTCTL[0x0] 
SEQ_FLAGS[0xc0] SEQ_FLAGS2[0x0] QFREEZE_COUNT[0xfe] 
KERNEL_QFREEZE_COUNT[0xfe] MK_MESSAGE_SCB[0xff00] 
MK_MESSAGE_SCSIID[0xff] SSTAT0[0x2] SSTAT1[0x19] 
SSTAT2[0x0] SSTAT3[0x0] PERRDIAG[0x0] SIMODE1[0xac] 
LQISTAT0[0x0] LQISTAT1[0x0] LQISTAT2[0x0] LQOSTAT0[0x0] 
LQOSTAT1[0x0] LQOSTAT2[0x0] 

SCB Count = 4 CMDS_PENDING = 1 LASTSCB 0x CURRSCB 0x3 NEXTSCB 0x0
qinstart = 56099 qinfifonext = 56099
QINFIFO:
WAITING_TID_QUEUES:
Pending list:
  3 FIFO_USE[0x0] SCB_CONTROL[0x64] SCB_SCSIID[0xf7] 
Total 1
Kernel Free SCB list: 2 1 0 
Sequencer Complete DMA-inprog list: 
Sequencer Complete list: 
Sequencer DMA-Up and Complete list: 
Sequencer On QFreeze and Complete list: 


scsi5: FIFO0 Free, LONGJMP == 0x80ff, SCB 0x0
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x0] DFSTATUS[0x89] 
SG_CACHE_SHADOW[0x2] SG_STATE[0x0] DFFSXFRCTL[0x0] 
SOFFCNT[0x0] MDFFSTAT[0x5] SHADDR = 0x00, SHCNT = 0x0 
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10] 

scsi5: FIFO1 Free, LONGJMP == 0x81f2, SCB 0x3
SEQIMODE[0x3f] SEQINTSRC[0x0] DFCNTRL[0x4] DFSTATUS[0x89] 
SG_CACHE_SHADOW[0x2] SG_STATE[0x0] DFFSXFRCTL[0x0] 
SOFFCNT[0x0] MDFFSTAT[0x5] SHADDR = 0x00, SHCNT = 0x0 
HADDR = 0x00, HCNT = 0x0 CCSGCTL[0x10] 
LQIN: 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 
0x0 0x0 
scsi5: LQISTATE = 0x0, LQOSTATE = 0x0, OPTIONMODE = 0x52
scsi5: OS_SPACE_CNT = 0x20 MAXCMDCNT = 0x0
scsi5: SAVED_SCSIID = 0x0 SAVED_LUN = 0x0
SIMODE0[0xc] 
CCSCBCTL[0x4] 
scsi5: REG0 == 0x, SINDEX = 0x1c0, DINDEX = 0x1be
scsi5: SCBPTR == 0xc0, SCB_NEXT == 0xff00, SCB_NEXT2 == 0x0
CDB c0 0 0 0 0 0
STACK: 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
 Dump Card State Ends 

so, in the end i cannot make a successful btape test run with this LTO-4 drive
with two different scsi cards. i guess my question is, is this a bacula btape 
issue or an LTO or spectralogic scsi issue?

-- michael


On Mon, Oct 29, 2007 at 09:43:09PM -0400, Michael Galloway wrote:
 seem to be having some scsi problems with btape test. this test is with
 a spectra T50/LTO-4 attached via an LSI LSIU320 controller. i ran 100GB
 of data onto the drive with tar with no issue. but when i run this:
 
 ./btape -c bacula-sd.conf /dev/nst0
 
 test
 
 i get:
 
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
 btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
 btape: btape.c:852 Rewind OK.
 1000 blocks re-read correctly.
 29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at file:blk 
 0:1000 on device LTO4 (/dev/nst0). ERR=No such device or address.
 btape: btape.c:864 Read block 1001 failed! ERR=No such device or address
 
 and in the kernel ring buffer log:
 
 st0: Block limits 1 - 16777215 bytes.
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
 mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
 st0: Error

Re: [Bacula-users] scsi problems

2007-10-30 Thread Michael Galloway
ok, btape is running fill now (looks like reading is the issue) but
its slow:

12:14:11 Flush block, write EOF
Wrote blk_block=3425000, dev_blk_num=1000 VolBytes=220,953,535,488 rate=38134.9 
KB/s
Wrote blk_block=3430000, dev_blk_num=6000 VolBytes=221,276,095,488 rate=38157.6 
KB/s
Wrote blk_block=3435000, dev_blk_num=11000 VolBytes=221,598,655,488 
rate=38167.2 KB/s
Wrote blk_block=3440000, dev_blk_num=500 VolBytes=221,921,215,488 rate=38170.1 
KB/s
Wrote blk_block=3445000, dev_blk_num=5500 VolBytes=222,243,775,488 rate=38179.7 
KB/s
Wrote blk_block=3450000, dev_blk_num=5500 VolBytes=222,243,775,488 rate=38179.7 
rate=38189.1 KB/s
Wrote blk_block=3455000, dev_blk_num=15500 VolBytes=222,888,895,488 
rate=38192.1 KB/s
12:15:03 Flush block, write EOF
Wrote blk_block=3460000, dev_blk_num=4000 VolBytes=223,211,455,488 rate=38162.3 
KB/s
Wrote blk_block=3465000, dev_blk_num=9000 VolBytes=223,534,015,488 rate=38171.8 
KB/s
Wrote blk_block=3470000, dev_blk_num=14000 VolBytes=223,856,575,488 
rate=38194.3 KB/s
Wrote blk_block=3475000, dev_blk_num=3500 VolBytes=224,179,135,488 rate=38184.1 
KB/s

38MB/s is only around 140GB/hr. i'd expect a bit more from LTO-4. 
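
That figure is just the reported rate scaled up to an hour; a quick check of
the arithmetic, taking 38 MB/s as decimal megabytes the way dd and btape
report rates:

# 38 MB/s sustained for an hour, expressed in GB/hr
awk 'BEGIN { printf "%.0f GB/hr\n", 38 * 3600 / 1000 }'

which prints 137 GB/hr; for comparison, LTO-4's nominal 120 MB/s native rate
works out to over 400 GB/hr.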

-- michael

On Tue, Oct 30, 2007 at 07:43:14AM +0200, Michael Lewinger wrote:
 Hi Michael,
 
 tar does not verify what it writes, but bacula does. Is the read error on
 subsequent tests failing at the same block?
 
 29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at
 file:blk 0:1000
 
 I'd suggest cleaning the tape head anyway (how frequently do you do it?),
 reformatting the tape, and checking again.
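
A minimal sketch of the reformat-and-recheck part, assuming the drive is
reachable as /dev/nst0 as above (head cleaning itself needs a cleaning
cartridge loaded by hand or via the changer, and mt's erase can take a very
long time on LTO media):

# rewind, erase the tape from the beginning, then confirm drive status
mt -f /dev/nst0 rewind
mt -f /dev/nst0 erase
mt -f /dev/nst0 status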
 
 Michael
 
 On 10/30/07, Michael Galloway [EMAIL PROTECTED] wrote:
 
  seem to be having some scsi problems with btape test. this test is with
  a spectra T50/LTO-4 attached via an LSI LSIU320 controller. i ran 100GB
  of data onto the drive with tar with no issue. but when i run this:
 
  ./btape -c bacula-sd.conf /dev/nst0
 
  test
 
  i get:
 
  *test
 
  === Write, rewind, and re-read test ===
 
  I'm going to write 1000 records and an EOF
  then write 1000 records and an EOF, then rewind,
  and re-read the data to verify that it is correct.
 
  This is an *essential* feature ...
 
  btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
  btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
  btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
  btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
  btape: btape.c:852 Rewind OK.
  1000 blocks re-read correctly.
  29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at
  file:blk 0:1000 on device LTO4 (/dev/nst0). ERR=No such device or address.
  btape: btape.c:864 Read block 1001 failed! ERR=No such device or address
 
  and in the kernel ring buffer log:
 
  st0: Block limits 1 - 16777215 bytes.
  mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
  mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
  mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
  st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
  mptscsih: ioc0: attempting task abort! (sc=81011bf35240)
  st 5:0:15:0:
  command: Read(6): 08 00 00 fc 00 00
  mptbase: Initiating ioc0 recovery
  mptscsih: ioc0: task abort: SUCCESS (sc=81011bf35240)
  mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
  mptscsih: ioc0: attempting target reset! (sc=81011bf35240)
  st 5:0:15:0:
  command: Read(6): 08 00 00 fc 00 00
  mptscsih: ioc0: target reset: SUCCESS (sc=81011bf35240)
  mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
  mptscsih: ioc0: attempting bus reset! (sc=81011bf35240)
  st 5:0:15:0:
  command: Read(6): 08 00 00 fc 00 00
  mptscsih: ioc0: bus reset: SUCCESS (sc=81011bf35240)
  mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
  mptscsih: ioc0: Attempting host reset! (sc=81011bf35240)
  mptbase: Initiating ioc0 recovery
  mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
  st 5:0:15:0: scsi: Device offlined - not ready after error recovery
  st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
  target5:0:15: Beginning Domain Validation
  target5:0:15: Domain Validation skipping write tests
  target5:0:15: Ending Domain Validation
  target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 
  i've reseated my cables and terminator. reseated the scsi card. any idea
  where the problem

Re: [Bacula-users] scsi problems

2007-10-30 Thread Michael Galloway
nothing is solved yet. i rebooted everything this morning to make sure the
scsi bus was cleared and reset. then i decided to try the individual btape
tests to see if i could isolate the issue. the fill test has been running all day
and is still running:

16:23:27 Flush block, write EOF
Wrote blk_block=12325000, dev_blk_num=5000 VolBytes=795,110,335,488 
rate=38313.0 KB/s
Wrote blk_block=12330000, dev_blk_num=10000 VolBytes=795,432,895,488 
rate=38313.8 KB/s

i get the impression that the issue was with the read part of the btape test. i may
upgrade the mpt drivers for the LSI controller when this finishes as well. right
now they are version 3.x:

# cat /proc/mpt/version 
mptlinux-3.04.02
  Fusion MPT base driver
  Fusion MPT SPI host driver

and version 4.00.13.04-1-rhel5.x86_64 is available.
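
a small sketch for comparing the driver built into the running kernel with
whatever an updated mptlinux package installs, assuming the Fusion SPI driver
is packaged as the mptbase/mptspi modules as on stock CentOS 5:

# version of the Fusion MPT driver the kernel is using right now
cat /proc/mpt/version

# version and path of the module files that would load on next boot
modinfo mptbase | grep -Ei '^(filename|version)'
modinfo mptspi  | grep -Ei '^(filename|version)'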

-- michael



On Tue, Oct 30, 2007 at 10:20:02PM +0200, Michael Lewinger wrote:
 Hi Michael,
 
 So - what was the issue ? How was it solved ?
 
 michael
 
 On 10/30/07, Ralf Gross [EMAIL PROTECTED] wrote:
 
  Michael Galloway schrieb:
   Wrote blk_block=3465000, dev_blk_num=9000 VolBytes=223,534,015,488 rate=
  38171.8 KB/s
 Wrote blk_block=3470000, dev_blk_num=14000 VolBytes=223,856,575,488
  rate=38194.3 KB/s
   Wrote blk_block=3475000, dev_blk_num=3500 VolBytes=224,179,135,488 rate=
  38184.1 KB/s
  
   38MB/s is only around 140GB/hr. i'd expect a bit more from LTO-4.
 
  I get ~77 MB/s with a HP Ultrium LTP-4 drive during full backups and
  spooling (write speed to tape).
 
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 
  I don't see the errors you get. But I also see FAST-80 WIDE SCSI
  160.0 MB/s messages in my kernel log. The HP tool ltt reports U160
  connections for both of my drives.  Although the scsi controller's bios
  claims during boot that the drives are connected with U320 and the
  drives are capable of U320.
 
  After boot I checked the scsi parameters and it looks like 6.25 would be
  the correct value for U320. But the HP tool still shows that the
  drives are connected with U160.
 
  /sys/class/spi_transport/target5\:0\:1/min_period
  6.25
 
  Sorry for hijacking one half of your thread, but I would like to know
  if anyone has seen a FAST-160 message in the kernel log with tape
  drives.  Not that I could saturate a U160 connection, but I would like
  to know why it's only connecting with U160 speed. Another LSI
  controller which is used for a RAID device shows the proper U320
  (FAST-160) value during boot.
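
One way to dump the negotiated SPI parameters for every target at once,
assuming the same sysfs layout as the min_period file quoted above (period,
offset and width are standard spi_transport attributes); 12.5 ns with DT is
what shows up as the FAST-80 / 160.0 MB/s line, while 6.25 ns would appear as
FAST-160 / 320 MB/s, i.e. U320:

# negotiated transfer parameters per SCSI target
grep . /sys/class/spi_transport/target*/period \
       /sys/class/spi_transport/target*/offset \
       /sys/class/spi_transport/target*/width 2>/dev/null

# what the kernel negotiated at the last domain validation
dmesg | grep -E 'FAST-(80|160) WIDE SCSI'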
 
  Ralf
 
  -
  This SF.net email is sponsored by: Splunk Inc.
  Still grepping through log files to find problems?  Stop.
  Now Search log events and configuration files using AJAX and a browser.
  Download your FREE copy of Splunk now  http://get.splunk.com/
  ___
  Bacula-users mailing list
  Bacula-users@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bacula-users
 
 
 
 
 -- 
 Michael Lewinger
 MBR Computers
 http://mbrcomp.co.il

 -
 This SF.net email is sponsored by: Splunk Inc.
 Still grepping through log files to find problems?  Stop.
 Now Search log events and configuration files using AJAX and a browser.
 Download your FREE copy of Splunk now  http://get.splunk.com/

 ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] scsi problems

2007-10-30 Thread Michael Galloway
:)

i may be off the mark, but not that far. i was able to write and read data
from the device, and the device reports itself as a scsi tape:

# mt -f /dev/st0 status
SCSI 2 tape drive:
File number=-1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x0 (default).
Soft error count since last status=0
General status bits on (1):
 IM_REP_EN

# lsscsi
[0:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sda
[1:0:0:0]diskATA  TOSHIBA MK2035GS DK02  /dev/sdb
[4:0:0:0]diskAMCC 9650SE-16M DISK  3.06  /dev/sdc
[5:0:15:0]   tapeIBM  ULTRIUM-TD4  7950  /dev/st0
[5:0:15:1]   mediumx SPECTRA  PYTHON   2000  -   

so, that probably is not my issue.

-- michael

On Tue, Oct 30, 2007 at 11:05:17PM +0200, Michael Lewinger wrote:
 Hi Michael,
 
 This looks a bit like your problem; error 8 is quite common (google) but
 only one link produced a valid answer that actually solved the problem. I
 won't post the contents as it is quite embarrassing.
 
 http://unix.derkeiler.com/Newsgroups/comp.unix.solaris/2003-05/2858.html
 
 Pls keep updating.
 
 Michael
 
 On 10/30/07, Michael Galloway [EMAIL PROTECTED] wrote:
 
  nothing is solved yet. i rebooted everything this morning to make sure the
  scsi bus was cleared and reset. then i decided to try the individual btape
  tests to see if i could isolate the issue. the fill test has been running
  all day and is
  still running:
 
  16:23:27 Flush block, write EOF
  Wrote blk_block=12325000, dev_blk_num=5000 VolBytes=795,110,335,488 rate=
  38313.0 KB/s
  Wrote blk_block=12330000, dev_blk_num=10000 VolBytes=795,432,895,488 rate=
  38313.8 KB/s
 
  i get the impression that the issue was with the read part of the btape
  test. i may
  upgrade the mpt drivers for the LSI controller when this finishes as well.
  right
  now they are version 3.x:
 
  # cat /proc/mpt/version
  mptlinux-3.04.02
Fusion MPT base driver
Fusion MPT SPI host driver
 
  and version 4.00.13.04-1-rhel5.x86_64 is available.
 
  -- michael
 
 
 
  On Tue, Oct 30, 2007 at 10:20:02PM +0200, Michael Lewinger wrote:
   Hi Michael,
  
   So - what was the issue ? How was it solved ?
  
   michael
  
   On 10/30/07, Ralf Gross [EMAIL PROTECTED] wrote:
   
Michael Galloway schrieb:
 Wrote blk_block=3465000, dev_blk_num=9000 VolBytes=223,534,015,488
  rate=
38171.8 KB/s
 Wrote blk_block=3470000, dev_blk_num=14000 VolBytes=223,856,575,488
rate=38194.3 KB/s
 Wrote blk_block=3475000, dev_blk_num=3500 VolBytes=224,179,135,488
  rate=
38184.1 KB/s

 38MB/s is only around 140GB/hr. i'd expect a bit more from LTO-4.
   
I get ~77 MB/s with a HP Ultrium LTP-4 drive during full backups and
spooling (write speed to tape).
   
   target5:0:15: Beginning Domain Validation
   target5:0:15: Domain Validation skipping write tests
   target5:0:15: Ending Domain Validation
   target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset
  126)
   target5:0:15: Beginning Domain Validation
   target5:0:15: Domain Validation skipping write tests
   target5:0:15: Ending Domain Validation
   target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset
  126)
   target5:0:15: Beginning Domain Validation
   target5:0:15: Domain Validation skipping write tests
   target5:0:15: Ending Domain Validation
   target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset
  126)
   target5:0:15: Beginning Domain Validation
   target5:0:15: Domain Validation skipping write tests
   target5:0:15: Ending Domain Validation
   target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset
  126)
   
I don't see the errors you get. But I also see FAST-80 WIDE SCSI
 160.0 MB/s messages in my kernel log. The HP tool ltt reports U160
 connections for both of my drives.  Although the scsi controller's bios
claims during boot that the drives are connected with U320 and the
drives are capable of U320.
   
 After boot I checked the scsi parameters and it looks like 6.25 would
  be
the correct value for U320. But the HP tool still shows that the
drives are connected with U160.
   
/sys/class/spi_transport/target5\:0\:1/min_period
6.25
   
Sorry for hijacking one half of your thread, but I would like to know
if anyone has seen a FAST-160 message in the kernel log with tape
drives.  Not that I could saturate a U160 connection, but I would like
 to know why it's only connecting with U160 speed. Another LSI
 controller which is used for a RAID device shows the proper U320
(FAST-160) value during boot.
   
Ralf
   
   
  -
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a
  browser.
Download your FREE copy of Splunk now

[Bacula-users] scsi problems

2007-10-29 Thread Michael Galloway
seem to be having some scsi problems with btape test. this test is with
a spectra T50/LTO-4 attached via an LSI LSIU320 controller. i ran 100GB
of data onto the drive with tar with no issue. but when i run this:

./btape -c bacula-sd.conf /dev/nst0

test

i get:

*test

=== Write, rewind, and re-read test ===

I'm going to write 1000 records and an EOF
then write 1000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:827 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
btape: btape.c:843 Wrote 1000 blocks of 64412 bytes.
btape: btape.c:501 Wrote 1 EOF to LTO4 (/dev/nst0)
btape: btape.c:852 Rewind OK.
1000 blocks re-read correctly.
29-Oct 21:27 btape JobId 0: Error: block.c:995 Read error on fd=3 at file:blk 
0:1000 on device LTO4 (/dev/nst0). ERR=No such device or address.
btape: btape.c:864 Read block 1001 failed! ERR=No such device or address

and in the kernel ring buffer log:

st0: Block limits 1 - 16777215 bytes.
mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
mptbase: ioc0: LogInfo(0x11010f00): F/W: bug! MID not found
mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
mptscsih: ioc0: attempting task abort! (sc=81011bf35240)
st 5:0:15:0: 
command: Read(6): 08 00 00 fc 00 00
mptbase: Initiating ioc0 recovery
mptscsih: ioc0: task abort: SUCCESS (sc=81011bf35240)
mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
mptscsih: ioc0: attempting target reset! (sc=81011bf35240)
st 5:0:15:0: 
command: Read(6): 08 00 00 fc 00 00
mptscsih: ioc0: target reset: SUCCESS (sc=81011bf35240)
mptbase: ioc0: IOCStatus(0x0043): SCSI Device Not There
mptscsih: ioc0: attempting bus reset! (sc=81011bf35240)
st 5:0:15:0: 
command: Read(6): 08 00 00 fc 00 00
mptscsih: ioc0: bus reset: SUCCESS (sc=81011bf35240)
mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
mptscsih: ioc0: Attempting host reset! (sc=81011bf35240)
mptbase: Initiating ioc0 recovery
mptbase: ioc0: IOCStatus(0x0047): SCSI Protocol Error
st 5:0:15:0: scsi: Device offlined - not ready after error recovery
st0: Error 8 (sugg. bt 0x0, driver bt 0x0, host bt 0x8).
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)
 target5:0:15: Beginning Domain Validation
 target5:0:15: Domain Validation skipping write tests
 target5:0:15: Ending Domain Validation
 target5:0:15: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 126)

i've reseated my cables and terminator. reseated the scsi card. any idea
where the problem is? this is centOS 5, kernel is:

2.6.18-8.1.14.el5 #1 SMP Thu Sep 27 19:05:32 EDT 2007 x86_64 x86_64 x86_64 
GNU/Linux

-- michael

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] stored build problems centOS5/postgresql

2007-10-25 Thread Michael Galloway
fwiw, i get this same build failure if i use the postgresql 8.2.5 rpms from the
postgres project.

has anyone managed to build 2.2.5 on centOS 5 with postgres? 
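
The undefined SSL_* and X509_* references in the log quoted below suggest the
link is picking up the static libpq.a (built with OpenSSL support) without the
OpenSSL libraries themselves. A possible workaround, assuming openssl-devel is
installed and the configure options shown below are otherwise kept, would be
to hand the OpenSSL libs to configure and rebuild:

# add OpenSSL to the link line so the static libpq.a can resolve
# its SSL_*/X509_* symbols, then rebuild
./configure --with-postgresql LIBS="-lssl -lcrypto"
make

Enabling Bacula's own OpenSSL support (--with-openssl) should have a similar
side effect, since it puts -lssl -lcrypto on the link line anyway.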

-- michael

On Wed, Oct 24, 2007 at 12:25:28PM -0400, Michael Galloway wrote:
 good day all, i'm trying to build 2.2.5 on centOS5 with postgresql. my config 
 looks
 like this:
 
   Database lib:   -L/usr/lib64 -lpq -lcrypt
   Database name:  bacula
   Database user:  bacula
 
   Job Output Email:   
   Traceback Email:
   SMTP Host Address:  
 
   Director Port:  9101
   File daemon Port:   9102
   Storage daemon Port:9103
 
   Director User:  
   Director Group: 
   Storage Daemon User:
   Storage DaemonGroup:
   File Daemon User:   
   File Daemon Group:  
 
   SQL binaries Directory  /usr/bin
 
   Large file support: yes
   Bacula conio support:   yes -ltermcap
   readline support:   no 
   TCP Wrappers support:   no 
   TLS support:no
   Encryption support: no
   ZLIB support:   yes
   enable-smartalloc:  yes
   bat support:no 
   enable-gnome:   no 
   enable-bwx-console: no 
   enable-tray-monitor:
   client-only:no
   build-dird: yes
   build-stored:   yes
   ACL support:yes
   Python support: yes -L/usr/lib64/python2.4/config -lpython2.4 
 -lutil -lrt 
   Batch insert enabled:   yes
 
 but the build fails in building stored with this:
 
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/ip.c:81: warning: 
 Using 'getaddrinfo' in statically linked applications requires at runtime the 
 shared libraries from the glibc version used for linking
 ../lib/libbac.a(bnet.o): In function `resolv_host':
 /usr/local/bacula-2.2.5/src/lib/bnet.c:424: warning: Using 'gethostbyname2' 
 in statically linked applications requires at runtime the shared libraries 
 from the glibc version used for linking
 ../lib/libbac.a(address_conf.o): In function `add_address':
 /usr/local/bacula-2.2.5/src/lib/address_conf.c:310: warning: Using 
 'getservbyname' in statically linked applications requires at runtime the 
 shared libraries from the glibc version used for linking
 /usr/lib64/libpq.a(fe-misc.o): In function `pqSocketCheck':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-misc.c:972: 
 undefined reference to `SSL_pending'
 /usr/lib64/libpq.a(fe-secure.o): In function `SSLerrmessage':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1198: 
 undefined reference to `ERR_get_error'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1204: 
 undefined reference to `ERR_reason_error_string'
 /usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_write':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:415: 
 undefined reference to `SSL_write'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:416: 
 undefined reference to `SSL_get_error'
 /usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_read':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:324: 
 undefined reference to `SSL_read'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:325: 
 undefined reference to `SSL_get_error'
 /usr/lib64/libpq.a(fe-secure.o): In function `close_SSL':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1165: 
 undefined reference to `SSL_shutdown'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1166: 
 undefined reference to `SSL_free'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1172: 
 undefined reference to `X509_free'
 /usr/lib64/libpq.a(fe-secure.o): In function `open_client_SSL':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1035: 
 undefined reference to `SSL_connect'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1038: 
 undefined reference to `SSL_get_error'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1115: 
 undefined reference to `SSL_get_peer_certificate'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
 undefined reference to `X509_get_subject_name'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
 undefined reference to `X509_NAME_oneline'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
 undefined reference to `X509_get_subject_name'
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
 undefined reference to `X509_NAME_get_text_by_NID'
 /usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_open_client':
 /builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
 undefined reference

[Bacula-users] 2.2.4 ok to run?

2007-10-25 Thread Michael Galloway
ok, since i cannot get 2.2.5 to build, i guess i will have to drop back to the 
2.2.4 EL5/centOS5
rpms off sourceforge. is 2.2.4 ok to run, or are there outstanding bugs in it?

-- michael

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] stored build problems centOS5/postgresql

2007-10-24 Thread Michael Galloway
good day all, i'm trying to build 2.2.5 on centOS5 with postgresql. my config 
looks
like this:

  Database lib:   -L/usr/lib64 -lpq -lcrypt
  Database name:  bacula
  Database user:  bacula

  Job Output Email:   
  Traceback Email:
  SMTP Host Address:  

  Director Port:  9101
  File daemon Port:   9102
  Storage daemon Port:9103

  Director User:  
  Director Group: 
  Storage Daemon User:
  Storage DaemonGroup:
  File Daemon User:   
  File Daemon Group:  

  SQL binaries Directory  /usr/bin

  Large file support: yes
  Bacula conio support:   yes -ltermcap
  readline support:   no 
  TCP Wrappers support:   no 
  TLS support:no
  Encryption support: no
  ZLIB support:   yes
  enable-smartalloc:  yes
  bat support:no 
  enable-gnome:   no 
  enable-bwx-console: no 
  enable-tray-monitor:
  client-only:no
  build-dird: yes
  build-stored:   yes
  ACL support:yes
  Python support: yes -L/usr/lib64/python2.4/config -lpython2.4 
-lutil -lrt 
  Batch insert enabled:   yes

but the build fails in building stored with this:

/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/ip.c:81: warning: 
Using 'getaddrinfo' in statically linked applications requires at runtime the 
shared libraries from the glibc version used for linking
../lib/libbac.a(bnet.o): In function `resolv_host':
/usr/local/bacula-2.2.5/src/lib/bnet.c:424: warning: Using 'gethostbyname2' in 
statically linked applications requires at runtime the shared libraries from 
the glibc version used for linking
../lib/libbac.a(address_conf.o): In function `add_address':
/usr/local/bacula-2.2.5/src/lib/address_conf.c:310: warning: Using 
'getservbyname' in statically linked applications requires at runtime the 
shared libraries from the glibc version used for linking
/usr/lib64/libpq.a(fe-misc.o): In function `pqSocketCheck':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-misc.c:972: 
undefined reference to `SSL_pending'
/usr/lib64/libpq.a(fe-secure.o): In function `SSLerrmessage':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1198: 
undefined reference to `ERR_get_error'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1204: 
undefined reference to `ERR_reason_error_string'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_write':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:415: 
undefined reference to `SSL_write'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:416: 
undefined reference to `SSL_get_error'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_read':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:324: 
undefined reference to `SSL_read'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:325: 
undefined reference to `SSL_get_error'
/usr/lib64/libpq.a(fe-secure.o): In function `close_SSL':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1165: 
undefined reference to `SSL_shutdown'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1166: 
undefined reference to `SSL_free'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1172: 
undefined reference to `X509_free'
/usr/lib64/libpq.a(fe-secure.o): In function `open_client_SSL':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1035: 
undefined reference to `SSL_connect'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1038: 
undefined reference to `SSL_get_error'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1115: 
undefined reference to `SSL_get_peer_certificate'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
undefined reference to `X509_get_subject_name'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1128: 
undefined reference to `X509_NAME_oneline'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
undefined reference to `X509_get_subject_name'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:1132: 
undefined reference to `X509_NAME_get_text_by_NID'
/usr/lib64/libpq.a(fe-secure.o): In function `pqsecure_open_client':
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_new'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_set_ex_data'
/builddir/build/BUILD/postgresql-8.1.9/src/interfaces/libpq/fe-secure.c:270: 
undefined reference to `SSL_set_fd'
/usr/lib64/libpq.a(fe-secure.o): In function `destroy_SSL':
