Re: [Bacula-users] Autochanger: Tapes are sometimes not changed automatically

2005-10-17 Thread Christian Theune
Hi,

OK, here is a dump from -d400 that shows everything between starting the storage daemon and the failed tape change.

When starting up, the drive has volume 306 from slot 7 loaded. That volume is
full and no longer appendable. Bacula recycles volume 300 and wants it
loaded from slot 1. This is all correct, but then it stops; I don't see
the actual error in there.
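For reference, the changer can be exercised by hand with the same script and arguments the SD uses; the device paths are taken from the log below. A sketch only, not verified against this hardware:

```
# which slot is loaded in drive 0? (0 means the drive is empty)
/etc/bacula/mtx-changer /dev/sg3 loaded 0 /dev/nst0 0
# return the current tape to slot 7, then load slot 1 into drive 0
/etc/bacula/mtx-changer /dev/sg3 unload 7 /dev/nst0 0
/etc/bacula/mtx-changer /dev/sg3 load 1 /dev/nst0 0
```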
Log is attached.

Cheers,
Christian
bacula-sd: lex.c:148 Open config file: /etc/bacula/bacula-sd.conf
bacula-sd: stored_conf.c:613 Inserting director res: Monitor
bacula-sd: lex.c:148 Open config file: /etc/bacula/bacula-sd.conf
bacula-sd: message.c:238 Copy message resource 0x80af3e0 to 0x80ae010
bart-sd: bsys.c:498 Could not open state file. sfd=-1 size=188: ERR=No such file or directory
bart-sd: bpipe.c:291 Run program returning 0
bart-sd: bnet_server.c:83 Addresses host[ipv4:0.0.0.0:9103]
bart-sd: stored.c:492 calling init_dev /dev/nst0
bart-sd: dev.c:242 init_dev: tape=2 dev_name=/dev/nst0
bart-sd: stored.c:494 SD init done /dev/nst0
bart-sd: autochanger.c:191 Locking changer GoceptChanger
bart-sd: bpipe.c:291 Run program returning 0
bart-sd: autochanger.c:162 run_prog: /etc/bacula/mtx-changer /dev/sg3 loaded 0 /dev/nst0 0 stat=0 result=7

bart-sd: autochanger.c:200 Unlocking changer GoceptChanger
bart-sd: stored.c:507 calling first_open_device GoceptChangerDev (/dev/nst0)
bart-sd: device.c:246 start open_output_device()
bart-sd: device.c:265 Opening device.
bart-sd: dev.c:277 open dev: tape=2 dev_name=GoceptChangerDev (/dev/nst0) vol= mode=OPEN_READ_ONLY
bart-sd: dev.c:335 open dev: device is tape
bart-sd: autochanger.c:191 Locking changer GoceptChanger
bart-sd: bpipe.c:291 Run program returning 0
bart-sd: autochanger.c:162 run_prog: /etc/bacula/mtx-changer /dev/sg3 loaded 0 /dev/nst0 0 stat=0 result=7

bart-sd: autochanger.c:200 Unlocking changer GoceptChanger
bart-sd: dev.c:358 Try open GoceptChangerDev (/dev/nst0) mode=OPEN_READ_ONLY nonblocking=2048
bart-sd: dev.c:397 openmode=3 OPEN_READ_ONLY
bart-sd: dev.c:410 open dev: tape 8 opened
bart-sd: device.c:272 open dev GoceptChangerDev (/dev/nst0) OK
bart-sd: label.c:71 Enter read_volume_label device=GoceptChangerDev (/dev/nst0) vol= dev_Vol=*NULL*
bart-sd: dev.c:654 rewind_dev fd=8 GoceptChangerDev (/dev/nst0)
bart-sd: label.c:138 Big if statement in read_volume_label
bart-sd: block.c:879 Full read() in read_block_from_device() len=64512
bart-sd: block.c:943 Read device got 64512 bytes at 0:0
bart-sd: block.c:284 unser_block_header block_len=197
bart-sd: block.c:295 Read binbuf = 173 24 block_len=197
bart-sd: block.c:1065 At end of read block
bart-sd: block.c:1078 Exit read_block read_len=64512 block_len=197
bart-sd: label.c:755 unser_vol_label

Volume Label:
Id: Bacula 1.0 immortal
VerNo : 11
VolName   : DailySet306
PrevVolName   :
VolFile   : 0
LabelType : VOL_LABEL
LabelSize : 161
PoolName  : ChangerPool
MediaType : DLT40
PoolType  : Backup
HostName  : bart
Date label written: 04-Aug-2005 23:21
bart-sd: reserve.c:118 new_volume DailySet306
bart-sd: label.c:206 Compare Vol names: VolName= hdr=DailySet306
bart-sd: label.c:222 Copy vol_name=DailySet306

Volume Label:
Id: Bacula 1.0 immortal
VerNo : 11
VolName   : DailySet306
PrevVolName   :
VolFile   : 0
LabelType : VOL_LABEL
LabelSize : 161
PoolName  : ChangerPool
MediaType : DLT40
PoolType  : Backup
HostName  : bart
Date label written: 04-Aug-2005 23:21
bart-sd: label.c:227 Leave read_volume_label() VOL_OK
bart-sd: dev.c:654 rewind_dev fd=8 GoceptChangerDev (/dev/nst0)
bart-sd: bnet.c:1125 who=client host=10.1.1.40 port=36643
bart-sd: dircmd.c:157 Conn: Hello Director bart-dir calling
bart-sd: dircmd.c:166 Start Dir Job
bart-sd: cram-md5.c:52 send: auth cram-md5 [EMAIL PROTECTED] ssl=0
bart-sd: cram-md5.c:68 Authenticate OK nzlihT+eh4/pO6sBenBjDB
bart-sd: cram-md5.c:97 cram-get: auth cram-md5 [EMAIL PROTECTED] ssl=0
bart-sd: cram-md5.c:114 sending resp to challenge: t6+BM9/7J50H5hBkAz+ziD
bart-sd: dircmd.c:187 Message channel init completed.
bart-sd: dircmd.c:194 dird: JobId=665 job=BaculaCatalog.2005-10-17_09.34.06 job_name=BaculaCatalog client_name=bart-fd type=66 level=70 FileSet=Catalog NoAttr=0 SpoolAttr=0 FileSetMD5=f7/JDyhoN90Uo7/wui/xgD SpoolData=1 WritePartAfterJob=0 PreferMountedVols=1

bart-sd: dircmd.c:208 Do command: JobId=
bart-sd: job.c:70 dird: JobId=665 job=BaculaCatalog.2005-10-17_09.34.06 job_name=BaculaCatalog client_name=bart-fd type=66 level=70 FileSet=Catalog NoAttr=0 SpoolAttr=0 FileSetMD5=f7/JDyhoN90Uo7/wui/xgD SpoolData=1 WritePartAfterJob=0 PreferMountedVols=1
bart-sd: job.c:123 dird: 3000 OK Job SDid=1 SDtime=1129534433 Authorization=BJCD-HHDG-POPI-CHJB-JDKA-PCKP-PGDG-LBHL
bart-sd: dircmd.c:194 dird: use storage=GoceptChanger media_type=DLT40 pool_name=ChangerPool pool_type=Backup append=1 copy=0 stripe=0

bart-sd: 

Re: [Bacula-users] Re: OT: downloadable bacula mailing list archives?

2005-10-17 Thread Thorsten Huber
Hi Felix,

On Fri, Oct 14, 2005 at 10:06:57AM +0200, Felix Schwarz wrote:
 Thorsten Huber wrote:
  Is there a downloadable archive of the bacula-users and
  bacula-devel mailing lists somewhere? I cannot find such a service on
  SourceForge, or any hints in recent mails in the SourceForge
  web interface to the Bacula archives.
 
 have a look at:
 http://gmane.org/info.php?group=gmane.comp.bacula.user
 http://gmane.org/info.php?group=gmane.comp.sysutils.backup.bacula.devel

That's great. The Gmane NNTP feed is exactly what I was looking for.

-- 
Gruss / Best regards  |  LF.net GmbH|  fon +49 711 90074-414
Thorsten Huber|  Ruppmannstrasse 27 |  fax +49 711 90074-33
[EMAIL PROTECTED] |  D-70565 Stuttgart  |  http://www.lf.net 




---
This SF.Net email is sponsored by:
Power Architecture Resource Center: Free content, downloads, discussions,
and more. http://solutions.newsforge.com/ibmarch.tmpl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] WEB Interface to Bacula

2005-10-17 Thread Danie Theron

Phil Stracchino wrote:

Yu Safin wrote:
  

Is there any web interface to Bacula that would do some or all of what
you can do with ./bconsole for restores?
Our servers don't have any X, but they run Apache.



There is not at this time any Bacula web administration GUI.  I believe
several people are working on creating one.
  
There is a distro that includes a web interface for Bacula (though not
bconsole); go to www.clarkconnect.org


  






Re: [Bacula-users] Autochanger: Tapes are sometimes not changed automatically

2005-10-17 Thread Christian Theune
Hi,

I'm stupid. I found the error right after sending the mail. Half of my volumes were set to slot=0. It's much better now. :)

On 10/17/05, Christian Theune [EMAIL PROTECTED] wrote:

Hi,

OK, here is a dump from -d400 that shows everything between starting the storage daemon and the failed tape change.

When starting up, the drive has volume 306 from slot 7 loaded. That volume is
full and no longer appendable. Bacula recycles volume 300 and wants it
loaded from slot 1. This is all correct, but then it stops; I don't see
the actual error in there.
Actually I misread the part with the slot=1 ... 
Thanks for helping,
Christian
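For what it's worth, slot assignments can be refreshed from bconsole with the update slots command, which queries the autochanger (via mtx-changer) and rewrites the Slot values in the catalog; a sketch, assuming a working changer script:

```
# in bconsole, after selecting the autochanger Storage resource:
update slots
# without a barcode reader, read each tape's label instead (slow):
update slots scan
```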



Re: [Bacula-users] Autochanger: Tapes are sometimes not changed automatically

2005-10-17 Thread Arno Lehmann

Hello,

On 17.10.2005 09:42, Christian Theune wrote:


Hi,

ok, here is a dump from -d400 that shows everything between starting the 
storage daemon and the not-working tape change.


When starting up, the drive has volume 306 from slot 7 loaded. This one 
is full and not appendable anymore. It recycles volume 300 and wants it 
loaded from slot 1. This is all correct. But then it stops. I don't see 
the actual error in there.


Log is attached.


Ok, I'll go through it.


Cheers,
Christian




bacula-sd: lex.c:148 Open config file: /etc/bacula/bacula-sd.conf
bacula-sd: stored_conf.c:613 Inserting director res: Monitor
bacula-sd: lex.c:148 Open config file: /etc/bacula/bacula-sd.conf
bacula-sd: message.c:238 Copy message resource 0x80af3e0 to 0x80ae010
bart-sd: bsys.c:498 Could not open state file. sfd=-1 size=188: ERR=No such 
file or directory


This is an indication of an incorrect installation or wrong 
configuration, I think.




[Bacula-users] Barcode labels with Bacula

2005-10-17 Thread Ed Clarke
I'm using an ADIC FastStor22 library with Bacula right now with no 
problems. I bought this library on eBay to get away from the ancient 
Exabyte library that I had been using. Although it was not mentioned in 
the listing, it seems that the library includes the optional barcode 
reader. None of my tapes have barcode labels on them.

How, exactly, do these labels relate to the Bacula label? I just ordered 
some labels from a guy (again on eBay) because they were surplus and 
cheap ($14 including shipping versus $75+ for HP labels from a store). 
I don't get to pick what's printed on the labels. If Bacula and the 
barcode have to match, that's not a problem, but I do need to know about it.
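As far as I understand it, the barcode only has to match the Bacula volume name if you let Bacula name the volumes after the barcodes, which the console can do; a sketch, assuming the changer and barcode reader are configured:

```
# in bconsole: create and label a volume for each barcoded tape
label barcodes
# keep the catalog's slot numbers in sync with the magazine afterwards
update slots
```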






[Bacula-users] Connection to remote Storage Daemon hangs

2005-10-17 Thread frank

Hi there,

I am trying to set up Bacula to back up over an OpenVPN tunnel, so far 
without luck. If I back up locally to a mounted file, it works without problems; 
however, whenever I run that same job to store its data on a remote SD, 
the connection to the SD stops responding. When I retrieve a status 
from the client, it always hangs at the same point, “SDReadSeqNo=5 fd=7”:

Running Jobs:
Director connected at: 17-Oct-05 22:55
JobId 29 Job Client1.2005-10-17_22.49.05 is running.
Backup Job started: 17-Oct-05 22:49
Files=340 Bytes=289,365 Bytes/sec=817
Files Examined=360
Processing file: /var/www/html/images/favicon.ico
SDReadSeqNo=5 fd=7

When I ask for the status of the SD while the job is running, the connection 
hangs; when no jobs are running, I can query the SD's status successfully.

After some time the job terminates with the error message “Broken pipe”. As 
suggested in the documentation, I experimented with the heartbeat parameter, 
but without success.

Full log result of a job failure:

17-Oct 21:19 host1-dir: No prior Full backup Job record found.
17-Oct 21:19 host1-dir: No prior or suitable Full backup found. Doing FULL 
backup.
17-Oct 21:19 host1-dir: Start Backup JobId 21, Job=Client1.2005-10-17_21.19.54 
17-Oct 21:20 host2-sd: Volume Subversion previously written, moving to end of 
data.
17-Oct 21:38 host1-fd: Client1.2005-10-17_21.19.54 Fatal error: backup.c:477 
Network send error 4298 to SD. ERR=Broken pipe 17-Oct 21:39 host1-dir: 
Client1.2005-10-17_21.19.54 Error: Bacula 1.36.3 (22Apr05): 17-Oct-2005 21:39:02
  JobId:  21
  Job:Client1.2005-10-17_21.19.54
  Backup Level:   Full (upgraded from Incremental)
  Client: host1-fd
  FileSet:Full Set 2005-10-15 20:27:09
  Pool:   Default
  Storage:File
  Start time: 17-Oct-2005 21:19:56
  End time:   17-Oct-2005 21:39:02
  FD Files Written:   359
  SD Files Written:   0
  FD Bytes Written:   358,575
  SD Bytes Written:   0
  Rate:   0.3 KB/s
  Software Compression:   1.2 %
  Volume name(s): 
  Volume Session Id:  1
  Volume Session Time:1129555063
  Last Volume Bytes:  1
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Running
  Termination:*** Backup Error ***

I run Bacula version 1.36.3 on Fedora Core 4, installed from the SourceForge 
RPM. /lib/tls is disabled using LD_ASSUME_KERNEL=2.4.19 in the boot scripts.

I'm running out of options; any pointers are appreciated.

regards
Frank





Re: [Bacula-users] Connection to remote Storage Daemon hangs

2005-10-17 Thread Florian Schnabel

frank wrote:

Hi there,

I try to setup Bacula to backup via an OpenVPN tunnel connection, so far 
without luck. [...]

After some time the job terminates with the error message “Broken Pipe”. As 
indicated in the documentation, I played around with the heartbeat parameter 
however without success.

[full log snipped]

regards
Frank


The broken pipe only means that your client closed the connection 
ungracefully, i.e. you just closed it without using quit.

Try waiting; it may just take a while.

Florian




RE: [Bacula-users] Connection to remote Storage Daemon hangs

2005-10-17 Thread frank

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Florian 
 Schnabel
 Sent: Monday, October 17, 2005 10:47 PM
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Connection to remote Storage Daemon hangs
 
 frank wrote:
  Hi there,

  I try to setup Bacula to backup via an OpenVPN tunnel connection, so far 
  without luck. [...]

  After some time the job terminates with the error message “Broken Pipe”. As 
  indicated in the documentation, I played around with the heartbeat 
  parameter however without success.

  [full log snipped]

  regards
  Frank
 
 The broken pipe only means that your client closed the connection 
 ungracefully, i.e. you just closed it without using quit.
 
 Try waiting; it may just take a while.
 
 Florian

The job terminates by itself after about 20 minutes. During those 20 minutes I 
can no longer query the status of the SD; it does not respond to the status 
command. From the log I see that the SD reports it did not write any data, 
yet the file on disk grows slightly with every job.

regards
Frank





Re: [Bacula-users] Large network bacula tips?

2005-10-17 Thread Mark Bober

I get about 20 MB/s from my fastest storage, so that's 1200 MB/min, or 1.2 
GB/min. You're pulling 300 MB/min, or 5 MB/s.
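The arithmetic, spelled out (a quick sketch; decimal units, 1 GB = 1000 MB, to match the round numbers above):

```python
# Rough backup-throughput arithmetic from the figures in this thread.

def mb_per_min(mb_per_s: float) -> float:
    """Convert a rate in MB/s to MB/min."""
    return mb_per_s * 60

# Mark's fastest storage: about 20 MB/s
fast = mb_per_min(20)                  # 1200 MB/min, i.e. 1.2 GB/min

# Lyle's observed rate: a 2 GB (2000 MB) volume every 6 minutes
lyle_mb_per_min = 2000 / 6             # about 333 MB/min
lyle_mb_per_s = lyle_mb_per_min / 60   # about 5.6 MB/s

# Time to push the full 3 TB (3,000,000 MB) at that rate
days = 3_000_000 / lyle_mb_per_min / (60 * 24)   # 6.25 days

if __name__ == "__main__":
    print(fast, round(lyle_mb_per_s, 1), days)
```

So Lyle's "about 6 days for a full" estimate is consistent with his observed volume rate.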

So you might be a touch slow. I have a maximum of 4 jobs running on any given 
storage device, however, and usually during fulls on the weekend I've got no 
more than 4 jobs running at any given time anyway.

Here's my setup:

bacula-dir: Sun v20z (dual Opt, 4g ram) running CentOS 4.1 (RHEL clone).

Tape Storage: Either a Quantum SDLT/160 autoloader or an Overland LXB SDLT/110 
changer for large jobs, hanging off a U320 MPT SCSI PCI-X controller. 

My spool device is a set of random SCSI disks, mostly old 50-giggers, in a 
striped software RAID. About 400G. They're on a PCI 33 MHz controller, a Symbios 
something-or-other. Nothing special.

All gigabit to major servers.

All in all I've got about 80 clients: Windows, Solaris, and Linux, plus a few 
OSF/1. I've run about 3500 jobs and have totalled about 17 TB over the 
2 1/2 months I've been in production with Bacula.

(I secretly hope that wins me some sort of Biggest Bacula award)

Suggestions:

1) Solaris storage-d was *very slow*. It's Solaris's fault. Try a Linux 
storage-d and see what happens. My Linux clients always outpace everything else, 
even given the same hardware. Go ahead and shoot for 1.37.40 as well. 

2) It's Virtuozzo, also. I've got a set of VMware ESX servers, same hardware as 
the director. They go about 5 MB/s to disk with GZIP compression on, when I'd 
expect 20 MB/s from a plain Linux install without GZIP. Not much can be done 
about that, really. (This is the VMware host itself backing up, i.e. the Linux 
underpinnings, not the virtual machines.) If those 65 virtual machines all have 
load on each server, I'm amazed they're that fast at all.

If you're backing up to disk, drop GZIP once and see how it goes. If you're 
going straight to tape, you're pretty much at the limit then. That's a lot of 
virtualization.

Mark


On Thu, Oct 13, 2005 at 03:56:57PM -0700, Lyle Vogtmann wrote:
 Hello fellow Bacula users!
 
 I've only been lurking on this list for a little while, please excuse
 me if this topic has been covered previously.
 
 I've got what I would consider a large network of machines each
 hosting many virtual private servers with Virtuozzo. 
 http://www.swsoft.com/en/products/virtuozzo/  (19 servers, each
 hosting an average of 65 virtual environments, average 160GB data per
 server.   Total data to back up ~ 3TB.  Generous estimate to allow for
 growth.)
 
 I've been tasked with replacing an aging Amanda install that has been
 backing them up to disk daily.
 
 I've done some testing already with a couple of the servers, and have
 recently started a backup of all systems.  Ran into a small problem
 with the catalog where the File table grew to 4GB and claimed to be
 full, easily fixed by switching from MyISAM to the InnoDB engine.  It
 got me thinking though, are there any other gotchas or caveats
 anyone else has overcome in backing up such a large quantity of data?
 
 We have a gigabit Ethernet network over which the backups are run, but
 it still seems to take an inordinate amount of time to complete a full
 backup.  Currently filling a two gigabyte volume every 6 minutes on
 average.  At that rate, it will take 6 days to finish a full backup?! 
 Maybe I'm doing the math wrong (I already know I haven't taken
 compression into account), but I think I'm missing something.
 
 Comments and suggestions welcome!  Thanks for such a great project! 
 (It's backing up my home network of 3 Macs handily!)
 
 Oh yeah:
 Director is running on a FreeBSD 5.4 box, all other clients are Linux.
  Bacula version 1.36.3 compiled from source (ports tree on director).
 
 Thanks in advance,
 Lyle Vogtmann
 
 




[Bacula-users] Connection Problems

2005-10-17 Thread John
I have successfully set up Bacula with a dozen clients. The server and
clients are running RedHat Enterprise 3 ES. Bacula on the server
was compiled from source, and the clients were installed via RPM.
The last system I need to back up is a RH8 system; I've tried the RH8
RPMs and tried compiling from source, but I keep getting errors (see
below). I've verified the passwords are correct. There is
no firewall on the clients or server, the FD starts up on the client,
and all other clients connect fine to the server. Also, I can ssh
between the client and server in both directions, so network
connectivity isn't the problem. In the error report, Flash is the
server and Sybil is the client.

Any ideas?

17-Oct 01:44 flash-dir: No prior Full backup Job record found.
17-Oct 01:44 flash-dir: No prior or suitable Full backup found. Doing FULL backup.
17-Oct 01:44 flash-dir: Start Backup JobId 194,
Job=Sybil.2005-10-17_01.05.12 17-Oct 01:51 flash-dir:
Sybil.2005-10-17_01.05.12 Warning: bnet.c:769 Could not connect to File
daemon on sybil:9102. ERR=Connection timed out Retrying ...
17-Oct 03:34 flash-dir: Sybil.2005-10-17_01.05.12 Warning: bnet.c:769
Could not connect to File daemon on sybil:9102. ERR=Connection timed
out Retrying ...
17-Oct 05:17 flash-dir: Sybil.2005-10-17_01.05.12 Warning: bnet.c:769
Could not connect to File daemon on sybil:9102. ERR=Connection timed
out Retrying ...
17-Oct 06:59 flash-dir: Sybil.2005-10-17_01.05.12 Warning: bnet.c:769
Could not connect to File daemon on sybil:9102. ERR=Connection timed
out Retrying ...
17-Oct 08:42 flash-dir: Sybil.2005-10-17_01.05.12 Warning: bnet.c:769
Could not connect to File daemon on sybil:9102. ERR=Connection timed
out Retrying ...
17-Oct 10:25 flash-dir: Sybil.2005-10-17_01.05.12 Warning: bnet.c:769
Could not connect to File daemon on sybil:9102. ERR=Connection timed
out Retrying ...
17-Oct 11:42 flash-dir: Sybil.2005-10-17_01.05.12 Fatal error:
bnet.c:775 Unable to connect to File daemon on sybil:9102.
ERR=Connection timed out 17-Oct 11:42 flash-dir:
Sybil.2005-10-17_01.05.12 Error: Bacula 1.36.3 (22Apr05): 17-Oct-2005
11:42:00
  JobId:  194
  Job:Sybil.2005-10-17_01.05.12
  Backup Level:   Full (upgraded from Incremental)
  Client: sybil-fd
  FileSet:SybilFileSet 2005-10-07 11:58:11
  Pool:   FullPool
  Storage:File
  Start time: 17-Oct-2005 01:44:54
  End time:   17-Oct-2005 11:42:00
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0
  SD Bytes Written:   0
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): 
  Volume Session Id:  61
  Volume Session Time:1129227033
  Last Volume Bytes:  0
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  
  SD termination status:  Waiting on FD
  Termination:*** Backup Error ***






Re: [Bacula-users] Connection Problems

2005-10-17 Thread Achim Schmidt
Hi,

have you investigated with tcpdump on both servers whether there is any
communication between the systems on the given ports?
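A plain TCP connect test against the FD port can also show quickly whether packets make it through at all; a small sketch (hostname and port taken from the thread, adjust as needed):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # e.g. probe the File Daemon port on the client
    print(port_open("sybil", 9102))
```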

rgds,

Achim


John wrote:

 I have successfully setup Bacula with a dozen clients.  Server and
 clients are running RedHat Enterprise 3 ES.  Bacula on the server was
 compiled from source, and the clients were installed via RPM.  My last
 system I need to backup is a RH8 system and I've tried the RH8 RPMs
 and tried compiling from source but keep getting errors (see below). 
 I've verified the passwords are correct.  There is no firewall on the
 clients or server, the FD starts up on the client, and all other
 servers connect fine to the server.  Also, I can ssh between server
 client and server, both directions so network connectivity isn't a
 problem.  In the error report Flash is the server, Sybil is the client.

 Any ideas?

 [error log and job report snipped]


---
This SF.Net email is sponsored by:
Power Architecture Resource Center: Free content, downloads, discussions,
and more. http://solutions.newsforge.com/ibmarch.tmpl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Large network bacula tips?

2005-10-17 Thread Lyle Vogtmann
Thank you for the response!

On 10/17/05, Mark Bober [EMAIL PROTECTED] wrote:

 I get about ~20 MB/s from my fastest storage, so that's 1200 MB/min, or 1.2 
 GB/min. You're
 pulling 300MB/min, or 5 MB/s.

I thought so, thanks for verifying my sanity.
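Mark's numbers are plain unit conversion; a trivial sketch for checking throughput rates:

```python
def mb_per_min(rate_mb_per_sec):
    """Convert a rate in MB/s to MB/min."""
    return rate_mb_per_sec * 60

def mb_per_sec(rate_mb_per_min):
    """Convert a rate in MB/min to MB/s."""
    return rate_mb_per_min / 60

print(mb_per_min(20))   # 20 MB/s -> 1200 MB/min (1.2 GB/min)
print(mb_per_sec(300))  # 300 MB/min -> 5 MB/s
```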

 So you might be a touch slow. I have a maximum of 4 jobs running on any given 
 storage device,
 however, and usually during fulls on the weekend I've got no more than 4 jobs 
 running at any
 given time anyway.

I was thinking of limiting the number of concurrent jobs.  The project
manager really wants them all run simultaneously; I'm not 100% sure of
the reasoning, but it's listed as a requirement.  If it has a large
negative impact on performance, I'm sure we can drop that requirement.
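If we do end up capping concurrency, Bacula's Maximum Concurrent Jobs directive is the usual knob; a sketch with illustrative resource names (the directive can also be set per Job or per Client):

```conf
# bacula-dir.conf (resource names are illustrative)
Director {
  Name = backup-dir
  ...
  Maximum Concurrent Jobs = 4
}

Storage {
  Name = SpoolStorage
  ...
  Maximum Concurrent Jobs = 4
}
```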

 Here's my setup:

 bacula-dir: Sun v20z (dual Opt, 4g ram) running CentOS 4.1 (RHEL clone).

 Tape Storage: Either Quantum SDLT/160 Autoloader or Overland LXB SDLT/110 
 changer for
 large jobs, hanging off an U320 MPT SCSI PCI-X controller.

 My spool device is a set of random SCSI disks, mostly old 50 giggers, in a 
 striped software raid.
 About 400G. They're on a PCI 33mhz controller, a Symbios something-or-other. 
 Nothing special.

 All gigabit to major servers.

 All in all I've got about 80 clients: Windows, Solaris, and Linux. A few
 OSF/1 also. I've run about 3500 jobs and have totalled about 17 TB over
 the past... 2 1/2 months I've been in production with Bacula.

 (I secretly hope that wins me some sort of Biggest Bacula award)

:)  Anyone have any statistics to top it?

 Suggestions:

 1) Solaris storage-d was *very slow*. It's Solaris's fault. Try a linux 
 storage-d, see what happens.
 My Linux clients always outpace everything else, even given the same 
 hardware. Go ahead and
 shoot for 1.37.40 as well.

OK, wasn't sure how stable that release was, but since I'm still in
testing mode, it doesn't really matter.  I'll give it a go.

 2) It's Virtuozzo, also. I've got a set of VMWare ESX servers, same hardware
 as the director. They go about 5 MB/s to disk, with GZIP compression on,
 when I'd expect 20 MB/s from a plain Linux install without GZIP. Not much
 can be done about that, really. (This is the VMWare itself - the Linux
 underpinnings, not the virtual machines, being backed up.) If those 65
 virtual machines all have load on each server, I'm amazed they're that
 fast at all.

None of the clients are vps's themselves.  Just hosting vps's, so I
guess I could have left that out of my original message.  They are the
main cause of the large amount of data.

 If you're backing up to disk, drop GZIP once and see how it goes. If you're 
 going straight to tape,
 you're pretty much at the limit then. That's a lot of virtualization.

Yep, to disk.  I forgot about the gzip compression, thanks for reminding me.
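For reference, software compression is enabled per FileSet in bacula-dir.conf; commenting out the compression option is enough to test throughput without it. A sketch with illustrative names and paths:

```conf
# bacula-dir.conf -- FileSet (illustrative name and path);
# comment out the GZIP line to test without software compression.
FileSet {
  Name = "VpsFileSet"
  Include {
    Options {
      signature = MD5
      # compression = GZIP
    }
    File = /vz
  }
}
```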

I appreciate the suggestions!

Lyle




[Bacula-users] DLT 40/80GB never reaches 40GB

2005-10-17 Thread Anwar Ahmad

Hi All,

I have this problem where I've got a mix of 20/40GB and 40/80GB DLT 
tapes in one pool, configured to back up using one tape drive. The 
problem is, all of them seem to be acting like 20/40GB tapes, including 
the higher-capacity ones. I've not been able to get a 40/80GB tape to 
store even remotely near 40GB, let alone the compressed capacity of 80GB. 
I was wondering whether anyone has had similar problems before...


The funny thing is, I've got another pool that uses another tape drive, 
and all its backups complete correctly. Most backups come to around 
68-72GB, which I believe is fine. I've configured both pools & devices 
similarly, but don't know why one only backs up to around 34GB (acting 
like a 20/40). Both tape drives were bought at the same time and are 
identical models. Is the mix of tapes causing the problem?


The drives are currently listed as /dev/nst0 and /dev/nst1 respectively. 
Are there any configuration files outside of bacula-dir that I need to 
configure, perhaps in Linux system files? I don't recall touching any 
configuration files other than the Bacula conf files during the initial 
setup.


Thanks!

Kind Regards,
Anwar




Re: [Bacula-users] DLT 40/80GB never reaches 40GB

2005-10-17 Thread Phil Stracchino
Anwar Ahmad wrote:
 Hi All,
 
 I have this problem where I've got a mix of 20/40GB and 40/80GB DLT
 tapes in one pool, configured to back up using one tape drive. The
 problem is, all of them seem to be acting like 20/40GB tapes, including
 the higher-capacity ones. I've not been able to get a 40/80GB tape to
 store even remotely near 40GB, let alone the compressed capacity of 80GB.
 I was wondering whether anyone has had similar problems before...
 
 The funny thing is, I've got another pool that uses another tape drive,
 and all its backups complete correctly. Most backups come to around
 68-72GB, which I believe is fine. I've configured both pools & devices
 similarly, but don't know why one only backs up to around 34GB (acting
 like a 20/40). Both tape drives were bought at the same time and are
 identical models. Is the mix of tapes causing the problem?
 
 The drives are currently listed as /dev/nst0 and /dev/nst1 respectively.
 Are there any configuration files outside of bacula-dir that I need to
 configure, perhaps in Linux system files? I don't recall touching any
 configuration files other than the Bacula conf files during the initial
 setup.

This may be a silly question, but ..

You've stated you have a mixture of DLT1 (20/40GB nominal) and DLT2
(40/80GB nominal) tapes.  Are your drives DLT1 or DLT2?  A DLT1 drive
will only get 20/40GB capacity from a DLT2, DLT3 or even DLT4 tape.


-- 
 Phil Stracchino   [EMAIL PROTECTED]
Renaissance Man, Unix generalist, Perl hacker
 Mobile: 603-216-7037 Landline: 603-886-3518




Re: [Bacula-users] DLT 40/80GB never reaches 40GB

2005-10-17 Thread Anwar Ahmad

Hi Phil,

No worries, I've checked that both are DLT2 drives. Specifically, they 
are HP SureStore Autoloaders. Both were bought at the same time from HP 
directly, and they were configured identically. We initially had a 
bunch of DLT tapes around (10 or so) from our old HP server, which had an 
inbuilt DLT drive. That drive was DLT1.


When we purchased the 2 autoloaders we wanted to reuse some of the old 
tapes rather than junk them, since we're using Bacula as a network backup. 
We also purchased well over 30 DLT 40/80 tapes.


Interestingly, I've relabeled a 40/80 tape that was originally in pool1 
(the one with the problem), put it into pool2 (the one working normally), 
and it works correctly. I've thought about the physical tape drive causing 
the problem, but I have nowhere to check. Is there any way to check using 
software?
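One software-side check on Linux: compare what each drive reports via `mt status` (from the mt-st package). A sketch; it assumes the `Density code 0x..` line that Linux `mt` prints, and the sample text in the test is illustrative:

```python
import re
import subprocess

def density_code(status_text):
    """Extract the hex density code from `mt status` output, or None if absent."""
    m = re.search(r"[Dd]ensity code (0x[0-9a-fA-F]+)", status_text)
    return m.group(1) if m else None

def drive_density(device):
    """Run `mt -f <device> status` and return the reported density code
    (requires a tape loaded in the drive)."""
    out = subprocess.run(["mt", "-f", device, "status"],
                         capture_output=True, text=True).stdout
    return density_code(out)

# Load the same 40/80 tape in each drive, then compare
# drive_density("/dev/nst0") against drive_density("/dev/nst1");
# a lower code reported by one drive points at the drive, not the tapes.
```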


I'm not certain how this actually works out. Since both autoloaders were 
bought together and are from the same batch, one couldn't have been DLT1 
while the other was DLT2. I ruled out the autoloader as the source of the 
problem since it's still working. It's unlikely a hardware fault could 
cause something like that... however, I could be wrong. I'd like to 
explore software configuration issues before accepting that the drive is 
faulty. From my previous experience, a drive either works or it doesn't. 
I've never encountered an issue where it slows down but still works.


Thanks!

Kind Regards,
Anwar



Re: [Bacula-users] DLT 40/80GB never reaches 40GB

2005-10-17 Thread drescher0110-bacula
 Were the tapes ever used in a DLT1 drive?
 


