[Bacula-users] Bacula ignores Maximum Block Size and Maximum Network Buffer Size

2011-07-12 Thread Stefan-Michael Guenther
Hello,

we are using a Tandberg Autochanger with LTO3 tapes (400/800GB).

After Bacula reported the VolStatus as Full after writing only about 260 GB,
we checked the output of dmesg and found the following:

[ 7.806343] st 3:0:1:0: Attached scsi tape st0
[ 7.806345] st 3:0:1:0: st0: try direct i/o: yes (alignment 512 B)
[ 3295.705344] st0: Block limits 1 - 16777215 bytes.
[ 3299.906724] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 3856.794626] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 4027.852762] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 4173.406028] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 7426.885546] st0: Sense Key : Not Ready [current]
[ 7426.885552] st0: Add. Sense: Medium not present
[ 7489.886551] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 7587.498621] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 7685.139676] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 7790.334630] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[ 7890.184317] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[11107.368911] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[11236.844953] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[11593.028152] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[12614.773569] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[12822.140308] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[12970.124446] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[284116.085677] st0: Sense Key : Not Ready [current]
[284116.085683] st0: Add. Sense: Medium not present
[853425.793613] st0: Sense Key : Not Ready [current]
[853425.793619] st0: Add. Sense: Medium not present
[1824459.457956] st0: Failed to read 65536 byte block with 64512 byte
transfer.
[2185340.457194] st0: Sense Key : Not Ready [current]
[2185340.457200] st0: Add. Sense: Medium not present

We found a posting that suggested defining 

Maximum Block Size = 65536
Maximum Network Buffer Size = 65536

in bacula-sd.conf.
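For reference, these directives belong inside the Device resource of bacula-sd.conf, not at the top level; a sketch, with the device name, media type, and archive path as placeholders:

```
Device {
  Name = LTO3-Drive                      # placeholder name
  Media Type = LTO3
  Archive Device = /dev/nst0
  AutomaticMount = yes
  # Both directives must sit here, inside the Device resource:
  Maximum Block Size = 65536
  Maximum Network Buffer Size = 65536
}
```

Also note that a volume keeps the block size it was originally labeled with; a changed Maximum Block Size only takes effect on newly labeled volumes, which may be why restarting the Storage Daemon alone appears to change nothing.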

We did not forget to restart the Storage Daemon, but the error
appears again and still only about 260 GB are used.

Is this a problem with Bacula or with the st tape driver?

Thanks for any suggestions or hints,

Stefan




--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape BLOCKED waiting for.....

2011-07-12 Thread Martin Simmons
 On Mon, 11 Jul 2011 22:35:15 -0400, Christian Tardif said:
 
 On 11/07/2011 14:34, Christian Tardif wrote:
  On 11/07/11 12:15 PM, Adrian Reyer wrote:
  Never tried it myself, but I have seen some documentation yesterday
  about labeling tapes when needed by issuing 'unmount' first. Doesn't this
  resolve your 'blocked' status?
  -
  http://www.bacula.org/5.0.x-manuals/en/main/main/Brief_Tutorial.html#SECTION001610
 
  Regards,
 Adrian
 
  I'll give it a try tonight on a test environment but I think I've 
  already tried that.
 
  I'll let you know.
 
 Nope, this is not working. The tape drive is just not reacting at all... 
 If this is a feature, this should be renamed as a bug, as there MUST be 
 a way to push a label function even when the device is BLOCKED, waiting 
 for a tape to be mounted or labelled.
 
 Luckily enough, I have the exact message I was talking about:
 
 11-Jul 16:08 localhost-sd JobId 11321: Please mount Volume LTO-E or label a 
 new one for:
  Job:  fdld-E.2011-07-10_23.59.05_08
  Storage:  Ultrium-2 (/dev/nst2)
  Pool: E
  Media type:   Ultrium2

Please post the output of running these commands in bconsole (with appropriate
responses):

unmount
insert a blank tape into the drive
label
mount
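The sequence above would look something like the following bconsole session; the storage, pool, and volume names are taken from the quoted job message, and the daemon's status lines are illustrative:

```
*unmount storage=Ultrium-2
  (drive is released; insert a blank tape into it)
*label volume=LTO-E2 pool=E storage=Ultrium-2
3000 OK label. ...
*mount storage=Ultrium-2
```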

__Martin



[Bacula-users] Bacula Support for Microsoft Hyper-V

2011-07-12 Thread cheungne
Dear All,

Does Bacula support backing up Hyper-V virtual hard disks, and where 
can I find related documentation?

Thanks!

+--
|This was sent by cheun...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Catastrophic error. Cannot write overflow block to device LTO4

2011-07-12 Thread Martin Simmons
 On Mon, 11 Jul 2011 16:00:15 -0500, Steve Costaras said:
 Authentication-Results:  cm-omr4 smtp.user=stev...@chaven.com; auth=pass 
 (CRAM-MD5)
 
 On 2011-07-11 06:13, Martin Simmons wrote:
  On Sun, 10 Jul 2011 12:17:55 +, Steve Costaras said:
  Importance: Normal
  Sensitivity: Normal
 
  I am trying a full backup/multi-job to a single client and all was going 
  well until this morning when I received the error below.   All other jobs 
  were also canceled.
 
  My question is two fold:
 
  1) What the heck is this error?  I can unmount the drive, issue a rawfill 
  to
  the tape w/ btape and no problems?
  ...
  3000 OK label. VolBytes=1024 DVD=0 Volume=FA0016 Device=LTO4 
  (/dev/nst0)
  Requesting to mount LTO4 ...
  3905 Bizarre wait state 7
  Do not forget to mount the drive!!!
  2011-07-10 03SD-loki JobId 6: Wrote label to prelabeled Volume FA0016 on 
  device LTO4 (/dev/nst0)
  2011-07-10 03SD-loki JobId 6: New volume FA0016 mounted on device LTO4 
  (/dev/nst0) at 10-Jul-2011 03:51.
  2011-07-10 03SD-loki JobId 6: Fatal error: block.c:439 Attempt to write on 
  read-only Volume. dev=LTO4 (/dev/nst0)
  2011-07-10 03SD-loki JobId 6: End of medium on Volume FA0016 Bytes=1,024 
  Blocks=0 at 10-Jul-2011 03:51.
  2011-07-10 03SD-loki JobId 6: Fatal error: Job 6 canceled.
  2011-07-10 03SD-loki JobId 6: Fatal error: device.c:192 Catastrophic 
  error. Cannot write overflow block to device LTO4 (/dev/nst0). 
  ERR=Input/output error
  Do you regularly see the 3905 Bizarre wait state 7 message?  It could be 
  an
  indication of problems (and everything after that could be a consequence of
  it).
 
  What are the messages that lead up to that point?
 Nothing, really; this was the 17th tape in a row on a ~3 day (so far) 
 backup.  No messages in /var/log/messages.  Previous messages from 
 bacula are below; as you can see it just blows chunks right after FA0016 
 is mounted, and all concurrent jobs are killed.  And I've tested that tape 
 before the backup ran, and again right after this failure, with btape: 
 no problems.

Yes, that looks mostly normal.

I would report that log output as a bug at bugs.bacula.org.

I'm a little surprised that it specifically asked for the volume named FA0016
though:

  2011-07-10 03SD-loki JobId 6: Please mount Volume FA0016 or label a new one 
for:

but you then issued the label command for that volume.

Was FA0016 in the database already?  If not, how did bacula predict the name?

__Martin



[Bacula-users] Virtual diff instead of full

2011-07-12 Thread Marcus Hallberg
Hi!

I have a problem with VirtualFull on one set of backups, where it does not 
give me a virtual full but a virtual diff instead.

The last full backup was about 125 GB, and a new estimate says that a new 
full would be about 200 GB.

When I ask it to produce a new VirtualFull, it starts reading from the last 
Diff and gives me a virtual full file of about 50 GB, which is about the 
size of my diffs.

If anyone has any pointers, they would be greatly appreciated.

/marcus
--



/Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se






attachment: g12.png


Re: [Bacula-users] Virtual diff instead of full

2011-07-12 Thread James Harper
 Hi!
 
 I have a problem with VirtualFull on one set of backups, where it does
 not give me a virtual full but a virtual diff instead.
 
 The last full backup was about 125 GB, and a new estimate says that a
 new full would be about 200 GB.
 
 When I ask it to produce a new VirtualFull, it starts reading from the
 last Diff and gives me a virtual full file of about 50 GB, which is
 about the size of my diffs.
 
 If anyone has any pointers, they would be greatly appreciated.
 
 

If you want to get your hands dirty and like sifting through logfiles
you can turn on mysql logging (assuming you are using mysql) and have a
look at what queries are used to determine the volumes that make up the
virtualfull.
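For a temporary look at those queries, the MySQL general query log can be toggled at runtime (assuming MySQL 5.1 or later; the log file path is a placeholder):

```sql
-- Enable the general query log at runtime; every statement the
-- Director issues will be written to the file below.
SET GLOBAL general_log_file = '/tmp/mysql-bacula.log';
SET GLOBAL general_log = 'ON';
-- ...run the VirtualFull job, then switch it back off:
SET GLOBAL general_log = 'OFF';
```

This log grows quickly, so it is best enabled only for the duration of the one job being investigated.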

Regular bacula debug logging may help too.

James



Re: [Bacula-users] Catastrophic error. Cannot write overflow block to device LTO4

2011-07-12 Thread Steve Costaras


On 2011-07-12 05:38, Martin Simmons wrote:
 Yes, that looks mostly normal.

 I would report that log output as a bug at bugs.bacula.org.

 I'm a little surprised that it specifically asked for the volume named FA0016
 though:

2011-07-10 03SD-loki JobId 6: Please mount Volume FA0016 or label a new 
 one for:

 but you then issued the label command for that volume.

 Was FA0016 in the database already?  If not, how did bacula predict the name?

Yes, I pre-populate the database with the range of tapes for each pool, 
since I already have the barcoded tapes.



Re: [Bacula-users] Bacula Support for Microsoft Hyper-V

2011-07-12 Thread shouldbe q931
On Tue, Jul 12, 2011 at 8:09 AM, cheungne
bacula-fo...@backupcentral.com wrote:
 Dear All,

 Does Bacula support backing up Hyper-V virtual hard disks, and where can I 
 find related documentation?

 Thanks!


Yes, but not live: you would have to shut down the VM, back up the
file, and start the VM up again.

I have heard that it is possible to take a VSS snapshot of a VM's disk in
a way that links to the VSS writer in the guest to create a
self-consistent snapshot of a VHD, but I have not seen it done.
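The shutdown-based approach can at least be automated from the Director with RunScript blocks; a sketch, where the job name and the two scripts (which would stop and start the guest by whatever mechanism your Hyper-V host offers) are placeholders:

```
Job {
  Name = "HyperV-Guest1-Backup"              # hypothetical job
  # ...usual Client/FileSet/Schedule/Pool directives...
  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    FailJobOnError = yes
    Command = "c:/bacula/stop-guest1.cmd"    # placeholder shutdown script
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = yes
    Command = "c:/bacula/start-guest1.cmd"   # placeholder startup script
  }
}
```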

Cheers

Arne



[Bacula-users] Backup hangs at waiting for Client to connect to Stora

2011-07-12 Thread rlh1533
There's no reference to localhost or 127.0.0.1 in the Bacula server config 
files (bacula-fd.conf, bacula-sd.conf, and bacula-dir.conf), and there isn't any 
reference to it in the file daemon section of the client config file 
(bacula-fd.conf), either. Is there another config file I should check for this?
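One easy-to-miss place is the Storage resource in bacula-dir.conf: the Address there is handed to the file daemon, so a value of "localhost" makes the client try to connect to itself. A sketch, with names and IP as placeholders:

```
# bacula-dir.conf
Storage {
  Name = File
  Address = 192.168.1.10     # must be reachable *from the client*,
                             # so never localhost/127.0.0.1
  SDPort = 9103
  Password = "..."
  Device = FileStorage
  Media Type = File
}
```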






Re: [Bacula-users] bextract 5.0.3/64bit hangs ? 100% cpu, no result

2011-07-12 Thread Pierre Bourgin
- Martin Simmons mar...@lispworks.com wrote:

  On Mon, 11 Jul 2011 11:42:35 +0200 (CEST), Pierre Bourgin said:
  
  Hello,
  
  I have installed bacula 5.0.3 on a CentOS 5.4 x86_64 system (RPM
 x86_64 rebuilt from source) and it has been working great for a year.
  
  After a mistake I made, I need to restore my catalog.
  So I tried to use bextract in order to restore a 51 MB file from a
 volume-disk file of 20GB.
  bextract hangs a lot: 100% CPU used, no I/O wait at all.
  After several minutes of run, I stopped it without any success:
 the restored file was created, but empty.
  
  Since I really need this file, I've tried the 32bit version of
 bextract on the same system: worked fine !
  
  I've tried to debug it by the use of strace, but I'm not clever
 enough to find anything useful in these outputs.
  (please find the strace files attached to this email)
  
  So I don't know if it's a bug from the packaging or a bextract bug
 related to 64bit platform ?
  
  If someone has a clue ...
 
 To find out where is it looping, attach gdb to the process when it is
 hanging
 (use gdb -p $pidofbextract) and then issue the gdb commands
 
 thread apply all bt
 detach
 quit
 
 Do this a few times to get an idea of how it changes.
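The gdb commands above can be put in a batch file and run non-interactively against the hung process; a minimal sketch (the command-file name is arbitrary):

```shell
# Write a gdb command file that dumps every thread's backtrace,
# then detaches so the traced process keeps running afterwards.
cat > gdb.show-backtrace.commands <<'EOF'
thread apply all bt
detach
quit
EOF

# Attach non-interactively (assumes a single bextract process):
#   gdb -p "$(pgrep bextract)" -x gdb.show-backtrace.commands
```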

Hello,

Thanks for your help.

Once bextract has started, I've launched a batched gdb once per minute with the 
gdb commands you provided.
gdb then always shows similar output, like this (see below):
- addresses of the Thread 1 stack are always the same
- addresses of the Thread 2 stack: only #0 and #1 are different (inflate_table() 
and inflate())
- Thread 1: sometimes the call to inflate_table() does not appear

# while [ 1 ]; do gdb -p `pgrep bextract` -x gdb.show-backtrace.commands  ; 
sleep 4; done

== gdb sample output ==
This GDB was configured as x86_64-redhat-linux-gnu.
Attaching to process 10007
Reading symbols from /usr/sbin/bextract...(no debugging symbols found)...done.
Reading symbols from /lib64/libacl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libacl.so.1

Reading symbols from /lib64/libpthread.so.0...done.
[Thread debugging using libthread_db enabled]
[New Thread 0x2b7ee0469850 (LWP 10007)]
[New Thread 0x40af4940 (LWP 10008)]
Loaded symbols for /lib64/libpthread.so.0

Loaded symbols for /lib64/libselinux.so.1
Reading symbols from /lib64/libsepol.so.1...done.
Loaded symbols for /lib64/libsepol.so.1
0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1

Thread 2 (Thread 0x40af4940 (LWP 10008)):
#0  0x003a7620dfe1 in nanosleep () from /lib64/libpthread.so.0
#1  0x003e32a1425b in bmicrosleep (sec=30, usec=0) at bsys.c:63
#2  0x003e32a40efb in check_deadlock () at lockmgr.c:571
#3  0x003a76206617 in start_thread () from /lib64/libpthread.so.0
#4  0x003a75ad3c2d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x2b7ee0469850 (LWP 10007)):
#0  0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1
#1  0x2b7ee0257537 in inflate () from /usr/lib64/libz.so.1
#2  0x2b7ee0252396 in uncompress () from /usr/lib64/libz.so.1
#3  0x00406b6f in record_cb ()
#4  0x00425298 in read_records ()
#5  0x00406438 in main ()
== gdb sample output ==

so the trouble is related to zlib and its use by bextract?

# rpm -qf /usr/lib64/libz.so.1
zlib-1.2.3-3

My RPM build system uses exactly the same version for the -devel version
(unchanged since build of bacula):
zlib-devel-1.2.3-3

On the bacula server, I've updated my zlib package with the most recent one: 
zlib-1.2.3-4.
No difference, the same problem arises with bextract.

I've checked my bacula's backups: they are fine:
I've restored the bacula.sql file (BackupCatalog job) with bconsole and its 
restore command in seconds 
(for the same tape file).

Another thing: 
bextract and bconsole do not have the same entry point for libz.so, and only 
for that one;
does it mean they do not use libz the same way ?

# ldd /usr/sbin/bextract /usr/sbin/bconsole
libacl.so.1 => /lib64/libacl.so.1 (0x003a7ba0)
libbacfind-5.0.3.so => /usr/lib64/libbacfind-5.0.3.so 
(0x003e3320)
libbaccfg-5.0.3.so => /usr/lib64/libbaccfg-5.0.3.so (0x003e32e0)
libbac-5.0.3.so => /usr/lib64/libbac-5.0.3.so (0x003e32a0)
libz.so.1 => /usr/lib64/libz.so.1 (0x2b0f663b7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x003a7620)
libdl.so.2 => /lib64/libdl.so.2 (0x003a75e0)
libssl.so.6 => /lib64/libssl.so.6 (0x003a7920)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x003a7820)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x003a7860)
libm.so.6 => /lib64/libm.so.6 (0x003a7660)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x003a78a0)
libc.so.6 => /lib64/libc.so.6 (0x003a75a0)
libattr.so.1 => 

[Bacula-users] Backup hangs at waiting for Client to connect to Stora

2011-07-12 Thread rlh1533
I was just able to get this to work, thanks to your tip on the localhost bit. I 
went back in bacula-dir.conf and indeed, under the storage daemon section, it 
was set to localhost for the address. Changing that to the IP of the bacula-sd 
fixed it.






[Bacula-users] possible memory leak in bacula/ubuntu/64-bit

2011-07-12 Thread Gavin McCullagh
Hi,

I was looking at the output of htop today and noticed that the bacula-fd
process, although entirely idle, was the highest-memory process on the
server.

The output was:

  PID USER PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 3949 root  20   0  219M 63296   936 S  0.0  1.5  0:01.61 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf
 3952 root  20   0  219M 63296   936 S  0.0  1.5  0:00.81 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf

I stopped the file daemon, started it again and its memory usage fell quite
dramatically.  Initially it looked like this:

  PID USER PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
12768 root  20   0 66512  1616   632 S  0.0  0.0  0:00.00 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf
12769 root  20   0 66512  1616   632 S  0.0  0.0  0:00.00 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf

Some minutes later:

  PID USER PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
12768 root  20   0 74708  1856   860 S  0.0  0.0  0:00.01 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf   
   
12769 root  20   0 74708  1856   860 S  0.0  0.0  0:00.00 
/usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf

where it seems pretty stable.

The server (and therefore bacula-fd process) had been up a couple of weeks and
has daily incrementals, weekly differentials and monthly full backups scheduled
with the accurate option on and compression and encryption off.  A full backup
is just shy of 500GB.  The FD listens on both IPv4 and IPv6 and the backups
generally happen over IPv6.
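To see whether the usage grows steadily or just plateaus after the first jobs, the resident set can be sampled periodically; a minimal sketch (loop it with sleep or cron against the bacula-fd pid):

```shell
# Print a process's resident set size (RSS, in kilobytes on Linux);
# logging this over days shows whether memory keeps growing.
rss_of() {
    ps -o rss= -p "$1" | tr -d ' '
}

# Example: sample the current shell's own RSS.
rss_of $$
```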

Any thoughts?

Gavin





Re: [Bacula-users] bextract 5.0.3/64bit hangs ? 100% cpu, no result

2011-07-12 Thread Martin Simmons
 On Tue, 12 Jul 2011 16:15:59 +0200 (CEST), Pierre Bourgin said:
 
 - Martin Simmons mar...@lispworks.com wrote:
 
   On Mon, 11 Jul 2011 11:42:35 +0200 (CEST), Pierre Bourgin said:
   
   Hello,
   
   I have installed bacula 5.0.3 on a CentOS 5.4 x86_64 system (RPM
  x86_64 rebuilt from source) and it's working great since a year.
   
   After a mistake I made, I need to restore my catalog.
   So I tried to use bextract in order to restore a 51 MB file from a
  volume-disk file of 20GB.
   bextract hangs a lot: 100% CPU used, no I/O wait at all.
   After several minutes of run, I stopped it without any success:
  restored file with created, but empty.
   
   Since I really need this file, I've tried the 32bit version of
  bextract on the same system: worked fine !
   
   I've tried to debug it by the use of strace, but I'm not clever
  enough to find anything usefull in these outputs.
   (please find the strace files attached to this email)
   
   So I don't know if it's a bug from the packaging or a bextract bug
  related to 64bit platform ?
   
   If someone has a clue ...
  
  To find out where is it looping, attach gdb to the process when it is
  hanging
  (use gdb -p $pidofbextract) and then issue the gdb commands
  
  thread apply all bt
  detach
  quit
  
  Do this a few times to get an idea of how it changes.
 
 Hello,
 
 Thanks for your help.
 
 Once bextract has started, I've launched a batched gdb once per minute with 
 the gdb commands you provided.
 gdb then always shows a similar output like this (see below):
 - addresses of the Thread 1 stack are always the same
 - addresses of the Thread 2 stack: only #0 and #1 are different 
 (inflate_table() and inflate()), 
 - Thread 1: sometimes the call to inflate_table() does not appear
 
 # while [ 1 ]; do gdb -p `pgrep bextract` -x gdb.show-backtrace.commands  ; 
 sleep 4; done
 
 == gdb sample output ==
 This GDB was configured as x86_64-redhat-linux-gnu.
 Attaching to process 10007
 Reading symbols from /usr/sbin/bextract...(no debugging symbols found)...done.
 Reading symbols from /lib64/libacl.so.1...(no debugging symbols found)...done.
 Loaded symbols for /lib64/libacl.so.1
 
 Reading symbols from /lib64/libpthread.so.0...done.
 [Thread debugging using libthread_db enabled]
 [New Thread 0x2b7ee0469850 (LWP 10007)]
 [New Thread 0x40af4940 (LWP 10008)]
 Loaded symbols for /lib64/libpthread.so.0
 
 Loaded symbols for /lib64/libselinux.so.1
 Reading symbols from /lib64/libsepol.so.1...done.
 Loaded symbols for /lib64/libsepol.so.1
 0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1
 
 Thread 2 (Thread 0x40af4940 (LWP 10008)):
 #0  0x003a7620dfe1 in nanosleep () from /lib64/libpthread.so.0
 #1  0x003e32a1425b in bmicrosleep (sec=30, usec=0) at bsys.c:63
 #2  0x003e32a40efb in check_deadlock () at lockmgr.c:571
 #3  0x003a76206617 in start_thread () from /lib64/libpthread.so.0
 #4  0x003a75ad3c2d in clone () from /lib64/libc.so.6
 
 Thread 1 (Thread 0x2b7ee0469850 (LWP 10007)):
 #0  0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1
 #1  0x2b7ee0257537 in inflate () from /usr/lib64/libz.so.1
 #2  0x2b7ee0252396 in uncompress () from /usr/lib64/libz.so.1
 #3  0x00406b6f in record_cb ()
 #4  0x00425298 in read_records ()
 #5  0x00406438 in main ()
 == gdb sample output ==
 
 so trouble related to zlib and its use by bextract ?

Nicely done.  It looks like bug 1703:

http://bugs.bacula.org/view.php?id=1703


 Another thing: 
 bextract and bconsole do not have the same entry point for libz.so, and only 
 for that one;
 does it mean they do not use libz the same way ?

By entry point, do you mean the number shown after the filename in the
output of ldd?  That is the base address in memory and isn't important.


  If you have debuginfo packages for bacula, then install them first.
 
 these package definitions are not provided by bacula.spec from the 
 bacula-5.0.3 sources. 
 Would you have such a .spec file to generate them?

Don't worry about it -- some rpm build configurations generate them
automatically, but maybe yours doesn't.

__Martin



[Bacula-users] cannot build BAT - hopefully something simple?

2011-07-12 Thread Brodie, Kent
Good morning!

 

New bacula user here--  I'm building Bacula on a RedHat 5.5 server.  I
built the depkgs-qt and sourced the paths, but when I try to build

Bacula (the following is a snippet from my ./configure output), I get the
following:

 

 

Creating bat Makefile

QMAKESPEC has not been set, so configuration cannot be deduced.

Error processing project file:
/usr/local/src/bacula/bacula-5.0.3/src/qt-console/bat.pro

make: *** No rule to make target `clean'.  Stop.

Doing make of dependencies

..

 

 

Afterward, the rest of bacula appears to build OK, but I can't build BAT
- which I'd really like to use.

 

qmake is defined properly, as are the qt-related paths:

 

# which qmake

/usr/local/src/bacula/depkgs-qt/qt4/bin/qmake

 

# printenv | grep Q

QTDIR=/usr/local/src/bacula/depkgs-qt/qt4

QTINC=/usr/local/src/bacula/depkgs-qt/qt4/include

QTLIB=/usr/local/src/bacula/depkgs-qt/qt4/lib
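QMAKESPEC is separate from QTDIR/QTINC/QTLIB, which is why qmake still cannot deduce the configuration; a sketch, where the mkspec name linux-g++-64 is an assumption for a 64-bit g++ build and may need adjusting:

```shell
# qmake derives its platform configuration from QMAKESPEC; the
# depkgs-qt tree ships the specs under $QTDIR/mkspecs.
export QTDIR=/usr/local/src/bacula/depkgs-qt/qt4
export QMAKESPEC="$QTDIR/mkspecs/linux-g++-64"
echo "$QMAKESPEC"
```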

 

 

 

Any help is appreciated!

 

-kent



[Bacula-users] Mac OS X client fails to backup

2011-07-12 Thread tscollins
Found that the problem was with the Linux firewall, not with any of the 
configuration files.






[Bacula-users] Migrate SD to a new DIR

2011-07-12 Thread Joseph L. Casale
I have a director with several SDs, and one SD with an extensive set of data 
and file-based volumes needs to be migrated away to another, completely 
distinct director.

What, if any, is my least painful way of retaining the catalog data for the 
volumes in question?
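One route people suggest is dumping just the catalog tables that describe the volumes and what was written to them, then loading that into the new Director's catalog. The table names below are the standard Bacula MySQL catalog tables, but the exact set to copy (and how to reconcile conflicting IDs on the destination) depends on your versions, so treat this only as a starting point:

```shell
# Build (without running) a dump command covering the catalog tables
# that describe volumes and the jobs/files written to them.
TABLES="Pool Media Job JobMedia File Filename Path"
DUMP_CMD="mysqldump --single-transaction bacula $TABLES"
echo "$DUMP_CMD"
```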

Thanks,
jlc



Re: [Bacula-users] Migrate SD to a new DIR

2011-07-12 Thread Dan Langille
On Jul 12, 2011, at 6:24 PM, Joseph L. Casale wrote:

 I have a director with several sd's and one with an extensive set of data and 
 file based
 volumes needs to be migrated away to another completely distinct director.
 
 What if any is my least painful way of retaining the catalog data for the 
 volumes in
 question?


These questions will help others answer you:

* Are you keeping the old Dir active?  If you are abandoning the old Dir, you 
can make the new Dir use the existing Catalog

* If you are keeping the old Dir, will you be using anything from the Catalog 
on that Dir?

-- 
Dan Langille - http://langille.org




Re: [Bacula-users] Restore Fails, how to debug?

2011-07-12 Thread Dan Langille
On Jul 11, 2011, at 10:57 AM, fsbren...@online.de wrote:

 Hello,
 
 I've set up disk-based backups. The backups themselves seem to be running ok, 
 but restore fails with: 
 
 
 11-Jul 16:45 mission-control-dir JobId 108: Error: Bacula backuphost-dir 
 5.0.1 (24Feb10): 11-Jul-2011 16:45:31
  Build OS:   x86_64-pc-linux-gnu ubuntu 10.04
  JobId:  108
  Job:RestoreFiles.2011-07-11_16.45.28_36
  Restore Client: birch-fd
  Start time: 11-Jul-2011 16:45:30
  End time:   11-Jul-2011 16:45:31
  Files Expected: 1
  Files Restored: 0
  Bytes Restored: 0
  Rate:   0.0 KB/s
  FD Errors:  1
  FD termination status:  
  SD termination status:  
  Termination:*** Restore Error ***
 
 Apparently the error is on the file daemon side, but I don't really see any 
 other useful information. The log doesn't show anything related to the 
 restore - how can I make bacula more verbose about restores or otherwise go 
 about tracking down the problem?



You can start most bacula daemons with -d (debug) and a number. Higher numbers, 
more verbosity.

I suggest also using -f to keep the daemon in the foreground.
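For example (the config path is the Debian/Ubuntu default and may differ on your install):

```
# run the FD in the foreground with fairly high verbosity
bacula-fd -d 100 -f -c /etc/bacula/bacula-fd.conf
```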

-- 
Dan Langille - http://langille.org




Re: [Bacula-users] Migrate SD to a new DIR

2011-07-12 Thread Joseph L. Casale
These questions will help others answer you:

* Are you keeping the old Dir active?  If you are abandoning the old Dir, you 
can make the new Dir use the existing Catalog

Yes, they will both remain active.

* If you are keeping the old Dir, will you be using anything from the Catalog 
on that Dir?

Yes, that director has one catalogue db for all its SDs.

Thanks!
jlc



[Bacula-users] Saving just the metadata and not the data

2011-07-12 Thread Andre Ruiz
Hi all

There are some situations where knowing which files you had and how
they were organized is more important than the data on the files
themselves.

I'll give you an example: you have collected a great number of files
mirrored from different places on the internet (say media files, Linux
package repositories, or ISOs downloaded from various sites) and
organized them carefully in a specific hierarchy for a project you are
developing that will use this data. The permissions and ACLs on the
directories also matter more than the data itself. If you ever lose
that data, you could re-download all of it; you would just want the
file tree and all the metadata saved, to save you time when
re-arranging it all (if space on the tapes is a big deal or the
collection is really huge, you don't need to back up the data).

Is there a good way to do this with bacula? I can imagine a client
run script that would generate a listing on the client filesystem
(dumping metadata: ACLs, permissions, etc.) and back it up together
with other data, but that would be a re-implementation of what bacula
does best. I would very much like to have this handled by bacula
itself, so that the file list is available in the bacula database as if
the entries were legitimate files that just happen to be empty of data
on the tapes. I might even want to restore them (giving me 0-byte
files back on disk).

The ideal world would be to have a different keyword for that in the FileSet:

FileSet {
  Name = Complete
  Include {
    Options {
      Signature = MD5
    }
    File = /
    File = /boot
    JustMetadata = /data
  }
}

I guess that's not possible; is there an equivalent way to get that functionality?
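In the absence of such a keyword, the client run script idea could be sketched roughly like this (all paths and the output format are assumptions, not an existing Bacula feature):

```shell
#!/bin/sh
# Hypothetical ClientRunBeforeJob script (paths are assumptions): dump
# the tree with modes, owners, groups and sizes, plus ACLs, into listing
# files that the regular FileSet then backs up in place of the data.
SRC=${1:-.}                       # e.g. /data in a real setup
OUT=${2:-./data-metadata.txt}
find "$SRC" -printf '%M %u %g %s %p\n' > "$OUT"
getfacl -R "$SRC" > "${OUT%.txt}-acls.txt" 2>/dev/null || true
```

Restoring would then mean re-creating the tree from the listing rather than getting 0-byte files back, so this is only a workaround, not the JustMetadata behaviour asked about.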

Thanks!

Andre

-- 
Andre Ruiz  andre.r...@gmail.com
Curitiba, PR, Brasil



Re: [Bacula-users] bextract 5.0.3/64bit hangs ? 100% cpu, no result

2011-07-12 Thread Pierre Bourgin
On 07/12/2011 06:02 PM, Martin Simmons wrote:
 On Tue, 12 Jul 2011 16:15:59 +0200 (CEST), Pierre Bourgin said:

 - Martin Simmons mar...@lispworks.com  wrote:

 On Mon, 11 Jul 2011 11:42:35 +0200 (CEST), Pierre Bourgin said:

 Hello,

 I have installed bacula 5.0.3 on a CentOS 5.4 x86_64 system (x86_64 RPM
 rebuilt from source) and it has been working great for a year.

 After a mistake I made, I needed to restore my catalog.
 So I tried to use bextract to restore a 51 MB file from a
 20 GB disk-volume file.
 bextract hangs: 100% CPU used, no I/O wait at all.
 After several minutes of runtime, I stopped it without any success:
 the restored file was created, but empty.

 Since I really need this file, I've tried the 32bit version of
 bextract on the same system: worked fine !

 I've tried to debug it using strace, but I'm not clever enough
 to find anything useful in its output.
 (please find the strace files attached to this email)

 So I don't know if it's a packaging bug or a bextract bug
 related to the 64-bit platform?

 If someone has a clue ...

 To find out where it is looping, attach gdb to the process when it is
 hanging (use gdb -p $pidofbextract) and then issue the gdb commands

 thread apply all bt
 detach
 quit

 Do this a few times to get an idea of how it changes.

 Hello,

 Thanks for your help.

 Once bextract has started, I launched a batched gdb once per minute with
 the gdb commands you provided.
 gdb always shows similar output (see below):
 - addresses in the Thread 1 stack are always the same
 - addresses in the Thread 2 stack: only #0 and #1 differ
 (inflate_table() and inflate()),
 - Thread 1: sometimes the call to inflate_table() does not appear

 # while [ 1 ]; do gdb -p `pgrep bextract` -x gdb.show-backtrace.commands  ; 
 sleep 4; done

 == gdb sample output ==
 This GDB was configured as x86_64-redhat-linux-gnu.
 Attaching to process 10007
 Reading symbols from /usr/sbin/bextract...(no debugging symbols 
 found)...done.
 Reading symbols from /lib64/libacl.so.1...(no debugging symbols 
 found)...done.
 Loaded symbols for /lib64/libacl.so.1
 
 Reading symbols from /lib64/libpthread.so.0...done.
 [Thread debugging using libthread_db enabled]
 [New Thread 0x2b7ee0469850 (LWP 10007)]
 [New Thread 0x40af4940 (LWP 10008)]
 Loaded symbols for /lib64/libpthread.so.0
 
 Loaded symbols for /lib64/libselinux.so.1
 Reading symbols from /lib64/libsepol.so.1...done.
 Loaded symbols for /lib64/libsepol.so.1
 0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1

 Thread 2 (Thread 0x40af4940 (LWP 10008)):
 #0  0x003a7620dfe1 in nanosleep () from /lib64/libpthread.so.0
 #1  0x003e32a1425b in bmicrosleep (sec=30, usec=0) at bsys.c:63
 #2  0x003e32a40efb in check_deadlock () at lockmgr.c:571
 #3  0x003a76206617 in start_thread () from /lib64/libpthread.so.0
 #4  0x003a75ad3c2d in clone () from /lib64/libc.so.6

 Thread 1 (Thread 0x2b7ee0469850 (LWP 10007)):
 #0  0x2b7ee025a106 in inflate_table () from /usr/lib64/libz.so.1
 #1  0x2b7ee0257537 in inflate () from /usr/lib64/libz.so.1
 #2  0x2b7ee0252396 in uncompress () from /usr/lib64/libz.so.1
 #3  0x00406b6f in record_cb ()
 #4  0x00425298 in read_records ()
 #5  0x00406438 in main ()
 == gdb sample output ==

 so is the trouble related to zlib and its use by bextract?

 Nicely done.  It looks like bug 1703:

 http://bugs.bacula.org/view.php?id=1703

Thanks for the pointer.
Indeed: a bug in bextract when using the GZIP compression scheme.
Unfortunately, I did not think of checking the Bacula bug database, so I
did the debugging job twice :(

This bug is corrected only in the development branch code.
So I just have to wait a few weeks for the 5.2 release, or backport the
patch to the 5.0.3 code on my own, right?

In the meantime, I will avoid doing anything that requires
bextract :-)

 Another thing:
 bextract and bconsole do not have the same entry point for libz.so, and only 
 for that one;
 does it mean they do not use libz the same way ?

 By entry point, do you mean the number shown after the filename in the
 output of ldd?  That is the base address in memory and isn't important.

oops ... sorry for this.

 If you have debuginfo packages for bacula, then install them first.

 these package definitions are not provided by the bacula.spec from the
 bacula-5.0.3 sources.
 Would you have such a .spec file to generate them?

 Don't worry about it -- some rpm build configurations generate them
 automatically, but maybe yours doesn't.

Using the standard bacula.spec from the sources with rpmbuild on CentOS
does not generate them.
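A hedged sketch of what could be checked here (the redhat-rpm-config dependency and the SRPM filename are assumptions): on Red Hat style systems the -debuginfo subpackages come from the %debug_package machinery, which is only active when redhat-rpm-config is installed:

```shell
# Assumption: debuginfo subpackages are produced by the %debug_package
# macros shipped in redhat-rpm-config; if that package is missing,
# rpmbuild silently skips them.
rpm -q redhat-rpm-config || yum install redhat-rpm-config
rpmbuild --rebuild bacula-5.0.3-1.src.rpm   # hypothetical SRPM name
```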


Thanks again for your help !

Regards,

Pierre Bourgin
