[Bacula-users] continuing a failed job

2010-01-06 Thread Silver Salonen
Hi.

Has anyone figured out how to continue a failed job? I have a client that 
has gigabytes of data but a very fragile internet connection, so it's 
almost impossible to complete a normal full backup job.

I thought I'd do it with a VirtualFull job to create a successful job out 
of a failed one and then use it as the basis for incrementals, but 
unfortunately VirtualFull requires a previous successful job as well (duh). 
Is there a way to mark a job successful?

-- 
Silver

--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Auto-deleting purged volumes

2010-01-06 Thread Marek Simon
Bacula is not intended to delete anything. Creating a new volume for each 
day with a new unique date-based name is not a good fit for Bacula. You 
should give the volumes generic names and recycle them instead of removing 
them (you do not destroy the tapes after one use either).
However, you can always have a cron script or a Bacula admin job which 
finds the old volumes and removes them. I have a similar script for 
deleting volumes older than 60 days, useful for automatic tidy-up when you 
often add and remove clients.

example:
STORAGE_PATH=/var/lib/bacula/backup
/usr/bin/find $STORAGE_PATH/storedevice* -type f -ctime +60 -regex '.*-all-[0-9][0-9][0-9][0-9]$' -delete
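For anyone adapting this, the selection can be tested safely as a dry run first. The sketch below uses a temporary directory and invented volume names, and substitutes -mtime for -ctime only so the demo can back-date files (the original script's -ctime cannot be faked):

```shell
#!/bin/sh
# Dry-run sketch: list (do not delete) volumes older than 60 days whose
# names match the date-suffixed pattern. Demo data only.
STORAGE_PATH=$(mktemp -d)
# Simulate one old and one recent volume.
touch -d "90 days ago" "$STORAGE_PATH/server1-all-0001"
touch "$STORAGE_PATH/server2-all-0002"
OLD=$(find "$STORAGE_PATH" -type f -mtime +60 -regex '.*-all-[0-9][0-9][0-9][0-9]$')
echo "would delete: $OLD"
# Once the selection looks right, switch back to -ctime and -delete, and
# consider also removing the catalog entries (e.g. via bconsole's
# "delete volume" command).
rm -rf "$STORAGE_PATH"
```

The -print-before--delete habit is cheap insurance when the regex and the age threshold decide what gets destroyed.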

Marek

Phil Stracchino napsal(a):
 I have a disk-based backup setup that uses dated volumes which are used
 for a 23-hour period then marked 'used', so that I can be certain a
 particular day's backups are contained in a single file.  They are of
 course purged after their retention time expires, at which time they are
 moved to the Scratch pool.
 
 What I would LIKE to happen next is to have these purged volumes
 automatically deleted both from the catalog AND from disk.  Has anyone
 else already implemented a solution to this problem?  I'd sooner not
 reinvent the wheel if there's a perfectly good existing solution out there.
 
 




Re: [Bacula-users] Trying to compile Bacula

2010-01-06 Thread francisco javier funes nieto
libssl-dev ? (Debian)


J.
2010/1/6 brown wrap gra...@yahoo.com



 I cut and pasted four errors I received while trying to build Bacula; any
 ideas?:


 crypto.c:1226: error: cannot convert ‘unsigned char*’ to ‘EVP_PKEY_CTX*’
 for argument ‘1’ to ‘int EVP_PKEY_decrypt(EVP_PKEY_CTX*, unsigned char*,
 size_t*, const unsigned char*, size_t)’
 make[1]: *** [crypto.lo] Error 1


 make[1]: *** No rule to make target `../lib/libbac.la', needed by `all'.
 Stop.

 make[1]: *** No rule to make target `../lib/libbacpy.la', needed by
 `bacula-fd'.  Stop.

 make[1]: *** No rule to make target `../lib/libbac.la', needed by
 `bconsole'.  Stop.








-- 
_

Francisco Javier Funes Nieto [esen...@gmail.com]
CANONIGOS
Servicios Informáticos para PYMES.
Cl. Cruz 2, 1º Oficina 7
Tlf: 958.536759 / 661134556
Fax: 958.521354
GRANADA - 18002


[Bacula-users] No constraints in bacula database

2010-01-06 Thread Marek Simon
Hi,
I have been doing some experiments with the Bacula database and found that 
there is no foreign key constraint between jobmedia and job, jobmedia and 
media, job and file, and many similarly logically connected tables. Why? 
Constraints would prevent dangling references (a File with an empty Job, a 
File with an empty Filename, a jobmedia record with an empty job or media).
I have bacula-dir version 2.4.4, the postgres variant, with PostgreSQL 8.3, 
default installation.
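For concreteness, the sort of constraint that is missing would look like this in PostgreSQL. This is illustrative only (table and column names as in the stock Bacula schema); applying it to a live catalog would fail wherever dangling references already exist:

```sql
-- Illustrative DDL only: adding these to an existing catalog will fail
-- if orphan rows are already present.
ALTER TABLE jobmedia
  ADD CONSTRAINT fk_jobmedia_job   FOREIGN KEY (jobid)   REFERENCES job (jobid);
ALTER TABLE jobmedia
  ADD CONSTRAINT fk_jobmedia_media FOREIGN KEY (mediaid) REFERENCES media (mediaid);
ALTER TABLE file
  ADD CONSTRAINT fk_file_job      FOREIGN KEY (jobid)   REFERENCES job (jobid);
```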



Re: [Bacula-users] Auto-deleting purged volumes

2010-01-06 Thread Mike Holden
Marek Simon wrote:
 Phil Stracchino napsal(a):
 I have a disk-based backup setup that uses dated volumes which are used
 for a 23-hour period then marked 'used', so that I can be certain a
 particular day's backups are contained in a single file.  They are of
 course purged after their retention time expires, at which time they are
 moved to the Scratch pool.

 What I would LIKE to happen next is to have these purged volumes
 automatically deleted both from the catalog AND from disk.  Has anyone
 else already implemented a solution to this problem?  I'd sooner not
 reinvent the wheel if there's a perfectly good existing solution out there.


(top-posting fixed)

 Bacula is not intended to delete anything. Creating a new volume for each
 day with a new unique date-based name is not a good fit for Bacula. You
 should give the volumes generic names and recycle them instead of removing
 them (you do not destroy the tapes after one use either).
 However, you can always have a cron script or a Bacula admin job which
 finds the old volumes and removes them. I have a similar script for
 deleting volumes older than 60 days, useful for automatic tidy-up when you
 often add and remove clients.

 example:
 STORAGE_PATH=/var/lib/bacula/backup
 /usr/bin/find $STORAGE_PATH/storedevice* -type f -ctime +60 -regex
 '.*-all-[0-9][0-9][0-9][0-9]$' -delete

 Marek


Attached is the script I wrote to take care of this. What I found is that
over time, backups are occasionally larger than expected due to events like
software updates or whatever, and disk space usage slowly increases. I
initially took care of this by adding extra space, using LVM to grow the
partition and reiserfs to grow the filesystem into the new space.
Eventually, though, you run out of physical disk to add!

Although in principle it is good that Bacula keeps backed-up data until it
is needed again, in practice it can lead to unbounded disk space growth
over time.

The attached script is PHP, and connects to the Bacula database (plug in
the appropriate user name, password and server name in the connect
statement at the top). It first looks for volumes past their expiry date
and purges them. It then looks for purged volumes and deletes them.
Finally, it looks in the backup directory (plug in your location here) for
files with no matching database entry and deletes them.

Use at your own risk etc. I'm not responsible if it deletes your backups,
crashes your car or causes your freezer to thaw out!

Automated removal as detailed by Marek above is difficult to implement
simply, because you will probably have multiple retention periods on your
files, and you need to take this into account when deleting. Plus, deleting
the files while they are still listed in the Bacula database is probably
not a good idea!
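The third step (removing files that have no catalog entry) is the easiest to get wrong; independent of PHP, the logic boils down to a set difference like the sketch below. The directory layout and volume names are invented for the demo, and CATALOG_VOLS stands in for the result of a query such as SELECT volumename FROM media:

```shell
#!/bin/sh
# Sketch of orphan-file detection: anything on disk the catalog does not
# know about. Demo data only; a real run would populate CATALOG_VOLS from
# the database and review the list before removing anything.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/Vol-0001" "$BACKUP_DIR/Vol-0002" "$BACKUP_DIR/stray-file"
# Stand-in for the catalog query result (one volume name per line):
CATALOG_VOLS="Vol-0001
Vol-0002"
ORPHANS=""
for f in "$BACKUP_DIR"/*; do
    name=$(basename "$f")
    # grep -x: the whole line must match, so partial name overlaps don't count
    echo "$CATALOG_VOLS" | grep -qx "$name" || ORPHANS="$ORPHANS $name"
done
echo "orphans:$ORPHANS"   # the candidates to review, then remove
rm -rf "$BACKUP_DIR"
```

Doing the catalog query and the directory scan close together in time matters; a volume labelled between the two steps would otherwise look like an orphan.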
-- 
Mike Holden

http://www.by-ang.com - the place to shop
for all manner of hand crafted items,
including Jewellery, Greetings Cards and Gifts



[Attachment: purgeall (application/php)]


Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Tino Schwarze
On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:

  I'm not seeing anywhere close to 60 MB/s (< 30). I think I just fixed
  that. I increased the block size to 1M, and that seemed to really
  increase the throughput in the test I just did. I will see tomorrow,
  when it all runs.
 
  Yes, if you aren't already, whenever writing to tape you should almost
  without exception (and certainly on any modern tape drive) be using the
  largest block size that btape says your drive supports.
 
 I tried to do that years ago but I believe this made all tapes that
 were already written to unreadable (and I now have 80) so I gave this
 up. With my 5+ year old dual processor Opteron 248 server I get 25MB/s
 to 45MB/s despools (which measures the actual tape rate) for my LTO2
 drives. The reason for the wide range seems to be compression.

Can anybody confirm or rebut this for 2.2.x? I'm currently fiddling
with Maximum Block Size and a shiny new tape. It looks like 1M is too
much for my tape drive, but 512K seems to work and it's making a huge
difference: btape fill reports > 60 MB/s right at the beginning, then
drops to about 52 MB/s.

Thanks,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [Bacula-users] Multiple SD's

2010-01-06 Thread Daniel Holtkamp
Hello !

On 1/5/10 6:40 PM, Phil Stracchino wrote:
 Brian Debelius wrote:
 I want to see if having disk storage on one sd process, and tape storage 
 on another sd process, would increase throughput during copy jobs.
 
 Actually, it'll decrease it rather drastically.  All the way to none.

Good laugh there :)

 You see, at the present time you cannot copy or migrate from one SD to
 another.  Copy and migration operations can only occur between devices
 attached to a single SD, because SDs cannot yet communicate directly
 with one another, which they would need to do to perform inter-SD copies
 or migrations.

My company has requested this feature (copy jobs between multiple SDs)
and is willing to pay half of what Bacula Systems wants for implementing
it. They asked for 10k€ and we are willing to pay 5k€.

Our main reason to request this feature is that we could have a central
director in our office, an SD in our office and an SD in the datacenter.
We could then do a (local) backup in the datacenter and copy those jobs
to our office SD during off-hours. And from there they would get copied
to tape for archival.

It would be great if we could find more people willing to commit money
to this feature. On the other hand, if any devs feel up to the task and
the 5k€ we offer is enough, that would be great too.

Best regards,

Daniel Holtkamp

-- 
.
Riege Software International GmbH  Fon: +49 (2159) 9148 0
Mollsfeld 10   Fax: +49 (2159) 9148 11
40670 MeerbuschWeb: www.riege.com
GermanyE-Mail: holtk...@riege.com
------
Handelsregister:   Managing Directors:
Amtsgericht Neuss HRB-NR 4207  Christian Riege
USt-ID-Nr.: DE120585842Gabriele  Riege
   Johannes  Riege
.
   YOU CARE FOR FREIGHT, WE CARE FOR YOU  






Re: [Bacula-users] Auto-deleting purged volumes

2010-01-06 Thread Phil Stracchino
Marek Simon wrote:
 Bacula is not intended to delete anything. Creating a new volume for each
 day with a new unique date-based name is not a good fit for Bacula. You
 should give the volumes generic names and recycle them instead of removing
 them (you do not destroy the tapes after one use either).

Indeed, recycling is the obvious approach for tapes.  For disk-based
backup, though, I have found it to be non-optimal.

 However, you can always have a cron script or a Bacula admin job which
 finds the old volumes and removes them. I have a similar script for
 deleting volumes older than 60 days, useful for automatic tidy-up when you
 often add and remove clients.

Deleting the volumes from disk is a simple problem.  I was asking
whether anyone had come up with a standard way to delete expired
volumes from the catalog as a scheduled admin job.
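One way to schedule this inside Bacula itself is an Admin-type job whose RunScript drives bconsole. A sketch of the bacula-dir.conf side follows; the job/script names and paths are assumptions for illustration, not a tested recipe:

```
# Sketch of a scheduled catalog-cleanup job. The script it calls would
# feed "delete volume=... yes" commands to bconsole for volumes that are
# already purged; all names and paths here are placeholders.
Job {
  Name = "CatalogCleanup"
  Type = Admin
  Client = backup-server-fd
  FileSet = "Catalog"
  Schedule = "WeeklyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Command = "/usr/local/sbin/delete-purged-volumes.sh"
  }
}
```

An Admin job runs no backup itself, so the Client/FileSet/Pool entries only need to exist syntactically; the RunScript does the actual work.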


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Ralf Gross
Tino Schwarze schrieb:
 On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
 
  I'm not seeing anywhere close to 60 MB/s (< 30). I think I just fixed
  that. I increased the block size to 1M, and that seemed to really
  increase the throughput in the test I just did. I will see tomorrow,
  when it all runs.
  
   Yes, if you aren't already, whenever writing to tape you should almost
   without exception (and certainly on any modern tape drive) be using the
   largest block size that btape says your drive supports.
  
  I tried to do that years ago but I believe this made all tapes that
  were already written to unreadable (and I now have 80) so I gave this
  up. With my 5+ year old dual processor Opteron 248 server I get 25MB/s
  to 45MB/s despools (which measures the actual tape rate) for my LTO2
  drives. The reason for the wide range seems to be compression.
 
 Can anybody confirm or rebut this for 2.2.x? I'm currently fiddling
 with Maximum Block Size and a shiny new tape. It looks like 1M is too
 much for my tape drive, but 512K seems to work and it's making a huge
 difference: btape fill reports > 60 MB/s right at the beginning, then
 drops to about 52 MB/s.

With

Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144

I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
Size gave me some extra MB/s; I think it's as important as Maximum
Block Size.
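For reference, these directives live in the SD's Device resource, as in the sketch below (drive name and device path are placeholders; values from above). Note that, as mentioned earlier in the thread, changing Maximum Block Size can make tapes written with the old block size unreadable, so it only safely applies to newly labelled tapes:

```
# Sketch of a bacula-sd.conf Device resource with the tuning above.
Device {
  Name = LTO4-Drive
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = Yes
  AlwaysOpen = Yes
  RemovableMedia = Yes
  RandomAccess = No
  Maximum File Size = 5G
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
}
```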

Ralf



Re: [Bacula-users] Multiple SD's

2010-01-06 Thread Phil Stracchino
Daniel Holtkamp wrote:
 Hello !
 
 On 1/5/10 6:40 PM, Phil Stracchino wrote:
 Brian Debelius wrote:
 I want to see if having disk storage on one sd process, and tape storage 
 on another sd process, would increase throughput during copy jobs.
 Actually, it'll decrease it rather drastically.  All the way to none.
 
 Good laugh there :)
 
 You see, at the present time you cannot copy or migrate from one SD to
 another.  Copy and migration operations can only occur between devices
 attached to a single SD, because SDs cannot yet communicate directly
 with one another, which they would need to do to perform inter-SD copies
 or migrations.
 
 My company has requested this feature (copy jobs between multiple SDs)
 and is willing to pay half of what Bacula Systems wants for implementing
 it. They asked for 10k€ and we are willing to pay 5k€.
 
 Our main reason to request this feature is that we could have a central
 director in our office, an SD in our office and an SD in the datacenter.
 We could then do a (local) backup in the datacenter and copy those jobs
 to our office SD during off-hours. And from there they would get copied
 to tape for archival.
 
 It would be great if we could find more people willing to commit money
 to this feature. On the other hand if any devs feel up to the task and
 the 5k€ we offer is enough that would be great too.


It's one of the problems I've been planning to look at myself, once I
have a current dev environment on my workstation again.  I plan to get
my feet wet again with a few simpler issues first though.

I want this capability too, and for similar reasons.  All of my nightly
backups run to a disk pool, but I'd like to copy Full backups to tape
after running.  However, the SCSI tape drive is on a physically separate
machine on the other side of a wall from the disk server, and I'm not
comfortable with putting a tape drive where the disk server is, tape
being rather more sensitive than disk in terms of environmental limits.
 (My server room is heated only by the servers, and can get pretty cold
in winter.)


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Tino Schwarze
Hi there,

On Tue, Jan 05, 2010 at 07:30:44PM +0100, Tino Schwarze wrote:

   It looks like btape is not happy.
  
   Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device 
   Superloader-Drive (/dev/nst0).
  
   Are your tapes old (still good)? Did you clean the drive? Latest Firmware?
  
 The drive requested cleaning just yesterday, which I did. I'm not sure
 how old the tape in question really is. I used a Dell update utility and
 it updated the drive to v2181, which seems to be the latest for a
 half-height SCSI tape drive. 30 MB/s seems a bit low to me - it should be
 able to do 60 MB/s per spec. Or is that just theory vs. real world?

  I would add: are you using LTO3 tapes or LTO2 tapes?
 
 They are labelled 400GB/800GB - all of the same kind, so they're
 definitely LTO3. The autoloader says Drive Idle Gen 3 Data in its web
 interface, so yes, I'm sure.
 
 I could check a very new tape tomorrow or try a shiny new one, just to
 be sure.

Here are the results for a shiny new tape with block size 512k:

 Wrote blk_block=75, dev_blk_num=651 VolBytes=391,110,459,392 rate=59638.7 KB/s
 06-Jan 13:50 btape JobId 0: End of Volume TestVolume1 at 397:1147 on device Superloader-Drive (/dev/nst0). Write of 524288 bytes got -1.
 06-Jan 13:50 btape JobId 0: Re-read of last block succeeded.  btape: btape.c:2345 Last block at: 397:1146 this_dev_block_num=1147
 btape: btape.c:2379 End of tape 397:0. VolumeCapacity=391,370,506,240.
 Write rate = 59587.5 KB/s
 Done writing 0 records ...
 Wrote state file last_block_num1=1146 last_block_num2=0
 
 
 13:50:27 Done filling tape at 397:0. Now beginning re-read of tape ...
 06-Jan 13:50 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
 06-Jan 13:50 btape JobId 0: 3302 Autochanger loaded? drive 0, result is Slot 1.
 06-Jan 13:50 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
 06-Jan 13:50 btape JobId 0: 3302 Autochanger loaded? drive 0, result is Slot 1.
 06-Jan 13:50 btape JobId 0: Ready to read from volume TestVolume1 on device Superloader-Drive (/dev/nst0).
 Rewinding.
 Reading the first 1 records from 0:0.
 1 records read now at 1:626
 Reposition from 1:626 to 397:1146
 Reading block 1146.
 
 The last block on the tape matches. Test succeeded.

So this looks very promising: 390 GB, and I'm also seeing almost 60 MB/s.
I will repeat this test using one of the older tapes, then report back.

Is it possible that the low block size of 64k affects tape capacity? It
looks suspicious to me that all tapes end at about the same size...

Thanks so far,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [Bacula-users] [Bacula-users-es] Documentation and translations

2010-01-06 Thread Victor Hugo dos Santos
On Sun, Jan 3, 2010 at 4:08 PM, Kern Sibbald k...@sibbald.com wrote:
 Hello,

 The other day, Eric pointed out to me that some of the manuals that we have
 posted in different languages are quite out of date -- in fact, the partial
 French translation apparently dates from version 1.38.

Hello Masters,

Well, I believe the big problem is that the documentation doesn't have a
friendly interface for translating, and it isn't easy to compare versions.

IMHO, a tool or interface like http://l10n.openoffice.org/ would be a good
idea (maybe not exactly the tool we need, but it's a good example).

In summary, does anyone know a collaborative tool for translation that can
import/export LaTeX documents, show untranslated paragraphs, and make a
good coffee? :D

 I hope you all had a great holiday season and survived seeing the New Year
 in ...

Same here.

Regards

-- 
-- 
Victor Hugo dos Santos
Linux Counter #224399



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Tino Schwarze
On Wed, Jan 06, 2010 at 02:20:06PM +0100, Tino Schwarze wrote:

It looks like btape is not happy.
   
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on 
device Superloader-Drive (/dev/nst0).
   
Are your tapes old (still good)? Did you clean the drive? Latest 
Firmware?
   
  The drive requested cleaning just yesterday, which I did. I'm not sure
  how old the tape in question really is. I used a Dell update utility and
  it updated the drive to v2181, which seems to be the latest for a
  half-height SCSI tape drive. 30 MB/s seems a bit low to me - it should be
  able to do 60 MB/s per spec. Or is that just theory vs. real world?
 
   I would add: are you using LTO3 tapes or LTO2 tapes?
  
  They are labelled 400GB/800GB - all of the same kind, so they're
  definitely LTO3. The autoloader says Drive Idle Gen 3 Data in its web
  interface, so yes, I'm sure.
  
  I could check a very new tape tomorrow or try a shiny new one, just to
  be sure.
 
 Here are the results for a shiny new tape with block size 512k:
 
  Wrote blk_block=75, dev_blk_num=651 VolBytes=391,110,459,392 rate=59638.7 KB/s
  06-Jan 13:50 btape JobId 0: End of Volume TestVolume1 at 397:1147 on device Superloader-Drive (/dev/nst0). Write of 524288 bytes got -1.
  06-Jan 13:50 btape JobId 0: Re-read of last block succeeded.  btape: btape.c:2345 Last block at: 397:1146 this_dev_block_num=1147
  btape: btape.c:2379 End of tape 397:0. VolumeCapacity=391,370,506,240.
  Write rate = 59587.5 KB/s
  Done writing 0 records ...
  Wrote state file last_block_num1=1146 last_block_num2=0
  
  
  13:50:27 Done filling tape at 397:0. Now beginning re-read of tape ...
  06-Jan 13:50 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
  06-Jan 13:50 btape JobId 0: 3302 Autochanger loaded? drive 0, result is Slot 1.
  06-Jan 13:50 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
  06-Jan 13:50 btape JobId 0: 3302 Autochanger loaded? drive 0, result is Slot 1.
  06-Jan 13:50 btape JobId 0: Ready to read from volume TestVolume1 on device Superloader-Drive (/dev/nst0).
  Rewinding.
  Reading the first 1 records from 0:0.
  1 records read now at 1:626
  Reposition from 1:626 to 397:1146
  Reading block 1146.
  
  The last block on the tape matches. Test succeeded.
 
 So this looks very promising: 390 GB and I'm also seeing almost 60MB/s.
 I will repeat this test using one of the older tapes, then report back.
 
 Is it possible that the low block size of 64k affects tape capacity? It
 looks suspicious to me that all tapes end at about the same size...

It feels like the tapes really are worn out. I aborted the new btape
test since the throughput was really low:

14:31:54 Begin writing Bacula records to tape ...
Wrote blk_block=5000, dev_blk_num=1185 VolBytes=2,620,915,712 rate=24494.5 KB/s
Wrote blk_block=10000, dev_blk_num=464 VolBytes=5,242,355,712 rate=24845.3 KB/s
Wrote blk_block=15000, dev_blk_num=1650 VolBytes=7,863,795,712 rate=26656.9 KB/s
Wrote blk_block=20000, dev_blk_num=929 VolBytes=10,485,235,712 rate=24613.2 KB/s
Wrote blk_block=25000, dev_blk_num=208 VolBytes=13,106,675,712 rate=24590.4 KB/s
Wrote blk_block=30000, dev_blk_num=1394 VolBytes=15,728,115,712 rate=25532.7 KB/s
14:43:17 Flush block, write EOF

All the same config, just an older tape. :-( I'm not sure how often
it's been used because of our rather complicated multi-stage backup
scheme (daily/weekly/monthly/yearly pools). Is it possible that the
drive simply wastes capacity because of low throughput?

What's your experience: How often may a tape be written to?

Thanks,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Brian Debelius
I believe they are rated for 250 complete passes.

On 1/6/2010 9:05 AM, Tino Schwarze wrote:
 All the same config, just an older tape. :-( I'm not sure how often
 it's been used because of our rather complicated multi-stage backup
 scheme (daily/weekly/monthly/yearly pools). Is it possible that the
 drive simply wastes capacity because of low throughput?

 What's your experience: How often may a tape be written to?

 Thanks,

 Tino.






Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Brian Debelius
That should not matter.  I have read somewhere that Kern said that very 
large block sizes can waste space; I do not know how this works, though.


On 1/6/2010 8:20 AM, Tino Schwarze wrote:

 Is it possible that the low block size of 64k affects tape capacity? It
 looks suspicious to me that all tapes end at about the same size...

 Thanks so far,

 Tino.






Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Brian Debelius
Put a new tape in and run tapeinfo -f /dev/nst0; it should report the 
block size range your drive can support, along with lots of other 
useful information.



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread John Drescher
 Is it possible that the low block size of 64k affects tape capacity? It
 looks suspicious to me that all tapes end at about the same size...


I have never seen it go below native capacity. I usually get around a
1.5:1 compression ratio on my data, with outliers close to 1.1:1 and
5.5:1.

However, my tapes do not get reused that many times. Most of my 80 tapes
are used for archive purposes (never get deleted), and of the rest, over
10 are in a rotation that lasts months before they are recycled.

John



[Bacula-users] bacula and curlftpfs does not work

2010-01-06 Thread Karsten Schulze
System: Debian Squeeze (testing),
Kernel version 2.6.26 (modified)
Bacula V3.0.2. and MySQL v5.1.41
curlftpfs 0.9.2-1
curl 7.19.7-1

At the moment I am testing some configurations before I go to the live 
system.
I want to store the backup files on the FTP server using curlftpfs,
but I am not able to create a label on this machine.

The configuration of bacula
/etc/bacula/bacula-dir.conf:
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = Yes   # Bacula can automatically recycle Volumes
  AutoPrune = Yes # Prune expired volumes
  Volume Retention = 9 days
  Label Format = Volume-
  Maximum Volume Bytes = 1g
  Maximum Volumes = 25  
}

/etc/bacula/bacula-sd.conf:
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /home/bacula/archive/Daten/Backup
  LabelMedia = Yes
  Random Access = Yes
  Maximum Volume Size = 1g
  AutomaticMount = Yes   # when device opened, read it
  RemovableMedia = No
}

/etc/group:
...
fuse:x:120:bacula
...

/etc/fuse.conf:
# Allow non-root users to specify the 'allow_other' or 'allow_root' mount options.
user_allow_other


I have mounted the ftp-filesystem permanently using the following command:
# curlftpfs -o user=karsten:mysecret,debug,allow_other,uid=117  
192.168.1.43 /home/bacula/archive
( I added the debug flag to get some hints )
In the shell it is possible to create a file:
# sudo -u bacula cat > /home/bacula/archive/Daten/Backup/VolTest2
s
^C
# ll -al /home/bacula/archive/Daten/Backup
...
-rw-r--r-- 1 bacula root6  6. Jan 14:21 VolTest2
...
#

When I try to create a label in bconsole, I get the following message:
*label
Automatically selected Storage: File
Enter new Volume name: Vol4
Automatically selected Pool: Default
Connecting to Storage daemon File at vdrserver:9103 ...
Sending label command for Volume Vol4 Slot 0 ...
Jmsg Job=*System* type=5 level=1262785239 vdrserver-sd JobId 0: Warning: 
dev.c:534 dev.c:532 Could not open: 
/home/bacula/archive/Daten/Backup/Vol4, ERR=Operation not supported
3910 Unable to open device FileStorage 
(/home/bacula/archive/Daten/Backup): ERR=dev.c:532 Could not open: 
/home/bacula/archive/Daten/Backup/Vol4, ERR=Operation not supported

Label command failed for Volume Vol4.
Do not forget to mount the drive!!!
*
mount command is not helpful.

Below a short list of debug messages of curlftpfs:

with successful command in the shell:
...
getattr /Daten/Backup/VolTest1
ftpfs: operation ftpfs_getattr failed because No such file or directory
   unique: 5, error: -2 (No such file or directory), outsize: 16
unique: 6, opcode: CREATE (35), nodeid: 3, insize: 57
create flags: 0x8241 /Daten/Backup/VolTest1 0100644 umask=
   create[147432816] flags: 0x8241 /Daten/Backup/VolTest1


with label command of bacula:
...
getattr /Daten/Backup/Vol4
ftpfs: operation ftpfs_getattr failed because No such file or directory
   unique: 25, error: -2 (No such file or directory), outsize: 16
unique: 26, opcode: CREATE (35), nodeid: 3, insize: 53
  create flags: 0x8042 /Daten/Backup/Vol4 0100640 umask=
ftpfs: operation ftpfs_open failed because Operation not supported
   unique: 26, error: -95 (Operation not supported), outsize: 16

   
Why does bacula use this interface in a different way? What is wrong with 
my configuration?
Any hints are appreciated.

Karsten



Re: [Bacula-users] Multiple SD's

2010-01-06 Thread Jason A. Kates
One of the changes in the pipeline seems to be making the SD
multithreaded. Thus if you use a single SD, it will be able to use multiple
CPUs going forward. I don't know what the ETA is, but I look forward
to that change.
-Jason


On Tue, 2010-01-05 at 12:56 -0500, Brian Debelius wrote:
 Hrumph...sigh.
 
 On 1/5/2010 12:40 PM, Phil Stracchino wrote:
  Brian Debelius wrote:
 
  I want to see if having disk storage on one sd process, and tape storage
  on another sd process, would increase throughput during copy jobs.
   
  Actually, it'll decrease it rather drastically.  All the way to none.
 
  You see, at the present time you cannot copy or migrate from one SD to
  another.  Copy and migration operations can only occur between devices
  attached to a single SD, because SDs cannot yet communicate directly
  with one another, which they would need to do to perform inter-SD copies
  or migrations.
 
 
 
 
 


-- 

Jason A. Kates (ja...@kates.org) 
Fax:208-975-1514
Phone:  660-960-0070





[Bacula-users] Is Btape limited to 100000 max block size?

2010-01-06 Thread Brian Debelius
Is btape limited to a 1,000,000-byte maximum block size? 999424 works; 
1000448 fails.

v3.0.3



Re: [Bacula-users] Trying to compile Bacula

2010-01-06 Thread brown wrap
No, it's Fedora 12.



--- On Wed, 1/6/10, francisco javier funes nieto esen...@gmail.com wrote:

From: francisco javier funes nieto esen...@gmail.com
Subject: Re: [Bacula-users] Trying to compile Bacula
To: brown wrap gra...@yahoo.com
Cc: bacula-users@lists.sourceforge.net
Date: Wednesday, January 6, 2010, 2:37 AM

libssl-dev ? (Debian)


J.
2010/1/6 brown wrap gra...@yahoo.com



I cut and pasted four errors I received while trying to build Bacula; any ideas?:


crypto.c:1226:
error: cannot convert ‘unsigned char*’ to ‘EVP_PKEY_CTX*’ for argument
‘1’ to ‘int EVP_PKEY_decrypt(EVP_PKEY_CTX*, unsigned char*, size_t*,
const unsigned char*, size_t)’
make[1]: *** [crypto.lo] Error 1


make[1]: *** No rule to make target `../lib/libbac.la', needed by `all'.  Stop.


make[1]: *** No rule to make target `../lib/libbacpy.la', needed by 
`bacula-fd'.  Stop.

make[1]: *** No rule to make target `../lib/libbac.la', needed by `bconsole'.  
Stop.





  





-- 
_

Francisco Javier Funes Nieto [esen...@gmail.com]
CANONIGOS
Servicios Informáticos para PYMES.

Cl. Cruz 2, 1º Oficina 7
Tlf: 958.536759 / 661134556
Fax: 958.521354
GRANADA - 18002







Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Thomas Mueller

  I tried to do that years ago but I believe this made all tapes that
  were already written to unreadable (and I now have 80) so I gave this
  up. With my 5+ year old dual processor Opteron 248 server I get
  25MB/s to 45MB/s despools (which measures the actual tape rate) for
  my LTO2 drives. The reason for the wide range seems to be
  compression.
 
 Can anybody confirm or rebut this for 2.2.x? I'm currently fiddling
 with Maximum Block Size and a shiny new tape. It looks like 1M is too
 much for my tape drive, but 512K seems to work and it's making a huge
 difference: btape fill reports > 60 MB/s right at the beginning, then
 drops to about 52 MB/s.
 
 With
 
 Maximum File Size = 5G
 Maximum Block Size = 262144
 Maximum Network Buffer Size = 262144
 
 I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
 Size gave me some extra MB/s, I think it's as important as the Maximum
 Block Size.
 

thanks for providing these hints. just searching for why my LTO-4 is 
writing at just 40 MB/s. will try them out!

searching the Maximum File Size in the manual I found this:

If you are configuring an LTO-3 or LTO-4 tape, you probably will want to 
set the Maximum File Size to 2GB to avoid making the drive stop to write 
an EOF mark. 

maybe this is the reason for the extra MB/s. 
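Pulling the directives from this thread together, the SD Device resource would look roughly like the sketch below (device name and path are placeholders; note the warning earlier in the thread that changing Maximum Block Size can make already-written tapes unreadable):

```conf
Device {
  Name = LTO4-Drive                      # placeholder name
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Maximum File Size = 2G                 # fewer EOF marks on LTO-3/LTO-4
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
}
```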


- Thomas




Re: [Bacula-users] continuing a failed job

2010-01-06 Thread Thomas Mueller
On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:

 Hi.
 
 Has anyone figured out how to continue a failed job? I have a client
 that has gigabytes of data, but very fragile internet connection, so
 it's almost impossible to get a normal full backup job.
 
 I thought I'd do it with VirtualFull job to create a successful job out
 of failed one and then use it as basis for incremental, but
 unfortunately VirtualFull requires a previous successful job as well
 (duh). Is there a way to mark a job successful?

you can't continue a failed job. 

this is a known problem with unstable internet connections. maybe you 
can work around it with OpenVPN (or something like that) to simply hide 
short outages from bacula. 

- Thomas




Re: [Bacula-users] continuing a failed job

2010-01-06 Thread Timo Neuvonen
Thomas Mueller tho...@chaschperli.ch wrote in message 
news:hi2ffj$ga...@ger.gmane.org...
 On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:

 Hi.

 Has anyone figured out how to continue a failed job? I have a client
 that has gigabytes of data, but very fragile internet connection, so
 it's almost impossible to get a normal full backup job.

 I thought I'd do it with VirtualFull job to create a successful job out
 of failed one and then use it as basis for incremental, but
 unfortunately VirtualFull requires a previous successful job as well
 (duh). Is there a way to mark a job successful?

 you can't continue a failed job.

 this is a known problem with unstable internet connections. maybe you
 can work around with openvpn (or something like that) to simply hide
 short outages to bacula.


Another workaround could be splitting the fileset into several smaller ones. 
This way each job takes less time and is more likely to finish 
successfully. And if it doesn't, re-running it takes less time.
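As a sketch, such a split could look like the following (FileSet names and paths are hypothetical):

```conf
# Two smaller FileSets instead of one big one; give each its own Job so a
# dropped connection only forces a re-run of that part.
FileSet {
  Name = "remote-home"
  Include {
    Options { signature = MD5 }
    File = /home
  }
}
FileSet {
  Name = "remote-var"
  Include {
    Options { signature = MD5 }
    File = /var
  }
}
```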

--
TiN 





Re: [Bacula-users] continuing a failed job

2010-01-06 Thread Phil Stracchino
Thomas Mueller wrote:
 On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
 
 Hi.

 Has anyone figured out how to continue a failed job? I have a client
 that has gigabytes of data, but very fragile internet connection, so
 it's almost impossible to get a normal full backup job.

 I thought I'd do it with VirtualFull job to create a successful job out
 of failed one and then use it as basis for incremental, but
 unfortunately VirtualFull requires a previous successful job as well
 (duh). Is there a way to mark a job successful?
 
 you can't continue a failed job. 
 
 this is a known problem with unstable internet connections. maybe you 
 can work around with openvpn (or something like that) to simply hide 
 short outages to bacula. 

My first inclination is to say that if the network connection to the
machine is sufficiently unstable that you can't complete a full backup,
you probably shouldn't be trying to back it up over the network.
However, there are ways to work around the problem.

Were I trying to work around such a situation, with a mission-critical
client on the far side of an unstable network connection, I would
probably create a local partition somewhere of the same size as the disk
on the remote machine, mirror the remote machine to that via rsync, and
then back up the local mirror.  The rsync will only need to transfer
small amounts of data each day, and with the way rsync works, if one
rsync is interrupted by a network outage, the next one will just pick up
where it left off.  Should you need to do a restore, you restore to the
local mirror, then rsync only the restored files back to the remote client.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Ralf Gross
Thomas Mueller wrote:
 
   I tried to do that years ago but I believe this made all tapes that
   were already written to unreadable (and I now have 80) so I gave this
   up. With my 5+ year old dual processor Opteron 248 server I get
   25MB/s to 45MB/s despools (which measures the actual tape rate) for
   my LTO2 drives. The reason for the wide range seems to be
   compression.
  
  Can anybody confirm or rebut this for 2.2.x? I'm currently fiddling
  with Maximum Block Size and a shiny new tape. It looks like 1M is too
  much for my tape drive, but 512K seems to work and it's making a huge
  difference: btape fill reports > 60 MB/s right at the beginning, then
  drops to about 52 MB/s.
  
  With
  
  Maximum File Size = 5G
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
  
  I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
  Size gave me some extra MB/s, I think it's as important as the Maximum
  Block Size.
  
 
 thanks for providing these hints. just searching for why my LTO-4 is 
 writing at just 40 MB/s. will try them out!
 
 searching the Maximum File Size in the manual I found this:
 
 If you are configuring an LTO-3 or LTO-4 tape, you probably will want to 
 set the Maximum File Size to 2GB to avoid making the drive stop to write 
 an EOF mark. 
 
 maybe this is the reason for the extra MB/s. 

Modifying the Maximum Block Size to more than 262144 didn't change
much here. But changing the File Size did. Much.

Anyway, 40 MB/s seems a bit low, even with the defaults. Before tuning
our setup I got ~75 MB/s. Are you spooling the data to disk or writing
directly to tape?

Ralf



Re: [Bacula-users] bacula and curlftpfs 0.9.2 does not work (anymore)

2010-01-06 Thread Karsten Schulze
I believe that bacula no longer works with curlftpfs (version 0.9.2).
I have found several reports that describe similar behavior.
https://bugs.launchpad.net/ubuntu/+source/curlftpfs/+bug/367091
http://sourceforge.net/projects/curlftpfs/forums/forum/542750/topic/3295831

Finally I found the release notes of curlftpfs 0.9.2:
http://sourceforge.net/project/shownotes.php?release_id=602461
"Be aware that some applications might not be able to save files on 
curlftpfs from 0.9.2 on, because we don't support open(read+write) or 
open(write) and seek anymore."

I have written a small program to test this functionality. You can use 
it to verify your environment.
#include <stdio.h>
#include <string.h>

  int main(void) {
//FILE *fp = fopen("/home/bacula/test", "w+b");
//works fine (without curlftpfs)
//FILE *fp = fopen("/home/bacula/archive/Daten/Backup/test", "a+b");
//create flags: 0x442 /Daten/Backup/test 0100644 umask=
//ftpfs: operation ftpfs_open failed because Operation not supported
//FILE *fp = fopen("/home/bacula/archive/Daten/Backup/test", "a+");
//create flags: 0x442 /Daten/Backup/test 0100644 umask=
//ftpfs: operation ftpfs_open failed because Operation not supported
//FILE *fp = fopen("/home/bacula/archive/Daten/Backup/test", "w+");
//create flags: 0x242 /Daten/Backup/test 0100644 umask=
//ftpfs: operation ftpfs_open failed because Operation not supported
FILE *fp = fopen("/home/bacula/archive/Daten/Backup/test", "w+b");
//create flags: 0x242 /Daten/Backup/test 0100644 umask=
//ftpfs: operation ftpfs_open failed because Operation not supported

fprintf(stdout, "I try to open file\n");
if (fp == NULL)
{
  fprintf(stdout, "Error: can't open file.\n");
  return 1;
}
else {
  char str[40];
  int i;

  strcpy(str, "somecharacters");
  printf("File opened successfully. Writing\n\n");
  for (i = 0; i < 8; i++) {
    fputc(str[i], fp);
  }
}
fclose(fp);
return 0;
  }

I would recommend that the Bacula documentation mention this 
incompatibility.
If you want to store volumes via FTP, you have to find an alternative 
(which one?).

Br, Karsten




[Bacula-users] HP MSL2024 Autochanger ERR=Device or resource busy.

2010-01-06 Thread Richard Scobie
I have an MSL2024 library I have been battling to get running for the 
last couple of days and am about out of ideas. It is the parallel SCSI 
attached LTO-4 version, connected to an HP LSI based HBA.

I have bacula configured and it passes the mtx-changer test commands 
recommended in the manual and also passes all the btape test tests, 
including the auto test.

However, operationally after starting out, something appears to not 
release /dev/nst0 and everything fails beyond this point.

Below is an example where from bconsole I issued "automount off" and then 
the "label" command:

Connecting to Storage daemon HP_MSL2024 at library1.sauce.co.nz:9103 ...
Sending label command for Volume GQJ800L4 Slot 1 ...
3301 Issuing autochanger loaded? drive 0 command.
3302 Autochanger loaded? drive 0, result: nothing loaded.
3304 Issuing autochanger load slot 1, drive 0 command.
3305 Autochanger load slot 1, drive 0, status is OK.
block.c:1010 Read error on fd=5 at file:blk 0:0 on device Drive-1 
(/dev/nst0). ERR=Device or resource busy.
3000 OK label. VolBytes=1024 DVD=0 Volume=GQJ800L4 Device=Drive-1 
(/dev/nst0)
Catalog record for Volume GQJ800L4, Slot 1  successfully created.
Do not forget to mount the drive!!!

Another example: after resetting everything, loading 12 blank tapes, and 
issuing "label barcodes", it labelled the first 2 tapes OK, then every 
subsequent one included the error:

block.c:1010 Read error on fd=5 at file:blk 0:0 on device Drive-1 
(/dev/nst0). ERR=Device or resource busy.

I am almost sure it is not a bacula issue as I have just stopped bacula 
and run btape:

btape -c bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:285 Using device: /dev/nst0 for writing.
07-Jan 10:03 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 
command.
07-Jan 10:03 btape JobId 0: 3302 Autochanger loaded? drive 0, result 
is Slot 1.
btape: btape.c:383 open device Drive-1 (/dev/nst0): OK
*readlabel
07-Jan 10:03 btape JobId 0: Error: block.c:1010 Read error on fd=3 at 
file:blk 0:0 on device Drive-1 (/dev/nst0). ERR=Device or resource busy.
btape: btape.c:432 Volume has no label.

Volume Label:
Id                : **error**
VerNo             : 0
VolName   :
PrevVolName   :
VolFile   : 0
LabelType : Unknown 0
LabelSize : 0
PoolName  :
MediaType :
PoolType  :
HostName  :
Date label written: -4712-01-01 at 00:00

Here is the relevant director conf entry:

# Definition of tape storage device
Storage {
   Name = HP_MSL2024
   Address = library1.sauce.co.nz
   SDPort = 9103
   Password = deliberately munged  # password for Storage daemon
   Device = HP_MSL2024  # must be same as Device in 
Storage daemon
   Media Type = LTO-4  # must be same as MediaType in 
Storage daemon
   Autochanger = yes   # enable for autochanger device
}

Storage daemon conf:

Autochanger {
   Name = HP_MSL2024
   Device = Drive-1
   Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg9
}

Device {
   Name = Drive-1
   Drive Index = 0
   Media Type = LTO-4
   Archive Device = /dev/nst0
   AutomaticMount = yes;
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   Maximum block size = 262144
   Maximum File Size = 5GB
   AutoChanger = yes
   Alert Command = sh -c 'smartctl -H -l error %c'
}

The library itself is configured automatic, random (the default) and I 
have also tried Manual, random.

Any help much appreciated.

Regards,

Richard




Re: [Bacula-users] HP MSL2024 Autochanger ERR=Device or resource busy.

2010-01-06 Thread John Drescher
On Wed, Jan 6, 2010 at 5:16 PM, Richard Scobie rich...@sauce.co.nz wrote:
 I have an MSL2024 library I have been battling to get running for the
 last couple of days and am about out of ideas. It is the parallel SCSI
 attached LTO-4 version, connected to an HP LSI based HBA.

 I have bacula configured and it passes the mtx-changer test commands
 recommended in the manual and also passes all the btape test tests,
 including the auto test.

 However, operationally after starting out, something appears to not
 release /dev/nst0 and everything fails beyond this point.

 Below is an example where from bconsole I issued automount off then
 the label command:

 Connecting to Storage daemon HP_MSL2024 at library1.sauce.co.nz:9103 ...
 Sending label command for Volume GQJ800L4 Slot 1 ...
 3301 Issuing autochanger loaded? drive 0 command.
 3302 Autochanger loaded? drive 0, result: nothing loaded.
 3304 Issuing autochanger load slot 1, drive 0 command.
 3305 Autochanger load slot 1, drive 0, status is OK.
 block.c:1010 Read error on fd=5 at file:blk 0:0 on device Drive-1
 (/dev/nst0). ERR=Device or resource busy.
 3000 OK label. VolBytes=1024 DVD=0 Volume=GQJ800L4 Device=Drive-1
 (/dev/nst0)
 Catalog record for Volume GQJ800L4, Slot 1  successfully created.
 Do not forget to mount the drive!!!

 Another example, after resetting everything, loading 12 blank tapes and
 issuing label barcodes, it labelled the first 2 tapes OK, then every
 subsequent one included the error:

 block.c:1010 Read error on fd=5 at file:blk 0:0 on device Drive-1
 (/dev/nst0). ERR=Device or resource busy.

 I am almost sure it is not a bacula issue as I have just stopped bacula
 and run btape:

 btape -c bacula-sd.conf /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: butil.c:285 Using device: /dev/nst0 for writing.
 07-Jan 10:03 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
 command.
 07-Jan 10:03 btape JobId 0: 3302 Autochanger loaded? drive 0, result
 is Slot 1.
 btape: btape.c:383 open device Drive-1 (/dev/nst0): OK
 *readlabel
 07-Jan 10:03 btape JobId 0: Error: block.c:1010 Read error on fd=3 at
 file:blk 0:0 on device Drive-1 (/dev/nst0). ERR=Device or resource busy.
 btape: btape.c:432 Volume has no label.

 Volume Label:
 Id                : **error**
 VerNo             : 0
 VolName           :
 PrevVolName       :
 VolFile           : 0
 LabelType         : Unknown 0
 LabelSize         : 0
 PoolName          :
 MediaType         :
 PoolType          :
 HostName          :
 Date label written: -4712-01-01 at 00:00

 Here is the relevant director conf entry:

 # Definition of tape storage device
 Storage {
   Name = HP_MSL2024
   Address = library1.sauce.co.nz
   SDPort = 9103
   Password = deliberately munged          # password for Storage daemon
   Device = HP_MSL2024                      # must be same as Device in
 Storage daemon
   Media Type = LTO-4                  # must be same as MediaType in
 Storage daemon
   Autochanger = yes                   # enable for autochanger device
 }

 Storage daemon conf:

 Autochanger {
   Name = HP_MSL2024
   Device = Drive-1
   Changer Command = /etc/bacula/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg9
 }

 Device {
   Name = Drive-1
   Drive Index = 0
   Media Type = LTO-4
   Archive Device = /dev/nst0
   AutomaticMount = yes;
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   Maximum block size = 262144
   Maximum File Size = 5GB
   AutoChanger = yes
   Alert Command = sh -c 'smartctl -H -l error %c'
 }

 The library itself is configured automatic, random (the default) and I
 have also tried Manual, random.

 Any help much appreciated.


You probably need to add a wait in the mtx-changer script just after
the load. This wait will make sure the tape is in the drive and has
completed the loading process. For some systems the script does not
wait long enough for the tape drive to finish.
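Such a wait can be sketched as a polling loop rather than a fixed sleep (a hypothetical helper, not part of the stock mtx-changer; MT is overridable only so the loop can be exercised without hardware):

```shell
# Poll the drive until mt reports ONLINE (tape loaded and ready), up to a
# deadline in seconds. A real mtx-changer edit would inline something
# similar right after the "load" case.
MT="${MT:-mt}"

wait_for_drive() {
  # $1 = tape device (e.g. /dev/nst0), $2 = max seconds to wait
  device="$1"; deadline="${2:-60}"; waited=0
  while [ "$waited" -lt "$deadline" ]; do
    if $MT -f "$device" status 2>/dev/null | grep -q ONLINE; then
      return 0      # drive is ready
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 1          # gave up; drive never came ready
}
```

Polling has the advantage over a fixed sleep that slow loads still succeed while fast loads don't pay the full wait.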

John



Re: [Bacula-users] HP MSL2024 Autochanger ERR=Device or resource busy.

2010-01-06 Thread Richard Scobie


 You probably need to add a wait in the mtx-changer script just after
 the load. This wait will make sure the tape is in the drive and has
 completed the loading process. For some systems the script does not
 wait long enough for the tape drive to finish.

 John

Thanks John. I already have a 30 second wait in there, but will try 
extending it further.

I have also run the sleep test script from the manual and it runs OK.

Regards,

Richard





Re: [Bacula-users] No bacula volume is mounted, but the volume is in use on the same device

2010-01-06 Thread Javier Barroso
Hi,
On Tue, Jan 5, 2010 at 12:36 PM, Javier Barroso javibarr...@gmail.com wrote:
 On Tue, Jan 5, 2010 at 12:18 PM, John Drescher dresche...@gmail.com wrote:
 On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso javibarr...@gmail.com wrote:
 Hi people,

 First, I'm using an old bacula version (the etch version, 1.38.11-8), so I
 know this is a 2006 question :(
 ...
 # mtx -f /dev/autochanger1 load 4 0
 * mount
 * status storage
 ...
 Device status:
 Autochanger Autochanger with devices:
   Drive-1 (/dev/st0)
 Device FileStorage (/tmp) is not open or does not exist.
 Device Drive-1 (/dev/st0) open but no Bacula volume is mounted.
    Device is BLOCKED waiting for media.
    Slot 4 is loaded in drive 0.
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0
 Device Drive-2 (/dev/st1) open but no Bacula volume is mounted.
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0
 

 In Use Volume status:
 ISOLAD01 on device Drive-1 (/dev/st0)
 
 You have messages
 *
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 4.
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 4.
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 4.
 05-Jan 10:18 backup-sd: Please mount Volume ISOLAD01 on Storage
 Device Drive-1 (/dev/st0) for Job openbravodb.2010-01-04_20.00.06

 # ISOLAD01 is a volume in bacula db
 * list media pool=DiarioLunes
 Pool: DiarioLunes
 +-++---+-+--+--+-+--+---+---+-+
 | MediaId | VolumeName | VolStatus | VolBytes        | VolFiles |
 VolRetention | Recycle | Slot | InChanger | MediaType     |
 LastWritten         |
 +-++---+-+--+--+-+--+---+---+-+
 |      51 | ISOLAD01   | Append    | 371,662,282,376 |      523 |
 518,400 |       1 |    4 |         1 | Ultrium3-SCSI | 2009-10-20
 02:33:52 |

 So, what am I missing ? Any help is appreciated.

 Nothing. There are bugs like this in 1.38.

 1) Change /dev/st0 to /dev/nst0 in your config so bacula does not
 accidentally delete one of your volumes.

 2) Stop bacula-sd

 3) Manually remove the tape using the autochanger command

 4) Restart bacula-sd. Your jobs probably will have been terminated by
 this action.

 Thank you very much, I'll try it the next time it happens (I'll change
 my config as per your tip)
Ok, so I tried it again, and hit the same issue (Bacula wants a tape
which is mounted, but doesn't recognize it)

I tracked the problem down: I turned on debug in mtx-changer, and saw that
mt -f /dev/nst0 status was reporting mt: /dev/nst0: No medium found

Then I tried mt -f with the other three devices (I have an
autochanger with /dev/nst{0,1,2,3} as tapes) and saw that only nst2 was
online.

I changed my config to /dev/nst2, and bacula is now working again.
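For anyone who hits this later, a quick way to see which drives actually have a medium loaded is to probe each one with mt. A rough sketch (the device paths are examples for a four-drive changer; mt prints ONLINE in its status output when a tape is loaded):

```shell
#!/bin/sh
# Probe each tape device and report whether a medium is loaded.

drive_ready() {
  # "mt ... status" includes ONLINE once a tape is loaded and ready;
  # errors (no medium, no such device) are suppressed.
  mt -f "$1" status 2>/dev/null | grep -q ONLINE
}

for d in /dev/nst0 /dev/nst1 /dev/nst2 /dev/nst3; do
  if drive_ready "$d"; then
    echo "$d: online"
  else
    echo "$d: no medium"
  fi
done
```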

But I'm confused now! :( :( Can somebody explain this situation to me?

# mtx -f /dev/autochanger1 status | head -6
  Storage Changer /dev/autochanger1:4 Drives, 60 Slots ( 2 Import/Export )
Data Transfer Element 0:Full (Storage Element 2 Loaded):VolumeTag =
ISOXAD02
Data Transfer Element 1:Empty
Data Transfer Element 2:Empty
Data Transfer Element 3:Empty

# lsscsi
[0:0:0:0]    mediumx HP       MSL6000 Series   0520  /dev/sch0
[0:0:0:1]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st2
[0:0:0:2]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st3
[0:0:0:3]    storage HP       NS E1200-320     593d  -
[1:0:3:0]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st0
[1:0:4:0]    tape    HP       Ultrium 3-SCSI   G54W  /dev/st1

I can guess that the first Data Transfer Element listed in mtx -f dev
status output is the first tape found in the lsscsi command output. Then Data
Transfer Element 1 would be /dev/st3, and so on...

Is this true? Should I name my devices with some persistent name
like /dev/tape1?

And, more importantly, why do the other three devices report no medium found?
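On the persistent-name question: on Linux, udev usually provides stable, serial-number-based symlinks for tape drives, which can be used in bacula-sd.conf instead of the raw /dev/nstN nodes so that SCSI probe reordering stops mattering. A sketch (the directory names are the common udev defaults, not guaranteed on every distro):

```shell
#!/bin/sh
# Look for udev-provided persistent tape symlinks and list them.

tape_symlink_dir() {
  # Print the first udev tape-symlink directory that exists, else "none".
  for d in /dev/tape/by-id /dev/tape/by-path; do
    if [ -d "$d" ]; then
      echo "$d"
      return 0
    fi
  done
  echo "none"
}

dir=$(tape_symlink_dir)
if [ "$dir" = none ]; then
  echo "no persistent tape symlinks found; udev may not provide them here"
else
  # These symlinks point at the matching /dev/stN and /dev/nstN nodes.
  ls -l "$dir"
fi
```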

See next session (first I loaded 4 tapes):
# mtx -f /dev/autochanger1 status | head -10
  Storage Changer /dev/autochanger1:4 Drives, 60 Slots ( 2 Import/Export )
Data Transfer Element 0:Full (Storage Element 2 Loaded):VolumeTag =
ISOXAD02
Data Transfer Element 1:Full (Storage Element 1 Loaded):VolumeTag =
ISOVBF04
Data Transfer Element 2:Full (Storage Element 3 Loaded):VolumeTag =
ISOVAF03
Data Transfer Element 3:Full (Storage Element 4 Loaded):VolumeTag =
ISOLAD01

# Bacula is using Data  Transfer Element 0:
* st st
...
Device Drive-1 (/dev/nst2) is mounted with Volume=ISOXAD02
Pool=DiarioMiercoles
...

# But the nst3 device is not ready; a while ago nst0 and nst1 weren't ONLINE yet:
# mt -f 

Re: [Bacula-users] No bacula volume is mounted, but the volume is in use on the same device

2010-01-06 Thread Javier Barroso
Hi again,

On Thu, Jan 7, 2010 at 12:27 AM, Javier Barroso javibarr...@gmail.com wrote:
 Hi,
 On Tue, Jan 5, 2010 at 12:36 PM, Javier Barroso javibarr...@gmail.com wrote:
 On Tue, Jan 5, 2010 at 12:18 PM, John Drescher dresche...@gmail.com wrote:
 On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso javibarr...@gmail.com 
 wrote:
 Hi people,

 First, I'm using an old bacula version (etch version 1.38.11-8), so I
 know this is a 2006 question :(
 ...
 # mtx -f /dev/autochanger1 load 4 0
 * mount
 * status storage
 ...
 Device status:
 Autochanger Autochanger with devices:
   Drive-1 (/dev/st0)
 Device FileStorage (/tmp) is not open or does not exist.
 Device Drive-1 (/dev/st0) open but no Bacula volume is mounted.
    Device is BLOCKED waiting for media.
    Slot 4 is loaded in drive 0.
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0
 Device Drive-2 (/dev/st1) open but no Bacula volume is mounted.
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0
 

 In Use Volume status:
 ISOLAD01 on device Drive-1 (/dev/st0)
 
 You have messages
 *
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 
 4.
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 
 4.
 05-Jan 10:18 backup-sd: 3301 Issuing autochanger loaded drive 0 command.
 05-Jan 10:18 backup-sd: 3302 Autochanger loaded drive 0, result is Slot 
 4.
 05-Jan 10:18 backup-sd: Please mount Volume ISOLAD01 on Storage
 Device Drive-1 (/dev/st0) for Job openbravodb.2010-01-04_20.00.06

 # ISOLAD01 is a volume in bacula db
 * list media pool=DiarioLunes
 Pool: DiarioLunes
 +---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+---------------+---------------------+
 | MediaId | VolumeName | VolStatus | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType     | LastWritten         |
 +---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+---------------+---------------------+
 |      51 | ISOLAD01   | Append    | 371,662,282,376 |      523 |      518,400 |       1 |    4 |         1 | Ultrium3-SCSI | 2009-10-20 02:33:52 |
 +---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+---------------+---------------------+

 So, what am I missing ? Any help is appreciated.

 Nothing. There are bugs like this in 1.38.

 1) Change /dev/st0 to /dev/nst0 in your config so bacula does not
 accidentally delete one of your volumes.

 2) Stop bacula-sd

 3) Manually remove the tape using the autochanger command

 4) Restart bacula-sd. Your jobs probably will have been terminated by
 this action.

 Thank you very much, I'll try it the next time it happens (I'll change
 my config as per your tip)
 Ok, so I tried it again, and hit the same issue (Bacula wants a tape
 which is mounted, but doesn't recognize it)

 I tracked the problem down: I turned on debug in mtx-changer, and saw that
 mt -f /dev/nst0 status was reporting mt: /dev/nst0: No medium found

 Then I tried mt -f with the other three devices (I have an
 autochanger with /dev/nst{0,1,2,3} as tapes) and saw that only nst2 was
 online.

 I changed my config to /dev/nst2, and bacula is now working again.

 But I'm confused now! :( :( Can somebody explain this situation to me?

 # mtx -f /dev/autochanger1 status | head -6
  Storage Changer /dev/autochanger1:4 Drives, 60 Slots ( 2 Import/Export )
 Data Transfer Element 0:Full (Storage Element 2 Loaded):VolumeTag =
 ISOXAD02
 Data Transfer Element 1:Empty
 Data Transfer Element 2:Empty
 Data Transfer Element 3:Empty

 # lsscsi
 [0:0:0:0]    mediumx HP       MSL6000 Series   0520  /dev/sch0
 [0:0:0:1]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st2
 [0:0:0:2]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st3
 [0:0:0:3]    storage HP       NS E1200-320     593d  -
 [1:0:3:0]    tape    HP       Ultrium 3-SCSI   G63W  /dev/st0
 [1:0:4:0]    tape    HP       Ultrium 3-SCSI   G54W  /dev/st1

 I can guess that the first Data Transfer Element listed in mtx -f dev
 status output is the first tape found in the lsscsi command output. Then Data
 Transfer Element 1 would be /dev/st3, and so on...

 Is this true? Should I name my devices with some persistent name
 like /dev/tape1?

 And, more importantly, why do the other three devices report no medium found?

 See next session (first I loaded 4 tapes):
 # mtx -f /dev/autochanger1 status | head -10
  Storage Changer /dev/autochanger1:4 Drives, 60 Slots ( 2 Import/Export )
 Data Transfer Element 0:Full (Storage Element 2 Loaded):VolumeTag =
 ISOXAD02
 Data Transfer Element 1:Full (Storage Element 1 Loaded):VolumeTag =
 ISOVBF04
 Data Transfer Element 2:Full (Storage Element 3 Loaded):VolumeTag =
 ISOVAF03
 Data Transfer Element 3:Full (Storage Element 4 Loaded):VolumeTag =
 ISOLAD01

 # Bacula is using Data  Transfer Element 0:
 * st st
 ...
 Device Drive-1 (/dev/nst2) is 

Re: [Bacula-users] Has anyone installed Bacula on Fedora 12?

2010-01-06 Thread Terry L. Inzauro
On 01/06/2010 05:40 PM, brown wrap wrote:
  I tried compiling it, and received errors which I posted, but didn't
 really get an answer to. I then started to look for RPMs. I found the
 client rpm, but not the server rpm unless I don't know what I'm looking
 for. Can someone point me to the rpms I need? Thanks.
 
 
 greg
 

Are the RPMs listed here not good enough? I see FC 9 and 10 but no 12. If
neither the fc9 nor the fc10 rpms works, there
is a source rpm listed, and that should get you going.

bacula dloads:
http://bacula.org/en/?page=downloads

rpmbuild man page:
http://www.rpm.org/max-rpm-snapshot/rpmbuild.8.html
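If no binary RPM exists for FC12, rebuilding from the source RPM is straightforward. A sketch (the .src.rpm file name is an example; use the one actually downloaded from bacula.org):

```shell
#!/bin/sh
# Rebuild Bacula RPMs from a source RPM when no binary package
# exists for your Fedora release.
srpm="bacula-3.0.3-1.src.rpm"   # example name
cmd="rpmbuild --rebuild $srpm"
echo "$cmd"
# Run the command above as a normal user, with rpm-build and the
# spec file's BuildRequires packages installed; the resulting binary
# RPMs land under the rpmbuild RPMS/ directory.
```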

--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] HP MSL2024 Autochanger ERR=Device or resource busy.

2010-01-06 Thread Richard Scobie
Richard Scobie wrote:


 You probably need to add a wait in the mtx-changer script just after
 the load. This wait will make sure the tape is in the drive and has
 completed the loading process. For some systems the script does not
 wait long enough for the tape drive to finish.

 John

 Thanks John. I already have a 30 second wait in there but will try
 extending it further.

 I have also run the sleep test script from the manual and it runs it OK.

And increasing this to 60 seconds has not improved things.
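One thing worth trying instead of a still-longer fixed sleep: poll the drive until it is actually ready. A sketch of such a wait, assuming mt reports ONLINE once loading has finished (device path, poll interval, and timeout are examples):

```shell
#!/bin/sh
# Poll a tape drive after "load" until it reports ONLINE or a
# timeout expires, instead of sleeping a fixed amount of time.

wait_for_drive() {
  device=$1
  timeout=${2:-300}   # seconds; default 5 minutes
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    # mt prints ONLINE once the tape is loaded and ready
    if mt -f "$device" status 2>/dev/null | grep -q ONLINE; then
      return 0
    fi
    sleep 5
    waited=$((waited + 5))
  done
  return 1   # drive never became ready
}
```

Called from mtx-changer right after the load branch, e.g. `wait_for_drive /dev/nst0 120 || exit 1`, this waits only as long as the drive actually needs.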

Anyone else running this library I can compare notes with?

Regards,

Richard



[Bacula-users] undefined symbol in libbacsql.so

2010-01-06 Thread Brent Kearney
Hello,

I'm having difficulty compiling Bacula 3.0.3 on Solaris 10 x86.  The build 
fails at the same spot whether I'm using gcc or Sun Studio's compiler.  I'm 
linking against the 32-bit binary build of MySQL5 from OpenCSW.  My build 
environment is:

LDFLAGS=-L/opt/sunstudio12.1/lib -L/opt/csw/mysql5/lib/mysql
CPPFLAGS=-I/opt/sunstudio12.1/include
LD_LIBRARY_PATH=/opt/csw/mysql5/lib/mysql
LIBS=-lmysqlclient
CXX=/bin/CC
CC=/bin/cc

./configure --prefix=/opt/bacula --with-openssl=/opt/csw 
--with-mysql=/opt/csw/mysql5  --without-x

Everything compiles fine up to here:

make[1]: Entering directory `/opt/system/software/src/bacula-3.0.3/src/dird'
...
Linking bacula-dir ...
/opt/system/software/src/bacula-3.0.3/libtool --silent --tag=CXX --mode=link 
/bin/CC  -L/opt/sunstudio12.1/lib -L/opt/csw/mysql5/lib/mysql -L../lib 
-L../cats -L../findlib -o bacula-dir dird.o admin.o authenticate.o autoprune.o 
backup.o bsr.o catreq.o dir_plugins.o dird_conf.o expand.o fd_cmds.o getmsg.o 
inc_conf.o job.o jobq.o migrate.o mountreq.o msgchan.o next_vol.o newvol.o 
pythondir.o recycle.o restore.o run_conf.o scheduler.o ua_acl.o ua_cmds.o 
ua_dotcmds.o ua_query.o ua_input.o ua_label.o ua_output.o ua_prune.o ua_purge.o 
ua_restore.o ua_run.o ua_select.o ua_server.o ua_status.o ua_tree.o ua_update.o 
vbackup.o verify.o \
  -lbacfind -lbacsql -lbacpy -lbaccfg -lbac -lm -lpthread \
 -lresolv -lnsl -lsocket -lxnet -lmysqlclient -lintl -lresolv \
 -L/opt/csw/lib -lssl -lcrypto
Undefined   first referenced
 symbol in file
my_thread_end   
/opt/system/software/src/bacula-3.0.3/src/cats/.libs/libbacsql.so
ld: fatal: Symbol referencing errors. No output written to .libs/bacula-dir
make[1]: *** [bacula-dir] Error 1
make[1]: Leaving directory `/opt/system/software/src/bacula-3.0.3/src/dird'


  == Error in /opt/system/software/src/bacula-3.0.3/src/dird ==

As I mentioned, changing CC and CXX to gcc doesn't seem to make a difference:
it results in the same undefined symbol my_thread_end error when linking
bacula-dir.

Any suggestions?
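One likely cause, offered as a guess: my_thread_end is exported by the thread-safe MySQL client library (libmysqlclient_r), and bacula-dir is threaded, so linking plain -lmysqlclient can leave that symbol unresolved. A sketch of picking up the threaded link flags instead (the mysql_config path follows the OpenCSW layout above; the fallback line is an assumption, adjust to your install):

```shell
#!/bin/sh
# Prefer the thread-safe MySQL client flags for linking bacula-dir.
mysql_config="/opt/csw/mysql5/bin/mysql_config"
if [ -x "$mysql_config" ]; then
  # --libs_r prints the flags for libmysqlclient_r
  LIBS="$("$mysql_config" --libs_r)"
else
  # Fallback guess when mysql_config is not at that path
  LIBS="-L/opt/csw/mysql5/lib/mysql -lmysqlclient_r"
fi
echo "LIBS=$LIBS"
# export LIBS, then re-run ./configure and make
```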

Thanks,
Brent





Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-06 Thread Thomas Mueller

  With
  
  Maximum File Size = 5G
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
  
  I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
  Size gave me some extra MB/s, I think it's as important as the
  Maximum Block Size.
  
  
 thanks for providing these hints. I'm just investigating why my LTO-4 is
 writing at just 40 MB/s. I will try them out!
 
 searching the Maximum File Size in the manual I found this:
 
 If you are configuring an LTO-3 or LTO-4 tape, you probably will want
 to set the Maximum File Size to 2GB to avoid making the drive stop to
 write an EOF mark.
 
 maybe this is the reason for the extra MB/s.
 
 Modifying the Maximum Block Size to more than 262144 didn't change much
 here. But changing the File Size did. Much.

I found a post from Kern saying that Quantum told him that 262144 is
about the best block size; increasing it would increase the error rate too.

 
 Anyway, 40 MB/s seems a bit low, even with the defaults. Before tuning
 our setup I got ~75 MB/s. Are you spooling the data to disk or writing
 directly to tape?

Yes, I was surprised too that it is that slow.

I'm spooling to disk first (2x 1 TB disks as RAID0, dedicated to bacula for
spooling). I will also start a sequential read test to check whether the disks
are the bottleneck. The slow job was the only one running.

Watching iotop, I saw the Maximum File Size problem: the drive stops writing
after 1 GB (the default file size), writes to the DB, and then continues
writing. So for an LTO-4 it stops nearly 800 times until the tape is full.


- Thomas




Re: [Bacula-users] continuing a failed job

2010-01-06 Thread Silver Salonen
On Wednesday 06 January 2010 19:30:42 Timo Neuvonen wrote:
 Thomas Mueller tho...@chaschperli.ch wrote in message 
 news:hi2ffj$ga...@ger.gmane.org...
  On Wed, 06 Jan 2010 11:08:17 +0200, Silver Salonen wrote:
 
  Hi.
 
  Has anyone figured out how to continue a failed job? I have a client
  that has gigabytes of data, but very fragile internet connection, so
  it's almost impossible to get a normal full backup job.
 
  I thought I'd do it with VirtualFull job to create a successful job out
  of failed one and then use it as basis for incremental, but
  unfortunately VirtualFull requires a previous successful job as well
  (duh). Is there a way to mark a job successful?
 
  you can't continue a failed job.
 
  this is a known problem with unstable internet connections. maybe you
  can work around it with openvpn (or something like that) to simply hide
  short outages from bacula.
 
 
 Another workaround could be splitting the fileset into several smaller ones. 
 This way one job takes less time, and it's more probable that it will finish 
 successfully. And if it doesn't, it takes less time to re-run it.

Well, yes.. I know the job itself cannot be continued. But could it be somehow 
marked as OK, so it could be used as a basis for another non-full backup?
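There is no supported way to mark a job successful, but some admins flip the status in the catalog by hand. A heavily hedged sketch (the JobId, the database name, and the approach itself are assumptions, not a documented procedure; dump the catalog first, since hand-editing it can confuse Bacula):

```shell
#!/bin/sh
# RISKY sketch: mark a failed job as OK directly in the catalog.
# JobStatus 'T' is Bacula's "terminated normally" code.
jobid=1234   # placeholder JobId
sql="UPDATE Job SET JobStatus='T' WHERE JobId=${jobid};"
echo "$sql"
# When ready (MySQL catalog named "bacula" assumed):
#   echo "$sql" | mysql bacula
```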

-- 
Silver
