Re: [Bacula-users] SPEED!

2011-07-06 Thread J. Echter
Am 07.07.2011 04:43, schrieb Glen Barber:
> On 7/6/11 12:37 PM, J. Echter wrote:
>> backup speed has nothing to do with regular backup speed.
>>
> Can you explain exactly what this means?
>
Sorry, I meant backup speed has nothing to do with regular *network* speed.



Re: [Bacula-users] Automatic Volume creation

2011-07-06 Thread Robert.Mortimer
>
>Hi,
>
>I have Bacula running with an IBM TL3400. As the tapes are bar coded,
>Bacula can read the library and add the tapes to its catalogue. I can
>then move them to the backup pools. My issue is that if the tapes have not
>had a volume label written to the front of the tape using the label
>command, they fail to mount with the following bconsole status.
>
>Device "Drive-3" (/dev/nst2) open but no Bacula volume is currently
>mounted.
>Device is BLOCKED waiting for mount of volume "CBP10020",
>   Pool:        snapshot-pool
>   Media type:  LTO-3
>Slot 11 is loaded in drive 2.
>Total Bytes Read=0 Blocks Read=0 Bytes/block=0
>Positioned at File=0 Block=0
>
>
>Tape CBP10020 is in /dev/nst2 but is blank. How do I get Bacula to
>automatically initialise the tape and write the volume label? My storage
>definition is as follows:
>
>Device {
>  Name = Drive-3
>  Drive Index = 2
>  Media Type = LTO-3
>  Archive Device = /dev/nst2
>  AutomaticMount = yes;   # when device opened, read it
>  AlwaysOpen = yes;
>  RemovableMedia = yes;
>  RandomAccess = no;
>  AutoChanger = yes
>  Alert Command = "sh -c 'smartctl -H -l error %c'"
>  Maximum Spool Size = 100GB
>  # Spool Size = 40GB
>  Maximum Job Spool Size = 40GB
>  Spool Directory = /backup/hosts/SPOOL
>}
>
>
>I sort of assumed that AutomaticMount would do it. I am reluctant to use
>LabelMedia because I am not sure it will use the label on the bar code
>rather than issuing a new one. Can anyone confirm the behaviour of
>LabelMedia when used with a bar coded auto-changer?
>

I have tried "LabelMedia = yes" and I still have the same issue.
There must be a fix for this, but at the moment I am at a loss. I
suspect there is a fix but I just don't know the magic word.


>Robert Mortimer
>Linux Systems Administrator



Re: [Bacula-users] SPEED!

2011-07-06 Thread Mike Ruskai

On 7/6/2011 12:31 PM, Jake Debord wrote:

I have a machine I back up that when done averages:
Elapsed time:   41 mins 47 secs
  Priority:   1
  FD Files Written:   6,948
  SD Files Written:   6,948
  FD Bytes Written:   14,587,852,350 (14.58 GB)
  SD Bytes Written:   14,589,273,339 (14.58 GB)
  Rate:   5818.8 KB/s
  Software Compression:   11.7 %

Is this acceptable??? 6 MB/s seems slow. I back up my machine and 
achieve slightly better results:

Elapsed time:   3 mins 51 secs
   Priority:   1
   FD Files Written:   665
   SD Files Written:   665
   FD Bytes Written:   2,192,593,865 (2.192 GB)
   SD Bytes Written:   2,192,728,783 (2.192 GB)
   Rate:   9491.7 KB/s
   Software Compression:   9.8 %

Both of our machines are almost identical in specs. I'm just wondering if this 
is typical or if there are tweaks to speed things up. My setup is basically 
out of the box, so not much extra has been done to it.

I also use mysql for the database.



As others have suggested, this is entirely about client-side compression 
speed.  If those compression levels are typical, I'd recommend just 
turning it off.  Then you'll be limited by the lesser of disc and 
network speed.  Another possibility is to selectively disable 
compression for certain file extensions that are known to be already 
highly compressed, such as JPG, ZIP, BZ2, etc.  I get about 98MB/sec 
throughput on a completely uncompressed backup of 80GB, and about 
44MB/sec on a selectively compressed backup of around 1.2TB of data, 
with a 3.85GHz i7 processor (net compression is about 21%, so 1TB is 
stored).
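
For anyone wanting to try the selective approach, here is a minimal FileSet
sketch; the resource name and path are placeholders, not from the original
post. Bacula applies the first Options block whose wild patterns match, so
the already-compressed formats bypass GZIP:

FileSet {
  Name = "SelectiveCompression"
  Include {
    # First matching Options block wins: store already-compressed
    # formats as-is, with no software compression.
    Options {
      signature = MD5
      wildfile = "*.jpg"
      wildfile = "*.zip"
      wildfile = "*.bz2"
    }
    # Everything else falls through to this block and is gzipped.
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /data
  }
}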





Re: [Bacula-users] SPEED!

2011-07-06 Thread Glen Barber
On 7/6/11 12:37 PM, J. Echter wrote:
> 
> backup speed has nothing to do with regular backup speed.
> 

Can you explain exactly what this means?

-- 
Glen Barber



Re: [Bacula-users] webacula issues

2011-07-06 Thread Dan Langille

On Jul 6, 2011, at 7:30 PM, museikaze wrote:

> Hi guys,
> 
> I'm kinda new here, but I was wondering if anyone knows how to help me with a 
> problem I'm having. I have installed Bacula and that works fine for me, but 
> when I tried to install webacula I get this error when I load the page.
> 
> ERROR: There was a problem executing bconsole. See below.
> ERROR Command:
> /usr/bin/sudo /usr/sbin/bconsole -n -c /etc/bacula/bconsole.conf
> output:

I suspect the above command is not permitted by sudo.

I'd try running visudo and see if that command appears there for the user in 
question.
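
If it is missing, an entry along these lines (added via visudo) should permit
it; the 'apache' user is only an assumption for whatever account your web
server runs webacula as:

# Hypothetical sudoers entry; replace 'apache' with the web server's user.
apache  ALL = NOPASSWD: /usr/sbin/bconsole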

> 
> It says there's an error with that command, but the command produces no error 
> output. So I tried running the commands "/usr/bin/sudo /usr/sbin/bconsole -n 
> -c /etc/bacula/bconsole.conf" and "sudo bconsole -n -c /etc/bacula/bconsole" 
> as root, and they both work. The only reason I can think of is perhaps I need 
> a file permission somewhere? I'm using FC12.
> 
> Thanks
> 

-- 
Dan Langille - http://langille.org




[Bacula-users] webacula issues

2011-07-06 Thread museikaze
Hi guys,

I'm kinda new here, but I was wondering if anyone knows how to help me with a 
problem I'm having. I have installed Bacula and that works fine for me, but when 
I tried to install webacula I get this error when I load the page.

ERROR: There was a problem executing bconsole. See below.
ERROR Command:
/usr/bin/sudo /usr/sbin/bconsole -n -c /etc/bacula/bconsole.conf
output:

It says there's an error with that command, but the command produces no error 
output. So I tried running the commands "/usr/bin/sudo /usr/sbin/bconsole -n -c 
/etc/bacula/bconsole.conf" and "sudo bconsole -n -c /etc/bacula/bconsole" as 
root, and they both work. The only reason I can think of is perhaps I need a 
file permission somewhere? I'm using FC12.

Thanks



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-07-06 Thread Steve Costaras


Initial thoughts on this would be one of two ways (or both):

All in the fileset resource:
 As a fileset option something like:
 StreamPerFS
 which would kick off a stream for every FS in the fileset. More of an 
'automated' method to improve performance for those who don't want to manually 
tune it.

Or something like a new token to indicate what to back up as a new stream, i.e.
 StreamFile =

 which would act just like "File =" but would kick off a new stream for that 
location.
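
A rough sketch of how the two proposals might read in a FileSet; this is
purely hypothetical syntax, as neither directive exists in Bacula today:

FileSet {
  Name = "BigServer"
  Include {
    Options {
      StreamPerFS = yes      # hypothetical: one stream per filesystem
    }
    File = /
    StreamFile = /export/a   # hypothetical: explicit extra stream
    StreamFile = /export/b
  }
}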


This would need to tie into the SD, to tell it to also spool or direct-dump each 
one separately, just like it does with multiple concurrent jobs. Likewise, the 
FD would need to spawn off a new thread for each directed (StreamFile) or 
dynamic (StreamPerFS) point.


I don't know how much work that would be in the code.



-Original Message-
From: Eric Bollengier [mailto:eric.bolleng...@baculasystems.com]
Sent: Wednesday, July 6, 2011 11:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Performance options for single large (100TB) server 
backup?
Hello,

On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is something that has been proven as a
> solution for many years, and where that is still too slow NDMP comes
> into place. (in case of ZFS, NDMP is still at an unusable stage)
>
> 100TB is a lot, but I wonder if everyone agrees the "right" solution
> would be saving multiple streams instead of splitting up the source
> system (will be fun to do a restore of such a split client)...
>
> Hopefully some of the larger Bacula customers will fund these features
> some day, as both have the mix of being very important, complex and
> elemental changes :))

I would also like to see such a feature in Bacula :-) (even if a
workaround already exists).

Do you have any idea how Bacula should choose when to start a new
"backup stream" (automatically, by fileset configuration, etc.)?

I think it would be nice to spread the load between physical
disks, and not only try to read the same FS with many threads.

Bye
Bye

--
Need professional help and support for Bacula ?
Visit http://www.baculasystems.com



Re: [Bacula-users] Bacula-users Digest, Vol 63, Issue 4

2011-07-06 Thread Bacula-Dev
Hello,

Maybe bacula-web can provide what you need.
This tool is currently under heavy development and a new beta version is
coming soon.

It's more a reporting and monitoring tool than an admin tool, but it doesn't
require you to have "system admin" skills  ;o)

Best regards


> Hi,
>
> you are right, but as you may know, not all people are system
> administrators, so I need to leave an easy interface for the less
> experienced admins
>
> Any other suggestions?
>
> Thanks
>
>
> ---
> Carlo
>


Re: [Bacula-users] SPEED!

2011-07-06 Thread Jake Debord
Yes, I use disk-based file storage. It is a full backup, but I will turn
gzip off and defragment the disk to see if I can improve the speed. Thanks
for the advice; any additional advice is welcome.

On Wed, Jul 6, 2011 at 11:37 AM, J. Echter  wrote:

> Am 06.07.2011 18:31, schrieb Jake Debord:
> > I have a machine I back up that when done averages:
> > Elapsed time:   41 mins 47 secs
> >   Priority:   1
> >   FD Files Written:   6,948
> >   SD Files Written:   6,948
> >   FD Bytes Written:   14,587,852,350 (14.58 GB)
> >   SD Bytes Written:   14,589,273,339 (14.58 GB)
> >   Rate:   5818.8 KB/s
> >   Software Compression:   11.7 %
> >
> > Is this acceptable??? 6 MB/s seems slow. I back up my machine and achieve
> > slightly better results
> >
> > Elapsed time:   3 mins 51 secs
> >   Priority:   1
> >   FD Files Written:   665
> >   SD Files Written:   665
> >   FD Bytes Written:   2,192,593,865 (2.192 GB)
> >   SD Bytes Written:   2,192,728,783 (2.192 GB)
> >   Rate:   9491.7 KB/s
> >   Software Compression:   9.8 %
> >
> > Both of our machines are almost identical in specs. I'm just wondering if
> this is typical or if there are tweaks to speed things up. My setup is
> basically out of the box so not much extra done to it.
> >
> > I also use mysql for the database.
> >
>
> depends on many things: compression enabled slows it down, lots of small
> files slow it down.
>
> backup speed has nothing to do with regular backup speed.
>


Re: [Bacula-users] SPEED!

2011-07-06 Thread Laurent HENRY (EHESS/CRI)
On Wed, July 6, 2011 18:43, John Drescher wrote:
> 2011/7/6 Jake Debord :
>> I have a machine I back up that when done averages:
>> Elapsed time:   41 mins 47 secs
>>   Priority:   1
>>   FD Files Written:   6,948
>>   SD Files Written:   6,948
>>   FD Bytes Written:   14,587,852,350 (14.58 GB)
>>   SD Bytes Written:   14,589,273,339 (14.58 GB)
>>   Rate:   5818.8 KB/s
>>   Software Compression:   11.7 %
>>
>> Is this acceptable??? 6 MB/s seems slow. I back up my machine and achieve
>> slightly better results
>>
>> Elapsed time:   3 mins 51 secs
>>   Priority:   1
>>   FD Files Written:   665
>>   SD Files Written:   665
>>   FD Bytes Written:   2,192,593,865 (2.192 GB)
>>   SD Bytes Written:   2,192,728,783 (2.192 GB)
>>   Rate:   9491.7 KB/s
>>   Software Compression:   9.8 %
>>
>> Both of our machines are almost identical in specs. I'm just wondering
>> if this is typical or if there are tweaks to speed things up. My setup
>> is basically out of the box so not much extra done to it.
>>
>> I also use mysql for the database.
>>
>
> Are you using disk based volumes? If so try this with compression
> turned off. Also A Full backup will have a much higher rate than an
> Incremental or Differential because more of the time will be spent
> looking for the files to backup instead of backing up every file.
> Fragmentation of the client disk also plays a large part in backup
> rates.
>
> John
>

You could try an iperf run between each machine and the server to be sure all is OK with the network.
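
For example, something like this (the hostname is a placeholder):

# on the storage daemon host
iperf -s
# on each client, a 30-second test against it
iperf -c sd.example.com -t 30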





Re: [Bacula-users] SPEED!

2011-07-06 Thread John Drescher
2011/7/6 Jake Debord :
> I have a machine I back up that when done averages:
> Elapsed time:   41 mins 47 secs
>   Priority:   1
>   FD Files Written:   6,948
>   SD Files Written:   6,948
>   FD Bytes Written:   14,587,852,350 (14.58 GB)
>   SD Bytes Written:   14,589,273,339 (14.58 GB)
>   Rate:   5818.8 KB/s
>   Software Compression:   11.7 %
>
> Is this acceptable??? 6 MB/s seems slow. I back up my machine and achieve
> slightly better results
>
> Elapsed time:   3 mins 51 secs
>   Priority:   1
>   FD Files Written:   665
>   SD Files Written:   665
>   FD Bytes Written:   2,192,593,865 (2.192 GB)
>   SD Bytes Written:   2,192,728,783 (2.192 GB)
>   Rate:   9491.7 KB/s
>   Software Compression:   9.8 %
>
> Both of our machines are almost identical in specs. I'm just wondering if
> this is typical or if there are tweaks to speed things up. My setup is
> basically out of the box so not much extra done to it.
>
> I also use mysql for the database.
>

Are you using disk-based volumes? If so, try this with compression
turned off. Also, a Full backup will have a much higher rate than an
Incremental or Differential, because on those more of the time is spent
looking for the files to back up instead of backing up every file.
Fragmentation of the client disk also plays a large part in backup
rates.

John



Re: [Bacula-users] SPEED!

2011-07-06 Thread J. Echter
Am 06.07.2011 18:31, schrieb Jake Debord:
> I have a machine I back up that when done averages:
> Elapsed time:   41 mins 47 secs
>   Priority:   1
>   FD Files Written:   6,948
>   SD Files Written:   6,948
>   FD Bytes Written:   14,587,852,350 (14.58 GB)
>   SD Bytes Written:   14,589,273,339 (14.58 GB)
>   Rate:   5818.8 KB/s
>   Software Compression:   11.7 %
> 
> Is this acceptable??? 6 MB/s seems slow. I back up my machine and achieve
> slightly better results
> 
> Elapsed time:   3 mins 51 secs
>   Priority:   1
>   FD Files Written:   665
>   SD Files Written:   665
>   FD Bytes Written:   2,192,593,865 (2.192 GB)
>   SD Bytes Written:   2,192,728,783 (2.192 GB)
>   Rate:   9491.7 KB/s
>   Software Compression:   9.8 %
> 
> Both of our machines are almost identical in specs. I'm just wondering if 
> this is typical or if there are tweaks to speed things up. My setup is 
> basically out of the box so not much extra done to it. 
> 
> I also use mysql for the database.
> 
> 
> 
> 

depends on many things: compression enabled slows it down, lots of small
files slow it down.

backup speed has nothing to do with regular backup speed.



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-07-06 Thread Eric Bollengier
Hello,

On 07/06/2011 04:20 PM, Florian Heigl wrote:
> Saving multiple streams is something that has been proven as a
> solution for many years, and where that is still too slow NDMP comes
> into place. (in case of ZFS, NDMP is still at an unusable stage)
>
> 100TB is a lot, but I wonder if everyone agrees the "right" solution
> would be saving multiple streams instead of splitting up the source
> system (will be fun to do a restore of such a split client)...
>
> Hopefully some of the larger Bacula customers will fund these features
> some day, as both have the mix of being very important, complex and
> elemental changes :))

I would also like to see such a feature in Bacula :-) (even if a 
workaround already exists).

Do you have any idea how Bacula should choose when to start a new 
"backup stream" (automatically, by fileset configuration, etc.)?

I think it would be nice to spread the load between physical 
disks, and not only try to read the same FS with many threads.

Bye

-- 
Need professional help and support for Bacula ?
Visit http://www.baculasystems.com



[Bacula-users] SPEED!

2011-07-06 Thread Jake Debord
I have a machine I back up that when done averages:
Elapsed time:   41 mins 47 secs
  Priority:   1
  FD Files Written:   6,948
  SD Files Written:   6,948
  FD Bytes Written:   14,587,852,350 (14.58 GB)
  SD Bytes Written:   14,589,273,339 (14.58 GB)
  Rate:   5818.8 KB/s
  Software Compression:   11.7 %

Is this acceptable??? 6 MB/s seems slow. I back up my machine and achieve
slightly better results

Elapsed time:   3 mins 51 secs
  Priority:   1
  FD Files Written:   665
  SD Files Written:   665
  FD Bytes Written:   2,192,593,865 (2.192 GB)
  SD Bytes Written:   2,192,728,783 (2.192 GB)
  Rate:   9491.7 KB/s
  Software Compression:   9.8 %

Both of our machines are almost identical in specs. I'm just wondering
if this is typical or if there are tweaks to speed things up. My
setup is basically out of the box so not much extra done to it.
I also use mysql for the database.


[Bacula-users] Invalid Tape position - Marking tapes with error

2011-07-06 Thread James Woodward
Hello,

I've tried searching around a bit, but I can't seem to find an answer as to what 
might cause this problem. I have quite a few tapes that have been marked in 
error due to "Invalid tape position". I've listed the invalid tape position 
errors as well as detail about one of the affected volumes. Has anyone else 
encountered this, or have any ideas as to why it might be happening? For the 
life of me I can't figure out what the "Expected 0" portion refers to. I thought 
it might refer to volfiles, but those numbers don't always match either.

Thank you,

James


Invalid Tape Position Errors
# grep -i "invalid tape position" log
11-Mar 18:01 esqb6-sd JobId 196: Error: Invalid tape position on volume 
"090137" on device "Drive-2" (/dev/sa2). Expected 0, got 20
27-Mar 18:02 esqb6-sd JobId 353: Error: Invalid tape position on volume 
"090164" on device "Drive-1" (/dev/sa1). Expected 0, got 77
07-May 00:01 esqb6-sd JobId 867: Error: Invalid tape position on volume 
"090142" on device "Drive-2" (/dev/sa2). Expected 0, got 152
09-May 00:01 esqb6-sd JobId 907: Error: Invalid tape position on volume 
"090152" on device "Drive-2" (/dev/sa2). Expected 0, got 102
11-May 00:01 esqb6-sd JobId 950: Error: Invalid tape position on volume 
"090155" on device "Drive-2" (/dev/sa2). Expected 0, got 9
13-May 00:01 esqb6-sd JobId 993: Error: Invalid tape position on volume 
"090157" on device "Drive-1" (/dev/sa1). Expected 0, got 65
13-May 18:01 esqb6-sd JobId 1002: Error: Invalid tape position on volume 
"090175" on device "Drive-1" (/dev/sa1). Expected 0, got 181
15-May 18:13 esqb6-sd JobId 1043: Error: Invalid tape position on volume 
"090134" on device "Drive-0" (/dev/sa0). Expected 0, got 117
19-May 00:30 esqb6-sd JobId 1123: Error: Invalid tape position on volume 
"090158" on device "Drive-2" (/dev/sa2). Expected 0, got 134
20-May 18:01 esqb6-sd JobId 1147: Error: Invalid tape position on volume 
"090159" on device "Drive-1" (/dev/sa1). Expected 0, got 30
21-May 18:03 esqb6-sd JobId 1171: Error: Invalid tape position on volume 
"090166" on device "Drive-2" (/dev/sa2). Expected 0, got 274
24-May 00:32 esqb6-sd JobId 1231: Error: Invalid tape position on volume 
"090165" on device "Drive-2" (/dev/sa2). Expected 0, got 55
24-May 18:02 esqb6-sd JobId 1238: Error: Invalid tape position on volume 
"090160" on device "Drive-1" (/dev/sa1). Expected 0, got 50
30-May 18:04 esqb6-sd JobId 1370: Error: Invalid tape position on volume 
"090161" on device "Drive-2" (/dev/sa2). Expected 0, got 83
06-Jun 00:31 esqb6-sd JobId 1528: Error: Invalid tape position on volume 
"090209" on device "Drive-1" (/dev/sa1). Expected 0, got 253
22-Jun 00:31 esqb6-sd JobId 2036: Error: Invalid tape position on volume 
"090179" on device "Drive-2" (/dev/sa2). Expected 0, got 400
23-Jun 00:31 esqb6-sd JobId 2070: Error: Invalid tape position on volume 
"090121" on device "Drive-2" (/dev/sa2). Expected 0, got 132
25-Jun 21:01 esqb6-sd JobId 2156: Error: Invalid tape position on volume 
"090201" on device "Drive-2" (/dev/sa2). Expected 0, got 66
27-Jun 21:01 esqb6-sd JobId : Error: Invalid tape position on volume 
"090202" on device "Drive-1" (/dev/sa1). Expected 0, got 78
28-Jun 21:01 esqb6-sd JobId 2255: Error: Invalid tape position on volume 
"090126" on device "Drive-1" (/dev/sa1). Expected 0, got 136
29-Jun 18:05 esqb6-sd JobId 2277: Error: Invalid tape position on volume 
"090128" on device "Drive-2" (/dev/sa2). Expected 0, got 2
01-Jul 00:31 esqb6-sd JobId 2335: Error: Invalid tape position on volume 
"090146" on device "Drive-2" (/dev/sa2). Expected 0, got 104
03-Jul 19:01 esqb6-sd JobId 2418: Error: Invalid tape position on volume 
"090147" on device "Drive-1" (/dev/sa1). Expected 0, got 52


History for a single tape
# grep 090147 log
21-May 18:03 esqb6-dir JobId 1172: Using Volume "090147" from 'Scratch' pool.
01-Jul 00:37 esqb6-sd JobId 2335: Wrote label to prelabeled Volume "090147" on 
device "Drive-2" (/dev/sa2)
01-Jul 00:39 esqb6-sd JobId 2335: Committing spooled data to Volume "090147". 
Despooling 2,572,404,289 bytes ...
  Volume name(s): 090147
01-Jul 18:06 esqb6-sd JobId 2341: Volume "090147" previously written, moving to 
end of data.
01-Jul 18:06 esqb6-sd JobId 2341: Ready to append to end of Volume "090147" at 
file=1.
01-Jul 18:06 esqb6-sd JobId 2341: Committing spooled data to Volume "090147". 
Despooling 66,941,308 bytes ...
  Volume name(s): 090147
01-Jul 18:13 esqb6-sd JobId 2342: Committing spooled data to Volume "090147". 
Despooling 3,509,656,952 bytes ...
  Volume name(s): 090147
01-Jul 19:06 esqb6-sd JobId 2350: Volume "090147" previously written, moving to 
end of data.
01-Jul 19:06 esqb6-sd JobId 2350: Ready to append to end of Volume "090147" at 
file=2.
01-Jul 19:06 esqb6-sd JobId 2350: Committing spooled data to Volume "090147". 
Despooling 121,000,236 bytes ...
  Volume name(s): 090147
01-Jul 20:04 esqb6-sd JobId 2354: Volume "090147" previously written, moving to 
end of data.
01-Ju

Re: [Bacula-users] Performance with many files

2011-07-06 Thread Phil Stracchino
On 07/06/11 10:41, Adrian Reyer wrote:
> On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
>> should I use for my tables?" is MyISAM.[1]  At this point, wherever
>> possible, EVERYONE should be using InnoDB.
> 
> I will, if the current backup ever finishes. It will be on MySQL 5.1
> for a start, though (Debian squeeze). I am aware that InnoDB has more
> stable performance, according to the posts I have found in various
> bacula-mysql related threads. Your post gives me some hope that I can
> get away with converting the table format instead of migrating to
> postgres, simply because I have nicer backup scripts for mysql than
> for postgres.


Oh, sure.  It's dead simple.

for table in $(mysql -N --batch -e "select
concat(table_schema,'.',table_name) from information_schema.tables where
engine='MyISAM' and table_schema not in
('information_schema','mysql')"); do
    mysql -N --batch -e "alter table $table engine=InnoDB"
done


Keep in mind that on MySQL 5.1, you should preferably be using the
InnoDB plugin rather than the built-in InnoDB engine.  The plugin InnoDB
engine is newer and performs better.
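
If I recall the 5.1 mechanics correctly, enabling the plugin is a my.cnf
change along these lines (check the MySQL 5.1 manual for the full
plugin-load list covering the InnoDB INFORMATION_SCHEMA tables):

[mysqld]
# disable the older built-in engine and load the InnoDB plugin instead
ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so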


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-07-06 Thread Steve Costaras

Found a solution to why multi-volume was not working correctly (I don't know 
what the problem was): I had to re-create the database, and once I re-did that 
and recreated the tape pool, it started working, with multiple jobs using the 
same tape. Go figure.

As for your comment here on multi-streaming, YES, I agree 100%. Splitting this 
up into multiple jobs /works/, but it's a kludge. Restores now also have to be 
done separately, as there is no cross-correlation that I'm backing up the SAME 
client. So if I have 20 jobs I need 20 restores, and since the restores are 
separate they will take more tape-mounting time, as each restore will wait on 
the previous one.

Arkeia and NetBackup both allow multiple streams or flows for a single 
job/client. That would be the ideal solution; however, like I mentioned 
originally, I would rather put the $$ into an open source project than give it 
to a closed source one if that was possible (it's not like I'm a large company 
here at all).



-Original Message-
From: Florian Heigl [mailto:florian.he...@gmail.com]
Sent: Wednesday, July 6, 2011 09:20 AM
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Performance options for single large (100TB) server 
backup?


Hi,

Breaking the server into multiple file daemons sounds as broken as
the stuff amanda users had to do (break your filesystem into
something that fits a tape).
Saving multiple streams is something that has been proven as a
solution for many years, and where that is still too slow, NDMP comes
into place. (in case of ZFS, NDMP is still at an unusable stage)

100TB is a lot, but I wonder if everyone agrees the "right" solution
would be saving multiple streams instead of splitting up the source
system (will be fun to do a restore of such a split client)...

Hopefully some of the larger Bacula customers will fund these features
some day, as both have the mix of being very important, complex and
elemental changes :))

*hint hint*



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Adrian Reyer
On Wed, Jul 06, 2011 at 10:09:56AM -0400, Phil Stracchino wrote:
> should I use for my tables?" is MyISAM.[1]  At this point, wherever
> possible, EVERYONE should be using InnoDB.

I will, if the current backup ever finishes. It will be on MySQL 5.1
for a start, though (Debian squeeze). I am aware that InnoDB has more
stable performance, according to the posts I have found in various
bacula-mysql related threads. Your post gives me some hope that I can
get away with converting the table format instead of migrating to
postgres, simply because I have nicer backup scripts for mysql than
for postgres.

> your MySQL configuration using MySQLtuner (free download from
> http://mysqltuner.com/mysqltuner.pl; requires Perl, DBI.pm, and DBD::mysql.)

I am using that one and tuning-primer.sh from http://www.day32.com/MySQL/

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



Re: [Bacula-users] Backing up only files with *foo* in the name?

2011-07-06 Thread Phil Stracchino
On 07/06/11 10:08, dobbin wrote:
> Hello there, we've got one particular server backing up to a slightly
> older debian server running bacula 2.44.
> 
> There's way too much stuff on the server to back up, but the only
> important files are ones with *foo* in the filename.
> 
> I want to back up every file in D:/media and all of its
> subdirectories (of which there are many) with *foo* in the name.
> 
> I'm having a bugger of a time figuring out the syntax, can anyone
> point me in the right direction?

This is probably a good case for piping your Fileset from a simple
script (which probably needn't be much more than a 'find / -iname \*foo\*').

Oh, and you should probably use the 'ignore fileset changes' option.
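
Something like the following, where /usr/local/bin/list-foo.sh is a
hypothetical helper that prints one matching path per line (essentially the
find above):

FileSet {
  Name = "FooFiles"
  Ignore FileSet Changes = yes
  Include {
    Options { signature = MD5 }
    # The leading | tells Bacula to run the program and read the
    # file list from its stdout when the job starts.
    File = "|/usr/local/bin/list-foo.sh"
  }
}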


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Performance options for single large (100TB) server backup?

2011-07-06 Thread Florian Heigl
Hi,

Breaking the server into multiple file daemons sounds as broken as
the stuff amanda users had to do (break your filesystem into
something that fits a tape).
Saving multiple streams is something that has been proven as a
solution for many years, and where that is still too slow, NDMP comes
into place. (in case of ZFS, NDMP is still at an unusable stage)

100TB is a lot, but I wonder if everyone agrees the "right" solution
would be saving multiple streams instead of splitting up the source
system (will be fun to do a restore of such a split client)...

Hopefully some of the larger Bacula customers will fund these features
some day, as both have the mix of being very important, complex and
elemental changes :))

*hint hint*



Re: [Bacula-users] Performance with many files

2011-07-06 Thread Phil Stracchino
On 07/06/11 08:04, Adrian Reyer wrote:
> Hi,
> 
> I am using bacula for a bit more than a month now and the database gets
> slower and slower both for selecting stuff and for running backups as
> such.
> I am using a MySQL database, still myisam tables and I am considering
> switching to InnoDB tables or postgresql.


Just for the record:

Unless you are using merge tables (which, since the advent of table
partitioning, you shouldn't be) or full-text indexes, there is NO USE
CASE for MySQL for which the correct answer to "What storage engine
should I use for my tables?" is MyISAM.[1]  At this point, wherever
possible, EVERYONE should be using InnoDB.

(Also, preferably everyone should be using MySQL 5.5.  However, RHEL -
for example - isn't even shipping MySQL 5.1 yet, let alone 5.5.  They'll
probably start shipping MySQL 5.5 along about the time MySQL hits 6.5.)

There are many reasons for this, including performance, crash recovery,
and referential integrity (InnoDB offers full ACID guarantees, MyISAM
does not).  MyISAM was designed to run acceptably well on servers with
32MB or less RAM, and it not only *does not*, it CANNOT make effective
use of more than a small fraction of the memory available on modern-day
commodity hardware.  MyISAM cannot re-apply an interrupted transaction,
cannot roll back a failed transaction, and it is not robust in the face
of events like disk full conditions or unexpected power outages.


You will (still) hear a lot of FUD from people who frankly don't
understand the issues, about how InnoDB locks are slower than MyISAM
locks.  This is, *technically*, true.  However, it completely fails to
take into account that not only are InnoDB locks row level while MyISAM
locks are table level - meaning that many *write* transactions can
execute simultaneously on the same InnoDB table as long as they update
different rows, while NOTHING can execute simultaneously to any write
transaction on a MyISAM table - but, thanks to multi-view consistency,
InnoDB can execute most queries without needing to lock anything at all.
 The real performance situation is this:  With an identical transaction
load and identical data on identical hardware, on a 100% read query
load, which is the *best possible* performance case for MyISAM, InnoDB
still outperforms MyISAM by 60% or more.  On a query load that is 75%
reads, 25% writes, InnoDB outperforms MyISAM by over 400%.

So, yes.  Convert all of your tables to InnoDB.  Also, if you can,
update to MySQL 5.5 if you're not already using it.  (Properly
configured, InnoDB in MySQL 5.5 on Linux has a 150% performance increase
over InnoDB 5.1, and on Windows, 5.5 InnoDB performs 1500% better than
5.1 InnoDB, according to Oracle's benchmarks.)  Throw as much memory at
the InnoDB buffer pool as you can spare, pare down MyISAM buffers that
you're not using, and if you're using 5.5, look at the new
innodb_buffer_pool_instances variable.  You can get a basic check of
your MySQL configuration using MySQLtuner (free download from
http://mysqltuner.com/mysqltuner.pl; requires Perl, DBI.pm, and DBD::mysql.)
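
Pulling those suggestions together, a minimal my.cnf sketch; the sizes are
placeholders to scale to your own RAM, not benchmark-derived recommendations:

[mysqld]
default-storage-engine = InnoDB
innodb_buffer_pool_size = 8G         # as large as you can spare
innodb_buffer_pool_instances = 4     # MySQL 5.5+ only
key_buffer_size = 16M                # pare down unused MyISAM buffers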



[1]  At this time, MySQL *itself* still requires MyISAM for the grant
tables.  Word from inside Oracle says that fixing this and enabling the
grant tables to also be stored in InnoDB is work in progress, and that
once this is accomplished, the entire MyISAM storage engine will
probably be deprecated.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] Backing up only files with *foo* in the name?

2011-07-06 Thread dobbin
Hello there, we've got one particular server backing up to a slightly older 
Debian server running bacula 2.44.

There's way too much stuff on the server to back up, but the only important 
files are ones with *foo* in the filename.

I want to back up every file in D:/media and all of its subdirectories (of 
which there are many) with *foo* in the name.

I'm having a bugger of a time figuring out the syntax; can anyone point me in 
the right direction?



[Bacula-users] Write spooled data to 2 volumes

2011-07-06 Thread Jan Behrend
Hi,

I'd like to have a second tape copy written to an offsite tape library which
is accessible in the local SAN.  For now I am doing my usual backup jobs
and afterwards copying these jobs to the offsite library.  This takes
forever, because Bacula needs to spool all data from the first backup
volume again and then write it to the second one.  I'd like to have
Bacula write the spooled data of a nightly backup job to two volumes
right away.

Any hints, thoughts ...
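
(For reference, the copy-after-backup approach described above is usually
configured with a Copy job roughly like this; the resource names are
placeholders, and the source pool's Next Pool directive points at the
offsite pool:)

Job {
  Name = "CopyToOffsite"
  Type = Copy
  Level = Full
  Client = some-client-fd           # required syntactically, not used
  FileSet = "Full Set"              # likewise
  Messages = Standard
  Pool = OnsitePool                 # read side; Pool has Next Pool = OffsitePool
  Selection Type = PoolUncopiedJobs
}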

Cheers Jan

-- 
MAX-PLANCK-INSTITUT fuer Radioastronomie
Jan Behrend - Rechenzentrum

Auf dem Huegel 69, D-53121 Bonn  
Tel: +49 (228) 525 359, Fax: +49 (228) 525 229
jbehr...@mpifr-bonn.mpg.de http://www.mpifr-bonn.mpg.de






Re: [Bacula-users] Web interface

2011-07-06 Thread Mauro Colorio
> I would like to say exactly "webmin bacula module"
>

What's wrong with the webmin module?
For a non-sysadmin user it is enough; I think a non-admin just needs to
restore a file if something goes wrong :)

ciao
Mauro



[Bacula-users] Fwd: RE: Automatic Volume creation

2011-07-06 Thread Jeremy Maes


  
  
On 6/07/2011 10:17, Robert.Mortimer wrote:
> Hi,
>
> I have Bacula running with an IBM TL3400. As the tapes are bar coded,
> Bacula can read the library and add the tapes to its catalogue. I can
> then move them to the backup pools. My issue is that if the tapes have
> not had a volume label written to the front of the tape using the label
> command, they fail to mount with the following bconsole status.
>
> Device "Drive-3" (/dev/nst2) open but no Bacula volume is currently
> mounted.
>     Device is BLOCKED waiting for mount of volume "CBP10020",
>        Pool:        snapshot-pool
>        Media type:  LTO-3
>     Slot 11 is loaded in drive 2.
>     Total Bytes Read=0 Blocks Read=0 Bytes/block=0
>     Positioned at File=0 Block=0
>
> Tape CBP10020 is in /dev/nst2 but is blank. How do I get Bacula to
> automatically initialise the tape and write the volume label?

From what I know, all you need is one command to label all the tapes
with their barcode. Or as the manual reads:

  If your autochanger has barcode labels, you can label all the Volumes
  in your autochanger one after another by using the label barcodes
  command. For each tape in the changer containing a barcode, Bacula
  will mount the tape and then label it with the same name as the
  barcode. An appropriate Media record will also be created in the
  catalog. Any barcode that begins with the same characters as specified
  on the "CleaningPrefix=xxx" command will be treated as a cleaning
  tape, and will not be labeled.

> I sort of assumed that AutomaticMount would do it. I am reluctant to
> use LabelMedia because I am not sure it will use the label on the bar
> code rather than issuing a new one. Can anyone confirm the behaviour
> of LabelMedia when used with a bar coded auto-changer?
>
> Robert Mortimer
> Linux Systems Administrator

I don't think you'll need LabelMedia, as that would indeed label the
tapes with a label as specified by the LabelFormat directive you'd put
in there with it. The command above should achieve what you want.

More on autochangers in Chapter 31 of the manual :)

Regards,
Jeremy

Robert.Mortimer replied:

> Bacula was set up to periodically read the auto-changer catalogue. As a
> result, media records existed for the unlabeled tapes. Tapes with a
> media record are overlooked by "label barcodes". The other issue I had
> was that when I tried it, it threatened to label all the tapes and
> assign them to one pool. It did not explicitly mention that it would
> ignore tapes with an existing media record. I will now think about the
> automatic read of the tape library and change it to sweep new tapes
> into the scratch pool.
>
> Thanks, Rob



[Bacula-users] Performance with many files

2011-07-06 Thread Adrian Reyer
Hi,

I have been using bacula for a bit more than a month now, and the database
gets slower and slower, both for selecting stuff and for running backups
as such.
I am using a MySQL database, still with MyISAM tables, and I am considering
switching to InnoDB tables or postgresql.
Amongst normal fileserver data there is 450GB of IMAP server data, many
small single files to be backed up, and after 1 month (2 full backups, weekly
differentials and daily incrementals) the tables look like this:
select count(*) FROM Filename;
3928838
select count(*) FROM File;
54211255
select count(*) FROM Path;
1016689
Diskspace:
# du -sk /var/lib/mysql/
8741404 /var/lib/mysql/

The backup is mostly to disk and currently uses 11TB of space, the
disk-volumes are valid vor 35 days and are copied to tape somewhere in
that period to remain available for 13 months.

The database server has 16GB of RAM and MySQL is configured to use ~8GB
of RAM. MySQL parameters:
key_buffer  = 8192M
max_allowed_packet  = 40M
join_buffer_size= 4M
thread_stack= 192K
thread_cache_size   = 8
max_connections = 200
table_cache = 1024
thread_concurrency  = 10
query_cache_limit   = 127M
query_cache_size= 127M
max_heap_table_size = 512M
tmp_table_size = 512M

The backups run with SpoolData=yes and SpoolAttributes=yes, the latter
specifically set for the backup server itself, as it serves as an
rsync target as well and has SpoolData=no.
bacula-director and -sd reside on a small server with 4GB RAM; the
database itself is on a separate server.

It seems like performance will get worse and worse over time, and this is
only the 1st month of the 13 I'd like to keep. The problem seems not to
be disk I/O, but MySQL running at 99% CPU for extended times, probably
while despooling attribute data.

What can I do to improve the performance?

Regards,
Adrian
-- 
LiHAS - Adrian Reyer - Hessenwiesenstraße 10 - D-70565 Stuttgart
Fon: +49 (7 11) 78 28 50 90 - Fax:  +49 (7 11) 78 28 50 91
Mail: li...@lihas.de - Web: http://lihas.de
Linux, Netzwerke, Consulting & Support - USt-ID: DE 227 816 626 Stuttgart



Re: [Bacula-users] Web interface

2011-07-06 Thread Carlo Filippetto
Hi,
you are right, but as you may know, not all people are system
administrators, so I need to leave an easy interface for the less
experienced admins

Any other suggestions?

Thanks


---
Carlo






2011/7/5 Rory Campbell-Lange 

> The best interface in my view is psql and bconsole.
>
> These are 'web' accessible over ssh.
>
> --
> Rory Campbell-Lange
> Director
> Campbell-Lange Workshop
> www.campbell-lange.net
>
>
> On 5 Jul 2011, at 17:20, Mauro Colorio  wrote:
>
> >> I tried phpmyadmin, and webacula but I need something more powerfull...
> >
> > phpmyadmin isn't a bacula web interface..
> > webacula works great but fails on restore,
> > I suggest to use webmin bacula module
> >
> >
> > ciao
> > Mauro
> >
> >


Re: [Bacula-users] Web interface

2011-07-06 Thread Carlo Filippetto
Yes, sorry!!
I meant to say exactly "webmin bacula module".

Thanks


2011/7/5 Mauro Colorio 

> > I tried phpmyadmin, and webacula but I need something more powerfull...
>
> phpmyadmin isn't a bacula web interface..
> webacula works great but fails on restore,
> I suggest to use webmin bacula module
>
>
> ciao
> Mauro
>


Re: [Bacula-users] Fatal error: Network error with FD during Backup: ERR=Interrupted system call

2011-07-06 Thread Martin Simmons
> On Wed, 6 Jul 2011 10:06:02 +0200, Markus Schulz said:
> 
> Hello,
> 
> I have a strange error: every attempt at a full backup aborts after a
> certain amount of time. The client, director and storage daemon are
> running on the same computer, therefore any network/firewall issues can
> be excluded.
> The system is Debian squeeze.
> 
> Any ideas?

Do you have MaxRunSchedTime, MaxWaitTime or MaxRunTime set in the config?
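
(For reference, if set, they would appear in a Job or JobDefs resource
roughly like this; the resource name and values are examples only:)

Job {
  Name = "FullBackup"              # hypothetical excerpt
  Max Run Time = 6 hours           # job is canceled if it runs longer
  Max Wait Time = 1 hour           # canceled if blocked this long on a resource
}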

__Martin



Re: [Bacula-users] Automatic Volume creation

2011-07-06 Thread Jeremy Maes

On 6/07/2011 10:17, Robert.Mortimer wrote:

Hi,

I have Bacula running with an IBM TL3400. As the tapes are barcoded,
Bacula can read the library and add the tapes to its catalogue. I can
then move them to the backup pools. My issue is that tapes which have not
had a volume label written to the front using the label command fail to
mount, with the following bconsole status.

Device "Drive-3" (/dev/nst2) open but no Bacula volume is currently
mounted.
 Device is BLOCKED waiting for mount of volume "CBP10020",
Pool:snapshot-pool
Media type:  LTO-3
 Slot 11 is loaded in drive 2.
 Total Bytes Read=0 Blocks Read=0 Bytes/block=0
 Positioned at File=0 Block=0


Tape CBP10020 is in /dev/nst2 but is blank. How do I get Bacula to
automatically initialise the tape and write the volume label? My storage
definition is as follows:


From what I know, all you need is one command to label all the tapes
with their barcode. As the manual reads:

If your autochanger has barcode labels, you can label all the Volumes in
your autochanger one after another by using the *label barcodes* command.
For each tape in the changer containing a barcode, Bacula will mount the
tape and then label it with the same name as the barcode. An appropriate
Media record will also be created in the catalog. Any barcode that begins
with the same characters as specified on the "CleaningPrefix=xxx" command
will be treated as a cleaning tape, and will not be labeled.



I sort of assumed that AutomaticMount would do it. I am reluctant to use
LabelMedia because I am not sure it will use the label on the barcode
rather than issuing a new one. Can anyone confirm the behaviour of
LabelMedia when used with a barcoded autochanger?

Robert Mortimer
Linux Systems Administrator
I don't think you'll need LabelMedia, as that would label the tapes
according to the LabelFormat directive you'd put in alongside it. The
label barcodes command above should achieve what you want.
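
For reference, a minimal bconsole sketch (the storage name "TL3400" and the
slot range are hypothetical; substitute the Storage resource name and slots
from your own setup):

*label barcodes storage=TL3400 pool=snapshot-pool slots=1-24

Bacula then loads each listed slot in turn, writes a volume label matching
the barcode, and creates the corresponding Media record in the catalog.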


More on autochangers in Chapter 31 of the manual :)

Regards,
Jeremy



[Bacula-users] Automatic Volume creation

2011-07-06 Thread Robert.Mortimer
Hi,

I have Bacula running with an IBM TL3400. As the tapes are barcoded,
Bacula can read the library and add the tapes to its catalogue. I can
then move them to the backup pools. My issue is that tapes which have not
had a volume label written to the front using the label command fail to
mount, with the following bconsole status.

Device "Drive-3" (/dev/nst2) open but no Bacula volume is currently
mounted.
Device is BLOCKED waiting for mount of volume "CBP10020",
   Pool:snapshot-pool
   Media type:  LTO-3
Slot 11 is loaded in drive 2.
Total Bytes Read=0 Blocks Read=0 Bytes/block=0
Positioned at File=0 Block=0


Tape CBP10020 is in /dev/nst2 but is blank. How do I get Bacula to
automatically initialise the tape and write the volume label? My storage
definition is as follows:

Device {
  Name = Drive-3
  Drive Index = 2
  Media Type = LTO-3
  Archive Device = /dev/nst2
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Spool Size = 100GB
  # Spool Size = 40GB
  Maximum Job Spool Size = 40GB
  Spool Directory = /backup/hosts/SPOOL
}


I sort of assumed that AutomaticMount would do it. I am reluctant to use
LabelMedia because I am not sure it will use the label on the barcode
rather than issuing a new one. Can anyone confirm the behaviour of
LabelMedia when used with a barcoded autochanger?

Robert Mortimer
Linux Systems Administrator


 


[Bacula-users] Fatal error: Network error with FD during Backup: ERR=Interrupted system call

2011-07-06 Thread Markus Schulz
Hello,

I have a strange error. Every attempt at a full backup aborts after some
amount of time. The client, director and storage daemon are running on the
same computer, so any network/firewall issues can be excluded.
The system is Debian Squeeze.

Any ideas?

This is the log from the last run:

05-Jul 18:17 ferrari-dir JobId 17073: Start Backup JobId 17073, 
Job=ClientFerrari.2011-07-05_18.17.14_18
05-Jul 18:17 ferrari-dir JobId 17073: Using Device "DLT-V4"
05-Jul 18:17 ferrari-fd JobId 17073: shell command: run 
ClientRunBeforeJob "/etc/bacula/scripts/make_svn_backup"
05-Jul 19:38 ferrari-sd JobId 17073: Volume "TAPE004" previously 
written, moving to end of data.
05-Jul 19:40 ferrari-sd JobId 17073: Ready to append to end of Volume 
"TAPE004" at file=5.
05-Jul 19:40 ferrari-sd JobId 17073: Spooling data ...
05-Jul 19:42 ferrari-fd JobId 17073:  /var/lib/nfs/rpc_pipefs is a 
different filesystem. Will not descend from /var into 
/var/lib/nfs/rpc_pipefs
05-Jul 19:42 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 19:42 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,746 bytes ...
05-Jul 19:55 ferrari-sd JobId 17073: Despooling elapsed time = 00:12:37, 
Transfer rate = 6.957 M Bytes/second
05-Jul 19:55 ferrari-sd JobId 17073: Spooling data again ...
05-Jul 19:56 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 19:56 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,877 bytes ...
05-Jul 20:09 ferrari-sd JobId 17073: Despooling elapsed time = 00:12:45, 
Transfer rate = 6.884 M Bytes/second
05-Jul 20:09 ferrari-sd JobId 17073: Spooling data again ...
05-Jul 20:11 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 20:11 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,879 bytes ...
05-Jul 20:24 ferrari-sd JobId 17073: Despooling elapsed time = 00:12:42, 
Transfer rate = 6.911 M Bytes/second
05-Jul 20:24 ferrari-sd JobId 17073: Spooling data again ...
05-Jul 20:26 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 20:26 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,883 bytes ...
05-Jul 20:39 ferrari-sd JobId 17073: Despooling elapsed time = 00:12:54, 
Transfer rate = 6.804 M Bytes/second
05-Jul 20:39 ferrari-sd JobId 17073: Spooling data again ...
05-Jul 20:43 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 20:43 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,698 bytes ...
05-Jul 20:56 ferrari-sd JobId 17073: Despooling elapsed time = 00:12:50, 
Transfer rate = 6.839 M Bytes/second
05-Jul 20:56 ferrari-sd JobId 17073: Spooling data again ...
05-Jul 20:59 ferrari-sd JobId 17073: User specified spool size reached.
05-Jul 20:59 ferrari-sd JobId 17073: Writing spooled data to Volume. 
Despooling 5,266,577,766 bytes ...
05-Jul 21:05 ferrari-dir JobId 17073: Fatal error: Network error with FD 
during Backup: ERR=Interrupted system call
05-Jul 21:05 ferrari-sd JobId 17073: JobId=17073 
Job="ClientFerrari.2011-07-05_18.17.14_18" marked to be canceled.
05-Jul 21:05 ferrari-sd JobId 17073: JobId=17073 
Job="ClientFerrari.2011-07-05_18.17.14_18" marked to be canceled.
05-Jul 21:05 ferrari-dir JobId 17073: Fatal error: No Job status 
returned from FD.
05-Jul 21:05 ferrari-dir JobId 17073: Bacula ferrari-dir 5.0.2 
(28Apr10): 05-Jul-2011 21:05:01
  Build OS:   i486-pc-linux-gnu debian squeeze/sid
  JobId:  17073
  Job:ClientFerrari.2011-07-05_18.17.14_18
  Backup Level:   Full
  Client: "ferrari-fd" 5.0.2 (28Apr10) i486-pc-linux-
gnu,debian,squeeze/sid
  FileSet:"StdSet" 2007-01-11 15:22:08
  Pool:   "Default" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"Tapes" (From Job resource)
  Scheduled time: 05-Jul-2011 18:17:03
  Start time: 05-Jul-2011 19:38:16
  End time:   05-Jul-2011 21:05:01
  Elapsed time:   1 hour 26 mins 45 secs
  Priority:   20
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): TAPE004
  Volume Session Id:  1260
  Volume Session Time:1298805900
  Last Volume Bytes:  32,353,090,560 (32.35 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  SD despooling Data
  Termination:Backup Canceled


regards,
msc


Re: [Bacula-users] Why doesn't Bacula use the whole tape? [SOLVED]

2011-07-06 Thread Stefan Günther
Hi,

we had to add the parameters

Maximum Block Size = 65536
Maximum Network Buffer Size = 65536

to the device definition in bacula-sd.conf:

Device {
  Name = Drive-1
  Media Type = Ultrium-LTO3
  Archive Device = /dev/st0
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 100G
  Maximum Block Size = 65536
  Maximum Network Buffer Size = 65536
}

The corresponding output of dmesg was:

[ 3299.906724] st0: Failed to read 65536 byte block with 64512 byte transfer.
[ 3856.794626] st0: Failed to read 65536 byte block with 64512 byte transfer.

In other words, the tape held 65536-byte blocks while only 64512 bytes
(Bacula's default block size) were being transferred per read; raising the
limits to 65536 brought the two into agreement.
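
To check what block size the drive currently reports, the standard mt tape
utility can be used (device name taken from the config above):

mt -f /dev/st0 status

The block size field in the output shows the drive's current fixed block
size, or 0 when it is set to variable-block mode.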

Stefan


Re: [Bacula-users] New to Bacula - emergency situation.

2011-07-06 Thread Jari Fredriksson

All the necessary changes to the configuration are made not via bconsole
but by editing /etc/bacula/bacula-dir.conf and then issuing a reload
command in bconsole.
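
A minimal sketch of that workflow (the choice of editor is, of course, yours):

# edit the Director configuration
vi /etc/bacula/bacula-dir.conf

# then, from bconsole, ask the Director to re-read its config
*reload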

-- 

The surest protection against temptation is cowardice.
-- Mark Twain


