Re: [Bacula-users] Fwd: FreeBSD snapshots and complaints "Will not descend"

2011-10-11 Thread Christian Manal
Am 10.10.2011 21:11, schrieb Troy Kocher:
 
 On 10,Oct 2011, at 1:12 PM, Martin Simmons wrote:
 
 On Mon, 10 Oct 2011 11:51:14 -0500, Troy Kocher said:


 08-Oct 23:57 kfoobarb-sd JobId 2858: Job write elapsed time = 14:45:49, 
 Transfer rate = 2.702 M Bytes/second 

 Are you running an automounter for home directories?  That could explain both
 the "Will not descend" messages and also why the warnings vary over time.

 __Martin


 
 I'm not running an automounter.  And as I mentioned, this error is 
 intermittent.  I run this job as a daily incremental without complaint; I get this 
 issue on the weekly differential run.  Regarding the time warning, I 
 corrected it once by forcing an ntp sync on the fd client.  I think ntp must 
 not be running properly over there.
 
 Beginning to feel like it's something with the snapshot (/mnt/foobar) not 
 responding as a normal file system under load and telling bacula-fd that access 
 is delayed/denied/?, and then Bacula interprets the delay as the device being 
 unreachable?
 
 Troy


Hi,

Bacula won't recurse into other filesystems unless you explicitly tell it to.
Look at the onefs option of the FileSet resource:

http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#8566
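For reference, a minimal FileSet sketch with onefs disabled (the resource name and path here are illustrative, not taken from Troy's config):

```conf
# Hypothetical example -- adjust names and paths to your own setup.
FileSet {
  Name = "snapshot-fs"
  Include {
    Options {
      signature = MD5
      onefs = no    # descend into filesystems mounted below the File paths
    }
    File = /mnt/foobar
  }
}
```

With onefs = yes (the default), Bacula stays on the filesystem each File entry starts on and logs "Will not descend" for anything mounted underneath.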


Regards,
Christian Manal

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2d-oct
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Jarrod Holder
Bacula version 5.0.3
 
In BAT, when trying to restore a directory (roughly 31,000 files in 560 sub 
folders), the "Filling Database Table" step takes an extremely long time to complete 
(about an hour or so).
 
I've been looking around for a way to speed this up.  Found a post on here that 
referred to an article that basically said PostgreSQL was the way to go as far 
as speed 
(http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog).
  So I converted from MySQL to PostgreSQL using the conversion procedure in the 
Bacula documentation.  We are now on PostgreSQL, but the speed seems just as 
slow (if not slower).  Is there anything else that can be done to speed this 
process up?
 
I've also tried running the DB under MySQL with both MyISAM and InnoDB tables.  
Both had the same slow performance.  With MySQL, I also tried using the 
my-large.cnf and my-huge.cnf files.  Neither helped.
 
Server load is very low during this process (0.06).  The BAT process is at about 3% 
cpu and 1.6% memory.  The Postgres service is at about 1% cpu, 0.6% memory.  The drive 
array is pretty quiet also.
 
Any help would be greatly appreciated.  If any extra info is needed, I will 
gladly provide it.


Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Christian Manal
Am 11.10.2011 14:04, schrieb Jarrod Holder:
 Bacula version 5.0.3
  
 In BAT, when trying to restore a directory (roughly 31,000 files in 560 sub 
 folders), the "Filling Database Table" step takes an extremely long time to 
 complete (about an hour or so).
  
 I've been looking around for a way to speed this up.  Found a post on here 
 that referred to an article that basically said PostgreSQL was the way to go 
 as far as speed 
 (http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog).
   So I converted from MySQL to PostgreSQL using the conversion procedure in 
 the Bacula documentation.  We are now on PostgreSQL, but the speed seems just 
 as slow (if not slower).  Is there anything else that can be done to speed 
 this process up?
  
 I've also tried running the DB under MySQL with both MyISAM and InnoDB tables. 
  Both had the same slow performance.  With MySQL, I also tried using the 
 my-large.cnf and my-huge.cnf files.  Neither helped.
  
 Server load is very low during this process (0.06).  The BAT process is at about 
 3% cpu and 1.6% memory.  The Postgres service is at about 1% cpu, 0.6% memory.  The drive 
 array is pretty quiet also.
  
 Any help would be greatly appreciated.  If any extra info is needed, I will 
 gladly provide it.


Hi,

What OS are you running on? Did you build Bacula from the tarball? I had
a similar problem on Solaris 10 with the stock Postgres 8.3. Bacula's
'configure' didn't detect that Postgres was thread safe, so it omitted
--enable-batch-insert.

Without batch-insert, a full backup of my biggest fileset took roughly
24 hours. The backup of the data itself was (and still is) only 4 to 5
hours, the rest was despooling attributes into the database (I only
noticed this when I enabled attribute spooling).

With batch-insert (I had to hack around in the 'configure' script a
little), the time for attribute despooling shrank to maybe 20
_minutes_. It helps *a lot*.
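For anyone hitting the same thing, the rebuild looks roughly like this (flags as I remember them for 5.0.x; the summary 'configure' prints at the end should report whether batch insert ended up enabled, so check that before building):

```
./configure --with-postgresql --enable-batch-insert
make
make install
```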


Regards,
Christian Manal



[Bacula-users] Multiple autoloaders, 2nd autoloader has 0 slots

2011-10-11 Thread tomse
Hello.

I've just setup 2 autoloaders on the same server

the first autoloader works fine when doing "update slots" in the console

doing the same on the second, it responds with "Device my2dev has 0 slots"


bacula-sd.conf

Autochanger {
  Name = my1dev-library
  Changer Command = "/usr/local/share/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/pass3
}

Autochanger {
  Name = my2dev-library
  Changer Command = "/usr/local/share/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/pass5
}


The devices set up are copies of each other except for the name and device 
(I'm not posting the device configs here as I don't think they're relevant in this 
matter)
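For reference, here's what each drive's Device resource would typically need in order to tie it to the changer, since "0 slots" often traces back to the Device resources rather than the Autochanger block (names and device nodes below are hypothetical, not from my actual config):

```conf
Device {
  Name = my2dev-drive-0             # hypothetical name
  Archive Device = /dev/nsa1        # hypothetical tape device node
  Media Type = LTO
  Autochanger = yes                 # required so the SD uses the changer
  Drive Index = 0
  Changer Device = /dev/pass5       # must match the Autochanger resource
  Changer Command = "/usr/local/share/bacula/mtx-changer %c %o %S %a %d"
  AutomaticMount = yes
}
```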

Running
mtx-changer /dev/pass3 slots
mtx-changer /dev/pass5 slots
both return 8, 
which is the correct number of slots

The 2 autoloaders are both the same model, HP 1x8 G2 SAS 3000.
Running FreeBSD 8.2 - Bacula 5.0.3


Does anyone have any idea why this happens?
Is this related to the issue where, after changing magazines, it also says 0 slots 
when you run "update slots" / scan barcodes (even on the 
working autoloader)? Running it a second time gives the correct output on 
the autoloader that works.

+--
|This was sent by to...@tomse.dk via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Alan Brown
Jarrod Holder wrote:
 Bacula version 5.0.3
  
 In BAT, when trying to restore a directory (roughly 31,000 files in 560 sub 
 folders)  The Filling Database Table takes an extremely long time to 
 complete (about an hour or so).
  
 I've been looking around for a way to speed this up.  Found a post on here 
 that referred to an article that basically said PostgreSQL was the way to go 
 as far as speed 
 (http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog).
   So I converted from MySQL to PostgreSQL using the conversion procedure in 
 the Bacula documentation.  We are now on PostgreSQL, but the speed seems just 
 as slow (if not slower).  Is there anything else that can be done to speed 
 this process up?

Have you performed any mysql/postgresql optimisations?

Default configs for both databases are slow.
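As a rough illustration of the kind of tuning meant here, some starting points for postgresql.conf on a box with a few GB of RAM (values below are examples to size against your own hardware, not recommendations):

```conf
shared_buffers = 1GB             # the stock default is far too small
work_mem = 64MB                  # helps the big sorts/joins restore queries do
effective_cache_size = 3GB       # planner hint: how much OS file cache to expect
checkpoint_segments = 16         # pre-9.5 setting; smooths heavy insert load
```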






Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Phil Stracchino
On 10/11/11 08:04, Jarrod Holder wrote:
 I've also tried running the DB under MySQL with both MyISAM and InnoDB
 tables.  Both had the same slow performance.  With MySQL, I also
 tried using the my-large.cnf and my-huge.cnf files.  Neither helped.

Ignore the packaged out-of-the-box MySQL configs entirely.  They are
worthless.  They were written back when a large machine was one with
more than 32MB of RAM.  If you want performance out of MySQL, learn to
tune and configure it properly yourself.



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Brian Debelius

Hi,

I have a 5GB database.  The server has 6GB RAM.  These are the settings 
I am using right now.


default-storage-engine=innodb
default-table-type=innodb
query_cache_limit=16M
query_cache_size=256M
innodb_log_file_size=384M
innodb_buffer_pool_size=3G
innodb_log_buffer_size=2M
innodb_flush_log_at_trx_commit=2

Your mileage may vary,
Brian-


On 10/11/2011 8:04 AM, Jarrod Holder wrote:

Bacula version 5.0.3
In BAT, when trying to restore a directory (roughly 31,000 files in 
560 sub folders), the "Filling Database Table" step takes an extremely long 
time to complete (about an hour or so).
I've been looking around for a way to speed this up.  Found a post on 
here that referred to an article that basically said PostgreSQL was 
the way to go as far as speed 
(http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog).  
So I converted from MySQL to PostgreSQL using the conversion procedure 
in the Bacula documentation.  We are now on PostgreSQL, but the speed 
seems just as slow (if not slower).  Is there anything else that can 
be done to speed this process up?
I've also tried running the DB under MySQL with both MyISAM and InnoDB 
tables.  Both had the same slow performance.  With MySQL, I also 
tried using the my-large.cnf and my-huge.cnf files.  Neither helped.
Server load is very low during this process (0.06).  BAT process is at 
about 3% cpu and 1.6% memory.  Postgres service is about 1% cpu, 0.6% 
memory.  Drive array is pretty quiet also.
Any help would be greatly appreciated.  If any extra info is needed, I 
will gladly provide it.







Re: [Bacula-users] Filling Database Table - very slow

2011-10-11 Thread Phil Stracchino
On 10/11/11 11:09, Brian Debelius wrote:
 Hi,
 
 I have a 5GB database.  The server has 6GB RAM.  These are the settings
 I am using right now.
 
 default-storage-engine=innodb
 default-table-type=innodb
 query_cache_limit=16M
 query_cache_size=256M
 innodb_log_file_size=384M
 innodb_buffer_pool_size=3G
 innodb_log_buffer_size=2M
 innodb_flush_log_at_trx_commit=2
 
 Your mileage may vary,

If using MySQL 5.5, do not overlook the innodb_buffer_pool_instances
setting.  (The name is a little misleading, IMHO; they should have
called it innodb_buffer_pool_partitions.)  It is one of the most
important configuration settings for obtaining optimum InnoDB
performance in MySQL 5.5.
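For example, in my.cnf (illustrative values; size the pool to your own RAM):

```conf
[mysqld]
innodb_buffer_pool_size      = 3G
innodb_buffer_pool_instances = 4   # splits the pool to reduce mutex contention
```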


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] FreeBSD snapshots and complaints "Will not descend"

2011-10-11 Thread Troy Kocher

On 11,Oct 2011, at 1:53 AM, Christian Manal wrote:

 Am 10.10.2011 21:11, schrieb Troy Kocher:
 
 On 10,Oct 2011, at 1:12 PM, Martin Simmons wrote:
 
 On Mon, 10 Oct 2011 11:51:14 -0500, Troy Kocher said:
 
 
 08-Oct 23:57 kfoobarb-sd JobId 2858: Job write elapsed time = 14:45:49, 
 Transfer rate = 2.702 M Bytes/second 
 
 Are you running an automounter for home directories?  That could explain both
 the "Will not descend" messages and also why the warnings vary over time.
 
 __Martin
 
 
 
 I'm not running an automounter.  And as I mentioned, this error is 
 intermittent.  I run this job as a daily incremental without complaint; I get 
 this issue on the weekly differential run.  Regarding the time warning, I 
 corrected it once by forcing an ntp sync on the fd client.  I think ntp must 
 not be running properly over there.
 
 Beginning to feel like it's something with the snapshot (/mnt/foobar) not 
 responding as a normal file system under load and telling bacula-fd that access 
 is delayed/denied/?, and then Bacula interprets the delay as the device being unreachable?
 
 Troy
 
 
 Hi,
 
 Bacula won't recurse into other filesystems unless you explicitly tell it to.
 Look at the onefs option of the FileSet resource:
 
 http://bacula.org/5.0.x-manuals/en/main/main/Configuring_Director.html#8566
 


Thanks for the suggestions. Investigating onefs at your suggestion gave 
me a hint as to a potential fix.  On my client, the snapshot process isn't 
working properly.  The daily unmount is broken, and I have multiple days' (6) 
snapshots mounted in the same location, /mnt/foobar.  I'm going to fix that 
umount issue and see if my problems go away.

Thank you!


Re: [Bacula-users] Bacula hangs waiting on a client

2011-10-11 Thread Joseph Spenner


From: John Drescher dresche...@gmail.com
To: Joseph Spenner joseph85...@yahoo.com
Cc: bacula-users bacula-users@lists.sourceforge.net
Sent: Monday, September 26, 2011 11:31 AM
Subject: Re: [Bacula-users] Bacula hangs waiting on a client

2011/9/26 Joseph Spenner joseph85...@yahoo.com:
 From: Ben Walton bwal...@artsci.utoronto.ca

 Excerpts from Joseph Spenner's message of Fri Sep 23 16:55:32 -0400 2011:

 Storage {
   Name = bacula-va-sd
  SDPort = 9103  # Storage daemon's port
   WorkingDirectory = /opt/bacula/bin/working
   Pid Directory = /opt/bacula/bin/working
   Maximum Concurrent Jobs = 20
 }

 Ok, so then this isn't the issue...that would have been nice and
 easy.  I'm not sure where to look next as I'm relatively new to
 bacula.


Are you using separate pools per client?

John

==
I am only using 1 pool.  I have 7 sata disks as my backup medium.
Is this a suboptimal configuration?


Re: [Bacula-users] Bacula hangs waiting on a client

2011-10-11 Thread John Drescher
 I am only using 1 pool.  I have 7 sata disks as my backup medium.
 Is this a suboptimal configuration?

That question was only to verify that you were not being blocked by
the fact that a single storage device can only load 1 volume at a time
and thus only 1 pool at a time.
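To make that concrete: concurrency has to be allowed at every level before jobs can interleave onto the single loaded volume, e.g. in the Director's Storage resource (a sketch with hypothetical names and address):

```conf
Storage {                          # in bacula-dir.conf
  Name = File
  Address = backup.example.com     # hypothetical
  SDPort = 9103
  Password = "changeme"            # placeholder
  Device = FileStorage
  Media Type = File
  Maximum Concurrent Jobs = 10     # matching limits also needed in the SD
}
```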

John



[Bacula-users] error backing up Exchange 2003

2011-10-11 Thread jbuckley
Performing a full backup of the Windows system with Bacula did remove the 
"Fatal error: HrESEBackupSetup failed" error when backing up Exchange.

I also applied some updates and rebooted the Windows server and now my exchange 
backup works fine.

+--
|This was sent by jbuck...@cendatsys.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[Bacula-users] Encryption keys

2011-10-11 Thread Jon Schewe
Is there any reason (besides good security) that I can't use the same
private key for all Bacula clients? Can I use the same pem file as well?

Jon


Re: [Bacula-users] Encryption keys

2011-10-11 Thread Mark
Hi Jon,

2011/10/11 Jon Schewe jpsch...@mtu.net

 Is there any reason (besides good security) that I can't use the same
 private key for all bacula clients? Can I use the same pem file as well?

 Jon


Works fine for me here... I'm not trying to protect my machines' data from
each other, only to ensure it's encrypted when offsite.  They all use the
same client cert and key.
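For what it's worth, the relevant bacula-fd.conf fragment looks something like this (paths are hypothetical; the master key is optional, but it lets you recover data even if the shared client key is ever lost):

```conf
FileDaemon {
  Name = client1-fd
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = "/etc/bacula/client.pem"       # same cert+key on every client
  PKI Master Key = "/etc/bacula/master.cert"   # public certificate only
}
```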

Regards,
Mark