On Wed, Oct 05, 2011 at 02:37:37PM +0100, Rory Campbell-Lange wrote:
> 
> I've been using non-batch insertion with postgres (following your dare, I 
> think, Phil) for about a year. Backups are only about 8TB, but it works 
> extremely well for us. 
> 

Hi folks,

thanks for your recommendations and thoughts. I've also noticed long
wait times for jobs stuck in the "Dir inserting attributes" status, and
it looks like my new build (5.0.3) was configured without batch
insertion enabled by default.
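
For anyone in the same situation: as far as I can tell, batch insertion
is a compile-time option, so a rebuild from source along these lines
should enable it (a sketch only -- paths and the PostgreSQL/MySQL flags
will depend on your own setup):

```shell
# Rebuild Bacula 5.0.3 with batch attribute insertion enabled.
# Adjust --with-mysql / --with-postgresql and the prefix to your site.
cd bacula-5.0.3
./configure --enable-batch-insert --with-mysql --prefix=/opt/bacula
make && make install
```

The configure summary printed at the end should then report batch
insert as enabled; that's the quickest way to confirm before building.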

I'm still looking for MySQL optimizations. Or do you think it best to
leave my.cnf at its default values (we don't do many restores) and let
the OS use the remaining RAM for the filesystem buffer cache instead?
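
In case it helps the discussion, here is the kind of my.cnf fragment
I've been experimenting with -- a sketch only, with values that would
need tuning to the machine's RAM and the catalog's storage engine
(key_buffer_size matters for MyISAM tables, innodb_buffer_pool_size for
InnoDB):

```ini
[mysqld]
# MyISAM index cache (only relevant if the catalog tables are MyISAM)
key_buffer_size         = 256M

# InnoDB buffer pool (only relevant for InnoDB catalog tables);
# commonly sized to a large share of RAM on a dedicated DB host
innodb_buffer_pool_size = 2G

# Trade some durability for faster attribute inserts
# (1 = safest, 2 = flush to OS once per second)
innodb_flush_log_at_trx_commit = 2

# Per-connection sort buffer, used by the big catalog queries
sort_buffer_size        = 4M
```

If restores are rare, leaving most of this at the defaults and giving
the RAM to the filesystem cache may well be the simpler option -- that
was really the question.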

All the best, Uwe 
-- 
NIONEX --- Ein Unternehmen der Bertelsmann AG



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
