On 10/07/2010 11:03 PM, Mingus Dew wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
version of MySQL 5, but migrating to PostgreSQL isn't an option at this
time.
I am trying
Hello,
I use Bacula for two clients.
My schedule is:
* FULL on Sunday
* INCREMENTAL on the other days
I would like to know if it is possible to send a weekly report by mail
with the status of the jobs that ran,
something like what I see if I do `list jobs` for client X in bconsole.
thanks,
hOZONE
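[Editor's note: the weekly cycle described above is usually expressed as a Bacula Schedule resource along these lines. This is a sketch; the resource name and run times are illustrative, not from the thread.]

```
Schedule {
  Name = "WeeklyCycle"                      # illustrative name
  Run = Level=Full sun at 23:05             # full backup on Sunday
  Run = Level=Incremental mon-sat at 23:05  # incrementals the other days
}
```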
Hi John
I created a new pool with the settings mentioned above and ran a full job,
but Vol Usage remains at 0.00%. It's nothing important, but everything
works without displaying properly. If anyone has any tips, I'd be glad to hear them.
Thanks
Bruno
2010/10/4 Bruno Gomes da Silva
Bruno,
Not so rude at all :) You've made me think of two questions.
How difficult is it (or what is the procedure for) converting to InnoDB,
and exactly what performance increase will this gain?
Also, you mention PostgreSQL and batch inserts. Does Bacula not use batch
inserts with MySQL by default?
I'm
On Thu, 7 Oct 2010 15:34:45 -0300, Eduardo Júnior said:
On Thu, Oct 7, 2010 at 1:12 PM, Martin Simmons mar...@lispworks.com wrote:
Now my question:
How can I configure my job to continue getting the incremental changes
from server2, without running a full job, i.e., based on the last
Hi
On Fri, Oct 8, 2010 at 12:08 PM, Martin Simmons mar...@lispworks.com wrote:
But this way I would always need to change 'Address' in the Client section.
Is it possible to have multiple 'Address' entries in the Client section,
so that when the first one fails, the second one is used?
No, you can only specify a single Address per Client resource.
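[Editor's note: one common workaround, an assumption on the editor's part and not something Martin suggested, is to define a second Client resource for the same FD under its alternate address and run the job against it when the primary address is unreachable. Names and addresses below are illustrative.]

```
Client {
  Name = server2-fd
  Address = server2.example.com     # primary address (illustrative)
  Password = "fd-password"          # must match the FD's Director resource
  Catalog = MyCatalog
}

Client {
  Name = server2-alt-fd             # same FD, alternate address
  Address = server2-alt.example.com
  Password = "fd-password"
  Catalog = MyCatalog
}
```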
Whether batch insert is used by default with MySQL depends on several factors:
whether MySQL is pthread-safe, and the configure options chosen at build time.
MySQL 4 is obsolete now that 5.0.3 is out (I think there are some good reasons
for that).
Transforming a table to InnoDB is quite straightforward.
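[Editor's note: for what it's worth, the conversion is usually one statement per table. This is a sketch, assuming the default `bacula` catalog database name; File, Filename, and Path are the large catalog tables in this Bacula version. The usual caveat applies: dump the catalog first.]

```shell
# Back up the catalog before touching the storage engine
mysqldump bacula > bacula-backup.sql

# Convert the big tables; File is by far the largest in a Bacula catalog
mysql bacula -e "ALTER TABLE File ENGINE=InnoDB;"
mysql bacula -e "ALTER TABLE Filename ENGINE=InnoDB;"
mysql bacula -e "ALTER TABLE Path ENGINE=InnoDB;"
```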
'Mingus Dew' wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a
compatible version of MySQL 5, but migrating to PostgreSQL isn't an
option at this time.
I am trying to backup to tape a very large
On 10/08/10 15:30, Henrik Johansen wrote:
Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
Henrik,
This is an interesting observation. How does one
determine/set the InnoDB block size?
Sorry for butting in here, but I've been following this thread.
You can't change the InnoDB block size unless you recompile from source, from
what I understand... but that's beside the point.
Using InnoDB
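[Editor's note: on the "determine" half of the question, the compiled-in page size (16 KB by default) can be read back from a running server through the Innodb_page_size status variable, which from what the editor recalls is available on MySQL 5.0 and later with InnoDB enabled.]

```shell
# Report InnoDB's compiled-in page size, in bytes
mysql -e "SHOW STATUS LIKE 'Innodb_page_size';"
```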
Phil Stracchino wrote:
On 10/08/10 15:30, Henrik Johansen wrote:
Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
Yes, quite possible.
Check the examples/reports directory in the source tarball.
I've taken the reports.pl script and tweaked it to do
somethings specific to me. It's very straightforward
and you just kick it off with cron.
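[Editor's note: as an alternative to reports.pl, the exact listing hOZONE asked about can also be mailed straight from cron by piping the command into bconsole. This is a sketch; the paths, client name, and mail address are illustrative.]

```shell
#!/bin/sh
# Mail last week's job listing for one client; run weekly from cron, e.g.:
#   0 7 * * 1  root  /usr/local/bin/bacula-weekly-report.sh
echo "list jobs client=clientX-fd" \
  | /opt/bacula/bin/bconsole -c /opt/bacula/etc/bconsole.conf \
  | mailx -s "Bacula weekly job report" admin@example.com
```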
-John
On Fri, Oct 08, 2010 at 10:38:37AM +0200, hOZONE wrote:
On 10/08/10 17:49, Attila Fülöp wrote:
please see
http://dev.mysql.com/tech-resources/articles/mysql-zfs.html#Set_the_ZFS_Recordsize_to_match_the_block_size
16K is the ZFS recordsize I'm using.
Aha! Thanks, Attila. Exactly what I needed.
--
Phil Stracchino, CDK#2 DoD#299792458
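[Editor's note: to close the loop on Henrik's recordsize advice, with InnoDB's 16 KB pages the adjustment is a single property change on the dataset holding the datafiles. The dataset name is illustrative; recordsize only affects blocks written after the change, so set it before loading the data.]

```shell
# Match the ZFS recordsize to InnoDB's 16K page size
zfs set recordsize=16k tank/mysql/innodb
zfs get recordsize tank/mysql/innodb   # verify
```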