On Thu, Nov 18, 2010 at 10:27 PM, Alan Brown a...@mssl.ucl.ac.uk wrote:
On 13/11/10 04:46, Gary R. Schmidt wrote:
You mean looks increasingly *unlikely* don't you? As InnoDB is the
default in MySQL 5.5...
Yes it is, but take a look at what Oracle's been doing to the other
opensource
On Fri, 19 Nov 2010, Mikael Fridh wrote:
The FUD stops here, this is pointless in the case of (where this
discussion started) restore performance on a MySQL back-end.
In terms of restore performance, you're right. Better optimised queries
would speed things up, but probably not by much (see
On 13/11/10 04:46, Gary R. Schmidt wrote:
You mean looks increasingly *unlikely* don't you? As InnoDB is the
default in MySQL 5.5...
Yes it is, but take a look at what Oracle's been doing to the other
opensource projects it inherited.
It says a lot when core mysql developers fork a new
Hi,
On Fri, 12 Nov 2010, Bob Hetzel wrote:
I'm starting to think the issue might be linked to some kernels or Linux
distros. I have two Bacula servers here. One system is a year and a half
old (12 GB RAM) and has a File table with approx 40 million File
records. That system has had
On 11/12/2010 11:46 PM, Gary R. Schmidt wrote:
Frankly, I'd rather there were reliable connectors and queries available
for Oracle and DB2
My usual conclusion when something does not exist is that nobody [with
the ability to create it] wants them.
rather than this childish prattle over
On Thu, Nov 11, 2010 at 3:47 PM, Gavin McCullagh gavin.mccull...@gcd.ie wrote:
On Mon, 08 Nov 2010, Gavin McCullagh wrote:
We seem to have the correct indexes on the File table. I've run OPTIMIZE
TABLE and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51
Hi,
On Fri, 12 Nov 2010, Mikael Fridh wrote:
On Thu, Nov 11, 2010 at 3:47 PM, Gavin McCullagh gavin.mccull...@gcd.ie
wrote:
# Time: 10 14:24:49
# u...@host: bacula[bacula] @ localhost []
# Query_time: 1139.657646 Lock_time: 0.000471 Rows_sent: 4263403
Rows_examined: 50351037
Mikael Fridh wrote:
Tuning's not going to make any of those 50 million traversed rows
disappear. Only a differently optimized query plan will.
This applies across both mysql and postgresql...
This is an Ubuntu Linux server running MySQL v5.1.41. The MySQL data is
on an MD software RAID 1
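For anyone wanting to see what plan MySQL actually picks here, EXPLAIN on the tree-building join is the place to start. The query below is only an illustrative stand-in for Bacula's real restore query (table and column names follow the Bacula 5.x catalog schema; the JobId list is made up):

```sql
-- Illustrative only: approximates the shape of the restore-tree join.
EXPLAIN
SELECT Path.Path, Filename.Name, File.FileIndex, File.LStat
  FROM File
  JOIN Filename ON Filename.FilenameId = File.FilenameId
  JOIN Path     ON Path.PathId         = File.PathId
 WHERE File.JobId IN (1234, 1235);
-- Watch the "rows" column: tens of millions there means the optimizer
-- is examining far more rows than it sends, as in the slow log above.
```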
'Alan Brown' wrote:
Mikael Fridh wrote:
Tuning's not going to make any of those 50 million traversed rows
disappear. Only a differently optimized query plan will.
This applies across both mysql and postgresql...
This is an Ubuntu Linux server running MySQL v5.1.41. The MySQL data is
on an
From: Gavin McCullagh gavin.mccull...@gcd.ie
Subject: Re: [Bacula-users] Tuning for large (millions of files)
backups?
To: bacula-users@lists.sourceforge.net
Message-ID: 2010144733.gz20...@gcd.ie
Content-Type: text/plain; charset=us-ascii
On Mon, 08 Nov 2010, Gavin McCullagh
Gavin McCullagh wrote:
On Tue, 09 Nov 2010, Alan Brown wrote:
and it still takes 14 minutes to build the tree on one of our bigger
clients.
We have 51 million entries in the file table.
Add individual indexes for Fileid, Jobid and Pathid
Postgres will work with the combined index for
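For reference, the single-column indexes suggested above would be created along these lines (index names are illustrative, and FileId is normally already the primary key, so run SHOW INDEX FROM File first to see what you already have):

```sql
-- Illustrative: add single-column indexes on the File table.
CREATE INDEX file_jobid_idx  ON File (JobId);
CREATE INDEX file_pathid_idx ON File (PathId);
-- On a 51-million-row table each CREATE INDEX takes a while and
-- (with MyISAM) locks the table while it runs.
```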
'Alan Brown' wrote:
Gavin McCullagh wrote:
On Tue, 09 Nov 2010, Alan Brown wrote:
and it still takes 14 minutes to build the tree on one of our bigger
clients.
We have 51 million entries in the file table.
Add individual indexes for Fileid, Jobid and Pathid
Postgres will work with the
Hi,
On Thu, 11 Nov 2010, Alan Brown wrote:
What tuning (if any) have you performed on your my.cnf and how much
memory do you have?
Thus far I haven't spent much time on this and haven't tuned MySQL. The
slow build is an annoyance, but not a killer, so I've not really got around
to it. The
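As a rough illustration of the kind of tuning being discussed, a my.cnf starting point for a dedicated catalog server with plenty of RAM might look like the following. These values are assumptions for illustration, not tested recommendations:

```ini
[mysqld]
# Give InnoDB most of the RAM on a dedicated box (assumed ~12 GB here).
innodb_buffer_pool_size        = 6G
innodb_log_file_size           = 256M
# 2 trades a second of durability for much faster bulk inserts.
innodb_flush_log_at_trx_commit = 2
# Only matters if the catalog tables are still MyISAM.
key_buffer_size                = 512M
```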
Henrik Johansen wrote:
I have had about as much of this as I can take now so please, stop spreading
FUD about MySQL.
Have you used Mysql with datasets in excess of 100-200 million objects?
I have. Our current database holds about 400 million File table entries.
MySQL requires significant
'Alan Brown' wrote:
Henrik Johansen wrote:
I have had about as much of this as I can take now so please, stop spreading
FUD about MySQL.
Have you used Mysql with datasets in excess of 100-200 million objects?
Sure - our current Bacula deployment consists of 3 catalog servers with
the smallest
On Mon, 08 Nov 2010, Gavin McCullagh wrote:
We seem to have the correct indexes on the File table. I've run OPTIMIZE
TABLE and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
I thought I should give some more concrete
On 08/11/10 22:21, Gavin McCullagh wrote:
Right you are
http://wiki.bacula.org/doku.php?id=faq#restore_takes_a_long_time_to_retrieve_sql_results_from_mysql_catalog
There is still an element of "move to PostgreSQL" about it, though
With good reason. I did resist moving to pgsql for quite a while
On Tue, 09 Nov 2010, Alan Brown wrote:
and it still takes 14 minutes to build the tree on one of our bigger clients.
We have 51 million entries in the file table.
Add individual indexes for Fileid, Jobid and Pathid
Postgres will work with the combined index for individual table
Ondrej PLANKA (Ignum profile) wrote:
We have several 10+ million file jobs - all run without problem (backup
and restore).
I am aware of the fact that a lot of Bacula users run PG ( Bacula
Systems also does recommend PG for larger setups ) but nevertheless
MySQL has served us very well so
On Mon, 08 Nov 2010, Alan Brown wrote:
Mysql works well - if tuned, but tuning is a major undertaking when
things get large/busy and may take several iterations.
Some time back there was an issue with Bacula (v5?) which seemed to come
down to a particular query associated (I think) with
Gavin McCullagh wrote:
On Mon, 08 Nov 2010, Alan Brown wrote:
Mysql works well - if tuned, but tuning is a major undertaking when
things get large/busy and may take several iterations.
When we do restores, building the tree takes a considerable time now. I
haven't had a lot of time to
Hi Alan,
On Mon, 08 Nov 2010, Alan Brown wrote:
When we do restores, building the tree takes a considerable time now. I
haven't had a lot of time to look at it, but suspected it might be down to
this issue.
That's a classic symptom of not having the right indexes on the File table.
'Ondrej PLANKA (Ignum profile)' wrote:
Thanks :)
Which type of MySQL storage engine are you using? MyISAM or InnoDB for
large Bacula system?
Can you please copy/paste your MySQL configuration? I mean my.cnf file
Please re-read this thread and you should find what you are looking for.
Thanks,
On Mon, 01 Nov 2010 06:15:18 +0100, Ondrej PLANKA (Ignum profile) wrote:
Thanks :)
Which type of MySQL storage engine are you using? MyISAM or InnoDB for
large Bacula system?
Can you please copy/paste your MySQL configuration? I mean my.cnf file
Thanks, Ondrej.
I would use InnoDB.
Hello Henrik,
what are you using? MySQL?
Thanks, Ondrej.
'Mingus Dew' wrote:
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
'Ondrej PLANKA (Ignum profile)' wrote:
Hello Henrik,
what are you using? MySQL?
Yes - all our catalog servers run MySQL.
I forgot to mention this in my last post - we are Bacula Systems
customers and they have proved to be very supportive and competent.
If you are thinking about doing large scale
Thanks :)
Which type of MySQL storage engine are you using? MyISAM or InnoDB for
large Bacula system?
Can you please copy/paste your MySQL configuration? I mean my.cnf file
Thanks, Ondrej.
Henrik Johansen wrote:
'Ondrej PLANKA (Ignum profile)' wrote:
Hello Henrik,
what are you
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big job yet.
Just
'Mingus Dew' wrote:
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big
Alan Brown wrote:
You are going to hit a big pain point with myisam with that many files
anyway (it breaks around 4 billion entries without tuning), but even
inno will grow large/slow and need a lot of my.cnf tuning
That should be 4 GB - about 50 million entries.
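The 4 GB figure is MyISAM's default data-file limit (a 4-byte data pointer), and it can be raised without switching engines. A sketch, assuming the stock Bacula schema (the MAX_ROWS and AVG_ROW_LENGTH values here are illustrative):

```sql
-- Raising MAX_ROWS makes MyISAM use a wider data pointer, lifting the
-- ~4 GB cap. This rebuilds the table, so it is slow on large catalogs.
ALTER TABLE File MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 120;
-- Check the new ceiling in the Max_data_length column:
SHOW TABLE STATUS LIKE 'File';
```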
Bruno Friedmann wrote:
Rude answer :
If you really want to use Mysql drop the myisam to innodb.
But you don't want to use mysql for that job, just use Postgresql fine tuned
with batch insert enabled.
Seconded - having been through this issue.
You are going to hit a big pain point with
On 12/10/10, Alan Brown (a...@mssl.ucl.ac.uk) wrote:
Bruno Friedmann wrote:
But you don't want to use mysql for that job, just use Postgresql
fine tuned with batch insert enabled.
Seconded - having been through this issue.
I am running Postgresql with batch insert with jobs of around 8
Henrik,
I really appreciate your reply, particularly as a fellow
Bacula-on-Solaris user. I do not have my databases on ZFS, only my Bacula
storage. I'll probably have to tune for local disk.
Thanks very much,
Shon
On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:
On 10/07/2010 11:03 PM, Mingus Dew wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
version of MySQL 5, but migrating to PostgreSQL isn't an option at this
time.
I am trying
Bruno,
Not so rude at all :) You've made me think of 2 questions
How difficult is it (or what is the procedure) to convert to InnoDB, and
what exactly will this gain in performance?
Also, you mention Postgresql and batch inserts. Does Bacula not use batch
inserts with MySQL by default?
I'm
Whether batch insert is enabled by default on MySQL depends on several
factors: whether MySQL is pthread-safe, and the configure options chosen
at build time.
MySQL 4 is obsolete now with 5.0.3 (I think there are some good reasons
for that).
Transforming a table to InnoDB is quite
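The conversion itself is one statement per table, but it rewrites the table in full, so plan for a long run on a large File table. (Whether batch insert was compiled in depends on the options Bacula was built with - the `--enable-batch-insert` configure flag described in the Bacula documentation.) A sketch of the engine conversion:

```sql
-- Illustrative: convert the big catalog tables to InnoDB.
ALTER TABLE File     ENGINE = InnoDB;
ALTER TABLE Filename ENGINE = InnoDB;
ALTER TABLE Path     ENGINE = InnoDB;
-- Verify the Engine column afterwards:
SHOW TABLE STATUS WHERE Name IN ('File', 'Filename', 'Path');
```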
'Mingus Dew' wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a
compatible version of MySQL 5, but migrating to PostgreSQL isn't an
option at this time.
I am trying to backup to tape a very large
On 10/08/10 15:30, Henrik Johansen wrote:
Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
Henrik,
This is an
This is an interesting observation. How does one
determine/set the InnoDB block size?
Sorry for butting in here, but I've been following this thread.
You can't change the InnoDB block size unless you recompile from source,
from what I understand... but that's beside the point.
Using InnoDB
Phil Stracchino wrote:
On 10/08/10 15:30, Henrik Johansen wrote:
Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
On 10/08/10 17:49, Attila Fülöp wrote:
please see
http://dev.mysql.com/tech-resources/articles/mysql-zfs.html#Set_the_ZFS_Recordsize_to_match_the_block_size
16K is the ZFS recordsize I'm using.
Aha! Thanks, Attila. Exactly what I needed.
--
Phil Stracchino, CDK#2 DoD#299792458
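Concretely, matching the dataset that holds the InnoDB data files to the 16K figure above would look something like this (the dataset name is illustrative, and recordsize only affects files written after the change):

```shell
# Set the ZFS recordsize to match InnoDB's 16 KB page size.
zfs set recordsize=16K tank/mysql/innodb
zfs get recordsize tank/mysql/innodb
# InnoDB's page size can be confirmed from the server status:
#   mysql -e "SHOW STATUS LIKE 'Innodb_page_size';"
```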
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible
version of MySQL 5, but migrating to PostgreSQL isn't an option at this
time.
I am trying to backup to tape a very large number of files