So should we always use InnoDB over BerkeleyDB? I was
under the impression Berkeley was faster and better at
handling transactions.

Dan

-----Original Message-----
From: Bruce Dembecki [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 28, 2004 11:01 AM
To: [EMAIL PROTECTED]
Subject: Re: Memory Problems on a G5/OSX/MySql4.0.17


> I don't think there would be any benefit to using InnoDB,
> at least not from a transaction point of view

For the longest time I was reading the books and listening
to the experts, and all I heard was that InnoDB is great
because it handles transactions. Having little interest in
transactions per se, I pretty much tuned things out whenever
people mentioned InnoDB.

One day when talking to some MySQL AB folks they asked why I
wasn't using InnoDB... I kind of looked at them blankly and
replied that I don't need transactions, and they looked back
as if I were mad.

Turns out InnoDB is far better at handling large databases
than MyISAM; we had a massive (and I do mean massive)
increase in performance just by switching to InnoDB. It uses
a little more disk space, but it's worth it, and with a
5GByte database and a G5 server you have room to spare, even
if you only got the "smaller" disks.

InnoDB is a major thing for us now; everything is InnoDB. If
an engineer complains that something they have built is
running slowly, it usually turns out they created a new
table and didn't make it InnoDB. The fix is easy and quick.
I also suspect that you could do away with that nightly
table repair that ties up the machine for hours at a time if
you were using InnoDB.
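
For what it's worth, the conversion itself is a one-liner. A
sketch (table and database names here are hypothetical; on
4.0.x the keyword is TYPE, with ENGINE arriving as a synonym
in later releases):

ALTER TABLE some_table TYPE = InnoDB;

# SHOW TABLE STATUS reports each table's type, handy for
# spotting stragglers that are still MyISAM:
SHOW TABLE STATUS FROM some_database;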

We have 4 G5 towers serving MySQL for us, all DP2GHz
machines with 4GBytes of RAM. If your data is changing
rapidly (and from your samples it appears most pages include
some sort of insert), you will see limited benefit from the
query cache: every time a table's data changes in any way,
every query in the query cache that uses that table is
dumped. In February we are adding 2 G5 XServes to the mix...
These are for new projects; the current servers are handling
their loads fine.

On the disk side we got the dual 250GByte drives and
mirrored them for redundancy; speed isn't an issue as far as
we can tell.

We chose to replace our old database servers with G5s. The
old machines were quad processor Sun boxes, and one was an
8 CPU machine. The G5s left them all for dead in terms of
performance, although I'd prefer a couple of extra
processors: something inside me still feels better knowing
that when a process goes AWOL it's not holding up 50% of the
server's resources. The application servers are still
typically Sun, although new ones won't be.

We average about 140 queries per second per machine (of
course the load isn't that well distributed... but it gives
you an idea), and typical high points are about 400 - 500
qps on any given machine without stressing the machines
(replication catch-up can see 1500 - 2000 queries per
second, but that's not so common and of course is mostly
inserts).

Before we did the upgrade to 4.0.17 during last Friday's
maintenance window, we were over 1.5 billion queries total
for the 28 days the machines had been up.

So.. My tips for you:

1) Consider a switch to InnoDB; the performance gain was
dramatic, and it's about SO much more than transactions
(which we still don't do)!

2) Drop the query cache to something more practical. A
gigabyte is fine if your data is static; if it's not, it's
way too much. We use 128MBytes and typically have about a
30% hit rate on the query cache; the busiest server is
showing 80MBytes unused memory in the query cache and a 41%
hit rate, and our databases take about 40G of disk space.
Remember, a big query cache doesn't help if it's mostly
sitting unused (in fact, if ours are still sitting with 80M
free in a week I'll drop them all to 64MBytes).
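
You can watch this yourself from the server's status
counters. A rough sketch (numbers like the 30% and 41% hit
rates above come straight out of these):

SHOW STATUS LIKE 'Qcache%';
SHOW STATUS LIKE 'Com_select';

# Rough hit rate: Qcache_hits / (Qcache_hits + Com_select).
# Qcache_free_memory shows how much of the cache sits
# unused, and Qcache_lowmem_prunes shows how often entries
# are evicted because the cache is too small.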

3) Give lots of memory to InnoDB; I'll share my settings
below.

4) Take most of the non-InnoDB memory settings and drop them
down real low. InnoDB does well on its own, and if you
convert all your tables you don't need to leave much in the
way of resources for MyISAM.

5) Turn on and use the slow query log (and if need be change
the time needed to qualify as a slow query; the default 10
seconds is a lifetime). You may not code the queries
yourself, but you can identify the queries that are causing
problems, and from there you can advise the client on
changes to the database structure (indexes etc.) or at least
tell them exactly what the problem queries are.
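
In my.cnf it's only a couple of lines. A sketch (the log
path is hypothetical, and pick whatever threshold you decide
counts as slow):

# in the [mysqld] section
log-slow-queries = /var/log/mysql/slow.log
set-variable     = long_query_time=2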

6) Go get MyTOP from Jeremy Zawodny at
http://jeremy.zawodny.com/mysql/mytop/ - personally I like
version 1.3, but that may just be what I am used to... You
may not be able to control the coding part, but you can at
least monitor the server, see what it's up to, and quickly
and easily spot problems.
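
Running it is straightforward, something like this (user,
password, and database names are placeholders; check mytop's
own docs for the full option list):

mytop -u dbuser -p dbpass -d mydb -s 5
# -s sets the refresh delay in seconds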

7) If you decide to stay with MyISAM and not InnoDB, then
you will want as much memory as you can get in the key
buffer while leaving some space in the sort/read/join
buffers... I'd up the sort/read/join buffers to maybe
10MBytes, or even 20MBytes, even if you need to drop the key
buffer to 1500M to give you the space for the others. We got
OKish results on MyISAM with the larger sort/read/join
buffers - InnoDB made all the difference though.
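
A sketch of what that might look like in my.cnf, using the
numbers from this tip (note the sort/read/join buffers are
allocated per connection, so watch out if your
max_connections is high):

set-variable = key_buffer_size=1500M
set-variable = sort_buffer_size=10M
set-variable = read_buffer_size=10M
set-variable = read_rnd_buffer_size=10M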

Before giving you our settings I do want to point out one
thing... We haven't fine-tuned the memory settings since we
did the G5 switch. At the time I was bringing the machines
up they needed to be up quickly, and when things didn't work
correctly with my original settings I had to make
adjustments to get them to run at all. Ideally I want InnoDB
holding about 3GBytes of RAM; I'll make adjustments next
month when I can play on an offline server and figure out
what works best.

This configuration worked and got the machines up inside the
maintenance window, so I went with it (we get 2 hours a week
max, and before we can touch the database servers we have to
bring down 120 application servers, and we have to finish
with the database servers in enough time to bring the 120
application servers back up before the two hours is over).
When we get the G5 XServes I will have some more time to run
tests on just how far I can push the InnoDB buffer pool...
As you can understand, we don't like to bring down our
servers for this type of testing. Clearly our performance
isn't too bad, or I would have focused on this sooner.

Best Regards, Bruce



Here are our memory related settings from our G5s:

set-variable    = max_connections=1000
#
# Remember some memory is allocated to each connection,
# so this can be a factor

set-variable    = key_buffer_size=16M
set-variable    = table_cache=4096
set-variable    = sort_buffer_size=2M
set-variable    = read_buffer_size=2M
set-variable    = read_rnd_buffer_size=2M
set-variable    = thread_cache_size=32
set-variable    = myisam_sort_buffer_size=2M
set-variable    = query_cache_size=128M
#
# Really here the query cache is the main thing, we tend
# to leave small buffers for the rest for those times when
# we use a MyISAM table, like when an engineer forgets to
# make a table InnoDB for example (and remember to ALWAYS
# leave the mysql database as MyISAM tables)
#

set-variable = innodb_buffer_pool_size=1500M
set-variable = innodb_additional_mem_pool_size=256M
set-variable = innodb_log_buffer_size=20M
innodb_flush_log_at_trx_commit=0
#
# I include this last one here because it really helped us,
# it's not memory related, but it made a big performance
# difference in our case - check innodb.com for what it
# does and decide if it will help your situation
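# (For reference: with innodb_flush_log_at_trx_commit=0 the
# log buffer is written and flushed to disk roughly once a
# second instead of at every commit - faster, but a crash
# can lose the last second or so of transactions.)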


