> On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:
>>
>> So.. My tips for you:
>>
>> 1) Consider a switch to InnoDB. The performance improvement was dramatic,
>> and it's about SO much more than transactions (which we still don't do)!
>>
> Consider it switched! As soon as I find the way to do so :)
> Are there any changes necessary to his code/queries to use InnoDB?
>

No changes needed to code/queries, except that the daily table
optimize/repair can probably go away... As far as how to do it...
It seems you can (or are forced to) pretty much have daily down time... All
you need is some time to bring the machine down for a quick bounce. First,
edit the my.cnf file to add the InnoDB settings. You have a setting in your
current my.cnf disabling InnoDB; that needs to go away. My InnoDB settings
are below. You probably don't need 40GBytes of disk space for InnoDB, but
you should allow at least 4 x data size (InnoDB keeps the data twice so it
can roll back transactions, and then there are indexes etc.). So decide how
big you want your space to be and make the appropriate changes in my.cnf to
make it happen. Whatever you go with you are stuck with as a minimum; it's
hard to go smaller once it is live.

After you bounce MySQL and InnoDB is available, you will still need some
memory for MyISAM until you change the tables. So maybe set the
sort/read/join buffers to 2M if they aren't already, the Key Buffer to say
250M, and the InnoDB numbers close to mine. After you have moved the tables,
just drop the Key Buffer down to 16M or so. Oh yes, and as I will say below,
drop that query cache; 64M is plenty for you I expect, the rest is being
wasted. Once this is all set up, go ahead and bounce the server so the new
settings take effect.

So now you have InnoDB available... All you need now is to change the data.
The command is:

ALTER TABLE xxx TYPE=InnoDB;

Once again, do NOT change the mysql database. It MUST stay as MyISAM, and it
doesn't affect your performance at any rate. Once this is done you can go
ahead and drop the Key Buffer right down to something tiny (16M or so is
what we have), bounce MySQL again, and you are all set.

>> 2) Drop the query cache to something more practical. A gigabyte is fine
>> if your data is static; if it's not, it's way too much.
>> We use 128MBytes and typically have about a 30% hit rate on the Query
>> cache, the busiest server is showing 80MBytes unused memory in the query
>> cache and a 41% hit rate, and our databases take about 40G of disk space.
>> Remember, having a big query cache doesn't help if it's mostly sitting
>> unused (in fact, if ours are still sitting with 80M free in a week I'll
>> drop them all to 64MBytes).
>>
> we have an average of ~15-20%, with times sustaining 30+%
>

Errmm... The stats you included say that the Query Cache is WAY out of
control. There's like a gigabyte of unused cache space there.

>> 3) Give lots of memory to InnoDB, I'll share my settings below.
>>
> Thank You!
>
>> 4) Take most of the non-InnoDB memory settings and drop them down real
>> low. InnoDB does well on its own, and if you convert all tables you don't
>> need to leave much in the way of resources for MyISAM.
>>
> ok
>
>> 5) Turn on and use the slow query log (and if need be change the time
>> needed to qualify as a slow query; the default 10 seconds is a lifetime).
>> You may not code the queries yourself, but you can identify the queries
>> that are causing problems, and from there you can advise the client on
>> changes to the database structure (indexes etc.) or at least tell him
>> exactly what the problem queries are.
>>
> The slow log has helped us a lot in the past... with the current slow
> log settings, only about 0.1% are slow queries: 3K out of 4 million in the
> past 18 hours.
> Currently the time appears to be set at 2 (from SHOW VARIABLES:
> slow_launch_time 2).
>
>> 6) Go get MyTOP from Jeremy Zawodny at
>> http://jeremy.zawodny.com/mysql/mytop/ - personally I like version 1.3,
>> but that may just be what I am used to... You may not be able to control
>> the coding part, but you can at least monitor the server, see what it's
>> up to, and quickly and easily spot problems.
>>
> Great tool.. only recently started using it.
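The hit rates quoted above come straight from the query cache counters. A
quick way to judge whether your cache is oversized (a sketch using the
standard SHOW STATUS counters of that MySQL era):

```sql
-- Query cache counters: Qcache_hits, Qcache_free_memory, etc.
SHOW STATUS LIKE 'Qcache%';
-- SELECTs that were actually executed (i.e. missed the cache):
SHOW STATUS LIKE 'Com_select';

-- Hit rate = Qcache_hits / (Qcache_hits + Com_select).
-- If Qcache_free_memory stays close to the configured query_cache_size
-- for days on end, the cache is oversized and that RAM is better spent
-- on the InnoDB buffer pool.
```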
>> 7) If you decide to stay with MyISAM and not InnoDB, then you will want
>> as much memory as you can in the Key Buffer while leaving some space in
>> the sort/read/join buffers. I'd up the sort/read/join buffers to maybe
>> 10MBytes, or even 20MBytes, even if you need to drop the Key Buffer to
>> 1500M to give you the space for the others. We got OKish results on
>> MyISAM with the larger sort/read/join buffers - InnoDB made all the
>> difference though.
>
> I've only gone as high as 6M on those before.
>
>> Before giving you our settings I do want to point out one thing... We
>> haven't fine-tuned the memory settings since we did the G5 switch. At the
>> time, the machines needed to be up quickly, and when it didn't work
>> correctly with my original settings I had to make adjustments to get it
>> to run at all. Ideally I want InnoDB holding about 3GBytes of RAM; I'll
>> make adjustments next month when I can play on an offline server and
>> figure out what works best.
>>
> We've got 4G and dual 2GHz like you.. and it appears that the G5 may be
> best used as a dedicated MySQL box, so whatever it can best use of the 4G,
> the better.
>
>> This one worked and got the machines up inside the maintenance window, so
>> I went with it (we get 2 hours a week max., and before we can touch the
>> database servers we have to bring down 120 application servers, and we
>> have to finish with the database servers in enough time to bring up the
>> 120 application servers before the two hours is over). When we get the G5
>> XServes I will have some more time to run tests on just how far I can
>> push the InnoDB Buffer Pool... As you can understand, we don't like to
>> bring down our servers for this type of testing. Clearly our performance
>> isn't too bad or I would have focused on this sooner.
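Tip 7's stay-on-MyISAM fallback, expressed as a my.cnf fragment (a sketch
using the numbers suggested above, in the set-variable syntax of the era -
not a tested configuration, and only relevant if you don't convert to
InnoDB):

```ini
# MyISAM-oriented tuning: big key buffer, roomier per-thread buffers.
set-variable = key_buffer_size=1500M
set-variable = sort_buffer_size=10M
set-variable = read_buffer_size=10M
set-variable = read_rnd_buffer_size=10M
set-variable = join_buffer_size=10M
```

Note that the sort/read/join buffers are allocated per connection, which is
why pushing them from 10M toward 20M may force the key buffer down to make
room.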
>>
>> Best Regards,
>
> Bruce, unfortunately, while we would love to have similar down times to
> that, we obviously have overloads on a daily basis, bringing the machine
> down, or bogging it out, for vast periods of the day... This is only
> recent (the past 2-3 months); before that, it bogged out only a couple of
> times a day, and the slowness was at least functional - though that's a
> very relative term, especially when dealing with users who may have
> shorter attention spans.
>
> How are you spanning your queries over multiple DB servers? And are all
> writes being done on one master server, or have you found a way to do
> two-way replication? Currently we have only the main appserver
> (apache2/php/thttpd), and are trying a secondary apache2/php server (dual
> PIII/850/2G) which doesn't appear to be handling the load well at all...
> We're looking, obviously, at adding several front-end appservers, though
> we want to ensure the G5 will be able to handle its job if there are 5-6+
> frontends on it.
>
> Here's a few stats from today. If you see anything oddball, please let me
> know. I see the Qcache_free_memory is awfully large; I assume that means
> it's being tremendously wasted.
>

The stats say that the Query Cache is WAY too big, but I predicted that. As
for the slowness, I can't address the application side, but MySQL shouldn't
be the cause of problems with this sort of hardware and server load.

On the rest of the stats, my comments: the number of queries isn't that
high - this server should be more than enough, properly tuned. The number of
connections seems high; that's an application thing, and I can only assume
it's efficient in how it manages its database connections. My stats show 47K
connections on an uptime of a week; yours is at 242K connections on less
than a day.
My server typically has a little less than 400 connections actually present
at any time, and if I have 10 threads running (including replication etc.) I
start to get nervous and look for a problem (and pagers go off if we get
over 20 threads running for more than 30 seconds). The number of times your
connections change database seems high too; there are as many change
database commands as selects. Again, this is likely an application thing,
but might be an area that could be improved.

As to your replication question... Replication is key to us. We run our
servers in matched pairs: server A uses server B as a master and vice versa.
Both servers have the line "log-slave-updates" in the my.cnf so that the
binary logs include changes from the other server as well as from itself
(which means we could in theory run a circle and have more than two servers
in a loop - that proved a little unreliable and we prefer the matched pairs
approach). That means that, amongst other things, we can (and do) have a
third (smaller) server replicating from server A, which gets all the data
from servers A and B. We use that third server mainly for reports: all the
stats from our apps are kept in with the rest of the data, and running the
page view reports and so on for each app was creating slowness - now that
can take place offline, as it were, without affecting production machines.

We found we sometimes encountered problems with this two-way replication
where a duplicate entry would exist in the database and that would cause
replication to stop. We have this line in our my.cnf to take care of that:
"slave-skip-errors=1062" - it ignores errors caused by trying to create a
duplicate record in a table where a primary key says you can't. All other
errors still cause replication to stop, but we monitor that, and bells and
sirens go off when replication stops. So the data is replicated both ways
between the server pairs.
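One side of the matched pair described above might look like this in my.cnf
(a sketch; the server-id is illustrative, and it assumes binary logging is
enabled with log-bin, which the abbreviated config below doesn't show):

```ini
# Server A of a master-master pair; server B is the mirror image with its
# own server-id, slaving from A.
server-id = 11
log-bin                     # write a binary log for B to replicate from
log-slave-updates           # binlog also carries changes replicated in from B
slave-skip-errors = 1062    # skip duplicate-key errors; all other errors
                            # still stop replication (and set off the alarms)
```

Each server is then pointed at its partner as a master, so every write made
on either side ends up on both.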
We then have two instances of the application running, each using a
different database server, each running on a different application server.
The load balancers in our case send all traffic to one application server...
If it goes down, the load balancer sends traffic to the other. If a database
server goes down (hey, it could happen), all the application servers using
it report as down and the load balancers send traffic to the other
application server in each pair. It's all about redundancy.

The other thing you may want to consider is going out and getting a copy of
InnoDB Hot Backup (innodb.com). We run it daily, backing up the data off
both pairs of servers. It doesn't affect performance too much if you keep
compression down or off.

More of my my.cnf files are below :-)

Best Regards, Bruce

Abbreviated my.cnf:

basedir=/usr/local/mysql
datadir=/mysqldata
log-slave-updates
server-id=11
slave-skip-errors=1062
set-variable = lower_case_table_name=0
set-variable = long_query_time=10
set-variable = max_connections=1000
set-variable = key_buffer_size=16M
set-variable = table_cache=4096
set-variable = sort_buffer_size=2M
set-variable = read_buffer_size=2M
set-variable = read_rnd_buffer_size=2M
set-variable = thread_cache_size=32
set-variable = thread_concurrency=8
set-variable = myisam_sort_buffer_size=2M
set-variable = max_allowed_packet=20M
set-variable = query_cache_size=128M
innodb_data_home_dir = /mysqldata
innodb_data_file_path = ibdata01:2000M;ibdata02:2000M;ibdata03:2000M;<truncated here, you get the idea>
# The line above is key. In our case it goes from ibdata01 to ibdata20 -
# it essentially makes 20 2GByte data files for InnoDB to use. The warning I
# have for you is to be careful when working with this line: cut and paste
# from the OSX terminal window tends to take it as 5 lines of text, and
# clearly that will be an issue. You need to make sure this is all on one
# line whenever you work with it... Back to the conf file...
innodb_log_group_home_dir = /mysqldata
innodb_log_arch_dir = /mysqldata
set-variable = innodb_buffer_pool_size=1500M
set-variable = innodb_additional_mem_pool_size=256M
set-variable = innodb_log_files_in_group=2
set-variable = innodb_log_file_size=250M
set-variable = innodb_log_buffer_size=20M
innodb_flush_log_at_trx_commit=0
set-variable = innodb_lock_wait_timeout=30

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]