Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-15 14:26 GMT+02:00 Antonio Fernández Pérez 
antoniofernan...@fabergroup.es:

 Hi,

 I have some tables in my server database that are too big and produce
 some slow queries, even with the correct indexes created.

 For my application, it's necessary to have all the data, because we run an
 authentication process for RADIUS users (AAA protocol) to determine
 whether a user can navigate the Internet (depending on the time of all his
 sessions).

 So, with 8GB of data in one table, what would you advise?
 Fragmentation and sharding are discarded because we are working with disk
 arrays, so they don't apply. Another option is to delete rows, but in this
 case, I can't. On the other hand, maybe the only possible solution is to
 increase the resources (RAM).


Adding more RAM will only save you for a few weeks/months, until the data
no longer fits in memory. You will face the same problem again soon
(if your data is still growing).
There will be a point where you just can't buy more and better hardware
(actually you kind of can, but you will spend loads of money and might end
up with nice servers just doing nothing, because supporting more memory in
their motherboards means upgrading those too).

You should give your application some thought and start considering
noSQL/table sharding/partitioning/archiving.
Maybe it is too late, but before needing another hardware upgrade, you
should think about a solution that allows you to keep growing
without spending everything on hardware (unless you have unlimited money).
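For example, a sketch of range-partitioning a hypothetical RADIUS
accounting table by month (table and column names are invented here, and
the partitioning column must be part of every unique key of the table):

mysql> ALTER TABLE radacct
    ->   PARTITION BY RANGE (TO_DAYS(acctstarttime)) (
    ->     PARTITION p201404 VALUES LESS THAN (TO_DAYS('2014-05-01')),
    ->     PARTITION p201405 VALUES LESS THAN (TO_DAYS('2014-06-01')),
    ->     PARTITION pmax VALUES LESS THAN MAXVALUE
    ->   );

Old months can then be dropped or archived cheaply with ALTER TABLE ...
DROP PARTITION instead of slow row-by-row deletes.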

Good luck!
Manuel.


Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-19 11:49 GMT+02:00 Johan De Meersman vegiv...@tuxera.be:


 - Original Message -
  From: Manuel Arostegui man...@tuenti.com
  Subject: Re: Big innodb tables, how can I work with them?
 
  noSQL/table sharding/partitioning/archiving.

 I keep wondering how people believe that NoSQL solutions magically don't
 need RAM to work. Nearly all of them slow down to a crawl, many even worse
 than an SQL database, as soon as the full or working set no longer fits in
 memory, too.

 Don't get me wrong - they have certain benefits and definite usecases, but
 it's time people stop presenting them as a magic bullet. They require
 understanding and work, just like any other technology.


I was thinking about their distributed nature, as it might speed up reads :-)
We do have a huge noSQL cluster here at work and it certainly needs a lot of
RAM.

Manuel


Re: mysql Access denied error

2014-05-05 Thread Manuel Arostegui
2014-05-05 4:17 GMT+02:00 EdwardKing zhan...@neusoft.com:

 I used mysql to create a database and granted rights to a user
 hadooptest, then I used hadooptest to log in to mysql and use the database,
 but it failed. Why is an error raised after I granted the rights? How can I
 solve it? Thanks.

 My operation is as follows:

 [hadoop@master ~]$ mysql -h localhost -u root -p
 Enter password:
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 3
 Server version: 5.5.37-log MySQL Community Server (GPL)

 Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights
 reserved.

 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.

 Type 'help;' or '\h' for help. Type '\c' to clear the current input
 statement.

 mysql> create database hadooptest;
 mysql> grant all on hadooptest.* to 'hadoopuser'@'%' identified by
 '123456';
 Query OK, 0 rows affected (0.00 sec)

 mysql> flush priviledges;
 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near 'priviledges' at line 1
 mysql> flush privileges;
 Query OK, 0 rows affected (0.00 sec)

 mysql> quit;
 Bye

 Then I use hadoopuser with password 123456:
 [hadoop@master ~]$ mysql -u hadoopuser -p
 Enter password:
 ERROR 1045 (28000): Access denied for user 'hadoopuser'@'localhost'
 (using password: YES)



Hello,

I assume MySQL is listening on 3306, so try this:

mysql -u hadoopuser -p -h localhost (or 127.0.0.1)

% doesn't match localhost, so if you don't specify a host you will be
attempting to connect via the Unix socket.
If you don't want to specify -hlocalhost all the time, just do the grant
with @localhost instead of @%.
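To verify which kind of connection you ended up with, the mysql client's
built-in status command shows it (a generic check, not specific to this
setup):

mysql> status
...
Connection:             Localhost via UNIX socket

(you will see "127.0.0.1 via TCP/IP" instead when the % grant is the one
matching).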

Hope this helps
Manuel.


Re: mysql Access denied error

2014-05-05 Thread Manuel Arostegui
2014-05-05 10:00 GMT+02:00 Reindl Harald h.rei...@thelounge.net:



 Am 05.05.2014 08:34, schrieb Manuel Arostegui:
  % doesn't match localhost so if you don't specify it you will be
  attempting to connect via Unix Socket.
  If you don't want to specify -hlocalhost all the time, just do the grant
  with @localhost instead of @%

 nonsense

 % matches *any host*


Do the test yourself.


Re: mysql Access denied error

2014-05-05 Thread Manuel Arostegui
2014-05-05 10:57 GMT+02:00 Reindl Harald h.rei...@thelounge.net:



 Am 05.05.2014 10:19, schrieb Manuel Arostegui:
  2014-05-05 10:00 GMT+02:00 Reindl Harald h.rei...@thelounge.netmailto:
 h.rei...@thelounge.net:
 
  Am 05.05.2014 08:34, schrieb Manuel Arostegui:
   % doesn't match localhost so if you don't specify it you will be
   attempting to connect via Unix Socket.
   If you don't want to specify -hlocalhost all the time, just do the
 grant
   with @localhost instead of @%
 
  nonsense
 
  % matches *any host*
 
  Do the test yourself

 i don't need to test such basics since i have been working
 as a mysql administrator for the last 11 years and am currently
 responsible for some hundred databases heavily using
 host-specific permissions


Good for you.
I might be working somewhere bigger than that.

Here is what I am talking about:


mysql> grant all on hadooptest.* to 'hadoopuser'@'%' identified by
'123456'; flush privileges;
Query OK, 0 rows affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

mysql>


# mysql -uhadoopuser -p123456
ERROR 1045 (28000): Access denied for user 'hadoopuser'@'localhost' (using
password: YES)

# mysql -uhadoopuser -p123456 -hlocalhost
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 392245
snip


---


mysql> drop user 'hadoopuser'@'%'; flush privileges;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

mysql> grant all on hadooptest.* to 'hadoopuser'@'localhost' identified by
'123456'; flush privileges;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

# mysql -uhadoopuser -p123456
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 392191
snip


I have been reading your impoliteness on this list for years. You are well
known on this (and other) lists for being like that. So I am not going to
start a fight on the list, and this will be my last message in this thread.
Hopefully I have helped the original poster with my point.

Good luck with your hundreds of databases.

Manuel.


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Manuel Arostegui
2014-04-17 11:11 GMT+02:00 Ajay Garg ajaygargn...@gmail.com:

 On Thu, Apr 17, 2014 at 2:28 PM, Reindl Harald h.rei...@thelounge.net
 wrote:

 
 
  Am 17.04.2014 10:55, schrieb Ajay Garg:
   I do understand the meaning of Unix sync function.
  
   So, you mean to say that flushing and syncing are same, in the
  context of MySQL?
 
  please get rid of top-posting and reply-all
 
  a flush without a sync is "dear operating system, may i ask you to write
  that to disk if you find the time to do so", while a sync is "write that
  to disk", without the nice asking
 

 Thanks Reindl, that clears it up !!


Keep in mind that if you are using a HW RAID controller (and a BBU), the
concepts of write-back and write-through are also important to bear in mind
while thinking about how and when your data is written to disk.
If you set sync_binlog and trx_commit to 1, it might not necessarily be
committing every transaction to disk, as that can depend on how you've
configured your controller.

If you have write-through configured, the data will be written to disk and
cache at the same time (this has a performance drawback).
If you have write-back configured, the data will be written to cache (and
then to disk, but _not_ at the same time), so you might lose data if there's
a power failure. With write-back, at the OS level your data appears to have
been sync'ed to disk even though this is not totally true.
write-back is better for performance, but you have to keep in mind that you
can lose data (not common, but you're not 100% safe).
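For example, on an LSI controller with the MegaCli tool installed (assuming
that's the controller you have; other vendors ship similar utilities), you
can check the current cache policy of the logical drives:

# MegaCli -LDInfo -Lall -aALL | grep -i 'cache policy'
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU

Note the common "No Write Cache if Bad BBU" setting: the controller silently
falls back to write-through while the battery is relearning or failing, so
write performance can drop suddenly without any MySQL-side change.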

Hope this helps
Manuel.


Re: Data masking for mysql

2014-04-15 Thread Manuel Arostegui
2014-04-15 8:52 GMT+02:00 reena.kam...@jktech.com:

 Hi,

 I need to do data masking to sensitive data exists in mysql db. is there
 any data masking tool available for mysql with linux platform.
 if yes... please provide the links.
 else... please suggest other alternatives for this requirement.

 I look forward to hearing from you.


I don't think there's a generic tool for it.
It depends on your data, format, etc.
I.e., masking a credit card record is not the same as masking an email or a
phone number.

Since it depends on your data, you might want to do it at the application
level.

Good luck
Manuel.


Re: Things to be considered before/after the OS patch or upgrade

2014-04-15 Thread Manuel Arostegui
2014-04-15 12:05 GMT+02:00 Radoulov, Dimitre cichomit...@gmail.com:

 Hi Reindl,


 On 15/04/2014 11:32, Reindl Harald wrote:

 Am 15.04.2014 11:22, schrieb Radoulov, Dimitre:

 we're going to patch our MySQL servers. Most of them are RHEL/OEL 5.x
 and are going to be upgraded  to 5.x+n (for
 example 5.5 to 5.10).
 I don't seem to find anything about this in the official documentation
 (verifying the patch compatibility,
 relinking etc.).

 P.S. We're using the rpm distributions from dev.mysql.com (the official
 website)

 RHEL is a LTS enterprise distribution
 there are no ABI breakages, that's why you use RHEL or a clone at all


 Thank you! So I suppose we need to, or actually we *must*, upgrade the
 binaries only in case of a release upgrade (RHEL 5 to 6, for example),
 i.e. when a different binary distribution exsists:
 MySQL-5.6.17-1.rhel5.x86_64.rpm to MySQL-5.6.17-1.el6.x86_64.rpm.



Hello,

Upgrades are always scary. Don't you have another (staging) server with the
same hardware/packages, so you can try there first before doing it in
production?

Manuel.


Re: Locking a Database (not tables) x

2014-03-22 Thread Manuel Arostegui
2014-03-21 18:42 GMT+01:00 David Lerer dle...@univision.net:

 Frequently, we import a production dump that contains only 1 or 2
 databases into one of our QA instances that contains many more databases.
 (i.e. database being a schema or a catalogue).
 At the beginning of the import script, we first drop all objects in the QA
 database so that it will be a perfect match (object wise) to production.

 Is there an easy way to lock the whole database for the duration of the
 import - so that no developers can update the database?
 Obviously, I can revoke permissions, but I was wondering whether there is
 a better approach.



Hello,

One more idea:

Assuming you can stop your DB: restart the database so it only listens on
the Unix socket or on a different IP (an alias of your current IP could
work), connect through that, do all your stuff, and then switch it back to
its original port and IP.
Obviously I am assuming your developers connect remotely (through port 3306
or whichever you use).

Manuel.


Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions

2014-02-12 Thread Manuel Arostegui
2014-02-12 12:32 GMT+01:00 Lukas Lehner webleh...@gmail.com:

 Hi Antonio

 All tables use InnoDB. The size is 27 GB (not yet in prod). I guess in
 prod it will quickly reach 80GB.


Depending on how your application is going to use MySQL resources you will
need to tweak some things (and not only MySQL).
If it is going to be CPU bound, IO bound etc., there are different scenarios.

Anyways, some general things to take a look at (see the my.cnf sketch after
this list):

- Use file-per-table if possible. This won't give you extra performance,
but it will help if you run into disk space issues or need future table
migrations.
- Make sure you have trx_commit and sync_binlog disabled (make sure you
understand what this means and what problems you could have during an
unexpected crash).
- If you're expecting a lot of temporary tables (filesorts), make sure
tmpdir is on a fast disk.
- Use NUMA memory handling.
- Make sure you test different disk schedulers (depending on whether you
have RAID and which kind) and see how they perform.
- You might want to take a look at SMP IRQ affinity and see how it could
impact your system.
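A minimal my.cnf sketch of the MySQL-side items above (values are
illustrative only, not recommendations, and the tmpdir path is
hypothetical):

[mysqld]
innodb_file_per_table = 1
# only if you can afford losing ~1 second of commits on a crash:
innodb_flush_log_at_trx_commit = 0
sync_binlog = 0
# point temporary tables at a fast disk:
tmpdir = /mnt/fastdisk/mysqltmp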

Manuel.


Re: replication question replacing the master

2014-01-18 Thread Manuel Arostegui
2014/1/17 Richard Reina gatorre...@gmail.com

 I have 3 relay MySQL database servers on my small office LAN backing up a
 master and 3 more machines backing up each relay (1 each). They are all
 replicating all databases and all tables. The master although running fine
 is almost eight years old. I'm thinking it's probably time to make one of
 the relays the master, but I've never done this before.

 I want the new master to have the IP address of the old master 192.168.0.1
 . To make the change I was going to take the master off the LAN and
 shutdown mysql on all the machines, change the IP address on the chosen
 relay to that of the master 192.168.0.1, then restart mysql on all the
 machines. I always refer to the machines by their IP addresses and never by
 their hostnames. Once I successfully make the change I was planning on
 making the old master a relay since it is still working fine.

 Will this plan work ok? Is there a better or easier way?



If the three machines are in sync and have consistent data, I don't see the
need to stop MySQL:

- Stop whatever writes to your current master.
- Once you are completely sure there are no writes on your current master,
set it to read_only = ON.
- On the slave which will become the master, get the logfile and current
position with: show master status;
- Set the new IP on the new master.


Using the position taken on the new master, go to the slave machines and run:

stop slave; change master to master_host='IP',
master_log_file='whatever_file_name_you_got',
master_log_pos=whatever_number_you_got,
master_user='replication_or_whatever_you_have',
master_password='replication_or_whatever_you_have'; start slave;

- Set read_only = OFF on your new master.
- Start your application so you can start getting writes again.

As soon as you get writes, if you run show master status; on the new
master you should see the position moving forward.

I find this faster than any other approach.

Hope this helps
Manuel.


Re: ERROR 2013 (HY000): Lost connection to MySQL server during query

2014-01-08 Thread Manuel Arostegui
2014/1/7 h...@tbbs.net

  2014/01/06 17:07 +0100, Reindl Harald 
 what about look in the servers logfiles
 most likely max_allowed_packet laughable low
 
 Is this then, too, likely when the server and the client are the same
 machine?

 I left this out: it only happens when the client has been idle,
 and right afterwards the client repeats the request and all goes well. The
 message is no more than an irritating break between request and fulfillment.

Hello,

That happens when you're trying to re-use an existing connection which
wasn't properly closed and, as you said, has been idle. When you repeat the
operation, the thread is created again and thus everything goes normally.

Review the following variables (you can inspect them as shown below):

wait_timeout
net_write_timeout
net_read_timeout
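A quick way to see their current values (a generic check, nothing specific
to this report):

mysql> SHOW VARIABLES LIKE '%timeout%';

The cumulative Aborted_clients counter can also hint at connections being
dropped for exceeding those timeouts:

mysql> SHOW GLOBAL STATUS LIKE 'Aborted_clients';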


Manu


Re: InnoDB error 5

2013-11-21 Thread Manuel Arostegui
2013/11/21 Reindl Harald h.rei...@thelounge.net


 Am 21.11.2013 13:51, schrieb Paul Halliday:
  Had a system crash this morning and I can't seem to get mysql back up
  and running. This is the error:
 
  InnoDB: Progress in percent: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
  18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
  41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
  64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86
  87 88 89 90 91 92 93 94 95 96 97 98 99 2013-11-21 08:47:26 1570
  [ERROR] InnoDB: Tried to read 16384 bytes at offset 589824. Was only
  able to read -1.
  2013-11-21 08:47:26 802808c00  InnoDB: Operating system error number 5
  in a file operation.
  InnoDB: Error number 5 means 'Input/output error'.
  InnoDB: Some operating system error numbers are described at
  InnoDB:
 http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
  2013-11-21 08:47:26 802808c00  InnoDB: File operation call: 'read'
  returned OS error 105.
  2013-11-21 08:47:26 802808c00  InnoDB: Cannot continue operation.
 
  I followed that link but it doesn't tell me anything outside of what
  is above. Can I fix this?

 i would look in the *system logs* because this pretty sure comes
 from the underlying operating system and is *not* mysql specific
 which is also in the message statet with returned OS error 105


 http://lxr.free-electrons.com/source/include/uapi/asm-generic/errno-base.h#L8


Looks like a broken disk or FS corruption :-(

Good luck!
Manuel.


Re: Archive Engine Question

2013-09-18 Thread Manuel Arostegui
2013/9/17 Wayne Leutwyler wleut...@columbus.rr.com

 Hello List,

 I have a customer who is wanting to use the Archive Engine. I have no
 experience with this engine, other than what I am been reading. Why would I
 want to use Archive over InnoDB. They are only going to be placing audit
 information in the table.


Hello,

We use Archive for archive clusters (obviously) and it is a good option to
save a lot of disk space, if you can accept that performance may be slightly
worse. As Keith has pointed out, make sure you know which statements you
use, because ARCHIVE doesn't support all the MySQL ones.
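For instance, a minimal sketch (the table is invented; ARCHIVE accepts
INSERT and SELECT, rejects UPDATE/DELETE, and the only index allowed is on
an AUTO_INCREMENT column):

mysql> CREATE TABLE audit_log (
    ->   id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ->   logged_at DATETIME NOT NULL,
    ->   message TEXT,
    ->   PRIMARY KEY (id)
    -> ) ENGINE=ARCHIVE;
mysql> INSERT INTO audit_log VALUES (NULL, NOW(), 'user login');
mysql> DELETE FROM audit_log;
ERROR 1031 (HY000): Table storage engine for 'audit_log' doesn't have this option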

If Archive isn't an option but you still want to save some disk, you can
use InnoDB Compression:
http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-background.html
http://dev.mysql.com/doc/refman/5.5/en/innodb-compression-internals.html#innodb-compression-internals-storage-btree
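As a rough sketch of what enabling compression looks like on 5.5 (it
requires innodb_file_per_table and the Barracuda file format; the table is
hypothetical):

mysql> SET GLOBAL innodb_file_per_table = 1;
mysql> SET GLOBAL innodb_file_format = Barracuda;
mysql> CREATE TABLE audit_archive (
    ->   id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->   payload TEXT
    -> ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;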

In all the tests we did we saw some performance degradation, but nothing too
serious and nothing we couldn't afford. If you decide to try this, make
sure you do a PoC so you know how it would impact your scenario.

Hope this helps
Manuel.


Re: Performance hiccoughs..

2013-08-14 Thread Manuel Arostegui
2013/8/14 Andy Wallace awall...@ihouseweb.com

 Hey all -

 We have been focusing on performance in our systems a lot lately, and have
 made some pretty
 good progress. Upgrading the mySQL engine from 5.1 to 5.5 was eye-opening.

 But there are still issues, and one in particular is vexing. It seems like
 a tuning problem
 for sure - I notice this even at the command-line interface.  I will have
 a update command:

   update my_table set test_column = 'tester_value' where key_value =
 'a-test-key';

 key_value is the primary key for my_table, which is an INNODB table, about
 50MB, 96K rows



Can you provide the whole show create table for that table?





 If I run this 10 times with different key values, most of the time, it
 will return pretty
 much instantaneously. But at least once, it will take 10, 20, 30 seconds
 to return. This
 affects our applications as well - operations that are generally fast will
 suddenly be
 very, very slow... and then back to fast.


 OS: SunOS 5.10
 SQL version: 5.5.33-log MySQL Community Server (GPL)
 Hardware: Virtual Machine (VMWare), 4 cpus - 16GB RAM


 Tuning section of my.cnf:

 # tuning
 key_buffer_size=512M
 max_allowed_packet=16M
 table_open_cache=512
 sort_buffer_size=10M
 read_buffer_size=10M
 read_rnd_buffer_size=8M
 myisam_sort_buffer_size=512M
 thread_cache_size=8
 query_cache_type=1
 query_cache_size=1024M
 query_cache_limit=10M


Have you done tests with the query cache disabled? Depending on your
workload you might get better performance.
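Both can be checked at runtime before touching my.cnf (a quick experiment,
not a recommendation):

mysql> SHOW GLOBAL STATUS LIKE 'Qcache%';  -- hits vs. lowmem prunes
mysql> SET GLOBAL query_cache_size = 0;    -- disables the cache on the fly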



 # 2 x numcpus
 #thread_concurrency=4
 #innodb_thread_concurrency=0
 #innodb_read_io_threads=16
 #innodb_write_io_threads=16
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 innodb_buffer_pool_size=2048M


Why only 2GB if you have 16GB in the machine? If possible try to allocate
more memory for MySQL.



 innodb_additional_mem_pool_**size = 20M
 # Set .._log_file_size to 25 % of buffer pool size
 innodb_log_file_size=100M
 innodb_log_buffer_size=8M
 innodb_flush_log_at_trx_**commit=1


This can be a performance killer; try setting it to 0 and make sure you
understand what that means (
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
)


 innodb_lock_wait_timeout=50
 innodb=on



Do you have innodb_file_per_table enabled?

Cheers,
Manuel


Re: hypothetical question about data storage

2013-07-30 Thread Manuel Arostegui
2013/7/30 Rick James rja...@yahoo-inc.com

 Elevator...  If the RAID _controller_ does the Elevator stuff, any OS
 optimizations are wasted.
 And there have been benchmarks backing that up.  (Sorry, don't have any
 links handy.)

 RAID 5/10 ...  The testing I have done shows very little difference.
  However, you can slant the conclusion by picking one versus the other of:
 For a given amount of disk space... RAID-X is better than Y.
 For a given number of drives... RAID-Y is better than X.


In the tests I have done with RAID5 vs RAID10 the difference is huge, at
least in our clusters with heavy writes.
We usually do RAIDs over 4 or 8 SAS disks (15krpm).
The performance of each type of RAID needs to be tested for your concrete
scenario; you can find lots of benchmarks out there, but you need to test
your workload to be sure what works better for you. As Rick said, with
BBUs, disk schedulers, write-back/write-through configuration etc., things
can change.

The latest tests with SSD disks showed no difference, so for the new servers
with SSD we're going for RAID5, as you get more disk space :-)

Just my 2 cents!
Manuel.


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: InnoDB problem.

2013-07-23 Thread Manuel Arostegui
2013/7/23 Rick James rja...@yahoo-inc.com

 Did you change innodb_log_file_size?


The innodb_log_file_size error always appears in the logs... he only posted
a few lines of his log... but I guess (or I want to believe) he's gone
through the whole log before starting the thread :-)




Manuel






  -Original Message-
  From: Johan De Meersman [mailto:vegiv...@tuxera.be]
  Sent: Tuesday, July 23, 2013 9:57 AM
  To: Luis H. Forchesatto; mysql list
  Subject: Re: InnoDB problem.
 
  Eek.
 
  No immediate clue here, but maybe someone else does - so please keep the
  list in CC at all times :-p
 
  Random question: were the files backed up from a different version? I'd
  expect some kind of warning about that in the logs, really, but you never
  know.
 
  - Original Message -
 
   From: Luis H. Forchesatto luisforchesa...@gmail.com
   To: Johan De Meersman vegiv...@tuxera.be
   Sent: Tuesday, 23 July, 2013 6:34:47 PM
   Subject: Re: InnoDB problem.
 
   The error log:
 
   130723 10:04:23 [ERROR] Plugin 'InnoDB' init function returned error.
   130723 10:04:23 [ERROR] Plugin 'InnoDB' registration as a STORAGE
   ENGINE failed.
   130723 10:04:23 [Note] Event Scheduler: Loaded 0 events
   130723 10:04:23 [Note] C:\xampp\mysql\bin\mysqld.exe: ready for
   connections.
   Version: '5.1.41' socket: '' port: 3306 Source distribution
 
   2013/7/23 Johan De Meersman  vegiv...@tuxera.be 
 
   --
 
   Att.
 
   Luis H. Forchesatto
   http://br.linkedin.com/pub/luis-henrique-forchesatto/73/684/a67
 
  --
 
  Unhappiness is discouraged and will be corrected with kitten pictures.




-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: mysql on zfs

2013-06-28 Thread Manuel Arostegui
Hello,

Do you have TRIM enabled? Maybe those stalls happen when the
disks are getting trimmed.

Just a random thought.
Manuel

2013/6/28, nixofortune nixofort...@gmail.com:
 Hi guys,
 Do you have any experience running MySQL, or in my case MariaDB 5.5.31,
 on FreeBSD on top of ZFS?
 We are using Samsung Pro 840 SSD drives and experiencing temporary stalls.
 Our workload is very much skewed towards inserts into big InnoDB tables
 (70-100 gig); the dataset is 1.5T overall.
 I have a feeling that ZFS is not mature enough to be used in production.
 The speed is not great either, 2k-6k/s.
 I set innodb_checksums = 0 and innodb_doublewrite = 0, but the stalls of
 up to 8 min are still there.
 Would it be a better option to move to EXT4? We need FS snapshots for
 backups.
 Your thoughts, guys?
 Many thanks.
 Igor






-- 
Manuel Aróstegui
Systems Team
tuenti.com




Re: help: innodb database cannot recover

2013-06-20 Thread Manuel Arostegui
2013/6/20 Peter one2001...@yahoo.com

 Hello,

 I copied an innodb database (ib_logfile0, ib_logfile1, ibdata1 and the
 whole database directory) from one crashed machine to another.
 I find that I cannot start the database to get the data any more.


How did you copy the database?

Manuel


Re: help: innodb database cannot recover

2013-06-20 Thread Manuel Arostegui
2013/6/20 Peter one2001...@yahoo.com


 2013/6/20 Peter one2001...@yahoo.com

 Hello,

 I copied an innodb database (ib_logfile0, ib_logfile1, ibdata1 and the
 whole database directory) from one crashed machine to another.
 I find that I cannot start the database to get the data any more.


 How did you copy the database?
 Manuel

 I copied the files ib_logfile0, ib_logfile1 and ibdata1 into
 /var/lib/mysql on linux, and the whole database directory my_database_name
 into /var/lib/mysql/my_database_name, the same location as on the previous
 machine.



Was the MySQL daemon stopped on both machines? It must be stopped on both of
them.
If you cannot afford stopping your mysql, you might want to take a look at
xtrabackup: http://www.percona.com/doc/percona-xtrabackup/2.1/
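A rough sketch of the usual flow with xtrabackup's innobackupex wrapper
(paths are hypothetical; check the docs above for your version):

# take a consistent copy while mysqld keeps running:
innobackupex /backups/
# make the copy consistent by replaying the logs:
innobackupex --apply-log /backups/2013-06-20_10-00-00/
# on the target machine, with mysqld stopped and an empty datadir:
innobackupex --copy-back /backups/2013-06-20_10-00-00/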

Manuel.


Re: open files in mysqld 5.1.53

2013-06-12 Thread Manuel Arostegui
2013/6/12 walter harms wha...@bfs.de


 Hi list,
 i am trying to understand the incredible use of file pointers in our mysql
 server (5.1.53).
 Under normal conditions the server reports 10k-15k open file pointers.
 I run a 'flush tables' every 2h to avoid problems; the number of
 users/connections is constant.
 It is an automatic system with a few human users, a happy mix of myisam
 and innodb tables that ran with no problems on mysqld 5.0. But now I am
 sometimes hit with an insane increase, hitting the ceiling at 60k. I do
 not like the idea of increasing the limit further because I do not
 understand why this is happening (it seems to happen at random times).

 I am running out of ideas about what may cause the unexpected increase;
 any idea what to watch?


I had a similar problem some years ago with 5.0.84. It was a database with
thousands of tables (mostly MyISAM).
It turned out to be the --myisam-recover option in Debian's init scripts
(/usr/share/mysql/debian-start.inc.sh). Obviously it's not a good idea to
remove it, as you will need it if the server crashes.
This is what our db server used to look like after getting started:

dXX:~# find /proc/`cat /var/run/mysqld/mysqld.pid`/fd -follow -type f
2>/dev/null | wc -l
116810

This is what I saw at the time after removing that MyISAM check:

dXX:~# find /proc/`cat /var/run/mysqld/mysqld.pid`/fd -follow -type f
2>/dev/null | wc -l
10730

I would not recommend keeping this as a permanent solution, but rather as a
temporary one until you decide what to do. We migrated most of the tables to
InnoDB (there was no reason to keep them as MyISAM, it was just legacy
stuff), and these problems were gone.

I don't know if this can be your case, but this is what happened to us.

Hope this helps.
Manuel.


Re: Stopping mysql does not always stop it?

2013-06-06 Thread Manuel Arostegui
2013/6/6 Mike Franon kongfra...@gmail.com

 I am running mysql 5.0 for now, and I have a script, run at 12 am,
 that stops the mysql server, unmounts the disk that has mysql, and takes a
 different snapshot from amazon as the new disk.


 Long story short, 50% of the time the command /etc/init.d/mysqld stop  will
 fail


 Stopping MySQL:  [FAILED]

 Unmounting /opt on dbserver1


 I then log on the server and mysql is still running.

 When that happens what is the best way to stop mysql? I am trying to write
 something in my script that will do that, and thinking of using the kill
 command, but is that the best way?



 Or should I be using mysqladmin stop?  Is there a better way to bring down
 mysql server and make sure it is down?  Should I be doing something else to
 ensure it comes down?



Hello,

The first thing I would do is check why MySQL doesn't get stopped 50% of
the time. That's the issue you should start looking at.

Once that problem has been found... you can use either /etc/init.d/mysql
stop or mysqladmin. Both should work in the same way.
You should never use kill.

In order to check it, you can use pgrep to look for the process
and/or also check whether the .pid (and/or .sock) file exists (see the
sketch below).
I would always combine two methods to make sure it is stopped.
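A minimal sketch of such a check (file paths vary per distro, so treat them
as assumptions):

#!/bin/sh
if pgrep -x mysqld >/dev/null; then
    echo "mysqld process still running" >&2
    exit 1
fi
if [ -e /var/run/mysqld/mysqld.pid ] || [ -S /var/run/mysqld/mysqld.sock ]; then
    echo "stale pid file or socket still present" >&2
    exit 1
fi
echo "mysqld is down"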

Hope this helps.
Manuel.

-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Stopping mysql does not always stop it?

2013-06-06 Thread Manuel Arostegui
2013/6/6 Reindl Harald h.rei...@thelounge.net


 Am 06.06.2013 15:39, schrieb Manuel Arostegui:
  Once that problem has been found...you can use either /etc/init.d/mysql
  stop or mysqladmin. Both should work in the same way

  You should never use kill

 and why? what do you think does any stop command?


I assumed he meant kill -9, as it is not the first time I've seen people do
a -9 just because a process never stops.





-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Mysql server - which filesystem to choose? Is it really that important nowadays?

2013-05-23 Thread Manuel Arostegui
2013/5/22 Tim Callaghan tmcallag...@gmail.com

 Rafal,

 I don't believe TRIM is supported for XFS.


I tried this two weeks ago and it worked pretty well:
http://xfs.org/index.php/FITRIM/discard
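On Linux, the FITRIM ioctl described there is what the fstrim utility from
util-linux issues against a mounted filesystem, e.g. (mount point
hypothetical):

# fstrim -v /mnt/mysql-data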

Manuel.


Re: Mysql server - which filesystem to choose? Is it really that important nowadays?

2013-05-22 Thread Manuel Arostegui
2013/5/22 Rafał Radecki radecki.ra...@gmail.com

 Hi All.

 I use MySQL/Percona/MariaDB on my production CentOS 6 Linux servers. I am
 currently trying to choose the default filesystem for partitions with mysql
 data. Some time ago (under the previous dba) reiserfs was the choice, but
 now it is not in the kernel and the main author is in prison.

 From what I've read xfs and ext4 are valid choices and performance
 benchmarks over the web show that they are comparable (no clear winner).
 I've also read that with every new kernel there can be changes in
 performance in every filesystem ( for example

 http://gtowey.blogspot.com/2013/02/serious-xfs-performance-regression-in.html
  ).

 From your experiences: which filesystem to choose for a mysql db? Is ext4
 or xfs better? Or is it more a case of proper filesystem tuning to my
 workload? Any articles worth reading which you can recommend?


Hi Rafal,

I guess it really depends on your workload, your HW, kernel, etc.
From my experience, XFS with lazy-count=1 and 2.6.31.x kernels gives
better performance on our HW RAID 10 + BBU servers. We have this
configuration on around 200 DBs without any stability issues.
I still have to test ext4/xfs with 3.2.x kernels...

Manuel.


Re: Chain Replication QUestion

2013-04-30 Thread Manuel Arostegui
2013/4/30 Richard Reina gatorre...@gmail.com

 I have a few slaves set up on my local network that get updates from
 my main mysql database master. I was hoping to turn one into a master
 while keeping it a slave so that I can set up a chain.  Does anyone
 know where I can find a how to or other documentation for this
 specific task?


It is quite easy:

Enable log-slave-updates on the slave you want to become a master.
Do a mysqldump -e --master-data=2 and load that dump on the future
slaves. Take a look at the first lines of the mysqldump, where you'll find
the position and logfile those slaves need to start replicating from.
You can also use xtrabackup if you like.
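A sketch of the whole flow (file names and paths are made up):

# my.cnf on the relay being promoted (it stays a slave of the main master):
log-bin           = mysql-bin
log-slave-updates

# dump it with the binlog coordinates embedded as a comment:
mysqldump -e --all-databases --master-data=2 > /tmp/relay.sql
grep -m1 'CHANGE MASTER' /tmp/relay.sql

# load /tmp/relay.sql on each new slave, then point them at the relay with
# the MASTER_LOG_FILE/MASTER_LOG_POS values printed by the grep above.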

Manuel.


Re: Determing number of queries

2013-04-04 Thread Manuel Arostegui
2013/4/4 Richard Reina gatorre...@gmail.com

 I am looking to spec out hardware for a new database server. I figured
 a good starting point would be to find out how much usage my current
 server is getting. It's just a local machine that runs mysql and is
 queried by a few users here in the office. Is there a way that mysql
 can tell me info about its workload?

 Thanks


Hello,

You can start with show innodb status;
For the basics, you might want to start graphing the stats under ROW
OPERATIONS (inserts, updates, reads, deletes...)
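Another generic starting point is the cumulative status counters; sample
them twice and diff to get per-second rates (a quick sketch, nothing
installation-specific):

mysql> SHOW GLOBAL STATUS LIKE 'Questions';
mysql> SHOW GLOBAL STATUS LIKE 'Com_select';
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_rows_%';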

Manuel.


Re: Determing number of queries

2013-04-04 Thread Manuel Arostegui
2013/4/4 h...@tbbs.net

  2013/04/04 22:40 +0200, Manuel Arostegui 
 You can start with show innodb status;
 
 It is now
 show engine innodb status


Yep, sorry, not used to it just yet :-)


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Importing SQL dumps into MySQL through Command line

2013-03-31 Thread Manuel Arostegui
2013/3/31 Norah Jones nh.jone...@gmail.com

 Hi,

 To source a sqldump I can use the source command, but if I need to do the
 same stuff from the command line without going to the sql prompt, can I
 achieve that?



Hello,

You mean cat sqldump.sql | mysql -uwhatever -pwhatever whatever_database ?

Manuel.


Re: Defragmentation of MySQL tables, how many times have I to do it?

2013-03-28 Thread Manuel Arostegui
Hello Antonio,

As Reindl said, it is recommended, but it shouldn't be done every day if
your tables are quite big and you use the query cache.

If you do have a lot of writes and deletes (depending on what you
consider a lot), your table will indeed have a lot of 'blank rows' which
eat disk space.

Optimizing tables is good for both: saving disk space and reading
faster from them.
However, remember that optimize locks your table, so it can affect your
writes and create a big overhead on your server (IO).

If I were you I would benchmark how fast my operations are after,
let's say, a monthly optimize, and then decide.
I am assuming you are not optimizing because of disk space issues.
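For reference, the statement itself (table name hypothetical; on InnoDB it
rebuilds the table, so expect locking and IO while it runs):

mysql> OPTIMIZE TABLE mydb.sessions;

On InnoDB you'll typically get a note saying "Table does not support
optimize, doing recreate + analyze instead", which is expected.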

Manuel

2013/3/28, Antonio Fernández Pérez antoniofernan...@fabergames.com:
 Ok. Thank you very much.

 Best regards,

 Antonio.



-- 
Manuel Aróstegui
Systems Team
tuenti.com




Re: How to change max simultaneous connection parameter in mysql.

2013-03-24 Thread Manuel Arostegui
2013/3/24 Reindl Harald h.rei...@thelounge.net



 Am 24.03.2013 05:20, schrieb spameden:
  2013/3/19 Rick James rja...@yahoo-inc.com:
  you never have hosted a large site
  Check my email address before saying that.
 
  :D

 as said, big companies do not only have geniuses

  20 may be low, but 100 is rather high.
  Never use apache2 it has so many problems under load..

 if you are too stupid to configure it, yes

  The best combo is php5-fpm+nginx.
  Handles loads of users at once if well tuned

 Apache 2.4 handles the load of 600 parallel executed
 php-scripts from our own CMS-system

 maybe you guys should learn what an opcode cache is and
 how to compile and optimize software (binaries and config)


And maybe you should learn some good manners.


Re: Promoting MySQL 5.5 slave to master

2013-03-21 Thread Manuel Arostegui
2013/3/21 Miguel González Castaños miguel_3_gonza...@yahoo.es

 Dear all,

I have a pretty simple setup. A LAMP server in production and a
 failover LAMP server in case the main server is down.

I have been searching around to find out a clear answer of how to
 proceed when you need to promote a MySQL 5.5 slave server when the master
 (also 5.5) has crashed.

   So far I have read that if the master crashes I should issue a stop
 slave on the slave server and restart the mysql service (otherwise the
 master info is still kept in memory) with a skip-start-slave setting. Is
 that right? Is that all?




Hi Miguel,

Do you have master-master replication between them? How is the failover
done? Do you just move a VIP between them?
Anyways, there's no need to do a stop slave or restart the MySQL service on
the new master... you just need to set it active (move the VIP, point your
Apache manually to write to it, however you do it).

What you do have to be careful about, though, is the read_only parameter.
Make sure the recovered master comes back to life with read_only = ON, just
to avoid any problems. The standby master should always have read_only = ON
until it becomes active.

Manuel.


Re: Promoting MySQL 5.5 slave to master

2013-03-21 Thread Manuel Arostegui
2013/3/21 Miguel Gonzalez miguel_3_gonza...@yahoo.es




 Can you elaborate about this? I thought that once you fixed the issues in
 the master server you needed to set it as slave of the new promoted master
 server, and do the other way round.


That's why you might want to have master-master replication: to avoid this
manual process every time the active master dies.

Anyways, you're right - you'd need to configure the dead master to become a
slave of the new master. In order to do so, you need to know the binlog file
and position the new master had before getting the VIP. You'd need to check
the binlogs.
By having both MySQLs replicating from each other, you'd avoid this.

Make sure you do reply all instead of replying only to me :-)

Manuel.


Re: How to change max simultaneous connection parameter in mysql.

2013-03-16 Thread Manuel Arostegui
2013/3/16 Kevin Peterson qh.res...@gmail.com

 I am using PHP along with mysql. Mysql's default configuration allows 100
 simultaneous connections, which I want to change to 200. Please help.



If you're reaching too many connections quite often, this change can imply
memory problems on your server. If you are close to getting your server to
swap... be careful with this parameter, as any swapping will affect your
performance.
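For reference, the parameter is max_connections; a sketch of raising it at
runtime and persisting it in my.cnf:

mysql> SET GLOBAL max_connections = 200;

[mysqld]
max_connections = 200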

Manuel.




-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Multi-Master Replication Manager - monitor works but mmm_control show executes very long and sometimes returns no output.

2013-03-11 Thread Manuel Arostegui
2013/3/11 Rafał Radecki radecki.ra...@gmail.com

 Hi All.

 I use:

 cat /etc/redhat-release
 CentOS release 6.3 (Final)

 uname -a
 Linux prod1.local 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09
 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

 on db host:
 rpm -qa | grep mmm
 mysql-mmm-2.2.1-1.el6.noarch
 mysql-mmm-agent-2.2.1-1.el6.noarch

 on monitor host:
 rpm -qa | grep mmm
 mysql-mmm-monitor-2.2.1-1.el6.noarch
 mysql-mmm-2.2.1-1.el6.noarch
 mysql-mmm-tools-2.2.1-1.el6.noarch

 When I run mmm_control show/checks, the command executes for ~15 seconds,
 for example, and sometimes returns no output. In the logs there is no
 info about problems, and overall the monitor performs well. But I
 cannot use mmm_control to check its status. The servers are not
 overloaded. I have restarted the agents and the monitor, but that has not
 resolved the problem. When I rebooted the monitor host, the first use of
 mmm_control show was ok, but then the problem was back again.

 Any advice? Have you had any similar problems?


Hello,

Have you tried setting debug 1 in your mmm_common.conf and starting the
monitor? You might find useful output there.

Manuel.


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-16 Thread Manuel Arostegui
2013/2/15 Reindl Harald h.rei...@thelounge.net

 "our database is 400 GB, mysqldump is 600MB" was not a typo, and you
 honestly believed that you could import this dump to something?

 WTF - as an admin you should be able to see whether the things in front
 of you are theoretically possible before you start any action,
 and 1:400 is impossible, especially because mysql dumps are
 ALWAYS WAY LARGER than the databases, because they contain
 sql statements and not only data


That's not completely true. If you have a poorly maintained database, or
just tables with lots of writes and deletes that you don't periodically
optimize, you can end up with a lot of blank space in your tables which
uses _a lot_ of disk. If you do a du or whatever to measure your database
size... you can get really confused.
mysqldump obviously doesn't back up blank space, and once you get rid of
it, your database will use much less space.

I have seen this scenario many times, and I have seen tables using like 30GB
of disk space whose reported size after an optimize would be just 5-10GB.
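A quick way to estimate the reclaimable space per table is the DATA_FREE
column in information_schema (schema name hypothetical; for InnoDB the
value is an estimate):

mysql> SELECT table_name, ROUND(data_free/1024/1024) AS free_mb
    ->   FROM information_schema.tables
    ->  WHERE table_schema = 'mydb'
    ->  ORDER BY data_free DESC;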

Manuel.


Re: slave replication with lots of 'duplicate entry' errors

2013-02-15 Thread Manuel Arostegui
2013/2/14 Robert Citek robert.ci...@gmail.com



 According to the client, nothing is writing to the slave and
 everything is being logged at the master.  I have not had the
 opportunity to independently verified any of this, yet.  I do know
 that the slave is not in read-only mode, but rather we promise not to
 write to it mode.


The only way to be totally safe is to enable it.
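For reference, enabling it (note that read_only does not restrict users
with the SUPER privilege, so application users shouldn't have SUPER):

mysql> SET GLOBAL read_only = ON;

and in my.cnf, so it survives restarts:

[mysqld]
read_only = 1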



 At the moment, I am trying to come up with plausible explanations for
 the observations.


If you are sure you're setting the correct master IP, logfile and position
that appear in your mysqldump... the only explanation is... they are writing
directly into the slave.

Manuel.


Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Manuel Arostegui
2013/2/14 Mike Franon kongfra...@gmail.com

 Great, thanks for the info. I guess the best way to do this is to take a
 spare server, set it up with our standard setup, and then start the
 upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6
 and test.


Do not forget to leave that spare server running for several days before
upgrading the rest of the machines to 5.6. If possible, I would do some
stress tests or benchmarking to make sure it performs as you expect.
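For a quick synthetic smoke test there's mysqlslap, which ships with the
server (no substitute for replaying your real workload, but a start):

mysqlslap --auto-generate-sql --concurrency=50 --iterations=10 -uroot -p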

Manuel.


Re: slave replication with lots of 'duplicate entry' errors

2013-02-14 Thread Manuel Arostegui
2013/2/13 Robert Citek robert.ci...@gmail.com

 On Wed, Feb 13, 2013 at 8:59 AM, Robert Citek robert.ci...@gmail.com
 wrote:
  Any other possibilities?  Do other scenarios become likely if there
  are two or more tables?
 
  Of those, which are the most likely?

 [from off-list responder]:
  Other possibility: The replication is reading from master not from the
 point when the dump was done, but some time before and is fetching insert
 statements which are already in the dump.

 To prevent that I used the coordinates in the dump file included with
 --master-data=2.  Could the coordinates still be off?


Hello,

Are you sure nothing is getting inserted directly into the slave? Is it in
read-only mode?
If you're starting replication using the values provided by --master-data=2
(which should be something like):

 -- Position to start replication or point-in-time recovery from

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000974',
MASTER_LOG_POS=240814775;

And if you're using the right IP, there's no reason to have duplicate
entries unless someone is writing directly into the slave.

Manuel.


Re: log sequence number InnoDB: is in the future!?

2013-02-04 Thread Manuel Arostegui
2013/2/3 Larry Martell larry.mart...@gmail.com


 We also ended up dropping the database and restoring from dumps.
 However all recent dumps ended up having a similar corruption and we
 were still getting the same errors. We had to go back to an October
 dump before it would come up cleanly. And our db is fairly large, and
 it takes around 4 hours to load a dump. We were working on this Friday
 from 8:30 AM until 4AM Saturday before we got it going. And now we're
 trying to recall all the alters we did since then, and reload all the
 data since then, most of which is in files we can import. The only
 thing we don't have is the manual updates done. All in all a total
 disaster and something that will make us rethink our procedures.
 Perhaps we'll look at replication, although I don't know if that would
 have helped in this case.


Hi Larry,

I am sorry to read this. I hope you guys recovered everything already.
I would like to suggest something though.

From my point of view it is always good to back up just the schemas (without
data) aside from regular data backups, that's to say, combine both. If
something like this happens, you can always do a diff and get the schemas
recovered in a matter of minutes.
Generally, schemas are pretty light and won't use any significant disk
space.
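A schema-only dump is a one-liner with mysqldump (database name
hypothetical):

mysqldump --no-data --routines mydb > mydb_schema.sql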

About the replication solution... I would strongly recommend using it if
possible in your scenario.
Clearly it won't prevent any data loss generated by a bad statement (UPDATE
without a WHERE, DELETE * FROM, etc.). However, if you're thinking of having
a dedicated slave for backups, you might want to use pt-slave-delay (
http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html) so you
can keep your slave delayed XX minutes/hours and prevent disasters
coming from bad statements such as the ones I described earlier.
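A sketch of running it against a local slave (credentials invented):

pt-slave-delay --delay 1h --interval 15s h=localhost,u=ptuser,p=secret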

Anyways, as I was saying, if it's possible to have a server just acting as
a backup slave, that would help you recover faster from corruption caused
by HW problems. Turning it into a master generally takes minutes.

Hope you guys fixed everything already!
Manuel.


Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Manuel Arostegui
2013/1/28 walter harms wha...@bfs.de

 hi list,

 i am using mysql 5.1.53.
 after a crash i have the following error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not
 understand is what
 exactly is broken? The whole DBM? One instance? (mysqlcheck says all
 tables are ok.)

 Not all tables are INNODB. Is it possible to restore only the innodb
 tables? (Having fun with foreign keys)

 Or is there a better way to handle this ?



Hello,

I reckon you really need to think about what caused your MySQL to crash. If
there's no clear reason (HW problem), you might want to dig into that to
prevent it happening again. I am saying this because it is not the first
time I've seen someone fix a corruption (re-building the database or fixing
corrupted tables) and then get it corrupted again within some hours.

The problem itself has a solution: increasing the log sequence counter. I
wouldn't do it if it's not totally necessary (i.e. you don't have another
machine to copy the data from). If you can get the data copied again from
some other server, that is probably the safest solution here to make sure
your data isn't corrupted. If not, I would suggest running pt-table-checksum
to make sure the data is okay once your DB has recovered from this crash.
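A sketch of a pt-table-checksum run (executed on the master; credentials
and database name invented):

pt-table-checksum --databases mydb h=localhost,u=checksum_user,p=secret

It writes per-chunk checksums through replication, so the slaves recompute
them and differences show up per table.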

Cheers
Manuel.


Re: Handlersockets - mutex_delay contention

2012-12-12 Thread Manuel Arostegui
Hello,

Just for the record, I found the problem and overcame the contention.
The key was: innodb_thread_concurrency

Quoting the documentation:
innodb_thread_concurrency is the variable that limits the number of
operating system threads that can run concurrently inside the InnoDB
engine. The rest of the threads have to wait in a FIFO queue for execution.
Also, threads waiting for locks are not counted in the number of
concurrently executing threads.

When the tests started, we set it to the recommended value: 2 x number of
cores.
I also tried setting it to a crazy high number.

Only when I disabled it (set global innodb_thread_concurrency = 0;) did the
reads take off, and I was able to do 1.1M reads per second :-)
The CPU is now the bottleneck (which makes sense) and reached around
93-94%; at that point I am not able to go over 1.1M r/s.

The mutex_delay never appeared again after disabling the concurrency limit.

Manuel.

2012/12/7 Manuel Arostegui man...@tuenti.com

 Hello all,

 I am testing handlersockets performance in a 5.5.28-29.1-log (Percona)
 server.
 These are the enabled options:

 loose_handlersocket_port = 9998
 loose_handlersocket_port_wr = 
 loose_handlersocket_threads = 48
 loose_handlersocket_threads_wr = 1
 innodb_spin_wait_delay=0

 The machine has 24 (Xeon - 2.00GHz) cores and 64GB RAM. We are using
 bonding to make sure the ethernets aren't limiting us here (we get around
 90Mbps)

 We are able to handle around 500K requests per second using the handler
 socket plugin. Even though that looks like a pretty impressive number,
 it's still not close to the 750K ones Yoshinori is able to get (
 http://yoshinorimatsunobu.blogspot.com.es/2010/10/using-mysql-as-nosql-story-for.html
 )
 The machine is acting a normal slave in a cluster - receiving normal
 traffic from our site (we do this on purpose to see how many requests we
 can handle in a normal workload environment)

 Obviously we're not expecting to get similar numbers as our tests aren't
 the same.
 However, doing a bit of profiling to try to determine what's the
 bottleneck here we've seen this:

 CPU: Intel Architectural Perfmon, speed 2000.26 MHz (estimated)
 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit
 mask of 0x00 (No unit mask) count 10
 samples  %        image name   symbol name
 2163868  24.2065  mysqld       mutex_delay
 694540    7.7696  mysqld       build_template(row_prebuilt_struct*, THD*, TABLE*, unsigned int)
 626405    7.0074  mysqld       buf_page_get_gen
 607664    6.7978  mysqld       rec_get_offsets_func
 480937    5.3801  mysqld       cmp_dtuple_rec_with_match
 412081    4.6098  mysqld       btr_cur_search_to_nth_level
 365402    4.0876  mysqld       rec_init_offsets
 356064    3.9832  mysqld       page_cur_search_with_match
 310819    3.4770  mysqld       row_search_for_mysql
 274248    3.0679  mysqld       row_sel_store_mysql_rec
 208466    2.3320  mysqld       my_pthread_fastmutex_lock
 185853    2.0791  mysqld       ha_innobase::index_read(unsigned char*, unsigned char const*, unsigned int, ha_rkey_function)
 182939    2.0465  mysqld       pfs_mutex_enter_func
 154369    1.7269  mysqld       mtr_memo_slot_release
 138558    1.5500  mysqld       page_check_dir
 131622    1.4724  mysqld       dict_index_copy_types
 101162    1.1317  mysqld       srv_conc_force_exit_innodb
 72754     0.8139  mysqld       ha_innobase::change_active_index(unsigned int)
 65191     0.7293  mysqld       my_long10_to_str_8bit
 62574     0.7000  mysqld       btr_pcur_store_position
 61900     0.6925  mysqld       pfs_mutex_exit_func
 51889     0.5805  mysqld       Field_long::pack_length() const
 49073     0.5490  mysqld       srv_conc_enter_innodb
 44079     0.4931  mysqld       ha_innobase::init_table_handle_for_HANDLER()
 38998     0.4363  mysqld       Field_tiny::pack_length() const
 38868     0.4348  mysqld       rec_copy_prefix_to_buf
 36292     0.4060  mysqld       ha_innobase::innobase_get_index(unsigned int)

 That mutex_delay is eating quite a big % of the time.
 I have not been able to find what that is related to. Does anyone have a
 clue about what it is and whether there's a way to improve and overcome it?

 Cheers
 Manuel.



Handlersockets - mutex_delay contention

2012-12-07 Thread Manuel Arostegui
Hello all,

I am testing handlersockets performance in a 5.5.28-29.1-log (Percona)
server.
These are the enabled options:

loose_handlersocket_port = 9998
loose_handlersocket_port_wr = 
loose_handlersocket_threads = 48
loose_handlersocket_threads_wr = 1
innodb_spin_wait_delay=0

The machine has 24 (Xeon - 2.00GHz) cores and 64GB RAM. We are using
bonding to make sure the ethernets aren't limiting us here (we get around
90Mbps)

We are able to handle around 500K requests per second using the handler
socket plugin. Even though that looks like a pretty impressive number, it's still not close
to the 750K ones Yoshinori is able to get (
http://yoshinorimatsunobu.blogspot.com.es/2010/10/using-mysql-as-nosql-story-for.html
)
The machine is acting a normal slave in a cluster - receiving normal
traffic from our site (we do this on purpose to see how many requests we
can handle in a normal workload environment)

Obviously we're not expecting to get similar numbers as our tests aren't
the same.
However, doing a bit of profiling to try to determine what's the bottleneck
here we've seen this:

CPU: Intel Architectural Perfmon, speed 2000.26 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit
mask of 0x00 (No unit mask) count 10
samples  %        image name   symbol name
2163868  24.2065  mysqld       mutex_delay
694540    7.7696  mysqld       build_template(row_prebuilt_struct*, THD*, TABLE*, unsigned int)
626405    7.0074  mysqld       buf_page_get_gen
607664    6.7978  mysqld       rec_get_offsets_func
480937    5.3801  mysqld       cmp_dtuple_rec_with_match
412081    4.6098  mysqld       btr_cur_search_to_nth_level
365402    4.0876  mysqld       rec_init_offsets
356064    3.9832  mysqld       page_cur_search_with_match
310819    3.4770  mysqld       row_search_for_mysql
274248    3.0679  mysqld       row_sel_store_mysql_rec
208466    2.3320  mysqld       my_pthread_fastmutex_lock
185853    2.0791  mysqld       ha_innobase::index_read(unsigned char*, unsigned char const*, unsigned int, ha_rkey_function)
182939    2.0465  mysqld       pfs_mutex_enter_func
154369    1.7269  mysqld       mtr_memo_slot_release
138558    1.5500  mysqld       page_check_dir
131622    1.4724  mysqld       dict_index_copy_types
101162    1.1317  mysqld       srv_conc_force_exit_innodb
72754     0.8139  mysqld       ha_innobase::change_active_index(unsigned int)
65191     0.7293  mysqld       my_long10_to_str_8bit
62574     0.7000  mysqld       btr_pcur_store_position
61900     0.6925  mysqld       pfs_mutex_exit_func
51889     0.5805  mysqld       Field_long::pack_length() const
49073     0.5490  mysqld       srv_conc_enter_innodb
44079     0.4931  mysqld       ha_innobase::init_table_handle_for_HANDLER()
38998     0.4363  mysqld       Field_tiny::pack_length() const
38868     0.4348  mysqld       rec_copy_prefix_to_buf
36292     0.4060  mysqld       ha_innobase::innobase_get_index(unsigned int)

That mutex_delay is eating quite a big % of the time.
I have not been able to find what that is related to. Does anyone have a
clue about what it is and whether there's a way to improve and overcome it?

Cheers
Manuel.


Re: MySQL Replication Error

2012-12-05 Thread Manuel Arostegui
2012/11/30 Néstor rot...@gmail.com

 I am trying to set up mysql replication on 2 systems, but once I get it
 going, I get the following error, 1062:


skip


 I have re-installed the database on the slave also, to see if this fixes
 the problem, but after a few minutes I get the same error. I have repeated
 the replication from the beginning, but I get the same problem after a
 while.

 Does anyone know a way to fix this?


Hello,

How did you build the slave from the master? How did you decide from which
position the slave should start replicating?
You might want to try pt-table-checksum (
http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html) to
see what else you have broken.

Manuel.


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Innodb, MySQL 5.5.28 - Would an incorrect setting in my.cnf cause mysqld to randomly crash on high load?

2012-11-26 Thread Manuel Arostegui
Hi Tom,

I am assuming nothing relevant shows up in dmesg, right?

I have experienced random crashes like that, and most of them turned out to
be HW issues related to hard disks and memory banks.

Is it a HW RAID? Have you tried looking at the controller logs? (MegaCli).

And yes, corrupted tables would fail when restoring them (or even when
backing them up).

Good luck!
Manuel

2012/11/26, Rick James rja...@yahoo-inc.com:
 Nothing looks bad.
 96G for the buffer_pool is bigger than I have experienced, but I know of no
 reason for it to fail (given that you have 128GB of RAM).

 -Original Message-
 From: Tom [mailto:livefortheda...@gmail.com]
 Sent: Wednesday, November 21, 2012 5:17 PM
 To: mysql@lists.mysql.com
 Subject: Innodb, MySQL 5.5.28 - Would an incorrect setting in my.cnf
 cause mysqld to randomly crash on high load?

 We have a high-end server, 128GB RAM, 32 core Xeon, SSD RAID 10,
 running Ubuntu 12.04 with MySQL 5.5.28. Doing random imports to large
 InnoDB tables, over 50+ gigs, randomly after a few hours of heavy load,
 mysql gets a Signal 11 and crashes.

 We have tried moving hardware. Doing a full dump (but not a restore
 yet) gives no issues. Usually on corrupted tables, a dump would fail,
 no?

 Below is the crash log and my.cnf .

 ---
 12:45:4 UTC - mysqld got signal 11 ;
 This could be because you hit a bug. It is also possible that this
 binary or one of the libraries it was linked against is corrupt,
 improperly built, or misconfigured. This error can also be caused by
 malfunctioning hardware.
 We will try our best to scrape up some info that will hopefully help
 diagnose the problem, but since we have already crashed, something is
 definitely wrong and this may fail.

 key_buffer_size=536870912
 read_buffer_size=131072
 max_used_connections=324
 max_threads=200
 thread_count=308
 connection_count=308
 It is possible that mysqld could use up to key_buffer_size +
 (read_buffer_size + sort_buffer_size)*max_threads =
 965187 K  bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 Thread pointer: 0x7fc7eb1b5040
 Attempting backtrace. You can use the following information to find out
 where mysqld died. If you see no messages after this, something went
 terribly wrong...
 stack_bottom = 7fadf6abfe60 thread_stack 0x3
 /usr/sbin/mysqld(my_print_stacktrace+0x29)[0x7fc758522759]
 /usr/sbin/mysqld(handle_fatal_signal+0x483)[0x7fc7583e9ae3]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7fc75713bcb0]
 /usr/sbin/mysqld(+0x6671b0)[0x7fc75863a1b0]
 /usr/sbin/mysqld(+0x61d6b9)[0x7fc7585f06b9]
 /usr/sbin/mysqld(+0x630d12)[0x7fc758603d12]
 /usr/sbin/mysqld(+0x6319c2)[0x7fc7586049c2]
 /usr/sbin/mysqld(+0x631d85)[0x7fc758604d85]
 /usr/sbin/mysqld(+0x626e7d)[0x7fc7585f9e7d]
 /usr/sbin/mysqld(+0x633cea)[0x7fc758606cea]
 /usr/sbin/mysqld(+0x6347e2)[0x7fc7586077e2]
 /usr/sbin/mysqld(+0x624426)[0x7fc7585f7426]
 /usr/sbin/mysqld(+0x610871)[0x7fc7585e3871]
 /usr/sbin/mysqld(+0x5d4cb0)[0x7fc7585a7cb0]
 /usr/sbin/mysqld(+0x5b7c9c)[0x7fc75858ac9c]
 /usr/sbin/mysqld(_ZN7handler21read_multi_range_nextEPP18st_key_multi_ra
 nge+0x24)[0x7fc7583e9fe4]
 /usr/sbin/mysqld(_ZN18QUICK_RANGE_SELECT8get_nextEv+0x3c)[0x7fc7584a3c8
 c]
 /usr/sbin/mysqld(+0x4e9195)[0x7fc7584bc195]
 /usr/sbin/mysqld(_Z10sub_selectP4JOINP13st_join_tableb+0x71)[0x7fc7582f
 1741]
 /usr/sbin/mysqld(+0x32f025)[0x7fc758302025]
 /usr/sbin/mysqld(_ZN4JOIN4execEv+0x4a5)[0x7fc758311155]
 /usr/sbin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_E
 S2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_sele
 ct_lex+0x130)[0x7fc75830d000]
 /usr/sbin/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x
 7fc758312f5c]
 /usr/sbin/mysqld(+0x2f66b4)[0x7fc7582c96b4]
 /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7fc7582d1118]
 /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7fc758
 2d5daf]
 /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x13
 80)[0x7fc7582d7200]
 /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7fc75837b7a
 d]
 /usr/sbin/mysqld(handle_one_connection+0x50)[0x7fc75837b810]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7fc757133e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fc756864cbd]

 Trying to get some variables.
 Some pointers may be invalid and cause the dump to abort.
 Query (7faa4c18e440): is an invalid pointer Connection ID (thread ID):
 2286
 Status: NOT_KILLED

 The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html
 contains information that should help you find out what is causing the
 crash.
 121120 12:48:48 [Note] Plugin 'FEDERATED' is disabled.
 121120 12:48:48 InnoDB: The InnoDB memory heap is disabled
 121120 12:48:48 InnoDB: Mutexes and rw_locks use GCC atomic builtins
 121120 12:48:48 InnoDB: Compressed tables use zlib 1.2.3.4
 121120 12:48:48 InnoDB: Initializing buffer pool, size = 96.0G
 121120 12:48:56 InnoDB: Completed initialization of buffer pool
 121120 12:48:57 

Re: Replication between different versions

2012-11-15 Thread Manuel Arostegui
2012/11/15 Lorenzo Milesi max...@ufficyo.com

 Hi.

 Is it possible to have a master/master replication between servers of
 different versions?
 Now I have two 5.0.51 servers syncing with each other. I'm installing a new
 server with 5.5.28 and I'd like to know if I can upgrade while still
 maintaining compatibility. Otherwise I'll keep on installing the same
 version...


You should have no problem as long as you're not using a feature that is no
longer supported in 5.5.

Manuel


Re: Assistance with replication

2012-11-12 Thread Manuel Arostegui
2012/11/12 Machiel Richards - Gmail machiel.richa...@gmail.com

 Hi Guys

 We need some assistance with a master slave setup we are
 experiencing some issues with.

 We have one master with 2 slaves replicating from it.

 Both the slaves stopped working this morning in the early hours at
 exactly the same bin_log file and position.

 We have since restarted the slave servers and when you run a show
 slave status the seconds behind master stays 0.


 Upon closer investigation, the Relay_Master_Log_File and
 Exec_Master_Log_Pos does not change, although all other values are changing.

 No errors are being logged, and there is enough disk space on the
 server so it can't be space related.



 Can anyone suggest some things we can check or do in order to get
 replication sorted and to avoid future issues like this?


Hello Machiel,

If seconds behind the master is at 0 it means the replication is working
fine - it's not broken (broken would show NULL)
What you can do to make sure both machines are indeed replicating all the
data arriving to the master is:

Master: show master status\G
Check File and Position field

 Slave: show slave status\G
Check Exec_Master_Log_Pos and Relay_Master_Log_File

They should be the same on both the master and the slaves.
Keep in mind these values will change if the master is getting writes. So
try to run the commands at the same time in the 3 servers.
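
For example (the file name and position here are hypothetical - compare
whatever your own servers report):

master mysql> show master status\G
            File: mysql-bin.000123
        Position: 456789

slave  mysql> show slave status\G
   Relay_Master_Log_File: mysql-bin.000123
     Exec_Master_Log_Pos: 456789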

Good luck!
Manuel.


Re: Assistance with replication

2012-11-12 Thread Manuel Arostegui
2012/11/12 Machiel Richards - Gmail machiel.richa...@gmail.com

  Hi Manuel

 Please take careful note of what I have stated in the original
 mail.

 1. the fact that the slaves say seconds behind master = 0 does
 not necessarily mean that replication is working, as I have found this a
 lot of times where it shows 0 and then proven that replication was not
 working (i.e. data not being processed on the slave). The 0 seconds simply
 means that it has read all the data, but does not mean that the data has
 been processed or is up to date.


Agreed. What I meant is that replication is not broken (slaves are able to
connect to the master). If they're not replicating data, that's another
story.




 2. the Exec_Master_Log_Pos and Master_Log_File are in fact out of
 sync with the master (i.e. on the master the file and log position are way
 ahead of the slaves by about 9 hours).


Have you tried to take a look at the master's binlog to see what happened
at the position where the slaves got out of sync? Just to make sure it's a
normal statement.
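
For instance, right from the MySQL CLI (the file name and position are
hypothetical):

mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000123' FROM 456789 LIMIT 10;

Running mysqlbinlog against that file on the master works too if you prefer
the full statement text.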




 3. We have tested replication by creating a test table in the
 database with some data and it was not replicated to the slaves.


 I am assuming that Replicate_Do_DB is empty in both slaves, isn't it?
Have you tried to start replication again at the same position and file in
one of the slaves to force them to rotate their relay logs?
That's what is usually done when the relay log gets corrupted (although in
that case you usually get an error message in the show slave status output).
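
Something along these lines (the coordinates are hypothetical - use the ones
from your own show slave status output):

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    ->   MASTER_LOG_FILE = 'mysql-bin.000123',
    ->   MASTER_LOG_POS = 456789;
mysql> START SLAVE;

Re-issuing CHANGE MASTER with a log file and position makes the slave throw
away its current relay logs and fetch them again from the master.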

Good luck!
Manuel


Re: How to verify mysqldump files

2012-11-07 Thread Manuel Arostegui
2012/11/7 Ananda Kumar anan...@gmail.com

 you can use checksum to make sure there are not corruption in the file



That would work for the integrity of the file itself, not for the integrity
of the data _in_ the file.

As Claudio suggested, going through the whole recovery process from time to
time is probably the best way to make sure the backed-up data is correct.
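
As a quick spot-check after a test restore, you can also compare table
checksums between the live server and the restored copy (the table name is
just an example):

mysql> CHECKSUM TABLE mydb.mytable EXTENDED;

EXTENDED reads every row, so run it off-peak on big tables.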

Manuel.


Re: Mysql backup for large databases

2012-11-02 Thread Manuel Arostegui
Hello,

Just one more suggestion to do full backups in large databases:

- Dedicated slave (either physical machine, a disk cabinet using iscsi
connections from a machine just with a bunch of RAM etc)
- Get the slave delayed a certain time (ie: 1 hour, 2 hours...depends on
your workload) using Percona Toolkit (
http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html)
- Make sure that slave has log-slave-updates in my.cnf.
- Make sure the master keeps the necessary logs for a point in time
recovery if needed.

If something goes wrong you have a full copy of your database, just delayed
a few hours. If you want to recover from a drop database (or any big fail
executed from MySQL CLI) in the master, you can just sync up your delayed
slave to that specific statment - go thru your master binlog, locate the
drop database statement and set START SLAVE SQL_THREAD UNTIL
master_log_file = 'mysql-bin.XXX', master_log_pos=XXX; to the position
right before the bad one. That way you'd have your database as it was just
before the wrong statement execution.
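
A minimal sketch of that sync-up, with hypothetical coordinates:

mysql> STOP SLAVE;
mysql> START SLAVE SQL_THREAD UNTIL
    ->   MASTER_LOG_FILE = 'mysql-bin.000042',
    ->   MASTER_LOG_POS = 1234567;   -- position just *before* the DROP

The SQL thread then stops by itself once it reaches that position, leaving
the data as it was right before the bad statement.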

As Rick said - if you're thinking about doing snapshots, make sure you stop
your mysql daemon in the delayed slave to make sure everything is committed
to disk. Otherwise you can end up with a corrupted DB which won't boot when
you need it.

And lastly, but probably most important... test your backups periodically!!

Hope this helps
Manuel.


2012/11/1 Karen Abgarian a...@apple.com

 Hi,

 For doing backups on the primary database, I know nothing better than have
 your tables in InnoDB and use Innobackup (or MySQL Enterprise backup).
 This, however, still has the possibility of hanging as it is using FLUSH
 TABLES WITH READ LOCK for taking backups of MyISAM tables.One may want
 to script it to kill the backup if the wait exceeds some threshold.   The
 backup taken this way has incremental backups feature which may reduce the
 impact.

 For offloading the backups to a replica, there exist more options because
 the replica can be frozen and/or shut down.   For an InnoDB database, it
 has to be shut down for taking a consistent backup.   If it is not, it will
 result in cute little inconsistencies unless a DBA is one lucky guy and
 always wins playing roulette.

 Combining the two, I like the idea of doing EM backup on a replica and
 having all tables in InnoDB.

 After a backup has been taken, it will eventually need to be restored
 unless someone just likes taking them.   For this reason, it will have to
 be brought to the recovered system.   Unless somebody knows in advance when
 the database would need to be recovered (f.e. it is known that a bad guy
 always corrupts it on Monday mornings), the backup will need to be
 available for restore always.   These considerations usually imply things
 like shared filesystems between primary and replica, rejecting backups for
 recoveries across datacenters and the like.

 Backing up binary logs allows providing continuous coverage for recovery
 instead of discrete.

 Cheers
 Karen


 On 01.11.2012, at 8:53, machiel.richa...@gmail.com wrote:

  Well, the biggest problem we have to answer for the clients is the
 following:
  1. Backup method that doesn't take long and doesn't impact the system
  2. Restore needs to be done as quickly as possible in order to
 minimize downtime.
 
  The one client is running master - master replication with master server
 in usa, and slave in south africa. They need master backup to be done in
 the states.
 
 
  Sent via my BlackBerry from Vodacom - let your email find you!
 
  -Original Message-
  From: Reindl Harald h.rei...@thelounge.net
  Date: Thu, 01 Nov 2012 16:49:45
  To: mysql@lists.mysql.commysql@lists.mysql.com
  Subject: Re: Mysql backup for large databases
 
  good luck
 
  i would call snapshots on a running system much more dumb
  than innodb_flush_log_at_trx_commit = 2 on systems with
  100% stable power instead waste IOPS on shared storages
 
  Am 01.11.2012 16:45, schrieb Singer Wang:
  Assuming you're not doing dumb stuff like innodb_flush_log_at_trx_commit=0
 or 2 etc, you should be fine. We have been
  using the trio: flush tables with read lock, xfs_freeze, snapshot for
 months now without any issues. And we test
  the backups (we load the backup into a staging once a day, and dev once
 a week)
 
  On Thu, Nov 1, 2012 at 11:41 AM, Reindl Harald 
  h.rei...@thelounge.netmailto:
 h.rei...@thelounge.net wrote:
 
  Why do you need downtime?
 
 because mysqld has many buffers in memory and there is no atomic
 'flush buffers in the daemon and freeze the backend FS' operation
 
 short ago there was a guy on this list which had to realize
 this the hard way with a corrupt slave taken from a snapshot
 
 that's why i would ALWAYS do master/slave what means ONE time
 down (rsync; stop master; rsync; start master) for a small
 timewindow and after that you can stop the slave, take a
 100% consistent backup of it's whole datadir and start
 the slave 

Re: Replication Question

2012-10-26 Thread Manuel Arostegui
2012/10/25 Sabika M sabika.makhd...@gmail.com

 I have replication setup in the following way:

 A - B -C

 I am making updates to server A. I want to stop all my updates and point
 them to server C. After I start writing to server C, can I use the change
 master statement to make the C the master of A (take B out of the topology)
 and proceed to set up peer-to-peer between A - C without taking a new
 backup of c to set up the replication between A-C?

 I guess what I am really asking is if the data is the same, is a backup
 required for initialization?

 Hello,

Make sure you run a show master status at C before sending the writes to
it, so you know from which binlog/position A has to start replicating.
Also make sure read_only = OFF in C.
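
A sketch of the switch, with hypothetical hostname and coordinates:

-- On C, before the writes start:
mysql> SHOW MASTER STATUS\G   -- note File and Position
mysql> SET GLOBAL read_only = OFF;

-- On A:
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    ->   MASTER_HOST = 'server-c',
    ->   MASTER_LOG_FILE = 'mysql-bin.000007',
    ->   MASTER_LOG_POS = 98765;
mysql> START SLAVE;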

Manuel.


Re: monitoring tool

2012-10-10 Thread Manuel Arostegui
2012/10/10 Aastha aast...@gmail.com

 Hi,

 Are there any open source MySQL real-time monitoring tools available on the
 market?

 Aastha


Hello Aastha,

You should try to be more specific when asking for stuff.
What do you want to monitor? reads/writes? QPS? threads? etc

Thanks
Manuel.


Re: monitoring tool

2012-10-10 Thread Manuel Arostegui
2012/10/10 Aastha aast...@gmail.com

 it should do the following:

 1. give the status of the health of the nodes -Primary concern


What do you mean by health of the nodes? mysqld running? Master and slaves
up and in sync? Replication not broken? Are you using NDB? It's still a very
vague description.


 2. Give slow queries


You will need to enable the slow query log (filtering as much as possible so
you only log what you would consider a slow query in your environment).
You'd need to scan it too: take a look at pt-query-digest, or use regexes
over the log if you don't want to use external tools.
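
Enabling it at runtime looks roughly like this on 5.1+ (the threshold and
file path are just examples - tune them to your workload):

mysql> SET GLOBAL slow_query_log = ON;
mysql> SET GLOBAL long_query_time = 1;
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';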


 3. No. of writes or reads etc


You can parse the output of show engine innodb status for that.


 4. No of users logged in


Remember you need the PROCESS privilege to be able to see everyone's
threads. show processlist will show you the logged-in users. Again, you'd
need to parse its output.
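
If you'd rather query than parse, something like this works on 5.1+ (the
GROUP BY is just one way to slice it):

mysql> SELECT USER, COUNT(*) AS connections
    -> FROM INFORMATION_SCHEMA.PROCESSLIST
    -> GROUP BY USER;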


 5. Other admin tasks


Do you really think we can help you with this description? :-)

Manuel.


Re: InnoDB corrupt after power failure

2012-10-04 Thread Manuel Arostegui
2012/10/4 Andrew Miklas and...@pagerduty.com

 Hi guys,

 I recently had a data corruption issue with InnoDB.  MySQL was shut down
 improperly (power failure), and when the system came back up, MySQL refused
 to start.  On inspection of the logs (see below), it looks like the
 tablespace became seriously corrupted.  In the end, I had to rebuild the
 slave using mysqldump.

 I'm curious what happened here, since I thought InnoDB wasn't supposed to
 become corrupted on an improper shutdown.  One possibility that we were
 exploring was that the filesystem journal setting was incorrect.  We were
 using ext3 with the journal set to writeback mode.  Is this a known bad
 config with InnoDB?


Hey Andrew,

it shouldn't be a biggie if you have a BBU. Do you guys use HW RAID + BBU?
What's your innodb_flush_log_at_trx_commit value?

Have you tried playing with the innodb_force_recovery option to at least get
the server started? That way you might be able to identify which table(s)
are corrupted and preventing the whole server from booting up.
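
A minimal sketch of what that looks like in my.cnf - start at 1 and only
raise the value step by step if the server still won't start; at 4 or above,
treat the data as read-only and dump what you can:

[mysqld]
innodb_force_recovery = 1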

Manuel


Re: InnoDB vs. other storage engines

2012-09-19 Thread Manuel Arostegui
2012/9/19 Mark Haney ma...@abemblem.com

 I hope this doesn't end in some kind of flame war.  I'm looking to
 optimize my tables (and performance in general) of the DB my web app is
 using.  I'm tweaking things a little at a time, but I'm curious as to what
 the rest of the MySQL list thinks about changing my storage engine from
 InnoDB to something else so I can optimize the tables on a regular basis.

 Is it worth the effort?  Any caveats?


Hi Mark,

It would depend on what your workload is: mostly writes, mostly reads, how
many writes/reads you expect, etc.
The best approach, from my point of view, would be to first tune your MySQL
server (if you haven't done so yet) before getting into engine/table
optimizations, which can be more complicated.

 Manuel.


Re: Temporary table creation fails

2012-09-10 Thread Manuel Arostegui
2012/9/10 Machiel Richards - Gmail machiel.richa...@gmail.com

 Hi,

  permissions are confirmed as being correct. Other applications and
 users are currently writing files to this directory yes.


Have you tried su - mysql and touch /tmp/test? (if your mysql user has
shell...)

Good luck!
Manuel.


Re: Understanding Slow Query Log

2012-09-05 Thread Manuel Arostegui
2012/9/5 Adarsh Sharma eddy.ada...@gmail.com

 Actually that query is not my concern.

 I have a query that is taking a lot of time.
 Slow log output:
 # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
 # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
 # Attribute      total     min     max     avg     95%  stddev  median
 # ============ ======= ======= ======= ======= ======= ======= =======
 # Exec time      80887s   192us   2520s    415s   1732s    612s     80s
 # Lock time        13ms       0   133us    68us   103us    23us    69us
 # Rows sent     430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
 # Rows examine   32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
 # Query size     65.45k       6     577  343.70  563.87  171.06  246.02

 In the logs output :
 # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
  Rows_examined: 471150
 SET timestamp=1346655789;
 SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
 t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
 t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
 ORDER BY t0.created_time DESC LIMIT 0, 50;

 The table is near about 30 GB and growing day by day.


Just out of curiosity, is that table very fragmented? 471k examined rows is
quite a lot, but 488 seconds of query time is insane. It seems you're
reading from disk too much!
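
For that query shape (filter on one column, sort by another, small LIMIT), a
composite index usually avoids scanning everything. A sketch, assuming the
schema allows it - the index name is made up:

mysql> ALTER TABLE WF_1
    ->   ADD INDEX idx_bean_created (bean_type, created_time);
mysql> EXPLAIN SELECT t0.id FROM WF_1 t0
    -> WHERE t0.bean_type = 'Workflow'
    -> ORDER BY t0.created_time DESC LIMIT 0, 50;

With such an index the WHERE is satisfied by the prefix, and the ORDER BY
... DESC can walk the index instead of filesorting half a million rows.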



 Attaching the table definition and indexes output. I have an index on the
 bean_type column but can't understand why it examined all the rows of the
 table.


Where's the table's schema so we can give it a try?

Manu