On 25/09/14 15:37, Qiming Teng wrote:
Some weeks ago, I checked my then-latest devstack install and learned
this: event support in Ceilometer is only available for the sqlalchemy
backend; the mongodb backend was still under development. I have been
using MySQL for the past few weeks, and now I think I'm trapped by a
MySQL performance problem.
One or two Nova servers were launched and left idle for about 10 days.
Now I'm seeing a lot of data accumulated in the DB, and I wanted to
clean it up manually. Here is what I got:
mysql> select count(*) from metadata_text;
+----------+
| count(*) |
+----------+
| 25249913 |
+----------+
1 row in set (3.83 sec)
mysql> delete from metadata_text limit 1000;
Query OK, 1000 rows affected (0.02 sec)
mysql> delete from metadata_text limit 10000;
Query OK, 10000 rows affected (0.39 sec)
mysql> delete from metadata_text limit 100000;
Query OK, 100000 rows affected (2.31 sec)
mysql> delete from metadata_text limit 1000000;
Query OK, 1000000 rows affected (25.32 sec)
mysql> delete from metadata_text limit 2000000;
Query OK, 2000000 rows affected (1 min 16.17 sec)
mysql> delete from metadata_text limit 4000000;
Query OK, 4000000 rows affected (7 min 40.40 sec)
There were 25M records in this one table. The deletion time is reaching
an unacceptable level (over 7 minutes for 4M records), and it was not
increasing linearly. Maybe DB experts can show me how to optimize this?
Writes of bigger datasets will take non-linear time once the (possibly
default?) configs are outgrown. For instance, assuming metadata_text is
an InnoDB table, take a look at:
- innodb_log_file_size (warning: read the manual carefully before
  changing it)
Index maintenance can also become a limiting factor. I'm not sure
whether MySQL will use the sort buffer to help with this, but maybe try
increasing sort_buffer_size (just for the session doing the delete) and
see if it helps.
There are many (way too many) other parameters to tweak, but the ones
above are probably the best place to start.
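Another thing worth trying, independent of the server config: instead of
one huge DELETE, remove rows in fixed-size batches and commit between
batches, so each transaction's undo log and lock set stay small. A minimal
sketch below uses Python's sqlite3 purely as a stand-in for MySQL (the
batch size of 1000 is arbitrary); with MySQL the loop body would simply be
`DELETE FROM metadata_text LIMIT 1000` issued through your client library,
whereas sqlite needs a rowid subquery because its DELETE has no LIMIT by
default.

```python
import sqlite3

# Stand-in setup: a small metadata_text table with 10000 throwaway rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE metadata_text (id INTEGER PRIMARY KEY, value TEXT)")
cur.executemany("INSERT INTO metadata_text (value) VALUES (?)",
                [("meta",)] * 10000)
conn.commit()

BATCH = 1000
while True:
    # Delete one bounded batch of rows per transaction.
    cur.execute(
        "DELETE FROM metadata_text WHERE rowid IN "
        "(SELECT rowid FROM metadata_text LIMIT ?)",
        (BATCH,),
    )
    conn.commit()             # keep each transaction small
    if cur.rowcount < BATCH:  # a short (or empty) batch means we are done
        break

remaining = cur.execute("SELECT COUNT(*) FROM metadata_text").fetchone()[0]
print(remaining)              # 0
```

The total work is roughly the same, but no single transaction ever has
to roll up millions of row versions, which is where the non-linear blowup
tends to come from.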