From: machi...@rdc.co.za
To: mysql@lists.mysql.com
Subject: slow queries not being logged
Date: Tue, 23 Feb 2010 09:59:13 +0200
Good day all
I hope you can assist me with this one...
We have a client where the slow query log was disabled.
Hi All
I found my problem, and this was kind of a blonde moment for
me...
When configuring the log_slow_queries parameter, it was
configured as follows: log_slow_queries=1
Thus the file being created is called 1; the 1 does not mean
the log is enabled, it is taken as the log file name.
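For reference, a sketch of what was presumably intended (option names are real MySQL options; the paths and threshold are illustrative, not the poster's actual values):

```sql
-- my.cnf [mysqld] section. In 5.0-style syntax, log_slow_queries takes
-- an optional file-name argument, so a bare value becomes the file name:
--   log_slow_queries = 1     -- creates a log file literally named "1"
-- What was presumably intended:
--   log_slow_queries = /var/log/mysql/mysql-slow.log
--   long_query_time  = 2
-- In MySQL 5.1+ the equivalent dynamic variables can be set at runtime:
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
```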
The slow query log will also contain SQL statements which are not using
indexes (doing full table scans).
Those queries showing ZERO seconds may be running on small tables without
using indexes.
regards
anandkl
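The behaviour anandkl describes is controlled by a separate, real MySQL option; a minimal sketch:

```sql
-- In my.cnf, alongside the slow log settings:
--   log_queries_not_using_indexes = 1
-- Or at runtime (MySQL 5.1+):
SET GLOBAL log_queries_not_using_indexes = ON;
-- With this on, a full scan of a small table can be logged with a query
-- time of 0 seconds, as noted above.
```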
On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards machi...@rdc.co.za wrote:
Hi All
I found
That's very much gonna depend on what your selects look like. For example, a
low-cardinality but often-where'd field makes an interesting candidate, as
such a partitioning will take the size of your table scans down. If you know
that you'll mostly access just last month's data, partition on a date column.
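A sketch of the kind of date-based partitioning Johan is describing (table and column names are made up for illustration; MySQL 5.1+ RANGE partitioning):

```sql
-- Hypothetical orders table, partitioned by month so queries on recent
-- data only touch the latest partitions:
CREATE TABLE orders (
    order_id INT NOT NULL,
    created  DATE NOT NULL,
    amount   DECIMAL(10,2),
    -- the partitioning column must be part of every unique key:
    PRIMARY KEY (order_id, created)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p201001 VALUES LESS THAN (TO_DAYS('2010-02-01')),
    PARTITION p201002 VALUES LESS THAN (TO_DAYS('2010-03-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```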
Hi Jerry,
I guess modification of the table is needed! What are you trying to achieve by
partitioning?
If the primary key is rarely used, then maybe adding another column with a
numeric value based on `prod_id`, and adding that column to the primary key,
would work and at least let you do some
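A sketch of that workaround, with a hypothetical numeric bucket column derived from `prod_id` (all names and the bucket count are illustrative):

```sql
-- Add a numeric column derived from prod_id and fold it into the
-- primary key so it can serve as the partition key:
ALTER TABLE products
    ADD COLUMN prod_bucket INT NOT NULL DEFAULT 0;
UPDATE products SET prod_bucket = CRC32(prod_id) % 8;
ALTER TABLE products
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (prod_id, prod_bucket);
-- The table could then be partitioned, e.g.:
-- ALTER TABLE products PARTITION BY HASH (prod_bucket) PARTITIONS 8;
```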
You might want to read the comments to this posting:
http://www.bitbybit.dk/carsten/blog/?p=116
Several tools/methods for controlling and analyzing the slow query log are
suggested there.
Best,
/ Carsten
On Tue, 23 Feb 2010 14:09:30 +0530, Ananda Kumar anan...@gmail.com
wrote:
Securich - Security Plugin for MySQL
http://forge.mysql.com/wiki/Securich_-_Security_Plugin_for_MySQL
This Thursday (February 25th, 13:00 UTC - way earlier than usual!),
Darren Cassar will present Securich - Security Plugin for MySQL.
According to Darren, the author of the plugin, Securich is an
-----Original Message-----
From: John Daisley [mailto:mg_s...@hotmail.com]
Sent: Tuesday, February 23, 2010 6:07 AM
To: jschwa...@the-infoshop.com ; mysql@lists.mysql.com
Subject: RE: Partitioning
Hi Jerry,
I guess modification of the table is needed! What are you trying to achieve
by partitioning?
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Tuesday, February 23, 2010 5:52 AM
To: Jerry Schwartz
Cc: MY SQL Mailing list
Subject: Re: Partitioning
that's very much gonna depend on what your selects look like. For example, a
low-cardinality
Is there still no such thing anywhere for MySQL as an index analyser?
Many other databases have such a thing that will sit and monitor DB activity
over a period of time and suggest the exact indexes on each table, based on
what it has seen, to improve performance.
Anyone got that for MySQL?
On Tue, February 23, 2010 1:28 pm, Cantwell, Bryan wrote:
Is there still no such thing anywhere for MySQL as an index analyser?
Many other databases have such a thing that will sit and monitor DB activity
over a period of time and suggest the exact indexes on each table based on
what it has seen to
At 03:28 PM 2/23/2010, you wrote:
Is there still no such thing anywhere for MySQL as an index analyser?
Many other databases have such a thing that will sit and monitor DB activity
over a period of time and suggest the exact indexes on each table based on
what it has seen to improve performance.
Ya, that one is helpful... just trying to land on a solution like I've seen in
other DBs that have an index advisor that listens and creates what it thinks
are the perfect indexes... but thx...
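In the absence of such an advisor, the usual manual substitute is the slow query log (with log_queries_not_using_indexes enabled) plus EXPLAIN on the captured statements; a sketch with a made-up query:

```sql
-- Hypothetical statement captured from the slow log:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- A "type: ALL" (full table scan) with empty possible_keys suggests
-- a candidate index, e.g.:
-- CREATE INDEX idx_orders_customer ON orders (customer_id);
```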
From: mos [mo...@fastmail.fm]
Sent: Tuesday, February 23,
Sirs,
Because one table will hold a large amount of data and only the recent data
will be used for transactions, the rest of the old records remain the same
without any transactions. So we have decided to go for year-based storage;
here even old records can be taken out by join queries.
I hope
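The year-based layout described above could be sketched like this (table and column names are illustrative):

```sql
-- Old years sit in their own partitions; current transactions only hit
-- the latest one, and joins can still reach the historical rows:
CREATE TABLE txn_history (
    txn_id   BIGINT NOT NULL,
    txn_date DATE NOT NULL,
    PRIMARY KEY (txn_id, txn_date)
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(txn_date)) (
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION pcur  VALUES LESS THAN MAXVALUE
);
```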
I recently tried to run
INSERT INTO general_log SELECT * FROM mysql.general_log;
but that failed a few hours in because I ran out of disk space.
'SELECT COUNT(*) FROM general_log' returns 0, yet ibdata1 is still
49GB (started at 3GB before the INSERT; the source mysql.general_log,
a CSV table,
Your innodb data file just auto-extended until you either reached its max or
ran out of disk space if you had no max.
The only way I know to reduce it is to dump all the innodb tables, drop the
innodb data file and logs (and drop the innodb tables if you're using
file-per-table), restart mysql,
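The procedure described above can be sketched roughly as follows (paths and options are illustrative; take a full backup first, since this rebuilds the InnoDB system tablespace):

```sql
-- 1. Dump all InnoDB tables (from the shell):
--      mysqldump --all-databases > full.sql
-- 2. Stop mysqld, remove ibdata1 and ib_logfile*, and optionally set
--    innodb_file_per_table in my.cnf so future tables get their own files.
-- 3. Restart mysqld (a fresh, small ibdata1 is created) and reload:
--      mysql < full.sql
-- The server's own log table can simply be emptied beforehand:
SET GLOBAL general_log = 'OFF';
TRUNCATE TABLE mysql.general_log;
```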