Hi!
I use MySQL/InnoDB tables on AWS services.
In a small table (about 2M records) I changed some column datatypes from
unsigned int to decimal and from float to decimal.
I didn't change anything about the primary key or other indexes.
After the change (done without trouble), all my queries
Hi All,
Can someone please fill me in as to what I am seeing here... I have two
identical tables, with identical indexes, holding different records. Both
tables have about 15M records in them...
mysql> EXPLAIN SELECT ArticleID, DateObtained, DateAccessed, TimesAccessed
FROM IDXa ORDER BY DateAcces
04 1657 [Note] InnoDB: Sync to disk
2014-10-10 13:27:25 1657 [Note] InnoDB: Sync to disk - done!
2014-10-10 13:27:26 1657 [Note] InnoDB: Phase III - Flush changes to disk
2014-10-10 13:27:26 1657 [Note] InnoDB: Phase IV - Flush complete
2014-10-10 13:27:26 1657 [Note] InnoDB: "borrame"."creditLine" autoinc
value set to 87313435
After this, the indexes look empty:
I got an "interesting" problem with the creation of indexes on MyISAM
tables in MySQL 5.6.15 and MySQL 5.6.14 running on FreeBSD 8.4: for float
columns I am not able to create indexes.
Indexes on all other columns work just fine.
The problem occurred while I was loading data.
That's exactly what I thought when reading Michael's email, but tried
anyways, thanks for clarification :)
2012/10/16 12:57 -0400, Michael Dykman
your now() statement is getting executed for every row on the select. try
putting the phrase up front
as in:
set @ut= unix_timestamp(now())
and then use that in your statement.
Quote:
Functions that return the current date or time each are
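The suggestion can be sketched end-to-end against the query that appears later in this thread (a sketch only; `send_sms` and `priority_time` are taken from that message, and whether this actually changes the plan was disputed further down the thread):

```sql
-- Sketch of the suggested rewrite: compute the timestamp once into a
-- user variable instead of evaluating UNIX_TIMESTAMP(NOW()) per row.
SET @ut = UNIX_TIMESTAMP(NOW());

SELECT *
FROM send_sms FORCE INDEX (priority_time)   -- table/index names from the thread
WHERE time <= @ut
ORDER BY priority
LIMIT 0, 11;
```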
Interesting thought, but I get the same result.
# Query_time: 0.001769 Lock_time: 0.001236 Rows_sent: 0 Rows_examined: 0
use kannel;
SET timestamp=1350413592;
select * from send_sms FORCE INDEX (priority_time) where time<=@ut order by
priority limit 0,11;
The MySQL I'm using is 5.5.28 from dotd
Will do.
mysql> SHOW GLOBAL VARIABLES LIKE '%log%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| back_log
On 10/15/2012 7:15 PM, spameden wrote:
Thanks a lot for all your comments!
I did disable the query cache before testing, with
set query_cache_type=OFF
for the current session.
I will report this to the MySQL bugs site later.
First. What are all of your logging settings?
SHOW GLOBAL VARIAB
From: spameden [mailto:spame...@gmail.com]
Sent: Monday, October 15, 2012 3:29 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: mysql logs query with indexes used to the slow-log and not logging
if there is index in reverse order
Sorry, forgot to say:
mysql> show variables lik
Since the ORDER BY matches one of the indexes, it can avoid the sort and stop
with the LIMIT. However, if most of the rows failed the WHERE clause, this
could be the "wrong" choice.
That is, it is hard for the optimizer to get a query like this "right" every
time.
It's logging only due to:
mysql> show variables like '%indexes%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| log_queries_not_using_indexes | ON    |
+-------------------------------+-------+
1 row in set (0.00 sec)
INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes).
> Perhaps you should have SMALLINT UNSIGNED (2 bytes).
>
> * BIGINT takes 8 bytes -- usually over-sized.

> -Original Message-
> From: spameden [mailto:spame...@gmail.com]
> Sent: Monday, October 15, 2012 1:42 PM
> To: mysql@lists.mysql.com
> Subject: mysql logs query with indexes used to the slow-log and not
> logging if there is
Warnings: 0

> mysql> desc select * from send_sms_test where
> time<=UNIX_TIMESTAMP(NOW()) order by priority limit 0,11;
>
> +----+-------------+---------------+-------+---------------+---------------+---------+------+------+-------------+
> | id | select_type | table         | type  | possible_keys | key           | key_len | ref  | rows | Extra       |
> +----+-------------+---------------+-------+---------------+---------------+---------+------+------+-------------+
> |  1 | SIMPLE      | send_sms_test | index | time_priority | priority_time | 12      | NULL | *22* | Using where |
> +----+-------------+---------------+-------+---------------+---------------+---------+------+------+-------------+
And if both indexes are created, I no longer see this query in the
slow log.
Of course, if I disable log_quer
t0.status = 'KILLED'
OR t0.status = 'RUNNING')
AND t0.last_modified_time <= '2012-09-07 08:08:34')
AND t0.bean_type = 'ActionItems';
Change the `status` check to
AND t0.status IN ('SUSPENDED', 'KILLED', 'RUNNING')
Othe
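Applied to the truncated query above, the suggested rewrite would look roughly like this (a sketch: the full original query is cut off, and the table name `scd_table` is borrowed from the next message in the thread):

```sql
-- Sketch: collapse the OR chain on status into IN (...), which is
-- easier for the optimizer to match against an index on status.
SELECT t0.id
FROM scd_table t0
WHERE t0.status IN ('SUSPENDED', 'KILLED', 'RUNNING')
  AND t0.last_modified_time <= '2012-09-07 08:08:34'
  AND t0.bean_type = 'ActionItems';
```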
Hi all,
Currently I am doing performance tuning of some queries that show up as
very slow in my slow-query log. Below are samples of some
queries & the cardinality of the indexes :-
--- The queries below take more than 15 minutes to complete on a table
scd_table of size 7 GB
SELECT t0.id
On 05/07/2012 12:30 PM, Zhangzhigang wrote:
Thanks, I thought about this answer in the past, and I appreciate your reply.
How about the omelet?
What's your method?
--
RMA.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/m
" that aggregate data to make
"reports" more efficient. (I have seen 10x to 1000x performance improvement.)
Should we discuss this?
> -Original Message-
> From: Karen Abgarian [mailto:a...@apple.com]
> Sent: Monday, May 07, 2012 8:37 PM
> To: mysql@lists.mysql.c
Wednesday, May 9, 2012, 11:21 PM
Subject: RE: Re: Why is creating indexes faster after inserting massive data rows?
A BTree that is small enough to be cached in RAM can be quickly maintained.
Even the “block splits” are not too costly without the I/O.
A big file that needs sorting – bigger than ca
Benchmark _*your*_ case.
>
> *From:* Claudio Nanni [mailto:claudio.na...@gmail.com]
> *Sent:* Wednesday, May 09, 2012 8:34 AM
> *To:* Rick James
> *Cc:* Zhangzhigang; mysql@lists.mysql.com
> *Subject:* Re: Re: Why is creating indexes faster after inserting massive
> data rows?
will have to hit disk. If you are using normal disks, that is on the order
of 125 rows per second that you can insert – Terrible! Sortmerge is likely
to average over 10,000.

From: Zhangzhigang [mailto:zzgang_2...@yahoo.com.cn]
Sent: Tuesday, May 08, 2012 9:13 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: Why is creating indexes faster after inserting massive data
James...
>* By doing all the indexes after building the table (or at least all the
>non-UNIQUE indexes), "sort merge" can be used. This technique has been highly
>optimized over the past half-century, and is more efficient.
I have a question about "sort merge":
Oh... I thought that it used its own buffer cache, the same as InnoDB. I
was mistaken about this, thanks!
From: Karen Abgarian
To: mysql@lists.mysql.com
Sent: Wednesday, May 9, 2012, 2:51 AM
Subject: Re: Re: Re: Re: Why is creating indexes faster after inse
Hi,
If MyISAM tables were being written directly to disk, the MyISAM tables would
be so slow that nobody would ever use them. That's the cornerstone of their
performance, that the writes do not wait for the physical I/O to complete!
On May 8, 2012, at 3:07 AM, Johan De Meersman wrote:
>
Ok, thanks for your help.
From: Johan De Meersman
To: Zhangzhigang
Cc: mysql@lists.mysql.com; Karen Abgarian
Sent: Tuesday, May 8, 2012, 6:07 PM
Subject: Re: Re: Re: Re: Why is creating indexes faster after inserting massive data
rows?
- Original Message -
> From: "Zhangzhigang"
>
> As far as I know, MySQL writes the data to disk directly and does not
> use the OS cache while the table is being updated.
If it were to use the OS cache for reading but not writing, then the OS cache
would be inconsistent with the underlying file
- Original Message -
> From: "Zhangzhigang"
>
> MySQL does not use the approach you describe, which is
> complicated.
>
> I agree with Johan De Meersman.
Umm... It's not a matter of who you agree with :-) Karen's technical detail is
quite correct; I merely presented a simplified pic
: Why is creating indexes faster after inserting massive data
rows?
Honestly, I did not understand that. I did not say anything about being
complicated. What does mysql not use, caching??
Judging by experience, creating a unique index on say, a 200G table could be a
bitter one.
On 07
Karen...
MySQL does not use the approach you describe, which is complicated.
I agree with Johan De Meersman.
From: Karen Abgarian
To: mysql@lists.mysql.com
Sent: Tuesday, May 8, 2012, 1:30 AM
Subject: Re: Re: Why is creating indexes faster after inserting massive
Good point about the key buffer. I was only thinking about the table updates for
MyISAM, not indexes. Getting stuck waiting for a buffer flush could also
happen. However, for the table blocks this would be the same issue as with
a load followed by an index rebuild, and for the indexes, it will have
to read the table and to sort all rows by the index key. The latter process will be the
most determining factor in answering the original question, because for
large tables the sort will have to do a lot of disk I/O. The point I am
trying to make is there will be situations when creating indexes and
Subject: Re: Re: Why is creating indexes faster after inserting
> massive data rows?
>
> - Original Message -
>
> > From: "Zhangzhigang"
>
> > Ok, Creating the index *after* the inserts, the index gets created in
> > a single operation.
> > But t
* Batch INSERTs run faster than one-row-at-a-time, but this is unrelated to
INDEX updating speed.
* The cache size is quite important to dealing with indexing during INSERT; see
http://mysql.rjweb.org/doc.php/memory
* Note that mysqldump sets up for an efficient creation of indexes after
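The mysqldump behavior mentioned in the last bullet looks like this in a dump file (sketch with a hypothetical table `t`; the DISABLE/ENABLE KEYS pair defers only non-unique MyISAM index maintenance):

```sql
-- mysqldump wraps the bulk INSERTs so non-unique indexes are not
-- maintained row by row, then are rebuilt in one pass afterwards:
/*!40000 ALTER TABLE `t` DISABLE KEYS */;
INSERT INTO `t` VALUES (1,'a'),(2,'b'),(3,'c');
/*!40000 ALTER TABLE `t` ENABLE KEYS */;
```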
- Original Message -
> From: "Zhangzhigang"
> Ok, creating the index *after* the inserts, the index gets created in
> a single operation.
> But the indexes would have to be updated row by row after the data rows have
> all been inserted. Does it work this way?
No,
Sent: Monday, May 7, 2012, 4:59 PM
> Subject: Re: Re: Why is creating indexes faster after inserting massive data
> rows?
>
> On 2012/05/07 10:53, Zhangzhigang wrote:
> > johan
> >> Plain and simple: the indices get updated after every insert statement,
> > whereas if y
Thanks, i thought about this answer in the past, and i appreciate your reply.
From: Alex Schaft
To: mysql@lists.mysql.com
Sent: Monday, May 7, 2012, 4:59 PM
Subject: Re: Re: Why is creating indexes faster after inserting massive data rows?
On 2012/05/07 10:53
Re: Why is creating indexes faster after inserting massive data rows?
Creating the index in one pass is one macro-sort operation;
updating the index at every row repeats that operation over and over.
If you do not understand the difference, I recommend reading some basics
about sorting algori
* the inserts, the index gets created in a single
operation.
But the indexes would have to be updated row by row after the data rows have all
been inserted. Does it work this way?
So I cannot see the difference in overhead between the two ways.
My simplified 2c. When inserting rows with active indexes one by
n a single
operation.
But the indexes would have to be updated row by row after the data rows have all been
inserted. Does it work this way?
So I cannot see the difference in overhead between the two ways.
From: Johan De Meersman
To: Zhangzhigang
Cc: mysql@lists.mysql.
- Original Message -
> From: "Zhangzhigang"
>
> Creating indexes after inserting massive data rows is faster than
> creating them before inserting the data rows.
> Please tell me why.
Plain and simple: the indices get updated after every insert statement, whereas
if you only cre
Version: MySQL 5.1
Engine: MyISAM.
The indexes are normal ones, neither primary keys nor unique keys.
I should describe my MySQL question clearly.
When inserting massive data rows into a table which needs indexes, I
can create the indexes before inserting the data rows; another way is that I can
Which version of MySQL are you using?
Is this a secondary index?
On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang wrote:
> hi all:
>
> I have a question:
>
> Creating indexes after inserting massive data rows is faster than creating
> them before inserting the data rows.
> Please tell me why.
>
At 01:58 PM 10/7/2011, you wrote:
Do you have any good documentation with regard to creating indexes?
Also, information on the EXPLAIN statement and what the desired
result of the EXPLAIN statement would be?
This might help:
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
http
That cleared it up for me. Thanks!
On 10/07/2011 03:06 PM, Jerry Schwartz wrote:
>-Original Message-
>From: Reindl Harald [mailto:h.rei...@thelounge.net]
>Sent: Friday, October 07, 2011 12:21 PM
>To: mysql@lists.mysql.com
>Subject: Re: MySQL Indexes
>
>but could this not be called a bug?
>
[JS] No.
Think of two telephone books: one is sorted
may give you some hints, and I
> >>>>> think it's maatkit that has a tool to run a "query log" to find good
> >>>>> candidates - I've seen it somewhere, I believe
> >>>>>
> >>>>> Just remember t
Do you have any good documentation with regard to creating indexes? Also,
information on the EXPLAIN statement and what the desired result of the
EXPLAIN statement would be?
On 7 Oct 2011, at 17:10, Michael Dykman wrote:
> How heavily a given table is queried does not directly affect the index s
>>>>> Just remember that idx_a(field_a,field_b) is not the same, and is not
>>>>> considered for use, the same way as idx_b(field_b,field_a).
>>>>>
>>>>> -NT
>>>>>
>>>>> On 07-10-2011 00:22, Michael Dykman wrote:
>>>>>
>>>>>> Only one index
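The point about column order can be illustrated with a hypothetical table (a sketch; table and column names are made up to match the quoted index definitions):

```sql
CREATE TABLE t (field_a INT, field_b INT);
CREATE INDEX idx_a ON t (field_a, field_b);
CREATE INDEX idx_b ON t (field_b, field_a);

-- idx_a is a candidate here (its leftmost column field_a is constrained);
-- idx_b cannot be used as a lookup for this predicate:
SELECT * FROM t WHERE field_a = 1 AND field_b > 10;

-- The reverse holds when only field_b is constrained:
SELECT * FROM t WHERE field_b = 2;
```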
How heavily a given table is queried does not directly affect the index
size, only the number and depth of the indexes.
No, it is not that unusual to have the index file bigger. Just make sure
that every index you have is justified by the queries you are making against
the table.
- md
On Fri
to look at the queries you intend to run against the
system and construct indexes which support them.
- md
On Thu, Oct 6, 2011 at 2:35 PM, Neil Tompkins
wrote:
Maybe that was a bad example. If the query was name = 'Red', what index
should I create?
Should I create an index of all
Only one index at a time can be used per query, so neither strategy is
optimal. You need to look at the queries you intend to run against the
system and construct indexes which support them.
- md
On Thu, Oct 6, 2011 at 2:35 PM, Neil Tompkins
wrote:
> Maybe that was a bad example. If
Hi,
Can anyone help and offer some advice with regard to MySQL indexes? Basically
we have a number of different tables, all of which have the obvious primary
keys. We then have some queries using JOIN statements that run more slowly than
we want. How many indexes are recommended per table? For
Sundar ; mysql@lists.mysql.com
Sent: Tue, August 10, 2010 5:03:44 PM
Subject: Re: Dropping ALL indexes from a database / not just a table?
auto_increment is only allowed on primary-keyed columns. I expect it
is not allowing you to drop the primary key because that column has
the auto_increment attribute. Dro
Hi Michael and all, ok so I did some digging around and I still can't find why
I can't drop the last few indexes.
mysql> SELECT COUNT(1) FROM INFORMATION_SCHEMA.STATISTICS WHERE table_schema =
'db_Market' AND table_name = 'dbt_Fruit' and index_name = '
chael Dykman
To: Nunzio Daveri
Cc: Anirudh Sundar ; mysql@lists.mysql.com
Sent: Tue, August 10, 2010 3:17:48 PM
Subject: Re: Dropping ALL indexes from a database / not just a table?
This should give you a good starting point (not tested):
select distinct concat('ALTE
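The truncated generator above is presumably building ALTER TABLE ... DROP INDEX statements out of INFORMATION_SCHEMA; a sketch of that idea (equally untested; the schema name is taken from elsewhere in the thread):

```sql
-- Emit one DROP INDEX statement per non-PRIMARY index in the schema;
-- run the generated statements in a second pass.
SELECT DISTINCT
       CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` DROP INDEX `', index_name, '`;')
FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_schema = 'db_Market'
  AND index_name <> 'PRIMARY';
```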
Thanks for the feedback. What I am trying to do is two things:
1. Remove all indexes and make the database smaller to copy and move to another
prod box. Currently my indexes are in the double digit GB! Yikes ;-)
2. Remove all indexes so I can find out which ones are needed then tell mysql
to
At 01:06 AM 8/10/2010, you wrote:
Hello Nunzio,
Instead of Dropping a index, you can disable the indexes and get the work
done and re-enable them.
Disabling keys will NOT disable Primary or Unique keys. They will still be
active.
Mike
If you are ok with this then run the below as a
MUSER="username"
MPASS="password"
DATABASE="dbname"
for db in $DATABASE
do
echo &
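The script is cut off at the echo; its apparent intent can be sketched as follows (a sketch only: credentials and the database name are placeholders, and DISABLE KEYS affects only non-unique MyISAM indexes):

```shell
#!/bin/sh
# Sketch: disable non-unique indexes on every table of one database.
MUSER="username"
MPASS="password"
DATABASE="dbname"

for db in $DATABASE; do
  for table in $(mysql -u "$MUSER" -p"$MPASS" -N -B -e "SHOW TABLES FROM $db"); do
    echo "Disabling keys on $db.$table"
    mysql -u "$MUSER" -p"$MPASS" -e "ALTER TABLE $db.$table DISABLE KEYS"
  done
done
```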
Hello gurus, is there a way / script that will let me DROP ALL the indexes in a
single database? For example, let's say my database is called db_Animals, and
inside db_Animals there are 97 tables. Is there a SINGLE command or a perl
script of some kind that can read all the MYI files, remove the
A boxes and noticed that the
> innodb database is 190Gb in size BUT the worrying issue is that the
indexes are
> 30GB in size!!! When I hit this server hard, it tanks on memory but still
> performs, slower of course ;-)
Having indexes which are larger than RAM is (in itself) not critic
Thanks again :-)
Nunzio
From: Joerg Bruehe
To: Nunzio Daveri ; mysQL General List
Sent: Fri, July 30, 2010 1:31:54 PM
Subject: Re: Indexes larger than RAM (was: Do you know who can answer this
question I posted yesterday please?)
Hi!
I am no InnoDB and
Having indexes which are larger than RAM is (in itself) not critical.
IMO, it becomes bad only when accesses to these indexes ar
Subject: Re: combined or single indexes?
Thanks for the information Jerry. Just to confirm, you mentioned
"if you only need one key then you only need one key". My question
was that this particular query was using SELECT against a primary
key and other fields which a
From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
Sent: Thursday, July 22, 2010 4:50 PM
To: Jerry Schwartz
Cc: Shawn Green (MySQL); mysql@lists.mysql.com
Subject: Re: combined or single indexes?
Thanks for the information Jerry. Just to confirm, you mentioned "if you only
nee
>-Original Message-
>From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
>Sent: Thursday, July 22, 2010 3:39 PM
>To: Shawn Green (MySQL)
>Cc: mysql@lists.mysql.com
>Subject: Re: combined or single indexes?
>
>Thanks for your reply, and sorry for not verifying
remove all doubt?

http://dev.mysql.com/doc/refman/5.1/en/explain-output.html

ALL

A full table scan is done for each combination of rows from the previous
tables. This is normally not good if the table is the first table not
marked const, and usually very bad in all other cases. Normally, you can
avoid ALL by adding indexes that enable row retrieval from the table
based on constant values or column values from earlier tables.
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, I
condition in a where clause for ?
>>
>
> Neil,
> Keep in mind that the more indexes you add to a table, the slower your
> inserts will become, because MySQL needs to maintain the indexes. I would only
> consider adding indexes for the slow queries. You can use the
At 12:43 AM 7/9/2010, you wrote:
On Fri, Jul 9, 2010 at 7:30 AM, Neil Tompkins
wrote:
>
> Should we be looking to create an index for all fields that we might
> perform a select condition on in a where clause?
Neil,
Keep in mind that the more indexes you add to a table, t
On Fri, Jul 9, 2010 at 7:30 AM, Neil Tompkins
wrote:
>
> Should we be looking to create an index for all fields that we might
> perform a select condition on in a where clause?
It's a bit of trial and error; you have to weigh the number of times you
select based on a potential index vs. the
Should we be looking to create an index for all fields that we might
perform a select condition on in a where clause?
On 9 Jul 2010, at 05:59, Johan De Meersman wrote:
As many as you need, but no more :-)
The right indexes give you a boost in select performance, but every
index
As many as you need, but no more :-)
The right indexes give you a boost in select performance, but every index
also needs to be updated when your data changes.
On Thu, Jul 8, 2010 at 11:25 PM, Neil Tompkins wrote:
> How many indexes are recommended per table ??
>
> On
How many indexes are recommended per table ??
On 7 Jul 2010, at 06:06, "Octavian Rasnita" wrote:
Hi,
MySQL can use a single index in a query as you've seen in the result
of explain.
Of course it is better to have an index made of 2 or more columns
because it wil
that case the query will use only the index,
without getting data from the table.
--
Octavian
- Original Message -
From: "Bryan Cantwell"
To:
Sent: Tuesday, July 06, 2010 6:41 PM
Subject: combined or single indexes?
Is there a benefit to a combined index on a tabl
Hi Bryan, all!
Bryan Cantwell wrote:
> Is there a benefit to a combined index on a table? Or are multiple single
> column indexes better?
This is a FAQ, but I'm not aware of a place to point you for the answer.
>
> If I have table 'foo' with columns a, b, and c.
Is there a benefit to a combined index on a table? Or are multiple
single-column indexes better?
If I have table 'foo' with columns a, b, and c. I will have a query
like:
select c from foo where a in (1,2,3) and b < 12345;
Is index on a,b better in any way than an a index and
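For the query in the question, the two strategies can be written out (a sketch using the names from the question):

```sql
CREATE TABLE foo (a INT, b INT, c INT);

-- Combined: one index that matches both predicates of
--   SELECT c FROM foo WHERE a IN (1,2,3) AND b < 12345;
-- with c appended it is also covering (the query can be answered
-- from the index alone, without touching table rows):
CREATE INDEX idx_abc ON foo (a, b, c);

-- Single-column: the optimizer normally picks only one of these
-- for that query:
CREATE INDEX idx_a ON foo (a);
CREATE INDEX idx_b ON foo (b);
```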
filesort
>
>
> The bad part is "Using filesort", and I thought that this is because it
> doesn't like varchar or char columns for indexes, so I tried to use columns
> that contain integers:
>
> mysql> explain select * from test where id_symbol=2 order by id
KEY `id_symbol` (`id_symbol`),
> KEY `id_market` (`id_market`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
So you have a table with 5 columns, one being the primary key, and
separate single-column indexes on the other 4 columns.
>
> The search query is:
>
> mysql> explain select * from test where symbol=
g filesort", and I thought that this is because it doesn't
like varchar or char columns for indexes, so I tried to use columns that
contain integers:
mysql> explain select * from test where id_symbol=2 order by id_market limit
20\G
*************************** 1. row ***************************
> > I am loading 35 million rows of data into an empty MyISAM table. This
> table
> > has 1 primary key (AutoInc) and 1 unique index and 2 non-unique indexes.
> >
> > Is it going to be any faster if I remove the indexes from the table
> before
> > loading th
1 - 100 of 706 matches