The query SELECT * FROM XYX WHERE VAL IN ('BLUE','RED','GREEN'); will be
faster if the field is an ENUM.
On Fri, Nov 16, 2012 at 12:36 AM, Anupam Karmarkar
sb_akarmar...@yahoo.com wrote:
Hi All,
Consider a scenario: I have a table XYZ which contains the following values:
BLUE
RED
GREEN
NULL
To: mysql@lists.mysql.com
Subject: Query Optimization
Hi All,
Consider a scenario: I have a table XYZ which contains the following values: BLUE, RED,
GREEN, NULL
The following are queries we can use to get these values:
1. SELECT * FROM XYX WHERE VAL IN ('BLUE','RED','GREEN'); 2. SELECT * FROM
XYZ WHERE VAL
I have a table with 24 million rows, and I need to figure out how to optimize a
query. It has to do with mac addresses and radius packets - I want to see the
# of connections and the min/max date. So I basically want all this data:
select cpe_mac,count(*) c,min(r3_dt) mindt,max(r3_dt) maxdt,
Your outer query, select cpe_mac,max(r3_dt) from rad_r3cap, is doing a full
table scan; you might want to check on this and add a WHERE condition to
use an indexed column.
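The per-mac count plus min/max date being described reduces to a single GROUP BY. A minimal sketch of the pattern (sqlite3 standing in for MySQL; the rad_r3cap rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rad_r3cap (cpe_mac TEXT, r3_dt TEXT)")
conn.executemany("INSERT INTO rad_r3cap VALUES (?, ?)", [
    ("aa:bb:cc", "2011-01-01"),
    ("aa:bb:cc", "2011-03-01"),
    ("dd:ee:ff", "2011-02-01"),
])

# one row per mac: number of connections plus first and last date seen
rows = conn.execute(
    "SELECT cpe_mac, COUNT(*) AS c, MIN(r3_dt) AS mindt, MAX(r3_dt) AS maxdt "
    "FROM rad_r3cap GROUP BY cpe_mac ORDER BY cpe_mac").fetchall()
print(rows)
```

With 24 million rows, an index on (cpe_mac, r3_dt) lets the min/max come straight off the index instead of a full scan.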
On Fri, Sep 23, 2011 at 12:14 AM, supr_star suprstar1...@yahoo.com wrote:
I have a table with 24 million rows, I need to
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <=
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?
Andy
On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps bphe...@gls.com wrote:
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not sure how I could
archive/cache the necessary data that would be any more efficient than simply
using the database directly.
On 09/08/2011 02:16 PM,
I don't think I saw any query timings in the emails (maybe I missed them).
What version of MySQL are you currently using?
What does the explain look like when you remove the limit 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your
If you're running version 5.1+ you may wanna take a look at table partitioning
options you may have.
On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not
Andy,
The queries take minutes to run. MySQL is 5.1.54 and it's running on Ubuntu
server 11.04. Unfortunately the machine only has 2GB of RAM but no other major
daemons are running on the machine. We are running RAID 1 (mirroring) with 1TB
drives. The tables in question here are all
Correct me if I'm wrong. You're wanting to get all records that have an
open_date or a close_date between two times.
If that's correct, you might be able to get an index_merge by doing a query
like:
WHERE ((starting time) <= open_dt <= (ending time)) OR ((starting
time) <= close_dt <= (ending time))
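The range-OR condition above, and the UNION rewrite discussed later in the thread, can be sketched as follows (sqlite3 standing in for MySQL; the sessions table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, open_dt TEXT, close_dt TEXT)")
conn.executemany("INSERT INTO sessions VALUES (?, ?, ?)", [
    (1, "2011-08-30 01:00:00", "2011-08-30 02:00:00"),  # both ends in range
    (2, "2011-08-29 23:00:00", "2011-08-30 01:00:00"),  # only close_dt in range
    (3, "2011-08-30 11:00:00", "2011-08-30 13:00:00"),  # only open_dt in range
    (4, "2011-08-29 01:00:00", "2011-08-29 02:00:00"),  # outside the range
])
start, end = "2011-08-30 00:00:00", "2011-08-30 12:36:53"

# OR form: one range predicate per column, hard for the optimizer to index
or_rows = conn.execute(
    "SELECT id FROM sessions "
    "WHERE (open_dt BETWEEN ? AND ?) OR (close_dt BETWEEN ? AND ?) "
    "ORDER BY id", (start, end, start, end)).fetchall()

# UNION form: two separately indexable range scans; UNION deduplicates
union_rows = conn.execute(
    "SELECT id FROM sessions WHERE open_dt BETWEEN ? AND ? "
    "UNION "
    "SELECT id FROM sessions WHERE close_dt BETWEEN ? AND ? "
    "ORDER BY id", (start, end, start, end)).fetchall()

print(or_rows)     # [(1,), (2,), (3,)]
print(union_rows)  # identical result; row 1 appears only once
```

Each branch of the UNION can use an index on its one column, which is what made the rewritten query fast in this thread.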
Mihail,
I have considered this but have not yet determined how best to go about
partitioning the table. I don't think partitioning by dst_address or
src_address would help because most of the queries do not filter on IP address
(except very specific queries where the end-user is searching
Thanks for the idea Derek, however given the following query my EXPLAIN output
is identical:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
Partitioning isn't a bad idea for this however I'm still thinking about your
dataset size and possible hardware limitations. It's not likely going to fit
into relevant buffers/memory so you're going to be on disk more than you
want. You're probably creating temporary tables like crazy and I would
How about:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
Andrew,
Generally there is only 1 user performing the complicated SELECT query at a time, however
the background process that fills the table is constantly doing a fast SELECT (0.3
seconds) and a subsequent UPDATE. Basically whenever a connection is closed on the
firewall, the bg process
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast (0.0007 seconds).
Question: if a record's open_dt is within the range AND its close_dt is
within the range as well, will the UNION output the record twice?
From the manual: The default behavior for UNION is that duplicate rows are
removed from the result.
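A one-row demonstration of that default behavior (sqlite3 standing in for MySQL; UNION ALL shown for contrast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, a INTEGER, b INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 5, 5)")  # matches both branches

union = conn.execute(
    "SELECT id FROM t WHERE a = 5 UNION SELECT id FROM t WHERE b = 5").fetchall()
union_all = conn.execute(
    "SELECT id FROM t WHERE a = 5 UNION ALL SELECT id FROM t WHERE b = 5").fetchall()

print(len(union))      # 1 -- UNION removes the duplicate
print(len(union_all))  # 2 -- UNION ALL keeps it
```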
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast
Ah I see. Well thanks for your assistance!
-Brandon
On 09/08/2011 05:21 PM, Mihail Manolov wrote:
From the manual: The default behavior for UNION is that duplicate rows are removed
from the result.
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified
On Aug 30, 2011 6:46 PM, Brandon Phelps wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
12:36:53')
In that case your logic here
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
Hello,
I have the following query I'd like to optimize a bit:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
There are a few things going on, but mainly it is the ORs that are killing
you.
As your OR requires examining two distinct columns, both of equal relevance
to the query, MySQL is left with no choice but to do a full table scan on
what might be (at a guess) a very large table. No amount of
Hello all,
I am using the query below and variations of it to query a database with
a TON of records. Currently the database has around 11 million records
but it grows every day and should cap out at around 150 million.
I am curious if there is any way I can better optimize the below query,
Thanks Singer, this took my query down to 0.0007, perfect! I wasn't
aware a single index of multiple columns would work when one of the
columns was in the WHERE clause and the other in the ORDER BY clause.
Learn something new every day I guess!
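The effect can be sketched with a composite index whose leading column is filtered by equality and whose second column supplies the sort order (sqlite3 standing in for MySQL 5.1; table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conn_log (state TEXT, close_dt TEXT)")
# one index covering both the WHERE column and the ORDER BY column
conn.execute("CREATE INDEX idx_state_close ON conn_log (state, close_dt)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM conn_log WHERE state = 'open' ORDER BY close_dt").fetchall()
details = " ".join(row[-1] for row in plan)
print(details)  # the index satisfies both the filter and the sort
```

Because rows with equal state are stored in close_dt order inside the index, no separate sort (filesort / temp b-tree) is needed.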
On 08/10/2011 02:03 PM, Singer X.J. Wang
On 8/10/2011 1:01 PM, Brandon Phelps wrote:
Hello all,
I am using the query below and variations of it to query a database
with a TON of records. Currently the database has around 11 million
records but it grows every day and should cap out at around 150 million.
I am curious if there is
From: Brandon Phelps bphe...@gls.com
I am curious if there is any way I can better optimize the below query, as
currently it takes this query around 10 seconds to run but I am sure this
will get slower and slower as the database grows.
You need an index on `close_dt`.
SELECT
It may only be returning 51 rows but it's having to read significantly more.
Get rid of the derived table join if possible. Something like
SELECT TAP.ID, M.UID, TAP.FirstName, TAP.MI, TAP.LastName,
TAP.State,
TAP.SchoolName, TAP.StateApproved, TAP.DiplomatApproved, C.ChapterType,
I have the following query:
SELECT TAP.ID, M.UID, TAP.FirstName, TAP.MI, TAP.LastName, TAP.State,
TAP.SchoolName, TAP.StateApproved, TAP.DiplomatApproved, C.ChapterType,
S.Region, S.District,Pts.TotPoints
FROM TorchAwardParticipants TAP
JOIN Members M On M.ID=TAP.CurrentMemberID
JOIN
Can you please show us the indexes on both the tables?
regards
anandkl
On Tue, Mar 16, 2010 at 11:47 PM, Jesse j...@msdlg.com wrote:
I have the following query:
SELECT TAP.ID, M.UID, TAP.FirstName, TAP.MI,
TAP.LastName, TAP.State,
TAP.SchoolName, TAP.StateApproved,
I feel like there should be a better way to do this... So I'm hoping
someone will be able to advise.
We have contacts that belong to organizations. We also have a document
tracker. As a holdover from VFP, you can have up to six organization ids
on the document and up to six contact ids. Right now
From: Ciaran Lee [mailto:ciaran@gmail.com]
Sent: Tuesday, September 22, 2009 1:32 PM
To: mysql@lists.mysql.com
Subject: query optimization question (my struggle against 'using temporary;
using filesort')
Hi,
I hope this is the right place to ask a question about query optimization.
Background:
I have
Those queries
Regards,
Gavin Towey
-Original Message-
From: Ciaran Lee [mailto:ciaran@gmail.com]
Sent: Tuesday, September 22, 2009 1:32 PM
To: mysql@lists.mysql.com
Subject: query optimization question (my struggle against 'using temporary;
using filesort')
Hi,
I hope
Hi,
I hope this is the right place to ask a question about query optimization.
Background:
I have a database which has events, which occur in places (listings). Places
have addresses, and addresses belong to a city. I can select the latest
event within a particular city very efficiently (less
Hi!
bharani kumar wrote:
Then which DB engine is better.
Which fruit is better, an apple or a banana?
And if X is better, why do farmers still grow Y, and why do people buy it?
More serious:
Different engines offer different features, as described in the manual.
The most obvious
-Original Message-
From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
Sent: Wednesday, August 26, 2009 5:08 AM
To: mysql@lists.mysql.com
Cc: bharani kumar
Subject: Re: Hi Query Optimization steps
Hi!
bharani kumar wrote:
Then which DB engine is better.
Which fruit
Then which DB engine is better.
Thanks
On Mon, Aug 24, 2009 at 8:16 PM, mos mo...@fastmail.fm wrote:
At 05:01 AM 8/22/2009, you wrote:
Hi all
Tell me a few query optimization tips.
For example, some people say to avoid using too many AND conditions in
the query.
Huh
At 05:01 AM 8/22/2009, you wrote:
Hi all
Tell me a few query optimization tips.
For example, some people say to avoid using too many AND conditions in
the query.
Huh? You need to carefully plan your table structure and use indexes
sparingly on the columns you are using
Hi all
Tell me a few query optimization tips.
For example, some people say to avoid using too many AND conditions in
the query.
Please tell me this sort of very basic thing ...
Also tell me, from an optimization point of view, which database engine is best.
Thanks
Bharanikumar
The index hint is not in production code. I was trying to force it to use
the index even when using the OR clause; I meant to take that out before I
sent the email.
The table structure is:
CREATE TABLE `customer` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ssn` varchar(32) DEFAULT NULL,
Try a UNION instead of an OR condition.
http://dev.mysql.com/doc/refman/5.0/en/union.html
Johnny Withers wrote:
I have the following tables:
Customer: id,ssn
Customer_Id: id,customer_id,id_num
The customer table holds customers along with their SSN and the customer_id
table holds
Do you have an index on id_num? What sort of explain output do you
get when you don't use a query hint? Your USE INDEX hint may be
causing MySQL to ignore a better strategy.
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only in 5.0+ - on older versions of MySQL you may find a union
more efficient.
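SQLite's analogous multi-index OR optimization makes the idea concrete; in MySQL 5.0+ EXPLAIN would show the index_merge access type instead. A sketch (the customer table mirrors the schema described in this thread; index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, ssn TEXT, id_num TEXT)")
conn.execute("CREATE INDEX idx_ssn ON customer (ssn)")
conn.execute("CREATE INDEX idx_id_num ON customer (id_num)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM customer WHERE ssn = '123' OR id_num = 'DL99'").fetchall()
details = " ".join(row[-1] for row in plan)
print(details)  # both single-column indexes are used, one per OR branch
```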
And in newer
On Tue, Jan 13, 2009 at 7:07 PM, Baron Schwartz ba...@xaprb.com wrote:
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only in 5.0+ - on older
I have the following tables:
Customer: id,ssn
Customer_Id: id,customer_id,id_num
The customer table holds customers along with their SSN and the customer_id
table holds identifications for each customer (Driver's License, State
Issued ID, Student ID, etc). The SSN column from the customer table
First, you might want to move the WHERE...t3.int_a = some integer
condition into the join condition for t3.
You're not using anything from t4, so I'm not sure why you have that
table in your query.
You can suggest or force mysql to use an index if it's using the wrong
one:
I have this query:
SELECT DISTINCT t1.string FROM t1
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a
LEFT JOIN t4 ON t1.int_b=t4.int_b
WHERE
t1.string != '' AND
t2.string IS NULL AND
t3.int_a = some integer
ORDER BY
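The LEFT JOIN ... IS NULL pattern in the query above is an anti-join: it keeps the rows of t1 that have no matching row in t2. A minimal runnable sketch (sqlite3, invented sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (string TEXT)")
conn.execute("CREATE TABLE t2 (string TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("a",), ("b",), ("c",), ("",)])
conn.executemany("INSERT INTO t2 VALUES (?)", [("b",)])

# rows of t1 with no match in t2: the LEFT JOIN leaves t2.string NULL
rows = conn.execute(
    "SELECT DISTINCT t1.string FROM t1 "
    "LEFT JOIN t2 ON t1.string = t2.string "
    "WHERE t1.string != '' AND t2.string IS NULL "
    "ORDER BY t1.string").fetchall()
print(rows)  # [('a',), ('c',)]
```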
- What I'm trying to do:
Bit hard to explain. I've got a table consisting of ip addresses
(ipv4_src), destination addresses (ipv4_dst), and port numbers
(port_dst) and some other irrelevant columns. Ultimately my goal is to
find a linear order in a subset of ports. For example, host A connects
to
Optimize query
I've got one query, which I would like to improve a lot since it takes
very long (24 hours) to execute. Here is the idea:
1. Take the (ipv4_src, ipv4_dst, port_dst) tuples from the table (other
columns in this table are not mentioned for clarity) and remove all
duplicates. This is done by a subquery
Joris Kinable schrieb:
Optimize query
I've got one query, which I would like to improve a lot since it takes
very long (24 hours) to execute. Here is the idea:
1. Take the (ipv4_src, ipv4_dst, port_dst) tuples from the table (other
columns in this table are not mentioned for clarity) and remove all
duplicates.
Perrin Harkins schrieb:
On Jan 4, 2008 5:51 PM, Eben [EMAIL PROTECTED] wrote:
The resultset is paginated on the front end using the
SQL_CALC_FOUND_ROWS functionality...
Usually a bad idea:
http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
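A common alternative is to drop SQL_CALC_FOUND_ROWS and run a separate COUNT(*) with the same WHERE clause. A sketch of that pagination pattern (sqlite3, hypothetical posts table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO posts (flag) VALUES (?)",
                 [(i % 2,) for i in range(100)])

# page of results: LIMIT/OFFSET stops reading after 10 matching rows
page = conn.execute(
    "SELECT id FROM posts WHERE flag = 1 ORDER BY id LIMIT 10 OFFSET 0").fetchall()

# separate total for the pager; an index on flag can answer this cheaply
total = conn.execute("SELECT COUNT(*) FROM posts WHERE flag = 1").fetchone()[0]

print(len(page), total)  # 10 50
```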
On Jan 9, 2008 4:33 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
usually ... but i do not know of any index capable of having FULLTEXT and
'normal' fields in one index
Does that matter? It would have to be doing a full scan for
SQL_CALC_FOUND_ROWS to work out well.
- Perrin
--
MySQL
Perrin Harkins schrieb:
On Jan 9, 2008 4:33 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
usually ... but i do not know of any index capable of having FULLTEXT and
'normal' fields in one index
Does that matter?
yes, as written in the mentioned article the test is only relevant with
correct
On Jan 9, 2008 8:34 AM, Sebastian Mendel [EMAIL PROTECTED] wrote:
yes, as written in the mentioned article the test is only relevant with
correct used indexes, but MySQL does not use more than one index, so this
query cannot all be done with indexes
Well, first of all, MySQL 5 does use more
Your biggest problem is probably the subquery/IN you are performing.
You should change that to a join. And I don't know about using
SQL_CALC_FOUND_ROWS in a full text query that's not boolean, and you
shouldn't use it unless you have a LIMIT clause.
SELECT SQL_CALC_FOUND_ROWS
I left something out, the query looks like:
SELECT SQL_CALC_FOUND_ROWS table_1.id,table_2.*
FROM table_2
LEFT JOIN table_1 ON table_2.shared_id = table_1.shared_id
WHERE MATCH(table_2.field) AGAINST ('value')
AND table_2.current = 1
AND table_2.shared_id IN (SELECT shared_id FROM table_1_view)
On Jan 4, 2008 5:51 PM, Eben [EMAIL PROTECTED] wrote:
The resultset is paginated on the front end using the
SQL_CALC_FOUND_ROWS functionality...
Usually a bad idea:
http://www.mysqlperformanceblog.com/2007/08/28/to-sql_calc_found_rows-or-not-to-sql_calc_found_rows/
- Perrin
Hi,
I have a query that has to run on a full text indexed table with many
millions of records. I'm trying to figure out some optimizations for
it. Here's the general query:
SELECT SQL_CALC_FOUND_ROWS table_1.id,table_2.*
FROM table_2
LEFT JOIN table_1 ON table_2.shared_id =
I've never used this feature before on my application.
On Jan 5, 2008 7:06 AM, Perrin Harkins [EMAIL PROTECTED] wrote:
On Jan 4, 2008 5:51 PM, Eben [EMAIL PROTECTED] wrote:
The resultset is paginated on the front end using the
SQL_CALC_FOUND_ROWS functionality...
Usually a bad idea:
I need to check a date_time value in a number of tables for
up-to-date-ness.
The following query is part of a larger PHP script that runs as a cron job
every 10 minutes:
query
select
case
# If within 2 minutes, do nothing.
when (unix_timestamp() -
Hi All,
How can I optimize the following 3-way JOIN SQL query given the following
schema:
SELECT
ORDER.ID, ORDER.STATUS, ORDER.ORDER_TIME, ORDER_ITEM.ID, ORDER_ITEM.QUANTITY,
PRODUCT.SIZE, PRODUCT.SHAPE, PRODUCT.PAPER, PRODUCT.TURNAROUND
FROM
ORDER, ORDER_ITEM, PRODUCT
Hi,
Using MySQL-5.0.22/Win-XP-SP2 and storing data in InnoDB tables.
Clients connect from VB6. Enclosing all transactions in START
TRANSACTION ... COMMIT statements from VB6 clients.
I have the following query for a table tmp2 with a column x of data-type INT.
Insert into tmp2(x,y,x)
[EMAIL PROTECTED] wrote:
Yes, a ranged query should respond faster than a negation. In some cases
you can seriously improve query performance for a negation query if you
split it into two range queries unioned together. Here is a pseudo
example:
This query should be slow due to the table
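The pseudo example was truncated, but the idea, splitting a negation into two range predicates unioned together, can be sketched like this (sqlite3, toy data with type values 0-3 as in the surrounding thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, type INTEGER)")
conn.executemany("INSERT INTO t (type) VALUES (?)",
                 [(i % 4,) for i in range(40)])
conn.execute("CREATE INDEX idx_type ON t (type)")

# negation: generally cannot be answered with a single range scan
neg = conn.execute("SELECT COUNT(*) FROM t WHERE type <> 0").fetchone()[0]

# same result as two index-friendly range scans unioned together
ranged = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM t WHERE type < 0 "
    "UNION ALL SELECT id FROM t WHERE type > 0)").fetchone()[0]

print(neg, ranged)  # 30 30
```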
Robert DiFalco [EMAIL PROTECTED] wrote on 03/11/2006 12:43:43 PM:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
Shawn,
Any performance gains for specifying type > 0 rather than type <> 0?
R.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, March 13, 2006 6:37 AM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Query Optimization Question
Robert
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Query Optimization Question
Robert DiFalco [EMAIL PROTECTED] wrote on 03/11/2006 12:43:43
PM:
In a previous database engine I was using, an IN was more optimal than
a <>. So, for example:
SELECT * FROM table WHERE
Subject: RE: Query Optimization Question
Yes, a ranged query should respond faster than a negation. In some cases
you can seriously improve query performance for a negation query if you
split it into two range queries unioned together. Here is a pseudo
example:
This query should be slow due
Robert DiFalco wrote:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
I've
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
I've been playing with the
Robert DiFalco wrote:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
IN
Hi All,
We use Vbulletin for our message boards, and I have a query which
takes approximately 1.2 - 1.5 seconds to execute, on a table ('thread')
having around 130,000 rows, it is as follows
SELECT threadid
FROM thread AS thread
LEFT JOIN deletionlog AS deletionlog ON ( thread.threadid =
Kishore,
We use Vbulletin for our message boards, and I have a
query which
takes approximately 1.2 - 1.5 seconds to execute, on a table
('thread')
having around 130,000 rows, it is as follows
SELECT threadid
FROM thread
LEFT JOIN deletionlog
ON ( thread.threadid = deletionlog.primaryid AND
Hi:
I'm developing a simple shopping cart. I have these two tables:
carro_de_compras
--
IDU int(11) NOT NULL
IDA int(11) NOT NULL
CantidadDeArticulos int(11) NOT NULL
os_articulo
--
IDA int(11) NOT NULL auto_increment,
IDC int(11) NOT NULL default '0',
ANombre varchar(200) NOT NULL
Dear Reynier,
You can use a JOIN on both of your tables;
the JOIN has to run on the same field in each, i.e. IDA.
SELECT * FROM carro_de_compras LEFT JOIN os_articulo ON carro_de_compras.IDA
= os_articulo.IDA
This query returns all your users with their articles, if any, and you can
iterate on it.
but one note:
Use
Hi Tripp, all!
Tripp Bishop wrote:
[[...]]
Why would adding extra fields to the query slow the
query down so much? In either case it looks like it's
using the same index.
If a query is to return only fields contained in the index, it can be
executed as an index-only query, and the true
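The index-only (covering) query idea can be demonstrated directly; SQLite even labels it in the plan. A sketch with an invented listing table modeled on this thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listing (listing_id INTEGER, city_id INTEGER, price INTEGER)")
conn.execute("CREATE INDEX idx_city_listing ON listing (city_id, listing_id)")

# every selected column lives in the index, so the table itself is never read
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT listing_id FROM listing WHERE city_id = 7").fetchall()
details = " ".join(row[-1] for row in plan)
print(details)  # SQLite reports a COVERING INDEX scan
```

Adding a column that is not in the index (e.g. price) to the SELECT list forces a table lookup per row, which is why extra fields can slow the query so much.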
I have a query that's taking about 0.3 seconds to run and it should
be faster I think. I've read all about query optimization and
forcing indexes and I'm not entirely sure why I'm having this
problem. Google finds all kinds of basic information like using
indexes, and not doing math on the query
and it should
be faster I think. I've read all about query optimization and
forcing indexes and I'm not entirely sure why I'm having this
problem. Google finds all kinds of basic information like using
indexes, and not doing math on the query fields. Stupid stuff I'm
not doing.
I have two tables
read all about query optimization and
forcing indexes and I'm not entirely sure why I'm having this
problem. Google finds all kinds of basic information like using
indexes, and not doing math on the query fields. Stupid stuff I'm
not doing.
I have two tables EmailAddr and CustInfo. That look
Baker wrote:
I have a query that's taking about 0.3 seconds to run and it should
be faster I think. I've read all about query optimization and
forcing indexes and I'm not entirely sure why I'm having this
problem. Google finds all kinds of basic information like using
indexes, and not doing math
Howdy all,
I've got a question regarding optimizing a query.
I've got two tables:
listing and listing_city_xref
listing is a pretty wide table with many fields that
can be queried by users. There is a subset of fields
that are always in queries of this table but there are
many additional
It might be a good idea if you could post the results of an EXPLAIN SELECT ...
for a fast query and a slow query along with their actual SQL statements.
James Harvard
At 11:37 am -0800 12/1/06, Tripp Bishop wrote:
I've got a question regarding optimizing a query.
Jim, here's an example query:
SELECT COUNT(listing.listing_id) AS listing_count FROM
listing_city_xref INNER JOIN listing
ON listing.listing_id = listing_city_xref.listing_id
AND listing.group_id = listing_city_xref.group_id
WHERE listing_city_xref.group_id =2
AND listing_city_xref.city_id IN
Eric,
Can you send us the actual SHOW INDEXES FROM table and EXPLAIN output that
isn't shortened?
Thank you for answering my question.
Actually, the real query and tables are much more complex than
the data in the previous message.
I just tried to simplify the information to make it clearer for you.
Hello,
I'm using MySQL-4.1.7, and I have a query where it seems that,
even using a where clause, a specific table isn't being optimized
properly:
I have four tables:
real_state: cod, name, city, ag_cod, sell
agency: cod, name, ag_status
front: cod, rs_cod, ag_cod
photo: cod, rs_cod
These tables have the
Can you send us the actual SHOW INDEXES FROM table and EXPLAIN output
that isn't shortened?
Ronan Lucio wrote:
Hello,
I'm using MySQL-4.1.7, and I have a query where it seems that,
even using a where clause, a specific table isn't being optimized
properly:
I have four tables:
real_state: cod, name,
Hi All,
I have a mysql query which takes 8 seconds to run on a dual
xeon 2.4, 3Gig ram box,
SELECT gamename, MAX(score) AS score, COUNT(valid=1) AS played FROM
gamesessions AS gamesessions
WHERE valid=1 AND sessiontype IN (1,2)
GROUP BY gamename;
Explain
Hi,
Put indexes on 'valid' and 'sessiontype' and all will be good.
Cheers,
Andrew
On 15/7/05 18:26, Kishore Jalleda [EMAIL PROTECTED] wrote:
Hi All,
I have a mysql query which takes 8 seconds to run on a dual
xeon 2.4, 3Gig ram box,
SELECT gamename, MAX(score) AS score,
The following query is in desperate need of optimization. Any gurus
out there who can share some insights, I'd greatly appreciate it.
I have a request table, from which I'm pulling all records. Each
request record has a part number, and for each requested part number,
I'd like to query the
I asked for help with a version of this query a few months
ago, and subsequently made some improvements to it, and also
fooled around with other parts of my app that were in need of
tuning. I've since done some more extensive benchmarking and
realized that this query really is somewhat slow. Even
Hello,
Supposing I have 2 tables:
product
==
- id
- name
price
=
- id
- product_id
- sell_price
- rent_price
If I want to make a SELECT for the products having the
rent_price between 300,00 and 500,00, so I use the query:
SELECT rent_price
FROM product
LEFT JOIN price ON (product.id =
Query optimization question
I am selecting from a single table but it has a lot of rows and it has
a very involved calculation. What I really want to do is
FIRST restrict the number of rows so that the big calculation is only
performed on the ones that are within 3 degrees.
Using 4.0.20
Gerald Taylor wrote:
Query optimization question
I am selecting from a single table but it has a lot of rows and it
has a very involved calculation. What I really want to do is
FIRST restrict the number of rows so that the big calculation is only
performed on the ones that are within 3
BY distance
DROP TEMPORARY TABLE tmpDeltaData
I would also test the WHERE clause from tmpDeltaData with your original
query to compare speeds of the two methods.
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Gerald Taylor [EMAIL PROTECTED] wrote on 10/04/2004 10:33:22 AM:
Query
Hi,
I'm trying to sort out a query that identifies images that are not in the
story table (structures below). Peter Brawley has kindly pointed me in the
right direction with the sql structure (which does work), but I'm getting
benchmarks of 5+ seconds on the test data, whereas the live site
I'm trying to sort out a query that identifies images that are not in the
story table (structures below). Peter Brawley has kindly pointed me in the
right direction with the sql structure (which does work), but I'm getting
benchmarks of 5+ seconds on the test data, whereas the live site has
Hi,
Does anybody know where I can find information about query
optimization in MySQL, and which techniques are used?
Regards,
Volnei Galbino
At 21:55 -0300 3/7/04, Volnei Galbino wrote:
Hi,
Does anybody know where I can find information about query
optimization in MySQL, and which techniques are used?
Regards,
Volnei Galbino
Yes, there's a chapter on optimization in the MySQL Reference Manual.
http://www.mysql.com/doc/en
Chuck Gadd wrote:
I've got a query that I can't seem to get optimized, so I'm
hoping someone here can spot something I'm missing!
Table has three columns:
CoordID int unsigned,
Zip_Lo char(9),
Zip_Hi char(9)
Table has 3 million records
indexes:
acg_lo (Zip_Lo)
acg_hi (Zip_Hi)
acg_combined