Re: [PHP-DB] Slooooow query in MySQL.

2007-07-24 Thread Aleksandar Vojnovic
In addition to Chris's suggestions, you should also alter the homeid 
column (set its default to NULL and update the existing rows, which 
shouldn't be a problem) so you don't have to do a double check on the 
same column. I would also suggest making the TSCN_MEDIAid column an 
int rather than a varchar.
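
A rough sketch of those two changes in MySQL (the column type and length are 
guesses based on the query quoted below, not the real schema - check SHOW 
CREATE TABLE before running anything like this):

    -- let homeid default to NULL and turn existing empty strings into NULL,
    -- so the query only needs one check on that column
    alter table home modify homeid varchar(32) null default null;
    update home set homeid = null where homeid = '';

    -- store the media id as an integer rather than a varchar
    alter table TourScene modify TSCN_MEDIAid int;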


Aleksander

Chris wrote:

Rob Adams wrote:


select h.addr, h.city, h.county, h.state, h.zip, 'yes' as show_prop,
    h.askingprice, '' as year_built, h.rooms, h.baths,
    '' as apt, '' as lot, h.sqft, h.listdate, '' as date_sold, h.comments, h.mlsnum,
    r.agency, concat(r.fname, ' ', r.lname) as rname,
    r.phone as rphone, '' as remail, '' as status, '' as prop_type,
    ts.TSCNfile as picture,
    h.homeid as homeid, 'yes' as has_virt
    from ProductStatus ps, home h, realtor r, ProductBin pb
    left join TourScene ts on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
    where ps.PSTSstatus = 'posted' and pb.PBINid = PSTS_POid and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid
    limit {l1}, {l2}

Here is the query.  I didn't know that it needed to have an ORDER 
clause in it for the limit to work properly.  I'll probably order by 
h.listdate


If you don't have an ORDER BY clause then you're going to get 
inconsistent results. The database will never guarantee returning 
results in a set order unless you tell it to by specifying an order by 
clause.



To speed up your query, make sure you have indexes on:

TourScene(TSCNtourId, TSCN_MEDIAid)
ProductBin(PBINid, PBIN_HALOid)
home(id, listdate)
realtor(realtorid)

If you can't get it fast, then post the EXPLAIN output.
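
For reference, those indexes could be added with statements along these lines 
(the index names here are made up, and columns that are already primary keys - 
h.id and r.realtorid, perhaps - won't need a separate index):

    create index idx_tscn_tour_media on TourScene (TSCNtourId, TSCN_MEDIAid);
    create index idx_pbin_id_halo on ProductBin (PBINid, PBIN_HALOid);
    create index idx_home_id_listdate on home (id, listdate);
    create index idx_realtor_id on realtor (realtorid);

Prefixing the SELECT above with EXPLAIN will show whether MySQL actually uses them.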






Re: [PHP-DB] Slooooow query in MySQL.

2007-07-23 Thread Stut

Stut wrote:

Chris wrote:

Stut wrote:

Chris wrote:

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 
plus rows. It's been so large that I've just been testing it with a 
limit 0, 10000 (ten thousand) on the query.  That used to take 
about 10 minutes to run, including processing time in PHP which 
spits out xml from the query.  I decided to chunk the query down 
into 1,000 row increments, and tried that. The script processed 
10,000 rows in 23 seconds!  I was amazed!  But unfortunately it 
takes quite a bit longer than 6*23 to process the 60,000 rows that 
way (1,000 at a time).  It takes almost 8 minutes.  I can't figure 
out why it takes so long, or how to make it faster.  The data for 
60,000 rows is about 120mb, so I would prefer not to use a 
temporary table.  Any other suggestions?  This is probably more a 
db issue than a php issue, but I thought I'd try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html


If that were the case I wouldn't expect limiting the number of rows 
returned to make a difference since the actual query is the same.


Actually it can. I don't think mysql does this but postgresql does 
take the limit/offset clauses into account when generating a plan.


http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT

Not really relevant to the problem though :P


How many queries do you run with an order? But you're right, if there is 
no order by clause adding a limit probably will make a difference, but 
there must be an order by when you use limit to ensure the SQL engine 
doesn't give you the same rows in response to more than one of the queries.


Oops, that was meant to say How many queries do you run *without* an 
order?


-Stut

--
http://stut.net/




Re: [PHP-DB] Slooooow query in MySQL.

2007-07-23 Thread Rob Adams


select h.addr, h.city, h.county, h.state, h.zip, 'yes' as show_prop,
    h.askingprice, '' as year_built, h.rooms, h.baths,
    '' as apt, '' as lot, h.sqft, h.listdate, '' as date_sold, h.comments, h.mlsnum,
    r.agency, concat(r.fname, ' ', r.lname) as rname,
    r.phone as rphone, '' as remail, '' as status, '' as prop_type,
    ts.TSCNfile as picture,
    h.homeid as homeid, 'yes' as has_virt
    from ProductStatus ps, home h, realtor r, ProductBin pb
    left join TourScene ts on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
    where ps.PSTSstatus = 'posted' and pb.PBINid = PSTS_POid and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid
    limit {l1}, {l2}

Here is the query.  I didn't know that it needed to have an ORDER clause in 
it for the limit to work properly.  I'll probably order by h.listdate
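
Ordering by h.listdate alone can still shuffle rows that share a listing date 
between pages, so a sketch of the paged query might add a unique tie-breaker 
(h.id is assumed unique here, and the column list is trimmed for brevity):

    select h.homeid, h.listdate, h.addr
    from ProductStatus ps, home h, realtor r, ProductBin pb
    left join TourScene ts on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
    where ps.PSTSstatus = 'posted' and pb.PBINid = PSTS_POid and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid
    order by h.listdate desc, h.id
    limit {l1}, {l2}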


 -- Rob


Stut [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

Chris wrote:

Stut wrote:

Chris wrote:

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 
0, 10000 (ten thousand) on the query.  That used to take about 10 
minutes to run, including processing time in PHP which spits out xml 
from the query.  I decided to chunk the query down into 1,000 row 
increments, and tried that. The script processed 10,000 rows in 23 
seconds!  I was amazed!  But unfortunately it takes quite a bit longer 
than 6*23 to process the 60,000 rows that way (1,000 at a time).  It 
takes almost 8 minutes.  I can't figure out why it takes so long, or 
how to make it faster.  The data for 60,000 rows is about 120mb, so I 
would prefer not to use a temporary table.  Any other suggestions? 
This is probably more a db issue than a php issue, but I thought I'd 
try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html


If that were the case I wouldn't expect limiting the number of rows 
returned to make a difference since the actual query is the same.


Actually it can. I don't think mysql does this but postgresql does take 
the limit/offset clauses into account when generating a plan.


http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT

Not really relevant to the problem though :P


How many queries do you run with an order? But you're right, if there is 
no order by clause adding a limit probably will make a difference, but 
there must be an order by when you use limit to ensure the SQL engine 
doesn't give you the same rows in response to more than one of the 
queries.


-Stut

--
http://stut.net/ 





Re: [PHP-DB] Slooooow query in MySQL.

2007-07-23 Thread Chris

Stut wrote:

Stut wrote:

Chris wrote:

Stut wrote:

Chris wrote:

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 
plus rows. It's been so large that I've just been testing it with 
a limit 0, 10000 (ten thousand) on the query.  That used to take 
about 10 minutes to run, including processing time in PHP which 
spits out xml from the query.  I decided to chunk the query down 
into 1,000 row increments, and tried that. The script processed 
10,000 rows in 23 seconds!  I was amazed!  But unfortunately it 
takes quite a bit longer than 6*23 to process the 60,000 rows that 
way (1,000 at a time).  It takes almost 8 minutes.  I can't figure 
out why it takes so long, or how to make it faster.  The data for 
60,000 rows is about 120mb, so I would prefer not to use a 
temporary table.  Any other suggestions?  This is probably more a 
db issue than a php issue, but I thought I'd try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html


If that were the case I wouldn't expect limiting the number of rows 
returned to make a difference since the actual query is the same.


Actually it can. I don't think mysql does this but postgresql does 
take the limit/offset clauses into account when generating a plan.


http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT

Not really relevant to the problem though :P


How many queries do you run with an order? But you're right, if there 
is no order by clause adding a limit probably will make a difference, 
but there must be an order by when you use limit to ensure the SQL 
engine doesn't give you the same rows in response to more than one of 
the queries.


Oops, that was meant to say How many queries do you run *without* an 
order?


Almost never - but my point was actually this sentence:

The query planner takes LIMIT into account when generating a query plan, 
so you are very likely to get different plans (yielding different row 
orders) depending on what you use for LIMIT and OFFSET.


--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Slooooow query in MySQL.

2007-07-23 Thread Chris

Rob Adams wrote:


select h.addr, h.city, h.county, h.state, h.zip, 'yes' as show_prop,
    h.askingprice, '' as year_built, h.rooms, h.baths,
    '' as apt, '' as lot, h.sqft, h.listdate, '' as date_sold, h.comments, h.mlsnum,
    r.agency, concat(r.fname, ' ', r.lname) as rname,
    r.phone as rphone, '' as remail, '' as status, '' as prop_type,
    ts.TSCNfile as picture,
    h.homeid as homeid, 'yes' as has_virt
    from ProductStatus ps, home h, realtor r, ProductBin pb
    left join TourScene ts on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
    where ps.PSTSstatus = 'posted' and pb.PBINid = PSTS_POid and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid
    limit {l1}, {l2}

Here is the query.  I didn't know that it needed to have an ORDER clause 
in it for the limit to work properly.  I'll probably order by h.listdate


If you don't have an ORDER BY clause then you're going to get 
inconsistent results. The database will never guarantee returning 
results in a set order unless you tell it to by specifying an order by 
clause.



To speed up your query, make sure you have indexes on:

TourScene(TSCNtourId, TSCN_MEDIAid)
ProductBin(PBINid, PBIN_HALOid)
home(id, listdate)
realtor(realtorid)

If you can't get it fast, then post the EXPLAIN output.

--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Slooooow query in MySQL.

2007-07-22 Thread Chris

Stut wrote:

Chris wrote:

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 
0, 10000 (ten thousand) on the query.  That used to take about 10 
minutes to run, including processing time in PHP which spits out xml 
from the query.  I decided to chunk the query down into 1,000 row 
increments, and tried that. The script processed 10,000 rows in 23 
seconds!  I was amazed!  But unfortunately it takes quite a bit 
longer than 6*23 to process the 60,000 rows that way (1,000 at a 
time).  It takes almost 8 minutes.  I can't figure out why it takes 
so long, or how to make it faster.  The data for 60,000 rows is about 
120mb, so I would prefer not to use a temporary table.  Any other 
suggestions?  This is probably more a db issue than a php issue, but 
I thought I'd try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html


If that were the case I wouldn't expect limiting the number of rows 
returned to make a difference since the actual query is the same.


Actually it can. I don't think mysql does this but postgresql does take 
the limit/offset clauses into account when generating a plan.


http://www.postgresql.org/docs/current/static/sql-select.html#SQL-LIMIT

Not really relevant to the problem though :P

--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Slooooow query in MySQL.

2007-07-20 Thread OKi98

Rob Adams napsal(a):
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 
0, 10000 (ten thousand) on the query.  That used to take about 10 
minutes to run, including processing time in PHP which spits out xml 
from the query.  I decided to chunk the query down into 1,000 row 
increments, and tried that. The script processed 10,000 rows in 23 
seconds!  I was amazed!  But unfortunately it takes quite a bit longer 
than 6*23 to process the 60,000 rows that way (1,000 at a time).  It 
takes almost 8 minutes.  I can't figure out why it takes so long, or 
how to make it faster.  The data for 60,000 rows is about 120mb, so I 
would prefer not to use a temporary table.  Any other suggestions?  
This is probably more a db issue than a php issue, but I thought I'd 
try here first.
60k rows is not that much, I have tables with 500k rows and queries are 
running smoothly.


Anyway we cannot help you if you do not post:
1. show create table
2. result of explain query
3. the query itself

OKi98
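
For anyone reproducing this, those three items come from statements like the 
following (home and the cut-down SELECT are only examples - run SHOW CREATE 
TABLE for every table in the query, and put the real query after EXPLAIN):

    show create table home;          -- table definition, including its indexes
    show create table ProductBin;    -- repeat for the other tables involved
    explain select h.id from home h
    where h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR);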




Re: [PHP-DB] Slooooow query in MySQL.

2007-07-20 Thread Aleksandar Vojnovic
60k records shouldn't be a problem. Show us the query you're making and 
the table structure.


 OKi98 wrote:

Rob Adams napsal(a):
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 
0, 10000 (ten thousand) on the query.  That used to take about 10 
minutes to run, including processing time in PHP which spits out xml 
from the query.  I decided to chunk the query down into 1,000 row 
increments, and tried that. The script processed 10,000 rows in 23 
seconds!  I was amazed!  But unfortunately it takes quite a bit 
longer than 6*23 to process the 60,000 rows that way (1,000 at a 
time).  It takes almost 8 minutes.  I can't figure out why it takes 
so long, or how to make it faster.  The data for 60,000 rows is about 
120mb, so I would prefer not to use a temporary table.  Any other 
suggestions?  This is probably more a db issue than a php issue, but 
I thought I'd try here first.
60k rows is not that much, I have tables with 500k rows and queries 
are running smoothly.


Anyway we cannot help you if you do not post:
1. show create table
2. result of explain query
3. the query itself

OKi98






Re: [PHP-DB] Slooooow query in MySQL.

2007-07-20 Thread Stut

Chris wrote:

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 
0, 10000 (ten thousand) on the query.  That used to take about 10 
minutes to run, including processing time in PHP which spits out xml 
from the query.  I decided to chunk the query down into 1,000 row 
increments, and tried that. The script processed 10,000 rows in 23 
seconds!  I was amazed!  But unfortunately it takes quite a bit longer 
than 6*23 to process the 60,000 rows that way (1,000 at a time).  It 
takes almost 8 minutes.  I can't figure out why it takes so long, or 
how to make it faster.  The data for 60,000 rows is about 120mb, so I 
would prefer not to use a temporary table.  Any other suggestions?  
This is probably more a db issue than a php issue, but I thought I'd 
try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html


If that were the case I wouldn't expect limiting the number of rows 
returned to make a difference since the actual query is the same.


Chances are it's purely a data transfer delay. Do a test with the same 
query but only grab one of the fields - something relatively small like an 
integer field - and see if that's significantly quicker. I'm betting it 
will be.


If that is the problem you need to be looking at making sure you're only 
getting the fields you need. You may also want to look into changing the 
cursor type you're using, although I'm not sure whether that's possible with 
MySQL, never mind how to do it.
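
One way to run that test against the query posted elsewhere in the thread is 
to keep the joins and conditions but select a single narrow column; if this 
comes back much faster, the time is going into shipping the wide rows to PHP:

    select h.id
    from ProductStatus ps, home h, realtor r, ProductBin pb
    left join TourScene ts on ts.TSCNtourId = pb.PBINid and ts.TSCN_MEDIAid = '3'
    where ps.PSTSstatus = 'posted' and pb.PBINid = PSTS_POid and h.id = pb.PBINid
    and h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    and (h.homeid is not null and h.homeid <> '')
    and r.realtorid = pb.PBIN_HALOid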


-Stut

--
http://stut.net/




[PHP-DB] Slooooow query in MySQL.

2007-07-19 Thread Rob Adams
I have a query that I run using mysql that returns about 60,000 plus rows. 
It's been so large that I've just been testing it with a limit 0, 10000 (ten 
thousand) on the query.  That used to take about 10 minutes to run, 
including processing time in PHP which spits out xml from the query.  I 
decided to chunk the query down into 1,000 row increments, and tried that. 
The script processed 10,000 rows in 23 seconds!  I was amazed!  But 
unfortunately it takes quite a bit longer than 6*23 to process the 60,000 
rows that way (1,000 at a time).  It takes almost 8 minutes.  I can't figure 
out why it takes so long, or how to make it faster.  The data for 60,000 
rows is about 120mb, so I would prefer not to use a temporary table.  Any 
other suggestions?  This is probably more a db issue than a php issue, but I 
thought I'd try here first. 
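
As a reading aid: the chunked approach described above amounts to re-running 
the query with a growing offset. With MySQL's LIMIT offset, count form the 
server still has to generate and discard the first offset rows on every run, 
which is one plausible reason 60 chunks of 1,000 end up far slower than a 
straight 6 x 23 seconds. Simplified here to a single table (the full query is 
quoted elsewhere in the thread), with an order by added because, as discussed 
in the other replies, LIMIT without one isn't deterministic:

    -- first chunk
    select h.id, h.addr, h.listdate from home h
    where h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    order by h.listdate, h.id
    limit 0, 1000;

    -- last chunk: roughly 59,000 rows are built and thrown away
    -- before the final 1,000 are returned
    select h.id, h.addr, h.listdate from home h
    where h.listdate > DATE_SUB(NOW(), INTERVAL 2 YEAR)
    order by h.listdate, h.id
    limit 59000, 1000;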





Re: [PHP-DB] Slooooow query in MySQL.

2007-07-19 Thread Kevin Murphy

Seeing the query would help.

Are you using sub-queries? I believe that those can make the time go  
up exponentially.


--
Kevin Murphy
Webmaster: Information and Marketing Services
Western Nevada College
www.wnc.edu
775-445-3326

P.S. Please note that my e-mail and website address have changed from  
wncc.edu to wnc.edu.



On Jul 19, 2007, at 2:19 PM, Rob Adams wrote:

I have a query that I run using mysql that returns about 60,000  
plus rows. It's been so large that I've just been testing it with a  
limit 0, 10000 (ten thousand) on the query.  That used to take  
about 10 minutes to run, including processing time in PHP which  
spits out xml from the query.  I decided to chunk the query down  
into 1,000 row increments, and tried that. The script processed  
10,000 rows in 23 seconds!  I was amazed!  But unfortunately it  
takes quite a bit longer than 6*23 to process the 60,000 rows that  
way (1,000 at a time).  It takes almost 8 minutes.  I can't figure  
out why it takes so long, or how to make it faster.  The data for  
60,000 rows is about 120mb, so I would prefer not to use a  
temporary table.  Any other suggestions?  This is probably more a  
db issue than a php issue, but I thought I'd try here first.






Re: [PHP-DB] Slooooow query in MySQL.

2007-07-19 Thread Chris

Rob Adams wrote:
I have a query that I run using mysql that returns about 60,000 plus 
rows. It's been so large that I've just been testing it with a limit 0, 
10000 (ten thousand) on the query.  That used to take about 10 minutes 
to run, including processing time in PHP which spits out xml from the 
query.  I decided to chunk the query down into 1,000 row increments, and 
tried that. The script processed 10,000 rows in 23 seconds!  I was 
amazed!  But unfortunately it takes quite a bit longer than 6*23 to 
process the 60,000 rows that way (1,000 at a time).  It takes almost 8 
minutes.  I can't figure out why it takes so long, or how to make it 
faster.  The data for 60,000 rows is about 120mb, so I would prefer not 
to use a temporary table.  Any other suggestions?  This is probably more 
a db issue than a php issue, but I thought I'd try here first.


Sounds like missing indexes or something.

Use explain: http://dev.mysql.com/doc/refman/4.1/en/explain.html

--
Postgresql & php tutorials
http://www.designmagick.com/
