Re: [PHP-DB] Table optimization ideas needed

2008-03-27 Thread Shelley
On Thu, Mar 27, 2008 at 1:55 PM, Chris [EMAIL PROTECTED] wrote:

 Good idea. But I wonder whether calling the trigger on each insert will lose
  any performance.
 

 It's going to affect things slightly but whether it'll be noticeable only
 you can answer by testing.

 Another option I sometimes see is to set up a replicated slave and run your
 reports off that instead of the live system, then:

 1) it won't bog the live db down
 2) it doesn't really matter how many queries you run
 3) it doesn't really matter how long they take to run


Hm... This makes sense.  :)

I indeed have several slaves running.

Great. Thank you.


Thank you for all your help.



 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Shelley
Thank you very much, Chris. :)
Fyi,


On Wed, Mar 26, 2008 at 1:27 PM, Chris [EMAIL PROTECTED] wrote:

 Shelley wrote:

 
  +--------------+-----------------------+------+-----+-------------------+----------------+
  | Field        | Type                  | Null | Key | Default           | Extra          |
  +--------------+-----------------------+------+-----+-------------------+----------------+
  | id           | int(11)               |      | PRI | NULL              | auto_increment |
  | owner_id     | int(11)               |      | MUL | 0                 |                |
  | owner_name   | varchar(50)           |      |     |                   |                |
  | visitor_id   | int(11)               |      | MUL | 0                 |                |
  | visitor_name | varchar(100)          |      |     |                   |                |
  | visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
  | first_time   | int(10) unsigned      |      |     | 0                 |                |
  | last_time    | int(10) unsigned      |      | MUL | 0                 |                |
  | visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
  | status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
  +--------------+-----------------------+------+-----+-------------------+----------------+
 
  That's the table which has more than 20 million records.
 

 And what query are you running?

 What does:

 explain your_query_here;

 show?

mysql> explain select count(*) from message;
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
| id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
|  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+

Three queries return the same results.




 I can see indexes on at least owner_id, visitor_id, last_time and status,
 but at least one of those is across multiple columns ('MUL').

 Can you show us the index definitions:

 show indexes from table_name;

 or

 show create table table_name;

 and just include the indexes at the bottom.


mysql> show indexes from message;
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table   | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| message |          0 | PRIMARY    |            1 | id          | A         |    23051499 | NULL     | NULL   |      | BTREE      |         |
| message |          1 | owner_id   |            1 | owner_id    | A         |      922059 | NULL     | NULL   |      | BTREE      |         |
| message |          1 | visitor_id |            1 | visitor_id  | A         |      501119 | NULL     | NULL   |      | BTREE      |         |
| message |          1 | status     |            1 | status      | A         |          18 | NULL     | NULL   |      | BTREE      |         |
| message |          1 | last_time  |            1 | last_time   | A         |    11525749 | NULL     | NULL   |      | BTREE      |         |
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (1.09 sec)




 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Chris



mysql> explain select count(*) from message;
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
| id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
|  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+

Three queries return the same results.


That's never going to be fast because you're using innodb tables.

From a previous post:

 Then you can imagine how much time sql such as select a,b from
 table_name where c='d' will take.

 I have a lot of tables like that. So my questions is:
 What's your practice to optimize tables like that?

I thought that's what you needed help with?
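If the cron job only needs a rough running total, one common workaround is to read InnoDB's own table statistics instead of counting: InnoDB keeps no exact row count, so an unfiltered count(*) must walk an entire index, but the optimizer's estimate is free to read. A sketch (the figure is approximate, not exact, and information_schema assumes MySQL 5.0+; `message` is the table from earlier in the thread):

```sql
-- Both report InnoDB's estimated row count for the table -- cheap to
-- fetch, but only an approximation (it can be off by a noticeable margin).
SHOW TABLE STATUS LIKE 'message';

SELECT table_rows
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name   = 'message';
```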

--
Postgresql & php tutorials
http://www.designmagick.com/

--
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Shelley
On Thu, Mar 27, 2008 at 10:03 AM, Chris [EMAIL PROTECTED] wrote:


  mysql> explain select count(*) from message;
  +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
  | id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
  +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
  |  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
  +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
 
  Three queries return the same results.
 

 That's never going to be fast because you're using innodb tables.

Should I change it to MyISAM ones?



 From a previous post:

  Then you can imagine how much time sql such as select a,b from
  table_name where c='d' will take.
 
  I have a lot of tables like that. So my questions is:
  What's your practice to optimize tables like that?

 I thought that's what you needed help with. ?

No. That's only part of it. I have a cron job which gets the total visits
often.



 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Chris



That's never going to be fast because you're using innodb tables.

Should I change it to MyISAM ones?


It depends.

Do you need or use transactions? If so, you can't change: MyISAM doesn't
support them.
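If transactions turn out to be unnecessary, the switch itself is one statement. A sketch (note the ALTER rebuilds the entire table, which will be slow on 20 million rows and takes a table lock):

```sql
-- Confirm the current storage engine first:
SHOW TABLE STATUS LIKE 'message';

-- Convert to MyISAM. Afterwards an unfiltered SELECT COUNT(*) becomes
-- near-instant, because MyISAM stores the exact row count in the table
-- header -- but transactions and row-level locking are gone.
ALTER TABLE message ENGINE = MyISAM;
```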



No. That's only part of it. I have a cron job which gets the total
visits often.


If you are using mysql 5.0+, use a trigger to update a counter:

It'll depend on your data set and what you need to get out of the data 
(ie what queries you are running) but if you just need a total or even 
total per day, this would be the best.


You might need to do it as a stored procedure so you can check whether the
date already exists in that table and either update or insert, or you may
be able to use REPLACE INTO without needing that check.
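One way to sketch the counter without a stored procedure is INSERT ... ON DUPLICATE KEY UPDATE, which folds the does-the-row-exist check into a single statement. Table and trigger names below are illustrative, not from the thread:

```sql
-- Daily visit counter, kept current by an insert trigger on `message`.
CREATE TABLE visit_totals (
    visit_date DATE NOT NULL PRIMARY KEY,
    total      INT UNSIGNED NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER message_counter AFTER INSERT ON message
FOR EACH ROW
BEGIN
    -- Upsert: insert the first row for today, otherwise bump the total.
    INSERT INTO visit_totals (visit_date, total)
    VALUES (CURRENT_DATE, 1)
    ON DUPLICATE KEY UPDATE total = total + 1;
END//
DELIMITER ;
```

The cron job can then read one day's row, or SUM(total) across days, instead of counting 20 million rows.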


See http://dev.mysql.com/doc/refman/5.0/en/using-triggers.html and 
http://dev.mysql.com/doc/refman/5.0/en/triggers.html - and look at the 
comments:



Triggers can invoke stored procedures via 'CALL'.

DROP TRIGGER cinema.TESTTRIGGER;
CREATE TRIGGER cinema.TESTTRIGGER AFTER INSERT ON film FOR EACH ROW
CALL cinema.CHECKFILM('Old boy');

It works. The trigger is performed and does its job. The only requirement
is to use the word 'AFTER' when you create your trigger. 'BEFORE' doesn't
work (the trigger is called and runs, but the insert fails in all cases).




--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Shelley
On Thu, Mar 27, 2008 at 10:40 AM, Chris [EMAIL PROTECTED] wrote:


 That's never going to be fast because you're using innodb tables.
 
  Should I change it to MyISAM ones?
 

 It depends.

 Do you need or use transactions? If so, you can't change: MyISAM doesn't
 support them.


I haven't tried transactions so far. :(




  No. That's only part of it. I have a cron job which gets the total visits
  often.
 

 If you are using mysql 5.0+, use a trigger to update a counter:

It is 5.0+.



 It'll depend on your data set and what you need to get out of the data (ie
 what queries you are running) but if you just need a total or even total per
 day, this would be the best.


Good idea. But I wonder whether calling the trigger on each insert will lose
any performance.



 You might need to do it as a stored procedure so you can check whether the
 date already exists in that table and either update or insert, or you may
 be able to use REPLACE INTO without needing that check.

 See http://dev.mysql.com/doc/refman/5.0/en/using-triggers.html and
 http://dev.mysql.com/doc/refman/5.0/en/triggers.html - and look at the
 comments:


 Triggers can invoke stored procedures via 'CALL'.

 DROP TRIGGER cinema.TESTTRIGGER;
 CREATE TRIGGER cinema.TESTTRIGGER AFTER INSERT ON film FOR EACH ROW
 CALL cinema.CHECKFILM('Old boy');

 It works. The trigger is performed and does its job. The only requirement
 is to use the word 'AFTER' when you create your trigger. 'BEFORE' doesn't
 work (the trigger is called and runs, but the insert fails in all cases).





 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-26 Thread Chris
Good idea. But I wonder whether calling the trigger on each insert will
lose any performance.


It's going to affect things slightly but whether it'll be noticeable only
you can answer by testing.


Another option I sometimes see is to set up a replicated slave and run your
reports off that instead of the live system, then:


1) it won't bog the live db down
2) it doesn't really matter how many queries you run
3) it doesn't really matter how long they take to run
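A rough sketch of pointing a slave at the master (host, user, password and log position below are placeholders; the slave also needs its own unique server-id and the master needs binary logging enabled):

```sql
-- Run on the slave, then direct the reporting cron job at this server.
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;

-- Verify replication is running:
SHOW SLAVE STATUS;
```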

--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Table optimization ideas needed

2008-03-25 Thread Shelley
Yes, an index can help a lot.
But there are already five indexes. The table takes 1.4G of space while
the indexes take 2.3G.
The select statement is still slow.  :(

On Tue, Mar 25, 2008 at 11:50 AM, Chris [EMAIL PROTECTED] wrote:

 Shelley wrote:
  Hi all,
 
  I made a post a week ago to ask for the idea of the fastest way to get
  table records.
  Fyi,
 
 http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
 
 
  Look at the time even a 'count(1)' took.
  Then you can imagine how much time sql such as select a,b from
  table_name where c='d' will take.
 
  I have a lot of tables like that. So my questions is:
  What's your practice to optimize tables like that?

 I pretty much follow what I've said in this article:

 http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database

 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-25 Thread Chris

Shelley wrote:

Yes, an index can help a lot.
But there are already five indexes. The table takes 1.4G of space
while the indexes take 2.3G.

The select statement is still slow.  :(


Post your exact query, table definition(s), indexes and see if anyone 
has some suggestions.


If it's a mysql db, join one of the mysql lists and see if anyone has 
some suggestions (though they will ask for the same info). Same for any 
other db.


--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Table optimization ideas needed

2008-03-25 Thread Shelley
+--------------+-----------------------+------+-----+-------------------+----------------+
| Field        | Type                  | Null | Key | Default           | Extra          |
+--------------+-----------------------+------+-----+-------------------+----------------+
| id           | int(11)               |      | PRI | NULL              | auto_increment |
| owner_id     | int(11)               |      | MUL | 0                 |                |
| owner_name   | varchar(50)           |      |     |                   |                |
| visitor_id   | int(11)               |      | MUL | 0                 |                |
| visitor_name | varchar(100)          |      |     |                   |                |
| visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
| first_time   | int(10) unsigned      |      |     | 0                 |                |
| last_time    | int(10) unsigned      |      | MUL | 0                 |                |
| visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
| status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
+--------------+-----------------------+------+-----+-------------------+----------------+

That's the table which has more than 20 million records.

On Wed, Mar 26, 2008 at 10:20 AM, Chris [EMAIL PROTECTED] wrote:

 Shelley wrote:
  Yes, an index can help a lot.
  But there are already five indexes. The table takes 1.4G of space
  while the indexes take 2.3G.
  The select statement is still slow.  :(

 Post your exact query, table definition(s), indexes and see if anyone
 has some suggestions.

 If it's a mysql db, join one of the mysql lists and see if anyone has
 some suggestions (though they will ask for the same info). Same for any
 other db.

 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


Re: [PHP-DB] Table optimization ideas needed

2008-03-25 Thread Chris

Shelley wrote:

+--------------+-----------------------+------+-----+-------------------+----------------+
| Field        | Type                  | Null | Key | Default           | Extra          |
+--------------+-----------------------+------+-----+-------------------+----------------+
| id           | int(11)               |      | PRI | NULL              | auto_increment |
| owner_id     | int(11)               |      | MUL | 0                 |                |
| owner_name   | varchar(50)           |      |     |                   |                |
| visitor_id   | int(11)               |      | MUL | 0                 |                |
| visitor_name | varchar(100)          |      |     |                   |                |
| visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
| first_time   | int(10) unsigned      |      |     | 0                 |                |
| last_time    | int(10) unsigned      |      | MUL | 0                 |                |
| visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
| status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
+--------------+-----------------------+------+-----+-------------------+----------------+

That's the table which has more than 20 million records.


And what query are you running?

What does:

explain your_query_here;

show?

I can see indexes on at least owner_id, visitor_id, last_time and 
status, but at least one of those is across multiple columns ('MUL').


Can you show us the index definitions:

show indexes from table_name;

or

show create table table_name;

and just include the indexes at the bottom.

--
Postgresql & php tutorials
http://www.designmagick.com/




[PHP-DB] Table optimization ideas needed

2008-03-24 Thread Shelley

Hi all,

I made a post a week ago to ask for the idea of the fastest way to get 
table records.

Fyi,
http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table

Look at the time even a 'count(1)' took.
Then you can imagine how much time sql such as select a,b from 
table_name where c='d' will take.


I have a lot of tables like that. So my questions is:
What's your practice to optimize tables like that?

Thanks in advance.

--
Regards,
Shelley





Re: [PHP-DB] Table optimization ideas needed

2008-03-24 Thread Chris

Shelley wrote:

Hi all,

I made a post a week ago to ask for the idea of the fastest way to get 
table records.

Fyi,
http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table 



Look at the time even a 'count(1)' took.
Then you can imagine how much time sql such as select a,b from 
table_name where c='d' will take.


I have a lot of tables like that. So my questions is:
What's your practice to optimize tables like that?


I pretty much follow what I've said in this article:

http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database
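For the select a,b from table_name where c='d' shape from the original post, the usual outcome of that indexing approach is a composite index that covers the query. Column and index names below are the placeholders from the post, not real ones:

```sql
-- Lead with the filtered column; including a and b makes the index
-- "covering", so MySQL can answer from the index without touching rows.
CREATE INDEX idx_c_a_b ON table_name (c, a, b);

-- The Extra column should then report "Using index".
EXPLAIN SELECT a, b FROM table_name WHERE c = 'd';
```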

--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Table optimization ideas needed

2008-03-24 Thread J. Hill
From a quick perusal of the article Chris mentions, I'd generally agree 
with that view about table optimization --  I'm not an expert on 
Postgres, but the recommendations generally seem to apply to MySQL as well.


My basic view is that, if you are routinely doing a select on millions 
of rows, you probably need to take a step back and consider your general 
structure.


Without revising the structure, and beyond the indexing Chris suggested,
a couple of off-the-cuff ideas: if the stability of the table is not
critical, use MyISAM tables rather than InnoDB tables; try using stored
procedures (MySQL >= 5.0).


While it isn't always true, my experience is that any table with a 
million rows or more is a problem created because the initial assumption 
was that the table would never grow that large so the general data 
structure was not fully thought through.


Google is capable of handling searches through billions of rows of data 
not because it uses supercomputers but because of its data structure.


Just my two centavos,

Jeff

Chris wrote:

Shelley wrote:

Hi all,

I made a post a week ago to ask for the idea of the fastest way to 
get table records.

Fyi,
http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table 



Look at the time even a 'count(1)' took.
Then you can imagine how much time sql such as select a,b from 
table_name where c='d' will take.


I have a lot of tables like that. So my questions is:
What's your practice to optimize tables like that?


I pretty much follow what I've said in this article:

http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database



