[PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Roberto Mansfield
Shelley wrote:
 Hi all,
 
 I made a post a week ago asking about the fastest way to get the total
 number of records from a table. FYI:
 http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
 

Hi Shelley,

I question your mysql database setup. I have a log table with about 2
million records which I used for comparison. Here are the queries you
mentioned in your link above:

 SELECT COUNT(*) FROM test_table;
 SELECT COUNT(id) FROM test_table;
 SELECT COUNT(1) FROM test_table;
 
 The results go here:
 
 mysql> SELECT COUNT(*) FROM test_table;
 +----------+
 | count(*) |
 +----------+
 | 20795139 |
 +----------+
 1 row in set (1 min 8.22 sec)

A count(*) against the entire table does not scan all rows, so this
should be very fast. In my case, a full table count was about 0.06 seconds.


 mysql> SELECT COUNT(id) FROM test_table;
 +-----------+
 | count(id) |
 +-----------+
 |  20795139 |
 +-----------+
 1 row in set (1 min 1.45 sec)

This query counts all the rows where id is not null. This DOES require a
scan of all your table rows, so it will be much slower. In my table, this
type of query ran in about 6.6 seconds. Scaling to 10x (for 20 million
records), it would take about a minute, which is in line with your results.

 mysql> SELECT COUNT(1) FROM test_table;
 +----------+
 | count(1) |
 +----------+
 | 20795139 |
 +----------+
 1 row in set (56.67 sec)

This query was very fast in my table as well. Since 1 is never null,
there is no full table scan.
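
If you want to see how your own server plans each of these counts, a quick
sketch like the one below will show it (the connection details and the
test_table name here are just placeholders):

<?php
// Sketch only: run EXPLAIN on each COUNT variant and print the Extra column.
// Connection parameters and the table name are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'testdb');

$queries = array(
    "SELECT COUNT(*)  FROM test_table",
    "SELECT COUNT(id) FROM test_table",
    "SELECT COUNT(1)  FROM test_table",
);

foreach ($queries as $sql) {
    $result = $db->query("EXPLAIN " . $sql);
    $row    = $result->fetch_assoc();
    // On a MyISAM table a plain COUNT(*) usually shows
    // "Select tables optimized away" in Extra; a plan that reads the
    // index or the table means rows are actually being scanned.
    echo $sql, "\n  Extra: ", $row['Extra'], "\n";
    $result->free();
}
$db->close();
?>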


 Then you can imagine how much time SQL such as select a,b from
 table_name where c='d' will take.

If c is indexed and the number of resulting rows is small, then this
will be fast. But if you need to retrieve several hundred thousand rows,
this will take time to process. In my log table, a count(*) of rows with
a restriction:

select count(*) from log where username = 'test';

returned 104777 in ~0.4 seconds, but retrieving all those records (from
a separate mysql db host) took > 15 seconds.
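
To see the same effect for yourself, a rough sketch like this (the table,
column names and connection details are placeholders, not my actual setup)
times the restricted count against actually fetching the rows:

<?php
// Sketch only: compare counting matching rows with actually retrieving them.
// Connection details, the log table and its columns are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'logdb');

$start = microtime(true);
$res = $db->query("SELECT COUNT(*) FROM log WHERE username = 'test'");
list($total) = $res->fetch_row();
$res->free();
printf("count: %d rows in %.2f sec\n", $total, microtime(true) - $start);

$start = microtime(true);
$res = $db->query("SELECT * FROM log WHERE username = 'test'");
$fetched = 0;
while ($row = $res->fetch_assoc()) {
    $fetched++;   // just walk the result set; a real script would process each row
}
$res->free();
printf("fetch: %d rows in %.2f sec\n", $fetched, microtime(true) - $start);

$db->close();
?>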

Roberto

-- 
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Chris

Roberto Mansfield wrote:

A count(*) against the entire table does not scan all rows, so this
should be very fast. In my case, a full table count was about 0.06 seconds.


You're assuming she's using a MyISAM table - which will indeed be fast.
Switch to an InnoDB table (or Falcon, if you're feeling adventurous) and
you'll have this issue, because those engines support transactions and are
prone to MVCC issues.
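
It's worth checking which engine the table is actually using before
assuming either behaviour; a quick sketch (connection details and the
table name are placeholders):

<?php
// Sketch only: report the storage engine for a table.
// Connection parameters and the table name are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'testdb');

$result = $db->query("SHOW TABLE STATUS LIKE 'test_table'");
$status = $result->fetch_assoc();
echo "Engine: ", $status['Engine'], "\n";   // e.g. MyISAM or InnoDB
$result->free();
$db->close();
?>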


--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Shelley
Yes, Chris, you are right. I think I mentioned in the archive that the table
uses the InnoDB engine.
Maybe Roberto didn't notice that.

On Thu, Mar 27, 2008 at 7:26 AM, Chris [EMAIL PROTECTED] wrote:

 Roberto Mansfield wrote:

  A count(*) against the entire table does not scan all rows, so this
  should be very fast. In my case, a full table count was about 0.06 seconds.
 

 You're assuming she's using a MyISAM table - which will indeed be fast.
 Switch to an InnoDB table (or Falcon, if you're feeling adventurous) and
 you'll have this issue, because those engines support transactions and are
 prone to MVCC issues.


 --
 Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley


[PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Shelley
Does that mean MySQL is slow?

Currently, one practice I am using is: compute $m = $userId % 256, then store
$userId's information in table_$m.
That way a table with more than 20,000,000 records is split into 256 tables,
which can speed up the query.
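
Roughly, the lookup side works like the sketch below (the column names and
connection details are only placeholders for illustration):

<?php
// Sketch only: pick the shard table from the user id and query it.
// Connection details and the columns (a, b, user_id) are placeholders.
function shardTableFor($userId) {
    return "table_" . ($userId % 256);   // 256 shards: table_0 .. table_255
}

$db = new mysqli('localhost', 'user', 'password', 'userdb');

$userId = 12345;
$table  = shardTableFor($userId);
$result = $db->query(sprintf(
    "SELECT a, b FROM %s WHERE user_id = %d", $table, $userId
));
$row = $result->fetch_assoc();
$result->free();
$db->close();
?>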

I'd like to hear your opinions on that.

Actually, I wonder how Facebook deals with this. Does anybody know?


On Wed, Mar 26, 2008 at 10:05 PM, Roberto Mansfield [EMAIL PROTECTED]
wrote:

  Then you can imagine how much time SQL such as select a,b from
  table_name where c='d' will take.

 If c is indexed and the number of resulting rows is small, then this
 will be fast. But if you need to retrieve several hundred thousand rows,
 this will take time to process. In my log table, a count(*) of rows with
 a restriction:

 select count(*) from log where username = 'test';

 returned 104777 in ~0.4 seconds, but retrieving all those records (from
 a separate mysql db host) took > 15 seconds.

 Roberto




-- 
Regards,
Shelley


Re: [PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Chris



Actually, I wonder how Facebook deals with this. Does anybody know?


There's lots of info here:

http://highscalability.com/

about various websites (some using MySQL, some using Postgres, some
using Oracle, etc.).


--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [PHP-DB] Re: Table optimization ideas needed

2008-03-26 Thread Shelley
On Thu, Mar 27, 2008 at 10:06 AM, Chris [EMAIL PROTECTED] wrote:


  Actually, I wonder how Facebook deals with this. Does anybody know?
 

 There's lots of info here:

 http://highscalability.com/

  about various websites (some using MySQL, some using Postgres, some using
  Oracle, etc.).

Thanks.



 --
  Postgresql & php tutorials
 http://www.designmagick.com/




-- 
Regards,
Shelley