Does that mean MySQL is slow? One practice I currently use is: compute $m = $userId % 256, then store $userId's information in table_$m. That way the table with more than 20,000,000 records is split into 256 tables, which speeds up the queries.
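A minimal sketch of the modulo sharding described above, written in Python for illustration (the original is PHP: $m = $userId % 256, stored in table_$m). The shard count and table naming come from the post; the helper names shard_table and build_query are hypothetical.

```python
NUM_SHARDS = 256  # the post splits one large table into 256 shards

def shard_table(user_id: int) -> str:
    """Map a user id to one of 256 shard tables, table_0 .. table_255."""
    return f"table_{user_id % NUM_SHARDS}"

def build_query(user_id: int) -> str:
    # NOTE: real code should use parameterized queries, never string
    # interpolation of untrusted values; user_id is assumed a trusted int here.
    return f"SELECT a, b FROM {shard_table(user_id)} WHERE user_id = {user_id}"

print(shard_table(20795139))  # -> table_3
```

Every lookup for a given user then touches a table roughly 1/256 the original size, at the cost of cross-shard queries (e.g. a global COUNT) needing to visit all 256 tables.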
I'd like to hear your opinions on that. Actually, I wonder how Facebook deals with this problem. Does anybody know?

On Wed, Mar 26, 2008 at 10:05 PM, Roberto Mansfield <[EMAIL PROTECTED]> wrote:
> Shelley wrote:
> > Hi all,
> >
> > I made a post a week ago to ask for the idea of the fastest way to get
> > table records. Fyi,
> > http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
>
> Hi Shelley,
>
> I question your MySQL database setup. I have a log table with about 2
> million records which I used for comparison. Here are the queries you
> mentioned in your link above:
>
> > SELECT COUNT(*) FROM test_table;
> > SELECT COUNT(id) FROM test_table;
> > SELECT COUNT(1) FROM test_table;
> >
> > The results go here:
> >
> > mysql> SELECT COUNT(*) FROM test_table;
> > +----------+
> > | count(*) |
> > +----------+
> > | 20795139 |
> > +----------+
> > 1 row in set (1 min 8.22 sec)
>
> A COUNT(*) against the entire table does not scan all rows, so this
> should be very fast. In my case, a full table count took about 0.06 seconds.
>
> > mysql> SELECT COUNT(id) FROM test_table;
> > +-----------+
> > | count(id) |
> > +-----------+
> > | 20795139 |
> > +-----------+
> > 1 row in set (1 min 1.45 sec)
>
> This query counts all the rows where id is not NULL. This DOES require a
> scan of all your table rows, so it will be much slower. On my table,
> this type of query ran in about 6.6 seconds. Scaling to 10x (for 20 million
> records), it would take about a minute, which matches your results.
>
> > mysql> SELECT COUNT(1) FROM test_table;
> > +----------+
> > | count(1) |
> > +----------+
> > | 20795139 |
> > +----------+
> > 1 row in set (56.67 sec)
>
> This query was very fast on my table as well. Since "1" is never NULL,
> there is no full table scan.
>
> > Then you can imagine how much time SQL such as "select a,b from
> > table_name where c='d'" will take.
> If c is indexed and the number of resulting rows is "small", then this
> will be fast. But if you need to retrieve several hundred thousand rows,
> it will take time to process. On my log table, a count(*) with
> a restriction:
>
> select count(*) from log where username = 'test';
>
> returned 104777 in ~0.4 seconds, but retrieving all those records (from
> a separate MySQL db host) took 15 seconds.
>
> Roberto

--
Regards,
Shelley
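Two points in the quoted exchange are worth pinning down. First, the "COUNT(*) does not scan all rows" claim holds for MyISAM, which stores an exact row count, so an unrestricted COUNT(*) returns instantly; InnoDB has no such counter and must scan an index, which likely explains the gap between Roberto's 0.06 s and the 68 s above. Second, COUNT(id) differs semantically from COUNT(*) and COUNT(1): it skips NULLs. The sketch below demonstrates the NULL behavior and the effect of indexing the filter column, using SQLite in place of MySQL (the NULL-counting rules are the same in both engines); the table and column names are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (id INTEGER, c TEXT)")
conn.executemany(
    "INSERT INTO test_table (id, c) VALUES (?, ?)",
    [(1, "d"), (2, "d"), (None, "e")],  # one row has a NULL id
)

count_star = conn.execute("SELECT COUNT(*) FROM test_table").fetchone()[0]
count_id = conn.execute("SELECT COUNT(id) FROM test_table").fetchone()[0]
count_one = conn.execute("SELECT COUNT(1) FROM test_table").fetchone()[0]
print(count_star, count_id, count_one)  # -> 3 2 3: COUNT(id) skipped the NULL

# Indexing the filter column is what makes "select a,b ... where c='d'"
# fast, as Roberto notes; without the index the WHERE clause forces a scan.
conn.execute("CREATE INDEX idx_c ON test_table (c)")
rows = conn.execute("SELECT id, c FROM test_table WHERE c = 'd'").fetchall()
print(len(rows))  # -> 2
```

So the three COUNT forms agree only when the counted column has no NULLs, and on InnoDB none of them is free: large exact counts either pay a scan or are maintained separately (e.g. a summary table).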