On Thu, Dec 11, 2014 at 4:19 PM, Simon Slavin <slav...@bigfraud.org> wrote:

> In my table which had about 300 million (sic.) rows I did this
> SELECT count(*) FROM myTable;
> to count the number of rows.  After half an hour it was still processing
> and I had to kill it.
>

I have a little utility that connects to Oracle and runs a big UNION ALL
query to get the row counts of all my tables (82 currently):

TOTAL: 1,900,343 rows in 20 tables (out of 82)
0.129u 0.051s 0:00.66 25.7%     0+0k 32+32io 0pf+0w  (COLD)
0.128u 0.045s 0:00.34 47.0%     0+0k 0+32io 0pf+0w (HOT)

Granted, it's not 300M rows, just 1.9M, but that's 660ms (340ms when in
cache), and that includes the startup and connect time (~170ms).

The plan is an INDEX (FAST FULL SCAN) of the PK (a RAW(16) holding a GUID).
FWIW, for context/comparison. --DD
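For illustration, here is a minimal sketch of the UNION ALL counting pattern
described above, using Python's stdlib sqlite3 module and an in-memory
database with made-up table names (t1, t2 are hypothetical; the real utility
and its 82 Oracle tables are not shown here):

```python
import sqlite3

# Throwaway in-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1(id INTEGER PRIMARY KEY);
    CREATE TABLE t2(id INTEGER PRIMARY KEY);
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1);
""")

# One UNION ALL query returns a (table_name, row_count) pair per table,
# so all counts come back in a single round trip.
sql = """
    SELECT 't1' AS tbl, count(*) AS n FROM t1
    UNION ALL
    SELECT 't2', count(*) FROM t2
"""
counts = dict(conn.execute(sql).fetchall())
print(counts)  # {'t1': 3, 't2': 1}
```

In a real tool the per-table SELECTs would be generated from the schema
catalog rather than written by hand.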

PS: I was actually surprised it was that cheap.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
