Been pondering this today. There seems to be a 7-day limit, or a limit of around 3000 tweets, on the API. At first my gut told me that was for load reasons, and it made sense.

I started thinking about paging results in development projects I have worked on.

Looking at this from a database perspective:

SELECT foo, bar FROM something WHERE name = 'test' ORDER BY id LIMIT 0, 200;
Skip 0 rows, return the next 200 (MySQL's LIMIT takes an offset, not an id); may take x seconds.

Next page:
SELECT foo, bar FROM something WHERE name = 'test' ORDER BY id LIMIT 200, 200;
Skip the first 200 rows, return the next 200; may also take x seconds.

Arbitrary page:
SELECT foo, bar FROM something WHERE name = 'test' ORDER BY id LIMIT 5000, 200;
Skip the first 5000 rows, return the next 200; will also take x seconds.

In each case the time x stays roughly the same, assuming an index on id. (Strictly speaking, a large OFFSET still makes the server walk past and discard all the skipped rows, so very deep pages do get slower, though on a single indexed table the difference is usually modest.) This also assumes all the data is in a single database, or is normalized in a way that facilitates this.
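The three pages above can be sketched end to end with SQLite (a stand-in here; the table and column names `something`, `foo`, `bar`, `name` are the hypothetical ones from the queries, and SQLite's `LIMIT count OFFSET offset` is equivalent to MySQL's `LIMIT offset, count`):

```python
import sqlite3

# In-memory stand-in for the hypothetical "something" table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE something (id INTEGER PRIMARY KEY, foo TEXT, bar TEXT, name TEXT)"
)
conn.executemany(
    "INSERT INTO something (foo, bar, name) VALUES (?, ?, 'test')",
    [(f"foo{i}", f"bar{i}") for i in range(6000)],
)

def page(offset, count=200):
    # LIMIT ? OFFSET ? == MySQL's LIMIT offset, count:
    # skip `offset` rows, return the next `count`.
    return conn.execute(
        "SELECT foo, bar FROM something WHERE name = 'test' "
        "ORDER BY id LIMIT ? OFFSET ?",
        (count, offset),
    ).fetchall()

first = page(0)      # rows 1..200
second = page(200)   # rows 201..400
deep = page(5000)    # rows 5001..5200
```

Each call returns 200 rows; the only thing that changes page to page is how many rows the server skips before it starts returning them.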

This question is just one of curiosity. I am betting the tweets table has been distributed across many tables, and that there is no simple way to get at "paged" results as shown above?

If it has not, I am not seeing any performance hit in getting the first 100 records, or a subset that starts 20,000 tweets into the record set.
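For what it's worth, one way services sidestep both deep OFFSET scans and the difficulty of paging across distributed tables is keyset (seek) pagination: remember the last id you saw and ask for rows after it, which an index can jump to directly no matter how deep you are. A minimal sketch with SQLite and hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE something (id INTEGER PRIMARY KEY, foo TEXT)")
conn.executemany(
    "INSERT INTO something (foo) VALUES (?)",
    [(f"foo{i}",) for i in range(1000)],
)

def page_after(last_id, count=200):
    # Seek straight to id > last_id via the primary-key index;
    # nothing is scanned and thrown away, regardless of depth.
    return conn.execute(
        "SELECT id, foo FROM something WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, count),
    ).fetchall()

first = page_after(0)              # ids 1..200
second = page_after(first[-1][0])  # ids 201..400
```

The trade-off is that you can only step forward from a known id rather than jump to an arbitrary page number, which may be part of why APIs expose "since id" / "max id" style cursors instead of page offsets.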
--
Scott * If you contact me off list replace talklists@ with scott@ *
