As I read the docs, a temp table doesn't solve our problem: it does not
persist between sessions, and with a web page there is no guarantee
that you will receive the same connection between requests.  It looks
like you either have to create a real table (which is undesirable
because it has to be physically synced, and TTFB will be very poor) or
create an application tier between the web tier and the database tier
to allow data to persist between requests, tied to a unique session id.

Looks like the solution to this problem is not an RDBMS, IMHO.

Alex Turner
NetEconomist

Did you miss the proposal to store arrays of the found rows' ids in a "cache" table? Is 4 bytes per result row still too large?
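
For concreteness, here is a minimal sketch of that cache-table idea, assuming psycopg2 on the client and the contrib intagg module for int_array_aggregate; the table, DSN and session id are made up for illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # hypothetical DSN
    cur = conn.cursor()

    # One row per web session: the whole found set as a single int array.
    cur.execute("""
        CREATE TABLE result_cache (
            session_id text PRIMARY KEY,
            row_ids    integer[]
        )""")

    # Store the found ids once, keyed by the session id.
    cur.execute("""
        INSERT INTO result_cache (session_id, row_ids)
        SELECT %s, int_array_aggregate(id) FROM temp""", ('abc123',))
    conn.commit()

    # Later requests slice one page out of the array, e.g. rows 51-100.
    cur.execute(
        "SELECT row_ids[51:100] FROM result_cache WHERE session_id = %s",
        ('abc123',))
    page = cur.fetchone()[0]

The nice part is that paging is then just an array slice on one row, instead of re-running the search for every request.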


If it's still too large, you can implement the same cache in the filesystem!
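
A rough sketch of the filesystem version (the cache directory and helper names are hypothetical):

    import os, pickle

    CACHE_DIR = "/tmp/result_cache"   # hypothetical cache location

    def save_ids(session_id, ids):
        # One small file per session, holding the list of found ids.
        with open(os.path.join(CACHE_DIR, session_id), "wb") as f:
            pickle.dump(ids, f)

    def load_page(session_id, offset, limit):
        # Re-read the cached ids and return one page of them.
        with open(os.path.join(CACHE_DIR, session_id), "rb") as f:
            return pickle.load(f)[offset:offset + limit]
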
If you want to fetch 100,000 rows containing just an integer, in my case (psycopg) it's a lot faster to use an array aggregate. Time to get the data into the application (including the query):


select id from temp
        => 849 ms
select int_array_aggregate(id) as ids from temp
        => 300 ms
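
In psycopg that difference looks roughly like this (a sketch, assuming a cursor cur is already open and contrib intagg provides int_array_aggregate):

    # Row-at-a-time: 100,000 rows to transfer and parse individually.
    cur.execute("SELECT id FROM temp")
    ids = [row[0] for row in cur.fetchall()]

    # Aggregated: one row, one column, which psycopg hands back
    # as an ordinary Python list of ints.
    cur.execute("SELECT int_array_aggregate(id) AS ids FROM temp")
    ids = cur.fetchone()[0]
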

So you can always fetch the whole query results (in the form of one integer per row) and cache them in the filesystem. It won't work if you have 10 million rows, though!

