Supposing your searches display results that are rows from one specific table, you could create a cache table:
search_id   serial primary key
index_n     position of this result in the global result set
result_id   id of the resulting row
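A minimal sketch of that table (the column names are from above; the data table and the integer id type are assumptions):

```sql
-- hypothetical cache table, one row per search result
CREATE TABLE cache (
    search_id  serial PRIMARY KEY,
    index_n    integer NOT NULL,  -- position in the global result set
    result_id  integer NOT NULL   -- id of the matching row in the data table
);
```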
Then, filling the cache for a search with 50k results is a single INSERT INTO cache ... SELECT from the search query, with some way to set the index_n column; a temporary sequence works.
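Sketched out, with a temporary sequence numbering the rows (the data table `items`, its columns, and the search predicate are placeholders, not from the original post):

```sql
-- hypothetical: number results with a temporary sequence while caching them
CREATE TEMPORARY SEQUENCE cache_pos;

INSERT INTO cache (index_n, result_id)
SELECT nextval('cache_pos'), sub.id
FROM (
    -- the inner query fixes the ordering before the sequence is applied
    SELECT id
    FROM items
    WHERE title ILIKE '%search term%'
    ORDER BY relevance DESC
) sub;
```

The subquery matters: ordering in the outer SELECT would not guarantee that nextval() is evaluated in result order.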
Then, to display a page, SELECT from the cache table with index_n BETWEEN so and so, and join to the data table.
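For example, fetching the third page of 20 results might look like this (again, `items` is an assumed data table):

```sql
-- hypothetical: rows 41..60 of the cached result set
SELECT i.*
FROM cache c
JOIN items i ON i.id = c.result_id
WHERE c.index_n BETWEEN 41 AND 60
ORDER BY c.index_n;
```

An index on cache (index_n) keeps this lookup cheap regardless of how deep the page is.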
If you're worried that it might take up too much space: store an integer array of result_ids instead of a single result_id; that way you insert fewer rows and save disk space. Generate the array with a custom aggregate, then just grab a row from this table: it contains all the ids of the rows to display.
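One way to sketch the array variant with a custom aggregate (the aggregate definition follows the array_append pattern from the PostgreSQL docs; storing the whole result set in a single row per search is one possible layout, not the only one):

```sql
-- hypothetical custom aggregate that collects values into an array
CREATE AGGREGATE array_accum (anyelement) (
    sfunc    = array_append,
    stype    = anyarray,
    initcond = '{}'
);

-- one row per search, holding every result id in order
CREATE TABLE cache_arrays (
    search_id   serial PRIMARY KEY,
    result_ids  integer[] NOT NULL
);

INSERT INTO cache_arrays (result_ids)
SELECT array_accum(sub.id)
FROM (
    SELECT id
    FROM items
    WHERE title ILIKE '%search term%'
    ORDER BY relevance DESC
) sub;
```

To display a page you then fetch the single row for the search and slice the array client-side (or with array subscripting, e.g. result_ids[41:60]).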