If you assume you are transferring no more session data between your session server (let's say an RDBMS) and the app server than between the app server and the client, an out-of-band 100Mb network for session information is plenty of bandwidth.
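To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the ~7 kB payload is an assumed midpoint of the 6-8 kB gzipped page size discussed below, not a measured figure):

    # Back-of-the-envelope: how many page-sized session payloads
    # per second fit through a dedicated 100 Mbit/s link?
    link_bits_per_sec = 100_000_000      # 100 Mbit/s out-of-band network
    payload_bytes = 7 * 1024             # assumed ~7 kB session data per page view
    pages_per_sec = link_bits_per_sec / (payload_bytes * 8)
    print(f"~{pages_per_sec:.0f} page views/sec")   # ~1744

So even if every page view moved a full page's worth of session data, the link wouldn't saturate until somewhere north of 1,700 page views per second.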
So if you count on a mean page size of 6-8 kbytes gzipped, that will prevent you from caching the first N results of the Big Slow Search Query as a native object in the user session state (say, a list of integers indicating which rows match), and you will have to redo the Big Slow Search Query every time the user clicks Next Page instead of grabbing a set of cached row IDs and doing a fast SELECT WHERE id IN ...
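The cached-IDs pattern looks roughly like this (a sketch only: run_big_slow_search(), the session dict, and the items table are hypothetical stand-ins; cur is a psycopg2 cursor, which adapts a Python list to a PostgreSQL array for ANY()):

    PAGE_SIZE = 25

    def first_page(cur, session, query_params):
        # Run the expensive query once and remember which rows matched.
        ids = run_big_slow_search(cur, query_params)   # list of ints
        session["result_ids"] = ids
        return fetch_page(cur, session, page=0)

    def fetch_page(cur, session, page):
        # Next Page becomes a cheap keyed lookup instead of a re-search.
        ids = session["result_ids"][page * PAGE_SIZE:(page + 1) * PAGE_SIZE]
        cur.execute("SELECT * FROM items WHERE id = ANY(%s)", (ids,))
        return cur.fetchall()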
This is the worst case... I'd gzip() the row IDs and stuff them in the session; that's always better than blowing up the database with the Big Slow Search Query every time someone hits Next Page...
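A sketch of the packing (assuming the IDs fit in signed 32 bits; struct and gzip are standard library):

    import gzip
    import struct

    def pack_ids(ids):
        raw = struct.pack(f"<{len(ids)}i", *ids)   # 4 bytes per row ID
        return gzip.compress(raw)

    def unpack_ids(blob):
        raw = gzip.decompress(blob)
        return list(struct.unpack(f"<{len(raw) // 4}i", raw))

    ids = list(range(100000, 101000))    # 1000 example row IDs
    blob = pack_ids(ids)
    print(len(blob), "bytes compressed") # 4 kB raw; gzip usually much less
    assert unpack_ids(blob) == ids

A thousand matching rows is only 4 kB before compression, so even the worst case fits comfortably inside the per-page session budget above.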
This also represents OLTP-style traffic, which PostgreSQL is pretty good at; you should easily be able to get over 100 tps. And 100 hits per second is an awful lot of traffic, more than any website I've managed will ever see.
In the latest AnandTech benchmarks, 100 hits per second on blog/forum software takes a big dual-Opteron server running .NET at 99% load... that's a lot if you count only dynamic page hits.