> The fact is, there are situations in which such extreme traffic is
> warranted. My concern is that I am not able to use postgres in such
> situations because it cannot scale to that level. I feel that it would
> be possible to reach that level with support in the postmaster for
> replication. 

Replication won't help if those are mostly write transactions. If only a
small percentage of the load is INSERTs, UPDATEs, or DELETEs -- even 1%
would be challenging at that volume -- master/slave replication might get
you somewhere.
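
To illustrate what master/slave replication buys you in that case (this is
my own sketch, not something from the original post): the application sends
all writes to the single master and spreads reads across the slaves. The
hostnames and the crude routing rule below are made up for illustration.

  # Application-side read/write splitting over one master and two
  # read-only slaves, using psycopg2. Hostnames are hypothetical.
  import random
  import psycopg2

  master = psycopg2.connect("host=master dbname=app")        # all writes
  slaves = [psycopg2.connect("host=slave%d dbname=app" % i)  # reads only
            for i in (1, 2)]

  def run(sql, params=()):
      # Anything that is not a plain SELECT goes to the master;
      # SELECTs are spread across the slaves at random.
      if sql.lstrip().upper().startswith("SELECT"):
          conn = random.choice(slaves)
      else:
          conn = master
      cur = conn.cursor()
      cur.execute(sql, params)
      if cur.description is not None:
          return cur.fetchall()
      conn.commit()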

Otherwise you're going to need to split the data into smaller, easily
managed pieces -- which of course requires that the data can be
horizontally partitioned.
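
As a rough picture of what I mean by horizontal partitioning (again my own
sketch, with made-up shard hostnames and a hypothetical "users" table):
each row is routed to one of several independent PostgreSQL instances by
hashing its key, so each box only ever sees a fraction of the traffic.

  # Hash-based routing of rows across independent PostgreSQL shards.
  # Shard hostnames and the users table are hypothetical.
  import zlib
  import psycopg2

  shards = [psycopg2.connect("host=shard%d dbname=app" % i)
            for i in range(4)]

  def shard_for(key):
      # Stable hash of the partitioning key picks one shard.
      return shards[zlib.crc32(key.encode()) % len(shards)]

  def insert_user(user_id, name):
      conn = shard_for(user_id)
      cur = conn.cursor()
      cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)",
                  (user_id, name))
      conn.commit()

The obvious cost is that anything spanning shards (joins, aggregates over
the whole data set) has to be stitched together in the application, which
is why the data has to be partitionable in the first place.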

Anyway, if you want a sane answer we need more information: the data (is
it partitionable?), the schema, the queries producing the load (simple or
complex), acceptable data delays (does a new insert need to be immediately
visible?), and so on.

Dealing with a hundred thousand queries per second isn't just challenging
for PostgreSQL; you will be hard pressed to find hardware that can push
that much data around at all, never mind the overhead of the database
itself.
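
Back-of-envelope, with numbers I am just assuming for illustration: at
100,000 queries/second, even 1 KB of result per query is roughly
100 MB/second of payload alone, before you count parsing, planning,
locking, logging and network round trips.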

