On Thu, 18 Dec 2003, Conny Thimren wrote:

> Hi,
> This is a kind of newbie question. I've been using Postgres for a long time in a low 
> transaction environment - and it is great.
> 
> Now I've got an inquiry about using PostgreSQL in a heavy-load on-line system. This 
> system must handle something like 20 questions per sec with a response time of 1/10 
> sec. Each question will result in approx 5-6 reads and a couple of updates.
> Does anybody have a feeling for whether this is realistic on an Intel-based Linux 
> server with PostgreSQL? Of course I know that this is too little info for an exact 
> answer but - as I said - maybe someone can give a hint as to whether it's possible. 
> Maybe someone running a heavy load can give an example of what is possible...

That really depends on how heavy each query is, so it's hard to say from 
what little you've given us.

If you are doing simple banking-style transactions, then you can easily 
handle this load; if you are talking about a simple shopping cart, ditto. If, 
however, you are talking about queries that join 4 or 5 tables with 
millions of rows against each other, you're going to have to test it 
yourself (a rough pgbench sketch for that is below).
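
If you want to approximate your actual mix (5-6 reads and a couple of 
updates per question) rather than the stock TPC-B-ish transaction, a custom 
pgbench script is a cheap way to do it. The following is only a sketch: it 
assumes a recent pgbench (the random() syntax in \set needs 9.6 or later, 
where the tables also carry a pgbench_ prefix), a database initialized the 
same way as the run below (pgbench -i -s 100), and a file name 
(question.sql) that is purely illustrative.

-- question.sql: rough stand-in for one "question":
-- five single-row indexed reads plus two updates.
-- Assumes -i -s 100: 10,000,000 accounts, 1,000 tellers, 100 branches.
\set aid1 random(1, 10000000)
\set aid2 random(1, 10000000)
\set aid3 random(1, 10000000)
\set tid random(1, 1000)
\set bid random(1, 100)
\set delta random(-5000, 5000)
BEGIN;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid1;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid;
SELECT bbalance FROM pgbench_branches WHERE bid = :bid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid1;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
END;

Run it with something like

pgbench -n -f question.sql -c 20 -T 60 -r

and watch the per-statement latencies that -r reports; if the average stays 
well under your 100 ms budget at 20-odd clients, you're in the right 
ballpark. The statement mix here is a placeholder; swap in your own schema 
and queries for anything resembling a real answer.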

With the autovacuum daemon running, I ran an overnight pgbench test 
(more for general-purpose burn-in than anything else):

pgbench -i -s 100
pgbench -c 50 -t 250000

That's 10 million transactions; the run took just over twelve hours to 
complete, at 220+ transactions per second.
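
Back-of-the-envelope, assuming the stock pgbench transaction is about five 
statements (three updates, a select, and an insert):

  your target:   20 questions/sec x ~8 statements each   = ~160 statements/sec
  the run above: 220 transactions/sec x ~5 statements    = ~1100 statements/sec

which suggests plenty of headroom, provided each of your statements stays as 
cheap as pgbench's single-row lookups and updates.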

So, for financial-style work, you're likely to find it easy to meet your 
target. But as the tables get bigger, more complex, and more interconnected, 
you'll see performance drop off.

