Thanks again everyone for the excellent suggestions.
I looked into IO::Reactor, but after a few hours of fiddling decided I was
getting the kind of performance I wanted from a slightly more than modest
number of threads, and opted (due to dev timelines) to come back to
patching the SNMP
Hi all,
I bet you get tired of the same ole questions over and
over.
I'm currently working on an application that will poll
thousands of cable modems per minute and I would like
to use PostgreSQL to maintain state between polls of
each device. This requires a very heavy amount of
updates in
On Aug 18, 2005, at 10:24 PM, Mark Cotner wrote:
I'm currently working on an application that will poll
thousands of cable modems per minute and I would like
to use PostgreSQL to maintain state between polls of
each device. This requires a very heavy amount of
updates in place on a reasonably
Excellent feedback. Thank you. Please do keep in mind I'm storing the
results of SNMP queries. The majority of the time each thread is in a wait
state, listening on a UDP port for a return packet. The number of threads is
high because in order to sustain poll speed I need to minimize the impact
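As a sketch of that wait state (Python here purely for illustration; the real poller is built on the Perl SNMP libraries, and the echo "device", port, and payloads below are invented): each poller thread blocks in recvfrom() until its reply arrives, so a large thread count mostly costs memory, not CPU.

```python
import socket
import threading

def poll_device(target_addr, request, results, timeout=2.0):
    """Send one UDP request and block until the reply arrives (or timeout).

    In the real application the payload would be an SNMP GET PDU; here it
    is an opaque byte string, since the point is only that the thread
    spends its life blocked on the socket, off-CPU.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, target_addr)
        reply, _ = sock.recvfrom(65535)   # thread blocks here, waiting
        results.append(reply)
    except socket.timeout:
        results.append(None)
    finally:
        sock.close()

# A stand-in "cable modem" that echoes requests back, so the sketch runs.
echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))
echo_addr = echo.getsockname()

results = []
threads = [threading.Thread(target=poll_device,
                            args=(echo_addr, b"req-%d" % i, results))
           for i in range(8)]
for t in threads:
    t.start()
for _ in threads:
    data, addr = echo.recvfrom(65535)
    echo.sendto(data, addr)          # echo the "SNMP reply" back
for t in threads:
    t.join()
echo.close()
```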
On Aug 19, 2005, at 12:14 AM, Mark Cotner wrote:
Excellent feedback. Thank you. Please do keep in mind I'm storing the
results of SNMP queries. The majority of the time each thread is in a wait
state, listening on a UDP port for a return packet. The number of threads is
high because in
I have managed tx speeds that high from PostgreSQL, going even as high
as 2500/sec for small tables, but it does require a good RAID
controller card (yes, I'm even running with fsync on). I'm using a 3ware
9500S-8MI with Raptor drives in multiple RAID 10s. The box wasn't too
$$$ at just around $7k.
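Back-of-the-envelope arithmetic (my numbers, not a measurement of the box above) on why the controller card matters at that commit rate:

```python
# 2500 fsync'd commits/sec leaves at most 0.4 ms per commit.
commit_rate = 2500                     # commits per second, fsync on
per_commit_ms = 1000.0 / commit_rate   # 0.4 ms budget per WAL flush

# A 10k RPM Raptor rotates 10000/60 ~ 167 times/sec, so its average
# rotational latency alone (half a rotation) is about 3 ms:
rpm = 10000
avg_rotational_ms = 0.5 * 60000.0 / rpm   # = 3.0 ms

# 0.4 ms is far below 3 ms: the spindles alone cannot flush every
# commit, so the controller's write cache must be absorbing the syncs
# (which is only safe if that cache is battery-backed).
print(per_commit_ms, avg_rotational_ms)
```

Group commit (several transactions sharing one WAL flush) can also raise the aggregate rate past the per-flush latency, so the two explanations aren't exclusive.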
Bob Ippolito [EMAIL PROTECTED] writes:
If you don't want to optimize the whole application, I'd at least
just push the DB operations down to a very small number of
connections (*one* might even be optimal!), waiting on some kind of
thread-safe queue for updates from the rest of the
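A sketch of the single-writer pattern Bob describes (assuming Python; sqlite3 stands in for PostgreSQL since the usage pattern is the same, and the table, queue, and sentinel names are invented for illustration): poller threads only enqueue updates, and one dedicated thread owns the only database connection.

```python
import os
import queue
import sqlite3
import tempfile
import threading

# Poller threads put (modem_id, state) tuples here instead of each
# opening its own database connection.
updates = queue.Queue()
STOP = object()          # sentinel telling the writer thread to exit

def db_writer(db_path):
    """The only thread that talks to the database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS modem_state "
                 "(id INTEGER PRIMARY KEY, state TEXT)")
    while True:
        item = updates.get()
        if item is STOP:
            break
        conn.execute("INSERT OR REPLACE INTO modem_state (id, state) "
                     "VALUES (?, ?)", item)
    conn.commit()          # one commit for the whole drained batch
    conn.close()

db_path = os.path.join(tempfile.mkdtemp(), "modem_state.db")
writer = threading.Thread(target=db_writer, args=(db_path,))
writer.start()

# Simulated poller threads: they enqueue and never touch the DB.
def poller(modem_id):
    updates.put((modem_id, "online"))

pollers = [threading.Thread(target=poller, args=(i,)) for i in range(100)]
for t in pollers:
    t.start()
for t in pollers:
    t.join()
updates.put(STOP)
writer.join()

conn = sqlite3.connect(db_path)
count = conn.execute("SELECT count(*) FROM modem_state").fetchone()[0]
conn.close()
```

Funneling everything through one connection also lets the writer batch many updates per commit, amortizing the fsync cost that otherwise caps the per-connection transaction rate.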
Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
If you don't want to optimize the whole application, I'd at least
just push the DB operations down to a very small number of
connections (*one* might even be optimal!), waiting on some kind of
thread-safe queue for updates from the
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
As far as the question can PG do 1-2k xact/sec, the answer is yes
if you throw enough hardware at it. Spending enough money on the
disk subsystem is the key ...
The 1-2k xact/sec for MySQL seems suspicious, sounds very much like
Alex mentions a nice setup, but I'm pretty sure I know how to beat
that IO subsystem's performance by at least 1.5x or 2x. Possibly
more. (No, I do NOT work for any vendor I'm about to discuss.)
Start by replacing the WD Raptors with Maxtor Atlas 15K IIs.
At 5.5ms average access,
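Rough arithmetic on what that access time means for random IO (my calculation, not a vendor spec sheet; the 8-spindle count is an assumption matching the 8-port controller mentioned earlier):

```python
# 5.5 ms is the average access time quoted above.
avg_access_ms = 5.5
iops_per_drive = 1000.0 / avg_access_ms    # ~182 random IOs/sec/spindle

# In a RAID 10, a random read can be serviced by either side of each
# mirror, so read IOPS scale roughly with the total spindle count:
spindles = 8
raid10_read_iops = iops_per_drive * spindles   # ~1450 for 8 drives
print(round(iops_per_drive), round(raid10_read_iops))
```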
At 09:58 AM 8/19/2005, Andreas Pflug wrote:
The 1-2k xact/sec for MySQL seems suspicious; sounds very much like
write-back cached, not write-through, esp. considering that heavy
concurrent write access isn't said to be MySQL's strength...
Don't be suspicious.
I haven't seen the code under