Hello.
My application generates a large number of inserts (~2000 per second)
over a single connection to PostgreSQL. All queries are buffered in
memory and then the whole buffer is sent to the DB. But when I use two
connections to PostgreSQL instead of one on a dual-core CPU (i.e. I use
two processes of
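One way to cut per-statement overhead on a single connection is to collapse the buffered rows into one multi-row INSERT before sending it. A minimal sketch in Python (the table name `events`, its columns, and the integer-only values are assumptions for illustration; real code should use the driver's parameter binding or proper quoting to avoid SQL injection):

```python
def build_batch_insert(table, columns, rows):
    """Collapse many single-row INSERTs into one multi-row statement.

    Values are assumed to be integers here; a real application must
    bind parameters or quote values via the driver.
    """
    values = ",".join(
        "(" + ",".join(str(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({','.join(columns)}) VALUES {values};"

sql = build_batch_insert("events", ["a", "b", "c"], [(1, 2, 3), (4, 5, 6)])
# sql == "INSERT INTO events (a,b,c) VALUES (1,2,3),(4,5,6);"
```

Sending one such statement per buffer flush saves a network round trip and a parse/plan cycle per row, which is usually the bottleneck at ~2000 inserts per second.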
Hello.
> I don't think so. Postgres spawns a single process for each connection,
> so each connection is going to be confined to a single core.
Thanks for your answer.
I know that I can use a connection pooler to reuse previously created
connections. Can poolers balance queries coming from my conne
Hello.
> You can use it for whatever you're generating.
I've tested this technique, and I'm amazed! 12 inserts per
~600ms! Thanks for your help.
> Multiple cores are not the solution to your problem here, but COPY
> almost certainly is :)
But as far as I can see, this approach doesn't work over the network.
> Sure it does.
>
> copy from STDIN
> 213 345 567
> 847 837 473
> \.
>
Thanks. Was this query entered in the psql shell?
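The quoted snippet shows COPY's text format: one tab-separated line per row, with the stream ended by a line containing only `\.` when it comes from psql's stdin. The same byte stream works through a driver's COPY API. A minimal Python sketch of the per-row encoding (the function name is hypothetical):

```python
def copy_text_row(values):
    """Format one row for PostgreSQL's COPY ... FROM STDIN text format.

    Columns are tab-separated; backslash, tab, newline, and carriage
    return inside a value must be backslash-escaped; NULL becomes \\N.
    """
    def enc(v):
        if v is None:
            return r"\N"
        s = str(v)
        return (s.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))
    return "\t".join(enc(v) for v in values) + "\n"

payload = "".join(copy_text_row(r) for r in [(213, 345, 567), (847, 837, 473)])
# payload matches the two data lines in the quoted example
```

Because COPY sends the whole batch as one stream, the server parses the statement once instead of once per row, which is why it outruns even batched INSERTs.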
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
> There is a network API for COPY. Look up pg_put_line (or PQputLine
> or whatever the convention is for whatever API you're using).
Thanks for your answer. I use Erlang (erlang.org) + pgsql2 (a
native Erlang driver maintained by the ejabberd developers). All I
have is the following functions: