Hello

a) look at the COPY statement and the COPY API protocol - it can be up to
100x faster than individual INSERTs
http://www.postgresql.org/docs/8.3/static/libpq-copy.html
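
For example, a minimal libpq sketch of COPY ... FROM STDIN (the table name
"table1", its columns, and the conninfo string are placeholders, not taken
from your post):

/* Sketch only: bulk load with COPY ... FROM STDIN over libpq.
   Table/column names and the conninfo string are assumptions. */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");          /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PGresult *res = PQexec(conn, "COPY table1 (col1, col2) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN) {          /* server must switch to COPY mode */
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* rows in the default text format: tab-separated columns, newline-terminated */
    const char *rows[] = { "1\tfoo\n", "2\tbar\n" };
    for (int i = 0; i < 2; i++)
        PQputCopyData(conn, rows[i], (int) strlen(rows[i]));

    if (PQputCopyEnd(conn, NULL) == 1) {                 /* end of data, then fetch the result */
        res = PQgetResult(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY error: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}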

b) if you cannot use COPY:

* use one outer transaction - BEGIN, INSERT, INSERT ... COMMIT - if this is possible
* use a prepared statement (see the sketch after this list)
http://www.postgresql.org/docs/8.3/static/sql-prepare.html
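
A minimal sketch of both together, assuming the same placeholder table and
conninfo as above (the statement name "ins1" is also made up):

/* Sketch only: one transaction around many INSERTs, each run through a
   prepared statement so the parse/plan work is paid only once. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");          /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* parse and plan once */
    PGresult *res = PQprepare(conn, "ins1",
                              "INSERT INTO table1 (col1, col2) VALUES ($1, $2)",
                              2, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PREPARE failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* then run many cheap executions inside a single transaction */
    PQclear(PQexec(conn, "BEGIN"));
    for (int i = 0; i < 1000; i++) {
        char num[32];
        snprintf(num, sizeof(num), "%d", i);
        const char *values[2] = { num, "some text" };
        res = PQexecPrepared(conn, "ins1", 2, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}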

If you cannot use an outer transaction, and you can replay the whole process
if something goes wrong, use asynchronous commit
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
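
Turning it on only for the loading session is usually enough; a fragment,
assuming an already open PGconn *conn as in the sketches above:

/* Sketch only: relax synchronous commit for this session. A crash can lose
   the last few transactions, but it cannot corrupt the database. */
PGresult *res = PQexec(conn, "SET synchronous_commit TO off");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "SET failed: %s", PQerrorMessage(conn));
PQclear(res);
/* ... then run the BEGIN / INSERT ... / COMMIT workload as usual ... */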

Regards

Pavel Stehule


2011/7/7 sergio mayoral <smayo...@gmail.com>:
> Hi,
> I am using the libpq library and PostgreSQL 8.4 for my Linux application running
> on ARM with 256 MB of RAM. I am just doing:
> PQconnectdb();
> PQexec(INSERT INTO table1 ....); (0.009661 sec.)
> PQexec(INSERT INTO table1 ....); (0.004208 sec.)
> PQexec(INSERT INTO table2 ....); (0.007352 sec.)
> PQexec(INSERT INTO table2 ....); (0.002533 sec.)
> PQexec(INSERT INTO table2 ....); (0.002281 sec.)
> PQexec(INSERT INTO table2 ....); (0.002244 sec.)
> PQexec(INSERT INTO table3 ....); (0.006903 sec.)
> PQexec(INSERT INTO table3 ....); (0.002903 sec.)
> PQfinish();
> I measure the time for each PQexec with the gettimeofday function, and I always see
> that the first INSERT into each table takes longer than the following ones.
> This must be something to do with the parser stage, and since I am running
> the same queries every time, I would like to know if there is a way to cache these
> queries in order to speed up the first INSERTs.
> Thanks in advance,
> Sergio

