Hi,
I have two tables, t1 and t2, each with an indexed bigint id column and a
256-character data column; t1 always holds 10,000 rows, while t2 grows as
explained below.

My libpq client continuously updates one row in t1 (targeting a different
row each time) and inserts a new row in t2, in blocks of 1,000
update-insert pairs per commit for better performance.
wal_sync_method is fsync, fsync is on; my conf file is attached.
I have a 3.8 GHz laptop with an EVO SSD.
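
For clarity, each committed block looks roughly like this (a sketch; the
column names and parameter usage are my assumptions, not taken from the
actual schema):

```sql
BEGIN;
-- repeated 1,000 times per block, each time with a different t1 id:
UPDATE t1 SET data = '...' WHERE id = 42;        -- id varies per iteration
INSERT INTO t2 (id, data) VALUES (100042, '...'); -- t2 keeps growing
-- ...
COMMIT;
```

In the client these statements are sent through libpq, and the single
COMMIT at the end is what amortizes the fsync cost over the whole block.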

Throughput is measured every two executed blocks and refers to those
blocks.

For the first few minutes throughput is around 10K rows/s; over the next
few minutes it slowly drops to 4K rows/s, then slowly climbs back up, and
so on, like a wave.
I don't understand this behaviour. Is it normal? What does it depend on?

Also, when I stop the client I see the SSD activity light still working
heavily. This lasts quite a while unless I stop the PostgreSQL server, in
which case it stops immediately; if I restart the server the light stays
off.
Is this normal? I'd like to be sure my data are safe once committed.

Regards
Pupillo

P.S.: I posted this question to the general list because my concern is not
whether the performance is high as such.

Attachment: postgresql.conf
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
