Have you looked at pg_audit?
https://github.com/jcasanov/pg_audit
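
If you want to roll your own table instead: a plain sequence is actually safe here. PostgreSQL sequences are concurrency-safe across sessions, so nextval() never hands the same value to two inserts, no matter how many hosts connect with the same user. A minimal sketch of such a logger table (table and column names are just illustrative):

```sql
-- bigserial creates a sequence-backed PK; concurrent inserts from
-- many hosts each get a distinct value, so no key collisions.
CREATE TABLE job_log (
    id      bigserial   PRIMARY KEY,
    jobid   integer     NOT NULL,
    cycle   integer     NOT NULL,       -- cycle number of the job run
    host    text        NOT NULL,       -- which cluster node inserted
    logged  timestamptz NOT NULL DEFAULT now(),
    message text        NOT NULL        -- free-length "myprint" payload
);

-- each "print" call is simply one insert; the PK is assigned server-side
INSERT INTO job_log (jobid, cycle, host, message)
VALUES (42, 1, 'node-a', 'some log text');
```

At 1000+ inserts/second a single bigserial sequence is not a bottleneck; sequence values may have gaps under concurrency, but they stay unique.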
>________________________________
> From: Philipp Kraus <philipp.kr...@flashpixx.de>
>To: pgsql-general@postgresql.org 
>Sent: Sunday, December 23, 2012 22:01
>Subject: [GENERAL] logger table
> 
>Hello,
>
>I need some ideas for creating a PG-based logger. I have a job that can 
>run more than once, so the PK is currently jobid & cycle number.
>Inserts into this table happen in parallel with the same username from 
>different hosts (clustering). The user calls the executable "myprint" and 
>the message is inserted into this table, but at the moment I don't have a 
>good structure for the table. Each print call can be a different length, 
>so I think a text field is a good choice, but I don't know how to create 
>a good PK value. IMHO a sequence can create problems because I'm logged 
>in with the same user on multiple hosts, and a hash key like SHA1 based 
>on the content is not a good choice either, because the content is not 
>unique, so I could get key collisions.
>I would like to create a separate record in the table for each "print" 
>call, but how can I create a good key value without problems under 
>parallel access?
>I expect more than 1000 inserts per second.
>
>Can anybody suggest a good approach?
>
>Thanks
>
>Phil
>
>-- 
>Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>To make changes to your subscription:
>http://www.postgresql.org/mailpref/pgsql-general