Re: Logging

2018-06-12 Thread Andrew Bartley
Thanks All,

It seems that the related DETAIL message appears only if the query is
successful; on error the DETAIL log line is missing.  This makes debugging
difficult, considering the insert is coming from DreamFactory via a REST
POST.

I am finding it very difficult navigating my way through the brave new
world of SaaS and black-box-type applications.  Maybe 32 years as a
database programmer is just too long.

Thanks

Andrew




On Wed, 13 Jun 2018 at 14:12 Adrian Klaver 
wrote:

> On 06/12/2018 08:25 PM, David G. Johnston wrote:
> > On Tuesday, June 12, 2018, Andrew Bartley <ambart...@gmail.com> wrote:
> >
> >
> >
> > On Wed, 13 Jun 2018 at 12:43 Laurenz Albe <laurenz.a...@cybertec.at> wrote:
> >
> >
> > log_min_duration_statement = 0
> >
> > [...]
> >
> >
> > log_min_duration_statement -1
> >
> >
> > You've disabled statement logging altogether.  The zero value you were
> > directed to use is what causes everything to be logged.
>
> Actually no:
>
>
> https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
>
> log_min_duration_statement (integer)
>
> "... Setting this to zero prints all statement durations. Minus-one (the
> default) disables logging statement durations.  ..."
>
> "
>
> So -1 only affects logging statements relative to duration.
>
> If you have log_statement set then you will still get statements logged
> even if you have log_min_duration_statement = -1:
>
> Note:
>
> "When using this option together with log_statement, the text of
> statements that are logged because of log_statement will not be repeated
> in the duration log message."
>
> This is how I have my logging set up: log_min_duration_statement = -1
> and log_statement = 'mod', and I see statements in the logs.
>
> >
> > David J.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
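
A minimal postgresql.conf sketch contrasting the two setups discussed above
(values taken from the thread; exact behaviour may vary by version):

  # Adrian's setup: log data-modifying statements by type,
  # duration-based logging disabled
  log_statement = 'mod'
  log_min_duration_statement = -1

  # Laurenz's suggestion: log every statement with its duration;
  # statements logged this way also get a DETAIL line with the
  # bind parameter values ($1, $2, ...)
  log_min_duration_statement = 0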


Re: Logging

2018-06-12 Thread Andrew Bartley
Ok, thanks.  Will try.

On Wed, 13 Jun 2018 at 13:25 David G. Johnston 
wrote:

> On Tuesday, June 12, 2018, Andrew Bartley  wrote:
>
>> On Wed, 13 Jun 2018 at 12:43 Laurenz Albe 
>> wrote:
>>
>
>>> log_min_duration_statement = 0
>>>
>> [...]
>
>>
>> log_min_duration_statement -1
>>
>
> You've disabled statement logging altogether.  The zero value you were
> directed to use is what causes everything to be logged.
>
> David J.
>
>
>


Re: Logging

2018-06-12 Thread Andrew Bartley
On Wed, 13 Jun 2018 at 12:43 Laurenz Albe  wrote:

> Andrew Bartley wrote:
> > Can someone please tell me how to log the values being inserted in this
> > example:
> >
> > 2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
> > "api_consumers" ("consumer_id", "create_datetime") values ($1, $2).
> >
> > I have tried many different logging options and combinations.
>
> That should automatically be logged as a DETAIL message.
>
> log_min_duration_statement = 0
> log_min_error_statement = log or better
> log_min_messages = log or better
>
> That should do the trick.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

Thanks for the reply Laurenz,


Current settings

"PostgreSQL 9.6.7, compiled by Visual C++ build 1800, 64-bit"
Azure managed instance


log_min_duration_statement -1
log_min_error_statement DEBUG2
log_min_messages DEBUG2
log_statement ALL
log_error_verbosity VERBOSE

Still no joy.

Thanks Andrew
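
For a self-managed server, a sketch of applying Laurenz's suggested values
(an assumption: on a managed service such as Azure, these parameters
usually have to be changed through the provider's configuration interface
rather than ALTER SYSTEM):

  ALTER SYSTEM SET log_min_duration_statement = 0;
  ALTER SYSTEM SET log_min_error_statement = 'log';
  ALTER SYSTEM SET log_min_messages = 'log';
  SELECT pg_reload_conf();  -- all three take effect without a restart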


Logging

2018-06-12 Thread Andrew Bartley
Hi all,

Can someone please tell me how to log the values being inserted in this
example:

2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
"api_consumers" ("consumer_id", "create_datetime") values ($1, $2).

I have tried many different logging options and combinations.

Thanks

Andrew Bartley


Re: Performance opportunities for write-only audit tables?

2018-05-24 Thread Andrew Bartley
2. Use  UNLOGGED in the log table creates.

Seriously?  For audit tables?


I guess that depends on what the log tables are used for.  Also, in this
case the logs are written via a trigger, all within one transaction, so it
may not matter.

We use UNLOGGED because the audit logs are not financial, are used only
very rarely, and only for analysis of access and performance in a batch
setting.  Also certainly not via a trigger, as we don't want the logging
bound to the transaction.
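
A minimal sketch of the UNLOGGED variant (table and column names are
hypothetical):

  CREATE UNLOGGED TABLE access_log (
      logged_at  timestamptz NOT NULL DEFAULT now(),
      username   text,
      detail     text
  );
  -- UNLOGGED tables skip WAL, so writes are cheaper and nothing is
  -- streamed to replicas, but the table is truncated after a crash.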

On Fri, 25 May 2018 at 07:17 David G. Johnston <david.g.johns...@gmail.com>
wrote:

> On Thursday, May 24, 2018, Andrew Bartley <ambart...@gmail.com> wrote:
>
>> Hi,
>>
>> The two main techniques we use are.
>>
>> The idea here is to back up the rest of your DB to one backup regime and
>> the log tables to another. We set it up so at the end of the day the
>> current log table is backed up and loaded into a backup archive, then we
>> vacuum freeze the log table.
>>
>
> Yeah, doing logging in-database for immediate performance while
> periodically copying or moving said data to external storage seems like the
> best option.  Depends on the definition of an acceptable process and
> response time should the audit data be needed though.
>
>
>>
>> 2. Use  UNLOGGED in the log table creates.
>>
>
> Seriously?  For audit tables?
>
> David J.
>


Re: Performance opportunities for write-only audit tables?

2018-05-24 Thread Andrew Bartley
Sorry that should be a "vacuum full freeze"... not just a  "vacuum freeze"
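
(For reference: plain VACUUM FREEZE freezes the tuples in place, while
VACUUM FULL FREEZE also rewrites the table compactly, so a "closed" log
table ends up at its minimum size.  Table name is hypothetical:)

  VACUUM FULL FREEZE log_2018_04;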

On Fri, 25 May 2018 at 07:07 Andrew Bartley <ambart...@gmail.com> wrote:

> Hi,
>
> The two main techniques we use are.
>
> 1. Create a script to pass the -T param to pg_dump to exclude the log
> tables.  The idea here is to back up the rest of your DB to one backup
> regime and the log tables to another. We set it up so at the end of the day
> the current log table is backed up and loaded into a backup archive, then
> we vacuum freeze the log table.  The benefit is that once each log table
> is "closed" (meaning you will no longer be writing to that log table ever
> again), it is backed up only once, ever.  It speeds housekeeping up, and
> your daily backups are much smaller.
>
> 2. Use  UNLOGGED in the log table creates.
>
> Have fun.
>
> Andrew
>
> On Fri, 25 May 2018 at 02:55 Paul Jungwirth <p...@illuminatedcomputing.com>
> wrote:
>
>> Hi,
>>
>> A project of mine uses a trigger-based approach to record changes to an
>> audit table. The audit table is partitioned by month (pg 9.5, so
>> old-fashioned partitioning). These tables are write-heavy but
>> append-only and practically write-only: we never UPDATE or DELETE, and
>> we seem to consult them only a few times a year. But they are enormous:
>> bigger than the rest of the database in fact. They slow down our
>> backups, they increase WAL size and streaming replication, they add to
>> recovery time, they make upgrades more time-consuming, and I suppose
>> they compete for RAM.
>>
>> This is all on an AWS EC2 instance with EBS storage. We also run a warm
>> standby with streaming replication.
>>
>> Since these tables are so different from everything else, I'm wondering
>> what opportunities we have to reduce their performance cost. I'm
>> interested both in practical high-bang-for-buck changes, but also in
>> harder just-interesting-to-think-about last-resort approaches. Here are
>> a few ideas of my own, but I'm curious what others think:
>>
>> We already have no indexes or foreign keys on these tables, so at least
>> there's no cost there.
>>
>> Since they are already partitioned, we could move old data to offline
>> storage and drop those tables. This feels like the biggest, easiest win,
>> and something we should have done a long time ago. Probably it's all we
>> need.
>>
>> Put them on a different tablespace. This one is also pretty obvious, but
>> aside from using a separate disk, I'm curious what other crazy things we
>> could do. Is there any per-tablespace tuning possible? (I think the
>> answer within Postgres is no, but I wish we could change the settings
>> for wal_level, or exclude them from replication, or something, so I'm
>> wondering if we could achieve the same effect by exploiting being on a
>> separate filesystem.) Maybe put the tablespace on some FUSE filesystem
>> to get async writes? Or just pick different mount options, e.g. on ext4
>> lazytime,delalloc,data=writeback? I don't know. Or at a different level:
>> change the triggers so they call a custom function that uses a new
>> thread to store the audit records elsewhere. Maybe these ideas are all
>> too risky, but I think the organization is fine with slightly relaxed
>> durability guarantees for this data, and anyway I'm just curious to have
>> a list of possibilities before I categorize anything as too crazy or
>> not. :-)
>>
>> If we upgraded to pg 10 we could use logical replication and leave out
>> the audit tables. That is appealing. Even without upgrading, I guess we
>> could replace those tables with postgres_fdw ones, so that they are not
>> replicated? Has anyone else used that trick?
>>
>> Thanks!
>>
>> --
>> Paul  ~{:-)
>> p...@illuminatedcomputing.com
>>
>>


Re: Performance opportunities for write-only audit tables?

2018-05-24 Thread Andrew Bartley
Hi,

The two main techniques we use are.

1. Create a script to pass the -T param to pg_dump to exclude the log
tables.  The idea here is to back up the rest of your DB to one backup
regime and the log tables to another. We set it up so at the end of the day
the current log table is backed up and loaded into a backup archive, then
we vacuum freeze the log table.  The benefit is that once each log table
is "closed" (meaning you will no longer be writing to that log table ever
again), it is backed up only once, ever.  It speeds housekeeping up, and
your daily backups are much smaller.

2. Use  UNLOGGED in the log table creates.

Have fun.

Andrew
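
A rough sketch of the split-backup idea in point 1 (database name, table
pattern, and paths are hypothetical; pg_dump excludes tables with -T /
--exclude-table, while -t selects them):

  # daily backup of everything except the log tables
  pg_dump -Fc -T 'log_*' mydb > /backup/daily/mydb.dump

  # one-off archive of a "closed" log table, kept forever
  pg_dump -Fc -t log_2018_04 mydb > /backup/archive/log_2018_04.dump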

On Fri, 25 May 2018 at 02:55 Paul Jungwirth 
wrote:

> Hi,
>
> A project of mine uses a trigger-based approach to record changes to an
> audit table. The audit table is partitioned by month (pg 9.5, so
> old-fashioned partitioning). These tables are write-heavy but
> append-only and practically write-only: we never UPDATE or DELETE, and
> we seem to consult them only a few times a year. But they are enormous:
> bigger than the rest of the database in fact. They slow down our
> backups, they increase WAL size and streaming replication, they add to
> recovery time, they make upgrades more time-consuming, and I suppose
> they compete for RAM.
>
> This is all on an AWS EC2 instance with EBS storage. We also run a warm
> standby with streaming replication.
>
> Since these tables are so different from everything else, I'm wondering
> what opportunities we have to reduce their performance cost. I'm
> interested both in practical high-bang-for-buck changes, but also in
> harder just-interesting-to-think-about last-resort approaches. Here are
> a few ideas of my own, but I'm curious what others think:
>
> We already have no indexes or foreign keys on these tables, so at least
> there's no cost there.
>
> Since they are already partitioned, we could move old data to offline
> storage and drop those tables. This feels like the biggest, easiest win,
> and something we should have done a long time ago. Probably it's all we
> need.
>
> Put them on a different tablespace. This one is also pretty obvious, but
> aside from using a separate disk, I'm curious what other crazy things we
> could do. Is there any per-tablespace tuning possible? (I think the
> answer within Postgres is no, but I wish we could change the settings
> for wal_level, or exclude them from replication, or something, so I'm
> wondering if we could achieve the same effect by exploiting being on a
> separate filesystem.) Maybe put the tablespace on some FUSE filesystem
> to get async writes? Or just pick different mount options, e.g. on ext4
> lazytime,delalloc,data=writeback? I don't know. Or at a different level:
> change the triggers so they call a custom function that uses a new
> thread to store the audit records elsewhere. Maybe these ideas are all
> too risky, but I think the organization is fine with slightly relaxed
> durability guarantees for this data, and anyway I'm just curious to have
> a list of possibilities before I categorize anything as too crazy or
> not. :-)
>
> If we upgraded to pg 10 we could use logical replication and leave out
> the audit tables. That is appealing. Even without upgrading, I guess we
> could replace those tables with postgres_fdw ones, so that they are not
> replicated? Has anyone else used that trick?
>
> Thanks!
>
> --
> Paul  ~{:-)
> p...@illuminatedcomputing.com
>
>
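
One of the ideas Paul lists, a separate tablespace for the audit
partitions, looks roughly like this (tablespace name, path, and partition
name are hypothetical; within Postgres, per-tablespace tuning is limited to
a few planner settings such as seq_page_cost and random_page_cost via
ALTER TABLESPACE ... SET):

  CREATE TABLESPACE audit_cold LOCATION '/mnt/slow_disk/pg_audit';
  ALTER TABLE audit_2018_04 SET TABLESPACE audit_cold;  -- rewrites the table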


Re: Rolls

2018-02-01 Thread Andrew Bartley
Sorry Roles


On Fri, 2 Feb 2018 at 08:29 Rob Sargent <robjsarg...@gmail.com> wrote:

>
>
> On 02/01/2018 02:22 PM, Andrew Bartley wrote:
> > Hi all,
> >
> > I am trying to work out a way to create a roll/user that can only
> > execute one particular function and nothing else. The particular
> > function has been created with "SECURITY DEFINER".
> >
> > Regards
> >
> > Andrew Bartley
> >
> >
> And I thought you were offering hot cinnamon buns or some such.
>
>
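
A minimal sketch of one way to do this (role, schema, and function names
are hypothetical):

  CREATE ROLE api_runner LOGIN PASSWORD 'secret';
  -- functions are executable by PUBLIC by default, so revoke that first
  REVOKE EXECUTE ON FUNCTION app.do_work(integer) FROM PUBLIC;
  GRANT USAGE ON SCHEMA app TO api_runner;
  GRANT EXECUTE ON FUNCTION app.do_work(integer) TO api_runner;
  -- the function is SECURITY DEFINER, so it runs with its owner's
  -- privileges and api_runner needs no further grants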