logging proxy

2019-10-31 Thread Олег Самойлов
Does anyone know of a PostgreSQL proxy that can log queries together with the
username, IP address, and number of affected rows, for security reasons?
PostgreSQL itself can log almost everything, except the number of affected rows.
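
For reference, this is roughly what PostgreSQL itself can already log via
postgresql.conf (values only as an example); it covers the user name and
client IP, but still not the number of affected rows:

    log_statement = 'all'                  # log every statement
    log_connections = on                   # log user and client address at connect time
    log_line_prefix = '%m [%p] %u@%d %h '  # timestamp, pid, user@database, client host/IP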



Re: Getting following error in using cursor to fetch the records from a large table in c language

2019-10-31 Thread Laurenz Albe
M Tarkeshwar Rao wrote:
> When we are running VACUUM FULL, we get the following error message.
> 
> Error:
> ---
> user was holding a relation lock for too long
> 
> Any idea about this error?

That means that you got a replication conflict, which is to be expected,
because VACUUM (FULL) is also one of the commands that require an
ACCESS EXCLUSIVE lock.
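
As a quick check (just a sketch), the standby keeps per-database counters
for such replication conflicts:

    -- run on the standby
    SELECT datname, confl_lock, confl_snapshot
    FROM pg_stat_database_conflicts;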

First suggestion: don't use VACUUM (FULL).

Second suggestion: A standby server can *either* be used for high
availability, in which case queries on the standby should be forbidden
or canceled in case of conflicts, *or* it can be used to run resource
intensive reading queries, in which case application of the transaction
log should be delayed.

Don't try to use a standby for both - use two standby servers.
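
As a rough sketch, the two modes correspond to settings like these on each
standby (the values are only examples):

    # standby used for high availability: resolve conflicts quickly
    hot_standby_feedback = off
    max_standby_streaming_delay = 30s

    # standby used for reporting: let long queries finish, delay WAL replay
    hot_standby_feedback = on
    max_standby_streaming_delay = -1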

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Barman

2019-10-31 Thread Tomas Vondra

On Thu, Oct 31, 2019 at 05:29:34PM +, Daulat Ram wrote:

Hi All,

We have configured Postgres 11.2 with streaming replication (primary &
standby) on Docker, and I am looking to set up Postgres backups using
barman. As far as I know, there are a few options for taking backups with
barman:

RSYNC backup
Incremental Backups
Streaming Backup with continuous WAL streaming
Centralized and Catalogued Backups

Which is the best backup option with barman, so that we can keep the
database safe in case of a disaster? I feel that incremental backups are
the most useful for PITR, but I would like to hear the experts'
suggestions.



You're mixing a number of topics here. Firstly, all backups done by
barman are centralized and catalogued; that's pretty much one of the
main purposes of barman.

When it comes to backup methods, there are two basic ones: rsync and
postgres (which means pg_basebackup). This is about creating the initial
base backup. Both methods can then replicate WAL either by streaming or
via archive_command.

So first you need to decide whether to use rsync or pg_basebackup,
where rsync allows advanced features like incremental backup, parallel
backup and deduplication.

Then you need to decide whether to use archive_command or streaming
(i.e. pg_receivexlog).

The "right" backup method very much depends on the size of your
database, activity, and so on. By default you should probably go with
the default option, described as "scenario 1" in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.
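
Just as a sketch, a minimal server section for that default scenario could
look like this (server name, hosts and users are placeholders):

    [pg]
    description = "streaming backup of the primary"
    conninfo = host=pg_primary user=barman dbname=postgres
    streaming_conninfo = host=pg_primary user=streaming_barman
    backup_method = postgres
    streaming_archiver = on
    slot_name = barman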

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Automatically parsing in-line composite types

2019-10-31 Thread Tom Lane
Merlin Moncure  writes:
> On Wed, Oct 30, 2019 at 5:41 PM Andres Freund  wrote:
>> Hm. Wouldn't it be fairly easy to allow the client to specify how much
>> metadata they'd want? I.e. opt in to getting more complete metadata?

> Suppose you had a set of 1000 records, with an array of composites (10
> items per array).  How many times would the extra metadata describing them
> get sent following this approach?

Presumably once per Describe request (or once per simple Query) ...
but that facile answer actually isn't very satisfactory.  It's
at least theoretically possible that a column of type "record" could
have varying actual rowtype from one tuple to the next.  I don't want
to swear that every part of Postgres is capable of coping with that,
but a lot is --- see record_out() itself for the prototypical coding
pattern.  The wire protocol is really pretty tightly bound to the
assumption that every tuple in a resultset has the same description,
but that's going to fall down if we want to transmit details about
what "record" means.

The same problem occurs with arrays, specifically anyarray --- and
here I don't have to think hard to come up with a query that will
break it.  "select * from pg_stats" is enough to do that.
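
For instance (just to illustrate), most_common_vals and histogram_bounds in
that view are anyarray columns whose element type follows the underlying
column, so it changes from row to row:

    select tablename, attname, most_common_vals
    from pg_stats
    limit 5;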

It's not only the wire protocol that's going to have issues with
varying rowtypes.  How would we deal with that in libpq's resultset
API, for example?  I wonder whether the JDBC API could handle it either.

tl;dr: there are a lot more worms in this can than you might guess.

regards, tom lane




Re: Automatically parsing in-line composite types

2019-10-31 Thread Merlin Moncure
On Wed, Oct 30, 2019 at 5:41 PM Andres Freund  wrote:
>
> Hi,
>
> On 2019-10-29 14:33:00 -0400, Tom Lane wrote:
> > Mitar  writes:
> > > I think RowDescription should be extended to provide full recursive
> > > metadata about all data types. That would be the best way to do it.
> >
> > [ shrug... ]  In a world where stability of the wire protocol were
> > of zero value, maybe we would do that.  In the real world, don't
> > hold your breath.
>
> Hm. Wouldn't it be fairly easy to allow the client to specify how much
> metadata they'd want? I.e. opt in to getting more complete metadata?
>
> Presumably a lot of clients/applications wouldn't want the server to do
> the extra work / use bandwidth for the full details anyway, so making a
> more expansive RowDescription be explicitly opt-in would be good, even
> if there were zero compatibility concerns.
>
> There's different ways we could do the opt-in. We could use the "_pq_."
> startup option stuff to opt in, we could make it an optional parameter
> to D messages (it'd be mildly hacky because unfortunately
> describe_target is not a counted text), we could use an additional
> describe_type etc...

Suppose you had a set of 1000 records, with an array of composites (10
items per array).  How many times would the extra metadata describing them
get sent following this approach?  The binary wire format is already
fatter in terms of bandwidth in typical cases (unless escaping is
involved, which can cause exponential growth of the text format).  If
the answer is 10k, I'd be worried about performance.

merlin




Re: Can you please suggest how to configure hot_standby_feedback?

2019-10-31 Thread Andreas Kretschmer




On 31.10.19 at 06:21, M Tarkeshwar Rao wrote:


Can you please suggest how to configure hot_standby_feedback?




Turn it on if you want to execute long-running queries on the standby, but
keep in mind that it can lead to more bloat on the master.
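
As a minimal example, a reload on the standby is enough, no restart needed:

    # postgresql.conf on the standby
    hot_standby_feedback = on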



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





RE: Getting following error in using cursor to fetch the records from a large table in c language

2019-10-31 Thread M Tarkeshwar Rao
When we are running VACUUM FULL, we get the following error message.

Error:
---
user was holding a relation lock for too long

Any idea about this error?

-Original Message-
From: Laurenz Albe  
Sent: Thursday, October 31, 2019 12:28 PM
To: M Tarkeshwar Rao ; 
'pgsql-gene...@postgresql.org' 
Subject: Re: Getting following error in using cursor to fetch the records from 
a large table in c language

On Thu, 2019-10-31 at 05:18 +, M Tarkeshwar Rao wrote:
[queries get canceled on the standby]
> You are absolutely right. This is the issue with us.
> If we retry the query, will it be successful?

Sometimes :^/

> Can you please suggest how to configure hot_standby_feedback?

You set it to "on", then you get no query cancellation because of VACUUM (at 
the price of potential bloat on the primary server).

Your query can still get canceled by conflichts with ACCESS EXCLUSIVE locks 
that are taken by TRUNCATE, ALTER/DROP TABLE and similar as well as autovacuum 
truncation.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com



Re: Getting following error in using cursor to fetch the records from a large table in c language

2019-10-31 Thread Laurenz Albe
On Thu, 2019-10-31 at 05:18 +, M Tarkeshwar Rao wrote:
[queries get canceled on the standby]
> You are absolutely right. This is the issue with us.
> If we retry the query, will it be successful?

Sometimes :^/

> Can you please suggest how to configure hot_standby_feedback?

You set it to "on", then you get no query cancellation because of
VACUUM (at the price of potential bloat on the primary server).

Your query can still get canceled by conflichts with ACCESS EXCLUSIVE
locks that are taken by TRUNCATE, ALTER/DROP TABLE and similar
as well as autovacuum truncation.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com