Re: Logging statement having any threat?

2024-04-21 Thread Adrian Klaver

On 4/21/24 02:35, Lok P wrote:
On Sat, Apr 20, 2024 at 10:02 PM Adrian Klaver 
<adrian.kla...@aklaver.com> wrote:



Have you tried?:


https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

"
log_statement (enum)

    <...>

The default is none. Only superusers and users with the appropriate SET
privilege can change this setting.
"

Or

https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SET

set_config ( setting_name text, new_value text, is_local boolean ) →
text


 >
 > Now when we reach out to the infrastructure team , they are
saying these
 > variables(pg_cluster_log_statement,pg_instance_log_statement) were

Where are those variables coming from? I can not find them in RDS or
Terraform docs.


  Thank You Adrian.

Actually I was trying to understand whether auto_explain can only work, 
and help us see the slow SQL statements in the log, after we set the 
"log_statement" parameter to a non-default value (like all, mod, or ddl)?


And what is the exact threat with logging these queries? And I 


log_statement = 'mod'

create role pwd_test with password 'test';
CREATE ROLE

tail -f /var/log/postgresql/postgresql-16-main.log

<...>
2024-04-21 09:04:17.746 PDT [9664] postgres@test LOG:  statement: create 
role pwd_test with password 'test';


think I got the point as you mentioned: having access to the database 
itself lets someone see the object details. However, do you agree 
that in the case of RDS the logs are available through different mediums 
like CloudWatch, the Datadog agent, etc., so that may pose additional threats 


Aah, the joys of managed services where you have to check even more 
layers when building out your security.  Logging itself is not the 
issue, who has access to the logs is. The more access points the more 
difficult that gets. Dealing with this is going to require a system wide 
review by all parties and coming up with an agreed upon access policy 
that balances security with the need to monitor what is happening in the 
database. Otherwise troubleshooting issues will be a long drawn out 
process which in itself could end up being a security issue.



because maybe some person doesn't have access to the database directly 
but still has permission to see the logs, so appropriate access 
controls need to be put in place?


Additionally, I was trying to execute "SELECT 
set_config('log_statement', 'all', true);" but it says "permission 
denied to set parameter "log_statement"", so it might need a 
higher-privileged user to run it.
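On Postgres 15 the SET privilege on a parameter can also be granted to a
non-superuser; a minimal sketch (the role name is hypothetical, and on RDS
this would be run from a sufficiently privileged role such as rds_superuser):

GRANT SET ON PARAMETER log_statement TO app_debugger;
-- app_debugger can then change it for its own session:
SET log_statement = 'all';
-- or only for the current transaction:
SELECT set_config('log_statement', 'all', true);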


To answer your question on the variables: those are the ones we have in 
the Terraform module. The Terraform module is customized by the database 
infra team, so that might be why we are seeing them there, and they may 
not be exactly the same as what is shown in the RDS docs for Postgres.


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Logging statement having any threat?

2024-04-21 Thread Lok P
On Sat, Apr 20, 2024 at 10:02 PM Adrian Klaver 
wrote:

>
> Have you tried?:
>
>
> https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
>
> "
> log_statement (enum)
>
><...>
>
> The default is none. Only superusers and users with the appropriate SET
> privilege can change this setting.
> "
>
> Or
>
>
> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SET
>
> set_config ( setting_name text, new_value text, is_local boolean ) → text
>
>
> >
> > Now when we reach out to the infrastructure team , they are saying these
> > variables(pg_cluster_log_statement,pg_instance_log_statement) were
>
> Where are those variables coming from? I can not find them in RDS or
> Terraform docs.
>
>
 Thank You Adrian.

Actually I was trying to understand whether auto_explain can only work, and
help us see the slow SQL statements in the log, after we set the
"log_statement" parameter to a non-default value (like all, mod, or ddl)?

And what is the exact threat with logging these queries? I think I got the
point as you mentioned: having access to the database itself lets someone
see the object details. However, do you agree that in the case of RDS the
logs are available through different mediums like CloudWatch, the Datadog
agent, etc., so that may pose additional threats, because maybe some person
doesn't have access to the database directly but still has permission to
see the logs, so appropriate access controls need to be put in place?

Additionally, I was trying to execute "SELECT
set_config('log_statement', 'all', true);" but it says "permission denied
to set parameter "log_statement"", so it might need a higher-privileged
user to run it.

To answer your question on the variables: those are the ones we have in the
Terraform module. The Terraform module is customized by the database infra
team, so that might be why we are seeing them there, and they may not be
exactly the same as what is shown in the RDS docs for Postgres.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html


Re: Logging statement having any threat?

2024-04-20 Thread Adrian Klaver

On 4/20/24 07:02, Lok P wrote:


Now when we reach out to the infrastructure team , they are saying these 
variables(pg_cluster_log_statement,pg_instance_log_statement) were 
removed due to potential security threat. So I want to understand from 
experts here , how this is really a security threat and if any option to 
get this logging enabled (which will help us debug performance issues) 
at same time addressing the threat too?


I should have added to previous post, if you have access to the database 
the security wall has already been breached.




Regards
Lok


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Logging statement having any threat?

2024-04-20 Thread Adrian Klaver

On 4/20/24 07:02, Lok P wrote:

Hello All,
Its postgres version 15.4 and its RDS, in which our dev team gets the 
infrastructure code from another third party team which provides us base 
infrastructure code to build a postgres database, in which we will be 
able to do change DB parameter values etc whatever is mentioned in the 
file with possible values. But surprisingly we don't see log_statement 
there. Below was our requirement,


For debugging and evaluating performance we were having 
pg_stat_statements but it contains aggregated information about all the 
query execution. But in case just want to debug any point in time issues 
where the selected few queries were performing bad (may be because of 
plan change), we were planning to have the auto_explain extension added 
and set the log_min_duration to ~5 seconds, So that, all the queries 
going above that time period(5 seconds) will be logged and provide 
detailed information on the exact point of bottleneck. But we see the 
log_statement parameter has been removed from the base infrastructure 
script/terraform script given by the database team here, so that means 
we will get it as default which is "NONE", which means no 
statement(SELECT/DML/DDL etc) can be logged.


Have you tried?:

https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

"
log_statement (enum)

  <...>

The default is none. Only superusers and users with the appropriate SET 
privilege can change this setting.

"

Or

https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SET

set_config ( setting_name text, new_value text, is_local boolean ) → text




Now when we reach out to the infrastructure team , they are saying these 
variables(pg_cluster_log_statement,pg_instance_log_statement) were 


Where are those variables coming from? I can not find them in RDS or 
Terraform docs.


removed due to potential security threat. So I want to understand from 
experts here , how this is really a security threat and if any option to 
get this logging enabled (which will help us debug performance issues) 
at same time addressing the threat too?


Regards
Lok


--
Adrian Klaver
adrian.kla...@aklaver.com





Logging statement having any threat?

2024-04-20 Thread Lok P
Hello All,
It's Postgres version 15.4 on RDS. Our dev team gets the infrastructure
code from another third-party team, which provides us the base
infrastructure code to build a Postgres database; in it we are able to
change DB parameter values and whatever else is mentioned in the file,
with their possible values. But surprisingly we don't see log_statement
there. Below was our requirement:

For debugging and evaluating performance we have pg_stat_statements, but it
contains aggregated information about all query executions. In case we just
want to debug a point-in-time issue where a selected few queries were
performing badly (maybe because of a plan change), we were planning to have
the auto_explain extension added and to set log_min_duration to ~5 seconds,
so that all queries going above that time period (5 seconds) would be
logged and would provide detailed information on the exact point of the
bottleneck. But we see the log_statement parameter has been removed from
the base infrastructure/Terraform script given by the database team here,
so that means we will get the default, which is "none", which means no
statements (SELECT/DML/DDL etc.) can be logged.

Now when we reach out to the infrastructure team, they are saying these
variables (pg_cluster_log_statement, pg_instance_log_statement) were
removed due to a potential security threat. So I want to understand from
the experts here how this is really a security threat, and whether there is
any option to get this logging enabled (which will help us debug
performance issues) while addressing the threat too.

Regards
Lok


Re: Exclude certain application pgaudit logging?

2024-02-06 Thread Adrian Klaver



On 2/6/24 10:18 AM, Adrian Klaver wrote:


On 2/6/24 10:11 AM, Ron Johnson wrote:
Currently, we use Object audit logging to capture all READ access to 
columns FOO_1, FOO_2 and FOO_3 in table BAR.SCRABBLE.  (They are the 
three columns have PII data.)



Using what audit software?



Memo to self, read subject line.







The problem is that the application legitimately reads these columns 
thousands of times per day.  Thus, the log fills up with meaningless 
data that swamps any legitimate invalid accesses.



How do you know they are legitimate  requests?




Thus, I'd like to exclude reads from "Postgresql JDBC Driver". 
(Currently, I filter that out using "grep -v" in a shell script 
that runs hourly from cron, but I find that unsatisfactory.)



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Exclude certain application pgaudit logging?

2024-02-06 Thread Christophe Pettus



> On Feb 6, 2024, at 10:11, Ron Johnson  wrote:
> Thus, I'd like to exclude reads from "Postgresql JDBC Driver".  (Currently, I 
> filter that out using "grep -v" in a shell script that runs hourly from cron, 
> but I find that unsatisfactory.)


pgAudit doesn't currently include filters by application name.  
Philosophically, I don't think you want to filter on application name, because 
it's a completely client-supplied string that could easily be spoofed.
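A minimal illustration of that point (names are made up): application_name
is just another client-supplied connection parameter, so any client can
present itself as the JDBC driver:

psql "dbname=test user=alice application_name='PostgreSQL JDBC Driver'" \
     -c "SELECT foo_1 FROM bar.scrabble LIMIT 1"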



Re: Exclude certain application pgaudit logging?

2024-02-06 Thread Adrian Klaver



On 2/6/24 10:11 AM, Ron Johnson wrote:
Currently, we use Object audit logging to capture all READ access to 
columns FOO_1, FOO_2 and FOO_3 in table BAR.SCRABBLE.  (They are the 
three columns have PII data.)



Using what audit software?




The problem is that the application legitimately reads these columns 
thousands of times per day.  Thus, the log fills up with meaningless 
data that swamps any legitimate invalid accesses.



How do you know they are legitimate  requests?




Thus, I'd like to exclude reads from "Postgresql JDBC Driver".  
(Currently, I filter that out using "grep -v" in a shell script 
that runs hourly from cron, but I find that unsatisfactory.)


--
Adrian Klaver
adrian.kla...@aklaver.com





Exclude certain application pgaudit logging?

2024-02-06 Thread Ron Johnson
Currently, we use Object audit logging to capture all READ access to
columns FOO_1, FOO_2 and FOO_3 in table BAR.SCRABBLE.  (They are the three
columns that have PII data.)
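For context, a minimal sketch of how this kind of object audit logging is
typically wired up in pgAudit (role name and parameter values are
illustrative):

-- postgresql.conf / parameter group:
--   shared_preload_libraries = 'pgaudit'
--   pgaudit.role = 'auditor'
CREATE ROLE auditor NOLOGIN;
-- only statements touching objects (here, columns) the audit role can read are logged:
GRANT SELECT (foo_1, foo_2, foo_3) ON bar.scrabble TO auditor;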

The problem is that the application legitimately reads these columns
thousands of times per day.  Thus, the log fills up with meaningless data
that swamps any legitimate invalid accesses.

Thus, I'd like to exclude reads from "Postgresql JDBC Driver".  (Currently,
I filter that out using "grep -v" in a shell script that runs hourly from
cron, but I find that unsatisfactory.)


Re: Help understanding server-side logging (and more...)

2023-12-21 Thread Adrian Klaver

On 12/21/23 03:05, Dominique Devienne wrote:

If by any chance, anyone has seen this kind of behavior before, in a 
similar context or not, and has suggestions
on how we could further troubleshoot this (on the PostgreSQL 
server-side, but also the Node.JS client-side, why not),

that would be greatly appreciated.


1) It would be helpful to add the Postgres version to any future posts.

2) Also more information on what this "They run PostgreSQL in a Linux 
container" means?




Thanks, --DD



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Help understanding server-side logging (and more...)

2023-12-21 Thread Peter J. Holzer
On 2023-12-21 12:05:41 +0100, Dominique Devienne wrote:
> FWIW, the symptoms they are experiencing is that initially all queries
> are fast, then they start to get longer and longer, to the point all
> services accessing PostgreSQL grind to a halt and basically hang.
> While that happens, one can connect to the DB from a shell (that
> cluster has a single DB) w/o issues, and run queries just fine

If you do that, do you see the "hanging" queries in pg_stat_activity? If
so, what are they waiting for?
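A quick way to answer that (PostgreSQL 9.6 or later, which exposes wait
events in pg_stat_activity):

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;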

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Help understanding server-side logging (and more...)

2023-12-21 Thread Julien Rouhaud
Hi,

On Thu, Dec 21, 2023 at 12:05:41PM +0100, Dominique Devienne wrote:
> Hi. Another team (than mine) has been trying to troubleshoot hang issues in
> their Node/JS/TS-based mid-tier services,
> after a switch from MySQL to PostgreSQL. They run PostgreSQL in a Linux
> container (see PID [1] below), unlike us.
> They started looking at the PostgreSQL server-side logs, but with little to
> no experience reading them, we'd like to
> confirm the extract below looks normal, notably whether the various
> exit-related messages are OK or not.

All those messages are at DEBUG level.  Anything abnormal would be emitted at a
higher level.
>
>
> FWIW, the symptoms they are experiencing is that initially all queries are
> fast, then they start to get longer and longer,
> to the point all services accessing PostgreSQL grind to a halt and
> basically hang. While that happens, one can connect
> to the DB from a shell (that cluster has a single DB) w/o issues, and run
> queries just fine (they have little data,
> in the few dozens MBs spread in 2 schemas and probably 2 or 3 dozen tables
> max), thus I personally think the problem
> is more on the client Node.JS side, but given that the same code worked
> fine with MySQL, maybe I'm wrong.

Yeah, "everything works normally except inside the app" is usually a good
indication that it's unlikely to be a postgres problem.  Maybe they have some
locking issue, transactions that are opened for too long or something else but
it's impossible to say without much more information.  One sure thing is that
running with logs at DEBUG level is going to add noticeable overhead.
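If it is a locking or long-open-transaction problem, a sketch along these
lines (PostgreSQL 9.6+, thresholds are arbitrary) usually shows it:

SELECT pid, pg_blocking_pids(pid) AS blocked_by,
       state, xact_start, left(query, 60) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0
   OR (state = 'idle in transaction'
       AND xact_start < now() - interval '5 minutes');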




Help understanding server-side logging (and more...)

2023-12-21 Thread Dominique Devienne
Hi. Another team (than mine) has been trying to troubleshoot hang issues in
their Node/JS/TS-based mid-tier services,
after a switch from MySQL to PostgreSQL. They run PostgreSQL in a Linux
container (see PID [1] below), unlike us.
They started looking at the PostgreSQL server-side logs, but with little to
no experience reading them, we'd like to
confirm the extract below looks normal, notably whether the various
exit-related messages are OK or not.

I'm guessing it's fine/normal, especially since they use a Node PostgreSQL
module with Pooling (10 connections max by default)
that disconnects idle connections after 10s, which matches what the log
shows. Can anyone please confirm?

FWIW, the symptoms they are experiencing is that initially all queries are
fast, then they start to get longer and longer,
to the point all services accessing PostgreSQL grind to a halt and
basically hang. While that happens, one can connect
to the DB from a shell (that cluster has a single DB) w/o issues, and run
queries just fine (they have little data,
in the few dozens MBs spread in 2 schemas and probably 2 or 3 dozen tables
max), thus I personally think the problem
is more on the client Node.JS side, but given that the same code worked
fine with MySQL, maybe I'm wrong.

Their services are concurrent and accessed by maybe 50-100 users tops, with
queries coming in all the time.

If by any chance, anyone has seen this kind of behavior before, in a
similar context or not, and has suggestions
on how we could further troubleshoot this (on the PostgreSQL server-side,
but also the Node.JS client-side, why not),
that would be greatly appreciated.

Thanks, --DD

2023-12-20 00:00:16.185 UTC [3605] DEBUG:  InitPostgres
2023-12-20 00:00:16.185 UTC [3605] DEBUG:  my backend ID is 4
2023-12-20 00:00:16.186 UTC [1] DEBUG:  forked new backend, pid=3606
socket=10
2023-12-20 00:00:16.187 UTC [3606] DEBUG:  InitPostgres
2023-12-20 00:00:16.187 UTC [3606] DEBUG:  my backend ID is 6
2023-12-20 00:00:16.188 UTC [3605] DEBUG:  StartTransaction(1) name:
unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.189 UTC [3605] DEBUG:  processing received SASL
response of length 51
2023-12-20 00:00:16.189 UTC [3605] DEBUG:  sending SASL challenge of length
84
2023-12-20 00:00:16.190 UTC [3606] DEBUG:  StartTransaction(1) name:
unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.191 UTC [3606] DEBUG:  processing received SASL
response of length 51
2023-12-20 00:00:16.191 UTC [3606] DEBUG:  sending SASL challenge of length
84
2023-12-20 00:00:16.193 UTC [3605] DEBUG:  processing received SASL
response of length 104
2023-12-20 00:00:16.193 UTC [3605] DEBUG:  sending SASL challenge of length
46
2023-12-20 00:00:16.195 UTC [3606] DEBUG:  processing received SASL
response of length 104
2023-12-20 00:00:16.195 UTC [3606] DEBUG:  sending SASL challenge of length
46
2023-12-20 00:00:16.200 UTC [3605] DEBUG:  CommitTransaction(1) name:
unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.201 UTC [3605] DEBUG:  StartTransaction(1) name:
unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.201 UTC [3605] STATEMENT:  SELECT * FROM "NSP"."T1"
2023-12-20 00:00:16.202 UTC [3606] DEBUG:  CommitTransaction(1) name:
unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.203 UTC [3606] DEBUG:  StartTransaction(1) name:
unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.203 UTC [3606] STATEMENT:  SELECT * FROM "NSP"."T2"
2023-12-20 00:00:16.205 UTC [3605] DEBUG:  CommitTransaction(1) name:
unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.205 UTC [3605] STATEMENT:  SELECT * FROM "NSP"."T1"
2023-12-20 00:00:16.207 UTC [3606] DEBUG:  CommitTransaction(1) name:
unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
2023-12-20 00:00:16.207 UTC [3606] STATEMENT:  SELECT * FROM "NSP"."T2"
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  shmem_exit(0): 4
before_shmem_exit callbacks to make
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  shmem_exit(0): 6 on_shmem_exit
callbacks to make
2023-12-20 00:00:26.209 UTC [3606] DEBUG:  shmem_exit(0): 4
before_shmem_exit callbacks to make
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  proc_exit(0): 2 callbacks to make
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  exit(0)
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  shmem_exit(-1): 0
before_shmem_exit callbacks to make
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  shmem_exit(-1): 0 on_shmem_exit
callbacks to make
2023-12-20 00:00:26.209 UTC [3605] DEBUG:  proc_exit(-1): 0 callbacks to
make
2023-12-20 00:00:26.209 UTC [3606] DEBUG:  shmem_exit(0): 6 on_shmem_exit
callbacks to make
2023-12-20 00:00:26.209 UTC [3606] DEBUG:  proc_exit(0): 2 callbacks to make
2023-12-20 00:00:26.209 UTC [3606] DEBUG:  exit(0)
2023-12-20 00:00:26.209 UTC [3606] 

Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread Adrian Klaver

On 9/25/23 19:30, shashidhar Reddy wrote:

Thank you Adrian! I will apply the patch and see.


Just to be clear 11.21 represents the changes from 16 different minor 
releases.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread shashidhar Reddy
Thank you Adrian! I will apply the patch and see.

On Tue, 26 Sept, 2023, 12:53 am Adrian Klaver, 
wrote:

> On 9/25/23 11:55, shashidhar Reddy wrote:
> > Thank you Erik,
> >
> > Not all queries are truncated at same size,different queries different
> > sizes but all 11.5 servers are behaving in same manner.
>
> FYI, the last minor release for version 11 is 11.21. You are missing 16
> bug/security fixes. I would update to 11.21 and then see if the issue
> persists.
>
> >
> >
> > On Mon, 25 Sept, 2023, 5:33 pm Erik Wienhold, <e...@ewie.name> wrote:
> >
> > On 2023-09-25 16:54 +0530, shashidhar Reddy wrote:
> >  > We are using postgres 11.5 on Ubuntu we have enabled logging and
> set
> >  > log_statements to all but when I checked the statements in the
> > logs I can
> >  > see only first few lines of the query and also when I check in
> > pgadmin for
> >  > current queries I could see only few lines. Some one please help
> > me to get
> >  > full queries in the logs and in pgadmin4.
> >
> > * Could be that the logging collector truncates messages.  Are all
> >messages truncated to the same length?
> >
> > * pgAdmin reads pg_stat_activity.query which is subject to config
> >track_activity_query_size[1].  That should, however, not affect
> >logging.
> >
> > * Maybe a bug 11.5 which is already 4 years old.
> >
> > [1]
> >
> https://www.postgresql.org/docs/11/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE
> >
> > --
> > Erik
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread shashidhar Reddy
Thank you Erik,

Not all queries are truncated at the same size; different queries have
different sizes, but all 11.5 servers are behaving in the same manner.


On Mon, 25 Sept, 2023, 5:33 pm Erik Wienhold,  wrote:

> On 2023-09-25 16:54 +0530, shashidhar Reddy wrote:
> > We are using postgres 11.5 on Ubuntu we have enabled logging and set
> > log_statements to all but when I checked the statements in the logs I can
> > see only first few lines of the query and also when I check in pgadmin
> for
> > current queries I could see only few lines. Some one please help me to
> get
> > full queries in the logs and in pgadmin4.
>
> * Could be that the logging collector truncates messages.  Are all
>   messages truncated to the same length?
>
> * pgAdmin reads pg_stat_activity.query which is subject to config
>   track_activity_query_size[1].  That should, however, not affect
>   logging.
>
> * Maybe a bug 11.5 which is already 4 years old.
>
> [1]
> https://www.postgresql.org/docs/11/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE
>
> --
> Erik
>


Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread Adrian Klaver

On 9/25/23 11:55, shashidhar Reddy wrote:

Thank you Erik,

Not all queries are truncated at same size,different queries different 
sizes but all 11.5 servers are behaving in same manner.


FYI, the last minor release for version 11 is 11.21. You are missing 16 
bug/security fixes. I would update to 11.21 and then see if the issue 
persists.





On Mon, 25 Sept, 2023, 5:33 pm Erik Wienhold, <e...@ewie.name> wrote:


On 2023-09-25 16:54 +0530, shashidhar Reddy wrote:
 > We are using postgres 11.5 on Ubuntu we have enabled logging and set
 > log_statements to all but when I checked the statements in the
logs I can
 > see only first few lines of the query and also when I check in
pgadmin for
 > current queries I could see only few lines. Some one please help
me to get
 > full queries in the logs and in pgadmin4.

* Could be that the logging collector truncates messages.  Are all
   messages truncated to the same length?

* pgAdmin reads pg_stat_activity.query which is subject to config
   track_activity_query_size[1].  That should, however, not affect
   logging.

* Maybe a bug 11.5 which is already 4 years old.

[1]

https://www.postgresql.org/docs/11/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE

-- 
Erik




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread Erik Wienhold
On 2023-09-25 16:54 +0530, shashidhar Reddy wrote:
> We are using postgres 11.5 on Ubuntu we have enabled logging and set
> log_statements to all but when I checked the statements in the logs I can
> see only first few lines of the query and also when I check in pgadmin for
> current queries I could see only few lines. Some one please help me to get
> full queries in the logs and in pgadmin4.

* Could be that the logging collector truncates messages.  Are all
  messages truncated to the same length?

* pgAdmin reads pg_stat_activity.query which is subject to config
  track_activity_query_size[1].  That should, however, not affect
  logging.

* Maybe a bug in 11.5, which is already 4 years old.

[1] 
https://www.postgresql.org/docs/11/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE
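If the pgAdmin truncation is the part that matters, a minimal sketch of
raising that limit (the value is just an example; it requires a restart):

ALTER SYSTEM SET track_activity_query_size = 32768;  -- default is 1024 bytes
-- then restart the cluster, e.g. on Ubuntu: pg_ctlcluster 11 main restart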

-- 
Erik




Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread shashidhar Reddy
Hi Laurenz,

The log is also showing only a few lines, that is, an incomplete statement.

On Mon, 25 Sept, 2023, 5:05 pm Laurenz Albe, 
wrote:

> On Mon, 2023-09-25 at 16:54 +0530, shashidhar Reddy wrote:
> > We are using postgres 11.5 on Ubuntu we have enabled logging and set
> > log_statements to all but when I checked the statements in the logs I
> > can see only first few lines of the query
>
> I don't know about pgAdmin, but the log will certainly have the complete
> statement.
>
> Yours,
> Laurenz Albe
>


Re: Postgres 11.5 not logging all sqls

2023-09-25 Thread Laurenz Albe
On Mon, 2023-09-25 at 16:54 +0530, shashidhar Reddy wrote:
> We are using postgres 11.5 on Ubuntu we have enabled logging and set
> log_statements to all but when I checked the statements in the logs I
> can see only first few lines of the query

I don't know about pgAdmin, but the log will certainly have the complete
statement.

Yours,
Laurenz Albe




Postgres 11.5 not logging all sqls

2023-09-25 Thread shashidhar Reddy
Hello

We are using Postgres 11.5 on Ubuntu. We have enabled logging and set
log_statement to all, but when I check the statements in the logs I can see
only the first few lines of the query, and when I check current queries in
pgAdmin I also see only a few lines. Can someone please help me get the
full queries in the logs and in pgAdmin 4?


Re: logging_collector is off, but the logging is opening

2023-07-26 Thread Adrian Klaver

On 7/25/23 14:43, Wen Yi wrote:

Hi community,
When I use the postgres, I check the log.(I use the -l open the log)
As you can see, I type code like this:

postgres=# show logging_collector;
  logging_collector
---
  off
(1 row)

postgres=# ^C

Follow the document, when logging_collector is off, there's no log 
output, but when I check my 'postgres' directory:


[postgres@fedora postgres]$ ls
build.sh  logfile  src  startdb.sh  stopdb.sh  update.sh
[postgres@fedora postgres]$ cat logfile
2023-07-24 09:54:59.668 CST [35872] LOG:  starting PostgreSQL 17devel on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 13.1.1 20230614 (Red Hat 
13.1.1-4), 64-bit
2023-07-24 09:54:59.668 CST [35872] LOG:  listening on IPv6 address 
"::1", port 5432
2023-07-24 09:54:59.668 CST [35872] LOG:  listening on IPv4 address 
"127.0.0.1", port 5432
2023-07-24 09:54:59.684 CST [35872] LOG:  listening on Unix socket 
"/tmp/.s.PGSQL.5432"

...

In a word, the log is actually open. (My startup command: pg_ctl -l 
logfile /home/postgres/postgres/bin/bin/pg_ctl -l logfile -D 
/home/postgres/postgres/data start)

So it confuses me, this conflict between logging_collector & logfile.

Can someone give me some advice?


https://www.postgresql.org/docs/current/runtime-config-logging.html


Note

It is possible to log to stderr without using the logging collector; the 
log messages will just go to wherever the server's stderr is directed. 
However, that method is only suitable for low log volumes, since it 
provides no convenient way to rotate log files. Also, on some platforms 
not using the logging collector can result in lost or garbled log 
output, because multiple processes writing concurrently to the same log 
file can overwrite each other's output.



https://www.postgresql.org/docs/current/app-pg-ctl.html

-l filename
--log=filename

Append the server log output to filename. If the file does not 
exist, it is created. The umask is set to 077, so access to the log file 
is disallowed to other users by default.
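For comparison, a minimal sketch of letting the server manage its own log
files instead of relying on pg_ctl's stderr redirection (postgresql.conf,
values are illustrative):

logging_collector = on
log_directory = 'log'                      # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = '1d'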




Thanks in advance!

Yours,
Wen Yi


--
Adrian Klaver
adrian.kla...@aklaver.com





logging_collector is off, but the logging is opening

2023-07-26 Thread Wen Yi
Hi community,
When I use Postgres, I check the log. (I use -l to open the log.)
As you can see, I type code like this:



postgres=# show logging_collector;
logging_collector 
---
off
(1 row)

postgres=# ^C


Following the documentation, when logging_collector is off there should be no
log output, but when I check my 'postgres' directory:


[postgres@fedora postgres]$ ls
build.sh logfile src startdb.sh stopdb.sh 
update.sh

[postgres@fedora postgres]$ cat logfile
2023-07-24 09:54:59.668 CST [35872] LOG: starting PostgreSQL 17devel on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4), 
64-bit
2023-07-24 09:54:59.668 CST [35872] LOG: listening on IPv6 address "::1", 
port 5432
2023-07-24 09:54:59.668 CST [35872] LOG: listening on IPv4 address 
"127.0.0.1", port 5432
2023-07-24 09:54:59.684 CST [35872] LOG: listening on Unix socket 
"/tmp/.s.PGSQL.5432"
...


In a word, the log is actually open. (My startup command: 
/home/postgres/postgres/bin/bin/pg_ctl -l logfile -D 
/home/postgres/postgres/data start)

So it confuses me, this conflict between logging_collector & logfile.


Can someone give me some advice?
Thanks in advance!


Yours,
Wen Yi

Re: Suppress logging of "pg_hba.conf rejects connection for host"

2023-04-17 Thread Ron

On 4/16/23 10:02, ertan.kucuko...@1nar.com.tr wrote:

Hello,

One of the systems running PostgreSQL 14.7 receive a lot of lines like in
the subject. I have below pg_hba.conf line and that line causes these to be
logged.

host all all 0.0.0.0/0 reject

If possible, I do not want to see these lines in my logs. But, I failed to
find a parameter for it.

Is it possible to turn this specific message logging off?


1. Why do you need that line, instead of just allowing in the addresses you 
want?

2. Why are you getting so many "illegal" connection attempts?
3. Auditors like to see that you're rejecting "illegal" connection attempts.

--
Born in Arizona, moved to Babylonia.




Re: Suppress logging of "pg_hba.conf rejects connection for host"

2023-04-16 Thread Erik Wienhold
> On 16/04/2023 17:02 CEST ertan.kucuko...@1nar.com.tr wrote:
>
> One of the systems running PostgreSQL 14.7 receive a lot of lines like in
> the subject. I have below pg_hba.conf line and that line causes these to be
> logged.
>
> host all all 0.0.0.0/0 reject
>
> If possible, I do not want to see these lines in my logs. But, I failed to
> find a parameter for it.
>
> Is it possible to turn this specific message logging off?

There's no special config for this specific error message.  It is logged as
FATAL so the only way to silence it *and any other messages from DEBUG5 to 
FATAL*
is to set log_min_messages = PANIC.  I don't recommend it.  It also complicates
troubleshooting failing connections in the future if you don't log this message.

When logging to syslog you may be able to discard specific messages.
rsyslog has property-based filters[0] for example:

:msg, contains, "pg_hba.conf rejects connection for host" ~

You should also investigate the clients that try connecting ("a lot" as you
write) and figure out why they keep connecting if you want to reject their
attempts anyway.

[0] 
https://rsyslog.readthedocs.io/en/latest/configuration/filters.html#property-based-filters

--
Erik




Suppress logging of "pg_hba.conf rejects connection for host"

2023-04-16 Thread ertan.kucukoglu
Hello,

One of the systems running PostgreSQL 14.7 receive a lot of lines like in
the subject. I have below pg_hba.conf line and that line causes these to be
logged.

host all all 0.0.0.0/0 reject

If possible, I do not want to see these lines in my logs. But, I failed to
find a parameter for it.

Is it possible to turn this specific message logging off?

Thanks & Regards,
Ertan





Re: Logging the query executed on the server

2022-07-24 Thread Steve Baldwin
On Sun, Jul 24, 2022 at 4:29 PM Igor Korot  wrote:

>
> 2 things:
> 1. How do I turn this off? ;-)
>

When you change the setting via 'set', that change is only for the current
session. You can revert it with 'set {some param} to default;' or just
terminate the session. If you want to make the change permanent, you need
to set it in the config file.
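A minimal sketch of both options (the ALTER SYSTEM form requires superuser
and takes effect after a reload):

-- revert the session-level change:
SET log_statement TO DEFAULT;
RESET log_statement;       -- equivalent
-- or make it persistent instead of per-session:
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();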

2. The log does show the query but it shows it with the placeholders.
> Is there a way to see the actual query?
>

Not sure what you mean here. The query shown in the log should be what is
actually executed by the server. If you are using placeholders, there are
probably 3 different log entries - one for the parse step, one for the bind
step and one for the execute step. If you are asking what are the bind
variable values, they are shown in the bind step. For example:

2022-07-24 07:00:00
UTC:10.122.33.196(33732):b2bc_api@b2bcreditonline:[7786]:LOG:
duration: 0.072 ms bind :
select public.currency_on_mismatch() as on_mismatch,
set_config('search_path', $1, true),
set_config('application.user_id', $2, true),
set_config('application.app_client', $3, true),
set_config('application.api_client_id', $4 , true),
set_config('application.source', $5 , true),
set_config('application.request_id', $6 , true),
set_config('application.in_test_context', $7, true),
set_config('lock_timeout', $8, true),
txid_current()
2022-07-24 07:00:00
UTC:10.122.33.196(33732):b2bc_api@b2bcreditonline:[7786]:DETAIL:
parameters: $1 = 'public', $2 = 'Admin/Support', $3 = 'Admin/Support', $4 =
'ce34e2bc-2c65-4fc1-9b95-878aef19a348', $5 = '?', $6 =
'4432dbb2-ab1c-4bd8-a413-ff5c704209a6', $7 = 'f', $8 = '10s'

If this doesn't help, maybe post what you're seeing in the log.

Cheers,

Steve


Re: Logging the query executed on the server

2022-07-24 Thread Igor Korot
Hi, Steve,

On Sun, Jul 24, 2022 at 12:51 AM Steve Baldwin  wrote:
>
> Hi Igor,
>
> Before you issue your query, try something like this:
>
> (from psql, but hopefully you get the idea)
>
> b2bcreditonline=# set log_min_duration_statement to 0;
> SET
> b2bcreditonline=# set log_statement to 'all';
> SET
>
> Ref: https://www.postgresql.org/docs/current/sql-set.html, 
> https://www.postgresql.org/docs/current/config-setting.html
>
> Then submit your query and it will be set to the server log. You can get the 
> name of the current logfile with:
>
> b2bcreditonline=# select pg_current_logfile();
>  pg_current_logfile
> 
>  /log/pg.csv

2 things:
1. How do I turn this off? ;-)
2. The log does show the query but it shows it with the placeholders.
Is there a way to see the actual query?

Thank you.


>
> HTH,
>
> Steve
>
> On Sun, Jul 24, 2022 at 3:26 PM Igor Korot  wrote:
>>
>> Hi,
>> Is it possible to log the query that will be executed
>> on the server?
>>
>> I'm writing an application that connects to the server
>> through ODBC and libpq.
>> For some reason ODBC interface is failing - it desn't
>> return any rows
>>
>> So I'm thinking if I have a proof that the query I am
>> actually executing is the same as the one I run through
>> the psql - I will know where to look.
>>
>> I am actually binding some parameters and trying to
>> execute the query.
>>
>> Thank you.
>>
>>




Re: Logging the query executed on the server

2022-07-23 Thread Steve Baldwin
Hi Igor,

Before you issue your query, try something like this:

(from psql, but hopefully you get the idea)

b2bcreditonline=# set log_min_duration_statement to 0;
SET
b2bcreditonline=# set log_statement to 'all';
SET

Ref: https://www.postgresql.org/docs/current/sql-set.html,
https://www.postgresql.org/docs/current/config-setting.html

Then submit your query and it will be set to the server log. You can get
the name of the current logfile with:

b2bcreditonline=# select pg_current_logfile();
 pg_current_logfile

 /log/pg.csv

HTH,

Steve

On Sun, Jul 24, 2022 at 3:26 PM Igor Korot  wrote:

> Hi,
> Is it possible to log the query that will be executed
> on the server?
>
> I'm writing an application that connects to the server
> through ODBC and libpq.
> For some reason ODBC interface is failing - it desn't
> return any rows
>
> So I'm thinking if I have a proof that the query I am
> actually executing is the same as the one I run through
> the psql - I will know where to look.
>
> I am actually binding some parameters and trying to
> execute the query.
>
> Thank you.
>
>
>


Logging the query executed on the server

2022-07-23 Thread Igor Korot
Hi,
Is it possible to log the query that will be executed
on the server?

I'm writing an application that connects to the server
through ODBC and libpq.
For some reason the ODBC interface is failing - it doesn't
return any rows.

So I'm thinking if I have a proof that the query I am
actually executing is the same as the one I run through
the psql - I will know where to look.

I am actually binding some parameters and trying to
execute the query.

Thank you.




Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-23 Thread Jagmohan Kaintura
Hi ALl,

Any other thought on this thread.

On Tue, Jun 21, 2022 at 2:24 PM Gilles Darold  wrote:

> On 21/06/2022 at 10:28, Jagmohan Kaintura wrote:
>
> Hi Gilles,
>
> I was going though this earlier today but didn't compiled it as I read it
> may not be able to capture the errors if we have below type of statement
> and most of our statements are of INSERT .. SELECT statements only.
>
> The form INSERT INTO  SELECT ... will not have the same
> behavior than in Oracle. It will not stored the successful insert and
> logged the rows in error. This is not supported because it is a single
> transaction for PostgreSQL and everything is rolled back in case of error.
>
> Our all statements are of that form will it be still useful.
>
>
> Right, this was not obvious in your post, but yes if you are using INSERT
> + SELECT this is not possible with the current version of this extension.
> Maybe that could be possible by rewriting internally the query to loop over
> the result of the select and generate an insert per row returned, but with
> performances lost of courses.
>
>
> Best regards,
>
> --
> Gilles Daroldhttp://www.darold.net/
>
>

-- 
*Best Regards,*
Jagmohan


Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Gilles Darold

On 21/06/2022 at 10:28, Jagmohan Kaintura wrote:

Hi Gilles,

I was going though this earlier today but didn't compiled it as I read 
it may not be able to capture the errors if we have below type of 
statement and most of our statements are of INSERT .. 
SELECT statements only.


The form INSERT INTO  SELECT ... will not have the same 
behavior as in Oracle. It will not store the successful inserts and 
log the rows in error. This is not supported because it is a single 
transaction for PostgreSQL and everything is rolled back in case of error.


Our all statements are of that form will it be still useful.



Right, this was not obvious in your post, but yes, if you are using 
INSERT + SELECT this is not possible with the current version of this 
extension. Maybe it could be made possible by internally rewriting the 
query to loop over the result of the SELECT and generate an insert per 
row returned, but with performance loss of course.
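For illustration, a minimal sketch (table and column names are
hypothetical) of doing that rewrite by hand in PL/pgSQL — each row is
inserted in its own sub-block, so a failure only skips that row and is
recorded in an error table:

CREATE TABLE IF NOT EXISTS target_table_err (
    payload  jsonb,
    err_msg  text,
    err_time timestamptz DEFAULT now()
);

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM source_table LOOP
        BEGIN
            INSERT INTO target_table (id, val) VALUES (r.id, r.val);
        EXCEPTION WHEN OTHERS THEN
            -- only this row's sub-transaction is rolled back
            INSERT INTO target_table_err (payload, err_msg)
            VALUES (to_jsonb(r), SQLERRM);
        END;
    END LOOP;
END;
$$;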



Best regards,

--
Gilles Darold
http://www.darold.net/


Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Jagmohan Kaintura
Hi Gilles,

I was going through this earlier today but didn't compile it, as I read it
may not be able to capture the errors if we have the below type of
statement, and most of our statements are INSERT .. SELECT statements only.

The form INSERT INTO  SELECT ... will not have the same behavior
as in Oracle. It will not store the successful inserts and log the
rows in error. This is not supported because it is a single transaction for
PostgreSQL and everything is rolled back in case of error.

All our statements are of that form; will it still be useful?

On Tue, Jun 21, 2022 at 1:07 PM Gilles Darold  wrote:

> On 21/06/2022 at 09:08, Jagmohan Kaintura wrote:
>
> Hi Team,
> We are working on a project where we are moving from Oracle to PostgreSQL
> and working on a migration tool which mostly have statements for inserting
> the records which are correct and logging the errors in error table using
> ORACLE inbuilt statement for INSERT ALL with DML ERROR logging.
>
> As part of the postgresql best practices, what approach are we taking to
> move these types of statements in Postgresql as we don't have any such
> equivalent mechanism to load correct data in the main table and error
> record in error table with error reason.
>
> The statements mostly used are -->
> INSERT ALL INTO
> target_table
> (COLUMN LIST)
> VALUES()
> LOG ERROR INTO ...
> SELECT statement considering the source tables;
>
> )
> Can anyone please help me with what could be the best approach to convert
> this in the tool.
>
> --
> *Best Regards,*
> Jagmohan
>
>
> Hi,
>
>
> Maybe what you are looking for is here
> https://github.com/MigOpsRepos/pg_dbms_errlog , this is a PostgreSQL
> extension that emulates the DBMS_ERRLOG Oracle package.
>
>
> Best regards,
>
> --
> Gilles Daroldhttp://www.darold.net/
>
>

-- 
*Best Regards,*
Jagmohan


Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Gilles Darold

On 21/06/2022 at 09:08, Jagmohan Kaintura wrote:

Hi Team,
We are working on a project where we are moving from Oracle to 
PostgreSQL and working on a migration tool which mostly have 
statements for inserting the records which are correct and logging the 
errors in error table using ORACLE inbuilt statement for INSERT ALL 
with DML ERROR logging.


As part of the postgresql best practices, what approach are we taking 
to move these types of statements in Postgresql as we don't have any 
such equivalent mechanism to load correct data in the main table and 
error record in error table with error reason.


The statements mostly used are -->
INSERT ALL INTO
target_table
(COLUMN LIST)
VALUES()
LOG ERROR INTO ...
SELECT statement considering the source tables;

)
Can anyone please help me with what could be the best approach to 
convert this in the tool.


--
*Best Regards,*
Jagmohan



Hi,


Maybe what you are looking for is here 
https://github.com/MigOpsRepos/pg_dbms_errlog , this is a PostgreSQL 
extension that emulates the DBMS_ERRLOG Oracle package.



Best regards,

--
Gilles Darold
http://www.darold.net/


INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Jagmohan Kaintura
Hi Team,
We are working on a project where we are moving from Oracle to PostgreSQL,
and working on a migration tool which mostly has statements for inserting
the records that are correct and logging the errors in an error table,
using Oracle's built-in INSERT ALL with DML ERROR logging.

As part of PostgreSQL best practices, what approach should we take to move
these types of statements to PostgreSQL, as we don't have an equivalent
mechanism to load correct data into the main table and error records into
an error table with the error reason?

The statements mostly used are -->
INSERT ALL INTO
target_table
(COLUMN LIST)
VALUES()
LOG ERROR INTO ...
SELECT statement considering the source tables;

)
Can anyone please help me with what could be the best approach to convert
this in the tool.

-- 
*Best Regards,*
Jagmohan


Re: Exclude logging certain connections?

2020-03-15 Thread Paul Förster
Hi Adrian,

> Nothing from what I see:
> 
> From here:
> 
> https://www.postgresql.org/docs/12/config-setting.html#id-1.6.6.4.5
> 
> env PGOPTIONS="-c log_connections=off" psql -d test -U aklaver
> 
> psql: error: could not connect to server: FATAL:  permission denied to set 
> parameter "log_connections"
> 
> First problem is you need to be superuser. Alright so be superuser:
> 
> date
> Sun Mar 15 09:24:20 PDT 2020
> 
> aklaver@maura:~> env PGOPTIONS="-c log_connections=off" psql -d test -U 
> postgres
> 
> psql (12.1)
> Type "help" for help.
> 
> test=# \x
> Expanded display is on.
> test=# select * from pg_settings where name = 'log_connections';
> -[ RECORD 1 ]---+----
> name| log_connections
> setting | off
> unit|
> category| Reporting and Logging / What to Log
> short_desc  | Logs each successful connection.
> extra_desc  |
> context | superuser-backend
> vartype | bool
> source  | client
> min_val |
> max_val |
> enumvals|
> boot_val| off
> reset_val   | off
> sourcefile  |
> sourceline  |
> pending_restart | f
> 
> 
> The connection is still recorded:
> 
> [unknown]-[unknown]-2020-03-15 09:24:23.460 PDT-0LOG:  connection received: 
> host=[local]
> [unknown]-postgres-2020-03-15 09:24:23.460 PDT-0LOG:  connection authorized: 
> user=postgres database=test application_name=psql
> 
> 
> To me it looks like log_connections is all or none.

that's what I suspected. This is also what I found out so far. This is ugly 
because unwanted monitoring connections like those of pgwatch2 keep spamming 
the logfiles this way. It would be great if there was an option to specify a 
list of users whose connections would not be logged, even with logging 
connect/disconnect enabled.

Something like this:

log_exclude_user_connect = 'pgwatch2,myself,...'

in postgresql.conf

Thanks very much.

Cheers,
Paul



Re: Exclude logging certain connections?

2020-03-15 Thread Adrian Klaver

On 3/15/20 3:09 AM, Paul Förster wrote:

Hi,

is there a way to exclude certain connections, for example recurring monitoring 
connections, from the postgres.log?

I found this from in old post on stackexchange.com:

postgres=# create role mon login;
CREATE ROLE
postgres=# alter role mon set log_connections=off;
ERROR:  parameter "log_connections" cannot be set after connection start

(source: 
https://dba.stackexchange.com/questions/118018/is-it-possible-to-exclude-specific-users-in-log-activity-of-postgresql)

There is no connection at the time. So why do I get the error? And what can I 
do about it?


Nothing from what I see:

From here:

https://www.postgresql.org/docs/12/config-setting.html#id-1.6.6.4.5

env PGOPTIONS="-c log_connections=off" psql -d test -U aklaver

psql: error: could not connect to server: FATAL:  permission denied to 
set parameter "log_connections"


First problem is you need to be superuser. Alright so be superuser:

date
Sun Mar 15 09:24:20 PDT 2020

aklaver@maura:~> env PGOPTIONS="-c log_connections=off" psql -d test -U 
postgres


psql (12.1)
Type "help" for help.

test=# \x
Expanded display is on.
test=# select * from pg_settings where name = 'log_connections';
-[ RECORD 1 ]---+
name| log_connections
setting | off
unit|
category| Reporting and Logging / What to Log
short_desc  | Logs each successful connection.
extra_desc  |
context | superuser-backend
vartype | bool
source  | client
min_val |
max_val |
enumvals|
boot_val| off
reset_val   | off
sourcefile  |
sourceline  |
pending_restart | f


The connection is still recorded:

[unknown]-[unknown]-2020-03-15 09:24:23.460 PDT-0LOG:  connection 
received: host=[local]
[unknown]-postgres-2020-03-15 09:24:23.460 PDT-0LOG:  connection 
authorized: user=postgres database=test application_name=psql



To me it looks like log_connections is all or none.




Cheers,
Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




Exclude logging certain connections?

2020-03-15 Thread Paul Förster
Hi,

is there a way to exclude certain connections, for example recurring monitoring 
connections, from the postgres.log?

I found this from in old post on stackexchange.com:

postgres=# create role mon login;
CREATE ROLE
postgres=# alter role mon set log_connections=off;
ERROR:  parameter "log_connections" cannot be set after connection start

(source: 
https://dba.stackexchange.com/questions/118018/is-it-possible-to-exclude-specific-users-in-log-activity-of-postgresql)

There is no connection at the time. So why do I get the error? And what can I 
do about it?

Cheers,
Paul



Re: Logging [RESOLVED]

2019-12-06 Thread Peter J. Holzer
On 2019-12-04 16:03:24 -0800, Rich Shepard wrote:
> On Wed, 4 Dec 2019, Stephen Eilert wrote:
> > Usually, this is done by logrotate or a similar mechanism in your system.
> > You’ll likely find that other logs in your system follow a similar
> > pattern, not just Postgresql.
> 
> I just checked /etc/logrotate.d/postgres and it was set at daily with rotate
> 7. I changed rotate to 4 but the dates are 3-4 days apart, not sequential.

Your log files are extremely small. At only a few hundred bytes every 3
or 4 days it is very likely that nothing is logged on most days. If your
log file is empty, logrotate won't rotate it if the option "notifempty"
is set (which is probably the case).
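For reference, a minimal sketch of the relevant /etc/logrotate.d/postgres
stanza (paths and counts are illustrative):

/var/log/postgresql-11 {
    daily
    rotate 4
    compress
    missingok
    notifempty      # skip rotation on days when nothing was logged
}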

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Logging

2019-12-04 Thread Rich Shepard

On Wed, 4 Dec 2019, Adrian Klaver wrote:

Take a look at the logging section of postgresql.conf to see if Postgres is 
handing off to the system and logrotate


Adrian,

That conf file is in several places (different flavors). I'll check them
all.

Thanks,

Rich




Re: Logging [RESOLVED]

2019-12-04 Thread Rich Shepard

On Wed, 4 Dec 2019, Stephen Eilert wrote:


Usually, this is done by logrotate or a similar mechanism in your system.
You’ll likely find that other logs in your system follow a similar
pattern, not just Postgresql.


Stephen,

Other logs, controlled by logrotate, rotate daily for a maximum of 4
backups.

I just checked /etc/logrotate.d/postgres and it was set at daily with rotate
7. I changed rotate to 4 but the dates are 3-4 days apart, not sequential.

Thanks,

Rich




Re: Logging

2019-12-04 Thread Adrian Klaver

On 12/4/19 3:20 PM, Rich Shepard wrote:

Running Slackware-14.2/x86_64 and postgresql-11.5.

In /var/log/ are these files:

-rw-r- 1 postgres wheel   0 Nov 23 04:40 postgresql-11
-rw-r- 1 postgres wheel 723 Nov 23 04:40 postgresql-11.1
-rw-r- 1 postgres wheel 324 Nov 20 04:40 postgresql-11.2.gz
-rw-r- 1 postgres wheel 320 Nov 17 04:40 postgresql-11.3.gz
-rw-r- 1 postgres wheel 322 Nov 14 04:40 postgresql-11.4.gz
-rw-r- 1 postgres wheel 321 Nov 10 04:40 postgresql-11.5.gz
-rw-r- 1 postgres wheel 325 Nov  6 04:40 postgresql-11.6.gz
-rw-r- 1 postgres wheel 337 Oct 23 04:40 postgresql-11.7.gz

I assume that they're an automatic backup that runs every 3-4 days. What's
backed up and where is this controlled?


Looks like logrotate.
Take a look at the logging section of postgresql.conf to see if Postgres 
is handing off to the system and logrotate




I ask because I have a cron job that does a pg_dumpall each night at 11:30
pm. (It's a small installation for my business use so the files are not
excessive and I keep them for only short periods.)

Regards,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Logging

2019-12-04 Thread Stephen Eilert
Usually, this is done by logrotate or a similar mechanism in your system. 
You’ll likely find that other logs in your system follow a similar pattern, not 
just Postgresql.

— Stephen
On Dec 4, 2019, 3:21 PM -0800, Rich Shepard , wrote:
> Running Slackware-14.2/x86_64 and postgresql-11.5.
>
> In /var/log/ are these files:
>
> -rw-r- 1 postgres wheel 0 Nov 23 04:40 postgresql-11
> -rw-r- 1 postgres wheel 723 Nov 23 04:40 postgresql-11.1
> -rw-r- 1 postgres wheel 324 Nov 20 04:40 postgresql-11.2.gz
> -rw-r- 1 postgres wheel 320 Nov 17 04:40 postgresql-11.3.gz
> -rw-r- 1 postgres wheel 322 Nov 14 04:40 postgresql-11.4.gz
> -rw-r- 1 postgres wheel 321 Nov 10 04:40 postgresql-11.5.gz
> -rw-r- 1 postgres wheel 325 Nov 6 04:40 postgresql-11.6.gz
> -rw-r- 1 postgres wheel 337 Oct 23 04:40 postgresql-11.7.gz
>
> I assume that they're an automatic backup that runs every 3-4 days. What's
> backed up and where is this controlled?
>
> I ask because I have a cron job that does a pg_dumpall each night at 11:30
> pm. (It's a small installation for my business use so the files are not
> excessive and I keep them for only short periods.)
>
> Regards,
>
> Rich
>
>


Logging

2019-12-04 Thread Rich Shepard

Running Slackware-14.2/x86_64 and postgresql-11.5.

In /var/log/ are these files:

-rw-r- 1 postgres wheel   0 Nov 23 04:40 postgresql-11
-rw-r- 1 postgres wheel 723 Nov 23 04:40 postgresql-11.1
-rw-r- 1 postgres wheel 324 Nov 20 04:40 postgresql-11.2.gz
-rw-r- 1 postgres wheel 320 Nov 17 04:40 postgresql-11.3.gz
-rw-r- 1 postgres wheel 322 Nov 14 04:40 postgresql-11.4.gz
-rw-r- 1 postgres wheel 321 Nov 10 04:40 postgresql-11.5.gz
-rw-r- 1 postgres wheel 325 Nov  6 04:40 postgresql-11.6.gz
-rw-r- 1 postgres wheel 337 Oct 23 04:40 postgresql-11.7.gz

I assume that they're an automatic backup that runs every 3-4 days. What's
backed up and where is this controlled?

I ask because I have a cron job that does a pg_dumpall each night at 11:30
pm. (It's a small installation for my business use so the files are not
excessive and I keep them for only short periods.)

Regards,

Rich




Re: logging proxy

2019-11-07 Thread Bruce Momjian
On Fri, Nov 1, 2019 at 01:58:10AM +0300, Олег Самойлов
wrote:
> Does anyone know PostgresQL proxy which can log queries with username,
> ip and affected rows for security reason. PostgresQL itself can log
> almost all, except affected rows.

You can use the server logs to get the non-row information, then use
streaming replication with logical decoding to get the rows for each
transaction started by the user.
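A minimal sketch of that second part, using the built-in test_decoding
plugin (slot and database names are made up; requires wal_level = logical):

pg_recvlogical -d mydb --slot=audit_slot --create-slot -P test_decoding
pg_recvlogical -d mydb --slot=audit_slot --start -f -
# emits one BEGIN/.../COMMIT block per transaction, with the changed rows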

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




logging proxy

2019-10-31 Thread Олег Самойлов
Does anyone know of a PostgreSQL proxy which can log queries with username,
IP and affected rows, for security reasons? PostgreSQL itself can log almost
everything, except affected rows.



Re: GSSAPI: logging principal

2019-10-09 Thread Stephen Frost
Greetings,

* Allan Jensen (pgl...@winge-jensen.dk) wrote:
> I have GSSAPI-login and user mapping to postgres working fine.

Great!

> Whenever i login to postgres I get a line like the following in the
> logfile:
> 
> connection authorized: user=testrole database=testdb SSL enabled
> (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256,
> compression=off)

Minor side-note: we have GSSAPI encryption built-in starting with v12,
removing the need to also have SSL.

> What bothers me is that I can't see which Kerberos principal was used
> during authentication.

Yeah, I'm afraid that's probably right.  In looking, I don't see any
particularly easy way.  I could have sworn I complained about this ages
ago (there might even be a patch somewhere in the depths of -hackers
from 5 years ago or more) but clearly it never made it in.

One thing that is kind of nice is that with v12 there's a new view where
you can view the state of existing connections, including the principal
they authenticate with: pg_stat_gssapi.
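
For example, a minimal query against that view on v12 or later (column names as documented there), joined to pg_stat_activity to tie the principal to a session:

SELECT a.pid, a.usename, a.client_addr, g.principal, g.encrypted
FROM pg_stat_activity a
JOIN pg_stat_gssapi g USING (pid)
WHERE g.gss_authenticated;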

> Is there any way to make postgres log the principal?

Would definitely be a good thing for us to have, and the CN for an
SSL-based connection.  I don't think it'd be hard for someone to hack up
a patch to do so.  I've added it to my list of "nice to haves" but it
seems unlikely I'll get any time in the near future to hack on it, so
if someone else wants to work on it, please feel free to do so...

Thanks,

Stephen




GSSAPI: logging principal

2019-10-09 Thread Allan Jensen
Hi,

I have GSSAPI-login and user mapping to postgres working fine.

Whenever i login to postgres I get a line like the following in the
logfile:

connection authorized: user=testrole database=testdb SSL enabled
(protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256,
compression=off)

What bothers me is that I can't see which Kerberos principal was used
during authentication.

I have fiddled around with log-settings to no avail.

Is there any way to make postgres log the principal?

--
Regards,
Allan




Re: logging "raise" to file

2019-07-28 Thread wambacher
Thanks Georg,

regards
walter

> the output redirection at the end writes your "raise notice" messages to your log file:
>
> psql (+your connection string) -f /path/to/file.sql > /path/to/log/xxx.log 2>&1
>
> -- 
My projects:

Admin Boundaries of the World 
Missing Boundaries

Emergency Map 
Postal Code Map (Germany only) 
Fools (QA for zipcodes in Germany) 
Postcode Boundaries of Germany 
OSM Software Watchlist



Re: logging "raise" to file

2019-07-28 Thread Georg H.

Hi,

On 28.07.2019 at 12:32, wambac...@posteo.de wrote:


Hi,

is there a way to log output from "raise ..." to a local file? \o file 
does not work (for me).


regards
walter



the output redirection at the end writes your "raise notice" messages to your log file:

psql (+your connection string) -f /path/to/file.sql > /path/to/log/xxx.log 2>&1


see

https://dba.stackexchange.com/questions/107199/how-to-log-custom-messages-from-inside-a-postgresql-transaction

regards

Georg



Re: logging "raise" to file

2019-07-28 Thread Guillaume Lelarge
Hi,

On Sun, Jul 28, 2019 at 12:32,  wrote:

> Hi,
>
> is there a way to log output from "raise ..." to a local file? \o file
> does not work (for me).
>

No, RAISE messages can only go to log files. You would need to call a
function that could write to a file (though without calling RAISE).
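
A minimal sketch of such a function, assuming the adminpack extension is available (it supplies pg_file_write; it has been dropped from the newest releases). The file path is interpreted relative to the data directory and the call is superuser-only, so this is illustrative only:

CREATE EXTENSION IF NOT EXISTS adminpack;

CREATE OR REPLACE FUNCTION log_line(msg text) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- append one timestamped line to a file under the data directory
  PERFORM pg_catalog.pg_file_write('custom_debug.log',
          clock_timestamp()::text || ' ' || msg || E'\n', true);
END;
$$;

SELECT log_line('reached step 3');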


-- 
Guillaume.


logging "raise" to file

2019-07-28 Thread wambacher
Hi,

is there a way to log output from "raise ..." to a local file? \o file
does not work (for me).

regards
walter

-- 
My projects:

Admin Boundaries of the World 
Missing Boundaries

Emergency Map 
Postal Code Map (Germany only) 
Fools (QA for zipcodes in Germany) 
Postcode Boundaries of Germany 
OSM Software Watchlist



Re: query logging of prepared statements

2019-02-09 Thread Justin Pryzby
On Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:
> A couple months ago, I implemented prepared statements for PyGreSQL.  While
> updating our application in advance of their release with that feature, I
> noticed that our query logs were several times larger.

Previously sent to -general (and quoted fully below), resending to -hackers
with patch.
https://www.postgresql.org/message-id/20190208132953.GF29720%40telsasoft.com

I propose that the prepared statement associated with an EXECUTE should only be
included in log "DETAIL" when log_error_verbosity=VERBOSE, for both SQL EXECUTE
and PQexecPrepared.  I'd like to be able to continue logging DETAIL (including
bind parameters), so I want to avoid setting "TERSE" just to avoid redundant
messages.

With attached patch, I'm not sure if !*portal_name && !portal->prepStmtName
would catch cases other than PQexecParams (?)

Compare an unpatched server, a patched server, and a patched server with
log_error_verbosity=verbose.

|$ psql postgres -xtc "SET log_error_verbosity=default;SET log_statement='all'; 
SET client_min_messages=log" -c "PREPARE q AS SELECT 2" -c "EXECUTE q"
|SET
|LOG:  statement: PREPARE q AS SELECT 2
|PREPARE
|LOG:  statement: EXECUTE q
|DETAIL:  prepare: PREPARE q AS SELECT 2
|?column? | 2

|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc "SET 
log_error_verbosity=default;SET log_statement='all'; SET 
client_min_messages=log" -c "PREPARE q AS SELECT 2" -c "EXECUTE q"
|SET
|LOG:  statement: PREPARE q AS SELECT 2
|PREPARE
|LOG:  statement: EXECUTE q
|?column? | 2

|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc "SET 
log_error_verbosity=verbose;SET log_statement='all'; SET 
client_min_messages=log" -c "PREPARE q AS SELECT 2" -c "EXECUTE q"
|SET
|LOG:  statement: PREPARE q AS SELECT 2
|PREPARE
|LOG:  statement: EXECUTE q
|DETAIL:  prepare: PREPARE q AS SELECT 2
|?column? | 2

For PQexecPrepared library call:

|$ xPGPORT=5678 xPGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ 
python2.7 -c "import pg; d=pg.DB('postgres'); d.query('SET 
client_min_messages=log; SET log_error_verbosity=default; SET 
log_statement=\"all\"'); d.query('SELECT 1; PREPARE q AS SELECT \$1'); 
d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])"
|LOG:  statement: SELECT 1; PREPARE q AS SELECT $1
|LOG:  execute q: SELECT 1; PREPARE q AS SELECT $1
|DETAIL:  parameters: $1 = '1'
|LOG:  execute <unnamed>: SELECT $1
|DETAIL:  parameters: $1 = '2'

|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ 
python2.7 -c "import pg; d=pg.DB('postgres'); d.query('SET 
client_min_messages=log; SET log_error_verbosity=default; SET 
log_statement=\"all\"'); d.query('SELECT 1; PREPARE q AS SELECT \$1'); 
d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])"
|LOG:  statement: SELECT 1; PREPARE q AS SELECT $1
|LOG:  execute q
|DETAIL:  parameters: $1 = '1'
|LOG:  execute <unnamed>: SELECT $1
|DETAIL:  parameters: $1 = '2'

|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ 
python2.7 -c "import pg; d=pg.DB('postgres'); d.query('SET 
client_min_messages=log; SET log_error_verbosity=verbose; SET 
log_statement=\"all\"'); d.query('SELECT 1; PREPARE q AS SELECT \$1'); 
d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])"
|LOG:  statement: SELECT 1; PREPARE q AS SELECT $1
|LOG:  execute q: SELECT 1; PREPARE q AS SELECT $1
|DETAIL:  parameters: $1 = '1'
|LOG:  execute <unnamed>: SELECT $1
|DETAIL:  parameters: $1 = '2'

Also, I noticed that if the statement was prepared using SQL PREPARE (rather
than PQprepare), and if it used simple query with multiple commands, they're
all included in the log, like this when executed with PQexecPrepared:
|LOG:  execute q: SET log_error_verbosity=verbose; SET client_min_messages=log; 
PREPARE q AS SELECT $1

And looks like this for SQL EXECUTE:
|[pryzbyj@telsasoft-db postgresql]$ psql postgres -txc "SET 
client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2" -c "EXECUTE q"
|PREPARE
|LOG:  statement: EXECUTE q
|DETAIL:  prepare: SET client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2
|?column? | 2

On Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:
> A couple months ago, I implemented prepared statements for PyGreSQL.  While
> updating our application in advance of their release with that feature, I
> noticed that our query logs were several times larger.
> 
> With non-prepared statements, we logged to CSV:
> |message| SELECT 1
> 
> With SQL EXECUTE, we log:
> |message| statement: EXECUTE sqlex(2);
> |detail | prepare: prepare sqlex AS SELECT $1;
> 
> With PQexecPrepared, we would log:
> |message| execute qq: PREPARE qq AS SELECT $1
> |detail |

query logging of prepared statements

2019-02-08 Thread Justin Pryzby
A couple months ago, I implemented prepared statements for PyGreSQL.  While
updating our application in advance of their release with that feature, I
noticed that our query logs were several times larger.

With non-prepared statements, we logged to CSV:
|message| SELECT 1

With SQL EXECUTE, we log:
|message| statement: EXECUTE sqlex(2);
|detail | prepare: prepare sqlex AS SELECT $1;

With PQexecPrepared, we would log:
|message| execute qq: PREPARE qq AS SELECT $1
|detail | parameters: $1 = '3'

For comparison, with PQexecParams, the logs I see look like this (apparently
the "unnamed" prepared statement is used behind the scenes):
|message| execute <unnamed>: SELECT [...]

It's not clear to me why it'd be desirable for the previous PREPARE to be
additionally logged during every execution, instead of just its name (in
"message") and params (in "detail").  (Actually, I had to triple check that it
wasn't somehow executing a prepared statement which itself created a prepared
statement...)

For us, the performance benefit is to minimize the overhead (mostly in pygres)
of many INSERTs into append-only tables.  It's not great that a feature
intended to improve performance is causing 2x more log volume to be written.

Also, it seems odd that for SQL EXECUTE, the PREPARE is shown in "detail", but
for the library call, it's shown in "message".

I found:
|commit bc24d5b97673c365f19be21f83acca3c184cf1a7
|Author: Bruce Momjian 
|Date:   Tue Aug 29 02:11:30 2006 +
|
|Now bind displays prepare as detail, and execute displays prepare and
|optionally bind.  I re-added the "statement:" label so people will
|understand why the line is being printed (it is log_*statement
|behavior).
|
|Use single quotes for bind values, instead of double quotes, and double
|literal single quotes in bind values (and document that).  I also made
|use of the DETAIL line to have much cleaner output.

and:

|commit c8961bf1ce0b51019db31c5572dac18b664e02f1
|Author: Bruce Momjian 
|Date:   Fri Aug 4 18:53:46 2006 +
|
|Improve logging of protocol-level prepared statements.

Justin



Re: Function for Exception Logging

2018-11-05 Thread Alexey Bashtanov



Even more difficult in PG functions as they have no commit / rollback 
capability.  I haven't played with stored procedures in PG11 yet.


You can simulate Oracle's autonomous transaction feature in Postgres by 
connecting to the same DB using dblink.
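
A minimal sketch of that dblink approach, with illustrative names, assuming the dblink extension and a loopback connection that pg_hba.conf permits:

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE TABLE IF NOT EXISTS error_log (
    logged_at  timestamptz DEFAULT now(),
    err_state  text,
    message    text
);

CREATE OR REPLACE FUNCTION log_error(p_state text, p_msg text) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- the INSERT runs on its own connection, so it survives the caller's rollback
  PERFORM dblink_exec('dbname=' || current_database(),
          format('INSERT INTO error_log(err_state, message) VALUES (%L, %L)',
                 p_state, p_msg));
END;
$$;

-- usage inside an exception handler
DO $$
BEGIN
  RAISE division_by_zero;
EXCEPTION WHEN OTHERS THEN
  PERFORM log_error(SQLSTATE, SQLERRM);
  RAISE;
END;
$$;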


As for implicit passing of error parameters, I don't think it's possible.

Best, Alex



Re: Function for Exception Logging

2018-10-31 Thread Tony Shelver
I'd be interested if there is an answer to this.   The big issue with
writing to a table is the interaction with rollbacks and commits.

Even more difficult in PG functions as they have no commit / rollback
capability.  I haven't played with stored procedures in PG11 yet.

In Oracle, I wrote logging / error messages to a logging file on the file
system, directly from PL/SQL.


This file can then be monitored for error messages via a messaging
solution.


Regards



On Mon, 29 Oct 2018 at 19:13, Patrick FICHE 
wrote:

> Hi community,
>
>
>
> I would like to implement a function that would log managed Exceptions
> into a dedicated table.
>
> For example, I have some code like :
>
> BEGIN
>
> Code generation exception
>
> EXCEPTION
>
>   WHEN OTHERS THEN Log_Error();
>
> END;
>
>
>
> The Log_Error function would be able to get the exception context /
> parameters to log the exception parameters into a table.
>
> Is there any way to manage this without providing the Exception parameters
> as parameters to the Log_Error function ?
>
>
>
> Thanks,
>
> Patrick
>
>
>
>
>
> *Patrick Fiche*
>
> Database Engineer, Aqsacom Sas.
>
> *c.* 33 6 82 80 69 96
>
> *e.* patrick.fi...@aqsacom.com
>
>
>
>
>
>


Function for Exception Logging

2018-10-29 Thread Patrick FICHE
Hi community,

I would like to implement a function that would log managed Exceptions into a 
dedicated table.
For example, I have some code like :
BEGIN
Code generation exception
EXCEPTION
  WHEN OTHERS THEN Log_Error();
END;

The Log_Error function would be able to get the exception context / parameters 
to log the exception parameters into a table.
Is there any way to manage this without providing the Exception parameters as 
parameters to the Log_Error function ?

Thanks,
Patrick


Patrick Fiche
Database Engineer, Aqsacom Sas.
c. 33 6 82 80 69 96
e. patrick.fi...@aqsacom.com





Re: Advice on logging strategy

2018-10-12 Thread Mike Martin
Thanks!

On Fri, 12 Oct 2018 at 14:33, David Steele  wrote:

> On 10/11/18 11:26 AM, Mike Martin wrote:
> >
> > This results in enormous log files which take ages to import using copy
> > because each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I cant find any way to filter dml statements
>
> pgAudit (https://github.com/pgaudit/pgaudit) gives you fine-grain
> control over what is logged by command type, table, or user as well as a
> lot more detail.
>
> --
> -David
> da...@pgmasters.net
>


Re: Advice on logging strategy

2018-10-12 Thread David Steele

On 10/11/18 11:26 AM, Mike Martin wrote:


This results in enormous log files which take ages to import using copy 
because each execute statement is logged with the parameters chosen


Is there any way around this?

I cant find any way to filter dml statements


pgAudit (https://github.com/pgaudit/pgaudit) gives you fine-grain 
control over what is logged by command type, table, or user as well as a 
lot more detail.
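
For instance, a minimal sketch of session-level settings with pgAudit (assuming the extension is installed and listed in shared_preload_libraries; the role name is illustrative):

CREATE EXTENSION IF NOT EXISTS pgaudit;
-- log DDL and role changes, but skip READ/WRITE so the nightly load stays quiet
ALTER SYSTEM SET pgaudit.log = 'ddl, role';
SELECT pg_reload_conf();
-- or silence auditing for just the loader role
ALTER ROLE bulk_loader SET pgaudit.log = 'none';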


--
-David
da...@pgmasters.net



Re: Advice on logging strategy

2018-10-11 Thread Jeff Janes
On Thu, Oct 11, 2018 at 6:27 AM Mike Martin  wrote:

> I have a question on logging strategy
>
> I have logging set to
> log_statement = 'all' on a network database with logging set to csv so I
> can import it to a logging table
>
> However the database is populated via a nightly routine downloading data
> via REST API using prepared statements
>
> This results in enormous log files which take ages to import using copy
> because each execute statement is logged with the parameters chosen
>
> Is there any way around this?
>

One option is to convert to using COPY...FROM STDIN rather than prepared
INSERTs.

Another is to create a user specifically for bulk population, and do an
`ALTER USER bulk_load SET log_statement = none` to override the global
log_statement setting.
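
Spelled out as a minimal sketch (the role name is only an example):

CREATE ROLE bulk_load LOGIN PASSWORD 'change-me';
ALTER ROLE bulk_load SET log_statement = 'none';
ALTER ROLE bulk_load SET log_min_duration_statement = -1;
-- the nightly REST loader connects as bulk_load; every other role keeps the global settings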

Cheers,

Jeff


Re: Advice on logging strategy

2018-10-11 Thread Mike Martin
I suppose the ideal would be to log the prepared statement once, and the detail
only on error, rather than once per execution.

On Thu, 11 Oct 2018 at 11:33, Rob Sargent  wrote:

>
>
> > On Oct 11, 2018, at 4:26 AM, Mike Martin  wrote:
> >
> > I have a question on logging strategy
> >
> > I have logging set to
> > log_statement = 'all' on a network database with logging set to csv so I
> can import it to a logging table
> >
> > However the database is populated via a nightly routine downloading data
> via REST API using prepared statements
> >
> > This results in enormous log files which take ages to import using copy
> because each execute statement is logged with the parameters chosen
> >
> > Is there any way around this?
> >
> > I cant find any way to filter dml statements
> >
> > thanks
> >
> Do you want all the log lines in your logging table?
> There was a thread yesterday (10.Oct.2018) on COPY which mentioned the
> possibility of multiple processes COPYing to the same table.


Re: Advice on logging strategy

2018-10-11 Thread Rob Sargent



> On Oct 11, 2018, at 4:26 AM, Mike Martin  wrote:
> 
> I have a question on logging strategy
> 
> I have logging set to
> log_statement = 'all' on a network database with logging set to csv so I can 
> import it to a logging table
> 
> However the database is populated via a nightly routine downloading data via 
> REST API using prepared statements
> 
> This results in enormous log files which take ages to import using copy 
> because each execute statement is logged with the parameters chosen
> 
> Is there any way around this?
> 
> I cant find any way to filter dml statements
> 
> thanks
> 
Do you want all the log lines in your logging table?
There was a thread yesterday (10.Oct.2018) on COPY which mentioned the 
possibility of multiple processes COPYing to the same table.


Advice on logging strategy

2018-10-11 Thread Mike Martin
I have a question on logging strategy

I have logging set to
log_statement = 'all' on a network database with logging set to csv so I
can import it to a logging table

However the database is populated via a nightly routine downloading data
via REST API using prepared statements

This results in enormous log files which take ages to import using copy
because each execute statement is logged with the parameters chosen

Is there any way around this?

I cant find any way to filter dml statements

thanks


Re: Suggestion about logging only every n-th statement

2018-06-24 Thread Adrien Nayrat
On 06/20/2018 03:06 PM, Janning Vygen wrote:
>> FYI in made this patch which seems do what you want :
>> https://www.postgresql.org/message-id/flat/c30ee535-ee1e-db9f-fa97-146b9f62caed%40anayrat.info#c30ee535-ee1e-db9f-fa97-146b9f62c...@anayrat.info
>>
>>
>> I will add an entry in september's commit fest.
> 
> excellent. Exactly what I want!

FYI, I added an entry in next commit fest (september is not open) :
https://commitfest.postgresql.org/18/1691/

Feel free to review it ;)

Regards,

-- 
Adrien NAYRAT





Re: Suggestion about logging only every n-th statement

2018-06-20 Thread Adrien Nayrat
On 06/20/2018 03:08 PM, Andreas Kretschmer wrote:
> I would suggest using pg_stat_statements for that.

I agree, but sometimes it is not sufficient. For example, if you need the query's
parameters.

-- 
Adrien NAYRAT






Re: Suggestion about logging only every n-th statement

2018-06-20 Thread Andreas Kretschmer
On 20 June 2018 13:51:25 CEST, Janning Vygen  wrote:
>Back in 2009 I made a suggestion which is not implemented yet but would
>
>still be very valuable and easy to implement in my opinion (not for me 
>as I am not speaking C):
>
>https://www.postgresql.org/message-id/flat/200903161426.56662.vygen%40kicktipp.de#200903161426.56662.vy...@kicktipp.de
>
>We still can't afford to log each and every statement as we have too 
>many, and we don't want to spend more on disk performance/space. Therefore
>
>we log only statements slower than 50ms.
>
>But for analyzing usage patterns it would be very nice to have this 
>combined with a sample_rate for logging.
>
>   logging_sample_rate = n
>
>So each n-th statement will get logged regardless of execution time.
>
>What do you think?
>
>regards
>Janning

I would suggest using pg_stat_statements for that.
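
A minimal sketch of that route, assuming the extension is preloaded via shared_preload_libraries (the timing columns were renamed to *_exec_time in PostgreSQL 13; older releases use total_time/mean_time):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;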

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Suggestion about logging only every n-th statement

2018-06-20 Thread Janning Vygen

On 20.06.2018 at 14:53, Adrien NAYRAT wrote:

On 06/20/2018 01:51 PM, Janning Vygen wrote:
Back in 2009 I made a suggestion which is not implemented yet but 
would still be very valuable and easy to implement in my opinion (not 
for me as I am not speaking C):


https://www.postgresql.org/message-id/flat/200903161426.56662.vygen%40kicktipp.de#200903161426.56662.vy...@kicktipp.de 



FYI in made this patch which seems do what you want :
https://www.postgresql.org/message-id/flat/c30ee535-ee1e-db9f-fa97-146b9f62caed%40anayrat.info#c30ee535-ee1e-db9f-fa97-146b9f62c...@anayrat.info 



I will add an entry in september's commit fest.


excellent. Exactly what I want!

regards
Janning




Re: Suggestion about logging only every n-th statement

2018-06-20 Thread Geoff Winkless
On Wed, 20 Jun 2018 at 12:51, Janning Vygen  wrote:

> But for analyzing usage patterns it would be very nice to have this
> combined with a sample_rate for logging.
>
>logging_sample_rate = n
>
> So each n-th statement will get logged regardless of execution time.
>

I think you would need to introduce a randomizing element if you wanted to
make it statistically valid. Logging every n'th statement could (depending
on your usage pattern) be catastrophically inaccurate.
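
For what it's worth, later releases added exactly this kind of randomized sampling; a hedged sketch on PostgreSQL 13 or newer (check the docs for your version, the knobs differ slightly in 12):

-- always log anything slower than 5 seconds
ALTER SYSTEM SET log_min_duration_statement = '5s';
-- additionally log a random 1% sample of statements slower than 50 ms
ALTER SYSTEM SET log_min_duration_sample = '50ms';
ALTER SYSTEM SET log_statement_sample_rate = 0.01;
SELECT pg_reload_conf();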

Geoff


Suggestion about logging only every n-th statement

2018-06-20 Thread Janning Vygen
Back in 2009 I made a suggestion which is not implemented yet but would 
still be very valuable and easy to implement in my opinion (not for me 
as I am not speaking C):


https://www.postgresql.org/message-id/flat/200903161426.56662.vygen%40kicktipp.de#200903161426.56662.vy...@kicktipp.de

We still can't afford to log each and every statement as we have too 
many, and we don't want to spend more on disk performance/space. Therefore 
we log only statements slower than 50ms.


But for analyzing usage patterns it would be very nice to have this 
combined with a sample_rate for logging.


  logging_sample_rate = n

So each n-th statement will get logged regardless of execution time.

What do you think?

regards
Janning




Re: Logging

2018-06-12 Thread Andrew Bartley
Thanks All,

It seems that the related DETAIL message appears if the query is
successful.  On error the DETAIL log line is missing...   This makes
debugging difficult, considering the insert is coming from Dreamfactory via
a REST POST.
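
As an aside, releases newer than the 9.6 server in this thread can log the parameters of failing statements too; a minimal sketch on PostgreSQL 13 or later:

-- -1 logs bind parameters of failing statements in full; 0 (the default) disables it
ALTER SYSTEM SET log_parameter_max_length_on_error = -1;
SELECT pg_reload_conf();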

I am finding it very difficult navigating my way through the brave new
world of SaaS and black-box type applications.  Maybe 32 years as a
database programmer is just too long.

Thanks

Andrew




On Wed, 13 Jun 2018 at 14:12 Adrian Klaver 
wrote:

> On 06/12/2018 08:25 PM, David G. Johnston wrote:
> > On Tuesday, June 12, 2018, Andrew Bartley <ambart...@gmail.com> wrote:
> >
> >
> >
> > On Wed, 13 Jun 2018 at 12:43 Laurenz Albe <laurenz.a...@cybertec.at> wrote:
> >
> >
> > log_min_duration_statement = 0
> >
> > [...]
> >
> >
> > log_min_duration_statement -1
> >
> >
> > You've disabled statement logging altogether.  The zero value you were
> > directed to use is what causes everything to be logged.
>
> Actually no:
>
>
> https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
>
> log_min_duration_statement (integer)
>
> "... Setting this to zero prints all statement durations. Minus-one (the
> default) disables logging statement durations.  ..."
>
> "
>
> So -1 only affects logging statements relative to duration.
>
> If you have log_statement set then you will still get statements logged
> if you have log_min_duration_statement = -1 :
>
> Note
>
> When using this option together with log_statement, the text of
> statements that are logged because of log_statement will not be repeated
> in the duration log message.
> "
>
> This is how I have my logging setup, log_min_duration_statement = -1
> and log_statement = 'mod' and I see statements in the logs.
>
> >
> > David J.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Logging

2018-06-12 Thread Adrian Klaver

On 06/12/2018 08:25 PM, David G. Johnston wrote:
On Tuesday, June 12, 2018, Andrew Bartley <ambart...@gmail.com> wrote:




On Wed, 13 Jun 2018 at 12:43 Laurenz Albe <laurenz.a...@cybertec.at> wrote:


log_min_duration_statement = 0

[...]


log_min_duration_statement -1


You've disabled statement logging altogether.  The zero value you were 
directed to use is what causes everything to be logged.


Actually no:

https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN

log_min_duration_statement (integer)

"... Setting this to zero prints all statement durations. Minus-one (the 
default) disables logging statement durations.  ..."


"

So -1 only affects logging statements relative to duration.

If you have log_statement set then you will still get statements logged 
if you have log_min_duration_statement = -1 :


Note

When using this option together with log_statement, the text of 
statements that are logged because of log_statement will not be repeated 
in the duration log message.

"

This is how I have my logging setup, log_min_duration_statement = -1
and log_statement = 'mod' and I see statements in the logs.



David J.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Logging

2018-06-12 Thread Andrew Bartley
Ok, thanks.   will try

On Wed, 13 Jun 2018 at 13:25 David G. Johnston 
wrote:

> On Tuesday, June 12, 2018, Andrew Bartley  wrote:
>
>> On Wed, 13 Jun 2018 at 12:43 Laurenz Albe 
>> wrote:
>>
>
>>> log_min_duration_statement = 0
>>>
>> [...]
>
>>
>> log_min_duration_statement -1
>>
>
> You've disabled statement logging altogether.  The zero value you were
> directed to use is what causes everything to be logged.
>
> David J.
>
>
>


Re: Logging

2018-06-12 Thread David G. Johnston
On Tuesday, June 12, 2018, Andrew Bartley  wrote:

>
>
> On Wed, 13 Jun 2018 at 12:43 Laurenz Albe 
> wrote:
>
>>
>> log_min_duration_statement = 0
>>
> [...]

>
> log_min_duration_statement -1
>

You've disabled statement logging altogether.  The zero value you were
directed to use is what causes everything to be logged.

David J.


Re: Logging

2018-06-12 Thread Andrew Bartley
On Wed, 13 Jun 2018 at 12:43 Laurenz Albe  wrote:

> Andrew Bartley wrote:
> > Can someone please tell me how to log the values being inserted in this
> example..
> >
> > 2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
> "api_consumers" ("consumer_id", "create_datetime") values ($1, $2).
> >
> > I have tried many different logging options and combinations.
>
> That should automatically be logged as a DETAIL message.
>
> log_min_duration_statement = 0
> log_min_error_statement = log or better
> log_min_messages = log or better
>
> That should do the trick.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

Thanks for the reply Laurenz,


Current settings

"PostgreSQL 9.6.7, compiled by Visual C++ build 1800, 64-bit"
Azure managed instance


log_min_duration_statement -1
log_min_error_statement DEBUG2
log_min_messages DEBUG2
log_statement ALL
log_error_verbosity VERBOSE

Still no joy.

Thanks Andrew


Re: Logging

2018-06-12 Thread Laurenz Albe
Andrew Bartley wrote:
> Can someone please tell me how to log the values being inserted in this 
> example..
> 
> 2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into 
> "api_consumers" ("consumer_id", "create_datetime") values ($1, $2).
> 
> I have tried many different logging options and combinations. 

That should automatically be logged as a DETAIL message.

log_min_duration_statement = 0
log_min_error_statement = log or better
log_min_messages = log or better

That should do the trick.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Logging

2018-06-12 Thread Andrew Bartley
Hi all,

Can someone please tell me how to log the values being inserted in this
example..

2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
"api_consumers" ("consumer_id", "create_datetime") values ($1, $2).

I have tried many different logging options and combinations.

Thanks

Andrew Bartley


Re: Feature request: separate logging

2018-02-27 Thread Greg Stark
On 18 November 2016 at 13:00, otheus uibk <otheus.u...@gmail.com> wrote:
> What I do today is to configure postgresql to write csvlogs. Stdout/stderr
> are captured by journald. A custom perl script with the Text::CSV module and
> tail -F semantics continuously processes the csvlog file, ignores query,
> dml, and detail log lines,  and sends the rest via syslog() (which journald
> then handles).

Postgres supports syslog directly, and syslogd supports directing logs
into various destinations. If you want to filter the logs and send
them to different servers, you can interpose a remote syslogd server to
do exactly what you want.
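
Concretely, a minimal sketch of pointing the server log at syslog (facility and ident are illustrative):

ALTER SYSTEM SET log_destination = 'syslog';
ALTER SYSTEM SET syslog_facility = 'local0';
ALTER SYSTEM SET syslog_ident = 'postgres';
SELECT pg_reload_conf();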

I think using a pipe or writing to a file and then having a program
parse the text file back into messages and try to parse the fields out
is fundamentally just a huge waste of programmer time and cycles as well
as error prone and susceptible to security problems. Much better to
just have options to have Postgres generate logs in the right format
to begin with and send them with the right protocol to begin with.

>1. Write to a csvlog with one set of selectors
>2. Write to stdout/stderr a different set of selectors (no statement, no
> autovacuum, etc)

Being able to send different messages to different places isn't a bad
idea. But everyone's going to have a different idea of what should go
in which bucket, so this will need more thought about the details.

Perhaps we could get away with just using the error class (the first
two characters of the sql error code, see
src/backend/utils/errcodes.h) but that doesn't help with warnings and
lower level messages. And some of those warnings are pretty important
operational tips like raising checkpoint parameters or autovacuum
parameters.

>2.1. has the kind of detail contained in the CSV.  Currently, the
> log-prefix option does not offer some of the information provided in the CSV
> logs. Really, the CSV log should simply be an implementation of the
> log-prefix.
>2.2. Collapses multi-lined queries into one line (newlines and tabs
> are escaped with backslashes or the x1B character).

CSV specifies exactly how to handle newlines and quoting and if you're
not happy with that format -- and I would agree with you -- there are
a myriad of other standard formats such as JSON and msgpack. There's
no need to invent an almost-CSV with most of the problems of CSV
except one. One question immediately arises -- how do you plan to
escape the x1B character? (And before you say it's unlikely to appear
in the data consider that one of the main uses for csv logs is to load
them into Postgres so...)

I feel your pain, I'm trying to get logstash or fluentd working here
too and I'm amazed they don't have any correct CSV parser. It seems
like such a basic requirement for something designed to handle logs so
it's quite mysterious to me. Both of them have the same dumpster fire
of a multiline parser that depends on recognizing continuation lines
with a regexp.


> Finally, if these changes can be implemented, is it impossible to backport
> them to prior versions, say 9.1 and up? If I wrote a patch, under what
> conditions would the patch be accepted for inclusion in official releases of
> older versions?

The only way to support older versions would be to publish it
separately as an extension like the jsonlog extension. There's a hook
for logging so it should be possible. But it might not be easy. The
existing jsonlog extension has some quirky bits to deal with messages
at startup for example.


-- 
greg



pg_dump and logging

2017-12-11 Thread marcelo
When pg_dump runs on a database, is it guaranteed that the log is fully 
reflected, or at least taken into account, in the dump?

TIA



Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-12-04 Thread David Pacheco
Thanks again for helping out.


On Mon, Dec 4, 2017 at 2:12 PM, Andres Freund  wrote:

> On 2017-12-04 13:57:52 -0800, David Pacheco wrote:
> > On Mon, Dec 4, 2017 at 12:23 PM, Andres Freund 
> wrote:
> > > FWIW, I'd like to see a report of this around the time the issue
> > > occurred before doing anything further here.
> > >
> >
> >
> > This failure begins when this process exits, so the best you could get is
> > memory in use immediately before it exited.  I obviously can't get that
> now
> > for the one instance I saw weeks ago, but maybe PostgreSQL could log
> > information about current memory usage when it's about to exit because of
> > ENOMEM?
>
> It already does so.
>


In that case, do you have the information you need in the log that I posted
earlier in the thread?
(
https://gist.githubusercontent.com/davepacheco/c5541bb464532075f2da761dd990a457/raw/2ba242055aca2fb374e9118045a830d08c590e0a/gistfile1.txt
)


What I was wondering about was the memory usage some time before it
> dies. In particular while the workload with the long query strings is
> running. ps output would be good, gdb'ing into the process and issuing
> MemoryContextStats(TopMemoryContext) would be better.
>


Would it make sense for PostgreSQL to periodically sample the memory used
by the current process, keep a small ringbuffer of recent samples, and then
log all of that when it exits due to ENOMEM?

One does not know that one is going to run into this problem before it
happens, and it may not happen very often.  (I've only seen it once.)  The
more PostgreSQL can keep the information needed to understand something
like this after the fact, the better -- particularly since the overhead
required to maintain this information should not be that substantial.


> That way if anybody hits a similar condition in the future, the
> > data will be available to answer your question.
> >
> > That said, I think the deadlock itself is pretty well explained by the
> data
> > we have already.
>
> Well, it doesn't really explain the root cause, and thus the extent of
> the fixes required. If the root cause is the amount of memory used by
> syslogger, we can remove the deadlock, but the experience is still going
> to be bad. Obviously better, but still bad.
>


Fair enough.  But we only know about one problem for sure, which is the
deadlock.  There may be a second problem of the syslogger using too much
memory, but I don't think there's any evidence to point in that direction.
Once the whole system is out of memory (and it clearly was, based on the
log entries), anything that tried to allocate would fail, and the log
reflects that a number of different processes did fail to allocate memory.
I'd help investigate this question, but I have no more data about it, and
I'm not sure when I will run into this again.

Thanks,
Dave


Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-12-04 Thread Andres Freund
On 2017-12-04 13:57:52 -0800, David Pacheco wrote:
> On Mon, Dec 4, 2017 at 12:23 PM, Andres Freund  wrote:
> > FWIW, I'd like to see a report of this around the time the issue
> > occurred before doing anything further here.
> >
> 
> 
> This failure begins when this process exits, so the best you could get is
> memory in use immediately before it exited.  I obviously can't get that now
> for the one instance I saw weeks ago, but maybe PostgreSQL could log
> information about current memory usage when it's about to exit because of
> ENOMEM?

It already does so.

What I was wondering about was the memory usage some time before it
dies. In particular while the workload with the long query strings is
running. ps output would be good, gdb'ing into the process and issuing
MemoryContextStats(TopMemoryContext) would be better.
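
On releases much newer than the 9.2 server in this report, the same context dump can be requested from SQL without gdb; a minimal sketch on PostgreSQL 14 or later (output goes to the server log):

SELECT pg_log_backend_memory_contexts(pid)
FROM pg_stat_activity
WHERE backend_type = 'client backend';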

> That way if anybody hits a similar condition in the future, the
> data will be available to answer your question.
> 
> That said, I think the deadlock itself is pretty well explained by the data
> we have already.

Well, it doesn't really explain the root cause, and thus the extent of
the fixes required. If the root cause is the amount of memory used by
syslogger, we can remove the deadlock, but the experience is still going
to be bad. Obviously better, but still bad.

Greetings,

Andres Freund



Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-12-04 Thread David Pacheco
On Mon, Dec 4, 2017 at 12:23 PM, Andres Freund  wrote:

> Hi,
>
> On 2017-11-20 11:12:08 -0800, David Pacheco wrote:
> > $ ps -opid,rss,vsz,args -p 37627
> >   PID  RSS  VSZ COMMAND
> > 37627 2980 14968 /opt/postgresql/9.2.4/bin/postgres -D /manatee/pg/data
> >
> > I'm not sure what we can infer from that, as this is a different system,
> > and the workload that generates the very large query strings only runs
> > occasionally.  Those strings are also not logged unless something's gone
> > wrong.
>
> FWIW, I'd like to see a report of this around the time the issue
> occurred before doing anything further here.
>


This failure begins when this process exits, so the best you could get is
memory in use immediately before it exited.  I obviously can't get that now
for the one instance I saw weeks ago, but maybe PostgreSQL could log
information about current memory usage when it's about to exit because of
ENOMEM?  That way if anybody hits a similar condition in the future, the
data will be available to answer your question.

That said, I think the deadlock itself is pretty well explained by the data
we have already.

-- Dave


Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-12-04 Thread David Pacheco
On Mon, Nov 20, 2017 at 11:12 AM, David Pacheco  wrote:

> I understand if the community isn't interested in fixing this case if
> other users aren't seeing it much, but surely it's still a bug that this
> unusual case can result in a deadlock?
>


I've filed bug 14945 to cover this issue:
https://www.postgresql.org/message-id/20171204201055.27108.18283%40wrigleys.postgresql.org

While the underlying cause in this case was likely operator error (i.e.,
too little memory provisioned), the deadlock seems clearly like an issue
that could be hit by others and requires operator intervention to restore
service.  I understand it might still be sufficiently rare, and the fix
sufficiently complex, that the community doesn't end up prioritizing it.

Thanks,
Dave


Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-20 Thread David Pacheco
Responding to several points below.

Tom Lane wrote:
> David's report isn't too clear: did the syslogger process actually run
> out of memory and exit of its own volition after an ENOMEM, or did it get
> killed by the dreaded OOM killer?  In either case, it's unclear whether
> it was really using an excessive amount of memory.  We have not heard
> reports suggesting a memory leak in the syslogger, but maybe there is
> one under unusual circumstances?

I'm running on illumos, where there is no OOM killer.

I expect what happened is that the syslogger process attempted to allocate
memory, failed because the system was low, and explicitly exited.  That's
consistent with an exited process, no core file generated, and the "FATAL"
"out
of memory" entries in the log I posted.

Of course, that doesn't mean the syslogger was using much memory.  It means
only
that it attempted to allocate any memory in a context where it could not
handle
failing.


Andres Freund wrote:
> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrained that processes barely using memory
> get killed.

Again, there's no OOM killer here, so the process has the opportunity to
respond
differently.  I can understand the database not functioning well when the
system
is out of memory, but this half-functioning, half-deadlocked state is quite
disruptive.  By comparison, while not ideal, if the syslogger had raised
SIGABRT
instead, that would have been significantly better for my use-case.  Our
process
manager would have terminated all of the processes associated with the
database
and restarted it.


Tom Lane wrote:
> Hm, so that's another angle David didn't report on: is it possible that
> his workload could have resulted in a very large volume of incomplete
> in-progress log messages?

Yes.  I mentioned in my Nov 6 mail that large log messages over a short
period
appear to have been a major contributing factor:

> Here are all of the entries in the PostgreSQL log from 23:19:12 until the
top of
> the next hour:
>
https://gist.githubusercontent.com/davepacheco/c5541bb464532075f2da761dd990a457/raw/2ba242055aca2fb374e9118045a830d08c590e0a/gistfile1.txt
...
> There are also many log entries for some very long SQL queries.  I'm sure
that
> contributed to this problem by filling up the pipe.  I was able to
extract the
> contents of the pipe while the system was hung, and it was more of these
giant
> query strings.



Andres Freund wrote:
> Note that there's plenty of cases where you could run into this even
> without being unable to fork new processes. You'd e.g. could also run
> into this while logging the exit of some other subprocess or such,
> there's enough ereports in postmaster.

This is a critical point.  As far as I can tell, all that's necessary for
this
deadlock to occur is:

- the syslogger is unable to make forward progress (e.g., because it
segfaulted)
- enough other processes write to the pipe that it fills up before the
  postmaster can restart the syslogger
- the postmaster attempts to write to the log as part of any of its signal
  handling operations

It seems to me that the same thing can happen if there were a bug in the
syslogger that resulted in a segfault around the same time that the
postmaster
attempted to start an autovacuum worker, for example (if there was also
reasonably high log traffic).


Thanks,
Dave