Am I in the same transaction block in complex PLPGSQL?

2022-03-11 Thread Durumdara
Hello!

A critical question for me because of future planning.

In autocommit mode, when I run a simple UPDATE or SELECT, it is one
transaction, so if something fails, the whole modification is rolled back
(there is no "half update" or "only the first record updated").

What happens with complex statements, like a PL/pgSQL stored procedure or
a trigger?

Pseudo:

BEGIN
   select a from b ... into X;
   update b set a = X + 1 where ...;
   update b set mod_date = current_timestamp where ...;
   IF bla THEN raise exception 'Wrong'; END IF;
   select x from y;
   call stored_procedure_N;
   ...
   update b set mod_date = current_timestamp where ...;
END;

Is it also one statement logically (even if it is recursive)?
Is this executed in one transaction? Or is each substatement a new
transaction?

So if half of it has executed and we get an exception in the middle:
a.) is the whole thing rolled back,
b.) or is it half finished, with only the last substatement rolled back?

The main question is:
Do I need to embed the whole call in an explicitly started transaction block
to keep consistency, or is that unnecessary because PG handles it as one big
statement with many substatements?

This is also important for triggers...
Triggers (insert/update/delete) can also start subprocedures with
substatements, so if I don't wrap them in an explicitly started transaction,
maybe I will get the wrong result (not ALL OK/ALL FAILS).

Thank you for your help!

Best regards
   dd


Postgres query

2022-03-11 Thread Ian Dauncey
Hi All

Can anyone assist in shedding some light here?

We are getting this query popping up in our postgresql log file at the same
time as the connections to the databases start increasing.
Not sure what is initiating this query, but we get around a hundred per
second until we restart our applications.
Any help will be appreciated.

The Query :

"select $1[s], s - pg_catalog.array_lower($1,1) + 1
  from pg_catalog.generate_series(pg_catalog.array_lower($1,1),
   pg_catalog.array_upper($1,1),1) as g(s)"

Regards
Ian



Re: Postgres query

2022-03-11 Thread hubert depesz lubaczewski
On Fri, Mar 11, 2022 at 10:02:39AM +, Ian Dauncey wrote:
> Can anyone assist in shedding some light here.
> We getting this query popping up in our postgresql log file at the same time 
> as the connections to the databases starts increasing.
> Not sure what is initiating this query, but we get around a hundred per 
> second until we restart our applications.
> Any help will be appreciated.
> "select $1[s], s - pg_catalog.array_lower($1,1) + 1
>   from pg_catalog.generate_series(pg_catalog.array_lower($1,1),
>pg_catalog.array_upper($1,1),1) as g(s)"

The query simply unpacks the given array.

For example, assuming array $1 is '{5,10,15}' it will yield:

 ?column? │ ?column?
──────────┼──────────
        5 │        1
       10 │        2
       15 │        3
(3 rows)

Basically this is the old way of unpacking an array; these days it would
normally be written as:

$ select * from unnest('{5,10,15}'::int4[]) with ordinality;
 unnest │ ordinality
────────┼────────────
      5 │          1
     10 │          2
     15 │          3
(3 rows)

What is running it is hard to say; the query doesn't strike me as
something that any db driver would call on its own.

depesz




Re: foreign key on delete cascade order?

2022-03-11 Thread George Woodring
On Thu, Mar 10, 2022 at 12:38 PM Tom Lane  wrote:

> With the amount of detail you've provided (viz: none)
>

This is an example of the error we are seeing from our application. Sorry,
I cannot find the postgresql log entry for this one.

2020-11-30T13:16:08,835 ERROR [foo/bar/01EF2.W01E/55159]
GlobalControllerAdvice: Caught exception (
https://noc.iglass.net/networkMachDelete.htm
):
org.springframework.dao.DeadlockLoserDataAccessException:
PreparedStatementCallback; SQL [DELETE FROM mach WHERE (machid=?)]; ERROR:
deadlock detected
  Detail: Process 27442 waits for ShareLock on transaction 1895244982;
blocked by process 21064.
Process 21064 waits for ShareLock on transaction 1895245026; blocked by
process 27442.
  Hint: See server log for query details.
  Where: while deleting tuple (5,18) in relation "status"
SQL statement "DELETE FROM ONLY "public"."status" WHERE $1
OPERATOR(pg_catalog.=) "pollid""; nested exception is
org.postgresql.util.PSQLException: ERROR: deadlock detected
  Detail: Process 27442 waits for ShareLock on transaction 1895244982;
blocked by process 21064.
Process 21064 waits for ShareLock on transaction 1895245026; blocked by
process 27442.
  Hint: See server log for query details.
  Where: while deleting tuple (5,18) in relation "status"
SQL statement "DELETE FROM ONLY "public"."status" WHERE $1
OPERATOR(pg_catalog.=) "pollid""

The tables involved are:

CREATE TABLE mach (
    machid serial,
    CONSTRAINT mach_pkey PRIMARY KEY (machid), ... );
CREATE TABLE pollgrpinfo (
    pollgrpid serial,
    CONSTRAINT pollgrpinfo_pkey PRIMARY KEY (pollgrpid),
    machid int4 NOT NULL,
    CONSTRAINT mach_exists FOREIGN KEY (machid)
        REFERENCES mach ON DELETE CASCADE, ... );
CREATE TABLE poll (
    pollid serial,
    CONSTRAINT poll_pkey PRIMARY KEY (pollid),
    pollgrpid int4 NOT NULL,
    CONSTRAINT pollgrp_exists FOREIGN KEY (pollgrpid)
        REFERENCES pollgrpinfo (pollgrpid) ON DELETE CASCADE, ... );
CREATE TABLE status (
    statusid serial,
    CONSTRAINT status_pkey PRIMARY KEY (statusid),
    pollid int4 NOT NULL,
    CONSTRAINT poll_exists FOREIGN KEY (pollid)
        REFERENCES poll ON DELETE CASCADE, ... );

We are updating the entire status table every 5 minutes with:

BEGIN;
UPDATE status SET () WHERE pollid = $1;
COMMIT;

The issue arises when someone does a DELETE during the UPDATE of status:

DELETE FROM mach WHERE machid=$1;

I don't know if this sheds any more light on it.

George
iGLASS Networks


Re: Am I in the same transaction block in complex PLPGSQL?

2022-03-11 Thread Tom Lane
Durumdara  writes:
> In autocommit mode, when I start a simple update or select, it is one
> transaction, so if something fails, the whole modification is rolled back
> (there is no "half update", or "only first record updated").
> What will happen with complex statements, like PLPGSQL stored procedure, or
> trigger?

It's one transaction unless you take steps to make it something else.
In plpgsql, you can make "sub transactions" with BEGIN/EXCEPTION blocks
(an exception rolls back only side-effects that happened since BEGIN).
Or, if it's a procedure not a function, you can explicitly COMMIT
what's been done so far, and then your next action starts a new
transaction.  But without such measures, all actions executed by
a function are part of the same transaction as the calling statement.
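As a sketch of the BEGIN/EXCEPTION option (the table t here is hypothetical,
not from the thread): the exception undoes only what happened since the
block's BEGIN, while the outer transaction survives and commits normally.

```sql
-- Hypothetical table: CREATE TABLE t (v int);
DO $$
BEGIN
  INSERT INTO t VALUES (1);      -- survives: belongs to the outer transaction
  BEGIN
    INSERT INTO t VALUES (2);    -- rolled back by the exception below
    RAISE EXCEPTION 'oops';
  EXCEPTION WHEN others THEN
    RAISE NOTICE 'sub-transaction rolled back: %', SQLERRM;
  END;
  -- Here t contains only the row (1); the surrounding transaction
  -- is still alive and commits as usual.
END$$;
```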

regards, tom lane




Re: Am I in the same transaction block in complex PLPGSQL?

2022-03-11 Thread Durumdara
Dear Tom!

OK, thank you very much for the answer! I have the same experience with txid:

DO $$DECLARE tx bigint;
BEGIN
  select txid_current() into tx;
  raise notice 'TXID: %', tx;
  insert into a values (26);
  select txid_current() into tx;
  raise notice 'TXID: %', tx;
  insert into a values (27);
  select txid_current() into tx;
  raise notice 'TXID: %', tx;
  insert into a values (28);
  select txid_current() into tx;
  raise notice 'TXID: %', tx;
  call test_trx_value(30);
  select txid_current() into tx;
  raise notice 'TXID: %', tx;
END$$;

All of them are the same.

Then back to my previous question:

> How to log to client (port native code to stored procedure) + to db tables

The main problem is that the code is client-side now: a Win32 app with many
logs. Some of them are stored in database log tables so they are easy to
retrieve.

But when everything is in the same transaction, I can't log to tables in
*PGSQL 9.6* - only to the client with "notices".

F. e. I have a stored procedure which has an exception handler block to
catch the log and the error, and pass them back to the caller.
On exception it catches the error and returns with error + log + stores
them in a table record.

But the caller must also raise an exception, to roll back and stop. At that
point the exception rolls back my table-level logs too (the log table rows).
Only client notices can tell me what happened.

Same problem if the process started by a trigger (for example).

So if any problem happens, I am left with no clue why it happened. I can't
see it from the db log tables, because the posted log records vanish on
rollback!

I have access only to the database - not to the server (and its real logs).

Hmmm

Do you have any idea? Or must we upgrade to at least PGSQL 11 to get access
to transaction handling, so we can post the logs through another
transaction?

F. e.

...
call SubProc(oError, oLog);
if oError > '' then
   rollback;   <--- on error we must roll back first
end if;
insert into mylog values (oError, oLog);   <--- post the log every time
if oError > '' then
   commit;   <--- on error we save the logs with a commit
   raise exception '%', oError;   <--- and raise the error again
end if;
...

Thanks for it!

dd



Tom Lane  ezt írta (időpont: 2022. márc. 11., P, 16:01):

> Durumdara  writes:
> > In autocommit mode, when I start a simple update or select, it is one
> > transaction, so if something fails, the whole modification is rolled back
> > (there is no "half update", or "only first record updated").
> > What will happen with complex statements, like PLPGSQL stored procedure,
> or
> > trigger?
>
> It's one transaction unless you take steps to make it something else.
> In plpgsql, you can make "sub transactions" with BEGIN/EXCEPTION blocks
> (an exception rolls back only side-effects that happened since BEGIN).
> Or, if it's a procedure not a function, you can explicitly COMMIT
> what's been done so far, and then your next action starts a new
> transaction.  But without such measures, all actions executed by
> a function are part of the same transaction as the calling statement.
>
> regards, tom lane
>


COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread Dominique Devienne
Hi.

It's my first time using COPY TO, and my first time using the built-in CSV
support. It performs well. BUT...

The code below (real code, but using a custom libpq wrapper lib) is run on
a few tables, with unit tests that verify the number of lines of the output
file. For a few of those tables, there's a mismatch: the output from
PostgreSQL "has too many lines". I've tracked these to text values in the DB
with embedded newlines. These values are 'normal'. I'm not used to CSV, but
I suppose such newlines must be encoded, perhaps as \n, since AFAIK CSV
needs to be 1 line per row, no?

So how is one supposed to configure the CSV output for a DB with embedded
newlines?

Thanks, --DD

auto rset = my_exec(*conn_, "COPY MY_TAB TO STDOUT WITH (FORMAT CSV, HEADER)");
if (rset.status() != PGRES_COPY_OUT) {
    raise("CSV Export via SQL COPY error: ", rset.error_msg());
}

std::ofstream os(file_name);
bool done = false;
while (!done) {
    auto buf = pq::CopyOutBuffer::get(*conn_);

    switch (buf.status()) {
    case pq::CopyOutStatus::eWait:  assert(false); continue;
    case pq::CopyOutStatus::eDone:  done = true; continue;
    case pq::CopyOutStatus::eError: raise("PQgetCopyData: {}", conn_->error_msg());
    case pq::CopyOutStatus::eData:  break; // process it below
    }

    // Each buffer seems to be a single line of output,
    // with a Unix newline at the end, on all platforms.
    os.write(buf.data(), buf.size());
}

os.close();




Re: COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread David G. Johnston
On Friday, March 11, 2022, Dominique Devienne  wrote:

> I've tracked these to text values in the DB with embedded newlines.
> These values are 'normal'. I'm not used to CSV, but I suppose such newlines
> must be encoded, perhaps as \n, since AFAIK CSV needs to be 1 line per
> row, no?


I haven't tested, but the docs indicate, and I do recall from memory, that
PostgreSQL does indeed use \r and \n in content, so what you are describing
would be a bug if proven true.  Can you produce something that only uses
psql that shows this problem?  If not, whatever "auto rset = my_exec(…)" is
doing is probably at fault.

David J.


Re: COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread Daniel Verite
Dominique Devienne wrote:

> These values are 'normal'. I'm not use to CSV, but I suppose
> such newlines
> must be encoded, perhaps as \n, since AFAIK CSV needs to be 1 line per row,
> no?

No, but such fields must be enclosed by double quotes, as documented
in RFC 4180 https://datatracker.ietf.org/doc/html/rfc4180

Consider this output:

psql> COPY (values (1, E'ab\ncd'), (2,'efgh')) TO STDOUT CSV;
1,"ab
cd"
2,efgh

That's 2 records on 3 lines.
If you feed this to a parser and it chokes on it, it means that it's
not a valid CSV parser.
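Any parser that implements RFC 4180 handles the quoted newline. As an
illustration (not from the original mail), Python's csv module parses the
3-line output above into exactly 2 records:

```python
import csv
import io

# The COPY ... CSV output from the example above: 2 records on 3 physical lines.
raw = '1,"ab\ncd"\n2,efgh\n'

# csv.reader treats the newline inside the double-quoted field as data.
rows = list(csv.reader(io.StringIO(raw)))
print(rows)  # [['1', 'ab\ncd'], ['2', 'efgh']]
```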

Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




Re: COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread Dominique Devienne
On Fri, Mar 11, 2022 at 7:38 PM Daniel Verite  wrote:
> Dominique Devienne wrote:
> > These values are 'normal'. I'm not used to CSV, but I suppose such newlines
> > must be encoded, perhaps as \n, since AFAIK CSV needs to be 1 line per row,
> > no?
>
> No, but such fields must be enclosed by double quotes, as documented
> in RFC 4180 https://datatracker.ietf.org/doc/html/rfc4180

Hi Daniel. OK, good to know, thanks.

> Consider this output:
> psql> COPY (values (1, E'ab\ncd'), (2,'efgh')) TO STDOUT CSV;
> 1,"ab
> cd"
> 2,efgh

yes, this is consistent with what I am seeing in the debugger.
The value is indeed double-quoted, and definitely contains a _raw_ '\n' newline,
and not an encoded "backslash then n", as David was writing.

> That's 2 records on 3 lines.
> If you feed this to a parser and it chokes on it, it means that it's not a 
> valid CSV parser.

I never claimed that parser was a CSV parser :). It's a naive "wc -l"
equivalent written by someone else. And it's comparing PostgreSQL's CSV
output to some old legacy home-grown "CSV" output, which does encode
newlines as '\\' and 'n', and which most likely is not RFC compliant
(I'm not surprised! :))).

In my case, backward compatibility is more important than "compliancy" (if
that's a word!), so I can easily do that "\n" encoding myself, as a
post-processing step on the buffer I get back.
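That post-processing step could be sketched as follows (Python here purely
for illustration; the real code is C++, and the function name is made up):
parse the RFC 4180 output and re-emit each record on one physical line with
newlines encoded as backslash-n:

```python
import csv
import io

def escape_newlines(csv_text: str) -> str:
    """Re-emit RFC 4180 CSV so embedded newlines become a literal
    backslash-n, producing the legacy one-line-per-record pseudo-CSV."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator='\n')
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow([field.replace('\n', '\\n') for field in row])
    return out.getvalue()

# 2 records on 3 physical lines in, 2 records on 2 physical lines out.
print(escape_newlines('1,"ab\ncd"\n2,efgh\n'))
```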

Thank you for the help. --DD

PS: And David, no, it's not my wrapper that's at fault here :). It's a thin
wrapper that's just easier, terser, and safer (RAII) to use compared to
naked libpq.




Re: COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread Daniel Verite
Dominique Devienne wrote:


> so I can easily do that "\n" encoding myself, as a post-processing on
> the buffer I get back.

Alternatively, it might be easier to use the default TEXT format of COPY
rather than CSV, as the TEXT format already produces \n for line feeds,
along with half a dozen other special backslash sequences.
See https://www.postgresql.org/docs/current/sql-copy.html
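For comparison, here is what the earlier example values should look like in
TEXT format (a sketch of the expected output per the COPY docs): one record
per physical line, tab-separated, with the newline escaped as \n:

```sql
psql> COPY (values (1, E'ab\ncd'), (2,'efgh')) TO STDOUT;
1	ab\ncd
2	efgh
```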


Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




Re: COPY TO STDOUT WITH (FORMAT CSV, HEADER), and embedded newlines

2022-03-11 Thread Dominique Devienne
On Fri, Mar 11, 2022 at 8:32 PM Daniel Verite  wrote:
> Dominique Devienne wrote:
> > so I can easily do that "\n" encoding myself, as a post-processing on
> > the buffer I get back.
>
> Alternatively, it might be easier to use the default TEXT format of
> COPY rather than CSV, as the TEXT format already produces \n for
> line feeds, along with  half a dozen other special backslashes sequences.
> See https://www.postgresql.org/docs/current/sql-copy.html

I actually just submitted CSV+PostProcessing :)

But sure, if TEXT does the kind of pseudo-CSV I need, I'd change it to use it.
I'll look into it next week. Thanks again Daniel. --DD




Re: foreign key on delete cascade order?

2022-03-11 Thread Michael Lewis
>
> We are updating the entire status table every 5 minutes with
> BEGIN;
> UPDATE status SET () WHERE pollid = $1;
> COMMIT;
>
> The issue arises when someone does a DELETE during the UPDATE of status
> DELETE FROM mach WHERE machid=$1;
>

Could you set lock_timeout and lock the table explicitly in SHARE UPDATE
EXCLUSIVE mode (pretty sure that would be the proper level), then retry if
it fails because a delete is already going on?

Also, are you confident that before you call 'begin' to do the update, you
are not already in a transaction which might have some lock on row(s) in
mach, or one of the other tables involved?
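A sketch of that shape (whether SHARE UPDATE EXCLUSIVE is really the right
level needs verifying, and the retry loop would live in the application):

```sql
BEGIN;
SET LOCAL lock_timeout = '2s';  -- fail fast instead of waiting into a deadlock
LOCK TABLE status IN SHARE UPDATE EXCLUSIVE MODE;
UPDATE status SET ... WHERE pollid = $1;
COMMIT;
-- On a lock_timeout error: ROLLBACK, then retry from the application.
```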


Re: Am I in the same transaction block in complex PLPGSQL?

2022-03-11 Thread David G. Johnston
On Fri, Mar 11, 2022 at 9:24 AM Durumdara  wrote:

> Do you have any idea? Or we must upgrade to min. PGSQL 11 for access
> transaction handling and could post the logs through another transaction?
>

You really do need to open a second session somehow if you want the first
session to be able to fail while still allowing for stuff happening during
its execution to commit.

You can either do this in client-side code by simply opening up two
connections and doing the main work on one while doing the logging on the
other.

As far as I know to do this in-database you would need to use the dblink
extension:

https://www.postgresql.org/docs/current/dblink.html

While much of that module's purpose was subsumed by the addition of FOREIGN
DATA WRAPPERS, the ability to have a transaction-independent connection back
into the database was not one of those things - using FDW, the remote work
is done in the same transaction context as the local database.
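A minimal dblink-based sketch (the connection string, the mylog table, and
the function name are all made-up assumptions about the environment):

```sql
-- Requires: CREATE EXTENSION dblink;
-- Writes a log row over a *separate* connection, so it survives
-- a rollback of the calling transaction.
CREATE OR REPLACE FUNCTION log_autonomous(msg text) RETURNS void AS $$
BEGIN
  PERFORM dblink_exec(
    'dbname=' || current_database(),   -- assumes local peer/trust auth
    format('INSERT INTO mylog(message) VALUES (%L)', msg));
END;
$$ LANGUAGE plpgsql;
```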

David J.