Re: PostgreSQL 17 Beta 1 release announcement draft

2024-05-19 Thread Erik Rijkers

On 5/19/24 23:34, Jonathan S. Katz wrote:

On 5/15/24 9:45 PM, Jonathan S. Katz wrote:

Hi,

Attached is a copy of the PostgreSQL 17 Beta 1 release announcement 


'This release introduces adds an interface'  should be:
'This release adds an interface'
   (or 'introduces'; just not both...)

Thanks,

Erik Rijkers





Re: commitfest.postgresql.org is no longer fit for purpose

2024-05-16 Thread Erik Rijkers

On 5/16/24 20:30, Robert Haas wrote:

Hi,

The original intent of CommitFests, and of commitfest.postgresql.org
by extension, was to provide a place where patches could be registered
to indicate that they needed to be reviewed, thus enabling patch
authors and patch reviewers to find each other in a reasonably
efficient way. I don't think it's working any more. I spent a good


Hi,

Perhaps it would be an idea to let patches 'expire' automatically unless 
they are 'rescued' (= given another year) by a committer or commitfest 
manager (or perhaps a somewhat wider group, but not too many). 
Expiration after, say, one year would force patch authors to mount a 
credible defense for their patch in order to get their work rescued or 
reinstated/resubmitted.


Just a thought that has crossed my mind a few times already. It's not 
very sympathetic, but it might help keep the list smaller.
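
For illustration only, the proposed rule could be as simple as a date comparison; a hedged sketch (the function and field names here are hypothetical, not anything commitfest.postgresql.org actually has):

```python
# Hypothetical sketch of the proposed expiry rule: a patch lapses one
# year after it was last "rescued", unless a committer or commitfest
# manager renews it.  Names are illustrative only.
from datetime import date, timedelta

RESCUE_PERIOD = timedelta(days=365)

def expired(last_rescued: date, today: date) -> bool:
    """True when the patch's rescue period has run out."""
    return today - last_rescued > RESCUE_PERIOD

print(expired(date(2023, 5, 1), date(2024, 5, 16)))   # True: over a year old
print(expired(date(2024, 1, 1), date(2024, 5, 16)))   # False: recently rescued
```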


Erik Rijkers





Re: 2024-05-09 release announcement draft

2024-05-06 Thread Erik Rijkers

On 5/6/24 19:44, Jonathan S. Katz wrote:

Hi,

Please find the draft of the 2024-05-09 release announcement.



'procedures that returns'  should be
'procedures that return'





libpq.sgml: "server ejectes GSS" -> server rejects GSS

2024-04-08 Thread Erik Rijkers

Typo. fix:

-attempted first. If the server ejectes GSS encryption, SSL is
+attempted first. If the server rejects GSS encryption, SSL is

Erik--- doc/src/sgml/libpq.sgml.orig	2024-04-09 06:28:36.254541932 +0200
+++ doc/src/sgml/libpq.sgml	2024-04-09 06:30:55.818541454 +0200
@@ -1820,7 +1820,7 @@

 Note that if gssencmode is set
 to prefer, a GSS connection is
-attempted first. If the server ejectes GSS encryption, SSL is
+attempted first. If the server rejects GSS encryption, SSL is
 negotiated over the same TCP connection using the traditional postgres
 protocol, regardless of sslnegotiation. In other
 words, the direct SSL handshake is not used, if a TCP connection has


Re: remaining sql/json patches

2024-03-04 Thread Erik Rijkers

On 3/4/24 10:40, Amit Langote wrote:

Hi Jian,

Thanks for the reviews and sorry for the late reply. Replying to all
emails in one.


> [v40-0001-Add-SQL-JSON-query-functions.patch]
> [v40-0002-Show-function-name-in-TableFuncScan.patch]
> [v40-0003-JSON_TABLE.patch]

In my hands (applying with patch), the patches, esp. 0001, do not apply. 
 But I see the cfbot builds without problem, so maybe just ignore these 
FAILED lines.  Better to get them merged - then I can test there...


Erik


checking file doc/src/sgml/func.sgml
checking file src/backend/catalog/sql_features.txt
checking file src/backend/executor/execExpr.c
Hunk #1 succeeded at 48 with fuzz 2 (offset -1 lines).
Hunk #2 succeeded at 88 (offset -1 lines).
Hunk #3 succeeded at 2419 (offset -1 lines).
Hunk #4 succeeded at 4195 (offset -1 lines).
checking file src/backend/executor/execExprInterp.c
Hunk #1 succeeded at 72 (offset -1 lines).
Hunk #2 succeeded at 180 (offset -1 lines).
Hunk #3 succeeded at 485 (offset -1 lines).
Hunk #4 succeeded at 1560 (offset -1 lines).
Hunk #5 succeeded at 4242 (offset -1 lines).
checking file src/backend/jit/llvm/llvmjit_expr.c
checking file src/backend/jit/llvm/llvmjit_types.c
checking file src/backend/nodes/makefuncs.c
Hunk #1 succeeded at 856 (offset -1 lines).
checking file src/backend/nodes/nodeFuncs.c
Hunk #1 succeeded at 233 (offset -1 lines).
Hunk #2 succeeded at 517 (offset -1 lines).
Hunk #3 succeeded at 1019 (offset -1 lines).
Hunk #4 succeeded at 1276 (offset -1 lines).
Hunk #5 succeeded at 1617 (offset -1 lines).
Hunk #6 succeeded at 2381 (offset -1 lines).
Hunk #7 succeeded at 3429 (offset -1 lines).
Hunk #8 succeeded at 4164 (offset -1 lines).
checking file src/backend/optimizer/path/costsize.c
Hunk #1 succeeded at 4878 (offset -1 lines).
checking file src/backend/optimizer/util/clauses.c
Hunk #1 succeeded at 50 (offset -3 lines).
Hunk #2 succeeded at 415 (offset -3 lines).
checking file src/backend/parser/gram.y
checking file src/backend/parser/parse_expr.c
checking file src/backend/parser/parse_target.c
Hunk #1 succeeded at 1988 (offset -1 lines).
checking file src/backend/utils/adt/formatting.c
Hunk #1 succeeded at 4465 (offset -1 lines).
checking file src/backend/utils/adt/jsonb.c
Hunk #1 succeeded at 2159 (offset -4 lines).
checking file src/backend/utils/adt/jsonfuncs.c
checking file src/backend/utils/adt/jsonpath.c
Hunk #1 FAILED at 68.
Hunk #2 succeeded at 1239 (offset -1 lines).
1 out of 2 hunks FAILED
checking file src/backend/utils/adt/jsonpath_exec.c
Hunk #1 succeeded at 229 (offset -5 lines).
Hunk #2 succeeded at 2866 (offset -5 lines).
Hunk #3 succeeded at 3751 (offset -5 lines).
checking file src/backend/utils/adt/ruleutils.c
Hunk #1 succeeded at 474 (offset -1 lines).
Hunk #2 succeeded at 518 (offset -1 lines).
Hunk #3 succeeded at 8303 (offset -1 lines).
Hunk #4 succeeded at 8475 (offset -1 lines).
Hunk #5 succeeded at 8591 (offset -1 lines).
Hunk #6 succeeded at 9808 (offset -1 lines).
Hunk #7 succeeded at 9858 (offset -1 lines).
Hunk #8 succeeded at 10039 (offset -1 lines).
Hunk #9 succeeded at 10909 (offset -1 lines).
checking file src/include/executor/execExpr.h
checking file src/include/nodes/execnodes.h
checking file src/include/nodes/makefuncs.h
checking file src/include/nodes/parsenodes.h
checking file src/include/nodes/primnodes.h
checking file src/include/parser/kwlist.h
checking file src/include/utils/formatting.h
checking file src/include/utils/jsonb.h
checking file src/include/utils/jsonfuncs.h
checking file src/include/utils/jsonpath.h
checking file src/interfaces/ecpg/preproc/ecpg.trailer
checking file src/test/regress/expected/sqljson_queryfuncs.out
checking file src/test/regress/parallel_schedule
checking file src/test/regress/sql/sqljson_queryfuncs.sql
checking file src/tools/pgindent/typedefs.list




Re: remaining sql/json patches

2023-12-07 Thread Erik Rijkers

On 12/7/23 10:32, Amit Langote wrote:

On Thu, Dec 7, 2023 at 12:26 AM Alvaro Herrera  wrote:

On 2023-Dec-06, Amit Langote wrote:

I think I'm inclined toward adapting the LA-token fix (attached 0005),

This one needs to be fixed, so done.

On Thu, Dec 7, 2023 at 5:25 PM Peter Eisentraut  wrote:

Here are a couple of small patches to tidy up the parser a bit in your
v28-0004 (JSON_TABLE) patch.  It's not a lot; the rest looks okay to me.


Thanks Peter.  I've merged these into 0004.


Hm, this set doesn't apply for me. 0003 gives an error, see below (sorry 
for my interspersed bash echoing - it seemed best to leave it in).

(I'm using patch; that should be all right, no? Am I doing it wrong?)

-- [2023.12.07 11:29:39 json_table2] patch 1 of 5 (json_table2) 
[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0001-Add-soft-error-handling-to-some-expression-nodes.patch]

 rv [] # [ok]
OK, patch returned [0] so now break and continue (all is well)
-- [2023.12.07 11:29:39 json_table2] patch 2 of 5 (json_table2) 
[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0002-Add-soft-error-handling-to-populate_record_field.patch]

 rv [0] # [ok]
OK, patch returned [0] so now break and continue (all is well)
-- [2023.12.07 11:29:39 json_table2] patch 3 of 5 (json_table2) 
[/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0003-SQL-JSON-query-functions.patch]

 rv [0] # [ok]
File src/interfaces/ecpg/test/sql/sqljson_queryfuncs: git binary diffs 
are not supported.
 patch  apply failed: rv = 1   patch file: 
/home/aardvark/download/pgpatches/0170/json_table/20231207/v29-0003-SQL-JSON-query-functions.patch

 rv [1] # [ok]
The text leading up to this was:
--
|From 712b95c8a1a3dd683852ac151e229440af783243 Mon Sep 17 00:00:00 2001
|From: Amit Langote 
|Date: Tue, 5 Dec 2023 14:33:25 +0900
|Subject: [PATCH v29 3/5] SQL/JSON query functions
|MIME-Version: 1.0
|Content-Type: text/plain; charset=UTF-8
|Content-Transfer-Encoding: 8bit
|
|This introduces the SQL/JSON functions for querying JSON data using
|jsonpath expressions. The functions are:
|
|JSON_EXISTS()
|JSON_QUERY()
|JSON_VALUE()
|


Erik
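
The failure above ('git binary diffs are not supported') is a limitation of patch(1) itself; git apply understands the git patch format, including binary diffs, and works even outside a repository. A minimal sketch in a throwaway directory (file names hypothetical):

```shell
# patch(1) chokes on git binary diffs, but "git apply" handles the
# git patch format and can be used outside a repository as well.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'hello\n' > demo.txt

# A tiny unified diff with git-style a/ b/ prefixes:
cat > fix.patch <<'EOF'
--- a/demo.txt
+++ b/demo.txt
@@ -1 +1 @@
-hello
+hello, world
EOF

git apply fix.patch        # rough equivalent of: patch -p1 < fix.patch
cat demo.txt               # prints: hello, world
```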



--
Thanks, Amit Langote
EDB: http://www.enterprisedb.com

[1] 
https://www.postgresql.org/message-id/CA%2BHiwqGsByGXLUniPxBgZjn6PeDr0Scp0jxxQOmBXy63tiJ60A%40mail.gmail.com





Re: proposal: possibility to read dumped table's name from file

2023-11-21 Thread Erik Rijkers

On 11/21/23 22:10, Daniel Gustafsson wrote:

On 20 Nov 2023, at 06:20, Pavel Stehule  wrote:





The attached is pretty close to a committable patch IMO, review is welcome on
both the patch and commit message.  I tried to identify all reviewers over the
past 3+ years but I might have missed someone.


I've tested this, albeit mostly in the initial iterations. (*shrug*, but 
a mention is nice)


Erik Rijkers



--
Daniel Gustafsson







Re: remaining sql/json patches

2023-11-15 Thread Erik Rijkers

On 11/15/23 14:00, Amit Langote wrote:

Hi,


[..]


Attached updated patch.  The version of 0001 that I posted on Oct 11
to add the error-safe version of CoerceViaIO contained many
unnecessary bits that are now removed.

--
Thanks, Amit Langote
EDB: http://www.enterprisedb.com


> [v24-0001-Add-soft-error-handling-to-some-expression-nodes.patch]
> [v24-0002-Add-soft-error-handling-to-populate_record_field.patch]
> [v24-0003-SQL-JSON-query-functions.patch]
> [v24-0004-JSON_TABLE.patch]
> [v24-0005-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]

Hi Amit,

Here is a statement that seems to gobble up all memory and totally lock 
up the machine. No Ctrl-C; only a power reset gets me out of it. 
It was in one of my tests, so it used to work:


select json_query(
jsonb '"[3,4]"'
  , '$[*]' returning bigint[] empty object on error
);

Can you have a look?
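
When reproducing statements suspected of unbounded memory growth, it can help to cap the test process's address space first, so the allocation fails instead of freezing the machine. A hedged sketch; a Python one-liner stands in for the runaway query here, since the real psql invocation needs a live server:

```shell
# Run the suspect workload under an address-space cap: with the cap in
# place the oversized allocation fails quickly instead of consuming
# all RAM and locking up the machine.
(
  ulimit -v 262144    # ~256 MB virtual-memory cap for this subshell
  python3 -c 'x = "a" * (1 << 30)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation blocked"
)
```

Under the cap this prints "allocation blocked"; the 1 GB allocation cannot fit in 256 MB.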

Thanks,

Erik








Re: remaining sql/json patches

2023-11-10 Thread Erik Rijkers

Hi,

At the moment, what is the patchset to be tested?  The latest SQL/JSON 
server I have is from September, and it has become unclear to me what 
belongs to the SQL/JSON patchset.  It seems to me that cfbot erroneously 
shows green because it successfully compiles later detail patches (i.e., 
not the SQL/JSON set itself). Please correct me if I'm wrong and it is 
in fact possible to derive from cfbot the patchset to use to build the 
latest SQL/JSON server.


Thanks!

Erik




event trigger sgml touch-up

2023-10-16 Thread Erik Rijkers

Some small (grammatical) changes in event-trigger.sgml.

(Also one deletion of 'community-we', which I think is just confusing 
for the non-PostgreSQL-community reader.)



Erik--- doc/src/sgml/event-trigger.sgml.orig	2023-10-16 17:16:00.017452340 +0200
+++ doc/src/sgml/event-trigger.sgml	2023-10-16 17:22:32.965450992 +0200
@@ -40,7 +40,7 @@
  The login event occurs when an authenticated user logs
  into the system. Any bug in a trigger procedure for this event may
  prevent successful login to the system. Such bugs may be fixed by
- setting  is set to false
+ setting  to false
  either in a connection string or configuration file. Alternative is
  restarting the system in single-user mode (as event triggers are
  disabled in this mode). See the  reference
@@ -49,8 +49,8 @@
  To prevent servers from becoming inaccessible, such triggers must avoid
  writing anything to the database when running on a standby.
  Also, it's recommended to avoid long-running queries in
- login event triggers.  Notes that, for instance,
- cancelling connection in psql wouldn't cancel
+ login event triggers.  Note that, for instance,
+ cancelling a connection in psql wouldn't cancel
  the in-progress login trigger.

 
@@ -1359,7 +1359,7 @@
 END IF;
 
 -- The checks below cannot be performed on standby servers so
--- ensure the database is not in recovery before we perform any
+-- ensure the database is not in recovery before performing any
 -- operations.
 SELECT pg_is_in_recovery() INTO rec;
 IF rec THEN
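
The pg_is_in_recovery() check touched by the last hunk matters because login event triggers also fire on standbys; a minimal sketch of such a trigger (table and function names are illustrative, assuming the PostgreSQL 17 login event):

```sql
-- Illustrative only: log each login, but skip the INSERT on a standby,
-- since a trigger running during recovery must not write to the database.
CREATE FUNCTION on_login_proc() RETURNS event_trigger AS $$
BEGIN
    IF NOT pg_is_in_recovery() THEN
        INSERT INTO user_login_log (who, at) VALUES (session_user, now());
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER on_login_trigger ON login
    EXECUTE FUNCTION on_login_proc();
ALTER EVENT TRIGGER on_login_trigger ENABLE ALWAYS;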


Re: remaining sql/json patches

2023-09-27 Thread Erik Rijkers

On 9/27/23 15:55, Amit Langote wrote:

On Thu, Sep 21, 2023 at 9:41 PM Amit Langote  wrote:


I don't know; maybe it's worthwhile to fix this (admittedly trivial) 
failure in the tests? It's been there for a while.


Thanks,

Erik

diff -U3 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.json_table/src/test/regress/expected/jsonb_sqljson.out
 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.json_table/src/test/regress/results/jsonb_sqljson.out
--- 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.json_table/src/test/regress/expected/jsonb_sqljson.out
 2023-09-27 16:10:52.361367183 +0200
+++ 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.json_table/src/test/regress/results/jsonb_sqljson.out
  2023-09-27 16:18:50.929382498 +0200
@@ -994,14 +994,14 @@
 FROM information_schema.check_constraints
 WHERE constraint_name LIKE 'test_jsonb_constraint%'
 ORDER BY 1;
-   check_clause
   
---
- ((JSON_EXISTS((js)::jsonb, 'strict $."a"' RETURNING integer TRUE ON ERROR) < 
2))
- ((JSON_QUERY((js)::jsonb, '$."a"' RETURNING character(5) OMIT QUOTES EMPTY 
ARRAY ON EMPTY) > ('a'::bpchar COLLATE "C")))
- ((JSON_QUERY((js)::jsonb, '$."a"' RETURNING jsonb WITH CONDITIONAL WRAPPER 
EMPTY OBJECT ON ERROR) < '[10]'::jsonb))
- ((JSON_VALUE((js)::jsonb, '$."a"' RETURNING integer DEFAULT (('12'::text || 
i))::integer ON EMPTY ERROR ON ERROR) > i))
- ((js IS JSON))
- (JSON_EXISTS((js)::jsonb, '$."a"' PASSING (i + 5) AS int, (i)::text AS txt, 
ARRAY[1, 2, 3] AS arr))
+  check_clause 
 
+
+ (JSON_EXISTS((js)::jsonb, 'strict $."a"' RETURNING integer TRUE ON ERROR) < 2)
+ (JSON_QUERY((js)::jsonb, '$."a"' RETURNING character(5) OMIT QUOTES EMPTY 
ARRAY ON EMPTY) > ('a'::bpchar COLLATE "C"))
+ (JSON_QUERY((js)::jsonb, '$."a"' RETURNING jsonb WITH CONDITIONAL WRAPPER 
EMPTY OBJECT ON ERROR) < '[10]'::jsonb)
+ (JSON_VALUE((js)::jsonb, '$."a"' RETURNING integer DEFAULT (('12'::text || 
i))::integer ON EMPTY ERROR ON ERROR) > i)
+ (js IS JSON)
+ JSON_EXISTS((js)::jsonb, '$."a"' PASSING (i + 5) AS int, (i)::text AS txt, 
ARRAY[1, 2, 3] AS arr)
 (6 rows)
 
 SELECT pg_get_expr(adbin, adrelid)


Re: Row pattern recognition

2023-09-22 Thread Erik Rijkers

On 9/22/23 12:12, Tatsuo Ishii wrote:

On 9/22/23 07:16, Tatsuo Ishii wrote:

Attached is the fix against v6 patch. I will include this in upcoming
v7 patch.

Attached is the v7 patch. It includes the fix mentioned above.  Also

(Champion's address bounced; removed)


On my side his address bounced too :-<


Hi,

In my hands, make check fails on the rpr test; see attached .diff
file.
In these two statements:
-- using NEXT
-- using AFTER MATCH SKIP TO NEXT ROW
   the results of first_value(price) and next_value(price) are empty.


Strange. I have checked out fresh master branch and applied the v7
patches, then ran make check. All tests including the rpr test
passed. This is Ubuntu 20.04.


The curious thing is that the server otherwise builds OK, and if I 
explicitly run on that server 'CREATE TEMP TABLE stock' + the 20 INSERTs 
(just to make sure I have known data), those two statements both 
return the correct result.


So maybe the testing/timing is wonky (not necessarily the server).

Erik



Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp





Re: Row pattern recognition

2023-09-22 Thread Erik Rijkers

On 9/22/23 07:16, Tatsuo Ishii wrote:

Attached is the fix against v6 patch. I will include this in upcoming v7 patch.


Attached is the v7 patch. It includes the fix mentioned above.  Also


Hi,

In my hands, make check fails on the rpr test; see attached .diff file.
In these two statements:
-- using NEXT
-- using AFTER MATCH SKIP TO NEXT ROW
  the results of first_value(price) and next_value(price) are empty.


Erik Rijkers



this time the pattern matching engine is enhanced: previously it
recursed in the row direction, which means that if the number of rows
in a frame is large, it could exceed the stack depth limit.  The new
version recurses over matched pattern variables in a row, of which
there are at most 26, which should be small enough.
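
The difference described above (recursion depth proportional to the row count vs. bounded by the pattern-variable count) can be illustrated with a toy sketch; this is illustrative Python, not the actual implementation:

```python
# Toy illustration: recursing once per row makes the recursion depth
# grow with the frame size, while recursing once per pattern variable
# bounds the depth at the number of variables (at most 26, A-Z).

def depth_by_row(rows, i=0):
    # depth == len(rows): dangerous for large frames
    return 0 if i == len(rows) else 1 + depth_by_row(rows, i + 1)

def depth_by_variable(variables, v=0):
    # depth == len(variables) <= 26, independent of the frame size
    return 0 if v == len(variables) else 1 + depth_by_variable(variables, v + 1)

print(depth_by_variable(list("ABC")))   # 3, no matter how many rows
```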

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp

diff -U3 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/expected/rpr.out 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/results/rpr.out
--- 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/expected/rpr.out  
2023-09-22 09:04:17.770392635 +0200
+++ 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/results/rpr.out   
2023-09-22 09:38:23.826458109 +0200
@@ -253,23 +253,23 @@
 );
  company  |   tdate| price | first_value | last_value 
 --++---+-+
- company1 | 07-01-2023 |   100 | 100 |200
+ company1 | 07-01-2023 |   100 | |   
  company1 | 07-02-2023 |   200 | |   
  company1 | 07-03-2023 |   150 | |   
- company1 | 07-04-2023 |   140 | 140 |150
+ company1 | 07-04-2023 |   140 | |   
  company1 | 07-05-2023 |   150 | |   
  company1 | 07-06-2023 |90 | |   
- company1 | 07-07-2023 |   110 | 110 |130
+ company1 | 07-07-2023 |   110 | |   
  company1 | 07-08-2023 |   130 | |   
  company1 | 07-09-2023 |   120 | |   
  company1 | 07-10-2023 |   130 | |   
- company2 | 07-01-2023 |50 |  50 |   2000
+ company2 | 07-01-2023 |50 | |   
  company2 | 07-02-2023 |  2000 | |   
  company2 | 07-03-2023 |  1500 | |   
- company2 | 07-04-2023 |  1400 |1400 |   1500
+ company2 | 07-04-2023 |  1400 | |   
  company2 | 07-05-2023 |  1500 | |   
  company2 | 07-06-2023 |60 | |   
- company2 | 07-07-2023 |  1100 |1100 |   1300
+ company2 | 07-07-2023 |  1100 | |   
  company2 | 07-08-2023 |  1300 | |   
  company2 | 07-09-2023 |  1200 | |   
  company2 | 07-10-2023 |  1300 | |   
@@ -290,23 +290,23 @@
 );
  company  |   tdate| price | first_value | last_value 
 --++---+-+
- company1 | 07-01-2023 |   100 | 100 |200
+ company1 | 07-01-2023 |   100 | |   
  company1 | 07-02-2023 |   200 | |   
  company1 | 07-03-2023 |   150 | |   
- company1 | 07-04-2023 |   140 | 140 |150
+ company1 | 07-04-2023 |   140 | |   
  company1 | 07-05-2023 |   150 | |   
  company1 | 07-06-2023 |90 | |   
- company1 | 07-07-2023 |   110 | 110 |130
+ company1 | 07-07-2023 |   110 | |   
  company1 | 07-08-2023 |   130 | |   
  company1 | 07-09-2023 |   120 | |   
  company1 | 07-10-2023 |   130 | |   
- company2 | 07-01-2023 |50 |  50 |   2000
+ company2 | 07-01-2023 |50 | |   
  company2 | 07-02-2023 |  2000 | |   
  company2 | 07-03-2023 |  1500 | |   
- company2 | 07-04-2023 |  1400 |1400 |   1500
+ company2 | 07-04-2023 |  1400 | |   
  company2 | 07-05-2023 |  1500 | |   
  company2 | 07-06-2023 |60 | |   
- company2 | 07-07-2023 |  1100 |1100 |   1300
+ company2 | 07-07-2023 |  1100 | |   
  company2 | 07-08-2023 |  1300 | |   
  company2 | 07-09-2023 |  1200 | |   
  company2 | 07-10-2023 |  1300 | |   


Re: Row pattern recognition

2023-09-22 Thread Erik Rijkers

On 9/22/23 10:23, Erik Rijkers wrote:

On 9/22/23 07:16, Tatsuo Ishii wrote:
Attached is the fix against v6 patch. I will include this in upcoming 
v7 patch.


Attached is the v7 patch. It includes the fix mentioned above.  Also

(Champion's address bounced; removed)



Sorry, I forgot to re-attach the regression.diffs with resend...

Erik


Hi,

In my hands, make check fails on the rpr test; see attached .diff file.
In these two statements:
-- using NEXT
-- using AFTER MATCH SKIP TO NEXT ROW
   the results of first_value(price) and next_value(price) are empty.

Erik Rijkers



this time the pattern matching engine is enhanced: previously it
recursed in the row direction, which means that if the number of rows
in a frame is large, it could exceed the stack depth limit.  The new
version recurses over matched pattern variables in a row, of which
there are at most 26, which should be small enough.

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp


diff -U3 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/expected/rpr.out 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/results/rpr.out
--- 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/expected/rpr.out  
2023-09-22 09:04:17.770392635 +0200
+++ 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.rpr/src/test/regress/results/rpr.out   
2023-09-22 09:38:23.826458109 +0200
@@ -253,23 +253,23 @@
 );
  company  |   tdate| price | first_value | last_value 
 --++---+-+
- company1 | 07-01-2023 |   100 | 100 |200
+ company1 | 07-01-2023 |   100 | |   
  company1 | 07-02-2023 |   200 | |   
  company1 | 07-03-2023 |   150 | |   
- company1 | 07-04-2023 |   140 | 140 |150
+ company1 | 07-04-2023 |   140 | |   
  company1 | 07-05-2023 |   150 | |   
  company1 | 07-06-2023 |90 | |   
- company1 | 07-07-2023 |   110 | 110 |130
+ company1 | 07-07-2023 |   110 | |   
  company1 | 07-08-2023 |   130 | |   
  company1 | 07-09-2023 |   120 | |   
  company1 | 07-10-2023 |   130 | |   
- company2 | 07-01-2023 |50 |  50 |   2000
+ company2 | 07-01-2023 |50 | |   
  company2 | 07-02-2023 |  2000 | |   
  company2 | 07-03-2023 |  1500 | |   
- company2 | 07-04-2023 |  1400 |1400 |   1500
+ company2 | 07-04-2023 |  1400 | |   
  company2 | 07-05-2023 |  1500 | |   
  company2 | 07-06-2023 |60 | |   
- company2 | 07-07-2023 |  1100 |1100 |   1300
+ company2 | 07-07-2023 |  1100 | |   
  company2 | 07-08-2023 |  1300 | |   
  company2 | 07-09-2023 |  1200 | |   
  company2 | 07-10-2023 |  1300 | |   
@@ -290,23 +290,23 @@
 );
  company  |   tdate| price | first_value | last_value 
 --++---+-+
- company1 | 07-01-2023 |   100 | 100 |200
+ company1 | 07-01-2023 |   100 | |   
  company1 | 07-02-2023 |   200 | |   
  company1 | 07-03-2023 |   150 | |   
- company1 | 07-04-2023 |   140 | 140 |150
+ company1 | 07-04-2023 |   140 | |   
  company1 | 07-05-2023 |   150 | |   
  company1 | 07-06-2023 |90 | |   
- company1 | 07-07-2023 |   110 | 110 |130
+ company1 | 07-07-2023 |   110 | |   
  company1 | 07-08-2023 |   130 | |   
  company1 | 07-09-2023 |   120 | |   
  company1 | 07-10-2023 |   130 | |   
- company2 | 07-01-2023 |50 |  50 |   2000
+ company2 | 07-01-2023 |50 | |   
  company2 | 07-02-2023 |  2000 | |   
  company2 | 07-03-2023 |  1500 | |   
- company2 | 07-04-2023 |  1400 |1400 |   1500
+ company2 | 07-04-2023 |  1400 | |   
  company2 | 07-05-2023 |  1500 | |   
  company2 | 07-06-2023 |60 | |   
- company2 | 07-07-2023 |  1100 |1100 |   1300
+ company2 | 07-07-2023 |  1100 | |   
  company2 | 07-08-2023 |  1300 | |   
  company2 | 07-09-2023 |  1200 | |   
  company2 | 07-10-2023 |  1300 | |   


Re: Row pattern recognition

2023-09-22 Thread Erik Rijkers

On 9/22/23 07:16, Tatsuo Ishii wrote:

Attached is the fix against v6 patch. I will include this in upcoming v7 patch.


Attached is the v7 patch. It includes the fix mentioned above.  Also

(Champion's address bounced; removed)

Hi,

In my hands, make check fails on the rpr test; see attached .diff file.
In these two statements:
-- using NEXT
-- using AFTER MATCH SKIP TO NEXT ROW
  the results of first_value(price) and next_value(price) are empty.

Erik Rijkers



this time the pattern matching engine is enhanced: previously it
recursed in the row direction, which means that if the number of rows
in a frame is large, it could exceed the stack depth limit.  The new
version recurses over matched pattern variables in a row, of which
there are at most 26, which should be small enough.

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese:http://www.sraoss.co.jp





Re: remaining sql/json patches

2023-09-19 Thread Erik Rijkers

On 9/19/23 13:56, Amit Langote wrote:

On Tue, Sep 19, 2023 at 7:18 PM Alvaro Herrera  wrote:

0001: I wonder why you used Node for the ErrorSaveContext pointer
instead of the specific struct you want.  I propose the attached, for
some extra type-safety.  Or did you have a reason to do it that way?


No reason other than that most other headers use Node.  I agree that
making an exception for this patch might be better, so I've
incorporated your patch into 0001.

I've also updated the query functions patch (0003) to address the
crashing bug reported by Erik.  Essentially, I made the coercion step
of JSON_QUERY to always use json_populate_type() when WITH WRAPPER is
used.  You might get funny errors with ERROR OR ERROR for many types
when used in RETURNING, but at least there should no longer be any
crashes.



Indeed, with v16 those crashes are gone.

A lesser evil: gcc 13.2.0 gave some warnings, slightly different 
between the assert and non-assert builds.


--- assert build:

-- [2023.09.19 14:06:35 json_table2/0] make core: make --quiet -j 4
In file included from ../../../src/include/postgres.h:45,
 from parse_expr.c:16:
In function ‘transformJsonFuncExpr’,
inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:
parse_expr.c:4355:22: warning: ‘jsexpr’ may be used uninitialized 
[-Wmaybe-uninitialized]

 4355 | Assert(jsexpr->formatted_expr);
../../../src/include/c.h:864:23: note: in definition of macro ‘Assert’
  864 | if (!(condition)) \
  |   ^
parse_expr.c: In function ‘transformExprRecurse’:
parse_expr.c:4212:21: note: ‘jsexpr’ was declared here
 4212 | JsonExpr   *jsexpr;
  | ^~


--- non-assert build:

-- [2023.09.19 14:11:03 json_table2/1] make core: make --quiet -j 4
In function ‘transformJsonFuncExpr’,
inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:
parse_expr.c:4356:28: warning: ‘jsexpr’ may be used uninitialized 
[-Wmaybe-uninitialized]

 4356 | if (exprType(jsexpr->formatted_expr) != JSONBOID)
  |  ~~^~~~
parse_expr.c: In function ‘transformExprRecurse’:
parse_expr.c:4212:21: note: ‘jsexpr’ was declared here
 4212 | JsonExpr   *jsexpr;
  | ^~


Thank you,

Erik





Re: remaining sql/json patches

2023-09-18 Thread Erik Rijkers

On 9/18/23 13:14, Erik Rijkers wrote:

On 9/18/23 12:20, Amit Langote wrote:

Hi Erik,


I am sorry to be bothering you with these somewhat idiotic SQL
statements but I suppose somehow it needs to be made more solid.




For 60 datatypes, I ran this statement:

select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
  '$.a[*].a?(@<=3)'returning ${datatype} with wrapper
);

against a 17devel server (a0a5) with json v15 patches and caught the 
output, incl. 30+ crashes, in the attached .txt. I hope that's useful.




and FYI: None of these crashes occur when I leave off the 'WITH WRAPPER' 
clause.




Erik






Re: remaining sql/json patches

2023-09-18 Thread Erik Rijkers

On 9/18/23 12:20, Amit Langote wrote:

Hi Erik,


I am sorry to be bothering you with these somewhat idiotic SQL
statements but I suppose somehow it needs to be made more solid.




For 60 datatypes, I ran this statement:

select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning ${datatype} with wrapper
);

against a 17devel server (a0a5) with json v15 patches and caught the 
output, incl. 30+ crashes, in the attached .txt. I hope that's useful.



Erik
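
The per-type output below was produced mechanically; a hypothetical sketch of such a generator (the actual driver script and the psql plumbing are not shown in the thread):

```python
# Hypothetical sketch: emit one json_query statement per RETURNING type,
# mirroring the per-type blocks below.  Feeding each statement to psql
# separately means one backend crash doesn't hide the other results.
DATATYPES = ["int4", "bit", "bigint", "bit varying", "boolean", "box", "bytea"]

def make_stmt(datatype: str) -> str:
    return (
        'select json_query(jsonb\'{"a":[{"a":[2,3]},{"a":[4,5]}]}\',\n'
        f" '$.a[*].a?(@<=3)'returning {datatype} with wrapper\n"
        ");"
    )

for dt in DATATYPES:
    print(f"--- [{dt} START ] --")
    print(make_stmt(dt))
```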

--- [int4 START ] --
-- datatype  [int4]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning int4 with wrapper
);
ERROR:  cannot cast jsonb array to type integer


--- [int4 END ] --
--- [bit START ] --
-- datatype  [bit]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning bit with wrapper
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


--- [bit END ] --
--- [bigint START ] 
--
-- datatype  [bigint]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning bigint with wrapper
);
ERROR:  cannot cast jsonb array to type bigint


--- [bigint END ] --
--- [bit varying 
START ] --
-- datatype  [bit varying]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning bit varying with wrapper
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


--- [bit varying 
END ] --
--- [boolean START 
] --
-- datatype  [boolean]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning boolean with wrapper
);
ERROR:  cannot cast jsonb array to type boolean


--- [boolean END ] 
--
--- [box START ] --
-- datatype  [box]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning box with wrapper
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


--- [box END ] --
--- [bytea START ] 
--
-- datatype  [bytea]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning bytea with wrapper
);
   json_query   

 \x5b322c20335d
(1 row)



--- [bytea END ] --
--- [character 
varying(5) START ] --
-- datatype  [character varying(5)]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning character varying(5) with wrapper
);
 json_query 

 [2, 3
(1 row)



--- [character 
varying(5) END ] --
--- [cidr START ] --
-- datatype  [cidr]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning cidr with wrapper
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


--- [cidr END ] --
--- [circle START ] 
--
-- datatype  [circle]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning circle with wrapper
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


--- [circle END ] --
--- [date START ] --
-- datatype  [date]
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
 '$.a[*].a?(@<=3)'returning date with wrapper
);
 json_query 

 2000-01-01
(1 row)



--- [date END ] --

Re: remaining sql/json patches

2023-09-18 Thread Erik Rijkers

On 9/18/23 05:15, Amit Langote wrote:

On Sun, Sep 17, 2023 at 3:34 PM Erik Rijkers  wrote:

On 9/14/23 10:14, Amit Langote wrote:





Hi Amit,

Just now I built a v14-patched server and I found this crash:

select json_query(jsonb '
{
"arr": [
  {"arr": [2,3]}
, {"arr": [4,5]}
]
}'
, '$.arr[*].arr ? (@ <= 3)' returning anyarray  WITH WRAPPER) --crash
;
server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
connection to server was lost


Thanks for the report.

Attached updated version fixes the crash, but you get an error as is
to be expected:

select json_query(jsonb '
{
"arr": [
  {"arr": [2,3]}
, {"arr": [4,5]}
]
}'
, '$.arr[*].arr ? (@ <= 3)' returning anyarray  WITH WRAPPER);
ERROR:  cannot accept a value of type anyarray

unlike when using int[]:

select json_query(jsonb '
{
"arr": [
  {"arr": [2,3]}
, {"arr": [4,5]}
]
}'
, '$.arr[*].arr ? (@ <= 3)' returning int[]  WITH WRAPPER);
  json_query

  {2,3}
(1 row)



Thanks, Amit. Alas, there are more: for 'anyarray' I thought I'd 
substitute 'interval', 'int4range', 'int8range', and sure enough they 
all give similar crashes. Patched with v15:


psql -qX -e << SQL
select json_query(jsonb'{"a":[{"a":[2,3]},{"a":[4,5]}]}',
  '$.a[*].a?(@<=3)'returning int[] with wrapper --ok
);

select json_query(jsonb'{"a": [{"a": [2,3]}, {"a": [4,5]}]}',
  '$.a[*].a?(@<=3)'returning interval  with wrapper --crash
--'$.a[*].a?(@<=3)'returning int4range with wrapper --crash
--'$.a[*].a?(@<=3)'returning int8range with wrapper --crash
--'$.a[*].a?(@<=3)'returning numeric[] with wrapper --{2,3} =ok
--'$.a[*].a?(@<=3)'returning anyarray  with wrapper --fixed
--'$.a[*].a?(@<=3)'returning anyarray   --null =ok
--'$.a[*].a?(@<=3)'returning int--null =ok
--'$.a[*].a?(@<=3)'returning int   with wrapper --error =ok
--'$.a[*].a?(@<=3)'returning int[] with wrapper -- {2,3} =ok
);
SQL
=> server closed the connection unexpectedly, etc

Because those first three tries gave a crash (*all three*), I'm a bit 
worried there may be many more.


I am sorry to keep bothering you with these somewhat idiotic SQL 
statements, but I suppose it all needs to be made more solid somehow.


Thanks!

Erik




Re: remaining sql/json patches

2023-09-17 Thread Erik Rijkers

On 9/14/23 at 10:14, Amit Langote wrote:





Hi Amit,

Just now I built a v14-patched server and I found this crash:

select json_query(jsonb '
{
  "arr": [
{"arr": [2,3]}
  , {"arr": [4,5]}
  ]
}'
  , '$.arr[*].arr ? (@ <= 3)' returning anyarray  WITH WRAPPER) --crash
;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


Can you have a look?

Thanks,

Erik




Re: JSON Path and GIN Questions

2023-09-15 Thread Erik Rijkers

On 9/15/23 at 22:27, David E. Wheeler wrote:

On Sep 12, 2023, at 21:00, Erik Wienhold  wrote:


That's also my understanding.  We had a discussion about the docs on @@, @?, and
jsonb_path_query on -general a while back [1].  Maybe it's useful also.


Okay, I’ll take a pass at expanding the docs on this. I think a little 
mini-tutorial on these two operators would be useful.

Meanwhile, I’d like to re-up this question about the index qualification of 
non-equality JSON Path operators.

On Sep 12, 2023, at 20:16, David E. Wheeler  wrote:


Issue 3: Index Use for Comparison
-

 From the docs 
(https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING), I had 
assumed any JSON Path query would be able to use the GIN index. However while the 
use of the == JSON Path operator is able to take advantage of the GIN index, 
apparently the >= operator cannot:

david=# explain analyze select id from movies where movie @? '$ ?($.year >= 
2023)';
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Seq Scan on movies  (cost=0.00..3741.41 rows=366 width=4) (actual time=34.815..36.259 rows=192 loops=1)
   Filter: (movie @? '$?($."year" >= 2023)'::jsonpath)
   Rows Removed by Filter: 36081
Planning Time: 1.864 ms
Execution Time: 36.338 ms
(5 rows)

Is this expected? Originally I tried with json_path_ops, which I can understand 
not working, since it stores hashes of paths, which would allow only exact 
matches. But a plain old GIN index doesn’t appear to work, either. Should it? Is 
there perhaps some other op class that would allow it to work? Or would I have to 
create a separate BTREE index on `movie -> 'year'`?




movie @? '$ ?($.year >= 2023)'

I believe it is indeed not possible for such an inequality search to use 
the GIN index.  It is another weakness of JSON that can be unexpected to 
those not steeped in the full Knowledge of the manual.  Yes, this too 
would be good to explain in the doc where JSON indexes are explained.


Erik Rijkers


Thanks,

David






Re: JSON Path and GIN Questions

2023-09-13 Thread Erik Rijkers

On 9/13/23 at 22:01, David E. Wheeler wrote:

On Sep 13, 2023, at 01:11, Erik Rijkers  wrote:


"All use of json*() functions preclude index usage."

That sentence is missing from the documentation.


Where did that come from? Why wouldn’t JSON* functions use indexes? I see that 
the docs only mention operators; why would the corresponding functions behave 
the same?

D


Sorry, perhaps my reply was a bit off-topic.
But you mentioned possibly touching the docs, and
the non-use of the index is just so unexpected.
Compare these two statements:

select count(id) from movies where
movie @? '$ ? (@.year == 2023)'
Time: 1.259 ms
  (index used)

select count(id) from movies where
jsonb_path_match(movie, '$.year == 2023');
Time: 17.260 ms
  (no index used - unexpectedly slower)

With these two indexes available:
  using gin (movie);
  using gin (movie jsonb_path_ops);

(REL_15_STABLE; but it's the same in HEAD and
the not-yet-committed SQL/JSON patches.)

Erik Rijkers




Re: JSON Path and GIN Questions

2023-09-12 Thread Erik Rijkers

On 9/13/23 at 03:00, Erik Wienhold wrote:

Hi David,

On 13/09/2023 02:16 CEST David E. Wheeler  wrote:


CREATE TABLE MOVIES (id SERIAL PRIMARY KEY, movie JSONB NOT NULL);
\copy movies(movie) from PROGRAM 'curl -s 
https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json | jq -c 
".[]" | sed "s|||g"';
create index on movies using gin (movie);
analyze movies;

I have been confused as to the difference between @@ vs @?: Why do these
return different results?

david=# select id from movies where movie @@ '$ ?(@.title == "New Life 
Rescue")';
  id

(0 rows)

david=# select id from movies where movie @? '$ ?(@.title == "New Life 
Rescue")';
  id

  10
(1 row)

I posted this question on Stack Overflow 
(https://stackoverflow.com/q/77046554/79202),
and from the suggestion I got there, it seems that @@ expects a boolean to be
returned by the path query, while @? wraps it in an implicit exists(). Is that
right?


That's also my understanding.  We had a discussion about the docs on @@, @?, and
jsonb_path_query on -general a while back [1].  Maybe it's useful also.


If so, I’d like to submit a patch to the docs talking about this, and
suggesting the use of jsonb_path_query() to test paths to see if they return
a boolean or not.


+1

[1] 
https://www.postgresql.org/message-id/CACJufxE01sxgvtG4QEvRZPzs_roggsZeVvBSGpjM5tzE5hMCLA%40mail.gmail.com

--
Erik



"All use of json*() functions preclude index usage."

That sentence is missing from the documentation.


Erik Rijkers
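As an aside, the difference discussed in this thread — `@@` expects the path expression itself to yield a boolean, while `@?` asks whether the path yields any item at all (an implicit exists()) — can be mimicked outside the database. A rough, purely illustrative Python sketch follows; the function names here are invented for illustration, not anything PostgreSQL provides:

```python
# Toy model: a jsonpath filter like '$ ?(@.title == "X")' yields the
# matching ITEMS (objects), never a bare boolean.
def path_query(doc, wanted_title):
    """Emulate '$ ?(@.title == wanted_title)': return matching items."""
    return [doc] if doc.get("title") == wanted_title else []

def op_at_question(results):
    """'@?': does the path yield at least one item?  (implicit exists())"""
    return len(results) > 0

def op_at_at(results):
    """'@@': only meaningful if the path itself yields a boolean;
    otherwise the result is unknown (SQL NULL), so no row matches."""
    if results and isinstance(results[0], bool):
        return results[0]
    return None

movie = {"id": 10, "title": "New Life Rescue"}
hits = path_query(movie, "New Life Rescue")
print(op_at_question(hits))  # True  -> '@?' matches the row
print(op_at_at(hits))        # None  -> '@@' does not (filter yields an object)
```

With `@@` one would instead write a predicate-style path such as `'$.title == "New Life Rescue"'`, which does evaluate to a boolean.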








Re: Row pattern recognition

2023-09-02 Thread Erik Rijkers

On 9/2/23 at 08:52, Tatsuo Ishii wrote:


Attached is the v5 patch. Differences from previous patch include:



Hi,

The patches compile & tests run fine but this statement from the 
documentation crashes an assert-enabled server:


SELECT company, tdate, price, max(price) OVER w FROM stock
WINDOW w AS (
PARTITION BY company
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
AFTER MATCH SKIP PAST LAST ROW
INITIAL
PATTERN (LOWPRICE UP+ DOWN+)
DEFINE
LOWPRICE AS price <= 100,
UP AS price > PREV(price),
DOWN AS price < PREV(price)
);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


Log file:

TRAP: failed Assert("aggregatedupto_nonrestarted <= 
winstate->aggregatedupto"), File: "nodeWindowAgg.c", Line: 1054, PID: 68975
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(ExceptionalCondition+0x54)[0x9b0824]

postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x71ae8d]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(standard_ExecutorRun+0x13a)[0x6def9a]

/home/aardvark/pg_stuff/pg_installations/pgsql.rpr/lib/pg_stat_statements.so(+0x55e5)[0x7ff3798b95e5]
/home/aardvark/pg_stuff/pg_installations/pgsql.rpr/lib/auto_explain.so(+0x2680)[0x7ff3798ab680]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x88a4ff]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(PortalRun+0x240)[0x88bb50]

postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x887cca]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(PostgresMain+0x14dc)[0x88958c]

postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) SELECT[0x7fb0da]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(PostmasterMain+0xd2d)[0x7fc01d]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(main+0x1e0)[0x5286d0]

/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7ff378e9dd0a]
postgres: 17_rpr_d0ec_gulo: aardvark testdb ::1(34808) 
SELECT(_start+0x2a)[0x5289aa]
2023-09-02 19:59:05.329 CEST 46723 LOG:  server process (PID 68975) was 
terminated by signal 6: Aborted
2023-09-02 19:59:05.329 CEST 46723 DETAIL:  Failed process was running: 
SELECT company, tdate, price, max(price) OVER w FROM stock

WINDOW w AS (
PARTITION BY company
ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
AFTER MATCH SKIP PAST LAST ROW
INITIAL
PATTERN (LOWPRICE UP+ DOWN+)
DEFINE
LOWPRICE AS price <= 100,
UP AS price > PREV(price),
DOWN AS price < PREV(price)
);
2023-09-02 19:59:05.329 CEST 46723 LOG:  terminating any other active 
server processes




Erik Rijkers




Re: remaining sql/json patches

2023-08-31 Thread Erik Rijkers

On 8/31/23 at 14:57, Amit Langote wrote:

Hello,

On Wed, Aug 16, 2023 at 1:27 PM Amit Langote  wrote:

I will post a new version after finishing working on a few other
improvements I am working on.


Sorry about the delay.  Here's a new version.


Hi,

While compiling the new set

[v12-0001-Support-soft-error-handling-during-CoerceViaIO-e.patch]
[v12-0002-SQL-JSON-query-functions.patch]
[v12-0003-JSON_TABLE.patch]
[v12-0004-Claim-SQL-standard-compliance-for-SQL-JSON-featu.patch]

gcc 13.2.0 is sputtering somewhat:

--
In function ‘transformJsonFuncExpr’,
inlined from ‘transformExprRecurse’ at parse_expr.c:374:13:
parse_expr.c:4362:13: warning: ‘contextItemExpr’ may be used 
uninitialized [-Wmaybe-uninitialized]

 4362 | if (exprType(contextItemExpr) != JSONBOID)
  | ^
parse_expr.c: In function ‘transformExprRecurse’:
parse_expr.c:4214:21: note: ‘contextItemExpr’ was declared here
 4214 | Node   *contextItemExpr;
  | ^~~
nodeFuncs.c: In function ‘exprSetCollation’:
nodeFuncs.c:1238:25: warning: this statement may fall through 
[-Wimplicit-fallthrough=]

 1238 | {
  | ^
nodeFuncs.c:1247:17: note: here
 1247 | case T_JsonCoercion:
  | ^~~~
--

Those looks pretty unimportant, but I thought I'd let you know.

Tests (check, check-world and my own) still run fine.

Thanks,

Erik Rijkers







I found out that llvmjit_expr.c additions have been broken all along,
I mean since I rewrote the JsonExpr evaluation code to use soft error
handling back in January or so.  For example, I had made CoerceViaIO
evaluation code (EEOP_IOCOERCE ExprEvalStep) invoked by JsonCoercion
node's evaluation to pass an ErrorSaveContext to the type input
functions so that any errors result in returning NULL instead of
throwing the error.  Though the llvmjit_expr.c code was not modified
to do the same, so the SQL/JSON query functions would return wrong
results when JITed.  I have made many revisions to the JsonExpr
expression evaluation itself, not all of which were reflected in the
llvmjit_expr.c counterparts.   I've fixed all that in the attached.

I've broken the parts to teach the CoerceViaIO evaluation code to
handle errors softly into a separate patch attached as 0001.

Other notable changes in the SQL/JSON query functions patch (now 0002):

* Significantly rewrote the parser changes to make it a bit more
readable than before.  My main goal was to separate the code for each
JSON_EXISTS_OP, JSON_QUERY_OP, and JSON_VALUE_OP such that the
op-type-specific behaviors are more readily apparent by reading the
code.

* Got rid of JsonItemCoercions struct/node, which contained a
JsonCoercion field to store the coercion expressions for each JSON
item type that needs to be coerced to the RETURNING type, in favor of
using List of JsonCoercion nodes.  That resulted in simpler code in
many places, most notably in the executor / llvmjit_expr.c.






https://git.postgresql.org/git/postgresql.git/ fails

2023-08-28 Thread Erik Rijkers

I normally pull from
  https://git.postgresql.org/git/postgresql.git/

but for a few hours now it's been failing (while other git repo's are 
still reachable).


Is it me or is there a hiccup there?

thanks,

Erik Rijkers





Re: PostgreSQL 16 release announcement draft

2023-08-25 Thread Erik Rijkers

On 8/26/23 at 04:51, Jonathan S. Katz wrote:

On 8/24/23 11:17 AM, Erik Rijkers wrote:

On 8/24/23 at 16:32, Jonathan S. Katz wrote:

On 8/23/23 5:07 PM, David Rowley wrote:
On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz 
 wrote:


Hi,

When v15 docs have:

"27.2.7. Cascading Replication
The cascading replication feature allows a standby server to accept 
replication connections and stream WAL records to other standbys, 
acting as a relay. This can be used to reduce the number of direct 
connections to the primary and also to minimize inter-site bandwidth 
overheads."


why then, in the release draft, is that capability mentioned as 
something that is new for v16?

"
In PostgreSQL 16, users can perform logical decoding from a standby
instance, meaning a standby can publish logical changes to other servers.
"

Is there a difference between the two?


Yes. Those docs refer to **physical** replication, where a standby can 
continue to replicate WAL records to other standbys. In v16, standbys 
can now publish changes over **logical** replication.


Well, I must assume you are right.

But why is the attached program, running 3 cascading v15 servers, 
showing 'logical' in the middle server's (port 6526) 
pg_replication_slots.slot_type ?  Surely that is not physical but 
logical replication?


 port |  svn   |     slot_name      | slot_type
------+--------+--------------------+-----------
 6526 | 150003 | pub_6527_from_6526 | logical   <--
(1 row)

I must be confused -- I will be thankful for enlightenment.

Erik


Thanks,

Jonathan


logrep_cascade_15.sh
Description: application/shellscript


Re: PostgreSQL 16 release announcement draft

2023-08-24 Thread Erik Rijkers

On 8/24/23 at 16:32, Jonathan S. Katz wrote:

On 8/23/23 5:07 PM, David Rowley wrote:
On Thu, 24 Aug 2023 at 05:55, Jonathan S. Katz  
wrote:


Hi,

When v15 docs have:

"27.2.7. Cascading Replication
The cascading replication feature allows a standby server to accept 
replication connections and stream WAL records to other standbys, acting 
as a relay. This can be used to reduce the number of direct connections 
to the primary and also to minimize inter-site bandwidth overheads."


why then, in the release draft, is that capability mentioned as 
something that is new for v16?

"
In PostgreSQL 16, users can perform logical decoding from a standby
instance, meaning a standby can publish logical changes to other servers.
"

Is there a difference between the two?

Thanks,

Erik









regexp_replace weirdness amounts to a bug?

2023-08-16 Thread Erik Rijkers

Hello,

The following surprised me enough to think it might be a bug:
(17devel)

select
   regexp_replace('Abc Def'
  , '([a-z]) ([A-Z])'
  , '\1 ' || lower('\2')  );

regexp_replace

 Abc Def
(1 row)

-- 'Abc Def' got
-- 'Abc def' expected

What do you think?

Thanks,

Erik Rijkers
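For the record, this is expected behavior rather than a bug: SQL evaluates lower('\2') before regexp_replace() is ever called, and lowercasing the literal two-character string '\2' is a no-op, so regexp_replace() simply receives '\1 \2' as its replacement. The same trap exists in other languages; a small Python sketch of the pitfall and the usual fix (compute the replacement per match with a callable):

```python
import re

s = "Abc Def"

# Trap: .lower() operates on the literal replacement string r"\2"
# (a no-op), not on the text the backreference will match --
# exactly like lower('\2') in the SQL statement above.
wrong = re.sub(r"([a-z]) ([A-Z])", r"\1 " + r"\2".lower(), s)

# Fix: use a callable so the lowercasing happens per match.
right = re.sub(r"([a-z]) ([A-Z])",
               lambda m: m.group(1) + " " + m.group(2).lower(), s)

print(wrong)  # Abc Def   (unchanged, as reported)
print(right)  # Abc def   (what was expected)
```

In PostgreSQL itself the per-match transformation has to be expressed differently (e.g. extracting the match and lowercasing it separately), since regexp_replace() has no callable-replacement form.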





Re: proposal: jsonb_populate_array

2023-08-14 Thread Erik Rijkers
On 8/14/23 at 14:51, Pavel Stehule wrote:
> On Mon, 14 Aug 2023 at 11:32, Alvaro Herrera wrote:

> with proposed function I can write
>
> select jsonb_populate_array(null::date[],
> '["2023-07-13","2023-07-14"]'::jsonb)
>
Not yet committed, but outstanding
SQL/JSON patches (v11) will let you do:

select json_query(
'["2023-07-13", "2023-07-14"]'::jsonb
  , '$' returning date[]
);
       json_query        
-------------------------
 {2023-07-13,2023-07-14}
(1 row)

That's (more or less) what you want, no?

Let's hope it gets submitted 17-ish, anyway

Erik









Re: 2023-08-10 release announcement draft

2023-08-07 Thread Erik Rijkers

On 8/8/23 at 03:15, Jonathan S. Katz wrote:


Please provide your feedback no later than August 10, 2023 0:00 AoE[1].


'You us'  should be
'You use'
   (2x)


Erik




Re: remaining sql/json patches

2023-08-04 Thread Erik Rijkers

On 7/21/23 at 12:33, Amit Langote wrote:


Thanks for taking a look.



Hi Amit,

Is there any chance to rebase the outstanding SQL/JSON patches (esp. 
json_query)?


Thanks!

Erik Rijkers





Re: remaining sql/json patches

2023-07-17 Thread Erik Rijkers

On 7/17/23 at 07:00, jian he wrote:

hi.
seems there is no explanation about json_api_common_syntax in
functions-json.html

I can get json_query full synopsis from functions-json.html as follows:
json_query ( context_item, path_expression [ PASSING { value AS
varname } [, ...]] [ RETURNING data_type [ FORMAT JSON [ ENCODING UTF8
] ] ] [ { WITHOUT | WITH { CONDITIONAL | [UNCONDITIONAL] } } [ ARRAY ]
WRAPPER ] [ { KEEP | OMIT } QUOTES [ ON SCALAR STRING ] ] [ { ERROR |
NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression } ON EMPTY ]
[ { ERROR | NULL | EMPTY { [ ARRAY ] | OBJECT } | DEFAULT expression }
ON ERROR ])

It seems there isn't a full synopsis for json_table, only a partial,
one-by-one explanation.



FWIW, Re: json_api_common_syntax

An (old) pdf that I have (ISO/IEC TR 19075-6 First edition 2017-03)
contains the below specification.  It's probably the source of the 
particular term.  It's easy to see how it maps onto the current v7 
SQL/JSON implementation.   (I don't know if it has changed in later 
incarnations.)



-- 8< 
5.2  JSON API common syntax

The SQL/JSON query functions all need a path specification, the JSON 
value to be input to that path specification for querying and 
processing, and optional parameter values passed to the path 
specification. They use a common syntax:


<JSON API common syntax> ::=
  <JSON context item> <comma> <JSON path specification>
  [ AS <JSON table path name> ]
  [ <JSON passing clause> ]

<JSON context item> ::=
  <JSON value expression>

<JSON path specification> ::=
  <character string literal>

<JSON passing clause> ::=
  PASSING <JSON argument> [ { <comma> <JSON argument> }... ]

<JSON argument> ::=
  <JSON value expression> AS <identifier>
-- 8< 

And yes, we might need a readable translation of that in the docs 
although it might be easier to just get rid of the term 
'json_api_common_syntax'.


HTH,

Erik Rijkers




Re: PG 16 draft release notes ready

2023-07-14 Thread Erik Rijkers

On 7/4/23 at 23:32, Bruce Momjian wrote:

https://momjian.us/pgsql_docs/release-16.html


I noticed these:

'new new RULES'  should be
'new RULES'

'Perform apply of large transactions'  should be
'Performs apply of large transactions'
(I think)

'SQL JSON paths'  should be
'SQL/JSON paths'

Erik Rijkers







Re: PG 16 draft release notes ready

2023-05-24 Thread Erik Rijkers

On 5/24/23 at 15:58, Bruce Momjian wrote:

On Wed, May 24, 2023 at 12:23:02PM +0700, John Naylor wrote:


On Wed, May 24, 2023 at 11:19 AM Bruce Momjian  wrote:


Typos:

'from standbys servers'  should be
'from standby servers'

'reindexedb'  should be
'reindexdb'
  (2x: the next line mentions, erroneously,  'reindexedb --system')

'created only created'  should be
'only created'
  (I think)

'could could'  should be
'could'

'are now require the role'  should be
'now require the role'

'values is'  should be
'value is'

'to marked'  should be
'to be marked'


thanks,
Erik






Re: PostgreSQL 16 Beta 1 release announcement draft

2023-05-22 Thread Erik Rijkers

On 5/21/23 at 19:07, Jonathan S. Katz wrote:

On 5/19/23 12:17 AM, Jonathan S. Katz wrote:

Hi,

Attached is a draft of the release announcement for PostgreSQL 16 Beta 
Please provide feedback no later than May 24, 0:00 AoE. This will give 
Thanks everyone for your feedback. Here is the updated text that 


'substransaction'  should be
'subtransaction'

'use thousands separators'  perhaps is better:
'use underscore as digit-separator, as in `5_432` and `1_00_000`'

'instrcut'  should be
'instruct'


Erik
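Incidentally, the suggested wording maps onto how such literals behave in other languages as well; Python accepts the same underscore digit separators, which makes for a quick illustration (illustrative only — the SQL feature itself is handled by the PostgreSQL lexer, not shown here):

```python
# Underscores in numeric literals are ignored by the parser;
# groups need not be three digits, as in the `1_00_000` example.
a = 5_432
b = 1_00_000
print(a, b)  # 5432 100000
```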




Re: PostgreSQL 16 Beta 1 release announcement draft

2023-05-18 Thread Erik Rijkers

On 5/19/23 at 06:17, Jonathan S. Katz wrote:

Hi,

Attached is a draft of the release announcement for PostgreSQL 16 Beta 


Hi,


The usual small fry:


'continues to to'  should be
'continues to'

'continues to give users to the ability'  should be
'continues to give users the ability to'

'pg_createsubscription'  should be
'pg_create_subscription'

'starting with release'  should be
'starting with this release'

'credentials to connected to other services'  should be
'credentials to connect to other services'


Thanks,

Erik





Re: 2023-05-11 release announcement draft

2023-05-06 Thread Erik Rijkers

On 5/7/23 at 05:37, Jonathan S. Katz wrote:
Attached is a draft of the release announcement for the upcoming update 
release on May 11, 2023.


Please provide any suggestions, corrections, or notable omissions no 
later than 2023-05-11 0:00 AoE.


'leak in within a'  should be
'leak within a'

Erik




Re: SQL/JSON revisited (documentation)

2023-04-11 Thread Erik Rijkers

Hi,

IS JSON is documented as:

expression IS [ NOT ] JSON
  [ { VALUE | SCALAR | ARRAY | OBJECT } ]
  [ { WITH | WITHOUT } UNIQUE [ KEYS ] ]

which is fine but 'VALUE' is nowhere mentioned
(except in the commit-message as: IS JSON [VALUE] )

Unless I'm mistaken, 'VALUE' does indeed not change an IS JSON statement, 
so to document it we could simply insert this line (as in the attached):


"The VALUE key word is optional noise."

Somewhere in its text in func.sgml, which is now:

"This predicate tests whether expression can be parsed as JSON, possibly 
of a specified type.  If SCALAR or ARRAY or OBJECT is specified, the 
test is whether or not the JSON is of that particular type. If WITH 
UNIQUE KEYS is specified, then any object in the expression is also 
tested to see if it has duplicate keys."



Erik Rijkers
--- doc/src/sgml/func.sgml.orig 2023-04-12 06:16:40.517722315 +0200
+++ doc/src/sgml/func.sgml  2023-04-12 06:30:56.410837805 +0200
@@ -16037,6 +16037,7 @@

 This predicate tests whether expression can 
be
 parsed as JSON, possibly of a specified type.
+The VALUE key word is optional noise.
 If SCALAR or ARRAY or
 OBJECT is specified, the
 test is whether or not the JSON is of that particular type. If
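To make the predicate's documented behavior concrete — including VALUE being pure noise — here is a rough client-side approximation in Python. This is not PostgreSQL's implementation, just a sketch of the semantics for the type variants; the WITH UNIQUE [KEYS] variant is omitted because it needs a duplicate-key check during parsing:

```python
import json

def is_json(text, kind="VALUE"):
    """Approximate `expression IS JSON [VALUE|SCALAR|ARRAY|OBJECT]`.
    VALUE is optional noise: any parseable JSON qualifies."""
    try:
        v = json.loads(text)
    except (ValueError, TypeError):
        return False
    if kind == "VALUE":
        return True          # same result as plain IS JSON
    if kind == "ARRAY":
        return isinstance(v, list)
    if kind == "OBJECT":
        return isinstance(v, dict)
    if kind == "SCALAR":
        return not isinstance(v, (list, dict))
    raise ValueError("unknown kind: " + kind)

print(is_json('{"a": null}'))            # True
print(is_json('{"a": null}', "OBJECT"))  # True
print(is_json('123', "SCALAR"))          # True
print(is_json('not json'))               # False
```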


Re: SQL/JSON revisited

2023-03-29 Thread Erik Rijkers

On 3/29/23 at 12:27, Alvaro Herrera wrote:

On 2023-Mar-28, Erik Rijkers wrote:


In the json_arrayagg() description, it says:
'If ABSENT ON NULL is specified, any NULL values are omitted.'
That's true, but as omitting NULL values is the default (i.e., also without
that clause) maybe it's better to say:
'Any NULL values are omitted unless NULL ON NULL is specified'


Doh, somehow I misread your report and modified the json_object()
documentation instead after experimenting with it (so now the
ABSENT/NULL ON NULL clause is inconsistently described everywhere).
Would you mind submitting a patch fixing this mistake?


I think the json_object text was OK.  Attached are some changes where 
they were needed IMHO.


Erik



... and pushed it now, after some more meddling.

I'll rebase the rest of the series now.
--- doc/src/sgml/func.sgml.orig	2023-03-29 12:45:45.013598284 +0200
+++ doc/src/sgml/func.sgml	2023-03-29 14:24:41.966456134 +0200
@@ -15830,10 +15830,10 @@
  Constructs a JSON array from either a series of
  value_expression parameters or from the results
  of query_expression,
- which must be a SELECT query returning a single column. If
- ABSENT ON NULL is specified, NULL values are ignored.
- This is always the case if a
- query_expression is used.
+ which must be a SELECT query returning a single column. 
+ If the input is a series of value_expressions, NULL values are omitted
+ unless NULL ON NULL is specified.  If a query_expression is used NULLs
+ are always ignored.
 
 
  json_array(1,true,json '{"a":null}')
@@ -20310,13 +20310,14 @@
  ORDER BY sort_expression 
  { NULL | ABSENT } ON NULL 
  RETURNING data_type  FORMAT JSON  ENCODING UTF8   )
+json


 Behaves in the same way as json_array
 but as an aggregate function so it only takes one
 value_expression parameter.
-If ABSENT ON NULL is specified, any NULL
-values are omitted.
+NULL values are omitted unless NULL ON NULL
+is specified.
 If ORDER BY is specified, the elements will
 appear in the array in that order rather than in the input order.



Re: SQL/JSON revisited

2023-03-28 Thread Erik Rijkers

On 3/27/23 at 20:54, Alvaro Herrera wrote:

Docs amended as I threatened.  Other than that, this has required more


> [v12-0001-SQL-JSON-constructors.patch]
> [v12-0001-delta-uniqueifyJsonbObject-bugfix.patch]

In doc/src/sgml/func.sgml, some minor stuff:

'which specify the data type returned'  should be
'which specifies the data type returned'

In the json_arrayagg() description, it says:
'If ABSENT ON NULL is specified, any NULL values are omitted.'
That's true, but as omitting NULL values is the default (i.e., also 
without that clause) maybe it's better to say:

'Any NULL values are omitted unless NULL ON NULL is specified'


I've found no bugs in functionality.

Thanks,

Erik Rijkers




Re: SQL/JSON revisited

2023-02-20 Thread Erik Rijkers

On 20-02-2023 at 08:35, Amit Langote wrote:




Rebased again over queryjumble overhaul.



Hi,


But the following statement is a problem. It does not crash but it goes 
off, half-freezing the machine, and only comes back after frantic 
Ctrl-C'ing.


select json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object 
on error);


Can you have a look?

Thanks,

Erik Rijkers



PS
Log doesn't really have anything interesting:

2023-02-20 14:57:06.073 CET 1336 LOG:  server process (PID 1493) was 
terminated by signal 9: Killed
2023-02-20 14:57:06.073 CET 1336 DETAIL:  Failed process was running: 
select json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object 
on error);
2023-02-20 14:57:06.359 CET 1336 LOG:  terminating any other active 
server processes
2023-02-20 14:57:06.667 CET 1336 LOG:  all server processes terminated; 
reinitializing
2023-02-20 14:57:11.870 CET 1556 LOG:  database system was interrupted; 
last known up at 2023-02-20 14:44:43 CET





Re: OpenSSL 3.0.0 vs old branches

2023-02-08 Thread Erik Rijkers

On 08-02-2023 at 05:37, Tom Lane wrote:

Michael Paquier  writes:

On Tue, Feb 07, 2023 at 01:28:26PM -0500, Tom Lane wrote:

I think Peter's misremembering the history, and OpenSSL 3 *is*
supported in these branches.  There could be an argument for
not back-patching f0d2c65f17 on the grounds that pre-1.1.1 is
also supported there.  On the whole though, it seems more useful
today for that test to pass with 3.x than for it to pass with 0.9.8.
And I can't see investing effort to make it do both (but if Peter
wants to, I won't stand in the way).



Cutting support for 0.9.8 in oldest branches would be a very risky
move, but as you say, if that only involves a failure in the SSL
tests while still allowing anything we have to work, fine by me to
live with that.


Question: is anybody around here still testing with 0.9.8 (or 1.0.x)
at all?  The systems I had that had that version on them are dead.

regards, tom lane


I've hoarded an old CentOS 6.1 system that I don't really use anymore 
but sometimes (once every few weeks, I guess) start up and build master 
on, for instance to test with postgres_fdw/replication. Such a build 
would include a make check, and I think I would have noticed any failures.


That system says:
OpenSSL> OpenSSL 1.0.1e-fips 11 Feb 2013

FWIW, just now I built & ran check-world for 15 and 16 with 
PG_TEST_EXTRA=ssl (which I didn't use before).  Both finished ok.


Erik Rijkers




Re: logrep stuck with 'ERROR: int2vector has too many elements'

2023-01-15 Thread Erik Rijkers

On 1/15/23 12:33, Alvaro Herrera wrote:

On 2023-Jan-15, Erik Rijkers wrote:


Hello,

Logical replication sometimes gets stuck with
   ERROR:  int2vector has too many elements


Weird.  This error comes from int2vectorin which amusingly only wants to
read up to FUNC_MAX_ARGS values in the array (100 in the default config,
but it can be changed in pg_config_manual.h).  I wonder how come we
haven't noticed this before ... surely we use int2vector's for other
things than function argument lists nowadays.

At the same time, I don't understand why it fails in 16 but not in 15.
Maybe something changed in the way we process the column lists in 16?


I wrote this as a comment in the script, but that may be vague, so let me 
be more explicit: 16 also accepts many columns, up to 1600, without error, 
as long as that is not combined with generated column(s) as in the 
script. The combination quickly becomes problematic, though: adding just 
50 columns plus a generated column is still OK, but 100 is already 
too many (see the ADD_COLUMNS variable in my script).


Weird indeed.


Erik
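To visualize the limit Alvaro mentions — int2vectorin refusing more than FUNC_MAX_ARGS (100 by default, per pg_config_manual.h) elements — here is a toy Python analogue. Only the limit and the error text mirror the real code; everything else is invented for illustration:

```python
FUNC_MAX_ARGS = 100  # default in pg_config_manual.h

def int2vector_in(text):
    """Toy analogue of int2vectorin: parse space-separated int2 values,
    refusing vectors longer than FUNC_MAX_ARGS."""
    values = [int(tok) for tok in text.split()]
    if len(values) > FUNC_MAX_ARGS:
        raise ValueError("int2vector has too many elements")
    for v in values:
        if not -32768 <= v <= 32767:
            raise ValueError("value out of range for int2")
    return values

ok = int2vector_in(" ".join(str(i) for i in range(100)))
print(len(ok))  # 100
try:
    int2vector_in(" ".join(str(i) for i in range(101)))
except ValueError as e:
    print(e)  # int2vector has too many elements
```

This also shows why the limit is surprising in the column-list context: the cap was sized for function argument lists, not for tables, which may have up to 1600 columns.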




logrep stuck with 'ERROR: int2vector has too many elements'

2023-01-15 Thread Erik Rijkers

Hello,

Logical replication sometimes gets stuck with
  ERROR:  int2vector has too many elements

I can't find the exact circumstances that cause it but it has something 
to do with many columns (or adding many columns) in combination with 
perhaps generated columns.


This replication test, in a slightly different form, used to work. This 
is also suggested by the fact that the attached runs without errors in 
REL_15_STABLE but gets stuck in HEAD.


What it does: it initdbs and runs two instances, primary and replica. On 
the primary, 'pgbench -is1' is run, and many columns, including 1 generated 
column, are added to all 4 pgbench tables. This is then 
pg_dump/pg_restored to the replica, and a short pgbench is run. The 
result tables on primary and replica are compared for the final result. 
(To run it you will need some tweaks to directory and connection parms.)


I ran it on both v15 and v16 for 25 runs: with the parameters as given, 
15 has no problem while 16 always got stuck with the int2vector error. 
(15 can actually be pushed up to the max of 1600 columns per table 
without errors)


Both REL_15_STABLE and 16devel built from recent master on Debian 10, 
gcc 12.2.0.


I hope someone understands what's going wrong.

Thanks,

Erik Rijkers

logrepbug.sh
Description: application/shellscript


convey privileges -> confer privileges

2023-01-06 Thread Erik Rijkers

Can we change 'convey' to 'confer' in these recent doc changes?

Maybe 'convey a privilege' isn't exactly wrong but it leaves you 
wondering what exactly is meant.


Thanks,

Erik

--- doc/src/sgml/ref/createuser.sgml.orig	2023-01-05 21:37:35.803839575 +0100
+++ doc/src/sgml/ref/createuser.sgml	2023-01-05 21:38:14.700390046 +0100
@@ -47,7 +47,7 @@
CREATEROLE privilege.
Being a superuser implies the ability to bypass all access permission
checks within the database, so superuser access should not be granted
-   lightly. CREATEROLE also conveys
+   lightly. CREATEROLE also confers
very extensive privileges.
   
 
--- doc/src/sgml/user-manag.sgml.orig	2023-01-05 21:30:14.905548605 +0100
+++ doc/src/sgml/user-manag.sgml	2023-01-05 21:34:48.945471335 +0100
@@ -207,10 +207,10 @@
 SECURITY LABEL commands.


-However, CREATEROLE does not convey the ability to
-create SUPERUSER roles, nor does it convey any
+However, CREATEROLE does not confer the ability to
+create SUPERUSER roles, nor does it confer any
 power over SUPERUSER roles that already exist.
-Furthermore, CREATEROLE does not convey the power
+Furthermore, CREATEROLE does not confer the power
 to create REPLICATION users, nor the ability to
 grant or revoke the REPLICATION privilege, nor the
 ability to modify the role properties of such users.  However, it does


Re: Schema variables - new implementation for Postgres 15 (typo)

2022-12-13 Thread Erik Rijkers

On 14-12-2022 at 05:54, Pavel Stehule wrote:

Hi

fresh rebase


typo alert:

v20221214-0003-LET-command.patch contains

errmsg("target session varible is of type %s"

('varible' should be 'variable')

Erik




Re: New docs chapter on Transaction Management and related changes

2022-11-22 Thread Erik Rijkers

On 22-11-2022 at 19:00, Bruce Momjian wrote:

On Mon, Nov 21, 2022 at 11:15:36AM +0100, Laurenz Albe wrote:

   ..., while both columns will be set in read-write transactions.


Agreed, changed.  Updated patch attached.


In func.sgml:

'Only top-level transaction ID are'  should be
'Only top-level transaction IDs are'

'subtransaction ID are'  should be
'subtransaction IDs are'

In xact.sgml:

'Non-virtual TransactionId (or xid)' 
should be

'Non-virtual TransactionIds (or xids)'



Erik Rijkers






Re: allowing for control over SET ROLE

2022-11-18 Thread Erik Rijkers

On 18-11-2022 at 22:19, Robert Haas wrote:

On Fri, Nov 18, 2022 at 1:50 PM Erik Rijkers  wrote:

In grant.sgml,

'actualy permisions'

looks a bit unorthodox.


Fixed that, and the other mistake Álvaro spotted, and also bumped
catversion because I forgot that earlier.


Sorry to be nagging but

  'permisions'  should be
  'permissions'

as well.


And as I'm nagging anyway: I also wondered whether the word order could 
improve:


- Word order as it stands:
However, the actual permissions conferred depend on the options 
associated with the grant.


-- maybe better:
However, the permissions actually conferred depend on the options 
associated with the grant.


But I'm not sure.


Thanks,

Erik

Thanks,

Erik




Re: allowing for control over SET ROLE

2022-11-18 Thread Erik Rijkers

On 18-11-2022 at 19:43, Robert Haas wrote:

On Fri, Nov 18, 2022 at 12:50 PM Robert Haas  wrote:

Here's a rebased v3 to see what cfbot thinks.


cfbot is happy, so committed.


In grant.sgml,

  'actualy permisions'

looks a bit unorthodox.





Re: Code checks for App Devs, using new options for transaction behavior

2022-10-28 Thread Erik Rijkers

On 27-10-2022 at 18:35, Simon Riggs wrote:

On Thu, 27 Oct 2022 at 12:09, Simon Riggs  wrote:


Comments please


Update from patch tester results.



> [001_psql_parse_only.v1.patch ]
> [002_nested_xacts.v7.patch]
> [003_rollback_on_commit.v1.patch  ]
> [004_add_params_to_sample.v1.patch]


patch 002 has (2x) :
  'transction'  should be
  'transaction'

also in patch 002:
  'at any level will be abort'  should be
  'at any level will abort'

I also dislike the 'we' in

  'Once we reach the top-level transaction,'

That seems a bit too much like the 'we developers working together to 
make a database server system' which is of course used often and 
usefully on this mailinglist and in code itself.  But I think 
user-facing docs should be careful with that team-building 'we'.  I 
remember well how it confused me, many years ago.  Better, IMHO:


  'Once the top-level transaction is reached,'


Thanks,

Erik Rijkers




date_part/extract parse curiosity

2022-10-20 Thread Erik Rijkers

Hi,

I noticed that
  select date_part('millennium', now()); --> 3

will execute also, unperturbed, in this form:
  select date_part('millennium x', now()); --> 3

By the same token

  select extract(millennium from now()) --> 3
  select extract(millenniumx from now()) --> 3

This laxness occurs in all releases, and with 'millennium', 
'millisecond', and 'microsecond' (at least).


Even though it's not likely to cause many real-life headaches, and I 
hesitate to call it a real bug, perhaps it would be better if this could 
be a bit stricter.
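A guess at the mechanism, for what it's worth: if the server truncates unit keywords to a fixed token length before looking them up (datetime.c has a TOKMAXLEN of 10, if memory serves), then any field starting with the first ten characters of a long unit name would match. The constant and unit table in this sketch are illustrative, not the actual server internals:

```python
# Sketch of a keyword lookup that truncates before matching, which would
# explain why trailing junk after long unit names is silently accepted.
# TOKMAXLEN and the UNITS table are illustrative values only.
TOKMAXLEN = 10

UNITS = ["millennium", "millisecond", "microsecond", "century", "day"]

# Lookup table keyed on truncated names, as a fixed-width token array would be.
TRUNCATED = {u[:TOKMAXLEN]: u for u in UNITS}

def lookup_unit(field):
    """Return the matched unit name, or None if the field is unknown."""
    return TRUNCATED.get(field.lower().strip()[:TOKMAXLEN])

print(lookup_unit("millennium"))    # millennium
print(lookup_unit("millennium x"))  # millennium: truncated to 10 chars first
print(lookup_unit("milliseconds"))  # millisecond: same effect
print(lookup_unit("day x"))         # None: short unit names are unaffected
```

Note that 'millennium' is exactly ten characters, and 'millisecond' and 'microsecond' are eleven, which would fit the observation that only those names (at least) show the laxness.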


Thanks,

Erik Rijkers




Re: New docs chapter on Transaction Management and related changes

2022-10-13 Thread Erik Rijkers

On 13-10-2022 at 23:28, Bruce Momjian wrote:

On Tue, Sep 13, 2022 at 03:02:34PM +0100, Simon Riggs wrote:

Thanks Robert. I've tried to accommodate all of your thoughts, plus Alvaro's.

New v5 attached.

Happy to receive further comments.


I liked this patch very much.  It gives details on a lot of the
internals we expose to users.  Some of my changes were:

*  tightening the wording
*  restructuring the flow
*  splitting out user-visible details (prepared transactions) from
internals, e.g., xid, vxid, subtransactions
*  adding references from places in our docs to these new sections



[xact.diff]


I think that
  'This chapter explains how the control the reliability of'

should be:
'This chapter explains how to control the reliability of'


And in these lines:
+   together in a transactional manner.  The commands PREPARE
+   TRANSACTION, COMMIT PREPARED and
+   ROLLBACK PREPARED.  Two-phase transactions

'The commands'

should be
'The commands are'


thanks,

Erik Rijkers



I plan to apply this and backpatch it to all supported versions since
these details apply to all versions.  These docs should enable our users
to much better understand and monitor Postgres.

Updated patch attached.





Re: proposal: possibility to read dumped table's name from file

2022-09-12 Thread Erik Rijkers




On 12-09-2022 at 16:00, Erik Rijkers wrote:

On 12-09-2022 at 09:58, Daniel Gustafsson wrote:

On 9 Sep 2022, at 11:00, Andrew Dunstan  wrote:

On Sep 9, 2022, at 5:53 PM, John Naylor 
 wrote:



[v4-0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]


I noticed that pg_restore --filter cannot, or at least not always, be 
used with the same filter-file that was used to produce a dump with 
pg_dump --filter.


Is that as designed?  It seems a bit counterintuitive.  It'd be nice if 
that could be fixed.  Admittedly, the 'same' problem exists in 
pg_restore -t, which is also less than ideal.


(A messy bashdemo below)


I hope the issue is still clear, even though in the bash I sent I 
messed up the dumpfile name (i.e., the pg_dump creates a different dump 
name than the one given to pg_restore; they should use the same dump 
name, obviously).




thanks,

Erik Rijkers





Re: proposal: possibility to read dumped table's name from file

2022-09-12 Thread Erik Rijkers

On 12-09-2022 at 09:58, Daniel Gustafsson wrote:

On 9 Sep 2022, at 11:00, Andrew Dunstan  wrote:


On Sep 9, 2022, at 5:53 PM, John Naylor  wrote:



[v4-0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]


I noticed that pg_restore --filter cannot, or at least not always, be 
used with the same filter-file that was used to produce a dump with 
pg_dump --filter.


Is that as designed?  It seems a bit counterintuitive.  It'd be nice if 
that could be fixed.  Admittedly, the 'same' problem exists in 
pg_restore -t, which is also less than ideal.


(A messy bashdemo below)

thanks,

Erik Rijkers


#! /bin/bash
db2='testdb2' db3='testdb3'
db2='testdb_source' db3='testdb_target'
sql_dropdb="drop database if exists $db2; drop database if exists $db3;"
sql_createdb="create database $db2; create database $db3;"
schema1=s1  table1=table1  t1=$schema1.$table1
schema2=s2  table2=table2  t2=$schema2.$table2
sql_schema_init="create schema if not exists $schema1; create schema if 
not exists $schema2;"
sql_test="select '$t1', n from $t1 order by n; select '$t2', n from $t2 
order by n;"


function sqltest()
{
  for database_name in $db2 $db3 ;do
port_used=$( echo "show port" |psql -qtAX -d $database_name )
echo -n "-- $database_name ($port_used):  "
echo "$sql_test" | psql -qtAX -a -d $database_name | md5sum
  done
  echo
}

echo "setting up orig db $db2, target db $db3"
echo "$sql_dropdb"| psql -qtAX
echo "$sql_createdb"  | psql -qtAX

psql -X -d $db2 << SQL
$sql_schema_init
create table $t1 as select n from generate_series(1, (10^1)::int) as f(n);
create table $t2 as select n from generate_series(2, (10^2)::int) as f(n);
SQL
echo "
include table $t1
include table $t2
# include schema $s1
# include schema $s2
" > inputfile1.txt

# in filter; out plain
echo "-- pg_dump -F p -f plainfile1 --filter=inputfile1.txt -d $db2"
 pg_dump -F p -f plainfile1 --filter=inputfile1.txt -d $db2

echo "$sql_schema_init" | psql -qX -d $db3
echo  "-- pg_restore -d $db3 dumpfile1"
  pg_restore -d $db3 dumpfile1
  rc=$?
echo "-- pg_restore returned [$rc]  -- pg_restore without --filter"
sqltest

# enable this to see it fail
if [[ 1 -eq 1 ]]
then

# clean out
echo "drop schema $schema1 cascade; drop schema $schema2 cascade; " | 
psql -qtAXad $db3


# now restore WITH --filter=inputfile1.txt
echo "$sql_schema_init" | psql -qX -d $db3
echo "-- pg_restore -d $db3 --filter=inputfile1.txt dumpfile1"
 pg_restore -d $db3 --filter=inputfile1.txt dumpfile1
 rc=$?
echo "-- pg_restore returned [$rc]  -- pg_restore with --filter"
sqltest

fi
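The sqltest helper above compares the two databases by piping identical queries through psql and md5summing the output. The same check can be sketched in plain Python, with hashlib standing in for the md5sum call (the row data here is hypothetical):

```python
import hashlib

def output_digest(rows):
    """Mimic `psql -qtAX ... | md5sum`: one output row per line, then hash."""
    text = "\n".join(rows) + "\n"
    return hashlib.md5(text.encode()).hexdigest()

# Hypothetical query output from the source and the target database.
source_rows = ["s1.table1|1", "s1.table1|2", "s2.table2|2"]
target_rows = ["s1.table1|1", "s1.table1|2", "s2.table2|2"]

print("match" if output_digest(source_rows) == output_digest(target_rows)
      else "MISMATCH")  # prints "match"
```

Comparing digests rather than full output keeps the comparison cheap, but note it only tells you *that* the outputs differ, not where; it also relies on both queries ordering their rows deterministically (hence the ORDER BY in sql_test).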





Re: proposal: possibility to read dumped table's name from file

2022-09-08 Thread Erik Rijkers

On 07-09-2022 at 21:45, Daniel Gustafsson wrote:


One thing this patchversion currently lacks is refined error messaging, but if
we feel that this approach is a viable path then that can be tweaked.  The
function which starts the parser can also be refactored to be shared across
pg_dump, pg_dumpall and pg_restore but I've kept it simple for now.

Thoughts?  It would be nice to get this patch across the finishline during this
commitfest.


> [0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]

This seems to dump & restore well (as Pavels patch does).

I did notice one peculiarity (in your patch) where for each table a few 
spaces are omitted by pg_dump.


-
#! /bin/bash

psql -qXc "drop database if exists testdb2"
psql -qXc "create database testdb2"

echo "
create schema if not exists test;
create table table0 (id integer);
create table table1 (id integer);
insert into table0 select n from generate_series(1,2) as f(n);
insert into table1 select n from generate_series(1,2) as f(n);
" | psql -qXad testdb2

echo "include table table0" > inputfile1.txt

echo "include table table0
include table table1" > inputfile2.txt

# 1 table, emits 2 spaces
echo -ne ">"
pg_dump -F p -f plainfile1 --filter=inputfile1.txt -d testdb2
echo "<"

# 2 tables, emits 4 spaces
echo -ne ">"
pg_dump -F p -f plainfile2 --filter=inputfile2.txt -d testdb2
echo "<"

# dump without filter emits no spaces
echo -ne ">"
pg_dump -F c -f plainfile3 -t table0 -t table1 -d testdb2
echo "<"
-

It's probably a small thing -- but I didn't find it.
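For reference, the filter files used in these demos are line-oriented. A minimal reader can be sketched under the assumptions visible above (an `include`/`exclude` keyword, an object type, a pattern, and `#` comments); the real pg_dump grammar may well differ in details:

```python
def parse_filter_file(text):
    """Parse filter lines like 'include table table0' into tuples.

    Blank lines and '#' comments are skipped, mirroring the demo files.
    This is a sketch of the format as used in the demos above, not the
    authoritative pg_dump grammar.
    """
    entries = []
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 2)
        if len(parts) != 3 or parts[0] not in ("include", "exclude"):
            raise ValueError("line %d: cannot parse %r" % (lineno, raw))
        entries.append(tuple(parts))  # (keyword, object_type, pattern)
    return entries

demo = """\
include table table0
include table table1
# include schema s1
"""
print(parse_filter_file(demo))
# [('include', 'table', 'table0'), ('include', 'table', 'table1')]
```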

thanks,

Erik Rijkers


--
Daniel Gustafsson   https://vmware.com/






Re: PostgreSQL 15 Beta 4 release announcement draft

2022-09-06 Thread Erik Rijkers

On 07-09-2022 at 03:40, Jonathan S. Katz wrote:

Hi,

I've attached a draft of the PostgreSQL 15 Beta 4 release announcement. 
Please review for correctness and if there are any omissions.


Please provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.


'Fixes and changes in PostgreSQL 15 Beta 3 include:'  should be
'Fixes and changes in PostgreSQL 15 Beta 4 include:'


Erik




Re: New docs chapter on Transaction Management and related changes

2022-09-06 Thread Erik Rijkers

On 06-09-2022 at 17:16, Simon Riggs wrote:

New chapter on transaction management, plus a few related changes.

Markup and links are not polished yet, so please comment initially on
the topics, descriptions and wording.


[xact_docs.v2.patch] 


Very clear explanations, thank you.

Two typos:

'not yet yet part'  should be
'not yet part'

'extenal'  should be
'external'


Erik




json docs fix jsonb_path_exists_tz again

2022-09-02 Thread Erik Rijkers
In funcs.sgml, the value fed into jsonb_path_exists_tz was wrong; fixed 
as attached.

(was inadvertently reverted with the big JSON revert)

Erik Rijkers

--- doc/src/sgml/func.sgml.orig	2022-09-02 16:16:21.406405542 +0200
+++ doc/src/sgml/func.sgml	2022-09-02 16:17:41.751838806 +0200
@@ -16533,7 +16533,7 @@
 comparisons.


-jsonb_path_exists_tz('["2015-08-01 12:00:00 -05"]', '$[*] ? (@.datetime() < "2015-08-02".datetime())')
+jsonb_path_exists_tz('["2015-08-01 12:00:00-05"]', '$[*] ? (@.datetime() < "2015-08-02".datetime())')
 t

   


Re: Clarify restriction on partitioned tables primary key / unique indexes

2022-09-02 Thread Erik Rijkers



On 02-09-2022 at 11:44, David Rowley wrote:

Over on [1], there was a question about why it wasn't possible to
create the following table:

CREATE TABLE foobar(
 id BIGINT NOT NULL PRIMARY KEY,
 baz VARCHAR NULL DEFAULT NULL
) PARTITION BY HASH(my_func(id));


The attached attempts to clarify these restrictions more accurately
based on the current code's restrictions.

If there's no objections or suggestions for better wording, I'd like
to commit the attached.


Minimal changes:

'To create a unique or primary key constraints on partitioned table'

should be

'To create unique or primary key constraints on partitioned tables'


Erik




Re: Schema variables - new implementation for Postgres 15

2022-08-24 Thread Erik Rijkers

On 24-08-2022 at 08:37, Pavel Stehule wrote:




I fixed these.



> [v20220824-1-*.patch]

Hi Pavel,

I noticed just now that variable assignment (i.e., LET) unexpectedly 
(for me anyway) casts the type of the input value. Surely that's wrong? 
The documentation says clearly enough:


'The result must be of the same data type as the session variable.'


Example:

create variable x integer;
let x=1.5;
select x, pg_typeof(x);
 x | pg_typeof
---+---
 2 | integer
(1 row)


Is this correct?

If such casts (there are several) are intended then the text of the 
documentation should be changed.
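For what it's worth, the 2 looks like the result of an ordinary numeric-to-integer assignment cast, which rounds halves away from zero rather than truncating. A sketch of that rounding rule in Python (assuming that is indeed the cast being applied; note Python's built-in round() uses half-to-even instead):

```python
from decimal import Decimal, ROUND_HALF_UP

def numeric_to_int(value):
    """Round to integer with ties away from zero (1.5 -> 2, -1.5 -> -2),
    which is how the numeric-to-integer cast appears to behave."""
    return int(Decimal(value).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(numeric_to_int("1.5"))   # 2, matching the LET example above
print(numeric_to_int("-1.5"))  # -2
print(round(1.5), round(2.5))  # 2 2: Python's round() ties to even
```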


Thanks,

Erik





Re: Column Filtering in Logical Replication

2022-08-22 Thread Erik Rijkers

On 22-08-2022 at 10:27, Peter Smith wrote:


PSA new set of v2* patches.


Hi,

In the second file a small typo, I think:

"enclosed by parenthesis"  should be
"enclosed by parentheses"

thanks,
Erik





Re: Schema variables - new implementation for Postgres 15

2022-08-21 Thread Erik Rijkers

On 21-08-2022 at 09:54, Pavel Stehule wrote:

On Sun 21 Aug 2022 at 6:36, Julien Rouhaud wrote:


On Sat, Aug 20, 2022 at 08:44:49PM +0200, Erik Rijkers wrote:

On 20-08-2022 at 20:09, Pavel Stehule wrote:




should be fixed now> 



Yep, all tests OK now.
Thanks!

Erik





Re: Schema variables - new implementation for Postgres 15

2022-08-20 Thread Erik Rijkers

On 20-08-2022 at 20:09, Pavel Stehule wrote:

Hi


  LET public.svar2 = (10, 20, 30);
  ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
  SELECT public.svar;
- svar

- (10,)
+  svar
+-
+ (10,16)
  (1 row)

  SELECT public.svar2;
svar2
  -
   (10,30)
  (1 row)



I hope so I found this error. It should be fixed
 > [patches v20220820-1-0001 -> 0012]



I'm afraid it still gives the same errors during 'make check', and 
again it only errors when compiling without --enable-cassert.


Thanks,

Erik



Regards

Pavel
diff -U3 /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out
--- /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out	2022-08-20 20:12:34.558069463 +0200
+++ /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out	2022-08-20 20:21:38.028404785 +0200
@@ -990,9 +990,9 @@
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 -- should to fail too (different type, different generation number);
 SELECT public.svar;
-   svar   
---
- (10,20,)
+   svar
+---
+ (10,20,0)
 (1 row)
 
 LET public.svar = ROW(10,20,30);
@@ -,17 +,17 @@
 LET public.svar = (10, 20);
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 SELECT public.svar;
-   svar   
---
- (10,20,)
+svar
+
+ (10,20,16)
 (1 row)
 
 LET public.svar2 = (10, 20, 30);
 ALTER TYPE public.svar_test_type DROP ATTRIBUTE b;
 SELECT public.svar;
- svar  

- (10,)
+  svar   
+-
+ (10,16)
 (1 row)
 
 SELECT public.svar2;


Re: Schema variables - new implementation for Postgres 15

2022-08-20 Thread Erik Rijkers




On 20-08-2022 at 15:41, Pavel Stehule wrote:

On Sat 20 Aug 2022 at 15:36, Erik Rijkers wrote:


On 20-08-2022 at 15:32, Erik Rijkers wrote:

On 19-08-2022 at 17:29, Pavel Stehule wrote:

make check  fails as a result of the errors in the attached
session_variables.out.




Sorry, that should have been this diffs file, of course (attached).



It looks like some problem with not well initialized memory, but I have no
idea how it is possible. What are your configure options?



I compiled both assert-enabled and 'normal', and I only just noticed 
that the assert-enabled one did pass the tests normally.



Below is the config that produced the failing tests:

./configure 
--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables 
--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/bin.fast 
--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.schema_variables/lib.fast 
--with-pgport=6986 --quiet --enable-depend --with-openssl --with-perl 
--with-libxml --with-libxslt --with-zlib  --enable-tap-tests 
--with-extra-version=_0820_schema_variables_1509 --with-lz4 --with-icu



(debian 9, gcc 12.2.0)





Erik







Re: Schema variables - new implementation for Postgres 15

2022-08-20 Thread Erik Rijkers

On 20-08-2022 at 15:32, Erik Rijkers wrote:

On 19-08-2022 at 17:29, Pavel Stehule wrote:

make check  fails as a result of the errors in the attached 
session_variables.out.





Sorry, that should have been this diffs file, of course (attached).


Erik

diff -U3 /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out
--- /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out	2022-08-20 15:03:24.767103554 +0200
+++ /home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out	2022-08-20 15:10:47.679172066 +0200
@@ -990,9 +990,9 @@
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 -- should to fail too (different type, different generation number);
 SELECT public.svar;
-   svar   
---
- (10,20,)
+   svar
+---
+ (10,20,0)
 (1 row)
 
 LET public.svar = ROW(10,20,30);
@@ -,9 +,9 @@
 LET public.svar = (10, 20);
 ALTER TYPE public.svar_test_type ADD ATTRIBUTE c int;
 SELECT public.svar;
-   svar   
---
- (10,20,)
+svar
+
+ (10,20,16)
 (1 row)
 
 LET public.svar2 = (10, 20, 30);


Re: Schema variables - new implementation for Postgres 15

2022-08-20 Thread Erik Rijkers

On 19-08-2022 at 17:29, Pavel Stehule wrote:

On Fri 19 Aug 2022 at 15:57, Pavel Stehule wrote:


Hi

I am sending fresh update

- enhanced work with composite types - now the used composite type can be
enhanced, reduced and stored value is converted to expected format
- enhancing find_composite_type_dependencies to support session variables,
so the type of any field of used composite type cannot be changed



update - fix cpp check


v20220819-2-0001-Catalogue-support-for-session-variables.patch
v20220819-2-0002-session-variables.patch
v20220819-2-0003-typecheck-check-of-consistency-of-format-of-stored-v.patch
v20220819-2-0004-LET-command.patch
v20220819-2-0005-Support-of-LET-command-in-PLpgSQL.patch
v20220819-2-0006-DISCARD-VARIABLES-command.patch
v20220819-2-0007-Enhancing-psql-for-session-variables.patch
v20220819-2-0008-Possibility-to-dump-session-variables-by-pg_dump.patch
v20220819-2-0009-typedefs.patch
v20220819-2-0010-Regress-tests-for-session-variables.patch
v20220819-2-0011-fix.patch
v20220819-2-0012-This-patch-changes-error-message-column-doesn-t-exis.patch
v20220819-2-0013-documentation.patch

make check  fails as a result of the errors in the attached 
session_variables.out.



Erik




Regards

Pavel

CREATE SCHEMA svartest;
SET search_path = svartest;
CREATE VARIABLE var1 AS integer;
CREATE TEMP VARIABLE var2 AS text;
DROP VARIABLE var1, var2;
-- functional interface
CREATE VARIABLE var1 AS numeric;
CREATE ROLE var_test_role;
GRANT USAGE ON SCHEMA svartest TO var_test_role;
SET ROLE TO var_test_role;
-- should fail
SELECT var1;
ERROR:  permission denied for session variable var1
SET ROLE TO DEFAULT;
GRANT READ ON VARIABLE var1 TO var_test_role;
SET ROLE TO var_test_role;
-- should fail
LET var1 = 10;
ERROR:  permission denied for session variable var1
-- should work
SELECT var1;
 var1 
--
 
(1 row)

SET ROLE TO DEFAULT;
GRANT WRITE ON VARIABLE var1 TO var_test_role;
SET ROLE TO var_test_role;
-- should work
LET var1 = 333;
SET ROLE TO DEFAULT;
REVOKE ALL ON VARIABLE var1 FROM var_test_role;
CREATE OR REPLACE FUNCTION secure_var()
RETURNS int AS $$
  SELECT svartest.var1::int;
$$ LANGUAGE sql SECURITY DEFINER;
SELECT secure_var();
 secure_var 

333
(1 row)

SET ROLE TO var_test_role;
-- should fail
SELECT svartest.var1;
ERROR:  permission denied for session variable var1
-- should work;
SELECT secure_var();
 secure_var 

333
(1 row)

SET ROLE TO DEFAULT;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM generate_series(1,100) g(v) WHERE v 
= var1;
  QUERY PLAN   
---
 Function Scan on pg_catalog.generate_series g
   Output: v
   Function Call: generate_series(1, 100)
   Filter: ((g.v)::numeric = var1)
(4 rows)

CREATE VIEW schema_var_view AS SELECT var1;
SELECT * FROM schema_var_view;
 var1 
--
  333
(1 row)

\c -
SET search_path = svartest;
-- should work still, but var will be empty
SELECT * FROM schema_var_view;
 var1 
--
 
(1 row)

LET var1 = pi();
SELECT var1;
   var1   
--
 3.14159265358979
(1 row)

-- we can see execution plan of LET statement
EXPLAIN (VERBOSE, COSTS OFF) LET var1 = pi();
 QUERY PLAN 

 SET SESSION VARIABLE
 Result
   Output: 3.14159265358979
(3 rows)

SELECT var1;
   var1   
--
 3.14159265358979
(1 row)

CREATE VARIABLE var3 AS int;
CREATE OR REPLACE FUNCTION inc(int)
RETURNS int AS $$
BEGIN
  LET svartest.var3 = COALESCE(svartest.var3 + $1, $1);
  RETURN var3;
END;
$$ LANGUAGE plpgsql;
SELECT inc(1);
 inc 
-
   1
(1 row)

SELECT inc(1);
 inc 
-
   2
(1 row)

SELECT inc(1);
 inc 
-
   3
(1 row)

SELECT inc(1) FROM generate_series(1,10);
 inc 
-
   4
   5
   6
   7
   8
   9
  10
  11
  12
  13
(10 rows)

SET ROLE TO var_test_role;
-- should fail
LET var3 = 0;
ERROR:  permission denied for session variable var3
SET ROLE TO DEFAULT;
DROP VIEW schema_var_view;
DROP VARIABLE var1 CASCADE;
DROP VARIABLE var3 CASCADE;
-- composite variables
CREATE TYPE sv_xyz AS (x int, y int, z numeric(10,2));
CREATE VARIABLE v1 AS sv_xyz;
CREATE VARIABLE v2 AS sv_xyz;
\d v1
\d v2
LET v1 = (1,2,3.14);
LET v2 = (10,20,3.14*10);
-- should work too - there are prepared casts
LET v1 = (1,2,3.14);
SELECT v1;
 v1 

 (1,2,3.14)
(1 row)

SELECT v2;
  v2   
---
 (10,20,31.40)
(1 row)

SELECT (v1).*;
 x | y |  z   
---+---+--
 1 | 2 | 3.14
(1 row)

SELECT (v2).*;
 x  | y  |   z   
++---
 10 | 20 | 31.40
(1 row)

SELECT v1.x + v1.z;
 ?column? 
--
 4.14
(1 row)

SELECT v2.x + v2.z;
 ?column? 
--
41.40
(1 row)

-- access to composite fields should be safe too
-- should fail
SET ROLE TO var_test_role;
SELECT v2.x;
ERROR:  permission denied for session variable v2
SET ROLE TO DEFAULT;
DROP VARIABLE v1;
DROP VARIABLE v2;
REVOKE USAGE ON SCHEMA svartest FROM var_test_role;
DROP ROLE var_test_role;

Re: fix typos

2022-08-02 Thread Erik Rijkers

On 02-08-2022 at 07:28, John Naylor wrote:


On Tue, Aug 2, 2022 at 1:05 AM Erik Rijkers wrote:

 >
 > Recent typos...

The part of the sentence inside parentheses is not clear to me, before 
or after the patch:


     Dropping an extension causes its component objects, and other 
explicitly

     dependent routines (see ,
-   the depends on extension action), to be dropped as well.
+   that depend on extension action), to be dropped as well.
    



Hm, I see what you mean, I did not notice that earlier and I won't make 
a guess as to intention.  Maybe Bruce can have another look? (commit 
5fe2d4c56e)




--
John Naylor
EDB: http://www.enterprisedb.com





fix typos

2022-08-01 Thread Erik Rijkers

Recent typos...
--- ./doc/src/sgml/ref/drop_extension.sgml.orig	2022-08-01 19:38:18.249729884 +0200
+++ ./doc/src/sgml/ref/drop_extension.sgml	2022-08-01 19:40:33.312359069 +0200
@@ -32,7 +32,7 @@
DROP EXTENSION removes extensions from the database.
Dropping an extension causes its component objects, and other explicitly
dependent routines (see ,
-   the depends on extension action), to be dropped as well.
+   that depend on extension action), to be dropped as well.
   
 
   
@@ -80,8 +80,8 @@
 
  
   This option prevents the specified extensions from being dropped
-  if there exists non-extension-member objects that depends on any
-  the extensions.  This is the default.
+  if there exist non-extension-member objects that depend on any
+  of the extensions.  This is the default.
  
 

--- ./doc/src/sgml/ref/psql-ref.sgml.orig	2022-08-01 19:58:21.000644788 +0200
+++ ./doc/src/sgml/ref/psql-ref.sgml	2022-08-01 19:59:00.417285655 +0200
@@ -2864,7 +2864,7 @@
   
 
   
-   page: truncate the the header line at the 
terminal
+   page: truncate the header line at the terminal
width.
   
 


Re: Schema variables - new implementation for Postgres 15

2022-07-24 Thread Erik Rijkers

On 7/22/22 10:58, Pavel Stehule wrote:


On Thu 21 Jul 2022 at 9:34, Julien Rouhaud wrote:
 

> [v20220722] patches

Hi Pavel,

Thanks, docs now build.

Attached a few small text-changes.

Also, the pg_restore-doc still has the old name  'schema_variable' 
instead of session_variable:


-A schema_variable
--variable=schema_variable

Surely those should be changed as well.

Erik Rijkers

--- ./doc/src/sgml/ref/alter_variable.sgml.orig	2022-07-22 12:18:47.905664314 +0200
+++ ./doc/src/sgml/ref/alter_variable.sgml	2022-07-22 12:34:52.928275143 +0200
@@ -28,7 +28,7 @@
 
  
 
-ALTER VARIABLE name OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
+ALTER VARIABLE name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
 ALTER VARIABLE name RENAME TO new_name
 ALTER VARIABLE name SET SCHEMA new_schema
 
--- ./doc/src/sgml/ref/create_variable.sgml.orig	2022-07-22 12:18:13.877300499 +0200
+++ ./doc/src/sgml/ref/create_variable.sgml	2022-07-24 14:52:50.587010931 +0200
@@ -58,10 +58,10 @@
   

 Inside a query or an expression, the session variable can be shadowed by
-column or by routine's variable or routine argument. The collision of
-identifiers can be solved by using qualified identifiers. Session variables
-can use schema name, columns can use table aliases, routine's variables
-can use block's labels, and routine's arguments can use routine name.
+column or by routine's variable or routine argument. Such collisions of
+identifiers can be resolved by using qualified identifiers. Session variables
+can use schema name, columns can use table aliases, routine variables
+can use block labels, and routine arguments can use the routine name.

   
  
@@ -74,21 +74,16 @@
 IMMUTABLE
 
  
-  The assigned value of the session variable can never be changed.
-  If the session variable doesn't have a default value, then a single
+  The assigned value of the session variable can not be changed.
+  Only if the session variable doesn't have a default value, a single
   initialization is allowed using the LET command. Once
-  done, no other change will be allowed until end of transcation
-  (when session variable was created with clause ON TRANSACTION
-  END RESET or until reset of all session variables by
-  DISCARD VARIABLES or until reset of all session
+  done, no further change is allowed until end of transaction
+  if the session variable was created with clause ON TRANSACTION
+  END RESET, or until reset of all session variables by
+  DISCARD VARIABLES, or until reset of all session
   objects by command DISCARD ALL.
  
 
- 
-  When IMMUTABLE session variable has assigned
-  default value by using DEFAULT expr clause, then
-  the value of session variable cannot be changed ever.
- 
 

 
@@ -152,7 +147,7 @@
  
   The DEFAULT clause can be used to assign a default
   value to a session variable. This expression is evaluated when the session
-  variable is accessed for reading first time, and had not assigned any value.
+  variable is first accessed for reading and had not yet been assigned a value.
  
 

--- ./doc/src/sgml/ref/let.sgml.orig	2022-07-22 12:04:25.096021723 +0200
+++ ./doc/src/sgml/ref/let.sgml	2022-07-24 15:05:39.038636565 +0200
@@ -58,7 +58,7 @@
 sql_expression
 
  
-  An SQL expression. The result is cast to the data type of the session
+  An SQL expression, in parentheses. The result must be of the same data type as the session
   variable.
  
 
@@ -69,7 +69,7 @@
 
  
   Reset the session variable to its default value, if that is defined.
-  If no explicit default value has been assigned, the session variable
+  If no explicit default value has been declared, the session variable
   is set to NULL.
  
 


Re: Schema variables - new implementation for Postgres 15

2022-07-21 Thread Erik Rijkers

On 7/21/22 08:16, Pavel Stehule wrote:

Hi

new update of session variable;s implementation

- fresh rebase
- new possibility to trace execution with DEBUG1 notification
- new SRF function pg_debug_show_used_session_variables that returns 
content of sessionvars hashtab
- redesign of work with list of variables for reset, recheck, on commit 
drop, on commit reset


Hi Pavel,

I don't know exactly what failed but the docs (html/pdf) don't build:


cd ~/pg_stuff/pg_sandbox/pgsql.schema_variables/doc/src/sgml

$ make html
/usr/bin/xmllint --path . --noout --valid postgres.sgml
postgres.sgml:374: element link: validity error : IDREF attribute 
linkend references an unknown ID "catalog-pg-variable"

make: *** [Makefile:135: html-stamp] Error 4



Erik Rijkers




Regards

Pavel






Re: SQL/JSON documentation JSON_TABLE

2022-07-15 Thread Erik Rijkers

On 7/14/22 17:45, Andrew Dunstan wrote:


On 2022-07-08 Fr 16:20, Andrew Dunstan wrote:

On 2022-07-08 Fr 16:03, Erik Rijkers wrote:

Hi,

Attached are a few small changes to the JSON_TABLE section in func.sgml.

The first two changes are simple typos.

Then there was this line:


context_item, path_expression [ AS json_path_name ] [ PASSING { value
AS varname } [, ...]]


those are the parameters to JSON_TABLE() so I changed that line to:


JSON_TABLE(context_item, path_expression [ AS json_path_name ] [
PASSING { value AS varname } [, ...]])


Some parts of the JSON_TABLE text strike me as opaque.  For instance,
there are paragraphs that more than once use the term:
    json_api_common_syntax

'json_api_common_syntax' is not explained.  It turns out it's a relic
from Nikita's original docs. I dug up a 2018 patch where the term is
used as:

 2018:
JSON_TABLE (
  json_api_common_syntax [ AS path_name ]
  COLUMNS ( json_table_column [, ...] )
  (etc...)


with explanation:

 2018:
json_api_common_syntax:
    The input data to query, the JSON path expression defining the
query, and an optional PASSING clause.


So that made sense then (input+jsonpath+params=api), but it doesn't
now fit as such in the current docs.

I think it would be best to remove all uses of that compound term, and
rewrite the explanations using only the current parameter names
(context_item, path_expression, etc).


Thanks for this. If you want to follow up that last sentence I will try
to commit a single fix early next week.


Here's a patch that deals with most of this. There's one change you
wanted that I don't think is correct, which I omitted.

[json-docs-fix.patch]


Thanks, much better. I also agree that the change I proposed (and you 
omitted) wasn't great (although it leaves the paragraph somewhat 
orphaned, but maybe that isn't too bad).


I've now compared our present document not only with the original doc as 
produced by Nikita Glukhov et al in 2018, but also with the ISO draft 
from 2017 (ISO/IEC TR 19075-6, SQL support for JavaScript Object Notation (JSON)).


I think we can learn a few things from that ISO draft's JSON_TABLE text. 
Let me copy-paste its first explicatory paragraph on JSON_TABLE:


-- [ ISO SQL/JSON draft 2017 ] -
Like the other JSON querying operators, JSON_TABLE begins with <JSON API 
common syntax> to specify the context item, path expression and PASSING 
clause. The path expression in this case is more accurately called the 
row pattern path expression. This path expression is intended to produce 
an SQL/JSON sequence, with one SQL/JSON item for each row of the output 
table.


The COLUMNS clause can define two kinds of columns: ordinality columns 
and regular columns.


An ordinality column provides a sequential numbering of rows. Row 
numbering is 1-based.


A regular column supports columns of scalar type. The column is produced 
using the semantics of JSON_VALUE. The column has an optional path 
expression, called the column pattern, which can be defaulted from the 
column name. The column pattern is used to search for the column within 
the current SQL/JSON item produced by the row pattern. The column also 
has optional ON EMPTY and ON ERROR clauses, with the same choices and 
semantics as JSON_VALUE.

--


So, where the ISO draft introduces the term 'row pattern' it /also/ 
introduces the term 'column pattern' close by, in the next paragraph.


I think our docs too should have both terms.  The presence of both 'row 
pattern' and 'column pattern' immediately makes their meanings obvious. 
 At the moment our docs only use the term 'row pattern', for all the 
JSON_TABLE json path expressions (also those in the COLUMN clause, it 
seems).



At the moment, we say, in the JSON_TABLE doc:

To split the row pattern into columns, json_table provides the COLUMNS 
clause that defines the schema of the created view.



I think that to use 'row pattern' here is just wrong, or at least 
confusing.  The 'row pattern' is /not/ the data as produced from the 
json expression; the 'row pattern' /is/ the json path expression.  (ISO 
draft: 'The path expression in this case is more accurately called the 
row pattern path expression.' )
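
To make the distinction concrete, here is a small sketch (invented data; 
JSON_TABLE syntax as currently committed in 15devel) showing where each 
kind of pattern lives:

```sql
SELECT jt.*
FROM json_table(
       jsonb '{"films": [{"title": "Bananas"}, {"title": "Psycho"}]}',
       '$.films[*]'                  -- row pattern: one SQL/JSON item per output row
       COLUMNS (
         seq   FOR ORDINALITY,
         title text PATH '$.title'   -- column pattern: evaluated per row, JSON_VALUE semantics
       )
     ) AS jt;
```

The row pattern drives how many rows come out; each column pattern is 
then applied to the current row's item.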


If you agree with my reasoning I can try to rewrite these bits in our 
docs accordingly.



Erik Rijkers




SQL/JSON documentation JSON_TABLE

2022-07-08 Thread Erik Rijkers

Hi,

Attached are a few small changes to the JSON_TABLE section in func.sgml.

The first two changes are simple typos.

Then there was this line:


context_item, path_expression [ AS json_path_name ] [ PASSING { value AS 
varname } [, ...]]



those are the parameters to JSON_TABLE() so I changed that line to:


JSON_TABLE(context_item, path_expression [ AS json_path_name ] [ PASSING 
{ value AS varname } [, ...]])



Some parts of the JSON_TABLE text strike me as opaque.  For instance, 
there are paragraphs that more than once use the term:

   json_api_common_syntax

'json_api_common_syntax' is not explained.  It turns out it's a relic 
from Nikita's original docs. I dug up a 2018 patch where the term is 
used as:


 2018:
JSON_TABLE (
 json_api_common_syntax [ AS path_name ]
 COLUMNS ( json_table_column [, ...] )
 (etc...)


with explanation:

 2018:
json_api_common_syntax:
   The input data to query, the JSON path expression defining the 
query, and an optional PASSING clause.



So that made sense then (input+jsonpath+params=api), but it doesn't now 
fit as such in the current docs.


I think it would be best to remove all uses of that compound term, and 
rewrite the explanations using only the current parameter names 
(context_item, path_expression, etc).


But I wasn't sure and I haven't done any such changes in the attached.

Perhaps I'll give it a try during the weekend.


Erik Rijkers


--- ./doc/src/sgml/func.sgml.orig	2022-07-08 19:46:46.018505707 +0200
+++ ./doc/src/sgml/func.sgml	2022-07-08 20:47:35.488303254 +0200
@@ -18026,7 +18026,7 @@
 or array, but if it is CONDITIONAL it will not be
 applied to a single array or object. UNCONDITIONAL
 is the default.
-If the result is a a scalar string, by default the value returned will have
+If the result is a scalar string, by default the value returned will have
 surrounding quotes making it a valid JSON value. However, this behavior
 is reversed if OMIT QUOTES is specified.
 The ON ERROR and ON EMPTY
@@ -18097,7 +18097,7 @@
columns. Columns produced by NESTED PATHs at the
same level are considered to be siblings,
while a column produced by a NESTED PATH is
-   considered to be a child of the column produced by and
+   considered to be a child of the column produced by a
NESTED PATH or row expression at a higher level.
Sibling columns are always joined first. Once they are processed,
the resulting rows are joined to the parent row.
@@ -18106,7 +18106,7 @@
   

 
- context_item, path_expression  AS json_path_name   PASSING { value AS varname } , ...
+ JSON_TABLE(context_item, path_expression  AS json_path_name   PASSING { value AS varname } , ...)
 
 
 


ERROR: operator does not exist: json = json

2022-07-08 Thread Erik Rijkers

Hi,

Comparison of 2 values of type jsonb is allowed.

Comparison of 2 values of type json gives an error.

That seems like an oversight -- or is it deliberate?

Example:

select '42'::json = '{}'::json;
--> ERROR:  operator does not exist: json = json

(of course, easily 'solved' by casting but that's not really the point)
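
For completeness, the workaround looks like this (jsonb, unlike json, 
has a full set of comparison operators, and it normalizes its input):

```sql
-- json has no equality operator, but casting to jsonb works:
SELECT '42'::json::jsonb = '{}'::json::jsonb;            -- f
SELECT '{"a":1}'::json::jsonb = '{"a": 1}'::json::jsonb; -- t (jsonb normalizes whitespace)
```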


Thanks,

Erik Rijkers








Re: PostgreSQL 15 beta 2 release announcement draft

2022-06-29 Thread Erik Rijkers

Op 29-06-2022 om 02:04 schreef Jonathan S. Katz:

Hi,

Attached is a draft of the release announcement for PostgreSQL 15 Beta 
2. Please provide feedback on technical accuracy and if there are 
glaring omissions.


Hardly 'glaring' but still:

'Multiples fixes'  should be
'Multiple fixes'



Please provide any feedback prior to 2022-06-22 0:00 AoE.

Thanks,

Jonathan





Re: JSON/SQL: jsonpath: incomprehensible error message

2022-06-29 Thread Erik Rijkers

Op 29-06-2022 om 15:00 schreef Amit Kapila:

On Mon, Jun 27, 2022 at 8:46 PM Andrew Dunstan  wrote:


On 2022-06-26 Su 11:44, Erik Rijkers wrote:

JSON/SQL jsonpath

For example, a jsonpath string with deliberate typo 'like_regexp'
(instead of 'like_regex'):

select js
from (values (jsonb '{}')) as f(js)
where js @? '$ ? (@ like_regexp "^xxx")';

ERROR:  syntax error, unexpected IDENT_P at or near " " of jsonpath input
LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@
li...

Both  'IDENT_P'  and  'at or near " "'  seem pretty useless.




removing this. One thing that is not clear to me is why OP sees an
acceptable message (ERROR:  syntax error, unexpected invalid token at
or near "=" of jsonpath input) for a similar query in 14?


To mention that was perhaps unwise of me because the IDENT_P (or, more 
generally, *_P) messages can be provoked on 14 too. I just thought 
'invalid token' might be a better message because 'token' gives a more 
direct association with 'errors during parsing' which I assume is the 
case here.


IDENT_P or ANY_P convey exactly nothing.


Erik







Re: PostgreSQL 15 beta 2 release announcement draft

2022-06-29 Thread Erik Rijkers

Op 29-06-2022 om 02:04 schreef Jonathan S. Katz:

Hi,



'not advise you to run PostgreSQL 15 Beta 1'should be
'not advise you to run PostgreSQL 15 Beta 2'


Erik




JSON/SQL: jsonpath: incomprehensible error message

2022-06-26 Thread Erik Rijkers

JSON/SQL jsonpath

For example, a jsonpath string with deliberate typo 'like_regexp' 
(instead of 'like_regex'):


select js
from (values (jsonb '{}')) as f(js)
where js @? '$ ? (@ like_regexp "^xxx")';

ERROR:  syntax error, unexpected IDENT_P at or near " " of jsonpath input
LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@ li...
 ^

Both  'IDENT_P'  and  'at or near " "'  seem pretty useless.

Perhaps some improvement can be thought of?

Similar messages in release 14 seem to use 'invalid token', which is better:

select js
from (values (jsonb '{"a":"b"}')) as f(js)
where js @? '$ ? (@.a .= "b")';
ERROR:  syntax error, unexpected invalid token at or near "=" of 
jsonpath input


thanks,
Erik Rijkers





Re: Parse CE and BCE in dates and times

2022-06-13 Thread Erik Rijkers

Op 13-06-2022 om 07:51 schreef David Fetter:

Folks,

Please find attached a patch to do $Subject. As dates in a fair number
of fields of endeavor are expressed this way, it seems reasonable to
ensure that we can parse them on input. Making it possible to use them
in output is a more invasive patch, and would involve changes to
to_date and similar that would require careful consideration.


Hi David,

I find some unexpected results:

# select '112-04-30 BC'::date;
 date
---
 0112-04-30 BC
(1 row)

but the same with the ' BCE' suffix seems broken:

# select '112-04-30 BCE'::date;
ERROR:  invalid input syntax for type date: "112-04-30 BCE"
LINE 1: select '112-04-30 BCE'::date;

The same goes for '112-04-30 AD' (works) and its CE version (errors out).

Or is this as expected?


Erik Rijkers










Best,
David.





funcs.sgml - wrong example

2022-05-17 Thread Erik Rijkers

funcs.sgml  has

  42 <@ '{[1,7)}'::int4multirange

and calls it true.  The attached fixes that.

Included are two more changes where actual output differs a bit from 
what the doc examples show.


Erik

--- doc/src/sgml/func.sgml.orig	2022-05-17 17:50:40.975410855 +0200
+++ doc/src/sgml/func.sgml	2022-05-17 17:55:22.492016016 +0200
@@ -1046,7 +1046,7 @@


 @ -5.0
-5
+5.0

   
 
@@ -1463,7 +1463,7 @@


log(2.0, 64.0)
-   6.00
+   6.0000000000000000

   
 
@@ -21359,7 +21359,7 @@


 42 <@ '{[1,7)}'::int4multirange
-t
+f

   
 


Re: JSON Functions and Operators Docs for v15

2022-05-16 Thread Erik Rijkers

Op 16-05-2022 om 20:53 schreef Andrew Dunstan:


On 2022-05-16 Mo 13:52, Erik Rijkers wrote:

-
2.

This example in func.sgml says it gives 't' but on my instance it
returns 'f'.  Is the example correct?

jsonb_path_exists_tz('["2015-08-01 12:00:00 -05"]', '$[*] ?
(@.datetime() < "2015-08-02".datetime())') → t


Yeah, it doesn't like the format of the timestamp literal. It works with
"2015-08-01T12:00:0 -05". I'll fix the example in the next version.


That doesn't work either, in my hands.  It seems the offending 
characteristic is the presence of the second space, before  -05




cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com






Re: JSON Functions and Operators Docs for v15

2022-05-16 Thread Erik Rijkers

Op 16-05-2022 om 16:49 schreef Andrew Dunstan:


[sqljson-dox-rework-2.patch]


Two issues, derived from func.sgml:

-
1.

I noticed that some json functions, for instance json_object(), in their 
output insert unexpected spaces before the separator-colon:


testdb=# select json_object('{a, 1, b, "def", c, 3.5}');

  json_object
---
 {"a" : "1", "b" : "def", "c" : "3.5"}
(1 row)

instead of the expected
 {"a": "1", "b": "def", "c": "3.5"}

Of course not outright wrong but wouldn't it make more sense to 
normalize such output?  There is here no reason in the input to space 
the colon on both sides.


Functions that yield this peculiarly spaced output are:
  json_object
  json_objectagg
  json_build_object

-
2.

This example in func.sgml says it gives 't' but on my instance it 
returns 'f'.  Is the example correct?


jsonb_path_exists_tz('["2015-08-01 12:00:00 -05"]', '$[*] ? 
(@.datetime() < "2015-08-02".datetime())') → t



Thanks,

Erik



andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: JSON Functions and Operators Docs for v15

2022-05-14 Thread Erik Rijkers


Not done yet but here's where I'm at. If I'm on the wrong track or
missing things that should be done please let me know.


[sqljson-dox-rework.patch] 



Here are a few errors/typos/improvements.

I've added (=copied from the old docs) the CREATE TABLE for the my_films 
table so that the more complicated json_table examples can be run easily.



Erik Rijkers



--
Andrew Dunstan
EDB: https://www.enterprisedb.com--- doc/src/sgml/func.sgml.orig	2022-05-14 06:32:28.564537299 +0200
+++ doc/src/sgml/func.sgml	2022-05-14 08:10:05.313313154 +0200
@@ -16287,7 +16287,7 @@


 jsonb_set_lax('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', null)
-[{"f1":null,"f2":null},2,null,3]
+[{"f1": null, "f2": null}, 2, null, 3]


 jsonb_set_lax('[{"f1":99,"f2":null},2]', '{0,f3}', null, true, 'return_target')
@@ -17682,7 +17682,7 @@
 object keys.


-json('{"a" 123, "b":[true,"foo"], "a":"bar"}')
+json('{"a":123, "b":[true,"foo"], "a":"bar"}')
 {"a":123, "b":[true,"foo"], "a":"bar"}


@@ -17959,7 +17959,7 @@


 json_exists(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR)
-ERROR: Invalid SQL/JSON subscript
+ERROR:  jsonpath array subscript is out of bounds
   
  
  
@@ -17990,11 +17990,11 @@
 of the ON EMPTY clause.


-json_value('"123.45"', '$' RETURNING float)
+json_value('"123.45"'::jsonb, '$' RETURNING float)
 123.45


-json_value('"03:04 2015-02-01"', '$.datetime("HH24:MI YYYY-MM-DD")' RETURNING date)
+json_value('"03:04 2015-02-01"'::jsonb, '$.datetime("HH24:MI YYYY-MM-DD")' RETURNING date)
 2015-02-01


@@ -18111,7 +18111,7 @@
 
 
  The input data to query, the JSON path expression defining the query,
- and an optional PASSING clause, which can privide data
+ and an optional PASSING clause, which can provide data
  values to the path_expression.
  The result of the input data
  evaluation is called the row pattern. The row
@@ -18409,6 +18409,31 @@
   Examples
 
  
+ In these examples the following small table storing some JSON data will be used:
+
+CREATE TABLE my_films ( js jsonb );
+
+INSERT INTO my_films VALUES (
+'{ "favorites" : [
+   { "kind" : "comedy", "films" : [
+ { "title" : "Bananas",
+   "director" : "Woody Allen"},
+ { "title" : "The Dinner Game",
+   "director" : "Francis Veber" } ] },
+   { "kind" : "horror", "films" : [
+ { "title" : "Psycho",
+   "director" : "Alfred Hitchcock" } ] },
+   { "kind" : "thriller", "films" : [
+ { "title" : "Vertigo",
+   "director" : "Alfred Hitchcock" } ] },
+   { "kind" : "drama", "films" : [
+ { "title" : "Yojimbo",
+   "director" : "Akira Kurosawa" } ] }
+  ] }');
+
+ 
+
+ 
   Query the my_films table holding
   some JSON data about the films and create a view that
   distributes the film genre, title, and director between separate columns:
@@ -18427,7 +18452,7 @@
  1  | comedy   | Bananas  | Woody Allen
  1  | comedy   | The Dinner Game  | Francis Veber
  2  | horror   | Psycho   | Alfred Hitchcock
- 3  | thriller | Vertigo  | Hitchcock
+ 3  | thriller | Vertigo  | Alfred Hitchcock
  4  | drama| Yojimbo  | Akira Kurosawa
  (5 rows)
 


Re: First draft of the PG 15 release notes

2022-05-10 Thread Erik Rijkers

Op 10-05-2022 om 17:44 schreef Bruce Momjian:

I have completed the first draft of the PG 15 release notes and you can
see the results here:

 https://momjian.us/pgsql_docs/release-15.html


typos:

'accept empty array'  should be
'to accept empty array'

'super users'  should be
'superusers'
   (several times)

'The new options causes the column names'
'The new options cause the column names'

'were always effected.'
'were always affected.'
   (I think...)

'Previous the actual schema'
'Previously the actual schema'

'inforcement'  should be
'enforcement'
   (surely?)

'server slide'  should be
'server side'

'Add extensions to define their own'  should be
'Allow extensions to define their own'


And one strangely unfinished sentence:
'They also can only be'


Erik Rijkers




gcc 12.1.0 warning

2022-05-06 Thread Erik Rijkers

Hi,

Not sure if these compiler-mutterings are worth reporting but I guess 
we're trying to get a silent compile.


System: Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux
Compiling with gcc 12.1.0 causes the below 'warning' and 'note'.
(Compiling with --enable-cassert --enable-debug is silent, no warnings.)

In function ‘guc_var_compare’,
inlined from ‘bsearch’ at 
/usr/include/x86_64-linux-gnu/bits/stdlib-bsearch.h:33:23,

inlined from ‘find_option’ at guc.c:5649:35:
guc.c:5736:38: warning: array subscript ‘const struct config_generic[0]’ 
is partly outside array bounds of ‘const char[8]’ [-Warray-bounds]

 5736 | return guc_name_compare(confa->name, confb->name);
  | ~^~

guc.c: In function ‘find_option’:
guc.c:5636:25: note: object ‘name’ of size 8
 5636 | find_option(const char *name, bool create_placeholders, bool 
skip_errors,

  | ^~~~

(Compiling with gcc 6.3.0 does not complain.)

Below are the two configure lines, FWIW.


Erik Rijkers


# cassert-build: no warning/note
./configure \
--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD \
--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin \
--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib \
--with-pgport=6515 --quiet --enable-depend \
--enable-cassert --enable-debug \
--with-openssl --with-perl --with-libxml --with-libxslt --with-zlib \
--enable-tap-tests --with-extra-version=_0506_HEAD_701d --with-lz4


# normal build: causes warning/note:
./configure \
--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD \
--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin.fast \
--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib.fast \
--with-pgport=6515 --quiet --enable-depend \
--with-openssl --with-perl --with-libxml --with-libxslt --with-zlib \
--enable-tap-tests --with-extra-version=_0506_HEAD_701d --with-lz4





Re: SQL/JSON: FOR ORDINALITY bug

2022-05-04 Thread Erik Rijkers

Op 04-05-2022 om 21:12 schreef Andrew Dunstan:




I don't see how rowseq can be anything but 1.  Each invocation of



After some further experimentation, I now think you must be right, David.

Also, looking at the DB2 docs:
   https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table
     (see especially under 'Handling nested information')

There, I gathered some example data + statements, one of which is the 
case at hand.  I also made them runnable under postgres (attached).

I thought that was an instructive example, with those
'outer_ordinality' and 'inner_ordinality' columns.




Yeah, I just reviewed the latest version of that page (7.5) and the
example seems fairly plain that we are doing the right thing, or if not
we're in pretty good company, so I guess this is probably a false alarm.
Looks like ordinality is for the number of the element produced by the
path expression. So a path of 'lax $' should just produce ordinality of
1 in each case, while a path of 'lax $[*]' will produce increasing
ordinality for each element of the root array.
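
A minimal sketch of that reading (json_table as in the patch under 
discussion; the input literal is invented):

```sql
-- Row pattern 'lax $': the whole document is a single row, so the
-- FOR ORDINALITY column can only ever be 1:
SELECT * FROM json_table(jsonb '["a", "b", "c"]', 'lax $'
                         COLUMNS (seq FOR ORDINALITY));

-- Row pattern 'lax $[*]': each array element becomes its own row,
-- so the ordinality column counts 1, 2, 3:
SELECT * FROM json_table(jsonb '["a", "b", "c"]', 'lax $[*]'
                         COLUMNS (seq FOR ORDINALITY));
```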


Agreed.

You've probably noticed then that on that same page under 'Sibling 
Nesting' is a statement that gives a 13-row resultset on DB2 whereas in 
15devel that statement yields just 10 rows.  I don't know which is correct.



Erik





cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com






Re: SQL/JSON: FOR ORDINALITY bug

2022-05-04 Thread Erik Rijkers

Op 04-05-2022 om 13:55 schreef Andrew Dunstan:


On 2022-05-03 Tu 20:39, David G. Johnston wrote:

On Tue, May 3, 2022 at 5:27 PM Andrew Dunstan  wrote:


 On 2022-05-03 Tu 11:19, Erik Rijkers wrote:
 > Hi
 >
 > I've copied some statements from the .pdf called:
 > "TECHNICAL REPORT ISO/IEC TR 19075-6   First edition 2017-03
Part 6: SQL support for JavaScript Object Notation (JSON)"
 > (not available anymore although there should be a similar
 replacement
 > file)
 >
 > In that pdf I found the data and statement (called 'table 15' in the
 > .pdf) as in the attached bash file.  But the result is different: as
 > implemented by 15devel, the column rowseq is always 1.  It seems
 to me
 > that that is wrong; it should count 1, 2, 3 as indeed the
 > example-result column in that pdf shows.
 >
 > What do you think?
 >
 >

 Possibly.


I don't see how rowseq can be anything but 1.  Each invocation of



After some further experimentation, I now think you must be right, David.

Also, looking at the DB2 docs:
  https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table
(see especially under 'Handling nested information')

There, I gathered some example data + statements, one of which is the 
case at hand.  I also made them runnable under postgres (attached).


I thought that was an instructive example, with those 'outer_ordinality' 
and 'inner_ordinality' columns.


Erik



json_table is given a single jsonb record via the lateral reference to
bookclub.jcol.  It produces one result, having a rowseq 1.  It does
this for all three outer lateral reference tuples and thus produces
three output rows each with one match numbered rowseq 1.



I imagine we could overcome that by stashing the sequence counter
somewhere it would survive across calls. The question really is what is
the right thing to do? I'm also a bit worried about how correct is
ordinal numbering with nested paths, e.g. (from the regression tests):


select
     jt.*
from
     jsonb_table_test jtt,
     json_table (
     jtt.js,'strict $[*]' as p
     columns (
     n for ordinality,
     a int path 'lax $.a' default -1 on empty,
     nested path 'strict $.b[*]' as pb columns ( b int path '$' ),
     nested path 'strict $.c[*]' as pc columns ( c int path '$' )
     )
     ) jt;
  n | a  | b | c
---++---+
  1 |  1 |   |
  2 |  2 | 1 |
  2 |  2 | 2 |
  2 |  2 | 3 |
  2 |  2 |   | 10
  2 |  2 |   |
  2 |  2 |   | 20
  3 |  3 | 1 |
  3 |  3 | 2 |
  4 | -1 | 1 |
  4 | -1 | 2 |


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com


json_ordinality_db2.sh
Description: application/shellscript


SQL/JSON: FOR ORDINALITY bug

2022-05-03 Thread Erik Rijkers

Hi

I've copied some statements from the .pdf called:
"TECHNICAL REPORT ISO/IEC TR 19075-6   First edition 2017-03
Part 6: SQL support for JavaScript Object Notation (JSON)"
(not available anymore although there should be a similar replacement file)

In that pdf I found the data and statement (called 'table 15' in the 
.pdf) as in the attached bash file.  But the result is different: as 
implemented by 15devel, the column rowseq is always 1.  It seems to me 
that that is wrong; it should count 1, 2, 3 as indeed the example-result 
column in that pdf shows.


What do you think?

Erik Rijkers

table15bug.sh
Description: application/shellscript


Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-20 Thread Erik Rijkers

Op 20-04-2022 om 06:54 schreef Kyotaro Horiguchi:

At Tue, 19 Apr 2022 10:55:26 -0700, Andres Freund  wrote in

Hi,

On 2022-04-19 10:36:24 -0700, Andres Freund wrote:

On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:

The 12th run of statbug.sh crashed and gave a corefile.


I ran through quite a few iterations by now, without reproducing :(

I guess there's some timing issue and you're hitting on your system
due to the slower disks.


Ah. I found the issue. The new pgstat_report_stat(true) call in
LogicalRepApplyLoop()'s "timeout" section doesn't check if we're in a
transaction. And the transactional stats code doesn't handle that (never
has).

I think all that's needed is a if (IsTransactionState()) around that
pgstat_report_stat().


if (!IsTransactionState()) ?


It might be possible to put an assertion into pgstat_report_stat(), but
I need to look at the process exit code to see if it is.


Inserting a sleep in pgoutput_commit_txn reproduced this. Crashes with
the same stack trace with the similar variable state.

diff --git a/src/backend/replication/pgoutput/pgoutput.c 
b/src/backend/replication/pgoutput/pgoutput.c
index b197bfd565..def4d751d3 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -568,6 +568,7 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx, 
ReorderBufferTXN *txn,
return;
}
  
+	sleep(2);

OutputPluginPrepareWrite(ctx, true);
logicalrep_write_commit(ctx->out, txn, commit_lsn);
OutputPluginWrite(ctx, true);

The following actually works for this.

diff --git a/src/backend/replication/logical/worker.c 
b/src/backend/replication/logical/worker.c
index 4171371296..f4e5359513 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -2882,10 +2882,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
send_feedback(last_received, requestReply, 
requestReply);
  
  			/*

-* Force reporting to ensure long idle periods don't 
lead to
-* arbitrarily delayed stats.
+* Force reporting to ensure long out-of-transaction 
idle periods
+* don't lead to arbitrarily delayed stats.
 */
-   pgstat_report_stat(true);
+   if (!IsTransactionState())
+   pgstat_report_stat(true);
}
}
  


Yes, that seems to fix it: I applied that latter patch, and ran my 
program 250x without errors. Then I removed it again and it gave the 
error within 15x.


thanks!

Erik



regards.






Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-19 Thread Erik Rijkers

Op 19-04-2022 om 19:36 schreef Andres Freund:

Hi,

On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:

The 12th run of statbug.sh crashed and gave a corefile.


I ran through quite a few iterations by now, without reproducing :(

I guess there's some timing issue and you're hitting on your system
due to the slower disks.



Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x00b3bada in ExceptionalCondition (conditionName=0xd389a1
"tabstat->trans == trans", errorType=0xd388b2 "FailedAssertion",
fileName=0xd388a0 "pgstat_relation.c", lineNumber=508) at assert.c:69
#3  0x009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,
isCommit=true) at pgstat_relation.c:508
#4  0x009c4107 in AtEOXact_PgStat (isCommit=true, parallel=false) at
pgstat_xact.c:54
#5  0x00583764 in CommitTransaction () at xact.c:2360
#6  0x00584354 in CommitTransactionCommand () at xact.c:3048
#7  0x0090b34e in apply_handle_commit_internal
(commit_data=0x7ffd024b5940) at worker.c:1532
#8  0x0090a287 in apply_handle_commit (s=0x7ffd024b59b0) at
worker.c:845
#9  0x0090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473
#10 0x0090d41c in LogicalRepApplyLoop (last_received=74680880) at
worker.c:2757
#11 0x0090e974 in start_apply (origin_startpos=0) at worker.c:3526
#12 0x0090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782
#13 0x008c7623 in StartBackgroundWorker () at bgworker.c:858
#14 0x008d1557 in do_start_bgworker (rw=0x30ff0a0) at
postmaster.c:5802
#15 0x008d1903 in maybe_start_bgworkers () at postmaster.c:6026
#16 0x008d09ba in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5191
#17 <signal handler called>
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x008cc6c1 in ServerLoop () at postmaster.c:1757
#20 0x008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at
postmaster.c:1465
#21 0x007c9256 in main (argc=11, argv=0x30d6590) at main.c:202
(gdb) f 3
#3  0x009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,
isCommit=true) at pgstat_relation.c:508
508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = (PgStat_TableStatus *) 0x319e630
(gdb) p *tabstat
$2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f,
t_counts = {t_numscans = 9187201950435737471, t_tuples_returned =
9187201950435737471, t_tuples_fetched = 9187201950435737471,
 t_tuples_inserted = 9187201950435737471, t_tuples_updated =
9187201950435737471, t_tuples_deleted = 9187201950435737471,
t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,
 t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples =
9187201950435737471, t_changed_tuples = 9187201950435737471,
t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},
   relation = 0x7f7f7f7f7f7f7f7f}
(gdb) p trans
$3 = (PgStat_TableXactStatus *) 0x31b1ba8
(gdb) p *trans
$4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0,
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop = 0,
deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,
   parent = 0x319e630, next = 0x31b1ab8}
(gdb)


Could you print out
p xact_state
p *xact_state
p xact_state->first
p *xact_state->first

Do you have the server log file for the failed run / instance?



(gdb) p xact_state
$5 = (PgStat_SubXactStatus *) 0x31b1b50

(gdb) p *xact_state
$6 = {nest_level = 1, prev = 0x0, pending_drops = {head = {prev = 
0x31b1b60, next = 0x31b1b60}}, pending_drops_count = 0, first = 0x31b1ba8}


(gdb) p xact_state->first
$7 = (PgStat_TableXactStatus *) 0x31b1ba8

(gdb) p *xact_state->first
$8 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, 
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop 
= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,

  parent = 0x319e630, next = 0x31b1ab8}
(gdb)


The logfile is attached.


Erik



Greetings,

Andres Freund2022-04-19 13:33:42.944 CEST [24981] LOG:  starting PostgreSQL 
15devel_HEAD_20220419_1308_a62bff74b135 on x86_64-pc-linux-gnu, compiled by gcc 
(GCC) 8.2.0, 64-bit
2022-04-19 13:33:42.945 CEST [24981] LOG:  listening on IPv4 address 
"127.0.0.1", port 6526
2022-04-19 13:33:43.010 CEST [24981] LOG:  listening on Unix socket 
"/tmp/.s.PGSQL.6526"
2022-04-19 13:33:43.085 CEST [24991] LOG:  database system was shut down at 
2022-04-19 13:33:41 CEST
2022-04-19 13:33:43.181 CEST [24981] LOG:  database system is ready to accept 
connections
2022-04-19 13:33:51.040 CEST [25047] LOG:  received replication command: 
CREATE_REPLICATION_SLOT "pub_6527_from_6526" LOGICAL pgoutput (SNAPSHOT 
'nothing')
2022-04-19 13:33:51.040 CEST [250

Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-19 Thread Erik Rijkers

Op 19-04-2022 om 11:25 schreef Kyotaro Horiguchi:

Thaks Erik.

At Tue, 19 Apr 2022 07:00:30 +0200, Erik Rijkers  wrote in

Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:

Could you read tabstat, *tabstat, trans, *trans here?


To be honest I'm not sure how to, but I gave it a try:




I rebuilt newest master (a62bff74b135)  with

export CUSTOM_COPT="-O0 -g"

The 12th run of statbug.sh crashed and gave a corefile.


GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from 
/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.

[New LWP 25058]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: logical replication worker for 
subscription 16411   '.

Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x00b3bada in ExceptionalCondition (conditionName=0xd389a1 
"tabstat->trans == trans", errorType=0xd388b2 "FailedAssertion", 
fileName=0xd388a0 "pgstat_relation.c", lineNumber=508) at assert.c:69
#3  0x009bf5dc in AtEOXact_PgStat_Relations 
(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508
#4  0x009c4107 in AtEOXact_PgStat (isCommit=true, 
parallel=false) at pgstat_xact.c:54

#5  0x00583764 in CommitTransaction () at xact.c:2360
#6  0x00584354 in CommitTransactionCommand () at xact.c:3048
#7  0x0090b34e in apply_handle_commit_internal 
(commit_data=0x7ffd024b5940) at worker.c:1532
#8  0x0090a287 in apply_handle_commit (s=0x7ffd024b59b0) at 
worker.c:845

#9  0x0090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473
#10 0x0090d41c in LogicalRepApplyLoop (last_received=74680880) 
at worker.c:2757

#11 0x0090e974 in start_apply (origin_startpos=0) at worker.c:3526
#12 0x0090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782
#13 0x008c7623 in StartBackgroundWorker () at bgworker.c:858
#14 0x008d1557 in do_start_bgworker (rw=0x30ff0a0) at 
postmaster.c:5802

#15 0x008d1903 in maybe_start_bgworkers () at postmaster.c:6026
#16 0x008d09ba in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5191

#17 <signal handler called>
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x008cc6c1 in ServerLoop () at postmaster.c:1757
#20 0x008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at 
postmaster.c:1465

#21 0x007c9256 in main (argc=11, argv=0x30d6590) at main.c:202
(gdb) f 3
#3  0x009bf5dc in AtEOXact_PgStat_Relations 
(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508

508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = (PgStat_TableStatus *) 0x319e630
(gdb) p *tabstat
$2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f, 
t_counts = {t_numscans = 9187201950435737471, t_tuples_returned = 
9187201950435737471, t_tuples_fetched = 9187201950435737471,
t_tuples_inserted = 9187201950435737471, t_tuples_updated = 
9187201950435737471, t_tuples_deleted = 9187201950435737471, 
t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,
t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples = 
9187201950435737471, t_changed_tuples = 9187201950435737471, 
t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},

  relation = 0x7f7f7f7f7f7f7f7f}
(gdb) p trans
$3 = (PgStat_TableXactStatus *) 0x31b1ba8
(gdb) p *trans
$4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, 
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop 
= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,

  parent = 0x319e630, next = 0x31b1ab8}
(gdb)



Looks like we're one step further, no?


Erik
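[Editor's aside, not part of the original thread: the dumped tabstat above is filled with 0x7f bytes (t_id = 2139062143 = 0x7F7F7F7F, trans = 0x7f7f7f7f7f7f7f7f), which is the fill pattern that assert-enabled PostgreSQL builds write into freed memory when CLOBBER_FREED_MEMORY is defined - i.e. tabstat appears to point at an already-freed entry. A quick check of the numbers, in Python:]

```python
# Sanity-check that the values printed by gdb are exactly the 0x7f fill
# pattern that assert-enabled PostgreSQL builds write into freed memory
# (CLOBBER_FREED_MEMORY / wipe_mem), i.e. tabstat points at freed memory.
t_id = 2139062143               # tabstat->t_id from the core dump
trans_ptr = 0x7f7f7f7f7f7f7f7f  # tabstat->trans from the core dump
t_numscans = 9187201950435737471

assert t_id == 0x7f7f7f7f                # four 0x7f bytes
assert t_numscans == 0x7f7f7f7f7f7f7f7f  # eight 0x7f bytes
assert trans_ptr.to_bytes(8, "big") == b"\x7f" * 8
print("tabstat contents match the freed-memory fill pattern")
```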



(gdb) p tabstat
$1 = <optimized out>


Great! That's it.  But unfortunately the values are optimized out.  Could
you cause the crash with a -O0 binary?  Then you will see the variables.
You can rebuild with that option as follows.

$ make clean; make install CUSTOM_COPT="-O0 -g"

You could dump the whole xact_state chain from the current core
file, but the result would give only an obscure hint for diagnosis.

regards.






Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-18 Thread Erik Rijkers

On 19-04-2022 at 02:15 Kyotaro Horiguchi wrote:

At Mon, 18 Apr 2022 10:57:02 +0200, Erikjan Rijkers  wrote in

Hm.  Just now I recompiled and retried, and after 5 runs got the
same crash.  Then I tried on another machine (also old, I'm afraid),
built 1a8b11053 and ran the same thing.  That failed on the first
try and produced a core dump, from which I extracted:


Thanks!


gdb ~/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres
core-postgres-6-500-500-8289-1650269886 -ex bt -ex q

#2  0x00973fcb in ExceptionalCondition
    (conditionName=conditionName@entry=0xb20d76 "tabstat->trans == trans",
     errorType=errorType@entry=0x9c7c2b "FailedAssertion",
     fileName=fileName@entry=0xb20d0b "pgstat_relation.c",
     lineNumber=lineNumber@entry=508) at assert.c:69
#3  0x0086b77f in AtEOXact_PgStat_Relations
    (xact_state=xact_state@entry=0x26f0b50, isCommit=isCommit@entry=true)
    at pgstat_relation.c:508


Could you read tabstat, *tabstat, trans, *trans here?


To be honest I'm not sure how to, but I gave it a try:

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from 
/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.

[New LWP 21839]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: logical replication worker for 
subscription 16411   '.

Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x0097400b in ExceptionalCondition 
(conditionName=conditionName@entry=0xb20df6 "tabstat->trans == trans", 
errorType=errorType@entry=0x9c7cab "FailedAssertion",
fileName=fileName@entry=0xb20d8b "pgstat_relation.c", 
lineNumber=lineNumber@entry=508) at assert.c:69
#3  0x0086b7bf in AtEOXact_PgStat_Relations 
(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at 
pgstat_relation.c:508
#4  0x0086ec4f in AtEOXact_PgStat (isCommit=isCommit@entry=true, 
parallel=parallel@entry=false) at pgstat_xact.c:54

#5  0x005bd2a3 in CommitTransaction () at xact.c:2360
#6  0x005be5d5 in CommitTransactionCommand () at xact.c:3048
#7  0x007ee76b in apply_handle_commit_internal 
(commit_data=commit_data@entry=0x7fffb90aa8e0) at worker.c:1532
#8  0x007efb09 in apply_handle_commit (s=0x7fffb90aaa80) at 
worker.c:845

#9  apply_dispatch () at worker.c:2473
#10 0x007f11e7 in LogicalRepApplyLoop (last_received=74695984) 
at worker.c:2757

#11 start_apply () at worker.c:3526
#12 0x007f179f in ApplyWorkerMain () at worker.c:3782
#13 0x007bdbb3 in StartBackgroundWorker () at bgworker.c:858
#14 0x007c3251 in do_start_bgworker (rw=<optimized out>) at 
postmaster.c:5802

#15 maybe_start_bgworkers () at postmaster.c:6026
#16 0x007c3b75 in sigusr1_handler 
(postgres_signal_arg=<optimized out>) at postmaster.c:5191

#17 <signal handler called>
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x007c41e6 in ServerLoop () at postmaster.c:1757
#20 0x007c5c4b in PostmasterMain () at postmaster.c:1465
#21 0x00720d0e in main (argc=11, argv=0x2cbf590) at main.c:202
(gdb) f 3
#3  0x0086b7bf in AtEOXact_PgStat_Relations 
(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at 
pgstat_relation.c:508

508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = <optimized out>
(gdb) p *tabstat
value has been optimized out
(gdb) p trans
$2 = <optimized out>
(gdb) p *trans
value has been optimized out
(gdb)





#4  0x0086ec0f in AtEOXact_PgStat (isCommit=isCommit@entry=true,
    parallel=parallel@entry=false) at pgstat_xact.c:54
#5  0x005bd2a3 in CommitTransaction () at xact.c:2360
#6  0x005be5d5 in CommitTransactionCommand () at xact.c:3048
#7  0x007ee72b in apply_handle_commit_internal
    (commit_data=commit_data@entry=0x7ffe4606a7a0) at worker.c:1532
#8  0x007efac9 in apply_handle_commit (s=0x7ffe4606a940) at
    worker.c:845
#9  apply_dispatch () at worker.c:2473
#10 0x007f11a7 in LogicalRepApplyLoop (last_received=74454600)
    at worker.c:2757
#11 start_apply () at worker.c:3526
#12 0x007f175f in ApplyWorkerMain () at worker.c:3782

regards.






TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-16 Thread Erik Rijkers

Hi,

I get this crash running the attached test program. On my old, 
slow-disked desktop it occurs once in 20 or so runs (it is yet another 
installment of an old test that runs pgbench with logical replication).


15devel compiled from d3609dd25.

(The bash script deletes stuff, and outside my environment it will need 
some tweaking.)


Thanks,

Erik Rijkers





statbug.sh
Description: application/shellscript


Re: TRAP: FailedAssertion("HaveRegisteredOrActiveSnapshot()", File: "toast_internals.c", Line: 670, PID: 19403)

2022-03-29 Thread Erik Rijkers

On 29-03-2022 at 12:50 Matthias van de Meent wrote:

On Tue, 29 Mar 2022 at 11:10, Kyotaro Horiguchi  wrote:


At Tue, 29 Mar 2022 17:06:21 +0900 (JST), Kyotaro Horiguchi 
 wrote in

At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi 
 wrote in
Then, I found that portal->holdSnapshot is that.  I came up with the
attached.  It does the follows:

1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.

2. Use holdSnapshot in RunFromStore if any.


The reproducer is reduced to the following.

CREATE TABLE t (a text);
INSERT INTO t VALUES('some random text');
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t;
FETCH ALL IN c;

But I haven't come up with a reasonable way to generate the 'some
random text' yet.


I gave up and took a straightforward way to generate one.

I don't like that it uses a fixed length for the random text, but
anyway it works for now...


A shorter (?) reproducer might be the following, which forces any
value for 'a' to be toasted, thus triggering the check in
init_toast_snapshot regardless of value length:

CREATE TABLE t (a text);
ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;
INSERT INTO t VALUES ('toast');
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t;
FETCH ALL IN c;


Excellent. That indeed immediately forces the error.

(and the patch prevents it)

Thanks!




Enjoy,

-Matthias





TRAP: FailedAssertion("HaveRegisteredOrActiveSnapshot()", File: "toast_internals.c", Line: 670, PID: 19403)

2022-03-27 Thread Erik Rijkers

Hi,

On master I got a FailedAssertion("HaveRegisteredOrActiveSnapshot()")
on an assert-enabled instance, with (I think) data over a certain length.

I whittled it down to the attached bash script (careful - it drops 
stuff).  It has 5 TSV data lines (one long line) that COPY slurps into a 
table.  The middle (third) line causes the problem later on.  Shortening 
the long line to somewhere below 2000 characters fixes it again.
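[Editor's aside, not in the original mail: the ~2000-character cutoff is consistent with the point where a text value starts being stored out of line, since TOAST kicks in once a tuple exceeds roughly 2 kB on default 8 kB pages. A hedged Python sketch for generating a TSV line long enough to cross that boundary; the two-column layout is made up for illustration:]

```python
import random
import string

# Build one TSV row whose text column should be long enough to be
# toasted.  ~2 kB is the usual TOAST_TUPLE_THRESHOLD on 8 kB pages; the
# exact cutover depends on the tuple's other columns, so aim well above.
def make_long_row(n_chars: int = 2500) -> str:
    payload = "".join(random.choices(string.ascii_letters, k=n_chars))
    return "1\t" + payload

row = make_long_row()
assert len(row) > 2000   # comfortably past the reported ~2000-char boundary
```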


More info in the attached .sh file.

If debug-assert is 'off', the problem does not occur. (REL_14_STABLE 
also does not have the problem, assertions or not)


thanks,

Erik Rijkers



bugsnapshot.sh
Description: application/shellscript


Re: SQL/JSON: JSON_TABLE

2022-03-26 Thread Erik Rijkers

On 25-03-2022 at 21:30 Andrew Dunstan wrote:


On 3/22/22 10:55, Daniel Gustafsson wrote:

On 22 Mar 2022, at 16:31, Andrew Dunstan  wrote:
I'm planning on pushing the functions patch set this week and json-table
next week.

My comments from 30827b3c-edf6-4d41-bbf1-298181874...@yesql.se are yet to be
addressed (or at all responded to) in this patchset.  I'll paste the ones which
still apply to make it easier:




I think I have fixed all those. See attached. I haven't prepared a new
patch set for SQL/JSON functions because there's just one typo to fix,
but I won't forget it. Please let me know if there's anything else you see.


At this stage I think I have finished with the actual code, and I'm
concentrating on improving the docs a bit.


> [ v59 ]


FWIW, I went through func.sgml (of v59) once.


Erik Rijkers
--- doc/src/sgml/func.sgml.orig	2022-03-25 22:17:13.908660140 +0100
+++ doc/src/sgml/func.sgml	2022-03-26 12:08:46.593271826 +0100
@@ -17673,8 +17673,8 @@
  Description
 
  
-  JSON function generates a JSON
-  from a text data.
+  The JSON function generates JSON
+  from text data.
  
 
 
@@ -17688,7 +17688,7 @@
 
  
   String expression that provides the JSON text data.
-  Accepted any character strings (text, char, etc.)
+  Accepts any character strings (text, char, etc.)
   or binary strings (bytea) in UTF8 encoding.
   For null input, SQL null value is returned.
  
@@ -17757,7 +17757,7 @@
 
  Examples
  
-  Construct a JSON the provided strings:
+  Construct JSON using the provided strings:
  
 
 SELECT JSON('{ "a" : 123, "b": [ true, "foo" ], "a" : "bar" }');
@@ -17794,8 +17794,8 @@
  Description
 
  
-  JSON_SCALAR function generates a scalar
-  JSON from a SQL data.
+  The JSON_SCALAR function generates scalar
+  JSON from SQL data.
  
 
 
@@ -17808,11 +17808,11 @@
 
 
  
-  Expression that provides the data for constructing a
+  Expression that provides the data for constructing
   JSON.
   For null input, SQL  null
-  (not a JSON null) value is returned.
-  For any scalar other than a number, a Boolean, the text representation
+  (not JSON null) value is returned.
+  For any scalar other than a number or a Boolean, the text representation
   will be used, with escaping as necessary to make it a valid
   JSON string value.
   For details, see
@@ -17847,7 +17847,7 @@
 
  Examples
  
-  Construct a JSON from the provided values various types:
+  Construct JSON from provided values of various types:
  
 
 SELECT JSON_SCALAR(123.45);
@@ -18753,7 +18753,7 @@
 
 -- Strict mode with ERROR on ERROR clause
 SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR);
-ERROR: Invalid SQL/JSON subscript
+ERROR:  jsonpath array subscript is out of bounds
 (1 row)
 
 
@@ -18795,11 +18795,11 @@
  Description
 
   
-   JSON_VALUE function extracts a value from the provided
+   The JSON_VALUE function extracts a value from the provided
JSON data and converts it to an SQL scalar.
If the specified JSON path expression returns more than one
SQL/JSON item, an error occurs. To extract
-   an SQL/JSON array or object, use .
+   an SQL/JSON array or object, see .
   
 
 
@@ -18885,19 +18885,19 @@
  
 
 
-SELECT JSON_VALUE('"123.45"', '$' RETURNING float);
+SELECT JSON_VALUE('"123.45"'::jsonb, '$' RETURNING float);
  json_value
 
  123.45
 (1 row)
 
-SELECT JSON_VALUE('123.45', '$' RETURNING int ERROR ON ERROR);
+SELECT JSON_VALUE('123.45'::jsonb, '$' RETURNING int ERROR ON ERROR);
  json_value
 
 123
 (1 row)
 
-SELECT JSON_VALUE('"03:04 2015-02-01"', '$.datetime("HH24:MI -MM-DD")' RETURNING date);
+SELECT JSON_VALUE('"03:04 2015-02-01"'::jsonb, '$.datetime("HH24:MI -MM-DD")' RETURNING date);
  json_value 
 
  2015-02-01
@@ -18907,10 +18907,10 @@
 ERROR:  invalid input syntax for integer: "123.45"
 
 SELECT JSON_VALUE(jsonb '[1]', 'strict $' ERROR ON ERROR);
-ERROR: SQL/JSON scalar required
+ERROR:  JSON path expression in JSON_VALUE should return singleton scalar item
 
 SELECT JSON_VALUE(jsonb '[1,2]', 'strict $[*]' ERROR ON ERROR);
-ERROR: more than one SQL/JSON item
+ERROR:  JSON path expression in JSON_VALUE should return singleton scalar item
 
 
  
@@ -18920,13 +18920,13 @@
  
 
 SELECT JSON_VALUE(jsonb '[1]', 'strict $' ERROR ON ERROR);
-ERROR: SQL/JSON scalar required
+ERROR:  JSON path expression in JSON_VALUE should return singleton scalar item
 
 SELECT JSON_VALUE(jsonb '{"a": 1}', 'strict $' ERROR ON ERROR);
-ERROR: SQL/JSON scalar required
+ERROR:  JSON path expression

Re: jsonpath syntax extensions

2022-03-21 Thread Erik Rijkers



On 21-03-2022 at 21:13 Greg Stark wrote:

Hm. Actually... These changes were split off from the JSON_TABLE
patches? Are they still separate or have they been merged into those
other patches since? I see the JSON_TABLE thread is getting more
comments; do those reviews include these patches?



They are separate.

FWIW, I've done all my JSON_PATH testing both without and with these 
syntax extensions (but I've done no code review).  I like these 
extensions, but as you say, there seems to be little interest.



Erik


On Mon, 21 Mar 2022 at 16:09, Greg Stark  wrote:


This patch seems to be getting ignored. Like David I'm a bit puzzled
because it doesn't seem like an especially obscure or difficult patch
to review. Yet it's been multiple years without even a superficial
"does it meet the coding requirements" review let alone a design
review.

Can we get a volunteer to at least give it a quick once-over? I don't
think it's ideal to be doing this in the last CF but neither is it
very appetizing to just shift it to the next CF without a review after
two years...

On Thu, 27 Feb 2020 at 10:58, Nikita Glukhov  wrote:


Hi, hackers!

Attached patches implement several useful jsonpath syntax extensions.
I already published them two years ago in the original SQL/JSON thread,
but then after creation of separate threads for SQL/JSON functions and
JSON_TABLE I forgot about them.

A brief description of the patches:

1. Introduced a new jsonpath modifier 'pg' which is used for enabling
PostgreSQL-specific extensions.  This feature was already proposed in the
discussion of jsonpath's like_regex implementation.

2. Added support for raw jbvObject and jbvArray JsonbValues inside the jsonpath
engine.  Currently, jsonpath can operate on JSON arrays and objects only in
jbvBinary form.  But with the introduction of array and object constructors in
patches #4 and #5, raw in-memory jsonb containers can appear in the jsonpath engine.
In some places we can iterate through jbvArrays; in others we need to encode
jbvArrays and jbvObjects into jbvBinary.

3. SQL/JSON sequence construction syntax. A simple comma-separated list can be
used to concatenate single values or sequences into a single resulting sequence.

  SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');
   jsonb_path_query
  --
   1
   2
   3
   4
   5

  SELECT jsonb_path_query('{ "a": [1, 2, 3], "b": [4, 5] }',
 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');
jsonb_path_query
  --
   1
   3
   5


Patches #4-#6 implement ECMAScript-like syntax constructors and accessors:

4. Array construction syntax.
This can also be considered as enclosing a sequence constructor into brackets.

  SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');
   jsonb_path_query
  --
   [1, 2, 3, 4, 5]

With this feature, jsonb_path_query_array() becomes somewhat redundant.


5. Object construction syntax.  It is useful for constructing derived objects
from the interesting parts of the original object.  (But this is not sufficient
to "project" each object in an array; an item method like '.map()' would be needed
for that.)

  SELECT jsonb_path_query('{"b": 2}', 'pg { a : 1, b : $.b, "x y" : $.b + 3 }');
  jsonb_path_query
  ---
   { "a" : 1, "b": 2, "x y": 5 }

Fields with empty values are simply skipped regardless of lax/strict mode:

  SELECT jsonb_path_query('{"a": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');
   jsonb_path_query
  --
   {}
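[Editor's sketch of the skipping rule just described, not the actual implementation: a field whose value expression yields an empty item sequence is simply omitted from the constructed object.]

```python
# Emulate the proposed object constructor: each field's value expression
# yields a sequence of 0..n items; fields with an empty result are skipped.
def obj(**fields):
    result = {}
    for key, items in fields.items():
        if items:                 # empty sequence -> field is skipped
            result[key] = items[0]
    return result

data = {"a": 1}
# 'pg { b : $.b, a : $.a ? (@ > 1) }' on {"a": 1}
built = obj(b=[], a=[x for x in [data["a"]] if x > 1])
assert built == {}               # both fields produced empty sequences
```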


6. Object subscription syntax.  This gives us the ability to specify what key to
extract at runtime.  The syntax is the same as ordinary array subscription
syntax.

  -- non-existent $.x is simply skipped in lax mode
  SELECT jsonb_path_query('{"a": "b", "b": "c"}', 'pg $[$.a, "x", "a"]');
   jsonb_path_query
  --
   "c"
   "b"

  SELECT jsonb_path_query('{"a": "b", "b": "c"}', 'pg $[$fld]', '{"fld": "b"}');
   jsonb_path_query
  --
   "c"
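[Editor's sketch of the lax-mode subscription behavior shown above: each subscript expression is evaluated, and non-existent keys are skipped rather than raising an error. An emulation, not the C implementation:]

```python
# Emulate lax-mode object subscription: evaluate each subscript, look the
# key up, and silently skip keys that do not exist in the object.
def subscript(doc, *keys):
    out = []
    for k in keys:
        if k in doc:
            out.append(doc[k])   # missing keys are skipped in lax mode
    return out

doc = {"a": "b", "b": "c"}
# 'pg $[$.a, "x", "a"]'  ->  "c" "b"   ($.a evaluates to "b")
assert subscript(doc, doc["a"], "x", "a") == ["c", "b"]
```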

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




--
greg









faulty link

2022-02-10 Thread Erik Rijkers

The provided link
  https://www.postgresql.org/docs/release/

leads to
  https://www.postgresql.org/docs/release/14.2/

which gives 'Not Found' for me (Netherlands)


At least one person on IRC reports it 'works' for them, but it seems 
there is still something wrong.



Erik Rijkers







Re: support for MERGE

2022-01-28 Thread Erik Rijkers

On 28-01-2022 at 21:27 Alvaro Herrera wrote:

MERGE, v10.  I am much more comfortable with this version; I have
removed a bunch of temporary hacks and cleaned up the interactions with
table AM and executor, which is something that had been bothering me for
a while.  The complete set of changes can be seen in github,
https://github.com/alvherre/postgres/commits/merge-15



[v10-0001-MERGE-SQL-Command-following-SQL-2016.patch]


The patch doesn't apply smoothly:

patching file src/backend/tcop/pquery.c
patching file src/backend/tcop/utility.c
patching file src/backend/utils/adt/ruleutils.c
patching file src/bin/psql/tab-complete.c
Hunk #3 FAILED at 1714.
Hunk #4 succeeded at 3489 (offset 6 lines).
Hunk #5 succeeded at 3508 (offset 6 lines).
Hunk #6 succeeded at 3776 (offset 6 lines).
Hunk #7 succeeded at 3855 (offset 6 lines).
1 out of 7 hunks FAILED -- saving rejects to file 
src/bin/psql/tab-complete.c.rej

patching file src/include/commands/trigger.h
patching file src/include/executor/execMerge.h
patching file src/include/executor/instrument.h
patching file src/include/executor/nodeModifyTable.h



tab-complete.c.rej attached


Erik--- src/bin/psql/tab-complete.c
+++ src/bin/psql/tab-complete.c
@@ -1714,6 +1736,12 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER PUBLICATION  ADD */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD"))
 		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");
+	else if (Matches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") ||
+			 (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE") &&
+			  ends_with(prev_wd, ',')))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
+	else if (HeadMatches("ALTER", "PUBLICATION", MatchAny, "ADD|SET", "TABLE"))
+		COMPLETE_WITH(",");
 	/* ALTER PUBLICATION  DROP */
 	else if (Matches("ALTER", "PUBLICATION", MatchAny, "DROP"))
 		COMPLETE_WITH("ALL TABLES IN SCHEMA", "TABLE");


Re: support for MERGE

2022-01-14 Thread Erik Rijkers




On 13-01-2022 at 13:43 Alvaro Herrera wrote:

Apologies, there was a merge failure and I failed to notice.  Here's the
correct patch.

(This is also in github.com/alvherre/postgres/tree/merge-15)



[v6-0001-MERGE-SQL-Command-following-SQL-2016]


I read through the MERGE docs; some typos:

'For example, given MERGE foo AS f'
'For example, given MERGE INTO foo AS f'

'that clause clause'
'that clause'

'This is same as'
'This is the same as'

'for certain type of action'
'for certain types of action'

'The MERGE allows the user'
'MERGE allows the user'
(from the paragraph 'Read Committed Isolation Level'.  Likely 
copied from the paragraph above: 'The DELETE'; but there it refers to an 
earlier mention of DELETE.)



Erik Rijkers



