Re: ICU for global collation
On Mon, Jun 27, 2022 at 08:23:59AM +0200, Peter Eisentraut wrote:
> On 26.06.22 05:51, Julien Rouhaud wrote:
>> On Sat, Jun 25, 2022 at 10:19:30AM -0500, Justin Pryzby wrote:
>>> +	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1500)
>>
>> I think the fix here is to change <= to < ?

Yes.
--
Michael
Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual
On Mon, Jun 27, 2022 at 03:49:20AM +, Shinoda, Noriyoshi (PN Japan FSIP) wrote:
> Thanks for your comment. sorry for the late reply.
> I hope it will be fixed during the period of PostgreSQL 15 Beta.

Apologies for the delay, fixed in time for beta2.
--
Michael
Re: ICU for global collation
On 26.06.22 05:51, Julien Rouhaud wrote:
> Hi,
>
> On Sat, Jun 25, 2022 at 10:19:30AM -0500, Justin Pryzby wrote:
>> commit f2553d43060edb210b36c63187d52a632448e1d2 says >=1500 in a few
>> places, but in pg_upgrade says <=1500, which looks wrong for upgrades
>> from v15.  I think it should say <= 1400.
>>
>> On Wed, Feb 02, 2022 at 02:01:23PM +0100, Peter Eisentraut wrote:
>>> diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c
>>> index 69ef23119f..2a9ca0e389 100644
>>> --- a/src/bin/pg_upgrade/info.c
>>> +++ b/src/bin/pg_upgrade/info.c
>>> @@ -312,11 +312,20 @@ get_db_infos(ClusterInfo *cluster)
>>> 				i_spclocation;
>>> 	char		query[QUERY_ALLOC];
>>>
>>> 	snprintf(query, sizeof(query),
>>> -			 "SELECT d.oid, d.datname, d.encoding, d.datcollate, d.datctype, "
>>> +			 "SELECT d.oid, d.datname, d.encoding, d.datcollate, d.datctype, ");
>>> +	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1500)
>
> I think the fix here is to change <= to < ?
>
>>> +		snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> +				 "'c' AS datcollprovider, NULL AS daticucoll, ");
>>> +	else
>>> +		snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> +				 "datcollprovider, daticucoll, ");
>>> +	snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> 			 "pg_catalog.pg_tablespace_location(t.oid) AS spclocation "
>>> 			 "FROM pg_catalog.pg_database d "
>>> 			 " LEFT OUTER JOIN pg_catalog.pg_tablespace t "

Indeed!
Re: [BUG] Panic due to incorrect missingContrecPtr after promotion
On Fri, Jun 24, 2022 at 04:17:34PM +, Imseih (AWS), Sami wrote:
> It has been difficult to get a generic repro, but the way we reproduce
> is through our test suite. To give more details, we are running tests
> in which we constantly fail over and promote standbys. The issue
> surfaces after we have gone through a few promotions, which occur
> every few hours or so (not really important, but to give context).

Hmm.  Could you describe exactly the failover scenario you are using?
Is the test using a set of cascading standbys linked to the promoted
one?  Are the standbys recycled from the promoted nodes with pg_rewind
or created from scratch with a new base backup taken from the
freshly-promoted primary?

I have been looking more at this thread through the day but I don't
see a remaining issue.  It could be perfectly possible that we are
missing a piece related to the handling of those new overwrite
contrecords in some cases, like in a rewind.

> I am adding some additional debugging to see if I can draw a better
> picture of what is happening. Will also give aborted_contrec_reset_3.patch
> a go, although I suspect it will not handle the specific case we are
> dealing with.

Yeah, this is not going to change much if you are still seeing an
issue.  This patch does not change the logic; it just simplifies the
tracking of the continuation record data, resetting it when a complete
record has been read.  That said, getting rid of the dependency on
StandbyMode because we cannot promote in the middle of a record is
nice (my memories around that were a bit blurry, but even
recovery_target_lsn would not recover in the middle of a continuation
record), and this is not a bug, so there is limited reason to
backpatch this part of the change.
--
Michael
Re: Race condition in TransactionIdIsInProgress
On 25/06/2022 13:10, Simon Riggs wrote:
> On Sat, 25 Jun 2022 at 10:18, Heikki Linnakangas wrote:
>> On 24/06/2022 04:43, Andres Freund wrote:
>>> On 2022-06-23 22:03:27 +0300, Heikki Linnakangas wrote:
>>>> In summary, I think we should:
>>>> - commit and backpatch Simon's
>>>>   just_remove_TransactionIdIsKnownCompleted_call.v1.patch
>>>> - fix pg_xact_status() to check TransactionIdIsInProgress()
>>>> - in v15, remove the TransactionIdIsKnownCompleted function altogether
>>>>
>>>> I'll try to get around to that in the next few days, unless someone
>>>> beats me to it.
>>>
>>> Makes sense.
>>
>> This is what I came up with for master. One difference from Simon's
>> original patch is that I decided to not expose
>> TransactionIdIsKnownNotInProgress(), as there are no other callers of
>> it in core, and it doesn't seem useful to extensions. I inlined it
>> into the caller instead.
>
> Looks good, thanks.

Committed and backpatched. Thanks!

- Heikki
RE: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual
Thanks for your comment. sorry for the late reply.
I hope it will be fixed during the period of PostgreSQL 15 Beta.

Regards,
Noriyoshi Shinoda

-----Original Message-----
From: Justin Pryzby
Sent: Tuesday, June 14, 2022 11:30 PM
To: Shinoda, Noriyoshi (PN Japan FSIP)
Cc: pgsql-hack...@postgresql.org; Tomas Vondra
Subject: Re: PG15 beta1 fix pg_stats_ext/pg_stats_ext_exprs view manual

On Tue, May 24, 2022 at 08:19:27PM -0500, Justin Pryzby wrote:
> On Wed, May 25, 2022 at 01:08:12AM +, Shinoda, Noriyoshi (PN Japan FSIP) wrote:
>> Hi hackers,
>> Thanks to all the developers. The attached patch updates the manual for
>> the pg_stats_ext and pg_stats_ext_exprs views.
>> The current pg_stats_ext/pg_stats_ext_exprs view manuals are missing the
>> inherited column. This column was added at the same time as the
>> stxdinherit column in the pg_statistic_ext_data view. The attached patch
>> adds the missing description. If there is a better description, please
>> correct it.
>>
>> Commit: Add stxdinherit flag to pg_statistic_ext_data
>> (269b532aef55a579ae02a3e8e8df14101570dfd9)
>>
>> Current Manual:
>> https://www.postgresql.org/docs/15/view-pg-stats-ext.html
>> https://www.postgresql.org/docs/15/view-pg-stats-ext-exprs.html
>
> Thanks for copying me.
>
> This looks right, and uses the same language as pg_stats and pg_statistic.
>
> But, I'd prefer if it didn't say "inheritance child", since that now
> sounds like it means "a child which is using inheritance" and not just
> "any child".
>
> I'd made a patch for that, for which I'll create a separate thread shortly.

The thread I started [0] has stalled out, so your patch seems fine,
since it's consistent with pre-existing docs.
[0] https://www.postgresql.org/message-id/20220525013248.go19...@telsasoft.com

--
Justin
Re: Postgres do not allow to create many tables with more than 63-symbols prefix
On 6/27/22 06:38, Masahiko Sawada wrote:
> On Fri, Jun 24, 2022 at 2:12 PM Andrey Lepikhov wrote:
>> On 6/23/22 07:03, Masahiko Sawada wrote:
>>> On Sat, Jun 4, 2022 at 4:03 AM Andrey Lepikhov wrote:
>>>> It is a very corner case, of course. But the solution is easy and
>>>> short. So, why not fix it? - See the patch in attachment.
>>>
>>> While this seems to be a good improvement, I think it's not a bug.
>>> Probably we cannot backpatch it as it will end up having type names
>>> defined by different naming rules. I'd suggest discussing it on
>>> -hackers.
>> Done.
>
> Thank you for updating the patch. Please register this item to the next
> CF if not yet.
Done [1].

>>> Regarding the patch, I think we can merge makeUniqueTypeName() to
>>> makeArrayTypeName() as there is no caller of makeUniqueTypeName() who
>>> passes tryOriginal = true.
>> I partially agree with you. But I have one reason to leave
>> makeUniqueTypeName() separated: it may be used in other code with
>> auto-generated types. For example, I think the DefineRelation routine
>> should choose a composite type instead of using the same name as the
>> table.
>
> Okay. I have one comment on the v2 patch:
>
> +	for (;;)
> 	{
> -		dest[i - 1] = '_';
> -		strlcpy(dest + i, typeName, NAMEDATALEN - i);
> -		if (namelen + i >= NAMEDATALEN)
> -			truncate_identifier(dest, NAMEDATALEN, false);
> 		if (!SearchSysCacheExists2(TYPENAMENSP,
> -								   CStringGetDatum(dest),
> +								   CStringGetDatum(type_name),
> 								   ObjectIdGetDatum(typeNamespace)))
> -			return pstrdup(dest);
> +			return type_name;
> +
> +		/* Previous attempt was failed. Prepare a new one. */
> +		pfree(type_name);
> +		snprintf(suffix, sizeof(suffix), "%d", ++pass);
> +		type_name = makeObjectName("", typeName, suffix);
> 	}
>
> 	return NULL;
>
> I think it's better to break from the loop instead of returning from
> there. That way, we won't need "return NULL".

Agree. Done.

[1] https://commitfest.postgresql.org/38/3712/

--
Regards
Andrey Lepikhov
Postgres Professional

From 73b80676fff98e9edadab2576cf22778c6b5168b Mon Sep 17 00:00:00 2001
From: "Andrey V. Lepikhov"
Date: Fri, 3 Jun 2022 21:40:01 +0300
Subject: [PATCH] Allow postgresql to generate more relations with the same
 63 symbols long prefix.

Rewrite the makeUniqueTypeName routine - the generator of a unique name
is based on the same idea as the ChooseRelationName() routine.

Authors: Dmitry Koval, Andrey Lepikhov
---
 src/backend/catalog/pg_type.c             | 37 ---
 src/test/regress/expected/alter_table.out |  6 ++--
 2 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
index 03788cb975..405553013e 100644
--- a/src/backend/catalog/pg_type.c
+++ b/src/backend/catalog/pg_type.c
@@ -26,6 +26,7 @@
 #include "catalog/pg_namespace.h"
 #include "catalog/pg_proc.h"
 #include "catalog/pg_type.h"
+#include "commands/defrem.h"
 #include "commands/typecmds.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
@@ -936,17 +937,17 @@ makeMultirangeTypeName(const char *rangeTypeName, Oid typeNamespace)
  * makeUniqueTypeName
  *		Generate a unique name for a prospective new type
  *
- * Given a typeName, return a new palloc'ed name by prepending underscores
- * until a non-conflicting name results.
+ * Given a typeName, return a new palloc'ed name by prepending underscore
+ * and (if needed) adding a suffix to the end of the type name.
  *
  * If tryOriginal, first try with zero underscores.
  */
 static char *
 makeUniqueTypeName(const char *typeName, Oid typeNamespace, bool tryOriginal)
 {
-	int			i;
-	int			namelen;
-	char		dest[NAMEDATALEN];
+	int			pass = 0;
+	char		suffix[NAMEDATALEN];
+	char	   *type_name;

 	Assert(strlen(typeName) <= NAMEDATALEN - 1);

@@ -956,23 +957,25 @@ makeUniqueTypeName(const char *typeName, Oid typeNamespace, bool tryOriginal)
 						   ObjectIdGetDatum(typeNamespace)))
 		return pstrdup(typeName);

+	/* Prepare initial object name. Just for compatibility. */
+	type_name = makeObjectName("", typeName, NULL);
+
 	/*
-	 * The idea is to prepend underscores as needed until we make a name that
-	 * doesn't collide with anything ...
+	 * The idea of unique name generation is similar to ChooseRelationName()
+	 * implementation logic.
 	 */
-	namelen = strlen(typeName);
-	for (i = 1; i < NAMEDATALEN - 1; i++)
+	for (;;)
 	{
-		dest[i - 1] = '_';
-		strlcpy(dest + i, typeName, NAMEDATALEN - i);
-		if (namelen + i >= NAMEDATALEN)
-			truncate_identifier(dest, NAMEDATALEN, false);
-
 		if (!SearchSysCacheExists2(TYPENAMENSP,
-								   CStringGetDatum(dest),
+								   CStringGetDatum(type_name),
 								   ObjectIdGetDatum(typeNamespace)))
-			return pstrdup(dest);
+			break;
+
+		/* Previous attempt was failed. Prepare a new one. */
+		pfree(type_name);
+		snprintf(suffix, sizeof(suffix), "%d", ++pass);
+
Re: Postgres do not allow to create many tables with more than 63-symbols prefix
On Fri, Jun 24, 2022 at 2:12 PM Andrey Lepikhov wrote:
>
> Moved from the pgsql-bugs mailing list [1].
>
> On 6/23/22 07:03, Masahiko Sawada wrote:
>> Hi,
>>
>> On Sat, Jun 4, 2022 at 4:03 AM Andrey Lepikhov wrote:
>>>
>>> According to subj you can try to create many tables (induced by the
>>> case of a partitioned table) with a long prefix - see 6727v.sql for
>>> reproduction. But now it's impossible because of the logic of the
>>> makeUniqueTypeName() routine.
>>> You get the error:
>>> ERROR:  could not form array type name for type ...
>>>
>>> It is a very corner case, of course. But the solution is easy and
>>> short. So, why not fix it? - See the patch in attachment.
>>
>> While this seems to be a good improvement, I think it's not a bug.
>> Probably we cannot backpatch it as it will end up having type names
>> defined by different naming rules. I'd suggest discussing it on
>> -hackers.
> Done.

Thank you for updating the patch. Please register this item to the next
CF if not yet.

>> Regarding the patch, I think we can merge makeUniqueTypeName() to
>> makeArrayTypeName() as there is no caller of makeUniqueTypeName() who
>> passes tryOriginal = true.
> I partially agree with you. But I have one reason to leave
> makeUniqueTypeName() separated: it may be used in other code with
> auto-generated types. For example, I think the DefineRelation routine
> should choose a composite type instead of using the same name as the
> table.

Okay. I have one comment on the v2 patch:

+	for (;;)
 	{
-		dest[i - 1] = '_';
-		strlcpy(dest + i, typeName, NAMEDATALEN - i);
-		if (namelen + i >= NAMEDATALEN)
-			truncate_identifier(dest, NAMEDATALEN, false);
 		if (!SearchSysCacheExists2(TYPENAMENSP,
-								   CStringGetDatum(dest),
+								   CStringGetDatum(type_name),
 								   ObjectIdGetDatum(typeNamespace)))
-			return pstrdup(dest);
+			return type_name;
+
+		/* Previous attempt was failed. Prepare a new one. */
+		pfree(type_name);
+		snprintf(suffix, sizeof(suffix), "%d", ++pass);
+		type_name = makeObjectName("", typeName, suffix);
 	}

 	return NULL;

I think it's better to break from the loop instead of returning from
there. That way, we won't need "return NULL".

Regards,

--
Masahiko Sawada
EDB: https://www.enterprisedb.com/
Re: pg15b1: FailedAssertion("val > base", File: "...src/include/utils/relptr.h", Line: 67, PID: 30485)
On Thu, Jun 23, 2022 at 2:09 AM Robert Haas wrote:
> On Wed, Jun 22, 2022 at 12:34 AM Thomas Munro wrote:
>>> For the record, the third idea proposed was to use 1 for the first
>>> byte, so that 0 is reserved for NULL and works with memset(0). Here's
>>> an attempt at that.
>>
>> ... erm, though, duh, I forgot to adjust Assert(val > base). One more
>> time.
>
> I like this idea and think this might have the side benefit of making
> it harder to get away with accessing relptr_off directly.

Thanks. Pushed, and back-patched to 14, where min_dynamic_shared_memory
arrived.

I wondered in passing if the stuff about relptr_declare() was still
needed to avoid confusing pgindent, since we tweaked the indent code a
bit for macros that take a typename, but it seems that it still mangles
"relptr(FooBar) some_struct_member;", putting extra whitespace in front
of it. Hmmph.
Re: Support logical replication of DDLs
On 2022-Jun-22, vignesh C wrote:

> 1) Creation of temporary table fails infinitely in the subscriber.
> CREATE TEMPORARY TABLE temp1 (a int primary key);
>
> The above statement is converted to the below format:
> CREATE TEMPORARY TABLE pg_temp.temp1 (a pg_catalog.int4 ,
> CONSTRAINT temp1_pkey PRIMARY KEY (a));
> While handling the creation of the temporary table in the worker, the
> worker fails continuously with the following error:
> 2022-06-22 14:24:01.317 IST [240872] ERROR:  schema "pg_temp" does not exist

Perhaps one possible fix is to change the JSON format string used in
deparse_CreateStmt. Currently, the following is used:

+	if (node->ofTypename)
+		fmtstr = "CREATE %{persistence}s TABLE %{if_not_exists}s %{identity}D "
+			"OF %{of_type}T %{table_elements}s "
+			"%{with_clause}s %{on_commit}s %{tablespace}s";
+	else
+		fmtstr = "CREATE %{persistence}s TABLE %{if_not_exists}s %{identity}D "
+			"(%{table_elements:, }s) %{inherits}s "
+			"%{with_clause}s %{on_commit}s %{tablespace}s";
+
+	createStmt =
+		new_objtree_VA(fmtstr, 1,
+					   "persistence", ObjTypeString,
+					   get_persistence_str(relation->rd_rel->relpersistence));

(Note that the word for the "persistence" element here comes straight
from relation->rd_rel->relpersistence.)

Maybe it would be more appropriate to set the schema to empty when the
table is temp, since the temporary-ness is in the %{persistence}
element, and thus there is no need to schema-qualify the table name.

However, that would still replicate a command that involves a temporary
table, which perhaps should not be considered fit for replication. So
another school of thought is that if the %{persistence} is set to
TEMPORARY, then it would be better to skip replicating the command
altogether. I'm not sure how to plug that in the replication layer,
however.

--
Álvaro Herrera               PostgreSQL Developer — https://www.EnterpriseDB.com/
"At least to kernel hackers, who really are human, despite occasional
rumors to the contrary" (LWN.net)
Re: doc: Clarify Savepoint Behavior
Thank you for the review.

On Thu, Jun 23, 2022 at 5:35 AM Simon Riggs wrote:
> On Thu, 9 Jun 2022 at 16:41, David G. Johnston wrote:
>> "The name to give to the new savepoint. The name may already exist,
>> in which case a rollback or release to the same name will use the
>> one that was most recently defined."
>
> instead I propose:
>
> "The name to give to the new savepoint. If the name duplicates a
> previously defined savepoint name, then only the latest savepoint with
> that name can be referenced in a later ROLLBACK TO SAVEPOINT."

So leave the "release" behavior implied from the rollback behavior? On
the whole I'm slightly in favor of your proposed wording (mostly due to
the better fit with the ROLLBACK command, though at the omission of
RELEASE...), but are you seeing anything beyond personal style as to why
you feel one is better than the other? Is there some existing wording in
the docs that I should be conforming to here?

>> If multiple savepoints have the same name, only the one that was most
>> recently defined is released.
>
> instead I propose:
>
> "Searches backwards through previously defined savepoints until a
> savepoint name matches the request. If the savepoint name duplicates
> earlier defined savepoints, then those earlier savepoints can only be
> released if multiple ROLLBACK TO SAVEPOINT commands are issued with the
> same name, as shown in the following example."

Upon reflection, adding this after the comments about cursors seems like
a poor location; I will probably move it up one paragraph.

I dislike this proposal for its added wordiness that doesn't bring in
new material. The whole idea of "searching backwards until the name is
found" is already covered in the description: "ROLLBACK TO SAVEPOINT
implicitly destroys all savepoints that were established after the named
savepoint."

Using the phrase "can only be released if" here in the ROLLBACK TO
SAVEPOINT command page just seems to be asking for confusion between
this and the RELEASE SAVEPOINT command.

> Also, I would just call the savepoint "s" in the example, to declutter
> it.

If I do use a name that differs from the other two examples on that page
I'll probably go with "sp" for added detectability - but deviating from
the established convention doesn't seem warranted here.

In all, I'm still content with the patch as-is, though I or the
committer should consider moving up the one paragraph in ROLLBACK TO
SAVEPOINT. Otherwise I'll probably post an updated patch sometime this
coming week and give another look at the savepoint name description and
make that paragraph move.

David J.
Re: Add LZ4 compression in pg_dump
Hi,

Will you be able to send a rebased patch for the next CF?

If you update for the review comments I sent in March, I'll plan to do
another round of review.

On Sat, Mar 26, 2022 at 11:21:56AM -0500, Justin Pryzby wrote:
> LZ4F_HEADER_SIZE_MAX isn't defined in old LZ4.
>
> I ran into that on an Ubuntu LTS, so I don't think it's so old that it
> shouldn't be handled more gracefully. LZ4 should either have an explicit
> version check, or else shouldn't depend on that feature (or should define
> a safe fallback version if the library header doesn't define it).
>
> https://packages.ubuntu.com/liblz4-1
>
> 0003: typo: of legacy => or legacy
>
> There are a large number of ifdefs being added here - it'd be nice to
> minimize that. basebackup was organized to use separate files, which is
> one way.
>
> $ git grep -c 'ifdef .*LZ4' src/bin/pg_dump/compress_io.c
> src/bin/pg_dump/compress_io.c:19
>
> In last year's CF entry, I had made a union within CompressorState. LZ4
> doesn't need z_streamp (and zstd will need ZSTD_outBuffer, ZSTD_inBuffer,
> ZSTD_CStream).
>
> 0002: I wonder if you're able to re-use any of the basebackup parsing
> stuff from commit ffd53659c. You're passing both the compression method
> *and* level. I think there should be a structure which includes both. In
> the future, that can also handle additional options. I hope to re-use
> these same things for wal_compression=method:level.
>
> You renamed this:
>
> |-	COMPR_ALG_LIBZ
> |-} CompressionAlgorithm;
> |+	COMPRESSION_GZIP,
> |+} CompressionMethod;
>
> ..But I don't think that's an improvement. If you were to change it, it
> should say something like PGDUMP_COMPRESS_ZLIB, since there are other
> compression structs and typedefs. zlib is not identical to gzip, which
> uses a different header, so in WriteDataToArchive(), LIBZ is correct, and
> GZIP is incorrect.
>
> The cf* changes in pg_backup_archiver could be split out into a separate
> commit. It's strictly a code simplification - not just preparation for
> more compression algorithms. The commit message should say "See also:
> bf9aa490db24b2334b3595ee33653bf2fe39208c".
>
> The changes in 0002 for cfopen_write seem insufficient:
> |+	if (compressionMethod == COMPRESSION_NONE)
> |+		fp = cfopen(path, mode, compressionMethod, 0);
> |	else
> |	{
> |#ifdef HAVE_LIBZ
> |		char *fname;
> |
> |		fname = psprintf("%s.gz", path);
> |-		fp = cfopen(fname, mode, compression);
> |+		fp = cfopen(fname, mode, compressionMethod, compressionLevel);
> |		free_keep_errno(fname);
> |#else
>
> The only difference between the LIBZ and uncompressed case is the file
> extension, and it'll be the only difference with LZ4 too. So I suggest to
> first handle the file extension, and the rest of the code path is not
> conditional on the compression method. I don't think cfopen_write even
> needs HAVE_LIBZ - can't you handle that in cfopen_internal()?
>
> This patch rejects -Z0, which ought to be accepted:
> ./src/bin/pg_dump/pg_dump -h /tmp regression -Fc -Z0 |wc
> pg_dump: error: can only specify -Z/--compress [LEVEL] when method is set
>
> Your 0003 patch shouldn't reference LZ4:
> +#ifndef HAVE_LIBLZ4
> +	if (*compressionMethod == COMPRESSION_LZ4)
> +		supports_compression = false;
> +#endif
>
> The 0004 patch renames zlibOutSize to outsize - I think the patch series
> should be constructed such as to minimize the size of the method-specific
> patches. I say this anticipating also adding support for zstd. The
> preliminary patches should have all the boring stuff. It would help for
> reviewing to keep the patches split up, or to enumerate all the boring
> things that are being renamed (like change OutputContext to cfp, rename
> zlibOutSize, ...).
>
> 0004: The include should use <lz4.h> and not "lz4.h"
>
> freebsd/cfbot is failing.
>
> I suggested off-list to add an 0099 patch to change LZ4 to the default,
> to exercise it more on CI.
On Sat, Mar 26, 2022 at 01:33:36PM -0500, Justin Pryzby wrote:
> On Sat, Mar 26, 2022 at 11:21:56AM -0500, Justin Pryzby wrote:
>> You're passing both the compression method *and* level. I think there
>> should be a structure which includes both. In the future, that can also
>> handle additional options.
>
> I'm not sure if there's anything worth saving, but I did that last year
> with 0003-Support-multiple-compression-algs-levels-opts.patch
> I sent a rebased copy off-list.
> https://www.postgresql.org/message-id/flat/20210104025321.ga9...@telsasoft.com#ca1b9f9d3552c87fa874731cad9d8391
>
> | fatal("not built with LZ4 support");
> | fatal("not built with lz4 support");
>
> Please use consistent capitalization of "lz4" - then the compiler can
> optimize away duplicate strings.
>
>> 0004: The include should use <lz4.h> and not "lz4.h"
>
> Also, use USE_LZ4 rather than HAVE_LIBLZ4, per 75eae0908.
JSON/SQL: jsonpath: incomprehensible error message
For example, a jsonpath string with the deliberate typo 'like_regexp'
(instead of 'like_regex'):

select js from (values (jsonb '{}')) as f(js)
where js @? '$ ? (@ like_regexp "^xxx")';
ERROR:  syntax error, unexpected IDENT_P at or near " " of jsonpath input
LINE 1: ...s from (values (jsonb '{}')) as f(js) where js @? '$ ? (@ li...
                                                              ^

Both 'IDENT_P' and 'at or near " "' seem pretty useless. Perhaps some
improvement can be thought of?

Similar messages in release 14 seem to use 'invalid token', which is
better:

select js from (values (jsonb '{"a":"b"}')) as f(js)
where js @? '$ ? (@.a .= "b")';
ERROR:  syntax error, unexpected invalid token at or near "=" of jsonpath input

thanks,
Erik Rijkers
Re: Core dump in range_table_mutator()
Dean Rasheed writes:
> On Sat, 25 Jun 2022 at 04:39, Tom Lane wrote:
>> Well, if we want to clean this up a bit rather than just doing the
>> minimum safe fix ... I spent some time wondering why we were bothering
>> with the FLATCOPY step at all, rather than just mutating the Query*
>> pointer. I think the reason is to not fail if the QTW_DONT_COPY_QUERY
>> flag is set, but maybe we should clear that flag when recursing?

> Hmm, interesting, but we don't actually pass on that flag when
> recursing anyway. Since it is the mutator routine's responsibility to
> make a possibly-modified copy of its input node, if it wants to
> recurse into the subquery, it should always call query_tree_mutator()
> with QTW_DONT_COPY_QUERY unset, and range_table_mutator() should never
> need to FLATCOPY() the subquery.

Actually, QTW_DONT_COPY_QUERY is dead code AFAICS: we don't use it
anywhere, and Debian Code Search doesn't know of any outside users
either. Removing it might be something to do in v16. (I think it's a
bit late for unnecessary API changes in v15.)

> But then, in the interests of further tidying up, why does
> range_table_mutator() call copyObject() on the subquery if
> QTW_IGNORE_RT_SUBQUERIES is set?

I thought about that for a bit, but all of the QTW_IGNORE flags work
like that, and I'm hesitant to change it. There may be code that
assumes it can modify those trees in-place afterwards.

Committed with just the change to use straight MUTATE, making this case
exactly like the other places with QTW_IGNORE options. Thanks for the
discussion!

			regards, tom lane
Re: ICU for global collation
On Sun, Jun 26, 2022 at 11:51:24AM +0800, Julien Rouhaud wrote:
> On Sat, Jun 25, 2022 at 10:19:30AM -0500, Justin Pryzby wrote:
>>> +	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1500)
>>> +		snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> +				 "'c' AS datcollprovider, NULL AS daticucoll, ");
>>> +	else
>>> +		snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> +				 "datcollprovider, daticucoll, ");
>>> +	snprintf(query + strlen(query), sizeof(query) - strlen(query),
>>> 			 "pg_catalog.pg_tablespace_location(t.oid) AS spclocation "
>>> 			 "FROM pg_catalog.pg_database d "
>>> 			 " LEFT OUTER JOIN pg_catalog.pg_tablespace t "
>
> Indeed!

Oops. Beta2 tagging is very close by, so I think that it would be
better to not take a risk on that now, and this is an issue only when
upgrading from v15 where datcollprovider is ICU for a database. As
things stand, someone using beta1 with this new feature, running
pg_upgrade to beta2, would lose any non-libc locale provider set.
--
Michael