Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Joel Jacobson
On Sat, Apr 3, 2021, at 17:50, Fabien COELHO wrote:
> > Perhaps the configuration-file parser has been fixed since to support 
> > embedded newlines? If so, then maybe it would actually be an idea to 
> > support newlines by escaping them?
> 
> Dunno.
> 
> If such a feature gets considered, I'm not sure I'd like to actually edit 
> pg configuration file to change the message.

For the ALTER SYSTEM case, the value would be written to postgresql.auto.conf,
and that file we shouldn't edit manually anyway, right?
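
As a sketch (assuming the patch's GUC is indeed named "motd" and is settable
at runtime), that would look like:

    ALTER SYSTEM SET motd = 'Maintenance window: Friday 18:00 UTC';
    SELECT pg_reload_conf();  -- the value then lives in postgresql.auto.conf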

> 
> The actual source looks pretty straightforward. I'm wondering whether pg 
> style would suggest to write motd != NULL instead of just motd.

That's what I had originally, but when reviewing my code for style,
I noticed the other form is more common:

if \([a-z]* != NULL &&
119 results in 72 files

if \([a-z]* &&
936 results in 311 files

> 
> I'm wondering whether it should be possible to designate (1) a file the 
> content of which would be shown, or (2) a command, the output of which 
> would be shown [ok, there might be security implications on this one].

Can't we just do that via plpgsql and EXECUTE somehow?

/Joel

Re: Crash in BRIN minmax-multi indexes

2021-04-03 Thread Jaime Casanova
On Thu, Apr 01, 2021 at 03:22:59PM +0200, Tomas Vondra wrote:
> On 4/1/21 3:09 PM, Zhihong Yu wrote:
> > Hi,
> > Can you try this patch ?
> > 
> > Thanks
> > 
> > diff --git a/src/backend/access/brin/brin_minmax_multi.c
> > b/src/backend/access/brin/brin_minmax_multi.c
> > index 70109960e8..25d6d2e274 100644
> > --- a/src/backend/access/brin/brin_minmax_multi.c
> > +++ b/src/backend/access/brin/brin_minmax_multi.c
> > @@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)
> >      delta = 24L * 3600L * delta;
> > 
> >      /* and add the time part */
> > -    delta += result->time / (float8) 100.0;
> > +    delta += (result->time + result->zone * USECS_PER_SEC) / (float8)
> > 100.0;
> > 
> 
> That won't work, because Interval does not have a "zone" field, so this
> won't even compile.
> 
> The problem is that interval comparisons convert the value using 30 days
> per month (see interval_cmp_value), but the formula in this function
> uses 31. So either we can tweak that (seems to fix it for me), or maybe
> just switch to interval_cmp_value directly.
> 

Changing the formula to use 30-day months fixed it.
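
For reference, a quick way to see the 30-days-per-month convention that
interval_cmp_value applies (a sketch, runnable as-is):

    -- interval comparison treats one month as 30 days, not 31
    SELECT '1 month'::interval = '30 days'::interval AS month_eq_30_days,  -- true
           '1 month'::interval = '31 days'::interval AS month_eq_31_days;  -- false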

I also found another issue; this one involves autovacuum, which makes it
a little more complicated to reproduce.

Currently the only stable way to reproduce it is using pgbench:

pgbench -i postgres
psql -c "CREATE INDEX ON pgbench_history USING brin (tid 
int4_minmax_multi_ops);" postgres
pgbench -c2 -j2 -T 300 -n postgres

Attached a backtrace

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140730649395040, 2, 6, 6603273, 
9461368344, 
4611686018427388799, 139855637088934, 0, 281470681751456, 0, 0, 0, 
0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f32ad593535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {
  0, 0, 0, 0, 0, 139855634845685, 2, 7147272211649118304, 
7003431887686743910, 
  9461368344, 7003770563910163344, 0, 17199369337215936768, 
140730649395280, 0, 
  140730649396144}}, sa_flags = -144564240, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x560cf7cbc5d0 in ExceptionalCondition (conditionName=0x560cf7d370c9 
"DatumGetBool(r)", 
errorType=0x560cf7d36f74 "FailedAssertion", fileName=0x560cf7d36f60 
"brin_minmax_multi.c", 
lineNumber=455) at assert.c:69
No locals.
#3  0x560cf7628767 in AssertCheckExpandedRanges (bdesc=0x560cf9914e08, 
colloid=0, attno=1, 
attr=0x7f32a442d858, ranges=0x560cf991bbf0, nranges=10) at 
brin_minmax_multi.c:455
r = 0
minval = 10
maxval = 10
i = 9
eq = 0x560cf9914ca0
lt = 0x560cf9914c40
#4  0x560cf762d17d in brin_minmax_multi_union (fcinfo=0x7ffe685dc520)
at brin_minmax_multi.c:2779
bdesc = 0x560cf9914e08
col_a = 0x560cf9923f88
col_b = 0x560cf9939898
colloid = 0
serialized_a = 0x560cf993d020
serialized_b = 0x560cf993d0b0
ranges_a = 0x560cf9923d48
ranges_b = 0x560cf9913be8
attno = 1
attr = 0x7f32a442d858
eranges = 0x560cf991bbf0
neranges = 10
cmpFn = 0x560cf9914c40
distanceFn = 0x560cf991aee0
distances = 0x560cf7d38d27
ctx = 0x560cf991bad0
oldctx = 0x560cf9923b00
#5  0x560cf7cc6ae2 in FunctionCall3Coll (flinfo=0x560cf991afb0, 
collation=0, 
arg1=94613726645768, arg2=94613726707592, arg3=94613726795928) at 
fmgr.c:1188
fcinfodata = {fcinfo = {flinfo = 0x560cf991afb0, context = 0x0, 
resultinfo = 0x0, 
fncollation = 0, isnull = false, nargs = 3, args = 0x7ffe685dc540}, 
  fcinfo_data = "\260\257\221\371\fV", '\000' , 
"\177\003\000\bN\221\371\fV\000\000\000;\222\371\fV\000\000\210?\222\371\fV\000\000\000\227\223\371\v\022\000\000\230\230\223\371\fV\000\000\000\257\221\371\fV\000"}
fcinfo = 0x7ffe685dc520
result = 17072012032
__func__ = "FunctionCall3Coll"
#6  0x560cf762541d in union_tuples (bdesc=0x560cf9914e08, a=0x560cf9923f60, 
b=0x560cf993d078)
at brin.c:1618
unionFn = 0x560cf991afb0
col_a = 0x560cf9923f88
col_b = 0x560cf9939898
opcinfo = 0x560cf9914bf0
keyno = 0
db = 0x560cf9939870
cxt = 0x560cf9939750
oldcxt = 0x560cf9923b00
#7  0x560cf7624eb2 in summarize_range (indexInfo=0x560cf99243a8, 
state=0x560cf9924e08, 
heapRel=0x7f32a442dfb8, heapBlk=460, heapNumBlks=463) at brin.c:1427
newtup = 0x560cf9924110
newsize = 72
didupdate = false
samepage = true
phbuf = 2283
phtup = 0x560cf993d078
phsz = 32
offset = 17
scanNumBlks = 2
__func__ = "summarize_range"
#8  

Re: TRUNCATE on foreign table

2021-04-03 Thread Kohei KaiGai
On Sun, Apr 4, 2021 at 13:07 Bharath Rupireddy  wrote:
>
> On Sat, Apr 3, 2021 at 8:31 PM Zhihong Yu  wrote:
> > w.r.t. Bharath's question on using hash table, I think the reason is that 
> > the search would be more efficient:
>
> Generally, sequential search would be slower if there are many entries
> in a list. Here, the use case is to store all the foreign table ids
> associated with each foreign server and I'm not sure how many foreign
> tables will be provided in a single truncate command that belong to
> different foreign servers. I strongly feel the count will be small and
> using a list would be simpler than having a hash table. Others may
> have better opinions.
>
https://www.postgresql.org/message-id/20200115081126.gk2...@paquier.xyz

It was originally implemented using a simple list, then modified according to
the comment by Michael.
I think it is just a matter of preference.

> > Should the hash table be released at the end of ExecuteTruncateGuts() ?
>
> If we go with a hash table and think that the frequency of "TRUNCATE"
> commands on foreign tables is heavy in a local session, then it does
> make sense to not destroy the hash, otherwise destroy the hash.
>
In most cases, TRUNCATE is not a frequently executed command.
So, indeed, it is just a matter of preference.

Best regards,
-- 
HeteroDB, Inc / The PG-Strom Project
KaiGai Kohei 




Re: Unused variable found in AttrDefaultFetch

2021-04-03 Thread Tom Lane
Michael Paquier  writes:
> On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:
>> +1 to remove it and the patch LGTM.

> Indeed, there is no point to keep that around.  I'll go clean up that
> as you propose.

What Andrew was suggesting in the other thread might well result in
putting it back.  I'd hold off till that decision is made.

regards, tom lane




Re: Unused variable found in AttrDefaultFetch

2021-04-03 Thread Michael Paquier
On Sun, Apr 04, 2021 at 10:13:26AM +0530, Bharath Rupireddy wrote:
> +1 to remove it and the patch LGTM.

Indeed, there is no point to keep that around.  I'll go clean up that
as you propose.
--
Michael


signature.asc
Description: PGP signature


Re: Unused variable found in AttrDefaultFetch

2021-04-03 Thread Bharath Rupireddy
On Sun, Apr 4, 2021 at 8:14 AM Zhihong Yu  wrote:
>
> Hi,
> I was looking at AttrDefaultFetch and saw that the variable found is never 
> read.
>
> I think it can be removed. See attached patch.

+1 to remove it and the patch LGTM. For reference, below is the commit
that removed the last usage of the "found" variable:

commit 16828d5c0273b4fe5f10f42588005f16b415b2d8
Author: Andrew Dunstan 
Date:   Wed Mar 28 10:43:52 2018 +1030

Fast ALTER TABLE ADD COLUMN with a non-NULL default

Currently adding a column to a table with a non-NULL default results in
a rewrite of the table. For large tables this can be both expensive and
disruptive. This patch removes the need for the rewrite as long as the

@@ -4063,10 +4125,6 @@ AttrDefaultFetch(Relation relation)

systable_endscan(adscan);
heap_close(adrel, AccessShareLock);
-
-   if (found != ndef)
-   elog(WARNING, "%d attrdef record(s) missing for rel %s",
-ndef - found, RelationGetRelationName(relation));
 }

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




Re: TRUNCATE on foreign table

2021-04-03 Thread Bharath Rupireddy
On Sat, Apr 3, 2021 at 8:31 PM Zhihong Yu  wrote:
> w.r.t. Bharath's question on using hash table, I think the reason is that the 
> search would be more efficient:

Generally, sequential search would be slower if there are many entries
in a list. Here, the use case is to store all the foreign table ids
associated with each foreign server and I'm not sure how many foreign
tables will be provided in a single truncate command that belong to
different foreign servers. I strongly feel the count will be small and
using a list would be simpler than having a hash table. Others may
have better opinions.
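
For context, the scenario in question is a single TRUNCATE naming foreign
tables that live on different foreign servers, so the relations have to be
grouped per server before the command is forwarded. A sketch (table names
are hypothetical):

    -- ft_eu and ft_us are foreign tables on two different foreign servers;
    -- the patch groups them per server and issues one remote TRUNCATE each
    TRUNCATE local_log, ft_eu, ft_us;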

> Should the hash table be released at the end of ExecuteTruncateGuts() ?

If we go with a hash table and think that the frequency of "TRUNCATE"
commands on foreign tables is heavy in a local session, then it does
make sense to not destroy the hash, otherwise destroy the hash.

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




RE: Stronger safeguard for archive recovery not to miss data

2021-04-03 Thread osumi.takami...@fujitsu.com
On  Friday, April 2, 2021 11:49 PM  Laurenz Albe  
wrote:
> On Thu, 2021-04-01 at 17:25 +0900, Fujii Masao wrote:
> > Thanks for updating the patch!
> >
> > +errhint("Use a backup taken after
> setting wal_level to higher than minimal "
> > +"or recover to the
> > + point in time before wal_level becomes minimal even though it causes
> > + data loss")));
> >
> > ISTM that "or recover to the point in time before wal_level was changed
> >   to minimal even though it may cause data loss" sounds better. Thought?
> 
> I would reduce it to
> 
> "Either use a later backup, or recover to a point in time before \"wal_level\"
> was set to \"minimal\"."
> 
> I'd say that we can leave it to the intelligence of the reader to deduce that
> recovering to an earlier time means more data loss.
Thank you. Still, I prefer the longer version.
For example, the "later backup" can itself be another backup that fails during
archive recovery, if the user has several backups taken while wal_level=replica
and the chosen one was taken before wal_level was set to minimal, right?

My thinking is that giving more information helps the user make a better
decision.

Best Regards,
Takamichi Osumi



RE: Stronger safeguard for archive recovery not to miss data

2021-04-03 Thread osumi.takami...@fujitsu.com
On Thursday, April 1, 2021 5:25 PM Fujii Masao  
wrote:
> On 2021/04/01 12:45, osumi.takami...@fujitsu.com wrote:
> > Thank you for sharing your ideas about the hint. Absolutely need to change
> the message.
> > In my opinion, combining the basic idea of yours and Fujii-san's would be
> the best.
> >
> > Updated the patch and made v05. The changes I made are
> >
> > * rewording of errhint although this has become long !
> > * fix of the typo in the TAP test
> > * modification of my past changes not to change conditions in
> > CheckRequiredParameterValues
> > * rename of the test file to 024_archive_recovery.pl because two files are
> made
> > since the last update of this patch
> > * pgindent is conducted to check my alignment again.
> 
> Thanks for updating the patch!
> 
> +  errhint("Use a backup taken after setting
> wal_level to higher than minimal "
> +  "or recover to the point in
> time before wal_level becomes
> +minimal even though it causes data loss")));
> 
> ISTM that "or recover to the point in time before wal_level was changed
>   to minimal even though it may cause data loss" sounds better. Thought?
Adopted.


> +# Check if standby.signal exists
> +my $pgdata = $new_node->data_dir;
> +ok (-f "${pgdata}/standby.signal", 'standby.signal was created');
> 
> +# Check if recovery.signal exists
> +my $path = $another_node->data_dir;
> +ok (-f "${path}/recovery.signal", 'recovery.signal was created');
> 
> Why are these tests necessary?
> These seem to test that init_from_backup() works expectedly based on the
> parameter "standby". But if we are sure that init_from_backup() works fine,
> these tests don't seem to be necessary.
Absolutely, you are right. Fixed.


> +use Config;
> 
> This is not necessary?
Removed.


> +# Make the wal_level back to replica
> +$node->append_conf('postgresql.conf', $replica_config); $node->restart;
> +check_wal_level('replica', 'wal_level went back to replica again');
> 
> IMO it's better to comment why this server restart is necessary.
> As far as I understand correctly, this is necessary to ensure the WAL file
> containing the record about the change of wal_level (to minimal) is archived,
> so that the subsequent archive recovery will be able to replay it.
OK, added some comments. Further, I felt the way I had written this part was
not good and not self-evident, and developers reading this test might feel
uneasy about that point. So I adjusted the test slightly so that we can be more
confident that the WAL was actually archived.


> > By the way, when I build postgres with this patch and enable-coverage
> > option, the results of RT becomes unstable. Does someone know the
> reason ?
> > When it fails, I get stderr like below
> 
> I have no idea about this. Does this happen even without the patch?
Unfortunately, no. I get this only with --enable-coverage and with my patch,
although the regression tests have passed with this patch.
OSS HEAD doesn't produce the stderr even with --enable-coverage.


Best Regards,
Takamichi Osumi



stronger_safeguard_for_archive_recovery_v06.patch
Description: stronger_safeguard_for_archive_recovery_v06.patch


Unused variable found in AttrDefaultFetch

2021-04-03 Thread Zhihong Yu
Hi,
I was looking at AttrDefaultFetch and saw that the variable found is never
read.

I think it can be removed. See attached patch.

Cheers


def-fetch-found.patch
Description: Binary data


Re: ALTER TABLE ADD COLUMN fast default

2021-04-03 Thread Tom Lane
Andrew Gierth  writes:
> I just got through diagnosing a SEGV crash with someone on IRC, and the
> cause turned out to be exactly this - a table had (for some reason we
> could not determine within the available resources) lost its pg_attrdef
> record for the one column it had with a default (which was a serial
> column, so none of the fast-default code is actually implicated). Any
> attempt to alter the table resulted in a crash in equalTupleDesc on this
> line:
> if (strcmp(defval1->adbin, defval2->adbin) != 0)
> due to trying to compare adbin values which were NULL pointers.

Ouch.

> Does equalTupleDesc need to be more defensive about this, or does the
> above check need to be reinstated?

Looking around at the other touches of AttrDefault.adbin in the backend
(of which there are not that many), some of them are prepared for it to be
NULL and some are not.  I don't immediately have a strong opinion whether
that should be an allowed state; but if it is not allowed then it's not
okay for AttrDefaultFetch to leave such a state behind.  Alternatively,
if it is allowed then equalTupleDesc is in the wrong.  We should choose
one definition and make all the relevant code match it.

regards, tom lane




Re: Lowering the ever-growing heap->pd_lower

2021-04-03 Thread Peter Geoghegan
On Wed, Mar 31, 2021 at 2:49 AM Matthias van de Meent
 wrote:
> I had implemented it locally, but was waiting for some more feedback
> before posting that and got busy with other stuff since, it's now
> attached.
>
> I've also played around with marking the free space on the page as
> undefined for valgrind, but later realized that that would make the
> test for defined memory in PageAddItemExtended fail. This is
> documented in the commit message of the attached patch 0002.

I would like to deal with this work within the scope of the project
we're discussing over on the "New IndexAM API controlling index vacuum
strategies" thread. The latest revision of that patch series includes
a modified version of your patch:

https://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopjmhfqj_g1raj4gwr3z...@mail.gmail.com

Please take discussion around this project over to that other thread.
There are a variety of issues that can only really be discussed in
that context.

Note that I've revised the patch so that it runs during VACUUM's
second heap pass only -- not during pruning/defragmentation. This
means that the line pointer array truncation mechanism will only ever
kick-in during a VACUUM operation.

--
Peter Geoghegan




[PATCH] force_parallel_mode and GUC categories

2021-04-03 Thread Justin Pryzby
Forking this thread
https://www.postgresql.org/message-id/20210403154336.GG29125%40momjian.us

On Sat, Apr  3, 2021 at 08:38:18PM +0530, aditya desai wrote:
> > > >> Yes, force_parallel_mode is on. Should we set it off?

Bruce Momjian  writes:
> > > > Yes.  I bet someone set it without reading our docs:
...
> > > > We might need to clarify this sentence to be clearer it is _only_ for
> > > > testing.

On Sat, Apr 03, 2021 at 11:39:19AM -0400, Tom Lane wrote:
> > > I wonder why it is listed under planner options at all, and not under
> > > developer options.

On Sat, Apr  3, 2021 at 10:41:14AM -0500, Justin Pryzby wrote:
> > Because it's there to help DBAs catch errors in functions incorrectly 
> > marked as
> > parallel safe.

On Sat, Apr 03, 2021 at 11:43:36AM -0400, Bruce Momjian wrote:
> Uh, isn't that developer/debugging?

I understood "developer" to mean someone who's debugging postgres itself, not
(say) a function written using pl/pgsql.  Like backtrace_functions,
post_auth_delay, jit_profiling_support.

But I see that some "dev" options are more user-facing (for a sufficiently
advanced user):
ignore_checksum_failure, ignore_invalid_pages, zero_damaged_pages.

Also, I understood this to mean the "category" in pg_settings, but I guess
what's important here is the absence of the GUC from the sample/template config
file.  pg_settings.category and the sample headings it appears under are
intended to be synchronized, but a few of them are out of sync.  See attached.
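
For illustration, the category a GUC currently falls under can be checked
directly (a sketch):

    SELECT name, category
    FROM pg_settings
    WHERE name IN ('force_parallel_mode', 'track_activity_query_size',
                   'log_autovacuum_min_duration');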

+1 to move this to "developer" options and remove it from the sample config:

# - Other Planner Options -
#force_parallel_mode = off

-- 
Justin
>From ce0c5b99650859bb13f204ea56221ca854a601e1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby 
Date: Sat, 3 Apr 2021 19:06:37 -0500
Subject: [PATCH 1/4] track_activity_query_size is STATS_COLLECTOR category

Not Resource Usage / Memory, as since 995fb7420
---
 contrib/postgres_fdw/postgres_fdw.c | 1 +
 src/backend/utils/misc/guc.c| 2 +-
 src/include/pg_config_manual.h  | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 16c2979f2d..aff6e8c085 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -6797,6 +6797,7 @@ fetch_more_data_begin(AsyncRequest *areq)
 	char		sql[64];
 
 	Assert(!fsstate->conn_state->pendingAreq);
+	Assert(fsstate->conn);
 
 	/* Create the cursor synchronously. */
 	if (!fsstate->cursor_exists)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 584daffc8a..c315dd4bc1 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3435,7 +3435,7 @@ static struct config_int ConfigureNamesInt[] =
 	},
 
 	{
-		{"track_activity_query_size", PGC_POSTMASTER, RESOURCES_MEM,
+		{"track_activity_query_size", PGC_POSTMASTER, STATS_COLLECTOR,
 			gettext_noop("Sets the size reserved for pg_stat_activity.query, in bytes."),
 			NULL,
 			GUC_UNIT_BYTE
diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h
index e28c990382..fdb4fc1de2 100644
--- a/src/include/pg_config_manual.h
+++ b/src/include/pg_config_manual.h
@@ -288,7 +288,7 @@
  * You should normally use MEMORY_CONTEXT_CHECKING with USE_VALGRIND;
  * instrumentation of repalloc() is inferior without it.
  */
-/* #define USE_VALGRIND */
+#define USE_VALGRIND
 
 /*
  * Define this to cause pfree()'d memory to be cleared immediately, to
@@ -313,7 +313,7 @@
  * facilitate catching code that depends on the contents of uninitialized
  * memory.  Caution: this is horrendously expensive.
  */
-/* #define RANDOMIZE_ALLOCATED_MEMORY */
+#define RANDOMIZE_ALLOCATED_MEMORY
 
 /*
  * For cache invalidation debugging, define CLOBBER_CACHE_ENABLED to enable
-- 
2.17.0

>From 6604d8b515ef62e77fea2550e5f040d89fa948f4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby 
Date: Sat, 3 Apr 2021 19:10:01 -0500
Subject: [PATCH 2/4] log_autovacuum_min_duration is LOGGING_WHAT

Not AUTOVACUUM, since 48f7e6439 and ef23a7744
---
 doc/src/sgml/config.sgml  | 56 +--
 src/backend/utils/misc/postgresql.conf.sample |  8 +--
 2 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 9d87b5097a..c4d5126c2a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -6688,6 +6688,34 @@ local0.*/var/log/postgresql
   
  
 
+ 
+  log_autovacuum_min_duration (integer)
+  
+   log_autovacuum_min_duration
+   configuration parameter
+  
+  
+  
+   
+Causes each action executed by autovacuum to be logged if it ran for at
+least the specified amount of time.  Setting this to zero logs
+all autovacuum actions. -1 (the default) disables
+logging autovacuum actions.
+If this value is specified without units, it is taken as milliseconds.
+For example, if you set this to
+

Re: ModifyTable overheads in generic plans

2021-04-03 Thread Tom Lane
Amit Langote  writes:
> On Thu, Apr 1, 2021 at 3:12 AM Tom Lane  wrote:
>> Amit Langote  writes:
> [ v14-0002-Initialize-result-relation-information-lazily.patch ]
>> Needs YA rebase over 86dc90056.

> Done.

I spent some time looking this over.  There are bits of it we can
adopt without too much trouble, but I'm afraid that 0001 (delay
FDW BeginDirectModify until the first actual update) is a nonstarter,
which makes the main idea of delaying ExecInitResultRelation unworkable.

My fear about 0001 is that it will destroy any hope of direct updates
on different remote partitions executing with consistent semantics
(i.e. compatible snapshots), because some row updates triggered by the
local query may have already happened before a given partition gets to
start its remote query.  Maybe we can work around that, but I do not
want to commit a major restructuring that assumes we can dodge this
problem when we don't yet even have a fix for cross-partition updates
that does rely on the assumption of synchronous startup.

In some desultory performance testing here, it seemed like a
significant part of the cost is ExecOpenIndices, and I don't see
a reason offhand why we could not delay/skip that.  I also concur
with delaying construction of ri_ChildToRootMap and the
partition_tuple_routing data structures, since many queries will
never need those at all.

> * PartitionTupleRouting.subplan_resultrel_htab is removed in favor
> of using ModifyTableState.mt_resultOidHash to look up an UPDATE
> result relation by OID.

Hmm, that sounds promising too, though I didn't look at the details.

Anyway, I think the way to proceed for now is to grab the low-hanging
fruit of things that clearly won't change any semantics.  But tail end
of the dev cycle is no time to be making really fundamental changes
in how FDW direct modify works.

regards, tom lane




Re: ALTER TABLE ADD COLUMN fast default

2021-04-03 Thread Andrew Gierth
[warning, reviving a thread from 2018]

> "Andrew" == Andrew Dunstan  writes:

 > On Wed, Feb 21, 2018 at 7:48 PM, Andres Freund  wrote:
 >> Hi,

 Andrew> Comments interspersed.

 >>> @@ -4059,10 +4142,6 @@ AttrDefaultFetch(Relation relation)
 >>> 
 >>> systable_endscan(adscan);
 >>> heap_close(adrel, AccessShareLock);
 >>> -
 >>> - if (found != ndef)
 >>> - elog(WARNING, "%d attrdef record(s) missing for rel %s",
 >>> -  ndef - found, RelationGetRelationName(relation));
 >>> }
 >> 
 >> Hm, it's not obvious why this is a good thing?

I didn't find an answer to this in the thread archive, and the above
change apparently did make it into the final patch.

I just got through diagnosing a SEGV crash with someone on IRC, and the
cause turned out to be exactly this - a table had (for some reason we
could not determine within the available resources) lost its pg_attrdef
record for the one column it had with a default (which was a serial
column, so none of the fast-default code is actually implicated). Any
attempt to alter the table resulted in a crash in equalTupleDesc on this
line:

if (strcmp(defval1->adbin, defval2->adbin) != 0)

due to trying to compare adbin values which were NULL pointers.

So, questions: why was the check removed in the first place?

(Why was it previously only a warning when it causes a crash further
down the line on any alteration?)

Does equalTupleDesc need to be more defensive about this, or does the
above check need to be reinstated?

(The immediate issue was fixed by "update pg_attribute set
atthasdef=false ..." for the offending attribute and then adding it back
with ALTER TABLE, which seems to have cured the crash.)
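
For anyone hitting the same thing, a catalog query along these lines (a
sketch) should locate columns left in that inconsistent state:

    -- columns marked as having a default but lacking a pg_attrdef row
    SELECT a.attrelid::regclass AS rel, a.attname
    FROM pg_attribute a
    WHERE a.atthasdef
      AND NOT EXISTS (SELECT 1 FROM pg_attrdef d
                      WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum);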

-- 
Andrew (irc:RhodiumToad)




Allowing dsm allocations in single user mode

2021-04-03 Thread Andres Freund
Hi,

Right now dsm_create() has the following assertion:
/* Unsafe in postmaster (and pointless in a stand-alone backend). */
Assert(IsUnderPostmaster);

I agree with the "unsafe in postmaster" bit. But I'm not convinced by
the "pointless in a stand-alone backend" part.

We're starting to build building blocks of the system using DSM now, and
several of those seem like they should work the same whether in single
user mode or not.

I just hit this when testing whether the shared memory stats support
works in single user mode: It does, as long as only a few stats exist,
after that this assertion is hit, removing the assertion solves that.

Today the stats system doesn't work in single user mode, in a weird way:
2021-04-03 16:01:39.872 PDT [3698737][not initialized][1/3:0] LOG:  using stale 
statistics instead of current ones because stats collector is not responding
2021-04-03 16:01:39.872 PDT [3698737][not initialized][1/3:0] STATEMENT:  
select * from pg_stat_all_tables;
then proceeding to return a lot of 0s and NULLs.

I think that's not great: E.g. when hitting wraparound issues, checking
something like pg_stat_all_tables.last_vacuum seems like an entirely
reasonable thing to do.
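
For example, the kind of check one would reasonably want to run in that
situation (a sketch):

    -- which tables have gone longest without any (auto)vacuum?
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_all_tables
    ORDER BY GREATEST(last_vacuum, last_autovacuum) NULLS FIRST
    LIMIT 10;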

Obviously not something we'd fix with the current stats collector
approach, but I don't think it's something we should cargo cult forward
either.

Therefore I propose replacing the assertion with something along the
lines of
Assert(IsUnderPostmaster || !IsPostmasterEnvironment);

Greetings,

Andres Freund




Re: Confusing behavior of psql's \e

2021-04-03 Thread Tom Lane
Laurenz Albe  writes:
> Attached is version 6.

Pushed with some mostly-cosmetic fiddling.

One thing I changed that wasn't cosmetic is that as you had it,
the behavior of "\e file" varied depending on whether the query
buffer had been empty, which surely seems like a bad idea.
I made it do discard_on_quit always in that case.  I think there
might be a case for discard_on_quit = false always, ie maybe
the user wanted to load the file into the query buffer as-is.
But it seems like a pretty weak case --- you'd be more likely
to just use \i for that situation.

regards, tom lane




Re: insensitive collations

2021-04-03 Thread Daniel Verite
Jim Finnerty wrote:

> SET client_encoding = WIN1252;
> [...]
> postgres=# SELECT * FROM locations WHERE location LIKE 'Franche-Comt__';  --
> the wildcard is applied byte by byte instead of character by character, so
> the 2-byte accented character is matched only by 2 '_'s
>location
> 
> Franche-Comté
> (1 row)

The most plausible explanation is that the client-side text is encoded
in UTF-8, rather than WIN1252 as declared.

If you added  length('Franche-Comté') to the above query, I suspect
it would tell that the string is one character longer than
expected, and octet_length('Franche-Comté') would be
two-byte longer than expected.

Also dumping the contents of the "location" column with
convert_to() would show that the accents have been
wrongly translated, if the explanation of the encoding snafu is
correct.
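
Concretely, the checks suggested above could look like this (a sketch,
reusing the table from the report):

    SELECT location,
           length(location)       AS characters,
           octet_length(location) AS bytes,
           convert_to(location, 'WIN1252') AS raw_bytes
    FROM locations
    WHERE location LIKE 'Franche-Comt%';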


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite




Re: SP-GiST confusion: indexed column's type vs. index column type

2021-04-03 Thread Tom Lane
Here's a patch that, in addition to what I mentioned upthread,
rescinds the limitation that user-defined SPGIST opclasses can't
set the STORAGE parameter, and cleans up some residual confusion
about whether values are of the indexed type (attType) or the
storage type (leafType).  Once I'd wrapped my head around the idea
that indeed intermediate-level "reconstructed" values ought to be
of the leafType, there were fewer bugs of that sort than I thought
yesterday ... but still a nonzero number.

I've also attached a test module that exercises reconstruction
during index-only scan with leafType being meaningfully different
from attType.  I'm not quite sure whether this is worth
committing, but I'm leaning towards doing so.

regards, tom lane

diff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml
index ea88ae45e5..9ed1f564ee 100644
--- a/doc/src/sgml/spgist.sgml
+++ b/doc/src/sgml/spgist.sgml
@@ -205,10 +205,12 @@
  
 
  
-  Leaf tuples of an SP-GiST tree contain values of the
-  same data type as the indexed column.  Leaf tuples at the root level will
-  always contain the original indexed data value, but leaf tuples at lower
-  levels might contain only a compressed representation, such as a suffix.
+  Leaf tuples of an SP-GiST tree usually contain values
+  of the same data type as the indexed column, although it is also possible
+  for them to contain lossy representations of the indexed column.
+  Leaf tuples stored at the root level will directly represent
+  the original indexed data value, but leaf tuples at lower
+  levels might contain only a partial value, such as a suffix.
   In that case the operator class support functions must be able to
   reconstruct the original value using information accumulated from the
   inner tuples that are passed through to reach the leaf level.
@@ -330,19 +332,26 @@ typedef struct spgConfigOut
  
 
  
-  leafType is typically the same as
-  attType.  For the reasons of backward
-  compatibility, method config can
-  leave leafType uninitialized; that would
-  give the same effect as setting leafType equal
-  to attType.  When attType
-  and leafType are different, then optional
+  leafType must match the index storage type
+  defined by the operator class's opckeytype
+  catalog entry.  For reasons of backward compatibility,
+  method config can
+  leave leafType uninitialized (zero);
+  that is interpreted as meaning opckeytype.
+  (Recall that opckeytype can in turn be zero,
+  implying the storage type is the same as the operator class's input
+  type, which is the most common situation.)  In what
+  follows, leafType should be understood as
+  referring to the fully resolved leaf data type.
+ 
+
+ 
+  When attType
+  and leafType are different, the optional
   method compress must be provided.
   Method compress is responsible
   for transformation of datums to be indexed from attType
   to leafType.
-  Note: both consistent functions will get scankeys
-  unchanged, without transformation using compress.
  
  
 
@@ -677,8 +686,7 @@ typedef struct spgInnerConsistentOut
reconstructedValue is the value reconstructed for the
parent tuple; it is (Datum) 0 at the root level or if the
inner_consistent function did not provide a value at the
-   parent level. reconstructedValue is always of
-   spgConfigOut.leafType type.
+   parent level.
traversalValue is a pointer to any traverse data
passed down from the previous call of inner_consistent
on the parent index tuple, or NULL at the root level.
@@ -713,9 +721,14 @@ typedef struct spgInnerConsistentOut
necessarily so, so an array is used.)
If value reconstruction is needed, set
reconstructedValues to an array of the values
-   of spgConfigOut.leafType type
reconstructed for each child node to be visited; otherwise, leave
reconstructedValues as NULL.
+   The reconstructed values are assumed to be of type
+   spgConfigOut.leafType.
+   (However, since the core system will do nothing with them except
+   possibly copy them, it is sufficient for them to have the
+   same typlen and typbyval
+   properties as leafType.)
If ordered search is performed, set distances
to an array of distance values according to orderbys
array (nodes with lowest distances will be processed first).  Leave it
@@ -797,8 +810,7 @@ typedef struct spgLeafConsistentOut
reconstructedValue is the value reconstructed for the
parent tuple; it is (Datum) 0 at the root level or if the
inner_consistent function did not provide a value at the
-   parent level. reconstructedValue is always of
-   spgConfigOut.leafType type.
+   parent level.
traversalValue is a pointer to any traverse data
passed 

Re: MultiXact\SLRU buffers configuration

2021-04-03 Thread Andrey Borodin



> On Apr 1, 2021, at 06:40, Thomas Munro  wrote:
> 
> 2.  Remove the cap of 128 buffers for xact_buffers as agreed.  We
> still need a cap though, to avoid a couple of kinds of overflow inside
> slru.c, both when computing the default value and accepting a
> user-provided number.  I introduced SLRU_MAX_ALLOWED_BUFFERS to keep
> it <= 1GB, and tested this on a 32 bit build with extreme block sizes.
BTW we do not document maximum values right now.
I was toying around with big values. For example, if we set various big
xact_buffers values we can get something like
FATAL:  not enough shared memory for data structure "Notify" (72768 bytes 
requested)
FATAL:  not enough shared memory for data structure "Async Queue Control" (2492 
bytes requested)
FATAL:  not enough shared memory for data structure "Checkpointer Data" (393280 
bytes requested)

But never anything about xact_buffers. I don't think it's important, though.

> 
> Likewise, I removed the cap of 16 buffers for commit_ts_buffers, but
> only if you have track_commit_timestamp enabled.  
Is there a reason to leave 16 pages if commit_ts is disabled? They might be 
useful for some artefacts of previously enabled commit_ts?

> 4.  Change the default for commit_ts_buffers back to shared_buffers /
> 1024 (with a minimum of 4), because I think you might have changed it
> by a copy and paste error -- or did you intend to make the default
> higher?
I changed the default due to some experiments with
https://www.postgresql.org/message-id/flat/20210115220744.GA24457%40alvherre.pgsql
In fact, the most important part of that thread was removing the cap, which the
patchset now does.

Thanks!

Best regards, Andrey Borodin.







Re: Autovacuum on partitioned table (autoanalyze)

2021-04-03 Thread Alvaro Herrera
Thanks for the quick rework.  I like this design much better and I think
this is pretty close to committable.  Here's a rebased copy with some
small cleanups (most notably, avoid calling pgstat_propagate_changes
when the partition doesn't have a tabstat entry; also, free the lists
that are allocated in a couple of places).

I didn't actually verify that it works.
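
For reference, what the patch is meant to allow is setting the analyze-related
reloptions directly on a partitioned table, e.g. (table name hypothetical):

    ALTER TABLE parted SET (autovacuum_enabled = on,
                            autovacuum_analyze_scale_factor = 0.05,
                            autovacuum_analyze_threshold = 500);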

-- 
Álvaro Herrera   Valdivia, Chile
"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.
Escriba un guión que no toque nada para no causar daños." (Jakob Nielsen)
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index d897bbec2b..5554275e64 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -108,7 +108,7 @@ static relopt_bool boolRelOpts[] =
 		{
 			"autovacuum_enabled",
 			"Enables autovacuum in this relation",
-			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST | RELOPT_KIND_PARTITIONED,
 			ShareUpdateExclusiveLock
 		},
 		true
@@ -246,7 +246,7 @@ static relopt_int intRelOpts[] =
 		{
 			"autovacuum_analyze_threshold",
 			"Minimum number of tuple inserts, updates or deletes prior to analyze",
-			RELOPT_KIND_HEAP,
+			RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
 			ShareUpdateExclusiveLock
 		},
 		-1, 0, INT_MAX
@@ -420,7 +420,7 @@ static relopt_real realRelOpts[] =
 		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
-			RELOPT_KIND_HEAP,
+			RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
 			ShareUpdateExclusiveLock
 		},
 		-1, 0.0, 100.0
@@ -1962,12 +1962,11 @@ bytea *
 partitioned_table_reloptions(Datum reloptions, bool validate)
 {
 	/*
-	 * There are no options for partitioned tables yet, but this is able to do
-	 * some validation.
+	 * autovacuum_enabled, autovacuum_analyze_threshold and
+	 * autovacuum_analyze_scale_factor are supported for partitioned tables.
 	 */
-	return (bytea *) build_reloptions(reloptions, validate,
-	  RELOPT_KIND_PARTITIONED,
-	  0, NULL, 0);
+
+	return default_reloptions(reloptions, validate, RELOPT_KIND_PARTITIONED);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 5f2541d316..fb41b06539 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -660,7 +660,7 @@ CREATE VIEW pg_stat_all_tables AS
 FROM pg_class C LEFT JOIN
  pg_index I ON C.oid = I.indrelid
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
-WHERE C.relkind IN ('r', 't', 'm')
+WHERE C.relkind IN ('r', 't', 'm', 'p')
 GROUP BY C.oid, N.nspname, C.relname;
 
 CREATE VIEW pg_stat_xact_all_tables AS
@@ -680,7 +680,7 @@ CREATE VIEW pg_stat_xact_all_tables AS
 FROM pg_class C LEFT JOIN
  pg_index I ON C.oid = I.indrelid
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
-WHERE C.relkind IN ('r', 't', 'm')
+WHERE C.relkind IN ('r', 't', 'm', 'p')
 GROUP BY C.oid, N.nspname, C.relname;
 
 CREATE VIEW pg_stat_sys_tables AS
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index f84616d3d2..35e9a2fc17 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -655,20 +655,22 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
 InvalidMultiXactId,
 in_outer_xact);
 		}
+	}
 
-		/*
-		 * Now report ANALYZE to the stats collector.
-		 *
-		 * We deliberately don't report to the stats collector when doing
-		 * inherited stats, because the stats collector only tracks per-table
-		 * stats.
-		 *
-		 * Reset the changes_since_analyze counter only if we analyzed all
-		 * columns; otherwise, there is still work for auto-analyze to do.
-		 */
+	/*
+	 * Now report ANALYZE to the stats collector.
+	 *
+	 * Regarding inherited stats, we report only in the case of declarative
+	 * partitioning.  For partitioning based on inheritance, stats collector
+	 * only tracks per-table stats.
+	 *
+	 * Reset the changes_since_analyze counter only if we analyzed all
+	 * columns; otherwise, there is still work for auto-analyze to do.
+	 */
+	if (!inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
 			  (va_cols == NIL));
-	}
+
 
 	/*
 	 * If this isn't part of VACUUM ANALYZE, let index AMs do cleanup.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 23ef23c13e..7ca074a800 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -74,7 +74,9 @@
 #include "access/xact.h"
 #include "catalog/dependency.h"
 #include "catalog/namespace.h"
+#include "catalog/partition.h"
 #include "catalog/pg_database.h"
+#include "catalog/pg_inherits.h"
 #include "commands/dbcommands.h"
 #include "commands/vacuum.h"
 #include "lib/ilist.h"
@@ -350,6 +352,8 

Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Alvaro Herrera
On 2021-Apr-03, Joel Jacobson wrote:

> I'm actually using it myself in production for something, to display
> instructions to users when they login.

Yeah, such as

"If your CREATE sentences don't work, please run
CREATE SCHEMA AUTHORIZATION CURRENT_USER"

for systems where the PUBLIC schema has been dropped.

-- 
Álvaro Herrera   Valdivia, Chile
 It does it in a really, really complicated way
 why does it need to be complicated?
 Because it's MakeMaker.




Re: Making wait events a bit more efficient

2021-04-03 Thread Andres Freund
On 2021-04-02 19:55:16 -0700, Andres Freund wrote:
> On 2021-04-02 12:44:58 -0700, Andres Freund wrote:
> > If we went for the my_wait_event_info approach there is one further
> > advantage, after my change to move the wait event code into a separate
> > file: wait_event.h does not need to include proc.h anymore, which seems
> > architecturally nice for things like fd.c.
> 
> That part turns out to make one aspect of the shared memory stats patch
> cleaner, so I am planning to push this commit fairly soon, unless
> somebody sees a reason not to do so?

Done.




Re: Additional Chapter for Tutorial - arch-dev.sgml

2021-04-03 Thread Alvaro Herrera
On 2021-Apr-03, Jürgen Purtz wrote:

> On 03.04.21 15:39, Alvaro Herrera wrote:
> > Yes, there is.  AFAICS Heikki committed a small wordsmithing patch --
> > not the large patch with the additional chapter.
> 
What can I do to move the matter forward?

Please post a version that applies to the current sources.  If the
latest version posted does, please state so.

-- 
Álvaro Herrera39°49'30"S 73°17'W




Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Chapman Flack
On 04/03/21 14:24, Joel Jacobson wrote:
> Thanks for noticing.
> I've updated the ascii art now using the version from swisspug.org,
> does it look correct now to you?
> 
> $ psql -U brad -h pit.org motd
> 
> NOTICE:
>     __  ___
> /)/  \/   \
> ( / ___\)
> \(/ o)  ( o)   )
>   \_  (_  )   \ ) _/
> \  /\_/\)/
>  \/ 
>   _|  |
>   \|_/

In the email as I received it (including in view-source, so it is
not a display artifact), rows 2 and 4 are now missing an initial space.

I've pulled up the version from the list archive also[1], and see the
same issue there.

I'm not /only/ trying to be funny here ... I'm wondering if there could
be something relevant to be learned from finding out where the initial
space is being dropped, and why.

Regards,
-Chap


[1]
https://www.postgresql.org/message-id/b8855527-88f5-4613-a258-8523cbded8be%40www.fastmail.com




Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Joel Jacobson
On Sat, Apr 3, 2021, at 15:43, Chapman Flack wrote:
> Now there's some kind of Max Headroom thing going on with the second row,
> and this time I'm not sure how to explain it. (I knew the backslashes were
> because they weren't doubled.)
> 
> I have done 'view source' in my mail client to make sure it's not just
> some display artifact on my end. Something has eaten a space before that
> left paren. What would do that?

Thanks for noticing.
I've updated the ascii art now using the version from swisspug.org,
does it look correct now to you?

$ psql -U brad -h pit.org motd

NOTICE:
    __  ___
/)/  \/   \
( / ___\)
\(/ o)  ( o)   )
  \_  (_  )   \ ) _/
\  /\_/\)/
 \/ 
  _|  |
  \|_/
To be or not to be.
-- Shakespeare
To do is to be.
-- Nietzsche
To be is to do.
-- Sartre
Do be do be do.
-- Sinatra

/Joel

Re: multi-install PostgresNode fails with older postgres versions

2021-04-03 Thread Andrew Dunstan


On 3/31/21 10:28 PM, Mark Dilger wrote:
>
>> On Mar 31, 2021, at 1:07 PM, Mark Dilger  
>> wrote:
>>
>>
>>
>>> On Mar 31, 2021, at 1:05 PM, Andrew Dunstan  wrote:
>>>
>>>
>>> On 3/31/21 3:48 PM, Alvaro Herrera wrote:
 On 2021-Mar-31, Mark Dilger wrote:

> PostgresNode::start() doesn't work for servers older than version 10,
> either.  If I hack that function to sleep until the postmaster.pid
> file exists, it works, but that is really ugly and is just to prove to
> myself that it is a timing issue.  There were a few commits in the
> version 10 development cycle (cf, commit
> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl
> works, though I haven't figured out yet exactly what the interaction
> with PostgresNode would be.  I'll keep looking.
 Do you need to do "pg_ctl -w" perhaps?
>>>
>>>
>>> Probably. The buildfarm does this unconditionally and has done for a
>>> very long time, so maybe we don't need a version test for it.
>> I put a version test for this and it works for me.  I guess you could do it 
>> unconditionally, if you want, but the condition is just:
>>
>> -   TestLib::system_or_bail('pg_ctl', '-D', $pgdata, '-l', $logfile,
>> +   TestLib::system_or_bail('pg_ctl',
>> +   $self->older_than_version('10') ? '-w' : (),
>> +   '-D', $pgdata, '-l', $logfile,
>>'restart');
> I have needed to do a number of these version checks to get PostgresNode 
> working across a variety of versions.  Attached is a WIP patch set with those 
> changes, and with a framework that exercises PostgresNode and can be extended 
> to check other things.  For now, it just checks that init(), start(), 
> safe_psql(), and teardown_node() work.
>
> With the existing changes to PostgresNode in 0001, the framework in 0002 
> works for server versions back to 9.3.  Versions 9.1 and 9.2 fail on the 
> safe_psql(), and I haven't dug into that far enough yet to explain why.  
> Versions 8.4 and 9.0 fail on the start().  I had trouble getting versions of 
> postgres older than 8.4 to compile on my laptop.  I haven't dug far enough 
> into that yet, either.
>
> To get this running, you need to install the versions you care about and edit 
> src/test/modules/test_cross_version/version.dat with the names and locations 
> of those installations.  (I committed the patch with my local settings, so 
> you can easily compare and edit.)  That should get you to the point where you 
> can run 'make check' in the test_cross_version directory.


I've had a look at the first of these patches. I think it's generally
ok, but:


-    TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust', '-N',
+    TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust',
+        $self->at_least_version("9.3") ? '-N' : (),
     @{ $params{extra} });


I'd rather do this in two steps to make it clearer.


I still think just doing pg_ctl -w unconditionally would be simpler.


Prior to 9.3 "unix_socket_directories" was spelled
"unix_socket_directory". We should just set a variable appropriately and
use it. That should make the changes around that a whole lot simpler.
(c.f. buildfarm code)


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Additional Chapter for Tutorial - arch-dev.sgml

2021-04-03 Thread Jürgen Purtz

On 03.04.21 15:39, Alvaro Herrera wrote:

Yes, there is.  AFAICS Heikki committed a small wordsmithing patch --
not the large patch with the additional chapter.


What can I do to move the matter forward?

--

J. Purtz






Re: Replication slot stats misgivings

2021-04-03 Thread vignesh C
On Fri, Apr 2, 2021 at 11:28 AM Bharath Rupireddy
 wrote:
>
> On Fri, Apr 2, 2021 at 9:57 AM vignesh C  wrote:
> > Thanks for the comments, I will fix the comments and provide a patch
> > for this soon.
>

Thanks for the comments.

> Here are some comments:
> 1) How about something like below
> +(errmsg("skipping \"%s\" replication slot
> statistics as the statistic collector process does not have enough
> statistic slots",
> instead of
> +(errmsg("skipping \"%s\" replication
> slot's statistic as the statistic collector process does not have
> enough statistic slots",
>

Modified.

> 2) Does it mean "pg_statistic slots" when we say "statistic slots" in
> the above warning? If yes, why can't we use "pg_statistic slots"
> instead of "statistic slots" as with another existing message
> "insufficient pg_statistic slots for array stats"?
>

Here pg_stat_replication_slots will not have enough slots. I changed
it to the below:
errmsg("skipping \"%s\" replication slot statistics as
pg_stat_replication_slots does not have enough slots"
Thoughts?

> 3) Should we change the if condition to max_replication_slots <=
> nReplSlotStats instead of max_replication_slots == nReplSlotStats? In
> the scenario, it is mentioned that "one of the replication slots is
> dropped", will this issue occur when multiple replication slots are
> dropped?
>

I felt it should be max_replication_slots == nReplSlotStats: if
max_replication_slots = 5, we will be able to store 5 replication slot
statistics at indexes 0..4, and there is no space beyond that. I think this
need not be changed.

> 4) Let's end the statement after this and start a new one, something like 
> below
> + * this. To avoid writing beyond the max_replication_slots
> instead of
> + * this, to avoid writing beyond the max_replication_slots
>

Changed it.

> 5) How about something like below
> + * this. To avoid writing beyond the max_replication_slots,
> + * this replication slot statistics information will
> be skipped.
> + */
> instead of
> + * this, to avoid writing beyond the max_replication_slots
> + * these replication slot statistic information will
> be skipped.
> + */
>

Changed it.

> 6) Any specific reason to use a new local variable replSlotStat and
> later memcpy into replSlotStats[nReplSlotStats]? Instead we could
> directly fread into &replSlotStats[nReplSlotStats] and do
> memset(&replSlotStats[nReplSlotStats], 0,
> sizeof(PgStat_ReplSlotStats)); before the warnings. As warning
> scenarios seem to be less frequent, we could avoid doing memcpy
> always.
> -if (fread(&replSlotStats[nReplSlotStats], 1,
> sizeof(PgStat_ReplSlotStats), fpin)
> +if (fread(&replSlotStat, 1, sizeof(PgStat_ReplSlotStats), fpin)
>
> +memcpy(&replSlotStats[nReplSlotStats], &replSlotStat,
> sizeof(PgStat_ReplSlotStats));
>

I wanted to avoid executing the memcpy multiple times, but your
explanation makes sense: keeping the memcpy in the failure path lets the
common path be faster. Changed it.
These comments are fixed in the v2 patch posted in my previous mail.

Regards,
Vignesh




Re: Replication slot stats misgivings

2021-04-03 Thread vignesh C
On Fri, Apr 2, 2021 at 9:29 AM Masahiko Sawada  wrote:
>
> On Fri, Apr 2, 2021 at 1:55 AM vignesh C  wrote:
> >
> > On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila  wrote:
> > >
> > > On Thu, Apr 1, 2021 at 3:43 PM vignesh C  wrote:
> > > >
> > > > On Wed, Mar 31, 2021 at 11:32 AM vignesh C  wrote:
> > > > >
> > > > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund  
> > > > > wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:
> > > > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund 
> > > > > > >  wrote:
> > > > > > > > Any chance you could write a tap test exercising a few of these 
> > > > > > > > cases?
> > > > > > >
> > > > > > > I can try to write a patch for this if nobody objects.
> > > > > >
> > > > > > Cool!
> > > > > >
> > > > >
> > > > > Attached a patch which has the test for the first scenario.
> > > > >
> > > > > > > > E.g. things like:
> > > > > > > >
> > > > > > > > - create a few slots, drop one of them, shut down, start up, 
> > > > > > > > verify
> > > > > > > >   stats are still sane
> > > > > > > > - create a few slots, shut down, manually remove a slot, lower
> > > > > > > >   max_replication_slots, start up
> > > > > > >
> > > > > > > Here by "manually remove a slot", do you mean to remove the slot
> > > > > > > manually from the pg_replslot folder?
> > > > > >
> > > > > > Yep - thereby allowing max_replication_slots after the 
> > > > > > shutdown/start to
> > > > > > be lower than the number of slots-stats objects.
> > > > >
> > > > > I have not included the 2nd test in the patch as the test fails with
> > > > > following warnings and also displays the statistics of the removed
> > > > > slot:
> > > > > WARNING:  problem in alloc set Statistics snapshot: detected write
> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438
> > > > > WARNING:  problem in alloc set Statistics snapshot: detected write
> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438
> > > > >
> > > > > This happens because the statistics file has an additional slot
> > > > > present even though the replication slot was removed.  I felt this
> > > > > issue should be fixed. I will try to fix this issue and send the
> > > > > second test along with the fix.
> > > >
> > > > I felt from the statistics collector process, there is no way in which
> > > > we can identify if the replication slot is present or not because the
> > > > statistic collector process does not have access to shared memory.
> > > > Anything that the statistic collector process does independently by
> > > > traversing and removing the statistics of the replication slot
> > > > exceeding the max_replication_slot has its drawback of removing some
> > > > valid replication slot's statistics data.
> > > > Any thoughts on how we can identify the replication slot which has been 
> > > > dropped?
> > > > Can someone point me to the shared stats patch link with which message
> > > > loss can be avoided. I wanted to see a scenario where something like
> > > > the slot is dropped but the statistics are not updated because of an
> > > > immediate shutdown or server going down abruptly can occur or not with
> > > > the shared stats patch.
> > > >
> > >
> > > I don't think it is easy to simulate a scenario where the 'drop'
> > > message is dropped and I think that is why the test contains the step
> > > to manually remove the slot. At this stage, you can probably provide a
> > > test patch and a code-fix patch where it just drops the extra slots
> > > from the stats file. That will allow us to test it with a shared
> > > memory stats patch on which Andres and Horiguchi-San are working. If
> > > we still continue to pursue with current approach then as Andres
> > > suggested we might send additional information from
> > > RestoreSlotFromDisk to keep it in sync.
> >
> > Thanks for your comments, Attached patch has the fix for the same.
> > Also attached a couple of more patches which addresses the comments
> > which Andres had listed i.e changing char to NameData type and also to
> > display the unspilled/unstreamed transaction information in the
> > replication statistics.
> > Thoughts?
>
> Thank you for the patches!
>
> I've looked at those patches and here are some comments on 0001, 0002,
> and 0003 patch:

Thanks for the comments.

> 0001 patch:
>
> -   values[0] = PointerGetDatum(cstring_to_text(s->slotname));
> +   values[0] = PointerGetDatum(cstring_to_text(s->slotname.data));
>
> We can use NameGetDatum() instead.

I felt we will not be able to use NameGetDatum because this function
will not have access to the value throughout the loop, and the caller of
NameGetDatum must ensure the pointed-to value has adequate lifetime.

> ---
> 0002 patch:
>
> The patch uses logical replication to test replication slot
> statistics but I don't think that's necessary. It would be
> simpler to use logical decoding. Maybe we can add TAP tests to
> contrib/test_decoding.
>

I will try 

Re: SP-GiST confusion: indexed column's type vs. index column type

2021-04-03 Thread Tom Lane
I wrote:
> I still want to make an opclass in which those types are different,
> if only for testing purposes, but I'm having a hard time coming up
> with a plan that's not totally lame.  Best idea I can think of is
> to wrap the input in a bytea, which just begs the question "why
> would you do that?".  Anybody have a less lame thought?

I thought of a plan that's at least simple to code: make an opclass
that takes "name" but does all the internal storage as "text".  Then
all the code can be stolen from spgtextproc.c with very minor changes.
I'd been too fixated on finding an example in which attType and
leafType differ as to pass-by-ref vs pass-by-value, but actually a
test case with positive typlen vs. varlena typlen will do just as well
for finding wrong-type references.

And, having coded that up, my first test result is

regression=# create extension spgist_name_ops ;
ERROR:  storage type cannot be different from data type for access method 
"spgist"

evidently because SPGiST doesn't set amroutine->amstorage.
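
For reference, I believe the same error can be reproduced without any
extension at all by borrowing the built-in text support functions for an
opclass whose input type is "name", since the check fires as soon as the
STORAGE type differs from the indexed type (the opclass name below is
made up, of course):

  CREATE OPERATOR CLASS name_as_text_ops FOR TYPE name USING spgist AS
      FUNCTION 1 spg_text_config(internal, internal),
      FUNCTION 2 spg_text_choose(internal, internal),
      FUNCTION 3 spg_text_picksplit(internal, internal),
      FUNCTION 4 spg_text_inner_consistent(internal, internal),
      FUNCTION 5 spg_text_leaf_consistent(internal, internal),
      STORAGE text;
  ERROR:  storage type cannot be different from data type for access method "spgist"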

That's silly on its face because we have built-in opclasses in which
those types are different, but it probably helps explain why there are
no field reports of trouble with these bugs ...

regards, tom lane




Re: CLUSTER on partitioned index

2021-04-03 Thread Zhihong Yu
Hi,
For v10-0002-Implement-CLUSTER-of-partitioned-table.patch :

or that an partitioned index was previously set clustered.

'an partitioned index' -> a partitioned index

+ * Return a List of tables and associated index, where each index is a

associated index -> associated indices

For cluster():
-   rel = table_open(tableOid, NoLock);
+   rel = table_open(tableOid, ShareUpdateExclusiveLock);

Considering the comment preceding cluster() (forced to acquire exclusive
locks on all the tables), maybe add a comment explaining why it is safe to
take ShareUpdateExclusiveLock.

+cluster_multiple_rels(List *rvs, int options)

I think the "multiple" in the function name is not needed since the
relation list argument is already plural.

Cheers

On Fri, Apr 2, 2021 at 1:03 PM Justin Pryzby  wrote:

> @cfbot: rebased
>


Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Fabien COELHO




Perhaps the configuration-file parser has been fixed since to support 
embedded newlines? If so, then maybe it would actually be an idea to 
support newlines by escaping them?


Dunno.

If such a feature gets considered, I'm not sure I'd like to actually edit 
pg configuration file to change the message.


The actual source looks pretty straightforward. I'm wondering whether pg 
style would suggest to write motd != NULL instead of just motd.


I'm wondering whether it should be possible to designate (1) a file the 
content of which would be shown, or (2) a command, the output of which 
would be shown [ok, there might be security implications on this one].


--
Fabien.




Re: TRUNCATE on foreign table

2021-04-03 Thread Zhihong Yu
Continuing previous review...

+   relids_extra = lappend_int(relids_extra,
TRUNCATE_REL_CONTEXT__CASCADED);

I wonder if TRUNCATE_REL_CONTEXT_CASCADING is better
than TRUNCATE_REL_CONTEXT__CASCADED. Note the removal of the extra
underscore.
In English, we say: truncation cascading to a foreign table.

w.r.t. Bharath's question on using a hash table, I think the reason is that
the search would be more efficient:

+   ft_info = hash_search(ft_htab, _oid, HASH_ENTER, );
and
+   while ((ft_info = hash_seq_search()) != NULL)


+* Now go through the hash table, and process each entry associated to
the
+* servers involved in the TRUNCATE.

associated to -> associated with

Should the hash table be released at the end of ExecuteTruncateGuts() ?

Cheers

On Sat, Apr 3, 2021 at 7:38 AM Zhihong Yu  wrote:

> Hi,
> + TRUNCATE for each foreign server being involved
> + in one TRUNCATE command (note that invocations
>
> The 'being' in above sentence can be omitted.
>
> + the context where the foreign-tables are truncated. It is a list of
> integers and same length with
>
> There should be a verb between 'and' and same :
> It is a list of integers and has same length with
>
> + * Information related to truncation of foreign tables.  This is used for
> + * the elements in a hash table *that* uses the server OID as lookup key,
>
> The 'uses' is for 'This' (the struct), so 'that' should be 'and':
>
> the elements in a hash table and uses
>
> Alternatively:
>
> the elements in a hash table. It uses
>
> +   relids_extra = lappend_int(relids_extra, (recurse ?
> TRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));
>
> I am curious: isn't one underscore enough in the identifier (before NORMAL
> and ONLY) ?
>
> I suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and
> TRUNCATE_REL_CONTEXT_ONLY
>
> Cheers
>
> On Sat, Apr 3, 2021 at 6:46 AM Kazutaka Onishi 
> wrote:
>
>> Sorry but I found the v7 patch has typo and it can't be built...
>> I attached fixed one(v8).
>>
>> On Sat, Apr 3, 2021 at 9:53 Kazutaka Onishi  wrote:
>> >
>> > All,
>> >
>> > Thank you for discussion.
>> > I've updated the patch (v6->v7) according to the conclusion.
>> >
>> > I'll show the modified points:
>> > 1. Comments for ExecuteTruncate()
>> > 2. Replacing extra value in frels_extra with integer to label.
>> > 3. Skipping XLOG_HEAP_TRUNCATE on foreign table
>> >
>> > Regards,
>> >
>> > On Fri, Apr 2, 2021 at 11:44 Fujii Masao  wrote:
>> > >
>> > >
>> > >
>> > > On 2021/04/02 9:37, Kohei KaiGai wrote:
>> > > > It is fair enough for me to reverse the order of actual truncation.
>> > > >
>> > > > How about the updated comments below?
>> > > >
>> > > >  This is a multi-relation truncate.  We first open and grab
>> exclusive
>> > > >  lock on all relations involved, checking permissions (local
>> database
>> > > >  ACLs even if relations are foreign-tables) and otherwise
>> verifying
>> > > >  that the relation is OK for truncation. In CASCADE mode,
>> ...(snip)...
>> > > >  Finally all the relations are truncated and reindexed. If any
>> foreign-
>> > > >  tables are involved, its callback shall be invoked prior to
>> the truncation
>> > > >  of regular tables.
>> > >
>> > > LGTM.
>> > >
>> > >
>> > > >> BTW, the latest patch doesn't seem to be applied cleanly to the
>> master
>> > > >> because of commit 27e1f14563. Could you rebase it?
>> > > >>
>> > > > Onishi-san, go ahead. :-)
>> > >
>> > > +1
>> > >
>> > > Regards,
>> > >
>> > > --
>> > > Fujii Masao
>> > > Advanced Computing Technology Center
>> > > Research and Development Headquarters
>> > > NTT DATA CORPORATION
>>
>


Re: TRUNCATE on foreign table

2021-04-03 Thread Zhihong Yu
Hi,
+ TRUNCATE for each foreign server being involved
+ in one TRUNCATE command (note that invocations

The 'being' in above sentence can be omitted.

+ the context where the foreign-tables are truncated. It is a list of
integers and same length with

There should be a verb between 'and' and same :
It is a list of integers and has same length with

+ * Information related to truncation of foreign tables.  This is used for
+ * the elements in a hash table *that* uses the server OID as lookup key,

The 'uses' is for 'This' (the struct), so 'that' should be 'and':

the elements in a hash table and uses

Alternatively:

the elements in a hash table. It uses

+   relids_extra = lappend_int(relids_extra, (recurse ?
TRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));

I am curious: isn't one underscore enough in the identifier (before NORMAL
and ONLY) ?

I suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and
TRUNCATE_REL_CONTEXT_ONLY

Cheers

On Sat, Apr 3, 2021 at 6:46 AM Kazutaka Onishi  wrote:

> Sorry but I found the v7 patch has typo and it can't be built...
> I attached fixed one(v8).
>
> On Sat, Apr 3, 2021 at 9:53 Kazutaka Onishi  wrote:
> >
> > All,
> >
> > Thank you for discussion.
> > I've updated the patch (v6->v7) according to the conclusion.
> >
> > I'll show the modified points:
> > 1. Comments for ExecuteTruncate()
> > 2. Replacing extra value in frels_extra with integer to label.
> > 3. Skipping XLOG_HEAP_TRUNCATE on foreign table
> >
> > Regards,
> >
> > On Fri, Apr 2, 2021 at 11:44 Fujii Masao  wrote:
> > >
> > >
> > >
> > > On 2021/04/02 9:37, Kohei KaiGai wrote:
> > > > It is fair enough for me to reverse the order of actual truncation.
> > > >
> > > > How about the updated comments below?
> > > >
> > > >  This is a multi-relation truncate.  We first open and grab
> exclusive
> > > >  lock on all relations involved, checking permissions (local
> database
> > > >  ACLs even if relations are foreign-tables) and otherwise
> verifying
> > > >  that the relation is OK for truncation. In CASCADE mode,
> ...(snip)...
> > > >  Finally all the relations are truncated and reindexed. If any
> foreign-
> > > >  tables are involved, its callback shall be invoked prior to the
> truncation
> > > >  of regular tables.
> > >
> > > LGTM.
> > >
> > >
> > > >> BTW, the latest patch doesn't seem to be applied cleanly to the
> master
> > > >> because of commit 27e1f14563. Could you rebase it?
> > > >>
> > > > Onishi-san, go ahead. :-)
> > >
> > > +1
> > >
> > > Regards,
> > >
> > > --
> > > Fujii Masao
> > > Advanced Computing Technology Center
> > > Research and Development Headquarters
> > > NTT DATA CORPORATION
>


Re: TRUNCATE on foreign table

2021-04-03 Thread Bharath Rupireddy
On Sat, Apr 3, 2021 at 7:16 PM Kazutaka Onishi  wrote:
>
> Sorry but I found the v7 patch has typo and it can't be built...
> I attached fixed one(v8).

Thanks for the patch. Here are some comments on the v8 patch:
1) We usually have the struct name after "+typedef struct
ForeignTruncateInfo"; please refer to other struct defs in the code
base.

2) We should add an ORDER BY clause (probably ORDER BY id?) to the
data-generating SELECT queries in the added tests, otherwise the tests
might become unstable (see the sketch after this list).

3) How about dropping the tables and foreign tables that got created for
testing in postgres_fdw.sql?

4) I think it's not "foreign-tables"/"foreign-table", it should be
"foreign tables"/"foreign table"; other places in the docs use that
convention.
+ the context where the foreign-tables are truncated. It is a list
of integers and same length with

5) Can't we use the do_sql_command function after making it non-static? We
could go the extra mile, that is, we could make do_sql_command a little
more generic by passing an enum for each of PQsendQuery,
PQsendQueryParams, PQsendQueryPrepared and PQsendPrepare and replacing
the respective code chunks with do_sql_command in postgres_fdw.c.

+/* run remote query */
+if (!PQsendQuery(conn, sql.data))
+pgfdw_report_error(ERROR, NULL, conn, false, sql.data);
+res = pgfdw_get_result(conn, sql.data);
+if (PQresultStatus(res) != PGRES_COMMAND_OK)
+pgfdw_report_error(ERROR, res, conn, true, sql.data);
+/* clean-up */
+PQclear(res);

6) A white space error when the patch is applied.
contrib/postgres_fdw/postgres_fdw.c:2913: trailing whitespace.
+

7) I may be missing something here. Why do we need a hash table at
all? We could just do it with a linked list, right? Is there a specific
reason to use a hash table? IIUC, the hash table entries will be lying
around until the local session exits, since we are not doing
hash_destroy.

8) How about having something like this?
+TRUNCATE can be used for foreign tables if the
foreign data wrapper supports, for instance, see .
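
For 2), something along these lines keeps the test output deterministic
(the table and column names here are only placeholders, not taken from
the patch):

  -- row order coming back from a foreign table is not guaranteed
  SELECT * FROM trunc_test_foreign;
  -- stable variant for the regression test
  SELECT * FROM trunc_test_foreign ORDER BY id;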

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com




Re: TRUNCATE on foreign table

2021-04-03 Thread Kazutaka Onishi
Sorry, but I found that the v7 patch has a typo and it can't be built...
I attached a fixed one (v8).

On Sat, Apr 3, 2021 at 9:53 Kazutaka Onishi  wrote:
>
> All,
>
> Thank you for discussion.
> I've updated the patch (v6->v7) according to the conclusion.
>
> I'll show the modified points:
> > 1. Comments for ExecuteTruncate()
> 2. Replacing extra value in frels_extra with integer to label.
> 3. Skipping XLOG_HEAP_TRUNCATE on foreign table
>
> Regards,
>
> On Fri, Apr 2, 2021 at 11:44 Fujii Masao  wrote:
> >
> >
> >
> > On 2021/04/02 9:37, Kohei KaiGai wrote:
> > > It is fair enough for me to reverse the order of actual truncation.
> > >
> > > How about the updated comments below?
> > >
> > >  This is a multi-relation truncate.  We first open and grab exclusive
> > >  lock on all relations involved, checking permissions (local database
> > >  ACLs even if relations are foreign-tables) and otherwise verifying
> > >  that the relation is OK for truncation. In CASCADE mode, ...(snip)...
> > >  Finally all the relations are truncated and reindexed. If any 
> > > foreign-
> > >  tables are involved, its callback shall be invoked prior to the 
> > > truncation
> > >  of regular tables.
> >
> > LGTM.
> >
> >
> > >> BTW, the latest patch doesn't seem to be applied cleanly to the master
> > >> because of commit 27e1f14563. Could you rebase it?
> > >>
> > > Onishi-san, go ahead. :-)
> >
> > +1
> >
> > Regards,
> >
> > --
> > Fujii Masao
> > Advanced Computing Technology Center
> > Research and Development Headquarters
> > NTT DATA CORPORATION


pgsql14-truncate-on-foreign-table.v8.patch
Description: Binary data


Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Chapman Flack
On 04/03/21 01:20, Joel Jacobson wrote:
> I've deployed the fix to production:
> 
> $ psql -U brad -h pit.org motd
> 
> NOTICE:
>  __  ___
>   /)/  \/   \
> ( / ___\)
>   \(/ o)  ( o)   )
>\_  (_  )   \ )  /
>  \  /\_/\)_/
>   \/  //|  |\\
>   v |  | v
> \__/

Now there's some kind of Max Headroom thing going on with the second row,
and this time I'm not sure how to explain it. (I knew the backslashes were
because they weren't doubled.)

I have done 'view source' in my mail client to make sure it's not just
some display artifact on my end. Something has eaten a space before that
left paren. What would do that?

Regards,
-Chap




Re: Additional Chapter for Tutorial - arch-dev.sgml

2021-04-03 Thread Alvaro Herrera
On 2021-Mar-25, David Steele wrote:

> On 1/22/21 4:15 AM, Heikki Linnakangas wrote:
> > On 21/01/2021 14:38, Jürgen Purtz wrote:
> > > This supervisor process is called  > > linkend="glossary-postmaster">postmaster and listens at
> > > a specified TCP/IP port for incoming connections. Whenever he
> > > detects a request for a connection, he spawns a new backend process.
> > 
> > It sounds weird to refer to a process with "he". I left out this hunk,
> > and the other with similar changes.
> > 
> > Committed the rest, thanks!.
> 
> So it looks like this was committed. Is there anything left to do?

Yes, there is.  AFAICS Heikki committed a small wordsmithing patch --
not the large patch with the additional chapter.

-- 
Álvaro Herrera39°49'30"S 73°17'W
"Ed is the standard text editor."
  http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3




Re: Proposal: Save user's original authenticated identity for logging

2021-04-03 Thread Michael Paquier
On Fri, Apr 02, 2021 at 01:45:31PM +0900, Michael Paquier wrote:
> As a whole, this is a consolidation of its own, so let's apply this
> part first.

Slight rebase for this one to take care of the updates with the SSL
error messages.
--
Michael
From 01e836535119dcd5a69ce54e1c86ae51bfba492c Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Fri, 2 Apr 2021 13:44:39 +0900
Subject: [PATCH v19] Plug more TAP test suites with new PostgresNode's new
 routines

This switches the kerberos, ldap and authentication to use connect_ok()
and connect_fails() recently introduced in PostgresNode.pm.  The SSL
tests need some extra juggling to accomodate with the changes.
---
 src/test/authentication/t/001_password.pl |  17 +++-
 src/test/authentication/t/002_saslprep.pl |  18 +++-
 src/test/kerberos/t/001_auth.pl   |  41 +++-
 src/test/ldap/t/001_auth.pl   |  15 ++-
 src/test/perl/PostgresNode.pm |  67 +---
 src/test/ssl/t/001_ssltests.pl| 118 +++---
 src/test/ssl/t/002_scram.pl   |  16 +--
 7 files changed, 170 insertions(+), 122 deletions(-)

diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 36a616d7c7..4d5e304de1 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -46,12 +46,19 @@ sub test_role
 
 	$status_string = 'success' if ($expected_res eq 0);
 
-	local $Test::Builder::Level = $Test::Builder::Level + 1;
+	my $connstr = "user=$role";
+	my $testname = "authentication $status_string for method $method, role $role";
 
-	my $res = $node->psql('postgres', undef, extra_params => [ '-U', $role, '-w' ]);
-	is($res, $expected_res,
-		"authentication $status_string for method $method, role $role");
-	return;
+	if ($expected_res eq 0)
+	{
+		$node->connect_ok($connstr, $testname);
+	}
+	else
+	{
+		# No match pattern checks are done here on errors, only the
+		# status code.
+		$node->connect_fails($connstr, $testname);
+	}
 }
 
 # Initialize primary node
diff --git a/src/test/authentication/t/002_saslprep.pl b/src/test/authentication/t/002_saslprep.pl
index 0aaab090ec..530344a5d6 100644
--- a/src/test/authentication/t/002_saslprep.pl
+++ b/src/test/authentication/t/002_saslprep.pl
@@ -41,12 +41,20 @@ sub test_login
 
 	$status_string = 'success' if ($expected_res eq 0);
 
+	my $connstr = "user=$role";
+	my $testname = "authentication $status_string for role $role with password $password";
+
 	$ENV{"PGPASSWORD"} = $password;
-	my $res = $node->psql('postgres', undef, extra_params => [ '-U', $role ]);
-	is($res, $expected_res,
-		"authentication $status_string for role $role with password $password"
-	);
-	return;
+	if ($expected_res eq 0)
+	{
+		$node->connect_ok($connstr, $testname);
+	}
+	else
+	{
+		# No match pattern checks are done here on errors, only the
+		# status code
+		$node->connect_fails($connstr, $testname);
+	}
 }
 
 # Initialize primary node. Force UTF-8 encoding, so that we can use non-ASCII
diff --git a/src/test/kerberos/t/001_auth.pl b/src/test/kerberos/t/001_auth.pl
index 38e9ef7b1f..31fdc49b86 100644
--- a/src/test/kerberos/t/001_auth.pl
+++ b/src/test/kerberos/t/001_auth.pl
@@ -20,7 +20,7 @@ use Time::HiRes qw(usleep);
 
 if ($ENV{with_gssapi} eq 'yes')
 {
-	plan tests => 26;
+	plan tests => 30;
 }
 else
 {
@@ -185,25 +185,18 @@ sub test_access
 	my ($node, $role, $query, $expected_res, $gssencmode, $test_name, $expect_log_msg) = @_;
 
 	# need to connect over TCP/IP for Kerberos
-	my ($res, $stdoutres, $stderrres) = $node->psql(
-		'postgres',
-		"$query",
-		extra_params => [
-			'-XAtd',
-			$node->connstr('postgres')
-			  . " host=$host hostaddr=$hostaddr $gssencmode",
-			'-U',
-			$role
-		]);
+	my $connstr = $node->connstr('postgres') .
+		" user=$role host=$host hostaddr=$hostaddr $gssencmode";
 
-	# If we get a query result back, it should be true.
-	if ($res == $expected_res and $res eq 0)
+	if ($expected_res eq 0)
 	{
-		is($stdoutres, "t", $test_name);
+		# The result is assumed to match "true", or "t", here.
+		$node->connect_ok($connstr, $test_name, sql => $query,
+  expected_stdout => qr/t/);
 	}
 	else
 	{
-		is($res, $expected_res, $test_name);
+		$node->connect_fails($connstr, $test_name);
 	}
 
 	# Verify specified log message is logged in the log file.
@@ -227,20 +220,12 @@ sub test_query
 	my ($node, $role, $query, $expected, $gssencmode, $test_name) = @_;
 
 	# need to connect over TCP/IP for Kerberos
-	my ($res, $stdoutres, $stderrres) = $node->psql(
-		'postgres',
-		"$query",
-		extra_params => [
-			'-XAtd',
-			$node->connstr('postgres')
-			  . " host=$host hostaddr=$hostaddr $gssencmode",
-			'-U',
-			$role
-		]);
+	my $connstr = $node->connstr('postgres') .
+		" user=$role host=$host hostaddr=$hostaddr $gssencmode";
 
-	is($res, 0, $test_name);
-	like($stdoutres, $expected, $test_name);
-	is($stderrres, "", $test_name);
+	my ($stdoutres, $stderrres);
+
+	

Re: fix old confusing JSON example

2021-04-03 Thread Erik Rijkers
> On 2021.04.03. 14:01 Erik Rijkers  wrote:
>  
> Hello,
> 
> Attached is a fix for a small but confusing mistake in the json documentation
> (a @@ instead of @?) that has been there since version 12.  (It took me quite
> some time to figure that out while testing with the recent SQL/JSON patches --
> which I initially blamed.)
>  
> To be applied from 12, 13, and master.

Oops, sent to wrong list.

Let me add some arguments for the change:

The original text is:
--
Also, GIN index supports @@ and @? operators, which perform jsonpath matching.

  SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == "qui"';
  SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] ? (@ == "qui")';

--
So, that gives information on two operators, and then gives one example query 
for each.  Clearly, the second example was meant to illustrate a where-clause 
with the  @?  operator.

Small change to prevent great confusion (I'll admit it took me far too long to 
understand this).
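
To spell out the difference (my own illustration, not part of the patch):
@@ expects a jsonpath predicate, while @? expects a jsonpath and checks
whether it returns any item, which is why the filter-style example belongs
with @? :

  SELECT '{"tags": ["qui"]}'::jsonb @@ '$.tags[*] == "qui"';        -- predicate is true
  SELECT '{"tags": ["qui"]}'::jsonb @? '$.tags[*] ? (@ == "qui")';  -- path returns an item

Both of these return true.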

thanks,

Erik Rijkers

> 
> Thanks,
> 
> Erik Rijkers

--- doc/src/sgml/json.sgml.orig	2021-04-03 13:47:29.484510936 +0200
+++ doc/src/sgml/json.sgml	2021-04-03 13:47:53.028866752 +0200
@@ -489,7 +489,7 @@
 SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == "qui"';
 
 
-SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] ? (@ == "qui")';
+SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @? '$.tags[*] ? (@ == "qui")';
 
 GIN index extracts statements of following form out of
 jsonpath: accessors_chain = const.


Re: Issue with point_ops and NaN

2021-04-03 Thread Julien Rouhaud
On Thu, Apr 1, 2021 at 15:54, Laurenz Albe  wrote:

> On Thu, 2021-04-01 at 09:35 +0900, Kyotaro Horiguchi wrote:
> > > > > > > > > SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');
> > > > > > > > >  ?column?
> > > > > > > > > ----------
> > > > > > > > >  t
> > > > > > > > > (1 row)
> > >
> > > If you think of "NaN" literally as "not a number", then FALSE would
> > > make sense, since "not a number" implies "not a number between 0 and 1".
> > > But since NaN is the result of operations like 0/0 or infinity - infinity,
> > > NULL might be better.
> > > So I'd opt for NULL too.
> >
> > Thanks.  Do you think it's acceptable to return false instead of
> > NULL as an alternative behavior?
>
> Yes, I think that is acceptable.
>

+1 especially after looking at the poc patch you sent to handle NULLs.

>


Re: Refactoring HMAC in the core code

2021-04-03 Thread Michael Paquier
On Fri, Apr 02, 2021 at 10:10:36AM -0400, Bruce Momjian wrote:
> Works for me.

Thanks.  I got to spend some time on this stuff again today and did a
complete review without noticing any issues, except some indentation
that was strange, so I have applied it.  Attached is a small extension
I have used for some of my tests to validate the implementations.
This uses some result samples one can find on Wikipedia at [1], for
instance.

[1]: https://en.wikipedia.org/wiki/HMAC
--
Michael


hmac_funcs.tar.gz
Description: application/gzip


signature.asc
Description: PGP signature


Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Joel Jacobson
On Sat, Apr 3, 2021, at 10:14, Fabien COELHO wrote:
> 
> Hello Joel,
> 
> > This patch is one day late, my apologies for missing the deadline this year.
> >
> > PostgreSQL has since long been suffering from the lack of a proper UNIX 
> > style motd (message of the day).
> 
> My 0.02€: apart from the Fool's day joke dimension, I'd admit that I would 
> not mind actually having such a fun feature in pg, possibly disabled by 
> default.

Fun to hear you find it useful.
I'm actually using it myself in production for something, to display
instructions to users when they log in.

When implementing this, I stumbled upon the fact that newlines can't be
used in ALTER SYSTEM parameter values.

I see they were disallowed in commit 99f3b5613bd1f145b5dbbe86000337bbe37fb094

However, reading escaped newlines seems to be working just fine.
The commit message from 2016 seems to imply otherwise:

"the configuration-file parser doesn't support embedded newlines in string 
literals"

The first patch, 0001-quote-newlines.patch, handles the escaping of newlines
before they are written to the configuration file.

Perhaps the configuration-file parser has been fixed since to support embedded 
newlines?
If so, then maybe it would actually be an idea to support newlines by escaping 
them?
Especially since newlines are supported by set_config().
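
For example, this already works today through set_config(), here using a
placeholder custom GUC since motd only exists with the patch applied:

  SELECT set_config('myapp.motd', E'Welcome!\nMaintenance tonight at 22:00 UTC.', false);
  SHOW myapp.motd;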

/Joel

Re: Can we remove extra memset in BloomInitPage, GinInitPage and SpGistInitPage when we have it in PageInit?

2021-04-03 Thread vignesh C
On Mon, Mar 22, 2021 at 10:16 AM Bharath Rupireddy
 wrote:
>
> Hi,
>
> We are memset-ting the special space page that's already set to zeros
> by PageInit in BloomInitPage, GinInitPage and SpGistInitPage. We have
> already removed the memset after PageInit in gistinitpage (see the
> comment there).  Unless I'm missing something, IMO they are redundant.
> I'm attaching a small patch that gets rid of the extra memset calls.
>
> While on it, I removed MAXALIGN(sizeof(SpGistPageOpaqueData)) in
> SpGistInitPage because the PageInit will anyways align the
> specialSize. This change is in line with other places (such as
> BloomInitPage, brin_page_init, GinInitPage, gistinitpage,
> _hash_pageinit and so on) where we just pass the size of the special
> space data structure.
>
> I didn't see any regression test failure on my dev system with the
> attached patch.
>
> Thoughts?

The changes look fine to me.

Regards,
Vignesh




Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Fabien COELHO


Hello Joel,


This patch is one day late, my apologies for missing the deadline this year.

PostgreSQL has since long been suffering from the lack of a proper UNIX style 
motd (message of the day).


My 0.02€: apart from the Fool's day joke dimension, I'd admit that I would 
not mind actually having such a fun feature in pg, possibly disabled by 
default.


--
Fabien.

Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Charles Clavadetscher

Hi

On 2021-04-03 07:16, Joel Jacobson wrote:

On Sat, Apr 3, 2021, at 04:47, Michael Paquier wrote:

On Fri, Apr 02, 2021 at 10:46:16PM +0200, Joel Jacobson wrote:

Ascii elephant in example by Michael Paquier [1], with ANSI colors
added by me.

[1]
https://www.postgresql.org/message-id/CAB7nPqRaacwcaANOYY3Hxd3T0No5RdZXyqM5HB6fta%2BCoDLOEg%40mail.gmail.com

The credit here goes to Charles Clavadetscher, not me.

--
Michael


Right! Sorry about that. The initial ">" in front of the ascii art
confused me; I didn't understand it was part of the reply, since all
the text around it was yours.

Many thanks Charles for the beautiful ascii art!

/Joel


You are welcome. There were some discussions until it came to the final 
form (that you can see below).

You may also want to have a look at this:

https://www.swisspug.org/wiki/index.php/Promotion

It includes a DB function that can be used to create the ASCII image
with text.


Regards
Charles

--
Charles Clavadetscher
Swiss PostgreSQL Users Group
Treasurer
Spitzackerstrasse 9
CH - 8057 Zürich

http://www.swisspug.org

+---+
|   __  ___ |
|/)/  \/   \|
|   ( / ___\)   |
|\(/ o)  ( o)   )   |
| \_  (_  )   \ ) _/|
|   \  /\_/\)/  |
|\/   |
| _|  | |
| \|_/  |
|   |
| Swiss PostgreSQL  |
|   Users Group |
|   |
+---+




Re: [PATCH] Implement motd for PostgreSQL

2021-04-03 Thread Joel Jacobson
On Fri, Apr 2, 2021, at 23:09, Marko Tiikkaja wrote:
> Hi Joel
> 
> On Fri, Apr 2, 2021 at 11:47 PM Joel Jacobson  wrote:
>> PostgreSQL has since long been suffering from the lack of a proper UNIX 
>> style motd (message of the day).
> 
> First of all, thanks for your work on this!  I think this is an important 
> feature to have, but I would love to see a way to have a set of strings from 
> which you choose a random one to display.  That way you could brighten your 
> day with random positive messages.
> 
> 
> -marko

Fun idea! I implemented it as a Perl script using the fortune command.

There are quite a lot of elephant jokes in the fortune database actually.

$ sudo apt install fortune-mod

$ crontab -l
0 0 * * * bash -c "/usr/local/bin/fortune_slony.pl | psql"

$ psql
NOTICE:
     __  ___
  /)/  \/   \
( / ___\)
  \(/ o)  ( o)   )
   \_  (_  )   \ )  /
 \  /\_/\)_/
  \/  //|  |\\
  v |  | v
\__/
Q: Know what the difference between your latest project
and putting wings on an elephant is?
A: Who knows?  The elephant *might* fly, heh, heh...

/Joel#!/usr/bin/perl
use strict;
use warnings;

my $slony = <<'SLONY'
E'\u001B[94m'
'\n     __  ___  '
'\n  /)/  \\/   \\ '
'\n ( / ___\\)'
'\n  \\(/ o)  ( o)   )'
'\n   \\_  (_  )   \\ )  / '
'\n \\  /\\_/\\)_/  '
'\n  \\/  //|  |'
'\n  v |  | v'
'\n\\__/  '
'\u001b[0m'
SLONY
;

my $fortune = `/usr/games/fortune`;
$fortune =~ s/'/''/g;
$fortune =~ s/^(.*)$/'\\n$1'/gm;

print "ALTER SYSTEM SET motd TO $slony\n$fortune; SELECT pg_catalog.pg_reload_conf();";