Fix typos and inconsistencies for HEAD (take 8)

2019-07-27 Thread Alexander Lakhin
Hello hackers,

Please consider fixing the next set of typos and inconsistencies in the
tree:
8.1. LABORT -> LIKE_ABORT
8.2. LagTrackerWriter -> LagTrackerWrite
8.3. lag_with_offset_and_default, * ->
window_lag_with_offset_and_default, window_* (in windowfuncs.c)
8.4. language-name -> language_name
8.5. lastOverflowedXID -> lastOverflowedXid
8.6. last_processed -> last_processing
8.7. last-query -> last_query
8.8. lastsysoid -> datlastsysoid
8.9. lastUsedPage -> lastUsedPages
8.10. lbv -> lbsv
8.11. leafSegment -> leafSegmentInfo
8.12. LibraryName/SymbolName -> remove (orphaned after f9143d10)
8.13. licence -> license
8.14. LINE_ALLOC -> remove (orphaned since 12ee6ec7)
8.15. local_ip_addr, local_port_addr -> remove and update a comment
(orphaned since b4cea00a)
8.16. local_passwd.c -> update a comment (see
http://cvsweb.netbsd.org/bsdweb.cgi/src/usr.bin/passwd/local_passwd.c.diff?r1=1.19&r2=1.20
)
8.17. localTransactionid -> localTransactionId
8.18. LocalTransactionID -> localTransactionId
8.19. LOCKDEF_H_ -> LOCKDEFS_H_
8.20. LOCK_H -> LOCK_H_
8.21. lockid -> lock
8.22. LOGICAL_PROTO_VERSION_NUM, PGLOGICAL_PROTO_MIN_VERSION_NUM ->
LOGICALREP_PROTO_VERSION_NUM, LOGICALREP_PROTO_MIN_VERSION_NUM
8.23. LOGICALREP_PROTO_H -> LOGICAL_PROTO_H
8.24. LogicalRewriteHeapCheckpoint -> CheckPointLogicalRewriteHeap
8.25. log_snap_interval_ms -> LOG_SNAPSHOT_INTERVAL_MS
8.26. from LVT -> form LVT
8.27. lwlockMode -> lwWaitMode
8.28. LWLockWait -> LWLockWaitForVar
8.29. MacroAssert -> AssertMacro
8.30. maintainer-check -> remove (orphaned after 5dd41f35)
8.31. manip.c -> remove (not present since PG95-1_01)
8.32. markleftchild -> markfollowright
8.33. mask_page_lsn -> mask_page_lsn_and_checksum
8.34. mdfd_seg_fds -> md_seg_fds
8.35. md_update -> px_md_update
8.36. meg -> 1 MB
8.37. MIGRATOR_API_VERSION -> remove (orphaned after 6f56b41a)
8.38. min_apply_delay -> recovery_min_apply_delay
8.39. min_multi -> cutoff_multi
8.40. minwg -> mingw
8.41. missingok -> missing_ok
8.42. mksafefunc/mkunsafefunc -> mkfunc (orphaned after 1f474d29)
8.43. MSG01.bin -> MSG1.bin
8.44. MSPACE -> MSSPACE
8.45. mtransfunc -> mtransfn
8.46. MULTI_QUERY -> PORTAL_MULTI_QUERY
8.47. MultixactId -> MultiXactId
8.48. MVDistinctItem -> MVNDistinctItem

In passing, I found a legacy script, FAQ2txt, that should be deleted as
unusable.

Best regards,
Alexander
diff --git a/doc/src/sgml/xplang.sgml b/doc/src/sgml/xplang.sgml
index d215ce82d0..60e0430751 100644
--- a/doc/src/sgml/xplang.sgml
+++ b/doc/src/sgml/xplang.sgml
@@ -137,7 +137,7 @@ CREATE FUNCTION validator_function_name(oid)
  
   Finally, the PL must be declared with the command
 
-CREATE TRUSTED LANGUAGE language-name
+CREATE TRUSTED LANGUAGE language_name
 HANDLER handler_function_name
 INLINE inline_function_name
 VALIDATOR validator_function_name ;
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index 2b1662a267..478d4c0d61 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -842,7 +842,7 @@ spgvacuumscan(spgBulkDeleteState *bds)
 		}
 	}
 
-	/* Propagate local lastUsedPage cache to metablock */
+	/* Propagate local lastUsedPages cache to metablock */
 	SpGistUpdateMetaPage(index);
 
 	/*
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index e7a59b0a92..e172dad07f 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -2259,7 +2259,7 @@ WalSndLoop(WalSndSendDataCallback send_data)
 			wakeEvents = WL_LATCH_SET | WL_EXIT_ON_PM_DEATH | WL_TIMEOUT |
 				WL_SOCKET_READABLE;
 
 			/*
-			 * Use fresh timestamp, not last_processed, to reduce the chance
+			 * Use fresh timestamp, not last_processing, to reduce the chance
 			 * of reaching wal_sender_timeout before sending a keepalive.
 			 */
 			sleeptime = WalSndComputeSleeptime(GetCurrentTimestamp());
@@ -2666,7 +2666,7 @@ XLogSendPhysical(void)
 	 * very close to together here so that we'll get a later position if it is
 	 * still moving.
 	 *
-	 * Because LagTrackerWriter ignores samples when the LSN hasn't advanced,
+	 * Because LagTrackerWrite ignores samples when the LSN hasn't advanced,
 	 * this gives us a cheap approximation for the WAL flush time for this
 	 * LSN.
 	 *
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index ae6780011b..fadab62950 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -3169,7 +3169,7 @@ DisplayXidCache(void)
  *
  * When we throw away subXIDs from KnownAssignedXids, we need to keep track of
  * that, similarly to tracking overflow of a PGPROC's subxids array.  We do
- * that by remembering the lastOverflowedXID, ie the last thrown-away subXID.
+ * that by remembering the lastOverflowedXid, ie the last thrown-away subXID.
  * As long as that is within the range of interesting XIDs, we have to assume
  * that subXIDs are missing from snapshots.  (Note that subXID overflow occurs
  * on primary when 

Re: minor fixes after pgindent prototype fixes

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> a few prototypes look odd. It appears to be cases where previously the
> odd indentation was put to some use, by indenting parameters less:
> ...
> but now that looks odd:
> extern void DefineCustomBoolVariable(
>  const char *name,
>  const char *short_desc,

> Unless somebody protests I'm going to remove the now pretty useless
> looking newline in the cases I can find.

+1.  I think Alvaro was muttering something about doing this,
but you beat him to it.

regards, tom lane




minor fixes after pgindent prototype fixes

2019-07-27 Thread Andres Freund
Hi,

I noticed that after

commit 8255c7a5eeba8f1a38b7a431c04909bde4f5e67d
Author: Tom Lane 
Date:   2019-05-22 13:04:48 -0400

Phase 2 pgindent run for v12.

Switch to 2.1 version of pg_bsd_indent.  This formats
multiline function declarations "correctly", that is with
additional lines of parameter declarations indented to match
where the first line's left parenthesis is.

Discussion: 
https://postgr.es/m/CAEepm=0p3fetxrcu5b2w3jv3pgrvz-kguxlgfd42ffhuroo...@mail.gmail.com

a few prototypes look odd. It appears to be cases where previously the
odd indentation was put to some use, by indenting parameters less:

extern void DefineCustomBoolVariable(
 const char *name,
 const char *short_desc,
 const char *long_desc,
 bool *valueAddr,
 bool bootValue,
 GucContext context,
 int flags,
 GucBoolCheckHook check_hook,
 GucBoolAssignHook assign_hook,
 GucShowHook show_hook);

but now that looks odd:

extern void DefineCustomBoolVariable(
 const char *name,
 const char *short_desc,
 const char *long_desc,
 bool *valueAddr,
 bool bootValue,
 GucContext context,
 int flags,
 GucBoolCheckHook check_hook,
 GucBoolAssignHook assign_hook,
 GucShowHook show_hook);

Unless somebody protests I'm going to remove the now pretty useless
looking newline in the cases I can find. I used
ack --type cc --type cpp '^[a-zA-Z_].*\(\n'
to find the ones I did. Not sure that catches everything.

Greetings,

Andres Freund
diff --git i/src/backend/commands/event_trigger.c w/src/backend/commands/event_trigger.c
index efef120c038..f7ee9838f7f 100644
--- i/src/backend/commands/event_trigger.c
+++ w/src/backend/commands/event_trigger.c
@@ -151,8 +151,7 @@ static void AlterEventTriggerOwner_internal(Relation rel,
 			HeapTuple tup,
 			Oid newOwnerId);
 static event_trigger_command_tag_check_result check_ddl_tag(const char *tag);
-static event_trigger_command_tag_check_result check_table_rewrite_ddl_tag(
-		  const char *tag);
+static event_trigger_command_tag_check_result check_table_rewrite_ddl_tag(const char *tag);
 static void error_duplicate_filter_variable(const char *defname);
 static Datum filter_list_to_array(List *filterlist);
 static Oid	insert_event_trigger_tuple(const char *trigname, const char *eventname,
diff --git i/src/backend/executor/nodeBitmapHeapscan.c w/src/backend/executor/nodeBitmapHeapscan.c
index 758b16dd357..f62105f5284 100644
--- i/src/backend/executor/nodeBitmapHeapscan.c
+++ w/src/backend/executor/nodeBitmapHeapscan.c
@@ -54,15 +54,13 @@
 
 
 static TupleTableSlot *BitmapHeapNext(BitmapHeapScanState *node);
-static inline void BitmapDoneInitializingSharedState(
-	 ParallelBitmapHeapState *pstate);
+static inline void BitmapDoneInitializingSharedState(ParallelBitmapHeapState *pstate);
 static inline void BitmapAdjustPrefetchIterator(BitmapHeapScanState *node,
 TBMIterateResult *tbmres);
 static inline void BitmapAdjustPrefetchTarget(BitmapHeapScanState *node);
 static inline void BitmapPrefetch(BitmapHeapScanState *node,
   TableScanDesc scan);
-static bool BitmapShouldInitializeSharedState(
-			  ParallelBitmapHeapState *pstate);
+static bool BitmapShouldInitializeSharedState(ParallelBitmapHeapState *pstate);
 
 
 /* 
diff --git i/src/backend/libpq/auth.c w/src/backend/libpq/auth.c
index 9358219aa60..49c94fd5176 100644
--- i/src/backend/libpq/auth.c
+++ w/src/backend/libpq/auth.c
@@ -133,8 +133,7 @@ static int	CheckBSDAuth(Port *port, char *user);
 
 /* Correct header from the Platform SDK */
 typedef
-ULONG		(*__ldap_start_tls_sA) (
-	IN PLDAP ExternalHandle,
+ULONG		(*__ldap_start_tls_sA) (IN PLDAP ExternalHandle,
 	OUT PULONG ServerReturnValue,
 	OUT LDAPMessage **result,
 	IN PLDAPControlA * ServerControls,
diff --git i/src/backend/libpq/pqcomm.c w/src/backend/libpq/pqcomm.c
index 384887e70d9..ed7a909d36a 100644
--- i/src/backend/libpq/pqcomm.c
+++ w/src/backend/libpq/pqcomm.c
@@ -131,8 +131,8 @@ static List *sock_paths = NIL;
  * enlarged by pq_putmessage_noblock() if the message doesn't fit otherwise.
  */
 
-#define PQ_SEND_BUFFER_SIZE 8192
-#define PQ_RECV_BUFFER_SIZE 8192
+#define PQ_SEND_BUFFER_SIZE 1048576
+#define PQ_RECV_BUFFER_SIZE 65536
 
 static char *PqSendBuffer;
 static int	PqSendBufferSize;	/* Size send buffer 

Re: pg_upgrade fails with non-standard ACL

2019-07-27 Thread Tom Lane
Bruce Momjian  writes:
> On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:
>> pg_upgrade from 9.6 fails if old cluster had non-standard ACL
>> on pg_catalog functions that have changed between versions,
>> for example pg_stop_backup(boolean).

> Uh, wouldn't this affect any default-installed function where the
> permissions are modified?  Is fixing only a few functions really helpful?

No, it's just functions whose signatures have changed enough that
a GRANT won't find them.  I think the idea is that the set of
potentially-affected functions is determinate.  I have to say that
the proposed patch seems like a complete kluge, though.  For one
thing we'd have to maintain the list of affected functions in each
future release, and I have no faith in our remembering to do that.
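
For illustration, the set of grants at risk could be enumerated in the old
cluster before upgrading by comparing current ACLs against the initial
privileges recorded at initdb time.  A hedged sketch (not part of the
proposed patch; the exact filtering a real check would need is an open
question):

    -- List pg_catalog functions whose ACL was changed after initdb;
    -- these are the grants pg_upgrade might fail to transfer.
    SELECT p.oid::regprocedure AS func, p.proacl, i.initprivs
    FROM pg_catalog.pg_proc p
    LEFT JOIN pg_catalog.pg_init_privs i
           ON i.objoid = p.oid
          AND i.classoid = 'pg_catalog.pg_proc'::regclass
    WHERE p.pronamespace = 'pg_catalog'::regnamespace
      AND p.proacl IS DISTINCT FROM i.initprivs;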

It's also fair to question whether pg_upgrade should even try to
cope with such cases.  If the function has changed signature,
it might well be that it's also changed behavior enough so that
any previously-made grants need reconsideration.  (Maybe we should
just suppress the old grant rather than transferring it.)

Still, this does seem like a gap in the pg_init_privs mechanism.
I wonder if Stephen has any thoughts about what ought to happen.

regards, tom lane




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> I wonder if there aren't similar dangers around the notify handling. In
> your patch we don't print them particularly eagerly. Doesn't that also
> open us up to timing concerns?

I think probably not, because of the backend-side restrictions on when
notify messages will be sent.  The corresponding case for the NOTICE
bug we just fixed would be if a backend sent a NOTIFY before blocking;
but it can't do that internally to a transaction, and anyway the proposed
test script isn't doing anything that tricky.
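
For reference, the restriction at work here is that a NOTIFY issued inside
a transaction is only delivered at commit, so there is no mid-transaction
window for it to race with.  An illustrative session:

    LISTEN a;
    BEGIN;
    NOTIFY a;            -- queued inside the transaction, not yet delivered
    SELECT pg_sleep(1);  -- listeners still see nothing at this point
    COMMIT;              -- only now is the notification sent out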

I did spend some time thinking about how isolationtester might report
notifys that are sent spontaneously (without any "triggering" query)
but I didn't feel that that was worth messing with.  We'd have to
have the program checking all the connections not just the one that's
running what it thinks is the currently active step.

We might be approaching a time where it's worth scrapping the
isolationtester logic and starting over.  I'm not volunteering though.

regards, tom lane




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 20:02:13 -0400, Tom Lane wrote:
> Andres Freund  writes:
> I'm slightly more worried about the case of more than one bufferful
> of NOTICE messages: calling PQconsumeInput isn't entirely guaranteed to
> absorb *all* available input.  But for the cases we actually need to
> deal with, I think probably the patch as I sent it is OK.  We could
> complicate matters by going around the loop extra time(s) to verify
> that select() thinks no data is waiting, but I doubt it's worth the
> complexity.

It'd just be one continue; right? Except that we don't know if
PQconsumeInput() actually did anything... So we'd need to do something
like executing a select and only call PQconsumeInput() if the select
signals that there's data? And then always retry? Yea, that seems too
complicated.
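
For concreteness, the rejected select-then-consume loop would look roughly
like this (a hedged sketch; drain_connection is a made-up name, not
isolationtester code):

    #include <sys/select.h>
    #include "libpq-fe.h"

    /*
     * Keep absorbing input while select() reports the socket readable,
     * since a single PQconsumeInput() call isn't guaranteed to absorb
     * all available input.
     */
    static void
    drain_connection(PGconn *conn)
    {
        int         sock = PQsocket(conn);

        for (;;)
        {
            fd_set      input_mask;
            struct timeval timeout = {0, 0};    /* poll without blocking */

            FD_ZERO(&input_mask);
            FD_SET(sock, &input_mask);
            if (select(sock + 1, &input_mask, NULL, NULL, &timeout) <= 0)
                break;          /* no data waiting, or select() error */
            if (!PQconsumeInput(conn))
                break;          /* connection broken */
        }
    }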

Kinda annoying that we don't expose pqReadData()'s return value anywhere
that I can see. Not so much for this, but in general. Travelling back
into the past, ISTM, PQconsumeInput() should have returned a different
return code if either pqReadData() or pqFlush() did anything.

I wonder if there aren't similar dangers around the notify handling. In
your patch we don't print them particularly eagerly. Doesn't that also
open us up to timing concerns? In particular, for notifies sent out
while idle, we might print them together with the *last* command
executed - as far as I can tell, if they arrive before the
PQconsumeInput(), we'll process them all in the PQisBusy() call at the
top of try_complete_step()'s loop? Am I missing some interlock here?

Greetings,

Andres Freund




Re: pg_upgrade fails with non-standard ACL

2019-07-27 Thread Bruce Momjian
On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:
> pg_upgrade from 9.6 fails if old cluster had non-standard ACL
> on pg_catalog functions that have changed between versions,
> for example pg_stop_backup(boolean).
> 
> Error:
> 
> pg_restore: creating ACL "pg_catalog.FUNCTION "pg_stop_backup"()"
> pg_restore: creating ACL "pg_catalog.FUNCTION "pg_stop_backup"("exclusive"
> boolean, OUT "lsn" "pg_lsn", OUT "labelfile" "text", OUT "spcmapfile"
> "text")"
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 2169; 0 0 ACL FUNCTION
> "pg_stop_backup"("exclusive" boolean, OUT "lsn" "pg_lsn", OUT "labelfile"
> "text", OUT "spcmapfile" "text") anastasia
> pg_restore: [archiver (db)] could not execute query: ERROR: function
> pg_catalog.pg_stop_backup(boolean) does not exist
>     Command was: GRANT ALL ON FUNCTION
> "pg_catalog"."pg_stop_backup"("exclusive" boolean, OUT "lsn" "pg_lsn", OUT
> "labelfile" "text", OUT "spcmapfile" "text") TO "backup";
> 
> Steps to reproduce:
> 1) create a database with pg9.6
> 2) create a user and change grants on pg_stop_backup(boolean):
> CREATE ROLE backup WITH LOGIN;
> GRANT USAGE ON SCHEMA pg_catalog TO backup;
> GRANT EXECUTE ON FUNCTION pg_stop_backup() TO backup;
> GRANT EXECUTE ON FUNCTION pg_stop_backup(boolean) TO backup;
> 3) perform pg_upgrade to v10 (or any version above)
> 
> The problem exists since we added to pg_dump support of ACL changes of
> pg_catalog functions in commit 23f34fa4b.
> 
> I think this is a bug since it unpredictably affects user experience, so I
> propose to backpatch the fix.
> Script to reproduce the problem and the patch to fix it (credit to Arthur
> Zakirov) are attached.

Uh, wouldn't this affect any default-installed function where the
permissions are modified?  Is fixing only a few functions really helpful?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> Polling for notices on the blocked connection before printing anything
> ought to practically be reliable. Theoretically I think it still allows
> for some reordering, e.g. because there was packet loss on one, but not
> the other connection.

As long as it's a local connection, packet loss shouldn't be a problem
;-).  I'm slightly more worried about the case of more than one bufferful
of NOTICE messages: calling PQconsumeInput isn't entirely guaranteed to
absorb *all* available input.  But for the cases we actually need to
deal with, I think probably the patch as I sent it is OK.  We could
complicate matters by going around the loop extra time(s) to verify
that select() thinks no data is waiting, but I doubt it's worth the
complexity.

regards, tom lane




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 19:27:17 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > We could of course just send the pids in binary ;). No, not worth it
> > just to avoid a small redundant array ;)
> 
> IIRC, we'd have to do htonl on them, so we'd still end up with
> two representations ...

Yea. Although that could just be done in a local variable. Anyway,
it's obviously not important.


> Hm. I wonder if all that's happening with prairiedog is that the notice
> is sent a bit later. I think that could e.g. conceivably happen because
> TCP_NODELAY isn't supported on prairiedog? Or just because the machine
> is very slow?
> 
> The notices (not notifies) are coming out in the opposite order from
> expected.  I haven't really thought hard about what's causing that;
> it seems odd, because isolationtester isn't supposed to give up waiting
> for a session until it's visibly blocked according to pg_locks.  Maybe
> it needs to recheck for incoming data once more after seeing that?

Yea, that's precisely what I was trying to refer to / suggest. What I
think is happening is that both queries get sent to the server, we
PQisBusy();select() and figure out they're not done yet. On most
machines the raise NOTICE will have been processed by that time; after
all, it's a trivial query. But on prairiedog (and I suspect even more
likely on valgrind / clobber cache animals), they're not that far yet.
So we send the blocking query and wait until we've seen that it blocks.
But there's no interlock guaranteeing that we'll have seen the notices
before the *other* connection has detected us blocking.  As the blocking
query is more complex to plan and execute, that window isn't that small.

Polling for notices on the blocked connection before printing anything
ought to practically be reliable. Theoretically I think it still allows
for some reordering, e.g. because there was packet loss on one, but not
the other connection.

Greetings,

Andres Freund




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
I wrote:
> Andres Freund  writes:
>> Hm. I wonder if all that's happening with prairiedog is that the notice
>> is sent a bit later. I think that could e.g. conceivably happen because
>> TCP_NODELAY isn't supported on prairiedog? Or just because the machine
>> is very slow?

> The notices (not notifies) are coming out in the opposite order from
> expected.  I haven't really thought hard about what's causing that;
> it seems odd, because isolationtester isn't supposed to give up waiting
> for a session until it's visibly blocked according to pg_locks.  Maybe
> it needs to recheck for incoming data once more after seeing that?

Ah-hah, that seems to be the answer.  With the attached patch I'm
getting reliable-seeming passes on prairiedog.

regards, tom lane

diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 6ab19b1..e97fef1 100644
*** a/src/test/isolation/isolationtester.c
--- b/src/test/isolation/isolationtester.c
*** try_complete_step(Step *step, int flags)
*** 752,757 
--- 752,777 
  
  if (waiting)	/* waiting to acquire a lock */
  {
+ 	/*
+ 	 * Since it takes time to perform the lock-check query,
+ 	 * some data --- notably, NOTICE messages --- might have
+ 	 * arrived since we looked.  We should do PQconsumeInput
+ 	 * to process any such messages, and we might as well then
+ 	 * check PQisBusy, though it's unlikely to succeed.
+ 	 */
+ 	if (!PQconsumeInput(conn))
+ 	{
+ 		fprintf(stderr, "PQconsumeInput failed: %s\n",
+ PQerrorMessage(conn));
+ 		exit(1);
+ 	}
+ 	if (!PQisBusy(conn))
+ 		break;
+ 
+ 	/*
+ 	 * conn is still busy, so conclude that the step really is
+ 	 * waiting.
+ 	 */
  	if (!(flags & STEP_RETRY))
  		printf("step %s: %s <waiting ...>\n",
  			   step->name, step->sql);


Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> We could of course just send the pids in binary ;). No, not worth it
> just to avoid a small redundant array ;)

IIRC, we'd have to do htonl on them, so we'd still end up with
two representations ...

> Hm. I wonder if all that's happening with prairiedog is that the notice
> is sent a bit later. I think that could e.g. conceivably happen because
> TCP_NODELAY isn't supported on prairiedog? Or just because the machine
> is very slow?

The notices (not notifies) are coming out in the opposite order from
expected.  I haven't really thought hard about what's causing that;
it seems odd, because isolationtester isn't supposed to give up waiting
for a session until it's visibly blocked according to pg_locks.  Maybe
it needs to recheck for incoming data once more after seeing that?

regards, tom lane




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 18:20:52 -0400, Tom Lane wrote:
> diff --git a/src/test/isolation/isolationtester.c 
> b/src/test/isolation/isolationtester.c
> index 6ab19b1..98e5bf2 100644
> --- a/src/test/isolation/isolationtester.c
> +++ b/src/test/isolation/isolationtester.c
> @@ -23,10 +23,12 @@
>  
>  /*
>   * conns[0] is the global setup, teardown, and watchdog connection.  
> Additional
> - * connections represent spec-defined sessions.
> + * connections represent spec-defined sessions.  We also track the backend
> + * PID, in numeric and string formats, for each connection.
>   */
>  static PGconn **conns = NULL;
> -static const char **backend_pids = NULL;
> +static int *backend_pids = NULL;
> +static const char **backend_pid_strs = NULL;
>  static int   nconns = 0;

Hm, a bit sad to have both of those around. Not worth getting bothered
about memory wise, but it does irk me somewhat.


> @@ -187,26 +191,9 @@ main(int argc, char **argv)
>  		PQsetNoticeProcessor(conns[i],
>  							 blackholeNoticeProcessor,
>  							 NULL);
>  
> - /* Get the backend pid for lock wait checking. */
> - res = PQexec(conns[i], "SELECT pg_catalog.pg_backend_pid()");
> - if (PQresultStatus(res) == PGRES_TUPLES_OK)
> - {
> - if (PQntuples(res) == 1 && PQnfields(res) == 1)
> - backend_pids[i] = pg_strdup(PQgetvalue(res, 0, 0));
> - else
> - {
> - fprintf(stderr, "backend pid query returned %d 
> rows and %d columns, expected 1 row and 1 column",
> - PQntuples(res), PQnfields(res));
> - exit(1);
> - }
> - }
> - else
> - {
> - fprintf(stderr, "backend pid query failed: %s",
> - PQerrorMessage(conns[i]));
> - exit(1);
> - }
> - PQclear(res);
> + /* Save each connection's backend PID for subsequent use. */
> + backend_pids[i] = PQbackendPID(conns[i]);
> + backend_pid_strs[i] = psprintf("%d", backend_pids[i]);

Heh.


> @@ -738,7 +728,7 @@ try_complete_step(Step *step, int flags)
>   boolwaiting;
>  
>   res = PQexecPrepared(conns[0], PREP_WAITING, 1,
> -  &backend_pids[step->session + 1],
> +  &backend_pid_strs[step->session + 1],
>    NULL, NULL, 0);
>   if (PQresultStatus(res) != PGRES_TUPLES_OK ||
>   PQntuples(res) != 1)

We could of course just send the pids in binary ;). No, not worth it
just to avoid a small redundant array ;)


> + /* Report any available NOTIFY messages, too */
> + PQconsumeInput(conn);
> + while ((notify = PQnotifies(conn)) != NULL)
> + {

Hm. I wonder if all that's happening with prairiedog is that the notice
is sent a bit later. I think that could e.g. conceivably happen because
TCP_NODELAY isn't supported on prairiedog? Or just because the machine
is very slow?

The diff you showed with the reordering afaict only reordered the NOTIFY
around statements that are marked as <waiting ...>. As the waiting
detection is done over a separate connection, there's afaict no
guarantee that we see all notices/notifies that occurred before the
query started blocking.  It's possible we could make this practically
robust enough by checking for notice/notifies on the blocked connection
just before printing out the <waiting ...>?  That still leaves the
potential issue that the different backend connections deliver data out
of order, but that seems not very likely?


Greetings,

Andres Freund




Re: tap tests driving the database via psql

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 18:57:58 -0400, Andrew Dunstan wrote:
> Maybe I don't write much python but I can read it without too much
> difficulty :-)
> 
> 
> But you did say your skeleton was pure perl ... slip of the fingers?

Ooops, yea.


> > /me once more regrets that perl, not python, has been chosen as the
> > scripting language of choice for postgres...

> That ship has sailed long ago.

Indeed. Not proposing to change that. Just sad about it...

Greetings,

Andres Freund




Re: tap tests driving the database via psql

2019-07-27 Thread Andrew Dunstan


On 7/27/19 6:37 PM, Andres Freund wrote:
> Hi,
>
> On 2019-07-27 17:48:39 -0400, Andrew Dunstan wrote:
>> On 7/27/19 3:15 PM, Andres Freund wrote:
>>> I'm not volunteering to do 4), my perl skills aren't great (if the test
>>> infra were python, otoh... I have the skeleton of a pure perl driver
>>> that I used for testing somewhere). But I am leaning towards that being
>>> the most reasonable choice.
>> +1 for #4.
>>
>>
>> I'll be happy to participate in any effort.
>>
>>
>> About 22 years ago I wrote a pure perl implementation of a wire protocol
>> of roughly similar complexity (RFC1459). I got it basically working in
>> just a few days, so this sort of thing is very doable. Let's see your
>> skeleton and maybe it's a good starting point.
> The skeleton's in python though, not sure how helpful that is?


Maybe I don't write much python but I can read it without too much
difficulty :-)


But you did say your skeleton was pure perl ... slip of the fingers?


>
> /me once more regrets that perl, not python, has been chosen as the
> scripting language of choice for postgres...
>


That ship has sailed long ago.


cheers


andrew



-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> On 2019-07-27 15:39:44 -0400, Tom Lane wrote:
>> Unfortunately, I just found out that on a slow enough machine
>> (prairiedog's host) there *is* some variation in when that test's
>> notices come out.  I am unsure whether that's to be expected or
>> whether there's something wrong there

> Hm. Any chance you could show the diff? I don't immediately see why.

Sure.  If I remove the client_min_messages hack from HEAD, then on
my dev workstation I get the attached test diff; that reproduces
quite reliably on a couple of machines.  However, running that
diff on prairiedog's host gets the failure attached second more
often than not.  (Sometimes it will pass.)

regards, tom lane

diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out b/src/test/isolation/expected/insert-conflict-specconflict.out
index 5726bdb..20cc421 100644
--- a/src/test/isolation/expected/insert-conflict-specconflict.out
+++ b/src/test/isolation/expected/insert-conflict-specconflict.out
@@ -13,7 +13,11 @@ pg_advisory_locksess   lock
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 3
step s2_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2'; <waiting ...>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -30,10 +34,14 @@ step controller_unlock_1_3: SELECT pg_advisory_unlock(1, 3);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step controller_unlock_2_3: SELECT pg_advisory_unlock(2, 3);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
@@ -50,6 +58,10 @@ step controller_unlock_1_2: SELECT pg_advisory_unlock(1, 2);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step s1_upsert: <... completed>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -69,7 +81,11 @@ pg_advisory_locksess   lock
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 3
step s2_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2'; <waiting ...>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -86,10 +102,14 @@ step controller_unlock_1_3: SELECT pg_advisory_unlock(1, 3);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step controller_unlock_2_3: SELECT pg_advisory_unlock(2, 3);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
@@ -106,6 +126,10 @@ step controller_unlock_2_2: SELECT pg_advisory_unlock(2, 2);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step s2_upsert: <... completed>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -127,7 +151,11 @@ keydata
 
 step s1_begin: BEGIN;
 step s2_begin: BEGIN;
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; <waiting ...>
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 3
step s2_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2'; <waiting ...>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -144,10 +172,14 @@ step controller_unlock_1_3: SELECT pg_advisory_unlock(1, 3);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step controller_unlock_2_3: SELECT pg_advisory_unlock(2, 3);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
@@ -163,10 +195,16 @@ step 

Re: tap tests driving the database via psql

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 17:48:39 -0400, Andrew Dunstan wrote:
> On 7/27/19 3:15 PM, Andres Freund wrote:
> > I'm not volunteering to do 4), my perl skills aren't great (if the test
> > infra were python, otoh... I have the skeleton of a pure perl driver
> > that I used for testing somewhere). But I am leaning towards that being
> > the most reasonable choice.
> 
> +1 for #4.
> 
> 
> I'll be happy to participate in any effort.
> 
> 
> About 22 years ago I wrote a pure perl implementation of a wire protocol
> of roughly similar complexity (RFC1459). I got it basically working in
> just a few days, so this sort of thing is very doable. Let's see your
> skeleton and maybe it's a good starting point.

The skeleton's in python though, not sure how helpful that is?

/me once more regrets that perl, not python, has been chosen as the
scripting language of choice for postgres...

Greetings,

Andres Freund




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
I wrote:
> Andres Freund  writes:
>> Perhaps we could just have isolationtester check to which
>> isolationtester session the backend pid belongs? And then print the
>> session name instead of the pid? That should be fairly easy, and would
>> probably give us all we need?

> Oh, that's a good idea -- it's already tracking all the backend PIDs,
> so probably not much extra work to do it like that.

I found out that to avoid confusion, one really wants the message to
identify both the sending and receiving sessions.  Here's a patch
that does it that way and extends the async-notify.spec test to
perform basic end-to-end checks on LISTEN/NOTIFY.

I intentionally made the test show the lack of NOTIFY de-duplication
that currently happens with subtransactions.  If we change this as I
proposed in <17822.1564186...@sss.pgh.pa.us>, this test output will
change.

regards, tom lane

diff --git a/src/test/isolation/expected/async-notify.out b/src/test/isolation/expected/async-notify.out
index 92d281a..3dc1227 100644
--- a/src/test/isolation/expected/async-notify.out
+++ b/src/test/isolation/expected/async-notify.out
@@ -1,17 +1,82 @@
 Parsed test spec with 2 sessions
 
-starting permutation: listen begin check notify check
-step listen: LISTEN a;
-step begin: BEGIN;
-step check: SELECT pg_notification_queue_usage() > 0 AS nonzero;
+starting permutation: listenc notify1 notify2 notify3
+step listenc: LISTEN c1; LISTEN c2;
+step notify1: NOTIFY c1;
+notifier: NOTIFY "c1" with payload "" from notifier
+step notify2: NOTIFY c2, 'payload';
+notifier: NOTIFY "c2" with payload "payload" from notifier
+step notify3: NOTIFY c3, 'payload3';
+
+starting permutation: listenc notifyd notifys1
+step listenc: LISTEN c1; LISTEN c2;
+step notifyd: NOTIFY c2, 'payload'; NOTIFY c1; NOTIFY "c2", 'payload';
+notifier: NOTIFY "c2" with payload "payload" from notifier
+notifier: NOTIFY "c1" with payload "" from notifier
+step notifys1: 
+	BEGIN;
+	NOTIFY c1, 'payload1'; NOTIFY "c2", 'payload2';
+	NOTIFY c1, 'payload1'; NOTIFY "c2", 'payload2';
+	SAVEPOINT s1;
+	NOTIFY c1, 'payload1'; NOTIFY "c2", 'payload2';
+	NOTIFY c1, 'payload1s'; NOTIFY "c2", 'payload2s';
+	NOTIFY c1, 'payload1'; NOTIFY "c2", 'payload2';
+	NOTIFY c1, 'payload1s'; NOTIFY "c2", 'payload2s';
+	RELEASE SAVEPOINT s1;
+	SAVEPOINT s2;
+	NOTIFY c1, 'rpayload1'; NOTIFY "c2", 'rpayload2';
+	NOTIFY c1, 'rpayload1s'; NOTIFY "c2", 'rpayload2s';
+	NOTIFY c1, 'rpayload1'; NOTIFY "c2", 'rpayload2';
+	NOTIFY c1, 'rpayload1s'; NOTIFY "c2", 'rpayload2s';
+	ROLLBACK TO SAVEPOINT s2;
+	COMMIT;
+
+notifier: NOTIFY "c1" with payload "payload1" from notifier
+notifier: NOTIFY "c2" with payload "payload2" from notifier
+notifier: NOTIFY "c1" with payload "payload1" from notifier
+notifier: NOTIFY "c2" with payload "payload2" from notifier
+notifier: NOTIFY "c1" with payload "payload1s" from notifier
+notifier: NOTIFY "c2" with payload "payload2s" from notifier
+
+starting permutation: llisten notify1 notify2 notify3 lcheck
+step llisten: LISTEN c1; LISTEN c2;
+step notify1: NOTIFY c1;
+step notify2: NOTIFY c2, 'payload';
+step notify3: NOTIFY c3, 'payload3';
+step lcheck: SELECT 1 AS x;
+x  
+
+1  
+listener: NOTIFY "c1" with payload "" from notifier
+listener: NOTIFY "c2" with payload "payload" from notifier
+
+starting permutation: listenc llisten notify1 notify2 notify3 lcheck
+step listenc: LISTEN c1; LISTEN c2;
+step llisten: LISTEN c1; LISTEN c2;
+step notify1: NOTIFY c1;
+notifier: NOTIFY "c1" with payload "" from notifier
+step notify2: NOTIFY c2, 'payload';
+notifier: NOTIFY "c2" with payload "payload" from notifier
+step notify3: NOTIFY c3, 'payload3';
+step lcheck: SELECT 1 AS x;
+x  
+
+1  
+listener: NOTIFY "c1" with payload "" from notifier
+listener: NOTIFY "c2" with payload "payload" from notifier
+
+starting permutation: llisten lbegin usage bignotify usage
+step llisten: LISTEN c1; LISTEN c2;
+step lbegin: BEGIN;
+step usage: SELECT pg_notification_queue_usage() > 0 AS nonzero;
 nonzero
 
 f  
-step notify: SELECT count(pg_notify('a', s::text)) FROM generate_series(1, 1000) s;
+step bignotify: SELECT count(pg_notify('c1', s::text)) FROM generate_series(1, 1000) s;
 count  
 
 1000   
-step check: SELECT pg_notification_queue_usage() > 0 AS nonzero;
+step usage: SELECT pg_notification_queue_usage() > 0 AS nonzero;
 nonzero
 
 t  
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 6ab19b1..98e5bf2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -23,10 +23,12 @@
 
 /*
  * conns[0] is the global setup, teardown, and watchdog connection.  Additional
- * connections represent spec-defined sessions.
+ * connections represent spec-defined sessions.  We also track the backend
+ * PID, in numeric and string formats, for each connection.
  */
 static PGconn **conns = 

Re: idea: log_statement_sample_rate - bottom limit for sampling

2019-07-27 Thread Tomas Vondra

Hi,

I've started reviewing this patch, thinking that maybe I could get it
committed as it's marked as RFC. In general I agree with having this
feature, but I think we need to rethink the GUC because the current
approach is just confusing.

The way the current patch works is that we have three GUCs:

 log_min_duration_statement
 log_statement_sample_limit
 log_statement_sample_rate

and it essentially works like this:

- If the duration exceeds log_min_duration_statement, we start sampling
 the commands with log_statement_sample_rate.

- If the duration exceeds log_statement_sample_limit, we just log the
 command every time (i.e. we disable sampling, using sample rate 1.0).

IMO that's bound to be confusing for users, because one threshold
behaves as minimum while the other behaves as maximum.


What I think we should do instead is to use two minimum thresholds.

1) log_min_duration_sample - enables sampling of commands, using the
existing GUC log_statement_sample_rate

2) log_min_duration_statement - logs all commands exceeding this


I think this is going to be much easier for users to understand.


The one difference between those approaches is in how they work with
existing settings. That is, let's say you have

 log_min_duration_statement = 1000
 log_statement_sample_rate = 0.01

then no queries below 1000ms will be logged, and 1% of longer queries
will be sampled. And with the original config (as proposed in v3 of the
patch), this would still work the same way.

With the new approach (two min thresholds) it'd behave differently,
because we'd log *all* queries longer than 1000ms (not just 1%). And
whether we'd sample any queries (using log_statement_sample_rate) would
depend on how we'd pick the default value for the other threshold.
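
To make that concrete, a hypothetical configuration under the
two-minimum-threshold scheme (log_min_duration_sample being the proposed
GUC, not an existing one):

    log_min_duration_sample = 100      # sample queries longer than 100 ms ...
    log_statement_sample_rate = 0.01   # ... logging 1% of them
    log_min_duration_statement = 1000  # log every query longer than 1000 ms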


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: tap tests driving the database via psql

2019-07-27 Thread Andrew Dunstan


On 7/27/19 3:15 PM, Andres Freund wrote:
> Hi,
>
> The discussion in [1]
> again reminded me how much I dislike that we currently issue database
queries in tap tests by forking psql and writing/reading from its
> stdin/stdout.
>
> That's quite cumbersome to write, and adds a good number of additional
> failure scenarios to worry about. For a lot of tasks you then have to
> reparse psql's output to separate out columns etc.
>
> I think we have a better approach for [1] than using tap tests, but I
> think it's a more general issue preventing us from having more extensive
> test coverage, and especially from having more robust coverage.
>
> I think we seriously ought to consider depending on a proper perl
> database driver.  I can see various approaches:
>
> 1) Just depend on DBD::Pg being installed. It's fairly common, after
>all. It'd be somewhat annoying that we'd often end up using a
>different version of libpq than what we're testing against. But in
>most cases that'd not be particularly problematic.
>
> 2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more
>lightweight to install. On the other hand, it's basically
>unmaintained (last commit 2010), and is a lot less commonly already
>installed than DBD::Pg. Also depends on DBI, which isn't part of
>core perl IIRC.
>
> 3) Vendor a test-only copy of one of those libraries, and build them as
>part of the test setup. That'd cut down on the number of
>dependencies.
>
>But probably not that much, because we'd still depend on DBI, which
>IIRC isn't part of core perl.
>
>DBI by default does include C code, and is not that small. There's
>however a pure perl version maintained as part of DBI, and it looks
>like it might be reasonably enough sized. If we vendored that, and
>DBD::PgPP, we'd not have any additional dependencies.
>
>I suspect that the licensing (dual GPL *version 1* / Artistic
>License, also V1), makes this too complicated, however.
>
> 4) We develop a fairly minimal pure perl database driver, that doesn't
>depend on DBI. Include it somewhere as part of the test code, instead
>of src/interfaces, so it's clearer that it's not meant as an actual
>official driver.
>
>The obvious disadvantage is that this would be a noticeable amount of
>code. But it's also not that crazily much.
>
>One big advantage I can see is that that'd make it a lot easier to
>write low-level protocol tests. Right now we either don't have them,
>or they have to go through libpq, which quite sensibly doesn't expose
>all the details to the outside.  IMO it'd be really nice if we had a
>way to write low-level protocol tests, especially for testing
>things like the v2 protocol.
>
> I'm not volunteering to do 4), my perl skills aren't great (if the test
> infra were python, otoh... I have the skeleton of a pure perl driver
> that I used for testing somewhere). But I am leaning towards that being
> the most reasonable choice.
>


+1 for #4.


I'll be happy to participate in any effort.


About 22 years ago I wrote a pure perl implementation of a wire protocol
of roughly similar complexity (RFC1459). I got it basically working in
just a few days, so this sort of thing is very doable. Let's see your
skeleton and maybe it's a good starting point.


cheers


andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: tap tests driving the database via psql

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 22:32:37 +0200, David Fetter wrote:
> On Sat, Jul 27, 2019 at 12:15:23PM -0700, Andres Freund wrote:
> > 4) We develop a fairly minimal pure perl database driver, that doesn't
> >depend on DBI. Include it somewhere as part of the test code, instead
> >of src/interfaces, so it's clearer that it's not ment as an actual
> >official driver.
> 
> There's one that may or may not need updates that's basically just a
> wrapper around libpq.
> 
> https://ftp.postgresql.org/pub/projects/gborg/pgperl/stable/

That's pretty darn old however (2002). Needs to be compiled. And is GPL
v1 / Artistic v1 licensed.  I think all of the other alternatives are
better than this.


> >The obvious disadvantage is that this would be a noticeable amount of
> >code. But it's also not that crazily much.
> > 
> >One big advantage I can see is that that'd make it a lot easier to
> >write low-level protocol tests. Right now we either don't have them,
> >or they have to go through libpq, which quite sensibly doesn't expose
> >all the details to the outside.  IMO it'd be really nice if we had a
> >way to to write low level protocol tests, especially for testing
> >things like the v2 protocol.
> 
> That sounds worth doing as a separate thing

What would be the point of doing this separately? If we have a small
driver for writing protocol tests, why would we want something
separate for the tap tests?


> and an obvious application of it would be something like a febesmith,
> which would get us a better idea as to whether we've implemented the
> protocol we say we have.

Hm, not convinced that's useful. And fairly sure that's pretty
independent of what I was writing about.

Greetings,

Andres Freund




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Peter Geoghegan
On Sat, Jul 27, 2019 at 12:39 PM Tom Lane  wrote:
> Unfortunately, I just found out that on a slow enough machine
> (prairiedog's host) there *is* some variation in when that test's
> notices come out.  I am unsure whether that's to be expected or
> whether there's something wrong there --- Peter, any thoughts?

I don't know why this happens, but it's worth noting that the plpgsql
function that raises these notices ("blurt_and_lock()") is marked
IMMUTABLE (not sure if you noticed that already). This is a deliberate
misrepresentation which is needed to acquire advisory locks at just
the right points during execution.

If I had to guess, I'd guess that it had something to do with that. I
might be able to come up with a better explanation if I saw the diff.

-- 
Peter Geoghegan




Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS)

2019-07-27 Thread Bruce Momjian
On Sat, Jul 27, 2019 at 03:02:02PM -0400, Sehrope Sarkuni wrote:
> On Sat, Jul 27, 2019 at 1:32 PM Bruce Momjian  wrote:
> > Uh, I listed the three options for the CRC and gave the benefits of
> > each:
> >
> > 
> > https://www.postgresql.org/message-id/20190725200343.xo4dcjm5azrfn...@momjian.us
> >
> > Obviously I was not clear on the benefits.  To quote:
> >
> > 1.  compute CRC and then encrypt everything
> > 3.  encrypt and then CRC, and store the CRC encrypted
> >
> > Numbers 1 & 3 give us tampering detection, though with the CRC being so
> > small, it isn't totally secure.
> >
> > > Are you worried about an attacker forging the page checksum by
> > > installing another encrypted page that gives the same checksum?  I'm not
> > > sure how that attack works ... I mean why can the attacker install
> > > arbitrary pages?
> >
> > Well, with #2
> >
> > 2   encrypt and then CRC, and store the CRC unchanged
> >
> > you can modify the page, even small parts, and just replace the CRC to
> > match your changes.  In #1 and #3, you would get a CRC error in almost
> > all cases since you have no way of setting the decrypted CRC without
> > knowing the key.  You can change the encrypted CRC, but the odds that
> > the decrypted one would match the page is very slim.
> 
> Regarding #1 and #3, with CTR mode you do not need to know the key to
> make changes to the CRC. Flipping bits of the encrypted CRC would flip
> the same bits of the decrypted one. This was one of the issues with
> the older WiFi encryption standard WEP[1] which used RC4 + CRC32. It's
> not the exact same usage pattern, but I wouldn't be surprised if there
> is a way to make in place updates and matching CRC32 changes even if
> it's encrypted.

I see.
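
To illustrate the malleability in a standalone way (not PostgreSQL code;
the single keystream byte stands in for AES-CTR output):

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint8_t     keystream = 0x5A;   /* stand-in for AES-CTR keystream */
        uint8_t     plain = 0x42;
        uint8_t     cipher = plain ^ keystream;

        uint8_t     mask = 0x0F;        /* attacker-chosen bit flips */
        uint8_t     tampered = cipher ^ mask;   /* no key required */

        /* decryption yields plain ^ mask: the flips pass straight through */
        printf("decrypted: 0x%02X, plain^mask: 0x%02X\n",
               (unsigned) (uint8_t) (tampered ^ keystream),
               (unsigned) (uint8_t) (plain ^ mask));
        return 0;
    }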

> Given the non-cryptographic nature of CRC and its 16-bit size, I'd
> round down the malicious tamper detection it provides to zero. At best
> it catches random disk errors so might as well keep it in plain text
> and checkable offline.

OK, zero is pretty low.  ;-)  Let's just go with #2 then, and use CTR
mode so it is easy to skip the CRC bytes in the page.

> More generally, without a cryptographic MAC I don't think it's
> possible to provide any meaningful malicious tamper detection. And
> even that would have to be off-page to deal with page replay (which I
> think is out of scope).

Yeah.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: tap tests driving the database via psql

2019-07-27 Thread David Fetter
On Sat, Jul 27, 2019 at 12:15:23PM -0700, Andres Freund wrote:
> Hi,
> 
> The discussion in [1]
> again reminded me how much I dislike that we currently issue database
> queries in tap tests by forking psql and writing/reading from its
> stdin/stdout.
> 
> That's quite cumbersome to write, and adds a good number of additional
> failure scenarios to worry about. For a lot of tasks you then have to
> reparse psql's output to separate out columns etc.
> 
> I think we have a better approach for [1] than using tap tests, but I
> think it's a more general issue preventing us from having more extensive
> test coverage, and especially from having more robust coverage.
> 
> I think we seriously ought to consider depending on a proper perl
> database driver.  I can see various approaches:
> 
> 1) Just depend on DBD::Pg being installed. It's fairly common, after
>all. It'd be somewhat annoying that we'd often end up using a
>different version of libpq than what we're testing against. But in
>most cases that'd not be particularly problematic.
> 
> 2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more
>lightweight to install. On the other hand, it's basically
>unmaintained (last commit 2010), and is a lot less commonly already
>installed than DBD::Pg. Also depends on DBI, which isn't part of
>core perl IIRC.
> 
> 3) Vendor a test-only copy of one of those libraries, and build them as
>part of the test setup. That'd cut down on the number of
>dependencies.
> 
>But probably not that much, because we'd still depend on DBI, which
>IIRC isn't part of core perl.
> 
>DBI by default does include C code, and is not that small. There's
>however a pure perl version maintained as part of DBI, and it looks
>like it might be reasonably enough sized. If we vendored that, and
>DBD::PgPP, we'd not have any additional dependencies.
> 
>I suspect that the licensing (dual GPL *version 1* / Artistic
>License, also V1), makes this too complicated, however.
> 
> 4) We develop a fairly minimal pure perl database driver, that doesn't
>depend on DBI. Include it somewhere as part of the test code, instead
>of src/interfaces, so it's clearer that it's not meant as an actual
>official driver.

There's one that may or may not need updates that's basically just a
wrapper around libpq.

https://ftp.postgresql.org/pub/projects/gborg/pgperl/stable/

>The obvious disadvantage is that this would be a noticeable amount of
>code. But it's also not that crazily much.
> 
>One big advantage I can see is that that'd make it a lot easier to
>write low-level protocol tests. Right now we either don't have them,
>or they have to go through libpq, which quite sensibly doesn't expose
>all the details to the outside.  IMO it'd be really nice if we had a
>way to write low-level protocol tests, especially for testing
>things like the v2 protocol.

That sounds worth doing as a separate thing, and an obvious
application of it would be something like a febesmith, which would get
us a better idea as to whether we've implemented the protocol we say
we have.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 15:39:44 -0400, Tom Lane wrote:
> Andres Freund  writes:
> >> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out 
> >> b/src/test/isolation/expected/insert-conflict-specconflict.out
> >> index 5726bdb..20cc421 100644
> >> --- a/src/test/isolation/expected/insert-conflict-specconflict.out
> >> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out
> >> @@ -13,7 +13,11 @@ pg_advisory_locksess   lock
> >> step controller_show: SELECT * FROM upserttest;
> >> keydata   
> >> 
> >> +s1: NOTICE:  called for k1
> >> +s1: NOTICE:  blocking 3
> >> step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted 
> >> s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = 
>> upserttest.data || ' with conflict update s1'; <waiting ...>
> >> +s2: NOTICE:  called for k1
> >> +s2: NOTICE:  blocking 3
> 
> > Hm, that actually makes the test - which is pretty complicated - easier
> > to understand.
> 
> Unfortunately, I just found out that on a slow enough machine
> (prairiedog's host) there *is* some variation in when that test's
> notices come out.  I am unsure whether that's to be expected or
> whether there's something wrong there

Hm. Any chance you could show the diff? I don't immediately see why.


>  --- Peter, any thoughts?

Think that's my transgression :/


> What I will do for the moment is remove the client_min_messages=WARNING
> setting from isolationtester.c and instead put it into
> insert-conflict-specconflict.spec, which seems like a saner
> way to manage this.  If we can get these messages to appear
> stably, we can just fix that spec file.

Makes sense.

Greetings,

Andres Freund




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> On 2019-07-27 14:15:39 -0400, Tom Lane wrote:
>> So I think we should apply something like the attached, and if the
>> buildfarm shows any instability as a result, dealing with that by
>> taking out the RAISE NOTICE commands.

> +1

>> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out 
>> b/src/test/isolation/expected/insert-conflict-specconflict.out
>> index 5726bdb..20cc421 100644
>> --- a/src/test/isolation/expected/insert-conflict-specconflict.out
>> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out
>> @@ -13,7 +13,11 @@ pg_advisory_locksess   lock
>> step controller_show: SELECT * FROM upserttest;
>> keydata   
>> 
>> +s1: NOTICE:  called for k1
>> +s1: NOTICE:  blocking 3
>> step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted 
>> s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data 
>> || ' with conflict update s1'; <waiting ...>
>> +s2: NOTICE:  called for k1
>> +s2: NOTICE:  blocking 3

> Hm, that actually makes the test - which is pretty complicated - easier
> to understand.

Unfortunately, I just found out that on a slow enough machine
(prairiedog's host) there *is* some variation in when that test's
notices come out.  I am unsure whether that's to be expected or
whether there's something wrong there --- Peter, any thoughts?

What I will do for the moment is remove the client_min_messages=WARNING
setting from isolationtester.c and instead put it into
insert-conflict-specconflict.spec, which seems like a saner
way to manage this.  If we can get these messages to appear
stably, we can just fix that spec file.

>> +s1: NOTICE:  x = foofoofoofo

> Yea, there indeed does not not much point in this.

Maybe we could just log the lengths of the strings... if there's
anything broken, we could expect that the decompressed output
would be a different length.

regards, tom lane




tap tests driving the database via psql

2019-07-27 Thread Andres Freund
Hi,

The discussion in [1]
again reminded me how much I dislike that we currently issue database
queries in tap tests by forking psql and writing/reading from its
stdin/stdout.

That's quite cumbersome to write, and adds a good number of additional
failure scenarios to worry about. For a lot of tasks you then have to
reparse psql's output to separate out columns etc.

I think we have a better approach for [1] than using tap tests, but I
think it's a more general issue preventing us from having more extensive
test coverage, and especially from having more robust coverage.

I think we seriously ought to consider depending on a proper perl
database driver.  I can see various approaches:

1) Just depend on DBD::Pg being installed. It's fairly common, after
   all. It'd be somewhat annoying that we'd often end up using a
   different version of libpq than what we're testing against. But in
   most cases that'd not be particularly problematic.

2) Depend on DBD::PgPP, a pure perl driver. It'd likely often be more
   lightweight to install. On the other hand, it's basically
   unmaintained (last commit 2010), and is a lot less commonly already
   installed than DBD::Pg. Also depends on DBI, which isn't part of
   core perl IIRC.

3) Vendor a test-only copy of one of those libraries, and build them as
   part of the test setup. That'd cut down on the number of
   dependencies.

   But probably not that much, because we'd still depend on DBI, which
   IIRC isn't part of core perl.

   DBI by default does include C code, and is not that small. There's
   however a pure perl version maintained as part of DBI, and it looks
   like it might be reasonably enough sized. If we vendored that, and
   DBD::PgPP, we'd not have any additional dependencies.

   I suspect that the licensing (dual GPL *version 1* / Artistic
   License, also V1) makes this too complicated, however.

4) We develop a fairly minimal pure perl database driver, that doesn't
   depend on DBI. Include it somewhere as part of the test code, instead
   of src/interfaces, so it's clearer that it's not meant as an actual
   official driver.

   The obvious disadvantage is that this would be a noticeable amount of
   code. But it's also not that crazily much.

   One big advantage I can see is that that'd make it a lot easier to
   write low-level protocol tests. Right now we either don't have them,
   or they have to go through libpq, which quite sensibly doesn't expose
   all the details to the outside.  IMO it'd be really nice if we had a
   way to write low-level protocol tests, especially for testing
   things like the v2 protocol.
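
   For instance, a low-level test could hand-build a v3 StartupMessage,
   something libpq deliberately doesn't let you malform. A standalone
   sketch (illustrative only, not taken from any existing test):

       #include <stdint.h>
       #include <string.h>
       #include <arpa/inet.h>

       /*
        * Build a v3 StartupMessage by hand: int32 length (self-inclusive),
        * int32 protocol version (196608 = 3 << 16), nul-terminated
        * key/value pairs, then one terminating zero byte.
        */
       static int
       build_startup_packet(char *buf, const char *user, const char *dbname)
       {
           int      len = 8;       /* room for length + protocol words */
           uint32_t v;

           v = htonl(196608);      /* protocol 3.0 */
           memcpy(buf + 4, &v, 4);

       #define ADD(s)  do { memcpy(buf + len, (s), strlen(s) + 1); \
                            len += strlen(s) + 1; } while (0)
           ADD("user");     ADD(user);
           ADD("database"); ADD(dbname);
       #undef ADD

           buf[len++] = '\0';      /* terminator */
           v = htonl(len);
           memcpy(buf, &v, 4);     /* backfill the length word */
           return len;
       }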

I'm not volunteering to do 4), my perl skills aren't great (if the test
infra were python, otoh... I have the skeleton of a pure perl driver
that I used for testing somewhere). But I am leaning towards that being
the most reasonable choice.

Craig, IIRC you'd thought about this before too?

Greetings,

Andres Freund

[1] https://www.postgresql.org/message-id/31304.1564246011%40sss.pgh.pa.us




Re: Adding column "mem_usage" to view pg_prepared_statements

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 18:29:23 +, Daniel Migowski wrote:
> I just implemented a small change that adds another column "mem_usage"
> to the system view "pg_prepared_statements". It returns the memory
> usage total of CachedPlanSource.context,
> CachedPlanSource.query_context and if available
> CachedPlanSource.gplan.context.

FWIW, it's generally easier to comment if you actually provide the
patch, even if it's just a POC, as that gives a better handle on how much
additional complexity it introduces.

I think this could be a useful feature. I'm not so sure we want it tied
to just cached statements however - perhaps we ought to generalize it a
bit more.


Regarding the prepared statements specific considerations: I don't think
we ought to explicitly reference CachedPlanSource.query_context, and
CachedPlanSource.gplan.context.

In the case of actual prepared statements (rather than oneshot plans)
CachedPlanSource.query_context IIRC should live under
CachedPlanSource.context.  I think there's no relevant cases where
gplan.context isn't a child of CachedPlanSource.context either, but not
quite sure.

Then we ought to just include child contexts in the memory computation
(cf. logic in MemoryContextStatsInternal(), although you obviously
wouldn't need all that). That way, if the cached statement has child
contexts, we're going to stay accurate.
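
Something like this, perhaps (an untested sketch; assumes the current
stats() method signature, which accumulates into *totals the way
MemoryContextStatsInternal() uses it):

    /* Total space used by a context and all of its children. */
    static Size
    MemoryContextTotalSpace(MemoryContext context)
    {
        MemoryContextCounters totals;
        MemoryContext child;
        Size        space;

        memset(&totals, 0, sizeof(totals));
        context->methods->stats(context, NULL, NULL, &totals);
        space = totals.totalspace;

        for (child = context->firstchild; child != NULL; child = child->nextchild)
            space += MemoryContextTotalSpace(child);

        return space;
    }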


> Also I wonder why the "prepare test as" is part of the statement
> column. It isn't even part of the real statement that is prepared as
> far as I would assume. Would prefer to just have the "select *..." in
> that column.

It's the statement that was executed. Note that you'll not see that in
the case of protocol-level prepared statements.  It will sometimes
include relevant information, e.g. about the types specified as part of
the prepare (as in PREPARE foo(int, float, ...) AS ...).

Greetings,

Andres Freund




Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS)

2019-07-27 Thread Joe Conway
On 7/27/19 3:02 PM, Sehrope Sarkuni wrote:
> More generally, without a cryptographic MAC I don't think it's
> possible to provide any meaningful malicious tamper detection. And
> even that would have to be off-page to deal with page replay (which I
> think is out of scope).
> 
> [1]: https://en.wikipedia.org/wiki/CRC-32#Data_integrity

Yes, exactly -- pretty sure I made that point down thread but who knows;
I know I at least thought it ;-P

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS)

2019-07-27 Thread Sehrope Sarkuni
On Sat, Jul 27, 2019 at 1:32 PM Bruce Momjian  wrote:
> Uh, I listed the three options for the CRC and gave the benefits of
> each:
>
> 
> https://www.postgresql.org/message-id/20190725200343.xo4dcjm5azrfn...@momjian.us
>
> Obviously I was not clear on the benefits.  To quote:
>
> 1.  compute CRC and then encrypt everything
> 3.  encrypt and then CRC, and store the CRC encrypted
>
> Numbers 1 & 3 give us tampering detection, though with the CRC being so
> small, it isn't totally secure.
>
> > Are you worried about an attacker forging the page checksum by
> > installing another encrypted page that gives the same checksum?  I'm not
> > sure how that attack works ... I mean why can the attacker install
> > arbitrary pages?
>
> Well, with #2
>
> 2   encrypt and then CRC, and store the CRC unchanged
>
> you can modify the page, even small parts, and just replace the CRC to
> match your changes.  In #1 and #3, you would get a CRC error in almost
> all cases since you have no way of setting the decrypted CRC without
> knowing the key.  You can change the encrypted CRC, but the odds that
> the decrypted one would match the page is very slim.

Regarding #1 and #3, with CTR mode you do not need to know the key to
make changes to the CRC. Flipping bits of the encrypted CRC would flip
the same bits of the decrypted one. This was one of the issues with
the older WiFi encryption standard WEP[1] which used RC4 + CRC32. It's
not the exact same usage pattern, but I wouldn't be surprised if there
is a way to make in-place updates and matching CRC32 changes even if
it's encrypted.
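
The XOR malleability is easy to demonstrate with a toy sketch (a
stand-in keystream instead of real AES-CTR, but the property is the
same: C = P xor KS):

    #include <stdio.h>

    int main(void)
    {
        unsigned char ks[4] = {0xde, 0xad, 0xbe, 0xef};  /* keystream */
        unsigned char p[4]  = {0x12, 0x34, 0x56, 0x78};  /* e.g. a CRC */
        unsigned char c[4];

        for (int i = 0; i < 4; i++) c[i] = p[i] ^ ks[i]; /* encrypt */
        c[0] ^= 0x01;                     /* attacker flips one bit */
        for (int i = 0; i < 4; i++) p[i] = c[i] ^ ks[i]; /* decrypt */
        printf("first byte now 0x%02x (was 0x12)\n", p[0]); /* 0x13 */
        return 0;
    }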

Given the non-cryptographic nature of CRC and its 16-bit size, I'd
round down the malicious tamper detection it provides to zero. At best
it catches random disk errors, so we might as well keep it in plain text
and checkable offline.

More generally, without a cryptographic MAC I don't think it's
possible to provide any meaningful malicious tamper detection. And
even that would have to be off-page to deal with page replay (which I
think is out of scope).

[1]: https://en.wikipedia.org/wiki/CRC-32#Data_integrity

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 14:15:39 -0400, Tom Lane wrote:
> So I think we should apply something like the attached, and if the
> buildfarm shows any instability as a result, dealing with that by
> taking out the RAISE NOTICE commands.

+1

> diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out 
> b/src/test/isolation/expected/insert-conflict-specconflict.out
> index 5726bdb..20cc421 100644
> --- a/src/test/isolation/expected/insert-conflict-specconflict.out
> +++ b/src/test/isolation/expected/insert-conflict-specconflict.out
> @@ -13,7 +13,11 @@ pg_advisory_locksess   lock
>  step controller_show: SELECT * FROM upserttest;
>  keydata   
>  
> +s1: NOTICE:  called for k1
> +s1: NOTICE:  blocking 3
>  step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted 
> s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data 
> || ' with conflict update s1'; 
> +s2: NOTICE:  called for k1
> +s2: NOTICE:  blocking 3

Hm, that actually makes the test - which is pretty complicated - easier
to understand.


> diff --git a/src/test/isolation/expected/plpgsql-toast.out 
> b/src/test/isolation/expected/plpgsql-toast.out
> index 4341153..39a7bbe 100644
> --- a/src/test/isolation/expected/plpgsql-toast.out
> +++ b/src/test/isolation/expected/plpgsql-toast.out
> @@ -35,6 +35,7 @@ step unlock:
>  pg_advisory_unlock
>  
>  t  
> +s1: NOTICE:  x = foofoofoofo

Yea, there indeed does not seem to be much point in this.

Greetings,

Andres Freund




Adding column "mem_usage" to view pg_prepared_statements

2019-07-27 Thread Daniel Migowski
Hello,

I just implemented a small change that adds another column "mem_usage" to the 
system view "pg_prepared_statements". It returns the memory usage total of 
CachedPlanSource.context, CachedPlanSource.query_context and if available 
CachedPlanSource.gplan.context.

Looks like this:

IKOffice_Daume=# prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1;
PREPARE
IKOffice_Daume=# select * from pg_prepared_statements;
 name |                                     statement                                     |         prepare_time          | parameter_types | from_sql | mem_usage
------+-----------------------------------------------------------------------------------+-------------------------------+-----------------+----------+-----------
 test | prepare test as select * from vw_report_salesinvoice where salesinvoice_id = $1; | 2019-07-27 20:21:12.63093+02  | {integer}       | t        |  33580232
(1 row)

I did this in preparation for reducing the memory usage of prepared statements 
and believe that this gives client applications an option to investigate which 
prepared statements should be dropped. Also this makes it possible to directly 
examine the results of further changes and their effectiveness on reducing the 
memory load of prepared statements.

Is a patch welcome or is this feature not of interest?

Also I wonder why the "prepare test as" is part of the statement column. It 
isn't even part of the real statement that is prepared as far as I would 
assume. Would prefer to just have the "select *..." in that column.

Kind regards,
Daniel Migowski



Re: Add parallelism and glibc dependent only options to reindexdb

2019-07-27 Thread Julien Rouhaud
On Sat, Jul 27, 2019 at 3:27 PM Michael Paquier  wrote:
>
> On Sat, Jul 27, 2019 at 11:44:47AM +0200, Julien Rouhaud wrote:
> > That's probably still more intuitive than having the count coming from
> > either main() or from get_parallel_object_list() depending on the
> > process type, so I'm fine with that alternative.  Maybe we could bite
> > the bullet and add a count member to Simple*List, also providing a
> > macro to initialize a new list so that next time a field is added
> > there won't be a massive boilerplate code change?
>
> Perhaps we could discuss that on a separate thread.

Agreed.

>   For now I
> have gone with the simplest approach of counting the items, and
> stopping the count if there are more items than jobs.  While reviewing
> I have found a double-free in your patch when building a list of
> relations for schemas or databases.  If the list finishes empty,
> PQfinish() was called twice on the connection, leading to a crash.  I
> have added a test for that

Oops, thanks for spotting and fixing.

> , done an extra pass on the patch adjusting
> a couple of things then committed the patch with the restriction on
> --index and --jobs.  This entry is now marked as committed in the CF
> app.

Thanks!




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
While I'm looking at isolationtester ... my eye was immediately drawn
to this bit, because it claims to be dealing with NOTIFY messages ---
though that's wrong, it's really blocking NOTICE messages:

/*
 * Suppress NOTIFY messages, which otherwise pop into results at odd
 * places.
 */
res = PQexec(conns[i], "SET client_min_messages = warning;");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "message level setup failed: %s", 
PQerrorMessage(conns[i]));
exit(1);
}
PQclear(res);

This seems to me to be a great example of terrible test design.
It's not isolationtester's job to impose a client_min_messages level
on the test scripts; if they want a non-default level, they can
perfectly well set it for themselves in their setup sections.
Furthermore, if I remove this bit, the only NOTICE messages I'm
actually seeing come from explicit RAISE NOTICE messages in the
test scripts themselves, which means this is overriding the express
intent of individual test authors.  And my testing isn't detecting
any instability in when those come out, although of course the
buildfarm might have a different opinion.

So I think we should apply something like the attached, and if the
buildfarm shows any instability as a result, dealing with that by
taking out the RAISE NOTICE commands.

I'm a little inclined to remove the notice anyway in the
plpgsql-toast test, as the bulk-to-value ratio doesn't seem good.

regards, tom lane

diff --git a/src/test/isolation/expected/insert-conflict-specconflict.out b/src/test/isolation/expected/insert-conflict-specconflict.out
index 5726bdb..20cc421 100644
--- a/src/test/isolation/expected/insert-conflict-specconflict.out
+++ b/src/test/isolation/expected/insert-conflict-specconflict.out
@@ -13,7 +13,11 @@ pg_advisory_locksess   lock
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
 step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; 
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 3
 step s2_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2'; 
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -30,10 +34,14 @@ step controller_unlock_1_3: SELECT pg_advisory_unlock(1, 3);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step controller_unlock_2_3: SELECT pg_advisory_unlock(2, 3);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
@@ -50,6 +58,10 @@ step controller_unlock_1_2: SELECT pg_advisory_unlock(1, 2);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step s1_upsert: <... completed>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -69,7 +81,11 @@ pg_advisory_locksess   lock
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
 step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s1'; 
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 3
 step s2_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s2') ON CONFLICT (blurt_and_lock(key)) DO UPDATE SET data = upserttest.data || ' with conflict update s2'; 
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -86,10 +102,14 @@ step controller_unlock_1_3: SELECT pg_advisory_unlock(1, 3);
 pg_advisory_unlock
 
 t  
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 2
 step controller_unlock_2_3: SELECT pg_advisory_unlock(2, 3);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step controller_show: SELECT * FROM upserttest;
 keydata   
 
@@ -106,6 +126,10 @@ step controller_unlock_2_2: SELECT pg_advisory_unlock(2, 2);
 pg_advisory_unlock
 
 t  
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
+s2: NOTICE:  called for k1
+s2: NOTICE:  blocking 2
 step s2_upsert: <... completed>
 step controller_show: SELECT * FROM upserttest;
 keydata   
@@ -127,7 +151,11 @@ keydata
 
 step s1_begin: BEGIN;
 step s2_begin: BEGIN;
+s1: NOTICE:  called for k1
+s1: NOTICE:  blocking 3
 step s1_upsert: INSERT INTO upserttest(key, data) VALUES('k1', 'inserted s1') ON CONFLICT (blurt_and_lock(key)) 

Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Andres Freund  writes:
> On 2019-07-27 12:46:51 -0400, Tom Lane wrote:
>> I'm finding alternative #3 the most attractive, because we really
>> want isolation-style testing for LISTEN/NOTIFY, and this solution
>> doesn't require designing a psql feature that we'd need to get
>> consensus on.

> Perhaps we could just have isolationtester check to which
> isolationtester session the backend pid belongs? And then print the
> session name instead of the pid? That should be fairly easy, and would
> probably give us all we need?

Oh, that's a good idea -- it's already tracking all the backend PIDs,
so probably not much extra work to do it like that.

regards, tom lane




Re: Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Andres Freund
Hi,

On 2019-07-27 12:46:51 -0400, Tom Lane wrote:
> So, if we'd like to have more thorough NOTIFY coverage without going
> to that much work, what to do?  I thought of a few alternatives:
> 
> 1. Write a TAP test instead of using the old test frameworks, and
> use regexps to check the expected output.  But this seems ugly and
> hard to get right.  In particular, our TAP infrastructure doesn't
> seem to be (easily?) capable of running concurrent psql sessions,
> so it doesn't seem like there's any good way to test cross-session
> notifies that way.

It's not that hard to have concurrent psql sessions -
e.g. src/test/recovery/t/013_crash_restart.pl does so. Writing tests by
interactively controlling psql is pretty painful regardless.

I'm inclined to think that this is better tested using isolationtester
than a tap test.


> 2. Change psql so that there's a way to get NOTIFY messages without
> the sending PID.  For testing purposes, it'd be sufficient to know
> whether the sending PID is our own backend's PID or not.  This idea
> is not horrible, and it might even be useful for outside purposes
> if we made it flexible enough; which leads to thoughts like allowing
> the psql user to set a format-style string, similar to the PROMPT
> strings but with escapes for channel name, payload, etc.  I foresee
> bikeshedding, but we could probably come to an agreement on a feature
> like that.

I was wondering about just tying it to VERBOSITY. But that'd not allow
us to see whether our backend was the sender. I'm mildly inclined to
think that that might still be a good idea, even if we mostly go with
3) - some basic plain regression test coverage of actually receiving
notifies would be good.


> 3. On the other hand, that doesn't help much for the isolation tester
> because it doesn't go through psql.  In fact, AFAICS it doesn't have
> any provision for dealing with notify messages at all; probably,
> in the async-notify.spec test, the listening session builds up a
> queue of notifies that it never reads.  So we could imagine addressing
> the testing gap strictly inside the isolation-tester framework, if we
> added the ability for it to detect and print notifications in a
> test-friendly format (no explicit PIDs).
> 
> I'm finding alternative #3 the most attractive, because we really
> want isolation-style testing for LISTEN/NOTIFY, and this solution
> doesn't require designing a psql feature that we'd need to get
> consensus on.

Yea. I think that's really what we need. As you say, the type of test we
really need is what isolationtester provides. We can reimplement it
awkwardly in perl, but there seems to be little point in doing so.
Especially as what we're talking about is an additional ~15 lines or so
of code in isolationtester.

It'd be kinda neat if we had other information in the notify
message. E.g. having access to the sender's application name would be
useful for isolationtester, to actually verify where the message came
from. But it's probably not worth investing a lot in that.

Perhaps we could just have isolationtester check to which
isolationtester session the backend pid belongs? And then print the
session name instead of the pid? That should be fairly easy, and would
probably give us all we need?
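
Something along these lines, maybe (a sketch only; names are made up,
and isolationtester actually keeps the backend PIDs as strings, so the
comparison would differ):

    /* Map a NOTIFY sender's PID back to a session name. */
    static const char *
    sessionNameForPid(int pid, TestSpec *spec, const int *session_pids)
    {
        for (int i = 0; i < spec->nsessions; i++)
            if (session_pids[i] == pid)
                return spec->sessions[i]->name;
        return "another session";   /* sender outside the test */
    }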

Greetings,

Andres Freund




Re: [PATCH] get rid of StdRdOptions, use individual binary reloptions representation for each relation kind instead

2019-07-27 Thread Tomas Vondra

Hi Nikolay,

thanks for sending a new version of the patch. I've done a basic review
today, so let me share some comments about the patch.

Firstly, there's an important question: why should we actually do this?

At the beginning of this thread you mentioned memory usage - e.g. for
indexes the reduced struct is just 8B (instead of 82B). I doubt it's
worth doing for this reason - it's a tiny amount of memory (you'd need
~16k indexes to save 1MB). So memory consumption does not seem like a
pressing issue (e.g. we're probably wasting way more memory thanks to
AllocSet using the 2^N bins to allocate chunks). I see Andres already
voiced a similar opinion last year.

You then mentioned that

 > My real motivation for this patch is to make code more uniform.
 > So the patch over it will be much clearer. And to make result
 > code more clear and uniform too.

which seems like a much better reason. The thing is - I'm not quite
convinced this patch makes the code more uniform/clearer. While that's
a somewhat subjective opinion, there are cases where the code/API gets
obviously better - and this does not seem like one of those cases :-(

The one remaining possible benefit of this patch is making the code
easier to reuse / extend from other patches. In fact, that's why I'm
looking at this patch, because in the "opclass parameters" thread it was
repeatedly mentioned this patch would be useful.

So I think it'd be good to coordinate with Nikita Glukhov, and rebase
the opclass parameters patch on top of this one to verify/demonstrate
how it benefits that patch.



Now, some comments about the patch itself:


1) I see there's a bunch of functions parsing reloptions with about this
pattern:

   bytea *
   btoptions(Datum reloptions, bool validate)
   {
       static const relopt_parse_elt tab[] = { ... };

       options = parseRelOptions(...);

       /* if none set, we're done */
       if (numoptions == 0)
           return NULL;

       rdopts = allocateReloptStruct(...);

       fillRelOptions(rdopts, ...);

       pfree(options);

       return (bytea *) rdopts;
   }

so I wonder if the patch might define a wrapper doing all of this,
instead of copying it in a number of places.
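
Roughly like this, presumably (argument names invented, but following
the calls in the pattern above):

    static bytea *
    build_reloptions(Datum reloptions, bool validate, relopt_kind kind,
                     Size relopt_struct_size,
                     const relopt_parse_elt *elems, int nelems)
    {
        int           numoptions;
        relopt_value *options;
        void         *rdopts;

        options = parseRelOptions(reloptions, validate, kind, &numoptions);

        /* if none set, we're done */
        if (numoptions == 0)
            return NULL;

        rdopts = allocateReloptStruct(relopt_struct_size, options, numoptions);
        fillRelOptions(rdopts, relopt_struct_size, options, numoptions,
                       validate, elems, nelems);
        pfree(options);

        return (bytea *) rdopts;
    }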


2) The patch makes various new places aware about how reloptions for
different relkinds are parsed differently. IMHO that's a bad thing,
because it essentially violates layering / levels of abstraction.

For example, there's this change in heap_multi_insert():

-saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
-   HEAP_DEFAULT_FILLFACTOR);
+if (IsToastRelation(relation))
+saveFreeSpace = ToastGetTargetPageFreeSpace();
+else
+saveFreeSpace = HeapGetTargetPageFreeSpace(relation);

so code which was entirely oblivious to TOAST vs. non-TOAST relations
suddenly cares about this difference. And there's no good reason for
that, because this if could be added to the original macro, eliminating
this change entirely. And this exact same change appears in various other
places.

The same thing applies to this change in get_relation_info:

-/* Retrieve the parallel_workers reloption, or -1 if not set. */
-rel->rel_parallel_workers = RelationGetParallelWorkers(relation, -1);
+/*
+ * Retrieve the parallel_workers for heap and mat.view relations.
+ * Use -1 if not set, or if we are dealing with other relation kinds
+ */
+if (relation->rd_rel->relkind == RELKIND_RELATION ||
+relation->rd_rel->relkind == RELKIND_MATVIEW)
+rel->rel_parallel_workers = RelationGetParallelWorkers(relation, -1);
+else
+rel->rel_parallel_workers = -1;

and this vacuum_rel chunk is not particularly readable either:

   if (onerel->rd_options == NULL ||
-   ((StdRdOptions *) onerel->rd_options)->vacuum_index_cleanup)
+   (!IsToastRelation(onerel) &&
+((HeapRelOptions *) onerel->rd_options)->vacuum_index_cleanup) ||
+   (IsToastRelation(onerel) &&
+((ToastRelOptions *) onerel->rd_options)->vacuum_index_cleanup))

(To be fair, this already was looking directly at StdRdOptions, but
adding a new macro to get vacuum_index_cleanup would be a good idea
anyway).


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS)

2019-07-27 Thread Bruce Momjian
On Thu, Jul 25, 2019 at 11:30:55PM -0400, Alvaro Herrera wrote:
> On 2019-Jul-25, Alvaro Herrera wrote:
> 
> > > Uh, there are no known attacks on AES with known plain-text, e.g., SSL
> > > uses AES, so I think we are good with encrypting everything after the
> > > first 16 bytes.
> > 
> > Well, maybe there aren't any attacks *now*, but I don't know what will
> > happen in the future.  I'm not clear what's the intended win by
> > encrypting the all-zeroes page hole anyway.  If you leave it
> > unencrypted, the attacker knows the size of the hole, as well as the
> > size of the tuple data area and the size of the LP array.  Is that a
> > side-channel that leaks much?
> 
> This answer https://crypto.stackexchange.com/a/31090 is interesting for
> three reasons:
> 
> 1. it says we don't really have to worry about cleartext attacks, at
> least not in the immediate future, so encrypting the hole should be OK;
> 
> 2. it seems to reinforce a point I tried to make earlier, which is that
> reusing the IV a small number of times is *not that bad*:

I think using LSN and page number, we will _never_ reuse the IV, except
for cases like promoting two standbys, which I think we have to document
as an insecure practice.
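
For a 16-byte AES-CTR IV that could look roughly like this (layout
hypothetical):

    /* 8-byte LSN + 4-byte block number + 4 zero bytes of counter space. */
    static void
    page_encryption_iv(XLogRecPtr lsn, BlockNumber blkno, uint8 iv[16])
    {
        memcpy(iv, &lsn, sizeof(XLogRecPtr));         /* bytes 0-7  */
        memcpy(iv + 8, &blkno, sizeof(BlockNumber));  /* bytes 8-11 */
        memset(iv + 12, 0, 4);                        /* bytes 12-15 */
    }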

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS)

2019-07-27 Thread Bruce Momjian
On Thu, Jul 25, 2019 at 10:57:08PM -0400, Alvaro Herrera wrote:
> On 2019-Jul-25, Bruce Momjian wrote:
> 
> > On Thu, Jul 25, 2019 at 03:43:34PM -0400, Alvaro Herrera wrote:
> 
> > > Why are we encrypting the page header in the first place?  It seems to
> > > me that the encrypted area should cover only the line pointers and the
> > > tuple data area; the page header needs to be unencrypted so that it can
> > > be used at all: firstly because you need to obtain the LSN from it in
> > 
> > Yes, the plan was to not encrypt the first 16 bytes so the LSN was visible.
> 
> I don't see the value of encrypting the rest of the page header
> (which includes the page checksum).

Well, let's unpack this.  Encrypting the page in more finely grained
parts than 16 bytes is going to require the use of CTR, but I think we
are leaning toward that anyway.

One advantage of not encrypting the hole is that it might be faster, but
I think it might reduce parallelism possibilities, so it might be
slower.  This might need testing.

Not encrypting the hole does leak the size of the hole to the attacker,
but the size of the table is also visible to the attacker, so I don't
know if the hole size helps.  Knowing index hole size might be useful to
an attacker --- not sure.

> > > order to compute the IV, and secondly because the checksum must be
> > > validated *before* decrypting (per Moxie Marlinspike's "cryptographic
> > > doom" principle mentioned in a comment in the SE question).
> > 
> > Uh, I think we are still on the fence about writing the checksum _after_
> > encryption,
> 
> I don't see what's the reason for doing that.  The "cryptographic doom
> principle" page talks about this kind of scenario, and ISTM that the
> ultimate suggestion is that the page checksum ought to be verifiable
> prior to doing any decryption.

Uh, I listed the three options for the CRC and gave the benefits of
each:


https://www.postgresql.org/message-id/20190725200343.xo4dcjm5azrfn...@momjian.us

Obviously I was not clear on the benefits.  To quote:

1.  compute CRC and then encrypt everything
3.  encrypt and then CRC, and store the CRC encrypted

Numbers 1 & 3 give us tampering detection, though with the CRC being so
small, it isn't totally secure.
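
In code form, #1 is roughly (encrypt_page/decrypt_page are hypothetical;
pg_checksum_page is the real routine):

    /* write path: checksum the plaintext, then encrypt everything */
    ((PageHeader) page)->pd_checksum = pg_checksum_page(page, blkno);
    encrypt_page(page, key, iv);

    /* read path: must decrypt before the checksum can be verified */
    decrypt_page(page, key, iv);
    if (((PageHeader) page)->pd_checksum != pg_checksum_page(page, blkno))
        elog(ERROR, "page verification failed for block %u", blkno);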

> Are you worried about an attacker forging the page checksum by
> installing another encrypted page that gives the same checksum?  I'm not
> sure how that attack works ... I mean why can the attacker install
> arbitrary pages?

Well, with #2

2   encrypt and then CRC, and store the CRC unchanged

you can modify the page, even small parts, and just replace the CRC to
match your changes.  In #1 and #3, you would get a CRC error in almost
all cases since you have no way of setting the decrypted CRC without
knowing the key.  You can change the encrypted CRC, but the odds that
the decrypted one would match the page is very slim.

> > The only way offline tools can verify the CRC without access to the keys
> > is via #2, but #2 gives us _no_ detection of tampering.  I realize the
> > CRC tampering detection of #1 and #3 is not great, but it certainly has
> > some value.
> 
> It seems to me that you're trying to invent a cryptographic signature
> scheme on your own.  That seems very likely to backfire.

Well, we have to live within the constraints we have.  The question is
whether there is sufficient value to having such tampering detection (#1
& #3) compared to the ease of having offline tools verify the checksums
without needing access to the keys (#2).

> > > I am not totally clear on whether the special space and the "page hole"
> > > need to be encrypted.  I tend to think that they should *not* be
> > > encrypted; in particular, encrypting a large area containing zeroes seems
> > > a plentiful source of known cleartext, which seems a bad thing.  Special
> > > space also seems to contain known cleartext; maybe not as much as the
> > > page hole, but still seems better avoided.
> > 
> > Uh, there are no known attacks on AES with known plain-text, e.g., SSL
> > uses AES, so I think we are good with encrypting everything after the
> > first 16 bytes.
> 
> Well, maybe there aren't any attacks *now*, but I don't know what will
> happen in the future.  I'm not clear what's the intended win by
> encrypting the all-zeroes page hole anyway.  If you leave it
> unencrypted, the attacker knows the size of the hole, as well as the
> size of the tuple data area and the size of the LP array.  Is that a
> side-channel that leaks much?

See above.

> > > The checksum we currently have is not cryptographically secure -- it's
> > > not a crypto-strong signature.  If we want that, we need some further
> > > protection.  Maybe for encrypted tables we replace our current checksum
> > > with a cryptographically secure signature ...?  Pretty sure 16 bits are
> > > insufficient for that, but I suppose we would just use a different page
> > > header with room for a proper sig.
> > 
> > Yes, checksum 

Re: fsync error handling in pg_receivewal, pg_recvlogical

2019-07-27 Thread Sehrope Sarkuni
While reviewing this patch I read through some of the other fsync
callsites and noticed this typo (walkdir is in file_utils.c, not
initdb.c) too:

diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 315c74c745..9b79df2d7f 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3208,7 +3208,7 @@ SyncDataDirectory(void)
  *
  * Errors are reported at level elevel, which might be ERROR or less.
  *
- * See also walkdir in initdb.c, which is a frontend version of this logic.
+ * See also walkdir in file_utils.c, which is a frontend version of this logic.
  */
 static void
 walkdir(const char *path,

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/




Re: fsync error handling in pg_receivewal, pg_recvlogical

2019-07-27 Thread Sehrope Sarkuni
Hi,

Tried out this patch and it applies, compiles, and passes check-world. Also
flipped things around in pg_recvlogical.c to exit-on-success to ensure it's
actually being called and that worked too. Outside of a more complicated
harness that simulates fsync errors, I'm not sure how else to test this further.

I did some searching and found a FUSE-based filesystem that looks interesting:
CharybdeFS[1]. Rather than being fixed at mount time, it has a
client/server interface so you can change the handling of syscalls on the
fly[2]. For example you can error out fsync calls halfway through a test
rather than always or randomly. Haven't tried it out but leaving it here as
it seems relevant.

[1]: https://github.com/scylladb/charybdefs
[2]:
https://www.scylladb.com/2016/05/02/fault-injection-filesystem-cookbook/

On Wed, Jun 26, 2019 at 12:11 AM Michael Paquier 
wrote:

> Why use a different error code?  Using EXIT_FAILURE is a more common
> practice in the in-core binaries.  The patch looks fine to me except
> that, that's a good first cut.
>

An error code specific to fsync issues could help with tests as the harness
could check it to ensure things died for the right reasons. With a generic
"messed up fsync" harness you might even be able to run some existing tests
that would otherwise pass and check for the fsync-specific exit code.

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/


Testing LISTEN/NOTIFY more effectively

2019-07-27 Thread Tom Lane
Since we have three or four different NOTIFY improvement proposals
floating around in the current CF, I got a bit distressed at the lack
of test coverage for that functionality.  While the code coverage
report makes it look like commands/async.c isn't so badly covered,
that's all coming from src/test/regress/sql/async.sql and
src/test/isolation/specs/async-notify.spec.  A look at those files
shows that nowhere is there any actual verification that "NOTIFY foo"
results in a report of "foo" being received; let alone any
more-advanced questions such as whether de-duplication of reports
happens.

The reason for this is that psql's report of a notification event
includes the sending backend's PID, making it impossible for the
test output to be stable; neither the core nor isolation regression
test frameworks can cope with unpredictable output.

We've occasionally batted around ideas for making it possible for
these test frameworks to verify not-entirely-fixed output, and that
would be a good thing to do, but I'm not volunteering for that today.

So, if we'd like to have more thorough NOTIFY coverage without going
to that much work, what to do?  I thought of a few alternatives:

1. Write a TAP test instead of using the old test frameworks, and
use regexps to check the expected output.  But this seems ugly and
hard to get right.  In particular, our TAP infrastructure doesn't
seem to be (easily?) capable of running concurrent psql sessions,
so it doesn't seem like there's any good way to test cross-session
notifies that way.

2. Change psql so that there's a way to get NOTIFY messages without
the sending PID.  For testing purposes, it'd be sufficient to know
whether the sending PID is our own backend's PID or not.  This idea
is not horrible, and it might even be useful for outside purposes
if we made it flexible enough; which leads to thoughts like allowing
the psql user to set a format-style string, similar to the PROMPT
strings but with escapes for channel name, payload, etc.  I foresee
bikeshedding, but we could probably come to an agreement on a feature
like that.

3. On the other hand, that doesn't help much for the isolation tester
because it doesn't go through psql.  In fact, AFAICS it doesn't have
any provision for dealing with notify messages at all; probably,
in the async-notify.spec test, the listening session builds up a
queue of notifies that it never reads.  So we could imagine addressing
the testing gap strictly inside the isolation-tester framework, if we
added the ability for it to detect and print notifications in a
test-friendly format (no explicit PIDs).

I'm finding alternative #3 the most attractive, because we really
want isolation-style testing for LISTEN/NOTIFY, and this solution
doesn't require designing a psql feature that we'd need to get
consensus on.

Before I start coding that, any thoughts or better ideas?

regards, tom lane




Re: Add parallelism and glibc dependent only options to reindexdb

2019-07-27 Thread Michael Paquier
On Sat, Jul 27, 2019 at 11:44:47AM +0200, Julien Rouhaud wrote:
> That's probably still more intuitive than having the count coming from
> either main() or from get_parallel_object_list() depending on the
> process type, so I'm fine with that alternative.  Maybe we could bite
> the bullet and add a count member to Simple*List, also providing a
> macro to initialize a new list so that next time a field is added
> there won't be a massive boilerplate code change?

Perhaps we could discuss that on a separate thread.  For now I
have gone with the simplest approach of counting the items, and
stopping the count if there are more items than jobs.  While reviewing
I have found a double-free in your patch when building a list of
relations for schemas or databases.  If the list finishes empty,
PQfinish() was called twice on the connection, leading to a crash.  I
have added a test for that, done an extra pass on the patch adjusting
a couple of things then committed the patch with the restriction on
--index and --jobs.  This entry is now marked as committed in the CF
app.
--
Michael




Re: Built-in connection pooler

2019-07-27 Thread Thomas Munro
On Tue, Jul 16, 2019 at 2:04 AM Konstantin Knizhnik
 wrote:
> I have committed a patch which emulates the epoll EPOLLET flag and so should
> avoid a busy loop with poll().
> I will be pleased if you can check it at FreeBSD  box.

I tried your v12 patch and it gets stuck in a busy loop during make
check.  You can see it on Linux with ./configure ...
CFLAGS="-DWAIT_USE_POLL".


--
Thomas Munro
https://enterprisedb.com




Re: Built-in connection pooler

2019-07-27 Thread Dave Cramer
Responses inline. I just picked up this thread so please bear with me.

On Fri, 26 Jul 2019 at 16:24, Tomas Vondra 
wrote:

> Hi Konstantin,
>
> I've started reviewing this patch and experimenting with it, so let me
> share some initial thoughts.
>
>
> 1) not handling session state (yet)
>
> I understand handling session state would mean additional complexity, so
> I'm OK with not having it in v1. That being said, I think this is the
> primary issue with connection pooling on PostgreSQL - configuring and
> running a separate pool is not free, of course, but when people complain
> to us it's when they can't actually use a connection pool because of
> this limitation.
>
> So what are your plans regarding this feature? I think you mentioned
> you already have the code in another product. Do you plan to submit it
> in the pg13 cycle, or what's the plan? I'm willing to put some effort
> into reviewing and testing that.
>

I too would like to see the plan of how to make this feature complete.

My concern here is that for the pgjdbc client at least *every* connection
does some SET parameter, so from what I can tell from scanning this
thread, pooling would not be used at all. I suspect the .net driver does the
same thing.



> FWIW it'd be nice to expose it as some sort of interface, so that other
> connection pools can leverage it too. There are use cases that don't
> work with a built-in connection pool (say, PAUSE/RESUME in pgbouncer
> allows restarting the database) so projects like pgbouncer or odyssey
> are unlikely to disappear anytime soon.
>

Agreed, and as for other projects, I see their value in having the pool on
a separate host as being a strength.  I certainly don't see them going
anywhere soon. Either way having a unified pooling API would be a useful
goal.



> I also wonder if we could make it more permissive even in v1, without
> implementing dump/restore of session state.
>
> Consider for example patterns like this:
>
>   BEGIN;
>   SET LOCAL enable_nestloop = off;
>   ...
>   COMMIT;
>
> or
>
>   PREPARE x(int) AS SELECT ...;
>   EXECUTE x(1);
>   EXECUTE x(2);
>   ...
>   EXECUTE x(10);
>   DEALLOCATE x;
>

Again pgjdbc does use server prepared statements so I'm assuming this would
not work for clients using pgjdbc or .net

Additionally we have setSchema, which is really set search_path, again
incompatible.

Regards,

Dave



Re: Add parallelism and glibc dependent only options to reindexdb

2019-07-27 Thread Julien Rouhaud
On Fri, Jul 26, 2019 at 9:41 AM Michael Paquier  wrote:
>
> On Fri, Jul 26, 2019 at 09:36:32AM +0200, Julien Rouhaud wrote:
> > I see that you iterate over the SimpleStringList after it's generated.
> > Why not compute that while building it in get_parallel_object_list
> > (and keep the provided table list count) instead?
>
> Yeah.  I was hesitating to do that, or just break out of the counting
> loop if there are more objects than concurrent jobs, but that's less
> intuitive.

That's probably still more intuitive than having the count coming from
either main() or from get_parallel_object_list() depending on the
process type, so I'm fine with that alternative.  Maybe we could bite
the bullet and add a count member to Simple*List, also providing a
macro to initialize a new list so that next time a field is added
there won't be a massive boilerplate code change?
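
For reference, the shape of that idea (the count member and the macro
are hypothetical additions to fe_utils/simple_list.h):

    typedef struct SimpleStringList
    {
        SimpleStringListCell *head;
        SimpleStringListCell *tail;
        size_t      count;      /* proposed new member */
    } SimpleStringList;

    /* proposed initializer, so future members only need changing here */
    #define SIMPLE_STRING_LIST_INIT(list) \
        do { (list)->head = (list)->tail = NULL; (list)->count = 0; } while (0)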




Re: LLVM compile failing in seawasp

2019-07-27 Thread Thomas Munro
On Sat, Jul 27, 2019 at 7:12 PM Thomas Munro  wrote:
> On Sat, Jul 27, 2019 at 7:06 PM Fabien COELHO  wrote:
> > Maybe we should consider doing an explicit bug report, but I would not bet
> > that they are going to fold… or fix the issue on the pg side, e.g. "pg_Min",
> > fewer than 400 instances, and backpatch to all supported
> > versions :-(
>
> I would just #undef Min for our small number of .cpp files that
> include LLVM headers.  It's not as though you need it in C++, which
has std::min() from <algorithm>.

Like so.  Fixes the problem for me (llvm-devel-9.0.d20190712).

-- 
Thomas Munro
https://enterprisedb.com


0001-Avoid-macro-clash-with-LLVM-9.patch
Description: Binary data


Re: LLVM compile failing in seawasp

2019-07-27 Thread Thomas Munro
On Sat, Jul 27, 2019 at 7:06 PM Fabien COELHO  wrote:
> >>> c.h defines a C Min macro conflicting with LLVM's new class
> >>> llvm::ElementCount Min member
> >>
> >> Really?  Well, we will hardly be the only code they broke with that.
> >> I think we can just wait for them to reconsider.
> >
> > FYI This is now on LLVM's release_90 branch, due out on August 28.
>
> Maybe we should consider doing an explicit bug report, but I would not bet
> that they are going to fold… or fix the issue on the pg side, e.g. "pg_Min",
> fewer than 400 instances, and backpatch to all supported
> versions :-(

I would just #undef Min for our small number of .cpp files that
include LLVM headers.  It's not as though you need it in C++, which
has std::min() from <algorithm>.
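
i.e. at the top of each affected .cpp file, something like (header name
illustrative):

    #include "postgres.h"       /* c.h defines the Min() macro */

    #undef Min                  /* clashes with llvm::ElementCount::Min */

    #include <llvm/IR/Instructions.h>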

-- 
Thomas Munro
https://enterprisedb.com




Re: LLVM compile failing in seawasp

2019-07-27 Thread Fabien COELHO



c.h defines a C Min macro conflicting with LLVM's new class
llvm::ElementCount Min member


Really?  Well, we will hardly be the only code they broke with that.
I think we can just wait for them to reconsider.


FYI This is now on LLVM's release_90 branch, due out on August 28.


Maybe we should consider doing an explicit bug report, but I would not bet 
that they are going to fold… or fix the issue on the pg side, e.g. "pg_Min", 
fewer than 400 instances, and backpatch to all supported 
versions :-(


--
Fabien.

Re: pg_upgrade version checking questions

2019-07-27 Thread Peter Eisentraut
On 2019-07-25 16:33, Daniel Gustafsson wrote:
> Fair enough, those are both excellent points.  I’ve shuffled the code around to
> move back the check for exec_path to setup (albeit earlier than before due to
> where we perform directory checking).  This does mean that the directory
> checking in the options parsing must learn to cope with missing directories,
> which is a bit unfortunate since it’s already doing a few too many things 
> IMHO.
> To ensure dogfooding, I also removed the use of -B in ‘make check’ for
> pg_upgrade, which should bump the coverage.
> 
> Also spotted a typo in a pg_upgrade file header in a file touched by this, so
> included it in this thread too as a 0004.

I have committed 0002, 0003, and 0004.

The implementation in 0001 (Only allow upgrades by the same exact
version new bindir) has a problem.  It compares (new_cluster.bin_version
!= PG_VERSION_NUM), but new_cluster.bin_version is actually just the
version of pg_ctl, so this is just comparing the version of pg_upgrade
with the version of pg_ctl, which is not wrong, but doesn't really
achieve the full goal of having all binaries match.

I think a better structure would be to add a version check for each
validate_exec() so that each program is checked against pg_upgrade.
This should mirror what find_other_exec() in src/common/exec.c does.  In
a better world we would use find_other_exec() directly, but then we
can't support -B.  Maybe expand find_other_exec() to support this, or
make a private copy for pg_upgrade to support this.  (Also, we have two
copies of validate_exec() around.  Maybe this could all be unified.)
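
A sketch of that structure (check_exec_version() is hypothetical;
pipe_read_line() is the helper find_other_exec() uses, though it is
currently static in exec.c):

    static void
    check_exec_version(const char *path, const char *expected_versionstr)
    {
        char    cmd[MAXPGPATH];
        char    line[MAXPGPATH];

        snprintf(cmd, sizeof(cmd), "\"%s\" --version", path);
        if (pipe_read_line(cmd, line, sizeof(line)) == NULL ||
            strcmp(line, expected_versionstr) != 0)
            pg_fatal("\"%s\" is not the same version as pg_upgrade\n", path);
    }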

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] WAL logging problem in 9.4.3?

2019-07-27 Thread Noah Misch
On Thu, Jul 25, 2019 at 10:39:36AM +0900, Kyotaro Horiguchi wrote:
> No substantial change have been made by this rebasing.

Thanks.  I'll likely review this on 2019-08-20.  If someone opts to review it
earlier, I welcome that.




Re: Initdb failure

2019-07-27 Thread Peter Eisentraut
On 2019-07-25 17:09, Rafia Sabih wrote:
> But on the other hand, emitting the right error message at least would
> be good for the sake of correctness if nothing else. But yes, that
> definitely should be weighed against the effort required for
> this.

I think if you want to make this more robust, get rid of the fixed-size
array, use dynamic allocation with PQExpBuffer, and let the operating
system complain if it doesn't like the directory name length.
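
i.e. something like this (a sketch; pg_data/subdir stand in for whatever
initdb has at that point):

    PQExpBuffer path = createPQExpBuffer();

    printfPQExpBuffer(path, "%s/%s", pg_data, subdir);
    if (mkdir(path->data, pg_dir_create_mode) < 0 && errno != EEXIST)
        pg_log_error("could not create directory \"%s\": %m", path->data);
    destroyPQExpBuffer(path);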

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services