Re: Code of Conduct plan

2018-09-22 Thread Robert Haas
On Fri, Sep 14, 2018 at 4:17 PM, Tom Lane  wrote:
> There's been quite a lot of input, from quite a lot of people, dating
> back at least as far as a well-attended session at PGCon 2016.  I find
> it quite upsetting to hear accusations that core is imposing this out
> of nowhere.  From my perspective, we're responding to a real need
> voiced by other people, not so much by us.

Yeah, but there's a difference between input and agreement.  I don't
think there's been a mailing list thread anywhere at any time where a
clear majority of the people on that thread supported the idea of a
code of conduct.  I don't think that question has even been put.  I
don't think there's ever been a developer meeting where by a show of
hands the idea of a CoC, much less the specific text, got a clear
majority.  I don't think that any attempt has been made to do that,
either.  Core is (thankfully) not usually given to imposing new rules
on the community; we normally operate by consensus.  Why this specific
instance is an exception, as it certainly seems to be, is unclear to
me.

To be clear, I'm not saying that no harassment occurs in our
community.  I suspect women get harassed at our conferences.  I know
of only one specific incident that made me uncomfortable, and that was
quite a few years ago and the woman in question laughed it off when I
asked her if there was a problem, but I have heard rumors of other
things on occasion, and I just wouldn't be too surprised if we're not
all as nice in private as we pretend to be in public.  And on the
other hand, I think that mailing list discussions step over the line
to harassment from time to time even though that's in full public
view.  Regrettably, you and I have both been guilty of that from time
to time, as have many others.  I know that I, personally, have been
trying to be a lot more careful about the way I phrase criticism in
recent years; I hope that has been noticeable, but I only see it from
my own perspective, so I don't know.  Notwithstanding, I would like to
see us, as a group, do better.  We should tolerate less bad behavior
in ourselves and in others, and however good or bad we are today as
people, we should try to be better people.

Whether or not the code of conduct plan that the core committee has
decided to implement is likely to move us in that direction remains
unclear to me.  I can't say I'm very impressed by the way the process
has been carried out up to this point; hopefully it will work out for
the best all the same.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Changing the setting of wal_sender_timeout per standby

2018-09-22 Thread Andres Freund
Hi,

On 2018-09-22 15:27:24 +0900, Michael Paquier wrote:
> On Fri, Sep 21, 2018 at 06:26:19AM +, Tsunakawa, Takayuki wrote:
> > Agreed.
> 
> Okay, I have pushed the patch with all your suggestions included.

Have there been discussions about the security effects of this change?
Previously the server admin could control the timeout, which could
affect things like syncrep, after this it's not possible anymore.  I
*think* that's ok, but it should be discussed.

Greetings,

Andres Freund



Re: transaction_timestamp() inside of procedures

2018-09-22 Thread Bruce Momjian
On Fri, Sep 21, 2018 at 06:35:02PM -0400, Bruce Momjian wrote:
> Does the SQL standard have anything to say about CURRENT_TIMESTAMP in
> procedures?  Do we need another function that does advance on procedure
> commit?

I found a section in the SQL standards that talks about it, but I don't
understand it.  Can I quote the paragraph here?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: testing pg_dump against very old versions

2018-09-22 Thread Michael Paquier
On Sat, Sep 22, 2018 at 12:46:31PM -0400, Andrew Dunstan wrote:
> Meanwhile, it would be good for people to think about creating a TAP testing
> regime for this.

Patch 0001 from this email, or something rather similar to that, could
be used:
https://www.postgresql.org/message-id/20180126080026.gi17...@paquier.xyz
--
Michael




Re: PATCH: pgbench - option to build using ppoll() for larger connection counts

2018-09-22 Thread Tom Lane
I wrote:
> I'm strongly tempted to just remove the POLL_UNWANTED business
> altogether, as it seems both pointless and unportable on its face.
> Almost by definition, we can't know what "other" bits a given
> implementation might set.
> I'm not entirely following the point of including POLLRDHUP in
> POLL_EVENTS, either.  What's wrong with the traditional solution
> of detecting EOF?

So after studying that a bit longer, I think it's just wrong.
It's not the business of this code to be checking for connection
errors at all; that is libpq's province.  The libpq API specifies
that callers should wait for read-ready on the socket, and nothing
else.  So the only bit we need concern ourselves with is POLLIN.

I also seriously disliked both the details of the abstraction API
and its lack of documentation.  (Other people complained about that
upthread, too.)  So attached is a rewrite attempt.  There's still a
couple of grotty things about it; in particular the ppoll variant of
socket_has_input() knows more than one could wish about how it's being
used.  But I couldn't see a way to make it cleaner without significant
changes to the logic in threadRun, and that didn't seem better.

I think that Andres' concern upthread about iterating over a whole
lot of sockets is somewhat misplaced.  We aren't going to be iterating
over the entire set of client connections, only those being run by a
particular pgbench thread.  So assuming you're using a reasonable ratio
of threads to clients, there won't be very many to look at in any one
thread.  In any case, I'm dubious that we could get much of a win from
some other abstraction for waiting: both of these code paths do work
pretty much proportional to the number of connections the current
thread is responsible for, and it's hard to see how to avoid that.

I've tested this on both Linux and FreeBSD, and it seems to work fine.

I'm reasonably happy with this version of the patch, and would be
ready to commit it, but I thought I'd throw it out for another round
of review if anyone wants to.

regards, tom lane

diff --git a/configure b/configure
index 9b30402..21ecd29 100755
--- a/configure
+++ b/configure
@@ -15093,7 +15093,7 @@ fi
 LIBS_including_readline="$LIBS"
 LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
 
-for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
+for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
 do :
   as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
 ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
diff --git a/configure.in b/configure.in
index 2e60a89..8fe6894 100644
--- a/configure.in
+++ b/configure.in
@@ -1562,7 +1562,7 @@ PGAC_FUNC_WCSTOMBS_L
 LIBS_including_readline="$LIBS"
 LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
 
-AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
+AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
 
 AC_REPLACE_FUNCS(fseeko)
 case $host_os in
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 41b756c..ae81aba 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -28,8 +28,8 @@
  */
 
 #ifdef WIN32
-#define FD_SETSIZE 1024			/* set before winsock2.h is included */
-#endif			/* ! WIN32 */
+#define FD_SETSIZE 1024			/* must set before winsock2.h is included */
+#endif
 
 #include "postgres_fe.h"
 #include "fe_utils/conditional.h"
@@ -45,12 +45,21 @@
 #include 
 #include 
 #include 
+#ifdef HAVE_SYS_RESOURCE_H
+#include <sys/resource.h>		/* for getrlimit */
+#endif
+
+/* For testing, PGBENCH_USE_SELECT can be defined to force use of that code */
+#if defined(HAVE_PPOLL) && !defined(PGBENCH_USE_SELECT)
+#define POLL_USING_PPOLL
+#ifdef HAVE_POLL_H
+#include <poll.h>
+#endif
+#else			/* no ppoll(), so use select() */
+#define POLL_USING_SELECT
 #ifdef HAVE_SYS_SELECT_H
 #include <sys/select.h>
 #endif
-
-#ifdef HAVE_SYS_RESOURCE_H
-#include <sys/resource.h>		/* for getrlimit */
 #endif
 
 #ifndef M_PI
@@ -71,6 +80,33 @@
 #define MM2_ROT47
 
 /*
+ * Multi-platform socket set implementations
+ */
+
+#ifdef POLL_USING_PPOLL
+#define SOCKET_WAIT_METHOD "ppoll"
+
+typedef struct socket_set
+{
+	int			maxfds;			/* allocated length of pollfds[] array */
+	int			

Re: vary read_only in SPI calls? or poke at the on-entry snapshot?

2018-09-22 Thread Chapman Flack
On 09/20/18 00:44, Tom Lane wrote:
> Chapman Flack  writes:
>> Would it be unprecedented / be unreasonable / break anything for the
>> install_jar function to simply force a CommandCounterIncrement
>> at the end of step 1 (after its temporary snapshot has been popped,
>> so the former/on-entry ActiveSnapshot gets the increment)?
> 
> The risk you take there is changing the behavior of calling function(s).
> 
>> DECISION TIME ...
> 
>> 1. fiddle the loader to always pass read_only => false to SPI calls,
>>regardless of the volatility of the function it is loading for.
>> 2. leave the loader alone, and adjust install_jar (an infrequent
>>operation) to do something heretical with its on-entry snapshot.
> 
> I suspect #1 is less likely to have bad side-effects.  But I've not
> done any careful analysis.

Or, revisiting #2, what of install_jar first pushing a copied snapshot
(the current active one? a new one from GetTransactionSnapshot?) and
keeping that copy on the stack during both operations (loading the jar
and executing the deployment commands, with a CommandCounterIncrement
in between) and popping it before return?

Would that alleviate the concern of changing calling functions' behavior?

-Chap



Re: Add RESPECT/IGNORE NULLS and FROM FIRST/LAST options

2018-09-22 Thread Andrew Gierth
> "Krasiyan" == Krasiyan Andreev  writes:

 Krasiyan> Hi,

 Krasiyan> Patch applies and compiles, all included tests and building
 Krasiyan> of the docs pass. I am using last version from more than two
 Krasiyan> months ago in production environment with real data and I
 Krasiyan> didn't find any bugs, so I'm marking this patch as ready for
 Krasiyan> committer in the commitfest app.

Unfortunately, reviewing it from a committer perspective - I can't
possibly commit this as it stands, and anything I did to it would be
basically a rewrite of much of it.

Some of the problems could be fixed. For example the type names could be
given pg_* prefixes (it's clearly not acceptable to create random
special-purpose boolean subtypes in pg_catalog and _not_ give them such
a prefix), and the precedence hackery in gram.y could have comments
added (gram.y is already bad enough; _anything_ fancy with precedence
has to be described in the comments). But I don't like that hack with
the special types at all, and I think that needs a better solution.

Normally I'd push hard to try and get some solution that's sufficiently
generic to allow user-defined functions to make use of the feature. But
I think the SQL spec people have managed to make that literally
impossible in this case, what with the FROM keyword appearing in the
middle of a production and not followed by anything sufficiently
distinctive to even use for extra token lookahead.

Also, as has been pointed out in a number of previous features, we're
starting to accumulate identifiers that are reserved in subtly different
ways from our basic four-category system (which is itself a significant
elaboration compared to the spec's simple reserved/unreserved
distinction). As I recall this objection was specifically raised for
CUBE, but justified there by the existence of the contrib/cube extension
(and the fact that the standard CUBE() construct is used only in very
specific places in the syntax). This patch would make lead / lag /
first_value / last_value / nth_value syntactically "special" while not
actually reserving them (beyond having them in unreserved_keywords); I
think serious consideration should be given to whether they should
instead become col_name_keywords (which would, I believe, make it
unnecessary to mess with precedence).

Anyone have any thoughts or comments on the above?

-- 
Andrew (irc:RhodiumToad)



RE: impact of SPECTRE and MELTDOWN patches

2018-09-22 Thread Enrique Escobar
Hi.

In my case, I tested with IBM POWER8 and POWER9 machines, and it did not
go very well (support teams in Mexico / Spain / the United States); we
tried to optimize but gained no more performance.

On my Intel machines, I see that you have 2 cores and, if I am not
mistaken, relatively little RAM. What worked for me was assigning more
cores and more memory per operational load (noting that I have a heavy
operational load). Try assigning more.

I hope this helps.

Regards

Enrique

From: ROS Didier [mailto:didier@edf.fr]
Sent: Friday, September 21, 2018 04:53
To: tony.r...@atos.net; pgsql-hack...@postgresql.org
Subject: RE: impact of SPECTRE and MELTDOWN patches

Hi

No, 80% of our IT infrastructure is Intel HW.

Do you have any recommendations for correcting the performance impact?

Best Regards

Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD
Nanterre Picasso - E2 565D (aile nord-est)
32 Avenue Pablo Picasso
92000 Nanterre
didier@edf.fr
support-postgres-nive...@edf.fr
support-oracle-nive...@edf.fr
Tél. : 01 78 66 61 14
Tél. mobile : 06 49 51 11 88
Lync : ros.did...@edf.fr

From: tony.r...@atos.net [mailto:tony.r...@atos.net]
Sent: Friday, September 21, 2018 11:48
To: ROS Didier <didier@edf.fr>; pgsql-hack...@postgresql.org
Subject: RE: impact of SPECTRE and MELTDOWN patches

Thx.

So, it is Intel HW.

Have you also experimented with Power HW?

Regards

Cordialement,
Tony Reix
tony.r...@atos.net
ATOS / Bull SAS
ATOS Expert
IBM Coop Architect & Technical Leader
Office : +33 (0) 4 76 29 72 67
1 rue de Provence - 38432 Échirolles - France
www.atos.net

From: ROS Didier <didier@edf.fr>
Sent: Friday, September 21, 2018 11:40
To: REIX, Tony; pgsql-hack...@postgresql.org
Subject: RE: impact of SPECTRE and MELTDOWN patches

Hi

Here is the HW information:

[root@pcyyymm9 ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 62
model name      : Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz
stepping        : 4
microcode       : 0x427
cpu MHz         : 2300.000
cache size      : 16384 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx f16c rdrand hypervisor lahf_lm epb fsgsbase smep dtherm ida arat pln pts
bogomips        : 4600.00
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual

[root@pcyyymm9 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          15877        4879         226        3029       10772        7545
Swap:          2047         107        1940

[root@pcyyymm9 ~]# uname -a
Linux pcyyymm9.pcy.edfgdf.fr 3.10.0-862.6.3.el7.x86_64 #1 SMP Fri Jun 15 17:57:37 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Best regards

Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD
Nanterre Picasso - E2 565D (aile nord-est)
32 Avenue Pablo Picasso
92000 Nanterre
didier@edf.fr

From: tony.r...@atos.net [mailto:tony.r...@atos.net]
Sent: Friday, September 21, 2018 11:24
To: ROS Didier <didier@edf.fr>; pgsql-hack...@postgresql.org
Subject: RE: impact of SPECTRE and MELTDOWN patches

Hi,

Which HW have you experimented with?

Thx/Regards

Cordialement,
Tony Reix
tony.r...@atos.net
ATOS / Bull SAS
ATOS Expert
IBM Coop Architect & Technical Leader
Office : +33 (0) 4 76 29 72 67
1 rue de Provence - 38432 Échirolles - France
www.atos.net

From: ROS Didier <didier@edf.fr>
Sent: Thursday, September 20, 2018 09:23
To: pgsql-hack...@postgresql.org
Subject: impact of SPECTRE and MELTDOWN patches

Hi


Participate in GCI as a Mentor

2018-09-22 Thread Tahir Ramzan
Hello,

I would like to join GCI as a mentor. Please guide me through the
procedure; thanks in advance.

-- 
Regards
Tahir Ramzan
MSCS Research Scholar
Google Summer of Code 2015 (CiviCRM)
Google Summer of Code 2016 (ModSecurity)
Outside Collaborator of SpiderLabs (Powered by TrustWave)
Google Android Students Club Facilitator and Organizer 2015

Contact:

+92-312-5518018 <+92%20312%205518018>

tahirram...@alumni.vu.edu.pk


More details about me and my work:

GitHub Profile: https://github.com/tahirramzan

LinkedIn Profile: https://pk.linkedin.com/in/tahirramzan


Re: [HACKERS] Bug in to_timestamp().

2018-09-22 Thread Alexander Korotkov
On Thu, Sep 20, 2018 at 3:52 PM Alexander Korotkov
 wrote:
>
> On Thu, Sep 20, 2018 at 6:09 AM amul sul  wrote:
> > Agreed, thanks for working on this.
>
> Pushed, thanks.

Please, find attached patch improving documentation about
letters/digits in to_date()/to_timestamp() template string.  I think
it needs review from native English speaker.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


to_timestamp_letters_digits.patch
Description: Binary data


testing pg_dump against very old versions

2018-09-22 Thread Andrew Dunstan



In the interest of advancing $subject, I recently started a little 
skunkworks project to get old postgres running on modern systems so we 
could test if we'd broken backwards compatibility somehow. This was 
given a fillip a few days ago when my colleague Gianni Ciolli complained 
that it uses array syntax that isn't valid in 7.3 for the -T option. So 
here is the result. Essentially I set up a (barely workable) Fedora Core 
2 VM and built Postgres 7.2.8 there. Then I packed up the binaries and 
data directory and tried them on a modern system (Fedora 28). It turns 
out they need a few old libraries, but apart from that it works. So I 
have packaged all this up in a Vagrant setup, which is available at 



Next I'm going to work on a Docker image for this. I think that should 
be fairly simple now I have this piece down.


Then I will start adding other old versions.

Meanwhile, it would be good for people to think about creating a TAP 
testing regime for this.



cheers


andrew


--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: PATCH: pgbench - option to build using ppoll() for larger connection counts

2018-09-22 Thread Tom Lane
Fabien COELHO  writes:
> The patch was not applying cleanly anymore for me, so here is a rebase of 
> your latest version.

The cfbot doesn't like that patch, probably because of the Windows newlines.
Here's a version with regular newlines, and some cosmetic cleanup in the
configure infrastructure.

I haven't looked at the pgbench changes proper yet, but I did quickly
test building on FreeBSD 11, which has ppoll, and it falls over:

pgbench.c:6080:69: error: use of undeclared identifier 'POLLRDHUP'
  ...== -1 || (PQsocket(con) >= 0 && !(sa[idx].revents & POLL_UNWANTED)))
 ^
pgbench.c:6059:24: note: expanded from macro 'POLL_UNWANTED'
#define POLL_UNWANTED (POLLRDHUP|POLLERR|POLLHUP|POLLNVAL)
   ^
pgbench.c:6085:42: error: use of undeclared identifier 'POLLRDHUP'
errno, sa[idx].fd, (sa[idx].revents & POLL_UNWANTED));
  ^
pgbench.c:6059:24: note: expanded from macro 'POLL_UNWANTED'
#define POLL_UNWANTED (POLLRDHUP|POLLERR|POLLHUP|POLLNVAL)
   ^
pgbench.c:6107:19: error: use of undeclared identifier 'POLLRDHUP'
sa[idx].events = POLL_EVENTS;
 ^
pgbench.c:6057:22: note: expanded from macro 'POLL_EVENTS'
#define POLL_EVENTS (POLLRDHUP|POLLIN|POLLPRI)
 ^
3 errors generated.
make[3]: *** [: pgbench.o] Error 1


I'm strongly tempted to just remove the POLL_UNWANTED business
altogether, as it seems both pointless and unportable on its face.
Almost by definition, we can't know what "other" bits a given
implementation might set.

I'm not entirely following the point of including POLLRDHUP in
POLL_EVENTS, either.  What's wrong with the traditional solution
of detecting EOF?

regards, tom lane

diff --git a/configure b/configure
index 9b30402..21ecd29 100755
*** a/configure
--- b/configure
*** fi
*** 15093,15099 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
--- 15093,15099 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! for ac_func in cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l
  do :
as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
  ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
diff --git a/configure.in b/configure.in
index 2e60a89..8fe6894 100644
*** a/configure.in
--- b/configure.in
*** PGAC_FUNC_WCSTOMBS_L
*** 1562,1568 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
--- 1562,1568 
  LIBS_including_readline="$LIBS"
  LIBS=`echo "$LIBS" | sed -e 's/-ledit//g' -e 's/-lreadline//g'`
  
! AC_CHECK_FUNCS([cbrt clock_gettime fdatasync getifaddrs getpeerucred getrlimit mbstowcs_l memmove poll posix_fallocate ppoll pstat pthread_is_threaded_np readlink setproctitle setproctitle_fast setsid shm_open symlink sync_file_range utime utimes wcstombs_l])
  
  AC_REPLACE_FUNCS(fseeko)
  case $host_os in
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 41b756c..3d378db 100644
*** a/src/bin/pgbench/pgbench.c
--- b/src/bin/pgbench/pgbench.c
***
*** 45,53 
--- 45,62 
  #include 
  #include 
  #include 
+ #ifndef PGBENCH_USE_SELECT			/* force use of select(2)? */
+ #ifdef HAVE_PPOLL
+ #define POLL_USING_PPOLL
+ #include <poll.h>
+ #endif
+ #endif
+ #ifndef POLL_USING_PPOLL
+ #define POLL_USING_SELECT
  #ifdef HAVE_SYS_SELECT_H
  #include <sys/select.h>
  #endif
+ #endif
  
  #ifdef HAVE_SYS_RESOURCE_H
  #include <sys/resource.h>		/* for getrlimit */
*** static int	pthread_join(pthread_t th, vo
*** 92,104 
  
  /
   * some configurable parameters */
! 
! /* max number of clients allowed */
  #ifdef FD_SETSIZE
! #define MAXCLIENTS	(FD_SETSIZE - 10)
  #else
! #define MAXCLIENTS	1024
  #endif
  
  #define DEFAULT_INIT_STEPS "dtgvp"	/* default -I setting */
  
--- 101,119 
  
  

Re: pg_atomic_exchange_u32() in ProcArrayGroupClearXid()

2018-09-22 Thread Alexander Korotkov
On Sat, Sep 22, 2018 at 9:55 AM Amit Kapila  wrote:

> On Fri, Sep 21, 2018 at 11:06 PM Alexander Korotkov
>  wrote:
> > While investigating ProcArrayGroupClearXid() code I wonder why do we
> have this loop instead of plain pg_atomic_exchange_u32() call?
>
> We can use pg_atomic_exchange_u32 instead of a loop.  In fact, clog
> code uses pg_atomic_exchange_u32 for the same purpose.  I think it is
> better to make the code consistent at both places.  Feel free to
> change it, otherwise, I can take care of it in a few days time.
>

Thank you for feedback.  Pushed.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: New function pg_stat_statements_reset_query() to reset statistics of a specific query

2018-09-22 Thread Amit Kapila
On Mon, Jul 9, 2018 at 11:28 AM Haribabu Kommi  wrote:
>
> On Mon, Jul 9, 2018 at 3:39 PM Haribabu Kommi  
> wrote:
>>
>>
>> On Mon, Jul 9, 2018 at 12:24 PM Michael Paquier  wrote:
>>>
>>> On Fri, Jul 06, 2018 at 05:10:18PM -0400, Alvaro Herrera wrote:
>>> > Ugh, it's true :-(
>>> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=25fff40798fc4ac11a241bfd9ab0c45c085e2212#patch8
>>> >
>>> > Dave, Simon, any comments?
>>>
>>> The offending line:
>>> contrib/pg_stat_statements/pg_stat_statements--1.4--1.5.sql:
>>> GRANT EXECUTE ON FUNCTION pg_stat_statements_reset() TO pg_read_all_stats;
>>>
>>> This will need a new version bump down to REL_10_STABLE...
>>
>>
>> Hearing no objections, attached patch removes all permissions from PUBLIC
>> as before this change went in. Or do we need to add command for revoke only
>> from pg_read_all_stats?
>
>
> Revoke all doesn't work, so patch updated with revoke from pg_read_all_stats.
>

The other questionable part of that commit (25fff40798) is that it
changes permissions for function pg_stat_statements_reset at SQL level
and for C function it changes the permission check for
pg_stat_statements, refer below change.

@@ -1391,7 +1392,7 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo,
 	MemoryContext per_query_ctx;
 	MemoryContext oldcontext;
 	Oid			userid = GetUserId();
-	bool		is_superuser = superuser();
+	bool		is_allowed_role = false;
 	char	   *qbuffer = NULL;
 	Size		qbuffer_size = 0;
 	Size		extent = 0;
@@ -1399,6 +1400,9 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo,
 	HASH_SEQ_STATUS hash_seq;
 	pgssEntry  *entry;
 
+	/* Superusers or members of pg_read_all_stats members are allowed */
+	is_allowed_role = is_member_of_role(GetUserId(),
+										DEFAULT_ROLE_READ_ALL_STATS);

Am I confused here?  In any case, I think it is better to start a
separate thread to discuss this issue.  It might help us in getting
more attention on this issue and we can focus on your proposed patch
in this thread.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: Adding a note to protocol.sgml regarding CopyData

2018-09-22 Thread Amit Kapila
On Mon, Sep 10, 2018 at 3:54 PM Tatsuo Ishii  wrote:
>
> >> Hello Bradley & Tatsuo-san,
> >>
>  ... references to the protocol version lacks homogeneity.
>  ... I'd suggest to keep "the vX.0 protocol" for a short version,
>  and "the version X.0 protocol" for long ...
> >>>
> >>> I agree. Change made.
> >>
> >> Patch applies cleanly. Doc build ok.
> >>
> >> One part talks about "terminating line", the other of "termination
> >> line". I wonder whether one is better than the other?
> >
> > I am not a native English speaker so I may be wrong... for me, current
> > usage of "terminating line", and "termination line" looks correct. The
> > former refers to concrete example thus uses "the", while the latter
> > refers to more general case thus uses "an".
> >
> > BTW, I think the patch should apply to master and REL_11_STABLE
> > branches at least. But should this be applied to other back branches
> > as well?
>
> I have marked this as "ready for committer".
>

My first thought on this patch is that why do we want to duplicate the
same information in different words at three different places.  Why
can't we just extend the current Note where it is currently with some
more information about CopyDone message and then add a reference to
that section in other two places?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: Proposal for disk quota feature

2018-09-22 Thread Pavel Stehule
On Sat, Sep 22, 2018 at 8:48 AM Hubert Zhang  wrote:

> But it looks like redundant to current GUC configuration and limits
>
> what do you mean by current GUC configuration? Is that the general block
> number limit in your patch? If yes, the difference between GUC and
> pg_diskquota catalog is that pg_diskquota will store different quota limit
> for the different role, schema or table instead of a single GUC value.
>

Storage is not relevant at this moment.

It does not seem consistent to set some limits via the SET command or
ALTER X SET, and others with CREATE QUOTA ON.

Quotas, object limits, and resource limits are pretty useful and
necessary, but I don't see them as a new type of object; they are much
more a property of existing objects. Because we already have one syntax
for this purpose, I prefer it; it is not good to have two syntaxes for a
similar purpose.

So instead of CREATE DISK QUOTA ON SCHEMA xxx some value, I prefer

ALTER SCHEMA xxx SET disc_quota = xxx;

The functionality is +/- the same, but ALTER XX SET was introduced first,
and I don't feel comfortable adding new syntax for a similar purpose.

Regards

Pavel





>
> On Sat, Sep 22, 2018 at 11:17 AM Pavel Stehule 
> wrote:
>
>>
>>
>> On Fri, Sep 21, 2018 at 4:21 PM Hubert Zhang  wrote:
>>
>>> just fast reaction - why QUOTA object?
 Isn't ALTER SET enough?
 Some like
 ALTER TABLE a1 SET quote = 1MB;
 ALTER USER ...
 ALTER SCHEMA ..
 New DDL commans looks like too hard hammer .
>>>
>>>
>>> It's an option. Prefer to consider quota setting store together:
>>> CREATE DISK QUOTA way is more nature to store quota setting in a
>>> separate pg_diskquota catalog
>>> While ALTER SET way is more close to store quota setting in pg_class,
>>> pg_role, pg_namespace. etc in an integrated way.
>>> (Note that here I mean nature/close is not must, ALTER SET could also
>>> store in pg_diskquota and vice versa.)
>>>
>>
>> I have not a problem with new special table for storing this information.
>> But it looks like redundant to current GUC configuration and limits. Can be
>> messy do some work with ALTER ROLE, and some work via CREATE QUOTE.
>>
>> Regards
>>
>> Pavel
>>
>>
>>> Here are some differences I can think of:
>>> 1 pg_role is a global catalog, not per database level. It's harder to
>>> tracker the user's disk usage in the whole clusters(considering 1000+
>>> databases).  So the semantic of  CREATE DISK QUOTA ON USER is limited: it
>>> only tracks the user's disk usage inside the current database.
>>> 2 using separate pg_diskquota could add more field except for quota
>>> limit without adding too many fields in pg_class, e.g. red zone to give the
>>> user a warning or the current disk usage of the db objects.
>>>
>>> On Fri, Sep 21, 2018 at 8:01 PM Pavel Stehule 
>>> wrote:
>>>


 On Fri, Sep 21, 2018 at 1:32 PM Hubert Zhang  wrote:

>
>
>
>
> Hi all,
>
> We redesigned the disk quota feature based on the comments from Pavel
> Stehule and Chapman Flack. Here is the new design.
>
> Overview
>
> Basically, the disk quota feature is used to support multi-tenancy
> environments: different levels of database objects can be given a quota
> limit to avoid overuse of disk space. A common case could be as follows:
> the DBA enables disk quota on a specified list of databases and sets
> disk quota limits for tables/schemas/roles in those databases. A
> separate disk quota worker process monitors the disk usage of these
> objects and detects the ones which exceed their quota limit. Queries
> loading data into these "out of disk quota" tables/schemas/roles will be
> cancelled.
>
> We are currently at the initial implementation stage. We would like to
> propose our idea first and get feedback from the community to iterate
> quickly.
>
> SQL Syntax (How to use disk quota)
>
> 1. Specify the databases with disk quota enabled in the GUC
>    "diskquota_databases" in postgresql.conf and restart the database.
> 2. The DBA can set a disk quota limit for a table/schema/role:
>    CREATE DISK QUOTA tablea1 ON TABLE a1 with (quota = '1MB');
>    CREATE DISK QUOTA roleu1 ON USER u1 with (quota = '1GB');
>    CREATE DISK QUOTA schemas1 ON SCHEMA s1 with (quota = '3MB');
>

 just fast reaction - why QUOTA object?

 Isn't ALTER SET enough?

 Some like

 ALTER TABLE a1 SET quota = 1MB;
 ALTER USER ...
 ALTER SCHEMA ..

 New DDL commands look like too hard a hammer.



>
> 3. Simulate a schema out-of-quota case: suppose tables a1 and a2 are
> both under schema s1.
> INSERT INTO a1 SELECT generate_series(1,1000);
> INSERT INTO a2 SELECT generate_series(1,300);
> SELECT pg_sleep(5);
> INSERT INTO a1 SELECT generate_series(1,1000);
> ERROR:  schema's disk space quota exceeded
> DROP TABLE 

Re: pg_atomic_exchange_u32() in ProcArrayGroupClearXid()

2018-09-22 Thread Amit Kapila
On Fri, Sep 21, 2018 at 11:06 PM Alexander Korotkov
 wrote:
>
> Hi!
>
> While investigating the ProcArrayGroupClearXid() code, I wonder why we
> have this loop instead of a plain pg_atomic_exchange_u32() call?
>

We can use pg_atomic_exchange_u32 instead of a loop.  In fact, the clog
code uses pg_atomic_exchange_u32 for the same purpose.  I think it is
better to make the code consistent in both places.  Feel free to
change it; otherwise, I can take care of it in a few days.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: PATCH: psql tab completion for SELECT

2018-09-22 Thread Edmund Horner
Hi all,

Here are some rebased versions of the last two patches.  No changes in
functionality, except a minor case sensitivity fix in the "completion
after commas" patch.

Edmund


01-psql-select-tab-completion-v8.patch
Description: Binary data


02-select-completion-after-commas.patch
Description: Binary data


Re: Proposal for disk quota feature

2018-09-22 Thread Hubert Zhang
>
> But it looks redundant with the current GUC configuration and limits

What do you mean by the current GUC configuration? Is that the general
block-number limit in your patch? If so, the difference between a GUC and
the pg_diskquota catalog is that pg_diskquota stores a separate quota
limit for each role, schema, or table, instead of a single GUC value.

On Sat, Sep 22, 2018 at 11:17 AM Pavel Stehule 
wrote:

>
>
> On Fri, Sep 21, 2018 at 4:21 PM, Hubert Zhang wrote:
>
>> just fast reaction - why QUOTA object?
>>> Isn't ALTER SET enough?
>>> Some like
>>> ALTER TABLE a1 SET quota = 1MB;
>>> ALTER USER ...
>>> ALTER SCHEMA ..
>>> New DDL commands look like too hard a hammer.
>>
>>
>> It's an option. We prefer to consider where the quota settings are
>> stored: the CREATE DISK QUOTA way naturally stores quota settings in a
>> separate pg_diskquota catalog, while the ALTER SET way is closer to
>> storing quota settings in pg_class, pg_role, pg_namespace, etc. in an
>> integrated way. (Note that natural/close is not a must; ALTER SET could
>> also store into pg_diskquota, and vice versa.)
>>
>
> I have no problem with a new special table for storing this information.
> But it looks redundant with the current GUC configuration and limits. It
> can be messy to do some work with ALTER ROLE and some via CREATE QUOTA.
>
> Regards
>
> Pavel
>
>
>> Here are some differences I can think of:
>> 1. pg_role is a global catalog, not per-database. It's harder to track
>> a user's disk usage across the whole cluster (considering 1000+
>> databases), so the semantics of CREATE DISK QUOTA ON USER are limited:
>> it only tracks the user's disk usage inside the current database.
>> 2. A separate pg_diskquota catalog could hold more fields besides the
>> quota limit without adding too many fields to pg_class, e.g. a red zone
>> to warn the user, or the current disk usage of the db objects.
>>
>> On Fri, Sep 21, 2018 at 8:01 PM Pavel Stehule 
>> wrote:
>>
>>>
>>>
>>> On Fri, Sep 21, 2018 at 1:32 PM, Hubert Zhang wrote:
>>>




 Hi all,

 We redesigned the disk quota feature based on the comments from Pavel
 Stehule and Chapman Flack. Here is the new design.

 Overview
 Basically, the disk quota feature supports multi-tenancy environments:
 different levels of database objects can be given a quota limit to avoid
 overuse of disk space. A common case is as follows: the DBA enables disk
 quota on a specified list of databases, then sets disk quota limits for
 tables/schemas/roles in those databases. A separate disk quota worker
 process monitors the disk usage of these objects and detects the objects
 which exceed their quota limits. Queries loading data into these "out of
 disk quota" tables/schemas/roles will be cancelled.

 We are currently at the initial implementation stage. We would like to
 propose our idea first and get feedback from the community for quick
 iteration.

 SQL Syntax (how to use disk quota)
 1. Specify the databases with disk quota enabled in the GUC
 "diskquota_databases" in postgresql.conf and restart the database.
 2. The DBA can set a disk quota limit for a table/schema/role:
 CREATE DISK QUOTA tablea1 ON TABLE a1 WITH (quota = '1MB');
 CREATE DISK QUOTA roleu1 ON USER u1 WITH (quota = '1GB');
 CREATE DISK QUOTA schemas1 ON SCHEMA s1 WITH (quota = '3MB');

>>>
>>> just fast reaction - why QUOTA object?
>>>
>>> Isn't ALTER SET enough?
>>>
>>> Some like
>>>
>>> ALTER TABLE a1 SET quota = 1MB;
>>> ALTER USER ...
>>> ALTER SCHEMA ..
>>>
>>> New DDL commands look like too hard a hammer.
>>>
>>>
>>>

 3. Simulate a schema out-of-quota case: suppose tables a1 and a2 are
 both under schema s1.
 INSERT INTO a1 SELECT generate_series(1,1000);
 INSERT INTO a2 SELECT generate_series(1,300);
 SELECT pg_sleep(5);
 INSERT INTO a1 SELECT generate_series(1,1000);
 ERROR:  schema's disk space quota exceeded
 DROP TABLE a2;
 SELECT pg_sleep(5);
 INSERT INTO a1 SELECT generate_series(1,1000);
 INSERT 0 1000

 Architecture
 Disk quota has the following components.

 1. Quota Setting Store is where the disk quota settings are stored and
 accessed. We plan to use a catalog table pg_diskquota to store this
 information. pg_diskquota looks like:

 CATALOG(pg_diskquota,6122,DiskQuotaRelationId)
 {
     NameData quotaname;     /* diskquota name */
     int16 quotatype;        /* diskquota type */
     Oid quotatargetoid;     /* diskquota target db object oid */
     int32 quotalimit;       /* diskquota size limit in MB */
     int32 quotaredzone;     /* diskquota red zone in MB */
 } FormData_pg_diskquota;

 2. Quota Change Detector is the monitor of size changes of database
 objects. We plan to use the stats collector to detect the 'active' table
 list at the initial stage. But the stats collector has some limitations
 in finding an active table which is in a 

Re: Changing the setting of wal_sender_timeout per standby

2018-09-22 Thread Michael Paquier
On Fri, Sep 21, 2018 at 06:26:19AM +, Tsunakawa, Takayuki wrote:
> Agreed.

Okay, I have pushed the patch with all your suggestions included.
--
Michael


signature.asc
Description: PGP signature


Re: [HACKERS] proposal: schema variables

2018-09-22 Thread Pavel Stehule
Hi

rebased against yesterday changes in tab-complete.c

Regards

Pavel


schema-variables-20180922-01.patch.gz
Description: application/gzip