Re: [HACKERS] Patch: make behavior of all versions of the "isinf" function be similar

2016-01-31 Thread Michael Paquier
On Mon, Feb 1, 2016 at 4:04 PM, Tom Lane  wrote:

> 1. I don't think the buildfarm is sufficient evidence to conclude that
> isinf.c is required nowhere.  It was in use as late as 2004, judging
> by the git history, and I don't know of good reason to assume we do not
> need it now.
>

This was 12 years ago... Btw, after a bit of digging, I found out that even
SunOS 5.1 includes it:
http://www.unix.com/man-page/sunos/3m/isinf/
-- 
Michael


Re: [HACKERS] pgbench stats per script & other stuff

2016-01-31 Thread Michael Paquier
On Fri, Jan 29, 2016 at 11:28 PM, Fabien COELHO  wrote:
> Here is a rebase of the 3 remaining parts:
>  - 15-c: per script stats
>  - 15-d: weighted scripts
>  - 15-e: prefix selection for -b

Regarding patch d.
+   /* compute total_weight */
+   for (i = 0; i < num_scripts; i++)
+   total_weight += sql_script[i].weight;
total_weight can overflow :) I don't think that's worth worrying about; I am
just noting it.
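A guard would be trivial in any case; here is a standalone sketch, assuming
non-negative int64 weights (the names are made up, not the patch's):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* sketch only: sum script weights, refusing to wrap around int64 */
static int64_t
sum_weights(const int64_t *weights, int num_scripts)
{
    int64_t     total_weight = 0;

    for (int i = 0; i < num_scripts; i++)
    {
        if (weights[i] > INT64_MAX - total_weight)
        {
            fprintf(stderr, "total script weight overflows int64\n");
            exit(1);
        }
        total_weight += weights[i];
    }
    return total_weight;
}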

+The provided scriptname needs only be an unambiguous
+prefix of the builtin name, hence si would be enough to
+select simple-update.
[...]
-   if (strncmp(builtin_script[i].name, name,
-   strlen(builtin_script[i].name)) == 0)
+   if (strncmp(builtin_script[i].name, name, len) == 0)
I agree with Alvaro here: this should remain unchanged. It seems to be
a rebase mistake.
-- 
Michael




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-31 Thread Amit Kapila
On Sat, Jan 30, 2016 at 2:15 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
>
> On Sat, Jan 30, 2016 at 10:58 AM, Amit Kapila 
wrote:
>>
>> On Fri, Jan 29, 2016 at 6:47 PM, Robert Haas 
wrote:
>> >
>> > On Fri, Jan 29, 2016 at 6:59 AM, Alexander Korotkov
>> >  wrote:
>> > > On Thu, Jan 21, 2016 at 12:37 AM, Alvaro Herrera <
alvhe...@2ndquadrant.com>
>> > > wrote:
>> > >> So far as I can tell, there are three patches in flight here:
>> > >>
>> > >> * replication slot IO lwlocks
>> > >> * ability of extensions to request tranches dynamically
>> > >> * PGPROC
>> > >>
>> > >> The first one hasn't been reviewed at all, but the other two have
seen a
>> > >> bit of discussion and evolution.  Is anyone doing any more
reviewing?
>> > >
>> > > I'd like to add another one: fixed tranche id for each SLRU.
>> >
>> > +1 for this.  The patch looks good and I will commit it if nobody
objects.
>> >
>>
>> +1. Patch looks good to me as well, but I have one related question:
>> Is there a reason why we should not assign ReplicationOrigins a
>> fixed tranche id  and then we might want to even get away with
>> LWLockRegisterTranche()?
>
>
> +1. I think we should do this.
>

Okay. The attached patch assigns a fixed tranche id for ReplicationOrigins.
I don't think we can remove LWLockRegisterTranche(), as it will still be
required for assigning tranche ids for extensions, so I didn't change that
part of the code.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fixed_tranchid_repl_origin_v1.patch
Description: Binary data



Re: [HACKERS] Patch: make behavior of all versions of the "isinf" function be similar

2016-01-31 Thread Tom Lane
Michael Paquier  writes:
> On Mon, Feb 1, 2016 at 2:38 PM, Michael Paquier
>  wrote:
>> Actually, is there a reason to keep this file in the code
>> tree? Are there platforms that do not have isinf()? Even for Windows
>> environments using MSVC < 1800 this is emulated using _fpclass.

> Looking at what is in the buildfarm, I noticed that isinf is provided
> everywhere. Attached is a patch. Thoughts?

Two comments:

1. I don't think the buildfarm is sufficient evidence to conclude that
isinf.c is required nowhere.  It was in use as late as 2004, judging
by the git history, and I don't know of good reason to assume we do not
need it now.

2. POSIX:2008 only requires that "The isinf() macro shall return a
non-zero value if and only if its argument has an infinite value."
Therefore, any assumption about the sign of the result is wrong anyway,
and any patch that depends on it will be rejected, regardless of what
isinf.c does.  Or else, if you can convince us that such an assumption
is really valuable, we'd need isinf.c more not less so that we can
replace isinf() on platforms where it doesn't meet the stronger spec.
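In other words, portable callers have to take the sign from the value itself,
not from isinf()'s result; a sketch of the POSIX-safe idiom:

#include <math.h>

/*
 * POSIX only promises that isinf() is non-zero for an infinity, so derive
 * the sign from the argument rather than from the return value.
 */
static int
infinity_sign(double x)
{
    if (!isinf(x))
        return 0;               /* finite or NaN */
    return (x > 0.0) ? 1 : -1;
}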

regards, tom lane




Re: [HACKERS] extend pgbench expressions with functions

2016-01-31 Thread Michael Paquier
On Fri, Jan 29, 2016 at 5:16 AM, Fabien COELHO <
fabien.coe...@mines-paristech.fr> wrote:
> v22 compared to previous:

Thanks for the new patch!

>  - remove the short macros (although IMO it is a code degradation)

FWIW, I like this suggestion from Robert.

>  - check for INT64_MIN / -1 (although I think it is useless)
>  - try not to remove/add blanks lines
>  - let some assert "as is"
>  - still exit on float to int overflow, see arguments in other mails

+make_op(const char *operator, PgBenchExpr *lexpr, PgBenchExpr *rexpr)
+{
+   return make_func(find_func(operator),
+                    /* beware that the list is reversed in make_func */
+                    make_elist(rexpr, make_elist(lexpr, NULL)));
+}
I think this should take the function ID as argument, say PGBENCH_ADD or
similar, instead of calling find_func() with the operator string. This saves
a couple of instructions.
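Concretely, something like this is what I have in mind (just a sketch; it
reuses the patch's make_func() and make_elist(), assumes make_func() is
adjusted to take the ID directly, and the PgBenchFunction type name is made
up):

static PgBenchExpr *
make_op(PgBenchFunction function, PgBenchExpr *lexpr, PgBenchExpr *rexpr)
{
    /* beware that the list is reversed in make_func */
    return make_func(function, make_elist(rexpr, make_elist(lexpr, NULL)));
}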

+ * - nargs: number of arguments (-1 is a special value for min & max)
My fault perhaps, it may be better to mention here that -1 means that min
and max need at least one argument, and that the number of arguments is not
fixed.

+   case PGBENCH_MOD:
+       if (coerceToInt(&rval) == 0)
        {
            fprintf(stderr, "division by zero\n");
            return false;
        }
-       *retval = lval % rval;
+       if (coerceToInt(&lval) == INT64_MIN && coerceToInt(&rval) == -1)
+       {
+           fprintf(stderr, "cannot divide INT64_MIN by -1\n");
+           return false;
+       }
For mod() there is no need to have an error, returning 0 is fine. You can
actually do it unconditionally when rval == -1.
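Roughly like this (a sketch against the patch's own helpers; setIntValue is
assumed to exist alongside setDoubleValue):

case PGBENCH_MOD:
    if (coerceToInt(&rval) == 0)
    {
        fprintf(stderr, "division by zero\n");
        return false;
    }
    /* INT64_MIN % -1 overflows on some platforms; the result is 0 anyway */
    if (coerceToInt(&rval) == -1)
        setIntValue(retval, 0);
    else
        setIntValue(retval, coerceToInt(&lval) % coerceToInt(&rval));
    return true;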

+   setDoubleValue(retval, d < 0.0? -d: d);
Nitpick: missing spaces around the question mark and colon.

+typedef enum
+{
+   PGBT_NONE,
+   PGBT_INT,
+   PGBT_DOUBLE
+} PgBenchValueType;
NONE is used nowhere, but I think that you could use it for an assertion
check here:
+   if (retval->type == PGBT_INT)
+       fprintf(stderr, "int " INT64_FORMAT "\n", retval->u.ival);
+   else if (retval->type == PGBT_DOUBLE)
+       fprintf(stderr, "double %f\n", retval->u.dval);
+   else
+       fprintf(stderr, "none\n");
Just replace the "none" message by Assert(type != PGBT_NONE) for example.

Another remaining item: should support for \setrandom be dropped? As the
patch is presented this remains intact.
-- 
Michael


Re: [HACKERS] Several problems in tab-completions for SET/RESET

2016-01-31 Thread Michael Paquier
On Mon, Feb 1, 2016 at 1:21 PM, Fujii Masao  wrote:
> On Fri, Jan 29, 2016 at 1:02 PM, Michael Paquier
>  wrote:
>> On Fri, Jan 29, 2016 at 11:53 AM, Fujii Masao  wrote:
>>> I removed the above and added the following for that case.
>>>
>>> +/* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET  */
>>> +else if (Matches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") &&
>>> + TailMatches2("SET", MatchAny))
>>> +COMPLETE_WITH_LIST2("FROM CURRENT", "TO");
>>>
>>> Attached is the updated version of the patch.
>
> Thanks for the review!
>
>> "ALTER FUNCTION foo(bar)" suggests OWNER TO, RENAME TO and SET SCHEMA.
>> I think that we had better suggest SET instead of SET SCHEMA, and
>> add SCHEMA in the list of things suggested by SET.
>
> Maybe, and it should suggest other keywords like RESET. So it would be better
> to overhaul the tab-completion of ALTER FUNCTION as a whole, but that's not
> the task of this patch. IMO it's better to fix that as a separate patch.

Er, OK... I thought that both problems seem rather linked per the
$subject but I can send an extra patch on this thread if necessary.
Never mind.

>> "ALTER DATABASE foodb SET foo_param" should suggest TO/= but that's
>> not the case. After adding TO/= manually, a list of values is
>> suggested though. Same problem with ALTER ROLE and ALTER FUNCTION.
>
> Fixed. Attached is the updated version of the patch.

+   /* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET  */
+   else if (HeadMatches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") &&
+TailMatches2("SET", MatchAny))
+   COMPLETE_WITH_LIST2("FROM CURRENT", "TO");
Small thing: adding "=" to the list of things that can be completed?
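That is, something like this (sketch; COMPLETE_WITH_LIST3 assumed to be
available like the other list macros):

    /* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET <name> */
    else if (HeadMatches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") &&
             TailMatches2("SET", MatchAny))
        COMPLETE_WITH_LIST3("FROM CURRENT", "TO", "=");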

Except that the rest looks fine to me.
-- 
Michael




Re: [HACKERS] Patch: make behavior of all versions of the "isinf" function be similar

2016-01-31 Thread Michael Paquier
On Mon, Feb 1, 2016 at 2:38 PM, Michael Paquier
 wrote:
> Actually, is there a reason to keep this file in the code
> tree? Are there platforms that do not have isinf()? Even for Windows
> environments using MSVC < 1800 this is emulated using _fpclass.

Looking at what is in the buildfarm, I noticed that isinf is provided
everywhere. Attached is a patch. Thoughts?
-- 
Michael
diff --git a/configure b/configure
index 3dd1b15..c9dd5b8 100755
--- a/configure
+++ b/configure
@@ -12633,63 +12633,6 @@ cat >>confdefs.h <<_ACEOF
 _ACEOF
 
 
-
-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for isinf" >&5
-$as_echo_n "checking for isinf... " >&6; }
-if ${ac_cv_func_isinf+:} false; then :
-  $as_echo_n "(cached) " >&6
-else
-  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-/* end confdefs.h.  */
-
-#include <math.h>
-double glob_double;
-
-int
-main ()
-{
-return isinf(glob_double) ? 0 : 1;
-  ;
-  return 0;
-}
-_ACEOF
-if ac_fn_c_try_link "$LINENO"; then :
-  ac_cv_func_isinf=yes
-else
-  ac_cv_func_isinf=no
-fi
-rm -f core conftest.err conftest.$ac_objext \
-conftest$ac_exeext conftest.$ac_ext
-fi
-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_func_isinf" >&5
-$as_echo "$ac_cv_func_isinf" >&6; }
-
-if test $ac_cv_func_isinf = yes ; then
-
-$as_echo "#define HAVE_ISINF 1" >>confdefs.h
-
-else
-  case " $LIBOBJS " in
-  *" isinf.$ac_objext "* ) ;;
-  *) LIBOBJS="$LIBOBJS isinf.$ac_objext"
- ;;
-esac
-
-  # Look for a way to implement a substitute for isinf()
-  for ac_func in fpclass fp_class fp_class_d class
-do :
-  as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
-ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
-if eval test \"x\$"$as_ac_var"\" = x"yes"; then :
-  cat >>confdefs.h <<_ACEOF
-#define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1
-_ACEOF
- break
-fi
-done
-
-fi
-
 ac_fn_c_check_func "$LINENO" "crypt" "ac_cv_func_crypt"
 if test "x$ac_cv_func_crypt" = xyes; then :
   $as_echo "#define HAVE_CRYPT 1" >>confdefs.h
diff --git a/configure.in b/configure.in
index 9398482..5c75fea 100644
--- a/configure.in
+++ b/configure.in
@@ -1502,25 +1502,6 @@ fi
 
 AC_CHECK_DECLS([snprintf, vsnprintf])
 
-
-dnl Cannot use AC_CHECK_FUNC because isinf may be a macro
-AC_CACHE_CHECK([for isinf], ac_cv_func_isinf,
-[AC_LINK_IFELSE([AC_LANG_PROGRAM([
-#include <math.h>
-double glob_double;
-],
-[return isinf(glob_double) ? 0 : 1;])],
-[ac_cv_func_isinf=yes],
-[ac_cv_func_isinf=no])])
-
-if test $ac_cv_func_isinf = yes ; then
-  AC_DEFINE(HAVE_ISINF, 1, [Define to 1 if you have isinf().])
-else
-  AC_LIBOBJ(isinf)
-  # Look for a way to implement a substitute for isinf()
-  AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break])
-fi
-
 AC_REPLACE_FUNCS([crypt fls getopt getrusage inet_aton mkdtemp random rint srandom strerror strlcat strlcpy])
 
 case $host_os in
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 16a272e..deac60e 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -276,9 +276,6 @@
 /* Define to 1 if you have support for IPv6. */
 #undef HAVE_IPV6
 
-/* Define to 1 if you have isinf(). */
-#undef HAVE_ISINF
-
 /* Define to 1 if you have the <langinfo.h> header file. */
 #undef HAVE_LANGINFO_H
 
diff --git a/src/include/pg_config.h.win32 b/src/include/pg_config.h.win32
index 8566065..52cfbce 100644
--- a/src/include/pg_config.h.win32
+++ b/src/include/pg_config.h.win32
@@ -187,9 +187,6 @@
 /* Define to 1 if you have support for IPv6. */
 #define HAVE_IPV6 1
 
-/* Define to 1 if you have isinf(). */
-#define HAVE_ISINF 1
-
 /* Define to 1 if you have the <langinfo.h> header file. */
 /* #undef HAVE_LANGINFO_H */
 
diff --git a/src/include/port.h b/src/include/port.h
index 9fc79f4..e908340 100644
--- a/src/include/port.h
+++ b/src/include/port.h
@@ -374,10 +374,6 @@ extern int	fls(int mask);
 extern int	getpeereid(int sock, uid_t *uid, gid_t *gid);
 #endif
 
-#ifndef HAVE_ISINF
-extern int	isinf(double x);
-#endif
-
 #ifndef HAVE_MKDTEMP
 extern char *mkdtemp(char *path);
 #endif
diff --git a/src/interfaces/ecpg/ecpglib/.gitignore b/src/interfaces/ecpg/ecpglib/.gitignore
index 8ef6401..e23e8f0 100644
--- a/src/interfaces/ecpg/ecpglib/.gitignore
+++ b/src/interfaces/ecpg/ecpglib/.gitignore
@@ -7,4 +7,3 @@
 /strlcpy.c
 /thread.c
 /win32setlocale.c
-/isinf.c
diff --git a/src/interfaces/ecpg/ecpglib/Makefile b/src/interfaces/ecpg/ecpglib/Makefile
index 39c4232..a27db67 100644
--- a/src/interfaces/ecpg/ecpglib/Makefile
+++ b/src/interfaces/ecpg/ecpglib/Makefile
@@ -27,7 +27,7 @@ LIBS := $(filter-out -lpgport, $(LIBS))
 
 OBJS= execute.o typename.o descriptor.o sqlda.o data.o error.o prepare.o memory.o \
 	connect.o misc.o path.o pgstrcasecmp.o \
-	$(filter snprintf.o strlcpy.o win32setlocale.o isinf.o, $(LIBOBJS)) $(WIN32RES)
+	$(filter snprintf.o strlcpy.o win32setlocale.o, $(LIBOBJS)) $(WIN32RES)
 
 # thread.c is needed only for non-WIN32 implementation of path.c
 ifneq ($(PORTNAME), win32)
@@ -55,7 +55,7 @@ include $(top_srcdir)/src/Makefile.shlib
 # necessarily use the 

Re: [HACKERS] Patch: make behavior of all versions of the "isinf" function be similar

2016-01-31 Thread Michael Paquier
On Mon, Feb 1, 2016 at 8:13 AM, Vitaly Burovoy  wrote:
> While I was searching for a function which checks doubles for
> infinity, I discovered a function "isinf" in a file src/port/isinf.c
> where one of three versions returns different value for "-inf" ("1"
> instead of "-1") comparing to the other two.
>
> For systems with HAVE_FPCLASS the function returns the same result for
> both "+inf" and "-inf". I can't test it on my WS, but I found only one
> man page[1] where system header file "ieeefp.h" must be included for
> ability to use "fpclass" function.

That's the case on OSX, but on Linux -1 is returned for -Inf, and 1 for +Inf.

> I guess nothing will be broken if that version of the function returns
> the same results for input values as the other two.
> Proposed patch makes that behavior.

Actually, is there a reason to keep this file in the code
tree? Are there platforms that do not have isinf()? Even for Windows
environments using MSVC < 1800 this is emulated using _fpclass.
-- 
Michael




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-31 Thread Amit Kapila
On Sat, Jan 30, 2016 at 12:23 PM, Amit Kapila 
wrote:

> On Fri, Jan 29, 2016 at 6:55 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>>
>> Also couple of minor comments from me.
>>
>> I think this
>>
>> + StrNCpy(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name,
>>> tranche_name, strlen(tranche_name) + 1);
>>
>>
>> should be
>>
>> + StrNCpy(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name,
>>> tranche_name,
>>> sizeof(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name));
>>
>>
>>
> I think you are right, otherwise it might try to copy more, how
> about
>
> StrNCpy(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name,
> tranche_name, Min (strlen(tranche_name) + 1,
> sizeof(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name)));
>
>
Changed as per your suggestion, rather than using Min,
because I saw that most places in the code pass sizeof directly to
StrNCpy(), so let's be consistent.
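That is, the copy becomes (as Alexander suggested above):

StrNCpy(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name,
        tranche_name,
        sizeof(LWLockTrancheRequestArray[LWLockTrancheRequestsCount].tranche_name));

StrNCpy() already guarantees NUL termination and never copies more than the
given size, so wrapping the length in Min() would not buy anything.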


>
>
>> And as far as I know English, "it's" should be "its" in the sentence below.
>>
>> + from _PG_init.  Tranche repersents an array of LWLocks
>>> and
>>> + can be accessed by it's name.  First parameter
>>> tranche_name
>>
>>
>>
> Right, will change.
>
>
Fixed.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


separate_tranche_extensions_v3.patch
Description: Binary data



Re: [HACKERS] pglogical_output - a general purpose logical decoding output plugin

2016-01-31 Thread Michael Paquier
On Thu, Jan 7, 2016 at 4:50 PM, Craig Ringer  wrote:
> On 7 January 2016 at 01:17, Peter Eisentraut  wrote:
>> On 12/22/15 4:55 AM, Craig Ringer wrote:
>> and we could probably go through them
>> one by one and ask, why do we need this bit?  So that kind of system
>> will be very hard to review as a standalone submission.
>
> Again, I disagree. I think you're looking at this way too narrowly.
>
> I find it quite funny, actually. Here we go and produce something that's a
> nice re-usable component that other people can use in their products and
> solutions ... and all anyone does is complain that the other part required
> to use it as a canned product isn't posted yet (though it is now). But with
> BDR all anyone ever does is complain that it's too tightly coupled to the
> needs of a single product and the features extracted from it, like
> replication origins, should be more generic and general purpose so other
> people can use them in their products too. Which is it going to be?

As far as I can see, this patch leverages the current in-core
infrastructure, logical decoding, to convert the data obtained into a
set of JSON blobs sent via a custom protocol. Its maintenance load looks
minimal, which is at least a good thing.

> It would be helpful if you could take a step back and describe what *you*
> think logical replication for PostgreSQL should look like. You clearly have
> a picture in mind of what it should be, what requirements it satisfies, etc.
> If you're going to argue based on that it'd be very helpful to describe it.
> I might've missed some important points you've seen and you might've
> overlooked issues I've seen.

Personally, if I were to put a limit on what should be in core, or on what
logical replication should be from the core perspective, it would be just
to give potential consumers (understand: plugins) of this binary data (be
it pg_ddl_command or what the logical decoding context offers) a way to
access it, to allow those plugins to turn this binary data into something
suited to them, and to provide simple tools and examples to test those
things without relying on anything external. test_decoding and
test_ddl_deparse cover that already. What I find a bit disturbing about
this patch is: why would a JSON representation be able to cover all kinds
of needs? Aren't other replication solutions going to have their own data
formats and their own protocols with different requirements?

Considering all that, this module might have a happier life if managed independently.
-- 
Michael




Re: [HACKERS] Several problems in tab-completions for SET/RESET

2016-01-31 Thread Fujii Masao
On Fri, Jan 29, 2016 at 1:02 PM, Michael Paquier
 wrote:
> On Fri, Jan 29, 2016 at 11:53 AM, Fujii Masao  wrote:
>> I removed the above and added the following for that case.
>>
>> +/* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET  */
>> +else if (Matches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") &&
>> + TailMatches2("SET", MatchAny))
>> +COMPLETE_WITH_LIST2("FROM CURRENT", "TO");
>>
>> Attached is the updated version of the patch.

Thanks for the review!

> "ALTER FUNCTION foo(bar)" suggests OWNER TO, RENAME TO and SET SCHEMA.
> I think that we had better suggest SET instead of SET SCHEMA, and
> add SCHEMA in the list of things suggested by SET.

Maybe, and it should suggest other keywords like RESET. So it would be better
to overhaul the tab-completion of ALTER FUNCTION as a whole, but that's not
the task of this patch. IMO it's better to fix that as a separate patch.

> "ALTER DATABASE foodb SET foo_param" should suggest TO/= but that's
> not the case. After adding TO/= manually, a list of values is
> suggested though. Same problem with ALTER ROLE and ALTER FUNCTION.

Fixed. Attached is the updated version of the patch.

Regards,

-- 
Fujii Masao
*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
***
*** 1553,1559  psql_completion(const char *text, int start, int end)
  	else if (Matches2("ALTER", "SYSTEM"))
  		COMPLETE_WITH_LIST2("SET", "RESET");
  	/* ALTER SYSTEM SET|RESET  */
! 	else if (Matches4("ALTER", "SYSTEM", "SET|RESET", MatchAny))
  		COMPLETE_WITH_QUERY(Query_for_list_of_alter_system_set_vars);
  	/* ALTER VIEW  */
  	else if (Matches3("ALTER", "VIEW", MatchAny))
--- 1553,1559 
  	else if (Matches2("ALTER", "SYSTEM"))
  		COMPLETE_WITH_LIST2("SET", "RESET");
  	/* ALTER SYSTEM SET|RESET  */
! 	else if (Matches3("ALTER", "SYSTEM", "SET|RESET"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_alter_system_set_vars);
  	/* ALTER VIEW  */
  	else if (Matches3("ALTER", "VIEW", MatchAny))
***
*** 1754,1759  psql_completion(const char *text, int start, int end)
--- 1754,1762 
  	 */
  	else if (Matches5("ALTER", "TABLE", MatchAny, "SET", "TABLESPACE"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_tablespaces);
+ 	/* If we have ALTER TABLE  SET WITH provide OIDS */
+ 	else if (Matches5("ALTER", "TABLE", MatchAny, "SET", "WITH"))
+ 		COMPLETE_WITH_CONST("OIDS");
  	/* If we have ALTER TABLE  SET WITHOUT provide CLUSTER or OIDS */
  	else if (Matches5("ALTER", "TABLE", MatchAny, "SET", "WITHOUT"))
  		COMPLETE_WITH_LIST2("CLUSTER", "OIDS");
***
*** 2702,2708  psql_completion(const char *text, int start, int end)
  
  /* SET, RESET, SHOW */
  	/* Complete with a variable name */
! 	else if (Matches1("SET|RESET"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_set_vars);
  	else if (Matches1("SHOW"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_show_vars);
--- 2705,2711 
  
  /* SET, RESET, SHOW */
  	/* Complete with a variable name */
! 	else if (TailMatches1("SET|RESET") && !TailMatches3("UPDATE", MatchAny, "SET"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_set_vars);
  	else if (Matches1("SHOW"))
  		COMPLETE_WITH_QUERY(Query_for_list_of_show_vars);
***
*** 2743,2750  psql_completion(const char *text, int start, int end)
  	/* Complete SET  with "TO" */
  	else if (Matches2("SET", MatchAny))
  		COMPLETE_WITH_CONST("TO");
  	/* Suggest possible variable values */
! 	else if (Matches3("SET", MatchAny, "TO|="))
  	{
  		/* special cased code for individual GUCs */
  		if (TailMatches2("DateStyle", "TO|="))
--- 2746,2757 
  	/* Complete SET  with "TO" */
  	else if (Matches2("SET", MatchAny))
  		COMPLETE_WITH_CONST("TO");
+ 	/* Complete ALTER DATABASE|FUNCTION|ROLE|USER ... SET  */
+ 	else if (HeadMatches2("ALTER", "DATABASE|FUNCTION|ROLE|USER") &&
+ 			 TailMatches2("SET", MatchAny))
+ 		COMPLETE_WITH_LIST2("FROM CURRENT", "TO");
  	/* Suggest possible variable values */
! 	else if (TailMatches3("SET", MatchAny, "TO|="))
  	{
  		/* special cased code for individual GUCs */
  		if (TailMatches2("DateStyle", "TO|="))



Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2016-01-31 Thread Dilip Kumar
On Sun, Jan 31, 2016 at 11:44 AM, Dilip Kumar  wrote:

> By looking at the results with scale factor 1000 and 100, I don't see any
> reason why it will regress with scale factor 300.
>
> So I will run the test again with scale factor 300 and this time I am
> planning to run 2 cases.
> 1. when data fits in shared buffer
> 2. when data doesn't fit in shared buffer.
>

I have run the test again with scale factor 300 and found no regression; in
fact there is an improvement with the patch, like we saw at scale factor 1000.

Shared Buffer= 8GB
max_connections=150
Scale Factor=300

./pgbench  -j$ -c$ -T300 -M prepared -S postgres

Client   Base     Patch
1        19744    19382
8        125923   126395
32       31393151
64       387339   496830
128      306412   350610

Shared Buffer= 512MB
max_connections=150
Scale Factor=300

./pgbench  -j$ -c$ -T300 -M prepared -S postgres

Client   Base     Patch
1        17169    16454
8        108547   105559
32       241619   262818
64       206868   233606
128      137084   217013


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Generalizing SortSupport for text to work with char(n), bytea, and alternative opclasses

2016-01-31 Thread Andreas Karlsson

Hi,

I have reviewed this now and I think this is a useful addition even 
though these indexes are less common. Consistent behavior is worth a lot 
in my mind and this patch is reasonably small.


The patch no longer applies due to 1) oid collisions and 2) a trivial 
conflict. When I fixed those two the test suite passed.


I tested this patch by building indexes with all the types and got nice
measurable speedups.


Logically I think the patch makes sense and the code seems to be 
correct, but I have some comments on it.


- You use two names a lot "string" vs "varstr". What is the difference 
between those? Is there any reason for not using varstr consistently?


- You have a lot of renaming, as has been mentioned previously in this
thread. I do not care strongly about it either way, but it did make it
harder to spot what the patch changed. If it were my own project I would
have considered splitting the patch into two parts, one renaming
everything and another adding the new feature, but the PostgreSQL project
seems to prefer having one commit per feature. Do as you wish here.


- I think the comment about NUL bytes in varstr_abbrev_convert makes it
seem like the handling is much more complicated than it actually is. I
did not understand it after a couple of readings and had to read the
code to understand what it was talking about.


Nice work, I like your sorting patches.

Andreas




Re: [HACKERS] How can I build OSSP UUID support on Windows to avoid duplicate UUIDs?

2016-01-31 Thread 高增琦
: (

I still don't know how to build ossp-uuid on Windows with MSVC.
Saito-san's patch doesn't fix all of the errors during compilation...

I will try to combine this patch with the win32build on sf.net.

Same questions again:
How was the DLL file in the community binary built?
How can duplicate UUIDs be avoided then?

Thanks a lot.

2013-11-05 2:49 GMT+08:00 Christopher Browne :

> On Thu, Oct 31, 2013 at 3:42 PM, Robert Haas 
> wrote:
> > On Thu, Oct 31, 2013 at 2:44 PM, Garick Hamlin 
> wrote:
> >> I think using /dev/urandom directly would be surprising. At least it
> would
> >> have probably have taken me a while to figure out what was depleting the
> >> entropy pool here.
> >
> > Perhaps so; a bigger problem IMHO is that it's not portable. I think
> > the only way to solve this problem is to import (or have an option to
> > link with) a strong, sophisticated PRNG with much larger internal
> > state than pg_lrand48, which uses precisely 48 bits of internal state.
> > For this kind of thing, I'm fairly sure that we need something with
> > at least 128 bits of internal state (as wide as the random value we
> > want to generate) and I suspect it might be advantageous to have
> > something a whole lot wider, maybe a few kB.
>
> I mentioned the notion of building an entropy pool, into which one might
> add various sorts of random inputs, under separate cover...
>
> The last time I had need of a rather non-repeating RNG, I went with
> a Fibonacci-based one, namely Mersenne Twister...
>
> 
>
> The sample has 624 integers (presumably that means 624x32 bits) as
> its internal state. Apparently not terribly suitable for cryptographic
> purposes,
> but definitely highly non-repetitive, which is what we're notably
> worried about for UUIDs.
> --
> When confronted by a difficult problem, solve it by reducing it to the
> question, "How would the Lone Ranger handle this?"
>



-- 
GaoZengqi
pgf...@gmail.com
zengqi...@gmail.com


Re: [HACKERS] Template for commit messages

2016-01-31 Thread Sachin Kotwal
Sorry for being a little late.

Can we add a severity level to the patch, with only three levels (High,
Moderate, Low)?

Many of our customers might not understand the overall importance of a patch.
If we add this, people/customers can decide whether a patch is important for
them or not.
People other than the author and hackers cannot easily judge the overall
importance of a patch.

Please consider this if you feel it is important to add this parameter to the
commit message format.







On Mon, Feb 1, 2016 at 10:04 AM, Joshua D. Drake 
wrote:

> On 01/31/2016 04:34 PM, Michael Paquier wrote:
>
>> On Mon, Feb 1, 2016 at 2:44 AM, Joshua D. Drake wrote:
>>
>>> On 01/29/2016 03:05 PM, Alvaro Herrera wrote:
>>>
 Joshua D. Drake wrote:
 One of the offers is to credit them (I'm not exactly clear
 on what is the group to benefit from this, but the phrasing used in the
 meeting was "contributors to the release") by having a section somewhere
 in the release notes with a list of their names.

>>>
>>>
>>> I can see this as being a nice thing but knowing that someone is a
>>> contributor isn't hard. There is a contributor list on the website and
>>> it is
>>> obvious from mail lists, archives and simple searches who is actually
>>> participating.
>>>
>>
>> This page would need a refresh IMO. I think it has not been touched
>> for the last couple of years.
>>
>
> No doubt but if we can't bother to keep that refreshed what makes us think
> that a structured format in commit messages that magically (through a lot
> of hard work and extra parsing anyway) is going to be any more accurate?
>
> Sincerely,
>
> JD
>
>
>>
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
>
>
>



-- 

Thanks and Regards,
Sachin Kotwal


[HACKERS] Way to check whether a particular block is on the shared_buffer?

2016-01-31 Thread Kouhei Kaigai
Hello,

Do we have a reliable way to check whether a particular heap block
is already in shared buffers, without modifying anything?

Right now, ReadBuffer and ReadBufferExtended are the entry points of the
buffer manager for extensions. However, they always acquire a buffer for
the block, taking a victim buffer if necessary, regardless of the
ReadBufferMode.

It is different from what I want to do:
 1. Check whether the supplied BlockNum is already loaded in
shared buffers.
 2. If yes, the caller gets a buffer just as with the usual ReadBuffer.
 3. If not, the caller gets InvalidBuffer, with no modification of
shared buffers and no victim buffer claimed.

This would allow extensions (likely a custom scan provider) to take
different strategies for a large table scan, according to the
latest status of individual blocks.
If we don't have such an interface, it seems to me that enhancing
ReadBuffer_common and (Local)BufferAlloc is the only way
to implement the feature.
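To make the idea concrete, a hypothetical interface could look like the
following (a sketch only; no such function exists in core and the name is
made up):

/*
 * Hypothetical: return a pinned buffer if the block is already cached and
 * valid, or InvalidBuffer otherwise.  Never evicts a victim buffer and
 * never starts I/O.
 */
extern Buffer TryReadBufferIfCached(Relation reln, ForkNumber forkNum,
                                    BlockNumber blockNum);

    /* usage from a custom scan provider */
    Buffer      buf = TryReadBufferIfCached(rel, MAIN_FORKNUM, blkno);

    if (BufferIsValid(buf))
    {
        /* already in shared buffers: scan it through the regular path */
        ReleaseBuffer(buf);
    }
    else
    {
        /* not cached: hand the block to the SSD-to-GPU path instead */
    }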

Of course, we need to carefully define what a 'valid' buffer means here.
What about a buffer with BM_IO_IN_PROGRESS set? What about a buffer for
a block that still requires a storage extension (thus, no relevant
physical storage exists yet)? ... and so on.


As an aside, background of my motivation is the slide below:
http://www.slideshare.net/kaigai/sqlgpussd-english
(LT slides in JPUG conference last Dec)

I'm investigating an SSD-to-GPU direct feature on top of
the custom-scan interface. It intends to load a bunch of data
blocks from NVMe SSD into GPU RAM using P2P DMA, prior to loading
the data into CPU/RAM, so the data to be filtered out can be
preprocessed there. It only makes sense if the target blocks are
not loaded into CPU/RAM yet, because an SSD device is essentially
slower than RAM. So I would like to have a reliable way to check
the latest status of shared buffers, to know whether a particular
block is already loaded or not.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 






Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-31 Thread Amit Langote
On 2016/01/29 21:02, Rahila Syed wrote:
>> Okay, I agree that reporting just the current blkno is simple and good
>> enough. How about numbers of what we're going to report as the "Vacuuming
>> Index and Heap" phase? I guess we can still keep the scanned_index_pages
>> and index_scan_count. So we have:
>> +CREATE VIEW pg_stat_vacuum_progress AS
>> +   SELECT
>> +  S.pid,
>> +  S.relid,
>> +  S.phase,
>> +  S.total_heap_blks,
>> +  S.current_heap_blkno,
>> +  S.total_index_pages,
>> +  S.scanned_index_pages,
>> +  S.index_scan_count
>> +  S.percent_complete,
>> +   FROM pg_stat_get_vacuum_progress() AS S;
>> I guess it won't remain pg_stat_get_"vacuum"_progress(), though.
> 
> Apart from these, as suggested in [1] , finer grained reporting from index
> vacuuming phase can provide better insight. Currently we report number of
> blocks processed once at the end of vacuuming of each index.
> IIUC, what was suggested in [1] was instrumenting lazy_tid_reaped with a
> counter to count number of index tuples processed so far as lazy_tid_reaped
> is called for every index tuple to see if it matches any of the dead tuple
> tids.

Agreed. Although, as Robert already suggested, I too would vote for
counting pages, not tuples. I think there was an independent patch doing
something of that sort somewhere upthread.

Thanks,
Amit






Re: [HACKERS] Template for commit messages

2016-01-31 Thread Joshua D. Drake

On 01/31/2016 04:34 PM, Michael Paquier wrote:

On Mon, Feb 1, 2016 at 2:44 AM, Joshua D. Drake wrote:

On 01/29/2016 03:05 PM, Alvaro Herrera wrote:

Joshua D. Drake wrote:
One of the offers is to credit them (I'm not exactly clear
on what is the group to benefit from this, but the phrasing used in the
meeting was "contributors to the release") by having a section somewhere
in the release notes with a list of their names.



I can see this as being a nice thing but knowing that someone is a
contributor isn't hard. There is a contributor list on the website and it is
obvious from mail lists, archives and simple searches who is actually
participating.


This page would need a refresh IMO. I think it has not been touched
for the last couple of years.


No doubt, but if we can't be bothered to keep that refreshed, what makes us
think that a structured format in commit messages, magically processed
(through a lot of hard work and extra parsing anyway), is going to be any
more accurate?


Sincerely,

JD






--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.




Re: [HACKERS] Template for commit messages

2016-01-31 Thread Michael Paquier
On Mon, Feb 1, 2016 at 2:44 AM, Joshua D. Drake wrote:
> On 01/29/2016 03:05 PM, Alvaro Herrera wrote:
>> Joshua D. Drake wrote:
>> One of the offers is to credit them (I'm not exactly clear
>> on what is the group to benefit from this, but the phrasing used in the
>> meeting was "contributors to the release") by having a section somewhere
>> in the release notes with a list of their names.
>
>
> I can see this as being a nice thing but knowing that someone is a
> contributor isn't hard. There is a contributor list on the website and it is
> obvious from mail lists, archives and simple searches who is actually
> participating.

This page would need a refresh IMO. I think it has not been touched
for the last couple of years.
-- 
Michael




[HACKERS] Patch: make behavior of all versions of the "isinf" function be similar

2016-01-31 Thread Vitaly Burovoy
Hello, hackers!

While I was searching for a function which checks doubles for
infinity, I discovered a function "isinf" in the file src/port/isinf.c
where one of the three versions returns a different value for "-inf"
("1" instead of "-1") compared to the other two.

It seems concrete values (not just "if isinf(...)") are checked only
in float.c, in float4out and float8out, but I am going to check for
concrete values in another patch of mine.

For systems with HAVE_FPCLASS the function returns the same result for
both "+inf" and "-inf". I can't test it on my workstation, but I found
only one man page[1], which says the system header file "ieeefp.h" must
be included to be able to use the "fpclass" function.

I guess nothing will be broken if that version of the function returns
the same results for input values as the other two.

The proposed patch implements that behavior.
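For reference, the fpclass()-based branch would then behave roughly like this
(a sketch of the intended behavior, not the attached patch itself;
fpclass()/FP_NINF/FP_PINF come from "ieeefp.h" on such systems):

#include <ieeefp.h>

int
isinf(double d)
{
    fpclass_t   type = fpclass(d);

    switch (type)
    {
        case FP_NINF:
            return -1;          /* match the other two versions */
        case FP_PINF:
            return 1;
        default:
            return 0;
    }
}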

P.S.: Should the patch be added to the next CF?

[1]https://docs.oracle.com/cd/E36784_01/html/E36874/fpclass-3c.html
-- 
Best regards,
Vitaly Burovoy


isinf_v1.patch
Description: Binary data



Re: [HACKERS] Template for commit messages

2016-01-31 Thread Alvaro Herrera
Joshua D. Drake wrote:
> On 01/29/2016 03:05 PM, Alvaro Herrera wrote:
> >Joshua D. Drake wrote:
> >
> >>I think the best question to ask is:
> >>
> >>"What is the problem we are trying to solve?"
> >
> >The problem is alluring more patch reviewers, beta testers and bug
> >reporters.
> 
> Do we really want patch reviewers, beta testers and bug reporters that are
> doing it because we created a fixed template for commit messages that may
> mention their name?

We want *some* patch reviewers and beta testers.  We don't really care
what's their motivation, as long as their do their thing.  The
motivating factor is supposed to be getting credit -- not in the commit
messages (because other people don't look at those) but having their
names appear in the release notes.

> >One of the offers is to credit them (I'm not exactly clear
> >on what is the group to benefit from this, but the phrasing used in the
> >meeting was "contributors to the release") by having a section somewhere
> >in the release notes with a list of their names.
> 
> I can see this as being a nice thing but knowing that someone is a
> contributor isn't hard. There is a contributor list on the website and it is
> obvious from mail lists, archives and simple searches who is actually
> participating.

It's not obvious, which is why this is being discussed at all, and the
contributor list in the website is useless for the purposes at hand.
And it is hard, which is why the issue was raised in the first place --
heck, when this was mentioned, somebody said "but we would have to wait
until 9.7 because we don't have that information in existing commit
messages in 9.6 already" (he was promptly made to shut up, because most
if not all committers are already crediting reviewers in their commit
messages and we don't want to wait *two years* to implement this idea.)

> >So the problem, of course, is collating that list of names, and the
> >point of having a commit template is to have a single, complete source
> >of truth from where to extract the info.
> 
> I think the problem is that we think that this is somehow going to allure
> more people.

How come the issue of crediting people comes up so often, if it is, as
you seem to be saying, so worthless?

> I also think it is a quick step from:
> 
> Oh, Alvaro helped a lot with that.
> 
> vs
> 
> On, 2Q wrote that feature.

I'm not sure what's your point here, but the release notes are going to
list individuals, not companies.  (FWIW if it were up to me committers
and major contributors would not be in the list anyway.)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] WIP: Detecting SSI conflicts before reporting constraint violations

2016-01-31 Thread Thomas Munro
Hi hackers,

As described in a recent Reddit discussion[1] and bug report 9301[2],
there are scenarios where overlapping concurrent read-write sequences
produce serialization failures without constraints, but produce
constraint violations when there is a unique constraint.  A simple
example is deciding on a new value for a primary key by first checking
the existing contents of a table.

This makes life difficult if you're trying to build systems that
automatically retry SERIALIZABLE transactions where appropriate,
because you have to decide how and when to handle unique constraint
violations too.  For example, people have experimented with automatic
retry-on-40001 at the level of HTTP requests for Django applications
when using the middleware that maps HTTP requests to database
transactions, and the same opportunities presumably exist in Java
application servers and other web service technologies, but unique
constraint violations get in the way of that.
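For illustration, the kind of retry wrapper people end up writing looks
roughly like this (a libpq sketch using single-statement transactions for
brevity; today the caller must additionally decide what to do about unique
violations (23505), while with the patch such cases surface as 40001 and the
loop just works):

#include <string.h>
#include <libpq-fe.h>

static PGresult *
exec_with_retry(PGconn *conn, const char *sql, int max_retries)
{
    for (int attempt = 0; attempt <= max_retries; attempt++)
    {
        PGresult   *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_FATAL_ERROR)
            return res;

        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

        /* 40001 = serialization_failure: safe to retry automatically */
        if (sqlstate && strcmp(sqlstate, "40001") == 0 && attempt < max_retries)
        {
            PQclear(res);
            continue;
        }
        return res;             /* anything else, e.g. 23505, bubbles up */
    }
    return NULL;                /* not reached */
}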

Here is an experimental patch to report such SSI conflicts.  I had to
make a change to aminsert_function so that the snapshot could be
available to btree insert code (which may be unpopular).  The
questions on my mind now are:  Are there still conflicting schedules
that would be missed, or significant new spurious conflict reports, or
other theoretical soundness problems?  Is that PredicateLockPage call
sound and cheap enough?  It is the only new code that isn't in a path
already doomed to ereport, other than the extra snapshot propagation,
and without it read-write-unique-3.spec (taken from bug report 9301)
doesn't detect the conflict.

Thoughts?

[1] 
https://www.reddit.com/r/PostgreSQL/comments/3z74v2/postgres_acid_transactions_and_locking_to_prevent/
[2] 
http://www.postgresql.org/message-id/flat/20140221002001.29130.27...@wrigleys.postgresql.org

-- 
Thomas Munro
http://www.enterprisedb.com


ssi-read-write-unique-wip.patch
Description: Binary data



Re: [HACKERS] PostgreSQL Audit Extension

2016-01-31 Thread Alvaro Herrera
Joshua D. Drake wrote:
> On 01/31/2016 05:07 AM, Alvaro Herrera wrote:
> >David Steele wrote:
> >>The attached patch implements audit logging for PostgreSQL as an
> >>extension.  I believe I have addressed the concerns that were raised at
> >>the end of the 9.5 development cycle.
> >
> >This patch got no feedback at all during the commitfest.  I think there
> >is some interest on auditing capabilities so it's annoying and
> >surprising that this has no movement at all.
> >
> >If the lack of activity means lack of interest, please can we all keep
> >what we've been doing in this thread, which is to ignore it, so that we
> >can just mark this as rejected.  Otherwise, please chime in.  I'm giving
> >this patch some more time by moving it to next commitfest instead.
> 
> From my perspective, lack of activity means since it doesn't have a
> technical requirement to be in -core, it doesn't need to be.

Well, from mine it doesn't mean that.  We kind of assume that "no answer
means yes"; but here what it really means is that nobody had time to
look at it and think about it, so it stalled (and so have many other
patches BTW).  But if you or others think that this patch belongs in
PGXN, it's good to have that opinion in an email, so that the author can
think about your perspective and agree or disagree with it.  Just by
expressing that opinion, a thousand other hackers might vote +1 or -1 on
your proposal -- either way a clear sign.  Silence doesn't let us move
forward, and we punt the patch to the next CF and let inertia continue.
That's not good.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-01-31 Thread Alvaro Herrera
Konstantin Knizhnik wrote:
> I am very interested in this patch because it allows partial indexes
> to be used to ... speed up inserts.
> I have implemented an "ALTER INDEX ... WHERE ..." construction which makes it
> possible to change the predicate of a partial index without fully rebuilding it.
> So it is not necessary to insert new records into the index immediately (if the
> new records do not match the partial index condition).
> It can be done later in the background (or at night). My experiments show that
> this can increase insert speed five times (for such partial indexes).
> At the same time we do not lose the RDBMS requirement that the result of a
> query should not depend on the presence of indexes. And it is applicable to
> all indexes: B-Tree, GIN, GiST, ...
> 
> But such an optimization makes sense only if partial indexes can be used
> without extra overhead, first of all for index-only scans.
> And that is impossible without this patch.

That sounds interesting.  So please review this patch and let us know
whether you like it, or whether you have any better ideas for any
particular hunk, or whether you think it should be rewritten from
scratch, or just state that it is perfect.  If you think it's useful,
then it's a good idea to vouch for it to be integrated; and one way of
doing that is making sure it meets the quality standards etc.  If you
don't say anything about the patch, we may never integrate it because we
might have doubts about whether it's worthy.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] WAL Re-Writes

2016-01-31 Thread Jan Wieck

On 01/27/2016 08:30 AM, Amit Kapila wrote:

operation.  Now why OS couldn't find the corresponding block in
memory is that, while closing the WAL file, we use
POSIX_FADV_DONTNEED if wal_level is less than 'archive' which
lead to this problem.  So with this experiment, the conclusion is that
though we can avoid re-write of WAL data by doing exact writes, but
it could lead to significant reduction in TPS.


POSIX_FADV_DONTNEED isn't the only way those blocks can vanish
from the OS buffers. If I am not mistaken we recycle WAL segments in a
round robin fashion. In a properly configured system, where the reason for
a checkpoint is usually "time" rather than "xlog", a recycled WAL file now
being written to had been closed and not touched for about a complete
checkpoint_timeout or longer. You must have a really big amount of spare
RAM in the machine to still find those blocks in memory. Basically we
are talking about the active portion of your database, shared buffers,
the sum of all process local memory and the complete pg_xlog directory
content all fitting into RAM.
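For reference, the hint in question amounts to something like this (a sketch,
not the actual xlog.c code):

#include <fcntl.h>

/*
 * Sketch: when a WAL segment is closed and wal_level < archive, tell the
 * kernel that its cached pages may be dropped.
 */
static void
wal_segment_closed_hint(int fd)
{
    (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}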



Regards, Jan

--
Jan Wieck
Senior Software Engineer
http://slony.info




Re: [HACKERS] Template for commit messages

2016-01-31 Thread Joshua D. Drake

On 01/29/2016 03:05 PM, Alvaro Herrera wrote:

Joshua D. Drake wrote:


I think the best question to ask is:

"What is the problem we are trying to solve?"


The problem is alluring more patch reviewers, beta testers and bug
reporters.


Do we really want patch reviewers, beta testers and bug reporters that 
are doing it because we created a fixed template for commit messages 
that may mention their name?



One of the offers is to credit them (I'm not exactly clear
on what is the group to benefit from this, but the phrasing used in the
meeting was "contributors to the release") by having a section somewhere
in the release notes with a list of their names.


I can see this as being a nice thing but knowing that someone is a 
contributor isn't hard. There is a contributor list on the website and 
it is obvious from mail lists, archives and simple searches who is 
actually participating.



This proposal is
different from the previous proposal because their names wouldn't appear
next to each feature.


That certainly is a good thing.



So the problem, of course, is collating that list of names, and the
point of having a commit template is to have a single, complete source
of truth from where to extract the info.



I think the problem is that we think that this is somehow going to 
allure more people. I also think it is a quick step from:


Oh, Alvaro helped a lot with that.

vs

On, 2Q wrote that feature.

Any smart business is going to push for that once we start down this 
path in earnest.


Sincerely,

Joshua D. Drake


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.




Re: [HACKERS] PostgreSQL Audit Extension

2016-01-31 Thread Joshua D. Drake

On 01/31/2016 05:07 AM, Alvaro Herrera wrote:

David Steele wrote:

The attached patch implements audit logging for PostgreSQL as an
extension.  I believe I have addressed the concerns that were raised at
the end of the 9.5 development cycle.


This patch got no feedback at all during the commitfest.  I think there
is some interest on auditing capabilities so it's annoying and
surprising that this has no movement at all.

If the lack of activity means lack of interest, please can we all keep
what we've been doing in this thread, which is to ignore it, so that we
can just mark this as rejected.  Otherwise, please chime in.  I'm giving
this patch some more time by moving it to next commitfest instead.


From my perspective, the lack of activity means that since it doesn't have a
technical requirement to be in -core, it doesn't need to be there.


That said, it is certainly true that people express interest in auditing 
(all the time). So I think we need to either punt it as something that 
is going to be an external extension or someone needs to pick it up.


It used to be that there was a philosophy that we build what people need
to build other things. This is why replication for the longest time
wasn't incorporated. If that is still our philosophy, I don't even know
why the audit extension is still a question.


Sincerely,

JD








--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.




Re: [HACKERS] extend pgbench expressions with functions

2016-01-31 Thread Fabien COELHO


v22 compared to previous:
 - remove the short macros (although IMO it is a code degradation)
 - try not to remove/add blanks lines
 - let some assert "as is"
 - still exit on float to int overflow, see arguments in other mails
 - check for INT64_MIN / -1 (although I think it is useless)

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 42d0667..d42208a 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -796,17 +796,21 @@ pgbench  options  dbname
   Sets variable varname to an integer value calculated
   from expression.
   The expression may contain integer constants such as 5432,
-  references to variables :variablename,
+  double constants such as 3.14159,
+  references to integer variables :variablename,
   and expressions composed of unary (-) or binary operators
-  (+, -, *, /, %)
-  with their usual associativity, and parentheses.
+  (+, -, *, /,
+  %) with their usual associativity, function calls and
+  parentheses.
+   shows the available
+  functions.
  
 
  
   Examples:
 
 \set ntellers 10 * :scale
-\set aid (1021 * :aid) % (10 * :scale) + 1
+\set aid (1021 * random(1, 10 * :scale)) % (10 * :scale) + 1
 
 

@@ -826,66 +830,35 @@ pgbench  options  dbname
  
 
  
-  By default, or when uniform is specified, all values in the
-  range are drawn with equal probability.  Specifying gaussian
-  or  exponential options modifies this behavior; each
-  requires a mandatory parameter which determines the precise shape of the
-  distribution.
- 
+  
+   
+
+ \setrandom n 1 10 or \setrandom n 1 10 uniform
+ is equivalent to \set n random(1, 10) and uses a uniform
+ distribution.
+
+   
 
- 
-  For a Gaussian distribution, the interval is mapped onto a standard
-  normal distribution (the classical bell-shaped Gaussian curve) truncated
-  at -parameter on the left and +parameter
-  on the right.
-  Values in the middle of the interval are more likely to be drawn.
-  To be precise, if PHI(x) is the cumulative distribution
-  function of the standard normal distribution, with mean mu
-  defined as (max + min) / 2.0, with
-
- f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
-(2.0 * PHI(parameter) - 1.0)
-
-  then value i between min and
-  max inclusive is drawn with probability:
-  f(i + 0.5) - f(i - 0.5).
-  Intuitively, the larger parameter, the more
-  frequently values close to the middle of the interval are drawn, and the
-  less frequently values close to the min and
-  max bounds. About 67% of values are drawn from the
-  middle 1.0 / parameter, that is a relative
-  0.5 / parameter around the mean, and 95% in the middle
-  2.0 / parameter, that is a relative
-  1.0 / parameter around the mean; for instance, if
-  parameter is 4.0, 67% of values are drawn from the
-  middle quarter (1.0 / 4.0) of the interval (i.e. from
-  3.0 / 8.0 to 5.0 / 8.0) and 95% from
-  the middle half (2.0 / 4.0) of the interval (second and
-  third quartiles). The minimum parameter is 2.0 for
-  performance of the Box-Muller transform.
- 
+  
+   
+\setrandom n 1 10 exponential 3.0 is equivalent to
+\set n random_exponential(1, 10, 3.0) and uses an
+exponential distribution.
+   
+  
 
- 
-  For an exponential distribution, parameter
-  controls the distribution by truncating a quickly-decreasing
-  exponential distribution at parameter, and then
-  projecting onto integers between the bounds.
-  To be precise, with
-
-f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1.0 - exp(-parameter))
-
-  Then value i between min and
-  max inclusive is drawn with probability:
-  f(x) - f(x + 1).
-  Intuitively, the larger parameter, the more
-  frequently values close to min are accessed, and the
-  less frequently values close to max are accessed.
-  The closer to 0 parameter, the flatter (more uniform)
-  the access distribution.
-  A crude approximation of the distribution is that the most frequent 1%
-  values in the range, close to min, are drawn
-  parameter% of the time.
-  parameter value must be strictly positive.
+  
+   
+\setrandom n 1 10 gaussian 2.0 is equivalent to
+\set n random_gaussian(1, 10, 2.0), and uses a gaussian
+distribution.
+   
+  
+ 
+
+   See the documentation of these functions below for further information
+   about the precise shape of these distributions, depending on the value
+   of the parameter.
  
 
  
@@ -965,18 +938,184 @@ f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1.0 - exp(-parameter))

   
 
+   
+   
+PgBench Functions
+

Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2016-01-31 Thread Tom Lane
Noah Misch  writes:
> On Sat, Jan 30, 2016 at 04:28:47PM -0500, Robert Haas wrote:
>> I think I've
>> pretty much said what I have to say about this; if nothing I wrote up
>> until now swayed you, it's unlikely that anything else I say after
>> this point will either.

> Say I drop the parts that change the binary.  Does the attached v2 manage to
> improve PostgreSQL, or is it neutral-or-harmful like v1?

There can surely be no objection to improving these comments.  However,
I'm not convinced that we should word the comments to insist that the
hypothetical cases are bugs.  As I said before, I do not think there is an
API contract that would promise that portals don't reach here in ACTIVE
state.  So IMO it's fair to note that no such case can arise currently,
but not to state that it's a bug if it does.  So for example I'd reword
your last comment addition along the lines of "Currently, every
MarkPortalActive() caller ensures it updates the portal status again
before relinquishing control, so that ACTIVE can't happen here.  If it
does happen, dispose the portal like existing MarkPortalActive() callers
would."

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-31 Thread Tom Lane
Piotr Stefaniak  writes:
> These changes from 65abaab547a5758b0d6d92df4af1663bb47d545f
> - result = sign * cosd_q1(arg1) / sind_q1(arg1);
> + result = sign * ((cosd_q1(arg1) / sind_q1(arg1)) / cot_45);

> and

> - result = sign * sind_q1(arg1) / cosd_q1(arg1);
> + result = sign * ((sind_q1(arg1) / cosd_q1(arg1)) / tan_45);

> both introduce division by zero, don't they?

Huh? cot_45 and tan_45 are fixed values that should be very close to 1.
Or were you complaining that the potential div by 0 existed beforehand?

It's possible that we should check for sind_q1(arg1) or cosd_q1(arg1)
being zero before we try the divide, and substitute get_float8_infinity()
instead.  But the regression tests exercise this case, and none of the
buildfarm members complained, so I'm a bit disinclined to add code for
that purpose.  If anyone does report regression test failure here, we can
revisit the issue then.
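
For what it's worth, a minimal standalone sketch of that kind of guard, written
against plain math.h rather than the float.c internals (so tand_guarded() below
is a made-up stand-in, not the committed code):

#include <math.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the tand() code path: check the denominator
 * explicitly and substitute an infinity (what get_float8_infinity() would
 * return) instead of relying on IEEE 754 division-by-zero semantics.
 */
static double
tand_guarded(double sin_val, double cos_val, int sign)
{
	if (cos_val == 0.0)
		return sign * INFINITY;
	return sign * (sin_val / cos_val);
}

int
main(void)
{
	printf("%g\n", tand_guarded(1.0, 0.0, 1));	/* prints inf */
	printf("%g\n", tand_guarded(1.0, 0.5, -1));	/* prints -2 */
	return 0;
}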

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fwd: Core dump with nested CREATE TEMP TABLE

2016-01-31 Thread Noah Misch
On Sat, Jan 30, 2016 at 04:28:47PM -0500, Robert Haas wrote:
> I think I've
> pretty much said what I have to say about this; if nothing I wrote up
> until now swayed you, it's unlikely that anything else I say after
> this point will either.

Say I drop the parts that change the binary.  Does the attached v2 manage to
improve PostgreSQL, or is it neutral-or-harmful like v1?

On Sat, Jan 30, 2016 at 04:28:47PM -0500, Robert Haas wrote:
> On Sat, Jan 30, 2016 at 2:13 PM, Noah Misch  wrote:
> > On Sat, Jan 30, 2016 at 07:37:45AM -0500, Robert Haas wrote:
> > > On Fri, Jan 29, 2016 at 11:19 PM, Noah Misch  wrote:
> > > > On Thu, Jan 28, 2016 at 11:12:15AM -0500, Robert Haas wrote:
> > > >> The code you quote emits a warning
> > > >> about a reasonably forseeable situation that can never be right, but
> > > >> there's no particular reason to think that MarkPortalFailed is the
> > > >> wrong thing to do here if that situation came up.
> > > >
> > > > I have two reasons to expect these MarkPortalFailed() calls will be the 
> > > > wrong
> > > > thing for hypothetical code reaching them.  First, PortalRun() and 
> > > > peers reset
> > > > ActivePortal and PortalContext on error in addition to fixing portal 
> > > > status
> > > > with MarkPortalFailed().  If code forgets to update the status, there's 
> > > > a
> > > > substantial chance it forgot the other two.  My patch added a comment
> > > > explaining the second reason:
> > > >
> > > > +   /*
> > > > +* See similar code in AtSubAbort_Portals().  This 
> > > > would fire if code
> > > > +* orchestrating multiple top-level transactions within 
> > > > a portal, such
> > > > +* as VACUUM, caught errors and continued under the 
> > > > same portal with a
> > > > +* fresh transaction.  No part of core PostgreSQL 
> > > > functions that way,
> > > > +* though it's a fair thing to want.  Such code would 
> > > > wish the portal
> > > > +* to remain ACTIVE, as in PreCommit_Portals(); we 
> > > > don't cater to
> > > > +* that.  Instead, like AtSubAbort_Portals(), interpret 
> > > > this as a bug.
> > > > +*/
> > > 
> > > You may be right, but then again Tom had a different opinion, even
> > > after seeing your patch, and he's no dummy.
> >
> > Eh?  Tom last posted to this thread before I first posted a patch.
> 
> http://www.postgresql.org/message-id/29758.1451780...@sss.pgh.pa.us
> seems to me to be a vote against the concept embodied by the patch.

That much is true.  The order of postings is the opposite of what you stated,
and there's no mailing list evidence that anyone other than you has read my
explanation of why MarkPortalFailed() is wrong here.  Alternately, I demand
the schematics for Tom's time machine.
diff --git a/src/backend/utils/mmgr/portalmem.c b/src/backend/utils/mmgr/portalmem.c
index a53673c..c840aa6 100644
--- a/src/backend/utils/mmgr/portalmem.c
+++ b/src/backend/utils/mmgr/portalmem.c
@@ -765,7 +765,14 @@ AtAbort_Portals(void)
 	{
 		Portal		portal = hentry->portal;
 
-		/* Any portal that was actually running has to be considered broken */
+		/*
+		 * See similar code in AtSubAbort_Portals().  This would fire if code
+		 * orchestrating multiple top-level transactions within a portal, such
+		 * as VACUUM, caught errors and continued under the same portal with a
+		 * fresh transaction.  No part of core PostgreSQL functions that way.
+		 * XXX Such code would wish the portal to remain ACTIVE, as in
+		 * PreCommit_Portals().
+		 */
 		if (portal->status == PORTAL_ACTIVE)
 			MarkPortalFailed(portal);
 
@@ -919,9 +926,11 @@ AtSubAbort_Portals(SubTransactionId mySubid,
 				portal->activeSubid = parentSubid;
 
 				/*
-				 * Upper-level portals that failed while running in this
-				 * subtransaction must be forced into FAILED state, for the
-				 * same reasons discussed below.
+				 * If a bug in a MarkPortalActive() caller has it miss cleanup
+				 * after having failed while running an upper-level portal in
+				 * this subtransaction, we don't know what else in the portal
+				 * is wrong.  Force it into FAILED state, for the same reasons
+				 * discussed below.
 				 *
 				 * We assume we can get away without forcing upper-level READY
 				 * portals to fail, even if they were run and then suspended.
@@ -960,7 +969,11 @@ AtSubAbort_Portals(SubTransactionId mySubid,

Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-01-31 Thread Konstantin Knizhnik

I am very interested in this patch because it allows partial indexes to be used to
... speed up inserts.
I have implemented an "ALTER INDEX ... WHERE ..." construction which allows the
predicate of a partial index to be changed without having to fully rebuild it.
So it is not necessary to insert new records into the index immediately (if the new
records do not match the partial index condition).
That can be done later in the background (or at night). My experiments show that it
can increase insert speed about five times when such partial indexes are present.
At the same time we do not lose the RDBMS requirement that the result of a query must
not depend on the presence of indexes. And it is applicable to all index types: B-Tree,
GIN, GiST, ...

But such an optimization makes sense only if partial indexes can be used without
extra overhead, first of all for index-only scans.
And that is impossible without this patch.






On 01/31/2016 03:34 PM, Alvaro Herrera wrote:

Tomas Vondra wrote:


On 12/24/2015 04:05 AM, Michael Paquier wrote:

Tomas, are you still working on that? This thread is stalling for 3 weeks.

I haven't discovered anything interesting during the testing, so I guess the
"needs review" state is appropriate. Let's move the patch to the next
commitfest.

Not sure what to do here, since this patch got no feedback at all in
this CF.  The right thing to do, ISTM, is to just move it again to the
next CF.  But it'd be really useful if someone could have a look at it and
verify at least whether it doesn't need a rebase (requiring a further
submission) so that other people can play with it.  Of course, if
Horiguchi-san or anyone has more review comments, that would be even
better.

Tomas said he'd do more testing, but we never got a report on whether
anything turned up.

(At this point I'm not sure if either Kyotaro or Tomas should be
considered the patch author ... maybe both?)




--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Idle In Transaction Session Timeout, revived

2016-01-31 Thread Vik Fearing
Attached is a rebased and revised version of my
idle_in_transaction_session_timeout patch from last year.

This version does not suffer the problems the old one did where it would
jump out of SSL code thanks to Andres' patch in commit
4f85fde8eb860f263384fffdca660e16e77c7f76.

The basic idea is if a session remains idle in a transaction for longer
than the configured time, that connection will be dropped thus releasing
the connection slot and any locks that may have been held by the broken
client.

Added to the March commitfest.
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 392eb70..9cccf0e 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5908,6 +5908,29 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
   
  
 
+ 
+  idle_in_transaction_session_timeout (integer)
+  
+   idle_in_transaction_session_timeout configuration parameter
+  
+  
+  
+   
+   Terminate any session with an open transaction that has been idle for
+   longer than the specified duration in milliseconds. This allows any locks to
+   be released and the connection slot to be reused.
+   
+   
+   Sessions in the state "idle in transaction (aborted)" occupy a connection
+   slot but because they hold no locks, they are not considered by this
+   parameter.
+   
+   
+   The default value of 0 means that such sessions will not be terminated.
+   
+  
+ 
+
  
   vacuum_freeze_table_age (integer)
   
diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index dd3c775..6352e12 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -725,7 +725,9 @@ ERROR:  could not serialize access due to read/write dependencies among transact
  
   
Don't leave connections dangling idle in transaction
-   longer than necessary.
+   longer than necessary.  The configuration parameter
+   idle_in_transaction_session_timeout may be used to
+   automatically disconnect lingering sessions.
   
  
  
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 3690753..b711da4 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -57,6 +57,7 @@
 int			DeadlockTimeout = 1000;
 int			StatementTimeout = 0;
 int			LockTimeout = 0;
+int			IdleInTransactionSessionTimeout = 0;
 bool		log_lock_waits = false;
 
 /* Pointer to this process's PGPROC and PGXACT structs, if any */
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 390816b..7f03d8e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -2978,6 +2978,11 @@ ProcessInterrupts(void)
 		}
 	}
 
+	if (IdleInTransactionTimeoutSessionPending)
+		ereport(FATAL,
+(errcode(ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT),
+ errmsg("terminating connection due to idle-in-transaction timeout")));
+
 	if (ParallelMessagePending)
 		HandleParallelMessages();
 }
@@ -3947,6 +3952,11 @@ PostgresMain(int argc, char *argv[],
 			{
 set_ps_display("idle in transaction", false);
 pgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);
+
+/* Start the idle-in-transaction timer */
+if (IdleInTransactionSessionTimeout > 0)
+	enable_timeout_after(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
+		 IdleInTransactionSessionTimeout);
 			}
 			else
 			{
@@ -3987,7 +3997,13 @@ PostgresMain(int argc, char *argv[],
 		DoingCommandRead = false;
 
 		/*
-		 * (5) check for any other interesting events that happened while we
+		 * (5) turn off the idle-in-transaction timeout
+		 */
+		if (IdleInTransactionSessionTimeout > 0)
+disable_timeout(IDLE_IN_TRANSACTION_SESSION_TIMEOUT, false);
+
+		/*
+		 * (6) check for any other interesting events that happened while we
 		 * slept.
 		 */
 		if (got_SIGHUP)
@@ -3997,7 +4013,7 @@ PostgresMain(int argc, char *argv[],
 		}
 
 		/*
-		 * (6) process the command.  But ignore it if we're skipping till
+		 * (7) process the command.  But ignore it if we're skipping till
 		 * Sync.
 		 */
 		if (ignore_till_sync && firstchar != EOF)
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index 04c9c00..1a920e8 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -229,6 +229,7 @@ Section: Class 25 - Invalid Transaction State
 25007    E    ERRCODE_SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED    schema_and_data_statement_mixing_not_supported
 25P01    E    ERRCODE_NO_ACTIVE_SQL_TRANSACTION    no_active_sql_transaction
 25P02    E    ERRCODE_IN_FAILED_SQL_TRANSACTION    in_failed_sql_transaction
+25P03    E    ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT    idle_in_transaction_session_timeout
 
 Section: Class 26 - Inval

Re: [HACKERS] On columnar storage (2)

2016-01-31 Thread Alvaro Herrera
So we discussed some of this stuff during the developer meeting in
Brussels and the main conclusion is that we're going to split this up in
multiple independently useful pieces, and write up the general roadmap
in the wiki so that we can discuss in detail on-list.

I'm marking this as Returned with Feedback now.

Thanks everybody,

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL Audit Extension

2016-01-31 Thread Alvaro Herrera
David Steele wrote:
> The attached patch implements audit logging for PostgreSQL as an
> extension.  I believe I have addressed the concerns that were raised at
> the end of the 9.5 development cycle.

This patch got no feedback at all during the commitfest.  I think there
is some interest in auditing capabilities, so it's annoying and
surprising that this has seen no movement at all.

If the lack of activity means lack of interest, then let's all keep
doing what we've been doing in this thread, which is to ignore it, so that
we can just mark this as rejected.  Otherwise, please chime in.  I'm giving
this patch some more time by moving it to the next commitfest instead.


-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-31 Thread Piotr Stefaniak

On 01/31/2016 01:23 PM, Michael Paquier wrote:

Per IEEE 754, division by 0 for a double results in NaN or +/-Inf, so
that's actually correct.


I didn't know that. I guess that in practice that is OK and the case is 
closed.


Interestingly, that assumption appears to rely on the C
implementation complying with IEC 60559, in which case C99 lets the
implementation signal that by defining the __STDC_IEC_559__ macro. C89
doesn't seem to mention any of this.
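
As a small illustration (not from the thread): a program can test for that
macro at compile time, and where it is defined, results such as 1.0/0.0 == +Inf
and 0.0/0.0 == NaN are guaranteed rather than merely common:

#include <stdio.h>

int
main(void)
{
#ifdef __STDC_IEC_559__
	puts("implementation advertises IEC 60559 (IEEE 754) arithmetic");
#else
	puts("__STDC_IEC_559__ not defined: no such guarantee from the C standard");
#endif

	/* Under IEC 60559 these divisions do not trap; they yield Inf and NaN. */
	{
		volatile double zero = 0.0;

		printf("1.0/0.0 = %g, 0.0/0.0 = %g\n", 1.0 / zero, 0.0 / zero);
	}
	return 0;
}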




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-01-31 Thread Alvaro Herrera
Ashutosh Bapat wrote:

> Here's updated patch. I didn't use version numbers in file names in my
> previous patches. I am starting from this onwards.

Um, I tried this patch and it doesn't apply at all.  There's a large
number of conflicts.  Please update it and resubmit to the next
commitfest.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: track last known XLOG segment in control file

2016-01-31 Thread Tomas Vondra

Hi,

On 01/31/2016 01:25 PM, Alvaro Herrera wrote:

This hasn't been updated in a long time; I should have closed it
earlier.  Chatting with Tomas about it, he seemed inclined to just have
the patch rejected because it's not terribly useful anyway.


we've discussed that some time ago and my memory is a bit flaky, but I 
don't think that's quite what I said. The main reason why I haven't 
posted an updated version of the patch is that it seems a bit silly to 
spend time on this while the underlying data loss issue is still not 
fixed (it got to "ready for committer" since then).




I'm marking it as rejected. Unless someone has a compelling use case
for this feature that hasn't been put forward, I think we're done
here.



I'm a bit torn - I think the protection might be useful, but there are a 
few issues with this approach:


 1) Getting the "can't start, WAL segments lost" message after a crash
    is a bit late. It protects against silent data loss only in the sense
    that it makes the loss "not silent"; the loss itself is not "fixed".
    But maybe it's worth it.

 2) It protects against a fairly narrow class of failures when we lose
the last WAL segment for some reason. Hopefully once we add the
additional fsyncs that particular failure scenario will get fixed,
but the question is whether we're in danger of reintroducing it
(or a similar issue) later.

I guess we could mark it as "rejected" but that's likely to eliminate 
any further feedback / discussion about the protection in general.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: use foreign keys to improve join estimates v1

2016-01-31 Thread Alvaro Herrera
David Rowley wrote:
 
> I'm not sure that I agree with this being set to "Needs review". The last
> progress that I see made on this was me hacking at the patch to remove some
> equivalence class limitations. I think the logical next step would be for
> you to look at these changes and either accept or reject them. So does that
> not suit "Waiting on author" better than "Needs review" ?

Tomas, I think we need you need to submit a new version of this patch,
please.  Closing it for now.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-01-31 Thread Alvaro Herrera
Tomas Vondra wrote:

> On 12/24/2015 04:05 AM, Michael Paquier wrote:

> >Tomas, are you still working on that? This thread is stalling for 3 weeks.
> 
> I haven't discovered anything interesting during the testing, so I guess the
> "needs review" state is appropriate. Let's move the patch to the next
> commitfest.

Not sure what to do here, since this patch got no feedback at all in
this CF.  The right thing to do, ISTM, is to just move it again to the
next CF.  But it'd be really useful if someone could have a look at it and
verify at least whether it doesn't need a rebase (requiring a further
submission) so that other people can play with it.  Of course, if
Horiguchi-san or anyone has more review comments, that would be even
better.  

Tomas said he'd do more testing, but we never got a report on whether
anything turned up.

(At this point I'm not sure if either Kyotaro or Tomas should be
considered the patch author ... maybe both?)

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-31 Thread Michael Paquier
On Thu, Jan 21, 2016 at 1:08 PM, Michael Paquier
 wrote:
> On Wed, Jan 20, 2016 at 2:32 AM, Joe Conway  wrote:
>> The only things I know of still lacking is:
>> 1) Documentation
>> 2) Decision on REVOKE ... FROM PUBLIC
>
> Yep, regarding 2) I am the only one actually making noise to protect
> this information by default, against at least 2 committers :)
>
>> I'm assuming by the lack of complainants that there is enough support
>> for the feature (conceptually at least) that it is worthwhile for me to
>> write the docs. Will do that over the next couple of days or so.
>
> I would think so as well.
>
> +typedef struct configdata
> +{
> +   char   *name;
> +   char   *setting;
> +} configdata;
> For a better analogy to ControlFileData, this could be renamed ConfigData?
>
> -OBJS_COMMON = exec.o pg_lzcompress.o pgfnames.o psprintf.o relpath.o \
> -   rmtree.o string.o username.o wait_error.o
> +# don't include subdirectory-path-dependent -I and -L switches
> +STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include
> -I$(top_builddir)/src/include,$(CPPFLAGS))
> +STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS))
> The point of the move to src/common is to remove the duplication in
> src/bin/pg_config/Makefile.
>
> --- /dev/null
> +++ b/src/common/config_info.c
> [...]
> +#include "postgres.h"
> +
> +#include "miscadmin.h"
> +#include "common/config_info.h"
> All the files in src/common should begin their include declarations with that:
> #ifndef FRONTEND
> #include "postgres.h"
> #else
> #include "postgres_fe.h"
> #endif
>
> +configdata *
> +get_configdata(char *my_exec_path, size_t *configdata_len)
> +{
> It may be good to mention that the result is palloc'd and that caller
> may need to free it if necessary. It does not matter in the two code
> paths of this patch, but it may matter for other users calling that.
>
> MSVC compilation is working correctly here.

I am marking this patch as returned with feedback for now; not all the
issues have been fixed yet, and there are still no docs (the
conclusion being that people would like to have this stuff, right?).
Feel free to move it to the next CF should a new version be written.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-31 Thread Michael Paquier
On Sun, Jan 31, 2016 at 7:55 AM, Michael Paquier
 wrote:
> On Sun, Jan 31, 2016 at 5:32 AM, Craig Ringer  wrote:
>> On 29 January 2016 at 22:41, Stephen Frost  wrote:
>>>
>>> Michael,
>>>
>>> * Michael Paquier (michael.paqu...@gmail.com) wrote:
>>> > On Fri, Jan 29, 2016 at 6:37 AM, Stephen Frost 
>>> > wrote:
>>> > > * Robert Haas (robertmh...@gmail.com) wrote:
>>> > >> On Thu, Jan 28, 2016 at 11:04 AM, Stephen Frost 
>>> > wrote:
>>> > >> > Personally, I don't have any particular issue having both, but the
>>> > >> > desire was stated that it would be better to have the regular
>>> > >> > GRANT EXECUTE ON catalog_func() working before we consider having
>>> > >> > default roles for same.  That moves the goal posts awful far
>>> > >> > though, if
>>> > >> > we're to stick with that and consider how we might extend the GRANT
>>> > >> > system in the future.
>>> > >>
>>> > >> I don't think it moves the goal posts all that far.  Convincing
>>> > >> pg_dump to dump grants on system functions shouldn't be a crazy large
>>> > >> patch.
>>> > >
>>> > > I wasn't clear as to what I was referring to here.  I've already
>>> > > written
>>> > > a patch to pg_dump to support grants on system objects and agree that
>>> > > it's at least reasonable.
>>> >
>>> > Is it already posted somewhere? I don't recall seeing it. Robert and
>>> > Noah
>>> > have a point that this would be useful for users who would like to dump
>>> > GRANT/REVOKE rights on system functions & all, using a new option in
>>> > pg_dumpall, say --with-system-acl or --with-system-privileges.
>>>
>>> Multiple versions were posted on this thread.  I don't fault you for not
>>> finding it, this thread is a bit long in the tooth.  The one I'm
>>> currently working from is
>>>
>>
>> It strikes me that this thread would possibly benefit from a wiki page
>> outlining the permissions, overall concepts, etc, as it's getting awfully
>> hard to follow.
>
> +1. This has proved to be very beneficial for UPSERT.

I am marking this patch as returned with feedback per the current
status of this thread.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-31 Thread Michael Paquier
On Sat, Jan 30, 2016 at 11:08 PM, Michael Paquier
 wrote:
> On Fri, Jan 29, 2016 at 9:13 PM, Michael Paquier wrote:
>> Well, to put it short, I am just trying to find a way to make the
>> backend skip unnecessary checkpoints on an idle system, which results
>> in the following WAL pattern if system is completely idle:
>> CHECKPOINT_ONLINE
>> RUNNING_XACTS
>> RUNNING_XACTS
>> [etc..]
>>
>> The thing is that I am lost with the meaning of this condition to
>> decide if a checkpoint should be skipped or not:
>> if (prevPtr == ControlFile->checkPointCopy.redo &&
>> prevPtr / XLOG_SEG_SIZE == curInsert / XLOG_SEG_SIZE)
>> {
>> WALInsertLockRelease();
>> LWLockRelease(CheckpointLock);
>> return;
>> }
>> As at least one standby snapshot is logged before the checkpoint
>> record, the redo position is never going to match the previous insert
>> LSN, so checkpoints will never be skipped if wal_level >= hot_standby.
>> Skipping such unnecessary checkpoints is what you would like to
>> address, no? Because that's what I would like to do here first. And
>> once we got that right, we could think about addressing the case where
>> WAL segments are forcibly archived for idle systems.
>
> I have put a bit more of brain power into that, and finished with the
> patch attached. A new field called chkpProgressLSN is attached to
> XLogCtl, which is set to the current insert position of the checkpoint
> when wal_level <= archive, or to the LSN position of the standby snapshot
> taken after a checkpoint. The bgwriter code is modified as well so that
> it uses this progress LSN and compares it with the current insert LSN
> to determine if a standby snapshot should be logged or not. On an
> instance of Postgres completely idle, no useless checkpoints, and no
> useless standby snapshots are generated anymore.

Attached is an additional patch that I have used for my tests (I should
have sent that yesterday). It just adds a couple of log messages in the
bgwriter and around checkpoints, to show their activity in more detail
without too much noise.
-- 
Michael
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index ccef3f0..e51b97e 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -8259,12 +8259,20 @@ CreateCheckPoint(int flags)
 	 * better if you don't need to keep two WAL segments around to recover the
 	 * checkpoint.
 	 */
+	elog(LOG, "prevPtr %X/%X, curInsert %X/%X, progress %X/%X, redo %X/%X",
+		 (uint32) (prevPtr >> 32), (uint32) prevPtr,
+		 (uint32) (curInsert >> 32), (uint32) curInsert,
+		 (uint32) (GetProgressRecPtr() >> 32), (uint32) GetProgressRecPtr(),
+		 (uint32) (ControlFile->checkPointCopy.redo >> 32),
+		 (uint32) ControlFile->checkPointCopy.redo);
 	if ((flags & (CHECKPOINT_IS_SHUTDOWN | CHECKPOINT_END_OF_RECOVERY |
   CHECKPOINT_FORCE)) == 0)
 	{
+		elog(LOG, "Not a forced or shutdown checkpoint");
 		if (GetProgressRecPtr() == prevPtr &&
 			prevPtr / XLOG_SEG_SIZE == curInsert / XLOG_SEG_SIZE)
 		{
+			elog(LOG, "Checkpoint is skipped");
 			WALInsertLockRelease();
 			LWLockRelease(CheckpointLock);
 			END_CRIT_SECTION();
@@ -8437,7 +8445,11 @@ CreateCheckPoint(int flags)
 	 * snapshots need to be logged, and skip them on idle systems.
 	 */
 	if (!shutdown && XLogStandbyInfoActive())
+	{
 		progress_lsn = LogStandbySnapshot();
+		elog(LOG, "snapshot taken by checkpoint %X/%X",
+			 (uint32) (progress_lsn >> 32), (uint32) progress_lsn);
+	}
 
 	START_CRIT_SECTION();
 
diff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c
index 22374b0..d55e446 100644
--- a/src/backend/postmaster/bgwriter.c
+++ b/src/backend/postmaster/bgwriter.c
@@ -308,17 +308,25 @@ BackgroundWriterMain(void)
 		{
 			TimestampTz timeout = 0;
 			TimestampTz now = GetCurrentTimestamp();
+			XLogRecPtr	progress_lsn = GetProgressRecPtr();
+			XLogRecPtr	insert_lsn = GetInsertRecPtr();
 
 			timeout = TimestampTzPlusMilliseconds(last_snapshot_ts,
   LOG_SNAPSHOT_INTERVAL_MS);
 
+			elog(LOG, "bgwriter progress_lsn %X/%X, insert_lsn %X/%X",
+ (uint32) (progress_lsn >> 32), (uint32) progress_lsn,
+ (uint32) (insert_lsn >> 32), (uint32) insert_lsn);
 			/*
 			 * only log if enough time has passed.
 			 */
 			if (now >= timeout &&
-GetProgressRecPtr() != GetInsertRecPtr())
+progress_lsn != insert_lsn)
 			{
-(void) LogStandbySnapshot();
+XLogRecPtr lsn;
+lsn = LogStandbySnapshot();
+elog(LOG, "snapshot taken by bgwriter %X/%X",
+	 (uint32) (lsn >> 32), (uint32) lsn);
 last_snapshot_ts = now;
 			}
 		}
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index a2846c4..ccef3f0 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -527,6 +527,8 @@ typedef struct XLogCtlData
 	TransactionId ckptXid;
 	XLogRecPtr	asyncXactLSN;	/* LSN of newest async commit/abort */
 	XLogRec

Re: [HACKERS] PATCH: track last known XLOG segment in control file

2016-01-31 Thread Alvaro Herrera
This hasn't been updated in a long time; I should have closed it
earlier.  Chatting with Tomas about it, he seemed inclined to just have
the patch rejected because it's not terribly useful anyway.

I'm marking it as rejected.  Unless someone has a compelling use case
for this feature that hasn't been put forward, I think we're done here.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-31 Thread Michael Paquier
On Sun, Jan 31, 2016 at 9:01 PM, Piotr Stefaniak
 wrote:
> - result = sign * cosd_q1(arg1) / sind_q1(arg1);
> + result = sign * ((cosd_q1(arg1) / sind_q1(arg1)) / cot_45);
>
> and
>
> - result = sign * sind_q1(arg1) / cosd_q1(arg1);
> + result = sign * ((sind_q1(arg1) / cosd_q1(arg1)) / tan_45);
>
> both introduce division by zero, don't they?

Per IEEE 754, division by 0 for a double results in NaN or +/-Inf, so
that's actually correct.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New gist vacuum.

2016-01-31 Thread Alvaro Herrera
Костя Кузнецов wrote:
> Thank you, Jeff. I am reworking the patch now. All // warnings will be
> deleted. Regarding memory consumption, the new version will control the size
> of the stack and will operate with a small map, because I want to delete the
> old-style vacuum (now, if maintenance_work_mem is less than needed to build
> the info map, we use old-style vacuum with logical order).
> 

You never got around to submitting the updated version of this patch,
and it's been a long time now, so I'm marking it as returned with
feedback for now.  Please do submit a new version once you have it,
since this seems to be very useful.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PoC: Partial sort

2016-01-31 Thread Alvaro Herrera
Alexander Korotkov wrote:
 
> I'm sorry that I didn't found time for this yet. I'm certainly planning to
> get back to this in near future. The attached version is just rebased
> without any optimization.

Great to have a new version -- there seems to be a lot of interest in
this patch.  I'm moving this one to the next commitfest, thanks.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-31 Thread Piotr Stefaniak

These changes from 65abaab547a5758b0d6d92df4af1663bb47d545f

- result = sign * cosd_q1(arg1) / sind_q1(arg1);
+ result = sign * ((cosd_q1(arg1) / sind_q1(arg1)) / cot_45);

and

- result = sign * sind_q1(arg1) / cosd_q1(arg1);
+ result = sign * ((sind_q1(arg1) / cosd_q1(arg1)) / tan_45);

both introduce division by zero, don't they?



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-31 Thread Michael Paquier
On Sun, Jan 31, 2016 at 5:28 PM, Masahiko Sawada  wrote:
> On Sun, Jan 31, 2016 at 5:18 PM, Michael Paquier
>  wrote:
>> On Sun, Jan 31, 2016 at 5:08 PM, Masahiko Sawada  
>> wrote:
>>> On Sun, Jan 31, 2016 at 1:17 PM, Michael Paquier
>>>  wrote:
 On Thu, Jan 28, 2016 at 10:10 PM, Masahiko Sawada wrote:
> By the discussions so far, I'm planning to have several replication
> methods such as 'quorum', 'complex' in the feature, and the each
> replication method specifies the syntax of s_s_names.
> It means that s_s_names could have the number of sync standbys like
> what current patch does.

 What if the application_name of a standby node has the format of an 
 integer?
>>>
>>> Even if the standby has an integer as application_name, we can set
>>> s_s_names like '2,1,2,3'.
>>> The leading '2' is always handled as the number of sync standbys when
>>> s_r_method = 'priority'.
>>
>> Hm. I agree with Fujii-san here, having the number of sync standbys
>> defined in a parameter that should have a list of names is a bit
>> confusing. I'd rather have a separate GUC, which brings us back to one
>> of the first patches that I came up with, and a couple of people,
>> including Josh were not happy with that because this did not support
>> real quorum. Perhaps the final answer would be really to get a set of
>> hooks, and a contrib module making use of that.
>
> Yeah, I agree with having set of hooks, and postgres core has simple
> multi sync replication mechanism like you suggested at first version.

If there are hooks, I don't think that we should really bother about
having in core anything more complicated than what we have now. The
trick will be to come up with a hook design modular enough to support
the kind of configurations mentioned on this thread: roughly, a
refactoring of the syncrep code so that it is possible to wait for
multiple targets, some of them being optional, a modular way for
pg_stat_get_wal_senders to represent the status of a node to the user, and
another hook to decide which nodes to wait for. Some
of the nodes being waited for may be based on conditions for quorum
support. It is a hard problem to do that in a flexible enough way.
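
Just to illustrate the shape of the thing, a purely hypothetical sketch of such
a hook; nothing like this exists in core, and the names are invented:

#include "postgres.h"

#include "access/xlogdefs.h"

/*
 * Hypothetical sketch only: a hook that is given the flush positions reported
 * by the connected walsenders and decides whether a backend waiting for its
 * commit record at commit_lsn may be released.  A quorum module would count
 * how many positions are past commit_lsn; the current behaviour would look
 * only at the highest-priority standby.
 */
typedef bool (*syncrep_release_hook_type) (XLogRecPtr commit_lsn,
										   int nwalsenders,
										   const XLogRecPtr *flush_lsns);

extern syncrep_release_hook_type syncrep_release_hook;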
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-31 Thread Masahiko Sawada
On Sun, Jan 31, 2016 at 5:18 PM, Michael Paquier
 wrote:
> On Sun, Jan 31, 2016 at 5:08 PM, Masahiko Sawada  
> wrote:
>> On Sun, Jan 31, 2016 at 1:17 PM, Michael Paquier
>>  wrote:
>>> On Thu, Jan 28, 2016 at 10:10 PM, Masahiko Sawada wrote:
 By the discussions so far, I'm planning to have several replication
 methods such as 'quorum', 'complex' in the feature, and the each
 replication method specifies the syntax of s_s_names.
 It means that s_s_names could have the number of sync standbys like
 what current patch does.
>>>
>>> What if the application_name of a standby node has the format of an integer?
>>
>> Even if the standby has an integer as application_name, we can set
>> s_s_names like '2,1,2,3'.
>> The leading '2' is always handled as the number of sync standbys when
>> s_r_method = 'priority'.
>
> Hm. I agree with Fujii-san here, having the number of sync standbys
> defined in a parameter that should have a list of names is a bit
> confusing. I'd rather have a separate GUC, which brings us back to one
> of the first patches that I came up with, and a couple of people,
> including Josh were not happy with that because this did not support
> real quorum. Perhaps the final answer would be really to get a set of
> hooks, and a contrib module making use of that.

Yeah, I agree with having set of hooks, and postgres core has simple
multi sync replication mechanism like you suggested at first version.

Regards,

--
Masahiko Sawada


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-31 Thread Michael Paquier
On Sun, Jan 31, 2016 at 5:08 PM, Masahiko Sawada  wrote:
> On Sun, Jan 31, 2016 at 1:17 PM, Michael Paquier
>  wrote:
>> On Thu, Jan 28, 2016 at 10:10 PM, Masahiko Sawada wrote:
>>> By the discussions so far, I'm planning to have several replication
>>> methods such as 'quorum', 'complex' in the feature, and the each
>>> replication method specifies the syntax of s_s_names.
>>> It means that s_s_names could have the number of sync standbys like
>>> what current patch does.
>>
>> What if the application_name of a standby node has the format of an integer?
>
> Even if the standby has an integer as application_name, we can set
> s_s_names like '2,1,2,3'.
> The leading '2' is always handled as the number of sync standbys when
> s_r_method = 'priority'.

Hm. I agree with Fujii-san here, having the number of sync standbys
defined in a parameter that should have a list of names is a bit
confusing. I'd rather have a separate GUC, which brings us back to one
of the first patches that I came up with, and a couple of people,
including Josh were not happy with that because this did not support
real quorum. Perhaps the final answer would be really to get a set of
hooks, and a contrib module making use of that.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-31 Thread Masahiko Sawada
On Sun, Jan 31, 2016 at 1:17 PM, Michael Paquier
 wrote:
> On Thu, Jan 28, 2016 at 10:10 PM, Masahiko Sawada wrote:
>> By the discussions so far, I'm planning to have several replication
>> methods such as 'quorum', 'complex' in the feature, and the each
>> replication method specifies the syntax of s_s_names.
>> It means that s_s_names could have the number of sync standbys like
>> what current patch does.
>
> What if the application_name of a standby node has the format of an integer?

Even if the standby has an integer as application_name, we can set
s_s_names like '2,1,2,3'.
The leading '2' is always handled as the number of sync standbys when
s_r_method = 'priority'.

Regards,

--
Masahiko Sawada


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-31 Thread Tom Lane
Vitaly Burovoy  writes:
> On 1/30/16, Pavel Stehule  wrote:
>> I thought about it, but it is not possible. Every PG_TRY/CATCH must be
>> finished by RETHROW.

> No, src/include/utils/elog.h tells different (emphasizes are mine):
> "The error recovery code can _optionally_ do PG_RE_THROW() to
> propagate the _same_ error outwards."

> So you can use it without rethrowing.

You can use it without re-throwing, but ONLY if you use the full
subtransaction mechanism to clean up.  Assuming that the only possible
errors are ones that don't need cleanup will get your patch rejected.

IME, in practically all cases in simple C functions, it's better to find
a way that doesn't require using TRY/CATCH to catch an expected error.
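
To make that concrete, here is a rough backend-side sketch, not taken from any
patch in this thread, of catching an expected error without re-throwing it by
wrapping the call in a subtransaction, roughly the way PL/pgSQL exception
blocks do; the function name and callback are made up:

#include "postgres.h"

#include "access/xact.h"
#include "utils/resowner.h"

/*
 * Run 'risky' and report whether it succeeded.  The subtransaction is what
 * makes it safe to swallow the error: rolling it back cleans up whatever
 * state the aborted code left behind.
 */
static bool
run_without_rethrow(void (*risky) (void))
{
	MemoryContext oldcontext = CurrentMemoryContext;
	ResourceOwner oldowner = CurrentResourceOwner;
	bool		ok = true;

	BeginInternalSubTransaction(NULL);
	/* keep allocating in the caller's context, not the subxact's */
	MemoryContextSwitchTo(oldcontext);

	PG_TRY();
	{
		risky();
		ReleaseCurrentSubTransaction();
	}
	PG_CATCH();
	{
		MemoryContextSwitchTo(oldcontext);
		FlushErrorState();		/* drop the error data instead of re-throwing */

		/* aborting the subtransaction cleans up what the error left behind */
		RollbackAndReleaseCurrentSubTransaction();
		ok = false;
	}
	PG_END_TRY();

	/* restore the caller's memory context and resource owner */
	MemoryContextSwitchTo(oldcontext);
	CurrentResourceOwner = oldowner;

	return ok;
}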

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS]WIP: Covering + unique indexes.

2016-01-31 Thread David Rowley
On 27 January 2016 at 03:35, Anastasia Lubennikova
 wrote:
> including_columns_3.0 is the latest version of patch.
> And changes regarding the previous version are attached in a separate patch.
> Just to ease the review and debug.

Hi,

I've made another pass over the patch. There's still a couple of
things that I think need to be looked at.

Do we need the "b (included)" here? The key is (a) = (1). Having
irrelevant details might be confusing.

postgres=# create table a (a int not null, b int not null);
CREATE TABLE
postgres=# create unique index on a (a) including(b);
CREATE INDEX
postgres=# insert into a values(1,1);
INSERT 0 1
postgres=# insert into a values(1,1);
ERROR:  duplicate key value violates unique constraint "a_a_b_idx"
DETAIL:  Key (a, b (included))=(1, 1) already exists.

Extra tabs:
/* Truncate nonkey attributes when inserting on nonleaf pages. */
if (rel->rd_index->indnatts != rel->rd_index->indnkeyatts
&& !P_ISLEAF(lpageop))
{
itup = index_reform_tuple(rel, itup,
rel->rd_index->indnatts, rel->rd_index->indnkeyatts);
}

In index_reform_tuple() I find it a bit scary that you change the
TupleDesc's number of attributes then set it back again once you're
finished reforming the shortened tuple.
Maybe it would be better to modify index_form_tuple() to accept a new
argument with a number of attributes, then you can just Assert that
this number is never higher than the number of attributes in the
TupleDesc.
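
A rough sketch of that direction (the wrapper name is made up, and this is not
the actual suggested change to index_form_tuple() itself); doing the natts
dance on a local copy of the descriptor at least keeps it away from the
relation's own TupleDesc:

#include "postgres.h"

#include "access/itup.h"

/*
 * Sketch only: build an index tuple from the first nkeyatts columns by
 * temporarily shortening a copy of the descriptor, with the bound asserted
 * up front rather than fixed up afterwards.
 */
static IndexTuple
index_form_tuple_prefix(TupleDesc tupdesc, Datum *values, bool *isnull,
						int nkeyatts)
{
	TupleDesc	keydesc;
	IndexTuple	itup;

	Assert(nkeyatts > 0 && nkeyatts <= tupdesc->natts);

	keydesc = CreateTupleDescCopy(tupdesc);
	keydesc->natts = nkeyatts;	/* use only the leading key columns */

	itup = index_form_tuple(keydesc, values, isnull);

	keydesc->natts = tupdesc->natts;	/* restore before freeing the copy */
	FreeTupleDesc(keydesc);

	return itup;
}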

I'm also not that keen on index_reform_tuple() in general. I wonder if
there's a way we can just keep the Datum/isnull arrays a bit longer,
and only form the tuple when needed. I've not looked into this in
detail, but it does look like reforming the tuple is not going to be
cheap.

If we do need to keep this function, I think a better name might be
index_trim_tuple() and I don't think you need to pass the original
length. It might make sense to Assert() that the trim length is
smaller than the tuple size.

What statement will cause this:

numberOfKeyAttributes = list_length(stmt->indexParams);
if (numberOfKeyAttributes <= 0)
ereport(ERROR,
(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
errmsg("must specify at least one key column")));

I seem to just get errors from the parser when trying.


Much of this goes over 80 chars:

/*
* We append any INCLUDING columns onto the indexParams list so that
* we have one list with all columns. Later we can determine which of these
* are key columns, and which are just part of the INCLUDING list by
check the list
* position. A list item in a position less than ii_NumIndexKeyAttrs is part of
* the key columns, and anything equal to and over is part of the
* INCLUDING columns.
*/
stmt->indexParams = list_concat(stmt->indexParams, stmt->indexIncludingParams);

in gistrescan() there is some code:

for (attno = 1; attno <= natts; attno++)
{
TupleDescInitEntry(so->giststate->fetchTupdesc, attno, NULL,
  scan->indexRelation->rd_opcintype[attno - 1],
  -1, 0);
}

Going by RelationInitIndexAccessInfo() rd_opcintype[] is allocated to
be sized by the number of key columns, but this loop goes over the
number of attribute columns.
Perhaps this is not a big problem since GIST does not support
INCLUDING columns, but it does seem wrong still.

Which brings me to the fact that I've spent a bit of time trying to
look for places where you've forgotten to change natts to nkeyatts. I
did find this one, but I don't have much confidence that there's not
lots more places that have been forgotten. Apart from this one, how
confident are you that you've found all the places? I'm getting
towards being happy with the code that I see that's been changed, but
I'm hesitant to mark as "Ready for committer" due to not being all
that comfortable that all the code that needs to be updated has been
updated. I'm not quite sure of a good way to find all these places. I'm
wondering about hacking the code so that each btree index which is
created with > 1 column puts all but the first column into the
INCLUDING columns, and then running the regression tests to see if there
are any crashes. I'm really not sure how else to increase the
confidence levels on this. Do you have ideas?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers