[HACKERS] PostgreSQL 9.5.0 regress tests fails with Python 3.5.0

2016-01-08 Thread Pavel Stehule
Hi

make check-world fails,

It is fixed in HEAD.

Regards

Pavel


Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-08 Thread Magnus Hagander
On Thu, Jan 7, 2016 at 10:57 PM, Joshua D. Drake wrote:

> On 01/07/2016 12:32 PM, Jim Nasby wrote:
>
>> On 1/7/16 1:49 PM, Josh Berkus wrote:
>>
>>> Yeah, we could also get rid of this conversation:
>>>
>>> "Here's a patch for X, which is on the TODO list"
>>>
>>> "Oh, we've obsolesced that, that was added to the TODO before we had Y"
>>>
>>> ... by auto-closing TODO items at a certain age.
>>>
>>
>> Even if not auto-closed at least it'd be easy to find old items.
>>
>> Bonus points if we attract some volunteer project managers that will
>> keep tabs of all those kinds of things...
>>
>
> /me waves hand
>
> There are quite a few contributing companies that likely have people that
> could help out with this in an educated fashion but aren't coders.


Sort of like how they could also have helped out with cf management?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Optimizer questions

2016-01-08 Thread Konstantin Knizhnik


Attached please find an improved version of the optimizer patch for the LIMIT clause.
Now I try to apply this optimization only to non-trivial columns requiring
evaluation.
Maybe it would be better to take into account the cost of evaluating these
columns, but right now I just detect non-variable columns.




On 01/06/2016 12:03 PM, David Rowley wrote:

On 6 January 2016 at 13:13, Alexander Korotkov wrote:

On Wed, Jan 6, 2016 at 12:08 AM, Tom Lane wrote:

konstantin knizhnik writes:
> 1. The cost compared in grouping_planner doesn't take into account the
cost of get_authorized_users - it is not changed when I am altering the
function cost. Is this correct behavior?

The general problem of accounting for tlist eval cost is not handled very
well now, but especially not with respect to the idea that different paths
might have different tlist costs.  I'm working on an upper-planner rewrite
which should make this better, or at least make it practical to make it
better.


Hmm... Besides costing, it would be nice to postpone calculation of
expensive tlist functions until after LIMIT.


I'd agree that it would be more than the costings that would need to be 
improved here.

The simplest demonstration of the problem I can think of is if I apply the
following:

diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index 29d92a7..2ec9822 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -641,6 +641,8 @@ int4pl(PG_FUNCTION_ARGS)

result = arg1 + arg2;

+   elog(NOTICE, "int4pl(%d, %d)", arg1, arg2);
+
/*
 * Overflow check.  If the inputs are of different signs then their sum
 * cannot overflow.  If the inputs are of the same sign, their sum had

Then do:

create table a (b int);
insert into a select generate_series(1,10);
select b+b as bb from a order by b limit 1;
NOTICE:  int4pl(1, 1)
NOTICE:  int4pl(2, 2)
NOTICE:  int4pl(3, 3)
NOTICE:  int4pl(4, 4)
NOTICE:  int4pl(5, 5)
NOTICE:  int4pl(6, 6)
NOTICE:  int4pl(7, 7)
NOTICE:  int4pl(8, 8)
NOTICE:  int4pl(9, 9)
NOTICE:  int4pl(10, 10)
 bb
----
  2
(1 row)

We can see that int4pl() is needlessly called 9 times. Although, I think this
only applies to queries with LIMIT. I agree that it does seem like an
interesting route for optimisation.

It seems worthwhile to investigate how we might go about improving this so that 
the evaluation of the target list happens after LIMIT, at least for the columns 
which are not required before LIMIT.
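In the meantime, a manual workaround is to push the ORDER BY and LIMIT into a
subquery so that the expression is evaluated only for the surviving rows; a
sketch, using the table "a" from the example above:

select b+b as bb from (select b from a order by b limit 1) sub;
NOTICE:  int4pl(1, 1)
 bb
----
  2
(1 row)

The subquery cannot be flattened because of its LIMIT, so int4pl() runs once
instead of ten times.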

Konstantin, are you thinking of looking into this more, with plans to implement 
code to improve this?

--
 David Rowley http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 797df31..4cbccac 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -106,6 +106,7 @@ static bool choose_hashed_distinct(PlannerInfo *root,
 static List *make_subplanTargetList(PlannerInfo *root, List *tlist,
 	   AttrNumber **groupColIdx, bool *need_tlist_eval);
 static int	get_grouping_column_index(Query *parse, TargetEntry *tle);
+static int	get_sort_column_index(Query *parse, TargetEntry *tle);
 static void locate_grouping_columns(PlannerInfo *root,
 		List *tlist,
 		List *sub_tlist,
@@ -2402,6 +2403,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		  parse->limitCount,
 		  offset_est,
 		  count_est);
+		if (parse->sortClause && tlist != result_plan->targetlist)
+		{			
+			result_plan = (Plan *) make_result(root,
+			   tlist,
+			   NULL,
+			   result_plan);			
+		}
 	}
 
 	/*
@@ -3964,8 +3972,8 @@ make_subplanTargetList(PlannerInfo *root,
 	   bool *need_tlist_eval)
 {
 	Query	   *parse = root->parse;
-	List	   *sub_tlist;
-	List	   *non_group_cols;
+	List	   *sub_tlist = NIL;
+	List	   *non_group_cols = NIL;
 	List	   *non_group_vars;
 	int			numCols;
 
@@ -3978,6 +3986,61 @@ make_subplanTargetList(PlannerInfo *root,
 	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
+		if (parse->sortClause && limit_needed(parse)) {
+			ListCell   *tl;
+			bool contains_non_vars = false;
+			*need_tlist_eval = false;	/* only eval if not flat tlist */
+			foreach(tl, tlist)
+			{
+				TargetEntry *tle = (TargetEntry *) lfirst(tl);
+				int			colno;
+
+				colno = get_sort_column_index(parse, tle);
+				if (colno >= 0)
+				{
+					TargetEntry *newtle;
+
+					newtle = makeTargetEntry(tle->expr,
+											 list_length(sub_tlist) 

[HACKERS] pg_dump and premature optimizations for objects not to be dumped

2016-01-08 Thread Tom Lane
I looked into Karsten Hilbert's report here
http://www.postgresql.org/message-id/20160108114529.gb22...@hermes.hilbert.loc
of being unable to run pg_upgrade on a database containing the pg_trgm
extension.  After some investigation, the problem is explained like this:

1. Karsten had installed pg_trgm into pg_catalog rather than some user
schema.

2. When getTypes() processes the gtrgm type created by pg_trgm, it
decides the type won't be dumped (cf selectDumpableType, which quite
properly rejects types in pg_catalog).  This causes it not to generate
a ShellTypeInfo subsidiary object.

3. Later, getExtensionMembership decides that we *should* dump gtrgm
and associated I/O routines, since we're in binary-upgrade mode.

4. Later still, repairTypeFuncLoop breaks the dependency loops between
gtrgm and its I/O routines, but it doesn't mark the shell type as dumpable
because there isn't one.

5. So we end up with no shell type being dumped, which does not work
in binary-upgrade mode because we can't create the shell type when
we see the first I/O function; we don't know what OID to give it.
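For reference, the statement sequence pg_dump needs to emit looks roughly
like this (a sketch modeled on pg_trgm's type, not actual pg_dump output):

CREATE TYPE gtrgm;   -- shell type: reserves the name (and, when upgrading, the OID)
CREATE FUNCTION gtrgm_in(cstring) RETURNS gtrgm
    AS '$libdir/pg_trgm' LANGUAGE C STRICT;
CREATE FUNCTION gtrgm_out(gtrgm) RETURNS cstring
    AS '$libdir/pg_trgm' LANGUAGE C STRICT;
CREATE TYPE gtrgm (INTERNALLENGTH = -1, INPUT = gtrgm_in, OUTPUT = gtrgm_out);

Without the explicit shell-type command, the first CREATE FUNCTION creates the
shell implicitly, and in binary-upgrade mode there is then no way to assign it
the preselected OID.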


The core of the problem, therefore, is the choice in getTypes to not
create a ShellTypeInfo if it thinks the type is not to be dumped.  That
was probably fine when it was written, but now that we can change our
minds about object dumpability later, it's clearly broken.  It's really
just unnecessary optimization anyway, since if we create the object and
don't use it, we've not wasted much but cycles.  I've verified that a
patch like this:

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 56c0528..6acbf4a 100644
*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*** getTypes(Archive *fout, int *numTypes)
*** 3664,3671 
		 * should copy the base type's catId, but then it might capture the
		 * pg_depend entries for the type, which we don't want.
		 */
!		if (tyinfo[i].dobj.dump && (tyinfo[i].typtype == TYPTYPE_BASE ||
!									tyinfo[i].typtype == TYPTYPE_RANGE))
		{
			stinfo = (ShellTypeInfo *) pg_malloc(sizeof(ShellTypeInfo));
			stinfo->dobj.objType = DO_SHELL_TYPE;
--- 3664,3671 
		 * should copy the base type's catId, but then it might capture the
		 * pg_depend entries for the type, which we don't want.
		 */
!		if (tyinfo[i].typtype == TYPTYPE_BASE ||
!			tyinfo[i].typtype == TYPTYPE_RANGE)
		{
			stinfo = (ShellTypeInfo *) pg_malloc(sizeof(ShellTypeInfo));
			stinfo->dobj.objType = DO_SHELL_TYPE;

fixes Karsten's symptom without obvious other problems.

However ... there are a whole bunch of *other* places where pg_dump
skips work on objects that are not to be dumped, and some of them
would add quite a bit more cost than this one.  An example is just
above this code, where we skip getDomainConstraints() for domains
we think aren't to be dumped.  A fix for that wouldn't be a one-liner
either, because if we do pull in the domain's constraints unconditionally,
we'd have to do something to cause the domain's own dumpability flag
(and updates thereof) to get propagated to them.

Even if we fix all these issues today, it's not hard to imagine new bugs
of similar ilk sneaking in.  Exacerbating this is that some of those
checks are fine because they occur after getExtensionMembership(), at
which point the dump-or-not decisions won't change.  But in most places
in pg_dump, it's not instantly obvious whether you're looking at a flag
that is still subject to change or not.

The thing that seems possibly most robust to me is to pull in the
extension membership data *first* and incorporate it into the initial
selectDumpableObject tests, thus getting us back to the pre-9.1 state
of affairs where the initial settings of the object dump flags could
be trusted.  This might be expensive though; and it would certainly add
a good chunk of work to the race-condition window where we've taken
pg_dump's transaction snapshot but not yet acquired lock on any of
the tables.

Thoughts?  Anybody have another idea about what to do about this?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL 9.5.0 regress tests fails with Python 3.5.0

2016-01-08 Thread Pavel Stehule
2016-01-08 18:45 GMT+01:00 Pavel Stehule :

>
>
> 2016-01-08 18:33 GMT+01:00 Pavel Stehule :
>
>>
>>
>> 2016-01-08 17:42 GMT+01:00 Tom Lane :
>>
>>> Pavel Stehule  writes:
>>> > make check-world fails,
>>> > It is fixed in HEAD.
>>>
>>> Hmm.  Apparently when Peter back-patched f16d52269, he missed the 9.5
>>> branch :-(.  I don't have Python 3.5 here, but blindly pushed the same
>>> patch into 9.5.
>>>
>>
> I see, you did it.
>
> Thank you
>

all tests passed

Regards

Pavel


>
> Pavel
>
>
>>
>> I'll send a patch
>>
>> Pavel
>>
>>
>>>
>>> regards, tom lane
>>>
>>
>>
>


Re: [HACKERS] PostgreSQL 9.5.0 regress tests fails with Python 3.5.0

2016-01-08 Thread Pavel Stehule
2016-01-08 18:33 GMT+01:00 Pavel Stehule :

>
>
> 2016-01-08 17:42 GMT+01:00 Tom Lane :
>
>> Pavel Stehule  writes:
>> > make check-world fails,
>> > It is fixed in HEAD.
>>
>> Hmm.  Apparently when Peter back-patched f16d52269, he missed the 9.5
>> branch :-(.  I don't have Python 3.5 here, but blindly pushed the same
>> patch into 9.5.
>>
>
I see, you did it.

Thank you

Pavel


>
> I'll send a patch
>
> Pavel
>
>
>>
>> regards, tom lane
>>
>
>


[HACKERS] Fwd: Re: [DOCS] Document Upper Limit for NAMEDATALEN in pgsql 9.5+

2016-01-08 Thread Tom Lane
[ redirecting from -docs, which isn't the best place to discuss code fixes ]

Whoever thought this was a good idea:

StaticAssertStmt(NAMEDATALEN <= MAX_LEVENSHTEIN_STRLEN,
 "Levenshtein hinting mechanism restricts NAMEDATALEN");

is nuts.

A minimal fix that would not put stumbling blocks in the way of changes
to NAMEDATALEN is to redefine MAX_LEVENSHTEIN_STRLEN as
Max(255, NAMEDATALEN).  But I wonder why we have to have an arbitrary limit
in this code at all.  I trust nobody here believes this is the only O(N^2)
algorithm in the backend; the others don't have this sort of thing in
front of them.

regards, tom lane


--- Forwarded Message

Date: Fri, 08 Jan 2016 14:49:22 -0500
From: Tom Lane 
To: Kevin Day 
cc: pgsql-d...@postgresql.org
Subject: Re: [DOCS] Document Upper Limit for NAMEDATALEN in pgsql 9.5+

Kevin Day  writes:
> Postgresql 9.5+ may now fail to compile if NAMEDATALEN has been set
> absurdly large (in my case, 384).
> The file src/backend/utils/adt/levenshtein.c does a static assert on
> "NAMEDATALEN <= MAX_LEVENSHTEIN_STRLEN" with MAX_LEVENSHTEIN_STRLEN
> currently set to 255.

Hmm.  I'm not sure whether 384 is "absurdly large", but I do wonder
why the levenshtein code gets to dictate limits on NAMEDATALEN at all.

Or to put it even more bluntly, I'm not sure that there is anything
whatever about MAX_LEVENSHTEIN_STRLEN that is well thought out.  Why not
rip it out and put some CHECK_FOR_INTERRUPTS tests into the loops instead?

regards, tom lane


-- 
Sent via pgsql-docs mailing list (pgsql-d...@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-docs

--- End of Forwarded Message


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fwd: Re: [DOCS] Document Upper Limit for NAMEDATALEN in pgsql 9.5+

2016-01-08 Thread Peter Geoghegan
On Fri, Jan 8, 2016 at 12:05 PM, Tom Lane  wrote:
> Whoever thought this was a good idea:
>
> StaticAssertStmt(NAMEDATALEN <= MAX_LEVENSHTEIN_STRLEN,
>  "Levenshtein hinting mechanism restricts NAMEDATALEN");
>
> is nuts.

Then I must be nuts.

> A minimal fix that would not put stumbling blocks in the way of changes
> to NAMEDATALEN is to redefine MAX_LEVENSHTEIN_STRLEN as
> Max(255, NAMEDATALEN).  But I wonder why we have to have an arbitrary limit
> in this code at all.  I trust nobody here believes this is the only O(N^2)
> algorithm in the backend; the others don't have this sort of thing in
> front of them.

The default NAMEDATALEN is of course 64. I didn't feel that the static
assertion was especially restrictive, given that it still leaves
significant headroom. I'm slightly surprised that this cropped up. I
didn't want to presume that the algorithm being O(N^2) was the only
reason for the restriction. This comment seems pretty scary to me:

/*
 * For security concerns, restrict excessive CPU+RAM usage. (This
 * implementation uses O(m) memory and has O(mn) complexity.)
 */
if (m > MAX_LEVENSHTEIN_STRLEN ||
n > MAX_LEVENSHTEIN_STRLEN)
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
 errmsg("argument exceeds the maximum length of %d bytes",
MAX_LEVENSHTEIN_STRLEN)));
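That check is reachable from SQL through contrib/fuzzystrmatch, which shares
this code; a quick sketch:

CREATE EXTENSION fuzzystrmatch;
SELECT levenshtein(repeat('a', 256), 'b');
ERROR:  argument exceeds the maximum length of 255 bytes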

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-08 Thread Jim Nasby

On 1/8/16 9:07 AM, Tom Lane wrote:

Dunno about that.  While a CF manager doesn't necessarily have to have
a commit bit, I think he/she probably does have to be someone who is known
and respected on the -hackers list.  Otherwise, nagging to finish patches
is likely to be ignored or even counterproductive.


IMHO that touches on the core issue here. JD and anyone else can offer 
up all the resources under the sun, but *it's up to the community to 
decide to use them*. In this specific case, if the community agrees that 
someone that's not a code contributor will be the CFM for a specific CF 
then I think it would work fine. Short of that, your observation might 
be very accurate.


I feel a bit bad about some of the emails I've sent on this and related 
topics because it feels hand-wavy, but I don't know what else to do. I 
can promote these ideas that I think will be very helpful, but without a 
strong consensus in the community that it's desirable any efforts will 
probably fail.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PROPOSAL] Improvements of Hunspell dictionaries support

2016-01-08 Thread Alvaro Herrera
Artur Zakirov wrote:

> *** 77,83  typedef struct spell_struct
>   
>   typedef struct aff_struct
>   {
> ! uint32  flag:8,
>   type:1,
>   flagflags:7,
>   issimple:1,
> --- 74,80 
>   
>   typedef struct aff_struct
>   {
> ! uint32  flag:16,
>   type:1,
>   flagflags:7,
>   issimple:1,

By doing this, you're using 40 bits of a 32-bit-wide field.  What does
this mean?  Are the final 8 bits lost?  Does the compiler allocate a
second uint32 member for those additional bits?  I don't know, but I
don't think this is a very clean idea.

>   typedef struct spell_struct
>   {
> ! union
>   {
> ! /*
> !  * flag is filled in by NIImportDictionary. After 
> NISortDictionary, d
> !  * is valid and flag is invalid.
> !  */
> ! char    flag[MAXFLAGLEN];
> ! struct
> ! {
> ! int affix;
> ! int len;
> ! }   d;
> ! }   p;
>   char    word[FLEXIBLE_ARRAY_MEMBER];
>   } SPELL;
>   
> --- 57,72 
>   
>   typedef struct spell_struct
>   {
> ! struct
>   {
> ! int affix;
> ! int len;
> ! }   d;
> ! /*
> !  * flag is filled in by NIImportDictionary. After NISortDictionary, d
> !  * is valid and flag is invalid.
> !  */
> ! char   *flag;
>   char    word[FLEXIBLE_ARRAY_MEMBER];
>   } SPELL;

Here you removed the union, with no rationale for doing so.  Why did you
do it?  Can it be avoided?  Because of the comment, I'd imagine that d
and flag are valid at different times, so at any time we care about only
one of them; but you haven't updated the comment stating the reason for
that no longer to be the case.  I suspect you need to keep flag valid
after NISortDictionary has been called, but if so why?  If "flag" is
invalid as the comment says, what's the reason for keeping it?


The routines decodeFlag and isAffixFlagInUse could do with more
comments.  Your patch adds zero.  Actually the whole file has not nearly
enough comments; adding some more would be very good.

Actually, after some more reading, I think this code is pretty terrible.
I have a hard time figuring out how the original works, which makes it
even more difficult to figure out whether your changes make sense.  I
would have to take your patch on faith, which doesn't sound so great an
idea.

palloc / cpalloc / tmpalloc make the whole mess even more confusing.
Why does this file have three ways to allocate memory?

Not sure what's a good way to go about this.  I am certainly not going
to commit this as is, because if I do whatever bugs you have are going
to become my problem; and with the severe lack of documentation and
given how fiddly this stuff is, I bet there are going to be a bunch of
bugs.  I suspect most committers are going to be in the same position.
I think you should start by adding a few comments here and there on top
of the original to explain how it works, then your patch on top.  I
suppose it's going to be a lot of work for you but I don't see any other
way.

A top-level overview about it would be good, too.  The current comment
at top of file states:

 * spell.c
 *  Normalizing word with ISpell

which is, err, somewhat laconic.

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Add STRICT to some regression test C functions.

2016-01-08 Thread Andreas Seltenreich
Hi,

some of the C functions in the regression test DB readily crash when
passing NULL input values.  The regression tests themselves do not pass
NULL values to them, but when the regression database is used as a basis
for fuzz testing, they cause a lot of noise.  Maybe someone can sneak
this patch in?
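
For context, a function declared STRICT is never called with NULL arguments;
the executor returns NULL directly, so the C code cannot dereference a null
pointer. A sketch with one of the functions touched below:

SELECT boxarea(NULL);  -- without STRICT, the C code runs and may crash
SELECT boxarea(NULL);  -- with STRICT, returns NULL without entering the C code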

Thanks,
Andreas
From 2711471d48c2e58809c2f4617d36352c5903bbd9 Mon Sep 17 00:00:00 2001
From: Andreas Seltenreich 
Date: Sun, 3 Jan 2016 19:05:06 +0100
Subject: [PATCH] Add STRICT to some regression test C functions.

These functions readily crash when passing NULL input values.  The
regression tests themselves do not pass NULL values to them, but when
the regression database is used as a basis for fuzz testing, they
cause a lot of noise.
---
 src/test/regress/input/create_function_2.source  | 10 +-
 src/test/regress/output/create_function_2.source | 10 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/src/test/regress/input/create_function_2.source b/src/test/regress/input/create_function_2.source
index 1b013ae..486803d 100644
--- a/src/test/regress/input/create_function_2.source
+++ b/src/test/regress/input/create_function_2.source
@@ -74,27 +74,27 @@ CREATE FUNCTION user_relns()
 CREATE FUNCTION pt_in_widget(point, widget)
RETURNS bool
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 
 CREATE FUNCTION overpaid(emp)
RETURNS bool
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 
 CREATE FUNCTION boxarea(box)
RETURNS float8
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 
 CREATE FUNCTION interpt_pp(path, path)
RETURNS point
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 
 CREATE FUNCTION reverse_name(name)
RETURNS name
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 
 CREATE FUNCTION oldstyle_length(int4, text)
RETURNS int4
diff --git a/src/test/regress/output/create_function_2.source b/src/test/regress/output/create_function_2.source
index 98e1c29..bdfc5be 100644
--- a/src/test/regress/output/create_function_2.source
+++ b/src/test/regress/output/create_function_2.source
@@ -58,23 +58,23 @@ CREATE FUNCTION user_relns()
 CREATE FUNCTION pt_in_widget(point, widget)
RETURNS bool
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 CREATE FUNCTION overpaid(emp)
RETURNS bool
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 CREATE FUNCTION boxarea(box)
RETURNS float8
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 CREATE FUNCTION interpt_pp(path, path)
RETURNS point
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 CREATE FUNCTION reverse_name(name)
RETURNS name
AS '@libdir@/regress@DLSUFFIX@'
-   LANGUAGE C;
+   LANGUAGE C STRICT;
 CREATE FUNCTION oldstyle_length(int4, text)
RETURNS int4
AS '@libdir@/regress@DLSUFFIX@'
-- 
2.1.4


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] snapshot too old, configured by time

2016-01-08 Thread Alvaro Herrera
Kevin Grittner wrote:

> People have said that issuing SQL commands directly from a TAP test
> via DBD::Pg is not acceptable for a core feature, and (despite
> assertions to the contrary) I see no way to test this feature with
> existing testing mechanisms.  The bigger set of work here, if we
> don't want this feature to go in without any testing scripts (which
> is not acceptable IMO), is to enhance the isolation tester or
> hybridize TAP testing with the isolation tester.

Is it possible to use the PostgresNode stuff to test this?  If not,
perhaps if you restate what additional capabilities you need we could
look into adding them there.  I suspect that what you need is the
ability to keep more than one session open and feed them commands;
perhaps we could have the framework have a function that opens a psql
process and returns a FD to which the test program can write, using the
IPC::Run stuff (start / pump / finish).

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Add STRICT to some regression test C functions.

2016-01-08 Thread Piotr Stefaniak

On 01/09/2016 12:05 AM, Andreas Seltenreich wrote:

Maybe someone can sneak this patch in?


Exactly this is done by part of another patch, by Michael Paquier, which
he sent to an off-list thread.

Michael had asked for feedback there, but I didn't have the time to test
the patch. Just from reading it I think it's good, though. And it still
applies and passes HEAD's make check-world tests.




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Vitaly Burovoy
On 1/8/16, Alvaro Herrera  wrote:
> Simon Riggs wrote:
>> On 8 January 2016 at 14:14, Vitaly Burovoy 
>> wrote:
>>
>> > What about SET NOT NULL constraints? There is no VALIDATE CONSTRAINT
>> > for
>> > it.
>>
>> Sounds like a useful addition.
>
> Yes.  In order to make it a reality you need to make the NOT NULL
> constraints appear in pg_constraint.  Years ago I wrote a patch to do
> that, which was very close to done.  It would be really cool if Vitaly
> or someone else could look into that patch, update it and get it ready
> for commit.
>
> If someone is interested, I can send the patch along.
>
> --
> Álvaro Herrera http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>

I guess you are talking about the other thread[1].
I'm not sure I have enough experience in Postgres hacking to start
working on it right now, but I'll have a look.
IMO the best way is to raise that topic in a letter summarizing what
troubles are left there (in addition to rebasing).


[1] 
http://www.postgresql.org/message-id/flat/20110707213401.ga27...@alvh.no-ip.org

-- 
Best regards,
Vitaly Burovoy


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pglogical - logical replication contrib module

2016-01-08 Thread Steve Singer

On 12/31/2015 06:34 PM, Petr Jelinek wrote:

Hi,

I'd like to submit the replication solution which is based on the 
pglogical_output [1] module (which is obviously needed for this to 
compile).




Hi,

make check gives me


for extra in ../../contrib/pglogical_output contrib/pglogical; do make 
-C '../..'/$extra DESTDIR='/usr/local/src/postgresql'/tmp_install 
install >>'/usr/local/src/postgresql'/tmp_install/log/install.log || 
exit; done
make[1]: *** ../../../../contrib/pglogical_output: No such file or 
directory.  Stop.

../../src/Makefile.global:325: recipe for target 'temp-install' failed
make: *** [temp-install] Error 2
ssinger@ssinger-laptop:/usr/local/src/postgresql/contrib/pglogical$

The attached patch fixes that, but it then creates the test database
'contrib_regression', not 'regression'. Changing pglogical.provider_dsn =
'contrib_regression' still leaves me with a lot of failures.



diff --git a/contrib/pglogical/Makefile b/contrib/pglogical/Makefile
new file mode 100644
index 1640f63..a4dab88
*** a/contrib/pglogical/Makefile
--- b/contrib/pglogical/Makefile
*** include $(top_srcdir)/contrib/contrib-gl
*** 27,34 
  # typical installcheck users do not have (e.g. buildfarm clients).
  @installcheck: ;
  
! EXTRA_INSTALL += $(top_srcdir)/contrib/pglogical_output
! EXTRA_REGRESS_OPTS += $(top_srcdir)/contrib/pglogical/regress-postgresql.conf
  
  override CPPFLAGS := $(CPPFLAGS) -I$(top_srcdir)/contrib/pglogical_output
  
--- 27,34 
  # typical installcheck users do not have (e.g. buildfarm clients).
  @installcheck: ;
  
! EXTRA_INSTALL += contrib/pglogical_output
! EXTRA_REGRESS_OPTS += --temp-config=regress-postgresql.conf
  
  override CPPFLAGS := $(CPPFLAGS) -I$(top_srcdir)/contrib/pglogical_output
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Alvaro Herrera
Vitaly Burovoy wrote:

> I guess you are talking about the other thread[1].
> I'm not sure I have enough experience in Postgres hacking to start
> working on it right now, but I'll have a look.
> IMO the best way is to raise that topic by a letter with summary what
> troubles are left there (in addition to a rebasing one).
> 
> 
> [1] 
> http://www.postgresql.org/message-id/flat/20110707213401.ga27...@alvh.no-ip.org

Here's a newer thread:
https://www.postgresql.org/message-id/1343682669-sup-2532%40alvh.no-ip.org
which includes some points needing additional work.

In my local tree I have a branch that maybe is not the same as the last
patch I posted in that thread, because the last commit has a newer date.
Here I attach what that branch has.  It applies cleanly on top of
03bda4535ee119d3.  I don't remember if it was in compilable state at the
time I abandoned it.

It needs some work to rebase, but from a quick experimental "git merge"
I just ran, it's not too bad.

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 8ac8373..4c9686f 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -14,12 +14,16 @@
 #include "postgres.h"
 
 #include "catalog/index.h"
+#include "catalog/pg_constraint.h"
+#include "commands/constraint.h"
 #include "commands/trigger.h"
 #include "executor/executor.h"
 #include "utils/builtins.h"
 #include "utils/rel.h"
+#include "utils/syscache.h"
 #include "utils/tqual.h"
 
+static char *tryExtractNotNull_internal(Node *node, Relation rel);
 
 /*
  * unique_key_recheck - trigger function to do a deferred uniqueness check.
@@ -188,3 +192,121 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 
 	return PointerGetDatum(NULL);
 }
+
+Constraint *
+createCheckNotNullConstraint(Oid nspid, char *constraint_name,
+			 const char *relname, const char *colname)
+{
+	Constraint *check = makeNode(Constraint);
+	ColumnRef  *colref;
+	NullTest   *nulltest;
+
+	colref = (ColumnRef *) makeNode(ColumnRef);
+	colref->fields = list_make1(makeString(pstrdup(colname)));
+
+	nulltest = (NullTest *) makeNode(NullTest);
+	nulltest->argisrow = false;	/* FIXME -- may be bogus! */
+	nulltest->nulltesttype = IS_NOT_NULL;
+	nulltest->arg = (Expr *) colref;
+
+	check->contype = CONSTR_CHECK;
+	check->location = -1;
+	check->conname = constraint_name ? constraint_name :
+		ChooseConstraintName(relname, colname, "not_null", nspid,
+			 NIL);
+	check->raw_expr = (Node *) nulltest;
+	check->cooked_expr = NULL;
+
+	return check;
+}
+
+/*
+ * Given a CHECK constraint, examine it and determine whether it is CHECK (col
+ * IS NOT NULL).  If it is, return the column name for which it is.  Otherwise
+ * return NULL.
+ */
+char *
+tryExtractNotNullFromCheckConstr(Constraint *constr)
+{
+	char   *retval;
+
+	Assert(constr->contype == CONSTR_CHECK);
+
+	if (constr->raw_expr != NULL)
+	{
+		retval = tryExtractNotNull_internal(constr->raw_expr, NULL);
+		if (retval != NULL)
+			return retval;
+	}
+
+	return tryExtractNotNull_internal(stringToNode(constr->cooked_expr), NULL);
+}
+
+/*
+ * As above, but use a pg_constraint row as input.
+ *
+ * tupdesc is pg_constraint's tuple descriptor, and rel is the relation the
+ * constraint is for.
+ */
+char *
+tryExtractNotNullFromCatalog(HeapTuple constrTup, TupleDesc tupdesc,
+			 Relation rel)
+{
+	Datum	val;
+	bool	isnull;
+	char   *conbin;
+	Node   *node;
+
+	val = SysCacheGetAttr(CONSTROID, constrTup, Anum_pg_constraint_conbin,
+		  );
+	if (isnull)
+		elog(ERROR, "null conbin for constraint %u",
+			 HeapTupleGetOid(constrTup));
+	conbin = TextDatumGetCString(val);
+	node = (Node *) stringToNode(conbin);
+
+	return tryExtractNotNull_internal(node, rel);
+}
+
+/*
+ * Worker for the above
+ */
+static char *
+tryExtractNotNull_internal(Node *node, Relation rel)
+{
+	if (IsA(node, NullTest))
+	{
+		NullTest *nulltest = (NullTest *) node;
+
+		if (nulltest->nulltesttype == IS_NOT_NULL)
+		{
+			if (IsA(nulltest->arg, ColumnRef))
+			{
+				ColumnRef *colref = (ColumnRef *) nulltest->arg;
+
+				if (list_length(colref->fields) == 1)
+					return strVal(linitial(colref->fields));
+			}
+			if (IsA(nulltest->arg, Var))
+			{
+				Var	   *var = (Var *) nulltest->arg;
+				TupleDesc tupdesc;
+
+				if (!RelationIsValid(rel))
+					elog(ERROR,
+						 "no relation given to extract constraint from");
+				tupdesc = RelationGetDescr(rel);
+				return NameStr(tupdesc->attrs[var->varattno - 1]->attname);
+			}
+		}
+	}
+
+	/*
+	 * XXX Need to check a few more possible wordings of NOT NULL:
+	 *
+	 * - foo IS DISTINCT FROM NULL
+	 * - NOT (foo IS NULL)
+	 */
+
+	return NULL;
+}
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index dc0665e..a3bff7d 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -231,6 +231,7 @@ 

Re: [HACKERS][PROPOSAL] New feature "... VALIDATE CONSTRAINT ... USING INDEX ..."

2016-01-08 Thread Vitaly Burovoy
On 1/8/16, Simon Riggs  wrote:
> On 8 January 2016 at 13:13, Vitaly Burovoy 
> wrote:
>
>> On 1/8/16, Simon Riggs  wrote:
>> > On 8 January 2016 at 12:49, Vitaly Burovoy 
>> > wrote:
>> >
>> >
>> >> In Postgres9.1 a new feature was implemented [1] for adding PK and
>> >> UNIQUE constraints using indexes created concurrently, but constraints
>> >> NOT NULL and CHECK still require full seqscan of a table. New CHECK
>> >> constraint allows "NOT VALID" option but VALIDATE CONSTRAINT still
>> >> does seqscan (with RowExclusiveLock, but for big and constantly
>> >> updatable table it is still awful).
>> >>
>> >> It is possible to find wrong rows in a table without seqscan if there
>> >> is an index with a predicate allows to find such rows. There is no
>> >> sense what columns it has since it is enough to check whether
>> >> index_getnext for it returns NULL (table is OK) or any tuple (table
>> >> has wrong rows).
>> >>
>> >
>> > You avoid a full seqscan by creating an index which also does a full
>> > seq
>> > scan.
>> >
>> > How does this help? The lock and scan times are the same.
>>
>> I avoid not a full seqscan, but a time when table is under
>> ExclusiveLock: index can be build concurrently without locking table.
>
>
> That is exactly what ADD ...NOT VALID  and VALIDATE already does, as of
> 9.4.
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> 
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>

I'm so sorry, I was wrong. It is a result of my old experience with
Postgres 9.2, where tables were locked by an ACCESS EXCLUSIVE lock...
=(
I missed p. E.6.3.5 in the release notes[1] for 9.4.

Nevertheless, let me ask why you reject the ability to use indexes
in the validation process?

Let's imagine a user has to add a CHECK constraint.
He tries to command:
ALTER TABLE tablename ADD CONSTRAINT tablename_expr_chk CHECK
(check_expr) NOT VALID;

It is ok. Then the command:
ALTER TABLE tablename VALIDATE CONSTRAINT tablename_expr_chk;

after some time gives an error:
ERROR:  check constraint "tablename_expr_chk" is violated by some row

Hmm... It must be fixed, but which row is wrong? How many wrong rows are there?
The best way is to create an index to find those rows (there can be
thousands or more...) and understand how it turns out that they violate the
constraint (the user was absolutely sure everything was OK before sending
"VALIDATE CONSTRAINT").
Then he deals with it (using the index for fast access to the wrong
rows), and it is time to revalidate the constraint. Hmm... The user has
already built an index with a special predicate for being sure the table
has no wrong rows! Why must he wait for a third(!) seqscan (the first two
were validating and indexing) instead of just using the already present
index with no entries?

Moreover, the most common case for a SET NOT NULL constraint is setting
a default value without locking the table, and adding the constraint after
all rows have at least the default value, as I wrote in the initial letter.
The index there is important and always present at the end of the UPDATE
process (before applying the constraint). Why (even when NOT NULL
moves to the "pg_constraint" table) do a seqscan instead of using
the index?

It is possible to use another syntax (currently for CHECK constraints
and for NOT NULLs when they appear in pg_catalog):
ALTER TABLE tablename VALIDATE CONSTRAINT tablename_expr_chk USING
INDEX indexname;

which will use the index predicate as described in the initial letter.


[1] http://www.postgresql.org/docs/9.4/static/release-9-4.html#AEN120302

-- 
Best regards,
Vitaly Burovoy


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PROPOSAL] Improvements of Hunspell dictionaries support

2016-01-08 Thread Alvaro Herrera
Artur Zakirov wrote:

> Now almost all dictionaries are loaded into PostgreSQL. But the da_dk
> dictionary does not load. I see the following error:
> 
> ERROR: invalid regular expression: quantifier operand invalid
> CONTEXT: line 439 of configuration file
> "/home/artur/progs/pgsql/share/tsearch_data/da_dk.affix": "SFX 55 0 s
> +GENITIV
> 
> If you open the affix file in an editor you can see that there is an
> incorrect format of affix 55 in line 439 (screen1.png):
 
 [ another email ]

> I also implemented a patch that fixes an error from the e-mail
> http://www.postgresql.org/message-id/562e1073.8030...@postgrespro.ru
> This patch just ignores that error.

I think it's a bad idea to just ignore these syntax errors.  This affix
file is effectively corrupt, after all, so it seems a bad idea that we
need to cope with it.  I think it would be better to raise the error
normally and instruct the user to fix the file; obviously it's better if
the upstream provider of the file fixes it.

Now, if there is proof somewhere that the file is correct, then the code
must cope in some reasonable way.  But in any case I don't think this
change is acceptable ... it can only cause pain, in the long run.
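
For reference, the user-visible failure the unpatched code raises at least
points at the file to fix; a sketch, assuming the da_dk files are installed
under tsearch_data:

CREATE TEXT SEARCH DICTIONARY da_dk_ispell (
    TEMPLATE = ispell, DictFile = da_dk, AffFile = da_dk
);
ERROR:  invalid regular expression: quantifier operand invalid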

> *** 429,443  NIAddAffix(IspellDict *Conf, int flag, char flagflags, const 
> char *mask, const c
>   err = pg_regcomp(&(Affix->reg.regex), wmask, wmasklen,
>REG_ADVANCED | REG_NOSUB,
>DEFAULT_COLLATION_OID);
>   if (err)
> ! {
> ! charerrstr[100];
> ! 
> ! pg_regerror(err, &(Affix->reg.regex), errstr, 
> sizeof(errstr));
> ! ereport(ERROR,
> ! 
> (errcode(ERRCODE_INVALID_REGULAR_EXPRESSION),
> !  errmsg("invalid regular expression: 
> %s", errstr)));
> ! }
>   }
>   
>   Affix->flagflags = flagflags;
> --- 429,437 
>   err = pg_regcomp(&(Affix->reg.regex), wmask, wmasklen,
>REG_ADVANCED | REG_NOSUB,
>DEFAULT_COLLATION_OID);
> + /* Ignore regular expression error and do not add wrong affix */
>   if (err)
> ! return;
>   }
>   
>   Affix->flagflags = flagflags;


-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-08 Thread Vladimir Borodin

> On 7 Jan 2016, at 5:26, Michael Paquier 
> wrote:
> 
> On Thu, Jan 7, 2016 at 12:20 AM, Alvaro Herrera
> > wrote:
>> Vladimir Borodin wrote:
>> 
>>> There are situations in which vacuuming big btree index causes stuck
>>> in WAL replaying on hot standby servers for quite a long time. I’ve
>>> described the problem in more details in this thread [0]. Below in
>>> that thread Kevin Grittner proposed a good way for improving btree
>>> scans so that btree vacuuming logic could be seriously simplified.
>>> Since I don’t know when that may happen I’ve done a patch that makes
>>> some improvement right now. If Kevin or someone else would expand [1]
>>> for handling all types of btree scans, I suppose, my patch could be
>>> thrown away and vacuuming logic should be strongly rewritten.
>> 
>> You submitted this patch in May 2015 -- and 7 months later, Simon came
>> up with another patch that's supposed to fix the underlying problem, so
>> that this shouldn't be a problem anymore.
>> 
>> Would you please have a look at Simon's patch, in particular verify
>> whether it solves the performance dip in your testing environment?
>> https://www.postgresql.org/message-id/CANP8%2BjJuyExr1HnTAdZraWsWkfc-octhug7YPtzPtJcYbyi4pA%40mail.gmail.com
>> (Note there's an updated patch a few emails down the thread.)
>> 
>> If it seems to fix the problem for you, I think we should mark yours
>> rejected and just apply Simon’s.

Ok, I’ll try this patch with my use case. Basically, it’s not so easy now since
I’ve partitioned that big table to avoid such problems, but there is a way to
reproduce it once again. If it helps, I agree that mine should be rejected in
favor of Simon’s patch, because my patch just reduces replication lag while
Simon’s seems to remove the lag entirely.

> 
> Simon's patch (justly) does not update lastBlockVacuumed in the case
> of toast indexes, but Vladimir's patch would still optimize this case,
> no?

I suppose you mean in the case of _not_ toast indexes. But yes, it seems that
my patch should help in that case too.

> -- 
> Michael


--
May the force be with you…
https://simply.name



Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Simon Riggs
On 8 January 2016 at 14:14, Vitaly Burovoy  wrote:

>
> What about SET NOT NULL constraints? There is no VALIDATE CONSTRAINT for
> it.


Sounds like a useful addition.

-- 
Simon Riggs http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Simon Riggs
On 8 January 2016 at 13:13, Vitaly Burovoy  wrote:

> On 1/8/16, Simon Riggs  wrote:
> > On 8 January 2016 at 12:49, Vitaly Burovoy 
> > wrote:
> >
> >
> >> In Postgres9.1 a new feature was implemented [1] for adding PK and
> >> UNIQUE constraints using indexes created concurrently, but constraints
> >> NOT NULL and CHECK still require full seqscan of a table. New CHECK
> >> constraint allows "NOT VALID" option but VALIDATE CONSTRAINT still
> >> does seqscan (with RowExclusiveLock, but for big and constantly
> >> updatable table it is still awful).
> >>
> >> It is possible to find wrong rows in a table without seqscan if there
> >> is an index with a predicate allows to find such rows. There is no
> >> sense what columns it has since it is enough to check whether
> >> index_getnext for it returns NULL (table is OK) or any tuple (table
> >> has wrong rows).
> >>
> >
> > You avoid a full seqscan by creating an index which also does a full seq
> > scan.
> >
> > How does this help? The lock and scan times are the same.
>
> I avoid not a full seqscan, but a time when table is under
> ExclusiveLock: index can be build concurrently without locking table.


That is exactly what ADD ... NOT VALID and VALIDATE already does, as of 9.4.

-- 
Simon Riggs http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-08 Thread Alvaro Herrera
Vladimir Borodin wrote:
> 
> > On 7 Jan 2016, at 5:26, Michael Paquier 
> > wrote:
> > 
> > On Thu, Jan 7, 2016 at 12:20 AM, Alvaro Herrera
> > > wrote:

> >> Would you please have a look at Simon's patch, in particular verify
> >> whether it solves the performance dip in your testing environment?
> >> https://www.postgresql.org/message-id/CANP8%2BjJuyExr1HnTAdZraWsWkfc-octhug7YPtzPtJcYbyi4pA%40mail.gmail.com
> >> (Note there's an updated patch a few emails down the thread.)
> >> 
> >> If it seems to fix the problem for you, I think we should mark yours
> >> rejected and just apply Simon’s.
> 
> Ok, I’ll try this patch with my use case. Basically, it’s not so easy
> now since I’ve partitioned that big table to not have such problems
> but there is a way to reproduce it once again. If it helps, I agree
> that my should be rejected in favor of the Simon’s patch because my
> patch just reduces replication lag but Simon’s seems to remove lag at
> all.

I would agree except for the observation on toast indexes.  I think
that's an important enough use case that perhaps we should have both.

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Vitaly Burovoy
Hello hackers,

I want to implement a new feature that allows decreasing the time a
table is held under ExclusiveLock when ALTERing new NOT NULL or
CHECK constraints.

In Postgres 9.1 a new feature was implemented [1] for adding PK and
UNIQUE constraints using indexes created concurrently, but NOT NULL and
CHECK constraints still require a full seqscan of the table. A new CHECK
constraint allows the "NOT VALID" option, but VALIDATE CONSTRAINT still
does a seqscan (with RowExclusiveLock, but for a big and constantly
updated table it is still awful).

It is possible to find wrong rows in a table without a seqscan if there
is an index whose predicate allows finding such rows. It does not matter
which columns the index has, since it is enough to check whether
index_getnext for it returns NULL (table is OK) or any tuple (table
has wrong rows).

The index must be a BTREE (since it is not supposed to hold any data), valid,
and have a predicate depending on the constraint type:
* for a NOT NULL constraint the predicate must be "(col) IS NULL";
* for a CHECK constraint the predicate must be "(expr) IS DISTINCT FROM
TRUE" (to cover both "(expr) IS NULL" and "NOT(expr)" cases).

I propose the following syntax ("action" in "ALTER TABLE"):
ALTER [ COLUMN ] column_name SET NOT NULL [ VERIFY USING INDEX index_name ]

and (all rows except the last one were taken from the documentation[2])
ADD CONSTRAINT constraint_name
CHECK ( expression ) [ NO INHERIT ]
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
[ VERIFY USING INDEX index_name ]

The new nonreserved keyword "VERIFY" is necessary to avoid people being
confused about what "USING INDEX" really does, and to show that the index is
not needed after the DDL finishes (verifies the table) correctly.

I did not succeed in my search for a similar feature (and such syntax) in
the SQL standard or in any existing DBMS.

I've done some research to become familiar with the source code (and be
ready to take advice) and have a working version for CHECK constraints.

My patch is in a WIP state since I don't know how the community will
receive the proposed feature/syntax, and some work is still necessary for
NOT NULL, documentation and psql.

So I'm ready for a discussion.


P.S.: for NOT NULL it'll allow to do something like:

ALTER TABLE tablename ADD COLUMN newcol data_type;
ALTER TABLE tablename ALTER COLUMN newcol SET DEFAULT default_expr;
ALTER TABLE tablename ADD CONSTRAINT tablename_nonnull_chk CHECK
(newcol IS NOT NULL) NOT VALID;  --optional

CREATE INDEX CONCURRENTLY tablename_chk
ON tablename (id DESC)  -- PK col(s), "DESC" is to begin from the
oldest rows
WHERE newcol IS NULL;

-- query for repeat
UPDATE tablename SET newcol=DEFAULT
WHERE id IN (
SELECT id FROM tablename WHERE newcol IS NULL
ORDER BY id DESC LIMIT 10
FOR UPDATE SKIP LOCKED
);
-- repeat above command until 0 rows is affected.

ALTER TABLE tablename ALTER COLUMN newcol SET NOT NULL VERIFY USING
INDEX tablename_chk;

DROP INDEX CONCURRENTLY tablename_chk;  -- if the command above succeed
ALTER TABLE tablename DROP CONSTRAINT tablename_nonnull_chk;  --optional

[1] http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN110279
[2] http://www.postgresql.org/docs/9.5/static/sql-createtable.html

-- 
Best regards,
Vitaly Burovoy


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Simon Riggs
On 8 January 2016 at 12:49, Vitaly Burovoy  wrote:


> In Postgres9.1 a new feature was implemented [1] for adding PK and
> UNIQUE constraints using indexes created concurrently, but constraints
> NOT NULL and CHECK still require full seqscan of a table. New CHECK
> constraint allows "NOT VALID" option but VALIDATE CONSTRAINT still
> does seqscan (with RowExclusiveLock, but for big and constantly
> updatable table it is still awful).
>
> It is possible to find wrong rows in a table without seqscan if there
> is an index with a predicate allows to find such rows. There is no
> sense what columns it has since it is enough to check whether
> index_getnext for it returns NULL (table is OK) or any tuple (table
> has wrong rows).
>

You avoid a full seqscan by creating an index which also does a full seq
scan.

How does this help? The lock and scan times are the same.

-- 
Simon Riggs http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-08 Thread Greg Stark
On Thu, Jan 7, 2016 at 6:30 PM, Jeff Janes  wrote:
> I don't completely agree with that.  I have often wanted to know when
> a specific item was added to the TODO page, and/or its individual edit
> history.

Sure, there might be other things it would be better at. But my point
is that it would have the same problem as the wiki in that it would
accumulate vague ideas that someone once thought were good, but with
no clear idea of how to implement them and no compelling argument
to convince others to work on them.

The wiki is the lowest overhead and highest visibility way of
maintaining communal information. Bug trackers exist to impose extra
structure to match an intended workflow. That's fine for bugs or a
closely managed project but it's the last thing you want if you're
trying to get more people to contribute. The whole selling point of
wikis is that they draw in contributors because anyone can edit
easily.

This really sounds like you're looking for leverage to fix one problem
by finding other problems that you hope to solve with the same hammer.
That's a recipe for a tool that solves neither problem well and gets
ignored by both sets of users.

-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Vitaly Burovoy
On 1/8/16, Simon Riggs  wrote:
> On 8 January 2016 at 12:49, Vitaly Burovoy 
> wrote:
>
>
>> In Postgres9.1 a new feature was implemented [1] for adding PK and
>> UNIQUE constraints using indexes created concurrently, but constraints
>> NOT NULL and CHECK still require full seqscan of a table. New CHECK
>> constraint allows "NOT VALID" option but VALIDATE CONSTRAINT still
>> does seqscan (with RowExclusiveLock, but for big and constantly
>> updatable table it is still awful).
>>
>> It is possible to find wrong rows in a table without seqscan if there
>> is an index with a predicate allows to find such rows. There is no
>> sense what columns it has since it is enough to check whether
>> index_getnext for it returns NULL (table is OK) or any tuple (table
>> has wrong rows).
>>
>
> You avoid a full seqscan by creating an index which also does a full seq
> scan.
>
> How does this help? The lock and scan times are the same.
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> 
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>

I avoid not the full seqscan but the time when the table is under
ExclusiveLock: the index can be built concurrently without locking the table.

-- 
Best regards,
Vitaly Burovoy


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-08 Thread Simon Riggs
On 8 January 2016 at 13:36, Alvaro Herrera  wrote:

> Vladimir Borodin wrote:
> >
> > > On 7 Jan 2016, at 5:26, Michael Paquier 
> wrote:
> > >
> > > On Thu, Jan 7, 2016 at 12:20 AM, Alvaro Herrera
> > > > wrote:
>
> > >> Would you please have a look at Simon's patch, in particular verify
> > >> whether it solves the performance dip in your testing environment?
> > >>
> https://www.postgresql.org/message-id/CANP8%2BjJuyExr1HnTAdZraWsWkfc-octhug7YPtzPtJcYbyi4pA%40mail.gmail.com
> > >> (Note there's an updated patch a few emails down the thread.)
> > >>
> > >> If it seems to fix the problem for you, I think we should mark yours
> > >> rejected and just apply Simon’s.
> >
> > Ok, I’ll try this patch with my use case. Basically, it’s not so easy
> > now since I’ve partitioned that big table to not have such problems
> > but there is a way to reproduce it once again. If it helps, I agree
> > that my should be rejected in favor of the Simon’s patch because my
> > patch just reduces replication lag but Simon’s seems to remove lag at
> > all.
>
> I would agree except for the observation on toast indexes.  I think
> that's an important enough use case that perhaps we should have both.
>

The exclusion of toast indexes is something we can also remove, I have
recently discovered. When we access toast data we ignore MVCC, but we still
have the toast pointer and chunkid to use for rechecking our scan results.
So a later patch will add some rechecks.

So I don't think it is worth applying this patch now. I should add that I
also had a patch that did this, posted earlier IIRC. That is not the reason
to reject this, just me pointing out that I'm effectively rejecting my own
earlier patch also.

-- 
Simon Riggs http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] dblink: add polymorphic functions.

2016-01-08 Thread Alvaro Herrera
Joe Conway wrote:
> On 07/30/2015 09:51 AM, Tom Lane wrote:
> > Joe Conway  writes:
> >> What about just TYPE then?
> > 
> >> SELECT x::TYPE(some_expression) FROM ...
> >> SELECT CAST (x AS TYPE(some_expression)) FROM ...
> 
> > The main limitation of this patch is that it won't work for call sites
> > that pass pstate == NULL to LookupTypeName.  There are a fair number
> > of them, some of which wouldn't care because they could never invoke
> > this notation anyway, but for others we'd need to do some work to cons
> > up a suitable pstate.
> 
> Sorry it took so long for me to get back to this, but any reason the
> attached won't work?

So, is this going anywhere?

-- 
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] FDW join pushdown and scanclauses

2016-01-08 Thread Ashutosh Bapat
Hi,
In add_paths_to_joinrel(), the FDW-specific hook GetForeignJoinPaths() is
called. This hook, if implemented, should add ForeignPaths for pushed-down
joins. create_plan_recurse() calls create_scan_plan() on seeing these paths.

create_scan_plan() generates a list of clauses to be applied on scan from
rel->baserestrictinfo and parameterization clauses. This list is passed to
the create_*scan_plan routine as scanclauses. This code is very specific to
single-relation scans. Now that we are using create_scan_plan() for
creating plans for join relations, it needs some changes so that quals
relevant to a join can be passed to create_foreignscan_plan(). A related
problem is in create_foreignscan_plan(), which sets
ForeignScan::fsSystemCol if a system column is used in the targetlist
or quals. Right now it only checks rel->baserestrictinfo, which is NULL for
a joinrel. Thus, if a system column appears in the join clauses, it will
not be considered.

Here are possible ways to fix the problem:
1. Add a joinrestrictinfo field in ForeignPath and use that as scan_clauses
in create_foreignscan_plan(), something like the attached patch. FDWs anyway
need to examine the join clauses to check whether they are pushable or not,
so mostly FDWs have already sorted the clauses (joinclauses and
otherclauses, as well as pushable and nonpushable) and stored them in their
private structures. So they probably don't need to look at the scan_clauses
passed in. (See the WIP patch in [1].)

2. Create a separate ForeignJoinPath node with JoinPath as a member (like
MergePath). This node then takes an entirely different create_*_plan route,
ultimately producing a ForeignScan node. This approach, though clean, will
involve a lot of new and duplicated code.

Any other suggestions are welcome.

[1]
http://www.postgresql.org/message-id/cafjfprdhgenohm0ab6gxz1evx_yoqkywukddzeb5vpzfbae...@mail.gmail.com
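
To make option 1 concrete, here is a minimal, hypothetical sketch (not part
of the attached patch) of how an FDW might split the scan_clauses it
receives -- which, for a join rel, would be the joinrestrictinfo -- into
remote and local quals. is_pushable() stands in for the FDW's own
shippability test:

static void
classify_quals(List *scan_clauses, List **remote_quals, List **local_quals)
{
	ListCell   *lc;

	*remote_quals = NIL;
	*local_quals = NIL;

	foreach(lc, scan_clauses)
	{
		RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);

		/* Ship the clause if the remote server can evaluate it. */
		if (is_pushable(rinfo))		/* hypothetical FDW-specific test */
			*remote_quals = lappend(*remote_quals, rinfo->clause);
		else
			*local_quals = lappend(*local_quals, rinfo->clause);
	}
}
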
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 953aa62..bb1c058 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -327,22 +327,25 @@ create_scan_plan(PlannerInfo *root, Path *best_path)
 		}
 	}
 	else
 	{
 		tlist = build_path_tlist(root, best_path);
 	}
 
 	/*
 	 * Extract the relevant restriction clauses from the parent relation. The
 	 * executor must apply all these restrictions during the scan, except for
-	 * pseudoconstants which we'll take care of below.
+	 * pseudoconstants which we'll take care of below. We may get here for a
+	 * pushed down join between foreign relations. baserestrictinfo would be
+	 * NULL in that case.
 	 */
+	Assert(rel->reloptkind != RELOPT_JOINREL || !rel->baserestrictinfo);
 	scan_clauses = rel->baserestrictinfo;
 
 	/*
 	 * If this is a parameterized scan, we also need to enforce all the join
 	 * clauses available from the outer relation(s).
 	 *
 	 * For paranoia's sake, don't modify the stored baserestrictinfo list.
 	 */
 	if (best_path->param_info)
 		scan_clauses = list_concat(list_copy(scan_clauses),
@@ -2114,20 +2117,26 @@ create_foreignscan_plan(PlannerInfo *root, ForeignPath *best_path,
 	 */
 	if (scan_relid > 0)
 	{
 		RangeTblEntry *rte;
 
 		Assert(rel->rtekind == RTE_RELATION);
 		rte = planner_rt_fetch(scan_relid, root);
 		Assert(rte->rtekind == RTE_RELATION);
 		rel_oid = rte->relid;
 	}
+	else
+	{
+		Assert(rel->reloptkind == RELOPT_JOINREL);
+		Assert(!scan_clauses);
+		scan_clauses = best_path->joinrestrictinfo;
+	}
 
 	/*
 	 * Sort clauses into best execution order.  We do this first since the FDW
 	 * might have more info than we do and wish to adjust the ordering.
 	 */
 	scan_clauses = order_qual_clauses(root, scan_clauses);
 
 	/*
 	 * Let the FDW perform its processing on the restriction clauses and
 	 * generate the plan node.  Note that the FDW might remove restriction
@@ -2173,21 +2182,21 @@ create_foreignscan_plan(PlannerInfo *root, ForeignPath *best_path,
 	 * bit of a kluge and might go away someday, so we intentionally leave it
 	 * out of the API presented to FDWs.
 	 *
 	 * First, examine all the attributes needed for joins or final output.
 	 * Note: we must look at reltargetlist, not the attr_needed data, because
 	 * attr_needed isn't computed for inheritance child rels.
 	 */
 	pull_varattnos((Node *) rel->reltargetlist, rel->relid, &attrs_used);
 
 	/* Add all the attributes used by restriction clauses. */
-	foreach(lc, rel->baserestrictinfo)
+	foreach(lc, scan_clauses)
 	{
 		RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);
 
 		pull_varattnos((Node *) rinfo->clause, rel->relid, &attrs_used);
 	}
 
 	/* Now, are any system columns requested from rel? */
 	scan_plan->fsSystemCol = false;
 	for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++)
 	{
diff --git a/src/include/nodes/relation.h b/src/include/nodes/relation.h
index 19752ca..c951e24 100644
--- a/src/include/nodes/relation.h
+++ 

Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-08 Thread Tom Lane
Magnus Hagander  writes:
> On Thu, Jan 7, 2016 at 10:57 PM, Joshua D. Drake 
>> There are quite a few contributing companies that likely have people that
>> could help out with this in an educated fashion but aren't coders.

> Sort of like how they could also have helped out with cf management?

Dunno about that.  While a CF manager doesn't necessarily have to have
a commit bit, I think he/she probably does have to be someone who is known
and respected on the -hackers list.  Otherwise, nagging to finish patches
is likely to be ignored or even counterproductive.

regards, tom lane




Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Vitaly Burovoy
On 1/8/16, Simon Riggs  wrote:
> On 8 January 2016 at 13:13, Vitaly Burovoy 
> wrote:
>
>> On 1/8/16, Simon Riggs  wrote:
>> > On 8 January 2016 at 12:49, Vitaly Burovoy 
>> > wrote:
>> >
>> >
>> >> In Postgres 9.1 a new feature was implemented [1] for adding PK and
>> >> UNIQUE constraints using indexes created concurrently, but NOT NULL
>> >> and CHECK constraints still require a full seqscan of the table. The
>> >> new CHECK constraint allows the "NOT VALID" option, but VALIDATE
>> >> CONSTRAINT still does a seqscan (with RowExclusiveLock, but for a
>> >> big and constantly updated table it is still awful).
>> >>
>> >> It is possible to find wrong rows in a table without a seqscan if
>> >> there is an index whose predicate allows such rows to be found. It
>> >> does not matter which columns the index has, since it is enough to
>> >> check whether index_getnext for it returns NULL (the table is OK) or
>> >> any tuple (the table has wrong rows).
>> >>
>> >
>> > You avoid a full seqscan by creating an index which also does a full
>> > seq
>> > scan.
>> >
>> > How does this help? The lock and scan times are the same.
>>
>> I am not avoiding the full seqscan, but the time when the table is under
>> an ExclusiveLock: the index can be built concurrently without locking
>> the table.
>
>
> That is exactly what ADD ... NOT VALID and VALIDATE already do, as of
> 9.4.
>
> --
> Simon Riggs    http://www.2ndQuadrant.com/
> 
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>

Hmm... It really does. I was confused by a line in ATExecValidateConstraint:
conrel = heap_open(ConstraintRelationId, RowExclusiveLock);

but validateCheckConstraint doesn't apply any lock to the rows.

What about SET NOT NULL constraints? There is no VALIDATE CONSTRAINT for it.
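
Returning to the index-probe idea quoted above, a minimal sketch of the
check it describes -- assuming the caller has already opened the heap
relation, a predicate index whose WHERE clause matches exactly the rows
that would violate the constraint, and a suitable snapshot:

static bool
index_probe_finds_violation(Relation heapRel, Relation violIdxRel,
							Snapshot snapshot)
{
	IndexScanDesc scan;
	bool		found;

	/* Full-index scan with no scan keys: any tuple returned is a violator. */
	scan = index_beginscan(heapRel, violIdxRel, snapshot, 0, 0);
	index_rescan(scan, NULL, 0, NULL, 0);
	found = (index_getnext(scan, ForwardScanDirection) != NULL);
	index_endscan(scan);

	return found;
}
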
-- 
Best regards,
Vitaly Burovoy




Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-08 Thread Alvaro Herrera
Simon Riggs wrote:
> On 8 January 2016 at 13:36, Alvaro Herrera  wrote:

> > I would agree except for the observation on toast indexes.  I think
> > that's an important enough use case that perhaps we should have both.
> 
> The exclusion of toast indexes is something we can remove also, I have
> recently discovered. When we access toast data we ignore MVCC, but we still
> have the toast pointer and chunkid to use for rechecking our scan results.
> So a later patch will add some rechecks.

Ah, interesting, glad to hear.  I take it you're pushing your patch
soon, then?

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] New feature "... ALTER CONSTRAINT ... VERIFY USING INDEX"

2016-01-08 Thread Alvaro Herrera
Simon Riggs wrote:
> On 8 January 2016 at 14:14, Vitaly Burovoy  wrote:
> 
> > What about SET NOT NULL constraints? There is no VALIDATE CONSTRAINT for
> > it.
> 
> Sounds like a useful addition.

Yes.  In order to make it a reality you need to make the NOT NULL
constraints appear in pg_constraint.  Years ago I wrote a patch to do
that, which was very close to done.  It would be really cool if Vitaly
or someone else could look into that patch, update it and get it ready
for commit.

If someone is interested, I can send the patch along.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-08 Thread Rahila Syed
>I suspect you need to create a new CF entry for this patch in CF 2016-01.

Unless I am missing something, there seems to be no entry for this patch
on the CF 2016-01 page: https://commitfest.postgresql.org/8/.
Regrettably, we have missed the deadline to add the patch to this
commitfest. Is there still some way to add it to commitfest 2016-01? As
this feature received a lot of feedback in the previous commitfest, adding
it to this one will surely help in progressing it toward being ready for
PostgreSQL 9.6.

Thank you,
Rahila Syed


On Mon, Dec 28, 2015 at 6:01 AM, Amit Langote  wrote:

>
> Hi Vinayak,
>
> On 2015/12/25 21:46, Vinayak Pokale wrote:
> > Hi,
> > Please find attached patch addressing following comments.
> >
> >> The relation OID should be reported and not its name. In case of a
> >> relation rename that would not be cool for tracking, and most users
> >> are surely going to join with other system tables using it.
> > The relation OID is reported instead of relation name.
> > The following interface function is called at the beginning to report the
> > relation OID once.
> > void pgstat_report_command_target(Oid relid)
> >
> >> Regarding pg_stat_get_vacuum_progress(): I think a backend can simply be
> >> skipped if (!has_privs_of_role(GetUserId(), beentry->st_userid)) (cannot
> >> put that in plain English, :))
> > Updated in the attached patch.
> >
> > In the previous patch, ACTIVITY_IS_VACUUM was set unnecessarily for
> > VACOPT_FULL, which is not covered by lazy_scan_heap().
> > I have updated it in the attached patch and also renamed
> > ACTIVITY_IS_VACUUM to COMMAND_LAZY_VACUUM.
> >
> > Added documentation for view.
> > Some more comments need to be addressed.
>
> I suspect you need to create a new CF entry for this patch in CF 2016-01.
>
> Thanks,
> Amit
>
>
>


Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-08 Thread Magnus Hagander
On Fri, Jan 8, 2016 at 4:07 PM, Tom Lane  wrote:

> Magnus Hagander  writes:
> > On Thu, Jan 7, 2016 at 10:57 PM, Joshua D. Drake 
> >> There are quite a few contributing companies that likely have people
> that
> >> could help out with this in an educated fashion but aren't coders.
>
> > Sort of like how they could also have helped out with cf management?
>
> Dunno about that.  While a CF manager doesn't necessarily have to have
> a commit bit, I think he/she probably does have to be someone who is known
> and respected on the -hackers list.  Otherwise, nagging to finish patches
> is likely to be ignored or even counterproductive.
>

I realize now that I caught JD's response slightly out of context. I thought
we were talking about an issue/bug tracker, in which case I think such a
person is needed just as much as for a CF manager, since it's basically the
same thing at a different stage.

But that one seems to be more about keeping the todo list on the wiki up to
date, in which case I agree that not as much is needed, because it's more
about reflecting what has happened than about trying to manage a process.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Speedup twophase transactions

2016-01-08 Thread Alvaro Herrera
Simon Riggs wrote:

> I think we could do better still, but this looks like the easiest 80% and
> actually removes code.
> 
> The lack of substantial comments on the patch is a problem though - the
> details above should go in the patch. I'll have a go at reworking this for
> you, this time.

Is someone submitting an updated patch here?

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [COMMITTERS] pgsql: Blind attempt at a Cygwin fix

2016-01-08 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Tom Lane wrote:
> > Alvaro Herrera  writes:
> > > Blind attempt at a Cygwin fix
> > 
> > According to
> > http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2016-01-08%2014%3A51%3A27
> > 
> > brolga now tries to compile win32security.c, which it evidently was not
> > doing before, but the compile blows up; looks like it is missing #include
> > calls (which must exist in other places where this code lives ...)
> 
> Obviously this wasn't the best idea ever.  Andrew suggests on IM
> reverting this on Cygwin to just the "isatty" check, as originally.

Here's a proposed patch.  Thoughts?

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From af7989c37cb8a1d647e6ce7d01857491b42b01b5 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Fri, 8 Jan 2016 12:36:18 -0300
Subject: [PATCH] Revert "Blind attempt at a Cygwin fix"

This reverts commit e9282e953205a2f3125fc8d1052bc01cb77cd2a3, which blew
up in a pretty spectacular way.  Reintroduce the original code while we
search for a real fix.
---
 configure   | 6 --
 configure.in| 1 -
 src/bin/pg_ctl/pg_ctl.c | 9 +
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/configure b/configure
index ab213a1..3dd1b15 100755
--- a/configure
+++ b/configure
@@ -13075,12 +13075,6 @@ if test "$PORTNAME" = "cygwin"; then
  ;;
 esac
 
-  case " $LIBOBJS " in
-  *" win32security.$ac_objext "* ) ;;
-  *) LIBOBJS="$LIBOBJS win32security.$ac_objext"
- ;;
-esac
-
 fi
 
ac_fn_c_check_decl "$LINENO" "sys_siglist" "ac_cv_have_decl_sys_siglist" "#include <signal.h>
diff --git a/configure.in b/configure.in
index 41402df..9398482 100644
--- a/configure.in
+++ b/configure.in
@@ -1596,7 +1596,6 @@ fi
 # Cygwin needs only a bit of that
 if test "$PORTNAME" = "cygwin"; then
   AC_LIBOBJ(dirmod)
-  AC_LIBOBJ(win32security)
 fi
 
 AC_CHECK_DECLS([sys_siglist], [], [],
diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c
index 28d3cf2..919d764 100644
--- a/src/bin/pg_ctl/pg_ctl.c
+++ b/src/bin/pg_ctl/pg_ctl.c
@@ -212,6 +212,15 @@ write_stderr(const char *fmt,...)
 	vfprintf(stderr, fmt, ap);
 #else
 
+/*
+ * On Cygwin, we don't yet have a reliable mechanism to detect when
+ * we're being run as a service, so fall back to the old (and broken)
+ * stderr test.
+ */
+#ifdef __CYGWIN__
+#define	pgwin32_is_service()	(isatty(fileno(stderr)))
+#endif
+
 	/*
 	 * On Win32, we print to stderr if running on a console, or write to
 	 * eventlog if running as a service
-- 
2.1.4




Re: [HACKERS] PostgreSQL 9.5.0 regress tests fails with Python 3.5.0

2016-01-08 Thread Pavel Stehule
2016-01-08 17:42 GMT+01:00 Tom Lane :

> Pavel Stehule  writes:
> > make check-world fails,
> > It is fixed in HEAD.
>
> Hmm.  Apparently when Peter back-patched f16d52269, he missed the 9.5
> branch :-(.  I don't have Python 3.5 here, but blindly pushed the same
> patch into 9.5.
>

I'll send a patch

Pavel


>
> regards, tom lane
>


Re: [HACKERS] dblink: add polymorphic functions.

2016-01-08 Thread Tom Lane
Alvaro Herrera  writes:
> Joe Conway wrote:
>> On 07/30/2015 09:51 AM, Tom Lane wrote:
>>> The main limitation of this patch is that it won't work for call sites
>>> that pass pstate == NULL to LookupTypeName.  There are a fair number
>>> of them, some of which wouldn't care because they could never invoke
>>> this notation anyway, but for others we'd need to do some work to cons
>>> up a suitable pstate.

>> Sorry it took so long for me to get back to this, but any reason the
>> attached won't work?

> So, is this going anywhere?

Oh, sorry, was I on the hook to review that?

[ quick look... ]  This doesn't seem like it responds to my criticism
above.  I think what we want is that for every LookupTypeName call site
that could potentially be invoking this notation, we must actually make
provision for passing a valid pstate, one containing in particular the
source text for the nodetree we're parsing.  Without that we will not
get error messages of the quality we expect (with error pointers).

Another issue now that I look at it is that parser-detected semantic
problems in the expression will result in ereport(ERROR), rather than
returning NULL which is what you'd kind of expect from the API spec for
LookupTypeName.  That's probably all right considering that many other
syntactic issues throw errors inside this function; but maybe we'd better
adjust the API spec.
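
For illustration, the kind of adjustment described above might look like
this at an affected call site (a hedged sketch; queryString, typeName and
missing_ok are assumed to be available there):

	Type		tup;
	int32		typmod;
	ParseState *pstate = make_parsestate(NULL);

	pstate->p_sourcetext = queryString;	/* gives errors a source pointer */
	tup = LookupTypeName(pstate, typeName, &typmod, missing_ok);
	free_parsestate(pstate);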

regards, tom lane




Re: [HACKERS] PostgreSQL 9.5.0 regress tests fails with Python 3.5.0

2016-01-08 Thread Tom Lane
Pavel Stehule  writes:
> make check-world fails,
> It is fixed in HEAD.

Hmm.  Apparently when Peter back-patched f16d52269, he missed the 9.5
branch :-(.  I don't have Python 3.5 here, but blindly pushed the same
patch into 9.5.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Blind attempt at a Cygwin fix

2016-01-08 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera  writes:
> > Blind attempt at a Cygwin fix
> 
> According to
> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=brolga&dt=2016-01-08%2014%3A51%3A27
> 
> brolga now tries to compile win32security.c, which it evidently was not
> doing before, but the compile blows up; looks like it is missing #include
> calls (which must exist in other places where this code lives ...)

Obviously this wasn't the best idea ever.  Andrew suggests on IM
reverting this on Cygwin to just the "isatty" check, as originally.

I'm CC'ing Marco Atzeri, who has done Cygwin work lately.  Maybe he can
spend some time getting this port fixed.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [COMMITTERS] pgsql: Blind attempt at a Cygwin fix

2016-01-08 Thread Tom Lane
Alvaro Herrera  writes:
> Alvaro Herrera wrote:
>> Obviously this wasn't the best idea ever.  Andrew suggests on IM
>> reverting this on Cygwin to just the "isatty" check, as originally.

> Here's a proposed patch.  Thoughts?

Ugly, but it will hold the fort until someone can debug the service
code for Cygwin.

regards, tom lane

