Re: [HACKERS] [POC] hash partitioning

2017-10-29 Thread Robert Haas
On Tue, Oct 24, 2017 at 1:21 PM, amul sul  wrote:
> Updated patch attached.

This patch needs a rebase.

It appears that satisfies_hash_func is declared incorrectly in
pg_proc.h.  ProcedureCreate seems to think that provariadic should be
ANYOID if the type of the last element is ANYOID, ANYELEMENTOID if the
type of the last element is ANYARRAYOID, and otherwise the element
type corresponding to the array type.   But here you have the last
element as int4[] but provariadic is any.  I wrote the following query
to detect problems of this type, and I think we might want to just go
ahead and add this to the regression test suite, verifying that it
returns no rows:

select oid::regprocedure, provariadic::regtype, proargtypes::regtype[]
from pg_proc where provariadic != 0
and case proargtypes[array_length(proargtypes, 1)-1]
when 2276 then 2276 -- any -> any
when 2277 then 2283 -- anyarray -> anyelement
else (select t.oid from pg_type t where t.typarray =
proargtypes[array_length(proargtypes, 1)-1]) end
!= provariadic;

The simple fix is to change provariadic to int4 and call it good.  It's
tempting to go the other way and actually make it
satisfies_hash_partition(int4, int4, variadic "any"), passing the
column values directly and letting satisfies_hash_partition do the
hashing itself.  Any arguments that had a partition key type different
from the column type would have a RelabelType node placed on top of
the column, so that get_fn_expr_argtype would return the partition key
type.  Then, the function could look up the hash function for that
type and call it directly on the value.  That way, we'd be doing only
one function call instead of many, and the partition constraint would
look nicer in \d+ output, too.  :-)  On the other hand, that would
also mean that we'd have to look up the extended hash function every
time through this function, though maybe that could be prevented by
using fn_extra to cache FmgrInfos for all the hash functions on the
first time through.  I'm not sure how that would compare in terms of
speed with what you have now, but maybe it's worth trying.

The second paragraph of the CREATE TABLE documentation for PARTITION
OF needs to be updated like this: "The form with IN
is used for list partitioning, the form with FROM
and TO is used for range partitioning, and the form
with WITH is used for hash partitioning."

The CREATE TABLE documentation says "When using range partitioning,
the partition key can include multiple columns or expressions (up to
32,"; this should be changed to say "When using range or hash
partitioning".

-  expression.  If no B-tree operator class is specified when creating a
-  partitioned table, the default B-tree operator class for the datatype will
-  be used.  If there is none, an error will be reported.
+  expression.  If no operator class is specified when creating a partitioned
+  table, the default operator class of the appropriate type (btree for list
+  and range partitioning, hash for hash partitioning) will be used.  If
+  there is none, an error will be reported.
+ 
+
+ 
+  Since hash operator class provides only equality, not ordering, collation
+  is not relevant for hash partitioning. The behaviour will be unaffected
+  if a collation is specified.
+ 
+
+ 
+  Hash partitioning will use support function 2 routines from the operator
+  class. If there is none, an error will be reported.  See
+   for details of operator class support functions.

I think we should rework this a little more heavily.  I suggest the
following, starting after "a single column or expression":


Range and list partitioning require a btree operator class, while hash
partitioning requires a hash operator class.  If no operator class is
specified explicitly, the default operator class of the appropriate
type will be used; if no default operator class exists, an error will
be raised.  When hash partitioning is used, the operator class used
must implement support function 2 (see 
for details).


I think we can leave out the part about collations.  It's possibly
worth a longer explanation here at some point: for range partitioning,
collation can affect which rows go into which partitions; for list
partitioning, it can't, but it can affect the order in which
partitions are expanded (which is a can of worms I'm not quite ready
to try to explain in user-facing documentation); for hash
partitioning, it makes no difference at all.  Although at some point
we may want to document this, I think it's a job for a separate patch,
since (1) the existing documentation doesn't document the precise
import of collations on existing partitioning types and (2) I'm not
sure that CREATE TABLE is really the best place to explain this.

The example commands for creating a hash-partitioned table are missing
spaces between WITH and the parenthesis which follows.

In 0003, the changes to partition_bounds_copy 

Re: [HACKERS] Index only scan for cube and seg

2017-10-29 Thread Andrey Borodin
Hi!
> On 29 Oct 2017, at 2:24, Alexander Korotkov wrote:
> 
> As I can see, cube GiST code always uses DatumGetNDBOX() macro to transform 
> Datum to (NDBOX *).
> 
> #define DatumGetNDBOX(x)  ((NDBOX *) PG_DETOAST_DATUM(x))
> 
> Thus, it should be safe to just remove both compress/decompress methods from 
> existing opclass.

Alexander, Tom, you are absolutely right. I was sure there was toasting code in 
cube's compress, but it was never there.

Here is a patch for cube that drops the functions.

Best regards, Andrey Borodin.


0001-Enable-Index-Only-Scan-in-cube.patch
Description: Binary data


0001-Enable-Index-Only-Scan-in-seg.patch
Description: Binary data



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Restricting pg_rewind to data/wal dirs

2017-10-29 Thread Robert Haas
On Sat, Oct 28, 2017 at 4:52 PM, Chris Travers  wrote:
> There are still some cleanup bits needed here but I wanted to get feedback
> on my general direction.
>
> I hope to submit for commit fest soon if the general feedback is good.

I think you should submit to the CommitFest regardless, to increase
your chances of getting feedback.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Implementing pg_receivewal --no-sync

2017-10-29 Thread Robert Haas
On Sun, Oct 29, 2017 at 3:42 AM, Michael Paquier
 wrote:
> Okay. Here is an updated patch incorporating those comments.

Committed with a little wordsmithing on the documentation.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel worker error

2017-10-29 Thread Robert Haas
On Sun, Oct 29, 2017 at 12:02 PM, Amit Kapila  wrote:
> This patch no longer applies, so attached rebased patches.  I have
> also created patches for v10 as we might want to backpatch the fix.
> Added the patch in CF (https://commitfest.postgresql.org/15/1342/)

Thanks.  I picked the second variant, committed, and also back-patched to 9.6.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] Tests for reloptions

2017-10-29 Thread Nikolay Shaplov
In a message of 19 October 2017 14:20:52, you wrote:

> Yeah, it would perhaps be good idea to ensure we don't break things that
> are documented to work.  If the tests don't take too long, I'm not
> opposed to testing every single option.  As you say, code coverage is
> important but it's not the only goal.
> 
> I'm hesitant to hardcode things like the number of bits in bloom, as you
> had in the original.  If I understand correctly, that number could
> change with compile options (different blocksize?), so I removed that
> part.  I also fixed a few spelling errors.
> 
> And pushed.  Let's see what the buildfarm says about this.
While merging this commit into my branch, I found two issues that I think 
need fixing. I hope this does not require creating a new commit request...

The first is a missing tab.

Second, I think it is better to write "The OIDS option is not stored as a 
_reloption_"; otherwise it can be read as if it is not stored at all.

See patch in the attachment.

Thank you again for your work on the patch. I've seen how much you have 
changed it.

PS: Do I understand correctly that the 80-character code width rule applies to 
SQL tests too?

-- 
Do code for fun. Can do it for money (Perl & C/C++ ~10h/week)

diff --git a/src/test/regress/sql/reloptions.sql b/src/test/regress/sql/reloptions.sql
index c9119fd..21a315b 100644
--- a/src/test/regress/sql/reloptions.sql
+++ b/src/test/regress/sql/reloptions.sql
@@ -47,12 +47,12 @@ SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass;
 ALTER TABLE reloptions_test RESET (autovacuum_enabled,
 	autovacuum_analyze_scale_factor);
 SELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass AND
-reloptions IS NULL;
+	reloptions IS NULL;
 
 -- RESET fails if a value is specified
 ALTER TABLE reloptions_test RESET (fillfactor=12);
 
--- The OIDS option is not stored
+-- The OIDS option is not stored as a reloption
 DROP TABLE reloptions_test;
 CREATE TABLE reloptions_test(i INT) WITH (fillfactor=20, oids=true);
 SELECT reloptions, relhasoids FROM pg_class WHERE oid = 'reloptions_test'::regclass;



Re: [HACKERS] Implementing pg_receivewal --no-sync

2017-10-29 Thread Michael Paquier
On Sun, Oct 29, 2017 at 12:31 AM, Robert Haas  wrote:
> On Sun, Oct 29, 2017 at 3:42 AM, Michael Paquier
>  wrote:
>> Okay. Here is an updated patch incorporating those comments.
>
> Committed with a little wordsmithing on the documentation.

Thanks all.
-- 
Michael




Re: [HACKERS] taking stdbool.h into use

2017-10-29 Thread Michael Paquier
On Thu, Oct 26, 2017 at 9:25 AM, Peter Eisentraut
 wrote:
> Here is an updated patch set.  This is just a rebase of the previous
> set, no substantial changes.  Based on the discussion so far, I'm
> proposing that 0001 through 0007 could be ready to commit after review,
> whereas the remaining two need more work at some later time.

I had a look at this patch series. Patches 1, 2 (macos headers indeed
show that NSUNLINKMODULE_OPTION_NONE is set to 0x0), 3 to 7 look fine
to me.

I spotted a couple of other things while looking at your patches and
the code tree.

-   return (ginCompareItemPointers(&btree->itemptr, iptr) > 0) ? TRUE : FALSE;
+   return (ginCompareItemPointers(&btree->itemptr, iptr) > 0) ? true : false;
You could simplify that at the same time by removing such things. The
"false : true" pattern is less frequent than the "true : false"
pattern.

Some comments in the code still mention FALSE or TRUE:
- hashsearch.c uses FALSE in some comments.
- In execExpr.c, ExecCheck mentions TRUE.
- AggStateIsShared mentions TRUE and FALSE.
- In arrayfuncs.c, ReadArrayStr, CopyArrayEls and ReadArrayBinary use TRUE.

0009 looks like a good idea. It would make the code less confusing for
the reader to mention HAVE_STDBOOL_H in the #endif of the c.h block that
you are complicating, to make the end of the block clear. I am lacking
energy for 0008, so I have no opinions to offer. Sorry :p
-- 
Michael




Re: [HACKERS] Parallel worker error

2017-10-29 Thread Amit Kapila
On Sat, Sep 9, 2017 at 7:56 AM, Amit Kapila  wrote:
> On Fri, Sep 8, 2017 at 3:13 PM, Robert Haas  wrote:
>> On Fri, Sep 8, 2017 at 1:17 AM, Amit Kapila  wrote:
>>> You are right.  I have changed the ordering and passed OuterUserId via
>>> FixedParallelState.
>>
>> This looks a little strange:
>>
>> +SetCurrentRoleId(fps->outer_user_id, fps->is_current_user_superuser);
>>
>> The first argument says "outer" but the second says "current".  I'm
>> wondering if we can just make the second one is_superuser.
>>
>
> No issues changed as per suggestion.
>
>> I'm also wondering if, rather than using GetConfigOptionByName, we
>> should just make the GUC underlying is_superuser non-static and use
>> the value directly.  If not, then I'm alternatively wondering whether
>> we should maybe use a less-generic name than varval.
>>
>
> I think we can go either way.  So prepared patches with both
> approaches. In fix_role_handling_parallel_worker_v3_1.patch, I have
> changed the variable name and in
> fix_role_handling_parallel_worker_v3_2.patch, I have exposed the guc)
> is_superuser.
>

This patch no longer applies, so attached rebased patches.  I have
also created patches for v10 as we might want to backpatch the fix.
Added the patch in CF (https://commitfest.postgresql.org/15/1342/)

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fix_role_handling_parallel_worker_v4_1.patch
Description: Binary data


fix_role_handling_parallel_worker_v4_2.patch
Description: Binary data


fix_role_handling_parallel_worker_10_v4_1.patch
Description: Binary data


fix_role_handling_parallel_worker_10_v4_2.patch
Description: Binary data



Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-29 Thread Simon Riggs
On 28 October 2017 at 22:04, Peter Geoghegan  wrote:
> On Sat, Oct 28, 2017 at 12:49 PM, Simon Riggs  wrote:
>> Nothing I am proposing blocks later work.
>
> Actually, many things will block future work if you go down that road.
> You didn't respond to the specific points I raised, but that doesn't
> mean that they're not real.
>
>> Everything you say makes it clear that a fully generalized solution is
>> going to be many years in the making, assuming we agree.
>
> I think that it's formally impossible as long as you preserve the ON
> CONFLICT guarantees, unless you somehow define the problems out of
> existence. Those are guarantees which no other MERGE implementation
> has ever made, and which the SQL standard says nothing about. And for
> good reasons.
>
>> "The extent to which an SQL-implementation may disallow independent
>> changes that are not significant is implementation-defined”.
>>
>> So we get to choose. I recommend that we choose something practical.
>> We're approaching the 10 year anniversary of my first serious attempt
>> to do MERGE. I say that its time to move forwards with useful
>> solutions, rather than wait another 10 years for the perfect one, even
>> assuming it exists.
>
> As far as I'm concerned, you're the one arguing for an unobtainable
> solution over a good one, not me. I *don't* think you should solve the
> problems that I raise -- you should instead implement MERGE without
> any of the ON CONFLICT guarantees, just like everyone else has.
> Building MERGE on top of the ON CONFLICT guarantees, and ultimately
> arriving at something that is comparable to other implementations over
> many releases might be okay if anyone had the slightest idea of what
> that would look like. You haven't even _described the semantics_,
> which you could do by addressing the specific points that I raised.

I have no objection to you writing a better version than me and if my
work inspires you to complete that in a reasonable timescale then we
all win. I'm also very happy to work together on writing the version
described - you have already done much work in this area.

I don't see any major problems in points you've raised so far, though
obviously there is much detail.

All of this needs to be written and then committed, so I'll get on and
write my proposal. We can then see whether that is an 80% solution or
something less. There are more obvious barriers to completion at this
point, like time and getting on with it.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] proposal: schema variables

2017-10-29 Thread Chris Travers
On Sat, Oct 28, 2017 at 4:56 PM, Pavel Stehule 
wrote:

>
>>
> Creating the database objects and the necessary infrastructure is the
> simplest task of this project. I'd be happier if there were zero
> intersection, because variables and GUCs are designed for different purposes.
> But due to the SET keyword, there is an intersection.
>
> When I think about it, I have only one, but important, reason why I prefer
> designing a new type of database object: GUCs are stack based with
> different default granularity - global, database, user, session, function.
> This can be unwanted behavior for variables - it can be a source of
> hard-to-detect bugs. I'm afraid this behavior can be too messy for usage as
> variables.
>
> @1 I have no clear opinion about it - not sure if rights are good enough
> - probably some user limits can be more practical - but it can be hard to
> choose the result when some user limits and GUCs conflict
>

I was mostly thinking that users can probably set things like work_mem and
possibly this might be a problem.


> @2 With typed variables, custom GUCs are not necessary
>

I don't know about that.  For example with the geoip2lookup extension it is
nice that you could set the preferred language for translation on a per
user basis or the mmdb path on a per-db basis.


> @3 Why do you need it? It is possible with the set_config function now.
>

Yeah, you could do it safely with set_config and a CTE, but suppose I have:

with a (id, value) as (values (1::int, 'foo'), (2, 'bar'), (3, 'baz'))
SELECT set_config('custom_val', value, false) from a where id = 2;

What is the result out of this?  I would *expect* that this would probably
run set_config 3 times and filter the output.


>
> Regards
>
> Pavel


-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


Re: [HACKERS] Parallel worker error

2017-10-29 Thread Robert Haas
On Mon, Oct 30, 2017 at 8:25 AM, Amit Kapila  wrote:
> Thanks.  I have closed this entry in CF app, however, I am not sure
> what is the best way to deal with the entry present in PostgreSQL 10
> Open Items page[1].  The entry for this bug seems to be present in
> Older Bugs section.  Shall we leave it as it is or do we want to do
> something else with it?
>
> [1] - https://wiki.postgresql.org/wiki/PostgreSQL_10_Open_Items

How about just adding an additional bullet point for that item that
says "fixed by commit blah blah for 10.1"?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical decoding of two-phase transactions

2017-10-29 Thread Craig Ringer
On 28 October 2017 at 03:53, Sokolov Yura  wrote:
> On 2017-10-26 22:01, Sokolov Yura wrote:

> Small improvement compared to v7:
> - twophase_gid is written with alignment padding in the XactLogCommitRecord
> and XactLogAbortRecord.

I think Nikhils has done some significant work on this patch.
Hopefully he'll be able to share it.



-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Typos in src/backend/optimizer/README

2017-10-29 Thread Etsuro Fujita

On 2017/10/28 18:15, Robert Haas wrote:

On Fri, Oct 27, 2017 at 9:34 AM, Etsuro Fujita
 wrote:

This sentence in the section of Partition-wise joins in
src/backend/optimizer/README should be fixed: "This technique of breaking
down a join between partition tables into join between their partitions is
called partition-wise join."

(1) s/a join between partition tables/a join between partitioned tables/
(2) s/join between their partitions/joins between their partitions/

It might be okay to leave #2 as-is, but I'd like to propose to change that
way to make the meaning clear.


I think you are right.  I have committed the patch.


Thank you.

Best regards,
Etsuro Fujita





Re: [HACKERS] Parallel safety for extern params

2017-10-29 Thread Amit Kapila
On Sun, Oct 29, 2017 at 9:55 PM, Robert Haas  wrote:
> On Sat, Oct 28, 2017 at 8:02 PM, Amit Kapila  wrote:
>> I think we need to make changes in exec_simple_recheck_plan to make
>> the behavior similar to HEAD.  With the attached patch, all tests
>> passed with force_parallel_mode.
>
> Committed to REL_10_STABLE.
>

Thanks, I have closed this entry in CF app.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Reading timeline from pg_control on replication slave

2017-10-29 Thread Craig Ringer
On 28 October 2017 at 06:09, Michael Paquier  wrote:
> On Fri, Oct 27, 2017 at 1:04 AM, Andrey Borodin  wrote:
>> I'm working on backups from a replication slave in WAL-G [0].
>> Backups used to use the result of pg_walfile_name(pg_start_backup(...)). The call
>> to pg_start_backup() works fine, but "pg_walfile_name() cannot be executed
>> during recovery."
>> This function has an LSN as argument and reads the TimeLineID from global state.
>> So I made a function[1] that, if on a replica, reads the timeline from the
>> pg_control file and formats the WAL file name as if it were produced by
>> pg_walfile_name(lsn).
>
> ThisTimeLineID is not something you can rely on for standby backends
> as it is not set during recovery.

That's not much of a concern really, you just have to ensure you call
GetXLogReplayRecPtr and set ThisTimeLineID.

(I'd quite like ThisTimeLineID to go away as a global. It's messy and
confusing, and I'd much rather it be fetched when needed).

However, that doesn't negate the rest of the issues you raised.




[HACKERS] Comment typo

2017-10-29 Thread Etsuro Fujita
Here is a patch to fix a typo in a comment in partition.c: 
s/specificiation/specification/.


Best regards,
Etsuro Fujita
diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index 07fdf66..e234167 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -703,7 +703,7 @@ partition_bounds_equal(int partnatts, int16 *parttyplen, bool *parttypbyval,
 
 /*
  * Return a copy of given PartitionBoundInfo structure. The data types of bounds
- * are described by given partition key specificiation.
+ * are described by given partition key specification.
  */
 extern PartitionBoundInfo
 partition_bounds_copy(PartitionBoundInfo src,



Re: [HACKERS] Parallel worker error

2017-10-29 Thread Amit Kapila
On Sun, Oct 29, 2017 at 1:15 PM, Robert Haas  wrote:
> On Sun, Oct 29, 2017 at 12:02 PM, Amit Kapila  wrote:
>> This patch no longer applies, so attached rebased patches.  I have
>> also created patches for v10 as we might want to backpatch the fix.
>> Added the patch in CF (https://commitfest.postgresql.org/15/1342/)
>
> Thanks.  I picked the second variant, committed, and also back-patched to 9.6.
>

Thanks.  I have closed this entry in CF app, however, I am not sure
what is the best way to deal with the entry present in PostgreSQL 10
Open Items page[1].  The entry for this bug seems to be present in
Older Bugs section.  Shall we leave it as it is or do we want to do
something else with it?


[1] - https://wiki.postgresql.org/wiki/PostgreSQL_10_Open_Items

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] parallelize queries containing initplans

2017-10-29 Thread Amit Kapila
On Wed, Oct 11, 2017 at 9:24 PM, Robert Haas  wrote:
> On Mon, Oct 9, 2017 at 5:56 AM, Amit Kapila  wrote:
>> How about always returning false for PARAM_EXTERN?
>
> Yeah, I think that's what we should do.  Let's do that first as a
> separate patch, which we might even want to back-patch, then return to
> this.
>

Now that the PARAM_EXTERN issue is fixed, I have rebased this patch.
This patch had been switched to Ready For Committer in last CF, then
Robert had comments which I have addressed, so I think the status
should be switched back to Ready For committer.  Let me know if you
think it should be switched to some other status.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


pq_pushdown_initplan_v13.patch
Description: Binary data



Re: [HACKERS] parallelize queries containing initplans

2017-10-29 Thread Amit Kapila
On Wed, Oct 18, 2017 at 2:06 PM, tushar  wrote:
> On 10/11/2017 12:42 PM, tushar wrote:
>>
>> On 10/09/2017 03:26 PM, Amit Kapila wrote:
>>>
>>> I have reverted the check
>>> in the attached patch.
>>
>>
>> I have applied this patch against PG HEAD and run sqlsmith and analyzed
>> results . didn't find any specific failures against this patch.
>>
> I did some testing of this feature and wrote a few testcases. PFA the SQL
> file (along with the expected .out file).
>

Thanks a lot Tushar for testing this patch.  The latest patch is just a
rebase with some comment updates; there is no code change, so I don't
expect any change in behavior, but feel free to test it once again.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] parallelize queries containing initplans

2017-10-29 Thread Robert Haas
On Mon, Oct 30, 2017 at 9:00 AM, Amit Kapila  wrote:
> Now that the PARAM_EXTERN issue is fixed, I have rebased this patch.
> This patch had been switched to Ready For Committer in last CF, then
> Robert had comments which I have addressed, so I think the status
> should be switched back to Ready For committer.  Let me know if you
> think it should be switched to some other status.

The change to ExplainPrintPlan doesn't look good to me, because it
actually moves the initPlan; I don't think it's good for EXPLAIN to
mutate the plan state tree.  It should find a way to display the
results *as if* the initPlans were attached to the subnode, but
without actually moving them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Removing [Merge]Append nodes which contain a single subpath

2017-10-29 Thread Ashutosh Bapat
On Fri, Oct 27, 2017 at 12:49 AM, David Rowley
 wrote:
> On 27 October 2017 at 01:44, Ashutosh Bapat
>  wrote:
>> I think Antonin has a point. The join processing may deem some base
>> relations dummy (See populate_joinrel_with_paths()). So an Append
>> relation which had multiple child alive at the end of
>> set_append_rel_size() might ultimately have only one child after
>> partition-wise join has worked on it.
>
> hmm, I see. Partition-wise join is able to eliminate partitions on the
> other side of the join that can't be matched:
>
> # set enable_partition_wise_join=on;
> SET
> # explain select count(*) from ab ab1 inner join ab ab2 on ab1.a=ab2.a
> where ab1.a between 1 and 2 and ab2.a between 1 and 10001;
>QUERY PLAN
> 
>  Aggregate  (cost=0.00..0.01 rows=1 width=8)
>->  Result  (cost=0.00..0.00 rows=0 width=0)
>  One-Time Filter: false
> (3 rows)
>
> # \d+ ab
> Table "public.ab"
>  Column |  Type   | Collation | Nullable | Default | Storage | Stats
> target | Description
> +-+---+--+-+-+--+-
>  a  | integer |   |  | | plain   |  |
>  b  | integer |   |  | | plain   |  |
> Partition key: RANGE (a)
> Partitions: ab_p1 FOR VALUES FROM (1) TO (1),
> ab_p2 FOR VALUES FROM (1) TO (2)
>

IIUC, ab1_p2 and ab2_p1 are marked dummy by constraint exclusion.
Because of partition-wise join when the corresponding partitions are
joined, their inner joins are marked dummy resulting in an empty join.
This isn't the case of a join processing marking one of the joining
relations dummy, that I was referring to. But I think it's relevant
here since partition-wise join might create single child joins this
way.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-29 Thread Peter Geoghegan
On Sun, Oct 29, 2017 at 4:48 AM, Simon Riggs  wrote:
> I have no objection to you writing a better version than me and if my
> work inspires you to complete that in a reasonable timescale then we
> all win.

My whole point is that the way that you seem determined to go on this
is a dead end. I don't think that *anyone* can go improve on what you
come up with if that's based heavily on ON CONFLICT, for the simple
reason that the final user visible design is totally unclear. There is
an easy way to make me shut up - come up with a design for MERGE that
more or less builds on how UPDATE FROM works, rather than building
MERGE on ON CONFLICT. (You might base things like RLS handling on ON
CONFLICT, but in the main MERGE should be like an UPDATE FROM with an
outer join, that can do INSERTs and DELETEs, too.)

The original effort to add MERGE didn't do anything upsert-like, which
Heikki (the GSOC mentor of the project) was perfectly comfortable
with. I'm too lazy to go search the archives right now, but it's
there. Heikki cites the SQL standard.

This is what MERGE *actually is*, which you can clearly see from the
Oracle/SQL Server/DB2 docs. It says this in the first paragraph of
their MERGE documentation. It's crystal clear from their docs -- "This
statement is a convenient way to combine multiple operations. It lets
you avoid multiple INSERT, UPDATE, and DELETE DML statements."

> I'm also very happy to work together on writing the version
> described - you have already done much work in this area.

You seem to want to preserve the ON CONFLICT guarantees at great cost.
But you haven't even defended that based on a high level goal, or a
use case, or something that makes sense to users (describing how it is
possible is another matter). You haven't even tried to convince me.

> I don't see any major problems in points you've raised so far, though
> obviously there is much detail.

Did you even read them? They are not mere details. They're fundamental
to the semantics of the feature (if you base it on ON CONFLICT). It's
not actually important that you understand them all; the important
message is that generalizing ON CONFLICT has all kinds of terrible
problems.

> All of this needs to be written and then committed, so I'll get on and
> write my proposal. We can then see whether that is an 80% solution or
> something less. There are more obvious barriers to completion at this
> point, like time and getting on with it.

Getting on with *what*, exactly?

In general, I have nothing against an 80% solution, or even a 50%
solution, provided there is a plausible path to a 100% solution. I
don't think that you have such a path, but only because you're tacitly
inserting requirements that no other MERGE implementation has to live
with, that I doubt any implementation *could* live with. Again, I'm
not the one making this complicated, or adding requirements that will
be difficult for you to get in to your v1 -- you're the one doing
that.

The semantics that I suggest (the SQL standard's semantics) will
require less code, and will be far simpler. Right now, I simply don't
understand why you're insisting on using ON CONFLICT without even
saying why. I can only surmise that you think that doing so will
simplify the implementation, but I can guarantee you that it won't.

-- 
Peter Geoghegan




[HACKERS] PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-29 Thread Tomas Vondra
Hi,

while doing some weekend hacking & testing on the BRIN patches I posted
recently, I ran into PANICs in VACUUM, when it summarizes data inserted
concurrently (in another session):

PANIC:  invalid index offnum: 186

Initially I thought it was a bug in my patches, but apparently it's not,
and I can reproduce it easily on current master (846fcc8516).

I'm not sure if/how this is related to the BRIN autosummarization issue
reported by Justin Pryzby about two weeks ago (thread [1]), but I don't
see any segfaults and the messages are always exactly the same.

[1]
https://www.postgresql.org/message-id/flat/20171014035732.GB31726%40telsasoft.com

Reproducing the issue is simple:

create table brin_test (a int, b bigint, c float,
d double precision, e uuid);
create index on brin_test using brin (a);
create index on brin_test using brin (b);
create index on brin_test using brin (c);
create index on brin_test using brin (d);
create index on brin_test using brin (e);

and then run

insert into brin_test select
 mod(i,10)/25,
 mod(i,10)/25,
 mod(i,10)/25,
 mod(i,10)/25,
md5((mod(i,10)/25)::text)::uuid
from generate_series(1,10) s(i) \watch 0.1

vacuum brin_test \watch 1

And sooner or later (for me it usually takes only a minute or two) the
VACUUM session should fail with the PANIC mentioned above. It always
fails with the same message; only the offset value changes.

The stack trace always looks exactly the same - see the attachment.

At first it seemed that idxrel was always the index on 'e' (i.e. the UUID
column), but I have since seen failures on the other indexes too.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Core was generated by `postgres: postgres test [local] VACUUM   
 '.
Program terminated with signal SIGABRT, Aborted.
#0  0x7fcadc413167 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x7fcadc413167 in raise () from /lib64/libc.so.6
#1  0x7fcadc4145ea in abort () from /lib64/libc.so.6
#2  0x00a2409f in errfinish (dummy=0) at elog.c:557
#3  0x00a26734 in elog_finish (elevel=20, fmt=0xc2ee3d "invalid index 
offnum: %u") at elog.c:1378
#4  0x008a4e86 in PageIndexTupleDeleteNoCompact (page=0x7fcad4595080 
"\004", offnum=292) at bufpage.c:982
#5  0x00476b0c in brin_doupdate (idxrel=0x7fcad33700d8, 
pagesPerRange=128, revmap=0x21657f0, heapBlk=1676160, oldbuf=1442, oldoff=292, 
origtup=0x2169518, origsz=8, newtup=0x2165dc0, newsz=24, samepage=0 '\000') at 
brin_pageops.c:246
#6  0x00475b34 in summarize_range (indexInfo=0x2165940, 
state=0x21673d0, heapRel=0x7fcad336f050, heapBlk=1676160, heapNumBlks=1676189) 
at brin.c:1199
#7  0x00475ddd in brinsummarize (index=0x7fcad33700d8, 
heapRel=0x7fcad336f050, pageRange=4294967295, numSummarized=0x2166e20, 
numExisting=0x2166e20) at brin.c:1306
#8  0x00474e7d in brinvacuumcleanup (info=0x7ffdf5c28d70, 
stats=0x2166e10) at brin.c:794
#9  0x004f85d2 in index_vacuum_cleanup (info=0x7ffdf5c28d70, stats=0x0) 
at indexam.c:772
#10 0x006c59c1 in lazy_cleanup_index (indrel=0x7fcad33700d8, stats=0x0, 
vacrelstats=0x21656c0) at vacuumlazy.c:1651
#11 0x006c48f7 in lazy_scan_heap (onerel=0x7fcad336f050, options=1, 
vacrelstats=0x21656c0, Irel=0x2165550, nindexes=5, aggressive=0 '\000') at 
vacuumlazy.c:1334
#12 0x006c286e in lazy_vacuum_rel (onerel=0x7fcad336f050, options=1, 
params=0x7ffdf5c29450, bstrategy=0x218c710) at vacuumlazy.c:258
#13 0x006c2326 in vacuum_rel (relid=16385, relation=0x209ecd0, 
options=1, params=0x7ffdf5c29450) at vacuum.c:1543
#14 0x006c0942 in vacuum (options=1, relations=0x218c888, 
params=0x7ffdf5c29450, bstrategy=0x218c710, isTopLevel=1 '\001') at vacuum.c:340
#15 0x006c0455 in ExecVacuum (vacstmt=0x209edc0, isTopLevel=1 '\001') 
at vacuum.c:141
#16 0x008b4cb3 in standard_ProcessUtility (pstmt=0x209f110, 
queryString=0x209e2b0 "vacuum brin_test ", context=PROCESS_UTILITY_TOPLEVEL, 
params=0x0, queryEnv=0x0, dest=0x209f208, completionTag=0x7ffdf5c298d0 "") at 
utility.c:675
#17 0x008b4413 in ProcessUtility (pstmt=0x209f110, 
queryString=0x209e2b0 "vacuum brin_test ", context=PROCESS_UTILITY_TOPLEVEL, 
params=0x0, queryEnv=0x0, dest=0x209f208, completionTag=0x7ffdf5c298d0 "") at 
utility.c:357
#18 0x008b33b4 in PortalRunUtility (portal=0x211b9f0, pstmt=0x209f110, 
isTopLevel=1 '\001', setHoldSnapshot=0 '\000', dest=0x209f208, 
completionTag=0x7ffdf5c298d0 "") at pquery.c:1178
#19 0x008b35cc in PortalRunMulti (portal=0x211b9f0, isTopLevel=1 
'\001', setHoldSnapshot=0 '\000', dest=0x209f208, altdest=0x209f208, 
completionTag=0x7ffdf5c298d0 "") at pquery.c:1324
#20 0x008b2a96 in PortalRun (portal=0x211b9f0, 
count=9223372036854775807, isTopLevel=1 

Re: [HACKERS] Parallel safety for extern params

2017-10-29 Thread Robert Haas
On Sat, Oct 28, 2017 at 8:02 PM, Amit Kapila  wrote:
> I think we need to make changes in exec_simple_recheck_plan to make
> the behavior similar to HEAD.  With the attached patch, all tests
> passed with force_parallel_mode.

Committed to REL_10_STABLE.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel Hash take II

2017-10-29 Thread Thomas Munro
On Fri, Oct 27, 2017 at 12:24 AM, Rushabh Lathia
 wrote:
> While rebasing the parallel B-tree index build patch on top of the v22
> patch set, I found some cosmetic issues:

Thanks!

> 1) BufFileSetEstimate is removed but it's still in buffile.h
>
> +extern size_t BufFileSetEstimate(int stripes);

Fixed.

> 2) BufFileSetCreate was renamed to BufFileSetInit, but the old name is
> still used in this comment:
>
> * Attach to a set of named BufFiles that was created with BufFileSetCreate.

Fixed.

Other minor tweaks: I fixed a harmless warning from Visual C++ and
added a CHECK_FOR_INTERRUPTS() to
ExecParallelHashJoinPartitionOuter()'s loop.

Two other changes:

1.  Improved concurrency for sharedtuplestore.c.

Last week I investigated some test failures on AppVeyor CI and
discovered a small problem with sharedtuplestore.c on Windows: it
could occasionally get ERROR_ACCESS_DENIED errors when attempting to
open files that were concurrently being unlinked (unlinking is not
atomic on Windows, see pgsql-bugs #14243 for another manifestation).
That code was a bit sloppy (though not incorrect), and was easy to fix
by doing some checks in a different order, but...

While hacking on that I realised that sharedtuplestore.c's parallel
scan efficiency was pretty terrible anyway, so I made an improvement
that I'd earlier threatened to make in a comment.  Instead of holding
a per-file lock while reading individual tuples, it now works in
page-multiple-sized chunks.  Readers only acquire a spinlock when they
need a new chunk, don't hold any locks while doing IO, and never read
overlapping pages.  From a user perspective, this means that EXPLAIN
(BUFFERS) temporary buffer read counts are approximately the same as
for the equivalent non-parallel hash join, because each worker reads a
disjoint set of temporary file pages instead of reading interleaved
tuples from the same pages, and there is no more LWLock "shared
tuplestore" that can show up in wait_event when backends pile up on
the same batch.  It writes very slightly more than it reads because of
unread pages at the end of the final chunk (because it reads back in
tuple-at-a-time and thus page-at-a-time, not whole chunk at a time --
I considered reading whole chunks and then returning pointers to
MinimalTuples in the chunk, but that required MAXALIGNing the data in
the files on disk, which made the files noticeably bigger).

So, to summarise, there are now three layers of contention avoidance
strategy being used by Parallel Hash Join for scanning batches in
parallel:

i)  Participants in a Parallel Hash Join try to work on different
batches so they avoid scanning the same SharedTuplestore completely.
That's visible with EXPLAIN ANALYZE as "Batches Probed" (that shows
the number of outer batches scanned; it doesn't seem worth the pixels
to show "Batches Loaded" for the number of inner batches scanned which
may be lower).

ii)  When that's not possible, they start out reading from different
backing files by starting with the one they wrote themselves and then
go around the full set.  That means they don't contend on the per-file
read-head lock until a reader drains a whole file and then chooses
another one that's still being read by someone else.

iii)  [New in this version] Since they might still finish up reading
from the same file (and often do towards the end of a join), the
tuples are chopped into multi-page chunks and participants are
allocated chunks using a spinlock-protected counter.  This is quite a
lot like Parallel Sequential Scan, with some extra complications due
to variable sized chunks.
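With all three strategies in place, the effect is easiest to observe from
the outside with something like the following (relation names and settings
are hypothetical; the small work_mem is just to force multiple batches):

```sql
-- Force a multi-batch Parallel Hash Join and inspect its I/O.
SET max_parallel_workers_per_gather = 2;
SET work_mem = '1MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM big_fact f
JOIN big_dim d USING (id);
```

With this version of the patch, the temporary buffer read counts should be
close to those of the equivalent non-parallel hash join, since each worker
reads a disjoint set of temporary-file pages.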

2.  Improved oversized tuple handling.

I added a new regression test case to exercise the oversized tuple
support in ExecParallelHashLoadTuple() and sts_puttuple(), as
requested by Andres a while back.  (Thanks to Andrew Gierth for a
pointer on how to get detoasted tuples into a hash join table without
disabling parallelism.)  While testing that I realised that my
defences against some degenerate behaviour with very small work_mem
weren't quite good enough.  Previously, I adjusted space_allowed to
make sure every backend could allocate at least one memory chunk
without triggering an increase in the number of batches.  Now, I leave
space_allowed alone but explicitly allow every backend to allocate at
least one chunk ignoring the memory budget (whether regular chunk size
or oversized tuple size), to avoid futile repeated batch increases
when a single monster tuple is never going to fit in work_mem.

A couple of stupid things outstanding:

1.  EXPLAIN ANALYZE for Parallel Hash shows the complete row count as
"actual" -- which is interesting to know (or not? maybe I should show it
somewhere else?) -- but the cost estimate shows the partial row estimate
used for costing purposes.
2.  The BufFileSet's temporary directory gets created even if you
don't need it for batches.  Duh.
3.  I don't have a good rescan regression test query yet.  I wish I
could write my own query plans to test the 

Re: [HACKERS] Jsonb transform for pl/python

2017-10-29 Thread David Fetter
On Wed, Oct 25, 2017 at 02:51:00PM +0300, Anthony Bykov wrote:
> Hi.
> I've implemented jsonb transform
> (https://www.postgresql.org/docs/9.5/static/sql-createtransform.html)
> for pl/python. 
> 
> 1. '{"1":1}'::jsonb is transformed into dict {"1"=>1}, while
> '["1",2]'::jsonb is transformed into list(not tuple!) ["1", 2]
> 
> 2. If a numeric value appears in jsonb, it will be transformed
> to Decimal through a string (Numeric->String->Decimal). Not the best
> solution, but as far as I understand this is the usual practice in
> postgresql for serializing and de-serializing Numerics.
> 
> 3. Decimal is transformed into jsonb through string
> (Decimal->String->Numeric).
> 
> An example may also be helpful to understand extension. So, as an
> example, function "test" transforms incoming jsonb into python,
> transforms it back into jsonb and returns it.
> 
> create extension jsonb_plpython2u cascade;
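
For readers following along, such a round-trip function might look
something like this (a sketch based on the description above; the
function body is my illustration, not taken from the patch):

```sql
CREATE EXTENSION jsonb_plpython2u CASCADE;

-- With the transform installed, jsonb arrives as a Python
-- dict/list/Decimal and is converted back to jsonb on return.
CREATE FUNCTION test(val jsonb) RETURNS jsonb
LANGUAGE plpython2u
TRANSFORM FOR TYPE jsonb
AS $$
return val
$$;

SELECT test('{"1": 1}'::jsonb);
```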

Thanks for your hard work!

Should there also be one for PL/Python3U?

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




[HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-29 Thread Tomas Vondra
FWIW I can reproduce this on REL_10_STABLE, but not on REL9_6_STABLE. So
it seems to be due to something that changed in the last release.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] GSoC 2017: Foreign Key Arrays

2017-10-29 Thread Andreas Karlsson

Sorry for the very late review.

I like this feature and have needed it myself in the past, and the 
current syntax seems pretty good. One could ask whether the syntax 
could be generalized to support other types like json and hstore, but I 
do not think it would be fair to block this patch on that.


== Limitations of the current design

1) Array element foreign keys can only be specified at the table level 
(not on columns): I think this limitation is fine. Other PostgreSQL-
specific features like exclusion constraints can also only be specified 
at the table level.


2) Lack of support for SET NULL and SET DEFAULT: these do not seem very 
useful for arrays.


3) Lack of support for specifying multiple arrays in the foreign key: 
seems like a good thing to me, since it is not obvious what such a thing 
would even do.


4) That you need to add a cast to the index if you have different types: 
due to there not being an int4[] <@ int2[] operator, you need to add an 
index on (col::int4[]) to speed up deletes and updates. This one is 
annoying, since EXPLAIN won't show you the query plans for the foreign key 
queries, but I do not think fixing this is within the scope of the patch, 
and having a smaller integer type in the referring table should be rare.
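
A sketch of the workaround, with hypothetical table names (assuming a
referring table fk with an int2[] column ys referencing int4 keys; GIN
supports <@ on arrays via its default array operator class):

```sql
-- Index the casted expression so the foreign key's containment
-- checks can use it despite the int2[]/int4[] type mismatch.
CREATE INDEX ON fk USING gin ((ys::int4[]));
```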


5) The use of count(DISTINCT) requiring types to support btree equality: 
this has been discussed a lot up-thread and I think the current state is 
good enough.


== Functional review

I have played around some with it and things seem to work and the test 
suite passes, but I noticed a couple of strange behaviors.


1) MATCH FULL does not seem to care about NULLs in arrays. In the 
example below I expected both inserts into the referring table to fail.


CREATE TABLE t (x int, y int, PRIMARY KEY (x, y));
CREATE TABLE fk (x int, ys int[], FOREIGN KEY (x, EACH ELEMENT OF ys) 
REFERENCES t MATCH FULL);

INSERT INTO t VALUES (10, 1);
INSERT INTO fk VALUES (10, '{1,NULL}');
INSERT INTO fk VALUES (NULL, '{1}');

CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
ERROR:  insert or update on table "fk" violates foreign key constraint 
"fk_x_fkey"

DETAIL:  MATCH FULL does not allow mixing of null and nonnull key values.

2) To me it was not obvious that ON DELETE CASCADE would delete the 
whole rows rather than remove the matching elements from the arrays, and 
this kind of misunderstanding can lead to pretty bad surprises in 
production. I am leaning towards not supporting CASCADE.


== The @>> operator

A previous version of your patch added the "anyelement <<@ anyarray" 
operator to avoid having to build arrays, but that part was reverted due 
to a bug.
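
For reference, the reverted operators (names as in that earlier version of
the patch) would have allowed element-versus-array tests without building
a one-element array:

```sql
-- Proposed operators from the reverted part of the patch:
SELECT 2 <<@ ARRAY[1,2,3];   -- instead of ARRAY[2] <@ ARRAY[1,2,3]
SELECT ARRAY[1,2,3] @>> 2;
```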


I am no expert on the GIN code, but as far as I can tell it would be 
relatively simple to fix that bug: just allocate an array of one Datum 
and put the element you are searching for in it (or maybe a copy of it).


Potential issues with adding the operators:

1) Do we really want to add an operator just for array element foreign 
keys? I think this is not an issue since it seems like it should be 
useful in general. I know I have wanted it myself.


2) I am not sure, but the committers might prefer if adding the 
operators is done in a separate patch.


3) Bikeshedding about operator names. I personally think @>> is clear 
enough and as far as I know it is not used for anything else.


== Code review

The patch no longer applies to HEAD, but the conflicts are small.

I think we should be more consistent in the naming, both in code and in 
the documentation. Right now we have "array foreign keys", "element 
foreign keys", "ELEMENT foreign keys", etc.


+   /*
+    * If this is an array foreign key, we must look up the operators for
+    * the array element type, not the array type itself.
+    */
+   if (fkreftypes[i] != FKCONSTR_REF_PLAIN)

+   if (fkreftypes[i] != FKCONSTR_REF_PLAIN)
+   {
+   old_fktype = get_base_element_type(old_fktype);
+   /* this shouldn't happen ... */
+   if (!OidIsValid(old_fktype))
+   elog(ERROR, "old foreign key column is not an array");
+   }

+   if (riinfo->fk_reftypes[i] != FKCONSTR_REF_PLAIN)
+   {
+   riinfo->has_array = true;
+   riinfo->ff_eq_oprs[i] = ARRAY_EQ_OP;
+   }

In the three diffs above it would be much cleaner to check for "== 
FKCONSTR_REF_EACH_ELEMENT", since that better conveys the intent and is 
safer if new reference types are added in the future.


+   /* We look through any domain here */
+   fktype = get_base_element_type(fktype);

What does the comment above mean?

if (!(OidIsValid(pfeqop) && OidIsValid(ffeqop)))
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
-errmsg("foreign key constraint \"%s\" "
-   "cannot be implemented",
-   fkconstraint->conname),
-errdetail("Key columns \"%s\" and 

Re: [HACKERS] git down

2017-10-29 Thread Andreas Karlsson

On 10/27/2017 10:51 PM, Erik Rijkers wrote:
> git.postgresql.org is down/unreachable
> 
> ( git://git.postgresql.org/git/postgresql.git )


Yes, I noticed this too, but 
https://git.postgresql.org/git/postgresql.git still works fine.


I guess it makes sense to remove unencrypted access, but in that case 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary should not 
advertise supporting the git protocol. I have not seen any announcement 
either, but that could just be me not paying enough attention.


Andreas


