[HACKERS] Bug in intarray bench script

2016-05-17 Thread Daniel Gustafsson
In the contrib/intarray benchmarking script bench.pl, the -e option to print
the plan via EXPLAIN is using the DBI do() method, which discards the output,
resulting in nothing being printed.  Judging by the usage help (“show explain”)
I assume the intention is to print the plan to STDOUT when invoked (which is
exactly what I wanted in this case, thus bumping into this) but at least on
recent DBI/DBD::Pg versions the do() method is not returning the rows.

The attached patch uses the selectall_arrayref() method instead and outputs
the plan when -e is invoked.

cheers ./daniel



intarray_bench.diff
Description: Binary data



Re: [HACKERS] Declarative partitioning

2016-05-17 Thread Amit Langote

Hi Ildar,

On 2016/05/16 22:12, Ildar Musin wrote:
> Hi Amit,
> 
> I'm running some experiments based on your infrastructure trying to
> optimize SELECT queries. At some point I need to get PartitionDesc for
> relation and to do it I'm using RelationGetPartitionDesc() function.
> The problem is that this function copies relcache data, and it can be quite
> slow for large numbers (thousands) of partitions. The comment on the
> function says that we cannot use the relation->rd_partdesc pointer into the
> relcache because of the possibility of relcache invalidation. Could you
> please tell me: is it possible that relcache invalidation occurs during a
> SELECT/UPDATE/DELETE query?

Hmm, I think invalidation would not occur mid-query since it would have
acquired a lock on the table.  So the copying may be unnecessary though I
may be wrong.  I will study other similar code and see if that is so.

Thanks,
Amit






Re: [HACKERS] HeapTupleSatisfiesToast() busted? (was atomic pin/unpin causing errors)

2016-05-17 Thread Jim Nasby

On 5/10/16 4:12 PM, Andres Freund wrote:

> The catalog representation (as in pg_class.reltoastrelid) isn't entirely
> clear to me.  One way would be to invert pg_class.reltoastrelid's
> meaning and have the toast table point to the table it stores values
> for. That'd also open the potential of having one toast table per column
> and such.


FWIW, toast-per-column is something I have a use case for.

Can we also consider using a per-toast-table sequence instead of OID? 
IIRC the generation mechanics of the two are similar, and that would 
greatly reduce the pressure on OID generation.


Tom, were you around when sequences were added? I'm guessing that that 
was done in response to OIDs becoming a serious problem in user tables 
on larger installs; ISTM this is just the next iteration of them being a 
problem. (And I suspect the one after this will be pg_attribute or maybe 
pg_depend).

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] 10.0

2016-05-17 Thread David Fetter
On Tue, May 17, 2016 at 01:45:09PM +0800, Craig Ringer wrote:
> On 14 May 2016 at 02:49, Tom Lane  wrote:
> > * This year's major release will be 9.6.0, with minor updates 9.6.1,
> > 9.6.2, etc.  It's too late to do otherwise for this release cycle.
> >
> > * Next year's major release will be 10.0, with minor updates 10.1,
> > 10.2, etc.
> >
> > * The year after, 11.0.  Et cetera.
> >
> >
> Yes. Please!
> 
> I get tired of explaining to people that PostgreSQL "9.x" isn't a thing,
> that yes, 9.3 and 9.4 really _do_ have incompatible data directories and
> replication protocols, and that when the docs say "major version" they
> don't mean "major version as you might actually expect" but "first two
> version number parts".
> 
> Let's get rid of this user-baffling wart.

Hear, hear!

Sadly, we're too late for 9.6, but we can start with 10.0 and finish
this silliness once and for good.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Backup doc typo

2016-05-17 Thread Magnus Hagander
On Mon, May 16, 2016 at 6:08 AM, Amit Langote  wrote:

> Hi,
>
> Attached patch adds missing "is" in a sentence in backup.sgml.
>


Applied, thanks!

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Does Type Have = Operator?

2016-05-17 Thread Jim Nasby

On 5/12/16 4:25 PM, David E. Wheeler wrote:

> On May 12, 2016, at 12:02 PM, Tom Lane  wrote:
>
>> Andrew mentions in the extension you pointed to that providing a default
>> comparison operator would enable people to do UNION, DISTINCT, etc on JSON
>> columns without thinking about it.  I'm not convinced that "without
>> thinking about it" is a good thing here.  But if we were going to enable
>> that, I'd feel better about making it default to jsonb semantics ...
>
> If you want the JSONB semantics, why wouldn’t you use JSONB instead of JSON?


Probably in an attempt to bypass parse overhead on ingestion.

Possibly because JSONB silently eats duplicated keys while JSON doesn't 
(though in that case even casting to JSONB is probably not what you want).
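
For illustration, the duplicate-key difference looks roughly like this
(assuming stock json/jsonb behavior):

    SELECT '{"a": 1, "a": 2}'::json;   -- kept verbatim: {"a": 1, "a": 2}
    SELECT '{"a": 1, "a": 2}'::jsonb;  -- last key wins: {"a": 2}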

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [HACKERS] Does Type Have = Operator?

2016-05-17 Thread Jim Nasby

On 5/11/16 7:05 PM, David E. Wheeler wrote:

> On May 11, 2016, at 10:34 AM, Kevin Grittner  wrote:
>
>> I'm not clear enough on your intended usage to know whether these
>> operators are a good fit, but they are sitting there waiting to be
>> used if they do fit.
>
> Huh. I haven’t had any problems with IS DISTINCT FROM for rows, except for the
> situation in which a failure is thrown because the types vary, say between TEXT
> and CITEXT. That can drive the tester crazy, since it says something like:
>
> Results differ beginning at row 3:
> have: (44,Anna)
> want: (44,Anna)
>
> But overall I think that’s okay; the tester really does want to make sure the
> type is correct.


Speaking specifically to is(), what I'd find most useful is if it at 
least hinted that there might be some type shenanigans going on, because 
I've run across something like your example more than once and it always 
takes a lot to finally figure out WTF is going on.


I think it'd also be useful to be able to specify an equality operator 
to is(), though that means not using IS DISTINCT.


Something else to keep in mind here is that is() is defined as 
is(anyelement, anyelement, text), which means you've lost your original 
type information when you use it. I don't think you could actually do 
anything useful here because of that.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




[HACKERS] Parameters don't work in FETCH NEXT clause?

2016-05-17 Thread Shay Rojansky
A user of mine just raised a strange issue... While it is possible to use a
parameter in a LIMIT clause, PostgreSQL does not seem to allow using one in
a FETCH NEXT clause. In other words, while the following works:

SELECT 1 LIMIT $1;

The following generates a syntax error:

SELECT 1 FETCH NEXT $1 ROWS ONLY;

Since LIMIT and FETCH NEXT are supposed to be equivalent this behavior is
odd.

More generally, is there some documentation on where exactly PostgreSQL
allows parameters and where it doesn't? I occasionally get complaints from
users expecting parameters to work in DDL, or even in table/column names in
SELECT queries... I haven't seen a resource like this.


Re: [HACKERS] Does Type Have = Operator?

2016-05-17 Thread David E. Wheeler
On May 17, 2016, at 7:58 AM, Jim Nasby  wrote:

> Probably in an attempt to bypass parse overhead on ingestion.
> 
> Possibly because JSONB silently eats duplicated keys while JSON doesn't 
> (though in that case even casting to JSONB is probably not what you want).

It’s also when you’d want text equivalent semantics.

Best,

David





Re: [HACKERS] Does Type Have = Operator?

2016-05-17 Thread David E. Wheeler
Sorry for the pgTAP off-topicness here, hackers. Please feel free to ignore.

On May 17, 2016, at 8:10 AM, Jim Nasby  wrote:

> Speaking specifically to is(), what I'd find most useful is if it at least 
> hinted that there might be some type shenanigans going on, because I've run 
> across something like your example more than once and it always takes a lot 
> to finally figure out WTF is going on.

Agreed. Same for the relation testing functions. Maybe some additional 
diagnostics could be added in the event of failure.

> I think it'd also be useful to be able to specify an equality operator to 
> is(), though that means not using IS DISTINCT.

You can use cmp_ok().

  http://pgxn.org/dist/pgtap/doc/pgtap.html#cmp_ok.

> Something else to keep in mind here is that is() is defined as is(anyelement, 
> anyelement, text), which means you've lost your original type information 
> when you use it. I don't think you could actually do anything useful here 
> because of that.

pg_typeof() will give it to you.
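
A minimal sketch of both suggestions, assuming pgTAP is installed and using
made-up values:

    -- cmp_ok() takes an explicit operator name:
    SELECT cmp_ok('Anna'::text, '=', 'Anna'::text, 'names match');
    -- pg_typeof() reports the actual type of a value:
    SELECT pg_typeof('Anna'::text);   -- text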

Best,

David





Re: [HACKERS] Re: [COMMITTERS] pgsql: Correctly align page's images in generic wal API

2016-05-17 Thread Tom Lane
Teodor Sigaev  writes:
>> Instead of allocating this memory unconditionally for each buffer,
>> wouldn't it be better to set all the page pointers to NULL in
>> GenericXLogStart and allocate memory only once a buffer is registered
>> in GenericXLogRegisterBuffer when finding a free slot? This patch is
>> wasting many cycles.

> GenericXLogRegisterBuffer() could be called in another MemoryContext, which
> can be a reason for strange bugs. Right now only a few pages can be
> involved in one round of generic WAL. I don't believe that such an allocation
> could be a reason for noticeable performance degradation, although I didn't
> check that.

You could allocate all the pages in one palloc, though, and then just
set the pointers via address arithmetic.

regards, tom lane




Re: [HACKERS] 10.0

2016-05-17 Thread David Steele

On 5/17/16 10:51 AM, David Fetter wrote:

> On Tue, May 17, 2016 at 01:45:09PM +0800, Craig Ringer wrote:
>> On 14 May 2016 at 02:49, Tom Lane  wrote:
>>> * This year's major release will be 9.6.0, with minor updates 9.6.1,
>>> 9.6.2, etc.  It's too late to do otherwise for this release cycle.
>>>
>>> * Next year's major release will be 10.0, with minor updates 10.1,
>>> 10.2, etc.
>>>
>>> * The year after, 11.0.  Et cetera.
>>>
>>>
>> Yes. Please!
>
> Hear, hear!
>
> Sadly, we're too late for 9.6, but we can start with 10.0 and finish
> this silliness once and for good.

+1!

--
-David
da...@pgmasters.net




Re: [HACKERS] Postgres_fdw join pushdown - getting server crash in left outer join of three table

2016-05-17 Thread Rajkumar Raghuwanshi
Thanks for the commit. I have tested it again and am not getting the server crash now.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation

On Mon, May 16, 2016 at 9:38 PM, Robert Haas  wrote:

> On Fri, May 13, 2016 at 6:40 PM, Michael Paquier
>  wrote:
> > On Fri, May 13, 2016 at 11:14 PM, Robert Haas 
> wrote:
> >> So, barring objections, I intend to apply the attached fixup patch,
> >> which replaces Michael's logic with Ashutosh's logic and rewrites the
> >> comments such to be much more explicit.
> >
> > Re-oops. I didn't check what was committed to be honest. And it should
> > not have been my version, definitely.
>
> OK, committed.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Parameters don't work in FETCH NEXT clause?

2016-05-17 Thread Tom Lane
Shay Rojansky  writes:
> A user of mine just raised a strange issue... While it is possible to use a
> parameter in a LIMIT clause, PostgreSQL does not seem to allow using one in
> a FETCH NEXT clause. In other words, while the following works:
> SELECT 1 LIMIT $1;
> The following generates a syntax error:
> SELECT 1 FETCH NEXT $1 ROWS ONLY;
> Since LIMIT and FETCH NEXT are supposed to be equivalent this behavior is
> odd.

Per the SELECT reference page:

SQL:2008 introduced a different syntax to achieve the same result,
which PostgreSQL also supports. It is:

OFFSET start { ROW | ROWS }
FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY

In this syntax, to write anything except a simple integer constant for
start or count, you must write parentheses around it.

The comments about this in gram.y are informative:

 * Allowing full expressions without parentheses causes various parsing
 * problems with the trailing ROW/ROWS key words.  SQL only calls for
 * constants, so we allow the rest only with parentheses.  If omitted,
 * default to 1.
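
In other words, the parenthesized form should be accepted where the bare
parameter is not; a minimal illustration:

    SELECT 1 FETCH NEXT ($1) ROWS ONLY;   -- parenthesized parameter, accepted
    SELECT 1 FETCH NEXT $1 ROWS ONLY;     -- bare parameter, syntax error as reported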

regards, tom lane




Re: [HACKERS] 10.0

2016-05-17 Thread Jaime Casanova
On 17 May 2016 at 10:37, David Steele  wrote:
> On 5/17/16 10:51 AM, David Fetter wrote:
>
>> On Tue, May 17, 2016 at 01:45:09PM +0800, Craig Ringer wrote:
>>> On 14 May 2016 at 02:49, Tom Lane  wrote:
>>>> * This year's major release will be 9.6.0, with minor updates 9.6.1,
>>>> 9.6.2, etc.  It's too late to do otherwise for this release cycle.
>>>>
>>>> * Next year's major release will be 10.0, with minor updates 10.1,
>>>> 10.2, etc.
>>>>
>>>> * The year after, 11.0.  Et cetera.
>>>>
>>>>
>>> Yes. Please!
>>
>> Hear, hear!
>>
>> Sadly, we're too late for 9.6, but we can start with 10.0 and finish
>> this silliness once and for good.
>
> +1!
>

+1

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Parameters don't work in FETCH NEXT clause?

2016-05-17 Thread Shay Rojansky
Apologies, as usual I didn't read the docs carefully enough.

On Tue, May 17, 2016 at 7:13 PM, Tom Lane  wrote:

> Shay Rojansky  writes:
> > A user of mine just raised a strange issue... While it is possible to
> use a
> > parameter in a LIMIT clause, PostgreSQL does not seem to allow using one
> in
> > a FETCH NEXT clause. In other words, while the following works:
> > SELECT 1 LIMIT $1;
> > The following generates a syntax error:
> > SELECT 1 FETCH NEXT $1 ROWS ONLY;
> > Since LIMIT and FETCH NEXT are supposed to be equivalent this behavior is
> > odd.
>
> Per the SELECT reference page:
>
> SQL:2008 introduced a different syntax to achieve the same result,
> which PostgreSQL also supports. It is:
>
> OFFSET start { ROW | ROWS }
> FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY
>
> In this syntax, to write anything except a simple integer constant for
> start or count, you must write parentheses around it.
>
> The comments about this in gram.y are informative:
>
>  * Allowing full expressions without parentheses causes various parsing
>  * problems with the trailing ROW/ROWS key words.  SQL only calls for
>  * constants, so we allow the rest only with parentheses.  If omitted,
>  * default to 1.
>
> regards, tom lane
>


Re: [HACKERS] A couple of cosmetic changes around shared memory code

2016-05-17 Thread Tom Lane
Michael Paquier  writes:
> On Tue, May 17, 2016 at 4:40 AM, Piotr Stefaniak
>  wrote:
> -toc_bytes = offsetof(shm_toc, toc_entry) +nentry * sizeof(shm_toc_entry)
> +toc_bytes = offsetof(shm_toc, toc_entry) + nentry * sizeof(shm_toc_entry)
>  + allocated_bytes;

> I don't recall the exact reason, but this is intentional style
> (memories from a patchwork with Tom).

Well, it's not so much intentional as that pgindent will make it look like
that no matter what you do --- it's got some weird interaction with
sizeof, offsetof, and typedef names versus operators later on the same
line.  I'd call that a pgindent bug myself, but have no particular desire
to try to fix it.

You could possibly make it look nicer by splitting into multiple lines,
but you'll need to try pgindent'ing to see if you've actually improved the
end result at all.

regards, tom lane




Re: [HACKERS] HeapTupleSatisfiesToast() busted? (was atomic pin/unpin causing errors)

2016-05-17 Thread David Fetter
On Tue, May 10, 2016 at 02:06:19PM -0400, Tom Lane wrote:
> Sooner or later we are going to need to go to 8-byte TOAST object
> identifiers.  Maybe we should think about doing that sooner not
> later rather than trying to invent some anti-wraparound solution
> here.

Yay!  Is there any lift in separating TOAST OIDs from the rest?

> In principle, you could support existing TOAST tables and pointers
> containing 4-byte IDs in parallel with the new ones.

> Not sure how pg_upgrade would handle it exactly though.

This is yet another reason we should get away from in-place binary
upgrade as a strategy.  It's always been fragile, and it's only ever
been justifiable on grounds of hardware economics that no longer
obtain.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Declarative partitioning

2016-05-17 Thread Tom Lane
Amit Langote  writes:
> On 2016/05/16 22:12, Ildar Musin wrote:
>> Could you please tell me: is it possible that relcache invalidation occurs
>> during a SELECT/UPDATE/DELETE query?

> Hmm, I think invalidation would not occur mid-query since it would have
> acquired a lock on the table.

This is incorrect: invalidation can occur anyway (for example, if
autoanalyze updates the relation's statistics).  If you are holding
a lock, you can expect that the relation's schema will not change more
than your lock would allow --- but a cache flush and rebuild could happen
underneath you, so keeping a pointer to any subsidiary relcache data
structure is very dangerous.

The two ways that we've dealt with this type of hazard are to copy data
out of the relcache before using it; or to give the relcache the
responsibility of not moving a particular portion of data if it did not
change.  From memory, the latter applies to the tuple descriptor and
trigger data, but we've done most other things the first way.

regards, tom lane




[HACKERS] seg fault in contrib/bloom

2016-05-17 Thread Jeff Janes
I'm getting seg faults on contrib/bloom when updating a tuple which
was found via a bloom index.

It does not happen on every update, but it does happen within a few
seconds of run time, so it is readily reproducible.  The test harness
is a bit of a mess, I'll try to clean it up and post it if no one
spots the bug soon via looking at the stack trace below.

Obviously scan->opaque is null, but I don't know why it is null and
whether blendscan is obliged to deal with nulls, or if index_endscan
is obliged not to send them.

No crash/recovery cycles are necessary to reach this bug.

Program terminated with signal 11, Segmentation fault.
#0  blendscan (scan=0x248bf40) at blscan.c:79
79  if (so->sign)
(gdb) bt
#0  blendscan (scan=0x248bf40) at blscan.c:79
#1  0x004a9115 in index_endscan (scan=0x248bf40) at indexam.c:339
#2  0x005d5839 in ExecEndBitmapIndexScan (node=) at nodeBitmapIndexscan.c:183
#3  0x005d4e57 in ExecEndBitmapHeapScan (node=0x24894d8) at
nodeBitmapHeapscan.c:508
#4  0x005c1585 in EvalPlanQualEnd (epqstate=0x246d078) at
execMain.c:2889
#5  0x005ddc17 in ExecEndModifyTable (node=0x246cfd0) at
nodeModifyTable.c:1972
#6  0x005c2e9e in ExecEndPlan (queryDesc=) at execMain.c:1451
#7  standard_ExecutorEnd (queryDesc=) at execMain.c:468
#8  0x006daac0 in ProcessQuery (plan=0x2478e20,
sourceText=0x23e4770 "update foo set count=count+1 where bloom=$1",
params=0x23e47e0,
dest=, completionTag=0x7ffed6dff960 "UPDATE
1") at pquery.c:230
#9  0x006dac9f in PortalRunMulti (portal=0x23d43b0,
isTopLevel=1 '\001', dest=0xc05660, altdest=0xc05660,
completionTag=0x7ffed6dff960 "UPDATE 1")
at pquery.c:1267
#10 0x006db32a in PortalRun (portal=0x23d43b0,
count=9223372036854775807, isTopLevel=1 '\001', dest=0x2437c60,
altdest=0x2437c60,
completionTag=0x7ffed6dff960 "UPDATE 1") at pquery.c:813
#11 0x006d9612 in exec_execute_message (argc=, argv=, dbname=0x23df128 "jjanes",
username=) at postgres.c:1979
#12 PostgresMain (argc=, argv=, dbname=0x23df128 "jjanes", username=) at
postgres.c:4122
#13 0x00676a05 in BackendRun (argc=,
argv=) at postmaster.c:4258
#14 BackendStartup (argc=, argv=) at postmaster.c:3932
#15 ServerLoop (argc=, argv=) at postmaster.c:1690
#16 PostmasterMain (argc=, argv=) at postmaster.c:1298
#17 0x005ff9e8 in main (argc=4, argv=0x23b5d50) at main.c:228

Cheers,

Jeff




[HACKERS] Jsonb array-style subscripting, generic version

2016-05-17 Thread Dmitry Dolgov
Hi

With regard to previous conversations:

http://www.postgresql.org/message-id/flat/CA+q6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf=g...@mail.gmail.com#CA+q6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf=g...@mail.gmail.com

http://www.postgresql.org/message-id/flat/ca+q6zcx3mdxgcgdthzuyswh-apyhhm-g4ob1r0fn0j2hzqq...@mail.gmail.com#ca+q6zcx3mdxgcgdthzuyswh-apyhhm-g4ob1r0fn0j2hzqq...@mail.gmail.com

I want to try the following approach to make array-style subscripting more
generic and allow using it for different types with less effort:

* Introduce a generic node type instead of `ArrayRef` (and `JsonbRef` from the
  patch) [SubscriptingRef]
* Make generic versions of `transformArraySubscript` /
  `transformAssignmentSubscript` / `ExecEvalArrayRef` etc.
  [transformSubscripting / ExecEvalSubscripting etc.]
* Introduce a new pg_type column of type `regproc` that points to a function
  handling all type-related logic for the generic `ExecEval` function
  [typsubscripting]. If the value of this column is null, the type doesn't
  support array-style subscripting.

There is still the question of indexing such expressions. To be honest, I
haven't figured out in detail yet how to do it (except for a simple btree
index on each path, like `create index jsonb_data_idx on
jsonb_table ((jsonb_data['key']))`).
But I believe that this can be addressed subsequently, since for reading
data the array-style subscripting is no more than an alias or syntactic
sugar.
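
To make the intended usage concrete, the array-style syntax under discussion
would look roughly like this (hypothetical table and column names; the
assignment form is the main motivation for jsonb):

    -- read access, essentially sugar over the existing operators
    SELECT jsonb_data['key'] FROM jsonb_table;

    -- write access
    UPDATE jsonb_table SET jsonb_data['key'] = '"new value"';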

So I have a few questions:

* Does the whole plan look OK?
* Any suggestions about names (especially for the column in pg_type)?
* Is it OK to implement indexing separately (since the main purpose of
  array-style subscripting for jsonb is the update operation)?


Re: [HACKERS] Parameters don't work in FETCH NEXT clause?

2016-05-17 Thread David G. Johnston
On Tue, May 17, 2016 at 12:15 PM, Shay Rojansky  wrote:

> Apologies, as usual I didn't read the docs carefully enough.
>
> On Tue, May 17, 2016 at 7:13 PM, Tom Lane  wrote:
>
>> Shay Rojansky  writes:
>> > A user of mine just raised a strange issue... While it is possible to
>> use a
>> > parameter in a LIMIT clause, PostgreSQL does not seem to allow using
>> one in
>> > a FETCH NEXT clause. In other words, while the following works:
>> > SELECT 1 LIMIT $1;
>> > The following generates a syntax error:
>> > SELECT 1 FETCH NEXT $1 ROWS ONLY;
>> > Since LIMIT and FETCH NEXT are supposed to be equivalent this behavior
>> is
>> > odd.
>>
>> Per the SELECT reference page:
>>
>> SQL:2008 introduced a different syntax to achieve the same result,
>> which PostgreSQL also supports. It is:
>>
>> OFFSET start { ROW | ROWS }
>> FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY
>>
>> In this syntax, to write anything except a simple integer constant for
>> start or count, you must write parentheses around it.
>>
>> The comments about this in gram.y are informative:
>>
>>  * Allowing full expressions without parentheses causes various parsing
>>  * problems with the trailing ROW/ROWS key words.  SQL only calls for
>>  * constants, so we allow the rest only with parentheses.  If omitted,
>>  * default to 1.
>>
>
Would something like this be valid?

OFFSET { start_literal | ( start_expression ) } { ROW | ROWS }
FETCH { FIRST | NEXT } [ count_literal | ( count_expression ) ] { ROW | ROWS } ONLY

Leaving the mandatory parentheses detail to the description, while
adequate, seems insufficient - especially when a normal LIMIT expression is
not so restricted.

And don't you think the section header would be more accurately named:

Limit, Offset & Fetch Clauses

The nuance regarding "different standard syntax" is unknown to the reader
who first looks at the syntax and sees three different lines, one for each
clause, and then scrolls down looking at headers until they find the
section for the clause they are interested in.  That FETCH is an alias for
LIMIT is not something that I immediately understood - though to be honest
I don't think I comprehended the presence of FETCH on a SELECT query at all
and thought it only pertained to cursors.

David J.


Re: [HACKERS] seg fault in contrib/bloom

2016-05-17 Thread Tom Lane
Jeff Janes  writes:
> I'm getting seg faults on contrib/bloom when updating a tuple which
> was found via a bloom index.
> It does not happen on every update, but it does happen within a few
> seconds of run time, so it is readily reproducible.  The test harness
> is a bit of a mess, I'll try to clean it up and post it if no one
> spots the bug soon via looking at the stack trace below.
> Obviously scan->opaque is null, but I don't know why it is null and
> whether blendscan is obliged to deal with nulls, or if index_endscan
> is obliged not to send them.

I'm guessing there is a code path that can get from blbeginscan to
blendscan without going through blrescan.  I'm not sure why it'd
only happen sometimes, though.

regards, tom lane




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Masahiko Sawada
On Mon, May 16, 2016 at 10:49 AM, Robert Haas  wrote:
> On Tue, May 10, 2016 at 10:40 PM, Masahiko Sawada  
> wrote:
>> Or second way I came up with is having tool to remove particular _vm
>> file safely, which is executed via SQL or client tool like
>> pg_resetxlog.
>>
>> Attached updated VACUUM SCAN_ALL patch.
>> Please find it.
>
> We should support scan_all only with the new-style options syntax for
> VACUUM; that is, vacuum (scan_all) rename.  That doesn't require
> making scan_all a keyword, which is good: this is a minor feature, and
> we don't want to bloat the parsing tables for it.
>

I agree with having new-style options syntax.
Isn't it better to have SCAN_ALL option without parentheses?

Syntaxes are;
VACUUM SCAN_ALL table_name;
VACUUM SCAN_ALL;  -- for all tables on database

Regards,

--
Masahiko Sawada




[HACKERS] Parallel query and temp_file_limit

2016-05-17 Thread Peter Geoghegan
temp_file_limit "specifies the maximum amount of disk space that a
session can use for temporary files, such as sort and hash temporary
files", according to the documentation. That's not true when parallel
query is in use, since the global variable temporary_files_size
receives no special treatment for parallel query. Enforcement of
temp_file_limit will therefore not work *per-session*.
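
For reference, the setting itself is an ordinary session-level GUC
(superuser-settable), e.g.:

    SET temp_file_limit = '1GB';   -- cap temporary file space for this session
    SHOW temp_file_limit;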

-- 
Peter Geoghegan




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Alvaro Herrera
Masahiko Sawada wrote:
> On Mon, May 16, 2016 at 10:49 AM, Robert Haas  wrote:

> > We should support scan_all only with the new-style options syntax for
> > VACUUM; that is, vacuum (scan_all) rename.  That doesn't require
> > making scan_all a keyword, which is good: this is a minor feature, and
> > we don't want to bloat the parsing tables for it.
> 
> I agree with having new-style options syntax.
> Isn't it better to have SCAN_ALL option without parentheses?
> 
> Syntaxes are;
> VACUUM SCAN_ALL table_name;
> VACUUM SCAN_ALL;  -- for all tables on database

No, I agree with Robert that we shouldn't add any more such options to
avoid keyword proliferation.

 Syntaxes are;
 VACUUM (SCAN_ALL) table_name;
 VACUUM (SCAN_ALL);  -- for all tables on database

Is SCAN_ALL really the best we can do here?  The business of having an
underscore in an option name has no precedent (other than
CURRENT_DATABASE and the like).  How about COMPLETE, TOTAL, or WHOLE?

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Re: [COMMITTERS] pgsql: Allocate all page images at once in generic wal interface

2016-05-17 Thread Teodor Sigaev

Allocate all page images at once in generic wal interface

That reduces number of allocation.

Per gripe from Michael Paquier and Tom Lane suggestion.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/7c979c95a3700d0bd34c2831f49a9260d505b0f9

Modified Files
--
src/backend/access/transam/generic_xlog.c | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)


It seems this patch isn't liked by the curculio [1] buildfarm member, but I'm
confused by the diagnostics:
2016-05-17 21:43:19.489 CEST [573b7457.547c:3] LOG:  statement: CREATE EXTENSION 
bloom;
2016-05-17 21:43:19.501 CEST [573b7457.547c:4] ERROR:  syntax error in file 
"/home/pgbf/buildroot/HEAD/inst/share/postgresql/extension/bloom.control" line 
1, near token ""


Could somebody explain to me what's going on?

Thank you

[1] 
http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=curculio=2016-05-17+19%3A30%3A09

--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Masahiko Sawada
On Tue, May 17, 2016 at 3:32 PM, Alvaro Herrera
 wrote:
> Masahiko Sawada wrote:
>> On Mon, May 16, 2016 at 10:49 AM, Robert Haas  wrote:
>
>> > We should support scan_all only with the new-style options syntax for
>> > VACUUM; that is, vacuum (scan_all) rename.  That doesn't require
>> > making scan_all a keyword, which is good: this is a minor feature, and
>> > we don't want to bloat the parsing tables for it.
>>
>> I agree with having new-style options syntax.
>> Isn't it better to have SCAN_ALL option without parentheses?
>>
>> Syntaxes are;
>> VACUUM SCAN_ALL table_name;
>> VACUUM SCAN_ALL;  -- for all tables on database
>
> No, I agree with Robert that we shouldn't add any more such options to
> avoid keyword proliferation.
>
>  Syntaxes are;
>  VACUUM (SCAN_ALL) table_name;
>  VACUUM (SCAN_ALL);  -- for all tables on database

Okay, I agree with this.

> Is SCAN_ALL really the best we can do here?  The business of having an
> underscore in an option name has no precedent (other than
> CURRENT_DATABASE and the like).

Another way is having a tool or function that removes the _vm file safely, for example.

> How about COMPLETE, TOTAL, or WHOLE?

IMHO, I don't have a strong opinion about SCAN_ALL as long as we have
documentation about that option and the option name doesn't confuse users.
But ISTM that COMPLETE or TOTAL might mislead users into thinking that a
normal vacuum doesn't do its job completely.

Regards,

--
Masahiko Sawada




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Joshua D. Drake

On 05/17/2016 12:32 PM, Alvaro Herrera wrote:


>  Syntaxes are;
>  VACUUM (SCAN_ALL) table_name;
>  VACUUM (SCAN_ALL);  -- for all tables on database
>
> Is SCAN_ALL really the best we can do here?  The business of having an
> underscore in an option name has no precedent (other than
> CURRENT_DATABASE and the like).  How about COMPLETE, TOTAL, or WHOLE?



VACUUM (ANALYZE, VERBOSE, WHOLE)


That seems reasonable? I agree that SCAN_ALL doesn't fit. I am not 
trying to pull a left turn but is there a technical reason we don't just 
make FULL do this?


JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [HACKERS] Parallel query and temp_file_limit

2016-05-17 Thread Amit Kapila
On Wed, May 18, 2016 at 12:55 AM, Peter Geoghegan  wrote:
>
> temp_file_limit "specifies the maximum amount of disk space that a
> session can use for temporary files, such as sort and hash temporary
> files", according to the documentation. That's not true when parallel
> query is in use, since the global variable temporary_files_size
> receives no special treatment for parallel query.
>

What kind of special treatment are you expecting for temporary_files_size,
and why do you think it is required?  Currently we neither build hashes in
parallel nor do any form of parallel sort work.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] foreign table batch inserts

2016-05-17 Thread Manuel Kniep
Hi,

I realized that inserts into foreign tables are only done row by row.
Consider copying data from one local table to a foreign table with

INSERT INTO foreign_table(a,b,c) SELECT a,b,c FROM local_table;

When the foreign server is, for example, in another datacenter with long latency,
this is an enormous performance penalty.

Wouldn’t it make sense to do the inserts batch-wise, e.g. 100 rows at a time?
Are there any plans for doing that, or am I missing something?

regards

Manuel Kniep





Re: [HACKERS] seg fault in contrib/bloom

2016-05-17 Thread Tom Lane
Jeff Janes  writes:
> I'm getting seg faults on contrib/bloom when updating a tuple which
> was found via a bloom index.

I pushed a patch that should fix this, but without having tried to
reproduce the problem locally; so possibly I guessed wrong as to
what is causing it.  Please test.

regards, tom lane




Re: [HACKERS] Parameters don't work in FETCH NEXT clause?

2016-05-17 Thread Shay Rojansky
>
> Would something like this be valid?
>
> OFFSET { start_literal | ( start_expression ) } { ROW | ROWS }
> FETCH { FIRST | NEXT } [ count_literal | ( count_expression ) ] { ROW | ROWS } ONLY
>
> Leaving the mandatory parentheses detail to the description, while
> adequate, seems insufficient - especially when a normal LIMIT expression is
> not so restricted.
>
> And don't you think the section header would be more accurately named:
>
> Limit, Offset & Fetch Clauses
>
> The nuance regarding "different standard syntax" is unknown to the reader
> who first looks at the syntax and sees three different lines, one for each
> clause, and then scrolls down looking at headers until they find the
> section for the clause they are interested in.  That FETCH is an alias for
> LIMIT is not something that I immediately understood - though to be honest
> I don't think I comprehended the presence of FETCH on a SELECT query at all
> and thought it only pertained to cursors.
>
>
All these suggestions would definitely have saved me (and therefore this
list!) some time.


Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Masahiko Sawada
On Tue, May 17, 2016 at 4:34 PM, Joshua D. Drake  wrote:
> On 05/17/2016 12:32 PM, Alvaro Herrera wrote:
>
>>   Syntaxes are;
>>   VACUUM (SCAN_ALL) table_name;
>>   VACUUM (SCAN_ALL);  -- for all tables on database
>>
>> Is SCAN_ALL really the best we can do here?  The business of having an
>> underscore in an option name has no precedent (other than
>> CURRENT_DATABASE and the like).  How about COMPLETE, TOTAL, or WHOLE?
>>
>
> VACUUM (ANALYZE, VERBOSE, WHOLE)
> 
>
> That seems reasonable? I agree that SCAN_ALL doesn't fit. I am not trying to
> pull a left turn but is there a technical reason we don't just make FULL do
> this?
>

FULL option requires AccessExclusiveLock, which could be a problem.

Regards,

--
Masahiko Sawada




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Vik Fearing
On 17/05/16 21:32, Alvaro Herrera wrote:
> Is SCAN_ALL really the best we can do here?  The business of having an
> underscore in an option name has no precedent (other than
> CURRENT_DATABASE and the like).

ALTER DATABASE has options for ALLOW_CONNECTIONS, CONNECTION_LIMIT, and
IS_TEMPLATE.

> How about COMPLETE, TOTAL, or WHOLE?

Sure, I'll play this game.  I like EXHAUSTIVE.
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] seg fault in contrib/bloom

2016-05-17 Thread Jeff Janes
On Tue, May 17, 2016 at 2:02 PM, Tom Lane  wrote:
> Jeff Janes  writes:
>> I'm getting seg faults on contrib/bloom when updating a tuple which
>> was found via a bloom index.
>
> I pushed a patch that should fix this, but without having tried to
> reproduce the problem locally; so possibly I guessed wrong as to
> what is causing it.  Please test.

That fixed it.

Cheers,

Jeff




Re: [HACKERS] Reviewing freeze map code

2016-05-17 Thread Gavin Flower

On 18/05/16 09:34, Vik Fearing wrote:

> On 17/05/16 21:32, Alvaro Herrera wrote:
>
>> Is SCAN_ALL really the best we can do here?  The business of having an
>> underscore in an option name has no precedent (other than
>> CURRENT_DATABASE and the like).
>
> ALTER DATABASE has options for ALLOW_CONNECTIONS, CONNECTION_LIMIT, and
> IS_TEMPLATE.
>
>> How about COMPLETE, TOTAL, or WHOLE?
>
> Sure, I'll play this game.  I like EXHAUSTIVE.


I prefer 'WHOLE', as it seems more obvious (and not because of the pun 
relating to 'wholesomeness'!!!)






Re: [HACKERS] Re: [COMMITTERS] pgsql: Allocate all page images at once in generic wal interface

2016-05-17 Thread Tom Lane
Teodor Sigaev  writes:
> It seems this patch isn't liked by the curculio [1] buildfarm member, but
> I'm confused by the diagnostics:
> 2016-05-17 21:43:19.489 CEST [573b7457.547c:3] LOG:  statement: CREATE 
> EXTENSION 
> bloom;
> 2016-05-17 21:43:19.501 CEST [573b7457.547c:4] ERROR:  syntax error in file 
> "/home/pgbf/buildroot/HEAD/inst/share/postgresql/extension/bloom.control" 
> line 
> 1, near token ""

> Could somebody explain to me what's going on?

That seems entirely unrelated to what you changed, and curculio's next
run failed even more bizarrely:

commands/explain.o: could not read symbols: File format not recognized
collect2: ld returned 1 exit status

It looks to me like that machine is suffering disk or filesystem
problems.

regards, tom lane




Re: [HACKERS] foreign table batch inserts

2016-05-17 Thread Michael Paquier
On Wed, May 18, 2016 at 6:00 AM, Manuel Kniep  wrote:
> I realized that inserts into foreign tables are only done row by row.
> Consider copying data from one local table to a foreign table with
>
> INSERT INTO foreign_table(a,b,c) SELECT a,b,c FROM local_table;
>
> When the foreign server is, for example, in another datacenter with long
> latency, this is an enormous performance penalty.
>
> Wouldn’t it make sense to do the inserts batch-wise, e.g. 100 rows at a time?

Using a single query string with multiple values, perhaps, but then the
query string length limit comes into consideration, particularly for
large text values... The query used for the insertion has been a prepared
statement since writable queries were added in 9.3, which makes the
code quite simple, actually.
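
For clarity, batching via a single query string would mean the remote
statement looks something like the sketch below (postgres_fdw currently
sends one single-row INSERT per tuple):

    INSERT INTO foreign_table (a, b, c)
      VALUES ($1, $2, $3),
             ($4, $5, $6),
             ($7, $8, $9);  -- and so on, up to some batch size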

> Are there any plans for doing that, or am I missing something?

Not that I know of. I am adding Fujita-san to the loop here; he is
quite involved with postgres_fdw these days, so perhaps he has some
input to offer.
-- 
Michael




Re: [HACKERS] Parallel query and temp_file_limit

2016-05-17 Thread Peter Geoghegan
On Tue, May 17, 2016 at 1:53 PM, Amit Kapila  wrote:
> What kind of special treatment are you expecting for temporary_files_size,
> and why do you think it is required?  Currently we neither build hashes in
> parallel nor do any form of parallel sort work.

I expect only that temporary_files_size be described accurately, and
have new behavior for parallel query that is not surprising. There are
probably several solutions that would meet that standard, and I am not
attached to any particular one of them.

I wrote a parallel sort patch already (CREATE INDEX for the B-Tree
AM), and will post it at an opportune time. So, I think we can expect
your observations about there not being parallel sort work to no
longer apply in a future release, which we should get ahead of now.
Also, won't parallel workers that build their own copy of the hash
table (for a hash join) also use their own temp files, if there is a
need for temp files? I think parallel query will end up sharing temp
files fairly often, and not just out of convenience to implementers
(that is, not just to avoid using shared memory extensively).

Fundamentally, since temporary_files_size enforcement simply
piggy-backs on low-level fd.c file management, without any
consideration of what the temp files contain, it'll be hard to be sure
that parallel workers will not have issues. I think it'll be far
easier to fix the problem than it would be to figure out if it's
possible to get away with it.

-- 
Peter Geoghegan




Re: [HACKERS] Parallel query and temp_file_limit

2016-05-17 Thread Peter Geoghegan
On Tue, May 17, 2016 at 3:33 PM, Peter Geoghegan  wrote:
> Fundamentally, since temporary_files_size enforcement simply
> piggy-backs on low-level fd.c file management, without any
> consideration of what the temp files contain, it'll be hard to be sure
> that parallel workers will not have issues. I think it'll be far
> easier to fix the problem than it would be to figure out if it's
> possible to get away with it.

I'll write a patch to fix the issue, if there is a consensus on a solution.

-- 
Peter Geoghegan




Re: [HACKERS] Declarative partitioning

2016-05-17 Thread Amit Langote
On 2016/05/18 2:22, Tom Lane wrote:
> Amit Langote  writes:
>> On 2016/05/16 22:12, Ildar Musin wrote:
>>> Could you please tell me: is it possible that relcache invalidation occurs
>>> during a SELECT/UPDATE/DELETE query?
> 
>> Hmm, I think invalidation would not occur mid-query since it would have
>> acquired a lock on the table.
> 
> This is incorrect: invalidation can occur anyway (for example, if
> autoanalyze updates the relation's statistics).  If you are holding
> a lock, you can expect that the relation's schema will not change more
> than your lock would allow --- but a cache flush and rebuild could happen
> underneath you, so keeping a pointer to any subsidiary relcache data
> structure is very dangerous.

I see.  Thanks for clarifying.

> The two ways that we've dealt with this type of hazard are to copy data
> out of the relcache before using it; or to give the relcache the
> responsibility of not moving a particular portion of data if it did not
> change.  From memory, the latter applies to the tuple descriptor and
> trigger data, but we've done most other things the first way.

It seems that the tuple descriptor is reference-counted; however, trigger data
is copied.  The former seems to have been done on performance grounds (I
found commit 06e10abc).

So for a performance-sensitive relcache data structure, refcounting is the
way to go (although it is done quite rarely)?  In this particular case, it is a
"partition descriptor" that could get big for a partitioned table with
partitions numbering in the hundreds or thousands.

Thanks,
Amit






[HACKERS] roadmap sharing

2016-05-17 Thread Robert Haas
Hi,

Today at the developer meeting -
https://wiki.postgresql.org/wiki/PgCon_2016_Developer_Meeting - there
was some talk of sharing corporate roadmaps.  Postgres Pro put their
roadmap up at https://wiki.postgresql.org/wiki/Postgres_Professional_roadmap
and I have now done the same thing for the database server group at
EnterpriseDB at
https://wiki.postgresql.org/wiki/EnterpriseDB_database_server_roadmap

The database server group at EnterpriseDB covers most of the people at
EnterpriseDB who contribute to the core PostgreSQL project, but not
everyone; for example, it doesn't include Bruce. So this may not be
completely comprehensive, and is of course subject to change,
community feedback, other people doing the same projects before we do,
etc.  Also, it doesn't include anyone who contributes to collateral
projects like pgAdmin or pgpool, so whatever work we might do in those
areas isn't reflected here.  But hopefully it's helpful all the same.

Feedback is welcome.  Note that we've posted individual emails about
some of these efforts previously on pgsql-hackers, but not all of
them.  In the interest of getting this out quickly, I haven't tried to
find all of those threads and link to them, but maybe I or somebody
else will update with those details later.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

