[HACKERS] Re: [BUGS] BUG #14849: jsonb_build_object doesn't like VARIADIC calls very much

2017-10-25 Thread Marko Tiikkaja
On Wed, Oct 25, 2017 at 5:32 PM, Michael Paquier 
wrote:

> On Mon, Oct 23, 2017 at 6:50 AM, Michael Paquier
>  wrote:
> > Okay, attached is what I think a fully implemented patch should look
> > like. On top of what Andrew has done, I added and reworked the
> > following:
> > - removed duplicate error handling.
> > - documented the function in funcapi.h and funcapi.c.
> > - Added a new section in funcapi.h to outline that this is for support
> > of VARIADIC inputs.
> > I have added a commit message as well. Hope this helps.
>
> For the sake of the archives, the introduction of
> extract_variadic_args is committed with f3c6e8a2, and the JSON fixes
> with 18fc4ecf. Thanks Andrew for the commit, and thanks Tom, Andrew
> and Dmitry for the reviews.


Thx yo.


.m


Re: [HACKERS] Query started showing wrong result after Ctrl+c

2017-10-12 Thread Marko Tiikkaja
On Thu, Oct 12, 2017 at 12:03 PM, tushar 
wrote:

> postgres=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from
> (select n from tv limit 1) c)) as c  ;
>   n
> --
>  3713
> (1 row)
>
> This time, the query started showing a wrong result.  Is this expected
> behavior, and if yes, how do we get the correct result?


The subquery:

select n from tv limit 1

could in theory return any row due to the lack of ORDER BY.  What I'm
guessing happened is that you're seeing a synchronized sequential scan in
follow-up queries.  Add an ORDER BY.


.m


Re: [HACKERS] Index expression syntax

2017-09-29 Thread Marko Tiikkaja
On Fri, Sep 29, 2017 at 9:31 AM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:

> I wonder why syntax error is produced in this case:
>
> postgres=# create index metaindex on foo using gin(to_tsvector('english',
> x)||to_tsvector('english',y));
> ERROR:  syntax error at or near "||"
> LINE 1: ...taindex on foo using gin(to_tsvector('english', x)||to_tsvec...
>  ^
> [ .. ]
>
> *expression:* An expression based on one or more columns of the table.
> The expression usually must be written with surrounding parentheses, as
> shown in the syntax. However, the parentheses can be omitted if the
> expression has the form of a function call.
>
> So documentations states that sometimes it is possible to avoid
> parentheses, but it is unclear why I have to use double parentheses...
> I think that either the grammar should be fixed, or the documentation
> should be updated.
>

Your expression is clearly not a function call; it's a concatenation of two
of them.  The documentation seems perfectly accurate to me?
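For reference, wrapping the whole expression in its own pair of parentheses satisfies the grammar:

```sql
-- The expression is not a bare function call, so the grammar requires
-- surrounding parentheses of its own (hence the "double" parentheses):
CREATE INDEX metaindex ON foo USING gin
    ((to_tsvector('english', x) || to_tsvector('english', y)));
```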


.m


Re: [HACKERS] 200 = 199 + 1?

2017-09-27 Thread Marko Tiikkaja
On Wed, Sep 27, 2017 at 5:45 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Marko Tiikkaja <ma...@joh.to> writes:
> > I wonder if the nested loop shouldn't have some kind of a cap on its own
> > estimate if it's wildly off of what you'd get by multiplying the child
> > nodes' estimates with each other?
>
> Nonstarter I'm afraid.  The join relation's size estimate is determined
> long before we get to a point where we could multiply the sizes of these
> particular child paths to arrive at the conclusion that it should be
> something different than what we decided originally.


Ah hah.  Thanks for the explanation, that makes sense.


> Adjusting the size
> of the nestloop result at that point would merely give it an unfair
> advantage over other paths for the same join relation.  (I think it would
> also break some assumptions about paths for the same relation all giving
> the same number of rows, unless parameterized.)
>

With the previous paragraph in mind, I would agree; it's not a very good
idea.


> Looking at it another way, the main thing that the combination of hashagg
> outer path + indexscan inner path knows that eqjoinsel_semi didn't account
> for is that there's a unique index on foo.id.  But that info is available
> to eqjoinsel_semi, in the sense that it's been given a nondefault estimate
> that nd1 is equal to the outer relation size.  So the mistake that it's
> making is to throw up its hands and use an 0.5 selectivity estimate just
> because it has no info about the inner relation.  I think if we'd pushed
> through the nd2/nd1 calculation after setting nd2 = size of inner rel,
> we'd end up with an estimate matching the product of these path sizes.
> (Caution: inadequate caffeine absorbed yet, this might be all wrong.)
>

This sounds very reasonable to me.


.m


[HACKERS] 200 = 199 + 1?

2017-09-27 Thread Marko Tiikkaja
Hi,

I just came across this very peculiar behavior:

=# create table foo(id int primary key);
CREATE TABLE
=# insert into foo select generate_series(1, 100);
INSERT 0 100
=# set enable_hashjoin to false; set enable_mergejoin to false;
SET
SET
=# explain select * from foo where id in (select i from generate_series(1,
200) i limit 199);
 QUERY PLAN

 Nested Loop  (cost=4.90..1648.52 rows=199 width=4)
   ->  HashAggregate  (cost=4.48..6.47 rows=199 width=4)
 Group Key: i.i
 ->  Limit  (cost=0.00..1.99 rows=199 width=4)
   ->  Function Scan on generate_series i  (cost=0.00..10.00
rows=1000 width=4)
   ->  Index Only Scan using foo_pkey on foo  (cost=0.42..8.24 rows=1
width=4)
 Index Cond: (id = i.i)
(7 rows)

=# explain select * from foo where id in (select i from generate_series(1,
200) i limit 200);
 QUERY PLAN

 Nested Loop  (cost=4.93..1653.00 rows=50 width=4)
   ->  HashAggregate  (cost=4.50..6.50 rows=200 width=4)
 Group Key: i.i
 ->  Limit  (cost=0.00..2.00 rows=200 width=4)
   ->  Function Scan on generate_series i  (cost=0.00..10.00
rows=1000 width=4)
   ->  Index Only Scan using foo_pkey on foo  (cost=0.42..8.22 rows=1
width=4)
 Index Cond: (id = i.i)
(7 rows)

So it seems that once the HashAggregate estimates to return 200 rows or
more, something extremely weird happens and the Nested Loop's estimate goes
wild.  I've recently seen numerous instances of this kind of a problem
where the row estimates from a nested loop's child nodes are very
reasonable but the loop itself goes absolutely nuts.  I can't see how this
can possibly be justified.

I wonder if the nested loop shouldn't have some kind of a cap on its own
estimate if it's wildly off of what you'd get by multiplying the child
nodes' estimates with each other?


.m


Re: [HACKERS] [PATCH] Pattern based listeners for asynchronous messaging (LISTEN/NOTIFY)

2017-09-07 Thread Marko Tiikkaja
Hi Markus,

On Sun, Aug 20, 2017 at 9:56 PM, Markus Sintonen 
wrote:

> I also encountered this when I built it with different configuration. I
> attached updated patch with the correct number of arguments to
> 'similar_escape'. I also added preliminary documentation to the patch.
> (Unfortunately unable to currently compile the documentation for testing
> purpose on Windows, probably because of commit
> https://github.com/postgres/postgres/commit/510074f9f0131a04322d6a3d2a51c87e6db243f9.
> I followed
> https://www.postgresql.org/docs/devel/static/install-windows-full.html#idm45412738673840.)
>
> What do you think about the syntax? There was a suggestion to specify type
> of the pattern (eg ltree extension) but to me this feels like a overkill.
> One option here would be eg:
> LISTEN PATTERN 'foo%' TYPE 'similar'
> LISTEN PATTERN 'foo.*' TYPE 'ltree'
>

Not that my opinion matters, but I think we should pick one pattern style
and stick to it.  SIMILAR TO doesn't seem like the worst choice.  ltree
seems useless.

As for the rest of the interface..

First, I think mixing patterns and non-patterns is weird.  This is apparent
in at least two cases:

marko=# listen "foo%";
LISTEN
marko=# listen similar to 'foo%';
LISTEN
marko=# select * from pg_listening_channels();
 pg_listening_channels
---
 foo%
(1 row)

-- Not actually listening on the pattern.  Confusion.

The second case being the way UNLISTEN can be used to unlisten patterns,
too.  It kind of makes sense given that you can't really end up with both a
channel name and a pattern with the same source string, but it's still
weird.  I think it would be much better to keep these completely separate
so that you could be listening on both the channel "foo%" and the pattern
'foo%' at the same time, and you'd use a bare UNLISTEN to unsubscribe from
the former, and UNLISTEN SIMILAR TO for the latter.  As you said in the
original email:

> Allow *UNLISTEN SIMILAR TO 'xxx'* which would unregister matching
listeners. To me this feels confusing therefore it is not in the patch.

I agree, starting to match the patterns themselves would be confusing.  So
I think we should use that syntax for unsubscribing from patterns.  But
others should feel free to express their opinions on this.
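Keeping the two namespaces fully separate, the interface argued for above would look something like this (syntax from the patch under discussion, not core PostgreSQL):

```sql
-- Hypothetical/proposed syntax, not released PostgreSQL:
LISTEN "foo%";                  -- plain channel literally named foo%
LISTEN SIMILAR TO 'foo%';       -- pattern matching foo1, foobar, ...
UNLISTEN "foo%";                -- drops only the plain channel
UNLISTEN SIMILAR TO 'foo%';     -- drops only the pattern
```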

Secondly -- and this is a kind of continuation to my previous point of
conflating patterns and non-patterns -- I don't think you can get away with
not changing the interface for pg_listening_channels().  Not knowing which
ones are compared byte-for-byte and which ones are patterns just seems
weird.


As for the patch itself, I have a couple of comments.  I'm writing this
based on the latest commit in your git branch, commit
fded070f2a56024f931b9a0f174320eebc362458.

In queue_listen(), the new variables would be better declared at the
innermost scope possible.  The datum is only used if isSimilarToPattern is
true, errormsg only if compile_regex didn't return REG_OKAY, etc..

I found this comment confusing at first: "If compiled RE was not applied as
a listener then it is freed at transaction commit."  The past tense makes
it seem like something that has already happened when that code runs, when
in reality it happens later in the transaction.

I'm not a fan of the dance you're doing with pcompreg.  I think it would be
better to optimistically allocate the ListenAction struct and compile
directly into actrec->compiledRegex.

The changed DEBUG1 line in Async_Listen should include whether it's a
pattern or not.

I don't understand why the return value of Exec_UnlistenAllCommit() was
changed at all.  Why do we need to do something different based on whether
listenChannels was empty or not?  The same goes for Exec_UnlistenCommit.

This looks wrong in isolationtester.c:

@@ -487,7 +488,7 @@ run_permutation(TestSpec *testspec, int nsteps, Step
**steps)
res = PQexec(conns[0], testspec->setupsqls[i]);
if (PQresultStatus(res) == PGRES_TUPLES_OK)
{
-   printResultSet(res);
+   printResultSet(res, conns[i + 1]);

(conns[0] vs. conns[i + 1]).

Moving to Waiting on Author.


.m


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-04 Thread Marko Tiikkaja
On Mon, Sep 4, 2017 at 7:46 PM, Peter Geoghegan <p...@bowt.ie> wrote:

> On Mon, Sep 4, 2017 at 10:05 AM, Marko Tiikkaja <ma...@joh.to> wrote:

> But I'm generally against
> > interfaces which put arbitrary restrictions on what power users can do on
> > the basis that less experienced people might misuse the interface.
>
> I agree with that as a broad principle, but disagree that it applies
> here. Or if it does, then you have yet to convince me of it. In all
> sincerity, if you think that I just don't understand your perspective,
> then please try to make me understand it. Would a power user actually
> miss ON CONFLICT DO SELECT? And if that's the case, would it really be
> something they'd remember 5 minutes later?
>

I don't know, but I don't want to be limiting what people can do just
because I can't come up with a use case.

In any case, others, feel free to chime in.


.m


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-09-04 Thread Marko Tiikkaja
On Mon, Sep 4, 2017 at 4:09 AM, Peter Geoghegan <p...@bowt.ie> wrote:

> On Tue, Aug 15, 2017 at 12:17 AM, Marko Tiikkaja <ma...@joh.to> wrote:
> > On Tue, Aug 15, 2017 at 7:43 AM, Peter Geoghegan <p...@bowt.ie> wrote:
> >>
> >> On Mon, Aug 14, 2017 at 6:23 PM, Marko Tiikkaja <ma...@joh.to> wrote:
> >> > Attached is a patch for $SUBJECT.  It might still be a bit rough
> around
> >> > the
> >> > edges and probably light on docs and testing, but I thought I'd post
> it
> >> > anyway so I won't forget.
> >>
> >> Is it possible for ON CONFLICT DO SELECT to raise a cardinality
> >> violation error? Why or why not?
> >
> >
> > No.  I don't see when that would need to happen.  But I'm guessing you
> have
> > a case in mind?
>
> Actually, no, I didn't. But I wondered if you did. I think that it
> makes some sense not to, now that I think about it. ON CONFLICT DO
> NOTHING doesn't have cardinality violations, because it cannot affect
> a row twice if there are duplicates proposed for insertion (at least,
> not through any ON CONFLICT related codepath). But, this opinion only
> applies to ON CONFLICT DO SELECT, not ON CONFLICT DO SELECT FOR
> UPDATE. And I have other reservations, which I'll go in to
> momentarily, about ON CONFLICT DO SELECT in general. So, the upshot is
> that I think we need cardinality violations in all cases for this
> feature. Why would a user ever not want to know that the row was
> locked twice?
>

I had to look at the patch to see what I'd done, and the tests suggest that
we already complain about this with if a FOR [lock level] clause is present:

   +begin transaction isolation level read committed;
   +insert into selfconflict values (10,1), (10,2) on conflict(f1) do
select for update returning *;
   +ERROR:  ON CONFLICT command cannot affect row a second time
   +HINT:  Ensure that no rows proposed for insertion within the same
command have duplicate constrained values.
   +commit;

(in expected/insert_conflict.out.)


> On to the subject of my more general reservation: Why did you include
> ON CONFLICT DO SELECT at all? Why not just ON CONFLICT DO SELECT FOR
> UPDATE (and FOR SHARE, ...) ?
>

If you don't intend to actually do anything with the row in the same
database transaction, locking seems unnecessary.  For example, you might
want to provide an idempotent method in your API which inserts the data and
returns the ID to the caller.  The transaction is committed by the time the
client sees the data, so locking is just extra overhead.
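Such an idempotent call, under the proposed syntax and with made-up table and column names, might look like:

```sql
-- Proposed ON CONFLICT DO SELECT syntax from the patch under
-- discussion (not released PostgreSQL); table/column names are
-- hypothetical.  On conflict, the existing row's id is returned
-- without taking a row lock, since the transaction commits before
-- the caller sees the result anyway.
INSERT INTO accounts (email) VALUES ('a@example.com')
ON CONFLICT (email) DO SELECT
RETURNING id;
```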


> I think I know what you're going to say about it: ON CONFLICT DO
> NOTHING doesn't lock the conflicting row, so why should I insist on it
> here (why not have an ON CONFLICT DO SELECT variant, too)?
>

I wasn't going to say that, no.


> In other words, while ON CONFLICT DO NOTHING may have set a precedent
> here, it at least has semantics that limit the consequences of not
> locking the row; and it *feels* a little bit dirty to use it
> indifferently, even where that makes sense. ON CONFLICT DO SELECT is
> probably going to be used within wCTEs some of the time. I'm not sure
> that a plain ON CONFLICT DO SELECT variant won't allow unpredictable,
> complicated problems when composed within a more complicated query.
>

Yeah, in most cases you'd probably do a SELECT FOR KEY SHARE.  And I
wouldn't mind that being the default, but it would mean inventing special
syntax with no previous precedent (as far as I know) for not locking the
row in cases where the user doesn't want that.  And that doesn't seem too
attractive, either.

Maybe it would be better to make the default sensible for people who are
not super familiar with postgres.  I don't know.  And the more I think
about the use case above, the less I care about it.  But I'm generally
against interfaces which put arbitrary restrictions on what power users can
do on the basis that less experienced people might misuse the interface.


.m


Re: [HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-08-15 Thread Marko Tiikkaja
On Tue, Aug 15, 2017 at 7:43 AM, Peter Geoghegan <p...@bowt.ie> wrote:

> On Mon, Aug 14, 2017 at 6:23 PM, Marko Tiikkaja <ma...@joh.to> wrote:
> > Attached is a patch for $SUBJECT.  It might still be a bit rough around
> the
> > edges and probably light on docs and testing, but I thought I'd post it
> > anyway so I won't forget.
>
> Is it possible for ON CONFLICT DO SELECT to raise a cardinality
> violation error? Why or why not?


No.  I don't see when that would need to happen.  But I'm guessing you have
a case in mind?


.m


[HACKERS] INSERT .. ON CONFLICT DO SELECT [FOR ..]

2017-08-14 Thread Marko Tiikkaja
Hi,

Attached is a patch for $SUBJECT.  It might still be a bit rough around the
edges and probably light on docs and testing, but I thought I'd post it
anyway so I won't forget.


.m


insert_conflict_select_v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Allow INSTEAD OF DELETE triggers to modify the tuple for RETURNING

2017-08-13 Thread Marko Tiikkaja
On Fri, Jul 1, 2016 at 2:12 AM, I wrote:

> Currently the tuple returned by INSTEAD OF triggers on DELETEs is only
> used to determine whether to pretend that the DELETE happened or not, which
> is often not helpful enough; for example, the actual tuple might have been
> concurrently UPDATEd by another transaction and one or more of the columns
> now hold values different from those in the planSlot tuple. Attached is an
> example case which is impossible to properly implement under the current
> behavior.  For what it's worth, the current behavior seems to be an
> accident; before INSTEAD OF triggers either the tuple was already locked
> (in case of BEFORE triggers) or the actual pre-DELETE version of the tuple
> was fetched from the heap.
>
> So I'm suggesting to change this behavior and allow INSTEAD OF DELETE
> triggers to modify the OLD tuple and use that for RETURNING instead of
> returning the tuple in planSlot.  Attached is a WIP patch implementing that.
>
> Is there any reason why we wouldn't want to change the current behavior?


Since nobody seems to have come up with a reason, here's a patch for that
with test cases and some documentation changes.  I'll also be adding this
to the open commit fest, as is customary.


.m


instead_of_delete_returning_v2.patch
Description: Binary data



Re: [HACKERS] PG10 transition tables, wCTEs and multiple operations on the same table

2017-06-06 Thread Marko Tiikkaja
On Tue, Jun 6, 2017 at 10:25 PM, Kevin Grittner  wrote:

> Nice as it would be to add a SQL standard feature and advance the
> effort to get to incremental maintenance of materialized views, and
> much as I really appreciate the efforts Thomas has put into trying
> to solve these problems, I agree that it is best to revert the
> feature.  It took years to get an in-depth review, then I was asked
> not to commit it because others were working on patches that would
> conflict.  That just doesn't leave enough time to address these
> issues before release.  Fundamentally, I'm not sure that there is a
> level of interest sufficient to support the effort.
>
> I'll give it a few days for objections before reverting.
>

I can only say that the lack of this feature comes up on a weekly basis on
IRC, and a lot of people would be disappointed to see it reverted.


.m


[HACKERS] PG10 transition tables, wCTEs and multiple operations on the same table

2017-06-02 Thread Marko Tiikkaja
Since the subject of transition tables came up, I thought I'd test how this
case works:

=# create table qwr(a int);
CREATE TABLE

=# create function qq() returns trigger as $$ begin raise notice '%',
(select count(*) from oo); return null; end $$ language plpgsql;
CREATE FUNCTION

=# create trigger xx after insert on qwr referencing new table as oo for
each statement execute procedure qq();
CREATE TRIGGER

=# with t as (insert into qwr values (1)) insert into qwr values (2), (3);
NOTICE:  3
NOTICE:  3
INSERT 0 2

To me, this means that it doesn't work.  Surely one of the trigger
invocations should say 1, and the other 2.  Or was this intentional?


.m


Re: [HACKERS] COMPRESS VALUES feature request

2017-05-09 Thread Marko Tiikkaja
On Tue, May 9, 2017 at 8:18 PM, Erez Segal  wrote:

> In the IRC channel - johto suggested an implementation:
>
>  if you want to get really fancy you could have two columns where
> only one of set; one would be the value (if reasonably unique) and the
> other the id (if not)
>

I only suggested how to do this in an application if one really wants the
feature so badly.  I don't think this would fly as a native implementation.


> I'd like to add that an ENUM can be used instead of the id+lookup table in
> the 2nd column for non unique values.
>

An ENUM in postgres *is* an id + lookup table..
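For what it's worth, the application-level scheme referred to above is roughly this (hypothetical table and column names):

```sql
-- A sketch of the two-column approach: store the value inline when it
-- is reasonably unique, otherwise store a reference to a lookup table
-- of common values.  Exactly one of the two columns is set per row.
CREATE TABLE labels_common (id int PRIMARY KEY, label text UNIQUE);
CREATE TABLE items (
    item_id  int PRIMARY KEY,
    label    text,                          -- set when value is rare/unique
    label_id int REFERENCES labels_common,  -- set when value is common
    CHECK ((label IS NULL) <> (label_id IS NULL))
);
```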


.m


Re: [HACKERS] Why type coercion is not performed for parameters?

2017-05-05 Thread Marko Tiikkaja
On Fri, May 5, 2017 at 10:58 AM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:

> If I evaluate expression typename('literal'), then type coercion is
> performed and the function is successfully resolved, i.e.
>
> SELECT regnamespace('"pg_catalog"');
>
> But if I want to prepare this query, I get the error:
>
> postgres=#  prepare foo as SELECT regnamespace($1);
> ERROR:  function regnamespace(unknown) does not exist
> LINE 1: prepare foo as SELECT regnamespace($1);
>
> Certainly, I can explicitly specify parameter type:
>
> prepare foo (text) as SELECT regnamespace($1);
>
> and it will work. But it is not always possible.
>

There are other similar examples which have even bigger issues, such as
now() - interval '6 hours'.  now() - interval $1  won't even parse.
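The usual workarounds are to give the parameter an explicit type, or to cast it in the query:

```sql
-- Declare the parameter type up front:
PREPARE foo(interval) AS SELECT now() - $1;
EXECUTE foo('6 hours');

-- Or leave the parameter untyped and cast in the query body:
PREPARE bar AS SELECT now() - $1::interval;
```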


> Why do I need it? I want to implement autoprepare.
> My original intention was to let parse_analyze_varparams to infer type of
> parameters from the context.
> But it is not always possible  and sometime leads to different behavior of
> query.
> For example if the query:
>
>  select count(*) from test_range_gist where ir @> 10;
>
> is replaced with
>
>  select count(*) from test_range_gist where ir @> $1;
>
> then type of parameter will be int4range rather then int, which
> corresponds to the different operator.
>

But you know that the type of the literal "10" is int.  If you're throwing
that information away, surely that's a bug in your code.


.m


Re: [HACKERS] PG 10 release notes

2017-04-27 Thread Marko Tiikkaja
On Thu, Apr 27, 2017 at 4:13 PM, Bruce Momjian  wrote:

> On Thu, Apr 27, 2017 at 08:00:28AM +0530, Amit Kapila wrote:
> > > Oh, so non-correlated subqueries can be run in parallel.  Yes, that is
> > > something we should have in the release notes.  How is this?
> > >
> > > Author: Robert Haas 
> > > 2017-02-14 [5e6d8d2bb] Allow parallel workers to execute
> subplans.
> > >
> > > Allow non-correlated subqueries to be run in parallel (Amit
> Kapila)
> > >
> >
> > Looks good to me.


It's not clear from this item what the previous behavior was.  How about
adding something like "Correlated subqueries cannot yet be parallelized."?
(I'm guessing that's the change, anyway, please correct me if I'm mistaken.)


.m


[HACKERS] Logical replication SnapBuildInitalSnapshot spelling

2017-03-24 Thread Marko Tiikkaja
Hi,

Commit 7c4f52409a8c7d85ed169bbbc1f6092274d03920 seems to have introduced an
alternative spelling of "initial".  Fixed in the attached.


.m


logical_inital.patch
Description: Binary data



Re: [HACKERS] Contrib: pqasyncnotifier.c -- a shell command client for LISTEN

2017-01-23 Thread Marko Tiikkaja
On Tue, Jan 24, 2017 at 12:40 AM, Nico Williams 
wrote:

> psql(1) does not output notifications asynchronously, as it does not
> check for them when idle.  This makes it difficult to script handling of
> NOTIFYs.
>
> Attached is pqasyncnotifier.c, a simple command that allows one to
> handle NOTIFYs asynchronously.
>

Did you forget the attachment?


.m


Re: [HACKERS] plpgsql - additional extra checks

2017-01-13 Thread Marko Tiikkaja
On Fri, Jan 13, 2017 at 2:46 AM, Jim Nasby  wrote:

> On 1/11/17 5:54 AM, Pavel Stehule wrote:
>
>> +too_many_rows
>> +
>> + 
>> +  When result is assigned to a variable by INTO
>> clause,
>> +  checks if query returns more than one row. In this case the
>> assignment
>> +  is not deterministic usually - and it can be signal some issues in
>> design.
>>
>
> Shouldn't this also apply to
>
> var := blah FROM some_table WHERE ...;
>
> ?
>
> AIUI that's one of the beefs the plpgsql2 project has.
>

No, not at all.  That syntax is undocumented and only works because
PL/PgSQL is a hack internally.  We don't use it, and frankly I don't think
anyone should.
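The documented equivalent uses SELECT ... INTO, which is the form the proposed too_many_rows check targets (table and column names hypothetical):

```sql
-- Documented PL/pgSQL form of the assignment above:
DO $$
DECLARE
    v int;
BEGIN
    -- With too_many_rows enabled, this would raise an error if the
    -- query returned more than one row.
    SELECT some_col INTO v FROM some_table WHERE id = 1;
END
$$;
```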


.m


Re: [HACKERS] plpgsql - additional extra checks

2017-01-11 Thread Marko Tiikkaja
On Wed, Jan 11, 2017 at 2:54 PM, Pavel Stehule 
wrote:

> 1. strict_multi_assignment - checks the number of target variables and
> source values.
>

I've proposed this before (maybe around a year ago), except the checks were
done at parse time, rather than runtime.  I much prefer that approach.  If
I recall correctly, the patch was considered to be good, but not good
enough since it didn't handle all contexts (perhaps FOR loops were missing,
or something; I forget).


> 2. too_many_rows - checks if returned rows is more than one
>

I've also proposed this, and it was rejected because it was a runtime
check, and some people don't like runtime checks.


.m


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-10 Thread Marko Tiikkaja
On Tue, Jan 10, 2017 at 2:26 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> It's not like PL/pgSQL is the king of brevity.


This is essentially saying "PL/PgSQL isn't perfect, so we shouldn't try and
make it better".  I hear this argument a lot, and as long as people keep
rejecting improvements for this reason they can keep saying it.  It's a
self-fulfilling prophecy.


.m


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-09 Thread Marko Tiikkaja
On Tue, Jan 10, 2017 at 1:03 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:

> On 1/9/17 5:53 PM, Marko Tiikkaja wrote:
>
>> My idea was that the currently unsupported combination of NOT
>> NULL and
>> no DEFAULT would mean "has to be assigned to a non-NULL value
>> before it
>> can be read from, or an exception is thrown".  Solves the most
>> common
>> use case and is backwards compatible.
>>
>>
>> That won't allow you to use a variable in multiple places though...
>> is there a reason we couldn't support something like IS DEFINED and
>> UNSET?
>>
>>
>> I don't understand what your use case is.  Could you demonstrate that
>> with some code you'd write if these features were in?
>>
>
> One use case is NEW and OLD in triggers. Checking to see if one or the
> other is set is easier than checking TG_OP. It's also going to be faster
> (probably MUCH faster; IIRC the comparison currently happens via SPI).
>

This sounds useless.


> Another case is selecting into a record:
>
> EXECUTE ... INTO rec;
> IF rec IS DEFINED THEN
> ELSE
>   EXECUTE  INTO rec;
>   IF rec IS DEFINED THEN
>

And this a workaround for non-functional FOUND.

I can't get excited about this idea based on these examples.


.m


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-09 Thread Marko Tiikkaja
On Tue, Jan 10, 2017 at 12:47 AM, Jim Nasby <jim.na...@bluetreble.com>
wrote:

> On 1/9/17 5:30 PM, Marko Tiikkaja wrote:
>
My idea was that the currently unsupported combination of NOT NULL and
>> no DEFAULT would mean "has to be assigned to a non-NULL value before it
>> can be read from, or an exception is thrown".  Solves the most common
>> use case and is backwards compatible.
>>
>
> That won't allow you to use a variable in multiple places though... is
> there a reason we couldn't support something like IS DEFINED and UNSET?
>

I don't understand what your use case is.  Could you demonstrate that with
some code you'd write if these features were in?


.m


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-09 Thread Marko Tiikkaja
On Mon, Jan 9, 2017 at 12:37 AM, Jim Nasby  wrote:

> If we're going to create a brand new language then I think it would be
> extremely foolish to keep *any* of the current pain points around. Off the
> top of my head:
>
> - variables must have an identifier (what $ in most languages does). The
> steps you have to go through to avoid simple naming collisions are insane.
>

This is exactly what we did not want to do with this project.  The idea is
to create a language which is really close to PL/PgSQL, but removes some of
the brain diarrhoea currently present.

Now, this *is* a problem, and the solution we had (well I, mostly, at this
point) in mind is to use the underscore prefix for all input variables and
make OUT parameters invisible to queries inside function bodies unless
explicitly prefixed with OUT.  As far as I can tell this eliminates most
if not all collisions while staying almost completely compatible with
arguably well-written PL/PgSQL 1.

> - Support for the notion of a variable being unset (which is NOT the same
> thing as NULL).
>

My idea was that the currently unsupported combination of NOT NULL and no
DEFAULT would mean "has to be assigned to a non-NULL value before it can be
read from, or an exception is thrown".  Solves the most common use case and
is backwards compatible.
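Under that proposal (hypothetical semantics, not implemented anywhere), a declaration like the following would be accepted, and only a read before the first assignment would raise an error:

```sql
-- Hypothetical: today NOT NULL without DEFAULT is rejected outright;
-- under the proposal it would mean "must be assigned a non-NULL value
-- before the first read".
DECLARE
    v int NOT NULL;        -- no DEFAULT
BEGIN
    RAISE NOTICE '%', v;   -- would throw: read before assignment
    v := 1;                -- after this, reads of v are fine
END;
```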


.m


Re: [HACKERS] SELECT FOR UPDATE regression in 9.5

2016-09-07 Thread Marko Tiikkaja

On 07/09/16 7:29 PM, Alvaro Herrera wrote:

Marko, does this fix your reported problem too?  Both the assertion and
the overall test case that causes it to fire?


The test case never realized anything was wrong, but the assertion is 
gone.  So yup, problem solved on this end, at least.



.m




Re: [HACKERS] kqueue

2016-09-06 Thread Marko Tiikkaja

On 2016-06-03 01:45, Thomas Munro wrote:

On Fri, Jun 3, 2016 at 4:02 AM, Alvaro Herrera  wrote:

Tom Lane wrote:

Andres Freund  writes:

pg_strtoi?



I think that's what Thomas did upthread. Are you taking this one then?


I'd go with just "strtoint".  We have "strtoint64" elsewhere.


For closure of this subthread: this rename was committed by Tom as
0ab3595e5bb5.


Thanks.  And here is a new version of the kqueue patch.  The previous
version doesn't apply on top of recent commit
a3b30763cc8686f5b4cd121ef0bf510c1533ac22, which sprinkled some
MAXALIGN macros nearby.  I've now done the same thing with the kevent
struct because it's cheap, uniform with the other cases and could
matter on some platforms for the same reason.


I've tested and reviewed this, and it looks good to me, other than this 
part:


+   /*
+    * kevent guarantees that the change list has been processed in the EINTR
+    * case.  Here we are only applying a change list so EINTR counts as
+    * success.
+    */

this doesn't seem to be guaranteed on old versions of FreeBSD or any 
other BSD flavors, so I don't think it's a good idea to bake the 
assumption into this code.  Or what do you think?



.m




Re: [HACKERS] SELECT FOR UPDATE regression in 9.5

2016-09-06 Thread Marko Tiikkaja

On 2016-09-06 6:02 PM, Marti Raudsepp wrote:

This issue is also reproducible on the current master branch. In an
assertions-enabled build, it traps an assertion in HeapTupleHeaderGetCmax
called by heap_lock_tuple. The following test case demonstrates the issue...


I think you found a reproducible test case for my bug in 
48d3eade-98d3-8b9a-477e-1a8dc32a7...@joh.to.  Thanks.



.m




Re: [HACKERS] Change error code for hstore syntax error

2016-09-04 Thread Marko Tiikkaja

Hi Sherrylyn,

On 2016-05-09 19:42, Sherrylyn Branchaw wrote:

I'm attaching a revised patch; please let me know if there are any other
issues before I submit to the commitfest.


I think this is mostly good, but these two should be changed:

  errmsg("unexpected end of string: \"%s\"", state->begin)
  errmsg("syntax error at position %d: \"%s\"", ...)

Right now, aside from the error code, these two look like they're 
reporting about an error in the SQL statement itself, and not in an 
input value for a type.  I think they should look more like this:


  errmsg("invalid input syntax for type hstore: \"%s\"", string),
  errdetail("Unexpected end of input.")

If possible, it might also make sense to provide more information than 
"unexpected end of string".  For example: what character were you 
expecting to find, or what were you scanning?  I didn't look too closely 
what exactly could be done here.  I'll leave that part to you.
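
For context, this is the sort of input that reaches these paths; the
message wording shown is the suggestion above, not necessarily the
current or final behavior:

```sql
-- Input that ends mid-parse currently reports a bare syntax error:
SELECT 'a=>'::hstore;

-- Under the suggested wording it would instead look something like:
-- ERROR:  invalid input syntax for type hstore: "a=>"
-- DETAIL:  Unexpected end of input.
```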



.m




Re: [HACKERS] INSERT .. SET syntax

2016-08-31 Thread Marko Tiikkaja

Hello hello,

Here's a rebased and updated patch for $SUBJECT for the September commit 
fest.



.m
*** a/doc/src/sgml/ref/insert.sgml
--- b/doc/src/sgml/ref/insert.sgml
***
*** 22,33  PostgreSQL documentation
   
  
  [ WITH [ RECURSIVE ] with_query [, ...] ]
! INSERT INTO table_name [ AS alias ] [ ( column_name [, ...] ) ]
! { DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
  [ ON CONFLICT [ conflict_target ] conflict_action ]
  [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
  
! where conflict_target can be one of:
  
  ( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
  ON CONSTRAINT constraint_name
--- 22,42 
   
  
  [ WITH [ RECURSIVE ] with_query [, ...] ]
! INSERT INTO table_name [ AS alias ]
! {
! [ column_list ] VALUES ( { expression | DEFAULT } [, ...] ) [, ...] |
! [ column_list ] query |
! DEFAULT VALUES |
! SET column_name = { expression | DEFAULT } [, ...]
! }
  [ ON CONFLICT [ conflict_target ] conflict_action ]
  [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
  
! where column_list is:
! 
! ( column_name [, ...] )
! 
! and conflict_target can be one of:
  
  ( { index_column_name | ( index_expression ) } [ COLLATE collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
  ON CONSTRAINT constraint_name
***
*** 53,65  INSERT INTO table_name [ AS 
  

!The target column names can be listed in any order.  If no list of
!column names is given at all, the default is all the columns of the
!table in their declared order; or the first N column
!names, if there are only N columns supplied by the
!VALUES clause or query.  The values
!supplied by the VALUES clause or query are
!associated with the explicit or implicit column list left-to-right.

  

--- 62,87 

  

!The target column names in a column_list can be
!listed in any order.  If no column_list is given at
!all (and the SET syntax is not used), the default is all
!the columns of the table in their declared order; or the first
!N column names, if there are only N
!columns supplied by the VALUES clause or
!query.  The values supplied by the VALUES
!clause or query are associated with the explicit or
!implicit column list left-to-right.
!   
! 
!   
! Instead of a column_list and a VALUES
! clause, a SET clause similar to that of an
! UPDATE can be used instead.  The advantage of the
! SET clause is that instead of matching the elements in
! the two lists by ordinal position, the column name and the
! expression to assign to that column are visually next to each other.
! This can make long column assignment lists significantly more
! readable.

  

***
*** 690,702  INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International')

 INSERT conforms to the SQL standard, except that
 the RETURNING clause is a
!PostgreSQL extension, as is the ability
!to use WITH with INSERT, and the ability to
!specify an alternative action with ON CONFLICT.
!Also, the case in
!which a column name list is omitted, but not all the columns are
!filled from the VALUES clause or query,
!is disallowed by the standard.

  

--- 712,724 

 INSERT conforms to the SQL standard, except that
 the RETURNING clause is a
!PostgreSQL extension, as is the
!SET clause when used instead of a VALUES clause, the
!ability to use WITH with INSERT, and the
!ability to specify an alternative action with ON
!CONFLICT.  Also, the case in which a column name list is omitted,
!but not all the columns are filled from the VALUES clause
!or query, is disallowed by the standard.

  

*** a/src/backend/parser/analyze.c
--- b/src/backend/parser/analyze.c
***
*** 467,474  transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
  		stmt->onConflictClause->action == ONCONFLICT_UPDATE);
  
  	/*
! 	 * We have three cases to deal with: DEFAULT VALUES (selectStmt == NULL),
! 	 * VALUES list, or general SELECT input.  We special-case VALUES, both for
  	 * efficiency and so we can handle DEFAULT specifications.
  	 *
  	 * The grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to a
--- 467,475 
  		stmt->onConflictClause->action == ONCONFLICT_UPDATE);
  
  	/*
! 	 * We have four cases to deal with: DEFAULT VALUES (selectStmt == NULL and
! 	 * cols == NIL), SET syntax (selectStmt == NULL but cols != NIL), VALUES
! 	 * list, or general SELECT input.  We special-case VALUES, both for
  	 * efficiency and so we can handle DEFAULT specifications.
  	 *
  	 * The grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to a
***
*** 523,529  

Re: [HACKERS] Assertion failure in REL9_5_STABLE

2016-08-11 Thread Marko Tiikkaja

On 11/08/16 8:48 AM, Michael Paquier wrote:

On Thu, Aug 11, 2016 at 8:09 AM, Marko Tiikkaja <ma...@joh.to> wrote:

On 2016-08-11 12:09 AM, Alvaro Herrera wrote:


BTW this is not a regression, right?  It's been broken all along.  Or am
I mistaken?


You're probably right.  I just hadn't realized I could run our app against
9.5 with --enable-cassert until last week.


Just wondering... If you revert 1f9534b4 and/or b33e81cb do you still
see a problem?


Yeah, no effect.


.m




Re: [HACKERS] Assertion failure in REL9_5_STABLE

2016-08-10 Thread Marko Tiikkaja

On 2016-08-11 12:09 AM, Alvaro Herrera wrote:

BTW this is not a regression, right?  It's been broken all along.  Or am
I mistaken?


You're probably right.  I just hadn't realized I could run our app 
against 9.5 with --enable-cassert until last week.



.m




Re: [HACKERS] Assertion failure in REL9_5_STABLE

2016-08-10 Thread Marko Tiikkaja

On 2016-08-10 11:01 PM, Alvaro Herrera wrote:

Oh, I see ... so there's an update chain, and you're updating a earlier
tuple.  But the later tuple has a multixact and one of the members is
the current transaction.

I wonder how can you lock a tuple that's not the latest, where that
update chain was produced by your own transaction.  I suppose this is
because of plpgsql use of cursors.


There's a rolled back subtransaction that also did some magic on the 
rows AFAICT.  I can try and spend some time producing a smaller test 
case over the weekend.  No hurry since this missed the today's point 
release anyway.



.m




Re: [HACKERS] Assertion failure in REL9_5_STABLE

2016-08-10 Thread Marko Tiikkaja

On 2016-08-10 19:28, Alvaro Herrera wrote:

Umm.  AFAICS HeapTupleSatisfiesUpdate() only returns SelfUpdated after
already calling HeapTupleHeaderGetCmax() (which obviously hasn't caught
the same assertion).  Something is odd there ...


HeapTupleSatisfiesUpdate() returns HeapTupleBeingUpdated in this case. 
HeapTupleSelfUpdated comes from here (line 4749):


/* if there are updates, follow the update chain */
if (follow_updates && !HEAP_XMAX_IS_LOCKED_ONLY(infomask))
{
    HTSU_Result res;

    res = heap_lock_updated_tuple(relation, tuple, &t_ctid,
                                  GetCurrentTransactionId(),
                                  mode);
    if (res != HeapTupleMayBeUpdated)
    {
        result = res;
        /* recovery code expects to have buffer lock held */
        LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE);
        goto failed;
    }
}


Can you share the test case?


Not at this time, sorry.


.m




[HACKERS] Assertion failure in REL9_5_STABLE

2016-08-10 Thread Marko Tiikkaja

Hi,

Running one specific test from our application against REL9_5_STABLE 
(current as of today) gives me this:


#2  0x7effb59595be in ExceptionalCondition (
conditionName=conditionName@entry=0x7effb5b27a88 
"!(CritSectionCount > 0 || TransactionIdIsCurrentTransactionId(( 
(!((tup)->t_infomask & 0x0800) && ((tup)->t_infomask & 0x1000) && 
!((tup)->t_infomask & 0x0080)) ? HeapTupleGetUpdateXid(tup) : ( 
(tup)-"..., errorType=errorType@entry=0x7effb599b74b "FailedAssertion", 
fileName=fileName@entry=0x7effb5b2796c "combocid.c", 
lineNumber=lineNumber@entry=132) at assert.c:54
#3  0x7effb598e19b in HeapTupleHeaderGetCmax (tup=0x7effa85714c8) at 
combocid.c:131
#4  0x7effb55fb0c1 in heap_lock_tuple (relation=0x7effb5bf5d90, 
tuple=tuple@entry=0x7fffcee73690, cid=346, 
mode=mode@entry=LockTupleExclusive, wait_policy=LockWaitBlock, 
follow_updates=follow_updates@entry=1 '\001',
buffer=buffer@entry=0x7fffcee7367c, hufd=hufd@entry=0x7fffcee73680) 
at heapam.c:4813
#5  0x7effb5753e82 in ExecLockRows (node=node@entry=0x7effb6cebb00) 
at nodeLockRows.c:179
#6  0x7effb573cbc8 in ExecProcNode (node=node@entry=0x7effb6cebb00) 
at execProcnode.c:516
#7  0x7effb5739432 in ExecutePlan (dest=0x7effb5dd8160 
, direction=, numberTuples=0, 
sendTuples=1 '\001', operation=CMD_SELECT, planstate=0x7effb6cebb00, 
estate=0x7effb6ceb8f8)

at execMain.c:1549
#8  standard_ExecutorRun (queryDesc=0x7effb6ae3b40, direction=<optimized out>, count=0) at execMain.c:337
#9  0x7effb57661db in _SPI_pquery (tcount=0, fire_triggers=1 '\001', 
queryDesc=0x7effb6ae3b40) at spi.c:2411


The failure is a number of levels down a call stack of PL/PgSQL 
functions, but I can reproduce it at will by calling the function.  I 
unfortunately can't narrow it down to a smaller test case, but attached 
is an xlogdump of the session.  The query that finally breaks the 
elephant's back is a SELECT .. FOR UPDATE from relid=54511.


Any ideas on how to debug this?


.m
rmgr: Standby len (rec/tot): 24/50, tx:  0, lsn: 
0/2428, prev 0/23000108, desc: RUNNING_XACTS nextXid 8495 
latestCompletedXid 8494 oldestRunningXid 8495
rmgr: Transaction len (rec/tot): 12/38, tx:   8496, lsn: 
0/2460, prev 0/2428, desc: ASSIGNMENT xtop 8495: subxacts: 8496
rmgr: Heaplen (rec/tot):  7/  1042, tx:   8496, lsn: 
0/2488, prev 0/2460, desc: LOCK off 10: xid 8496 LOCK_ONLY EXCL_LOCK 
KEYS_UPDATED , blkref #0: rel 1663/47473/50461 blk 0 FPW
rmgr: Heaplen (rec/tot):  7/  8246, tx:   8496, lsn: 
0/240004A0, prev 0/2488, desc: LOCK off 48: xid 8496 LOCK_ONLY EXCL_LOCK 
KEYS_UPDATED , blkref #0: rel 1663/47473/51018 blk 81 FPW
rmgr: Heaplen (rec/tot):  7/53, tx:   8496, lsn: 
0/240024F0, prev 0/240004A0, desc: LOCK off 49: xid 8496 LOCK_ONLY EXCL_LOCK 
KEYS_UPDATED , blkref #0: rel 1663/47473/51018 blk 81
rmgr: Sequencelen (rec/tot):158/   204, tx:   8496, lsn: 
0/24002528, prev 0/240024F0, desc: LOG rel 1663/47473/49387, blkref #0: rel 
1663/47473/49387 blk 0
rmgr: Sequencelen (rec/tot):158/   204, tx:  0, lsn: 
0/240025F8, prev 0/24002528, desc: LOG rel 1663/47473/49992, blkref #0: rel 
1663/47473/49992 blk 0
rmgr: Heaplen (rec/tot):  3/  4404, tx:   8497, lsn: 
0/240026C8, prev 0/240025F8, desc: INSERT off 74, blkref #0: rel 
1663/47473/49994 blk 18 FPW
rmgr: Btree   len (rec/tot):  2/  6961, tx:   8497, lsn: 
0/24003800, prev 0/240026C8, desc: INSERT_LEAF off 78, blkref #0: rel 
1663/47473/59412 blk 2 FPW
rmgr: Btree   len (rec/tot):  2/  6933, tx:   8497, lsn: 
0/24005350, prev 0/24003800, desc: INSERT_LEAF off 342, blkref #0: rel 
1663/47473/59414 blk 8 FPW
rmgr: Heaplen (rec/tot):  7/  4910, tx:   8497, lsn: 
0/24006E80, prev 0/24005350, desc: LOCK off 3: xid 8497 LOCK_ONLY KEYSHR_LOCK , 
blkref #0: rel 1663/47473/50005 blk 0 FPW
rmgr: Heaplen (rec/tot):  3/87, tx:   8498, lsn: 
0/240081C8, prev 0/24006E80, desc: INSERT off 75, blkref #0: rel 
1663/47473/49994 blk 18
rmgr: Btree   len (rec/tot):  2/  4273, tx:   8498, lsn: 
0/24008220, prev 0/240081C8, desc: INSERT_LEAF off 76, blkref #0: rel 
1663/47473/59412 blk 13 FPW
rmgr: Btree   len (rec/tot):  2/64, tx:   8498, lsn: 
0/240092D8, prev 0/24008220, desc: INSERT_LEAF off 343, blkref #0: rel 
1663/47473/59414 blk 8
rmgr: Heaplen (rec/tot):  7/53, tx:   8498, lsn: 
0/24009318, prev 0/240092D8, desc: LOCK off 29: xid 8498 LOCK_ONLY KEYSHR_LOCK 
, blkref #0: rel 1663/47473/50005 blk 0
rmgr: Heaplen (rec/tot):  3/  2397, tx:   8496, lsn: 
0/24009350, prev 0/24009318, desc: INSERT off 16, blkref #0: rel 
1663/47473/49389 blk 16 FPW
rmgr: Btree   len (rec/tot):  2/  5773, tx:   8496, lsn: 
0/24009CB0, prev 0/24009350, desc: INSERT_LEAF off 6, blkref #0: rel 
1663/47473/49400 blk 1 FPW
rmgr: Btree   len (rec/tot):  

Re: [HACKERS] Oddity with NOT IN

2016-08-04 Thread Marko Tiikkaja

On 2016-08-04 11:23 PM, Jim Nasby wrote:

I've got a customer that discovered something odd...

SELECT f1 FROM v1 WHERE f2 not in (SELECT bad FROM v2 WHERE f3 = 1);

does not error, even though bad doesn't exist, but


I'm guessing there's a v1.bad?

This is a common mistake, and also why I recommend always table 
qualifying column references when there's more than one table in scope.



.m




[HACKERS] INSERT .. SET syntax

2016-07-03 Thread Marko Tiikkaja

Hi,

Here's a patch for $SUBJECT.  I'll probably work on the docs a bit more 
before the next CF, but I thought I'd post it anyway.
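
To illustrate the proposed syntax (example table borrowed from the
insert.sgml page the patch touches):

```sql
-- With the proposed SET clause, this statement:
INSERT INTO distributors SET did = 10, dname = 'Conrad International';

-- would be equivalent to:
INSERT INTO distributors (did, dname) VALUES (10, 'Conrad International');
```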



.m
diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml
index e710cf4..33e577b 100644
--- a/doc/src/sgml/ref/insert.sgml
+++ b/doc/src/sgml/ref/insert.sgml
@@ -22,12 +22,21 @@ PostgreSQL documentation
  
 
 [ WITH [ RECURSIVE ] with_query 
[, ...] ]
-INSERT INTO table_name [ AS 
alias ] [ ( column_name [, ...] ) ]
-{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | 
query }
+INSERT INTO table_name [ AS 
alias ]
+{
+[ column_list ] VALUES ( { expression | DEFAULT } [, ...] ) [, ...] |
+[ column_list ] query |
+DEFAULT VALUES |
+SET column_name = { 
expression | DEFAULT } [, ...]
+}
 [ ON CONFLICT [ conflict_target ] conflict_action ]
 [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
 
-where conflict_target can 
be one of:
+where column_list 
is:
+
+( column_name [, ...] )
+
+and conflict_target can 
be one of:
 
 ( { index_column_name | ( 
index_expression ) } [ COLLATE 
collation ] [ opclass ] [, ...] ) [ WHERE index_predicate ]
 ON CONSTRAINT constraint_name
@@ -53,13 +62,26 @@ INSERT INTO table_name [ AS 
 
   
-   The target column names can be listed in any order.  If no list of
-   column names is given at all, the default is all the columns of the
-   table in their declared order; or the first N column
-   names, if there are only N columns supplied by the
-   VALUES clause or query.  The values
-   supplied by the VALUES clause or query are
-   associated with the explicit or implicit column list left-to-right.
+   The target column names in a column_list can be
+   listed in any order.  If no column_list is given at
+   all (and the SET syntax is not used), the default is all
+   the columns of the table in their declared order; or the first
+   N column names, if there are only N
+   columns supplied by the VALUES clause or
+   query.  The values supplied by the VALUES
+   clause or query are associated with the explicit or
+   implicit column list left-to-right.
+  
+
+  
+Instead of a column_list and a VALUES
+clause, a SET clause similar to that of an
+UPDATE can be used instead.  The advantage of the
+SET clause is that instead of matching the elements in
+the two lists by ordinal position, the column name and the
+expression to assign to that column are visually next to each other.
+This can make long column assignment lists significantly more
+readable.
   
 
   
@@ -691,13 +713,13 @@ INSERT INTO distributors (did, dname) VALUES (10, 'Conrad 
International')
   
INSERT conforms to the SQL standard, except that
the RETURNING clause is a
-   PostgreSQL extension, as is the ability
-   to use WITH with INSERT, and the ability to
-   specify an alternative action with ON CONFLICT.
-   Also, the case in
-   which a column name list is omitted, but not all the columns are
-   filled from the VALUES clause or query,
-   is disallowed by the standard.
+   PostgreSQL extension, as is the
+   SET clause when used instead of a VALUES clause, the
+   ability to use WITH with INSERT, and the
+   ability to specify an alternative action with ON
+   CONFLICT.  Also, the case in which a column name list is omitted,
+   but not all the columns are filled from the VALUES clause
+   or query, is disallowed by the standard.
   
 
   
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index 29c8c4e..55c4cb3 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -466,8 +466,9 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
stmt->onConflictClause->action 
== ONCONFLICT_UPDATE);
 
/*
-* We have three cases to deal with: DEFAULT VALUES (selectStmt == 
NULL),
-* VALUES list, or general SELECT input.  We special-case VALUES, both 
for
+* We have four cases to deal with: DEFAULT VALUES (selectStmt == NULL 
and
+* cols == NIL), SET syntax (selectStmt == NULL but cols != NIL), VALUES
+* list, or general SELECT input.  We special-case VALUES, both for
 * efficiency and so we can handle DEFAULT specifications.
 *
 * The grammar allows attaching ORDER BY, LIMIT, FOR UPDATE, or WITH to 
a
@@ -522,7 +523,7 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
/*
 * Determine which variant of INSERT we have.
 */
-   if (selectStmt == NULL)
+   if (selectStmt == NULL && stmt->cols == NIL)
{
/*
 * We have INSERT ... DEFAULT VALUES.  We can handle this case 
by
@@ -531,6 +532,25 @@ transformInsertStmt(ParseState *pstate, InsertStmt *stmt)
 */
exprList = NIL;
}
+   else if (selectStmt == NULL)
+   {
+   /*
+  

[HACKERS] Column COMMENTs in CREATE TABLE?

2016-07-02 Thread Marko Tiikkaja

Hi,

Currently we have CREATE TABLE statements in a git repository that look 
roughly like this:


CREATE TABLE foo(
  -- the first field
  f1 int NOT NULL,
  -- the second field
  f2 int NOT NULL,
...
);

But the problem is that those comments don't obviously make it all the 
way to the database, so e.g.  \d+ tblname  won't show you that precious 
information.  If you want them to make it all the way to the database, 
you'd have to add COMMENT ON statements *after* the CREATE TABLE, which 
means that either column comments have to be maintained twice, or the 
CREATE TABLE statement won't have them, so you have to go back and forth 
in your text editor to see the comments.  Both solutions are suboptimal.
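
For reference, the COMMENT ON workaround described above looks like
this — the comment text ends up far from the column definition and has
to be kept in sync by hand:

```sql
CREATE TABLE foo(
  f1 int NOT NULL,
  f2 int NOT NULL
);

COMMENT ON COLUMN foo.f1 IS 'the first field';
COMMENT ON COLUMN foo.f2 IS 'the second field';
```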


What I would prefer is something like this:

CREATE TABLE foo(
  f1 int NOT NULL COMMENT
'the first field',
  f2 int NOT NULL COMMENT
'the second field',
...
);

which would ensure the comments are both next to the field definition 
they're documenting and that they make it all the way to the database. 
I looked into the biggest products, and MySQL supports this syntax.  I 
couldn't find any similar syntax in any other product.


The downside is that this would require us to make COMMENT a fully 
reserved keyword, which would quite likely break at least one 
application out in the wild.  Another option would be to make the syntax 
something like  [ COLUMN COMMENT '...' ], but that's not exactly a 
beautiful solution either.


I still think this would be a really valuable feature if we can come up 
with a decent syntax for it.  Does anyone have any ideas?  Or does 
anybody want to shoot this proposal down right off the bat?



.m




[HACKERS] Allow INSTEAD OF DELETE triggers to modify the tuple for RETURNING

2016-06-30 Thread Marko Tiikkaja

Hi,

Currently the tuple returned by INSTEAD OF triggers on DELETEs is only 
used to determine whether to pretend that the DELETE happened or not, 
which is often not helpful enough; for example, the actual tuple might 
have been concurrently UPDATEd by another transaction and one or more of 
the columns now hold values different from those in the planSlot tuple. 
Attached is an example case which is impossible to properly implement 
under the current behavior.  For what it's worth, the current behavior 
seems to be an accident; before INSTEAD OF triggers either the tuple was 
already locked (in case of BEFORE triggers) or the actual pre-DELETE 
version of the tuple was fetched from the heap.


So I'm suggesting to change this behavior and allow INSTEAD OF DELETE 
triggers to modify the OLD tuple and use that for RETURNING instead of 
returning the tuple in planSlot.  Attached is a WIP patch implementing that.


Is there any reason why we wouldn't want to change the current behavior?


.m
BEGIN;

CREATE OR REPLACE FUNCTION foof() RETURNS TRIGGER AS $$
BEGIN
-- imagine someone concurrently incremented counter here
OLD.counter := OLD.counter + 1;
RETURN OLD;
END
$$ LANGUAGE plpgsql;

CREATE TABLE foo(counter int NOT NULL);

CREATE VIEW foov AS SELECT * FROM foo;

CREATE TRIGGER foov_instead
INSTEAD OF DELETE ON foov
FOR EACH ROW
EXECUTE PROCEDURE foof();

INSERT INTO foo VALUES (0);

DELETE FROM foov RETURNING counter;

ROLLBACK;
*** a/src/backend/commands/trigger.c
--- b/src/backend/commands/trigger.c
***
*** 2295,2307  ExecARDeleteTriggers(EState *estate, ResultRelInfo *relinfo,
}
  }
  
! bool
  ExecIRDeleteTriggers(EState *estate, ResultRelInfo *relinfo,
!HeapTuple trigtuple)
  {
TriggerDesc *trigdesc = relinfo->ri_TrigDesc;
TriggerData LocTriggerData;
!   HeapTuple   rettuple;
int i;
  
LocTriggerData.type = T_TriggerData;
--- 2295,2307 
}
  }
  
! TupleTableSlot *
  ExecIRDeleteTriggers(EState *estate, ResultRelInfo *relinfo,
!HeapTuple trigtuple, TupleTableSlot 
*slot)
  {
TriggerDesc *trigdesc = relinfo->ri_TrigDesc;
TriggerData LocTriggerData;
!   HeapTuple   rettuple = trigtuple;
int i;
  
LocTriggerData.type = T_TriggerData;
***
*** 2333,2343  ExecIRDeleteTriggers(EState *estate, ResultRelInfo *relinfo,
   
relinfo->ri_TrigInstrument,
   
GetPerTupleMemoryContext(estate));
if (rettuple == NULL)
!   return false;   /* Delete was suppressed */
!   if (rettuple != trigtuple)
!   heap_freetuple(rettuple);
}
!   return true;
  }
  
  void
--- 2333,2359 
   
relinfo->ri_TrigInstrument,
   
GetPerTupleMemoryContext(estate));
if (rettuple == NULL)
!   return NULL;/* Delete was suppressed */
}
! 
!   if (rettuple != trigtuple)
!   {
!   /*
!* Return the modified tuple using the es_trig_tuple_slot.  We 
assume
!* the tuple was allocated in per-tuple memory context, and 
therefore
!* will go away by itself. The tuple table slot should not try 
to
!* clear it.
!*/
!   TupleTableSlot *newslot = estate->es_trig_tuple_slot;
!   TupleDesc   tupdesc = 
RelationGetDescr(relinfo->ri_RelationDesc);
! 
!   if (newslot->tts_tupleDescriptor != tupdesc)
!   ExecSetSlotDescriptor(newslot, tupdesc);
!   ExecStoreTuple(rettuple, newslot, InvalidBuffer, false);
!   slot = newslot;
!   }
! 
!   return slot;
  }
  
  void
*** a/src/backend/executor/nodeModifyTable.c
--- b/src/backend/executor/nodeModifyTable.c
***
*** 573,585  ExecDelete(ItemPointer tupleid,
if (resultRelInfo->ri_TrigDesc &&
resultRelInfo->ri_TrigDesc->trig_delete_instead_row)
{
!   booldodelete;
  
!   Assert(oldtuple != NULL);
!   dodelete = ExecIRDeleteTriggers(estate, resultRelInfo, 
oldtuple);
  
!   if (!dodelete)  /* "do nothing" */
return NULL;
}
else if (resultRelInfo->ri_FdwRoutine)
{
--- 573,595 
if (resultRelInfo->ri_TrigDesc &&
resultRelInfo->ri_TrigDesc->trig_delete_instead_row)
{
!   /*
!* Store the heap tuple into the tuple table slot, making sure 

Re: [HACKERS] Feature suggestions: "dead letter"-savepoint.

2016-06-23 Thread Marko Tiikkaja

On 2016-06-23 12:34, Terje Elde wrote:

Typically the flow would be something like:

BEGIN;
SELECT id FROM targets WHERE status=‘scheduled’ FOR UPDATE SKIP LOCKED LIMIT 1;
UPDATE targets SET status=‘in-flight’ WHERE id =%(id);
COMMIT;
— Do the work.
BEGIN;
UPDATE targets SET status=‘completed’ WHERE id = %(id); — or 
status=‘failed-foo’, if it fails for reason foo
COMMIT;


What I’m suggesting would be something along the lines of;

BEGIN;
SELECT id FROM targets WHERE status=‘scheduled’ FOR UPDATE SKIP LOCKED LIMIT 1;
UPDATE targets SET status=‘failed-unknown’ WHERE id =%(id);
SAVEPOINT deadletter ON FAILURE COMMIT;
— Do the work.
UPDATE targets SET status=‘completed’ WHERE id = %(id); — or status=‘failed-foo'
COMMIT;


Comparing these two: how is the latter any better?  It's the same number 
of commands, except it's holding a transaction open for longer, it's 
using a non-standard concept and it's arguably more complex.



.m




Re: [HACKERS] pg_dump -j against standbys

2016-05-25 Thread Marko Tiikkaja

On 25/05/16 15:59, Magnus Hagander wrote:

On Tue, May 24, 2016 at 5:39 PM, Tom Lane  wrote:

This patch will cause pg_dump to fail entirely against pre-9.0 servers.
You need to execute the test conditionally.



Ugh. can I blame coding while too jetlagged?


You could try blaming Magnus.  Oh, wait..


.m




Re: [HACKERS] Calling json_* functions with JSONB data

2016-05-23 Thread Marko Tiikkaja

On 2016-05-23 18:55, Peter van Hardenberg wrote:

I talked this over with Andrew who had no objections and suggested I float
it on the list before writing a patch. Looks pretty straightforward, just a
few new data rows in pg_proc.h.

Anyone have any concerns or suggestions?


What about cases like  json_whatever($1)  which previously worked but 
will now be ambiguous?  (Or will they somehow not be ambiguous?)
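
The concern can be sketched like this (json_typeof is chosen
arbitrarily here; the jsonb overload is the hypothetical addition being
proposed):

```sql
-- Today json_typeof(json) is the only candidate, so an untyped
-- parameter resolves to it:
PREPARE p AS SELECT json_typeof($1);

-- If json_typeof(jsonb) were added as well, $1 could match either
-- overload, and the parser would have no unique function to pick.
```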



.m




Re: [HACKERS] [PATCH] Alter or rename enum value

2016-03-27 Thread Marko Tiikkaja

On 2016-03-27 19:30, Dagfinn Ilmari Mannsåker wrote:

ilm...@ilmari.org (Dagfinn Ilmari Mannsåker) writes:


I was bored and thought "how hard could it be?", and a few hours'
hacking later, I have something that seems to work.  It doesn't do IF
NOT EXISTS yet, and the error messaging could do with some improvement,
and there are no docs.  The patch is attached, as well as at
https://github.com/ilmari/postgres/commit/enum-alter-value


Here's v3 of the patch of the patch, which I consider ready for proper
review.


A couple of trivial comments below.

  +ALTER VALUE [ IF EXISTST ] TO [ IF NOT EXISTS ]


typo EXISTST

  +  If IF EXISTS or is IF NOT EXISTS
  +  is specified, it is not an error if the type doesn't contain the old


double is

  +  if the old value is not alreeady present or the new value is.

typo alreeady

  + * RenameEnumLabel
  + *   Add a new label to the enum set. By default it goes at

copypaste-o?

  + if (!stmt->oldVal) {

not project curly brace style


.m




Re: [HACKERS] pl/pgsql exported functions

2016-03-02 Thread Marko Tiikkaja

On 11/02/16 18:29, Magnus Hagander wrote:

Most of the pl/pgsql functions and variables are prefixed plpgsql_, so they
don't risk conflicting with other shared libraries loaded.

There are a couple that are not prefixed. Attached patch fixes that. It's
mostly a cleanup, but I think it's something we should do since it's only 2
variables and 2 functions.

AFAICT these are clearly meant to be internal. (the plugin variable is
accessed through find_rendezvous_variable)

Comments?


Looks good to me.


.m




Re: [HACKERS] proposal: schema PL session variables

2016-02-09 Thread Marko Tiikkaja

On 08/02/16 14:16, Pavel Stehule wrote:

2016-02-08 13:53 GMT+01:00 Marko Tiikkaja <ma...@joh.to>:


Yeah, and that's exactly what I don't want, because that means that CREATE
SCHEMA VARIABLE suddenly breaks existing code.



theoretically yes, but this conflict can be 100% detected - so no quiet bug
is possible, and plpgsql_check can find this issue well. If you don't use
schema variable, then your code will be correct. You have to explicitly
create the variable, and if there will be any problem, then the problem
will be only in PL functions in one schema. And you can identify it by
statical analyse.


I'm sorry, but I think you've got your priorities completely backwards. 
 You're saying that it's OK to add a footgun because blown-off pieces 
of feet can be found by using a third party static analyzer barely 
anyone uses.  And at best, that footgun is only a very minor convenience 
(though I'd argue that omitting it actually hurts readability).


That makes absolutely no sense to me at all.


.m




Re: [HACKERS] proposal: schema PL session variables

2016-02-08 Thread Marko Tiikkaja

On 08/02/16 09:16, Pavel Stehule wrote:

Usage
=

DROP SCHEMA IF EXISTS test_schema CASCADE;
SET SCHEMA test_schema;

CREATE SCHEMA VARIABLE local_counter AS int DEFAULT 0;

CREATE OR REPLACE FUNCTION increment_counter()
RETURNS void AS $$
BEGIN
   local_counter := local_counter + 1;
END;
$$ LANGUAGE plpgsql;


How does this function know which schema variables are visible?


.m




Re: [HACKERS] proposal: schema PL session variables

2016-02-08 Thread Marko Tiikkaja

On 08/02/16 13:17, Pavel Stehule wrote:

2016-02-08 13:03 GMT+01:00 Marko Tiikkaja <ma...@joh.to>:

How does this function know which schema variables are visible?


function see all schema variables from same schema as function's schema


Personally I find that undesirable.  I don't know what oracle does, but 
variables being visible without schema-qualifying them can introduce 
variable conflicts in PL/PgSQL.  I'd prefer if you could only refer to 
them by prefixing them with the schema name (or maybe allow search_path 
to be used).



.m


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: schema PL session variables

2016-02-08 Thread Marko Tiikkaja

On 08/02/16 13:41, Pavel Stehule wrote:

2016-02-08 13:22 GMT+01:00 Marko Tiikkaja <ma...@joh.to>:

Personally I find that undesirable.  I don't know what oracle does, but
variables being visible without schema-qualifying them can introduce
variable conflicts in PL/PgSQL.  I'd prefer if you could only refer to them
by prefixing them with the schema name (or maybe allow search_path to be
used).


I hope so there are not new conflicts - schema variable is not directly
visible from SQL (in this iteration) - they are visible only from functions
- and the behave is same like global plpgsql variable. So schema variable
can be in conflict with SQL identifier only exactly identically as plpgsql
variable


Yeah, and that's exactly what I don't want, because that means that 
CREATE SCHEMA VARIABLE suddenly breaks existing code.



But prefix can be used.


Sure, but I don't see the point.  Is there a reason not to require such 
variable references to be prefixed with the schema name?  Or explicitly 
bring them into scope in the DECLARE section somehow.



.m




Re: [HACKERS] count_nulls(VARIADIC "any")

2016-02-05 Thread Marko Tiikkaja

On 2016-02-05 05:06, Tom Lane wrote:

I wrote:

Pavel Stehule  writes:

[ num_nulls_v6.patch ]



I started looking through this.  It seems generally okay, but I'm not
very pleased with the function name "num_notnulls".  I think it would
be better as "num_nonnulls", as I see Oleksandr suggested already.


Not hearing any complaints, I pushed it with that change and some other
cosmetic adjustments.


Thanks Tom and Pavel and everyone who provided feedback.


.m




Re: [HACKERS] ALTER TABLE behind-the-scenes effects' CONTEXT

2016-01-30 Thread Marko Tiikkaja

On 2016-01-21 04:17, Simon Riggs wrote:

Marko, I was/am waiting for an updated patch. Could you comment please?


Sorry, I've not found time to work on this recently.

Thanks for everyone's comments so far.  I'll move this to the next CF 
and try and get an updated patch done in time for that one.



.m




Re: [HACKERS] count_nulls(VARIADIC "any")

2016-01-26 Thread Marko Tiikkaja

On 25/01/16 19:57, Pavel Stehule wrote:

Marco is a author of this patch, so - Marco, please, send final version of
this patch


I don't really care about the tests.  Can we not use the v5 patch 
already in the thread?  As far as I could tell there were no reviewer's 
comments on it anymore.



.m




Re: [HACKERS] proposal: function parse_ident

2016-01-22 Thread Marko Tiikkaja

Hi Pavel,

Sorry for the lack of review here.  I didn't realize I was still the 
reviewer for this after it had been moved to another commitfest.


That said, I think I've exhausted my usefulness here as a reviewer. 
Marking ready for committer.



.m




Re: [HACKERS] SET syntax in INSERT

2016-01-14 Thread Marko Tiikkaja

Hi,

SET syntax for INSERT was brought up a few years ago here: 
http://www.postgresql.org/message-id/2c5ef4e30908251010s46d9d566m1da21357891ba...@mail.gmail.com


From the discussion it seems that one committer was against, one 
committer was not against, and one committer saw something good in the 
proposal.  Personally, I believe this would be a huge readability 
improvement to INSERTs with more than a few columns.  I'm willing to put 
in some work to make this happen for 9.7, but I'd like to know that I'm 
not wasting my time.


What do we think?


.m




Re: [HACKERS] SET syntax in INSERT

2016-01-14 Thread Marko Tiikkaja

On 2016-01-14 8:06 PM, Pavel Stehule wrote:

Probably there is less risk than 7 years ago, but still creating own syntax
isn't the best idea. This is syntactic sugar only and different from ANSi
SQL or common standard.


So is RETURNING, UPSERT, PL/PgSQL and many other useful features.


.m




Re: [HACKERS] SET syntax in INSERT

2016-01-14 Thread Marko Tiikkaja

On 2016-01-14 20:50, Vitaly Burovoy wrote:

On 1/14/16, Tom Lane  wrote:

Assume a table with an int-array column, and consider

INSERT INTO foo SET arraycol[2] = 7, arraycol[4] = 11;


Right part is a column name, not an expression. Isn't it?
So "arraycol[2]" is not possible there.


I think the idea here was that it's allowed in UPDATE.  But I don't see 
the point of allowing that in an INSERT.



.m




Re: [HACKERS] SET syntax in INSERT

2016-01-14 Thread Marko Tiikkaja

On 2016-01-14 20:33, Tom Lane wrote:

Pavel Stehule  writes:

Probably there is less risk than 7 years ago, but still creating own
syntax isn't the best idea. This is syntactic sugar only and different
from ANSi SQL or common standard.


It's more than syntactic sugar; you are going to have to invent semantics,
as well, because it's less than clear what partial-field assignments
should do.


I don't really care for such.  In my opinion it would be fine if this 
simply was only "syntactic sugar", and trying to do any tricks like this 
would simply raise an exception.



.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-13 Thread Marko Tiikkaja

On 13/01/16 14:27, Vladimir Sitnikov wrote:

TL;DR: I suggest to create "generic plan" with regard to current bind values.
What's wrong with that approach?


I don't understand how this is supposed to work.  Say you already have a 
plan which looks like this:


 Seq Scan on foo  (cost=0.00..100.00 rows=1 width=630)
   Filter: (bar = $1)

Now the plan gets invoked with  $1 = 5.  What exactly in your mind would 
happen here?



.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-13 Thread Marko Tiikkaja

On 13/01/16 14:12, Pavel Stehule wrote:

so the strategy - if cost of generic plan is less than some MAGIC CONSTANT
(can be specified by GUC), then use generic plan. Elsewhere use a custom
plan everytime.

It allow to controll the plan reusing. When MAGIC CONSTANT = 0 .. use
custom plan everytime, When MAGIC CONSTANT = M, then use generic plan
always.


I don't think that would solve even the original problem without 
effectively disabling generic plans, despite the problem being 
relatively simple.  The generic plan appears to be really cheap, because 
the planner doesn't have the concept of a "worst case".



.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-13 Thread Marko Tiikkaja

On 13/01/16 14:36, Vladimir Sitnikov wrote:

Say you already have a plan which looks like this:
Now the plan gets invoked with  $1 = 5.  What exactly in your mind would happen 
here?


A sequential scan with $1=5 condition. What else could be there?


I don't know, it's your proposal :-)  But it looks like I misunderstood.


Note: I do not suggest changing already cached plans yet.
I suggest looking into "6th bind values" when building a cached plan.


But that wouldn't have helped your case.  The custom plan is *more 
expensive*; I'm guessing because the generic plan gambles on a better 
average case instead of preparing for the worst case.



.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-13 Thread Marko Tiikkaja

On 13/01/16 15:26, Vladimir Sitnikov wrote:

2) It is likely to be more performant. We just need to explain users
that "if different plans required, just use different statements".


This completely screws over PL/PgSQL, among other things.


.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-13 Thread Marko Tiikkaja

On 13/01/16 15:34, Vladimir Sitnikov wrote:

This completely screws over PL/PgSQL, among other things.


Can you elaborate a bit?


You just write a query like this:

  SELECT * FROM foo WHERE bar = _Variable;

so you don't get to (or want to) have any control over the underlying 
prepared statement.
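For example (a minimal sketch; the table foo and its column bar are 
made-up names):

```sql
CREATE FUNCTION lookup_foo(_variable integer)
RETURNS SETOF foo AS $$
BEGIN
    -- PL/PgSQL prepares and caches a plan for this query behind the
    -- scenes; there is no statement name the caller could use to ask
    -- for "a different statement per parameter value".
    RETURN QUERY SELECT * FROM foo WHERE bar = _variable;
END;
$$ LANGUAGE plpgsql;
```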



.m




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Marko Tiikkaja

On 12/01/16 12:17, Pavel Stehule wrote:

2016-01-12 12:14 GMT+01:00 Michal Novotny :


Hi Pavel,
thanks for the information. I've been doing more investigation of this
issue and there's autovacuum running on the table however it's
automatically starting even if there is "autovacuum = off" in the
postgresql.conf configuration file.



Real autovacuum is automatically cancelled. It looks like VACUUM started by
cron, maybe?


Not if it's to prevent wraparound, which isn't unlikely if autovacuum=off.
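A couple of standard catalog queries to check whether that is what's 
happening (nothing here is specific to this report):

```sql
-- How close is each table to the age that forces an anti-wraparound
-- autovacuum, which runs even when autovacuum = off?
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;

SHOW autovacuum_freeze_max_age;  -- 200 million by default
```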


.m




Re: [HACKERS] Fwd: [JDBC] Re: 9.4-1207 behaves differently with server side prepared statements compared to 9.2-1102

2016-01-12 Thread Marko Tiikkaja

On 12/01/16 13:00, Dave Cramer wrote:

We have an interesting problem, and the reporter has been kind enough to
provide logs for which we can't explain.

I'd be interested to hear any plausible explanations for a prepared plan
suddenly going from 2ms to 60ms for the same input values ?


This is a new feature in 9.2, where on the fifth (or sixth, not sure) 
execution the planner might choose to use a generic plan.  From the 9.2 
release notes (though I'm fairly certain this is documented somewhere in 
the manual as well):


In the past, a prepared statement always had a single "generic" plan 
that was used for all parameter values, which was frequently much 
inferior to the plans used for non-prepared statements containing 
explicit constant values. Now, the planner attempts to generate custom 
plans for specific parameter values. A generic plan will only be used 
after custom plans have repeatedly proven to provide no benefit. This 
change should eliminate the performance penalties formerly seen from use 
of prepared statements (including non-dynamic statements in PL/pgSQL).
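This is easy to reproduce by hand with an explicit prepared statement 
(a sketch; assumes a table t with an index on a skewed column x):

```sql
PREPARE q(int) AS SELECT count(*) FROM t WHERE x = $1;

-- The first five executions get custom plans built for the actual
-- parameter value:
EXPLAIN EXECUTE q(42);

-- After repeating this five times, the planner may switch to a cached
-- generic plan; in EXPLAIN the literal 42 is replaced by $1, and for
-- skewed data the plan shape itself can change for the worse.
EXPLAIN EXECUTE q(42);
```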



.m




Re: [HACKERS] count_nulls(VARIADIC "any")

2016-01-12 Thread Marko Tiikkaja

On 03/01/16 22:49, Jim Nasby wrote:

In the unit test, I'd personally prefer just building a table with the
test cases and the expected NULL/NOT NULL results, at least for all the
calls that would fit that paradigm. That should significantly reduce the
size of the test. Not a huge deal though...


I don't really see the point.  "The size of the test" doesn't seem like 
a worthwhile optimization target, unless the test scripts are somehow 
really unnecessarily large.


Further, if you were developing code related to this, previously you 
could just copy-paste the defective test case in order to easily 
reproduce a problem.  But now suddenly you need a ton of different setup.


I don't expect to really have a say in this, but I think the tests are 
now worse than they were before.



.m




Re: [HACKERS] Passing initially_valid values instead of !skip_validation to StoreRelCheck() in AddRelationNewConstraints()

2015-12-03 Thread Marko Tiikkaja

On 12/3/15 12:44 PM, amul sul wrote:

On Thursday, 3 December 2015 4:36 PM, Amit Langote 
 wrote:
The user will have to separately validate the constraint by issuing a ALTER 
TABLE VALIDATE CONSTRAINT
command at a time of their choosing.


This could be time consuming operation for big table, If I am pretty much sure that 
my constraint will be valid, simply I could set both flag(initially_valid & 
skip_validation) to true.


I'm confused here.  It sounds like you're suggesting an SQL level 
feature, but you're really focused on a single line of code for some 
reason.  Could you take a step back and explain the high level picture 
of what you're trying to achieve?



.m




Re: [HACKERS] Removing Functionally Dependent GROUP BY Columns

2015-11-30 Thread Marko Tiikkaja

On 2015-12-01 05:00, David Rowley wrote:

We already allow a SELECT's target list to contain non-aggregated columns
in a GROUP BY query in cases where the non-aggregated column is
functionally dependent on the GROUP BY clause.

For example a query such as;

SELECT p.product_id,p.description, SUM(s.quantity)
FROM product p
INNER JOIN sale s ON p.product_id = s.product_id
GROUP BY p.product_id;

is perfectly fine in PostgreSQL, as p.description is functionally dependent
on p.product_id (assuming product_id is the PRIMARY KEY of product).


This has come up before (on other forums, at least), and my main concern 
has been that unlike the case where we go from throwing an error to 
allowing a query, this has a chance to make the planning of currently 
legal queries slower.  Have you tried to measure the impact of this on 
queries where there's no runtime gains to be had?



.m




Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 21:17, Jim Nasby wrote:

On 11/22/15 11:34 AM, Marko Tiikkaja wrote:

On 2015-11-22 18:29, Jim Nasby wrote:

Only if you know how many columns there already are.

Or does this not work if you hand it a row?


It "works" in the sense that it tells you whether the row is NULL or
not.  I.e. the answer will always be 0 or 1.


Hrm, I was hoping something like count_nulls(complex_type.*) would work.


Nope:

=# select num_nulls((f).*) from (select '(1,2,3)'::foo) ss(f);
ERROR:  row expansion via "*" is not supported here


.m




Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 18:30, Jim Nasby wrote:

On 11/21/15 12:49 AM, Pavel Stehule wrote:


I propose inversion function to pg_size_pretty function - like
pg_human_size.

Usage:

SELECT * FROM pg_class
WHERE pg_table_size(oid) > pg_human_size('2GB');


I'm not a big fan of the name, but +1 on the general idea.

Maybe pg_size_pretty(text)?


pg_ytterp_ezis(text)




Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-22 Thread Marko Tiikkaja

On 2015-11-22 18:29, Jim Nasby wrote:

Only if you know how many columns there already are.

Or does this not work if you hand it a row?


It "works" in the sense that it tells you whether the row is NULL or 
not.  I.e. the answer will always be 0 or 1.



.m




Re: [HACKERS] COPY (INSERT/UPDATE/DELETE .. RETURNING ..)

2015-11-21 Thread Marko Tiikkaja

On 2015-11-16 08:24, Michael Paquier wrote:

On Sun, Nov 1, 2015 at 2:49 AM, Marko Tiikkaja <ma...@joh.to> wrote:

Attached is a patch for being able to do $SUBJECT without a CTE.  The
reasons this is better than a CTE version are:

   1) It's not obvious why a CTE version works but a plain one doesn't
   2) This one has less overhead (I measured a ~12% improvement on a
not-too-unreasonable test case)


Would you mind sharing this test case as well as numbers?


Attached the test case I used; removing a batch of old rows from a table 
through an index.  Best out of three runs, I get 13.1 seconds versus 
15.0 seconds, which should amount to an improvement of around 12.7%.


Also attached a v3 of the patch which adds an Assert on top of the test 
case changes suggested by you.



.m
drop table if exists tbl;
create table tbl(a serial, b int not null, c text not null);
insert into tbl (b,c) select i, random()::text from generate_series(1, 
1) i;
alter table tbl add primary key (a);
analyze tbl;
checkpoint;
\timing

-- new code
--\copy (delete from tbl where a <= 500 returning *) to foo.dat

-- old code
--SET work_mem TO '1GB';
--\copy (with t as (delete from tbl where a <= 500 returning *) select * from t) to foo.dat
*** a/doc/src/sgml/ref/copy.sgml
--- b/doc/src/sgml/ref/copy.sgml
***
*** 112,121  COPY { table_name [ ( query
  
   
!   A  or
!command
!   whose results are to be copied.
!   Note that parentheses are required around the query.
   
  
 
--- 112,128 
  query
  
   
!   A , ,
!   ,  or
!command whose results are to be
!   copied.  Note that parentheses are required around the query.
!  
!  
!   For INSERT, UPDATE and
!   DELETE queries a RETURNING clause must be provided,
!   and the target relation must not have a conditional rule, nor
!   an ALSO rule, nor an INSTEAD rule
!   that expands to multiple statements.
   
  
 
*** a/src/backend/commands/copy.c
--- b/src/backend/commands/copy.c
***
*** 201,207  typedef struct CopyStateData
  	int			raw_buf_len;	/* total # of bytes stored */
  } CopyStateData;
  
! /* DestReceiver for COPY (SELECT) TO */
  typedef struct
  {
  	DestReceiver pub;			/* publicly-known function pointers */
--- 201,207 
  	int			raw_buf_len;	/* total # of bytes stored */
  } CopyStateData;
  
! /* DestReceiver for COPY (query) TO */
  typedef struct
  {
  	DestReceiver pub;			/* publicly-known function pointers */
***
*** 772,778  CopyLoadRawBuf(CopyState cstate)
   *
   * Either unload or reload contents of table , depending on .
   * ( = TRUE means we are inserting into the table.)  In the "TO" case
!  * we also support copying the output of an arbitrary SELECT query.
   *
   * If  is false, transfer is between the table and the file named
   * .  Otherwise, transfer is between the table and our regular
--- 772,779 
   *
   * Either unload or reload contents of table , depending on .
   * ( = TRUE means we are inserting into the table.)  In the "TO" case
!  * we also support copying the output of an arbitrary SELECT, INSERT, UPDATE
!  * or DELETE query.
   *
   * If  is false, transfer is between the table and the file named
   * .  Otherwise, transfer is between the table and our regular
***
*** 1374,1384  BeginCopy(bool is_from,
  		Assert(!is_from);
  		cstate->rel = NULL;
  
! 		/* Don't allow COPY w/ OIDs from a select */
  		if (cstate->oids)
  			ereport(ERROR,
  	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
! 	 errmsg("COPY (SELECT) WITH OIDS is not supported")));
  
  		/*
  		 * Run parse analysis and rewrite.  Note this also acquires sufficient
--- 1375,1385 
  		Assert(!is_from);
  		cstate->rel = NULL;
  
! 		/* Don't allow COPY w/ OIDs from a query */
  		if (cstate->oids)
  			ereport(ERROR,
  	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
! 	 errmsg("COPY (query) WITH OIDS is not supported")));
  
  		/*
  		 * Run parse analysis and rewrite.  Note this also acquires sufficient
***
*** 1393,1401  BeginCopy(bool is_from,
  		rewritten = pg_analyze_and_rewrite((Node *) copyObject(raw_query),
  		   queryString, NULL, 0);
  
! 		/* We don't expect more or less than one result query */
! 		if (list_length(rewritten) != 1)
! 			elog(ERROR, "unexpected rewrite result");
  
  		query = (Query *) linitial(rewritten);
  
--- 1394,1429 
  		rewritten = pg_analyze_and_rewrite((Node *) copyObject(raw_query),
  		   queryString, NULL, 0);
  
! 		/* check that we got back something we can work with */
! 		if (rewritten == NIL)
! 		{
! 			ereport(ERROR,
! 	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
! 	errmsg("DO INSTEAD NOTHING rules are not supported for COPY")));
! 		}
! 		else if (list_length(rewritten) > 1)
! 		{

Re: [HACKERS] COPY (INSERT/UPDATE/DELETE .. RETURNING ..)

2015-11-20 Thread Marko Tiikkaja

On 11/19/15 7:39 PM, Michael Paquier wrote:

On Thu, Nov 19, 2015 at 9:22 PM, Marko Tiikkaja <ma...@joh.to> wrote:

Of course, something might break if we added a new statement type which
supported RETURNING, but I'm really not worried about that.  I'm not dead
set against adding some Assert(IsA()) calls here, but I don't see the point.


gram.y has a long comment before select_no_parens regarding why we
shouldn't do it this way, did you notice it?


It talks about not doing  '(' SelectStmt ')'  "in the expression 
grammar".  I don't see what that has to do with this patch.



.m


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-20 Thread Marko Tiikkaja

On 2015-11-21 06:02, I wrote:

Here's a patch implementing this under the name num_nulls().  For
January's CF, of course.


I forgot to update some of the references in the documentation.  Fixed in 
v3, attached.



.m
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 182,188 

  

!Comparison Operators
  
 
  comparison
--- 182,188 

  

!Comparison Functions and Operators
  
 
  comparison
***
*** 191,200 
  
 
  The usual comparison operators are available, shown in .
 
  
!
  Comparison Operators
  
   
--- 191,200 
  
 
  The usual comparison operators are available, shown in .
 
  
!
  Comparison Operators
  
   
***
*** 437,442 
--- 437,470 
 
  -->
  
+   
+ Comparison Functions
+ 
+  
+   
+Function
+Description
+Example
+Example Result
+   
+  
+  
+   
+
+  
+   num_nulls
+  
+  num_nulls(VARIADIC "any")
+
+Returns the number of NULL input arguments
+num_nulls(1, NULL, 2)
+1
+   
+  
+ 
+
+ 
+ 

  

***
*** 10307,10313  table2-mapping


 The standard comparison operators shown in   are available for
 jsonb, but not for json. They follow the
 ordering rules for B-tree operations outlined at .
--- 10335,10341 


 The standard comparison operators shown in   are available for
 jsonb, but not for json. They follow the
 ordering rules for B-tree operations outlined at .
*** a/src/backend/utils/adt/misc.c
--- b/src/backend/utils/adt/misc.c
***
*** 45,50 
--- 45,118 
  
  
  /*
+  * num_nulls()
+  *  Count the number of NULL input arguments
+  */
+ Datum
+ pg_num_nulls(PG_FUNCTION_ARGS)
+ {
+ 	int32 count = 0;
+ 	int i;
+ 
+ 	if (get_fn_expr_variadic(fcinfo->flinfo))
+ 	{
+ 		ArrayType  *arr;
+ 		int ndims, nitems, *dims;
+ 		bits8 *bitmap;
+ 		int bitmask;
+ 
+ 		/* Should have just the one argument */
+ 		Assert(PG_NARGS() == 1);
+ 
+ 		/* num_nulls(VARIADIC NULL) is defined as NULL */
+ 		if (PG_ARGISNULL(0))
+ 			PG_RETURN_NULL();
+ 
+ 		/*
+ 		 * Non-null argument had better be an array.  We assume that any call
+ 		 * context that could let get_fn_expr_variadic return true will have
+ 		 * checked that a VARIADIC-labeled parameter actually is an array.  So
+ 		 * it should be okay to just Assert that it's an array rather than
+ 		 * doing a full-fledged error check.
+ 		 */
+ 		Assert(OidIsValid(get_base_element_type(get_fn_expr_argtype(fcinfo->flinfo, 0))));
+ 
+ 		/* OK, safe to fetch the array value */
+ 		arr = PG_GETARG_ARRAYTYPE_P(0);
+ 
+ 		ndims = ARR_NDIM(arr);
+ 		dims = ARR_DIMS(arr);
+ 		nitems = ArrayGetNItems(ndims, dims);
+ 
+ 		bitmap = ARR_NULLBITMAP(arr);
+ 		if (!bitmap)
+ 			PG_RETURN_INT32(0);
+ 		bitmask = 1;
+ 
+ 		for (i = 0; i < nitems; i++)
+ 		{
+ 			if ((*bitmap & bitmask) == 0)
+ 				count++;
+ 
+ 			bitmask <<= 1;
+ 			if (bitmask == 0x100)
+ 			{
+ 				bitmap++;
+ 				bitmask = 1;
+ 			}
+ 		}
+ 		PG_RETURN_INT32(count);
+ 	}
+ 
+ 	for (i = 0; i < PG_NARGS(); i++)
+ 	{
+ 		if (PG_ARGISNULL(i))
+ 			count++;
+ 	}
+ 	PG_RETURN_INT32(count);
+ }
+ 
+ /*
   * current_database()
   *	Expose the current database to the user
   */
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 2963,2968  DESCR("adjust time with time zone precision");
--- 2963,2970 
  
  DATA(insert OID = 2003 (  textanycat	   PGNSP PGUID 14 1 0 0 0 f f f f t f s s 2 0 25 "25 2776" _null_ _null_ _null_ _null_ _null_ "select $1 || $2::pg_catalog.text" _null_ _null_ _null_ ));
  DATA(insert OID = 2004 (  anytextcat	   PGNSP PGUID 14 1 0 0 0 f f f f t f s s 2 0 25 "2776 25" _null_ _null_ _null_ _null_ _null_ "select $1::pg_catalog.text || $2" _null_ _null_ _null_ ));
+ DATA(insert OID = 4400 (  num_nulls    PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 1 0 23 "2276" "{2276}" "{v}" _null_ _null_ _null_ pg_num_nulls _null_ _null_ _null_ ));
+ DESCR("count the number of NULL input arguments");
  
  DATA(insert OID = 2005 (  bytealike		   PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "17 17" _null_ _null_ _null_ _null_ _null_ bytealike _null_ _null_ _null_ ));
  DATA(insert OID = 2006 (  byteanlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "17 17" _null_ _null_ _null_ _null_ _null_ byteanlike _null_ _null_ _null_ ));
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
***
*** 481,486  extern Datum pg_ls_dir(PG_FUNCTION_ARGS);
--- 481,487 
  extern Datum pg_ls_dir_1arg(PG_FUNCTION_ARGS);
  
  /* misc.c */
+ extern Datum pg_num_nulls(PG_FUNCTION_ARGS);
  extern Datum current_database(PG_FUNCTION_ARGS);
  extern Datum current_query(PG_FUNCTION_ARGS);
  extern Datum pg_cancel_backend(PG_FUNCTION_ARGS);
*** 

Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-20 Thread Marko Tiikkaja

On 2015-11-21 06:06, Tom Lane wrote:

Marko Tiikkaja <ma...@joh.to> writes:

Here's a patch implementing this under the name num_nulls().  For
January's CF, of course.


What's this do that "count(*) - count(x)" doesn't?


This is sort of a lateral version of count(x); the input is a list of 
expressions rather than an expression executed over a bunch of input rows.
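In other words (t, a, b and c are made-up names):

```sql
-- count() inspects one expression across many rows:
SELECT count(*) - count(a) AS nulls_in_a FROM t;

-- num_nulls() inspects many expressions within one row:
SELECT num_nulls(a, b, c) AS nulls_in_row FROM t;
```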



.m




Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-20 Thread Marko Tiikkaja

Hello,

Here's a patch implementing this under the name num_nulls().  For 
January's CF, of course.




.m
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 182,188 

  

!Comparison Operators
  
 
  comparison
--- 182,188 

  

!Comparison Functions and Operators
  
 
  comparison
***
*** 194,200 
  linkend="functions-comparison-table">.
 
  
!
  Comparison Operators
  
   
--- 194,200 
  linkend="functions-comparison-table">.
 
  
!
  Comparison Operators
  
   
***
*** 437,442 
--- 437,470 
 
  -->
  
+   
+ Comparison Functions
+ 
+  
+   
+Function
+Description
+Example
+Example Result
+   
+  
+  
+   
+
+  
+   num_nulls
+  
+  num_nulls(VARIADIC "any")
+
+Returns the number of NULL input arguments
+num_nulls(1, NULL, 2)
+1
+   
+  
+ 
+
+ 
+ 

  

*** a/src/backend/utils/adt/misc.c
--- b/src/backend/utils/adt/misc.c
***
*** 45,50 
--- 45,118 
  
  
  /*
+  * num_nulls()
+  *  Count the number of NULL input arguments
+  */
+ Datum
+ pg_num_nulls(PG_FUNCTION_ARGS)
+ {
+ 	int32 count = 0;
+ 	int i;
+ 
+ 	if (get_fn_expr_variadic(fcinfo->flinfo))
+ 	{
+ 		ArrayType  *arr;
+ 		int ndims, nitems, *dims;
+ 		bits8 *bitmap;
+ 		int bitmask;
+ 
+ 		/* Should have just the one argument */
+ 		Assert(PG_NARGS() == 1);
+ 
+ 		/* num_nulls(VARIADIC NULL) is defined as NULL */
+ 		if (PG_ARGISNULL(0))
+ 			PG_RETURN_NULL();
+ 
+ 		/*
+ 		 * Non-null argument had better be an array.  We assume that any call
+ 		 * context that could let get_fn_expr_variadic return true will have
+ 		 * checked that a VARIADIC-labeled parameter actually is an array.  So
+ 		 * it should be okay to just Assert that it's an array rather than
+ 		 * doing a full-fledged error check.
+ 		 */
+ 		Assert(OidIsValid(get_base_element_type(get_fn_expr_argtype(fcinfo->flinfo, 0))));
+ 
+ 		/* OK, safe to fetch the array value */
+ 		arr = PG_GETARG_ARRAYTYPE_P(0);
+ 
+ 		ndims = ARR_NDIM(arr);
+ 		dims = ARR_DIMS(arr);
+ 		nitems = ArrayGetNItems(ndims, dims);
+ 
+ 		bitmap = ARR_NULLBITMAP(arr);
+ 		if (!bitmap)
+ 			PG_RETURN_INT32(0);
+ 		bitmask = 1;
+ 
+ 		for (i = 0; i < nitems; i++)
+ 		{
+ 			if ((*bitmap & bitmask) == 0)
+ 				count++;
+ 
+ 			bitmask <<= 1;
+ 			if (bitmask == 0x100)
+ 			{
+ 				bitmap++;
+ 				bitmask = 1;
+ 			}
+ 		}
+ 		PG_RETURN_INT32(count);
+ 	}
+ 
+ 	for (i = 0; i < PG_NARGS(); i++)
+ 	{
+ 		if (PG_ARGISNULL(i))
+ 			count++;
+ 	}
+ 	PG_RETURN_INT32(count);
+ }
+ 
+ /*
   * current_database()
   *	Expose the current database to the user
   */
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 2963,2968  DESCR("adjust time with time zone precision");
--- 2963,2970 
  
  DATA(insert OID = 2003 (  textanycat	   PGNSP PGUID 14 1 0 0 0 f f f f t f s s 2 0 25 "25 2776" _null_ _null_ _null_ _null_ _null_ "select $1 || $2::pg_catalog.text" _null_ _null_ _null_ ));
  DATA(insert OID = 2004 (  anytextcat	   PGNSP PGUID 14 1 0 0 0 f f f f t f s s 2 0 25 "2776 25" _null_ _null_ _null_ _null_ _null_ "select $1::pg_catalog.text || $2" _null_ _null_ _null_ ));
+ DATA(insert OID = 4400 (  num_nulls    PGNSP PGUID 12 1 0 2276 0 f f f f f f i s 1 0 23 "2276" "{2276}" "{v}" _null_ _null_ _null_ pg_num_nulls _null_ _null_ _null_ ));
+ DESCR("count the number of NULL input arguments");
  
  DATA(insert OID = 2005 (  bytealike		   PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "17 17" _null_ _null_ _null_ _null_ _null_ bytealike _null_ _null_ _null_ ));
  DATA(insert OID = 2006 (  byteanlike	   PGNSP PGUID 12 1 0 0 0 f f f f t f i s 2 0 16 "17 17" _null_ _null_ _null_ _null_ _null_ byteanlike _null_ _null_ _null_ ));
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
***
*** 481,486  extern Datum pg_ls_dir(PG_FUNCTION_ARGS);
--- 481,487 
  extern Datum pg_ls_dir_1arg(PG_FUNCTION_ARGS);
  
  /* misc.c */
+ extern Datum pg_num_nulls(PG_FUNCTION_ARGS);
  extern Datum current_database(PG_FUNCTION_ARGS);
  extern Datum current_query(PG_FUNCTION_ARGS);
  extern Datum pg_cancel_backend(PG_FUNCTION_ARGS);
*** /dev/null
--- b/src/test/regress/expected/misc_functions.out
***
*** 0 
--- 1,68 
+ --
+ -- num_nulls()
+ --
+ SELECT num_nulls();
+ ERROR:  function num_nulls() does not exist
+ LINE 1: SELECT num_nulls();
+^
+ HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
+ SELECT num_nulls(NULL);
+  num_nulls 
+ ---
+  1
+ (1 row)
+ 
+ SELECT num_nulls('1');
+  num_nulls 
+ ---
+  0
+ (1 row)
+ 
+ SELECT num_nulls(NULL::text);
+  num_nulls 
+ ---
+  1
+ (1 row)
+ 
+ 

Re: [HACKERS] count_nulls(VARIADIC "any")

2015-11-20 Thread Marko Tiikkaja

On 2015-11-21 06:52, Jim Nasby wrote:

On 11/20/15 11:12 PM, Marko Tiikkaja wrote:

On 2015-11-21 06:02, I wrote:

Here's a patch implementing this under the name num_nulls().  For
January's CF, of course.


I forgot to update the some references in the documentation.  Fixed in
v3, attached.


I thought there was going to be a not-null equivalent as well? I've
definitely wanted both variations in the past.


I'm not sure that's necessary.  It's quite simple to implement yourself 
using the  int - int  operator.



.m
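The "int - int" workaround mentioned above is just the total argument count minus the null count; sketched in Python with None standing in for SQL NULL (hypothetical helper names, not the committed functions):

```python
def num_nulls(*args):
    # Count arguments that are None (standing in for SQL NULL).
    return sum(1 for a in args if a is None)

def num_nonnulls(*args):
    # The suggested subtraction trick: total arguments minus NULL count.
    return len(args) - num_nulls(*args)

print(num_nonnulls(1, None, "x", None))  # → 2
```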


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-11-20 Thread Marko Tiikkaja

Hi Dean,

Here's v2 of the patch.  How's this look?


.m
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 12635,12640  NULL baz(3 rows)
--- 12635,12660 
   

 
+ single_value
+
+
+  single_value(expression)
+
+   
+   
+any type for which the equality operator has been defined
+   
+   
+same as argument type
+   
+   returns the single distinct value from the input values;
+   if more than one distinct value exists (while considering NULLs equal),
+   an exception is raised
+  
+ 
+  
+   
+
  string_agg
 
 
*** a/src/backend/utils/adt/misc.c
--- b/src/backend/utils/adt/misc.c
***
*** 40,45 
--- 40,46 
  #include "utils/acl.h"
  #include "utils/builtins.h"
  #include "utils/timestamp.h"
+ #include "utils/typcache.h"
  
  #define atooid(x)  ((Oid) strtoul((x), NULL, 10))
  
***
*** 598,600  pg_column_is_updatable(PG_FUNCTION_ARGS)
--- 599,702 
  
  	PG_RETURN_BOOL((events & REQ_EVENTS) == REQ_EVENTS);
  }
+ 
+ struct single_value_agg_stype
+ {
+ 	FunctionCallInfoData fcinfo;
+ 	Datum datum;
+ 	bool isnull;
+ };
+ 
+ Datum
+ single_value_agg_transfn(PG_FUNCTION_ARGS)
+ {
+ 	MemoryContext aggcontext;
+ 	struct single_value_agg_stype *state;
+ 
+ 	if (!AggCheckCallContext(fcinfo, &aggcontext))
+ 	{
+ 		/* cannot be called directly because of internal-type argument */
+ 		elog(ERROR, "single_value_agg_transfn called in non-aggregate context");
+ 	}
+ 
+ 	if (PG_ARGISNULL(0))
+ 	{
+ 		TypeCacheEntry *typentry;
+ 		Oid arg1_typeid = get_fn_expr_argtype(fcinfo->flinfo, 1);
+ 
+ 		if (arg1_typeid == InvalidOid)
+ 			ereport(ERROR,
+ 	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 	 errmsg("could not determine input data type")));
+ 
+ 		typentry = lookup_type_cache(arg1_typeid,
+ 	 TYPECACHE_EQ_OPR_FINFO);
+ 		if (!OidIsValid(typentry->eq_opr_finfo.fn_oid))
+ 			ereport(ERROR,
+ 	(errcode(ERRCODE_UNDEFINED_FUNCTION),
+ 			errmsg("could not identify an equality operator for type %s",
+    format_type_be(arg1_typeid))));
+ 
+ 		state = (struct single_value_agg_stype *) MemoryContextAlloc(aggcontext, sizeof(struct single_value_agg_stype));
+ 
+ 		if (PG_ARGISNULL(1))
+ 		{
+ 			state->datum = (Datum) 0;
+ 			state->isnull = true;
+ 			memset(&state->fcinfo, 0, sizeof(state->fcinfo));
+ 		}
+ 		else
+ 		{
+ 			state->datum = PG_GETARG_DATUM(1);
+ 			state->isnull = false;
+ 			InitFunctionCallInfoData(state->fcinfo, &typentry->eq_opr_finfo, 2,
+ 	 InvalidOid, NULL, NULL);
+ 		}
+ 	}
+ 	else
+ 	{
+ 		bool oprresult;
+ 
+ 		state = (struct single_value_agg_stype *) PG_GETARG_POINTER(0);
+ 
+ 		if (state->isnull)
+ 			oprresult = PG_ARGISNULL(1);
+ 		else
+ 		{
+ 			state->fcinfo.argnull[0] = false;
+ 			state->fcinfo.argnull[1] = false;
+ 			state->fcinfo.arg[0] = state->datum;
+ 			state->fcinfo.arg[1] = PG_GETARG_DATUM(1);
+ 			state->fcinfo.isnull = false;
+ 			oprresult = DatumGetBool(FunctionCallInvoke(&state->fcinfo));
+ 		}
+ 		if (!oprresult)
+ 			ereport(ERROR,
+ 	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 			errmsg("more than one distinct value passed to single_value")));
+ 	}
+ 
+ 	/*
+ 	 * The transition type for single_value() is declared to be "internal",
+ 	 * which is a pass-by-value type the same size as a pointer.  So we can
+ 	 * safely pass the pointer through nodeAgg.c's machinations.
+ 	 */
+ 	PG_RETURN_POINTER(state);
+ }
+ 
+ Datum
+ single_value_agg_finalfn(PG_FUNCTION_ARGS)
+ {
+ 	struct single_value_agg_stype *state;
+ 
+ 	if (PG_ARGISNULL(0))
+ 		PG_RETURN_NULL();
+ 
+ 	/* cannot be called directly because of internal-type argument */
+ 	Assert(AggCheckCallContext(fcinfo, NULL));
+ 
+ 	state = (struct single_value_agg_stype *) PG_GETARG_POINTER(0);
+ 	if (state->isnull)
+ 		PG_RETURN_NULL();
+ 	PG_RETURN_DATUM(state->datum);
+ }
*** a/src/include/catalog/pg_aggregate.h
--- b/src/include/catalog/pg_aggregate.h
***
*** 292,297  DATA(insert ( 3197	n 0 json_object_agg_transfn json_object_agg_finalfn --
--- 292,300 
  DATA(insert ( 3267	n 0 jsonb_agg_transfn	jsonb_agg_finalfn			---f f 0	2281	0	0		0	_null_ _null_ ));
  DATA(insert ( 3270	n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn ---f f 0	2281	0	0		0	_null_ _null_ ));
  
+ /* single_value */
DATA(insert ( 4202	n 0 single_value_agg_transfn	single_value_agg_finalfn	-	-	-	t f 0	2281	0	0		0	_null_ _null_ ));
+ 
  /* ordered-set and hypothetical-set aggregates */
  DATA(insert ( 3972	o 1 ordered_set_transition			percentile_disc_final	-		-		-		t f 0	2281	0	0		0	_null_ _null_ ));
  DATA(insert ( 3974	o 1 ordered_set_transition			percentile_cont_float8_final			-		-		-		f f 0	2281	0	0		0	_null_ _null_ ));
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 945,950 
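
As a behavioral summary of the aggregate proposed in this thread (a Python model with None standing in for SQL NULL — not the C implementation above, which compares values with the type's equality operator):

```python
def single_value(values):
    # Return the single distinct input value, treating NULLs as equal to
    # each other; raise if more than one distinct value is seen.
    # An empty input yields None, like an aggregate over zero rows.
    sentinel = object()
    state = sentinel
    for v in values:
        if state is sentinel:
            state = v          # first row: remember the value (or NULL)
        elif v != state:       # None == None here, matching "NULLs equal"
            raise ValueError("more than one distinct value passed to single_value")
    return None if state is sentinel else state

print(single_value([3, 3, 3]))  # → 3
```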

Re: [HACKERS] proposal: LISTEN *

2015-11-19 Thread Marko Tiikkaja

On 11/19/15 4:48 AM, Pavel Stehule wrote:

2015-11-19 4:40 GMT+01:00 Marko Tiikkaja <ma...@joh.to>:

I've in the past wanted to listen on all notification channels in the
current database for debugging purposes.  But recently I came across
another use case.  Since having multiple postgres backends listening for
notifications is very inefficient (one Thursday I found out 30% of our CPU
time was spent spinning on s_locks around the notification code), it makes
sense to implement a notification server of sorts which then passes on
notifications from postgres to interested clients.  A server like this
would be a lot more simple to implement if there was a way to announce that
the client wants to see all notifications, regardless of the name of the
channel.

Attached is a very crude proof-of-concept patch implementing this.  Any
thoughts on the idea?



It is looking much more like workaround. Proposed feature isn't bad, but
optimization of current implementation should be better.

Isn't possible to fix internal implementation?


It's probably possible to improve the internal implementation.  But the 
way I see it, it's always going to be less efficient than a dedicated 
process outside the system (or maybe as a background worker?) handing 
out notifications, so I don't see any point in spending my time on that.



.m




Re: [HACKERS] COPY (INSERT/UPDATE/DELETE .. RETURNING ..)

2015-11-19 Thread Marko Tiikkaja

On 11/19/15 1:17 PM, Michael Paquier wrote:

On Thu, Nov 19, 2015 at 11:04 AM, Marko Tiikkaja wrote:

Further, if someone's going to add new stuff to PreparableStmt, she should
probably think about whether it would make sense to add it to COPY and CTEs
from day one, too, and in that case not splitting them up is actually a win.


Personally, I would take it on the safe side and actually update it.
If someone were to add a new node type in PreparableStmt I am pretty
sure that we are going to forget to update the COPY part, leading us
to unwelcome bugs. And that would not be cool.


They'd have to get past this:

+   if (query->commandType != CMD_SELECT &&
+   query->returningList == NIL)
+   {
+   ereport(ERROR,
+   (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+errmsg("COPY query must have a RETURNING 
clause")));

+   }

Of course, something might break if we added a new statement type which 
supported RETURNING, but I'm really not worried about that.  I'm not 
dead set against adding some Assert(IsA()) calls here, but I don't see 
the point.



.m




Re: [HACKERS] proposal: LISTEN *

2015-11-19 Thread Marko Tiikkaja

On 11/19/15 4:21 PM, Tom Lane wrote:

Marko Tiikkaja <ma...@joh.to> writes:

I've in the past wanted to listen on all notification channels in the
current database for debugging purposes.  But recently I came across
another use case.  Since having multiple postgres backends listening for
notifications is very inefficient (one Thursday I found out 30% of our
CPU time was spent spinning on s_locks around the notification code), it
makes sense to implement a notification server of sorts which then
passes on notifications from postgres to interested clients.


... and then you gotta get the notifications to the clients, so it seems
like this just leaves the performance question hanging.


I'm not sure which performance question you think is left hanging.  If 
every client is connected to postgres, you're waking up tens if not 
hundreds of processes tens if not hundreds of times per second.  Each of 
them start a transaction, check which notifications are visible in the 
queue from clog and friends, go through the tails of every other process 
to see whether they should advance the tail pointer of the queue, commit 
the transaction and go back to sleep only to be immediately woken up again.


If they're not connected to postgres directly, this kind of complex 
processing only happens once, and then the notification server just 
unconditionally serves all notifications to the clients based on a 
simple map lookup.  It should be trivial to see how the overhead is avoided.



Why don't we find
and fix the actual performance issue (assuming that 07e4d03fb wasn't it)?


07e4d03fb wasn't it, no.


The reason I'm not terribly enthused about this proposal is that some
implementations of LISTEN (for example, our pre-9.0 one) would be unable
to support LISTEN *.  It's not too hard to imagine that at some point we
might wish to redo the existing implementation to reduce the overhead of
all listeners seeing all messages, and then having promised we could do
LISTEN * would be a serious restriction on our flexibility.  So while
I'm not necessarily trying to veto the idea, I think it has significant
opportunity cost, and I'd like to see a more solid rationale than this
one before we commit to it.


A reasonable point.


In any case, it would be good to understand exactly what's the performance
issue that's biting you.  Can you provide a test case that reproduces
that behavior?


I've attached a Go program which simulates quite accurately the 
LISTEN/NOTIFY-part of our setup.  What it does is:


  1) Open 50 connections, and issue a LISTEN in all of them
  2) Open another 50 connections, and deliver one notification every 
750 milliseconds from each of them


(My apologies for the fact that it's written in Go.  It's the only thing 
I can produce without spending a significant amount of time working on this.)


On the test server I'm running on, it doesn't look quite as bad as the 
profiles we had in production, but s_lock is still the worst offender in 
the profiles, called from:


  - 80.33% LWLockAcquire
+ 48.34% asyncQueueReadAllNotifications
+ 23.09% SIGetDataEntries
+ 16.92% SimpleLruReadPage_ReadOnly
+ 10.21% TransactionIdIsInProgress
+ 1.27% asyncQueueAdvanceTail

which roughly looks like what I recall from our actual production profiles.


.m
package main

import (
"github.com/lib/pq"
"database/sql"
"fmt"
"log"
"time"
)

const connInfo = "host=/var/run/postgresql sslmode=disable"

func listener(i int) {
l := pq.NewListener(connInfo, time.Second, time.Second, nil)
err := l.Listen(fmt.Sprintf("channel%d", i))
if err != nil {
log.Fatal(err)
}
for {
<-l.Notify
}
}

func notifier(dbh *sql.DB, i int) {
for {
_, err := dbh.Exec(fmt.Sprintf("NOTIFY channel%d", i))
if err != nil {
log.Fatal(err)
}
time.Sleep(750 * time.Millisecond)
}
}

func main() {
openDb := func() *sql.DB {
db, _ := sql.Open("postgres", connInfo)
err := db.Ping()
if err != nil {
log.Fatal(err)
}
db.SetMaxIdleConns(2)
db.SetMaxOpenConns(2)
return db
}

for i := 0; i < 50; i++ {
go listener(i)
go notifier(openDb(), i)
time.Sleep(20 * time.Millisecond)
}
select{}
}



Re: [HACKERS] proposal: LISTEN *

2015-11-19 Thread Marko Tiikkaja

On 11/19/15 5:32 PM, Tom Lane wrote:

Marko Tiikkaja <ma...@joh.to> writes:

On 11/19/15 4:21 PM, Tom Lane wrote:

... and then you gotta get the notifications to the clients, so it seems
like this just leaves the performance question hanging.



I'm not sure which performance question you think is left hanging.  If
every client is connected to postgres, you're waking up tens if not
hundreds of processes tens if not hundreds of times per second.  Each of
them start a transaction, check which notifications are visible in the
queue from clog and friends, go through the tails of every other process
to see whether they should advance the tail pointer of the queue, commit
the transaction and go back to sleep only to be immediately woken up again.



If they're not connected to postgres directly, this kind of complex
processing only happens once, and then the notification server just
unconditionally serves all notifications to the clients based on a
simple map lookup.  It should be trivial to see how the overhead is avoided.


Meh.  You've gotten rid of one single-point-of-contention by creating
another one.  Essentially, your argument why this is an improvement is
that any random application developer using Postgres is smarter than
the Postgres development team and can therefore produce something
better-performing off the cuff.


It's not about who's smarter and who's not.  All a notification server 
has to do is wait for notifications incoming from one socket and make 
sure they're delivered to the correct sockets in an epoll loop.  There's 
only one process being woken up, no need to synchronize with other 
processes, no need to see where the pointers of other processes are, no 
overhead of transaction visibility, transaction BEGIN, or transaction 
COMMIT.  The total amount of work done is trivially smaller.



Which may indeed be true, but shall we
say it's unproven?


Well it's not proof, but every time we moved an application from 
LISTENing in postgres to the notification server our CPU usage halved. 
The last time we did that (from ~100 connections to exactly 1), s_lock 
went down from 30% to 0.16% in our CPU profiles, and our CPU usage is 
only a fraction of what it used to be.  And this is with the 
notification server running on the same server with postgres, so it's 
not cheating, either.


There's no way we could keep running postgres if all 400+ clients 
interested in notifications had a LISTEN open in the database.



.m
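
The fan-out server described in this exchange reduces, in essence, to one dictionary lookup per notification. A toy Python model (real code would sit in an epoll loop over client sockets; here "clients" are just callbacks, and all names are invented for illustration):

```python
class NotificationServer:
    # One process holds a single LISTEN * connection to postgres and
    # relays notifications to subscribers, so only one backend wakes up
    # per NOTIFY instead of hundreds.
    def __init__(self):
        self.channels = {}   # channel name -> set of subscriber callbacks

    def listen(self, channel, client):
        self.channels.setdefault(channel, set()).add(client)

    def notify(self, channel, payload):
        # The per-notification work is a single map lookup plus delivery.
        for client in self.channels.get(channel, ()):
            client(channel, payload)

received = []
srv = NotificationServer()
srv.listen("channel0", lambda ch, p: received.append((ch, p)))
srv.notify("channel0", "hello")
srv.notify("channel1", "no subscribers, dropped")
print(received)  # → [('channel0', 'hello')]
```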


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY (INSERT/UPDATE/DELETE .. RETURNING ..)

2015-11-18 Thread Marko Tiikkaja

On 2015-11-16 08:24, Michael Paquier wrote:

On Sun, Nov 1, 2015 at 2:49 AM, Marko Tiikkaja <ma...@joh.to> wrote:

Attached is a patch for being able to do $SUBJECT without a CTE.  The
reasons this is better than a CTE version are:

   1) It's not obvious why a CTE version works but a plain one doesn't
   2) This one has less overhead (I measured a ~12% improvement on a
not-too-unreasonable test case)


Would you mind sharing this test case as well as numbers?


I'll try and re-puzzle it together tomorrow.  It looks like I wasn't 
smart enough to actually save the test case anywhere.



Regression tests are broken when copyselect is run in parallel because
test3 is a table used there as well. A simple idea to fix this is to rename
the table test3 to something else or to use a schema dedicated to this
test, I just renamed it to copydml_test in the patch attached.


Seems reasonable.


-   | COPY select_with_parens TO opt_program
copy_file_name opt_with copy_options
+   | COPY '(' PreparableStmt ')' TO opt_program
copy_file_name opt_with copy_options
This does not look right, PreparableStmt is used for statements that are
part of PREPARE queries, any modifications happening there would impact
COPY with this implementation. I think that we had better have a new option
query_with_parens. Please note that the patch attached has not changed that.


This was discussed in 2010 when CTEs got the same treatment, and I agree 
with what was decided back then.  If someone needs to make 
PreparableStmt different from what COPY and CTEs support, we can split 
them up.  But they're completely identical after this patch, so 
splitting them up right now is somewhat pointless.


Further, if someone's going to add new stuff to PreparableStmt, she 
should probably think about whether it would make sense to add it to 
COPY and CTEs from day one, too, and in that case not splitting them up 
is actually a win.



.m


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Add numeric_trim(numeric)

2015-11-18 Thread Marko Tiikkaja

Hi,

Here's a patch for the second function suggested in 
5643125e.1030...@joh.to.  This is my first patch trying to do anything 
with numerics, so please be gentle.  I'm sure it's full of stupid.


January's commit fest, feedback welcome, yada yada..


.m
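Roughly, the behavior the patch below aims for, modeled with Python's decimal module rather than the patch's NBASE-digit loop (a sketch of the semantics only):

```python
from decimal import Decimal

def numeric_trim(value):
    # Drop non-significant trailing zeroes after the decimal point,
    # leaving integer digits alone: 8.4100 -> 8.41, 120 -> 120,
    # and zero normalizes to plain 0.
    d = Decimal(value)
    if d != d.to_integral_value():
        return d.normalize()          # 8.4100 -> 8.41
    # normalize() would turn 120 into 1.2E+2; quantize back to scale 0
    return d.quantize(Decimal(1))

print(numeric_trim("8.4100"), numeric_trim("0.000"))  # → 8.41 0
```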
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 782,787 
--- 782,800 

 
  
+  numeric_trim
+ 
+ 
numeric_trim(numeric)
+
+numeric
+remove trailing decimal zeroes after the decimal point
+numeric_trim(8.4100)
+8.41
+   
+ 
+   
+
+ 
   pi
  
  pi()
*** a/src/backend/utils/adt/numeric.c
--- b/src/backend/utils/adt/numeric.c
***
*** 2825,2830  numeric_power(PG_FUNCTION_ARGS)
--- 2825,2904 
PG_RETURN_NUMERIC(res);
  }
  
+ /*
+  * numeric_trim() -
+  *
+  *Remove trailing decimal zeroes after the decimal point
+  */
+ Datum
+ numeric_trim(PG_FUNCTION_ARGS)
+ {
+   Numeric num = PG_GETARG_NUMERIC(0);
+   NumericVar  arg;
+   Numeric res;
+   int dscale;
+   int ndigits;
+ 
+   if (NUMERIC_IS_NAN(num))
+ 	PG_RETURN_NUMERIC(make_result(&const_nan));
+ 
+ 	init_var_from_num(num, &arg);
+ 
+   ndigits = arg.ndigits;
+   /* for simplicity in the loop below, do full NBASE digits at a time */
+   dscale = ((arg.dscale + DEC_DIGITS - 1) / DEC_DIGITS) * DEC_DIGITS;
+   /* trim unstored significant trailing zeroes right away */
+   if (dscale > (ndigits - arg.weight - 1) * DEC_DIGITS)
+   dscale = (ndigits - arg.weight - 1) * DEC_DIGITS;
+   while (dscale > 0 && ndigits > 0)
+   {
+   NumericDigit dig = arg.digits[ndigits - 1];
+ 
+ #if DEC_DIGITS == 4
+   if ((dig % 10) != 0)
+   break;
+   if (--dscale == 0)
+   break;
+   if ((dig % 100) != 0)
+   break;
+   if (--dscale == 0)
+   break;
+   if ((dig % 1000) != 0)
+   break;
+   if (--dscale == 0)
+   break;
+ #elif DEC_DIGITS == 2
+   if ((dig % 10) != 0)
+   break;
+   if (--dscale == 0)
+   break;
+ #elif DEC_DIGITS == 1
+   /* nothing to do */
+ #else
+ #error unsupported NBASE
+ #endif
+   if (dig != 0)
+   break;
+   --dscale;
+   --ndigits;
+   }
+   arg.dscale = dscale;
+   arg.ndigits = ndigits;
+ 
+   /* If it's zero, normalize the sign and weight */
+   if (ndigits == 0)
+   {
+   arg.sign = NUMERIC_POS;
+   arg.weight = 0;
+   arg.dscale = 0;
+   }
+ 
+ 	res = make_result(&arg);
+ 	free_var(&arg);
+ 
+   PG_RETURN_NUMERIC(res);
+ }
+ 
  
  /* --
   *
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 2361,2366  DESCR("exponentiation");
--- 2361,2368 
  DATA(insert OID = 2169 ( powerPGNSP 
PGUID 12 1 0 0 0 f f f f t f i s 2 0 1700 "1700 1700" _null_ _null_ _null_ 
_null_ _null_  numeric_power _null_ _null_ _null_ ));
  DESCR("exponentiation");
  DATA(insert OID = 1739 ( numeric_powerPGNSP PGUID 12 
1 0 0 0 f f f f t f i s 2 0 1700 "1700 1700" _null_ _null_ _null_ _null_ _null_ 
 numeric_power _null_ _null_ _null_ ));
+ DATA(insert OID =  ( numeric_trim PGNSP PGUID 12 1 0 0 0 
f f f f t f i s 1 0 1700 "1700" _null_ _null_ _null_ _null_ _null_   
numeric_trim _null_ _null_ _null_ ));
+ DESCR("remove trailing decimal zeroes after the decimal point");
  DATA(insert OID = 1740 ( numeric  PGNSP PGUID 12 
1 0 0 0 f f f f t f i s 1 0 1700 "23" _null_ _null_ _null_ _null_ _null_ 
int4_numeric _null_ _null_ _null_ ));
  DESCR("convert int4 to numeric");
  DATA(insert OID = 1741 ( log  PGNSP PGUID 14 
1 0 0 0 f f f f t f i s 1 0 1700 "1700" _null_ _null_ _null_ _null_ _null_ 
"select pg_catalog.log(10, $1)" _null_ _null_ _null_ ));
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
***
*** 1022,1027  extern Datum numeric_exp(PG_FUNCTION_ARGS);
--- 1022,1028 
  extern Datum numeric_ln(PG_FUNCTION_ARGS);
  extern Datum numeric_log(PG_FUNCTION_ARGS);
  extern Datum numeric_power(PG_FUNCTION_ARGS);
+ extern Datum numeric_trim(PG_FUNCTION_ARGS);
  extern Datum int4_numeric(PG_FUNCTION_ARGS);
  extern Datum numeric_int4(PG_FUNCTION_ARGS);
  extern Datum int8_numeric(PG_FUNCTION_ARGS);
*** a/src/test/regress/expected/numeric.out
--- b/src/test/regress/expected/numeric.out
***
*** 1852,1854  select log(3.1954752e47, 

[HACKERS] Add scale(numeric)

2015-11-18 Thread Marko Tiikkaja

Hi,

As suggested in 5643125e.1030...@joh.to, here's a patch for extracting 
the scale out of a numeric.


This is 2016-01 CF material, but if someone wants to bas^H^H^Hsay nice 
things in the meanwhile, feel free.



.m
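
For the archives, the semantics are simple enough to model with Python's decimal module (None standing in for SQL NULL; a sketch, not the C code below, which just reads the numeric's stored dscale):

```python
from decimal import Decimal

def scale(value):
    # Count of decimal digits in the fractional part, taken from the
    # stored exponent; None for NaN (the SQL function returns NULL).
    d = Decimal(value)
    if d.is_nan():
        return None
    return max(0, -d.as_tuple().exponent)

print(scale("0.00"), scale("1.12345"))  # → 2 5
```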
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 852,857 
--- 852,870 

 
  
+  scale
+ 
+ scale(numeric)
+
+numeric
+scale of the argument (the number of decimal digits in the 
fractional part)
+scale(8.41)
+2
+   
+ 
+   
+
+ 
   sign
  
  sign(dp or 
numeric)
*** a/src/backend/utils/adt/numeric.c
--- b/src/backend/utils/adt/numeric.c
***
*** 2825,2830  numeric_power(PG_FUNCTION_ARGS)
--- 2825,2847 
PG_RETURN_NUMERIC(res);
  }
  
+ /*
+  * numeric_scale() -
+  *
+  *Returns the scale, i.e. the count of decimal digits in the fractional 
part
+  */
+ Datum
+ numeric_scale(PG_FUNCTION_ARGS)
+ {
+   Numeric num = PG_GETARG_NUMERIC(0);
+ 
+   if (NUMERIC_IS_NAN(num))
+   PG_RETURN_NULL();
+ 
+   PG_RETURN_INT32(NUMERIC_DSCALE(num));
+ }
+ 
+ 
  
  /* --
   *
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 2361,2366  DESCR("exponentiation");
--- 2361,2368 
  DATA(insert OID = 2169 ( powerPGNSP 
PGUID 12 1 0 0 0 f f f f t f i s 2 0 1700 "1700 1700" _null_ _null_ _null_ 
_null_ _null_  numeric_power _null_ _null_ _null_ ));
  DESCR("exponentiation");
  DATA(insert OID = 1739 ( numeric_powerPGNSP PGUID 12 
1 0 0 0 f f f f t f i s 2 0 1700 "1700 1700" _null_ _null_ _null_ _null_ _null_ 
 numeric_power _null_ _null_ _null_ ));
+ DATA(insert OID =  ( scale  PGNSP PGUID 12 1 0 0 0 f f f f t f i 
s 1 0 23 "1700" _null_ _null_ _null_ _null_ _null_   numeric_scale _null_ 
_null_ _null_ ));
+ DESCR("returns the number of decimal digits in the fractional part");
  DATA(insert OID = 1740 ( numeric  PGNSP PGUID 12 
1 0 0 0 f f f f t f i s 1 0 1700 "23" _null_ _null_ _null_ _null_ _null_ 
int4_numeric _null_ _null_ _null_ ));
  DESCR("convert int4 to numeric");
  DATA(insert OID = 1741 ( log  PGNSP PGUID 14 
1 0 0 0 f f f f t f i s 1 0 1700 "1700" _null_ _null_ _null_ _null_ _null_ 
"select pg_catalog.log(10, $1)" _null_ _null_ _null_ ));
*** a/src/include/utils/builtins.h
--- b/src/include/utils/builtins.h
***
*** 1022,1027  extern Datum numeric_exp(PG_FUNCTION_ARGS);
--- 1022,1028 
  extern Datum numeric_ln(PG_FUNCTION_ARGS);
  extern Datum numeric_log(PG_FUNCTION_ARGS);
  extern Datum numeric_power(PG_FUNCTION_ARGS);
+ extern Datum numeric_scale(PG_FUNCTION_ARGS);
  extern Datum int4_numeric(PG_FUNCTION_ARGS);
  extern Datum numeric_int4(PG_FUNCTION_ARGS);
  extern Datum int8_numeric(PG_FUNCTION_ARGS);
*** a/src/test/regress/expected/numeric.out
--- b/src/test/regress/expected/numeric.out
***
*** 1852,1854  select log(3.1954752e47, 9.4792021e-73);
--- 1852,1911 
   
-1.51613372350688302142917386143459361608600157692779164475351842333265418126982165
  (1 row)
  
+ --
+ -- Tests for scale()
+ --
+ select scale(numeric 'NaN');
+  scale 
+ ---
+   
+ (1 row)
+ 
+ select scale(NULL::numeric);
+  scale 
+ ---
+   
+ (1 row)
+ 
+ select scale(1.12);
+  scale 
+ ---
+  2
+ (1 row)
+ 
+ select scale(0);
+  scale 
+ ---
+  0
+ (1 row)
+ 
+ select scale(0.00);
+  scale 
+ ---
+  2
+ (1 row)
+ 
+ select scale(1.12345);
+  scale 
+ ---
+  5
+ (1 row)
+ 
+ select scale(110123.12475871856128);
+  scale 
+ ---
+ 14
+ (1 row)
+ 
+ select scale(-1123.12471856128);
+  scale 
+ ---
+ 11
+ (1 row)
+ 
+ select scale(-13.000000000000000);
+  scale 
+ ---
+ 15
+ (1 row)
+ 
*** a/src/test/regress/sql/numeric.sql
--- b/src/test/regress/sql/numeric.sql
***
*** 983,985  select log(1.23e-89, 6.4689e45);
--- 983,999 
  select log(0.99923, 4.58934e34);
  select log(1.16, 8.452010e18);
  select log(3.1954752e47, 9.4792021e-73);
+ 
+ --
+ -- Tests for scale()
+ --
+ 
+ select scale(numeric 'NaN');
+ select scale(NULL::numeric);
+ select scale(1.12);
+ select scale(0);
+ select scale(0.00);
+ select scale(1.12345);
+ select scale(110123.12475871856128);
+ select scale(-1123.12471856128);
+ select scale(-13.000000000000000);



[HACKERS] proposal: LISTEN *

2015-11-18 Thread Marko Tiikkaja

Hi,

I've in the past wanted to listen on all notification channels in the 
current database for debugging purposes.  But recently I came across 
another use case.  Since having multiple postgres backends listening for 
notifications is very inefficient (one Thursday I found out 30% of our 
CPU time was spent spinning on s_locks around the notification code), it 
makes sense to implement a notification server of sorts which then 
passes on notifications from postgres to interested clients.  A server 
like this would be a lot more simple to implement if there was a way to 
announce that the client wants to see all notifications, regardless of 
the name of the channel.


Attached is a very crude proof-of-concept patch implementing this.  Any 
thoughts on the idea?



.m
*** a/src/backend/commands/async.c
--- b/src/backend/commands/async.c
***
*** 294,299  static SlruCtlData AsyncCtlData;
--- 294,304 
  static List *listenChannels = NIL;/* list of C strings */
  
  /*
+  * If listenWildcard is set, we're listening on all channels.
+  */
+ static bool listenWildcard = false;
+ 
+ /*
   * State for pending LISTEN/UNLISTEN actions consists of an ordered list of
   * all actions requested in the current transaction.  As explained above,
   * we don't actually change listenChannels until we reach transaction commit.
***
*** 306,311  static List *listenChannels = NIL; /* list of C 
strings */
--- 311,317 
  typedef enum
  {
LISTEN_LISTEN,
+   LISTEN_LISTEN_ALL,
LISTEN_UNLISTEN,
LISTEN_UNLISTEN_ALL
  } ListenActionKind;
***
*** 373,378  static void queue_listen(ListenActionKind action, const char 
*channel);
--- 379,385 
  static void Async_UnlistenOnExit(int code, Datum arg);
  static void Exec_ListenPreCommit(void);
  static void Exec_ListenCommit(const char *channel);
+ static void Exec_ListenAllCommit(void);
  static void Exec_UnlistenCommit(const char *channel);
  static void Exec_UnlistenAllCommit(void);
  static bool IsListeningOn(const char *channel);
***
*** 598,604  Async_Notify(const char *channel, const char *payload)
  
  /*
   * queue_listen
!  *Common code for listen, unlisten, unlisten all commands.
   *
   *Adds the request to the list of pending actions.
   *Actual update of the listenChannels list happens during 
transaction
--- 605,611 
  
  /*
   * queue_listen
!  *Common code for listen, listen all, unlisten, unlisten all 
commands.
   *
   *Adds the request to the list of pending actions.
   *Actual update of the listenChannels list happens during 
transaction
***
*** 613,620  queue_listen(ListenActionKind action, const char *channel)
/*
 * Unlike Async_Notify, we don't try to collapse out duplicates. It 
would
 * be too complicated to ensure we get the right interactions of
!* conflicting LISTEN/UNLISTEN/UNLISTEN_ALL, and it's unlikely that 
there
!* would be any performance benefit anyway in sane applications.
 */
oldcontext = MemoryContextSwitchTo(CurTransactionContext);
  
--- 620,627 
/*
 * Unlike Async_Notify, we don't try to collapse out duplicates. It 
would
 * be too complicated to ensure we get the right interactions of
!* conflicting LISTEN/LISTEN_ALL/UNLISTEN/UNLISTEN_ALL, and it's 
unlikely
!* that there would be any performance benefit anyway in sane 
applications.
 */
oldcontext = MemoryContextSwitchTo(CurTransactionContext);
  
***
*** 644,649  Async_Listen(const char *channel)
--- 651,671 
  }
  
  /*
+  * Async_ListenAll
+  *
+  *This is executed by the LISTEN * command.
+  */
+ void
+ Async_ListenAll(void)
+ {
+   if (Trace_notify)
+   elog(DEBUG1, "Async_ListenAll(%d)", MyProcPid);
+ 
+   queue_listen(LISTEN_LISTEN_ALL, "");
+ }
+ 
+ 
+ /*
   * Async_Unlisten
   *
   *This is executed by the SQL unlisten command.
***
*** 790,795  PreCommit_Notify(void)
--- 812,818 
switch (actrec->action)
{
case LISTEN_LISTEN:
+   case LISTEN_LISTEN_ALL:
Exec_ListenPreCommit();
break;
case LISTEN_UNLISTEN:
***
*** 895,900  AtCommit_Notify(void)
--- 918,926 
case LISTEN_LISTEN:
Exec_ListenCommit(actrec->channel);
break;
+   case LISTEN_LISTEN_ALL:
+   Exec_ListenAllCommit();
+   break;
case LISTEN_UNLISTEN:
Exec_UnlistenCommit(actrec->channel);

Re: [HACKERS] proposal: function parse_ident

2015-11-16 Thread Marko Tiikkaja

On 9/11/15 12:25 PM, Pavel Stehule wrote:

new update of parse_ident function patch


Nice!  I've certainly wanted something like this a number of times.

Some comments about the v2 of the patch:

   - The patch doesn't apply anymore, so it should be rebased.
   - The docs don't even try and explain what the "strictmode" 
parameter does.
   - The comment before the parse_ident function is not up to date 
anymore, since "the rest" was removed from the interface.
   - I can't immediately grep for any uses of  do { .. } while (true) 
from our code base.  AFAICT the majority look like  for (;;);  I see no 
reason not to be consistent here.
   - What should happen if the input is a string like 
'one.two.three.four.five.six'?  Do we want to accept input like that?
   - I haven't reviewed the actual parsing code carefully, but didn't 
we already have a function which splits identifiers up?  I of course 
can't find one with my grepping right now, so I might be wrong.



.m
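
To make the review concrete, splitting a qualified identifier while honoring double quotes and the "" escape can be sketched like this (an illustrative Python model only — not the patch's C parser, and without its strict-mode diagnostics):

```python
def parse_ident(text):
    # Split 'One."Two.Three".four' into parts; unquoted parts are
    # case-folded to lower case, quoted parts are taken verbatim and
    # "" inside quotes becomes a literal quote character.
    parts, i, n = [], 0, len(text)
    while i < n:
        if text[i] == '"':
            buf = []
            i += 1
            while i < n:
                if text[i] == '"':
                    if i + 1 < n and text[i + 1] == '"':
                        buf.append('"')   # "" -> literal quote
                        i += 2
                        continue
                    i += 1
                    break
                buf.append(text[i])
                i += 1
            parts.append(''.join(buf))
        else:
            j = text.find('.', i)
            if j == -1:
                j = n
            parts.append(text[i:j].strip().lower())
            i = j
        if i < n and text[i] == '.':
            i += 1
    return parts

print(parse_ident('One."Two.Three".four'))  # → ['one', 'Two.Three', 'four']
```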




Re: [HACKERS] psql: add \pset true/false

2015-11-12 Thread Marko Tiikkaja

On 11/12/15 1:50 PM, Michael Paquier wrote:

FWIW, I am -1 on the concept of enforcing output values for particular
datatypes because that's not the job of psql


In my view, the job of psql is to make working with a postgres database 
easy for us human beings.  That means (among other things) formatting 
and aligning query output for readability.  I don't see how reformatting 
unreadable boolean values meant for computers is that big of a stretch.



.m




[HACKERS] proposal: numeric scale functions

2015-11-11 Thread Marko Tiikkaja

Hi,

Dealing with "numeric"s right now in cases where it's really important 
that the scale is correct is quite painful.  For example, if I want to 
accept a EUR amount as an input, I often want to reject values such as 
'21.413', but I'd be fine with e.g. '21.41'.  My suggestion is to 
add two functions: one to return the number of decimal places in a 
numeric, and another one to remove non-significant decimal places. 
These two functions are useful on their own, and when combined allow you 
to look at the number of significant decimal places in a numeric.


Any thoughts, objections?
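
For concreteness, a sketch of how the two functions could behave; the names min_scale and trim_scale are placeholders for illustration, not part of the proposal:

```sql
SELECT trim_scale(21.4100);     -- 21.41 (non-significant decimals removed)
SELECT min_scale(21.4100);      -- 2 (number of significant decimal places)
-- combined, the EUR validation above becomes:
SELECT min_scale(21.413) <= 2;  -- false, so reject '21.413'
```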


.m




Re: [HACKERS] WIP: Fix parallel workers connection bug in pg_dump (Bug #13727)

2015-11-05 Thread Marko Tiikkaja

On 11/5/15 4:11 PM, Zeus Kronion wrote:

On Nov 1, 2015 5:04 PM, "Marko Tiikkaja" <ma...@joh.to> wrote:

However, I don't quite like the way the password cache is kept up to date

in the old *or* the new code.  It seems to me that it should instead look
like:


if (PQconnectionUsedPassword(AH->connection))
AH->savedPassword = PQpass(AH->connection);

What do you think?


I don't understand why this logic is preferable. Is your concern that
AH->savedPassword may contain a password even when none is needed?


The opposite, sort of.  If the first connection uses a password, the 
second one doesn't, and the third one does again, you need to ask for a 
password again because you emptied the cache on the second attempt since 
it didn't use a password.  Granted, this situation is quite unlikely to 
occur in practice, but I find the "correct" code *also* more readable. 
To me it reads like "if what we're caching was applied during the 
connection attempt, update the cache; otherwise keep the previous value 
in case it's useful in the future".



.m




Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-11-02 Thread Marko Tiikkaja

On 11/2/15 12:40 PM, Dean Rasheed wrote:

I'm not sure what you mean when you say accepting NULLs can hide bugs.
I think that if the input values to the aggregate were
1,1,1,NULL,1,1,1 then it should raise an error. ITSM that that is more
likely to reveal problems with your underlying data or the query. If
you want to ignore NULLs, you can always add a FILTER(WHERE val IS NOT
NULL) clause.


Ah, I see.  So you're arguing that the aggregate should accept NULLs as 
input, but consider them distinct from any non-NULL values.  I thought 
you meant accepting NULLs and *not* considering them distinct, which 
could easily hide problems.


In that case, I don't oppose to changing the behavior.  I'll make the 
necessary changes.



.m




Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-11-02 Thread Marko Tiikkaja

On 11/2/15 9:32 AM, Dean Rasheed wrote:

On 28 October 2015 at 16:50, Marko Tiikkaja <ma...@joh.to> wrote:

Here's a patch for the aggregate function outlined by Corey Huinker in
CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .


+1. I've wanted something like this a few times. Of the names
suggested so far, I think I prefer "single_value", and yes I think it
should work with NULLs.


This was actually a last-minute design change I made before submitting 
the patch.  The two times I've previously written this aggregate both 
accepted NULLs and only enforced that there must not be more than one 
non-NULL value, but that's only because I was thinking about the "poor 
man's FILTER" case, which is obsolete since version 9.4.  The reason I 
changed in this version is that accepting NULLs can also hide bugs, and 
it's (usually) easy to filter them out with FILTER.


Did you have some specific use case in mind where accepting NULLs would 
be beneficial?



.m




Re: [HACKERS] WIP: Fix parallel workers connection bug in pg_dump (Bug #13727)

2015-11-01 Thread Marko Tiikkaja

On 10/25/15 10:55 PM, Zeus Kronion wrote:

Parallel workers were failing to connect to the database when running
pg_dump with a connection string. The first of the following two commands
runs without errors, while the second one fails:
pg_dump "postgres://my-user:my-passw...@my.hostname.com:5432/my-db" -Fd -f
my-dump
pg_dump "postgres://my-user:my-passw...@my.hostname.com:5432/my-db" -Fd
--jobs=9 -f my-dump

The error message:
pg_dump: [parallel archiver] connection to database "my-db" failed:
fe_sendauth: no password supplied

The password is not being stored correctly in the PGconn object when
connecting with a connection string.


Yeah, the current code is definitely broken for this case.  However, I 
don't feel like this patch is quite there yet, either.  _connectDB has 
similar logic in it which might be hit in case e.g. a user's HBA is 
changed from a non-password-requiring method to a password-requiring one 
after one or more connections have been initiated.  That one needs 
changing as well.


However, I don't quite like the way the password cache is kept up to 
date in the old *or* the new code.  It seems to me that it should 
instead look like:


   if (PQconnectionUsedPassword(AH->connection))
   AH->savedPassword = PQpass(AH->connection);

What do you think?


.m




Re: [HACKERS] September 2015 Commitfest

2015-11-01 Thread Marko Tiikkaja

On 11/1/15 11:36 AM, Michael Paquier wrote:

On Sun, Nov 1, 2015 at 1:53 AM, Marko Tiikkaja <ma...@joh.to> wrote:

Are we doing these in an Australian time zone now?  It was quite unpleasant
to find that the 2015-11 is "in progress" already and two of my patches will
not be in there.  AFAIR the submission deadline used to be around UTC
midnight of the first day of the month the CF nominally begins on.


Er, well. Sorry for that... I did all this stuff on Friday evening
before leaving back for Japan with not much time on the table. I have
switched the CF back to an open status for now. And I'll switch it
back to in-progress in 24 hours. If there are patches you would like
attached to the CF app don't hesitate to ping me.


Thanks.  I managed to add the two patches just fine now.


.m




[HACKERS] COPY (INSERT/UPDATE/DELETE .. RETURNING ..)

2015-10-31 Thread Marko Tiikkaja

Hi,

Attached is a patch for being able to do $SUBJECT without a CTE.  The 
reasons this is better than a CTE version are:


  1) It's not obvious why a CTE version works but a plain one doesn't
  2) This one has less overhead (I measured a ~12% improvement on a 
not-too-unreasonable test case)


With regard to RULEs, similar restrictions apply as the ones on 
data-modifying statements in WITH.


I can't add this to November's commit fest, but I'm posting this anyway 
in case someone is thinking about implementing this feature.
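
A quick illustration of what the patch allows, using a hypothetical table:

```sql
-- without the patch, the DML has to be wrapped in a CTE:
COPY (UPDATE accounts SET frozen = true WHERE balance < 0
      RETURNING id, balance)
  TO STDOUT (FORMAT csv);
```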



.m
diff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml
index 2850b47..07e2f45 100644
--- a/doc/src/sgml/ref/copy.sgml
+++ b/doc/src/sgml/ref/copy.sgml
@@ -112,10 +112,17 @@ COPY { table_name [ ( query
 
  
-  A <xref linkend="sql-select"> or
-  <xref linkend="sql-values"> command
-  whose results are to be copied.
-  Note that parentheses are required around the query.
+  A <xref linkend="sql-select">, <xref linkend="sql-insert">,
+  <xref linkend="sql-update">, <xref linkend="sql-delete"> or
+  <xref linkend="sql-values"> command whose results are to be
+  copied.  Note that parentheses are required around the query.
+ 
+ 
+  For INSERT, UPDATE and
+  DELETE queries a RETURNING clause must be provided,
+  and the target relation must not have a conditional rule, nor
+  an ALSO rule, nor an INSTEAD rule
+  that expands to multiple statements.
  
 

diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 9a52ec6..eb2f126 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -201,7 +201,7 @@ typedef struct CopyStateData
 	int			raw_buf_len;	/* total # of bytes stored */
 } CopyStateData;
 
-/* DestReceiver for COPY (SELECT) TO */
+/* DestReceiver for COPY (query) TO */
 typedef struct
 {
 	DestReceiver pub;			/* publicly-known function pointers */
@@ -772,7 +772,8 @@ CopyLoadRawBuf(CopyState cstate)
  *
  * Either unload or reload contents of table , depending on .
  * ( = TRUE means we are inserting into the table.)  In the "TO" case
- * we also support copying the output of an arbitrary SELECT query.
+ * we also support copying the output of an arbitrary SELECT, INSERT, UPDATE
+ * or DELETE query.
  *
  * If  is false, transfer is between the table and the file named
  * .  Otherwise, transfer is between the table and our regular
@@ -1374,11 +1375,11 @@ BeginCopy(bool is_from,
 		Assert(!is_from);
 		cstate->rel = NULL;
 
-		/* Don't allow COPY w/ OIDs from a select */
+		/* Don't allow COPY w/ OIDs from a query */
 		if (cstate->oids)
 			ereport(ERROR,
 	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-	 errmsg("COPY (SELECT) WITH OIDS is not supported")));
+	 errmsg("COPY (query) WITH OIDS is not supported")));
 
 		/*
 		 * Run parse analysis and rewrite.  Note this also acquires sufficient
@@ -1393,9 +1394,36 @@ BeginCopy(bool is_from,
 		rewritten = pg_analyze_and_rewrite((Node *) copyObject(raw_query),
 		   queryString, NULL, 0);
 
-		/* We don't expect more or less than one result query */
-		if (list_length(rewritten) != 1)
-			elog(ERROR, "unexpected rewrite result");
+		/* check that we got back something we can work with */
+		if (rewritten == NIL)
+		{
+			ereport(ERROR,
+	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+	errmsg("DO INSTEAD NOTHING rules are not supported for COPY")));
+		}
+		else if (list_length(rewritten) > 1)
+		{
+			ListCell *lc;
+
+			/* examine queries to determine which error message to issue */
+			foreach(lc, rewritten)
+			{
+Query	  *q = (Query *) lfirst(lc);
+
+if (q->querySource == QSRC_QUAL_INSTEAD_RULE)
+	ereport(ERROR,
+			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+			 errmsg("conditional DO INSTEAD rules are not supported for COPY")));
+if (q->querySource == QSRC_NON_INSTEAD_RULE)
+	ereport(ERROR,
+			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+			 errmsg("DO ALSO rules are not supported for the COPY")));
+			}
+
+			ereport(ERROR,
+	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+	 errmsg("multi-statement DO INSTEAD rules are not supported for COPY")));
+		}
 
 		query = (Query *) linitial(rewritten);
 
@@ -1406,9 +1434,20 @@ BeginCopy(bool is_from,
 	(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 	 errmsg("COPY (SELECT INTO) is not supported")));
 
-		Assert(query->commandType == CMD_SELECT);
 		Assert(query->utilityStmt == NULL);
 
+		/*
+		 * Similarly the grammar doesn't enforce the presence of a RETURNING
+		 * clause, but we require it.
+		 */
+		if (query->commandType != CMD_SELECT &&
+			query->returningList == NIL)
+		{
+			ereport(ERROR,
+		(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+		 errmsg("COPY query must have a RETURNING clause")));
+		}
+
 		/* plan the query */
 		plan = pg_plan_query(query, 0, NULL);
 
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index fba91d5..d87ae93 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -2561,9 +2561,12 @@ ClosePortalStmt:
  *
  *		QUERY :
  *COPY relname [(columnList)] FROM/TO file [WITH] [(options)]
- *COPY ( 

Re: [HACKERS] September 2015 Commitfest

2015-10-31 Thread Marko Tiikkaja

On 10/31/15 12:42 AM, Michael Paquier wrote:

So, seeing nothing happening I have done the above, opened 2015-11 CF
and closed the current one.


Are we doing these in an Australian time zone now?  It was quite 
unpleasant to find that the 2015-11 is "in progress" already and two of 
my patches will not be in there.  AFAIR the submission deadline used to 
be around UTC midnight of the first day of the month the CF nominally 
begins on.



.m




Re: [HACKERS] psql: add \pset true/false

2015-10-29 Thread Marko Tiikkaja

On 10/29/15 11:51 AM, Daniel Verite wrote:

Marko Tiikkaja wrote:


Since the default t/f output for booleans is not very user friendly,
attached is a patch which enables you to do for example the following:


Personally I think it would be worth having, but how about
booleans inside ROW() or composite types ?


There's not enough information sent over to do that in the client.

Note that this works the same way as  \pset null  with  SELECT 
ROW(NULL), so I don't consider it a show stopper for the patch.



.m




[HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Marko Tiikkaja

Hi,

Here's a patch for the aggregate function outlined by Corey Huinker in 
CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I 
called it "onlyvalue", which is a horrible name, but I have nothing 
better to offer.  (Corey called it "only", but that doesn't really work 
since ONLY is a fully reserved keyword.)


I'll add this to September's commit fest, but if you want to bash me or 
the patch in the meanwhile, go ahead.



.m
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2946122..6edc220 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12631,6 +12631,26 @@ NULL baz(3 rows)
  
   

+onlyvalue
+   
+   
+ onlyvalue(expression)
+   
+  
+  
+   any type for which the equality operator has been defined
+  
+  
+   same as argument type
+  
+  returns the single distinct non-NULL value from the input
+  values; if any of the input values is NULL or more than one distinct
+  value exists, an exception is raised
+ 
+
+ 
+  
+   
 string_agg


diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index c0495d9..72bb55c 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -40,6 +40,7 @@
 #include "utils/acl.h"
 #include "utils/builtins.h"
 #include "utils/timestamp.h"
+#include "utils/typcache.h"
 
 #define atooid(x)  ((Oid) strtoul((x), NULL, 10))
 
@@ -598,3 +599,93 @@ pg_column_is_updatable(PG_FUNCTION_ARGS)
 
 	PG_RETURN_BOOL((events & REQ_EVENTS) == REQ_EVENTS);
 }
+
+struct onlyvalue_agg_stype
+{
+	FunctionCallInfoData fcinfo;
+	Datum datum;
+};
+
+Datum
+onlyvalue_agg_transfn(PG_FUNCTION_ARGS)
+{
+	Oid		 arg1_typeid = get_fn_expr_argtype(fcinfo->flinfo, 1);
+	MemoryContext aggcontext;
+	struct onlyvalue_agg_stype *state;
+
+	if (arg1_typeid == InvalidOid)
+		ereport(ERROR,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("could not determine input data type")));
+
+	if (!AggCheckCallContext(fcinfo, &aggcontext))
+	{
+		/* cannot be called directly because of internal-type argument */
+		elog(ERROR, "onlyvalue_agg_transfn called in non-aggregate context");
+	}
+
+	if (PG_ARGISNULL(1))
+	{
+		ereport(ERROR,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+		errmsg("NULL value passed to onlyvalue")));
+	}
+
+	if (PG_ARGISNULL(0))
+	{
+		TypeCacheEntry *typentry;
+
+		state = (struct onlyvalue_agg_stype *) MemoryContextAlloc(aggcontext, sizeof(struct onlyvalue_agg_stype));
+		state->datum = PG_GETARG_DATUM(1);
+
+		typentry = lookup_type_cache(arg1_typeid,
+	 TYPECACHE_EQ_OPR_FINFO);
+		if (!OidIsValid(typentry->eq_opr_finfo.fn_oid))
+			ereport(ERROR,
+	(errcode(ERRCODE_UNDEFINED_FUNCTION),
+			errmsg("could not identify an equality operator for type %s",
+   format_type_be(arg1_typeid))));
+
+		InitFunctionCallInfoData(state->fcinfo, &typentry->eq_opr_finfo, 2,
+ InvalidOid, NULL, NULL);
+	}
+	else
+	{
+		bool oprresult;
+
+		state = (struct onlyvalue_agg_stype *) PG_GETARG_POINTER(0);
+
+		state->fcinfo.argnull[0] = false;
+		state->fcinfo.argnull[1] = false;
+		state->fcinfo.arg[0] = state->datum;
+		state->fcinfo.arg[1] = PG_GETARG_DATUM(1);
+		state->fcinfo.isnull = false;
+		oprresult = DatumGetBool(FunctionCallInvoke(&state->fcinfo));
+		if (!oprresult)
+			ereport(ERROR,
+	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("more than one distinct value passed to onlyvalue")));
+	}
+
+	/*
+	 * The transition type for onlyvalue() is declared to be "internal", which
+	 * is a pass-by-value type the same size as a pointer.  So we can safely
+	 * pass the pointer through nodeAgg.c's machinations.
+	 */
+	PG_RETURN_POINTER(state);
+}
+
+Datum
+onlyvalue_agg_finalfn(PG_FUNCTION_ARGS)
+{
+	struct onlyvalue_agg_stype *state;
+
+	if (PG_ARGISNULL(0))
+		PG_RETURN_NULL();
+
+	/* cannot be called directly because of internal-type argument */
+	Assert(AggCheckCallContext(fcinfo, NULL));
+
+	state = (struct onlyvalue_agg_stype *) PG_GETARG_POINTER(0);
+	PG_RETURN_DATUM(state->datum);
+}
diff --git a/src/include/catalog/pg_aggregate.h b/src/include/catalog/pg_aggregate.h
index dd6079f..9d6c44a 100644
--- a/src/include/catalog/pg_aggregate.h
+++ b/src/include/catalog/pg_aggregate.h
@@ -292,6 +292,9 @@ DATA(insert ( 3197	n 0 json_object_agg_transfn json_object_agg_finalfn --
 DATA(insert ( 3267	n 0 jsonb_agg_transfn	jsonb_agg_finalfn			---f f 0	2281	0	0		0	_null_ _null_ ));
 DATA(insert ( 3270	n 0 jsonb_object_agg_transfn jsonb_object_agg_finalfn ---f f 0	2281	0	0		0	_null_ _null_ ));
 
+/* onlyvalue */
+DATA(insert ( 4202	n 0 onlyvalue_agg_transfn	onlyvalue_agg_finalfn	-	-	-	t f 0	2281	0	0	0	_null_ _null_ ));
+
 /* ordered-set and hypothetical-set aggregates */
 DATA(insert ( 3972	o 1 ordered_set_transition			percentile_disc_final	-		-		-		t f 0	2281	0	0		0	_null_ _null_ ));
 

Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-28 Thread Marko Tiikkaja

On 10/28/15 5:53 PM, Pavel Stehule wrote:

what is use case for this function and why it should be in core?


Corey had one example in his email, but I can offer another one which 
came up this week at $work.  The query looked something like this:


SELECT a, sum(amount), onlyvalue(rolling_count)
FROM
(
SELECT a, amount, count(*) OVER (ORDER BY a) AS rolling_count
FROM tbl
) ss
GROUP BY a;

We know that all the values for the column are going to be the same 
value for every "a", so we could use min() or max().  But the advantage 
of "onlyvalue" is that it actually checks that, so if someone went and 
changed the window frame to do something slightly different, the query 
would blow up instead of silently returning the (now likely incorrect) 
minimum or maximum value.  It's also self-documenting for the reader of 
such queries.


In my experience this problem comes up often enough that it would make 
sense to have this aggregate in core.
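
If anyone wants to play with the semantics without compiling, here's a rough SQL-level sketch; it's slower than the C implementation and the error wording is made up for illustration:

```sql
CREATE FUNCTION onlyvalue_sfunc(anyelement, anyelement) RETURNS anyelement
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    IF $2 IS NULL THEN
        RAISE 'NULL value passed to onlyvalue';
    ELSIF $1 IS NOT NULL AND $1 IS DISTINCT FROM $2 THEN
        RAISE 'more than one distinct value passed to onlyvalue';
    END IF;
    RETURN $2;  -- carry the single observed value forward as the state
END $$;

CREATE AGGREGATE onlyvalue(anyelement) (
    SFUNC = onlyvalue_sfunc,
    STYPE = anyelement
);
```

e.g. SELECT onlyvalue(x) FROM (VALUES (1),(1)) v(x) returns 1, while mixed or NULL inputs raise an error.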



.m




[HACKERS] psql: add \pset true/false

2015-10-28 Thread Marko Tiikkaja

Hello hello,

Since the default t/f output for booleans is not very user friendly, 
attached is a patch which enables you to do for example the following:


=# \pset true TRUE
Boolean TRUE display is "TRUE".
=# \pset false FALSE
Boolean FALSE display is "FALSE".
=# select true, false;
  bool | bool
--+---
  TRUE | FALSE
(1 row)


(For anyone reviewing: I didn't touch the parts of describe.c which do 
this for nullPrint:


  myopt.nullPrint = NULL;

since I thought it was dubious in the first place, and I don't think we 
output booleans in the describe commands.)




.m
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 212dbfa..2048ba3 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2229,6 +2229,26 @@ lo_import 152801
   
 
   
+  true
+  
+  
+  Sets the string to be printed in place of a boolean TRUE value.
+  The default is to print a 't' character.
+  
+  
+  
+
+  
+  false
+  
+  
+  Sets the string to be printed in place of a boolean FALSE value.
+  The default is to print an 'f' character.
+  
+  
+  
+
+  
   null
   
   
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 5163c76..7fadb0a 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1151,7 +1151,7 @@ exec_command(const char *cmd,
 			int			i;
 			static const char *const my_list[] = {
 "border", "columns", "expanded", "fieldsep", "fieldsep_zero",
-"footer", "format", "linestyle", "null",
+"footer", "format", "linestyle", "true", "false", "null",
 "numericlocale", "pager", "pager_min_lines",
 "recordsep", "recordsep_zero",
 "tableattr", "title", "tuples_only",
@@ -2591,6 +2591,26 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 		}
 	}
 
+	/* boolean TRUE display */
+	else if (strcmp(param, "true") == 0)
+	{
+		if (value)
+		{
+			free(popt->truePrint);
+			popt->truePrint = pg_strdup(value);
+		}
+	}
+
+	/* boolean FALSE display */
+	else if (strcmp(param, "false") == 0)
+	{
+		if (value)
+		{
+			free(popt->falsePrint);
+			popt->falsePrint = pg_strdup(value);
+		}
+	}
+
 	/* field separator for unaligned text */
 	else if (strcmp(param, "fieldsep") == 0)
 	{
@@ -2782,6 +2802,20 @@ printPsetInfo(const char *param, struct printQueryOpt *popt)
 			   popt->nullPrint ? popt->nullPrint : "");
 	}
 
+	/* show true display */
+	else if (strcmp(param, "true") == 0)
+	{
+		printf(_("Boolean TRUE display is \"%s\".\n"),
+			   popt->truePrint ? popt->truePrint : "t");
+	}
+
+	/* show false display */
+	else if (strcmp(param, "false") == 0)
+	{
+		printf(_("Boolean FALSE display is \"%s\".\n"),
+			   popt->falsePrint ? popt->falsePrint : "f");
+	}
+
 	/* show locale-aware numeric output */
 	else if (strcmp(param, "numericlocale") == 0)
 	{
@@ -2953,6 +2987,14 @@ pset_value_string(const char *param, struct printQueryOpt *popt)
 		return psprintf("%s", _align2string(popt->topt.format));
 	else if (strcmp(param, "linestyle") == 0)
 		return psprintf("%s", get_line_style(>topt)->name);
+	else if (strcmp(param, "true") == 0)
+		return pset_quoted_string(popt->truePrint
+  ? popt->truePrint
+  : "t");
+	else if (strcmp(param, "false") == 0)
+		return pset_quoted_string(popt->falsePrint
+  ? popt->falsePrint
+  : "f");
 	else if (strcmp(param, "null") == 0)
 		return pset_quoted_string(popt->nullPrint
   ? popt->nullPrint
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 5b63e76..d0bf418 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -258,7 +258,7 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\H toggle HTML output mode (currently %s)\n"),
 			ON(pset.popt.topt.format == PRINT_HTML));
 	fprintf(output, _("  \\pset [NAME [VALUE]]   set table output option\n"
-	  " (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|null|\n"
+	  " (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|true|false|null|\n"
 	  " numericlocale|recordsep|recordsep_zero|tuples_only|title|tableattr|pager|\n"
 	  " unicode_border_linestyle|unicode_column_linestyle|unicode_header_linestyle})\n"));
 	fprintf(output, _("  \\t [on|off]show only rows (currently %s)\n"),
@@ -371,6 +371,8 @@ helpVariables(unsigned short int pager)
 	fprintf(output, _("  format set output format [unaligned, aligned, wrapped, html, asciidoc, ...]\n"));
 	fprintf(output, _("  footer enable or disable display of the table footer [on, off]\n"));
 	fprintf(output, _("  linestyle  set the border line drawing style [ascii, old-ascii, unicode]\n"));
+	fprintf(output, _("  true   

Re: [HACKERS] psql: add \pset true/false

2015-10-28 Thread Marko Tiikkaja

On 10/28/15 11:52 PM, Robert Haas wrote:

-0 on this concept from me.  I'm not going to vigorously oppose it, but:

1. You can always do it in the query if you really want it.


True, but not always practical.


2. If you're the sort of person liable to be confused by t/f, you
probably aren't in the target audience for psql anyway.


Really?  The difference between t/f is that the vertical squiggle is 
flipped, THAT'S IT.  Consider:


t t f f f
f t f t f

Saying that I'm not the target audience for not being able to see WTF is 
going on above I find offensive.



3. I really don't want to end up with a bunch of features of this type
for a variety of different data types.


Fine.  Then let's not add it for a different variety of data types.  But 
boolean is quite essential and it has a really small number of valid 
output values.



.m




Re: [HACKERS] Questionable behavior regarding aliasing

2015-10-09 Thread Marko Tiikkaja

On 2015-10-09 10:31 PM, Jim Nasby wrote:

I was about to report this as a bug until Marko Tiikkaja pointed out on
IRC that now was being treated as an alias for relname.

I'm not sure if this is required by the spec, but can we at least emit a
WARNING if not reject this case outright? I think it'd be OK to leave
the AS syntax alone... presumably it's a lot harder to fumble that. But
I'm not wed to that.


I'd be happy to turn on a GUC disabling this misfeature.  It's only ever 
brought pain and unhappiness to me and my family.



.m




[HACKERS] ALTER TABLE behind-the-scenes effects' CONTEXT

2015-10-04 Thread Marko Tiikkaja

Hi,

In the past I've found the error message in cases such as this somewhat 
less helpful than it could be:


=# CREATE TABLE qqq (a int);
=# CREATE UNIQUE INDEX IF NOT EXISTS qqq_a_idx ON qqq(a);
=# ALTER TABLE qqq ALTER COLUMN a TYPE json USING NULL;
ERROR:  data type json has no default operator class for access method 
"btree"
HINT:  You must specify an operator class for the index or define a 
default operator class for the data type.


The attached patch adds a CONTEXT line to index and constraint rebuilds, 
e.g:


  CONTEXT:  while rebuilding index qqq_a_idx

Any feedback welcome.


.m
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 82,87 
--- 82,88 
  #include "storage/smgr.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
+ #include "utils/elog.h"
  #include "utils/fmgroids.h"
  #include "utils/inval.h"
  #include "utils/lsyscache.h"
***
*** 309,314  static void ATController(AlterTableStmt *parsetree,
--- 310,316 
 Relation rel, List *cmds, bool recurse, LOCKMODE 
lockmode);
  static void ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
  bool recurse, bool recursing, LOCKMODE lockmode);
+ static void ATRewriteSubcommandErrorCallback(void *arg);
  static void ATRewriteCatalogs(List **wqueue, LOCKMODE lockmode);
  static void ATExecCmd(List **wqueue, AlteredTableInfo *tab, Relation rel,
  AlterTableCmd *cmd, LOCKMODE lockmode);
***
*** 3373,3378  ATPrepCmd(List **wqueue, Relation rel, AlterTableCmd *cmd,
--- 3375,3399 
tab->subcmds[pass] = lappend(tab->subcmds[pass], cmd);
  }
  
+ static void
+ ATRewriteSubcommandErrorCallback(void *arg)
+ {
+   AlterTableCmd *subcmd = (AlterTableCmd *) arg;
+   
+   switch (subcmd->subtype)
+   {
+   case AT_ReAddIndex:
+   errcontext("while rebuilding index %s", subcmd->name);
+   break;
+   case AT_ReAddConstraint:
+   errcontext("while rebuilding constraint %s", 
subcmd->name);
+   break;
+   default:
+   /* keep compiler quiet */
+   (void) 0;
+   }
+ }
+ 
  /*
   * ATRewriteCatalogs
   *
***
*** 3402,3407  ATRewriteCatalogs(List **wqueue, LOCKMODE lockmode)
--- 3423,3429 
List   *subcmds = tab->subcmds[pass];
Relationrel;
ListCell   *lcmd;
+   ErrorContextCallback errcallback;
  
if (subcmds == NIL)
continue;
***
*** 3411,3418  ATRewriteCatalogs(List **wqueue, LOCKMODE lockmode)
 */
rel = relation_open(tab->relid, NoLock);
  
foreach(lcmd, subcmds)
!   ATExecCmd(wqueue, tab, rel, (AlterTableCmd *) 
lfirst(lcmd), lockmode);
  
/*
 * After the ALTER TYPE pass, do cleanup work (this is 
not done in
--- 3433,3451 
 */
rel = relation_open(tab->relid, NoLock);
  
+   errcallback.callback = ATRewriteSubcommandErrorCallback;
+   errcallback.previous = error_context_stack;
+   error_context_stack = &errcallback;
+ 
foreach(lcmd, subcmds)
!   {
!   AlterTableCmd *subcmd = (AlterTableCmd *) 
lfirst(lcmd);
! 
!   errcallback.arg = subcmd;
!   ATExecCmd(wqueue, tab, rel, subcmd, lockmode);
!   }
! 
!   error_context_stack = errcallback.previous;
  
/*
 * After the ALTER TYPE pass, do cleanup work (this is 
not done in
***
*** 8682,8687  ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, 
char *cmd,
--- 8715,8721 
  
newcmd = makeNode(AlterTableCmd);
newcmd->subtype = AT_ReAddIndex;
+   newcmd->name = stmt->idxname;
newcmd->def = (Node *) stmt;
tab->subcmds[AT_PASS_OLD_INDEX] =
lappend(tab->subcmds[AT_PASS_OLD_INDEX], 
newcmd);
***
*** 8712,8717  ATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, 
char *cmd,
--- 8746,8752 

 RelationRelationId, 0);
  
cmd->subtype = AT_ReAddIndex;
+   cmd->name = indstmt->idxname;
tab->subcmds[AT_PASS_OLD_INDEX] =
 
