Re: [HACKERS] generated columns

2017-09-09 Thread Jaime Casanova
On 30 August 2017 at 23:16, Peter Eisentraut
 wrote:
> Here is another attempt to implement generated columns.  This is a
> well-known SQL-standard feature, also available for instance in DB2,
> MySQL, Oracle.
>
[...]
>
> In previous discussions, it has often been a source of confusion whether
> these generated columns are supposed to be computed on insert/update and
> stored, or computed when read.  The SQL standard is not explicit, but
> appears to lean toward stored.  DB2 stores.  Oracle computes on read.
> MySQL supports both.  So I target implementing both.  This makes sense:
> Both regular views and materialized views have their uses, too.  For the
> syntax, I use the MySQL/Oracle syntax of appending [VIRTUAL|STORED].  In
> this patch, only VIRTUAL is fully implemented.  I also have STORED kind
> of working, but it wasn't fully baked, so I haven't included it here.
>

Hi,

The patch applies and compiles without problems, passes regression
tests, and does what it claims to do.

During my own tests, though, I found some problems:
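(For context, the table in the tests below was presumably created along
these lines, using the patch's proposed syntax; this is a reconstruction,
not the exact test case:)

```sql
CREATE TABLE t1 (
    id        serial,
    height_cm numeric,
    -- generated column using the patch's proposed VIRTUAL variant
    height_in numeric GENERATED ALWAYS AS (height_cm * 2.54) VIRTUAL
);
```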

-- UPDATEing the column; this is at least weird

postgres=# update t1 set height_in = 15;
ERROR:  column "height_in" can only be updated to DEFAULT
DETAIL:  Column "height_in" is a generated column.
postgres=# update t1 set height_in = default;
UPDATE 1


-- In a view it doesn't show any value

postgres=# create view v1 as select * from t1;
CREATE VIEW
postgres=# insert into t1(height_cm) values (10);
INSERT 0 1
postgres=# select * from t1;
   id   | height_cm | height_in
--------+-----------+-----------
 198000 |        10 |     25.40
(1 row)

postgres=# select * from v1;
   id   | height_cm | height_in
--------+-----------+-----------
 198000 |        10 |
(1 row)


-- In an inheritance/partition tree, the default gets malformed

postgres=# create table t1_1 () inherits (t1);
CREATE TABLE
postgres=# \d t1_1
                             Table "public.t1_1"
  Column   |  Type   | Collation | Nullable |            Default
-----------+---------+-----------+----------+--------------------------------
 id        | integer |           | not null | nextval('t1_id_seq'::regclass)
 height_cm | numeric |           |          |
 height_in | numeric |           |          | height_cm * 2.54
Inherits: t1

postgres=# insert into t1_1 values (11);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_basebackup fails on Windows when using tablespace mapping

2017-09-09 Thread Michael Paquier
On Sun, Sep 10, 2017 at 12:28 PM, Ashutosh Sharma  wrote:
> On Tue, Jun 27, 2017 at 6:36 PM, Michael Paquier
>  wrote:
>> On Tue, Jun 27, 2017 at 7:46 PM, Ashutosh Sharma  
>> wrote:
>>> I am still seeing the issue with the attached patch. I had a quick
>>> look into the patch. It seems to me like you have canonicalized the
>>> tablespace path to convert win32 slashes to unix type of slashes but
>>> that is not being passed to strcmp() function and probably that could
>>> be the reason why the issue is still existing. Thanks.
>>>
>>> for (cell = tablespace_dirs.head; cell; cell = cell->next)
>>> -   if (strcmp(dir, cell->old_dir) == 0)
>>> +   if (strcmp(canon_dir, cell->old_dir) == 0)
>>
>> Thanks. I had the correct version on my Windows box actually, just
>> messed up the attachment.
>> --
>
> Okay. I have once again reviewed your patch and tested it on Windows
> as well as Linux. The patch LGTM. I am now marking it as Ready For
> Committer. Thanks.

Thanks for the review, Ashutosh.
-- 
Michael




Re: [HACKERS] pg_basebackup fails on Windows when using tablespace mapping

2017-09-09 Thread Ashutosh Sharma
On Tue, Jun 27, 2017 at 6:36 PM, Michael Paquier
 wrote:
> On Tue, Jun 27, 2017 at 7:46 PM, Ashutosh Sharma  
> wrote:
>> I am still seeing the issue with the attached patch. I had a quick
>> look into the patch. It seems to me like you have canonicalized the
>> tablespace path to convert win32 slashes to unix type of slashes but
>> that is not being passed to strcmp() function and probably that could
>> be the reason why the issue is still existing. Thanks.
>>
>> for (cell = tablespace_dirs.head; cell; cell = cell->next)
>> -   if (strcmp(dir, cell->old_dir) == 0)
>> +   if (strcmp(canon_dir, cell->old_dir) == 0)
>
> Thanks. I had the correct version on my Windows box actually, just
> messed up the attachment.
> --

Okay. I have once again reviewed your patch and tested it on Windows
as well as Linux. The patch LGTM. I am now marking it as Ready For
Committer. Thanks.

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com




Re: [HACKERS] UPDATE of partition key

2017-09-09 Thread Amit Kapila
On Fri, Sep 8, 2017 at 4:51 PM, amul sul  wrote:
> On Thu, May 18, 2017 at 9:13 AM, Amit Kapila 
> wrote:
>>
>>  On Wed, May 17, 2017 at 5:17 PM, Robert Haas 
>> wrote:
>> > On Wed, May 17, 2017 at 6:29 AM, Amit Kapila 
>> > wrote:
>> >> I think we can do this even without using an additional infomask bit.
>> >> As suggested by Greg up thread, we can set InvalidBlockId in ctid to
>> >> indicate such an update.
>> >
>> > Hmm.  How would that work?
>> >
>>
>> We can pass a flag say row_moved (or require_row_movement) to
>> heap_delete which will in turn set InvalidBlockId in ctid instead of
>> setting it to self. Then the ExecUpdate needs to check for the same
>> and return an error when heap_update is not successful (result !=
>> HeapTupleMayBeUpdated).  Can you explain what difficulty are you
>> envisioning?
>>
>
> Attaching WIP patch incorporates the above logic, although I am yet to check
> all the code for places which might be using ip_blkid.  I have got a small
> query here,
> do we need an error on HeapTupleSelfUpdated case as well?
>

No, because that case is anyway a no-op (or an error, depending on
whether the tuple is updated/deleted by the same command or a later
command).  Basically, even if the row wouldn't have been moved to
another partition, we would not have allowed the command to proceed
with the update.  This handling exists to make the command fail,
rather than be a no-op, where otherwise (when the tuple is not moved
to another partition) the command would have succeeded.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] Red-black trees: why would anyone want preorder or postorder traversal?

2017-09-09 Thread Tom Lane
There is quite a bit of code in src/backend/lib/rbtree.c that is currently
dead according to the code coverage report, but we're hanging onto it
with the thought that somebody might use it later.  That's fine as long as
there is a plausible use-case for it ... but I have to wonder what is the
argument that anyone would ever want pre-order or post-order traversal of
one of these trees (ie, the DirectWalk or InvertedWalk options to
rb_begin_iterate).  Those orderings give semantic meaning to purely
accidental aspects of the tree shape, such as which specific node happens
to be the root at the moment.  You could maybe argue for using them if
you didn't care about the visitation order and just wanted the cheapest,
quickest way of visiting all the nodes --- but these methods are not
cheaper, or simpler, than the left-to-right or right-to-left tree walks.
In fact, the InvertedWalk logic requires its very own extra field in
the iterator state.

Also, the reason I noticed this in the first place is that
I was working on Victor Drobny's submission in the current CF
https://commitfest.postgresql.org/14/1225/
to add a test harness for rbtree.c.  That harness currently expends
much more code, and many more cycles, on testing preorder/postorder
traversal than anything else.  It also has to make several assumptions
about the implementation of red-black trees that I would rather it
didn't.

In short, therefore, I propose we rip out the DirectWalk and InvertedWalk
options along with their support code, and then drop the portions of
test_rbtree that are needed to exercise them.  Any objections?

regards, tom lane




Re: [HACKERS] Red-Black tree traversal tests

2017-09-09 Thread Tom Lane
Victor Drobny  writes:
> On 2017-09-08 15:23, Thomas Munro wrote:
>> It's trying to call rb->combiner which is null.

> Thank you for pointing it out. Here is a fixed version.

Actually, I think we *do* want the tests to call the combiner
occasionally ...

I whacked this around quite a bit and had gotten it to a state
that I thought was committable, when it occurred to me to wonder
why exactly we're going to this much effort to test the preorder
and postorder traversal logic.  I'm inclined to think we should
rip that out, instead, because I can think of no reason that
anyone would ever want to use it in Postgres.

I'll make that argument in a separate thread, so it gets a little
more visibility in the pgsql-hackers firehose.

In the meantime, here's my version.  Notable changes:

* Got rid of separate .h file; seemed pointless, especially
  since the combiner function needs to know what the test
  expectations are.
* Corrected the random-permutation algorithm.
* Made the preorder/postorder check logic more paranoid
  (though I now feel that was a waste of effort).
* Improved English grammar in a lot of comments.
* Added some more test cases; code coverage report shows
  that this exercises every non-error-case line in rbtree.c.
* Added an rbtree "rb_root()" function to avoid the unsafe
  casting the previous version did to get the root pointer.
* Removed the assumption that the nil element is unique;
  now it just knows that a leaf element points to itself.

We could dispense with rb_root(), as well as the test code's knowledge
about RBNIL representation, if we dropped the preorder/postorder cases
... which is another good reason to do so.

regards, tom lane

diff --git a/src/backend/lib/rbtree.c b/src/backend/lib/rbtree.c
index 3d80090..0ae5bc8 100644
*** a/src/backend/lib/rbtree.c
--- b/src/backend/lib/rbtree.c
*** rb_leftmost(RBTree *rb)
*** 195,200 
--- 195,215 
  	return NULL;
  }
  
+ /*
+  * rb_root: fetch the root node.
+  * Returns RBNIL if tree is empty.
+  *
+  * This function has no great use in normal operation, since there's no
+  * particular semantic meaning to the current root node.  It's useful
+  * for testing purposes, though.
+  */
+ RBNode *
+ rb_root(RBTree *rb)
+ {
+ 	return rb->root;
+ }
+ 
+ 
  /**
   *			  Insertion  *
   **/
diff --git a/src/include/lib/rbtree.h b/src/include/lib/rbtree.h
index a7183bb..1d352a3 100644
*** a/src/include/lib/rbtree.h
--- b/src/include/lib/rbtree.h
*** extern RBTree *rb_create(Size node_size,
*** 71,76 
--- 71,77 
  
  extern RBNode *rb_find(RBTree *rb, const RBNode *data);
  extern RBNode *rb_leftmost(RBTree *rb);
+ extern RBNode *rb_root(RBTree *rb);
  
  extern RBNode *rb_insert(RBTree *rb, const RBNode *data, bool *isNew);
  extern void rb_delete(RBTree *rb, RBNode *node);
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 3ce9904..b7ed0af 100644
*** a/src/test/modules/Makefile
--- b/src/test/modules/Makefile
*** SUBDIRS = \
*** 13,18 
--- 13,19 
  		  test_extensions \
  		  test_parser \
  		  test_pg_dump \
+ 		  test_rbtree \
  		  test_rls_hooks \
  		  test_shm_mq \
  		  worker_spi
diff --git a/src/test/modules/test_rbtree/.gitignore b/src/test/modules/test_rbtree/.gitignore
index ...5dcb3ff .
*** a/src/test/modules/test_rbtree/.gitignore
--- b/src/test/modules/test_rbtree/.gitignore
***
*** 0 
--- 1,4 
+ # Generated subdirectories
+ /log/
+ /results/
+ /tmp_check/
diff --git a/src/test/modules/test_rbtree/Makefile b/src/test/modules/test_rbtree/Makefile
index ...a4184b4 .
*** a/src/test/modules/test_rbtree/Makefile
--- b/src/test/modules/test_rbtree/Makefile
***
*** 0 
--- 1,21 
+ # src/test/modules/test_rbtree/Makefile
+ 
+ MODULE_big = test_rbtree
+ OBJS = test_rbtree.o $(WIN32RES)
+ PGFILEDESC = "test_rbtree - test code for red-black tree library"
+ 
+ EXTENSION = test_rbtree
+ DATA = test_rbtree--1.0.sql
+ 
+ REGRESS = test_rbtree
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/test/modules/test_rbtree
+ top_builddir = ../../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/contrib/contrib-global.mk
+ endif
diff --git a/src/test/modules/test_rbtree/README b/src/test/modules/test_rbtree/README
index ...fe63ce8 .
*** a/src/test/modules/test_rbtree/README
--- b/src/test/modules/test_rbtree/README
***
*** 0 
--- 1,28 
+ test_rbtree is a test module for checking the correctness of red-black
+ tree operations.
+ 
+ These tests are performed on red-black trees that store integers.
+ Since the rbtree logic treats the comparison function as a black
+ box, it shouldn't be important exactly what the key type is.

Re: [HACKERS] WIP: Separate log file for extension

2017-09-09 Thread Tomas Vondra


On 08/28/2017 11:23 AM, Antonin Houska wrote:
> Craig Ringer  wrote:
> 
>> On 25 August 2017 at 15:12, Antonin Houska  wrote:
>>
>> How will this play with syslog? csvlog? etc?
> 
> There's nothing special about csvlog: the LogStream structure has a
> "destination" field, so if particular extension wants this kind of output, it
> simply sets the LOG_DESTINATION_CSVLOG bit here.
> 

I assume Craig's point was more about CSVLOG requiring logging_collector=on.
So what will happen if PostgreSQL is started without the collector,
and an extension attempts to use LOG_DESTINATION_CSVLOG? Or will it
start a separate collector for each such extension?

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] segment size depending *_wal_size defaults (was increasing the default WAL segment size)

2017-09-09 Thread Tomas Vondra


On 08/30/2017 03:16 AM, Andres Freund wrote:
> On 2017-08-30 10:14:22 +0900, Michael Paquier wrote:
>> On Wed, Aug 30, 2017 at 10:06 AM, Andres Freund  wrote:
>>> On 2017-08-30 09:49:14 +0900, Michael Paquier wrote:
>>>> Do you think that we should worry about wal segment sizes higher than
>>>> 2GB? Support for int64 GUCs is not here yet.
>>>
>>> 1GB will be the limit anyway.
>>
>> Yeah, but imagine that we'd want to raise that even more up.
> 
> I'm doubtful of that. But even if we did, it'd not be hard to add GUC support.
> 

It's not hard - it's just a lot of copy-pasting of infrastructure code.
Incidentally, I already have a patch doing that, as we had to add that
support to XL, and I can submit it to PostgreSQL. Might save some boring
coding.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PATCH] Generic type subscripting

2017-09-09 Thread Arthur Zakirov
On Thu, Sep 07, 2017 at 10:49:54PM +0200, Dmitry Dolgov wrote:
> On 29 August 2017 at 22:42, Dmitry Dolgov <9erthali...@gmail.com> wrote:
> >
> > To make a review little bit easier I've divided the patch into a few
> smaller parts.
> 
> Apparently I forgot about subscripting for the name data type, so here is a
> small update of the patch.

Thank you for rebasing the patch!

PostgreSQL and the documentation compile with the patch without any
errors. All regression tests passed.

But honestly I still cannot say that I agree with the way the
*_extract() and *_assign() functions are created. For example, there is
no pg_depend entry for them (linking them to the pg_type entry).

Because there is no such entry, there is the following bug:

1 - make and install src/tutorial
2 - run src/tutorial/subscripting.sql
3 - run:

=# drop function custom_subscripting_extract(internal);

4 - and we get the error:

=# select data[0] from test_subscripting;
ERROR:  function 0x55deb7911bfd returned NULL

But of course it is only my opinion and I could be wrong.

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] Still another race condition in recovery TAP tests

2017-09-09 Thread Tom Lane
Michael Paquier  writes:
> On Sat, Sep 9, 2017 at 1:43 PM, Tom Lane  wrote:
>> Yeah, even if we fixed this particular call site, I'm sure the issue
>> would come up again.  Certainly we expect hot backups to work with
>> a changing source directory.

> In short, I'd still like to keep RecursiveCopy for now, but change its
> code so that a failed copy() is not a hard failure. What do you think?

The specific case we need to allow is "ENOENT on a file/dir that was
there a moment ago".  I think it still behooves us to complain about
anything else.  If you think it's a simple fix, have at it.  But
I see at least three ways for _copypath_recurse to fail depending on
exactly when the file disappears.
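The kind of tolerance being discussed might look roughly like this (a
hypothetical Perl sketch, not RecursiveCopy's actual code; the helper
name is made up):

```perl
use strict;
use warnings;
use Errno;
use File::Copy;

# Hypothetical helper: copy $src to $dest, but treat "the file vanished
# between the directory scan and the copy" as benign.  Anything else is
# still a hard failure, per the policy above.
sub copy_tolerating_vanish
{
	my ($src, $dest) = @_;

	return 1 if copy($src, $dest);

	# ENOENT is acceptable only if the source really is gone now.
	die "copy of $src failed: $!" unless $!{ENOENT} && !-e $src;
	return 0;
}
```

Note the double check: an ENOENT for a source file that still exists
would indicate some other problem (e.g. a bad destination path) and
should still die.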

regards, tom lane




Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-09 Thread Tom Lane
Michael Paquier  writes:
> On Fri, Sep 8, 2017 at 5:24 AM, Tom Lane  wrote:
>> In short, this patch needs a significant rewrite, and more analysis than
>> you've done so far on whether there's actually any benefit to be gained.
>> It might not be worth messing with.

> I did some measurements of the compressibility of the GIN meta page,
> looking at its FPWs with and without wal_compression and you are
> right: there is no direct compressibility effect when setting pd_lower
> on the meta page. However, it seems to me that there is an argument
> still pleading in favor of this patch for wal_consistency_checking.

I think that would be true if we did both my point 1 and 2, so that
the wal replay functions could trust pd_lower to be sane in all cases.
But really, if you have to touch all the places that write these
metapages, you might as well mark them REGBUF_STANDARD while at it.

> The same comment ought to be mentioned for btree.

Yeah, I was wondering if we ought not clean up btree/hash while at it.
At the very least, their existing comments saying that it's inessential
to set pd_lower could use some more detail about why or why not.

regards, tom lane




Re: [HACKERS] The case for removing replacement selection sort

2017-09-09 Thread Greg Stark
On 8 September 2017 at 18:06, Peter Geoghegan  wrote:

> * It's still faster with int4/int8 comparisons on modern hardware, and
> I think that most ordered inputs will be of those types in practice.

This may be a bit "how long is a piece of string", but how do those two
compare with string sorting in an interesting encoding/locale -- say
/usr/share/dict/polish in pl_PL, for example? It's certainly true that
people do sort text as well as numbers. Also, people often sort on
keys of more than one column.

-- 
greg




Re: [HACKERS] psql: new help related to variables are not too readable

2017-09-09 Thread Daniel Verite
Tomas Vondra wrote:

> That's not what I meant. I've never felt a strong urge to avoid wrapping
> at 80 chars when translating strings - simply translate in the most
> meaningful way, if it gets longer than 80 chars and can't be easily
> shortened, just wrap. If there are 60 or 80 characters does not change
> this much - 80 chars may allow more unwrapped strings, of course, but
> it's a minor change for the translators. Or at least me, others may
> disagree, of course.

The difficulty varies across languages since some are more compact
than others, and the choice of wrapping or not is not
consistent across translations.

As an example, in the previous version, the first variable comes
out as:

en
  AUTOCOMMIT if set, successful SQL commands are automatically
committed

de
  AUTOCOMMIT wenn gesetzt werden alle erfolgreichen SQL-Befehle
 automatisch committet
es
  AUTOCOMMIT si está definida, órdenes SQL exitosas se comprometen
 automáticamente
fr
  AUTOCOMMIT si activé, les commandes SQL réussies sont
automatiquement validées

he
  AUTOCOMMIT אם הופעל, פקודות SQL מחויבות באופן אוטומטי

it
  AUTOCOMMIT se impostato, i comandi SQL riusciti sono salvati
automaticamente

ko
  AUTOCOMMIT 지정 하면 SQL 명령이 성공하면 자동으로 커밋

pl
  AUTOCOMMIT gdy ustawiony, polecenia SQL zakończone powodzeniem są
automatycznie zatwierdzone

pt_BR
  AUTOCOMMIT se definido, comandos SQL bem sucedidos são
automaticamente efetivados

ru
  AUTOCOMMIT если установлен, успешные SQL-команды фиксируются
автоматически

sv
  AUTOCOMMIT om satt, efterföljande SQL-kommandon commit:as
automatiskt

zh_CN
  AUTOCOMMIT 如果被设置,成功的SQL命令将会被自动提交

The original line in English has exactly 80 characters.
When translated into other Latin-script languages, it tends to be a bit
longer.

German and Spanish translators have taken the trouble
to break the message into two lines of less than
80 chars with the second one nicely indented, also
aligned in the .po file:

msgid "  AUTOCOMMIT if set, successful SQL commands are automatically
committed\n"
msgstr ""
"  AUTOCOMMIT si está definida, órdenes SQL exitosas se
comprometen\n"
" automáticamente\n"


But other translations are not necessarily done this way, or
at least not consistently. If only for that, having more
characters in the description before screen wrap should help
a bit. In the above example, I think with the new presentation
only the Polish version wouldn't fit in 80 chars without
wrapping.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-09 Thread Michael Paquier
On Sat, Sep 9, 2017 at 2:05 AM, Bossart, Nathan  wrote:
> On 9/8/17, 1:27 AM, "Michael Paquier"  wrote:
>> Thanks. This looks now correct to me. Except that:
>> +   ereport(ERROR,
>> +   (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
>> +errmsg("column lists cannot have duplicate entries"),
>> +errhint("the column list specified for relation
>> \"%s\" contains duplicates",
>> +   relation->relation->relname)));
>> This should use ERRCODE_DUPLICATE_COLUMN.
>
> Absolutely.  This is fixed in v3.

In the duplicate patch, it seems to me that you can save one lookup of
the list of VacuumRelation items by checking for column duplicates
after checking that all the columns are defined. If you put the
duplicate check before closing the relation you can also use the
schema name associated with the Relation.

+   if (i == InvalidAttrNumber)
+   ereport(ERROR,
+   (errcode(ERRCODE_UNDEFINED_COLUMN),
+errmsg("column \"%s\" of relation \"%s\" does not exist",
+   col, RelationGetRelationName(rel))));
This could use the schema name unconditionally as you hold a Relation
here, using RelationGetNamespace().

if (!onerel)
+   {
+   /*
+* If one of the relations specified by the user has disappeared
+* since we last looked it up, let them know so that they do not
+* think it was actually analyzed.
+*/
+   if (!IsAutoVacuumWorkerProcess() && relation)
+   ereport(WARNING,
+ (errmsg("skipping \"%s\" --- relation no longer exists",
+ relation->relname)));
+
return;
+   }
Hm. So if the relation with the defined OID is not found, then we'd
use the RangeVar, but this one may not be set here:
+   /*
+* It is safe to leave everything except the OID empty here.
+* Since no tables were specified in the VacuumStmt, we know
+* we don't have any columns to keep track of.  Also, we do
+* not need the RangeVar, because it is only used for error
+* messaging when specific relations are chosen.
+*/
+   rel_oid = HeapTupleGetOid(tuple);
+   relinfo = makeVacuumRelation(NULL, NIL, rel_oid);
+   vacrels_tmp = lappend(vacrels_tmp, relinfo);
So if the relation is analyzed but skipped, we would have no idea that
it actually got skipped because there are no reports about it. That's
not really user-friendly. I am wondering if we should not instead have
analyze_rel also enforce the presence of a RangeVar, add an
assert at the beginning of the function to underline that, and also
do the same for vacuum(). It would also make things consistent with
vacuum(), which now implies on HEAD that a RangeVar *must* be
specified.

Sorry for noticing that just now, I am switching the patch back to
waiting on author.

Are there opinions about back-patching the patch checking for
duplicate columns? Stable branches now complain about an unhelpful
error message.
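For readers following along, the feature under review extends the
VACUUM/ANALYZE grammar roughly like this (a sketch based on the thread,
not the committed syntax; table names are illustrative):

```sql
-- several relations in one command, each with an optional column list
VACUUM ANALYZE pgbench_accounts (aid, abalance), pgbench_tellers;

-- with the duplicate-column patch, a list such as this is rejected:
ANALYZE some_table (a, a);
-- ERROR:  column lists cannot have duplicate entries
```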
-- 
Michael




Re: [HACKERS] psql: new help related to variables are not too readable

2017-09-09 Thread Tomas Vondra
On 09/09/2017 01:24 AM, Tom Lane wrote:
> Tomas Vondra  writes:
>> The translator has exactly the same context in both cases, and the
>> struggle to wrap it at 80 characters will be pretty much the same too.
> 
> Really?  With the old way, you had something under 60 characters to
> work in, now it's nearly 80.  I don't buy that that's not a significant
> difference.  It's also much less ugly if you decide you need one more
> line than the English version uses.
> 

That's not what I meant. I've never felt a strong urge to avoid wrapping
at 80 chars when translating strings - simply translate in the most
meaningful way, if it gets longer than 80 chars and can't be easily
shortened, just wrap. If there are 60 or 80 characters does not change
this much - 80 chars may allow more unwrapped strings, of course, but
it's a minor change for the translators. Or at least me, others may
disagree, of course.

What I find way more annoying are strings where it's not clear where to
wrap, because it gets prefixed by something, we insert a value while
formatting the error message, etc. But that's not the case here, as both

  _(" "
"  ")

and

  _(" ")

give you the whole string.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PG 10 release notes

2017-09-09 Thread Adrien Nayrat
On 07/13/2017 04:36 PM, Adrien Nayrat wrote:
> Hello hackers,
> 
> From: Peter Geoghegan 
>> Date: Wed, 5 Jul 2017 15:19:57 -0700
>> Subject: Re: [BUGS] BUG #14722: Segfault in tuplesort_heap_siftup, 32 bit 
>> overflow
>> On pgsql-b...@postgresql.org
> 
> On 07/06/2017 12:19 AM, Peter Geoghegan wrote:
>> In Postgres 10, tuplesort external sort run merging became much faster
>> following commit 24598337c8d. It might be noticeable if such a machine
>> were using Postgres 10 [...]
> 
> Should we mention this improvement in the release notes?
> 
> Regards,
> 

Hello,

After seeing these slides (especially slide 52):
https://speakerdeck.com/peterg/sort-hash-pgconfus-2017

I noticed several commits that improve the performance of hash tables.
The commit messages mention performance improvements for bitmap scans,
hash aggregation and (according to Peter Geoghegan's talk) hash joins:



commit b30d3ea824c5ccba43d3e942704f20686e7dbab8
Author: Andres Freund 
Date:   Fri Oct 14 16:05:30 2016 -0700

Add a macro templatized hashtable.
[...]
In queries where these use up a large fraction of the time, this
has been measured to lead to performance improvements of over 100%.



commit 75ae538bc3168bf44475240d4e0487ee2f3bb376
Author: Andres Freund 
Date:   Fri Oct 14 16:05:30 2016 -0700

Use more efficient hashtable for tidbitmap.c to speed up bitmap scans.
[...]
For bitmap scan heavy queries speedups of over 100% have been measured.



commit 5dfc198146b49ce7ecc8a1fc9d5e171fb75f6ba5
Author: Andres Freund 
Date:   Fri Oct 14 17:22:51 2016 -0700

Use more efficient hashtable for execGrouping.c to speed up hash 
aggregation.
[...]
    Improvements of over 120% have been measured.



Should we mention it?

Regards,

-- 
Adrien NAYRAT

http://dalibo.com - http://dalibo.org





Re: [HACKERS] Still another race condition in recovery TAP tests

2017-09-09 Thread Michael Paquier
On Sat, Sep 9, 2017 at 1:43 PM, Tom Lane  wrote:
> Michael Paquier  writes:
>>> I'm not real sure if the appropriate answer to this is "we need to fix
>>> RecursiveCopy" or "we need to fix the callers to not call RecursiveCopy
>>> until the source directory is stable".  Thoughts?
>
>> ... So making RecursiveCopy really looks
>> like the best bet in the long term.
>
> Yeah, even if we fixed this particular call site, I'm sure the issue
> would come up again.  Certainly we expect hot backups to work with
> a changing source directory.

Definitely...

> Hm.  Even if we don't want to use File::Copy::Recursive, maybe we should
> look at it and see what (if anything) it does about source-tree changes.

Actually, looking at the CPAN file of the module here:
http://cpansearch.perl.org/src/DMUEY/File-Copy-Recursive-0.38/Recursive.pm
There is no minimum version defined. There is even some handling for
perl 5.6 so I think that we should actually be fine from a
compatibility point of view. Also dircopy() uses fcopy(), which just
uses File::Copy::copy() to copy a file. A difference with
RecursiveCopy is that File::Copy::Recursive does not fail if copy()
itself fails (see "copy(@_) or return;" in "sub fcopy"); it just
returns on failure.

We actually discussed RecursiveCopy and
File::Copy::Recursive two years ago:
https://www.postgresql.org/message-id/20151123212707.GD4073@alvherre.pgsql
The first version of RecursiveCopy did not need any fancy copy except
a simple recursive function, and I personally still like the fact that
symlinks are not handled. Should we actually care about that for cases
where we take a filesystem backup with tablespaces and a pg_wal symlink?
Such a case has not shown up yet, but if it does I think that we could
consider integrating File::Copy::Recursive.

In short, I'd still like to keep RecursiveCopy for now, but change its
code so that a failed copy() is not a hard failure. What do you think?
-- 
Michael




Re: [HACKERS] pgbench more operators & functions

2017-09-09 Thread Fabien COELHO


Hello Pavel,

Here is a v13. No code changes, but TAP tests added to keep pgbench
coverage green.



Summary of patch contents:

This patch extends the pgbench expression syntax while keeping 
compatibility with SQL expressions.


It adds support for NULL and BOOLEAN, as well as assorted logical, 
comparison and test operators (AND, <>, <=, IS NULL...).


A CASE construct is provided which takes advantage of the added BOOLEAN.

Integer and double functions and operators are also extended: bitwise 
operators (<< & ...), exp/ln, mod() as a synonym for % (matching pg).


Added TAP tests keep pgbench source coverage green (if you ignore the 
lexer & parser generated files...).



Future plans include extending and synchronizing psql & pgbench variable 
and expression syntaxes:

 - move expression parsing and evaluation into fe_utils,
   which would allow us to
 - extend psql with some \let i  client-side syntax
   (ISTM that extending the \set syntax cannot be upward compatible)
   and probably handle \let as a synonym for \set in pgbench.
 - allow \if  in psql instead of just \if 
 - add \if ... support to pgbench
 - maybe add TEXT type support to the expression engine, if useful
 - maybe add :'var' and :"var" support to pgbench, if useful

There are already patches in the queue for:
 - testing whether a variable is defined in psql
   feature could eventually be added to pgbench as well
 - adding \gset (& \cset) to pgbench to get output of possibly
   combined queries into variables, which can be used for making
   decisions later in the script.
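As a rough illustration of the semantics this patch adopts (sketched in Python with names of my own invention, not pgbench's actual implementation): most operators return NULL on NULL input, conditionals treat non-zero numbers as TRUE and zero or NULL as FALSE, and a CASE without a final ELSE defaults to NULL:

```python
NULL = None  # model SQL NULL as Python None

def sql_eq(a, b):
    """'=' with SQL semantics: NULL on NULL input, like most operators."""
    if a is NULL or b is NULL:
        return NULL
    return a == b

def sql_truth(v):
    """Conditional truth: non-zero numbers are TRUE; zero and NULL are FALSE."""
    return v is not NULL and v != 0

def sql_case(*whens, else_=NULL):
    """CASE WHEN cond THEN value ... [ELSE else_] END.
    With no ELSE clause, the default result is NULL."""
    for cond, value in whens:
        if sql_truth(cond):
            return value
    return else_
```

For instance, the divx example from the documentation patch, `CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END`, would map to `sql_case((x != 0, y / x if x else NULL))` under this sketch.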

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index f5db8d1..59ca7a1 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -827,14 +827,31 @@ pgbench  options  dbname
  
   Sets variable varname to a value calculated
   from expression.
-  The expression may contain integer constants such as 5432,
+  The expression may contain the NULL constant,
+  boolean constants TRUE and FALSE,
+  integer constants such as 5432,
   double constants such as 3.14159,
   references to variables :variablename,
-  unary operators (+, -) and binary operators
-  (+, -, *, /,
-  %) with their usual precedence and associativity,
-  function calls, and
-  parentheses.
+  operators
+  with their usual SQL precedence and associativity,
+  function calls,
+  SQL CASE generic conditional
+  expressions and parentheses.
+ 
+
+ 
+  Functions and most operators return NULL on
+  NULL input.
+ 
+
+ 
+  For conditional purposes, non zero numerical values are TRUE,
+  zero numerical values and NULL are FALSE.
+ 
+
+ 
+  When no final ELSE clause is provided to a CASE,
+  the default value is NULL.
  
 
  
@@ -843,6 +860,7 @@ pgbench  options  dbname
 \set ntellers 10 * :scale
 \set aid (1021 * random(1, 10 * :scale)) % \
(10 * :scale) + 1
+\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END 
 
 

@@ -919,6 +937,177 @@ pgbench  options  dbname
   
  
 
+ 
+  Built-In Operators
+
+  
+   The arithmetic, bitwise, comparison and logical operators listed in
+are built into pgbench
+   and may be used in expressions appearing in
+   \set.
+  
+
+  
+   pgbench Operators by increasing precedence
+   
+
+ 
+  Operator
+  Description
+  Example
+  Result
+ 
+
+
+ 
+  OR
+  logical or
+  5 or 0
+  TRUE
+ 
+ 
+  AND
+  logical and
+  3 and 0
+  FALSE
+ 
+ 
+  NOT
+  logical not
+  not false
+  TRUE
+ 
+ 
+  IS [NOT] (NULL|TRUE|FALSE)
+  value tests
+  1 is null
+  FALSE
+ 
+ 
+  ISNULL|NOTNULL
+  null tests
+  1 notnull
+  TRUE
+ 
+ 
+  =
+  is equal
+  5 = 4
+  FALSE
+ 
+ 
+  <>
+  is not equal
+  5 <> 4
+  TRUE
+ 
+ 
+  !=
+  is not equal
+  5 != 5
+  FALSE
+ 
+ 
+  <
+  lower than
+  5 < 4
+  FALSE
+ 
+ 
+  <=
+  lower or equal
+  5 <= 4
+  FALSE
+ 
+ 
+  >
+  greater than
+  5 > 4
+  TRUE
+ 
+ 
+  >=
+  greater or equal
+  5 >= 4
+  TRUE
+ 
+ 
+  |
+  integer bitwise OR
+  1 | 2
+  3
+ 
+ 
+  #
+  integer bitwise XOR
+  1 # 3
+  2
+ 
+ 
+  &
+  integer bitwise AND
+  1 & 3
+  1
+ 
+ 
+  ~
+  integer bitwise NOT
+  ~ 1
+  -2
+ 
+ 
+  <<
+  integer bitwise shift left
+  1 << 2
+  4
+ 
+ 
+  >>
+  integer bitwise shift right
+  8 >> 2
+  2
+ 
+ 
+  +
+  addition
+  5 + 4
+  9
+ 
+ 
+  -
+  subtraction
+  3 - 2.0
+  1.0
+ 
+ 
+  *
+  multiplication
+  5 * 4
+  20
+ 

Re: [HACKERS] Setting pd_lower in GIN metapage

2017-09-09 Thread Michael Paquier
On Fri, Sep 8, 2017 at 5:24 AM, Tom Lane  wrote:
> This means that the premise of this patch is wrong.  Adjusting pd_lower,
> by itself, would accomplish precisely zip for WAL compression, because
> we'd still not be telling the WAL code to compress out the hole.
>
> To get any benefit, I think we'd need to do all of the following:
>
> 1. Initialize pd_lower correctly in the metapage init functions, as here.
> [...]
> In short, this patch needs a significant rewrite, and more analysis than
> you've done so far on whether there's actually any benefit to be gained.
> It might not be worth messing with.
>
> I'll set the CF entry back to Waiting on Author.

I did some measurements of the compressibility of the GIN meta page,
looking at its FPWs with and without wal_compression, and you are
right: there is no direct compressibility effect from setting pd_lower
on the meta page. However, it seems to me that there is still an
argument pleading in favor of this patch for wal_consistency_checking.

On HEAD, pd_lower gets set to 24 and pd_upper to 8184 for GIN meta
pages. With the patch, pd_lower gets set to 80. On top of cleaning up
the masking functions of GIN, BRIN and SP-GiST by removing some
exceptions in their handling, we are able to get a better masked page
because it is possible to mask a portion that we *know* is unused. So
even if there are no compressibility benefits, I think that it
actually helps in tracking down inconsistencies in meta pages by
allowing a more precise lookup. So I would still vote for integrating
the patch as-is, with the addition of a comment mentioning that the
compressibility optimization is not used yet, though it is helpful
when masking the page. The same comment ought to be added for btree.
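To illustrate the masking idea (a sketch in Python, not the actual bufmask.c code; the marker byte and page size are assumptions here): with pd_lower set correctly past the metadata, the region between pd_lower and pd_upper is known to be unused and can be zeroed on both the page and its full-page image before comparing them:

```python
PAGE_SIZE = 8192
MASK_MARKER = 0x00  # assumed marker byte for this illustration

def mask_unused_space(page, pd_lower, pd_upper):
    """Zero the 'hole' between pd_lower and pd_upper so that a
    consistency check ignores bytes we *know* are unused.

    With pd_lower correctly set past the metadata (e.g. 80 instead
    of 24), a larger, genuinely unused region can be masked.
    """
    assert len(page) == PAGE_SIZE
    assert 0 <= pd_lower <= pd_upper <= PAGE_SIZE
    masked = bytearray(page)
    for i in range(pd_lower, pd_upper):
        masked[i] = MASK_MARKER
    return bytes(masked)
```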
-- 
Michael




Re: [HACKERS] [PATCH] Pageinspect - add functions on GIN and GiST indexes from gevel

2017-09-09 Thread Ashutosh Sharma
On Fri, Sep 8, 2017 at 3:38 AM, Tomas Vondra
 wrote:
> Hi,
>
> On 07/21/2017 03:40 PM, Alexander Korotkov wrote:
>> Hi, Alexey!
>>
>> On Fri, Jul 21, 2017 at 3:05 PM, Alexey Chernyshov
>> > wrote:
>>
>> the following patch transfers functionality from gevel module
>> (http://www.sai.msu.su/~megera/wiki/Gevel
>> ) which provides functions for
>> analyzing GIN and GiST indexes to pageinspect. Gevel was originally
>> designed by Oleg Bartunov, and Teodor Sigaev for developers of GiST and
>> GIN indexes.
>>
>>
>> It's very good that you've picked up this work!  pageinspect is lacking
>> of this functionality.
>>
>> Functions added:
>>  - gist_stat(text) - shows statistics on GiST Tree
>>  - gist_tree(text) - shows GiST tree
>>  - gist_tree(text, int4) - shows GiST tree up to MAXLEVEL
>>  - gist_print(text) - prints objects stored in GiST tree
>>  - spgist_stat(text) - shows statistics on SP-GiST
>>  - spgist_print(text) - prints objects stored in index
>>  - gin_value_count() - originally gin_stat(text) - prints estimated
>> counts
>> for index values
>>  - gin_stats() - originally gin_statpage(text) - shows statistics
>>  - gin_count_estimate(text, tsquery) - shows number of indexed rows
>> matched
>> query
>>
>> Tests also transferred, docs for new functions are added. I run pgindent
>> over the code, but the result is different from those I expected, so
>> I leave
>> pgindented one.
>> The patch is applicable to the commit
>> 866f4a7c210857aa342bf901558d170325094dde.
>>
>>
>> As we can see, gevel contains functions which analyze the whole index.
>>  pageinspect is written in another manner: it gives you functionality to
>> analyze individual pages, tuples and so on.
>> Thus, we probably shouldn't try to move gevel functions to pageinspect
>> "as is".  They should be rewritten in more granular manner as well as
>> other pageinspact functions are.  Any other opinions?
>>
>
> I agree with Alexander that pageinspect is written in a very different
> way - as the extension name suggests, it's used to inspect pages. The
> proposed patch uses a very different approach, reading the whole index,
> not individual pages. Why should it be part of pageinspect?
>
> For example we have pgstattuple extension, which seems like a better
> match. Or just create a new extension - if the code is valuable, surely
> we can live with one more extension instead of smuggling it in inside
> pageinspect.
>

Finally, I got some time to look into this patch and, surprisingly, I
didn't find any function returning information at the page level;
instead, all the SQL functions return information at the index level.
Therefore, I too feel that it should either be integrated with
pgstattuple, which could be a better match as Tomas also mentioned, or
added as a new contrib module. I think adding a new contrib module
looks like the better option. The reason is that the patch doesn't
simply include functions showing index-level statistics (e.g. index
size, number of rows, values, etc.) like pgstattuple does; it also
displays information contained in a page, e.g. the objects stored in
the page, their tids, etc. So, more or less, I would say that it's a
mixture of the pageinspect and pgstattuple modules, and therefore I
feel adding it as a new extension would be the better choice. Thoughts?

--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com




Re: [HACKERS] psql: new help related to variables are not too readable

2017-09-09 Thread Pavel Stehule
Hi

2017-09-09 1:24 GMT+02:00 Tom Lane :

> Tomas Vondra  writes:
> > The translator has exactly the same context in both cases, and the
> > struggle to wrap it at 80 characters will be pretty much the same too.
>
> Really?  With the old way, you had something under 60 characters to
> work in, now it's nearly 80.  I don't buy that that's not a significant
> difference.  It's also much less ugly if you decide you need one more
> line than the English version uses.
>
>
here is patch - anybody can check if with this change will be result better
or not.

Regards

Pavel


> regards, tom lane
>
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 4d1c0ec3c6..9877cc5f55 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -337,7 +337,7 @@ helpVariables(unsigned short int pager)
 	 * Windows builds currently print one more line than non-Windows builds.
 	 * Using the larger number is fine.
 	 */
-	output = PageOutput(147, pager ? &(pset.popt.topt) : NULL);
+	output = PageOutput(206, pager ? &(pset.popt.topt) : NULL);
 
 	fprintf(output, _("List of specially treated variables\n\n"));
 
@@ -346,108 +346,108 @@ helpVariables(unsigned short int pager)
 	fprintf(output, _("  psql --set=NAME=VALUE\n  or \\set NAME VALUE inside psql\n\n"));
 
 	fprintf(output, _("  AUTOCOMMIT\n"
-	  "if set, successful SQL commands are automatically committed\n"));
+	  "if set, successful SQL commands are automatically committed\n\n"));
 	fprintf(output, _("  COMP_KEYWORD_CASE\n"
 	  "determines the case used to complete SQL key words\n"
-	  "[lower, upper, preserve-lower, preserve-upper]\n"));
+	  "[lower, upper, preserve-lower, preserve-upper]\n\n"));
 	fprintf(output, _("  DBNAME\n"
-	  "the currently connected database name\n"));
+	  "the currently connected database name\n\n"));
 	fprintf(output, _("  ECHO\n"
 	  "controls what input is written to standard output\n"
-	  "[all, errors, none, queries]\n"));
+	  "[all, errors, none, queries]\n\n"));
 	fprintf(output, _("  ECHO_HIDDEN\n"
 	  "if set, display internal queries executed by backslash commands;\n"
-	  "if set to \"noexec\", just show them without execution\n"));
+	  "if set to \"noexec\", just show them without execution\n\n"));
 	fprintf(output, _("  ENCODING\n"
-	  "current client character set encoding\n"));
+	  "current client character set encoding\n\n"));
 	fprintf(output, _("  FETCH_COUNT\n"
-	  "the number of result rows to fetch and display at a time (0 = unlimited)\n"));
+	  "the number of result rows to fetch and display at a time (0 = unlimited)\n\n"));
 	fprintf(output, _("  HISTCONTROL\n"
-	  "controls command history [ignorespace, ignoredups, ignoreboth]\n"));
+	  "controls command history [ignorespace, ignoredups, ignoreboth]\n\n"));
 	fprintf(output, _("  HISTFILE\n"
-	  "file name used to store the command history\n"));
+	  "file name used to store the command history\n\n"));
 	fprintf(output, _("  HISTSIZE\n"
-	  "max number of commands to store in the command history\n"));
+	  "max number of commands to store in the command history\n\n"));
 	fprintf(output, _("  HOST\n"
-	  "the currently connected database server host\n"));
+	  "the currently connected database server host\n\n"));
 	fprintf(output, _("  IGNOREEOF\n"
-	  "number of EOFs needed to terminate an interactive session\n"));
+	  "number of EOFs needed to terminate an interactive session\n\n"));
 	fprintf(output, _("  LASTOID\n"
-	  "value of the last affected OID\n"));
+	  "value of the last affected OID\n\n"));
 	fprintf(output, _("  ON_ERROR_ROLLBACK\n"
-	  "if set, an error doesn't stop a transaction (uses implicit savepoints)\n"));
+	  "if set, an error doesn't stop a transaction (uses implicit savepoints)\n\n"));
 	fprintf(output, _("  ON_ERROR_STOP\n"
-	  "stop batch execution after error\n"));
+	  "stop batch execution after error\n\n"));
 	fprintf(output, _("  PORT\n"
-	  "server port of the current connection\n"));
+	  "server port of the current connection\n\n"));
 	fprintf(output, _("  PROMPT1\n"
-	  "specifies the standard psql prompt\n"));
+	  "specifies the standard psql prompt\n\n"));
 	fprintf(output, _("  PROMPT2\n"
-	  "specifies the prompt used when a statement continues from a previous line\n"));
+	  "specifies the prompt used when a statement continues from a previous line\n\n"));
 	fprintf(output, _("  PROMPT3\n"
-	  "specifies the prompt used during COPY ... FROM STDIN\n"));
+	  "specifies the prompt used during COPY ... FROM STDIN\n\n"));
 	fprintf(output, _("  QUIET\n"
-	  "run quietly (same as -q option)\n"));
+	  "run quietly (same as -q 

Re: [HACKERS] Re: proposal - using names as primary names of plpgsql function parameters instead $ based names

2017-09-09 Thread Pavel Stehule
Hi

2017-09-08 9:36 GMT+02:00 Jeevan Chalke :

> Hi Pavel,
>
> On Sat, May 20, 2017 at 11:55 AM, Pavel Stehule 
> wrote:
>
>>
>>
>>
>> 2017-05-19 5:48 GMT+02:00 Pavel Stehule :
>>
>>>
>>>
>>> 2017-05-19 3:14 GMT+02:00 Peter Eisentraut <
>>> peter.eisentr...@2ndquadrant.com>:
>>>
 On 5/15/17 14:34, Pavel Stehule wrote:
 > Now, I when I working on plpgsql_check, I have to check function
 > parameters. I can use fn_vargargnos and out_param_varno for list
 of
 > arguments and related varno(s). when I detect some issue, I am
 using
 > refname. It is not too nice now, because these refnames are $
 based.
 > Long names are alias only. There are not a possibility to find
 > related alias.
 >
 > So, my proposal. Now, we can use names as refname of parameter
 > variable. $ based name can be used as alias. From user perspective
 > there are not any change.
 >
 > Comments, notes?
 >
 > here is a patch


> I like the idea of using parameter name instead of $n symbols.
>
> However, I am slightly worried that, at execution time if we want to
> know the parameter position in the actual function signature, then it
> will become difficult to get that from the corresponding datum
> variable. I don't have any use-case for that though. But apart from
> this concern, idea looks good to me.
>

Understood - but that was the reason why I implemented this function:
when I have to search for a parameter name via an offset, I cannot use
string searching. When you know the parameter name, you can use string
searching in a text editor or in a pager.

That is better supported now than with the current behavior.


>
> Here are review comments on the patch:
>
> 1.
> +char   *argname = NULL;
>
> There is no need to initialize argname here. The Later code does that.
>
> 2.
> +argname = (argnames && argnames[i][0] != 0) ? argnames[i]
> : NULL;
>
> It will be better to check '\0' instead of 0, like we have that already.
>

This pattern appears elsewhere in the PL/pgSQL code, but your proposal is better.


>
> 3.
> Check for argname exists is not consistent. At one place you have used
> "argname != NULL" and other place it is "argname != '\0'".
> Better to have "argname != NULL" at both the places.
>

sure

>
> 4.
> -- should fail -- message should to contain argument name
> Should be something like this:
> -- Should fail, error message should contain argument name
>
> 5.
> +argvariable = plpgsql_build_variable(argname != NULL ?
> +   argname : buf,
> +   0, argdtype,
> false);
>
> Please correct indentation.
>
> ---
>
> BTW, instead of doing all these changes, I have done these changes this
> way:
>
> -   /* Build variable and add to datum list */
> -   argvariable = plpgsql_build_variable(buf, 0,
> -argdtype, false);
> +   /*
> +* Build variable and add to datum list.  If there's a
> name for
> +* the argument, then use that else use $n name.
> +*/
> +   argvariable = plpgsql_build_variable((argnames &&
> argnames[i][0] != '\0') ?
> +argnames[i] : buf,
> +0, argdtype, false);
>
> This requires no new variable and thus no more changes elsewhere.
>
> Attached patch with these changes. Please have a look.
>

Looks great - I added check to NULL only

Thank you

Pavel


>
> Thanks
>
>
> --
> Jeevan Chalke
> Principal Software Engineer, Product Development
> EnterpriseDB Corporation
> The Enterprise PostgreSQL Company
>
>
diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c
index e9d7ef55e9..f320059fd0 100644
--- a/src/pl/plpgsql/src/pl_comp.c
+++ b/src/pl/plpgsql/src/pl_comp.c
@@ -433,9 +433,14 @@ do_compile(FunctionCallInfo fcinfo,
 			 errmsg("PL/pgSQL functions cannot accept type %s",
 	format_type_be(argtypeid;
 
-/* Build variable and add to datum list */
-argvariable = plpgsql_build_variable(buf, 0,
-	 argdtype, false);
+/*
+ * Build variable and add to datum list.  If there's a name for
+ * the argument, then use that else use $n name.
+ */
+argvariable = plpgsql_build_variable((argnames != NULL
+		  && argnames[i][0] != '\0') ?
+	 argnames[i] : buf,
+	 0, argdtype, false);
 
 if (argvariable->dtype == PLPGSQL_DTYPE_VAR)
 {
@@ -461,7 +466,7 @@ do_compile(FunctionCallInfo fcinfo,
 add_parameter_name(argitemtype, argvariable->dno, buf);
 
 /* If there's a name for the argument, make an alias */
-if (argnames && argnames[i][0] != '\0')
+if