Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Stephen Frost
* Andres Freund (and...@anarazel.de) wrote:
 On 2015-04-28 10:40:10 -0400, Stephen Frost wrote:
  * Andres Freund (and...@anarazel.de) wrote:
   On 2015-04-28 16:36:28 +0200, Petr Jelinek wrote:
I am also very sure that every time I'll write this statement I will have to
look into manual for the names of TARGET and EXCLUDED because they don't
seem intuitive to me at all (especially the EXCLUDED).
   
   Same here. I don't understand why 'CONFLICTING' would be more ambiguous
   than EXCLUDED (as Peter argued earlier). Especially given that the whole
   syntax is called ON CONFLICT.
  
  Any way we can alias it?  Both of those strike me as annoyingly long and
  if we could allow an alias then people can do whatever they want...
  
  No, I haven't got any suggestion on how to do that. :)
  
  It's also something we can probably improve on in the future...
 
 I earlier suggested NEW/OLD. I still think that's not too bad as I don't
 buy the argument that they're too associated with rules.

+1, NEW/OLD seem pretty natural and I'm not worried about what they look
like in rules, and their usage in triggers matches up with what they'd
mean here, I'd think.
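For concreteness, here is a sketch of how the spelling under discussion reads in a statement (table and column names are hypothetical; EXCLUDED is the spelling that was ultimately kept):

```sql
-- Hypothetical table; EXCLUDED refers to the row proposed for
-- insertion, the table name to the existing (conflicting) row.
CREATE TABLE counters (key text PRIMARY KEY, hits int NOT NULL);

INSERT INTO counters (key, hits) VALUES ('home', 1)
ON CONFLICT (key) DO UPDATE
    SET hits = counters.hits + EXCLUDED.hits;
```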

Thanks!

Stephen




Re: [HACKERS] ATSimpleRecursion() and inheritance foreign parents

2015-04-28 Thread Etsuro Fujita

On 2015/04/28 15:17, Amit Langote wrote:

The code at the beginning of ATSimpleRecursion() looks like -

/*
  * Propagate to children if desired.  Non-table relations never have
  * children, so no need to search in that case.
  */
  if (recurse && rel->rd_rel->relkind == RELKIND_RELATION)

Not sure if it's great idea, but now that foreign tables can also have
children, should above be changed to take that into account?


Yeah, I think we should now allow the recursion for inheritance parents 
that are foreign tables as well.  Attached is a patch for that.



so even foreign inheritance parents expand for
various ALTER TABLE actions like adding a column though that is not a
meaningful operation on foreign tables anyway.

An example,
postgres=# alter foreign table fparent alter a type char;
ALTER FOREIGN TABLE

postgres=# select * from fparent;
ERROR:  attribute "a" of relation "fchild1" does not match parent's type

Above error, AIUI, is hit much before it is determined that fparent is a
foreign table, whereas the following is FDW-specific (waiting to happen) error,

postgres=# alter foreign table fparent add b char;
ALTER FOREIGN TABLE

postgres=# SELECT * FROM fparent;
ERROR:  column "b" does not exist
CONTEXT:  Remote SQL command: SELECT a, b FROM public.parent

Not sure if the first case could be considered as a bug as foreign tables are
pretty lax in these regards anyway.


I think the first case would be a bug caused by ATSimpleRecursion() that 
doesn't allow the recursion.  But I don't think the second case is a 
bug.  As described in the notes in the reference page on ALTER FOREIGN 
TABLE, I think it should be the user's responsibility to ensure that the 
foreign table definition matches the reality.
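A minimal reproduction of the scenario, as I understand it (all object names hypothetical; assumes postgres_fdw is installed and the remote table is reachable):

```sql
-- Foreign table acting as an inheritance parent.
CREATE EXTENSION postgres_fdw;
CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (dbname 'postgres');
CREATE FOREIGN TABLE fparent (a int) SERVER loopback
    OPTIONS (table_name 'parent');
CREATE TABLE fchild1 () INHERITS (fparent);

-- Without the patch, ATSimpleRecursion() skips recursion for foreign
-- parents, so this alters only fparent and leaves fchild1.a as int:
ALTER FOREIGN TABLE fparent ALTER a TYPE char;
```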


Best regards,
Etsuro Fujita
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 4367,4376  ATSimpleRecursion(List **wqueue, Relation rel,
    AlterTableCmd *cmd, bool recurse, LOCKMODE lockmode)
  {
  	/*
! 	 * Propagate to children if desired.  Non-table relations never have
! 	 * children, so no need to search in that case.
  	 */
! 	if (recurse && rel->rd_rel->relkind == RELKIND_RELATION)
  	{
  		Oid			relid = RelationGetRelid(rel);
  		ListCell   *child;
--- 4367,4379 
    AlterTableCmd *cmd, bool recurse, LOCKMODE lockmode)
  {
  	/*
! 	 * Propagate to children if desired.  Relations other than plain tables
! 	 * and foreign tables never have children, so no need to search in that
! 	 * case.
  	 */
! 	if (recurse &&
! 		(rel->rd_rel->relkind == RELKIND_RELATION ||
! 		 rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE))
  	{
  		Oid			relid = RelationGetRelid(rel);
  		ListCell   *child;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Andres Freund
On 2015-04-28 10:40:10 -0400, Stephen Frost wrote:
 * Andres Freund (and...@anarazel.de) wrote:
  On 2015-04-28 16:36:28 +0200, Petr Jelinek wrote:
   I am also very sure that every time I'll write this statement I will have to
   look into manual for the names of TARGET and EXCLUDED because they don't
   seem intuitive to me at all (especially the EXCLUDED).
  
  Same here. I don't understand why 'CONFLICTING' would be more ambiguous
  than EXCLUDED (as Peter argued earlier). Especially given that the whole
  syntax is called ON CONFLICT.
 
 Any way we can alias it?  Both of those strike me as annoyingly long and
 if we could allow an alias then people can do whatever they want...
 
 No, I haven't got any suggestion on how to do that. :)
 
 It's also something we can probably improve on in the future...

I earlier suggested NEW/OLD. I still think that's not too bad as I don't
buy the argument that they're too associated with rules.

Greetings,

Andres Freund




Re: [HACKERS] cache invalidation for PL/pgsql functions

2015-04-28 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 rhaas=# create table foo (a int);
 CREATE TABLE
 rhaas=# create or replace function test (x foo) returns int as $$begin
 return x.b; end$$ language plpgsql;
 CREATE FUNCTION
 rhaas=# alter table foo add column b int;
 ALTER TABLE
 rhaas=# select test(null::foo);
 ERROR:  record "x" has no field "b"
 LINE 1: SELECT x.b
^
 QUERY:  SELECT x.b
 CONTEXT:  PL/pgSQL function test(foo) line 1 at RETURN

I believe that this was one of the cases I had in mind when I previously
proposed that we stop using PLPGSQL_DTYPE_ROW entirely for composite-type
variables, and make them use PLPGSQL_DTYPE_REC (that is, the same code
paths used for RECORD).

As I recall, that proposal was shot down with no investigation whatsoever,
on the grounds that it might possibly make some cases slower.

regards, tom lane




Re: [HACKERS] improving speed of make check-world

2015-04-28 Thread Peter Eisentraut
On 4/28/15 9:09 AM, Michael Paquier wrote:
 I guess by redirecting it into the log file you indicated, but is that a
  good idea to redirect stderr?
 I am sure that Peter did that on purpose, both approaches having
 advantages and disadvantages. Personally I don't mind looking at the
 install log file in tmp_install to see the state of the installation,
 but it is true that this change is a bit disturbing regarding the fact
 that everything was directly outputted to stderr and stdout for many
 years.

Not really.  Before, pg_regress put the output of its internal make
install run into a log file.  Now make is just replicating that
behavior.  I would agree that that seems kind of obsolete now, because
it's really just another sub-make.  So if no one disagrees, I'd just
remove the log file redirection in temp-install.





Re: [HACKERS] [COMMITTERS] pgsql: Add transforms feature

2015-04-28 Thread Andrew Dunstan


On 04/28/2015 12:03 PM, Andrew Dunstan wrote:


[switching to -hackers]

On 04/28/2015 11:51 AM, Andrew Dunstan wrote:


On 04/28/2015 09:04 AM, Michael Paquier wrote:

On Tue, Apr 28, 2015 at 10:01 AM, Michael Paquier
michael.paqu...@gmail.com wrote:

On Tue, Apr 28, 2015 at 9:55 AM, Andrew Dunstan wrote:
w.r.t MSVC builds, it looks like we need entries in $contrib_extraincludes
in src/tools/msvc/Mkvcbuild.pm at the very least.

If our goal is to put back to green the Windows nodes as quick as
possible, we could bypass their build this way , and we'll
additionally need to update the install script and
vcregress.pl/contribcheck to bypass those modules accordingly. Now I
don't think that we should make the things produced inconsistent.
OK, attached are two patches to put back the buildfarm nodes using 
MSVC to green

- 0001 adds support for build and installation of the new transform
modules: hstore_plperl, hstore_plpython and ltree_plpython. Note that
this patch is enough to put back the buildfarm to a green state for
MSVC, but it disables the regression tests for those modules,
something addressed in the next patch...
- 0002 adds support for regression tests for the new modules. The
thing is that we need to check the major version of Python
available in configuration and choose what are the extensions to
preload before the tests run. It is a little bit hacky... But it has
the merit to work, and I am not sure we could have a cleaner solution
by looking at the Makefiles of the transform modules that use
currently conditions based on $(python_majorversion).
Regards,



I have pushed the first of these with some tweaks.

I'm looking at the second.





Regarding this second patch - apart from the buggy python version test 
which I just fixed in the first patch, I see this:


   +   if ($pyver eq 2)
   +   {
   +   push @opts, '--load-extension=plpythonu';
   +   push @opts, '--load-extension=' . $module . 'u';
   +   }


But what do we do for Python3?

Do we actually have a Windows machine building with Python3?



The answer seems to be probably not. When I tried enabling this with 
bowerbird I got a ton of errors like these:


   plpy_cursorobject.obj : error LNK2001: unresolved external symbol
   PyObject_SelfIter [C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]
   plpy_cursorobject.obj : error LNK2019: unresolved external symbol
   __imp_PyType_Ready referenced in function PLy_cursor_init_type
   [C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]


Something else to fix I guess.

cheers

andrew




Re: [HACKERS] [COMMITTERS] pgsql: Add transforms feature

2015-04-28 Thread Peter Eisentraut
On 4/28/15 4:10 PM, Andrew Dunstan wrote:
 Do we actually have a Windows machine building with Python3?
 
 
 The answer seems to be probably not. When I tried enabling this with
 bowerbird I got a ton of errors like these:
 
plpy_cursorobject.obj : error LNK2001: unresolved external symbol
PyObject_SelfIter [C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]
plpy_cursorobject.obj : error LNK2019: unresolved external symbol
__imp_PyType_Ready referenced in function PLy_cursor_init_type
[C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]
 
 
 Something else to fix I guess.

I think at least at one point the community Windows binaries built by
EnterpriseDB were built against Python 3 (which upset some users).  But
they might not be using the msvc toolchain.





Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Jim Nasby

On 4/28/15 1:31 PM, Andrew Dunstan wrote:


On 04/28/2015 01:44 PM, Jim Nasby wrote:

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and powerful.


Sure, and if I had some pointers on what was necessary there I'd take
a look at it. But I'm not very familiar with plpgsql (let alone what
we'd need to do this in SQL), so I'd just be fumbling around. As a
reminder, one of the big issues there seems to be that while plSQL
knows what the underlying type is, plpgsql has no idea, which
seriously limits the use of passing it a record.

In the meantime I've got a patch that definitely works for plSQL and
allows you to handle a record and pass it on to other functions (such
as json_from_record()). Since that's my original motivation for
looking at this, I'd like that patch to be considered unless there's a
big drawback to it that I'm missing. (For 9.6, of course.)



If you look at composite_to_json() it gives you almost all that you'd
need to construct an array of field names for an arbitrary record, and a
lot of what you'd need to extract a value for an arbitrary field.
populate_record_worker() has a good deal of what you'd need to set a
value of an arbitrary field. None of that means that there isn't a good
deal of work do do, but if you want pointers there are some.


Thanks, those are helpful. BTW, I think this is a memory leak in 
populate_record_worker():


my_extra = (RecordIOData *) fcinfo->flinfo->fn_extra;
if (my_extra == NULL ||
    my_extra->ncolumns != ncolumns)
{
    fcinfo->flinfo->fn_extra =
        MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,


The part that I'm still concerned about in plpgsql is how to handle the 
case of having a record that we should be able to associate with a 
specific composite type (such as a table type). That's not currently 
working in my patch, but I'm not sure why. Maybe I need to test for that 
and in that case set the variable up as a PLPGSQL_TTYPE_ROW instead of 
PLPGSQL_TTYPE_REC?

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] why does enum_endpoint call GetTransactionSnapshot()?

2015-04-28 Thread Bruce Momjian
On Sat, Feb 14, 2015 at 05:29:43PM -0500, Robert Haas wrote:
 On Sat, Feb 14, 2015 at 5:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Robert Haas robertmh...@gmail.com writes:
  I think this is probably a holdover from before the death of
  SnapshotNow, and that we should just pass NULL to
  systable_beginscan_ordered() here, the same as we do for other catalog
  accesses.  Barring objections, I'll go make that change.
 
  Seems reasonable to me, but why are you only looking at that one and
  not the identical code in enum_range_internal()?
 
 I noticed that one after hitting send.  I think we should change them both.

Any news on this?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] pg_rewind test race condition..?

2015-04-28 Thread Heikki Linnakangas

On 04/28/2015 11:02 AM, Stephen Frost wrote:

Heikki,

   Not sure if anyone else is seeing this, but I'm getting regression
   test failures when running the pg_rewind tests pretty consistently
   with 'make check'.  Specifically with basic remote, I'm getting:

source and target cluster are on the same timeline
Failure, exiting

   in regress_log/pg_rewind_log_basic_remote.

   If I throw a sleep(5); into t/001_basic.pl before the call to
   RewindTest::run_pg_rewind($test_mode); then everything works fine.


The problem seems to be that when the standby is promoted, it's a 
so-called fast promotion, where it writes an end-of-recovery record 
and starts accepting queries before creating a real checkpoint. 
pg_rewind looks at the TLI in the latest checkpoint, as it's in the 
control file, but that isn't updated until the checkpoint completes. I 
don't see it on my laptop normally, but I can reproduce it if I insert a 
sleep(5) in StartupXLog, just before it requests the checkpoint:


--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -7173,7 +7173,10 @@ StartupXLOG(void)
 * than is appropriate now that we're not in standby mode anymore.
 */
if (fast_promoted)
+   {
+   sleep(5);
RequestCheckpoint(CHECKPOINT_FORCE);
+   }
 }

The simplest fix would be to force a checkpoint in the regression test, 
before running pg_rewind. It's a bit of a cop out, since you'd still get 
the same issue when you tried to do the same thing in the real world. It 
should be rare in practice - you'd not normally run pg_rewind 
immediately after promoting the standby - but a better error message at 
least would be nice..
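The workaround described above amounts to issuing an explicit checkpoint on the just-promoted standby before invoking pg_rewind, e.g.:

```sql
-- Run against the promoted standby; forces the control file's
-- "latest checkpoint" TLI to reflect the post-promotion timeline
-- before pg_rewind reads it.
CHECKPOINT;
```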

- Heikki





Re: [HACKERS] Can pg_dump make use of CURRENT/SESSION_USER

2015-04-28 Thread Fabrízio de Royes Mello
On Tue, Apr 28, 2015 at 1:07 PM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:


 On Tue, Apr 28, 2015 at 9:38 AM, Robert Haas robertmh...@gmail.com
wrote:
 
  On Sat, Apr 25, 2015 at 8:05 AM, Fabrízio de Royes Mello
  fabriziome...@gmail.com wrote:
 If we ever implement something like

 COMMENT ON CURRENT_DATABASE IS ...

 it will be useful, because you will be able to restore a dump into
 another database and have the comment apply to the target database.
   
I think it's simple to implement, but how about pg_dump... we need to add
new option (like --use-current-database) or am I missing something?
  
   I think we'd just change it to use the new syntax, full stop.  I see
   no need for an option.
  
   I'm returning on this...
  
   What's the reasonable syntaxes?
  
   COMMENT ON CURRENT DATABASE IS 'text';
  
   or
  
   COMMENT ON DATABASE { CURRENT_DATABASE | object_name } IS 'text';
 
  The second one would require making CURRENT_DATABASE a reserved
  keyword, and I'm not keen to create any more of those.  I like the
  first one.  The other alternative that may be worth considering is:
 
  COMMENT ON CURRENT_DATABASE IS 'text';
 
  That doesn't require making CURRENT_DATABASE a reserved keyword, but
  it does require making it a keyword, and it doesn't look very SQL-ish.
  Still, we have a bunch of other CURRENT_FOO keywords.
 
  But I'm inclined to stick with your first proposal.
 

 Attached the patch to support COMMENT ON CURRENT DATABASE IS ...
(including pg_dump).
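For reference, the syntax being implemented (proposed only at this point, not committed; the comment text is hypothetical):

```sql
-- Applies to whichever database the dump is restored into,
-- rather than a hard-coded database name.
COMMENT ON CURRENT DATABASE IS 'example comment';
```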

 On my next spare time I'll send the ALTER ROLE ... IN CURRENT DATABASE
patch.


Looking at the patch again I realize the code is very ugly, so I'll rework
the patch.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello


Re: [HACKERS] pg_basebackup, tablespace mapping and path canonicalization

2015-04-28 Thread Bruce Momjian
On Fri, Feb  6, 2015 at 08:25:42AM -0500, Robert Haas wrote:
 On Thu, Feb 5, 2015 at 10:21 PM, Ian Barwick i...@2ndquadrant.com wrote:
  I stumbled on what appears to be inconsistent handling of double slashes
  in tablespace paths when using pg_basebackup with the 
  -T/--tablespace-mapping
  option:
 
  ibarwick:postgresql (master)$ mkdir /tmp//foo-old
 [...]
  The attached patch adds the missing canonicalization; I can't see any
  reason not to do this. Thoughts?
 
 Seems OK to me.  Anyone think differently?

Patch applied.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Feedback on getting rid of VACUUM FULL

2015-04-28 Thread Jim Nasby

On 4/28/15 1:32 PM, Robert Haas wrote:

More than five years have passed since Heikki posted this, and we still
haven't found a solution to the problem -- which neverthless keeps
biting people to the point that multiple user-space implementations of
similar techniques are out there.

Yeah.  The problem with solving this with an update is that a
concurrent real update may not see the expected behavior, especially
at higher isolation levels.  Tom also complained that the CTID will
change, and somebody might care about that.  But I think it's pretty
clear that a lot of people will be able to live with those problems,
and those who can't will be no worse off than now.


But that's the same thing that would happen during a real update, even 
if it was just UPDATE SET some_field = some_field, no? Doesn't 
heap_update already do everything that's necessary? Or are you worried 
that doing this could be user-visible (which as long as it's a manual 
process I think is OK)?

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] shared_libperl, shared_libpython

2015-04-28 Thread Andrew Dunstan


On 04/28/2015 04:31 PM, Peter Eisentraut wrote:

On 4/28/15 12:05 AM, Tom Lane wrote:

Yeah.  Even more specifically, olinguito does have --with-python in its
configure flags, but then the plpython Makefile skips the build because
libpython isn't available as a shared library.  But the contrib test is
(I assume, haven't looked) conditional only on the configure flag.

I'm not real sure now why we felt that was a good approach.  The general
project policy is that if you ask for a feature in the configure flags,
we'll build it or die trying; how come this specific Python issue gets
special treatment contrary to that policy?

So I'd vote for changing that to put the shared-library test in configure,
or at least make it a build failure later on, not silently don't build
what was asked for.  That would mean olinguito's configure flags would
have to be adjusted.

Plan B would require propagating the shared-libpython test into the
contrib makefiles, which seems pretty unpalatable even if you're willing
to defend the status quo otherwise.

I had tried plan B prime, moving the shared_libpython etc. detection
into Makefile.global.  But that doesn't work because contrib/pgxs
makefiles require setting all variables *before* including the global
makefiles.  So you can't decide whether you want to build something
before you have told it everything you want to build.

The reason for the current setup is actually that when plperl and later
plpython was added, we still had Perl and Python client modules in our
tree (Pg.pm and pygresql), and configure --with-perl and --with-python
were meant to activate their build primarily.  Also, in those days,
having a shared libperl or libpython was rare.  But we didn't want to
fail the frontend interface builds because of that.  So we arrived at
the current workaround.

My preference would be to rip all that out and let the compiler or
linker decide when it doesn't want to link something.

The alternative would be creating a configure check that mirrors the
current logic.  Arguably, however, the current logic is quite unworthy
of a configure check, because it's just a collection of
on-this-platform-do-that, instead of a real test.  Then again, a real
test would require building and loading a shared library in configure,
and we are not set up for that.

Preferences?





In general I prefer things not being available to be tested at configure 
time. After all, we check for libxml2, for example. Certainly silent 
failure is a bad thing.


cheers

andrew




Re: [HACKERS] pg_basebackup, tablespace mapping and path canonicalization

2015-04-28 Thread Ian Barwick


On 29/04/15 09:12, Bruce Momjian wrote:
 On Fri, Feb  6, 2015 at 08:25:42AM -0500, Robert Haas wrote:
 On Thu, Feb 5, 2015 at 10:21 PM, Ian Barwick i...@2ndquadrant.com wrote:
 I stumbled on what appears to be inconsistent handling of double slashes
 in tablespace paths when using pg_basebackup with the 
 -T/--tablespace-mapping
 option:

 ibarwick:postgresql (master)$ mkdir /tmp//foo-old
 [...]
 The attached patch adds the missing canonicalization; I can't see any
 reason not to do this. Thoughts?

 Seems OK to me.  Anyone think differently?
 
 Patch applied.

Thanks!


Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Temporal extensions

2015-04-28 Thread Dave Jones
Hi Jim,

On 28/04/15 03:49, Jim Nasby wrote:
 On 4/27/15 6:08 PM, Dave Jones wrote:
 (Though, I dislike using timestamps to do change/history tracking, but
 that's just me...)
 I've been playing around with history tracking (in the context of BI,
 typically with batch loaded reporting databases) for about 7-8 years now
 and always found timestamps perfect for the purpose, but are you perhaps
 referring to using it for audit purposes? If that's the case I'd agree
 entirely - this is absolutely the wrong tool for such things (which is
 something I need to put a bit more prominently in the docs - it's buried
 in the design section at the moment).
 
 Most warehouses dumb things down to a day level, so it's probably OK there.

That's certainly how I've always used this stuff (and how I've always
encouraged it to be used), although I must admit everyone's first
reaction is "great, let's use microsecond resolution to capture
_everything_!" followed in a few years time by "good grief, my table's
huge and I don't need most of the detail in it!". Oh well :)

 What I specifically don't like is that using a timestamp to try and
 determine the order in which something happened is just fraught with
 gotchas. For starters, now() is locked in when you do a BEGIN, but maybe
 a newer transaction modifies a table before an older one does. Now the
 ordering is *backwards*. You have the same problem with using an XID.
 The only way I've thought of to make this guaranteed safe is to somehow
 serialize the logging with something like
 
 CREATE TABLE customer_history(
   customer_hid serial primary key -- hid == history_id
   , previous_customer_hid int references customer_history
   , customer_id int NOT NULL references customer
 ...
 );
 CREATE UNIQUE INDEX ... ON customer_history(previous_customer_hid) WHERE
 previous_customer_hid IS NOT NULL;
 CREATE UNIQUE INDEX ... ON customer_history(customer_hid) WHERE
 previous_customer_hid IS NULL;
 
 and then have a before trigger enforce
 NEW.previous_customer_hid := customer_history__get_latest(customer_id)
 
 where customer_history__get_latest() will 'walk the chain' starting with
 the first link customer_id = blah AND previous_customer_id = NULL
 
 Because of the indexes that will serialize inserts on a per-customer
 basis. You could still run into problems with a newer snapshot creating
 a history record before a transaction with an older snapshot does
 though. :( Though, if you included txid_current_snapshot() with each
 record you could probably detect when that happens.

Yes, I noticed in my read through the older temporal threads that one of
the solutions used clock_timestamp() rather than current_timestamp which
seemed to be in order to avoid this sort of thing. Can't say I liked the
sound of it though - seemed like that would lead to even more
inconsistencies (as the timestamp for a changeset would potentially fall
in the middle of the transaction that made it ... urgh!).

I should make clear in the docs, that this sort of system isn't good for
ordering at that level (i.e. it's only accurate for history resolutions
significantly longer than any individual transaction length).

 Or did you mean ranges would be better? They certainly looked intriguing
 when I started moving this stuff to postgres, and I'd like to re-visit
 them in the near future as they offer capabilities I don't have with
 timestamps (such as guaranteeing no overlapping ranges via exclusion
 constraints) but my initial tests suggested some rather major
 performance degradation so I put it on the back-burner at first.
 
 If you're going to keep both a start and end for each record you'd
 definitely want to do it with a range. If you're only keeping the change
 time then you can handle it differently.

Yup, it's keeping a start and end (some of the routines in the extension
provide alternate transformations, but the base storage is always a
range of timestamps simply because that seems the easiest structure to
transform into others in practice).
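The exclusion-constraint capability mentioned above can be sketched like this (all names hypothetical; the btree_gist extension is needed to mix the = and && operators in one constraint):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- At most one validity range per customer at any instant:
CREATE TABLE customer_history (
    customer_id  int NOT NULL,
    valid_during tstzrange NOT NULL,
    name         text,
    EXCLUDE USING gist (customer_id WITH =, valid_during WITH &&)
);
```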

I'll work on a decent test case for the performance issues I ran into
(unfortunately the original one is full of proprietary data so I'll have
to come up with something similarly sized full of random bits).

Dave.




Re: [HACKERS] Freeze avoidance of very large table.

2015-04-28 Thread Robert Haas
On Fri, Apr 24, 2015 at 4:09 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 When I read that I think about something configurable at
 relation-level. There are cases where you may want to have more
 granularity of this information at block level by having the VM slots
 to track less blocks than 32, and vice-versa.

 What are those cases?  To me that sounds like making things
 complicated to no obvious benefit.

 Tables that get few/no dead tuples, like bulk insert tables. You'll have
 large sections of blocks with the same visibility.

I don't see any reason why that would require different granularity.

 I suspect the added code to allow setting 1 bit for multiple pages without
 having to lock all those pages simultaneously will probably outweigh making
 this a reloption anyway.

That's a completely unrelated issue.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel Seq Scan

2015-04-28 Thread Robert Haas
On Fri, Apr 24, 2015 at 8:32 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 - InitializeParallelWorkers() still mixes together general parallel
 executor concerns with concerns specific to parallel sequential scan
 (e.g. EstimatePartialSeqScanSpace).

 Here we are doing 2 things, first one is for planned statement and
 then second one is node specific which in the case is parallelheapscan
 descriptor. So If I understand correctly, you want that we remove second
 one and have a recursive function to achieve the same.

Right.

 - It's hard to believe this is right:

 +   if (parallelstmt-inst_options)
 +   receiver = None_Receiver;

 Really?  Flush the tuples if there are *any instrumentation options
 whatsoever*?  At the very least, that doesn't look too future-proof,
 but I'm suspicious that it's outright incorrect.

 instrumentation info is for explain statement where we don't need
 tuples and it is set same way for it as well, refer ExplainOnePlan().
 What makes you feel this is incorrect?

Well, for one thing, it's going to completely invalidate the result of
EXPLAIN.  I mean, consider this:

Hash Join
-> Parallel Seq Scan
-> Hash
  -> Seq Scan

If you have the workers throw away the rows from the parallel seq scan
instead of sending them back to the master, the master won't join
those rows against the other table.  And then the actual row counts,
timing, etc. will all be totally wrong.  Worse, if the user is
EXPLAIN-ing a SELECT INTO command, the results will be totally wrong.

I don't think you can use ExplainOnePlan() as precedent for the theory
that explain_options != 0 means discard everything, because that
function does not do that.  It bases the decision to throw away the
output on the fact that EXPLAIN was used, and throws it away unless an
IntoClause was also specified.  It does this even if
instrument_options == 0.  Meanwhile, auto_explain does NOT throw away
the output even if instrument_options != 0, nor should it!  But even
if none of that were an issue, throwing away part of the results from
an internal plan tree is not the same thing as throwing away the final
result stream, and is dead wrong.
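To make the hazard concrete, here is a hedged sketch (hypothetical table
and column names) of a query producing the plan shape above; with ANALYZE,
the workers must ship their rows to the master or every node above the
parallel scan reports meaningless actual-row counts and timings:

```sql
-- Hedged sketch: big_table/small_table are hypothetical.  If workers
-- discarded the parallel scan's rows, the Hash Join would see an empty
-- outer side and EXPLAIN ANALYZE's "actual rows" would be wrong; a
-- SELECT INTO run under EXPLAIN ANALYZE would even build a wrong table.
EXPLAIN (ANALYZE)
SELECT *
FROM   big_table   b                 -- outer side: Parallel Seq Scan
JOIN   small_table s USING (id);     -- inner side: hashed Seq Scan
```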

 - I think ParallelStmt probably shouldn't be defined in parsenodes.h.
 That file is included in a lot of places, and adding all of those
 extra #includes there doesn't seem like a good idea for modularity
 reasons even if you don't care about partial rebuilds.  Something that
 includes a shm_mq obviously isn't a parse node in any meaningful
 sense anyway.

 How about tcop/tcopprot.h?

The comment of that file is "prototypes for postgres.c".

Generally, unless there is some reason to do otherwise, the prototypes
for a .c file in src/backend go in a .h file with the same name in
src/include.  I don't see why we should do differently here.
ParallelStmt should be defined and used in a file living in
src/backend/executor, and the header should have the same name and go
in src/include/executor.

 - I don't think you need both setup cost and startup cost.  Starting
 up more workers isn't particularly more expensive than starting up
 fewer of them, because most of the overhead is in waiting for them to
 actually start, and if the number of workers is reasonable, they'll
 all be doing that in parallel with each other.  I suggest removing
 parallel_startup_cost and keeping parallel_setup_cost.

 There is some work (like creation of shm queues, launching of workers)
 which is done in proportion to the number of workers during setup time. I
 have kept two parameters to distinguish such work.  I think you have a
 point that the start of some or all workers could be parallel, but I feel
 that is still work proportional to the number of workers.  For future
 parallel operations such a parameter could also be useful where we need
 to set up IPC between workers or some other stuff where work is
 proportional to the number of workers.

That's technically true, but the incremental work involved in
supporting a new worker is extremely small compared with worker startup
times.  I'm guessing that the setup cost is going to be on the order
of hundreds of thousands or millions and the startup cost is going to
be on the order of tens or ones.  Unless you can present some contrary
evidence, I think we should rip it out.

And I actually hope you *can't* present some contrary evidence.
Because if you can, then that might mean that we need to cost every
possible path from 0 up to N workers and let the costing machinery
decide which one is better.  If you can't, then we can cost the
non-parallel path and the maximally-parallel path and be done.  And
that would be much better, because it will be faster.  Remember, just
because we cost a bunch of parallel paths doesn't mean that any of
them will actually be chosen.  We need to avoid generating too much
additional planner work in cases where we don't end up deciding on
parallelism anyway.

 - In cost_funnel(), I don't think it's right to divide the run cost by
 nWorkers + 1.  Suppose we've got 

Re: [HACKERS] Can pg_dump make use of CURRENT/SESSION_USER

2015-04-28 Thread Fabrízio de Royes Mello
On Tue, Apr 28, 2015 at 9:38 AM, Robert Haas robertmh...@gmail.com wrote:

 On Sat, Apr 25, 2015 at 8:05 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
If we ever implement something like
   
COMMENT ON CURRENT_DATABASE IS ...
   
it will be useful, because you will be able to restore a dump into
another database and have the comment apply to the target
database.
  
    I think it's simple to implement, but how about pg_dump... we need to
    add a new option (like --use-current-database), or am I missing something?
 
  I think we'd just change it to use the new syntax, full stop.  I see
  no need for an option.
 
  I'm returning on this...
 
  What's the reasonable syntaxes?
 
  COMMENT ON CURRENT DATABASE IS 'text';
 
  or
 
  COMMENT ON DATABASE { CURRENT_DATABASE | object_name } IS 'text';

 The second one would require making CURRENT_DATABASE a reserved
 keyword, and I'm not keen to create any more of those.  I like the
 first one.  The other alternative that may be worth considering is:

 COMMENT ON CURRENT_DATABASE IS 'text';

 That doesn't require making CURRENT_DATABASE a reserved keyword, but
 it does require making it a keyword, and it doesn't look very SQL-ish.
 Still, we have a bunch of other CURRENT_FOO keywords.

 But I'm inclined to stick with your first proposal.


Attached the patch to support COMMENT ON CURRENT DATABASE IS ...
(including pg_dump).
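
For reference, the syntax variants weighed above, side by side (only the
first is what the attached patch implements):

```sql
COMMENT ON CURRENT DATABASE IS 'Development Database';      -- implemented
COMMENT ON DATABASE my_database IS 'Development Database';  -- existing form
-- COMMENT ON CURRENT_DATABASE IS '...';  -- rejected: needs a new keyword
```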

On my next spare time I'll send the ALTER ROLE ... IN CURRENT DATABASE
patch.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello
diff --git a/doc/src/sgml/ref/comment.sgml b/doc/src/sgml/ref/comment.sgml
index 656f5aa..b080106 100644
--- a/doc/src/sgml/ref/comment.sgml
+++ b/doc/src/sgml/ref/comment.sgml
@@ -30,6 +30,7 @@ COMMENT ON
  CONSTRAINT <replaceable class="PARAMETER">constraint_name</replaceable> ON <replaceable class="PARAMETER">table_name</replaceable> |
  CONSTRAINT <replaceable class="PARAMETER">constraint_name</replaceable> ON DOMAIN <replaceable class="PARAMETER">domain_name</replaceable> |
  CONVERSION <replaceable class="PARAMETER">object_name</replaceable> |
+  CURRENT DATABASE |
  DATABASE <replaceable class="PARAMETER">object_name</replaceable> |
  DOMAIN <replaceable class="PARAMETER">object_name</replaceable> |
  EXTENSION <replaceable class="PARAMETER">object_name</replaceable> |
@@ -92,6 +93,11 @@ COMMENT ON
  </para>
 
  <para>
+   The CURRENT DATABASE means the comment will be applied to the database
+   where the command is executed.
+  </para>
+
+  <para>
    Comments can be viewed using <application>psql</application>'s
    <command>\d</command> family of commands.
    Other user interfaces to retrieve comments can be built atop
@@ -301,6 +307,7 @@ COMMENT ON COLUMN my_table.my_column IS 'Employee ID number';
 COMMENT ON CONVERSION my_conv IS 'Conversion to UTF8';
 COMMENT ON CONSTRAINT bar_col_cons ON bar IS 'Constrains column col';
 COMMENT ON CONSTRAINT dom_col_constr ON DOMAIN dom IS 'Constrains col of domain';
+COMMENT ON CURRENT DATABASE IS 'Current Database Comment';
 COMMENT ON DATABASE my_database IS 'Development Database';
 COMMENT ON DOMAIN my_domain IS 'Email Address Domain';
 COMMENT ON EXTENSION hstore IS 'implements the hstore data type';
diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c
index 6d8c006..db9b3c5 100644
--- a/src/backend/commands/comment.c
+++ b/src/backend/commands/comment.c
@@ -43,25 +43,29 @@ CommentObject(CommentStmt *stmt)
 	ObjectAddress address = InvalidObjectAddress;
 
 	/*
-	 * When loading a dump, we may see a COMMENT ON DATABASE for the old name
-	 * of the database.  Erroring out would prevent pg_restore from completing
-	 * (which is really pg_restore's fault, but for now we will work around
-	 * the problem here).  Consensus is that the best fix is to treat wrong
-	 * database name as a WARNING not an ERROR; hence, the following special
-	 * case.  (If the length of stmt->objname is not 1, get_object_address
+	 * If the length of stmt->objname is 1 then the COMMENT ON DATABASE command
+	 * was used, else COMMENT ON CURRENT DATABASE was used instead.
+	 * (If the length of stmt->objname is not 1, get_object_address
 	 * will throw an error below; that's OK.)
 	 */
-	if (stmt->objtype == OBJECT_DATABASE && list_length(stmt->objname) == 1)
+	if (stmt->objtype == OBJECT_DATABASE)
 	{
-		char	   *database = strVal(linitial(stmt->objname));
-
-		if (!OidIsValid(get_database_oid(database, true)))
+		/* COMMENT ON DATABASE name */
+		if (list_length(stmt->objname) == 1)
 		{
-			ereport(WARNING,
-					(errcode(ERRCODE_UNDEFINED_DATABASE),
-					 errmsg("database \"%s\" does not exist", database)));
-			return address;
+			char	   *database = strVal(linitial(stmt->objname));
+
+			if (!OidIsValid(get_database_oid(database, true)))
+			{
+				ereport(WARNING,
+		

Re: [HACKERS] Proposal: knowing detail of config files via SQL

2015-04-28 Thread David Steele
On 4/27/15 10:37 PM, Sawada Masahiko wrote:
 
 Thank you for your reviewing.
 Attached v8 patch is latest version.

I posted a review through the CF app but it only went to the list
instead of the recipients of the latest message.  installcheck-world is
failing but the fix is pretty trivial.

See:
http://www.postgresql.org/message-id/20150428145626.2632.75287.p...@coridan.postgresql.org

-- 
- David Steele
da...@pgmasters.net



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Proposal: knowing detail of config files via SQL

2015-04-28 Thread David Steele
The following review has been posted through the commitfest application:
make installcheck-world:  tested, failed
Implements feature:   tested, passed
Spec compliant:   tested, passed
Documentation:tested, passed

Looks good overall, but make installcheck-world does not pass.  rules.sql
outputs all system views, so pg_file_settings will need to be added:

*** src/src/test/regress/expected/rules.out 2015-04-24 12:11:15.0 -0400
--- src/src/test/regress/results/rules.out  2015-04-28 10:44:59.0 -0400
***
*** 1308,1313 
--- 1308,1319 
  c.is_scrollable,
  c.creation_time
 FROM pg_cursor() c(name, statement, is_holdable, is_binary, is_scrollable, creation_time);
+ pg_file_settings| SELECT a.sourcefile,
+ a.sourceline,
+ a.seqno,
+ a.name,
+ a.setting
+    FROM pg_show_all_file_settings() a(sourcefile, sourceline, seqno, name, setting);
  pg_group| SELECT pg_authid.rolname AS groname,
  pg_authid.oid AS grosysid,
  ARRAY( SELECT pg_auth_members.member




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Geoff Winkless
On 28 April 2015 at 15:46, Stephen Frost sfr...@snowman.net wrote:

 +1, NEW/OLD seem pretty natural and I'm not worried about what they look
 like in rules, and their usage in triggers matches up with what they'd
 mean here, I'd think.


Since I've stuck my head above the parapet once I figured I'd give my
2p's worth: IMHO NEW/OLD doesn't fit at all.

In triggers you're applying it to something that (without the trigger)
would be the new or old version of a matching row, so it's completely
intuitive; in this instance, without the ON CONFLICT there would never
be a "new", because it would be a failure.

MySQL uses VALUES(columnname) to reference the intended INSERT value
(what you might term NEW) and the target name to reference OLD. I
understand that people might think the bracketed syntax isn't very
pleasant because that looks like a function, but it seems more
reasonable than NEW (can we use VALUES.columnname?); finally I don't
see why we need an OLD (or TARGET) at all - am I missing the point?
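
For readers comparing the dialects, a side-by-side sketch (hypothetical
table t; the PostgreSQL form uses the EXCLUDED name as proposed in this
thread):

```sql
-- MySQL: VALUES(col) refers to the value that would have been inserted.
INSERT INTO t (id, val) VALUES (1, 'x')
  ON DUPLICATE KEY UPDATE val = VALUES(val);

-- PostgreSQL syntax under discussion: EXCLUDED plays the same role.
INSERT INTO t (id, val) VALUES (1, 'x')
  ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;
```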

Geoff


Re: [HACKERS] Auditing extension for PostgreSQL (Take 2)

2015-04-28 Thread David Steele
On 4/28/15 2:14 AM, Sawada Masahiko wrote:
 On Fri, Apr 24, 2015 at 3:23 AM, David Steele da...@pgmasters.net wrote:
 I've also added some checking to make sure that if anything looks funny
 on the stack an error will be generated.

 Thanks for the feedback!

 
 Thank you for updating the patch!
 I ran the postgres regression test on database which is enabled
 pg_audit, it works fine.
 Looks good to me.
 
 If someone don't have review comment or bug report, I will mark this
 as Ready for Committer.

Thank you!  I appreciate all your work reviewing this patch and of
course everyone else who commented on, reviewed or tested the patch
along the way.

-- 
- David Steele
da...@pgmasters.net





Re: [HACKERS] deparsing utility commands

2015-04-28 Thread Dimitri Fontaine
Hi,

As a comment to the whole email below, I think the following approach is
the best compromise we will find, allowing everybody to come up with
their own format much as in the Logical Decoding plugins world.

Of course, as in the Logical Decoding area, BDR and similar projects
will implement their own representation, in BDR IIUC the DDL will get
translated into a JSON based AST thing.

Alvaro Herrera alvhe...@2ndquadrant.com writes:
 Actually here's another approach I like better: use a new pseudotype,
 pg_ddl_command, which internally is just a pointer to a CollectedCommand
 struct.  We cannot give access to the pointer directly in SQL, so much
 like type internal or pg_node_tree the in/out functions should just
 error out.  But the type can be returned as a column in
 pg_event_trigger_ddl_command.  An extension can create a function that
 receives that type as argument, and return a different representation of
 the command; the third patch creates a function ddl_deparse_to_json()
 which does that.

 You can have as many extensions as you want, and your event triggers can
 use the column as many times as necessary.  This removes the limitation
 of the previous approach that you could only have a single extension
 providing a CommandDeparse_hook function.

 This patch applies cleanly on top of current master.  You need to
 install the extension with CREATE EXTENSION ddl_deparse after building
 it in contrib.

 Note: the extension is NOT to be committed to core, only the first two
 patches; that will give us more room to revisit the JSON representation
 more promptly.  My intention is that the extension will be published
 elsewhere.

+1 from me,

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Petr Jelinek

On 28/04/15 16:44, Andres Freund wrote:

On 2015-04-28 10:40:10 -0400, Stephen Frost wrote:

* Andres Freund (and...@anarazel.de) wrote:

On 2015-04-28 16:36:28 +0200, Petr Jelinek wrote:

I am also very sure that every time I'll write this statement I will have to
look into manual for the names of TARGET and EXCLUDED because they don't
seem intuitive to me at all (especially the EXCLUDED).


Same here. I don't understand why 'CONFLICTING' would be more ambiguous
than EXCLUDED (as Peter argued earlier). Especially given that the whole
syntax is called ON CONFLICT.


Any way we can alias it?  Both of those strike me as annoyingly long and
if we could allow an alias then people can do whatever they want...

No, I haven't got any suggestion on how to do that. :)

It's also something we can probably improve on in the future...


I earlier suggested NEW/OLD. I still think that's not too bad as I don't
buy the argument that they're too associated with rules.



Hmm, I would never think of rules when talking about NEW/OLD, the 
association I have is with triggers.


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Geoff Winkless
On 28 April 2015 at 15:57, I wrote:

 MySQL uses VALUES(columnname) to reference the intended INSERT value
 (what you might term NEW) and the target name to reference OLD. I
 understand that people might think the bracketed syntax isn't very pleasant
 because that looks like a function, but it seems more reasonable than NEW
 (can we use VALUES.columnname?);

On balance I think I don't like VALUES.column either, because although
it looks all fine when you're doing a single INSERT ... VALUES () it
gets confusing if you're INSERTing from a SELECT.

As you were. :(

Geoff


Re: [HACKERS] [COMMITTERS] pgsql: Add transforms feature

2015-04-28 Thread Michael Paquier
On Wed, Apr 29, 2015 at 5:15 AM, Peter Eisentraut pete...@gmx.net wrote:
 On 4/28/15 4:10 PM, Andrew Dunstan wrote:
 Do we actually have a Windows machine building with Python3?


 The answer seems to be probably not. When I tried enabling this with
 bowerbird I got a ton of errors like these:

plpy_cursorobject.obj : error LNK2001: unresolved external symbol
PyObject_SelfIter [C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]
plpy_cursorobject.obj : error LNK2019: unresolved external symbol
__imp_PyType_Ready referenced in function PLy_cursor_init_type
[C:\prog\bf\root\HEAD\pgsql\plpython3.vcxproj]


 Something else to fix I guess.

 I think at least at one point the community Windows binaries built by
 EnterpriseDB were built against Python 3 (which upset some users).  But
 they might not be using the msvc toolchain.

Er, aren't you simply trying to link with 32b libraries while building
in a 64b environment? I am able to make the build work with python 3.
-- 
Michael




Re: [HACKERS] shared_libperl, shared_libpython

2015-04-28 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On 4/28/15 12:05 AM, Tom Lane wrote:
 Yeah.  Even more specifically, olinguito does have --with-python in its
 configure flags, but then the plpython Makefile skips the build because
 libpython isn't available as a shared library.  But the contrib test is
 (I assume, haven't looked) conditional only on the configure flag.
 
 I'm not real sure now why we felt that was a good approach.  The general
 project policy is that if you ask for a feature in the configure flags,
 we'll build it or die trying; how come this specific Python issue gets
 special treatment contrary to that policy?

 The reason for the current setup is actually that when plperl and later
 plpython was added, we still had Perl and Python client modules in our
 tree (Pg.pm and pygresql), and configure --with-perl and --with-python
 were meant to activate their build primarily.  Also, in those days,
 having a shared libperl or libpython was rare.  But we didn't want to
 fail the frontend interface builds because of that.  So we arrived at
 the current workaround.

Ah.  I'm glad you remember, because I didn't.

 My preference would be to rip all that out and let the compiler or
 linker decide when it doesn't want to link something.

Works for me, assuming that we get an understandable failure message and
not, say, a plperl.so that mysteriously doesn't work.

regards, tom lane




Re: [HACKERS] FIX : teach expression walker about RestrictInfo

2015-04-28 Thread Tom Lane
Tomas Vondra tomas.von...@2ndquadrant.com writes:
 On 04/28/15 21:50, Tom Lane wrote:
 RestrictInfo is not a general expression node and support for it has
 been deliberately omitted from expression_tree_walker().  So I think
 what you are proposing is a bad idea and probably a band-aid for some
 other bad idea.

 That's not what I said, though. I said that calling pull_varnos() causes 
 the issue - apparently that does not happen in master, but I ran into 
 that when hacking on a patch.

pull_varnos is not, and should not be, applied to a RestrictInfo; for one
thing, it'd be redundant with work that was already done when creating the
RestrictInfo (cf make_restrictinfo_internal).  You've not shown the
context of your problem very fully, but in general most code in the planner
should be well aware of whether it is working with a RestrictInfo (or list
of same) or a bare expression.  I don't believe that it's a very good idea
to obscure that distinction.

regards, tom lane




Re: [HACKERS] improving speed of make check-world

2015-04-28 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 On Tue, Apr 28, 2015 at 1:46 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 This change fixed the problem for me.
 It also made this age-old compiler warning go away:
 
 In file included from gram.y:14515:
 scan.c: In function 'yy_try_NUL_trans':
 scan.c:10307: warning: unused variable 'yyg'
 
 I guess by redirecting it into the log file you indicated, but is that a
 good idea to redirect stderr?

 I am sure that Peter did that on purpose, both approaches having
 advantages and disadvantages. Personally I don't mind looking at the
 install log file in tmp_install to see the state of the installation,
 but it is true that this change is a bit disturbing regarding the fact
 that everything was directly outputted to stderr and stdout for many
 years.

Hm.  I don't really like the idea that make all behaves differently
if invoked by make check than if invoked directly.  Hiding warnings
from you is not very good, and hiding errors would be even more annoying.

regards, tom lane




Re: [HACKERS] Moving on to close the current CF 2015-02

2015-04-28 Thread Magnus Hagander
On Fri, Apr 17, 2015 at 4:57 PM, Magnus Hagander mag...@hagander.net
wrote:

 On Fri, Apr 17, 2015 at 9:23 AM, Michael Paquier 
 michael.paqu...@gmail.com wrote:

 On Fri, Apr 17, 2015 at 4:22 PM, Michael Paquier wrote:
  @Magnus: having the possibility to mark a patch as returned with
  feedback without bumping it to the next CF automatically would be
  cool to being moving on.
 Meh. cool to have to help moving on.


 Yeah, it's at the top of my list of priorities once I get some time to
 spend on community stuff. Hopefully I can get around to it next week. There
 is a small chance I can do it before then, but it is indeed small...


My apologies for that being delayed even longer than that. I've finally
pushed the changes that:

* Renames the current "returned with feedback" to "moved to next cf"
* Adds a new status, "returned with feedback", that is the same as
"rejected" in everything except the label (meaning it closes the patch out,
but does *not* move it to the next CF).

This was at least my understanding of the consensus :)


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 9:42 AM, Stephen Frost sfr...@snowman.net wrote:
 I agree with that, but how are NEW and OLD ambiguous?  NEW is clearly
 the tuple being added, while OLD is clearly the existing tuple.

Yes, but EXCLUDED is neither the tuple being added, nor is it the new
tuple. It's something else entirely.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 7:36 AM, Petr Jelinek p...@2ndquadrant.com wrote:
 I am also very sure that every time I'll write this statement I will have to
 look into manual for the names of TARGET and EXCLUDED because they don't
 seem intuitive to me at all (especially the EXCLUDED).

According to wordcount.org, the word exclude is ranked # 5796 in the
English language in terms of frequency of use. It's in the vocabulary
of 6 year olds.

I don't know why you find it counter-intuitive, since it perfectly
describes the purpose of the tuple.
-- 
Peter Geoghegan




Re: [HACKERS] [PATCH] Add transforms feature

2015-04-28 Thread Jeff Janes
On Tue, Apr 7, 2015 at 7:55 PM, Peter Eisentraut pete...@gmx.net wrote:

 On 3/22/15 5:46 AM, Pavel Stehule wrote:
  Isn't better doesn't support TRANSFORM ALL clause? If somebody would
  to use transformations - then he have to explicitly enable it by
  TRANSFORM FOR TYPE ? It is safe and without possible user
 unexpectations.

 Following our off-list conversation, here is a new patch that removes
 the TRANSFORM ALL/NONE clauses and requires an explicit list.


Hi Peter,

This commit is causing a compiler warning for me in non-cassert builds:

funcapi.c: In function 'get_func_trftypes':
funcapi.c:890: warning: unused variable 'procStruct'

Adding PG_USED_FOR_ASSERTS_ONLY seems to fix it.

Cheers,

Jeff


transform_unused.patch
Description: Binary data



Re: [HACKERS] WIP: multivariate statistics / proof of concept

2015-04-28 Thread Jeff Janes
On Mon, Mar 30, 2015 at 5:26 PM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:

 Hello,

 attached is a new version of the patch series. Aside from fixing various
 issues (crashes, memory leaks). The patches are rebased to current
 master, and I also attach a few SQL scripts I used for testing (nothing
 fancy, just stress-testing all the parts the patch touches).


Hi Tomas,

I get cascading conflicts in pg_proc.h.  It looked easy enough to fix,
except then I get compiler errors:

funcapi.c: In function 'get_func_trftypes':
funcapi.c:890: warning: unused variable 'procStruct'
utils/fmgrtab.o:(.rodata+0x10cf8): undefined reference to `_null_'
utils/fmgrtab.o:(.rodata+0x10d18): undefined reference to `_null_'
utils/fmgrtab.o:(.rodata+0x10d38): undefined reference to `_null_'
utils/fmgrtab.o:(.rodata+0x10d58): undefined reference to `_null_'
collect2: ld returned 1 exit status
make[2]: *** [postgres] Error 1
make[1]: *** [all-backend-recurse] Error 2
make: *** [all-src-recurse] Error 2
make: *** Waiting for unfinished jobs
make: *** [temp-install] Error 2


Cheers,

Jeff


Re: [HACKERS] [COMMITTERS] pgsql: Add transforms feature

2015-04-28 Thread Andrew Dunstan



Oops, thought I'd changed the address line.


[switching to -hackers]

On 04/28/2015 11:51 AM, Andrew Dunstan wrote:


On 04/28/2015 09:04 AM, Michael Paquier wrote:

On Tue, Apr 28, 2015 at 10:01 AM, Michael Paquier
michael.paqu...@gmail.com wrote:

On Tue, Apr 28, 2015 at 9:55 AM, Andrew Dunstan wrote:

w.r.t MSVC builds, it looks like we need entries in
$contrib_extraincludes
in src/tools/msvc/Mkvcbuild.pm at the very least.

If our goal is to put the Windows nodes back to green as quickly as
possible, we could bypass their build this way, and we'll
additionally need to update the install script and
vcregress.pl/contribcheck to bypass those modules accordingly. Now I
don't think that we should make the things produced inconsistent.

OK, attached are two patches to put back the buildfarm nodes using
MSVC to green
- 0001 adds support for build and installation of the new transform
modules: hstore_plperl, hstore_plpython and  ltree_plpython. Note that
this patch is enough to put back the buildfarm to a green state for
MSVC, but it disables the regression tests for those modules,
something addressed in the next patch...
- 0002 adds support for regression tests for the new modules. The
thing is that we need to check the major version version of Python
available in configuration and choose what are the extensions to
preload before the tests run. It is a little bit hacky... But it has
the merit to work, and I am not sure we could have a cleaner solution
by looking at the Makefiles of the transform modules that use
currently conditions based on $(python_majorversion).
Regards,



I have pushed the first of these with some tweaks.

I'm looking at the second.





Regarding this second patch - apart from the buggy python version test
which I just fixed in the first patch, I see this:

   +   if ($pyver eq 2)
   +   {
   +   push @opts, --load-extension=plpythonu;
   +   push @opts, '--load-extension=' . $module . 'u';
   +   }


But what do we do for Python3?

Do we actually have a Windows machine building with Python3?

cheers

andrew










Re: [HACKERS] WIP: multivariate statistics / proof of concept

2015-04-28 Thread Stephen Frost
* Jeff Janes (jeff.ja...@gmail.com) wrote:
 On Mon, Mar 30, 2015 at 5:26 PM, Tomas Vondra tomas.von...@2ndquadrant.com
 wrote:
  attached is a new version of the patch series. Aside from fixing various
  issues (crashes, memory leaks). The patches are rebased to current
  master, and I also attach a few SQL scripts I used for testing (nothing
  fancy, just stress-testing all the parts the patch touches).
 
 I get cascading conflicts in pg_proc.h.  It looked easy enough to fix,
 except then I get compiler errors:

Yeah, those are because you didn't address the new column which was
added to pg_proc.  You need to add another _null_ in the pg_proc.h lines
in the correct place, apparently on four lines.

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 7:38 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-04-28 16:36:28 +0200, Petr Jelinek wrote:
 I am also very sure that every time I'll write this statement I will have to
 look into manual for the names of TARGET and EXCLUDED because they don't
 seem intuitive to me at all (especially the EXCLUDED).

 Same here. I don't understand why 'CONFLICTING' would be more ambiguous
 than EXCLUDED (as Peter argued earlier). Especially given that the whole
 syntax is called ON CONFLICT.

Because the TARGET and EXCLUDED tuples conflict with each other -
they're both conflicting.
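
A sketch using the names as proposed at this point in the thread
(hypothetical table counters; TARGET denotes the existing row, EXCLUDED
the row proposed for insertion):

```sql
INSERT INTO counters (key, n) VALUES ('hits', 1)
ON CONFLICT (key) DO UPDATE
   SET n = TARGET.n + EXCLUDED.n;  -- both conflicting tuples are visible
```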


-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
 On Tue, Apr 28, 2015 at 7:38 AM, Andres Freund and...@anarazel.de wrote:
  On 2015-04-28 16:36:28 +0200, Petr Jelinek wrote:
  I am also very sure that every time I'll write this statement I will have 
  to
  look into manual for the names of TARGET and EXCLUDED because they don't
  seem intuitive to me at all (especially the EXCLUDED).
 
  Same here. I don't understand why 'CONFLICTING' would be more ambiguous
  than EXCLUDED (as Peter argued earlier). Especially given that the whole
  syntax is called ON CONFLICT.
 
 Because the TARGET and EXCLUDED tuples conflict with each other -
 they're both conflicting.

I agree with that, but how are NEW and OLD ambiguous?  NEW is clearly
the tuple being added, while OLD is clearly the existing tuple.

Now that I think about it though, where that'd get ugly is using this
command *inside* a trigger function, which I can certainly imagine
people will want to do...

Thanks,

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 9:45 AM, Peter Geoghegan p...@heroku.com wrote:
 Yes, but EXCLUDED is neither the tuple being added, nor is it the new
 tuple. It's something else entirely.


I mean, nor is it the existing tuple.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
 On Tue, Apr 28, 2015 at 9:42 AM, Stephen Frost sfr...@snowman.net wrote:
  I agree with that, but how are NEW and OLD ambiguous?  NEW is clearly
  the tuple being added, while OLD is clearly the existing tuple.
 
 Yes, but EXCLUDED is neither the tuple being added, nor is it the new
 tuple. It's something else entirely.

I don't see that, it's exactly the tuple attempting to be inserted.  I
agree that it might not be the tuple originally in the INSERT statement
due to before triggers, but there isn't an alias anywhere for that.

Now, in 99% of cases there aren't going to be before triggers so I'm not
particularly worried about that distinction, nor do I think we need to
provide an alias for the tuple from the INSERT piece of the clause, but
to say that EXCLUDED isn't the tuple being added doesn't make any sense
to me, based on how I read the documentation proposed here:

http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/on-conflict-docs/sql-insert.html

I'll further point out that the documentation doesn't bother to make the
before-trigger distinction always either:

 Note that an EXCLUDED expression is used to reference values originally
 proposed for insertion:

Perhaps I've missed something, but that seems to go along with the
notion that EXCLUDED references the tuple that we're attempting to add
through the INSERT, and that's certainly what I'd consider NEW to be
also.

I do think there's a real issue with using OLD/NEW because of their
usage in triggers, and I don't see any good solution to that issue. :(

Thanks,

Stephen




Re: [HACKERS] tablespaces inside $PGDATA considered harmful

2015-04-28 Thread Bruce Momjian
On Fri, Apr 24, 2015 at 01:05:03PM -0400, Bruce Momjian wrote:
 This way, both pg_dump and pg_upgrade will issue warnings, though, of
 course, those warnings can be ignored.  I am hopeful these two warnings
 will be sufficient and we will not need to make these errors, with the
 possible inconvenience it will cause.  I am still afraid that someone
 will ignore the new errors pg_dump would generate and lose data.  I just
 don't remember enough cases where we threw new errors on _data_ restore.
 
 Frankly, those using pg_upgrade already will have to move the old
 tablespaces out of the old cluster if they ever want to delete those
 clusters, so I am hopeful these additional warnings will help eliminate
 this practice, which is already cumbersome and useless.  I am not
 planning to revisit this for 9.6.

Patch applied.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] FIX : teach expression walker about RestrictInfo

2015-04-28 Thread Tomas Vondra

Hi,

On 04/28/15 21:50, Tom Lane wrote:

Tomas Vondra tomas.von...@2ndquadrant.com writes:

the attached trivial patch adds handling of RestrictInfo nodes into
expression_tree_walker().


RestrictInfo is not a general expression node and support for it has
been deliberately omitted from expression_tree_walker().  So I think
what you are proposing is a bad idea and probably a band-aid for some
other bad idea.


This is needed for example when calling
pull_varnos or (or other functions using the expression walker) in
clausesel.c, for example. An example of a query causing errors with
pull_varnos is



select * from t where (a = 10 and a = 20) or (b = 15 and b = 20);


Really?

regression=# create table t (a int, b int);
CREATE TABLE
regression=# select * from t where (a = 10 and a = 20) or (b = 15 and b = 
20);
  a | b
---+---
(0 rows)


That's not what I said, though. I said that calling pull_varnos() causes 
the issue - apparently that does not happen in master, but I ran into 
that when hacking on a patch.


For example try adding this

Relids tmp = pull_varnos(clause);
elog(WARNING, "count = %d", bms_num_members(tmp));

into the or_clause branch in clause_selectivity(), and then running the 
query will give you this:


db=# select * from t where (a = 10 and a = 20) or (b = 15);
ERROR:  unrecognized node type: 524

But as I said - maybe calls to pull_varnos are not supposed to happen in 
this part of the code, for some reason, and it really is a bug in my patch.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] shared_libperl, shared_libpython

2015-04-28 Thread Peter Eisentraut
On 4/28/15 12:05 AM, Tom Lane wrote:
 Yeah.  Even more specifically, olinguito does have --with-python in its
 configure flags, but then the plpython Makefile skips the build because
 libpython isn't available as a shared library.  But the contrib test is
 (I assume, haven't looked) conditional only on the configure flag.
 
 I'm not real sure now why we felt that was a good approach.  The general
 project policy is that if you ask for a feature in the configure flags,
 we'll build it or die trying; how come this specific Python issue gets
 special treatment contrary to that policy?
 
 So I'd vote for changing that to put the shared-library test in configure,
 or at least make it a build failure later on, not silently don't build
 what was asked for.  That would mean olinguito's configure flags would
 have to be adjusted.
 
 Plan B would require propagating the shared-libpython test into the
 contrib makefiles, which seems pretty unpalatable even if you're willing
 to defend the status quo otherwise.

I had tried plan B prime, moving the shared_libpython etc. detection
into Makefile.global.  But that doesn't work because contrib/pgxs
makefiles require setting all variables *before* including the global
makefiles.  So you can't decide whether you want to build something
before you have told it everything you want to build.

The reason for the current setup is actually that when plperl and later
plpython was added, we still had Perl and Python client modules in our
tree (Pg.pm and pygresql), and configure --with-perl and --with-python
were meant to activate their build primarily.  Also, in those days,
having a shared libperl or libpython was rare.  But we didn't want to
fail the frontend interface builds because of that.  So we arrived at
the current workaround.

My preference would be to rip all that out and let the compiler or
linker decide when it doesn't want to link something.

The alternative would be creating a configure check that mirrors the
current logic.  Arguably, however, the current logic is quite unworthy
of a configure check, because it's just a collection of
on-this-platform-do-that, instead of a real test.  Then again, a real
test would require building and loading a shared library in configure,
and we are not set up for that.

Preferences?




Re: [HACKERS] COPY and file_fdw with fixed column widths

2015-04-28 Thread Andrew Dunstan


On 04/28/2015 03:50 PM, Bruce Momjian wrote:

On Tue, Apr 28, 2015 at 12:46:22PM -0700, Tom Lane wrote:

Bruce Momjian br...@momjian.us writes:

I know COPY doesn't support importing files with fixed column widths,
i.e. we can't say field1 is the first 30 characters, and field2 is the
rest of the line.  We need a unique delimiter at column 31.  (Commercial
Ingres does support this ability.)
I know we tell most people to use sed, Perl, or an ETL tool to convert
files into a format COPY understands, and I think that is a reasonable
answer.  However, the file_fdw also reads our COPY format, and in that
case, the data file might be updated regularly and running an ETL
process on it every time it is read is inconvenient.

COPY is, and has always been intended to be, as fast as possible; loading
format transformation abilities onto it seems like a fundamental mistake.
Therefore, if you wish file_fdw were more flexible, I think the answer is
to create a variant of file_fdw that doesn't use COPY but some other
mechanism.

Yes, I think this is a missing feature.  While we can tell people to do
ETL for loading, we are really not doing that for file_fdw.



This needs some love, but it's probably along the lines you need.

https://github.com/adunstan/file_fixed_length_record_fdw

cheers

andrew





Re: [HACKERS] Freeze avoidance of very large table.

2015-04-28 Thread Jim Nasby

On 4/28/15 7:11 AM, Robert Haas wrote:

On Fri, Apr 24, 2015 at 4:09 PM, Jim Nasby jim.na...@bluetreble.com wrote:

When I read that I think about something configurable at
relation-level. There are cases where you may want to have more
granularity of this information at block level by having the VM slots
to track less blocks than 32, and vice-versa.


What are those cases?  To me that sounds like making things
complicated to no obvious benefit.


Tables that get few/no dead tuples, like bulk insert tables. You'll have
large sections of blocks with the same visibility.

I don't see any reason why that would require different granularity.


Because in those cases it would be trivial to drop XMIN out of the tuple 
headers. For a warehouse with narrow rows that could be a significant 
win. Moreso, we could also move XMAX to the page level if we accept that 
if we need to invalidate any tuple we'd have to move all of them. In a 
warehouse situation that's probably OK as well.


That said, I don't think this is the first place to focus for reducing 
our on-disk format; reducing cleanup bloat would probably be a lot more 
useful.


Did you or Jan have more detailed info from the test he ran about where 
our 80% overhead was ending up? That would remove a lot of speculation 
here...

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] cache invalidation for PL/pgsql functions

2015-04-28 Thread Pavel Stehule
2015-04-28 19:43 GMT+02:00 Robert Haas robertmh...@gmail.com:

 The following behavior surprised me, and a few other people at
 EnterpriseDB, and one of our customers:

 rhaas=# create table foo (a int);
 CREATE TABLE
 rhaas=# create or replace function test (x foo) returns int as $$begin
 return x.b; end$$ language plpgsql;
 CREATE FUNCTION
 rhaas=# alter table foo add column b int;
 ALTER TABLE
 rhaas=# select test(null::foo);
 ERROR:  record x has no field b
 LINE 1: SELECT x.b
^
 QUERY:  SELECT x.b
 CONTEXT:  PL/pgSQL function test(foo) line 1 at RETURN
 rhaas=# \c
 You are now connected to database rhaas as user rhaas.
 rhaas=# select test(null::foo);
  test
 --

 (1 row)

 I hate to use the term bug for what somebody's probably going to
 tell me is acceptable behavior, but that seems like a bug.  I guess
the root of the problem is that PL/pgSQL's cache invalidation logic
 only considers the pg_proc row's TID and xmin when deciding whether to
 recompile.  For base types that's probably OK, but for composite
 types, not so much.

 Thoughts?


It is inconsistent - and I know one of the bigger Czech companies that
uses Postgres had a small outage because they hit this issue when they
deployed a new version of a procedure.

The question is what the cost of fixing it would be.

Regards

Pavel


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company





Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Merlin Moncure
On Tue, Apr 28, 2015 at 12:44 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 On 4/27/15 10:06 PM, Andrew Dunstan wrote:

 My point remains that we really need methods of a) getting the field
 names from generic records and b) using text values to access fields of
 generic records, both as lvalues and rvalues. Without those this feature
 will be of comparatively little value, IMNSHO. With them it will be much
 more useful and  powerful.


 Sure, and if I had some pointers on what was necessary there I'd take a look
 at it. But I'm not very familiar with plpgsql (let alone what we'd need to
 do this in SQL), so I'd just be fumbling around. As a reminder, one of the
 big issues there seems to be that while plSQL knows what the underlying type
 is, plpgsql has no idea, which seriously limits the use of passing it a
 record.

 In the meantime I've got a patch that definitely works for plSQL and allows
 you to handle a record and pass it on to other functions (such as
 json_from_record()). Since that's my original motivation for looking at
 this, I'd like that patch to be considered unless there's a big drawback to
 it that I'm missing. (For 9.6, of course.)

I think it's pretty useful actually.

merlin




Re: [HACKERS] WIP: multivariate statistics / proof of concept

2015-04-28 Thread Jeff Janes
On Tue, Apr 28, 2015 at 9:13 AM, Stephen Frost sfr...@snowman.net wrote:

 * Jeff Janes (jeff.ja...@gmail.com) wrote:
  On Mon, Mar 30, 2015 at 5:26 PM, Tomas Vondra 
 tomas.von...@2ndquadrant.com
  wrote:
   attached is a new version of the patch series. Aside from fixing
 various
   issues (crashes, memory leaks). The patches are rebased to current
   master, and I also attach a few SQL scripts I used for testing (nothing
   fancy, just stress-testing all the parts the patch touches).
 
  I get cascading conflicts in pg_proc.h.  It looked easy enough to fix,
  except then I get compiler errors:

 Yeah, those are because you didn't address the new column which was
 added to pg_proc.  You need to add another _null_ in the pg_proc.h lines
 in the correct place, apparently on four lines.


Thanks.  I think I tried that, but was still having trouble.  But it turns
out that the trouble was for an unrelated reason, and I got it to compile
now.

Some of the fdw's need a patch as well in order to compile, see attached.

Cheers,

Jeff


multivariate_contrib.patch
Description: Binary data



Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread David G. Johnston
On Tue, Apr 28, 2015 at 9:58 AM, Stephen Frost sfr...@snowman.net wrote:

 * Peter Geoghegan (p...@heroku.com) wrote:
  On Tue, Apr 28, 2015 at 9:42 AM, Stephen Frost sfr...@snowman.net
 wrote:
   I agree with that, but how are NEW and OLD ambiguous?  NEW is clearly
   the tuple being added, while OLD is clearly the existing tuple.
 
  Yes, but EXCLUDED is neither the tuple being added, nor is it the new
  tuple. It's something else entirely.


So?  I see this as a prime case for choosing practicality/functionality
over precision.

If I was to pick 2 words I would probably pick PROPOSED and EXISTING.

But, the syntax is already verbose and being able to use a three-letter
reference has its own appeal.


 I don't see that, it's exactly the tuple attempting to be inserted.  I
 agree that it might not be the tuple originally in the INSERT statement
 due to before triggers, but there isn't an alias anywhere for that.

 Now, in 99% of cases there aren't going to be before triggers so I'm not
 particularly worried about that distinction, nor do I think we need to
 provide an alias for the tuple from the INSERT piece of the clause, but
 to say that EXCLUDED isn't the tuple being added doesn't make any sense
 to me, based on how I read the documentation proposed here:


 http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/on-conflict-docs/sql-insert.html


This example exemplifies the poorness of the proposed wording, IMO:

[...]
SET dname = EXCLUDED.dname || ' (formerly ' || TARGET.dname || ')'

NEW.dname || ' (formerly ' || OLD.dname || ')' reads perfectly well.

Yes, this is an isolated example... but am I missing the fact that there is
a third tuple that needs to be referenced?

If there are only two the choices of NEW and OLD seem to be both easily
learned and readable.

David J.


Re: [HACKERS] PL/pgSQL, RAISE and error context

2015-04-28 Thread Jim Nasby

On 4/28/15 1:16 AM, Pavel Stehule wrote:


I think it can't be any clearer than the proposed
plpgsql.display_context_min_messages


client_min_context. It's doing the same thing as min_messages does,
just for context instead of the message.

Or does this affect client and log the same way?


it affect client and log together

maybe min_context


+1
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Jim Nasby

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and  powerful.


Sure, and if I had some pointers on what was necessary there I'd take a 
look at it. But I'm not very familiar with plpgsql (let alone what we'd 
need to do this in SQL), so I'd just be fumbling around. As a reminder, 
one of the big issues there seems to be that while plSQL knows what the 
underlying type is, plpgsql has no idea, which seriously limits the use 
of passing it a record.


In the meantime I've got a patch that definitely works for plSQL and 
allows you to handle a record and pass it on to other functions (such as 
json_from_record()). Since that's my original motivation for looking at 
this, I'd like that patch to be considered unless there's a big drawback 
to it that I'm missing. (For 9.6, of course.)

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Fwd: [GENERAL] 4B row limit for CLOB tables

2015-04-28 Thread Jim Nasby

On 4/28/15 5:41 AM, José Luis Tallón wrote:

On 04/27/2015 08:49 AM, Jim Nasby wrote:

On 4/25/15 1:19 PM, Bruce Momjian wrote:

Note if you are storing a table with rows that exceed 2KB in size
(aggregate size of each row) then the Maximum number of rows in a
table may be limited to 4 Billion, see TOAST.


That's not accurate though; you could be limited to far less than 4B
rows. If each row has 10 fields that toast, you'd be limited to just
400M rows.


ISTM like the solution is almost here, and could be done without too
much (additional) work:
* We have already discussed having a page-per-sequence with the new
SeqAMs being introduced and how that would improve scalability.
* We have commented on having a sequence per TOAST table
 (hence, 4B toasted values per table each up to 4B chunks in size...
vs just 4B toasted values per cluster)

 I'm not sure that I can do it all by myself just yet, but I sure
can try if there is interest.


I don't think it would be hard at all to switch toast pointers to being 
sequence generated instead of OIDs. The only potential downside I see is 
the extra space required for all the sequences... but that would only 
matter on the tiniest of clusters (think embedded), which probably don't 
have that many tables to begin with.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




[HACKERS] FIX : teach expression walker about RestrictInfo

2015-04-28 Thread Tomas Vondra

Hi there,

the attached trivial patch adds handling of RestrictInfo nodes into 
expression_tree_walker(). This is needed for example when calling 
pull_varnos or (or other functions using the expression walker) in 
clausesel.c, for example. An example of a query causing errors with 
pull_varnos is


select * from t where (a = 10 and a = 20) or (b = 15 and b = 20);

which gets translated into an expression tree like this:

BoolExpr [OR_EXPR]
  BoolExpr [AND_EXPR]
    RestrictInfo
      OpExpr [Var = Const]
    RestrictInfo
      OpExpr [Var = Const]
  BoolExpr [AND_EXPR]
    RestrictInfo
      OpExpr [Var = Const]
    RestrictInfo
      OpExpr [Var = Const]

and the nested RestrictInfo nodes make the walker fail because of 
unrecognized node.


It's possible that expression walker is not supposed to know about 
RestrictInfo, but I don't really see why would that be the case.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index d6f1f5b..843f06d 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -1933,6 +1933,8 @@ expression_tree_walker(Node *node,
 			return walker(((PlaceHolderInfo *) node)->ph_var, context);
 		case T_RangeTblFunction:
 			return walker(((RangeTblFunction *) node)->funcexpr, context);
+		case T_RestrictInfo:
+			return walker(((RestrictInfo *) node)->clause, context);
 		default:
 			elog(ERROR, "unrecognized node type: %d",
 				 (int) nodeTag(node));



[HACKERS] pg_rewind test race condition..?

2015-04-28 Thread Stephen Frost
Heikki,

  Not sure if anyone else is seeing this, but I'm getting regression
  test failures when running the pg_rewind tests pretty consistently
  with 'make check'.  Specifically with basic remote, I'm getting:

source and target cluster are on the same timeline
Failure, exiting

  in regress_log/pg_rewind_log_basic_remote.

  If I throw a sleep(5); into t/001_basic.pl before the call to
  RewindTest::run_pg_rewind($test_mode); then everything works fine.

  ./configure --silent --prefix=$INSTALL --with-openssl --with-tcl
  --with-tclconfig=/usr/lib/tcl8.6 --with-perl --enable-debug
  --enable-cassert --enable-tap-tests --with-gssapi

  Full unsuccessful pg_rewind_log_basic_remote output:
-
waiting for server to startLOG:  database system was shut down at 
2015-04-28 13:46:34 EDT
LOG:  database system is ready to accept connections
 done
server started
waiting for server to startLOG:  database system was interrupted; last 
known up at 2015-04-28 13:46:35 EDT
LOG:  entering standby mode
LOG:  redo starts at 0/228
LOG:  consistent recovery state reached at 0/2F8
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 0/300 on timeline 1
 done
server started
server promoting
LOG:  received promote request
FATAL:  terminating walreceiver process due to administrator command
LOG:  invalid record length at 0/30EB410
LOG:  redo done at 0/30EB3A0
LOG:  last completed transaction was at log time 2015-04-28 13:46:37.540952-04
LOG:  selected new timeline ID: 2
LOG:  archive recovery complete
LOG:  database system is ready to accept connections
waiting for server to shut downLOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

source and target cluster are on the same timeline
Failure, exiting
waiting for server to startLOG:  database system was shut down at 
2015-04-28 13:46:39 EDT
LOG:  entering standby mode
LOG:  consistent recovery state reached at 0/3311438
LOG:  invalid record length at 0/3311438
LOG:  database system is ready to accept read only connections
LOG:  fetching timeline history file for timeline 2 from primary server
LOG:  started streaming WAL from primary at 0/300 on timeline 1
LOG:  replication terminated by primary server
DETAIL:  End of WAL reached on timeline 1 at 0/30EB410.
LOG:  new timeline 2 forked off current database system timeline 1 before 
current recovery point 0/3311438
LOG:  restarted WAL streaming at 0/300 on timeline 1
LOG:  replication terminated by primary server
DETAIL:  End of WAL reached on timeline 1 at 0/30EB410.
LOG:  new timeline 2 forked off current database system timeline 1 before 
current recovery point 0/3311438
 done
server started
LOG:  received immediate shutdown request
LOG:  received immediate shutdown request
-

  Full successful pg_rewind_log_basic_remote output:

-
waiting for server to startLOG:  database system was shut down at 
2015-04-28 13:54:04 EDT
LOG:  database system is ready to accept connections
 done
server started
waiting for server to startLOG:  database system was interrupted; last 
known up at 2015-04-28 13:54:05 EDT
LOG:  entering standby mode
LOG:  redo starts at 0/228
LOG:  consistent recovery state reached at 0/2F8
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 0/300 on timeline 1
 done
server started
server promoting
LOG:  received promote request
FATAL:  terminating walreceiver process due to administrator command
LOG:  invalid record length at 0/30EB410
LOG:  redo done at 0/30EB3A0
LOG:  last completed transaction was at log time 2015-04-28 13:54:07.547533-04
LOG:  selected new timeline ID: 2
LOG:  archive recovery complete
LOG:  database system is ready to accept connections
waiting for server to shut downLOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped
The servers diverged at WAL position 0/30EB410 on timeline 1.
Rewinding from last common checkpoint at 0/30EB3A0 on timeline 1
Done!
waiting for server to startLOG:  database system was interrupted while in 
recovery at log time 2015-04-28 13:54:08 EDT
HINT:  If this has occurred more than once some data might be corrupted and you 
might need to choose an earlier recovery target.
LOG:  entering standby mode
LOG:  redo starts at 0/30EB368
LOG:  consistent recovery state reached at 0/3113788
LOG:  invalid record length at 0/3113788
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 0/300 on timeline 2
 done
server started
LOG:  received immediate shutdown request

Re: [HACKERS] WIP: multivariate statistics / proof of concept

2015-04-28 Thread Tomas Vondra

Hi,

On 04/28/15 19:36, Jeff Janes wrote:

...


Thanks. I think I tried that, but was still having trouble. But it
turns out that the trouble was for an unrelated reason, and I got it
to compile now.


Yeah, a new column was added to pg_proc the day after I submitted the 
patch. Will address that in a new version, hopefully in a few days.




Some of the fdw's need a patch as well in order to compile, see
attached.


Thanks, I forgot to tweak the clauselist_selectivity() calls in contrib :-(


--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Freeze avoidance of very large table.

2015-04-28 Thread Robert Haas
On Tue, Apr 28, 2015 at 1:53 PM, Jim Nasby jim.na...@bluetreble.com wrote:
 Because in those cases it would be trivial to drop XMIN out of the tuple
 headers. For a warehouse with narrow rows that could be a significant win.
 Moreso, we could also move XMAX to the page level if we accept that if we
 need to invalidate any tuple we'd have to move all of them. In a warehouse
 situation that's probably OK as well.

You have a funny definition of trivial.  If you start looking
through the code you'll see that anything that changes the format of
the tuple header is a very large undertaking.  And the bit about if
we invalidate any tuple we'd need to move all of them doesn't really
make any sense; we have no infrastructure that would allow us move
tuples like that.  A lot of people would like it if we did, but we
don't.

 That said, I don't think this is the first place to focus for reducing our
 on-disk format; reducing cleanup bloat would probably be a lot more useful.

Sure; changing the on-disk format is a different project than tracking
the frozen parts of a table, which is what this thread started out
being about, and nothing you've said since then seems to add or
detract from that.  I still think the best way to do it is to make the
VM carry two bits per page instead of one.

 Did you or Jan have more detailed info from the test he ran about where our
 80% overhead was ending up? That would remove a lot of speculation here...

We have more detailed information on that, but (1) that's not a very
specific question and (2) it has nothing to do with freeze avoidance,
so I'm not sure why you are asking on this thread.  Let's try not to
get sidetracked from the well-defined proposal that just needs to be
implemented to speculation about major changes in completely unrelated
areas.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 3:38 AM, Andres Freund and...@anarazel.de wrote:
 The more I look at approach taken in the executor, the less I like it.
 I think the fundamental structural problem is that you've chosen to
 represent the ON CONFLICT UPDATE part as fully separate plan tree node;
 planned nearly like a normal UPDATE. But it really isn't. That's what
 then requires you to coordinate knowledge around with p_is_speculative,
 have these auxiliary trees, have update nodes without a relation, and
 other similar things.

The update node without a relation thing is misleading. There is an
UpdateStmt parse node that briefly lacks a RangeVar, but it takes its
target RTE from the parent without bothering with a RangeVar. From
there on in, it has an RTE (shared with the parent), just as the
executor has the two share their ResultRelation.

It is a separate node - it's displayed in EXPLAIN output as a separate
node. It's not the first type of node to have to supply its own
instrumentation because of the way it's executed. I don't know how you
can say it isn't a separate plan node when it is displayed as such in
EXPLAIN, and is separately instrumented as one.

 My feeling is that this will look much less ugly if there's simply no
 UpdateStmt built. I mean, all we need is the target list and the where
 clause.

Yes, that's all that is needed - most of the structure of a
conventional UPDATE! There isn't much that you can't do that you can
with a regular UPDATE. Where are you going to cut?

 As far as I can see so far that'll get rid of a lot of
 structural uglyness. There'll still be some more localized stuff, but I
 don't think it'll be that bad.

You're going to need a new targetlist just for this case. So you've
just added a new field to save one within Query, etc.

 I'm actually even wondering if it'd not better off *not* to reuse
 ExecUpdate(), but that's essentially a separate concern.

I think that makes no sense. ExecUpdate() has to do many things that
are applicable to both. The *only* thing that it does that's
superfluous for ON CONFLICT DO UPDATE is walking the update chain.
That is literally the only thing.

I think that you're uncomfortable with the way that the structure is
different to anything that exists at the moment, which is
understandable. But this is UPSERT - why would the representation of
what is more or less a hybrid DML query type not have a novel new
representation? What I've done works with most existing abstractions
without too much extra code. The code reuse that this approach allows
seems like a major advantage.
-- 
Peter Geoghegan




[HACKERS] cache invalidation for PL/pgsql functions

2015-04-28 Thread Robert Haas
The following behavior surprised me, and a few other people at
EnterpriseDB, and one of our customers:

rhaas=# create table foo (a int);
CREATE TABLE
rhaas=# create or replace function test (x foo) returns int as $$begin
return x.b; end$$ language plpgsql;
CREATE FUNCTION
rhaas=# alter table foo add column b int;
ALTER TABLE
rhaas=# select test(null::foo);
ERROR:  record x has no field b
LINE 1: SELECT x.b
   ^
QUERY:  SELECT x.b
CONTEXT:  PL/pgSQL function test(foo) line 1 at RETURN
rhaas=# \c
You are now connected to database rhaas as user rhaas.
rhaas=# select test(null::foo);
 test
--

(1 row)

I hate to use the term bug for what somebody's probably going to
tell me is acceptable behavior, but that seems like a bug.  I guess
the root of the problem is that PL/pgSQL's cache invalidation logic
only considers the pg_proc row's TID and xmin when deciding whether to
recompile.  For base types that's probably OK, but for composite
types, not so much.

Thoughts?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Allow SQL/plpgsql functions to accept record

2015-04-28 Thread Andrew Dunstan


On 04/28/2015 01:44 PM, Jim Nasby wrote:

On 4/27/15 10:06 PM, Andrew Dunstan wrote:

My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as lvalues and rvalues. Without those this feature
will be of comparatively little value, IMNSHO. With them it will be much
more useful and  powerful.


Sure, and if I had some pointers on what was necessary there I'd take 
a look at it. But I'm not very familiar with plpgsql (let alone what 
we'd need to do this in SQL), so I'd just be fumbling around. As a 
reminder, one of the big issues there seems to be that while SQL 
knows what the underlying type is, plpgsql has no idea, which 
seriously limits the use of passing it a record.


In the meantime I've got a patch that definitely works for SQL and 
allows you to handle a record and pass it on to other functions (such 
as json_from_record()). Since that's my original motivation for 
looking at this, I'd like that patch to be considered unless there's a 
big drawback to it that I'm missing. (For 9.6, of course.)



If you look at composite_to_json() it gives you almost all that you'd 
need to construct an array of field names for an arbitrary record, and a 
lot of what you'd need to extract a value for an arbitrary field. 
populate_record_worker() has a good deal of what you'd need to set a 
value of an arbitrary field. None of that means that there isn't a good 
deal of work to do, but if you want pointers there are some.


cheers

andrew





Re: [HACKERS] Feedback on getting rid of VACUUM FULL

2015-04-28 Thread Robert Haas
On Fri, Apr 24, 2015 at 3:04 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Heikki Linnakangas wrote:
 Hmm, AFAICT pg_reorg is much more complex, writing stuff to a temp table
 and swapping relfilenodes afterwards. More like the VACUUM REWRITE
 that's been discussed.

 For the kicks, I looked at what it would take to write a utility like
 that. It turns out to be quite trivial, patch attached. It uses the same
 principle as VACUUM FULL, scans from the end, moving tuples to
 lower-numbered pages until it can't do it anymore. It requires a small
 change to heap_update(), to override the preference to store the new
 tuple on the same page as the old one, but other than that, it's all in
 the external module.

 More than five years have passed since Heikki posted this, and we still
 haven't found a solution to the problem -- which neverthless keeps
 biting people to the point that multiple user-space implementations of
 similar techniques are out there.

Yeah.  The problem with solving this with an update is that a
concurrent real update may not see the expected behavior, especially
at higher isolation levels.  Tom also complained that the CTID will
change, and somebody might care about that.  But I think it's pretty
clear that a lot of people will be able to live with those problems,
and those who can't will be no worse off than now.

 I think what we need here is something that does heap_update to tuples
 at the end of the table, moving them to earlier pages; then wait for old
 snapshots to die (the infrastructure for which we have now, thanks to
 CREATE INDEX CONCURRENTLY); then truncate the empty pages.  Of course,
 there are lots of details to resolve.  It doesn't really matter that
 this runs for long: a process doing this for hours might be better than
 AccessExclusiveLock on the table for a much shorter period.

Why do you need to do anything other than update the tuples and let
autovacuum clean up the mess?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] cache invalidation for PL/pgsql functions

2015-04-28 Thread Jim Nasby

On 4/28/15 12:58 PM, Pavel Stehule wrote:

I hate to use the term bug for what somebody's probably going to
tell me is acceptable behavior, but that seems like a bug.  I guess
the root of the problem is that PL/pgSQL's cache invalidation logic
only considers the pg_proc row's TID and xmin when deciding whether to
recompile.  For base types that's probably OK, but for composite
types, not so much.

Thoughts?


It is inconsistent - and I know one of the bigger Czech companies that use
Postgres had a small outage because they hit this issue when they deployed a
new version of a procedure.

The question is what the cost of fixing it would be.


We don't actually consider the argument types at all (pl_comp.c:169):

/* We have a compiled function, but is it still valid? */
if (function->fn_xmin == HeapTupleHeaderGetRawXmin(procTup->t_data) &&
    ItemPointerEquals(function->fn_tid, procTup->t_self))

Perhaps pg_depend protects from a base type changing.

This problem also exists for internal variables:

create table moo(m int);
create function t() returns text language plpgsql as $$declare m moo; 
begin m := null; return m.t; end$$;

select t(); -- Expected error
alter table moo add t text;
select t(); -- Unexpected error

So it seems the correct fix would be to remember the xmin of every type we 
saw... unless there's some way to tie the proc into type sinval messages?
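
The per-type bookkeeping being suggested can be sketched outside PostgreSQL
like this; CompiledFunction, still_valid, and the version values are
hypothetical illustrations, not the actual pl_comp.c structures:

```python
# Hypothetical sketch (not PostgreSQL code): a compiled-function cache that
# stores a version stamp for every composite type the function references,
# alongside the (xmin, tid)-style stamp plpgsql keeps for the pg_proc row.

class CompiledFunction:
    def __init__(self, fn_version, type_versions):
        self.fn_version = fn_version              # like (fn_xmin, fn_tid)
        self.type_versions = dict(type_versions)  # type oid -> version at compile time

def still_valid(cached, current_fn_version, current_type_versions):
    """Recompile unless the function *and* every referenced type are unchanged."""
    if cached.fn_version != current_fn_version:
        return False
    return all(current_type_versions.get(oid) == ver
               for oid, ver in cached.type_versions.items())

# Compile against type 100 at version 1...
f = CompiledFunction(fn_version=("xmin1", "tid1"), type_versions={100: 1})
# ...then an ALTER TABLE bumps the type's version: the cache must be discarded.
assert still_valid(f, ("xmin1", "tid1"), {100: 1})
assert not still_valid(f, ("xmin1", "tid1"), {100: 2})
```

Hooking into type sinval messages would avoid storing the stamps at all, at
the cost of wiring the proc cache into invalidation callbacks.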

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] parallel mode and parallel contexts

2015-04-28 Thread Robert Haas
On Sun, Apr 26, 2015 at 2:03 PM, Euler Taveira eu...@timbira.com.br wrote:
 On 19-03-2015 15:13, Robert Haas wrote:
 On Wed, Mar 18, 2015 at 5:36 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
 Reading the README first, the rest later. So you can comment on my
 comments, while I actually look at the code. Parallelism, yay!

 I'm also looking at this piece of code and found some low-hanging fruit.

Thanks.  0001 and 0003 look good, but 0002 is actually unrelated to this patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] parallel mode and parallel contexts

2015-04-28 Thread Robert Haas
On Sun, Apr 26, 2015 at 9:57 PM, Amit Langote
langote_amit...@lab.ntt.co.jp wrote:
 On 2015-04-22 AM 04:14, Robert Haas wrote:
 We should check IsParallelWorker() for operations that are allowed in
 the master during parallel mode, but not allowed in the workers - e.g.
 the master can scan its own temporary relations, but its workers
 can't.  We should check IsInParallelMode() for operations that are
 completely off-limits in parallel mode - i.e. writes.

 We use ereport() where we expect that SQL could hit that check, and
 elog() where we expect that only (buggy?) C code could hit that check.


 By the way, perhaps orthogonal to the basic issue Alvaro and you are
 discussing here - when playing around with (parallel-mode + parallel-safety +
 parallel heapscan + parallel seqscan), I'd observed (been a while since) that
 if you run a CREATE TABLE AS ... SELECT and the SELECT happens to use parallel
 scan, the statement as a whole fails because the top level statement (CTAS) is
 not read-only. So, only way to make CTAS succeed is to disable seqscan; which
 may require some considerations in reporting to user to figure out. I guess it
 happens because the would-be parallel leader of the scan would also be the one
 doing DefineRelation() DDL. Apologies if this has been addressed since.

That sounds like a bug in the assess-parallel-safety patch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Feedback on getting rid of VACUUM FULL

2015-04-28 Thread Alvaro Herrera
Robert Haas wrote:
 On Fri, Apr 24, 2015 at 3:04 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:

  I think what we need here is something that does heap_update to tuples
  at the end of the table, moving them to earlier pages; then wait for old
  snapshots to die (the infrastructure for which we have now, thanks to
  CREATE INDEX CONCURRENTLY); then truncate the empty pages.  Of course,
  there are lots of details to resolve.  It doesn't really matter that
  this runs for long: a process doing this for hours might be better than
  AccessExclusiveLock on the table for a much shorter period.
 
 Why do you need to do anything other than update the tuples and let
 autovacuum clean up the mess?

Sure, that's one option.  I think autovac's current approach is too
heavyweight: it always has to scan the whole relation and all the
indexes.  It might be more convenient to do something more
fine-grained; for instance, maybe instead of scanning the whole
relation, start from the end of the relation walking backwards and stop
once the first page containing a live or recently-dead tuple is found.
Perhaps, while scanning the indexes, you know that all CTIDs with page
numbers higher than some threshold are gone; then you can remove them
without scanning the heap at all.
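
As a toy illustration of the truncation-point idea (not backend code; page
states are reduced to a hypothetical live/empty flag):

```python
# Toy model of walking backwards from the end of the relation and stopping
# at the first page that still holds a live or recently-dead tuple; every
# page after that point is truncatable.

def truncation_point(pages):
    """Return the number of pages to keep; everything after is truncatable."""
    n = len(pages)
    while n > 0 and pages[n - 1] == "empty":
        n -= 1
    return n

pages = ["live", "empty", "live", "empty", "empty"]
assert truncation_point(pages) == 3   # pages 3 and 4 can be truncated
assert truncation_point(["empty"] * 4) == 0
```

The real version would of course have to hold the extension lock and worry
about concurrent inserts into the tail pages, which this sketch ignores.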

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training  Services




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-28 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 6:43 AM, Andres Freund and...@anarazel.de wrote:
 Could you please add the tests for the logical decoding code you added?
 I presume you have some already?

Most of the tests I used for logical decoding were stress tests (i.e.,
they prominently involved my favorite tool, jjanes_upsert). There is a
regression test added, but it's not especially sophisticated. Which
may be a problem.


-- 
Peter Geoghegan




Re: [HACKERS] Improving RLS qual pushdown

2015-04-28 Thread Dean Rasheed
On 27 April 2015 at 17:33, Stephen Frost sfr...@snowman.net wrote:
 Dean,

 * Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 I took another look at this and came up with some alternate comment
 rewording. I also added a couple of additional comments that should
 hopefully clarify the code a bit.

 Finally got back to this.  Made a few additional changes that made
 things look clearer, to me at least, and added documentation and
 comments.  Also added a bit to the regression tests (in particular, an
 explicit check on the RLS side of this, not that it should be different,
 but just in case things diverge in the future).  Please review and let
 me know if you see any issues.


Thanks, that all looks very good.

Regards,
Dean




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0

2015-04-28 Thread Peter Geoghegan
On Mon, Apr 27, 2015 at 8:31 PM, Heikki Linnakangas hlinn...@iki.fi wrote:
 I thought we had an ironclad scheme to prevent deadlocks like this, so I'd
 like to understand why that happens.


Okay. I think I know how it happens (I was always skeptical of the
idea that this would be 100% reliable), but I'll be able to show you
exactly how tomorrow. I'll have pg_xlogdump output then.

-- 
Peter Geoghegan




[HACKERS] ATSimpleRecursion() and inheritance foreign parents

2015-04-28 Thread Amit Langote

Hi,

Following ALTER TABLE actions are applied recursively to inheritance
descendents via ATSimpleRecursion() -

ALTER COLUMN DEFAULT
ALTER COLUMN DROP NOT NULL
ALTER COLUMN SET NOT NULL
ALTER COLUMN SET STATISTICS
ALTER COLUMN SET STORAGE

The code at the beginning of ATSimpleRecursion() looks like -

/*
 * Propagate to children if desired.  Non-table relations never have
 * children, so no need to search in that case.
 */
 if (recurse && rel->rd_rel->relkind == RELKIND_RELATION)

Not sure if it's a great idea, but now that foreign tables can also have
children, should the above be changed to take that into account? Any
inheritance-related recursion performed without using ATSimpleRecursion()
recurses as dictated by RangeVar.inhOpt; so even foreign inheritance parents
expand for various ALTER TABLE actions like adding a column, though that is
not a meaningful operation on foreign tables anyway.

An example,
postgres=# alter foreign table fparent alter a type char;
ALTER FOREIGN TABLE

postgres=# select * from fparent;
ERROR:  attribute a of relation fchild1 does not match parent's type

The above error, AIUI, is hit well before it is determined that fparent is a
foreign table, whereas the following is an FDW-specific (waiting-to-happen) error:

postgres=# alter foreign table fparent add b char;
ALTER FOREIGN TABLE

postgres=# SELECT * FROM fparent;
ERROR:  column b does not exist
CONTEXT:  Remote SQL command: SELECT a, b FROM public.parent

Not sure if the first case can be considered a bug, as foreign tables are
pretty lax in these regards anyway. But if ATSimpleRecursion() prevents errors
that result from concepts independent of foreign tables being involved, maybe
we need to fix it?

Thanks,
Amit





Re: [HACKERS] PL/pgSQL, RAISE and error context

2015-04-28 Thread Pavel Stehule
2015-04-27 22:53 GMT+02:00 Jim Nasby jim.na...@bluetreble.com:

 On 4/27/15 11:47 AM, Joel Jacobson wrote:

 On Mon, Apr 27, 2015 at 6:14 PM, Marko Tiikkaja ma...@joh.to
 mailto:ma...@joh.to wrote:

 That sounds weird.  log_min_messages are the messages sent to the
 log; client_min_messages are sent to the client.
 context_min_messages are not sent to a context, whatever that
 would mean.


 Good point.

 I think it can't be any clearer than the proposed
 plpgsql.display_context_min_messages


 client_min_context. It's doing the same thing as min_messages does, just
 for context instead of the message.

 Or does this affect client and log the same way?


it affects client and log together

maybe min_context

Pavel



 --
 Jim Nasby, Data Architect, Blue Treble Consulting
 Data in Trouble? Get it in Treble! http://BlueTreble.com



Re: [HACKERS] Auditing extension for PostgreSQL (Take 2)

2015-04-28 Thread Sawada Masahiko
On Fri, Apr 24, 2015 at 3:23 AM, David Steele da...@pgmasters.net wrote:
 On 4/23/15 5:49 AM, Sawada Masahiko wrote:

 I'm concerned that the behaviour of pg_audit has been changed a few
 times as far as I remember. Did we achieve consensus on this design?

 The original author Abhijit expressed support for the SESSION/OBJECT
 concept before I started working on the code and so has Stephen Frost.
 As far as I know all outstanding comments from the community have been
 addressed.

 Overall behavior has not changed very much since being submitted to the
 CF in February - mostly just tweaks and additional options.

 And one question; OBJECT logging of all-tuple deletion (e.g. DELETE
 FROM hoge) seems not to work, as follows.


 =# grant all on bar TO masahiko;

 (1) Delete all tuple
 =# insert into bar values(1);
 =# delete from bar ;
 NOTICE:  AUDIT: SESSION,47,1,WRITE,DELETE,TABLE,public.bar,delete from bar ;
 DELETE 1

 (2) Delete specified tuple (but same result as (1))
 =# insert into bar values(1);
 =# delete from bar where col = 1;
 NOTICE:  AUDIT: OBJECT,48,1,WRITE,DELETE,TABLE,public.bar,delete from
 bar where col = 1;
 NOTICE:  AUDIT: SESSION,48,1,WRITE,DELETE,TABLE,public.bar,delete from
 bar where col = 1;
 DELETE 1

 Definitely a bug.  Object logging works in the second case because the
 select privileges on the col column trigger logging.  I have fixed
 this and added a regression test.

 I also found a way to get the stack memory context under the query
 memory context.  Because of the order of execution it requires moving
 the memory context but I still think it's a much better solution.  I was
 able to remove most of the stack pops (except function logging) and the
 output remained stable.

 I've also added some checking to make sure that if anything looks funny
 on the stack an error will be generated.

 Thanks for the feedback!


Thank you for updating the patch!
I ran the Postgres regression tests on a database with pg_audit enabled,
and it works fine.
Looks good to me.

If no one has review comments or bug reports, I will mark this
as Ready for Committer.

Regards,

---
Sawada Masahiko




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff

2015-04-28 Thread Andres Freund
On 2015-04-27 23:52:58 +0200, Andres Freund wrote:
 On 2015-04-27 16:28:49 +0200, Andres Freund wrote:
  On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
   * So far, there has been a lack of scrutiny about what the patch does
   in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
   expression) and optimizer (the whole concept of an auxiliary
   query/plan that share a target RTE, and later target ResultRelation).
   If someone took a close look at that, it would be most helpful.
   ruleutils.c is also modified for the benefit of EXPLAIN output. This
   all applies only to the ON CONFLICT UPDATE patch. A committer could
   push out the IGNORE patch before this was 100% firm.
 
  I'm far from an expert on the relevant regions; but I'm starting to look
  nonetheless. I have to say that on a preliminary look it looks more
  complicated than it has to.
 
 So, I'm looking. And I've a few questions:
 * Why do we need to spread knowledge about speculative inserts that wide?
   It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt. That
   seems a bit wide - and as far as I see not really necessary. That e.g.
   transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems
   wrong.
 
 * afaics 'auxiliary query' isn't something we have under that
   name. This isn't really different to what wCTEs do, so I don't think
   we need this term here.
 
 * You refer to 'join like syntax' in a couple places. I don't really see
   the current syntax being that join like? Is that just a different
   perception of terminology or is that just remnants of earlier
   approaches?
 
 * I am rather unconvinced we need the whole 'ExcludedVar' mechanism. I
   don't really see why we need it at all? Can't we 'just' make those
   plain vars referring to the normal targetlist entry? I feel like I'm
   missing something here.
 
 * The whole dealing with the update clause doesn't seem super
   clean. I'm not sure yet how to do it nicer though :(

The more I look at approach taken in the executor, the less I like it.
I think the fundamental structural problem is that you've chosen to
represent the ON CONFLICT UPDATE part as fully separate plan tree node;
planned nearly like a normal UPDATE. But it really isn't. That's what
then requires you to coordinate knowledge around with p_is_speculative,
have these auxiliary trees, have update nodes without a relation, and
other similar things.

My feeling is that this will look much less ugly if there's simply no
UpdateStmt built. I mean, all we need is the target list and the where
clause. As far as I can see so far that'll get rid of a lot of
structural uglyness. There'll still be some more localized stuff, but I
don't think it'll be that bad.

I'm actually even wondering if it'd not better off *not* to reuse
ExecUpdate(), but that's essentially a separate concern.

I'll try to rejigger things roughly in that direction. If you haven't
heard from me in three days, call the police.

Greetings,

Andres Freund




Re: [HACKERS] Fwd: [GENERAL] 4B row limit for CLOB tables

2015-04-28 Thread José Luis Tallón

On 04/27/2015 08:49 AM, Jim Nasby wrote:

On 4/25/15 1:19 PM, Bruce Momjian wrote:

Note if you are storing a table with rows that exceed 2KB in size
(aggregate size of each row) then the Maximum number of rows in a
table may be limited to 4 Billion, see TOAST.


That's not accurate though; you could be limited to far less than 4B 
rows. If each row has 10 fields that toast, you'd be limited to just 
400M rows.


ISTM the solution is almost here, and could be done without too 
much (additional) work:
* We have already discussed having a page-per-sequence with the new 
SeqAMs being introduced and how that would improve scalability.

* We have commented on having a sequence per TOAST table
(hence, 4B toasted values per table each up to 4B chunks in size... 
vs just 4B toasted values per cluster)


I'm not sure that I can do it all by myself just yet, but I sure 
can try if there is interest.
(just after I'm done with another patch that is independent from 
this, though)


This would be material for 9.6, of course :)

Thanks,

J.L.





Re: [HACKERS] COPY and file_fdw with fixed column widths

2015-04-28 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I know COPY doesn't support importing files with fixed column widths,
 i.e. we can't say field1 is the first 30 characters, and field2 is the
 rest of the line.  We need a unique delimiter at column 31.  (Commercial
 Ingres does support this ability.)

 I know we tell most people to use sed, Perl, or an ETL tool to convert
 files into a format COPY understands, and I think that is a reasonable
 answer.  However, the file_fdw also reads our COPY format, and in that
 case, the data file might be updated regularly and running an ETL
 process on it every time it is read is inconvenient.

COPY is, and has always been intended to be, as fast as possible; loading
format transformation abilities onto it seems like a fundamental mistake.
Therefore, if you wish file_fdw were more flexible, I think the answer is
to create a variant of file_fdw that doesn't use COPY but some other
mechanism.

regards, tom lane




Re: [HACKERS] COPY and file_fdw with fixed column widths

2015-04-28 Thread Bruce Momjian
On Tue, Apr 28, 2015 at 12:46:22PM -0700, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I know COPY doesn't support importing files with fixed column widths,
  i.e. we can't say field1 is the first 30 characters, and field2 is the
  rest of the line.  We need a unique delimiter at column 31.  (Commercial
  Ingres does support this ability.)
 
  I know we tell most people to use sed, Perl, or an ETL tool to convert
  files into a format COPY understands, and I think that is a reasonable
  answer.  However, the file_fdw also reads our COPY format, and in that
  case, the data file might be updated regularly and running an ETL
  process on it every time it is read is inconvenient.
 
 COPY is, and has always been intended to be, as fast as possible; loading
 format transformation abilities onto it seems like a fundamental mistake.
 Therefore, if you wish file_fdw were more flexible, I think the answer is
 to create a variant of file_fdw that doesn't use COPY but some other
 mechanism.

Yes, I think this is a missing feature.  While we can tell people to do
ETL for loading, we are really not doing that for file_fdw.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] FIX : teach expression walker about RestrictInfo

2015-04-28 Thread Tom Lane
Tomas Vondra tomas.von...@2ndquadrant.com writes:
 the attached trivial patch adds handling of RestrictInfo nodes into 
 expression_tree_walker().

RestrictInfo is not a general expression node and support for it has
been deliberately omitted from expression_tree_walker().  So I think
what you are proposing is a bad idea and probably a band-aid for some
other bad idea.

 This is needed, for example, when calling 
 pull_varnos (or other functions using the expression walker) in 
 clausesel.c. An example of a query causing errors with 
 pull_varnos is

 select * from t where (a >= 10 and a <= 20) or (b >= 15 and b <= 20);

Really?

regression=# create table t (a int, b int);
CREATE TABLE
regression=# select * from t where (a >= 10 and a <= 20) or (b >= 15 and b <= 20);
 a | b 
---+---
(0 rows)

regards, tom lane




[HACKERS] COPY and file_fdw with fixed column widths

2015-04-28 Thread Bruce Momjian
I know COPY doesn't support importing files with fixed column widths,
i.e. we can't say field1 is the first 30 characters, and field2 is the
rest of the line.  We need a unique delimiter at column 31.  (Commercial
Ingres does support this ability.)

I know we tell most people to use sed, Perl, or an ETL tool to convert
files into a format COPY understands, and I think that is a reasonable
answer.  However, the file_fdw also reads our COPY format, and in that
case, the data file might be updated regularly and running an ETL
process on it every time it is read is inconvenient.

I'm asking because I am playing around with file_fdw and I have some log
files with a Unix 'date' prefix, a colon, then some text that might also
contain a colon.  It would be ideal if I could read the first 28
characters into field1, then the rest of the line into field2.  The only
idea I have is to create a single-column foreign table that reads the
entire line as one field, then a view on top of that that parses the line
into fields, but that seems kind of complex.
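
For what it's worth, the split itself is trivial once the whole line is one
field; a rough sketch in Python (the 28-character width and the split_fixed
helper are just illustrative, matching the 'date'-prefix example above):

```python
# Sketch of the workaround described above: treat each log line as a single
# column, then split it at a fixed width, the way the proposed view would.

def split_fixed(line, width=28):
    """Emulate field1 = first `width` chars, field2 = the rest of the line."""
    return line[:width].rstrip(), line[width:].lstrip(": ")

line = "Tue Apr 28 12:46:22 UTC 2015: some text: with a colon"
field1, field2 = split_fixed(line)
assert field1 == "Tue Apr 28 12:46:22 UTC 2015"
assert field2 == "some text: with a colon"
```

In the view this would be roughly substr(line, 1, 28) and substr(line, 30),
modulo trimming the delimiter.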

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] ATSimpleRecursion() and inheritance foreign parents

2015-04-28 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 On 2015/04/28 15:17, Amit Langote wrote:
 The code at the beginning of ATSimpleRecursion() looks like -
  if (recurse && rel->rd_rel->relkind == RELKIND_RELATION)
  Not sure if it's a great idea, but now that foreign tables can also have
  children, should the above be changed to take that into account?

 Yeah, I think we should now allow the recursion for inheritance parents 
 that are foreign tables as well.  Attached is a patch for that.

Yeah, this is just an oversight.  Fix pushed, and also a similar fix in
parse_utilcmd.c.  I thought I'd reviewed all the references to
RELKIND_RELATION before, but evidently I missed these.

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT syntax issues

2015-04-28 Thread Peter Geoghegan
On Tue, Apr 28, 2015 at 10:36 AM, David G. Johnston
david.g.johns...@gmail.com wrote:
 This example exemplifies the poorness of the proposed wording, IMO:


 [...]
 SET dname = EXCLUDED.dname || ' (formerly ' || TARGET.dname || ')'

 NEW.dname || '(formerly ' || OLD.dname || ')' reads perfectly well.

 Yes, this is an isolated example...but am I missing the fact that there is a
 third tuple that needs to be referenced?

 If there are only two the choices of NEW and OLD seem to be both easily
 learned and readable.

Whatever Andres and/or Heikki want is what I'll agree to. Honestly, I
just don't care anymore.


-- 
Peter Geoghegan




Re: [HACKERS] cache invalidation for PL/pgsql functions

2015-04-28 Thread Merlin Moncure
On Tue, Apr 28, 2015 at 12:43 PM, Robert Haas robertmh...@gmail.com wrote:
 I hate to use the term bug for what somebody's probably going to
 tell me is acceptable behavior, but that seems like a bug.  I guess
 the root of the problem is that PL/pgSQL's cache invalidation logic
 only considers the pg_proc row's TID and xmin when deciding whether to
 recompile.  For base types that's probably OK, but for composite
 types, not so much.

It was a missed case in the invalidation logic.   plpgsql was
deliberately modified to invalidate plans upon schema changes -- this
is a way to outsmart that system.   definitely a bug IMNSHO.

merlin

