Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-11-23 Thread Peter Geoghegan
On Tue, Nov 19, 2013 at 5:13 AM, Heikki Linnakangas
 wrote:
> Ok. Which use case are you targeting during this initial effort, batch
> updates or small OLTP transactions?

OLTP transactions are probably my primary concern. I just realized
that I wasn't actually very clear on that point in my most recent
e-mail -- my apologies. What we really need for batching, and what we
should work towards in the medium term, is MERGE, where a single table
scan does everything.

However, I also care about facilitating conflict resolution in
multi-master replication systems, so I think we definitely ought to
consider that carefully if at all possible. Incidentally, Andres said
a few weeks back that he thinks that what I've proposed ought to be
only exposed to C code, owing to the fact that it necessitates the
visibility trick (actually, I think UPSERT does generally, but what
I've done has, I suppose, necessitated making it more explicit/general
- i.e. modifications are added to HeapTupleSatisfiesMVCC()). I don't
understand what difference it makes to only expose it at the C level
- what I've proposed in this area is either correct or incorrect
(Andres mentioned the Halloween problem). Furthermore, I presume that
it's broadly useful to have Bucardo-style custom conflict resolution
policies, without people having to get their hands dirty with C, and I
think having this at the SQL level helps there. Plus, as I've said
many times, the flexibility this syntax offers is likely to be broadly
useful for ordinary SQL clients - this is almost as good as SQL MERGE
for many cases.

>> Seems like an awful lot of additional mechanism.
>
> Not really. Once you have the code in place to do the kill-inserted-tuple
> dance on a conflict, all you need is to do an extra index search before it.
> And once you have that, it's not hard to add some kind of a heuristic to
> either do the pre-check or skip it.

Perhaps.
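For what it's worth, one possible shape of the heuristic Heikki mentions -- paying for the extra pre-check index search only when conflicts have been common -- is sketched below. This is a toy in plain C, not code from either patch; the stats structure, the warm-up count, and the 50% threshold are all invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented bookkeeping: how often upserts on this relation found an
 * existing row during the pre-check. */
typedef struct
{
    long attempts;       /* upserts tried so far */
    long precheck_hits;  /* how many found an existing (conflicting) row */
} upsert_stats;

/* Decide whether to do the duplicate-key pre-check before inserting,
 * or to insert optimistically and do the kill-inserted-tuple dance
 * only on conflict. */
static bool
should_precheck(const upsert_stats *s)
{
    if (s->attempts < 100)
        return true;    /* too little history: default to pre-checking */

    /* pre-check only if more than half of recent upserts were updates */
    return s->precheck_hits * 2 > s->attempts;
}
```

The point of the sketch is only that the decision can be made cheaply from counters the executor already has in hand, so skipping the pre-check on insert-heavy workloads costs almost nothing.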

> I probably shouldn't have mentioned markpos/restrpos, you're right that it's
> not a good idea to conflate that with index insertion. Nevertheless, some
> kind of an API for doing a duplicate-key check prior to insertion, and
> remembering the location for the actual insert later, seems sensible. It's
> certainly no more of a modularity violation than the value-locking scheme
> you're proposing.

I'm not so sure - in principle, any locking implementation can be used
by any conceivable amcanunique indexing method. The core system knows
that it isn't okay to sit on them all day long, but that doesn't seem
very onerous.

>> I'm certainly not opposed to making something like this work for
>> exclusion constraints. Certainly, I want this to be as general as
>> possible. But I don't think that it needs to be a blocker, and I don't
>> think we gain anything in code footprint by addressing that by being
>> as general as possible in our approach to the basic concurrency issue.
>> After all, we're going to have to repeat the basic pattern in multiple
>> modules.
>
>
> Well, I don't know what to say. I *do* have a hunch that we'd gain much in
> code footprint by making this general. I don't understand what pattern you'd
> need to repeat in multiple modules.

Now that I see this rough patch, I better appreciate what you mean. I
withdraw this objection.

> Here's a patch, implementing a rough version of the scheme I'm trying to
> explain. It's not as polished as yours, but it ought to be enough to
> evaluate the code footprint and performance. It doesn't make any changes to
> the indexam API, and it works the same with exclusion constraints and unique
> constraints. As it stands, it doesn't leave bloat behind, except when a
> concurrent insertion with a conflicting key happens between the first
> "pre-check" and the actual insertion. That should be rare in practice.
>
> What have you been using to performance test this?

I was just testing my patch against a custom pgbench workload,
involving running upserts against a table from a fixed range of PK
values. It's proven kind of difficult to benchmark this in the way
that pgbench has been useful in the past. Pretty soon the
table's PK range is "saturated", so every statement is an update; but
on the other hand, how do you balance the INSERT and UPDATE cases?

Multiple unique indexes are the interesting case for comparing both
approaches. I didn't really worry about performance so much as
correctness, and for multiple unique constraints your approach clearly
falls down, as explained below.

>> Is it even useful to lock multiple rows if we can't really
>> update them, because they'll overlap each other when all updated with
>> the one value?
>
>
> Hmm. I think what you're referring to is the case where you try to insert a
> row so that it violates an exclusion constraint, and in a way that it
> conflicts with a large number of existing tuples. For example, if you have a
> calendar application with a constraint that two reservations must not
> overlap, and you try to insert a new reservation that cover

Re: [HACKERS] Re: Server is not getting started with log level as debug5 on master after commit 3147ac

2013-11-23 Thread Amit Kapila
On Sun, Nov 24, 2013 at 4:36 AM, Tom Lane  wrote:
> Amit Kapila  writes:
>> On Fri, Nov 22, 2013 at 9:30 AM, Amit Kapila  wrote:
>> Again looking at it, I think better fix would be to restore 'errno'
>> from 'edata->saved_errno' in errfinish() incase the control returns
>> back to caller (elevel <= WARNING). It seems to me that this fix is
>> required anyway because if we return from errfinish (ereport/elog) to
>> caller, it should restore back 'errno', as caller might need to take
>> some action based on errno.
>> Patch to restore 'errno' in errfinish() is attached with this mail.
>
> I think this is pretty misguided.  In general, there should be no
> expectation that a function call doesn't stomp on errno unless it's
> specifically documented not to --- which surely ereport() never has
> been.  So generally it's the responsibility of any code that needs
> errno to be preserved to do so itself.  In particular, in the case
> at hand:
>
>
> So basically, _dosmaperr() is broken and always has been, and it's
> surprising we've not seen evidence of that before.  It needs to not
> try to set the real errno variable till it's done logging about it.
>
> Normally I avoid touching Windows-specific code for lack of ability
> to test it, but in this case the needed fix seems pretty clear, so
> I'll go make it.

  Thanks, I have verified that the problem reported above is fixed.

  I think this kind of problem can still exist at other
places in the code. I checked a few places and suspect that
secure_read() can also have a similar problem:

secure_read()
{
..
errno = 0;
n = SSL_read(port->ssl, ptr, len);
err = SSL_get_error(port->ssl, n);
switch (err)
{
..
..

case SSL_ERROR_SSL:
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
errmsg("SSL error: %s", SSLerrmessage())));
/* fall through */
..
}

If SSL_read sets errno, ereport will reset it, and callers of
secure_read(), like pq_getbyte_if_available(), that take actions based
on errno can run into problems.

pq_getbyte_if_available(unsigned char *c)
{
..

r = secure_read(MyProcPort, c, 1);
if (r < 0)
{
if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
r = 0;
..
}

In general it is the caller's responsibility to take care of errno
handling, but I am not sure that is done well at all places in the
code. The chances of such problems were lower earlier, because there
was less chance that ereport would reset errno; now it will
definitely do so.
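The hazard can be demonstrated outside the backend. Below is a minimal, self-contained C sketch -- toy functions, not the real secure_read()/ereport() -- showing why a caller that inspects errno after a call that may log must save and restore errno itself:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Stand-in for a logging call that may clobber errno (as ereport can
 * after the commit discussed above).  Not the real function. */
static void
toy_log(const char *msg)
{
    fprintf(stderr, "LOG: %s\n", msg);
    errno = 0;                  /* simulate the clobber */
}

/* Like the secure_read() path quoted above: errno is set (as SSL_read
 * would for a would-block condition), then a log call destroys it. */
static int
careless_read(void)
{
    errno = EAGAIN;
    toy_log("SSL error");
    return -1;
}

/* The caller-side discipline: code that needs errno preserved across a
 * call must save it itself and restore it afterwards. */
static int
careful_read(void)
{
    int saved_errno;

    errno = EAGAIN;
    saved_errno = errno;
    toy_log("SSL error");
    errno = saved_errno;        /* restore before returning to caller */
    return -1;
}
```

A caller like pq_getbyte_if_available() that tests `errno == EAGAIN` after the careless variant would misbehave; after the careful variant it sees the value SSL_read left behind.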


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] segfault with contrib lo

2013-11-23 Thread Tom Lane
Sawada Masahiko  writes:
> On Mon, Oct 7, 2013 at 12:32 PM, Marc Cousin  wrote:
>>> I was using the lo contrib a few days ago and wasn't paying attention, and
>>> forgot the "for each row" in the create trigger command... PostgreSQL
>>> segfaulted, when the trigger tried to access the row's attributes.
>>> 
>>> Please find attached a patch to control that the trigger is correctly
>>> defined (as in the example): a before trigger, for each row, and a
>>> parameter (if the parameter was omitted, it segfaulted too). I hope I did
>>> this correctly.

This looks good to me, except I see no reason for the trigger to refuse to
run as an AFTER trigger.  It works fine like that (I checked), and in some
scenarios people might prefer to do it that way to be sure they're seeing
the final version of the update.

> Sorry for late response.
> I tried the patch you attached.
> It is simple patch, and works fine.
> It is just my suggestion that error message shows name of trigger.

That seemed like a good idea to me, so I made all the errors (not
just the new ones) do it.

Committed and back-patched.

regards, tom lane




Re: [HACKERS] Freezing without write I/O

2013-11-23 Thread Jeff Davis
On Wed, 2013-09-25 at 12:31 +0300, Heikki Linnakangas wrote:
> On 19.09.2013 16:24, Andres Freund wrote:
> ...
> > There's probably more.

I think _bt_check_unique is also a problem.

> Hmm, some of those are trivial, but others, like rewrite_heap_tuple(), are 
> currently only passed the HeapTuple, with no reference to the page where 
> the tuple came from. Instead of changing code to always pass that along 
> with a tuple, I think we should add a boolean to HeapTupleData, to 
> indicate if the tuple came from a mature page. If the flag is set, the 
> tuple should be considered visible to everyone, without looking at the 
> xmin/xmax. This might affect extensions, although usually external C 
> code that has to deal with HeapTuples will use functions like 
> heap_form_tuple to do so, and won't need to deal with low-level stuff or 
> visibility themselves.

How bad would the code be to just pass along the buffer when it's
needed? After looking around, it doesn't seem necessarily worse than
adding the struct field (and I agree with Andres that it seems more
proper to pass the buffer along).

I also have a separate question about this patch:

It seems like this gets us closer to 64-bit transaction IDs. Can we get
some help from the compiler to distinguish the two cases in a less
error-prone way? Is it worth thinking about making 64-bit the norm, and
32-bit an optimization in certain places (e.g. the heap and maybe the
xip array of a snapshot)?

Regards,
Jeff Davis






Re: [HACKERS] PostgreSQL Service on Windows does not start. ~ "is not a valid Win32 application"

2013-11-23 Thread Tom Lane
Rajeev rastogi  writes:
> One suggestion:
> Instead of using sizeof(cmdLine),
>   a. Can't we use strlen  (hence small 'for' loop).
>   b. Or use memmove to move one byte. 

I looked at this patch a bit.  I agree that we need to fix
pgwin32_CommandLine to double-quote the executable name, but it needs a
great deal more work than that :-(.  Whoever wrote this code was
apparently unacquainted with the concept of buffer overrun.  It's not
going to be hard at all to crash pg_ctl with overlength arguments.  I'm
not sure that that amounts to a security bug, but it's certainly bad.

After some thought it seems like the most future-proof fix is to not
use a fixed-length buffer for the command string at all.  The attached
revised patch switches it over to using a PQExpBuffer instead, which is
pretty much free since we're relying on libpq anyway in this program.
(We still use a fixed-length buffer for the program path, which is OK
because that's what find_my_exec and find_other_exec expect.)

In addition, I fixed it to append .exe in both cases not just the one.

I'm not in a position to actually test this, but it does compile
without warnings.

regards, tom lane

diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c
index 8399cdd..dd80719 100644
*** a/src/bin/pg_ctl/pg_ctl.c
--- b/src/bin/pg_ctl/pg_ctl.c
***
*** 18,24 
--- 18,26 
  #endif
  
  #include "postgres_fe.h"
+ 
  #include "libpq-fe.h"
+ #include "pqexpbuffer.h"
  
  #include 
  #include 
*** pgwin32_IsInstalled(SC_HANDLE hSCM)
*** 1238,1253 
  static char *
  pgwin32_CommandLine(bool registration)
  {
! 	static char cmdLine[MAXPGPATH];
  	int			ret;
  
- #ifdef __CYGWIN__
- 	char		buf[MAXPGPATH];
- #endif
- 
  	if (registration)
  	{
! 		ret = find_my_exec(argv0, cmdLine);
  		if (ret != 0)
  		{
  			write_stderr(_("%s: could not find own program executable\n"), progname);
--- 1240,1252 
  static char *
  pgwin32_CommandLine(bool registration)
  {
! 	PQExpBuffer cmdLine = createPQExpBuffer();
! 	char		cmdPath[MAXPGPATH];
  	int			ret;
  
  	if (registration)
  	{
! 		ret = find_my_exec(argv0, cmdPath);
  		if (ret != 0)
  		{
  			write_stderr(_("%s: could not find own program executable\n"), progname);
*** pgwin32_CommandLine(bool registration)
*** 1257,1263 
  	else
  	{
  		ret = find_other_exec(argv0, "postgres", PG_BACKEND_VERSIONSTR,
! 			  cmdLine);
  		if (ret != 0)
  		{
  			write_stderr(_("%s: could not find postgres program executable\n"), progname);
--- 1256,1262 
  	else
  	{
  		ret = find_other_exec(argv0, "postgres", PG_BACKEND_VERSIONSTR,
! 			  cmdPath);
  		if (ret != 0)
  		{
  			write_stderr(_("%s: could not find postgres program executable\n"), progname);
*** pgwin32_CommandLine(bool registration)
*** 1267,1320 
  
  #ifdef __CYGWIN__
  	/* need to convert to windows path */
  #if CYGWIN_VERSION_DLL_MAJOR >= 1007
! 	cygwin_conv_path(CCP_POSIX_TO_WIN_A, cmdLine, buf, sizeof(buf));
  #else
! 	cygwin_conv_to_full_win32_path(cmdLine, buf);
  #endif
! 	strcpy(cmdLine, buf);
  #endif
  
  	if (registration)
! 	{
! 		if (pg_strcasecmp(cmdLine + strlen(cmdLine) - 4, ".exe") != 0)
! 		{
! 			/* If commandline does not end in .exe, append it */
! 			strcat(cmdLine, ".exe");
! 		}
! 		strcat(cmdLine, " runservice -N \"");
! 		strcat(cmdLine, register_servicename);
! 		strcat(cmdLine, "\"");
! 	}
  
  	if (pg_config)
! 	{
! 		strcat(cmdLine, " -D \"");
! 		strcat(cmdLine, pg_config);
! 		strcat(cmdLine, "\"");
! 	}
  
  	if (registration && do_wait)
! 		strcat(cmdLine, " -w");
  
  	if (registration && wait_seconds != DEFAULT_WAIT)
! 		/* concatenate */
! 		sprintf(cmdLine + strlen(cmdLine), " -t %d", wait_seconds);
  
  	if (registration && silent_mode)
! 		strcat(cmdLine, " -s");
  
  	if (post_opts)
  	{
- 		strcat(cmdLine, " ");
- 		if (registration)
- 			strcat(cmdLine, " -o \"");
- 		strcat(cmdLine, post_opts);
  		if (registration)
! 			strcat(cmdLine, "\"");
  	}
  
! 	return cmdLine;
  }
  
  static void
--- 1266,1319 
  
  #ifdef __CYGWIN__
  	/* need to convert to windows path */
+ 	{
+ 		char		buf[MAXPGPATH];
+ 
  #if CYGWIN_VERSION_DLL_MAJOR >= 1007
! 		cygwin_conv_path(CCP_POSIX_TO_WIN_A, cmdPath, buf, sizeof(buf));
  #else
! 		cygwin_conv_to_full_win32_path(cmdPath, buf);
  #endif
! 		strcpy(cmdPath, buf);
! 	}
  #endif
  
+ 	/* if path does not end in .exe, append it */
+ 	if (strlen(cmdPath) < 4 ||
+ 		pg_strcasecmp(cmdPath + strlen(cmdPath) - 4, ".exe") != 0)
+ 		snprintf(cmdPath + strlen(cmdPath), sizeof(cmdPath) - strlen(cmdPath),
+  ".exe");
+ 
+ 	/* be sure to double-quote the executable's name in the command */
+ 	appendPQExpBuffer(cmdLine, "\"%s\"", cmdPath);
+ 
+ 	/* append assorted switches to the command line, as needed */
+ 
  	if (registration)
! 		appendPQExpBuffer(cmdLine, " runservice -N \"%s\"",
! 		  register_servicename);
  
  	if (pg_config)
! 		appendPQExpBuffer(cmdLine, 

Re: [HACKERS] Completing PL support for Event Triggers

2013-11-23 Thread Peter Eisentraut
I have committed the PL/Tcl part.

I'll work on the PL/Perl part next.

I believe we're still waiting on something from you for PL/Python.





Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-11-23 Thread Peter Geoghegan
On Mon, Nov 18, 2013 at 1:54 AM, Sameer Thakur  wrote:
> Please find v10 of patch attached. This patch addresses following
> review comments

I've cleaned this up - revision attached - and marked it "ready for committer".

I decided that queryid should be of type oid, not bigint. This is
arguably a slight abuse of notation, but ultimately Oids are just
abstract object identifiers (so say the docs), and since there is no
other convenient, minimal way of representing unsigned 32-bit integers
in the view that I'm aware of, I'm inclined to think that it's
appropriate.

In passing, I've made pg_stat_statements invalidate serialized entries
if there is a change in major version. This seems desirable as a
catch-all invalidator of entries.

I note that Tom has objected to exposing the queryid in the past, on
numerous occasions. I'm more confident than ever that it's actually
the right thing to do. I've had people I don't know walk up to me at
conferences and ask me why we don't already expose this at least
twice now. There are very strong indications that many people want
this, and given that I've documented the caveats, I think that we
should trust those calling for this. At the very least, it allows
people to GROUP BY queryid, when they don't want things broken out by
userid.

We're self-evidently already effectively relying on the queryid to be
as stable as it is documented to be in this patch. The hash function
cannot really change in minor releases, because to do so would at the
very least necessitate re-indexing hash indexes, and would of course
invalidate internally managed pg_stat_statements entries, both of
which are undesirable outcomes (and therefore, for these reasons and
more, unlikely). Arguments for not documenting hash_any() do not apply
here -- we're already suffering the full consequences of whatever
queryid instability may exist.

Quite apart from all of that, I think we need to have a way of
identifying particular entries for the purposes of supporting
per-entry "settings". Recent discussion about min/max query time, or
somehow characterizing the distribution of each entry's historic
execution time (or whatever) have not considered one important
question: What are you supposed to do when you find out that there is
an outlier (whatever an outlier is)?

I won't go into the details, because there is little point, but I'm
reasonably confident that it will be virtually impossible for
pg_stat_statements itself to usefully classify particular query
executions as outliers (I'm not even sure that we could do it if we
assumed a normal distribution, which would be bogus, and certainly
made very noisy by caching effects and so on. Furthermore, who are we
to say that an outlier is an execution time two sigmas to the right?
Seems useless).

Outliers are typically caused by things like bad plans, or problematic
constant values that appear in the most common values list (and are
therefore just inherently far more expensive to query against), or
lock contention. In all of those cases, with a min/max or something we
probably won't even get to know what the problematic constant values
were when response time suddenly suffers, because of course
pg_stat_statements doesn't help with that. So have we gained much?
Even with detective work, the trail might have gone cold by the time
the outlier is examined. And, monitoring is only one concern -- what
about alerting?

The bigger point is that having this will facilitate being able to
mark entries as "SLA queries" or something like that, where if their
execution exceeds a time (specified by the DBA per entry), that is
assumed to be very bad, and pg_stat_statements complains. That gets
dumped to the logs (which ought to be a rare occurrence when the
facility is used correctly). Of course, the (typically particularly
problematic) constant values *do* appear in the logs, and there is a
greppable keyword, potentially for the benefit of a tool like
tail_n_mail. You could think of this as being like a smart
log_min_duration_statement. I think that the DBA needs to tell
pg_stat_statements what to care about, and what bad looks like. If the
DBA doesn't know where to start specifying such things, the 5 queries
with the most calls can usefully have this set to (mean_execution_time
* 1.5) or something. SLA queries can also be "pinned", perhaps  (that
is, given a "stay of execution" when eviction occurs).

-- 
Peter Geoghegan
diff --git a/contrib/pg_stat_statements/Makefile b/contrib/pg_stat_statements/Makefile
new file mode 100644
index e8aed61..95a2767
*** a/contrib/pg_stat_statements/Makefile
--- b/contrib/pg_stat_statements/Makefile
*** MODULE_big = pg_stat_statements
*** 4,11 
  OBJS = pg_stat_statements.o
  
  EXTENSION = pg_stat_statements
! DATA = pg_stat_statements--1.1.sql pg_stat_statements--1.0--1.1.sql \
! 	pg_stat_statements--unpackaged--1.0.sql
  
  ifdef USE_PGXS
  PG_CONFIG = pg_config
--- 4,11 
  OBJS = pg_stat_statements.o
  
  EX

Re: [HACKERS] [PATCH] configure: allow adding a custom string to PG_VERSION

2013-11-23 Thread Peter Eisentraut
On Wed, 2013-11-20 at 22:41 +0900, Michael Paquier wrote:
> Here are a couple of comments about the patch:
> 1) I think that you should regenerate ./configure as well with this
> patch to include all the changes together (someone correct me if I am
> wrong here!)

Doesn't matter either way.

> 2) This new option should be added in the section "## Command line
> options" in configure.in

Yes.

> 3) PG_VERSION is not a variable name adapted IMO, as it might contain
> custom information. Something like PG_VERSION_TOTAL perhaps?

I don't think it's necessary to split this up further.  We have
PG_VERSION and PG_MAJORVERSION.  What's the use for one more level?





Re: [HACKERS] CREATE FOREIGN TABLE ( ... LIKE ... )

2013-11-23 Thread Vik Fearing
On 10/15/2013 07:50 AM, David Fetter wrote:
> On Mon, Oct 07, 2013 at 11:16:56PM -0700, David Fetter wrote:
>> Folks,
>>
>> Please find attached a patch implementing and documenting, to some
>> extent, $subject.  I did this in aid of being able to import SQL
>> standard catalogs and other entities where a known example could
>> provide a template for a foreign table.
>>
>> Should there be errhint()s, too?  Should we pile up all such errors
>> and mention them at the end rather than simply bailing on the first
>> one?
>>
>> TBD: regression tests.
> Now included: regression tests for disallowed LIKE options.

I like this patch, but I don't like its implementation at all.

First of all, the documentation doesn't compile:

openjade:ref/create_foreign_table.sgml:124:17:E: end tag for "LISTITEM"
omitted, but OMITTAG NO was specified
openjade:ref/create_foreign_table.sgml:119:4: start tag was here


I fixed that, and then noticed that like_option is not explained like it
is in CREATE TABLE.

Then I got down to the description of the LIKE clause in both pages, and
I noticed the last line of CREATE TABLE, which is "Inapplicable options
(e.g., INCLUDING INDEXES from a view) are ignored.".  This is
inconsistent with the behavior of this patch to throw errors for
inapplicable options.

Attached is a patch which corrects and completes the documentation
issues noted above, and also silently ignores inapplicable options.  In
addition to reducing patch size, this also allows the use of INCLUDING
ALL.  Because these options no longer produce errors, and that's all the
regression tests were looking for, I have removed those tests
(unfortunately leaving none).

Aside from this difference in behavior, I see no fault in this patch.

I am marking this patch as 'returned with feedback' in the commitfest app.

-- 
Vik

*** a/doc/src/sgml/ref/create_foreign_table.sgml
--- b/doc/src/sgml/ref/create_foreign_table.sgml
***
*** 20,25 
--- 20,26 
  
  CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
  column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
+ | LIKE source_table [ like_option ... ] }
  [, ... ]
  ] )
SERVER server_name
***
*** 31,36  CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
--- 32,39 
  { NOT NULL |
NULL |
DEFAULT default_expr }
+ 
+ and like_option is the same as for .
  
   
  
***
*** 114,119  CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name
--- 117,135 
 
  
 
+ LIKE source_table [ like_option ... ]
+ 
+  
+   The LIKE clause specifies a table from which
+   the new foreign table automatically copies all column names and their data types.
+  
+  
+   Inapplicable options (e.g., INCLUDING STORAGE) are ignored.
+  
+ 
+
+ 
+
  NOT NULL
  
   
*** a/src/backend/parser/parse_utilcmd.c
--- b/src/backend/parser/parse_utilcmd.c
***
*** 649,655  transformTableConstraint(CreateStmtContext *cxt, Constraint *constraint)
  /*
   * transformTableLikeClause
   *
!  * Change the LIKE  portion of a CREATE TABLE statement into
   * column definitions which recreate the user defined column portions of
   * .
   */
--- 649,655 
  /*
   * transformTableLikeClause
   *
!  * Change the LIKE  portion of a CREATE [FOREIGN] TABLE statement into
   * column definitions which recreate the user defined column portions of
   * .
   */
***
*** 668,679  transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla
  	setup_parser_errposition_callback(&pcbstate, cxt->pstate,
  	  table_like_clause->relation->location);
  
- 	/* we could support LIKE in many cases, but worry about it another day */
- 	if (cxt->isforeign)
- 		ereport(ERROR,
- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-  errmsg("LIKE is not supported for creating foreign tables")));
- 
  	relation = relation_openrv(table_like_clause->relation, AccessShareLock);
  
  	if (relation->rd_rel->relkind != RELKIND_RELATION &&
--- 668,673 
***
*** 688,693  transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla
--- 682,691 
  
  	cancel_parser_errposition_callback(&pcbstate);
  
+ 	/* For foreign tables, ignore all but applicable options. */
+ 	if (cxt->isforeign)
+ 		table_like_clause->options &= CREATE_TABLE_LIKE_DEFAULTS | CREATE_TABLE_LIKE_COMMENTS;
+ 
  	/*
  	 * Check for privileges
  	 */



Re: [HACKERS] Dynamic Shared Memory stuff

2013-11-23 Thread Peter Geoghegan
On Sat, Nov 23, 2013 at 4:21 PM, Jeremy Harris  wrote:
> Its performance shines on partially- or reverse-sorted input.

Search the archives for the work I did on timsort support a while
back. A patch was posted, that had some impressive results provided
you just considered the number of comparisons (and not TPS when
sorting text), but at the time my sense was that it didn't have broad
enough applicability for me to pursue further. That doesn't mean the
idea wasn't useful, and it certainly doesn't mean that my rough patch
couldn't be improved upon. For one thing, if there was a type that had
a comparator that was, say, an order of magnitude more expensive than
bttextcmp, it would definitely be a big win.

-- 
Peter Geoghegan




Re: [HACKERS] Dynamic Shared Memory stuff

2013-11-23 Thread Jeremy Harris

On 20/11/13 19:58, Robert Haas wrote:

Parallel sort, and then parallel other stuff.  Eventually general
parallel query.

I have recently updated https://wiki.postgresql.org/wiki/Parallel_Sort
and you may find that interesting/helpful as a statement of intent.


I've been playing with an internal merge sort which may be of interest
in this area of work.  While I've not worked on parallelizing the merge
stages it does not feel like an impossible task.  More to the immediate
point, the input stage and readout stage can do useful work
overlapped with the data source and sink (respectively).

The current implementation uses the same amount of memory as the
quicksort one, and does approximately the same number of compares
(on random input).  The overheads are higher than the quicksort
implementation (up to 50% higher cpu time, on a single key of
random integers).

Its performance shines on partially- or reverse-sorted input.

Transition to (the existing) external sort is implemented, as is
the limited random-access facility, and bounded output.

It supports unique-output.  The planner interface is extended to
return an indication of dedup guarantee, and the Unique node uses
this to elide itself at planning time.  Dedup operations also
cover external sorts.
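The property described above -- shining on partially- or reverse-sorted input -- is characteristic of run-detecting merge sorts in general. The toy below is an illustration of that general technique only, not code from the implementation being discussed: it finds pre-existing ascending runs and merges them pairwise, so already-sorted input is recognized in a single O(n)-compare pass.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Merge two adjacent sorted runs a[lo..mid) and a[mid..hi) via tmp. */
static void
merge_runs(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;

    while (i < mid && j < hi)
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++]; /* stable: ties take left */
    while (i < mid)
        tmp[k++] = a[i++];
    while (j < hi)
        tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof(int));
}

/* Natural merge sort: repeatedly scan for adjacent runs and merge
 * them, until the whole array is one run. */
static void
natural_merge_sort(int *a, int n)
{
    int *tmp = malloc(n * sizeof(int));

    for (;;)
    {
        int lo = 0, merges = 0;

        while (lo < n)
        {
            int mid = lo + 1, hi;

            while (mid < n && a[mid - 1] <= a[mid])
                mid++;                          /* end of first run */
            hi = mid;
            if (hi < n)
            {
                hi++;
                while (hi < n && a[hi - 1] <= a[hi])
                    hi++;                       /* end of second run */
                merge_runs(a, tmp, lo, mid, hi);
                merges++;
            }
            lo = hi;
        }
        if (merges == 0)
            break;                              /* one run left: sorted */
    }
    free(tmp);
}
```

On random input this degenerates to an ordinary bottom-up merge; on mostly-sorted input the runs are long and few, which is where the compare count drops sharply.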
--
Cheers,
   Jeremy




[HACKERS] MultiXact bugs

2013-11-23 Thread Andres Freund
Hi,

The attached pgbench testcase can reproduce two issues:
1) (takes a bit)
TRAP: FailedAssertion("!(((xid) >= ((TransactionId) 3)))", File: "pruneheap.c",
Line: 601)

That's because HeapTupleHeaderGetUpdateXid() ignores aborted updaters
and returns InvalidTransactionId in that case, but
HeapTupleSatisfiesVacuum() returns HEAPTUPLE_DELETE_IN_PROGRESS...

Looks like a 9.3+ issue, and shouldn't have all that high an impact in
non-assert builds; page pruning will be delayed a bit.

2) we frequently error out in heap_lock_updated_tuple_rec() with
ERROR:  unable to fetch updated version of tuple

That's because when we're following a ctid chain, it's perfectly
possible for the updated version of a tuple to already have been
vacuumed/pruned away if the updating transaction has aborted.

Also looks like a 9.3+ issue to me, slightly higher impact, but in the
end you're just getting some errors under concurrency.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
BEGIN;
DROP TABLE IF EXISTS data, referenced;
CREATE TABLE referenced(
referenced_id serial primary key,
data int NOT NULL
);

CREATE TABLE data(
data_id serial primary key,
referenced_id int NOT NULL REFERENCES referenced,
data int NOT NULL
);

INSERT INTO referenced SELECT nextval('referenced_referenced_id_seq'), 1 FROM 
generate_series(1, 20);

INSERT INTO data(referenced_id, data)
   SELECT trunc(random()*20+1)::int, g.i
   FROM generate_series(1, 10) g(i);
COMMIT;
\setrandom ref 1 20
BEGIN;
SELECT * FROM referenced WHERE referenced_id = :ref FOR KEY SHARE;
SAVEPOINT f;
UPDATE referenced SET data = data WHERE referenced_id = :ref;
ROLLBACK TO SAVEPOINT f;
COMMIT;



Re: [HACKERS] Re: Server is not getting started with log level as debug5 on master after commit 3147ac

2013-11-23 Thread Tom Lane
Amit Kapila  writes:
> On Fri, Nov 22, 2013 at 9:30 AM, Amit Kapila  wrote:
>> I could think of below possible ways to fix the problem:
>> a. in function pvsnprintf(), save the errno before setting it to 0 and
>> then before exiting function reset it back to saved errno. I think
>> this is sane because in function pvsnprintf, if any error occurs due
>> to which errno is changed, it will not return control, so errno will
>> not be required by callers.
>> b. we can change the callers like _dosmaperr() who have responsibility
>> to save errno for their callers.
>> 
>> Patch with approach a) is attached with this mail; we can change the code as
>> per approach b) or any other better method as well, but for now I have
>> prepared patch with approach a), as I could not see any problem with
>> it.

> Again looking at it, I think a better fix would be to restore 'errno'
> from 'edata->saved_errno' in errfinish() in case control returns
> back to the caller (elevel <= WARNING). It seems to me that this fix is
> required anyway because if we return from errfinish (ereport/elog) to
> the caller, it should restore 'errno', as the caller might need to take
> some action based on errno.
> Patch to restore 'errno' in errfinish() is attached with this mail.

I think this is pretty misguided.  In general, there should be no
expectation that a function call doesn't stomp on errno unless it's
specifically documented not to --- which surely ereport() never has
been.  So generally it's the responsibility of any code that needs
errno to be preserved to do so itself.  In particular, in the case
at hand:

if (doserrors[i].winerr == e)
{
    errno = doserrors[i].doserr;
#ifndef FRONTEND
    ereport(DEBUG5,
            (errmsg_internal("mapped win32 error code %lu to %d",
                             e, errno)));
#elif FRONTEND_DEBUG
    fprintf(stderr, _("mapped win32 error code %lu to %d"), e, errno);
#endif
    return;
}

even if we were to do what you suggest to make the ereport() call
preserve errno, the FRONTEND_DEBUG code path would still be utterly
broken, because fprintf() is certainly capable of changing errno.

So basically, _dosmaperr() is broken and always has been, and it's
surprising we've not seen evidence of that before.  It needs to not
try to set the real errno variable till it's done logging about it.

Normally I avoid touching Windows-specific code for lack of ability
to test it, but in this case the needed fix seems pretty clear, so
I'll go make it.

regards, tom lane




Re: [HACKERS] Extension Templates S03E11

2013-11-23 Thread Dimitri Fontaine
Hi,

Jeff Davis  writes:
> In the CF app, this is marked "Ready for Committer". That's a bit vague
> here, considering Dimitri, you, Peter, and Alvaro are all committers.
> Who is this patch waiting on? Is the discussion concluding, or does it
> need another round of review?

Thanks for the confusion I guess, but I'm no committer here ;-)

This patch has received extensive review in July and I think it now
properly implements the design proposed by Tom and Heikki in 9.3/CF4.

As the patch didn't make it in already, yes, it needs another (final)
round of review. The main difficulty in reviewing is understanding the
design and the relation between our current model of extensions and what
this patch offers.

You might find the discussions we had with Markus Wanner quite useful in
this light. The current situation is that I believe the patch implements
the same “template” model as the on-disk extensions, down to dependency
tracking.

IIRC I left only one differing behavior, which is that you're not
allowed to DROP an Extension Template when it's needed for a dump and
restore cycle, whereas you could of course do that at the file system
level (and pg_restore on a new system would depend on other files).

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Extension Templates S03E11

2013-11-23 Thread Jeff Davis
On Mon, 2013-11-04 at 08:43 -0500, Stephen Frost wrote:
> I'll still go ahead and start looking through this, per our discussion.

In the CF app, this is marked "Ready for Committer". That's a bit vague
here, considering Dimitri, you, Peter, and Alvaro are all committers.
Who is this patch waiting on? Is the discussion concluding, or does it
need another round of review?

Regards,
Jeff Davis






Re: [HACKERS] Regress tests to improve the function coverage of schemacmds and user and tablespace files

2013-11-23 Thread David Rowley
On Sun, Nov 24, 2013 at 6:37 AM, Kevin Grittner  wrote:

> David Rowley  wrote:
>
> > I've just had a look at both of these patches. All tests that
> > have been added seem to cover new areas that are not previously
> > tested, they also seem to cleanup properly after themselves, so I
> > think these should be a worthwhile addition to the regression
> > tests.
>
> Thanks for reviewing!  Did you happen to note the impact on `make
> check` runtime?  There are many people who run that many times per
> day while working on development, so we try to keep new tests that
> significantly extend that separate.  We haven't quite worked out
> the best way to exercise such longer-running tests, but I suspect
> we soon will.  At any rate, this is a piece of information the
> committer will want, so you will be helping whoever that is if you
> can supply it.
>
>
I've done a quick benchmark on this, this morning.
Note that I'm using Windows here, and I used PowerShell to time the
regression run with the following command:

PS D:\Postgres\b\src\tools\msvc> Measure-Command { .\vcregress.bat check }

I ran the tests 10 times each.
I ran the patched version first, then did git reset --hard to revert
the patched changes, then ran the tests again.

The average and median results over the 10 runs are as follows:

          Patched      Unpatched    Time increased by
Average   48.23265888  47.70979854  101.10%
Median    47.8993686   47.51177815  100.82%


The slowdown is not too bad; it's just around a 1% increase in runtime.

I've attached the results in spreadsheet format.

Regards

David Rowley

--
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


regression_test_benchmark.xlsx
Description: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet



Re: [HACKERS] local_preload_libraries logspam

2013-11-23 Thread Jeff Davis
On Mon, 2013-05-13 at 15:22 -0700, Peter Geoghegan wrote:
> On Tue, May 7, 2013 at 7:01 PM, Tom Lane  wrote:
> > It seems reasonable to me to reduce it to DEBUG1 level.
> 
> Attached patch renders all "loaded library..." messages DEBUG1,
> regardless of whether local_preload_libraries or
> shared_preload_libraries is involved, and regardless of EXEC_BACKEND.

This looks quite trivial and seems to match the discussed behavior.

It's marked as "Waiting on Review", but it looks more like it's in need
of a committer. I'll do a quick review and commit this in a couple hours
unless someone objects.

Regards,
Jeff Davis






[HACKERS] Schedule for upcoming back-branch releases

2013-11-23 Thread Tom Lane
Because of the recently discovered data-corruption-on-replication-slaves
bug, we're going to put out new minor releases for the back branches.
The plan is to wrap tarballs Monday Dec 2 for public announcement Thursday
Dec 5.  We considered doing it this coming week instead, but that seems
impractical due to the Thanksgiving holiday in the US.  Besides, there are
a few other open bugs that could use a chance to get fixed first.  (But,
to be clear, this train is leaving the station next Monday, whether
anything else is fixed or not.)

regards, tom lane




Re: [HACKERS] Logging WAL when updating hintbit

2013-11-23 Thread Jeff Davis
On Tue, 2013-11-19 at 11:42 -0500, Robert Haas wrote:
> On Thu, Nov 14, 2013 at 1:02 AM, Sawada Masahiko  
> wrote:
> > The attached patch adds a new wal_level, 'all'.
> > If wal_level is set to 'all', the server logs WAL not only as when
> > wal_level is set to 'hot_standby', but also when updating hint bits.
> > That is, we will be able to know all of the changed block numbers by
> > reading the WAL.
> > This wal_level is infrastructure for fast failback (i.e., without a
> > fresh backup).
> > It needs to cooperate with pg_rewind.
> 
> I'm also not 100% sure we want it, but let's hear what others think.

My take is that configuration options should be used to serve different
use cases. One thing I like about postgres is that it doesn't have
options for the sake of options.

The trade-off here seems to be: if you want fast failback, you have to
write more WAL during normal operation. But it's not clear to me which
one I should choose for a given installation. If there's a lot of data,
then fast failback is nice, but then again, logging the hint bits on a
large amount of data might be painful during normal operation. The only
time the choice is easy is when you already have checksums enabled.

I think we should work some more in this area first so we can end up
with something that works for everyone; or at least a more clear choice
to offer users. My hope is that it will go something like:

1. We get more reports from the field about checksum performance
2. pg_rewind gets some more attention
3. we find a way to upgrade a replica set using pg_upgrade and pg_rewind
so that the replicas do not all need a new basebackup after an upgrade
4. We further mitigate the performance impact of logging all page
modifications
5. We eventually see that the benefits of logging all page modifications
outweigh the performance cost for most people, and start recommending it
to most people
6. The other WAL levels become more specialized for single, unreplicated
instances

That's just a hope, of course, but we've made some progress and I think
it's a plausible outcome. As it stands, the replica re-sync after any
failover or upgrade seems to be a design gap.

All that being said, I don't object to this option -- I just want it to
lead us somewhere. Not be another option that we keep around forever
with conflicting recommendations about how to set it.

Regards,
Jeff Davis






Re: [HACKERS] Modify the DECLARE CURSOR command tag depending on the scrollable flag

2013-11-23 Thread Tom Lane
Boszormenyi Zoltan  writes:
> Attached is the patch that modified the command tag returned by
> the DECLARE CURSOR command. It returns "DECLARE SCROLL CURSOR"
> or "DECLARE NO SCROLL CURSOR" depending on the cursor's
> scrollable flag that can be determined internally even if neither is
> asked explicitly.

This does not strike me as an acceptable change.  It will break any code
that's expecting the existing command tag, for little or no benefit
to most applications.  Even if backwards compatibility were of no concern,
I'm not convinced it's a good thing to expose the backend's internal
choices about query plans used for cursors, which is what this is
basically doing.

> It is expected by the ECPG cursor readahead code.

And that doesn't sound like a sufficient excuse.  You should only assume a
cursor is scrollable if SCROLL was specified in the cursor declaration
command, which it'd seem to me is something ECPG would or easily could
know about commands it issues.

regards, tom lane




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Sebastian Hilbert
Am Samstag, 23. November 2013, 08:44:42 schrieb Kevin Grittner:
> Bruce Momjian  wrote:
> > I am not a fan of backpatching any of this.
> 
> Here's my problem with that.  Here's setup to create what I don't
> think is all that weird a setup:
> 
> initdb Debug/data
> pg_ctl -D Debug/data -l Debug/data/logfile -w start
> createdb test
> psql test /dev/null 2>&1
> psql postgres -c "alter database test set default_transaction_read_only = on;"
> psql postgres -c "alter database postgres set default_transaction_read_only = on;"
> 
> The following appears to produce a good backup, since there is no
> error:
> 
> pg_dumpall >~/dumpall.sql
> 
> Let's create a brand new cluster and start it up:
> 
> pg_ctl -D Debug/data -m fast -w stop
> rm -fr Debug/data/*
> initdb Debug/data
> pg_ctl -D Debug/data -l Debug/data/logfile -w start
> 
> Now we attempt to restore what we thought was a good backup:
> 
> psql postgres <~/dumpall.sql
> 
> What we get is:
> 
> SET
> SET
> ERROR:  role "kgrittn" already exists
> ALTER ROLE
> ALTER DATABASE
> REVOKE
> REVOKE
> GRANT
> GRANT
> CREATE DATABASE
> ALTER DATABASE
> You are now connected to database "postgres" as user "kgrittn".
> SET
> SET
> SET
> SET
> SET
> SET
> ERROR:  cannot execute COMMENT in a read-only transaction
> ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
> ERROR:  cannot execute COMMENT in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
> You are now connected to database "template1" as user "kgrittn".
> SET
> SET
> SET
> SET
> SET
> SET
> COMMENT
> CREATE EXTENSION
> COMMENT
> REVOKE
> REVOKE
> GRANT
> GRANT
> You are now connected to database "test" as user "kgrittn".
> SET
> SET
> SET
> SET
> SET
> SET
> ERROR:  cannot execute CREATE SCHEMA in a read-only transaction
> ERROR:  cannot execute ALTER SCHEMA in a read-only transaction
> ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
> ERROR:  cannot execute COMMENT in a read-only transaction
> SET
> SET
> SET
> ERROR:  cannot execute CREATE TABLE in a read-only transaction
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  cannot execute CREATE VIEW in a read-only transaction
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> SET
> ERROR:  relation "public.tv" does not exist
> LINE 4:FROM public.tv
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> SET
> ERROR:  cannot execute CREATE VIEW in a read-only transaction
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "tvv" does not exist
> LINE 3:FROM tvv
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  cannot execute CREATE VIEW in a read-only transaction
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "tvvmv" does not exist
> LINE 3:FROM tvvmv
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "t" does not exist
> LINE 4:FROM t
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "tm" does not exist
> LINE 3:FROM tm
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "mvschema.tvm" does not exist
> LINE 3:FROM mvschema.tvm
> ^
> ERROR:  cannot execute ALTER TABLE in a read-only transaction
> ERROR:  relation "t" does not exist
> invalid command \.
> ERROR:  syntax error at or near "1"
> LINE 1: 1 x 2
> ^
> ERROR:  cannot execute CREATE INDEX in a read-only transaction
> ERROR:  cannot execute CREATE INDEX in a read-only transaction
> ERROR:  cannot execute CREATE INDEX in a read-only transaction
> ERROR:  cannot execute CREATE INDEX in a read-only transaction
> SET
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> SET
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
> 
> If the dump is made with the attached patch, you get this on
> restore:
> 
> SET
> SET
> SET
> ERROR:  role "kgrittn" already exists
> ALTER ROLE
> ALTER DATABASE
> REVOKE
> REVOKE
> GRANT
> GRANT
> CREATE DATABASE
> ALTER DATABASE
> You are now connected to database "postgres" a

Re: [HACKERS] UNION ALL on partitioned tables won't use indices.

2013-11-23 Thread Tom Lane
Noah Misch  writes:
> I'm unclear on the key ideas behind distinguishing em_is_child members from
> ordinary EC members.  src/backend/optimizer/README says "These members are
> *not* full-fledged members of the EquivalenceClass and do not affect the
> class's overall properties at all."  Is that an optimization to avoid futile
> planner work, or is it necessary for correctness?  If the latter, what sort of
> query might give wrong answers if an EquivalenceMember were incorrectly added
> as em_is_child = false?

See commit dd4134ea, which added that text.  IIRC, the key point is that
among "real" EC members, there will never be duplicates: a Var "x.y"
for instance cannot appear in two different ECs, because we'd have merged
the two ECs into one instead.  However, this property does *not* hold for
child EC members.  The problem is that two completely unrelated UNION ALL
child subselects might have, say, constant "1" in their tlists.  This
should not lead to concluding that we can merge ECs that mention their
UNION ALL parent variables.
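
A hypothetical illustration of that hazard (table and column names
invented here, not taken from any actual report):

```sql
-- Both branches expose the constant 1 in their target lists. If those
-- child tlist entries were treated as full-fledged EquivalenceClass
-- members, the duplicate "1"s could cause the planner to merge the
-- classes containing the unrelated parent variables, effectively
-- concluding a = b across the whole UNION ALL -- which is wrong.
SELECT a, 1 AS k FROM t1
UNION ALL
SELECT b, 1 AS k FROM t2;
```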

I've not looked at these patches yet, but if they're touching equivclass.c
at all, I'd guess that it's wrong.  I think what we want is for appendrel
formation to take care of flattening nested appendrels.  The core of the
problem is that pull_up_simple_union_all handles that for nested
UNION ALLs, and expand_inherited_rtentry sees to it by letting
find_all_inheritors do the recursion, but the two cases don't know about
each other.

My gut feeling after two minutes' thought is that the best fix is for
expand_inherited_rtentry to notice when the parent table is already an
appendrel member, and enlarge that appendrel instead of making a new one.
(This depends on the assumption that pull_up_simple_union_all always
happens first, but that seems OK.)  I'm not sure if that concept exactly
corresponds to either PATCH-3 or PATCH-4, but that's the way I'd think
about it.

> However, adding a qual to one of the inheritance queries once again defeated
> MergeAppend with the patches for approaches (2) and (3).

That's an independent limitation, see is_safe_append_member:

 * Also, the child can't have any WHERE quals because there's no place to
 * put them in an appendrel.  (This is a bit annoying...)

It'd be nice to fix that, but it's not going to be easy, and it should
be a separate patch IMO.  It's pretty much unrelated to the question at
hand here.

> Approaches (2) and (3) leave the inheritance parent with rte->inh == true
> despite no AppendRelInfo pointing to it as their parent.  Until now,
> expand_inherited_rtentry() has been careful to clear rte->inh in such cases.
> My gut feeling is that we should retain that property; what do you think?

Yes.  IIRC, failing to clear rte->inh doesn't actually break anything,
but it will lead to wasted work later in the planner.

> Finally, the patch should add test cases.

Agreed, at least one example showing that flattening does happen.

regards, tom lane




Re: [HACKERS] Errors on missing pg_subtrans/ files with 9.3

2013-11-23 Thread J Smith
On Tue, Nov 19, 2013 at 10:16 AM, J Smith  wrote:
> Alright, we'll look into doing that heading into the weekend.
> Interestingly, we haven't experienced the issue since our main Java
> developer made some modifications to our backend system. I'm not
> entirely sure what the changes entail except that it's a one-liner
> that involves re-SELECTing a table during a transaction. We'll
roll back this change and re-compile Postgres with google-coredumper
> and let it run over the weekend and see where we stand.
>

Okay, I've patched our Postgres install and added in a call for
coredumper to output a core dump. We'll let our processes run for the
next few days and hopefully we'll get something we can look at. I've
attached the patch just in case anyone would like to make sure I've
done it in a sane way and that I've inserted the call in the correct
location.

Cheers


postgresql-coredumper.patch
Description: Binary data



Re: [HACKERS] Regress tests to improve the function coverage of schemacmds and user and tablespace files

2013-11-23 Thread Kevin Grittner
David Rowley  wrote:

> I've just had a look at both of these patches. All tests that
> have been added seem to cover new areas that are not previously
> tested, they also seem to cleanup properly after themselves, so I
> think these should be a worthwhile addition to the regression
> tests.

Thanks for reviewing!  Did you happen to note the impact on `make
check` runtime?  There are many people who run that many times per
day while working on development, so we try to keep new tests that
significantly extend that separate.  We haven't quite worked out
the best way to exercise such longer-running tests, but I suspect
we soon will.  At any rate, this is a piece of information the
committer will want, so you will be helping whoever that is if you
can supply it.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Karsten Hilbert
On Sat, Nov 23, 2013 at 12:11:56PM -0500, Tom Lane wrote:

> I also agree with *not* changing pg_dump, since it is not the charter
> of pg_dump to recreate a whole cluster, and the objection about possibly
> restoring into a database that was meant to be protected by this setting
> seems to have some force.

This is where the suggestion comes in (which was already raised)
to add some command-line option to the effect of

--ignore-default-tx-readonly

which, however, I agree can be worked around by
projects (like GNUmed, in which context this issue
came up at all) providing restore scripts that set
PGOPTIONS appropriately ...
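
For reference, the PGOPTIONS workaround mentioned above might look
roughly like this (untested sketch; the dump path is illustrative):

```shell
#!/bin/sh
# Override the per-database default for this restore session only, so
# the pg_dumpall output applies even to databases that had
# default_transaction_read_only = on. The dump path is an assumption.
PGOPTIONS="-c default_transaction_read_only=off"
export PGOPTIONS
if command -v psql >/dev/null 2>&1 && [ -f "$HOME/dumpall.sql" ]; then
    psql postgres -f "$HOME/dumpall.sql"
fi
```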

Thanks for all the work,
Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Tom Lane
Kevin Grittner  writes:
> Bruce Momjian  wrote:
>> I am not a fan of backpatching any of this.

> Are you saying that you find current behavior acceptable in back
> branches?

I'm inclined to agree with Kevin that this behavior is wrong and
should be fixed (and back-patched), so far as pg_dumpall is concerned.
pg_dumpall's charter is to be able to recreate a database cluster's
contents in a virgin installation, but it's failing to honor that
contract if the cluster has any ALTER DATABASE SET
default_transaction_read_only settings.  Similarly, I think it's
reasonable to try to make pg_upgrade
cope with the case.

I also agree with *not* changing pg_dump, since it is not the charter
of pg_dump to recreate a whole cluster, and the objection about possibly
restoring into a database that was meant to be protected by this setting
seems to have some force.

regards, tom lane




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Karsten Hilbert
On Sat, Nov 23, 2013 at 08:44:42AM -0800, Kevin Grittner wrote:

> Here's my problem with that.  Here's setup to create what I don't
> think is all that weird a setup:

...

> The following appears to produce a good backup, since there is no
> error:

...

> Now we attempt to restore what we thought was a good backup:
> 
> psql postgres <~/dumpall.sql
> 
> What we get is:

> ERROR:  cannot execute COMMENT in a read-only transaction
> ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
> ERROR:  cannot execute COMMENT in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute REVOKE in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
> ERROR:  cannot execute GRANT in a read-only transaction
...
> ERROR:  cannot execute CREATE SCHEMA in a read-only transaction
> ERROR:  cannot execute ALTER SCHEMA in a read-only transaction
> ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
> ERROR:  cannot execute COMMENT in a read-only transaction

...

> If the dump is made with the attached patch, you get this on
> restore:
...
> The cluster is created in the state that was dumped, default read
> only flags and all.

I find the patched behaviour more conformant
with the Principle Of Least Astonishment.

Maybe I am splitting hairs but setting transactions to readonly
per default does not mean the database *as such* is to be readonly.
It literally applies to the *default* state of transactions (as
opposed to the ONLY state of transactions). No more, no less.

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [HACKERS] Building on S390

2013-11-23 Thread Tom Lane
Peter Eisentraut  writes:
> On Fri, 2013-11-22 at 23:32 +, Greg Stark wrote:
>> Debian policy is to always use -fPIC

> My point is, they compile the *backend* as position-independent code.
> The backend is not a shared library.  Maybe it is in Postgres-XC?  But
> at least this makes their build process significantly different, so it's
> doubtful that this is a PG-proper issue.

Note that that's not an unreasonable decision in itself, if it's done
pursuant to some distro policy that daemons should run with ASLR enabled.
(Right before I left Red Hat, we were looking into building PG with -fPIE
for that reason.  It didn't happen yet because of a kernel bug[1], but
it will eventually.)

But there's too many moving parts here for us to know exactly what's going
wrong without more evidence.  The only thing that *is* pretty clear is
that the failure is not with the stock PG build anyway, so changing the
properties of the stock build sounds like the wrong response.  Personally
I'd think it is the job of the Debian package maintainer to determine why
this is breaking.

regards, tom lane

[1] https://bugzilla.redhat.com/show_bug.cgi?id=952946




Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Kevin Grittner
Bruce Momjian  wrote:

> I am not a fan of backpatching any of this.

Here's my problem with that.  Here's setup to create what I don't
think is all that weird a setup:

initdb Debug/data
pg_ctl -D Debug/data -l Debug/data/logfile -w start
createdb test
psql test /dev/null 2>&1
psql postgres -c "alter database test set default_transaction_read_only = on;"
psql postgres -c "alter database postgres set default_transaction_read_only = on;"

The following appears to produce a good backup, since there is no
error:

pg_dumpall >~/dumpall.sql

Let's create a brand new cluster and start it up:

pg_ctl -D Debug/data -m fast -w stop
rm -fr Debug/data/*
initdb Debug/data
pg_ctl -D Debug/data -l Debug/data/logfile -w start

Now we attempt to restore what we thought was a good backup:

psql postgres <~/dumpall.sql

What we get is:

SET
SET
ERROR:  role "kgrittn" already exists
ALTER ROLE
ALTER DATABASE
REVOKE
REVOKE
GRANT
GRANT
CREATE DATABASE
ALTER DATABASE
You are now connected to database "postgres" as user "kgrittn".
SET
SET
SET
SET
SET
SET
ERROR:  cannot execute COMMENT in a read-only transaction
ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
ERROR:  cannot execute COMMENT in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction
You are now connected to database "template1" as user "kgrittn".
SET
SET
SET
SET
SET
SET
COMMENT
CREATE EXTENSION
COMMENT
REVOKE
REVOKE
GRANT
GRANT
You are now connected to database "test" as user "kgrittn".
SET
SET
SET
SET
SET
SET
ERROR:  cannot execute CREATE SCHEMA in a read-only transaction
ERROR:  cannot execute ALTER SCHEMA in a read-only transaction
ERROR:  cannot execute CREATE EXTENSION in a read-only transaction
ERROR:  cannot execute COMMENT in a read-only transaction
SET
SET
SET
ERROR:  cannot execute CREATE TABLE in a read-only transaction
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  cannot execute CREATE VIEW in a read-only transaction
ERROR:  cannot execute ALTER TABLE in a read-only transaction
SET
ERROR:  relation "public.tv" does not exist
LINE 4:    FROM public.tv
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
SET
ERROR:  cannot execute CREATE VIEW in a read-only transaction
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "tvv" does not exist
LINE 3:    FROM tvv
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  cannot execute CREATE VIEW in a read-only transaction
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "tvvmv" does not exist
LINE 3:    FROM tvvmv
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "t" does not exist
LINE 4:    FROM t
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "tm" does not exist
LINE 3:    FROM tm
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "mvschema.tvm" does not exist
LINE 3:    FROM mvschema.tvm
    ^
ERROR:  cannot execute ALTER TABLE in a read-only transaction
ERROR:  relation "t" does not exist
invalid command \.
ERROR:  syntax error at or near "1"
LINE 1: 1 x 2
    ^
ERROR:  cannot execute CREATE INDEX in a read-only transaction
ERROR:  cannot execute CREATE INDEX in a read-only transaction
ERROR:  cannot execute CREATE INDEX in a read-only transaction
ERROR:  cannot execute CREATE INDEX in a read-only transaction
SET
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
SET
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
ERROR:  cannot execute REFRESH MATERIALIZED VIEW in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute REVOKE in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction
ERROR:  cannot execute GRANT in a read-only transaction

If the dump is made with the attached patch, you get this on
restore:

SET
SET
SET
ERROR:  role "kgrittn" already exists
ALTER ROLE
ALTER DATABASE
REVOKE
REVOKE
GRANT
GRANT
CREATE DATABASE
ALTER DATABASE
You are now connected to database "postgres" as user "kgrittn".
SET
SET
SET
SET
SET
SET
SET
COMMENT
CREATE EXTENSION
COMMENT
REVOKE
REVOKE
GRANT
GRANT
You are now connected to database "template1" as user "kgrittn".
SET
SET
SET
SET
SET
SET
SET
COMMENT
CREATE EXTENSION
COMMENT
REVOKE
REVOKE
GRANT
GRANT
You are now connected to database "test" as user "kgrittn".
SET
SET
SET
SET
SET
SET
SET
CREATE SCHEMA
ALTER SCHEMA
CREA

Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 04:38:53PM -0800, Kevin Grittner wrote:
> Andres Freund  wrote:
> > On 2013-11-22 13:34:18 -0800, Kevin Grittner wrote:
> >> Oddly, it didn't complain about creating users within a read-only
> >> transaction.  That seems like a potential bug.
> >
> > There's lots of things that escape XactReadOnly. I've thought (and I
> > think suggested) before that we should put in another layer of defense
> > by also putting a check in AssignTransactionId(). Imo the compatibility
> > woes (like not being able to run SELECT txid_current();) are well worth
> > the nearly ironclad guarantee that we're not writing.
> 
> I agree that something like that would be a good idea; however,
> I'm sure you would agree that would not be material for a
> back-patch to a stable branch.

I am not a fan of backpatching any of this.  We have learned the fix is
more complex than we thought, and the risk of breakage and of pg_dump
diffs changing between minor releases doesn't seem justified.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] pg_upgrade ?deficiency

2013-11-23 Thread Bruce Momjian
On Fri, Nov 22, 2013 at 04:25:57PM -0800, Kevin Grittner wrote:
> Bruce Momjian  wrote:
> > On Fri, Nov 22, 2013 at 01:55:10PM -0800, Kevin Grittner wrote:
> >
> >> It does nothing about pg_upgrade, which is sort of a separate
> >> issue.  My inclination is that connections to the new cluster
> >> should set this and connections to the old should not.
> >
> > It is going to be tricky to conditionally set/reset an
> > environment variable for default_transaction_read_only for
> > old/new clusters.
> 
> Why?  If you know you have connected to the new cluster, set it to
> off; if you know you have connected to the old cluster, don't touch
> it.

Remember this is being done with a PGOPTIONS environment variable, so
you need to set/reset this on every action.  I just don't see the value.
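
For illustration, a hedged sketch (in Python, purely hypothetical -- the
actual pg_upgrade code is C) of what the per-cluster approach Kevin
describes would amount to: only connections to the new cluster get
default_transaction_read_only forced off via PGOPTIONS, and the old
cluster's environment is left untouched.

```python
import os

def connection_env(is_new_cluster: bool) -> dict:
    """Hypothetical sketch: force default_transaction_read_only off
    only for connections to the new cluster; leave the old cluster's
    settings alone.  Names here are illustrative, not pg_upgrade's."""
    env = dict(os.environ)
    if is_new_cluster:
        extra = "-c default_transaction_read_only=off"
        prior = env.get("PGOPTIONS", "")
        env["PGOPTIONS"] = (prior + " " + extra).strip()
    return env

# Every invocation against the new cluster would pass this environment,
# e.g. subprocess.run(["psql", ...], env=connection_env(True)).
print(connection_env(True)["PGOPTIONS"])
```

The cost Bruce points out is visible even in this sketch: the
environment has to be rebuilt (or reset) around every single action.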





Re: [HACKERS] Building on S390

2013-11-23 Thread Peter Eisentraut
On Fri, 2013-11-22 at 23:32 +, Greg Stark wrote:
> According to the Debian build logs, postgres-xc compiles the
> entire
> backend with -fPIC.  Not sure what sense that makes.
> 
> 
> Debian policy is to always use -fPIC

My point is, they compile the *backend* as position-independent code.
The backend is not a shared library.  Maybe it is in Postgres-XC?  But
at least this makes their build process significantly different, so it's
doubtful that this is a PG-proper issue.





Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-23 Thread Oskari Saarenmaa

On 23.11.2013 14:12, Mario Weilguni wrote:

Am 22.11.2013 16:15, schrieb Tom Lane:

[ memo to self: never, ever accept another contrib module whose name
isn't a plain SQL identifier ]


Well, in that case, and since this is a rarely used extension (I would
guess), maybe it would be best to simply rename the extension to
uuidossp (or whatever) and not give it any special treatment?


There are a couple of threads about issues with uuid-ossp (AIUI it's 
abandonware at this point).  If PostgreSQL had a proper PRNG with 
128-bit state, it could implement a uuid_generate_v4() function in 
core, and most people could probably drop uuid-ossp.


I have a branch[1] which implements uuid_generate_v4 in core using 
pg_lrand48, but since it only has 48 bits of state it's probably not an 
acceptable replacement for uuid-ossp for now.
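
To make the state-size point concrete, here is a minimal sketch (Python
rather than C, and assuming a cryptographically strong random source is
available) of what a core uuid_generate_v4() has to do: fill 16 bytes
with randomness and stamp in the version and variant fields, leaving 122
random bits.  A generator with only 48 bits of internal state, like
pg_lrand48, can never cover that space.

```python
import secrets
import uuid

def generate_uuid_v4() -> uuid.UUID:
    """Sketch of an RFC 4122 version-4 UUID built from 16 random bytes.
    Not PostgreSQL code -- just an illustration of the field layout."""
    b = bytearray(secrets.token_bytes(16))
    b[6] = (b[6] & 0x0F) | 0x40   # high nibble of byte 6: version 4
    b[8] = (b[8] & 0x3F) | 0x80   # top bits of byte 8: variant 10xxxxxx
    return uuid.UUID(bytes=bytes(b))

print(generate_uuid_v4().version)  # 4
```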


Is anyone working on a new PRNG for PostgreSQL at the moment?

/ Oskari

[1] https://github.com/saaros/postgres/compare/core-uuid-v4





Re: [HACKERS] PL/Python: domain over array support

2013-11-23 Thread Rodolfo Campero
2013/11/22 Marko Kreen 

>
> One more thing - please update Python 3 regtests too.
>
>
The attached patch (version 3) includes the expected results for Python 3
(file plpython_types_3.out).
diff --git a/src/pl/plpython/expected/plpython_types.out b/src/pl/plpython/expected/plpython_types.out
index 91106e0..785ffca 100644
--- a/src/pl/plpython/expected/plpython_types.out
+++ b/src/pl/plpython/expected/plpython_types.out
@@ -664,6 +664,34 @@ SELECT * FROM test_type_conversion_array_error();
 ERROR:  return value of function with array return type is not a Python sequence
 CONTEXT:  while creating return value
 PL/Python function "test_type_conversion_array_error"
+CREATE DOMAIN ordered_pair_domain AS integer[] CHECK (array_length(VALUE,1)=2 AND VALUE[1] < VALUE[2]);
+CREATE FUNCTION test_type_conversion_array_domain(x ordered_pair_domain) RETURNS ordered_pair_domain AS $$
+plpy.info(x, type(x))
+return x
+$$ LANGUAGE plpythonu;
+SELECT * FROM test_type_conversion_array_domain(ARRAY[0, 100]::ordered_pair_domain);
+INFO:  ([0, 100], <type 'list'>)
+CONTEXT:  PL/Python function "test_type_conversion_array_domain"
+ test_type_conversion_array_domain 
+---
+ {0,100}
+(1 row)
+
+SELECT * FROM test_type_conversion_array_domain(NULL::ordered_pair_domain);
+INFO:  (None, <type 'NoneType'>)
+CONTEXT:  PL/Python function "test_type_conversion_array_domain"
+ test_type_conversion_array_domain 
+---
+ 
+(1 row)
+
+CREATE FUNCTION test_type_conversion_array_domain_check_violation() RETURNS ordered_pair_domain AS $$
+return [2,1]
+$$ LANGUAGE plpythonu;
+SELECT * FROM test_type_conversion_array_domain_check_violation();
+ERROR:  value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check"
+CONTEXT:  while creating return value
+PL/Python function "test_type_conversion_array_domain_check_violation"
 ---
 --- Composite types
 ---
diff --git a/src/pl/plpython/expected/plpython_types_3.out b/src/pl/plpython/expected/plpython_types_3.out
index 523c2ec..25331f2 100644
--- a/src/pl/plpython/expected/plpython_types_3.out
+++ b/src/pl/plpython/expected/plpython_types_3.out
@@ -664,6 +664,34 @@ SELECT * FROM test_type_conversion_array_error();
 ERROR:  return value of function with array return type is not a Python sequence
 CONTEXT:  while creating return value
 PL/Python function "test_type_conversion_array_error"
+CREATE DOMAIN ordered_pair_domain AS integer[] CHECK (array_length(VALUE,1)=2 AND VALUE[1] < VALUE[2]);
+CREATE FUNCTION test_type_conversion_array_domain(x ordered_pair_domain) RETURNS ordered_pair_domain AS $$
+plpy.info(x, type(x))
+return x
+$$ LANGUAGE plpython3u;
+SELECT * FROM test_type_conversion_array_domain(ARRAY[0, 100]::ordered_pair_domain);
+INFO:  ([0, 100], <class 'list'>)
+CONTEXT:  PL/Python function "test_type_conversion_array_domain"
+ test_type_conversion_array_domain 
+---
+ {0,100}
+(1 row)
+
+SELECT * FROM test_type_conversion_array_domain(NULL::ordered_pair_domain);
+INFO:  (None, <class 'NoneType'>)
+CONTEXT:  PL/Python function "test_type_conversion_array_domain"
+ test_type_conversion_array_domain 
+---
+ 
+(1 row)
+
+CREATE FUNCTION test_type_conversion_array_domain_check_violation() RETURNS ordered_pair_domain AS $$
+return [2,1]
+$$ LANGUAGE plpython3u;
+SELECT * FROM test_type_conversion_array_domain_check_violation();
+ERROR:  value for domain ordered_pair_domain violates check constraint "ordered_pair_domain_check"
+CONTEXT:  while creating return value
+PL/Python function "test_type_conversion_array_domain_check_violation"
 ---
 --- Composite types
 ---
diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c
index caccbf9..0a2307a 100644
--- a/src/pl/plpython/plpy_typeio.c
+++ b/src/pl/plpython/plpy_typeio.c
@@ -373,7 +373,7 @@ PLy_output_datum_func2(PLyObToDatum *arg, HeapTuple typeTup)
 	arg->typioparam = getTypeIOParam(typeTup);
 	arg->typbyval = typeStruct->typbyval;
 
-	element_type = get_element_type(arg->typoid);
+	element_type = get_base_element_type(arg->typoid);
 
 	/*
 	 * Select a conversion function to convert Python objects to PostgreSQL
@@ -427,7 +427,8 @@ static void
 PLy_input_datum_func2(PLyDatumToOb *arg, Oid typeOid, HeapTuple typeTup)
 {
 	Form_pg_type typeStruct = (Form_pg_type) GETSTRUCT(typeTup);
-	Oid			element_type = get_element_type(typeOid);
+	/* It's safe to handle domains of array types as its base array type. */
+	Oid			element_type = get_base_element_type(typeOid);
 
 	/* Get the type's conversion information */
 	perm_fmgr_info(typeStruct->typoutput, &arg->typfunc);
@@ -808,6 +809,7 @@ static Datum
 PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv)
 {
 	ArrayType  *array;
+	Datum   rv;
 	int			i;
 	Datum	   *elems;
 	bool	   *nulls;
@@ -844,8 +846,15 @@ PLySequence_ToArray(PLyObToDatum *arg, int32 typmod, PyObject *plrv)
 
 	lbs = 1;
 	array = construct_md_array(elems, nulls, 1, &len, &lbs,
-			   get_element_type(

Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-23 Thread Vik Fearing
On 11/23/2013 01:12 PM, Mario Weilguni wrote:
> Am 22.11.2013 16:15, schrieb Tom Lane:
>> [ memo to self: never, ever accept another contrib module whose name
>> isn't a plain SQL identifier ]
>
> Well, in that case, and since this is a rarely used extension (I would
> guess), maybe it would be best to simply rename the extension to
> uuidossp (or whatever) and not give it any special treatment?

It'll definitely cause pain, but I'm all for normalizing the contrib
names to plain SQL identifiers, and this is the only outlier.

-- 
Vik





Re: [HACKERS] Minor patch for the uuid-ossp extension

2013-11-23 Thread Mario Weilguni

Am 22.11.2013 16:15, schrieb Tom Lane:
[ memo to self: never, ever accept another contrib module whose name 
isn't a plain SQL identifier ]


Well, in that case, and since this is a rarely used extension (I would 
guess), maybe it would be best to simply rename the extension to 
uuidossp (or whatever) and not give it any special treatment?






Re: [HACKERS] segfault with contrib lo

2013-11-23 Thread Sawada Masahiko
On Wed, Oct 9, 2013 at 2:12 AM, Marc Cousin  wrote:
> On Tuesday 08 October 2013 12:28:46 Robert Haas wrote:
>> On Mon, Oct 7, 2013 at 12:32 PM, Marc Cousin  wrote:
>> > I was using the lo contrib a few days ago and wasn't paying attention, and
>> > forgot the "for each row" in the create trigger command... PostgreSQL
>> > segfaulted, when the trigger tried to access the row's attributes.
>> >
>> > Please find attached a patch to control that the trigger is correctly
>> > defined (as in the example): a before trigger, for each row, and a
>> > parameter (if the parameter was omitted, it segfaulted too). I hope I did
>> > this correctly.
>> Thanks for the patch.  Please add it to the next CommitFest.
>>
>> https://commitfest.postgresql.org/action/commitfest_view/open
> Done.
>

Sorry for the late response.
I tried the patch you attached.

It is a simple patch, and it works fine.
My only suggestion is that the error message should include the name of
the trigger.


Regards,

---
Sawada Masahiko




Re: [HACKERS] Regress tests to improve the function coverage of schemacmds and user and tablespace files

2013-11-23 Thread David Rowley
On Thu, Oct 24, 2013 at 9:59 PM, Haribabu kommi
wrote:

> Here I have added some more regression tests to improve the missing
> function coverage of schemacmds.c, user.c and tablespace.c.
>
> The added tests are mainly RENAME TO and OWNER TO support.
>
I've just had a look at both of these patches. All tests that have been
added seem to cover new areas that are not previously tested, they also
seem to cleanup properly after themselves, so I think these should be a
worthwhile addition to the regression tests.

The only thing I can pick up on is the trailing whitespace:

src/test/regress/sql/privileges.sql:837: trailing whitespace.
+\c -

But I can't imagine it's worth submitting a new patch to fix it.

I've marked the patch as ready for committer.

Regards

David Rowley


> Patches are attached in the mail.
> Please check and provide your suggestions.
>
> Regards,
> Hari babu.
>


Re: [HACKERS] new unicode table border styles for psql

2013-11-23 Thread Vik Fearing
On 11/21/2013 10:39 PM, Peter Eisentraut wrote:
> On 11/21/13, 2:09 AM, Pavel Stehule wrote:
>> I wrote new styles for  psql table borders.
>>
>> http://postgres.cz/wiki/Pretty_borders_in_psql
>>
>> This patch is simple, and I think some of the styles could be
>> interesting for final presentation.
>>
>> Do you think this feature is generally interesting and should be in core?
> Maybe make the border setting a string containing the various characters
> by index.  Then everyone can create their own crazy borders.

I vote for doing it this way.

We would provide a default setting, and the documentation would show
other examples.  Perhaps even set up a "repository" on a wiki page or
something.
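
To sketch the by-index idea (the index layout below is an assumption for
illustration, not an actual psql format), a top border could be rendered
from a short character string, so ASCII and Unicode styles differ only
in the string supplied:

```python
def render_border(chars: str, widths) -> str:
    """Render a top border from an indexed character string.
    Assumed layout (hypothetical): chars[0] = left corner,
    chars[1] = horizontal rule, chars[2] = column junction,
    chars[3] = right corner.  widths are column content widths."""
    left, horiz, junction, right = chars[0], chars[1], chars[2], chars[3]
    # Each column gets its width plus one space of padding on each side.
    middle = junction.join(horiz * (w + 2) for w in widths)
    return left + middle + right

# Two styles, selected purely by the string passed in:
print(render_border("+-++", [3, 5]))   # +-----+-------+
print(render_border("┌─┬┐", [3, 5]))   # ┌─────┬───────┐
```

A longer string with more indexed positions would cover the middle and
bottom rules the same way, which is what would let everyone define their
own "crazy borders" without new C code per style.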

-- 
Vik





Re: [HACKERS] Autoconf 2.69 update

2013-11-23 Thread Oskari Saarenmaa

On 20.11.2013 23:38, Robert Haas wrote:

On Wed, Nov 20, 2013 at 4:31 AM, Oskari Saarenmaa  wrote:

ISTM autoconf has been better with backwards compatibility lately. Maybe the
fatal error could be changed to a warning and/or the check for version ==
2.63 be replaced with a check for version >= 2.63?  Without a strict
requirement for a certain autoconf version it would make sense to also drop
the built autoconf artifacts from the git repository which would make diffs
shorter and easier to review when touching configure.in.


-1 from me.  Dropping configure from the repository would
significantly increase the burden to compile and install PostgreSQL
from source.  Not everyone has autoconf installed.


I think it's reasonable to assume that people who build from git have 
autoconf.  The release tarballs should still contain the generated 
configure, etc;  I believe this is how a lot of (most?) open source 
projects handle autoconf artifacts.



-1 also for loosening the version check.  I guarantee that if we do
that, people will use varying versions when regenerating configure,
and we'll have a mess.  Even if we standardize the version committers
are supposed to use, someone will foul it up at least twice a year.
The last thing I need is to have more things that I can accidentally
screw up while committing; the list is too long already.

I realize that those checks are a bit of a nuisance, but if they
bother you you can just whack them out locally and proceed.  Or else
you can download and compile the right version of autoconf.  If you're
doing sufficiently serious development that you need to touch
configure.in, the 5 minutes it takes you to build a local install of
the right autoconf version should be the least of your concerns.  It's
not hard; I've done it multiple times, and have multiple versions of
autoconf installed on those systems where I need to be able to re-run
autoconf on any supported branch.


As long as the released tarballs contain generated scripts I don't 
really see this being an issue.  While changes to configure.in are 
pretty rare, they do happen and when you have to modify configure the 
resulting 'git diff', 'git status' etc become unnecessarily large and 
require you to hand-edit the patches before sending them to the mailing 
list, etc.


One more option would be to include the built configure in release 
branches as there shouldn't really be many changes to configure.in after 
branching and it'd make sure that all build farm builders test the same 
script generated by a known version.


Anyway, I won't mind the strict requirement for autoconf that much if 
it's for a more recent version than 2.63 :-)


/ Oskari





Re: [HACKERS] COPY TO

2013-11-23 Thread Michael Paquier

On Sat, Nov 23, 2013 at 3:48 PM, mohsen soodkhah mohammadi
 wrote:
> In copy.c there is a function named CopyOneRowTo.
> This function has a foreach loop, and the first if in that loop tests
> the condition !cstate->binary.
> What does this condition mean, and when is it true?
Have a look at CopyStateData at the top of
src/backend/commands/copy.c. This flag simply means that COPY uses the
binary format, which can be requested with a command along these lines:
COPY ... WITH (FORMAT binary);
You can check as well the documentation for more details:
http://www.postgresql.org/docs/9.3/static/sql-copy.html
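
As a rough illustration of what cstate->binary implies for the stream
itself, here is a minimal sketch (based on the binary-format description
in the COPY documentation, not PostgreSQL's actual reader) that checks
the fixed header: an 11-byte signature, a 32-bit flags field, and a
32-bit header-extension length.

```python
import struct

# 11-byte fixed signature from the COPY binary-format documentation.
PGCOPY_SIGNATURE = b"PGCOPY\n\xff\r\n\x00"

def parse_binary_copy_header(buf: bytes) -> int:
    """Validate a COPY ... (FORMAT binary) header and return the byte
    offset where tuple data begins.  Minimal sketch: it rejects any
    flag bits rather than interpreting them."""
    if buf[:11] != PGCOPY_SIGNATURE:
        raise ValueError("not a PGCOPY binary stream")
    flags, ext_len = struct.unpack_from("!II", buf, 11)
    if flags != 0:
        raise ValueError("flag bits set that this minimal reader does not handle")
    return 11 + 4 + 4 + ext_len

header = PGCOPY_SIGNATURE + struct.pack("!II", 0, 0)
print(parse_binary_copy_header(header))  # 19
```

The !cstate->binary branch in CopyOneRowTo is the text-format path;
when the flag is set, COPY emits field data framed for a stream with
this kind of header instead of delimiter-separated text.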

Regards,
-- 
Michael

