Re: [HACKERS] Statistics and selectivity estimation for ranges

2013-01-04 Thread Alexander Korotkov
On Mon, Dec 10, 2012 at 11:21 PM, Jeff Davis pg...@j-davis.com wrote:

 And I have a few other questions/comments:

 * Why is "summ" spelled with two ms? Is it short for "summation"? If
 so, might be good to use "summation of" instead of "integrate" in the
 comment.


Fixed.


 * Why does get_length_hist_frac return 0.0 when i is the last value? Is
 that a mistake?


The comment was wrong. It actually returns the fraction of ranges whose
length is *greater*.


 * I am still confused by the distinction between rbound_bsearch and
 rbound_bsearch_bin. What is the intuitive purpose of each?


I've added corresponding comments. rbound_bsearch is for scalar operators
and for the bin corresponding to an upper bound. rbound_bsearch_bin is
now rbound_bsearch_bin_lower; it is for the bin corresponding to a lower bound.

* You use "constant value" in the comments in several places. Would
 "query value" or "search key" be better?


Yes. Fixed.

I also renamed get_length_hist_frac to get_length_hist_summ and rewrote
the comments about it. I hope it is more understandable now.

--
With best regards,
Alexander Korotkov.


range_stat-0.10.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] lock AccessShareLock on object 0/1260/0 is already held

2013-01-04 Thread Pavel Stehule
Hello

What is state of this issue?
http://archives.postgresql.org/pgsql-hackers/2011-09/msg00423.php

We detect this issue in our servers - we use 9.1.6 on Linux

 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------
 relation |  5630403 |     1259 |      |       |            |               |         |       |          | 20/0               | 14960 | AccessShareLock | t

An interesting aspect of this issue: it blocks DBI login.

Exception thrown: DBI
connect('database=xxx;host=yyy;port=5432','username',...) failed:
FATAL:  lock AccessShareLock on object 5630403/1259/0 is already
held#012

but I am able to log in to this database from the command line without problems.

The error message is unfriendly - I cannot get information about which
statement exactly raised this fatal state.


The oldest message that I found is:


Dec 10 15:01:51 cl-pdwh-d01 postgres[32548]: [6-1] (2012-12-10
15:01:51 CET db_hw7ny3roi257xe886fvc5krqgosp6g23)
Dec 10 15:01:51 cl-pdwh-d01 postgres[32548]: [6-2] ERROR:  canceling
statement due to user request
Dec 10 15:01:51 cl-pdwh-d01 postgres[32738]: [6-1] (2012-12-10
15:01:51 CET db_hw7ny3roi257xe886fvc5krqgosp6g23)
Dec 10 15:01:51 cl-pdwh-d01 postgres[32738]: [6-2] FATAL:  lock
AccessShareLock on object 5630403/1259/0 is already held
Dec 10 15:01:51 cl-pdwh-d01 postgres[32739]: [6-1] (2012-12-10
15:01:51 CET db_hw7ny3roi257xe886fvc5krqgosp6g23)
Dec 10 15:01:51 cl-pdwh-d01 postgres[32739]: [6-2] FATAL:  lock
AccessShareLock on object 5630403/1259/0 is already held
Dec 10 15:02:29 cl-pdwh-d01 postgres[957]: [6-1] (2012-12-10 15:02:29
CET db_hw7ny3roi257xe886fvc5krqgosp6g23)
Dec 10 15:02:29 cl-pdwh-d01 postgres[957]: [6-2] FATAL:  lock
AccessShareLock on object 5630403/1259/0 is already held

Regards

Pavel Stehule




Re: [HACKERS] Print b-tree tuples

2013-01-04 Thread Samuel Vogel

Hello,

I'm trying to print out the tuples in the b-tree nodes for analysis, 
but when I iterate over more than the first entry of the tuples 
(scanKey++), I strangely get the error below on query execution:

ERROR:  relation "simpletest" does not exist
LINE 1: SELECT * FROM simpletest WHERE id = 50;

I was able to get around this by only printing byval attributes:

if (!itupdesc->attrs[i-1]->attbyval)
break;

I would still like to know why my code screws up though. The relnames 
are not byval, but I should still be able to print the address, right?


Regards,
Samuel


Re: Review of pg_basebackup and pg_receivexlog to use non-blocking socket communication, was: Re: [HACKERS] Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown

2013-01-04 Thread Hari Babu

On January 02, 2013 12:41 PM Hari Babu wrote:
On January 01, 2013 10:19 PM Boszormenyi Zoltan wrote:
I am reviewing your patch.
• Is the patch in context diff format? 
Yes.

Thanks for reviewing the patch.

• Does it apply cleanly to the current git master?
Not quite cleanly but it doesn't produce rejects or fuzz, only offset
warnings:

Will rebase the patch to head.

• Does it include reasonable tests, necessary doc patches, etc? 
The test cases are not applicable. There is no test framework for
testing network outage in make check.

There are no documentation patches for the new --recvtimeout=INTERVAL
and --conntimeout=INTERVAL options for either pg_basebackup or
pg_receivexlog.

I will add the documentation for the same.

Per the previous comment, no. But those are for the backend
to notice network breakdowns and as such, they need a
separate patch. 

I also think it is better to handle it as a separate patch for walsender.

• Are the comments sufficient and accurate?
This chunk below removes a comment which seems obvious enough
so it's not needed:
***
*** 518,524  ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos,
uint32 timeline,
    goto error;
    }
  
!   /* Check the message type. */
    if (copybuf[0] == 'k')
    {
    int pos;
--- 559,568 
    goto error;
    }
  
!   /* Set the last reply timestamp */
!   last_recv_timestamp = localGetCurrentTimestamp();
!   ping_sent = false;
!   
    if (copybuf[0] == 'k')
    {
    int pos;
***

Other comments are sufficient and accurate.

I will fix and update the patch.

The attached V2 patch in the mail handles all the review comments identified
above.

Regards,
Hari babu.



pg_basebkup_recvxlog_noblock_comm_v2.patch
Description: Binary data



Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-04 Thread Amit Kapila
On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
 On 28 December 2012 08:07, Kyotaro HORIGUCHI
 horiguchi.kyot...@lab.ntt.co.jp wrote:
 
  Hello, I saw this patch and confirmed that
 
   - Coding style looks good.
  - Applies onto HEAD.
   - Some mis-codings are fixed.
 
 I've had a quick review of the patch to see how close we're getting.
 The perf tests look to me like we're getting what we wanted from this
 and I'm happy with the recovery performance trade-offs. Well done to
 both author and testers.
 
 
 * The compression algorithm depends completely upon new row length
 savings. If the new row is short, it would seem easier to just skip
 the checks and include it anyway. We can say if old and new vary in
 length by > 50% of each other, just include new as-is, since the rows
 very clearly differ in a big way. 

 Also, if tuple is same length as 
 before, can we compare the whole tuple at once to save doing
 per-column checks?

If we have to do whole-tuple comparison, then storing of the changed parts
might need to be done byte-by-byte rather than at column offset boundaries.
This might not be possible with the current algorithm, as it stores the
information in WAL column-by-column and decodes it in a similar way.

 The internal docs are completely absent. We need at least a whole page of
descriptive  comment, discussing trade-offs and design decisions.

Currently I plan to put it in transam/README, as most of the WAL
description is present there.

With Regards,
Amit Kapila.





Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation

2013-01-04 Thread Simon Riggs
On 4 January 2013 13:53, Amit Kapila amit.kap...@huawei.com wrote:
 On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
 On 28 December 2012 08:07, Kyotaro HORIGUCHI
 horiguchi.kyot...@lab.ntt.co.jp wrote:

  Hello, I saw this patch and confirmed that
 
   - Coding style looks good.
   - Applies onto HEAD.
   - Some mis-codings are fixed.

 I've had a quick review of the patch to see how close we're getting.
 The perf tests look to me like we're getting what we wanted from this
 and I'm happy with the recovery performance trade-offs. Well done to
 both author and testers.


 * The compression algorithm depends completely upon new row length
 savings. If the new row is short, it would seem easier to just skip
 the checks and include it anyway. We can say if old and new vary in
 length by > 50% of each other, just include new as-is, since the rows
 very clearly differ in a big way.

 Also, if tuple is same length as
 before, can we compare the whole tuple at once to save doing
 per-column checks?

 If we have to do whole-tuple comparison, then storing of the changed parts
 might need to be done byte-by-byte rather than at column offset boundaries.
 This might not be possible with the current algorithm, as it stores the
 information in WAL column-by-column and decodes it in a similar way.

OK, please explain in comments.

 The internal docs are completely absent. We need at least a whole page of
 descriptive  comment, discussing trade-offs and design decisions.

 Currently I plan to put it in transam/README, as most of the WAL
 description is present there.

But also in comments for each major function.

Thanks

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: optimized DROP of multiple tables within a transaction

2013-01-04 Thread Robert Haas
On Mon, Dec 31, 2012 at 11:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
 I thought I followed the coding style - which guidelines have I broken?

+   /* If there are no non-local relations, then we're done. Release the memory
+* and return. */

Multi-line comments should start with a line containing only /* and
end with a line containing only */.

+DropRelFileNodeAllBuffers(RelFileNodeBackend * rnodes, int nnodes)
and
+rnode_comparator(const void * p1, const void * p2)

The extra spaces after the asterisks should be removed.

+void smgrdounlinkall(SMgrRelation * rels, int nrels, bool isRedo)
+{

void should be on a line by itself.

Sorry to nitpick.

As for BSEARCH_LIMIT, I don't have a great idea - maybe just
DROP_RELATIONS_BSEARCH_LIMIT?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] lock AccessShareLock on object 0/1260/0 is already held

2013-01-04 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 What is state of this issue?
 http://archives.postgresql.org/pgsql-hackers/2011-09/msg00423.php

AFAICS we never identified the cause.  It was pretty clear that
something was failing to clean up shared-memory lock data structures,
but not what that something was.  The last productive suggestion was
to add process-exit-time logging of whether unreleased locks remain,
but if Dave ever did that, he didn't report back what he found.

Maybe you could add such logging on your machines.

regards, tom lane




Re: [HACKERS] PATCH: optimized DROP of multiple tables within a transaction

2013-01-04 Thread Alvaro Herrera
On Mon, Dec 24, 2012 at 02:41:37AM +0100, Tomas Vondra wrote:

 + SMgrRelation   *srels = palloc(sizeof(SMgrRelation));
 + int nrels = 0,
 + i = 0,
 + maxrels = 1;

maxrels=1 is not good -- too much palloc traffic.  I'd make it start at,
say, 8 instead.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] enhanced error fields

2013-01-04 Thread Robert Haas
On Fri, Dec 28, 2012 at 1:21 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 Now, as to the question of whether we need to make sure that
 everything is always fully qualified - I respectfully disagree with
 Stephen, and maintain my position set out in the last round of
 feedback [1], which is that we don't need to fully qualify everything,
 because *for the purposes of error handling*, which is what I think we
 should care about, these fields are sufficient:

 +   column_name text,
 +   table_name text,
 +   constraint_name text,
 +   schema_name text,

 If you use the same constraint name everywhere, you might get the same
 error message. The situations in which this scheme might fall down are
 too implausible for me to want to bloat up all those ereport sites any
 further (something that Stephen also complained about).

 I think that the major outstanding issues are concerning whether or
 not I have the API here right. I make explicit guarantees as to the
 availability of certain fields for certain errcodes (a small number of
 Class 23 - Integrity Constraint Violation codes). No one has really
 said anything about that, though I think it's important.

 I also continue to think that we shouldn't have routine_name,
 routine_schema and trigger_name fields - I think it's wrong-headed
 to have an exception handler magically act on knowledge about where
 the exception came from that has been baked into the exception - is
 there any sort of precedent for this? Pavel disagrees here. Again, I
 defer to others.

I don't really agree with this.  To be honest, my biggest concern
about this patch is that it will make it take longer to report an
error.  At least in the cases where these additional fields are
included, it will take CPU time to populate those fields, and maybe
there will be some overhead even in the cases where they aren't even
used (although I'd expect that to be too little to measure).  Now,
maybe that doesn't matter in the case where the error is being
reported back to the client, because the overhead of shipping packets
across even a local socket likely dwarfs the overhead, but I think it
matters a lot where you are running a big exception-catching loop.
That is already painfully slow, and -1 from me on anything that makes
it significantly slower.  I have a feeling this isn't the first time
I'm ranting about this topic in relation to this patch, so apologies
if this is old news.

But if we decide that there is no performance issue or that we don't
care about the hit there, then I think Stephen and Pavel are right to
want a large amount of very precise detail.  What's the use case for
this feature?  Presumably, it's for people for whom parsing the error
message is not a practical option, so either the textual error
message doesn't identify the target object sufficiently precisely, and
they want to make sure they know what it applies to; or else it's for
people who want any error that applies to a table to be handled the
same way (without worrying about exactly which error they have got).
Imagine, for example, someone writing a framework that will be used as
a basis for many different applications.  It might want to do
something, like, I don't know, update the comment on a table every
time an error involving that table occurs.  Clearly, precise
identification of the table is a must.  In a particular application
development environment, it's reasonable to rely on users to name
things sensibly, but if you're shipping a tool that might be dropped
into any arbitrary environment, that's significantly more dangerous.

Similarly, for a function-related error, you'd need something that
looks like the output of pg_proc.oid::regprocedure, or individual
fields with the various components of that output.  That sounds like
routine_name et al.

I'm not happy about the idea of shipping OIDs back to the client.
OIDs are too user-visible as it is; we should try to make them less
so, not more so.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] lock AccessShareLock on object 0/1260/0 is already held

2013-01-04 Thread Pavel Stehule
2013/1/4 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 What is state of this issue?
 http://archives.postgresql.org/pgsql-hackers/2011-09/msg00423.php

 AFAICS we never identified the cause.  It was pretty clear that
 something was failing to clean up shared-memory lock data structures,
 but not what that something was.  The last productive suggestion was
 to add process-exit-time logging of whether unreleased locks remain,
 but if Dave ever did that, he didn't report back what he found.


I can probably find the statement that was canceled, as the last
successful statement in our application logs. It will probably be
LOCK ... or CREATE TABLE AS SELECT.

A recheck at session end can help us drop these leaked locks, but I
don't understand how it can help with finding the cause.





 Maybe you could add such logging on your machines.

 regards, tom lane




Re: [HACKERS] enhanced error fields

2013-01-04 Thread Robert Haas
On Sat, Dec 29, 2012 at 4:30 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 Ascertaining the identity of the object in question perfectly
 unambiguously, so that you can safely do something like lookup a
 comment on the object, seems like something way beyond what I'd
 envisioned for this feature. Why should the comment be useful in an
 error handler anyway? At best, that seems like a nice-to-have extra to
 me. The vast majority are not even going to think about the ambiguity
 that may exist. They'll just write:

 if (constraint_name == "upc")
     MessageBox("That is not a valid barcode.");

The people who are content to do that don't need this patch at all.
They can just apply a regexp to the message that comes back from the
server and then set constraint_name based on what pops out of the
regex.  And then do just what you did there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] enhanced error fields

2013-01-04 Thread anara...@anarazel.de


Robert Haas robertmh...@gmail.com schrieb:

On Sat, Dec 29, 2012 at 4:30 PM, Peter Geoghegan
pe...@2ndquadrant.com wrote:
 Ascertaining the identity of the object in question perfectly
 unambiguously, so that you can safely do something like lookup a
 comment on the object, seems like something way beyond what I'd
 envisioned for this feature. Why should the comment be useful in an
 error handler anyway? At best, that seems like a nice-to-have extra
to
 me. The vast majority are not even going to think about the ambiguity
 that may exist. They'll just write:

 if (constraint_name == "upc")
 MessageBox("That is not a valid barcode.");

The people who are content to do that don't need this patch at all.
They can just apply a regexp to the message that comes back from the
server and then set constraint_name based on what pops out of the
regex.  And then do just what you did there.

Easier said than done if you're dealing with pg installations with different 
lc_messages...

Andres

--- 
Please excuse the brevity and formatting - I am writing this on my mobile phone.




Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-04 Thread Boszormenyi Zoltan

Hi,

I am reviewing your patch.

On 2012-12-10 13:58, Amit Kapila wrote:


On Thu, 6 Dec 2012 10:12:31 +0530 Amit Kapila wrote:
On Tuesday, December 04, 2012 8:37 AM Amit Kapila wrote:
 On Monday, December 03, 2012 8:59 PM Tom Lane wrote:
  Robert Haas robertmhaas(at)gmail(dot)com writes:
   On Sat, Dec 1, 2012 at 11:45 AM, Tom Lane 
tgl(at)sss(dot)pgh(dot)pa(dot)us wrote:
  Neither of you have responded to the point about what SET PERSISTENT
  var_name TO DEFAULT will do, and whether it is or should be different
  from RESET PERSISTENT, and if not why we should put the latter into
  the grammar as well.


 The current behavior is
 1. RESET PERSISTENT ...  will delete the entry from
 postgresql.auto.conf
 2. SET PERSISTENT... TO DEFAULT  will update the entry value in
 postgresql.auto.conf to default value

 However we can even change SET PERSISTENT... TO DEFAULT to delete the
 entry and then we can avoid RESET PERSISTENT ...

 As per my understanding from the points raised by you, the behavior could be
 defined as follows:

 1. No need to have RESET PERSISTENT ... syntax.
 2. It is better if we provide a way to delete entry which could be done for
 syntax:
   SET PERSISTENT... TO DEFAULT

Updated patch to handle above 2 points.

With Regards,

Amit Kapila.



 * Is the patch in context diff format 
http://en.wikipedia.org/wiki/Diff#Context_format?


Yes. (But with PostgreSQL using Git, and most developers now
comfortable with the unified diff format, this question should
not be in the wiki any more...)

 * Does it apply cleanly to the current git master?


Not quite cleanly, it gives some offset warnings:


[zozo@localhost postgresql.1]$ cat ../set_persistent_v3.patch | patch -p1
patching file doc/src/sgml/ref/set.sgml
patching file src/backend/nodes/copyfuncs.c
patching file src/backend/nodes/equalfuncs.c
patching file src/backend/parser/gram.y
patching file src/backend/postmaster/postmaster.c
Hunk #1 succeeded at 2552 (offset -4 lines).
patching file src/backend/replication/basebackup.c
Hunk #1 succeeded at 736 (offset 155 lines).
patching file src/backend/utils/misc/guc-file.l
patching file src/backend/utils/misc/guc.c
Hunk #1 succeeded at 3348 (offset -13 lines).
Hunk #2 succeeded at 4125 (offset -13 lines).
Hunk #3 succeeded at 5108 (offset -13 lines).
Hunk #4 succeeded at 6090 (offset -13 lines).
Hunk #5 succeeded at 6657 (offset -13 lines).
Hunk #6 succeeded at 6742 (offset -13 lines).
patching file src/backend/utils/misc/postgresql.conf.sample
patching file src/bin/initdb/initdb.c
patching file src/include/nodes/parsenodes.h
patching file src/include/parser/kwlist.h
patching file src/include/utils/guc.h
patching file src/include/utils/guc_tables.h
patching file src/test/regress/expected/persistent.out
patching file src/test/regress/parallel_schedule
patching file src/test/regress/serial_schedule
patching file src/test/regress/sql/persistent.sql


There are no fuzz warnings though, so rebase is not needed.


 * Does it include reasonable tests, necessary doc patches, etc?


Yes, it contains documentation, but it will surely need proofreading.
I noticed some typos in it already but the wording seems to be clear.


It also contains an extensive set of regression tests.


One specific comment about the documentation part of the patch:


***
*** 86,92  SET [ SESSION | LOCAL ] TIME ZONE { <replaceable
class="PARAMETER">timezone</replaceable>

 <application>PL/pgSQL</application> exception block.  This behavior
 has been changed because it was deemed unintuitive.
 </para>
!   </note>
   </refsect1>

   <refsect1>
--- 95,101 
 <application>PL/pgSQL</application> exception block.  This behavior
 has been changed because it was deemed unintuitive.
 </para>
!</note>
   </refsect1>

   <refsect1>
***

 * Does the patch actually implement that?


Yes.

 * Do we want that?


Yes.

 * Do we already have it?


No.

 * Does it follow SQL spec, or the community-agreed behavior?


No such SQL spec. It was implemented according to the discussion.

It uses a single file in an extra configuration directory added to
postgresql.conf as:


# This includes the default configuration directory included to support
# SET PERSISTENT statement.
#
# WARNNING: Any configuration parameter values specified after this line
#   will override the values set by SET PERSISTENT statement.
include_dir = 'config_dir'


Note the typo in the above message: WARNNING, there are too many Ns.


The extra configuration file ($PGDATA/config_dir/postgresql.auto.conf)
is loaded automatically. I tested it by setting shared_buffers and work_mem.
Executing SELECT pg_reload_conf(); changed work_mem and left these
messages in the server log:


LOG:  parameter "shared_buffers" cannot be changed without restarting the server
LOG:  configuration file "/home/zozo/pgc93dev-setpersistent/data/postgresql.conf" contains 
errors; unaffected changes were applied



Just as if I manually edited postgresql.conf. It works 

Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-04 Thread Boszormenyi Zoltan

On 2013-01-04 18:27, Boszormenyi Zoltan wrote:


One specific comment about the documentation part of the patch:


***
*** 86,92  SET [ SESSION | LOCAL ] TIME ZONE { <replaceable
class="PARAMETER">timezone</replaceable>

 <application>PL/pgSQL</application> exception block.  This behavior
 has been changed because it was deemed unintuitive.
 </para>
!   </note>
   </refsect1>

   <refsect1>
--- 95,101 
 <application>PL/pgSQL</application> exception block.  This behavior
 has been changed because it was deemed unintuitive.
 </para>
!</note>
   </refsect1>

   <refsect1>
***



I forgot to add the actual comment: this is an unnecessary whitespace change.

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



Re: [HACKERS] Re: Proposal: Store timestamptz of database creation on pg_database

2013-01-04 Thread Peter Eisentraut
On 1/3/13 3:26 PM, Robert Haas wrote:
 It's true, as we've often
 said here, that leveraging the OS facilities means that we get the
 benefit of improving OS facilities for free - but it also means that
 we never exceed what the OS facilities are able to provide.

And that should be the deciding factor, shouldn't it?  Clearly, the OS
timestamps do not satisfy the requirements of tracking database object
creation times.




Re: [HACKERS] enhanced error fields

2013-01-04 Thread Tom Lane
anara...@anarazel.de and...@anarazel.de writes:
 Robert Haas robertmh...@gmail.com schrieb:
 The people who are content to do that don't need this patch at all.
 They can just apply a regexp to the message that comes back from the
 server and then set constraint_name based on what pops out of the
 regex.  And then do just what you did there.

 Easier said than done if you're dealing with pg installations with different 
 lc_messages...

Exactly.  To my mind, the *entire* point of this patch is to remove the
need for people to try to dig information out of potentially-localized
message strings.  It's not clear to me that we have to strain to provide
information that isn't in the currently-reported messages --- we are
only trying to make it easier for client-side code to extract the
information it's likely to need.

regards, tom lane




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-04 Thread Peter Eisentraut
On 1/3/13 12:30 PM, Robert Haas wrote:
 On Thu, Jan 3, 2013 at 11:32 AM, Magnus Hagander mag...@hagander.net wrote:
 Any particular reason? It goes pretty tightly together with
 pg_receivexlog, which is why I'd prefer putting it alongside that one.
 But if you have a good argument against it, I can change my mind :)
 
 Mostly that it seems like a hack, and I suspect we may come up with a
 better way to do this in the future.

It does seem like a hack.  Couldn't this be implemented with a backend
switch instead?

Also, as a small practical matter, since this is a server-side program
(since it's being used as archive_command), we shouldn't put it into the
pg_basebackup directory, because that would blur the lines about what to
install where, in particular for the translations.






Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Dong Ye
I did three back-to-back runs using the same settings as in
http://archives.postgresql.org/pgsql-performance/2012-11/msg7.php
Except:
- use no prepared statement
- use 40 db connections
- build source from postgresql.git on the server box using: REL9_1_7,
REL9_2_2, REL9_2_2 + this patch

NOTPM results:
REL9_1_7: 46512.66
REL9_2_2: 42828.66
REL9_2_2 + this patch: 46973.70

Thanks,
Dong

PS, the top 20 lines of oprofile of these runs attached.


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Tuesday, January 01, 2013 6:48 PM
To: Peter Eisentraut
Cc: Heikki Linnakangas; Pavel Stehule; PostgreSQL Hackers; Dong Ye
Subject: Re: [HACKERS] dynamic SQL - possible performance regression in
9.2

I wrote:
 I'm inclined to think that Heikki's patch doesn't go far enough, if we
 want to optimize behavior in this case.  What we really want to happen
 is that parsing, planning, and execution all happen in the caller's
 memory context, with no copying of parse or plan trees at all - and we
 could do without overhead such as dependency extraction and invalidation
 checking, too.  This would make SPI_execute a lot more comparable to the
 behavior of exec_simple_query().

Here's a draft patch for that.  My initial hack at it had a
disadvantage, which was that because no invalidation checking happened,
a SPI_execute query string containing a DDL command (such as ALTER TABLE)
followed by a command affected by the DDL would fail to reparse/replan
the second command properly.  (I suspect that Heikki's version had a
related defect, but haven't looked closely.)  Now that's not a huge deal
IMO, because in many common cases parse analysis of the second command
would fail anyway.  For instance, this has never worked in any PG
release:

do $$ begin execute 'create table foo(f1 int); insert into foo
values(1);'; end $$;

However it troubled me that there might be some regression there, and
after a bit of reflection I decided the right fix would be to rearrange
the code in spi.c so that parse analysis of later parsetrees follows
execution of earlier ones.  This makes the behavior of SPI_execute()
even more like that of exec_simple_query(), and shouldn't cost anything
noticeable given the other changes here.

I'm not entirely sure about performance of this fix, though.  I got
numbers varying between roughly-on-par with 9.1 and 10% slower than 9.1
for Pavel's example, depending on seemingly not-performance-related
rearrangements of the code in spi.c.  I think this must be chance
effects of cache line alignment, but it would be good to hear what other
people get, both on Pavel's example and the other ones alluded to.
In any case this seems better than unmodified HEAD, which was 40% slower
than 9.1 for me.

regards, tom lane

== REL9_1_7 ==
CPU: Intel Core/i7, speed 2.266e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 10
samples  %        image name       app name         symbol name
701093    4.8657  postgres         postgres         SearchCatCache
646611    4.4876  postgres         postgres         AllocSetAlloc
565140    3.9222  postgres         postgres         hash_search_with_hash_value
466490    3.2375  postgres         postgres         base_yyparse
291208    2.0210  postgres         postgres         LWLockAcquire
232944    1.6167  postgres         postgres         PinBuffer
220879    1.5329  postgres         postgres         MemoryContextAllocZeroAligned
213494    1.4817  postgres         postgres         core_yylex
193402    1.3423  postgres         postgres         heap_hot_search_buffer
191454    1.3287  postgres         postgres         expression_tree_walker
191247    1.3273  postgres         postgres         _bt_compare
189949    1.3183  libc-2.14.1.so   libc-2.14.1.so   __strcmp_sse42
173674    1.2053  postgres         postgres         XLogInsert
163930    1.1377  postgres         postgres         FunctionCall2Coll
150816    1.0467  libc-2.14.1.so   libc-2.14.1.so   __memcpy_ssse3_back
147756    1.0255  postgres         postgres         tbm_iterate
146191    1.0146  postgres         postgres         MemoryContextAlloc
140754    0.9769  postgres         postgres         nocachegetattr
134809    0.9356  postgres         postgres         heap_page_prune_opt
124110    0.8614  postgres         postgres         hash_any

== REL9_2_2 ==
CPU: Intel Core/i7, speed 2.266e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 10
samples  %        image name       app

Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Tom Lane
Dong Ye y...@vmware.com writes:
 I did three back-to-back runs using the same settings as in
 http://archives.postgresql.org/pgsql-performance/2012-11/msg7.php
 Except:
 - use no prepared statement
 - use 40 db connections
 - build source from postgresql.git on the server box using: REL9_1_7,
 REL9_2_2, REL9_2_2 + this patch

 NOTPM results:
 REL9_1_7: 46512.66
 REL9_2_2: 42828.66
 REL9_2_2 + this patch: 46973.70

Thanks!  I think this is probably sufficient evidence to conclude that
we should apply this patch, at least in HEAD.  Whatever Peter is seeing
must be some other issue, which we can address whenever we understand
what it is.

Next question is what people think about back-patching into 9.2 so as
to eliminate the performance regression vs 9.1.  I believe this would
be safe (although some care would have to be taken to put the added
boolean fields into places where they'd not result in an ABI break).
However it may not be worth the risk.  The 40% slowdown seen with
Pavel's example seems to me to be an extreme corner case --- Dong's
result of 8% slowdown is probably more realistic for normal uses
of SPI_execute.  Might be better to just live with it in 9.2.
Thoughts?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Pavel Stehule
2013/1/4 Tom Lane t...@sss.pgh.pa.us:
 Dong Ye y...@vmware.com writes:
 I did three back-to-back runs using the same settings as in
 http://archives.postgresql.org/pgsql-performance/2012-11/msg7.php
 Except:
 - use no prepared statement
 - use 40 db connections
 - build source from postgresql.git on the server box using: REL9_1_7,
 REL9_2_2, REL9_2_2 + this patch

 NOTPM results:
 REL9_1_7: 46512.66
 REL9_2_2: 42828.66
 REL9_2_2 + this patch: 46973.70

 Thanks!  I think this is probably sufficient evidence to conclude that
 we should apply this patch, at least in HEAD.  Whatever Peter is seeing
 must be some other issue, which we can address whenever we understand
 what it is.

 Next question is what people think about back-patching into 9.2 so as
 to eliminate the performance regression vs 9.1.  I believe this would
 be safe (although some care would have to be taken to put the added
 boolean fields into places where they'd not result in an ABI break).
 However it may not be worth the risk.  The 40% slowdown seen with
 Pavel's example seems to me to be an extreme corner case --- Dong's
 result of 8% slowdown is probably more realistic for normal uses
 of SPI_execute.  Might be better to just live with it in 9.2.
 Thoughts?

I am for back-patching - I agree with you that my example is a corner
case, and could hardly be a worse one - but a performance regression of
about 5-10% can be confusing for users, because they may go searching
for the regression in their own applications.

Regards

Pavel Stehule


 regards, tom lane




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Josh Berkus

 Next question is what people think about back-patching into 9.2 so as
 to eliminate the performance regression vs 9.1.  I believe this would
 be safe (although some care would have to be taken to put the added
 boolean fields into places where they'd not result in an ABI break).
 However it may not be worth the risk.  The 40% slowdown seen with
 Pavel's example seems to me to be an extreme corner case --- Dong's
 result of 8% slowdown is probably more realistic for normal uses
 of SPI_execute.  Might be better to just live with it in 9.2.
 Thoughts?

8% is a pretty serious regression, for those of us with applications
which do a lot of dynamic SQL.  As a reminder, many people do dynamic
SQL even in repetitive, performance-sensitive functions in order to
avoid plan caching.   Also partition-handlers often use dynamic SQL, and
a 10% drop in loading rows/second would be a big deal.

Let's put it this way: if the community doesn't backport it, we'll end
up doing so ad-hoc for some of our customers.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Heikki Linnakangas

On 04.01.2013 22:05, Josh Berkus wrote:



Next question is what people think about back-patching into 9.2 so as
to eliminate the performance regression vs 9.1.  I believe this would
be safe (although some care would have to be taken to put the added
boolean fields into places where they'd not result in an ABI break).
However it may not be worth the risk.  The 40% slowdown seen with
Pavel's example seems to me to be an extreme corner case --- Dong's
result of 8% slowdown is probably more realistic for normal uses
of SPI_execute.  Might be better to just live with it in 9.2.
Thoughts?


8% is a pretty serious regression, for those of us with applications
which do a lot of dynamic SQL.  As a reminder, many people do dynamic
SQL even in repetitive, performance-sensitive functions in order to
avoid plan caching.   Also partition-handlers often use dynamic SQL, and
a 10% drop in loading rows/second would be a big deal.


+1 for backpatching.

- Heikki




Re: [HACKERS] Print b-tree tuples

2013-01-04 Thread Tom Lane
Samuel Vogel s...@muel-vogel.de writes:
 I'm trying to print out the tuples in the b-tree nodes for analysis, but 
 when I iterate over more than the first entry of the tuples (scanKey++), I 
 strangely get the error below on query execution:
 ERROR:  relation simpletest does not exist
 LINE 1: SELECT * FROM simpletest WHERE id = 50;

Is this patch the only thing you changed?  The only obvious explanation
for the above error (other than you fat-fingered the query) seems to
be that you caused index searches in pg_class to fail, but I don't see
how this patch could be affecting the result of _bt_binsrch.

regards, tom lane




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Joshua D. Drake


On 01/04/2013 12:05 PM, Josh Berkus wrote:




Next question is what people think about back-patching into 9.2 so as
to eliminate the performance regression vs 9.1.  I believe this would
be safe (although some care would have to be taken to put the added
boolean fields into places where they'd not result in an ABI break).
However it may not be worth the risk.  The 40% slowdown seen with
Pavel's example seems to me to be an extreme corner case --- Dong's
result of 8% slowdown is probably more realistic for normal uses
of SPI_execute.  Might be better to just live with it in 9.2.
Thoughts?


8% is a pretty serious regression, for those of us with applications
which do a lot of dynamic SQL.  As a reminder, many people do dynamic
SQL even in repetitive, performance-sensitive functions in order to
avoid plan caching.   Also partition-handlers often use dynamic SQL, and
a 10% drop in loading rows/second would be a big deal.

Let's put it this way: if the community doesn't backport it, we'll end
up doing so ad-hoc for some of our customers.


Exactly. This is a significant reduction in the production quality of 
PostgreSQL as it pertains to dynamic SQL. To put it more bluntly, we 
will have people not upgrade to 9.2 specifically because of this problem.


Joshua D. Drake



--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [HACKERS] json api WIP patch

2013-01-04 Thread Pavel Stehule
Hello



2013/1/4 Andrew Dunstan and...@dunslane.net:

 On 01/02/2013 05:51 PM, Andrew Dunstan wrote:


 On 01/02/2013 04:45 PM, Robert Haas wrote:

 On Wed, Dec 26, 2012 at 3:33 PM, Andrew Dunstan and...@dunslane.net
 wrote:

 Here is a patch for the first part of the JSON API that was recently
 discussed. It includes the json  parser hook infrastructure and
 functions
 for json_get and friends, plus json_keys.



 Updated patch that contains most of the functionality I'm after. One piece
 left is populate_recordset (populate a set of records from a single json
 datum which is an array of objects, in one pass). That requires a bit of
 thought.

 I hope most of the whitespace issues are fixed.


it is looking well

I have one note - is json_each a good name?

Regards

Pavel

 cheers

 andrew







Re: [HACKERS] json api WIP patch

2013-01-04 Thread Andrew Dunstan


On 01/04/2013 03:36 PM, Pavel Stehule wrote:

Hello



2013/1/4 Andrew Dunstan and...@dunslane.net:

On 01/02/2013 05:51 PM, Andrew Dunstan wrote:


On 01/02/2013 04:45 PM, Robert Haas wrote:

On Wed, Dec 26, 2012 at 3:33 PM, Andrew Dunstan and...@dunslane.net
wrote:

Here is a patch for the first part of the JSON API that was recently
discussed. It includes the json  parser hook infrastructure and
functions
for json_get and friends, plus json_keys.



Updated patch that contains most of the functionality I'm after. One piece
left is populate_recordset (populate a set of records from a single json
datum which is an array of objects, in one pass). That requires a bit of
thought.

I hope most of the whitespace issues are fixed.


it is looking well

I have one note - is json_each a good name?




Possibly not, although hstore has each(). json_unnest might be even less 
felicitous.


I'm expecting a good deal of bikeshedding - I'm trying to ignore those 
issues for the most part because the functionality seems much more 
important to me than the names.


The more people that pound on it and try to break it the happier I'll be.

cheers

andrew




Re: [HACKERS] Event Triggers: adding information

2013-01-04 Thread Alvaro Herrera
Dimitri Fontaine wrote:
 Robert Haas robertmh...@gmail.com writes:
  OK, I committed this.
 
 Thanks. Please find attached a rebased patch, v6. I think it's looking
 more like what you would expect now:

I gave this patch a look.  I'm not done with it; I mostly gave the
utility.c changes some love so that the end result is not as cluttered.
I did this by creating a couple of macros that call the Start() thing,
then set the OID, then call the End() thing.  This made pretty obvious
the places that were missing to set the object OID; I changed the CREATE
TABLE AS code to return it, for instance.  The patch I attach applies on
top of Dimitri's v6 patch.  With these changes, it's much easier to
review the big standard_ProcessUtility() switch and verify cases that
are being handled correctly.  (A few places cannot use the macros I
defined because they use more complex arrangements.  This is fine, I
think, because they are few enough that can be verified manually.)

I also noticed that ALTER DEFAULT PRIVILEGES was trying to fire event
triggers, even though we don't support GRANT and REVOKE; it seems to me
that the three things ought to be supported together.  I'm told David
Fetter would call this DCL support, and we're not supporting DCL at
this stage.  So DEFAULT PRIVILEGES ought not to be supported.

I also observed that the SECURITY LABEL command does not fire event
triggers; I think it should.  But then maybe this can be rightly called
DCL as well, so perhaps it's all right to leave it out.

DROP OWNED is not firing event triggers.  There is a comment therein
that says we don't fire them for shared objects, but this is wrong:
this command is dropping local objects, not shared, so it seems
necessary to me that triggers are fired.

I also noticed that some cases such as DROP whatever do not report the
OIDs of all the objects that are being dropped, only one of them.  This
seems bogus to me; if you do DROP TABLE foo,bar then both tables
should be reported.

 Given the OID, we use the ObjectProperty[] structure to know which cache
 or catalog to scan in order to get the name and namespace of the object.

The changes to make ObjectPropertyType visible to code outside
objectaddress.c look bogus to me.  I think we should make this new code
use the interfaces we already have.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Event Triggers: adding information

2013-01-04 Thread Alvaro Herrera
Alvaro Herrera wrote:

 I gave this patch a look.  I'm not done with it; I mostly gave the
 utility.c changes some love so that the end result is not as cluttered.
 I did this by creating a couple of macros that call the Start() thing,
 then set the OID, then call the End() thing.  This made pretty obvious
 the places that were missing to set the object OID; I changed the CREATE
 TABLE AS code to return it, for instance.  The patch I attach applies on
 top of Dimitri's v6 patch.

... and here's the attachment.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/src/backend/commands/createas.c
--- b/src/backend/commands/createas.c
***
*** 50,55  typedef struct
--- 50,58 
BulkInsertState bistate;/* bulk insert state */
  } DR_intorel;
  
+ /* the OID of the created table, for ExecCreateTableAs consumption */
+ static OidCreateAsRelid = InvalidOid;
+ 
  static void intorel_startup(DestReceiver *self, int operation, TupleDesc 
typeinfo);
  static void intorel_receive(TupleTableSlot *slot, DestReceiver *self);
  static void intorel_shutdown(DestReceiver *self);
***
*** 59,71  static void intorel_destroy(DestReceiver *self);
  /*
   * ExecCreateTableAs -- execute a CREATE TABLE AS command
   */
! void
  ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
  ParamListInfo params, char *completionTag)
  {
Query  *query = (Query *) stmt-query;
IntoClause *into = stmt-into;
DestReceiver *dest;
List   *rewritten;
PlannedStmt *plan;
QueryDesc  *queryDesc;
--- 62,75 
  /*
   * ExecCreateTableAs -- execute a CREATE TABLE AS command
   */
! Oid
  ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
  ParamListInfo params, char *completionTag)
  {
Query  *query = (Query *) stmt-query;
IntoClause *into = stmt-into;
DestReceiver *dest;
+   Oid relOid;
List   *rewritten;
PlannedStmt *plan;
QueryDesc  *queryDesc;
***
*** 88,94  ExecCreateTableAs(CreateTableAsStmt *stmt, const char 
*queryString,
  
ExecuteQuery(estmt, into, queryString, params, dest, 
completionTag);
  
!   return;
}
Assert(query-commandType == CMD_SELECT);
  
--- 92,100 
  
ExecuteQuery(estmt, into, queryString, params, dest, 
completionTag);
  
!   relOid = CreateAsRelid;
!   CreateAsRelid = InvalidOid;
!   return relOid;
}
Assert(query-commandType == CMD_SELECT);
  
***
*** 156,161  ExecCreateTableAs(CreateTableAsStmt *stmt, const char 
*queryString,
--- 162,172 
FreeQueryDesc(queryDesc);
  
PopActiveSnapshot();
+ 
+   relOid = CreateAsRelid;
+   CreateAsRelid = InvalidOid;
+ 
+   return relOid;
  }
  
  /*
***
*** 353,358  intorel_startup(DestReceiver *self, int operation, TupleDesc 
typeinfo)
--- 364,372 
myState-rel = intoRelationDesc;
myState-output_cid = GetCurrentCommandId(true);
  
+   /* and remember the new relation's OID for ExecCreateTableAs */
+   CreateAsRelid = RelationGetRelid(myState-rel);
+ 
/*
 * We can skip WAL-logging the insertions, unless PITR or streaming
 * replication is in use. We can skip the FSM in any case.
*** a/src/backend/commands/event_trigger.c
--- b/src/backend/commands/event_trigger.c
***
*** 81,87  typedef enum
EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED
  } event_trigger_command_tag_check_result;
  
! static event_trigger_support_data event_trigger_support[] = {
{ AGGREGATE, true },
{ CAST, true },
{ CONSTRAINT, true },
--- 81,88 
EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED
  } event_trigger_command_tag_check_result;
  
! static event_trigger_support_data event_trigger_support[] =
! {
{ AGGREGATE, true },
{ CAST, true },
{ CONSTRAINT, true },
***
*** 121,127  static void AlterEventTriggerOwner_internal(Relation rel,

HeapTuple tup,

Oid newOwnerId);
  static event_trigger_command_tag_check_result check_ddl_tag(const char *tag);
- static void error_duplicate_filter_variable(const char *defname);
  static Datum filter_list_to_array(List *filterlist);
  static Oid insert_event_trigger_tuple(char *trigname, char *eventname,
  Oid 
evtOwner, Oid funcoid,
--- 122,127 
***
*** 164,176  CreateEventTrigger(CreateEventTrigStmt *stmt)
 

Re: [HACKERS] json api WIP patch

2013-01-04 Thread Pavel Stehule
2013/1/4 Andrew Dunstan and...@dunslane.net:

 On 01/04/2013 03:36 PM, Pavel Stehule wrote:

 Hello



 2013/1/4 Andrew Dunstan and...@dunslane.net:

 On 01/02/2013 05:51 PM, Andrew Dunstan wrote:


 On 01/02/2013 04:45 PM, Robert Haas wrote:

 On Wed, Dec 26, 2012 at 3:33 PM, Andrew Dunstan and...@dunslane.net
 wrote:

 Here is a patch for the first part of the JSON API that was recently
 discussed. It includes the json  parser hook infrastructure and
 functions
 for json_get and friends, plus json_keys.



 Updated patch that contains most of the functionality I'm after. One piece
 left is populate_recordset (populate a set of records from a single json
 datum which is an array of objects, in one pass). That requires a bit of
 thought.

 I hope most of the whitespace issues are fixed.

 it is looking well

 I have one note - is json_each a good name?



 Possibly not, although hstore has each(). json_unnest might be even less
 felicitous.

I understand - but hstore isn't in core, so it should not set a precedent.

regexp_split_to_table

I am not a native speaker, and it sounds a little bit strange to me -
but maybe that is just because I am not a native speaker :)

Regards

Pavel


 I'm expecting a good deal of bikeshedding - I'm trying to ignore those
 issues for the most part because the functionality seems much more important
 to me than the names.

 The more people that pound on it and try to break it the happier I'll be.

 cheers

 andrew




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 Next question is what people think about back-patching into 9.2 so as
 to eliminate the performance regression vs 9.1.

 8% is a pretty serious regression, for those of us with applications
 which do a lot of dynamic SQL.  As a reminder, many people do dynamic
 SQL even in repetitive, performance-sensitive functions in order to
 avoid plan caching.

Well, of course, people with that type of problem should probably
rethink their use of dynamic SQL when they move to 9.2 anyway, because
that's the case where the new plancache code could actually help them
if they'd only let it.

But anyway, nobody seems to be speaking against back-patching, so
I'll go do it.

regards, tom lane




Re: [HACKERS] Event Triggers: adding information

2013-01-04 Thread Dimitri Fontaine
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 I gave this patch a look.  I'm not done with it; I mostly gave the
 utility.c changes some love so that the end result is not as cluttered.

Thanks Álvaro!

 I did this by creating a couple of macros that call the Start() thing,
 then set the OID, then call the End() thing.  This made pretty obvious
 the places that were missing to set the object OID; I changed the CREATE
 TABLE AS code to return it, for instance.  The patch I attach applies on
 top of Dimitri's v6 patch.  With these changes, it's much easier to
 review the big standard_ProcessUtility() switch and verify cases that
 are being handled correctly.  (A few places cannot use the macros I
 defined because they use more complex arrangements.  This is fine, I
 think, because they are few enough that can be verified manually.)

That sounds great. I'm not at ease with creating advanced C macros and
I'm very happy you did it :)

 I also noticed that ALTER DEFAULT PRIVILEGES was trying to fire event
 triggers, even though we don't support GRANT and REVOKE; it seems to me
 that the three things out to be supported together.  I'm told David
 Fetter would call this DCL support, and we're not supporting DCL at
 this stage.  So DEFAULT PRIVILEGES ought to be not supported.

Agreed.  Do you want me to edit my patch to remove support for that
command and the other arrangement we are talking about, or do you want
to continue having at it?

 I also observed that the SECURITY LABEL commands does not fire event
 triggers; I think it should.  But then maybe this can be rightly called
 DCL as well, so perhaps it's all right to leave it out.

I would agree with calling that a DCL.

 DROP OWNED is not firing event triggers.  There is a comment therein
 that says we don't fire them for shared objects, but this is wrong:
 this command is dropping local objects, not shared, so it seems
 necessary to me that triggers are fired.

I think the best way to address DROP OWNED is the same as to address
DROP CASCADE: have the inner code (doDeletion, I think) prepare another
DropStmt and give control back to ProcessUtility() with a context of
PROCESS_UTILITY_CASCADE.

The good news is that the unified DropStmt support allows us to think
about that development, even if it still is a non trivial refactoring.

I would like to hear that we can consider the current patch first then
refactor the drop cascade operations so that we automatically have full
support for DROP OWNED.

 I also noticed that some cases such as DROP whatever do not report the
 OIDs of all the objects that are being dropped, only one of them.  This
 seems bogus to me; if you do DROP TABLE foo,bar then both tables
 should be reported.

When we have the DROP CASCADE refactoring, we could maybe implement drop
on a list of objects as getting back to ProcessUtility() also, with yet
another context (SUBCOMMAND or maybe GENERATED would do here).

The other way to address the information problem would be to expose it
as a relation (resultset, setof record, cursor, you name it) variable to
PLpgSQL.

Again, I hope we can decide on the approach now, commit what we have and
expand on it on the next commit fest.

 Given the OID, we use the ObjectProperty[] structure to know which cache
 or catalog to scan in order to get the name and namespace of the object.

 The changes to make ObjectPropertyType visible to code outside
 objectaddress.c look bogus to me.  I think we should make this new code
 use the interfaces we already have.

We need to move the new function get_object_name_and_namespace() into
the objectaddress.c module then, and decide about returning yet
another structure (because we have two values to return) or to pass that
function a couple of char **.  Which devil do you prefer?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-04 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 Mostly that it seems like a hack, and I suspect we may come up with a
 better way to do this in the future.

Do you have the specs of such better way? Would it be a problem to have
both pg_retainxlog and the new way?

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Joshua D. Drake


On 01/04/2013 01:17 PM, Tom Lane wrote:


Well, of course, people with that type of problem should probably
rethink their use of dynamic SQL when they move to 9.2 anyway, because
that's the case where the new plancache code could actually help them
if they'd only let it.


Not to further the argument, given the +1 from you, but I think it is 
important to keep in mind that the less work an upgrade takes, the more 
likely it is that people will do it.


It took forever (and we still have stragglers) to get people past 8.2 
because of the casting change in 8.3. This is a similar problem in that 
if we want them to upgrade to take advantage of features, we have to 
make it so the least amount of work possible is needed to make that 
upgrade happen.


Rewriting many thousands of lines of dynamic sql to upgrade to 9.2 is 
certainly not doing that :).


Sincerely,

Joshua D. Drake




But anyway, nobody seems to be speaking against back-patching, so
I'll go do it.

regards, tom lane





--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [HACKERS] enhanced error fields

2013-01-04 Thread Peter Geoghegan
On 4 January 2013 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
 Exactly.  To my mind, the *entire* point of this patch is to remove the
 need for people to try to dig information out of potentially-localized
 message strings.  It's not clear to me that we have to strain to provide
 information that isn't in the currently-reported messages --- we are
 only trying to make it easier for client-side code to extract the
 information it's likely to need.

It seems that we're in agreement, then. I'll prepare a version of the
patch very similar to the one I previously posted, but with some
caveats about how reliably the values can be used. I think that that
should be fine.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services




Re: [HACKERS] enhanced error fields

2013-01-04 Thread Peter Geoghegan
On 4 January 2013 17:10, Robert Haas robertmh...@gmail.com wrote:
 I don't really agree with this.  To be honest, my biggest concern
 about this patch is that it will make it take longer to report an
 error.  At least in the cases where these additional fields are
 included, it will take CPU time to populate those fields, and maybe
 there will be some overhead even in the cases where they aren't even
 used (although I'd expect that to be too little to measure).  Now,
 maybe that doesn't matter in the case where the error is being
 reported back to the client, because the overhead of shipping packets
 across even a local socket likely dwarfs the overhead, but I think it
 matters a lot where you are running a big exception-catching loop.
 That is already painfully slow, and -1 from me on anything that makes
 it significantly slower.  I have a feeling this isn't the first time
 I'm ranting about this topic in relation to this patch, so apologies
 if this is old news.

You already brought it up. I measured the additional overhead, and
found it to be present, but inconsequential. That was with Pavel's
version of the patch, which had many more fields than what I've
proposed. Please take a look upthread.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services




Re: [HACKERS] dynamic SQL - possible performance regression in 9.2

2013-01-04 Thread Josh Berkus

 Well, of course, people with that type of problem should probably
 rethink their use of dynamic SQL when they move to 9.2 anyway, because
 that's the case where the new plancache code could actually help them
 if they'd only let it.

Oh, no question.  But it'll take time for users with 1000's of lines of
PLpgSQL from 8.2 to rewrite it.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] Re: [COMMITTERS] pgsql: Invent a one-shot variant of CachedPlans for better performanc

2013-01-04 Thread Simon Riggs
On 4 January 2013 22:42, Tom Lane t...@sss.pgh.pa.us wrote:

 Invent a one-shot variant of CachedPlans for better performance.
...
 Back-patch to 9.2, since this was a performance regression compared to 9.1.
 (In 9.2, place the added struct fields so as to avoid changing the offsets
 of existing fields.)

 Heikki Linnakangas and Tom Lane

Just a moment of reflection here but I did already invent a one-shot
plan concept in a patch submission to 9.2, called exactly that, which
enables a variety of optimizations, this being one.

It's a shame that my patch was deferred in favour of your approach,
yet it never happened until now, well after release. Given that I
raised this before the first CF of 9.2, that is not good.

We need a full one-shot concept linking the planner and executor for
all sorts of reasons, not just this one. We can discuss the
practicality of individual optimizations but the linkage should be
clear throughout the whole infrastructure.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-04 Thread Amit kapila
On Friday, January 04, 2013 10:57 PM Boszormenyi Zoltan wrote:
 Hi,

 I am reviewing your patch.

Thank you very much.

 Since you are using a constant string, it would be a little faster
 to use sizeof(string)-1 as it can be computed at compile-time
 and not run the strlen() all the time this code is reached.

I have referred to the code just above for PG_TEMP_FILE_PREFIX in the same 
function, sendDir(). 
It is not only that place; there are other places where strlen is used 
for PG_TEMP_FILE_PREFIX.
I think in this path such an optimization might not help much.
However, if you think we should take care of this, then I can find the other 
places where a similar change can be made, to keep it consistent?

 In create_conf_lock_file():


 Can't we add a new LWLock and use it as a critical section instead
 of waiting for 10 seconds? It would be quite surprising to wait
 10 seconds  when accidentally executing two SET PERSISTENT
 statements in parallel. 

Yes, you are right, adding a new LWLock will avoid the use of sleep.
However, I am just not sure whether we should add a new LWLock only for this 
purpose.

Similar logic is used in CreateLockFile() for the postmaster file, but it 
doesn't use sleep.
How about reducing the sleep time or removing the sleep altogether? In that 
case the user might get an error and need to retry to get the command to 
succeed.

With Regards,
Amit Kapila.



Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-04 Thread Boszormenyi Zoltan

On 2013-01-05 05:58, Amit kapila wrote:

On Friday, January 04, 2013 10:57 PM Boszormenyi Zoltan wrote:

Hi,
I am reviewing your patch.

Thank you very much.


Since you are using a constant string, it would be a little faster
to use sizeof(string)-1 as it can be computed at compile-time
and not run the strlen() all the time this code is reached.

I have referred to the code just above for PG_TEMP_FILE_PREFIX in the same 
function, sendDir().
It is not only that place; there are other places where strlen is used 
for PG_TEMP_FILE_PREFIX.
I think in this path such an optimization might not help much.
However, if you think we should take care of this, then I can find the other 
places where a similar change can be made, to keep it consistent?


In create_conf_lock_file():



Can't we add a new LWLock and use it as a critical section instead
of waiting for 10 seconds? It would be quite surprising to wait
10 seconds  when accidentally executing two SET PERSISTENT
statements in parallel.

Yes, you are right, adding a new LWLock will avoid the use of sleep.
However, I am just not sure whether we should add a new LWLock only for this 
purpose.

Similar logic is used in CreateLockFile() for the postmaster file, but it 
doesn't use sleep.
How about reducing the sleep time or removing the sleep altogether? In that
case the user might get an error and need to retry to get the command to 
succeed.


I think removing the loop entirely is better.

However, the behaviour should be discussed by a wider audience:
- the backends should wait for each other just like other statements
   that fight for the same object (in which case locking is needed), or
- throw an error immediately on conflicting parallel work

I also tried to trigger one of the errors by creating the lock file manually.
You need an extra space between "... retry" and "or ..." in the errhint string:

+   ereport(ERROR,
+           (errcode_for_file_access(),
+            errmsg("could not create lock file \"%s\": %m", LockFileName),
+            errhint("May be too many concurrent edit into file happening, please wait!! and retry"
+                    "or .lock is file accidently left there please clean the file from config_dir")));


Best regards,
Zoltán Böszörményi

--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/


