Re: [HACKERS] WIP: index support for regexp search

2013-01-23 Thread Heikki Linnakangas

On 23.01.2013 09:36, Alexander Korotkov wrote:

Hi!

Some quick answers to part of the notes/issues. I will provide the rest of the
answers soon.

On Wed, Jan 23, 2013 at 6:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:


The biggest problem is that I really don't care for the idea of
contrib/pg_trgm being this cozy with the innards of regex_t.  Sooner
or later we are going to want to split the regex code off again as a
standalone library, and we *must* have a cleaner division of labor if
that is ever to happen.  Not sure what a suitable API would look like
though.


The only option I see now is to provide a method like export_cnfa which
would export the corresponding CNFA in a fixed format.


Yeah, I think that makes sense. The transformation code in trgm_regexp.c 
would probably be more readable too, if it didn't have to deal with the 
regex guts representation of the CNFA. Also, once you have an intermediate 
representation of the original CNFA, you could do some of the 
transformation work on that representation, before building the 
transformed graph containing trigrams. You could eliminate any 
non-alphanumeric characters, joining states connected by arcs with 
non-alphanumeric characters, for example.
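
For illustration only, a fixed export format could be as simple as a few flat
structs along the following lines (all names here are hypothetical and are not
part of the actual regex code):

/* Hypothetical fixed-format CNFA export, for illustration only. */
typedef struct ExportedArc
{
	int			color;		/* color (character class) labelling the arc */
	int			to;			/* index of the target state */
} ExportedArc;

typedef struct ExportedState
{
	int			narcs;		/* number of outgoing arcs */
	ExportedArc *arcs;
} ExportedState;

typedef struct ExportedCNFA
{
	int			nstates;
	int			init;		/* index of the initial state */
	int			final;		/* index of the final state */
	ExportedState *states;
} ExportedCNFA;

pg_trgm could then traverse such a structure without knowing anything about
regex_t internals.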


- Heikki




Re: [HACKERS] proposal: fix corner use case of variadic functions usage

2013-01-23 Thread Pavel Stehule
2013/1/23 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 what should be the result of concat(variadic NULL::int[])?
 I enabled this use case, but what should the result be?

 I think there are two somewhat defensible theories:

 (1) punt, and return NULL overall.  So in this case the variadic
 function would act as if it were STRICT.  That seems a bit weird though
 if the function is not strict otherwise.

 (2) Treat the NULL as if it were a zero-length array, giving rise to
 zero ordinary parameters.  This could be problematic if the function
 can't cope very well with zero parameters ... but on the other hand,
 if it can't do so, then what will it do with VARIADIC '{}'::int[] ?

This is a recurring question - how far is NULL equivalent to '{}'?

There is only one precedent, I think:

postgres=# select '' || array_to_string('{}'::int[], '') || '';
 ?column?
--
 
(1 row)

postgres=# select '' || array_to_string(NULL::int[], '') || '';
 ?column?
--

(1 row)

but this function is STRICT - so there is no precedent :(


 I lean a little bit towards (2) but it's definitely a judgment call.
 Anybody have any other arguments one way or the other?




 regards, tom lane




Re: [HACKERS] .gitignore additions

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 12:05 AM, Craig Ringer wrote:

Hi all

Would a committer be willing to pop some entries in .gitignore for
Windows native build outputs?

*.sln


We already exclude pgsql.sln - what others are built? None that I can see.


*.vcproj
*.vcxproj


These all go in the top dir, no? So I think these could be anchored 
patterns.


cheers

andrew




Re: [HACKERS] dividing privileges for replication role.

2013-01-23 Thread Tomonari Katsumata
Hi, Tom

Thank you for comments.

 Tomonari Katsumata t.katsumata1...@gmail.com writes:
  Why is it better to do this with a privilege, rather than just using
  pg_hba.conf?


  You are right.
  Handling this with pg_hba.conf is the easy way.

  But I think many users plan for switchover, so
  pg_hba.conf is the same on master and standby;
  it's not convenient to have to rewrite pg_hba.conf
  whenever a switchover occurs.

  On the other hand, using a privilege, although we have to prepare
  the roles beforehand, we don't need to rewrite pg_hba.conf.


 That sounds good, but if the behavior is controlled by a privilege
 (ie, it's stored in system catalogs) then it's impossible to have
 different settings on different slave servers --- or indeed to change
 the settings locally on a slave at all.  You can only change settings
 on the master and let the change replicate to all the slaves.

Yes, I'm thinking of changing settings on the master and having them
propagate to all slaves.

 Quite aside from whether you want to manage things like that, what
happens if
 your master has crashed and you find you need to change the settings on
 the way to getting a slave to take over?

 The crash-recovery worry is one of the main reasons that things like
 pg_hba.conf aren't stored in system catalogs already.  It's not always
 convenient to need a running server before you can change the settings.

I understand that the approach in my patch (using privileges to control
its behavior) does not match the policy.

But I'm still thinking I should provide something to control
the behavior.

Then, how about adding a new option, standby_mode, to the Database Connection
Control Functions, like application_name?

ex:
  primary_conninfo = 'port=5432 standby_mode=master-cascade'
  primary_conninfo = 'port=5432 standby_mode=master-only'
  primary_conninfo = 'port=5432 standby_mode=cascade-only'

I think this would allow the same control as the privilege-based approach,
and it won't be contrary to the policy (no data is stored in system
catalogs).

Since this is a corner case, should I just not do anything about it?
(Am I worrying too much?)


regards,

NTT Software Corporation
  Tomonari Katsumata


Re: [HACKERS] .gitignore additions

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 03:15 AM, Andrew Dunstan wrote:


On 01/23/2013 12:05 AM, Craig Ringer wrote:

Hi all

Would a committer be willing to pop some entries in .gitignore for
Windows native build outputs?

*.sln


We already exclude pgsql.sln - what others are built? None that I can 
see.



*.vcproj
*.vcxproj


Actually, I see we already have the first pattern. So I've added the second.

cheers

andrew






Re: [HACKERS] dividing privileges for replication role.

2013-01-23 Thread Michael Paquier
On Wed, Jan 23, 2013 at 5:46 PM, Tomonari Katsumata 
t.katsumata1...@gmail.com wrote:

 ex:

   primary_conninfo = 'port=5432 standby_mode=master-cascade'
   primary_conninfo = 'port=5432 standby_mode=master-only'
   primary_conninfo = 'port=5432 standby_mode=cascade-only'

 I think this would allow the same control as the privilege-based approach,
 and it won't be contrary to the policy (no data is stored in system
 catalogs).

 Since this is a corner case, should I just not do anything about it?
 (Am I worrying too much?)

Just curious, but... What is your primary goal with this patch?
Solving the cyclic problem?
-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] .gitignore additions

2013-01-23 Thread Craig Ringer
On 01/23/2013 04:47 PM, Andrew Dunstan wrote:


 *.vcproj
 *.vcxproj

 Actually, I see we already have the first pattern. So I've added the
 second.

Great, thanks.

Anchoring them is probably slightly safer, but I can't really imagine
that ever being an issue for a pattern like *.vcxproj.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] unlogged tables vs. GIST

2013-01-23 Thread Jeevan Chalke
On Wed, Jan 16, 2013 at 3:24 AM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Jan 15, 2013 at 4:26 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Robert Haas robertmh...@gmail.com writes:
  I think that might be acceptable from a performance point of view -
  after all, if the index is unlogged, you're saving the cost of WAL -
  but I guess I still prefer a generic solution to this problem (a
  generalization of GetXLogRecPtrForTemp) rather than a special-purpose
  solution based on the nitty-gritty of how GiST uses these values.
  What's the difference between storing this value in pg_control and,
  say, the OID counter?
 
  Well, the modularity argument is that GiST shouldn't have any special
  privileges compared to a third-party index AM.  (I realize that
  third-party AMs already have problems plugging into WAL replay, but
  that doesn't mean we should create more problems.)
 
  We could possibly dodge that objection by regarding the global counter
  as some sort of generic unlogged operation counter, available to
  anybody who needs it.  It would be good to have a plausible example of
  something else needing it, but assume somebody can think of one.
 
  The bigger issue is that the reason we don't have to update pg_control
  every other millisecond is that the OID counter is capable of tracking
  its state between checkpoints without touching pg_control, that is it
  can emit WAL records to track its increments.  I think that we should
  insist that GiST do likewise, even if we give it some space in
  pg_control.  Remember that pg_control is a single point of failure for
  the database, and the more often it's written to, the more likely it is
  that something will go wrong there.
 
  So I guess what would make sense to me is that we invent an unlogged
  ops counter that is managed exactly like the OID counter, including
  having WAL records that are treated as consuming some number of values
  in advance.  If it's 64 bits wide then the WAL records could safely be
  made to consume quite a lot of values, like a thousand or so, thus
  reducing the actual WAL I/O burden to about nothing.

 I didn't look at the actual patch (silly me?) but the only time you
 need to update the control file is when writing the shutdown
 checkpoint just before stopping the database server.  If the server
 crashes, it's OK to roll the value back to some smaller value, because
 unlogged relations will be reset anyway.  And while the server is
 running the information can live in a shared memory copy protected by
 a spinlock.  So the control file traffic should be limited to once per
 server lifetime, AFAICS.


Yes.

I guess the concern with my earlier patch was that it directly incremented the
ControlFile->unloggedLSN counter, and so would take ControlFileLock several
times.

In this version of the patch I did what Robert suggested. At postmaster start,
the unloggedLSN value is copied into XLogCtl, a shared memory struct, and all
access to unloggedLSN goes through this shared variable under a SpinLock.
Since we want to keep this counter persistent across a clean shutdown, it is
stored back into ControlFile at shutdown.

With this approach, we update ControlFile only when we shut down the server;
the rest of the time we use a shared memory counter. That means we are not
touching pg_control every other millisecond or so. Also, since we do not care
about crashes, WAL-logging this counter like the OID counter is not really
required.
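
For context, this is the kind of definition the patch is meant to allow
(a GiST index on an unlogged table); the table and index names here are made
up for illustration:

CREATE UNLOGGED TABLE pts (p point);
CREATE INDEX pts_p_idx ON pts USING gist (p);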

Thanks


  --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: +91 20 30589500

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb



support_unlogged_gist_indexes_v2.patch
Description: Binary data



Re: [HACKERS] Request for vote to move forward with recovery.conf overhaul

2013-01-23 Thread Simon Riggs
On 23 January 2013 04:49, Michael Paquier michael.paqu...@gmail.com wrote:

 - recovery.conf is removed (no backward compatibility in this version of the
 patch)

If you want to pursue that, you know where it leads. No, rebasing a
rejected patch doesn't help; it's just relighting a fire that shouldn't
ever have been lit.

Pushing to do that out of order is just going to drain essential time
out of this CF from all of us.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




[HACKERS] pg_basebackup with -R option and start standby have problems with escaped password

2013-01-23 Thread Hari Babu
Test scenario to reproduce: 
1. Start the server 
2. Create the user as follows:
./psql postgres -c "create user user1 superuser login password 'use''1'"

3. Take the backup with -R option as follows. 
./pg_basebackup -D ../../data1 -R -U user1 -W 

The following error occurs when the new standby is started on the backup
database:

FATAL:  could not connect to the primary server: missing "=" after "1'" in
connection info string
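
For reference, libpq's conninfo syntax requires a value containing a single
quote to be quoted, with the embedded quote backslash-escaped, so (ignoring
recovery.conf's own quoting of the whole primary_conninfo value) the password
would have to end up written roughly as:

password = 'use\'1'

which is what the -R code generating primary_conninfo needs to produce.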

Regards, 
Hari babu.





Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-23 Thread Amit Kapila
On Tuesday, January 22, 2013 10:14 PM Fujii Masao wrote:
 On Tue, Jan 22, 2013 at 11:54 PM, Fujii Masao masao.fu...@gmail.com
 wrote:
  On Tue, Jan 22, 2013 at 11:07 PM, Amit Kapila
 amit.kap...@huawei.com wrote:
  On Tuesday, January 22, 2013 7:10 PM Zoltán Böszörményi wrote:
  On 2013-01-22 13:32, Amit Kapila wrote:
   On Saturday, January 19, 2013 2:37 AM Boszormenyi Zoltan wrote:
   On 2013-01-18 21:48, Boszormenyi Zoltan wrote:
   On 2013-01-18 21:32, Tom Lane wrote:
   Boszormenyi Zoltan z...@cybertec.at writes:
   On 2013-01-18 11:05, Amit Kapila wrote:
   On using mktemp, Linux compilation gives the warning below:
   warning: the use of `mktemp' is dangerous, better use `mkstemp'
  
   Everywhere else that we need to do something like this, we just use our
   own PID to disambiguate, ie
   sprintf(tempfilename, "/path/to/file.%d", (int) getpid());
   There is no need to deviate from that pattern or introduce portability
   issues, since we can reasonably assume that no non-Postgres programs are
   creating files in this directory.
   Thanks for the enlightenment, I will post a new version soon.
   Here it is.
   The patch sent by you works fine.
   It needs a small modification, as below:
  
   The auto.conf.d directory should follow the postgresql.conf file
   directory, not the data_directory.
   The same is validated while parsing the postgresql.conf configuration
   file.
  
   The patch is changed to use the postgresql.conf file directory as
   below.
  
   StrNCpy(ConfigFileDir, ConfigFileName, sizeof(ConfigFileDir));
   get_parent_directory(ConfigFileDir);
   /* Frame auto conf name and auto conf sample temp file name */
   snprintf(AutoConfFileName, sizeof(AutoConfFileName), "%s/%s/%s",
            ConfigFileDir,
            PG_AUTOCONF_DIR,
            PG_AUTOCONF_FILENAME);
 
  Maybe it's just me but I prefer to have identical
  settings across all replicated servers. But I agree
  that there can be use cases with different setups.
 
  All in all, this change makes it necessary to run the
  same SET PERSISTENT statements on all slave servers,
  too, to make them identical again if the configuration
  is separated from the data directory (like on Debian
  or Ubuntu using the default packages). This needs to be
  documented explicitly.
 
  The SET PERSISTENT command needs to be run on all slave servers regardless
  of the path we take w.r.t. the data directory, unless the user takes a
  backup after the command.
 
  Is it safe to write something in a directory other than the data directory
  via SQL?
 
  Does the postgres user usually have write permission on a configuration
  directory like /etc/postgresql?
 
   This closes all comments raised till now for this patch.
   Kindly let me know if you feel something is missing?
 
  I can't think of anything else.
 
  For removing
  +   a configuration entry from <filename>postgresql.auto.conf</filename> file,
  +   use <command>RESET PERSISTENT</command>. The values will be effective
  +   after reload of server configuration (SIGHUP) or server startup.
 
  The description of RESET PERSISTENT is in the document, but it
  seems not to be implemented.
 
 I read only the documentation part of the patch and did a simple test.
 Here are the comments:
 
 storage.sgml needs to be updated.
 
 Doesn't auto.conf.d need to be explained in config.sgml?

I shall update both these files.
 
 When I created my own configuration file in auto.conf.d and
 started the server, I got the following LOG message. Does this
 mean that users should not create any of their own configuration
 files there? Why shouldn't they?

It could be allowed, but for now the directory is meant to hold only
automatically generated conf files; that's what the name 'auto.conf.d'
suggests.

  I think that it's more useful to
 allow users to put their own configuration files in auto.conf.d as well.
 Then users can include any configuration file without adding a
 new include directive into postgresql.conf, because auto.conf.d
 has already been included.
 
 LOG:  unexpected file found in automatic configuration directory:
 /data/auto.conf.d/hoge

Personally I have no objection to allowing other user-specific conf files
in this directory.
But then I think we should also name the directory differently, as it was
initially (conf_dir) in the patch.
I would like to hear the opinions of others on this, so that I don't need to
change it back later.

 When I removed postgresql.auto.conf and restarted the server,
 I got the following warning message. This is not correct because
 I didn't remove auto.conf.d from postgresql.conf. What I removed
 is only postgresql.auto.conf.
 
 WARNING:  Default auto.conf.d is not present in postgresql.conf.
 Configuration parameters changed with SET PERSISTENT command will not
 be effective.

How about changing it to the message below:

WARNING:  File 

Re: [HACKERS] dividing privileges for replication role.

2013-01-23 Thread Tomonari Katsumata
Hi, Michael
2013/1/23 Michael Paquier michael.paqu...@gmail.com



  On Wed, Jan 23, 2013 at 5:46 PM, Tomonari Katsumata 
 t.katsumata1...@gmail.com wrote:

 ex:

   primary_conninfo = 'port=5432 standby_mode=master-cascade'
   primary_conninfo = 'port=5432 standby_mode=master-only'
   primary_conninfo = 'port=5432 standby_mode=cascade-only'

 I think this would allow the same control as the privilege-based approach,
 and it won't be contrary to the policy (no data is stored in system
 catalogs).

 Since this is a corner case, should I just not do anything about it?
 (Am I worrying too much?)

 Just curious, but... What is your primary goal with this patch?
 Solving the cyclic problem?


No, I'm not thinking about solving the cyclic problem directly.
It is too difficult for me.
The goal of my patch is to give users some options to avoid it.

NTT Software Corporation
  Tomonari Katsumata


Re: [HACKERS] Event Triggers: adding information

2013-01-23 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 I think these new regression tests are no good, because I doubt that
 the number of recursive calls that can fit into any given amount of
 stack space is guaranteed to be the same on all platforms.  I have
 committed the bug fixes themselves, however.

Thanks for committing the fixes. About the regression tests, I think
you're right, but then I can't see how to include such a test. Maybe you
could add the other one, though?

 I wasn't entirely happy with your proposed documentation so I'm
 attaching a counter-proposal.  My specific complaints are (1) telling
 people that event triggers are run in a savepoint seems a little too
 abstract; I have attempted to make the consequences more concrete; (2)
 RAISE EXCEPTION is PL/pgsql specific and not the only possible reason
 for an error; I have attempted to be more generic; and (3) in the
 process of fiddling with this, I noticed that the ddl_command_end
 documentation can, I believe, be made both shorter and more clear by
 turning it into a rider on the previous paragraph.

 Comments?

+1 for this version, thanks.

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Re: patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-01-23 Thread Amit Kapila
On Wednesday, January 23, 2013 2:30 AM Jameison Martin wrote:

 Sorry for the late response, I just happened to see this yesterday.

 Running a general benchmark against the patch as Kevin suggests is a good
 idea.

 Amit, can you supply the actual values you saw when running pgbench (the 3
 values for each run)? I'd like to verify that the 1% difference isn't due to
 some file system/OS variability (I would be interested in what the stddev is
 for the values). Also, do you happen to have some information about the
 hardware you ran on?

Performance data for 5 runs is as below:

System Configuration: 
Hardware : 4 core (Intel(R) Xeon(R) CPU L5408 @ 2.13GHz)   RAM : 24GB 
Operating System: Suse-Linux 10.2 x86_64   

Server Configuration: 
The database cluster will be initialized with locales 
  COLLATE:  C 
  CTYPE:C 
  MESSAGES: en_US.UTF-8 
  MONETARY: en_US.UTF-8 
  NUMERIC:  en_US.UTF-8 
  TIME: en_US.UTF-8 

shared_buffers = 2GB 
checkpoint_segments = 255   
checkpoint_timeout = 10min   

pgbench: 
transaction type: TPC-B (sort of) 
scaling factor: 75 
query mode: simple 
number of clients: 8 
number of threads: 8 
duration: 300 s 

          original       patch
 Run-1:   579.596730     570.212601
 Run-2:   577.220325     576.719402
 Run-3:   571.792736     574.118542
 Run-4:   573.376879     571.548136
 Run-5:   573.469166     576.321368

 Avg  :   575.091167     573.784009
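
For reference, the reported settings above correspond to a pgbench invocation
along these lines (the database name is just an example):

pgbench -i -s 75 postgres
pgbench -c 8 -j 8 -T 300 postgres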


With Regards,
Amit Kapila.

On Monday, December 24, 2012 7:58 PM Kevin Grittner wrote:
Simon Riggs wrote:

Not really sure about the 100s of columns use case.

But showing gain in useful places in these more common cases wins
my vote.

Thanks for testing. Barring objections, will commit.
Do we have any results on just a plain, old stock pgbench run, with
the default table definitions?

That would be a reassuring complement to the other tests.
Server Configuration: 
The database cluster will be initialized with locales 
  COLLATE:  C 
  CTYPE:C 
  MESSAGES: en_US.UTF-8 
  MONETARY: en_US.UTF-8 
  NUMERIC:  en_US.UTF-8 
  TIME: en_US.UTF-8 

shared_buffers = 1GB 
checkpoint_segments = 255   
checkpoint_timeout = 15min   

pgbench: 
transaction type: TPC-B (sort of) 
scaling factor: 75 
query mode: simple 
number of clients: 8 
number of threads: 8 
duration: 600 s 

Performance: Average of 3 runs of pgbench in tps 
9.3devel  |  with trailing null patch 
--+-- 
578.9872  |   573.4980

With Regards,
Amit Kapila.





[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)

2013-01-23 Thread Heikki Linnakangas
For the record, on MSVC we can use __assume(0) to mark unreachable code. 
It does the same as gcc's __builtin_unreachable(). I tested it with the 
same palloc-heavy test case from Pavel that you used earlier, with the 
one-shot-plan commit temporarily reverted, and saw a speedup similar to the 
one you reported with gcc's __builtin_unreachable(). So, committed that.
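
As a standalone illustration of the idea (not necessarily the exact
definition used in the tree), such a hint can be wrapped portably like this:

#include <stdio.h>
#include <stdlib.h>

#if defined(_MSC_VER)
#define unreachable()	__assume(0)
#elif defined(__GNUC__)
#define unreachable()	__builtin_unreachable()
#else
#define unreachable()	abort()
#endif

static int
sign(int x)
{
	if (x > 0)
		return 1;
	if (x < 0)
		return -1;
	if (x == 0)
		return 0;
	unreachable();			/* lets the compiler drop the fall-through path */
}

int
main(void)
{
	printf("%d\n", sign(-42));
	return 0;
}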


- Heikki




Re: [HACKERS] Request for vote to move forward with recovery.conf overhaul

2013-01-23 Thread Michael Paquier

On 2013/01/23, at 18:12, Simon Riggs si...@2ndquadrant.com wrote:

 On 23 January 2013 04:49, Michael Paquier michael.paqu...@gmail.com wrote:
 
 - recovery.conf is removed (no backward compatibility in this version of the
 patch)
 
 If you want to pursue that, you know where it leads. No, rebasing a
 rejected patch doesn't help; it's just relighting a fire that shouldn't
 ever have been lit.
 
 Pushing to do that out of order is just going to drain essential time
 out of this CF from all of us.
No problem with supporting both. The only question I see: if the same parameter
is defined in both recovery.conf and postgresql.conf, is priority given to
recovery.conf?
--
Michael Paquier
http://michael.otacoo.com
(Sent from my mobile phone)



Re: [HACKERS] Review : Add hooks for pre- and post-processor executables for COPY and \copy

2013-01-23 Thread Etsuro Fujita
Hi Amit,

 

Thank you for your review.  I've rebased and updated the patch.  Please find
attached the patch.


 Code Review comments: 
 - 
 
 1. Modify the comment in the function header of parse_slash_copy (needs to be
modified for the new syntax).



Done.

 

 2. Comments for the functions OpenPipeStream and ClosePipeStream are missing.


 

Done.

 

 3. Any script errors are not directly visible to the user; if there are
problems in the script, there is no way to clean up.

Shouldn't this be mentioned in the User Manual?

 

Done.  Please see the documentation note on the \copy instruction in
psql-ref.sgml.


 Test case issues: 
 -- 
 1. Broken pipe is not handled in the case of the psql \copy command.
 The issues are as follows (verified on SuSE-Linux 10.2):
 1) psql exits when a \COPY xxx TO command is issued and the
 command/script is not found.

 When popen is called in write mode it creates a valid file
 descriptor, and when psql tries to write to the file a "Broken pipe" error
 occurs which is not handled.
 psql# \copy pgbench_accounts TO PROGRAM
 '../compress.sh pgbench_accounts4.txt'
 2) When a \copy command is in progress and the program/command is
 killed/crashes due to some problem, psql exits.

This is a headache.  I have no idea how to solve this.

 

Sorry for the long delay in responding.

 

Best regards,

Etsuro Fujita



copy-popen-20130123.patch
Description: Binary data



Re: [HACKERS] proposal: fix corner use case of variadic functions usage

2013-01-23 Thread Pavel Stehule
2013/1/23 Pavel Stehule pavel.steh...@gmail.com:
 2013/1/23 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 what should be the result of concat(variadic NULL::int[])?
 I enabled this use case, but what should the result be?

 I think there are two somewhat defensible theories:

 (1) punt, and return NULL overall.  So in this case the variadic
 function would act as if it were STRICT.  That seems a bit weird though
 if the function is not strict otherwise.

 (2) Treat the NULL as if it were a zero-length array, giving rise to
 zero ordinary parameters.  This could be problematic if the function
 can't cope very well with zero parameters ... but on the other hand,
 if it can't do so, then what will it do with VARIADIC '{}'::int[] ?

 This is a recurring question - how far is NULL equivalent to '{}'?

 There is only one precedent, I think:

 postgres=# select '' || array_to_string('{}'::int[], '') || '';
  ?column?
 --
  
 (1 row)

 postgres=# select '' || array_to_string(NULL::int[], '') || '';
  ?column?
 --

 (1 row)

 but this function is STRICT - so there is no precedent :(

Next, a related example:

CREATE OR REPLACE FUNCTION public.myleast(VARIADIC integer[])
 RETURNS integer
 LANGUAGE sql
AS $function$
select min(v) from unnest($1) g(v)
$function$


postgres=# select myleast(variadic array[]::int[]) is null;
 ?column?
--
 t
(1 row)

postgres=# select myleast(variadic null::int[]) is null;
 ?column?
--
 t
(1 row)

so it is close to Tom's option (2)

concat(VARIADIC NULL::int[]) and concat(VARIADIC '{}'::int[]) should
return NULL

it is a little bit strange, but it is probably the most valid choice

Regards

Pavel



 I lean a little bit towards (2) but it's definitely a judgment call.
 Anybody have any other arguments one way or the other?




 regards, tom lane




Re: [HACKERS] logical changeset generation v4

2013-01-23 Thread Andres Freund
On 2013-01-19 23:42:02 -0500, Steve Singer wrote:
 5) Currently it's only allowed to access catalog tables; it's fairly
 trivial to extend this to additional tables if you can accept some
 (noticeable but not too big) overhead for modifications on those tables.
 
 I was thinking of making that an option for tables, that would be useful
 for replication solutions configuration tables.
 
 I think this will make the life of anyone developing a new replication
 system easier.  Slony has a lot of infrastructure for allowing slonik
 scripts to wait for configuration changes to propagate everywhere before
 making other configuration changes because you can get race conditions.  If
 I were designing a new replication system and I had this feature then I
 would try to use it to come up with a simpler model of propagating
 configuration changes.

I pushed support for this, turned out to be a rather moderate patch (after a
cleanup patch that was required anyway):

 src/backend/access/common/reloptions.c | 10 ++
 src/backend/utils/cache/relcache.c |  9 -
 src/include/utils/rel.h|  9 +
 3 files changed, 27 insertions(+), 1 deletion(-)

With the (attached for convenience) patch applied you can do
# ALTER TABLE replication_metadata SET (treat_as_catalog_table = true);

to enable this.
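
For illustration, using the reloption name from the patch below: the setting
becomes visible in pg_class.reloptions and can be cleared again with
ALTER TABLE ... RESET, e.g.

SELECT relname, reloptions
FROM pg_class
WHERE relname = 'replication_metadata';

ALTER TABLE replication_metadata RESET (treat_as_catalog_table);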
What I wonder about is:
* does anybody have a better name for the reloption?
* Currently this can be set mid-transaction, but it only takes effect in the
  next transaction, and it doesn't error out when set mid-transaction.
  I personally find that acceptable; other opinions?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From b535ba12fad667725247281c43be2ef81f7e40d7 Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Wed, 23 Jan 2013 13:02:34 +0100
Subject: [PATCH] wal_decoding: mergme: Support declaring normal tables as
 timetraveleable

This is useful to be able to access tables used for replication metadata inside
an output plugin.

The storage option 'treat_as_catalog_table' is used for that purpose, so it can
be enabled for a table with
ALTER TABLE replication_metadata SET (treat_as_catalog_table = true);

It is currently possible to change that option mid-transaction although
timetravel access will only be possible in the next transaction.
---
 src/backend/access/common/reloptions.c |   10 ++
 src/backend/utils/cache/relcache.c |9 -
 src/include/utils/rel.h|9 +
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 456d746..f2d3c8b 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -62,6 +62,14 @@ static relopt_bool boolRelOpts[] =
 	},
 	{
 		{
+			"treat_as_catalog_table",
+			"Treat table as a catalog table for the purpose of logical replication",
+			RELOPT_KIND_HEAP
+		},
+		false
+	},
+	{
+		{
 			"fastupdate",
 			"Enables \"fast update\" feature for this GIN index",
 			RELOPT_KIND_GIN
@@ -1151,6 +1159,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
 		{"security_barrier", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, security_barrier)},
+		{"treat_as_catalog_table", RELOPT_TYPE_BOOL,
+		 offsetof(StdRdOptions, treat_as_catalog_table)},
 	};
 
 	options = parseRelOptions(reloptions, validate, kind, numoptions);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 369a4d1..cc42ff4 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -4718,12 +4718,19 @@ RelationIsDoingTimetravelInternal(Relation relation)
 	Assert(wal_level >= WAL_LEVEL_LOGICAL);
 
 	/*
-	 * XXX: Doing this test instead of using IsSystemNamespace has the
+	 * XXX: Doing this test instead of using IsSystemNamespace has the frak
 	 * advantage of classifying toast tables correctly.
 	 */
 	if (RelationGetRelid(relation) < FirstNormalObjectId)
 		return true;
 
+	/*
+	 * also log relevant data if we want the table to behave as a catalog
+	 * table, although its not a system provided one.
+	 */
+	if (RelationIsTreatedAsCatalogTable(relation))
+	return true;
+
 	return false;
 }
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index e07ef3f..a026612 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -219,6 +219,7 @@ typedef struct StdRdOptions
 	int			fillfactor;		/* page fill factor in percent (0..100) */
 	AutoVacOpts autovacuum;		/* autovacuum-related options */
 	bool		security_barrier;		/* for views */
+	bool		treat_as_catalog_table; /* treat as timetraveleable table */
 } StdRdOptions;
 
 #define HEAP_MIN_FILLFACTOR			10
@@ -255,6 +256,14 @@ typedef struct StdRdOptions
 	 

Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 I'm thinking that the main argument for trying to do this is so that we
 could say plan caching is transparent, full stop, with no caveats or
 corner cases.  But removing those caveats is going to cost a fair
 amount, and it looks like that cost will be wasted for most usage
 patterns.

I think the right thing to do here is aim for transparent plan caching.
Now, will the caveats only apply when there has been some live DDL
running, or even only DDL that changes schemas (not objects therein)?

Really, live DDL is not that frequent, and when you do that, you want
transparent replanning. I can't see any use case where it's important to
be able to run DDL in a live application yet continue to operate with
the old (and in some cases wrong) plans.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-23 Thread Zoltán Böszörményi

On 2013-01-20 00:15, Andrew Dunstan wrote:


On 01/19/2013 02:51 AM, Boszormenyi Zoltan wrote:
Yes it rings a bell. See 
http://people.planetpostgresql.org/andrew/index.php?/archives/264-Cross-compiling-PostgreSQL-for-WIndows.html



I wanted to add a comment to this blog entry but it wasn't accepted.



The blog is closed for comments. I have moved to a new blog, and this 
is just there for archive purposes.




Here it is:

It doesn't work under Wine, see:
http://www.winehq.org/pipermail/wine-users/2013-January/107008.html

But pg_config works so other PostgreSQL clients can also be built 
using the cross compiler.





If you want to target Wine I think you're totally on your own.


Yes, I know, it was only an attempt. The user's
administrator privilege under Wine is a showstopper.



cheers

andrew






--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





Re: [HACKERS] Strange Windows problem, lock_timeout test request

2013-01-23 Thread Zoltán Böszörményi

On 2013-01-19 21:15, Andrew Dunstan wrote:


On 01/19/2013 02:36 AM, Boszormenyi Zoltan wrote:




Cross-compiling is not really a supported platform. Why don't you 
just build natively? This is known to work, as shown by the buildfarm 
animals doing it successfully.


Because I don't have a mingw setup on Windows. (Sorry.)



A long time ago I had a lot of sympathy with this answer, but these 
days not so much.


I didn't ask for sympathy. :-) I am just on a totally different
project until 9th February and I am also far away from my desk.
So I can't even attempt to install Windows+mingw inside Qemu/KVM.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





Re: [HACKERS] .gitignore additions

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 1:32 AM, David Fetter da...@fetter.org wrote:
 On Wed, Jan 23, 2013 at 01:05:12PM +0800, Craig Ringer wrote:
 Hi all

 Would a committer be willing to pop some entries in .gitignore for
 Windows native build outputs?

 *.sln
 *.vcproj
 *.vcxproj

 It'd make life easier when testing Windows changes.

 While they're at it, it'd be nice to have tags from ctags (via our
 tools or otherwise) get ignored globally, along with cscope.out, as
 follows:

 tags
 /cscope.out


+1 on cscope.out!

 Cheers,
 David.
 --
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: david.fet...@gmail.com
 iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate






Re: [HACKERS] .gitignore additions

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 08:47 AM, Phil Sorber wrote:

On Wed, Jan 23, 2013 at 1:32 AM, David Fetter da...@fetter.org wrote:

On Wed, Jan 23, 2013 at 01:05:12PM +0800, Craig Ringer wrote:

Hi all

Would a committer be willing to pop some entries in .gitignore for
Windows native build outputs?

*.sln
*.vcproj
*.vcxproj

It'd make life easier when testing Windows changes.

While they're at it, it'd be nice to have tags from ctags (via our
tools or otherwise) get ignored globally, along with cscope.out , as
follows:

tags
/cscope.out


+1 on cscope.out!



There doesn't seem to be anything postgres-specific about these. Pretty much 
everything we list is a byproduct of a standard build, not some other 
tool. man gitignore says:


   Patterns which a user wants git to ignore in all situations (e.g.,
   backup or temporary files generated by the user’s editor of choice)
   generally go into a file specified by core.excludesfile in the
   user’s ~/.gitconfig.


I would think tags files and cscope.out probably come into that 
category, although I don't have terribly strong feelings about it.


cheers

andrew





Re: [HACKERS] .gitignore additions

2013-01-23 Thread Phil Sorber
On Jan 23, 2013 8:59 AM, Andrew Dunstan and...@dunslane.net wrote:


 On 01/23/2013 08:47 AM, Phil Sorber wrote:

 On Wed, Jan 23, 2013 at 1:32 AM, David Fetter da...@fetter.org wrote:

 On Wed, Jan 23, 2013 at 01:05:12PM +0800, Craig Ringer wrote:

 Hi all

 Would a committer be willing to pop some entries in .gitignore for
 Windows native build outputs?

 *.sln
 *.vcproj
 *.vcxproj

 It'd make life easier when testing Windows changes.

 While they're at it, it'd be nice to have tags from ctags (via our
 tools or otherwise) get ignored globally, along with cscope.out , as
 follows:

 tags
 /cscope.out

 +1 on cscope.out!


 There doesn't seem to be anything postgres-specific about these. Pretty much
everything we list is a byproduct of a standard build, not some other tool.
man gitignore says:

Patterns which a user wants git to ignore in all situations (e.g.,
backup or temporary files generated by the user’s editor of choice)
generally go into a file specified by core.excludesfile in the
user’s ~/.gitconfig.


That's a good point. Will do that instead.
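
For anyone following along, a sketch of that per-user setup (the file name is
just an example):

git config --global core.excludesfile ~/.gitignore_global
echo tags >> ~/.gitignore_global
echo cscope.out >> ~/.gitignore_global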


 I would think tags files and cscope.out probably come into that category,
although I don't have terribly strong feelings about it.

 cheers

 andrew



Re: [HACKERS] Event Triggers: adding information

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 4:57 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Thanks for commiting the fixes. About the regression tests, I think
 you're right, but then I can't see how to include such a test. Maybe you
 could add the other one, though?

Can you point me specifically at what you have in mind so I can make
sure I'm considering the right thing?

 +1 for this version, thanks.

OK, committed that also.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Robert Haas
On Tue, Jan 22, 2013 at 12:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 After reflecting on this a bit, I think that the problem may come from
 drawing an unjustified analogy between views and prepared statements.
 The code is certainly trying to treat them as the same thing, but
 perhaps we shouldn't do that.

 Consider that once you do
 create view v as select * from s.t;
 the view will continue to refer to the same table object no matter what.
 You can rename t, you can rename s, you can move t to a different schema
 and then drop s, but the view still knows what t is, because the
 reference is by OID.  The one thing you can't do is drop t, because the
 stored dependency from v to t will prevent that (at least unless you let
 it cascade to drop v as well).  Views therefore do not have, or need,
 any explicit dependencies on either specific schemas or their
 creation-time search_path --- they only have dependencies on individual
 objects.

 The current plancache code is trying, in a somewhat half-baked fashion,
 to preserve those semantics for prepared queries --- that's partly
 because it's reusing the dependency mechanism that was designed for
 views, and partly because it didn't occur to us to question that model.
 But it now strikes me that the model doesn't apply very well, so maybe
 we need a new one.  The key point that seems to force a different
 treatment is that there are no stored (globally-visible) dependencies
 for prepared queries, so there's no way to guarantee that referenced
 objects don't get dropped.

 We could possibly set things up so that re-executing a prepared query
 that references now-dropped objects would throw an error; but what
 people seem to prefer is that it should be re-analyzed to see if the
 original source text would now refer to a different object.  And we're
 doing that, but we haven't followed through on the logical implications.
 The implication, ISTM, is that we should no longer consider that
 referring to the same objects throughout the query's lifespan is a goal
 at all.  Rather, what we should be trying to do is make the query
 preparation transparent, in the sense that you should get the same
 results as if you resubmitted the original query text each time.

 In particular, it now seems to me that this makes a good argument
 for changing what plancache is doing with search_path.  Instead of
 re-establishing the original search_path in a rather vain hope that the
 same objects will be re-selected by parse analysis, we should consider
 that the prepared query has a dependency on the active search path, and
 thus force a replan if the effective search path changes.

I think that's exactly right, and as Stephen says, likely to be a very
significant improvement over the status quo even if it's not perfect.

(Basically, I agree with everything Stephen said in his followup
emails down to the letter.  +1 from me for everything he said.)

 I'm not sure that we can make the plan caching 100% transparent, though.
 The existing mechanisms will force replan if any object used in the plan
 is modified (and fortunately, "modified" includes "renamed", even though
 a rename isn't interesting according to the view-centric worldview).
 And we can force replan if the search path changes (ie, the effective
 list of schema OIDs changes).  But there are cases where neither of
 those things happens and yet the user might expect a new object to be
 selected.  Consider for example that the search path is a, b, c,
 and we have a prepared query select * from t, and that currently
 refers to b.t.  If now someone creates a.t, or renames a.x to a.t,
 then a replan would cause the query to select from a.t ... but there
 was no invalidation event that will impinge on the stored plan, and the
 search_path setting didn't change either.  I don't think we want to
 accept the overhead of saying any DDL anywhere invalidates all cached
 plans, so I don't see any good way to make this case transparent.
 How much do we care?

I'd just like to mention that Noah and I left this same case unhandled
when we did all of those concurrent DDL improvements for 9.2.  A big
part of that work aimed at fixing the problem of a DML or DDL
statement latching onto an object that had been concurrently dropped,
as in the case where someone does BEGIN; DROP old; RENAME new TO old;
COMMIT; for any sort of SQL object (table, function, etc.).  That code
is all now much more watertight than it was before, but the case of
creating an object earlier in the search path than an existing object
of the same name is still not guaranteed to work correctly - there's
no relevant invalidation message, so with the right timing of events,
you can still manage to latch onto the object that appears later in
the search path instead of the new one added to a schema that appears
earlier.  So there is precedent for punting that
particularly-difficult aspect of problems in this category.
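
A hypothetical minimal script for the shadowing case Tom describes (schema
and table names invented for illustration):

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE b.t (x int);
SET search_path = a, b;
PREPARE q AS SELECT * FROM t;   -- parse analysis resolves t to b.t
CREATE TABLE a.t (x int);       -- no invalidation reaches the cached plan
EXECUTE q;                      -- may still scan b.t rather than a.t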

To make the cached-plan stuff truly safe against 

Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 8:10 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Really, live DDL is not that frequent, and when you do that, you want
 transparent replanning. I can't see any use case where it's important to
 be able to run DDL in a live application yet continue to operate with
 the old (and in some cases wrong) plans.

I agree with that, but I think Tom's concern is more with the cost of
too-frequent re-planning.  The most obvious case in which DDL might be
frequent enough to cause an issue here is if there is heavy use of
temporary objects - sessions might be rapidly creating and dropping
objects in their own schemas.  It would be unfortunate if that forced
continual replanning of queries in other sessions.  I think there
could be other cases where this is an issue as well, but the
temp-object case is probably the one that's most likely to matter in
practice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] proposal: fix corner use case of variadic functions usage

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 2:28 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Pavel Stehule pavel.steh...@gmail.com writes:
 what should be the result of concat(variadic NULL::int[])?
 I enabled this use case, but what should the result be?

 I think there are two somewhat defensible theories:

 (1) punt, and return NULL overall.  So in this case the variadic
 function would act as if it were STRICT.  That seems a bit weird though
 if the function is not strict otherwise.

 (2) Treat the NULL as if it were a zero-length array, giving rise to
 zero ordinary parameters.  This could be problematic if the function
 can't cope very well with zero parameters ... but on the other hand,
 if it can't do so, then what will it do with VARIADIC '{}'::int[] ?

 I lean a little bit towards (2) but it's definitely a judgment call.
 Anybody have any other arguments one way or the other?

I'd like to vote for "it probably doesn't matter very much, so let's
just pick whatever makes the code simplest."  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v4

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 7:14 AM, Andres Freund and...@2ndquadrant.com wrote:
 With the (attached for convenience) patch applied you can do
 # ALTER TABLE replication_metadata SET (treat_as_catalog_table = true);

 to enable this.
 What I wonder about is:
 * does anybody have a better name for the reloption?

IMHO, it should somehow involve the words "logical" and "replication".

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v4

2013-01-23 Thread Andres Freund
On 2013-01-23 10:18:50 -0500, Robert Haas wrote:
 On Wed, Jan 23, 2013 at 7:14 AM, Andres Freund and...@2ndquadrant.com wrote:
  With the (attached for convenience) patch applied you can do
  # ALTER TABLE replication_metadata SET (treat_as_catalog_table = true);
 
  to enable this.
  What I wonder about is:
  * does anybody have a better name for the reloption?
 
 IMHO, it should somehow involve the words logical and replication.

Not a bad point. In the back of my mind I was thinking of reusing it to
do error checking when accessing the heap via index methods as a way of
making sure index support writers are aware of the complexities of doing
so (c.f. ALTER TYPE .. ADD VALUE only being usable outside
transactions).
But that's probably over the top.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] WIP: index support for regexp search

2013-01-23 Thread Tom Lane
Heikki Linnakangas hlinnakan...@vmware.com writes:
 On 23.01.2013 09:36, Alexander Korotkov wrote:
  On Wed, Jan 23, 2013 at 6:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 The biggest problem is that I really don't care for the idea of
 contrib/pg_trgm being this cozy with the innards of regex_t.

 The only option I see now is to provide a method like export_cnfa which
 would export corresponding CNFA in fixed format.

 Yeah, I think that makes sense. The transformation code in trgm_regexp.c 
 would probably be more readable too, if it didn't have to deal with the 
 regex guts representation of the CNFA. Also, once you have an intermediate 
 representation of the original CNFA, you could do some of the 
 transformation work on that representation, before building the 
 transformed graph containing trigrams. You could eliminate any 
 non-alphanumeric characters, joining states connected by arcs with 
 non-alphanumeric characters, for example.

It's not just the CNFA though; the other big API problem is with mapping
colors back to characters.  Right now, that not only knows way too much
about a part of the regex internals we have ambitions to change soon,
but it also requires pg_wchar2mb_with_len() and lowerstr(), neither of
which should be known to the regex library IMO.  So I'm not sure how we
divvy that up sanely.  To be clear: I'm not going to insist that we have
to have a clean API factorization before we commit this at all.  But it
worries me if we don't even know how we could get to that, because we
are going to need it eventually.

Anyway, I had another thought in the shower this morning, which is that
solving this problem in terms of color trigrams is really the Wrong
Thing.  ISTM it'd be better to think of the CNFA traversal logic as
looking for required sequences of colors of unspecified length, which
we'd then chop into trigrams after the fact.  This might produce
slightly better (more complete) trigram sets, but the real reason I'm
suggesting it is that I think the CNFA traversal code might be subject
to Polya's Inventor's Paradox: the more general problem may be easier
to solve.  It seems like casting the goal of that code as being to
find variable-length sequences, rather than exactly trigrams, might
lead to simpler data structures and more readable algorithms.  The
still-to-be-designed regex API for this also seems like it would be
better off if decoupled from the notion of trigrams.

It's quite possible that this idea is all wet and no meaningful
improvement can be gotten this way.  But I offer it for your
consideration.

regards, tom lane




Re: [HACKERS] unlogged tables vs. GIST

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 4:04 AM, Jeevan Chalke
jeevan.cha...@enterprisedb.com wrote:
 Yes.

 I guess the concern with my earlier patch was that it directly incremented
 the ControlFile->unloggedLSN counter, and so would take
 ControlFileLock several times.

 In this version of the patch I did what Robert suggested. At postmaster
 start, the unloggedLSN value is copied into XLogCtl, a shared memory struct,
 and all access to unloggedLSN goes through this shared variable under a
 SpinLock. Since we want to keep this counter persistent across a clean
 shutdown, it is stored back into ControlFile at shutdown.

 With this approach, we update ControlFile only when we shut down the server;
 the rest of the time we use a shared memory counter. That means we are not
 touching pg_control every other millisecond or so. Also, since we do not care
 about crashes, WAL-logging this counter like the OID counter is not really
 required.

On a quick read-through this looks reasonable to me, but others may
have different opinions, and I haven't reviewed in detail.

One obvious oversight is that bufmgr.c needs a detailed comment
explaining the reason behind the change you are making there.
Otherwise, someone might think to undo it in the future, and that
would be bad.  Perhaps add something like:

However, this rule does not apply for unlogged relations, which will
be lost after a crash anyway.  Most unlogged relation pages do not
bear LSNs since we never emit WAL records for them, and therefore
flushing up through the buffer LSN would be useless, but harmless.
However, GiST indexes use LSNs internally to track page-splits, and
therefore temporary and unlogged GiST pages bear fake LSNs generated
by GetXLogRecPtrForUnloggedRel.  It is unlikely but possible that the
fake-LSN counter used for unlogged relations could advance past the
WAL insertion point; and if it did happen, attempting to flush WAL
through that location would fail, with disastrous system-wide
consequences.  To make sure that can't happen, skip the flush if the
buffer isn't permanent.

A related problem is that you're examining the buffer header flags
without taking the buffer-header spinlock.  I'm not sure this can
actually matter, because we'll always hold the content lock on the
buffer at this point, which presumably precludes any operation that
might flip that bit, but it looks like the callers all have the buffer
header flags conveniently available, so maybe they should pass that
information down to FlushBuffer.  That would also avoid accessing that
word in memory both before and after releasing the spinlock (all
callers have just released it) which can lead to memory-access stalls.
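
To make that concrete, the resulting shape could be roughly as follows (a
simplified sketch, not the actual patch; the extra parameter is the suggestion
above):

    /* bufmgr.c sketch: callers pass down the flags they already read */
    static void
    FlushBuffer(volatile BufferDesc *buf, SMgrRelation reln, int buf_flags)
    {
        XLogRecPtr  recptr = BufferGetLSN(buf);

        /*
         * Unlogged and temp GiST pages can carry fake LSNs that may be ahead
         * of the real WAL insert point, so only flush WAL for permanent
         * buffers.
         */
        if (buf_flags & BM_PERMANENT)
            XLogFlush(recptr);

        /* ... smgrwrite() the page as before ... */
    }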

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] PQping Docs

2013-01-23 Thread Robert Haas
On Mon, Jan 21, 2013 at 1:45 PM, Phil Sorber p...@omniti.com wrote:
 Attached is a patch that adds a note about the FATAL messages that
 appear in the logs if you don't pass a valid user or dbname to PQping
 or PQpingParams.

 This was requested in the pg_isready thread.

Can I counter-propose the attached, which puts the relevant text
closer to the scene of the crime?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


libpq_ping_doc_rmh.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-23 Thread Fujii Masao
On Wed, Jan 23, 2013 at 12:24 PM, Amit Kapila amit.kap...@huawei.com wrote:
 On Tuesday, January 22, 2013 8:25 PM Fujii Masao wrote:
 On Tue, Jan 22, 2013 at 11:07 PM, Amit Kapila amit.kap...@huawei.com
 wrote:
  On Tuesday, January 22, 2013 7:10 PM Zoltán Böszörményi wrote:
   On 2013-01-22 13:32, Amit kapila wrote:
   On Saturday, January 19, 2013 2:37 AM Boszormenyi Zoltan wrote:
    On 2013-01-18 21:48, Boszormenyi Zoltan wrote:
    On 2013-01-18 21:32, Tom Lane wrote:
    Boszormenyi Zoltan z...@cybertec.at writes:
    On 2013-01-18 11:05, Amit kapila wrote:
   On using mktemp, linux compilation gives below warning
   warning: the use of `mktemp' is dangerous, better use
 `mkstemp'
  
   Everywhere else that we need to do something like this, we just
  use our
   own PID to disambiguate, ie
sprintf(tempfilename, "/path/to/file.%d", (int) getpid());
   There is no need to deviate from that pattern or introduce
  portability
   issues, since we can reasonably assume that no non-Postgres
  programs are
   creating files in this directory.
   Thanks for the enlightenment, I will post a new version soon.
   Here it is.
   The patch sent by you works fine.
   It needs small modification as below:
  
   The auto.conf.d directory should follow the postgresql.conf file
  directory not the data_directory.
   The same is validated while parsing the postgresql.conf
 configuration
  file.
  
   Patch is changed to use the postgresql.conf file directory as
 below.
  
   StrNCpy(ConfigFileDir, ConfigFileName, sizeof(ConfigFileDir));
   get_parent_directory(ConfigFileDir);
   /* Frame auto conf name and auto conf sample temp file name */
   snprintf(AutoConfFileName, sizeof(AutoConfFileName), "%s/%s/%s",
ConfigFileDir,
PG_AUTOCONF_DIR,
PG_AUTOCONF_FILENAME);
 
  Maybe it's just me but I prefer to have identical
  settings across all replicated servers. But I agree
  that there can be use cases with different setups.
 
  All in all, this change makes it necessary to run the
  same SET PERSISTENT statements on all slave servers,
  too, to make them identical again if the configuration
  is separated from the data directory (like on Debian
  or Ubuntu using the default packages). This needs to be
  documented explicitly.
 
  The SET PERSISTENT command needs to be run on all slave servers regardless
  of the path we take w.r.t. the data directory, unless the user takes a
  backup after the command.

 Is it safe to write something in the directory other than data
 directory
 via SQL?

 postgres user usually has the write permission for the configuration
 directory like /etc/postgresql?

 As postgresql.conf will also be in the same path, and the user can change
 that, what's the problem with postgresql.auto.conf?

If the configuration directory like /etc is owned by root and only root has
a write permission for it, the user running PostgreSQL server would not
be able to update postgresql.auto.conf there. OTOH, even in that case,
a user can switch to root and update the configuration file there. I'm not
sure whether the configuration directory is usually writable by the user
running PostgreSQL server in Debian or Ubuntu, though.

 Do you see any security risk?

I have no idea. I just wondered that because some functions like pg_file_write()
in adminpack are restricted from writing anywhere other than inside $PGDATA.

   This closes all comments raised till now for this patch.
   Kindly let me know if you feel something is missing?
 
  I can't think of anything else.

 For removing
 +   a configuration entry from the <filename>postgresql.auto.conf</filename> file,
 +   use <command>RESET PERSISTENT</command>. The values will be effective
 +   after reload of server configuration (SIGHUP) or server startup.

 The description of RESET PERSISTENT is in the document, but it
 seems not to be implemented.

 This command support has been removed from patch due to parser issues, so it
 should be removed from documentation as well. I shall fix this along with
 other issues raised by you.

Okay.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] PQping Docs

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 10:37 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Jan 21, 2013 at 1:45 PM, Phil Sorber p...@omniti.com wrote:
 Attached is a patch that adds a note about the FATAL messages that
 appear in the logs if you don't pass a valid user or dbname to PQping
 or PQpingParams.

 This was requested in the pg_isready thread.

 Can I counter-propose the attached, which puts the relevant text
 closer to the scene of the crime?


This seems reasonable to me.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] PQping Docs

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 10:59 AM, Phil Sorber p...@omniti.com wrote:
 On Wed, Jan 23, 2013 at 10:37 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Jan 21, 2013 at 1:45 PM, Phil Sorber p...@omniti.com wrote:
 Attached is a patch that adds a note about the FATAL messages that
 appear in the logs if you don't pass a valid user or dbname to PQping
 or PQpingParams.

 This was requested in the pg_isready thread.

 Can I counter-propose the attached, which puts the relevant text
 closer to the scene of the crime?

 This seems reasonable to me.

OK, done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Robert Haas
On Mon, Jan 21, 2013 at 12:23 PM, Phil Sorber p...@omniti.com wrote:
 Changing up the subject line because this is no longer a work in
 progress nor is it pg_ping anymore.

OK, I committed this.  However, I have one suggestion.  Maybe it would
be a good idea to add a -c or -t option that sets the connect_timeout
parameter.   Because:

[rhaas pgsql]$ pg_isready -h www.google.com
grows old, dies

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Pavel Stehule
2013/1/23 Robert Haas robertmh...@gmail.com:
 On Wed, Jan 23, 2013 at 8:10 AM, Dimitri Fontaine
 dimi...@2ndquadrant.fr wrote:
 Really, live DDL is not that frequent, and when you do that, you want
 transparent replanning. I can't see any use case where it's important to
 be able to run DDL in a live application yet continue to operate with
 the old (and in cases wrong) plans.

 I agree with that, but I think Tom's concern is more with the cost of
 too-frequent re-planning.  The most obvious case in which DDL might be
 frequent enough to cause an issue here is if there is heavy use of
 temporary objects - sessions might be rapidly creating and dropping
 objects in their own schemas.  It would be unfortunate if that forced
 continual replanning of queries in other sessions.  I think there
 could be other cases where this is an issue as well, but the
 temp-object case is probably the one that's most likely to matter in
 practice.

Our model is probably not usual, but probably not a rare exception either.

Almost all queries that we send to the server are CREATE TABLE cachexxx AS
SELECT ...

Tables are dropped when there is a possibility that the data they contain
are invalid. Any replanning based on DDL could be very problematic in our
case.

The number of tables in one database can be more than 100K.

Regards

Pavel



 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-23 Thread Fujii Masao
On Wed, Jan 23, 2013 at 6:18 PM, Amit Kapila amit.kap...@huawei.com wrote:
 On Tuesday, January 22, 2013 10:14 PM Fujii Masao wrote:
 When I removed postgresql.auto.conf and restarted the server,
 I got the following warning message. This is not correct because
 I didn't remove auto.conf.d from postgresql.conf. What I removed
 is only postgresql.auto.conf.

 WARNING:  Default auto.conf.d is not present in postgresql.conf.
 Configuration parameters changed with SET PERSISTENT command will not
 be effective.

 How about changing it to below message:

 WARNING:  File 'postgresql.auto.conf' is not accessible, either file
 'postgresql.auto.conf' or folder '%s' doesn't exist or default auto.conf.d
 is not present in postgresql.conf.
 Configuration parameters changed with SET PERSISTENT command will not be
 effective.

Or we should suppress such a warning message in the case where
postgresql.auto.conf doesn't exist? SET PERSISTENT creates that
file automatically if it doesn't exist. So we can expect that configuration
parameters changed with SET PERSISTENT WILL be effective.

This warning message implies that the line include_dir 'auto.conf.d'
must not be removed from postgresql.conf? If so, we should warn that
in both document and postgresql.conf.sample? Or we should hard-code
so that something like auto.conf.d is always included even when that
include_dir directive doesn't exist?

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I agree with that, but I think Tom's concern is more with the cost of
 too-frequent re-planning.  The most obvious case in which DDL might be
 frequent enough to cause an issue here is if there is heavy use of
 temporary objects - sessions might be rapidly creating and dropping
 objects in their own schemas.  It would be unfortunate if that forced
 continual replanning of queries in other sessions.  I think there
 could be other cases where this is an issue as well, but the
 temp-object case is probably the one that's most likely to matter in
 practice.

Yeah, that is probably the major hazard IMO too.  The designs sketched
in this thread would be sufficient to ensure that DDL in one session's
temp schema wouldn't have to invalidate plans in other sessions --- but
is that good enough?

Your point that the locking code doesn't quite cope with newly-masked
objects makes me feel that we could get away with not solving the case
for plan caching either.  Or at least that we could put off the problem
till another day.  If we are willing to just change plancache's handling
of search_path, that's a small patch that I think is easily doable for
9.3.  If we insist on installing schema-level invalidation logic, it's
not happening before 9.4.
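
For illustration, the "small patch" could be sketched along these lines
(deliberately simplified: a real version would more likely compare the resolved
schema OID list rather than the raw GUC string, and the new field name is
hypothetical):

    /* Suppose CachedPlanSource grows a search_path field recording the
     * setting that was active when the query was parsed and planned. */
    static bool
    cached_plan_search_path_matches(const char *saved_path)
    {
        /* namespace_search_path is the GUC's backing variable */
        return strcmp(saved_path, namespace_search_path) == 0;
    }

    /* ...and plan revalidation would then do something like:
     *
     *     if (!cached_plan_search_path_matches(plansource->search_path))
     *         rebuild the query tree and plan from the raw parse tree;
     */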

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Robert Haas
On Tue, Jan 22, 2013 at 1:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Yeah, and a lot more fairly-new developers who don't understand all the
 connections in the existing system.  Let me just push back a bit here:
 based on the amount of time I've had to spend fixing bugs over the past
 five months, 9.2 was our worst release ever.  I don't like that trend,
 and I don't want to see it continued because we get laxer about
 accepting patches.  IMO we are probably too lax already.

Really?  Hmm, that's not good.  I seem to recall 8.4.x being pretty
bad, and some of the recent bugs we fixed were actually 9.1.x problems
that slipped through the cracks.

 For a very long time we've tried to encourage people to submit rough
 ideas to pgsql-hackers for discussion *before* they start coding.
 The point of that is to weed out, or fix if possible, (some of) the bad
 ideas before a lot of effort has gotten expended on them.  It seems
 though that less and less of that is actually happening, so I wonder
 whether the commitfest process is encouraging inefficiency by making
 people think they should start a discussion with a mostly-complete
 patch.  Or maybe CF review is just crowding out any serious discussion
 of rough ideas.  There was some discussion at the last dev meeting of
 creating a design review process within commitfests, but nothing got
 done about it ...

I think there's been something of a professionalization of PostgreSQL
development over the last few years.   More and more people are able
to get paid to work on PostgreSQL as part or in a few cases all of
their job.  This trend includes me, but also a lot of other people.
There are certainly good things about this, but I think it increases
the pressure to get patches committed.  If you are developing a patch
on your own time and it doesn't go in, it may be annoying but that's
about it.  If you're getting paid to get that patch committed and it
doesn't go in, either your boss cares or your customer cares, or both,
so now you have a bigger problem.  Of course, this isn't always true:
I don't know everyone's employment arrangements, but there are some
people who are paid to work on PostgreSQL with no real specific
agenda, just a thought of generally contributing to the community.
However, I believe this to be less common than an arrangement
involving specific deliverables.

Whatever the arrangement, jobs where you get to work on PostgreSQL as
part of your employment mean that you can get more stuff done.
Whatever you can get done during work time plus any nonwork time you
care to contribute will be more than what you could get done in the
latter time alone.  And I think we're seeing that, too.  That then
puts more pressure on the people who need to do reviews and commits,
because there's just more stuff to look at, and you know, a lot of it
is really good stuff.  A lot of it has big problems, too, but we could
be doing a lot worse.  Notwithstanding, it's a lot of work, and the
number of people who know the system well enough to review the really
difficult patches is growing, but not as fast as the number of people
who have time to write them.

For all of that, I'm not sure that people failing to seek consensus
before coding is really so much of a problem as you seem to think.  A
lot of the major efforts underway for this release have been discussed
extensively on multiple threads.  The fact that some of those ideas
may be less than half-baked at this point may in some cases be the
submitter's fault, but there are also cases where it's just that they
haven't got enough looking-at from the people who know enough to
evaluate them in detail, and perhaps some cases that are really
nobody's fault: nothing in life is going to be perfect all the time no
matter how hard everyone tries.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prepared statements fail after schema changes with surprising error

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 11:40 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Yeah, that is probably the major hazard IMO too.  The designs sketched
 in this thread would be sufficient to ensure that DDL in one session's
 temp schema wouldn't have to invalidate plans in other sessions --- but
 is that good enough?

 Your point that the locking code doesn't quite cope with newly-masked
 objects makes me feel that we could get away with not solving the case
 for plan caching either.  Or at least that we could put off the problem
 till another day.  If we are willing to just change plancache's handling
 of search_path, that's a small patch that I think is easily doable for
 9.3.  If we insist on installing schema-level invalidation logic, it's
 not happening before 9.4.

I agree with that analysis.  FWIW, I am pretty confident that the
narrower fix will make quite a few people significantly happier than
they are today, so if you're willing to take that on, +1 from me.  I
believe the search-path-interpolation problem is a sufficiently
uncommon case that, in practice, it rarely comes up.  That's not to
say that we shouldn't ever fix it, but I think the simpler fix will be
a 90% solution and people will be happy to have made that much
progress this cycle.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-23 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 next related example

 CREATE OR REPLACE FUNCTION public.myleast(VARIADIC integer[])
  RETURNS integer
  LANGUAGE sql
 AS $function$
 select min(v) from unnest($1) g(v)
 $function$

The reason you get a null from that is that (1) unnest() produces zero
rows out for either a null or empty-array input, and (2) min() over
zero rows produces NULL.

In a lot of cases, it's not very sane for aggregates over no rows to
produce NULL; the best-known example is that SUM() produces NULL, when
anyone who'd not suffered brain-damage from sitting on the SQL committee
would have made it return zero.  So I'm not very comfortable with
generalizing from this specific case to decide that NULL is the
universally right result.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 10:12 AM, Alvaro Herrera wrote:

Improve concurrency of foreign key locking


This error message change looks rather odd, and has my head spinning a bit:

-errmsg("SELECT FOR UPDATE/SHARE cannot be applied to the nullable side of an outer join")));
+errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE cannot be applied to the nullable side of an outer join")))


Can't we do better than that?

(It's also broken my FDW check, but I'll fix that when this is sorted out)

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BUG #6510: A simple prompt is displayed using wrong charset

2013-01-23 Thread Alexander Law

Hello,
Please let me know if I can do something to get the bug fix 
(https://commitfest.postgresql.org/action/patch_view?id=902) committed.
I would like to fix other bugs related to postgres localization, but I 
am not sure yet how to do it.


Thanks in advance,
Alexander

18.10.2012 19:46, Alvaro Herrera wrote:

Noah Misch wrote:


Following an off-list ack from Alexander, here is that version.  No functional
differences from Alexander's latest version, and I have verified that it still
fixes the original test case.  I'm marking this Ready for Committer.

This seems good to me, but I'm not comfortable committing Windows stuff.
Andrew, Magnus, are you able to handle this?







--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Andres Freund
On 2013-01-23 11:44:29 -0500, Robert Haas wrote:
 On Tue, Jan 22, 2013 at 1:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Yeah, and a lot more fairly-new developers who don't understand all the
  connections in the existing system.  Let me just push back a bit here:
  based on the amount of time I've had to spend fixing bugs over the past
  five months, 9.2 was our worst release ever.  I don't like that trend,
  and I don't want to see it continued because we get laxer about
  accepting patches.  IMO we are probably too lax already.
 
 Really?  Hmm, that's not good.  I seem to recall 8.4.x being pretty
 bad, and some of the recent bugs we fixed were actually 9.1.x problems
 that slipped through the cracks.

FWIW I concur with Tom's assessment.

 On Tue, Jan 22, 2013 at 1:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  For a very long time we've tried to encourage people to submit rough
  ideas to pgsql-hackers for discussion *before* they start coding.
  The point of that is to weed out, or fix if possible, (some of) the bad
  ideas before a lot of effort has gotten expended on them.  It seems
  though that less and less of that is actually happening, so I wonder
  whether the commitfest process is encouraging inefficiency by making
  people think they should start a discussion with a mostly-complete
  patch.  Or maybe CF review is just crowding out any serious discussion
  of rough ideas.  There was some discussion at the last dev meeting of
  creating a design review process within commitfests, but nothing got
  done about it ...

Are you thinking of specific patches? From what I remember, all the
bigger patches around the 9.3 cycle were quite heavily discussed during
multiple stages of their development.

I agree that it's not necessarily the case for smaller patches, but at
least for me in those cases it's often hard to judge how complex
something is before doing an initial prototype.

One aspect of this might be that it's sometimes easier to convince
-hackers of some idea if you can prove it's doable.
Another is that the amount of bikeshedding seems to be far lower if a patch
already shapes a feature in some way.

 I think there's been something of a professionalization of PostgreSQL
 development over the last few years.   More and more people are able
 to get paid to work on PostgreSQL as part or in a few cases all of
 their job.  This trend includes me, but also a lot of other people.

Yes.

 There are certainly good things about this, but I think it increases
 the pressure to get patches committed.  If you are developing a patch
 on your own time and it doesn't go in, it may be annoying but that's
 about it.  If you're getting paid to get that patch committed and it
 doesn't go in, either your boss cares or your customer cares, or both,
 so now you have a bigger problem.

And it also might shape the likelihood of getting paid for future work,
be that a specific patch or time for hacking/maintenance.

 For all of that, I'm not sure that people failing to seek consensus
 before coding is really so much of a problem as you seem to think.  A
 lot of the major efforts underway for this release have been discussed
 extensively on multiple threads.  The fact that some of those ideas
 may be less than half-baked at this point may in some cases be the
 submitter's fault, but there also cases where it's just that they
 haven't got enough looking-at from the people who know enough to
 evaluate them in detail, and perhaps some cases that are really
 nobody's fault: nothing in life is going to be perfect all the time no
 matter how hard everyone tries.

I agree. Take logical replication/decoding for example. While we
developed a prototype first, we/I tried to take in as much feedback from
the community as we could. Just about no code, and only a few concepts,
from the initial prototype remain, and that's absolutely good.
I don't think a more "talk about it first" approach would have helped
here. Do others disagree?

But as soon as the rough, rough design was laid out, the amount of
specific feedback shrank. Part of that is that some people involved in
the discussions had changes in their work situation that influenced the
amount of time they could spend on it, but I think one other problem is
that at some point it gets hard to give feedback on a complex, evolving
patch without it eating up a big amount of time.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Andres Freund
On 2013-01-23 11:58:28 -0500, Andrew Dunstan wrote:
 
 On 01/23/2013 10:12 AM, Alvaro Herrera wrote:
 Improve concurrency of foreign key locking
 
 This error message change looks rather odd, and has my head spinning a bit:
 
 -errmsg("SELECT FOR UPDATE/SHARE cannot be applied to the nullable side of an outer join")));
 +errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE cannot be applied to the nullable side of an outer join")))
 
 Can't we do better than that?

I don't really see how? I don't think listing only the current lock level
really is an improvement, and something like "SELECT ... FOR $locktype
cannot ..." doesn't seem uncommon enough in pg error messages to be strange.
Now I agree that listing all those lock levels isn't that nice, but I
don't really have a better idea.

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Jan 21, 2013 at 12:23 PM, Phil Sorber p...@omniti.com wrote:
 Changing up the subject line because this is no longer a work in
 progress nor is it pg_ping anymore.

 OK, I committed this.  However, I have one suggestion.  Maybe it would
 be a good idea to add a -c or -t option that sets the connect_timeout
 parameter.   Because:

 [rhaas pgsql]$ pg_isready -h www.google.com
 grows old, dies

Oh, hrmm. Yes, I will address that with a follow up patch. I guess in
my testing I was using a host that responded properly with port
unreachable or TCP RST or something.

Do you think we should have a default timeout, or only have one if
specified at the command line?


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Bruce Momjian
On Mon, Jan 21, 2013 at 02:04:14PM -0500, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  IMHO that's the single most important task of a review.
 
  Really?  I'd say the most important task for a review is does the patch
  do what it says it does?.  That is, if the patch is supposed to
  implement feature X, does it actually?  If it's a performance patch,
  does performance actually improve?
 
  If the patch doesn't implement what it's supposed to, who cares what the
  code looks like?
 
 But even before that, you have to ask whether what it's supposed to do
 is something we want.

Yep.  Our TODO list has a pretty short summary of this at the top:

Desirability -> Design -> Implement -> Test -> Review -> Commit

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Tom Lane
Phil Sorber p...@omniti.com writes:
 On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com wrote:
 [rhaas pgsql]$ pg_isready -h www.google.com
 grows old, dies

 Do you think we should have a default timeout, or only have one if
 specified at the command line?

+1 for default timeout --- if this isn't like ping where you are
expecting to run indefinitely, I can't see that it's a good idea for it
to sit very long by default, in any circumstance.
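
For illustration, a default could be wired in along these lines (a sketch only:
the helper name and the 3-second default are made up, and the option plumbing
in pg_isready is assumed):

    #include "libpq-fe.h"

    static PGPing
    ping_with_default_timeout(const char *host, const char *port,
                              const char *user_timeout)   /* NULL = use default */
    {
        const char *keywords[] = {"host", "port", "connect_timeout", NULL};
        const char *values[] = {host, port,
                                user_timeout ? user_timeout : "3", NULL};

        return PQpingParams(keywords, values, 0);
    }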

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] BUG #6510: A simple prompt is displayed using wrong charset

2013-01-23 Thread Tom Lane
Alexander Law exclus...@gmail.com writes:
 Please let me know if I can do something to get the bug fix 
 (https://commitfest.postgresql.org/action/patch_view?id=902) committed.

It's waiting on some Windows-savvy committer to pick it up, IMO.
(FWIW, I have no objection to the patch as given, but I am unqualified
to evaluate how sane it is on Windows.)

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Add Makefile dep in bin/scripts for libpgport

2013-01-23 Thread Phil Sorber
I get the following error when I try to compile just a specific binary
in src/bin/scripts:

gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -fexcess-precision=standard reindexdb.o common.o dumputils.o
kwlookup.o keywords.o -L../../../src/port -lpgport
-L../../../src/interfaces/libpq -lpq -L../../../src/port
-Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags
-lpgport -lz -lreadline -lcrypt -ldl -lm  -o reindexdb
/usr/bin/ld: cannot find -lpgport
/usr/bin/ld: cannot find -lpgport
collect2: error: ld returned 1 exit status
make: *** [reindexdb] Error 1

It appears it is missing the libpgport dependency. Attached is a patch
to correct that. This is not normally a problem because when building
the whole tree libpgport is usually compiled already.


scripts_makefile_pgport_dep.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-01-23 11:58:28 -0500, Andrew Dunstan wrote:
 This error message change looks rather odd, and has my head spinning a bit:
 
 -errmsg("SELECT FOR UPDATE/SHARE cannot be applied to the nullable side of an outer join")));
 +errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE cannot be applied to the nullable side of an outer join")))
 
 Can't we do better than that?

 I don't really see how? I don't think listing only the current lock level
 really is an improvement, and something like "SELECT ... FOR $locktype
 cannot ..." doesn't seem uncommon enough in pg error messages to be strange.
 Now I agree that listing all those lock levels isn't that nice, but I
 don't really have a better idea.

I don't really see what's wrong with the original spelling of the
message.  The fact that you can now insert KEY in there doesn't really
affect anything for the purposes of this error.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Simon Riggs
On 23 January 2013 17:15, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-01-23 11:58:28 -0500, Andrew Dunstan wrote:

 On 01/23/2013 10:12 AM, Alvaro Herrera wrote:
 Improve concurrency of foreign key locking

 This error message change looks rather odd, and has my head spinning a bit:

 -errmsg("SELECT FOR UPDATE/SHARE cannot be applied to the nullable side of an outer join")));
 +errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE cannot be applied to the nullable side of an outer join")))

 Can't we do better than that?

 I don't really see how? I don't think listing only the current locklevel
 really is an improvement and something like SELECT ... FOR $locktype
 cannot .. seem uncommon enough in pg error messages to be strange.
 Now I aggree that listing all those locklevels isn't that nice, but I
 don't really have a better idea.

row level locks cannot be applied to the NULLable side of an outer join
Hint: there are no rows to lock


-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] My first patch! (to \df output)

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 12:31 AM, Jon Erdman
postgre...@thewickedtribe.net wrote:


 Done. Attached.
 - --
 Jon T Erdman (aka StuckMojo)
 PostgreSQL Zealot

 On 01/22/2013 11:17 PM, Phil Sorber wrote:
 On Wed, Jan 23, 2013 at 12:10 AM, Jon Erdman
 postgre...@thewickedtribe.net wrote:

 Updated the patch in commitfest with the doc change, and added a
 comment to explain the whitespace change (it was to clean up the
 sql indentation). I've also attached the new patch here for
 reference.

 Docs looks good. Spaces gone.

 Still need to replace 'definer' and 'invoker' with %s and add
 the corresponding gettext_noop() calls I think.


This looks good to me now. Compiles and works as described.

One thing I would note for the future though, when updating a patch,
add a version to the file name to distinguish it from older versions
of the patch.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Josh Berkus
On 01/23/2013 09:08 AM, Andres Freund wrote:
 On 2013-01-23 11:44:29 -0500, Robert Haas wrote:
 On Tue, Jan 22, 2013 at 1:15 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Yeah, and a lot more fairly-new developers who don't understand all the
 connections in the existing system.  Let me just push back a bit here:
 based on the amount of time I've had to spend fixing bugs over the past
 five months, 9.2 was our worst release ever.  I don't like that trend,
 and I don't want to see it continued because we get laxer about
 accepting patches.  IMO we are probably too lax already.

 Really?  Hmm, that's not good.  I seem to recall 8.4.x being pretty
 bad, and some of the recent bugs we fixed were actually 9.1.x problems
 that slipped through the cracks.
 
 FWIW I concur with Tom's assessment.

The only way to fix increasing bug counts is through more-comprehensive
regular testing.  Currently we have regression/unit tests which cover
maybe 30% of our code.  Performance testing is largely ad-hoc.  We don't
require comprehensive acceptance testing for new patches.  And we have >1m
lines of code.  Of course our bug count is increasing.

I'm gonna see if I can do something about improving our test coverage.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: fix corner use case of variadic fuctions usage

2013-01-23 Thread Pavel Stehule
2013/1/23 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 next related example

 CREATE OR REPLACE FUNCTION public.myleast(VARIADIC integer[])
  RETURNS integer
  LANGUAGE sql
 AS $function$
 select min(v) from unnest($1) g(v)
 $function$

 The reason you get a null from that is that (1) unnest() produces zero
 rows out for either a null or empty-array input, and (2) min() over
 zero rows produces NULL.

 In a lot of cases, it's not very sane for aggregates over no rows to
 produce NULL; the best-known example is that SUM() produces NULL, when
 anyone who'd not suffered brain-damage from sitting on the SQL committee
 would have made it return zero.  So I'm not very comfortable with
 generalizing from this specific case to decide that NULL is the
 universally right result.


I am testing some situations and there is no consistent idea - can we
talk just about the functions concat and concat_ws?

These functions are really specific - we are talking about a corner use
case and a strongly PostgreSQL-proprietary solution. So any solution
should not be too bad.

The difference between the solutions will be at most +/- 4 simple rows
per function.

Possibilities:

1) A. concat(variadic NULL) = empty string, B. concat(variadic '{}')
= empty string -- if we accept @A, then @B is ok
2) A. concat(variadic NULL) = NULL, B. concat(variadic '{}') = NULL
-- question - is @B valid?
3) A. concat(variadic NULL) = NULL, B. concat(variadic '{}') = empty string

There is no other possibility.
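
For what it's worth, in a hand-written variadic C function the difference
between these variants is only a couple of lines.  A sketch (illustrative only;
the real concat() takes VARIADIC "any" and is more involved, and the usual
fmgr/array headers plus PG_FUNCTION_INFO_V1 boilerplate are assumed):

    Datum
    my_variadic_concat(PG_FUNCTION_ARGS)
    {
        ArrayType  *arr;
        int         nitems;

        if (PG_ARGISNULL(0))
            PG_RETURN_NULL();                       /* @2 and @3; for @1, treat
                                                     * it as an empty array */

        arr = PG_GETARG_ARRAYTYPE_P(0);
        nitems = ArrayGetNItems(ARR_NDIM(arr), ARR_DIMS(arr));
        if (nitems == 0)
            PG_RETURN_TEXT_P(cstring_to_text(""));  /* @1 and @3; for @2,
                                                     * return NULL here */

        /* ... concatenate the elements ... */
        PG_RETURN_TEXT_P(cstring_to_text("..."));
    }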

I can live with any variant - probably we can find a precedent for any
variant in our code or in ANSI SQL.

I like @2 as a general concept for PostgreSQL variadic functions, but
when we talk about concat() and concat_ws(), @1 is valid too.

Please, can somebody give an opinion early?

Regards

Pavel



 regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Alvaro Herrera
Andrew Dunstan wrote:
 
 On 01/23/2013 10:12 AM, Alvaro Herrera wrote:
 Improve concurrency of foreign key locking
 
 This error message change looks rather odd, and has my head spinning a bit:
 
 -errmsg("SELECT FOR UPDATE/SHARE cannot be applied to the nullable side of an outer join")));
 +errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE cannot be applied to the nullable side of an outer join")))
 
 Can't we do better than that?

Basically this message says a SELECT with a locking clause attached
cannot be blah blah.  There are many messages that had the original
code saying "SELECT FOR UPDATE cannot be blah blah", which was later
(8.1) updated to say "SELECT FOR UPDATE/SHARE cannot be blah blah".  I
expanded them using the same logic, but maybe there's a better
suggestion?  Note that the SELECT reference page now has a subsection
called "The locking clause", so maybe it's okay to use that term now.
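
For example, one possible rewording using that term (a sketch, not what is in
the tree):

    ereport(ERROR,
            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
             errmsg("a locking clause cannot be applied to the nullable side of an outer join")));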



-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: clean up addRangeTableEntryForFunction

2013-01-23 Thread David Fetter
On Tue, Jan 22, 2013 at 11:02:18PM -0500, Tom Lane wrote:
 David Fetter da...@fetter.org writes:
  I've been working with Andrew Gierth (well, mostly he's been doing
  the work, as usual) to add WITH ORDINALITY as an option for
  set-returning functions.  In the process, he found a minor
  opportunity to clean up the interface for $SUBJECT, reducing the
  call to a Single Point of Truth for lateral-ness, very likely
  improving the efficiency of calls to that function.
 
 As I mentioned in our off-list discussion, I think this is going in
 the wrong direction.  It'd make more sense to me to get rid of the
 RangeFunction parameter, instead passing the two fields that
 addRangeTableEntryForFunction actually uses out of that.

With utmost respect, of the four fields currently in RangeFunction:
type (tag), lateral, funccallnode, alias, and coldeflist, the function
needs three (all but funccallnode, which has already been transformed
into a funcexpr).  The patch for ordinality makes that 4/5 with the
ordinality field added.

 If we do what you have here, we'll be welding together the alias and
 lateral settings for the new RTE; which conceivably some other
 caller would want to specify in a different way.  As a comparison
 point, you might want to look at the various calls to
 addRangeTableEntryForSubquery: some of those pass multiple fields
 out of the same RangeSubselect, and some do not.

As to addRangeTableEntryForSubquery, I'm not seeing the connection to
the case at hand.  Could you please spell it out?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: UNNEST (and other functions) WITH ORDINALITY

2013-01-23 Thread Alvaro Herrera
David Fetter wrote:
 On Tue, Jan 22, 2013 at 10:29:43PM -0800, David Fetter wrote:
  Folks,
  
  Please find attached a patch which implements the SQL standard
  UNNEST() WITH ORDINALITY.
 
 Added to CF4.

Surely you meant CF 2013-Next (i.e. first commit of 9.4 cycle).

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: UNNEST (and other functions) WITH ORDINALITY

2013-01-23 Thread David Fetter
On Wed, Jan 23, 2013 at 03:12:37PM -0300, Alvaro Herrera wrote:
 David Fetter wrote:
  On Tue, Jan 22, 2013 at 10:29:43PM -0800, David Fetter wrote:
   Folks,
   
   Please find attached a patch which implements the SQL standard
   UNNEST() WITH ORDINALITY.
  
  Added to CF4.
 
 Surely you meant CF 2013-Next (i.e. first commit of 9.4 cycle).

I see that that's what I did, but given that this is a pretty small
feature with low impact, I'm wondering whether it should be on CF4.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_basebackup with -R option and start standby have problems with escaped password

2013-01-23 Thread Magnus Hagander
On Wed, Jan 23, 2013 at 10:18 AM, Hari Babu haribabu.ko...@huawei.com wrote:
 Test scenario to reproduce:
 1. Start the server
 2. create the user as follows
./psql postgres -c "create user user1 superuser login
password 'use''1'"

 3. Take the backup with -R option as follows.
 ./pg_basebackup -D ../../data1 -R -U user1 -W

 The following errors are occurring when the new standby on the backup
 database starts.

 FATAL:  could not connect to the primary server: missing "=" after "1'" in
 connection info string

What does the resulting recovery.conf file look like?
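
For reference, the conninfo value itself needs quoting before being written into
primary_conninfo; a minimal sketch of that one layer (the function name is made
up, and the recovery.conf assignment needs its own quoting on top of this):

    #include "pqexpbuffer.h"

    /* Quote a value for use inside a libpq connection string: wrap it in
     * single quotes and backslash-escape any ' or \ inside it. */
    static void
    append_conninfo_value(PQExpBuffer buf, const char *value)
    {
        appendPQExpBufferChar(buf, '\'');
        for (; *value; value++)
        {
            if (*value == '\'' || *value == '\\')
                appendPQExpBufferChar(buf, '\\');
            appendPQExpBufferChar(buf, *value);
        }
        appendPQExpBufferChar(buf, '\'');
    }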

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BUG #6510: A simple prompt is displayed using wrong charset

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 01:08 PM, Alvaro Herrera wrote:

Tom Lane wrote:

Alexander Law exclus...@gmail.com writes:

Please let me know if I can do something to get the bug fix
(https://commitfest.postgresql.org/action/patch_view?id=902) committed.

It's waiting on some Windows-savvy committer to pick it up, IMO.
(FWIW, I have no objection to the patch as given, but I am unqualified
to evaluate how sane it is on Windows.)

I think part of the problem is that we no longer have many Windows-savvy
committers willing to spend time on Windows-specific patches; there are,
of course, people with enough knowledge, but they don't always
necessarily have much interest.  Note that I added Magnus and Andrew to
this thread because they are known to have contributed Windows-specific
improvements, but they have yet to followup on this thread *at all*.

I remember back when Windows support was added, one of the arguments in
its favor was it's going to attract lots of new developers.  Yeah, right.



I'm about to take a look at this one.

Note BTW that Craig Ringer has recently done some excellent work on 
Windows, and there are several other active non-committers (e.g. Noah) 
who comment on Windows too.


IIRC I never expected us to get a huge influx of developers from the 
Windows world. What I did expect was a lot of new users, and I think we 
have in fact got that.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-23 Thread Alvaro Herrera
Fujii Masao wrote:
 On Wed, Jan 23, 2013 at 12:24 PM, Amit Kapila amit.kap...@huawei.com wrote:

  Is it safe to write something in the directory other than data
  directory
  via SQL?
 
  postgres user usually has the write permission for the configuration
  directory like /etc/postgresql?
 
  As postgresql.conf will also be in same path and if user can change that,
  then what's the problem with postgresql.auto.conf
 
 If the configuration directory like /etc is owned by root and only root has
 a write permission for it, the user running PostgreSQL server would not
 be able to update postgresql.auto.conf there. OTOH, even in that case,
 a user can switch to root and update the configuration file there. I'm not
 sure whether the configuration directory is usually writable by the user
 running PostgreSQL server in Debian or Ubuntu, though.

Yes it is -- the /etc/postgresql/version/cluster directory (where
postgresql.conf resides) is owned by user postgres.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Joshua D. Drake


On 01/23/2013 09:51 AM, Josh Berkus wrote:


The only way to fix increasing bug counts is through more-comprehensive
regular testing.  Currently we have regression/unit tests which cover
maybe 30% of our code.  Performance testing is largely ad-hoc.  We don't
require comprehensive acceptance testing for new patches.  And we have >1m
lines of code.  Of course our bug count is increasing.



And... slow down the release cycle or slow down the number of features 
that are accepted. Don't get me wrong, I love everything we have and are 
adding every cycle, but there does seem to be a definite imbalance 
between the number of features added and the time spent allowing those 
features to settle.


Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] foreign key locks

2013-01-23 Thread Alvaro Herrera
I just pushed this patch to the master branch.  There was a
corresponding catversion bump and pg_control version bump.  I have
verified that make check-world passes on my machine, as well as
isolation tests and pg_upgrade.

Tom Lane said at one point this is too complex to maintain.  Several
times during the development I feared he might well be right.  I am sure
he will be discovering many oversights and poor design choices when he
gets around to reviewing the code; hopefully some extra effort will be
all that's needed to make the whole thing work sanely and not eat
anyone's data.  I just hope that nothing so serious comes up that the
patch will need to be reverted completely.

This patch is the result of the work of many people.  I am not allowed
to mention the patch sponsors in the commit message, so I'm doing it
here: first and foremost I need to thank my previous and current
employers Command Prompt and 2ndQuadrant -- they were extremely kind in
allowing me to work on this for days on end (and not all of it was
supported by financial sponsors).  Joel Jacobson started the whole
effort by posting a screencast of a problem his company was having; I
hope they found a workaround in the meantime, because his post was in
mid 2010.  The key idea of this patch's design came from Simon Riggs;
Noah Misch provided additional design advice that allowed the project
to run to completion.  Noah and Andres Freund spent considerable time
reviewing early versions of this patch; they uncovered many embarrassing
bugs in my implementation.  Kevin Grittner, Robert Haas, and others
provided useful comments as well.  Noah Misch, Andres Freund, Marti
Raudsepp and Alexander Shulgin also provided bits of code.

Any bugs that remain are, of course, my own fault only.

Financial support came from

* Command Prompt Inc. of Washington, US
* 2ndQuadrant Ltd. of United Kingdom
* Trustly (then Glue Finance) of Sweden
* Novozymes A/S of Denmark
* MailerMailer LLC of Maryland, US
* PostgreSQL Experts, Inc. of California, US.

My sincerest thanks to everyone.


I seriously hope that no patch of mine ever becomes this monstrous
again.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-01-23 Thread Andres Freund
On 2013-01-15 18:16:59 +0900, Michael Paquier wrote:
 OK. I am back to this patch after a too long time.

Dito ;)

* would be nice (but thats probably a step #2 thing) to do the
  individual steps of concurrent reindex over multiple relations to
  avoid too much overall waiting for other transactions.
   
   I think I did that by now using one transaction per index for each
   operation except the drop phase...
 
  Without yet having read the new version, I think that's not what I
  meant. There currently is a wait for concurrent transactions to end
  after most of the phases for every relation, right? If you have a busy
  database with somewhat long-running transactions, that's going to slow
  everything down with waiting quite a bit. I wondered whether it would make
  sense to do PHASE1 for all indexes in all relations, then wait once,
  then PHASE2...

  That obviously has some space and index maintenance overhead issues, but
  it's probably sensible anyway in many cases.
 
 OK, phase 1 is done with only one transaction for all the indexes. Do you
 mean that we should do that with a single transaction for each index?

Yes.

  Isn't the following block content thats mostly available somewhere else
  already?
  [... doc extract ...]
 
 Yes, this portion of the docs is pretty similar to what can be found in
 CREATE INDEX CONCURRENTLY. Why not create a new common documentation
 section that CREATE INDEX CONCURRENTLY and REINDEX CONCURRENTLY could refer
 to? I think we should first work on the code and then do the docs properly,
 though.

Agreed. I just noticed it when scrolling through the patch.

   - if (concurrent && is_exclusion)
   + if (concurrent && is_exclusion && !is_reindex)
         ereport(ERROR,
                 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                  errmsg_internal("concurrent index creation for exclusion constraints is not supported")));
 
  This is what I referred to above wrt reindex and CONCURRENTLY. We
  shouldn't pass concurrently if we don't deem it to be safe for exclusion
  constraints.
 
 So does that mean that it is not possible to create an exclusion constraint
 in a concurrent context?

Yes, its currently not safe in the general case.

 The code path used by REINDEX CONCURRENTLY creates an index in parallel
 to an existing one, not a completely new index. Shouldn't this work for
 indexes used by exclusion constraints also?

But that fact might save things. I don't immediately see any reason that
adding a
if (!indisvalid)
   return;
to check_exclusion_constraint wouldn't be sufficient if there's another
index with an equivalent definition.
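
In concrete terms, something like this near the top of
check_exclusion_constraint() (sketch only; whether it is actually safe is
exactly the open question):

    /* If this copy of the index is not yet marked valid, skip the check
     * and rely on the equivalent valid index that it is replacing. */
    if (!index->rd_index->indisvalid)
        return;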

   + /*
   +  * Phase 2 of REINDEX CONCURRENTLY
   +  *
   +  * Build concurrent indexes in a separate transaction for each
  index to
   +  * avoid having open transactions for an unnecessary long time.
   We also
   +  * need to wait until no running transactions could have the
  parent table
   +  * of index open. A concurrent build is done for each concurrent
   +  * index that will replace the old indexes.
   +  */
   +
   + /* Get the first element of concurrent index list */
   + lc2 = list_head(concurrentIndexIds);
   +
   + foreach(lc, indexIds)
   + {
   + Relation    indexRel;
   + Oid         indOid = lfirst_oid(lc);
   + Oid         concurrentOid = lfirst_oid(lc2);
   + Oid         relOid;
   + bool        primary;
   + LOCKTAG    *heapLockTag = NULL;
   + ListCell   *cell;
   +
   + /* Move to next concurrent item */
   + lc2 = lnext(lc2);
   +
   + /* Start new transaction for this index concurrent build */
   + StartTransactionCommand();
   +
   + /* Get the parent relation Oid */
   + relOid = IndexGetRelation(indOid, false);
   +
   + /*
   +  * Find the locktag of parent table for this index, we
  need to wait for
   +  * locks on it.
   +  */
   + foreach(cell, lockTags)
   + {
   + LOCKTAG *localTag = (LOCKTAG *) lfirst(cell);
   + if (relOid == localTag->locktag_field2)
   + heapLockTag = localTag;
   + }
   +
   + Assert(heapLockTag && heapLockTag->locktag_field2 !=
  InvalidOid);
   + WaitForVirtualLocks(*heapLockTag, ShareLock);
 
  Why do we have to do the WaitForVirtualLocks here? Shouldn't we do this
  once for all relations after each phase? Otherwise the waiting time will
  really start to hit when you do this on a somewhat busy server.
 
 Each new index is built and set as ready in a separate single transaction,
 so doesn't it make sense to wait for the parent relation each time? It is
 possible to wait for a parent relation 

Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 For all of that, I'm not sure that people failing to seek consensus
 before coding is really so much of a problem as you seem to think.

For my part, I don't think the lack of consensus-finding before
submitting patches is, in itself, a problem.

The problem, imv, is that everyone is expecting that once they've
written a patch and put it on a commitfest that it's going to get
committed- and it seems like committers are feeling under pressure
that, because something's on the CF app, it needs to be committed
in some form.

There's a lot of good stuff out there, sure, and even more good *ideas*,
but it's important to make sure we can provide a stable system with
regular releases.  As discussed, we really need to be ready to truly
triage the remaining patch set, figure out who is going to work on what,
and punt the rest til post-9.3.

Thanks,

Stephen




Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-01-23 Thread Alvaro Herrera
Andres Freund escribió:

 I somewhat dislike the fact that CONCURRENTLY isn't really concurrent
 here (for the listeners: swapping the indexes acquires exclusive locks),
 but I don't see any other naming being better.

REINDEX ALMOST CONCURRENTLY?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-01-23 Thread Gavin Flower

On 24/01/13 07:45, Alvaro Herrera wrote:

Andres Freund escribió:


I somewhat dislike the fact that CONCURRENTLY isn't really concurrent
here (for the listeners: swapping the indexes acquires exclusive locks),
but I don't see any other naming being better.

REINDEX ALMOST CONCURRENTLY?



REINDEX BEST EFFORT CONCURRENTLY?



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_ctl idempotent option

2013-01-23 Thread Bruce Momjian
On Tue, Jan 22, 2013 at 06:03:28PM +0530, Ashutosh Bapat wrote:
 On Tue, Jan 15, 2013 at 9:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Peter Eisentraut pete...@gmx.net writes:
  On 1/14/13 10:22 AM, Tom Lane wrote:
  Also it appears to me that the hunk at lines 812ff is changing the
  default behavior, which is not what the patch is advertised to do.
 
  True, I had forgotten to mention that.
 
  Since it's already the behavior for start, another option would be to
  just make it the default for stop as well and forget about the extra
  options.
 
 By default (without the idempotent option), when asked to start a server it
 tries to start one. If there is already one running, it exits with error code
 1; otherwise it starts the server.
  814 exitcode = start_postmaster();
  815 if (exitcode != 0)
  816 {
  817 write_stderr(_("%s: could not start server: exit code was %d\n"),
  818  progname, exitcode);
  819 exit(1);
  820 }
 
 What we can do under idempotent is to return with code 0 instead of
 exit(1) above, thus not needing the changes in the patch around line 812.
 That will be more in line with the description at
 http://www.postgresql.org/message-id/1253165415.18853.32.ca...@vanquo.pezone.net
 
  for example an exit
  code of 0 for an attempt to start a server which is already running
  or an attempt to stop a server which isn't running.  (These are only
  two examples of differences between what is required of an LSB
  conforming init script and what pg_ctl does.  Are we all in universal
  agreement to make such a change to pg_ctl?
 
 Anyway, +1 for making this the default option. Going that path, would
 we be breaking backward compatibility? There might be scripts already
 in use which depend upon the current behaviour.

FYI, I have a pg_upgrade patch that relies on the old throw-an-error
behavior.  Will there be a way to still throw an error if we make
idempotent the default?
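
For illustration only, here is a minimal standalone sketch of the exit-code
behaviour under discussion (this is not pg_ctl's code; server_already_running()
and the option handling are made-up stand-ins, and the real logic is more
involved):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a real check; pg_ctl's own logic is more involved. */
static bool
server_already_running(void)
{
    return true;                /* pretend a postmaster is up */
}

int
main(int argc, char **argv)
{
    bool        idempotent = (argc > 1);    /* e.g. an --idempotent switch */

    if (server_already_running())
    {
        if (idempotent)
        {
            printf("server already running, nothing to do\n");
            exit(0);            /* proposed: success, state already as requested */
        }
        fprintf(stderr, "server is already running\n");
        exit(1);                /* current behaviour: report an error */
    }

    /* ... otherwise start the postmaster here ... */
    return 0;
}

Under a scheme like that, callers such as pg_upgrade that want the old
throw-an-error behaviour would simply not pass the option.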

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
 Phil Sorber p...@omniti.com writes:
  On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com wrote:
  [rhaas pgsql]$ pg_isready -h www.google.com
  grows old, dies
 
  Do you think we should have a default timeout, or only have one if
  specified at the command line?
 
 +1 for default timeout --- if this isn't like ping where you are
 expecting to run indefinitely, I can't see that it's a good idea for it
 to sit very long by default, in any circumstance.

FYI, the pg_ctl -w (wait) default is 60 seconds:

from pg_ctl.c:

#define DEFAULT_WAIT    60

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_ctl idempotent option

2013-01-23 Thread Heikki Linnakangas

On 23.01.2013 20:56, Bruce Momjian wrote:

On Tue, Jan 22, 2013 at 06:03:28PM +0530, Ashutosh Bapat wrote:

anyway, +1 for making this as default option. Going that path, would
we be breaking backward compatibility? There might be scripts, (being
already used), which depend upon the current behaviour.


FYI, I have a pg_upgrade patch that relies on the old throw-an-error
behavior.  Will there be a way to still throw an error if we make
idempotent the default?


Could you check the status with pg_ctl status first, and throw an 
error if it's not what you expected?


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] COPY FREEZE has no warning

2013-01-23 Thread Bruce Momjian
As a reminder, COPY FREEZE still does not issue any warning/notice if
the freezing does not happen:

  Requests copying the data with rows already frozen, just as they
  would be after running the <command>VACUUM FREEZE</command> command.
  This is intended as a performance option for initial data loading.
  Rows will be frozen only if the table being loaded has been created
  in the current subtransaction, there are no cursors open and there
  are no older snapshots held by this transaction. If those conditions
  are not met the command will continue without error though will not
  freeze rows. It is also possible in rare cases that the request
  cannot be honoured for internal reasons, hence <literal>FREEZE</literal>
  is more of a guideline than a hard rule.

  Note that all other sessions will immediately be able to see the data
  once it has been successfully loaded. This violates the normal rules
  of MVCC visibility and by specifying this option the user acknowledges
  explicitly that this is understood.

Didn't we want to issue the user some kind of feedback?
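
As an aside, a minimal libpq sketch of the pattern under which FREEZE can
actually take effect - the table is created in the same transaction that loads
it (the table name, data and connection settings are made up for illustration;
this is not taken from the patch or the docs):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");     /* rely on environment defaults */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    /* The table must be created in this (sub)transaction for FREEZE to apply. */
    PQclear(PQexec(conn, "CREATE TABLE bulk_load (id int, val text)"));

    res = PQexec(conn, "COPY bulk_load FROM STDIN WITH (FREEZE)");
    if (PQresultStatus(res) == PGRES_COPY_IN)
    {
        PQputCopyData(conn, "1\tone\n", 6);
        PQputCopyData(conn, "2\ttwo\n", 6);
        PQputCopyEnd(conn, NULL);
        PQclear(PQgetResult(conn));         /* final COPY result */
    }
    PQclear(res);

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

If the CREATE TABLE had happened in an earlier transaction, the COPY above
would silently load unfrozen rows - which is exactly the case where some kind
of notice would help.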

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_ctl idempotent option

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 09:00:25PM +0200, Heikki Linnakangas wrote:
 On 23.01.2013 20:56, Bruce Momjian wrote:
 On Tue, Jan 22, 2013 at 06:03:28PM +0530, Ashutosh Bapat wrote:
 anyway, +1 for making this as default option. Going that path, would
 we be breaking backward compatibility? There might be scripts, (being
 already used), which depend upon the current behaviour.
 
 FYI, I have a pg_upgrade patch that relies on the old throw-an-error
 behavior.  Will there be a way to still throw an error if we make
 idempotent the default?
 
 Could you check the status with pg_ctl status first, and throw an
 error if it's not what you expected?

Well, this could still create a period of time where someone else starts
the server between my status and my starting it.  Do we really want
that?  And what if I want to start it with my special -o parameters, and
I then can't tell whether it was already running or whether it is using my
parameters?  I think an idempotent default is going to cause problems.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] bugfix: --echo-hidden is not supported by \sf statements

2013-01-23 Thread Pavel Stehule
2013/1/14 Tom Lane t...@sss.pgh.pa.us:
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Jan 14, 2013 at 11:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 So far as I can tell, get_create_function_cmd (and lookup_function_oid
 too) were intentionally designed to not show their queries, and for that
 matter they go out of their way to produce terse error output if they
 fail.  I'm not sure why we should revisit that choice.  In any case
 it seems silly to change one and not the other.

 Agreed on the second point, but I think I worked on that patch, and I
 don't think that was intentional on my part.  You worked on it, too,
 IIRC, so I guess you'll have to comment on your intentions.

 Personally I think -E is one of psql's finer features, so +1 from me
 for making it apply across the board.

 Well, fine, but then it should fix both of them and remove
 minimal_error_message altogether.  I would however suggest eyeballing
 what happens when you try \ef nosuchfunction (with or without -E).
 I'm pretty sure that the reason for the special error handling in these
 functions is that the default error reporting was unpleasant for this
 use-case.

So I rewrote the patch.

It is still simple.

In the end I didn't use PSQLexec - I just write the hidden queries directly
from the related functions - it is less invasive

Regards

Pavel


 regards, tom lane


psql_sf_hidden_queries.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Visual Studio 2012 RC

2013-01-23 Thread Brar Piening

On 01/23/2013 02:14 PM, Craig Ringer wrote:

How have you been testing VS2012 builds? In what environment?


The last time I tested this patch I was using Windows 8 RTM 
(Microsoft Windows 8 Enterprise Evaluation - 6.2.9200 Build 9200) and 
Microsoft Visual Studio Express 2012 für Windows Desktop RTM (Version 
11.0.50727.42).


Regards,

Brar


Re: [HACKERS] Event Triggers: adding information

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 09:33:58AM -0500, Robert Haas wrote:
 On Wed, Jan 23, 2013 at 4:57 AM, Dimitri Fontaine
 dimi...@2ndquadrant.fr wrote:
  Thanks for committing the fixes. About the regression tests, I think
  you're right, but then I can't see how to include such a test. Maybe you
  could add the other one, though?
 
 Can you point me specifically at what you have in mind so I can make
 sure I'm considering the right thing?
 
  +1 for this version, thanks.
 
 OK, committed that also.

Also, I assume we no longer want after triggers on system tables, so I
removed that from the TODO list and added event triggers as a completed
item.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: UNNEST (and other functions) WITH ORDINALITY

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 10:15:27AM -0800, David Fetter wrote:
 On Wed, Jan 23, 2013 at 03:12:37PM -0300, Alvaro Herrera wrote:
  David Fetter wrote:
   On Tue, Jan 22, 2013 at 10:29:43PM -0800, David Fetter wrote:
Folks,

Please find attached a patch which implements the SQL standard
UNNEST() WITH ORDINALITY.
   
   Added to CF4.
  
  Surely you meant CF 2013-Next (i.e. first commit of 9.4 cycle).
 
 I see that that's what I did, but given that this is a pretty small
 feature with low impact, I'm wondering whether it should be on CF4.

The diff is 1.2k and has no discussion.  It should be in CF 2013-Next.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: UNNEST (and other functions) WITH ORDINALITY

2013-01-23 Thread David Fetter
On Wed, Jan 23, 2013 at 02:40:45PM -0500, Bruce Momjian wrote:
 On Wed, Jan 23, 2013 at 10:15:27AM -0800, David Fetter wrote:
  On Wed, Jan 23, 2013 at 03:12:37PM -0300, Alvaro Herrera wrote:
   David Fetter wrote:
On Tue, Jan 22, 2013 at 10:29:43PM -0800, David Fetter wrote:
 Folks,
 
 Please find attached a patch which implements the SQL standard
 UNNEST() WITH ORDINALITY.

Added to CF4.
   
   Surely you meant CF 2013-Next (i.e. first commit of 9.4 cycle).
  
  I see that that's what I did, but given that this is a pretty small
  feature with low impact, I'm wondering whether it should be on CF4.
 
 The diff is 1.2k and has no discussion.

It's been up less than a day ;)

 It should be in CF 2013-Next.

OK :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
 Phil Sorber p...@omniti.com writes:
  On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com 
  wrote:
  [rhaas pgsql]$ pg_isready -h www.google.com
  grows old, dies

  Do you think we should have a default timeout, or only have one if
  specified at the command line?

 +1 for default timeout --- if this isn't like ping where you are
 expecting to run indefinitely, I can't see that it's a good idea for it
 to sit very long by default, in any circumstance.

 FYI, the pg_ctl -w (wait) default is 60 seconds:

 from pg_ctl.c:

  #define DEFAULT_WAIT    60


Great. That is what I came to on my own as well. Figured that might be
a sticking point, but as there is precedent, I'm happy with it.

 --
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] autovacuum truncate exclusive lock round two

2013-01-23 Thread Kevin Grittner
Kevin Grittner wrote:

 Applied with trivial editing, mostly from a pgindent run against
 modified files.

Applied back as far as 9.0. Before that code didn't match well
enough for it to seem safe to apply without many hours of
additional testing.

I have confirmed occurrences of this problem at least as far back as
9.0 in the wild, where it is causing performance degradation severe
enough to force users to stop production usage long enough to
manually vacuum the affected tables. The use case is a lot like
what Jan described, where PostgreSQL is being used for high volume
queuing. When there is a burst of activity which increases table
size, and then the queues are drained back to a normal state, the
problem shows up.

-Kevin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 02:50:01PM -0500, Phil Sorber wrote:
 On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian br...@momjian.us wrote:
  On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
  Phil Sorber p...@omniti.com writes:
   On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com 
   wrote:
   [rhaas pgsql]$ pg_isready -h www.google.com
   grows old, dies
 
   Do you think we should have a default timeout, or only have one if
   specified at the command line?
 
  +1 for default timeout --- if this isn't like ping where you are
  expecting to run indefinitely, I can't see that it's a good idea for it
  to sit very long by default, in any circumstance.
 
  FYI, the pg_ctl -w (wait) default is 60 seconds:
 
  from pg_ctl.c:
 
  #define DEFAULT_WAIT    60
 
 
 Great. That is what I came to on my own as well. Figured that might be
 a sticking point, but as there is precedent, I'm happy with it.

Yeah, being able to point to precedent is always helpful.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Event Triggers: adding information

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 2:36 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Jan 23, 2013 at 09:33:58AM -0500, Robert Haas wrote:
 On Wed, Jan 23, 2013 at 4:57 AM, Dimitri Fontaine
 dimi...@2ndquadrant.fr wrote:
  Thanks for commiting the fixes. About the regression tests, I think
  you're right, but then I can't see how to include such a test. Maybe you
  could add the other one, though?

 Can you point me specifically at what you have in mind so I can make
 sure I'm considering the right thing?

  +1 for this version, thanks.

 OK, committed that also.

 Also, I assume we no longer want after triggers on system tables, so I
 removed that from the TODO list and added event triggers as a completed
 item.

Seems reasonable.  Event triggers are not completed in the sense that
there is a lot more stuff we can do with this architecture, but we've
got a basic implementation now and that's progress.  And they do
address the use case that triggers on system tables would have
targeted, I think, but better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY FREEZE has no warning

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 2:02 PM, Bruce Momjian br...@momjian.us wrote:
 As a reminder, COPY FREEZE still does not issue any warning/notice if
 the freezing does not happen:

   Requests copying the data with rows already frozen, just as they
    would be after running the <command>VACUUM FREEZE</command> command.
   This is intended as a performance option for initial data loading.
   Rows will be frozen only if the table being loaded has been created
   in the current subtransaction, there are no cursors open and there
   are no older snapshots held by this transaction. If those conditions
   are not met the command will continue without error though will not
   freeze rows. It is also possible in rare cases that the request
    cannot be honoured for internal reasons, hence <literal>FREEZE</literal>
   is more of a guideline than a hard rule.

   Note that all other sessions will immediately be able to see the data
   once it has been successfully loaded. This violates the normal rules
   of MVCC visibility and by specifying this option the user acknowledges
   explicitly that this is understood.

 Didn't we want to issue the user some kind of feedback?

I believe that is what was agreed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Event Triggers: adding information

2013-01-23 Thread Bruce Momjian
On Wed, Jan 23, 2013 at 03:02:24PM -0500, Robert Haas wrote:
 On Wed, Jan 23, 2013 at 2:36 PM, Bruce Momjian br...@momjian.us wrote:
  On Wed, Jan 23, 2013 at 09:33:58AM -0500, Robert Haas wrote:
  On Wed, Jan 23, 2013 at 4:57 AM, Dimitri Fontaine
  dimi...@2ndquadrant.fr wrote:
   Thanks for commiting the fixes. About the regression tests, I think
   you're right, but then I can't see how to include such a test. Maybe you
   could add the other one, though?
 
  Can you point me specifically at what you have in mind so I can make
  sure I'm considering the right thing?
 
   +1 for this version, thanks.
 
  OK, committed that also.
 
  Also, I assume we no longer want after triggers on system tables, so I
  removed that from the TODO list and added event triggers as a completed
  item.
 
 Seems reasonable.  Event triggers are not completed in the sense that
 there is a lot more stuff we can do with this architecture, but we've
 got a basic implementation now and that's progress.  And they do
 address the use case that triggers on system tables would have
 targeted, I think, but better.

Right.  Users would always be chasing implementation details if they
tried to trigger on system tables.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_isready (was: [WIP] pg_ping utility)

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 2:51 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Jan 23, 2013 at 02:50:01PM -0500, Phil Sorber wrote:
 On Wed, Jan 23, 2013 at 1:58 PM, Bruce Momjian br...@momjian.us wrote:
  On Wed, Jan 23, 2013 at 12:27:45PM -0500, Tom Lane wrote:
  Phil Sorber p...@omniti.com writes:
   On Wed, Jan 23, 2013 at 11:07 AM, Robert Haas robertmh...@gmail.com 
   wrote:
   [rhaas pgsql]$ pg_isready -h www.google.com
   grows old, dies
 
   Do you think we should have a default timeout, or only have one if
   specified at the command line?
 
  +1 for default timeout --- if this isn't like ping where you are
  expecting to run indefinitely, I can't see that it's a good idea for it
  to sit very long by default, in any circumstance.
 
  FYI, the pg_ctl -w (wait) default is 60 seconds:
 
  from pg_ctl.c:
 
   #define DEFAULT_WAIT    60
 

 Great. That is what I came to on my own as well. Figured that might be
 a sticking point, but as there is precedent, I'm happy with it.

 Yeah, being able to point to precedent is always helpful.

 --
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +

Attached is the patch to add a connect_timeout.

I also factored out the setting of user and dbname from the default-
gathering loop, as we only need host and port defaults, and it made more
sense to handle user/dbname in the same area of the code as
connect_timeout. This was something mentioned by Robert upthread.

This also coincidentally fixes a bug in the size of the keywords and
values arrays. I must have added an option during review and not
extended the arrays. Is there a best practice for making sure that
doesn't happen in the future? I was thinking of defining MAX_PARAMS,
setting the array size to MAX_PARAMS+1, and then checking in the
getopt loop how many params we expect to process and erroring out
if there are too many, but that only works at runtime.
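
For illustration, a hedged standalone sketch of those two ideas - sizing the
keywords/values arrays from a single define and defaulting connect_timeout
when the user doesn't supply one (the names, the 60-second figure and the
option handling here are illustrative, not the submitted patch):

#include <stdio.h>
#include <libpq-fe.h>

#define PARAMS_ARRAY_SIZE   7       /* fallback_application_name, host, port,
                                     * connect_timeout, dbname, user, NULL */
#define DEFAULT_TIMEOUT     "60"    /* per the discussion upthread */

int
main(int argc, char **argv)
{
    const char *keywords[PARAMS_ARRAY_SIZE];
    const char *values[PARAMS_ARRAY_SIZE];
    int         i = 0;
    PGPing      rv;

    keywords[i] = "fallback_application_name";
    values[i++] = "pg_isready_sketch";
    keywords[i] = "connect_timeout";
    values[i++] = (argc > 1) ? argv[1] : DEFAULT_TIMEOUT;
    keywords[i] = NULL;             /* terminator; host/port/etc. omitted here */
    values[i] = NULL;

    rv = PQpingParams(keywords, values, 0);
    switch (rv)
    {
        case PQPING_OK:
            printf("accepting connections\n");
            break;
        case PQPING_REJECT:
            printf("rejecting connections\n");
            break;
        case PQPING_NO_RESPONSE:
            printf("no response\n");
            break;
        default:
            printf("no attempt made\n");
            break;
    }
    return (int) rv;
}

A compile-time guard against the array-size bug could be as simple as a
_Static_assert (or StaticAssertStmt) that the define matches the number of
options handled, though keeping the count in one define already removes most
of the risk.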

One other thing I noticed while refactoring the defaults gathering
code. If someone passes in a connect string for dbname, we output the
wrong info at the end. This is not addressed in this patch because I
wanted to get some feedback before fixing and make a separate patch. I
see the only real option being to use PQconninfoParse to get the
params from the connect string. If someone passes in a connect string
via dbname should that have precedence over other values passed in via
individual command line options? Should ordering of the command line
options matter?

For example if someone did: pg_isready -h foo -d host=bar port=4321 -p 1234

What should the connection parameters be?


pg_isready_timeout.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Phil Sorber
On Wed, Jan 23, 2013 at 1:44 PM, Stephen Frost sfr...@snowman.net wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
 For all of that, I'm not sure that people failing to seek consensus
 before coding is really so much of a problem as you seem to think.

 For my part, I don't think the lack of consensus-finding before
 submitting patches is, in itself, a problem.

 The problem, imv, is that everyone is expecting that once they've
 written a patch and put it on a commitfest that it's going to get
 committed- and it seems like committers are feeling under pressure
 that, because something's on the CF app, it needs to be committed
 in some form.


FWIW, I have NO delusions that something I propose or submit or put in
a CF is necessarily going to get committed. For me it's not committed
until I can see it in 'git log' and even then, I've seen stuff get
reverted. I would hope that if a committer isn't comfortable with a
patch they would explain why, and decline to commit. Then it's up to
the submitter as to whether or not they want to make changes, try to
explain why they are right and the committer is wrong, or withdraw the
patch.

 There's a lot of good stuff out there, sure, and even more good *ideas*,
 but it's important to make sure we can provide a stable system with
 regular releases.  As discussed, we really need to be ready to truely
 triage the remaining patch set, figure out who is going to work on what,
 and punt the rest til post-9.3.

 Thanks,

 Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Proposal for Allow postgresql.conf values to be changed via SQL [review]

2013-01-23 Thread Andres Freund
On 2013-01-22 12:32:07 +, Amit kapila wrote:
 This closes all comments raised till now for this patch.
 Kindly let me know if you feel something is missing?

I am coming late to this patch, so bear with me if I repeat something
said elsewhere.

Review comments from a cursory pass through the patch:
* most comments are hard to understand. I know by heart the problem of that
  being hard for a non-native speaker, but I think another pass
  over them would be a good thing.
* The gram.y changes around set_rest_(more|common) seem pretty confused
  to me. E.g. it's no longer possible to set the timezone for a
  function. And why is it possible to persistently set the search path,
  but not the client encoding? Why is FROM CURRENT in set_rest_more?
* set_config_file should elog(ERROR), not return on an unhandled
  setstmt-kind
* why are you creating AutoConfFileName if it's not stat'able? It seems
  better to simply skip parsing the old file in that case
* Writing the temporary file to .$pid seems like a bad idea, better use
  one file for that, SET PERSISTENT is protected by an exclusive lock
  anyway.
* the write sequence should be (a standalone sketch follows this list):
  * fsync(tempfile)
  * fsync(directory)
  * rename(tempfile, configfile)
  * fsync(configfile)
  * fsync(directory)
* write_auto_conf_file should probably escape quoted values?
* coding style should be adhered to more closely; there are many
  if (pointer) which should be if (pointer != NULL), single-line blocks
  enclosed in curlies which shouldn't be, etc.
* replace_auto_config_file and surrounding functions need more comments
  in the header
* the check that prevents persistent SETs in a transaction should rather
  be in utility.c and use PreventTransactionChain like most of the
  others that need to do that (c.f. T_CreatedbStmt).
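
For the write-sequence item above, a standalone sketch of what that ordering
looks like in code (file names and error handling are illustrative only, not
the patch's):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Durably replace <dir>/postgresql.auto.conf with the given contents. */
static int
replace_config_file(const char *dir, const char *contents)
{
    char        tmp_path[1024];
    char        final_path[1024];
    int         fd;
    int         dirfd;

    snprintf(tmp_path, sizeof(tmp_path), "%s/postgresql.auto.conf.tmp", dir);
    snprintf(final_path, sizeof(final_path), "%s/postgresql.auto.conf", dir);

    /* 1. write the temporary file and fsync it */
    fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, contents, strlen(contents)) < 0 || fsync(fd) != 0)
    {
        close(fd);
        return -1;
    }
    close(fd);

    /* 2. fsync the directory so the temp file's entry is durable */
    dirfd = open(dir, O_RDONLY);
    if (dirfd < 0)
        return -1;
    fsync(dirfd);

    /* 3. atomically rename the temp file over the config file */
    if (rename(tmp_path, final_path) != 0)
    {
        close(dirfd);
        return -1;
    }

    /* 4. fsync the renamed file, then 5. the directory once more */
    fd = open(final_path, O_RDONLY);
    if (fd >= 0)
    {
        fsync(fd);
        close(fd);
    }
    fsync(dirfd);
    close(dirfd);
    return 0;
}

int
main(void)
{
    return replace_config_file(".", "# SET PERSISTENT sketch\nwork_mem = '16MB'\n") == 0 ? 0 : 1;
}

Using one fixed temp file name rather than .$pid also matches the point above,
since SET PERSISTENT is protected by an exclusive lock anyway.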

I think this patch is a good bit away from being ready for committer...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 3:23 PM, Phil Sorber p...@omniti.com wrote:
 On Wed, Jan 23, 2013 at 1:44 PM, Stephen Frost sfr...@snowman.net wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
 For all of that, I'm not sure that people failing to seek consensus
 before coding is really so much of a problem as you seem to think.

 For my part, I don't think the lack of consensus-finding before
 submitting patches is, in itself, a problem.

 The problem, imv, is that everyone is expecting that once they've
 written a patch and put it on a commitfest that it's going to get
 committed- and it seems like committers are feeling under pressure
 that, because something's on the CF app, it needs to be committed
 in some form.


 FWIW, I have NO delusions that something I propose or submit or put in
 a CF is necessarily going to get committed. For me it's not committed
 until I can see it in 'git log' and even then, I've seen stuff get
 reverted. I would hope that if a committer isn't comfortable with a
 patch they would explain why, and decline to commit. Then it's up to
 the submitter as to whether or not they want to make changes, try to
 explain why they are right and the committer is wrong, or withdraw the
 patch.

I think that's the right attitude, but it doesn't always work out that
way.  Reviewers and committers sometimes spend a lot of time writing a
review and then get flamed for their honest opinion about the
readiness of a patch.  Of course, reviewers and committers can be
jerks, too.  As far as I know, we're all human, here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Heikki Linnakangas

On 23.01.2013 20:44, Stephen Frost wrote:

* Robert Haas (robertmh...@gmail.com) wrote:

For all of that, I'm not sure that people failing to seek consensus
before coding is really so much of a problem as you seem to think.


For my part, I don't think the lack of consensus-finding before
submitting patches is, in itself, a problem.


I feel the same. Posting a patch before design and consensus may be a 
huge waste of time for the submitter himself, but it's not a problem for 
others.



The problem, imv, is that everyone is expecting that once they've
written a patch and put it on a commitfest that it's going to get
committed- and it seems like committers are feeling under pressure
that, because something's on the CF app, it needs to be committed
in some form.


I'm sure none of the committers have a problem rejecting a patch, when 
there's clear grounds for it. Rejecting is the easiest way to deal with 
a patch. However, at least for me, 50% of the patches in any given 
commitfest don't interest me at all. I don't object to them, per se, but 
I don't want to spend any time on them either. I can imagine the same to 
be true for all other committers too. If a patch doesn't catch the 
interest of any committer, it's going to just sit there in the 
commitfest app for a long time, even if it's been reviewed.



As discussed, we really need to be ready to truely
triage the remaining patch set, figure out who is going to work on what,
and punt the rest til post-9.3.


FWIW, here's how I feel about some of the patches. It's not an exhaustive list.

Event Triggers: Passing Information to User Functions (from 2012-11)
I don't care about this whole feature, and haven't looked at the patch.. 
Probably not worth the complexity. But won't object if someone else 
wants to deal with it.


Extension templates
Same as above.

Checksums (initdb-time)
I don't want this feature, and I've said that on the thread.

Identity projection (partitioning)
Nothing's happened for over a month. Seems dead for that reason.

Remove unused targets from plan
Same here.

Store additional information in GIN index
Same here.

Index based regexp search for pg_trgm
I'd like to see this patch go in, but it still needs a fair amount of 
work. Probably needs to be pushed to next commitfest for that reason.


allowing privileges on untrusted languages
Seems like a bad idea to me, for the reasons Tom mentioned on that thread.

Skip checkpoint on promoting from streaming replication
Given that we still don't have an updated patch for this, it's probably 
getting too late for this. Would be nice to see the patch or an 
explanation of the new design Simon's been working on.


Patch to compute Max LSN of Data Pages (from 2012-11)
Meh. Seems like a really special-purpose tool. Probably better to put 
this on pgfoundry.


logical changeset generation v4
This is a boatload of infrastructure for supporting logical replication, 
yet we have no code actually implementing logical replication that would 
go with this. The premise of logical replication over trigger-based was 
that it'd be faster, yet we cannot assess that without a working 
implementation. I don't think this can be committed in this state.


built-in/SQL Command to edit the server configuration file 
(postgresql.conf)

I think this should be a pgfoundry project, not in core. At least for now.

json generator function enhancements
Json API and extraction functions
To be honest, I don't understand why the json datatype had to be 
built-in to begin with. But it's too late to object to that now, and if 
the datatype is there, these functions probably should be, too. Or could 
these be put into a separate json-extras extension? I dunno.


psql watch
Meh. In practice, for the kind of ad-hoc monitoring this would be useful 
for, you might as well just use watch psql -c 'select ...'. Yes, that 
reconnects for each query, but so what.


plpgsql_check_function
I don't like this in its current form. A lot of code that mirrors 
pl_exec.c. I think we'll have to find a way to somehow re-use the 
existing code for this. Like, compile the function as usual, but don't 
stop on error.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Potential TODO: schema in ALTER DEFAULT PRIVILEGES?

2013-01-23 Thread Josh Berkus
Folks,

As you know, there are a lot of people these days using SCHEMA for
multi-tenant application partitioning.   One of them pointed out to me
that schema is missing from ALTER DEFAULT PRIVS; that is, there's no
way for you to set default permissions on a new schema.  For folks using
schema for partitioning, support for this would be very helpful.

Worth adding to TODO?  Obviously nobody's going to work on it right now.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improve concurrency of foreign key locking

2013-01-23 Thread Andrew Dunstan


On 01/23/2013 12:48 PM, Simon Riggs wrote:

On 23 January 2013 17:15, Andres Freund and...@2ndquadrant.com wrote:

On 2013-01-23 11:58:28 -0500, Andrew Dunstan wrote:

On 01/23/2013 10:12 AM, Alvaro Herrera wrote:

Improve concurrency of foreign key locking

This error message change looks rather odd, and has my head spinning a bit:

-errmsg("SELECT FOR UPDATE/SHARE cannot be applied to
the nullable side of an outer join")));
+errmsg("SELECT FOR UPDATE/SHARE/KEY UPDATE/KEY SHARE
cannot be applied to the nullable side of an outer join")))

Can't we do better than that?

I don't really see how? I don't think listing only the current lock level
really is an improvement, and something like SELECT ... FOR $locktype
cannot .. seems uncommon enough in pg error messages to be strange.
Now I agree that listing all those lock levels isn't that nice, but I
don't really have a better idea.

row level locks cannot be applied to the NULLable side of an outer join
Hint: there are no rows to lock



Yeah, this is really more informative than either, I think.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sepgsql 1/3] add name qualified creation label

2013-01-23 Thread Heikki Linnakangas

On 17.01.2013 23:20, Kohei KaiGai wrote:

2013/1/16 Robert Haasrobertmh...@gmail.com:

This looks OK on a quick once-over, but should it update the
documentation somehow?


The documentation does not have much description of type_transition
rules, so I just modified the relevant description a bit to mention that a
type_transition rule may optionally take the new object name as an argument.


The comments at least need updating, to mention the new arguments.


In addition, I forgot to update the minimum required version for libselinux
(this also requires a change in the configure script).


libselinux1 2.1.10 or newer is a pretty tall order. That's not in debian 
testing yet, for example. I'm afraid if we bump the minimum requirement, 
what might happen is that many distributions will stop building with 
--with-selinux.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Stephen Frost
Heikki,

* Heikki Linnakangas (hlinnakan...@vmware.com) wrote:
 FWIW, here's how I feel about some the patches. It's not an exhaustive list.

Thanks for going through them and commenting on them.

 Event Triggers: Passing Information to User Functions (from 2012-11)
 I don't care about this whole feature, and haven't looked at the
 patch.. Probably not worth the complexity. But won't object if
 someone else wants to deal with it.

I'd like to see it happen.

 Extension templates
 Same as above.

This one isn't actually all that complex, though it does add a few
additional catalogs for keeping track of things.  For my part, I'd like
to see this go in as I see it being another step closer to Packages that
a certain other RDBMS has.

 Checksums (initdb-time)
 I don't want this feature, and I've said that on the thread.

I see a lot of value in this.

 Identity projection (partitioning)
 Nothing's happened for over a month. Seems dead for that reason.

I don't think this is dead..

 Remove unused targets from plan
 Same here.
 
 Store additional information in GIN index
 Same here.

Haven't been following these, so I'm not sure.  Some of these show a lack
of progress because they have not been reviewed.

 Index based regexp search for pg_trgm
 I'd like to see this patch go in, but it still needs a fair amount
 of work. Probably needs to be pushed to next commitfest for that
 reason.

Agreed.

 allowing privileges on untrusted languages
 Seems like a bad idea to me, for the reasons Tom mentioned on that thread.

On the fence about this one.  I like the idea of reducing the need to be
a superuser, but there are risks associated with this change also.

 Skip checkpoint on promoting from streaming replication
 Given that we still don't have an updated patch for this, it's
 probably getting too late for this. Would be nice to see the patch
 or an explanation of the new design Simon's been working.
 
 Patch to compute Max LSN of Data Pages (from 2012-11)
 Meh. Seems like a really special-purpose tool. Probably better to
 put this on pgfoundry.

Agreed on these two.

 logical changeset generation v4
 This is a boatload of infrastructure for supporting logical
 replication, yet we have no code actually implementing logical
 replication that would go with this. The premise of logical
 replication over trigger-based was that it'd be faster, yet we
 cannot asses that without a working implementation. I don't think
 this can be committed in this state.

Andres had a working implementation of logical replication, with code to
back it up and performance numbers showing how much faster it is, at
PGCon last year, as I recall...  Admittedly, it probably needs changing
and updating due to the changes which the patches have been going
through, but I don't think the claim that we don't know it's faster is
fair.

 built-in/SQL Command to edit the server configuration file
 (postgresql.conf)
 I think this should be a pgfoundry project, not in core. At least for now.

I don't think it would ever get deployed or used then.

 json generator function enhancements
 Json API and extraction functions
 To be honest, I don't understand why the json datatype had to be
 built-in to begin with. But it's too late to object to that now, and
 if the datatype is there, these functions probably should be, too.
 Or could these be put into a separate json-extras extension? I
 dunno.

There were good reasons to add json as a data type, I thought, though I
don't have them offhand.

 psql watch
 Meh. In practice, for the kind of ad-hoc monitoring this would be
 useful for, you might as well just use watch psql -c 'select ...'
 . Yes, that reconnects for each query, but so what.

I do that pretty often.  A better approach, imv, would be making psql a
bit more of a 'real' shell, with loops, conditionals, better variable
handling, etc.

 plpgsql_check_function
 I don't like this in its current form. A lot of code that mirrors
 pl_exec.c. I think we'll have to find a way to somehow re-use the
 existing code for this. Like, compile the function as usual, but
 don't stop on error.

Haven't looked at this yet, but your concerns make sense to me.

Thanks!

Stephen




Re: [HACKERS] Potential TODO: schema in ALTER DEFAULT PRIVILEGES?

2013-01-23 Thread Stephen Frost
Josh,

* Josh Berkus (j...@agliodbs.com) wrote:
 As you know, there's a lot of people these days using SCHEMA for
 multi-tenant application partitioning.   One of them pointed out to me
 that schema is missing from ALTER DEFAULT PRIVS; that is, there's no
 way for you to set default permissions on a new schema.  For folks using
 schema for partitioning, support for this would be very helpful.
 
 Worth adding to TODO?  Obviously nobody's going to work on it right now.

The original ALTER DEFAULT PRIVS actually included support for exactly
this, and there was a patch at one point for DEFAULT OWNER as well.  I'm
on board for both of those ideas and run into the lack of them regularly
(as in, last week I was setting default privileges for a whole slew of
roles by hand for a given schema because I couldn't set it for *all*
users for a given schema, even as a superuser, and new roles will be
added shortly and I'll have to go back and remember to add the default
privs for them also...).

That's my 2c.  I don't believe this is really a question of whether anyone
needs this so much as how we can implement it and keep everyone happy
that it's safe and secure.  That's what needs to be worked out first.

Thanks,

Stephen




Re: [HACKERS] CF3+4 (was Re: Parallel query execution)

2013-01-23 Thread Pavel Stehule
Hello


 I do that pretty often.  A better approach, imv, would be making psql a
 bit more of a 'real' shell, with loops, conditionals, better variable
 handling, etc.


After a few years of prototyping in this area I am not sure this is a
good idea. Maybe it would be better to start some new console from scratch.

 plpgsql_check_function
 I don't like this in its current form. A lot of code that mirrors
 pl_exec.c. I think we'll have to find a way to somehow re-use the
 existing code for this. Like, compile the function as usual, but
 don't stop on error.

 Havn't looked at this yet, but your concerns make sense to me.

I welcome any movement on this subject - again, I wrote a lot of variants
and the current one is the most maintainable (in my opinion) - there is
redundant structure (not code) - and simple code. After merging the code
lost readability :(. But I would like to continue this work - I am sure it
makes sense, and people and some large companies use it with success, so
it should be in core - in some form.

Regards

Pavel


 Thanks!

 Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [BUGS] BUG #7815: Upgrading PostgreSQL from 9.1 to 9.2 with pg_upgrade/postgreql-setup fails - invalid status retrieve

2013-01-23 Thread Bruce Momjian
On Sat, Jan 19, 2013 at 09:56:48PM -0500, Bruce Momjian wrote:
  (But, at least with the type of packaging I'm using in Fedora, he would
  first have to go through a package downgrade/reinstallation process,
  because the packaging provides no simple scripted way of manually
  starting the old postgres executable, only the new one.  Moreover, what
  pg_upgrade is printing provides no help in figuring out whether that's
  the next step.)
  
  I do sympathize with taking a paranoid attitude here, but I'm failing
  to see what advantage there is in not attempting to start the old
  postmaster.  In the *only* case that pg_upgrade is successfully
  protecting against with this logic, namely there's-an-active-postmaster-
  already, the postmaster is equally able to protect itself.  In other
  cases it would be more helpful not less to let the postmaster analyze
  the situation.
  
   The other problem is that if the server start fails, how do we know if
   the failure was due to a running postmaster?
  
  Because we read the postmaster's log file, or at least tell the user to
  do so.  That report would be unambiguous, unlike pg_upgrade's.
 
 Attached is a WIP patch to give you an idea of how I am going to solve
 this problem.  This comment says it all:

Attached is a ready-to-apply version of the patch.  I continued to use
postmaster.pid to determine if I need to try the start/stop test ---
that allows me to know which servers _might_ be running, because a
server start failure might be for many reasons and I would prefer not to
suggest the server is running if I can avoid it, and the pid file gives
me that.
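
For illustration, a minimal sketch of that postmaster.pid check (not
pg_upgrade's actual code; the path handling is simplified):

#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/*
 * Returns true if a postmaster.pid file exists in the data directory,
 * i.e. a server might be running (or died without cleaning up).
 */
static bool
postmaster_might_be_running(const char *pgdata)
{
    char        path[1024];
    struct stat st;

    snprintf(path, sizeof(path), "%s/postmaster.pid", pgdata);
    return stat(path, &st) == 0;
}

int
main(int argc, char **argv)
{
    const char *pgdata = (argc > 1) ? argv[1] : ".";

    if (postmaster_might_be_running(pgdata))
        printf("postmaster.pid present; try a trial start/stop to be sure\n");
    else
        printf("no postmaster.pid; server is not running\n");
    return 0;
}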

The patch is longer than I expected because the code needed to be
reordered.  The starting banner indicates if it is a live check, but to
check for a live server we need to start/stop the servers and we needed
to know where the binaries are, and we didn't do that until after the
start banner.  I removed the 'check' line for binary checks, and moved
that before the banner printing.  postmaster_start also now needs an
option to suppress an error.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/check.c b/contrib/pg_upgrade/check.c
new file mode 100644
index 1780788..8188643
*** a/contrib/pg_upgrade/check.c
--- b/contrib/pg_upgrade/check.c
*** fix_path_separator(char *path)
*** 56,66 
  }
  
  void
! output_check_banner(bool *live_check)
  {
! 	if (user_opts.check && is_server_running(old_cluster.pgdata))
  	{
- 		*live_check = true;
  		pg_log(PG_REPORT, Performing Consistency Checks on Old Live Server\n);
  		pg_log(PG_REPORT, \n);
  	}
--- 56,65 
  }
  
  void
! output_check_banner(bool live_check)
  {
! 	if (user_opts.check && live_check)
  	{
  		pg_log(PG_REPORT, Performing Consistency Checks on Old Live Server\n);
  		pg_log(PG_REPORT, \n);
  	}
*** check_and_dump_old_cluster(bool live_che
*** 78,84 
  	/* -- OLD -- */
  
  	if (!live_check)
! 		start_postmaster(old_cluster);
  
  	set_locale_and_encoding(old_cluster);
  
--- 77,83 
  	/* -- OLD -- */
  
  	if (!live_check)
! 		start_postmaster(old_cluster, true);
  
  	set_locale_and_encoding(old_cluster);
  
*** issue_warnings(char *sequence_script_fil
*** 201,207 
  	/* old = PG 8.3 warnings? */
  	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 803)
  	{
! 		start_postmaster(new_cluster);
  
  		/* restore proper sequence values using file created from old server */
  		if (sequence_script_file_name)
--- 200,206 
  	/* old = PG 8.3 warnings? */
  	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 803)
  	{
! 		start_postmaster(new_cluster, true);
  
  		/* restore proper sequence values using file created from old server */
  		if (sequence_script_file_name)
*** issue_warnings(char *sequence_script_fil
*** 224,230 
  	/* Create dummy large object permissions for old < PG 9.0? */
  	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
  	{
! 		start_postmaster(new_cluster);
  		new_9_0_populate_pg_largeobject_metadata(new_cluster, false);
  		stop_postmaster(false);
  	}
--- 223,229 
  	/* Create dummy large object permissions for old < PG 9.0? */
  	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
  	{
! 		start_postmaster(new_cluster, true);
  		new_9_0_populate_pg_largeobject_metadata(new_cluster, false);
  		stop_postmaster(false);
  	}
diff --git a/contrib/pg_upgrade/exec.c b/contrib/pg_upgrade/exec.c
new file mode 100644
index e326a10..44dafc3
*** a/contrib/pg_upgrade/exec.c
--- b/contrib/pg_upgrade/exec.c
*** exec_prog(const char *log_file, const ch
*** 140,152 
  
  
  /*
!  * is_server_running()
   *
!  * checks whether postmaster on the given data directory is running or not.
!  * The 

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-23 Thread Andres Freund
Hi,

I decided to reply on the patches thread to be able to find this later.

On 2013-01-23 22:48:50 +0200, Heikki Linnakangas wrote:
 logical changeset generation v4
 This is a boatload of infrastructure for supporting logical replication, yet
 we have no code actually implementing logical replication that would go with
 this. The premise of logical replication over trigger-based was that it'd be
 faster, yet we cannot asses that without a working implementation. I don't
 think this can be committed in this state.

It's a fair point that this is a huge amount of code without a user of its
own in-core.
But the reason no user is included is that several people
explicitly didn't want a user in-core for now, and said the first part of
this would be to implement the changeset generation as a separate
piece. Didn't you actually prefer not to have any users of this in-core
yourself?

Also, while the apply side surely isn't benchmarkable without any being
submitted, the changeset generation can very well be benchmarked.

A very, very adhoc benchmark:
 -c max_wal_senders=10
 -c max_logical_slots=10 --disabled for anything but logical
 -c wal_level=logical --hot_standby for anything but logical
 -c checkpoint_segments=100
 -c log_checkpoints=on
 -c shared_buffers=512MB
 -c autovacuum=on
 -c log_min_messages=notice
 -c log_line_prefix='[%p %t] '
 -c wal_keep_segments=100
 -c fsync=off
 -c synchronous_commit=off

pgbench -p 5440 -h /tmp -n -M prepared -c 16 -j 16 -T 30 

pgbench upstream:
tps: 22275.941409
space overhead: 0%
pgbench logical-submitted
tps: 16274.603046
space overhead: 2.1%
pgbench logical-HEAD (will submit updated version tomorrow or so):
tps: 20853.341551
space overhead: 2.3%
pgbench single plpgsql trigger (INSERT INTO log(data) VALUES(NEW::text))
tps: 14101.349535
space overhead: 369%

Note that in the single trigger case nobody consumed the queue while the
logical version streamed the changes out and stored them to disk.

Adding a default NOW() or similar to the tables immediately makes
logical decoding faster by a factor of about 3 in comparison to the
above trivial trigger.

The only reason the submitted version of logical decoding is
comparatively slow is that its xmin update policy is brain-damaged;
I'm working on that right now.

Greetings,

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

