Re: [HACKERS] Restore-reliability mode

2015-06-06 Thread Noah Misch
On Fri, Jun 05, 2015 at 08:25:34AM +0100, Simon Riggs wrote:
 This whole idea of feature development vs reliability is bogus. It
 implies that people who work on features don't care about reliability. Given
 the fact that many of the features are actually about increasing database
 reliability in the event of crashes and corruption, it just makes no sense.

I'm contrasting work that helps to keep our existing promises (reliability)
with work that makes new promises (features).  In software development, we
invariably hazard old promises to make new promises; our success hinges on
electing neither too little nor too much risk.  Two years ago, PostgreSQL's
track record had placed it in a good position to invest in new, high-risk,
high-reward promises.  We did that, and we emerged solvent yet carrying an
elevated debt service ratio.  It's time to reduce risk somewhat.

You write about a different sense of reliability.  (Had I anticipated this
misunderstanding, I might have written "Restore-probity mode".)  None of this
was about classifying people, most of whom allocate substantial time to each
kind of work.

 How will we participate in cleanup efforts? How do we know when something
 has been cleaned up, how will we measure our success or failure? I think
 we should be clear that wasting N months on cleanup can *fail* to achieve a
 useful objective. Without a clear plan it almost certainly will do so. The
 flip side is that wasting N months will cause great amusement and dancing
 amongst those people who wish to pull ahead of our open source project and
 we should take care not to hand them a victory from an overreaction.

I agree with all that.  We should likewise take care not to become insolvent
from an underreaction.

 So let's do our normal things, not do a total stop for an indefinite
 period. If someone has specific things that in their opinion need to be
 addressed, list them and we can talk about doing them, together.

I recommend these four exit criteria:

1. Non-author committer review of foreign key locks/multixact durability.
   Done when that committer certifies, as if he were committing the patch
   himself today, that the code will not eat data.

2. Non-author committer review of row-level security.  Done when that
   committer certifies that the code keeps its promises and that the
   documentation bounds those promises accurately.

3. Second committer review of the src/backend/access changes for INSERT ... ON
   CONFLICT DO NOTHING/UPDATE.  (Bugs affecting folks who don't use the new
   syntax are most likely to fall in that portion.)  Unlike the previous two
   criteria, a review without certification is sufficient.

4. Non-author committer certifying that the 9.5 WAL format changes will not
   eat your data.  The patch lists Andres and Alvaro as reviewers; if they
   already reviewed it enough to make that certification, this one is easy.

That ties up four people.  For everyone else:

- Fix bugs those reviews find.  This will start slow but will grow to keep
  everyone busy.  Committers won't certify code, and thus we can't declare
  victory, until these bugs are fixed.  The rest of this list, in contrast,
  calls out topics to sample from, not topics to exhaust.

- Turn current buildfarm members green.

- Write, review and commit more automated test machinery to PostgreSQL.  Test
  whatever excites you.  If you need ideas, Craig posted some good ones
  upthread.  Here are a few more:
  - Add a debug mode that calls sched_yield() in SpinLockRelease(); see
    6322.1406219...@sss.pgh.pa.us.
  - Improve TAP suite (src/test/perl/TestLib.pm) logging.  Currently, these
    suites redirect much output to /dev/null.  Instead, log that output and
    teach the buildfarm to capture the log.
  - Call VALGRIND_MAKE_MEM_NOACCESS() on a shared buffer when its local pin
    count falls to zero.  Under CLOBBER_FREED_MEMORY, wipe a shared buffer
    when its global pin count falls to zero.
  - With assertions enabled, or perhaps in a new debug mode, have
    pg_do_encoding_conversion() and pg_server_to_any() check the data for a
    no-op conversion instead of assuming the data is valid.

- Add buildfarm members.  This entails reporting any bugs that prevent an
  initial passing run.  Once you have a passing run, schedule regular runs.
  Examples of useful additions:
  - ./configure ac_cv_func_getopt_long=no ac_cv_func_snprintf=no ... to
    enable all the replacement code regardless of the current platform's need
    for it.  This helps distinguish a Windows bug from a replacement-code bug.
  - --disable-integer-datetimes, --disable-float8-byval, --disable-float4-byval,
    --disable-spinlocks, --disable-atomics, --disable-thread-safety,
    --disable-largefile, #define RANDOMIZE_ALLOCATED_MEMORY
  - Any OS or CPU architecture other than x86 GNU/Linux, even ones already
represented.

- Write, review and commit fixes for the bugs that come to light by way of
  these new automated tests.

- Anything else targeted 

Re: [HACKERS] psql :: support for \ev viewname and \sv viewname

2015-06-06 Thread Petr Korobeinikov
 1.
 make failed with docs

Fixed.


 2.
  \ev vw1 3

 This syntax is supported. But documentation only says:
 \ev [ viewname ]
 Missing optional line_number clause

Fixed. Documented.

3.
  strip_lineno_from_objdesc(char *func)

 Can we have the parameter named obj instead of func?
 You have renamed the function, as it is now called in case of views as
 well. Better rename the parameter as well.

Renamed.

4.
 Also please update the comments above strip_lineno_from_objdesc().
 They are specific to functions, which is not the case now.

 Comments updated.


 5.
  print_with_linenumbers(FILE *output,
   char *lines,
   const char *header_cmp_keyword,
   size_t header_cmp_sz)

 Can't we calculate the length of the header (header_cmp_sz) inside the
 function? This will avoid sloppy changes like changing the keyword but
 forgetting to change the size.
 Let's just accept the keyword and calculate the size within the function.

header_cmp_sz is now calculated inside the function.

6.
 *
 * Note that this loop scribbles on func_buf.
 */

 These lines at commands.c:1357 no longer look valid, as there is no loop
 there.

Removed.



 7.
 I see few comment lines explaining which is line 1 in case of a function,
 for which AS is used. Similarly, for a view SELECT is used.
 Can you add a similar kind of explanation there?

Explanation added.

8.
  get_create_object_cmd_internal
  get_create_function_cmd
  get_create_view_cmd

 Can these three functions be grouped together into just
 get_create_object_cmd()? This function will take an extra parameter to
 indicate the object type, say O_FUNC and O_VIEW for example.

 For the distinct part, just have a switch case over this type.

 This will add flexibility: if we add more such \e and \s options, we
 don't need new functions, rather just a new enum value like O_NEW and a
 new case in this switch statement.
 It will make the code easier to read as well.

 Similarly, you can do it for
  lookup_object_oid_internal
  get_create_function_cmd
  lookup_function_oid

Reworked.
New enum PgObjType introduced.



 9.
  static int count_lines_in_buf(PQExpBuffer buf)
  static void print_with_linenumbers(FILE *output, .. )
  static bool lookup_view_oid(const char *desc, Oid *view_oid)
  static bool lookup_object_oid_internal(PQExpBuffer query, Oid *obj_oid)

 Can we have a short description at the definition of each of these
 functions, explaining what the function does?

Description added.



 10.
  + \\e, \\echo, \\ef, \\ev, \\encoding,

 Can you keep this sorted?
 It would be good if it were sorted, but I see no such restriction, as I
 see a few out-of-order options. But better keep it ordered.
 Ignore if you disagree.

Hmm, sorted now.
Sort is based on my feelings.


psql-ev-sv-support-v4.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Restore-reliability mode

2015-06-06 Thread Michael Paquier
On Sun, Jun 7, 2015 at 4:58 AM, Noah Misch n...@leadboat.com wrote:
 - Write, review and commit more automated test machinery to PostgreSQL.  Test
   whatever excites you.  If you need ideas, Craig posted some good ones
   upthread.  Here are a few more:
   - Improve TAP suite (src/test/perl/TestLib.pm) logging.  Currently, these
     suites redirect much output to /dev/null.  Instead, log that output and
     teach the buildfarm to capture the log.

We can capture the logs and redirect them by replacing
system_or_bail() with more calls to IPC::Run. That would be a
simple enough patch. pg_rewind's tests should be switched to use
that as well.
-- 
Michael




Re: [HACKERS] nested loop semijoin estimates

2015-06-06 Thread Tomas Vondra
FWIW, I've repeated the TPC-DS tests on a much larger data set (50GB) 
today, and I see that


(a) 3f59be836c555fa679bbe0ec76de50a8b5cb23e0 (ANTI/SEMI join costing)
changes nothing - there are some small cost changes, but only in
plans involving semi/anti-joins (which is expected). Nevertheless,
all the plans remain the same.

(b) 3b0f77601b9f9f3a2e36a813e4cd32c00e0864d6 (add_path fixes)

This changes join order in one of the queries, with lots of nested
loops (this is the join order change we've seen in this thread).
Anyway, this is mostly an expected consequence of the add_path changes.

So both changes seem fine.

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Simon Riggs
On 5 June 2015 at 17:20, Alvaro Herrera alvhe...@2ndquadrant.com wrote:

 Simon Riggs wrote:
  On 5 June 2015 at 15:00, Robert Haas robertmh...@gmail.com wrote:

   Stamping it a beta implies that we think it's something fairly
   stable that we'd be pretty happy to release if things go well, which
   is a higher bar to clear.
 
  We don't have a clear definition of what Beta means. For me, Beta has
  always meant "trial software, please test".

 I think that definition *is* the problem, actually.  To me, beta means
 "trial software, please test, but final product will be very similar to
 what you see here".  What we need to convey at this point is what you
 said, but I think a better word for that is alpha.  There may be more
 mobility in there than in a beta, in users' perception, which is the
 right impression we want to convey.

 Another point is that historically, once we've released a beta, we're
 pretty reluctant to bump catversion.  We're not ready for that at this
 stage, which is one criterion that suggests to me that we're not ready
 for beta.

 So I think the right thing to do at this point is to get an alpha out,
 shortly after releasing upcoming minors.


OK, I can get behind that.

My only additional point is that it is a good idea to release an Alpha
every time, not just this release.

And if it's called Alpha, let's release it immediately. We can allow Alpha1,
Alpha2 as needed, plus we allow catversion and file format changes between
Alpha versions.

Proposed definitions

Alpha: This is trial software, please actively test and report bugs. Your
feedback is sought on usability and performance, which may result in
changes to the features included here. Not all known issues have been
resolved but work continues on resolving them. Multiple Alpha versions may
be released before we move to Beta. We reserve the right to change internal
API definitions, file formats and increment the catalog version between
Alpha versions and Beta, so we do not guarantee an easy upgrade path from
this version to later versions of this release.

Beta: This is trial software, please actively test and report bugs and
performance issues. Multiple Beta versions may be released before we move
to Release Candidate. We will attempt to maintain APIs, file formats and
catversions.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Devrim GÜNDÜZ
Hi,

On Sat, 2015-06-06 at 12:15 +0200, Magnus Hagander wrote:
 If I'm not mistaken, we (Simon and me) actually discussed something
 else along this line a while ago that might be worth considering. That
 is, maybe we should consider time-based alpha releases. That is, we
 can just decide we wrap an alpha every other Monday until we think we
 are good to go with beta. The reason for that is to get much quicker
 iteration on bugfixes, which would encourage people to use and test
 these versions. Report a bug and, if it was easy enough to fix, you
 have a wrapped release with the fix in two weeks, tops.

+1. 

 Package availability would depend on platform. For those platforms
 where package building is more or less entirely automatic already,
 this could probably also be easily automated.

When we used to release more alphas years ago, I was releasing Alpha
RPMs for many platforms. I'll do it again if we keep doing it.

Regards,

-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR






Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Magnus Hagander
On Sat, Jun 6, 2015 at 11:07 AM, Simon Riggs si...@2ndquadrant.com wrote:

 On 5 June 2015 at 17:20, Alvaro Herrera alvhe...@2ndquadrant.com wrote:

 Simon Riggs wrote:
  On 5 June 2015 at 15:00, Robert Haas robertmh...@gmail.com wrote:

   Stamping it a beta implies that we think it's something fairly
   stable that we'd be pretty happy to release if things go well, which
   is a higher bar to clear.
 
  We don't have a clear definition of what Beta means. For me, Beta has
  always meant "trial software, please test".

 I think that definition *is* the problem, actually.  To me, beta means
 "trial software, please test, but final product will be very similar to
 what you see here".  What we need to convey at this point is what you
 said, but I think a better word for that is alpha.  There may be more
 mobility in there than in a beta, in users' perception, which is the
 right impression we want to convey.

 Another point is that historically, once we've released a beta, we're
 pretty reluctant to bump catversion.  We're not ready for that at this
 stage, which is one criterion that suggests to me that we're not ready
 for beta.

 So I think the right thing to do at this point is to get an alpha out,
 shortly after releasing upcoming minors.


 OK, I can get behind that.

 My only additional point is that it is a good idea to release an Alpha
 every time, not just this release.

 And if it's called Alpha, let's release it immediately. We can allow Alpha1,
 Alpha2 as needed, plus we allow catversion and file format changes between
 Alpha versions.



If I'm not mistaken, we (Simon and me) actually discussed something else
along this line a while ago that might be worth considering. That is, maybe
we should consider time-based alpha releases. That is, we can just decide
we wrap an alpha every other Monday until we think we are good to go with
beta. The reason for that is to get much quicker iteration on bugfixes,
which would encourage people to use and test these versions. Report a bug
and, if it was easy enough to fix, you have a wrapped release with the fix
in two weeks, tops.

This would require that we can (at least mostly) automate the wrapping of
an alpha release, but I'm pretty sure we can solve that problem. We can
also, I think, get away with doing the release notes for an alpha just as
a wiki page and a lot less formal than others, meaning we don't need to
hold up any process for that.

Package availability would depend on platform. For those platforms where
package building is more or less entirely automatic already, this could
probably also be easily automated. And for those that take a lot more work,
such as the Windows installers, we could just go with wrapping every other
or every third alpha. As this is not a production release, I don't see why
we'd need to hold some back to cover for the rest.





 Proposed definitions

 Alpha: This is trial software, please actively test and report bugs. Your
 feedback is sought on usability and performance, which may result in
 changes to the features included here. Not all known issues have been
 resolved but work continues on resolving them. Multiple Alpha versions may
 be released before we move to Beta. We reserve the right to change internal
 API definitions, file formats and increment the catalog version between
 Alpha versions and Beta, so we do not guarantee an easy upgrade path from
 this version to later versions of this release.

 Beta: This is trial software, please actively test and report bugs and
 performance issues. Multiple Beta versions may be released before we move
 to Release Candidate. We will attempt to maintain APIs, file formats and
 catversions.


These sound like good definitions. Might add to the beta one something like
"whilst we will try to avoid it, pg_upgrade may be required between betas
and from beta to RC versions."

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Geoff Winkless


To play devil's advocate for a moment, is there anyone who would genuinely
be prepared to download and install an alpha release who would not already
have downloaded one of the nightlies? I only ask because I assume that
releasing an alpha is not zero-developer-cost, and I don't believe that
there's a large number of people who *would* be happy to install something
that's described as being buggy and subject to change but are put off by
having to type configure and make.

Further, it seems to me that the number of people who won't roll their own
who are useful as bug-finders is even smaller.

I get the feeling that the argument appears to be "Bruce doesn't want to
release a beta, Simon wants to release something. Let's release an alpha
because it's sort-of halfway in between" as a consensus compromise (I'm
not deliberately picking on specific people; I'm aware you're not the only
two involved and arguing for either side, but you do seem to be on fairly
polar opposite sides of the argument :) ). I don't really believe that
releasing an alpha moves anything further forward from a testing point of
view, and I'm fairly sure that it will have just as deleterious an effect
on bugfixing as would a beta, with the added disadvantage of the extra
developer cost.

​Geoff​


Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Gavin Flower

On 06/06/15 21:07, Simon Riggs wrote:
On 5 June 2015 at 17:20, Alvaro Herrera alvhe...@2ndquadrant.com wrote:


Simon Riggs wrote:
 On 5 June 2015 at 15:00, Robert Haas robertmh...@gmail.com
mailto:robertmh...@gmail.com wrote:

  Stamping it a beta implies that we think it's something fairly
  stable that we'd be pretty happy to release if things go well, which
  is a higher bar to clear.

 We don't have a clear definition of what Beta means. For me, Beta has
 always meant "trial software, please test".

I think that definition *is* the problem, actually.  To me, beta means
"trial software, please test, but final product will be very similar to
what you see here".  What we need to convey at this point is what you
said, but I think a better word for that is alpha.  There may be more
mobility in there than in a beta, in users' perception, which is the
right impression we want to convey.

Another point is that historically, once we've released a beta, we're
pretty reluctant to bump catversion.  We're not ready for that at this
stage, which is one criterion that suggests to me that we're not ready
for beta.

So I think the right thing to do at this point is to get an alpha out,
shortly after releasing upcoming minors.


OK, I can get behind that.

My only additional point is that it is a good idea to release an Alpha 
every time, not just this release.


And if it's called Alpha, let's release it immediately. We can allow 
Alpha1, Alpha2 as needed, plus we allow catversion and file format 
changes between Alpha versions.


Proposed definitions

Alpha: This is trial software, please actively test and report bugs. 
Your feedback is sought on usability and performance, which may result 
in changes to the features included here. Not all known issues have 
been resolved but work continues on resolving them. Multiple Alpha 
versions may be released before we move to Beta. We reserve the right 
to change internal API definitions, file formats and increment the 
catalog version between Alpha versions and Beta, so we do not 
guarantee an easy upgrade path from this version to later versions of 
this release.


Beta: This is trial software, please actively test and report bugs and 
performance issues. Multiple Beta versions may be released before we 
move to Release Candidate. We will attempt to maintain APIs, file 
formats and catversions.


--
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
As a 'user' I am very happy with the idea of having Alphas; it gives me a 
feeling that there will be less chance of problems being released in the 
final version.


Not only does it give more chances to test, it might also encourage 
more people to get involved in contributing, either ideas for minor 
tweaks or simple patches etc. (as the release is not quite finished, and 
there is an expectation that minor functional changes have a possibility 
of being accepted for the version, if there is sufficient merit).



Cheers,
Gavin




Re: [HACKERS] Is it possible to have a fast-write Index?

2015-06-06 Thread k...@rice.edu
On Fri, Jun 05, 2015 at 11:54:01PM +, deavid wrote:
 Thanks to everybody for answering. I wasn't expecting this attention; this
 is a great community :-)
 
 Jim asked me about something real. Well, the problem is this showed up more
 than five years ago, and has kept popping up from time to time since, in
 different circumstances. I solved it in a different way each time, depending
 on the exact use-case. I wanted to generalize, because it seems a good
 feature for several situations; and I don't expect a solution just for me,
 as each time I hit this I found some way to sort it out.
 As Jim said, what we need here are figures for real examples, and I don't
 have them yet. I'll do my homework and email back with exact problems and
 exact timing. Give me a week or two.
 
 Also, some of you are talking about IO. Well, it's hard to say without the
 figures here, but I'm pretty sure I'm hitting CPU time only. We use SSDs on
 those big databases, and in my tests I also tried setting fsync=off.
 
 So the problem is: I see a low iowait, and CPU time for one core is at
 80-90% most of the time. I can buy more RAM, better disks, or CPUs with
 more cores. But one CPU core would have more-or-less the same speed no
 matter how much money you invest.
 
 Wanting a delayed-write index is similar to setting
 synchronous_commit = off. We want to give an OK to the backend as soon
 as possible and do this work in the background. But we also want some
 reliability against crashes.
 
 Also, if the task is done in the background it may be done by another
 backend, so several indexes could probably be packed at once using
 different backend processes. We could use the entire CPU if our index
 writes aren't tied to the session that wrote the row.
 
 PS: I'm very interested in existing approaches like GIN or BRIN (this one
 is new to me). Thanks a lot; I'll try them in my tests.

Hi David,

Here is an interesting read comparing LSM and Fractal Tree indexing:

http://highscalability.com/blog/2014/8/6/tokutek-white-paper-a-comparison-of-log-structured-merge-lsm.html

Regards,
Ken




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Sehrope Sarkuni
On Sat, Jun 6, 2015 at 6:47 AM, Geoff Winkless pgsqlad...@geoff.dj wrote:
 To play devil's advocate for a moment, is there anyone who would genuinely
 be prepared to download and install an alpha release who would not already
 have downloaded one of the nightlies? I only ask because I assume that
 releasing an alpha is not zero-developer-cost and I don't believe that
 there's a large number of people who would be happy to install something
 that's described as being buggy and subject to change but are put off by
 having to type configure and make.

I fit into that category and I would guess there would be others as
well. Having system packages available via an "apt-get install ..."
lowers the bar significantly to try things out.

As an example, I installed the 9.4 beta as soon as it was available to
run a smoke test and try out some of the new jsonb features. I'll be
doing the same with a 9.5 alpha/beta (or whatever it's called), for
both similar testing and to try out UPSERT.

It's much easier to work into dev/test setups if there are system
packages as it's just a config change to an existing script. Building
from source would require a whole new workflow that I don't have time
to incorporate.

 Further, it seems to me that the number of people who won't roll their own 
 who are useful as bug-finders is even smaller.

That's probably true but they definitely won't find any bugs if they
don't test at all.

If it's possible to have automated packaging, even for just a subset
of platforms, I think that'd be useful.

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Joshua D. Drake


On 06/06/2015 07:33 AM, Robert Haas wrote:


On Sat, Jun 6, 2015 at 6:47 AM, Geoff Winkless pgsqlad...@geoff.dj wrote:

To play devil's advocate for a moment, is there anyone who would genuinely
be prepared to download and install an alpha release who would not already
have downloaded one of the nightlies? I only ask because I assume that
releasing an alpha is not zero-developer-cost and I don't believe that
there's a large number of people who would be happy to install something
that's described as being buggy and subject to change but are put off by
having to type configure and make.


Yes, me and everyone like me in feature set.

Compiling takes time, time that does not need to be spent. If I can push 
an alpha into a container and start testing, I will do so. If I have to:


git pull; configure --prefix; make -j8 install

Then I will likely move on to other things, because my time (like anyone 
else's on this list) is not free.


If you add into this a test harness that I can execute from the alpha 
release (or another package) that allows me to instantly report via the 
buildfarm or just email a tarball to -hackers, that is even better.


I know that I am not taking everything into account here but remember 
that most of our users are not -hackers. They are practitioners and a 
lot of them would love to help but just can't because a lot of the 
infrastructure has never been built and -hackers think like -hackers.



Sincerely,

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Joshua D. Drake


On 06/05/2015 08:07 PM, Bruce Momjian wrote:


 From my side, it is only recently I got some clear answers to my questions
about how it worked. I think it is very important that major features have
extensive README type documentation with them so the underlying principles used
in the development are clear. I would define the measure of a good feature as
whether another committer can read the code comments and get a good feel. A bad
feature is one where committers walk away from it, saying "I don't really get it
and I can't read an explanation of why it does that."  Tom's most significant
contribution is his long descriptive comments on what the problem is that needs
to be solved, the options, and the method chosen. Clarity of thought is what
solves bugs.


Yes, I think we should have done that early-on for multi-xact, and I am
hopeful we will learn to do that more often when complex features are
implemented, or when we identify areas that are more complex than we
thought.



I see this idea of the README as very useful. There are far more people 
like me in this community than Simon or Alvaro. I can test, I can break 
things, I can script up a harness, but I need to understand HOW, and the 
README would help allow for that.




People think I want to stop feature development to review.  What I am
saying is that we need to stop development so we can be honest about
whether we need review, and where.  It is hard to be honest when time
and feature pressure are on you.  It shouldn't take long to make that
decision as a group.



Right. This is all about taking a step back, a deep breath, an objective 
look, and then digging in, in a more productive and reliable manner.


Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Robert Haas
On Sat, Jun 6, 2015 at 6:47 AM, Geoff Winkless pgsqlad...@geoff.dj wrote:
 To play devil's advocate for a moment, is there anyone who would genuinely
 be prepared to download and install an alpha release who would not already
 have downloaded one of the nightlies? I only ask because I assume that
 releasing an alpha is not zero-developer-cost and I don't believe that
 there's a large number of people who would be happy to install something
 that's described as being buggy and subject to change but are put off by
 having to type configure and make.

This is pretty much why Peter Eisentraut gave up on doing alphas after
the 9.1 cycle.

Admittedly, what is being proposed here is somewhat different.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] pg_stat_archiver issue with aborted archiver

2015-06-06 Thread Julien Rouhaud
Hello,

I just noticed that if the archiver aborts (for instance if the
archive_command exited with a return code > 127), pg_stat_archiver won't
report those failed attempts. This happens with both the 9.4 and 9.5 branches.

Please find attached a patch that fixes this issue, based on current head.

Regards.
-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org

*** a/src/backend/postmaster/pgarch.c
--- b/src/backend/postmaster/pgarch.c
***
*** 578,585  pgarch_archiveXlog(char *xlog)
  		 *
  		 * Per the Single Unix Spec, shells report exit status > 128 when a
  		 * called command died on a signal.
  		 */
! 		int			lev = (WIFSIGNALED(rc) || WEXITSTATUS(rc) > 128) ? FATAL : LOG;
  
  		if (WIFEXITED(rc))
  		{
--- 578,595 
  		 *
  		 * Per the Single Unix Spec, shells report exit status > 128 when a
  		 * called command died on a signal.
+ 		 *
+ 		 * If the archiver aborts, we still need to tell the collector about
+ 		 * the WAL file that we failed to archive.
  		 */
! 		int		lev;
! 		if (WIFSIGNALED(rc) || WEXITSTATUS(rc) > 128)
! 		{
! 			lev = FATAL;
! 			pgstat_send_archiver(xlog, true);
! 		}
! 		else
! 			lev = LOG;
  
  		if (WIFEXITED(rc))
  		{




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:

 Tom, for example, has previously not wanted to even bump
 catversion after beta1, which rules out a huge variety of
 possible fixes and interface changes.  If we want to make a
 policy decision to change our approach, we should be up-front
 about that.

What?!?  There have been catversion bumps between the REL?_?_BETA1
tag and the REL?_?_0 tag for 8.2, 8.3, 9.0, 9.1, 9.3, and 9.4.
(That is, it has happened on 6 of the last 8 releases.)  I don't
think we're talking about any policy change here.  We try to avoid
a catversion bump after beta if we can; we're not that reluctant to
do so if needed.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Further issues with jsonb semantics, documentation

2015-06-06 Thread Andrew Dunstan


On 06/05/2015 04:48 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

Yeah, Good point. Actually, if my memory serves me correctly (always a
dubious bet), the avoidance of that kind of ambiguity is why we
introduced the #> and #>> operators in the first place, after going
round and round for a while on what the API would look like. I should
have remembered that when this came around. Mea culpa.
So probably the least invasive change would be to rename the text[]
variant operator to something like #- and rename the corresponding
function to jsonb_delete_path.

Not sure that's a great choice of operator name; consider for example
select 4#-1;
It's not immediately obvious whether the "-" is meant as a separate
unary minus.  There are heuristics in the lexer that try to deal with
cases like this, but it doesn't seem like a good plan to double down
on such heuristics always doing the right thing.





Perhaps we should deprecate operator names ending in "-"?

cheers

andrew





Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Geoff Winkless
On 6 June 2015 at 13:41, Sehrope Sarkuni sehr...@jackdb.com wrote:

 On Sat, Jun 6, 2015 at 6:47 AM, Geoff Winkless pgsqlad...@geoff.dj
 wrote:
  To play devil's advocate for a moment, is there anyone who would
  genuinely be prepared to download and install an alpha release who
  would not already have downloaded one of the nightlies? I only ask
  because I assume that releasing an alpha is not zero-developer-cost
  and I don't believe that there's a large number of people who would
  be happy to install something that's described as being buggy and
  subject to change but are put off by having to type "configure" and
  "make".

 I fit into that category and I would guess there would be others as
 well. Having system packages available via an "apt-get install ..."
 lowers the bar significantly to try things out.


But it also lowers the bar to the extent that you get the people who won't
read the todo list and end up complaining about the things that everyone
already knows about.


 It's much easier to work into dev/test setups if there are system
 packages as it's just a config change to an existing script. Building
 from source would require a whole new workflow that I don't have time
 to incorporate.


Really? You genuinely don't have time to paste, say:

mkdir -p ~/src/pgdevel
cd ~/src/pgdevel
wget https://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.bz2
tar xjf postgresql-snapshot.tar.bz2
mkdir bld
cd bld
../postgresql-9.5devel/configure $(pg_config --configure | sed -e
's/\(pg\|postgresql[-\/]\)\(doc-\)\?9\.[0-9]*\(dev\)\?/\1\29.5dev/g')
make world
make check
make world-install

and yet you think you have enough time to provide more than a "looks like
it's working" report to the developers?

(NB the sed for the pg_config line will probably need work; it looks like
it should work on the two types of system I have here, but I have to admit I
changed the config line manually when I built it.)



  Further, it seems to me that the number of people who won't roll their
 own who are useful as bug-finders is even smaller.

 That's probably true but they definitely won't find any bugs if they
 don't test at all.

 If it's possible to have automated packaging, even for just a subset
 of platforms, I think that'd be useful.


Well yes, automated packaging of the nightly build, that doesn't involve
the developers having to stop what they're doing to write official alpha
release docs or any of the other stuff that goes along with doing a
release, would be zero-impact on development (assuming the developers
didn't have to build or maintain the auto-packager) and therefore any
return (however small) would make it worthwhile.

Fancy building (and maintaining) the auto-packaging system, and managing a
mailing list for its users?

Geoff


Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Sehrope Sarkuni
On Sat, Jun 6, 2015 at 10:35 AM, Geoff Winkless pgsqlad...@geoff.dj wrote:
 Really? You genuinely don't have time to paste, say:

 mkdir -p ~/src/pgdevel
 cd ~/src/pgdevel
 wget https://ftp.postgresql.org/pub/snapshot/dev/postgresql-snapshot.tar.bz2
 tar xjf postgresql-snapshot.tar.bz2
 mkdir bld
 cd bld
 ../postgresql-9.5devel/configure $(pg_config --configure | sed -e 
 's/\(pg\|postgresql[- \/]\)\(doc-\)\?9\.[0-9]*\(dev\)\?/\1\29.5dev/g')
 make world
 make check
 make world-install

 and yet you think you have enough time to provide more than a "looks like
 it's working" report to the developers?

Adding steps to an existing process to fetch and build from source is
significantly more complicated than flipping a version number. And I'm
not trying to run PG's built-in tests on my machine. I want to run the
tests for my applications, and ideally, my applications themselves.

If doing so leads me to find that something doesn't work then of
course I would research and report the cause. At that point it's
something that I know will directly affect me if it's not fixed!

 Well yes, automated packaging of the nightly build, that doesn't involve the 
 developers having to stop what they're doing to write official alpha release 
 docs or any of the other stuff that goes along with doing a release, would be 
 zero-impact on development (assuming the developers didn't have to build or 
 maintain the auto-packager) and therefore any return (however small) would 
 make it worthwhile.
 Fancy building (and maintaining) the auto-packaging system, and managing a 
 mailing list for its users?

I don't have much experience in setting things like this up so I'm not
one to estimate the work load involved. If it existed though, I'd use
it.

Regards,
-- Sehrope Sarkuni
Founder & CEO | JackDB, Inc. | https://www.jackdb.com/




[HACKERS] Initializing initFileRelationIds list via write is unsafe

2015-06-06 Thread Tom Lane
I wrote:
 I've been chasing the intermittent "cache lookup failed for access method
 403" failure at session startup that's been seen lately in the buildfarm,
 for instance here:
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2015-06-04%2019%3A22%3A46
 (Axolotl has shown this 3 times in the last 90 days, not sure if any
 others have seen it.)  I hypothesized that this was triggered by the
 "VACUUM FULL pg_am" in the concurrently running vacuum.sql regression
 test, so I started running the regression tests in parallel with a
 shell script doing
 while sleep 0.1; do psql -c 'vacuum full pg_am' regression; done
 and sure enough, I can reproduce it once in awhile.

I've tracked down what's happening here, and it's pretty nasty.
relcache.c keeps a list of the OIDs of all system catalogs/indexes that
are in the relcache init file, which it uses to decide whether the init
file needs to be updated after a relcache invalidation has been issued
against some relation.  That list is created in one of two ways:

1. If we read in an init file, we just record all the entries we saw.

2. If we write out an init file, we just record all the entries we wrote
out.

Before doing #2, we open all the catalog caches, which should result in
populating the relcache with all the interesting system catalogs and
indexes.

Now there's a race condition here, which is that we might receive an
invalidation message telling us that one of those relcache entries is
stale before we get done writing it out.  Fine, we check for that, and
if it happens then we discard the already-stale init file instead of
installing it as valid.  That all works.  BUT: the session keeps on
using the initFileRelationIds list that it constructed while writing
the bogus init file.  Processing of the invalidation message might've
resulted in flushing one or more relcache entries, so that they were
never written out and thus are missing from the initFileRelationIds
list.  This can lead to wrong decisions later in the session about
whether its own actions require init file updates.

To be concrete, here's the sequence of events that I see happening
in the above-described test case:

1. The "VACUUM FULL pg_am" in the vacuum.sql regression test rewrites
pg_am, sends out a relcache inval for pg_am, and unlinks the init file.
(Actually the init file was probably already gone from other activity.)

2. An incoming session from the manual script fails to open the init
file, so it loads up its catalog caches.  Somewhere pretty late in that
sequence, it notices the relcache inval for pg_am and flushes its pg_am
relcache entry, which it then has no need to reload later in the
sequence.  So when it goes to write the init file, it doesn't write a
pg_am entry and doesn't add pg_am's OID to the initFileRelationIds list
either.  At the end of that, it correctly notices that what it's written
is stale, so it doesn't install a busted init file ... but it does sally
on with a busted initFileRelationIds list.

3. Meanwhile, some other regression-test session that arrived slightly
later successfully builds and installs a valid init file.

4. When the manual script session does "vacuum full pg_am", it does not
realize that there ought to be an init file invalidation event from that.
So the init file created by step 3 is left alone, even though it now
contains obsolete data about the relfilenode of pg_am.

5. Specifically, the init file's pg_am entry is referencing a physical
disk file that has been truncated to zero size and is pending deletion at
the next checkpoint.  So, when incoming sessions decide they need to fetch
a pg_am entry, the symptom is "row not found" rather than "file not
found".  That matches the failure symptom we see in the buildfarm.

Now, this test case seems to require two successive vacuum fulls on pg_am
in order to create the observed problem, which is something that doesn't
occur in the regression tests themselves, so how come we saw it in the
buildfarm?  I think the answer is probably that an sinval overflow/reset
event can substitute for step 1.  I tried instrumenting
RelationCacheInitializePhase3 to log a complaint anytime the list length
wasn't the currently-expected 94 entries, and got results like these:

LOG:  length(initFileRelationIds) = 11 in pid 2054
LOG:  length(initFileRelationIds) = 93 in pid 2298
LOG:  length(initFileRelationIds) = 93 in pid 4631
LOG:  length(initFileRelationIds) = 11 in pid 4684
LOG:  length(initFileRelationIds) = 11 in pid 4680
LOG:  length(initFileRelationIds) = 11 in pid 4695
LOG:  length(initFileRelationIds) = 11 in pid 4692
LOG:  length(initFileRelationIds) = 11 in pid 4682
LOG:  length(initFileRelationIds) = 11 in pid 4687
LOG:  length(initFileRelationIds) = 93 in pid 5722

The events with one missing entry are probably consequences of the
sequence described above, but the events with only 11 entries seem like
they must reflect an sinval overflow causing everything not nailed in
cache to be lost.  (Note that there 

Re: [HACKERS] CREATE POLICY and RETURNING

2015-06-06 Thread Peter Geoghegan
On Fri, Oct 17, 2014 at 5:34 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-10-17 14:57:03 +0800, Craig Ringer wrote:
 On 10/17/2014 02:49 AM, Robert Haas wrote:
  I think you could probably make the DELETE policy control what can get
  deleted, but then have the SELECT policy further filter what gets
  returned.

 That seems like the worst of both worlds to me.

 Suddenly DELETE ... RETURNING might delete more rows than it reports a
 resultset for. As well as being potentially dangerous for people using
 it in wCTEs, etc, to me that's the most astonishing possible outcome of all.

 I'd be much happier with even:

   ERROR: RETURNING not permitted with SELECT row-security policy

 FWIW, that doesn't sound acceptable to me.

This is more or less what ended up happening with UPSERT and USING
security barrier quals on UPDATE/ALL policies. Realistically, the
large majority of use cases don't involve a user being able to
INSERT/DELETE tuples, but not SELECT them, and those that do should
not be surprised to have a RETURNING fail (it's an odd enough union of
different features that this seems acceptable to me).

Like Fujii, I think that RETURNING with RLS should not get to avoid
SELECT policies. I agree with the concern about not seeing affected
rows with a DELETE (which, as I said, is very similar to UPSERT +
WCO_RLS_CONFLICT_CHECK policies), so an error seems like the only
alternative.

The argument against not requiring SELECT *column* privilege on the
EXCLUDED.* pseudo relation for UPSERT might have been: "well, what can
be the harm of allowing the user to see what they themselves might
have inserted?". But that would have been a bad argument then had
anyone made it, because RETURNING with a (vanilla) INSERT requires
SELECT privilege, and that's also what the user then actually inserted
(as distinct from what the user *would have* inserted had the insert
path been taken, represented as the EXCLUDED.* pseudo relation --
for security purposes, ISTM that this is really no distinction at
all). Consider before-row-insert triggers that can modify EXCLUDED.*
tuples in a privileged way.

So, the only logical reason that INSERT with RETURNING requires SELECT
column privilege that I can see is that a before row INSERT trigger
could modify the tuple inserted in a way that the inserter role should
not know the details of. This long standing convention was reason
enough to mandate that SELECT column privilege be required for the
EXCLUDED.* pseudo relation for UPSERT. And so, I think it isn't too
much of a jump to also say that we should do the same for RLS (for
INSERTs for the reason I state, but also for UPDATEs and DELETEs for a
far more obvious reason: the *existing* tuple can be projected, and
the updater/deleter might well have no business seeing its contents).

In short: I think we should be tracking a new WCOKind (perhaps
WCO_RLS_RETURNING_CHECK?), that independently holds the security
barrier quals as WCO-style checks when that's recognized as being
necessary. For INSERT, these WCOs must be enforced against the target
tuple projected by RETURNING. For UPDATEs and DELETEs, FROM/USING
relations must also have SELECT privilege enforced against the
projected target tuple, as well as the non-target relation --
apparently the latter isn't currently happening, although Dean has
tried to address this with his recent patch [1]. That is, even
non-target relations (UPDATE ... FROM relations, or DELETE ... USING
relations) do not have SELECT policy enforcement, but rather have
UPDATE or DELETE policy enforcement only. I must admit that I was
rather surprised at that; it has to be a bug.

[1] 
http://www.postgresql.org/message-id/caezatcve7hdtfzgcjn-oevvawbtbgg8-fbch9vhdbhuzrsw...@mail.gmail.com
-- 
Peter Geoghegan




[HACKERS] RLS fails to work with UPDATE ... WHERE CURRENT OF

2015-06-06 Thread Peter Geoghegan
Attached test case patch shows how RLS fails to play nice with UPDATE
... WHERE CURRENT OF. If you run the revised rowsecurity regression
test against the master branch, the tests do not pass (which, ideally,
they would -- the "expected" output is actually what I expect here):

*** /home/pg/postgresql/src/test/regress/expected/rowsecurity.out
2015-06-06 15:04:02.142084059 -0700
--- /home/pg/postgresql/src/test/regress/results/rowsecurity.out
2015-06-06 15:04:09.014083800 -0700
***
*** 2771,2780 

  -- Still cannot UPDATE row through cursor:
  UPDATE current_check SET payload = payload || '_new' WHERE CURRENT
OF current_check_cursor RETURNING *;
!  currentid | payload | rlsuser
! ---+-+-
! (0 rows)
! commit;
  --
  -- Clean up objects
  --
--- 2771,2778 

  -- Still cannot UPDATE row through cursor:
  UPDATE current_check SET payload = payload || '_new' WHERE CURRENT
OF current_check_cursor RETURNING *;
! ERROR:  WHERE CURRENT OF is not supported for this table type
! COMMIT;
  --
  -- Clean up objects
  --

==

What's actually occurring here is that the executor imagines that this
involves a foreign table scan (although I suppose it's equivocating a
little bit by not saying so explicitly) -- ExecEvalCurrentOfExpr()
comments imply that that's the only reason why control should reach it
in practice. It looks like RLS has added a new way that CURRENT OF can
fail to be made into a TidScan qualification. It doesn't look like
Dean's most recent round of RLS fixes [1] addressed this case, based
on his remarks. This non-support of WHERE CURRENT OF certainly isn't
documented, and so looks like a bug.

Unfortunately, the fact that WHERE CURRENT OF doesn't already accept
additional qualifications doesn't leave me optimistic about this bug
being easy to fix -- consider the gymnastics performed by commit
c29a9c37 to get an idea of what I mean. Maybe it should just be
formally desupported with RLS, as a stopgap solution for 9.5.

[1] 
http://www.postgresql.org/message-id/caezatcve7hdtfzgcjn-oevvawbtbgg8-fbch9vhdbhuzrsw...@mail.gmail.com
-- 
Peter Geoghegan
diff --git a/src/test/regress/expected/rowsecurity.out b/src/test/regress/expected/rowsecurity.out
index 0ae5557..1c4c551 100644
--- a/src/test/regress/expected/rowsecurity.out
+++ b/src/test/regress/expected/rowsecurity.out
@@ -2729,6 +2729,52 @@ COPY copy_t FROM STDIN; --fail - permission denied.
 ERROR:  permission denied for relation copy_t
 RESET SESSION AUTHORIZATION;
 DROP TABLE copy_t;
+-- Check WHERE CURRENT OF
+SET SESSION AUTHORIZATION rls_regress_user0;
+CREATE TABLE current_check (currentid int, payload text, rlsuser text);
+GRANT ALL ON current_check TO PUBLIC;
+INSERT INTO current_check VALUES
+(1, 'abc', 'rls_regress_user1'),
+(2, 'bcd', 'rls_regress_user1'),
+(3, 'cde', 'rls_regress_user1'),
+(4, 'def', 'rls_regress_user1'),
+(5, 'efg', 'rls_regress_user1'),
+(6, 'fgh', 'rls_regress_user1'),
+(7, 'fgh', 'rls_regress_user1'),
+(8, 'fgh', 'rls_regress_user1');
+CREATE POLICY p1 ON current_check FOR SELECT USING (currentid % 2 = 0);
+CREATE POLICY p3 ON current_check FOR UPDATE USING (currentid = 4) WITH CHECK (rlsuser = current_user);
+ALTER TABLE current_check ENABLE ROW LEVEL SECURITY;
+SET SESSION AUTHORIZATION rls_regress_user1;
+-- Can SELECT this row just fine:
+SELECT * FROM current_check WHERE currentid = 2;
+ currentid | payload |  rlsuser  
+---+-+---
+ 2 | bcd | rls_regress_user1
+(1 row)
+
+-- Cannot UPDATE it, though:
+UPDATE current_check SET payload = payload || '_new' WHERE currentid = 2 RETURNING *;
+ currentid | payload | rlsuser 
+---+-+-
+(0 rows)
+
+BEGIN;
+DECLARE current_check_cursor SCROLL CURSOR FOR SELECT * FROM current_check;
+-- Returns rows that can be seen according to SELECT policy, like plain SELECT
+-- above:
+FETCH ABSOLUTE 1 FROM current_check_cursor;
+ currentid | payload |  rlsuser  
+---+-+---
+ 2 | bcd | rls_regress_user1
+(1 row)
+
+-- Still cannot UPDATE row through cursor:
+UPDATE current_check SET payload = payload || '_new' WHERE CURRENT OF current_check_cursor RETURNING *;
+ currentid | payload | rlsuser 
+---+-+-
+(0 rows)
+COMMIT;
 --
 -- Clean up objects
 --
diff --git a/src/test/regress/sql/rowsecurity.sql b/src/test/regress/sql/rowsecurity.sql
index fdadf99..aeed5b1 100644
--- a/src/test/regress/sql/rowsecurity.sql
+++ b/src/test/regress/sql/rowsecurity.sql
@@ -1087,6 +1087,46 @@ COPY copy_t FROM STDIN; --fail - permission denied.
 RESET SESSION AUTHORIZATION;
 DROP TABLE copy_t;
 
+-- Check WHERE CURRENT OF
+SET SESSION AUTHORIZATION rls_regress_user0;
+
+CREATE TABLE current_check (currentid int, payload text, rlsuser text);
+GRANT ALL ON current_check TO PUBLIC;
+
+INSERT INTO current_check VALUES
+(1, 'abc', 

[HACKERS] error message diff with Perl 5.22.0

2015-06-06 Thread Peter Eisentraut
With the recently released Perl 5.22.0, the tests fail thus:

-ERROR:  Global symbol "$global" requires explicit package name at line 3.
-Global symbol "$other_global" requires explicit package name at line 4.
+ERROR:  Global symbol "$global" requires explicit package name (did you forget 
to declare "my $global"?) at line 3.
+Global symbol "$other_global" requires explicit package name (did you forget 
to declare "my $other_global"?) at line 4.
 CONTEXT:  compilation of PL/Perl function uses_global


With PL/Python, this happens for just about every other release, and we usually 
add another expected file.  I don't see anything like that for PL/Perl yet.  
Should we add a new expected file, or is there a different preferred solution?




Re: [HACKERS] error message diff with Perl 5.22.0

2015-06-06 Thread Alvaro Herrera
Peter Eisentraut wrote:
 With the recently released Perl 5.22.0, the tests fail thus:
 
 -ERROR:  Global symbol "$global" requires explicit package name at line 3.
 -Global symbol "$other_global" requires explicit package name at line 4.
 +ERROR:  Global symbol "$global" requires explicit package name (did you 
 forget to declare "my $global"?) at line 3.
 +Global symbol "$other_global" requires explicit package name (did you forget 
 to declare "my $other_global"?) at line 4.
  CONTEXT:  compilation of PL/Perl function uses_global
 
 
 With PL/Python, this happens for just about every other release, and we 
 usually add another expected file.  I don't see anything like that for 
 PL/Perl yet.  Should we add a new expected file, or is there a different 
 preferred solution?

How many .sql files does this affect?  Alternate expected output is
bothersome; if more than one test file is affected, I think the best is
to isolate the cases where this appears to a single .sql file, as short
as possible, so that we don't have to touch it for anything else, and so
that we don't have to touch the isolated file except for similar
changes.

Also, do we need a buildfarm member running 5.22?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Joshua D. Drake


On 06/06/2015 07:14 PM, Peter Geoghegan wrote:


On Sat, Jun 6, 2015 at 7:07 PM, Robert Haas robertmh...@gmail.com wrote:

Perhaps we're honoring this more in the breech than in the observance,
but I'm not making up what Tom has said about this:

http://www.postgresql.org/message-id/27310.1251410...@sss.pgh.pa.us
http://www.postgresql.org/message-id/19174.1299782...@sss.pgh.pa.us
http://www.postgresql.org/message-id/3413.1301154...@sss.pgh.pa.us
http://www.postgresql.org/message-id/3261.1401915...@sss.pgh.pa.us


Of course, not doing a catversion bump after beta1 doesn't necessarily
have much value in and of itself. *Promising* to not do a catversion
bump, and then usually keeping that promise definitely has a certain
value, but clearly we are incapable of that.



It seems to me that a catversion bump during alpha should be absolutely 
fine, and during beta reservedly fine. Where we should absolutely not 
bump catversion, unless there is absolutely no other choice, is during an RC.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [HACKERS] Initializing initFileRelationIds list via write is unsafe

2015-06-06 Thread Tom Lane
I wrote:
 I've been chasing the intermittent "cache lookup failed for access method
 403" failure at session startup that's been seen lately in the buildfarm,
 for instance here:
 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2015-06-04%2019%3A22%3A46
 (Axolotl has shown this 3 times in the last 90 days, not sure if any
 others have seen it.)  I hypothesized that this was triggered by the
 "VACUUM FULL pg_am" in the concurrently running vacuum.sql regression
 test, so I started running the regression tests in parallel with a
 shell script doing
  while sleep 0.1; do psql -c 'vacuum full pg_am' regression; done
 and sure enough, I can reproduce it once in awhile.

BTW, while this recipe works in HEAD (it might take 40 or 50 cycles
of the regression tests, but it works), I've been unable to reproduce
the failure this way in any back branch.  I'm not entirely sure why,
but I'm suspicious that it's because HEAD has more tests running
concurrently with vacuum.sql than there used to be.

However, I can reproduce the failure with 100% reliability in all
branches by injecting a conditional cache reset just before writing
the init file.  For example, apply this patch:

diff --git a/src/backend/utils/cache/relcache.c 
b/src/backend/utils/cache/relcache.c
index f60f3cb..846d880 100644
*** a/src/backend/utils/cache/relcache.c
--- b/src/backend/utils/cache/relcache.c
*** RelationCacheInitializePhase3(void)
*** 3489,3494 
--- 3489,3498 
 */
InitCatalogCachePhase2();
  
+   /* If superuser, force cache flush */
+   if (superuser())
+   RelationCacheInvalidate();
+ 
/* reset initFileRelationIds list; we'll fill it during write */
initFileRelationIds = NIL;
  

then create at least one non-superuser user, then do this:

1. Connect as superuser and do "vacuum full pg_am".  (This causes the
current relcache init file to be removed.)

2. Disconnect and reconnect as superuser.  (The new session goes through
the above logic, so that it sees a relcache flush occur just at the
critical time.  It correctly doesn't write a bogus init file, but
it's left with an initFileRelationIds list with only 11 entries.)

3. In another window, connect as a non-superuser user.  (This session
will successfully write a new init file, since the above hack doesn't
trigger in it.)

4. In the session from step 2, again do "vacuum full pg_am".  (Now
the bug manifests: we fail to remove the init file although it is
now stale.)

5. Now things are broken: all new sessions fail with
psql: FATAL:  cache lookup failed for access method 403


So that's mainly for the archives, to document a reliable way to test
the problem.

However, there's an interesting takeaway from this.  Since this problem
is triggered by a cache flush at just the wrong time, you might hope
that the buildfarm's CLOBBER_CACHE_ALWAYS critters would have caught it.
The reason they fail to do so is that, because they *always* flush at
every opportunity, the relcache write code *always* fails and so such
a build never creates an init file that could then become stale.

This suggests that CLOBBER_CACHE_ALWAYS is actually missing a pretty
large part of the cache behavioral space.  Maybe we should devise some
sort of CLOBBER_CACHE_RANDOMLY option that would inject cache flush
events more selectively, perhaps only once every thousand opportunities
or so.  And perhaps not only full cache reset events, though I confess
to not being sure what that ought to look like.

regards, tom lane




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Robert Haas
On Sat, Jun 6, 2015 at 12:33 PM, Kevin Grittner kgri...@ymail.com wrote:
 Robert Haas robertmh...@gmail.com wrote:
 Tom, for example, has previously not wanted to even bump
 catversion after beta1, which rules out a huge variety of
 possible fixes and interface changes.  If we want to make a
 policy decision to change our approach, we should be up-front
 about that.

 What?!?  There have been catversion bumps between the REL?_?_BETA1
 tag and the REL?_?_0 tag for 8.2, 8.3, 9.0, 9.1, 9.3, and 9.4.
 (That is, it has happened on 6 of the last 8 releases.)  I don't
 think we're talking about any policy change here.  We try to avoid
 a catversion bump after beta if we can; we're not that reluctant to
 do so if needed.

Perhaps we're honoring this more in the breach than in the observance,
but I'm not making up what Tom has said about this:

http://www.postgresql.org/message-id/27310.1251410...@sss.pgh.pa.us
http://www.postgresql.org/message-id/19174.1299782...@sss.pgh.pa.us
http://www.postgresql.org/message-id/3413.1301154...@sss.pgh.pa.us
http://www.postgresql.org/message-id/3261.1401915...@sss.pgh.pa.us

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [CORE] Restore-reliability mode

2015-06-06 Thread Peter Geoghegan
On Sat, Jun 6, 2015 at 7:07 PM, Robert Haas robertmh...@gmail.com wrote:
 Perhaps we're honoring this more in the breach than in the observance,
 but I'm not making up what Tom has said about this:

 http://www.postgresql.org/message-id/27310.1251410...@sss.pgh.pa.us
 http://www.postgresql.org/message-id/19174.1299782...@sss.pgh.pa.us
 http://www.postgresql.org/message-id/3413.1301154...@sss.pgh.pa.us
 http://www.postgresql.org/message-id/3261.1401915...@sss.pgh.pa.us

Of course, not doing a catversion bump after beta1 doesn't necessarily
have much value in and of itself. *Promising* not to do a catversion
bump, and then usually keeping that promise, definitely has a certain
value, but clearly we are incapable of that.

-- 
Peter Geoghegan




Re: [HACKERS] error message diff with Perl 5.22.0

2015-06-06 Thread Alvaro Herrera
Alvaro Herrera wrote:

 Also, do we need a buildfarm member running 5.22?

Actually, I wonder if there's a way to have a buildfarm animal that runs
the pl/perl tests with all supported versions of Perl, for example.
This would probably require adding a new .pm file each time a new Perl
is released.  Is this doable?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [Proposal] More Vacuum Statistics

2015-06-06 Thread Naoya Anzai
Thank you for the quick feedback, and I'm sorry for the slow response.
All of your opinions were very helpful to me.

I have looked at Greg's "Timing events" idea.
http://www.postgresql.org/message-id/509300f7.5000...@2ndquadrant.com

Greg said at the outset, "Parsing log files for commonly needed
performance data is no fun."
Yes, I completely agree with him.

That looks like a nice idea, but I don't know why it has not been
committed yet. Does anybody know?

I have reworked my idea based on the hackers' opinions.


pg_stat_vacuum view


I understand it is not good to simply add more counters to
pg_stat_*_tables. For now, I'd like to suggest an extension
which exposes vacuum statistics, like pg_stat_statements.

VACUUM is one of the most important features in PostgreSQL, but no
dedicated statistics view for vacuum exists. Don't you find that
inconvenient? At least, I am disgruntled that we need to parse
pg_log to tune VACUUM.

My first design for the pg_stat_vacuum view follows.
(There are two views.)

pg_stat_vacuum_table
---
dbid
schemaname
relid
relname
elapsed
page_removed
page_remain
page_skipped
tuple_removed
tuple_remain
tuple_notremovable
buffer_hit
buffer_miss
buffer_dirty
avg_read
avg_write
vm_count
vac_start
vac_end
is_autovacuum

pg_stat_vacuum_index
---
dbid
schemaname
relid
indexrelid
indexname
elapsed
num_index_tuples
num_pages
tuples_removed
pages_deleted
pages_free
is_autovacuum

At present, I think the memory design of pg_stat_statements can be
reused for this feature. And I think this module needs to provide the
following parameters, like pg_stat_statements:

pg_stat_vacuum.max(integer)
pg_stat_vacuum.save(boolean)
pg_stat_vacuum.excluded_dbnames(text)
pg_stat_vacuum.excluded_schemas(text)
pg_stat_vacuum.min_duration(integer)
... and so on.

To implement this feature, I have to collect vacuum statistics from
each lazy_vacuum_* function, and I need to embed a hook point where
needed (probably at the end of lazy_vacuum_rel).
Would you hesitate to add a hook only for this purpose?

A similar feature is already provided by the pg_statsinfo package.
But that is a full-stack package for PostgreSQL statistics, and
introducing it requires redesigning pg_log and designing a repository
database. And it is not a core extension for PostgreSQL.
(I don't intend to bash pg_statsinfo;
 I think it is a very convenient tool.)

This will let everyone tune VACUUM more easily.
That's all I want.

Any comments are welcome!

Best Regards,

Naoya Anzai

---
Naoya Anzai
Engineering Department
NEC Solution Innovators, Ltd.
E-Mail: nao-an...@xc.jp.nec.com
---




Re: [HACKERS] Reducing tuple overhead

2015-06-06 Thread Amit Kapila
On Thu, Apr 23, 2015 at 9:54 PM, Andres Freund and...@anarazel.de wrote:

 Split into a new thread, the other one is already growing fast
 enough. This discussion started at
 http://archives.postgresql.org/message-id/55391469.5010506%40iki.fi

 On April 23, 2015 6:48:57 PM GMT+03:00, Heikki Linnakangas 
hlinn...@iki.fi wrote:
 Stop right there. You need to reserve enough space on the page to store
 
 an xmax for *every* tuple on the page. Because if you don't, what are
 you going to do when every tuple on the page is deleted by a different
 transaction.
 
 Even if you store the xmax somewhere else than the page header, you
 need
 to reserve the same amount of space for them, so it doesn't help at
 all.

 Depends on how you do it and what you optimize for (disk space, runtime,
 code complexity..).  You can e.g. apply a somewhat similar trick to
 xmin/xmax as done to cmin/cmax; only that the data structure needs to be
 persistent.

Today, while reading how other databases (that store similar information
at the page level) tackle this problem, I came across a link [1] which
indicates that this is controlled by some clauses (options) at the table
level.  The idea seems to be to have some preallocated space in the page
for this information (the minimum and maximum values can be specified by
the user; of course there are defaults for both), and if a concurrent
write operation requires more space than that, the operation needs to
wait until space becomes available.

I am not sure if this is the best way, as it depends on how the
preallocated space for transaction information at the page level is
re-used, but I still think it is worth considering.


[1] -
https://techiedba.wordpress.com/2011/09/03/what-is-initrans-and-maxtrans/


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com