Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread Dean Rasheed
On 27 November 2014 at 19:20, Tom Lane t...@sss.pgh.pa.us wrote:
 The attached proposed patch adds bms_next_member() and replaces
 bms_first_member() calls where it seemed to make sense.  I've had a
 hard time measuring much speed difference for this patch in isolation,
 but in principle it should be less code and less cycles.  It also seems
 safer and more natural to not use destructive looping techniques.


+1. I had a similar idea a while back but didn't have time to produce
a complete patch.

There is another micro-optimisation that you could make in
bms_next_member() -- it isn't necessary to do

w = RIGHTMOST_ONE(w)

because unlike bms_first_member(), w isn't being used to mask out the bit
retrieved, so any higher bits don't matter and the later use of
rightmost_one_pos[...] will pick out the required rightmost bit.

Should this function protect against large negative inputs? As it
stands, passing in a value of prevbit less than -1 would be
problematic. Maybe it's sufficient to say don't do that in the docs,
rather than waste more cycles checking.
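To make the point concrete, here is a toy single-word version of such a scan (a sketch only, not the actual bitmapset.c code; the function name, 32-bit word width, and shift loop are simplifications — the real code uses a bitmapword array and a lookup table):

```c
#include <stdint.h>

/*
 * Toy, single-word version of a bms_next_member()-style scan: return the
 * lowest set bit of w whose position is greater than prevbit, or -1.
 */
static int
next_member_word(uint32_t w, int prevbit)
{
    prevbit++;                      /* first candidate bit position */
    if (prevbit >= 32)
        return -1;
    w >>= prevbit;                  /* discard bits already visited */
    if (w == 0)
        return -1;

    /*
     * The observation above: there is no need to first mask w down to its
     * rightmost one bit (RIGHTMOST_ONE), because locating the lowest set
     * bit -- a shift loop here, a lookup table in the real code -- already
     * ignores all higher bits.
     */
    int lowbit = 0;
    while ((w & 1) == 0)
    {
        w >>= 1;
        lowbit++;
    }
    return prevbit + lowbit;
}
```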

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] inherit support for foreign tables

2014-11-28 Thread Ashutosh Bapat
On Thu, Nov 27, 2014 at 3:52 PM, Etsuro Fujita fujita.ets...@lab.ntt.co.jp
wrote:

 (2014/11/17 17:55), Ashutosh Bapat wrote:

 Here are my review comments for patch fdw-inh-3.patch.


 Thanks for the review!

  Tests
 ---
 1. It seems like you have copied from testcase inherit.sql to
 postgres_fdw testcase. That's a good thing, but it makes the test quite
 long. May be we should have two tests in postgres_fdw contrib module,
 one for simple cases, and other for inheritance. What do you say?


 IMO, the test is not so time-consuming, so it doesn't seem worth splitting
 it into two.


I am not worried about the timing but I am worried about the length of the
file and hence ease of debugging in case we find any issues there. We will
leave that for the committer to decide.



  Documentation
 
 1. The change in ddl.sgml
 -We will refer to the child tables as partitions, though they
 -are in every way normal <productname>PostgreSQL</> tables.
 +We will refer to the child tables as partitions, though we assume
 +that they are normal <productname>PostgreSQL</> tables.

 adds the phrase "we assume that", which confuses the intention behind the
 sentence. The original text is intended to highlight the equivalence
 between partition and normal table, whereas the addition, esp. the
 word "assume", weakens that equivalence. Instead now we have to highlight
 the equivalence between partition and normal or foreign table. Wording
 similar to "though they are either normal or foreign tables"
 should be used there.


 You are right, but I feel that there is something out of place in saying
 that there (5.10.2. Implementing Partitioning) because the procedure there
 has been written based on normal tables only.  Put another way, if we say
 that, I think we'd need to add more docs, describing the syntax and/or the
 corresponding examples for foreign-table cases.  But I'd like to leave that
 for another patch.  So, how about the wording "we assume *here* that",
 instead of "we assume that", as I added the following notice in the
 previous section (5.10.1. Overview)?

 @@ -2650,7 +2669,10 @@ VALUES ('Albany', NULL, NULL, 'NY');
  table of a single parent table.  The parent table itself is normally
  empty; it exists just to represent the entire data set.  You should be
  familiar with inheritance (see <xref linkend="ddl-inherit">) before
 -attempting to set up partitioning.
 +attempting to set up partitioning.  (The setup and management of
 +partitioned tables illustrated in this section assume that each
 +partition is a normal table.  However, you can do that in a similar way
 +for cases where some or all partitions are foreign tables.)


This looks OK, though I would like to see the final version of the document.
But I think we will leave that for the committer to handle.


  2. The wording "some kind of optimization" gives a vague picture. Maybe
 it can be worded as "Since the constraints are assumed to be true, they
 are used in constraint-based query optimization like constraint
 exclusion for partitioned tables.".
 +Those constraints are used in some kind of query optimization such
 +as constraint exclusion for partitioned tables (see
 +<xref linkend="ddl-partitioning">).


 Will follow your revision.


Thanks.



  Code
 ---
 1. In the following change
 +/*
* acquire_inherited_sample_rows -- acquire sample rows from
 inheritance tree
*
* This has the same API as acquire_sample_rows, except that rows are
* collected from all inheritance children as well as the specified
 table.
 - * We fail and return zero if there are no inheritance children.
 + * We fail and return zero if there are no inheritance children or
 there are
 + * inheritance children that foreign tables.

 The addition should be "there are inheritance children that *are all*
 foreign tables". Note the addition "are all".


 Sorry, I incorrectly wrote the comment.  What I tried to write is We fail
 and return zero if there are no inheritance children or if we are not in
 VAC_MODE_SINGLE case and inheritance tree contains at least one foreign
 table..


You might want to use an English description of VAC_MODE_SINGLE instead of
that macro in the comment, so that the reader doesn't have to look up
VAC_MODE_SINGLE. But I think we will leave this for the committer.


  2. Would the function has_foreign() be better named has_foreign_child()? This


 How about has_foreign_table?


has_foreign_child() would be better, since these are children in the
inheritance hierarchy and not mere tables.



  function loops through all the tableoids passed even after it has found
  a foreign table. Why can't we return true immediately after finding the
  first foreign table, unless the side effects of heap_open() on all the
  tables are required? But I don't see that to be the case, since these
  tables are locked already through a previous call to heap_open(). In the


 Good catch!  Will fix.

  same function 

Re: [HACKERS] Fillfactor for GIN indexes

2014-11-28 Thread Alexander Korotkov
Hi!

On Fri, Nov 21, 2014 at 8:12 AM, Michael Paquier michael.paqu...@gmail.com
wrote:

 Please find attached a simple patch adding fillfactor as storage parameter
 for GIN indexes. The default value is the same as the one currently aka 100
 to have the pages completely packed when a GIN index is created.


That's not true. Let us discuss it a little bit.
In the btree access method, the index is created by sorting index tuples. Once
they are sorted, it can write a tree with any fullness of the pages. With
completely filled pages you will get a flood of index page splits on table
updates or inserts. In order to avoid that, the default fillfactor is 90.
Besides index creation, the btree access method tries to follow the given
fillfactor in the case of inserting ordered data. When the rightmost page
splits, then fillfactor percent of the data goes to the left page.
The btree page life cycle runs between being freshly branched by a split (50%
filled) and being fully packed (100% filled) and then split again. Thus we can
roughly estimate the average fullness of a btree page to be 75%. This
equilibrium state is reached after a huge number of inserts over the btree
index.

Fillfactor was spread to other access methods, for instance, GiST. In
GiST, the tree is always constructed by page splits. So, a freshly created GiST
index is already in its equilibrium state. Even more, GiST doesn't split pages
by halves: the split ratio depends on the opclass. So, the equilibrium fullness
of a particular GiST index is even less than 75%. What does fillfactor do with
GiST? It makes GiST pages split at fillfactor fullness during index build. So,
GiST is even less full. But why? You won't get a flood of splits on a
fresh GiST index anyway, because it's in the equilibrium state. That's why I would
rather delete fillfactor from GiST until we have some algorithm to
construct GiST without page splits.

What is going on with GIN? GIN is, basically, a two-level nested btree. So,
fillfactor should be applicable here. But GIN is not constructed like
btree...
GIN accumulates maintenance_work_mem of data and then inserts it into the
tree. So a fresh GIN index is the result of inserting maintenance_work_mem
buckets. And each bucket is sorted. On a small index (or with large
maintenance_work_mem) there would be only one bucket. GIN has two levels
of btree: the entry tree and the posting tree. Currently the entry tree has no
special logic to control fullness during index build. So, with only one bucket,
the entry tree will be 50% filled. With many buckets, the entry tree will be at
the equilibrium state. In some specific cases (about two buckets) the entry tree
can be almost fully packed. Insertion into the posting tree is always ordered
during index build because the posting tree is a tree of TIDs. When a posting
tree page splits, it leaves the left page fully packed during index build and
75% packed during insertion (because it expects some sequential inserts).

The idea of the GIN build algorithm is that the entry tree is expected to be
relatively small and to fit in cache. Insertions into posting trees are
ordered. Thus this algorithm should be IO-efficient and have low overhead.
However, with large entry trees this algorithm can perform much worse.

My summary is the following:
1) In order to have fully correct support of fillfactor in GIN we need to
rewrite the GIN build algorithm.
2) Without rewriting the GIN build algorithm, not much can be done with the
entry tree. However, you can implement some heuristics.
3) You definitely need to touch the code that selects the split ratio in
dataPlaceToPageLeaf (starting with "if (!btree->isBuild)").
4) GIN data pages are always compressed except in pg_upgraded indexes from
pre-9.4. Take care about it in the following code.
  if (GinPageIsCompressed(page))
  freespace = GinDataLeafPageGetFreeSpace(page);
+ else if (btree-isBuild)
+ freespace = BLCKSZ * (100 - fillfactor) / 100;
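For concreteness, the free-space target implied by that snippet works out as below (a sketch only; the default BLCKSZ of 8192 is assumed and the helper name is hypothetical, not part of the patch):

```c
#define BLCKSZ 8192                 /* PostgreSQL's default page size */

/*
 * Free space the quoted snippet would reserve on a GIN leaf page during
 * index build: (100 - fillfactor) percent of the page.
 */
static int
build_freespace_target(int fillfactor)
{
    return BLCKSZ * (100 - fillfactor) / 100;
}
```

So fillfactor 100 reserves nothing (today's fully packed pages), while e.g. 75 would leave a quarter of each leaf page free at build time.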

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] KNN-GiST with recheck

2014-11-28 Thread Alexander Korotkov
On Sat, Nov 22, 2014 at 2:20 AM, Peter Geoghegan p...@heroku.com wrote:

 On Mon, Jan 13, 2014 at 9:17 AM, Alexander Korotkov
 aekorot...@gmail.com wrote:
  This patch was split from thread:
 
 http://www.postgresql.org/message-id/CAPpHfdscOX5an71nHd8WSUH6GNOCf=V7wgDaTXdDd9=gon-...@mail.gmail.com
 
  I've split it to separate thread, because it's related to partial sort
 only
  conceptually not technically. Also I renamed it to knn-gist-recheck
 from
  partial-knn as more appropriate name. In the attached version docs are
  updated. Possible weak point of this patch design is that it fetches heap
  tuple from GiST scan. However, I didn't receive any notes about its
 design,
  so, I'm going to put it to commitfest.


 The partial sort thing is not in the current 2014-10 commitfest
 (although this patch is). Is that intentional?


It's not. I just didn't revise partial sort yet :(

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread David Rowley
On Fri, Nov 28, 2014 at 8:20 AM, Tom Lane t...@sss.pgh.pa.us wrote:


 The attached proposed patch adds bms_next_member() and replaces
 bms_first_member() calls where it seemed to make sense.  I've had a
 hard time measuring much speed difference for this patch in isolation,
 but in principle it should be less code and less cycles.  It also seems
 safer and more natural to not use destructive looping techniques.


I've had a quick read of the patch and it seems like a good idea.

I have to say I don't really like the modifying of the loop iterator that's
going on here:

col = -1;
while ((col = bms_next_member(rte-modifiedCols, col)) = 0)
{
col += FirstLowInvalidHeapAttributeNumber;
/* do stuff */
col -= FirstLowInvalidHeapAttributeNumber;
}


Some other code is doing this:

col = -1;
while ((col = bms_next_member(cols, col)) >= 0)
{
/* bit numbers are offset by FirstLowInvalidHeapAttributeNumber */
AttrNumber attno = col + FirstLowInvalidHeapAttributeNumber;

Which seems less prone to future breakage and possibly slightly fewer cycles.

A while back when I was benchmarking the planner time during my trials with
anti/semi join removals, I wrote a patch to change the usage pattern for
cases such as:

if (bms_membership(a) != BMS_SINGLETON)
  return; /* nothing to do */
singleton = bms_singleton_member(a);
...

Into:

if (!bms_get_singleton(a, &singleton))
  return; /* nothing to do */
...

Which means 1 function call and loop over the bitmapset, rather than 2
function calls and 2 loops over the set when the set is a singleton.

This knocked between 4 and 22% off of the time the planner spent in the join
removals path.

The patch to implement this and change all suitable calls sites is attached.

Regards

David Rowley


bms_get_singleton_v1.patch
Description: Binary data



Re: [HACKERS] The problems of PQhost()

2014-11-28 Thread Fujii Masao
On Fri, Nov 28, 2014 at 3:43 AM, Noah Misch n...@leadboat.com wrote:
 On Fri, Nov 28, 2014 at 03:11:06AM +0900, Fujii Masao wrote:
 On Thu, Nov 27, 2014 at 12:38 PM, Noah Misch n...@leadboat.com wrote:
  Sure.  I'll first issue git revert 9f80f48, then apply the attached 
  patch.
  Since libpq ignores a hostaddr parameter equal to the empty string, this
  implementation does likewise.  Apart from that, I anticipate behavior
  identical to today's code.

 +fprintf(stderr, _("out of memory\n"));

 psql_error() should be used instead of fprintf()?

 I copied what pg_malloc() would do.  Either way seems reasonable.

psql_error() seems better for the case where psql executes the
specified input file.
In this case, psql_error() reports the message in a format like
"psql:filename:lineno: message".
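That message shape can be sketched as follows (not psql's actual code; the helper name is made up for illustration):

```c
#include <stdio.h>

/*
 * Sketch of the error prefix psql uses when executing a script file:
 * "psql:filename:lineno: message".  Hypothetical helper, not psql code.
 */
static int
format_script_error(char *buf, size_t buflen,
                    const char *fname, int lineno, const char *msg)
{
    return snprintf(buf, buflen, "psql:%s:%d: %s", fname, lineno, msg);
}
```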

Regards,

-- 
Fujii Masao




Re: [HACKERS] Function to know last log write timestamp

2014-11-28 Thread Fujii Masao
On Wed, Nov 26, 2014 at 4:05 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Fri, Aug 15, 2014 at 8:17 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Fri, Aug 15, 2014 at 3:40 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
 On 2014-08-14 14:37:22 -0400, Robert Haas wrote:
 On Thu, Aug 14, 2014 at 2:21 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
  On 2014-08-14 14:19:13 -0400, Robert Haas wrote:
  That's about the idea. However, what you've got there is actually
  unsafe, because shmem->counter++ is not an atomic operation.  It reads
  the counter (possibly even as two separate 4-byte loads if the counter
  is an 8-byte value), increments it inside the CPU, and then writes the
  resulting value back to memory.  If two backends do this concurrently,
  one of the updates might be lost.
 
  All these are only written by one backend, so it should be safe. Note
  that that coding pattern, just without memory barriers, is all over
  pgstat.c

 Ah, OK.  If there's a separate slot for each backend, I agree that it's 
 safe.

 We should probably add barriers to pgstat.c, too.

 Yea, definitely. I think this is rather borked on weaker
 architectures. It's just that the consequences of an out of date/torn
 value are rather low, so it's unlikely to be noticed.

 Imo we should encapsulate the changecount modifications/checks somehow
 instead of repeating the barriers, Asserts, comments et al everywhere.

 So what about applying the attached patch first, which adds the macros
 to load and store the changecount with the memory barriers, and changes
 pgstat.c to use them. Maybe this patch needs to be back-patched to at least 9.4?
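[For readers following along: the changecount protocol those macros wrap works roughly like this single-writer sketch, written here with C11 atomics rather than PostgreSQL's barrier macros, and simplified — the payload copy is deliberately racy, as in a classic seqlock.]

```c
#include <stdatomic.h>
#include <string.h>

/*
 * One writer, many readers.  The writer makes the counter odd while the
 * payload is inconsistent; readers retry until they see the same even
 * value before and after copying.
 */
typedef struct
{
    atomic_uint changecount;
    int         payload[4];
} StatsSlot;

static void
slot_write(StatsSlot *s, const int *src)
{
    atomic_fetch_add(&s->changecount, 1);       /* now odd: update in progress */
    memcpy(s->payload, src, sizeof s->payload);
    atomic_fetch_add(&s->changecount, 1);       /* even again: consistent */
}

static void
slot_read(StatsSlot *s, int *dst)
{
    unsigned    before, after;

    do
    {
        before = atomic_load(&s->changecount);
        memcpy(dst, s->payload, sizeof s->payload);
        after = atomic_load(&s->changecount);
    } while (before != after || (before & 1));  /* torn or in-progress: retry */
}
```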

 After applying the patch, I will rebase the pg_last_xact_insert_timestamp
 patch and post it again.
 Hm, what's the status on this patch? The addition of those macros to
 control count increment with a memory barrier seems like a good thing
 at least.

Thanks for reminding me of that! Barring any objection, I will commit it.

 The 2nd patch has not been rebased but still..

The feature that this 2nd patch implements is very similar to a part of
what the committs patch does, i.e., tracking the timestamps of committed
transactions. If the committs patch gets committed, basically
I'd like to no longer work on the 2nd patch, to avoid duplicate work.
OTOH, I'm concerned about the performance impact of the committs patch.
So, for a simple use case like checking replication lag, what the 2nd
patch implements seems to be better, though...

Regards,

-- 
Fujii Masao




Re: [HACKERS] [pgsql-packagers] Palle Girgensohn's ICU patch

2014-11-28 Thread Palle Girgensohn

 27 nov 2014 kl. 16:03 skrev Tom Lane t...@sss.pgh.pa.us:
 
 Another issue is that (AFAIK) ICU doesn't support any non-Unicode
 encodings, which means that a build supporting *only* ICU collations is a
 nonstarter IMO.

The patch I originally wrote replaces strcoll/wcscoll but keeps the original
behaviour for 8-bit charsets' encodings.



Re: [HACKERS] The problems of PQhost()

2014-11-28 Thread Noah Misch
On Fri, Nov 28, 2014 at 07:55:29PM +0900, Fujii Masao wrote:
 On Fri, Nov 28, 2014 at 3:43 AM, Noah Misch n...@leadboat.com wrote:
  On Fri, Nov 28, 2014 at 03:11:06AM +0900, Fujii Masao wrote:
  On Thu, Nov 27, 2014 at 12:38 PM, Noah Misch n...@leadboat.com wrote:
   Sure.  I'll first issue git revert 9f80f48, then apply the attached 
   patch.
   Since libpq ignores a hostaddr parameter equal to the empty string, this
   implementation does likewise.  Apart from that, I anticipate behavior
   identical to today's code.
 
  +fprintf(stderr, _("out of memory\n"));
 
  psql_error() should be used instead of fprintf()?
 
  I copied what pg_malloc() would do.  Either way seems reasonable.
 
 psql_error() seems better for the case where psql executes the
 specified input file.
 In this case, psql_error() reports the message in a format like
 "psql:filename:lineno: message".

Fair enough.  Will do it that way.




Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread Tom Lane
Dean Rasheed dean.a.rash...@gmail.com writes:
 On 27 November 2014 at 19:20, Tom Lane t...@sss.pgh.pa.us wrote:
 The attached proposed patch adds bms_next_member() and replaces
 bms_first_member() calls where it seemed to make sense.

 There is another micro-optimisation that you could make in
 bms_next_member() -- it isn't necessary to do
 w = RIGHTMOST_ONE(w)

Excellent point!  Thanks for noticing that.

 Should this function protect against large negative inputs? As it
 stands, passing in a value of prevbit less than -1 would be
 problematic. Maybe it's sufficient to say don't do that in the docs,
 rather than waste more cycles checking.

Yeah, I had considered whether to do that; instead of just prevbit++
it would need to be something like
prevbit = (prevbit < 0) ? 0 : prevbit + 1;
This would add one test-and-branch, and moreover one that would be
hard to predict correctly (given that most of our bitmapsets don't
have very many members).  So it seems pretty expensive.  Probably
a more explicit warning in the header comment is good enough; or
we could put in an Assert().
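The guard in question, as a standalone helper (hypothetical name, for illustration of the trade-off discussed above):

```c
/*
 * Map prevbit to the first candidate bit position, clamping any negative
 * input (including large negative values) to 0.  This is the extra
 * test-and-branch discussed above; the alternative is to document the
 * contract and/or Assert(prevbit >= -1) instead.
 */
static int
first_candidate_bit(int prevbit)
{
    return (prevbit < 0) ? 0 : prevbit + 1;
}
```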

regards, tom lane




Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread Tom Lane
David Rowley dgrowle...@gmail.com writes:
 I have to say I don't really like the modifying of the loop iterator that's
 going on here:

 col = -1;
 while ((col = bms_next_member(rte->modifiedCols, col)) >= 0)
 {
   col += FirstLowInvalidHeapAttributeNumber;
   /* do stuff */
   col -= FirstLowInvalidHeapAttributeNumber;
 }

 Some other code is doing this:

 col = -1;
 while ((col = bms_next_member(cols, col)) >= 0)
 {
   /* bit numbers are offset by FirstLowInvalidHeapAttributeNumber */
   AttrNumber attno = col + FirstLowInvalidHeapAttributeNumber;

 Which seems less prone to future breakage and possibly slightly fewer cycles.

Yeah, I'd come to the same conclusion while thinking about it in the
shower this morning ...

 A while back when I was benchmarking the planner time during my trials with
 anti/semi join removals, I wrote a patch to change the usage pattern for
 cases such as:
 ...
 This knocked between 4 and 22% off of the time the planner spent in the join
 removals path.

Really!?  I've never seen either of those functions show up all that high
in profiles.  Can you share the test case you were measuring?

regards, tom lane




Re: [HACKERS] no test programs in contrib

2014-11-28 Thread Peter Eisentraut
On 11/27/14 3:10 PM, Tom Lane wrote:
 I'm not too happy with that approach, because packagers are going to
 tend to think they should package any files installed by install-world.
 The entire point of this change, IMO, is that the test modules should
 *not* get installed, certainly not by normal install targets.  Being
 consistent with the existing contrib packaging is exactly not what we
 want.

I share this objection.

 Maybe we should only allow check-world to run these tests, and not
 installcheck-world?  That's kind of annoying, but what you are
 doing now seems to defeat the purpose altogether.

Maybe this will have to do for now.

But the existing main regression tests are able to run against an
existing installation while using the modules autoinc.so and refint.so
without installing them.  I think the problem is that we are able to
load a .so file from just about anywhere, but we can't load a full
extension in the same way.  There have been discussions about that, in
the context of being able to test an externally developed extension
before/without installing it.  This is pretty much the same case.






Re: [HACKERS] patch : Allow toast tables to be moved to a different tablespace

2014-11-28 Thread Alex Shulgin
Julien Tachoires jul...@gmail.com writes:

 2011/12/15 Alvaro Herrera alvhe...@commandprompt.com:

  Uhm, surely you could compare the original toast tablespace to the heap
  tablespace, and if they differ, handle appropriately when creating the
  new toast table?  Just pass down the toast tablespace into
  AlterTableCreateToastTable, instead of having it assume that
  rel->rd_rel->relnamespace is sufficient.  This should be done in all
  cases where a toast tablespace is created, which shouldn't be more than
  a handful of them.

 Thank you, that way seems right.
 Now, I distinguish, before each creation of a TOAST table with
 AlterTableCreateToastTable(), whether it will create a new one or recreate
 an existing one.
 Thus, in create_toast_table() when toastTableSpace is equal to
 InvalidOid, we are able:
 - to fall back to the main table tablespace in case of new TOAST table creation
 - to keep its previous tablespace in case of recreation.

 Here's a new version rebased against HEAD.

Almost 3 years later, here's a version rebased against current HEAD. :-)

It compiles and even passes the included regression test.

There were doubts originally if this is actually a useful feature, but
there is at least one plausible use case (main table on SSD, TOAST on
HDD):

  http://www.postgresql.org/message-id/4f3267ee.80...@deltasoft.no

I didn't add anything on top of the latest version, but I notice we've
added mat. views since then, so it looks like we now also need this:

  ALTER MATERIALIZED VIEW SET [VIEW | TOAST] TABLESPACE


Should I add this patch to the next CommitFest?

--
Alex



set_toast_tablespace_v0.11.patch.gz
Description: application/gzip



[HACKERS] using Core Foundation locale functions

2014-11-28 Thread Peter Eisentraut
In light of the recent discussions about using ICU on OS X, I looked
into the Core Foundation locale functions (Core Foundation = traditional
Mac API in OS X, as opposed to the Unix/POSIX APIs).

Attached is a proof of concept patch that just about works for the
sorting aspects.  (The ctype aspects aren't there yet and will crash,
but they could be done similarly.)  It passes an appropriately adjusted
collate.linux.utf8 test, meaning that it does produce language-aware
sort orders that are equivalent to what glibc produces.

At the moment, this is probably just an experiment that shows where
refactoring and better abstractions might be suitable if we want to
support multiple locale libraries.  If we want to pursue ICU, I think
this could be a useful third option.
diff --git a/configure b/configure
index 7594401..371cbe0 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,8 @@ with_libxml
 XML2_CONFIG
 UUID_EXTRA_OBJS
 with_uuid
+LOCALE_EXTRA_LIBS
+with_locale
 with_selinux
 with_openssl
 krb_srvtab
@@ -831,6 +833,7 @@ with_openssl
 with_selinux
 with_readline
 with_libedit_preferred
+with_locale
 with_uuid
 with_ossp_uuid
 with_libxml
@@ -1520,6 +1523,7 @@ Optional Packages:
   --without-readline  do not use GNU Readline nor BSD Libedit for editing
   --with-libedit-preferred
   prefer BSD Libedit over GNU Readline
+  --with-locale=LIB   use locale library LIB (posix,cf)
   --with-uuid=LIB build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuidobsolete spelling of --with-uuid=ossp
   --with-libxml   build with XML support
@@ -5677,6 +5681,51 @@ fi
 
 
 #
+# collation library
+#
+
+
+
+# Check whether --with-locale was given.
+if test "${with_locale+set}" = set; then :
+  withval=$with_locale;
+  case $withval in
+yes)
+  as_fn_error $? "argument required for --with-locale option" "$LINENO" 5
+  ;;
+no)
+  as_fn_error $? "argument required for --with-locale option" "$LINENO" 5
+  ;;
+*)
+
+  ;;
+  esac
+
+else
+  with_locale=posix
+fi
+
+
+case $with_locale in
+  posix)
+
+$as_echo "#define USE_LOCALE_POSIX 1" >>confdefs.h
+
+;;
+  cf)
+
+$as_echo "#define USE_LOCALE_CF 1" >>confdefs.h
+
+LOCALE_EXTRA_LIBS='-framework Foundation'
+;;
+  *)
+as_fn_error $? "--with-locale must specify one of posix or cf" "$LINENO" 5
+esac
+
+
+
+
+#
 # UUID library
 #
 # There are at least three UUID libraries in common use: the FreeBSD/NetBSD
diff --git a/configure.in b/configure.in
index 0dc3f18..16b97a1 100644
--- a/configure.in
+++ b/configure.in
@@ -706,6 +706,25 @@ PGAC_ARG_BOOL(with, libedit-preferred, no,
 
 
 #
+# collation library
+#
+PGAC_ARG_REQ(with, locale, [LIB], [use locale library LIB (posix,cf)], [], [with_locale=posix])
+case $with_locale in
+  posix)
+AC_DEFINE([USE_LOCALE_POSIX], 1, [Define to 1 to use POSIX locale functions.])
+;;
+  cf)
+AC_DEFINE([USE_LOCALE_CF], 1, [Define to 1 to use Core Foundation locale functions.])
+LOCALE_EXTRA_LIBS='-framework CoreFoundation'
+;;
+  *)
+AC_MSG_ERROR([--with-locale must specify one of posix or cf])
+esac
+AC_SUBST(with_locale)
+AC_SUBST(LOCALE_EXTRA_LIBS)
+
+
+#
 # UUID library
 #
 # There are at least three UUID libraries in common use: the FreeBSD/NetBSD
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 63ff50b..fa5a60e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -166,6 +166,7 @@ with_openssl	= @with_openssl@
 with_selinux	= @with_selinux@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
+with_locale	= @with_locale@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
@@ -241,6 +242,7 @@ DLLWRAP = @DLLWRAP@
 LIBS = @LIBS@
 LDAP_LIBS_FE = @LDAP_LIBS_FE@
 LDAP_LIBS_BE = @LDAP_LIBS_BE@
+LOCALE_EXTRA_LIBS = @LOCALE_EXTRA_LIBS@
 UUID_LIBS = @UUID_LIBS@
 UUID_EXTRA_OBJS = @UUID_EXTRA_OBJS@
 LD = @LD@
diff --git a/src/backend/Makefile b/src/backend/Makefile
index 870a022..f793e76 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -54,7 +54,7 @@ ifneq ($(PORTNAME), win32)
 ifneq ($(PORTNAME), aix)
 
 postgres: $(OBJS)
-	$(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(call expand_subsys,$^) $(LIBS) -o $@
+	$(CC) $(CFLAGS) $(LDFLAGS) $(LDFLAGS_EX) $(export_dynamic) $(call expand_subsys,$^) $(LIBS) $(LOCALE_EXTRA_LIBS) -o $@
 
 endif
 endif
diff --git a/src/backend/utils/adt/pg_locale.c b/src/backend/utils/adt/pg_locale.c
index 94bb5a4..7f441b4 100644
--- a/src/backend/utils/adt/pg_locale.c
+++ b/src/backend/utils/adt/pg_locale.c
@@ -63,6 +63,10 @@
 #include utils/pg_locale.h
 #include utils/syscache.h
 
+#ifdef USE_LOCALE_CF
+#include cf_locale.h
+#endif
+
 #ifdef WIN32
 /*
  * This Windows file defines StrNCpy. We don't need it here, so we undefine
@@ -1023,7 +1027,6 @@ lc_ctype_is_c(Oid collation)
 
 
 /* simple subroutine for reporting errors from newlocale() */
-#ifdef HAVE_LOCALE_T
 static void
 report_newlocale_failure(const 

Re: [HACKERS] Add shutdown_at_recovery_target option to recovery.conf

2014-11-28 Thread Alex Shulgin

Christoph Berg c...@df7cb.de writes:

 Re: Petr Jelinek 2014-11-25 5474efea.2040...@2ndquadrant.com
 Patch committed.

Before I go and rebase that recovery.conf -> GUC patch on top of
this...  is it final?

 
 Thanks!

 I'm a bit late to the party, but wouldn't

 recovery_target_action = ...

 have been a better name for this? It'd be in line with the other
 recovery_target_* parameters, and also a bit shorter than the imho
 somewhat ugly action_at_recovery_target.

FWIW, I too think that recovery_target_action is a better name.

--
Alex




Re: [HACKERS] Add shutdown_at_recovery_target option to recovery.conf

2014-11-28 Thread Petr Jelinek

On 28/11/14 17:46, Alex Shulgin wrote:
 Christoph Berg c...@df7cb.de writes:
  Re: Petr Jelinek 2014-11-25 5474efea.2040...@2ndquadrant.com
   Patch committed.

 Before I go and rebase that recovery.conf -> GUC patch on top of
 this...  is it final?

I think so, perhaps sans the name mentioned below.

  Thanks!

  I'm a bit late to the party, but wouldn't

  recovery_target_action = ...

  have been a better name for this? It'd be in line with the other
  recovery_target_* parameters, and also a bit shorter than the imho
  somewhat ugly action_at_recovery_target.

 FWIW, I too think that recovery_target_action is a better name.

I agree.

--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] no test programs in contrib

2014-11-28 Thread Andres Freund
On 2014-11-27 15:51:51 -0500, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  So test_decoding is fairly useful for users demonstrating that decoding
  works, especially if they're also testing an external decoding module
  and are unsure of where their replication problem is located, or what's
  wrong with their HBA settings.  For that reason it's important that
  test_decoding be available via OS packages, which would give me some
  reluctance to move it out of /contrib.
 
 If we follow that reasoning we'll end up removing nothing from contrib.
 There is no reason that end users should need to be performing such
 testing; anyone who does have reason to do it will have a source tree
 at hand.

Actually I don't think that's true for test_decoding - there's quite a
bit of actual things that you can do with it. At the very least it's
useful to roughly measure the impact logical replication would have on a
database, but it's also helpful to look at the changes. And even if the
format isn't super nice, thanks to Robert's insistence it's actually
safely parseable if necessary.

  dummy_seclabel might serve the same purpose for users who are having
  issues with SEPostgres etc.  I don't know enough about it ...
 
 And as for dummy_seclabel, the same applies in spades, considering
 that the number of users of SEPostgres can probably be counted without
 running out of fingers.

I agree that dummy_seclabel really doesn't have any applications besides
regression tests.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread Tom Lane
David Rowley dgrowle...@gmail.com writes:
 A while back when I was benchmarking the planner time during my trials with
 anti/semi join removals, I wrote a patch to change the usage pattern for
 cases such as:

 if (bms_membership(a) != BMS_SINGLETON)
   return; /* nothing to do */
 singleton = bms_singleton_member(a);
 ...

 Into:

 if (!bms_get_singleton(a, &singleton))
   return; /* nothing to do */
 ...

 Which means 1 function call and loop over the bitmapset, rather than 2
 function calls and 2 loops over the set when the set is a singleton.

I went ahead and committed this with some cosmetic adjustments.
I'm not sure about there being any performance win in existing use-cases,
but it seems worth doing on notational grounds anyway.
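[For readers outside the backend, the combined check-and-fetch pattern can be
sketched over a single machine word; the names below are hypothetical, not the
actual Bitmapset API, which stores multi-word sets:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical single-word sketch of bms_get_singleton_member's idea:
 * succeed and report the member only when exactly one bit is set,
 * doing one pass instead of separate membership and fetch calls.
 */
static bool
word_get_singleton(uint64_t w, int *member)
{
	if (w == 0 || (w & (w - 1)) != 0)	/* zero bits set, or more than one */
		return false;

	int			pos = 0;

	while ((w & 1) == 0)				/* locate the lone set bit */
	{
		w >>= 1;
		pos++;
	}
	*member = pos;
	return true;
}
```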

regards, tom lane




Re: [HACKERS] no test programs in contrib

2014-11-28 Thread Josh Berkus
On 11/27/2014 12:51 PM, Tom Lane wrote:
  So test_decoding is fairly useful for users demonstrating that decoding
  works, especially if they're also testing an external decoding module
  and are unsure of where their replication problem is located, or what's
  wrong with their HBA settings.  For that reason it's important that
  test_decoding be available via OS packages, which would give me some
  reluctance to move it out of /contrib.
 If we follow that reasoning we'll end up removing nothing from contrib.
 There is no reason that end users should need to be performing such
 testing; anyone who does have reason to do it will have a source tree
 at hand.

1) Decoding extension "Slony-II", installed from PGXN, won't connect
2) Install test_decoding
3) Check if decoding is working and you can connect

That's the scenario I'm looking at.  It's useful for "is there something
wrong with my decoding setup or is the decoding plugin broken?"

And I can imagine quite a few users who don't have source installs
needing to check that.  That doesn't mean test_decoding needs to stay in
contrib, just that it needs to be somewhere which goes into some common
package.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] no test programs in contrib

2014-11-28 Thread Alvaro Herrera
Peter Eisentraut wrote:
 On 11/27/14 3:10 PM, Tom Lane wrote:
  I'm not too happy with that approach, because packagers are going to
  tend to think they should package any files installed by install-world.
  The entire point of this change, IMO, is that the test modules should
  *not* get installed, certainly not by normal install targets.  Being
  consistent with the existing contrib packaging is exactly not what we
  want.
 
 I share this objection.

Okay, the attached version does it that way.

I also attach some changes for the MSVC build stuff.  I tested it and it
builds fine AFAICT, but it doesn't install because Install.pm wants to
install contrib modules from contrib/ (which seems reasonable) but my
hack adds the src/test/modules/ as contrib modules also, so Install.pm
goes bonkers.  I'm not even sure *what* we're supposed to build -- there
is no distinction in these programs as there is in the makefiles about
what to install.  So if some Windows developer can look into this, I'd
appreciate it.

 But the existing main regression tests are able to run against an
 existing installation while using the modules autoinc.so and refint.so
 without installing them.  I think the problem is that we are able to
 load a .so file from just about anywhere, but we can't load a full
 extension in the same way.  There have been discussions about that, in
 the context of being able to test an externally developed extension
 before/without installing it.  This is pretty much the same case.

I'm leaving that problem for someone else to solve.

Andres Freund wrote:
 On 2014-11-27 15:51:51 -0500, Tom Lane wrote:
  
  If we follow that reasoning we'll end up removing nothing from contrib.
  There is no reason that end users should need to be performing such
  testing; anyone who does have reason to do it will have a source tree
  at hand.
 
 Actually I don't think that's true for test_decoding - there's quite a
 bit of actual things that you can do with it. At the very least it's
 useful to roughly measure the impact logical replication would have on a
 database, but it's also helpful to look at the changes. And even if the
 format isn't super nice, thanks to Robert's insistence it's actually
 safely parseable if necessary.

Argh.  Okay, the attached doesn't move test_decoding either.

I think it's fine anyway -- I'm sure we will come up with a few
additional test modules, such as the one for the commit_ts patch.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
commit be4885b5ee0308909bf896298b47acbf84b38606
Author: Alvaro Herrera alvhe...@alvh.no-ip.org
Date:   Fri Nov 28 17:41:29 2014 -0300

Move test modules from contrib to src/test/modules

This is advance preparation for introducing even more test modules; the
easy solution is to add them to contrib, but that's bloated enough that
it seems a good time to think of something different.

Moved modules are dummy_seclabel, test_shm_mq, test_parser and
worker_spi.

(test_decoding was also a candidate, but there was too much opposition
to moving that one.  We can always reconsider later.)

diff --git a/contrib/Makefile b/contrib/Makefile
index b37d0dd..195d447 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -16,7 +16,6 @@ SUBDIRS = \
 		dblink		\
 		dict_int	\
 		dict_xsyn	\
-		dummy_seclabel	\
 		earthdistance	\
 		file_fdw	\
 		fuzzystrmatch	\
@@ -51,12 +50,9 @@ SUBDIRS = \
 		tablefunc	\
 		tcn		\
 		test_decoding	\
-		test_parser	\
-		test_shm_mq	\
 		tsearch2	\
 		unaccent	\
-		vacuumlo	\
-		worker_spi
+		vacuumlo
 
 ifeq ($(with_openssl),yes)
 SUBDIRS += sslinfo
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml
index ec68f10..a698d0f 100644
--- a/doc/src/sgml/contrib.sgml
+++ b/doc/src/sgml/contrib.sgml
@@ -113,7 +113,6 @@ CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
 &dblink;
 &dict-int;
 &dict-xsyn;
- &dummy-seclabel;
 &earthdistance;
 &file-fdw;
 &fuzzystrmatch;
@@ -141,8 +140,6 @@ CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
 &tablefunc;
 &tcn;
 &test-decoding;
- &test-parser;
- &test-shm-mq;
 &tsearch2;
 &unaccent;
 &uuid-ossp;
diff --git a/doc/src/sgml/dummy-seclabel.sgml b/doc/src/sgml/dummy-seclabel.sgml
deleted file mode 100644
index d064705..000
--- a/doc/src/sgml/dummy-seclabel.sgml
+++ /dev/null
@@ -1,74 +0,0 @@
-<!-- doc/src/sgml/dummy-seclabel.sgml -->
-
-<sect1 id="dummy-seclabel" xreflabel="dummy_seclabel">
- <title>dummy_seclabel</title>
-
- <indexterm zone="dummy-seclabel">
-  <primary>dummy_seclabel</primary>
- </indexterm>
-
- <para>
-  The <filename>dummy_seclabel</> module exists only to support regression
-  testing of the <command>SECURITY LABEL</> statement.  It is not intended
-  to be used in production.
- </para>
-
- <sect2>
-  <title>Rationale</title>
-
-  <para>
-   The <command>SECURITY LABEL</> statement allows the user to assign security
-   labels to database objects; however, security labels can only be assigned

Re: [HACKERS] bug in json_to_record with arrays

2014-11-28 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 11/26/2014 12:48 PM, Tom Lane wrote:
 Wouldn't it be better if it said
 
 ERROR:  invalid input syntax for array: "["potter","chef","programmer"]"
 DETAIL: Dimension value is missing.
 
 which is comparable to what you'd get out of most other input functions
 that were feeling indigestion?

 Yes, it most definitely would be better.

Here's a draft patch to improve array_in's error reports.  With this
patch, what you'd get for this example is

# select * from json_to_record('
{"id":1,"val":"josh","valry":["potter","chef","programmer"]}') as r(id
int, val text, valry text[]);
ERROR:  malformed array literal: "["potter","chef","programmer"]"
DETAIL:  "[" must introduce explicitly-specified array dimensions.

Some comments:

* Probably the main objection that could be leveled against this approach
is that it's reasonable to expect that array literal strings could be
pretty long, maybe longer than is reasonable to print all of in a primary
error message.  However, the same could be said about record literal
strings; yet I've not heard any complaints about the fact that record_in
just prints the whole input string when it's unhappy.  So that's what this
does too.

* The phrasing "malformed array literal" matches some already-existing
error texts, as well as record_in which uses "malformed record literal".
I wonder though if we shouldn't change all of these to "invalid input
syntax for array" (resp. "record"), which seems more like our usual
phraseology in other datatype input functions.  OTOH, that would make
the primary error message even longer.

* I changed every one of the ERRCODE_INVALID_TEXT_REPRESENTATION cases
to "malformed array literal" with an errdetail.  I did not touch some
existing ereport's with different SQLSTATEs, notably "upper bound cannot
be less than lower bound".  Anyone have a different opinion about whether
that case needs to print the string contents?

* Some of the errdetails don't really seem all that helpful (the ones
triggered by the existing regression test cases are examples).  Can anyone
think of better wording?  Should we just leave those out?

* This would only really address Josh's complaint if we were to back-patch
it into 9.4, but I'm hesitant to change error texts post-RC1.  Thoughts?

regards, tom lane

diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index 743351b..933c6b0 100644
*** a/src/backend/utils/adt/arrayfuncs.c
--- b/src/backend/utils/adt/arrayfuncs.c
*** array_in(PG_FUNCTION_ARGS)
*** 247,257 
  	 errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)",
  			ndim + 1, MAXDIM)));
  
! 		for (q = p; isdigit((unsigned char) *q) || (*q == '-') || (*q == '+'); q++);
  		if (q == p)/* no digits? */
  			ereport(ERROR,
  	(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 	 errmsg("missing dimension value")));
  
  		if (*q == ':')
  		{
--- 247,259 
  	 errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)",
  			ndim + 1, MAXDIM)));
  
! 		for (q = p; isdigit((unsigned char) *q) || (*q == '-') || (*q == '+'); q++)
! 			 /* skip */ ;
  		if (q == p)/* no digits? */
  			ereport(ERROR,
  	(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 	 errmsg("malformed array literal: \"%s\"", string),
! 	 errdetail("\"[\" must introduce explicitly-specified array dimensions.")));
  
  		if (*q == ':')
  		{
*** array_in(PG_FUNCTION_ARGS)
*** 259,269 
  			*q = '\0';
  			lBound[ndim] = atoi(p);
  			p = q + 1;
! 			for (q = p; isdigit((unsigned char) *q) || (*q == '-') || (*q == '+'); q++);
  			if (q == p)			/* no digits? */
  ereport(ERROR,
  		(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 		 errmsg("missing dimension value")));
  		}
  		else
  		{
--- 261,273 
  			*q = '\0';
  			lBound[ndim] = atoi(p);
  			p = q + 1;
! 			for (q = p; isdigit((unsigned char) *q) || (*q == '-') || (*q == '+'); q++)
!  /* skip */ ;
  			if (q == p)			/* no digits? */
  ereport(ERROR,
  		(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 		 errmsg("malformed array literal: \"%s\"", string),
! 		 errdetail("Missing array dimension value.")));
  		}
  		else
  		{
*** array_in(PG_FUNCTION_ARGS)
*** 273,279 
  		if (*q != ']')
  			ereport(ERROR,
  	(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 	 errmsg("missing \"]\" in array dimensions")));
  
  		*q = '\0';
  		ub = atoi(p);
--- 277,285 
  		if (*q != ']')
  			ereport(ERROR,
  	(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 	 errmsg("malformed array literal: \"%s\"", string),
! 	 errdetail("Missing \"%s\" after array dimensions.",
! 			   "]")));
  
  		*q = '\0';
  		ub = atoi(p);
*** array_in(PG_FUNCTION_ARGS)
*** 293,299 
  		if (*p != '{')
  			ereport(ERROR,
  	(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
! 	 errmsg("array value must start with \"{\" or dimension information")));
  		ndim = 

Re: [HACKERS] How to use brin indexes?

2014-11-28 Thread Alvaro Herrera
hubert depesz lubaczewski wrote:
 On Sat, Nov 22, 2014 at 3:29 AM, Alvaro Herrera alvhe...@2ndquadrant.com
 wrote:
 
  I won't push this right away because I want to add the cross-type stuff
  to the tests, to ensure I haven't bollixed anything; I ran a few quick
  manual tests and everything seems to work.  But if Depesz wants to test
  the behavior, be my guest.  Note that you need to initdb after
  rebuilding with this patch.
 
 Tested. Works OK.

Thanks for testing.  I have pushed it now.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-11-28 Thread Adam Brightwell
All,

I have attached a patch that addresses the current suggestions and
recommendations:

* Add 'get_all_role_attributes' SQL function - returns a text array
representation of the attributes from a value passed to it.

Example:

postgres=# SELECT rolname, get_all_role_attributes(rolattr) AS rolattr FROM pg_authid;
 rolname  |                                            rolattr
----------+------------------------------------------------------------------------------------------------
 postgres | {Superuser,Inherit,"Create Role","Create DB","Catalog Update",Login,Replication,"Bypass RLS"}
(1 row)

* Refactor #define's from 'parsenodes.h' to 'acl.h'
* Added #define ROLE_ATTR_ALL to represent all currently available
attributes.
* Added genbki.pl substitution for the "PGROLEATTRALL" constant.
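[As a standalone illustration of the bitmask representation the patch uses, the
test at the heart of role_has_attribute() reduces to a single bitwise AND; the
bit positions below are made up for the example, the patch defines its own:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical attribute bits; the patch's actual values may differ. */
typedef uint64_t RoleAttr;

#define ROLE_ATTR_SUPERUSER		((RoleAttr) 1 << 0)
#define ROLE_ATTR_CREATEROLE	((RoleAttr) 1 << 1)
#define ROLE_ATTR_CATUPDATE		((RoleAttr) 1 << 2)
#define ROLE_ATTR_BYPASSRLS		((RoleAttr) 1 << 3)
#define ROLE_ATTR_ALL			(((RoleAttr) 1 << 4) - 1)

/* Core of role_has_attribute(): is the requested bit set? */
static int
attr_is_set(RoleAttr attributes, RoleAttr attribute)
{
	return (attributes & attribute) != 0;
}
```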

Please let me know what you think, all feedback is greatly appreciated.

Thanks,
Adam

-- 
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
new file mode 100644
index d30612c..93eb2e6
*** a/src/backend/catalog/aclchk.c
--- b/src/backend/catalog/aclchk.c
*** aclcheck_error_type(AclResult aclerr, Oi
*** 3423,3448 
  }
  
  
- /* Check if given user has rolcatupdate privilege according to pg_authid */
- static bool
- has_rolcatupdate(Oid roleid)
- {
- 	bool		rolcatupdate;
- 	HeapTuple	tuple;
- 
- 	tuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(roleid));
- 	if (!HeapTupleIsValid(tuple))
- 		ereport(ERROR,
- (errcode(ERRCODE_UNDEFINED_OBJECT),
-  errmsg("role with OID %u does not exist", roleid)));
- 
- 	rolcatupdate = ((Form_pg_authid) GETSTRUCT(tuple))->rolcatupdate;
- 
- 	ReleaseSysCache(tuple);
- 
- 	return rolcatupdate;
- }
- 
  /*
   * Relay for the various pg_*_mask routines depending on object kind
   */
--- 3423,3428 
*** pg_class_aclmask(Oid table_oid, Oid role
*** 3630,3636 
  	if ((mask & (ACL_INSERT | ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE | ACL_USAGE)) &&
  		IsSystemClass(table_oid, classForm) &&
  		classForm->relkind != RELKIND_VIEW &&
! 		!has_rolcatupdate(roleid) &&
  		!allowSystemTableMods)
  	{
  #ifdef ACLDEBUG
--- 3610,3616 
  	if ((mask & (ACL_INSERT | ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE | ACL_USAGE)) &&
  		IsSystemClass(table_oid, classForm) &&
  		classForm->relkind != RELKIND_VIEW &&
! 		!role_has_attribute(roleid, ROLE_ATTR_CATUPDATE) &&
  		!allowSystemTableMods)
  	{
  #ifdef ACLDEBUG
*** pg_extension_ownercheck(Oid ext_oid, Oid
*** 5051,5056 
--- 5031,5058 
  }
  
  /*
+  * Check whether the specified role has a specific role attribute.
+  */
+ bool
+ role_has_attribute(Oid roleid, RoleAttr attribute)
+ {
+ 	RoleAttr	attributes;
+ 	HeapTuple	tuple;
+ 
+ 	tuple = SearchSysCache1(AUTHOID, ObjectIdGetDatum(roleid));
+ 
+ 	if (!HeapTupleIsValid(tuple))
+ 		ereport(ERROR,
+ (errcode(ERRCODE_UNDEFINED_OBJECT),
+  errmsg("role with OID %u does not exist", roleid)));
+ 
+ 	attributes = ((Form_pg_authid) GETSTRUCT(tuple))->rolattr;
+ 	ReleaseSysCache(tuple);
+ 
+ 	return ((attributes & attribute) > 0);
+ }
+ 
+ /*
   * Check whether specified role has CREATEROLE privilege (or is a superuser)
   *
   * Note: roles do not have owners per se; instead we use this test in
*** pg_extension_ownercheck(Oid ext_oid, Oid
*** 5064,5102 
  bool
  has_createrole_privilege(Oid roleid)
  {
- 	bool		result = false;
- 	HeapTuple	utup;
- 
  	/* Superusers bypass all permission checking. */
  	if (superuser_arg(roleid))
  		return true;
  
! 	utup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(roleid));
! 	if (HeapTupleIsValid(utup))
! 	{
! 		result = ((Form_pg_authid) GETSTRUCT(utup))->rolcreaterole;
! 		ReleaseSysCache(utup);
! 	}
! 	return result;
  }
  
  bool
  has_bypassrls_privilege(Oid roleid)
  {
- 	bool		result = false;
- 	HeapTuple	utup;
- 
  	/* Superusers bypass all permission checking. */
  	if (superuser_arg(roleid))
  		return true;
  
! 	utup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(roleid));
! 	if (HeapTupleIsValid(utup))
! 	{
! 		result = ((Form_pg_authid) GETSTRUCT(utup))->rolbypassrls;
! 		ReleaseSysCache(utup);
! 	}
! 	return result;
  }
  
  /*
--- 5066,5089 
  bool
  has_createrole_privilege(Oid roleid)
  {
  	/* Superusers bypass all permission checking. */
  	if (superuser_arg(roleid))
  		return true;
  
! 	return role_has_attribute(roleid, ROLE_ATTR_CREATEROLE);
  }
  
+ /*
+  * Check whether specified role has BYPASSRLS privilege.
+  */
  bool
  has_bypassrls_privilege(Oid roleid)
  {
  	/* Superusers bypass all permission checking. */
  	if (superuser_arg(roleid))
  		return true;
  
! 	return role_has_attribute(roleid, ROLE_ATTR_BYPASSRLS);
  }
  
  /*
diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl
new file mode 100644
index ca89879..2929b66
*** a/src/backend/catalog/genbki.pl
--- b/src/backend/catalog/genbki.pl
*** my $BOOTSTRAP_SUPERUSERID =
*** 90,95 
--- 

Re: [HACKERS] Marginal performance improvement: replace bms_first_member loops

2014-11-28 Thread David Rowley
On Sat, Nov 29, 2014 at 8:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 David Rowley dgrowle...@gmail.com writes:
  A while back when I was benchmarking the planner time during my trials
 with
  anti/semi join removals, I wrote a patch to change the usage pattern for
  cases such as:

  if (bms_membership(a) != BMS_SINGLETON)
return; /* nothing to do */
  singleton = bms_singleton_member(a);
  ...

  Into:

  if (!bms_get_singleton(a, &singleton))
return; /* nothing to do */
  ...

  Which means 1 function call and loop over the bitmapset, rather than 2
  function calls and 2 loops over the set when the set is a singleton.

 I went ahead and committed this with some cosmetic adjustments.


Thank you!


 I'm not sure about there being any performance win in existing use-cases,
 but it seems worth doing on notational grounds anyway.


My original benchmarks for this were based on the semi/anti join patch I
was working on at the time

Benchmarks here:
http://www.postgresql.org/message-id/CAApHDvo21-b+PU=sc9b1qieg3yl4t919i4wjzfnfp6upxs9...@mail.gmail.com

Though the existing left join removal code should see similar speed ups,
the time spent in the join removal code path only amounted to between
0.02% and 0.2% of total planning time with my test cases.

Regards

David Rowley


[HACKERS] New error code to track unsupported contexts

2014-11-28 Thread Michael Paquier
Hi all,

When pg_event_trigger_dropped_objects() is run outside the context of an
event trigger, the error code currently returned is
ERRCODE_FEATURE_NOT_SUPPORTED. Wouldn't it be better to have a dedicated
error code for such out-of-context calls instead? It seems that it would
be a good thing to have more error verbosity for situations like the
case above. Note that this idea has been mentioned on this ML a couple
of weeks back. In any case, attached is a patch showing the idea.

Opinions? Is that worth having?
Regards,
-- 
Michael
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 6a3002f..154bed9 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -1218,7 +1218,7 @@ pg_event_trigger_dropped_objects(PG_FUNCTION_ARGS)
 	if (!currentEventTriggerState ||
 		!currentEventTriggerState->in_sql_drop)
 		ereport(ERROR,
-(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+(errcode(ERRCODE_CONTEXT_NOT_SUPPORTED),
 		 errmsg("%s can only be called in a sql_drop event trigger function",
 				"pg_event_trigger_dropped_objects()")));
 
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index 62ba092..dcd8489 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -114,6 +114,7 @@ Section: Class 09 - Triggered Action Exception
 Section: Class 0A - Feature Not Supported
 
 0A000    E    ERRCODE_FEATURE_NOT_SUPPORTED                                  feature_not_supported
+0A001    E    ERRCODE_CONTEXT_NOT_SUPPORTED                                  context_not_supported
 
 Section: Class 0B - Invalid Transaction Initiation
 
