Re: [HACKERS] [WIP] showing index maintenance on EXPLAIN

2014-05-09 Thread Thom Brown
On 8 May 2014 01:00, Jaime Casanova ja...@2ndquadrant.com wrote:

 Hi,

 This patch implements $subject only when ANALYZE and VERBOSE are on.
 I made it that way because for years nobody seemed interested in this
 info (at least no one implemented it), so I decided that maybe it is
 too much information for most people (actually btree indexes are
 normally very fast).

 But because we have GiST and GIN this became an interesting piece of
 data (there are other cases even when using btree).
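
For context, a minimal invocation shape that would exercise this (the
table and index here are hypothetical, and the extra per-index output
lines are patch-specific, so only the command form is sketched):

=# CREATE TABLE docs (id serial PRIMARY KEY, body tsvector);
=# CREATE INDEX docs_body_idx ON docs USING gin (body);
=# EXPLAIN (ANALYZE, VERBOSE)
   INSERT INTO docs (body) VALUES (to_tsvector('some text to index'));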

 Current patch doesn't have docs yet; I will add them soon.


+1

I've asked for something like this previously:
http://www.postgresql.org/message-id/CAA-aLv6O7QKdtZvacDJ+R_YjuLGCQTGzj6JXZTPmCnnxroX=7...@mail.gmail.com



-- 
Thom


Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Tom Lane
Gavin Flower gavinflo...@archidevsys.co.nz writes:
 On 09/05/14 15:34, Bruce Momjian wrote:
 Looks good.  I was thinking the jsonb_ops name could remain unchanged
 and the jsonb_hash_ops could be called jsonb_combo_ops as it combines
 the key and value into a single index entry.

 If you have 'jsonb_combo_ops' - then surely 'jsonb_op' should be called 
 'jsonb_xxx_ops', where the 'xxx' distinguishes that from 
 'jsonb_combo_ops'?  I guess, if any appropriate wording of 'xxx' was too 
 cumbersome, then it would be worse.

Yeah, I'm disinclined to change the opclass names now.  It's not apparent
to me that "combo" is a better choice than "hash" for the second opclass.

regards, tom lane




[HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Michael Paquier
Hi all,

I found the following error when playing with jsonb and json_build_object:
=# with jsonb_data as (select * from jsonb_each('{"aa" :
"po"}'::jsonb)) select json_build_object(key,value) from jsonb_data;
ERROR:  XX000: cache lookup failed for type 2147483650
LOCATION:  lookup_type_cache, typcache.c:193

I would have expected the result to be the same as in the case of json:
=# with json_data as (select * from json_each('{"aa" : "po"}'::json))
select json_build_object(key,value) from json_data;
 json_build_object 
-------------------
 {"aa" : "po"}
(1 row)

Regards,
-- 
Michael




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Fujii Masao
On Wed, May 7, 2014 at 11:55 AM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:

 On Tue, May 6, 2014 at 8:25 PM, Michael Paquier michael.paqu...@gmail.com
 wrote:

 On Wed, May 7, 2014 at 8:22 AM, Andres Freund and...@2ndquadrant.com
 wrote:
  Uh. They're different:
 
  Datum
  timestamp_hash(PG_FUNCTION_ARGS)
  {
      /* We can use either hashint8 or hashfloat8 directly */
  #ifdef HAVE_INT64_TIMESTAMP
      return hashint8(fcinfo);
  #else
      return hashfloat8(fcinfo);
  #endif
  }
  note it's passing fcinfo, not the datum as you do. Same with
  time_hash. In fact your version crashes when used because it's
  dereferencing an int8 as a pointer inside hashfloat8.
 Thanks, didn't notice that fcinfo was used.


 Hi all,

 If it helps, I added some regression tests to the latest patch.

+DATA(insert OID = 3260 ( 403  pglsn_ops  PGNSP PGUID ));
+DATA(insert OID = 3261 ( 405  pglsn_ops  PGNSP PGUID ));

The patch looks good to me except the name of the index operator class.
I think that pg_lsn_ops is better than pglsn_ops because it's for the
pg_lsn data type.
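
For illustration, a sketch of what the proposed opclasses enable once
committed (table and column names are hypothetical; without the patch,
sorting or index creation on pg_lsn fails for lack of btree/hash support):

=# CREATE TABLE lsn_tbl (lsn pg_lsn);
=# CREATE INDEX ON lsn_tbl USING btree (lsn);  -- needs the new btree opclass
=# CREATE INDEX ON lsn_tbl USING hash (lsn);   -- needs the new hash opclass
=# SELECT DISTINCT lsn FROM lsn_tbl ORDER BY lsn;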

Regards,

-- 
Fujii Masao




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Andres Freund
On 2014-05-09 22:01:07 +0900, Fujii Masao wrote:
 On Wed, May 7, 2014 at 11:55 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 
  On Tue, May 6, 2014 at 8:25 PM, Michael Paquier michael.paqu...@gmail.com
  wrote:
 
  On Wed, May 7, 2014 at 8:22 AM, Andres Freund and...@2ndquadrant.com
  wrote:
   Uh. They're different:
  
   Datum
   timestamp_hash(PG_FUNCTION_ARGS)
   {
       /* We can use either hashint8 or hashfloat8 directly */
   #ifdef HAVE_INT64_TIMESTAMP
       return hashint8(fcinfo);
   #else
       return hashfloat8(fcinfo);
   #endif
   }
   note it's passing fcinfo, not the datum as you do. Same with
   time_hash. In fact your version crashes when used because it's
   dereferencing an int8 as a pointer inside hashfloat8.
  Thanks, didn't notice that fcinfo was used.
 
 
  Hi all,
 
  If it helps, I added some regression tests to the latest patch.
 
 +DATA(insert OID = 3260 ( 403  pglsn_ops  PGNSP PGUID ));
 +DATA(insert OID = 3261 ( 405  pglsn_ops  PGNSP PGUID ));
 
 The patch looks good to me except the name of the index operator class.

FWIW, I've tested and looked through the patch as well.

 I think that pg_lsn_ops is better than pglsn_ops because it's for the
 pg_lsn data type.

You're right, that's marginally prettier.

You plan to commit it?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 I found the following error when playing with jsonb and json_build_object:
 =# with jsonb_data as (select * from jsonb_each('{"aa" :
 "po"}'::jsonb)) select json_build_object(key,value) from jsonb_data;
 ERROR:  XX000: cache lookup failed for type 2147483650
 LOCATION:  lookup_type_cache, typcache.c:193

The immediate problem seems to be that add_json() did not get taught
that jsonb is of TYPCATEGORY_JSON; somebody missed updating that copy
of logic that's been copied and pasted several times too many, IMNSHO.

However, now that I look at this code, it seems like it's got more
problems than that:

* it will be fooled utterly by domains over types it's interested in.

* there is nothing stopping somebody from making user-defined types
with category 'j' or 'c', which will confuse it even more.

regards, tom lane




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Andres Freund
Hi,

On 2014-05-09 21:40:07 +0900, Michael Paquier wrote:
 Hi all,
 
 I found the following error when playing with jsonb and json_build_object:
 =# with jsonb_data as (select * from jsonb_each('{"aa" :
 "po"}'::jsonb)) select json_build_object(key,value) from jsonb_data;
 ERROR:  XX000: cache lookup failed for type 2147483650
 LOCATION:  lookup_type_cache, typcache.c:193
 
 I would have expected the result to be the same as in the case of json:
 =# with json_data as (select * from json_each('{"aa" : "po"}'::json))
 select json_build_object(key,value) from json_data;
  json_build_object 
 -------------------
  {"aa" : "po"}
 (1 row)

Whoa. There are two weird things here:
a) jsonb has a typcategory of 'C', marking a composite type. json has
   'U'.

b) datum_to_json() thinks it's a good idea to use typcategory to decide
   how a type is output. Isn't that pretty fundamentally flawed? To
   detect composite types it really should look at typtype, no?
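
The mismatch is visible directly in the catalogs; a quick check, with
output as on an unpatched 9.4devel build where jsonb still carries the
composite category:

=# SELECT typname, typtype, typcategory FROM pg_type
   WHERE typname IN ('json', 'jsonb') ORDER BY typname;
 typname | typtype | typcategory 
---------+---------+-------------
 json    | b       | U
 jsonb   | b       | C
(2 rows)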

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Michael Paquier
On Fri, May 9, 2014 at 10:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
 +DATA(insert OID = 3260 ( 403  pglsn_ops  PGNSP PGUID ));
 +DATA(insert OID = 3261 ( 405  pglsn_ops  PGNSP PGUID ));

 The patch looks good to me except the name of the index operator class.
 I think that pg_lsn_ops is better than pglsn_ops because it's for the
 pg_lsn data type.
Well, yes... Btw, before doing anything with this patch, I think that
the version of Fabrizio could be used as a base, but the regression
tests should be reshuffled a bit...
-- 
Michael




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Andres Freund
On 2014-05-09 22:14:25 +0900, Michael Paquier wrote:
 On Fri, May 9, 2014 at 10:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
  +DATA(insert OID = 3260 ( 403  pglsn_ops  PGNSP PGUID ));
  +DATA(insert OID = 3261 ( 405  pglsn_ops  PGNSP PGUID ));
 
  The patch looks good to me except the name of the index operator class.
  I think that pg_lsn_ops is better than pglsn_ops because it's for the
  pg_lsn data type.
 Well, yes... Btw, before doing anything with this patch, I think that
 the version of Fabrizio could be used as a base, but the regression
 tests should be reshuffled a bit...

I honestly don't really see much point in the added tests. If anything I'd
rather see my tests from
http://archives.postgresql.org/message-id/20140506230722.GE24808%40awork2.anarazel.de
added. With some EXPLAIN (COSTS OFF) they'd test more.
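
A sketch of the test style being suggested (assumed shape only; the table
is hypothetical, and COSTS OFF keeps the plan output stable enough for the
regression suite):

=# SET enable_seqscan = off;
=# EXPLAIN (COSTS OFF)
   SELECT * FROM lsn_tbl WHERE lsn = '0/16AE7F8'::pg_lsn;
-- expected: an Index Scan with an Index Cond on the pg_lsn column,
-- showing the planner actually picks up the new btree opclass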

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] postgresql.auto.conf read from wrong directory

2014-05-09 Thread Fujii Masao
On Fri, May 9, 2014 at 1:06 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Thu, May 8, 2014 at 9:47 PM, Christoph Berg c...@df7cb.de wrote:
 Re: Andres Freund 2014-05-08 20140508145901.gb1...@awork2.anarazel.de
  Maybe this is nitpicking, but what happens when postgresql.auto.conf also
  includes the setting of data_directory? This is possible because we can
  set data_directory via ALTER SYSTEM now. Should we just ignore such a
  problematic setting in postgresql.auto.conf with a warning message?

 I think that's a case of "Doctor, it hurts when I do this. Doctor: don't
 do that then."

 I'd opt to forbid setting data_directory at ALTER SYSTEM time. For the
 other options, I agree with Andres that you should get to keep all
 parts if you manage to break it.

 There is no harm in forbidding data_directory, but similarly we can
 imagine people setting very large values for some config variables,
 which can later produce symptoms similar to this issue.

Yes, that can prevent the server from restarting at all. In this case,
to restart the server, we need to edit postgresql.auto.conf manually
and remove the problematic settings, even though the header of
postgresql.auto.conf warns "Do not edit this file manually!".
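
The failure mode under discussion, sketched (do not run this on a cluster
you care about; the path is made up):

=# ALTER SYSTEM SET data_directory = '/some/other/path';
-- The setting lands in postgresql.auto.conf, which is read at startup,
-- so the next restart can fail outright, and recovery then requires
-- hand-editing postgresql.auto.conf despite its "Do not edit" header.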

Regards,

-- 
Fujii Masao




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Whoa. There are two weird things here:
 a) jsonb has a typcategory of 'C', marking a composite type. json has
    'U'.

Yeah, that's flat out wrong.  I changed it before seeing your message.

 b) datum_to_json() thinks it's a good idea to use typcategory to decide
 how a type is output. Isn't that pretty fundamentally flawed?

Indeed.  I think the bit that uses TYPCATEGORY_NUMERIC as a hint to decide
whether the value can be left unquoted (assuming it looks like a number)
might be all right, but the rest of this seems pretty bogus.
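
The hint in question, illustrated with stock json output (the numeric
category is emitted unquoted, text is quoted):

=# SELECT json_build_object('n', 42, 't', 'x');
   json_build_object    
------------------------
 {"n" : 42, "t" : "x"}
(1 row)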

regards, tom lane




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Fujii Masao
On Fri, May 9, 2014 at 10:03 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2014-05-09 22:01:07 +0900, Fujii Masao wrote:
 On Wed, May 7, 2014 at 11:55 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 
  On Tue, May 6, 2014 at 8:25 PM, Michael Paquier michael.paqu...@gmail.com
  wrote:
 
  On Wed, May 7, 2014 at 8:22 AM, Andres Freund and...@2ndquadrant.com
  wrote:
   Uh. They're different:
  
   Datum
   timestamp_hash(PG_FUNCTION_ARGS)
   {
       /* We can use either hashint8 or hashfloat8 directly */
   #ifdef HAVE_INT64_TIMESTAMP
       return hashint8(fcinfo);
   #else
       return hashfloat8(fcinfo);
   #endif
   }
   note it's passing fcinfo, not the datum as you do. Same with
   time_hash. In fact your version crashes when used because it's
   dereferencing an int8 as a pointer inside hashfloat8.
  Thanks, didn't notice that fcinfo was used.
 
 
  Hi all,
 
   If it helps, I added some regression tests to the latest patch.

 +DATA(insert OID = 3260 ( 403  pglsn_ops  PGNSP PGUID ));
 +DATA(insert OID = 3261 ( 405  pglsn_ops  PGNSP PGUID ));

 The patch looks good to me except the name of the index operator class.

 FWIW, I've tested and looked through the patch as well.

 I think that pg_lsn_ops is better than pglsn_ops because it's for the
 pg_lsn data type.

 You're right, that's marginally prettier.

 You plan to commit it?

Yes, unless many people object to the commit.

Michael,
You're now modifying the patch?

Regards,

-- 
Fujii Masao




Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Bruce Momjian
On Fri, May  9, 2014 at 07:04:17AM -0400, Tom Lane wrote:
 Gavin Flower gavinflo...@archidevsys.co.nz writes:
  On 09/05/14 15:34, Bruce Momjian wrote:
  Looks good.  I was thinking the jsonb_ops name could remain unchanged
  and the jsonb_hash_ops could be called jsonb_combo_ops as it combines
  the key and value into a single index entry.
 
  If you have 'jsonb_combo_ops' - then surely 'jsonb_op' should be called 
  'jsonb_xxx_ops', where the 'xxx' distinguishes that from 
  'jsonb_combo_ops'?  I guess, if any appropriate wording of 'xxx' was too 
  cumbersome, then it would be worse.
 
 Yeah, I'm disinclined to change the opclass names now.  It's not apparent
 to me that "combo" is a better choice than "hash" for the second opclass.

Well, if we are optionally hashing json_ops for long strings, what does
jsonb_hash_ops do uniquely with hashing?  Does it always hash, while
json_ops optionally hashes?  Is that the distinguishing characteristic? 
It seemed the _content_ of the indexed value was more important, rather
than the storage method.

Should jsonb_hash_ops do only optional hashing too for long strings?

Also, with json_ops, when you index '{"exterior" : "white", "interior":
"blue"}', if you query for {"exterior" : "blue"}, does it match and have
to be rechecked in the heap because the index doesn't know which values
go with which keys?
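
Bruce's scenario, sketched with the opclass names as they stand in this
thread (table and index names are made up; both are GIN opclasses, with
jsonb_ops the default):

=# CREATE TABLE cars (doc jsonb);
=# CREATE INDEX cars_gin ON cars USING gin (doc);  -- default jsonb_ops
=# CREATE INDEX cars_gin_hash ON cars USING gin (doc jsonb_hash_ops);
=# SELECT * FROM cars WHERE doc @> '{"exterior" : "blue"}';
-- jsonb_ops stores keys and values as separate index entries, so a
-- containment probe like this is lossy and gets rechecked against the heap.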

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Tom Lane
I wrote:
 Andres Freund and...@2ndquadrant.com writes:
 b) datum_to_json() thinks it's a good idea to use typcategory to decide
 how a type is output. Isn't that pretty fundamentally flawed?

 Indeed.  I think the bit that uses TYPCATEGORY_NUMERIC as a hint to decide
 whether the value can be left unquoted (assuming it looks like a number)
 might be all right, but the rest of this seems pretty bogus.

Actually, that would be a security hole if it weren't that CREATE TYPE for
new base types is superuser-only.  Otherwise a user-defined type could
fool this logic with a malicious choice of typcategory.  jsonb itself was
darn close to being a malicious choice of typcategory --- it's entirely
accidental that Michael's example didn't lead to a crash or even more
interesting stuff, since the code was trying to process a jsonb as though
it were a regular composite type.  Other choices of typcategory could have
sent the code into the array path for something that's not an array, or
have allowed escaping to be bypassed for something that's not json, etc.

In short, there are defined ways to decide if a type is array or
composite, and this ain't how.

After further reflection I think we should lose the TYPCATEGORY_NUMERIC
business too.  ruleutils.c hard-wires the set of types it will consider
to be numeric, and I see no very good reason not to do likewise here.
That will remove the need to look up the typcategory at all.

So we need to:

1. Refactor so there's only one copy of the control logic.

2. Smash domains to their base types.

3. Identify boolean, numeric, and json types by direct tests of type OID.

4. Identify array and composite types using standard methods.

Anybody see other problems to fix here?
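
For reference, the catalog-level distinctions that points 3 and 4 rely on
can be sketched in SQL (the C code would go through the usual syscache
helpers, but the underlying pg_type fields are):

=# SELECT typname,
          typtype = 'c'                AS is_composite,
          typelem <> 0 AND typlen = -1 AS is_array
   FROM pg_type
   WHERE typname IN ('int4', '_int4', 'pg_class');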

regards, tom lane




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-05-09 Thread Robert Haas
On Sat, May 3, 2014 at 4:31 AM, Dave Page dave.p...@enterprisedb.com wrote:
 Hamid@EDB; Can you please have someone configure anole to build git
 head as well as the other branches? Thanks.

The test_shm_mq regression tests hung on this machine this morning.
Hamid was able to give me access to log in and troubleshoot.
Unfortunately, I wasn't able to completely track down the problem
before accidentally killing off the running cluster, but it looks like
test_shm_mq_pipelined() tried to start 3 background workers and the
postmaster only ever launched one of them, so the test just sat there
and waited for the other two workers to start.  At this point, I have
no idea what could cause the postmaster to be asleep at the switch
like this, but it seems clear that's what happened.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Archive recovery won't be completed on some situation.

2014-05-09 Thread Fujii Masao
On Thu, Mar 20, 2014 at 11:38 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Kyotaro HORIGUCHI wrote:
 Hi, I confirmed that 82233ce7ea4 surely did it.

 At Wed, 19 Mar 2014 09:35:16 -0300, Alvaro Herrera wrote
  Fujii Masao wrote:
   On Wed, Mar 19, 2014 at 7:57 PM, Heikki Linnakangas
   hlinnakan...@vmware.com wrote:
 
9.4 cancels backup mode even on immediate shutdown so the
operation causes no problem, but 9.3 and before don't.
   
Hmm, I don't think we've changed that behavior in 9.4.
  
   ISTM 82233ce7ea42d6ba519aaec63008aff49da6c7af changed immediate
   shutdown that way.
 
  Uh, interesting.  I didn't see that secondary effect.  I hope it's not
  for ill?

 The crucial factor for the behavior change is that pmdie no longer
 exits immediately on SIGQUIT. 'case SIGQUIT:' in pmdie() ended with
 ExitPostmaster(0) before the patch, but now it ends with
 'PostmasterStateMachine(); break;' and so continues to run with
 pmState = PM_WAIT_BACKENDS, similar to SIGINT (fast shutdown).

 Eventually, pmState changes to PM_NO_CHILDREN via PM_WAIT_DEAD_END on
 SIGCHLDs from non-significant processes, and then CancelBackup() runs.

 Judging from what was being said on the thread, it seems that running
 CancelBackup() after an immediate shutdown is better than not doing it,
 correct?

This is listed as a 9.4 Open Item, but no one seems to want to revert
this change.
So I'll drop this from the Open Item list barring objections.

Regards,

-- 
Fujii Masao




Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Bruce Momjian
On Fri, May  9, 2014 at 09:53:36AM -0400, Bruce Momjian wrote:
 On Fri, May  9, 2014 at 07:04:17AM -0400, Tom Lane wrote:
  Gavin Flower gavinflo...@archidevsys.co.nz writes:
   On 09/05/14 15:34, Bruce Momjian wrote:
   Looks good.  I was thinking the jsonb_ops name could remain unchanged
   and the jsonb_hash_ops could be called jsonb_combo_ops as it combines
   the key and value into a single index entry.
  
   If you have 'jsonb_combo_ops' - then surely 'jsonb_op' should be called 
   'jsonb_xxx_ops', where the 'xxx' distinguishes that from 
   'jsonb_combo_ops'?  I guess, if any appropriate wording of 'xxx' was too 
   cumbersome, then it would be worse.
  
  Yeah, I'm disinclined to change the opclass names now.  It's not apparent
  to me that "combo" is a better choice than "hash" for the second opclass.
 
 Well, if we are optionally hashing json_ops for long strings, what does
 jsonb_hash_ops do uniquely with hashing?  Does it always hash, while
 json_ops optionally hashes?  Is that the distinguishing characteristic? 
 It seemed the _content_ of the indexed value was more important, rather
 than the storage method.

Also, are people going to think that jsonb_hash_ops creates a hash
index, which is not crash safe, even though it is a GIN index?  Do we
have this hash confusion anywhere else?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Greg Stark
On Fri, May 9, 2014 at 2:53 PM, Bruce Momjian br...@momjian.us wrote:
 Well, if we are optionally hashing json_ops for long strings, what does
 jsonb_hash_ops do uniquely with hashing?  Does it always hash, while
 json_ops optionally hashes?  Is that the distinguishing characteristic?
 It seemed the _content_ of the indexed value was more important, rather
 than the storage method.

 Should jsonb_hash_ops do only optional hashing too for long strings?


Well, the question seems to me to be: if we're always doing a recheck,
then what advantage is there to not hashing everything?

-- 
greg




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Andrew Dunstan


On 05/09/2014 10:07 AM, Tom Lane wrote:

After further reflection I think we should lose the TYPCATEGORY_NUMERIC
business too.  ruleutils.c hard-wires the set of types it will consider
to be numeric, and I see no very good reason not to do likewise here.
That will remove the need to look up the typcategory at all.

So we need to:

1. Refactor so there's only one copy of the control logic.

2. Smash domains to their base types.

3. Identify boolean, numeric, and json types by direct tests of type OID.

4. Identify array and composite types using standard methods.

Anybody see other problems to fix here?



I guess this is my fault. I recall some discussions when some of this
was first being written about the best way to make the type-based
decisions; I'm not sure at this remove whether they were on list or off.
The origin of it is in 9.2, so if you're going to adjust it you should
probably go back that far.


I was aware of the domain problem, but in 2 years or so nobody has 
complained about it, so I guess nobody is defining domains over json.


cheers

andrew







Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Andres Freund
On 2014-05-09 10:26:48 -0400, Bruce Momjian wrote:
 On Fri, May  9, 2014 at 09:53:36AM -0400, Bruce Momjian wrote:
  On Fri, May  9, 2014 at 07:04:17AM -0400, Tom Lane wrote:
   Gavin Flower gavinflo...@archidevsys.co.nz writes:
On 09/05/14 15:34, Bruce Momjian wrote:
Looks good.  I was thinking the jsonb_ops name could remain unchanged
and the jsonb_hash_ops could be called jsonb_combo_ops as it combines
the key and value into a single index entry.
   
If you have 'jsonb_combo_ops' - then surely 'jsonb_op' should be called 
'jsonb_xxx_ops', where the 'xxx' distinguishes that from 
'jsonb_combo_ops'?  I guess, if any appropriate wording of 'xxx' was 
too 
cumbersome, then it would be worse.
   
   Yeah, I'm disinclined to change the opclass names now.  It's not apparent
   to me that "combo" is a better choice than "hash" for the second opclass.
  
  Well, if we are optionally hashing json_ops for long strings, what does
  jsonb_hash_ops do uniquely with hashing?  Does it always hash, while
  json_ops optionally hashes?  Is that the distinguishing characteristic? 
  It seemed the _content_ of the indexed value was more important, rather
  than the storage method.
 
 Also, are people going to think that jsonb_hash_ops creates a hash
 index, which is not crash safe, even though it is a GIN index?  Do we
 have this hash confusion anywhere else?

The operator class has to be specified after the USING GIN in CREATE
INDEX, so I think the risk of confusion is negligible.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] A couple logical decoding fixes/patches

2014-05-09 Thread Robert Haas
On Thu, May 8, 2014 at 12:29 PM, Andres Freund and...@2ndquadrant.com wrote:
 Patch 01: Fix a couple of embarrassing typos. Most of them are harmless,
 but one isn't and can lead to crashing or decoding wrong data.

Committed.

 Patch 02: Don't crash with an Assert() failure if wal_level=logical but
 max_replication_slots=0.

Committed.

 Patch 03: Add valgrind suppression for writing out padding bytes. That's
 better than zeroing the data from the get-go because uninitialized
 accesses are still detected.

I have no understanding of valgrind suppressions.  I can commit this
blindly if nobody else wants to pick it up.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] 9.4 release notes

2014-05-09 Thread Andres Freund
Hi,

I've attached a fair number of fixes for the current state of the
release notes. Many of them fix factual errors; others are more
minor.
I think the whole 'logical decoding' section will need an entire
rewrite, but I'll do that separately.

I've added a couple of <!-- FIXME --> markers for things that are clearly
unfinished.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From a0359bf8f1f0231f314a6deb537eb7d58075c03c Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Fri, 9 May 2014 16:48:42 +0200
Subject: [PATCH] Release note improvements.

---
 doc/src/sgml/release-9.4.sgml | 178 +-
 1 file changed, 88 insertions(+), 90 deletions(-)

diff --git a/doc/src/sgml/release-9.4.sgml b/doc/src/sgml/release-9.4.sgml
index 21c6fc7..03bbb12 100644
--- a/doc/src/sgml/release-9.4.sgml
+++ b/doc/src/sgml/release-9.4.sgml
@@ -29,8 +29,8 @@
 
 <listitem>
  <para>
-   Logical change-set extraction allows database
-   changes to be optionally recorded in <emphasis>logical</> format
+   <link linkend='logicaldecoding'>Logical decoding</link> allows database
+   changes to be streamed out in customizable format.
  </para>
 </listitem>
 
@@ -263,6 +263,15 @@
  </para>
 </listitem>
 
+<listitem>
+ <para>
+  The maximum number <link linkend="bgworker">background workers</link>
+  that can be registered
+  with <function>RegisterBackgroundWorker()</function> is now limited by
+  <link linkend="guc-max-worker-processes"><varname>max_worker_processes</></link>
+ </para>
+</listitem>
+
 </itemizedlist>
 
   </sect2>
@@ -452,15 +461,15 @@
 
   <listitem>
    <para>
-    <link linkend="vacuum-for-wraparound">Freeze</link>
-    tuples when tables are written with <link
+    Attempt to <link linkend="vacuum-for-wraparound">freeze</link>
+    tuples when tables are rewritten with <link
     linkend="SQL-CLUSTER"><command>CLUSTER</></link> or <link
     linkend="SQL-VACUUM"><command>VACUUM FULL</></link> (Robert Haas,
     Andres Freund)
    </para>
 
    <para>
-    This avoids the need to freeze the tuples in the future.
+    This can avoid the need to freeze the tuples in the future.
    </para>
   </listitem>
 
@@ -545,12 +554,9 @@
 
   <listitem>
    <para>
-    Add <structname>xid</> and <link
-    linkend="ddl-system-columns"><structname>xmin</></link>
-    to system views <link
-    linkend="pg-stat-activity-view"><structname>pg_stat_activity</></link>
-    and <link
-    linkend="pg-stat-replication-view"><structname>pg_stat_replication</></link>
+    Add <varname>backend_xid</> and <varname>backend_xmin</> columns to
+    the system view <link linkend="pg-stat-activity-view"><structname>pg_stat_activity</></link>
+    and <varname>backend_xmin</> to <link linkend="pg-stat-replication-view"><structname>pg_stat_replication</></link>
     (Christian Kruse)
    </para>
   </listitem>
@@ -571,10 +577,10 @@
    </para>
 
    <para>
-    Such keys are faster and have improved security
-    over previous options.  New variable <link
-    linkend="guc-ssl-ecdh-curve"><varname>ssl_ecdh_curve</></link>
-    controls the curve that is used.
+    Such keys are faster and have improved security over previous
+    options. The new configuration
+    parameter <link linkend="guc-ssl-ecdh-curve"><varname>ssl_ecdh_curve</></link>
+    controls which curve is used.
    </para>
   </listitem>
 
@@ -617,15 +623,14 @@
 
   <listitem>
    <para>
-    Add <acronym>SQL</>-level command <link
+    Add <acronym>SQL</>-level <link
     linkend="SQL-ALTERSYSTEM"><command>ALTER SYSTEM</></link> command
-    to edit the <filename>postgresql.conf</> configuration file
-    (Amit Kapila)
+    to adjust server-wide settings (Amit Kapila)
    </para>
 
    <para>
-    Previously <filename>postgresql.conf</> could only be edited at
-    the file system level.
+    Previously such settings could only be changed by
+    editing <filename>postgresql.conf</> at the file system level.
    </para>
   </listitem>
 
@@ -680,8 +685,8 @@
    </para>
 
    <para>
-    Hint bits are not normally logged, except when checksums are
-    enabled.  This is useful for tools like <application>pg_rewind</>.
+    Hint bits are not normally logged, except when checksums are enabled.
+    This is useful for external tools like <application>pg_rewind</>.
    </para>
   </listitem>
 
@@ -702,9 +707,10 @@
    </para>
 
    <para>
-    Such libraries are auto-<link
-    linkend="SQL-LOAD"><command>LOAD</></link>'ed, unlike <link
-    linkend="guc-local-preload-libraries"><varname>local_preload_libraries</></link>.
+    In contrast
+    to <link linkend="guc-local-preload-libraries"><varname>local_preload_libraries</></link>
+    this parameter can load all shared libraries, not just ones in
+    the <literal>$libdir/plugins/</literal>.

Re: [HACKERS] A couple logical decoding fixes/patches

2014-05-09 Thread Andres Freund
Hi,

On 2014-05-09 10:49:09 -0400, Robert Haas wrote:
 Committed.

Thanks.

  Patch 03: Add valgrind suppression for writing out padding bytes. That's
  better than zeroing the data from the get-go because uninitialized
  accesses are still detected.
 
 I have no understanding of valgrind suppressions.  I can commit this
 blindly if nobody else wants to pick it up.

I think the only committer with previous experience in that area is
Noah. Noah?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Alvaro Herrera
Tomas Vondra wrote:

 So, if I get this right, the proposal is to have 7 animals:

It's your machine, so you decide what you want.  I'm only throwing out
some ideas.

 1) all branches/locales, frequent builds (every few hours)
   magpie  - gcc
   fulmar  - icc
   treepie - clang
 
 2) single branch/locale, CLOBBER, built once a week
   magpie2 - gcc
   fulmar2 - icc
   treepie - clang
 
 3) single branch/locale, recursive CLOBBER, built once a month

Check.  Not those 2 names though.

 I don't particularly mind the number of animals, although I was shooting
 for a lower number.

Consider that if the recursive clobber fails, we don't want that failure
to appear diluted among many successes of runs using the same animal
with non-recursive clobber.

 The only question is - should we use 3 animals for the recursive CLOBBER
 too? I mean, one for each compiler?

I guess it depends how likely we think that a different compiler will
change the behavior of the shared invalidation queue.  Somebody else
would have to answer that.  If not, then clearly we need only 5 animals.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Robert Haas
On Fri, May 9, 2014 at 11:18 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 I guess it depends how likely we think that a different compiler will
 change the behavior of the shared invalidation queue.  Somebody else
 would have to answer that.  If not, then clearly we need only 5 animals.

This may be heresy, but one of the things that drives me nuts about
the buildfarm is that the names of the animals are all weird stuff
that I've never heard of, and things on the same machine have
completely unrelated names.  Would it be crazy to think we might name
all of these animals in some way that lets people associate them with
each other?  e.g. brownbear, blackbear, polarbear, grizzlybear,
teddybear?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Andres Freund
On 2014-05-09 10:07:10 -0400, Tom Lane wrote:
 I wrote:
  Andres Freund and...@2ndquadrant.com writes:
  b) datum_to_json() thinks it's a good idea to use typcategory to decide
  how a type is output. Isn't that pretty fundamentally flawed?
 
  Indeed.  I think the bit that uses TYPCATEGORY_NUMERIC as a hint to decide
  whether the value can be left unquoted (assuming it looks like a number)
  might be all right, but the rest of this seems pretty bogus.
 
 Actually, that would be a security hole if it weren't that CREATE TYPE for
 new base types is superuser-only.

Yea. I actually wonder why CREATE TYPE seems to allow creating
TYPCATEGORY_COMPOSITE types - there really doesn't seem to be a use case.

 After further reflection I think we should lose the TYPCATEGORY_NUMERIC
 business too.  ruleutils.c hard-wires the set of types it will consider
 to be numeric, and I see no very good reason not to do likewise here.
 That will remove the need to look up the typcategory at all.

Maybe we should expose that list or the functionality in a neater way?
It's already been copied to test_decoding...

 So we need to:
 
 1. Refactor so there's only one copy of the control logic.
 
 2. Smash domains to their base types.
 
 3. Identify boolean, numeric, and json types by direct tests of type OID.
 
 4. Identify array and composite types using standard methods.
 
 Anybody see other problems to fix here?

Yea.
5)
 if (val_type < FirstNormalObjectId)
isn't fundamentally incorrect but imo should be replaced by something
like !IsCatalogType() (akin to IsCatalogRelation). At least if we decide
that hunk is safe from other POVs - I am not actually 100% sure yet.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Alvaro Herrera
Robert Haas wrote:
 On Fri, May 9, 2014 at 11:18 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  I guess it depends how likely we think that a different compiler will
  change the behavior of the shared invalidation queue.  Somebody else
  would have to answer that.  If not, then clearly we need only 5 animals.
 
 This may be heresy, but one of the things that drives me nuts about
 the buildfarm is that the names of the animals are all weird stuff
 that I've never heard of, and things on the same machine have
 completely unrelated names.  Would it be crazy to think we might name
 all of these animals in some way that lets people associate them with
 each other?  e.g. brownbear, blackbear, polarbear, grizzlybear,
 teddybear?

Sure.  I guess it'd be better that people notify somewhere the intention
to create many animals, somehow, so that we know to pick related names.
Right now the interface to requesting a new animal is 100% focused on an
individual animal.  Someone had several animals that were all moths, for
instance, IIRC.

Should we consider renaming Tomas' recent animals?  Not sure that this
would reduce confusion, and it might be heresy as well.  Andrew?

Would it help if the buildfarm page had pics of each animal next to its
name, or something like that?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 I guess this is my fault. I recall some discussions when some of this
 was first being written about the best way to make the type-based
 decisions; I'm not sure at this remove whether they were on list or off.
 The origin of it is in 9.2, so if you're going to adjust it you should
 probably go back that far.

Right, will back-patch.

 I was aware of the domain problem, but in 2 years or so nobody has 
 complained about it, so I guess nobody is defining domains over json.

Actually, I was more concerned about domains over other types.
For instance

regression=# create domain dd as int;
CREATE DOMAIN
regression=# select json_build_object('foo', 43);  
 json_build_object 
-------------------
 {"foo" : 43}
(1 row)

regression=# select json_build_object('foo', 43::dd);
 json_build_object 
-------------------
 {"foo" : "43"}
(1 row)

With the attached patch, you get 43 without any quotes, which seems
right to me.  However, given the lack of complaints, it might be better
to not make this behavioral change in the back branches.  I'll omit
the getBaseType() call from the back-patches.

Draft HEAD patch attached.

regards, tom lane

diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index 22ef402..a7364f3 100644
*** a/src/backend/utils/adt/json.c
--- b/src/backend/utils/adt/json.c
*************** typedef enum	/* contexts of JSON par
*** 48,53 ****
--- 48,65 ----
  	JSON_PARSE_END				/* saw the end of a document, expect nothing */
  } JsonParseContext;
  
+ typedef enum					/* type categories for datum_to_json */
+ {
+ 	JSONTYPE_NULL,				/* null, so we didn't bother to identify */
+ 	JSONTYPE_BOOL,				/* boolean (built-in types only) */
+ 	JSONTYPE_NUMERIC,			/* numeric (ditto) */
+ 	JSONTYPE_JSON,				/* JSON itself (and JSONB) */
+ 	JSONTYPE_ARRAY,				/* array */
+ 	JSONTYPE_COMPOSITE,			/* composite */
+ 	JSONTYPE_CAST,				/* something with an explicit cast to JSON */
+ 	JSONTYPE_OTHER				/* all else */
+ } JsonTypeCategory;
+ 
  static inline void json_lex(JsonLexContext *lex);
  static inline void json_lex_string(JsonLexContext *lex);
  static inline void json_lex_number(JsonLexContext *lex, char *s, bool *num_err);
*************** static void composite_to_json(Datum comp
*** 64,75 ****
  				  bool use_line_feeds);
  static void array_dim_to_json(StringInfo result, int dim, int ndims, int *dims,
  				  Datum *vals, bool *nulls, int *valcount,
! 				  TYPCATEGORY tcategory, Oid typoutputfunc,
  				  bool use_line_feeds);
  static void array_to_json_internal(Datum array, StringInfo result,
  					   bool use_line_feeds);
  static void datum_to_json(Datum val, bool is_null, StringInfo result,
! 			  TYPCATEGORY tcategory, Oid typoutputfunc, bool key_scalar);
  static void add_json(Datum val, bool is_null, StringInfo result,
  		 Oid val_type, bool key_scalar);
  
--- 76,91 ----
  				  bool use_line_feeds);
  static void array_dim_to_json(StringInfo result, int dim, int ndims, int *dims,
  				  Datum *vals, bool *nulls, int *valcount,
! 				  JsonTypeCategory tcategory, Oid outfuncoid,
  				  bool use_line_feeds);
  static void array_to_json_internal(Datum array, StringInfo result,
  					   bool use_line_feeds);
+ static void json_categorize_type(Oid typoid,
+ 					 JsonTypeCategory *tcategory,
+ 					 Oid *outfuncoid);
  static void datum_to_json(Datum val, bool is_null, StringInfo result,
! 			  JsonTypeCategory tcategory, Oid outfuncoid,
! 			  bool key_scalar);
  static void add_json(Datum val, bool is_null, StringInfo result,
  		 Oid val_type, bool key_scalar);
  
*************** lex_expect(JsonParseContext ctx, JsonLex
*** 143,156 ****
  		report_parse_error(ctx, lex);;
  }
  
- /*
-  * All the defined	type categories are upper case , so use lower case here
-  * so we avoid any possible clash.
-  */
- /* fake type category for JSON so we can distinguish it in datum_to_json */
- #define TYPCATEGORY_JSON 'j'
- /* fake category for types that have a cast to json */
- #define TYPCATEGORY_JSON_CAST 'c'
  /* chars to consider as part of an alphanumeric token */
  #define JSON_ALPHANUMERIC_CHAR(c)  \
  	(((c) >= 'a' && (c) <= 'z') || \
--- 159,164 ----
*************** extract_mb_char(char *s)
*** 1219,1232 ****
  }
  
  /*
!  * Turn a scalar Datum into JSON, appending the string to result.
   *
!  * Hand off a non-scalar datum to composite_to_json or array_to_json_internal
!  * as appropriate.
   */
  static void
  datum_to_json(Datum val, bool is_null, StringInfo result,
! 			  TYPCATEGORY tcategory, Oid typoutputfunc, bool key_scalar)
  {
  	char	   *outputstr;
  	text	   *jsontext;
--- 1227,1321 ----
  }
  
  /*
!  * Determine how we want to print values of a given type in datum_to_json.
   *
!  * Given the datatype OID, return its JsonTypeCategory, as well as the type's
!  * output function OID.  If the returned category is JSONTYPE_CAST, we
!  * return the OID of the type->JSON cast function instead.
!  */
! static void
! 

Re: [HACKERS] Cache lookup error when using jsonb, json_build_object and a WITH clause

2014-05-09 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Anybody see other problems to fix here?

 Yea.
 5)
  if (val_type < FirstNormalObjectId)
 isn't fundamentally incorrect but imo should be replaced by something
 like !IsCatalogType() (akin to IsCatalogRelation). At least if we decide
 that hunk is safe from other POVs - I am not actually 100% sure yet.

I didn't particularly like that either.  The test is off-by-one, for
one thing (a type created right at OID wraparound could have
FirstNormalObjectId).  However, it seems reasonable to avoid fruitless
syscache searches for builtin types, and I'm not seeing a lot of point
to wrapping this test in some macro.

regards, tom lane




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Robert Haas
On Fri, May 9, 2014 at 11:32 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas wrote:
 On Fri, May 9, 2014 at 11:18 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  I guess it depends how likely we think that a different compiler will
  change the behavior of the shared invalidation queue.  Somebody else
  would have to answer that.  If not, then clearly we need only 5 animals.

 This may be heresy, but one of the things that drives me nuts about
 the buildfarm is that the names of the animals are all weird stuff
 that I've never heard of, and things on the same machine have
 completely unrelated names.  Would it be crazy to think we might name
 all of these animals in some way that lets people associate them with
 each other?  e.g. brownbear, blackbear, polarbear, grizzlybear,
 teddybear?

 Sure.  I guess it'd be better that people notify somewhere the intention
 to create many animals, somehow, so that we know to pick related names.
 Right now the interface to requesting a new animal is 100% focused on an
 individual animal.  Someone had several animals that were all moths, for
 instance, IIRC.

 Should we consider renaming Tomas' recent animals?  Not sure that this
 would reduce confusion, and it might be heresy as well.  Andrew?

 Would it help if the buildfarm page had pics of each animal next to its
 name, or something like that?

I'm not sure how helpful pictures would really be, but I bet I'd have
more *fun* looking at the buildfarm status page.  :-)

I don't know that I have all the answers as to what would really be
best here.  If we were starting over I think a taxonomy might be more
useful than what we have today - e.g. mammals for Linux, avians for
BSD-derived systems, reptiles for other System V-derived systems, and
invertebrates for Windows.  But it's surely not worth renaming
everything now.  Some easy way to group things on the same actual
system might be worthwhile, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PQputCopyEnd doesn't adhere to its API contract

2014-05-09 Thread Robert Haas
On Thu, May 8, 2014 at 5:21 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Perhaps the text should be like this:

 The result is 1 if the termination message was sent; or in nonblocking
 mode, this may only indicate that the termination message was successfully
 queued.  (In nonblocking mode, to be certain that the data has been sent,
 you should next wait for write-ready and call <function>PQflush</>,
 repeating until it returns zero.)  Zero indicates that the function could
 not queue the termination message because of full buffers; this will only
 happen in nonblocking mode.  (In this case, wait for write-ready and try
 the PQputCopyEnd call again.)  If a hard error occurs, -1 is returned; you
 can use <function>PQerrorMessage</function> to retrieve details.

That looks pretty good.   However, I'm realizing this isn't the only
place where we probably need to clarify the language.  Just to take
one example near at hand, PQputCopyData may also return 1 when it's
only queued the data; it seems to try even less hard than PQputCopyEnd
to ensure that the data is actually sent.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PQputCopyEnd doesn't adhere to its API contract

2014-05-09 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 That looks pretty good.   However, I'm realizing this isn't the only
 place where we probably need to clarify the language.  Just to take
 one example near at hand, PQputCopyData may also return 1 when it's
 only queued the data; it seems to try even less hard than PQputCopyEnd
 to ensure that the data is actually sent.

Well, we certainly do NOT want a flush in PQputCopyData.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-09 Thread Alexander Korotkov
On Thu, May 8, 2014 at 11:45 AM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 05/08/2014 02:25 AM, Peter Geoghegan wrote:

 findJsonbValueFromSuperHeader()'s lowbound argument
 previously served to establish a low bound for searching when
 searching for multiple keys (so the second and subsequent
 user-supplied key could skip much of the object).


 Got that.

  In the case of
 jsonb_exists_any(), say, if you only have a reasonable expectation
 that about 1 key exists, and that happens to be the last key that the
 user passed to the text[] argument (to the existence/? operator), then
 n - 1 calls to what is now findJsonbValueFromContainer() (which now
 does not accept a lowbound) are wasted.


 Check.

  That's elem_count - 1
 top-level binary searches of the entire jsonb. Or elem_count such
 calls rather than 1 call (plus 1 sort of the supplied array) in the
 common case where jsonb_exists_any() will return false.


 Ok, but I don't see any big difference in that regard. It still called
 findJsonbValueFromContainer() elem_count times, before this commit. Each
 call was somewhat cheaper, because the lower bound of the binary search was
 initialized to where the previous search ended, but you still had to
 perform the search.

  Granted, that might not be that bad right now, given that it's only
 ever (say) elem_count or elem_count - 1 wasted binary searches through
 the *top* level, but that might not always be true.


 If we are ever to perform a deep search, I think we'll want to do much
 more to optimize that than just keep track of the lower bound. Like, start
 an iterator over the tree and check for all of the keys in one go.

  And even today,
 sorting a presumably much smaller user-passed lookup array once has to
 be cheaper than searching through the entire jsonb perhaps elem_count
 times per call.


 Well, the quick testing I did suggested otherwise. It's not a big
 difference, but sorting has all kinds of overhead, like pallocing the array
 to sort, copying the elements around etc. For a small array, the startup
 cost of sorting trumps the savings in the binary searches. Possibly the way
 the sorting was done was not optimal, and you could reduce the copying and
 allocations involved in that, but it's hardly worth the trouble.


With current head I can't load the delicious dataset into jsonb format. I
get a segfault. It looks like memory corruption.

$ gzip -c -d js.copy.gz | psql postgres -c 'copy js from stdin;'
WARNING:  problem in alloc set ExprContext: bogus aset link in block
0x14a846000, chunk 0x14a84d278
CONTEXT:  COPY js, line 246766
WARNING:  problem in alloc set ExprContext: bad single-chunk 0x14a804b18 in
block 0x14a7e6000
CONTEXT:  COPY js, line 1009820
WARNING:  problem in alloc set ExprContext: bogus aset link in block
0x14a7e6000, chunk 0x14a804b18
CONTEXT:  COPY js, line 1009820
server closed the connection unexpectedly
 This probably means the server terminated abnormally
before or while processing the request.

You can get dataset here:
http://mira.sai.msu.ru/~megera/tmp/js.copy.gz

--
With best regards,
Alexander Korotkov.


[HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-09 Thread Andres Freund
Hi,

postgres=# SELECT relpages FROM pg_class WHERE relname = 'large';
  relpages   
-------------
 -1804468224
(1 row)

postgres=# \dt+ large
                   List of relations
 Schema | Name  | Type  | Owner  | Size  | Description 
--------+-------+-------+--------+-------+-------------
 public | large | table | andres | 19 TB | 
(1 row)

That's nothing for 9.4 anymore, but shouldn't we make pg_class.relpages
an int8 (sounds slightly better than float to me) or somesuch?

I think most calculations actually work out because they're performed
after casting to BlockNumber, but ...
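
A quick sanity check of the arithmetic (a sketch; \dt+ rounds the size, so
the exact negative value above won't reproduce from 19 TB alone):

=# SELECT 19 * (1024::numeric ^ 4) / 8192 AS pages_in_19tb,
          2147483647                      AS int4_max;
-- roughly 2.55 billion 8 kB pages, well past the int4 range, so the
-- stored counter wraps around to a negative number.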

Greetings,

Andres Freund

PS: _mdfd_getseg() is considered harmful. Not fun once you have a couple
of hundred gigabytes.

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] How can we make beta testing better?

2014-05-09 Thread Josh Berkus
On 04/26/2014 12:39 PM, Jim Nasby wrote:
 On 4/17/14, 6:42 PM, Josh Berkus wrote:
 So we have some software we've been procrastinating on OSS'ing, which
 does:

 1) Takes full query CSV logs from a running postgres instance
 2) Runs them against a target instance in parallel
 3) Records response times for all queries
 
 Is that the stuff you'd worked on for us forever ago? I thought that was
 just pgreplay based, but now I don't remember.

Yes.  It's based around the same principles as pgreplay -- a replay from
logs -- but uses Python and multiprocessing to allow for much higher levels
of concurrency, as well as not completely halting due to deadlock errors.

However, this is all a bit of a tangent.  While we want to give users
better tools for testing, we also need some reasonable way to get them
*involved* in testing.  That's the main thing I was looking for feedback on.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [WIP] showing index maintenance on EXPLAIN

2014-05-09 Thread Jaime Casanova
On Thu, May 8, 2014 at 10:44 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Thu, May 8, 2014 at 12:01 PM, Jaime Casanova ja...@2ndquadrant.com wrote:
 On Wed, May 7, 2014 at 10:52 PM, Amit Kapila amit.kapil...@gmail.com wrote:

 Why capture only for Index Insert/Update and not for Read; is it
 because Read will always be fast, or because of implementation complexity?

 EXPLAIN ANALYZE already shows that on any SELECT that uses an index in
 some way. Or are you thinking on something else?

[...]

 Are you referring actual time in above print?

 The actual time is node execution time, which in the above kind of case
 will be: scanning the index + scanning the heap.  I think it is not the
 same as what you are planning to show for the Insert/Update case.


ah! Good point! My current case is because of write performance, but I
will look at it a little.

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Andrew Dunstan


On 05/09/2014 11:25 AM, Robert Haas wrote:

On Fri, May 9, 2014 at 11:18 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:

I guess it depends how likely we think that a different compiler will
change the behavior of the shared invalidation queue.  Somebody else
would have to answer that.  If not, then clearly we need only 5 animals.

This may be heresy, but one of the things that drives me nuts about
the buildfarm is that the names of the animals are all weird stuff
that I've never heard of, and things on the same machine have
completely unrelated names.  Would it be crazy to think we might name
all of these animals in some way that lets people associate them with
each other?  e.g. brownbear, blackbear, polarbear, grizzlybear,
teddybear?





I've done that a bit in the past. At one stage all my Windows animals 
were some sort of bat. There's nothing magical about the names. It's 
just a text field and can be whatever we like. I initially started with 
animals because it seemed like a category that was likely to supply a 
virtually endless list of names.


We could maybe use more generic names to start with and then add 
specialized names to extra animals on the same machine. But that's 
really pretty much a hack, and something I would criticize if shown it 
in a client's schema. If we want to be able to group machines on the 
same box then we should have a database table or field that groups them 
cleanly. That's going to require a bit of thought on how to do it with 
minimal disruption.


cheers

andrew





Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-09 Thread Peter Geoghegan
On Fri, May 9, 2014 at 10:27 AM, Alexander Korotkov
aekorot...@gmail.com wrote:
 With current head I can't load the delicious dataset into jsonb format. I got
 a segfault. It looks like memory corruption.

I'll look at this within the next couple of hours.

-- 
Peter Geoghegan




Re: [HACKERS] 9.4 checksum errors in recovery with gin index

2014-05-09 Thread Jeff Janes
On Wed, May 7, 2014 at 1:40 PM, Heikki Linnakangas
hlinnakan...@vmware.comwrote:

 On 05/07/2014 10:35 AM, Jeff Janes wrote:

 When recovering from a crash (with injection of a partial page write at
 time of crash) against 7c7b1f4ae5ea3b1b113682d4d I get a checksum
 verification failure.

 16396 is a gin index.

 If I have it ignore checksum failures, there is no apparent misbehavior.
   I'm trying to bisect it, but it could take a while and I thought someone
 might have some theories based on the log:

 29075  2014-05-06 23:29:51.411 PDT:LOG:  0: database system was not
 properly shut down; automatic recovery in progress
 29075  2014-05-06 23:29:51.411 PDT:LOCATION:  StartupXLOG, xlog.c:6361
 29075  2014-05-06 23:29:51.412 PDT:LOG:  0: redo starts at 11/323FE1C0
 29075  2014-05-06 23:29:51.412 PDT:LOCATION:  StartupXLOG, xlog.c:6600
 29075  2014-05-06 23:29:51.471 PDT:WARNING:  01000: page verification
 failed, calculated checksum 35967 but expected 7881
 29075  2014-05-06 23:29:51.471 PDT:CONTEXT:  xlog redo Delete list pages


 A-ha. The WAL record of list page deletion doesn't create full-page
 images of the deleted pages. That's pretty sensible, as a deleted page
 doesn't contain any data that needs to be retained. However, if a full-page
 image is not taken, then the page should be completely recreated from
 scratch at replay. It's not doing that.

 Thanks for the testing! I'll have a stab at fixing that tomorrow.
 Basically, ginRedoDeleteListPages() needs to re-initialize the deleted
 pages.
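
For readers following the thread, here is a rough sketch of the shape of
fix Heikki describes, assuming the 9.4-era redo API (XLogReadBuffer with
init = true) and the existing GIN page helpers; the function name is
hypothetical and the committed fix may differ in detail:

#include "postgres.h"

#include "access/gin_private.h"
#include "access/xlogutils.h"
#include "storage/bufmgr.h"

/* Illustrative only -- not the committed ginRedoDeleteListPages() */
static void
ginRedoDeleteListPageSketch(XLogRecPtr lsn, RelFileNode node, BlockNumber blkno)
{
	/* init = true: allocate the buffer without reading the old contents */
	Buffer		buffer = XLogReadBuffer(node, blkno, true);
	Page		page = BufferGetPage(buffer);

	/* Recreate the deleted page from scratch instead of trusting the disk */
	GinInitBuffer(buffer, GIN_DELETED);

	PageSetLSN(page, lsn);
	MarkBufferDirty(buffer);
	UnlockReleaseBuffer(buffer);
}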



It looks like it is solved now.

Thanks,

Jeff


[HACKERS] Weird behaviour with the new MOVE clause of ALTER TABLESPACE

2014-05-09 Thread Guillaume Lelarge
Hey,

I was working on adding support for the new MOVE clause of the ALTER
TABLESPACE statement to pgAdmin when I noticed this issue. See this
example:

Fresh git compilation, and new database on a new cluster:

$ createdb b1
$ psql b1
psql (9.4devel)
Type help for help.

b1=# CREATE TABLESPACE ts1 LOCATION '/opt/postgresql/tablespaces/ts94';
CREATE TABLESPACE
b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid
  AND spcname='pg_default';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c WHERE c.reltablespace=0;
 count 
---
   268
(1 row)

So, 268 objects in the default tablespace (which happens to be
pg_default) and none in ts1 (that's correct, it was just created).

Now, we move all objects from pg_default to ts1. My expectation was that
all user objects would afterwards be in the ts1 tablespace. And here is
what happens:

b1=# ALTER TABLESPACE pg_default MOVE ALL TO ts1;
ALTER TABLESPACE
b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid
  AND spcname='pg_default';
 count 
---
 0
(1 row)

b1=# SELECT count(*) FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1';
 count 
---
21
(1 row)

b1=# SELECT count(*) FROM pg_class c WHERE c.reltablespace=0;
 count 
---
   247
(1 row)

I have 21 objects in ts1 and 247 stayed in the default tablespace. I'm
not sure what I should find weird: that some objects were moved, or that
not all objects were moved :)

What's weirder is the objects themselves:

b1=# SELECT relkind, relname FROM pg_class c
  JOIN pg_tablespace t ON c.reltablespace=t.oid AND spcname='ts1'
  ORDER BY 1, 2;
 relkind | relname 
-+-
 i   | pg_toast_12619_index
 i   | pg_toast_12624_index
 i   | pg_toast_12629_index
 i   | pg_toast_12634_index
 i   | pg_toast_12639_index
 i   | pg_toast_12644_index
 i   | pg_toast_12649_index
 r   | sql_features
 r   | sql_implementation_info
 r   | sql_languages
 r   | sql_packages
 r   | sql_parts
 r   | sql_sizing
 r   | sql_sizing_profiles
 t   | pg_toast_12619
 t   | pg_toast_12624
 t   | pg_toast_12629
 t   | pg_toast_12634
 t   | pg_toast_12639
 t   | pg_toast_12644
 t   | pg_toast_12649
(21 rows)

In other words, all information_schema tables (and their toast tables
and the toast indexes) were moved. Why only them? AFAICT, there are no
other information_schema tables, only views, which obviously are not
affected by the ALTER TABLESPACE statement.

Should information_schema tables be moved and not pg_catalog ones? It
doesn't seem consistent to me.

I'm probably missing something obvious.

Thanks for any pointer.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-09 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 That's nothing for 9.4 anymore, but shouldn't we make pg_class.relpages
 an int8 (sounds slightly better than float to me) or somesuch?

No; those are really BlockNumbers, and have always been.  float4 would
lose information and float8 or int8 would waste space.  If we had an
unsigned int type it'd be better.  I suppose we could declare them as OID,
but that would probably confuse people no end.

regards, tom lane




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Tomas Vondra
On 9.5.2014 17:18, Alvaro Herrera wrote:
 Tomas Vondra wrote:
 
 So, if I get this right, the proposal is to have 7 animals:
 
 It's your machine, so you decide what you want.  I'm only throwing
 out some ideas.
 
 1) all branches/locales, frequent builds (every few hours)
    magpie  - gcc
    fulmar  - icc
    treepie - clang
 
 2) single branch/locale, CLOBBER, built once a week
    magpie2 - gcc
    fulmar2 - icc
    treepie - clang
 
 3) single branch/locale, recursive CLOBBER, built once a month
 
 Check.  Not those 2 names though.

Sure. That was just for illustration purposes.

 I don't particularly mind the number of animals, although I was
 shooting for lower number.
 
 Consider that if the recursive clobber fails, we don't want that
 failure to appear diluted among many successes of runs using the
 same animal with non-recursive clobber.
 
 The only question is - should we use 3 animals for the recursive
 CLOBBER too? I mean, one for each compiler?
 
 I guess it depends how likely we think that a different compiler
 will change the behavior of the shared invalidation queue.  Somebody
 else would have to answer that.  If not, then clearly we need only 5
 animals.

Well, I think you're forgetting CLOBBER_FREED_MEMORY - that's not just
about the invalidation queue. And I think we've been bitten by compilers
optimizing out parts of the code before (e.g. because we relied on
undefined behaviour).
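
A stock illustration (hypothetical code, not from PostgreSQL) of the kind
of undefined-behaviour reliance that a different compiler can silently
optimize away:

#include <limits.h>
#include <stdio.h>

/*
 * Signed overflow is undefined, so a compiler is free to fold this test
 * to 0 -- the "overflow happened" branch then disappears at higher -O
 * levels even though it appears to work at -O0.
 */
static int
wraps_when_incremented(int x)
{
	return x + 1 < x;
}

int
main(void)
{
	printf("%d\n", wraps_when_incremented(INT_MAX));
	return 0;
}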

regards
Tomas




Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Tomas Vondra
On 9.5.2014 20:09, Andrew Dunstan wrote:
 
 I've done that a bit in the past. At one stage all my Windows animals
 were some sort of bat. There's nothing magical about the names. It's
 just a text field and can be whatever we like. I initially started with
 animals because it seemed like a category that was likely to supply a
 virtually endless list of names.
 
 We could maybe use more generic names to start with and then add
 specialized names to extra animals on the same machine. But that's
 really pretty much a hack, and something I would criticize if shown it
 in a client's schema. If we want to be able to group machines on the
 same box then we should have a database table or field that groups them
 cleanly. That's going to require a bit of thought on how to do it with
 minimal disruption.

I'm not really sure what the purpose of this information would be. I
mean, why do we need to identify the animals running on the same
machine? And what if they run in different VMs on the same hardware?

And I certainly prefer animal names to e.g. animal001 and similar
naming schemes.

regards
Tomas




Re: [HACKERS] default opclass for jsonb (was Re: Call for GIST/GIN/SP-GIST opclass documentation)

2014-05-09 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 Well, the question seems to me to be: if we're always doing a recheck,
 then what advantage is there to not hashing everything?

Right now, there's not much.  But it seems likely to me that there will be
more JSON operators in future, and some of them might be able to make use
of the additional specificity of unhashed entries.  For example, it's only
a very arbitrary definitional choice for the exists operator (ie, not
looking into sub-objects) that makes jsonb_ops lossy for it.  We might
eventually build a recursive-exists-check operator for which the index
could be lossless, at least up to the string length where we start to
hash.

regards, tom lane




Re: [HACKERS] pg_class.relpages/allvisible probably shouldn't be a int4

2014-05-09 Thread Andres Freund
On May 9, 2014 10:37:49 PM CEST, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@2ndquadrant.com writes:
 That's nothing for 9.4 anymore, but shouldn't we make pg_class.relpages
 an int8 (sounds slightly better than float to me) or somesuch?

No; those are really BlockNumbers, and have always been.  float4 would
lose information and float8 or int8 would waste space.  If we had an
unsigned int type it'd be better.  I suppose we could declare them as
OID, but that would probably confuse people no end.

Well, negative numbers aren't great either, although admittedly it's not yet
affecting many...

I think the waste of storing 2*4 additional bytes isn't going to hurt much,
and adding a proper unsigned type doesn't sound like a small amount of work,
not to speak of overloading troubles.
I realize they are block numbers and cast in most places - that's why the
overflow doesn't seem to cause too many troubles.
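
For the archives, a tiny standalone illustration of the overflow under
discussion, assuming only that a block count (a uint32 BlockNumber) gets
reported through an int4 catalog column; the variable names are made up:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t	relpages = 3000000000u;			/* block count past INT32_MAX */
	int32_t		as_int4 = (int32_t) relpages;	/* typical two's-complement wrap */

	printf("%u blocks reported through int4: %d\n", relpages, as_int4);
	return 0;
}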

Andres

-- 
Please excuse brevity and formatting - I am writing this on my mobile phone.

Andres Freund  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Weird behaviour with the new MOVE clause of ALTER TABLESPACE

2014-05-09 Thread Stephen Frost
Guillaume,

* Guillaume Lelarge (guilla...@lelarge.info) wrote:
 Should information_schema tables be moved and not pg_catalog ones? it
 doesn't seem consistent to me.

The catalog tables are moved by changing the database's tablespace, eg:

ALTER DATABASE ... SET TABLESPACE

That also moves any objects which are not assigned to a specific
tablespace.

The question ends up being just which side of "is it part of the
catalog, or not?" the information schema falls on.  For this case, I
had considered those to *not* be part of the catalog, as they can be
moved independently of ALTER DATABASE ... SET TABLESPACE.

This is happily documented:

   System catalogs will not be moved by this command - individuals wishing to
   move a whole database should use ALTER DATABASE, or call ALTER TABLE on the
   individual system catalogs.  Note that relations in <literal>information_schema</literal>
   will be moved, just as any other normal database objects, if the user is the
   superuser or considered an owner of the relations in <literal>information_schema</literal>.

Thanks,

Stephen




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-05-09 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 The test_shm_mq regression tests hung on this machine this morning.

It looks like hamster may have a repeatable issue there as well,
since the last set of DSM code changes.

regards, tom lane




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-05-09 Thread Tom Lane
I wrote:
 It looks like hamster may have a repeatable issue there as well,
 since the last set of DSM code changes.

Ah, scratch that --- on closer inspection it looks like both failures
probably trace to out-of-disk-space.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-09 Thread Tom Lane
Alexander Korotkov aekorot...@gmail.com writes:
 With current head I can't load the delicious dataset into jsonb format. I got
 a segfault. It looks like memory corruption.

The proximate cause of this seems to be that reserveFromBuffer() fails
to consider the possibility that it needs to more-than-double the
current buffer size.  This change makes the crash go away for me:

diff --git a/src/backend/utils/adt/jsonb_util.c 
b/src/backend/utils/adt/jsonb_util.c
index 832a08d..0c4af04 100644
*** a/src/backend/utils/adt/jsonb_util.c
--- b/src/backend/utils/adt/jsonb_util.c
*** reserveFromBuffer(convertState *buffer, 
*** 1186,1192 ****
  	/* Make more room if needed */
  	if (buffer->len + len > buffer->allocatedsz)
  	{
! 		buffer->allocatedsz *= 2;
  		buffer->buffer = repalloc(buffer->buffer, buffer->allocatedsz);
  	}
  
--- 1186,1195 ----
  	/* Make more room if needed */
  	if (buffer->len + len > buffer->allocatedsz)
  	{
! 		do
! 		{
! 			buffer->allocatedsz *= 2;
! 		} while (buffer->len + len > buffer->allocatedsz);
  		buffer->buffer = repalloc(buffer->buffer, buffer->allocatedsz);
  	}
  

However, what it looks to me like we've got here is a very bad
reimplementation of StringInfo buffers.  There is for example no
integer-overflow checking here.  Rather than try to bring this code
up to speed, I think we should rip it out and use StringInfo.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-09 Thread Peter Geoghegan
On Fri, May 9, 2014 at 2:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 However, what it looks to me like we've got here is a very bad
 reimplementation of StringInfo buffers.  There is for example no
 integer-overflow checking here.  Rather than try to bring this code
 up to speed, I think we should rip it out and use StringInfo.

Heikki did specifically consider StringInfo buffers and said they were
not best suited to the task at hand. At the time I thought he meant
that he'd do something domain-specific to avoid unnecessary geometric
growth in the size of the buffer (I like to grow buffers to either
twice their previous size, or just big enough to fit the next thing,
whichever is larger), but that doesn't appear to be the case. Still,
it would be good to know what he meant before proceeding. It probably
had something to do with alignment.

Integer overflow checking probably isn't strictly necessary FWIW,
because there are limits, enforced at various points, on the size that
the buffer can grow to.

-- 
Peter Geoghegan




Re: [HACKERS] [COMMITTERS] pgsql: Clean up jsonb code.

2014-05-09 Thread Tom Lane
Peter Geoghegan p...@heroku.com writes:
 On Fri, May 9, 2014 at 2:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 However, what it looks to me like we've got here is a very bad
 reimplementation of StringInfo buffers.  There is for example no
 integer-overflow checking here.  Rather than try to bring this code
 up to speed, I think we should rip it out and use StringInfo.

 Heikki did specifically consider StringInfo buffers and said they were
 not best suited to the task at hand. At the time I thought he meant
 that he'd do something domain-specific to avoid unnecessary geometric
 growth in the size of the buffer (I like to grow buffers to either
 twice their previous size, or just big enough to fit the next thing,
 whichever is larger), but that doesn't appear to be the case. Still,
 it would be good to know what he meant before proceeding. It probably
 had something to do with alignment.

It looks to me like he wanted an API that would let him reserve space
separately from filling it, which is not in stringinfo.c but is surely
easily built on top of it.  For the moment, I've just gotten rid of
the buggy code fragment in favor of calling enlargeStringInfo, which
I trust to be right.
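
For reference, a rough sketch of what such a reserve-then-fill helper
built on StringInfo could look like; reserveFromStringInfo is a
hypothetical name, not an existing backend API:

#include "postgres.h"

#include "lib/stringinfo.h"

/*
 * Reserve len more bytes in buf and return their starting offset.
 * enlargeStringInfo() takes care of the doubling and the overflow
 * checks; the caller fills the space later via buf->data + offset.
 */
static int
reserveFromStringInfo(StringInfo buf, int len)
{
	int			offset = buf->len;

	enlargeStringInfo(buf, len);
	buf->len += len;
	return offset;
}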

We might at some point want to change the heuristics in
enlargeStringInfo, but two days before beta is not the time for that.

regards, tom lane




[HACKERS] Commitfest still shows pending patches?

2014-05-09 Thread Josh Berkus
How can this be?  We're bundling a beta in 2 days.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Commitfest still shows pending patches?

2014-05-09 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 How can this be?  We're bundling a beta in 2 days.

The remaining stuff in 2014-01 needs to get moved to 2014-06, I guess.
We're certainly not committing any of it at this point.

regards, tom lane




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Michael Paquier
On Fri, May 9, 2014 at 10:49 PM, Fujii Masao masao.fu...@gmail.com wrote:
 Yes, unless many people object to the commit.

 Michael,
 You're now modifying the patch?
Not within a couple of days.
-- 
Michael




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Tom Lane
Michael Paquier michael.paqu...@gmail.com writes:
 On Fri, May 9, 2014 at 10:49 PM, Fujii Masao masao.fu...@gmail.com wrote:
 Yes, unless many people object to the commit.
 
 Michael,
 You're now modifying the patch?
 Not within a couple of days.

I think it's really too late for this for 9.4.  At this point it's
less than 48 hours until beta1 wraps, and we do not have the bandwidth
to do anything but worry about stabilizing the features we've already
got.

regards, tom lane




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Fabrízio de Royes Mello
On Fri, May 9, 2014 at 10:18 AM, Andres Freund and...@2ndquadrant.com
wrote:

 On 2014-05-09 22:14:25 +0900, Michael Paquier wrote:
  On Fri, May 9, 2014 at 10:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
   +DATA(insert OID = 3260 (	403	pglsn_ops	PGNSP PGUID ));
   +DATA(insert OID = 3261 (	405	pglsn_ops	PGNSP PGUID ));
  
   The patch looks good to me except the name of the index operator class.
   I think that pg_lsn_ops is better than pglsn_ops because it's for the
   pg_lsn data type.
  Well, yes... Btw, before doing anything with this patch, I think that
  the version of Fabrizio could be used as a base, but the regression
  tests should be reshuffled a bit...

 I honestly don't really see much point in the added tests. If at all I'd
 rather see my tests from
 http://archives.postgresql.org/message-id/20140506230722.GE24808%40awork2.anarazel.de
 added. With some EXPLAIN (COSTS OFF) they'd test more.


Ok. In a few hours I'll add your suggestion and send it again.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Fabrízio de Royes Mello
On Fri, May 9, 2014 at 8:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Michael Paquier michael.paqu...@gmail.com writes:
  On Fri, May 9, 2014 at 10:49 PM, Fujii Masao masao.fu...@gmail.com
wrote:
  Yes unless many people object the commit.
 
  Michael,
  You're now modifying the patch?
  Not within a couple of days.

 I think it's really too late for this for 9.4.  At this point it's
 less than 48 hours until beta1 wraps, and we do not have the bandwidth
 to do anything but worry about stabilizing the features we've already
 got.


But it's a very small change with many benefits, and Michael acted very
proactively to make this happen.

In a few hours I'll fix the last two suggestions (Fujii and Andres) and
send it again.

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] Sending out a request for more buildfarm animals?

2014-05-09 Thread Tomas Vondra
On 9.5.2014 00:47, Tomas Vondra wrote:
 On 8.5.2014 23:48, Andrew Dunstan wrote:

 On 05/08/2014 05:21 PM, Alvaro Herrera wrote:
 Andrew Dunstan wrote:

 I really don't get what your objection to the setup is. And no, I
 don't want them to run concurrently, I'd rather spread out the
 cycles.
 I wasn't objecting, merely an observation. Note that Tomas
 mentioned he's okay with running 4 builds at once. My main point
 here, really, is that having a larger number of animals shouldn't
 be an impediment for a more complex permutation of configurations,
 if he's okay with doing that. I assume you wouldn't object to my
 approving four extra animals running on the same machine, if Tomas
 wants to go for that.
 
 So, if I get this right, the proposal is to have 7 animals:
 
 
 1) all branches/locales, frequent builds (every few hours)
   magpie  - gcc
   fulmar  - icc
   treepie - clang
 
 2) single branch/locale, CLOBBER, built once a week
   magpie2 - gcc
   fulmar2 - icc
   treepie - clang
 
 3) single branch/locale, recursive CLOBBER, built once a month

I've just noticed that the CLOBBER tests completed fine on the first
branch, but failed after the second one with this error:

Query for: stage=OK&animal=magpie&ts=1399599933
Target:
http://www.pgbuildfarm.org/cgi-bin/pgstatus.pl/3c4d6bf5c9ac87be37fcbd0d046f02ae5b39d09e
Status Line: 493 snapshot too old: Wed May  7 04:36:57 2014 GMT
Content:
snapshot to old: Wed May  7 04:36:57 2014 GMT

Now, I assume this happens because the tests' duration exceeds 24h (or
whatever limit is there), and it probably won't be a problem after
switching to the single branch/locale combination.

But won't that be a problem on the animal running tests with
CLOBBER_CACHE_RECURSIVELY? That's supposed to run much longer, and I
wouldn't be surprised if it exceeds 24h on a single branch/locale.

regards
Tomas




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Tom Lane
Fabrízio de Royes Mello fabriziome...@gmail.com writes:
 On Fri, May 9, 2014 at 8:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I think it's really too late for this for 9.4.  At this point it's
 less than 48 hours until beta1 wraps, and we do not have the bandwidth
 to do anything but worry about stabilizing the features we've already
 got.

 But it's a very small change with many benefits, and Michael acted very
 proactively to make this happen.

[ shrug... ]  proactive would have been doing this a month ago.
If we're going to ship a release, we have to stop taking new features
at some point, and we are really past that point for 9.4.

And, to be blunt, this is not important enough to hold up the release
for, nor to take any stability risks for.  It should go into the next
commitfest cycle where it can get a non-rushed review.

regards, tom lane




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Fabrízio de Royes Mello
On Fri, May 9, 2014 at 9:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Fabrízio de Royes Mello fabriziome...@gmail.com writes:
  On Fri, May 9, 2014 at 8:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I think it's really too late for this for 9.4.  At this point it's
  less than 48 hours until beta1 wraps, and we do not have the bandwidth
  to do anything but worry about stabilizing the features we've already
  got.

  But it's a very small change with many benefits, and Michael acted very
  proactively to make this happen.

 [ shrug... ]  proactive would have been doing this a month ago.
 If we're going to ship a release, we have to stop taking new features
 at some point, and we are really past that point for 9.4.

 And, to be blunt, this is not important enough to hold up the release
 for, nor to take any stability risks for.  It should go into the next
 commitfest cycle where it can get a non-rushed review.


I agree with you that it is too late to add *new features*.

But I agree with Andres when he said this is a regression introduced in the
pg_lsn patch.

So we'll release a version that breaks a simple query like this:

fabrizio=# SELECT DISTINCT (g.i||'/0')::pg_lsn f FROM generate_series(1,
100) g(i), generate_series(1, 5);
ERROR:  could not identify an equality operator for type pg_lsn
LINE 1: SELECT DISTINCT (g.i||'/0')::pg_lsn f FROM generate_series(1...
 ^

I've attached the latest version of this fix with Andres's and Fujii's
suggestions.


Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c
index d1448ae..c021a1e 100644
--- a/src/backend/utils/adt/pg_lsn.c
+++ b/src/backend/utils/adt/pg_lsn.c
@@ -14,6 +14,7 @@
 #include postgres.h
 
 #include funcapi.h
+#include access/hash.h
 #include libpq/pqformat.h
 #include utils/builtins.h
 #include utils/pg_lsn.h
@@ -153,6 +154,27 @@ pg_lsn_ge(PG_FUNCTION_ARGS)
 	PG_RETURN_BOOL(lsn1 >= lsn2);
 }
 
+/* handler for btree index operator */
+Datum
+pg_lsn_cmp(PG_FUNCTION_ARGS)
+{
+	XLogRecPtr a = PG_GETARG_LSN(0);
+	XLogRecPtr b = PG_GETARG_LSN(1);
+
+	if (a > b)
+		PG_RETURN_INT32(1);
+	else if (a == b)
+		PG_RETURN_INT32(0);
+	else
+		PG_RETURN_INT32(-1);
+}
+
+/* hash index support */
+Datum
+pg_lsn_hash(PG_FUNCTION_ARGS)
+{
+	return DirectFunctionCall1(hashint8, PG_GETARG_LSN(0));
+}
 
 /*--
  *	Arithmetic operators on PostgreSQL LSNs.
diff --git a/src/include/catalog/pg_amop.h b/src/include/catalog/pg_amop.h
index 8efd3be..35fd17e 100644
--- a/src/include/catalog/pg_amop.h
+++ b/src/include/catalog/pg_amop.h
@@ -513,6 +513,16 @@ DATA(insert (	2968  2950 2950 4 s 2977	403 0 ));
 DATA(insert (	2968  2950 2950 5 s 2975	403 0 ));
 
 /*
+ * btree pg_lsn_ops
+ */
+
+DATA(insert (	3260  3220 3220 1 s 3224	403 0 ));
+DATA(insert (	3260  3220 3220 2 s 3226	403 0 ));
+DATA(insert (	3260  3220 3220 3 s 3222	403 0 ));
+DATA(insert (	3260  3220 3220 4 s 3227	403 0 ));
+DATA(insert (	3260  3220 3220 5 s 3225	403 0 ));
+
+/*
  *	hash index _ops
  */
 
@@ -581,6 +591,8 @@ DATA(insert (	2231   1042 1042 1 s 1054 405 0 ));
 DATA(insert (	2235   1033 1033 1 s  974 405 0 ));
 /* uuid_ops */
 DATA(insert (	2969   2950 2950 1 s 2972 405 0 ));
+/* pg_lsn_ops */
+DATA(insert (	3261   3220 3220 1 s 3222 405 0 ));
 /* numeric_ops */
 DATA(insert (	1998   1700 1700 1 s 1752 405 0 ));
 /* array_ops */
diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h
index 198b126..369adb9 100644
--- a/src/include/catalog/pg_amproc.h
+++ b/src/include/catalog/pg_amproc.h
@@ -134,6 +134,7 @@ DATA(insert (	2789   27 27 1 2794 ));
 DATA(insert (	2968   2950 2950 1 2960 ));
 DATA(insert (	2994   2249 2249 1 2987 ));
 DATA(insert (	3194   2249 2249 1 3187 ));
+DATA(insert (	3260   3220 3220 1 3263 ));
 DATA(insert (	3522   3500 3500 1 3514 ));
 DATA(insert (	3626   3614 3614 1 3622 ));
 DATA(insert (	3683   3615 3615 1 3668 ));
@@ -174,6 +175,7 @@ DATA(insert (	2229   25 25 1 400 ));
 DATA(insert (	2231   1042 1042 1 1080 ));
 DATA(insert (	2235   1033 1033 1 329 ));
 DATA(insert (	2969   2950 2950 1 2963 ));
+DATA(insert (	3261   3220 3220 1 3262 ));
 DATA(insert (	3523   3500 3500 1 3515 ));
 DATA(insert (	3903   3831 3831 1 3902 ));
 DATA(insert (	4034   3802 3802 1 4045 ));
diff --git a/src/include/catalog/pg_opclass.h b/src/include/catalog/pg_opclass.h
index ecf7063..2298a83 100644
--- a/src/include/catalog/pg_opclass.h
+++ b/src/include/catalog/pg_opclass.h
@@ -215,6 +215,8 @@ DATA(insert (	2742	_reltime_ops		PGNSP PGUID 2745  1024 t 703 ));
 DATA(insert (	2742	_tinterval_ops		PGNSP PGUID 2745  1025 t 704 ));
 DATA(insert (	403		uuid_ops			PGNSP PGUID 2968  2950 t 0 ));
 DATA(insert (	405		uuid_ops			PGNSP 

[HACKERS] imprecise pg_basebackup documentation about excluded files

2014-05-09 Thread Peter Eisentraut
The pg_basebackup documentation says that only regular files and
directories are allowed in the data directory.  But it is more correct
to say that any other files are skipped.  Attached is a patch to correct
that.  I also added a link to the protocol documentation and added more
details there, also about pg_replslot handling.  Not sure exactly how
much detail we want to document, but it seems reasonable to make it
complete if we provide a list at all.
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index c37113c..3a2421b 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1997,12 +1997,29 @@ <title>Streaming Replication Protocol</title>
    </listitem>
    <listitem>
     <para>
+     various temporary files created during the operation of the PostgreSQL server
+    </para>
+   </listitem>
+   <listitem>
+    <para>
      <filename>pg_xlog</>, including subdirectories. If the backup is run
      with WAL files included, a synthesized version of <filename>pg_xlog</filename> will be
      included, but it will only contain the files necessary for the
      backup to work, not the rest of the contents.
     </para>
    </listitem>
+   <listitem>
+    <para>
+     <filename>pg_replslot</> is copied as an empty directory.
+    </para>
+   </listitem>
+   <listitem>
+    <para>
+     Files other than regular files and directories, such as symbolic
+     links and special device files, are skipped.  (Symbolic links
+     in <filename>pg_tblspc</filename> are maintained.)
+    </para>
+   </listitem>
   </itemizedlist>
   Owner, group and file mode are set if the underlying file system on
   the server supports it.
diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml
index 97b0af9..10c1743 100644
--- a/doc/src/sgml/ref/pg_basebackup.sgml
+++ b/doc/src/sgml/ref/pg_basebackup.sgml
@@ -571,8 +571,10 @@ <title>Notes</title>
   <para>
    The backup will include all files in the data directory and tablespaces,
    including the configuration files and any additional files placed in the
-   directory by third parties. Only regular files and directories are allowed
-   in the data directory, no symbolic links or special device files.
+   directory by third parties. But only regular files and directories are
+   copied.  Symbolic links (other than those used for tablespaces) and special
+   device files are skipped.  (See <xref linkend="protocol-replication"> for
+   the precise details.)
   </para>
 
   <para>



[HACKERS] min_recovery_apply_delay

2014-05-09 Thread Peter Eisentraut
Wouldn't a better name be recovery_min_apply_delay?

Other parameters are usually named something_min_something.




Re: [HACKERS] min_recovery_apply_delay

2014-05-09 Thread Fabrízio de Royes Mello
On Friday, May 9, 2014, Peter Eisentraut pete...@gmx.net
wrote:

 Wouldn't a better name be recovery_min_apply_delay?


+1

The original name was recovery_apply_delay, but at some point during review
it was changed to better reflect the limit.


 Other parameters are usually named something_min_something.


Ok

If you want, I can send a patch to change it.

Fabrízio Mello.


-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] postgresql.auto.conf read from wrong directory

2014-05-09 Thread Amit Kapila
On Fri, May 9, 2014 at 7:01 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Fri, May 9, 2014 at 1:06 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 There is no harm in forbidding data_directory, but similarly we can
 imagine people setting very large values for some config variables,
 which can later produce symptoms similar to this issue.

 Yes, that can prevent the server from restarting at all. In this case,
 to restart the server, we need to edit postgresql.auto.conf manually
 and remove the problematic settings though the header of
 postgresql.auto.conf warns Do not edit this file manually!.

So let's not try to forbid data_directory in postgresql.auto.conf, and just
fix the case reported in this mail for postgresql.conf, for which the
patch is attached upthread.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] A couple logical decoding fixes/patches

2014-05-09 Thread Noah Misch
On Fri, May 09, 2014 at 04:58:54PM +0200, Andres Freund wrote:
 On 2014-05-09 10:49:09 -0400, Robert Haas wrote:
  Patch 03: Add valgrind suppression for writing out padding bytes. That's
  better than zeroing the data from the get-go because uninitialized
  accesses are still detected.
  
  I have no understanding of valgrind suppressions.  I can commit this
  blindly if nobody else wants to pick it up.
 
 I think the only committer with previous experience in that area is
 Noah. Noah?

I can pick up that patch.

Static functions having only one call site are especially vulnerable to
inlining, so avoid naming them in the suppressions file.  I do see
ReorderBufferSerializeChange() inlined away at -O2 and higher.  Is it fair to
tie the suppression to ReorderBufferSerializeTXN() instead?

Do you happen to have a self-contained procedure for causing the server to
reach the code in question?

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] [bug fix] pg_ctl always uses the same event source

2014-05-09 Thread Amit Kapila
On Thu, May 8, 2014 at 4:47 PM, MauMau maumau...@gmail.com wrote:
 I rebased the patch to HEAD and removed the compilation error on Linux.  I
 made event_source variable on all platforms like register_servicename,
 although they are not necessary on non-Windows platforms.

I have verified that the current patch is fine and I have marked it as
Ready For Committer.

I think it's a bit late for this patch for 9.4; you might want to move it to
the next CF.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] New pg_lsn type doesn't have hash/btree opclasses

2014-05-09 Thread Michael Paquier
On Fri, May 9, 2014 at 10:49 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Fri, May 9, 2014 at 10:03 PM, Andres Freund and...@2ndquadrant.com wrote:
 You plan to commit it?

 Yes, unless many people object to the commit.

 Michael, you're now modifying the patch?
OK, I was able to get my head into this earlier than I thought, as it
would be better to get it done before the beta; the result is attached.
While hacking on it, I noticed that some tests were missing for some of
the operators. I am including them in this patch as well, with more tests
for unique index creation, ordering, and hash/btree index creation. The
test suite looks cleaner with all that.
-- 
Michael
From 13513f3392085edb0b3a75af12ef9d08334bdb2e Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Sat, 10 May 2014 14:34:04 +0900
Subject: [PATCH] Add hash/btree operators for pg_lsn

At the same time, the regression tests are extended with tests for
operators that were missing, and of course new tests for the new btree
and hash operators.
---
 src/backend/utils/adt/pg_lsn.c   | 21 
 src/include/catalog/pg_amop.h| 12 +
 src/include/catalog/pg_amproc.h  |  2 +
 src/include/catalog/pg_opclass.h |  2 +
 src/include/catalog/pg_operator.h|  2 +-
 src/include/catalog/pg_opfamily.h|  2 +
 src/include/catalog/pg_proc.h|  4 ++
 src/include/utils/pg_lsn.h   |  2 +
 src/test/regress/expected/pg_lsn.out | 96 ++--
 src/test/regress/sql/pg_lsn.sql  | 46 +
 10 files changed, 165 insertions(+), 24 deletions(-)

diff --git a/src/backend/utils/adt/pg_lsn.c b/src/backend/utils/adt/pg_lsn.c
index d1448ae..b779907 100644
--- a/src/backend/utils/adt/pg_lsn.c
+++ b/src/backend/utils/adt/pg_lsn.c
@@ -14,6 +14,7 @@
 #include postgres.h
 
 #include funcapi.h
+#include access/hash.h
 #include libpq/pqformat.h
 #include utils/builtins.h
 #include utils/pg_lsn.h
@@ -153,6 +154,26 @@ pg_lsn_ge(PG_FUNCTION_ARGS)
PG_RETURN_BOOL(lsn1 = lsn2);
 }
 
+/* handler for btree index operator */
+Datum
+pg_lsn_cmp(PG_FUNCTION_ARGS)
+{
+   XLogRecPtr a = PG_GETARG_LSN(0);
+   XLogRecPtr b = PG_GETARG_LSN(1);
+
+   if (a < b)
+       PG_RETURN_INT32(-1);
+   else if (a > b)
+       PG_RETURN_INT32(1);
+   PG_RETURN_INT32(0);
+}
+
+/* hash index support */
+Datum
+pg_lsn_hash(PG_FUNCTION_ARGS)
+{
+   return DirectFunctionCall1(hashint8, PG_GETARG_LSN(0));
+}
 
 /*--
  * Arithmetic operators on PostgreSQL LSNs.
diff --git a/src/include/catalog/pg_amop.h b/src/include/catalog/pg_amop.h
index 8efd3be..35fd17e 100644
--- a/src/include/catalog/pg_amop.h
+++ b/src/include/catalog/pg_amop.h
@@ -513,6 +513,16 @@ DATA(insert (	2968  2950 2950 4 s 2977	403 0 ));
 DATA(insert (	2968  2950 2950 5 s 2975	403 0 ));
 
 /*
+ * btree pg_lsn_ops
+ */
+
+DATA(insert (	3260  3220 3220 1 s 3224	403 0 ));
+DATA(insert (	3260  3220 3220 2 s 3226	403 0 ));
+DATA(insert (	3260  3220 3220 3 s 3222	403 0 ));
+DATA(insert (	3260  3220 3220 4 s 3227	403 0 ));
+DATA(insert (	3260  3220 3220 5 s 3225	403 0 ));
+
+/*
  * hash index _ops
  */
 
@@ -581,6 +591,8 @@ DATA(insert (   2231   1042 1042 1 s 1054 405 0 ));
 DATA(insert (  2235   1033 1033 1 s  974 405 0 ));
 /* uuid_ops */
 DATA(insert (  2969   2950 2950 1 s 2972 405 0 ));
+/* pg_lsn_ops */
+DATA(insert (  3261   3220 3220 1 s 3222 405 0 ));
 /* numeric_ops */
 DATA(insert (  1998   1700 1700 1 s 1752 405 0 ));
 /* array_ops */
diff --git a/src/include/catalog/pg_amproc.h b/src/include/catalog/pg_amproc.h
index 198b126..369adb9 100644
--- a/src/include/catalog/pg_amproc.h
+++ b/src/include/catalog/pg_amproc.h
@@ -134,6 +134,7 @@ DATA(insert (   2789   27 27 1 2794 ));
 DATA(insert (  2968   2950 2950 1 2960 ));
 DATA(insert (  2994   2249 2249 1 2987 ));
 DATA(insert (  3194   2249 2249 1 3187 ));
+DATA(insert (  3260   3220 3220 1 3263 ));
 DATA(insert (  3522   3500 3500 1 3514 ));
 DATA(insert (  3626   3614 3614 1 3622 ));
 DATA(insert (  3683   3615 3615 1 3668 ));
@@ -174,6 +175,7 @@ DATA(insert (   2229   25 25 1 400 ));
 DATA(insert (  2231   1042 1042 1 1080 ));
 DATA(insert (  2235   1033 1033 1 329 ));
 DATA(insert (  2969   2950 2950 1 2963 ));
+DATA(insert (  3261   3220 3220 1 3262 ));
 DATA(insert (  3523   3500 3500 1 3515 ));
 DATA(insert (  3903   3831 3831 1 3902 ));
 DATA(insert (  4034   3802 3802 1 4045 ));
diff --git a/src/include/catalog/pg_opclass.h b/src/include/catalog/pg_opclass.h
index ecf7063..2298a83 100644
--- a/src/include/catalog/pg_opclass.h
+++ b/src/include/catalog/pg_opclass.h
@@ -215,6 +215,8 @@ DATA(insert (	2742	_reltime_ops		PGNSP PGUID 2745  1024 t 703 ));
 DATA(insert (	2742	_tinterval_ops		PGNSP PGUID 2745  1025 t 704 ));
 DATA(insert (  403 uuid_ops