Re: is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-09-29 Thread Darren Duncan

Hannu Krosing wrote:
Reflecting on the two possible interpretations of what it means to 
convert between the "text" and "json" data types, it has dawned on me that 
the confusion may come mainly from wanting json to be two things at once:


1. - a serialisation of a subset of javascript objects to a string.

2. - a dynamic type represented by the above serialisation.

case 1
--

If we stick with interpretation 1, then the json datatype is really no more 
than a domain based on the "text" type with a CHECK is_valid_json() 
constraint.




case 2
--

My suggestions on using typecasts for convert-to-json were a result of 
this interpretation of json-as-dynamic-type.


Having thought more about this, I now think that we should probably leave 
JSON alone and develop a separate dynamic type here.



I think it would be best to have 2 main JSON-concerning data types:

1. A proper subset of "text" consisting of every value meeting some 
is_valid_json() constraint, as a DOMAIN; every value of this type is a "text".


2. A type that is disjoint from "text", that is, no value of this type would 
compare as equal to any "text" value.  It would be considered a collection type, 
similarly to how an array or tuple or relation is, but having arbitrary depth 
and that is heterogeneous in the general case.


You could say that #1 is to textual source code what #2 is to a parsed syntax 
tree of that code.  Or that #1 is to textual XML what #2 is to an XML DOM.


It would be type #2 above that is the primary JSON type, which has all the 
special operators for working with JSON, while type #1 would be opaque, just a 
character string, and must be cast as type #2 in order to use any special 
operators on it.  Similarly, all the converting operators between other types 
and JSON would be with #2 only, and producing #1 must go through #2.


So call #1, say, JSON_source and #2 JSON_model; or JSON_text and JSON 
respectively.


That's how I think it should work.

-- Darren Duncan


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-29 Thread Tatsuo Ishii
KaiGai-san,

Thank you for review.

> I checked this patch. It looks good, but here are still some points to be
> discussed.
> 
> * I have a question. What is the meaning of INT64_IS_BUSTED?
>   It seems to me a marker to indicate a platform without 64bit support.
>   However, the commit 901be0fad4034c9cf8a3588fd6cf2ece82e4b8ce
>   says as follows:
>   | Remove all the special-case code for INT64_IS_BUSTED, per decision that
>   | we're not going to support that anymore.

Agreed.

> * At inv_seek(), it seems to me it checks offset correctness in a wrong way,
>   as follows:
> |  case SEEK_SET:
> |  if (offset < 0)
> |  elog(ERROR, "invalid seek offset: " INT64_FORMAT, offset);
> |  obj_desc->offset = offset;
> |  break;
>   That would be a correct assumption if large object size were restricted to 2GB.
>   But the largest positive int64 is larger than the expected limitation.
>   So, it seems to me it should be compared with (INT_MAX * PAGE_SIZE)
>   instead.

Point taken. However, checking offset < 0 still seems valid
because it is possible to pass a negative offset to inv_seek(), no?  Also I
think the upper limit for the seek position should be defined as (INT_MAX *
LOBLKSIZE), rather than (INT_MAX * PAGE_SIZE). Probably (INT_MAX *
LOBLKSIZE) should be defined in pg_largeobject.h as:

/*
 * Maximum byte length for each large object
 */
#define MAX_LARGE_OBJECT_SIZE   ((int64) INT_MAX * LOBLKSIZE)

Then the offset check in inv_seek() will be:

case SEEK_SET:
if (offset < 0 || offset >= MAX_LARGE_OBJECT_SIZE)
elog(ERROR, "invalid seek offset: " 
INT64_FORMAT, offset);
obj_desc->offset = offset;
break;
case SEEK_CUR:
if ((offset + obj_desc->offset) < 0 ||
   (offset + obj_desc->offset) >= MAX_LARGE_OBJECT_SIZE)
elog(ERROR, "invalid seek offset: " 
INT64_FORMAT, offset);
obj_desc->offset += offset;
break;
case SEEK_END:
{
int64   pos = inv_getsize(obj_desc) + 
offset;

if (pos < 0 || pos >= MAX_LARGE_OBJECT_SIZE)
elog(ERROR, "invalid seek offset: " 
INT64_FORMAT, offset);
obj_desc->offset = pos;
}

What do you think?

> * At inv_write(), it definitely needs a check to prevent data writes above 4TB.
>   In case when obj_desc->offset is a bit below 4TB, an additional 1GB write
>   will break head of the large object because of "pageno" overflow.

Ok. I will add checking:

if ((nbytes + obj_desc->offset) > MAX_LARGE_OBJECT_SIZE)
elog(ERROR, "invalid write request size: %d", nbytes);

> * Please also add checks on inv_read() to prevent LargeObjectDesc->offset
>   from unexpectedly overflowing the 4TB boundary.

Ok. I will add checking:

if ((nbytes + obj_desc->offset) > MAX_LARGE_OBJECT_SIZE)
elog(ERROR, "invalid read request size: %d", nbytes);

> * At inv_truncate(), variable "off" is re-defined to int64. Is that change
>   really needed? All its usage is to store the result of "len % LOBLKSIZE".

Your point is correct. Back to int32.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Thanks,
> 
> 2012/9/24 Nozomi Anzai :
>> Here is 64-bit API for large object version 2 patch.
>>
>>> I checked this patch. It can be applied onto the latest master branch
>>> without any problems. My comments are below.
>>>
>>> 2012/9/11 Tatsuo Ishii :
>>> > Ok, here is the patch to implement a 64-bit API for large objects, to
>>> > allow using up to 4TB large objects (or 16TB if BLCKSZ is changed to
>>> > 32KB). The patch is based on Jeremy Drake's patch posted on September
>>> > 23, 2005
>>> > (http://archives.postgresql.org/pgsql-hackers/2005-09/msg01026.php)
>>> > and reasonably updated/edited to adapt to PostgreSQL 9.3 by Nozomi Anzai
>>> > for the backend part and Yugo Nagata for the rest (including
>>> > the documentation patch).
>>> >
>>> > Here are changes made in the patch:
>>> >
>>> > 1) Frontend lo_* libpq functions(fe-lobj.c)(Yugo Nagata)
>>> >
>>> > lo_initialize() gathers backend 64-bit large object handling
>>> > function's oid, namely lo_lseek64, lo_tell64, lo_truncate64.
>>> >
>>> > If a client calls lo_*64 functions and the backend does not support them,
>>> > the lo_*64 functions return an error to the caller. There might be an
>>> > argument that calls to lo_*64 functions could automatically be redirected
>>> > to the older 32-bit API. I don't know whether this is worth the trouble though.
>>> >
>>> I think it should definitely return an error code when a user tries to
>>> use lo_*64 functions against a v9.2 or older backend, because
>>> fa

Re: [HACKERS] 64-bit API for large object

2012-09-29 Thread Tatsuo Ishii
> Excerpts from Kohei KaiGai's message of Thu Sep 27 01:01:18 -0300 2012:
> 
>> * I have a question. What is the meaning of INT64_IS_BUSTED?
>>   It seems to me a marker to indicate a platform without 64bit support.
>>   However, the commit 901be0fad4034c9cf8a3588fd6cf2ece82e4b8ce
>>   says as follows:
>>   | Remove all the special-case code for INT64_IS_BUSTED, per decision that
>>   | we're not going to support that anymore.
> 
> Yeah, I think we should just get rid of those bits.  I don't remember
> seeing *any* complaint when INT64_IS_BUSTED was removed, which means
> nobody was using that code anyway.

Ok.

> Now there is one more problem in this area which is that the patch
> defined a new type pg_int64 for frontend code (postgres_ext.h).  This
> seems a bad idea to me.  We already have int64 defined in c.h.  Should
> we expose int64 to postgres_ext.h somehow?  Should we use standard-
> mandated int64_t instead?  One way would be to have a new configure
> check for int64_t, and if that type doesn't exist, then just don't
> provide the 64 bit functionality to frontend.

This has already been explained upthread:
http://archives.postgresql.org/pgsql-hackers/2012-09/msg00447.php
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 01:49 PM, Andrew Dunstan wrote:


On 09/29/2012 11:49 AM, Tom Lane wrote:

Andrew Dunstan  writes:
The recent alter generic tests have broken pg_upgrade testing on 
Windows

and probably other non-collation-supporting platforms.

Is it still broken after Alvaro added the alternate expected file, and
if so why?  I don't see a reason that this should be failing only there.
I also note that it seems to be passing fine on buildfarm members other
than pitta.


Well, that's a very good point. chough is actually the same machine, 
doing an MSVC build. So why would this test pass there? I'll 
investigate a bit more. Here's what the regression diffs look like 
when run from pg_upgrade on pitta:



[hours of digging later]

It turns out that the reason is that we support collations on MSVC but 
not on Mingw.


*sigh*

I'll have to put it on my ever lengthening TODO list.

cheers

andrew






Re: is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 05:01 PM, Hannu Krosing wrote:

On 09/29/2012 05:40 PM, Andrew Dunstan wrote:


I still think Tom's suggestion is the best and simplest way to do that.

Which of Tom's suggestions do you mean here?

Option 3 mentioned above was about making possible two separate ways to 
convert a string to json (serialise/quote/escape and parse/check-for-valid-json), 
and AFAIR not about hstore to json.


Er, what? Yes, I meant option 3, and it is a perfect mechanism for doing 
conversion of an hstore field inside datum_to_json: the code would look 
for a cast to the new type (whatever we call it) and use that instead of 
the normal text representation.




I'm also looking forward to an easy way or two to populate a record 
from json and to extract an array from json.



I am prepared to take this on - at least starting with json to array 
which I think is the simpler case. I have some ideas about how to do 
this and have done a bit of experimental work along these lines. 
Personally I'd be inclined to make it do a conversion to text[] and then 
cast from that to anything else we needed.


cheers

andrew







Re: is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-09-29 Thread Hannu Krosing

On 09/29/2012 05:40 PM, Andrew Dunstan wrote:




I am not opposed to making a new type, but I really don't think that 
means we need to do nothing for the existing data type. The suggested 
SERIALIZATION mechanism seems to be fairly intrusive and heavy handed, 
as opposed to the very lightweight mechanism that is Tom's option 3.
Agreed, this would be the simplest one. I prefer it to be called 
something like "json_embedded?string" to better convey its use, as it is 
needed only when converting a postgresql string type to a json string 
type. json_value already has a standard-defined meaning and is a 
supertype of json (which unfortunately is called "json text").


Personally I don't have a strong feeling about a general to_json 
function, but it's something other people have asked for. The things I 
do care about are the json_agg function (to which nobody has objected) 

Not just not objected; I am very much for it. +1 from me.
and finding a mechanism for reasonably converting structured types, 
particularly hstore, to json. 
hstore to json is what started this discussion, and using the 
to_json() function was one of the proposed solutions for it. 
Using the same mechanism to let users also have custom 
serialisations for things that the standard leaves open - like datetime - 
is an added bonus.

I still think Tom's suggestion is the best and simplest way to do that.

Which of Tom's suggestions do you mean here?

Option 3 mentioned above was about making possible two separate ways to 
convert a string to json (serialise/quote/escape and parse/check-for-valid-json), 
and AFAIR not about hstore to json.


I'm also looking forward to an easy way or two to populate a record 
from json and to extract an array from json.




cheers

andrew






Re: [HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Alvaro Herrera
Excerpts from Tom Lane's message of Sat Sep 29 14:57:11 -0300 2012:
> 
> Andrew Dunstan  writes:

> > Well, that's a very good point. chough is actually the same machine, 
> > doing an MSVC build. So why would this test pass there? I'll investigate 
> > a bit more. Here's what the regression diffs look like when run from 
> > pg_upgrade on pitta:
> 
> >ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
> > ! ERROR:  collation "alt_coll1" for encoding "SQL_ASCII" does not exist
> 
> vs
> 
> >ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
> > ! ERROR:  collation "alt_coll1" for encoding "WIN1252" does not exist
> 
> Oh!  So Alvaro's second expected file is assuming that machines without
> custom-locale support will only ever be testing with SQL_ASCII encoding.
> Wrong.
> 
> At this point I'm inclined to think that we should just drop the
> collation-specific portions of the alter_generic test.  It looks to me
> like making that adequately portable is going to be far more trouble
> than it's worth.

Ah, yes.  We already dropped some plperl tests because of a similar
problem.  I will remove that part of the test.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] doc patch for increase in shared_buffers

2012-09-29 Thread Jeff Janes
The default value for shared_buffers was recently increased from 32MB
to 128MB, but the docs were not updated.


shared_buffer_increase.patch
Description: Binary data



Re: [HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Tom Lane
Andrew Dunstan  writes:
> On 09/29/2012 11:49 AM, Tom Lane wrote:
>> Is it still broken after Alvaro added the alternate expected file, and
>> if so why?  I don't see a reason that this should be failing only there.
>> I also note that it seems to be passing fine on buildfarm members other
>> than pitta.

> Well, that's a very good point. chough is actually the same machine, 
> doing an MSVC build. So why would this test pass there? I'll investigate 
> a bit more. Here's what the regression diffs look like when run from 
> pg_upgrade on pitta:

>ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
> ! ERROR:  collation "alt_coll1" for encoding "SQL_ASCII" does not exist

vs

>ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
> ! ERROR:  collation "alt_coll1" for encoding "WIN1252" does not exist

Oh!  So Alvaro's second expected file is assuming that machines without
custom-locale support will only ever be testing with SQL_ASCII encoding.
Wrong.

At this point I'm inclined to think that we should just drop the
collation-specific portions of the alter_generic test.  It looks to me
like making that adequately portable is going to be far more trouble
than it's worth.

regards, tom lane




Re: [HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 11:49 AM, Tom Lane wrote:

Andrew Dunstan  writes:

The recent alter generic tests have broken pg_upgrade testing on Windows
and probably other non-collation-supporting platforms.

Is it still broken after Alvaro added the alternate expected file, and
if so why?  I don't see a reason that this should be failing only there.
I also note that it seems to be passing fine on buildfarm members other
than pitta.


Well, that's a very good point. chough is actually the same machine, 
doing an MSVC build. So why would this test pass there? I'll investigate 
a bit more. Here's what the regression diffs look like when run from 
pg_upgrade on pitta:


cheers

andrew


*** c:/MinGW/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.7132/../pgsql/src/test/regress/expected/alter_generic_1.out	Sat Sep 29 02:01:44 2012
--- c:/MinGW/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.7132/src/test/regress/results/alter_generic.out	Sat Sep 29 13:41:39 2012

***
*** 110,137 ****
  -- can't test this: the error message includes the encoding name
  -- ALTER COLLATION alt_coll1 RENAME TO alt_coll2;  -- failed (name 
conflict)

  ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
! ERROR:  collation "alt_coll1" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user2;  -- failed 
(no role membership)

! ERROR:  collation "alt_coll2" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user3;  -- OK
! ERROR:  collation "alt_coll2" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll2 SET SCHEMA alt_nsp2;  -- OK
! ERROR:  collation "alt_coll2" for encoding "SQL_ASCII" does not exist
  SET SESSION AUTHORIZATION regtest_alter_user2;
  CREATE COLLATION alt_coll1 (locale = 'C');
  ERROR:  nondefault collations are not supported on this platform
  CREATE COLLATION alt_coll2 (locale = 'C');
  ERROR:  nondefault collations are not supported on this platform
  ALTER COLLATION alt_coll3 RENAME TO alt_coll4;  -- failed (not owner)
! ERROR:  collation "alt_coll3" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll1 RENAME TO alt_coll4;  -- OK
! ERROR:  collation "alt_coll1" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll3 OWNER TO regtest_alter_user2;  -- failed 
(not owner)

! ERROR:  collation "alt_coll3" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user3;  -- failed 
(no role membership)

! ERROR:  collation "alt_coll2" for encoding "SQL_ASCII" does not exist
  ALTER COLLATION alt_coll3 SET SCHEMA alt_nsp2;  -- failed (not owner)
! ERROR:  collation "alt_coll3" for encoding "SQL_ASCII" does not exist
  -- can't test this: the error message includes the encoding name
  -- ALTER COLLATION alt_coll2 SET SCHEMA alt_nsp2;  -- failed (name 
conflict)

  RESET SESSION AUTHORIZATION;
--- 110,137 ----
  -- can't test this: the error message includes the encoding name
  -- ALTER COLLATION alt_coll1 RENAME TO alt_coll2;  -- failed (name 
conflict)

  ALTER COLLATION alt_coll1 RENAME TO alt_coll3;  -- OK
! ERROR:  collation "alt_coll1" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user2;  -- failed 
(no role membership)

! ERROR:  collation "alt_coll2" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user3;  -- OK
! ERROR:  collation "alt_coll2" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll2 SET SCHEMA alt_nsp2;  -- OK
! ERROR:  collation "alt_coll2" for encoding "WIN1252" does not exist
  SET SESSION AUTHORIZATION regtest_alter_user2;
  CREATE COLLATION alt_coll1 (locale = 'C');
  ERROR:  nondefault collations are not supported on this platform
  CREATE COLLATION alt_coll2 (locale = 'C');
  ERROR:  nondefault collations are not supported on this platform
  ALTER COLLATION alt_coll3 RENAME TO alt_coll4;  -- failed (not owner)
! ERROR:  collation "alt_coll3" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll1 RENAME TO alt_coll4;  -- OK
! ERROR:  collation "alt_coll1" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll3 OWNER TO regtest_alter_user2;  -- failed 
(not owner)

! ERROR:  collation "alt_coll3" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll2 OWNER TO regtest_alter_user3;  -- failed 
(no role membership)

! ERROR:  collation "alt_coll2" for encoding "WIN1252" does not exist
  ALTER COLLATION alt_coll3 SET SCHEMA alt_nsp2;  -- failed (not owner)
! ERROR:  collation "alt_coll3" for encoding "WIN1252" does not exist
  -- can't test this: the error message includes the encoding name
  -- ALTER COLLATION alt_coll2 SET SCHEMA alt_nsp2;  -- failed (name 
conflict)

  RESET SESSION AUTHORIZATION;

==




Re: [HACKERS] Unportable use of uname in pg_upgrade test script

2012-09-29 Thread Tom Lane
Andrew Dunstan  writes:
> Exactly, the sed script pulls the last token from the line, which is 
> Msys on all my Mingw systems.

Perhaps that's "uname -v"?

> If you want to do it another way we could possibly pass the PORTNAME 
> from the global make file.

That might be safer.  The last few words of uname's output are
*completely* unstandardized (the spec says that implementation-defined
fields can be added to -a's output ...)

regards, tom lane


> cheers

> andrew




Re: [HACKERS] Unportable use of uname in pg_upgrade test script

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 01:06 PM, Tom Lane wrote:

Andrew Dunstan  writes:

The trouble with uname -s is that its output is a bit variable. I think
this will work:
  testhost=`uname -a | sed 's/.* //'`

What do you mean by "a bit variable"?


On one of my machines uname -s returns MINGW32_NT5.1

On another it says MINGW32_NT6.1





And why would that fix it?  The
output of -a is *defined* to be the same as -s followed by other stuff.
The reference page I'm looking at also points out that the -s string
can contain embedded blanks.


Exactly, the sed script pulls the last token from the line, which is 
Msys on all my Mingw systems.


If you want to do it another way we could possibly pass the PORTNAME 
from the global make file.


cheers

andrew





Re: [HACKERS] Unportable use of uname in pg_upgrade test script

2012-09-29 Thread Tom Lane
Andrew Dunstan  writes:
> The trouble with uname -s is that its output is a bit variable. I think 
> this will work:

>  testhost=`uname -a | sed 's/.* //'`

What do you mean by "a bit variable"?  And why would that fix it?  The
output of -a is *defined* to be the same as -s followed by other stuff.
The reference page I'm looking at also points out that the -s string
can contain embedded blanks.

regards, tom lane




Re: is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 11:47 AM, Hannu Krosing wrote:

On 09/26/2012 06:46 PM, Tom Lane wrote:

Andrew Dunstan  writes:

Drawing together various discussions both here and elsewhere (e.g. the
PostgresOpen hallway track) I propose to work on the following:
1. make datum_to_json() honor a type's cast to json if it exists. The
fallback is to use the type's string representation, as now.
2. add a cast hstore -> json (any others needed for core / contrib 
types ?)

3. add a to_json(anyelement) function
4. add a new aggregate function json_agg(anyrecord) -> json to simplify
and make more efficient turning a resultset into json.
Comments welcome.

ISTM the notion of to_json(anyelement) was already heavily discussed and
had spec-compliance issues ... in fact, weren't you one of the people
complaining?  What exactly does #3 mean that is different from the
previous thread?

Also, on reflection I'm not sure about commandeering cast-to-json for
this --- aren't we really casting to "json member" or something like
that?  The distinction between a container and its contents seems
important here.  With a container type as source, it might be important
to do something different if we're coercing it to a complete JSON
value versus something that will be just one member.  I'm handwaving
here because I don't feel like going back to re-read the RFC, but
it seems like something that should be considered carefully before
we lock down an assumption that there can never be a difference.

regards, tom lane
Reflecting on the two possible interpretations of what it means 
to convert between the "text" and "json" data types, it has dawned on me 
that the confusion may come mainly from wanting json to be two things 
at once:


1. - a serialisation of a subset of javascript objects to a string.

2. - a dynamic type represented by the above serialisation.

case 1
--

If we stick with interpretation 1, then the json datatype is really no 
more than a domain based on the "text" type with a CHECK 
is_valid_json() constraint.


For this interpretation it makes complete sense to interpret any text 
as already being serialised, and no casts (other than casts to a text 
type) have a place here.


A few datatypes - like hstore - could have their own "to_json_text()" 
serialiser functions if there is a better serialisation to text than 
the type's default one, but other than that, "serialise to text and 
quote unless it is a null, boolean or numeric type" should be all that is needed.


If there is a strong aversion to relying on function names for getting 
the right serialisation function, we could invent a new "cast-like" 
feature for serialising types, so we could define a serialiser for 
hstore to json using


CREATE SERIALISATION (hstore AS json)
WITH FUNCTION hstore_as_json(hstore);

This probably will not be any safer than just using the name for 
lookup directly, unless we place some restrictions on who is allowed to 
create the serialisation.


case 2
--

My suggestions on using typecasts for convert-to-json were a result of 
this interpretation of json-as-dynamic-type.


Having thought more about this, I now think that we should probably leave 
JSON alone and develop a separate dynamic type here.


I have started work on doing this based on ideas from the BSON data 
format, except using PostgreSQL datatypes.


It will still have to solve problems similar to those we have had here 
with JSON, but being both a new type and a binary type, there will 
probably be no expectation of a 1-to-1 conversion to and from text.


Will post here soon for more discussion on what this new type does and 
how it should be used.




I am not opposed to making a new type, but I really don't think that 
means we need to do nothing for the existing data type. The suggested 
SERIALIZATION mechanism seems to be fairly intrusive and heavy handed, 
as opposed to the very lightweight mechanism that is Tom's option 3.


Personally I don't have a strong feeling about a general to_json 
function, but it's something other people have asked for. The things I 
do care about are the json_agg function (to which nobody has objected) 
and finding a mechanism for reasonably converting structured types, 
particularly hstore, to json. I still think Tom's suggestion is the best 
and simplest way to do that.


cheers

andrew




Re: [HACKERS] Unportable use of uname in pg_upgrade test script

2012-09-29 Thread Andrew Dunstan


On 09/29/2012 12:13 PM, Tom Lane wrote:

BTW, I tried the pg_upgrade regression tests this morning on my dinosaur
HPUX box, and it promptly fell over with:

uname: illegal option -- o
usage: uname [-amnrsvil] [-S nodename]
make: *** [check] Error 1

This is not terribly surprising, because the -o option is nowhere to be
seen in the Single Unix Spec definition of uname; which means this is
likely to fail on other platforms too.  I would suggest using -s, or no
option at all, or finding some other way to identify Windows/MSys.


The trouble with uname -s is that its output is a bit variable. I think 
this will work:


testhost=`uname -a | sed 's/.* //'`


cheers

andrew





[HACKERS] Unportable use of uname in pg_upgrade test script

2012-09-29 Thread Tom Lane
BTW, I tried the pg_upgrade regression tests this morning on my dinosaur
HPUX box, and it promptly fell over with:

uname: illegal option -- o
usage: uname [-amnrsvil] [-S nodename]
make: *** [check] Error 1

This is not terribly surprising, because the -o option is nowhere to be
seen in the Single Unix Spec definition of uname; which means this is
likely to fail on other platforms too.  I would suggest using -s, or no
option at all, or finding some other way to identify Windows/MSys.

regards, tom lane




Re: [HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Tom Lane
Andrew Dunstan  writes:
> The recent alter generic tests have broken pg_upgrade testing on Windows 
> and probably other non-collation-supporting platforms.

Is it still broken after Alvaro added the alternate expected file, and
if so why?  I don't see a reason that this should be failing only there.
I also note that it seems to be passing fine on buildfarm members other
than pitta.

> This can be cured
> by making the pg_upgrade test set up its test installs with "initdb
> --no-locale", as shown below (with similar changes for the MSVC build
> system also being needed)

That seems like a kluge, not a solution.  There's no reason I can see
why pg_upgrade's repeat of the regression tests should be more sensitive
to locale environment than the main run is.

regards, tom lane




is JSON really "a type" (Re: [HACKERS] data to json enhancements)

2012-09-29 Thread Hannu Krosing

On 09/26/2012 06:46 PM, Tom Lane wrote:

Andrew Dunstan  writes:

Drawing together various discussions both here and elsewhere (e.g. the
PostgresOpen hallway track) I propose to work on the following:
1. make datum_to_json() honor a type's cast to json if it exists. The
fallback is to use the type's string representation, as now.
2. add a cast hstore -> json (any others needed for core / contrib types ?)
3. add a to_json(anyelement) function
4. add a new aggregate function json_agg(anyrecord) -> json to simplify
and make more efficient turning a resultset into json.
Comments welcome.

ISTM the notion of to_json(anyelement) was already heavily discussed and
had spec-compliance issues ... in fact, weren't you one of the people
complaining?  What exactly does #3 mean that is different from the
previous thread?

Also, on reflection I'm not sure about commandeering cast-to-json for
this --- aren't we really casting to "json member" or something like
that?  The distinction between a container and its contents seems
important here.  With a container type as source, it might be important
to do something different if we're coercing it to a complete JSON
value versus something that will be just one member.  I'm handwaving
here because I don't feel like going back to re-read the RFC, but
it seems like something that should be considered carefully before
we lock down an assumption that there can never be a difference.

regards, tom lane
Reflecting on the two possible interpretations of what it means to 
convert between the "text" and "json" data types, it has dawned on me that 
the confusion may come mainly from wanting json to be two things at once:


1. - a serialisation of a subset of JavaScript objects to a string.

2. - a dynamic type represented by the above serialisation.

case 1
--

If we stick with interpretation 1, then the json datatype is really no more 
than a domain based on the "text" type, with a CHECK is_valid_json() 
constraint.
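
Under interpretation 1, such a domain could be sketched roughly as below. 
Note that is_valid_json(text) is a hypothetical validator function, not an 
existing one, and the domain name just echoes the "source text" reading:

```sql
-- Sketch only: is_valid_json(text) is assumed, not an existing function.
CREATE DOMAIN json_source AS text
    CHECK (is_valid_json(VALUE));
```

Any value of this domain is still just text, and compares equal to the 
corresponding plain text value.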


For this interpretation it makes complete sense to treat any text as 
already being serialised, and no casts (other than casts to a text type) 
have a place here.


A few datatypes - like hstore - could have their own "to_json_text()" 
serialiser functions if there is a better serialisation to text than the 
type's default one, but other than that, nothing beyond "serialise to text 
and quote unless it is a null, boolean or numeric type" should be needed.


If there is a strong aversion to relying on function names for finding the 
right serialisation function, we could invent a new "cast-like" feature 
for serialising types, so that a serialiser from hstore to json could be 
defined using


CREATE SERIALISATION (hstore AS json)
WITH FUNCTION hstore_as_json(hstore);

This probably will not be any safer than just using the name for lookup 
directly unless we place some restrictions on who is allowed to create 
the serialisation.


case 2
--

My suggestions on using typecasts for convert-to-json were a result of 
this interpretation of json-as-dynamic-type.


Having thought more about this, I now think that we probably should leave 
JSON alone and develop a separate dynamic type here.


I have started work on doing this based on ideas from the BSON data format, 
except using PostgreSQL datatypes.


It will still have to solve problems similar to those we have had here with 
JSON, but since it is both a new type and a binary type, there will probably 
be no expectation of a 1-to-1 conversion to and from text.


Will post here soon for more discussion on what this new type does and 
how it should be used.


Hannu








[HACKERS] pg_upgrade tests vs alter generic changes

2012-09-29 Thread Andrew Dunstan
The recent alter generic tests have broken pg_upgrade testing on Windows 
and probably other non-collation-supporting platforms. This can be cured 
by making the pg_upgrade test set up its test installs with "initdb 
--no-locale" as shown below (with similar changes also needed for the 
MSVC build system).


Is there any reason we don't want to do that, or to restrict it to just 
those platforms?


cheers

andrew


diff --git a/contrib/pg_upgrade/test.sh b/contrib/pg_upgrade/test.sh
index 32fb6bf..96da6f1 100644
--- a/contrib/pg_upgrade/test.sh
+++ b/contrib/pg_upgrade/test.sh
@@ -64,7 +64,7 @@ mkdir "$logdir"

 set -x

-$oldbindir/initdb -N
+$oldbindir/initdb -N --no-locale
 $oldbindir/pg_ctl start -l "$logdir/postmaster1.log" -o '-F' -w
 if "$MAKE" -C "$oldsrc" installcheck; then
pg_dumpall -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
@@ -104,7 +104,7 @@ fi

 mv "${PGDATA}" "${PGDATA}.old"

-initdb -N
+initdb -N --no-locale

 pg_upgrade -d "${PGDATA}.old" -D "${PGDATA}" -b "$oldbindir" -B "$bindir"







Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-09-29 Thread Volker Grabsch
Dickson S. Guedes schrieb:
> - https://commitfest.postgresql.org/action/patch_view?id=907
> 
> The patch is small and implements a new syntax for CREATE SCHEMA
> that allows the creation of a schema to be skipped when IF NOT EXISTS is
> used.
>
> [...]
>
> - Should this patch implement other INEs like ADD COLUMN IF NOT EXISTS?

If there's still a chance to improve the patch, I'd love to see
the following INEs implemented. Several real-world database
upgrade scripts would benefit from those:

1)
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name ...

2)
ALTER TABLE ... ADD [ COLUMN ] [ IF NOT EXISTS ] column_name ...

3)
CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ IF NOT EXISTS ] [ name ] ON ...
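
Until such syntax exists, upgrade scripts have to emulate IF NOT EXISTS 
with a catalog-lookup guard along these lines (the sequence name is 
illustrative, and a real script would also check the schema):

```sql
-- Workaround used today: guard the CREATE inside a DO block.
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_class
                   WHERE relname = 'my_seq' AND relkind = 'S') THEN
        CREATE SEQUENCE my_seq;
    END IF;
END
$$;
```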


Regards,
Volker

-- 
Volker Grabsch
---<<(())>>---




Re: [HACKERS] data to json enhancements

2012-09-29 Thread Misa Simic
No probs...

And I did... The thing is, the subject is too wide and the post was too 
long... The intention was just to explain my thoughts better... I am not a 
blogger anyway, just new to the Postgres community... What I am trying to 
say is that probably only 10% of the post would have been enough for the 
list, but because I am new it is hard for me to identify the right 10% 
piece :)

cheers,

Misa

On Saturday, September 29, 2012, Andrew Dunstan wrote:

>
>
> I think if you want to contribute you should post on the mailing list -
> otherwise the conversation just becomes way too fragmented.
>
> cheers
>
> andrew
>
>


Re: [HACKERS] Doc patch, further describe and-mask nature of the permission system

2012-09-29 Thread Karl O. Pinc
On 09/29/2012 01:16:51 AM, Karl O. Pinc wrote:

> The attached documentation patch further describes the 
> additive nature of the permission system.

Attached is a second patch, applied after the
first, to extend the discussion
further as regards roles.


(Original patch attached for reference.)


Karl 
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein
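
The table-versus-column behavior the patches describe can be demonstrated 
with column-level privileges (the role and table names are illustrative):

```sql
GRANT SELECT ON accounts TO alice;
-- A column-level revoke does not undo the table-level grant:
REVOKE SELECT (balance) ON accounts FROM alice;
-- alice can still read accounts.balance via the table-level privilege.

-- To actually restrict access, grant only the permitted columns:
REVOKE SELECT ON accounts FROM alice;
GRANT SELECT (id, owner) ON accounts TO alice;
```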

diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml
index fb81af4..73f88e0 100644
--- a/doc/src/sgml/ref/grant.sgml
+++ b/doc/src/sgml/ref/grant.sgml
@@ -429,11 +429,27 @@ GRANT role_name [, ...] TO 
 

-A user may perform SELECT, INSERT, etc. on a
-column if he holds that privilege for either the specific column or
-its whole table.  Granting the privilege at the table level and then
-revoking it for one column will not do what you might wish: the
-table-level grant is unaffected by a column-level operation.
+Permission granted at any level of the
+database.schema.table.column
+object hierarchy grants permission to all contained objects.
+E.g. a user may perform SELECT, INSERT,
+etc. on a column if he holds that privilege for either the
+specific column or its whole table.  Granting the privilege at the
+table level and then revoking it for one column will not do what
+you might wish: the table-level grant is unaffected by a
+column-level operation.  Granting a privilege to some columns of a
+table and denying the privilege to the table's other columns is
+done in the same way as regards all other hierarchically organized
+database objects (e.g. granting a privilege to some tables in a
+schema and denying the privilege to the schema's other tables):
+deny privilege to the protected columns, to their table, their
+schema, and their database; grant privilege to the permitted
+columns.  As described above,
+permissions granted to the roles assigned a user are likewise
+entirely additive.  Permissions throughout
+PostgreSQL combine in this uniform
+fashion.

 


diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml
index 73f88e0..0e878ba 100644
--- a/doc/src/sgml/ref/grant.sgml
+++ b/doc/src/sgml/ref/grant.sgml
@@ -453,6 +453,22 @@ GRANT role_name [, ...] TO 
 

+Further, roles having the INHERIT attribute
+that are assigned to other roles in a hierarchical fashion are
+additive of permission in the fashion of the
+database.schema.table.column
+hierarchy.  E.g. a user's login role can be assigned a role of
+accountant which is in turn assigned a role of
+employee.  The user would have the permissions of an
+accountant and, by virtue of the role hierarchy, also
+all permissions granted to employees.  Unlike the
+fixed
+database.schema.table.column
+hierarchy, the PostgreSQL user is free to fashion
+roles into arbitrary hierarchical structures.
+   
+
+   
 When a non-owner of an object attempts to GRANT privileges
 on the object, the command will fail outright if the user has no
 privileges whatsoever on the object.  As long as some privilege is

