[HACKERS] default_isolation_level='serializable' crashes on Windows

2012-08-12 Thread Heikki Linnakangas
A customer reported that when you set 
default_isolation_level='serializable' in postgresql.conf on Windows, 
and try to start up the database, it crashes immediately. And sure 
enough, it does, on REL9_1_STABLE as well as on master.


Stack trace:

postgres!RecoveryInProgress+0x3a 
[c:\postgresql\src\backend\access\transam\xlog.c @ 7125]
postgres!check_XactIsoLevel+0x162 
[c:\postgresql\src\backend\commands\variable.c @ 617]
 postgres!call_string_check_hook+0x6d 
[c:\postgresql\src\backend\utils\misc\guc.c @ 8226]
postgres!set_config_option+0x13e5 
[c:\postgresql\src\backend\utils\misc\guc.c @ 5652]
 postgres!read_nondefault_variables+0x27f 
[c:\postgresql\src\backend\utils\misc\guc.c @ 7677]
postgres!SubPostmasterMain+0x227 
[c:\postgresql\src\backend\postmaster\postmaster.c @ 4101]

postgres!main+0x1e9 [c:\postgresql\src\backend\main\main.c @ 187]
postgres!__tmainCRTStartup+0x192 
[f:\dd\vctools\crt_bld\self_64_amd64\crt\src\crtexe.c @ 586]
postgres!mainCRTStartup+0xe 
[f:\dd\vctools\crt_bld\self_64_amd64\crt\src\crtexe.c @ 403]

kernel32!BaseThreadInitThunk+0xd
ntdll!RtlUserThreadStart+0x1d

The problem is that when a postmaster subprocess is launched, it calls 
read_nondefault_variables() very early, before shmem initialization, to 
read the non-default config options from the file that postmaster wrote. 
When check_XactIsoLevel() calls RecoveryInProgress(), it crashes, 
because XLogCtl is NULL.


I'm not sure what the cleanest fix for this would be. It seems that we 
should just trust the values the postmaster passes to us and 
accept them without checking RecoveryInProgress(), but there's no 
straightforward way to tell that within check_XactIsoLevel(). Another 
thought is that there's really no need to pass XactIsoLevel from 
postmaster to a backend anyway, because it's overwritten from 
default_transaction_isolation as soon as you begin a transaction.
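
(As a minimal SQL sketch of that overwrite, on a stock server:
transaction_isolation is recomputed from default_transaction_isolation at
every transaction start.)

    SET default_transaction_isolation = 'serializable';
    BEGIN;
    SHOW transaction_isolation;   -- reports "serializable", copied from the default
    COMMIT;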


There's also a call to RecoveryInProgress() in 
check_transaction_read_only(), but it doesn't seem to have this problem. 
That's more by accident than by design, though.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] default_isolation_level='serializable' crashes on Windows

2012-08-12 Thread Tom Lane
Heikki Linnakangas  writes:
> The problem is that when a postmaster subprocess is launched, it calls 
> read_nondefault_variables() very early, before shmem initialization, to 
> read the non-default config options from the file that postmaster wrote. 
> When check_XactIsoLevel() calls RecoveryInProgress(), it crashes, 
> because XLogCtl is NULL.

Hm, how did the same code fail to crash in the postmaster itself, when
the postmaster read the setting from postgresql.conf?

A larger point is that I think it's broken for any GUC assignment
function to be calling something as transient as RecoveryInProgress to
start with.  We probably ought to re-think the logic, not just band-aid
this by having it skip the check when shmem isn't initialized yet.
I'm thinking that the check has to occur somewhere outside GUC.

regards, tom lane



Re: [HACKERS] Statistics and selectivity estimation for ranges

2012-08-12 Thread Alexander Korotkov
On Thu, Aug 9, 2012 at 12:44 AM, Alexander Korotkov wrote:

> My conclusion is that the current errors are probably OK for selectivity
> estimation. But given that the generated datasets ideally fit the
> assumptions of the estimation, there could be room for improvement. In
> particular, it's unclear why the estimates for "<@" and "@>" have a much
> greater error than the estimate for "&&". Possibly it's caused by some bugs.
>

ISTM I found the reason for the inaccuracy: the implementation of linear
interpolation was wrong. A fixed version is attached. Now I need to rerun
the tests, and possibly do some refactoring and comment rework.

--
With best regards,
Alexander Korotkov.


range_stat-0.4.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] error handling in logging hooks

2012-08-12 Thread Peter Eisentraut
On Sat, 2012-08-11 at 14:05 -0400, Tom Lane wrote:
> Peter Eisentraut  writes:
> > What is the intended way to handle errors in the new logging hook?
> 
> I'm not sure there is anything very useful you can do to "handle" them,
> if by "handle" you mean "report somewhere".

Yes, they ought to be written to the normal log.  I just don't want them
to be sent back to the logging hook.

> From the point of view of elog.c, anything that might go wrong inside
> a logging hook is not very different from an error in write(), which
> it ignores on the basis that it can't report it to anybody.
> 
> Another comparison point is syslog(3), which doesn't even have a
> defined API for reporting that it failed, even though there are
> certainly cases in which it must.  I think the design intention is
> that syslog messages are "fire and forget"; if they don't get to
> their destination, it's not the originating app's problem.  I do not
> think we can do better than that for arbitrary logging hooks.

Well, there are plenty of ereport calls in syslogger.c, for example, so
I don't think that analogy really holds.  Also, syslog itself will
report something to its own log when there is a misconfiguration or
communication problem for remote syslogging.

> > The reference implementation pg_logforward just uses fprintf(stderr) to
> > communicate its own errors, which doesn't seem ideal.
> 
> That seems pretty broken, even without considering what's likely to
> happen on Windows.  It should just shut up, if you ask me.

That's not really an acceptable solution.  If I'm trying to log to a
network resource and the setup fails, I should have *some* way to learn
about that.






[HACKERS] Yet another failure mode in pg_upgrade

2012-08-12 Thread Tom Lane
I've been experimenting with moving the Unix socket directory to
/var/run/postgresql for the Fedora distribution (don't ask :-().
It's mostly working, but I found out yet another way that pg_upgrade
can crash and burn: it doesn't consider the possibility that the
old or new postmaster is compiled with a different default
unix_socket_directory than what is compiled into the libpq it's using
or that pg_dump is using.

This is another hazard that we could forget about if we had some way for
pg_upgrade to run standalone backends instead of starting a postmaster.
But in the meantime, I suggest it'd be a good idea for pg_upgrade to
explicitly set unix_socket_directory (or unix_socket_directories in
HEAD) when starting the postmasters, and also explicitly set PGHOST
to ensure that the client-side code plays along.

regards, tom lane




[HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Craig Ringer

Hi all

Whenever I try to work with the new json types I trip over the lack of a 
function to escape text to json. The attached patch against master 
provides one by exposing the existing datum_to_json function to SQL. 
I've used the name value_to_json, but I'm not sure it's necessarily the 
right choice of name.


Please consider this for the 9.2 branch as well as HEAD, as IMO it's 
very important for basic usability of the json functionality. It applies 
to 9.2 fine and passes "make check". I know it's late in the game, but 
it's also a very small change and it's very hard to build up JSON data 
structures other than simple rows or arrays without at minimum a way of 
escaping `text' to json strings.


This feels basic enough that I'm wondering if there's a reason it wasn't 
included from the start, but I don't see any comments in json.c talking 
about anything like this, nor did I find any -hackers discussion about 
it. I suspect it's just an oversight.


As value_to_json directly wraps datum_to_json it actually accepts record 
and array types too. I didn't see any reason to prevent that and force 
the user to instead use row_to_json or array_to_json for those cases. If 
you don't want to accept this, I can provide a wrapper for escape_json 
that only accepts a text argument instead, but I think *some* way to 
escape text to JSON is vital to have in 9.2.
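
A hedged usage sketch, assuming the attached patch is applied; the results 
shown are what datum_to_json should produce, not output copied from a 
running server:

    -- escape arbitrary text into a JSON string literal
    SELECT value_to_json('a "quoted" string'::text);
    -- result: "a \"quoted\" string"

    -- records and arrays pass straight through to datum_to_json
    SELECT value_to_json(ARRAY[1,2,3]);
    -- result: [1,2,3]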


A docs patch will follow shortly if you're happy that this patch is 
reasonable.


--
Craig Ringer
>From e829c8500b0e507cb70c1a87784c9395269e27bc Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Mon, 13 Aug 2012 10:29:40 +0800
Subject: [PATCH] Implement value_to_json, exposing the existing datum_to_json
 for use from SQL.

A generic json quoting function needs to be available from SQL to permit
the building of any JSON structures other than those produced from arrays
or rowtypes.
---
 src/backend/utils/adt/json.c   | 28 
 src/include/catalog/pg_proc.h  |  4 ++
 src/include/utils/json.h   |  1 +
 src/test/regress/expected/json.out | 91 ++
 src/test/regress/sql/json.sql  | 39 
 5 files changed, 163 insertions(+)

diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
new file mode 100644
index 0425ac6..11babd3
*** a/src/backend/utils/adt/json.c
--- b/src/backend/utils/adt/json.c
*** row_to_json_pretty(PG_FUNCTION_ARGS)
*** 1121,1126 
--- 1121,1154 
  }
  
  /*
+  * SQL function value_to_json(value)
+  *
+  * Wraps datum_to_json for use from SQL, so any element may be
+  * converted to json.
+  */
+ extern Datum
+ value_to_json(PG_FUNCTION_ARGS)
+ {
+ 	Datum		arg0 = PG_GETARG_DATUM(0);
+ 	Oid			arg0type = get_fn_expr_argtype(fcinfo->flinfo, 0);
+ 	StringInfo	result;
+ 	TYPCATEGORY tcategory;
+ 	Oid			typoutput;
+ 	bool		typisvarlena;
+ 
+ 	if (arg0type == JSONOID)
+ 		tcategory = TYPCATEGORY_JSON;
+ 	else
+ 		tcategory = TypeCategory(arg0type);
+ 	
+ 	getTypeOutputInfo(arg0type, &typoutput, &typisvarlena);
+ 
+ 	result = makeStringInfo();
+ 	datum_to_json(arg0, PG_ARGISNULL(0), result, tcategory, typoutput);
+ 	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+ }
+ 
+ /*
   * Produce a JSON string literal, properly escaping characters in the text.
   */
  void
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
new file mode 100644
index 665918f..31e7772
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
*** DATA(insert OID = 3155 (  row_to_json
*** 4091,4096 
--- 4091,4100 
  DESCR("map row to json");
  DATA(insert OID = 3156 (  row_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f t f s 2 0 114 "2249 16" _null_ _null_ _null_ _null_ row_to_json_pretty _null_ _null_ _null_ ));
  DESCR("map row to json with optional pretty printing");
+ DATA(insert OID = 3164 (  value_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f f f s 1 0 114 "2283" _null_ _null_ _null_ _null_ value_to_json _null_ _null_ _null_ ));
+ DESCR("Convert a simple value to json, quoting and escaping if necessary");
+ DATA(insert OID = 3169 (  value_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f f f s 1 0 114 "25" _null_ _null_ _null_ _null_ value_to_json _null_ _null_ _null_ ));
+ DESCR("Convert a simple value to json, quoting and escaping if necessary");
  
  /* uuid */
  DATA(insert OID = 2952 (  uuid_in		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2950 "2275" _null_ _null_ _null_ _null_ uuid_in _null_ _null_ _null_ ));
diff --git a/src/include/utils/json.h b/src/include/utils/json.h
new file mode 100644
index 0f38147..432ad24
*** a/src/include/utils/json.h
--- b/src/include/utils/json.h
*** extern Datum array_to_json(PG_FUNCTION_A
*** 25,30 
--- 25,31 
  extern Datum array_to_json_pretty(PG_FUNCTION_ARGS);
  extern Datum row_to_json(PG_FUNCTION_ARGS);
  extern Datum row_to_json_pretty(PG_FUNCTION_ARGS);
+ extern Datum value_to_json(PG_FUNCTION_ARGS);
  extern void escape_json(StringInfo buf, const cha

Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Craig Ringer
Whoops. It actually looks like the posted patch muffed up opr_sanity 
checks. I'm totally new to pg_proc.h wrangling so I'm not sure why yet, 
looking.


Sorry, not sure how I missed that. I'll follow up shortly.

--
Craig Ringer




[HACKERS] PL/Perl build problem: error: ‘OP_SETSTATE’ undeclared

2012-08-12 Thread Peter Eisentraut
It appears that a recent Perl version (I have 5.14.2) has eliminated
OP_SETSTATE, which causes the current PostgreSQL build to fail:

plperl.c: In function ‘_PG_init’:
plperl.c:442:5645: error: ‘OP_SETSTATE’ undeclared (first use in this function)
plperl.c:442:5645: note: each undeclared identifier is reported only once for 
each function it appears in

I don't know the significance of this.  Could someone investigate
please?





Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Craig Ringer
OK, opr_sanity was failing because I added the value_to_json(text) alias 
to ensure that:


  value_to_json('some_literal')

worked, following the same approach as quote_literal(anyelement) and 
quote_literal(text). That should be reasonable, right? The comments on 
the affected check in opr_sanity say that it's not necessarily wrong so 
long as the called function is prepared to handle the different 
arguments itself - which it is, since it's already accepting anyelement.


The test comment reads:

  Note that the expected output of this part of the test will
  need to be modified whenever new pairs of types are made
  binary-equivalent, or when new polymorphic built-in functions
  are added

so that seems reasonable.


postgres=# \df quote_literal
                              List of functions
   Schema   |     Name      | Result data type | Argument data types |  Type
------------+---------------+------------------+---------------------+--------
 pg_catalog | quote_literal | text             | anyelement          | normal
 pg_catalog | quote_literal | text             | text                | normal
(2 rows)

postgres=# \df value_to_json
                              List of functions
   Schema   |     Name      | Result data type | Argument data types |  Type
------------+---------------+------------------+---------------------+--------
 pg_catalog | value_to_json | json             | anyelement          | normal
 pg_catalog | value_to_json | json             | text                | normal
(2 rows)

Revised patch that tweaks the expected result of opr_sanity attached.

--
Craig Ringer
[PATCH] Implement value_to_json, exposing the existing datum_to_json
for use from SQL.

A generic json quoting function needs to be available from SQL to permit
the building of any JSON structures other than those produced from arrays
or rowtypes.
---
 src/backend/utils/adt/json.c | 28 ++
 src/include/catalog/pg_proc.h|  4 ++
 src/include/utils/json.h |  1 +
 src/test/regress/expected/json.out   | 91 
 src/test/regress/expected/opr_sanity.out |  3 +-
 src/test/regress/sql/json.sql| 39 ++
 6 files changed, 165 insertions(+), 1 deletion(-)

diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
new file mode 100644
index 0425ac6..11babd3
*** a/src/backend/utils/adt/json.c
--- b/src/backend/utils/adt/json.c
*** row_to_json_pretty(PG_FUNCTION_ARGS)
*** 1121,1126 
--- 1121,1154 
  }
  
  /*
+  * SQL function value_to_json(value)
+  *
+  * Wraps datum_to_json for use from SQL, so any element may be
+  * converted to json.
+  */
+ extern Datum
+ value_to_json(PG_FUNCTION_ARGS)
+ {
+ 	Datum		arg0 = PG_GETARG_DATUM(0);
+ 	Oid			arg0type = get_fn_expr_argtype(fcinfo->flinfo, 0);
+ 	StringInfo	result;
+ 	TYPCATEGORY tcategory;
+ 	Oid			typoutput;
+ 	bool		typisvarlena;
+ 
+ 	if (arg0type == JSONOID)
+ 		tcategory = TYPCATEGORY_JSON;
+ 	else
+ 		tcategory = TypeCategory(arg0type);
+ 	
+ 	getTypeOutputInfo(arg0type, &typoutput, &typisvarlena);
+ 
+ 	result = makeStringInfo();
+ 	datum_to_json(arg0, PG_ARGISNULL(0), result, tcategory, typoutput);
+ 	PG_RETURN_TEXT_P(cstring_to_text(result->data));
+ }
+ 
+ /*
   * Produce a JSON string literal, properly escaping characters in the text.
   */
  void
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
new file mode 100644
index 665918f..31e7772
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
*** DATA(insert OID = 3155 (  row_to_json
*** 4091,4096 
--- 4091,4100 
  DESCR("map row to json");
  DATA(insert OID = 3156 (  row_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f t f s 2 0 114 "2249 16" _null_ _null_ _null_ _null_ row_to_json_pretty _null_ _null_ _null_ ));
  DESCR("map row to json with optional pretty printing");
+ DATA(insert OID = 3164 (  value_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f f f s 1 0 114 "2283" _null_ _null_ _null_ _null_ value_to_json _null_ _null_ _null_ ));
+ DESCR("Convert a simple value to json, quoting and escaping if necessary");
+ DATA(insert OID = 3169 (  value_to_json	   PGNSP PGUID 12 1 0 0 0 f f f f f f s 1 0 114 "25" _null_ _null_ _null_ _null_ value_to_json _null_ _null_ _null_ ));
+ DESCR("Convert a simple value to json, quoting and escaping if necessary");
  
  /* uuid */
  DATA(insert OID = 2952 (  uuid_in		   PGNSP PGUID 12 1 0 0 0 f f f f t f i 1 0 2950 "2275" _null_ _null_ _null_ _null_ uuid_in _null_ _null_ _null_ ));
diff --git a/src/include/utils/json.h b/src/include/utils/json.h
new file mode 100644
index 0f38147..432ad24
*** a/src/include/utils/json.h
--- b/src/include/utils/json.h
*** extern Datum array_to_json(PG_FUNCTION_A
*** 25,30 
--- 25,31 
  extern Datum array_to_json_pretty(PG_FUNCTION_ARGS);
  extern Datum row_to_json(PG_FUNCTION_ARGS);
  extern Datum row_to_json_pretty(PG_FUNCTION_ARGS);
+ extern Datum value_to_json(PG_FUNCTION_ARGS);
  extern void es

Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Tom Lane
Craig Ringer  writes:
> OK, opr_sanity was failing because I added the value_to_json(text) alias 
> to ensure that:

>value_to_json('some_literal')

> worked, following the same approach as quote_literal(anyelement) and 
> quote_literal(text). That should be reasonable, right?

No, it isn't.  What you're proposing is to let opr_sanity think that
text and anyelement are interchangeable to C functions, which is so
far from reality as to be ludicrous.  That would be seriously damaging
to its ability to detect errors.

But more to the point, your analogy to quote_literal is faulty anyway.
If you looked at that, what you'd find is that only quote_literal(text)
is a C function.  The other one is a SQL wrapper around a coercion to
text followed by the C function.  I rather imagine that the definition
as you have it would crash on, say, value_to_json(42).
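
(For reference, a sketch of the wrapper pattern being described, roughly how 
quote_literal(anyelement) is declared in the catalogs: a SQL-language 
function that coerces its argument to text and calls the C-backed text 
variant.)

    CREATE FUNCTION quote_literal(anyelement) RETURNS text
      LANGUAGE sql
      AS 'select pg_catalog.quote_literal($1::pg_catalog.text)';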

regards, tom lane




Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Tom Lane
Craig Ringer  writes:
> Whenever I try to work with the new json types I trip over the lack of a 
> function to escape text to json. The attached patch against master 
> provides one by exposing the existing datum_to_json function to SQL.
> ...
> This feels basic enough that I'm wondering if there's a reason it wasn't 
> included from the start,

There actually was a previous thread about this:
http://archives.postgresql.org/pgsql-hackers/2012-05/msg1.php
Note in particular Andrew's comment:

Second, RFC 4627 is absolutely clear: a valid JSON value can
only be an object or an array, so this thing about converting
arbitrary datum values to JSON is a fantasy. If anything, we
should adjust the JSON input routines to disallow anything else,
rather than start to output what is not valid JSON.

It's possible he's misread the spec, but I think we ought to tread very
carefully before adding "obvious" conversions we might regret later.

> Please consider this for the 9.2 branch as well as to HEAD, as IMO it's 
> very important for basic usability of the json functionality. It applies 
> to 9.2 fine and passes "make check". I know it's late in the game,

It's several months too late for feature additions to 9.2, especially
ones that would require an initdb to install.  We can look at this for
9.3, but I'm still worried about the spec-compliance question.

regards, tom lane




Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Craig Ringer

On 08/13/2012 12:48 PM, Tom Lane wrote:

There actually was a previous thread about this:
http://archives.postgresql.org/pgsql-hackers/2012-05/msg1.php
Note in particular Andrew's comment:

Second, RFC 4627 is absolutely clear: a valid JSON value can
only be an object or an array, so this thing about converting
arbitrary datum values to JSON is a fantasy. If anything, we
should adjust the JSON input routines to disallow anything else,
rather than start to output what is not valid JSON.


Thanks for taking a look. That makes sense. I guess these are similar 
issues to those the XML type faces, where working with fragments is a 
problem. The spec requires a single root element, but you don't always 
have that when you're *building* XML, hence the addition of `IS DOCUMENT'.


I was hoping to find a low-impact way to allow SQL-level construction of 
more complex JSON objects with correct text escaping, but it sounds like 
this isn't the right route. I don't currently see any way to achieve the 
kind of on-the-fly building you can do with XML's xmlelement(), 
xmlconcat(), xmlforest() etc., nor an equivalent of hstore's 
hstore(text[],text[]), and I was hoping to improve that.
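
(For comparison, a sketch of the kind of on-the-fly construction those 
facilities already allow; the hstore call assumes the hstore extension is 
installed.)

    SELECT xmlconcat(xmlelement(name a, 'x'), xmlelement(name b, 'y'));
    -- result: <a>x</a><b>y</b>

    SELECT hstore(ARRAY['a','b'], ARRAY['1','2']);
    -- result: "a"=>"1", "b"=>"2"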


I have a half-finished JSON object constructor 
json_object_from_arrays(text[], json[]) in the same style as 
hstore(text[],text[]) . It won't work without the notion of json-typed 
scalars, though, as the values of keys could then only be arrays or 
objects, which isn't very useful. I can't usefully accept `anyarray' as 
a values argument since arrays are of homogeneous type. Accepting text[] 
would be a bug-magnet even if there was some kind of `text 
json_escape(text)' function.


Would it be reasonable to add a separate json_element type, one that's 
binary-equivalent to `json' but not constrained by the requirement to be 
an array or object/dict? Or a `jsobject' ?



As for the value_to_json crashing, works for me:

postgres=# SELECT value_to_json(42);
 value_to_json
---------------
 42
(1 row)

... since datum_to_json is happy to accept anything you throw at it 
using output function lookups, and value_to_json itself doesn't care 
about the argument type at all. That was all in the regression tests.


Purely so I understand what the correct handling of the anyelement+text 
overload would've been: In light of your comments on opr_sanity would 
the right approach be to add a second C function like text_to_json that 
only accepts 'text' to avoid confusing the sanity check? So the SQL 
"value_to_json(anyelement)" would point to the C "value_to_json" and the 
SQL "value_to_json(text)" would point to the C "text_to_json" ?


Anyway, clearly the value_to_json approach is out.

--
Craig Ringer




Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Tom Lane
Craig Ringer  writes:
> As for the value_to_json crashing, works for me:

> postgres=# SELECT value_to_json(42);
>  value_to_json
> ---------------
>  42
> (1 row)

Oh, right, because there actually is support for anyelement in the
underlying C function.  There is not in the quote_literal case.

> Purely so I understand what the correct handling of the anyelement+text 
> overload would've been: In light of your comments on opr_sanity would 
> the right approach be to add a second C function like text_to_json that 
> only accepts 'text' to avoid confusing the sanity check?

Actually, given the above, what did you need value_to_json(text) for at
all?  Wouldn't value_to_json(anyelement) have covered it?

But yeah, the general approach to suppressing complaints from that
opr_sanity test is to make more C entry points.  The point of it,
in some sense, is that if you want to make an assumption that two
types are binary-equivalent then it's better to have that assumption
in C code than embedded in the pg_proc entries.  The cases that we
let pass via the "expected" outputs are only ones where binary
equivalence seems pretty well assured, like text vs varchar.

regards, tom lane




Re: [HACKERS] PATCH: Implement value_to_json for single-datum conversion

2012-08-12 Thread Craig Ringer

On 08/13/2012 01:55 PM, Tom Lane wrote:

Actually, given the above, what did you need value_to_json(text) for at
all?  Wouldn't value_to_json(anyelement) have covered it?


Usability. Without a version accepting text, an explicit cast to text is 
required to disambiguate a literal argument like value_to_json('something').
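
(A hedged illustration of that point, assuming only the anyelement variant 
exists: a bare string literal has type unknown, so the polymorphic argument 
type cannot be resolved without a cast.)

    SELECT value_to_json('something');
    -- fails: the polymorphic type cannot be determined from an
    -- unknown-typed literal

    SELECT value_to_json('something'::text);
    -- works, but only with the explicit cast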



But yeah, the general approach to suppressing complaints from that
opr_sanity test is to make more C entry points.  The point of it,
in some sense, is that if you want to make an assumption that two
types are binary-equivalent then it's better to have that assumption
in C code than embedded in the pg_proc entries.  The cases that we
let pass via the "expected" outputs are only ones where binary
equivalence seems pretty well assured, like text vs varchar.


Thanks. I appreciate the explanation, and sorry for the newbie error.


On the JSON stuff, I can see it's not as simple as adding a simple 
escape function. For my needs during the 9.2 timeframe I'll bundle up an 
extension with the functionality I need and deal with the need to port 
to whatever 9.3 includes. Hopefully a "json_value" or "javascript_value" 
or similar can be introduced for 9.3, given comments like:


  http://archives.postgresql.org/pgsql-hackers/2012-05/msg00030.php
  http://archives.postgresql.org/pgsql-hackers/2012-05/msg00065.php

.. expressing not only the need for json scalars, but the fact that 
they're already commonplace in pretty much everything else.


Given this:

  http://archives.postgresql.org/pgsql-hackers/2012-05/msg00040.php

it does seem that `json' should be a whole document, not a fragment, but 
IMO a way to work with individual JSON values is going to be *necessary* 
to get the most out of the json support - and to stop people who use 
JSON in the real world from complaining that Pg's JSON support is broken 
because it follows the standard rather than real-world practice.


Personally the lack of json scalars has prevented me from using JSON 
support in two different places already, though it's proving very useful 
in many others.


--
Craig Ringer




Re: [HACKERS] default_isolation_level='serializable' crashes on Windows

2012-08-12 Thread Heikki Linnakangas

On 12.08.2012 17:39, Tom Lane wrote:

Heikki Linnakangas  writes:

The problem is that when a postmaster subprocess is launched, it calls
read_nondefault_variables() very early, before shmem initialization, to
read the non-default config options from the file that postmaster wrote.
When check_XactIsoLevel() calls RecoveryInProgress(), it crashes,
because XLogCtl is NULL.


Hm, how did the same code fail to crash in the postmaster itself, when
the postmaster read the setting from postgresql.conf?


It's not the check function for default_transaction_isolation that 
crashes, but the one for transaction_isolation.


I'm not exactly sure how transaction_isolation gets set to a non-default 
value, though. The default for transaction_isolation is 'default', so 
it's understandable that the underlying XactIsoLevel variable gets set 
to XACT_SERIALIZABLE, but AFAICS the code to read/write the GUCs from/to 
file only cares about the string value of the guc, not the integer value 
of the underlying global variable.



A larger point is that I think it's broken for any GUC assignment
function to be calling something as transient as RecoveryInProgress to
start with.  We probably ought to re-think the logic, not just band-aid
this by having it skip the check when shmem isn't initialized yet.
I'm thinking that the check has to occur somewhere outside GUC.


Hmm, it seems like the logical place to complain if you do a manual "SET 
transaction_isolation='serializable'". But I think we should only do the 
check when we're inside a transaction. Setting the GUC won't have any 
effect outside a transaction anyway, because StartTransaction will 
overwrite it from default_transaction_isolation as soon as you begin a 
transaction.


While playing around, I bumped into another related bug, and after 
googling around I found out that it was already reported by Robert Haas 
earlier, but still not fixed: 
http://archives.postgresql.org/message-id/CA%2BTgmoa0UM2W1YkjjneEgJctzxopC3G53ocYPaCyoEOWT3aKiA%40mail.gmail.com. 
Kevin, the last message on that thread 
(http://archives.postgresql.org/pgsql-hackers/2012-04/msg01394.php) says 
you'll write a patch for that. Ping? Or would you like me to try that?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

