Actually, this is just the tip of the iceberg.
Funny, but
postgres=# select '\u'::text;
 text
------
 \u
(1 row)

postgres=# select array['\u']::text[];
  array
---------
 {"\\u"}
(1 row)

postgres=# select '{"\u"}'::text[];
 text
------
 {u}
(1 row)

postgres=# select '{"\
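The array behaviour above follows PostgreSQL's array-literal quoting rules: on output, backslashes and double quotes inside a quoted element are backslash-escaped (hence {"\\u"}), while on input a backslash inside a quoted element escapes the next character (hence '{"\u"}' parsing to {u}). A rough sketch of those two rules in Python — the helper names are mine, not PostgreSQL code:

```python
import re

# Hypothetical helpers mimicking PostgreSQL's array-literal quoting rules.
def quote_array_elem(s: str) -> str:
    # On output, backslashes and double quotes inside a quoted element
    # are themselves backslash-escaped, and the element is quoted.
    return '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'

def unquote_array_elem(s: str) -> str:
    # On input, a backslash inside a quoted element escapes the next
    # character, so "\u" parses to just u.
    return re.sub(r'\\(.)', r'\1', s[1:-1])

print(quote_array_elem('\\u'))      # prints "\\u" -- as in {"\\u"} above
print(unquote_array_elem('"\\u"'))  # prints u     -- as in {u} above
```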
On 05/29/2014 08:15 AM, Andrew Dunstan wrote:
On 05/29/2014 08:00 AM, Teodor Sigaev wrote:
postgres=# select '["\u"]'::json->0;
 ?column?
----------
 "\u"
(1 row)

Time: 1,294 ms
postgres=# select '["\u"]'::jsonb->0;
 ?column?
----------
 "\\u"
(1 row)
It seems to me that escape_json() is wrongly used in jsonb_put_escaped_value().

It is inconsistent, yes. It's been the subject of some discussion on -hackers previously, IIRC. I actually referred to this difference in my talk at pgCon last Friday.
I see, and I wasn't at your talk :( I'm playing around with jsquery and am now trying to
implement UTF escapes there.
--
Teodor Sigaev
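The json/jsonb difference discussed above can be mimicked with Python's json module — a hedged sketch of the semantics, not PostgreSQL's implementation: json keeps the input text verbatim, while jsonb decodes it to a value on input and re-serializes on output.

```python
import json

doc = '["\\u0041"]'   # JSON text containing the escape \u0041 ("A")

# json-like behaviour: the input text is stored and returned verbatim.
print(doc)                           # ["\u0041"]

# jsonb-like behaviour: the text is decoded on input and re-serialized
# on output, so the escape is resolved to the character it denotes.
print(json.dumps(json.loads(doc)))   # ["A"]
```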
On 05/29/2014 07:55 AM, Teodor Sigaev wrote:
# select '"\uaBcD"'::json;
   json
----------
 "\uaBcD"
(1 row)

but

# select '"\uaBcD"'::jsonb;
ERROR:  invalid input syntax for type json
LINE 1: select '"\uaBcD"'::jsonb;
               ^
DETAIL:  Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8.
postgres=# select '["\u"]'::json->0;
 ?column?
----------
 "\u"
(1 row)

Time: 1,294 ms
postgres=# select '["\u"]'::jsonb->0;
 ?column?
----------
 "\\u"
(1 row)
It seems to me that escape_json() is wrongly used in jsonb_put_escaped_value();
the right name of escape_json() is a es
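The doubled backslash in the jsonb output above can be reproduced with any generic JSON string escaper applied to already-escaped text — here json.dumps stands in for escape_json(); this is an illustration, not PostgreSQL's actual code:

```python
import json

# jsonb keeps the six raw characters \ u 0 0 4 1 of an unresolved escape.
stored = '\\u0041'

# Escaping those characters again on output doubles the backslash,
# which is exactly the "\\u..." output seen above.
print(json.dumps(stored))   # "\\u0041"
```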
# select '"\uaBcD"'::json;
   json
----------
 "\uaBcD"
(1 row)

but

# select '"\uaBcD"'::jsonb;
ERROR:  invalid input syntax for type json
LINE 1: select '"\uaBcD"'::jsonb;
               ^
DETAIL:  Unicode escape values cannot be used for code point values above 007F
when the server encoding is not UTF8.
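The restriction is understandable: a \uXXXX escape above 007F names a character that may have no representation in a non-UTF8 server encoding (PostgreSQL rejects all such escapes outright rather than checking representability). A rough Python analogy, with latin-1 standing in for a non-UTF8 server encoding:

```python
# U+0101 is above 007F and has no byte in latin-1, so converting the
# decoded character into that encoding fails outright.
try:
    "\u0101".encode("latin-1")
except UnicodeEncodeError as exc:
    print("cannot represent U+0101 in latin-1:", exc.reason)
```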