Re: [GENERAL] Partioning with overlapping and non overlapping constraints

2015-02-08 Thread Tim Uckun
Partitioning by day would result in fewer partitions, but of course it would
create a hot table where all the writes go.

Actually, I have thought of an alternative, and I'd be interested in your
opinion of it.


I leave the metrics table alone; the current code continues to read and
write from metrics. Every night I create a table named metrics_YYYYMMDD
which inherits from metrics, move that day's data into it (using the ONLY
clause in the delete against the parent), and then set a constraint on
that table for that day. I also adjust the constraint on the metrics table
itself, which basically says where timestamp > YYYYMMDD.

This way there is no trigger on the parent table to slow down the inserts,
and I still have partitions that will speed up read queries.  I realize
that moving large amounts of data is going to be painful, but perhaps I can
do it in chunks.
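
Roughly, the nightly job I have in mind would look like the sketch below
(the table name metrics and its timestamp column ts are illustrative, and
the NO INHERIT constraint needs 9.2 or later):

BEGIN;

-- create the child for the day being rolled off and constrain it to that day
CREATE TABLE metrics_20150207 (
    CHECK (ts >= DATE '2015-02-07' AND ts < DATE '2015-02-08')
) INHERITS (metrics);

-- indexes are not inherited, so recreate whatever the read queries need
CREATE INDEX metrics_20150207_ts_idx ON metrics_20150207 (ts);

-- move that day's rows out of the parent; ONLY leaves existing children alone
WITH moved AS (
    DELETE FROM ONLY metrics
    WHERE ts >= DATE '2015-02-07' AND ts < DATE '2015-02-08'
    RETURNING *
)
INSERT INTO metrics_20150207 SELECT * FROM moved;

-- tighten the parent's own (non-inherited) constraint so the planner can
-- skip the parent when a query only touches already-migrated days
ALTER TABLE metrics DROP CONSTRAINT IF EXISTS metrics_recent_chk;
ALTER TABLE metrics ADD CONSTRAINT metrics_recent_chk
    CHECK (ts >= DATE '2015-02-08') NO INHERIT;

COMMIT;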



On Sat, Feb 7, 2015 at 3:09 AM, Melvin Davidson melvin6...@gmail.com
wrote:

 Perhaps I do not fully understand, but would it not be simpler
 to just rearrange the key (and partition) by date and location?
 EG: 2015_01_01_metrics_location_X

 In that way, you would only have 365 partitions per year at most. But you
 also have the option to break it down by week, month, or year.

 EG:

 EXTRACT(YEAR FROM utc_time) = 2015 AND
 EXTRACT(WEEK FROM utc_time) = 1

 or
 EXTRACT(YEAR FROM utc_time) = 2015 AND
 EXTRACT(MONTH FROM utc_time) = 1

 or just
 EXTRACT(YEAR FROM utc_time) = 2015
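
 For instance (just a sketch, reusing the metrics/utc_time names from this
 thread), a weekly child table could carry such a constraint directly:

 CREATE TABLE metrics_2015_w01 (
     CHECK (EXTRACT(YEAR FROM utc_time) = 2015 AND
            EXTRACT(WEEK FROM utc_time) = 1)
 ) INHERITS (metrics);

 -- note: the planner can only exclude such a child when the query's WHERE
 -- clause uses the same EXTRACT expressions; a plain range predicate on
 -- utc_time will not provably contradict an EXTRACT-based CHECK constraint.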


 On Thu, Feb 5, 2015 at 10:12 PM, David G Johnston 
 david.g.johns...@gmail.com wrote:

 Tim Uckun wrote
  1. Should I be worried about having possibly hundreds of thousands of
  shards?

 IIRC, yes.


  2. Is PG smart enough to handle overlapping constraints on tables and
  limit its querying to only those tables that have the correct time
  constraint?

 Probably yes, but seems easy enough to verify.

 All constraints are checked for each partition and if any return false the
 entire partition will be excluded, which means multiple partitions can be
 included.
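
 A quick way to verify that the exclusion is happening (a sketch against the
 parent/child setup described earlier in the thread):

 SET constraint_exclusion = partition;  -- the default setting
 EXPLAIN SELECT count(*) FROM metrics
 WHERE ts >= DATE '2015-02-07' AND ts < DATE '2015-02-08';
 -- only the parent and the children whose CHECK constraints overlap that
 -- range should appear in the plan; excluded partitions are not scanned.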

 Note, this is a large reason why #1 poses a problem.

 David J.










 --
 *Melvin Davidson*
 I reserve the right to fantasize.  Whether or not you
 wish to share my fantasy is entirely up to you.



[GENERAL] Change postgresql encoding

2015-02-08 Thread Oliver
Hi,
I have a new postgresql installation and it has UTF-8 encoding.
I have to migrate an Oracle database that has ISO8859-15 encoding; should I
change the postgresql encoding to the same encoding as Oracle, or will it
work fine with UTF-8?
If I want to change the postgresql encoding, my understanding is that I
should reinstall postgresql (I installed from the official RPM binaries for
Red Hat). Does anyone know whether the installer asks you to choose an
encoding, or does it take it from what is set in the system? (I did the
installation a long time ago and I don't remember.)
Thanks in advance.

P.S.: I've tried to change the encoding of a new postgresql database to
latin9, but it gives me an error because of UTF-8.

Regards...


Re: [GENERAL] Change postgresql encoding

2015-02-08 Thread Adrian Klaver

On 02/08/2015 11:20 AM, Oliver wrote:

Hi,
I have a new postgresql installation and it has UTF-8 encoding.
I have to migrate an Oracle database that has ISO8859-15 encoding; should
I change the postgresql encoding to the same encoding as Oracle, or will
it work fine with UTF-8?
If I want to change the postgresql encoding, my understanding is that I
should reinstall postgresql (I installed from the official RPM binaries
for Red Hat). Does anyone know whether the installer asks you to choose
an encoding, or does it take it from what is set in the system? (I did
the installation a long time ago and I don't remember.)
Thanks in advance.


What version of Postgres?

What version of RedHat?


P.S.: I've tried to change the encoding of a new postgresql database to
latin9, but it gives me an error because of UTF-8.


What was the command you gave?

What was the exact error message?



Regards...



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Change postgresql encoding

2015-02-08 Thread BladeOfLight16
On Sun, Feb 8, 2015 at 2:20 PM, Oliver ofab...@gmail.com wrote:

 If I want to change the postgresql encoding, my understanding is that I
 should reinstall postgresql (I installed from the official RPM binaries for
 Red Hat)


I can't answer your question about whether the encodings need to be the
same, but I'm pretty sure that's not true. CREATE DATABASE has options to
set the encoding for a database:

CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr'
LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;

(From http://www.postgresql.org/docs/9.4/static/multibyte.html)

To change the encoding of an existing PG database, you have to 1) dump it,
2) create a new, empty DB with the desired encoding, and 3) import the dump
into the new DB. So you can't really change it on the fly, but it can be
done at creation time.
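
Roughly, those three steps from the command line (a sketch; the database
names and the es_ES@euro locale are only examples - check locale -a on the
server for the exact spellings it provides):

# 1) dump the existing database
pg_dump -Fc -f olddb.dump olddb

# 2) create a new, empty database with the desired encoding and locale
createdb -T template0 -E LATIN9 --lc-collate=es_ES@euro --lc-ctype=es_ES@euro newdb

# 3) restore the dump into it; the data is re-encoded on the way in
pg_restore -d newdb olddb.dump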


Re: [GENERAL] Change postgresql encoding

2015-02-08 Thread Oliver
2015-02-08 20:44 GMT+00:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 02/08/2015 11:20 AM, Oliver wrote:

 Hi,
 I have a new postgresql installation and it has UTF-8 encoding.
 I have to migrate an Oracle database that has ISO8859-15 encoding; should
 I change the postgresql encoding to the same encoding as Oracle, or will
 it work fine with UTF-8?
 If I want to change the postgresql encoding, my understanding is that I
 should reinstall postgresql (I installed from the official RPM binaries
 for Red Hat). Does anyone know whether the installer asks you to choose an
 encoding, or does it take it from what is set in the system? (I did the
 installation a long time ago and I don't remember.)
 Thanks in advance.


 What version of Postgres?


9.3.4



 What version of RedHat?


Red Hat Enterprise Linux Server release 6.5




 P.S.: I've tried to change the encoding of a new postgresql database to
 latin9, but it gives me an error because of UTF-8.


 What was the command you gave?


CREATE DATABASE xxx
  WITH ENCODING 'LATIN9'
       OWNER = xxx
       TEMPLATE = template0
       LC_COLLATE = 'es_ES.latin9'
       LC_CTYPE = 'es_ES.latin9'
       CONNECTION LIMIT = -1
       TABLESPACE = xxx;



 What was the exact error message?


The error is: regional configuration name not valid: es_ES.latin9
SQL state: 42809

I've tried es_ES.iso8859-15 and got the same error.
I'm using pgAdmin III 1.18.1 to run the queries. When I run SHOW
client_encoding; it shows UNICODE.



 Regards...



Thanks in advance.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] postgres cust types

2015-02-08 Thread Pavel Stehule
2015-02-03 13:49 GMT+01:00 Ramesh T rameshparnandit...@gmail.com:

 Hi,
   I created a type in postgres:
  CREATE TYPE order_list AS (order_id bigint);
 It works fine.

 Then I try to create another table type using the above created type,
 like:
 --create or replace type suborder_list_table as table of suborder_list;
 This is in *Oracle* format.


This syntax is not supported in Pg - that is, collections as such are not
supported by PostgreSQL.

Use arrays instead:

DECLARE array_var order_list[];

http://www.postgresql.org/docs/9.4/static/arrays.html
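
For illustration, a minimal sketch of an array of that composite type used
inside a plpgsql function (the function and value here are illustrative):

CREATE TYPE order_list AS (order_id bigint);

CREATE OR REPLACE FUNCTION collect_orders() RETURNS order_list[] AS $$
DECLARE
    result order_list[] := '{}';
BEGIN
    -- append one element; ROW(...)::order_list builds a composite value
    result := result || ROW(42)::order_list;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- unnest() turns the array back into rows when a table-like shape is needed
SELECT o.order_id FROM unnest(collect_orders()) AS o;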

Regards

Pavel Stehule



 I need to convert this to *postgres* - how do I create a table type in
 postgres? Is it possible, or is there any other method?

 FYI, I am using these types in a function.

 thanks in advance,






Re: [GENERAL] dbmsscheduler

2015-02-08 Thread Pavel Stehule
Hi

PostgreSQL doesn't have dbms_scheduler. If you need it, you can use
EnterpriseDB - a commercial fork with Oracle migration tools - where it is
available.

You can also use the pgAgent scheduler - https://github.com/postgres/pgagent
http://www.pgadmin.org/docs/dev/pgagent.html

Regards

Pavel

2015-02-03 14:16 GMT+01:00 Ramesh T rameshparnandit...@gmail.com:

 hi,
 How do I run dbms_scheduler.create_job in postgres, and is it available in
 postgres?

 Any help on how to use it in postgres?
 FYI, I am using postgres 9.3.

 thanks in advance,



[GENERAL] dbmsscheduler

2015-02-08 Thread Ramesh T
hi,
How do I run dbms_scheduler.create_job in postgres, and is it available in
postgres?

Any help on how to use it in postgres?
FYI, I am using postgres 9.3.

thanks in advance,


[GENERAL] postgres cust types

2015-02-08 Thread Ramesh T
Hi,
  I created a type in postgres:
 CREATE TYPE order_list AS (order_id bigint);
It works fine.

Then I try to create another table type using the above created type,
like:
--create or replace type suborder_list_table as table of suborder_list;
This is in *Oracle* format.

I need to convert this to *postgres* - how do I create a table type in
postgres? Is it possible, or is there any other method?

FYI, I am using these types in a function.

thanks in advance,


Re: [GENERAL] Ungroup data for import into PostgreSQL

2015-02-08 Thread George Weaver

Hi Adrian,

From: Adrian Klaver adrian.kla...@aklaver.com
Subject: Re: [GENERAL] Ungroup data for import into PostgreSQL



On 01/15/2015 04:56 PM, Jim Nasby wrote:

On 1/15/15 9:43 AM, George Weaver wrote:

Hi List,

I need to import data from a large Excel spreadsheet into a PostgreSQL
table.  I have a program that uses ODBC to connect to Excel and
extract data using SQL queries.  The program then inserts the data
into a PostgreSQL table.

The challenge with this particular spreadsheet is that it is arrayed
thus:

Model No 1    Product Code 15
Serial No 1   No on Hand
Serial No 2   No on Hand
Serial No 3   No on Hand
Model No 4    Product Code 9
Serial No 12  No on Hand
Model No 5    Product Code 27
Serial No 6   No on Hand
Serial No 14  No on Hand

etc.

I need the data in PostgreSQL arrayed thus

Model No 1    Product Code 15    Serial No 1   No on Hand
Model No 1    Product Code 15    Serial No 2   No on Hand
Model No 1    Product Code 15    Serial No 3   No on Hand
Model No 4    Product Code 9     Serial No 12  No on Hand
Model No 5    Product Code 27    Serial No 6   No on Hand
Model No 5    Product Code 27    Serial No 14  No on Hand

I can import the data procedurally using plpgsql to match the
individual rows to the master row for each group (import the raw data into a
buffer table in PostgreSQL and then loop through the rows in the
buffer table, checking to see when the Model No changes).


Note that if you're doing that you better be putting the rownumber from
excel into the table... result sets are NOT guaranteed to be in insert
order!


I'm wondering if there is a more elegant way to do this using straight
sql from Excel?


Well, that's really an excel question, not a Postgres question...

If you load the whole spreadsheet into a single table and have a way to
differentiate between the different rows then you might be able to do
something with CTE's to relate a serial number to the product code. That
might be faster than plpgsql.

You might also be able to do something creative with formulas in excel
to copy the product code data to the serial # rows. You could then
import the whole thing and re-normalize it.

There's probably some stuff you could do with VBA too. If you care about
performance you don't want to execute SQL statements for each
spreadsheet row.


Or if you really want to slice and dice and you use Python, then take a 
look at Pandas:


http://pandas.pydata.org/

In particular the IO functions:

http://pandas.pydata.org/pandas-docs/stable/io.html


I don't use Python but I found the information in these links very 
interesting.


Thanks!




--
Adrian Klaver
adrian.kla...@aklaver.com 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Stability of JSON textual representation

2015-02-08 Thread David Evans
I've noticed that when representing lists as JSON, Postgres 9.4 sometimes
outputs spaces after commas, and other times does not.

# SELECT array_to_json(array[1, 2, 3]), json_build_array(1, 2, 3);

 array_to_json | json_build_array
---------------+------------------
 [1,2,3]       | [1, 2, 3]


This matters to me because I'm intending to hash the JSON text and use the
hash for change detection.

This difference in behaviour doesn't seem to be documented anywhere. Does
anyone know whether it is likely to be stable?

Many thanks,

Dave


Re: [GENERAL] Question on session_replication_role

2015-02-08 Thread Vasudevan, Ramya
Jim,

 I am OP (Karthik)'s colleague. Please see the responses below.

 Your EXPLAIN ANALYZE output doesn't show this taking 30 seconds, it shows it
 taking 90ms. Please provide an EXPLAIN ANALYZE that actually demonstrates the
 problem.
SELECT DISTINCT it.recipient_id
FROM iru.iru_tags it
WHERE it.recipient_id BETWEEN 7257057171 AND 7257704235
  AND it.status = 0
  AND it.last_update_date >= date_trunc('day', now() - interval '90 days')
  AND EXISTS (SELECT DISTINCT res.registration_id
              FROM emailsubscription.reg_email_subscriptions res
              WHERE res.registration_id = it.recipient_id
                AND res.subscription_id = 200400);

                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Unique  (cost=0.57..296573.59 rows=46 width=8) (actual time=24.613..21194.635 rows=904 loops=1)
   Output: it.recipient_id
   Buffers: shared hit=1094265 read=718
   I/O Timings: read=28.674
   ->  Nested Loop Semi Join  (cost=0.57..296570.25 rows=1335 width=8) (actual time=24.608..21190.382 rows=1264 loops=1)
         Output: it.recipient_id
         Buffers: shared hit=1094265 read=718
         I/O Timings: read=28.674
         ->  Index Only Scan using iru_tags_n31 on iru.iru_tags it  (cost=0.57..337.19 rows=1335 width=8) (actual time=0.184..25.875 rows=1847 loops=1)
               Output: it.status, it.recipient_id, it.last_update_date
               Index Cond: ((it.status = 0) AND (it.recipient_id >= 7257057171::bigint) AND (it.recipient_id <= 7257704235::bigint) AND (it.last_update_date >= date_trunc('day'::text, (now() - '90 days'::interval))))
               Heap Fetches: 103
               Buffers: shared hit=820 read=180
               I/O Timings: read=7.614
         ->  Append  (cost=0.00..1748.87 rows=17 width=8) (actual time=11.454..11.454 rows=1 loops=1847)
               Buffers: shared hit=1093445 read=538
               I/O Timings: read=21.060
               ->  Seq Scan on emailsubscription.reg_email_subscriptions res  (cost=0.00..1728.07 rows=1 width=8) (actual time=11.316..11.316 rows=0 loops=1847)
                     Output: res.registration_id
                     Filter: ((res.subscription_id = 200400) AND (it.recipient_id = res.registration_id))
                     Rows Removed by Filter: 77271
                     Buffers: shared hit=1050943
               ->  Index Only Scan using reg_email_subscriptions_p00_pkey on emailsubscription.reg_email_subscriptions_p00 res_1  (cost=0.57..2.60 rows=2 width=8) (actual time=0.033..0.033 rows=0 loops=1847)
                     Output: res_1.registration_id
                     Index Cond: ((res_1.registration_id = it.recipient_id) AND (res_1.subscription_id = 200400))
                     Heap Fetches: 0
                     Buffers: shared hit=7415 read=65
                     I/O Timings: read=2.802
               ->  Index Only Scan using reg_email_subscriptions_p01_pkey on emailsubscription.reg_email_subscriptions_p01 res_2  (cost=0.57..2.60 rows=2 width=8) (actual time=0.017..0.017 rows=0 loops=1710)
                     Output: res_2.registration_id
                     Index Cond: ((res_2.registration_id = it.recipient_id) AND (res_2.subscription_id = 200400))
                     Heap Fetches: 0
                     Buffers: shared hit=6866 read=67
                     I/O Timings: read=3.053
               ->  Index Only Scan using reg_email_subscriptions_p02_pkey on emailsubscription.reg_email_subscriptions_p02 res_3  (cost=0.57..2.60 rows=2 width=8) (actual time=0.017..0.017 rows=0 loops=1567)
                     Output: res_3.registration_id
                     Index Cond: ((res_3.registration_id = it.recipient_id) AND (res_3.subscription_id = 200400))
                     Heap Fetches: 0
                     Buffers: shared hit=6296 read=68
                     I/O Timings: read=2.812
               ->  Index Only Scan using reg_email_subscriptions_p03_pkey on emailsubscription.reg_email_subscriptions_p03 res_4  (cost=0.57..2.60 rows=2 width=8) (actual time=0.016..0.016 rows=0 loops=1406)
                     Output: res_4.registration_id
                     Index Cond: ((res_4.registration_id = it.recipient_id) AND (res_4.subscription_id = 200400))
                     Heap Fetches: 0
                     Buffers: shared hit=5678 read=68
                     I/O Timings: read=2.645
               ->  Index Only Scan using reg_email_subscriptions_p04_pkey on emailsubscription.reg_email_subscriptions_p04 res_5  (cost=0.57..2.60 rows=2 width=8) (actual time=0.019..0.019 rows=0 loops=1246)
                     Output: res_5.registration_id
                     Index Cond: ((res_5.registration_id = it.recipient_id) AND (res_5.subscription_id = 200400))


Re: [GENERAL] Change postgresql encoding

2015-02-08 Thread Oliver
2015-02-08 20:50 GMT+00:00 BladeOfLight16 bladeofligh...@gmail.com:

 On Sun, Feb 8, 2015 at 2:20 PM, Oliver ofab...@gmail.com wrote:

  If I want to change the postgresql encoding, my understanding is that I
  should reinstall postgresql (I installed from the official RPM binaries for
  Red Hat)


 I can't answer your question about whether the encodings need to be the
 same, but I'm pretty sure that's not true. CREATE DATABASE has options to
 set the encoding for a database:

 CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr'
 LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;

 (From http://www.postgresql.org/docs/9.4/static/multibyte.html)

 To change the encoding of an existing PG database, you have to 1) dump it,
 2) create a new, empty DB with the desired encoding, and 3) import the dump
 into the new DB. So you can't really change it on the fly, but it can be
 done at creation time.


How would it be if I want LATIN9 encoding? I'm trying the following but it
says the locale is not valid (es_ES.latin9):

CREATE DATABASE xxx
  WITH ENCODING 'LATIN9'
       OWNER = xxx
       TEMPLATE = template0
       LC_COLLATE = 'es_ES.latin9'
       LC_CTYPE = 'es_ES.latin9'
       CONNECTION LIMIT = -1
       TABLESPACE = xxx;

Thanks in advance.
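
A side note on the locale name: it has to match one the operating system
actually provides. A quick sketch of checking that (es_ES@euro is a common
ISO-8859-15 Spanish locale spelling on RHEL, but that is only an assumption
here):

locale -a | grep -i '^es_ES'

One of the reported names can then be used verbatim for LC_COLLATE and
LC_CTYPE together with ENCODING 'LATIN9'.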


Re: [GENERAL] Ungroup data for import into PostgreSQL

2015-02-08 Thread George Weaver


Sorry for the late reply...life interfered...


From: Jim Nasby



On 1/15/15 9:43 AM, George Weaver wrote:

Hi List,

I need to import data from a large Excel spreadsheet into a PostgreSQL 
table.  I have a program that uses ODBC to connect to Excel and extract 
data using SQL queries.  The program then inserts the data into a 
PostgreSQL table.


The challenge with this particular spreadsheet is that it is arrayed 
thus:


Model No 1    Product Code 15
Serial No 1   No on Hand
Serial No 2   No on Hand
Serial No 3   No on Hand
Model No 4    Product Code 9
Serial No 12  No on Hand
Model No 5    Product Code 27
Serial No 6   No on Hand
Serial No 14  No on Hand

etc.

I need the data in PostgreSQL arrayed thus

Model No 1    Product Code 15    Serial No 1   No on Hand
Model No 1    Product Code 15    Serial No 2   No on Hand
Model No 1    Product Code 15    Serial No 3   No on Hand
Model No 4    Product Code 9     Serial No 12  No on Hand
Model No 5    Product Code 27    Serial No 6   No on Hand
Model No 5    Product Code 27    Serial No 14  No on Hand

I can import the data procedurally using plpgsql to match the individual
rows to the master row for each group (import the raw data into a buffer table
in PostgreSQL and then loop through the rows in the buffer table,
checking to see when the Model No changes).


Note that if you're doing that you better be putting the rownumber from 
excel into the table... result sets are NOT guaranteed to be in insert 
order!


Good point.

I'm wondering if there is a more elegant way to do this using straight 
sql from Excel?



Well, that's really an excel question, not a Postgres question...


If you load the whole spreadsheet into a single table and have a way to 
differentiate between the different rows then you might be able to do 
something with CTEs to relate a serial number to the product code. That 
might be faster than plpgsql.


I have gone the route of loading the whole file into a buffer table in 
PostgreSQL and processing it from there.  Since the only way I can relate 
each row to the model number is by its relative row position, I have used a 
plpgsql loop to flatten the data.
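
For reference, a set-based version of that flattening - just a sketch, which
assumes the buffer table keeps the Excel row number (rownum) and that the
model/product columns are NULL on the serial-number rows (all names here are
illustrative):

-- buffer(rownum int, model text, product_code text, serial_no text, on_hand int)
WITH numbered AS (
    SELECT *,
           -- count of non-NULL model values so far; it increments at each header
           -- row, so a header and the serial rows under it share a group number
           count(model) OVER (ORDER BY rownum) AS grp
    FROM buffer
), filled AS (
    SELECT max(model)        OVER (PARTITION BY grp) AS model,
           max(product_code) OVER (PARTITION BY grp) AS product_code,
           serial_no,
           on_hand
    FROM numbered
)
SELECT model, product_code, serial_no, on_hand
FROM filled
WHERE serial_no IS NOT NULL;   -- drop the header rows themselves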


You might also be able to do something creative with formulas in excel to 
copy the product code data to the serial # rows. You could then import the 
whole thing and re-normalize it.


I looked at this but decided to just do it in PostgreSQL - much more 
powerful :-)


There's probably some stuff you could do with VBA too. If you care about 
performance you don't want to execute SQL statements for each spreadsheet 
row.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com 






Re: [GENERAL] Stability of JSON textual representation

2015-02-08 Thread Paul Jungwirth
 I've noticed that when representing lists as JSON, Postgres 9.4 sometimes
 outputs spaces after commas, and other times does not.

Here is a similar test on 9.3:

# select '[1,2,3]'::json::text, '[1, 2, 3]'::json::text;
  text   |   text
---------+-----------
 [1,2,3] | [1, 2, 3]

It looks like even casting back and forth between text and json
preserves the text.

Paul

-- 
_
Pulchritudo splendor veritatis.




Re: [GENERAL] Stability of JSON textual representation

2015-02-08 Thread Gavin Flower

On 03/02/15 00:06, David Evans wrote:
I've noticed that when representing lists as JSON, Postgres 9.4 
sometimes outputs spaces after commas, and other times does not.


# SELECT array_to_json(array[1, 2, 3]), json_build_array(1, 2, 3);

 array_to_json | json_build_array
---------------+------------------
 [1,2,3]       | [1, 2, 3]


This matters to me because I'm intending to hash the JSON text and 
use the hash for change detection.


This difference in behaviour doesn't seem to be documented anywhere. 
Does anyone know whether it is likely to be stable?


Many thanks,

Dave

 Would using jsonb be more consistent?


Cheers,
Gavin
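
P.S. For what it's worth, a sketch of the same check with jsonb: since jsonb
stores a parsed representation and re-serializes it on output, both spellings
should come back with one canonical spacing:

# SELECT '[1,2,3]'::jsonb::text, '[1, 2, 3]'::jsonb::text;
   text    |   text
-----------+-----------
 [1, 2, 3] | [1, 2, 3]

Note that jsonb also drops duplicate object keys and does not preserve key
order, which matters if the hash is meant to detect any textual change.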

