[GENERAL] Generate PG schemas from the Oracle Data Modeler tool?

2016-02-24 Thread Ken Winter
The best affordable (in this case, free) data modeling tool that I have
found is the "Oracle SQL Developer Data Modeler" (
http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html).


The best DBMS (free or otherwise) that I have found is PostgreSQL.

So of course it would be great to be able to connect the Oracle tool to a
PG database, so one could forward- and reverse-engineer between the two.
At present, apparently the Oracle tool only natively connects with Oracle,
DB2, and SQL Server.

So I'm wondering if anybody knows of a utility or an Oracle Data Modeler
add-on that will take some dialect of SQL DDL that tool generates and
turn it into PG-readable SQL.

I get it from the list at
https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL
that there are tools that do something more ambitious: read schemas from
actual Oracle databases and implement them as schemas in actual PG
databases.  What I need is more modest than that: a tool that inputs a file
of DDL from the Oracle Data Modeler tool and outputs that DDL in PG syntax.

~ Thanks for any leads you can provide
~ Ken


[GENERAL] 2x Time difference between first and subsequent run of the same query on fresh established connection (on linux, with perf data included, all query data in the shared buffers) on postgresql

2016-02-24 Thread Maxim Boguk
Hi,

I found a strange case where a query (which reads a large number of
shared buffers) runs 2x faster on the second and subsequent runs in a new
connection to the db.
For example:

postgres@base1:~$ psql ***
psql (9.3.10)

***=# explain (analyze, costs, buffers, timing) select * from
transactions where "timestamp" >= '2016-02-20' and "timestamp"  <
'2016-02-23'::date;

QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using i_transactions_timestamp on transactions
(cost=0.57..138824.52 rows=1955459 width=790) (actual time=0.146..3416.477
rows=1950630 loops=1)
   Index Cond: (("timestamp" >= '2016-02-20 00:00:00'::timestamp without
time zone) AND ("timestamp" < '2016-02-23'::date))
   Buffers: shared hit=1965635
 Total runtime: 3481.322 ms
(4 rows)

***=# explain (analyze, costs, buffers, timing) select * from
transactions where "timestamp" >= '2016-02-20' and "timestamp"  <
'2016-02-23'::date;

QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using i_transactions_timestamp on transactions
(cost=0.57..138824.52 rows=1955459 width=790) (actual time=0.030..1812.361
rows=1950630 loops=1)
   Index Cond: (("timestamp" >= '2016-02-20 00:00:00'::timestamp without
time zone) AND ("timestamp" < '2016-02-23'::date))
   Buffers: shared hit=1965635
 Total runtime: 1878.503 ms
(4 rows)

Every run after that takes 1.8-1.9 s, but if I establish a new connection
to the database, the first query runs for 3.5 s again.

The time difference and the timing of each run are quite repeatable (+/- 100 ms).

Here are the perf report data for the first and second runs:

The first run (something fishy with kernel calls):
  19,60%  postgres  [kernel.kallsyms]  [k] filemap_map_pages
  15,86%  postgres  postgres   [.] hash_search_with_hash_value
   8,20%  postgres  postgres   [.] heap_hot_search_buffer
   8,20%  postgres  postgres   [.] heap_page_prune_opt
   5,72%  postgres  postgres   [.] PinBuffer
   4,38%  postgres  [kernel.kallsyms]  [k] page_fault
   4,04%  postgres  [kernel.kallsyms]  [k] page_waitqueue
   3,55%  postgres  [kernel.kallsyms]  [k] __wake_up_bit
   2,95%  postgres  postgres   [.] LWLockAcquire
   2,31%  postgres  [kernel.kallsyms]  [k] unlock_page
   1,96%  postgres  [vdso] [.] __vdso_gettimeofday
   1,83%  postgres  [kernel.kallsyms]  [k] radix_tree_next_chunk
   1,77%  postgres  [kernel.kallsyms]  [k] page_add_file_rmap
   1,66%  postgres  postgres   [.] _bt_checkkeys
   1,27%  postgres  postgres   [.] LWLockRelease

The second run (looks perfectly OK for this kind of query):
  27,74%  postgres  postgres   [.] hash_search_with_hash_value
  15,51%  postgres  postgres   [.] heap_hot_search_buffer
  14,28%  postgres  postgres   [.] heap_page_prune_opt
   9,22%  postgres  postgres   [.] PinBuffer
   3,94%  postgres  [vdso] [.] __vdso_gettimeofday
   3,32%  postgres  postgres   [.] _bt_checkkeys
   3,29%  postgres  postgres   [.] LWLockAcquire
   2,86%  postgres  postgres   [.] LWLockRelease
   1,54%  postgres  postgres   [.] UnpinBuffer


So it looks like something strange is going on inside Linux kernel 3.16.0
memory management (it's a vanilla kernel on bare hardware, no
virtualization, swap off).
The question is: does this work as expected (in which case it's probably a
good idea to use pgbouncer even for one-shot analytical queries), or is it
a sign of potential issues with the kernel?
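For reference, a minimal sketch of how such per-backend perf data can be
collected (the exact perf invocation is illustrative; pg_backend_pid() is
standard):

***=# select pg_backend_pid();   -- in the psql session under test

# then, in a shell on the server, while the query is running:
perf record -g -p <pid> -- sleep 10
perf report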



-- 
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.com/

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com

"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."


Re: [GENERAL] check constraint problem during COPY while pg_upgrade-ing

2016-02-24 Thread David G. Johnston
On Wed, Feb 24, 2016 at 3:12 PM, Karsten Hilbert 
wrote:

> I have noticed another problem during pg_upgrade on a 9.1
> cluster with 9.4 as the target.
>
> Consider this sort of table
>
> create table therapy (
> pk serial primary key,
> description text,
> is_ongoing boolean not null,
> ts_end timestamp with time zone
> );
>
> Now, business rules say that a particular therapy is either
> ongoing or not. The end of therapy can be known or not.
>
...

> However, if the therapy is ongoing the ts_end must be either
> NULL or "in the future" at row INSERT/UPDATE time.
>

Then you must record the "INSERT/UPDATE time" into the table, as a
constant, and refer to that value instead of having "now()", which happens
to be correct at the time of the insert/update but is not correct at any
future point.

So:

ts_last_change DEFAULT now()

and

CHECK ( ... (is_ongoing is true) AND (ts_end > ts_last_change) ... )

In this way a restoration will restore both the historical ts_last_change
and the value of is_ongoing that matched.
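A minimal sketch of the revised table, combining the original DDL from this
thread with the suggestion above (the column name ts_last_change is
illustrative):

create table therapy (
    pk serial primary key,
    description text,
    is_ongoing boolean not null,
    ts_end timestamp with time zone,
    ts_last_change timestamp with time zone not null default now(),
    check (
        (is_ongoing is false)
        or (ts_end is null)
        or (ts_end > ts_last_change)
    )
    -- note: an UPDATE would need to refresh ts_last_change (e.g. via a
    -- trigger) for the rule to track update time as well as insert time
);

Because the check compares against a stored timestamp rather than now(), it
evaluates the same way at INSERT/UPDATE time and at restore time.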

David J.


Re: [GENERAL] check constraint problem during COPY while pg_upgrade-ing

2016-02-24 Thread Adrian Klaver

On 02/24/2016 02:12 PM, Karsten Hilbert wrote:

I have noticed another problem during pg_upgrade on a 9.1
cluster with 9.4 as the target.

Consider this sort of table

create table therapy (
pk serial primary key,
description text,
is_ongoing boolean not null,
ts_end timestamp with time zone
);

Now, business rules say that a particular therapy is either
ongoing or not. The end of therapy can be known or not.
However, if the therapy is ongoing the ts_end must be either
NULL or "in the future" at row INSERT/UPDATE time.

Consider this check constraint

CHECK (
(is_ongoing is false)
OR
(
((is_ongoing is true) AND (ts_end is null))
OR
((is_ongoing is true) AND (ts_end > now()))
)
)

(I know this can logically be reduced. I wrote it this way to
be explicit about the intent.)

This works fine, the application (GNUmed) ensures INSERTS and
UPDATES do the right thing with .is_ongoing and .ts_end.

Now the following sequence happens:

- insert row with .is_ongoing=true and .ts_end=tomorrow()
- wait a week
- dump
- restore


Seems to me you are caught in a logical bind even without the dump/restore.

At some point past tomorrow(), absent a change in is_ongoing, you will 
have a row where is_ongoing is 't' but ts_end says the therapy is over. 
To my way of thinking this means either having ts_end be NULL until the 
therapy is completed, or having a periodic job that marks is_ongoing = 'f' 
when ts_end goes into the past while is_ongoing = 't'. Otherwise, resort 
to the trigger method you suggest below.
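A minimal sketch of that periodic job (run from cron or similar; the exact
predicate is illustrative):

update therapy
   set is_ongoing = false
 where is_ongoing
   and ts_end is not null
   and ts_end <= now();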




The restore will fail because the inserted row contains
.is_ongoing=true and .ts_end in the past, so the now()-based check no
longer holds at COPY time.

[GENERAL] check constraint problem during COPY while pg_upgrade-ing

2016-02-24 Thread Karsten Hilbert
I have noticed another problem during pg_upgrade on a 9.1
cluster with 9.4 as the target.

Consider this sort of table

create table therapy (
pk serial primary key,
description text,
is_ongoing boolean not null,
ts_end timestamp with time zone
);

Now, business rules say that a particular therapy is either
ongoing or not. The end of therapy can be known or not.
However, if the therapy is ongoing the ts_end must be either
NULL or "in the future" at row INSERT/UPDATE time.

Consider this check constraint

CHECK (
(is_ongoing is false)
OR
(
((is_ongoing is true) AND (ts_end is null))
OR
((is_ongoing is true) AND (ts_end > now()))
)
)

(I know this can logically be reduced. I wrote it this way to
be explicit about the intent.)

This works fine, the application (GNUmed) ensures INSERTS and
UPDATES do the right thing with .is_ongoing and .ts_end.

Now the following sequence happens:

- insert row with .is_ongoing=true and .ts_end=tomorrow()
- wait a week
- dump
- restore

The restore will fail because the inserted row contains
.is_ongoing=true and .ts_end in the past, so the now()-based check no
longer holds at COPY time.
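A minimal sketch of the failure mode (timestamps illustrative):

insert into therapy (description, is_ongoing, ts_end)
values ('demo', true, now() + interval '1 day');   -- passes the check today

-- a week later, pg_dump emits this row and the restore re-runs the CHECK
-- during COPY with now() evaluated at restore time; ts_end is by then in
-- the past, so the constraint fails and the COPY aborts.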

Re: [GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Stephen Frost
* Adam Guthrie (asguth...@gmail.com) wrote:
> On 24 February 2016 at 20:27, Stephen Frost  wrote:
> > Yeah, looks like a bug to me.  My gut reaction is that we're pulling up
> > a subquery in a way that isn't possible and that plan shouldn't be
> > getting built/considered.
> 
> Thanks - shall I go ahead and submit a bug report?

Sure.

> > As a work-around, until we fix it, you could create an sql function to
> > check for the existence of the id in 'a' and use that in the policy
> > definition.
> 
> I've also discovered that using the following policy instead
> 
> CREATE POLICY a_select ON b FOR SELECT
> USING ( a_id IN (SELECT id FROM a) );
> 
> also seems to work around the issue.

Yes, that also works, but it could get painful if 'a' gets large.  An
SQL function like:

select exists (select * from a where a.id = $1);

would still use an indexed lookup against 'a'.
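For completeness, a minimal sketch of that workaround (the function name is
illustrative):

CREATE FUNCTION a_id_exists(integer) RETURNS boolean
    LANGUAGE sql STABLE
    AS $$ SELECT EXISTS (SELECT * FROM a WHERE a.id = $1) $$;

-- in place of the original subquery policy:
CREATE POLICY a_select ON b FOR SELECT
    USING ( a_id_exists(a_id) );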

Thanks!

Stephen




Re: [GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Adam Guthrie
On 24 February 2016 at 20:27, Stephen Frost  wrote:
> Yeah, looks like a bug to me.  My gut reaction is that we're pulling up
> a subquery in a way that isn't possible and that plan shouldn't be
> getting built/considered.

Thanks - shall I go ahead and submit a bug report?

>
> As a work-around, until we fix it, you could create an sql function to
> check for the existence of the id in 'a' and use that in the policy
> definition.

I've also discovered that using the following policy instead

CREATE POLICY a_select ON b FOR SELECT
USING ( a_id IN (SELECT id FROM a) );

also seems to work around the issue.




Re: [GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Stephen Frost
Adrian,

* Adrian Klaver (adrian.kla...@aklaver.com) wrote:
> I started to work through this when I realized the
> permissions/attributes of the role test are not shown. This seems to
> be important as the UPDATE example works if you run it immediately
> after:
> 
> INSERT INTO b (id, a_id, text) VALUES (1, 1, 'one');

Not sure what you mean - the necessary permissions for the test role are
just the 'GRANT ALL ON ALL TABLES' which is included.

If you run the UPDATE immediately after the INSERT, then it's before
that GRANT and, more importantly, before the 'SET ROLE', meaning that
you're running it as the table owner, and the policy is ignored
(policies are not applied to the owner of the table, unless FORCE RLS is
used).

Thanks!

Stephen




Re: [GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Stephen Frost
Adam,

* Adam Guthrie (asguth...@gmail.com) wrote:
> psql:/tmp/test.sql:26: ERROR:  plan should not reference subplan's variable
> 
> Is this a bug or am I doing something wrong?

Yeah, looks like a bug to me.  My gut reaction is that we're pulling up
a subquery in a way that isn't possible and that plan shouldn't be
getting built/considered.

As a work-around, until we fix it, you could create an sql function to
check for the existence of the id in 'a' and use that in the policy
definition.

Thanks!

Stephen




Re: [GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Adrian Klaver

On 02/24/2016 09:51 AM, Adam Guthrie wrote:

Hi,

Whilst trying to use row level security with a subquery in the USING
expression, I'm receiving an error "plan should not reference
subplan's variable"

A simple sql file to reproduce:



CREATE TABLE a (
 id  INTEGER PRIMARY KEY
);

CREATE TABLE b (
 id  INTEGER PRIMARY KEY,
 a_idINTEGER,
 textTEXT
);

CREATE POLICY a_select ON b FOR SELECT
 USING ( EXISTS(SELECT FROM a WHERE a.id = b.a_id) );

ALTER TABLE b ENABLE ROW LEVEL SECURITY;

INSERT INTO a (id) VALUES (1);

INSERT INTO b (id, a_id, text) VALUES (1, 1, 'one');

GRANT ALL ON ALL TABLES IN SCHEMA public TO test;

SET ROLE test;

SELECT * FROM b;

UPDATE b SET text = 'ONE' WHERE id = 1;



gives error:

psql:/tmp/test.sql:26: ERROR:  plan should not reference subplan's variable

Is this a bug or am I doing something wrong?


I started to work through this when I realized the 
permissions/attributes of the role test are not shown. This seems to be 
important as the UPDATE example works if you run it immediately after:


INSERT INTO b (id, a_id, text) VALUES (1, 1, 'one');



Any help much appreciated,

Adam





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Paul Jungwirth

On 02/24/2016 09:44 AM, Paul Jungwirth wrote:

Also, Rails wants to use Postgres "ident" authentication, which does not
require a password because it trusts that the OS has already
authenticated you.


Sorry, I misspoke: this is "peer" authentication, not "ident". Should 
have looked it up first. :-)


Paul





[GENERAL] "plan should not reference subplan's variable" when using row level security

2016-02-24 Thread Adam Guthrie
Hi,

Whilst trying to use row level security with a subquery in the USING
expression, I'm receiving an error "plan should not reference
subplan's variable"

A simple sql file to reproduce:



CREATE TABLE a (
id  INTEGER PRIMARY KEY
);

CREATE TABLE b (
id  INTEGER PRIMARY KEY,
a_idINTEGER,
textTEXT
);

CREATE POLICY a_select ON b FOR SELECT
USING ( EXISTS(SELECT FROM a WHERE a.id = b.a_id) );

ALTER TABLE b ENABLE ROW LEVEL SECURITY;

INSERT INTO a (id) VALUES (1);

INSERT INTO b (id, a_id, text) VALUES (1, 1, 'one');

GRANT ALL ON ALL TABLES IN SCHEMA public TO test;

SET ROLE test;

SELECT * FROM b;

UPDATE b SET text = 'ONE' WHERE id = 1;



gives error:

psql:/tmp/test.sql:26: ERROR:  plan should not reference subplan's variable

Is this a bug or am I doing something wrong?

Any help much appreciated,

Adam




Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Paul Jungwirth

On 02/24/2016 12:34 AM, Marco Lobbia wrote:

I am on a Ubuntu 14.04 LTS machine.


I thought I'd chime in since I work with Rails and Postgres on Ubuntu 
all day long. :-) 14.04 LTS is fine for both production and development. 
(Sounds like you're using Heroku for production in any case.)



Putting everything together, all the information seems to converge on
the necessity of creating a database superuser with login name that
match my Ubuntu user name with:

|sudo -u postgres createuser --superuser $USER|


This is the "normal" way to do it for Rails. It wants a superuser so it 
can drop and recreate the database before running tests (`rake test`). 
Personally I prefer to give each project a separate non-superuser 
Postgres account and start tests with just `rspec spec`, but if you're 
new to Rails I recommend going with the approved method.


Also, Rails wants to use Postgres "ident" authentication, which does not 
require a password because it trusts that the OS has already 
authenticated you. You can avoid this too if you like by adding `host: 
localhost` to your `database.yml`, but then you'll need to use a password.
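For what it's worth, here's a minimal database.yml sketch with that change
(all values are illustrative):

development:
  adapter: postgresql
  database: myapp_development
  host: localhost
  username: myapp
  password: secret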


Btw since this is not the Rails mailing list, feel free to email me 
personally if you have more Rails-specific questions.


> According to Heroku it is necessary "to export the DATABASE_URL
> environment variable for your app to connect to it when running
> locally", with:
>
> export DATABASE_URL=postgres:///$(whoami)

That is if you want to connect to the *remote* database on Heroku (i.e. 
probably your *production* database). If you want to connect to your 
locally-installed Postgres, you should not set DATABASE_URL.


> Finally I am wondering whether the choice of installing PostgreSQL
> through the PostgreSQL apt repository would be safe enough or it would
> be preferable to install the LTS version of Ubuntu.

Either is fine.

Good luck!
Paul





Re: [GENERAL] FreeBSD x86 and x86_64

2016-02-24 Thread MEERA
Hi all,

Thanks a lot for the information, we are fine. Really appreciate it.

On Wed, Feb 24, 2016 at 2:15 AM, Larry Rosenman  wrote:

> On 2016-02-23 14:40, Alvaro Herrera wrote:
>
>> MEERA wrote:
>>
>>> Hi all,
>>>
>>> Any information regarding PostgreSQL support on FreeBSD platform?
>>>
>>
>> You can see the list of supported platforms here:
>> http://buildfarm.postgresql.org/cgi-bin/show_status.pl
>> We seem to have FreeBSD 9.0 on gcc 4.2 and FreeBSD 10 on clang, both on
>> x86_64.  If you want to see it tested on 32bit x86, I suggest you put up
>> a permanent buildfarm machine to that effect; see
>> http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto
>>
>> Cheers
>>
>> --
>> Álvaro Herrera    http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
> The ports tree has postgresql:
> borg.lerctr.org /home/ler $ cd /usr/ports/databases/
> borg.lerctr.org /usr/ports/databases $ ls|grep postgres
> fpc-postgres
> libgda4-postgresql
> libgda5-postgresql
> luasql-postgres
> nagios-check_postgres_replication
> p5-Test-postgresql
> postgresql-jdbc
> postgresql-libpgeasy
> postgresql-libpqxx
> postgresql-libpqxx3
> postgresql-odbc
> postgresql-pllua
> postgresql-plproxy
> postgresql-plruby
> postgresql-plv8js
> postgresql-relay
> postgresql-repmgr
> postgresql90-client
> postgresql90-contrib
> postgresql90-docs
> postgresql90-pgtcl
> postgresql90-plperl
> postgresql90-plpython
> postgresql90-pltcl
> postgresql90-server
> postgresql91-client
> postgresql91-contrib
> postgresql91-docs
> postgresql91-pgtcl
> postgresql91-plperl
> postgresql91-plpython
> postgresql91-pltcl
> postgresql91-server
> postgresql92-client
> postgresql92-contrib
> postgresql92-docs
> postgresql92-pgtcl
> postgresql92-plperl
> postgresql92-plpython
> postgresql92-pltcl
> postgresql92-server
> postgresql93-client
> postgresql93-contrib
> postgresql93-docs
> postgresql93-pgtcl
> postgresql93-plperl
> postgresql93-plpython
> postgresql93-pltcl
> postgresql93-server
> postgresql94-client
> postgresql94-contrib
> postgresql94-docs
> postgresql94-pgtcl
> postgresql94-plperl
> postgresql94-plpython
> postgresql94-pltcl
> postgresql94-server
> postgresql95-client
> postgresql95-contrib
> postgresql95-docs
> postgresql95-pgtcl
> postgresql95-plperl
> postgresql95-plpython
> postgresql95-pltcl
> postgresql95-server
> postgresql_autodoc
> proftpd-mod_sql_postgres
> py-postgresql
> rubygem-dm-postgres-adapter
> rubygem-do_postgres
> borg.lerctr.org /usr/ports/databases $
>
> I'm running 9.5.1 on both 11-CURRENT, and 10.x
>
>
> --
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
> US Mail: 7011 W Parmer Ln, Apt 1115, Austin, TX 78729-6961
>



-- 
thanks and regards,
Meera R Nair


Re: [GENERAL] Perfomance issue. statement in the log file..

2016-02-24 Thread Bala Venkat
Thank you. That will help us. We are changing the settings. We are running
9.0.4.  Appreciate your time and help
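For reference, a minimal sketch of the logging settings suggested below
(postgresql.conf; the exact prefix format is illustrative):

log_connections = on
log_disconnections = on
log_line_prefix = '%t [%p] user=%u db=%d '   # %p adds the backend PID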

On Tue, Feb 23, 2016 at 3:20 PM, Roxanne Reid-Bennett 
wrote:

> On 2/23/2016 11:57 AM, Bala Venkat wrote:
>
>> Dear All -
>>
>>We are seeing a lot of these statements in the log file. We have 2
>> functions: submit and update. Between submit and update, which initiate
>> the process, the messages are appearing. The process is very slow. We
>> don't know if this is the main reason for the performance issue.
>>
>> Can you please help? If anyone has come across this issue and
>> resolved it, we would appreciate your time.
>>
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testLOG:
>> execute : SELECT 1  FROM pg_catalog.pg_attrdef  WHERE adrelid = $1
>> AND adn
>> um = $2   AND pg_catalog.pg_get_expr(adbin, adrelid)   LIKE
>> '%nextval(%'
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testDETAIL:
>> parameters: $1 = '3003958', $2 = '1'
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testLOG:
>> duration: 0.059 ms
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testLOG:
>> duration: 0.117 ms
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testLOG:
>> duration: 1.087 ms
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testLOG:
>> execute : SELECT attnotnull FROM pg_catalog.pg_attribute WHERE
>> attrelid =
>> $1 AND attnum = $2
>> 2016-02-23 18:11:24 GMT IpAddress(33946)  user=UserName , db=testDETAIL:
>> parameters: $1 = '3003958', $2 = '1'
>>
>
> ...
> Having done a cursory search for "SELECT attnotnull FROM
> pg_catalog.pg_attribute WHERE attrelid ="
> It appears your potential culprit is a JDBC or ODBC (or ORM managed)
> connection querying meta-data over and over.
> To identify connection information, back in 2010 [so... your mileage may
> vary] Tom suggested the use of:
>
> See log_connections and log_disconnections.  You probably want to
> add PID to log_line_prefix, too, so that you can associate different
> log entries for the same session.
>
>
> The version of Postgres and the execution environment of the 2 functions
> [e.g. pgsql, PL/?, code, etc.] might provide more input to help identify a
> more specific answer.
>
> Roxanne
>
> --
> [At other schools] I think the most common fault in general is to teach
> students how to pass exams instead of teaching them the science.
> Donald Knuth
>
>
>
>


[GENERAL] uuid gin operator class - why not include for everybody?

2016-02-24 Thread Seamus Abshere
hi,

We want to use gin indexes on arrays of UUIDs. It is as simple as:

> CREATE OPERATOR CLASS _uuid_ops
> DEFAULT FOR TYPE _uuid
> USING gin AS
> OPERATOR 1 &&(anyarray, anyarray),
> OPERATOR 2 @>(anyarray, anyarray),
> OPERATOR 3 <@(anyarray, anyarray),
> OPERATOR 4 =(anyarray, anyarray),
> FUNCTION 1 uuid_cmp(uuid, uuid),
> FUNCTION 2 ginarrayextract(anyarray, internal, internal),
> FUNCTION 3 ginqueryarrayextract(anyarray, internal, smallint, internal, 
> internal, internal, internal),
> FUNCTION 4 ginarrayconsistent(internal, smallint, anyarray, integer, 
> internal, internal, internal, internal),
> STORAGE uuid;
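
Once the operator class is in place, a usage sketch (table name and UUID
value are illustrative):

CREATE TABLE docs (tags uuid[]);
CREATE INDEX docs_tags_gin ON docs USING gin (tags);
-- the index can then serve overlap/containment queries such as:
SELECT * FROM docs
WHERE tags && ARRAY['00000000-0000-0000-0000-000000000000'::uuid];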

Is there a reason not to put this into postgres itself? This already
exists for text[].

Thanks,
Seamus


-- 
Seamus Abshere, SCEA
+598 99 54 99 54
https://github.com/seamusabshere




Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/24/2016 09:41 AM, Tom Lane wrote:


However, it looks to me like row_to_json already does pretty much the
right thing with nested array/record types:

regression=# select row_to_json(row(1,array[2,3],'(0,1)'::int8_tbl,array[(1,2),(3,4)]::int8_tbl[]));
                                   row_to_json
----------------------------------------------------------------------------------
 {"f1":1,"f2":[2,3],"f3":{"q1":0,"q2":1},"f4":[{"q1":1,"q2":2},{"q1":3,"q2":4}]}
(1 row)

So the complaint here is that json_populate_record fails to be an inverse
of row_to_json.



Right.




I'm not sure about Andrew's estimate that it'd be a large amount of work
to fix this.  It would definitely require some restructuring of the code
to make populate_record_worker (or some portion thereof) recursive, and
probably some entirely new code for array conversion; and making
json_populate_recordset behave similarly might take refactoring too.





One possible shortcut if we were just handling arrays and not nested 
composites would be to mangle the json array to produce a Postgres array 
literal. But if we're handling nested composites as well that probably 
won't pass muster and we would need to decompose all the objects fully 
and reassemble them into Postgres objects. Maybe it won't take as long 
as I suspect. If anyone actually does it I'll be interested to find out 
how long it took them :-)
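
For the flat-strings case, a minimal sketch of that mangling (assumes a
one-dimensional json array of simple strings with no embedded brackets):

select translate('["abc","def","fgh"]', '[]', '{}')::text[];
-- yields {abc,def,fgh}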


cheers

andrew





Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Adrian Klaver

On 02/24/2016 12:34 AM, Marco Lobbia wrote:

I am on a Ubuntu 14.04 LTS machine.

I found useful information on how to configure PostgreSQL in Ubuntu for
Rails development at help.ubuntu.com, at Heroku and at digitalocean.com.

In the PostgreSQL documentation I could not find instructions related to
setup after installation with PPA and about configuration for Rails, but
only setup instructions after installation from source code.

Putting everything together, all the information seems to converge on
the necessity of creating a database superuser with a login name that
matches my Ubuntu user name with:

sudo -u postgres createuser --superuser $USER

When the time arrives to create a password for the new superuser with sudo
-u postgres psql, I am wondering if Rails can use PostgreSQL without
setting the password, if this password can and should be different from
my Ubuntu account password, and also whether database.yml could be a
security concern when pushing to Git repository hosting web sites and to
Heroku.
In database.yml, in fact, exactly this kind of sensitive information is
recorded.


Do not know if the pg gem uses libpq to make its connections. I suspect 
it does:


https://bitbucket.org/ged/ruby-pg/wiki/Home

Requirements

Ruby 1.9.3+ or Rubinius 2.0+ .
PostgreSQL 9.0.x (with headers, -dev packages, etc).

If that is the case, then you can use .pgpass to keep sensitive 
information out of the *.yml file:


http://www.postgresql.org/docs/9.5/static/libpq-pgpass.html
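
For example, a .pgpass entry uses the documented
hostname:port:database:username:password format (the file must be mode
0600; values here are illustrative):

localhost:5432:myapp_development:myapp:secret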



According to Heroku it is necessary "to export the DATABASE_URL
environment variable for your app to connect to it when running
locally", with:

export DATABASE_URL=postgres:///$(whoami)

Is that really necessary? At help.ubuntu.com and digitalocean.com this
information is not reported.


See here:

http://guides.rubyonrails.org/configuring.html#configuring-a-database

Since a good bit of the above deals with Rails specifically, you might 
want to also ask on the Rails list:


https://groups.google.com/forum/#!forum/rubyonrails-talk



Finally I am wondering whether the choice of installing PostgreSQL
through the PostgreSQL apt repository would be safe enough or it would
be preferable to install the LTS version of Ubuntu.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] json function question

2016-02-24 Thread David G. Johnston
On Wednesday, February 24, 2016, Andrew Dunstan  wrote:

>
>
> On 02/24/2016 09:11 AM, David G. Johnston wrote:
>
>> On Wednesday, February 24, 2016, Andrew Dunstan wrote:
>>
>>
>> Having json(b)_populate_record recursively process nested complex
>> objects would be a large undertaking. One thing to consider is
>> that json arrays are quite different from Postgres arrays: they
>> are essentially one-dimensional heterogeneous lists, not
>> multi-dimensional homogeneous matrices. So while a Postgres array
>> that's been converted to a json array should in principle be
>> convertible back, an arbitrary json array could easily not be.
>>
>>
>> An arbitrary json array should be one-dimensional and homogeneous - seems
>> like that should be easy to import.  The true concern is that not all
>> PostgreSQL arrays are capable of being represented in json.
>>
>
>
>
> Neither of these things are true AFAIK.
>
> 1. The following is a 100% legal json array, about as heterogeneous as can
> be:
>
>[ "a" , 1, true, null, [2,false], {"b": null} ]
>
>
> 2. Having implemented the routines to convert Postgres arrays to json I'm
> not aware of any which can't be converted. Please supply an example of one
> that can't.
>
>
Yeah, I confused heterogeneous and homogeneous.  That said, we already only
promise best effort, so if presented with a non-complex array it would be
nice to import it as part of that effort, even if something more complex
cannot be and continues to fail.

David J.


Re: [GENERAL] json function question

2016-02-24 Thread Tom Lane
"David G. Johnston"  writes:
> On Wednesday, February 24, 2016, Andrew Dunstan  wrote:
>> Having json(b)_populate_record recursively process nested complex objects
>> would be a large undertaking. One thing to consider is that json arrays are
>> quite different from Postgres arrays: they are essentially one-dimensional
>> heterogeneous lists, not multi-dimensional homogeneous matrices. So while a
>> Postgres array that's been converted to a json array should in principle be
>> convertible back, an arbitrary json array could easily not be.

> An arbitrary json array should be one-dimensional and homogeneous - seems
> like that should be easy to import.  The true concern is that not all
> PostgreSQL arrays are capable of being represented in json.

I think we can put it on the user's head that the target Postgres array
type specified in json(b)_populate_record's arguments must be capable of
absorbing all elements of the matching JSON array.

Andrew raises a larger point: if the goal is that
json_populate_record(row_to_json()) be an identity with "deep" conversion
of container types, that puts constraints on row_to_json's behavior, which
we could not change without creating backwards-compatibility issues.
However, it looks to me like row_to_json already does pretty much the
right thing with nested array/record types:

regression=# select row_to_json(row(1,array[2,3],'(0,1)'::int8_tbl,array[(1,2),(3,4)]::int8_tbl[]));
                                   row_to_json
----------------------------------------------------------------------------------
 {"f1":1,"f2":[2,3],"f3":{"q1":0,"q2":1},"f4":[{"q1":1,"q2":2},{"q1":3,"q2":4}]}
(1 row)

So the complaint here is that json_populate_record fails to be an inverse
of row_to_json.

I'm not sure about Andrew's estimate that it'd be a large amount of work
to fix this.  It would definitely require some restructuring of the code
to make populate_record_worker (or some portion thereof) recursive, and
probably some entirely new code for array conversion; and making
json_populate_recordset behave similarly might take refactoring too.

regards, tom lane




Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/24/2016 09:11 AM, David G. Johnston wrote:
On Wednesday, February 24, 2016, Andrew Dunstan wrote:



Having json(b)_populate_record recursively process nested complex
objects would be a large undertaking. One thing to consider is
that json arrays are quite different from Postgres arrays: they
are essentially one-dimensional heterogeneous lists, not
multi-dimensional homogeneous matrices. So while a Postgres array
that's been converted to a json array should in principle be
convertible back, an arbitrary json array could easily not be.


An arbitrary json array should be one-dimensional and homogeneous - 
seems like that should be easy to import.  The true concern is that 
not all PostgreSQL arrays are capable of being represented in json.




Neither of these things are true AFAIK.

1. The following is a 100% legal json array, about as heterogeneous as 
can be:


   [ "a" , 1, true, null, [2,false], {"b": null} ]


2. Having implemented the routines to convert Postgres arrays to json 
I'm not aware of any which can't be converted. Please supply an example 
of one that can't.



cheers

andrew






Re: [GENERAL] json function question

2016-02-24 Thread David G. Johnston
On Wednesday, February 24, 2016, Andrew Dunstan  wrote:
>
>
> Having json(b)_populate_record recursively process nested complex objects
> would be a large undertaking. One thing to consider is that json arrays are
> quite different from Postgres arrays: they are essentially one-dimensional
> heterogeneous lists, not multi-dimensional homogeneous matrices. So while a
> Postgres array that's been converted to a json array should in principle be
> convertible back, an arbitrary json array could easily not be.
>
>
An arbitrary json array should be one-dimensional and homogeneous - seems
like that should be easy to import.  The true concern is that not all
PostgreSQL arrays are capable of being represented in json.

I'd be happy with just accepting json arrays and ignoring complex and
nested types.  While round-trip is nice externally supplied json that uses
arrays should be something we can import directly.

David J.


Re: [GENERAL] json function question

2016-02-24 Thread Andrew Dunstan



On 02/23/2016 02:54 PM, Tom Lane wrote:

Dan S  writes:

I have this table, data and query:
create table test
(
 id int,
 txt text,
 txt_arr text[],
 f float
);
insert into test
values
(1,'jkl','{abc,def,fgh}',3.14159),(2,'hij','{abc,def,fgh}',3.14159),(2,null,null,null),(3,'def',null,0);
select j, json_populate_record(null::test, j)
from
(
 select to_json(t) as j from test t
) r;
ERROR:  malformed array literal: "["abc","def","fgh"]"
DETAIL:  "[" must introduce explicitly-specified array dimensions.
Is it a bug or how am I supposed to use the populate function ?

AFAICS, json_populate_record has no intelligence about nested container
situations.  It'll basically just push the JSON text representation of any
field of the top-level object at the input converter for the corresponding
composite-type column.  That doesn't work if you're trying to convert a
JSON array to a Postgres array, and it wouldn't work for sub-object to
composite column either, because of syntax discrepancies.

Ideally this would work for arbitrarily-deeply-nested array+record
structures, but it looks like a less than trivial amount of work to make
that happen.


If I try an equivalent example with hstore it works well.

hstore hasn't got any concept of substructure in its field values, so
it's hard to see how you'd create an "equivalent" situation.

One problem with fixing this is avoiding backwards-compatibility breakage,
but I think we could do that by saying that we only change behavior when
(a) json sub-value is an array and target Postgres type is an array type,
or (b) json sub-value is an object and target Postgres type is a composite
type.  In both cases, current code would fail outright, so there's no
existing use-cases to protect.  For other target Postgres types, we'd
continue to do it as today, so for example conversion to a JSON column
type would continue to work as it does now.

I'm not sure if anything besides json[b]_populate_record needs to change
similarly, but we ought to look at all those conversion functions with
the thought of nested containers in mind.

regards, tom lane

PS: I'm not volunteering to do the work here, but it seems like a good
change to make.




Historically, we had row_to_json before we had json_populate_record, and 
a complete round-trip wasn't part of the design anyway AFAIR. Handling 
nested composites and arrays would be a fairly large piece of work, and 
I'm not available to do it either.


A much simpler way to get some round-trip-ability would be to have a row 
to json converter that would stringify instead of decomposing nested 
complex objects, much as hstore does. That would be fairly simple to do, 
and the results should be able to be fed straight back into 
json(b)_populate_record. I'm not volunteering to do that either, but the 
work involved would probably be measured in hours rather than days or 
weeks. Of course, the json produced by this would be ugly and the 
stringified complex objects would be opaque to other json processors. 
OTOH, many round-trip applications don't need to process the serialized 
object on the way around. So this wouldn't be a cure-all but it might 
meet some needs.


Having json(b)_populate_record recursively process nested complex 
objects would be a large undertaking. One thing to consider is that json 
arrays are quite different from Postgres arrays: they are essentially 
one-dimensional heterogeneous lists, not multi-dimensional homogeneous 
matrices. So while a Postgres array that's been converted to a json 
array should in principle be convertible back, an arbitrary json array 
could easily not be.
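
In the meantime, a hedged workaround sketch for Dan's original example,
pulling the array column out by hand (json_typeof and
json_array_elements_text require 9.4+):

select (j->>'id')::int  as id,
       j->>'txt'        as txt,
       case when json_typeof(j->'txt_arr') = 'array'
            then array(select json_array_elements_text(j->'txt_arr'))
            else null
       end              as txt_arr,
       (j->>'f')::float as f
from (select to_json(t) as j from test t) r;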


cheers

andrew




Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread John R Pierce

On 2/24/2016 1:34 AM, Marco Lobbia wrote:
Once support for 14.04 LTS is terminated in 2019, there will no longer 
be support for PostgreSQL either. At that time I suppose I and 
other Ubuntu users will be left with the only choice of installing from 
scratch another version of Ubuntu and related software, including 
PostgreSQL.


by 2019, I would expect the server deployed today to be pretty obsolete.

it's pretty easy now to use streaming replication to create a live copy 
of your database, then promote and cut over to it with almost no downtime.


or, simply do an in-place upgrade of your operating system; as long as 
you use the same major version of postgres, you can run with the same 
data directory. You can upgrade postgres to a new major version via 
pg_upgrade...
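
For instance, a minimal pg_upgrade sketch (binary and data directory paths
are illustrative, following Debian/Ubuntu layout):

pg_upgrade -b /usr/lib/postgresql/9.3/bin -B /usr/lib/postgresql/9.5/bin \
           -d /var/lib/postgresql/9.3/main -D /var/lib/postgresql/9.5/main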



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Marco Lobbia
Once support for 14.04 LTS is terminated in 2019, there will no longer be
support for PostgreSQL either. At that time I suppose I and other
Ubuntu users will be left with the only choice of installing from scratch
another version of Ubuntu and related software, including PostgreSQL.


Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Marco Lobbia
Would the LTS version also be adequate for development?
I am configuring my Ubuntu machine for development purposes.
According to the PostgreSQL download page, "Ubuntu "snapshots" a specific
version of PostgreSQL that is then supported throughout the lifetime of
that Ubuntu version".


Re: [GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread John R Pierce

On 2/24/2016 12:34 AM, Marco Lobbia wrote:
Finally I am wondering whether the choice of installing PostgreSQL 
through the PostgreSQL apt repository would be safe enough or it would 
be preferable to install the LTS version of Ubuntu.



the LTS version of Ubuntu is preferable for server installs where you 
want long term stability.


the versions of postgres in the ubuntu standard apt repositories are 
perfectly adequate for production use, up to date, and well managed. 
There's a bunch of ubuntu/debian-specific command wrappers like 
pg_lsclusters, pg_createcluster, and so forth that come along with that 
install.




--
john r pierce, recycling bits in santa cruz





[GENERAL] Ubuntu and Rails postgresql setup

2016-02-24 Thread Marco Lobbia
I am on a Ubuntu 14.04 LTS machine.

I found useful information on how to configure PostgreSQL in Ubuntu for
Rails development at help.ubuntu.com, at Heroku and at digitalocean.com.

In the PostgreSQL documentation I could not find instructions related to
setup after installation with PPA and about configuration for Rails, but
only setup instructions after installation from source code.

Putting everything together, all the information seems to converge on the
necessity of creating a database superuser with a login name that matches
my Ubuntu user name with:

sudo -u postgres createuser --superuser $USER

When the time arrives to create a password for the new superuser with sudo -u
postgres psql, I am wondering if Rails can use PostgreSQL without setting
the password, if this password can and should be different from my Ubuntu
account password, and also whether database.yml could be a security concern
when pushing to Git repository hosting web sites and to Heroku.
In database.yml, in fact, exactly this kind of sensitive information is
recorded.

According to Heroku it is necessary "to export the DATABASE_URL environment
variable for your app to connect to it when running locally", with:

export DATABASE_URL=postgres:///$(whoami)

Is that really necessary? At help.ubuntu.com and digitalocean.com this
information is not reported.

Finally I am wondering whether the choice of installing PostgreSQL through
the PostgreSQL apt repository would be safe enough or it would be
preferable to install the LTS version of Ubuntu.