Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Rémi Cura
Hey,
the only straightforward workaround I know (which is pretty bad)
is to cast your record down to text.
Then you have an array of text, which is manageable.

For this you can either 'flatten' your record into a single text value,
or cast each part of your record to text and emulate an array of arrays
(you need to know the length of the inner array in your function, though).

I used this to emulate a 2D numpy array (N*3).

You'll need a custom aggregate, like this one:
https://github.com/Remi-C/_utilities/blob/master/postgres/array_of_array.sql
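
For illustration, a minimal sketch of the cast-to-text trick (the employee
type and the sample values are invented for the example):

CREATE TYPE employee AS (name text, salary integer, age integer);

CREATE FUNCTION testp1(es text[]) RETURNS void AS $$
# each element arrives as the text form of the record, e.g. '(John,5000,42)'
for e in es:
    plpy.notice(e)
$$ LANGUAGE plpythonu;

SELECT testp1(ARRAY[ROW('John',5000,42)::employee::text,
                    ROW('Ann',4000,35)::employee::text]);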

The other, saner solution is to pass information identifying the rows you
want, and retrieve the rows directly within the Python function.
For instance, here you would pass an array of ids of the employees you want
to work with.
This is saner, but as a design I don't really like having specific SQL
code inside a generic Python function.
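
A sketch of that second approach (table and column names are invented; the
ids are turned back into rows inside the function):

CREATE FUNCTION get_employees(ids integer[]) RETURNS void AS $$
# fetch the rows inside the function instead of passing records in
plan = plpy.prepare(
    "SELECT name, salary, age FROM employee WHERE id = ANY($1)",
    ["integer[]"])
for row in plpy.execute(plan, [ids]):
    plpy.notice("%s earns %s" % (row["name"], row["salary"]))
$$ LANGUAGE plpythonu;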

I agree it is cumbersome, and I also badly miss more powerful inputs for
python functions (after all, plpython can already return composite types,
which is awesome).
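
For example, returning a composite already works like this (employee type
as in the sketch above):

CREATE FUNCTION make_employee() RETURNS employee AS $$
# a dict keyed by column name is enough for a composite return value
return {"name": "John", "salary": 5000, "age": 42}
$$ LANGUAGE plpythonu;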


Cheers,
Rémi-C

2015-06-02 2:44 GMT+02:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 06/01/2015 07:42 AM, Filipe Pina wrote:

 Thanks for the reply anyway, it's a pity though, it'd be useful..

 Another bump I've found along the pl/python road: insert ROWTYPE in
 table..
 Maybe you have some hint on that? :)

 So, in PLPGSQL I can:

 DECLARE
my_var my_table;
 BEGIN
my_var.col1 := 'asd';
INSERT INTO my_table VALUES(my_table.*);
 END;

 How would I do something like that in pl/python?

 First, how to declare a ROW-TYPE variable, as they're all python mappings?

 Is my_var = { 'col1': 'asd' } enough? It would miss all the other
 columns...

 Second, how to insert it?

 plpy.prepare and .execute say they don't support composite types, so I
 cannot simply pass

 pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])

 Any workarounds for this? (meaning I wouldn't have to specify any
 columns in the insert statement)


 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html

 pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')


 Thanks

 On Fri, May 29, 2015 at 2:00, Peter Eisentraut pete...@gmx.net wrote:

 On 5/18/15 10:52 AM, Filipe Pina wrote:

 But one of the functions I need to create needs to accept an array
 of records.

 PL/Python doesn't support that. Some more code needs to be written to
 support that. You did everything correctly. I don't know of a good
 workaround.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com






Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Albe Laurenz
Andreas Ulbrich wrote:
 I'm writing a trigger handler for TRUNCATE. Is it possible to find out
 whether the TRUNCATE TABLE is called with CASCADE?

I don't think there is.

But you can find out the table where the trigger is defined and examine
whether any foreign key constraints refer to it.

If yes, then the trigger was called with CASCADE.
If no, it might have been called either way, but the effect would be the same.
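
A minimal sketch of that check inside a statement-level TRUNCATE trigger
(plpgsql; the function name and message are invented):

CREATE OR REPLACE FUNCTION on_truncate_check() RETURNS trigger AS $$
BEGIN
    -- TG_RELID is the table the trigger fired on; contype 'f' rows in
    -- pg_constraint with confrelid = TG_RELID are foreign keys pointing at it
    IF EXISTS (SELECT 1 FROM pg_constraint
               WHERE contype = 'f' AND confrelid = TG_RELID) THEN
        RAISE NOTICE 'TRUNCATE on % must have been CASCADE', TG_TABLE_NAME;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;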

Yours,
Laurenz Albe



Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Rémi Cura
Oups,

I forgot another strategy I used:
instead of having
testp2(es employee[])

you can use
testp2(names text[], salaries integer[], ages integer[])

This might be the solution with the least work, but it is absolutely
terrible practice, because it will be hard to change your record type later
(evolution is difficult), and having many columns will make you create
functions with many arguments, which is often a bad idea.
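
A sketch of what that looks like on the Python side (sample data invented):

CREATE FUNCTION testp2(names text[], salaries integer[], ages integer[])
RETURNS void AS $$
# the three parallel arrays implicitly form one record per index
for name, salary, age in zip(names, salaries, ages):
    plpy.notice("%s %s %s" % (name, salary, age))
$$ LANGUAGE plpythonu;

SELECT testp2(ARRAY['John','Ann'], ARRAY[5000,4000], ARRAY[42,35]);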

Cheers,
Rémi-C

2015-06-02 10:36 GMT+02:00 Rémi Cura remi.c...@gmail.com:

 Hey,
 the only straightforward workaround I know (which is pretty bad)
 is to cast your record down to text.
 Then you have an array of text, which is manageable.

 For this you can either 'flatten' your record into a single text value,
 or cast each part of your record to text and emulate an array of arrays
 (you need to know the length of the inner array in your function, though).

 I used this to emulate a 2D numpy array (N*3).

 You'll need a custom aggregate, like this one:
 https://github.com/Remi-C/_utilities/blob/master/postgres/array_of_array.sql

 The other, saner solution is to pass information identifying the rows you
 want, and retrieve the rows directly within the Python function.
 For instance, here you would pass an array of ids of the employees you want
 to work with.
 This is saner, but as a design I don't really like having specific SQL
 code inside a generic Python function.

 I agree it is cumbersome, and I also badly miss more powerful inputs for
 python functions (after all, plpython can already return composite types,
 which is awesome).


 Cheers,
 Rémi-C

 2015-06-02 2:44 GMT+02:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 06/01/2015 07:42 AM, Filipe Pina wrote:

 Thanks for the reply anyway, it's a pity though, it'd be useful..

 Another bump I've found along the pl/python road: insert ROWTYPE in
 table..
 Maybe you have some hint on that? :)

 So, in PLPGSQL I can:

 DECLARE
my_var my_table;
 BEGIN
my_var.col1 := 'asd';
INSERT INTO my_table VALUES(my_table.*);
 END;

 How would I do something like that in pl/python?

 First, how to declare a ROW-TYPE variable, as they're all python
 mappings?

 Is my_var = { 'col1': 'asd' } enough? It would miss all the other
 columns...

 Second, how to insert it?

 plpy.prepare and .execute say they don't support composite types, so I
 cannot simply pass

 pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])

 Any workarounds for this? (meaning I wouldn't have to specify any
 columns in the insert statement)


 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html

 pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')


 Thanks

 On Fri, May 29, 2015 at 2:00, Peter Eisentraut pete...@gmx.net wrote:

 On 5/18/15 10:52 AM, Filipe Pina wrote:

 But one of the functions I need to create needs to accept an array
 of records.

 PL/Python doesn't support that. Some more code needs to be written to
 support that. You did everything correctly. I don't know of a good
 workaround.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com








Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-06-02 Thread Rémi Cura
Hey,
python is installed from the official binary, 64 bit for windows,
in C:/Python32

I can't remember the argument, but it might be irrelevant.
The problem doesn't seem to be installing numpy; it works perfectly in the
regular terminal.

The problem seems to be that postgres can't use numpy correctly.


I found no version of the scipy installer for win 64 with python 3.2;
I tried several, but not having the exact python version always ends in
failure.
The binaries you linked to depend on the Intel Math Kernel Library,
which I don't have.


Cheers,
Rémi-C


2015-06-01 19:41 GMT+02:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 06/01/2015 09:09 AM, Rémi Cura wrote:

 Hey, thanks to help me with that.
 I started fresh to have a truly reproducible process,
 so you can have all information and rule out some error possibilities.

   - Uninstall all python.
   - Check that PythonPath doesn't exist anymore
   - check that python doesn't exist anymore

   - install python 3.2.5 64 bit from official python website into
 C/Python32

   - Reload configuration for server.

  - create plpython3u, create a python function, test it (show path)

 * It works, python path is
  'C:\\Windows\\system32\\python32.zip', 'C:\\Python32\\Lib',
 'C:\\Python32\\DLLs', 'E:\\9.3\\data', 'C:\\Program
 Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
 'C:\\Python32\\lib\\site-packages'

  - Download latest numpy from website.
  - On another PC


 So what is the Python setup on this machine?

 * Compile numpy with Visual 2008, 64 bit
 * Create a binary installer for windows (using python.exe setup.py
 with the proper argument)


 The argument would be?


   - On the server :
   - install numpy with the compiled installer.


 Best guess is that the numpy compilation you are doing on Machine A is not
 compatible with what you have installed on Machine B (the server).

 Have you looked at this:
 http://www.lfd.uci.edu/~gohlke/pythonlibs/

 or

 http://www.scipy.org/install.html



  - check that numpy is correctly installed in
 C:\Python32\Lib\site-packages
  - using an external terminal, check that numpy works (import numpy ->
 OK)

   - Now, define a plpython3u function containing import numpy

   - Run the function -- error is
ERREUR:  ImportError: DLL load failed: Le module spécifié est
 introuvable.,
 which roughly translates to
ERROR: ImportError : DLL load failed : the specified module couldn't
 be found.

   - Create a plpython3u function returning sys.path
 the path is C:\\Windows\\system32\\python32.zip', 'C:\\Python32\\Lib',
 'C:\\Python32\\DLLs', 'E:\\9.3\\data', 'C:\\Program
 Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
 'C:\\Python32\\lib\\site-packages
 numpy is in this path, in C:\\Python32\\lib\\site-packages
 All user of the computer have all rights on the
 C:\\Python32\\lib\\site-packages folder


   - execute `import imp; imp.find_package('numpy')` within the
 plpython3u function
 - returns None, 'C:\\Python32\\lib\\site-packages\\numpy', ('',
 '', 5)

  - create a helloworld module, put it next to numpy, try to call it
 - it gets called

 I really don't see what more I can do.

 Cheers,
 Rémi-C



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
Josh,

Via psql:
CREATE ROLE bob LOGIN
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

dbA, dbA_ro, dbB, and dbB_ro are roles.

I have not created any database yet or assigned permissions to the roles.

Steve Pribyl




From: pgsql-general-ow...@postgresql.org pgsql-general-ow...@postgresql.org 
on behalf of Joshua D. Drake j...@commandprompt.com
Sent: Tuesday, June 2, 2015 12:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 10:36 AM, Steve Pribyl wrote:

 Good Afternoon,

 Built a fresh 9.3 postgres server and added some users and noticed that any
 user can create tables in any database including the postgres database by 
 default.

 Have I missed some step in securing the default install?

How exactly did you add the users?

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.







Re: [GENERAL] postgres db permissions

2015-06-02 Thread Adrian Klaver

On 06/02/2015 10:50 AM, Steve Pribyl wrote:

Josh,

Via psql:
CREATE ROLE bob LOGIN
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

dbA, dbA_ro, dbB, and dbB_ro are roles.


The burning question would be, how were they created?



I have not created any database yet or assigned permissions to the roles.

Steve Pribyl




From: pgsql-general-ow...@postgresql.org pgsql-general-ow...@postgresql.org on 
behalf of Joshua D. Drake j...@commandprompt.com
Sent: Tuesday, June 2, 2015 12:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 10:36 AM, Steve Pribyl wrote:


Good Afternoon,

Built a fresh 9.3 postgres server and added some users and noticed that any
user can create tables in any database including the postgres database by 
default.

Have I missed some step in securing the default install?


How exactly did you add the users?

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
They all look like this.

CREATE ROLE dbA
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

Steve Pribyl


From: Adrian Klaver adrian.kla...@aklaver.com
Sent: Tuesday, June 2, 2015 1:06 PM
To: Steve Pribyl; Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 10:50 AM, Steve Pribyl wrote:
 Josh,

 Via psql:
 CREATE ROLE bob LOGIN
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
 GRANT dbA TO bob;
 GRANT dbA_ro TO bob;
 GRANT dbB TO bob;
 GRANT dbB_ro TO bob;

 dbA, dbA_ro, dbB, and dbB_ro are roles.

The burning question would be, how were they created?


 I have not created any database yet or assigned permissions to the roles.

 Steve Pribyl



 
 From: pgsql-general-ow...@postgresql.org pgsql-general-ow...@postgresql.org 
 on behalf of Joshua D. Drake j...@commandprompt.com
 Sent: Tuesday, June 2, 2015 12:44 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] postgres db permissions

 On 06/02/2015 10:36 AM, Steve Pribyl wrote:

 Good Afternoon,

 Built a fresh 9.3 postgres server and added some users and noticed that any
 user can create tables in any database including the postgres database by 
 default.

 Have I missed some step in securing the default install?

 How exactly did you add the users?

 JD



 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing "I'm offended" is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.


 




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: [GENERAL] postgres db permissions

2015-06-02 Thread Joshua D. Drake


On 06/02/2015 11:08 AM, Steve Pribyl wrote:


They all look like this.

CREATE ROLE dbA
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;


And how are you connecting to the database via psql?

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] postgres db permissions

2015-06-02 Thread Adrian Klaver

On 06/02/2015 11:04 AM, Steve Pribyl wrote:

None of the roles have permissions on the postgres database.  At this
point they don't have any permissions on any databases.


I have noted that  GRANT ALL ON SCHEMA public TO public is granted
on postgres.schemas.public.  I am looking at this in pgadmin so excuse
my nomenclature.


Is this what is allowing write access to the database?


Yes, though that should not be the default. See here:

http://www.postgresql.org/docs/9.4/interactive/sql-grant.html

PostgreSQL grants default privileges on some types of objects to PUBLIC. 
No privileges are granted to PUBLIC by default on tables, columns, 
schemas or tablespaces. For other types, the default privileges granted 
to PUBLIC are as follows: CONNECT and CREATE TEMP TABLE for databases; 
EXECUTE privilege for functions; and USAGE privilege for languages. The 
object owner can, of course, REVOKE both default and expressly granted 
privileges. (For maximum security, issue the REVOKE in the same 
transaction that creates the object; then there is no window in which 
another user can use the object.) Also, these initial default privilege 
settings can be changed using the ALTER DEFAULT PRIVILEGES command.
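
For instance, to lock down a freshly created database (the database name is
illustrative):

REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE mydb FROM PUBLIC;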


So how exactly was Postgres installed, and were there any scripts run
after the install?






Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646


*From:* Melvin Davidson melvin6...@gmail.com
*Sent:* Tuesday, June 2, 2015 12:55 PM
*To:* Steve Pribyl
*Cc:* Joshua D. Drake; pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] postgres db permissions
Your problem is probably the INHERIT and
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

options. If any of the dbA's have the permission to CREATE tables (and I
suspect they do), so will bob.


On Tue, Jun 2, 2015 at 1:50 PM, Steve Pribyl
steve.pri...@akunacapital.com wrote:

Josh,

Via psql:
CREATE ROLE bob LOGIN
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

dbA, dbA_ro, dbB, and dbB_ro are roles.

I have not created any database yet or assigned permissions to the
roles.

Steve Pribyl




From: pgsql-general-ow...@postgresql.org on behalf of Joshua D.
Drake j...@commandprompt.com
Sent: Tuesday, June 2, 2015 12:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 10:36 AM, Steve Pribyl wrote:
 
  Good Afternoon,
 
  Built a fresh 9.3 postgres server and added some users and
noticed that any user can create tables in any database including
the postgres database by default.
 
  Have I missed some step in securing the default install?

How exactly did you add the users?

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
Thanks for clearing that up.

It seems that any database that gets created has GRANT ALL ON SCHEMA public TO
public by default.  These are all clean installs. I have found this on
Ubuntu 9.3, and the Postgres 9.3 and 9.4 deb packages.

Default postgres from ubuntu is the version I am testing on.
It seems to be the default install, though we might be a patch or two behind.
$ dpkg -l | grep postgres
ii  postgresql-9.3  9.3.5-0ubuntu0.14.04.1  amd64  object-relational SQL database, version 9.3 server

I found this problem on an install from the postgres repo
$ dpkg -l postgresql-9.3
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version        Description
+++-==============-==============-============================================
ii  postgresql-9.3 9.3.0-2.pgdg12 object-relational SQL database, version 9.3


$ dpkg -l postgresql-9.4
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-===============================
ii  postgresql-9.4 9.4.0-1.pgdg amd64        object-relational SQL database, v


Steve Pribyl




From: Adrian Klaver adrian.kla...@aklaver.com
Sent: Tuesday, June 2, 2015 1:20 PM
To: Steve Pribyl; Melvin Davidson
Cc: Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 11:04 AM, Steve Pribyl wrote:
 None of the roles have permissions on the postgres database.  At this
 point they don't have any permissions on any databases.


 I have noted that  GRANT ALL ON SCHEMA public TO public is granted
 on postgres.schemas.public.  I am looking at this in pgadmin so excuse
 my nomenclature.


 Is this what is allowing write access to the database?

Yes, though that should not be the default. See here:

http://www.postgresql.org/docs/9.4/interactive/sql-grant.html

PostgreSQL grants default privileges on some types of objects to PUBLIC.
No privileges are granted to PUBLIC by default on tables, columns,
schemas or tablespaces. For other types, the default privileges granted
to PUBLIC are as follows: CONNECT and CREATE TEMP TABLE for databases;
EXECUTE privilege for functions; and USAGE privilege for languages. The
object owner can, of course, REVOKE both default and expressly granted
privileges. (For maximum security, issue the REVOKE in the same
transaction that creates the object; then there is no window in which
another user can use the object.) Also, these initial default privilege
settings can be changed using the ALTER DEFAULT PRIVILEGES command.

So how exactly was Postgres installed, and were there any scripts run
after the install?




 Steve Pribyl
 Sr. Systems Engineer
 steve.pri...@akunacapital.com
 Desk: 312-994-4646

 
 *From:* Melvin Davidson melvin6...@gmail.com
 *Sent:* Tuesday, June 2, 2015 12:55 PM
 *To:* Steve Pribyl
 *Cc:* Joshua D. Drake; pgsql-general@postgresql.org
 *Subject:* Re: [GENERAL] postgres db permissions
 Your problem is probably the INHERIT and
 GRANT dbA TO bob;
 GRANT dbA_ro TO bob;
 GRANT dbB TO bob;
 GRANT dbB_ro TO bob;

 options. If any of the dbA's have the permission to CREATE tables (and I
 suspect they do), so will bob.


  On Tue, Jun 2, 2015 at 1:50 PM, Steve Pribyl
  steve.pri...@akunacapital.com wrote:

 Josh,

 Via psql:
 CREATE ROLE bob LOGIN
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
 GRANT dbA TO bob;
 GRANT dbA_ro TO bob;
 GRANT dbB TO bob;
 GRANT dbB_ro TO bob;

 dbA, dbA_ro, dbB, and dbB_ro are roles.

 I have not created any database yet or assigned permissions to the
 roles.

 Steve Pribyl



 
  From: pgsql-general-ow...@postgresql.org on behalf of Joshua D.
  Drake j...@commandprompt.com
  Sent: Tuesday, June 2, 2015 12:44 PM
  To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] postgres db permissions

 On 06/02/2015 10:36 AM, Steve Pribyl wrote:
  
   Good Afternoon,
  
    Built a fresh 9.3 postgres server and added some users and
 noticed that any user can create tables in any database including
 the postgres database by default.
  
   Have I missed some step in securing the default install?

 How exactly did you 

Re: [GENERAL] postgres db permissions

2015-06-02 Thread Melvin Davidson
As Tom advised, it's called a public schema for a reason. It means the
general public (any user) has access to it and can create objects/tables in
it.


On Tue, Jun 2, 2015 at 2:58 PM, Joshua D. Drake j...@commandprompt.com
wrote:


 On 06/02/2015 11:46 AM, Tom Lane wrote:

 Adrian Klaver adrian.kla...@aklaver.com writes:

 On 06/02/2015 11:04 AM, Steve Pribyl wrote:

 I have noted that  GRANT ALL ON SCHEMA public TO public is granted
 on postgres.schemas.public.  I am looking at this in pgadmin so excuse
 my nomenclature.


  Is this what is allowing write access to the database?


  Yes, though that should not be the default.


 Huh?  Of course it's the default.  I'm not really sure why the OP is
 surprised at this.  A database that won't let you create any tables
 is not terribly useful.


 The owner (or super user) should always have access, anybody with access
 should not. This argument has actually come up before and you held a
 similar view. This should not be possible:

  postgres@sqitch:/# psql -U postgres
  psql (9.2.11)
  Type "help" for help.

  postgres=# create user foo;
  CREATE ROLE
  postgres=# \q

  root@sqitch:/# psql -U foo postgres
  psql (9.2.11)
  Type "help" for help.
  postgres=> create table bar (id text);
  CREATE TABLE
  postgres=>

 We can adjust this capability with pg_hba.conf but that is external to
 this behavior.

 Sincerely,

 JD



 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing "I'm offended" is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Melvin Davidson
Your problem is probably the INHERIT and
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

options. If any of the dbA's have the permission to CREATE tables (and I
suspect they do), so will bob.


On Tue, Jun 2, 2015 at 1:50 PM, Steve Pribyl steve.pri...@akunacapital.com
wrote:

 Josh,

 Via psql:
 CREATE ROLE bob LOGIN
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
 GRANT dbA TO bob;
 GRANT dbA_ro TO bob;
 GRANT dbB TO bob;
 GRANT dbB_ro TO bob;

 dbA, dbA_ro, dbB, and dbB_ro are roles.

 I have not created any database yet or assigned permissions to the roles.

 Steve Pribyl



 
 From: pgsql-general-ow...@postgresql.org 
 pgsql-general-ow...@postgresql.org on behalf of Joshua D. Drake 
 j...@commandprompt.com
 Sent: Tuesday, June 2, 2015 12:44 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] postgres db permissions

 On 06/02/2015 10:36 AM, Steve Pribyl wrote:
 
  Good Afternoon,
 
   Built a fresh 9.3 postgres server and added some users and noticed that
 any user can create tables in any database including the postgres database
 by default.
 
  Have I missed some step in securing the default install?

 How exactly did you add the users?

 JD



 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing "I'm offended" is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.


 






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
This only seems to show up in pgAdmin III; I am unable to see this grant using
\dn+ (but I am a bit of a novice).

postgres=# \dn+
                          List of schemas
  Name  |  Owner   |  Access privileges   |      Description
--------+----------+----------------------+------------------------
 public | postgres | postgres=UC/postgres+| standard public schema
        |          | =UC/postgres         |


It would seem to me that granting public access to the schema by default is
bad.  Granting access to just the required users is good.

Good:
CREATE SCHEMA public
  AUTHORIZATION postgres;

GRANT ALL ON SCHEMA public TO postgres;
COMMENT ON SCHEMA public

Bad and happens to be the default:
CREATE SCHEMA public
  AUTHORIZATION postgres;

GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO public;
COMMENT ON SCHEMA public

Steve Pribyl



From: pgsql-general-ow...@postgresql.org pgsql-general-ow...@postgresql.org 
on behalf of Steve Pribyl steve.pri...@akunacapital.com
Sent: Tuesday, June 2, 2015 1:45 PM
To: Adrian Klaver; Melvin Davidson
Cc: Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

Thanks for clearing that up.

It seems that any database that gets created has GRANT ALL ON SCHEMA public TO
public by default.  These are all clean installs. I have found this on
Ubuntu 9.3, and the Postgres 9.3 and 9.4 deb packages.

Default postgres from ubuntu is the version I am testing on.
It seems to be the default install, though we might be a patch or two behind.
$ dpkg -l | grep postgres
ii  postgresql-9.3  9.3.5-0ubuntu0.14.04.1  amd64  object-relational SQL database, version 9.3 server

I found this problem on an install from the postgres repo
$ dpkg -l postgresql-9.3
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version        Description
+++-==============-==============-============================================
ii  postgresql-9.3 9.3.0-2.pgdg12 object-relational SQL database, version 9.3


$ dpkg -l postgresql-9.4
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-===============================
ii  postgresql-9.4 9.4.0-1.pgdg amd64        object-relational SQL database, v


Steve Pribyl








Re: [GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
None of the roles have permissions on the postgres database.  At this point 
they don't have any permissions on any databases.


I have noted that  GRANT ALL ON SCHEMA public TO public is granted on 
postgres.schemas.public.  I am looking at this in pgadmin so excuse my 
nomenclature.


Is this what is allowing write access to the database?


Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646


From: Melvin Davidson melvin6...@gmail.com
Sent: Tuesday, June 2, 2015 12:55 PM
To: Steve Pribyl
Cc: Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

Your problem is probably the INHERIT and
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

options. If any of the dbA's have the permission to CREATE tables (and I 
suspect they do), so will bob.


On Tue, Jun 2, 2015 at 1:50 PM, Steve Pribyl
steve.pri...@akunacapital.com wrote:
Josh,

Via psql:
CREATE ROLE bob LOGIN
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT dbA TO bob;
GRANT dbA_ro TO bob;
GRANT dbB TO bob;
GRANT dbB_ro TO bob;

dbA, dbA_ro, dbB, and dbB_ro are roles.

I have not created any database yet or assigned permissions to the roles.

Steve Pribyl




From: pgsql-general-ow...@postgresql.org on behalf of Joshua D. Drake
j...@commandprompt.com
Sent: Tuesday, June 2, 2015 12:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions

On 06/02/2015 10:36 AM, Steve Pribyl wrote:

 Good Afternoon,

 Built a fresh 9.3 postgres server and added some users and noticed that any
 user can create tables in any database including the postgres database by 
 default.

 Have I missed some step in securing the default install?

How exactly did you add the users?

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.








--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you. 


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Joshua D. Drake


On 06/02/2015 11:46 AM, Tom Lane wrote:

Adrian Klaver adrian.kla...@aklaver.com writes:

On 06/02/2015 11:04 AM, Steve Pribyl wrote:

I have noted that  GRANT ALL ON SCHEMA public TO public is granted
on postgres.schemas.public.  I am looking at this in pgadmin so excuse
my nomenclature.



Is this what is allowing write access to the database?



Yes, though that should not be the default.


Huh?  Of course it's the default.  I'm not really sure why the OP is
surprised at this.  A database that won't let you create any tables
is not terribly useful.


The owner (or super user) should always have access, anybody with access 
should not. This argument has actually come up before and you held a 
similar view. This should not be possible:


postgres@sqitch:/# psql -U postgres
psql (9.2.11)
Type "help" for help.

postgres=# create user foo;
CREATE ROLE
postgres=# \q

root@sqitch:/# psql -U foo postgres
psql (9.2.11)
Type "help" for help.
postgres=> create table bar (id text);
CREATE TABLE
postgres=>

We can adjust this capability with pg_hba.conf but that is external to 
this behavior.


Sincerely,

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Andreas Ulbrich

On 02.06.2015 16:20, Melvin Davidson wrote:
You can use the following to list the triggers and see what functions 
they call. Then you can check pg_proc to see how TRUNCATE is used in 
prosrc.



SELECT c.relname,
       t.tgname,
       p.proname        AS function_called,
       t.tgconstraint   AS is_constraint,
       CASE WHEN t.tgconstrrelid <> 0
            THEN (SELECT relname
                    FROM pg_class
                   WHERE oid = t.tgconstrrelid)
            ELSE ''
       END AS constr_tbl,
       t.tgenabled
  FROM pg_trigger t
 INNER JOIN pg_proc p  ON (p.oid = t.tgfoid)
 INNER JOIN pg_class c ON (c.oid = t.tgrelid)
 WHERE tgname NOT LIKE 'pg_%'
   AND tgname NOT LIKE 'RI_%'  -- comment out to see constraints
-- AND t.tgenabled = FALSE
 ORDER BY 1;


On Tue, Jun 2, 2015 at 5:31 AM, Albe Laurenz laurenz.a...@wien.gv.at
wrote:


Andreas Ulbrich wrote:
 I'm writing a trigger handler for TRUNCATE. Is it possible to find out
 whether the TRUNCATE TABLE is called with CASCADE?

I don't think there is.

But you can find out the table where the trigger is defined and examine
whether any foreign key constraints refer to it.

If yes, then the trigger was called with CASCADE.
If no, it might have been called either way, but the effect would
be the same.

Yours,
Laurenz Albe




I think I must explain the problem more deeply:

I have two (or more) tables
CREATE TABLE a (id ... UNIQUE -- maybe the PRIMARY KEY,...);
CREATE TABLE b (...) INHERITS (a);

But the id has to be unique across the inheritance hierarchy. So one
solution of the problem is:

CREATE TABLE key_table (id ... UNIQUE, table REGCLASS);
A trigger on every INSERT/UPDATE/DELETE in tables a, b, ... updates the
key_table.

This works.

Now I have a table referencing the id of table a*. This is not
possible directly, but a reference to key_table(id) works fine.
CREATE TABLE r (..., a_id /* REFERENCES a*(id) */ REFERENCES
key_table(id),...);


And now the problem: Can I support TRUNCATE TABLE?
DELETE is not a problem: for DELETE FROM a the trigger deletes the entry
in the key_table, and if the referential action on delete is CASCADE, the
entries in r will be deleted.


But TRUNCATE TABLE a! In a TRUNCATE trigger I can delete the entries in
the key_table WHERE table = a (O.K., the performance) -- it is actually
not a TRUNCATE TABLE but a TRUNCATE PARTITION.
And if I did not specify ONLY, there is also a TRUNCATE TABLE b and the
trigger is fired too.


But what about table r? If I do the delete in the key_table, the ON
DELETE action will be used. But there is no truncate action; cascaded
truncation is controlled by the executed TRUNCATE. So I must delete the
entries in r if there is a CASCADE in the TRUNCATE, or raise an exception
if the TRUNCATE is RESTRICTED.


Now the question: how can the trigger function for TRUNCATE find out
whether there is a CASCADE or not?


regards, Andreas


--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.




Re: [GENERAL] postgres db permissions

2015-06-02 Thread Melvin Davidson
Yes. It is NEVER a good idea to use GRANT ALL on objects for users. Some
people use that as a short cut for allowing access to schemas and tables,
but in essence, it allows the users to do much more, and that is BAD!

http://www.postgresql.org/docs/9.3/interactive/sql-grant.html

On Tue, Jun 2, 2015 at 2:08 PM, Steve Pribyl steve.pri...@akunacapital.com
wrote:

 They all look like this.

 CREATE ROLE dbA
   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

 Steve Pribyl

 
 From: Adrian Klaver adrian.kla...@aklaver.com
 Sent: Tuesday, June 2, 2015 1:06 PM
 To: Steve Pribyl; Joshua D. Drake; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] postgres db permissions

 On 06/02/2015 10:50 AM, Steve Pribyl wrote:
  Josh,
 
  Via psql:
  CREATE ROLE bob LOGIN
 NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
  GRANT dbA TO bob;
  GRANT dbA_ro TO bob;
  GRANT dbB TO bob;
  GRANT dbB_ro TO bob;
 
  dbA, dbA_ro, dbB, and dbB_ro are roles.

  The burning question would be, how were they created?

 
  I have not created any database yet or assigned permissions to the roles.
 
  Steve Pribyl
 
 
 
  
  From: pgsql-general-ow...@postgresql.org 
 pgsql-general-ow...@postgresql.org on behalf of Joshua D. Drake 
 j...@commandprompt.com
  Sent: Tuesday, June 2, 2015 12:44 PM
  To: pgsql-general@postgresql.org
  Subject: Re: [GENERAL] postgres db permissions
 
  On 06/02/2015 10:36 AM, Steve Pribyl wrote:
 
  Good Afternoon,
 
   Built a fresh 9.3 postgres server and added some users and noticed
 that any user can create tables in any database including the postgres
 database by default.
 
  Have I missed some step in securing the default install?
 
  How exactly did you add the users?
 
  JD
 
 
 
  --
  Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
  PostgreSQL Centered full stack support, consulting and development.
  Announcing I'm offended is basically telling the world you can't
  control your own emotions, so everyone else should do it for you.
 
 
  
 
 


 --
 Adrian Klaver
 adrian.kla...@aklaver.com
 






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Adrian Klaver

On 06/02/2015 11:04 AM, Steve Pribyl wrote:

None of the roles have permissions on the postgres database.  At this
point they don't have any permissions on any databases.


I have noted that  GRANT ALL ON SCHEMA public TO public is granted
on postgres.schemas.public.  I am looking at this in pgadmin so excuse
my nomenclature.


Is this what is allowing write access to the database?


Should have added to previous post-

Log into the postgres database using psql and do:

\dn+




Steve Pribyl
Sr. Systems Engineer
 steve.pri...@akunacapital.com
Desk: 312-994-4646



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres db permissions

2015-06-02 Thread Tom Lane
Adrian Klaver adrian.kla...@aklaver.com writes:
 On 06/02/2015 11:04 AM, Steve Pribyl wrote:
 I have noted that  GRANT ALL ON SCHEMA public TO public is granted
 on postgres.schemas.public.  I am looking at this in pgadmin so excuse
 my nomenclature.

 Is this what is allowing write access to the database?

 Yes, though that should not be the default.

Huh?  Of course it's the default.  I'm not really sure why the OP is
surprised at this.  A database that won't let you create any tables
is not terribly useful.

If you don't like this, you can get rid of the database's public schema
and/or restrict who has CREATE permissions on it.  But I can't see us
shipping a default configuration in which only superusers can create
tables.  That would just encourage people to operate as superusers, which
overall would be much less secure.
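
For example (a sketch; the role name is illustrative, and you would do one
or the other):

DROP SCHEMA public;                           -- get rid of it entirely, or
REVOKE CREATE ON SCHEMA public FROM PUBLIC;   -- restrict CREATE
GRANT CREATE ON SCHEMA public TO app_owner;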

regards, tom lane




Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Filipe Pina
Basically, in a (maybe over-)simplified example:

CREATE OR REPLACE FUNCTION add_transaction(transaction core_transaction)
RETURNS integer AS $$
DECLARE
    transaction2 core_transaction;
BEGIN
    transaction.field1 := 'lapse';
    transaction2.field2 := transaction.field2;
    transaction2.field1 := 'lapse2';
    INSERT INTO core_transaction VALUES(transaction.*);
    INSERT INTO core_transaction VALUES(transaction2.*);
    RETURN 1;
END
$$
LANGUAGE plpgsql;

So, I wanted to do the same in plpython…

CREATE OR REPLACE FUNCTION add_transaction(transaction core_transaction)
RETURNS integer AS $$
transaction['field1'] = 'lapse'
transaction2 = { 'field1': 'lapse2', 'field2': transaction['field1'] }

# not this, but something that would work without enumerating all
# columns/fields
pl = plpy.execute('INSERT INTO core_transaction VALUES(transaction.*)')
pl = plpy.execute('INSERT INTO core_transaction VALUES(transaction2.*)')
return 1
$$
LANGUAGE plpythonu;
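
One workaround sketch (untested; the function name is invented, and it
assumes every value of the composite round-trips through its text form):

CREATE OR REPLACE FUNCTION add_transaction_generic(t core_transaction)
RETURNS integer AS $$
# build the column list from the dict keys instead of enumerating them
cols = ", ".join(plpy.quote_ident(k) for k in t.keys())
vals = ", ".join(plpy.quote_nullable(v if v is None else str(v))
                 for v in t.values())
plpy.execute("INSERT INTO core_transaction (%s) VALUES (%s)" % (cols, vals))
return 1
$$ LANGUAGE plpythonu;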



 On 02/06/2015, at 15:51, Adrian Klaver adrian.kla...@aklaver.com wrote:
 
 On 06/02/2015 03:10 AM, Filipe Pina wrote:
 HI Adrian,
 
 I had a typo in the email:
 
 INSERT INTO my_table VALUES(my_table.*);
 
 was actually
 
 INSERT INTO my_table VALUES(my_var.*);
 
 Aah, that is different:)
 
 
 So I meant to insert the variable I had in memory (dict representing a row), 
 not the rows from the table..
 
 So where is the variable getting its data?
 
 Or can we see a simple example of what you are trying to do?
 
 
 On 02/06/2015, at 01:44, Adrian Klaver adrian.kla...@aklaver.com wrote:
 
 On 06/01/2015 07:42 AM, Filipe Pina wrote:
 Thanks for the reply anyway, it's a pity though, it'd be useful..
 
 Another bump I've found along the pl/python road: insert ROWTYPE in table..
 Maybe you have some hint on that? :)
 
 So, in PLPGSQL I can:
 
 DECLARE
   my_var my_table;
 BEGIN
   my_var.col1 := 'asd';
   INSERT INTO my_table VALUES(my_table.*);
 END;
 
 How would I do something like that in pl/python?
 
 First, how to declare a ROW-TYPE variable, as they're all python mappings?
 
  my_var = { 'col1': 'asd' } enough? It would miss all the other columns...
 
 Second, how to insert it?
 
 plpy.prepare and .execute say they don't support composite types, so I
 cannot simply pass
 
 pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])
 
 Any workarounds for this? (meaning I wouldn't have to specify any
 columns in the insert statement)
 
 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html
 
 pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')
 
 
 Thanks
 
 On Sex, Mai 29, 2015 at 2:00 , Peter Eisentraut pete...@gmx.net wrote:
 On 5/18/15 10:52 AM, Filipe Pina wrote:
 
But one of the functions I need to create needs to accept an array
of records.
 
 PL/Python doesn't support that. Some more code needs to be written to
 support that. You did everything correctly. I don't know of a good
 workaround.
 
 
 --
 Adrian Klaver
 adrian.kla...@aklaver.com
 
 
 
 
 -- 
 Adrian Klaver
 adrian.kla...@aklaver.com


Re: [GENERAL] Planner cost adjustments

2015-06-02 Thread Daniel Begin
Here is a follow-up on the step-by-step procedure proposed by PT

#1 - setup postgresql planner's cost estimate settings for my hardware. 

--
Current parameters values described in section 18.7.2 haven't been changed
except for the effective_cache_size
seq_page_cost = 1
random_page_cost = 4
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025 
effective_cache_size = 10GB

I did a bunch of tests on frequently used queries to see how well they
perform - using SET enable_seqscan = ON/OFF. 
As described earlier in this thread, the planner uses a Seq Scan on tables
even though using an Index Scan is in this case 5 times faster!
Here are the logs of EXPLAIN ANALYSE on a query...
 
osmdump=# SET enable_seqscan = ON;
osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);

-
 Hash Semi Join  (cost=21.50..819505.27 rows=726722 width=24) (actual
time=1574.914..7444.938 rows=338568 loops=1)
   Hash Cond: (changesets.user_id = users.id)
   ->  Seq Scan on changesets  (cost=0.00..745407.22 rows=25139722 width=24)
(actual time=0.002..4724.578 rows=25133929 loops=1)
   ->  Hash  (cost=14.00..14.00 rows=600 width=8) (actual time=0.165..0.165
rows=600 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 24kB
         ->  Seq Scan on users  (cost=0.00..14.00 rows=600 width=8) (actual
time=0.003..0.073 rows=600 loops=1)
 Total runtime: 7658.715 ms
(7 rows)

osmdump=# SET enable_seqscan = OFF;
osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);


 Nested Loop  (cost=115.94..10001072613.45 rows=726722 width=24)
(actual time=0.268..1490.515 rows=338568 loops=1)
   ->  HashAggregate  (cost=115.50..121.50 rows=600 width=8)
(actual time=0.207..0.531 rows=600 loops=1)
         ->  Seq Scan on users  (cost=100.00..114.00
rows=600 width=8) (actual time=0.003..0.037 rows=600 loops=1)
   ->  Index Scan using changesets_useridndx on changesets
(cost=0.44..1775.54 rows=1211 width=24) (actual time=0.038..2.357 rows=564
loops=600)
         Index Cond: (user_id = users.id)
 Total runtime: 1715.517 ms
(6 rows)
 

#2 - Run ANALYZE DATABASE and look at performance/planning improvement.

--
I ran ANALYZE DATABASE then reran the query. It did not produce any
significant improvement according to the EXPLAIN ANALYSE below...

osmdump=# SET enable_seqscan = ON;
osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);

-
 Hash Semi Join  (cost=21.50..819511.42 rows=729133 width=24) (actual
time=1538.100..7307.743 rows=338568 loops=1)
   Hash Cond: (changesets.user_id = users.id)
   ->  Seq Scan on changesets  (cost=0.00..745390.84 rows=25138084 width=24)
(actual time=0.027..4620.691 rows=25133929 loops=1)
   ->  Hash  (cost=14.00..14.00 rows=600 width=8) (actual time=0.300..0.300
rows=600 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 24kB
         ->  Seq Scan on users  (cost=0.00..14.00 rows=600 width=8) (actual
time=0.022..0.187 rows=600 loops=1)
 Total runtime: 7519.254 ms
(7 rows)

osmdump=# SET enable_seqscan = OFF;
osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);


 Nested Loop  (cost=115.94..10001090810.49 rows=729133 width=24)
(actual time=0.268..1466.248 rows=338568 loops=1)
   ->  HashAggregate  (cost=115.50..121.50 rows=600 width=8)
(actual time=0.205..0.530 rows=600 loops=1)
         ->  Seq Scan on users  (cost=100.00..114.00
rows=600 width=8) (actual time=0.003..0.035 rows=600 loops=1)
   ->  Index Scan using changesets_useridndx on changesets
(cost=0.44..1805.83 rows=1215 width=24) (actual time=0.036..2.314 rows=564
loops=600)
         Index Cond: (user_id = users.id)
 Total runtime: 1677.447 ms
(6 rows)

#3 - Run EXPLAIN ANALYZE and look for discrepancies between the estimated
and actual times

--
Looking at the above results, there are obvious discrepancies between
expected/actual rows and time!
I dug a 

Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Melvin Davidson
Your problem is in your design.

If you do it like this:

CREATE TABLE A
(
p_col serial PRIMARY KEY,
acol  integer
);

CREATE TABLE B() INHERITS (A);

INSERT INTO A(acol) VALUES (1);
INSERT INTO B(acol) VALUES (2);

SELECT * FROM A;
SELECT * FROM B;

Then the sequence (p_col) will be UNIQUE across all tables and can be
referenced.
No need for a key table.

On Tue, Jun 2, 2015 at 3:45 PM, Andreas Ulbrich 
andreas.ulbr...@matheversum.de wrote:

  On 02.06.2015 16:20, Melvin Davidson wrote:

 You can use the following to list the triggers and see what functions they
 call. Then you can check pg_proc to see how TRUNCATE is used in prosrc.


 SELECT c.relname,
        t.tgname,
        p.proname        AS function_called,
        t.tgconstraint   AS is_constraint,
        CASE WHEN t.tgconstrrelid <> 0
             THEN (SELECT relname
                     FROM pg_class
                    WHERE oid = t.tgconstrrelid)
             ELSE ''
        END AS constr_tbl,
        t.tgenabled
   FROM pg_trigger t
   INNER JOIN pg_proc p  ON (p.oid = t.tgfoid)
   INNER JOIN pg_class c ON (c.oid = t.tgrelid)
   WHERE tgname NOT LIKE 'pg_%'
     AND tgname NOT LIKE 'RI_%'  --  comment out to see constraints
     --AND t.tgenabled = FALSE
   ORDER BY 1;


 On Tue, Jun 2, 2015 at 5:31 AM, Albe Laurenz laurenz.a...@wien.gv.at
 wrote:

 Andreas Ulbrich wrote:
  I'm in a handle for a trigger for TRUNCATE. Is it possible to find out
  whether the TRUNCATE TABLE is called with CASCADE?

 I don't think there is.

 But you can find out the table where the trigger is defined and examine
 if any foreign key constraints are referring to it.

 If yes, then the trigger was called with CASCADE.
 If no, it might have been called either way, but the effect would be the
 same.

 Yours,
 Laurenz Albe

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


  I think I must explain the problem deeper:

 I have two (or more) tables
 CREATE TABLE a (id ... UNIQUE -- maybe the PRIMARY KEY,...);
 CREATE TABLE b (...) INHERITS (a);

 But the id has to be unique over the inheritance. So one solution of the
 problem is:
 CREATE TABLE key_table (id ... UNIQUE, table REGCLASS);
 By trigger, every INSERT/UPDATE/DELETE in table a,b,... changes the
 key_table.
 This works.

 Now I have a table reference to the id of table a*. This is not possible,
 but a reference to key_table(id) works fine.
 CREATE TABLE r (..., a_id /* REFERENCES a*(id) */ REFERENCES
 key_table(id),..);

 And now the problem: Can I support TRUNCATE TABLE?
 DELETE is not a problem: for DELETE FROM a the trigger deletes the entry
 in the key_table and if the reference action on delete is CASCADE, the
 entries in r will be deleted.

 But TRUNCATE TABLE a! In a TRUNCATE TRIGGER I can delete the entries in
 the key_table WHERE table = a (O.K., the performance) -- it is actually not
 a TRUNCATE TABLE but a TRUNCATE PARTITION.
 And if I did not specify ONLY, there is also a TRUNCATE TABLE b and the
 trigger is fired too.

 But what about table r? If I do the delete in the key_table, the delete
 action will be used. But there is no truncate action; cascaded
 truncation is controlled by executing TRUNCATE. And so, I must delete the
 entries in r if there is a CASCADE in the TRUNCATE, or raise an exception if
 the TRUNCATE is RESTRICTED.

 Now the question: how do I find out, in the trigger function for truncate,
 whether there is a CASCADE or not?

 regards, Andreas


 --
  *Melvin Davidson*
 I reserve the right to fantasize.  Whether or not you
 wish to share my fantasy is entirely up to you.





-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-01 14:22:32 -0400, Robert Haas wrote:

 commit d33b4eb0167f465edb00bd6c0e1bcaa67dd69fe9
 Author: Robert Haas rh...@postgresql.org
 Date:   Fri May 29 14:35:53 2015 -0400
 
 foo

Hehe!

 diff --git a/src/backend/access/transam/multixact.c 
 b/src/backend/access/transam/multixact.c
 index 9568ff1..aca829d 100644
 --- a/src/backend/access/transam/multixact.c
 +++ b/src/backend/access/transam/multixact.c
 @@ -199,8 +199,9 @@ typedef struct MultiXactStateData
   MultiXactOffset nextOffset;
  
   /*
 -  * Oldest multixact that is still on disk.  Anything older than this
 -  * should not be consulted.  These values are updated by vacuum.
 +  * Oldest multixact that may still be referenced from a relation.
 +  * Anything older than this should not be consulted.  These values are
 +  * updated by vacuum.
*/
   MultiXactId oldestMultiXactId;
   Oid oldestMultiXactDB;
 @@ -213,6 +214,18 @@ typedef struct MultiXactStateData
*/
   MultiXactId lastCheckpointedOldest;
  
 + /*
 +  * This is the oldest file that actually exist on the disk.  This value
 +  * is initialized by scanning pg_multixact/offsets, and subsequently
 +  * updated each time we complete a truncation.  We need a flag to
 +  * indicate whether this has been initialized yet.
 +  */
 + MultiXactId oldestMultiXactOnDisk;
 + boololdestMultiXactOnDiskValid;
 +
 + /* Has TrimMultiXact been called yet? */
 + booldidTrimMultiXact;

I'm not really convinced tying things closer to having done trimming is
easier to understand than tying things to recovery having finished.

E.g.
if (did_trim)
oldestOffset = GetOldestReferencedOffset(oldest_datminmxid);
imo is harder to understand than if (!InRecovery).

Maybe we could just name it finishedStartup and rename the functions 
accordingly?

 @@ -1956,12 +1971,6 @@ StartupMultiXact(void)
*/
   pageno = MXOffsetToMemberPage(offset);
  MultiXactMemberCtl->shared->latest_page_number = pageno;
 -
 - /*
 -  * compute the oldest member we need to keep around to avoid old member
 -  * data overrun.
 -  */
 - DetermineSafeOldestOffset(MultiXactState-oldestMultiXactId);
  }

Maybe it's worthwhile to add a 'NB: At this stage the data directory is
not yet necessarily consistent' to StartupMultiXact's comments, to avoid
reintroducing problems like this?

   /*
 +  * We can read this without a lock, because it only changes when nothing
 +  * else is running.
 +  */
 + did_trim = MultiXactState->didTrimMultiXact;

Err, Hot Standby? It might be ok to not lock, but the comment is
definitely wrong. I'm inclined to simply use locking, this doesn't look
sufficiently critical performancewise.

 +static MultiXactOffset
 +GetOldestReferencedOffset(MultiXactId oldestMXact)
 +{
 + MultiXactId earliest;
 + MultiXactOffset oldestOffset;
 +
 + /*
 +  * Because of bugs in early 9.3.X and 9.4.X releases (see comments in
 +  * TrimMultiXact for details), oldest_datminmxid might point to a
 +  * nonexistent multixact.  If so, use the oldest one that actually 
 +  * exists.  Anything before this can't be successfully used anyway.
 +  */
 + earliest = GetOldestMultiXactOnDisk();
 + if (MultiXactIdPrecedes(oldestMXact, earliest))
 + oldestMXact = earliest;

Hm. If GetOldestMultiXactOnDisk() gets the starting point by scanning
the disk it'll always get one at a segment boundary, right? I'm not sure
that's actually ok; because the value at the beginning of the segment
can very well end up being a 0, as MaybeExtendOffsetSlru() will have
filled the page with zeros.

I think that should be harmless, the worst that can happen is that
oldestOffset erroneously is 0, which should be correct, even though
possibly overly conservative, in these cases.

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 4:19 PM, Andres Freund and...@anarazel.de wrote:
 I'm not really convinced tying things closer to having done trimming is
 easier to understand than tying things to recovery having finished.

 E.g.
 if (did_trim)
 oldestOffset = GetOldestReferencedOffset(oldest_datminmxid);
 imo is harder to understand than if (!InRecovery).

 Maybe we could just name it finishedStartup and rename the functions 
 accordingly?

Basing that particular call site on InRecovery doesn't work, because
InRecovery isn't set early enough.  But I'm fine to rename it to
whatever.

 Maybe it's worthwhile to add a 'NB: At this stage the data directory is
 not yet necessarily consistent' StartupMultiXact's comments, to avoid
 reintroducing problems like this?

Sure.

   /*
 +  * We can read this without a lock, because it only changes when 
 nothing
 +  * else is running.
 +  */
 + did_trim = MultiXactState->didTrimMultiXact;

 Err, Hot Standby? It might be ok to not lock, but the comment is
 definitely wrong. I'm inclined to simply use locking, this doesn't look
 sufficiently critical performancewise.

/me nods.  Good point.

 Hm. If GetOldestMultiXactOnDisk() gets the starting point by scanning
 the disk it'll always get one at a segment boundary, right? I'm not sure
 that's actually ok; because the value at the beginning of the segment
 can very well end up being a 0, as MaybeExtendOffsetSlru() will have
 filled the page with zeros.

 I think that should be harmless, the worst that can happen is that
 oldestOffset erroneously is 0, which should be correct, even though
 possibly overly conservative, in these cases.

Uh oh.  That seems like a real bad problem for this approach.  What
keeps that from being the opposite of too conservative?  There's no
safe value in a circular numbering space.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ALTER EVENT TRIGGER as non superuser

2015-06-02 Thread Andreas Ulbrich

I'm afraid there is a bug in ALTER EVENT TRIGGER:

I run the following script:
\set  ON_ERROR_ROLLBACK ON

BEGIN;
  CREATE FUNCTION event_trigger_function() RETURNS event_trigger AS $$
  BEGIN
RAISE NOTICE 'event_trigger_function';
  END$$ LANGUAGE plPgSQL;

  CREATE EVENT TRIGGER a_ddl_trigger ON ddl_command_end
 EXECUTE PROCEDURE event_trigger_function();
  SET LOCAL ROLE super;
CREATE EVENT TRIGGER a_ddl_trigger ON ddl_command_end
   EXECUTE PROCEDURE event_trigger_function();
  RESET ROLE;
  CREATE EVENT TRIGGER other_ddl_trigger ON ddl_command_end
 EXECUTE PROCEDURE event_trigger_function();

-- disable trigger and change the function again, index is corrupt; reindex
  \dy+ a_ddl_trigger
  ALTER EVENT TRIGGER a_ddl_trigger DISABLE;
  SELECT SESSION_USER, CURRENT_USER;
  \du andreas
  \dy+ a_ddl_trigger

ROLLBACK;
\set  ON_ERROR_ROLLBACK interactive

An get the following output:

andreas@localhost:testdb=> \i t_reindexfn.sql
BEGIN
CREATE FUNCTION
psql:t_reindexfn.sql:13: ERROR:  permission denied to create event 
trigger a_ddl_trigger

HINT:  Must be superuser to create an event trigger.
SET
CREATE EVENT TRIGGER
RESET
psql:t_reindexfn.sql:19: ERROR:  permission denied to create event 
trigger other_ddl_trigger

HINT:  Must be superuser to create an event trigger.
                                 List of event triggers
     Name      |      Event      | Owner | Enabled |       Procedure        | Tags | Description
---------------+-----------------+-------+---------+------------------------+------+-------------
 a_ddl_trigger | ddl_command_end | super | enabled | event_trigger_function |      |
(1 row)

ALTER EVENT TRIGGER
 session_user | current_user
--+--
 andreas  | andreas
(1 row)

                  List of roles
 Role name | Attributes |             Member of
-----------+------------+-------------------------------------
 andreas   |            | {super,adminusr,admindb,developer}

                                  List of event triggers
     Name      |      Event      | Owner | Enabled  |       Procedure        | Tags | Description
---------------+-----------------+-------+----------+------------------------+------+-------------
 a_ddl_trigger | ddl_command_end | super | disabled | event_trigger_function |      |
(1 row)

ROLLBACK

What I did:
I created a event trigger function event_trigger_function()
I tried to create an event trigger -- failed, because I'm not a superuser
Set role to super
Create an event trigger -- success
Reset role
I tried again to create an event trigger -- failed, because I'm not a 
superuser

\dy shows the trigger is present and enabled
Disable the trigger -- success although I'm not a superuser
SELECT SESSION_USER, CURRENT_USER; shows the user
\du andreas shows the attributes
\dy shows the trigger is disabled

Or where is my mistake?

I tried this in a snapshot of 9.5 (May 22nd) and in 9.4.0

Documentation of ALTER EVENT TRIGGER says: "You must be superuser to
alter an event trigger."


Regards Andreas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Andreas Ulbrich

On 02.06.2015 22:12, Melvin Davidson wrote:

Your problem is in your design.

If you do it like this:

CREATE TABLE A
(
p_col serial PRIMARY KEY,
acol  integer
);

CREATE TABLE B() INHERITS (A);

INSERT INTO A(acol) VALUES (1);
INSERT INTO B(acol) VALUES (2);

SELECT * FROM A;
SELECT * FROM B;

Then the sequence (p_col) will be UNIQUE across all tables and can be 
referenced.

No need for a key table.

No, someone can do:
INSERT INTO A VALUES (2,3);
TABLE A;
shows:
 p_col | acol
-------+------
     1 |    1
     2 |    2
     2 |    3
p_col is not unique!
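
For the record, the key_table approach I sketched earlier can be wired up
roughly like this (a minimal sketch, names assumed, error handling omitted):

CREATE TABLE key_table (id integer PRIMARY KEY, tbl regclass);

CREATE TABLE a (id integer UNIQUE, acol integer);
CREATE TABLE b () INHERITS (a);

CREATE FUNCTION sync_key_table() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    -- the PRIMARY KEY on key_table rejects an id already used by any table
    INSERT INTO key_table VALUES (NEW.id, TG_RELID::regclass);
  ELSIF TG_OP = 'UPDATE' THEN
    UPDATE key_table SET id = NEW.id WHERE id = OLD.id;
  ELSE
    DELETE FROM key_table WHERE id = OLD.id;
  END IF;
  RETURN NULL;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER a_keys AFTER INSERT OR UPDATE OR DELETE ON a
  FOR EACH ROW EXECUTE PROCEDURE sync_key_table();
CREATE TRIGGER b_keys AFTER INSERT OR UPDATE OR DELETE ON b
  FOR EACH ROW EXECUTE PROCEDURE sync_key_table();

-- r references the key table instead of the (impossible) a*:
CREATE TABLE r (a_id integer REFERENCES key_table (id) ON DELETE CASCADE);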



On Tue, Jun 2, 2015 at 3:45 PM, Andreas Ulbrich 
andreas.ulbr...@matheversum.de wrote:


On 02.06.2015 16:20, Melvin Davidson wrote:

You can use the following to list the triggers and see what
functions they call. Then you can check pg_proc to see how
TRUNCATE is used in prosrc.


SELECT c.relname,
       t.tgname,
       p.proname        AS function_called,
       t.tgconstraint   AS is_constraint,
       CASE WHEN t.tgconstrrelid <> 0
            THEN (SELECT relname
                    FROM pg_class
                   WHERE oid = t.tgconstrrelid)
            ELSE ''
       END AS constr_tbl,
       t.tgenabled
  FROM pg_trigger t
  INNER JOIN pg_proc p  ON (p.oid = t.tgfoid)
  INNER JOIN pg_class c ON (c.oid = t.tgrelid)
 WHERE tgname NOT LIKE 'pg_%'
   AND tgname NOT LIKE 'RI_%'  --  comment out to see constraints
   --AND t.tgenabled = FALSE
 ORDER BY 1;


On Tue, Jun 2, 2015 at 5:31 AM, Albe Laurenz
laurenz.a...@wien.gv.at wrote:

Andreas Ulbrich wrote:
 I'm in a handle for a trigger for TRUNCATE. Is it possible
to find out
 whether the TRUNCATE TABLE is called with CASCADE?

I don't think there is.

But you can find out the table where the trigger is defined
and examine
if any foreign key constraints are referring to it.

If yes, then the trigger was called with CASCADE.
If no, it might have been called either way, but the effect
would be the same.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list
(pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



I think I must explain the problem deeper:

I have two (or more) tables
CREATE TABLE a (id ... UNIQUE -- maybe the PRIMARY KEY,...);
CREATE TABLE b (...) INHERITS (a);

But the id has to be unique over the inheritance. So one solution
of the problem is:
CREATE TABLE key_table (id ... UNIQUE, table REGCLASS);
By trigger, every INSERT/UPDATE/DELETE in table a,b,... changes the
key_table.
This works.

Now I have a table reference to the id of table a*. This is not
possible, but a reference to key_table(id) works fine.
CREATE TABLE r (..., a_id /* REFERENCES a*(id) */ REFERENCES
key_table(id),..);

And now the problem: Can I support TRUNCATE TABLE?
DELETE is not a problem: for DELETE FROM a the trigger deletes the
entry in the key_table and if the reference action on delete is
CASCADE, the entries in r will be deleted.

But TRUNCATE TABLE a! In a TRUNCATE TRIGGER I can delete the
entries in the key_table WHERE table = a (O.K., the performance)
-- it is actually not a TRUNCATE TABLE but a TRUNCATE PARTITION.
And if I did not specify ONLY, there is also a TRUNCATE TABLE b and
the trigger is fired too.

But what about table r? If I do the delete in the key_table, the
delete action will be used. But there is no truncate action;
cascaded truncation is controlled by executing TRUNCATE. And so, I
must delete the entries in r if there is a CASCADE in the TRUNCATE
or raise an exception if the TRUNCATE is RESTRICTED.

Now the question: how do I find out, in the trigger function for
truncate, whether there is a CASCADE or not?

regards, Andreas


-- 
*Melvin Davidson*

I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.





--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.




Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
  Hm. If GetOldestMultiXactOnDisk() gets the starting point by scanning
  the disk it'll always get one at a segment boundary, right? I'm not sure
  that's actually ok; because the value at the beginning of the segment
  can very well end up being a 0, as MaybeExtendOffsetSlru() will have
  filled the page with zeros.
 
  I think that should be harmless, the worst that can happen is that
  oldestOffset erroneously is 0, which should be correct, even though
  possibly overly conservative, in these cases.
 
 Uh oh.  That seems like a real bad problem for this approach.  What
 keeps that from being the opposite of too conservative?  There's no
 safe value in a circular numbering space.

I think it *might* (I'm really jetlagged) be fine because that'll only
happen after an upgrade from < 9.3. And in that case we initialize
nextOffset to 0. That ought to save us?

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Adrian Klaver

On 06/02/2015 11:46 AM, Tom Lane wrote:

Adrian Klaver adrian.kla...@aklaver.com writes:

On 06/02/2015 11:04 AM, Steve Pribyl wrote:

I have noted that  GRANT ALL ON SCHEMA public TO public is granted
on postgres.schemas.public.  I am looking at this in pgadmin so excuse
my nomenclature.



Is this what is allowing write access to the database?



Yes, though that should not be the default.


Huh?  Of course it's the default.  I'm not really sure why the OP is
surprised at this.  A database that won't let you create any tables
is not terribly useful.


Aah, me being stupid.



If you don't like this, you can get rid of the database's public schema
and/or restrict who has CREATE permissions on it.  But I can't see us
shipping a default configuration in which only superusers can create
tables.  That would just encourage people to operate as superusers, which
overall would be much less secure.




regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Adrian Klaver

On 06/02/2015 12:33 PM, Filipe Pina wrote:

Basically, in an (maybe-over)simplified example:

CREATE OR REPLACE FUNCTION add_transaction(transaction core_transaction)
 RETURNS integer AS $$
DECLARE
 transaction2 core_transaction;
BEGIN
 transaction.field1 := 'lapse';
 transaction2.field2 := transaction.field2;
 transaction2.field1 := 'lapse2';
 INSERT INTO core_transaction VALUES(transaction.*);
 INSERT INTO core_transaction VALUES(transaction2.*);
 RETURN 1;
END
$$
LANGUAGE plpgsql;

So, I wanted to do the same in plpython…

CREATE OR REPLACE FUNCTION add_transaction(transaction core_transaction)
 RETURNS integer AS $$
 transaction['field1'] = 'lapse'
 transaction2 = { 'field1': 'lapse2', 'field2': transaction['field2'] }

 # not this, but something that would work without enumerating all
# columns/fields
 pl = plpy.execute('INSERT INTO core_transaction VALUES(transaction.*)')
 pl = plpy.execute('INSERT INTO core_transaction
VALUES(transaction2.*)')
 return 1
$$
LANGUAGE plpythonu;


Yea, I do not see a way of doing that. plpgsql is more tightly coupled 
to Postgres than plpythonu, so you get a lot more shortcuts. This is why I 
tend to use plpgsql even though I prefer programming in Python. That 
being said, the feature set of plpythonu has been extended a good deal 
over the last couple of Postgres versions and I would expect that to 
continue.
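
That said, one workaround that avoids enumerating columns by hand is to
build the statement from the mapping's keys at run time. A minimal sketch,
assuming the dict keys match core_transaction's column names exactly:

CREATE OR REPLACE FUNCTION add_transaction2(t core_transaction)
RETURNS integer AS $$
# Composite-type arguments arrive as Python dicts, so the key list
# doubles as the column list.
cols = list(t.keys())
col_list = ', '.join(plpy.quote_ident(c) for c in cols)
val_list = ', '.join('NULL' if t[c] is None
                     else plpy.quote_nullable(str(t[c])) for c in cols)
plpy.execute('INSERT INTO core_transaction (%s) VALUES (%s)'
             % (col_list, val_list))
return 1
$$ LANGUAGE plpythonu;

The values travel as quoted text literals and are coerced back to the
column types by the INSERT, so it stays schema-agnostic, at the cost of a
round trip through each value's text representation.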







On 02/06/2015, at 15:51, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 06/02/2015 03:10 AM, Filipe Pina wrote:

HI Adrian,

I had a typo in the email:

INSERT INTO my_table VALUES(my_table.*);

was actually

INSERT INTO my_table VALUES(my_var.*);


Aah, that is different:)



So I meant to insert the variable I had in memory (dict representing
a row), not the rows from the table..


So where is the variable getting its data?

Or can we see a simple example of what you are trying to do?




On 02/06/2015, at 01:44, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 06/01/2015 07:42 AM, Filipe Pina wrote:

Thanks for the reply anyway, it's a pity though, it'd be useful..

Another bump I've found along the pl/python road: insert ROWTYPE in
table..
Maybe you have some hint on that? :)

So, in PLPGSQL I can:

DECLARE
  my_var my_table;
BEGIN
  my_var.col1 := 'asd';
  INSERT INTO my_table VALUES(my_table.*);
END;

How would I do something like that in pl/python?

First, how to declare a ROW-TYPE variable, as they're all python
mappings?

my_var = { 'col1': 'asd' } enough? It would miss all the other
columns...

Second, how to insert it?

plpy.prepare and .execute say they don't support composite types, so I
cannot simply pass

pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)',
['my_table'])

Any workarounds for this? (meaning I wouldn't have to specify any
columns in the insert statement)


http://www.postgresql.org/docs/9.4/interactive/sql-insert.html

pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')



Thanks

On Sex, Mai 29, 2015 at 2:00 , Peter Eisentraut pete...@gmx.net
wrote:

On 5/18/15 10:52 AM, Filipe Pina wrote:

   But one of the functions I need to create needs to accept an array
   of records.

PL/Python doesn't support that. Some more code needs to be written to
support that. You did everything correctly. I don't know of a good
workaround.



--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Gavin Flower

On 03/06/15 08:40, Andreas Ulbrich wrote:

On 02.06.2015 22:12, Melvin Davidson wrote:

Your problem is in your design.

If you do it like this:

CREATE TABLE A
(
p_col serial PRIMARY KEY,
acol  integer
);

CREATE TABLE B() INHERITS (A);

INSERT INTO A(acol) VALUES (1);
INSERT INTO B(acol) VALUES (2);

SELECT * FROM A;
SELECT * FROM B;

Then the sequence (p_col) will be UNIQUE across all tables and can be 
referenced.

No need for a key table.

No, someone can do:
INSERT INTO A VALUES (2,3);
TABLE A;
shows:
 p_col | acol
-------+------
     1 |    1
     2 |    2
     2 |    3
p_col is not unique!
Curious, I tried to investigate, to get a better understanding and ran 
into a problem...


   $ psql
   psql (9.4.1)
   Type "help" for help.

   gavin=> CREATE TABLE A
   gavin-> (
   gavin(>     p_col serial PRIMARY KEY,
   gavin(>     acol  integer
   gavin(> );
   CREATE TABLE
   gavin=> CREATE TABLE B() INHERITS (A);
   CREATE TABLE
   gavin=> INSERT INTO A(acol) VALUES (1);
   ERROR:  column "acol" of relation "a" does not exist
   LINE 1: INSERT INTO A(acol) VALUES (1);
                         ^
   gavin=> \d+ a
                                       Table "public.a"
    Column |  Type   |                     Modifiers                      | Storage | Stats target | Description
   --------+---------+----------------------------------------------------+---------+--------------+-------------
    p_col  | integer | not null default nextval('a_p_col_seq'::regclass) | plain   |              |
    acol   | integer |                                                    | plain   |              |
   Indexes:
       "a_pkey" PRIMARY KEY, btree (p_col)
   Child tables: b

   gavin=> \d b
                              Table "public.b"
    Column |  Type   |                     Modifiers
   --------+---------+----------------------------------------------------
    p_col  | integer | not null default nextval('a_p_col_seq'::regclass)
    acol   | integer |
   Inherits: a

   gavin=>




[...]


Cheers,
Gavin


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Thomas Munro
On Tue, Jun 2, 2015 at 9:30 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 My guess is that the file existed, and perhaps had one or more pages,
 but the wanted page doesn't exist, so we tried to read but got 0 bytes
 back.  read() returns 0 in this case but doesn't set errno.

 I didn't find a way to set things so that the file exists but is of
 shorter contents than oldestMulti by the time the checkpoint record is
 replayed.

I'm just starting to learn about the recovery machinery, so forgive me
if I'm missing something basic here, but I just don't get this.  As I
understand it, offsets/0046 should either have been copied with that
page present in it if it existed before the backup started (apparently
not in this case), or extended to contain it by WAL records that come
after the backup label but before the checkpoint record that
references it (also apparently not in this case).  If neither of these
things happened then that is completely different from the
segment-does-not-exist case where we read zeroes if in recovery on the
assumption that later WAL records must be about to delete the file.
There is no way that future WAL records will make an existing segment
file shorter! So at this point don't we know that there is something
wrong with the backup itself?

Put another way, if you bring this up under 9.4.1, won't it also be
unable to access multixact 4624559 at this point?  Of course it won't
try to do so during recovery like 9.4.2 does, but I'm just trying to
understand how this is supposed to work for 9.4.1 if it needs to
access that multixact for other reasons once normal running is reached
(say you recover up to that checkpoint, and then run
pg_get_multixact_members, or a row has that xmax and its members to be
looked up by a vacuum or any normal transaction).  In other words,
isn't this a base backup that is somehow broken, not at all like the
pg_upgrade corruption case which involved the specific case of
multixact 1 and an entirely missing segment file, and 9.4.2 just tells
you about it sooner than 9.4.1?

For what it's worth, I've also spent a lot of time trying to reproduce
basebackup problems with multixact creation, vacuums and checkpoints
injected at various points between copying backup label, pg_multixact,
and pg_control.  I have so far failed to produce anything more
interesting than the 'reading zeroes' case (see attached
copy-after-truncation.sh) and a case where the control file points at
a segment that doesn't exist, but it doesn't matter because the backup
label points at a checkpoint from a time when it did and
oldestMultiXactId is updated from there, and then proceeds exactly as
it should (see copy-before-truncation.sh).  I updated my scripts to
look a bit more like your nicely automated example (though mine use a
different trick to create small quantities of multixacts so they run
against unpatched master).  I have also been considering a scenario
where multixact ID wraparound occurs during basebackup with some
ordering that causes trouble, but I don't yet see why it would break
if you replay the WAL from the backup label checkpoint (and I think
the repro would take days/weeks to run...)

-- 
Thomas Munro
http://www.enterprisedb.com


copy-after-truncation.sh
Description: Bourne shell script


copy-before-truncation.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner cost adjustments

2015-06-02 Thread Bill Moran
On Tue, 2 Jun 2015 14:01:35 -0400
Daniel Begin jfd...@hotmail.com wrote:

 Here is a follow-up on the step-by-step procedure proposed by PT
 
 #1 - setup postgresql planner's cost estimate settings for my hardware. 
 
 --
 Current parameters values described in section 18.7.2 haven't been changed
 except for the effective_cache_size
 seq_page_cost = 1
 random_page_cost = 4
 cpu_tuple_cost = 0.01
 cpu_index_tuple_cost = 0.005
 cpu_operator_cost = 0.0025 
 effective_cache_size = 10GB
 
 I did a bunch of tests on frequently used queries to see how well they
 perform - using SET enable_seqscan = ON/OFF. 
 As described earlier in this tread, the planner use Seq Scan on tables even
 if using an Index Scan is in this case 5 times faster!
 Here are the logs of EXPLAIN ANALYSE on a query...

In an earlier message you mention that the drives are external to the
computer. I don't remember details, but external drives can be quite
dicey as far as performance goes, depending on the specific technology
(USB vs. external SATA vs. NAS, for example) as well as some pretty
wild variances between different brands of the same technology.
See:
http://www.databasesoup.com/2012/05/random-page-cost-revisited.html

As a result, I'm suspicious that the default values you're using for
random_page_cost and seq_page_cost are throwing things off because
your disks aren't performing like internally connected disks.

Correct me if I'm wrong on any of the assumptions I'm making here,
but I got the impression that you can tweak values and restart
Postgres without any hardship. If that's the case, I'm guessing that
raising seq_page_cost (possibly to 2) will cause Postgres to make
better decisions about what are good plans. My suggestion is to try
some different values for those two settings, doing several tests
after each change, and see if you can find a set of values that
starts getting you good plans. It appears that the planner thinks
that it can get better performance by reading from the disk in
sequence than by picking random pages, and that makes me think that
the difference between seq_page_cost and random_page_cost is bigger
than the actual behavior of the drives.
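
Something like the following, just as a sketch (the values are starting
points to experiment with, not recommendations):

-- Session-local experiment; no restart needed:
SET seq_page_cost = 2;
SET random_page_cost = 4;   -- also try lowering this, e.g. 2 or 3
EXPLAIN ANALYZE
SELECT user_id, id AS changeset_id, closed
  FROM changesets
 WHERE changesets.user_id IN (SELECT id FROM sample.users);

-- Once a good pair is found, persist it in postgresql.conf and run
-- SELECT pg_reload_conf();  (both settings change without a restart)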

More comments below.

 osmdump=# SET enable_seqscan = ON;
 osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
 changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);
 
 -
  Hash Semi Join  (cost=21.50..819505.27 rows=726722 width=24) (actual
 time=1574.914..7444.938 rows=338568 loops=1)
Hash Cond: (changesets.user_id = users.id)
    ->  Seq Scan on changesets  (cost=0.00..745407.22 rows=25139722 width=24)
 (actual time=0.002..4724.578 rows=25133929 loops=1)
    ->  Hash  (cost=14.00..14.00 rows=600 width=8) (actual time=0.165..0.165
 rows=600 loops=1)
          Buckets: 1024  Batches: 1  Memory Usage: 24kB
          ->  Seq Scan on users  (cost=0.00..14.00 rows=600 width=8) (actual
 time=0.003..0.073 rows=600 loops=1)
  Total runtime: 7658.715 ms
 (7 rows)
 
 osmdump=# SET enable_seqscan = OFF;
 osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
 changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);
 
 
  Nested Loop  (cost=115.94..10001072613.45 rows=726722 width=24)
 (actual time=0.268..1490.515 rows=338568 loops=1)
    ->  HashAggregate  (cost=115.50..121.50 rows=600 width=8)
 (actual time=0.207..0.531 rows=600 loops=1)
          ->  Seq Scan on users  (cost=100.00..114.00
 rows=600 width=8) (actual time=0.003..0.037 rows=600 loops=1)
    ->  Index Scan using changesets_useridndx on changesets
 (cost=0.44..1775.54 rows=1211 width=24) (actual time=0.038..2.357 rows=564
 loops=600)
  Index Cond: (user_id = users.id)
  Total runtime: 1715.517 ms
 (6 rows)
  
 
 #2 - Run ANALYZE DATABASE and look at performance/planning improvement.
 
 --
 I ran ANALYZE DATABASE then reran the query. It did not produce any
 significant improvement according to the EXPLAIN ANALYSE below...
 
 osmdump=# SET enable_seqscan = ON;
 osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed FROM
 changesets WHERE changesets.user_id IN(SELECT id FROM sample.users);
 
 -
  Hash Semi Join  (cost=21.50..819511.42 rows=729133 width=24) (actual
 time=1538.100..7307.743 rows=338568 loops=1)
Hash Cond: (changesets.user_id = users.id)
    ->  Seq Scan on changesets  (cost=0.00..745390.84 rows=25138084 width=24)
 

Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Alvaro Herrera
Thomas Munro wrote:
 On Tue, Jun 2, 2015 at 9:30 AM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
  My guess is that the file existed, and perhaps had one or more pages,
  but the wanted page doesn't exist, so we tried to read but got 0 bytes
  back.  read() returns 0 in this case but doesn't set errno.
 
  I didn't find a way to set things so that the file exists but is of
  shorter contents than oldestMulti by the time the checkpoint record is
  replayed.
 
 I'm just starting to learn about the recovery machinery, so forgive me
 if I'm missing something basic here, but I just don't get this.  As I
 understand it, offsets/0046 should either have been copied with that
 page present in it if it existed before the backup started (apparently
 not in this case), or extended to contain it by WAL records that come
 after the backup label but before the checkpoint record that
 references it (also apparently not in this case).

Exactly --- that's the spot at which I am, also.  I have had this
spinning in my head for three days now, and tried every single variation
that I could think of, but like you I was unable to reproduce the issue.
However, our customer took a second base backup and it failed in exactly
the same way, modulo some changes to the counters (the file that
didn't exist was 004B rather than 0046).  I'm still at a loss at what
the failure mode is.  We must be missing some crucial detail ...


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Filipe Pina
HI Adrian,

I had a typo in the email:

INSERT INTO my_table VALUES(my_table.*);

was actually

INSERT INTO my_table VALUES(my_var.*);

So I meant to insert the variable I had in memory (dict representing a row), 
not the rows from the table..

 On 02/06/2015, at 01:44, Adrian Klaver adrian.kla...@aklaver.com wrote:
 
 On 06/01/2015 07:42 AM, Filipe Pina wrote:
 Thanks for the reply anyway, it's a pity though, it'd be useful..
 
 Another bump I've found along the pl/python road: insert ROWTYPE in table..
 Maybe you have some hint on that? :)
 
 So, in PLPGSQL I can:
 
 DECLARE
   my_var my_table;
 BEGIN
   my_var.col1 := 'asd';
   INSERT INTO my_table VALUES(my_table.*);
 END;
 
 How would I do something like that in pl/python?
 
 First, how to declare a ROW-TYPE variable, as they're all python mappings?
 
  my_var = { 'col1': 'asd' } enough? It would miss all the other columns...
 
 Second, how to insert it?
 
 plpy.prepare and .execute say they don't support composite types, so I
 cannot simply pass
 
 pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])
 
 Any workarounds for this? (meaning I wouldn't have to specify any
 columns in the insert statement)
 
 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html
 
 pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')
 
 
 Thanks
 
 On Sex, Mai 29, 2015 at 2:00 , Peter Eisentraut pete...@gmx.net wrote:
 On 5/18/15 10:52 AM, Filipe Pina wrote:
 
But one of the functions I need to create needs to accept an array
of records.
 
 PL/Python doesn't support that. Some more code needs to be written to
 support that. You did everything correctly. I don't know of a good
 workaround.
 
 
 -- 
 Adrian Klaver
 adrian.kla...@aklaver.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database designpattern - product feature

2015-06-02 Thread Adrian Stern
Sorry. Will do in the future.

Product_feature is a table describing the valid keys for product features.
With this it is possible to limit keys to specific groups of products.

Kind regards

Adrian Stern
unchained - web solutions

adrian.st...@unchained.ch
+41 79 292 83 47

On Tue, Jun 2, 2015 at 12:58 PM, Dorian Hoxha dorian.ho...@gmail.com
wrote:

 Please do reply-all so you also reply to the list.

 It's not ~good to develop with sqlite and deploy on postgresql. You should
 have your 'dev' as close to 'prod' as possible.

 Product_feature is another table in this case ?

 On Tue, Jun 2, 2015 at 11:44 AM, Adrian Stern adrian.st...@unchained.ch
 wrote:

 Database changeability is not a requirement. It just comes with django
 and makes development so much easier since I can develop on sqlite and
 deploy it wherever I want. Django orm is not great I agree, but it
 certainly does not suck; there are alternatives like sqlalchemy which are
 far more powerful.

 But yea. I get what you're trying to tell me. And I agree, this
 postgresql feature for jsonb look really nice. Much more easy to use than
 the whole xquery stuff. The complete EAV Pattern described on wikipedia is
 actually pretty complex and seems not easy at all to implement in using
 django. Therefore, i maybe should run a few tests with the json plugins.

 So let my sketch another approach.

 PRODUCT - P
 - name
 - type
 - features (jsonb)

 PRODUCT_FEATURE - PF
 - name
 - description
 - datatype
 - validation

 P now has the features field of type jsonb, which allows keys specified
 in PF together with a value of the declared datatype (or simply a valid
 one). PF holds the key-name, its datatype for generating the GUI, and some
 validation pattern for input sanitizing. There is no relation between the
 tables.

 Getting the description is not an issue. I could even create a view
 mapping the jsonb keys to rows.

 Yes I like your approach. Is there anything I should be aware of? Some
 do's and don'ts or known pitfalls?
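
For a concrete starting point, a minimal sketch of that layout with a GIN
index for containment queries (column names assumed from this thread; jsonb
needs PostgreSQL 9.4+):

CREATE TABLE product_feature (
    name        text PRIMARY KEY,
    description text,
    datatype    text,
    validation  text
);

CREATE TABLE product (
    id       serial PRIMARY KEY,
    name     text NOT NULL,
    type     text,
    features jsonb NOT NULL DEFAULT '{}'
);

-- A GIN index makes key and containment lookups on the document cheap:
CREATE INDEX product_features_gin ON product USING gin (features);

-- Products whose features contain a given key/value pair:
SELECT name FROM product WHERE features @> '{"color": "red"}';
-- Products that define a given feature key at all:
SELECT name FROM product WHERE features ? 'color';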





Re: [GENERAL] odbc to emulate mysql for end programs

2015-06-02 Thread Thomas Kellerer
Mimiko wrote on 02.06.2015 at 13:16:
 1) mysql widely uses case-insensitive naming for
 schemas, tables, columns.

So does Postgres. FOO, foo and Foo are all the same name

 But postgres is case-sensitive when double-quoting

Which is what the SQL standard requires (and this was required *long* before 
MySQL even existed)

 or lowers the names without quoting. 

 Is there a configure option to ignore case by default? 

Yes: don't use quoted identifiers. 

 2)  as program double-quotes the schema,table and column names.

Don't use quoted identifiers. Neither in Postgres nor in MySQL (or any other 
DBMS)

They give you much more trouble than they are worth (which you have just
learned).
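
A quick illustration of the folding rules (this is standard PostgreSQL
behavior, shown here only as a sketch):

CREATE TABLE Foo (id integer);  -- unquoted: the name is folded to foo
SELECT * FROM FOO;              -- works, also folds to foo
SELECT * FROM "foo";            -- works, matches the folded name
SELECT * FROM "Foo";            -- ERROR: relation "Foo" does not exist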




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] odbc to emulate mysql for end programs

2015-06-02 Thread Andrew Sullivan
On Tue, Jun 02, 2015 at 01:31:55PM +0200, Thomas Kellerer wrote:
  2)  as program double-quotes the schema,table and column names.
 
 Don't use quoted identifiers. Neither in Postgres nor in MySQL (or any other 
 DBMS)

I think a better rule of thumb is either always to use them (and spell
everything correctly) or never to use them.  Where you get in trouble
is the case where sometimes identifiers are quoted and sometimes not.
(I find the unquoted use more convenient, and I think it's subject to
fewer surprises like overloaded identifiers where one has an uppercase
in it; but I think that's a matter of taste, and if your system
framework quotes for you automatically then you have no choice but to
stick with that convention always and everywhere.)

This isn't really any different from any other development rule.  For
instance, in some environments there are various rules about single
and double quoting.  If you have no conventions imposed across all
your developers about when you use which, pretty soon you'll have an
unmaintainable mess.  And everyone has their favourite story of
frustration about indentation style or variable naming convention.
This case is no different.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-02 Thread Andrew Sullivan
On Tue, Jun 02, 2015 at 12:59:14PM +0200, Tomas Vondra wrote:
 I disagree. The fact that we have 1 release per year means there's one
 deadline, and if you miss it you have to wait another year for the feature
 to be available in official release. That's a lot of pressure and
 frustration for developers. With more frequent releases, this issue gets
 less serious. Of course, it's not a silver bullet (e.g. does not change
 review capacity).

But it's the second part of this that is the main issue.  For the
people who are driving features in postgres now are overwhelmingly the
most advanced users, who also want rock solid database reliability.

After all, the simple use cases (the ones that basically treat the
DBMS as an expensive version of a flat filesystem) have been solved
for many releases quite well in Postgres.  These are the cases that
people used to compare with MySQL, and MySQL isn't any better at them
any more than Postgres.  But Postgres isn't really any better at them
than MySQL, either, because the basic development model along those
lines is low sophistication and is automatically constrained by round
tripping between the application and the database.

Anyone who wants to scale for real understands that and has already
figured out the abstractions they need.  But those are also the people
with real data at stake, which is why they picked Postgres as opposed
to some eventually-consistent mostly-doesn't-lose-data distributed
NoSQL system.  The traditional Postgres promise that it never loses
your data is important to all those people too.

Yet they're pressing for hot new features because it's the nifty
database tricks you can do that allow you to continue to build
ever-larger database systems.  If the model switched to more frequent
feature releases with less frequent LTS releases for stability,
one of two things would happen:

1.  There'd be pressure to get certain high-value features into
the LTS releases.  This is in effect the exact issue there is now.

2.  The people who really need high quality and advanced features
would all track the latest release anyway, because their risk
tolerance is actually higher than they think (or more likely,
they're doing the risk calculations wrong).  The effect of this
would be to put pressure on the intermediate releases for higher
quality, which would result in neglect of the quality issues of
the LTS anyway.

And on top of the above, you'd split the developer community between
those working on LTS and those not.  Given that the basic problem is
not enough developers to get the quality quite right against the
desired features, I don't really see how it helps.

As nearly as I can tell, noting that I'm watching almost entirely from
the sidelines, what really happened in the case that has everyone
worried is that one highly-esteemed developer claimed something and
maybe should have relinquished sooner given his workload.  That
happens; nobody's perfect.  It's frustrating, but this is not the only
community to have had that issue (cf. Linux kernel, for an
approximately infinite series of examples of this).  I am not sure
that the answer to this is a rejigging of the basic development model.
Hard cases make bad law.

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] odbc to emulate mysql for end programs

2015-06-02 Thread Mimiko

Thanks for response.

I've tried to connect the application to postgres with odbc. Two
problems arose:


1) mysql widely uses case-insensitive naming for 
schemas, tables, columns. But postgres is case-sensitive when 
double-quoting or lowers the names without quoting. Is there a configure 
option to ignore case by default?
2) despite odbc use, at first start the particular application tries to 
connect and create tables in the database (schema). It kept saying that 
the database (schema) does not exist, although I've created the schema 
exactly as it needs with cases and specified search_path also. Maybe 
this problem is related to case-sensitivity, as the program double-quotes 
the schema, table and column names.


On 02.06.2015 01:25, William Dunn wrote:

PostgreSQL has a fully standards compliant ODBC driver (See:
https://odbc.postgresql.org/). Any application designed to communicate
with DBMS over ODBC connection should be able to use that driver to
communicate with PostgreSQL. Most applications that interact with
databases come with ODBC drivers pre-installed for the most common
databases (MySQL, Microsoft SQL Server, Oracle, etc.) but allow you to
set up an ODBC driver for another DBMS.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Minor revision downgrade (9.2.11 -> 9.2.10)

2015-06-02 Thread Bruce Momjian
On Tue, Jun  2, 2015 at 04:40:15PM +1200, Fabio Ugo Venchiarutti wrote:
 We're fairly confident that it's an issue with the hardware but we
 have political reasons to downgrade PG to 9.2.10 to show the hosting
 supplier that it's their fault.
 
 
 The release notes for 9.2.11 mention no data structure changes (in
 line with the usual PG versioning policy). Is it just as safe to
 downgrade too? We tested it on a couple non-critical boxes to no ill
 effect whatsoever, but we'd like a second opinion before we do it on
 the live installation too.

I have rarely seen this question asked.  I think minor-release
downgrading is fine in this case.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-02 Thread Tomas Vondra



On 06/02/15 04:27, Adrian Klaver wrote:

On 06/01/2015 07:11 PM, Arthur Silva wrote:

In my opinion, FWIW, that really does not change anything. Whether
you are dealing with 20 new features over a year or 10 over half a
year the same constraints apply, writing the code and getting it
reviewed over a given time period. Add in the extra overhead costs of
more frequent releases and I see no gain.


I disagree. The fact that we have 1 release per year means there's one 
deadline, and if you miss it you have to wait another year for the 
feature to be available in official release. That's a lot of pressure 
and frustration for developers. With more frequent releases, this issue 
gets less serious. Of course, it's not a silver bullet (e.g. does not 
change review capacity).



Maybe this should be reposted to the hackers list?


Yes. And there already are threads dealing with this topic.

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database designpattern - product feature

2015-06-02 Thread Dorian Hoxha
Please do reply-all so you also reply to the list.

It's not ~good to develop with sqlite and deploy on postgresql. You should
have your 'dev' as close to 'prod' as possible.

Product_feature is another table in this case ?

On Tue, Jun 2, 2015 at 11:44 AM, Adrian Stern adrian.st...@unchained.ch
wrote:

 Database changeability is not a requirement. It just comes with django and
 makes development so much easier since I can develop on sqlite and deploy
 it wherever I want. Django orm is not great I agree, but it certainly
 does not suck; there are alternatives like sqlalchemy which are far more
 powerful.

 But yea. I get what you're trying to tell me. And I agree, this postgresql
 feature for jsonb look really nice. Much more easy to use than the whole
 xquery stuff. The complete EAV Pattern described on wikipedia is actually
 pretty complex and seems not easy at all to implement in using django.
 Therefore, i maybe should run a few tests with the json plugins.

 So let my sketch another approach.

 PRODUCT - P
 - name
 - type
 - features (jsonb)

 PRODUCT_FEATURE - PF
 - name
 - description
 - datatype
 - validation

 P now has the features field of type jsonb, which allows keys specified in
 PF together with a value of the declared datatype (or simply a valid one).
 PF holds the key-name, its datatype for generating the GUI, and some
 validation pattern for input sanitizing. There is no relation between the
 tables.

 Getting the description is not an issue. I could even create a view
 mapping the jsonb keys to rows.

 Yes I like your approach. Is there anything I should be aware of? Some
 do's and don'ts or known pitfalls?




Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Adrian Klaver

On 06/02/2015 03:10 AM, Filipe Pina wrote:

HI Adrian,

I had a typo in the email:

INSERT INTO my_table VALUES(my_table.*);

was actually

INSERT INTO my_table VALUES(my_var.*);


Aah, that is different:)



So I meant to insert the variable I had in memory (dict representing a row), 
not the rows from the table..


So where is the variable getting its data?

Or can we see a simple example of what you are trying to do?




On 02/06/2015, at 01:44, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 06/01/2015 07:42 AM, Filipe Pina wrote:

Thanks for the reply anyway, it's a pity though, it'd be useful..

Another bump I've found along the pl/python road: insert ROWTYPE in table..
Maybe you have some hint on that? :)

So, in PLPGSQL I can:

DECLARE
   my_var my_table;
BEGIN
   my_var.col1 := 'asd';
   INSERT INTO my_table VALUES(my_table.*);
END;

How would I do something like that in pl/python?

First, how to declare a ROW-TYPE variable, as they're all python mappings?

Is my_var = { 'col1': 'asd' } enough? It would miss all the other columns...

Second, how to insert it?

plpy.prepare and .execute say they don't support composite types, so I
cannot simply pass

pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])

Any workarounds for this? (meaning I wouldn't have to specify any
columns in the insert statement)


http://www.postgresql.org/docs/9.4/interactive/sql-insert.html

pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')



Thanks

On Sex, Mai 29, 2015 at 2:00 , Peter Eisentraut pete...@gmx.net wrote:

On 5/18/15 10:52 AM, Filipe Pina wrote:

But one of the functions I need to create needs to accept an array
of records.

PL/Python doesn't support that. Some more code needs to be written to
support that. You did everything correctly. I don't know of a good
workaround.



--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] TRIGGER TRUNCATE -- CASCADE or RESTRICT

2015-06-02 Thread Melvin Davidson
You can use the following to list the triggers and see what functions they
call. Then you can check pg_proc to see how TRUNCATE is used in prosrc.


SELECT c.relname,
       t.tgname,
       p.proname        AS function_called,
       t.tgconstraint   AS is_constraint,
       CASE WHEN t.tgconstrrelid > 0
            THEN (SELECT relname
                    FROM pg_class
                   WHERE oid = t.tgconstrrelid)
            ELSE ''
       END AS constr_tbl,
       t.tgenabled
  FROM pg_trigger t
  INNER JOIN pg_proc p  ON (p.oid = t.tgfoid)
  INNER JOIN pg_class c ON (c.oid = t.tgrelid)
 WHERE tgname NOT LIKE 'pg_%'
   AND tgname NOT LIKE 'RI_%'  -- comment out to see constraints
-- AND t.tgenabled = FALSE
 ORDER BY 1;
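To combine this with Laurenz's suggestion, a hedged sketch of the test a
TRUNCATE trigger function could run (TG_RELID is the built-in PL/pgSQL
trigger variable holding the table's OID; cascade_used is a hypothetical
boolean variable):

    SELECT EXISTS (SELECT 1
                     FROM pg_constraint
                    WHERE contype = 'f'
                      AND confrelid = TG_RELID)
      INTO cascade_used;

If it comes back true, the statement must have been TRUNCATE ... CASCADE,
or the trigger would never have fired.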


On Tue, Jun 2, 2015 at 5:31 AM, Albe Laurenz laurenz.a...@wien.gv.at
wrote:

 Andreas Ulbrich wrote:
  I'm in a handle for a trigger for TRUNCATE. Is it possible to find out
  whether the TRUNCATE TABLE ist called with CASCADE?

 I don't think there is.

 But you can find out the table where the trigger is defined and examine
 if any foreign key constraints are referring to it.

 If yes, then the trigger was called with CASCADE.
 If no, it might have been called either way, but the effect would be the
 same.

 Yours,
 Laurenz Albe

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] pl/python composite type array as input parameter

2015-06-02 Thread Filipe Pina
Thanks Rémi,

Indeed I needed something more generic and easy to maintain (regarding table 
schema evolution) so I ended up going back to PL/PGSQL (for that specific 
function) in the meantime.

 On 02/06/2015, at 09:41, Rémi Cura remi.c...@gmail.com wrote:
 
 OUps,
 
 I forget another strategy I used :
 instead of having 
 testp2(es employee[])
 
 you can use 
 testp2( names text[], salaries integer[], ages integer[])
 
 This might be the solution with the least work, but it is absolutely terrible 
 practice,
 because it will be hard to change your record type (evolution difficult),
 and having many columns will make you create functions with many arguments,
 which is often a bad idea.
 
 Cheers,
 Rémi-C
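 
  A hedged sketch of that variant (the function body is illustrative;
  plpython3u assumed, arrays arrive as Python lists):
 
      CREATE FUNCTION testp2(names text[], salaries integer[], ages integer[])
      RETURNS text
      LANGUAGE plpython3u
      AS $$
      # reassemble the employee records from the parallel arrays
      rows = zip(names, salaries, ages)
      return ', '.join('%s: %s, %s' % r for r in rows)
      $$;
 
      -- SELECT testp2(ARRAY['ann','bob'], ARRAY[1000,1200], ARRAY[30,40]);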
 
 2015-06-02 10:36 GMT+02:00 Rémi Cura remi.c...@gmail.com 
 mailto:remi.c...@gmail.com:
 Hey,
 the only straight workaround I know (which is pretty bad)
 is to cast down your record to text.
 Then you have an array of text, which is manageable.
 
 For this you can either 'flatten' your record into a unique text,
 or cast each part of your record to text, then emulate an array of array (you 
 need to know the length of the inner array in your function though).
 
 I used this to emulate a 2D numpy vector (N*3)(for numpy).
 
 You'll need a custom aggregate, like this one 
 https://github.com/Remi-C/_utilities/blob/master/postgres/array_of_array.sql.
 
 The other more sane solution is to pass the information about the row you 
 want to retrieve, and retrieve the row directly within the python.
 For instance, here you would pass an array of id of the employee you want to 
 work with.
 This is saner, but as a design I don't really like to have specific SQL code 
 into a generic python function.
 
 I agree it is cumbersome, and I also badly miss more powerful input for 
 python function (after all, plpython can already return composite types, 
 which is awesome)
 
 
 Cheers,
 Rémi-C
 
 2015-06-02 2:44 GMT+02:00 Adrian Klaver adrian.kla...@aklaver.com 
 mailto:adrian.kla...@aklaver.com:
 On 06/01/2015 07:42 AM, Filipe Pina wrote:
 Thanks for the reply anyway, it's a pity though, it'd be useful..
 
 Another bump I've found along the pl/python road: insert ROWTYPE in table..
 Maybe you have some hint on that? :)
 
 So, in PLPGSQL I can:
 
 DECLARE
my_var my_table;
 BEGIN
my_var.col1 := 'asd';
INSERT INTO my_table VALUES(my_table.*);
 END;
 
 How would I do something like that in pl/python?
 
 First, how to declare a ROW-TYPE variable, as they're all python mappings?
 
 Is my_var = { 'col1': 'asd' } enough? It would miss all the other columns...
 
 Second, how to insert it?
 
 plpy.prepare and .execute say they don't support composite types, so I
 cannot simply pass
 
 pl = plpy.prepare('INSERT INTO core_customer VALUES ($1)', ['my_table'])
 
 Any workarounds for this? (meaning I wouldn't have to specify any
 columns in the insert statement)
 
 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html 
 http://www.postgresql.org/docs/9.4/interactive/sql-insert.html
 
 pl = plpy.prepare('INSERT INTO core_table SELECT * FROM my_table')
 
 
 Thanks
 
 On Sex, Mai 29, 2015 at 2:00 , Peter Eisentraut pete...@gmx.net 
 mailto:pete...@gmx.net wrote:
 On 5/18/15 10:52 AM, Filipe Pina wrote:
 
 But one of the functions I need to create needs to accept an array
 of records.
 
 PL/Python doesn't support that. Some more code needs to be written to
 support that. You did everything correctly. I don't know of a good
 workaround.
 
 
 -- 
 Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com
 
 
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org 
 mailto:pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general 
 http://www.postgresql.org/mailpref/pgsql-general
 
 



[GENERAL] Automatic Failover

2015-06-02 Thread Ivann Ruiz
Hello everyone,

I know most of you guys have plenty of experience with PostgreSQL, do any
of you know a tool for windows that allows me to create an automatic
failover system? I'm running Windows servers 2008 and 2012 and PostgreSQL
9.4.

Thanks in advance for your time!


Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-06-02 Thread Adrian Klaver

On 06/02/2015 01:24 AM, Rémi Cura wrote:

Hey,
python is installed from official binary, 64 b for windows,
in C/Python32

I can't remember the argument, but it might be irrelevant.
The problem doesn't seem to be to install numpy, it works perfectly in
the regular terminal.

The problem seems to be that postgres can't correctly use numpy.


So, to be clear, you can import other modules into plpython3u and they work?

In your previous post you said:

All user of the computer have all rights on the 
C:\\Python32\\lib\\site-packages folder


Do those rights extend down the sub-directories, in particular the numpy 
package?
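
For what it's worth, a hedged diagnostic sketch that surfaces the full 
import traceback from inside the backend (plpython3u assumed; the function 
name is invented):

    CREATE FUNCTION numpy_check() RETURNS text
    LANGUAGE plpython3u
    AS $$
    import sys, traceback
    try:
        import numpy
        return 'numpy %s loaded from %s' % (numpy.__version__, numpy.__file__)
    except ImportError:
        # the DLL that fails to load is usually named in the traceback
        return traceback.format_exc() + '\nsys.path: %r' % sys.path
    $$;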





I found no version of the scipy installer for win 64 with python 3.2;
I tried several, but not having the exact python version always ends in
failure.
The binaries you linked to are dependent on intel math kernel library,
which I don't have.



Well it was worth a shot.




Cheers,
Rémi-C


2015-06-01 19:41 GMT+02:00 Adrian Klaver adrian.kla...@aklaver.com
mailto:adrian.kla...@aklaver.com:

On 06/01/2015 09:09 AM, Rémi Cura wrote:

Hey, thanks to help me with that.
I started fresh to have a truly reproducible process,
so you can have all information and rule out some error
possibilities.

   - Uninstall all python.
   - Check that PythonPath doesn't exist anymore
   - check that python doesn't exist anymore

   - install python 3.2.5 64 bit from official python website
into C/Python32

   - Reload configuration for server.

   - create plpython3u , create a python function, test it (show
path)

 * It works, python path is
  'C:\\Windows\\system32\\python32.zip', 'C:\\Python32\\Lib',
'C:\\Python32\\DLLs', 'E:\\9.3\\data', 'C:\\Program
Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
'C:\\Python32\\lib\\site-packages'

   - Donwload latest numpy from website.
   - On another PC


So what is the Python setup on this machine?

 * Compile numpy with visual 2008, 64 bit
 * Create a binary installer for windows (using python.exe
setup.py) with the proper argument


The argument would be?


   - On the server :
   - install numpy with the compiled installer.


Best guess is that the numpy compilation you are doing on Machine A
is not compatible with what you have installed on Machine B(the server).

Have you looked at this:
http://www.lfd.uci.edu/~gohlke/pythonlibs/

or

http://www.scipy.org/install.html



   - check that numpy is correctly installed in
C:\Python32\Lib\site-packages
   - using an external terminal, check that numpy works (import
numpy - OK)

   - Now, define a plpython3u function containing import numpy

   - Run the function -- error is
ERREUR:  ImportError: DLL load failed: Le module spécifié est
introuvable.,
 which roughly translates to
ERROR: ImportError : DLL load failed : the specified module
couldn't
be found.

   - Create a plpython3u function returning sys.path
the path is C:\\Windows\\system32\\python32.zip',
'C:\\Python32\\Lib',
'C:\\Python32\\DLLs', 'E:\\9.3\\data', 'C:\\Program
Files\\PostgreSQL\\9.3\\bin', 'C:\\Python32',
'C:\\Python32\\lib\\site-packages
numpy is in this path, in C:\\Python32\\lib\\site-packages
All user of the computer have all rights on the
C:\\Python32\\lib\\site-packages folder


   - execute `import imp; imp.find_module('numpy')` within the
plpython3u function
 - returns None,
'C:\\Python32\\lib\\site-packages\\numpy', ('',
'', 5)

   - create a helloworld module, put it next to numpy, try to
call it
 - it gets called

I really don't see what I can do more.

Cheers,
Rémi-C



--
Adrian Klaver
adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-02 Thread Adrian Klaver

On 06/02/2015 03:59 AM, Tomas Vondra wrote:



On 06/02/15 04:27, Adrian Klaver wrote:

On 06/01/2015 07:11 PM, Arthur Silva wrote:

In my opinion, FWIW, that really does not change anything. Whether
you are dealing with 20 new features over a year or 10 over half a
year the same constraints apply, writing the code and getting it
reviewed over a given time period. Add in the extra overhead costs of
more frequent releases and I see no gain.


I disagree. The fact that we have 1 release per year means there's one
deadline, and if you miss it you have to wait another year for the
feature to be available in official release. That's a lot of pressure
and frustration for developers. With more frequent releases, this issue
gets less serious. Of course, it's not a silver bullet (e.g. does not
change review capacity).


That is the fundamental issue, wants versus resources to fulfill the 
wants. That issue remains regardless of the release cycle. The solution 
lies in either restricting the wants, increasing the resources(more 
developers, reviewers) or a some combination thereof.





Maybe this should be reposted to the hackers list?


Yes. And there already are threads dealing with this topic.

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] odbc to emulate mysql for end programs

2015-06-02 Thread Adrian Klaver

On 06/02/2015 04:16 AM, Mimiko wrote:

Thanks for response.

I've tried to connect the application to postgres with odbc. Arised 2
problems:

1) mysql widely uses case-insensitive naming for
schemas, tables, columns.


Actually that is not true as I found out the hard way. See here for all 
the ways you can make that not true:


https://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html


But postgres uses case-sensitive names when
double-quoting, or lowers the names without quoting. Is there a configure
option to ignore case by default?

2) despite odbc use, at first start the particular application tries to
connect and create tables in database(schema). It kept saying that
database(schema) does not exist, although I've created the schema
exactly as it needs with cases and specified search_path also. Maybe
this problem is related to case-sensitiveness, as the program double-quotes
the schema, table and column names.


MySQL != Postgres. You have just started down a tortuous path if your 
application is really expecting to talk to a MySQL database.
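
To make the folding rules concrete (stock PostgreSQL behavior; there is no 
configuration option to change it):

    CREATE TABLE MyTable (Id int);     -- unquoted: stored as mytable / id
    SELECT * FROM mytable;             -- works
    SELECT * FROM MYTABLE;             -- works, folded to mytable too
    SELECT * FROM "MyTable";           -- ERROR: relation "MyTable" does not exist

    CREATE TABLE "MyTable" ("Id" int); -- quoted: case preserved, and the
    SELECT * FROM "MyTable";           -- name must be quoted ever after

If the application double-quotes mixed-case names, the schema has to be 
created with exactly those quoted names.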




On 02.06.2015 01:25, William Dunn wrote:

PostgreSQL has a fully standards compliant ODBC driver (See:
https://odbc.postgresql.org/). Any application designed to communicate
with DBMS over ODBC connection should be able to use that driver to
communicate with PostgreSQL. Most applications that interact with
databases come with ODBC drivers pre-installed for the most common
databases (MySQL, Microsoft SQL Server, Oracle, etc.) but allow you to
set up an ODBC driver for another DBMS.







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] advocating LTS release and feature-train release cycles

2015-06-02 Thread Jan de Visser
On June 2, 2015 03:16:53 PM Zenaan Harkness wrote:
 On 6/2/15, Jan de Visser j...@de-visser.net wrote:
  On June 1, 2015 11:11:37 PM Arthur Silva wrote:
  In my opinion a twice a year schedule would be good.
  The LTS would be every 2 or 4 releases. Keeping 2 LTS versions supported
  at
  all moments.
  
  Maybe this should be reposted to the hackers list?
  
  Pretty sure this would be shot down pretty quick. At this point it seems
  more
  likely to me that the time between releases will be longer rather than
  shorter.
 
 Really, that sounds like an excellent way to test such an alternative
 - if pg development went to what every other major libre project does,
 we would not have a proper comparison of the outcome for the
 alternative (lengthening the release cycle, rather than shortening).
 
 I know how I think it'll pan out - but personal opions matter little
 here, only what the dev's choose.
 
 Whatever the outcome, this will be a great experiment in the long run,
 providing a data point we would be quite unlikely to have otherwise!

I was overly short. What I should have done is direct you to pgsql-hackers,
where release schedules have been extensively discussed recently. Reading
those threads will give you an idea of the thinking process of the people
responsible for releasing pgsql. And whether or not their thinking lines up
with other projects is not really relevant in my opinion - all projects are
different, not least because the people running them are different.


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-01 14:22:32 -0400, Robert Haas wrote:
 On Mon, Jun 1, 2015 at 4:58 AM, Andres Freund and...@anarazel.de wrote:
  The lack of WAL logging actually has caused problems in the 9.3.3 (?)
  era, where we didn't do any truncation during recovery...
 
 Right, but now we're piggybacking on the checkpoint records, and I
 don't have any evidence that this approach can't be made robust.  It's
 possible that it can't be made robust, but that's not currently clear.

Well, it's possible that it can be made work without problems. But I
think robust is something different. Having to look at slrus, during
recovery, to find out what to truncate puts more intelligence/complexity
in the recovery path than I'm comfortable with.

  By the time we've reached the minimum recovery point, they will have
  been recreated by the same WAL records that created them in the first
  place.
 
  I'm not sure that's true. I think we could end up erroneously removing
  files that were included in the base backup. Anyway, let's focus on your
  patch for now.
 
 OK, but I am interested in discussing the other thing too.  I just
 can't piece together the scenario myself - there may well be one.  The
 base backup will begin replay from the checkpoint caused by
 pg_start_backup() and remove anything that wasn't there at the start
 of the backup.  But all of that stuff should get recreated by the time
 we reach the minimum recovery point (end of backup).

I'm not sure if it's reproducibly borked. What about this scenario:
1) pg_start_backup() is called, creates a checkpoint.
2) 2**31 multixacts are created, possibly with several checkpoints
   inbetween
3) pg_multixact is copied
4) basebackup finishes

Unless I'm missing something this will lead to a crash recovery startup
where the first call to TruncateMultiXact() will have
MultiXactState->lastCheckpointedOldest wildly inconsistent with the
GetOldestMultiXactOnDisk() return value, possibly leading to truncation
being skipped erroneously.  Whether that's a problem I'm not yet
entirely sure.

But what *definitely* looks wrong to me is that a TruncateMultiXact() in
this scenario now (since a couple weeks ago) does a
SimpleLruReadPage_ReadOnly() in the members slru via
find_multixact_start(). That just won't work acceptably when we're not
yet consistent. There very well could not be a valid members segment at
that point?  Am I missing something?

  I'm more worried about the cases where we didn't ever actually badly
  wrap around (i.e. overwrite needed data); but where that's not clear on
  the standby because the base backup isn't in a consistent state.
 
 I agree. The current patch tries to make it so that we never call
 find_multixact_start() while in recovery, but it doesn't quite
 succeed: the call in TruncateMultiXact still happens during recovery,
 but only once we're sure that the mxact we plan to call it on actually
 exists on disk.  That won't be called until we replay the first
 checkpoint, but that might still be prior to consistency.

It'll pretty much *always* be before we reach consistency, right? It'll
called on the checkpoint created by pg_start_backup()?

I don't think the presence check (that's GetOldestMultiXactOnDisk() in
TruncateMultiXact(), right) helps very much. There's no guarantee at all
that offsets and members are in any way consistent with each other. Or
in themselves for that matter, the copy could very well have been in the
middle of a write the slru page.

I think at the very least we'll have to skip this step while not yet
consistent. That really sucks, because we'll possibly end up with
multixacts that are completely filled by the time we've reached
consistency.

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Minor revision downgrade (9.2.11 - 9.2.10)

2015-06-02 Thread Melvin Davidson
I hate to ask the obvious, but have you made sure you copied over the
postgresql.conf and pg_hba.conf to make it identical?

On Tue, Jun 2, 2015 at 7:39 AM, Bruce Momjian br...@momjian.us wrote:

 On Tue, Jun  2, 2015 at 04:40:15PM +1200, Fabio Ugo Venchiarutti wrote:
  We're fairly confident that it's an issue with the hardware but we
  have political reasons to downgrade PG to 9.2.10 to show the hosting
  supplier that it's their fault.
 
 
  The release notes for 9.2.11 mention no data structure changes (in
  line with the usual PG versioning policy). Is it just as safe to
  downgrade too? We tested it on a couple non-critical boxes to no ill
  effect whatsoever, but we'd like a second opinion before we do it on
  the live installation too.

 I have rarely seen this question asked.  I think minor-release
 downgrading is fine in this case.

 --
  Bruce Momjian  br...@momjian.us        http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 1:21 AM, Noah Misch n...@leadboat.com wrote:
 On Mon, Jun 01, 2015 at 02:06:05PM -0400, Robert Haas wrote:
 On Mon, Jun 1, 2015 at 12:46 AM, Noah Misch n...@leadboat.com wrote:
  On Fri, May 29, 2015 at 03:08:11PM -0400, Robert Haas wrote:
  SetMultiXactIdLimit() bracketed certain parts of its
  logic with if (!InRecovery), but those guards were ineffective because
  it gets called before InRecovery is set in the first place.
 
  SetTransactionIdLimit() checks InRecovery for the same things, and it is
  called at nearly the same moments as SetMultiXactIdLimit().  Do you have a
  sense of whether it is subject to similar problems as a result?

 Well, I think it's pretty weird that those things will get done before
 beginning recovery, even on an inconsistent cluster, but not during
 recovery.  That is pretty strange.  I don't see that it can actually
 do any worse than emit a few log messages at the start of recovery
 that won't show up again until the end of recovery, though.

 Granted.  Would it be better to update both functions at the same time, and
 perhaps to make that a master-only change?  Does the status quo cause more
 practical harm via SetMultiXactIdLimit() than via SetTransactionIdLimit()?

It does in the case of the call to find_multixact_start().  If that
fails, we take the server down for no good reason, as demonstrated by
the original report. I'll revert the changes to the other two things
in this function that I changed to be protected by did_trim.  There's
no special reason to think that's a necessary change.

  1. Moves the call to DetermineSafeOldestOffset() that appears in
  StartupMultiXact() to TrimMultiXact(), so that we don't try to do this
  until we're consistent.  Also, instead of passing
  MultiXactState->oldestMultiXactId, pass the newer of that value and
  the earliest offset that exists on disk.  That way, it won't try to
  read data that's not there.
 
  Perhaps emit a LOG message when we do that, since it's our last 
  opportunity to
  point to the potential data loss?

 If the problem is just that somebody minmxid got set to 1 instead of
 the real value, I think that there is no data loss, because none of
 those older values are actually present there.  But we could add a LOG
 message anyway.  How do you suggest that we phrase that?

 There's no data loss if 1 <= true_minmxid <= 2^31 at the time minmxid got set
 to 1.  Otherwise, data loss is possible.

Yes, but in that scenario, the log message you propose wouldn't be
triggered.  If true_minmxid > 2^31, then the stored minmxid will not
precede the files on disk; it will follow it (assuming the older stuff
hasn't been truncated, as is likely).  So the message would be
essentially:

LOG: you didn't lose data.  but if exactly the opposite of what this
message is telling you about had happened, then you would have.
DETAIL: Have a nice day.

 I don't hope for an actionable
 message, but we might want a reporter to grep logs for it when we diagnose
 future reports.  Perhaps this:

   missing pg_multixact/members files; begins at MultiXactId %u, expected %u

This seems misleading.  In the known failure case, it's not that the
pg_multixact files are unexpectedly missing; it's that we incorrectly
think that they should still be there.  Maybe:

oldest MultiXactId on disk %u follows expected oldest MultiXact %u

 For good measure, perhaps emit this when lastCheckpointedOldest > earliest by
 more than one segment:

   excess pg_multixact/members files; begins at MultiXactId %u, expected %u

So, this scenario will happen whenever the system was interrupted in
the middle of a truncation, or when the system is started from a base
backup and a truncation happened after files were copied.  I'm wary of
giving users the idea that this is an atypical event.  Perhaps a
message at DEBUG1?

 I'm not sure what you mean about it becoming too old.  At least with
 that fix, it should get updated to exactly the first file that we
 didn't remove.  Isn't that right?

 Consider a function raw_GOMXOD() that differs from GetOldestMultiXactOnDisk()
 only in that it never reads or writes the cache.  I might expect
 oldestMultiXactOnDisk==raw_GOMXOD() if oldestMultiXactOnDiskValid, and that
 does hold most of the time.  It does not always hold between the start of the
 quoted code's SimpleLruTruncate() and its oldestMultiXactOnDisk assignment.
 That's because raw_GOMXOD() changes at the instant we unlink the oldest
 segment, but oldestMultiXactOnDisk changes later.  Any backend may call
 GetOldestMultiXactOnDisk() via SetMultiXactIdLimit().  If a backend does that
 concurrent with the checkpointer running TruncateMultiXact() and sees a stale
 oldestMultiXactOnDisk, is that harmless?

As far as I can tell, it's pretty much harmless.  I mean, we've
already discussed the risk that the head and tail could get too far
apart, because really it should be TruncateMultiXact(), not
SetMultiXactIdLimit(), that establishes the new stop point.  But 

Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-02 11:16:22 -0400, Robert Haas wrote:
 I'm having trouble figuring out what to do about this.  I mean, the
 essential principle of this patch is that if we can't count on
 relminmxid, datminmxid, or the control file to be accurate, we can at
 least look at what is present on the disk.  If we also cannot count on
 that to be accurate, we are left without any reliable source of
 information.  Consider a hypothetical cluster where all our stored
 minmxids of whatever form are corrupted (say, all change to 1) and in
 addition there are stray files in pg_multixact.  I don't think there's
 really any way to get ourselves out of trouble in that scenario.

If we were to truncate after vacuum, and only on the primary (via WAL
logging), we could, afaics, just rely on all the values to be
recomputed. I mean relminmxid will be recomputed after a vacuum, and
thus, after some time, will datminmxid and the control file value.  We
could just force a value of 1 to always trigger anti-wraparound vacuums
(or wait for that to happen implicitly, to delay the impact?). That
should then fix the problem in a relatively short amount of time?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 8:56 AM, Andres Freund and...@anarazel.de wrote:
 But what *definitely* looks wrong to me is that a TruncateMultiXact() in
 this scenario now (since a couple weeks ago) does a
 SimpleLruReadPage_ReadOnly() in the members slru via
 find_multixact_start(). That just won't work acceptably when we're not
 yet consistent. There very well could not be a valid members segment at
 that point?  Am I missing something?

Yes: that code isn't new.

TruncateMultiXact() called SimpleLruReadPage_ReadOnly() directly in
9.3.0 and every subsequent release until 9.3.7/9.4.2.  The only thing
that's changed is that we've moved that logic into a function called
find_multixact_start() instead of having it directly inside that
function.  We did that because we needed to use the same logic in some
other places.  The reason why 9.3.7/9.4.2 are causing problems for
people that they didn't have previously is because those new,
additional call sites were poorly chosen and didn't include adequate
protection against calling that function with an invalid input value.
What this patch is about is getting back to the situation that we were
in from 9.3.0 - 9.3.6 and 9.4.0 - 9.4.1, where TruncateMultiXact() did
the thing that you're complaining about here but no one else did.

From my point of view, I think that you are absolutely right to
question what's going on in TruncateMultiXact().  It's kooky, and
there may well be bugs buried there.  But I don't think fixing that
should be the priority right now, because we have zero reports of
problems attributable to that logic.  I think the priority should be
on undoing the damage that we did in 9.3.7/9.4.2, when we made other
places to do the same thing.  We started getting trouble reports
attributable to those changes *almost immediately*, which means that
whether or not TruncateMultiXact() is broken, these new call sites
definitely are.  I think we really need to fix those new places ASAP.

 I think at the very least we'll have to skip this step while not yet
 consistent. That really sucks, because we'll possibly end up with
 multixacts that are completely filled by the time we've reached
 consistency.

That would be a departure from the behavior of every existing release
that includes this code based on, to my knowledge, zero trouble
reports.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-02 11:29:24 -0400, Robert Haas wrote:
 On Tue, Jun 2, 2015 at 8:56 AM, Andres Freund and...@anarazel.de wrote:
  But what *definitely* looks wrong to me is that a TruncateMultiXact() in
  this scenario now (since a couple weeks ago) does a
  SimpleLruReadPage_ReadOnly() in the members slru via
  find_multixact_start(). That just won't work acceptably when we're not
  yet consistent. There very well could not be a valid members segment at
  that point?  Am I missing something?
 
 Yes: that code isn't new.

Good point.

 TruncateMultiXact() called SimpleLruReadPage_ReadOnly() directly in
 9.3.0 and every subsequent release until 9.3.7/9.4.2.

But back then TruncateMultiXact() wasn't called during recovery. But
you're right in that it didn't seem to have produced attributable
bug reports since 9.3.2, where vacuuming during recovery was
introduced. So it indeed doesn't seem as urgent as fixing the new
callsites.

 That would be a departure from the behavior of every existing release
 that includes this code based on, to my knowledge, zero trouble
 reports.

On the other hand we're now at about bug #5 attributable to the odd way
truncation works for multixacts. It's obviously complex and hard to get
right. It makes it harder to cope with the wrong values left in
datminxid etc. So I'm still wondering whether fixing this for good isn't
the better approach.

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 11:27 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-06-02 11:16:22 -0400, Robert Haas wrote:
 I'm having trouble figuring out what to do about this.  I mean, the
 essential principle of this patch is that if we can't count on
 relminmxid, datminmxid, or the control file to be accurate, we can at
 least look at what is present on the disk.  If we also cannot count on
 that to be accurate, we are left without any reliable source of
 information.  Consider a hypothetical cluster where all our stored
 minmxids of whatever form are corrupted (say, all change to 1) and in
 addition there are stray files in pg_multixact.  I don't think there's
 really any way to get ourselves out of trouble in that scenario.

 If we were to truncate after vacuum, and only on the primary (via WAL
 logging), we could, afaics, just rely on all the values to be
 recomputed. I mean relminmxid will be recomputed after a vacuum, and
 thus, after some time, will datminmxid and the control file value.  We
 could just force a value of 1 to always trigger anti-wraparound vacuums
  (or wait for that to happen implicitly, to delay the impact?). That
  should then fix the problem in a relatively short amount of time?

The exact circumstances under which we're willing to replace a
relminmxid with a newly-computed one that differs are not altogether
clear to me, but there's an if statement protecting that logic, so
there are some circumstances in which we'll leave the existing value
intact.  If we force non-stop vacuuming in that scenario, autovacuum
will just run like crazy without accomplishing anything, which
wouldn't be good.  It would similarly do so when the oldest MXID
reference in the relation is in fact 1, but that value can't be
vacuumed away yet.

Also, the database might be really big.  Even if it were true that a
full scan of every table would get us out of this state, describing
the time that it would take to do that as relatively short seems to
me to be considerably understating the impact of a full-cluster
VACUUM.

With regard to the more general question of WAL-logging this, are you
going to work on that?  Are you hoping Alvaro or I will work on that?
Should we draw straws?  It seems like somebody needs to do it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database designpattern - product feature

2015-06-02 Thread Dorian Hoxha
So product_feature is only 1 row for each product_type, right ? Looks good.

On Tue, Jun 2, 2015 at 1:15 PM, Adrian Stern adrian.st...@unchained.ch
wrote:

 Sorry. Will do in the future.

 Product_freature is a table describing the valid keys for product
 features. With this it is possible to limit keys to specific groups of
 products.

 Kind regards

 Adrian Stern
 unchained - web solutions

 adrian.st...@unchained.ch
 +41 79 292 83 47

 On Tue, Jun 2, 2015 at 12:58 PM, Dorian Hoxha dorian.ho...@gmail.com
 wrote:

 Please do reply-all so you also reply to the list.

 It's not ~good to develop with sqlite and deploy on postgresql. You should
 have your 'dev' as close to 'prod' as possible.

 Product_feature is another table in this case ?

 On Tue, Jun 2, 2015 at 11:44 AM, Adrian Stern adrian.st...@unchained.ch
 wrote:

 Database changeability is not a requirement. It just comes with django
 and makes development so much easier since I can develop on sqlite and
 deploy them wherever I want. Django orm is not great I agree, but it
 certainly does not suck, there are alternatives like sqlalchemy which are
 far more powerful.

 But yea. I get what you're trying to tell me. And I agree, this
 postgresql feature for jsonb look really nice. Much more easy to use than
 the whole xquery stuff. The complete EAV Pattern described on wikipedia is
 actually pretty complex and seems not easy at all to implement using
 django. Therefore, I maybe should run a few tests with the json plugins.

 So let my sketch another approach.

 PRODUCT - P
 - name
 - type
 - features (jsonb)

 PRODUCT_FEATURE - PF
 - name
 - description
 - datatype
 - validation

 P now has the features field of type jsonb, which allows keys specified
 in PF together with a value of datatype or simply a valid one. PF holds
 the key-name, its datatype for generating the GUI, and some validation
 pattern for input sanitizing. There is no relation between the Tables.

 Getting the description is not an issue. I could even create a view
 mapping the jsonb keys to rows.

 Yes I like your approach. Is there anything I should be aware of? Some
 do's and don'ts or known pitfalls?






Re: [GENERAL] Database designpattern - product feature

2015-06-02 Thread William Dunn
Hello Adrian,

May I ask why you need a non-standard model? By standard models I mean the
following:

1) When you don't need to have subclass specific database constraints: All
subclasses in the same table, subclasses that do not have an attribute have
that column null. This has the best performance because no joins are
required when querying both superclass and subclass attributes, and all the
data for an object will be in the same block on disk. The disadvantage of
this is that you cannot enforce constraints, such as not-null, on subclass
specific attributes columns because the constraints would also be applied
to the superclasses. If you can ensure that your application is the only
way data can be inserted, those constraints will naturally be enforced there.

2) When you need subclass specific database constraints: Use an ORM such as
Django's ORM or SQLAlchemy which has one table with the superclass
attributes and a table for each subclass with their subclass specific
attributes. This is slower because joins will be needed and the data for an
object will be in 2 different blocks of disk but it allows you to enforce
constraints within the database which will be checked whenever any
application tries to insert values. There is a lot of complexity added
because there will be so many small tables and indexes but the ORM takes
care of that for you.
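
A hedged sketch of the two layouts (all table and column names invented for 
illustration):

    -- (1) single table: subclass columns nullable, no per-subclass constraints
    CREATE TABLE product (
        id         serial PRIMARY KEY,
        name       text NOT NULL,
        kind       text NOT NULL,  -- discriminator
        voltage    int,            -- only for kind = 'appliance'
        page_count int             -- only for kind = 'book'
    );

    -- (2) joined tables: per-subclass constraints enforceable
    CREATE TABLE product_base (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );
    CREATE TABLE product_book (
        id         int PRIMARY KEY REFERENCES product_base (id),
        page_count int NOT NULL    -- subclass-specific NOT NULL works here
    );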

*Will J. Dunn*
*willjdunn.com http://willjdunn.com*

On Mon, Jun 1, 2015 at 10:35 AM, Adrian Stern adrian.st...@unchained.ch
wrote:

 Hi, I'm new

 I've been working as the sole administrator of various postgresql projects
 for a while now. All of which where django projects.
 Since a new project is starting and we've found the need for a more
 generic approach I would like to ask a few questions.

 I would like to implement a pattern similar to the product feature pattern
 explained in the silverstone book - the data model resource book vol 1. It
 is simply explained. There is a Table PRODUCT holding the fields all the
 products share, then there is the table PRODUCT_FEATURE, both of them in a
 “many to many“ relationship.

 PRODUCT --- m  n --- PRODUCT_FEATURE (a table in between of
 course)

 PRODUCT_FEATURE -- PF
 PRODUCT -- P
 TABLE IN BETWEEN -- TIB

 PF defines the feature Type while P stands for the product the feature is
 applied to. Some of these PF can have values of different types (text,
 numbers, floating, blob, ...) which would be applied to TIB.

 I don't like the idea of having numerous empty fields prepared in TIB,
 just to store occasional values of different types, therefore I need to
 specialize those TIB Values.

 Now how would I do That?

 I could create some tables solely for the means of holding [NUM], [TEXT],
 [BLOB], [ETC] and reference them with the TIB PK. When using them I could
 create a view TIBV containing all of [NUM, TEXT, BLOB, ETC] in the same
 column called Value, and join it with TIB to get the value of a PF.

 But is this a good idea?
 Is there a better way?

 Also, I would have to create a pivot table in order to list all the
 products with all the features. As this is implemented in C (afaik) I
 suppose it is rather fast or at least fast enough, but I do not actually
 know. What I know is, there are about 30 Product Types and around 50
 possible product features. One product can have up to approximately 25 PF
 but are mostly around 5 to 10.

 Do you think a pivot table is a good idea?
 What alternative do i have?

 There is room for caching since the dataset is not updated too often.

 regards, adrian



Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 11:36 AM, Andres Freund and...@anarazel.de wrote:
 That would be a departure from the behavior of every existing release
 that includes this code based on, to my knowledge, zero trouble
 reports.

 On the other hand we're now at about bug #5 attributable to the odd way
 truncation works for multixacts. It's obviously complex and hard to get
 right. It makes it harder to cope with the wrong values left in
 datminxid etc. So I'm still wondering whether fixing this for good isn't
 the better approach.

It may well be.  But I think we should do something more surgical
first.  Perhaps we can justify the pain and risk of making changes to
the WAL format in the back-branches, but let's not do it in a rush.
If we can get this patch to a state where it undoes the damage
inflicted in 9.3.7/9.4.2, then we will be in a state where we have as
much reliability as we had in 9.3.6 plus the protections against
member-space wraparound added in 9.3.7 - which, like the patch I'm
proposing now, were directly motivated by multiple, independent bug
reports.  That seems like a good place to get to.  If nothing else, it
will buy us some time to figure out what else we want to do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Noah Misch
On Tue, Jun 02, 2015 at 11:16:22AM -0400, Robert Haas wrote:
 On Tue, Jun 2, 2015 at 1:21 AM, Noah Misch n...@leadboat.com wrote:
  On Mon, Jun 01, 2015 at 02:06:05PM -0400, Robert Haas wrote:

  Granted.  Would it be better to update both functions at the same time, and
  perhaps to make that a master-only change?  Does the status quo cause more
  practical harm via SetMultiXactIdLimit() than via SetTransactionIdLimit()?
 
 It does in the case of the call to find_multixact_start().  If that
 fails, we take the server down for no good reason, as demonstrated by
 the original report. I'll revert the changes to the other two things
 in this function that I changed to be protected by did_trim.

Sounds good.

  There's no data loss if 1 <= true_minmxid <= 2^31 at the time minmxid got
  set to 1.  Otherwise, data loss is possible.
 
 Yes, but in that scenario, the log message you propose wouldn't be
 triggered.  If true_minmxid > 2^31, then the stored minmxid will not
 precede the files on disk; it will follow it (assuming the older stuff
 hasn't been truncated, as is likely).

Ah, quite right.

missing pg_multixact/members files; begins at MultiXactId %u, expected 
  %u
 
 This seems misleading.  In the known failure case, it's not that the
 pg_multixact files are unexpectedly missing; it's that we incorrectly
 think that they should still be there.  Maybe:
 
 oldest MultiXactId on disk %u follows expected oldest MultiXact %u

Your wording is better.

  For good measure, perhaps emit this when lastCheckpointedOldest > earliest
  by more than one segment:
 
excess pg_multixact/members files; begins at MultiXactId %u, expected %u
 
 So, this scenario will happen whenever the system was interrupted in
 the middle of a truncation, or when the system is started from a base
 backup and a truncation happened after files were copied.  I'm wary of
 giving users the idea that this is an atypical event.  Perhaps a
 message at DEBUG1?

DEBUG1 works for me, or feel free to leave it out.

  I can see that there might be an issue there, but I can't quite put my
  finger on it well enough to say that it definitely is an issue.  This
  code is examining the offsets space rather than the members space, and
  the protections against offsets wraparound have been there since the
  original commit of this feature
  (0ac5ad5134f2769ccbaefec73844f8504c4d6182).  To my knowledge we have
  no concrete evidence that there's ever been a problem in this area.
  It seems like it might be safer to rejigger that code so that it
  considers distance-behind-current rather than using the wrapped
  comparison logic, but I'm reluctant to start rejiggering more things
  without knowing what specifically I'm fixing.
 
  Anything that could cause the pg_multixact/offsets tail to rotate from being
  in the past to being in the future poses this risk.  (This is the tail from
  the perspective of files on disk; pg_control, datminmxid, and MultiXactState
  notions of the tail play no part here.)  I had in mind that the pg_upgrade
  datminmxid=1 bug could be a tool for achieving that, but I've been
  unsuccessful so far at building a credible thought experiment around it.  
  Near
  the beginning of your reply, you surmised that this could happen between a
  VACUUM's SetMultiXactIdLimit() and the next checkpoint's 
  TruncateMultiXact().
  Another vector is unlink() failure on a segment file.  SlruDeleteSegment()
  currently ignores the unlink() return value; the only harm has been some 
  disk
  usage.  With GetOldestMultiXactOnDisk() as-proposed, successful unlink() is
  mandatory to forestall the appearance of a wrapped state.
 
 I'm having trouble figuring out what to do about this.  I mean, the
 essential principle of this patch is that if we can't count on
 relminmxid, datminmxid, or the control file to be accurate, we can at
 least look at what is present on the disk.  If we also cannot count on
 that to be accurate, we are left without any reliable source of
 information.  Consider a hypothetical cluster where all our stored
 minmxids of whatever form are corrupted (say, all change to 1) and in
 addition there are stray files in pg_multixact.  I don't think there's
 really any way to get ourselves out of trouble in that scenario.

We could notice the problem and halt.  You mentioned above the possibility to
have SlruScanDirCbFindEarliest() check distance-behind-current.  Suppose it
used the current oldestMulti from pg_control as a reference point and
discovered the multixact on disk (a) most far behind that reference point and
(b) most far ahead of that reference point.  If MultiXactIdPrecedes(a, b),
we're good; adopt (a) as the new datminmxid.  Account for the possibility that
the space isn't really wrapped, but our reference point was totally wrong.
Once we determine that the space is wrapped, bail.

As you discuss downthread, the mxids actually present in t_max define the ones
we need.  We could scan them all and set authoritative 

Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-02 11:37:02 -0400, Robert Haas wrote:
 The exact circumstances under which we're willing to replace a
 relminmxid with a newly-computed one that differs are not altogether
 clear to me, but there's an if statement protecting that logic, so
 there are some circumstances in which we'll leave the existing value
 intact.

I guess we'd have to change that then.

 It would similarly do so when the oldest MXID reference in the
 relation is in fact 1, but that value can't be vacuumed away yet.

I'd thought of just forcing consumption of one multixact in that
case. Not pretty, but imo acceptable.

 Also, the database might be really big.  Even if it were true that a
 full scan of every table would get us out of this state, describing
 the time that it would take to do that as relatively short seems to
 me to be considerably understating the impact of a full-cluster
 VACUUM.

Well. We're dealing with a corrupted cluster. Having a way out that's
done automatically, even if it takes a long while, is pretty good
already. In many cases the corruption (i.e. pg_upgrade) happened long
ago, so the table's relminmxid will already have been recomputed.  I
think that's acceptable.

 With regard to the more general question of WAL-logging this, are you
 going to work on that?  Are you hoping Alvaro or I will work on that?
 Should we draw straws?  It seems like somebody needs to do it.

I'm willing to invest the time to develop an initial version, but I'll
need help evaluating it. I don't have many testing resources available
atm, and I'm not going to trust stuff I developed while travelling by
just looking at the code.

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Robert Haas
On Tue, Jun 2, 2015 at 11:44 AM, Andres Freund and...@anarazel.de wrote:
 On 2015-06-02 11:37:02 -0400, Robert Haas wrote:
 The exact circumstances under which we're willing to replace a
 relminmxid with a newly-computed one that differs are not altogether
 clear to me, but there's an if statement protecting that logic, so
 there are some circumstances in which we'll leave the existing value
 intact.

 I guess we'd have to change that then.

Yeah, but first we'd need to assess why it's like that.  Tom was the
one who installed the current logic, but I haven't been able to fully
understand it.

 It would similarly do so when the oldest MXID reference in the
 relation is in fact 1, but that value can't be vacuumed away yet.

 I'd thought of just forcing consumption of one multixact in that
 case. Not pretty, but imo acceptable.

What if multixact 1 still has living members?

 Also, the database might be really big.  Even if it were true that a
 full scan of every table would get us out of this state, describing
 the time that it would take to do that as relatively short seems to
 me to be considerably understating the impact of a full-cluster
 VACUUM.

 Well. We're dealing with a corrupted cluster. Having a way out that's
 done automatically, even if it takes a long while, is pretty good
 already. In many cases the corruption (i.e. pg_upgrade) happened long
 ago, so the table's relminmxid will already have been recomputed.  I
 think that's acceptable.

I'm a long way from being convinced the automated recovery is
possible.  There are so many different scenarios here that it's very
difficult to reason generally about what the right thing to do is.
I agree it would be nice if we had it, though.

 With regard to the more general question of WAL-logging this, are you
 going to work on that?  Are you hoping Alvaro or I will work on that?
 Should we draw straws?  It seems like somebody needs to do it.

 I'm willing to invest the time to develop an initial version, but I'll
 need help evaluating it. I don't have many testing resources available
 atm, and I'm not going to trust stuff I developed while travelling by
 just looking at the code.

I'm willing to help with that.  Hopefully I'm not the only one, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgres db permissions

2015-06-02 Thread Steve Pribyl
Good Afternoon,

Built a fresh 9.3 postgres server and added some users and noticed that any 
user can create tables in any database including the postgres database by 
default.

Have I missed some step in securing the default install?

Steve Pribyl



Steve Pribyl | Senior Systems Engineer
Akuna Capital LLC
36 S Wabash, Suite 310 Chicago IL 60603 USA | www.akunacapital.com 
http://www.akunacapital.com
p: +1 312 994 4646 | m: 847-343-2349 | f: +1 312 750 1667 | 
steve.pri...@akunacapital.com

Please consider the environment, before printing this email.

This electronic message contains information from Akuna Capital LLC that may be 
confidential, legally privileged or otherwise protected from disclosure. This 
information is intended for the use of the addressee only and is not offered as 
investment advice to be relied upon for personal or professional use. 
Additionally, all electronic messages are recorded and stored in compliance 
pursuant to applicable SEC rules. If you are not the intended recipient, you 
are hereby notified that any disclosure, copying, distribution, printing or any 
other use of, or any action in reliance on, the contents of this electronic 
message is strictly prohibited. If you have received this communication in 
error, please notify us by telephone at (312)994-4640 and destroy the original 
message.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres db permissions

2015-06-02 Thread Joshua D. Drake


On 06/02/2015 10:36 AM, Steve Pribyl wrote:


Good Afternoon,

Built a fresh 9.3 postgres server and added some users and noticed that any 
user can create tables in any database including the postgres database by 
default.

Have I missed some step in securing the default install?


How exactly did you add the users?
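
For reference, a stock install lets every role create objects in the public 
schema because of a default grant to PUBLIC; a minimal lockdown sketch, run 
as superuser and per database (app_role is a hypothetical role):

    REVOKE CREATE ON SCHEMA public FROM PUBLIC;
    REVOKE ALL ON DATABASE postgres FROM PUBLIC;
    GRANT CONNECT ON DATABASE postgres TO app_role;
    GRANT CREATE ON SCHEMA public TO app_role;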

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 - 9.4.2 problem: could not access status of transaction 1

2015-06-02 Thread Andres Freund
On 2015-06-02 11:49:56 -0400, Robert Haas wrote:
 On Tue, Jun 2, 2015 at 11:44 AM, Andres Freund and...@anarazel.de wrote:
  On 2015-06-02 11:37:02 -0400, Robert Haas wrote:
  The exact circumstances under which we're willing to replace a
  relminmxid with a newly-computed one that differs are not altogether
  clear to me, but there's an if statement protecting that logic, so
  there are some circumstances in which we'll leave the existing value
  intact.
 
  I guess we'd have to change that then.
 
 Yeah, but first we'd need to assess why it's like that.  Tom was the
 one who installed the current logic, but I haven't been able to fully
 understand it.

We're talking about:

        /* Similarly for relminmxid */
        if (MultiXactIdIsValid(minmulti) &&
            pgcform->relminmxid != minmulti &&
            (MultiXactIdPrecedes(pgcform->relminmxid, minmulti) ||
             MultiXactIdPrecedes(ReadNextMultiXactId(), pgcform->relminmxid)))
        {
            pgcform->relminmxid = minmulti;
            dirty = true;
        }

right? Before that change (78db307bb/87f830e0ce) we only updated
relminmxid if the new value was newer than the old one. That's to avoid
values from going backwards, e.g. when a relation is first VACUUM
FREEZEd and then a normal VACUUM is performed (these values are
unfortunately based on the cutoff values instead of the observed
minima). The new thing is the
|| MultiXactIdPrecedes(ReadNextMultiXactId(), pgcform->relminmxid)
which is there to recognize values from the future, e.g. the 1
erroneously left in place by pg_upgrade.

Makes sense?

  It would similarly do so when the oldest MXID reference in the
  relation is in fact 1, but that value can't be vacuumed away yet.
 
  I'd thought of just forcing consumption of one multixact in that
  case. Not pretty, but imo acceptable.
 
 What if multixact 1 still has living members?

I think we should just trigger the logic if 1 is below the multi
freezing horizon - in which case a) the 1 will automatically be
replaced, because the horizon is newer b) it can't have a living member
anyway.

- Andres


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general