Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu

> On Jun 9, 2017, at 4:20 PM, Adrian Klaver  wrote:
> 
> On 06/09/2017 02:01 PM, armand pirvu wrote:
>>> On Jun 9, 2017, at 3:52 PM, Adrian Klaver  wrote:
>>> 
>>> On 06/09/2017 01:31 PM, armand pirvu wrote:
>>> 
> 
> 
> 
>> By temporary tables I mean just regular tables, not tables created by "create 
>> temporary table". I should have been more precise. We call them temporary 
>> since we drop them after all is said and done. Maybe we should change what 
>> we call them.
> 
> You will want to look at this before making that decision:
> 
> https://www.postgresql.org/docs/9.6/static/sql-createtable.html
> 
> Temporary Tables
> 
> 
> Basically, temporary tables are session specific.
> 

I noticed that, but since we use multiple schemas we cannot have a session temp 
table in a non-temp schema

We have those in place for a specific reason: in case we mess up some processing 
in between, we want to be able to get back the data we started with.
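For what it's worth, the session-scoping is easy to demonstrate; a true temporary table always lands in the session's private pg_temp schema, no matter what search_path says (a hypothetical sketch, table and schema names invented):

```sql
-- Session 1: a true temporary table; it is created in pg_temp_NNN,
-- not in an application schema, regardless of search_path.
CREATE TEMPORARY TABLE staging_load (id int, payload text);
SELECT relnamespace::regnamespace FROM pg_class WHERE relname = 'staging_load';
-- pg_temp_3 (or similar); invisible to other sessions, dropped at disconnect.

-- What we actually use: a regular table in a dedicated staging schema,
-- visible to every session and dropped explicitly when processing is done.
CREATE TABLE staging.staging_load (id int, payload text);
-- ... load, process, verify ...
DROP TABLE staging.staging_load;
```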



> 
>> 9.5, but considering I could then track what autovacuum does, I was thinking 
>> of using that as a reason in favor of the upgrade
> 
> It is nice, you just have to weigh it against whatever effect the other changes:
> 
> https://www.postgresql.org/docs/9.6/static/release-9-6.html
> 
> might have on your setup.
> 
> 

As of now I don’t think we have a drawback per se. We are poised to go live on 
Postgres soon, though, so I was thinking maybe we should have this upgrade done 
before going live? Just a thought



> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-09 Thread Neil Anderson
On 9 Jun 2017 14:59,  wrote:

Neil Anderson  wrote:

> I've been exploring the pg_catalog tables and pointed a couple of tools at
> it to extract an ER diagram for a blog post. At first I thought it was a
bug
> in the drawing tool but it appears that the relationships between the
> pg_catalog tables are implicit rather than enforced by the database, is
that correct?

Every time I have to dive into the pg_* tables, I really want such a
diagram because
the relationships aren't obvious to me, so I've been looking for a diagram
like that
and haven't found one. I've also considered trying to make one, but with
all of the
custom types, my normal ERD tool isn't very cooperative.

Would you be willing to share the diagram with the list when you're done?

Would you be willing to share the names of the tools and process you used
to create the diagram?


Of course. My plan is to copy the catalogs into a new schema so I can add
fk constraints and then get something like navicat or datagrip to draw it.
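That could look roughly like this (a sketch under assumptions: the scratch schema name is invented, and on pre-12 servers the catalogs' oid is a hidden system column, so it has to be selected explicitly):

```sql
-- Copy two catalogs into a scratch schema where constraints are allowed.
CREATE SCHEMA catalog_erd;
CREATE TABLE catalog_erd.pg_class     AS SELECT oid, * FROM pg_catalog.pg_class;
CREATE TABLE catalog_erd.pg_attribute AS SELECT * FROM pg_catalog.pg_attribute;

-- The catalogs declare no keys, so add them before the implicit FKs.
ALTER TABLE catalog_erd.pg_class ADD PRIMARY KEY (oid);
ALTER TABLE catalog_erd.pg_attribute
  ADD FOREIGN KEY (attrelid) REFERENCES catalog_erd.pg_class (oid);
-- Repeat for the other relationships, then point the ERD tool at catalog_erd.
```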



Thanks,
Kevin




Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread Adrian Klaver

On 06/09/2017 02:01 PM, armand pirvu wrote:



On Jun 9, 2017, at 3:52 PM, Adrian Klaver  wrote:

On 06/09/2017 01:31 PM, armand pirvu wrote:







By temporary tables I mean just regular tables, not tables created by "create 
temporary table". I should have been more precise. We call them temporary since 
we drop them after all is said and done. Maybe we should change what we call them.


You will want to look at this before making that decision:

https://www.postgresql.org/docs/9.6/static/sql-createtable.html

Temporary Tables


Basically, temporary tables are session specific.




9.5, but considering I could then track what autovacuum does, I was thinking 
of using that as a reason in favor of the upgrade


It is nice, you just have to weigh it against whatever effect the other changes:

https://www.postgresql.org/docs/9.6/static/release-9-6.html

might have on your setup.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-09 Thread Tom Lane
Jeff Janes  writes:
> On Fri, Jun 9, 2017 at 9:13 AM, Eric Lemoine 
> wrote:
>> I now think that the performance bug is not related to the fn_extra
>> thing. I had hope but not anymore :) I don't see where the Pointcloud
>> and PostGIS extensions could conflict.

> Can you run 'perf top' on the slow query?  That might pretty quickly tell
> you which function is taking up your time.

Another idea, if you haven't tried it already, is to run these test cases
in a server built with --enable-debug and --enable-cassert.  The memory
clobber stuff that's enabled by the latter is very good at turning coding
errors into reproducible, debuggable crashes ;-)
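For anyone following along, such a build might be configured like this (a sketch; the install prefix and parallelism are arbitrary):

```shell
# Debug/assert-enabled build; -O0 keeps backtraces readable in a debugger.
./configure --prefix=$HOME/pg-debug --enable-debug --enable-cassert CFLAGS="-O0 -g"
make -j4 && make install
# Note: --enable-cassert adds noticeable overhead; not for production use.
```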

regards, tom lane




Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu

> On Jun 9, 2017, at 5:42 PM, Adrian Klaver  wrote:
> 
> On 06/09/2017 02:26 PM, armand pirvu wrote:
>>> On Jun 9, 2017, at 4:20 PM, Adrian Klaver  wrote:
>>> 
>>> On 06/09/2017 02:01 PM, armand pirvu wrote:
> On Jun 9, 2017, at 3:52 PM, Adrian Klaver  
> wrote:
> 
> On 06/09/2017 01:31 PM, armand pirvu wrote:
> 
>>> 
>>> 
>>> 
 By temporary tables I mean just regular tables, not tables created by 
 "create temporary table". I should have been more precise. We call them 
 temporary since we drop them after all is said and done. Maybe we should 
 change what we call them.
>>> 
>>> You will want to look at this before making that decision:
>>> 
>>> https://www.postgresql.org/docs/9.6/static/sql-createtable.html
>>> 
>>> Temporary Tables
>>> 
>>> 
>>> Basically, temporary tables are session specific.
>>> 
>> I noticed that, but since we use multiple schemas we cannot have a session 
>> temp table in a non-temp schema
> 
> A true temporary table is going to be in its own temporary schema:
> 
> https://www.postgresql.org/docs/9.6/static/runtime-config-client.html#GUC-SEARCH-PATH
>  
> 
> 
> "Likewise, the current session's temporary-table schema, pg_temp_nnn, is 
> always searched if it exists. It can be explicitly listed in the path by 
> using the alias pg_temp. If it is not listed in the path then it is searched 
> first (even before pg_catalog). However, the temporary schema is only 
> searched for relation (table, view, sequence, etc) and data type names. It is 
> never searched for function or operator names.”

The search_path I am aware of. But it is more than that in our case, hence we 
decided to have several schemas holding our quote/unquote temp tables; we could 
just as well call them staging tables.

Their purpose is to help with the transition from the old system to the new (in 
some ways, anyway).



> 
>> We have those in place for a specific reason: in case we mess up some processing 
>> in between, we want to be able to get back the data we started with.
>>> 
> 
>> As of now I don’t think we have a drawback per se. We are poised to go live 
>> on Postgres soon, though, so I was thinking maybe we should have this upgrade 
>> done before going live? Just a thought
> 
> Yeah well, if you are in pre-production why not; if for no other reason than you 
> get another year of community support on the back end.
> 

My sentiments exactly


> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 


Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu

> On Jun 9, 2017, at 11:23 AM, Adrian Klaver  wrote:
> 
> On 06/09/2017 09:13 AM, armand pirvu wrote:
>>> On Jun 9, 2017, at 11:01 AM, Adrian Klaver wrote:
>>> 
>>> On 06/09/2017 08:45 AM, armand pirvu wrote:
 Hi
 Had a couple of processes blocking the vacuum so I terminated them using
 select pg_terminate_backend(pid);
 Running the following
 select distinct pid, backend_start, query_start, state_change, state, 
 query from pg_catalog.pg_stat_activity order by 1;
  pid  | backend_start |  query_start  |
  state_change  | state  | query
 ---+---+---+---++--
 10677 | 2017-06-09 10:25:49.189848-05 | 2017-06-09 10:33:43.598805-05 | 
 2017-06-09 10:33:43.599091-05 | idle   | SELECT 1
 11096 | 2017-06-09 10:27:03.686588-05 | 2017-06-09 10:33:56.28736-05  | 
 2017-06-09 10:33:56.287364-05 | active | select distinct pid, 
 backend_start, query_start, state_change, state, query from 
 pg_catalog.pg_stat_activity order by 1;
 13277 | 2017-06-09 07:48:49.506686-05 | 2017-06-09 07:48:52.887185-05 | 
 2017-06-09 07:48:52.887188-05 | active | autovacuum: VACUUM 
 csischema.tf_purchased_badge
 13484 | 2017-06-09 10:31:54.127672-05 | 2017-06-09 10:33:47.137938-05 | 
 2017-06-09 10:33:47.138226-05 | idle   | SELECT 1
 16886 | 2017-06-09 07:56:49.033893-05 | 2017-06-09 07:56:49.078369-05 | 
 2017-06-09 07:56:49.078371-05 | active | autovacuum: VACUUM 
 csischema.tf_purchases_person
 25387 | 2017-06-09 05:32:08.079397-05 | 2017-06-09 05:32:08.385728-05 | 
 2017-06-09 05:32:08.385731-05 | active | autovacuum: VACUUM 
 csischema.tf_demographic_response_person
 37465 | 2017-06-09 08:50:58.992002-05 | 2017-06-09 08:51:21.506829-05 | 
 2017-06-09 08:51:21.506831-05 | active | autovacuum: VACUUM 
 csischema.tf_transaction_item_person
 I did notice that state_change did not change one bit
>>> 
>>> Did the state change?
>>> 
>> No and that was what got me worried
> 
> Are these large tables?
> 
> 

I would say yes

select count(*) from csischema.tf_purchased_badge;
 9380749

select count(*) from csischema.tf_purchases_person;
 19902172

select count(*) from csischema.tf_demographic_response_person;
 80868561

select count(*) from csischema.tf_transaction_item_person;
 3281084

Interestingly enough, two completed

  relname   | seq_scan | seq_tup_read | idx_scan | 
idx_tup_fetch | n_tup_ins | n_tup_upd | n_live_tup | n_dead_tup | 
n_mod_since_analyze | last_vacuum |last_autovacuum| 
autovacuum_count 
+--+--+--+---+---+---+++-+-+---+--
 tf_transaction_item_person |  160 |0 |   476810 |   
1946119 |  2526 |473678 |3226110 |  0 |  116097 
| | 2017-06-09 11:15:24.701997-05 |2
 tf_purchased_badge |  358 |   1551142438 |  2108331 |   
7020502 |  5498 |   1243746 |9747336 | 107560 |  115888 
| | 2017-06-09 15:09:16.624363-05 |1


I did notice, though, that checkpoints seem to happen a bit too often, i.e. below 
5 min from start to end


2017-06-09 14:18:38.552 CDT,,,888,,593a1810.378,271,,2017-06-08 22:37:52 
CDT,,0,LOG,0,"checkpoint starting: time",""
2017-06-09 14:21:12.210 CDT,,,888,,593a1810.378,272,,2017-06-08 22:37:52 
CDT,,0,LOG,0,"checkpoint complete: wrote 12070 buffers (9.2%); 0 
transaction log file(s) added, 0 removed, 4 recycled; write=148.714 s, 
sync=4.834 s, total=153.657 s; sync files=17, longest=1.841 s, average=0.284 s; 
distance=89452 kB, estimate=89452 kB",""

2017-06-09 14:23:38.278 CDT,,,888,,593a1810.378,273,,2017-06-08 22:37:52 
CDT,,0,LOG,0,"checkpoint starting: time",""
2017-06-09 14:24:38.629 CDT,,,888,,593a1810.378,274,,2017-06-08 22:37:52 
CDT,,0,LOG,0,"checkpoint complete: wrote 593 buffers (0.5%); 0 transaction 
log file(s) added, 0 removed, 1 recycled; write=59.825 s, sync=0.474 s, 
total=60.350 s; sync files=8, longest=0.355 s, average=0.059 s; distance=26952 
kB, estimate=83202 kB",""

And also 

SELECT
total_checkpoints,
seconds_since_start / total_checkpoints / 60 AS minutes_between_checkpoints
FROM
(SELECT
EXTRACT(EPOCH FROM (now() - pg_postmaster_start_time())) AS seconds_since_start,
(checkpoints_timed+checkpoints_req) AS total_checkpoints
FROM pg_stat_bgwriter
) AS sub;


 total_checkpoints | minutes_between_checkpoints 
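The arithmetic in that query is trivial to sanity-check outside the database; a standalone sketch with made-up numbers:

```python
def minutes_between_checkpoints(seconds_since_start: float,
                                timed: int, requested: int) -> float:
    """Average minutes between checkpoints, mirroring the SQL above."""
    total = timed + requested
    return seconds_since_start / total / 60

# Example: 24h of uptime and 288 checkpoints -> one every 5 minutes,
# which matches the default checkpoint_timeout of 5min.
print(minutes_between_checkpoints(86_400, 280, 8))  # → 5.0
```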

Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread Adrian Klaver

On 06/09/2017 01:31 PM, armand pirvu wrote:



Are these large tables?




I would say yes

select count(*) from csischema.tf_purchased_badge;
  9380749

select count(*) from csischema.tf_purchases_person;
  19902172

select count(*) from csischema.tf_demographic_response_person;
  80868561

select count(*) from csischema.tf_transaction_item_person;
  3281084

Interestingly enough, two completed


So the two 'smaller' tables which would make sense.



   relname   | seq_scan | seq_tup_read | idx_scan | 
idx_tup_fetch | n_tup_ins | n_tup_upd | n_live_tup | n_dead_tup | 
n_mod_since_analyze | last_vacuum |last_autovacuum| 
autovacuum_count

+--+--+--+---+---+---+++-+-+---+--
  tf_transaction_item_person |  160 |0 |   476810 | 
   1946119 |  2526 |473678 |3226110 |  0 |   
116097 | | 2017-06-09 11:15:24.701997-05 |2
  tf_purchased_badge |  358 |   1551142438 |  2108331 | 
   7020502 |  5498 |   1243746 |9747336 | 107560 |   
115888 | | 2017-06-09 15:09:16.624363-05 |1



I did notice, though, that checkpoints seem to happen a bit too often, i.e. 
below 5 min from start to end


You probably should take a look at:

https://www.postgresql.org/docs/9.6/static/wal-configuration.html

and

https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html#AUTOVACUUM



These tables suffer quite some data changes IIRC, but those come via some 
temp tables which reside in a temp schema, and some previous messages from 
the log suggest that it might have run into ladder locking in the early 
stages, i.e. a tmp table locked by vacuum, any further processing waiting 
for it, and that in turn causing some other waits on those largish tables


Did you do a manual VACUUM of the temporary tables?

If not see below.



Considering the temp ones are only for loading, and yes, some processing goes 
on in there, I am thinking of disabling autovacuum for the temp tables. Or 
should I disable autovacuum altogether and run it, say, as a batch job on a 
weekend night?


I don't think temporary tables are the problem as far as autovacuum goes:

https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html#AUTOVACUUM

"Temporary tables cannot be accessed by autovacuum. Therefore, 
appropriate vacuum and analyze operations should be performed via 
session SQL commands."
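In other words, the session that owns a true temp table has to do its own maintenance after a big load, e.g. (hypothetical table name):

```sql
-- Inside the session that created the temp table, after the bulk load:
ANALYZE pg_temp.staging_load;   -- give the planner fresh statistics
VACUUM pg_temp.staging_load;    -- reclaim space from updated/deleted rows
```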





If you are on Postgres 9.6:

https://www.postgresql.org/docs/9.6/static/progress-reporting.html
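For example, while an autovacuum worker is running, something like this shows how far along it is (column names per the 9.6 view):

```sql
-- Per-table vacuum progress; heap_blks_* let you estimate percent done.
SELECT a.pid, a.query, p.phase,
       p.heap_blks_scanned, p.heap_blks_total,
       round(100.0 * p.heap_blks_scanned
             / nullif(p.heap_blks_total, 0), 1) AS pct_scanned
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid);
```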



Aside from the vacuum improvements and such, is there any other strongly 
compelling reason to upgrade to 9.6?



That would depend on what version you are on now. If it is out of 
support then there would be a reason to upgrade, not necessarily to 9.6 
though.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
>
> As to your very last point (suggestions about other approaches), is it
> impossible or impractical to migrate to a scheme in which each user
> actually has a data base role and their own password? Postgresql has really
> great facility for managing database authorization and access by means of
> login roles assignable membership in group roles. Why not let the tool do
> what it can already do very effectively?
>
> -- B
>
>
If you mean each individual person having their own role, I'd say it's not
impossible, but impractical at the current moment, and (probably) desirable as
a longer-term goal.  There's just an awful lot of logic that would have to be
worked into the access control, as well as a way to create and maintain all
the roles.  Some day!  Maybe! :)
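For the record, the group-role pattern being suggested is not much code; a minimal sketch with invented names:

```sql
-- One group role per site, owning the privileges; people inherit from it.
CREATE ROLE site_a NOLOGIN;
GRANT USAGE ON SCHEMA site_a_data TO site_a;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA site_a_data TO site_a;

-- Individual login roles are then just memberships, cheap to create and drop.
CREATE ROLE alice LOGIN PASSWORD 'changeme' IN ROLE site_a;
CREATE ROLE bob   LOGIN PASSWORD 'changeme' IN ROLE site_a;
```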

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread Adrian Klaver

On 06/09/2017 02:26 PM, armand pirvu wrote:



On Jun 9, 2017, at 4:20 PM, Adrian Klaver  wrote:

On 06/09/2017 02:01 PM, armand pirvu wrote:

On Jun 9, 2017, at 3:52 PM, Adrian Klaver  wrote:

On 06/09/2017 01:31 PM, armand pirvu wrote:







By temporary tables I mean just regular tables, not tables created by "create 
temporary table". I should have been more precise. We call them temporary since 
we drop them after all is said and done. Maybe we should change what we call them.


You will want to look at this before making that decision:

https://www.postgresql.org/docs/9.6/static/sql-createtable.html

Temporary Tables


Basically, temporary tables are session specific.



I noticed that, but since we use multiple schemas we cannot have a session temp 
table in a non-temp schema


A true temporary table is going to be in its own temporary schema:

https://www.postgresql.org/docs/9.6/static/runtime-config-client.html#GUC-SEARCH-PATH

"Likewise, the current session's temporary-table schema, pg_temp_nnn, is 
always searched if it exists. It can be explicitly listed in the path by 
using the alias pg_temp. If it is not listed in the path then it is 
searched first (even before pg_catalog). However, the temporary schema 
is only searched for relation (table, view, sequence, etc) and data type 
names. It is never searched for function or operator names."




We have those in place for a specific reason: in case we mess up some processing 
in between, we want to be able to get back the data we started with.









As of now I don’t think we have a drawback per se. We are poised to go live on 
Postgres soon, though, so I was thinking maybe we should have this upgrade done 
before going live? Just a thought



Yeah well, if you are in pre-production why not; if for no other reason than 
you get another year of community support on the back end.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu

> On Jun 9, 2017, at 3:52 PM, Adrian Klaver  wrote:
> 
> On 06/09/2017 01:31 PM, armand pirvu wrote:
> 
>>> 
>>> Are these large tables?
> 
> 
>> I would say yes
>> select count(*) from csischema.tf_purchased_badge;
>>  9380749
>> select count(*) from csischema.tf_purchases_person;
>>  19902172
>> select count(*) from csischema.tf_demographic_response_person;
>>  80868561
>> select count(*) from csischema.tf_transaction_item_person;
>>  3281084
>> Interestingly enough, two completed
> 
> So the two 'smaller' tables which would make sense.
> 
>>   relname   | seq_scan | seq_tup_read | idx_scan | 
>> idx_tup_fetch | n_tup_ins | n_tup_upd | n_live_tup | n_dead_tup | 
>> n_mod_since_analyze | last_vacuum |last_autovacuum| 
>> autovacuum_count
>> +--+--+--+---+---+---+++-+-+---+--
>>  tf_transaction_item_person |  160 |0 |   476810 |
>> 1946119 |  2526 |473678 |3226110 |  0 |   
>> 116097 | | 2017-06-09 11:15:24.701997-05 |2
>>  tf_purchased_badge |  358 |   1551142438 |  2108331 |
>> 7020502 |  5498 |   1243746 |9747336 | 107560 |   
>> 115888 | | 2017-06-09 15:09:16.624363-05 |1
>> I did notice, though, that checkpoints seem to happen a bit too often, i.e. 
>> below 5 min from start to end
> 
> You probably should take a look at:
> 
> https://www.postgresql.org/docs/9.6/static/wal-configuration.html
> 
> and
> 
> https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html#AUTOVACUUM
> 
> 
> 
>> These tables suffer quite some data changes IIRC, but those come via some 
>> temp tables which reside in a temp schema, and some previous messages from 
>> the log suggest that it might have run into ladder locking in the early stages, 
>> i.e. a tmp table locked by vacuum, any further processing waiting for it, 
>> and that in turn causing some other waits on those largish tables
> 
> Did you do a manual VACUUM of the temporary tables?
> 
> If not see below.
> 
>> Considering the temp ones are only for loading, and yes, some processing goes on 
>> in there, I am thinking of disabling autovacuum for the temp tables. Or should 
>> I disable autovacuum altogether and run it, say, as a batch job on a weekend 
>> night?
> 
> I don't think temporary tables are the problem as far as autovacuum goes:
> 
> https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html#AUTOVACUUM
> 
> "Temporary tables cannot be accessed by autovacuum. Therefore, appropriate 
> vacuum and analyze operations should be performed via session SQL commands.”


By temporary tables I mean just regular tables, not tables created by "create 
temporary table". I should have been more precise. We call them temporary 
since we drop them after all is said and done. Maybe we should change what 
we call them.



> 
>>> If you are on Postgres 9.6:
>>> 
>>> https://www.postgresql.org/docs/9.6/static/progress-reporting.html
>>> 
>> Aside from the vacuum improvements and such, is there any other strongly 
>> compelling reason to upgrade to 9.6?
> 
> 
> That would depend on what version you are on now. If it is out of support 
> then there would be a reason to upgrade, not necessarily to 9.6 though.

9.5, but considering I could then track what autovacuum does, I was thinking 
of using that as a reason in favor of the upgrade



> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 11:22 AM, Joe Conway  wrote:

> On 06/09/2017 08:56 AM, Ken Tanzer wrote:
> > The extra logging would be undesirable.  Is there any way to skip that
> > entirely?  I see with block_log_statement I could dial down the logging
> > after switching users, but that would require the app to be aware of
> > what the current "normal" logging level was.
>
> Also from the README:
> ---
> Notes:
>
> If set_user.block_log_statement is set to "off", the log_statement
> setting is left unchanged.
> ---
>
> So assuming you do not normally have statements being logged, this would
> not change that.
>
>
Despite reading that, I was a little uncertain because the parameter is called
block_log_statement.  Conceptually it seems like it's really
log_all_statements, though I suspect you won't want to change the name
midstream.

FWIW, it would be clearer at least to me if you took the two statements in
the description:


   - log_statement setting is set to "all", meaning every SQL statement
   executed while in this state will also get logged.
   - If set_user.block_log_statement is set to "on", SET log_statement and
   variations will be blocked.

 And this one from the notes:

   - If set_user.block_log_statement is set to "off", the log_statement
   setting is left unchanged.


And combined them together:

If set_user.block_log_statement is set to "on", the log_statement setting is
set to "all", meaning every SQL statement executed while in this state will
also get logged, and SET log_statement and variations will be blocked.  If set
to "off", the log_statement setting is left unchanged.
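For context, a typical session using the extension would look roughly like this (a sketch based on my reading of the README; the role name is invented):

```sql
-- Connect as an unprivileged role, then escalate through the extension.
SELECT set_user('app_admin');   -- with block_log_statement = on, log_statement
                                -- is forced to 'all' for the escalated span
-- ... privileged work, every statement logged ...
SELECT reset_user();            -- drop back; log_statement is restored
```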


> > Any other pitfalls I'm not seeing, or reasons this might be a bad idea?
>
> As noted in the README, set_user will refuse to run inside a transaction
> block, but other than that none that I know of. Of course if you come up
> with any I'd be very interested to hear about them.
>
>
If I go this route, get it up and running and find any, I'll be happy to
let you know. :)

Thanks a lot for your help!

Ken



> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Joe Conway
On 06/09/2017 02:16 PM, Ken Tanzer wrote:
> FWIW, it would be clearer at least to me if you took the two statements
> in the description:
> 
>   * log_statement setting is set to "all", meaning every SQL statement
> executed while in this state will also get logged.
>   * If set_user.block_log_statement is set to "on", SET log_statement
> and variations will be blocked.
> 
>  And this one from the notes:
> 
>   * If set_user.block_log_statement is set to "off", the log_statement
> setting is left unchanged.
> 
> And combined them together:
> 
> If set_user.block_log_statement is set to "on", the log_statement setting is
> set to "all", meaning every SQL statement executed while in this state
> will also get logged, and SET log_statement and variations will be
> blocked.  If set to "off", the log_statement setting is left unchanged.

Sounds good, will make that change or something similar -- thanks for
the feedback.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Bruno Wolff III

On Thu, Jun 08, 2017 at 22:37:34 -0700,
 Ken Tanzer  wrote:


My approach was to have the initial connection made by the owner, and then
after successfully authenticating the user, to switch to the role of the
site they belong to.  After investigation, this still seems feasible but
imperfect.  Specifically, I thought it would be possible to configure such
that after changing to a more restricted role, it would not be possible to
change back.  But after seeing this thread (


How are you keeping the credentials of the owner from being compromised? It 
seems if you are worried about role changing, adversaries will likely also 
be in a position to steal the owner's credentials or hijack the connection 
before privileges are dropped.





[GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Arnaud L.

Hi

The pg_upgrade documentation for PostgreSQL 9.6 states that --link will 
use junction points on Windows.

Shouldn't it rather use hard links?
If I'm not mistaken, with junction points (i.e. soft links to 
directories), the old data dir cannot be removed.
With hard links to files, we can get rid of the old data dir once we are 
sure that the upgrade is fine.
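For reference, the command under discussion, sketched with invented paths:

```shell
pg_upgrade --link \
  --old-datadir "C:/pg/9.5/data" --new-datadir "C:/pg/9.6/data" \
  --old-bindir  "C:/pg/9.5/bin"  --new-bindir  "C:/pg/9.6/bin"
# --link requires both data directories on the same volume; with file-level
# hard links the old directory can be deleted once the upgrade is verified.
```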


Regards

--
Arnaud




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
As a follow up to this, a couple more questions from my wishful thinking
list:

1) Any way, whether kosher or hacky, to set an arbitrary yet immutable (for
the lifetime of the session) variable within a session?  Something akin to
DEFINE, which wouldn't allow redefinition?  A temp table that couldn't be
dropped?

2) Same as above, but set by the app before making the connection?  I'd
settle for this even if it meant two connections per page.

Either of those would facilitate more fine-grained, per-user access, which
would ultimately be better and might allow me to skip the creation and
management of roles.  Having discovered that RLS will not apply to all
my existing views and I'll need to rewrite them anyway, this seems like a
better thing to wish for!
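The closest approximation to (1) I know of is a custom GUC set once per session; it is session-scoped but, notably, not immutable, which is exactly the gap being asked about (a sketch; the myapp.* name is invented):

```sql
-- After authenticating the user, the app stamps the session:
SELECT set_config('myapp.current_user', 'alice', false);  -- false = session-wide

-- RLS policies and views can then read it back:
SELECT current_setting('myapp.current_user');
-- Caveat: nothing stops a later set_config/SET from redefining it, which is
-- exactly the irreversibility problem discussed in this thread.
```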

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Adrian Klaver

On 06/09/2017 07:07 AM, Bruce Momjian wrote:

On Fri, Jun  9, 2017 at 12:00:56PM +0200, Arnaud L. wrote:

Hi

The pg_upgrade documentation for PostgreSQL 9.6 states that --link will use
junction points on Windows.
Shouldn't it rather use hard links?
If I'm not mistaken, with junction points (i.e. soft links to directories),
the old data dir cannot be removed.
With hard links to files, we can get rid of the old data dir once we are sure
that the upgrade is fine.


I was told junction points on Windows were hard links and no one has
ever complained about not being able to remove them.



https://msdn.microsoft.com/en-us/library/windows/desktop/aa365006(v=vs.85).aspx

Seems to me the difference is hard links point to file, junctions to 
directories.


So if I am following:

https://en.wikipedia.org/wiki/NTFS_junction_point#Creating_or_deleting_a_junction_point

You remove the junction and then the directory it points to.

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Adrian Klaver

On 06/09/2017 07:39 AM, Arnaud L. wrote:

Le 9/06/2017 à 16:07, Bruce Momjian a écrit :

I was told junction points on Windows were hard links and no one has
ever complained about not being able to remove them.


Sorry, I think my explanation was not very clear.
You can remove the link, but the point is to remove the target (i.e. the 
old-data-dir).
You can do this with a hard link (there still exists a hardlink pointing 
to the inode so it remains), but with a soft link you end up with a link 
to nothing.
Deleting a junction target in Windows will work, but you'll have an 
error trying to access the junction directory (directory not found).


See this page for more details :
http://cects.com/overview-to-understanding-hard-links-junction-points-and-symbolic-links-in-windows/ 



Under "Hard Link (Linking for individual files)" :
"If the target is deleted, its content is still available through the 
hard link"


Junction Point (Directory Hard Link):
"If the target is moved, renamed or deleted, the Junction Point still 
exists, but points to a non-existing directory"


BUT, when I try to "pg_upgrade --link --check" with old-data-dir and 
new-data-dir on different volumes, I get an error saying that both 
directories must be on the same volume if --link is used.
So maybe pg_upgrade uses hard links (i.e. to files), and only the 
documentation is wrong by calling them junctions (i.e. soft links to 
directories)?


Looks that way. In file.c in ~/src/bin/pg_upgrade I see:

#ifdef WIN32
/* implementation of pg_link_file() on Windows */
static int
win32_pghardlink(const char *src, const char *dst)
{
	/*
	 * CreateHardLinkA returns zero for failure
	 * http://msdn.microsoft.com/en-us/library/aa363860(VS.85).aspx
	 */
	if (CreateHardLinkA(dst, src, NULL) == 0)
	{
		_dosmaperr(GetLastError());
		return -1;
	}
	else
		return 0;
}
#endif




Regards
--
Arnaud





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu
Hi

Had a couple of processes blocking the vacuum so I terminated them using 
select pg_terminate_backend(pid);

Running the following
select distinct pid, backend_start, query_start, state_change, state, query 
from pg_catalog.pg_stat_activity order by 1;
  pid  |         backend_start         |          query_start          |         state_change          | state  | query
-------+-------------------------------+-------------------------------+-------------------------------+--------+-------
 10677 | 2017-06-09 10:25:49.189848-05 | 2017-06-09 10:33:43.598805-05 | 2017-06-09 10:33:43.599091-05 | idle   | SELECT 1
 11096 | 2017-06-09 10:27:03.686588-05 | 2017-06-09 10:33:56.28736-05  | 2017-06-09 10:33:56.287364-05 | active | select distinct pid, backend_start, query_start, state_change, state, query from pg_catalog.pg_stat_activity order by 1;
 13277 | 2017-06-09 07:48:49.506686-05 | 2017-06-09 07:48:52.887185-05 | 2017-06-09 07:48:52.887188-05 | active | autovacuum: VACUUM csischema.tf_purchased_badge
 13484 | 2017-06-09 10:31:54.127672-05 | 2017-06-09 10:33:47.137938-05 | 2017-06-09 10:33:47.138226-05 | idle   | SELECT 1
 16886 | 2017-06-09 07:56:49.033893-05 | 2017-06-09 07:56:49.078369-05 | 2017-06-09 07:56:49.078371-05 | active | autovacuum: VACUUM csischema.tf_purchases_person
 25387 | 2017-06-09 05:32:08.079397-05 | 2017-06-09 05:32:08.385728-05 | 2017-06-09 05:32:08.385731-05 | active | autovacuum: VACUUM csischema.tf_demographic_response_person
 37465 | 2017-06-09 08:50:58.992002-05 | 2017-06-09 08:51:21.506829-05 | 2017-06-09 08:51:21.506831-05 | active | autovacuum: VACUUM csischema.tf_transaction_item_person

I did notice that state_change did not change one bit

Does that mean that something is not quite right with the vacuums ? 

Thank you
Armand







Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 6:42 AM, Joe Conway  wrote:

> On 06/08/2017 10:37 PM, Ken Tanzer wrote:
> > My approach was to have the initial connection made by the owner, and
> > then after successfully authenticating the user, to switch to the role
> > of the site they belong to.  After investigation, this still seems
> > feasible but imperfect.  Specifically, I thought it would be possible to
> > configure such that after changing to a more restricted role, it would
> > not be possible to change back.  But after seeing this thread
> > (http://www.postgresql-archive.org/Irreversible-SET-ROLE-td5828828.html),
> I'm
> > gathering that this is not the case.
>
> See set_user for a possible solution: https://github.com/pgaudit/
>
>
Thanks!  Looking at the README, it seems like the intended use case is the
opposite (escalating privileges), but if I understand correctly it could work anyway?

If I'm understanding, you could set_user() with a random token and thereby
prevent switching back?

The extra logging would be undesirable.  Is there any way to skip that
entirely?  I see with block_log_statement I could dial down the logging
after switching users, but that would require the app to be aware of what
the current "normal" logging level was.

Any other pitfalls I'm not seeing, or reasons this might be a bad idea?

Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread Adrian Klaver

On 06/09/2017 08:45 AM, armand pirvu wrote:

Hi

Had a couple of processes blocking the vacuum so I terminated them using
select pg_terminate_backend(pid);

Running the following
select distinct pid, backend_start, query_start, state_change, state, query 
from pg_catalog.pg_stat_activity order by 1;
   pid  | backend_start |  query_start  |   
  state_change  | state  |  
query
---+---+---+---++--
  10677 | 2017-06-09 10:25:49.189848-05 | 2017-06-09 10:33:43.598805-05 | 
2017-06-09 10:33:43.599091-05 | idle   | SELECT 1
  11096 | 2017-06-09 10:27:03.686588-05 | 2017-06-09 10:33:56.28736-05  | 
2017-06-09 10:33:56.287364-05 | active | select distinct pid, backend_start, 
query_start, state_change, state, query from pg_catalog.pg_stat_activity order 
by 1;
  13277 | 2017-06-09 07:48:49.506686-05 | 2017-06-09 07:48:52.887185-05 | 
2017-06-09 07:48:52.887188-05 | active | autovacuum: VACUUM 
csischema.tf_purchased_badge
  13484 | 2017-06-09 10:31:54.127672-05 | 2017-06-09 10:33:47.137938-05 | 
2017-06-09 10:33:47.138226-05 | idle   | SELECT 1
  16886 | 2017-06-09 07:56:49.033893-05 | 2017-06-09 07:56:49.078369-05 | 
2017-06-09 07:56:49.078371-05 | active | autovacuum: VACUUM 
csischema.tf_purchases_person
  25387 | 2017-06-09 05:32:08.079397-05 | 2017-06-09 05:32:08.385728-05 | 
2017-06-09 05:32:08.385731-05 | active | autovacuum: VACUUM 
csischema.tf_demographic_response_person
  37465 | 2017-06-09 08:50:58.992002-05 | 2017-06-09 08:51:21.506829-05 | 
2017-06-09 08:51:21.506831-05 | active | autovacuum: VACUUM 
csischema.tf_transaction_item_person

I did notice that state_change did not change one bit


Did the state change?



Does that mean that something is not quite right with the vacuums ?


Might want to take a look at:

https://www.postgresql.org/docs/9.6/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW




Thank you
Armand








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Harry Ambrose
Hi,

Maybe you could give some info on :
> - your ext3 mkfs and mount options (journal, barriers, etc)
>
/etc/fstab details below:
LABEL=/var/lib/pgsql  /var/lib/pgsql  ext3  defaults  1 2
LABEL=/tablespace1    /tablespace1    ext3  defaults  1 2
LABEL=/tablespace2    /tablespace2    ext3  defaults  1 2
LABEL=/tablespace3    /tablespace3    ext3  defaults  1 2
pg_default:

Filesystem features:  has_journal ext_attr resize_inode dir_index
filetype needs_recovery sparse_super large_file

Filesystem flags: signed_directory_hash

Default mount options:(none)

Filesystem state: clean

Errors behavior:  Continue

Filesystem OS type:   Linux

Inode count:  36634624

Block count:  146506767

Reserved block count: 7325338

Free blocks:  143785740

Free inodes:  36627866

First block:  0

Block size:   4096

Fragment size:4096

Reserved GDT blocks:  989

Blocks per group: 32768

Fragments per group:  32768

Inodes per group: 8192

Inode blocks per group:   512

RAID stride:  64

RAID stripe width:64

Filesystem created:   Fri Aug  9 16:11:53 2013

Last mount time:  Fri Apr 21 22:37:02 2017

Last write time:  Fri Apr 21 22:37:02 2017

Mount count:  2

Maximum mount count:  100

Last checked: Thu Sep 15 18:52:43 2016

Check interval:   31536000 (12 months, 5 days)

Next check after: Fri Sep 15 18:52:43 2017

Reserved blocks uid:  0 (user root)

Reserved blocks gid:  0 (group root)

First inode:  11

Inode size:   256

Required extra isize: 28

Desired extra isize:  28

Journal inode:8

Default directory hash:   half_md4

Journal backup:   inode blocks
tablespaces

Filesystem features:  has_journal ext_attr resize_inode dir_index
filetype needs_recovery sparse_super large_file

Filesystem flags: signed_directory_hash

Default mount options:(none)

Filesystem state: clean

Errors behavior:  Continue

Filesystem OS type:   Linux

Inode count:  73261056

Block count:  293013543

Reserved block count: 14650677

Free blocks:  286208439

Free inodes:  73174728

First block:  0

Block size:   4096

Fragment size:4096

Reserved GDT blocks:  954

Blocks per group: 32768

Fragments per group:  32768

Inodes per group: 8192

Inode blocks per group:   512

RAID stride:  64

RAID stripe width:128

Filesystem created:   Fri Aug  9 16:11:53 2013

Last mount time:  Fri Apr 21 22:37:02 2017

Last write time:  Fri Apr 21 22:37:02 2017

Mount count:  2

Maximum mount count:  100

Last checked: Thu Sep 15 18:52:43 2016

Check interval:   31536000 (12 months, 5 days)

Next check after: Fri Sep 15 18:52:43 2017

Reserved blocks uid:  0 (user root)

Reserved blocks gid:  0 (group root)

First inode:  11

Inode size:   256

Required extra isize: 28

Desired extra isize:  28

Journal inode:8

Default directory hash:   half_md4

Journal backup:   inode blocks

> - your controller setup (battery should be working good and cache mode set
> to write back)
>
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Disabled
Total Cache Size: 2.0 GB
Total Cache Memory Available: 1.8 GB
No-Battery Write Cache: Disabled
SSD Caching RAID5 WriteBack Enabled: False
SSD Caching Version: 1
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK

> - your disks setup (write cache should be disabled)
>
Write cache is disabled, see above.

> - you should check your syslogs/messages for any errors related to storage
>
No error messages found.

> - is your RAM ECC? Did you run any memtest?
>
Yes, memory is ECC. No error messages found.

> - is your CPU overheating ?
>
No overheating issues.

> - have you experienced any crashes/freezes ?
>
No crashes/freezes experienced.

Best wishes,
Harry


Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread bto...@computer.org


- Original Message -
> From: "Ken Tanzer" 
> To: "PG-General Mailing List" 
> Sent: Friday, June 9, 2017 1:37:34 AM
> Subject: [GENERAL] Limiting DB access by role after initial connection?
> 
> ...I'm working with an organization with a current production
> database.  Organizations in other locations using the same service delivery
> model want to share this database, with some but not all of the data
> restricted so that people at each site can see only that site's data.  I've
> been looking at doing this by creating a role for each location, ...
> Currently the database has
> one user, the owner, and access is controlled within the application by
> usernames and passwords within the DB.
> 
> My approach was to have the initial connection made by the owner, and then
> after successfully authenticating the user, to switch to the role of the
> site they belong to.  ...
> 
> 
> ...I'd also welcome any
> thoughts, suggestions or feedback about 1) and 2), or better approaches
> entirely.  Thanks!
> 


As to your very last point (suggestions about other approaches), is it 
impossible or impractical to migrate to a scheme in which each user actually 
has a database role and their own password? PostgreSQL has a really great 
facility for managing database authorization and access by means of login 
roles and assignable membership in group roles. Why not let the tool do what 
it can already do very effectively?

-- B





Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Bruce Momjian
On Fri, Jun  9, 2017 at 12:00:56PM +0200, Arnaud L. wrote:
> Hi
> 
> The pg_upgrade documentation for PostgreSQL 9.6 states that --link will use
> junction points on Windows.
> Shouldn't it rather use hard-links?
> If I'm not mistaken, with junction points (i.e. soft-links to directories),
> the old data dir cannot be removed.
> With hard-links to file, we can get rid of the old data dir once we are sure
> that the upgrade is fine.

I was told junction points on Windows were hard links and no one has
ever complained about not being able to remove them.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Arnaud L.

Le 9/06/2017 à 16:07, Bruce Momjian a écrit :

I was told junction points on Windows were hard links and no one has
ever complained about not being able to remove them.


Sorry, I think my explanation was not very clear.
You can remove the link, but the point is to remove the target (i.e. the 
old-data-dir).
You can do this with a hard link (there still exists a hardlink pointing 
to the inode so it remains), but with a soft link you end up with a link 
to nothing.
Deleting a junction target in Windows will work, but you'll have an 
error trying to access the junction directory (directory not found).


See this page for more details :
http://cects.com/overview-to-understanding-hard-links-junction-points-and-symbolic-links-in-windows/

Under "Hard Link (Linking for individual files)" :
"If the target is deleted, its content is still available through the 
hard link"


Junction Point (Directory Hard Link):
"If the target is moved, renamed or deleted, the Junction Point still 
exists, but points to a non-existing directory"


BUT, when I try to "pg_upgrade --link --check" with old-data-dir and 
new-data-dir on different volumes, I get an error saying that both 
directories must be on the same volume if --link is used.
So maybe pg_upgrade uses hard-links (i.e. to files), and only the 
documentation is wrong by calling them junctions (i.e. soft links to 
files) ?


Regards
--
Arnaud




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Joe Conway
On 06/08/2017 10:37 PM, Ken Tanzer wrote:
> My approach was to have the initial connection made by the owner, and
> then after successfully authenticating the user, to switch to the role
> of the site they belong to.  After investigation, this still seems
> feasible but imperfect.  Specifically, I thought it would be possible to
> configure such that after changing to a more restricted role, it would
> not be possible to change back.  But after seeing this thread
> (http://www.postgresql-archive.org/Irreversible-SET-ROLE-td5828828.html), I'm
> gathering that this is not the case.

See set_user for a possible solution: https://github.com/pgaudit/

HTH,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





[GENERAL] Enc: Extending SQL in C using VARIABLE length type in user defined type

2017-06-09 Thread Fabiana Zioti


I have been experimenting a little with creating my own types using shared 
objects written in C. Static types with fixed length were no problem for me, 
so I proceeded to variable-length types. I created an n-dimensional point 
structure called "Geo_Polygon" that contains a dynamic-length array of 
Geo_Point values. Like:


typedef struct Geo_Point{
  double x;
  double y;
  int id;
} Geo_Point;

typedef struct Geo_Polygon{
  int32 v_len;
  Geo_Point pontos[FLEXIBLE_ARRAY_MEMBER];
} Geo_Polygon;

The Geo_Point works fine.

I don't know how to convert the Geo_Polygon structure to and from its 
internal/external representation properly.

PG_FUNCTION_INFO_V1(geo_polygon_in);
PG_FUNCTION_INFO_V1(geo_polygon_out);

PG_FUNCTION_INFO_V1(geo_polygon_in);
Datum
geo_polygon_in(PG_FUNCTION_ARGS){
  char *str = PG_GETARG_CSTRING(0);
  char *new_position = NULL;
  char aux[1024];
  int i, dimension;
  Geo_Polygon *result;
  Geo_Point *point;
  double x, y;

  point = (Geo_Point *) palloc(sizeof(Geo_Point));

  new_position = strchr(str, '(');

  for (dimension = 0; *new_position != '\0'; dimension++){

memset(aux, 0, sizeof(aux));
for (i  = 0; *new_position != ')'; i++, ++new_position) {
  aux[i] = *new_position;
}
aux[i] = *new_position;
++new_position;
if (*new_position == ',') {
++new_position;
}

point = (Geo_Point *) repalloc(point, (dimension + 1) * sizeof(Geo_Point));
    if (sscanf(aux, " ( %lf , %lf )", &x, &y) != 2)
    ereport(ERROR,
    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
     errmsg("bad input for type geo_point: \"%s\"",
    str)));

point->x = x;
point->y = y;

  }

  int len = sizeof(Geo_Point) * dimension + VARHDRSZ;
  result = (Geo_Polygon *) palloc0(len);
  SET_VARSIZE(result, len);
  // copy the coordinates to the data area
  memcpy((void *) VARDATA(result),
   (void *) VARDATA (point),
   (dimension) * sizeof(Geo_Point));

   PG_RETURN_POINTER(result);

}

PG_FUNCTION_INFO_V1(geo_polygon_out);

Datum
geo_polygon_out(PG_FUNCTION_ARGS){
Geo_Polygon *geo_polygon = (Geo_Polygon *) PG_GETARG_POINTER(0);
char *result;
  int i, tam;

  tam = VARSIZE(geo_polygon->point);
  for (i = 0; i < tam; i++){
result = psprintf("(%g,%g)", geo_polygon->point[i].x, 
geo_polygon->point[i].y);
snprintf(result, 100, "(%g,%g)", geo_polygon->point[i].x, 
geo_polygon->point[i].y);
  }

  PG_RETURN_CSTRING(result);

}


I would be very glad, if somebody could provide me some help to this.

Note: I looked at cube.c but I did not understand it.
Thank you in advance

Best regards

Fabiana



Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Arnaud L.

Le 9/06/2017 à 16:55, Adrian Klaver a écrit :

On 06/09/2017 07:39 AM, Arnaud L. wrote:
So maybe pg_upgrade uses hard-links (i.e. to files), and only the 
documentation is wrong by calling them junctions (i.e. soft links to 
files) ?


Looks that way. In file.c in ~/src/bin/pg_upgrade I see:

#ifdef WIN32
   300 /* implementation of pg_link_file() on Windows */
   301 static int
   302 win32_pghardlink(const char *src, const char *dst)
   303 {
   304 /*
   305  * CreateHardLinkA returns zero for failure
   306  * http://msdn.microsoft.com/en-us/library/aa363860(VS.85).aspx
   307  */
   308 if (CreateHardLinkA(dst, src, NULL) == 0)
   309 {
   310 _dosmaperr(GetLastError());
   311 return -1;
   312 }
   313 else
   314 return 0;
   315 }
   316 #endif


Great !
So I did a full upgrade for nothing (just for safety), but that's good 
to know for next time !
Should this be submitted to postgresql-bugs, or is there something more 
specific to the documentation ?


Regards
--
Arnaud






Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread Arnaud L.

Le 9/06/2017 à 17:02, Arnaud L. a écrit :

Le 9/06/2017 à 16:55, Adrian Klaver a écrit :

On 06/09/2017 07:39 AM, Arnaud L. wrote:
So maybe pg_upgrade uses hard-links (i.e. to files), and only the 
documentation is wrong by calling them junctions (i.e. soft links to 
files) ?


Looks that way. In file.c in ~/src/bin/pg_upgrade I see:

#ifdef WIN32
   300 /* implementation of pg_link_file() on Windows */
   301 static int
   302 win32_pghardlink(const char *src, const char *dst)
   303 {
   304 /*
   305  * CreateHardLinkA returns zero for failure
   306  * http://msdn.microsoft.com/en-us/library/aa363860(VS.85).aspx
   307  */
   308 if (CreateHardLinkA(dst, src, NULL) == 0)
   309 {
   310 _dosmaperr(GetLastError());
   311 return -1;
   312 }
   313 else
   314 return 0;
   315 }
   316 #endif


Great !
So I did a full upgrade for nothing (just for safety), but that's good
to know for next time !
Should this be submitted to postgresql-bugs, or is there something more
specific to the documentation ?


I just found the pgsql-d...@postgresql.org list, sorry for the spam.

--
Arnaud





Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Tom Lane
Robert Lakes  writes:
> I am new to postgress and I am trying to write my first function to insert,
> update or delete and trap errors as a result of the table not existing ,
> the columns not exist or if any other error simply pass back the sqlstate

Please do not hijack an existing thread to ask an unrelated question.
Start a new thread (ie "compose" don't "reply") and use an appropriate
subject line.

regards, tom lane




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 5:38 PM, Bruno Wolff III  wrote:

> On Thu, Jun 08, 2017 at 22:37:34 -0700,
>  Ken Tanzer  wrote:
>
>>
>> My approach was to have the initial connection made by the owner, and then
>> after successfully authenticating the user, to switch to the role of the
>> site they belong to.  After investigation, this still seems feasible but
>> imperfect.  Specifically, I thought it would be possible to configure such
>> that after changing to a more restricted role, it would not be possible to
>> change back.  But after seeing this thread (
>>
>
> How are you keeping the credentials of the owner from being compromised?
> It seems if you are worried about role changing, adversaries will likely
> also be in a position to steal the owner's credentials or hijack the
> connection before privileges are dropped.
>

Seems to me they are separate issues.   App currently has access to the
password for accessing the DB.  (Though I could change that to ident access
and skip the password.)  App 1) connects to the DB, 2) authenticates the
user (within the app), then 3) proceeds to process input, query the DB,
produce output.  If step 2A becomes irrevocably changing to a site-specific
role, then at least I know that everything that happens within 3 can't
cross the limitations of per-site access.  If someone can steal my password
or break into my backend, that's a whole separate problem that already
exists both now and in this new scenario.

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


[GENERAL] Inconsistent performance with LIKE and bind variable on long-lived connection

2017-06-09 Thread Steven Grimm
PostgreSQL 9.6.3 on OS X Sierra, JDBC driver version 42.1.1.

I noticed that one of my queries was slowing down after a few invocations.
Narrowed it down to an issue with bind variables and LIKE conditions. Very
consistently, on a given connection, the first 9 times a SELECT containing
a LIKE whose value is a bind variable is executed, the query returns
results quickly. But starting with the tenth execution, it slows down by a
factor of 20 and stays slow until the connection is closed.

The slowdown doesn't happen with bind variables on equality conditions, and
it doesn't happen if a constant is used in the LIKE condition rather than a
bind variable.

The below Java code reproduces the problem 100% of the time on my system.
It populates a test table with 10,000 rows if the table doesn't already
exist, then runs a simple two-condition SELECT with different combinations
of bind variables and constants. Each query is run 20 times and its
execution times in milliseconds are displayed.

On my system I get output like this:

Two bind variables      57  22   8   5   9   9  10  13   8 144 151 236 198 204 197 197 152 126 108 102
Equality bind variable   5   5   5   5   5   5   7   5   4   6   5   4   5   5   4   4   5   5   5   5
LIKE bind variable       5   5   5   5   9   5   5  12   6 111 106 107 108 121 110 101 107 108 113 108
No bind variables        5   5   4   5   5   5   4   4   5   5   4   5   5   4   6   5   4   5   7   4

In other words, when a bind variable is used in the LIKE condition, the
query suddenly goes from taking 5-15 milliseconds to taking 100+
milliseconds. When the query is run in psql, it takes 5-10 milliseconds.

I'm not sure if this is a problem with the JDBC driver or the server, but
it's certainly unexpected!

---

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindTest {
  static String url = "jdbc:postgresql:test";

  private static void init() {
try (Connection conn = DriverManager.getConnection(url)) {
      try (PreparedStatement stmt = conn.prepareStatement(
          "CREATE TABLE test (col1 TEXT, col2 TEXT, PRIMARY KEY (col1, col2))")) {
stmt.execute();
  }

      try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO test VALUES (?,?)")) {
        stmt.setString(1, "xyz");
stmt.setString(1, "xyz");

        for (int i = 0; i < 10000; i++) {
  stmt.setInt(2, i);
  stmt.addBatch();
}

stmt.executeBatch();
  }

      try (PreparedStatement stmt = conn.prepareStatement("VACUUM ANALYZE")) {
stmt.execute();
  }
} catch (SQLException e) {
  // Table exists, so don't populate it
}
  }

  private static void logTime(String name, PreparedStatement stmt) throws SQLException {
StringBuilder out = new StringBuilder(String.format("%-22s", name));

    for (int i = 0; i < 20; i++) {
  long startTime = System.currentTimeMillis();
  ResultSet rs = stmt.executeQuery();
  while (rs.next()) {
rs.getString(1);
  }
  long endTime = System.currentTimeMillis();
  rs.close();

  out.append(String.format(" %3d", endTime - startTime));
}

stmt.close();

System.out.println(out);
  }

  public static void main(String[] args) throws Exception {
init();

try (Connection conn = DriverManager.getConnection(url)) {
      PreparedStatement stmt = conn.prepareStatement(
          "SELECT col2 FROM test WHERE col1 = ? AND col2 LIKE ? ORDER BY col2");
  stmt.setString(1, "xyz");
  stmt.setString(2, "%");
  logTime("Two bind variables", stmt);

      stmt = conn.prepareStatement(
          "SELECT col2 FROM test WHERE col1 = ? AND col2 LIKE '%' ORDER BY col2");
  stmt.setString(1, "xyz");
  logTime("Equality bind variable", stmt);

      stmt = conn.prepareStatement(
          "SELECT col2 FROM test WHERE col1 = 'xyz' AND col2 LIKE ? ORDER BY col2");
  stmt.setString(1, "%");
  logTime("LIKE bind variable", stmt);

      stmt = conn.prepareStatement(
          "SELECT col2 FROM test WHERE col1 = 'xyz' AND col2 LIKE '%' ORDER BY col2");
  logTime("No bind variables", stmt);
}
  }
}


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Harry Ambrose
Hi Tom,

Thanks for attempting to replicate the issue.

Anyway, the bad news is I couldn't reproduce the problem then and I can't
> now.  I don't know if it's a timing issue or if there's something critical
> about configuration that I'm not duplicating.  Can you explain what sort
> of platform you're testing on, and what nondefault configuration settings
> you're using?
>

Further details about the environment that I can replicate on below:

- Non default postgresql.conf settings:
checkpoint_segments = 192
checkpoint_completion_target = 0.9
checkpoint_timeout = 5min
wal_keep_segments = 256
wal_writer_delay = 200ms
archive_mode = on
archive_command = 'rsync -e ssh -arv  /wal/pg_xlog/%f postgres@:/wal/pg_xlog'
archive_timeout = 60
syslog_facility = 'LOCAL0'
log_statement = 'mod'
syslog_ident = 'postgres'
log_line_prefix = '%h %m  %p %c %u %a  %e '
log_timezone = 'GB'
track_activities = on
track_counts = on
datestyle = 'iso, mdy'
timezone = 'GB'
default_text_search_config = 'pg_catalog.english'
array_nulls = on
sql_inheritance = on
standard_conforming_strings = on
synchronize_seqscans = on
transform_null_equals = off
- Two node master/slave setup using streaming replication (without slots).
- CentOS 6.9 (2.6.32-696.el6.x86_64).
- PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit.
- 64GiB RAM.
- AMD Opteron(TM) Processor 6238.
- pg_default sat on 2 disk RAID-1 configuration (ext3 filesystem).
- Custom tablespaces (where the errors occur) sat on 4 disk RAID-10 (ext3
filesystem).
- All disks are HP 600G SAS 6.0Gbps with P420(i) controllers and battery
backed cache enabled.

Please let me know if you require further info.

Best wishes,
Harry


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Achilleas Mantzios

On 09/06/2017 14:44, Harry Ambrose wrote:

Hi Tom,

Thanks for attempting to replicate the issue.

Anyway, the bad news is I couldn't reproduce the problem then and I can't
now.  I don't know if it's a timing issue or if there's something critical
about configuration that I'm not duplicating.  Can you explain what sort
of platform you're testing on, and what nondefault configuration settings
you're using?


Further details about the environment that I can replicate on below:

- Non default postgresql.conf settings:
checkpoint_segments = 192
checkpoint_completion_target = 0.9
checkpoint_timeout = 5min
wal_keep_segments = 256
wal_writer_delay = 200ms
archive_mode = on
archive_command = 'rsync -e ssh -arv  /wal/pg_xlog/%f postgres@:/wal/pg_xlog'
archive_timeout = 60
syslog_facility = 'LOCAL0'
log_statement = 'mod'
syslog_ident = 'postgres'
log_line_prefix = '%h %m  %p %c %u %a  %e '
log_timezone = 'GB'
track_activities = on
track_counts = on
datestyle = 'iso, mdy'
timezone = 'GB'
default_text_search_config = 'pg_catalog.english'
array_nulls = on
sql_inheritance = on
standard_conforming_strings = on
synchronize_seqscans = on
transform_null_equals = off
- Two node master/slave setup using streaming replication (without slots).
- CentOS 6.9 (2.6.32-696.el6.x86_64).
- PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 
20120313 (Red Hat 4.4.7-17), 64-bit.
- 64GiB RAM.
- AMD Opteron(TM) Processor 6238.
- pg_default sat on 2 disk RAID-1 configuration (ext3 filesystem).
- Custom tablespaces (where the errors occur) sat on 4 disk RAID-10 (ext3 
filesystem).
- All disks are HP 600G SAS 6.0Gbps with P420(i) controllers and battery backed 
cache enabled.


Maybe you could give some info on :
- your ext3 mkfs and mount options (journal, barriers, etc)
- your controller setup (battery should be working good and cache mode set to 
write back)
- your disks setup (write cache should be disabled)
- you should check your syslogs/messages for any errors related to storage
- is your RAM ECC? Did you run any memtest?
- is your CPU overheating ?
- have you experienced any crashes/freezes ?



Please let me know if you require further info.

Best wishes,
Harry




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread George Neuner
On Fri, 9 Jun 2017 10:07:24 -0400, Bruce Momjian 
wrote:

>On Fri, Jun  9, 2017 at 12:00:56PM +0200, Arnaud L. wrote:
>> Hi
>> 
>> The pg_upgrade documentation for PostgreSQL 9.6 states that --link will use
>> junction points on Windows.
>> Shouldn't it rather use hard-links?
>> If I'm not mistaken, with junction points (i.e. soft-links to directories),
>> the old data dir cannot be removed.
>> With hard-links to file, we can get rid of the old data dir once we are sure
>> that the upgrade is fine.
>
>I was told junction points on Windows were hard links and no one has
>ever complained about not being able to remove them.


NTFS junctions are a distinct type of symbolic link which is meant for
filesystem mount points.  In NTFS "normal" symlinks are restricted to
targets within the same filesystem.

You can use a junction anywhere you want a symlink, but not the
reverse.  The downside is that pathname parsing is slower with
junctions than with symlinks because of the possibility that the path
may cross into a different filesystem.


The documentation is not very clear, IMO.

https://msdn.microsoft.com/en-us/library/windows/desktop/aa365006(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa363878(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365503(v=vs.85).aspx


The mklink utility can create any of these types of links.  Its
documentation does not describe the differences, but is shows that
hard links, symlinks, and junctions all are distinct concepts in
Windows.

https://technet.microsoft.com/en-us/library/cc753194(v=ws.11).aspx


George



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread armand pirvu

> On Jun 9, 2017, at 11:01 AM, Adrian Klaver  wrote:
> 
> On 06/09/2017 08:45 AM, armand pirvu wrote:
>> Hi
>> Had a couple of processes blocking the vacuum so I terminated them using
>> select pg_terminate_backend(pid);
>> Running the following
>> select distinct pid, backend_start, query_start, state_change, state, query 
>> from pg_catalog.pg_stat_activity order by 1;
>>   pid  |         backend_start         |          query_start          |         state_change          | state  | query
>> -------+-------------------------------+-------------------------------+-------------------------------+--------+-------
>>  10677 | 2017-06-09 10:25:49.189848-05 | 2017-06-09 10:33:43.598805-05 | 2017-06-09 10:33:43.599091-05 | idle   | SELECT 1
>>  11096 | 2017-06-09 10:27:03.686588-05 | 2017-06-09 10:33:56.28736-05  | 2017-06-09 10:33:56.287364-05 | active | select distinct pid, backend_start, query_start, state_change, state, query from pg_catalog.pg_stat_activity order by 1;
>>  13277 | 2017-06-09 07:48:49.506686-05 | 2017-06-09 07:48:52.887185-05 | 2017-06-09 07:48:52.887188-05 | active | autovacuum: VACUUM csischema.tf_purchased_badge
>>  13484 | 2017-06-09 10:31:54.127672-05 | 2017-06-09 10:33:47.137938-05 | 2017-06-09 10:33:47.138226-05 | idle   | SELECT 1
>>  16886 | 2017-06-09 07:56:49.033893-05 | 2017-06-09 07:56:49.078369-05 | 2017-06-09 07:56:49.078371-05 | active | autovacuum: VACUUM csischema.tf_purchases_person
>>  25387 | 2017-06-09 05:32:08.079397-05 | 2017-06-09 05:32:08.385728-05 | 2017-06-09 05:32:08.385731-05 | active | autovacuum: VACUUM csischema.tf_demographic_response_person
>>  37465 | 2017-06-09 08:50:58.992002-05 | 2017-06-09 08:51:21.506829-05 | 2017-06-09 08:51:21.506831-05 | active | autovacuum: VACUUM csischema.tf_transaction_item_person
>> I did notice that state_change did not change one bit
> 
> Did the state change?
> 

No, and that was what got me worried.


>> Does that mean that something is not quite right with the vacuums ?
> 
> Might want to take a look at:
> 
> https://www.postgresql.org/docs/9.6/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW
>  
> 
> 
> 
>> Thank you
>> Armand
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-09 Thread Eric Lemoine
On 06/08/2017 10:41 PM, Éric wrote:
> 
> 
> 
>> Have you experimented with other queries that don't involve PostGIS?
>> I'm wondering if your hook-installation code fails to work properly
>> unless PostGIS was loaded first.  This would be easier to credit if
>> there are hooks both extensions try to get into.
> 
> 
> I think you're right on Tom. It looks like I cannot reproduce the issue if I 
> start by calling a PostGIS function rather than a Pointcloud function. So it 
> may well be a conflict between PostGIS and Pointcloud. Both use fn_extra, and 
> that makes me wonder. This old thread [*] makes me wonder too! I still need 
> to figure out the bug, but I can see some light now! thanks
> 
> [*] 
> 


I now think that the performance bug is not related to the fn_extra
thing. I had hope but not anymore :) I don't see where the Pointcloud
and PostGIS extensions could conflict.

-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55


Re: [GENERAL] Vacuum and state_change

2017-06-09 Thread Adrian Klaver

On 06/09/2017 09:13 AM, armand pirvu wrote:


On Jun 9, 2017, at 11:01 AM, Adrian Klaver wrote:


On 06/09/2017 08:45 AM, armand pirvu wrote:

Hi
Had a couple of processes blocking the vacuum so I terminated them using
select pg_terminate_backend(pid);
Running the following
select distinct pid, backend_start, query_start, state_change, state, 
query from pg_catalog.pg_stat_activity order by 1;
  pid  |         backend_start         |          query_start          |         state_change          | state  | query
-------+-------------------------------+-------------------------------+-------------------------------+--------+-------
 10677 | 2017-06-09 10:25:49.189848-05 | 2017-06-09 10:33:43.598805-05 | 2017-06-09 10:33:43.599091-05 | idle   | SELECT 1
 11096 | 2017-06-09 10:27:03.686588-05 | 2017-06-09 10:33:56.28736-05  | 2017-06-09 10:33:56.287364-05 | active | select distinct pid, backend_start, query_start, state_change, state, query from pg_catalog.pg_stat_activity order by 1;
 13277 | 2017-06-09 07:48:49.506686-05 | 2017-06-09 07:48:52.887185-05 | 2017-06-09 07:48:52.887188-05 | active | autovacuum: VACUUM csischema.tf_purchased_badge
 13484 | 2017-06-09 10:31:54.127672-05 | 2017-06-09 10:33:47.137938-05 | 2017-06-09 10:33:47.138226-05 | idle   | SELECT 1
 16886 | 2017-06-09 07:56:49.033893-05 | 2017-06-09 07:56:49.078369-05 | 2017-06-09 07:56:49.078371-05 | active | autovacuum: VACUUM csischema.tf_purchases_person
 25387 | 2017-06-09 05:32:08.079397-05 | 2017-06-09 05:32:08.385728-05 | 2017-06-09 05:32:08.385731-05 | active | autovacuum: VACUUM csischema.tf_demographic_response_person
 37465 | 2017-06-09 08:50:58.992002-05 | 2017-06-09 08:51:21.506829-05 | 2017-06-09 08:51:21.506831-05 | active | autovacuum: VACUUM csischema.tf_transaction_item_person

I did notice that state_change did not change one bit


Did the state change?



No, and that was what got me worried.


Are these large tables?


If you are on Postgres 9.6:

https://www.postgresql.org/docs/9.6/static/progress-reporting.html
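For reference, that progress view can be joined back to pg_stat_activity to see how far each autovacuum worker has gotten. A sketch against the 9.6 catalog (column names per the 9.6 docs):

```sql
-- Sketch (PostgreSQL 9.6+): progress of running (auto)vacuum workers.
SELECT a.pid,
       p.relid::regclass     AS table_name,
       p.phase,
       p.heap_blks_scanned,
       p.heap_blks_total,
       now() - a.query_start AS running_for
FROM pg_stat_progress_vacuum AS p
JOIN pg_stat_activity        AS a USING (pid)
ORDER BY running_for DESC;
```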





Does that mean that something is not quite right with the vacuums ?


Might want to take a look at:

https://www.postgresql.org/docs/9.6/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW



Thank you
Armand



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-09 Thread Robert Lakes
Hi guys,
I am new to Postgres and I am trying to write my first trigger function. It
should handle insert, update, or delete, and trap errors when the table does
not exist or the columns do not exist; for any other error it should simply
pass back the SQLSTATE. Here's my code; can you help?
CREATE OR REPLACE FUNCTION listings_audit() RETURNS TRIGGER AS
$listings_audit$
BEGIN
  IF (TG_OP = 'DELETE') THEN
    IF (EXISTS (
        SELECT 1
        FROM pg_catalog.pg_class c
        JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'schema_name'
        AND   c.relname = 'table_name'
        AND   c.relkind = 'r' -- only tables
    )) THEN
      INSERT INTO listings_changes
        SELECT now(), 'DELETE', OLD.*;
      RETURN OLD;
    ELSE
      RAISE EXCEPTION 'Table does not exist';
    END IF;
  ELSIF (TG_OP = 'UPDATE') THEN
    IF (EXISTS (
        SELECT 1
        FROM pg_catalog.pg_class c
        JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'schema_name'
        AND   c.relname = 'table_name'
        AND   c.relkind = 'r' -- only tables
    )) THEN
      INSERT INTO listings_changes
        SELECT now(), 'UPDATE', NEW.*;
      RETURN NEW;
    ELSE
      RAISE EXCEPTION 'Table does not exist';
    END IF;
  ELSIF (TG_OP = 'INSERT') THEN
    INSERT INTO listings_changes
      SELECT now(), 'INSERT', NEW.*;
    RETURN NEW;
  END IF;
EXCEPTION
  WHEN SQLSTATE '42611' THEN
    RAISE EXCEPTION 'Columns in audit file do not match user file';
  WHEN SQLSTATE '42P16' THEN
    RAISE EXCEPTION 'Table does not exist';
  WHEN OTHERS THEN
    RAISE EXCEPTION 'PostgreSQL error code that has occurred';
    RETURN SQLSTATE;
END;
$listings_audit$ LANGUAGE plpgsql;
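For comparison, a trimmed sketch of the same idea. This is illustrative only: the pg_class existence checks are dropped (a trigger can only fire on a table that exists, and a missing audit table raises undefined_table, which the handler catches), the unreachable RETURN after RAISE is removed, and the error texts are placeholders:

```sql
-- Sketch: audit trigger that logs row changes and reports the SQLSTATE
-- of any unexpected error (table names follow the original post).
CREATE OR REPLACE FUNCTION listings_audit() RETURNS trigger AS
$listings_audit$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO listings_changes SELECT now(), 'DELETE', OLD.*;
    RETURN OLD;
  ELSIF TG_OP = 'UPDATE' THEN
    INSERT INTO listings_changes SELECT now(), 'UPDATE', NEW.*;
    RETURN NEW;
  ELSIF TG_OP = 'INSERT' THEN
    INSERT INTO listings_changes SELECT now(), 'INSERT', NEW.*;
    RETURN NEW;
  END IF;
EXCEPTION
  WHEN undefined_table THEN    -- SQLSTATE 42P01
    RAISE EXCEPTION 'audit table listings_changes does not exist';
  WHEN undefined_column THEN   -- SQLSTATE 42703
    RAISE EXCEPTION 'audit table columns do not match the audited table';
  WHEN OTHERS THEN
    -- SQLSTATE is available as a variable inside the handler
    RAISE EXCEPTION 'unexpected error, SQLSTATE = %', SQLSTATE;
END;
$listings_audit$ LANGUAGE plpgsql;
```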

On Thu, Jun 8, 2017 at 12:49 PM, Tom Lane  wrote:

> Harry Ambrose  writes:
> > Please find the jar attached (renamed with a .txt extension as I know
> some
> > email services deem jars a security issue).
>
> Hmm, the output from this script reminds me quite a lot of one I was
> sent in connection with bug #1 awhile back:
> https://www.postgresql.org/message-id/20161201165505.
> 4360.28203%40wrigleys.postgresql.org
> Was that a colleague of yours?
>
> Anyway, the bad news is I couldn't reproduce the problem then and I can't
> now.  I don't know if it's a timing issue or if there's something critical
> about configuration that I'm not duplicating.  Can you explain what sort
> of platform you're testing on, and what nondefault configuration settings
> you're using?
>
> regards, tom lane
>
>
>


Re: [GENERAL] pg_upgrade --link on Windows

2017-06-09 Thread George Neuner
On Fri, 9 Jun 2017 07:24:03 -0700, Adrian Klaver wrote:


>https://msdn.microsoft.com/en-us/library/windows/desktop/aa365006(v=vs.85).aspx
>
>Seems to me the difference is hard links point to file, junctions to 
>directories.

You can make either hard links or symlinks to files.  Junctions are
distinct from normal symlinks in that junctions can cross filesystems.
Microsoft's cmdline tools complain if you try to make a junction to a
file, because Microsoft intended junctions for mount points ... but
you can do it programmatically, or trick the tool by creating the link
and then replacing the target, and in most cases it will work the same
as a normal symlink.

I have seen cases where a junction to a file didn't work, but they
seemed to be application related rather than an OS issue.  Prior to
Vista, the mklink utility was not available, so people wanting to
create symlinks were forced to use the sysinternals junction utility.
https://technet.microsoft.com/en-us/sysinternals/bb545021.aspx

George





Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-09 Thread Jeff Janes
On Fri, Jun 9, 2017 at 9:13 AM, Eric Lemoine wrote:

> On 06/08/2017 10:41 PM, Éric wrote:
> >
> >
> >
> >> Have you experimented with other queries that don't involve PostGIS?
> >> I'm wondering if your hook-installation code fails to work properly
> >> unless PostGIS was loaded first.  This would be easier to credit if
> >> there are hooks both extensions try to get into.
> >
> >
> > I think you're right on Tom. It looks like I cannot reproduce the issue
> if I start by calling a PostGIS function rather than a Pointcloud function.
> So it may well be a conflict between PostGIS and Pointcloud. Both use
> fn_extra, and that makes me wonder. This old thread [*] makes me wonder
> too! I still need to figure out the bug, but I can see some light now!
> thanks
> >
> > [*] <984D0F47C5FF4D0DB0D71A4F6EF670ED@cleverelephant.ca>
>
>
> I now think that the performance bug is not related to the fn_extra
> thing. I had hope but not anymore :) I don't see where the Pointcloud
> and PostGIS extensions could conflict.
>

Can you run 'perf top' on the slow query?  That might pretty quickly tell
you which function is taking up your time.

Cheers,

Jeff


Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-09 Thread
Neil Anderson  wrote:

> I've been exploring the pg_catalog tables and pointed a couple of tools at
> it to extract an ER diagram for a blog post. At first I thought it was a bug
> in the drawing tool but it appears that the relationships between the
> pg_catalog tables are implicit rather than enforced by the database, is that 
> correct?

Every time I have to dive into the pg_* tables, I really want such a diagram
because the relationships aren't obvious to me, so I've been looking for a
diagram like that and haven't found one. I've also considered trying to make
one, but with all of the custom types, my normal ERD tool isn't very
cooperative.

Would you be willing to share the diagram with the list when you're done?

Would you be willing to share the names of the tools and process you used to 
create the diagram?

Thanks,
Kevin




Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-09 Thread Thomas Kellerer

kbran...@pwhome.com wrote on 09.06.2017 at 20:57:

Neil Anderson  wrote:


I've been exploring the pg_catalog tables and pointed a couple of tools at
it to extract an ER diagram for a blog post. At first I thought it was a bug
in the drawing tool but it appears that the relationships between the
pg_catalog tables are implicit rather than enforced by the database, is that 
correct?


Every time I have to dive into the pg_* tables, I really want such a diagram
because the relationships aren't obvious to me, so I've been looking for a
diagram like that and haven't found one.


https://wiki.postgresql.org/wiki/Developer_FAQ#Is_there_a_diagram_of_the_system_catalogs_available.3F
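The relationships in those diagrams are oid joins rather than declared foreign keys. A small illustrative example: pg_attribute.attrelid implicitly references pg_class.oid, and pg_class.relnamespace references pg_namespace.oid:

```sql
-- Sketch: follow the implicit catalog "foreign keys" by joining on oid.
SELECT n.nspname,
       c.relname,
       a.attname,
       a.atttypid::regtype AS data_type
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class     c ON c.oid = a.attrelid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'pg_class'  -- describe one catalog using the catalogs
  AND a.attnum > 0            -- skip system columns
ORDER BY a.attnum;
```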







Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-09 Thread Adrian Klaver

On 06/09/2017 09:13 AM, Eric Lemoine wrote:

On 06/08/2017 10:41 PM, Éric wrote:





Have you experimented with other queries that don't involve PostGIS?
I'm wondering if your hook-installation code fails to work properly
unless PostGIS was loaded first.  This would be easier to credit if
there are hooks both extensions try to get into.



I think you're right on Tom. It looks like I cannot reproduce the issue if I 
start by calling a PostGIS function rather than a Pointcloud function. So it 
may well be a conflict between PostGIS and Pointcloud. Both use fn_extra, and 
that makes me wonder. This old thread [*] makes me wonder too! I still need to 
figure out the bug, but I can see some light now! thanks

[*] 




I now think that the performance bug is not related to the fn_extra
thing. I had hope but not anymore :) I don't see where the Pointcloud
and PostGIS extensions could conflict.



Crank up the logging detail?:

https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN

log_min_messages (enum)

https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

log_connections (boolean)

log_disconnections (boolean)

log_duration (boolean)


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Joe Conway
On 06/09/2017 08:56 AM, Ken Tanzer wrote:
> On Fri, Jun 9, 2017 at 6:42 AM, Joe Conway wrote:
> See set_user for a possible solution: https://github.com/pgaudit/
> 
> Thanks!  Looking at the README, it seems like the intended use case is
> the opposite (escalating privileges), but if I understand could work anyway?

It currently supports both use-cases (but not both simultaneously very
well). For your use you can do (from the README):

---
Block switching to a superuser role
set_user.block_superuser = on
---

> If I'm understanding, you could set_user() with a random token and
> thereby prevent switching back?

Exactly -- in order to switch back the same token would be needed. So
assuming you are using persistent connections (connection pooler, etc.)
you would start a new user session by calling set_user() with a token,
and then reset when done with the same token. Or since "done" may not be
something the app can really know, you might end up doing a preemptive
reset using the token and then set_user().
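A sketch of that flow, assuming the set_user extension is installed; the role name and token here are illustrative:

```sql
-- Sketch: pin a pooled session to a low-privilege role; the caller-held
-- token is required to switch back (two-argument set_user form).
SELECT set_user('app_user', 'per-session-random-token');  -- drop privileges
-- ... run application queries as app_user ...
SELECT reset_user('per-session-random-token');            -- restore original role
```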

> The extra logging would be undesirable.  Is there any way to skip that
> entirely?  I see with block_log_statement I could dial down the logging
> after switching users, but that would require the app to be aware of
> what the current "normal" logging level was.

Also from the README:
---
Notes:

If set_user.block_log_statement is set to "off", the log_statement
setting is left unchanged.
---

So assuming you do not normally have statements being logged, this would
not change that.

> Any other pitfalls I'm not seeing, or reasons this might be a bad idea?

As noted in the README, set_user will refuse to run inside a transaction
block, but other than that none that I know of. Of course if you come up
with any I'd be very interested to hear about them.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development


