Re: [HACKERS] pg_receivexlog add synchronous mode

2014-06-24 Thread furuyao
 I found that this patch breaks the --status-interval option of
 pg_receivexlog when the -m option which the patch introduced is supplied.
 When -m is set, pg_receivexlog tries to send the feedback message as soon
 as it flushes the WAL file, even if the status interval timeout has not
 elapsed yet. If you want to send the feedback as soon as WAL is written or
 flushed, like walreceiver does, you need to extend the --status-interval
 option, for example so that it accepts the value -1, which means enabling
 that behavior.
 
 Including this change in your original patch would make it more difficult
 to review. I think that you should implement this as a separate patch.
 Thought?
As you commented, the current patch ignores --status-interval, because it
must respond immediately to keep synchronization.

The specification of --status-interval needs more thought, so I revised the
patch into a flush mode which performs the flush at the same timing as
walreceiver.

The change removes the feedback message sent immediately after each flush;
the feedback message is now transmitted according to the status interval.
I also renamed the mode from syncmode to flushmode and updated the
documentation.

Regards,

-- 
Furuya Osamu


pg_receivexlog-add-flush-mode-v1.patch
Description: pg_receivexlog-add-flush-mode-v1.patch



Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Soroosh Sardari
On Mon, Jun 23, 2014 at 10:23 AM, Soroosh Sardari soroosh.sard...@gmail.com
 wrote:

 Dear Hackers

 I wanted to add a char array of length 20 to PageHeaderData in
 include/storage/bufpage.h.
 Surprisingly, the regression tests failed on the rangetypes test!

 The diff between the resulting and expected output files is:

 *** 968,974 
   select count(*) from test_range_spgist where ir -|- int4range(100,500);
count
   ---
 !  5
   (1 row)

   -- now check same queries using a bulk-loaded index
 --- 968,974 
   select count(*) from test_range_spgist where ir -|- int4range(100,500);
count
   ---
 !  2
   (1 row)

   -- now check same queries using a bulk-loaded index

 ==

 Any help appreciated.


 Soroosh Sardari



Is there any rule for adding a field to PageHeaderData?
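
For reference, the change looks roughly like this (a sketch against the
9.3-era struct in include/storage/bufpage.h; the field name pd_extra is
invented for the example):

typedef struct PageHeaderData
{
    PageXLogRecPtr pd_lsn;      /* LSN: next byte after last byte of xlog
                                 * record for last change to this page */
    uint16      pd_checksum;    /* checksum */
    uint16      pd_flags;       /* flag bits */
    LocationIndex pd_lower;     /* offset to start of free space */
    LocationIndex pd_upper;     /* offset to end of free space */
    LocationIndex pd_special;   /* offset to start of special space */
    uint16      pd_pagesize_version;
    TransactionId pd_prune_xid; /* oldest prunable XID, or zero if none */
    char        pd_extra[20];   /* the added field (hypothetical name) */
    ItemIdData  pd_linp[1];     /* beginning of line pointer array */
} PageHeaderData;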


[HACKERS] python module pre-import to avoid importing each time

2014-06-24 Thread Rémi Cura
Hey List,
this is a repost from the general list, where it got no responses (5 days).

I use plpython with postgis and 2 python modules (numpy and shapely).
Sadly, importing these modules in a plpython function is very slow (about
half a second).

I also don't know if this overhead applies each time the function is
called in the same session.

Is there a way to pre-import those modules once and for all (at server
start-up, for example), so that the Python functions run faster?

Thanks,

Cheers,
Rémi-C


Re: [HACKERS] inherit support for foreign tables

2014-06-24 Thread Etsuro Fujita

Hi Ashutosh,

Thank you for the review.

(2014/06/23 18:35), Ashutosh Bapat wrote:

Hi,
Selecting tableoid on parent causes an error, ERROR:  cannot extract
system attribute from virtual tuple. The foreign table has an OID which
can be reported as tableoid for the rows coming from that foreign table.
Do we want to do that?


No.  I think it's a bug.  I'll fix it.

Thanks,

Best regards,
Etsuro Fujita




Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Greg Stark
On Tue, Jun 24, 2014 at 12:02 AM, Soroosh Sardari
soroosh.sard...@gmail.com wrote:
 Is there any rule for adding a field to PageHeaderData?

Not really. It's a pretty internal thing, not something we expect
people to be doing all the time.

The only rule I can think of is that you should bump some version
numbers such as the page format version and probably the catalog
version. But that's probably irrelevant to your problem. It sounds
like you have a bug in your code but you haven't posted enough
information to say much more.
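
For illustration, the constants in question live in
src/include/storage/bufpage.h and src/include/catalog/catversion.h; the
bumped values below are made up for the example:

/* page format version: e.g. 4 -> 5 after an on-disk header change */
#define PG_PAGE_LAYOUT_VERSION      5

/* catalog version: date-based yyyymmddN stamp */
#define CATALOG_VERSION_NO          201406241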



-- 
greg




Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Andres Freund
On 2014-06-24 01:58:32 -0700, Greg Stark wrote:
 On Tue, Jun 24, 2014 at 12:02 AM, Soroosh Sardari
 soroosh.sard...@gmail.com wrote:
  Is there any rule for adding a field to PageHeaderData?
 
 Not really. It's a pretty internal thing, not something we expect
 people to be doing all the time.

I'd actually say that 99% of the things that would need it are not going to
happen, because we don't want to break on-disk compatibility.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Pavan Deolasee
On Tue, Jun 24, 2014 at 2:28 PM, Greg Stark st...@mit.edu wrote:

 On Tue, Jun 24, 2014 at 12:02 AM, Soroosh Sardari
 soroosh.sard...@gmail.com wrote:
  Is there any rule for adding a field to PageHeaderData?

 Not really. It's a pretty internal thing, not something we expect
 people to be doing all the time.

 The only rule I can think of is that you should bump some version
 numbers such as the page format version and probably the catalog
 version. But that's probably irrelevant to your problem. It sounds
 like you have a bug in your code but you haven't posted enough
 information to say much more.


Out of curiosity, I actually tried adding a char[20] field in the page
header, because just like you I thought this should be completely internal,
as long as the field is added before the pd_linp[] field. But I get the
same failure that the OP is reporting. I wonder if it's a bug in gist index
build, though I could not spot anything at first glance. FWIW, changing
the char[] from 20 to 22 or 24 does not cause any failure in the rangetypes
test. So I am thinking it's some alignment issue (mine is a 64-bit build).

Thanks,
Pavan
-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


Re: [HACKERS] 9.5: UPDATE/DELETE .. ORDER BY .. LIMIT ..

2014-06-24 Thread Heikki Linnakangas

On 05/13/2014 10:45 PM, Rukh Meski wrote:

On Sun, May 11, 2014 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:

The $64 question is whether we'd accept an implementation that fails
if the target table has children (ie, is partitioned).  That seems
to me to not be up to the project's usual quality expectations, but
maybe if there's enough demand for a partial solution we should do so.

It strikes me that a big part of the problem here is that the current
support for this case assumes that the children don't all have the
same rowtype.  Which is important if you think of child table as
meaning subclass in the OO sense.  But for ordinary partitioning
cases it's just useless complexity, and ModifyTable isn't the only
thing that suffers from that useless complexity.

If we had a notion of partitioned table that involved a restriction
that all the child tables have the exact same rowtype, we could implement
UPDATE/DELETE in a much saner fashion --- just one plan tree, not one
per child table --- and it would be possible to support UPDATE/DELETE
ORDER BY LIMIT with no more work than for the single-table case.
So that might shift the calculation as to whether we're willing to
accept a partial implementation.


None of the use cases I have in mind would ever (have to) use this on
a parent table; in the worst case it might make sense to do it on the
child tables individually.  Personally, I think that just refusing to
operate on tables with children is a reasonable start.  I have no
interest in working on improving partitioning, but I don't think
pushing this feature back in the hopes that someone else will would
help anyone.


IMHO this needs to work with inheritance if we are to accept it. It 
would be a rather strange limitation for no apparent reason, other than 
that we didn't bother to implement it. It doesn't seem very difficult in 
theory to add the table OID to the plan as a junk column, and use that 
in the ModifyTable node to know which table a row came from.


In any case, the patch as it stands is clearly not acceptable, because 
it just produces wrong results with inheritance. I'm marking it as 
returned with feedback in the commitfest app. I'd suggest that you solve 
the inheritance problems and resubmit.


Per the docs in the patch:


+  <para>
+   If the <literal>LIMIT</> (or <literal>FETCH FIRST</>) clause
+   is present, processing will stop after the system has attempted
+   to delete the specified amount of rows.  In particular, if a row
+   was concurrently changed not to match the given <literal>WHERE</>
+   clause, it will count towards the <literal>LIMIT</> despite it
+   not being actually deleted.  Unlike in <literal>SELECT</>, the
+   <literal>OFFSET</literal> clause is not available in
+   <literal>DELETE</>.
+  </para>


That behavior with READ COMMITTED mode and concurrent changes is 
surprising. Do we really want it to behave like that, and if so, why?


Why is OFFSET not supported? Not that I care much for that, but I'm curious.

- Heikki





Re: [HACKERS] PostgreSQL in Windows console and Ctrl-C

2014-06-24 Thread Christian Ullrich
* From: MauMau [mailto:maumau...@gmail.com]

 From: Christian Ullrich ch...@chrullrich.net

  OK, here is the first draft against current master. It builds on Windows
  with VS 2012 and on FreeBSD 10 with clang 3.3. I ran the regression
  tests on Windows, they all pass.
 
  The changed behavior is limited to Windows, where it now silently
  ignores Ctrl-C and Ctrl-Break when started via pg_ctl start.
 
  I don't think there is currently any support for switch-type long
  options, so rather than invent my own, I squeezed the two lines I added
  into postmaster.c where they fit best; unfortunately, the result is
  quite ugly. I'll be happy to refine that if someone can give me a hint
  on how to do it.
 
 Overall, the patch seems good as it is based on the discussion.  I found a
 few problems:

Thank you for the review. I will rebase, retest, and resubmit as soon as I find 
the time, certainly sometime this week.

 (2)
 Although I haven't tried, doesn't pg_ctl start fail on non-Windows
 platforms
 because of the following check?
 
 !   if (opt == '-')
 !       ereport(ERROR,
 !               (errcode(ERRCODE_SYNTAX_ERROR),
 !                errmsg("--%s requires a value",
 !                       optarg)));

On non-Windows platforms, the --background option is not passed, and the option 
handling is unmodified except for an additional pair of braces. The postmaster 
does not pass the option to its children on any platform.

 And, in the postgres reference page,
 
 http://www.postgresql.org/docs/devel/static/app-postgres.html
 
 there's a paragraph:
 
 The -- options will not work on FreeBSD or OpenBSD. Use -c instead. This
 is a bug in the affected operating systems; a future release of PostgreSQL
 will provide a workaround if this is not fixed.
 
 Would --background work on FreeBSD and OpenBSD (i.e. would pg_ctl start
 succeed)?  I don't have access to those platforms.

pg_ctl does not pass the option anywhere but on Windows, and postmaster.c does 
not recognize it anywhere else. If it is encountered on a platform where it 
does not make sense, it will be treated like any other (unknown) long option.

This is actually the weakest point of the existing patch, in my opinion. 
Jamming the long option handling into postmaster.c by way of #ifdef WIN32 feels 
wrong, but I could not figure out a better way to do it.
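
For concreteness, the shape I mean is roughly this (a sketch, not the
actual patch; the strcmp test, its placement, and the pgwin32_noconsole
flag are invented for illustration):

case 'c':
case '-':
    {
        char       *name,
                   *value;

        ParseLongOption(optarg, &name, &value);
#ifdef WIN32
        /* sketch: accept --background as a switch-type long option */
        if (strcmp(name, "background") == 0 && value == NULL)
        {
            pgwin32_noconsole = true;   /* hypothetical flag */
            break;
        }
#endif
        if (!value)
        {
            if (opt == '-')
                ereport(ERROR,
                        (errcode(ERRCODE_SYNTAX_ERROR),
                         errmsg("--%s requires a value",
                                optarg)));
            ...
        }
        ...
    }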

 (3)
 --background will also be used by restart subcommand, won't it?
 
 + in a console window. It is used automatically by
 + <command>pg_ctl</command> when called with the
 + <option>start</option> subcommand.

Restart is implemented as stop/start, so, yes.

-- 
Christian



Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Soroosh Sardari
On Tue, Jun 24, 2014 at 1:34 PM, Pavan Deolasee pavan.deola...@gmail.com
wrote:

 On Tue, Jun 24, 2014 at 2:28 PM, Greg Stark st...@mit.edu wrote:

 On Tue, Jun 24, 2014 at 12:02 AM, Soroosh Sardari
 soroosh.sard...@gmail.com wrote:
  Is there any rule for adding a field to PageHeaderData?

 Not really. It's a pretty internal thing, not something we expect
 people to be doing all the time.

 The only rule I can think of is that you should bump some version
 numbers such as the page format version and probably the catalog
 version. But that's probably irrelevant to your problem. It sounds
 like you have a bug in your code but you haven't posted enough
 information to say much more.


 Out of curiosity, I actually tried adding a char[20] field in the page
 header because just like you I thought this should be completely internal,
 as long as the field is added before the pd_linp[] field. But I get the
 same failure that OP is reporting. I wonder if its a bug in gist index
 build, though I could not spot anything at the first glance. FWIW changing
 the char[] from 20 to 22 or 24 does not cause any failure in rangetypes
 test. So I am thinking its some alignment issue (mine is a 64 bit build)

 Thanks,
 Pavan
 --
 Pavan Deolasee
 http://www.linkedin.com/in/pavandeolasee




I checked this problem with a pristine source tree of postgresql-9.3.2, so
the bug is not in my code.
As Pavan said, maybe some alignment issue causes this problem.
By the way, the following code produces two different outputs, which is weird.

drop table if exists test_range_spgist;
create table test_range_spgist(ir int4range);
create index test_range_spgist_idx on test_range_spgist using spgist (ir);
insert into test_range_spgist select int4range(g, g+10) from
generate_series(1,590) g;


SET enable_seqscan    = t;
SET enable_indexscan  = f;
SET enable_bitmapscan = f;

select * from test_range_spgist where ir -|- int4range(100,500);

SET enable_seqscan    = f;
SET enable_indexscan  = t;
SET enable_bitmapscan = f;

select * from test_range_spgist where ir -|- int4range(100,500);


Regards,
Soroosh


Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Abhijit Menon-Sen
At 2014-06-24 14:21:24 +0430, soroosh.sard...@gmail.com wrote:

 By the way, following code has two different output and it is weird.

I get the same output from both queries with both 9.3.4 and HEAD:

    ir
-----------
 [90,100)
 [500,510)
(2 rows)

If you're reporting a problem, please make some effort to provide enough
details to reproduce it. From your mail I could guess that you tried it
on 9.3.2, but please try not to make people guess.

-- Abhijit




Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Kevin Grittner
Soroosh Sardari soroosh.sard...@gmail.com wrote:

 I check this problem with a virgin source code of
 postgresql-9.3.2. So the bug is not for my codes.

 By the way, following code has two different output and it is
 weird.

I can confirm that I see the difference in 9.3.2, and that I don't
see the difference in 9.3.4.  Upgrade.

http://www.postgresql.org/support/versioning/

There's really no point in reporting a possible bug on a version
with known bugs which have already had fixes published.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread David Rowley
On Wed, Jun 11, 2014 at 9:32 PM, Marti Raudsepp ma...@juffo.org wrote:

 On Sun, Jun 8, 2014 at 3:36 PM, David Rowley dgrowle...@gmail.com wrote:
  Currently pull_up_sublinks_qual_recurse only changes the plan for NOT
 EXISTS
  queries and leaves NOT IN alone. The reason for this is because the
 values
  returned by a subquery in the IN clause could have NULLs.

 There's a bug in targetListIsGuaranteedNotToHaveNulls: you have to
 drill deeper into the query to guarantee the non-nullability of a result
 column. If a table is OUTER JOINed, it can return NULLs even if the
 original column specification has NOT NULL.

 This test case produces incorrect results with your patch:

 create table a (x int not null);
 create table b (x int not null, y int not null);
 insert into a values(1);
 select * from a where x not in (select y from a left join b using (x));

 Unpatched version correctly returns 0 rows since y will be NULL.
 Your patch returns the value 1 from a.


I'm a bit stuck on fixing this and I can't quite figure out how I should
tell if the TargetEntry is coming from an outer join.

My first attempt does not work as it seems that I'm looking up the wrong
RangeTblEntry with the following:

rte = rt_fetch(tlevar->varno, query->rtable);

if (IS_OUTER_JOIN(rte->jointype))
    return true;        /* Var from an outer join */

The jointype comes back as JOIN_INNER when looking up the RangeTblEntry from
the TargetEntry's varno. It seems that the RangeTblEntry that I need is
stored in query->rtable, but I've just no idea how to tell which item in the
list it is. So if anyone can point me in the right direction, that would be
really useful.

On a more positive or even slightly exciting note I think I've managed to
devise a way that ANTI JOINS can be used for NOT IN much more often. It
seems that find_nonnullable_vars will analyse a quals list to find
expressions that mean that the var cannot be NULL. This means we can
perform ANTI JOINS for NOT IN with queries like:

SELECT * FROM a WHERE id NOT IN (SELECT nullable_col FROM b WHERE
nullable_col = 1);
or
SELECT * FROM a WHERE id NOT IN (SELECT nullable_col FROM b WHERE
nullable_col IS NOT NULL);

(The attached patch implements this)

The nullable_col = 1 qual means that nullable_col cannot be NULL, so the ANTI
JOIN can be performed safely. I think this, combined with the NOT NULL check,
will cover probably just about all valid uses of NOT IN with a subquery...
unless of course I've assumed something wrongly about
find_nonnullable_vars. I just need the correct RangeTblEntry in order to
determine if the TargetEntry is from an outer join.
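
For concreteness, a rough sketch of that check (find_nonnullable_vars()
lives in src/backend/optimizer/util/clauses.c; the variable names around
it are invented):

List   *nonnullable_vars;

/* Vars which the subquery's WHERE clause proves cannot be NULL */
nonnullable_vars = find_nonnullable_vars(subselect->jointree->quals);

/* if the subquery's output column is among them, the ANTI JOIN is safe */
if (list_member(nonnullable_vars, tle->expr))
    return true;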

The attached patch is a broken implementation that still needs the lookup
code fixed to reference the correct RTE. The failing regression tests show
where the problems lie.

Any help on this would be really appreciated.

Regards

David Rowley


not_in_anti_join_v0.5_broken.patch
Description: Binary data



Re: [HACKERS] A question about code in DefineRelation()

2014-06-24 Thread Heikki Linnakangas

On 04/25/2014 04:39 PM, Hadi Moshayedi wrote:


On second thought I noticed that that makes CREATE FOREIGN TABLE include
an OID column in newly-created foreign tables wrongly, when the
default_with_oids parameter is set to on.  Please find attached a patch.


Yeah, that's a bug.

The interactions with pg_dump are interesting. If you have any tables 
with OIDs in your database, pg_dump will output "set 
default_with_oids=true" before creating those tables. And if you have 
any foreign tables that end up being dumped after the table with OIDs, 
they will also be created with default_with_oids=true, and will end up 
with OIDs.


Fortunately, nothing very bad happens if a foreign table has oids. It 
will just be all-zeros if you select it.


Committed, and backpatched. 9.2 and 9.1 needed a slightly different 
patch because the code has changed, but it was still straightforward.



The fix makes sense to me, since in ALTER TABLE SET WITH OIDS we check that
the relation is a table and not a foreign table:

3160    case AT_AddOids:        /* SET WITH OIDS */
3161        ATSimplePermissions(rel, ATT_TABLE);

So, I think we should be consistent between DefineRelation() and alter
table.


Yeah, default_with_oids is definitely not supposed to affect foreign tables.

- Heikki





[HACKERS] Hooks Docu - list of hooks

2014-06-24 Thread geohas


Hi to all!

I am searching for documentation of hooks in PG; all I found was a
presentation in the wiki and some modules from 2ndQuadrant and petere on
github.  For the last three weeks I have been reading the source code to get
some information.

Is there a list of possible hooks, or maybe a little documentation or
overview?  Especially hooks to catch INSERT, UPDATE and DELETE statements
and subqueries.

It would help a lot to finish writing a log-into-tables module.



Please excuse my mail if there was a similar question on the list; I
subscribed today and a simple search in the archive showed no results.

regards

geohas

PS: I have an excuse for my bad English - I'm Austrian ;)





Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Soroosh Sardari
On Tue, Jun 24, 2014 at 2:40 PM, Kevin Grittner kgri...@ymail.com wrote:

 Soroosh Sardari soroosh.sard...@gmail.com wrote:

  I check this problem with a virgin source code of
  postgresql-9.3.2. So the bug is not for my codes.

  By the way, following code has two different output and it is
  weird.

 I can confirm that I see the difference in 9.3.2, and that I don't
 see the difference in 9.3.4.  Upgrade.

 http://www.postgresql.org/support/versioning/

 There's really no point in reporting a possible bug on a version
 with known bugs which have already had fixes published.

 --
 Kevin Grittner
 EDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



Wow, it's architecture-dependent.
In the 32-bit build of PG 9.3.2 the code gives the same output, while the
64-bit binary of the same code gives different output!

The problem is not just the SQL code I posted in the last email; the
symptom could differ on any architecture.  On both 32-bit and 64-bit
architectures, adding a char array of length 20 to PageHeaderData causes
errors in the regression test.

Regards,
Soroosh


Re: [HACKERS] Add a filed to PageHeaderData

2014-06-24 Thread Andres Freund
On 2014-06-24 15:23:54 +0430, Soroosh Sardari wrote:
 On Tue, Jun 24, 2014 at 2:40 PM, Kevin Grittner kgri...@ymail.com wrote:
 
  Soroosh Sardari soroosh.sard...@gmail.com wrote:
 
   I check this problem with a virgin source code of
   postgresql-9.3.2. So the bug is not for my codes.
 
   By the way, following code has two different output and it is
   weird.
 
  I can confirm that I see the difference in 9.3.2, and that I don't
  see the difference in 9.3.4.  Upgrade.
 
  http://www.postgresql.org/support/versioning/
 
  There's really no point in reporting a possible bug on a version
  with known bugs which have already had fixes published.
 
  --
  Kevin Grittner
  EDB: http://www.enterprisedb.com
  The Enterprise PostgreSQL Company
 
 
 
 wow, it's arch-dependent.
 in the 32-bit compiled of PG9.3.2 the code has same output and in 64-bit
 binary of same code output is different!!
 
 The problem is not about the sql code I posted in the last email. Problem
 could be different in any architecture,
 In 32-bit or 64-bit architecture adding a char array of length 20 to
 PageHeaderData cause error in regression test.

You likely didn't adapt SizeOfPageHeaderData.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PostgreSQL in Windows console and Ctrl-C

2014-06-24 Thread MauMau

From: Christian Ullrich ch...@chrullrich.net
On non-Windows platforms, the --background option is not passed, and the 
option handling is unmodified except for an additional pair of braces. The 
postmaster does not pass the option to its children on any platform.
pg_ctl does not pass the option anywhere but on Windows, and postmaster.c 
does not recognize it anywhere else. If it is encountered on a platform 
where it does not make sense, it will be treated like any other (unknown) 
long option.


OK.



Restart is implemented as stop/start, so, yes.


Then, please mention restart as well, like "the start and restart
subcommands", for clarification.


Regards
MauMau





Re: [HACKERS] Hooks Docu - list of hooks

2014-06-24 Thread Abhijit Menon-Sen
At 2014-06-24 12:49:17 +0200, li...@hasibether.at wrote:

 Is there a list of possible hooks, or maybe a little docu or overview?

The best I found was "git grep _hook_type" and then reading the code to
understand when and why each hook is called.

 Especially hooks to catch Insert, Update and Delete Stmts and
 SubQuerys.
 
 It would help a lot to finish / write a log into Tables Module.

Look at how pgaudit does it: https://github.com/2ndQuadrant/pgaudit

The code has comments about how the various available hooks are used.
(I was planning to implement a bgwriter that wrote log messages to a
table, which sounds a bit like what you want to do.)
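
As a concrete starting point, a minimal sketch of the usual hook pattern
(ExecutorStart_hook, standard_ExecutorStart() and the QueryDesc fields are
real; the module around them is invented):

#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"

PG_MODULE_MAGIC;

void        _PG_init(void);

static ExecutorStart_hook_type prev_ExecutorStart = NULL;

/* log INSERT/UPDATE/DELETE statements as they start executing */
static void
my_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
    if (queryDesc->operation == CMD_INSERT ||
        queryDesc->operation == CMD_UPDATE ||
        queryDesc->operation == CMD_DELETE)
        elog(LOG, "DML statement: %s", queryDesc->sourceText);

    /* always chain to the previous hook, or the standard function */
    if (prev_ExecutorStart)
        prev_ExecutorStart(queryDesc, eflags);
    else
        standard_ExecutorStart(queryDesc, eflags);
}

void
_PG_init(void)
{
    prev_ExecutorStart = ExecutorStart_hook;
    ExecutorStart_hook = my_ExecutorStart;
}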

-- Abhijit




Re: [HACKERS] Hooks Docu - list of hooks

2014-06-24 Thread geohas



On 24/06/14 12:59, Abhijit Menon-Sen wrote:
 At 2014-06-24 12:49:17 +0200, li...@hasibether.at wrote:

 Is there a list of possible hooks, or maybe a little docu or overview?

 The best I found was git grep _hook_type and then read the code to
 understand when and why the hook was called.

 Especially hooks to catch Insert, Update and Delete Stmts and
 SubQuerys.

 It would help a lot to finish / write a log into Tables Module.

 Look at how pgaudit does it: https://github.com/2ndQuadrant/pgaudit
I already tried pgaudit ;), one of the best examples; it helped me a lot.


 The code has comments about how the various available hooks are used.
 (I was planning to implement a bgwriter that wrote log messages to a
 table, which sounds a bit like what you want to do.)
The module I'm thinking of and working on is a bit inspired by pgaudit
and petere's pg_trashcan.
It should copy every created table into a shadow schema with extra
columns for record on / record off and the user ID (this is already working ;)).
In case of a DROP statement it should rename the table in the shadow
schema to XXX-dropped-Date.

Now I am trying to catch the planned statements, ...
It should work without triggers, because the shadow schema should only
be visible to the postgres user.

regards
geohas



 -- Abhijit








Re: [HACKERS] Add a field to PageHeaderData

2014-06-24 Thread Soroosh Sardari
On Tue, Jun 24, 2014 at 3:27 PM, Andres Freund and...@2ndquadrant.com
wrote:

 On 2014-06-24 15:23:54 +0430, Soroosh Sardari wrote:
  On Tue, Jun 24, 2014 at 2:40 PM, Kevin Grittner kgri...@ymail.com
 wrote:
 
   Soroosh Sardari soroosh.sard...@gmail.com wrote:
  
I check this problem with a virgin source code of
postgresql-9.3.2. So the bug is not for my codes.
  
By the way, following code has two different output and it is
weird.
  
   I can confirm that I see the difference in 9.3.2, and that I don't
   see the difference in 9.3.4.  Upgrade.
  
   http://www.postgresql.org/support/versioning/
  
   There's really no point in reporting a possible bug on a version
   with known bugs which have already had fixes published.
  
   --
   Kevin Grittner
   EDB: http://www.enterprisedb.com
   The Enterprise PostgreSQL Company
  
 
 
  wow, it's arch-dependent.
  in the 32-bit compiled of PG9.3.2 the code has same output and in 64-bit
  binary of same code output is different!!
 
  The problem is not about the sql code I posted in the last email. Problem
  could be different in any architecture,
  In 32-bit or 64-bit architecture adding a char array of length 20 to
  PageHeaderData cause error in regression test.

  You likely didn't adapt SizeOfPageHeaderData.

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




#define SizeOfPageHeaderData (offsetof(PageHeaderData, pd_linp))

I think the macro does not need any change!
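
Indeed, offsetof makes the macro track the struct automatically. What it
does not guarantee is that the result stays MAXALIGN'ed, which is what
Pavan's 20-vs-24 observation points at. A cheap compile-time check of that
theory (a suggestion, not from the thread) could go inside PageInit():

/* fixed header is 24 bytes; + char[20] = 44.  MAXALIGN(44) is 48 on a
 * typical 64-bit build (MAXIMUM_ALIGNOF 8) but 44 on a 32-bit build
 * (MAXIMUM_ALIGNOF 4), which would make any misbehavior arch-dependent. */
StaticAssertStmt(SizeOfPageHeaderData == MAXALIGN(SizeOfPageHeaderData),
                 "SizeOfPageHeaderData is not MAXALIGN'ed");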


Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Robert Haas
On Mon, Jun 23, 2014 at 2:29 PM, Stephen Frost sfr...@snowman.net wrote:
 What are these policies going to depend on?  Will they be allowed to
 overlap?  I don't see multi-policy support as being very easily added.

We discussed the point about overlap upthread, and I gave specific
examples.  If there's something else you want me to provide here,
please be more clear about it.

 If there are specific ways to design the syntax which would make it
 easier to support multiple policies in the future, I'm all for it.  Have
 any specific thoughts regarding that?

I did propose something already upthread, and then Dean said this:

# Note that the syntax proposed elsewhere --- GRANT SELECT (polname) ON
# TABLE tab TO role --- doesn't work because it conflicts with the
# syntax for granting column privileges, so there needs to be a distinct
# syntax for this, and I think it ought to ultimately allow things like
#
# GRANT SELECT (col1, col2), UPDATE (col1) ON t1 TO bob USING policy1;

He's got a good point there.  I don't know whether the policy should
be given inline (e.g. GRANT ... WHERE stuff()) or out-of-line (GRANT
... USING policy1) but it seems like specifying it as some sort of
GRANT modifier might make sense.  I'm sure there are other ways also,
of course.

  - Require the user to specify in some way which of the available
  policies they want applied, and then apply only that one.
 
  I'd want to at least see a way to apply an ordering to the policies
  being applied, or have PG work out which one is cheapest and try that
  one first.

 Cost-based comparison of policies that return different results
 doesn't seem sensible to me.

 I keep coming back to the thought that, really, having multiple
 overlapping policies just adds unnecessary complication to the system
 for not much gain in real functionality.  Being able to specify a policy
 per-role might be useful, but that's only one dimension and I can
 imagine a lot of other dimensions that one might want to use to control
 which policy is used.

Well, I don't agree, and I've given examples upthread showing the
kinds of scenarios that I'm concerned about, which are drawn from real
experiences I've had.  It may be that I'm the only one who has had
such experiences, of course; or that there aren't enough people who
have to justify catering to such use cases.  But I'm not sure there's
much point in trying to have a conversation about how such a thing
could be made to work if you're just going to revert back to "well, we
don't really need this anyway" each time I make or refute a technical
point.

 I think it would be a VERY bad idea to design the system around the
 assumption that the RLS quals will be much more or less selective than
 the user-supplied quals.  That's going to be different in different
 environments.

 Fine- but do you really see the query planner having a problem pushing
 down whichever is the more selective qual, if the user-provided qual is
 marked as leakproof?

I'm not quite sure I understand the scenario you're describing here.
Can you provide a tangible example?  I expect that most of the things
the RLS-limited user might write in the WHERE clause will NOT get
pushed down because most functions are not leakproof.  However, the
issue I'm actually concerned about is whether the *security* qual is
simple enough to permit an index-scan.  Anything with an OR clause in
it probably won't be, and any function call definitely won't be.

 I realize that you want multiple policies because you'd like a way for
 the RLS qual to be made simpler for certain cases while also having more
 complex quals for other cases.  What I keep waiting to hear is exactly
 how you want to specify which policy is used because that's where it
 gets ugly and complicated.  I still really don't like the idea of trying
 to apply multiple policies inside of a single query execution.

See above comments.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 7:45 AM, Sebastian Reitenbach
sebas...@l00-bugdead-prods.de wrote:
 I'm building the vax packages for openbsd. What I can tell is that
 for 5.5 no postgresql packages were built. But that may be
 due to the recent upgrade from gcc 2.95 to 3.3.
 I guess that not all dependencies to actually build postgresql
 are available for the vax, or may build successfully there. But I need
 to verify. Might need a few days, since I'm currently on vacation,
 with sparse Internet connectivity. ;)

 OK, that was easy:

 $ cd /usr/ports/databases/postgresql
 $ make install
 ===>  postgresql-client-9.3.4p0 requires shared libraries.

 OpenBSD VAX is static only, so no postgresql on OpenBSD
 VAX before shared libraries will ever be made working on it.

Thanks very much; that's useful information.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Robert Haas
On Mon, Jun 23, 2014 at 6:51 AM, Abhijit Menon-Sen a...@2ndquadrant.com wrote:
 There are some unresolved questions with #2 because the extensible
 reloptions patch seems to have lost favour, but I'm pretty sure we
 could figure out some alternative.

I didn't particularly like the proposed *implementation* of extensible
reloptions, but I think the general concept has merit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Merlin Moncure
On Mon, Jun 23, 2014 at 8:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 * Nested json arrays are a bit more problematic.  What I'd ideally like
 is to spit them out in a form that would be successfully parsable as a SQL
 array of the appropriate element type.  Unfortunately, I think that that
 ship has sailed because json_populate_recordset failed to do that in 9.3.
 What we should probably do is define this the same as the nested object
 case, ie, we spit it out in *json* array format, meaning you can insert it
 into a text or json/jsonb field of the result record.  Maybe sometime in
 the future we can add a json-array-to-SQL-array converter function, but
 these functions won't do that.

Not quite following your logic here.  9.3 gave an error for an
internally nested array:

postgres=# create type foo as (a int, b int[]);
postgres=# select * from json_populate_recordset(null::foo,
  '[{"a": 1, "b": [1,2,3]},{"a": 1, "b": [1,2,3]}]');
ERROR:  cannot call json_populate_recordset on a nested object

With your proposal this would still fail?  TBH, I'd rather this
function fail as above than implement a behavior we couldn't take back
later.

merlin




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/22/2014 05:11 PM, Kevin Grittner wrote:
 I found one substantive issue that had been missed in discussion,
 though.  The patch modifies the postgres_fdw extension to make it
 automatically exempt from an attempt to set a limit like this on
 the server to which it connects.  I'm not sure that's a good idea. 
 Why should this type of connection be allowed to sit indefinitely
 with an idle open transaction?  I'm inclined to omit this part of
 the patch

My reasoning for doing it the way I did is that if a transaction touches
a foreign table and then goes bumbling along with other things, the
transaction is active but the connection to the remote server remains
idle in transaction.  If it hits the timeout, then when the local
transaction goes to commit, it errors out and you lose all your work.

If the local transaction is actually idle in transaction and the local
server doesn't have a timeout, we're no worse off than before this patch.
-- 
Vik




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 9:20 AM, Vik Fearing [via PostgreSQL] 
ml-node+s1045698n5808882...@n5.nabble.com wrote:

 On 06/22/2014 05:11 PM, Kevin Grittner wrote:
  I found one substantive issue that had been missed in discussion,
  though.  The patch modifies the postgres_fdw extension to make it
  automatically exempt from an attempt to set a limit like this on
  the server to which it connects.  I'm not sure that's a good idea.
  Why should this type of connection be allowed to sit indefinitely
  with an idle open transaction?  I'm inclined to omit this part of
  the patch

 My reasoning for doing it the way I did is that if a transaction touches
 a foreign table and then goes bumbling along with other things the
 transaction is active but the connection to the remote server remains
 idle in transaction.  If it hits the timeout, when the local transaction
 goes to commit it errors out and you lose all your work.

 If the local transaction is actually idle in transaction and the local
 server doesn't have a timeout, we're no worse off than before this patch.


Going off of this reading alone, wouldn't we have to allow the client to
set the timeout on the fdw_server - to zero - to ensure reasonable
operation?  If the client has a process that requires 10 minutes to
complete, and the foreign server has a default 5 minute timeout, and the
client does not disable the timeout on the server, wouldn't the foreign
server always cause the process to abort?

David J.





Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread John Lumby



 From: st...@mit.edu
 Date: Mon, 23 Jun 2014 16:04:50 -0700
 Subject: Re: Extended Prefetching using Asynchronous IO - proposal and patch
 To: johnlu...@hotmail.com
 CC: klaussfre...@gmail.com; hlinnakan...@vmware.com; 
 pgsql-hackers@postgresql.org

 On Mon, Jun 23, 2014 at 2:43 PM, John Lumby johnlu...@hotmail.com wrote:
 It is when some *other* backend gets there first with the ReadBuffer that
 things are a bit trickier. The current version of the patch did polling for 
 that case
 but that drew criticism, and so an imminent new version of the patch
 uses the sigevent mechanism. And there are other ways still.

 I'm a bit puzzled by this though. Postgres *already* has code for this
 case. When you call ReadBuffer you set the bits on the buffer

Good question.  Let me explain.
Yes, postgresql has code for the case where a backend is inside a synchronous
read() or write(), performed from a ReadBuffer(), and some other backend
wants that buffer.  Asynchronous aio is initiated not from ReadBuffer
but from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
postgresql buffer.  This is entirely different from the synchronous io case.
Why?  Because the issuer of the aio_read (the originator) is unaware
of this buffer pinned on its behalf, and is then free to do any other
reading or writing it wishes, such as more prefetching or any other
operation.  And furthermore, it may *never* issue a ReadBuffer for the
block which it prefetched.

Therefore, asynchronous IO is different from synchronous IO, and
a new bit, BM_AIO_IN_PROGRESS, in the buf_header is required to
track this aio operation until completion.

I would encourage you to read the new postgresql-prefetching-asyncio.README
in the patch file, where this is explained in greater detail.
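
For orientation, buffer flag bits live in
src/include/storage/buf_internals.h; a sketch of what the new bit could
look like (the bit position here is illustrative, the actual patch takes
whatever bit is free):

/* sketch: a new flag alongside BM_IO_IN_PROGRESS and friends */
#define BM_AIO_IN_PROGRESS  (1 << 10)   /* aio_read launched via
                                         * PrefetchBuffer, not yet consumed */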

 indicating I/O is in progress. If another backend does ReadBuffer for
 the same block they'll get the same buffer and then wait until the
 first backend's I/O completes. ReadBuffer goes through some hoops to
 handle this (and all the corner cases such as the other backend's I/O
 completing and the buffer being reused for another block before the
 first backend reawakens). It would be a shame to reinvent the wheel.

No re-invention!  Actually, some effort has been made to use the
existing functions in bufmgr.c as much as possible rather than
rewriting them.


 The problem with using the Buffers I/O in progress bit is that the I/O
 might complete while the other backend is busy doing stuff. As long as
 you can handle the I/O completion promptly -- either in callback or
 thread or signal handler then that wouldn't matter. But I'm not clear
 that any of those will work reliably.

They both work reliably, but the criticism was that backend B polling
an aiocb of an aio issued by backend A is not documented as
being supported (although it happens to work), hence the proposed
change to use sigevent.

By the way, on the "will it actually work though?" question which several
folks have raised, I should mention that this patch has been in
semi-production use for almost 2 years now in different stages of
completion on all postgresql releases from 9.1.4 to 9.5 devel.  I would
guess it has had around 500 hours of operation by now.  I'm sure there are
bugs still to be found, but I am confident it is fundamentally sound.
 

 --
 greg
  



Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Kevin Grittner
David G Johnston david.g.johns...@gmail.com wrote:
 Vik Fearing [via PostgreSQL] [hidden email] wrote:
 On 06/22/2014 05:11 PM, Kevin Grittner wrote:
 I found one substantive issue that had been missed in discussion,
 though.  The patch modifies the postgres_fdw extension to make it
 automatically exempt from an attempt to set a limit like this on
 the server to which it connects.  I'm not sure that's a good idea.
 Why should this type of connection be allowed to sit indefinitely
 with an idle open transaction?  I'm inclined to omit this part of
 the patch

 My reasoning for doing it the way I did is that if a transaction touches
 a foreign table and then goes bumbling along with other things the
 transaction is active but the connection to the remote server remains
 idle in transaction.  If it hits the timeout, when the local transaction
 goes to commit it errors out and you lose all your work.

 If the local transaction is actually idle in transaction and the local
 server doesn't have a timeout, we're no worse off than before this patch. 



 Going off of this reading alone wouldn't we have to allow the
 client to set the timeout on the fdw_server - to zero - to ensure
 reasonable operation?  If the client has a process that requires
 10 minutes to complete, and the foreign server has a default 5
 minute timeout, if the client does not disable the timeout on the
 server wouldn't the foreign server always cause the process to
 abort?

That's what Vik did in his patch, and what I was questioning.  I
think he might be right, but I want to think about it.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 03:29 PM, David G Johnston wrote:
 On Tue, Jun 24, 2014 at 9:20 AM, Vik Fearing [via PostgreSQL] wrote:
 
 On 06/22/2014 05:11 PM, Kevin Grittner wrote:
  I found one substantive issue that had been missed in discussion,
  though.  The patch modifies the postgres_fdw extension to make it
  automatically exempt from an attempt to set a limit like this on
  the server to which it connects.  I'm not sure that's a good idea.
  Why should this type of connection be allowed to sit indefinitely
  with an idle open transaction?  I'm inclined to omit this part of
  the patch
 
 My reasoning for doing it the way I did is that if a transaction
 touches
 a foreign table and then goes bumbling along with other things the
 transaction is active but the connection to the remote server remains
 idle in transaction.  If it hits the timeout, when the local
 transaction
 goes to commit it errors out and you lose all your work.
 
 If the local transaction is actually idle in transaction and the local
 server doesn't have a timeout, we're no worse off than before this
 patch. 
 
 
  Going off of this reading alone wouldn't we have to allow the client to
 set the timeout on the fdw_server - to zero - to ensure reasonable
 operation?

That's what the patch currently does.

  If the client has a process that requires 10 minutes to
 complete, and the foreign server has a default 5 minute timeout, if the
 client does not disable the timeout on the server wouldn't the foreign
 server always cause the process to abort?

Yes.
-- 
Vik




Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 04:29 PM, John Lumby wrote:

On Mon, Jun 23, 2014 at 2:43 PM, John Lumby johnlu...@hotmail.com wrote:

It is when some *other* backend gets there first with the ReadBuffer that
things are a bit trickier. The current version of the patch did polling for 
that case
but that drew criticism, and so an imminent new version of the patch
uses the sigevent mechanism. And there are other ways still.


I'm a bit puzzled by this though. Postgres *already* has code for this
case. When you call ReadBuffer you set the bits on the buffer


Good question. Let me explain.
Yes, postgresql has code for the case where a backend is inside a synchronous
read() or write(), performed from a ReadBuffer(), and some other backend
wants that buffer.  Asynchronous aio is initiated not from ReadBuffer
but from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
postgresql buffer.  This is entirely different from the synchronous io case.
Why?  Because the issuer of the aio_read (the originator) is unaware
of this buffer pinned on its behalf, and is then free to do any other
reading or writing it wishes, such as more prefetching or any other
operation.  And furthermore, it may *never* issue a ReadBuffer for the block
which it prefetched.


I still don't see the difference. Once an asynchronous read is initiated 
on the buffer, it can't be used for anything else until the read has 
finished. This is exactly the same situation as with a synchronous read: 
after read() is called, the buffer can't be used for anything else until 
the call finishes.


In particular, consider the situation from another backend's point of 
view. Looking from another backend (i.e. one that didn't initiate the 
read), there's no difference between a synchronous and asynchronous 
read. So why do we need a different IPC mechanism for the synchronous 
and asynchronous cases? We don't.


I understand that *within the backend*, you need to somehow track the 
I/O, and you'll need to treat synchronous and asynchronous I/Os 
differently. But that's all within the same backend, and doesn't need to 
involve the flags or locks in shared memory at all. The inter-process 
communication doesn't need any changes.



The problem with using the Buffers I/O in progress bit is that the I/O
might complete while the other backend is busy doing stuff. As long as
you can handle the I/O completion promptly -- either in callback or
thread or signal handler then that wouldn't matter. But I'm not clear
that any of those will work reliably.


They both work reliably, but the criticism was that backend B polling
an aiocb of an aio issued by backend A is not documented as
being supported (although it happens to work), hence the proposed
change to use sigevent.


You didn't understand what Greg meant. You need to handle the completion 
of the I/O in the same process that initiated it, by clearing the 
in-progress bit of the buffer and releasing the I/O in-progress lwlock 
on it. And you need to do that very quickly after the I/O has finished, 
because there might be another backend waiting for the buffer and you 
don't want him to wait longer than necessary.


The question is, if you receive the notification of the I/O completion 
using a signal or a thread, is it safe to release the lwlock from the 
signal handler or a separate thread?



By the way, on the "will it actually work though?" question which several
folks have raised, I should mention that this patch has been in
semi-production use for almost 2 years now in different stages of
completion on all postgresql releases from 9.1.4 to 9.5 devel.  I would
guess it has had around 500 hours of operation by now.  I'm sure there are
bugs still to be found but I am confident it is fundamentally sound.


Well, a committable version of this patch is going to look quite 
different from the first version that you posted, so I don't put much 
weight on how long you've tested the first version.


- Heikki




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing vik.fear...@dalibo.com wrote:
 On 06/22/2014 05:11 PM, Kevin Grittner wrote:
 I found one substantive issue that had been missed in discussion,
 though.  The patch modifies the postgres_fdw extension to make it
 automatically exempt from an attempt to set a limit like this on
 the server to which it connects.  I'm not sure that's a good idea.
 Why should this type of connection be allowed to sit indefinitely
 with an idle open transaction?  I'm inclined to omit this part of
 the patch

 My reasoning for doing it the way I did is that if a transaction touches
 a foreign table and then goes bumbling along with other things the
 transaction is active but the connection to the remote server remains
 idle in transaction.  If it hits the timeout, when the local transaction
 goes to commit it errors out and you lose all your work.

 If the local transaction is actually idle in transaction and the local
 server doesn't have a timeout, we're no worse off than before this patch.

I think we are.  First, the correct timeout is a matter of
remote-server-policy, not local-server-policy.  If the remote server
wants to boot people with long-running idle transactions, it's
entitled to do that, and postgres_fdw shouldn't assume that it's
special.  The local server policy may be different, and may not even
have been configured by the same person.  Second, setting another GUC
at every session start adds overhead for all users of postgres_fdw.

Now, it might be that postgres_fdw should have a facility to allow
arbitrary options to be set on the foreign side at each connection
startup.  Then that could be used here if someone wants this behavior.
But I don't think we should hard-code it, because it could also be NOT
what someone wants.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Andrew Dunstan


On 06/23/2014 09:43 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

On 06/23/2014 07:34 PM, Tom Lane wrote:

I'm not following your comment about 9.3.  The json[b]_to_record[set]
functions are new in 9.4, which is what makes me feel it's not too
late to redefine their behavior.  But changing behavior of stuff that
was in 9.3 seems a lot more debatable.

This problem is also manifest in json_populate_recordset, which also
uses the function in question, and is in 9.3:

Ah, I see the problem.

Here is a first cut suggestion:

* Get rid of the use_json_as_text flag argument for the new functions.
In json_populate_record(set), ignore its value and deprecate using it.
(The fact that it already had a default makes that easier.)  The
behavior should always be as below.

* For nested json objects, we'll spit those out in json textual format,
which means they'll successfully convert to either text or json/jsonb.
Compared to the old behavior of json_populate_recordset, this just means
that we don't throw an error anymore regardless of the flag value,
which seems ok (though maybe not something to backpatch into 9.3).

* Nested json arrays are a bit more problematic.  What I'd ideally like
is to spit them out in a form that would be successfully parsable as a SQL
array of the appropriate element type.  Unfortunately, I think that that
ship has sailed because json_populate_recordset failed to do that in 9.3.
What we should probably do is define this the same as the nested object
case, ie, we spit it out in *json* array format, meaning you can insert it
into a text or json/jsonb field of the result record.  Maybe sometime in
the future we can add a json-array-to-SQL-array converter function, but
these functions won't do that.

From a user's standpoint this just boils down to (a) fix the bug with
mishandling of the hash tables, and (b) get rid of the gratuitous
error report.





The big problem is that we have been ignoring the result type when 
constructing the hash, even though the info is available. There is some 
sense in this in that the field might not even be present in the result 
type. And it works except for structured types like records, arrays and 
json. Even if we don't have a nested value, the functions will do the 
wrong thing for a scalar string destined for a json field (it will be 
de-escaped, when it should not be).

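To make that scalar-string case concrete, here is a minimal sketch (the
composite type and value are invented for illustration):

    CREATE TYPE jrec AS (j json);
    SELECT j FROM json_populate_record(null::jrec, '{"j": "a\"b"}');
    -- the JSON string value should land in the json column still escaped,
    -- i.e. as "a\"b"; de-escaping it to a"b would leave invalid json behind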

w.r.t. json arrays, I think you're chasing a chimera, since they are 
heterogeneous, unlike SQL arrays.


w.r.t. the use_json_as_text argument, yes, it has a default, but the 
default is false. Ignoring it seems to be more than just deprecating it. 
I agree it's a mess, though :-(



cheers

andrew







Re: [HACKERS] crash with assertions and WAL_DEBUG

2014-06-24 Thread Rahila Syed
Hello,

The patch on compilation gives following error,

mcxt.c: In function ‘MemoryContextAllowInCriticalSection’:
mcxt.c:322: error: ‘struct MemoryContextData’ has no member named
‘allowInCriticalSection’

The member in MemoryContextData is defined as 'allowInCritSection' while
the MemoryContextAllowInCriticalSection accesses the field as
'context->allowInCriticalSection'.


Thank you,





On Mon, Jun 23, 2014 at 3:28 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 06/21/2014 01:58 PM, Heikki Linnakangas wrote:

 It's a bit difficult to attach the mark to the palloc calls, as neither
 the WAL_DEBUG or LWLOCK_STATS code is calling palloc directly, but
 marking specific MemoryContexts as sanctioned ought to work. I'll take a
 stab at that.


 I came up with the attached patch. It adds a function called
 MemoryContextAllowInCriticalSection(), which can be used to exempt
 specific memory contexts from the assertion. The following contexts are
 exempted:

 * ErrorContext
 * MdCxt, which is used in checkpointer to absorb fsync requests. (the
 checkpointer process as a whole is no longer exempt)
 * The temporary StringInfos used in WAL_DEBUG (a new memory WAL Debug
 context is now created for them)
 * LWLock stats hash table (a new LWLock stats context is created for it)

 Barring objections, I'll commit this to master, and remove the assertion
 from REL9_4_STABLE.

 - Heikki







Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Andres Freund
On 2014-06-24 10:04:03 -0400, Robert Haas wrote:
 On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing vik.fear...@dalibo.com wrote:
  My reasoning for doing it the way I did is that if a transaction touches
  a foreign table and then goes bumbling along with other things the
  transaction is active but the connection to the remote server remains
  idle in transaction.  If it hits the timeout, when the local transaction
  goes to commit it errors out and you lose all your work.
 
  If the local transaction is actually idle in transaction and the local
  server doesn't have a timeout, we're no worse off than before this patch.
 
 I think we are.  First, the correct timeout is a matter of
 remote-server-policy, not local-server-policy.  If the remote server
 wants to boot people with long-running idle transactions, it's
 entitled to do that, and postgres_fdw shouldn't assume that it's
 special.  The local server policy may be different, and may not even
 have been configured by the same person.  Second, setting another GUC
 at every session start adds overhead for all users of postgres_fdw.

+1

 Now, it might be that postgres_fdw should have a facility to allow
 arbitrary options to be set on the foreign side at each connection
 startup.  Then that could be used here if someone wants this behavior.
 But I don't think we should hard-code it, because it could also be NOT
 what someone wants.

I think options='-c idle_in_transaction_timeout=0' in the server config
should already do the trick.

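For instance, a minimal sketch (server name and connection details are
invented here; the mechanism is libpq's options keyword passed through the
server definition):

    CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'app',
                 options '-c idle_in_transaction_timeout=0');
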
Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Alvaro Herrera
Robert Haas wrote:

  Right, if we were to support multiple policies on a given table then we
  would have to support adding and removing them individually, as well as
  specify when they are to be applied- and what if that when overlaps?
  Do we apply both and only a row which passed them all gets sent to the
  user?  Essentially we'd be defining the RLS policies to be AND'd
  together, right?  Would we want to support both AND-based and OR-based,
  and allow users to pick what set of conditionals they want applied to
  their various overlapping RLS policies?
 
 AND is not a sensible policy; it would need to be OR.  If you grant
 someone access to two different subsets of the rows in a table, it
 stands to reason that they will expect to have access to all of the
 rows that are in at least one of those subsets.

I haven't been following this thread, but this bit caught my attention.
I'm not sure I agree that OR is always the right policy either.
There is a case for a policy that says forbid these rows to these guys,
even if they have read permissions from elsewhere.  If OR is the only
way to mix multiple policies there might not be a way to implement this.
So ISTM each policy must be able to indicate what to do -- sort of how
PAM config files allow you to specify "required", "optional" and so
forth for each module.

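In purely hypothetical syntax (neither these statements nor the table are
from the patch under discussion), the PAM-style idea could look roughly like:

    -- hypothetical: permissive policies are OR'd together ...
    CREATE POLICY sales_reps ON data AS PERMISSIVE
        USING (sales_rep_id = (SELECT oid FROM pg_roles
                               WHERE rolname = current_user));
    -- ... while a restrictive policy is AND'd on top, able to forbid rows
    -- that a permissive policy would otherwise allow
    CREATE POLICY hide_flagged ON data AS RESTRICTIVE
        USING (NOT flagged);
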
-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 10:05 AM, Robert Haas [via PostgreSQL] 
ml-node+s1045698n580889...@n5.nabble.com wrote:

 On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing [hidden email] wrote:

  On 06/22/2014 05:11 PM, Kevin Grittner wrote:
  I found one substantive issue that had been missed in discussion,
  though.  The patch modifies the postgres_fdw extension to make it
  automatically exempt from an attempt to set a limit like this on
  the server to which it connects.  I'm not sure that's a good idea.
  Why should this type of connection be allowed to sit indefinitely
  with an idle open transaction?  I'm inclined to omit this part of
  the patch
 
  My reasoning for doing it the way I did is that if a transaction touches
  a foreign table and then goes bumbling along with other things the
  transaction is active but the connection to the remote server remains
  idle in transaction.  If it hits the timeout, when the local transaction
  goes to commit it errors out and you lose all your work.
 
  If the local transaction is actually idle in transaction and the local
  server doesn't have a timeout, we're no worse off than before this
 patch.

 I think we are.  First, the correct timeout is a matter of
 remote-server-policy, not local-server-policy.  If the remote server
 wants to boot people with long-running idle transactions, it's
 entitled to do that, and postgres_fdw shouldn't assume that it's
 special.  The local server policy may be different, and may not even
 have been configured by the same person.  Second, setting another GUC
 at every session start adds overhead for all users of postgres_fdw.

 Now, it might be that postgres_fdw should have a facility to allow
 arbitrary options to be set on the foreign side at each connection
 startup.  Then that could be used here if someone wants this behavior.
 But I don't think we should hard-code it, because it could also be NOT
 what someone wants.


The missing ability is that while the user only cares about the one logical
session we are dealing with two physical sessions in a parent-child
relationship where the child session state does not match that of its
parent.  For me, this whole line of thought is based upon the logical
idle_in_transaction - did the application really mean to leave this
hanging?

Say that 90% of the time disabling the timeout will be the correct course
of action; making the user do this explicitly does not seem reasonable.
 And if "doesn't matter" is the current state when the foreign server is
configured no setting will be passed.  Then if the remote server does
institute a timeout all the relevant configurations will need to be changed.

ISTM that the additional overhead in this case would be very small in
percentage terms; at least enough so that usability would be my default
choice.

I have no problem allowing for user-specified behavior but the default of
disabling the timeout seems reasonable.  I am doubting that actually
synchronizing the parent and child sessions, so that the child reports the
same status as the parent, is a valid option - though it would be the
best solution since the child would only report IIT if the parent was IIT.

For me, a meaningful default and usability are trumping the unknown
performance degradation.  I can go either way on allowing the local
definition to specify its own non-zero timeout but it probably isn't worth
the effort.  The foreign server administrator ultimately will have to be
aware of which users are connecting via FDW and address his long-running
transaction concerns in a more nuanced way than this parameter allows.  In
effect this becomes an 80% solution because it is not (all that) useful on
the remote end of a FDW connection; though at least the local end can make
proper use of it to protect both servers.

David J.





Re: [HACKERS] crash with assertions and WAL_DEBUG

2014-06-24 Thread Alvaro Herrera
Heikki Linnakangas wrote:
 On 06/21/2014 01:58 PM, Heikki Linnakangas wrote:
 It's a bit difficult to attach the mark to the palloc calls, as neither
 the WAL_DEBUG or LWLOCK_STATS code is calling palloc directly, but
 marking specific MemoryContexts as sanctioned ought to work. I'll take a
 stab at that.
 
 I came up with the attached patch. It adds a function called
 MemoryContextAllowInCriticalSection(), which can be used to exempt
 specific memory contexts from the assertion. The following contexts
 are exempted:

There is a typo in the comment to that function, "This functions can be
used", s/functions/function/

Andres Freund wrote:

  @@ -1258,6 +1259,25 @@ begin:;
       if (XLOG_DEBUG)
       {
           StringInfoData buf;
  +        static MemoryContext walDebugCxt = NULL;
  +        MemoryContext oldCxt;
  +
  +        /*
  +         * Allocations within a critical section are normally not allowed,
  +         * because allocation failure would lead to a PANIC. But this is just
  +         * debugging code that no-one is going to enable in production, so we
  +         * don't care. Use a memory context that's exempt from the rule.
  +         */
  +        if (walDebugCxt == NULL)
  +        {
  +            walDebugCxt = AllocSetContextCreate(TopMemoryContext,
  +                                                "WAL Debug",
  +                                                ALLOCSET_DEFAULT_MINSIZE,
  +                                                ALLOCSET_DEFAULT_INITSIZE,
  +                                                ALLOCSET_DEFAULT_MAXSIZE);
  +            MemoryContextAllowInCriticalSection(walDebugCxt, true);
  +        }
  +        oldCxt = MemoryContextSwitchTo(walDebugCxt);
 
 This will only work though if the first XLogInsert() isn't called from a
 critical section. I'm not sure it's a good idea to rely on that.

Ah, true -- AllocSetContextCreate cannot be called from within a
critical section.

  diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
  index 3c1c81a..4264373 100644
  --- a/src/backend/storage/smgr/md.c
  +++ b/src/backend/storage/smgr/md.c
  @@ -219,6 +219,16 @@ mdinit(void)
                                             &hash_ctl,
                                             HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
           pendingUnlinks = NIL;
  +
  +        /*
  +         * XXX: The checkpointer needs to add entries to the pending ops
  +         * table when absorbing fsync requests. That is done within a critical
  +         * section. It means that there's a theoretical possibility that you
  +         * run out of memory while absorbing fsync requests, which leads to
  +         * a PANIC. Fortunately the hash table is small so that's unlikely to
  +         * happen in practice.
  +         */
  +        MemoryContextAllowInCriticalSection(MdCxt, true);
       }
   }
 
 Isn't that allowing a bit too much? We e.g. shouldn't allow
 _fdvec_alloc() within a critical section. Might make sense to create a
 child context for it.

I agree.

Rahila Syed wrote:

 The patch on compilation gives following error,
 
 mcxt.c: In function ‘MemoryContextAllowInCriticalSection’:
 mcxt.c:322: error: ‘struct MemoryContextData’ has no member named
 ‘allowInCriticalSection’
 
 The member in MemoryContextData is defined as 'allowInCritSection' while
 the MemoryContextAllowInCriticalSection accesses the field as
 'context->allowInCriticalSection'.

It appears Heikki did a search'n replace for ->allowInCritSection
before submitting, which failed to match the struct declaration.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 04:04 PM, Robert Haas wrote:
 If the local transaction is actually idle in transaction and the local
  server doesn't have a timeout, we're no worse off than before this patch.

 I think we are.  First, the correct timeout is a matter of
 remote-server-policy, not local-server-policy.  If the remote server
 wants to boot people with long-running idle transactions, it's
 entitled to do that, and postgres_fdw shouldn't assume that it's
 special.

So how would the local transaction ever get its work done?  What option
does it have to tell the remote server that it isn't actually idling, it
just doesn't need to use the remote connection for a while?

Once the remote times out, the local transaction is doomed (and won't
even know it until it tries to commit).  If we don't allow the fdw to be
special, then the local transaction can't run at all.  Ever.

The point of the patch is to allow the DBA to knock off broken clients,
but this isn't a broken client, it just looks like one.
-- 
Vik




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Dave McGuire
On 06/23/2014 06:58 PM, Greg Stark wrote:
 On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas robertmh...@gmail.com wrote:
 However, we don't know of anyone who has tried to do this in a very
 long time, and are therefore considering removing the remaining
 support for the VAX platform.  Has anyone tried to build PostgreSQL
 for VAX lately?
 
 Actually I tried a while ago but got stuck configuring the network on
 simh so I could get all the tools. I can try again if there's interest
 but we don't necessarily need to keep a port just because there's a
 simulator for it.

  ...not to mention actual hardware.

   -Dave

-- 
Dave McGuire, AK4HZ/3
New Kensington, PA




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Sebastian Reitenbach

On Tuesday, June 24, 2014 03:12 CEST, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark st...@mit.edu wrote:
  On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas robertmh...@gmail.com wrote:
  However, we don't know of anyone who has tried to do this in a very
  long time, and are therefore considering removing the remaining
  support for the VAX platform.  Has anyone tried to build PostgreSQL
  for VAX lately?
 
  Actually I tried a while ago but got stuck configuring the network on
  simh so I could get all the tools. I can try again if there's interest
  but we don't necessarily need to keep a port just because there's a
  simulator for it.

 That's really up to you.  I'm not particularly interested in
 generating interest in maintaining this port if there wouldn't
 otherwise be any; I'm trying to figure out whether there is existing
 interest in it.  For all I know, whateverBSD is shipping PostgreSQL
 binaries for VAX and every other platform they support in each new
 release and people are using them to get real work done.  Then again,
 for all I know, it doesn't even compile on that platform, and if you
 did manage to get it to compile it wouldn't fit on the disk, and if
 you managed to fit it on the disk it wouldn't work because key system
 calls aren't supported.  If someone is still interested in this, I'm
 hoping they'll help us figure out whether it's anywhere close to
 working, and maybe even contribute a buildfarm critter.  If no one
 cares, then let's just rip it out and be done with it.


I'm building the vax packages for openbsd. What I can tell is that
for 5.5 no postgresql packages were built. But that may be
due to the recent upgrade from gcc 2.95 to 3.3.
I guess that not all dependencies to actually build postgresql
are available for the vax, or may not build successfully there. But I need
to verify. Might need a few days, since I'm currently on vacation,
with sparse Internet connectivity. ;)

Sebastian


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company










Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Sebastian Reitenbach

On Tuesday, June 24, 2014 13:37 CEST, Sebastian Reitenbach 
sebas...@l00-bugdead-prods.de wrote:

 On Tuesday, June 24, 2014 03:12 CEST, Robert Haas robertmh...@gmail.com 
 wrote:

  On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark st...@mit.edu wrote:
   On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas robertmh...@gmail.com 
   wrote:
   However, we don't know of anyone who has tried to do this in a very
   long time, and are therefore considering removing the remaining
   support for the VAX platform.  Has anyone tried to build PostgreSQL
   for VAX lately?
  
   Actually I tried a while ago but got stuck configuring the network on
   simh so I could get all the tools. I can try again if there's interest
   but we don't necessarily need to keep a port just because there's a
   simulator for it.
 
  That's really up to you.  I'm not particularly interested in
  generating interest in maintaining this port if there wouldn't
  otherwise be any; I'm trying to figure out whether there is existing
  interest in it.  For all I know, whateverBSD is shipping PostgreSQL
  binaries for VAX and every other platform they support in each new
  release and people are using them to get real work done.  Then again,
  for all I know, it doesn't even compile on that platform, and if you
  did manage to get it to compile it wouldn't fit on the disk, and if
  you managed to fit it on the disk it wouldn't work because key system
  calls aren't supported.  If someone is still interested in this, I'm
  hoping they'll help us figure out whether it's anywhere close to
  working, and maybe even contribute a buildfarm critter.  If no one
  cares, then let's just rip it out and be done with it.
 

 I'm building the vax packages for openbsd. What I can tell is that
 for 5.5 no postgresql packages were built. But that may be
 due to the recent upgrade from gcc 2.95 to 3.3.
 I guess that not all dependencies to actually build postgresql
 are available for the vax, or may not build successfully there. But I need
 to verify. Might need a few days, since I'm currently on vacation,
 with sparse Internet connectivity. ;)

OK, that was easy:

$ cd /usr/ports/databases/postgresql   
$ make install
===>  postgresql-client-9.3.4p0  requires shared libraries .

OpenBSD VAX is static only, so there will be no postgresql on OpenBSD
VAX until shared libraries are ever made to work on it.

cheers,
Sebastian



 Sebastian


  --
  Robert Haas
  EnterpriseDB: http://www.enterprisedb.com
  The Enterprise PostgreSQL Company










Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread David Brownlee
Well the latest NetBSD/vax package build doesn't seem to include any
PostgreSQL packages
http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/vax/6.0_2014Q1/ but I
don't know why.

I'll try a quick (hah :) build this end to see what happens

David



On 24 June 2014 02:12, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark st...@mit.edu wrote:
  On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas robertmh...@gmail.com
 wrote:
  However, we don't know of anyone who has tried to do this in a very
  long time, and are therefore considering removing the remaining
  support for the VAX platform.  Has anyone tried to build PostgreSQL
  for VAX lately?
 
  Actually I tried a while ago but got stuck configuring the network on
  simh so I could get all the tools. I can try again if there's interest
  but we don't necessarily need to keep a port just because there's a
  simulator for it.

 That's really up to you.  I'm not particularly interested in
 generating interest in maintaining this port if there wouldn't
 otherwise be any; I'm trying to figure out whether there is existing
 interest in it.  For all I know, whateverBSD is shipping PostgreSQL
 binaries for VAX and every other platform they support in each new
 release and people are using them to get real work done.  Then again,
 for all I know, it doesn't even compile on that platform, and if you
 did manage to get it to compile it wouldn't fit on the disk, and if
 you managed to fit it on the disk it wouldn't work because key system
 calls aren't supported.  If someone is still interested in this, I'm
 hoping they'll help us figure out whether it's anywhere close to
 working, and maybe even contribute a buildfarm critter.  If no one
 cares, then let's just rip it out and be done with it.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 10:30 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas wrote:
  Right, if we were to support multiple policies on a given table then we
  would have to support adding and removing them individually, as well as
  specify when they are to be applied- and what if that when overlaps?
  Do we apply both and only a row which passed them all gets sent to the
  user?  Essentially we'd be defining the RLS policies to be AND'd
  together, right?  Would we want to support both AND-based and OR-based,
  and allow users to pick what set of conditionals they want applied to
  their various overlapping RLS policies?

 AND is not a sensible policy; it would need to be OR.  If you grant
 someone access to two different subsets of the rows in a table, it
 stands to reason that they will expect to have access to all of the
 rows that are in at least one of those subsets.

 I haven't been following this thread, but this bit caught my attention.
 I'm not sure I agree that OR is always the right policy either.
 There is a case for a policy that says forbid these rows to these guys,
 even if they have read permissions from elsewhere.  If OR is the only
 way to mix multiple policies there might not be a way to implement this.
 So ISTM each policy must be able to indicate what to do -- sort of how
 PAM config files allow you to specify "required", "optional" and so
 forth for each module.

Hmm.  Well, that could be useful, but I'm not sure I'd view it as
something we absolutely have to have...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 10:50 AM, Vik Fearing vik.fear...@dalibo.com wrote:
 On 06/24/2014 04:04 PM, Robert Haas wrote:
 If the local transaction is actually idle in transaction and the local
  server doesn't have a timeout, we're no worse off than before this patch.

 I think we are.  First, the correct timeout is a matter of
 remote-server-policy, not local-server-policy.  If the remote server
 wants to boot people with long-running idle transactions, it's
 entitled to do that, and postgres_fdw shouldn't assume that it's
 special.

 So how would the local transaction ever get its work done?  What option
 does it have to tell the remote server that it isn't actually idling, it
 just doesn't need to use the remote connection for a while?

It *is* idling.  You're going to get bloat, and lock contention, and
so on, just as you would for any other idle session.

I mean, you could make this assumption about any session: I'm not done
with the transaction yet, e.g. I'm waiting for user input before
deciding what to do next.  That doesn't mean that the DBA doesn't want
to kill it.

 The point of the patch is to allow the DBA to knock off broken clients,
 but this isn't a broken client, it just looks like one.

If it walks like a duck, and quacks like a duck, it's a duck.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 11:11 AM, Robert Haas [via PostgreSQL] 
ml-node+s1045698n5808915...@n5.nabble.com wrote:

 On Tue, Jun 24, 2014 at 10:50 AM, Vik Fearing [hidden email] wrote:

  On 06/24/2014 04:04 PM, Robert Haas wrote:
  If the local transaction is actually idle in transaction and the local
   server doesn't have a timeout, we're no worse off than before this
 patch.
 
  I think we are.  First, the correct timeout is a matter of
  remote-server-policy, not local-server-policy.  If the remote server
  wants to boot people with long-running idle transactions, it's
  entitled to do that, and postgres_fdw shouldn't assume that it's
  special.
 
  So how would the local transaction ever get its work done?  What option
  does it have to tell the remote server that it isn't actually idling, it
  just doesn't need to use the remote connection for a while?

 It *is* idling.  You're going to get bloat, and lock contention, and
 so on, just as you would for any other idle session.


If an application is making use of the foreign server directly then there
is the option to commit after using the foreign server, while saving the
relevant data for the main transaction.  But if you make use of API
functions there can only be a single transaction encompassing both the
local and foreign servers.  But even then, if the user needs a logical
super-transaction across both servers - even though the bulk of the work
occurs locally - that option to commit is then removed regardless of client
usage.

IMO this tool is too blunt to properly allow servers to self-manage
fdw-initiated transactions/sessions; and allowing it to be used is asking
for end-user confusion and frustration.

OTOH, requiring the administrator of the foreign server to issue an ALTER
ROLE fdw_user SET idle_in_transaction_session_timeout = 0; would be fairly
easy to justify.  Allowing them to distinguish between known long-running
and problematic transactions and those that are expected to execute quickly
has value as well.

Ultimately you give the users power and then just need to make sure we
provide sufficient documentation suggestions on how best to configure the
two servers in various typical usage scenarios.

David J.





Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Stephen Frost
Robert,

I feel like we are getting to the point of simply talking past each
other and so I'll try anew, and I'll include my understanding of how the
different approaches would address the specific use-case you outlined
up-thread.

Single policy
-
The current implementation approach only allows a single policy to be
included.  The concern raised with this approach is that it won't be
very performant due to the qual complexity, which you outlined
(reformatted a bit) as:

WHERE
  sales_rep_id = (SELECT oid FROM pg_roles
  WHERE rolname = current_user
  AND
  oid IN (SELECT id FROM person WHERE is_sales_rep))
  OR
  partner_id = (SELECT p.org_id
FROM pg_roles a, person p
WHERE a.rolname = current_user
  AND a.oid = p.id)

Which I take to mean there is a 'person' table which looks like:

id, is_sales_rep, org_id

and a table which has the RLS qual which looks like:

pk_id, sales_rep_id, partner_id

Then, if the individual is_sales_rep and it's their account by
sales_rep_id, or if the individual's org_id matches the partner_id, they
can see the record.

Using this example with security barrier views and indexes on person.id,
data.pk_id, data.sales_rep_id, and data.partner_id, we'll get a bitmap
heap scan across the 'data' table by having the two OR's run as
InitPlan 1 and InitPlan 2.

Does that address the concern you had around multi-branch OR policies?
This works with more than two OR branches also, though of course we need
appropriate indexes to make use of a Bitmap Heap Scan.

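A sketch of that setup, reusing the qual from above (the index and view
definitions are assumed here, not taken from an actual patch):

    CREATE INDEX ON data (sales_rep_id);
    CREATE INDEX ON data (partner_id);

    CREATE VIEW data_view WITH (security_barrier) AS
        SELECT * FROM data
        WHERE sales_rep_id = (SELECT oid FROM pg_roles
                              WHERE rolname = current_user
                                AND oid IN (SELECT id FROM person
                                            WHERE is_sales_rep))
           OR partner_id = (SELECT p.org_id
                            FROM pg_roles a, person p
                            WHERE a.rolname = current_user
                              AND a.oid = p.id);
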
Even with per-user policies, we would define a policy along these lines,
for the sfrost role:

WHERE
  sales_rep_id = 16384
  OR partner_id = 1

Which also ends up doing a Bitmap Heap Scan across the data table.

For the case where a sales rep isn't also a partner, you could simplify
this to:

WHERE
  sales_rep_id = 16384

but I'm not sure that really buys you much?  With the bitmap heap
scan, if one side of the OR ends up not returning anything then it
doesn't contribute to the blocks which have to be scanned.  The index
might still need to be scanned, although I think you could avoid even
that with an EXISTS check to see if the user is a partner at all.
That's not to say that a bitmap scan is equivalent to an index scan, but
it's certainly likely to be far better than a sequential scan.

Now, if the query is "select * from data_view where pk_id = 1002;", then
we get an indexed lookup on the data table based on the PK.  That's what
I was trying to point out previously regarding leakproof functions
(which comprise about half of the boolean functions we provide, if I
recall my previous analysis correctly).  We also get indexed lookups
with "pk_id > 10" or similar, as those are also leakproof.

Multiple, Overlapping policies
--
Per discussion, these would generally be OR'd together.

Building up the overall qual which has to include an OR branch for each
individual policy qual(s) looks like a complicated bit of work and one
which might be better left to the user (and, as just pointed out, the
user may actually want AND instead of OR in some cases..).

Managing the plan cache in a sensible way is certainly made more
complicated by this and might mean that it can't be used at all, which
has already been raised as a show-stopper issue.

In the example which you provided, we could represent that the two
policies exist (sales representatives vs partners) and that they are to
be OR'd together in the catalog, but I don't immediately see how that
would change the qual which ends up being added to the query in this
case or really improve the overall query plan; at least, not without
eliminating one of the OR branches somehow - which I discuss below.

Multiple, Non-overlapping policies
--
Preventing the overlap of policies ends up being very complicated if
many dimensions are allowed.  For the simple case, perhaps only the
'current role' dimension is useful.  I expect that going down that
route would very quickly lead to requests for other dimensions (client
IP, etc) which is why I'm not a big fan of it, but if that's the
consensus then let's work out the syntax and update the patch and move
on.

Another option might be to have a qual for each policy which
the user can define that indicates if that policy is to be applied or
not, and then simply pick the first policy for which that qual
returns 'true'.  We would require an ordering to be defined in this
case, which I believe was an issue up-thread.  If we allow all policies
matching the quals then we run into the complications mentioned under
Overlapping policies above.

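Again in purely invented syntax (the WHEN/ORDER clauses and the role name
are illustrative only), that variant might read:

    -- hypothetical: apply the first policy, in order, whose WHEN qual is true
    CREATE POLICY reps ON data ORDER 1
        WHEN (pg_has_role(current_user, 'sales', 'MEMBER'))
        USING (sales_rep_id = (SELECT oid FROM pg_roles
                               WHERE rolname = current_user));
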
If we decide that per-role policies need to be supported, I very
quickly see the need to have groups of roles to which a policy is to
be applied.  This would differ from roles today as they would not be
allowed to overlap (otherwise we are into overlapping 

Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
Hello

regress tests fail:

 plancache... ok
 limit... ok
 plpgsql  ... ok
 copy2... ok
 temp ... FAILED
 domain   ... ok
 rangefuncs   ... ok
 prepare  ... ok
 without_oid  ... ok
 conversion   ... ok
 truncate ... ok
 alter_table  ... ok
 sequence ... ok

I did some small tests and it works well. When I looked at the code, I was
surprised by the hardcoded max nesting level of autonomous transactions:

#define MAX_AUTOX_NESTING_LEVEL   3

Why? Isn't it too restrictive?

I am missing regression tests.

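For reference, the core behavior such tests would need to pin down is
roughly this sketch (only the START AUTONOMOUS TRANSACTION syntax is from
the patch description above; the table and the assumption that COMMIT ends
the inner transaction are invented):

    BEGIN;
    INSERT INTO t VALUES (1);
    START AUTONOMOUS TRANSACTION;
    INSERT INTO t VALUES (2);
    COMMIT;     -- presumably ends only the autonomous transaction
    ROLLBACK;   -- outer transaction aborts: row 1 is gone, row 2 persists
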
Regards

Pavel






2014-06-18 11:19 GMT+02:00 Rajeev rastogi rajeev.rast...@huawei.com:

 On 17 June 2014 02:01, Alvaro Herrera Wrote:

  What's the status of this patch?

 I have completed work on this and some more changes are done on top of
 earlier patch shared:
 1. Fixed all of the issues observed.
 2. Addressed some of the feedback from community like
 a. Change the syntax to
 START AUTONOMOUS TRANSACTION [READ ONLY | READ
 WRITE]
 b. As Pavan had pointed, I have made transaction behavior (only
 read-only properties) of main and autonomous transaction independent.
 3. Added documentation for this feature.
 4. Rebased to latest git code.

 Please find the attached latest patch and provide opinion.

 Thanks and Regards,
 Kumar Rajeev Rastogi







regression.diffs
Description: Binary data



Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
postgres=# select version();

version
-
 PostgreSQL 9.5devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.8.2 20131212 (Red Hat 4.8.2-7), 64-bit
(1 row)



2014-06-24 18:39 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com:

 Hello

 regress tests fail:

  plancache... ok
  limit... ok
  plpgsql  ... ok
  copy2... ok
  temp ... FAILED
  domain   ... ok
  rangefuncs   ... ok
  prepare  ... ok
  without_oid  ... ok
  conversion   ... ok
  truncate ... ok
  alter_table  ... ok
  sequence ... ok

 I did some small tests and it works well. When I looked at the code, I was
 surprised by the hardcoded max nesting level of autonomous transactions:

 #define MAX_AUTOX_NESTING_LEVEL   3

 Why? Isn't it too restrictive?

 I am missing regression tests.

 Regards

 Pavel






 2014-06-18 11:19 GMT+02:00 Rajeev rastogi rajeev.rast...@huawei.com:

 On 17 June 2014 02:01, Alvaro Herrera Wrote:

  What's the status of this patch?

 I have completed work on this and some more changes are done on top of
 earlier patch shared:
 1. Fixed all of the issues observed.
 2. Addressed some of the feedback from community like
 a. Change the syntax to
 START AUTONOMOUS TRANSACTION [READ ONLY | READ
 WRITE]
 b. As Pavan had pointed, I have made transaction behavior (only
 read-only properties) of main and autonomous transaction independent.
 3. Added documentation for this feature.
 4. Rebased to latest git code.

 Please find the attached latest patch and provide opinion.

 Thanks and Regards,
 Kumar Rajeev Rastogi








temp.out
Description: Binary data



Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/23/2014 03:52 PM, Andres Freund wrote:
 On 2014-06-23 13:19:47 -0700, Kevin Grittner wrote:
 which already seems less clear (because the transaction belongs
 to idle)

 I have no idea what that means.
 
 It's <idle_in_transaction>_session_timeout. Not
 idle_<in_transaction_session>_timeout.

 and for another that distinction seems to be too subtle for users.

 The difference between an "idle in transaction session" and an
 "idle transaction" is too subtle for someone preparing to terminate
 one of those?

 Yes. To me that's an academic distinction. As a nonnative speaker it
 looks pretty much random that one has an "in" in it and the other
 doesn't. Maybe I'm just having a grammar fail, but there doesn't seem to
 be much sense in it.

As a native speaker, I find the distinction elusive as well.  If someone
was actually planning to commit "transaction cancel", I'd object to it.

And frankly, it doesn't make any sense to have two independent timeouts
anyway.  Only one of them would ever be invoked, whichever one came
first.  If you really want to plan for a feature I doubt anyone is going
to write, the appropriate two GUCs are:

idle_transaction_timeout: ## ms
idle_transaction_timeout_action: cancel | terminate

However, since I'm not convinced that anyone is ever going to write the
cancel version, can we please just leave the 2nd GUC out for now?

 A long idle in transaction state pretty much always indicates a
 problematic interaction with postgres.

 True.  Which makes me wonder whether we shouldn't default this to
 something non-zero -- even if it is 5 or 10 days.

I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
trip up some users who just need really long pg_dumps.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Tom Lane
Sebastian Reitenbach sebas...@l00-bugdead-prods.de writes:
 OK, that was easy:

 $ cd /usr/ports/databases/postgresql   
 $ make install
 ===  postgresql-client-9.3.4p0  requires shared libraries .

 OpenBSD VAX is static only, so no postgresql on OpenBSD
 VAX before shared libraries will ever be made working on it.

Ouch.  We long ago passed the point of no return as far as requiring
shared library support: there's too much backend functionality that's
in separate shared libraries rather than being linked directly into
the core executable.  I doubt anyone will be interested in taking on
the task of supporting a parallel all-static build.

I think this means we can write off VAX on NetBSD/OpenBSD as a viable
platform for Postgres :-(.  I'm sad to hear it, but certainly have
not got the cycles personally to prevent it.

regards, tom lane




Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
Hello

There are a lot of unnecessary blocks around single statements in the code:

+   if ((inAutoX) && (chunk == events->head) &&
+       ((char *) event > afterTriggers->events_stack[my_level].tailfree))
+   {
+   continue;
+   }
+


and there are a few too-long lines

Regards

Pavel


2014-06-24 18:40 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com:

 postgres=# select version();

 version

 -
  PostgreSQL 9.5devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
 4.8.2 20131212 (Red Hat 4.8.2-7), 64-bit
 (1 row)



 2014-06-24 18:39 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com:

 Hello

  regress tests fail:

  plancache... ok
  limit... ok
  plpgsql  ... ok
  copy2... ok
  temp ... FAILED
  domain   ... ok
  rangefuncs   ... ok
  prepare  ... ok
  without_oid  ... ok
  conversion   ... ok
  truncate ... ok
  alter_table  ... ok
  sequence ... ok

 I did some small tests and it works well. When I looked at the code, I was
 surprised by the hardcoded max nesting level of autonomous transactions:

 #define MAX_AUTOX_NESTING_LEVEL   3

 Why? Isn't it too restrictive?

 I am missing regression tests.

 Regards

 Pavel






 2014-06-18 11:19 GMT+02:00 Rajeev rastogi rajeev.rast...@huawei.com:

 On 17 June 2014 02:01, Alvaro Herrera Wrote:

  What's the status of this patch?

 I have completed work on this and some more changes are done on top of
 earlier patch shared:
 1. Fixed all of the issues observed.
 2. Addressed some of the feedback from community like
 a. Change the syntax to
 START AUTONOMOUS TRANSACTION [READ ONLY | READ
 WRITE]
 b. As Pavan had pointed, I have made transaction behavior (only
 read-only properties) of main and autonomous transaction independent.
 3. Added documentation for this feature.
 4. Rebased to latest git code.

 Please find the attached latest patch and provide opinion.

 Thanks and Regards,
 Kumar Rajeev Rastogi









Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 06:43 PM, Josh Berkus wrote:
 A long idle in transaction state pretty much always indicates a
  problematic interaction with postgres.
 
  True.  Which makes me wonder whether we shouldn't default this to
  something non-zero -- even if it is 5 or 10 days.

 I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
 trip up some users who just need really long pg_dumps.

Why would pg_dump be idle for 24 hours?
-- 
Vik




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Pavel Stehule
2014-06-24 18:43 GMT+02:00 Josh Berkus j...@agliodbs.com:

 On 06/23/2014 03:52 PM, Andres Freund wrote:
  On 2014-06-23 13:19:47 -0700, Kevin Grittner wrote:
  which already seems less clear (because the transaction belongs
  to idle)
 
  I have no idea what that means.
 
  It's <idle_in_transaction>_session_timeout. Not
  idle_<in_transaction_session>_timeout.

  and for another that distinction seems to be too subtle for users.

  The difference between an "idle in transaction session" and an
  "idle transaction" is too subtle for someone preparing to terminate
  one of those?

  Yes. To me that's an academic distinction. As a nonnative speaker it
  looks pretty much random that one has an "in" in it and the other
  doesn't. Maybe I'm just having a grammar fail, but there doesn't seem to
  be much sense in it.

 As a native speaker, I find the distinction elusive as well.  If someone
 was actually planning to commit "transaction cancel", I'd object to it.

 And frankly, it doesn't make any sense to have two independent timeouts
 anyway.  Only one of them would ever be invoked, whichever one came
 first.  If you really want to plan for a feature I doubt anyone is going
 to write, the appropriate two GUCs are:

 idle_transaction_timeout: ## ms
 idle_transaction_timeout_action: cancel | terminate

 However, since I'm not convinced that anyone is ever going to write the
 cancel version, can we please just leave the 2nd GUC out for now?

  A long idle in transaction state pretty much always indicates a
  problematic interaction with postgres.
 
  True.  Which makes me wonder whether we shouldn't default this to
  something non-zero -- even if it is 5 or 10 days.

 I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
 trip up some users who just need really long pg_dumps.


Long transactions themselves should not be a problem - this should only
break a transaction when it has been doing nothing for a long time.

Regards

Pavel



 --
 Josh Berkus
 PostgreSQL Experts Inc.
 http://pgexperts.com





Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Merlin Moncure
On Tue, Jun 24, 2014 at 9:08 AM, Andrew Dunstan and...@dunslane.net wrote:
 w.r.t. json arrays, I think you're chasing a chimera, since they are
 heterogeneous, unlike SQL arrays.

But, there are many useful cases where the json is known to be well
formed, right?  Or do you mean that the difficulties stem from simply
validating the type?  Basically, I'm wondering if

SELECT to_json(foo_t[])

is ever going to be able to be reversed by:

SELECT array(json[b]_populate_recordset(null::foo_t[]), '...'::json[b])

...where foo_t is some arbitrarily complex nested type. even simpler
(although not necessarily faster) would be:

SELECT from_json(null::foo_t[], ',,,');

or even

SELECT '...'::foo_t[]::json::foo_t[];

My basic gripe with the json[b] APIs is that there is no convenient
deserialization reverse of to_json. Tom's proposal AIUI, in particular
having internal json arrays force to json, would foreclose the last
two cases from ever being possible.

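For a concrete (deliberately simple, non-nested) instance of the round trip
in question, with foo_t as a sketch type:

    CREATE TYPE foo_t AS (a int, b text);
    SELECT to_json(ARRAY[ROW(1,'x')::foo_t, ROW(2,'y')::foo_t]);
    -- yields [{"a":1,"b":"x"},{"a":2,"b":"y"}]; the open question is which,
    -- if any, of the forms above can turn that json back into foo_t[]
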
merlin




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/24/2014 07:50 AM, Vik Fearing wrote:
 On 06/24/2014 04:04 PM, Robert Haas wrote:
 If the local transaction is actually idle in transaction and the local
 server doesn't have a timeout, we're no worse off than before this patch.

 I think we are.  First, the correct timeout is a matter of
 remote-server-policy, not local-server-policy.  If the remote server
 wants to boot people with long-running idle transactions, it's
 entitled to do that, and postgres_fdw shouldn't assume that it's
 special.
 
 So how would the local transaction ever get its work done?  What option
 does it have to tell the remote server that it isn't actually idling, it
 just doesn't need to use the remote connection for a while?
 
 Once the remote times out, the local transaction is doomed (and won't
 even know it until it tries to commit).  If we don't allow the fdw to be
 special, then the local transaction can't run at all.  Ever.

I'm unclear on how the FDW could be special.  From the point of the
remote server, how does it even know that it's receiving an FDW
connection and not some other kind of connection?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-24 Thread Noah Misch
On Mon, Jun 23, 2014 at 05:16:15PM +0200, Andres Freund wrote:
 On 2014-06-23 10:29:54 -0400, Robert Haas wrote:
  Telling people that
  they can't have even the most minimal platform support code in
  PostgreSQL unless they're willing to contribute and maintain a BF VM
  indefinitely is not very friendly.  Of course, the risk of their
  platform getting broken is higher if they don't, but that's different
  than making it a hard requirement.
 
 I agree that we shouldn't actively try to break stuff. But having to
 understand & blindly modify unused code is on the other hand of actively
 breaking platforms. It's actively hindering development.

What I'm hearing is that you see two options, (1) personally authoring
e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
submitting the patch that would otherwise change it.  I favor middle ground
that lets minor platforms pay their own way.  Write your changes with as
little effort as you wish toward whether they run on sparcv8.  If they break
sparcv8, then either (a) that was okay, or (b) a user will show up with a
report and/or patch, and we'll deal with that.

For any minor-platform user sighing now, the community offers an unbeatable
deal on PostgreSQL committer time.  Provide a currently-passing buildfarm
member, and no PostgreSQL committer will be content until his new code works
there.  How can you pass that up?

(By break sparcv8, I mean a build failure, test suite failure, or large
performance regression.  If a change has the potential to make some
architectures give wrong answers only at odd times, that's a different kind of
problem.  For that reason, actively breaking Alpha is a good thing.)

  But I think this is all a bit off-topic for this thread.  Andres has
  already implemented a fallback for people who haven't got CAS and
  fetch-and-add on their platform, so whether or not we deprecate some
  more platforms has no bearing at all on this patch.
 
 While I indeed have that fallback code, that's statement is still not
 entirely true. We still need to add atomics support for lots of
 platforms, otherwise they're just going to be 'less supported' than
 now. Are we fine with that and just'll accept patches?

+1

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 06/24/2014 07:50 AM, Vik Fearing wrote:
 Once the remote times out, the local transaction is doomed (and won't
 even know it until it tries to commit).  If we don't allow the fdw to be
 special, then the local transaction can't run at all.  Ever.

 I'm unclear on how the FDW could be special.  From the point of the
 remote server, how does it even know that it's receiving an FDW
 connection and not some other kind of connection?

One way you could do it is to use a user id that's only for FDW
connections, and do an ALTER ROLE on that id to set the appropriate
timeout.

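Concretely, that would be something like this on the remote side (the role
and setting names are borrowed from earlier in the thread; the rest is
invented):

    CREATE ROLE fdw_user LOGIN PASSWORD 'secret';
    ALTER ROLE fdw_user SET idle_in_transaction_session_timeout = 0;

with the local side mapping its users onto that role:

    CREATE USER MAPPING FOR app_user SERVER remote_pg
        OPTIONS (user 'fdw_user', password 'secret');
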
Personally I'm violently against having postgres_fdw mess with this
setting; for one thing, the proposed coding would prevent DBAs from
controlling the timeout as they see fit, because it would override
any ALTER ROLE or other remote-side setting.  It doesn't satisfy the
POLA either.  postgres_fdw does not for example override
statement_timeout; why should it override this timeout?

regards, tom lane




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-24 Thread Andres Freund
On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
 On Mon, Jun 23, 2014 at 05:16:15PM +0200, Andres Freund wrote:
  On 2014-06-23 10:29:54 -0400, Robert Haas wrote:
   Telling people that
   they can't have even the most minimal platform support code in
   PostgreSQL unless they're willing to contribute and maintain a BF VM
   indefinitely is not very friendly.  Of course, the risk of their
   platform getting broken is higher if they don't, but that's different
   than making it a hard requirement.
  
  I agree that we shouldn't actively try to break stuff. But having to
  understand & blindly modify unused code is on the other hand of actively
  breaking platforms. It's actively hindering development.
 
 What I'm hearing is that you see two options, (1) personally authoring
 e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
 submitting the patch that would otherwise change it.  I favor middle ground
 that lets minor platforms pay their own way.  Write your changes with as
 little effort as you wish toward whether they run on sparcv8.  If they break
 sparcv8, then either (a) that was okay, or (b) a user will show up with a
 report and/or patch, and we'll deal with that.

Sounds sensible to me. But we should document such platforms as not
being officially supported in that case.

 If a change has the potential to make some architectures give wrong
 answers only at odd times, that's a different kind of problem.  For
 that reason, actively breaking Alpha is a good thing.

Not sure what you mean with the 'actively breaking Alpha' statement?
That we should drop Alpha?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 06/23/2014 03:52 PM, Andres Freund wrote:
 True.  Which makes me wonder whether we shouldn't default this to
 something non-zero -- even if it is 5 or 10 days.

 I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
 trip up some users who just need really long pg_dumps.

FWIW, I do not think we should have a nonzero default for this.
We could not safely set it to any value that would be small enough
to be really useful in the field.

BTW, has anyone thought about the interaction of this feature with
prepared transactions?  I wonder whether there shouldn't be a similar but
separately-settable maximum time for a transaction to stay in the prepared
state.  If we could set a nonzero default on that, perhaps on the order of
a few minutes, we could solve the ancient bugaboo that prepared
transactions are too dangerous to enable by default.

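For context, the hazard is that a prepared transaction outlives its session
indefinitely; in a sketch (table and gid invented):

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    PREPARE TRANSACTION 'xfer-1';
    -- the transaction now survives disconnects and even crashes, holding
    -- its locks and holding back vacuum until someone explicitly runs
    -- COMMIT PREPARED 'xfer-1' or ROLLBACK PREPARED 'xfer-1'
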
regards, tom lane




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-24 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
 If a change has the potential to make some architectures give wrong
 answers only at odd times, that's a different kind of problem.  For
 that reason, actively breaking Alpha is a good thing.

 Not sure what you mean with the 'actively breaking Alpha' statement?
 That we should drop Alpha?

+1.  Especially with no buildfarm critter.  Would anyone here care
to bet even the price of a burger that Alpha isn't broken already?

Even if we *had* an Alpha in the buildfarm, I'd have pretty small
confidence in whether our code really worked on it.  The buildfarm
tests just don't stress heavily-concurrent behavior enough.

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 07:17 PM, Tom Lane wrote:
 BTW, has anyone thought about the interaction of this feature with
 prepared transactions?  I wonder whether there shouldn't be a similar but
 separately-settable maximum time for a transaction to stay in the prepared
 state.  If we could set a nonzero default on that, perhaps on the order of
 a few minutes, we could solve the ancient bugaboo that prepared
 transactions are too dangerous to enable by default.

I did not think about that, but I could probably cook up a patch for it.
 I don't believe it belongs in this patch, though.
-- 
Vik




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-24 Thread Noah Misch
On Tue, Jun 24, 2014 at 07:09:08PM +0200, Andres Freund wrote:
 On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
  What I'm hearing is that you see two options, (1) personally authoring
  e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
  submitting the patch that would otherwise change it.  I favor middle ground
  that lets minor platforms pay their own way.  Write your changes with as
  little effort as you wish toward whether they run on sparcv8.  If they break
  sparcv8, then either (a) that was okay, or (b) a user will show up with a
  report and/or patch, and we'll deal with that.
 
 Sounds sensible to me. But we should document such platforms as not
 being officially supported in that case.

It is usually safe to make the documentation match the facts.

  If a change has the potential to make some architectures give wrong
  answers only at odd times, that's a different kind of problem.  For
  that reason, actively breaking Alpha is a good thing.
 
 Not sure what you mean with the 'actively breaking Alpha' statement?
 That we should drop Alpha?

Yes:
http://www.postgresql.org/message-id/ca+tgmozhgv_gowyfvcryetihpwnttk1dyea-o3f5+pue3tw...@mail.gmail.com

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Atomics hardware support table supported architectures

2014-06-24 Thread Andres Freund
On 2014-06-24 10:22:08 -0700, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
  If a change has the potential to make some architectures give wrong
  answers only at odd times, that's a different kind of problem.  For
  that reason, actively breaking Alpha is a good thing.
 
  Not sure what you mean with the 'actively breaking Alpha' statement?
  That we should drop Alpha?
 
 +1.  Especially with no buildfarm critter.  Would anyone here care
 to bet even the price of a burger that Alpha isn't broken already?

I'd actually be willing to bet a fair amount of money that it already is
broken. Especially in combination with an aggressively optimizing
compiler.

Then let's do that.

 Even if we *had* an Alpha in the buildfarm, I'd have pretty small
 confidence in whether our code really worked on it.  The buildfarm
 tests just don't stress heavily-concurrent behavior enough.

Yea.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Andres Freund
On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
 BTW, has anyone thought about the interaction of this feature with
 prepared transactions?  I wonder whether there shouldn't be a similar but
 separately-settable maximum time for a transaction to stay in the prepared
 state.  If we could set a nonzero default on that, perhaps on the order of
 a few minutes, we could solve the ancient bugaboo that prepared
 transactions are too dangerous to enable by default.

I'd very much like that feature, but I'm not sure how to implement
it. Which process would do that check? We currently only allow rollbacks
from the corresponding database...
The best idea I have is to do it via autovacuum.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Fujii Masao
On Mon, Jun 23, 2014 at 9:50 PM, Stephen Frost sfr...@snowman.net wrote:
 * Fujii Masao (masao.fu...@gmail.com) wrote:
 On Mon, Jun 23, 2014 at 7:51 PM, Abhijit Menon-Sen a...@2ndquadrant.com 
 wrote:
  At 2014-06-23 19:15:39 +0900, masao.fu...@gmail.com wrote:
  You added this into CF, but its patch has not been posted yet. Are you
  planning to make a patch?
 
  It's a self-contained contrib module. I thought Ian had posted a
  tarball, but it looks like he forgot to attach it (or decided to
  provide only a Github link). I've attached a tarball here for
  your reference.

 I'm not a huge fan of adding this as a contrib module unless we can be
 quite sure that there's a path forward from here to a rework of the
 logging in core which would actually support the features pg_audit is
 adding, without a lot of pain and upgrade issues.  Those issues have
 kept other contrib modules from being added to core.

 Splitting up contrib into other pieces, one of which is a 'features'
 area, might address that but we'd really need a way to have those pieces
 be able to include/add catalog tables, at least..

  If not, it might be better to implement audit feature in core from the
  beginning.
 
  Sure, we're open to that possibility. Do you have any ideas about what
  an in-core implementation should do/look like?

 I don't have good idea about that. But maybe we can merge pgaudit.log
 into log_statement for more flexible settings of what to log.

 I'd expect a catalog table or perhaps changes to pg_class (maybe other
 things also..) to define what gets logged..

I'm not sure if this is a good idea, because it basically means that the master
and every standby must have the same audit settings, and a user cannot
set what a standby logs on the standby side. Of course, I guess that the audit
settings on a standby would generally be similar to those on the master, but I'm
not sure that's always true.

Regards,

-- 
Fujii Masao




Re: [HACKERS] PostgreSQL in Windows console and Ctrl-C

2014-06-24 Thread Noah Misch
On Tue, Jun 24, 2014 at 09:24:43AM +, Christian Ullrich wrote:
 pg_ctl does not pass the option anywhere but on Windows, and postmaster.c 
 does not recognize it anywhere else. If it is encountered on a platform where 
 it does not make sense, it will be treated like any other (unknown) long 
 option.
 
 This is actually the weakest point of the existing patch, in my opinion. 
 Jamming the long option handling into postmaster.c by way of #ifdef WIN32 
 feels wrong, but I could not figure out a better way to do it.

I liked the proposal here; was there a problem with it?
http://www.postgresql.org/message-id/ca+tgmoz3ake4enctmqmzsykc_0pjl_u4c_x47ge48uy1upb...@mail.gmail.com

The pg_upgrade test suite and the $(prove_check)-based test suites rely on
their pg_ctl-started postmasters receiving any console ^C.  pg_ctl deserves a
--foreground or --no-background option for callers that prefer the current
behavior.  That, or those tests need a new way to launch the postmaster.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
 BTW, has anyone thought about the interaction of this feature with
 prepared transactions?  I wonder whether there shouldn't be a similar but
 separately-settable maximum time for a transaction to stay in the prepared
 state.  If we could set a nonzero default on that, perhaps on the order of
 a few minutes, we could solve the ancient bugaboo that prepared
 transactions are too dangerous to enable by default.

 I'd very much like that feature, but I'm not sure how to implement
 it. Which process would do that check? We currently only allow rollbacks
 from the corresponding database...
 The best idea I have is to do it via autovacuum.

I did not actually have any plan in mind when I wrote that, but your
mention of autovacuum suggests an idea for it: consider the code that
kicks autovacuum off a table when somebody wants exclusive lock.
In the same way, we could teach processes that want a lock that conflicts
with a prepared xact that they can kill the prepared xact if it's more
than X seconds old.

The other way in which old prepared xacts are dangerous is in blocking
cleanup of dead tuples, and I agree with your thought that maybe
autovacuum is the place to deal with that.  I don't know whether we'd
really need both mechanisms, or if just one would be enough.

In either case, this wouldn't directly be a timeout but rather a license
to kill once a prepared xact exceeds the threshold and is getting in
somebody's way.
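
Until such a mechanism exists, the manual equivalent is a periodic
check of the pg_prepared_xacts view. A minimal sketch, with a
five-minute threshold picked purely for illustration and 'some_gid'
standing in for a real GID:

    -- find prepared transactions that have been pending too long
    SELECT gid, prepared, owner, database
      FROM pg_prepared_xacts
     WHERE prepared < now() - interval '5 minutes';

    -- after confirming the transaction manager has really lost track
    -- of it, discard it (run in the database it belongs to)
    ROLLBACK PREPARED 'some_gid';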

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Josh Berkus j...@agliodbs.com writes:

 Which makes me wonder whether we shouldn't default this to
 something non-zero -- even if it is 5 or 10 days.

 I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that
 would trip up some users who just need really long pg_dumps.

 FWIW, I do not think we should have a nonzero default for this.
 We could not safely set it to any value that would be small enough
 to be really useful in the field.

I have seen production environments where users asked for help when
performance had gradually degraded to a fraction of what it was,
due to a connection sitting idle in transaction for several
weeks.  Even a timeout of five or ten days would have saved a lot
of pain.  What concerns me on the other side is that I've been
known to start a long-running conversion or data fix on a Friday
and check the results on Monday before committing.  Something like
that might sit for a day or two with little or no concurrent
activity to cause a problem.  It would be a real forehead-slapper
to have forgotten to set a longer timeout before starting the run
on Friday.  A five day timeout seems likely to prevent extreme pain
in the former circumstances while not being likely to mess up ad
hoc bulk activity like the latter.

Of course, if I were managing a cluster and was knowingly and
consciously setting a value, it would probably be more like 5min. 
If I have actually set such a policy I am much less likely to
forget it when it needs to be extended or disabled, and far less
likely to be mad at anyone else if it cancels my work.
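
With the GUC this patch proposes, that sort of policy would presumably
be spelled something like the following. This is only a sketch: I'm
assuming the GUC accepts the usual time units and can be set
cluster-wide with ALTER SYSTEM.

    ALTER SYSTEM SET idle_in_transaction_timeout = '5min';
    SELECT pg_reload_conf();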

 BTW, has anyone thought about the interaction of this feature with
 prepared transactions?  I wonder whether there shouldn't be a similar but
 separately-settable maximum time for a transaction to stay in the prepared
 state.  If we could set a nonzero default on that, perhaps on the order of
 a few minutes, we could solve the ancient bugaboo that prepared
 transactions are too dangerous to enable by default.

I thought about it enough to mention it briefly.  I haven't taken
it further than to note that it would be a great follow-up patch
once this is in.  I'm not sure that a few minutes would be
sufficient, though.  Theoretically, a crash of the transaction
manager, or one of the other data stores managed by it, or even a
WAN connection to one of the servers, should cause the transaction
manager to finish things up after recovery from the problem.  I
think that a default would need to allow sufficient time for that,
so we can have some confidence that the transaction manager has
actually lost track of it.  If I were configuring this for a real
production environment, I would be in mind of frequently having
seen WAN outages of several hours, and a few which lasted two or
three days.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Stephen Frost
* Fujii Masao (masao.fu...@gmail.com) wrote:
 I'm not sure if this is good idea because this basically means that master
 and every standbys must have the same audit settings and a user cannot
 set what standby logs in standby side. Of course I guess that the audit
 settings in standby would be similar to that in master generally, but I'm
 not sure if that's always true.

The main difference would be that the standby wouldn't be logging
anything about data changing..  but that's to be expected.

Certainly when auditing of select queries and similar actions are done
to satisfy government or industry compliance requirements, it's about
who reads the data, regardless of where that data is..

Thanks,

Stephen




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-06-24 Thread Sergey Muraviov
Hi.

Is there any problem with the patch?


2014-06-17 0:21 GMT+04:00 Greg Stark st...@mit.edu:

 On Mon, Jun 16, 2014 at 9:05 PM, Robert Haas robertmh...@gmail.com
 wrote:
  So, it seems like we need to do something about this one way or
  another.  Who's working on that?

 So I'm fine finishing what I started. I've just been a bit busy this past
 week.

 My inclination is to try to push forward and commit this patch,
 document the changes and make sure we check for any consequences of
 them.

 The alternate plan is to revert it for 9.4 and commit the changes to
 9.5 and that gives us more time to be sure we're ok with them.


 --
 greg




-- 
Best regards,
Sergey Muraviov


Re: [HACKERS] Add a filed to PageHeaderData

2014-06-24 Thread Pavan Deolasee
On Tue, Jun 24, 2014 at 3:40 PM, Kevin Grittner kgri...@ymail.com wrote:

 Soroosh Sardari soroosh.sard...@gmail.com wrote:

  I check this problem with a virgin source code of
  postgresql-9.3.2. So the bug is not for my codes.

  By the way, following code has two different output and it is
  weird.

 I can confirm that I see the difference in 9.3.2, and that I don't
 see the difference in 9.3.4.  Upgrade.

 http://www.postgresql.org/support/versioning/

 There's really no point in reporting a possible bug on a version
 with known bugs which have already had fixes published.


FWIW I can reproduce this on HEAD with the attached patch. I could
reproduce this on 64-bit Ubuntu as well as 64-bit Mac OS X. It is very
confusing, because I tried various values for N in the char[N] array and it
fails only for N=20. The other values I tried are 4, 12, 22, and 24, and the
test passes for all of them. The logic for trying the other values is to see
if pd_linp[] starting on an un-aligned boundary can trigger the issue, but
there seems to be no correlation.

postgres=# select version();

PostgreSQL 9.5devel on x86_64-apple-darwin13.2.0, compiled by Apple LLVM
version 5.1 (clang-503.0.38) (based on LLVM 3.4svn), 64-bit

postgres=# -- test SP-GiST index that's been built incrementally

postgres=# create table test_range_spgist(ir int4range);
postgres=# create index test_range_spgist_idx on test_range_spgist using
spgist (ir);
postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(1,586) g;
INSERT 0 586

postgres=# SET enable_seqscan= t;
postgres=# SET enable_indexscan  = f;
postgres=# SET enable_bitmapscan = f;

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
---
[90,100)
[500,510)
(2 rows)

postgres=# SET enable_seqscan= f;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
---
 [90,100)
 [500,510)
(2 rows)

At this point, both rows are visible via index scan as well as seq scan.

postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(587,587) g;
INSERT 0 1

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
--
 [90,100)
(1 row)

Ouch. The second row somehow disappeared.

postgres=# SET enable_seqscan= t;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
---
 [90,100)
 [500,510)
(2 rows)

So the last INSERT suddenly makes one row disappear via the index scan,
though it's still reachable via seq scan. I tried looking at the SP-GiST
code, but clearly I don't understand it well enough to figure out the issue,
if one exists.

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


page-header-padding.patch
Description: Binary data



Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread John Lumby
Thanks Heikki,


 Date: Tue, 24 Jun 2014 17:02:38 +0300
 From: hlinnakan...@vmware.com
 To: johnlu...@hotmail.com; st...@mit.edu
 CC: klaussfre...@gmail.com; pgsql-hackers@postgresql.org
 Subject: Re: Extended Prefetching using Asynchronous IO - proposal and patch

 On 06/24/2014 04:29 PM, John Lumby wrote:
 On Mon, Jun 23, 2014 at 2:43 PM, John Lumby johnlu...@hotmail.com wrote:
 It is when some *other* backend gets there first with the ReadBuffer that
 things are a bit trickier. The current version of the patch did polling 
 for that case
 but that drew criticism, and so an imminent new version of the patch
 uses the sigevent mechanism. And there are other ways still.

 I'm a bit puzzled by this though. Postgres *already* has code for this
 case. When you call ReadBuffer you set the bits on the buffer

 Good question. Let me explain.
 Yes, postgresql has code for the case where a backend is inside a synchronous
 read() or write(), performed from a ReadBuffer(), and some other backend
 wants that buffer. asynchronous aio is initiated not from ReadBuffer
 but from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
 postgresql buffer. This is entirely different from the synchronous io case.
 Why? Because the issuer of the aio_read (the originator) is unaware
 of this buffer pinned on its behalf, and is then free to do any other
 reading or writing it wishes, such as more prefetching or any other 
 operation.
 And furthermore, it may *never* issue a ReadBuffer for the block which it
 prefetched.

 I still don't see the difference. Once an asynchronous read is initiated
 on the buffer, it can't be used for anything else until the read has
 finished. This is exactly the same situation as with a synchronous read:
 after read() is called, the buffer can't be used for anything else until
 the call finishes.

Ah, now I see what you and Greg are asking. See my next reply, inline below.


 In particular, consider the situation from another backend's point of
 view. Looking from another backend (i.e. one that didn't initiate the
 read), there's no difference between a synchronous and asynchronous
 read. So why do we need a different IPC mechanism for the synchronous
 and asynchronous cases? We don't.

 I understand that *within the backend*, you need to somehow track the
 I/O, and you'll need to treat synchronous and asynchronous I/Os
 differently. But that's all within the same backend, and doesn't need to
 involve the flags or locks in shared memory at all. The inter-process
 communication doesn't need any changes.

 The problem with using the Buffer's I/O in progress bit is that the I/O
 might complete while the other backend is busy doing stuff. As long as
 you can handle the I/O completion promptly -- either in callback or
 thread or signal handler then that wouldn't matter. But I'm not clear
 that any of those will work reliably.

 They both work reliably, but the criticism was that backend B polling
 an aiocb of an aio issued by backend A is not documented as
 being supported (although it happens to work), hence the proposed
 change to use sigevent.

 You didn't understand what Greg meant. You need to handle the completion
 of the I/O in the same process that initiated it, by clearing the
 in-progress bit of the buffer and releasing the I/O in-progress lwlock
 on it. And you need to do that very quickly after the I/O has finished,
 because there might be another backend waiting for the buffer and you
 don't want him to wait longer than necessary.

I think I understand the question now. I didn't spell out the details
earlier.
Let me explain a little more.
With this patch, when a read is issued, it is either a synchronous IO
(as before) or an asynchronous aio_read (new, represented by
both BM_IO_IN_PROGRESS *and* BM_AIO_IN_PROGRESS).
The way other backends wait on a synchronous IO in progress is unchanged.
But if BM_AIO_IN_PROGRESS is set, then *any* backend which requests
ReadBuffer on this block (including the originator) follows a new path
through BufCheckAsync() which, depending on various flags and context,
sends the backend down to FileCompleteaio to check and maybe wait.
So *all* backends waiting for a BM_AIO_IN_PROGRESS buffer
will wait in that way.
  

 The question is, if you receive the notification of the I/O completion
 using a signal or a thread, is it safe to release the lwlock from the
 signal handler or a separate thread?

In the forthcoming new version of the patch that uses sigevent,
the originator locks an LWLock associated with that BAaiocb exclusive,
and, when signalled, in the signal handler it places that LWLock
on a process-local queue of LWLocks awaiting release.
(No, it cannot be safely released inside the signal handler or in a
separate thread.) Whenever the mainline passes a CHECK_FOR_INTERRUPTS macro,
and at a few additional points in bufmgr, the backend walks this process-local
queue and releases those LWLocks. This is also done if the originator
itself issues a ReadBuffer, which is the most frequent case in which the
lock is released.

Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 06:08 PM, John Lumby wrote:

The question is, if you receive the notification of the I/O completion
using a signal or a thread, is it safe to release the lwlock from the
signal handler or a separate thread?


In the forthcoming new version of the patch that uses sigevent,
the originator locks an LWLock associated with that BAaiocb exclusive,
and, when signalled, in the signal handler it places that LWLock
on a process-local queue of LWLocks awaiting release.
(No, it cannot be safely released inside the signal handler or in a
separate thread.) Whenever the mainline passes a CHECK_FOR_INTERRUPTS macro,
and at a few additional points in bufmgr, the backend walks this process-local
queue and releases those LWLocks. This is also done if the originator
itself issues a ReadBuffer, which is the most frequent case in which the
lock is released.

Meanwhile,  any other backend will simply acquire Shared and release.


Ok, doing the work in CHECK_FOR_INTERRUPTS sounds safe. But is that fast 
enough? We have never made any hard guarantees on how often 
CHECK_FOR_INTERRUPTS() is called. In particular, if you're running 3rd 
party C code or PL code, there might be no CHECK_FOR_INTERRUPTS() calls 
for many seconds, or even more. That's a long time to hold onto a buffer 
I/O lock. I don't think that's acceptable :-(.



I think you are right that the existing io_in_progress_lock LWLock in the
buf_header could be used for this, because if there is an aio in progress,
then that lock cannot be in use for synchronous IO. I chose not to use it
because I preferred to keep the wait/post for asynch IO separate,
but they could both use the same LWLock. However, the way the LWLock
is acquired and released would still be a bit different because of the need
to have the originator release it in its mainline.


It would be nice to use the same LWLock.

However, if releasing a regular LWLock in a signal handler is not safe, 
and cannot be made safe, perhaps we should, after all, invent a whole 
new mechanism. One that would make it safe to release the lock in a 
signal handler.



By the way, on the "will it actually work, though?" question which several
folks have raised, I should mention that this patch has been in
semi-production use for almost 2 years now, in different stages of
completion, on all postgresql releases from 9.1.4 to 9.5 devel. I would
guess it has had around 500 hours of operation by now. I'm sure there are
still bugs to be found, but I am confident it is fundamentally sound.


Well, a committable version of this patch is going to look quite
different from the first version that you posted, so I don't put much
weight on how long you've tested the first version.


Yes, I am quite willing to change it, time permitting.
I take the words "committable version" as a positive sign ...


BTW, sorry if I sound negative, I'm actually quite excited about this 
feature. A patch like this take a lot of work, and usually several 
rewrites, until it's ready ;-). But I'm looking forward for it.


- Heikki





Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
  BTW, has anyone thought about the interaction of this feature with
  prepared transactions?  I wonder whether there shouldn't be a similar but
  separately-settable maximum time for a transaction to stay in the prepared
  state.  If we could set a nonzero default on that, perhaps on the order of
  a few minutes, we could solve the ancient bugaboo that prepared
  transactions are too dangerous to enable by default.
 
  I'd very much like that feature, but I'm not sure how to implement
  it. Which process would do that check? We currently only allow rollbacks
  from the corresponding database...
  The best idea I have is to do it via autovacuum.
 
 I did not actually have any plan in mind when I wrote that, but your
 mention of autovacuum suggests an idea for it: consider the code that
 kicks autovacuum off a table when somebody wants exclusive lock.
 In the same way, we could teach processes that want a lock that conflicts
 with a prepared xact that they can kill the prepared xact if it's more
 than X seconds old.
 
 The other way in which old prepared xacts are dangerous is in blocking
 cleanup of dead tuples, and I agree with your thought that maybe
 autovacuum is the place to deal with that.  I don't know whether we'd
 really need both mechanisms, or if just one would be enough.
 
 In either case, this wouldn't directly be a timeout but rather a license
 to kill once a prepared xact exceeds the threshold and is getting in
 somebody's way.

Why isn't this what we want for idle-in-transaction sessions..?

Sounds like exactly what I'd want, at least.  Don't kill it off unless
it's blocking something or preventing xmin progression...

Indeed, we have specifically implemented a Nagios check which does
exactly this- looks to see if any idle-in-transaction process is
blocking something else and if it's been idle for too long it gets
killed.  We don't have prepared transactions enabled, so we haven't had
to address that.  We do have a check which alerts (but doesn't kill,
yet) idle-in-transaction processes which have been idle for a long time.
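
For reference, a simplified sketch of the kind of query such a check
can run. It only considers relation-level locks, and the one-hour
threshold is just an example; a real check would match the other
pg_locks fields as well:

    SELECT blocker.pid, blocker.usename, blocker.xact_start
      FROM pg_locks w
      JOIN pg_locks h
        ON h.locktype = w.locktype
       AND h.database IS NOT DISTINCT FROM w.database
       AND h.relation IS NOT DISTINCT FROM w.relation
       AND h.pid <> w.pid
       AND h.granted
      JOIN pg_stat_activity blocker ON blocker.pid = h.pid
     WHERE NOT w.granted
       AND w.locktype = 'relation'
       AND blocker.state = 'idle in transaction'
       AND blocker.xact_start < now() - interval '1 hour';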

Thanks,

Stephen




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Stephen Frost
Abhijit,

* Abhijit Menon-Sen (a...@2ndquadrant.com) wrote:
 At 2014-06-23 16:51:55 -0400, sfr...@snowman.net wrote:
  Are both the connected user and the current role that the command is
  running under logged?
 
 Yes, they are.

Ok, great, I couldn't remember.  Wish we had that ability in the current
logging code...

  I'd much rather have that in-core capability and I worry that adding
  pgaudit as an external feature now would end up preventing us from
  moving forward in this area for years to come..
 
 OK. I've marked the patch as rejected in the CF, but of course we hope
 to see further discussion about an in-core implementation for 9.5.

I'm certainly all for it, though I'm not sure if I'll have resources
myself to be able to make it happen this fall..  Will you (collectively)
be working in this direction for 9.5?  That'd certainly be great news
from my quadrant (pun fully intended ;).

Thanks!

Stephen




Re: [HACKERS] ALTER TABLESPACE MOVE command tag tweak

2014-06-24 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Stephen Frost sfr...@snowman.net writes:
  That it's more-or-less a bulk 'ALTER TABLE' operation is why I had been
  trying to think of a way to put it under that command.  What if we had a
  more general way to reference 'all objects in a tablespace'?
  tablespace.* or ALL:TABLESPACE?  Are there other places which might
  benefit from being able to take and operate on all objects in a
  tablespace?
 
  Of course, we handle this in 'GRANT' with 'GRANT ON ALL TABLES', so why
  not 'ALTER TABLE ON ALL TABLES IN TABLESPACE blah'?  that does get
  pretty darn verbose but is at least a bit more in-line with what we have
  done before..
 
 That's not a bad line of thought --- I doubt that verbosity is critical
 here.

Alright, sounds like this is more-or-less the consensus.  I'll see about
making it happen shortly.
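
i.e. something along these lines, with the exact spelling still to be
nailed down (tablespace names are placeholders):

    ALTER TABLE ALL IN TABLESPACE old_space SET TABLESPACE new_space;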

Thanks!

Stephen




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Tom Lane
Dave McGuire mcgu...@neurotica.com writes:
 On 06/24/2014 12:42 PM, Tom Lane wrote:
 I think this means we can write off VAX on NetBSD/OpenBSD as a viable
 platform for Postgres :-(.  I'm sad to hear it, but certainly have
 not got the cycles personally to prevent it.

   Nonono...NetBSD/vax has had shared library support for many years.
 It's only OpenBSD that has that limitation.

Ah, thanks for the clarification.

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/24/2014 10:17 AM, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 On 06/23/2014 03:52 PM, Andres Freund wrote:
 True.  Which makes me wonder whether we shouldn't default this to
 something non-zero -- even if it is 5 or 10 days.
 
 I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
 trip up some users who just need really long pg_dumps.
 
 FWIW, I do not think we should have a nonzero default for this.
 We could not safely set it to any value that would be small enough
 to be really useful in the field.

48 hours would actually be a useful value; I've dealt multiple times
with newbie users who had a transaction which had been open for a week.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] pg_receivexlog add synchronous mode

2014-06-24 Thread Fujii Masao
On Tue, Jun 24, 2014 at 3:18 PM,  furu...@pm.nttdata.co.jp wrote:
 I found that this patch breaks --status-interval option of
 pg_receivexlog when -m option which the patch introduced is supplied.
 When -m is set, pg_receivexlog tries to send the feedback message as soon
 as it flushes WAL file even if status interval timeout has not been passed
 yet. If you want to send the feedback as soon as WAL is written or flushed,
 like walreceiver does, you need to extend --status-interval option, for
 example, so that it accepts the value -1 which means enabling that
 behavior.

 Including this change in your original patch would make it more difficult
 to review. I think that you should implement this as separate patch.
 Thought?
 As your comments, the current specification to ignore the --status-intarvall.
 It is necessary to respond immediately to synchronize.

 It is necessary to think about specifications the --status-intarvall.
 So I revised it to a patch of flushmode which performed flush by a timing 
 same as walreceiver.

I'm not sure if it's a good idea to call the feature which you'd like to
add 'flush mode'. ISTM that 'flush mode' is vague and confusing for
users. Instead, what about adding something like --fsync-interval,
which pg_recvlogical supports?

 A changed part deletes the feedback message after flush, and transmitted the 
 feedback message according to the status interval.
 Change to flushmode from syncmode the mode name, and fixed the document.

+ * Receive a message available from XLOG stream, blocking for
+ * maximum of 'timeout' ms.

The above comment seems incorrect because 'timeout' is a boolean argument.

+FD_ZERO(&input_mask);
+FD_SET(PQsocket(conn), &input_mask);
+if (standby_message_timeout)

Why did you get rid of the check of the 'still_sending' flag here? Originally
the flag was checked, but not in the patch.

+r = rcv_receive(true , copybuf, conn,
standby_message_timeout, last_status, now);

When the return value is -2 (i.e., an error happened), we should go to
the 'error' label.

ISTM that stream_stop() should be called every time a message is
processed. But the patch changes pg_receivexlog so that it keeps
processing the received data without calling stream_stop(). This
seems incorrect.

'copybuf' needs to be free'd every time a new message is received. But you seem
to have forgotten to do that when rcv_receive() with no timeout is called.

Regards,

-- 
Fujii Masao




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Anders Magnusson

Tom Lane skrev 2014-06-24 18:42:

Sebastian Reitenbach sebas...@l00-bugdead-prods.de writes:

OK, that was easy:
$ cd /usr/ports/databases/postgresql
$ make install
===  postgresql-client-9.3.4p0  requires shared libraries .
OpenBSD VAX is static only, so no postgresql on OpenBSD
VAX before shared libraries will ever be made working on it.

Ouch.  We long ago passed the point of no return as far as requiring
shared library support: there's too much backend functionality that's
in separate shared libraries rather than being linked directly into
the core executable.  I doubt anyone will be interested in taking on
the task of supporting a parallel all-static build.

I think this means we can write off VAX on NetBSD/OpenBSD as a viable
platform for Postgres :-(.  I'm sad to hear it, but certainly have
not got the cycles personally to prevent it.


OpenBSD/vax is static only.  NetBSD/vax has dynamic libraries.

-- Ragge




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Matt Thomas

On Jun 24, 2014, at 9:42 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I think this means we can write off VAX on NetBSD/OpenBSD as a viable
 platform for Postgres :-(.  I'm sad to hear it, but certainly have
 not got the cycles personally to prevent it.

Why?  NetBSD/vax has supported shared libraries for a long long time.





Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Paul Koning

On Jun 24, 2014, at 12:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Sebastian Reitenbach sebas...@l00-bugdead-prods.de writes:
 OK, that was easy:
 
 $ cd /usr/ports/databases/postgresql   
 $ make install
 ===  postgresql-client-9.3.4p0  requires shared libraries .
 
 OpenBSD VAX is static only, so no postgresql on OpenBSD
 VAX before shared libraries will ever be made working on it.
 
 Ouch.  We long ago passed the point of no return as far as requiring
 shared library support: there's too much backend functionality that's
 in separate shared libraries rather than being linked directly into
 the core executable.  I doubt anyone will be interested in taking on
 the task of supporting a parallel all-static build.
 
 I think this means we can write off VAX on NetBSD/OpenBSD as a viable
 platform for Postgres :-(.  I'm sad to hear it, but certainly have
 not got the cycles personally to prevent it.

NetBSD and OpenBSD are different systems.  I don’t remember if NetBSD supports 
shared libraries on VAX, but that’s independent of the fact that OpenBSD 
doesn’t.

paul






Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Dave McGuire
On 06/24/2014 12:42 PM, Tom Lane wrote:
 Sebastian Reitenbach sebas...@l00-bugdead-prods.de writes:
 OK, that was easy:
 
 $ cd /usr/ports/databases/postgresql   
 $ make install
 ===  postgresql-client-9.3.4p0  requires shared libraries .
 
 OpenBSD VAX is static only, so no postgresql on OpenBSD
 VAX before shared libraries will ever be made working on it.
 
 Ouch.  We long ago passed the point of no return as far as requiring
 shared library support: there's too much backend functionality that's
 in separate shared libraries rather than being linked directly into
 the core executable.  I doubt anyone will be interested in taking on
 the task of supporting a parallel all-static build.
 
 I think this means we can write off VAX on NetBSD/OpenBSD as a viable
 platform for Postgres :-(.  I'm sad to hear it, but certainly have
 not got the cycles personally to prevent it.

  Nonono...NetBSD/vax has had shared library support for many years.
It's only OpenBSD that has that limitation.

-Dave

-- 
Dave McGuire, AK4HZ/3
New Kensington, PA




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Alvaro Herrera
Dave McGuire wrote:
 On 06/24/2014 12:42 PM, Tom Lane wrote:

  I think this means we can write off VAX on NetBSD/OpenBSD as a viable
  platform for Postgres :-(.  I'm sad to hear it, but certainly have
  not got the cycles personally to prevent it.
 
   Nonono...NetBSD/vax has had shared library support for many years.
 It's only OpenBSD that has that limitation.

So now we know that NetBSD/vax is free of the shared library limitation
that plagues OpenBSD, but does Postgres work on NetBSD/vax otherwise?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Keepalive-related socket options under FreeBSD 9, 10

2014-06-24 Thread Piotr Stefaniak
Since upgrading FreeBSD from 8 to 9, I've noticed the following messages
showing up in logs when a connection with pgAdmin3 is made:

LOG:  getsockopt(TCP_KEEPCNT) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')
LOG:  getsockopt(TCP_KEEPIDLE) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')
LOG:  getsockopt(TCP_KEEPINTVL) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')

tcp_keepalives_idle, tcp_keepalives_interval, and tcp_keepalives_count
are all set to the default (0), which means system default.

My guess as to what causes this:

src/backend/libpq/pqcomm.c apparently assumes that if TCP_KEEPIDLE &
friends are defined, then the respective options are readable, but
according to man tcp, that is not the case for FreeBSD 9 (and 10):

TCP_KEEPINIT
        This write-only setsockopt(2) option accepts a per-socket
        timeout argument of u_int in seconds, for new, non-established
        TCP connections.  For the global default in milliseconds see
        keepinit in the MIB Variables section further down.

As a work-around, I've set the keepalive options to the system defaults
provided by man tcp.
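
Concretely, that means replacing the 0 ("use the system default")
settings with explicit values. The numbers below are FreeBSD's defaults
from tcp(4) as I read them, converted to seconds, so treat them as an
example rather than a recommendation (on releases without ALTER SYSTEM,
the equivalent lines go into postgresql.conf):

    ALTER SYSTEM SET tcp_keepalives_idle = 7200;    -- net.inet.tcp.keepidle
    ALTER SYSTEM SET tcp_keepalives_interval = 75;  -- net.inet.tcp.keepintvl
    ALTER SYSTEM SET tcp_keepalives_count = 8;      -- net.inet.tcp.keepcnt
    SELECT pg_reload_conf();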




Re: [HACKERS] How about a proper TEMPORARY TABLESPACE?

2014-06-24 Thread Matheus de Oliveira
On Sun, Jun 22, 2014 at 2:35 AM, Craig Ringer cr...@2ndquadrant.com wrote:

 A way to put UNLOGGED objects in such a space and have them recovered
 if they vanish would also be valuable, IMO.

 Not necessarily in the same patch, I'd just rather keep it in mind so
 any chosen design doesn't preclude adding that later.


The idea is nice, but I think you should think more about it. Where would we
put the init files in this case? It surely can't be in the tablespace.

Best regards,
-- 
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres


Bug in spg_range_quad_inner_consistent for adjacent operator (was Re: [HACKERS] Add a filed to PageHeaderData)

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 08:48 PM, Pavan Deolasee wrote:

FWIW I can reproduce this on HEAD with the attached patch. I could
reproduce this on 64-bit Ubuntu as well as 64-bit Mac OS X. It is very
confusing, because I tried various values for N in the char[N] array and it
fails only for N=20. The other values I tried are 4, 12, 22, and 24, and the
test passes for all of them. The logic for trying the other values is to see
if pd_linp[] starting on an un-aligned boundary can trigger the issue, but
there seems to be no correlation.

postgres=# select version();

PostgreSQL 9.5devel on x86_64-apple-darwin13.2.0, compiled by Apple LLVM
version 5.1 (clang-503.0.38) (based on LLVM 3.4svn), 64-bit

postgres=# -- test SP-GiST index that's been built incrementally

postgres=# create table test_range_spgist(ir int4range);
postgres=# create index test_range_spgist_idx on test_range_spgist using
spgist (ir);
postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(1,586) g;
INSERT 0 586

postgres=# SET enable_seqscan= t;
postgres=# SET enable_indexscan  = f;
postgres=# SET enable_bitmapscan = f;

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
[90,100)
[500,510)
(2 rows)

postgres=# SET enable_seqscan= f;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
  [90,100)
  [500,510)
(2 rows)

At this point, both rows are visible via index scan as well as seq scan.

postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(587,587) g;
INSERT 0 1

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
--
  [90,100)
(1 row)

Ouch. The second row somehow disappeared.

postgres=# SET enable_seqscan= t;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
  [90,100)
  [500,510)
(2 rows)

So the last INSERT suddenly makes one row disappear via the index scan,
though it's still reachable via seq scan. I tried looking at the SP-GiST
code, but clearly I don't understand it well enough to figure out the issue,
if one exists.


Yeah, I can reproduce this. It doesn't seem to be related to the padding 
or alignment at all. The padding just happens to move tuples around so 
that [500, 510) is picked as an SP-GiST inner node.


The real bug is in spg_range_quad_inner_consistent(), for the adjacent 
operator. Things go wrong when:


The scan key is [100, 500)
The prev centroid is [500, 510)
The current centroid is [544, 554).

The row that should match but isn't returned, [500, 510), is equal to the
previous centroid. It's in quadrant 3 from the current centroid, but 
spg_range_quad_inner_consistent() incorrectly concludes that it doesn't 
need to scan that quadrant.


The function compares the scan key's upper bound with the previous
centroid's lower bound and the current centroid's lower bound:



/*
 * Check if upper bound of argument is not in a
 * quadrant we visited in the previous step.
 */
cmp1 = range_cmp_bounds(typcache, &upper, &prevLower);
cmp2 = range_cmp_bounds(typcache, &centroidLower, &prevLower);
if ((cmp2 < 0 && cmp1 > 0) || (cmp2 > 0 && cmp1 < 0))
    which2 = 0;


The idea is that if the scan key's upper bound doesn't fall between the 
prev and current centroid's lower bounds, there is no match.


  *   **
 PL   XCL

X = scan key's upper bound: 500)
PL = prev centroid's lower bound [500
CL = current centroid's lower bound [500

This is wrong. X < PL, but it is nevertheless adjacent to it.

I'll take a closer look tomorrow...

(The if (which2) ... block after the code I quoted above also looks 
wrong - it seems to be comparing the argument's lower bound when it 
should be comparing the upper bound according to the comment.)


- Heikki





Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Dean Rasheed
On 24 June 2014 17:27, Stephen Frost sfr...@snowman.net wrote:
 Single policy vs Multiple, Overlapping policies vs Multiple, Non-overlapping 
 policies


What I was describing upthread was multiple non-overlapping policies.

I disagree that this will be more complicated to use. It's a strict
superset of the single policy functionality, so if you want to do it
all using a single policy then you can. But I think that once the ACLs
reach a certain level of complexity, you probably will want to break
it up into multiple policies, and I think doing so will make things
simpler, not more complicated.

Taking a specific, simplistic example, suppose you had 2 groups of
users - some are normal users who should only be able to access their
own records. For these users, you might have a policy like

  WHERE person_id = current_user

which would be highly selective, and probably use an index scan. Then
there might be another group of users who are managers with access to
the records of, say, everyone in their department. This might then be
a more complex qual along the lines of

  WHERE person_id IN (SELECT ... FROM person_department
   WHERE mgr_id = current_user AND ...)

which might end up being a hash or merge join, depending on any
user-supplied quals.

You _could_ combine those into a single policy, but I think it would
be much better to have 2 distinct policies, since they're 2 very
different queries, for different use cases. Normal users would only be
granted permission to use the normal_user_policy. Managers might be
granted permission to use either the normal_user_policy or the
manager_policy (but not both at the same time).
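
To make that concrete, here is a hypothetical sketch of declaring those
two policies. The syntax and the table names are invented purely for
illustration, not a syntax proposal:

    CREATE POLICY normal_user_policy ON person
        USING (person_id = current_user);

    CREATE POLICY manager_policy ON person
        USING (person_id IN (SELECT person_id
                               FROM person_department
                              WHERE mgr_id = current_user));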

That's a very simplified example. In more realistic situations there
are likely to be many more classes of users, and trying to enforce all
the logic in a single WHERE clause is likely to get unmanageable, or
inefficient if it involves lots of logic hidden away in functions.
Allowing multiple, non-overlapping policies allows the problem to be
broken up into more manageable pieces, which also makes the planner's
job easier, since only a single, simpler policy is in effect in any
given query.

Regards,
Dean




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Dean Rasheed
Thinking about the examples upthread, a separate issue occurs to me
--- when defining a RLS qual, I think that there has to be a syntax to
specify an alias for the main table, so that correlated subqueries can
refer to it. I'm not sure if that's been mentioned in any of the
discussions so far, but it might be quite hard to define certain quals
without it.

Regards,
Dean




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Stephen Frost
Dean,

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 Thinking about the examples upthread, a separate issue occurs to me
 --- when defining a RLS qual, I think that there has to be a syntax to
 specify an alias for the main table, so that correlated subqueries can
 refer to it. I'm not sure if that's been mentioned in any of the
 discussions so far, but it might be quite hard to define certain quals
 without it.

Yeah, that thought had occurred to me also.  Have any suggestions about
how to approach that issue?  The way triggers have OLD/NEW comes to mind
but I'm not sure how easily that'd work.

Thanks,

Stephen




Re: [HACKERS] Allowing join removals for more join types

2014-06-24 Thread Simon Riggs
On 23 June 2014 12:06, David Rowley dgrow...@gmail.com wrote:

 It's not clear to me where you get the term sortclause from. This is
 either the groupclause or distinctclause, but in the test cases you
 provide this shows this has nothing at all to do with sorting since
 there is neither an order by or a sorted aggregate anywhere near those
 queries. Can we think of a better name that won't confuse us in the
 future?


 I probably got the word sort from the function targetIsInSortList, which
 expects a list of SortGroupClause. I've renamed the function to
 sortlist_is_unique_on_restrictinfo() and renamed the sortclause parameter to
 sortlist. Hopefully will reduce confusion about it being an ORDER BY clause
 a bit more. I think sortgroupclauselist might be just a bit too long. What
 do you think?

OK, perhaps I should be clearer. The word sort here seems completely
misplaced and we should be using a more accurately descriptive term.
It's slightly more than editing to rename things like that, so I'd
prefer you came up with a better name.

Did you comment on the transitive closure question? Should we add a
test for that, whether or not it works yet?

Other than that it looks pretty good to commit, so I'll wait a week
for other objections then commit.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 24 June 2014 11:32, David Rowley dgrowle...@gmail.com wrote:

 So if anyone can point me in the right direction then that would be
 really useful.

Many things can be added simply, but most things can't. It seems we
just don't have that information. If we did, Tom would have done this
already.

 On a more positive or even slightly exciting note I think I've managed to
 devise a way that ANTI JOINS can be used for NOT IN much more often. It
 seems that find_nonnullable_vars will analyse a quals list to find
 expressions that mean that the var cannot be NULL. This means we can perform
 ANTI JOINS for NOT IN with queries like:

 SELECT * FROM a WHERE id NOT IN(SELECT nullable_col FROM b WHERE
 nullable_col = 1);
 or
 SELECT * FROM a WHERE id NOT IN(SELECT nullable_col FROM b WHERE
 nullable_col IS NOT NULL);

 (The attached patch implements this)

 the nullable_col = 1 will mean that nullable_col cannot be NULL, so the ANTI
 JOIN can be performed safely. I think this combined with the NOT NULL check
 will cover probably just about all valid uses of NOT IN with a subquery...
 unless of course I've assumed something wrongly about find_nonnullable_vars.
 I just need the correct RangeTblEntry in order to determine if the
 TargetEntry is from an out join.

This is the better way to go. It's much better to have explicit proof
its not null than a possibly long chain of metadata that might be
buggy.

 The attached patch is a broken implemention that still needs the lookup code
 fixed to reference the correct RTE. The failing regression tests show where
 the problems lie.

 Any help on this would be really appreciated.

I'd suggest we just drop the targetlist approach completely.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 11 June 2014 17:52, Greg Stark st...@mit.edu wrote:
 On Wed, Jun 11, 2014 at 3:26 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 If we didn't have mechanisms like this, we'd have far worse hazards from
 ALTER TABLE than whether the planner made an incorrect join optimization.
 Consider ALTER COLUMN TYPE for instance.

 Obviously not general cases of ALTER COLUMN TYPE but dropping a NULL
 constraint seems like the kind of change targeted by Simon's reduce
 lock strength patch that I'm sure he's still interested in. I think
 that patch, while full of dragons to steer around, is something that
 will keep coming up again and again in the future. It's a huge
 operational risk that even these short exclusive locks can cause a
 huge production outage if they happen to get queued up behind a
 reporting query.

The focus of the lock strength reduction was around actions that lock
the table for extended periods. So it was mostly about adding things.
All the DROP actions are still AccessExclusiveLocks and will be for a
while.

Having said that, any join plan that relies upon a constraint will
still be valid even if we drop a constraint while the plan executes
because any new writes will not be visible to the executing join plan.
If we are relaxing a constraint, then a writable query that still
thinks a constraint exists won't cause a problem - it may error out
when it need not, but that's not so bad as to be worth worrying about.

So I think we can remove a NOT NULL constraint without too much problem.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Having said that, any join plan that relies upon a constraint will
 still be valid even if we drop a constraint while the plan executes
 because any new writes will not be visible to the executing join plan.

mumble ... EvalPlanQual ?

regards, tom lane




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 24 June 2014 23:44, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 Having said that, any join plan that relies upon a constraint will
 still be valid even if we drop a constraint while the plan executes
 because any new writes will not be visible to the executing join plan.

 mumble ... EvalPlanQual ?

As long as we are relaxing a constraint, we are OK if an earlier
snapshot thinks its dealing with a tighter constraint whereas the new
reality is a relaxed constraint.

The worst that could happen is we hit an ERROR from a constraint that
was in force at the start of the query, so for consistency we really
should be enforcing the same constraint throughout the lifetime of the
query.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing join removals for more join types

2014-06-24 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 Other than that it looks pretty good to commit, so I'll wait a week
 for other objections then commit.

I'd like to review this before it goes in.  I've been waiting for it to
get marked ready for committer though.

regards, tom lane




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On 24 June 2014 23:44, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 Having said that, any join plan that relies upon a constraint will
 still be valid even if we drop a constraint while the plan executes
 because any new writes will not be visible to the executing join plan.

 mumble ... EvalPlanQual ?

 As long as we are relaxing a constraint, we are OK if an earlier
 snapshot thinks it's dealing with a tighter constraint whereas the new
 reality is a relaxed constraint.

I guess I should have been more explicit: EvalPlanQual processing could
see newer versions of tuples that might not satisfy the constraints the
plan was designed against.  Now, this is true only for the tuple that's
the target of the UPDATE/DELETE, so it's possible you could prove that
there's no problem --- but it would take careful analysis of the specific
semantics of the constraints in question.  I don't believe the argument
you've made here holds up.
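
(A contrived sketch of the hazard; schema hypothetical, READ COMMITTED
assumed, and granting the reduced lock strength under discussion:

-- session 1
BEGIN;
ALTER TABLE t ALTER COLUMN x DROP NOT NULL;
UPDATE t SET x = NULL WHERE id = 1;
COMMIT;

-- session 2, concurrently, with a plan built while t.x was still
-- NOT NULL, so that NOT IN had been turned into an anti-join
DELETE FROM t WHERE x NOT IN (SELECT x FROM other);

Session 2 blocks on the id = 1 row; once session 1 commits, EvalPlanQual
rechecks the quals against the new version of that tuple, in which x is
NULL, a value the plan assumed impossible.  NOT IN semantics say the
qual is NULL there and the row must survive, but the anti-join recheck
finds no equality match and deletes it anyway.)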

regards, tom lane




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 On Mon, Jun 23, 2014 at 8:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 * Nested json arrays are a bit more problematic.  What I'd ideally like
 is to spit them out in a form that would be successfully parsable as a SQL
 array of the appropriate element type.  Unfortunately, I think that that
 ship has sailed because json_populate_recordset failed to do that in 9.3.
 What we should probably do is define this the same as the nested object
 case, ie, we spit it out in *json* array format, meaning you can insert it
 into a text or json/jsonb field of the result record.  Maybe sometime in
 the future we can add a json-array-to-SQL-array converter function, but
 these functions won't do that.

 Not quite following your logic here.  9.3 gave an error for an
 internally nested array:

 postgres=# create type foo as(a int, b int[]);
 postgres=# select * from json_populate_recordset(null::foo, '[{a: 1,
 b: [1,2,3]},{a: 1, b: [1,2,3]}]');
 ERROR:  cannot call json_populate_recordset on a nested object

Yeah, that's the default behavior, with use_json_as_text false.
However, consider what happens with use_json_as_text true:

regression=# select * from json_populate_recordset(null::foo, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
ERROR:  missing "]" in array dimensions

That case is certainly useless, but suppose somebody had done

regression=# create type foo2 as(a int, b json);
CREATE TYPE
regression=# select * from json_populate_recordset(null::foo2, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
 a |b
---+-
 1 | [1,2,3]
 1 | [1,2,3]
(2 rows)

or even just

regression=# create type foo3 as(a int, b text);
CREATE TYPE
regression=# select * from json_populate_recordset(null::foo3, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
 a |b
---+-
 1 | [1,2,3]
 1 | [1,2,3]
(2 rows)

Since these cases work and do something arguably useful, I doubt we
can break them.

However, I don't see anything wrong with changing the behavior in
cases that currently throw an error, since presumably no application
is depending on them.  Perhaps Andrew's comment about looking at the
target type info yields a way forward, ie, we could output in SQL-array
format if the target is an array, or in JSON-array format if the target
is json.  Multiply-nested cases might be a pain to get right though.
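
Something like this, perhaps (behaviour purely hypothetical, nothing
implemented):

regression=# select * from json_populate_recordset(null::foo, '[{"a": 1,
"b": [1,2,3]}]');
 a |    b
---+---------
 1 | {1,2,3}
(1 row)

i.e. the nested JSON array gets emitted in SQL-array syntax because the
target column is int[], while json/jsonb and text targets would keep
receiving '[1,2,3]' as they do today.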

regards, tom lane




[HACKERS] RLS Design

2014-06-24 Thread Stephen Frost
Dean, all,

Changing the subject of this thread (though keeping it threaded) as
we've really moved on to a much broader discussion.

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 24 June 2014 17:27, Stephen Frost sfr...@snowman.net wrote:
  Single policy vs Multiple, Overlapping policies vs Multiple, 
  Non-overlapping policies
 
 What I was describing upthread was multiple non-overlapping policies.

Ok.

 I disagree that this will be more complicated to use. It's a strict
 superset of the single policy functionality, so if you want to do it
 all using a single policy then you can. But I think that once the ACLs
 reach a certain level of complexity, you probably will want to break
 it up into multiple policies, and I think doing so will make things
 simpler, not more complicated.

If we keep it explicitly to per-role only, with only one policy ever
being applied, then perhaps it would be, but I'm not convinced..

 Taking a specific, simplistic example, suppose you had 2 groups of
 users - some are normal users who should only be able to access their
 own records. For these users, you might have a policy like
 
   WHERE person_id = current_user
 
 which would be highly selective, and probably use an index scan. Then
 there might be another group of users who are managers with access to
 the records of, say, everyone in their department. This might then be
 a more complex qual along the lines of
 
   WHERE person_id IN (SELECT ... FROM person_department
WHERE mgr_id = current_user AND ...)
 
 which might end up being a hash or merge join, depending on any
 user-supplied quals.

Certainly my experience with such a setup is that it includes at least 4
levels (self, manager, director, officer).  Now, officer you could
perhaps exclude as being simply RLS-exempt but with such a structure I
would think we'd just make that a special kind of policy (and not chew
up those last 4 bits).  As for this example, it's quite naturally done
with a recursive query as it's a tree structure, but if you want to keep
the qual simple and fast, you'd materialize the results of such a query
and simply have:

WHERE EXISTS (SELECT 1 from org_chart
   WHERE current_user = emp_id
 AND person_id = org_chart.id) 
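
(A sketch of how org_chart might be built, assuming person_department
holds (person_id, mgr_id) pairs and that a person table exists:

CREATE MATERIALIZED VIEW org_chart (emp_id, id) AS
WITH RECURSIVE reachable (emp_id, id) AS (
    SELECT person_id, person_id FROM person   -- everyone sees themselves
  UNION
    SELECT pd.mgr_id, r.id                    -- managers inherit their
    FROM person_department pd                 --   reports' visibility
    JOIN reachable r ON r.emp_id = pd.person_id
)
SELECT emp_id, id FROM reachable;

refreshed whenever the reporting structure changes.)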

 You _could_ combine those into a single policy, but I think it would
 be much better to have 2 distinct policies, since they're 2 very
 different queries, for different use cases. Normal users would only be
 granted permission to use the normal_user_policy. Managers might be
 granted permission to use either the normal_user_policy or the
 manager_policy (but not both at the same time).

I can't recall a system where managers have to request access to their
manager role.  Having another way of changing the permissions which are
applied to a session (the existing one being 'set role') doesn't strike
me as a great idea either.

 That's a very simplified example. In more realistic situations there
 are likely to be many more classes of users, and trying to enforce all
 the logic in a single WHERE clause is likely to get unmanageable, or
 inefficient if it involves lots of logic hidden away in functions.

Functions and external security systems are exactly the real-world
use cases that the users I've talked to are looking for.  All of this
discussion is completely orthogonal to their requirements.  I understand
that there are simpler use-cases than those and we may be able to
provide an approach which performs better for those.

 Allowing multiple, non-overlapping policies allows the problem to be
 broken up into more manageable pieces, which also makes the planner's
 job easier, since only a single, simpler policy is in effect in any
 given query.

Let's try to outline what this would look like then.

Taking your approach, we'd have:

CREATE POLICY p1;
CREATE POLICY p2;

ALTER TABLE t1 SET POLICY p1 TO t1_p1_quals;
ALTER TABLE t1 SET POLICY p2 TO t1_p2_quals;

GRANT SELECT ON TABLE t1 TO role1 USING p1;
GRANT SELECT ON TABLE t1 TO role2 USING p2;

I'm guessing we would need to further support:

GRANT INSERT ON TABLE t1 TO role1 USING p2;

as we've already discussed being able to support per-action (SELECT,
INSERT, UPDATE, DELETE) policies.  I'm not quite sure how to address
that though.
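
One possibility, with syntax invented purely for illustration, would be
to hang per-command quals off the policy itself:

ALTER TABLE t1 SET POLICY p2 FOR SELECT TO t1_p2_select_quals;
ALTER TABLE t1 SET POLICY p2 FOR INSERT TO t1_p2_insert_quals;

so that each policy carries at most one qual per command type and the
GRANT keeps naming only the policy.  But I haven't thought that through.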

Further, as you mention, users would be able to do:

SET rls_policy = whatever;

and things would appear fine, until they tried to access a table to
which they didn't have that policy for, at which point they'd get an
error.

You mention:

GRANT SELECT (col1, col2), UPDATE (col1) ON t1 TO bob USING policy1;

but, to be clear, there would be no option for policies to be
column-specific, right?  The policy would apply to the whole row and
just the SELECT/UPDATE privileges would be on the specific columns (as
exists today).

From this what I'm gathering is that we'd need catalog tables along
these lines:

rls_policy
  oid, polname name, polowner oid, polnamespace oid, polacl aclitem[]
  (oid, 
