Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Pavel Stehule
2015-11-05 7:39 GMT+01:00 Craig Ringer:

> On 5 November 2015 at 05:22, Pavel Stehule wrote:
>
> > If I understand the text on the wiki, the name is used for a tool that
> > can do a little bit more, but it is often used for this technique (so
> > it is much better than "rotate"). I don't understand why "crosstab"
> > would be such a wrong name. This is pretty similar to COPY - I know
> > that in the first minutes it is hard to explain the difference between
> > COPY and \copy to beginners, but after a day of use there is no problem.
>
> I see constant confusion between \copy and COPY. It's a really good
> reason NOT to overload other psql commands IMO.
>

but crosstab is an old function from an old extension with an unfriendly
design. Once we support PIVOT/UNPIVOT, the crosstab function will be
obsolete. It is not an often used command.
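
For readers who haven't used it, the tablefunc crosstab() call looks
roughly like this (a sketch, assuming a hypothetical "grades" table);
the unfriendly part is that the caller must spell out the output column
list by hand:

    CREATE EXTENSION tablefunc;

    -- the source query returns (row label, category, value) triples
    SELECT *
      FROM crosstab('SELECT student, subject, score
                       FROM grades ORDER BY 1, 2')
           AS ct(student text, math int, physics int);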

Regards

Pavel




>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Craig Ringer
On 5 November 2015 at 05:22, Pavel Stehule wrote:

> If I understand the text on the wiki, the name is used for a tool that can
> do a little bit more, but it is often used for this technique (so it is
> much better than "rotate"). I don't understand why "crosstab" would be such
> a wrong name. This is pretty similar to COPY - I know that in the first
> minutes it is hard to explain the difference between COPY and \copy to
> beginners, but after a day of use there is no problem.

I see constant confusion between \copy and COPY. It's a really good
reason NOT to overload other psql commands IMO.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Note about comparing PL/SQL packages and our schemas/extensions

2015-11-04 Thread Pavel Stehule
Hi

I had a talk about the possibility of implementing PL/SQL packages in
Postgres.

The package concept comes from the Ada language and is a partially
foreign/redundant element in the SQL world. Oracle needs it for
modularization, because a schema plays a different role there than in
Postgres. My opinion about packages in Postgres is clear - the concept of
schemas and extensions is simple and it just works. I don't see any big gap
there. If we don't play the Oracle compatibility game, then we don't need
to implement a class like the Oracle package. But there are a few features
that could help PL/pgSQL developers - generally, or with porting from Oracle.

1. Encapsulation and local scope - all objects in a schema are accessible
from other objects in the same schema by default (this can be overridden by
explicit granting). Local objects are visible only from objects in the
schema. This needs an enhancement of our search_path mechanism.

2. Schema variables - server-side session variables (which can be emulated
now) and server-side local schema session variables (which don't exist) are
pretty useful for storing temp data or frequently changing data - and can
significantly speed up some use cases. Today we emulate them via a PL/Perl
shared array, but the encapsulation is missing (a common workaround is
sketched a few lines below).

3. Initialization routines - routines called when any object from the
schema is used for the first time.

We can emulate all three features relatively simply in C, and for all the
mentioned points we probably have some (more or less ugly) workaround for
PL/pgSQL. It would be nice to do it cleanly in PL/pgSQL too.
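
For example, the session-variable emulation commonly used today relies on
custom GUCs (a sketch; the "myapp." prefix is an arbitrary example name):

    -- set a session-scoped value (false = not transaction-local)
    SELECT set_config('myapp.counter', '42', false);

    -- read it back later in the same session
    SELECT current_setting('myapp.counter');

It works, but values are untyped text and there is no encapsulation or
access control - exactly the gap described in point 2.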

I don't think we need the Ada / PL/SQL syntax - we can enhance our extension
mechanism to support the mentioned points.

Comments, notes?

Regards

Pavel


Re: [HACKERS] proposal: PL/Pythonu - function ereport

2015-11-04 Thread Catalin Iacob
On Wed, Nov 4, 2015 at 10:12 AM, Pavel Stehule wrote:
> It helped me a lot, thank you

You're welcome - I learned quite a bit from the process as well.

>>
>>
>> There's just the doc part left then.
>
>
> done

We're almost there but not quite.

There's still a typo in the docs: excation.

"A plpy.SPIError can be raised" should be "A
plpy.SPIError can be raised", right?
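
For context, raising it from stock PL/Python looks like this - a minimal
sketch, independent of the new keyword arguments the patch adds:

    CREATE FUNCTION fail() RETURNS void AS $$
        # plpy is implicitly available inside PL/Python functions
        raise plpy.SPIError('something went wrong')
    $$ LANGUAGE plpythonu;

    SELECT fail();
    -- reported roughly as: ERROR:  plpy.SPIError: something went wrong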

And most importantly, for Python 3.5 there is a plpython_error_5.out
file, which is needed because of an alternative exception message in that
version. You didn't update this file, which makes the tests fail on
Python 3.5.

Since you might not have Python 3.5 easily available, I've attached a
patch to plpython_error_5.out which makes the tests pass; you can fold
it into your patch.


adjust_py35_expected.patch
Description: binary/octet-stream



Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2015-11-04 Thread Michael Paquier
On Wed, Nov 4, 2015 at 7:33 PM, Andres Freund  wrote:
> On 2015-11-04 16:01:28 +0900, Michael Paquier wrote:
>> On Wed, Nov 4, 2015 at 8:39 AM, Andres Freund  wrote:
>> > On November 4, 2015 12:37:02 AM GMT+01:00, Michael Paquier wrote:
>> >>On a completely idle system, I don't think we should log any standby
>> >>records. This is what ~9.3 does.
>> >
>> > Are you sure? I think it'll do so around checkpoints, no? I thought
>> > Heikki had fixed that, but looking around, that doesn't seem to be the case.
>>
>> Er, yes, sorry. I should have used clearer words: I meant an idle system
>> with nothing running, including no internal checkpoints.
>
> Uh, but you'll always have checkpoints happen on wal_level =
> hot_standby, even in 9.3?  Maybe I'm not parsing your sentence right.

Reading my previous sentence again, I cannot get its meaning myself :)
Well, I just meant that in ~9.3 LogStandbySnapshot() is called at each
checkpoint, with checkpoints occurring every checkpoint_timeout even if
the system is idle.

> As soon as a single checkpoint ever happened the early-return logic in
> CreateCheckPoint() will fail to take the LogStandbySnapshot() in
> CreateCheckPoint() into account. The test is:
> if (curInsert == ControlFile->checkPoint +
> MAXALIGN(SizeOfXLogRecord + sizeof(CheckPoint)) &&
> ControlFile->checkPoint == ControlFile->checkPointCopy.redo)
> which obviously doesn't work if there's been a WAL record logged after
> the redo pointer has been determined etc.

Yes. If segment switches are enforced at a pace faster than
checkpoint_timeout, this check considers that a checkpoint needs to
happen because a SWITCH_XLOG record is in between. I am a bit
surprised that this actually happens: the segment switch triggers a
checkpoint record, and vice versa, even on idle systems.
Shouldn't we make this check a bit smarter, then?

> The reason that a single checkpoint is needed to "jumpstart" the
> pointless checkpoints is that otherwise we'll never have issued a
> LogStandbySnapshot() and thus the above code block works if we started
> from a proper shutdown checkpoint.
>
> Independent of the idle issue, it seems to me that the location of the
> LogStandbySnapshot() is actually rather suboptimal - it really should
> really be before the CheckPointGuts(), not afterwards. As closer it's to
> the redo pointer of the checkpoint a hot standby node starts up from,
> the sooner that node can reach consistency.  There's no difference for
> the first time a node starts from a basebackup (since we gotta replay
> that checkpoint anyway before we're consistent), but if we start from a
> restartpoint...

Agreed. LogStandbySnapshot() has been called after CheckPointGuts() since
its introduction in efc16ea5. Moving it may save time. This would surely
be a master-only optimization, though.
-- 
Michael




Re: [HACKERS] Parallel Seq Scan

2015-11-04 Thread Haribabu Kommi
On Tue, Nov 3, 2015 at 9:41 PM, Amit Kapila  wrote:
> On Fri, Oct 23, 2015 at 4:41 PM, Amit Kapila wrote:
>>
>> On Fri, Oct 23, 2015 at 10:33 AM, Robert Haas wrote:
>
> Please find the rebased partial seq scan patch attached with this
> mail.
>
> Robert suggested me off list that we should once try to see if we
> can use Seq Scan node instead of introducing a new Partial Seq Scan
> node. I have analyzed to see if we can use the SeqScan node (containing
> parallel flag) instead of introducing new partial seq scan and found that
> we primarily need to change most of the functions in nodeSeqScan.c to
> have a parallel flag check and do something special for Partial Seq Scan
> and apart from that we need special handling in function
> ExecSupportsBackwardScan().  In general, I think we can make
> SeqScan node parallel-aware by having some special paths without
> introducing much complexity and that can save us code-duplication
> between nodeSeqScan.c and nodePartialSeqScan.c.  One thing that makes
> me slightly uncomfortable with this approach is that for partial seq scan,
> currently the plan looks like:
>
> QUERY PLAN
> --
>  Gather  (cost=0.00..2588194.25 rows=9990667 width=4)
>Number of Workers: 1
>->  Partial Seq Scan on t1  (cost=0.00..89527.51 rows=9990667 width=4)
>  Filter: (c1 > 1)
> (4 rows)
>
> Now instead of displaying Partial Seq Scan, if we just display Seq Scan,
> then it might confuse user, so it is better to add some thing indicating
> parallel node if we want to go this route.

IMO, the change from Partial Seq Scan to Seq Scan may not confuse users
if we clearly specify in the documentation that all plans under a Gather
node are parallel plans.

This is possible for execution nodes that run fully under a Gather node.
The same is not possible for parallel aggregates, so we would have to mark
the aggregate node below the Gather node as partial.

I feel this suggestion arises maybe because of some duplicate code between
Partial Seq Scan and Seq Scan. What about using only the Seq Scan node, but
displaying it as Partial Seq Scan based on some flag stored in the plan?
This avoids the need to add new plan nodes.


Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] NOTIFY in Background Worker

2015-11-04 Thread Haribabu Kommi
On Sat, Aug 29, 2015 at 12:55 PM, Thomas Munro wrote:
> On Sat, Aug 29, 2015 at 9:03 AM, Thomas Munro wrote:
>>
>> On Fri, Aug 28, 2015 at 10:30 PM, jacques klein wrote:
>>>
>>> Hello,
>>>
>>> I added a "NOTIFY chan" to the SQL arg of an SPI_execute() (I also did
>>> it with just the NOTIFY statement),
>>> but the listeners (other workers) don't get the notification until a
>>> "NOTIFY chan" is done, for example with pgadmin.
>>>
>>> They don't get lost, they are just not emitted after the "not forgotten"
>>> call of CommitTransactionCommand().
>>>
>>> Is this normal (i.e. not supported (yet)), a bug, or did I overlook
>>> some doc (or source code)?
>>>
>>> For now, I will try to "emit" the NOTIFY via libpq.
>>
>>
>> That's because ProcessCompletedNotifies isn't being called.  For regular
>> backends it is called inside the top level loop PostgresMain.  I think you
>> need to include "commands/async.h" and add a call to
>> ProcessCompletedNotifies() after your background worker commits to make this
>> work.
>
>
> For the record, Jacques confirmed off-list that this worked, and I also did
> a couple of tests.
>
> Is this expected?  If so, should it be documented -- perhaps with something
> like the attached?  Alternatively there may be some way to make
> CommitTransactionCommand do it, though the comments in
> ProcessCompletedNotifies explain why that was rejected, at least as far as
> AtCommit_Notify goes.
>
> This made me wonder what happens if a background worker calls LISTEN.
> NotifyMyFrontEnd simply logs the notifications, since there is no remote
> libpq to sent a message to.  Perhaps a way of delivering to background
> workers could be developed, though of course there are plenty of other kinds
> of IPC available already.

Yes, executing the Notify command in a background worker is possible with a
call to the ProcessCompletedNotifies function, and the Listen command is not
possible in a background worker because there is no client associated with it.
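
In client-side terms, the symptom Jacques hit looks like this (a sketch):

    -- session A, a regular backend:
    LISTEN chan;

    -- a background worker now runs SPI_execute("NOTIFY chan") and commits;
    -- session A sees nothing yet, because the worker never called
    -- ProcessCompletedNotifies().

    -- session B, any regular backend:
    NOTIFY chan;  -- this flushes things out; A now receives both notifications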

The documentation patch gives the user a better understanding of the
notify and listen commands.

I marked this patch as ready for committer.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] OS X El Capitan and DYLD_LIBRARY_PATH

2015-11-04 Thread Michael Paquier
On Thu, Nov 5, 2015 at 1:16 PM, Tom Lane  wrote:
> Michael Paquier  writes:
>> On Thu, Nov 5, 2015 at 12:17 PM, Peter Eisentraut  wrote:
>>> That might be worth a try.  I ended up disabling system integrity
>>> protection, which also fixed a few other strange behaviors (mysterious
>>> regression test failures in ecpg, for instance, if anyone stumbles
>>> across that).
>
>> Yeah, that's wiser IMO. In the end I would expect people doing serious
>> development work to disable SIP; it is known to cause issues with
>> Homebrew as well, for example.
>
> I think we should all file bugs telling Apple it's insane to suppress
> DYLD_LIBRARY_PATH across a shell invocation.  I can see the point of
> it for some other system binaries, but not sh.

There is:
http://openradar.appspot.com/22807197

And it seems that this is intended policy, because python, perl, bash,
etc. processes run as protected processes even if their script is not,
and all the DYLD_* variables get purged:
https://forums.developer.apple.com/thread/13161
-- 
Michael




Re: [HACKERS] OS X El Capitan and DYLD_LIBRARY_PATH

2015-11-04 Thread Tom Lane
Michael Paquier  writes:
> On Thu, Nov 5, 2015 at 12:17 PM, Peter Eisentraut  wrote:
>> That might be worth a try.  I ended up disabling system integrity
>> protection, which also fixed a few other strange behaviors (mysterious
>> regression test failures in ecpg, for instance, if anyone stumbles
>> across that).

> Yeah, that's wiser IMO. In the end I would expect people doing serious
> development work to disable SIP; it is known to cause issues with
> Homebrew as well, for example.

I think we should all file bugs telling Apple it's insane to suppress
DYLD_LIBRARY_PATH across a shell invocation.  I can see the point of
it for some other system binaries, but not sh.

regards, tom lane




Re: [HACKERS] OS X El Capitan and DYLD_LIBRARY_PATH

2015-11-04 Thread Michael Paquier
On Thu, Nov 5, 2015 at 12:17 PM, Peter Eisentraut  wrote:
> On 11/3/15 6:36 AM, Andres Freund wrote:
>> I wonder if we could fix this by using install_name_tool during the
>> tempinstall to add an appropriate rpath.
>>
>> Alternatively we could, apparently, specify a relative path to libraries
>> as explained here:
>> http://qin.laya.com/tech_coding_help/dylib_linking.html
>>  % install_name_tool -change libbz2.1.0.2.dylib  
>> @executable_path/../Frameworks/libbz2.1.0.2.dylib MyFunBinary
>>
>> which ought to work independently from the tempinstall and normal
>> installation path.
>
> That might be worth a try.  I ended up disabling system integrity
> protection, which also fixed a few other strange behaviors (mysterious
> regression test failures in ecpg, for instance, if anyone stumbles
> across that).

Yeah, that's wiser IMO. In the end I would expect people doing serious
development work to disable SIP; it is known to cause issues with
Homebrew as well, for example.
-- 
Michael




Re: [HACKERS] OS X El Capitan and DYLD_LIBRARY_PATH

2015-11-04 Thread Peter Eisentraut
On 11/3/15 6:36 AM, Andres Freund wrote:
> I wonder if we could fix this by using install_name_tool during the
> tempinstall to add an appropriate rpath.
> 
> Alternatively we could, apparently, specify a relative path to libraries
> as explained here:
> http://qin.laya.com/tech_coding_help/dylib_linking.html
>  % install_name_tool -change libbz2.1.0.2.dylib  
> @executable_path/../Frameworks/libbz2.1.0.2.dylib MyFunBinary
> 
> which ought to work independently from the tempinstall and normal
> installation path.

That might be worth a try.  I ended up disabling system integrity
protection, which also fixed a few other strange behaviors (mysterious
regression test failures in ecpg, for instance, if anyone stumbles
across that).





Re: [HACKERS] WIP: Make timestamptz_out less slow.

2015-11-04 Thread David Rowley
On 5 November 2015 at 13:20, Peter Geoghegan  wrote:

> On Fri, Sep 11, 2015 at 9:00 AM, David Rowley wrote:
> > Attached is also timestamp_delta.patch which changes pg_ltostr() to use
> > the INT_MIN special case that pg_ltoa also uses. I didn't make a similar
> > change to pg_ltostr_zeropad() as I'd need to add even more special code
> > to prepend the correct number of zeros before the 2147483648. This zero
> > padding reason was why I originally came up with the alternative way to
> > handle the negative numbers. I had just thought that it was best to keep
> > both functions as similar as possible.
>
> I've started looking at this -- "timestamp_out_speedup_2015-09-12.patch".
>
> > I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> > AppendSeconds(). The only way I can think to handle this is to just make
> > fsec_t unconditionally an int64 (since with float datetimes the precision
> > is 10 decimal digits after the decimal point, this does not fit in
> > int32). I'd go off and do this, but it means I need to write an int64
> > version of pg_ltostr(). Should I do it this way?
>
> Have you thought about *just* having an int64 pg_ltostr()? Are you
> aware of an appreciable overhead from having int32 callers just rely
> on a promotion to int64?
>
>
I'd not thought of that. It would certainly add unnecessary overhead on
32-bit machines.
To be honest, I don't really like the idea of making fsec_t an int64; there
are just too many places around the code base that would need to be updated.
Plus, with float datetimes, we'd need to multiply the fractional seconds by
10^10, which is based on the MAX_TIME_PRECISION setting as defined
when HAVE_INT64_TIMESTAMP is undefined.



> I have a few minor nitpicks about the patch.
>
> No need for "negative number" comment here -- just use
> "Assert(precision >= 0)" code:
>
> + * Note 'precision' must not be a negative number.
> + * Note callers should assume cp will not be NUL terminated.
>   */
> -static void
> +static char *
>  AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool
> fillzeros)
>  {
> +#ifdef HAVE_INT64_TIMESTAMP
> +
>
> I would just use a period/full stop here:
>
> + * Note str is not NUL terminated, callers are responsible for NUL
> terminating
> + * str themselves.
>
>
Do you mean changing the comment to "Note str is not NUL terminated. Callers
are responsible for NUL terminating str themselves."?



> Don't think you need so many "Note:" specifiers here:
>
> + * Note: Callers should ensure that 'padding' is above zero.
> + * Note: This function is optimized for the case where the number is not
> too
> + *  big to fit inside of the specified padding.
> + * Note: Caller must ensure that 'str' points to enough memory to hold the
> + *       result (at least 12 bytes, counting a leading sign and trailing NUL,
> + *       or padding + 1 bytes, whichever is larger).
> + */
> +char *
> +pg_ltostr_zeropad(char *str, int32 value, int32 padding)
>
>
Yes, that's a bit ugly. How about just Assert(padding > 0), like above?
That gets rid of one Note:.
The optimized part is probably not that important anyway. I just mentioned
it to try to reduce the chance of someone using it when the padding is
always too small.


> Not so sure about this, within the new pg_ltostr_zeropad() function:
>
> +   /*
> +* Handle negative numbers in a special way. We can't just append a '-'
> +* prefix and reverse the sign as on two's complement machines negative
> +* numbers can be 1 further from 0 than positive numbers, we do it
> this way
> +* so we properly handle the smallest possible value.
> +*/
> +   if (num < 0)
> +   {
>   ...
> +   }
> +   else
> +   {
>   ...
> +   }
>
> Maybe it's better to just special-case INT_MIN and then do an Abs()?
> Actually, would any likely caller of pg_ltostr_zeropad() actually care
> if INT_MIN was a case that you cannot handle, and assert against? You
> could perhaps reasonably make it the caller's problem. Haven't made up
> my mind if your timestamp_delta.patch is better than that; cannot
> immediately see a problem with putting it on the caller.
>
>
I can make that case work the same as pg_ltoa() does for pg_ltostr(); that's
not a problem, I did that in the delta patch. With pg_ltostr_zeropad() I'd
need to add some corner-case code to prepend the correct number of zeros onto
-2147483648. This is likely not going to look very nice, and it's also
probably going to end up about the same number of lines of code as what's
in the patch already. If you think it's better for performance, I've
already done tests showing that it's not. I really don't think it'll
look very nice either. I quite like my negative-number handling, since it's
code that would actually be exercised 50% of the time rather than 1/2^32 of
the time, assuming all numbers have an equal chance of being passed.


> More generally, maybe routines like EncodeDateTime() ought now to use
> the Abs() macro.

Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Tomas Vondra

Hi,

On 11/04/2015 11:32 PM, Tom Lane wrote:

Jeff Janes  writes:

On Wed, Nov 4, 2015 at 7:14 AM, Tom Lane  wrote:

You're missing my point: that is possible in an indexscan, but
*not* in a bitmap indexscan, because the index AM APIs are
totally different in the two cases. In a bitmap scan, nothing
more than a TID bitmap is ever returned out to anyplace that
could execute arbitrary expressions.



I had thought it must already be able to execute arbitrary
expressions, due to the ability to already support user-defined
btree ops (and ops of non-btree types in the case of other index
types).


No. An index AM is only expected to be able to evaluate clauses of
the form <indexed column> <operator> <comparison value>, and the
key restriction there is that the operator is one that the AM has
volunteered to support. Well, actually, it's the opclass more than
the AM that determines this, but anyway it's not just some random
operator; more than likely, the AM and/or opclass has got special
logic about the operator.


Isn't that pretty much exactly the point made by Jeff and Simon: the
index AM is currently only allowed to handle the indexable operators,
i.e. operators that it can explicitly optimize (e.g. use to walk the
btree and such), and it completely ignores the other operators despite
having all the columns in the index? Which means we'll have to do the
heap fetch, which usually means a significant performance hit.




This also ties into Robert's point about evaluation of operators
against index entries for dead or invisible rows. Indexable operators
are much less likely than others to have unexpected side-effects.


I certainly understand there are cases that require care - like the
leakproof thing pointed out by Robert, for example. I don't immediately
see why evaluation against dead rows would be a problem.

But maybe we can derive the set of required rules from the operators?
Say, only those marked as leakproof when RLS is enabled on the table,
and perhaps additional conditions.


A "bruteforce" way would be to extend each index AM with every possible 
operator, but that's not quite manageable I guess. But why couldn't we 
provide a generic infrastructure that would allow filtering "safe" 
expressions and validating them on an index tuple?



kind regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread David Fetter
On Wed, Nov 04, 2015 at 08:20:28AM -0800, Joe Conway wrote:
> On 11/04/2015 04:09 AM, Pavel Stehule wrote:
> > I am looking at this last patch. I talked about the name of this command
> > with more people, and the name "rotate" is unfortunate. The correct name
> > for this visualization technique is "crosstab" (see google "crosstab").
> > The conflict with our extension is unfortunate, but using "rotate" is
> > even worse (see google "rotate"). The term "rotate" is used less often
> > (related to this topic), and usually by uninformed people. Moreover, in
> > the attached doc the word "crosstab" is used pretty often, and then the
> > word "rotate" makes no sense.
> 
> Apologies if this has already been suggested (as I have not followed the
> entire thread), but if you don't want to conflict with the name
> crosstab, perhaps "pivot" would be better?

As I mentioned earlier, I'm hoping we can keep PIVOT reserved for the
server side, where all our competitors except DB2 (and MySQL, if you
count that) have it.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Kouhei Kaigai
> -Original Message-
> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyot...@lab.ntt.co.jp]
> Sent: Thursday, November 05, 2015 10:02 AM
> To: fujita.ets...@lab.ntt.co.jp
> Cc: Kaigai Kouhei(海外 浩平); robertmh...@gmail.com; t...@sss.pgh.pa.us;
> pgsql-hackers@postgresql.org; shigeru.han...@gmail.com
> Subject: Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
> 
> Hi, I've caught up again.
> 
> > OK, so if we all agree that the joined-tuple optimization is just an
> > option for the case where all the component tables use ROW_MARK_COPY,
> > I'd propose to leave that for 9.6.
> 
> I still think that ExecScan is called under EPQ recheck without an
> EPQ tuple for the *scan*.
> 
> The ForeignScan can be generated for a join and underlying
> foreign scans, and such an execution node returns what the core
> doesn't expect from any scan node. This is what I think is the
> root cause of this problem.
> 
> So, as a third way, I propose to resurrect the abandoned
> ForeignJoinState, which seems to fit the unearthed requirements. The FDW
> returns a ForeignJoinPath, not a ForeignScanPath, and it finally
> becomes a ForeignJoinState, which is handled as a join node with
> no doubt.
> 
> What do you think about this?
>
Apart from the EPQ issues, it is fundamentally impossible to reflect
the remote join tree on the local side, because the remote server runs
the partial join in its best or an arbitrary way.
If this ForeignJoinState just has a compatible join sub-tree, what
is the difference from the alternative local join sub-plan?

Even if we have another node, the role of the FDW driver is unchanged.
It eventually needs to do the following:
 1. Recheck the scan qualifier of the base foreign table
 2. Recheck the join clauses of the remote joins
 3. Reconstruct a joined tuple


Let me try to guess your intention...
You say that a ForeignScan with scanrelid==0 is not actually a scan,
so it is problematic to always call ExecScan from ExecForeignScan.
Thus, an individual ForeignJoin shall be defined.
Right?


In the case of scanrelid==0, it performs like a scan of a pseudo relation
whose record type is defined by fdw_scan_tlist. The rows generated by
this node consist of rows from the underlying base relations.
A significant point is that the FDW driver is responsible for generating
the rows according to fdw_scan_tlist. Once the FDW driver generates rows,
ExecScan() runs the remaining tasks - execution of host clauses (although
it is not easy to imagine that a remote join including a host clause has
a cheaper cost than the alternatives) and projection.

One thing I can agree with is that ForeignScan is forced to use ExecScan,
so some FDW drivers may be concerned about this hard-wired logic.
If we try to unbind ForeignScan from ExecScan, I would suggest revising
ExecForeignScan to just invoke a callback; the FDW driver can then
choose whether ExecScan is best or not.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 





Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Kyotaro HORIGUCHI
Hi, I've caught up again.

> OK, so if we all agree that the joined-tuple optimization is just an
> option for the case where all the component tables use ROW_MARK_COPY,
> I'd propose to leave that for 9.6.

I still think that ExecScan is called under EPQ recheck without an
EPQ tuple for the *scan*.

The ForeignScan can be generated for a join and underlying
foreign scans, and such an execution node returns what the core
doesn't expect from any scan node. This is what I think is the
root cause of this problem.

So, as a third way, I propose to resurrect the abandoned
ForeignJoinState, which seems to fit the unearthed requirements. The FDW
returns a ForeignJoinPath, not a ForeignScanPath, and it finally
becomes a ForeignJoinState, which is handled as a join node with
no doubt.

What do you think about this?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] WIP: Make timestamptz_out less slow.

2015-11-04 Thread Peter Geoghegan
On Fri, Sep 11, 2015 at 9:00 AM, David Rowley wrote:
> Attached is also timestamp_delta.patch which changes pg_ltostr() to use the
> INT_MIN special case that pg_ltoa also uses. I didn't make a similar change
> to pg_ltostr_zeropad() as I'd need to add even more special code to prepend
> the correct number of zeros before the 2147483648. This zero padding reason
> was why I originally came up with the alternative way to handle the negative
> numbers. I had just thought that it was best to keep both functions as
> similar as possible.

I've started looking at this -- "timestamp_out_speedup_2015-09-12.patch".

> I've not done anything yet to remove the #ifdef HAVE_INT64_TIMESTAMP from
> AppendSeconds(). The only way I can think to handle this is to just make
> fsec_t unconditionally an int64 (since with float datetimes the precision is
> 10 decimal digits after the decimal point, this does not fit in int32). I'd go
> off and do this, but this means I need to write an int64 version of
> pg_ltostr(). Should I do it this way?

Have you thought about *just* having an int64 pg_ltostr()? Are you
aware of an appreciable overhead from having int32 callers just rely
on a promotion to int64?

In general, years are 1900-based in Postgres:

struct pg_tm
{
 ...
int tm_mday;
int tm_mon; /* origin 0, not 1 */
int tm_year;/* relative to 1900 */
 ...
};

While it's not your patch's fault, it is not noted by EncodeDateTime()
and EncodeDateOnly() and others that their pg_tm structs are not
1900-based. This is in contrast to multiple functions within
formatting.c, nabstime.c, and timestamp.c (some of which are clients
or clients of clients for this new code). I think that the rule has
been undermined so much that it doesn't make sense to indicate
exceptions directly, though. I think pg_tm should have comments saying
that there are many cases where tm_year is not relative to 1900.

I have a few minor nitpicks about the patch.

No need for "negative number" comment here -- just use
"Assert(precision >= 0)" code:

+ * Note 'precision' must not be a negative number.
+ * Note callers should assume cp will not be NUL terminated.
  */
-static void
+static char *
 AppendSeconds(char *cp, int sec, fsec_t fsec, int precision, bool fillzeros)
 {
+#ifdef HAVE_INT64_TIMESTAMP
+

I would just use a period/full stop here:

+ * Note str is not NUL terminated, callers are responsible for NUL terminating
+ * str themselves.

Don't think you need so many "Note:" specifiers here:

+ * Note: Callers should ensure that 'padding' is above zero.
+ * Note: This function is optimized for the case where the number is not too
+ *  big to fit inside of the specified padding.
+ * Note: Caller must ensure that 'str' points to enough memory to hold the
+ *       result (at least 12 bytes, counting a leading sign and trailing NUL,
+ *       or padding + 1 bytes, whichever is larger).
+ */
+char *
+pg_ltostr_zeropad(char *str, int32 value, int32 padding)

Not so sure about this, within the new pg_ltostr_zeropad() function:

+   /*
+* Handle negative numbers in a special way. We can't just append a '-'
+* prefix and reverse the sign as on two's complement machines negative
+* numbers can be 1 further from 0 than positive numbers, we do it this way
+* so we properly handle the smallest possible value.
+*/
+   if (num < 0)
+   {
  ...
+   }
+   else
+   {
  ...
+   }

Maybe it's better to just special-case INT_MIN and then do an Abs()?
Actually, would any likely caller of pg_ltostr_zeropad() actually care
if INT_MIN was a case that you cannot handle, and assert against? You
could perhaps reasonably make it the caller's problem. Haven't made up
my mind if your timestamp_delta.patch is better than that; cannot
immediately see a problem with putting it on the caller.

More generally, maybe routines like EncodeDateTime() ought now to use
the Abs() macro.

Those are all of my nitpicks. :-)

> I also did some tests on server grade hardware, and the performance increase
> is looking a bit better.
>
> create table ts (ts timestamp not null);
> insert into ts select generate_series('2010-01-01 00:00:00', '2011-01-01
> 00:00:00', '1 sec'::interval);
> vacuum freeze ts;
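
The output path can then be exercised with something like this (a sketch,
not necessarily the exact harness used for the numbers below):

    \timing on
    -- force timestamp_out for every row, without client transfer costs
    SELECT count(ts::text) FROM ts;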

On my laptop, I saw a 2.4X - 2.6X speedup for this case. That seems
pretty nice to me.

Will make another pass over this soon.

-- 
Peter Geoghegan




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joshua D. Drake

On 11/04/2015 02:53 PM, Stephen Frost wrote:


This implies that the statement being used takes a long time. It may not.
The lock is held at the transaction level, not the statement level, which
is why a transaction-level timeout is actually more useful than a
statement-level timeout.

What I'm most interested in, in the use case which I described and which
David built a system for, is getting that lock released from the lower
priority process to let the higher priority process run. I couldn't care
less about statement level anything.



Ahh, o.k. Yes, I could see the benefit to that.

JD


Thanks!

Stephen



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.




Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Pavel Stehule
2015-11-05 0:07 GMT+01:00 Daniel Verite:

> Pavel Stehule wrote:
>
> > I am looking at this last patch. I talked about the name of this command
> > with more people, and the name "rotate" is unfortunate. The correct name
> > for this visualization technique is "crosstab" (see google "crosstab").
> > The conflict with our extension is unfortunate, but using "rotate" is
> > even worse (see google "rotate"). The term "rotate" is used less often
> > (related to this topic), and usually by uninformed people. Moreover, in
> > the attached doc the word "crosstab" is used pretty often, and then the
> > word "rotate" makes no sense.
>
> First, thanks for looking again at the patch and for your feedback.
>
> I note that you dislike and oppose the current name, as previously
> when that choice of name was discussed quite a bit.
> However I disagree that "rotate" doesn't make sense. On the semantics
> side, several people have expressed upthread that it was OK, as a
> plausible synonym for "pivot". If it's unencumbered by previous use
> in this context, then all the better; I'd say why not corner it for our
> own use?
> It's not as if we had to cling to other people's choices for psql
> meta-commands.
>
>
If there is a correct and commonly used name, then using any other word is
wrong. Moreover, this word can be associated with a different meaning. I
know that some people use "rotate", but that carries minimal cost only if
those people don't know the existing terminology. My opinion is pretty
strong on this topic, mainly because we would be fixing this name forever.
It isn't an internal name, but a clearly visible one.


> Anyway that's just a name. It shall be changed eventually to
> whatever the consensus is, if one happens to emerge.
>
> > The important question is sorting output. The vertical header is
> > sorted by first appearance in result. The horizontal header is
> > sorted in ascending or descending order. This is unfriendly for
> > often use case - month names. This can be solved by third parameter
> > - sort function.
>
> Right, it's not currently possible to sort the horizontal header by
> something other than the values in it.
> I agree that it would be best to allow it if there's a reasonable way to
> implement it. I'm not sure about letting the user provide a function
> as an argument.
> In the case of the month names example, the function
> f(monthname)=number-of-month may not exist. If the
> user has to create it beforehand, it feels a bit demanding
> for a display feature.
>
> I wonder if this ordering information could instead be deduced
> somehow from the non-pivoted result set at a lower cost.
> I'll try to think more and experiment around this.
>

That would be nice. These names can be transformed to numbers, but that
loses some information value. Of the ideas I have found, the sort function
is the least ugly. I invite any proposals. On the other hand, this is not a
major issue - it is in the "nice to have" category - and it can help with
user adoption of this feature, since time dimensions (days of week, months)
are usual.

Maybe a simpler idea - using a transform function: the data in the
non-pivoted result can be numbers, and some parameter can transform the
numbers to the text used in the horizontal header. That could be pretty
simple to implement.
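
To make the month-name case concrete (a sketch, assuming a hypothetical
"sales" table with a date column d): the header labels sort alphabetically
on their own, while the desired order exists only in the source data:

    SELECT DISTINCT to_char(d, 'Mon') FROM sales ORDER BY 1;
    -- Apr, Aug, Dec, Feb, ...   rather than   Jan, Feb, Mar, ...

    SELECT DISTINCT to_char(d, 'Mon'), extract(month FROM d)
      FROM sales ORDER BY 2;
    -- the second column carries the ordering a sort function would supply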

Regards

Pavel

p.s. Although I may have critical comments - I like this feature. It can
help, and it can be a really valuable and visible psql feature.



>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Daniel Verite
Joe Conway wrote:

> but if you don't want to conflict with the name
> crosstab, perhaps "pivot" would be better?

Initially I had chosen \pivot without much thought, but the objection
was raised that a PIVOT/UNPIVOT SQL feature would likely exist in core
in a future release, independently from psql.

If things unfold as envisioned, we would end up in the future with:
- crosstab() from tablefunc's contrib module.
- SELECT (...) PIVOT [serialization?] (...) in the SQL grammar.
- \rotate or yet another name for the client-side approach.

The reason to avoid both \crosstab and \pivot for the psql feature is
the fear of confusion for less expert users, who don't feel
the clear-cut separation between core and extensions, or client
versus server; they mostly know that they're trying to use
a feature named $X. When they search for $X, it's better if
they don't find something else and possibly jump to wrong
conclusions about what it does and how it works.

Or maybe that's worrying about things that will never matter in
reality; that's possible too.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Daniel Verite
Pavel Stehule wrote:

> I am looking at this last patch. I talked about the name of this command
> with more people, and the name "rotate" is unfortunate. The correct name
> for this visualization technique is "crosstab" (see google "crosstab").
> The conflict with our extension is unfortunate, but using "rotate" is
> even worse (see google "rotate"). The term "rotate" is used less often
> (related to this topic), and usually by uninformed people. Moreover, in
> the attached doc the word "crosstab" is used pretty often, and then the
> word "rotate" makes no sense.

First, thanks for looking again at the patch and for your feedback.

I note that you dislike and oppose the current name, as previously
when that choice of name was discussed quite a bit.
However I disagree that "rotate" doesn't make sense. On the semantics
side, several people have expressed upthread that it was OK, as a
plausible synonym for "pivot". If it's unencumbered by previous use
in this context, then all the better; I'd say why not corner it for our
own use?
It's not as if we had to cling to other people's choices for psql
meta-commands.

Anyway that's just a name. It shall be changed eventually to
whatever the consensus is, if one happens to emerge.

> The important question is sorting output. The vertical header is
> sorted by first appearance in result. The horizontal header is
> sorted in ascending or descending order. This is unfriendly for
> often use case - month names. This can be solved by third parameter
> - sort function.

Right, it's not currently possible to sort the horizontal header by
something other than the values in it.
I agree that it would be best to allow it if there's a reasonable way to
implement it. I'm not sure about letting the user provide a function
as an argument.
In the case of the month names example, the function
f(monthname)=number-of-month may not exist. If the
user has to create it beforehand, it feels a bit demanding
for a display feature.

I wonder if this ordering information could instead be deduced
somehow from the non-pivoted result set at a lower cost.
I'll try to think more and experiment around this.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 23:53 GMT+01:00 Stephen Frost:

> JD,
>
> On Wednesday, November 4, 2015, Joshua D. Drake wrote:
>
>> On 11/04/2015 02:15 PM, Stephen Frost wrote:
>>
>> Yeah but anything holding a lock that long can be terminated via
 statement_timeout can it not?

>>>
>>> Well, no?  statement_timeout is per-statement, while transaction_timeout
>>> is, well, per transaction.  If there's a process which is going and has
>>> an open transaction and it's holding locks, that can be an issue.
>>>
>>
>> No, what I mean is this:
>>
>> BEGIN;
>> select * from foo;
>> update bar set ...;
>> delete from baz ...;
>>
>> Each one of those is subject to statement_timeout, yes? If so, then I
>> don't see a point for a transaction timeout. You set statement_timeout to
>> what works for your environment. Once the timeout is reached within a
>> statement (within the transaction), the transaction is going to roll back
>> too.
>>
>
> This implies that the statement being used takes a long time. It may not.
> The lock is held at the transaction level, not the statement level, which
> is why a transaction-level timeout is actually more useful than a
> statement-level timeout.
>

It is hard to compare these proposals because each one solves a slightly
different issue and has different advantages and disadvantages. A flat
solution will probably be too limited. I see possible advantages to
transaction_timeout (max lock duration), transaction_idle_timeout, and
statement_timeout. Each of these limits makes sense and can help with
resource management. There is no full substitution among them.

Regards

Pavel



>
> What I'm most interested in, in the use case which I described and which
> David built a system for, is getting that lock released from the lower
> priority process to let the higher priority process run. I couldn't care
> less about statement level anything.
>
> Thanks!
>
> Stephen
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Stephen Frost
JD,

On Wednesday, November 4, 2015, Joshua D. Drake wrote:

> On 11/04/2015 02:15 PM, Stephen Frost wrote:
>
> Yeah but anything holding a lock that long can be terminated via
>>> statement_timeout can it not?
>>>
>>
>> Well, no?  statement_timeout is per-statement, while transaction_timeout
>> is, well, per transaction.  If there's a process which is going and has
>> an open transaction and it's holding locks, that can be an issue.
>>
>
> No, what I mean is this:
>
> BEGIN;
> select * from foo;
> update bar set ...;
> delete from baz ...;
>
> Each one of those is subject to statement_timeout, yes? If so, then I
> don't see a point for a transaction timeout. You set statement_timeout to
> what works for your environment. Once the timeout is reached within a
> statement (within the transaction), the transaction is going to roll back
> too.
>

This implies that the statement being used takes a long time. It may not.
The lock is held at the transaction level, not the statement level, which
is why a transaction-level timeout is actually more useful than a
statement-level timeout.

What I'm most interested in, in the use case which I described and which
David built a system for, is getting that lock released from the lower
priority process to let the higher priority process run. I couldn't care
less about statement level anything.

Thanks!

Stephen


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Vik Fearing
On 10/30/2015 10:20 PM, Merlin Moncure wrote:
> Idle hanging transactions from poorly written applications are the
> bane of my existence.  Several months back one of them took down one
> of hour production websites for several hours.
> 
> Unfortunately, the only way to deal with them is to terminate the
> backend which is heavy handed and in some cases causes further damage.
>   Something like pg_cancel_transaction(pid) would be nice; it would
> end the transaction regardless if in an actual statement or not.
> Similarly, transaction_timeout would be a lot more effective than
> statement_timeout.  It's nice to think about a world where
> applications don't do such things, but in this endless sea of
> enterprise java soup I live it it's, uh, not realistic.  This would be
> lot cleaner than the cron driven sweep I'm forced to implement now,
> and could be made to be part of the standard configuration across the
> enterprise.

I would like to request that no one work on this.

I wrote a patch to do just that a year and a half ago[1], which was
rejected for technical reasons.  Since then, Andres has fixed those
issues, and prodded me last week at PGConf.EU to pick my patch back up.
I am planning on resubmitting it for the next commitfest.  I will also
take into account the things said on this thread.

[1] http://www.postgresql.org/message-id/538dc843.2070...@dalibo.com
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Tom Lane
Jeff Janes  writes:
> On Wed, Nov 4, 2015 at 7:14 AM, Tom Lane  wrote:
>> You're missing my point: that is possible in an indexscan, but *not* in a
>> bitmap indexscan, because the index AM APIs are totally different in the
>> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
>> returned out to anyplace that could execute arbitrary expressions.

> I had thought it must already be able to execute arbitrary
> expressions, due to the ability to already support user-defined btree
> ops (and ops of non-btree types in the case of other index types).

No.  An index AM is only expected to be able to evaluate clauses of
the form <indexed column> <operator> <comparison value>, and the key
restriction there is that the operator is one that the AM has volunteered
to support.  Well, actually, it's the opclass more than the AM that
determines this, but anyway it's not just some random operator; more than
likely, the AM and/or opclass has got special logic about the operator.
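
In EXPLAIN terms, the distinction looks roughly like this (a sketch,
assuming a hypothetical table t with an index t_a_b_idx on (a, b), and
that the planner picks a bitmap scan):

    EXPLAIN SELECT * FROM t WHERE a = 42 AND b LIKE '%x%';
    --  Bitmap Heap Scan on t
    --    Recheck Cond: (a = 42)
    --    Filter: (b ~~ '%x%'::text)
    --    ->  Bitmap Index Scan on t_a_b_idx
    --          Index Cond: (a = 42)

Only the indexable clause (a = 42) reaches the index AM; the LIKE is
applied as a filter after the heap fetch.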

This also ties into Robert's point about evaluation of operators against
index entries for dead or invisible rows.  Indexable operators are much
less likely than others to have unexpected side-effects.

regards, tom lane




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joshua D. Drake

On 11/04/2015 02:15 PM, Stephen Frost wrote:


Yeah but anything holding a lock that long can be terminated via
statement_timeout can it not?


Well, no?  statement_timeout is per-statement, while transaction_timeout
is, well, per transaction.  If there's a process which is still going, has
an open transaction, and is holding locks, that can be an issue.


No, what I mean is this:

BEGIN;
select * from foo;
update bar set ...;
delete from baz ...;

Each one of those is subject to statement_timeout, yes? If so, then I
don't see a point for a transaction timeout. You set statement_timeout to
what works for your environment. Once the timeout is reached within a
statement (within the transaction), the transaction is going to roll back
too.


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Stephen Frost
* Joshua D. Drake (j...@commandprompt.com) wrote:
> On 11/04/2015 01:55 PM, Stephen Frost wrote:
> >* Joe Conway (m...@joeconway.com) wrote:
> >>On 11/04/2015 01:24 PM, Alvaro Herrera wrote:
> >>>I agree with Pavel.  Having a transaction timeout just does not make any
> >>>sense.  I can see absolutely no use for it.  An idle-in-transaction
> >>>timeout, on the other hand, is very useful.
> >>
> >>+1 -- agreed
> >
> >I'm not sure of that.  I can certainly see a use for transaction
> >timeouts- after all, they hold locks and can be very disruptive in the
> >long run.  Further, there are cases where a transaction is normally very
> >fast and in a corner case it becomes extremely slow and disruptive to
> >the rest of the system.  In those cases, having a timeout for it is
> >valuable.
> 
> Yeah but anything holding a lock that long can be terminated via
> statement_timeout can it not?

Well, no?  statement_timeout is per-statement, while transaction_timeout
is, well, per transaction.  If there's a process which is still going, has
an open transaction, and is holding locks, that can be an issue.
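
Concretely, with hypothetical table names (a sketch):

    SET statement_timeout = '5s';
    BEGIN;
    UPDATE bar SET x = 1 WHERE id = 1;  -- fast, finishes well under 5s,
                                        -- but takes a row lock
    -- the session now sits idle in transaction; statement_timeout never
    -- fires, yet the lock is still held and blocks other sessions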

To be frank, my gut feeling is that transaction_timeout is actually more
useful than statement_timeout.

Thanks!

Stephen




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joe Conway
On 11/04/2015 02:07 PM, Joshua D. Drake wrote:
> On 11/04/2015 01:55 PM, Stephen Frost wrote:
>> * Joe Conway (m...@joeconway.com) wrote:
>>> On 11/04/2015 01:24 PM, Alvaro Herrera wrote:
>>>> I agree with Pavel.  Having a transaction timeout just does not make
>>>> any sense.  I can see absolutely no use for it.  An idle-in-transaction
>>>> timeout, on the other hand, is very useful.
>>>
>>> +1 -- agreed
>>
>> I'm not sure of that.  I can certainly see a use for transaction
>> timeouts- after all, they hold locks and can be very disruptive in the
>> long run.  Further, there are cases where a transaction is normally very
>> fast and in a corner case it becomes extremely slow and disruptive to
>> the rest of the system.  In those cases, having a timeout for it is
>> valuable.
> 
> Yeah but anything holding a lock that long can be terminated via
> statement_timeout can it not?

That is exactly what I was thinking.


-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] patch for geqo tweaks

2015-11-04 Thread Tom Lane
Alvaro Herrera  writes:
> Tom Lane wrote:
>> Having said that, though, I believe that it's also probably a
>> *different* initial shuffle, which may well mean that GEQO gives
>> different plans in some cases.  That doesn't bother me as long as
>> we only make the change in HEAD, but does anyone want to complain?

> Uh, do we promise that plans found by geqo are stable?  That would seem
> odd to me -- wouldn't they change shape on a whim, say because the stats
> are different?  It seems odd to look for plan stability using a genetic
> algorithm.

Well, obviously any plan might change if the stats change.  But we do
promise repeatable plans given identical input data, cf commit f5bc74192.

regards, tom lane




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Josh Berkus
On 11/04/2015 01:55 PM, Stephen Frost wrote:
> * Joe Conway (m...@joeconway.com) wrote:
>> On 11/04/2015 01:24 PM, Alvaro Herrera wrote:
>>> I agree with Pavel.  Having a transaction timeout just does not make any
>>> sense.  I can see absolutely no use for it.  An idle-in-transaction
>>> timeout, on the other hand, is very useful.
>>
>> +1 -- agreed
> 
> I'm not sure of that.  I can certainly see a use for transaction
> timeouts- after all, they hold locks and can be very disruptive in the
> long run.  Further, there are cases where a transaction is normally very
> fast and in a corner case it becomes extremely slow and disruptive to
> the rest of the system.  In those cases, having a timeout for it is
> valuable.

I could see a use for both, having written scripts which do both.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joshua D. Drake

On 11/04/2015 01:55 PM, Stephen Frost wrote:

* Joe Conway (m...@joeconway.com) wrote:

On 11/04/2015 01:24 PM, Alvaro Herrera wrote:

I agree with Pavel.  Having a transaction timeout just does not make any
sense.  I can see absolutely no use for it.  An idle-in-transaction
timeout, on the other hand, is very useful.


+1 -- agreed


I'm not sure of that.  I can certainly see a use for transaction
timeouts- after all, they hold locks and can be very disruptive in the
long run.  Further, there are cases where a transaction is normally very
fast and in a corner case it becomes extremely slow and disruptive to
the rest of the system.  In those cases, having a timeout for it is
valuable.


Yeah, but anything holding a lock that long can be terminated via 
statement_timeout, can it not?


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Stephen Frost
* Joe Conway (m...@joeconway.com) wrote:
> On 11/04/2015 01:24 PM, Alvaro Herrera wrote:
> > I agree with Pavel.  Having a transaction timeout just does not make any
> > sense.  I can see absolutely no use for it.  An idle-in-transaction
> > timeout, on the other hand, is very useful.
> 
> +1 -- agreed

I'm not sure of that.  I can certainly see a use for transaction
timeouts -- after all, they hold locks and can be very disruptive in the
long run.  Further, there are cases where a transaction is normally very
fast and in a corner case it becomes extremely slow and disruptive to
the rest of the system.  In those cases, having a timeout for it is
valuable.

David (adding him to the CC) actually developed a utility specifically
to identify what transactions are blocking what others and to kill off
other processes if they were running for too long and blocking higher
priority processes.  It didn't matter, in that environment, if they were
idle-in-transaction or actively running.

David, please correct/confirm my recollection above.

Thanks!

Stephen




Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joe Conway
On 11/04/2015 01:24 PM, Alvaro Herrera wrote:
> I agree with Pavel.  Having a transaction timeout just does not make any
> sense.  I can see absolutely no use for it.  An idle-in-transaction
> timeout, on the other hand, is very useful.

+1 -- agreed

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Alvaro Herrera
Pavel Stehule wrote:
> 2015-11-04 22:14 GMT+01:00 Joshua D. Drake :
> 
> > On 11/04/2015 01:11 PM, Pavel Stehule wrote:
> >
> >> I am sorry, but I have a different experience from GoodData. The few
> >> hours autovacuum is usual. So probably, there should be exception for
> >> autovacuum, dump, ..
> >
> > But autovacuum and dump are not idle in transaction or am I missing
> > something?
> 
> last Merlin's proposal was about transaction_timeout not
> transaction_idle_timeout

I agree with Pavel.  Having a transaction timeout just does not make any
sense.  I can see absolutely no use for it.  An idle-in-transaction
timeout, on the other hand, is very useful.
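
For reference, this is roughly what such a timeout would automate; today
these sessions have to be found (and killed) by hand with something along
these lines, using the pg_stat_activity columns available since 9.2 -- the
five-minute threshold is an arbitrary example:

    -- sessions sitting in an open transaction for more than five minutes
    SELECT pid, usename, now() - state_change AS idle_for
      FROM pg_stat_activity
     WHERE state = 'idle in transaction'
       AND state_change < now() - interval '5 minutes';
    -- each offending pid can then be passed to pg_terminate_backend(pid)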

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Pavel Stehule
2015-11-04 17:20 GMT+01:00 Joe Conway :

> On 11/04/2015 04:09 AM, Pavel Stehule wrote:
> > I am looking on this last patch. I talked about the name of this command
> > with more people, and the name "rotate" is unhappy. The correct name for
> > this visualization technique is "crosstab" (see google "crosstab"). The
> > conflict with our extension is unhappy, but using "rotate" is more worst
> > - (see google "rotate"). The term "rotate" is used less time (related to
> > topic), and usually with zero informed people. More, in attached doc,
> > the word "crosstab" is pretty often used, and then the word "rotate" has
> > not sense.
>
> Apologies if this has already been suggested (as I have not followed the
> entire thread), but if you don't want to conflict with the name
> crosstab, perhaps "pivot" would be better?
>

It is a choice between "rotate" and "crosstab". The name "pivot" is related
to the PIVOT operator, which is a feature we want, but that isn't a hard
problem because we have the COPY statement alongside \copy and we can live
with that here too.

If I understand the text on the wiki correctly, the name is used for a tool
that can do a little bit more, but it is often used for this technique (so
it is much better than "rotate"). I don't understand why "crosstab" would be
such a wrong name. This is pretty similar to COPY -- and I know that in the
first minutes it is hard to explain the difference between COPY and \copy to
beginners, but after a day of use there is no problem.
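
For readers who have not seen it, the crosstab() that the name would collide
with lives in the contrib tablefunc module; a minimal sketch of its use,
with invented table and column names:

    CREATE EXTENSION tablefunc;

    SELECT *
      FROM crosstab(
             'SELECT student, subject, score
                FROM grades
               ORDER BY 1, 2')
        AS ct(student text, math int, physics int);

The source query must return row-name, category, and value columns, and the
output row type has to be spelled out in the AS clause -- which is exactly
the inconvenience a psql-side command could avoid by inspecting the result.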

Regards

Pavel


>
> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 21:31 GMT+01:00 Merlin Moncure :

> On Wed, Nov 4, 2015 at 2:09 PM, Pavel Stehule 
> wrote:
> > 2015-11-04 20:35 GMT+01:00 Merlin Moncure :
> >>
> >> On Wed, Nov 4, 2015 at 11:26 AM, Pavel Stehule  >
>
> >> > it doesn't help. How I can set transaction_timeout if I have series of
> >> > slow
> >> > statements? In this case I cannot to set transaction_timeout before
> any
> >> > statement or after any success statement.
> >>
> >> Not quite following you. The client has to go:
> >> BEGIN;
> >> SET transaction_timeout = x;
> >> 
> >
> > where is the point when transaction_timeout start? In BEGIN or in SET
> > transaction_timeout ?
>
> transaction start (BEGIN).
>
> > How I can emulate transaction_idle_timeout? Can I refresh
> > transaction_timeout?
>
> Well, for my part, I'd probably set default to around an hour with
> longer running batch driven tasks having to override.
>
> > My issue isn't long statements, but broken client, that is broken in
> wrong
> > state - connect is still active, but no any statement will coming.
>
> Right, 'Idle in transaction'.  Agree that a setting directed purely at
> that problem could set a much lower timeout, say, 5 minutes or less
> since it almost never comes up in real applications.  In fact, in 15
> years of postgres development, I've never seen 'idle transaction' that
> indicated anything but application malfunction.
>
> That being said, hour timeout for general case would work for me.  It
> would only have to be set lower for very busy OLTP databases where
> continuous vacuum is essential.   In those cases, I don't mind forcing
> all batch processes to disclose in advance they are running long.
>

If I have a statement_timeout of 20 minutes, what should transaction_timeout
be? One hour, or two? If you don't know how many statements are in a
transaction, then it is pretty difficult to set it.

One hour is nothing for bigger databases with a mixed OLAP/OLTP workload,
and even more so for heavily used OLAP.

Regards

Pavel


>
> merlin
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 22:14 GMT+01:00 Joshua D. Drake :

> On 11/04/2015 01:11 PM, Pavel Stehule wrote:
>
>
>> I am sorry, but I have a different experience from GoodData. The few
>> hours autovacuum is usual. So probably, there should be exception for
>> autovacuum, dump, ..
>>
>
> But autovacuum and dump are not idle in transaction or am I missing
> something?
>

Merlin's last proposal was about transaction_timeout, not
transaction_idle_timeout.

Regards

Pavel

>
> JD
>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> New rule for social situations: "If you think to yourself not even
> JD would say this..." Stop and shut your mouth. It's going to be bad.
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joshua D. Drake

On 11/04/2015 01:11 PM, Pavel Stehule wrote:



I am sorry, but I have a different experience from GoodData. The few
hours autovacuum is usual. So probably, there should be exception for
autovacuum, dump, ..


But autovacuum and dump are not idle in transaction, or am I missing 
something?


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 21:56 GMT+01:00 Joshua D. Drake :

> On 11/04/2015 12:31 PM, Merlin Moncure wrote:
>
> My issue isn't long statements, but broken client, that is broken in wrong
>>> state - connect is still active, but no any statement will coming.
>>>
>>
>> Right, 'Idle in transaction'.  Agree that a setting directed purely at
>> that problem could set a much lower timeout, say, 5 minutes or less
>> since it almost never comes up in real applications.  In fact, in 15
>> years of postgres development, I've never seen 'idle transaction' that
>> indicated anything but application malfunction.
>>
>
> I can +1 that.
>
>
>> That being said, hour timeout for general case would work for me.  It
>> would only have to be set lower for very busy OLTP databases where
>> continuous vacuum is essential.   In those cases, I don't mind forcing
>> all batch processes to disclose in advance they are running long.
>>
>
> Yeah about an hour sounds right.
>


I am sorry, but I have a different experience from GoodData. An autovacuum
that takes a few hours is usual. So probably there should be an exception
for autovacuum, dump, ..

Regards

Pavel


>
>
> JD
>
>
>> merlin
>>
>>
>>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> New rule for social situations: "If you think to yourself not even
> JD would say this..." Stop and shut your mouth. It's going to be bad.
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Joshua D. Drake

On 11/04/2015 12:31 PM, Merlin Moncure wrote:


My issue isn't long statements, but broken client, that is broken in wrong
state - connect is still active, but no any statement will coming.


Right, 'Idle in transaction'.  Agree that a setting directed purely at
that problem could set a much lower timeout, say, 5 minutes or less
since it almost never comes up in real applications.  In fact, in 15
years of postgres development, I've never seen 'idle transaction' that
indicated anything but application malfunction.


I can +1 that.



That being said, hour timeout for general case would work for me.  It
would only have to be set lower for very busy OLTP databases where
continuous vacuum is essential.   In those cases, I don't mind forcing
all batch processes to disclose in advance they are running long.


Yeah, about an hour sounds right.


JD



merlin





--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 2:09 PM, Pavel Stehule  wrote:
> 2015-11-04 20:35 GMT+01:00 Merlin Moncure :
>>
>> On Wed, Nov 4, 2015 at 11:26 AM, Pavel Stehule 

>> > it doesn't help. How I can set transaction_timeout if I have series of
>> > slow
>> > statements? In this case I cannot to set transaction_timeout before any
>> > statement or after any success statement.
>>
>> Not quite following you. The client has to go:
>> BEGIN;
>> SET transaction_timeout = x;
>> 
>
> where is the point when transaction_timeout start? In BEGIN or in SET
> transaction_timeout ?

transaction start (BEGIN).

> How I can emulate transaction_idle_timeout? Can I refresh
> transaction_timeout?

Well, for my part, I'd probably set the default to around an hour, with
longer-running batch-driven tasks having to override it.

> My issue isn't long statements, but broken client, that is broken in wrong
> state - connect is still active, but no any statement will coming.

Right, 'idle in transaction'.  Agree that a setting directed purely at
that problem could set a much lower timeout, say, 5 minutes or less,
since it almost never comes up in real applications.  In fact, in 15
years of postgres development, I've never seen an 'idle in transaction'
state that indicated anything but application malfunction.

That being said, an hour timeout for the general case would work for me.  It
would only have to be set lower for very busy OLTP databases where
continuous vacuum is essential.  In those cases, I don't mind forcing
all batch processes to disclose in advance that they are running long.
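
A sketch of that pattern, assuming the transaction_timeout GUC proposed in
this thread existed; SET LOCAL is existing syntax and scopes the override
to the one transaction:

    -- cluster-wide default in postgresql.conf (proposed setting):
    --   transaction_timeout = '1h'

    BEGIN;
    SET LOCAL transaction_timeout = '8h';  -- batch job discloses up front
                                           -- that it runs long; SET LOCAL
                                           -- reverts at COMMIT/ROLLBACK
    -- ... long-running statements ...
    COMMIT;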

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 20:35 GMT+01:00 Merlin Moncure :

> On Wed, Nov 4, 2015 at 11:26 AM, Pavel Stehule 
> wrote:
> > 2015-11-04 18:18 GMT+01:00 Merlin Moncure :
> >>
> >> On Wed, Nov 4, 2015 at 11:15 AM, Pavel Stehule  >
> >> wrote:
> >> >
> >> >
> >> > 2015-11-04 18:11 GMT+01:00 Tom Lane :
> >> >>
> >> >> Merlin Moncure  writes:
> >> >> >> Yes, and that is what I meant.  I have two problems with
> >> >> >> transaction_idle_timeout (as opposed to transaction_timeout):
> >> >> >>
> >> >> >> A) It's more complex.  Unsophisticated administrators may not
> >> >> >> understand or set it properly
> >> >> >>
> >> >> >> B) There is no way to enforce an upper bound on transaction time
> >> >> >> with
> >> >> >> that setting.  A pathological application could keep a transaction
> >> >> >> open forever without running into any timeouts -- that's a
> >> >> >> dealbreaker
> >> >> >> for me.
> >> >> >>
> >> >> >> From my point of view the purpose of the setting should be to
> >> >> >> protect
> >> >> >> you from any single actor from doing things that damage the
> >> >> >> database.
> >> >> >> 'idle in transaction' happens to be one obvious way, but upper
> bound
> >> >> >> on transaction time protects you in general way.
> >> >>
> >> >> > Note, having both settings would work too.
> >> >>
> >> >> I'd vote for just transaction_timeout.  The way our timeout manager
> >> >> logic works, that should be more efficient, as the timeout would only
> >> >> have to be established once at transaction start, not every time the
> >> >> main command loop iterates.
> >> >
> >> >
> >> > I cannot to say, so transaction_timeout is not useful, but it cannot
> be
> >> > effective solution for some mentioned issues. With larger data you
> >> > cannot to
> >> > set transaction_timeout less than few hours.
> >>
> >> sure.  note however any process can manually opt in to a longer timeout.
> >
> >
> > it doesn't help. How I can set transaction_timeout if I have series of
> slow
> > statements? In this case I cannot to set transaction_timeout before any
> > statement or after any success statement.
>
> Not quite following you. The client has to go:
> BEGIN;
> SET transaction_timeout = x;
> 
>

Where is the point at which transaction_timeout starts? At BEGIN, or at SET
transaction_timeout?

How can I emulate transaction_idle_timeout? Can I refresh
transaction_timeout?

My issue isn't long statements, but a broken client stuck in a wrong
state -- the connection is still active, but no statement will ever come.

Regards

Pavel


> or the client can do that on session start up.  There are two problem
> cases I can think of:
> 1) connection pooler (pgbouncer): This can work, but you have to be
> very careful.   Maybe DISCARD needs to be able to undo adjusted
> session settings if it doesn't already.
>
> 2) procedure emulating functions: It's a major pain that you can't
> manage timeout inside a function itself.  You also can't manage
> transaction state or isolation level.  The real solution here is to
> implement stored procedures though.
>
> merlin
>


Re: [HACKERS] FORCE ROW LEVEL SECURITY

2015-11-04 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> On Wed, Nov 4, 2015 at 1:48 PM, Stephen Frost  wrote:
> > * Robert Haas (robertmh...@gmail.com) wrote:
> >> FORCE ROW LEVEL SECURITY doesn't behave as I would expect.
> >>
> >> rhaas=# create policy hideit on foo1 using (a < 3);
> >> CREATE POLICY
> >> rhaas=# explain select * from foo1;
> >>QUERY PLAN
> >> -
> >>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
> >> (1 row)
> >> rhaas=# alter table foo force row level security;
> >> ALTER TABLE
> >> rhaas=# alter table foo1 enable row level security;
> >> ALTER TABLE
> >
> > Sorry if my prior wasn't clear, but above you do 'foo' and 'foo1'
> > independently.
> >
> > Did you intend to alter table 'foo'?
> 
> Hmm.  I've clearly done both, but it still doesn't work:
> 
> rhaas=# alter table foo1 enable row level security;
> ALTER TABLE
> rhaas=# alter table foo1 force row level security;
> ALTER TABLE
> rhaas=# \d foo1
>  Table "public.foo1"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  a  | integer | not null
>  b  | text|
> Policies (Forced Row Security Enabled):
> POLICY "hideit" FOR ALL
>   USING ((a < 3))
> Inherits: foo
> 
> rhaas=# explain select * from foo1;
>QUERY PLAN
> ----------------------------------------------------------
>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
> (1 row)

In this case, you ran it as a superuser, which automatically has the
BYPASSRLS privilege, so RLS is always bypassed.

The change to how BYPASSRLS works was discussed with and ultimately
implemented by Noah, as I recall.
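
One way to see the policy actually take effect is to repeat the test as a
role without SUPERUSER or BYPASSRLS (role name invented):

    CREATE ROLE alice LOGIN;        -- plain role: no SUPERUSER, no BYPASSRLS
    ALTER TABLE foo1 OWNER TO alice;
    SET ROLE alice;
    EXPLAIN SELECT * FROM foo1;     -- with ENABLE + FORCE set on the table,
                                    -- the plan should now show Filter: (a < 3)
    RESET ROLE;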

Thanks!

Stephen




Re: [HACKERS] FORCE ROW LEVEL SECURITY

2015-11-04 Thread Robert Haas
On Wed, Nov 4, 2015 at 1:48 PM, Stephen Frost  wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> FORCE ROW LEVEL SECURITY doesn't behave as I would expect.
>>
>> rhaas=# create policy hideit on foo1 using (a < 3);
>> CREATE POLICY
>> rhaas=# explain select * from foo1;
>>QUERY PLAN
>> ----------------------------------------------------------
>>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
>> (1 row)
>> rhaas=# alter table foo force row level security;
>> ALTER TABLE
>> rhaas=# alter table foo1 enable row level security;
>> ALTER TABLE
>
> Sorry if my prior wasn't clear, but above you do 'foo' and 'foo1'
> independently.
>
> Did you intend to alter table 'foo'?

Hmm.  I've clearly done both, but it still doesn't work:

rhaas=# alter table foo1 enable row level security;
ALTER TABLE
rhaas=# alter table foo1 force row level security;
ALTER TABLE
rhaas=# \d foo1
 Table "public.foo1"
 Column |  Type   | Modifiers
--------+---------+-----------
 a  | integer | not null
 b  | text|
Policies (Forced Row Security Enabled):
POLICY "hideit" FOR ALL
  USING ((a < 3))
Inherits: foo

rhaas=# explain select * from foo1;
   QUERY PLAN
----------------------------------------------------------
 Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
(1 row)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 11:26 AM, Pavel Stehule  wrote:
> 2015-11-04 18:18 GMT+01:00 Merlin Moncure :
>>
>> On Wed, Nov 4, 2015 at 11:15 AM, Pavel Stehule 
>> wrote:
>> >
>> >
>> > 2015-11-04 18:11 GMT+01:00 Tom Lane :
>> >>
>> >> Merlin Moncure  writes:
>> >> >> Yes, and that is what I meant.  I have two problems with
>> >> >> transaction_idle_timeout (as opposed to transaction_timeout):
>> >> >>
>> >> >> A) It's more complex.  Unsophisticated administrators may not
>> >> >> understand or set it properly
>> >> >>
>> >> >> B) There is no way to enforce an upper bound on transaction time
>> >> >> with
>> >> >> that setting.  A pathological application could keep a transaction
>> >> >> open forever without running into any timeouts -- that's a
>> >> >> dealbreaker
>> >> >> for me.
>> >> >>
>> >> >> From my point of view the purpose of the setting should be to
>> >> >> protect
>> >> >> you from any single actor from doing things that damage the
>> >> >> database.
>> >> >> 'idle in transaction' happens to be one obvious way, but upper bound
>> >> >> on transaction time protects you in general way.
>> >>
>> >> > Note, having both settings would work too.
>> >>
>> >> I'd vote for just transaction_timeout.  The way our timeout manager
>> >> logic works, that should be more efficient, as the timeout would only
>> >> have to be established once at transaction start, not every time the
>> >> main command loop iterates.
>> >
>> >
>> > I cannot to say, so transaction_timeout is not useful, but it cannot be
>> > effective solution for some mentioned issues. With larger data you
>> > cannot to
>> > set transaction_timeout less than few hours.
>>
>> sure.  note however any process can manually opt in to a longer timeout.
>
>
> it doesn't help. How I can set transaction_timeout if I have series of slow
> statements? In this case I cannot to set transaction_timeout before any
> statement or after any success statement.

Not quite following you. The client has to go:
BEGIN;
SET transaction_timeout = x;


or the client can do that on session start up.  There are two problem
cases I can think of:
1) connection pooler (pgbouncer): This can work, but you have to be
very careful.   Maybe DISCARD needs to be able to undo adjusted
session settings if it doesn't already.

2) procedure-emulating functions: It's a major pain that you can't
manage timeout inside a function itself.  You also can't manage
transaction state or isolation level.  The real solution here is to
implement stored procedures though.
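
On the DISCARD point: per the existing documentation, DISCARD ALL (what
poolers typically issue between clients) already expands to roughly the
following, and the RESET ALL step is what would undo a session-level SET of
such a timeout:

    SET SESSION AUTHORIZATION DEFAULT;
    RESET ALL;
    DEALLOCATE ALL;
    CLOSE ALL;
    UNLISTEN *;
    SELECT pg_advisory_unlock_all();
    DISCARD PLANS;
    DISCARD SEQUENCES;
    DISCARD TEMP;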

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] FORCE ROW LEVEL SECURITY

2015-11-04 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> FORCE ROW LEVEL SECURITY doesn't behave as I would expect.
> 
> rhaas=# create policy hideit on foo1 using (a < 3);
> CREATE POLICY
> rhaas=# explain select * from foo1;
>QUERY PLAN
> ----------------------------------------------------------
>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
> (1 row)
> rhaas=# alter table foo force row level security;
> ALTER TABLE
> rhaas=# alter table foo1 enable row level security;
> ALTER TABLE

Sorry if my prior wasn't clear, but above you do 'foo' and 'foo1'
independently.

Did you intend to alter table 'foo'?

Thanks!

Stephen




Re: [HACKERS] FORCE ROW LEVEL SECURITY

2015-11-04 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> FORCE ROW LEVEL SECURITY doesn't behave as I would expect.
> 
> rhaas=# create policy hideit on foo1 using (a < 3);
> CREATE POLICY
> rhaas=# explain select * from foo1;
>QUERY PLAN
> ----------------------------------------------------------
>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
> (1 row)
> rhaas=# alter table foo force row level security;
> ALTER TABLE
> rhaas=# alter table foo1 enable row level security;
> ALTER TABLE
> rhaas=# explain select * from foo1;
>QUERY PLAN
> ----------------------------------------------------------
>  Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
> (1 row)
> rhaas=# create user bob;
> CREATE ROLE
> rhaas=# grant select on foo1 to bob;
> GRANT
> rhaas=# \c - bob
> You are now connected to database "rhaas" as user "bob".
> rhaas=> select * from foo1;
>  a | b
> ---+---
> (0 rows)
> 
> rhaas=> explain select * from foo1;
>QUERY PLAN
> ----------------------------------------------------------
>  Seq Scan on foo1  (cost=0.00..25.88 rows=423 width=36)
>Filter: (a < 3)
> (2 rows)
> 
> Isn't the whole purpose of FORCE ROW LEVEL SECURITY to cause RLS to be
> applied even for the table owner?

Did you enable RLS for the table?

You need to do both ENABLE and FORCE if you want it to apply to owners.
There are regression tests which should demonstrate that, if it helps.
Happy to work through the issue also, though.
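
For the archives, the intended sequence is (standard syntax; whether the
owner is then subject to the policy also depends on the owner not having
BYPASSRLS, as discussed elsewhere in this thread):

    ALTER TABLE foo1 ENABLE ROW LEVEL SECURITY; -- turn RLS on for the table
    ALTER TABLE foo1 FORCE ROW LEVEL SECURITY;  -- apply it to the owner too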

Thanks!

Stephen




[HACKERS] FORCE ROW LEVEL SECURITY

2015-11-04 Thread Robert Haas
FORCE ROW LEVEL SECURITY doesn't behave as I would expect.

rhaas=# create policy hideit on foo1 using (a < 3);
CREATE POLICY
rhaas=# explain select * from foo1;
   QUERY PLAN
----------------------------------------------------------
 Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
(1 row)
rhaas=# alter table foo force row level security;
ALTER TABLE
rhaas=# alter table foo1 enable row level security;
ALTER TABLE
rhaas=# explain select * from foo1;
   QUERY PLAN
----------------------------------------------------------
 Seq Scan on foo1  (cost=0.00..22.70 rows=1270 width=36)
(1 row)
rhaas=# create user bob;
CREATE ROLE
rhaas=# grant select on foo1 to bob;
GRANT
rhaas=# \c - bob
You are now connected to database "rhaas" as user "bob".
rhaas=> select * from foo1;
 a | b
---+---
(0 rows)

rhaas=> explain select * from foo1;
   QUERY PLAN
----------------------------------------------------------
 Seq Scan on foo1  (cost=0.00..25.88 rows=423 width=36)
   Filter: (a < 3)
(2 rows)

Isn't the whole purpose of FORCE ROW LEVEL SECURITY to cause RLS to be
applied even for the table owner?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for geqo tweaks

2015-11-04 Thread Alvaro Herrera
Tom Lane wrote:

> Having said that, though, I believe that it's also probably a
> *different* initial shuffle, which may well mean that GEQO gives
> different plans in some cases.  That doesn't bother me as long as
> we only make the change in HEAD, but does anyone want to complain?

Uh, do we promise that plans found by geqo are stable?  That would seem
odd to me -- wouldn't they change shape on a whim, say because the stats
are different?  It seems odd to look for plan stability using a genetic
algorithm.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Jeff Janes
On Wed, Nov 4, 2015 at 7:14 AM, Tom Lane  wrote:
> Simon Riggs  writes:
>> On 4 November 2015 at 15:54, Tom Lane  wrote:
>> We generate this plan
>>  Index Scan using f_x_y_idx on f  (cost=0.42..26075.71 rows=209 width=37)
>>Index Cond: (x = 5)
>>Filter: (y ~~ '%abc%'::text)
>
>> So it should be possible to do the Filter condition on the BitmapIndexScan.
>
> You're missing my point: that is possible in an indexscan, but *not* in a
> bitmap indexscan, because the index AM APIs are totally different in the
> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
> returned out to anyplace that could execute arbitrary expressions.

I had thought it must already be able to execute arbitrary
expressions, given the existing support for user-defined btree
ops (and ops of non-btree types in the case of other index types).
Does that only require a restricted execution environment?


> In the case at hand, the planner should have considered a plan of this
> shape as well.  Presumably it concluded it was more expensive than using
> the bitmap approach.  Jeff might try "set enable_bitmapscan = 0" and
> compare the estimated and actual costs.

It is a simplified test case; I can get it to do about anything I want
by tweaking the size, selectivity, columns selected, and cache warmth.

My understanding is that the normal (not index-only) index scan, which
is thought to be slower and actually is slower with a hot cache, also
applies the filter on the heap tuple, not the index tuple.  But since
it only matters much for large queries and large queries tend to
prefer bitmaps, I was less interested in that case.  And bitmaps can
feed into BitmapAnd, which is very helpful in avoiding index-creep
where every query gets a new index tailored for it.  I think we are
missing a trick here that could apply to lots of situations.
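
A reproduction sketch consistent with the plans quoted in this thread --
the table and index names come from the EXPLAIN output, while the data
generation and sizes are invented:

    CREATE TABLE f (x int, y text);
    INSERT INTO f
      SELECT i % 100, md5(i::text)
        FROM generate_series(1, 1000000) AS i;
    CREATE INDEX f_x_y_idx ON f (x, y);
    ANALYZE f;

    EXPLAIN SELECT * FROM f WHERE x = 5 AND y LIKE '%abc%';
    SET enable_bitmapscan = off;  -- per Tom's suggestion, to compare the
                                  -- bitmap and plain index scan costs
    EXPLAIN SELECT * FROM f WHERE x = 5 AND y LIKE '%abc%';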

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Robert Haas
On Wed, Nov 4, 2015 at 10:59 AM, Tom Lane  wrote:
> Simon Riggs  writes:
>> On 4 November 2015 at 16:14, Tom Lane  wrote:
>>> You're missing my point: that is possible in an indexscan, but *not* in a
>>> bitmap indexscan, because the index AM APIs are totally different in the
>>> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
>>> returned out to anyplace that could execute arbitrary expressions.
>
>> Still misunderstanding each other... sorry about that
>
>> If a btree can Filter y like that on an IndexScan, then it can also apply
>> that Filter on y when it is looking through rows before it adds them to the
>> bitmap.
>
> btree applies no such filter in either case.  "Filter" conditions are
> applied outside the index AM --- and yes, I will resist any proposal to
> relocate that responsibility.

I don't understand what you're arguing against here -- as Simon, I think,
correctly points out, there seems to be a substantial performance
improvement possible.  It's not so much that we would apply the
Filter conditions outside the index AM as that we would have two kinds
of Index Quals that could be enforced by the index AM.  Some quals
(such as x = 5) could be enforced by scanning only the relevant
portion of the index, while others (such as y like '%abc%') don't
abridge the portion of the index that needs to be scanned, but could
disqualify some index tuples before we go to the trouble of hitting
the heap.

The problem I see is that this might involve testing quals against an index
tuple when the corresponding heap tuple isn't visible.  This probably
isn't safe except for leakproof quals, and there might be some other
problems as well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] fortnight interval support

2015-11-04 Thread Robert Haas
On Tue, Nov 3, 2015 at 6:40 PM, Michael Paquier
 wrote:
> On Wed, Nov 4, 2015 at 2:31 AM, Robert Haas  wrote:
>> On Tue, Nov 3, 2015 at 8:31 AM, Alvaro Herrera  
>> wrote:
>>> (WRT the reference to Jane Austen and "Se'ennight" for "week", it occurs
>>> to me that fortnight is a similar contraction for "forteen night".)
>>
>> Well, clearly we also need enquië for the elves of Arda and tenday for
>> Faerûnians.  Seems like a serious oversight, though making enquië
>> work in non-Unicode locales might be tricky.
>
> You are forgetting Klingon, parseltongue, dwarfic and the one of
> Mordor. It's not worth angering them as well. But I'll stop here.

I'm not aware that any of those have weeks of some duration other than
7 days, but the lack of support for the Klingon calendar in general is
indeed troubling.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch for geqo tweaks

2015-11-04 Thread Tom Lane
Nathan Wagner  writes:
> I have two patches to make the geqo initialization and mutation
> slightly better.

> The first adjusts the mutation swaps to avoid having to re-pick
> ties.  The second changes the initialization and shuffling algorithm
> for the gene array to use an in-place Fisher-Yates shuffling
> algorithm.

I took a quick look at this.

I'm not very impressed with the first patch: it might save a few
geqo_randint() calls, but it seems to do so at the price of making the
swap choices less random --- for instance it sure looks to me like the
last array element is now less likely to participate in swaps than
other elements.  Unless you can prove that actually the swapping is
still unbiased, I'm inclined to reject this part.

As for the second part, I had to look up Fisher-Yates ;-) but after
having read Wikipedia's entry about it I think this is a good change.
The code's shorter and more efficient, and it should mathematically
provide an equally-unbiased initial shuffle.  It could do with a better
comment, and I'd be inclined to handle the first element outside the loop
rather than uselessly computing geqo_randint(0,0), but those are trivial
changes.
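
For reference, an illustrative C sketch of that shuffle, with the first
element handled outside the loop as suggested; geqo_rand_int() stands in
for whatever uniform random-integer helper the patch actually uses:

    typedef int Gene;           /* as in the geqo code */

    /*
     * Fill tour[0..num_gene-1] with a uniformly random permutation of
     * 1..num_gene, built in place ("inside-out" Fisher-Yates).
     */
    static void
    init_tour(Gene *tour, int num_gene)
    {
        int     i;

        tour[0] = (Gene) 1;     /* i = 0 would call rand(0,0) uselessly */
        for (i = 1; i < num_gene; i++)
        {
            int     j = geqo_rand_int(0, i);    /* uniform in [0, i] */

            tour[i] = tour[j];
            tour[j] = (Gene) (i + 1);
        }
    }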

Having said that, though, I believe that it's also probably a
*different* initial shuffle, which may well mean that GEQO gives
different plans in some cases.  That doesn't bother me as long as
we only make the change in HEAD, but does anyone want to complain?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Simon Riggs
On 4 November 2015 at 16:59, Tom Lane  wrote:

> Simon Riggs  writes:
> > On 4 November 2015 at 16:14, Tom Lane  wrote:
> >> You're missing my point: that is possible in an indexscan, but *not* in
> a
> >> bitmap indexscan, because the index AM APIs are totally different in the
> >> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
> >> returned out to anyplace that could execute arbitrary expressions.
>
> > Still misunderstanding each other... sorry about that
>
> > If a btree can Filter y like that on an IndexScan, then it can also apply
> > that Filter on y when it is looking through rows before it adds them to
> the
> > bitmap.
>
> btree applies no such filter in either case.  "Filter" conditions are
> applied outside the index AM --- and yes, I will resist any proposal to
> relocate that responsibility.
>

I don't think anyone has argued that point yet; we just don't have enough
info to agree.

As Jeff said in his OP, "Is there a fundamental reason the filter on y is
not being applied to
the index scan, rather than the heap scan?", which still stands.

Why would you resist? And/Or Why is that important?

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 18:18 GMT+01:00 Merlin Moncure :

> On Wed, Nov 4, 2015 at 11:15 AM, Pavel Stehule 
> wrote:
> >
> >
> > 2015-11-04 18:11 GMT+01:00 Tom Lane :
> >>
> >> Merlin Moncure  writes:
> >> >> Yes, and that is what I meant.  I have two problems with
> >> >> transaction_idle_timeout (as opposed to transaction_timeout):
> >> >>
> >> >> A) It's more complex.  Unsophisticated administrators may not
> >> >> understand or set it properly
> >> >>
> >> >> B) There is no way to enforce an upper bound on transaction time with
> >> >> that setting.  A pathological application could keep a transaction
> >> >> open forever without running into any timeouts -- that's a
> dealbreaker
> >> >> for me.
> >> >>
> >> >> From my point of view the purpose of the setting should be to protect
> >> >> you from any single actor from doing things that damage the database.
> >> >> 'idle in transaction' happens to be one obvious way, but upper bound
> >> >> on transaction time protects you in general way.
> >>
> >> > Note, having both settings would work too.
> >>
> >> I'd vote for just transaction_timeout.  The way our timeout manager
> >> logic works, that should be more efficient, as the timeout would only
> >> have to be established once at transaction start, not every time the
> >> main command loop iterates.
> >
> >
> > I cannot to say, so transaction_timeout is not useful, but it cannot be
> > effective solution for some mentioned issues. With larger data you
> cannot to
> > set transaction_timeout less than few hours.
>
> sure.  note however any process can manually opt in to a longer timeout.
>

It doesn't help. How can I set transaction_timeout if I have a series of
slow statements? In this case I cannot set transaction_timeout before each
statement or after each successful statement.


>
> merlin
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 11:15 AM, Pavel Stehule  wrote:
>
>
> 2015-11-04 18:11 GMT+01:00 Tom Lane :
>>
>> Merlin Moncure  writes:
>> >> Yes, and that is what I meant.  I have two problems with
>> >> transaction_idle_timeout (as opposed to transaction_timeout):
>> >>
>> >> A) It's more complex.  Unsophisticated administrators may not
>> >> understand or set it properly
>> >>
>> >> B) There is no way to enforce an upper bound on transaction time with
>> >> that setting.  A pathological application could keep a transaction
>> >> open forever without running into any timeouts -- that's a dealbreaker
>> >> for me.
>> >>
>> >> From my point of view the purpose of the setting should be to protect
>> >> you from any single actor from doing things that damage the database.
>> >> 'idle in transaction' happens to be one obvious way, but upper bound
>> >> on transaction time protects you in general way.
>>
>> > Note, having both settings would work too.
>>
>> I'd vote for just transaction_timeout.  The way our timeout manager
>> logic works, that should be more efficient, as the timeout would only
>> have to be established once at transaction start, not every time the
>> main command loop iterates.
>
>
> I cannot to say, so transaction_timeout is not useful, but it cannot be
> effective solution for some mentioned issues. With larger data you cannot to
> set transaction_timeout less than few hours.

sure.  note however any process can manually opt in to a longer timeout.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 18:11 GMT+01:00 Tom Lane :

> Merlin Moncure  writes:
> >> Yes, and that is what I meant.  I have two problems with
> >> transaction_idle_timeout (as opposed to transaction_timeout):
> >>
> >> A) It's more complex.  Unsophisticated administrators may not
> >> understand or set it properly
> >>
> >> B) There is no way to enforce an upper bound on transaction time with
> >> that setting.  A pathological application could keep a transaction
> >> open forever without running into any timeouts -- that's a dealbreaker
> >> for me.
> >>
> >> From my point of view the purpose of the setting should be to protect
> >> you from any single actor from doing things that damage the database.
> >> 'idle in transaction' happens to be one obvious way, but upper bound
> >> on transaction time protects you in general way.
>
> > Note, having both settings would work too.
>
> I'd vote for just transaction_timeout.  The way our timeout manager
> logic works, that should be more efficient, as the timeout would only
> have to be established once at transaction start, not every time the
> main command loop iterates.
>

I cannot say that transaction_timeout is not useful, but it cannot be an
effective solution for some of the mentioned issues. With larger data you
cannot set transaction_timeout to less than a few hours.

Regards

Pavel


>
> regards, tom lane
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Tom Lane
Merlin Moncure  writes:
>> Yes, and that is what I meant.  I have two problems with
>> transaction_idle_timeout (as opposed to transaction_timeout):
>> 
>> A) It's more complex.  Unsophisticated administrators may not
>> understand or set it properly
>> 
>> B) There is no way to enforce an upper bound on transaction time with
>> that setting.  A pathological application could keep a transaction
>> open forever without running into any timeouts -- that's a dealbreaker
>> for me.
>> 
>> From my point of view the purpose of the setting should be to protect
>> you from any single actor from doing things that damage the database.
>> 'idle in transaction' happens to be one obvious way, but upper bound
>> on transaction time protects you in general way.

> Note, having both settings would work too.

I'd vote for just transaction_timeout.  The way our timeout manager
logic works, that should be more efficient, as the timeout would only
have to be established once at transaction start, not every time the
main command loop iterates.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Joe Conway
On 11/04/2015 04:09 AM, Pavel Stehule wrote:
> I am looking on this last patch. I talked about the name of this command
> with more people, and the name "rotate" is unhappy. The correct name for
> this visualization technique is "crosstab" (see google "crosstab"). The
> conflict with our extension is unhappy, but using "rotate" is more worst
> - (see google "rotate"). The term "rotate" is used less time (related to
> topic), and usually with zero informed people. More, in attached doc,
> the word "crosstab" is pretty often used, and then the word "rotate" has
> not sense.

Apologies if this has already been suggested (as I have not followed the
entire thread), but if you don't want to conflict with the name
crosstab, perhaps "pivot" would be better?

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 17:02 GMT+01:00 Merlin Moncure :

> On Wed, Nov 4, 2015 at 8:54 AM, Pavel Stehule 
> wrote:
> > 2015-11-04 15:50 GMT+01:00 Merlin Moncure :
> >>
> >> On Wed, Nov 4, 2015 at 8:42 AM, Pavel Stehule 
> >> wrote:
> >> >> > Okay, I think one more point to consider is that it would be
> >> >> > preferable
> >> >> > to
> >> >> > have such an option for backend sessions and not for other
> processes
> >> >> > like WalSender.
> >> >>
> >> >> All right...I see the usage..  I withdraw my objection to 'session'
> >> >> prefix then now that I understand the case.  So, do you agree that:
> >> >>
> >> >> *) session_idle_timeout: dumps the backend after X time in 'idle'
> state
> >> >> and
> >> >>  *) transaction_timeout: cancels transaction after X time, regardless
> >> >> of
> >> >> state
> >> >>
> >> >> sounds good?
> >> >
> >> >
> >> > Not too much
> >> >
> >> >  *) transaction_timeout: cancels transaction after X time, regardless
> of
> >> > state
> >> >
> >> > This is next level of statement_timeout. I can't to image sense. What
> is
> >> > a
> >> > issue solved by this property?
> >>
> >> That's the entire point of the thread (or so I thought): cancel
> >> transactions 'idle in transaction'.  This is entirely different than
> >> killing idle sessions.  BTW, I would never configure
> >> session_idle_timeout, because I have no idea what that would do to
> >> benign cases where connection poolers have grabbed a few extra
> >> connections during a load spike.   It's pretty common not to have
> >> those applications have coded connection retry properly and it would
> >> cause issues.
> >
> > you wrote "transaction_timeout: cancels transaction after X time,
> regardless
>
> Yes, and that is what I meant.  I have two problems with
> transaction_idle_timeout (as opposed to transaction_timeout):
>
> A) It's more complex.  Unsophisticated administrators may not
> understand or set it properly
>
> B) There is no way to enforce an upper bound on transaction time with
> that setting.  A pathological application could keep a transaction
> open forever without running into any timeouts -- that's a dealbreaker
> for me.
>
> From my point of view the purpose of the setting should be to protect
> you from any single actor from doing things that damage the database.
> 'idle in transaction' happens to be one obvious way, but upper bound
> on transaction time protects you in general way.
>

I agree that transaction_timeout is more general. But I have the same
problem as with A): how do I set it properly? In our production the longest
transactions can be 30 hours (VACUUM) or 5 hours (ETL). But
transaction_idle_timeout could be 5 minutes; I know that 5 minutes in the
"idle in transaction" state signals some issue.

It looks very similar to the relation between statement_timeout and
lock_timeout, I think.
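
For comparison, the existing pair referred to here -- a generous
per-statement budget combined with a much tighter cap on lock waits, both
real settings:

    SET statement_timeout = '20min';  -- upper bound on any single statement
    SET lock_timeout = '5s';          -- give up quickly when blocked on a lock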

Regards

Pavel


> merlin
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 10:02 AM, Merlin Moncure  wrote:
> On Wed, Nov 4, 2015 at 8:54 AM, Pavel Stehule  wrote:
>> 2015-11-04 15:50 GMT+01:00 Merlin Moncure :
>>>
>>> On Wed, Nov 4, 2015 at 8:42 AM, Pavel Stehule 
>>> wrote:
>>> >> > Okay, I think one more point to consider is that it would be
>>> >> > preferable
>>> >> > to
>>> >> > have such an option for backend sessions and not for other processes
>>> >> > like WalSender.
>>> >>
>>> >> All right...I see the usage..  I withdraw my objection to 'session'
>>> >> prefix then now that I understand the case.  So, do you agree that:
>>> >>
>>> >> *) session_idle_timeout: dumps the backend after X time in 'idle' state
>>> >> and
>>> >>  *) transaction_timeout: cancels transaction after X time, regardless
>>> >> of
>>> >> state
>>> >>
>>> >> sounds good?
>>> >
>>> >
>>> > Not too much
>>> >
>>> >  *) transaction_timeout: cancels transaction after X time, regardless of
>>> > state
>>> >
>>> > This is next level of statement_timeout. I can't to image sense. What is
>>> > a
>>> > issue solved by this property?
>>>
>>> That's the entire point of the thread (or so I thought): cancel
>>> transactions 'idle in transaction'.  This is entirely different than
>>> killing idle sessions.  BTW, I would never configure
>>> session_idle_timeout, because I have no idea what that would do to
>>> benign cases where connection poolers have grabbed a few extra
>>> connections during a load spike.   It's pretty common not to have
>>> those applications have coded connection retry properly and it would
>>> cause issues.
>>
>> you wrote "transaction_timeout: cancels transaction after X time, regardless
>
> Yes, and that is what I meant.  I have two problems with
> transaction_idle_timeout (as opposed to transaction_timeout):
>
> A) It's more complex.  Unsophisticated administrators may not
> understand or set it properly
>
> B) There is no way to enforce an upper bound on transaction time with
> that setting.  A pathological application could keep a transaction
> open forever without running into any timeouts -- that's a dealbreaker
> for me.
>
> From my point of view the purpose of the setting should be to protect
> you from any single actor from doing things that damage the database.
> 'idle in transaction' happens to be one obvious way, but upper bound
> on transaction time protects you in general way.

Note, having both settings would work too.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 8:54 AM, Pavel Stehule  wrote:
> 2015-11-04 15:50 GMT+01:00 Merlin Moncure :
>>
>> On Wed, Nov 4, 2015 at 8:42 AM, Pavel Stehule 
>> wrote:
>> >> > Okay, I think one more point to consider is that it would be
>> >> > preferable
>> >> > to
>> >> > have such an option for backend sessions and not for other processes
>> >> > like WalSender.
>> >>
>> >> All right...I see the usage..  I withdraw my objection to 'session'
>> >> prefix then now that I understand the case.  So, do you agree that:
>> >>
>> >> *) session_idle_timeout: dumps the backend after X time in 'idle' state
>> >> and
>> >>  *) transaction_timeout: cancels transaction after X time, regardless
>> >> of
>> >> state
>> >>
>> >> sounds good?
>> >
>> >
>> > Not too much
>> >
>> >  *) transaction_timeout: cancels transaction after X time, regardless of
>> > state
>> >
>> > This is next level of statement_timeout. I can't to image sense. What is
>> > a
>> > issue solved by this property?
>>
>> That's the entire point of the thread (or so I thought): cancel
>> transactions 'idle in transaction'.  This is entirely different than
>> killing idle sessions.  BTW, I would never configure
>> session_idle_timeout, because I have no idea what that would do to
>> benign cases where connection poolers have grabbed a few extra
>> connections during a load spike.   It's pretty common not to have
>> those applications have coded connection retry properly and it would
>> cause issues.
>
> you wrote "transaction_timeout: cancels transaction after X time, regardless

Yes, and that is what I meant.  I have two problems with
transaction_idle_timeout (as opposed to transaction_timeout):

A) It's more complex.  Unsophisticated administrators may not
understand or set it properly.

B) There is no way to enforce an upper bound on transaction time with
that setting.  A pathological application could keep a transaction
open forever without running into any timeouts -- that's a dealbreaker
for me.

From my point of view, the purpose of the setting should be to protect
you from any single actor doing things that damage the database.
'idle in transaction' happens to be one obvious way, but an upper bound
on transaction time protects you in a general way.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Tom Lane
Simon Riggs  writes:
> On 4 November 2015 at 16:14, Tom Lane  wrote:
>> You're missing my point: that is possible in an indexscan, but *not* in a
>> bitmap indexscan, because the index AM APIs are totally different in the
>> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
>> returned out to anyplace that could execute arbitrary expressions.

> Still misunderstanding each other... sorry about that

> If a btree can Filter y like that on an IndexScan, then it can also apply
> that Filter on y when it is looking through rows before it adds them to the
> bitmap.

btree applies no such filter in either case.  "Filter" conditions are
applied outside the index AM --- and yes, I will resist any proposal to
relocate that responsibility.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Simon Riggs
On 4 November 2015 at 16:14, Tom Lane  wrote:

> Simon Riggs  writes:
> > On 4 November 2015 at 15:54, Tom Lane  wrote:
> > We generate this plan
> >  Index Scan using f_x_y_idx on f  (cost=0.42..26075.71 rows=209 width=37)
> >Index Cond: (x = 5)
> >Filter: (y ~~ '%abc%'::text)
>
> > So it should be possible to do the Filter condition on the
> BitmapIndexScan.
>
> You're missing my point: that is possible in an indexscan, but *not* in a
> bitmap indexscan, because the index AM APIs are totally different in the
> two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
> returned out to anyplace that could execute arbitrary expressions.
>

Still misunderstanding each other... sorry about that

If a btree can Filter y like that on an IndexScan, then it can also apply
that Filter on y when it is looking through rows before it adds them to the
bitmap.

I completely understand that it cannot return the value (of y) via the
bitmap.

We should be able to get a plan like this

explain select * from f where x=5 and y like '%abc%';

                                  QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on f  (cost=382.67..9314.72 rows=1 width=37)
   ->  Bitmap Index Scan on f_x_y_idx  (cost=0.00..382.67 rows=10433 width=0)
         Index Cond: (x = 5)
>>       Filter: (y ~~ '%abc%'::text)

if the bitmap stays non-lossy.

I see that plannodes.h says
"In a BitmapIndexScan plan node, the targetlist and qual fields are not
used and are always NIL. "
but it doesn't say why.

Why can't the BitmapIndexScan execute arbitrary expressions? (or What did
you mean by that phrase?)

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] extend pgbench expressions with functions

2015-11-04 Thread Fabien COELHO


Hello Alvaro,


For instance for "random_gaussian(int, int, double)", it may be called with
any combination of 3 int/double arguments, each one must be tested and
possibly converted to the target type before calling the actual function.
For overloaded operators or functions (arithmetics, abs...) there is also
the decision about which operator is called and then what conversions are
necessary.


I think we should make this as simple as possible -- in particular I
don't see a lot of value in overloading here.  I'd rather have it throw
an error and force you to write the double with a ".0" when the given
value is an integer so that it is parsed as a double, rather than
accepting ambiguous input.  Conversely, if you pass a double to the
integer options, raise an error.


I agree that the overloading and other stuff is not a decisive or very 
useful feature of the patch, but I think that the descendant-typing 
two-function recursive evaluation is a good compromise: it works without 
much ado or ambiguity, and from my point of view the code stays quite 
simple with this approach.


So although the benefit (of having overloaded operators and handling 
mixed-type expressions) is indeed quite limited in practice, the cost in 
the code is low.


Being more restrictive or strict, as you suggest, would not remove much 
code, and would remove some convenience. Also, removing this feature would 
introduce inconsistencies: integer arguments would allow general 
expressions, but only double constants would be accepted for double 
arguments...


So I would rather keep it as it is.
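
An illustrative script fragment in the syntax under discussion (variable
name invented); under the stricter rule proposed above, writing the third
argument as plain 2 instead of 2.0 would raise an error rather than being
converted:

    \set aid random_gaussian(1, 100000, 2.5)
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;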

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Tom Lane
Simon Riggs  writes:
> On 4 November 2015 at 15:54, Tom Lane  wrote:
> We generate this plan
>  Index Scan using f_x_y_idx on f  (cost=0.42..26075.71 rows=209 width=37)
>Index Cond: (x = 5)
>Filter: (y ~~ '%abc%'::text)

> So it should be possible to do the Filter condition on the BitmapIndexScan.

You're missing my point: that is possible in an indexscan, but *not* in a
bitmap indexscan, because the index AM APIs are totally different in the
two cases.  In a bitmap scan, nothing more than a TID bitmap is ever
returned out to anyplace that could execute arbitrary expressions.

In the case at hand, the planner should have considered a plan of this
shape as well.  Presumably it concluded it was more expensive than using
the bitmap approach.  Jeff might try "set enable_bitmapscan = 0" and
compare the estimated and actual costs.
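
Something along these lines, reusing the table and index names from the plans
quoted in this thread:

    SET enable_bitmapscan = off;
    EXPLAIN ANALYZE SELECT * FROM f WHERE x = 5 AND y LIKE '%abc%';
    -- forces the plain index scan

    SET enable_bitmapscan = on;
    EXPLAIN ANALYZE SELECT * FROM f WHERE x = 5 AND y LIKE '%abc%';
    -- lets the planner pick the bitmap plan again

Comparing the two EXPLAIN ANALYZE outputs shows whether the planner's cost
estimates match reality.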

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extend pgbench expressions with functions

2015-11-04 Thread Alvaro Herrera
Fabien COELHO wrote:

> For instance for "random_gaussian(int, int, double)", it may be called with
> any combination of 3 int/double arguments, each one must be tested and
> possibly converted to the target type before calling the actual function.
> For overloaded operators or functions (arithmetics, abs...) there is also
> the decision about which operator is called and then what conversions are
> necessary.

I think we should make this as simple as possible -- in particular I
don't see a lot of value in overloading here.  I'd rather have it throw
an error and force you to write the double with a ".0" when the given
value is an integer so that it is parsed as a double, rather than
accepting ambiguous input.  Conversely, if you pass a double to the
integer options, raise an error.

-- 
Álvaro Herrera             http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CustomScan support on readfuncs.c

2015-11-04 Thread Kouhei Kaigai
> > On Tue, Sep 29, 2015 at 6:19 PM, Kouhei Kaigai  wrote:
> > >> On Mon, Sep 28, 2015 at 8:31 PM, Kouhei Kaigai  
> > >> wrote:
> > >> >> Instead of doing this:
> > >> >>
> > >> >> +/* Dump library and symbol name instead of raw pointer */
> > >> >>  appendStringInfoString(str, " :methods ");
> > >> >> -_outToken(str, node->methods->CustomName);
> > >> >> +_outToken(str, node->methods->methods_library_name);
> > >> >> +appendStringInfoChar(str, ' ');
> > >> >> +_outToken(str, node->methods->methods_symbol_name);
> > >> >>
> > >> >> Suppose we just make library_name and symbol_name fields in the node
> > >> >> itself, that are dumped and loaded like any others.
> > >> >>
> > >> >> Would that be better?
> > >> >>
> > >> > I have no preference here.
> > >> >
> > >> > Even if we dump library_name/symbol_name as if they were fields of
> > >> > CustomScan, not CustomScanMethods, in a similar way, we cannot use 
> > >> > WRITE_STRING_FIELD
> > >> > here, because its 'fldname' assumes these members are direct fields of
> > >> > CustomScan.
> > >> >
> > >> >   /* Write a character-string (possibly NULL) field */
> > >> >   #define WRITE_STRING_FIELD(fldname) \
> > >> >   (appendStringInfo(str, " :" CppAsString(fldname) " "), \
> > >> >_outToken(str, node->fldname))
> > >>
> > >> Well that's exactly what I was suggesting: making them a direct field
> > >> of CustomScan.
> > >>
> > > Let me confirm. Are you suggesting to have library_name/symbol_name
> > > in CustomScan, not CustomScanMethods?
> > > I prefer these fields to be in CustomScanMethods because we don't need
> > > to set them up again once PG_init has set up these symbols. CustomScan has
> > > to be created every time it is chosen by the planner.
> >
> > True.  But that doesn't cost much.  I'm not sure it's better, so if
> > you don't like it, don't worry about it.
> >
> Yep, I'd prefer library_name and symbol_name to stay in CustomScanMethods,
> because the table of callbacks and its identifiers are not separable.
> 
> > >> > One other question I have. Do we have a portable way to lookup
> > >> > a pair of library and symbol by address?
> > >> > Glibc has dladdr() functions that returns these information,
> > >> > however, manpage warned it is not defined by POSIX.
> > >> > If we would be able to have any portable way, it may make the
> > >> > interface simpler.
> > >>
> > >> Yes: load_external_function.
> > >>
> > > It looks up an address by a pair of library and symbol names.
> > > What I'm looking for is a portable function that looks up a pair
> > > of library and symbol name by an address, like dladdr() of glibc.
> > > I don't know whether other *nix or windows have these infrastructure.
> >
> > No, that doesn't exist, and the chances of us trying add that
> > infrastructure for this feature are nil.
> >
> OK, probably it is the only way - to expect the extension to put correct
> values in the pair of library and symbol names.
> 
> I'll try to check whether the current patch is workable in this direction
> using my extension. Please wait a few days.
>
Sorry for my late reply.

I confirmed that CustomScan support of readfuncs.c works fine using the
attached patch for the PG-Strom tree. It worked as expected - duplication,
serialization and deserialization by:
  plan_dup = stringToNode(nodeToString(copyObject(plan_orig)))

So, the custom-scan-on-readfuncs.v1.patch itself is good for me.

In addition, I felt the following functions would be useful for extensions.
 * _outToken()
 * _outBitmapset()
 * _readBitmapset()

Do we have non-static versions of the above functions?
If not, extensions have to implement them themselves, more or less.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 



custom-scan-on-readfuncs.v1.patch
Description: custom-scan-on-readfuncs.v1.patch


test-customscan-readfuncs.patch
Description: test-customscan-readfuncs.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Simon Riggs
On 4 November 2015 at 15:54, Tom Lane  wrote:

> Nicolas Barbier  writes:
> > 2015-11-04 Antonin Houska :
> >> (see expand_indexqual_opclause()), I'm not sure any kind of expansion is
> >> possible for '%abc%' which would result in a b-tree searchable
> condition.
>
> > I think the question is not about using the b-tree for checking the
> > condition, but about just retrieving the value for y from the index,
> > and just using that to check the condition before fetching the
> > corresponding tuple from the heap.
>
> Bitmap index scans only return TID bitmaps, not index tuples; indeed
> the underlying index may not store anything recognizable as tuples.
>

Agreed, though the OP's question was asking why a Filter condition can't be
added to a BitmapIndexScan, so that non-qualifying rows can be excluded
from the TID bitmap it generates.

We generate this plan

postgres=# explain select * from f where x=5 and y like '%abc%';

                                QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using f_x_y_idx on f  (cost=0.42..26075.71 rows=209 width=37)
   Index Cond: (x = 5)
   Filter: (y ~~ '%abc%'::text)

So it should be possible to do the Filter condition on the BitmapIndexScan.

-- 
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
2015-11-04 15:50 GMT+01:00 Merlin Moncure :

> On Wed, Nov 4, 2015 at 8:42 AM, Pavel Stehule 
> wrote:
> >> > Okay, I think one more point to consider is that it would be
> preferable
> >> > to
> >> > have such an option for backend sessions and not for other processes
> >> > like WalSender.
> >>
> >> All right...I see the usage..  I withdraw my objection to 'session'
> >> prefix then now that I understand the case.  So, do you agree that:
> >>
> >> *) session_idle_timeout: dumps the backend after X time in 'idle' state
> >> and
> >>  *) transaction_timeout: cancels transaction after X time, regardless of
> >> state
> >>
> >> sounds good?
> >
> >
> > Not too much
> >
> >  *) transaction_timeout: cancels transaction after X time, regardless of
> > state
> >
> > This is the next level of statement_timeout. I can't imagine the use
> > case. What issue is solved by this property?
>
> That's the entire point of the thread (or so I thought): cancel
> transactions 'idle in transaction'.  This is entirely different than
> killing idle sessions.  BTW, I would never configure
> session_idle_timeout, because I have no idea what that would do to
> benign cases where connection poolers have grabbed a few extra
> connections during a load spike.   It's pretty common for those
> applications not to have coded connection retry properly, and it would
> cause issues.
>

you wrote "transaction_timeout: cancels transaction after X time, regardless
of state" - I would understand if the text were "cancels transaction after X
time if state is 'idle in transaction'"

Pavel


>
> The problem at hand is idle *transactions*, not sessions, and a
> configuration setting that deals with transaction time.  I do not
> understand the objection to setting an upper bound on transaction
> time.   I'm ok with cancelling or dumping the session with a slight
> preference on cancel.
>
> merlin
>


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Tom Lane
Nicolas Barbier  writes:
> 2015-11-04 Antonin Houska :
>> (see expand_indexqual_opclause()), I'm not sure any kind of expansion is
>> possible for '%abc%' which would result in a b-tree searchable condition.

> I think the question is not about using the b-tree for checking the
> condition, but about just retrieving the value for y from the index,
> and just using that to check the condition before fetching the
> corresponding tuple from the heap.

Bitmap index scans only return TID bitmaps, not index tuples; indeed
the underlying index may not store anything recognizable as tuples.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Wed, Nov 4, 2015 at 8:42 AM, Pavel Stehule  wrote:
>> > Okay, I think one more point to consider is that it would be preferable
>> > to
>> > have such an option for backend sessions and not for other processes
>> > like WalSender.
>>
>> All right...I see the usage..  I withdraw my objection to 'session'
>> prefix then now that I understand the case.  So, do you agree that:
>>
>> *) session_idle_timeout: dumps the backend after X time in 'idle' state
>> and
>>  *) transaction_timeout: cancels transaction after X time, regardless of
>> state
>>
>> sounds good?
>
>
> Not too much
>
>  *) transaction_timeout: cancels transaction after X time, regardless of
> state
>
> This is the next level of statement_timeout. I can't imagine the use case.
> What issue is solved by this property?

That's the entire point of the thread (or so I thought): cancel
transactions 'idle in transaction'.  This is entirely different than
killing idle sessions.  BTW, I would never configure
session_idle_timeout, because I have no idea what that would do to
benign cases where connection poolers have grabbed a few extra
connections during a load spike.   It's pretty common for those
applications not to have coded connection retry properly, and it would
cause issues.

The problem at hand is idle *transactions*, not sessions, and a
configuration setting that deals with transaction time.  I do not
understand the objection to setting an upper bound on transaction
time.   I'm ok with cancelling or dumping the session with a slight
preference on cancel.
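
For reference, a rough sketch of what is already possible today without any
new GUC, by polling pg_stat_activity (9.2+ column names) from an external job:

    SELECT pid, now() - state_change AS idle_for, query
      FROM pg_stat_activity
     WHERE state = 'idle in transaction'
       AND now() - state_change > interval '5 min';

    -- then, for an offending backend (12345 is a made-up pid;
    -- pg_cancel_backend is the gentler alternative):
    SELECT pg_terminate_backend(12345);

A server-side timeout would just make this kind of external policing
unnecessary.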

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Pavel Stehule
> > Okay, I think one more point to consider is that it would be preferable to
> > have such an option for backend sessions and not for other processes
> > like WalSender.
>
> All right...I see the usage..  I withdraw my objection to 'session'
> prefix then now that I understand the case.  So, do you agree that:
>
> *) session_idle_timeout: dumps the backend after X time in 'idle' state
> and
>  *) transaction_timeout: cancels transaction after X time, regardless of
> state
>
> sounds good?
>

Not too much

 *) transaction_timeout: cancels transaction after X time, regardless of
state

This is the next level of statement_timeout. I can't imagine the use case.
What issue is solved by this property?

Pavel


>
> merlin
>


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Amit Kapila
On Wed, Nov 4, 2015 at 8:06 PM, Merlin Moncure  wrote:
>
> On Tue, Nov 3, 2015 at 10:33 PM, Amit Kapila 
wrote:
>  It is 100% true. But the users can do strange things. If we solve idle
>  transactions and not idle sessions, then they are able to increase
>  max_connections to thousands with a happy smile on their face.
> 
>  I have no strong idea about how to solve it well - maybe introduce
>  transaction_idle_timeout and session_idle_timeout?
> 
> >>>
> >>> What exactly do we want to define session_idle_timeout?  Some
> >>> possibilities:
> >>> a. Reset the session related variables like transaction, prepared
> >>> statements, etc. and retain it for connection pool kind of stuff
> >>> b. Exit from the session
> >>
> >>
> >> b is the safe state - and currently it is the only state that we can
> >> forward to the client side (with keep_alive packets) - so I prefer b
> >>
> >
> > Okay, I think one more point to consider is that it would be preferable
to
> > have such an option for backend sessions and not for other processes
> > like WalSender.
>
> All right...I see the usage..  I withdraw my objection to 'session'
> prefix then now that I understand the case.  So, do you agree that:
>
> *) session_idle_timeout: dumps the backend after X time in 'idle' state
>

Agreed.

> and
>  *) transaction_timeout: cancels transaction after X time, regardless of
state
>

I am not sure about this; let us see if anybody else has an opinion about
this parameter.



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] RFC/WIP: adding new configuration options to TOAST

2015-11-04 Thread Craig Ringer
On 4 November 2015 at 20:48, Bill Moran  wrote:

> On a related note, since I've been digging in this area of code I feel
> comfortable commenting that pluggable compression isn't terribly difficult
> to implement. The most significant work will probably be in actually
> implementing the various algorithms, not in making them pluggable. I've
> been considering making that my next project.

IIRC the biggest roadblocks for pluggable compression were around
binary upgrade and around the per-Datum or per-tuple space cost of
identifying which algorithm is used. But I guess that's getting
off-track.

> Overall, I don't feel like you're actually disagreeing with
> anything I'm doing ... you're just wishing I was already further
> along.

Indeed. I'm concerned about the extra complexity exposed to users and
unsure about GUCs as an interface, but I can see being able to
encourage earlier and more aggressive compression as potentially quite
handy.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-11-04 Thread Merlin Moncure
On Tue, Nov 3, 2015 at 10:33 PM, Amit Kapila  wrote:
> On Tue, Nov 3, 2015 at 7:56 PM, Pavel Stehule 
> wrote:
>>
>>
>>
>> 2015-11-03 3:42 GMT+01:00 Amit Kapila :
>>>
>>> On Mon, Nov 2, 2015 at 10:45 PM, Pavel Stehule 
>>> wrote:


 It is 100% true. But the users can do strange things. If we solve idle
 transactions and not idle sessions, then they are able to increase
 max_connections to thousands with a happy smile on their face.

 I have no strong idea about how to solve it well - maybe introduce
 transaction_idle_timeout and session_idle_timeout?

>>>
>>> What exactly do we want to define session_idle_timeout?  Some
>>> possibilities:
>>> a. Reset the session related variables like transaction, prepared
>>> statements, etc. and retain it for connection pool kind of stuff
>>> b. Exit from the session
>>
>>
>> b is the safe state - and currently it is the only state that we can
>> forward to the client side (with keep_alive packets) - so I prefer b
>>
>
> Okay, I think one more point to consider is that it would be preferable to
> have such an option for backend sessions and not for other processes
> like WalSender.

All right...I see the usage..  I withdraw my objection to 'session'
prefix then now that I understand the case.  So, do you agree that:

*) session_idle_timeout: dumps the backend after X time in 'idle' state
and
 *) transaction_timeout: cancels transaction after X time, regardless of state

sounds good?

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Summary of Vienna sharding summit, new TODO item

2015-11-04 Thread Bruce Momjian
[BCC to pgsql-cluster-hack...@postgresql.org]

I have summarized the Vienna cluster summit meeting from last week by
adding italicized answers to the agenda questions, even though we didn't
follow the agenda ;-) :

https://wiki.postgresql.org/wiki/PG-EU_2015_Cluster_Summit

I would like to propose the following TODO item:

*  Enhance foreign data wrappers, parallelism, partitioning,
   and perhaps add a global snapshot/transaction manager to allow
   creation of a proof-of-concept built-in sharding solution

Ideally these enhancements and new facilities will be available to
external sharding solutions as well.

Is this acceptable?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Valgrind and shared_buffers (Was: Restore-reliability mode)

2015-11-04 Thread Alvaro Herrera
Peter Geoghegan wrote:
> On Mon, Jul 27, 2015 at 7:12 AM, Alvaro Herrera
>  wrote:
> > I only tried a few tests, for lack of time, and it didn't produce any.
> > (To verify that the whole thing was working properly, I reduced the
> > range of memory made available during PinBuffer and that resulted in a
> > crash immediately).  I am not really familiar with valgrind TBH and just
> > copied a recipe to run postmaster under it, so if someone with more
> > valgrind-fu could verify this, it would be great.
> >
> >
> > This part:
> >
> >> > > > Under CLOBBER_FREED_MEMORY, wipe a shared buffer when its
> >> > > > global pin count falls to zero.
> >
> > can be done without any valgrind, I think.  Any takers?
> 
> It seems like this didn't go anywhere. Any chance that you'll find the
> time to work on this soon?

No.  How about you?  If you or someone else with more valgrind
familiarity than me look it over and think it's correct, I can push it.

-- 
Álvaro Herrera             http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] RFC/WIP: adding new configuration options to TOAST

2015-11-04 Thread Bill Moran
On Wed, 4 Nov 2015 13:07:09 +0800
Craig Ringer  wrote:

> On 4 November 2015 at 10:58, Bill Moran  wrote:
> > On Tue, 3 Nov 2015 18:34:39 -0800
> > Jeff Janes  wrote:
> >
> >> On Tue, Nov 3, 2015 at 5:21 PM, Craig Ringer  wrote:
> >> > On 3 November 2015 at 23:04, Bill Moran  wrote:
> >> >>
> >> >> Looking for feedback to see if anyone sees any issues or has any
> >> >> suggestions on what I'm doing. The attached patch alters 3 things
> >> >> with regard to TOAST behavior:
> >> >
> >> > COMPRESSION_TEST_SIZE (2) seems useful.
> >> >
> >> > The other two mostly seem like options nobody's going to know are
> >> > there, or know how to sensibly set if they do notice them. What's the
> >> > driving reason behind those, the problem you're trying to solve? Why
> >> > make them configurable per-table (or at all)?
> >>
> >> I currently have a table with one column which has a median width of
> >> 500 bytes, a 90th percentile of 650 bytes, and makes up 75% of the
> >> table's size, and the column is rarely used, while the table itself is
> >> frequently seq scanned.  I'd very much like to drive that column out
> >> of main and into toast. I think target_tuple_size would let me do
> >> that.
> >
> > That's exactly the use case. As it currently stands, any tuple smaller
> > than about 2K will never be toasted. So if you have 1900 bytes of
> > highly compressible text that is infrequently queried from the table
> > whilst other columns are frequently accessed, there's no way to force
> > it to be out of line from the main table, or be compressed.
> 
> Ok, so that's the underlying issue to solve. Make smaller tuples
> TOASTable, especially when quite compressible.
> 
> Shouldn't that be column-level, really?
> 
> We have SET STORAGE at the moment. Would some sort of "FORCE" option
> to SET STORAGE EXTERNAL meet your needs? It'd potentially force small
> data out of line too, but that'd make sense for your rarely accessed
> use case.

I'm not discounting the potential value of column-level tunables. But
I don't feel that their potential value devalues table-level and
cluster-wide tunables.

As the code currently stands, TOASTing is completely skipped for
tuples smaller than 2k, which means that FORCE* storage types would
need to behave differently than other storage options. Not off the
table, of course, but it seemed unintuitive to me.
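
For comparison, this is the existing column-level knob being discussed here
(table and column names are made up):

    ALTER TABLE t ALTER COLUMN big_col SET STORAGE EXTERNAL; -- out of line, no compression
    ALTER TABLE t ALTER COLUMN big_col SET STORAGE EXTENDED; -- default: compress, then out of line

Note that neither setting has any effect until the tuple exceeds the ~2k
threshold, which is exactly the limitation under discussion.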

A lot of this was discussed previously in the threads linked here:
https://github.com/williammoran/postgres/blob/master/README

One important point is that it's not 100% clear that these tunables
are worthwhile (as you mention) but there's no way to be sure until
there is a prototype in place that can be used for testing ... which
is what this is all about (at this point, anyway). Getting to the
point where there are table-level tunables allows for convenient
testing (i.e. having two tables with different values and the same
data, and not having to restart/recreate them for each value to be
tested).

> > The two new configurables allow the DBA to make tradeoff decisions on
> > CPU usage vs. storage efficiency. Since the TOAST code attempts to
> > process the column that will provide the largest gain first, in your
> > described use case you could calculate the size of the other columns,
> > and set the target_tuple_size to just a bit larger than that, and
> > that large column should get moved into the toast table in most or
> > all cases (depending on how predictable the other sizes are)
> 
> I'm just concerned that this is another knob that 0.001% of the user
> base will know about and use correctly, 1% will use incorrectly based
> on some cargo-culted nonsense they pick up somewhere, and the rest
> will have no clue exists. We have more than a few of those already.

From my perspective, I've seen people misunderstand and misuse the
simplest of things, so I don't hold to the idea that just because its
a little complicated means it's a bad idea. Certainly, it needs to
be as understandable as possible. This page will certianly need a
significant rewrite:
http://www.postgresql.org/docs/9.5/static/storage-toast.html

To me, the important factor is whether these tunables can make
Postgres more valuable to a skilled administrator. If they can, then
the fact that a cargo-cult exists to misuse them is simply
entertainment to me. It ensures that companies like 2ndquadrant will
always have work fixing other people's mistakes ;)

> The way you describe using a GUC here makes it sound like what you
> really want is just a column storage option, and that twiddling a GUC
> like this is a workaround to try to make one column more aggressively
> compressed and moved out of line without affecting the others.

Not really. The fact that I've only listed one case doesn't mean it's
the only one. Perhaps I need to enumerate all the potential benefits
somewhere ... but when I first researched and proposed this project,
I didn't see anywhere near the level of disagreement that I'm seeing
now. I'll try to en

Re: [HACKERS] [patch] Proposal for \rotate in psql

2015-11-04 Thread Pavel Stehule
Hi

I am looking at this last patch. I talked about the name of this command
with more people, and the name "rotate" is unfortunate. The correct name for
this visualization technique is "crosstab" (see google "crosstab"). The
conflict with our extension is unfortunate, but using "rotate" is even worse
(see google "rotate"). The term "rotate" is used less often (in relation to
this topic), and usually by poorly informed people. Moreover, in the attached
doc the word "crosstab" is used pretty often, and then the word "rotate"
makes no sense.

The important question is the sorting of the output. The vertical header is
sorted by first appearance in the result. The horizontal header is sorted in
ascending or descending order. This is unfriendly for a common use case -
month names. This could be solved by a third parameter - a sort function.
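
For example, with the existing tablefunc extension the two-argument form of
crosstab() is how the server-side function handles exactly this ordering
problem (table and column names are made up):

    CREATE EXTENSION IF NOT EXISTS tablefunc;

    SELECT *
      FROM crosstab(
             $$ SELECT region, month, sales FROM monthly_sales ORDER BY 1 $$,
             $$ VALUES ('Jan'), ('Feb'), ('Mar') $$  -- explicit column order
           ) AS ct(region text, "Jan" int, "Feb" int, "Mar" int);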

Regards

Pavel


Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Nicolas Barbier
2015-11-04 Antonin Houska :

> While prefix expression
>
> y like 'abc%'
>
> can be converted to
>
> y >= 'abc'
>
> (see expand_indexqual_opclause()), I'm not sure any kind of expansion is
> possible for '%abc%' which would result in a b-tree searchable condition.

I think the question is not about using the b-tree for checking the
condition, but about just retrieving the value for y from the index,
and just using that to check the condition before fetching the
corresponding tuple from the heap.

Nicolas

-- 
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2015-11-04 Thread Andres Freund
On 2015-11-04 16:01:28 +0900, Michael Paquier wrote:
> On Wed, Nov 4, 2015 at 8:39 AM, Andres Freund  wrote:
> > On November 4, 2015 12:37:02 AM GMT+01:00, Michael Paquier wrote:
> >>On a completely idle system, I don't think we should log any standby
> >>records. This is what ~9.3 does.
> >
> > Are you sure? I think it'll around checkpoints, no? I thought Heikki had 
> > fixed that, but looking sound that doesn't seem to be the case.
> 
> Er, yes, sorry. I should have used clearer words: I meant an idle system
> with nothing running, including no internal checkpoints.

Uh, but you'll always have checkpoints happen on wal_level =
hot_standby, even in 9.3?  Maybe I'm not parsing your sentence right.

As soon as a single checkpoint ever happened the early-return logic in
CreateCheckPoint() will fail to take the LogStandbySnapshot() in
CreateCheckPoint() into account. The test is:
if (curInsert == ControlFile->checkPoint +
MAXALIGN(SizeOfXLogRecord + sizeof(CheckPoint)) &&
ControlFile->checkPoint == ControlFile->checkPointCopy.redo)
which obviously doesn't work if there's been a WAL record logged after
the redo pointer has been determined etc.

The reason that a single checkpoint is needed to "jumpstart" the
pointless checkpoints is that otherwise we'll never have issued a
LogStandbySnapshot() and thus the above code block works if we started
from a proper shutdown checkpoint.
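
A quick way to observe this on an otherwise idle cluster (function name as of
9.4/9.5):

    SELECT pg_current_xlog_insert_location();
    -- wait longer than checkpoint_timeout, then:
    SELECT pg_current_xlog_insert_location();
    -- if the LSN keeps advancing, checkpoint and standby-snapshot
    -- records are still being written despite the lack of activity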


Independent of the idle issue, it seems to me that the location of the
LogStandbySnapshot() is actually rather suboptimal - it really should
be before the CheckPointGuts(), not afterwards. The closer it is to
the redo pointer of the checkpoint a hot standby node starts up from,
the sooner that node can reach consistency.  There's no difference for
the first time a node starts from a basebackup (since we gotta replay
that checkpoint anyway before we're consistent), but if we start from a
restartpoint...

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Etsuro Fujita

On 2015/11/04 17:28, Kouhei Kaigai wrote:

I think we need to consider a general solution that can be applied not
only to the case where the component tables in a foreign join all use
ROW_MARK_COPY but to the case where those tables use different rowmark
types such as ROW_MARK_COPY and ROW_MARK_EXCLUSIVE, as I pointed out
upthread.



In the mixture case, FDW/CSP can choose local recheck & reconstruction based
on the EPQ tuples of the base relations. Nobody forces FDW/CSP to always
return a joined tuple if the author doesn't want to support the feature.
Why do you think it is not a generic solution? The FDW/CSP driver "can choose"
the best solution according to its implementation and capability.


It looked to me that you were discussing only the case where component 
foreign tables in a foreign join all use ROW_MARK_COPY, so I commented 
that.  Sorry for my misunderstanding.


Best regards,
Etsuro Fujita



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Etsuro Fujita

On 2015/11/04 17:10, Kouhei Kaigai wrote:

On 2015/10/28 6:04, Robert Haas wrote:

On Tue, Oct 20, 2015 at 12:39 PM, Etsuro Fujita
 wrote:

Sorry, my explanation was not correct.  (Needed to take in caffeine.) What
I'm concerned about is the following:

SELECT * FROM localtab JOIN (ft1 LEFT JOIN ft2 ON ft1.x = ft2.x) ON
localtab.id = ft1.id FOR UPDATE OF ft1

LockRows
-> Nested Loop
   Join Filter: (localtab.id = ft1.id)
   -> Seq Scan on localtab
   -> Foreign Scan on 
Remote SQL: SELECT * FROM ft1 LEFT JOIN ft2 WHERE ft1.x = ft2.x
FOR UPDATE OF ft1

Assume that ft1 performs late row locking.



If the SQL includes "FOR UPDATE of ft1", then it clearly performs
early row locking.  I assume you meant to omit that.



If an EPQ recheck was invoked
due to a concurrent transaction on the remote server that changed only the
value x of the ft1 tuple previously retrieved, then we would have to
generate a fake ft1/ft2-join tuple with nulls for ft2. (Assume that the ft2
tuple previously retrieved was not a null tuple.) However, I'm not sure how
we can do that in ForeignRecheck; we can't know for example, which one is
outer and which one is inner, without an alternative local join execution
plan.  Maybe I'm missing something, though.



I would expect it to issue a new query like: SELECT * FROM ft1 LEFT
JOIN ft2 WHERE ft1.x = ft2.x AND ft1.tid = $0 AND ft2.tid = $1.



We assume here that ft1 uses late row locking, so I thought the above
SQL should include "FOR UPDATE of ft1".  But I still don't think that
that is right; the SQL with "FOR UPDATE of ft1" wouldn't generate the
fake ft1/ft2-join tuple with nulls for ft2, as expected.  The reason for
that is that the updated version of the ft1 tuple wouldn't satisfy the
ft1.tid = $0 condition in an EPQ recheck, because the ctid for the
updated version of the ft1 tuple has changed.  (IIUC, I think that if we
use a TID scan for ft1, the SQL would generate the expected result,
because I think that the TID condition would be ignored in the EPQ
recheck, but I don't think it's guaranteed to use a TID scan for ft1.)
Maybe I'm missing something, though.



It looks to me like we should not use the ctid system column to identify the
remote row when postgres_fdw tries to support late row locking.

The documentation says:
   
http://www.postgresql.org/docs/devel/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE

   UPDATE and DELETE operations are performed against rows previously
   fetched by the table-scanning functions. The FDW may need extra information,
   such as a row ID or the values of primary-key columns, to ensure that it can
   identify the exact row to update or delete

The "rowid" should not be changed once it is fetched from the remote side
until it is actually updated, deleted or locked, for correct identification.
If ctid is used for this purpose, it is safe only when the remote row is locked
when it is fetched - that is exactly early row locking behavior, isn't it?


Yeah, we should use early row locking for a target foreign table in 
UPDATE/DELETE.


In case of SELECT FOR UPDATE, I think we are allowed to use ctid to 
identify target rows for late row locking, but I think the above SQL 
should be changed to something like this:


SELECT * FROM (SELECT * FROM ft1 WHERE ft1.tid = $0 FOR UPDATE) ss1 LEFT 
JOIN (SELECT * FROM ft2 WHERE ft2.tid = $1) ss2 ON ss1.x = ss2.x



This should be significantly more efficient than fetching the base
rows from each of two tables with two separate queries.



Maybe we could fix the SQL, I have to admit that, but I'm
just wondering (1) what would happen in the case when ft1 uses late row
locking and ft2 uses early row locking, and (2) whether that would still be
more efficient than re-fetching only the base row from ft1.



It should be the FDW driver's decision. It is hard to imagine a certain
FDW driver mixing up early and late locking policies within the same remote
join query. Do you really want to support such a mysterious implementation?


Yeah, the reason for that is that GetForeignRowMarkType allows that.


Or, do you expect all the FDW driver is enforced to return a joined tuple
if remote join case?


No.  That wouldn't make sense if at least one component table involved 
in a foreign join uses a rowmark type other than ROW_MARK_COPY.



It is different from my idea; it shall be an extra
optimization option if the FDW can fetch a joined tuple at once, but not always.
So, if the FDW driver does not support this optimal behavior, your driver can
fetch the two base tables and then run a local alternative join (or something else).


OK, so if we all agree that the joined-tuple optimization is just an 
option for the case where all the component tables use ROW_MARK_COPY, 
I'd propose to leave that for 9.6.


Best regards,
Etsuro Fujita



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: PL/Pythonu - function ereport

2015-11-04 Thread Pavel Stehule
Hi

2015-11-04 7:06 GMT+01:00 Catalin Iacob :

> Sorry, you're right, I didn't notice the x = plpy.SPIError() test.
>
> I did notice that you included the kw != NULL, I was explaining why it
> really is needed even though it *seems* the code also works without
> it.

It helped me a lot, thank you.


>
> There's just the doc part left then.
>

done

Regards

Pavel
diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml
new file mode 100644
index 015bbad..10aeb35
*** a/doc/src/sgml/plpython.sgml
--- b/doc/src/sgml/plpython.sgml
*** $$ LANGUAGE plpythonu;
*** 1205,1210 
--- 1205,1228 
  approximately the same functionality
 

+ 
+   
+Raising Errors
+ 
+
+ A plpy.SPIError can be raised from PL/Python, the constructor accepts
+ keyword parameters:
+ plpy.SPIError([ message [, detail [, hint [, sqlstate  [, schema  [, table  [, column  [, datatype  [, constraint ]).
+
+
+ An example of raising a custom exception could be written as:
+ 
+ DO $$
+   raise plpy.SPIError('custom message', hint = 'It is test only');
+ $$ LANGUAGE plpythonu;
+ 
+
+   
   
  
   
diff --git a/src/pl/plpython/expected/plpython_error.out b/src/pl/plpython/expected/plpython_error.out
new file mode 100644
index 1f52af7..435a5c2
*** a/src/pl/plpython/expected/plpython_error.out
--- b/src/pl/plpython/expected/plpython_error.out
*** EXCEPTION WHEN SQLSTATE 'SILLY' THEN
*** 422,424 
--- 422,486 
  	-- NOOP
  END
  $$ LANGUAGE plpgsql;
+ /* the possibility to set fields of custom exception
+  */
+ DO $$
+ raise plpy.SPIError('This is message text.',
+ detail = 'This is detail text',
+ hint = 'This is hint text.')
+ $$ LANGUAGE plpythonu;
+ ERROR:  plpy.SPIError: This is message text.
+ DETAIL:  This is detail text
+ HINT:  This is hint text.
+ CONTEXT:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 4, in 
+ hint = 'This is hint text.')
+ PL/Python anonymous code block
+ \set VERBOSITY verbose
+ DO $$
+ raise plpy.SPIError('This is message text.',
+ detail = 'This is detail text',
+ hint = 'This is hint text.',
+ sqlstate = 'SILLY',
+ schema = 'any info about schema',
+ table = 'any info about table',
+ column = 'any info about column',
+ datatype = 'any info about datatype',
+ constraint = 'any info about constraint')
+ $$ LANGUAGE plpythonu;
+ ERROR:  SILLY: plpy.SPIError: This is message text.
+ DETAIL:  This is detail text
+ HINT:  This is hint text.
+ CONTEXT:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 10, in 
+ constraint = 'any info about constraint')
+ PL/Python anonymous code block
+ SCHEMA NAME:  any info about schema
+ TABLE NAME:  any info about table
+ COLUMN NAME:  any info about column
+ DATATYPE NAME:  any info about datatype
+ CONSTRAINT NAME:  any info about constraint
+ LOCATION:  PLy_elog, plpy_elog.c:122
+ \set VERBOSITY default
+ DO $$
+ raise plpy.SPIError(detail = 'This is detail text')
+ $$ LANGUAGE plpythonu;
+ ERROR:  plpy.SPIError: 
+ DETAIL:  This is detail text
+ CONTEXT:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 2, in 
+ raise plpy.SPIError(detail = 'This is detail text')
+ PL/Python anonymous code block
+ DO $$
+ raise plpy.SPIError();
+ $$ LANGUAGE plpythonu;
+ ERROR:  plpy.SPIError: 
+ CONTEXT:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 2, in 
+ raise plpy.SPIError();
+ PL/Python anonymous code block
+ DO $$
+ raise plpy.SPIError(sqlstate = 'wrong sql state');
+ $$ LANGUAGE plpythonu;
+ ERROR:  could not create SPIError object (invalid SQLSTATE code)
+ CONTEXT:  PL/Python anonymous code block
diff --git a/src/pl/plpython/plpy_elog.c b/src/pl/plpython/plpy_elog.c
new file mode 100644
index 15406d6..a835af9
*** a/src/pl/plpython/plpy_elog.c
--- b/src/pl/plpython/plpy_elog.c
*** PyObject   *PLy_exc_spi_error = NULL;
*** 23,29 
  
  static void PLy_traceback(char **xmsg, char **tbmsg, int *tb_depth);
  static void PLy_get_spi_error_data(PyObject *exc, int *sqlerrcode, char **detail,
! 	   char **hint, char **query, int *position);
  static char *get_source_line(const char *src, int lineno);
  
  
--- 23,32 
  
  static void PLy_traceback(char **xmsg, char **tbmsg, int *tb_depth);
  static void PLy_get_spi_error_data(PyObject *exc, int *sqlerrcode, char **detail,
! 	   char **hint, char **query, int *position,
! 	   char **schema_name, char **table_name, char **column_name,
! 	   char **datatype_name, char **constraint_name);
! 
  static char *get_source_line(const char *src, int lineno);
  
  
*** PLy_elog(int elevel, const char *fmt,...
*** 51,62 
  	char	   *hint = NULL;
  	char	   *query = NULL;
  	int			position

Re: [HACKERS] Bitmap index scans use of filters on available columns

2015-11-04 Thread Antonin Houska
While prefix expression

y like 'abc%'

can be converted to

y >= 'abc'

(see expand_indexqual_opclause()), I'm not sure any kind of expansion is
possible for '%abc%' which would result in a b-tree searchable condition.
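
To illustrate the prefix case (a sketch assuming a made-up index f_y_idx on y
built with text_pattern_ops, or a C-locale database):

    EXPLAIN SELECT * FROM f WHERE y LIKE 'abc%';
    --  Index Scan using f_y_idx on f
    --    Index Cond: ((y >= 'abc'::text) AND (y < 'abd'::text))
    --    Filter: (y ~~ 'abc%'::text)

No such range rewrite exists for a pattern with a leading wildcard.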

Jeff Janes  wrote:

> Is there a fundamental reason the filter on y is not being applied to
> the index scan, rather than the heap scan?

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extend pgbench expressions with functions

2015-11-04 Thread Fabien COELHO


Hello Kyotaro-san,

[...] I understood the situation and agreed with the current shape of the 
code. I no longer object to the alternative-calling code. But I'd like to 
see the abbreviated discussion in a comment on the function.


Indeed, this design choice deserves much better explanations.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extend pgbench expressions with functions

2015-11-04 Thread Fabien COELHO


Hello Robert,


1. I think there should really be two patches here, the first adding
functions, and the second adding doubles.  Those seem like separate
changes.  And offhand, the double stuff looks a lot less useful that
the function call syntax.


I first submitted the infrastructure part, but I was asked to show how 
more functions could be included, especially random variants. As random 
gaussian/exponential functions require a double argument, there must be 
some support for double values.


Now, it could certainly be split into two patches, but this is rather 
artificial, IMO.



2. ddebug and idebug seem like a lousy idea to me.


It was really useful to me for debugging and testing.

If we really need to be able to print stuff from pgbench, which I kind 
of doubt, then maybe we should have a string data type and a print() 
function, or maybe a string data type and a toplevel \echo command.


The *debug functions allow intercepting the value computed within an 
expression. If you rely on variables and some echo (which does not exist), 
this means that there should be double variables as well, which is not 
currently the case, and which I do not see as useful for the kind of 
script written for pgbench. Adding the string type is more work, and 
I do not see a good use case for those.

So the *debug functions are really just a lightweight solution for 
debugging type-related issues in expressions. I can drop them if this is a 
blocker, but they are really useful for quickly testing a script.
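
For instance, a hypothetical one-liner (assuming idebug prints the
intermediate integer value and passes it through, as described above):

    \set n idebug(random_gaussian(1, 10, 2.5))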



3. I'm perplexed by why you've replaced evaluateExpr() with evalInt()
and evalDouble().


As explained above in the thread (I think), the reason is that having one 
overloaded expression evaluation which handles type conversions would 
produce pretty heavy code, and the two functions with the descending 
typing allow much smaller code with the same effect.


The issue is that with two types all functions must handle argument type 
conversion explicitly.


For instance for "random_gaussian(int, int, double)", it may be called 
with any combination of 3 int/double arguments, each one must be tested 
and possibly converted to the target type before calling the actual 
function. For overloaded operators or functions (arithmetics, abs...) 
there is also the decision about which operator is called and then what 
conversions are necessary.


With the descending typing and the two cross-recursive functions, all these 
explicit tests and conversions disappear, because the function evaluation 
calls evalInt or evalDouble depending on the expected type.

Basically, the code is significantly shorter and more elegant with this option.

That doesn't seem similar to what I've seen in other 
expression-evaluation engines.


Probably. This is because I chose descending typing to simplify the 
implementation. Changing this would bring no real practical benefit from 
the usage point of view, but would add significantly more verbose and ugly 
code to test and handle type conversions everywhere, so I'm not keen to do 
that.


Perhaps I could find out by reading the comments, but actually not, 
because this entire patch seems to add only one comment:


+   /* reset column count for this scan */


There are a few others, really:-)


While I'm not a fan of excessive commenting, I think a little more
explanation here would be good.


I can certainly add more comments to the code, especially around the eval 
cross recursion functions.



4. The table of functions in pgbench.sgml seems to leave something to
be desired.  We added a pretty detailed write-up on the Gaussian and
exponential options to \setrandom, but exporand() has only this
description:


Yep. The idea was *not* to replicate the (painful) explanations about 
random functions, but that it should be shared between the function and 
the \set variants.



+  
+   exporand(i,
j, t)
+   integer
+   exponentially distributed random integer in the bounds,
see below
+   exporand(1, 10, 3.0)
+   int between 1 and 10
+  

That's not very helpful.


The table explanation must be kept short for the table format...

Without looking at the example, there's no way to guess what i and j 
mean, and even with looking at the example, there's no way to guess what 
t means.  If, as I'm guessing, exporand() and guassrand() behave like 
\setrandom with the exponential and/or Gaussian options, then the 
documentation for one of those things should contain all of the detailed 
information and the documentation for the other should refer to it.


Indeed, that was the idea, but it seems that I forgot the pointer:-)

More than likely, exporand() and gaussrand() should get the detailed 
explanation, and \setrandom should be document as a deprecated 
alternative to \set ... {gauss,expo,}rand(...)


Ok, the description can be moved to the function part and the \set 
version can reference it.



5. I liked Heikki's proposed function names random_gaussian(min, max,
thresh

Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Kouhei Kaigai
> -Original Message-
> From: Etsuro Fujita [mailto:fujita.ets...@lab.ntt.co.jp]
> Sent: Wednesday, November 04, 2015 5:11 PM
> To: Kaigai Kouhei(海外 浩平); Robert Haas
> Cc: Tom Lane; Kyotaro HORIGUCHI; pgsql-hackers@postgresql.org; Shigeru Hanada
> Subject: Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
> 
> On 2015/11/03 22:15, Kouhei Kaigai wrote:
> > A challenge is that junk wholerow references on behalf of ROW_MARK_COPY
> > are injected by preprocess_targetlist(). It is earlier than the main path
> > consideration by query_planner(), thus, it is not predictable how remote
> > query shall be executed at this point.
> > If ROW_MARK_COPY, base tuple image is fetched using this junk attribute.
> > So, here is two options if we allow to put joined tuple on either of
> > es_epqTuple[].
> >
> > options-1) We ignore record type definition. FDW returns a joined tuple
> > towards the whole-row reference of either of the base relations in this
> > join. The junk attribute shall be filtered out eventually and only FDW
> > driver shall see, so it is harmless to do (probably).
> > This option takes no big changes; however, we need to be a little brave to adopt it.
> >
> > options-2) We allow FDW/CSP to adjust target-list of the relevant nodes
> > after these paths get chosen by planner. It enables to remove whole-row
> > reference of base relations and add alternative whole-row reference instead
> > if FDW/CSP can support it.
> > This feature can be relevant to target-list push-down to the remote side,
> > not only EPQ rechecks, because adjustment of the target-list means we allow
> > FDW/CSP to determine which expression shall be executed locally, or shall
> > not be.
> > I think, this option is more straightforward, however, needs a little bit
> > deeper consideration, because we have to design the best hook point and
> > need to ensure how path-ification will perform.
> >
> > Therefore, I think we need two steps towards the entire solution.
> > Step-1) FDW/CSP will recheck base EPQ tuples and support local
> > reconstruction on the fly. It does not need any special
> > enhancement of the planner - so we can fix it up by the v9.5 release.
> > Step-2) FDW/CSP will support adjustment of target-list to add whole-row
> > reference of joined tuple instead of multiple base relations, then FDW/CSP
> > will be able to put a joined tuple on either of EPQ slot if it wants - it
> > takes a new feature enhancement, so v9.6 is a suitable timeline.
> >
> > How about your opinion towards the direction?
> > I don't want to drop an extra optimization opportunity; however, we are now in
> > November. I don't have enough courage to add a non-obvious new feature here.
> 
> I think we need to consider a general solution that can be applied not
> only to the case where the component tables in a foreign join all use
> ROW_MARK_COPY but to the case where those tables use different rowmark
> types such as ROW_MARK_COPY and ROW_MARK_EXCLUSIVE, as I pointed out
> upthread.
>
In the mixture case, FDW/CSP can choose local recheck & reconstruction based
on the EPQ tuples of the base relations. Nobody forces FDW/CSP to always
return a joined tuple if the author doesn't want to support the feature.
Why do you think it is not a generic solution? The FDW/CSP driver "can choose"
the best solution according to its implementation and capability.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Some bugs in psql_complete of psql

2015-11-04 Thread Kyotaro HORIGUCHI
Hello, I found a typo(?) in tab-complete.c.

> /* ALTER TABLE,INDEX,MATERIALIZED VIEW xxx ALL IN TABLESPACE xxx OWNED BY */
> else if (pg_strcasecmp(prev6_wd, "ALL") == 0 &&
>pg_strcasecmp(prev5_wd, "IN") == 0 &&
>pg_strcasecmp(prev4_wd, "TABLESPACE") == 0 &&
>pg_strcasecmp(prev2_wd, "OWNED") == 0 &&
>pg_strcasecmp(prev4_wd, "BY") == 0)

"BY" is compared against the word in wrong position and it
prevents this completion from matching.

I also found some other bugs in psql completion. The attached
patch applies on master and fixes them all together.

- Fix completion for ALL IN TABLESPACE OWNED BY.

- Fix the assumed syntax of CREATE INDEX where the CONCURRENTLY
  is misplaced. (It is assuming the order "CREATE INDEX sth
  CONCURRENTLY")

- Provide missing terminating NULL to the preposition list for
  SECURITY LABEL.

- Add some missing items to the preposition list. However, this
  is not exactly a bug, in contrast to the three items
  above. This applies back to 9.3; 9.2 doesn't have
  "EVENT TRIGGER" and "MATERIALIZED VIEW", and 9.1 additionally
  doesn't have "DATABASE" and "ROLE". I'll provide individual
  patches if necessary.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From 36096e1afdda3f54800c2c73fe4f058a3eef25ef Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Mon, 2 Nov 2015 14:10:53 +0900
Subject: [PATCH] Fix some tab-completino bugs.

  - 'ALTER [TABLE|xx] xxx ALL IN TABLE SPACE xx OWNED BY' cannot be completed.
  - CONCURRENTLY in CREATE INDEX is misplaced.
  - Preposition list of SECURITY LABEL is not terminated.
  - The target list of SECURITY LABEL ON misses some alternatives.
---
 src/bin/psql/tab-complete.c | 54 ++---
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 0e8d395..912811d 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1040,7 +1040,7 @@ psql_completion(const char *text, int start, int end)
 			 pg_strcasecmp(prev5_wd, "IN") == 0 &&
 			 pg_strcasecmp(prev4_wd, "TABLESPACE") == 0 &&
 			 pg_strcasecmp(prev2_wd, "OWNED") == 0 &&
-			 pg_strcasecmp(prev4_wd, "BY") == 0)
+			 pg_strcasecmp(prev_wd, "BY") == 0)
 	{
 		COMPLETE_WITH_QUERY(Query_for_list_of_roles);
 	}
@@ -2320,35 +2320,34 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes,
    " UNION SELECT 'ON'"
    " UNION SELECT 'CONCURRENTLY'");
+	/* If we have INDEX CONCURRENTLY , then add exiting indexes */
+	else if (pg_strcasecmp(prev2_wd, "INDEX") == 0 &&
+			 pg_strcasecmp(prev_wd, "CONCURRENTLY") == 0)
+			 COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes, NULL);
 	/* Complete ... INDEX [] ON with a list of tables  */
 	else if ((pg_strcasecmp(prev3_wd, "INDEX") == 0 ||
-			  pg_strcasecmp(prev2_wd, "INDEX") == 0 ||
-			  pg_strcasecmp(prev2_wd, "CONCURRENTLY") == 0) &&
+			  (pg_strcasecmp(prev4_wd, "INDEX") == 0 &&
+			   pg_strcasecmp(prev3_wd, "CONCURRENTLY") == 0)) &&
 			 pg_strcasecmp(prev_wd, "ON") == 0)
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tm, NULL);
-	/* If we have CREATE|UNIQUE INDEX  CONCURRENTLY, then add "ON" */
-	else if ((pg_strcasecmp(prev3_wd, "INDEX") == 0 ||
-			  pg_strcasecmp(prev2_wd, "INDEX") == 0) &&
-			 pg_strcasecmp(prev_wd, "CONCURRENTLY") == 0)
+	/* If we have CREATE|UNIQUE INDEX [CONCURRENTLY] , then add "ON" */
+	else if (((pg_strcasecmp(prev3_wd, "CREATE") == 0 ||
+			   pg_strcasecmp(prev3_wd, "UNIQUE") == 0) &&
+			  pg_strcasecmp(prev2_wd, "INDEX") == 0 &&
+			  pg_strcasecmp(prev_wd, "CONCURRENTLY") != 0) ||
+			 ((pg_strcasecmp(prev4_wd, "CREATE") == 0 ||
+			   pg_strcasecmp(prev4_wd, "UNIQUE") == 0) &&
+			  pg_strcasecmp(prev3_wd, "INDEX") == 0 &&
+			  pg_strcasecmp(prev2_wd, "CONCURRENTLY") == 0))
 		COMPLETE_WITH_CONST("ON");
-	/* If we have CREATE|UNIQUE INDEX , then add "ON" or "CONCURRENTLY" */
-	else if ((pg_strcasecmp(prev3_wd, "CREATE") == 0 ||
-			  pg_strcasecmp(prev3_wd, "UNIQUE") == 0) &&
-			 pg_strcasecmp(prev2_wd, "INDEX") == 0)
-	{
-		static const char *const list_CREATE_INDEX[] =
-		{"CONCURRENTLY", "ON", NULL};
-
-		COMPLETE_WITH_LIST(list_CREATE_INDEX);
-	}
 
 	/*
-	 * Complete INDEX  ON  with a list of table columns (which
-	 * should really be in parens)
+	 * Complete INDEX [CONCURRENTLY]  ON  with a list of table
+	 * columns (which should really be in parens)
 	 */
 	else if ((pg_strcasecmp(prev4_wd, "INDEX") == 0 ||
-			  pg_strcasecmp(prev3_wd, "INDEX") == 0 ||
-			  pg_strcasecmp(prev3_wd, "CONCURRENTLY") == 0) &&
+			  (pg_strcasecmp(prev5_wd, "INDEX") == 0 &&
+			   pg_strcasecmp(prev4_wd, "CONCURRENTLY") == 0)) &&
 			 pg_strcasecmp(prev2_wd, "ON") == 0)
 	{
 		static const char *const list_CREATE_INDEX2[] =
@@ -2357,8 +2356,8 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_L

Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Etsuro Fujita

On 2015/11/03 22:15, Kouhei Kaigai wrote:

A challenge is that junk wholerow references on behalf of ROW_MARK_COPY
are injected by preprocess_targetlist(). It is earlier than the main path
consideration by query_planner(), thus, it is not predictable how remote
query shall be executed at this point.
If ROW_MARK_COPY, base tuple image is fetched using this junk attribute.
So, here is two options if we allow to put joined tuple on either of
es_epqTuple[].

options-1) We ignore record type definition. FDW returns a joined tuple
towards the whole-row reference of either of the base relations in this
join. The junk attribute shall be filtered out eventually and only FDW
driver shall see, so it is harmless to do (probably).
This option takes no big changes; however, we need to be a little brave to adopt it.

options-2) We allow FDW/CSP to adjust target-list of the relevant nodes
after these paths get chosen by planner. It enables to remove whole-row
reference of base relations and add alternative whole-row reference instead
if FDW/CSP can support it.
This feature can be relevant to target-list push-down to the remote side,
not only EPQ rechecks, because adjustment of the target-list means we allow
FDW/CSP to determine which expression shall be executed locally, or shall
not be.
I think, this option is more straightforward, however, needs a little bit
deeper consideration, because we have to design the best hook point and
need to ensure how path-ification will perform.

Therefore, I think we need two steps towards the entire solution.
Step-1) FDW/CSP will recheck base EPQ tuples and support local
reconstruction on the fly. It does not need any special
enhancement of the planner - so we can fix it up by the v9.5 release.
Step-2) FDW/CSP will support adjustment of target-list to add whole-row
reference of joined tuple instead of multiple base relations, then FDW/CSP
will be able to put a joined tuple on either of EPQ slot if it wants - it
takes a new feature enhancement, so v9.6 is a suitable timeline.

How about your opinion towards the direction?
I don't want to drop extra optimization opportunity, however, we are now in
November. I don't have enough brave to add none-obvious new feature here.


I think we need to consider a general solution that can be applied not
only to the case where the component tables in a foreign join all use
ROW_MARK_COPY, but also to the case where those tables use different
rowmark types, such as ROW_MARK_COPY and ROW_MARK_EXCLUSIVE, as I pointed
out upthread.
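
As a hypothetical illustration of that point (an editorial sketch, not
from the thread): depending on what each table's FDW returns from
GetForeignRowMarkType, a single foreign join can mix rowmark types, so a
reconstructed joined EPQ tuple would have to combine slots with different
semantics:

    SELECT *
      FROM ft1 JOIN ft2 ON ft1.x = ft2.x
       FOR UPDATE OF ft1;
    -- ft1: e.g. ROW_MARK_EXCLUSIVE, if its FDW chooses late locking
    --      (the row is re-fetched via RefetchForeignRow at lock time)
    -- ft2: ROW_MARK_COPY (a whole-row image saved when first scanned)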


Best regards,
Etsuro Fujita



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Foreign join pushdown vs EvalPlanQual

2015-11-04 Thread Kouhei Kaigai
> On 2015/10/28 6:04, Robert Haas wrote:
> > On Tue, Oct 20, 2015 at 12:39 PM, Etsuro Fujita
> >  wrote:
> >> Sorry, my explanation was not correct.  (Needed to take in caffeine.) What
> >> I'm concerned about is the following:
> >>
> >> SELECT * FROM localtab JOIN (ft1 LEFT JOIN ft2 ON ft1.x = ft2.x) ON
> >> localtab.id = ft1.id FOR UPDATE OF ft1
> >>
> >> LockRows
> >> -> Nested Loop
> >>   Join Filter: (localtab.id = ft1.id)
> >>   -> Seq Scan on localtab
> >>   -> Foreign Scan on 
> >>Remote SQL: SELECT * FROM ft1 LEFT JOIN ft2 ON ft1.x = ft2.x
> >> FOR UPDATE OF ft1
> >>
> >> Assume that ft1 performs late row locking.
> 
> > If the SQL includes "FOR UPDATE of ft1", then it clearly performs
> > early row locking.  I assume you meant to omit that.
> 
> Right.  Sorry for my mistake.
> 
> >> If an EPQ recheck was invoked
> >> due to a concurrent transaction on the remote server that changed only the
> >> value x of the ft1 tuple previously retrieved, then we would have to
> >> generate a fake ft1/ft2-join tuple with nulls for ft2. (Assume that the ft2
> >> tuple previously retrieved was not a null tuple.) However, I'm not sure how
> >> we can do that in ForeignRecheck; we can't know for example, which one is
> >> outer and which one is inner, without an alternative local join execution
> >> plan.  Maybe I'm missing something, though.
> 
> > I would expect it to issue a new query like: SELECT * FROM ft1 LEFT
> > JOIN ft2 ON ft1.x = ft2.x WHERE ft1.tid = $0 AND ft2.tid = $1.
> 
> We assume here that ft1 uses late row locking, so I thought the above
> SQL should include "FOR UPDATE of ft1".  But I still don't think that
> that is right; the SQL with "FOR UPDATE of ft1" wouldn't generate the
> fake ft1/ft2-join tuple with nulls for ft2, as expected.  The reason for
> that is that the updated version of the ft1 tuple wouldn't satisfy the
> ft1.tid = $0 condition in an EPQ recheck, because the ctid for the
> updated version of the ft1 tuple has changed.  (IIUC, I think that if we
> use a TID scan for ft1, the SQL would generate the expected result,
> because I think that the TID condition would be ignored in the EPQ
> recheck, but I don't think it's guaranteed to use a TID scan for ft1.)
> Maybe I'm missing something, though.
>
It looks to me like we should not use the ctid system column to identify
the remote row if postgres_fdw tries to support late row locking.

The documentation says:
  http://www.postgresql.org/docs/devel/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE

  UPDATE and DELETE operations are performed against rows previously
  fetched by the table-scanning functions. The FDW may need extra information,
  such as a row ID or the values of primary-key columns, to ensure that it can
  identify the exact row to update or delete

The "rowid" should not be changed once it is fetched from the remote side
until it is actually updated, deleted or locked, for correct identification.
If ctid is used for this purpose, it is safe only when remote row is locked
when it is fetched - it is exactly early row locking behavior, isn't it?
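
To illustrate (an editorial sketch with made-up values): suppose the row
was fetched with ctid '(0,1)' and a concurrent transaction then updates
it. A late-locking re-fetch by ctid locks nothing, because the qual is
rechecked against the new row version, whose ctid differs; a stable
identifier such as a primary key keeps working:

    -- finds tuple (0,1), follows the update chain, but the new version
    -- no longer satisfies ctid = '(0,1)': no row is returned or locked
    SELECT * FROM remote_tab WHERE ctid = '(0,1)' FOR UPDATE;

    -- under READ COMMITTED this follows the update chain and locks the
    -- current row version, as long as pk itself was not changed
    SELECT * FROM remote_tab WHERE pk = 42 FOR UPDATE;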

> > This should be significantly more efficient than fetching the base
> > rows from each of two tables with two separate queries.
> 
> Maybe we could fix the SQL, so I have to admit that, but I'm just
> wondering (1) what would happen in the case where ft1 uses late row
> locking and ft2 uses early row locking, and (2) whether that would still
> be more efficient than re-fetching only the base row from ft1.
>
That should be the FDW driver's decision. It is hard to imagine a single
FDW driver mixing early and late locking policies within the same remote
join query. Do you really want to support such a mysterious implementation?

Or do you expect every FDW driver to be forced to return a joined tuple
in the remote join case? That is different from my idea: fetching a
joined tuple at once should be an extra optimization option for FDWs that
can do it, not a requirement. If an FDW driver does not support this
optimal behavior, it can fetch the two base tables and then run a local
alternative join (or something else).
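
(An editorial sketch of that fallback, not code from the thread: it
assumes a per-scan recheck callback of roughly this shape, with the
alternative local join attached as the ForeignScan's outer subplan; all
names are illustrative.)

#include "postgres.h"
#include "executor/executor.h"

static bool
exampleRecheckForeignScan(ForeignScanState *node, TupleTableSlot *slot)
{
	/* hypothetical: the locally planned alternative join */
	PlanState  *outerPlan = outerPlanState(node);
	TupleTableSlot *result;

	/* Re-run the local join; its leaf scans read the base EPQ tuples. */
	ExecReScan(outerPlan);
	result = ExecProcNode(outerPlan);
	if (TupIsNull(result))
		return false;			/* joined tuple no longer qualifies */

	/* Hand the reconstructed joined tuple back in the caller's slot. */
	ExecCopySlot(slot, result);
	return true;
}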

> What I thought would improve efficiency in the secondary-plan approach
> that I proposed was that if we could parallelize the re-fetching of
> foreign rows in ExecLockRows and EvalPlanQualFetchRowMarks, we would be
> able to improve efficiency not only when a join of foreign tables is
> performed remotely but also when the join is performed locally.
>
Parallelism is not a magic bullet...

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers