Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Daniel Gustafsson
> On 12 Sep 2017, at 23:00, Tom Lane  wrote:
> 
> Chapman Flack  writes:
>> On 09/12/2017 03:41 PM, Tom Lane wrote:
>>> So the conclusion at the end of the last commitfest was that this patch
>>> should be marked Returned With Feedback, and no new work appears to have
>>> been done on it since then.  Why is it in this fest at all?  There
>>> certainly doesn't seem to be any reason to review it again.
> 
>> I'm not sure how to read the history of the CF entry. Could it
>> have rolled over to 2017-09 by default if its status was simply
>> never changed to Returned with Feedback as intended in the last
>> one? The history doesn't seem to show anything since 2017-03-19.
> 
> Maybe, or whoever was closing out the last CF didn't notice Andres'
> recommendation to mark it RWF.

It doesn’t seem to have been moved to this CF but was actually created here in
the first place.  Reading this thread it seems like there is clear consensus on
the status though, so changing to RWF.

cheers ./daniel



Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Chapman Flack
On 09/12/17 17:00, Tom Lane wrote:

> I did not see any reason given in the thread why we should need that.
> If you want to accumulate tuples ten at a time before you do something
> with them, you can do that now, by calling ExecutorRun with count=10.

Ah, that sounds easy enough. I'll withdraw the more-complicated suggestion.

-Chap




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Tom Lane
Chapman Flack  writes:
> On 09/12/2017 03:41 PM, Tom Lane wrote:
>> So the conclusion at the end of the last commitfest was that this patch
>> should be marked Returned With Feedback, and no new work appears to have
>> been done on it since then.  Why is it in this fest at all?  There
>> certainly doesn't seem to be any reason to review it again.

> I'm not sure how to read the history of the CF entry. Could it
> have rolled over to 2017-09 by default if its status was simply
> never changed to Returned with Feedback as intended in the last
> one? The history doesn't seem to show anything since 2017-03-19.

Maybe, or whoever was closing out the last CF didn't notice Andres'
recommendation to mark it RWF.

> I would still advocate for a fast-callback/slow-callback distinction,
> as in
> https://www.postgresql.org/message-id/59813946.40508%40anastigmatix.net
> if that does not seem overcomplicated to more experienced hands.

I did not see any reason given in the thread why we should need that.
If you want to accumulate tuples ten at a time before you do something
with them, you can do that now, by calling ExecutorRun with count=10.
(plpgsql does something much like that IIRC.)  The only reason not to
just use count=1 is that ExecutorRun and ExecutePlan have accumulated
assorted startup/shutdown cruft on the assumption that their runtime
didn't particularly matter.  It still doesn't look that awful, but
it might be noticeable.
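
[Illustrative sketch only, not from any patch in this thread: a batching loop
built on those executor entry points. The "batch" receiver and its ntuples
counter are invented, and the four-argument ExecutorRun() shown is the v10
signature.]

	ExecutorStart(queryDesc, 0);
	for (;;)
	{
		batch->ntuples = 0;			/* our DestReceiver counts what it gets */
		ExecutorRun(queryDesc, ForwardScanDirection, 10, false);
		if (batch->ntuples == 0)
			break;					/* executor is exhausted */
		do_slow_batch_work(batch);	/* e.g. hand the ten rows to a PL */
	}
	ExecutorFinish(queryDesc);
	ExecutorEnd(queryDesc);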

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Chapman Flack
On 09/12/2017 03:41 PM, Tom Lane wrote:
> So the conclusion at the end of the last commitfest was that this patch
> should be marked Returned With Feedback, and no new work appears to have
> been done on it since then.  Why is it in this fest at all?  There
> certainly doesn't seem to be any reason to review it again.

I'm not sure how to read the history of the CF entry. Could it
have rolled over to 2017-09 by default if its status was simply
never changed to Returned with Feedback as intended in the last
one? The history doesn't seem to show anything since 2017-03-19.

I would still advocate for a fast-callback/slow-callback distinction,
as in
https://www.postgresql.org/message-id/59813946.40508%40anastigmatix.net
if that does not seem overcomplicated to more experienced hands.

-Chap




Re: [HACKERS] Faster methods for getting SPI results

2017-09-12 Thread Tom Lane
So the conclusion at the end of the last commitfest was that this patch
should be marked Returned With Feedback, and no new work appears to have
been done on it since then.  Why is it in this fest at all?  There
certainly doesn't seem to be any reason to review it again.

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results

2017-08-01 Thread Chapman Flack
On 12/20/16 23:14, Jim Nasby wrote:
> I'm guessing one issue might be that
> we don't want to call an external interpreter while potentially holding page
> pins, but even then couldn't we just copy a single tuple at a time and save
> a huge amount of palloc overhead?

On 04/06/17 03:38, Craig Ringer wrote:
> Also, what rules apply in terms of what you can/cannot do from within
> a callback? Presumably it's unsafe to perform additional SPI calls,
> perform transactions, call into the executor, change the current
> snapshot, etc, but I would consider that reasonably obvious. Are there
> any specific things to avoid?


Confessing, right up front, that I'm not very familiar with the executor
or DestReceiver code, but thinking of issues that might be expected with
PLs, I wonder if there could be a design where the per-tuple callback
could sometimes return a status HAVE_SLOW_STUFF_TO_DO.

If it does, the executor could release some pins or locks, stack some
state, whatever allows it to (as far as practicable) relax restrictions
on what the callback would be allowed to do, then reinvoke the callback,
not with another tuple, but with OK_GO_DO_YOUR_SLOW_STUFF.

On return from that call, the executor could reacquire its stacked
state/locks/pins and resume generating tuples.

That way, a callback could, say, return normally 9 out of 10 times, just
quickly buffering up 10 tuples, and every 10th time return SLOW_STUFF_TO_DO
and get a chance to jump into the PL interpreter and deal with those 10 ...
(a) minimizing the restrictions on what the PL routine may do, and (b)
allowing any costs of state-stacking/lock-releasing-reacquiring, and control
transfer to the interpreter, to be amortized over some number of tuples.
How many tuples that should be might be an empirical question for any given
PL, but with a protocol like this, the callback has an easy way to control
it.

Or would that be overcomplicated?

-Chap




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Jim Nasby

On 4/6/17 9:21 PM, Andres Freund wrote:

Personally I'm way more excited about what a SPI feature like this
could do for plpgsql than about what it can do for plpython.  If the
latter is what floats your boat, that's fine; but I want a feature
that we can build on for other uses, not a hack that we know we need
to redesign next month.


Yeah, I thought about plpgsql and I can't see any way to make that work 
through an SPI callback (perhaps just due to my ignorance on things C). 
I suspect what plpgsql actually wants is a way to tell SPI to start the 
executor up, a function that pulls individual tuples out of the 
executor, and then a function to shut the executor down.



Dislike of the proposed implementation, alternative proposals, and the
refutation of the "absolutely no way to do more without breaking plpy"
argument leads me to conclude that this should be returned with
feedback.


Agreed.
--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Andres Freund
On 2017-04-07 00:11:59 -0400, Tom Lane wrote:
> Jim Nasby  writes:
> > On 4/6/17 8:13 PM, Tom Lane wrote:
> >> Given Peter's objections, I don't think this is getting into v10 anyway,
> >> so we might as well take a bit more time and do it right.
> 
> > Well, Peter's objection is that we're not going far enough in plpython, 
> > but there's absolutely no way to do more without breaking plpy, which 
> > seems a non-starter. We should certainly be able to expand the existing 
> > API to provide even more benefit, but I see no reason to leave the 
> > performance gain this patch provides on the floor just because there's 
> > more to be had with a different API.
> 
> Personally I'm way more excited about what a SPI feature like this
> could do for plpgsql than about what it can do for plpython.  If the
> latter is what floats your boat, that's fine; but I want a feature
> that we can build on for other uses, not a hack that we know we need
> to redesign next month.

Dislike of the proposed implementation, alternative proposals, and the
refutation of the "absolutely no way to do more without breaking plpy"
argument leads me to conclude that this should be returned with
feedback.

- Andres




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Jim Nasby
On Apr 6, 2017, at 9:10 PM, Andres Freund  wrote:
> 
>>> Why?  We could very well return a somewhat "smarter" object. Returning
>>> rows row-by-row if accessed via iterator, materializes when accessed via
>>> row offset.
>> 
>> I completely agree with that. What I don't understand is the objection to
>> speeding up the old access method. Or are you thinking we'd just abandon the
>> old method?
> 
> What I'm saying is that we can do that transparently, with the current
> API.  And there's no need to materialize anything in plpython, we can
> transparently use the SPI materialized version.

Oh, just switching from a list to an iterator. Ok, I finally get it.



Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Tom Lane
Jim Nasby  writes:
> On 4/6/17 8:13 PM, Tom Lane wrote:
>> Given Peter's objections, I don't think this is getting into v10 anyway,
>> so we might as well take a bit more time and do it right.

> Well, Peter's objection is that we're not going far enough in plpython, 
> but there's absolutely no way to do more without breaking plpy, which 
> seems a non-starter. We should certainly be able to expand the existing 
> API to provide even more benefit, but I see no reason to leave the 
> performance gain this patch provides on the floor just because there's 
> more to be had with a different API.

Personally I'm way more excited about what a SPI feature like this
could do for plpgsql than about what it can do for plpython.  If the
latter is what floats your boat, that's fine; but I want a feature
that we can build on for other uses, not a hack that we know we need
to redesign next month.

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Andres Freund
On 2017-04-06 21:06:59 -0700, Jim Nasby wrote:
> On 4/6/17 9:04 PM, Andres Freund wrote:
> > On 2017-04-06 09:14:43 -0700, Jim Nasby wrote:
> > > On 4/6/17 9:04 AM, Peter Eisentraut wrote:
> > > > On 4/6/17 03:50, Craig Ringer wrote:
> > > > > But otherwise, pending docs changes, I think it's ready for committer.
> > > > 
> > > > My opinion is still that this is ultimately the wrong approach.  The
> > > > right fix for performance issues in PL/Python is to change PL/Python not
> > > > to materialize the list of tuples.  Now with this change we would be
> > > > moving from two result materializations to one, but I think we are
> > > > keeping the wrong one.
> > > 
> > > That's an option for future improvement, but I see no way to accomplish 
> > > that
> > > without completely breaking plpy.
> > 
> > Why?  We could very well return a somewhat "smarter" object. Returning
> > rows row-by-row if accessed via iterator, materializes when accessed via
> > row offset.
> 
> I completely agree with that. What I don't understand is the objection to
> speeding up the old access method. Or are you thinking we'd just abandon the
> old method?

What I'm saying is that we can do that transparently, with the current
API.  And there's no need to materialize anything in plpython, we can
transparently use the SPI materialized version.

Greetings,

Andres Freund




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Jim Nasby

On 4/6/17 9:04 PM, Andres Freund wrote:

On 2017-04-06 09:14:43 -0700, Jim Nasby wrote:

On 4/6/17 9:04 AM, Peter Eisentraut wrote:

On 4/6/17 03:50, Craig Ringer wrote:

But otherwise, pending docs changes, I think it's ready for committer.


My opinion is still that this is ultimately the wrong approach.  The
right fix for performance issues in PL/Python is to change PL/Python not
to materialize the list of tuples.  Now with this change we would be
moving from two result materializations to one, but I think we are
keeping the wrong one.


That's an option for future improvement, but I see no way to accomplish that
without completely breaking plpy.


Why?  We could very well return a somewhat "smarter" object. Returning
rows row-by-row if accessed via iterator, materializes when accessed via
row offset.


I completely agree with that. What I don't understand is the objection 
to speeding up the old access method. Or are you thinking we'd just 
abandon the old method?

--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Andres Freund
On 2017-04-06 09:14:43 -0700, Jim Nasby wrote:
> On 4/6/17 9:04 AM, Peter Eisentraut wrote:
> > On 4/6/17 03:50, Craig Ringer wrote:
> > > But otherwise, pending docs changes, I think it's ready for committer.
> > 
> > My opinion is still that this is ultimately the wrong approach.  The
> > right fix for performance issues in PL/Python is to change PL/Python not
> > to materialize the list of tuples.  Now with this change we would be
> > moving from two result materializations to one, but I think we are
> > keeping the wrong one.
> 
> That's an option for future improvement, but I see no way to accomplish that
> without completely breaking plpy.

Why?  We could very well return a somewhat "smarter" object. Returning
rows row-by-row if accessed via iterator, materializes when accessed via
row offset.

- Andres




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Jim Nasby

On 4/6/17 8:13 PM, Tom Lane wrote:

It's on the pointy end for Pg10, and I thought we'd be fine to include
this in pg10 then aim to clean up DestReceiver in early pg11, or even
as a post-feature-freeze refactoring fixup in pg10. Should the
callback approach be blocked because the API it has to use is a bit
ugly?

Given Peter's objections, I don't think this is getting into v10 anyway,
so we might as well take a bit more time and do it right.


Well, Peter's objection is that we're not going far enough in plpython, 
but there's absolutely no way to do more without breaking plpy, which 
seems a non-starter. We should certainly be able to expand the existing 
API to provide even more benefit, but I see no reason to leave the 
performance gain this patch provides on the floor just because there's 
more to be had with a different API.



Also, I'm entirely -1 on "post-feature-freeze refactoring fixups".
We're going to have more than enough to do trying to stabilize the
existing committed code, I fear (cf Robert's pessimistic summary of
the open-items list, a couple days ago).  We don't need to be
planning on doing new design post-freeze, whether it's painted as
mere refactoring or not.


Agreed, and I agree that the current patch is a bit of a hack when it 
comes to DestReceiver (or really, DestReceiver has become an ugly wart 
over the years, as you pointed out).


I'll plan to pick this up again once the dust settles on this commitfest.
--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Tom Lane
Craig Ringer  writes:
> On 7 April 2017 at 00:54, Tom Lane  wrote:
>> ... External callers will only be
>> interested in the result of the canSetTag subquery.

> I wasn't aware that such queries could ever return a result set, though.

Possibly not, but the point is that they should be invisible to the
caller.  As the patch is set up, I think they'd look like empty result
sets instead, because the passed-in DestReceiver is used for them.
What we want is to use DestNone for non-canSetTag queries, and either
DestSPI or the caller's receiver for the canSetTag query.
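
[For illustration, the receiver choice described above might look roughly like
this inside SPI's execution loop; "caller_dest" stands in for whatever receiver
the caller passed, and stmt is the PlannedStmt being executed.]

	DestReceiver *dest;

	if (!stmt->canSetTag)
		dest = CreateDestReceiver(DestNone);	/* internal query: discard rows */
	else if (caller_dest != NULL)
		dest = caller_dest;						/* caller-supplied receiver */
	else
		dest = CreateDestReceiver(DestSPI);		/* default SPI tuplestore */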

> Personally I think this patch should be allowed to bypass
> CreateDestReceiver and create its own struct. I don't really see that
> it should be required to clean up the whole API first.

Well, if you bypass CreateDestReceiver then the question is what you're
putting in mydest and whether anything will get confused by that.  The
core problem with the existing API is that there's no provision for
adding new kinds of DestReceivers without a corresponding addition to
the CommandDest enum.  I think we really need some non-kluge solution
to that before moving forward.

> It's on the pointy end for Pg10, and I thought we'd be fine to include
> this in pg10 then aim to clean up DestReceiver in early pg11, or even
> as a post-feature-freeze refactoring fixup in pg10. Should the
> callback approach be blocked because the API it has to use is a bit
> ugly?

Given Peter's objections, I don't think this is getting into v10 anyway,
so we might as well take a bit more time and do it right.

Also, I'm entirely -1 on "post-feature-freeze refactoring fixups".
We're going to have more than enough to do trying to stabilize the
existing committed code, I fear (cf Robert's pessimistic summary of
the open-items list, a couple days ago).  We don't need to be
planning on doing new design post-freeze, whether it's painted as
mere refactoring or not.

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Craig Ringer
On 7 April 2017 at 00:54, Tom Lane  wrote:

> I can certainly get on board with the idea of letting a SPI caller provide
> a DestReceiver instead of accumulating the query results into a
> SPITupleTable, but the way it was done here seems quite bizarre.  I think
> for instance you've mishandled non-canSetTag queries; those should have
> any results discarded, full stop.  External callers will only be
> interested in the result of the canSetTag subquery.

That's something I didn't even know was a thing to look for. Thanks
for spotting it.

For other readers who may also be confused, this refers to internally
generated queries that should not be visible to the caller. We use
these in things like CREATE SCHEMA, ALTER TABLE, in FKs, etc.

I wasn't aware that such queries could ever return a result set, though.

> I don't much like DestSPICallback either.  We may well need some better
> way for extension code to create a custom DestReceiver that does something
> out of the ordinary with query result tuples.  But if so, it should not be
> tied to SPI, not even just by name.
>
> I think that 0001/0002 need to be refactored as (A) a change to make
> DestReceiver creation more flexible, and then (B) a change to SPI to allow
> a caller to pass in the receiver to use.

That's exactly what I tried to avoid suggesting upthread, since it'd
be quite a bit more invasive than the current patch, though definitely
a desirable cleanup.

Personally I think this patch should be allowed to bypass
CreateDestReceiver and create its own struct. I don't really see that
it should be required to clean up the whole API first.

It's on the pointy end for Pg10, and I thought we'd be fine to include
this in pg10 then aim to clean up DestReceiver in early pg11, or even
as a post-feature-freeze refactoring fixup in pg10. Should the
callback approach be blocked because the API it has to use is a bit
ugly?

> After poking around a bit, it seems like we've allowed the original
> notion of CommandDest to get way out of hand.  The only places where
> CreateDestReceiver gets called with a non-constant argument (that is,
> where the caller doesn't know perfectly well which kind of DestReceiver
> it wants) are two calls in postgres.c, and both of those are ultimately
> passing whereToSendOutput, which has got only a really limited set of
> possible values.  I am thinking that we should cut down enum CommandDest
> to be just
>
> DestNone,/* results are discarded */
> DestDebug,   /* results go to debugging output */
> DestRemote,  /* results sent to frontend process */
> DestOther/* something else */
>
> and change CreateDestReceiver so it throws an error for DestOther; the
> way to create any type of receiver other than these few is to call the
> underlying creation function directly, rather than going through
> CreateDestReceiver.
>
> Having done that, the means for creating a custom receiver is just to
> set up an appropriate struct that embeds struct _DestReceiver and
> always has mydest = DestOther (or maybe we should just lose the mydest
> field altogether).  We could document that a bit better, but it's really
> not complicated.

I strongly agree that this is the way DestReceiver creation should be
refactored.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Tom Lane
Peter Eisentraut  writes:
> On 4/6/17 03:50, Craig Ringer wrote:
>> But otherwise, pending docs changes, I think it's ready for committer.

> My opinion is still that this is ultimately the wrong approach.  The
> right fix for performance issues in PL/Python is to change PL/Python not
> to materialize the list of tuples.  Now with this change we would be
> moving from two result materializations to one, but I think we are
> keeping the wrong one.

I just looked at these patches for the first time, and TBH I'm a bit
dubious too.  I don't particularly have an opinion about the 0003 patch,
but I think that's mostly what Peter is on about.  I do have opinions
about 0001 (and 0002, which ought to be merged; we do not commit features
separately from documentation around here).

I can certainly get on board with the idea of letting a SPI caller provide
a DestReceiver instead of accumulating the query results into a
SPITupleTable, but the way it was done here seems quite bizarre.  I think
for instance you've mishandled non-canSetTag queries; those should have
any results discarded, full stop.  External callers will only be
interested in the result of the canSetTag subquery.

I don't much like DestSPICallback either.  We may well need some better
way for extension code to create a custom DestReceiver that does something
out of the ordinary with query result tuples.  But if so, it should not be
tied to SPI, not even just by name.

I think that 0001/0002 need to be refactored as (A) a change to make
DestReceiver creation more flexible, and then (B) a change to SPI to allow
a caller to pass in the receiver to use.

After poking around a bit, it seems like we've allowed the original
notion of CommandDest to get way out of hand.  The only places where
CreateDestReceiver gets called with a non-constant argument (that is,
where the caller doesn't know perfectly well which kind of DestReceiver
it wants) are two calls in postgres.c, and both of those are ultimately
passing whereToSendOutput, which has got only a really limited set of
possible values.  I am thinking that we should cut down enum CommandDest
to be just

DestNone,/* results are discarded */
DestDebug,   /* results go to debugging output */
DestRemote,  /* results sent to frontend process */
DestOther/* something else */

and change CreateDestReceiver so it throws an error for DestOther; the
way to create any type of receiver other than these few is to call the
underlying creation function directly, rather than going through
CreateDestReceiver.

Having done that, the means for creating a custom receiver is just to
set up an appropriate struct that embeds struct _DestReceiver and
always has mydest = DestOther (or maybe we should just lose the mydest
field altogether).  We could document that a bit better, but it's really
not complicated.
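
[A sketch of that embedding pattern, with invented names; DestOther here is
the proposed enum member, not something that exists today.]

	typedef struct MyReceiver
	{
		DestReceiver pub;		/* must be the first field */
		uint64		rows_seen;	/* private state follows */
	} MyReceiver;

	static bool
	my_receiveSlot(TupleTableSlot *slot, DestReceiver *self)
	{
		((MyReceiver *) self)->rows_seen++;
		return true;			/* keep the executor running */
	}

	static void my_startup(DestReceiver *self, int operation, TupleDesc typeinfo) {}
	static void my_shutdown(DestReceiver *self) {}
	static void my_destroy(DestReceiver *self) { pfree(self); }

	DestReceiver *
	CreateMyReceiver(void)
	{
		MyReceiver *my = (MyReceiver *) palloc0(sizeof(MyReceiver));

		my->pub.receiveSlot = my_receiveSlot;
		my->pub.rStartup = my_startup;
		my->pub.rShutdown = my_shutdown;
		my->pub.rDestroy = my_destroy;
		my->pub.mydest = DestOther;		/* proposed value */
		return &my->pub;
	}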

There's enough cruft that's accumulated in this area that this would
probably not be an entirely trivial patch, but I think it would be
a good way to make things cleaner.

regards, tom lane




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Jim Nasby

On 4/6/17 9:04 AM, Peter Eisentraut wrote:

On 4/6/17 03:50, Craig Ringer wrote:

But otherwise, pending docs changes, I think it's ready for committer.


My opinion is still that this is ultimately the wrong approach.  The
right fix for performance issues in PL/Python is to change PL/Python not
to materialize the list of tuples.  Now with this change we would be
moving from two result materializations to one, but I think we are
keeping the wrong one.


That's an option for future improvement, but I see no way to accomplish 
that without completely breaking plpy.


I think the best way to handle this would be to allow plpython functions 
to define their own callback function, which would be handed a python 
tuple that was translated from the SPI result tuple. How best to do that 
without breaking plpy will require some thought though.


In the meantime, I don't think a 27% performance gain is anything to 
sneeze at, and the SPI changes would be directly usable by pl/r and pl/tcl.

--
Jim C. Nasby, Data Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Peter Eisentraut
On 4/6/17 03:50, Craig Ringer wrote:
> But otherwise, pending docs changes, I think it's ready for committer.

My opinion is still that this is ultimately the wrong approach.  The
right fix for performance issues in PL/Python is to change PL/Python not
to materialize the list of tuples.  Now with this change we would be
moving from two result materializations to one, but I think we are
keeping the wrong one.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Craig Ringer
On 6 April 2017 at 15:38, Craig Ringer  wrote:

> Notes on the docs aside, I am pretty happy with this and think it's
> reasonable to proceed with it for Pg 10.

Actually, I'm a bit hesitant about returning a static struct that you
expect callers to copy and modify.  But it seems to be an issue with
CreateDestReceiver's interface where it mixes returning pointers to
static structs with returning pointers to palloc'd memory, not a new
issue added by this patch.

I think you're better off bypassing CreateDestReceiver here, and doing
what CreateTransientRelDestReceiver(), etc do directly instead. That
way there's no need for the hoop jumping of returning a pointer to a
static struct to memcpy() into another struct in:

+memcpy(&(callback->pub), CreateDestReceiver(DestSPICallback),
sizeof(DestReceiver));

Just document that the spi callback struct must have DestReceiver as
its first member and must be palloc0'd.

But otherwise, pending docs changes, I think it's ready for committer.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-06 Thread Craig Ringer
On 6 April 2017 at 11:50, Jim Nasby  wrote:

> Attached is a complete series of patches that includes the docs patch.

+   SPI_execute_callback is the same as
+   SPI_execute, except that instead of returning results
+   via SPITupleTable, the user-supplied
callback
+   is used. Unlike
+   SPI_execute,
+   SPI_execute_callback
+   will run the callback for every SQL command passed in to
command.

This doesn't explain why the user should care or prefer this approach.
Maybe after "Unlike":

"SPI_execute_callback does not need to accumulate all the
query results into memory before the caller can process them. Instead
of building a SPI_tuptable containing all the results, the
supplied callback is invoked for each row processed by SPI. The row
data is passed to the callback then discarded when the callback
returns. This reduces copying and allows the application to process
results sooner."

The docs do not discuss memory lifetimes. What memory context is each
call invoked in and when is it reset? Something like:

"rStartup, receiveSlot and
rShutdown are all called in a memory context that is reset
for each SPI_execute_callback."

Also, what rules apply in terms of what you can/cannot do from within
a callback? Presumably it's unsafe to perform additional SPI calls,
perform transactions, call into the executor, change the current
snapshot, etc, but I would consider that reasonably obvious. Are there
any specific things to avoid?

Under what circumstances is "the query execution context [...] freed"
such that the destroy callback is called?

It is not clear to me when reading the document that the user of
SPI_execute_callback is expected to define their own
DestReceiver-compatible struct, and that the private fields that "may
appear beyond this point" are those defined by the application. How
does the app know that its DestReceiver is compatible with the result
of CreateDestReceiver(DestSPICallback)? Should it copy the fields?
etc. Your PLPython example incorporates DestReceiver by-value in its
own state struct; maybe your docs program listing should illustrate
that instead, show how to initialize the destreceiver member, and
separately list the callbacks in destreceiver in an  ?


It definitely needs to mention the relevant parts of plpython to look
at for an example of usage.

> Right now, the docs don't include a concrete example, because adding one
> would be pretty large if it demonstrated real usage, which presumably
> means Yet Another Contrib Module strictly for the purpose of demonstrating
> something. Rather than doing that, ISTM it'd be better to point the user at
> what plpythonu is doing.

Seems fine to me.

Notes on the docs aside, I am pretty happy with this and think it's
reasonable to proceed with it for Pg 10.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Jim Nasby

On 4/5/17 9:08 PM, Craig Ringer wrote:

... which I can't reproduce now. Even though I cleared ccache and "git
reset -fdx" before I ran the above and got the crash.


Glad to hear that, since I can't repro at all. :)


Assume it's a local system peculiarity. If I can reproduce again I'll
dig into it.


Sounds good. Thanks!
--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Craig Ringer
On 6 April 2017 at 11:46, Craig Ringer  wrote:
> On 6 April 2017 at 09:40, Jim Nasby  wrote:
>> On 4/4/17 7:44 PM, Craig Ringer wrote:
>>>
>>> The patch crashes in initdb with --enable-cassert builds:
>>
>>
>> Thanks for the review! I'll get to the rest of it in a bit, but I'm unable
>> to reproduce the initdb failure. I looked at the assert line and I don't see
>> anything obvious either. :/
>>
>> Can you send your full configure call? uname -a? Mine is:
>>
>> ./configure --with-includes=/opt/local/include
>> --with-libraries=/opt/local/lib --enable-debug --with-libxml --with-tcl
>> --with-perl --with-python --enable-depend --enable-dtrace --enable-tap-tests
>> --prefix=/Users/decibel/pgsql/HEAD/i/i --with-pgport=$PGC_PORT -C
>> --enable-cassert --enable-debug CFLAGS='-ggdb -O0 -fno-omit-frame-pointer'
>
>
> ./configure --prefix=/home/craig/pg/10 --enable-cassert --enable-debug
> --enable-tap-tests --with-python
> make -s clean
> make -s -j4
> make check
>
> results in the crash here.

... which I can't reproduce now. Even though I cleared ccache and "git
reset -fdx" before I ran the above and got the crash.

Assume it's a local system peculiarity. If I can reproduce again I'll
dig into it.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Jim Nasby

On 4/5/17 7:44 PM, Jim Nasby wrote:

Updated patches attached, but I still need to update the docs.


Attached is a complete series of patches that includes the docs patch.

Right now, the docs don't include a concrete example, because adding one 
would be pretty large if it demonstrated real usage, which presumably 
means Yet Another Contrib Module strictly for the purpose of 
demonstrating something. Rather than doing that, ISTM it'd be better to 
point the user at what plpythonu is doing.


Another option would be to have a very simple example that only uses 
*receiveSlot, but that seems rather pointless to me.
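
[For reference, such a minimal example might look roughly like this, using the
SPI_execute_callback() and DestSPICallback pieces from this patch series; the
RowCounter type and count_row() are invented for illustration.]

	typedef struct RowCounter
	{
		DestReceiver pub;		/* must come first */
		uint64		nrows;
	} RowCounter;

	static bool
	count_row(TupleTableSlot *slot, DestReceiver *self)
	{
		((RowCounter *) self)->nrows++;
		return true;
	}

	static uint64
	count_pg_class_rows(void)
	{
		/* allocated before SPI_connect(), so it survives SPI_finish() */
		RowCounter *counter = (RowCounter *) palloc0(sizeof(RowCounter));

		/* start from the template receiver, then override receiveSlot */
		memcpy(&counter->pub, CreateDestReceiver(DestSPICallback),
			   sizeof(DestReceiver));
		counter->pub.receiveSlot = count_row;

		SPI_connect();
		if (SPI_execute_callback("SELECT oid FROM pg_class", true, 0,
								 &counter->pub) < 0)
			elog(ERROR, "SPI_execute_callback failed");
		SPI_finish();

		return counter->nrows;
	}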

--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com
From 0a2ef661f55a047763a43b0eebd7483760e4a427 Mon Sep 17 00:00:00 2001
From: Jim Nasby 
Date: Wed, 5 Apr 2017 20:52:39 -0500
Subject: [PATCH 1/3] Add SPI_execute_callback

Instead of placing results in a tuplestore, this method of execution
uses the supplied callback when creating the Portal for a query.
---
 src/backend/executor/spi.c | 80 --
 src/backend/tcop/dest.c| 15 +
 src/include/executor/spi.h |  4 +++
 src/include/tcop/dest.h|  1 +
 4 files changed, 90 insertions(+), 10 deletions(-)

diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index ca547dc6d9..4f6c3011f9 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
 				  Snapshot snapshot, Snapshot crosscheck_snapshot,
-				  bool read_only, bool fire_triggers, uint64 tcount);
+				  bool read_only, bool fire_triggers, uint64 tcount,
+				  DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
 					Datum *Values, const char *Nulls);
@@ -321,7 +322,35 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
 	res = _SPI_execute_plan(&plan, NULL,
 							InvalidSnapshot, InvalidSnapshot,
-							read_only, true, tcount);
+							read_only, true, tcount, NULL);
+
+	_SPI_end_call(true);
+	return res;
+}
+
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,
+		DestReceiver *callback)
+{
+	_SPI_plan	plan;
+	int			res;
+
+	if (src == NULL || tcount < 0)
+		return SPI_ERROR_ARGUMENT;
+
+	res = _SPI_begin_call(true);
+	if (res < 0)
+		return res;
+
+	memset(&plan, 0, sizeof(_SPI_plan));
+	plan.magic = _SPI_PLAN_MAGIC;
+	plan.cursor_options = 0;
+
+	_SPI_prepare_oneshot_plan(src, &plan);
+
+	res = _SPI_execute_plan(&plan, NULL,
+							InvalidSnapshot, InvalidSnapshot,
+							read_only, true, tcount, callback);
 
 	_SPI_end_call(true);
 	return res;
@@ -355,7 +384,34 @@ SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls,
 							_SPI_convert_params(plan->nargs, plan->argtypes,
 												Values, Nulls),
 							InvalidSnapshot, InvalidSnapshot,
-							read_only, true, tcount);
+							read_only, true, tcount, NULL);
+
+	_SPI_end_call(true);
+	return res;
+}
+
+/* Execute a previously prepared plan with a callback */
+int
+SPI_execute_plan_callback(SPIPlanPtr plan, Datum *Values, const char *Nulls,
+						  bool read_only, long tcount, DestReceiver *callback)
+{
+	int			res;
+
+	if (plan == NULL || plan->magic != _SPI_PLAN_MAGIC || tcount < 0)
+		return SPI_ERROR_ARGUMENT;
+
+	if (plan->nargs > 0 && Values == NULL)
+		return SPI_ERROR_PARAM;
+
+	res = _SPI_begin_call(true);
+	if (res < 0)
+		return res;
+
+	res = _SPI_execute_plan(plan,
+							_SPI_convert_params(plan->nargs, plan->argtypes,
+												Values, Nulls),
+							InvalidSnapshot, InvalidSnapshot,
+							read_only, true, tcount, callback);
 
 	_SPI_end_call(true);

Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Craig Ringer
On 6 April 2017 at 09:40, Jim Nasby  wrote:
> On 4/4/17 7:44 PM, Craig Ringer wrote:
>>
>> The patch crashes in initdb with --enable-cassert builds:
>
>
> Thanks for the review! I'll get to the rest of it in a bit, but I'm unable
> to reproduce the initdb failure. I looked at the assert line and I don't see
> anything obvious either. :/
>
> Can you send your full configure call? uname -a? Mine is:
>
> ./configure --with-includes=/opt/local/include
> --with-libraries=/opt/local/lib --enable-debug --with-libxml --with-tcl
> --with-perl --with-python --enable-depend --enable-dtrace --enable-tap-tests
> --prefix=/Users/decibel/pgsql/HEAD/i/i --with-pgport=$PGC_PORT -C
> --enable-cassert --enable-debug CFLAGS='-ggdb -O0 -fno-omit-frame-pointer'


./configure --prefix=/home/craig/pg/10 --enable-cassert --enable-debug
--enable-tap-tests --with-python
make -s clean
make -s -j4
make check

results in the crash here.

if I add

CFLAGS=""

to the arguments (which suppresses the default "-O2"), or pass

CFLAGS="-O0"

then the crash goes away.



$ python --version
Python 2.7.11

$ lsb_release -a
LSB Version: 
:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 23 (Twenty Three)
Release: 23
Codename: TwentyThree

$ gcc --version
gcc (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Jim Nasby

On 4/4/17 7:44 PM, Craig Ringer wrote:

On 5 April 2017 at 08:23, Craig Ringer  wrote:

On 5 April 2017 at 08:00, Craig Ringer  wrote:

This patch fails to update the documentation at all.

https://www.postgresql.org/docs/devel/static/spi.html


I'll fix that soon.


missing newline


Fixed.


+/* Execute a previously prepared plan with a callback Destination */


caps "Destination"


Hmm, I capitalized it since DestReceiver is capitalized. I guess it's 
best to just drop Destination.



+// XXX throw error if callback is set


Fixed (opted to just use an Assert).


+static DestReceiver spi_callbackDR = {
+donothingReceive, donothingStartup, donothingCleanup, donothingCleanup,
+DestSPICallback
+};
Presumably that's a default destination you're then supposed to modify
with your own callbacks? There aren't any comments to indicate what's
going on here.


Correct. Added the following comment:


/*
 * This is strictly a starting point for creating a callback. It should not
 * actually be used.
 */





Comments on patch 2:

If this is the "bare minimum" patch, what is pending? Does it impose
any downsides or limits?


No limits. I'm not aware of any downsides.

It's "bare minimum" because a follow-on step is to allow different 
methods of returning results. In particular, my testing indicates that 
patch 1 + returning a dict of lists (as opposed to the current list of 
dicts) results in a 460% improvement vs ~30% with patch 2.



+/* Get switch execution contexts */
+extern PLyExecutionContext
*PLy_switch_execution_context(PLyExecutionContext *new);

Comment makes no sense to me. This seems something like memory context
switch, where you supply the new and return the old. But the comment
doesn't explain it at all.


Comment changed to
/* Switch execution context (similar to MemoryContextSwitchTo */


+void PLy_CSStartup(DestReceiver *self, int operation, TupleDesc typeinfo);
+void PLy_CSDestroy(DestReceiver *self);

These are declared in the plpy_spi.c. Why aren't these static?


Derp. Fixed.


+volatile MemoryContext oldcontext;
+volatile ResourceOwner oldowner;
 int rv;
-volatile MemoryContext oldcontext;
-volatile ResourceOwner oldowner;

Unnecessary code movement.


IMHO it reads better that way. I've left it for now so $COMMITTER can 
decide, but if it really bugs you let me know and I'll swap it.



In PLy_Callback_New, I think your use of a sub-context is sensible. Is
it necessary to palloc0 though?


Hrm, maybe not... but it seems like cheap insurance considering the 
amount of other stuff involved in just starting a new SPI call. And 
honestly, I'd rather not mess with it at this point. :) I have added an 
XXX comment though.



+static CallbackState *
+PLy_Callback_Free(CallbackState *callback)

The code here looks like it could be part of spi.c's callback support,
rather than plpy_spi specific, since you provide a destroy callback in
the SPI callback struct.


I added this for use in PG_CATCH() blocks, because it's not clear to me 
that the portal gets cleaned up in those cases. It's certainly possible 
that it's pointless.


FWIW, I'm pretty sure I copied that pattern from somewhere else... 
probably plpgsql or pltcl.
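
[Roughly the pattern being referred to, in simplified form; callback is the
patch's CallbackState, and the surrounding variables are illustrative.]

	PG_TRY();
	{
		rv = SPI_execute_callback(query, true, 0, &callback->pub);
	}
	PG_CATCH();
	{
		/* don't rely on the portal to clean up our state if the query failed */
		callback = PLy_Callback_Free(callback);
		PG_RE_THROW();
	}
	PG_END_TRY();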



+/* We need to store this because the TupleDesc the receive
function gets has no names. */
+myState->desc = typeinfo;

Is it safe to just store the pointer to the TupleDesc here? What's the lifetime?


Only Portal lifetime.


+ * will clean it up when the time is right. XXX This might result in a leak
+ * if an error happens and the result doesn't get dereferenced.
+ */
+MemoryContextSwitchTo(TopMemoryContext);
+result->tupdesc = CreateTupleDescCopy(typeinfo);

especially given this XXX comment...


I've changed the comment to the following. Hopefully this clarifies things:


/*
 * Save tuple descriptor for later use by result set metadata
 * functions.  Save it in TopMemoryContext so that it survives outside 
of
 * an SPI context.  We trust that PLy_result_dealloc() will clean it up
 * when the time is right. The difference between result and everything
 * else is that result needs to survive after the portal is destroyed,
 * because result is what's handed back to the plpython function. While
 * it's tempting to use something other than TopMemoryContext, that 
won't
 * work: the user could potentially put result into the global 
dictionary,
 * which means it could survive as long as the session does.  This might
 * result in a leak if an error happens and the result doesn't get
 * dereferenced, but if that happens it means the python GC has failed 
us,
 * at which point we probably have bigger problems.
 *
 * This still isn't perfect though; if something the result tupledesc
 * references has its OID changed then the tupledesc 

Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-05 Thread Jim Nasby

On 4/4/17 7:44 PM, Craig Ringer wrote:

The patch crashes in initdb with --enable-cassert builds:


Thanks for the review! I'll get to the rest of it in a bit, but I'm 
unable to reproduce the initdb failure. I looked at the assert line and 
I don't see anything obvious either. :/


Can you send your full configure call? uname -a? Mine is:

./configure --with-includes=/opt/local/include 
--with-libraries=/opt/local/lib --enable-debug --with-libxml --with-tcl 
--with-perl --with-python --enable-depend --enable-dtrace 
--enable-tap-tests --prefix=/Users/decibel/pgsql/HEAD/i/i 
--with-pgport=$PGC_PORT -C --enable-cassert --enable-debug CFLAGS='-ggdb 
-O0 -fno-omit-frame-pointer'


Darwin decina.local 15.6.0 Darwin Kernel Version 15.6.0: Mon Jan  9 
23:07:29 PST 2017; root:xnu-3248.60.11.2.1~1/RELEASE_X86_64 x86_64

--
Jim Nasby, Chief Data Architect, Austin TX
OpenSCG http://OpenSCG.com




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-04 Thread Craig Ringer
On 5 April 2017 at 08:23, Craig Ringer  wrote:
> On 5 April 2017 at 08:00, Craig Ringer  wrote:
>
>> Taking a look at this now.
>
> Rebased to current master with conflicts and whitespace errors fixed.
> Review pending.

This patch fails to update the documentation at all.

https://www.postgresql.org/docs/devel/static/spi.html



The patch crashes in initdb with --enable-cassert builds:

performing post-bootstrap initialization ... TRAP:
FailedAssertion("!(myState->pub.mydest == DestSQLFunction)", File:
"functions.c", Line: 800)
sh: line 1: 30777 Aborted (core dumped)
"/home/craig/projects/2Q/postgres/tmp_install/home/craig/pg/10/bin/postgres"
--single -F -O -j -c search_path=pg_catalog -c exit_on_error=true
template1 > /dev/null
child process exited with exit code 134

Backtrace attached.




Details on patch 1:

missing newline

+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,


+/* Execute a previously prepared plan with a callback Destination */


caps "Destination"



+// XXX throw error if callback is set

^^



+static DestReceiver spi_callbackDR = {
+donothingReceive, donothingStartup, donothingCleanup, donothingCleanup,
+DestSPICallback
+};


Is the callback destreceiver supposed to be a blackhole? Why? Its
name, spi_callbackDR and DestSPICallback, doesn't seem to indicate
that it drops its input.

Presumably that's a default destination you're then supposed to modify
with your own callbacks? There aren't any comments to indicate what's
going on here.



Comments on patch 2:

If this is the "bare minimum" patch, what is pending? Does it impose
any downsides or limits?

+/* Get switch execution contexts */
+extern PLyExecutionContext
*PLy_switch_execution_context(PLyExecutionContext *new);

Comment makes no sense to me. This seems something like memory context
switch, where you supply the new and return the old. But the comment
doesn't explain it at all.

+void PLy_CSStartup(DestReceiver *self, int operation, TupleDesc typeinfo);
+void PLy_CSDestroy(DestReceiver *self);

These are declared in the plpy_spi.c. Why aren't these static?

+volatile MemoryContext oldcontext;
+volatile ResourceOwner oldowner;
 int rv;
-volatile MemoryContext oldcontext;
-volatile ResourceOwner oldowner;

Unnecessary code movement.

In PLy_Callback_New, I think your use of a sub-context is sensible. Is
it necessary to palloc0 though?

+static CallbackState *
+PLy_Callback_Free(CallbackState *callback)

The code here looks like it could be part of spi.c's callback support,
rather than plpy_spi specific, since you provide a destroy callback in
the SPI callback struct.

+/* We need to store this because the TupleDesc the receive
function gets has no names. */
+myState->desc = typeinfo;

Is it safe to just store the pointer to the TupleDesc here? What's the lifetime?

+ * will clean it up when the time is right. XXX This might result in a leak
+ * if an error happens and the result doesn't get dereferenced.
+ */
+MemoryContextSwitchTo(TopMemoryContext);
+result->tupdesc = CreateTupleDescCopy(typeinfo);

especially given this XXX comment...


Patch needs bug fix, docs updates, fixes for issues marked in
comments. But overall approach looks sensible enough.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
#0  0x7f9d5d64ea28 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x7f9d5d65062a in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x0083a521 in ExceptionalCondition 
(conditionName=conditionName@entry=0x9b68d0 "!(myState->pub.mydest == 
DestSQLFunction)", errorType=errorType@entry=0x882d29 "FailedAssertion", 
fileName=fileName@entry=0x9b6920 "functions.c", 
lineNumber=lineNumber@entry=800) at assert.c:54
No locals.
#3  0x006166e8 in postquel_start (fcache=0x27772c0, es=0x284eb58) at 
functions.c:800
myState = 0x284eeb8
dest = 0x284eeb8
#4  fmgr_sql (fcinfo=0x27a7a28) at functions.c:1149
fcache = 0x27772c0
sqlerrcontext = {previous = 0x0, callback = 0x614910 
, arg = 0x27a79d0}
randomAccess = 0 '\000'
lazyEvalOK = 
is_first = 
pushed_snapshot = 0 '\000'
es = 0x284eb58
slot = 
result = 
eslist = 
eslc = 0x284eb90
__func__ = "fmgr_sql"
#5  0x0060810b in ExecInterpExpr (state=0x27a7528, econtext=0x27a4ce0, 
isnull=) at execExprInterp.c:670
fcinfo = 0x27a7a28
argnull = 0x27a7d68 ""
argno = 
op = 0x27a76f8
resultslot = 0x27a7130
innerslot = 
outerslot = 0x27a5930
scanslot = 0x0
dispatch_table = {0x608960 , 0x607e00 
, 0x607e20 , 0x607e38 
, 0x607c90 

Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-04 Thread Craig Ringer
On 5 April 2017 at 08:00, Craig Ringer  wrote:

> Taking a look at this now.

Rebased to current master with conflicts and whitespace errors fixed.
Review pending.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 71b6163734934db3160de81fd064e7d113b9122f Mon Sep 17 00:00:00 2001
From: Jim Nasby 
Date: Wed, 1 Mar 2017 15:45:51 -0600
Subject: [PATCH 1/2] Add SPI_execute_callback() and callback-based
 DestReceiver.

Instead of placing results in a tuplestore, this method of execution
uses the supplied callback when creating the Portal for a query.
---
 src/backend/executor/spi.c | 79 --
 src/backend/tcop/dest.c| 11 +++
 src/include/executor/spi.h |  4 +++
 src/include/tcop/dest.h|  1 +
 4 files changed, 85 insertions(+), 10 deletions(-)

diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 3a89ccd..a0af31a 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
   Snapshot snapshot, Snapshot crosscheck_snapshot,
-  bool read_only, bool fire_triggers, uint64 tcount);
+  bool read_only, bool fire_triggers, uint64 tcount,
+  DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
 	Datum *Values, const char *Nulls);
@@ -321,7 +322,34 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
	res = _SPI_execute_plan(&plan, NULL,
 			InvalidSnapshot, InvalidSnapshot,
-			read_only, true, tcount);
+			read_only, true, tcount, NULL);
+
+	_SPI_end_call(true);
+	return res;
+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,
+		DestReceiver *callback)
+{
+	_SPI_plan	plan;
+	int			res;
+
+	if (src == NULL || tcount < 0)
+		return SPI_ERROR_ARGUMENT;
+
+	res = _SPI_begin_call(true);
+	if (res < 0)
+		return res;
+
+	memset(&plan, 0, sizeof(_SPI_plan));
+	plan.magic = _SPI_PLAN_MAGIC;
+	plan.cursor_options = 0;
+
+	_SPI_prepare_oneshot_plan(src, &plan);
+
+	res = _SPI_execute_plan(&plan, NULL,
+			InvalidSnapshot, InvalidSnapshot,
+			read_only, true, tcount, callback);
 
 	_SPI_end_call(true);
 	return res;
@@ -355,7 +383,34 @@ SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const char *Nulls,
 			_SPI_convert_params(plan->nargs, plan->argtypes,
 Values, Nulls),
 			InvalidSnapshot, InvalidSnapshot,
-			read_only, true, tcount);
+			read_only, true, tcount, NULL);
+
+	_SPI_end_call(true);
+	return res;
+}
+
+/* Execute a previously prepared plan with a callback Destination */
+int
+SPI_execute_plan_callback(SPIPlanPtr plan, Datum *Values, const char *Nulls,
+ bool read_only, long tcount, DestReceiver *callback)
+{
+	int			res;
+
+	if (plan == NULL || plan->magic != _SPI_PLAN_MAGIC || tcount < 0)
+		return SPI_ERROR_ARGUMENT;
+
+	if (plan->nargs > 0 && Values == NULL)
+		return SPI_ERROR_PARAM;
+
+	res = _SPI_begin_call(true);
+	if (res < 0)
+		return res;
+
+	res = _SPI_execute_plan(plan,
+			_SPI_convert_params(plan->nargs, plan->argtypes,
+Values, Nulls),
+			InvalidSnapshot, InvalidSnapshot,
+			read_only, true, tcount, callback);
 
 	_SPI_end_call(true);
 	return res;
@@ -384,7 +439,7 @@ SPI_execute_plan_with_paramlist(SPIPlanPtr plan, ParamListInfo params,
 
 	res = _SPI_execute_plan(plan, params,
 			InvalidSnapshot, InvalidSnapshot,
-			read_only, true, tcount);
+			read_only, true, tcount, NULL);
 
 	_SPI_end_call(true);
 	return res;
@@ -425,7 +480,7 @@ SPI_execute_snapshot(SPIPlanPtr plan,
 			_SPI_convert_params(plan->nargs, plan->argtypes,
 Values, Nulls),
 			snapshot, crosscheck_snapshot,
-			read_only, fire_triggers, tcount);
+			read_only, fire_triggers, tcount, NULL);
 
 	_SPI_end_call(true);
 	return res;
@@ -472,7 +527,7 @@ SPI_execute_with_args(const char *src,
 
	res = _SPI_execute_plan(&plan, paramLI,
 			InvalidSnapshot, InvalidSnapshot,
-			read_only, true, tcount);
+			read_only, true, tcount, NULL);
 
 	_SPI_end_call(true);
 	return res;
@@ -1907,7 +1962,8 @@ _SPI_prepare_oneshot_plan(const char *src, SPIPlanPtr plan)
 static int
 _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
   Snapshot snapshot, Snapshot crosscheck_snapshot,
-  bool read_only, bool fire_triggers, uint64 tcount)
+  bool read_only, bool fire_triggers, uint64 tcount,
+  DestReceiver *callback)
 {
 	int			my_res = 0;
 	uint64		my_processed = 0;
@@ -1918,6 +1974,7 @@ _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
 	ErrorContextCallback spierrcontext;
 	CachedPlan *cplan = NULL;
 	ListCell   *lc1;
+	DestReceiver *dest = callback;
 
 	/*
 	 * Setup error traceback support for ereport()
@@ -2037,7 +2094,6 @@ 
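
For orientation, a minimal caller-side sketch of how the new entry point might be used (illustrative only: SPI_execute_callback(), DestSPICallback and CallbackState come from the patch; my_receive() and run_with_callback() are made-up names, and the usual spi.h/dest.h includes are assumed):

/* Hypothetical per-tuple hook: runs once per row instead of filling a tuplestore. */
static bool
my_receive(TupleTableSlot *slot, DestReceiver *self)
{
	slot_getallattrs(slot);		/* make all attribute datums/nulls available */
	/* ... consume slot->tts_values / slot->tts_isnull here ... */
	return true;				/* true = keep sending tuples */
}

static void
run_with_callback(const char *sql)
{
	CallbackState callback;

	/* Start from the stock callback receiver, then override the per-tuple hook. */
	memcpy(&callback.pub, CreateDestReceiver(DestSPICallback), sizeof(DestReceiver));
	callback.pub.receiveSlot = my_receive;

	if (SPI_execute_callback(sql, true /* read_only */, 0 /* tcount: no limit */,
							 &callback.pub) < 0)
		elog(ERROR, "SPI_execute_callback failed");
}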

Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-04-04 Thread Craig Ringer
On 6 March 2017 at 05:09, Jim Nasby  wrote:
> On 2/28/17 9:42 PM, Jim Nasby wrote:
>>>
>>>
>>> I'll post a plpython patch that doesn't add the output format control.
>>
>>
>> I've attached the results of that. Unfortunately the speed improvement
>> is only 27% at this point (with 999 tuples). Presumably that's
>> because it's constructing a brand new dictionary from scratch for each
>> tuple.

Taking a look at this now.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results

2017-03-06 Thread Peter Eisentraut
On 3/5/17 16:07, Jim Nasby wrote:
>> There is nothing that requires us to materialize the results into an
>> actual list of actual rows.  We could wrap the SPI_tuptable into a
>> Python object and implement __getitem__ or __iter__ to emulate sequence
>> or mapping access.
> Would it be possible to have that just pull tuples directly from the 
> executor? The overhead of populating the tuplestore just to drain it 
> again can become quite significant, and AFAICT it's completely unnecessary.

I think there are many options, depending on what you want.  If you want
to materialize the result, then you have to materialize it somewhere,
and then make a Python object around that.  Or you could make an
iterator interface that just reads a tuple at a time.  Or maybe there
are other options.
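
For illustration, a rough pure-Python model of that iterator idea (the real wrapper would be C code reading lazily from SPI_tuptable; the class and the fetch_row callable below are made up):

class LazySPIResult:
    # Conceptual sketch: rows are converted only when accessed, instead of
    # materializing a full list of dicts up front.
    def __init__(self, fetch_row, nrows):
        self._fetch_row = fetch_row   # callable that converts row i on demand
        self._nrows = nrows

    def __len__(self):
        return self._nrows

    def __getitem__(self, i):
        if not 0 <= i < self._nrows:
            raise IndexError(i)
        return self._fetch_row(i)

    def __iter__(self):
        for i in range(self._nrows):
            yield self._fetch_row(i)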

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Faster methods for getting SPI results

2017-03-06 Thread Jim Nasby

On 3/2/17 8:03 AM, Peter Eisentraut wrote:

On 12/20/16 23:14, Jim Nasby wrote:

I've been looking at the performance of SPI calls within plpython.
There's a roughly 1.5x difference from equivalent python code just in
pulling data out of the SPI tuplestore. Some of that is due to an
inefficiency in how plpython is creating result dictionaries, but fixing
that is ultimately a dead-end: if you're dealing with a lot of results
in python, you want a tuple of arrays, not an array of tuples.


There is nothing that requires us to materialize the results into an
actual list of actual rows.  We could wrap the SPI_tuptable into a
Python object and implement __getitem__ or __iter__ to emulate sequence
or mapping access.


Would it be possible to have that just pull tuples directly from the 
executor? The overhead of populating the tuplestore just to drain it 
again can become quite significant, and AFAICT it's completely unnecessary.


Unfortunately, I think adding support for that would be even more 
invasive, which is why I haven't attempted it. On the flip side, I 
believe that kind of an interface would be usable by plpgsql, whereas 
the DestReceiver approach is not (AFAICT).

--
Jim Nasby, Chief Data Architect, OpenSCG




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-03-06 Thread Jim Nasby

On 2/28/17 9:42 PM, Jim Nasby wrote:


I'll post a plpython patch that doesn't add the output format control.


I've attached the results of that. Unfortunately the speed improvement
is only 27% at this point (with 999 tuples). Presumably that's
because it's constructing a brand new dictionary from scratch for each
tuple.


I found a couple bugs. New patches attached.
--
Jim Nasby, Chief Data Architect, OpenSCG
From 116b6a45b0146e42f1faa130d78e9362950c18c3 Mon Sep 17 00:00:00 2001
From: Jim Nasby 
Date: Wed, 1 Mar 2017 15:45:51 -0600
Subject: [PATCH 1/2] Add SPI_execute_callback() and callback-based
 DestReceiver.

Instead of placing results in a tuplestore, this method of execution
uses the supplied callback when creating the Portal for a query.
---
 src/backend/executor/spi.c | 79 --
 src/backend/tcop/dest.c| 11 +++
 src/include/executor/spi.h |  4 +++
 src/include/tcop/dest.h|  1 +
 4 files changed, 85 insertions(+), 10 deletions(-)

diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 55f97b14e6..ffeba679da 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, 
SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
  Snapshot snapshot, Snapshot 
crosscheck_snapshot,
- bool read_only, bool fire_triggers, uint64 
tcount);
+ bool read_only, bool fire_triggers, uint64 
tcount,
+ DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
Datum *Values, const char *Nulls);
@@ -320,7 +321,34 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
	res = _SPI_execute_plan(&plan, NULL,
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,
+   DestReceiver *callback)
+{
+   _SPI_plan   plan;
+   int res;
+
+   if (src == NULL || tcount < 0)
+   return SPI_ERROR_ARGUMENT;
+
+   res = _SPI_begin_call(true);
+   if (res < 0)
+   return res;
+
+   memset(&plan, 0, sizeof(_SPI_plan));
+   plan.magic = _SPI_PLAN_MAGIC;
+   plan.cursor_options = 0;
+
+   _SPI_prepare_oneshot_plan(src, &plan);
+
+   res = _SPI_execute_plan(&plan, NULL,
+   InvalidSnapshot, 
InvalidSnapshot,
+   read_only, true, 
tcount, callback);
 
_SPI_end_call(true);
return res;
@@ -354,7 +382,34 @@ SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const 
char *Nulls,

_SPI_convert_params(plan->nargs, plan->argtypes,

Values, Nulls),
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+
+/* Execute a previously prepared plan with a callback Destination */
+int
+SPI_execute_plan_callback(SPIPlanPtr plan, Datum *Values, const char *Nulls,
+bool read_only, long tcount, DestReceiver 
*callback)
+{
+   int res;
+
+   if (plan == NULL || plan->magic != _SPI_PLAN_MAGIC || tcount < 0)
+   return SPI_ERROR_ARGUMENT;
+
+   if (plan->nargs > 0 && Values == NULL)
+   return SPI_ERROR_PARAM;
+
+   res = _SPI_begin_call(true);
+   if (res < 0)
+   return res;
+
+   res = _SPI_execute_plan(plan,
+   
_SPI_convert_params(plan->nargs, plan->argtypes,
+   
Values, Nulls),
+   InvalidSnapshot, 
InvalidSnapshot,
+   read_only, true, 
tcount, callback);
 
_SPI_end_call(true);
return res;
@@ -383,7 +438,7 @@ SPI_execute_plan_with_paramlist(SPIPlanPtr plan, 
ParamListInfo params,
 
res = _SPI_execute_plan(plan, params,
InvalidSnapshot, 
InvalidSnapshot,
-

Re: [HACKERS] Faster methods for getting SPI results

2017-03-02 Thread Jon Nelson
On Thu, Mar 2, 2017 at 10:03 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 12/20/16 23:14, Jim Nasby wrote:
> > I've been looking at the performance of SPI calls within plpython.
> > There's a roughly 1.5x difference from equivalent python code just in
> > pulling data out of the SPI tuplestore. Some of that is due to an
> > inefficiency in how plpython is creating result dictionaries, but fixing
> > that is ultimately a dead-end: if you're dealing with a lot of results
> > in python, you want a tuple of arrays, not an array of tuples.
>
> There is nothing that requires us to materialize the results into an
> actual list of actual rows.  We could wrap the SPI_tuptable into a
> Python object and implement __getitem__ or __iter__ to emulate sequence
> or mapping access.
>

Python objects have a small (but non-zero) overhead in terms of both memory
and speed. A built-in dictionary is probably one of the least-expensive
(memory/cpu) choices, although how the dictionary is constructed also
impacts performance.  Another choice is a tuple.

Avoiding Py_BuildValue(...) in exchange for a bit more complexity (via
PyTuple_New(..) and PyTuple_SetItem(...)) is also a nice performance win in
my experience.
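
For the record, a minimal comparison of the two construction styles (sketch only, error handling trimmed, Python 3 C API; the function names are made up):

#include <Python.h>

/* Convenient, but Py_BuildValue re-parses the format string on every call. */
static PyObject *
row_with_buildvalue(long a, long b, long c)
{
	return Py_BuildValue("(lll)", a, b, c);
}

/* Slightly more code, but no format parsing; PyTuple_SetItem steals the
 * reference to each element, so no extra DECREFs are needed on success. */
static PyObject *
row_with_setitem(long a, long b, long c)
{
	PyObject *t = PyTuple_New(3);

	if (t == NULL)
		return NULL;
	PyTuple_SetItem(t, 0, PyLong_FromLong(a));
	PyTuple_SetItem(t, 1, PyLong_FromLong(b));
	PyTuple_SetItem(t, 2, PyLong_FromLong(c));
	return t;
}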

-- 
Jon


Re: [HACKERS] Faster methods for getting SPI results

2017-03-02 Thread Peter Eisentraut
On 12/20/16 23:14, Jim Nasby wrote:
> I've been looking at the performance of SPI calls within plpython. 
> There's a roughly 1.5x difference from equivalent python code just in 
> pulling data out of the SPI tuplestore. Some of that is due to an 
> inefficiency in how plpython is creating result dictionaries, but fixing 
> that is ultimately a dead-end: if you're dealing with a lot of results 
> in python, you want a tuple of arrays, not an array of tuples.

There is nothing that requires us to materialize the results into an
actual list of actual rows.  We could wrap the SPI_tuptable into a
Python object and implement __getitem__ or __iter__ to emulate sequence
or mapping access.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-02-28 Thread Jim Nasby

On 1/24/17 10:43 PM, Jim Nasby wrote:



I strongly suggest making this design effort a separate thread, and
focusing on the SPI improvements that give "free" no-user-action
performance boosts here.


Fair enough. I posted the SPI portion of that yesterday. That should be
useful for pl/R and possibly pl/perl. pl/tcl could make use of it, but
it would end up executing arbitrary tcl code in the middle of portal
execution, which doesn't strike me as a great idea. Unfortunately, I
don't think plpgsql could make much use of this for similar reasons.

I'll post a plpython patch that doesn't add the output format control.


I've attached the results of that. Unfortunately the speed improvement 
is only 27% at this point (with 999 tuples). Presumably that's 
because it's constructing a brand new dictionary from scratch for each 
tuple.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
>From 7ef3e944c1ee8266d70fafae080afc6beb492102 Mon Sep 17 00:00:00 2001
From: Jim Nasby 
Date: Wed, 25 Jan 2017 12:57:40 -0600
Subject: [PATCH 1/2] Add SPI_execute_callback() and callback-based
 DestReceiver.

Instead of placing results in a tuplestore, this method of execution
uses the supplied callback when creating the Portal for a query.
---
 src/backend/executor/spi.c | 76 --
 src/backend/tcop/dest.c| 11 +++
 src/include/executor/spi.h |  4 +++
 src/include/tcop/dest.h|  1 +
 4 files changed, 83 insertions(+), 9 deletions(-)

diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 55f97b14e6..d55e06509f 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, 
SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
  Snapshot snapshot, Snapshot 
crosscheck_snapshot,
- bool read_only, bool fire_triggers, uint64 
tcount);
+ bool read_only, bool fire_triggers, uint64 
tcount,
+ DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
Datum *Values, const char *Nulls);
@@ -320,7 +321,34 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
	res = _SPI_execute_plan(&plan, NULL,
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,
+   DestReceiver *callback)
+{
+   _SPI_plan   plan;
+   int res;
+
+   if (src == NULL || tcount < 0)
+   return SPI_ERROR_ARGUMENT;
+
+   res = _SPI_begin_call(true);
+   if (res < 0)
+   return res;
+
+   memset(&plan, 0, sizeof(_SPI_plan));
+   plan.magic = _SPI_PLAN_MAGIC;
+   plan.cursor_options = 0;
+
+   _SPI_prepare_oneshot_plan(src, &plan);
+
+   res = _SPI_execute_plan(&plan, NULL,
+   InvalidSnapshot, 
InvalidSnapshot,
+   read_only, true, 
tcount, callback);
 
_SPI_end_call(true);
return res;
@@ -354,7 +382,34 @@ SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const 
char *Nulls,

_SPI_convert_params(plan->nargs, plan->argtypes,

Values, Nulls),
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+
+/* Execute a previously prepared plan with a callback Destination */
+int
+SPI_execute_plan_callback(SPIPlanPtr plan, Datum *Values, const char *Nulls,
+bool read_only, long tcount, DestReceiver 
*callback)
+{
+   int res;
+
+   if (plan == NULL || plan->magic != _SPI_PLAN_MAGIC || tcount < 0)
+   return SPI_ERROR_ARGUMENT;
+
+   if (plan->nargs > 0 && Values == NULL)
+   return SPI_ERROR_PARAM;
+
+   res = _SPI_begin_call(true);
+   if (res < 0)
+   return res;
+
+   res = _SPI_execute_plan(plan,
+   

Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-02-23 Thread Jim Nasby

On 1/23/17 9:23 PM, Jim Nasby wrote:

I think the last step here is to figure out how to support switching
between the current behavior and the "columnar" behavior of a dict of lists.


I've thought more about this... instead of trying to switch from the 
current situation of 1 choice of how results are returned to 2 choices, I 
think it'd be better to just expose the API that the new Destination 
type provides to SPI. Specifically, execute a python function during 
Portal startup, and a different function for receiving tuples. There'd 
be an optional 3rd function for Portal shutdown.


The startup function would be handed details of the resultset it was 
about to receive, as a list that contained python tuples with the 
results of SPI_fname, _gettype, _gettypeid. This function would return a 
callback version number and a python object that would be kept in the 
DestReceiver.


The receiver function would get the object created by the startup 
function, as well as a python tuple of the TupleTableSlot that had gone 
through type conversion. It would need to add the value to the object 
from the startup function. It would return true or false, just like a 
Portal receiver function does.


The shutdown function would receive the object that's been passed 
around. It would be able to do any post-processing. Whatever it returned 
is what would be handed back to python as the results of the query.


The version number returned by the startup function allows for future 
improvements to this facility. One idea there is allowing the startup 
function to control how Datums get mapped into python objects.


In order to support all of this without breaking backwards compatibility 
or forking a new API, plpy.execute would accept a kwdict, to avoid 
conflicting with the arbitrary number of arguments that can currently be 
accepted. We'd look in the kwdict for a key called "portal_functions" 
pointing at a 2 or 3 element tuple of the startup, receive and shutdown 
functions. plpy would pre-define a tuple that provides the current 
behavior, and that's what would be used by default. In the future, we 
might add a way to control the default.
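
To make the proposal concrete, roughly what a caller might write (nothing below exists yet; the function names and the exact shape of result_desc are illustrative):

def my_startup(result_desc):
    # result_desc: per-column (name, type name, type oid) tuples
    names = [name for (name, _typname, _typoid) in result_desc]
    state = {"names": names, "columns": {n: [] for n in names}}
    return 1, state                         # (callback version, state object)

def my_receive(state, row):
    # row: python tuple of already-converted column values
    for name, value in zip(state["names"], row):
        state["columns"][name].append(value)
    return True                             # keep receiving, like a Portal receiver

def my_shutdown(state):
    return state["columns"]                 # becomes the result of plpy.execute()

rv = plpy.execute("SELECT a, b FROM some_table",
                  portal_functions=(my_startup, my_receive, my_shutdown))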


Comments?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-01-24 Thread Jim Nasby

On 1/23/17 10:36 PM, Craig Ringer wrote:

which is currently returned as

[ {"a":1, "b":10}, {"a":2, "b":20} ]

instead as

{ "a": [1, 2], "b": [10, 20] }


Correct.


If so I see that as a lot more of a niche thing. I can see why it'd be
useful and would help performance, but it seems much more disruptive.
It requires users to discover it exists, actively adopt a different
style of ingesting data, etc. For a 10%-ish gain in a PL.


In data science, what we're doing now is actually the niche. All real 
analytics happens with something like a Pandas DataFrame, which is 
organized as a dict of lists.


This isn't just idle nomenclature either: organizing results in what 
amounts to a column store provides a significant speed improvement for 
most analytics, because you're working on an array of contiguous memory 
(at least, when you're using more advanced types like DataFrames and 
Series).
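
For example (illustrative only), the columnar shape drops straight into those structures, while today's shape has to be transposed row by row first:

import pandas as pd

columnar = {"a": [1, 2], "b": [10, 20]}          # proposed dict-of-lists shape
df = pd.DataFrame(columnar)                      # columns map onto contiguous arrays

rows = [{"a": 1, "b": 10}, {"a": 2, "b": 20}]    # current list-of-dicts shape
df2 = pd.DataFrame.from_records(rows)            # has to walk every per-row dict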



I strongly suggest making this design effort a separate thread, and
focusing on the SPI improvements that give "free" no-user-action
performance boosts here.


Fair enough. I posted the SPI portion of that yesterday. That should be 
useful for pl/R and possibly pl/perl. pl/tcl could make use of it, but 
it would end up executing arbitrary tcl code in the middle of portal 
execution, which doesn't strike me as a great idea. Unfortunately, I 
don't think plpgsql could make much use of this for similar reasons.


I'll post a plpython patch that doesn't add the output format control.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-01-23 Thread Craig Ringer
On 24 January 2017 at 11:23, Jim Nasby  wrote:

> I finally got all the kinks worked out and did some testing with python 3.
> Performance for my test [1] improved ~460% when returning a dict of lists
> (as opposed to the current list of dicts). Based on previous testing, I
> expect that using this method to return a list of dicts will be about 8%
> slower. The inconsistency in results on 2.7 has to do with how python 2
> handles ints.

Impressive results.

> I think the last step here is to figure out how to support switching between
> the current behavior and the "columnar" behavior of a dict of lists.

That sounds like it'd be much better approached as a separate, later patch.

If I understand you correctly, you propose to return the resultset

a   b
1  10
2  20

which is currently returned as

[ {"a":1, "b":10}, {"a":2, "b":20} ]

instead as

{ "a": [1, 2], "b": [10, 20] }

?

If so I see that as a lot more of a niche thing. I can see why it'd be
useful and would help performance, but it seems much more disruptive.
It requires users to discover it exists, actively adopt a different
style of ingesting data, etc. For a 10%-ish gain in a PL.

I strongly suggest making this design effort a separate thread, and
focusing on the SPI improvements that give "free" no-user-action
performance boosts here.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results (460% improvement)

2017-01-23 Thread Jim Nasby

On 1/5/17 9:50 PM, Jim Nasby wrote:

The * on that is there's something odd going on where plpython starts
out really fast at this, then gets 100% slower. I've reached out to some
python folks about that. Even so, the overall results from a quick test
on my laptop are (IMHO) impressive:

                Old Code      New Code   Improvement
Pure SQL        2 sec         2 sec
plpython        12.7-14 sec   4-10 sec   ~1.3-3x
plpython - SQL  10.7-12 sec   2-8 sec    ~1.3-6x

Pure SQL is how long an equivalent query takes to run with just SQL.
plpython - SQL is simply the raw python times minus the pure SQL time.


I finally got all the kinks worked out and did some testing with python 
3. Performance for my test [1] improved ~460% when returning a dict of 
lists (as opposed to the current list of dicts). Based on previous 
testing, I expect that using this method to return a list of dicts will 
be about 8% slower. The inconsistency in results on 2.7 has to do with 
how python 2 handles ints.


Someone who's familiar with pl/perl should take a look at this and see 
if it would apply there. I've attached the SPI portion of this patch.


I think the last step here is to figure out how to support switching 
between the current behavior and the "columnar" behavior of a dict of 
lists. I believe the best way to do that is to add two optional 
arguments to the execution functions: container=[] and members={}, and 
then copy those to produce the output objects. That means you can get 
the new behavior by doing something like:


plpy.execute('...', container={}, members=[])

Or, more interesting, you could do:

plpy.execute('...', container=Pandas.DataFrame, members=Pandas.Series)

since that's what a lot of people are going to want anyway.
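
As a rough model of how the copy-the-prototypes idea could behave (purely conceptual, not an implementation; copy.copy stands in for whatever plpy would actually do, and the defaults mirror the container=[]/members={} proposal above):

import copy

def build_result(colnames, rows, container=[], members={}):
    out = copy.copy(container)
    if isinstance(out, dict):
        # columnar: container={}, members=[] -> dict of per-column lists
        for name in colnames:
            out[name] = copy.copy(members)
        for row in rows:
            for name, value in zip(colnames, row):
                out[name].append(value)
    else:
        # current behavior: container=[], members={} -> list of per-row dicts
        for row in rows:
            d = copy.copy(members)
            d.update(zip(colnames, row))
            out.append(d)
    return out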

In the future we could also add a GUC to change the default behavior.

Any concerns with that approach?

1:

d = plpy.execute('SELECT s AS some_table_id, s AS some_field_name, s AS 
some_other_field_name FROM generate_series(1,{}) s'.format(iter) )
return len(d['some_table_id'])

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 7bd37283b7..97585d272e 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, 
SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
  Snapshot snapshot, Snapshot 
crosscheck_snapshot,
- bool read_only, bool fire_triggers, uint64 
tcount);
+ bool read_only, bool fire_triggers, uint64 
tcount,
+ DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
Datum *Values, const char *Nulls);
@@ -320,7 +321,34 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
	res = _SPI_execute_plan(&plan, NULL,
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,
+   DestReceiver *callback)
+{
+   _SPI_plan   plan;
+   int res;
+
+   if (src == NULL || tcount < 0)
+   return SPI_ERROR_ARGUMENT;
+
+   res = _SPI_begin_call(true);
+   if (res < 0)
+   return res;
+
+   memset(&plan, 0, sizeof(_SPI_plan));
+   plan.magic = _SPI_PLAN_MAGIC;
+   plan.cursor_options = 0;
+
+   _SPI_prepare_oneshot_plan(src, &plan);
+
+   res = _SPI_execute_plan(&plan, NULL,
+   InvalidSnapshot, 
InvalidSnapshot,
+   read_only, true, 
tcount, callback);
 
_SPI_end_call(true);
return res;
@@ -354,7 +382,34 @@ SPI_execute_plan(SPIPlanPtr plan, Datum *Values, const 
char *Nulls,

_SPI_convert_params(plan->nargs, plan->argtypes,

Values, Nulls),
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+
+/* Execute a previously prepared plan with a callback Destination */
+int

Re: [HACKERS] Faster methods for getting SPI results

2017-01-05 Thread Jim Nasby

On 12/28/16 3:14 AM, Craig Ringer wrote:

On 28 December 2016 at 12:32, Jim Nasby  wrote:

On 12/27/16 9:10 PM, Craig Ringer wrote:


On 28 December 2016 at 09:58, Jim Nasby  wrote:


I've looked at this some more, and ITSM that the only way to do this
without
some major surgery is to create a new type of Destination specifically
for
SPI that allows for the execution of an arbitrary C function for each
tuple
to be sent.



That sounds a lot more sensible than the prior proposals. Callback driven.



Are there other places this would be useful? I'm reluctant to write all of
this just to discover it doesn't help performance at all, but if it's useful
on its own I can just submit it as a stand-alone patch.


I don't have a use for it personally. In BDR and pglogical anything
that does work with nontrivial numbers of tuples uses lower level
scans anyway.

I expect anything that uses the SPI to run arbitrary user queries
could have a use for something like this though. Any PL, for one.


Just a quick update on this: I've gotten this working well enough in 
plpython to do some performance testing. This patch does change python 
results from being a list of dicts to a dict of lists, but I suspect the 
vast majority of the speed improvement is from not creating a tuplestore.


The attached sample (from OS X /usr/bin/sample) is interesting. The 
highlight is:



!   3398 SPI_execute_callback  (in postgres) + 163  [0x110125793]
! 3394 _SPI_execute_plan  (in postgres) + 1262  [0x1101253fe]
! : 2043 standard_ExecutorRun  (in postgres) + 288  [0x1100f9a40]
! : | 1990 ExecProcNode  (in postgres) + 250  [0x1100fd62a]


The top line is the entry into SPI from plpython. The bottom line is 
generate_series into a tuplestore and then reading from that tuplestore. 
Almost all the time being spent in standard_ExecutorRun is in 
PLy_CSreceive, which is appending values to a set of python lists as 
it's getting tuples.


The other noteworthy item in the sample is this:


535 list_dealloc  (in Python) + 116,103,...  [0x11982b1b4,0x11982b1a7,...]


that's how long it's taking python to free the 3 lists (each with 
999 python int objects).


In short (and at best*), this makes plpython just as fast at processing 
results as SELECT count(SELECT s, s, s FROM generate_series()).


The * on that is there's something odd going on where plpython starts 
out really fast at this, then gets 100% slower. I've reached out to some 
python folks about that. Even so, the overall results from a quick test 
on my laptop are (IMHO) impressive:


                Old Code      New Code   Improvement
Pure SQL        2 sec         2 sec
plpython        12.7-14 sec   4-10 sec   ~1.3-3x
plpython - SQL  10.7-12 sec   2-8 sec    ~1.3-6x

Pure SQL is how long an equivalent query takes to run with just SQL.
plpython - SQL is simply the raw python times minus the pure SQL time.

I suspect other PL languages that have fairly fast object alloc and 
dealloc would see a similar benefit.


BTW, the patch currently breaks on nested calls to plpython, but I don't 
think that should change the performance much.


The test function:

CREATE OR REPLACE FUNCTION test_series(
iter int
) RETURNS int LANGUAGE plpythonu AS $body$
d = plpy.execute('SELECT s AS some_table_id, s AS some_field_name, s AS 
some_other_field_name FROM generate_series(1,{}) s'.format(iter) )
return len(d['some_table_id'])
$body$;

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 80fc4c4725..ace30d75f0 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -55,7 +55,8 @@ static void _SPI_prepare_oneshot_plan(const char *src, 
SPIPlanPtr plan);
 
 static int _SPI_execute_plan(SPIPlanPtr plan, ParamListInfo paramLI,
  Snapshot snapshot, Snapshot 
crosscheck_snapshot,
- bool read_only, bool fire_triggers, uint64 
tcount);
+ bool read_only, bool fire_triggers, uint64 
tcount,
+ DestReceiver *callback);
 
 static ParamListInfo _SPI_convert_params(int nargs, Oid *argtypes,
Datum *Values, const char *Nulls);
@@ -320,7 +321,34 @@ SPI_execute(const char *src, bool read_only, long tcount)
 
	res = _SPI_execute_plan(&plan, NULL,
InvalidSnapshot, 
InvalidSnapshot,
-   read_only, true, 
tcount);
+   read_only, true, 
tcount, NULL);
+
+   _SPI_end_call(true);
+   return res;
+}
+int
+SPI_execute_callback(const char *src, bool read_only, long tcount,

Re: [HACKERS] Faster methods for getting SPI results

2016-12-29 Thread Jim Nasby

On 12/27/16 9:10 PM, Craig Ringer wrote:

On 28 December 2016 at 09:58, Jim Nasby  wrote:


I've looked at this some more, and ITSM that the only way to do this without
some major surgery is to create a new type of Destination specifically for
SPI that allows for the execution of an arbitrary C function for each tuple
to be sent.


That sounds a lot more sensible than the prior proposals. Callback driven.


Here's what I've got right now. I haven't bothered with 
SPI_execute_callback() yet, and there are some missing sanity checks.


I'm not sure if the changes to CreateDestReceiver() are warranted or 
necessary, though it would at least give you sane defaults. My 
incomplete code that would make use of this currently does


CallbackState   callback;

	memcpy(&callback.pub, CreateDestReceiver(DestSPICallback), sizeof(DestReceiver));

callback.pub.receiveSlot = PLy_CSreceive;
callback.pub.rStartup = PLy_CSStartup;
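
For anyone following along, a rough sketch of the kind of receiveSlot hook this enables (illustrative only; the CallbackState fields and the datum_to_object() converter are hypothetical, not the code in the attached patch):

static bool
PLy_CSreceive_sketch(TupleTableSlot *slot, DestReceiver *self)
{
	CallbackState *myState = (CallbackState *) self;
	int			i;

	slot_getallattrs(slot);		/* force all attribute datums/nulls */

	for (i = 0; i < slot->tts_tupleDescriptor->natts; i++)
	{
		/* datum_to_object() is a stand-in for the real datum-to-Python conversion */
		PyObject   *val = datum_to_object(myState, slot->tts_values[i],
										  slot->tts_isnull[i], i);

		PyList_Append(myState->column_lists[i], val);	/* hypothetical per-column lists */
		Py_XDECREF(val);		/* PyList_Append keeps its own reference */
	}

	return true;				/* keep receiving tuples */
}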
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Faster methods for getting SPI results

2016-12-28 Thread Craig Ringer
On 28 December 2016 at 12:32, Jim Nasby  wrote:
> On 12/27/16 9:10 PM, Craig Ringer wrote:
>>
>> On 28 December 2016 at 09:58, Jim Nasby  wrote:
>>
>>> I've looked at this some more, and ITSM that the only way to do this
>>> without
>>> some major surgery is to create a new type of Destination specifically
>>> for
>>> SPI that allows for the execution of an arbitrary C function for each
>>> tuple
>>> to be sent.
>>
>>
>> That sounds a lot more sensible than the prior proposals. Callback driven.
>
>
> Are there other places this would be useful? I'm reluctant to write all of
> this just to discover it doesn't help performance at all, but if it's useful
> on its own I can just submit it as a stand-alone patch.

I don't have a use for it personally. In BDR and pglogical anything
that does work with nontrivial numbers of tuples uses lower level
scans anyway.

I expect anything that uses the SPI to run arbitrary user queries
could have a use for something like this though. Any PL, for one.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Faster methods for getting SPI results

2016-12-27 Thread Jim Nasby

On 12/27/16 9:10 PM, Craig Ringer wrote:

On 28 December 2016 at 09:58, Jim Nasby  wrote:


I've looked at this some more, and ITSM that the only way to do this without
some major surgery is to create a new type of Destination specifically for
SPI that allows for the execution of an arbitrary C function for each tuple
to be sent.


That sounds a lot more sensible than the prior proposals. Callback driven.


Are there other places this would be useful? I'm reluctant to write all 
of this just to discover it doesn't help performance at all, but if it's 
useful on its own I can just submit it as a stand-alone patch.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Faster methods for getting SPI results

2016-12-27 Thread Craig Ringer
On 28 December 2016 at 09:58, Jim Nasby  wrote:

> I've looked at this some more, and ITSM that the only way to do this without
> some major surgery is to create a new type of Destination specifically for
> SPI that allows for the execution of an arbitrary C function for each tuple
> to be sent.

That sounds a lot more sensible than the prior proposals. Callback driven.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Faster methods for getting SPI results

2016-12-27 Thread Jim Nasby

On 12/21/16 8:21 AM, Jim Nasby wrote:

On 12/20/16 10:14 PM, Jim Nasby wrote:

It would be a lot more efficient if we could just grab datums from the
executor and make a single copy into plpython (or R), letting the PL
deal with all the memory management overhead.

I briefly looked at using SPI cursors to do just that, but that looks
even worse: every fetch is executed in a subtransaction, and every fetch
creates an entire tuplestore even if it's just going to return a single
value. (But hey, we never claimed cursors were fast...)

Is there any way to avoid all of this? I'm guessing one issue might be
that we don't want to call an external interpreter while potentially
holding page pins, but even then couldn't we just copy a single tuple at
a time and save a huge amount of palloc overhead?


AFAICT that's exactly how DestRemote works: it grabs a raw slot from the
executor, makes sure it's fully expanded, and sends it on its way via
pq_send*(). So presumably the same could be done for SPI, by creating a
new CommandDest (ISTM none of the existing ones would do what we want).

I'm not sure what the API for this should look like. One possibility is
to have SPI_execute and friends accept a flag that indicates not to
build a tupletable. I don't think a query needs to be read-only to allow
for no tuplestore, so overloading read_only seems like a bad idea.

Another option is to treat this as a "lightweight cursor" that only
allows forward fetches. One nice thing about that option is it leaves
open the possibility of using a small tuplestore for each "fetch",
without all the overhead that a full blown cursor has. This assumes
there are some use cases where you want to operate on relatively small
sets of tuples at a time, but you don't need to materialize the whole
thing in one shot.


I've looked at this some more, and ITSM that the only way to do this 
without some major surgery is to create a new type of Destination 
specifically for SPI that allows for the execution of an arbitrary C 
function for each tuple to be sent. AFAICT this should be fairly safe, 
since DestRemote can potentially block while sending a tuple and also 
runs output functions (which presumably could themselves generate errors).


_SPI_execute_plan() would need to accept an arbitrary DestReceiver 
struct, and use that (if specified) instead of creating its own.
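
In other words (a sketch of the idea only, not the actual patch; DestSPI is the existing tuplestore-backed receiver):

	if (callback != NULL)
		dest = callback;					/* caller-supplied DestReceiver */
	else
		dest = CreateDestReceiver(DestSPI);	/* current tuplestore behavior */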


Once that's done, my plan is to allow plpy to use this functionality 
with a receiver function that adds tuple fields to corresponding python 
lists. This should result in significantly less overhead than going 
through a tuplestore when dealing with a large number of rows.


Before I go code this up, I'd like to know if there's some fatal flaw in 
this, or if there's an easier way to hack this up just to test my 
performance theory.


Suggestions?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)




Re: [HACKERS] Faster methods for getting SPI results

2016-12-21 Thread Jim Nasby

On 12/20/16 10:14 PM, Jim Nasby wrote:

It would be a lot more efficient if we could just grab datums from the
executor and make a single copy into plpython (or R), letting the PL
deal with all the memory management overhead.

I briefly looked at using SPI cursors to do just that, but that looks
even worse: every fetch is executed in a subtransaction, and every fetch
creates an entire tuplestore even if it's just going to return a single
value. (But hey, we never claimed cursors were fast...)

Is there any way to avoid all of this? I'm guessing one issue might be
that we don't want to call an external interpreter while potentially
holding page pins, but even then couldn't we just copy a single tuple at
a time and save a huge amount of palloc overhead?


AFAICT that's exactly how DestRemote works: it grabs a raw slot from the 
executor, makes sure it's fully expanded, and sends it on its way via 
pq_send*(). So presumably the same could be done for SPI, by creating a 
new CommandDest (ISTM none of the existing ones would do what we want).


I'm not sure what the API for this should look like. One possibility is 
to have SPI_execute and friends accept a flag that indicates not to 
build a tupletable. I don't think a query needs to be read-only to allow 
for no tuplestore, so overloading read_only seems like a bad idea.


Another option is to treat this as a "lightweight cursor" that only 
allows forward fetches. One nice thing about that option is it leaves 
open the possibility of using a small tuplestore for each "fetch", 
without all the overhead that a full blown cursor has. This assumes 
there are some use cases where you want to operate on relatively small 
sets of tuples at a time, but you don't need to materialize the whole 
thing in one shot.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)

