Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-15 Thread Andres Freund
On 2017-09-15 15:39:49 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
> >> FWIW, I'm not on board with that.  I think the version of typedefs.list
> >> in the tree should reflect the last official pgindent run.
> 
> > Why? I see pretty much no upside to that. You can't reindent anyway, due
> > to unindented changes. You can get the used typedefs.list trivially from
> > git.
> 
> Perhaps, but the real problem is still this:
> 
> >> There's also a problem that it only works well if *every* committer
> >> faithfully updates typedefs.list, which isn't going to happen.
> 
> We can't even get everybody to pgindent patches before commit, let alone
> update typedefs.list.

Well, that's partially because right now it's really painful to do, and
we've not tried to push people to do it.  You essentially have to:
1) Pull down a new typedefs.list (how many people know where from?)
2) Add new typedefs that have been added in the commit-to-be
3) Run pgindent only on the changed files, because there's bound to be
   thousands of unrelated reindents
4) Revert reindents in changed files that are unrelated to the commit.

1) is undocumented, 2) is painful (add an option to generate the list
automatically?), 3) is painful (add a command-line tool?), and 4) is
painful.  So it's not particularly surprising that many don't bother.


> >> For local pgindent'ing, I pull down
> >> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl
> 
> > That's a mighty manual process - I want to be able to reindent files,
> > especially new ones where it's still reasonably possible, without having
> > to download files, then move changes out of the way, so I can rebase,
> 
> Well, that just shows you don't know how to use it.  You can tell pgindent
> to use an out-of-tree copy of typedefs.list.  I have the curl fetch and
> using the out-of-tree list all nicely scripted ;-)

Not sure how that invalidates my statement. If you have to script it
locally, and still have to add typedefs manually, that's still plenty of
stuff every committer (and better yet, every contributor!) has to learn.


> There might be something to be said for removing the typedefs list
> from git altogether, and adjusting the standard wrapper script to pull
> it from the buildfarm into a .gitignore'd location if there's not a
> copy there already.

I wonder if we could add a command that pulls down an up-to-date list *and*
regenerates a list for the local tree with the local settings, and then
runs pgindent with the combined list - in most cases that'd result in a
properly indented tree. The number of commits with platform-specific
changes that the author/committer doesn't compile/run isn't that high.

Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-15 Thread Tom Lane
Andres Freund  writes:
> On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
>> FWIW, I'm not on board with that.  I think the version of typedefs.list
>> in the tree should reflect the last official pgindent run.

> Why? I see pretty much no upside to that. You can't reindent anyway, due
> to unindented changes. You can get the used typedefs.list trivially from
> git.

Perhaps, but the real problem is still this:

>> There's also a problem that it only works well if *every* committer
>> faithfully updates typedefs.list, which isn't going to happen.

We can't even get everybody to pgindent patches before commit, let alone
update typedefs.list.  So sooner or later your process is going to need
to involve getting a current list from the buildfarm.

>> For local pgindent'ing, I pull down
>> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl

> That's a mighty manual process - I want to be able to reindent files,
> especially new ones where it's still reasonably possible, without having
> to download files, then move changes out of the way, so I can rebase,

Well, that just shows you don't know how to use it.  You can tell pgindent
to use an out-of-tree copy of typedefs.list.  I have the curl fetch and
using the out-of-tree list all nicely scripted ;-)

There might be something to be said for removing the typedefs list from
git altogether, and adjusting the standard wrapper script to pull it from
the buildfarm into a .gitignore'd location if there's not a copy there
already.

regards, tom lane




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-15 Thread Andres Freund
On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
> Thomas Munro  writes:
> > On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund  wrote:
> >> - added typedefs to typedefs.list
> 
> > Should I do this manually with future patches?

I think there's sort of a circuit split on that one: Robert and I do it
regularly, most others don't.


> FWIW, I'm not on board with that.  I think the version of typedefs.list
> in the tree should reflect the last official pgindent run.

Why? I see pretty much no upside to that. You can't reindent anyway, due
to unindented changes. You can get the used typedefs.list trivially from
git.


> There's also a problem that it only works well if *every* committer
> faithfully updates typedefs.list, which isn't going to happen.
> 
> For local pgindent'ing, I pull down
> 
> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl
> 
> and then add any typedefs created by the patch I'm working on to that.
> But I don't put the result into the commit.  Maybe we need a bit better
> documentation and/or tool support for using an unofficial typedef list.

That's a mighty manual process - I want to be able to reindent files,
especially new ones where it's still reasonably possible, without having
to download files, then move changes out of the way, so I can rebase,
...

Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-14 Thread Tom Lane
Thomas Munro  writes:
> On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund  wrote:
>> - added typedefs to typedefs.list

> Should I do this manually with future patches?

FWIW, I'm not on board with that.  I think the version of typedefs.list
in the tree should reflect the last official pgindent run.  There's also
a problem that it only works well if *every* committer faithfully updates
typedefs.list, which isn't going to happen.

For local pgindent'ing, I pull down

https://buildfarm.postgresql.org/cgi-bin/typedefs.pl

and then add any typedefs created by the patch I'm working on to that.
But I don't put the result into the commit.  Maybe we need a bit better
documentation and/or tool support for using an unofficial typedef list.

regards, tom lane




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-14 Thread Thomas Munro
On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund  wrote:
> On 2017-09-04 18:14:39 +1200, Thomas Munro wrote:
>> Thanks for the review and commits so far.  Here's a rebased, debugged
>> and pgindented version of the remaining patches.
>
> I've pushed this with minor modifications:

Thank you!

> - added typedefs to typedefs.list

Should I do this manually with future patches?

> - re-pgindented, there were some missing reindents in headers
> - added a very brief intro to session.c, moved some content repeated
>   in various places to the header - some of it was bound to become
>   out-of-date due to future uses of the facility.
> - moved NULL setting in the detach hook to directly after the respective
>   resource deallocation, for the unlikely case of it being re-invoked
>   due to an error in a later dealloc function
>
> Two remarks:
> - I'm not sure I like the order in which things are added to the typemod
>   hashes, I wonder if some more careful organization could get rid of
>   the races. Doesn't seem critical, but would be a bit nicer.

I will have a think about whether I can improve that.  In an earlier
version I did things in a different order and had different problems.
The main hazard to worry about here is that you can't let any typmod
number escape into shmem where it might be read by others (for example
a concurrent session that wants a typmod for a TupleDesc that happens
to match) until the typmod number is resolvable back to a TupleDesc
(meaning you can look it up in shared_typmod_table).  Not
wasting/leaking memory in various failure cases is a secondary (but
obviously important) concern.

> - I'm not yet quite happy with the Session facility. I think it'd be
>   nicer if we'd a cleaner split between the shared memory notion of a
>   session and the local memory version of it. The shared memory version
>   would live in a ~max_connections sized array, referenced from
>   PGPROC. In a lot of cases it'd completely obsolete the need for a
>   shm_toc, because you could just store handles etc in there.  The local
>   memory version then would just store local pointers etc into that.
>
>   But I think we can get there incrementally.

+1 to all of the above.  I fully expect this to get changed around quite a lot.

I'll keep an eye out for problem reports.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-14 Thread Andres Freund
Hi,

On 2017-09-04 18:14:39 +1200, Thomas Munro wrote:
> Thanks for the review and commits so far.  Here's a rebased, debugged
> and pgindented version of the remaining patches.

I've pushed this with minor modifications:
- added typedefs to typedefs.list
- re-pgindented, there were some missing reindents in headers
- added a very brief intro to session.c, moved some content repeated
  in various places to the header - some of it was bound to become
  out-of-date due to future uses of the facility.
- moved NULL setting in the detach hook to directly after the respective
  resource deallocation, for the unlikely case of it being re-invoked
  due to an error in a later dealloc function


Two remarks:
- I'm not sure I like the order in which things are added to the typemod
  hashes, I wonder if some more careful organization could get rid of
  the races. Doesn't seem critical, but would be a bit nicer.

- I'm not yet quite happy with the Session facility. I think it'd be
  nicer if we'd a cleaner split between the shared memory notion of a
  session and the local memory version of it. The shared memory version
  would live in a ~max_connections sized array, referenced from
  PGPROC. In a lot of cases it'd completely obsolete the need for a
  shm_toc, because you could just store handles etc in there.  The local
  memory version then would just store local pointers etc into that.

  But I think we can get there incrementally.

It's very nice to push commits that have stats like
 6 files changed, 27 insertions(+), 1110 deletions(-)
even if it essentially has been paid forward by a lot of previous work
;)

Thanks for the work on this!

Regards,

Andres




Re: [HACKERS] POC: Sharing record typmods between backends

2017-09-03 Thread Thomas Munro
Thanks for the review and commits so far.  Here's a rebased, debugged
and pgindented version of the remaining patches.  I ran pgindent with
--list-of-typedefs="SharedRecordTableKey,SharedRecordTableEntry,SharedTypmodTableEntry,SharedRecordTypmodRegistry,Session"
to fix some weirdness around these new typenames.

While rebasing the 0002 patch (removal of tqueue.c's remapping logic),
I modified the interface of the newly added
ExecParallelCreateReaders() function from commit 51daa7bd because it
no longer has any reason to take a TupleDesc.

On Fri, Aug 25, 2017 at 1:46 PM, Andres Freund  wrote:
> On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
>> 2.  Andres didn't like what I did to DecrTupleDescRefCount, namely
>> allowing it to run when there is no ResourceOwner.  I now see that this
>> is probably an indication of a different problem; even if there were a
>> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
>> which a worker would reset before being reused), it wouldn't be the
>> one that was active when the TupleDesc was created.  I think I have
>> failed to understand the contracts here and will think/read about it
>> some more.
>
> Maybe I'm missing something, but isn't the issue here that using
> DecrTupleDescRefCount() simply is wrong, because we're not actually
> necessarily tracking the TupleDesc via the resowner mechanism?

Yeah.  Thanks.

> If you look at the code, in the case it's a previously unknown tupledesc
> it's registered with:
>
> entDesc = CreateTupleDescCopy(tupDesc);
> ...
> /* mark it as a reference-counted tupdesc */
> entDesc->tdrefcount = 1;
> ...
> RecordCacheArray[newtypmod] = entDesc;
> ...
>
> Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
> ResourceOwnerRememberTupleDesc() managing the reference from the
> array. Nor was there one before.
>
> We have other code managing TupleDesc lifetimes similarly, and look at
> how they're freeing it:
> /* Delete tupdesc if we have it */
> if (typentry->tupDesc != NULL)
> {
>     /*
>      * Release our refcount, and free the tupdesc if none remain.
>      * (Can't use DecrTupleDescRefCount because this reference is not
>      * logged in current resource owner.)
>      */
>     Assert(typentry->tupDesc->tdrefcount > 0);
>     if (--typentry->tupDesc->tdrefcount == 0)
>         FreeTupleDesc(typentry->tupDesc);
>     typentry->tupDesc = NULL;
> }

Right.  I have changed shared_record_typmod_registry_worker_detach()
to be more like that, with an explanation.

> This also made me think about how we're managing the lookup from the
> shared array:
>
> /*
>  * Our local array can now point directly to the TupleDesc
>  * in shared memory.
>  */
> RecordCacheArray[typmod] = tupdesc;
>
> Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
> which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
> that case manipulate shared memory in a concurrency unsafe manner.

No.  See this change, in that and similar code paths:

-   IncrTupleDescRefCount(tupDesc);
+   PinTupleDesc(tupDesc);

The difference between IncrTupleDescRefCount() and PinTupleDesc() is
that the latter recognises non-refcounted tuple descriptors
(tdrefcount == -1) and does nothing.  Shared tuple descriptors are not
reference counted (see TupleDescCopy() which initialises
dst->tdrefcount to -1).  It was for foolish symmetry that I was trying
to use ReleaseTupleDesc() in shared_record_typmod_registry_detach()
before, since it also knows about non-refcounted tuple descriptors,
but that's not appropriate: it calls DecrTupleDescRefCount() which
assumes that we're using resource owners.  We're not.
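
For reference, the pin/release pair amounts to roughly the following (a
sketch of the convention described here, not a quote of tupdesc.h):

    /* Only refcounted descriptors (tdrefcount >= 0) are touched. */
    #define PinTupleDesc(tupdesc) \
        do { \
            if ((tupdesc)->tdrefcount >= 0) \
                IncrTupleDescRefCount(tupdesc); \
        } while (0)

    #define ReleaseTupleDesc(tupdesc) \
        do { \
            if ((tupdesc)->tdrefcount >= 0) \
                DecrTupleDescRefCount(tupdesc);  /* resowner-based */ \
        } while (0)

so shared, non-refcounted descriptors (tdrefcount == -1) fall straight
through both macros.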

To summarise the object lifetime management situation created by this
patch: shared TupleDesc objects accumulate in per-session DSM memory
until eventually the session ends and the DSM memory goes away.  A bit
like CacheMemoryContext: there is no retail cleanup of shared
TupleDesc objects.  BUT: the DSM detach callback is used to clear out
backend-local pointers to that stuff (and any non-shared reference
counted TupleDesc objects that might be found), in anticipation of
being able to reuse a worker process one day (which will involve
attaching to a new session, so we mustn't retain any traces of the
previous session in our local state).  Maybe I'm trying to be a little
too clairvoyant there...
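
To make that concrete, the detach hook's job is roughly the following
(only a sketch of the idea; the names below come from this thread, but
the exact code is an assumption rather than the committed version):

    static void
    shared_record_typmod_registry_worker_detach(dsm_segment *segment, Datum arg)
    {
        int32       typmod;

        /*
         * Release any local, refcounted descriptors by hand: these
         * references were never logged with a resource owner, so we
         * can't use DecrTupleDescRefCount() here.
         */
        for (typmod = 0; typmod < NextRecordTypmod; typmod++)
        {
            TupleDesc   tupdesc = RecordCacheArray[typmod];

            if (tupdesc != NULL && tupdesc->tdrefcount > 0 &&
                --tupdesc->tdrefcount == 0)
                FreeTupleDesc(tupdesc);
        }

        /* Forget all backend-local pointers into the session's memory. */
        pfree(RecordCacheArray);
        RecordCacheArray = NULL;
        /* ... likewise for RecordCacheHash, NextRecordTypmod and the
         * CurrentSession pointers ... */
    }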

I improved the cleanup code: now it frees RecordCacheArray and
RecordCacheHash and reinstalls NULL pointers.  Also it deals with
errors in GetSes

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-24 Thread Andres Freund
On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> 2.  Andres didn't like what I did to DecrTupleDescRefCount, namely
> allowing it to run when there is no ResourceOwner.  I now see that this
> is probably an indication of a different problem; even if there were a
> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
> which a worker would reset before being reused), it wouldn't be the
> one that was active when the TupleDesc was created.  I think I have
> failed to understand the contracts here and will think/read about it
> some more.

Maybe I'm missing something, but isn't the issue here that using
DecrTupleDescRefCount() simply is wrong, because we're not actually
necessarily tracking the TupleDesc via the resowner mechanism?

If you look at the code, in the case it's a previously unknown tupledesc
it's registered with:

entDesc = CreateTupleDescCopy(tupDesc);
...
/* mark it as a reference-counted tupdesc */
entDesc->tdrefcount = 1;
...
RecordCacheArray[newtypmod] = entDesc;
...

Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
ResourceOwnerRememberTupleDesc() managing the reference from the
array. Nor was there one before.

We have other code managing TupleDesc lifetimes similarly, and look at
how they're freeing it:
/* Delete tupdesc if we have it */
if (typentry->tupDesc != NULL)
{
    /*
     * Release our refcount, and free the tupdesc if none remain.
     * (Can't use DecrTupleDescRefCount because this reference is not
     * logged in current resource owner.)
     */
    Assert(typentry->tupDesc->tdrefcount > 0);
    if (--typentry->tupDesc->tdrefcount == 0)
        FreeTupleDesc(typentry->tupDesc);
    typentry->tupDesc = NULL;
}




This also made me think about how we're managing the lookup from the
shared array:

/*
 * Our local array can now point directly to the TupleDesc
 * in shared memory.
 */
RecordCacheArray[typmod] = tupdesc;

Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
that case manipulate shared memory in a concurrency unsafe manner.

Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-24 Thread Thomas Munro
On Wed, Aug 23, 2017 at 11:58 PM, Thomas Munro wrote:
> On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund  wrote:
>> Notes for possible followup commits of the dshash API:
>> - nontrivial portions of dshash are essentially critical sections lest
>>   dynamic shared memory be leaked. Should we, short term, introduce
>>   actual critical section markers to make that more obvious? Should we,
>>   longer term, make this more failsafe / easier to use, by
>>   extending/emulating memory contexts for dsa memory?
>
> Hmm.  I will look into this.

Yeah, dshash_create() leaks the control object if the later allocation
of the initial hash table array raises an error.  I think that should
be fixed -- please see 0001 in the new patch set attached.

The other two places where shared memory is allocated are resize() and
insert_into_bucket(), and both of those seem exception-safe to me: if
dsa_allocate() elogs then nothing is changed, and the code after that
point is no-throw.  Am I missing something?
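
For illustration, the shape of such a fix could be something like this
(a sketch only, not necessarily what 0001 actually does; the member and
variable names here are assumptions):

    dsa_pointer control_dp;
    dshash_table_control *control;

    control_dp = dsa_allocate(area, sizeof(dshash_table_control));
    control = dsa_get_address(area, control_dp);
    /* ... initialize *control ... */

    PG_TRY();
    {
        /*
         * This is the allocation that can throw and strand the control
         * object in DSA memory.
         */
        control->buckets = dsa_allocate(area, initial_bucket_array_size);
    }
    PG_CATCH();
    {
        dsa_free(area, control_dp);     /* don't leak the control object */
        PG_RE_THROW();
    }
    PG_END_TRY();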

>> - SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
>>   which calls EnsureCurrentSession(), but
>>   SharedRecordTypmodRegistryInit() does so again - sprinkling those
>>   around liberally seems like it could hide bugs.
>
> Yeah.  Will look into this.

One idea is to run InitializeSession() in InitPostgres() instead, so
that CurrentSession is initialized at startup, but initially empty.
See attached.  (I realised that that terminology is a bit like a large
volume called FRENCH CUISINE which turns out to have just one recipe
for an omelette in it, but you have to start somewhere...)  Better
ideas?

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmods-v9.patchset.tgz



Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-23 Thread Robert Haas
On Wed, Aug 23, 2017 at 12:42 PM, Andres Freund  wrote:
> I don't think that's sufficient. make, and especially check-world,
> should have a decent coverage of the code locally. Without having to
> know about options like force_parallel_mode=regress. As e.g. evidenced
> by the fact that Thomas's latest version crashed if you ran the tests
> that way.  If there's a few lines that aren't covered by the plain
> tests, and more than a few node + parallelism combinations, I'm not
> bothered much. But this is (soon hopefully was) a fairly complicated
> piece of infrastructure - that should be exercised.  If necessary that
> can just be a BEGIN; SET LOCAL force_parallel_mode=on; query with
> blessed descs; COMMIT or whatnot - it's not like we need something hugely
> complicated here.

Yeah, we've been bitten before by changes that seemed OK when run
without force_parallel_mode but misbehaved with that option, so it
would be nice to improve things.  Now, I'm not totally convinced that
just adding a test around blessed tupledescs is really going to help
very much - that option exercises a lot of code, and this is only one
relatively small bit of it.  But I'm certainly not objecting to the
idea.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-23 Thread Andres Freund
On 2017-08-23 09:45:38 -0400, Robert Haas wrote:
> On Wed, Aug 23, 2017 at 1:46 AM, Andres Freund  wrote:
> > For later commits in the series:
> > - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
> >   entirely untested. This baffles me. See also [1]. I can force the code
> >   to be reached with force_parallel_mode=regress/1, but this absolutely
> >   really totally needs to be reached by the default tests. Robert?
> 
> force_parallel_mode=regress is a good way of testing this because it
> keeps the leader from doing the work, which would likely dodge any
> bugs that happened to exist.  If you want to test something in the
> regular regression tests, using force_parallel_mode=on is probably a
> good way to do it.
> 
> Also note that there are 3 buildfarm members that test with
> force_parallel_mode=regress on a regular basis, so it's not like there
> is no automated coverage of this area.

I don't think that's sufficient. make, and especially check-world,
should have a decent coverage of the code locally. Without having to
know about options like force_parallel_mode=regress. As e.g. evidenced
by the fact that Thomas's latest version crashed if you ran the tests
that way.  If there's a few lines that aren't covered by the plain
tests, and more than a few node + parallelism combinations, I'm not
bothered much. But this is (soon hopefully was) a fairly complicated
piece of infrastructure - that should be exercised.  If necessary that
can just be a BEGIN; SET LOCAL force_parallel_mode=on; query with
blessed descs; COMMIT or whatnot - it's not like we need something hugely
complicated here.

Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-23 Thread Robert Haas
On Wed, Aug 23, 2017 at 1:46 AM, Andres Freund  wrote:
> For later commits in the series:
> - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
>   entirely untested. This baffles me. See also [1]. I can force the code
>   to be reached with force_parallel_mode=regress/1, but this absolutely
>   really totally needs to be reached by the default tests. Robert?

force_parallel_mode=regress is a good way of testing this because it
keeps the leader from doing the work, which would likely dodge any
bugs that happened to exist.  If you want to test something in the
regular regression tests, using force_parallel_mode=on is probably a
good way to do it.

Also note that there are 3 buildfarm members that test with
force_parallel_mode=regress on a regular basis, so it's not like there
is no automated coverage of this area.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-23 Thread Thomas Munro
On Wed, Aug 23, 2017 at 11:58 PM, Thomas Munro wrote:
> On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund  wrote:
>> - Afaict GetSessionDsmHandle() uses the current rather than
>>   TopMemoryContext. Try running the regression tests under
>>   force_parallel_mode - crashes immediately for me without fixing that.
>
> Gah, right.  Fixed.

That version missed an early return case where dsm_create failed.
Here's a version that restores the caller's memory context in that
case too.

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmods-v8.patchset.tgz



Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-23 Thread Thomas Munro
On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund  wrote:
> Committing 0003. This'll probably need further adjustment, but I think
> it's good to make progress here.

Thanks!

> Changes:
> - pgindent'ed after adding the necessary typedefs to typedefs.list
> - replaced INT64CONST w UINT64CONST
> - moved count assertion in delete_item to before decrementing - as count
>   is unsigned, it'd just wrap around on underflow without triggering the
>   assertion.
> - documented and asserted resize is called without partition lock held
> - removed reference to iterator in dshash_find comments
> - removed stray references to dshash_release (rather than dshash_release_lock)
> - reworded dshash_find_or_insert reference to dshash_find to also
>   mention error handling.

Doh.  Thanks.

> Notes for possible followup commits of the dshash API:
> - nontrivial portions of dshash are essentially critical sections lest
>   dynamic shared memory be leaked. Should we, short term, introduce
>   actual critical section markers to make that more obvious? Should we,
>   longer term, make this more failsafe / easier to use, by
>   extending/emulating memory contexts for dsa memory?

Hmm.  I will look into this.

> - I'm very unconvinced of supporting both {compare,hash}_arg_function
>   and the non-arg version. Why not solely support the _arg_ version, but
>   add the size argument? On all relevant platforms that should still be
>   register arg callable, and the branch isn't free either.

Well, the idea was that both versions were compatible with existing
functions: one with DynaHash's hash and compare functions and the
other with qsort_arg's compare function type.  In the attached version
I've done as you suggested in 0001.  Since I guess many users will
finish up wanting raw memory compare and hash I've provided
dshash_memcmp() and dshash_memhash().  Thoughts?

Since there is no attempt to be compatible with anything else, I was
slightly tempted to make equal functions return true for a match,
rather than the memcmp-style return value but figured it was still
better to be consistent.
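
For a caller that just wants raw-memory keys, usage would look roughly
like this (a sketch against the parameter struct quoted earlier in the
thread, under its new dshash_ prefix; MyEntry and MY_TRANCHE_ID are
made-up names):

    typedef struct MyEntry
    {
        Oid         key;        /* initial bytes of the entry are the key */
        dsa_pointer payload;
    } MyEntry;

    static const dshash_parameters my_params = {
        sizeof(Oid),            /* key_size */
        sizeof(MyEntry),        /* entry_size */
        dshash_memcmp,          /* memcmp-style compare of the raw key */
        dshash_memhash,         /* hash of the raw key bytes */
        MY_TRANCHE_ID           /* tranche id for the partition locks */
    };

    dshash_table *table = dshash_create(area, &my_params, NULL);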

> - might be worthwhile to try to reduce duplication between
>   delete_item_from_bucket, delete_key_from_bucket, delete_item
>   dshash_delete_key.

Yeah.  I will try this and send a separate refactoring patch.

> For later commits in the series:
> - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
>   entirely untested. This baffles me. See also [1]. I can force the code
>   to be reached with force_parallel_mode=regress/1, but this absolutely
>   really totally needs to be reached by the default tests. Robert?

A fair point.  0002 is a simple patch to push some blessed records
through a TupleQueue in select_parallel.sql.  It doesn't do ranges and
arrays (special cases in the tqueue.c code that 0004 rips out), but
for exercising the new shared code I believe this is enough.  If you
apply just 0002 and 0004 then this test fails with a strange confused
record decoding error as expected.

> - gcc wants static before const (0004).

Fixed.

> - Afaict GetSessionDsmHandle() uses the current rather than
>   TopMemoryContext. Try running the regression tests under
>   force_parallel_mode - crashes immediately for me without fixing that.

Gah, right.  Fixed.

> - SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
>   which calls EnsureCurrentSession(), but
>   SharedRecordTypmodRegistryInit() does so again - sprinkling those
>   around liberally seems like it could hide bugs.

Yeah.  Will look into this.

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmods-v7.patchset.tgz



Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-22 Thread Andres Freund
On 2017-08-22 16:41:23 -0700, Andres Freund wrote:
> On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> > On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund  wrote:
> > > Pushing 0001, 0002 now.
> > >
> > > - rebased after conflicts
> > > - fixed a significant number of overly long lines
> > > - removed a number of now superfluous line breaks
> >
> > Thanks!  Please find attached a rebased version of the rest of the patch 
> > set.
>
> Pushed 0001, 0002.  Looking at later patches.

Committing 0003. This'll probably need further adjustment, but I think
it's good to make progress here.

Changes:
- pgindent'ed after adding the necessary typedefs to typedefs.list
- replaced INT64CONST w UINT64CONST
- moved count assertion in delete_item to before decrementing - as count
  is unsigned, it'd just wrap around on underflow without triggering the assertion.
- documented and asserted resize is called without partition lock held
- removed reference to iterator in dshash_find comments
- removed stray references to dshash_release (rather than dshash_release_lock)
- reworded dshash_find_or_insert reference to dshash_find to also
  mention error handling.

Notes for possible followup commits of the dshash API:
- nontrivial portions of dshash are essentially critical sections lest
  dynamic shared memory be leaked. Should we, short term, introduce
  actual critical section markers to make that more obvious? Should we,
  longer term, make this more failsafe / easier to use, by
  extending/emulating memory contexts for dsa memory?
- I'm very unconvinced of supporting both {compare,hash}_arg_function
  and the non-arg version. Why not solely support the _arg_ version, but
  add the size argument? On all relevant platforms that should still be
  register arg callable, and the branch isn't free either.
- might be worthwhile to try to reduce duplication between
  delete_item_from_bucket, delete_key_from_bucket, delete_item
  dshash_delete_key.


For later commits in the series:
- Afaict the whole shared tupledesc stuff, as tqueue.c before, is
  entirely untested. This baffles me. See also [1]. I can force the code
  to be reached with force_parallel_mode=regress/1, but this absolutely
  really totally needs to be reached by the default tests. Robert?
- gcc wants static before const (0004).
- Afaict GetSessionDsmHandle() uses the current rather than
  TopMemoryContext. Try running the regression tests under
  force_parallel_mode - crashes immediately for me without fixing that.
- SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
  which calls EnsureCurrentSession(), but
  SharedRecordTypmodRegistryInit() does so again - sprinkling those
  around liberally seems like it could hide bugs.

Regards,

Andres

[1] https://coverage.postgresql.org/src/backend/executor/tqueue.c.gcov.html




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-22 Thread Andres Freund
On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund  wrote:
> > Pushing 0001, 0002 now.
> >
> > - rebased after conflicts
> > - fixed a significant number of overly long lines
> > - removed a number of now superfluous line breaks
> 
> Thanks!  Please find attached a rebased version of the rest of the patch set.

Pushed 0001, 0002.  Looking at later patches.


Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-20 Thread Michael Paquier
On Mon, Aug 21, 2017 at 10:18 AM, Thomas Munro wrote:
> On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund  wrote:
>> I think it'd be a good idea to backpatch the addition of
>> TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
>> others think?
>
> +1
>
> That would also provide a way for extension developers to be able to
> write code that compiles against PG11 and also earlier releases
> without having to do ugly conditional macro stuff.

Updating only tupdesc.h is harmless, so no real objection to your argument.
-- 
Michael




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-20 Thread Thomas Munro
On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund  wrote:
> I think it'd be a good idea to backpatch the addition of
> TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
> others think?

+1

That would also provide a way for extension developers to be able to
write code that compiles against PG11 and also earlier releases
without having to do ugly conditional macro stuff.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-20 Thread Thomas Munro
On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund  wrote:
> Pushing 0001, 0002 now.
>
> - rebased after conflicts
> - fixed a significant number of overly long lines
> - removed a number of now superfluous line breaks

Thanks!  Please find attached a rebased version of the rest of the patch set.

> Thomas, prepare yourself for some hate from extension and fork authors /
> maintainers ;)

/me hides

The attached version also fixes a couple of small details you
complained about last week:

On Wed, Aug 16, 2017 at 10:06 AM, Andres Freund  wrote:
>> > +   size_t key_size;        /* Size of the key (initial bytes of entry) */
>> > +   size_t entry_size;  /* Total size of entry */
>> >
>> > Wonder if it'd make sense to say that key/entry sizes are only
>> > minimums? That means we could increase them to be the proper aligned
>> > size?
>>
>> I don't understand.  You mean explicitly saying that there are
>> overheads?  Doesn't that go without saying?
>
> I was thinking that we could do the MAXALIGN style calculations once
> instead of repeatedly, by including them in the key and entry sizes.

I must be missing something -- where do we do it repeatedly?  The only
place we use MAXALIGN is a compile-type constant expression (see
expansion of macros ENTRY_FROM_ITEM and ITEM_FROM_ENTRY, and also in
one place AXALIGN(sizeof(dshash_table_item))).

> shared-record-typmods-v5.patchset/0004-Refactor-typcache.c-s-record-typmod-hash-table.patch
>
> + * hashTupleDesc
> + * Compute a hash value for a tuple descriptor.
> + *
> + * If two tuple descriptors would be considered equal by equalTupleDescs()
> + * then their hash value will be equal according to this function.
> + */
> +uint32
> +hashTupleDesc(TupleDesc desc)
> +{
> +   uint32  s = 0;
> +   int i;
> +
> +   for (i = 0; i < desc->natts; ++i)
> +       s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid));
> +
> +   return s;
> +}
>
> Hm, is it right not to include tdtypeid, tdtypmod, tdhasoid here?
> equalTupleDescs() does compare them...

OK, now adding natts (just for consistency), tdtypeid and tdhasoid to
be exactly like equalTupleDescs().  Note that tdtypmod is deliberately
*not* included.
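
In other words, the revised function looks roughly like this (a sketch
of the change just described, not necessarily character-for-character
what the patch does):

    uint32
    hashTupleDesc(TupleDesc desc)
    {
        uint32      s;
        int         i;

        s = hash_combine(0, hash_uint32(desc->natts));
        s = hash_combine(s, hash_uint32(desc->tdtypeid));
        s = hash_combine(s, hash_uint32(desc->tdhasoid));
        /* tdtypmod is deliberately not hashed */
        for (i = 0; i < desc->natts; ++i)
            s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid));

        return s;
    }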

> +   return hashTupleDesc(((RecordCacheEntry *) data)->tupdesc);
> ...
> +   return equalTupleDescs(((RecordCacheEntry *) a)->tupdesc,
> +                          ((RecordCacheEntry *) b)->tupdesc);
>
> I'd rather have local vars for the casted params, but it's not
> important.

Done.

> MemSet(&ctl, 0, sizeof(ctl));
> -   ctl.keysize = REC_HASH_KEYS * sizeof(Oid);
> +   ctl.keysize = 0;/* unused */
> ctl.entrysize = sizeof(RecordCacheEntry);
>
> Hm, keysize 0? Is that right? Wouldn't it be more correct to have both
> of the same size, given dynahash includes the key size in the entry, and
> the pointer really is the key?

Done.

> shared-record-typmods-v5.patchset/0006-Introduce-a-shared-memory-record-typmod-registry.patch
>
> Hm, name & comment don't quite describe this accurately anymore.

Updated commit message.

> +extern void EnsureCurrentSession(void);
> +extern void EnsureCurrentSession(void);
>
> duplicated.

Fixed.

> +/*
> + * We want to create a DSA area to store shared state that has the same extent
> + * as a session.  So far, it's only used to hold the shared record type
> + * registry.  We don't want it to have to create any DSM segments just yet in
> + * common cases, so we'll give it enough space to hold a very small
> + * SharedRecordTypmodRegistry.
> + */
> +#define SESSION_DSA_SIZE   0x3
>
> Same "extent"? Maybe lifetime?

Done.

> +
> +/*
> + * Make sure that there is a CurrentSession.
> + */
> +void EnsureCurrentSession(void)
> +{
>
> linebreak.

Fixed.

> +{
> +   if (CurrentSession == NULL)
> +   {
> +           MemoryContext old_context = MemoryContextSwitchTo(TopMemoryContext);
> +
> +   CurrentSession = palloc0(sizeof(Session));
> +   MemoryContextSwitchTo(old_context);
> +   }
> +}
>
> Isn't MemoryContextAllocZero easier?

Done.
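
(Presumably now just something along these lines:

    if (CurrentSession == NULL)
        CurrentSession = MemoryContextAllocZero(TopMemoryContext,
                                                sizeof(Session));

which drops the explicit context switch.)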

I also stopped saying "const TupleDesc" in a few places, which was a
thinko (I wanted pointer to const tupldeDesc, not const pointer to
tupleDesc...), and made sure that the shmem TupleDescs always have
tdtypmod actually set.

So as I understand it the remaining issues (aside from any
undiscovered bugs...) are:

1.  Do we like "Session", "CurrentSession" etc?  Robert seems to be
suggesting that this is likely to get in the way when we try to tackle
this area more thoroughly.  Andres is suggesting that this is a good
time to take steps in this direction.

2.  Andres didn't like what I did to DecrTupleDescRefCount, namely
allowing it to run when there is no ResourceOwner.  I now see that this
is probably an indication of a different prob

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-20 Thread Andres Freund
Hi,

Pushing 0001, 0002 now.

- rebased after conflicts
- fixed a significant number of overly long lines
- removed a number of now superfluous line breaks

I think it'd be a good idea to backpatch the addition of
TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
others think?

Thomas, prepare yourself for some hate from extension and fork authors /
maintainers ;)

Regards,

Andres




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-16 Thread Robert Haas
On Tue, Aug 15, 2017 at 8:34 PM, Andres Freund  wrote:
> On 2017-08-15 20:30:16 -0400, Robert Haas wrote:
>> On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund  wrote:
>> > Interesting. I was apparently thinking slightly differently. I'd have
>> > thought we'd have Session struct in statically allocated shared
>> > memory. Which'd then have dsa_handle, dshash_table_handle, ... members.
>>
>> Sounds an awful lot like what we're already doing with PGPROC.
>
> Except it'd be shared between leader and workers. So no, not really.

There's precedent for using it that way, though - cf. group locking.
And in practice you're going to need an array of the same length as
the procarray.  It's maybe not quite the same thing, but it smells
pretty similar.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-15 Thread Thomas Munro
Will respond to the actionable code review points separately with a
new patch set, but first:

On Wed, Aug 16, 2017 at 10:06 AM, Andres Freund  wrote:
> On 2017-08-15 17:44:55 +1200, Thomas Munro wrote:
>> > @@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
>> >
>> >  /*
>> >   * CreateTupleDesc
>> > - * This function allocates a new TupleDesc pointing to a given
>> > + * This function allocates a new TupleDesc by copying a given
>> >   * Form_pg_attribute array.
>> >   *
>> > - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
>> > - * will not be freed thereby.
>> > - *
>> >
>> > I'm leaning towards no, but you could argue that we should just change
>> > that remark to be about constr?
>>
>> I don't see why.
>
> Because for that the freeing bit is still true, ie. it's still
> separately allocated.

It's true of struct tupleDesc in general but not true of objects
returned by this function in respect of the arguments to the function.
In master, that comment is a useful warning that the object will hold
onto but never free the attrs array you pass in.  The same doesn't
apply to constr so I don't think we need to say anything.

>> > Review of 0003:
>> >
>> > I'm not doing a too detailed review, given I think there's some changes
>> > in the pipeline.
>>
>> Yep.  In the new patch set the hash table formerly known as DHT is now
>> in patch 0004 and I made the following changes based on your feedback:
>>
>> 1.  Renamed it to "dshash".  The files are named dshash.{c,h}, and the
>> prefix on identifiers is dshash_.  You suggested dsmhash, but the "m"
>> didn't seem to make much sense.  I considered dsahash, but dshash
>> seemed better.  Thoughts?
>
> WFM.  Just curious, why didn't m make sense? I was referring to dynamic
> shared memory hash - seems right. Whether there's an intermediary dsa
> layer or not...

I think of DSA as a defining characteristic that dshash exists to work
with (it's baked into dshash's API), but DSM as an implementation
detail which dshash doesn't directly depend on.  Therefore I don't
like the "m".

I speculate that in future we might have build modes where DSA doesn't
use DSM anyway: it could use native pointers and maybe even a
different allocator in a build that either uses threads or
non-portable tricks to carve out a huge amount of virtual address
space so that it can map memory in at the same location in each
backend.  In that universe DSA would still be providing the service of
grouping allocations together into a scope for "rip cord" cleanup
(possibly by forwarding to MemoryContext stuff) but otherwise compile
away to nearly nothing.

>> > +static int32
>> > +find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
>> > +{
>> >
>> > +   /*
>> > +* While we still hold the atts_index entry locked, add this to
>> > +* typmod_index.  That's important because we don't want anyone to 
>> > be able
>> > +* to find a typmod via the former that can't yet be looked up in 
>> > the
>> > +* latter.
>> > +*/
>> > +   PG_TRY();
>> > +   {
>> > +       typmod_index_entry =
>> > +           dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
>> > +                              &typmod, &found);
>> > +       if (found)
>> > +           elog(ERROR, "cannot create duplicate shared record typmod");
>> > +   }
>> > +   PG_CATCH();
>> > +   {
>> > +       /*
>> > +        * If we failed to allocate or elog()ed, we have to be careful not to
>> > +        * leak the shared memory.  Note that we might have created a new
>> > +        * atts_index entry above, but we haven't put anything in it yet.
>> > +        */
>> > +       dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
>> > +       PG_RE_THROW();
>> > +   }
>> >
>> > Not entirely related, but I do wonder if we don't need abetter solution
>> > to this. Something like dsa pointers that register appropriate memory
>> > context callbacks to get deleted in case of errors?
>>
>> Huh, scope guards.  I have had some ideas about some kind of
>> destructor mechanism that might replace what we're doing with DSM
>> detach hooks in various places and also work in containers like hash
>> tables (ie entries could have destructors), but doing it with the
>> stack is another level...
>
> Not sure what you mean with 'stack'?

I probably read too much into your words.  I was imagining something
conceptually like the following, since the "appropriate memory
context" in the code above is actually a stack frame:

  dsa_pointer p = ...;

  ON_ERROR_SCOPE_EXIT(dsa_free, area, p); /* yeah, I know, no variadic macros */

  elog(ERROR, "boo"); /* this causes p to be freed */

The point being that if the caller of this function catches the error
then

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-15 Thread Andres Freund
On 2017-08-15 20:30:16 -0400, Robert Haas wrote:
> On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund  wrote:
> > Interesting. I was apparently thinking slightly differently. I'd have
> > thought we'd have Session struct in statically allocated shared
> > memory. Which'd then have dsa_handle, dshash_table_handle, ... members.
> 
> Sounds an awful lot like what we're already doing with PGPROC.

Except it'd be shared between leader and workers. So no, not really.

Greetings,

Andres Freund




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-15 Thread Robert Haas
On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund  wrote:
> Interesting. I was apparently thinking slightly differently. I'd have
> thought we'd have Session struct in statically allocated shared
> memory. Which'd then have dsa_handle, dshash_table_handle, ... members.

Sounds an awful lot like what we're already doing with PGPROC.

I am not sure that inventing a Session thing that should have 500
things in it but actually has the 3 that are relevant to this patch is
really a step forward.  In fact, it sounds like something that will
just create confusion down the road.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-15 Thread Andres Freund
On 2017-08-15 17:44:55 +1200, Thomas Munro wrote:
> > @@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
> >
> >  /*
> >   * CreateTupleDesc
> > - * This function allocates a new TupleDesc pointing to a given
> > + * This function allocates a new TupleDesc by copying a given
> >   * Form_pg_attribute array.
> >   *
> > - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
> > - * will not be freed thereby.
> > - *
> >
> > I'm leaning towards no, but you could argue that we should just change
> > that remark to be about constr?
>
> I don't see why.

Because for that the freeing bit is still true, ie. it's still
separately allocated.


> > Review of 0003:
> >
> > I'm not doing a too detailed review, given I think there's some changes
> > in the pipeline.
>
> Yep.  In the new patch set the hash table formerly known as DHT is now
> in patch 0004 and I made the following changes based on your feedback:
>
> 1.  Renamed it to "dshash".  The files are named dshash.{c,h}, and the
> prefix on identifiers is dshash_.  You suggested dsmhash, but the "m"
> didn't seem to make much sense.  I considered dsahash, but dshash
> seemed better.  Thoughts?

WFM.  Just curious, why didn't m make sense? I was referring to dynamic
shared memory hash - seems right. Whether there's an intermediary dsa
layer or not...


> 2.  Ripped out the incremental resizing and iterator support for now,
> as discussed.  I want to post patches to add those features when we
> have a use case but I can see that it's no slam dunk so I want to keep
> that stuff out of the dependency graph for parallel hash.

Cool.


> 3.  Added support for hash and compare functions with an extra
> argument for user data, a bit like qsort_arg_comparator.  This is
> necessary for functions that need to be able to dereference a
> dsa_pointer stored in the entry, since they need the dsa_area.  (I
> would normally call such an argument 'user_data' or 'context' or
> something but 'arg' seemed to be establish by qsort_arg.)

Good.


> > +/*
> > + * The set of parameters needed to create or attach to a hash table.  The
> > + * members tranche_id and tranche_name do not need to be initialized when
> > + * attaching to an existing hash table.  The functions do need to be supplied
> > + * even when attaching because we can't safely share function pointers between
> > + * backends in general.
> > + */
> > +typedef struct
> > +{
> > +   size_t key_size;        /* Size of the key (initial bytes of entry) */
> > +   size_t entry_size;      /* Total size of entry */
> > +   dht_compare_function compare_function;  /* Compare function */
> > +   dht_hash_function hash_function;        /* Hash function */
> > +   int tranche_id;         /* The tranche ID to use for locks. */
> > +} dht_parameters;
> >
> > Wonder if it'd make sense to say that key/entry sizes are only
> > minimums? That means we could increase them to be the proper aligned
> > size?
>
> I don't understand.  You mean explicitly saying that there are
> overheads?  Doesn't that go without saying?

I was thinking that we could do the MAXALIGN style calculations once
instead of repeatedly, by including them in the key and entry sizes.


> > Ignoring aspects related to REC_HASH_KEYS and related discussion, since
> > we're already discussing that in another email.
>
> This version includes new refactoring patches 0003, 0004 to get rid of
> REC_HASH_KEYS by teaching the hash table how to use a TupleDesc as a
> key directly.  Then the shared version does approximately the same
> thing, with a couple of extra hoops to jump thought.  Thoughts?

Will look.


> > +static int32
> > +find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
> > +{
> >
> > +   /*
> > +* While we still hold the atts_index entry locked, add this to
> > +* typmod_index.  That's important because we don't want anyone to 
> > be able
> > +* to find a typmod via the former that can't yet be looked up in 
> > the
> > +* latter.
> > +*/
> > +   PG_TRY();
> > +   {
> > +       typmod_index_entry =
> > +           dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
> > +                              &typmod, &found);
> > +       if (found)
> > +           elog(ERROR, "cannot create duplicate shared record typmod");
> > +   }
> > +   PG_CATCH();
> > +   {
> > +       /*
> > +        * If we failed to allocate or elog()ed, we have to be careful not to
> > +        * leak the shared memory.  Note that we might have created a new
> > +        * atts_index entry above, but we haven't put anything in it yet.
> > +        */
> > +       dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
>

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-15 Thread Thomas Munro
On Tue, Aug 15, 2017 at 5:44 PM, Thomas Munro wrote:
> On Mon, Aug 14, 2017 at 12:32 PM, Andres Freund  wrote:
>> But architecturally I'm still not sure I quite like the somewhat ad-hoc
>> manner in which session state is defined here. I think we should go much
>> more towards a PGPROC-like PGSESSION array that PGPROCs reference. That'd
>> also be preallocated in "normal" shmem. From there things like the
>> handle for a dht typmod table could be referenced.  I think we should
>> slowly go towards a world where session state isn't in a lot of file
>> local static variables.  I don't know if this is the right moment to
>> start doing so, but I think it's quite soon.
>
> No argument from me about that general idea.  All our global state is
> an obstacle for testability, multi-threading, new CPU scheduling
> architectures etc.  I had been trying to avoid getting too adventurous
> here, but here goes nothing... In this version there is an honest
> Session struct.  There is still a single global variable --
> CurrentSession -- which I guess could be a candidate to become a
> thread-local variable in the future (or alternatively an argument to
> every function that needs session access).  Is this better?  Haven't
> tested this much yet but seems like better code layout to me.

> 0006-Introduce-a-shared-memory-record-typmod-registry.patch

+/*
+ * A struct encapsulating some elements of a user's session.  For now this
+ * manages state that applies to parallel query, but it principle it could
+ * include other things that are currently global variables.
+ */
+typedef struct Session
+{
+   dsm_segment *segment;       /* The session-scoped DSM segment. */
+   dsa_area    *area;          /* The session-scoped DSA area. */
+
+   /* State managed by typcache.c. */
+   SharedRecordTypmodRegistry *typmod_registry;
+   dshash_table *record_table; /* Typmods indexed by tuple descriptor */
+   dshash_table *typmod_table; /* Tuple descriptors indexed by typmod */
+} Session;

Upon reflection, these members should probably be called
shared_record_table etc.  Presumably later refactoring would introduce
(for example) local_record_table, which would replace the following
variable in typcache.c:

static HTAB *RecordCacheHash = NULL;

... and likewise for NextRecordTypmod and RecordCacheArray which
together embody this session's local typmod registry and ability to
make more.

The idea here is eventually to move all state that is tied to a
session into this structure, though I'm not proposing to do any more
of that than is necessary as part of *this* patchset.  For now I'm
just looking for a decent place to put the minimal shared session
state, but in a way that allows us "slowly [to] go towards a world
where session state isn't in a lot of file local static variables" as
you put it.

There's a separate discussion to be had about whether things like
assign_record_type_typmod() should take a Session pointer or access
the global variable (and perhaps in future thread-local)
CurrentSession, but the path of least resistance for now is, I think,
as I have it.

On another topic, I probably need to study and test some failure paths better.

Thoughts?

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-14 Thread Thomas Munro
On Mon, Aug 14, 2017 at 12:32 PM, Andres Freund  wrote:
> Review for 0001:
>
> I think you made a few long lines even longer, like:
>
> @@ -1106,11 +1106,11 @@ pltcl_trigger_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state,
>     Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
>     for (i = 0; i < tupdesc->natts; i++)
>     {
> -       if (tupdesc->attrs[i]->attisdropped)
> +       if (TupleDescAttr(tupdesc, i)->attisdropped)
>             Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
>         else
>             Tcl_ListObjAppendElement(NULL, tcl_trigtup,
> -                                    Tcl_NewStringObj(utf_e2u(NameStr(tupdesc->attrs[i]->attname)), -1));
> +                                    Tcl_NewStringObj(utf_e2u(NameStr(TupleDescAttr(tupdesc, i)->attname)), -1));
>
>
> as it's not particularly pretty to access tupdesc->attrs[i] repeatedly,
> it'd be good if you instead had a local variable for the individual
> attribute.

Done.
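
(For the record, the style being asked for comes out roughly like this --
a sketch of the suggestion, not a verbatim excerpt from the new patch:)

    for (i = 0; i < tupdesc->natts; i++)
    {
        Form_pg_attribute att = TupleDescAttr(tupdesc, i);

        if (att->attisdropped)
            Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
        else
            Tcl_ListObjAppendElement(NULL, tcl_trigtup,
                                     Tcl_NewStringObj(utf_e2u(NameStr(att->attname)), -1));
    }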

> Similar:
> if 
> (OidIsValid(get_base_element_type(TupleDescAttr(tupdesc, i)->atttypid)))
> sv = plperl_ref_from_pg_array(attr, 
> TupleDescAttr(tupdesc, i)->atttypid);
> else if ((funcid = 
> get_transform_fromsql(TupleDescAttr(tupdesc, i)->atttypid, 
> current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes)))
> sv = (SV *) 
> DatumGetPointer(OidFunctionCall1(funcid, attr));

Done.

> @@ -150,7 +148,7 @@ ValuesNext(ValuesScanState *node)
>              */
>             values[resind] = MakeExpandedObjectReadOnly(values[resind],
>                                                         isnull[resind],
> -                                                       att[resind]->attlen);
> +                                                       TupleDescAttr(slot->tts_tupleDescriptor, resind)->attlen);
>
> @@ -158,9 +158,9 @@ convert_tuples_by_position(TupleDesc indesc,
>          * must agree.
>          */
>         if (attrMap[i] == 0 &&
> -           indesc->attrs[i]->attisdropped &&
> -           indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
> -           indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
> +           TupleDescAttr(indesc, i)->attisdropped &&
> +           TupleDescAttr(indesc, i)->attlen == TupleDescAttr(outdesc, i)->attlen &&
> +           TupleDescAttr(indesc, i)->attalign == TupleDescAttr(outdesc, i)->attalign)
>             continue;

Done.

> I think you get the drift, there's more

Done in some more places too.

> Review for 0002:
>
> @@ -71,17 +71,17 @@ typedef struct tupleConstr
>  typedef struct tupleDesc
>  {
>         int         natts;          /* number of attributes in the tuple */
> -       Form_pg_attribute *attrs;
> -       /* attrs[N] is a pointer to the description of Attribute Number N+1 */
>         TupleConstr *constr;        /* constraints, or NULL if none */
>         Oid         tdtypeid;       /* composite type ID for tuple type */
>         int32       tdtypmod;       /* typmod for tuple type */
>         bool        tdhasoid;       /* tuple has oid attribute in its header */
>         int         tdrefcount;     /* reference count, or -1 if not counting */
> +       /* attrs[N] is the description of Attribute Number N+1 */
> +       FormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];
>  } *TupleDesc;
>
> sorry if I'm beating on my hobby horse, but if we're re-ordering anyway,
> can you move TupleConstr to the second-to-last? That a) seems more
> consistent but b) (hobby horse, sorry) avoids unnecessary alignment
> padding.

Done.

> @@ -734,13 +708,13 @@ BuildDescForRelation(List *schema)
>         /* Override TupleDescInitEntry's settings as requested */
>         TupleDescInitEntryCollation(desc, attnum, attcollation);
>         if (entry->storage)
> -           desc->attrs[attnum - 1]->attstorage = entry->storage;
> +           desc->attrs[attnum - 1].attstorage = entry->storage;
>
>         /* Fill in additional stuff not handled by TupleDescInitEntry */
> -       desc->attrs[attnum - 1]->attnotnull = entry->is_not_null;
> +   

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-13 Thread Andres Freund
Hi,

On 2017-08-11 20:39:13 +1200, Thomas Munro wrote:
> Please find attached a new patch series.

Review for 0001:

I think you made a few long lines even longer, like:

@@ -1106,11 +1106,11 @@ pltcl_trigger_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state,
        Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
        for (i = 0; i < tupdesc->natts; i++)
        {
-               if (tupdesc->attrs[i]->attisdropped)
+               if (TupleDescAttr(tupdesc, i)->attisdropped)
                        Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
                else
                        Tcl_ListObjAppendElement(NULL, tcl_trigtup,
-                                                Tcl_NewStringObj(utf_e2u(NameStr(tupdesc->attrs[i]->attname)), -1));
+                                                Tcl_NewStringObj(utf_e2u(NameStr(TupleDescAttr(tupdesc, i)->attname)), -1));


as it's not particularly pretty to access tupdesc->attrs[i] repeatedly,
it'd be good if you instead had a local variable for the individual
attribute.

Similar:
        if (OidIsValid(get_base_element_type(TupleDescAttr(tupdesc, i)->atttypid)))
            sv = plperl_ref_from_pg_array(attr, TupleDescAttr(tupdesc, i)->atttypid);
        else if ((funcid = get_transform_fromsql(TupleDescAttr(tupdesc, i)->atttypid, current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes)))
            sv = (SV *) DatumGetPointer(OidFunctionCall1(funcid, attr));


@@ -150,7 +148,7 @@ ValuesNext(ValuesScanState *node)
             */
            values[resind] = MakeExpandedObjectReadOnly(values[resind],
                                                        isnull[resind],
-                                                       att[resind]->attlen);
+                                                       TupleDescAttr(slot->tts_tupleDescriptor, resind)->attlen);

@@ -158,9 +158,9 @@ convert_tuples_by_position(TupleDesc indesc,
         * must agree.
         */
        if (attrMap[i] == 0 &&
-           indesc->attrs[i]->attisdropped &&
-           indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
-           indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
+           TupleDescAttr(indesc, i)->attisdropped &&
+           TupleDescAttr(indesc, i)->attlen == TupleDescAttr(outdesc, i)->attlen &&
+           TupleDescAttr(indesc, i)->attalign == TupleDescAttr(outdesc, i)->attalign)
            continue;


I think you get the drift, there's more

Otherwise this seems fairly boring...


Review for 0002:

@@ -71,17 +71,17 @@ typedef struct tupleConstr
 typedef struct tupleDesc
 {
        int         natts;          /* number of attributes in the tuple */
-       Form_pg_attribute *attrs;
-       /* attrs[N] is a pointer to the description of Attribute Number N+1 */
        TupleConstr *constr;        /* constraints, or NULL if none */
        Oid         tdtypeid;       /* composite type ID for tuple type */
        int32       tdtypmod;       /* typmod for tuple type */
        bool        tdhasoid;       /* tuple has oid attribute in its header */
        int         tdrefcount;     /* reference count, or -1 if not counting */
+       /* attrs[N] is the description of Attribute Number N+1 */
+       FormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];
 } *TupleDesc;

sorry if I'm beating on my hobby horse, but if we're re-ordering anyway,
can you move TupleConstr to the second-to-last? That a) seems more
consistent but b) (hobby horse, sorry) avoids unnecessary alignment
padding.


@@ -734,13 +708,13 @@ BuildDescForRelation(List *schema)
/* Override TupleDescInitEntry's settings as requested */
TupleDescInitEntryCollation(desc, attnum, attcollation);
if (entry->storage)
-   desc->attrs[attnum - 1]->attstorage = entry->storage;
+   desc->attrs[attnum - 1].attstorage = entry->storage;

/* Fill in additional stuff not handled by TupleDescInitEntry */
-   desc->attrs[attnum - 1]->attnotnull = entry->is_not_null;
+   desc->attrs[attnum - 1].attnotnull = entry->is_not_null;
has_not_null |= entry->is_not_null;
-   desc->attrs[attnum - 1]->attislocal =

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Tom Lane
Robert Haas  writes:
> On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund  wrote:
>> That seems to involve a lot more than this though, given that currently
>> the stats collector data doesn't entirely have to be in memory. I've
>> seen sites with a lot of databases with quite some per-database stats
>> data. Don't think we can just require that to be in memory :(

> Hmm.  I'm not sure it wouldn't end up being *less* memory.  Don't we
> end up caching 1 copy of it per backend, at least for the database to
> which that backend is connected?  Accessing a shared copy would avoid
> that sort of thing.

Yeah ... the collector itself has got all that in memory anyway.
We do need to think about synchronization issues if we make that
memory globally available, but I find it hard to see how that would
lead to more memory consumption overall than what happens now.

regards, tom lane




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Robert Haas
On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund  wrote:
> That seems to involve a lot more than this though, given that currently
> the stats collector data doesn't entirely have to be in memory. I've
> seen sites with a lot of databases with quite some per-database stats
> data. Don't think we can just require that to be in memory :(

Hmm.  I'm not sure it wouldn't end up being *less* memory.  Don't we
end up caching 1 copy of it per backend, at least for the database to
which that backend is connected?  Accessing a shared copy would avoid
that sort of thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Andres Freund
On 2017-08-12 22:52:57 -0400, Robert Haas wrote:
> On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund  wrote:
> > Well, most of the potential usecases for dsmhash I've heard about so
> > far, don't actually benefit much from incremental growth. In nearly all
> > the implementations I've seen incremental move ends up requiring more
> > total cycles than doing it at once, and for parallelism type usecases
> > the stall isn't really an issue.  So yes, I think this is something
> > worth considering.   If we were to actually use DHT for shared caches or
> > such, this'd be different, but that seems darned far off.
> 
> I think it'd be pretty interesting to look at replacing parts of the
> stats collector machinery with something DHT-based.

That seems to involve a lot more than this though, given that currently
the stats collector data doesn't entirely have to be in memory. I've
seen sites with a lot of databases with quite some per-database stats
data. Don't think we can just require that to be in memory :(

- Andres




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Robert Haas
On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund  wrote:
> Well, most of the potential usecases for dsmhash I've heard about so
> far, don't actually benefit much from incremental growth. In nearly all
> the implementations I've seen incremental move ends up requiring more
> total cycles than doing it at once, and for parallelism type usecases
> the stall isn't really an issue.  So yes, I think this is something
> worth considering.   If we were to actually use DHT for shared caches or
> such, this'd be different, but that seems darned far off.

I think it'd be pretty interesting to look at replacing parts of the
stats collector machinery with something DHT-based.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-12 Thread Thomas Munro
Thanks for your feedback.  Here are two parts that jumped out at me.
I'll address the other parts in a separate email.

On Sat, Aug 12, 2017 at 1:55 PM, Andres Freund  wrote:
>> This is complicated, and in the category that I would normally want a
>> stack of heavy unit tests for.  If you don't feel like making
>> decisions about this now, perhaps iteration (and incremental resize?)
>> could be removed, leaving only the most primitive get/put hash table
>> facilities -- just enough for this purpose?  Then a later patch could
>> add them back, with a set of really convincing unit tests...
>
> I'm inclined to go for that, yes.

I will make it so.

>> > +/*
>> > + * An entry in SharedRecordTypmodRegistry's attribute index.  The key is the
>> > + * first REC_HASH_KEYS attribute OIDs.  That means that collisions are
>> > + * possible, but that's OK because SerializedTupleDesc objects are arranged
>> > + * into a list.
>> > + */
>> >
>> > +/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
>> > +const static dht_parameters srtr_atts_index_params = {
>> > +   sizeof(Oid) * REC_HASH_KEYS,
>> > +   sizeof(SRTRAttsIndexEntry),
>> > +   memcmp,
>> > +   tag_hash,
>> > +   LWTRANCHE_SHARED_RECORD_ATTS_INDEX
>> > +};
>> > +
>> > +/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
>> > +const static dht_parameters srtr_typmod_index_params = {
>> > +   sizeof(uint32),
>> > +   sizeof(SRTRTypmodIndexEntry),
>> > +   memcmp,
>> > +   tag_hash,
>> > +   LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
>> > +};
>> > +
>> >
>> > I'm very much not a fan of this representation. I know you copied the
>> > logic, but I think it's a bad idea. I think the key should just be a
>> > dsa_pointer, and then we can have a proper tag_hash that hashes the
>> > whole thing, and a proper comparator too.  Just have
>> >
>> > /*
>> >  * Combine two hash values, resulting in another hash value, with decent 
>> > bit
>> >  * mixing.
>> >  *
>> >  * Similar to boost's hash_combine().
>> >  */
>> > static inline uint32
>> > hash_combine(uint32 a, uint32 b)
>> > {
>> > a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
>> > return a;
>> > }
>> >
>> > and then hash everything.
>>
>> Hmm.  I'm not sure I understand.  I know what hash_combine is for but
>> what do you mean when you say the key should just be a dsa_pointer?
>
>> What's wrong with providing the key size, whole entry size, compare
>> function and hash function like this?
>
> Well, right now the key is "sizeof(Oid) * REC_HASH_KEYS" which imo is
> fairly ugly. Both because it wastes space for narrow cases, and because
> it leads to conflicts for wide ones. By having a dsa_pointer as a key
> and custom hash/compare functions there's no need for that, you can just
> compute the hash based on all keys, and compare based on all keys.

Ah, that.  Yeah, it is ugly, both in the pre-existing code and in my
patch.  Stepping back from this a bit more, the true key here not an
array of Oid at all (whether fixed sized or variable).  It's actually
a whole TupleDesc, because this is really a TupleDesc intern pool:
given a TupleDesc, please give me the canonical TupleDesc equal to
this one.  You might call it a hash set rather than a hash table
(key->value associative).

Ideally, we'd get rid of the ugly REC_HASH_KEYS-sized key and the ugly
extra conflict chain, and tupdesc.c would have a hashTupleDesc()
function that is compatible with equalTupleDescs().  Then the hash
table entry would simply be a TupleDesc (that is, a pointer).
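
For example, hashTupleDesc() could be little more than a fold of the
hash_combine() from your earlier message over the attribute type OIDs.  A
minimal sketch only -- the real thing might want to mix in more fields --
but hashing a subset of what equalTupleDescs() compares keeps the two
compatible:

static uint32
hashTupleDesc(TupleDesc desc)
{
    uint32      h = (uint32) desc->natts;
    int         i;

    /* Mix in each attribute's type OID; equal TupleDescs hash equally. */
    for (i = 0; i < desc->natts; i++)
        h = hash_combine(h, (uint32) TupleDescAttr(desc, i)->atttypid);

    return h;
}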

There is an extra complication when we use DSA memory though:  If you
have a hash table (set) full of dsa_pointer to struct tupleDesc but
want to be able to search it given a TupleDesc (= backend local
pointer) then you have to do some extra work.  I think that work is:
the hash table entries should be a small struct that has a union {
dsa_pointer, TupleDesc } and a discriminator field to say which it is,
and the hash + eq functions should be wrappers that follow dsa_pointer
if needed and then forward to hashTupleDesc() (a function that does
hash_combine() over the Oids) and equalTupleDescs().

(That complication might not exist if tupleDesc were fixed size and
could be directly in the hash table entry, but in the process of
flattening it (= holding the attributes in it) I made it variable
size, so we have to use a pointer to it in the hash table since both
DynaHash and DHT work with fixed size entries).
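
To spell out that extra work, something like this (a sketch only, building
on the hashTupleDesc() idea above; the real hash/compare callbacks would
have whatever signature the hash table API dictates, so passing the
dsa_area explicitly here is purely for illustration):

typedef struct SharedRecordTableKey
{
    union
    {
        TupleDesc   local_tupdesc;  /* a backend-local pointer */
        dsa_pointer shared_tupdesc; /* a pointer into the DSA area */
    }           u;
    bool        shared;         /* discriminator: which union member is valid */
} SharedRecordTableKey;

static TupleDesc
key_tupdesc(dsa_area *area, SharedRecordTableKey *key)
{
    if (key->shared)
        return (TupleDesc) dsa_get_address(area, key->u.shared_tupdesc);
    return key->u.local_tupdesc;
}

static uint32
shared_record_table_hash(dsa_area *area, SharedRecordTableKey *key)
{
    return hashTupleDesc(key_tupdesc(area, key));
}

static int
shared_record_table_compare(dsa_area *area,
                            SharedRecordTableKey *a, SharedRecordTableKey *b)
{
    return equalTupleDescs(key_tupdesc(area, a), key_tupdesc(area, b)) ? 0 : 1;
}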

Thoughts?

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-11 Thread Andres Freund
Hi,

On 2017-08-11 20:39:13 +1200, Thomas Munro wrote:
> Please find attached a new patch series.  I apologise in advance for
> 0001 and note that the patchset now weighs in at ~75kB compressed.
> Here are my in-line replies to your two reviews:

Replying to a few points here, then I'll do a pass through your
submission...


> On Tue, Jul 25, 2017 at 10:09 PM, Andres Freund  wrote:
> > It does concern me that we're growing yet another somewhat different
> > hashtable implementation. Yet I don't quite see how we could avoid
> > it. dynahash relies on proper pointers, simplehash doesn't do locking
> > (and shouldn't) and also relies on pointers, although to a much lesser
> > degree.  All the open coded tables aren't a good match either.  So I
> > don't quite see an alternative, but I'd love one.
> 
> Yeah, I agree.  To deal with data structures with different pointer
> types, locking policy, inlined hash/eq functions etc, perhaps there is
> a way we could eventually do 'policy based design' using the kind of
> macro trickery you started where we generate N different hash table
> variations but only have to maintain source code for one chaining hash
> table implementation?  Or perl scripts that effectively behave as a
> cfront^H^H^H nevermind.  I'm not planning to investigate that for this
> cycle.

Whaaa, what have I done?  But more seriously, I'm doubtful it's worth
going there.


> > + * level.  However, when a resize operation begins, all partition locks 
> > must
> > + * be acquired simultaneously for a brief period.  This is only expected to
> > + * happen a small number of times until a stable size is found, since 
> > growth is
> > + * geometric.
> >
> > I'm a bit doubtful that we need partitioning at this point, and that it
> > doesn't actually *degrade* performance for your typmod case.
> 
> Yeah, partitioning not needed for this case, but this is supposed to
> be more generally useful.  I thought about making the number of
> partitions a construction parameter, but it doesn't really hurt does
> it?

Well, using multiple locks and such certainly isn't free. An exclusively
owned cacheline mutex is nearly an order of magnitude faster than one
that's currently shared, not to speak of modified. Also it does increase
the size overhead, which might end up happening for a few other cases.


> > + * Resizing is done incrementally so that no individual insert operation 
> > pays
> > + * for the potentially large cost of splitting all buckets.
> >
> > I'm not sure this is a reasonable tradeoff for the use-case suggested so
> > far, it doesn't exactly make things simpler. We're not going to grow
> > much.
> 
> Yeah, designed to be more generally useful.  Are you saying you would
> prefer to see the DHT patch split into an initial submission that does
> the simplest thing possible, so that the unlucky guy who causes the
> hash table to grow has to do all the work of moving buckets to a
> bigger hash table?  Then we could move the more complicated
> incremental growth stuff to a later patch.

Well, most of the potential usecases for dsmhash I've heard about so
far, don't actually benefit much from incremental growth. In nearly all
the implementations I've seen incremental move ends up requiring more
total cycles than doing it at once, and for parallelism type usecases
the stall isn't really an issue.  So yes, I think this is something
worth considering.   If we were to actually use DHT for shared caches or
such, this'd be different, but that seems darned far off.


> This is complicated, and in the category that I would normally want a
> stack of heavy unit tests for.  If you don't feel like making
> decisions about this now, perhaps iteration (and incremental resize?)
> could be removed, leaving only the most primitive get/put hash table
> facilities -- just enough for this purpose?  Then a later patch could
> add them back, with a set of really convincing unit tests...

I'm inclined to go for that, yes.


> > +/*
> > + * Detach from a hash table.  This frees backend-local resources associated
> > + * with the hash table, but the hash table will continue to exist until it 
> > is
> > + * either explicitly destroyed (by a backend that is still attached to 
> > it), or
> > + * the area that backs it is returned to the operating system.
> > + */
> > +void
> > +dht_detach(dht_hash_table *hash_table)
> > +{
> > +   /* The hash table may have been destroyed.  Just free local memory. 
> > */
> > +   pfree(hash_table);
> > +}
> >
> > Somewhat inclined to add debugging refcount - seems like bugs around
> > that might be annoying to find. Maybe also add an assert ensuring that
> > no locks are held?
> 
> Added assert that no locks are held.
> 
> In an earlier version I had reference counts.  Then I realised that it
> wasn't really helping anything.  The state of being 'attached' to a
> dht_hash_table isn't really the same as holding a heavyweight resource
> like a DSM segment or a file which is backed by 

Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-11 Thread Andres Freund
On 2017-08-11 11:14:44 -0400, Robert Haas wrote:
> On Fri, Aug 11, 2017 at 4:39 AM, Thomas Munro
>  wrote:
> > OK.  Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
> > shared".  Better?  If we were to do other data structures in DSA
> > memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
> > ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
> > etc.
> >
> > Do you want to see a separate patch to rename dsa.c?  Got a better
> > name?  You could have spoken up earlier :-)  It does sound like a bit
> > like the thing from crypto or perhaps a scary secret government
> > department.

I, and I bet a lot of other people, kind of missed dsa being merged for
a while...


> I doubt that we really want to have accessor functions with names like
> dynamic_shared_hash_table_insert or ds_hash_table_insert. Long names
> are fine, even desirable, for APIs that aren't too widely used,
> because they're relatively self-documenting, but a 30-character
> function name gets annoying in a hurry if you have to call it very
> often, and this is intended to be reusable for other things that want
> a dynamic shared memory hash table.  I think we should (a) pick some
> reasonably short prefix for all the function names, like dht or dsht
> or ds_hash, but not ds_hash_table or dynamic_shared_hash_table and (b)
> also use that prefix as the name for the .c and .h files.

Yea, I agree with this. Something dsmhash_{insert,...}... seems like
it'd kinda work without being too ambiguous like dht imo is, while still
being reasonably short.


> Right now, we've got a situation where the most widely-used hash table
> implementation uses dynahash.c for the code, hsearch.h for the
> interface, and "hash" as the prefix for the names, and that's really
> hard to remember.  I think having a consistent naming scheme
> throughout would be a lot better.

Yea, that situation still occasionally confuses me, a good 10 years
after starting to look at pg...  There's even a dynahash.h, except
it's useless. And dynahash.c doesn't even include hsearch.h directly
(included via shmem.h)!  Personally I'd actually be in favor of moving
hsearch.h stuff into dynahash.h and leave hsearch as a wrapper.

- Andres




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-11 Thread Robert Haas
On Fri, Aug 11, 2017 at 4:39 AM, Thomas Munro
 wrote:
> OK.  Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
> shared".  Better?  If we were to do other data structures in DSA
> memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
> ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
> etc.
>
> Do you want to see a separate patch to rename dsa.c?  Got a better
> name?  You could have spoken up earlier :-)  It does sound like a bit
> like the thing from crypto or perhaps a scary secret government
> department.

I doubt that we really want to have accessor functions with names like
dynamic_shared_hash_table_insert or ds_hash_table_insert. Long names
are fine, even desirable, for APIs that aren't too widely used,
because they're relatively self-documenting, but a 30-character
function name gets annoying in a hurry if you have to call it very
often, and this is intended to be reusable for other things that want
a dynamic shared memory hash table.  I think we should (a) pick some
reasonably short prefix for all the function names, like dht or dsht
or ds_hash, but not ds_hash_table or dynamic_shared_hash_table and (b)
also use that prefix as the name for the .c and .h files.

Right now, we've got a situation where the most widely-used hash table
implementation uses dynahash.c for the code, hsearch.h for the
interface, and "hash" as the prefix for the names, and that's really
hard to remember.  I think having a consistent naming scheme
throughout would be a lot better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-08-11 Thread Thomas Munro
Hi,

Please find attached a new patch series.  I apologise in advance for
0001 and note that the patchset now weighs in at ~75kB compressed.
Here are my in-line replies to your two reviews:

On Tue, Jul 25, 2017 at 10:09 PM, Andres Freund  wrote:
> It does concern me that we're growing yet another somewhat different
> hashtable implementation. Yet I don't quite see how we could avoid
> it. dynahash relies on proper pointers, simplehash doesn't do locking
> (and shouldn't) and also relies on pointers, although to a much lesser
> degree.  All the open coded tables aren't a good match either.  So I
> don't quite see an alternative, but I'd love one.

Yeah, I agree.  To deal with data structures with different pointer
types, locking policy, inlined hash/eq functions etc, perhaps there is
a way we could eventually do 'policy based design' using the kind of
macro trickery you started where we generate N different hash table
variations but only have to maintain source code for one chaining hash
table implementation?  Or perl scripts that effectively behave as a
cfront^H^H^H nevermind.  I'm not planning to investigate that for this
cycle.

>
> diff --git a/src/backend/lib/dht.c b/src/backend/lib/dht.c
> new file mode 100644
> index 000..2fec70f7742
> --- /dev/null
> +++ b/src/backend/lib/dht.c
>
> FWIW, not a big fan of dht as a filename (nor of dsa.c). For one DHT
> usually refers to distributed hash tables, which this is not, and for
> another the abbreviation is so short it's not immediately
> understandable, and likely to conflict further.  I think it'd possibly
> ok to have dht as symbol prefixes, but rename the file to be longer.

OK.  Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
shared".  Better?  If we were to do other data structures in DSA
memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
etc.

Do you want to see a separate patch to rename dsa.c?  Got a better
name?  You could have spoken up earlier :-)  It does sound like a bit
like the thing from crypto or perhaps a scary secret government
department.

> + * To deal with currency, it has a fixed size set of partitions, each of 
> which
> + * is independently locked.
>
> s/currency/concurrency/ I presume.

Fixed.

> + * Each bucket maps to a partition; so insert, find
> + * and iterate operations normally only acquire one lock.  Therefore, good
> + * concurrency is achieved whenever they don't collide at the lock partition
>
> s/they/operations/?

Fixed.

> + * level.  However, when a resize operation begins, all partition locks must
> + * be acquired simultaneously for a brief period.  This is only expected to
> + * happen a small number of times until a stable size is found, since growth 
> is
> + * geometric.
>
> I'm a bit doubtful that we need partitioning at this point, and that it
> doesn't actually *degrade* performance for your typmod case.

Yeah, partitioning not needed for this case, but this is supposed to
be more generally useful.  I thought about making the number of
partitions a construction parameter, but it doesn't really hurt does
it?

> + * Resizing is done incrementally so that no individual insert operation pays
> + * for the potentially large cost of splitting all buckets.
>
> I'm not sure this is a reasonable tradeoff for the use-case suggested so
> far, it doesn't exactly make things simpler. We're not going to grow
> much.

Yeah, designed to be more generally useful.  Are you saying you would
prefer to see the DHT patch split into an initial submission that does
the simplest thing possible, so that the unlucky guy who causes the
hash table to grow has to do all the work of moving buckets to a
bigger hash table?  Then we could move the more complicated
incremental growth stuff to a later patch.

> +/* The opaque type used for tracking iterator state. */
> +struct dht_iterator;
> +typedef struct dht_iterator dht_iterator;
>
> Isn't it actually the iterator state? Rather than tracking it? Also, why
> is it opaque given you're actually defining it below? Guess you'd moved
> it at some point.

Improved comment.  The iterator state is defined below in the .h, but
with a warning that client code mustn't access it; it exists in the
header only because it's very useful to be able to put a dht_iterator on
the stack, which requires the client code to have its definition, but I
want to reserve the right to change it arbitrarily in future.

> +/*
> + * The set of parameters needed to create or attach to a hash table.  The
> + * members tranche_id and tranche_name do not need to be initialized when
> + * attaching to an existing hash table.
> + */
> +typedef struct
> +{
> +   Size key_size;                  /* Size of the key (initial bytes of entry) */
> +   Size entry_size;                /* Total size of entry */
>
> Let's use size_t, like we kind of concluded in the thread you started:
> http://archives.p

Re: [HACKERS] POC: Sharing record typmods between backends

2017-07-31 Thread Andres Freund
Hi,

diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
index 9fd7b4e019b..97c0125a4ba 100644
--- a/src/backend/access/common/tupdesc.c
+++ b/src/backend/access/common/tupdesc.c
@@ -337,17 +337,75 @@ DecrTupleDescRefCount(TupleDesc tupdesc)
 {
Assert(tupdesc->tdrefcount > 0);
 
-   ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
+   if (CurrentResourceOwner != NULL)
+   ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
if (--tupdesc->tdrefcount == 0)
FreeTupleDesc(tupdesc);
 }

What's this about? CurrentResourceOwner should always be valid here, no?
If so, why did that change? I don't think it's good to detach this from
the resowner infrastructure...


 /*
- * Compare two TupleDesc structures for logical equality
+ * Compare two TupleDescs' attributes for logical equality
  *
  * Note: we deliberately do not check the attrelid and tdtypmod fields.
  * This allows typcache.c to use this routine to see if a cached record type
  * matches a requested type, and is harmless for relcache.c's uses.
+ */
+bool
+equalTupleDescAttrs(Form_pg_attribute attr1, Form_pg_attribute attr2)
+{

comment not really accurate, this routine afaik isn't used by
typcache.c?


/*
- * Magic numbers for parallel state sharing.  Higher-level code should use
- * smaller values, leaving these very large ones for use by this module.
+ * Magic numbers for per-context parallel state sharing.  Higher-level code
+ * should use smaller values, leaving these very large ones for use by this
+ * module.
  */
 #define PARALLEL_KEY_FIXED                  UINT64CONST(0x0001)
 #define PARALLEL_KEY_ERROR_QUEUE            UINT64CONST(0x0002)
@@ -63,6 +74,16 @@
 #define PARALLEL_KEY_ACTIVE_SNAPSHOT        UINT64CONST(0x0007)
 #define PARALLEL_KEY_TRANSACTION_STATE      UINT64CONST(0x0008)
 #define PARALLEL_KEY_ENTRYPOINT             UINT64CONST(0x0009)
+#define PARALLEL_KEY_SESSION_DSM            UINT64CONST(0x000A)
+
+/* Magic number for per-session DSM TOC. */
+#define PARALLEL_SESSION_MAGIC              0xabb0fbc9
+
+/*
+ * Magic numbers for parallel state sharing in the per-session DSM area.
+ */
+#define PARALLEL_KEY_SESSION_DSA            UINT64CONST(0x0001)
+#define PARALLEL_KEY_RECORD_TYPMOD_REGISTRY UINT64CONST(0x0002)

Not this patch's fault, but this infrastructure really isn't great. We
should really replace it with a shmem.h style infrastructure, using a
dht hashtable as backing...


+/* The current per-session DSM segment, if attached. */
+static dsm_segment *current_session_segment = NULL;
+

I think it'd be better if we had a proper 'SessionState' and
'BackendSessionState' infrastructure that then contains the dsm segment
etc. I think we'll otherwise just end up with a bunch of parallel
infrastructures.



+/*
+ * A mechanism for sharing record typmods between backends.
+ */
+struct SharedRecordTypmodRegistry
+{
+   dht_hash_table_handle atts_index_handle;
+   dht_hash_table_handle typmod_index_handle;
+   pg_atomic_uint32 next_typmod;
+};
+

I think the code needs to explain better how these are intended to be
used. IIUC, atts_index is used to find typmods by "identity", and
typmod_index by the typmod, right? And we need both to avoid
all workers generating different tupledescs, right?  Kinda guessable by
reading typcache.c, but that shouldn't be needed.


+/*
+ * A flattened/serialized representation of a TupleDesc for use in shared
+ * memory.  Can be converted to and from regular TupleDesc format.  Doesn't
+ * support constraints and doesn't store the actual type OID, because this is
+ * only for use with RECORD types as created by CreateTupleDesc().  These are
+ * arranged into a linked list, in the hash table entry corresponding to the
+ * OIDs of the first 16 attributes, so we'd expect to get more than one entry
+ * in the list when names and other properties differ.
+ */
+typedef struct SerializedTupleDesc
+{
+   dsa_pointer next;       /* next with the same attribute OIDs */
+   int         natts;      /* number of attributes in the tuple */
+   int32       typmod;     /* typmod for tuple type */
+   bool        hasoid;     /* tuple has oid attribute in its header */
+
+   /*
+    * The attributes follow.  We only ever access the first
+    * ATTRIBUTE_FIXED_PART_SIZE bytes of each element, like the code in
+    * tupdesc.c.
+    */
+   FormData_pg_attribute attributes[FLEXIBLE_ARRAY_MEMBER];
+} SerializedTupleDesc;

Not a fan of a separate tupledesc representation, that's just going to
lead to divergence over time. I think we should rather change the normal
tupledesc representation to be compatible

Re: [HACKERS] POC: Sharing record typmods between backends

2017-07-25 Thread Andres Freund
On 2017-07-10 21:39:09 +1200, Thomas Munro wrote:
> Here's a new version that introduces a per-session DSM segment to hold
> the shared record typmod registry (and maybe more things later).

You like to switch it up. *.patchset.tgz??? ;)


It does concern me that we're growing yet another somewhat different
hashtable implementation. Yet I don't quite see how we could avoid
it. dynahash relies on proper pointers, simplehash doesn't do locking
(and shouldn't) and also relies on pointers, although to a much lesser
degree.  All the open coded tables aren't a good match either.  So I
don't quite see an alternative, but I'd love one.


Regards,

Andres

diff --git a/src/backend/lib/dht.c b/src/backend/lib/dht.c
new file mode 100644
index 000..2fec70f7742
--- /dev/null
+++ b/src/backend/lib/dht.c

FWIW, not a big fan of dht as a filename (nor of dsa.c). For one DHT
usually refers to distributed hash tables, which this is not, and for
another the abbreviation is so short it's not immediately
understandable, and likely to conflict further.  I think it'd possibly
ok to have dht as symbol prefixes, but rename the file to be longer.

+ * To deal with currency, it has a fixed size set of partitions, each of which
+ * is independently locked.

s/currency/concurrency/ I presume.


+ * Each bucket maps to a partition; so insert, find
+ * and iterate operations normally only acquire one lock.  Therefore, good
+ * concurrency is achieved whenever they don't collide at the lock partition

s/they/operations/?


+ * level.  However, when a resize operation begins, all partition locks must
+ * be acquired simultaneously for a brief period.  This is only expected to
+ * happen a small number of times until a stable size is found, since growth is
+ * geometric.

I'm a bit doubtful that we need partitioning at this point, and that it
doesn't actually *degrade* performance for your typmod case.


+ * Resizing is done incrementally so that no individual insert operation pays
+ * for the potentially large cost of splitting all buckets.

I'm not sure this is a reasonable tradeoff for the use-case suggested so
far, it doesn't exactly make things simpler. We're not going to grow
much.


+/* The opaque type used for tracking iterator state. */
+struct dht_iterator;
+typedef struct dht_iterator dht_iterator;

Isn't it actually the iterator state? Rather than tracking it? Also, why
is it opaque given you're actually defining it below? Guess you'd moved
it at some point.


+/*
+ * The set of parameters needed to create or attach to a hash table.  The
+ * members tranche_id and tranche_name do not need to be initialized when
+ * attaching to an existing hash table.
+ */
+typedef struct
+{
+   Size key_size;                  /* Size of the key (initial bytes of entry) */
+   Size entry_size;                /* Total size of entry */

Let's use size_t, like we kind of concluded in the thread you started:
http://archives.postgresql.org/message-id/25076.1489699457%40sss.pgh.pa.us
:)

+   dht_compare_function compare_function;  /* Compare function */
+   dht_hash_function hash_function;        /* Hash function */

Might be worth explaining that these need to provided when attaching
because they're possibly process local. Did you test this with
EXEC_BACKEND?

+   int tranche_id;                 /* The tranche ID to use for locks. */
+} dht_parameters;


+struct dht_iterator
+{
+   dht_hash_table *hash_table;     /* The hash table we are iterating over. */
+   bool exclusive;                 /* Whether to lock buckets exclusively. */
+   Size partition;                 /* The index of the next partition to visit. */
+   Size bucket;                    /* The index of the next bucket to visit. */
+   dht_hash_table_item *item;      /* The most recently returned item. */
+   dsa_pointer last_item_pointer;  /* The last item visited. */
+   Size table_size_log2;           /* The table size when we started iterating. */
+   bool locked;                    /* Whether the current partition is locked. */

Haven't gotten to the actual code yet, but this kinda suggests we leave a
partition locked when iterating? Hm, that seems likely to result in a
fair bit of pain...


+/* Iterating over the whole hash table. */
+extern void dht_iterate_begin(dht_hash_table *hash_table,
+                              dht_iterator *iterator, bool exclusive);
+extern void *dht_iterate_next(dht_iterator *iterator);
+extern void dht_iterate_delete(dht_iterator *iterator);

s/delete/delete_current/? Otherwise it looks like it's part of
manipulating just the iterator.

+extern void dht_iterate_release(dht_iterator *iterator);

I'd add lock to the name.
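
(For context, a caller of the proposed API would look something like this
-- a sketch based only on the prototypes quoted above, nothing more:)

    dht_iterator iterator;
    void       *entry;

    dht_iterate_begin(hash_table, &iterator, false);    /* false = shared locks */
    while ((entry = dht_iterate_next(&iterator)) != NULL)
    {
        /* inspect entry; dht_iterate_delete(&iterator) would remove it */
    }
    dht_iterate_release(&iterator);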


+/*
+ * An item in the hash table.  This wraps the user's entry object in an
+ * envelop that holds a pointer back to the bucket and a pointer to the next
+ * item in the bucket.
+ */
+struct dht_hash

Re: [HACKERS] POC: Sharing record typmods between backends

2017-07-10 Thread Thomas Munro
On Thu, Jun 1, 2017 at 6:29 AM, Andres Freund  wrote:
> On May 31, 2017 11:28:18 AM PDT, Robert Haas  wrote:
>>> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
[ ... various discussion in support of using DHT ... ]

Ok, good.

Here's a new version that introduces a per-session DSM segment to hold
the shared record typmod registry (and maybe more things later).  The
per-session segment is created the first time you run a parallel query
(though there is handling for failure to allocate that allows the
parallel query to continue with no workers) and lives until your
leader backend exits.  When parallel workers start up, they see its
handle in the per-query segment and attach to it, which puts
typcache.c into write-through cache mode so their idea of record
typmods stays in sync with the leader (and each other).

I also noticed that I could delete even more of tqueue.c than before:
it doesn't seem to have any remaining reason to need to know the
TupleDesc.

One way to test this code is to apply just
0003-rip-out-tqueue-remapping-v3.patch and then try the example from
the first message in this thread to see it break, and then try again
with the other two patches applied.  By adding debugging trace you can
see that the worker pushes a bunch of TupleDescs into shmem, they get
pulled out by the leader when it sees the tuples, and then on a second
invocation the (new) worker can reuse them: it finds matches already
in shmem from the first invocation.

I used a DSM segment with a TOC and a DSA area inside that, like the
existing per-query DSM segment, but obviously you could spin it
various different ways.  One example: just have a DSA area and make a
new kind of TOC thing that deals in dsa_pointers.  Better ideas?
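
For the option I used, the setup is roughly as follows.  This is only a
sketch with error handling omitted; the dsm/shm_toc/dsa signatures are
written from memory of the current headers and the PARALLEL_SESSION_MAGIC
and PARALLEL_KEY_SESSION_DSA names reuse the constants from the patch
quoted elsewhere in the thread, so treat the details as assumptions:

#include "storage/dsm.h"
#include "storage/shm_toc.h"
#include "utils/dsa.h"

static dsm_segment *
create_session_segment(int tranche_id)
{
    shm_toc_estimator estimator;
    dsm_segment *seg;
    shm_toc    *toc;
    void       *dsa_space;
    Size        size;

    /* Estimate space for one TOC entry holding a DSA area. */
    shm_toc_initialize_estimator(&estimator);
    shm_toc_estimate_chunk(&estimator, dsa_minimum_size());
    shm_toc_estimate_keys(&estimator, 1);
    size = shm_toc_estimate(&estimator);

    /* Create the per-session segment and a TOC at the start of it. */
    seg = dsm_create(size, 0);
    toc = shm_toc_create(PARALLEL_SESSION_MAGIC, dsm_segment_address(seg), size);

    /* Carve out a DSA area inside the segment and advertise it in the TOC. */
    dsa_space = shm_toc_allocate(toc, dsa_minimum_size());
    (void) dsa_create_in_place(dsa_space, dsa_minimum_size(), tranche_id, seg);
    shm_toc_insert(toc, PARALLEL_KEY_SESSION_DSA, dsa_space);

    return seg;
}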

I believe combo CIDs should also go in there, to enable parallel
write, but I'm not 100% sure: that's neither per-session nor per-query
data, that's per-transaction.  So perhaps the per-session DSM could
hold a per-session DSA and a per-transaction DSA, where the latter is
reset for each transaction, just like TopTransactionContext (though
dsa.c doesn't have a 'reset thyself' function currently).  That seems
like a good place to store a shared combo CID hash table using DHT.
Thoughts?

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmod-registry-v3.patchset.tgz
Description: GNU Zip compressed data



Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Andres Freund


On May 31, 2017 11:28:18 AM PDT, Robert Haas  wrote:
>On Wed, May 31, 2017 at 1:46 PM, Andres Freund 
>wrote:
>> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
>>> On Wed, May 31, 2017 at 12:53 PM, Robert Haas
> wrote:
>>> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void
>*private_data
>>> > are not going to work in DSM, because they are pointers.  You can
>>> > doubtless come up with a way around that problem, but I guess the
>>> > question is whether that's actually any better than just using
>DHT.
>>>
>>> Probably I misunderstood the question. I assumed that we need to
>bring
>>> in DHT only for achieving this goal. But, if the question is simply
>>> the comparison of DHT vs simplehash for this particular case then I
>>> agree that DHT is a more appropriate choice.
>>
>> Yea, I don't think simplehash is the best choice here.  It's
>worthwhile
>> to use it for performance critical bits, but using it for everything
>> would just increase code size without much benefit.  I'd tentatively
>> assume that anonymous record type aren't going to be super common,
>and
>> that this is going to be the biggest bottleneck if you use them.
>
>Did you mean "not going to be"?

Err, yes.  Thanks
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Robert Haas
On Wed, May 31, 2017 at 1:46 PM, Andres Freund  wrote:
> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
>> On Wed, May 31, 2017 at 12:53 PM, Robert Haas  wrote:
>> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
>> > are not going to work in DSM, because they are pointers.  You can
>> > doubtless come up with a way around that problem, but I guess the
>> > question is whether that's actually any better than just using DHT.
>>
>> Probably I misunderstood the question. I assumed that we need to bring
>> in DHT only for achieving this goal. But, if the question is simply
>> the comparison of DHT vs simplehash for this particular case then I
>> agree that DHT is a more appropriate choice.
>
> Yea, I don't think simplehash is the best choice here.  It's worthwhile
> to use it for performance critical bits, but using it for everything
> would just increase code size without much benefit.  I'd tentatively
> assume that anonymous record type aren't going to be super common, and
> that this is going to be the biggest bottleneck if you use them.

Did you mean "not going to be"?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Andres Freund
On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
> On Wed, May 31, 2017 at 12:53 PM, Robert Haas  wrote:
> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
> > are not going to work in DSM, because they are pointers.  You can
> > doubtless come up with a way around that problem, but I guess the
> > question is whether that's actually any better than just using DHT.
> 
> Probably I misunderstood the question. I assumed that we need to bring
> in DHT only for achieving this goal. But, if the question is simply
> the comparison of DHT vs simplehash for this particular case then I
> agree that DHT is a more appropriate choice.

Yea, I don't think simplehash is the best choice here.  It's worthwhile
to use it for performance critical bits, but using it for everything
would just increase code size without much benefit.  I'd tentatively
assume that anonymous record type aren't going to be super common, and
that this is going to be the biggest bottleneck if you use them.

- Andres




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Dilip Kumar
On Wed, May 31, 2017 at 12:53 PM, Robert Haas  wrote:
> Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
> are not going to work in DSM, because they are pointers.  You can
> doubtless come up with a way around that problem, but I guess the
> question is whether that's actually any better than just using DHT.

Probably I misunderstood the question. I assumed that we need to bring
in DHT only for achieving this goal. But, if the question is simply
the comparison of DHT vs simplehash for this particular case then I
agree that DHT is a more appropriate choice.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Robert Haas
On Wed, May 31, 2017 at 11:16 AM, Dilip Kumar  wrote:
> I agree with you. But, if I understand the use case correctly we need
> to store the TupleDesc for the RECORD in shared hash so that it can be
> shared across multiple processes.  I think this can be achieved with
> the simplehash as well.
>
> For getting this done, we need some fixed shared memory for holding
> static members of SH_TYPE and the process which creates the simplehash
> will be responsible for copying these static members to the shared
> location so that other processes can access the SH_TYPE.  And, the
> dynamic part (the actual hash entries) can be allocated using DSA by
> registering SH_ALLOCATE function.

Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
are not going to work in DSM, because they are pointers.  You can
doubtless come up with a way around that problem, but I guess the
question is whether that's actually any better than just using DHT.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Dilip Kumar
On Wed, May 31, 2017 at 10:57 AM, Robert Haas  wrote:
>> Simplehash provides an option to provide your own allocator function
>> to it. So in the allocator function, you can allocate memory from DSA.
>> After it reaches some threshold it expands the size (double) and it
>> will again call the allocator function to allocate the bigger memory.
>> You can refer pagetable_allocate in tidbitmap.c.
>
> That only allows the pagetable to be shared, not the hash table itself.

I agree with you. But, if I understand the use case correctly we need
to store the TupleDesc for the RECORD in shared hash so that it can be
shared across multiple processes.  I think this can be achieved with
the simplehash as well.

For getting this done, we need some fixed shared memory for holding
static members of SH_TYPE and the process which creates the simplehash
will be responsible for copying these static members to the shared
location so that other processes can access the SH_TYPE.  And, the
dynamic part (the actual hash entries) can be allocated using DSA by
registering SH_ALLOCATE function.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-31 Thread Robert Haas
On Tue, May 30, 2017 at 2:45 AM, Dilip Kumar  wrote:
> On Tue, May 30, 2017 at 1:09 AM, Thomas Munro
>  wrote:
>>> * Perhaps simplehash + an LWLock would be better than dht, but I
>>> haven't looked into that.  Can it be convinced to work in DSA memory
>>> and to grow on demand?
>
> Simplehash provides an option to provide your own allocator function
> to it. So in the allocator function, you can allocate memory from DSA.
> After it reaches some threshold it expands the size (double) and it
> will again call the allocator function to allocate the bigger memory.
> You can refer pagetable_allocate in tidbitmap.c.

That only allows the pagetable to be shared, not the hash table itself.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-29 Thread Dilip Kumar
On Tue, May 30, 2017 at 1:09 AM, Thomas Munro
 wrote:
>> * Perhaps simplehash + an LWLock would be better than dht, but I
>> haven't looked into that.  Can it be convinced to work in DSA memory
>> and to grow on demand?

Simplehash provides an option to provide your own allocator function
to it. So in the allocator function, you can allocate memory from DSA.
After it reaches some threshold it expands the size (double) and it
will again call the allocator function to allocate the bigger memory.
You can refer pagetable_allocate in tidbitmap.c.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] POC: Sharing record typmods between backends

2017-05-29 Thread Thomas Munro
On Fri, Apr 7, 2017 at 5:21 PM, Thomas Munro
 wrote:
> * It would be nice for the SharedRecordTypeRegistry to be able to
> survive longer than a single parallel query, perhaps in a per-session
> DSM segment.  Perhaps eventually we will want to consider a
> query-scoped area, a transaction-scoped area and a session-scoped
> area?  I didn't investigate that for this POC.

This seems like the right way to go.  I think there should be one
extra patch in this patch stack, to create a per-session DSA area (and
perhaps a "SharedSessionState" struct?) that worker backends can
attach to.  It could be created when you first run a parallel query,
and then reused for all parallel queries for the rest of your session.
So, after you've run one parallel query, all future record typmod
registrations would get pushed (write-through style) into shmem, for
use by other backends that you might start in future parallel queries.
That will avoid having to copy the leader's registered record typmods
into shmem for every query going forward (the behaviour of the current
POC patch).

> * Perhaps simplehash + an LWLock would be better than dht, but I
> haven't looked into that.  Can it be convinced to work in DSA memory
> and to grow on demand?

Any views on this?

> 1.  Apply dht-v3.patch[3].
> 2.  Apply shared-record-typmod-registry-v1.patch.
> 3.  Apply rip-out-tqueue-remapping-v1.patch.

Here's a rebased version of the second patch (the other two still
apply).  It's still POC code only and still uses a
per-parallel-context DSA area for space, not the per-session one I am
now proposing we develop, if people are in favour of the approach.

In case it wasn't clear from my earlier description, a nice side
effect of using a shared typmod registry is that you can delete 85% of
tqueue.c (see patch #3), so if you don't count the hash table
implementation we come out about even in terms of lines of code.

-- 
Thomas Munro
http://www.enterprisedb.com


shared-record-typmod-registry-v2.patch
Description: Binary data
