On Tue, May 6, 2014 at 9:31 AM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:
> As a generic remark, I wish that whatever parallel algorithms we will use
> won't need a lot of ad hoc memory allocations from shared memory. Even
> though we have dynamic shared memory now, complex data structures with a lot
> of pointers and different allocations are more painful to debug, tune, and
> make concurrency-safe. But I have no idea what exactly you have in mind, so
> I'll just have to take your word on it that this is sensible.

Yeah, I agree.  Actually, I'm hoping that a lot of what we want to do
can be done using the shm_mq stuff, which uses a messaging paradigm.
If the queue is full, you wait for the consumer to read some data
before writing more.  That is much simpler and avoids a lot of the
concurrency problems that come with complex shared data structures.

There are several problems with using pointers in dynamic shared
memory.  The ones I'm most concerned about are:

1. Segments are relocatable, so you can't actually use absolute
pointers.  Maybe someday we'll have a facility for dynamic shared
memory segments that are mapped at the same address in every process,
or maybe not, but right now we sure don't.

2. You've got to decide up-front how much memory to set aside for
dynamic allocation, and you can't easily change your mind later.  Some
of the DSM implementations support growing the segment, but you've got
to somehow get everyone who is using it to remap it, possibly at a
different address, so it's a long way from being transparent.

But that having been said, pointers are a pretty fundamental data
structure, and I think trying to get rid of them completely isn't
likely to be feasible.  For sorting, you need a big SortTuple array
and then that needs to point to the individual tuples.  I think that's
simple enough to be reasonable, and at any rate I don't see a simpler
way to do it.

> Yeah, I saw in some tests that about 50% of the memory used for catalog
> caches was waste caused by rounding up all the allocations to power-of-two.

Sadly, I can't see using this allocator for the catalog caches as-is.
The problem is that AllocSetAlloc can start those caches off with a
tiny 1kB allocation.  This allocator is intended to be efficient for
large contexts, so you start off with 4kB of superblock descriptors
and a 64kB chunk for each size class that is in use.  Very reasonable
for multi-megabyte allocations; not so hot for tiny ones.  There may
be a way to serve both needs, but I haven't found it yet.

> I wouldn't conflate shared memory with this. If a piece of code needs to
> work with either one, I think the way to go is to have some sort of wrapper
> functions that route the calls to either the shared or private memory
> allocator, similar to how the same interface is used to deal with local,
> temporary buffers and shared buffers.

Well, that has several disadvantages.  One of them is code
duplication.  This allocator could certainly be a lot simpler if it
only handled shared memory, or for that matter if it only handled
backend-private memory.  But if the right way to do allocation for
sorting is to carve chunks out of superblocks, then it's the right way
regardless of whether you're allocating from shared memory or
backend-private memory.  And if that's the wrong way, then it's wrong
for both.  Using completely different allocators for parallel sort and
non-parallel sort doesn't seem like a great idea to me.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)