2015-09-25 19:13 GMT+02:00 Shulgin, Oleksandr <oleksandr.shul...@zalando.de>
> Some problems:
> There is a potential problem with the limited total number of DSM
> segments: it is reserved in a way to only allow 2 per backend (on average)
> and 64 additional per server, so if you run with the new option enabled at
> all times, you're down to only 1 additional DSM per backend (again, on
> average).  Not sure how critical this can be, but no one is forced to run
> with this option enabled for all backends.
> If you don't want to run it enabled at all times, then enabling the GUC
> per-backend can be problematic.  It's still possible to update the conf
> file and send SIGHUP to a single backend, but it's harder to accomplish
> over psql, for example.  I think here we might still have some luck with
> the signals: use another array of per-backend slots with flags, set the
> target backend's flag and send it SIGUSR1.  The backend wakes on the signal
> and examines its slot, then toggles the GUC if needed.  Sounds pretty safe,
> eh?
> No documentation changes yet, waiting for your comments. :-)

Preparing the content before execution is an interesting idea that could be used more widely. Most queries and plans are not very big, so when the content is small, say less than 1MB, one DSM segment shared by all backends could be used. When the content is bigger than that limit, a DSM segment would be allocated specifically for it, and a pointer to that segment plus an offset would be stored in the requesting process's slot. Reading and writing the slot should be protected by a spinlock, but that blocks only the two processes involved, and only for a short time (the memory copy). Another possibility is to publish the size of the content in the requesting process's slot, so the requester can preallocate enough shared memory; but this doesn't solve the issue of the requester having to wait for the content. So the first variant is pretty simple and should be preferred. The disadvantage is clear: it can impose a possibly significant slowdown on fast queries.



> Happy hacking!
> --
> Alex
