2015-09-28 12:37 GMT+02:00 Shulgin, Oleksandr <oleksandr.shul...@zalando.de>:

> On Mon, Sep 28, 2015 at 12:05 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>>
>> 2015-09-28 12:01 GMT+02:00 Shulgin, Oleksandr <
>> oleksandr.shul...@zalando.de>:
>>
>>> On Sun, Sep 27, 2015 at 8:05 AM, Pavel Stehule <pavel.steh...@gmail.com>
>>> wrote:
>>>
>>> preparing the content before execution is an interesting idea that could
>>>> be used more widely. Most queries and plans are not too big, so when the
>>>> size of the content is small - less than 1MB - one DSM segment can be
>>>> used for all backends.
>>>>
>>>
>>>
>>>> When the size of the content exceeds the limit, a DSM segment will be
>>>> allocated specifically for that content. The pointer to the DSM segment
>>>> and the offset can be stored in the requesting process's slot. Reading
>>>> from and writing to the slot should be protected by a spinlock, but that
>>>> blocks only the two related processes, and only for a short time (a
>>>> memory copy).
>>>>
>>>
>>> Sorry, I don't think this will fly.
>>>
>>> The whole idea is that a backend publishes the plan of a query just
>>> before running it and doesn't care which other backend(s) might be
>>> reading it, how many times, or in which order.  The only required
>>> locking (implicit) is contained in the code for dsm_attach/detach().
>>>
>>
>> I didn't propose a very different solution. There is only one difference -
>> sharing one DSM segment for smaller data. It is similar to using ordinary
>> shared memory.
>>
>
> Does this mean implementing some sort of allocator on top of the shared
> memory segment?  If so, how are you going to prevent fragmentation?
>

yes, a simple memory allocator is necessary in this case. But it can be
really simple - you can allocate only fixed-size blocks - 10KB, 100KB and
1MB - from separate buffers, so fragmentation is not possible.

Regards

Pavel


>
> --
> Alex
>
>
