On 25 Aug 2015 10:34, "Vlad Rozov" <[email protected]> wrote:
>
> I think that the bufferserver should be allowed to use no more than an
> application-specified amount of memory, and behavior like the Linux file
> cache will make it difficult to allocate operator/container cache without
> reserving too much memory for spikes.

Sure, agreed.

My idea is to use *less* memory than what is allocated by the application,
since I am suggesting some level of control over group commits. So I am
thinking of taking the patch you wrote and having it trigger each time the
buffer server fills by n units, n being the window size.

If n exceeds the allocated memory, we can error out.
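
To make the trigger concrete, here is a rough sketch of what I mean
(hypothetical code, not the actual bufferserver classes; SpillPolicy,
onBuffered() and spillBatch() are made-up names):

    // Hypothetical sketch of the proposed trigger, not actual bufferserver code.
    public class SpillPolicy
    {
      private final long windowSizeBytes; // n: bytes buffered per application window
      private long bufferedBytes;         // bytes currently held in memory

      public SpillPolicy(long allocatedMemoryBytes, long windowSizeBytes)
      {
        if (windowSizeBytes > allocatedMemoryBytes) {
          // if n exceeds the memory allocated by the application, error out
          throw new IllegalArgumentException("window size exceeds allocated memory");
        }
        this.windowSizeBytes = windowSizeBytes;
      }

      // called as tuples are buffered; spills one window-sized batch at a time
      public void onBuffered(long bytes)
      {
        bufferedBytes += bytes;
        while (bufferedBytes >= windowSizeBytes) {
          spillBatch(windowSizeBytes); // group-commit one window's worth of pages
          bufferedBytes -= windowSizeBytes;
        }
      }

      private void spillBatch(long bytes)
      {
        // placeholder: write this batch of pages to disk in one sequential commit
      }
    }

The point is just that pages get spilled in window-sized batches instead of
being dumped all at once when the memory budget runs out.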

Thoughts?

> But I may be wrong, and it will be good to have the suggested behavior
> implemented in a prototype and to benchmark the prototype's performance.
>
> Vlad
>
>
> On 8/24/15 18:24, Atri Sharma wrote:
>>
>> The idea is that if the bufferserver dumps *all* pages once it runs out of
>> memory, then it causes a huge I/O spike. If it instead pages out
>> incrementally once it runs out of memory, then it behaves like a normal
>> cache and a further level of paging control can be applied.
>>
>> My idea is that there should be functionality to control the amount of
>> data that is committed together. This also allows me to 1) define the
>> optimal way writes work on my disk, and 2) allow my application to define
>> the locality of data. For example, I might be performing graph analysis in
>> which a time window's data consists of a subgraph.
>> On 25 Aug 2015 02:46, "Chetan Narsude" <[email protected]> wrote:
>>
>>> The bufferserver writes pages to disk *only when* it runs out of memory
>>> to hold them.
>>>
>>> Can you elaborate where you see I/O spikes?
>>>
>>> --
>>> Chetan
>>>
>>> On Mon, Aug 24, 2015 at 12:39 PM, Atri Sharma <[email protected]> wrote:
>>>
>>>> Folks,
>>>>
>>>> I was wondering if it makes sense to have functionality in which the
>>>> bufferserver writes out data pages to disk in batches defined by
>>>> timeslice/application window.
>>>>
>>>> This will allow flexible workloads and reduce I/O spikes (I understand
>>>> that we have non-blocking I/O, but it would still incur disk head costs).
>>>>
>>>> Thoughts?
>>>>
>>>> --
>>>> Regards,
>>>>
>>>> Atri
>>>> *l'apprenant*
>>>>
>
