2013/7/22 Christoph Engelbert <[email protected]>

> On 22.07.2013 at 11:32, Tommaso Teofili wrote:
> > Thanks a lot Christoph for your detailed explanation. From my point of
> > view it'd be nice to have this behavior configurable (basically to plug
> > the new buffer backend in and out) so that users could choose in a
> > simpler way. Do you think that'd be possible?
>
> It is exchangeable.


ok, thanks for confirming it, I was almost sure this was the case.


> The old implementations without the buffer
> backend are still there and available; that's why I said we need to
> clarify that one implementation behaves totally differently :-)
>

yes, sure.


>
> > Regarding having a hangout, I don't think it'd be easy; it's probably
> > better to keep sharing our concerns and ideas here on the ML. However,
> > if you could make something like a documentation / 2-minute tutorial
> > page, that would help a lot I think.
>
> I'll try to :-)
>

thanks, that'd be great!

Tommaso


>
> >
> > I also wonder if it'd make sense to have a DM 0.2 release before
> > committing the new backend, in order to have that be part of a
> > separate, additional release.
> > What do you think?
> >
> > Thanks a lot for your effort,
> > Tommaso
> >
> >
> > 2013/7/21 Christoph Engelbert <[email protected]>
> >
> >> On 21.07.2013 at 13:39, Christoph Engelbert wrote:
> >>> Hey guys,
> >>> What I forgot to mention is that we definitely have to explain the
> >>> parameters when creating the MemoryManager, since for the new buffer
> >>> backend they are different from the original ones. I use the
> >>> concurrencylevel, which is already used by Guava, to decide on the
> >>> partition count, and use the buffercount as the number of slices
> >>> inside a partition.
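The relationship between those two parameters and the overall pool size can be sketched roughly as below. This is a minimal illustration only; the class name `PoolLayout`, the method `totalCapacity`, and the capacity formula are my assumptions for this sketch, not the actual DirectMemory API.

```java
// Hypothetical sketch: how concurrencylevel (partition count) and
// buffercount (slices per partition) could determine total capacity.
public class PoolLayout {

    // partitions * slices-per-partition * bytes-per-slice
    public static long totalCapacity(int concurrencyLevel, int bufferCount,
                                     int fragmentSize) {
        return (long) concurrencyLevel * bufferCount * fragmentSize;
    }

    public static void main(String[] args) {
        // 4 partitions, 1024 slices each, 128-byte fragments -> 524288 bytes
        System.out.println(totalCapacity(4, 1024, 128));
    }
}
```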
> >>>
> >>> A bit more on the internals of the new buffer framework:
> >>>
> >>> There are a number of partitions which are sliced into multiple
> >>> fragments, all of the same size (a divisor of the partition size).
> >>> Partition 1 (fragment size for example 128 bytes)
> >>> +-----------------------------------------------------+
> >>> |  1  |  2  |  3  |  4  |  5  |  6  |  7  |  8  |  9  |
> >>> +-----------------------------------------------------+
> >>> | ... | ... | ... | ... | ... | ... | ... | ... | ... |
> >>> +-----------------------------------------------------+
> >>> | ... | ... | ... | ... | ... | ... | ... | ... | ... |
> >>> +-----------------------------------------------------+
> >>>
> >>> Partition 2
> >>> +-----------------------------------------------------+
> >>> |  1  |  2  |  3  |  4  |  5  |  6  |  7  |  8  |  9  |
> >>> +-----------------------------------------------------+
> >>> | ... | ... | ... | ... | ... | ... | ... | ... | ... |
> >>> +-----------------------------------------------------+
> >>> | ... | ... | ... | ... | ... | ... | ... | ... | ... |
> >>> +-----------------------------------------------------+
> >>>
> >>> If you request a new PartitionBuffer you have the option to give it
> >>> a base size; if none is given, a single fragment will be requested.
> >>>
> >>> PartitionBufferPool.getPartitionBuffer() => PartitionBuffer1 [[P1,1]]
> >>> PartitionBufferPool.getPartitionBuffer(256) => PartitionBuffer2
> >>> [[P2,1], [P2,2]]
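The base-size-to-fragment-count mapping described above is just a ceiling division. A minimal sketch, assuming the 128-byte fragment size from the example (the class and method names here are illustrative, not part of the real API):

```java
// Hypothetical sketch: map a requested base size to a fragment count.
public class FragmentSizing {
    static final int FRAGMENT_SIZE = 128; // example value from the thread

    // Ceiling division; a request without a base size gets one fragment.
    public static int fragmentsFor(int baseSize) {
        return Math.max(1, (baseSize + FRAGMENT_SIZE - 1) / FRAGMENT_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(fragmentsFor(0));   // no base size -> 1 fragment
        System.out.println(fragmentsFor(256)); // -> 2 fragments, as in the example
    }
}
```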
> >>>
> >>> All PartitionBuffers are auto-growing as long as there are free
> >>> fragments available, so if you try to write beyond the current max
> >>> capacity, the buffer requests a new fragment behind the scenes to
> >>> write to:
> >>>
> >>> PartitionBuffer1.writeByte(...) -> [[P1,1], [P1,1]]
> >> Oops should be:
> >>
> >> PartitionBuffer1.writeByte(...) -> [[P1,1], [P1,2]]
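The auto-grow behavior above can be sketched with plain heap arrays standing in for off-heap fragments. This is an illustration of the idea only; the class name and internals are assumptions, not DirectMemory's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a buffer that acquires one more fragment
// whenever a write goes past its current capacity.
public class SimpleGrowingBuffer {
    static final int FRAGMENT_SIZE = 128;
    private final List<byte[]> fragments = new ArrayList<>();
    private int position = 0;

    public SimpleGrowingBuffer() {
        fragments.add(new byte[FRAGMENT_SIZE]); // start with one fragment
    }

    public void writeByte(byte b) {
        if (position == fragments.size() * FRAGMENT_SIZE) {
            fragments.add(new byte[FRAGMENT_SIZE]); // grow by one fragment
        }
        fragments.get(position / FRAGMENT_SIZE)[position % FRAGMENT_SIZE] = b;
        position++;
    }

    public int fragmentCount() { return fragments.size(); }

    public static void main(String[] args) {
        SimpleGrowingBuffer buf = new SimpleGrowingBuffer();
        for (int i = 0; i < 129; i++) buf.writeByte((byte) 1);
        System.out.println(buf.fragmentCount()); // 129th byte forced a 2nd fragment
    }
}
```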
> >>
> >>
> >>> Fragments don't need to come from the same partition; this heavily
> >>> depends on the chosen selection strategy (like CPU local, thread
> >>> local, round robin) and on whether the current partition still has
> >>> free fragments. Depending on the selection strategy, different best
> >>> practices for the partition count apply.
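Two of the selection strategies mentioned above can be sketched as follows. The interface and class names are illustrative assumptions; only the strategy ideas (round robin, thread local) come from the thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of two partition-selection strategies.
public class SelectionStrategies {
    interface PartitionSelector {
        int nextPartition(int partitionCount);
    }

    // Round robin: spread allocation requests evenly across partitions.
    static class RoundRobin implements PartitionSelector {
        private final AtomicInteger counter = new AtomicInteger();
        public int nextPartition(int partitionCount) {
            return Math.floorMod(counter.getAndIncrement(), partitionCount);
        }
    }

    // Thread local: pin each thread to one partition to reduce contention.
    static class ThreadLocalSelector implements PartitionSelector {
        public int nextPartition(int partitionCount) {
            return (int) (Thread.currentThread().getId() % partitionCount);
        }
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin();
        System.out.println(rr.nextPartition(4)); // 0
        System.out.println(rr.nextPartition(4)); // 1
    }
}
```

A round-robin selector favors an even fill across many partitions, while a thread-local one favors fewer cross-thread collisions; that is presumably why the best partition count differs per strategy.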
> >>>
> >>> What is important to note for the new buffer backend: memory usage
> >>> is higher for small pieces of data, or for data sizes slightly
> >>> larger than a multiple of the fragment size. For example, you'll
> >>> waste a lot of memory if you store values of 129 bytes (2 fragments,
> >>> with the second fragment used for only one byte). This is something
> >>> the user is best placed to configure correctly, but the most
> >>> important point is that this behavior is totally different from the
> >>> old MemoryManagers, which sliced to the exact size.
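The internal fragmentation described above is easy to quantify. A minimal sketch, again assuming 128-byte fragments (class and method names are illustrative):

```java
// Hypothetical sketch: bytes allocated vs. bytes wasted when values
// are rounded up to whole fragments.
public class FragmentWaste {
    static final int FRAGMENT_SIZE = 128;

    public static int allocatedBytes(int dataSize) {
        int fragments = (dataSize + FRAGMENT_SIZE - 1) / FRAGMENT_SIZE;
        return fragments * FRAGMENT_SIZE;
    }

    public static int wastedBytes(int dataSize) {
        return allocatedBytes(dataSize) - dataSize;
    }

    public static void main(String[] args) {
        // The 129-byte example from the thread: two fragments, 127 bytes wasted.
        System.out.println(allocatedBytes(129)); // 256
        System.out.println(wastedBytes(129));    // 127
    }
}
```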
> >>>
> >>> I guess that's it so far, any questions? I would appreciate a
> >>> hangout or something like that to do some further explanation and
> >>> to get opinions, ideas and reviews.
> >>>
> >>> Chris
> >>>
> >>> On 17.07.2013 at 19:29, Raffaele P. Guidi wrote:
> >>>> Great! :)
> >>>> On 17/Jul/2013 at 12:41, "Christoph Engelbert" <
> >> [email protected]>
> >>>> wrote:
> >>
>
>