Hi Dimitri,

On 11/04/2018 13:42, Dimitri Vorona wrote:
> I was thinking about something like this [0]. The point is that the slice
> user has no way of knowing whether the slice can still be safely used and
> who owns the memory.

I think the answer is that calling free() on something you exported to
consumers is incorrect.  If you allocate buffers, you should choose a
Buffer implementation with proper ownership semantics.  For example, we
have PoolBuffer, but also Python buffers and CUDA buffers.  They all
(should) have proper ownership.  If you want to create buffers with data
managed with malloc/free, you need to write a MallocBuffer implementation.

> A step back is a good idea. My use case would be to return a partially
> built slice on a buffer, while continuing appending to the buffer. Think
> delta dictionaries: while a slice of the coding table can be sent, we will
> have additional data to append later on.

I don't know anything about delta dictionaries, but I get the idea.

Does the implementation become harder if you split the coding table into
several buffers that never get resized?

> To build on your previous proposal: maybe some more fine-grained locking
> mechanism, like data_ being a shared_ptr<uint8_t*>, slices grabbing a
> copy of it when they want to use it and releasing it afterwards? The parent
> would then check the counter of the shared_ptr (similar to the number of
> slices).

You need an actual lock to avoid race conditions (the parent may find a
zero shared_ptr counter, but another thread would grab a data pointer
immediately after).

I wonder if we really want this much implementation complexity.  Also,
everyone would then pay the price of locking, even users who never share
buffers across threads.  Ideally, slicing and fetching a data pointer
should be cheap.  I'd like to know what others think about this.


