On Tue, Aug 16, 2016 at 4:28 AM, Carsten Haitzler <ras...@rasterman.com> wrote:
> On Tue, 16 Aug 2016 01:43:43 -0300 Gustavo Sverzut Barbieri
> <barbi...@gmail.com> said:
>
>> On Mon, Aug 15, 2016 at 11:37 PM, Carsten Haitzler <ras...@rasterman.com>
>> wrote:
>> > On Mon, 15 Aug 2016 22:35:58 -0300 Gustavo Sverzut Barbieri
>> > <barbi...@gmail.com> said:
>> >
>> >> On Mon, Aug 15, 2016 at 8:13 PM, Carsten Haitzler <ras...@rasterman.com>
>> >> wrote:
>> >> > On Mon, 15 Aug 2016 12:07:16 -0300 Gustavo Sverzut Barbieri
>> >> > <barbi...@gmail.com> said:
>> [...]
>> >> It is the same, but you do not need to replicate this in every class
>> as is done in Ecore_Exe, Ecore_Con, Ecore_Con_URL... :-)
>> >>
>> I was thinking just like you, but after talking to Tasn a bit I got
>> what he meant by a "thin wrapper around syscalls", and in the end it
>> does make sense, more sense actually.
>> >
>> > if it is just a thin wrapper then what value does it provide?
>>
>> Uniform access to the calls.
>>
>> Like in Linux, where you get read(2), write(2), close(2) and file
>> descriptors to work on for almost every basic resource. But when you go
>> to higher-level resources, like doing HTTP over libcurl, then you
>> cannot call "read(2)" directly...
>>
>> With the API I'm proposing you get that simplicity of Unix FD's back.
>> It's almost the same call and behavior.
>>
>> Then you can write simple code that monitors a source, sees when
>> there is data to read, reads some data, waits until the destination can
>> hold more data, then writes it... in a loop. This is the Efl.Io.Copier.
>>
>> Check:
>> https://git.enlightenment.org/core/efl.git/log/?h=devs/barbieri/efl-io-interfaces
>>
>> You will see I already provide Stdin, Stdout, Stderr and File. Those
>> are "useless" since you could do the same with pure POSIX calls. But when
>> I add the objects implemented on top of complex libraries such as cURL,
>> then that code will "just work".
>
> unix fd's are NOT - simple. not if you want to be non-blocking. you have to
> handle write failures and figure out what was and was not written from your
> buffer, handle select() on the fd for when it is available again and write
> then - and for all you know you may be able to just write a single byte and
> then have to try again and so on.
>
> unix read/write and fd's push the logic of this up the stack into the app. the
> alternative is to do blocking i/o and that is just not viable for something
> that multiplexes all its i/o through an event loop.
>
> what i have read of this so far means pushing the "kernel buffer is full, write
> failed now or partly failed" back off into the app. and that is not even close
> to replacing ecore_con - it fundamentally misses the "i'll buffer that for
> you, don't worry about it" nature of it that takes the kernel's limited
> buffering and extends it to "infinite" that saves a lot of pain and agony.

Raster, check the code and you'll see it's all in there. What I mean by
"simple" is that its nature is simple and the concepts are well
understood... not that it's easy to use.

Since we must deal with that *at least* for the POSIX case, we'll do
it on our side at least once.

As this POSIX layer is very simple, it's implementable everywhere with
minimal effort, so it works well for this level of API.
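
For reference, this is roughly the dance that has to be encapsulated,
in plain POSIX terms (a minimal sketch with a fixed buffer; it uses a
blocking select() for brevity, while in EFL the waiting comes from the
main loop instead):

/* copy src -> dst coping with partial writes, EAGAIN and back-pressure;
 * plain read(2)/write(2)/select(2), no EFL involved */
#include <errno.h>
#include <stdbool.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

static bool
copy_fd(int src, int dst)
{
   char buf[4096];
   size_t used = 0;        /* bytes read but not yet written */
   bool src_eos = false;

   while (!src_eos || used > 0)
     {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (!src_eos && used < sizeof(buf)) FD_SET(src, &rfds);
        if (used > 0) FD_SET(dst, &wfds);

        int maxfd = (src > dst ? src : dst) + 1;
        if (select(maxfd, &rfds, &wfds, NULL, NULL) < 0)
          {
             if (errno == EINTR) continue;
             return false;
          }

        if (FD_ISSET(src, &rfds))
          {
             ssize_t r = read(src, buf + used, sizeof(buf) - used);
             if (r > 0) used += (size_t)r;
             else if (r == 0) src_eos = true;
             else if (errno != EAGAIN && errno != EINTR) return false;
          }

        if (FD_ISSET(dst, &wfds))
          {
             ssize_t w = write(dst, buf, used);
             if (w > 0)
               {
                  /* partial write: keep the unwritten tail for the next round */
                  memmove(buf, buf + w, used - (size_t)w);
                  used -= (size_t)w;
               }
             else if (w < 0 && errno != EAGAIN && errno != EINTR) return false;
          }
     }
   return true;
}

Efl.Io.Copier keeps exactly this kind of bookkeeping (plus the optional
growing buffer) so that providers and users don't have to.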

I completely agree that for end users, having infinite buffers and
even line buffers for most tasks is a *must have*; that's why I'm
focusing on providing this in a uniform way.

You seem to be missing the layers and the split of roles... maybe
because in the previous Ecore all elements replicated that logic? That
is the only difference: the logic is still there, but moved outside so
we don't replicate it everywhere.

This benefits everyone:

 - EFL developers: reader/writer providers become simpler, with no need
to reimplement everything on their own or to inherit the bulk of a big
base class

 - EFL users: everything looks and feels the same, they can use
Efl.Io.Copier on all of them, and the signals, buffering and behavior
are the same. No need to understand Ecore_Exe, Ecore_Con and
Ecore_Con_URL events separately.

I know you're worried about "common cases"; so am I. At the end of
this project I'll have to migrate all Ecore_Con + Ecore_Con_URL users
to the new API, and I'll be the first one to come up with helpers for
these common cases... so if the common case is to read all the
information into memory, like downloading a JSON document into memory
and using it, this should be very easy and not require dozens of lines
more than the legacy code :-)


>> >> >> Efl.Io.Copier does and keeps a "read_chunk" segment that is used as
>> >> >> memory for the given slice.
>> >> >>
>> >> >> This is why Eina_Slice and Eina_Rw_Slice play well in this
>> >> >> scenario. For example you can get a slice of the given binbuf in order
>> >> >> to hand to other functions that will write/read to/from it. It
>> >> >> doesn't require a new binbuf to be created or COW logic.
>> >> >
>> >> > it requires a new eina slice struct to be allocated that points to the
>> >> > data which is EXACTLY the below binbuf api i mention.
>> >>
>> >> an eina slice is a pair of 2 values and always will be. There is nothing
>> >> opaque, no need for a pointer or an allocation. The eina_slice.h API is
>> >> mostly about passing the struct by value, not by reference/pointer.
>> >>
>> >> with binbuf indeed you're right: given its complexity you end up with an
>> >> allocated opaque memory handle, magic validation, etc.
>> >
>> > and that's what a slice is - it's an allocated opaque handle over a blob of
>> > memory... is that not just binbuf?
>>
>> it's not an opaque handle. It's a public structure, you allocate it on the
>> stack... same cost as doing "const void *x, size_t xlen". But the pair is
>> carried around, in sync, easy to use, easy to understand.
>
> it'll need to be allocated if you ever have buffering... and the data it
> points to will have to be managed.

I'm officially giving up on this until you check the code. I'm talking
about one thing, you're talking about another. :-/
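
Just so the terms are clear for everyone else following the thread: by
"slice" I mean nothing more than a length + pointer pair that lives on
the stack and is passed by value. Roughly this (a simplified stand-in;
the real definition is in eina_slice.h on the branch):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
   size_t      len;
   const void *mem;
} Slice; /* stand-in for Eina_Slice: no allocation, no magic, not opaque */

static void
dump(Slice s) /* passed by value: just two machine words */
{
   fwrite(s.mem, 1, s.len, stdout);
}

int
main(void)
{
   const char msg[] = "hello\n";
   Slice s = { .len = strlen(msg), .mem = msg }; /* lives on the stack */
   dump(s);
   return 0;
}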


>> >> This can be done transparently with the current API proposal.
>> >
>> > but in your current one - writes will fail because you don't allocate or
>> > expand an existing buffer - right? once full.. then what?
>>
>> It's just like read(2)/write(2) that you know very well. If you want
>> to copy using them, you need an intermediate buffer.
>>
>> Efl.Io.Copier is that code and holds that buffer. You can limit it or not.
>>
>> If unlimited, it read()s up to a maximum chunk size and keeps expanding
>> the buffer. Once write() returns a positive value, that amount is
>> removed from the buffer, which can shrink.
>>
>> If limited, it will stop monitoring read (partially implemented), thus
>> will not call read(2), thus will not reach the kernel and eventually
>> its internal buffer will be full and the writer process will be
>> informed.
>
> so you are FORCING an api that HAS to memcpy() at the time a slice is passed
> in before the func returns. that means either it always has to memcpy somewhere
> (or has to once writes() start failing when a kernel buffer is full) OR it
> requires a blocking api...

I'm not forcing anything, nor does it require a blocking API.

The current read() and write() methods behave like POSIX, that is, you
hand them a buffer. WHAT that buffer is is outside the scope of the
methods. This means that if you can negotiate DMA, you can simply get
the DMA handle from the writer object and use it with the reader, and
there is no memcpy() (this is a great idea... something to look at as
an extra method for the Efl.Io.Writer class and to use in
Efl.Io.Copier, e.g. with mmap()'ed files). Likewise, readers could
offer DMA. In these cases the copier class can avoid the internal
buffer altogether.

NOTE: I'm not doing the DMA part now, since this is a base for the
Efl.Net stuff, which has higher priority, but as you can see it's
doable and I can get to it later.
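
In plain POSIX terms, the mmap() case I have in mind looks roughly
like the sketch below. The blocking write() at the end just stands in
for whatever the writer object would do with the slice; the point is
that the mapping itself is handed over, with no intermediate copy:

#include <fcntl.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static bool
send_file_zero_copy(const char *path, int dst_fd)
{
   int fd = open(path, O_RDONLY);
   if (fd < 0) return false;

   struct stat st;
   if ((fstat(fd, &st) < 0) || (st.st_size <= 0))
     {
        close(fd);
        return false;
     }

   size_t len = (size_t)st.st_size;
   void *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
   close(fd); /* the mapping keeps the pages alive */
   if (map == MAP_FAILED) return false;

   /* in the copier, (map, len) would be the slice given to the writer;
    * here we write it out directly, blocking, for brevity */
   ssize_t w = write(dst_fd, map, len);

   munmap(map, len);
   return (w >= 0) && ((size_t)w == len);
}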



> what i see here is that you are designing either:
>
> 1. a blocking api (unacceptable from any main loop construct)

There are 2 properties/events: Reader.can_read and Writer.can_write.
If you write when can_write is false, you either block or get
EAGAIN/EWOULDBLOCK. Reading when can_read is false is the same. That's
the equivalent of select()/poll().

But if you read while "can_read" is set, or write while "can_write" is
set, the object is able to provide/take at least one byte without
blocking or failing.
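
In POSIX terms these two flags map directly onto poll(): POLLIN is
"can_read" and POLLOUT is "can_write". A minimal sketch of that
equivalence:

#include <poll.h>

static void
query_flags(int fd, int *can_read, int *can_write)
{
   struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };

   *can_read = *can_write = 0;
   if (poll(&pfd, 1, 0) > 0) /* timeout 0: just query the current state */
     {
        if (pfd.revents & POLLIN) *can_read = 1;
        if (pfd.revents & POLLOUT) *can_write = 1;
     }
}

The difference is that the objects turn these into properties with
change events, so you don't have to poll yourself.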

Efl.Io.Copier is an Efl.Loop.User and internally keeps an
efl_loop_job(efl_loop_user_loop_get(self)). Thus it processes chunks
in the main loop... being friendly to the rest.

Note that this is all hidden from the user. Should we want to use a
thread, we can move the whole operation to threads.


> or
> 2. an api where writes can fail when buffers are full and that requires the
> caller handle buffering and write failures themselves (which makes the api a
> pain to use and no better than raw read/write with a raw fd)

This is the case for the Readers/Writers. The pain is taken care of by
Efl.Io.Copier. They're just different roles; before, this used to be
bundled inside all the other classes.
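
For illustration, using the copier is meant to look roughly like the
sketch below. Take the identifiers (headers, EFL_IO_COPIER_CLASS,
efl_io_copier_source_set/destination_set, the "done" event and the
callback signature) as approximations of what is on the branch, not as
the final API:

#include <Eo.h>   /* efl_add() and the Eo event API */
#include <Efl.h>  /* assumed umbrella header for the Efl.Io interfaces */
#include <stdio.h>

static void
_copier_done_cb(void *data EINA_UNUSED, const Efl_Event *event EINA_UNUSED)
{
   printf("copy finished\n");
}

/* "reader" and "writer" are any objects implementing Efl.Io.Reader and
 * Efl.Io.Writer: a file, the stdio wrappers or, later, a cURL-backed
 * object. The copier owns the intermediate buffer and watches
 * can_read/can_write on both ends. */
static Eo *
start_copy(Eo *loop, Eo *reader, Eo *writer)
{
   return efl_add(EFL_IO_COPIER_CLASS, loop,
                  efl_io_copier_source_set(efl_added, reader),
                  efl_io_copier_destination_set(efl_added, writer),
                  efl_event_callback_add(efl_added,
                                         EFL_IO_COPIER_EVENT_DONE,
                                         _copier_done_cb, NULL));
}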


> or
> 3. an api that requires a memcpy of data on write ALWAYS once kernel buffers
> fill up and no ability to zero copy (which goes against the whole original
> idea of you wanting to make it efficient).

See the DMA part above.



-- 
Gustavo Sverzut Barbieri
--------------------------------------
Mobile: +55 (16) 99354-9890
