On 05/08/16 18:11, Cedric BAIL wrote:
> Hello,
>
> On Fri, Aug 5, 2016 at 7:30 AM, Tom Hacohen <t...@osg.samsung.com> wrote:
>> On 04/08/16 19:45, Gustavo Sverzut Barbieri wrote:
>>> Answering both emails here, in order:
>>>
>>>> On 04/08/16 17:24, Tom Hacohen wrote:
>>>>> I don't really have much experience/knowledge
>>>>> about this topic, but to me, for efficiency and clarity, it seems like
>>>>> the best approach would be a very simple wrapper
>>>>> around read()/write() that handles address resolution, timeout, lifetime
>>>>> and loop integration. I don't think we should care about
>>>>> binbuf or any other complicated setups; all we should get is a const
>>>>> string (which is essentially a pointer to the static buffer with which
>>>>> we called "read"), and then decide what we want to do with it on the
>>>>> client side. Calling binbuf_append is not much work.
>>>
>>> That's exactly what it is, a very thin one.
>>>
>>> The blob/binbuf are important optimizations if we care about memory
>>> consumption and reducing copies, but they are not mandatory. I'll
>>> insist a bit more on them, but if it's the overall agreement to go
>>> with buffer copies, then Okay.
>>
>> I don't see why/how, but maybe I'm missing something, so let me tell you
>> what I have in mind, and please correct me if I'm missing anything.
>>
>> In any networking code you will have:
>> char buf[MAX_BUF];
>> ssize_t n = read(..., buf, sizeof(buf));
>> eo_callback_call(obj, ..., buf, n);
>>
>> Or something similar. Binbuf just means you'll also add it to a binbuf
>> and then pass the binbuf instead of buf. So in the most basic case, that
>> is, one reader of the data (is there any other case??), binbuf is at most
>> as good as the example above, and if someone really wants to keep the
>> data, he just adds it to a binbuf on his own. I don't see how binbuf helps.
>
> It doesn't give the ability to "steal" the string from the event. So
> if, for example, you want to hand some costly parsing off to
> another thread, you will have to do a copy and push it. With a binbuf,
> you can steal it and avoid an unnecessary copy. You can also do smart
> aggregation with no copy by chaining stolen strings. None of that is
> doable with a static string, and yes, memcpy is a massive performance
> issue and has been one of the many crippling problems of Ecore_Con, so
> avoiding repeating the same mistake would be neat. Especially when
> there is no justification for going with this kind of manual
> buffer. They won't look good in bindings, they are not helping, so why?

This fails to take into account that the binbuf also needs an allocated 
string. This means you will *always* allocate, even when you don't need 
to. Yes, you'll save a memcpy when you want to steal the string, but if 
your case is that performance critical, you probably shouldn't be using 
ecore-con, callbacks and all of this infrastructure. As I said before, I 
haven't used ecore con in anything substantial, so I can't comment on 
bottlenecks, but always allocating a binbuf just feels wrong to me.

Also, being able to steal the binbuf assumes that only one callback is 
allowed, which is another limitation this implementation imposes.

>
>>>>> All of this should be with simple eo callbacks. As you pointed out, this
>>>>> is really a solved problem, I think most (all?) the frameworks and
>>>>> languages do the basics quite the same. I'd follow suit, because I've
>>>>> never had a bad experience with any. Keep the basics simple, and then
>>>>> wrap as needed with a subclass (like, for example, the send-file helper
>>>>> Cedric suggested).
>>>
>>> Problem is usually in the heavy usage; many cases are solved by using a
>>> language that handles reference counting for you.
>>>
>>> Like in Python, strings are immutable and one always keeps references
>>> to them. In other environments, it's COW, and your reference is kept
>>> intact...
>>>
>>> In C, if we use the easiest API (void* + size_t), we'll pay a price of
>>> reallocs and memcpys :-/
>>
>> I think you are pre-optimising here. I don't think that is such a big
>> deal regarding performance, and if it really is such a big deal, the
>> user can just pass the buffer to be used to the object via a special
>> fast-path API, though I don't think this should be the norm.
>
> WTF ! IT IS ! memcpy is what has the biggest impact on network
> performance. It is not early pre-optimising, it is addressing one of
> the mistakes that crippled Ecore_Con from the beginning. The fact you
> always had a memcpy kills its performance and usability. No serious
> network API would be built on the assumption that a memcpy is fine
> and that avoiding them is early pre-optimising...
>
> <snip>

You keep on saying you always need to memcpy; I honestly don't see 
where. For parsing structured packets etc., you just need to parse 
the beginning in place (should be easy) and then figure out the rest. In 
some cases you'll have to construct a buffer out of the extra data (for 
example an image), but you'd need to construct this buffer anyway and pass 
it to an image object. No data saved. In others you may want to download 
a file, which means going straight to disk, no copy there. The cases 
where you need to copy into a buffer are few and far between (from my 
first impression; please correct me if you have an example).

You keep on saying "WTF ! IT IS ! ... biggest impact ...", though I 
can't see even one example you can refer to, or one email on the ML that 
backs this up. Furthermore, memcpy being the bottleneck in network I/O is 
one of the biggest bullshit statements I've ever heard. It just cannot 
be the case.

>
>>> On Thu, Aug 4, 2016 at 2:14 PM, Tom Hacohen <t...@osg.samsung.com> wrote:
>>>> I forgot one more thing, and also was asked on IRC for some
>>>> clarifications about another, so here I am, replying to myself.
>>>>
>>>> A comment I forgot to make, which is to something cedric mentioned, and
>>>> is something that I've said a million times already and I'll say it
>>>> again because it's so damn important and people should stop doing it.
>>>> Please do not delete objects implicitly. :( Connection is closed? Don't
>>>> delete the object, I may still be dealing with it. Let me eo_unref/del
>>>> it if I'm still the rightful owner of the object, it's my
>>>> responsibility. Maybe I still want to introspect it before it's gone?
>>>> It's my decision to make... If you really think this pattern of
>>>> autodeletion is useful, add a helper function people can add as a
>>>> callback that just eo_dels an object. It's that simple.
>>>
>>> Not really, I'm trying to use what is the standard and likely I got it
>>> wrong. I'm okay to add explicit events and methods to close the
>>> socket, if that's what I should be doing :-)
>>>
>>> I just wanted to use socket.close() if people were expecting to do 
>>> socket.del().
>>>
>>> Note that this shouldn't mess with refcounts, it's more like
>>> evas_object_del() where you want the object to disappear immediately.
>>> In this case the socket is not available anymore, so people should
>>> stop using that after receiving the event.
>>
>> Yeah, don't mess with refcounts. Don't delete the object when the
>> connection dies.
>
> There are two kinds of objects that we can talk about here. First, the
> objects that are created by the server are owned by the server and indeed
> will die if the connection is dropped. I think that is fine.
>
> The second kind, the objects you create, should be under your
> responsibility. So yes, splitting close from destruction is, I
> think, necessary. Like hide and del. But still, if you del an object,
> it will also hide it. So if you do del a server or a dialer, it should
> obviously close the connection first.
>

I was referring to the second type. Though if you are talking about the 
ecore_con_client the server creates in the first case, I'd also consider 
it a user object. It can die when the parent dies, but killing it when 
the connection dies feels wrong to me. This one is not the end of the 
world, though. We do, however, need to come up with a convention for 
such objects' lifetime.

--
Tom.


_______________________________________________
enlightenment-devel mailing list
enlightenment-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
