Use it as a parallel thread; that way your read event won't be blocked.
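
A minimal sketch of that hand-off, assuming a hypothetical thread-safe
queue_push() (a pthread mutex plus condition variable, not part of libevent):
the read callback only copies the bytes out of the bufferevent and lets a
worker thread do the real processing.

    #include <event2/bufferevent.h>
    #include <event2/buffer.h>
    #include <stdlib.h>

    /* hypothetical thread-safe queue, not a libevent API */
    void queue_push(void *data, size_t len);

    /* runs in the event-loop thread; it must return quickly */
    static void read_cb(struct bufferevent *bev, void *ctx)
    {
        struct evbuffer *input = bufferevent_get_input(bev);
        size_t len = evbuffer_get_length(input);
        if (len == 0)
            return;

        char *job = malloc(len);
        if (!job)
            return;
        bufferevent_read(bev, job, len);  /* drain the input buffer */
        queue_push(job, len);             /* a worker frees it after processing */
    }

The workers just block on the queue and do the heavy lifting there, so the
event loop stays free to service the next read event.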

On Sat, Feb 25, 2017 at 10:28 AM, kushal bhattacharya <
[email protected]> wrote:

> Hi,
> I used one event base for all the threads, but when I am reading from the
> bufferevent I divided that task into different threads for processing.
>
> On Sat, Feb 25, 2017 at 1:13 AM, Tôn Loan <[email protected]> wrote:
>
>> Hi Jan and Steffen,
>>
>> Thank all of you for giving the solution as well as experience about
>> multi-threading
>>
>> @Jan: Getting these things right can take some trial and error; one thing
>> I'd advise is to look at what mature projects do, because they have
>> often (though not always) made all the mistakes and learned from them.
>>
>> Could you suggest some mature projects, as you mentioned? I have searched
>> many times, but I don't have enough experience to pick a mature project to
>> follow.
>>
>> Best Regards,
>> Loan Ton
>>
>>
>> On Fri, Feb 24, 2017 at 6:04 PM, Steffen Christgau <[email protected]> wrote:
>>
>>> On 24.02.2017 09:23, Tôn Loan wrote:
>>> > Hi Steffen,
>>> >
>>> > Thank you for your useful advice. Actually, as you said, if I let
>>> > libevent handle the networking stuff within one thread, I wonder whether
>>> > there is a bottleneck when the client sends a lot of messages to the
>>> > four sockets?
>>>
>>> Not from what libevent (or the underlying operating system calls) gives
>>> you. The bottleneck might be caused by your application, depending on
>>> how long you block the thread while processing the UDP packets. But
>>> depending on how the processing is done, you may be able to rewrite it
>>> and make the individual steps non-blocking/asynchronous as well.
>>>
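>>> One way to do that, sketched under the assumption that the processing can
>>> be broken into small, bounded steps (struct job, LAST_STEP and the step
>>> body are made up for the example), is to re-schedule the continuation
>>> through the event loop instead of looping over it in one callback:
>>>
>>>     #include <event2/event.h>
>>>     #include <sys/time.h>
>>>     #include <stdlib.h>
>>>
>>>     #define LAST_STEP 4   /* example value */
>>>
>>>     struct job { struct event_base *base; int step; /* ... packet data ... */ };
>>>
>>>     static void step_cb(evutil_socket_t fd, short events, void *arg)
>>>     {
>>>         struct job *j = arg;
>>>         /* do one small, bounded piece of work on j here */
>>>         if (++j->step < LAST_STEP) {
>>>             struct timeval now = { 0, 0 };
>>>             /* hand control back to the loop; read events can fire in between */
>>>             event_base_once(j->base, -1, EV_TIMEOUT, step_cb, j, &now);
>>>         } else {
>>>             free(j);
>>>         }
>>>     }
>>>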
>>> > I use multiple threads on different ports for the same service. Here is
>>> > my implementation:
>>> >
>>> > Client A sends command X to thread 1 on socket 1234; thread 1 then
>>> > receives command X and executes it.
>>> > Client B sends command Y to both thread 1 and thread 2 on sockets 1234
>>> > and 1235, respectively. Both threads receive the command, but thread 2
>>> > sends a signal to thread 1 and waits. Thread 1 executes the command, and
>>> > after it finishes, it sends a signal to thread 2 to update the result of
>>> > command Y.
>>>
>>> By using this design, you actually gain no benefit from using multiple
>>> threads. You process only one packet at a time, and that may well be
>>> the reason for your bottleneck. As Jan already pointed out, create one
>>> thread that does the networking stuff (with libevent) and multiple others
>>> that process the packets, to take advantage of multiple threads. It's a
>>> classic example of producers and consumers.
>>>
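>>> A rough sketch of that split, assuming a hypothetical packet queue built
>>> on a pthread mutex and condition variable (queue_push/queue_pop and
>>> process_packet are placeholders, not libevent calls):
>>>
>>>     #include <event2/event.h>
>>>     #include <pthread.h>
>>>     #include <sys/socket.h>
>>>     #include <stdlib.h>
>>>
>>>     /* hypothetical queue: copies the packet in, blocks workers when empty */
>>>     void  queue_push(const char *data, size_t len);
>>>     char *queue_pop(size_t *len);
>>>     void  process_packet(const char *data, size_t len);
>>>
>>>     /* producer: read callback running in the single libevent thread */
>>>     static void udp_read_cb(evutil_socket_t fd, short events, void *arg)
>>>     {
>>>         char buf[1500];
>>>         ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
>>>         if (n > 0)
>>>             queue_push(buf, (size_t)n);   /* cheap; never blocks for long */
>>>     }
>>>
>>>     /* consumers: a handful of these started with pthread_create() */
>>>     static void *worker(void *arg)
>>>     {
>>>         for (;;) {
>>>             size_t len;
>>>             char *pkt = queue_pop(&len);  /* waits on the condition variable */
>>>             process_packet(pkt, len);     /* the expensive part happens here */
>>>             free(pkt);
>>>         }
>>>         return NULL;
>>>     }
>>>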
>>> The only advantage of your design is that the packets are received
>>> earlier by the application than in a single-threaded version. This might
>>> prevent running out of operating-system buffers for UDP packets, but it
>>> does not make the client see faster responses. If you focus on
>>> responsiveness, your approach gives you almost nothing. If you focus on
>>> "reliability", i.e. you try to avoid packet loss, then it might be
>>> valid. However, using UDP for a reliable application is not a good idea
>>> at all.
>>>
>>> And as Jan already pointed out: benchmark/profile/measure your
>>> application: "premature optimization is the root of all evil" (or:
>>> "don't speculate - benchmark!"). I'd follow Jan's advice to hack up a
>>> small single-threaded (i.e. non-threaded) prototype with libevent that
>>> handles all four sockets, including your main application logic.
>>>
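>>> As a starting point, a stripped-down prototype along those lines (the port
>>> numbers and the empty handler are just placeholders) might look like:
>>>
>>>     #include <event2/event.h>
>>>     #include <event2/util.h>
>>>     #include <sys/socket.h>
>>>     #include <netinet/in.h>
>>>     #include <string.h>
>>>
>>>     static void on_udp(evutil_socket_t fd, short events, void *arg)
>>>     {
>>>         char buf[1500];
>>>         ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
>>>         if (n > 0) {
>>>             /* run your application logic here; keep it short or defer it */
>>>         }
>>>     }
>>>
>>>     int main(void)
>>>     {
>>>         struct event_base *base = event_base_new();
>>>         int ports[4] = { 1234, 1235, 1236, 1237 };   /* example ports */
>>>
>>>         for (int i = 0; i < 4; i++) {
>>>             evutil_socket_t fd = socket(AF_INET, SOCK_DGRAM, 0);
>>>             struct sockaddr_in sin;
>>>             memset(&sin, 0, sizeof(sin));
>>>             sin.sin_family = AF_INET;
>>>             sin.sin_addr.s_addr = htonl(INADDR_ANY);
>>>             sin.sin_port = htons((unsigned short)ports[i]);
>>>             bind(fd, (struct sockaddr *)&sin, sizeof(sin));
>>>             evutil_make_socket_nonblocking(fd);
>>>
>>>             struct event *ev = event_new(base, fd, EV_READ | EV_PERSIST,
>>>                                          on_udp, NULL);
>>>             event_add(ev, NULL);
>>>         }
>>>
>>>         event_base_dispatch(base);   /* one thread services all four sockets */
>>>         return 0;
>>>     }
>>>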
>>> Depending on whether you are satisfied with its performance, you can then
>>> choose between a) processing the packets in multiple (not just one!)
>>> threads, or b) rewriting the processing to be as non-blocking as possible
>>> (my personal preference), or a combination of both.
>>>
>>> Regards, Steffen
>>>
>>
>>
>
