Re: Netty thread model clarification

2019-06-01 Thread sugiceg
"I'm just curious why using your own ExecutorService inside the handler is 
preferable to adding someEventExecutorGroup to that handler here.  Doesn't 
that achieve the same function?"

Did you find an answer to this? What is the difference between "SslHandler 
sslHandler = new SslHandler(sslEngine, eventExecutorGroup);" and 
"ctx.pipeline().addFirst(eventExecutorGroup, 
Constants.SSL_HANDLER_NAME, sslHandler);"?
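
For comparison, here is a minimal sketch of the two forms as I understand them. 
The names (sslEngine, eventExecutorGroup, Constants.SSL_HANDLER_NAME) are just 
the placeholders from the question, the two forms are alternatives rather than 
something you would combine, and I am assuming the Netty 4.1 SslHandler 
constructor that accepts an Executor for delegated tasks:

    // Form 1: the group is handed to the SslHandler constructor. As far as I
    // can tell it is then only used to run the SSLEngine's delegated tasks;
    // the handler's event methods still run on the channel's event loop.
    SslHandler sslHandler = new SslHandler(sslEngine, eventExecutorGroup);
    ctx.pipeline().addFirst(Constants.SSL_HANDLER_NAME, sslHandler);

    // Form 2: the group is handed to the pipeline. All of this handler's
    // event methods are then invoked on a thread from that group instead of
    // on the channel's event loop.
    SslHandler sslHandler = new SslHandler(sslEngine);
    ctx.pipeline().addFirst(eventExecutorGroup, Constants.SSL_HANDLER_NAME, sslHandler);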



On Tuesday, 8 October 2013 08:40:30 UTC-7, sean@gmail.com wrote:
>
> Greetings,
>
> I just want to validate my understanding of Netty 4's thread model 
> compared to Netty 3's (specifically as it applies to NIO).
>
> With Netty 3, you would specify a boss thread pool and a worker thread 
> pool (all/most of the examples use Executors.newCachedThreadPool() with the 
> default args).  ChannelHandler events would be called on the I/O threads 
> from the worker group.  Any delay in the handler methods would cause a 
> delay in processing I/O for other connections.  In order to compensate for 
> this, you could add an ExecutionHandler to the pipeline which would cause 
> the handler methods to be fired on an executor thread and therefore 
> wouldn't affect I/O.
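
A minimal Netty 3 sketch of that arrangement, for reference. MyBusinessHandler, 
the port, and the ExecutionHandler pool sizes are placeholders, not values from 
this thread:

    // Netty 3: boss/worker pools for I/O, plus an ExecutionHandler so that
    // slow handler methods run on a separate pool instead of the I/O threads.
    final ExecutionHandler executionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

    ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),    // boss threads
                    Executors.newCachedThreadPool()));  // worker (I/O) threads

    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
            // Handlers added after the ExecutionHandler are invoked on its
            // pool, so they no longer delay I/O for other connections.
            return Channels.pipeline(executionHandler, new MyBusinessHandler());
        }
    });
    bootstrap.bind(new InetSocketAddress(8080));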
>
> In Netty 4, you specify a boss group and worker group, and as channels 
> connect they are registered to specific threads.  So channel 1 will always 
> have its handler events fired on thread A, channel 2 will always have its 
> handler events fired on thread B.  Again, any delayed processing that 
> occurs in the handler method will hurt I/O for other channels registered to 
> that worker thread.  To compensate, you specify an EventExecutorGroup so 
> that I/O is not affected by long-running tasks.
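
And a Netty 4 sketch of the same idea. The handler classes are the placeholder 
names used later in this thread; the port and thread counts are arbitrary:

    // Netty 4: boss/worker EventLoopGroups for I/O, plus a separate
    // EventExecutorGroup so a slow handler does not stall its event loop.
    EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    final EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(4);

    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .childHandler(new ChannelInitializer<SocketChannel>() {
         @Override
         protected void initChannel(SocketChannel ch) {
             ChannelPipeline p = ch.pipeline();
             p.addLast("decoder", new MyProtocolDecoder());
             p.addLast("encoder", new MyProtocolEncoder());
             // Events for this handler are fired on a thread from
             // handlerGroup rather than on the channel's event loop.
             p.addLast(handlerGroup, "handler", new MyProtocolHandler());
         }
     });
    b.bind(8080);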
>
> Assuming everything above is correct...
>
> Assume that I create a DefaultEventExecutorGroup, passing 4 as the number 
> of threads, and assign that to my handler in the pipeline.  Now, 8 channels 
> connect:
>
> Channel A: EventExecutor 1
> Channel B: EventExecutor 2
> Channel C: EventExecutor 3
> Channel D: EventExecutor 4
> Channel E: EventExecutor 1
> Channel F: EventExecutor 2
> Channel G: EventExecutor 3
> Channel H: EventExecutor 4
>
> Each channel is registered to an EventExecutor thread.  If Channel A in the 
> above example performs a long-running task (say, in channelRead0), then 
> won't Channel E be blocked during this time?  Is that correct or am I not 
> understanding something?  If I am correct, why would I ever want to use an 
> EventExecutor?  I feel like I would be better off using a shared Executor 
> directly from my handler methods (and handling thread synchronization 
> myself).  At least in that case I wouldn't be blocking other clients.
>
> Thank you,
> Sean
>
>



Re: Netty thread model clarification

2018-02-16 Thread 'Norman Maurer' via Netty discussions
The same is still true... a Channel is pinned to an EventLoop, and so its I/O is 
processed by one thread.
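
A tiny sketch of what that pinning looks like from inside a handler 
(PinnedHandler is just an illustrative name):

    public class PinnedHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            // The channel stays registered with one EventLoop for its whole
            // lifetime, so as long as this handler was added without a
            // separate EventExecutorGroup, this always holds:
            assert ctx.channel().eventLoop().inEventLoop();
            ctx.fireChannelRead(msg);
        }
    }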

> On 16.02.2018 at 00:04, Sean Bright wrote:
> 
> Eric,
> 
> I don't know what I can add beyond what has already been written in this 
> thread. I haven't been following Netty development, so I don't know if this 
> is still the case or not. At the time, a channel was bound to a thread in an 
> EventExecutorGroup, so you could run into a situation where a group of 
> channels that were all bound to the same thread would be under-serviced if 
> one task on that thread took a lot of CPU time. If instead you delegate to a 
> standard ExecutorService, each task for a given channel might run on a 
> separate thread, so a single task couldn't starve all of the channels.
> 
> Kind regards,
> Sean
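
A minimal sketch of the delegation Sean describes above. MyRequest, MyResponse, 
doExpensiveWork and the pool size are placeholders; the important detail is 
that writeAndFlush() may be called from a non-event-loop thread:

    public class DelegatingHandler extends SimpleChannelInboundHandler<MyRequest> {
        // Shared across all channels and sized independently of the event loops.
        private static final ExecutorService WORKERS = Executors.newFixedThreadPool(16);

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, MyRequest req) {
            // MyRequest is assumed to be a plain decoded object; if it were a
            // ReferenceCounted buffer it would have to be retain()ed before
            // being used after this method returns.
            WORKERS.submit(() -> {
                MyResponse resp = doExpensiveWork(req); // long-running work off the event loop
                // Channel writes are thread-safe; Netty schedules them back
                // onto the channel's event loop internally.
                ctx.writeAndFlush(resp);
            });
        }
    }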
> 
>> On Friday, February 9, 2018 at 8:40:00 PM UTC-5, Eric Fulton wrote:
>> Hey, sorry, I know this is a very old thread, but can you explain why, in your 
>> example, you think it would be better to use your own ExecutorService over 
>> someEventExecutorGroup?  What would be the difference?
>> 
>> 
>> 
>> 
>>> On Wednesday, October 9, 2013 at 7:27:06 AM UTC-7, Sean Bright wrote:
>>> Thanks Norman.
>>> 
>>> So based on this, it seems that if I have a "typical" pipeline:
>>> 
>>> ctx.pipeline().addLast("decoder", new MyProtocolDecoder());
>>> ctx.pipeline().addLast("encoder", new MyProtocolEncoder());
>>> ctx.pipeline().addLast(someEventExecutorGroup, "handler", new 
>>> MyProtocolHandler());
>>> 
>>> Using an EventExecutorGroup doesn't actually buy me anything.  I/O won't be 
>>> blocked, but handler execution will.
>>> 
>>> I understand the point of doing all of this - if you know your methods will 
>>> always be called on the same thread it reduces the synchronization 
>>> complexities in the handlers - but in this model when you are dealing with 
>>> hundreds of connections, any handler method that causes a delay in 
>>> processing will block the handlers of (num_connections / 
>>> num_event_executor_threads) - 1 other channels.
>>> 
>>> So it would appear that in order to get the behavior that I want (any one 
>>> connection should not affect another), I would eliminate the 
>>> EventExecutorGroup and would need to submit tasks to an ExecutorService 
>>> that I manage myself, correct?
>>> 
>>> I guess I just don't see how an EventExecutorGroup is beneficial.
>>> 
>>> In any case, I love Netty.  Keep up the good work!
>>> 
>>> 