The reverse is true for the producer. Let's assume the writer/producer has a 
list of ByteBuffers. When the selector fires to indicate the channel is 
writable, then either:
- write until there's nothing left to write, unregister for the write event, 
and return to event processing
- write until the channel is congestion controlled (the write returns short 
or zero because the socket send buffer is full), stay registered for the 
write event, and return to event processing

This assumes you register for the write event when a ByteBuffer is added to 
the output queue and the queue was previously empty.
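
Roughly, in Java NIO terms (untested sketch, assuming the usual java.nio 
imports; handleWritable, enqueue and outputQueue are just placeholder names):

    // Called when the selector reports OP_WRITE for the channel.
    void handleWritable(SelectionKey key, Queue<ByteBuffer> outputQueue) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!outputQueue.isEmpty()) {
            ByteBuffer buffer = outputQueue.peek();
            channel.write(buffer);
            if (buffer.hasRemaining()) {
                // Socket send buffer is full (congestion controlled): stay
                // registered for OP_WRITE and return to event processing.
                return;
            }
            outputQueue.poll(); // fully written, move on to the next buffer
        }
        // Nothing left to write: unregister for OP_WRITE.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }

    // Producer side: register for OP_WRITE only on the empty -> non-empty transition.
    void enqueue(SelectionKey key, Queue<ByteBuffer> outputQueue, ByteBuffer data) {
        boolean wasEmpty = outputQueue.isEmpty();
        outputQueue.add(data);
        if (wasEmpty) {
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            key.selector().wakeup(); // needed if the producer runs off the reactor thread
        }
    }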

(not at a computer to send more comprehensive info)

-Chad

Sent from my iPhone

On Dec 1, 2011, at 8:52 AM, Emmanuel Lecharny <[email protected]> wrote:

> On 12/1/11 2:49 PM, Emmanuel Lécharny wrote:
>> Forwarded to the ML
>> 
>> Quick note below.
>> 
>> Sent from my iPhone
>> 
>> On Dec 1, 2011, at 7:51 AM, Emmanuel Lécharny <[email protected]> wrote:
>> 
>>> On 12/1/11 1:32 PM, Chad Beaulac wrote:
>>>> Hi Emmanuel,
>>>> 
>>>> A 1k ByteBuffer will be too small for large data pipes. Consider using 64k
>>>> like you mentioned yesterday.
>>> 
>>> Yes, this is probably what I'll do.
>>>> Draining the channel before returning control to the program can be
>>>> problematic. This thread can monopolize the CPU and other necessary
>>>> processing could get neglected. The selector will fire again when there's
>>>> more data to read. Suggest removing the loop below and using a 64k input
>>>> buffer.
>>> If we poll the channel with a small buffer, what will happen is that we 
>>> will generate a messageReceived event, which will be processed immediately. 
>>> Then we will reset the selector to OP_READ, and we will immediately read 
>>> from the channel again.
>>> 
>>> It's a bit difficult for me to see how it could be less CPU-consuming than 
>>> reading everything immediately and then going down the chain.
>>> 
>>> Do you have any technical information to back your claim? I would be very 
>>> interested to avoid falling into a trap I haven't seen.
>> 
>> It's not a question of CPU consumption. It's managing the reactor thread 
>> fairly.
>> 
>> If you sit in a tight loop like that and one channel has a high data rate, 
>> you may not get out in a timely fashion to service change operations (connect, 
>> disconnect, registering for write operations).
> 
> That totally makes sense!
> 
>> 
>> I can forward a URL to an example later.
> Yes, I'd like to read more about this issue.
> 
> Anyway, I'll remove the loop, and use a wider buffer.
> 
> 
> -- 
> Regards,
> Cordialement,
> Emmanuel Lécharny
> www.iktek.com
> 
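
P.S. For the read side, the version without the drain loop looks roughly like 
this (untested sketch; readBuffer is the 64k buffer allocated once per 
session, and fireMessageReceived is just a placeholder for whatever hands the 
data down the chain):

    // Called when the selector reports OP_READ for the channel.
    void handleReadable(SelectionKey key, ByteBuffer readBuffer) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        readBuffer.clear();               // one 64k buffer, reused for every read
        int n = channel.read(readBuffer); // single bounded read per select pass
        if (n < 0) {
            key.cancel();
            channel.close();              // peer closed the connection
            return;
        }
        if (n > 0) {
            readBuffer.flip();
            fireMessageReceived(readBuffer); // hand the data down the chain
        }
        // No loop: if more data is pending the selector fires again, so other
        // channels and pending change operations get serviced in between.
    }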
