Hi all,
Maybe Grizzly's blocking-read style would be a decent solution here. It
does the trick with an additional, temporary SelectionKey:
http://weblogs.java.net/blog/jfarcand/archive/2006/05/tricks_and_tips_1.html
http://weblogs.java.net/blog/jfarcand/archive/2006/06/tricks_and_tips.html

The code framework looks like the following; it could be used inside MINA,
or cooperate with MINA framework objects.
Perhaps, like Grizzly, IoSession could add blockingWrite or blockingRead
methods.


try {
    SocketChannel socketChannel = (SocketChannel) key.channel();
    int count = 1;
    int byteRead = 0;

    // Drain whatever is already available on the non-blocking channel.
    while (count > 0) {
        count = socketChannel.read(byteBuffer);
        if (count > 0) {
            byteRead += count;
        }
    }

    if (byteRead == 0 && count == 0) {
        // Nothing was available yet: park on a temporary selector until
        // the channel becomes readable or the timeout expires.
        readSelector = SelectorFactory.getSelector();
        tmpKey = socketChannel.register(readSelector, SelectionKey.OP_READ);
        int code = readSelector.select(readTimeout);
        // Clear OP_READ instead of cancelling, so the pooled selector
        // can be reused by SelectorFactory.
        tmpKey.interestOps(tmpKey.interestOps() & ~SelectionKey.OP_READ);

        if (code == 0) {
            return 0; // timed out, nothing read
        }

        count = 1;
        while (count > 0) {
            count = socketChannel.read(byteBuffer);
            if (count > 0) {
                byteRead += count;
            }
        }
    }
} catch (Throwable t) {
....
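For reference, the same temporary-selector trick can be sketched as a
self-contained program using only java.nio — no MINA or Grizzly classes.
The `blockingRead` helper and the loopback setup below are my own
illustration, not code from either framework:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class TempSelectorRead {

    // Read "blockingly" from a non-blocking channel: drain what is
    // available, and if nothing was read, park on a temporary selector
    // until the channel becomes readable or the timeout expires.
    // Returns the total bytes read (0 on timeout or EOF).
    static int blockingRead(SocketChannel ch, ByteBuffer buf, long timeoutMs)
            throws IOException {
        int total = 0;
        int n;
        while ((n = ch.read(buf)) > 0) {
            total += n;
        }
        if (total == 0 && n == 0) {
            // Grizzly pools these selectors; opening a fresh one here
            // keeps the sketch simple.
            Selector tmp = Selector.open();
            try {
                SelectionKey key = ch.register(tmp, SelectionKey.OP_READ);
                if (tmp.select(timeoutMs) == 0) {
                    return 0; // timed out
                }
                key.cancel();
                while ((n = ch.read(buf)) > 0) {
                    total += n;
                }
            } finally {
                tmp.close(); // also deregisters the channel
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            try (SocketChannel client =
                         SocketChannel.open(server.getLocalAddress());
                 SocketChannel peer = server.accept()) {
                peer.configureBlocking(false);
                client.write(ByteBuffer.wrap(
                        "hello".getBytes(StandardCharsets.UTF_8)));
                ByteBuffer buf = ByteBuffer.allocate(64);
                int read = blockingRead(peer, buf, 1000);
                buf.flip();
                // Prints the byte count and the payload that arrived.
                System.out.println(read + ":" + StandardCharsets.UTF_8.decode(buf));
            }
        }
    }
}
```

If the written bytes have not yet crossed the loopback when `blockingRead`
is entered, the first drain loop reads nothing and the temporary selector
path is exercised, which is exactly the case the Grizzly trick handles.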

Regards,

2007/7/5, Trustin Lee <[EMAIL PROTECTED]>:

On 7/4/07, Chris Chalmers <[EMAIL PROTECTED]> wrote:
> Hi all
>
> Checking this again, it seems that an OOM is being thrown after all. The
> IoFilter throttling doesn't seem to kick in until it is too late,
> unfortunately.
>
> Looking at the SocketIoProcessor class, if I apply the throttle before
> the read call, this achieves the desired effect for me: (ie: limits
> memory usage significantly):
>
>     private void process( Set<SelectionKey> selectedKeys )
>     {
>        ...
>               if (!throttle(session)) {
>                 read( session );
>               }
>        ...
>     }
>
> Unfortunately this solution is not clean as I am forced to modify the
> Mina source code and not use the API.
> Any better ideas anyone?
>
> Chris Chalmers wrote:
> > Hi Peter
> >
> > I'm now getting something strange:
> > - my read is throttling fine based on the number of outstanding writes
> > on a linked session
> > - the session.resumeRead() method is being called when the linked
> > writes have dropped
> > - however, the read is not resumed.
> >
> > I have limited memory usage to 64Mb, and at the point of session
> > resumption, 63.8Mb has been read in.
> > When using 256Mb, the process completes; at the point of resume, it
> > seems like the allocated memory doubles - I am hazarding a guess that
> > this could be causing the problem when using 64Mb. However, I don't
> > get an out-of-memory exception.
> >
> > Any ideas?

Did you try to add ReadThrottleFilter and WriteBufferLimitFilter
*before* ExecutorFilter?  I'm not sure this will work, but placing the
filters before the ExecutorFilter will make the traffic control more
immediate.
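Independent of MINA, the effect of applying back-pressure before versus
after the hand-off to a worker pool can be sketched with plain
java.util.concurrent. The class name and the numbers below are illustrative
only, not MINA API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottlePlacementDemo {

    public static void main(String[] args) throws Exception {
        // Throttling *after* the hand-off: an unbounded queue lets every
        // pending event pile up in memory before any control kicks in.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        int unboundedMax = submitAll(unbounded);

        // Throttling *before* the hand-off: a bounded queue plus
        // CallerRunsPolicy pushes back on the producer immediately,
        // so buffered work stays capped.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(4),
                new ThreadPoolExecutor.CallerRunsPolicy());
        int boundedMax = submitAll(bounded);

        System.out.println("unbounded max queued = " + unboundedMax);
        System.out.println("bounded max queued = " + boundedMax);
    }

    // Submits 100 slow tasks and tracks the largest queue depth observed.
    static int submitAll(ThreadPoolExecutor pool) throws InterruptedException {
        int maxQueued = 0;
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException ignored) {
                }
            });
            maxQueued = Math.max(maxQueued, pool.getQueue().size());
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return maxQueued;
    }
}
```

The bounded pool never queues more than its capacity, while the unbounded
one accumulates nearly every task — which is the same shape of problem as
throttling only after the ExecutorFilter has already buffered the events.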

It would be best if you could provide us with some test code that
reproduces the problem; that is the easiest way for us to fix it on our side.

HTH,
Trustin
--
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP Key ID: 0x0255ECA6




--
向秦贤
