On 04/28/2008 11:40 AM, Emmanuel Lecharny wrote:
Yes and no - we should at least make support methods available to make this easier. MINA won't win any fans by being labeled "framework that makes decoder implementers do the most work" :-)
The best would be to offer an API with a blocking getNextByte() method. At the moment, you have to do something like BB.hasRemaining() before grabbing each byte. Painful... and inefficient!

Blocking also has the problem of consuming threads. The ideal codec would be a fully non-blocking state machine (this is what I mean when I say "can exit at any point"). So for example, an HTTP state machine might accept "GE" before the buffer runs out. The method would then return; the state machine would know to expect "T" next.
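
Here's a rough, untested sketch of what I mean (class and field names are made up, not an existing MINA API): a tiny matcher that can stop at any byte boundary and pick up where it left off on the next call.

    import java.nio.ByteBuffer;

    // Hypothetical non-blocking decoder fragment: matches the literal
    // "GET " across any number of buffer fragments, remembering its
    // position between calls so it can "exit at any point".
    public class HttpMethodMatcher {
        private static final byte[] EXPECTED = "GET ".getBytes();
        private int pos; // how many bytes of EXPECTED we have already seen

        // Consumes as much of the buffer as possible. Returns true once
        // the full literal has been matched; returns false if the buffer
        // ran out first (call again with more data). Throws on mismatch.
        public boolean decode(ByteBuffer in) {
            while (in.hasRemaining() && pos < EXPECTED.length) {
                if (in.get() != EXPECTED[pos]) {
                    throw new IllegalStateException("protocol violation");
                }
                pos++;
            }
            return pos == EXPECTED.length;
        }
    }

If the buffer runs out after "GE", decode() returns false and pos remembers that "T" is expected next, with no thread parked anywhere.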

In practice, this is really hard to do in Java. This is why Trustin is talking about a tool that is "similar to ANTLR"; it would be a tool to parse a simple language to generate a highly efficient non-blocking state machine (such as a DFA) in bytecode. This type of tool has the potential to be really excellent - it would make it easy for anyone to write an extremely efficient protocol codec for just about any protocol you could imagine.

If you saturate the output channel, though, you'll have to handle that situation somehow. Ultimately it has to be up to the protocol decoder to detect a saturated output channel and take whatever action is necessary to squelch the output message source until the output channel can drain again.

I would prefer the underlying layer to handle this case. The encoder is responsible for encoding the data, not for handling the client's sloppiness.

But the behavior of the client is almost always part of the protocol. Also, the right behavior might not be to block. My use case in JBoss Remoting, for example, is much more complex: a connection might have four or five client threads using it at the same time, so I'd need to detect a saturated channel and block the client threads. If the output channel is saturated, the input channel might still be usable, and vice versa.
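
To make that concrete, here's a minimal, hypothetical sketch of the gating I have in mind; the saturation signal itself would come from the transport layer, which isn't shown:

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical gate shared by all client threads on one connection:
    // every writer parks while the output channel is saturated.
    public class OutputGate {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition unsaturated = lock.newCondition();
        private boolean saturated;

        // Called by the transport when the write queue fills up or drains.
        public void setSaturated(boolean value) {
            lock.lock();
            try {
                saturated = value;
                if (!saturated) {
                    unsaturated.signalAll(); // wake every parked client thread
                }
            } finally {
                lock.unlock();
            }
        }

        // Client threads call this before writing a message.
        public void awaitWritable() throws InterruptedException {
            lock.lock();
            try {
                while (saturated) {
                    unsaturated.await();
                }
            } finally {
                lock.unlock();
            }
        }
    }

Note that only the output side is gated here; the input side keeps running independently, which is exactly the point.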

The output channel and the input channel operate completely independently in many protocols (including mine). Also, UDP-based protocols often have completely different policies.

At some point, we need to establish some policy about how to handle such problems. Even a disk is a limited resource.

Yes, but ultimately that decision *has* to be made by the protocol implementor. It depends on what is more important: allowing the sender to utilize maximum throughput no matter what (in which case you'd spool to disk if you can't handle the messages fast enough), or "pushing back" when the server is saturated (by blocking the client using TCP mechanisms, by sending your own squelch message, or by some other mechanism).
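
For the "TCP mechanisms" option, the usual trick in plain NIO is simply to stop reading. A sketch, assuming a standard selector loop (helper names made up):

    import java.nio.channels.SelectionKey;

    // Dropping OP_READ interest stops draining the socket; the kernel
    // receive buffer fills, the TCP window closes, and the sender blocks.
    public final class ReadThrottle {
        private ReadThrottle() {}

        public static void suspendRead(SelectionKey key) {
            key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
        }

        public static void resumeRead(SelectionKey key) {
            key.interestOps(key.interestOps() | SelectionKey.OP_READ);
            key.selector().wakeup(); // re-evaluate the interest set promptly
        }
    }

The nice property is that no protocol-level squelch message is needed; the push-back propagates through TCP flow control for free.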

So I'd cast my (useless and non-binding) vote behind either using ByteBuffer with static support methods, or using a byte array abstraction object with a separate buffer abstraction, as Trustin suggests.
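
To illustrate the first option, here's the kind of static support method I mean (entirely hypothetical, not an existing MINA API): it wraps the hasRemaining()/get() dance so decoder code doesn't have to repeat it, without blocking and without a new buffer abstraction.

    import java.nio.ByteBuffer;

    // Hypothetical static support methods over plain ByteBuffer.
    public final class BufferSupport {
        // Sentinel meaning "buffer exhausted, wait for more data".
        public static final int EOB = -1;

        private BufferSupport() {}

        // Returns the next byte as 0..255, or EOB if the buffer is empty.
        public static int getNextByte(ByteBuffer in) {
            return in.hasRemaining() ? in.get() & 0xFF : EOB;
        }

        // Fills dst as far as possible; returns how many bytes were copied.
        public static int getBytes(ByteBuffer in, byte[] dst, int off, int len) {
            int n = Math.min(len, in.remaining());
            in.get(dst, off, n);
            return n;
        }
    }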

Some experimenting with code could help at this point. Staying at such a high level doesn't help much in grasping the real bytes we are dealing with at the network level ;)

Agreed.

- DML
