Pavel Zdenek wrote:
2009/11/17, Emmanuel Lecharny <[email protected]>:
<snip/>
- if the use of mina described here is correct

Enqueueing two codecs is a bit courageous...


Nice, I was just about to write my own post on the matter of
enqueueing two codecs, which also stopped working for me with RC1.
I have found the reason why it stopped working, and it matches the
"courage" label. In revision 790000 of 30 June,
ProtocolCodecFilter was actually modified to PROHIBIT multiple
instances in the chain. The private AttributeKeys were changed to
static. This means that there exists only one encoder and one decoder
instance per whole chain, namely the one for which initCodec
was called last. Unfortunately the overwriting of existing
instances is completely silent.
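A minimal sketch (plain Java with a map standing in for the IoSession, not MINA's actual internals; all names here are illustrative) of why a static key has this effect — both filter instances store their decoder under the same key, so the second initCodec() silently overwrites the first:

```java
import java.util.HashMap;
import java.util.Map;

public class StaticKeyDemo {

    // Simulates an IoSession's attribute map.
    static class FakeSession {
        final Map<Object, Object> attributes = new HashMap<>();
    }

    static class CodecFilter {
        // As an instance field, each filter would get its own slot;
        // making the key static collapses all filters onto one slot.
        private static final Object DECODER_KEY = "decoder";

        private final String decoderName;

        CodecFilter(String decoderName) {
            this.decoderName = decoderName;
        }

        void initCodec(FakeSession session) {
            session.attributes.put(DECODER_KEY, decoderName);
        }

        String getDecoder(FakeSession session) {
            return (String) session.attributes.get(DECODER_KEY);
        }
    }

    public static void main(String[] args) {
        FakeSession session = new FakeSession();
        CodecFilter first = new CodecFilter("cumulativeDecoder");
        CodecFilter second = new CodecFilter("demuxingDecoder");

        first.initCodec(session);
        second.initCodec(session);

        // The first filter now silently sees the second filter's decoder.
        System.out.println(first.getDecoder(session)); // demuxingDecoder
    }
}
```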
Damnit! That's my fault. I didn't realize back then that moving those elements to static had such a side effect. I must have been totally drunk...
I even contacted the committer directly some weeks ago, but he
didn't respond. There surely must be a deeper reason, an architectural
decision, which induced this change?
Well, AFAIR, it was probably an idiotic attempt to optimize the code.

Btw, as I said in my response to Pavel, don't mail committers privately, it kills the communication. As everyone can see, one person's problem can ring a bell for another user, and in the end a solution can be found.
About my reasons for being courageous: I'm writing a server which accepts
traffic from microcontroller-based GPRS stations with very limited
memory resources. Because of that, messages are packetized to about
0.5 kB, which is the limit of what the client can prepare at once. Each
packet has its own header with a length and a sequence number. For that I
obviously need a CumulativeProtocolDecoder. After receiving all the
packets, the server joins them, checks the CRC, and THEN I need the actual
DemuxingProtocolCodecFactory to handle the big thing. There are about
20 different message types, so I would really like to use what is
already done, instead of somehow reinventing my own demuxing wheel
inside the SessionHandler.
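For what it's worth, the reassembly step described above can be sketched outside MINA. The header layout (int length, int sequence) and the use of CRC32 are invented for illustration; the real protocol's framing may differ:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Accumulates packets of the form [int length][int seq][payload],
// joins the payloads in sequence order, and verifies a CRC32 over
// the joined body before it would be handed to the demuxing stage.
public class Reassembler {

    private final ByteArrayOutputStream body = new ByteArrayOutputStream();
    private int expectedSeq = 0;

    // Feed one packet; returns true if it was accepted.
    public boolean offer(ByteBuffer packet) {
        int length = packet.getInt();
        int seq = packet.getInt();
        if (seq != expectedSeq) {
            return false; // out of order: real code would buffer or reset
        }
        byte[] payload = new byte[length];
        packet.get(payload);
        body.write(payload, 0, length);
        expectedSeq++;
        return true;
    }

    // After the last packet, verify the CRC sent by the client and,
    // on success, return the joined message for demultiplexing.
    public byte[] finish(long expectedCrc) {
        byte[] joined = body.toByteArray();
        CRC32 crc = new CRC32();
        crc.update(joined);
        return crc.getValue() == expectedCrc ? joined : null;
    }
}
```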
When I wrote 'courageous', I meant that the initial code was not really intended to allow codecs to be chained, at least in 2.0. The ProtocolCodecFilter messageReceived() code starts with:

    public void messageReceived(NextFilter nextFilter, IoSession session,
            Object message) throws Exception {
        LOGGER.debug("Processing a MESSAGE_RECEIVED for session {}", session.getId());

        if (!(message instanceof IoBuffer)) {
            nextFilter.messageReceived(session, message);
            return;
        }
        ...

It's obvious that if the next filter is a codec, then you must pass it an IoBuffer. This is absolutely not what we want to pass, as the first codec usually transforms an IoBuffer into some other data structure, and the second codec is supposed to deal with this very data structure.

This is the reason I said 'courageous': you have to know about this atrocious hack in order to get the second codec to work, which means you also have to store the intermediate result in the session. No fun ...
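A sketch (not real MINA code; a map stands in for the session, and all names are illustrative) of what that workaround amounts to: the first codec parks its decoded object in the session and forwards the raw buffer, and the second codec ignores the forwarded buffer and fishes the object back out:

```java
import java.util.HashMap;
import java.util.Map;

public class IntermediateResultHack {

    static final String INTERMEDIATE = "intermediate.result";

    // First codec: decode the buffer, stash the result, forward the
    // buffer so the next codec filter still sees an IoBuffer-like message.
    static Object firstCodec(Map<String, Object> session, byte[] buffer) {
        String decoded = new String(buffer); // stand-in for real decoding
        session.put(INTERMEDIATE, decoded);
        return buffer;
    }

    // Second codec: ignore the forwarded buffer, use the stashed result.
    static Object secondCodec(Map<String, Object> session, Object message) {
        return session.get(INTERMEDIATE);
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        Object forwarded = firstCodec(session, "payload".getBytes());
        Object result = secondCodec(session, forwarded);
        System.out.println(result); // payload
    }
}
```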

Unless I'm totally wrong, of course !
What is the architecturally correct, and acceptably less courageous,
way to implement such a protocol?
From a pragmatic POV, when using MINA 2.0, try to get all the codecs into one single filter. That could lead to a rather complex stateful decoder, but at least you keep full control over the code.

Hope that MINA 3.0 will be better !


--
cordialement, regards,
Emmanuel Lécharny
www.iktek.com
directory.apache.org

