Thanks Maarten!
But that answer just confuses me more....
Our decoder tests have always arbitrarily chunked data into random-sized
blocks before calling the Decoders' decode() method. These tests have
been working for quite some time.
It is only on the VmPipe connection (simulated data sets) that we are
seeing this issue (and if I trusted my memory half as much as my logic,
I'd swear this *used to* work; maybe that changed when I upgraded to M2?).
Our Decoders have been MessageDecoders (descendants of
MessageDecoderAdapter) that have been chained in via
DemuxingProtocolCodecFactory-descendants calling their
'addMessageDecoder()' one or more times to define a protocol.
Switching decoders to ProtocolDecoders (descended from
CumulativeProtocolDecoder) means I can no longer use the same
CodecFactory mechanisms, nor can I have multiple decoders (say for
different packet types) within a single protocol. Is this a necessary
rewrite that I should just get through now rather than later, or is
there a better solution?
Each protocol is currently defined via:
* One or more Decoders (extends MessageDecoderAdapter)
* One or more Encoders (implements MessageEncoder<T>)
* One Handler (extends IoHandlerAdapter)
* One Codec Factory (extends DemuxingProtocolCodecFactory)
boB
Maarten Bosteels wrote:
Hi boB,
What you describe is the expected behaviour :-)
Your decoder is notified whenever new data arrives. Accumulating this
data (when necessary) is the responsibility of the decoder.
Usually the easiest thing to do is to extend CumulativeProtocolDecoder.
http://mina.apache.org/report/trunk/apidocs/org/apache/mina/filter/codec/CumulativeProtocolDecoder.html
Also have a look at
http://mina.apache.org/tutorial-on-protocolcodecfilter-for-mina-2x.html
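To make the accumulation concrete, here is a MINA-free sketch of roughly what CumulativeProtocolDecoder does on each messageReceived: buffer the new chunk, hand the whole cumulation to the decode logic, and keep any unconsumed bytes for the next call. The 0x01/0xFF framing is taken from boB's example below; the class and method names here are made up for illustration, and the real class works on IoBuffer and calls your doDecode() in a loop.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Simplified model of CumulativeProtocolDecoder's accumulate-then-decode
// loop (illustrative only; not the real MINA implementation).
class CumulativeFrameDecoder {
    private static final byte END = (byte) 0xFF;

    // Bytes left over from previous chunks (a partial frame, if any).
    private final ByteArrayOutputStream cumulation = new ByteArrayOutputStream();

    // Called once per arriving chunk; returns every complete frame
    // (start byte through 0xFF terminator) and retains the remainder.
    List<byte[]> decode(byte[] chunk) {
        cumulation.write(chunk, 0, chunk.length);
        byte[] buf = cumulation.toByteArray();

        List<byte[]> frames = new ArrayList<>();
        int consumed = 0;
        for (int i = 0; i < buf.length; i++) {
            if (buf[i] == END) {
                // Frame runs from 'consumed' up to and including the terminator.
                byte[] frame = new byte[i - consumed + 1];
                System.arraycopy(buf, consumed, frame, 0, frame.length);
                frames.add(frame);
                consumed = i + 1;
            }
        }

        // Keep only the unconsumed tail for the next call.
        cumulation.reset();
        cumulation.write(buf, consumed, buf.length - consumed);
        return frames;
    }
}
```

Fed the two chunks from the example below, the first call returns no frames and the second returns the full ten-byte packet, which is the behaviour a CumulativeProtocolDecoder subclass gets for free.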
Maarten
On Mon, Dec 29, 2008 at 9:50 PM, boB Gage <[email protected]> wrote:
I've got a Mina application going that communicates with a variety of
devices through both serial and network interfaces.
Part of this application is data-set simulations (ie fake far-end devices)
handled by using VmPipeAddress and VmPipeConnector objects.
My data simulator puts data into the stream via session.write() calls; my
"normal" decoders pick up that data as if it had come from a real device.
This set of objects works to an extent. BUT.... data that is sent via
two consecutive session.write() calls does NOT show up as expected in the
decoder.
For example, assume a decoder that wants packets delimited by 0x01 (start)
and 0xFF (end)
session.write(0x01 02 03 04 05) followed by
session.write(0x06 07 08 09 FF)
Generates two calls to "decode" in the decoder object.
The first, with five bytes 0x01 02 03 04 05, returns NEED_DATA because
the 0xFF end byte is not seen.
The second, with five bytes 0x06 07 08 09 FF, returns NOT_OK because
there is no 0x01 start byte.
I would have expected two calls, but with different parameters: the first
with 0x01 02 03 04 05, which would return NEED_DATA; the second with
0x01 02 03 04 05 06 07 08 09 FF, which would return OK (after parsing)
because it had a full packet.
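Both the observed results and the expected one can be reproduced with a few lines of plain Java. This is only a sketch of the framing rule; the helper name and its string results are made up, and in MINA the equivalent values would be MessageDecoderResult constants.

```java
// Hypothetical stand-in for the decoder's framing rule: a packet must
// start with 0x01 and end with 0xFF.
class FrameCheck {
    static final byte START = 0x01;
    static final byte END = (byte) 0xFF;

    // "OK" = complete frame, "NEED_DATA" = terminator not yet seen,
    // "NOT_OK" = buffer does not begin with the start byte.
    static String check(byte[] in) {
        if (in.length == 0) return "NEED_DATA";
        if (in[0] != START) return "NOT_OK";
        for (byte b : in) {
            if (b == END) return "OK";
        }
        return "NEED_DATA";
    }
}
```

Checking each chunk in isolation yields NEED_DATA then NOT_OK (the observed behaviour); checking the concatenation of both chunks yields OK (the expected behaviour once the first chunk is retained).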
Am I suffering from unrealistic expectations?? Is this a known Mina bug???
Have I just done something stupid???
Thanks in advance,
boB