> > I'm pretty new to MINA as well as Java NIO. So my question might be
> > very stupid.
Welcome aboard! I had the same kind of questions not long ago, so I hope I can help you out.

> 1. While I iterate through IoBuffer 'in' (byte b = in.get();), I put the
> byte 'b' to my buffer, say 'temp'. When I found the end-of-package byte, I
> use ProtocolDecoderOutput out.write(temp.flip());

If you do not know the number of resulting bytes in advance, there may be (depending on your implementation) automatic expansions happening inside your temp IoBuffer (when using setAutoExpand(true)). Each expansion copies the entire buffer to a new, bigger one, and you don't want this to happen too often, because copying is slow. For example, slow clients could send only a few bytes inside each buffer, causing many expansions, while fast clients send masses of bytes, causing only a few. Allocating a properly sized temp buffer up front mostly eliminates this. So, if the buffer is big enough, there should be no fundamental difference to approach #2. (A rough sketch of this approach follows at the end of this mail.)

> 2. When the end-of-package byte is found, set a proper limit/position of
> 'in', then use temp.put(in) and out.write(temp.flip()). This way is like
> TextLineDecoder does.

In fact this is better in an out-of-the-box setup, because it avoids most of the automatic expansions: the temp buffer is filled (and, if needed, expanded) once per incoming chunk, not once per byte. However, every time new content is written, the temp buffer may still be automatically expanded (copied) if it is not big enough - remember the slow clients. So properly sizing the temp buffer on allocation is also a good idea here, if a maximum content length is known or can at least be guessed. Sizing the buffer too big, on the other hand, may result in heavy memory consumption when many clients participate. (A sketch of this approach follows below as well.)

Currently there is also some progress going on inside the MINA 2.0 branch that may reduce the copying/resizing overhead to a minimum in the future by using buffer queues. If you plan to use MINA 2.0, this may be interesting for you, because these changes will be committed before the 2.0 final release. With queues, there will be no need to copy small chunks of buffers into a bigger one for parsing: the queue is reused when new data is available, which results in a single copy - or even zero copies, if there turns out to be a way to slice the queued content without copying it into a large ByteBuffer. I'm not sure which approach exactly is planned at the moment, but it's definitely worth watching.

To everyone else: if I'm wrong about something, please feel free to correct me :)

regards,
Daniel
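
P.S. Here is a minimal sketch of approach #1, keeping the temp buffer as a session attribute between reads. It assumes the MINA 2.0 package layout (org.apache.mina.core.*, which differs in 1.x), and the delimiter byte, attribute key and expected maximum size are made-up placeholders:

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderAdapter;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class ByteWiseDecoder extends ProtocolDecoderAdapter {

    private static final String TEMP_KEY = "decoder.temp"; // made-up attribute key
    private static final byte END_OF_PACKAGE = '\n';       // assumption: delimiter byte
    private static final int EXPECTED_MAX = 8192;          // assumption: guessed maximum size

    @Override
    public void decode(IoSession session, IoBuffer in, ProtocolDecoderOutput out)
            throws Exception {
        IoBuffer temp = (IoBuffer) session.getAttribute(TEMP_KEY);
        if (temp == null) {
            // Pre-sizing to the expected maximum means setAutoExpand only
            // kicks in (and copies) for unusually large messages.
            temp = IoBuffer.allocate(EXPECTED_MAX).setAutoExpand(true);
            session.setAttribute(TEMP_KEY, temp);
        }
        while (in.hasRemaining()) {
            byte b = in.get();
            if (b == END_OF_PACKAGE) {
                temp.flip();
                out.write(temp);
                // The written buffer now belongs to the next filter/handler,
                // so hand a fresh one to the session.
                temp = IoBuffer.allocate(EXPECTED_MAX).setAutoExpand(true);
                session.setAttribute(TEMP_KEY, temp);
            } else {
                temp.put(b); // copies only on expansion, i.e. above EXPECTED_MAX
            }
        }
    }
}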
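
And a sketch of approach #2, in the style of TextLineDecoder: scan for the delimiter, then carve the whole message out of 'in' with a single put() by adjusting limit/position. Same assumptions as above regarding packages and the delimiter byte:

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class ChunkWiseDecoder extends CumulativeProtocolDecoder {

    private static final byte END_OF_PACKAGE = '\n'; // assumption: delimiter byte

    @Override
    protected boolean doDecode(IoSession session, IoBuffer in,
                               ProtocolDecoderOutput out) throws Exception {
        int start = in.position();
        while (in.hasRemaining()) {
            if (in.get() == END_OF_PACKAGE) {
                int end = in.position(); // one past the delimiter
                int limit = in.limit();
                // Copy the message in one chunk instead of byte by byte.
                in.position(start);
                in.limit(end - 1);       // exclude the delimiter itself
                IoBuffer temp = IoBuffer.allocate(in.remaining());
                temp.put(in);
                temp.flip();
                out.write(temp);
                // Restore 'in' so the bytes after the delimiter are kept.
                in.limit(limit);
                in.position(end);
                return true;  // one message decoded; MINA calls doDecode again
            }
        }
        in.position(start); // no delimiter yet; the cumulative decoder buffers the rest
        return false;
    }
}

Here the temp buffer is sized exactly per message, so no expansion happens at all; the trade-off is one allocation per message.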
