Of course, I'll paste the entire route:

<route id="udpBroadcastReceive">
        <from uri="netty:{{broadcastTmProtocol}}://{{broadcastTmHost}}:{{broadcastTmPort}}?receiveBufferSizePredictor=65536&amp;decoders=#broadcastDecoder&amp;sync=false"/>
        <split parallelProcessing="true" streaming="true">
                <simple>${body}</simple>
                <to uri="bean:payloadCodec?method=decode(GenericPayload)"/>
                <multicast parallelProcessing="true" streaming="true">
                        <to uri="activemq-vm:topic:parameterGroupsOut"/>
                        <to uri="activemq-vm:topic:parameterGroupsUnsplit"/>
                </multicast>
        </split>
</route>
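
In case it helps when reading the URI: the {{broadcastTmProtocol}},
{{broadcastTmHost}} and {{broadcastTmPort}} placeholders are resolved by
Camel's properties component. A minimal sketch of that wiring, assuming a
hypothetical broadcast.properties file on the classpath (the file name and
example values below are illustrative, not copied from our actual config):

<camelContext xmlns="http://camel.apache.org/schema/spring">
        <propertyPlaceholder id="properties" location="classpath:broadcast.properties"/>
        <!-- broadcast.properties, example values only:
             broadcastTmProtocol=udp
             broadcastTmHost=230.0.0.1
             broadcastTmPort=9999
        -->
        <!-- routes go here -->
</camelContext>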


You'll notice this is slightly different from the hawtio diagram. The from
seda:udp endpoint which started the route in the image was there because we
wanted to make sure it wasn't the netty endpoint/codec causing the
bottleneck. It wasn't, so we put the route back to the one pasted above
(a rough sketch of that test variant follows below).
The parallelProcessing and streaming options were added just in case; the
website says they default to false, but the XSD claims they default to true :)
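
For completeness, the decoupled variant behind the hawtio diagram looked
roughly like this (a reconstructed sketch; the route ids and the bare
seda:udp endpoint are approximations, not copied from our config). The
netty consumer only forwards to seda:udp and the splitting happens in a
second route, which is how we ruled the netty endpoint/codec out:

<route id="udpBroadcastNetty">
        <from uri="netty:{{broadcastTmProtocol}}://{{broadcastTmHost}}:{{broadcastTmPort}}?receiveBufferSizePredictor=65536&amp;decoders=#broadcastDecoder&amp;sync=false"/>
        <to uri="seda:udp"/>
</route>

<route id="udpSplit">
        <from uri="seda:udp"/>
        <split parallelProcessing="true" streaming="true">
                <simple>${body}</simple>
                <to uri="bean:payloadCodec?method=decode(GenericPayload)"/>
                <multicast parallelProcessing="true" streaming="true">
                        <to uri="activemq-vm:topic:parameterGroupsOut"/>
                        <to uri="activemq-vm:topic:parameterGroupsUnsplit"/>
                </multicast>
        </split>
</route>

The split was just as slow with that setup, which is what pointed us at the
splitter itself rather than the netty consumer.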

As you can see, it's literally a simple expression on the in body.


