To be honest, I don't understand how your body gets split... The body which your splitter receives is the GenericPayload object, right?
Best,
Christian

On Tue, Mar 26, 2013 at 10:58 PM, MarkD <[email protected]> wrote:

> Of course, I'll paste the entire route:
>
> <route id="udpBroadcastReceive">
>     <from
>         uri="netty:{{broadcastTmProtocol}}://{{broadcastTmHost}}:{{broadcastTmPort}}?receiveBufferSizePredictor=65536&decoders=#broadcastDecoder&sync=false" />
>     <split parallelProcessing="true" streaming="true">
>         <simple>${body}</simple>
>         <to uri="bean:payloadCodec?method=decode(GenericPayload)"/>
>         <multicast parallelProcessing="true" streaming="true">
>             <to uri="activemq-vm:topic:parameterGroupsOut" />
>             <to uri="activemq-vm:topic:parameterGroupsUnsplit" />
>         </multicast>
>     </split>
> </route>
>
> You'll notice this is slightly different from the hawtio diagram. The
> from seda:udp endpoint which started the route in the image was a product
> of us making sure it wasn't the netty endpoint/codec causing the
> bottleneck. It wasn't, so we put the route back to the one pasted above.
> The parallelProcessing and streaming options were added just in case. The
> website says they are false by default but the xsd claims they are true :)
>
> As you can see it's literally a simple expression on the in body.
>
> --
> View this message in context:
> http://camel.465427.n5.nabble.com/Performance-puzzle-Slow-splitter-on-object-array-tp5729867p5729873.html
> Sent from the Camel - Users mailing list archive at Nabble.com.
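For anyone following along: when the Splitter EIP is given `${body}` and the in-body is an object array, Camel iterates the array and sends each element downstream as the body of a new exchange. A minimal conceptual sketch of that behaviour in plain Java (not Camel itself; the `split` helper and names here are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SplitSketch {

    // Hypothetical stand-in for the Splitter EIP with expression ${body}:
    // iterate an Object[] body and hand each element to the downstream step.
    static void split(Object body, Consumer<Object> downstream) {
        if (body instanceof Object[]) {
            for (Object element : (Object[]) body) {
                // Each element becomes the in-body of its own exchange.
                downstream.accept(element);
            }
        } else {
            // A non-array body passes through as a single exchange.
            downstream.accept(body);
        }
    }

    public static void main(String[] args) {
        List<Object> received = new ArrayList<>();
        split(new Object[] { "a", "b", "c" }, received::add);
        System.out.println(received); // prints [a, b, c]
    }
}
```

With `parallelProcessing="true"`, Camel would hand those per-element exchanges to a thread pool instead of processing them sequentially, which is why the option matters for the throughput question in this thread.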
