Serialization is bypassed by design in that case. As noted in https://storm.apache.org/apidocs/backtype/storm/task/OutputCollector.html, the emitted objects must be treated as immutable. If you're intent on modifying them, be very careful.
On Tue, Dec 1, 2015 at 4:28 AM, Stephen Powis <[email protected]> wrote:

> I believe anytime tuples are passed between bolts on the same JVM (either
> in local mode or in remote mode where the upstream and downstream bolts
> both reside on the same worker), serialization is bypassed by design.
>
> On Tue, Dec 1, 2015 at 1:46 PM, Edward Zhang <[email protected]>
> wrote:
>
>> Hi Storm developers,
>>
>> Today I hit a possible Storm issue that happens in local mode. In local
>> mode, an event object sent out of a spout does not appear to go through
>> serialization/deserialization; instead, the event object, including its
>> members, is directly referenced by the following bolts. So when one bolt
>> modifies the event object, another bolt also sees the changes
>> immediately.
>>
>> For example, suppose the event object sent by the spout includes a Java
>> Map, and there are two bolts downstream of the spout. If one bolt
>> modifies the Map, the other bolt will see the change, or will throw a
>> ConcurrentModificationException if it is iterating over the Map at the
>> time.
>>
>> Please let us know whether this behavior should be corrected by the
>> Storm framework or by the Storm application. In the application, we can
>> do a deep copy when running in local mode, but in the framework,
>> serialization/deserialization should probably always be executed.
>>
>> Let me know your thoughts.
>>
>> Thanks,
>> Edward Zhang
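To make the reported behavior concrete, here is a minimal standalone sketch (plain Java, not Storm itself) of what happens when serialization is bypassed: two downstream consumers hold the same Map reference, so one consumer's mutation is immediately visible to the other, and a defensive copy in the application restores isolation. The class and variable names are illustrative only, not part of the Storm API.

```java
import java.util.HashMap;
import java.util.Map;

public class SharedTupleDemo {
    public static void main(String[] args) {
        // Simulates a spout emitting one event object in local mode:
        // with serialization bypassed, both "bolts" receive the SAME reference.
        Map<String, String> event = new HashMap<>();
        event.put("user", "alice");

        Map<String, String> seenByBoltA = event; // no serialize/deserialize copy
        Map<String, String> seenByBoltB = event; // same underlying object

        // Bolt A mutates the tuple's payload...
        seenByBoltA.put("user", "bob");

        // ...and Bolt B observes the change immediately.
        System.out.println(seenByBoltB.get("user")); // prints "bob"

        // A defensive copy made by the application restores isolation
        // (a full deep copy would be needed if the values were mutable too):
        Map<String, String> copyForBoltB = new HashMap<>(event);
        seenByBoltA.put("user", "carol");
        System.out.println(copyForBoltB.get("user")); // still prints "bob"
    }
}
```

The same aliasing is why iterating the Map in one bolt while another bolt modifies it can throw ConcurrentModificationException, as described above.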
