Also, I think this situation leads to a blocked state in which elements
with two different version UIDs are mixed, so the pipeline is unable to progress.

When I realised this was happening, I tried to roll back to the previous
version of the pipeline to drain it and then re-create the job with the new
version, but in the meantime, as new elements were streamed in, the old
version wasn't able to decode the new ones.

Does it make any sense?


On Wed, Feb 14, 2018 at 12:50 PM Carlos Alonso <> wrote:

> Another thing I've realised is that the stacktrace suggests it is using
> SerializableCoder instead of the MessageWithAttributesCoder custom one I have
> implemented. See it here:
> Is it possible that, upon updates, custom coders are not chosen? To give a
> little more context: the MessageWithAttributes object is contained within a
> KV and is kept in a stateful/timely processing step. The coder is set before
> that stateful step via .setCoder(KvCoder.of(StringUtf8Coder.of(),
> MessageWithAttributesCoder.of())) and the exception is thrown when the
> buffered items are flushed out.
> Thanks!
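[Editor's note: a custom coder like the MessageWithAttributesCoder mentioned above sidesteps Java serialization entirely by writing each field explicitly, which is why it is immune to serialVersionUID drift. The sketch below shows that encode/decode pattern standalone, without the Beam dependency; the class and field names are hypothetical, not taken from the thread.]

```java
import java.io.*;

// Standalone sketch of the encode/decode pattern a custom Beam Coder uses:
// each field is written explicitly, so adding methods to the message class
// later cannot change the wire format or break decoding.
class MessageWithAttributesCodec {
    // Encode a (key, payload) pair to the stream, field by field.
    static void encode(String key, String payload, OutputStream out) throws IOException {
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeUTF(key);
        dos.writeUTF(payload);
        dos.flush();
    }

    // Decode the fields back in the same order they were written.
    static String[] decode(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        return new String[] { dis.readUTF(), dis.readUTF() };
    }
}
```

Beam's `Coder<T>` abstract class exposes exactly this shape (`encode(T, OutputStream)` / `decode(InputStream)`), so a real implementation wraps the same logic.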
> On Wed, Feb 14, 2018 at 11:33 AM Carlos Alonso <>
> wrote:
>> I've added a couple of methods to a case class and updated the job on
>> Dataflow and started getting
>> java.lang.IllegalStateException: Unable to decode tag list using
>> org.apache.beam.sdk.coders.SerializableCoder@4ad81832
>> Caused by
>> my.package.MessageWithAttributes; local class incompatible: stream
>> classdesc serialVersionUID = -5160195612720162441, local class
>> serialVersionUID = -9104690746829156208
>> It seems that, although the data contained in the class has not changed,
>> because the class itself has changed it cannot deserialise it anymore. How
>> should I proceed to avoid these situations?
>> Thanks!
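[Editor's note: the mismatch above happens because, absent an explicit declaration, Java computes serialVersionUID from the class structure, so adding methods changes it. Declaring it explicitly pins the stream identity and prevents this for future job updates (it does not repair data already encoded under the old UID). A minimal sketch, using a hypothetical simplified version of the message class:]

```java
import java.io.*;

// Hypothetical simplified message class. Pinning serialVersionUID means
// adding methods later no longer changes the computed UID, so
// SerializableCoder can still decode elements written by older versions.
class MessageWithAttributes implements Serializable {
    private static final long serialVersionUID = 1L; // pin explicitly
    final String payload;
    MessageWithAttributes(String payload) { this.payload = payload; }

    // Round-trip through plain Java serialization, as SerializableCoder does.
    static MessageWithAttributes roundTrip(MessageWithAttributes m) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(m);
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (MessageWithAttributes) in.readObject();
    }
}
```

With the UID pinned, recompiling the class with extra methods leaves old serialized bytes decodable.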
