openinx opened a new pull request #12428:
URL: https://github.com/apache/flink/pull/12428
…data when the upstream adds a column.
## What is the purpose of the change
Fix the Flink streaming job failure that occurs when an extra column is added in the
upstream producer (we use the Avro format in Kafka).
## Brief change log
Create a new decoder whenever a new buffer is encountered, to avoid reusing
unread bytes left inside the Avro `BinaryDecoder`'s internal buffer.
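The failure mode can be illustrated with a minimal sketch. The classes below (`BufferingDecoder`, `DecoderReuseDemo`) are hypothetical stand-ins, not Flink or Avro code: they mimic how a decoder like Avro's `BinaryDecoder` buffers ahead of the stream, so that swapping in a new input without recreating the decoder lets stale, unread bytes from the previous record leak into the next read.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch, NOT Flink/Avro code: a decoder that pre-buffers
// bytes from its input, similar in spirit to Avro's BinaryDecoder.
class BufferingDecoder {
    private final byte[] buf = new byte[16];
    private int pos = 0, limit = 0;
    private InputStream in;

    BufferingDecoder(InputStream in) { this.in = in; }

    // Reads one byte, greedily refilling the internal buffer.
    int readByte() throws IOException {
        if (pos == limit) {
            limit = in.read(buf, 0, buf.length); // buffers ahead of the caller
            pos = 0;
            if (limit <= 0) throw new IOException("EOF");
        }
        return buf[pos++] & 0xFF;
    }

    // Naive "reuse" that only swaps the stream: unread buffered bytes
    // from the previous record remain and corrupt the next read.
    void setInput(InputStream in) { this.in = in; }
}

public class DecoderReuseDemo {
    public static void main(String[] args) throws IOException {
        byte[] recordA = {1, 2, 3, 4};
        byte[] recordB = {9, 8};

        // Buggy pattern: reuse the decoder, only swapping the stream.
        BufferingDecoder reused =
                new BufferingDecoder(new ByteArrayInputStream(recordA));
        int first = reused.readByte();     // reads 1, buffers 2,3,4 ahead
        reused.setInput(new ByteArrayInputStream(recordB));
        int staleByte = reused.readByte(); // returns stale 2, not 9

        // Pattern the PR adopts conceptually: a fresh decoder per new
        // buffer, so no stale bytes survive between records.
        BufferingDecoder fresh =
                new BufferingDecoder(new ByteArrayInputStream(recordB));
        int correctByte = fresh.readByte(); // reads 9 as expected

        System.out.println("stale=" + staleByte + " correct=" + correctByte);
    }
}
```

Recreating the decoder per buffer trades a small allocation for correctness, which matters here because a schema change (the added column) shifts record boundaries and makes any leftover bytes actively harmful.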
## Verifying this change
This change is already covered by existing tests, such as
`testSchemaAddColumn`.
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (yes / no)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs
/ not documented)
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]