je-ik edited a comment on pull request #13592:
URL: https://github.com/apache/beam/pull/13592#issuecomment-749754120


   > I verified that the reader is reused from cache in Kafka case manually.
   
   Hm, are you sure? That confuses me, because looking at the code, I'm not 
sure how that could work with `System.identityHashCode` as the hashCode for 
CheckpointMark, given that AutoValue delegates `hashCode` as expected and that 
Guava's Cache relies on `hashCode` for lookups. Hm, maybe Dataflow is not using 
SplittableDoFnViaKeyedWorkItems and has some runner-specific implementation? 
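
   A minimal sketch of the concern (using a plain `HashMap`; Guava's Cache relies on `hashCode`/`equals` the same way, and `IdentityMark` is a hypothetical stand-in for a CheckpointMark that falls back to identity semantics):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical checkpoint mark that does NOT override hashCode/equals,
   // so it falls back to identity semantics (System.identityHashCode).
   class IdentityMark {
       final long offset;
       IdentityMark(long offset) { this.offset = offset; }
   }

   public class CacheKeyDemo {
       public static void main(String[] args) {
           Map<IdentityMark, String> cache = new HashMap<>();
           cache.put(new IdentityMark(42L), "cachedReader");
           // A logically identical mark is a different object, so the
           // lookup misses and the reader would be recreated, not reused.
           String hit = cache.get(new IdentityMark(42L));
           System.out.println(hit); // null: identity hashCode defeats cache reuse
       }
   }
   ```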
   
   All changes, including the cache creation, are on the SDK side, so it is 
independent of runner execution (and I'm using beam_fn_api as well).
   
   But you are right that the cache lookup will not work if `hashCode` is not 
implemented correctly.
   
   > It makes me feel like configuring split frequency from PipelineOption
   
   Sure, Flink has such an option. It would be natural either to create one for 
generic use or to add it to the respective runner's PipelineOptions.
   
   The checkpoint for SDF is different from a Flink checkpoint. That is, even 
if no checkpoint interval is configured for Flink, SDF checkpointing will 
still happen based on how 
`OutputAndTimeBoundedSplittableProcessElementInvoker` is configured.
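
   To illustrate the idea (a toy sketch, not Beam's actual API): 
`OutputAndTimeBoundedSplittableProcessElementInvoker` interrupts processing 
and takes a checkpoint once either an output-count bound or a time bound is 
reached, independent of any runner-level checkpoint setting. The names and 
bounds below are illustrative only:

   ```java
   import java.time.Duration;
   import java.time.Instant;

   // Toy model: process a restriction until either maxOutputs elements have
   // been emitted or maxTime has elapsed, then stop as if taking an SDF
   // checkpoint (the remaining work would become a residual restriction).
   public class BoundedInvokeDemo {
       static int processWithBounds(int totalElements, int maxOutputs, Duration maxTime) {
           Instant deadline = Instant.now().plus(maxTime);
           int produced = 0;
           for (int i = 0; i < totalElements; i++) {
               produced++; // emit one output
               if (produced >= maxOutputs || Instant.now().isAfter(deadline)) {
                   break; // checkpoint here, regardless of runner checkpointing
               }
           }
           return produced;
       }

       public static void main(String[] args) {
           // A 1000-element restriction is split after 100 outputs even
           // though no runner checkpoint interval is involved at all.
           System.out.println(processWithBounds(1000, 100, Duration.ofSeconds(10)));
       }
   }
   ```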
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
