[
https://issues.apache.org/jira/browse/BEAM-10475?focusedWorklogId=520836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-520836
]
ASF GitHub Bot logged work on BEAM-10475:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 07/Dec/20 00:06
Start Date: 07/Dec/20 00:06
Worklog Time Spent: 10m
Work Description: nehsyc commented on pull request #13493:
URL: https://github.com/apache/beam/pull/13493#issuecomment-739589853
I didn't add `ShardedKeyTypeConstraint` to typehints.py, specifically here
https://github.com/apache/beam/blob/30f9a607509940f78459e4fba847617399780246/sdks/python/apache_beam/typehints/typehints.py#L1116
because of cyclic imports. I could have moved the entire definition to
typehints.py, but I also needed to register the coder for the type constraint
via `typecoders.registry.register_coder`, which couldn't be done within
typehints.py (again due to cyclic imports). Let me know if the current version
is fine, or if you have any suggestions to make it clearer.
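For context, here is a minimal sketch of the arrangement being described: the
constraint lives in its own module (call it `sharded_key_type.py`) that is
allowed to import both the typehints and coders packages, so the coder
registration can happen next to the constraint definition. The module name,
class body, and exact import paths below are illustrative assumptions, not the
code in the PR; only `ShardedKeyTypeConstraint`, `ShardedKeyCoder`, and
`typecoders.registry.register_coder` come from the discussion above.

```python
# Hypothetical module (e.g. sharded_key_type.py), kept out of typehints.py
# so that it can import the coders package without creating an import cycle.
from apache_beam.coders import typecoders
from apache_beam.coders.coders import ShardedKeyCoder
from apache_beam.typehints import typehints
from apache_beam.utils.sharded_key import ShardedKey


class ShardedKeyTypeConstraint(typehints.TypeConstraint):
  """Constraint matching ShardedKey values whose key satisfies key_type."""

  def __init__(self, key_type):
    self.key_type = typehints.normalize(key_type)

  def type_check(self, instance):
    if not isinstance(instance, ShardedKey):
      raise typehints.CompositeTypeHintError(
          '%s type-constraint violated: %r is not a ShardedKey' %
          (repr(self), instance))

  def _consistent_with_check_(self, sub):
    return (
        isinstance(sub, ShardedKeyTypeConstraint) and
        typehints.is_consistent_with(sub.key_type, self.key_type))

  def __repr__(self):
    return 'ShardedKey[%s]' % repr(self.key_type)


# Doing the registration here rather than in typehints.py is what avoids the
# cycle: typehints.py must not import coders, but this module may import both.
typecoders.registry.register_coder(ShardedKeyTypeConstraint, ShardedKeyCoder)
```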
Thinking more about whether this constraint can be used by users: actually,
there is no such need. `ShardedKey` is not a magic type but just a normal type.
We made it well-known because we want the transform `GroupIntoBatches` that
uses `ShardedKey` to be a magic transform. So user-defined transforms that
involve `ShardedKey` don't have to carry `ShardedKeyCoder` in the graph (the
generic fast primitives coder should work).
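As a hedged illustration of that point, a user pipeline never constructs a
`ShardedKey` or registers `ShardedKeyCoder` itself; it only applies the
transform. The sketch below assumes the `GroupIntoBatches.WithShardedKey(batch_size)`
API shape this line of work adds, and the exact spelling may differ.

```python
import apache_beam as beam

# The pipeline author never builds a ShardedKey or registers ShardedKeyCoder;
# the transform (and, with runner-determined sharding, the runner) handles the
# sharded keys internally.
with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([('user', i) for i in range(100)])
      | beam.GroupIntoBatches.WithShardedKey(batch_size=10)
      | beam.Map(print))
```

Any `ShardedKey` values that then flow through ordinary user transforms can
fall back to the generic fast primitives coder, as noted above.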
Issue Time Tracking
-------------------
Worklog Id: (was: 520836)
Time Spent: 22h 20m (was: 22h 10m)
> GroupIntoBatches with Runner-determined Sharding
> ------------------------------------------------
>
> Key: BEAM-10475
> URL: https://issues.apache.org/jira/browse/BEAM-10475
> Project: Beam
> Issue Type: Improvement
> Components: runner-dataflow
> Reporter: Siyuan Chen
> Assignee: Siyuan Chen
> Priority: P2
> Labels: GCP, performance
> Time Spent: 22h 20m
> Remaining Estimate: 0h
>
> https://s.apache.org/sharded-group-into-batches
> Improve the existing Beam transform, GroupIntoBatches, to allow runners to
> choose different sharding strategies depending on how the data needs to be
> grouped. The goal is to help with the situation where the elements to process
> need to be co-located to reduce the overhead that would otherwise be incurred
> per element, while not losing the ability to scale the parallelism. The
> essential idea is to build a stateful DoFn with shardable states.
>
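To make the idea in the quoted summary concrete, here is a much-simplified,
hypothetical sketch of a stateful DoFn that buffers values per key and emits
fixed-size batches. All names in it are illustrative; the real
GroupIntoBatches also uses timers to flush a trailing partial batch at window
expiration, and the coder choice assumes integer values.

```python
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.userstate import BagStateSpec, ReadModifyWriteStateSpec


class BatchByKeyDoFn(beam.DoFn):
  """Buffers values per key in state and emits them in fixed-size batches."""

  BUFFER = BagStateSpec('buffer', VarIntCoder())        # buffered values
  COUNT = ReadModifyWriteStateSpec('count', VarIntCoder())  # how many buffered

  def __init__(self, batch_size):
    self._batch_size = batch_size

  def process(
      self,
      element,
      buffer=beam.DoFn.StateParam(BUFFER),
      count=beam.DoFn.StateParam(COUNT)):
    key, value = element
    buffer.add(value)
    n = (count.read() or 0) + 1
    if n >= self._batch_size:
      # A full batch: emit it and reset the per-key state.
      yield key, list(buffer.read())
      buffer.clear()
      count.clear()
    else:
      count.write(n)
```

With runner-determined sharding, the runner could split one logical key across
multiple ShardedKey instances, each with its own buffer, keeping elements
co-located per shard without capping parallelism at one worker per key.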