[ 
https://issues.apache.org/jira/browse/BEAM-10475?focusedWorklogId=500363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-500363
 ]

ASF GitHub Bot logged work on BEAM-10475:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Oct/20 00:56
            Start Date: 14/Oct/20 00:56
    Worklog Time Spent: 10m 
      Work Description: nehsyc commented on a change in pull request #13069:
URL: https://github.com/apache/beam/pull/13069#discussion_r504339524



##########
File path: sdks/python/apache_beam/coders/coder_impl.py
##########
@@ -1365,3 +1366,38 @@ def estimate_size(self, value, nested=False):
     # type: (Any, bool) -> int
     value_size = self._value_coder.estimate_size(value)
     return get_varint_size(value_size) + value_size
+
+
+class ShardedKeyCoderImpl(StreamCoderImpl):
+  """For internal use only; no backwards-compatibility guarantees.
+
+  A coder for sharded user keys.
+
+  The encoding and decoding should follow the order:
+      length of shard id byte string
+      shard id byte string
+      encoded user key
+  """
+  def __init__(self, key_coder_impl):
+    self._shard_id_coder_impl = LengthPrefixCoderImpl(BytesCoderImpl())

Review comment:
       `BytesCoder` simply copies the value to the stream and does not 
automatically prepend a length prefix 
([code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/coders/coder_impl.py#L491)),
whereas in Java the `ByteArrayCoder` does prepend the length.
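
For reference, a minimal sketch (not code from the PR) of why the wrapper matters, using the public `apache_beam.coders` API rather than the internal `*Impl` classes; the byte values shown assume the standard varint length prefix:

```python
# Illustrative only: demonstrates the length-prefix behavior discussed above
# using the public coders API (BytesCoder, LengthPrefixCoder).
from apache_beam.coders import coders

shard_id = b'shard-0'
user_key_bytes = b'user-key'

# BytesCoder copies the bytes verbatim; nothing marks where they end, so a
# decoder could not separate the shard id from the user key that follows.
assert coders.BytesCoder().encode(shard_id) == shard_id

# Wrapping it in LengthPrefixCoder prepends a varint length, similar to what
# Java's ByteArrayCoder does when encoding in a nested context.
prefixed = coders.LengthPrefixCoder(coders.BytesCoder()).encode(shard_id)
assert prefixed == b'\x07' + shard_id  # varint(7) == b'\x07'

# The sharded-key layout from the docstring is then:
#   varint(len(shard id)) | shard id | encoded user key
encoded_sharded_key = prefixed + coders.BytesCoder().encode(user_key_bytes)
```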




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 500363)
    Time Spent: 9h 20m  (was: 9h 10m)

> GroupIntoBatches with Runner-determined Sharding
> ------------------------------------------------
>
>                 Key: BEAM-10475
>                 URL: https://issues.apache.org/jira/browse/BEAM-10475
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-dataflow
>            Reporter: Siyuan Chen
>            Assignee: Siyuan Chen
>            Priority: P2
>              Labels: GCP, performance
>          Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> https://s.apache.org/sharded-group-into-batches
> Improve the existing Beam transform, GroupIntoBatches, to allow runners to 
> choose different sharding strategies depending on how the data needs to be 
> grouped. The goal is to help in situations where the elements to process 
> need to be co-located, reducing the per-element overhead that would 
> otherwise be incurred, without losing the ability to scale parallelism. The 
> essential idea is to build a stateful DoFn with shardable state.
>  
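
For context on the transform described above, a hedged pipeline sketch of how the sharded variant is intended to be used. The `GroupIntoBatches.WithShardedKey` name follows the design doc and this PR series and is an assumption here, since the Python API was still under review at the time of this message:

```python
# Sketch only; GroupIntoBatches.WithShardedKey is the assumed Python API for
# runner-determined sharding (per the design doc), not confirmed by this email.
import apache_beam as beam

with beam.Pipeline() as p:
  _ = (
      p
      | beam.Create([('user', i) for i in range(100)])
      # Plain GroupIntoBatches keeps all values for a key together; the
      # sharded variant lets the runner split a hot key across shards while
      # still batching elements per (key, shard id).
      | beam.GroupIntoBatches.WithShardedKey(batch_size=10)
      | beam.Map(print))
```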



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
