aaltay commented on a change in pull request #13292:
URL: https://github.com/apache/beam/pull/13292#discussion_r525633557
##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -780,6 +783,48 @@ def expand(self, pcoll):
self.max_buffering_duration_secs,
self.clock))
+ @typehints.with_input_types(Tuple[K, V])
+ @typehints.with_output_types(Tuple[K, Iterable[V]])
+ class WithShardedKey(PTransform):
+ """A GroupIntoBatches transform that outputs batched elements associated
+ with sharded input keys.
+
+ The sharding is determined by the runner to balance the load during the
+ execution time. By default, it spreads the input elements with the same key
+ to all available threads executing the transform.
+ """
+ def __init__(self, batch_size, max_buffering_duration_secs=None):
+ """Create a new GroupIntoBatches.WithShardedKey.
+
+ Arguments:
+ batch_size: (required) How many elements should be in a batch
+ max_buffering_duration_secs: (optional) How long in seconds at most an
+ incomplete batch of elements is allowed to be buffered in the states.
+ The duration must be a positive second duration and should be given as
+ an int or float.
+ """
+ self.batch_size = batch_size
+
+ if max_buffering_duration_secs is not None:
+ assert max_buffering_duration_secs > 0, (
+ 'max buffering duration should be a positive value')
+ self.max_buffering_duration_secs = max_buffering_duration_secs
+
+ _pid = os.getpid()
+
+ def expand(self, pcoll):
+ sharded_pcoll = pcoll | Map(
+ lambda x: (
Review comment:
Maybe use `key_value` instead of `x`?
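For illustration only, a minimal self-contained sketch of the suggested rename; the quoted hunk cuts off before the lambda body, so `_shard_of` and the output tuple shape below are placeholders rather than the PR's actual sharding logic:

```python
import apache_beam as beam

def _shard_of(key):
  # Placeholder shard id; the real sharding applied by the PR is not shown
  # in the quoted hunk.
  return hash(key) % 4

with beam.Pipeline() as p:
  sharded_pcoll = (
      p
      | beam.Create([('a', 1), ('a', 2), ('b', 3)])
      # The more descriptive `key_value` name instead of `x`.
      | beam.Map(lambda key_value: (
          (key_value[0], _shard_of(key_value[0])), key_value[1])))
```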
##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -780,6 +783,48 @@ def expand(self, pcoll):
self.max_buffering_duration_secs,
self.clock))
+ @typehints.with_input_types(Tuple[K, V])
+ @typehints.with_output_types(Tuple[K, Iterable[V]])
+ class WithShardedKey(PTransform):
Review comment:
Should this be tagged experimental, similar to the Java change?
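Not from the PR; just one hedged possibility, assuming a docstring note (rather than a decorator) is an acceptable way to carry the experimental marker here:

```python
class WithShardedKey(PTransform):
  """A GroupIntoBatches transform that outputs batched elements associated
  with sharded input keys.

  This transform is experimental; its behavior and API may change.
  """
```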
##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -780,6 +783,48 @@ def expand(self, pcoll):
self.max_buffering_duration_secs,
self.clock))
+ @typehints.with_input_types(Tuple[K, V])
+ @typehints.with_output_types(Tuple[K, Iterable[V]])
+ class WithShardedKey(PTransform):
+ """A GroupIntoBatches transform that outputs batched elements associated
+ with sharded input keys.
+
+ The sharding is determined by the runner to balance the load during the
Review comment:
How does this work? When and how does the runner do this load balancing?
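A hedged usage sketch, assuming this PR lands with the API shown in the hunk above; the user only supplies key-value elements and the batching parameters, while the runner decides how the per-key shards are spread over workers and threads:

```python
import apache_beam as beam

with beam.Pipeline() as p:
  batches = (
      p
      | beam.Create([('user1', i) for i in range(100)])
      # Batches of up to 10 values per (key, shard); the sharding itself is
      # chosen by the runner, not by user code.
      | beam.GroupIntoBatches.WithShardedKey(batch_size=10))
```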
##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -780,6 +783,48 @@ def expand(self, pcoll):
self.max_buffering_duration_secs,
self.clock))
+ @typehints.with_input_types(Tuple[K, V])
+ @typehints.with_output_types(Tuple[K, Iterable[V]])
+ class WithShardedKey(PTransform):
+ """A GroupIntoBatches transform that outputs batched elements associated
+ with sharded input keys.
+
+ The sharding is determined by the runner to balance the load during the
+ execution time. By default, it spreads the input elements with the same key
+ to all available threads executing the transform.
+ """
+ def __init__(self, batch_size, max_buffering_duration_secs=None):
+ """Create a new GroupIntoBatches.WithShardedKey.
+
+ Arguments:
+ batch_size: (required) How many elements should be in a batch
+ max_buffering_duration_secs: (optional) How long in seconds at most an
+ incomplete batch of elements is allowed to be buffered in the states.
+ The duration must be a positive second duration and should be given as
+ an int or float.
+ """
+ self.batch_size = batch_size
+
+ if max_buffering_duration_secs is not None:
+ assert max_buffering_duration_secs > 0, (
+ 'max buffering duration should be a positive value')
+ self.max_buffering_duration_secs = max_buffering_duration_secs
+
+ _pid = os.getpid()
Review comment:
Why is the process id needed?
Note that there is only one Python process per container, and since all
containers start the same way, it is very likely that the pid for each Python
process in its own container will be the same.
If you need to distinguish per worker, you may need some concept of a worker
id (which does not exist in Beam).
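A small illustration of this concern, not code from the PR: `os.getpid()` is only unique within a single machine or container, so identically started SDK containers can report the same pid, whereas a random token drawn at construction time differs per instance regardless of the container layout:

```python
import os
import uuid

pid_based_id = os.getpid()          # can collide across identical containers
random_based_id = uuid.uuid4().hex  # effectively unique per transform instance
```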
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]