[ https://issues.apache.org/jira/browse/BEAM-10475?focusedWorklogId=503941&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503941 ]

ASF GitHub Bot logged work on BEAM-10475:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Oct/20 00:44
            Start Date: 23/Oct/20 00:44
    Worklog Time Spent: 10m 
      Work Description: robertwb commented on a change in pull request #13144:
URL: https://github.com/apache/beam/pull/13144#discussion_r510535483



##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -777,33 +795,56 @@ def process(
         window=DoFn.WindowParam,
         element_state=DoFn.StateParam(ELEMENT_STATE),
         count_state=DoFn.StateParam(COUNT_STATE),
-        expiry_timer=DoFn.TimerParam(EXPIRY_TIMER)):
+        window_timer=DoFn.TimerParam(WINDOW_TIMER),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
       # Allowed lateness not supported in Python SDK
       # https://beam.apache.org/documentation/programming-guide/#watermarks-and-late-data
-      expiry_timer.set(window.end)
+      window_timer.set(window.end)
       element_state.add(element)
       count_state.add(1)
       count = count_state.read()
+      if count == 1 and max_buffering_duration_secs is not None:
+        # This is the first element in batch. Start counting buffering time if a
+        # limit was set.
+        buffering_timer.set(clock() + max_buffering_duration_secs)
       if count >= batch_size:
-        batch = [element for element in element_state.read()]
-        key, _ = batch[0]
-        batch_values = [v for (k, v) in batch]
-        yield (key, batch_values)
-        element_state.clear()
-        count_state.clear()
-
-    @on_timer(EXPIRY_TIMER)
-    def expiry(
+        key, batch_values = self.flush_batch(
+            element_state, count_state, buffering_timer)
+        if key is not None:
+          yield key, batch_values
+
+    @on_timer(WINDOW_TIMER)
+    def on_window_timer(
         self,
         element_state=DoFn.StateParam(ELEMENT_STATE),
-        count_state=DoFn.StateParam(COUNT_STATE)):
+        count_state=DoFn.StateParam(COUNT_STATE),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
+      key, batch_values = self.flush_batch(
+          element_state, count_state, buffering_timer)
+      if key is not None:
+        yield key, batch_values
+
+    @on_timer(BUFFERING_TIMER)
+    def on_buffering_timer(
+        self,
+        element_state=DoFn.StateParam(ELEMENT_STATE),
+        count_state=DoFn.StateParam(COUNT_STATE),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
+      key, batch_values = self.flush_batch(
+          element_state, count_state, buffering_timer)
+      if key is not None:
+        yield key, batch_values
+
+    def flush_batch(self, element_state, count_state, buffering_timer):
       batch = [element for element in element_state.read()]
-      if batch:
-        key, _ = batch[0]
-        batch_values = [v for (k, v) in batch]
-        yield (key, batch_values)
-        element_state.clear()
-        count_state.clear()
+      if not batch:
+        return None, None

Review comment:
       Specifically, here you would just `return` and below you would `yield key, batch_values`.
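The shape the reviewer is describing can be illustrated outside Beam. Below is a minimal, hedged sketch: `FakeState` is a stand-in for Beam's bag state (not the real `BagState` API), and `flush_batch` is written as a generator so that a bare `return` on the empty case simply yields nothing.

```python
class FakeState:
    """Stand-in for Beam's bag state: supports add/read/clear. Hypothetical,
    only to make the sketch self-contained."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def read(self):
        return list(self._items)
    def clear(self):
        self._items = []

def flush_batch(element_state, count_state):
    """Generator version: on the empty case we just `return`, so callers
    iterate over nothing and never need an `if key is not None` check."""
    batch = element_state.read()
    if not batch:
        return  # empty generator: nothing is yielded
    key, _ = batch[0]
    batch_values = [v for (_, v) in batch]
    element_state.clear()
    count_state.clear()
    yield key, batch_values

elements, counts = FakeState(), FakeState()
elements.add((None, 1))  # note: None is a perfectly valid key here
elements.add((None, 2))
print(list(flush_batch(elements, counts)))  # [(None, [1, 2])]
print(list(flush_batch(elements, counts)))  # [] -- state was cleared by the flush
```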

##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -777,33 +795,56 @@ def process(
         window=DoFn.WindowParam,
         element_state=DoFn.StateParam(ELEMENT_STATE),
         count_state=DoFn.StateParam(COUNT_STATE),
-        expiry_timer=DoFn.TimerParam(EXPIRY_TIMER)):
+        window_timer=DoFn.TimerParam(WINDOW_TIMER),
+        buffering_timer=DoFn.TimerParam(BUFFERING_TIMER)):
       # Allowed lateness not supported in Python SDK
       # https://beam.apache.org/documentation/programming-guide/#watermarks-and-late-data
-      expiry_timer.set(window.end)
+      window_timer.set(window.end)
       element_state.add(element)
       count_state.add(1)
       count = count_state.read()
+      if count == 1 and max_buffering_duration_secs is not None:
+        # This is the first element in batch. Start counting buffering time if a
+        # limit was set.
+        buffering_timer.set(clock() + max_buffering_duration_secs)
       if count >= batch_size:
-        batch = [element for element in element_state.read()]
-        key, _ = batch[0]
-        batch_values = [v for (k, v) in batch]
-        yield (key, batch_values)
-        element_state.clear()
-        count_state.clear()
-
-    @on_timer(EXPIRY_TIMER)
-    def expiry(
+        key, batch_values = self.flush_batch(
+            element_state, count_state, buffering_timer)
+        if key is not None:

Review comment:
       None is a valid key, so this is not quite safe. Instead, you could make `flush_batch` an iterable, and just do `return self.flush_batch()` everywhere.
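Putting both comments together: if `flush_batch` is a generator, the caller can simply `return` its result (Beam's `process` and timer callbacks may return an iterable of outputs), and keys such as `None` need no special-casing. A Beam-free sketch of that caller shape, with a plain list standing in for the element state:

```python
def flush_batch(elements):
    """Generator: yields at most one (key, values) pair; yields nothing
    when the buffer is empty, then clears the buffer."""
    if not elements:
        return
    key, _ = elements[0]
    yield key, [v for (_, v) in elements]
    elements.clear()  # runs when the caller exhausts the generator

def process(elements, element, batch_size=2):
    """Caller just returns the iterable -- no `if key is not None` check."""
    elements.append(element)
    if len(elements) >= batch_size:
        return flush_batch(elements)
    return ()  # nothing to emit yet

buf = []
print(list(process(buf, ("k", 1))))  # []
print(list(process(buf, ("k", 2))))  # [('k', [1, 2])]
```

The timer callbacks would follow the same pattern: `return self.flush_batch(...)` with no key check.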




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 503941)
    Time Spent: 15h 10m  (was: 15h)

> GroupIntoBatches with Runner-determined Sharding
> ------------------------------------------------
>
>                 Key: BEAM-10475
>                 URL: https://issues.apache.org/jira/browse/BEAM-10475
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-dataflow
>            Reporter: Siyuan Chen
>            Assignee: Siyuan Chen
>            Priority: P2
>              Labels: GCP, performance
>          Time Spent: 15h 10m
>  Remaining Estimate: 0h
>
> https://s.apache.org/sharded-group-into-batches
> Improve the existing Beam transform, GroupIntoBatches, to allow runners to 
> choose different sharding strategies depending on how the data needs to be 
> grouped. The goal is to help with the situation where the elements to process 
> need to be co-located to reduce the overhead that would otherwise be incurred 
> per element, while not losing the ability to scale the parallelism. The 
> essential idea is to build a stateful DoFn with shardable states.
>  
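The two flush conditions the issue and the diff describe (batch size reached, or a maximum buffering duration elapsing since the first element of a batch) can be modeled in plain Python without Beam's state and timer machinery. The class and names below are hypothetical, purely illustrative:

```python
import time

class KeyedBatcher:
    """Toy model of the stateful DoFn: buffers values per key and flushes a
    batch when batch_size is reached or when the buffering deadline (set on
    the first element of each batch) expires."""
    def __init__(self, batch_size, max_buffering_duration_secs=None,
                 clock=time.time):
        self.batch_size = batch_size
        self.max_buffering = max_buffering_duration_secs
        self.clock = clock
        self.buffers = {}    # key -> buffered values
        self.deadlines = {}  # key -> flush deadline

    def add(self, key, value):
        buf = self.buffers.setdefault(key, [])
        if not buf and self.max_buffering is not None:
            # First element of the batch: start the buffering clock.
            self.deadlines[key] = self.clock() + self.max_buffering
        buf.append(value)
        if len(buf) >= self.batch_size:
            return self._flush(key)
        return None

    def fire_expired(self):
        """Analogue of the buffering timer firing."""
        now = self.clock()
        expired = [k for k, d in self.deadlines.items() if d <= now]
        return [self._flush(k) for k in expired]

    def _flush(self, key):
        values = self.buffers.pop(key)
        self.deadlines.pop(key, None)
        return key, values

fake_now = [0.0]
b = KeyedBatcher(batch_size=3, max_buffering_duration_secs=5,
                 clock=lambda: fake_now[0])
b.add("a", 1)
b.add("a", 2)
print(b.add("a", 3))       # ('a', [1, 2, 3]) -- flushed on size
b.add("b", 10)
fake_now[0] = 6.0
print(b.fire_expired())    # [('b', [10])] -- flushed on deadline
```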



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
