mosche opened a new issue, #22445:
URL: https://github.com/apache/beam/issues/22445

   ### What would you like to happen?
   
   At present, (batch) pipeline translation in the 
SparkStructuredStreamingRunner is rather simple and not optimized in any way. 
The following optimizations should significantly improve the performance of 
this experimental runner.
   
   - Make use of Spark `Encoder`s to leverage structural information during 
translation (and potentially benefit from the Catalyst optimizer). Note, 
however, that the possible benefit is limited, as every ParDo is a black box 
and a hard boundary for anything that could be optimized.
   - Improve the translation of `GroupByKey`. Where applicable, group by 
window as well to scale out better, and/or use Spark's native `collect_list` 
to collect the values of a group.
   - Make use of specialized Spark `Aggregator`s for combines (per key / 
globally); `Sessions` in particular can be improved significantly.
   - Add a dedicated translation for `Combine.Globally` to avoid an 
additional shuffle of the data.
   - Remove the additional serialization roundtrip when reading from a Beam 
`BoundedSource`.
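   
   To illustrate the `Aggregator` idea, the sketch below mimics Spark's 
`Aggregator` contract (zero / reduce / merge / finish) in plain Java, with no 
Spark dependency, for a per-key mean. All names here are hypothetical and not 
part of the runner; the point is only that an accumulator-based combine can 
merge partial results across partitions without materializing per-key value 
lists first.

   ```java
   // Minimal sketch of Spark's Aggregator contract (zero/reduce/merge/finish),
   // written in plain Java so it runs without a Spark dependency. A combine
   // translation backed by such an Aggregator avoids collecting all values of
   // a key before applying the combine logic.
   public class MeanAggregatorSketch {

       // Mutable accumulator, analogous to Spark's buffer type.
       static final class SumCount {
           long sum;
           long count;
       }

       static SumCount zero() {
           return new SumCount();
       }

       // Fold one input element into an accumulator.
       static SumCount reduce(SumCount acc, long value) {
           acc.sum += value;
           acc.count += 1;
           return acc;
       }

       // Merge two partial accumulators (e.g. from different partitions).
       static SumCount merge(SumCount a, SumCount b) {
           a.sum += b.sum;
           a.count += b.count;
           return a;
       }

       // Extract the final result from the accumulator.
       static double finish(SumCount acc) {
           return acc.count == 0 ? 0.0 : (double) acc.sum / acc.count;
       }

       public static void main(String[] args) {
           // Two "partitions" of the same key, reduced locally then merged,
           // as a distributed Aggregator would do.
           SumCount p1 = reduce(reduce(zero(), 1), 2);
           SumCount p2 = reduce(reduce(zero(), 3), 6);
           System.out.println(finish(merge(p1, p2)));  // 3.0
       }
   }
   ```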
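   
   As a minimal sketch of the window-aware `GroupByKey` point: grouping by the 
composite (key, window) pair instead of the key alone multiplies the number of 
distinct grouping keys, which lets the runner spread work across more tasks. 
The plain-Java stand-in below (element type and window boundaries are made up 
for illustration) shows the composite grouping; in the runner this would map 
onto a Spark `groupBy` over both columns.

   ```java
   import java.util.List;
   import java.util.Map;
   import java.util.TreeMap;
   import java.util.stream.Collectors;

   // Sketch: grouping by (key, window) rather than key alone. The composite
   // key yields finer-grained groups, so windowed state for a hot key no
   // longer has to land on a single task.
   public class GroupByKeyAndWindowSketch {

       // Hypothetical windowed element: a key, a window start, and a value.
       record Element(String key, long windowStart, int value) {}

       public static void main(String[] args) {
           List<Element> elements = List.of(
               new Element("a", 0, 1),
               new Element("a", 0, 2),
               new Element("a", 10, 3),
               new Element("b", 0, 4));

           // Composite grouping key "key@windowStart"; TreeMap gives a
           // deterministic output order for the demo.
           Map<String, List<Integer>> grouped = elements.stream()
               .collect(Collectors.groupingBy(
                   e -> e.key() + "@" + e.windowStart(),
                   TreeMap::new,
                   Collectors.mapping(Element::value, Collectors.toList())));

           grouped.forEach((k, v) -> System.out.println(k + " -> " + v));
       }
   }
   ```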
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: runner-spark


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
