Egbert created BEAM-6886:
----------------------------

             Summary: Change batch handling in ElasticsearchIO to avoid 
necessity for GroupIntoBatches
                 Key: BEAM-6886
                 URL: https://issues.apache.org/jira/browse/BEAM-6886
             Project: Beam
          Issue Type: Improvement
          Components: io-java-elasticsearch
    Affects Versions: 2.11.0
            Reporter: Egbert


I have a streaming job inserting records into an Elasticsearch cluster. I set 
the batch size appropriately large, but found that it has no effect at all: all 
elements are inserted in batches of 1 or 2 elements.

The reason seems to be that this is a streaming pipeline, which may result in 
tiny bundles. Since ElasticsearchIO uses `@FinishBundle` to flush a batch, this 
will result in equally small batches.

This results in a huge amount of bulk requests with just one element, grinding 
the Elasticsearch cluster to a halt.

I have now been able to work around this by applying a `GroupIntoBatches` 
operation before the insert, but this requires 3 steps (mapping to a key, 
applying GroupIntoBatches, then stripping the key and outputting all collected 
elements), making the process quite awkward.

A much better approach would be to internalize this into the ElasticsearchIO 
write transform: use a timer that flushes the batch when it reaches the batch 
size or at the end of the window, not at the end of a bundle.
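The proposed behavior can be sketched, independent of Beam, as a buffer that flushes on batch size rather than on (possibly tiny) bundle boundaries. This is a minimal illustration only; `BatchBuffer`, `maxBatchSize`, and `flushFn` are hypothetical names, not part of ElasticsearchIO:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch only (not Beam code): accumulate elements and flush whenever the
// buffer reaches maxBatchSize, instead of flushing at every bundle boundary.
class BatchBuffer<T> {
    private final int maxBatchSize;
    private final Consumer<List<T>> flushFn; // e.g. would issue one bulk request
    private final List<T> buffer = new ArrayList<>();

    BatchBuffer(int maxBatchSize, Consumer<List<T>> flushFn) {
        this.maxBatchSize = maxBatchSize;
        this.flushFn = flushFn;
    }

    void add(T element) {
        buffer.add(element);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // In the proposed design this would be driven by a timer firing at the
    // end of the window, rather than by @FinishBundle.
    void flush() {
        if (!buffer.isEmpty()) {
            flushFn.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

With a batch size of 1000, many small bundles would then still produce full-sized bulk requests, with at most one partial batch flushed per window.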



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
