egalpin commented on pull request #17097:
URL: https://github.com/apache/beam/pull/17097#issuecomment-1069160581


   @je-ik thanks for the review!
   
   > Generally LGTM. I have a few comments. Two concerns:
   a) the ElasticsearchIO now does batching by default, are we sure, that users 
that did not do the batching previously will be able to successfully upgrade 
the running pipeline? I think that probably yes, but did we test that?
   b) do we know how this change affects performance?
   
   a) The ElasticsearchIO#write transform has always done batching, but the initial 
implementation was bundle-based only.  I agree, though, that this change set 
should be tested against a running pipeline that is then updated in place; I 
can take that on and report back.
   
   b) For anyone already using stateful batching, there would be no performance 
change.  In terms of _Elasticsearch_ performance, state-based batching is 
highly preferable in my experience: I have seen throughput improve 100x by 
switching from bundle-based to state-based batching.  In terms of Beam 
performance on a runner, I don't have concrete numbers describing the delta, 
but I do know that a production workload I manage processes millions of 
documents (GBs of data) per hour on a single vCPU using stateful batching.  In 
my experience, indexing data into ES has always been heavily IO-bound.
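
   To make the bundle-vs-state distinction concrete, here is a minimal plain-Java sketch (not the Beam API; `BulkBuffer` and `flushThreshold` are hypothetical names) of the buffering semantics that state-based batching provides: documents accumulate across bundle boundaries and are flushed as one bulk request only when a size threshold is reached, instead of at every bundle boundary. With small streaming bundles, bundle-based flushing produces many tiny ES bulk requests, which is where the throughput difference comes from.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Illustrative only: mimics what Beam state (e.g. a BagState plus a
   // count) achieves, without any Beam dependency.
   class BulkBuffer {
       private final int flushThreshold;
       private final List<String> buffer = new ArrayList<>();
       private final List<List<String>> flushedBatches = new ArrayList<>();

       BulkBuffer(int flushThreshold) {
           this.flushThreshold = flushThreshold;
       }

       // Buffer a document; flush one full bulk batch when the threshold
       // is reached. Crucially, the buffer survives "bundle" boundaries.
       void add(String doc) {
           buffer.add(doc);
           if (buffer.size() >= flushThreshold) {
               flush();
           }
       }

       // Emit the current buffer as a single bulk request (recorded here;
       // in the real IO this would be one Elasticsearch _bulk call).
       void flush() {
           if (!buffer.isEmpty()) {
               flushedBatches.add(new ArrayList<>(buffer));
               buffer.clear();
           }
       }

       List<List<String>> getFlushedBatches() {
           return flushedBatches;
       }
   }
   ```

   Under bundle-based batching, `flush()` would effectively be called at the end of every bundle, so ten bundles of two documents each would yield ten two-document bulk requests; with the state-based buffer above and a threshold of ten, the same input yields two ten-document bulk requests.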
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
