chishankar-work commented on pull request #13235:
URL: https://github.com/apache/beam/pull/13235#issuecomment-721268119


   Hey Chamikara,
   
   Thanks for that. The user was running Beam 2.19, which seems to be the 
cause of a few other errors like [BEAM-9792]. If this is now being surfaced 
correctly in >= 2.24, then I don't mind closing this. 
   
   However, to Pablo's comment, I do think that if the buffer of rows to 
write exceeds 10 MB, we should split it into separate requests when we hit 
the limit-exceeded error (rough sketch below). As far as I know, the 
streaming write of the accumulated rows into BigQuery [1] doesn't do this 
split yet, unless I am wrong or it has been implemented in a newer version. 
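   
   To make the splitting idea concrete, here is a rough sketch (plain Java, 
not tied to Beam's actual classes) of batching the buffered rows so each 
insert request stays under a byte budget. The `sizeOf` function and the 
budget constant are placeholders, not Beam or BigQuery API names; a real 
implementation would estimate the encoded request size, and a single row 
larger than the limit would still have to fail.
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.function.ToLongFunction;
   
   public class BatchSplitter {
   
     // BigQuery's streaming insert request limit is about 10 MB; leave some
     // headroom for per-request overhead. This constant is illustrative only.
     static final long MAX_BATCH_BYTES = 9L * 1024 * 1024;
   
     // Split rows into consecutive batches whose estimated sizes each stay
     // under maxBatchBytes. A single row bigger than the budget is emitted on
     // its own and would still fail server-side.
     static <T> List<List<T>> split(
         List<T> rows, ToLongFunction<T> sizeOf, long maxBatchBytes) {
       List<List<T>> batches = new ArrayList<>();
       List<T> current = new ArrayList<>();
       long currentBytes = 0;
       for (T row : rows) {
         long rowBytes = sizeOf.applyAsLong(row);
         if (!current.isEmpty() && currentBytes + rowBytes > maxBatchBytes) {
           batches.add(current);
           current = new ArrayList<>();
           currentBytes = 0;
         }
         current.add(row);
         currentBytes += rowBytes;
       }
       if (!current.isEmpty()) {
         batches.add(current);
       }
       return batches;
     }
   
     public static void main(String[] args) {
       // Toy usage with strings standing in for TableRows; real code would
       // estimate the JSON-encoded size of each row rather than string length.
       List<String> rows = List.of("a".repeat(5), "b".repeat(7), "c".repeat(3));
       for (List<String> batch : split(rows, String::length, 10)) {
         System.out.println(batch);
       }
     }
   }
   ```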
   
   Thanks
   
   
   [1] 
https://github.com/apache/beam/blob/18404e65855eac5a2d4f7993c1a9b994d013fd56/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StreamingWriteFn.java#L181-L208

