[
https://issues.apache.org/jira/browse/BEAM-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236954#comment-17236954
]
Eugene Nikolaiev commented on BEAM-9960:
----------------------------------------
A solution might be to implement a new "num_splits" option that controls the
number of splits directly, instead of estimating a desired bundle size for
splitting, and hence bounds the size of the resulting response.
A "numSplits" option already exists in the Java MongoDBIO connector.
Another option might be to catch the exception raised by the split query and
automatically increase the desired bundle size when the response does not fit
within the limit. Since the error message contains the actual message size, a
new bundle size could be calculated from it.
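The retry idea above could be sketched roughly as follows. This is only an illustration, not Beam's actual implementation: the split_vector callable, the ProtocolError stand-in class, and the retry helpers are all hypothetical names introduced here; only the error-message format and the 32 MiB limit come from the report.

```python
import re

class ProtocolError(Exception):
    """Stand-in for pymongo.errors.ProtocolError (hypothetical, for illustration)."""

# Matches the message format quoted in the bug report, e.g.
# "Message length (33699186) is larger than server max message size (33554432)"
_SIZE_RE = re.compile(
    r"Message length \((\d+)\) is larger than server max message size \((\d+)\)"
)

def adjusted_bundle_size(desired_bundle_size, error_message):
    """Scale the bundle size up by the overshoot ratio reported in the error."""
    m = _SIZE_RE.search(error_message)
    if not m:
        raise ValueError("unrecognized error message: %r" % error_message)
    actual, limit = int(m.group(1)), int(m.group(2))
    # The splitVector response shrinks as the bundle size grows (fewer split
    # points), so grow the bundle size proportionally to the overshoot.
    return int(desired_bundle_size * actual / limit) + 1

def split_with_retry(split_vector, desired_bundle_size, max_attempts=5):
    """Run the split query, enlarging the bundle size while the response is too big.

    split_vector is a hypothetical callable taking a bundle size in bytes and
    returning the list of split points, raising ProtocolError on oversized
    responses.
    """
    for _ in range(max_attempts):
        try:
            return split_vector(desired_bundle_size)
        except ProtocolError as e:
            desired_bundle_size = adjusted_bundle_size(desired_bundle_size, str(e))
    raise RuntimeError("no bundle size found with a small enough split response")
```

A single proportional adjustment may still undershoot if the response size does not scale exactly inversely with bundle size, which is why the sketch retries a bounded number of times rather than assuming one correction suffices.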
> Python MongoDBIO fails when response of split vector command is larger than
> 16mb
> --------------------------------------------------------------------------------
>
> Key: BEAM-9960
> URL: https://issues.apache.org/jira/browse/BEAM-9960
> Project: Beam
> Issue Type: Bug
> Components: io-py-mongodb
> Affects Versions: 2.20.0
> Reporter: Corvin Deboeser
> Priority: P3
>
> When using MongoDBIO on a large collection with large documents on average,
> then the split vector command results in a lot of splits if the desired
> bundle size is small. In extreme cases, the response from the split vector
> command can be larger than 16mb which is not supported by pymongo / MongoDB:
> {{pymongo.errors.ProtocolError: Message length (33699186) is larger than
> server max message size (33554432)}}
>
> Environment: Was running this on Google Dataflow / Beam Python SDK 2.20.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)