[ 
https://issues.apache.org/jira/browse/BEAM-11266?focusedWorklogId=512674&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-512674
 ]

ASF GitHub Bot logged work on BEAM-11266:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Nov/20 00:10
            Start Date: 17/Nov/20 00:10
    Worklog Time Spent: 10m 
      Work Description: y1chi commented on a change in pull request #13350:
URL: https://github.com/apache/beam/pull/13350#discussion_r524486089



##########
File path: sdks/python/apache_beam/io/mongodbio.py
##########
@@ -241,6 +275,27 @@ def _get_split_keys(self, desired_chunk_size_in_mb, start_pos, end_pos):
               max={'_id': end_pos},
               maxChunkSize=desired_chunk_size_in_mb)['splitKeys'])
 
+  def _get_buckets(self, desired_chunk_size, start_pos, end_pos):
+    if start_pos >= end_pos:
+      # single document not splittable
+      return []
+    size = self.estimate_size()
+    bucket_count = size // desired_chunk_size

Review comment:
       The split function will likely be called recursively for dynamic 
rebalancing, so a range with start_pos and end_pos can be split further 
upon backend request. It might therefore not be reasonable to always divide the 
total collection size by desired_chunk_size to calculate the bucket count. Is 
it possible to only get the buckets within the given _id range? We could 
probably use the average document size times the number of documents to 
calculate the size of the range being split.
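A minimal sketch of the reviewer's suggestion, kept server-free for illustration. The helper name and parameters are hypothetical, not part of the PR; in the real connector, avg_doc_size_bytes would come from the collection's collStats (avgObjSize) and doc_count from a count over the given _id range.

```python
# Hypothetical helper illustrating the review suggestion: estimate the
# size of an _id range as (average document size) x (document count),
# then derive the bucket count from the desired chunk size.
def estimate_range_buckets(avg_doc_size_bytes, doc_count, desired_chunk_size_bytes):
    # Estimated byte size of only the range being split, not the whole
    # collection, so recursive splits stay proportional to the sub-range.
    range_size = avg_doc_size_bytes * doc_count
    # Guarantee at least one bucket so a small but non-empty range is
    # still readable as a single chunk.
    return max(1, range_size // desired_chunk_size_bytes)
```

For example, 1,000 documents averaging 1 KiB each with a desired 100 KiB chunk size would yield about 10 buckets for that range alone, regardless of the total collection size.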




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 512674)
    Time Spent: 1h  (was: 50m)

> Cannot use Python MongoDB connector with Atlas MongoDB
> ------------------------------------------------------
>
>                 Key: BEAM-11266
>                 URL: https://issues.apache.org/jira/browse/BEAM-11266
>             Project: Beam
>          Issue Type: Bug
>          Components: io-py-mongodb
>    Affects Versions: 2.25.0
>         Environment: Google Cloud Dataflow
>            Reporter: Eugene Nikolaiev
>            Assignee: Yichi Zhang
>            Priority: P2
>              Labels: mongodb, python
>             Fix For: 2.27.0
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Cannot use the Python MongoDB connector with a managed Atlas instance. The 
> current implementation makes use of splitVector, a high-privilege 
> command that cannot be granted to any user in Atlas. Getting error:
> {code:java}
> pymongo.errors.OperationFailure: not authorized on properties to execute 
> command
>  { splitVector: "properties.properties", keyPattern: { _id: 1 },
> ...{code}
> BEAM-4567 addressed the same issue in Java connector.
> The proposed solution for Python is to add a {{bucket_auto}} option to the 
> connector, which would configure it to use the {{$bucketAuto}} MongoDB 
> aggregation stage instead of the {{splitVector}} command:
> {code:java}
> pipeline | ReadFromMongoDB(uri='mongodb+srv://user:[email protected]',
>                            db='testdb',
>                            coll='input',
>                            bucket_auto=True)
> {code}
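For context, a sketch of the aggregation pipeline such a bucket_auto mode would run. The helper function is hypothetical; only the $bucketAuto stage itself is real MongoDB syntax. Unlike splitVector, $bucketAuto is an ordinary aggregation stage, so it needs no special privileges on Atlas.

```python
# Hypothetical builder for the $bucketAuto pipeline the bucket_auto=True
# option would use in place of the splitVector command.
def bucket_auto_pipeline(bucket_count):
    # Group documents by _id into roughly evenly sized buckets; each
    # result document has an _id of the form {'min': ..., 'max': ...},
    # which gives the split boundaries for the source ranges.
    return [{'$bucketAuto': {'groupBy': '$_id', 'buckets': bucket_count}}]
```

With pymongo this would be run as something like db.coll.aggregate(bucket_auto_pipeline(64)), and the min/max pairs of each returned bucket would define the per-worker read ranges.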



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
