Hi all,
I was hoping for some advice on creating a MicroBatchStream for MongoDB.
MongoDB has a tailable cursor that listens to changes in a collection
(known as a change stream). As a user watches a collection via the change
stream cursor, the cursor reports a resume token that determines the
point from which the stream can be resumed.
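To make the shape of the question concrete, here is a minimal sketch of
what I have in mind, assuming the Spark 3 MicroBatchStream API. The
ChangeStreamClient, ResumeTokenOffset and ChangeStreamPartition names are
placeholders of mine, not connector code:

import org.apache.spark.sql.connector.read.{InputPartition, PartitionReaderFactory}
import org.apache.spark.sql.connector.read.streaming.{MicroBatchStream, Offset}

// Placeholder for whatever wraps the tailable change stream cursor.
trait ChangeStreamClient {
  def currentResumeToken(): String // resume token as extended JSON
  def latestResumeToken(): String
  def close(): Unit
}

// Offset wrapping the opaque resume token so Spark can checkpoint it.
case class ResumeTokenOffset(tokenJson: String) extends Offset {
  override def json(): String = tokenJson
}

// A change stream is a single ordered cursor, so one partition covering
// the (start, end] token range is the simplest possible plan.
case class ChangeStreamPartition(startToken: String, endToken: String)
    extends InputPartition

class MongoMicroBatchStream(client: ChangeStreamClient,
                            readers: PartitionReaderFactory)
    extends MicroBatchStream {

  override def initialOffset(): Offset =
    ResumeTokenOffset(client.currentResumeToken())

  override def latestOffset(): Offset =
    ResumeTokenOffset(client.latestResumeToken())

  override def deserializeOffset(json: String): Offset =
    ResumeTokenOffset(json)

  override def planInputPartitions(start: Offset, end: Offset): Array[InputPartition] =
    Array(ChangeStreamPartition(start.json(), end.json()))

  override def createReaderFactory(): PartitionReaderFactory = readers

  override def commit(end: Offset): Unit = () // nothing to clean up per batch

  override def stop(): Unit = client.close()
}

Serializing the token's extended-JSON form directly as the Offset's json()
keeps checkpoint recovery trivial: deserializeOffset just wraps the string
back up, and the cursor can be re-opened with resumeAfter.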
Hi all,
I've been prototyping an implementation of the DataSource V2 writer for the
MongoDB Spark Connector and I have a couple of questions about how it's
intended to be used with database systems. According to the Javadoc for
DataWriter.commit():
*"this method should still "hide" the written
Hi,
I hope this is the correct mailing list. I've been adding v2 support to the
MongoDB Spark connector using Spark 2.3.1. I've noticed one of my tests
passes when using the original DefaultSource but fails with my v2
implementation:
The code I'm running is:
val df = spark.loadDS[Character]()
+1 Having an rc1 would help me get stable feedback on using my library with
Spark, compared to relying on 2.0.0-SNAPSHOT.
On Fri, 20 May 2016 at 05:57 Xiao Li wrote:
> Changed my vote to +1. Thanks!
>
> 2016-05-19 13:28 GMT-07:00 Xiao Li:
>
>> Will