GitHub user tgravescs commented on the issue:
https://github.com/apache/spark/pull/21577
This was along the lines of what I was thinking as well. I'll do a full
review later.
Just curious, were you able to create a test that actually reproduces it?
From the other PR:
> and the data source v2 API assumes (job id, partition id, task attempt id)
can uniquely define a write task, even counting the failure cases.
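
To make that assumption concrete, here is a minimal sketch (all names hypothetical, not Spark's actual v2 API) of why the triple has to be unique even across task retries: if two attempts of the same partition shared an identity, a retried writer could clobber or double-commit the output of its predecessor.

```scala
// Hypothetical illustration, not Spark code: within one write job, the
// triple (jobId, partitionId, taskAttemptId) must identify a write task
// uniquely, even when tasks are retried after failures.
final case class WriteTaskIdentity(jobId: String, partitionId: Int, taskAttemptId: Long)

object WriteTaskIdentity {
  // Hypothetical helper: derive a per-attempt temp output path from the
  // identity, so a retried attempt never collides with an earlier one.
  def tempPath(id: WriteTaskIdentity): String =
    s"_temporary/${id.jobId}/part-${id.partitionId}-attempt-${id.taskAttemptId}"
}

object UniquenessDemo extends App {
  // Two attempts of the same partition in the same job: distinct
  // identities, hence distinct temp paths.
  val first = WriteTaskIdentity("job-0", partitionId = 3, taskAttemptId = 17L)
  val retry = WriteTaskIdentity("job-0", partitionId = 3, taskAttemptId = 42L)
  assert(first != retry)
  println(WriteTaskIdentity.tempPath(first)) // _temporary/job-0/part-3-attempt-17
  println(WriteTaskIdentity.tempPath(retry)) // _temporary/job-0/part-3-attempt-42
}
```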
Are there other docs that need to be updated for the v2 data source API?
@rdblue @cloud-fan