Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19274
Hi @loneknightpy , thinking a bit about your PR, I believe this can also be done on
the user side. Users could create several threads within one task
(RDD#mapPartitions) to consume the records concurrently, so such a feature may
not be necessary to land in Spark's code. What do you think?
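To illustrate the suggested user-side alternative, here is a minimal sketch of the pattern in plain Python: a function that drains a partition's iterator with a small thread pool. The `process_record` body and the worker count are placeholders; in PySpark the same function would be passed to `rdd.mapPartitions`, and a Scala equivalent would use an `ExecutorService` inside `mapPartitions`.

```python
from concurrent.futures import ThreadPoolExecutor

def process_record(record):
    # Placeholder for per-record work (e.g. an I/O-bound call);
    # doubling stands in for the real computation.
    return record * 2

def consume_partition(records):
    # Consume the partition's iterator with several threads.
    # In PySpark this function could be passed to rdd.mapPartitions.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_record, records))

# Standalone demonstration on a plain iterator (no Spark required).
print(consume_partition(iter(range(5))))
```

Threading helps most when per-record work is I/O-bound; for CPU-bound work, increasing the number of Spark partitions is usually the simpler route.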