Github user piaozhexiu commented on the pull request:
https://github.com/apache/spark/pull/8512#issuecomment-138354520
@davies @marmbrus thank you very much for your comments.
**1)** I totally agree that we should apply parallel file listing to
`HadoopFsRelation` too. But Hadoop's `FileInputFormat.getSplits()` already
implements the kind of thread pool you're suggesting (see
[here](https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L235)),
so all we need to do is list multiple input paths together in a single
`JobConf` instead of listing them separately. I am happy to update my patch to
improve `HadoopFsRelation`.
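For reference, here's a minimal sketch of what I mean (the helper name is mine; the `list-status.num-threads` property is the Hadoop 2.x knob that enables the multi-threaded listing linked above):

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapred.{FileInputFormat, InputSplit, JobConf, TextInputFormat}

// Sketch: hand getSplits() all partition directories at once, so
// FileInputFormat's internal listing thread pool fans out across every
// path, instead of building one JobConf per path and listing sequentially.
def listSplitsTogether(paths: Seq[Path], numListThreads: Int = 8): Array[InputSplit] = {
  val conf = new JobConf()
  // Turns on multi-threaded file listing inside FileInputFormat
  // (defaults to 1, i.e. single-threaded).
  conf.setInt("mapreduce.input.fileinputformat.list-status.num-threads", numListThreads)
  FileInputFormat.setInputPaths(conf, paths: _*)
  val inputFormat = new TextInputFormat()
  inputFormat.configure(conf)
  inputFormat.getSplits(conf, paths.size) // numSplits is only a hint
}
```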
**2)** Regarding S3: since we don't list files via
`FileInputFormat.getSplits()` there, we will need to implement the thread pool
ourselves.
> For example, we could create a HadoopRDD for each partition in parallel
> (using a thread pool), and also use an S3 client for each HadoopRDD (if it's
> S3). Then we have two levels of parallelism, which could be even faster than
> the current code (in your benchmark). Also, each partition usually has a
> single directory, so the S3 client will not fetch more files than expected
> (there could be some bad cases in the current approach).
It will definitely be interesting to see how fast this approach is.
Let me try it and get back to you.
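In the meantime, here's a rough sketch of the kind of thread pool I have in mind for S3 (names and defaults are mine; it just lists each partition directory on its own thread via the Hadoop `FileSystem` API, without recursing, since each partition usually maps to a single directory):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}

// Sketch: list partition directories concurrently. S3 listing is dominated
// by per-request latency, so a fixed-size pool of listing threads should
// recover most of the speedup seen in the benchmark.
def listLeafFilesInParallel(
    paths: Seq[Path],
    hadoopConf: Configuration,
    numThreads: Int = 8): Seq[FileStatus] = {
  implicit val ec = ExecutionContext.fromExecutorService(
    Executors.newFixedThreadPool(numThreads))
  val futures = paths.map { path =>
    Future {
      val fs = path.getFileSystem(hadoopConf) // resolves to the S3 client for s3 paths
      fs.listStatus(path).filterNot(_.isDirectory).toSeq
    }
  }
  try Await.result(Future.sequence(futures), Duration.Inf).flatten
  finally ec.shutdown()
}
```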