Github user sryza commented on the pull request:

    https://github.com/apache/spark/pull/3670#issuecomment-69964535
  
    Uploading a further-along work-in-progress patch that switches to a 
recursive addFile. Still to do:
    * Support useCache, i.e. copy from the local filesystem instead of from a 
remote one.
    * Write tests that work.  I'm a little unsure of how to approach this.  I 
wrote a test that uses the Hadoop FileSystem API to read local files, but now 
that I've merged addFile with addDirectory, files with the "file:/" scheme go 
through the HTTP server path.  There's a MiniDFSCluster API I could possibly 
use to cover hdfs:/ schemes, but it's relatively heavyweight.
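
    For reference, a minimal sketch of the lightweight end of that trade-off 
(not code from this patch; the class and method names are hypothetical, and it 
assumes hadoop-common is on the classpath): `FileSystem.getLocal` resolves to 
Hadoop's LocalFileSystem, so listing and reading can be exercised against an 
ordinary temp directory without standing up a MiniDFSCluster.

```java
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical sketch: exercise directory listing through Hadoop's
// LocalFileSystem, so no cluster (Mini or otherwise) is required.
public class LocalFsListSketch {

    // List the entry names under `dir` via the Hadoop FileSystem API.
    static String[] listLocal(String dir) throws Exception {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        // Fully qualified to avoid clashing with java.nio.file.Path.
        FileStatus[] statuses =
            fs.listStatus(new org.apache.hadoop.fs.Path(dir));
        String[] names = new String[statuses.length];
        for (int i = 0; i < statuses.length; i++) {
            names[i] = statuses[i].getPath().getName();
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Create a throwaway local directory containing one file.
        Path tmp = Files.createTempDirectory("addfile-test");
        Files.write(tmp.resolve("part-0"), "hello".getBytes());
        System.out.println(String.join(",", listLocal(tmp.toString())));
    }
}
```

    The same listing code would run unchanged against an hdfs:/ path if a 
MiniDFSCluster were wired in later; only the FileSystem instance changes.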

