[ https://issues.apache.org/jira/browse/BAHIR-67?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15582750#comment-15582750 ]
Steve Loughran commented on BAHIR-67:
-------------------------------------

oh, I don't dispute its merits; I'm just wondering where it goes. webhdfs is implemented in hadoop-hdfs, so all the JARs for it should already be present in Spark builds, making this more a matter of testing functionality (and, implicitly, classpath setup). You shouldn't need any more JARs than there are today.

> WebHDFS Data Source for Spark SQL
> ---------------------------------
>
>                 Key: BAHIR-67
>                 URL: https://issues.apache.org/jira/browse/BAHIR-67
>             Project: Bahir
>          Issue Type: Improvement
>          Components: Spark SQL Data Sources
>            Reporter: Sourav Mazumder
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Ability to read/write data in Spark from/to the HDFS of a remote Hadoop cluster.
> In today's world of analytics, many use cases need the capability to access data from multiple remote data sources in Spark. Though Spark integrates well with its local Hadoop cluster, it lacks support for connecting to a remote Hadoop cluster. In reality, not all enterprise data lives in a single Hadoop cluster, and running the Spark cluster co-located with the Hadoop cluster is not always an option.
> In this improvement we propose to create a connector for reading and writing data from/to the HDFS of a remote Hadoop cluster from Spark, using the webhdfs API.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
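For context on what such a connector would call under the hood: WebHDFS exposes HDFS over a REST API with URLs of the form `http://<host>:<port>/webhdfs/v1/<path>?op=...`. A minimal sketch of that URL shape follows; the `webhdfs_url` helper and the host/path values are illustrative, not part of any proposed implementation.

```python
from urllib.parse import urlencode

# Sketch of the URL shape defined by the Hadoop WebHDFS REST API:
#   http://<host>:<port>/webhdfs/v1/<path>?op=...
# The helper and the example host/path below are hypothetical.
def webhdfs_url(host: str, port: int, path: str, op: str, **params) -> str:
    """Build a WebHDFS v1 REST URL for a filesystem operation (e.g. OPEN, CREATE)."""
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# Reading a remote file corresponds to op=OPEN; writing begins with op=CREATE.
print(webhdfs_url("nn.example.com", 50070, "/data/events.json", "OPEN"))
# -> http://nn.example.com:50070/webhdfs/v1/data/events.json?op=OPEN
```

Because the scheme is already registered by hadoop-hdfs, an existing Spark build can typically address such a file directly with a `webhdfs://` path, which is the crux of the comment above.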