On 5 Jan 2017, at 20:07, Manohar Reddy <manohar.re...@happiestminds.com> wrote:
> Hi Steve,
>
> Thanks for the reply; below is the follow-up help needed from you. Do you mean we can set up two native file systems on a single SparkContext, so that based on the URL prefix (gs://bucket/path and s3a://bucket-on-s3/path2) it will identify and read/write to the appropriate cloud? Is that understanding right?

I wouldn't use the term "native FS", as they are all just client libraries to talk to the relevant object stores. You'd still have to have the cluster "default" FS, but yes, you can use them: get your classpath right and they are all just URLs you use in your code.
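
As a minimal sketch of what that looks like, assuming the hadoop-aws and gcs-connector JARs (plus their SDK dependencies) are on the classpath and using hypothetical bucket names:

  import org.apache.spark.sql.SparkSession

  // Minimal sketch; bucket names and app name are placeholders.
  val spark = SparkSession.builder().appName("cross-cloud-copy").getOrCreate()
  val sc = spark.sparkContext

  // The Hadoop FileSystem layer picks the client library by URL scheme:
  // "gs" goes to the GCS connector, "s3a" to S3AFileSystem. Credentials
  // for each store come from the usual Hadoop config properties
  // (e.g. fs.s3a.access.key / fs.s3a.secret.key for S3A).
  val data = sc.textFile("gs://bucket/path")
  data.saveAsTextFile("s3a://bucket-on-s3/path2")

  spark.stop()

Nothing special happens at the Spark level here; both paths are just URLs handed to the Hadoop filesystem APIs, which resolve the right client per scheme.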