from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config(conf=sc.getConf()) \
    .getOrCreate()

dfTermRaw = spark.read.format("csv") \
    .option("header", "true") \
    .option("delimiter", "\t") \
    .option("inferSchema", "true") \
    .load(inputPath)  # inputPath: placeholder for the gs:// source path
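One way the two-cloud setup is commonly wired (a sketch, not taken from this thread): put both the gcs-connector and hadoop-aws JARs on the classpath and pass each connector its credentials through Hadoop configuration keys, which Spark forwards via the `spark.hadoop.` prefix. All values below are placeholders:

```
# spark-defaults.conf (hypothetical values)
spark.hadoop.google.cloud.auth.service.account.enable        true
spark.hadoop.google.cloud.auth.service.account.json.keyfile  /path/to/gcp-key.json
spark.hadoop.fs.s3a.access.key                               YOUR_AWS_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key                               YOUR_AWS_SECRET_KEY
```

With both sets of credentials configured, the same job can `read` from a `gs://` path and `write` to an `s3a://` path without any intermediate copy step.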
On 5 Jan 2017, at 20:07, Manohar Reddy wrote:
Hi Steve,
Thanks for the reply; below is the follow-up help I need from you.
Do you mean we can set up two native file systems on a single SparkContext, so that the file system is then chosen based on the URL? Is my understanding right?
Manohar
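To illustrate the URL-based dispatch being asked about: Hadoop resolves a FileSystem implementation from the URI scheme of each path, which is why a single SparkContext can read `gs://` input and write `s3a://` output in the same job. A minimal sketch of that scheme-to-connector mapping (the helper function is illustrative, not from the thread; the class names are the standard ones shipped with the two connectors):

```python
from urllib.parse import urlparse

# Standard connector classes: gcs-connector serves gs://, hadoop-aws serves s3a://.
CONNECTORS = {
    "gs": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem",
    "s3a": "org.apache.hadoop.fs.s3a.S3AFileSystem",
}

def connector_for(path):
    """Return the Hadoop FileSystem class the path's URL scheme resolves to."""
    return CONNECTORS.get(urlparse(path).scheme)

print(connector_for("gs://some-bucket/input/terms.tsv"))
print(connector_for("s3a://some-bucket/output/"))
```

The same lookup happens inside Hadoop (via `fs.<scheme>.impl` and service discovery), so no middle component is needed: each path goes to the right store by its scheme alone.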
From: Steve Loughran [mailto:ste...@hortonworks.com]
Sent: Thursday, January 5, 2017 11:05 PM
To: Manohar Reddy
Cc: user@spark.apache.org
Subject: Re: Spark Read from Google store and save in AWS s3
On 5 Jan 2017, at 09:58, Manohar753 <manohar.re...@happiestminds.> wrote:
Hi All,
Is interoperability/communication between two clouds (Google, AWS) possible using Spark?
In my use case I need to take Google store as input to Spark, do some processing, and save the result in AWS S3. Is this kind of use case possible by using Spark directly, without any middle components? Please share the info or a link if you have one.
Thanks,
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Read-from-Google-store-and-save-in-AWS-s3-tp28278.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.