This should help
https://cloud.google.com/hadoop/examples/bigquery-connector-spark-example
On 8 January 2017 at 03:49, neil90 wrote:
> Here is how you would read from Google Cloud Storage (note you need to
> create a service account key) ->
>
> os.environ['PYSPARK_SUBMIT_ARGS'] = """--jars
> /home/neil/Downloads/gcs-connector-latest-hadoop2.jar pyspark-shell"""
> from pyspark import SparkContext, SparkConf
> from pyspark.sql import SparkSession
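A fuller sketch of the approach in the quoted snippet, for anyone following along. The jar path, key file location, and bucket are assumptions, not something from this thread; the `google.cloud.auth.service.account.*` settings are the GCS connector's standard keys, passed through Spark's `spark.hadoop.` prefix:

```python
import os

# Assumed locations -- point these at your own connector jar and key file.
GCS_JAR = "/home/neil/Downloads/gcs-connector-latest-hadoop2.jar"
KEY_FILE = "/path/to/service-account-key.json"  # hypothetical path

# Extra jars must be declared before pyspark starts up.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--jars {} pyspark-shell".format(GCS_JAR)

def build_gcs_session():
    """Create a SparkSession that authenticates to GCS with a service account key."""
    from pyspark.sql import SparkSession  # imported after the env var is set
    return (SparkSession.builder
            .appName("gcs-read")
            .config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
            .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
                    KEY_FILE)
            .getOrCreate())

# Usage (hypothetical bucket):
# df = build_gcs_session().read.csv("gs://your-bucket/path/file.csv")
```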
On 5 Jan 2017, at 20:07, Manohar Reddy <manohar.re...@happiestminds.com> wrote:
Hi Steve,
Thanks for the reply; below is the follow-up help I need from you.
Do you mean we can set up two native file systems on a single SparkContext, so
that based on the URL prefix (gs://bucket/path for the source, s3a for the
destination) the right file system is used? Is my understanding right?
Manohar
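That prefix-based dispatch is indeed how Hadoop file systems resolve: the connector class is chosen from the URL scheme, so one SparkContext can talk to both stores. A toy illustration of the idea (the class names match the real connectors, but this mapping is illustrative, not Hadoop's actual lookup code):

```python
from urllib.parse import urlparse

# Illustrative scheme -> FileSystem mapping, mirroring the fs.<scheme>.impl
# configuration Hadoop consults to pick a connector per URL prefix.
FS_IMPLS = {
    "gs": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem",
    "s3a": "org.apache.hadoop.fs.s3a.S3AFileSystem",
}

def filesystem_for(url):
    """Return the connector class this sketch would pick for the given URL."""
    return FS_IMPLS[urlparse(url).scheme]
```

So a job can read `gs://bucket/path` and write `s3a://bucket/out` in the same context, provided both connectors' jars and credentials are configured.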
From: Steve Loughran [mailto:ste...@hortonworks.com]
Sent: Thursday, January 5, 2017 11:05 PM
To: Manohar Reddy
Cc: user@spark.apache.org
Subject: Re: Spark Read from Google store and save in AWS s3
On 5 Jan 2017, at 09:58, Manohar753 <manohar.re...@happiestminds.com> wrote:
Hi All,
Is interoperability/communication between two clouds (Google, AWS) possible
using Spark?
In my use case I need to take Google storage as input to Spark, do some
processing, and finally store the result in S3.
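Rounding out the thread: the cross-cloud read/write mostly comes down to having both connectors on the classpath and credentials for each store. A spark-defaults.conf sketch, where every path and credential below is a placeholder:

```
# spark-defaults.conf sketch -- all paths and credentials are placeholders
spark.jars   /path/to/gcs-connector-latest-hadoop2.jar,/path/to/hadoop-aws.jar
spark.hadoop.google.cloud.auth.service.account.enable        true
spark.hadoop.google.cloud.auth.service.account.json.keyfile  /path/to/service-account-key.json
spark.hadoop.fs.s3a.access.key                               YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key                               YOUR_SECRET_KEY
```

With those set, a single job can read from a `gs://` path and write the result to an `s3a://` path.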