>> Thanks.
>>
>> Denis
>>
>> --
>> *From:* Nicholas Chammas
>> *To:* Denis Mikhalkin; "user@spark.apache.org" <user@spark.apache.org>
>> *Sent:* Sunday, 25 January 2015, 3:06
>> *Subject:* Re: Analyzing data from non-standard data sources (e.g. AWS
>> Redshift)
From: Nicholas Chammas
To: Denis Mikhalkin; "user@spark.apache.org"
Sent: Sunday, 25 January 2015, 3:06
Subject: Re: Analyzing data from non-standard data sources (e.g. AWS Redshift)
I believe databricks provides an rdd interface to redshift. Did you check
spark-packages.org?
On Sat, 24 Jan 2015 at 6:45 AM Denis Mikhalkin
wrote:
Hello,

We've got some analytics data in AWS Redshift. The data is constantly being
updated.

I'd like to be able to write a query against Redshift which would return a
subset of the data, and then run a Spark job (PySpark) to do some analysis.
I could not find an RDD which would let me do it OOB (Pyt
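For readers landing on this thread later: one way to do what Denis describes is the generic JDBC data source, which returns a DataFrame (convertible to an RDD via `.rdd`) rather than a purpose-built Redshift RDD. A minimal hedged sketch follows — the cluster URL, the `events` table, and the credentials are placeholder assumptions, not details from this thread, and the Redshift JDBC driver jar must be on the classpath.

```python
def as_jdbc_dbtable(query, alias="subq"):
    # Spark's JDBC reader accepts a derived table "(...) AS alias" anywhere a
    # table name is expected, so Redshift evaluates the query server-side and
    # Spark only receives the resulting subset of rows.
    return "({}) AS {}".format(query.strip().rstrip(";"), alias)

def read_redshift(spark, jdbc_url, query, user, password):
    # 'spark' is an existing SparkSession. jdbc_url looks like
    # jdbc:redshift://<host>:5439/<database> (placeholder, not a real cluster).
    return (spark.read.format("jdbc")
            .option("url", jdbc_url)
            .option("dbtable", as_jdbc_dbtable(query))
            .option("user", user)
            .option("password", password)
            .load())

# Usage (requires a live cluster, so not runnable as-is; names are made up):
# df = read_redshift(spark, "jdbc:redshift://example:5439/analytics",
#                    "SELECT user_id, event FROM events WHERE day = '2015-01-24'",
#                    "analyst", "secret")
# df.groupBy("event").count().show()
```

The subquery-as-`dbtable` trick is what pushes the filtering down to Redshift, so only the subset of interest crosses the network into Spark.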