Hi,

Do you have the budget to keep a Spark cluster up and running continuously?
The way I make it work is to test the code on my local system against a local
Spark installation, and then create a data pipeline triggered by a Lambda
function: it starts a Spark cluster, processes the data via Spark steps, and
then terminates the cluster, having stored the results in S3 (or somewhere
else, if you have a VPC and VPN configured).
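The transient-cluster pattern above can be sketched roughly as below. This is a minimal, hypothetical illustration using boto3 against EMR; the bucket names, script paths, instance types, and release label are placeholders I've invented, not values from this thread, and the IAM roles assume the EMR defaults.

```python
# Sketch of the transient-cluster pattern: a Lambda handler that launches
# a Spark cluster, runs one step, and lets the cluster terminate itself.
# All names below (bucket, script, instance types) are placeholders.

def build_job_flow(script_s3_path, output_s3_path):
    """Build run_job_flow parameters for a cluster that runs one Spark
    step and shuts down when the step finishes."""
    return {
        "Name": "transient-spark-pipeline",
        "ReleaseLabel": "emr-4.2.0",  # placeholder EMR release (Spark 1.5.x era)
        "Applications": [{"Name": "Spark"}],
        "Instances": {
            "MasterInstanceType": "m3.xlarge",   # placeholder
            "SlaveInstanceType": "m3.xlarge",    # placeholder
            "InstanceCount": 3,
            # The key to the pattern: no idle cluster left running.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        "Steps": [{
            "Name": "process-data",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", script_s3_path, output_s3_path],
            },
        }],
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

def handler(event, context):
    """Lambda entry point: kick off the transient cluster."""
    import boto3  # imported here so the config builder stays dependency-free
    emr = boto3.client("emr")
    params = build_job_flow(
        "s3://my-bucket/jobs/process.py",  # placeholder script location
        "s3://my-bucket/output/",          # placeholder output location
    )
    return emr.run_job_flow(**params)
```

Because `KeepJobFlowAliveWhenNoSteps` is false and the step's failure action is `TERMINATE_CLUSTER`, you only pay for the cluster while the step is actually running; the results live on in S3 after it is gone.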

But that approach may be completely wrong if money is not a constraint.



Regards,
Gourav

On Wed, Dec 2, 2015 at 11:35 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> Have you seen this thread ?
>
>
> http://search-hadoop.com/m/q3RTtvmsYMv0tKh2&subj=Re+Upgrading+Spark+in+EC2+clusters
>
> On Wed, Dec 2, 2015 at 2:39 PM, Andy Davidson <
> a...@santacruzintegration.com> wrote:
>
>> I am using spark-1.5.1-bin-hadoop2.6. I used
>> spark-1.5.1-bin-hadoop2.6/ec2/spark-ec2 to create a cluster. Any idea how I
>> can upgrade to 1.5.2 prebuilt binary?
>>
>> Also if I choose to build the binary, how would I upgrade my cluster?
>>
>> Kind regards
>>
>> Andy
>>
>>
>>
>