[ 
https://issues.apache.org/jira/browse/BIGTOP-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14113026#comment-14113026
 ] 

Jörn Franke edited comment on BIGTOP-1414 at 8/27/14 11:09 PM:
---------------------------------------------------------------

Hi,

I attached a chart. For the first Spark job, to keep it simple, I would use the 
cleaned CSV as input and produce as output various groupings by country and 
product, plus some simple statistics (count, average).

Optionally, the job can store the results to a file.
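To make the grouping concrete, here is a minimal sketch of that first job's logic using plain Scala collections. The `Transaction` type and its field names are assumptions on my side; the actual job would express the same steps as RDD transformations (e.g. `map` plus `reduceByKey`) over the cleaned CSV:

```scala
// Hypothetical record type for one row of the cleaned BigPetStore CSV
// (field names are assumptions, not the real schema).
case class Transaction(country: String, product: String, price: Double)

// Group by (country, product) and compute count and average price.
// Plain Scala collections keep the sketch runnable; the Spark job would
// do the same with rdd.map(t => ((t.country, t.product), t.price))
// followed by a reduceByKey / aggregateByKey.
def summarize(txs: Seq[Transaction]): Map[(String, String), (Int, Double)] =
  txs.groupBy(t => (t.country, t.product)).map { case (key, group) =>
    key -> (group.size, group.map(_.price).sum / group.size)
  }
```

Writing the result to a file would then just be a matter of serializing this map (or calling `saveAsTextFile` on the corresponding RDD).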

Of course, we should later define additional jobs demonstrating further features 
of Spark.

I am not yet sure about the Spark version: I propose at least 1.0.0, because 
this one is rather stable and included, among others, in the Cloudera QuickStart 
VM 5.1.

Let me know what you think.

Best regards,


was (Author: jornfranke):
chart with Spark job

> Add Apache Spark implementation to BigPetStore
> ----------------------------------------------
>
>                 Key: BIGTOP-1414
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-1414
>             Project: Bigtop
>          Issue Type: Improvement
>          Components: blueprints
>    Affects Versions: backlog
>            Reporter: jay vyas
>             Fix For: 0.9.0
>
>         Attachments: chart.png
>
>
> Currently we only process data with Hadoop.  Now it's time to add Spark to the 
> BigPetStore application.  This will demonstrate the difference between a 
> MapReduce-based Hadoop implementation of a big data app and a Spark one.
> *We will need to*
> - Update the Graphviz arch.dot to diagram Spark as a new path.
> - Add a Spark job to the existing code, in a new package, which uses the 
> existing Scala-based generator; however, we will use it inside a Spark job 
> rather than in a Hadoop InputSplit.
> - The job should output an RDD, which can then be serialized to disk or fed 
> into the next Spark job.
> *The next Spark job should*
> - Group the data and write product summaries to a local file.
> - Run a product recommender against the input data set.
> We want the jobs to be runnable modularly or as a single job, to leverage the 
> RDD paradigm.
> It will be interesting to see how the code is architected.  Let's start the 
> planning in this JIRA.  I have some stuff I've informally hacked together; 
> maybe I can attach an initial patch just to start a dialog.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
