[
https://issues.apache.org/jira/browse/MAHOUT-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948978#comment-13948978
]
Dmitriy Lyubimov commented on MAHOUT-1489:
------------------------------------------
bq. 1) Ability to execute against a local or remote spark cluster
yes
bq. 2) Create parallelized collections based on an existing scala collection
no, this is not in the scope
bq. 3) Create a distributed dataset from a remote or local hadoop data set
no, this is not in the scope
bq. 4) A subset of transformations and actions as listed in the following link
(http://spark.incubator.apache.org/docs/0.8.1/scala-programming-guide.html)
no, this is not in the scope
This is a purely engineering task, nothing fancy, no new functionality:
just the shell with the proper packages pre-imported into scope. I've
already done this several times, though without the Spark specifics.
The implementor needs to familiarize himself with the following topics:
(1) the Scala tools, in particular the Scala shell (REPL) API
(2) Spark's modifications and additions to the original shell. It should
just be a matter of taking a copy of the Spark shell and adding the proper
imports into scope per the Scala Bindings document (see the sketch below).
> Interactive Scala & Spark Bindings Shell & Script processor
> -----------------------------------------------------------
>
> Key: MAHOUT-1489
> URL: https://issues.apache.org/jira/browse/MAHOUT-1489
> Project: Mahout
> Issue Type: New Feature
> Affects Versions: 1.0
> Reporter: Saikat Kanjilal
> Assignee: Dmitriy Lyubimov
> Fix For: 1.0
>
>
> Build an interactive shell / script runner (just like the Spark shell),
> something very similar to R's interactive/script-runner mode.