[
https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964639#comment-14964639
]
ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------
GitHub user dlyubimov opened a pull request:
https://github.com/apache/mahout/pull/161
MAHOUT-1570, sub-pr: a suggestion: let's unify all key class tag
extractors.
Unifying "keyClassTag" of checkpoints and "classTagK" of logical operators
and elevating "keyClassTag" into the DrmLike[] trait. No more logical
forks.
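As a rough sketch of the unification the PR proposes (trait members are taken from the description above; the concrete operator and checkpoint class names here are illustrative, not the actual Mahout sources):

```scala
import scala.reflect.ClassTag

// Sketch only: the row-key ClassTag is declared once on the base trait,
// so callers never fork on whether a DRM is a checkpoint or a logical op.
trait DrmLike[K] {
  def keyClassTag: ClassTag[K]
}

// A logical operator derives the tag from its input instead of carrying
// its own "classTagK" field.
case class OpAewScalar[K](input: DrmLike[K]) extends DrmLike[K] {
  def keyClassTag: ClassTag[K] = input.keyClassTag
}

// A checkpointed DRM supplies the tag directly (here via an implicit).
case class CheckpointedDrmStub[K]()(implicit val keyClassTag: ClassTag[K])
  extends DrmLike[K]

object KeyTagDemo extends App {
  val cp = CheckpointedDrmStub[Int]()
  val op = OpAewScalar(cp)
  // Same accessor on both -- no per-class key-tag extractor needed.
  println(op.keyClassTag) // Int
}
```

The point of elevating the accessor into the trait is that code generic over `DrmLike[K]` can recover the key `ClassTag` uniformly, without pattern-matching on the concrete plan node.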
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dlyubimov/mahout flink-binding
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/mahout/pull/161.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #161
----
commit 0df6e08f27c76c6eeb4a714e0087722bef166970
Author: Dmitriy Lyubimov <[email protected]>
Date: 2015-10-20T06:26:43Z
Unifying "keyClassTag" of checkpoints and "classTagK" of logical operators
and elevating "keyClassTag" into the DrmLike[] trait. No more logical
forks.
----
> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
> Key: MAHOUT-1570
> URL: https://issues.apache.org/jira/browse/MAHOUT-1570
> Project: Mahout
> Issue Type: Improvement
> Reporter: Till Rohrmann
> Assignee: Alexey Grigorev
> Labels: DSL, flink, scala
> Fix For: 0.11.1
>
>
> With the finalized abstraction of the Mahout DSL plans from the backend
> operations (MAHOUT-1529), it should be possible to integrate further backends
> for the Mahout DSL. Apache Flink would be a suitable candidate for an
> execution backend.
> With respect to the implementation, the biggest difference between Spark and
> Flink at the moment is probably the incremental rollout of plans, which is
> triggered by Spark's actions and which is not supported by Flink yet.
> However, the Flink community is working on this issue. For the moment, it
> should be possible to circumvent this problem by writing intermediate results
> required by an action to HDFS and reading from there.
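The workaround described in the quoted issue can be sketched generically. This is a minimal stand-alone sketch, using local temp files in place of HDFS and an in-memory sequence in place of a computed DRM; all names are illustrative:

```scala
import java.nio.file.Files
import scala.jdk.CollectionConverters._

// Sketch of the "materialize through storage" workaround: because the
// backend cannot roll out a plan incrementally, an action forces the plan
// by persisting the intermediate result, and the downstream part of the
// plan starts over by reading it back in.
object MaterializeDemo extends App {
  // Stand-in for a computed intermediate result (row key, row values).
  val intermediate = Seq("0\t1.0 2.0", "1\t3.0 4.0")

  // "Action": execute the plan so far and persist the result
  // (HDFS in the real setting; a temp file here).
  val path = Files.createTempFile("drm-intermediate", ".tsv")
  Files.write(path, intermediate.asJava)

  // Downstream plan: read the materialized result back instead of
  // chaining onto a live in-memory pipeline.
  val reloaded = Files.readAllLines(path).asScala.toSeq
  println(reloaded == intermediate) // true
}
```

The round trip through storage is what breaks the plan into independently executable pieces, at the cost of the extra I/O the issue mentions.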
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)