[
https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945404#comment-14945404
]
ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------
Github user dlyubimov commented on the pull request:
https://github.com/apache/mahout/pull/137#issuecomment-145934646
If they are standard algorithm tests coming from the abstract test suites
in math-scala, they currently cannot be disabled (in any way I know of,
anyway) without disabling them for other backends too.
Thanks Alexey, I will take a look again when I have time!
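For context, the shared-suite pattern referred to above looks roughly like
this. The trait and class names are hypothetical, but the structure matches
how an abstract suite in math-scala gets mixed into a backend's test module:

```scala
import org.scalatest.FunSuite

// Hypothetical shared suite living in math-scala: the test cases are
// defined once, against the backend-agnostic DSL.
trait DecompositionsSuiteBase extends FunSuite {
  test("backend computes the same result as the in-core reference") {
    // ... backend-agnostic DSL code would go here ...
    assert(math.abs(1.0 - 1.0) < 1e-6)
  }
}

// A backend module only mixes the trait in; the test bodies live upstream.
// There is no per-backend switch here: ignoring or removing a test in the
// trait disables it for every other backend as well.
class FlinkDecompositionsSuite extends DecompositionsSuiteBase
```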
On Tue, Oct 6, 2015 at 1:02 AM, Suneel Marthi <[email protected]>
wrote:
> I think it's best to disable the failing tests, else it's going to break
> the CI build
>
> > On Oct 6, 2015, at 3:45 AM, Alexey Grigorev <[email protected]>
> > wrote:
> >
> > So I fixed it, and I can run mvn clean package -DskipTests. But since
> > some of the tests don't pass for the Flink backend, the build fails if
> > I remove -DskipTests. What do you think, should I disable the failing
> > tests?
> >
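For the disable-the-failing-tests question above, one conceivable mechanism
(an assumption, not something settled in this thread) would be tagging the
backend-sensitive tests in the shared math-scala suites and excluding that
tag only in the Flink module's test run. A minimal ScalaTest sketch, with a
hypothetical tag name:

```scala
import org.scalatest.{FunSuite, Tag}

// Hypothetical tag the shared suites could attach to tests known to fail
// on a particular backend.
object BackendSensitive extends Tag("org.apache.mahout.test.BackendSensitive")

class ExampleSuite extends FunSuite {

  // Untagged test: runs in every backend's build.
  test("vector addition is elementwise") {
    assert(Seq(1, 2).zip(Seq(3, 4)).map { case (a, b) => a + b } == Seq(4, 6))
  }

  // Tagged test: a single module's runner could exclude the tag (e.g. via
  // the scalatest-maven-plugin's tagsToExclude setting) without affecting
  // any other backend's build.
  test("decomposition matches the in-core reference", BackendSensitive) {
    assert(math.abs(1.0 - 1.0) < 1e-6)
  }
}
```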
> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
> Key: MAHOUT-1570
> URL: https://issues.apache.org/jira/browse/MAHOUT-1570
> Project: Mahout
> Issue Type: Improvement
> Reporter: Till Rohrmann
> Assignee: Alexey Grigorev
> Labels: DSL, flink, scala
> Fix For: 0.11.1
>
>
> With the finalized abstraction of the Mahout DSL plans from the backend
> operations (MAHOUT-1529), it should be possible to integrate further backends
> for the Mahout DSL. Apache Flink would be a suitable candidate for an
> execution backend.
> With respect to the implementation, the biggest difference between Spark and
> Flink at the moment is probably the incremental rollout of plans, which is
> triggered by Spark's actions and which is not yet supported by Flink.
> However, the Flink community is working on this issue. For the moment, it
> should be possible to circumvent the problem by writing intermediate results
> required by an action to HDFS and reading them back from there.
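The HDFS round-trip described above could look roughly like the following
with Flink's DataSet API. This is a minimal sketch assuming the Scala API
and a hypothetical path, not the implementation in the pull request:

```scala
import org.apache.flink.api.scala._

object HdfsRoundTripSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Some intermediate result that a downstream "action" would need
    // before the overall plan is finished.
    val intermediate: DataSet[(Int, Double)] =
      env.fromElements((0, 1.0), (1, 2.0), (2, 3.0))
        .map { case (i, x) => (i, x * x) }

    // Flink has no incremental plan rollout yet, so materialize the
    // result: write it out and run the plan up to this point. The path
    // is hypothetical; on a cluster it would live on HDFS.
    val path = "hdfs:///tmp/mahout-flink-intermediate"
    intermediate.writeAsCsv(path)
    env.execute("materialize intermediate result")

    // A later plan then starts from the materialized data instead of
    // re-deriving it from the original inputs.
    val reloaded = env.readCsvFile[(Int, Double)](path)
    reloaded.map { case (i, x) => s"$i -> $x" }
      .writeAsText(path + "-echo")
    env.execute("consume materialized result")
  }
}
```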