[ https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964741#comment-14964741 ]

ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------

Github user hsaputra commented on a diff in the pull request:

    https://github.com/apache/mahout/pull/137#discussion_r42465218
  
    --- Diff: pom.xml ---
    @@ -121,6 +121,8 @@
         <scala.compat.version>2.10</scala.compat.version>
         <scala.version>2.10.4</scala.version>
         <spark.version>1.3.1</spark.version>
    +    <!-- TODO: Remove snapshot dependency when Flink 0.9.1 is released -->
    +    <flink.version>0.9-SNAPSHOT</flink.version>
    --- End diff ---
    
    Flink 0.9.1 is out, so we could remove the SNAPSHOT label, I suppose.


> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
>                 Key: MAHOUT-1570
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1570
>             Project: Mahout
>          Issue Type: Improvement
>            Reporter: Till Rohrmann
>            Assignee: Alexey Grigorev
>              Labels: DSL, flink, scala
>             Fix For: 0.11.1
>
>
> With the finalized abstraction of the Mahout DSL plans from the backend 
> operations (MAHOUT-1529), it should be possible to integrate further backends 
> for the Mahout DSL. Apache Flink would be a suitable candidate for such an 
> execution backend. 
> With respect to the implementation, the biggest difference between Spark and 
> Flink at the moment is probably the incremental execution of plans, which is 
> triggered by Spark's actions and which Flink does not support yet. However, 
> the Flink community is working on this. For the moment, it should be possible 
> to work around the problem by writing the intermediate results required by an 
> action to HDFS and reading them back from there.
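
For illustration, below is a rough Scala sketch of what a Samsara DSL computation
could look like on the proposed Flink backend. The core DSL calls (drmParallelize,
t, %*%, collect, dfsWrite, drmDfsRead) are the existing math-scala API; the
flinkbindings import and the FlinkDistributedContext wrapper are assumptions based
on this pull request and may differ from what finally lands.

    import org.apache.flink.api.scala.ExecutionEnvironment
    import org.apache.mahout.math.scalabindings._
    import org.apache.mahout.math.drm._
    import org.apache.mahout.math.scalabindings.RLikeOps._
    import org.apache.mahout.math.drm.RLikeDrmOps._
    import org.apache.mahout.flinkbindings._   // module added by this pull request (assumed name)

    object FlinkBackendSketch extends App {

      // Assumed wrapper from the Flink bindings: adapts a Flink
      // ExecutionEnvironment to Mahout's DistributedContext.
      implicit val ctx: DistributedContext =
        new FlinkDistributedContext(ExecutionEnvironment.getExecutionEnvironment)

      // In-core matrix, parallelized into a distributed row matrix (DRM).
      val inCoreA = dense((1, 2), (3, 4), (5, 6))
      val drmA = drmParallelize(inCoreA, numPartitions = 2)

      // Only a logical plan is built here; nothing executes yet.
      val drmAtA = drmA.t %*% drmA

      // collect is an action: it forces the optimizer to run the plan on the
      // backend. Until Flink supports incremental plan execution, intermediate
      // results required by an action can be staged on HDFS and read back, e.g.:
      //   drmAtA.dfsWrite("hdfs:///tmp/AtA")
      //   val drmAtAAgain = drmDfsRead("hdfs:///tmp/AtA")
      val inCoreAtA = drmAtA.collect
      println(inCoreAtA)
    }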


