[ https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14964659#comment-14964659 ]

ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------

Github user dlyubimov commented on the pull request:

    https://github.com/apache/mahout/pull/161#issuecomment-149451874
  
    spark backend tests pass, but in h2o I get
    
    ```
    10-19 23:45:46.002 192.168.11.4:54321    13168  #onsSuite INFO: Cloud of size 1 formed [/192.168.11.4:54321]
    *** RUN ABORTED ***
      java.lang.StackOverflowError:
      at org.apache.mahout.math.drm.DistributedEngine$.org$apache$mahout$math$drm$DistributedEngine$$pass1(DistributedEngine.scala:142)
      at org.apache.mahout.math.drm.DistributedEngine$.org$apache$mahout$math$drm$DistributedEngine$$pass1(DistributedEngine.scala:182)
      at org.apache.mahout.math.drm.DistributedEngine$class.optimizerRewrite(DistributedEngine.scala:44)
    ```
    
    well, the h2o build is known for this; it never builds for me anyway :)
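For context on the stack trace: `pass1` recurses over the logical plan, so its recursion depth tracks the plan's nesting depth. A minimal stand-in (hypothetical `Plan`/`Op` classes, not Mahout's actual ones) reproducing that failure mode:

```scala
// Hypothetical mini plan AST showing how a recursive optimizer pass
// like pass1 can throw StackOverflowError: one stack frame is consumed
// per nesting level, so a deep operator chain exhausts the JVM stack.
sealed trait Plan
case class Leaf(name: String) extends Plan
case class Op(input: Plan) extends Plan // unary operator node

object RewriteDemo {
  // Naive recursive rewrite: recursion depth equals plan depth.
  def pass1(p: Plan): Plan = p match {
    case Op(input) => Op(pass1(input))
    case leaf      => leaf
  }

  // Build a chain of `depth` nested operators iteratively, so
  // construction itself cannot overflow.
  def chain(depth: Int): Plan =
    (1 to depth).foldLeft(Leaf("A"): Plan)((acc, _) => Op(acc))

  def main(args: Array[String]): Unit = {
    pass1(chain(1000)) // shallow plans rewrite fine
    try {
      pass1(chain(1000000)) // deep plans blow the default JVM stack
      println("no overflow")
    } catch {
      case _: StackOverflowError => println("StackOverflowError")
    }
  }
}
```

The usual fixes are trampolining or an explicit work stack in the rewrite, so depth no longer maps onto JVM stack frames.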


> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
>                 Key: MAHOUT-1570
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1570
>             Project: Mahout
>          Issue Type: Improvement
>            Reporter: Till Rohrmann
>            Assignee: Alexey Grigorev
>              Labels: DSL, flink, scala
>             Fix For: 0.11.1
>
>
> With the finalized abstraction of the Mahout DSL plans from the backend 
> operations (MAHOUT-1529), it should be possible to integrate further backends 
> for the Mahout DSL. Apache Flink would be a suitable execution backend. 
> With respect to the implementation, the biggest difference between Spark and 
> Flink at the moment is probably the incremental rollout of plans, which is 
> triggered by Spark's actions and which Flink does not support yet. 
> However, the Flink community is working on this issue. For the moment, it 
> should be possible to circumvent this problem by writing intermediate results 
> required by an action to HDFS and reading them back from there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
