[ https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14966766#comment-14966766 ]
ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------
Github user tillrohrmann commented on the pull request:
https://github.com/apache/mahout/pull/161#issuecomment-149886416
I forgot to mention that the error is:
```
Caused by: java.util.NoSuchElementException: head of empty list
at scala.collection.immutable.Nil$.head(List.scala:337)
at scala.collection.immutable.Nil$.head(List.scala:334)
at org.apache.mahout.flinkbindings.drm.RowsFlinkDrm$$anon$1.mapPartition(FlinkDrm.scala:68)
at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:98)
at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
```
The error also appears in some of the other test cases, so I think it's not a
problem with Flink itself but with the implementation of the Flink bindings.
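For context, `scala.collection.immutable.Nil$.head` is what gets invoked when `.head` is called on an empty `List`, so the operator at FlinkDrm.scala:68 is most likely assuming every partition is non-empty. Below is a minimal sketch of the failure pattern and a guard, assuming (purely for illustration; this is not the actual `RowsFlinkDrm$$anon$1` code) that the operator collects its partition into a `List` and takes its head:
```scala
import org.apache.flink.api.common.functions.MapPartitionFunction
import org.apache.flink.util.Collector

import scala.collection.JavaConverters._

// Illustrative operator: collecting the partition into a List and calling
// .head reproduces the exception as soon as Flink schedules this operator
// on an empty partition.
class HeadOfPartition extends MapPartitionFunction[Int, Int] {
  override def mapPartition(values: java.lang.Iterable[Int], out: Collector[Int]): Unit = {
    val elems = values.asScala.toList
    // elems.head would throw NoSuchElementException("head of empty list")
    // here; headOption turns the empty-partition case into a no-op instead.
    elems.headOption.foreach(e => out.collect(e))
  }
}
```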
> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
> Key: MAHOUT-1570
> URL: https://issues.apache.org/jira/browse/MAHOUT-1570
> Project: Mahout
> Issue Type: Improvement
> Reporter: Till Rohrmann
> Assignee: Alexey Grigorev
> Labels: DSL, flink, scala
> Fix For: 0.11.1
>
>
> With the finalized abstraction of the Mahout DSL plans from the backend
> operations (MAHOUT-1529), it should be possible to integrate further backends
> for the Mahout DSL. Apache Flink would be a good candidate for such an
> execution backend.
> With respect to the implementation, the biggest difference between Spark and
> Flink at the moment is probably the incremental rollout of plans, which Spark
> triggers through its actions and which Flink does not yet support. However,
> the Flink community is working on this issue. For the moment, it should be
> possible to work around this limitation by writing intermediate results
> required by an action to HDFS and reading them back from there.
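A minimal sketch of that HDFS round-trip, using Flink's batch Scala API; the paths, element types, and job names are illustrative, not part of the Mahout bindings:
```scala
import org.apache.flink.api.scala._

object MaterializeViaHdfs {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Some intermediate result that a downstream action depends on.
    val intermediate: DataSet[(Long, Double)] =
      env.fromElements((0L, 1.0), (1L, 2.0)).map(t => (t._1, t._2 * 2))

    // Step 1: write the intermediate result out and execute the plan built so far.
    val path = "hdfs:///tmp/mahout-intermediate" // hypothetical location
    intermediate.writeAsCsv(path)
    env.execute("materialize intermediate result")

    // Step 2: read the materialized data back; operators added from here on
    // extend a fresh plan instead of re-triggering the upstream computation.
    val reread: DataSet[(Long, Double)] = env.readCsvFile[(Long, Double)](path)
    reread.map(_._2).writeAsText("hdfs:///tmp/mahout-result") // hypothetical sink
    env.execute("consume materialized result")
  }
}
```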
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)