Matthias Boehm created SYSTEMML-1734:
----------------------------------------

             Summary: Spark reshape instruction creates incorrect outputs for 
sparse inputs
                 Key: SYSTEMML-1734
                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1734
             Project: SystemML
          Issue Type: Bug
            Reporter: Matthias Boehm


Our Spark reshape instruction creates incorrect partial outputs for sparse inputs, which ultimately leads to incorrect results or failures during the subsequent mergeByKey (failure shown below):

{code}
Caused by: org.apache.sysml.runtime.DMLRuntimeException: Number of non-zeros mismatch on merge disjoint (target=1000x50, nnz target=49994, nnz source=1000)
        at org.apache.sysml.runtime.matrix.data.MatrixBlock.merge(MatrixBlock.java:1686)
        at org.apache.sysml.runtime.instructions.spark.utils.RDDAggregateUtils$MergeBlocksFunction.call(RDDAggregateUtils.java:627)
        at org.apache.sysml.runtime.instructions.spark.utils.RDDAggregateUtils$MergeBlocksFunction.call(RDDAggregateUtils.java:1)
        at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1037)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$5.apply(ExternalSorter.scala:189)
        at org.apache.spark.util.collection.ExternalSorter$$anonfun$5.apply(ExternalSorter.scala:188)
        at org.apache.spark.util.collection.AppendOnlyMap.changeValue(AppendOnlyMap.scala:150)
        at org.apache.spark.util.collection.SizeTrackingAppendOnlyMap.changeValue(SizeTrackingAppendOnlyMap.scala:32)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:194)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
{code}
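
For context, the invariant that the merge violates here is that reshape only rearranges cells, so the total number of non-zeros must be preserved end-to-end; the disjoint merge then expects the nnz counts of the partial blocks to add up to the target's nnz. The following is a simplified illustration of that invariant (plain NumPy, not SystemML code; the shapes mirror the 1000x50 matrix from the stack trace):

```python
import numpy as np

# Build a sparse-ish 1000x50 input with roughly 1% non-zeros.
rng = np.random.default_rng(7)
mask = rng.random((1000, 50)) < 0.01
A = np.where(mask, rng.random((1000, 50)), 0.0)

# A reshape only rearranges cells, so the non-zero count must not change.
B = A.reshape(50, 1000)

nnz_in = np.count_nonzero(A)
nnz_out = np.count_nonzero(B)

# This is the property the (block-wise) merge-disjoint check enforces;
# incorrect partial blocks from the Spark reshape break it.
assert nnz_in == nnz_out
print("nnz preserved:", nnz_in == nnz_out)
```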



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)