Shafaq Siddiqi created SYSTEMML-2541:
----------------------------------------

             Summary: Runtime exception in frame cbind operation in Spark context
                 Key: SYSTEMML-2541
                 URL: https://issues.apache.org/jira/browse/SYSTEMML-2541
             Project: SystemML
          Issue Type: Bug
          Components: Runtime
            Reporter: Shafaq Siddiqi


I am getting the following runtime exception when trying to cbind a column to a frame:

java.lang.RuntimeException: org.apache.sysds.runtime.DMLRuntimeException: Incompatible number of rows for cbind: 98 (expected: 49)
    at org.apache.sysds.runtime.instructions.spark.data.LazyIterableIterator.next(LazyIterableIterator.java:82)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:935)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.sysds.runtime.DMLRuntimeException: Incompatible number of rows for cbind: 98 (expected: 49)
    at org.apache.sysds.runtime.matrix.data.FrameBlock.append(FrameBlock.java:1003)
    at org.apache.sysds.runtime.instructions.spark.FrameAppendMSPInstruction$MapSideAppendPartitionFunction$MapAppendPartitionIterator.computeNext(FrameAppendMSPInstruction.java:120)
    at org.apache.sysds.runtime.instructions.spark.FrameAppendMSPInstruction$MapSideAppendPartitionFunction$MapAppendPartitionIterator.computeNext(FrameAppendMSPInstruction.java:103)
    at org.apache.sysds.runtime.instructions.spark.data.LazyIterableIterator.next(LazyIterableIterator.java:79)
    ... 22 more


The error can be reproduced by executing the following script in Spark context:

X = read($X, data_type="frame", format="csv");
d = as.frame(matrix(0, nrow(X), 1));
f = cbind(X, d);
print("this is f" + toString(f));
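For reference, a minimal sketch of how such a script can be driven in Spark context through the MLContext API, using the org.apache.sysds package prefix that appears in the trace; the script file name, input CSV path, and session setup below are illustrative assumptions, not part of the report:

import org.apache.spark.sql.SparkSession
import org.apache.sysds.api.mlcontext.MLContext
import org.apache.sysds.api.mlcontext.ScriptFactory.dmlFromFile

object CbindRepro {
  def main(args: Array[String]): Unit = {
    // Hypothetical Spark session; running with a Spark master (e.g. via
    // spark-submit) is what puts the cbind on the Spark code path above.
    val spark = SparkSession.builder().appName("cbind-repro").getOrCreate()
    val ml = new MLContext(spark)
    // cbind_repro.dml is assumed to hold the four-line script above;
    // $X is bound to a placeholder CSV path.
    val script = dmlFromFile("cbind_repro.dml").in("$X", "data/X.csv")
    ml.execute(script)
    spark.stop()
  }
}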



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
