[ 
https://issues.apache.org/jira/browse/SPARK-26940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuja Jakhade updated SPARK-26940:
----------------------------------
    Description: 
I have built Apache Spark v2.3.2 on Big Endian with AdoptJDK OpenJ9 1.8.0_202.

My build is successful. However, while running the Scala tests of the "*Spark 
Project REPL*" module, I am facing failures in SingletonReplSuite, with the 
error log attached below.

The deviation observed on big endian is greater than the acceptable deviation 
of 0.2.

Would it be reasonable to increase the deviation threshold defined in 
SingletonReplSuite.scala?

Can this be fixed? 
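For context, the failing assertion compares the cached size of the RDD before and after repartitioning. Judging from the assertion message in the log, the check appears to have the form |size1 - size2| / size1 < 0.2 (an inference from the message, not a quote of the suite's source). Plugging in the logged sizes reproduces the reported deviation, sketched here in Python for brevity:

```python
# Reproduce the deviation arithmetic behind the failing assertion.
# The formula |size1 - size2| / size1 is an assumption inferred from the
# log message; the exact expression lives in SingletonReplSuite.scala.
cache_size_1 = 24608  # cached size before repartition (from the log)
cache_size_2 = 17768  # cached size after repartition (from the log)

deviation = abs(cache_size_1 - cache_size_2) / cache_size_1
print(deviation)        # 0.2779583875162549, matching the logged value
print(deviation < 0.2)  # False -> the assertion fails
```

On little endian the two sizes evidently stay within 20% of each other; on big endian with OpenJ9 the serialized cache sizes differ by ~28%, tripping the threshold.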

 

  was:
I have built Apache Spark v2.3.2 on Big Endian with AdoptJDK OpenJ9 1.8.0_202.

My build is successful. However, while running the Scala tests of the "*Spark 
Project REPL*" module, I am facing failures in SingletonReplSuite, with the 
error log below:

 
 - should clone and clean line object in ClosureCleaner *** FAILED ***
   isContain was true Interpreter output contained 'AssertionError':

scala> import org.apache.spark.rdd.RDD

scala> lines: org.apache.spark.rdd.RDD[String] = pom.xml MapPartitionsRDD[46] at textFile at <console>:40

scala> defined class Data

scala> dataRDD: org.apache.spark.rdd.RDD[Data] = MapPartitionsRDD[47] at map at <console>:43

scala> res28: Long = 180

scala> repartitioned: org.apache.spark.rdd.RDD[Data] = MapPartitionsRDD[51] at repartition at <console>:41

scala> res29: Long = 180

scala> getCacheSize: (rdd: org.apache.spark.rdd.RDD[_])Long

scala> cacheSize1: Long = 24608

scala> cacheSize2: Long = 17768

scala> deviation: Double = 0.2779583875162549

scala> java.lang.AssertionError: assertion failed: deviation too large: 0.2779583875162549, first size: 24608, second size: 17768
  at scala.Predef$.assert(Predef.scala:170)
  ... 46 elided

scala> _result_1550641172995: Int = 1

scala> (SingletonReplSuite.scala:121)

 

The deviation observed on big endian is greater than the acceptable deviation 
of 0.2.

Would it be reasonable to increase the deviation threshold defined in 
SingletonReplSuite.scala?

Can this be fixed? 

 

        Summary: Observed greater deviation on big endian for 
SingletonReplSuite test case  (was: Observed greater deviation Big Endian for 
SingletonReplSuite test case)

> Observed greater deviation on big endian for SingletonReplSuite test case
> -------------------------------------------------------------------------
>
>                 Key: SPARK-26940
>                 URL: https://issues.apache.org/jira/browse/SPARK-26940
>             Project: Spark
>          Issue Type: Test
>          Components: Tests
>    Affects Versions: 2.3.2
>         Environment: Ubuntu 16.04 LTS
> openjdk version "1.8.0_202"
> OpenJDK Runtime Environment (build 1.8.0_202-b08)
> Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Linux (JIT enabled, AOT 
> enabled)
> OpenJ9 - 90dd8cb40
> OMR - d2f4534b
> JCL - d002501a90 based on jdk8u202-b08)
>            Reporter: Anuja Jakhade
>            Priority: Major
>
> I have built Apache Spark v2.3.2 on Big Endian with AdoptJDK OpenJ9 1.8.0_202.
> My build is successful. However, while running the Scala tests of the "*Spark 
> Project REPL*" module, I am facing failures in SingletonReplSuite, with the 
> error log attached below.
> The deviation observed on big endian is greater than the acceptable deviation 
> of 0.2.
> Would it be reasonable to increase the deviation threshold defined in 
> SingletonReplSuite.scala?
> Can this be fixed? 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
