liuzqt commented on code in PR #38704:
URL: https://github.com/apache/spark/pull/38704#discussion_r1026966457


##########
sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala:
##########
@@ -2251,7 +2251,11 @@ class DatasetLargeResultCollectingSuite extends QueryTest
   with SharedSparkSession {
 
   override protected def sparkConf: SparkConf = super.sparkConf.set(MAX_RESULT_SIZE.key, "4g")
-  test("collect data with single partition larger than 2GB bytes array limit") {
+  // SPARK-41193: Ignore this suite because it cannot run successfully with Spark
+  // default Java Options, if user need do local test, please make the following changes:
+  // - Maven test: change `-Xmx4g` of `scalatest-maven-plugin` in `sql/core/pom.xml` to `-Xmx10g`
+  // - SBT test: change `-Xmx4g` of `Test / javaOptions` in `SparkBuild.scala` to `-Xmx10g`
+  ignore("collect data with single partition larger than 2GB bytes array limit") {

Review Comment:
   Yes, @LuciferYang is right: you need to change `-Xmx4g` to `-Xmx10g` to make it work (with that change the test passes for both the shared local session and the local cluster; without it, neither works).
   
   Thanks for the fix! Previously I had only tested this from the IDE, and I guess it raised the heap size under the hood... Sorry for the inconvenience.
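   
   For anyone running this locally, here is a minimal sketch of what the SBT-side change amounts to (assuming the usual `Test / javaOptions` mechanism for sizing the forked test JVM; the exact setting in `SparkBuild.scala` may be structured differently):
   
   ```scala
   // Hypothetical build.sbt-style excerpt: the forked test JVM's heap is
   // controlled via `Test / javaOptions`, so this suite only fits if the
   // maximum heap is raised from 4g to 10g.
   Test / javaOptions ++= Seq(
     "-Xmx10g"  // was "-Xmx4g"; needed to collect a single partition larger than 2GB
   )
   ```
   
   The Maven path is analogous: the `-Xmx4g` flag that `scalatest-maven-plugin` passes to the test JVM in `sql/core/pom.xml` has to be bumped to `-Xmx10g` as well.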



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
