liuzqt commented on code in PR #38704:
URL: https://github.com/apache/spark/pull/38704#discussion_r1027614714
##########
sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala:
##########
@@ -2251,7 +2251,11 @@ class DatasetLargeResultCollectingSuite extends QueryTest
with SharedSparkSession {
override protected def sparkConf: SparkConf =
super.sparkConf.set(MAX_RESULT_SIZE.key, "4g")
-  test("collect data with single partition larger than 2GB bytes array limit") {
+  // SPARK-41193: Ignore this suite because it cannot run successfully with Spark's
+  // default Java options. To run it locally, make one of the following changes:
+  // - Maven test: change `-Xmx4g` of `scalatest-maven-plugin` in `sql/core/pom.xml` to `-Xmx10g`
+  // - SBT test: change `-Xmx4g` of `Test / javaOptions` in `SparkBuild.scala` to `-Xmx10g`
+  ignore("collect data with single partition larger than 2GB bytes array limit") {
Review Comment:
   I think we can leave it as `ignore` for now, with the comments explaining
how to use a larger heap to make it work locally. I'm not sure whether we can
configure the JVM build args for a specific test suite.
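
   For reference, the SBT-side change described in the code comment would look
roughly like the sketch below. This is a hedged illustration using standard sbt
setting keys, not the actual contents of Spark's `SparkBuild.scala`, which may
organize its test JVM options differently:

   ```scala
   // Hypothetical sketch: bump the forked test JVM heap from 4g to 10g so the
   // >2GB single-partition collect suite can allocate its result array.
   // Key names follow common sbt conventions; Spark's real build may differ.
   Test / fork := true
   Test / javaOptions ++= Seq(
     "-Xmx10g"  // was -Xmx4g; the large-result suite needs a bigger heap
   )
   ```

   Whether sbt can scope `javaOptions` to a single suite (rather than the whole
`Test` configuration) is exactly the open question above; forked test groups are
the usual mechanism, but that would be a larger build change.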
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]