This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
     new 9ddda5a  [SPARK-34832][SQL][TEST] Set EXECUTOR_ALLOW_SPARK_CONTEXT to true to ensure ExternalAppendOnlyUnsafeRowArrayBenchmark run successfully
9ddda5a is described below

commit 9ddda5a0d3d2cb45e901e184e3d2e4519e489729
Author: yangjie01 <yangji...@baidu.com>
AuthorDate: Wed Mar 24 14:59:31 2021 +0900

    [SPARK-34832][SQL][TEST] Set EXECUTOR_ALLOW_SPARK_CONTEXT to true to ensure ExternalAppendOnlyUnsafeRowArrayBenchmark run successfully
    
    ### What changes were proposed in this pull request?
    SPARK-32160 added a config (`EXECUTOR_ALLOW_SPARK_CONTEXT`) that switches whether creating a `SparkContext` in executors is allowed; the config defaults to `false`.
    
    `ExternalAppendOnlyUnsafeRowArrayBenchmark` fails to run when `EXECUTOR_ALLOW_SPARK_CONTEXT` uses the default value, because the `ExternalAppendOnlyUnsafeRowArrayBenchmark#withFakeTaskContext` method tries to create a `SparkContext` manually on the executor side.
    
    So the main change of this PR is to set `EXECUTOR_ALLOW_SPARK_CONTEXT` to `true` to ensure `ExternalAppendOnlyUnsafeRowArrayBenchmark` runs successfully.
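    For context, the fix amounts to one extra entry in the benchmark's shared `SparkConf`. A minimal sketch (assuming the string form of the key, `spark.executor.allowSparkContext`, which `config.EXECUTOR_ALLOW_SPARK_CONTEXT` resolves to):

    ```scala
    import org.apache.spark.SparkConf

    // Standalone sketch of the setting, expressed via its string key rather than
    // the internal config constant used in the actual diff below.
    val conf = new SparkConf(false)
      // ... existing benchmark settings ...
      .set("spark.executor.allowSparkContext", "true")
    ```

    With this entry present, the `SparkContext` created inside `withFakeTaskContext` is no longer rejected by the executor-side assertion.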
    
    ### Why are the changes needed?
    Bug fix.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    Manual test:
    ```
    bin/spark-submit --class org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark --jars spark-core_2.12-3.2.0-SNAPSHOT-tests.jar spark-sql_2.12-3.2.0-SNAPSHOT-tests.jar
    ```
    
    **Before**
    ```
    Exception in thread "main" java.lang.IllegalStateException: SparkContext should only be created and accessed on the driver.
        at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$assertOnDriver(SparkContext.scala:2679)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:137)
        at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.withFakeTaskContext(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:52)
        at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.testAgainstRawArrayBuffer(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:119)
        at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.$anonfun$runBenchmarkSuite$1(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:189)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.benchmark.BenchmarkBase.runBenchmark(BenchmarkBase.scala:40)
        at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark$.runBenchmarkSuite(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala:186)
        at org.apache.spark.benchmark.BenchmarkBase.main(BenchmarkBase.scala:58)
        at org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArrayBenchmark.main(ExternalAppendOnlyUnsafeRowArrayBenchmark.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    ```
    
    **After**
    
    `ExternalAppendOnlyUnsafeRowArrayBenchmark` runs successfully.
    
    Closes #31939 from LuciferYang/SPARK-34832.
    
    Authored-by: yangjie01 <yangji...@baidu.com>
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
    (cherry picked from commit 712a62ca8259539a76f45d9a54ccac8857b12a81)
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
---
 .../sql/execution/ExternalAppendOnlyUnsafeRowArrayBenchmark.scala      | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArrayBenchmark.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArrayBenchmark.scala
index 0869e25..8962e92 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArrayBenchmark.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArrayBenchmark.scala
@@ -47,6 +47,9 @@ object ExternalAppendOnlyUnsafeRowArrayBenchmark extends BenchmarkBase {
     // for a bug we had with bytes written past the last object in a batch (SPARK-2792)
     .set(config.SERIALIZER_OBJECT_STREAM_RESET, 1)
     .set(config.SERIALIZER, "org.apache.spark.serializer.JavaSerializer")
+    // SPARK-34832: Add this configuration to allow `withFakeTaskContext` method
+    // to create `SparkContext` on the executor side.
+    .set(config.EXECUTOR_ALLOW_SPARK_CONTEXT, true)
 
   private def withFakeTaskContext(f: => Unit): Unit = {
     val sc = new SparkContext("local", "test", conf)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
