HyukjinKwon commented on code in PR #38130:
URL: https://github.com/apache/spark/pull/38130#discussion_r995295713


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/python/AttachDistributedSequenceExec.scala:
##########
@@ -40,15 +42,45 @@ case class AttachDistributedSequenceExec(
 
   override def outputPartitioning: Partitioning = child.outputPartitioning
 
+  @transient private var cached: RDD[InternalRow] = _
+
   override protected def doExecute(): RDD[InternalRow] = {
-    val childRDD = child.execute().map(_.copy())
-    val checkpointed = if (childRDD.getNumPartitions > 1) {
-      // to avoid execute multiple jobs. zipWithIndex launches a Spark job.
-      childRDD.localCheckpoint()
+    val childRDD = child.execute()
+    // Before `compute.distributed_sequence_index_storage_level` is explicitly set via
+    // `ps.set_option`, `SQLConf.get` cannot get its value (nor its default value);
+    // after `ps.set_option`, `SQLConf.get` can get its value:
+    //
+    //    In [1]: import pyspark.pandas as ps
+    //    In [2]: ps.get_option("compute.distributed_sequence_index_storage_level")
+    //    Out[2]: 'MEMORY_AND_DISK_SER'
+    //    In [3]: spark.conf.get("pandas_on_Spark.compute.distributed_sequence_index_storage_level")
+    //    ...
+    //    Py4JJavaError: An error occurred while calling o40.get.
+    //      : java.util.NoSuchElementException: pandas_on_Spark.compute.distributed_sequence_...
+    //    at org.apache.spark.sql.errors.QueryExecutionErrors$.noSuchElementExceptionError...
+    //    at org.apache.spark.sql.internal.SQLConf.$anonfun$getConfString$3(SQLConf.scala:4766)
+    //    ...
+    //    In [4]: ps.set_option("compute.distributed_sequence_index_storage_level", "NONE")
+    //    In [5]: spark.conf.get("pandas_on_Spark.compute.distributed_sequence_index_storage_level")
+    //    Out[5]: '"NONE"'
+    //    In [6]: ps.set_option("compute.distributed_sequence_index_storage_level", "DISK_ONLY")
+    //    In [7]: spark.conf.get("pandas_on_Spark.compute.distributed_sequence_index_storage_level")
+    //    Out[7]: '"DISK_ONLY"'
+    //    Out[7]: '"DISK_ONLY"'
+    val storageLevel = StorageLevel.fromString(
+      SQLConf.get.getConfString(
+        "pandas_on_Spark.compute.distributed_sequence_index_storage_level",
+        "MEMORY_AND_DISK_SER"
+      ).replaceAll("\"", "")

Review Comment:
   This is quoted because we serialize/deserialize configuration values in JSON
   for pandas API on Spark, so the stored string carries its surrounding quotes.
   I believe it's safe to do `stripPrefix("\"").stripSuffix("\"")` instead, and I
   would recommend adding a comment explaining that too.
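   
   For illustration, a minimal Scala sketch (the `raw` value below is hypothetical,
   not taken from the PR) of why stripping only the outer quotes is safer than
   `replaceAll("\"", "")`, which would also drop any quote embedded in the value:
   
       import org.apache.spark.storage.StorageLevel
   
       // Sketch only: `raw` mimics a JSON-serialized string option as pandas API
       // on Spark stores it, i.e. wrapped in outer double quotes.
       val raw = "\"MEMORY_AND_DISK_SER\""
   
       // Remove just the surrounding quotes; interior characters are untouched.
       val unquoted = raw.stripPrefix("\"").stripSuffix("\"")
       assert(unquoted == "MEMORY_AND_DISK_SER")
   
       // The unquoted name then resolves to a storage level as usual.
       val level = StorageLevel.fromString(unquoted)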
   


