HyukjinKwon commented on code in PR #36683:
URL: https://github.com/apache/spark/pull/36683#discussion_r883340077


##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -613,16 +613,16 @@ def _create_from_pandas_with_arrow(
 
         @no_type_check
         def reader_func(temp_filename):
-            return self._jvm.PythonSQLUtils.readArrowStreamFromFile(jsparkSession, temp_filename)
+            return self._jvm.PythonSQLUtils.readArrowStreamFromFile(temp_filename)
 
         @no_type_check
         def create_RDD_server():
-            return self._jvm.ArrowRDDServer(jsparkSession)
+            return self._jvm.ArrowIteratorServer()
 
         # Create Spark DataFrame from Arrow stream file, using one batch per partition
-        jrdd = self._sc._serialize_to_jvm(arrow_data, ser, reader_func, create_RDD_server)
+        jiter = self._sc._serialize_to_jvm(arrow_data, ser, reader_func, create_RDD_server)

Review Comment:
   > If IO encryption is enabled then this codepath will use the `PythonParallelizeServer` codepath which will still produce a `JavaRDD[Array[Byte]]`.
   
   Actually, I don't think that's the case: `create_RDD_server` is what gets called here, so `ArrowIteratorServer` will be used. `PythonParallelizeServer` is only used for other Python RDDs. Maybe I should fix the docs in `_serialize_to_jvm`.
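   
   To illustrate the dispatch being discussed, here is a simplified, self-contained sketch (the control flow and helper names are assumptions based on this thread, not the actual `SparkContext._serialize_to_jvm` source): with IO encryption the data is streamed through a per-call server object supplied by the caller, and without it the data is spilled to a temp file whose name is handed to `reader_func`.
   
   ```python
   import os
   import tempfile
   
   def serialize_to_jvm(data, serialize, reader_func, create_server, encrypted):
       """Hypothetical sketch: route serialized data to the 'JVM' either
       through a caller-supplied server (encrypted path) or a temp file
       (unencrypted path)."""
       if encrypted:
           # The caller picks the server; for Arrow data the review suggests
           # this would be ArrowIteratorServer, not PythonParallelizeServer.
           server = create_server()
           return server.receive(serialize(data))
       fd, temp_filename = tempfile.mkstemp()
       try:
           with os.fdopen(fd, "wb") as f:
               f.write(serialize(data))
           # JVM-side reader consumes the file, e.g. readArrowStreamFromFile.
           return reader_func(temp_filename)
       finally:
           os.remove(temp_filename)
   
   # Stand-ins for the JVM pieces, purely for the sketch:
   class FakeArrowIteratorServer:
       def receive(self, payload):
           return ("server", payload)
   
   def fake_reader(temp_filename):
       with open(temp_filename, "rb") as f:
           return ("file", f.read())
   
   ser = lambda d: d.encode()
   assert serialize_to_jvm("abc", ser, fake_reader, FakeArrowIteratorServer, False) == ("file", b"abc")
   assert serialize_to_jvm("abc", ser, fake_reader, FakeArrowIteratorServer, True) == ("server", b"abc")
   ```
   
   The point of the sketch is that the encrypted branch uses whatever `create_server` the caller passed in, which is why the Arrow codepath here would not fall back to the generic parallelize server.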
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

