AngersZhuuuu commented on a change in pull request #34732:
URL: https://github.com/apache/spark/pull/34732#discussion_r758896993



##########
File path: python/pyspark/sql/session.py
##########
@@ -305,10 +305,9 @@ def __init__(
             ):
                jsparkSession = self._jvm.SparkSession.getDefaultSession().get()
             else:
-                jsparkSession = self._jvm.SparkSession(self._jsc.sc())
-                if options is not None:
-                    for key, value in options.items():
-                        jsparkSession.sharedState().conf().set(key, value)
+                jsparkSession = self._jvm.SparkSession(
+                    self._jsc.sc(), self._jvm.PythonUtils.toScalaMap(options)

Review comment:
       > Looks good, but one last question: is this `toScalaMap` required? We could make the Scala side take a Java map.
   
   It seems worthwhile to use a Java map, since it saves one py4j call. Although a Java map here would look inconsistent with the rest of the Scala-side API.
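   To make the trade-off concrete, here is a hedged sketch of why accepting a Java map saves a round trip. The `FakeJvm` class below is purely illustrative (it is not part of PySpark); it stands in for the py4j gateway and just counts simulated JVM calls. The names `SparkSession` and `toScalaMap` mirror the diff above; the assumption is that PySpark's py4j gateway runs with `auto_convert=True`, so a Python dict passed directly to a JVM method that accepts `java.util.Map` is converted within the same call.
   
   ```python
   # Hypothetical stand-in for the py4j bridge, counting round trips.
   class FakeJvm:
       def __init__(self):
           self.round_trips = 0
   
       def toScalaMap(self, options):
           # Stand-in for self._jvm.PythonUtils.toScalaMap: one extra JVM call.
           self.round_trips += 1
           return dict(options)
   
       def SparkSession(self, sc, opts):
           # Stand-in for the JVM-side SparkSession constructor.
           self.round_trips += 1
           return ("session", sc, dict(opts))
   
   jvm = FakeJvm()
   
   # (a) The patch as written: explicit toScalaMap, then the constructor
   #     -- two round trips.
   session_a = jvm.SparkSession("sc", jvm.toScalaMap({"k": "v"}))
   calls_with_scala_map = jvm.round_trips
   
   # (b) The reviewer's suggestion: the Scala side accepts a java.util.Map,
   #     and py4j's auto-conversion folds the dict into the same call
   #     -- one round trip.
   jvm.round_trips = 0
   session_b = jvm.SparkSession("sc", {"k": "v"})
   calls_with_java_map = jvm.round_trips
   ```
   
   Under this model, approach (a) costs two py4j round trips against one for approach (b); whether the saved call outweighs the stylistic inconsistency of a Java map in a Scala constructor is the question raised above.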




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


