Kimahriman commented on code in PR #50006:
URL: https://github.com/apache/spark/pull/50006#discussion_r1962536597
##########
python/pyspark/sql/connect/session.py:
##########
@@ -1041,11 +1045,17 @@ def _start_connect_server(master: str, opts: Dict[str, Any]) -> None:
init_opts.update(opts)
opts = init_opts
+ token = str(uuid.uuid4())
+
# Configurations to be overwritten
overwrite_conf = opts
overwrite_conf["spark.master"] = master
overwrite_conf["spark.local.connect"] = "1"
+ # When running a local server, always use an ephemeral port
+ overwrite_conf["spark.connect.grpc.binding.port"] = "0"
+ overwrite_conf["spark.connect.authenticate.token"] = token
os.environ["SPARK_LOCAL_CONNECT"] = "1"
+ os.environ["SPARK_CONNECT_AUTHENTICATE_TOKEN"] = token
Review Comment:
Isn't that what `del os.environ["SPARK_CONNECT_AUTHENTICATE_TOKEN"]` is
doing in `SparkSession.stop`?
I did hit issues with tests and incorrect tokens, which is why I had to
manually set the token for the foreach batch worker and query listener
processes that are started. The env vars were stomping on each other. I
don't think it's from the token not clearing after stopping, but from
thread-based parallelism sharing the same environment vars. I need to go
through and remind myself where, if anywhere, the env vars are actually
still needed with how things are set up. I think there were a few unit
tests that try to create a second session to the same remote server, which
I don't know is really even a real use case
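
As a quick illustration of the stomping problem (a minimal sketch, not Spark code: the session-start function and token values are stand-ins, only the env var name comes from the diff), `os.environ` is process-global, so two threads that each set the token before reading it back will both observe whichever write landed last:

```python
import os
import threading

observed = {}

def start_session(name: str, token: str, barrier: threading.Barrier) -> None:
    # Each "session" writes its own token to the shared environment...
    os.environ["SPARK_CONNECT_AUTHENTICATE_TOKEN"] = token
    barrier.wait()  # ensure both writes happen before either read
    # ...but reads back whatever value is currently in the environment,
    # which may belong to the other thread.
    observed[name] = os.environ["SPARK_CONNECT_AUTHENTICATE_TOKEN"]

barrier = threading.Barrier(2)
t1 = threading.Thread(target=start_session, args=("a", "token-a", barrier))
t2 = threading.Thread(target=start_session, args=("b", "token-b", barrier))
t1.start(); t2.start()
t1.join(); t2.join()

# Both threads see the same (last-written) token, so at least one of them
# ends up holding a token it did not set.
print(observed)
```

This is why passing the token explicitly to the spawned worker processes, rather than relying on the inherited env var, avoids the races between thread-parallel tests.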
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]