HyukjinKwon commented on code in PR #54258:
URL: https://github.com/apache/spark/pull/54258#discussion_r2801374018
##########
python/pyspark/worker_util.py:
##########
@@ -155,39 +155,42 @@ def setup_spark_files(infile: IO) -> None:
def setup_broadcasts(infile: IO) -> None:
"""
Set up broadcasted variables.
+ {
+ "conn_info": int | str | None,
+ "auth_secret": str | None,
+ "broadcast_variables": [
+ {
+ "bid": int,
+ "path": str | None,
+ }
+ ]
+ }
"""
if not is_remote_only():
from pyspark.core.broadcast import Broadcast, _broadcastRegistry
- # fetch names and values of broadcast variables
- needs_broadcast_decryption_server = read_bool(infile)
- num_broadcast_variables = read_int(infile)
- if needs_broadcast_decryption_server:
+ data = json.loads(utf8_deserializer.loads(infile))
Review Comment:
If that's the case, I am not super supportive of this change. It could
impact jobs like Structured Streaming (with micro-batches) or ML jobs that
disable `spark.python.worker.reuse` (which happens often in practice as a
workaround for problems caused by long-living daemon workers). Considering
the overhead versus the benefit, I would prefer to just leave it as is.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]