sryza commented on code in PR #53020:
URL: https://github.com/apache/spark/pull/53020#discussion_r2519895195


##########
python/pyspark/sql/connect/session.py:
##########
@@ -920,6 +895,60 @@ def clearTags(self) -> None:
 
     clearTags.__doc__ = PySparkSession.clearTags.__doc__
 
+    def addThreadlocalUserContextExtension(self, extension: any_pb2.Any) -> str:

Review Comment:
   If I understand correctly, this is adding new public APIs. Do we actually need them? In general, it's better to limit public APIs to what we strictly need, as we'll be stuck supporting them forever.
   
   If I understand correctly, the purpose of this change is to enable setting 
the `PipelineAnalysisContext` on `UserContext` inside pipelines. Do we need to 
expose an API on `SparkSession` to support that? Can we just access the 
underlying methods on the Connect client directly?
   
   cc @SCHJonathan 
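
   To illustrate the alternative being suggested, here's a minimal sketch of pipeline-internal code reaching the underlying Connect client directly instead of going through a new public `SparkSession` method. The classes and the method name `add_threadlocal_user_context_extension` below are illustrative stand-ins, not the actual pyspark Connect API:

   ```python
   # Illustrative stand-ins only -- not the real pyspark Connect classes.
   # The point: pipeline-internal code can call the client directly, so no
   # new public SparkSession method is required.

   class FakeConnectClient:
       """Stand-in for the Spark Connect client held by the session."""

       def __init__(self):
           self._threadlocal_extensions = {}
           self._next_id = 0

       def add_threadlocal_user_context_extension(self, extension) -> str:
           # Hypothetical internal method: register a UserContext extension
           # and return a handle the caller can later use to remove it.
           handle = f"ext-{self._next_id}"
           self._next_id += 1
           self._threadlocal_extensions[handle] = extension
           return handle


   class FakeSparkSession:
       """Stand-in for the Connect SparkSession: holds a private client."""

       def __init__(self):
           self._client = FakeConnectClient()


   # Pipeline-internal code accesses the client via the session's private
   # attribute rather than a widened public API surface.
   session = FakeSparkSession()
   handle = session._client.add_threadlocal_user_context_extension(
       {"type_url": "type.googleapis.com/spark.PipelineAnalysisContext"}
   )
   print(handle)  # -> "ext-0"
   ```

   The trade-off: private-attribute access is less discoverable and not a stability guarantee, but for code that lives inside the pipelines module itself that is arguably acceptable, and it keeps the permanent public surface small.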



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

