zero323 commented on a change in pull request #29122:
URL: https://github.com/apache/spark/pull/29122#discussion_r456033843



##########
File path: python/pyspark/sql/functions.py
##########
@@ -2392,7 +2393,7 @@ def json_tuple(col, *fields):
 
 
 @since(2.1)
-def from_json(col, schema, options={}):
+def from_json(col, schema, options: Dict = None):

Review comment:
      > Furthermore, it is easier to keep the code and annotations in sync. Curious what Maciej thinks.
   
   That's the topic for a long discussion :) Overall, I made some points 
[here](http://apache-spark-developers-list.1001551.n3.nabble.com/Re-PySpark-Revisiting-PySpark-type-annotations-td26232.html)
 and 
[here](http://apache-spark-developers-list.1001551.n3.nabble.com/PYTHON-PySpark-typing-hints-td21560.html)
 ‒ I think these are still valid.
   
   Now... I am still open to merging `pyspark-stubs` into the main 
project. However, having some annotations in the code base, without the will 
to maintain them (and maintaining useful annotations is honestly a pain) and 
bring them to high coverage any time soon, may make things either wasted 
effort (if they are not used at all, e.g. if the package is not PEP 561 
compliant) or maybe even harmful. That's a topic for a longer discussion though...
   
   In general, I'd bring the discussion to the dev list and/or JIRA first.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
