xinrong-databricks commented on a change in pull request #34185:
URL: https://github.com/apache/spark/pull/34185#discussion_r730042321
##########
File path: python/pyspark/sql/observation.py
##########
@@ -102,13 +102,14 @@ def _on(self, df: DataFrame, *exprs: Column) -> DataFrame:
assert all(isinstance(c, Column) for c in exprs), "all exprs should be Column"
assert self._jo is None, "an Observation can be used with a DataFrame only once"
- self._jvm = df._sc._jvm  # type: ignore[assignment, attr-defined]
+ self._jvm = df._sc._jvm  # type: ignore[assignment, attr-defined, has-type]
Review comment:
May I ask why `has-type` is needed here?
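For context, mypy's `has-type` code covers "Cannot determine type of X" errors, which typically appear when an unannotated attribute is read before inference has settled on its type (for instance, a reference cycle between inferred attributes). A minimal, hypothetical sketch of the kind of pattern that triggers it (names are illustrative, not from this PR):

```python
class Problem:
    def set_x(self) -> None:
        # mypy may report: Cannot determine type of "y"  [has-type]
        # (neither attribute is annotated, and each is inferred
        # from the other, so inference cannot settle on a type)
        self.x = self.y

    def set_y(self) -> None:
        self.y = self.x


# The code itself runs; the error is purely a static-analysis one.
p = Problem()
print(type(p).__name__)  # Problem
```

Adding `has-type` to the ignore list presumably silences that inference error at this line without suppressing the others already listed.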
##########
File path: python/pyspark/sql/context.py
##########
@@ -268,7 +320,66 @@ def _inferSchema(self, rdd, samplingRatio=None):
"""
return self.sparkSession._inferSchema(rdd, samplingRatio)
- def createDataFrame(self, data, schema=None, samplingRatio=None, verifySchema=True):
+ @overload
+ def createDataFrame(
+ self,
+ data: Union["RDD[RowLike]", "Iterable[RowLike]"],
Review comment:
Will `Iterable["RowLike"]` work?
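For what it's worth, PEP 484 allows forward references as string literals anywhere a type is expected, including inside a subscript, so quoting only the inner name should behave the same as quoting the whole expression for type checkers. A small standalone sketch (the names `takes_rows` and `RowLike` here are hypothetical, not from the PR):

```python
from typing import Iterable, get_type_hints


class RowLike:
    pass


# Forward reference quoted only on the inner type, as in the question:
def takes_rows(data: Iterable["RowLike"]) -> int:
    return sum(1 for _ in data)


# At runtime the string is stored as a ForwardRef; get_type_hints()
# resolves it against the module globals, so the two spellings end
# up equivalent.
assert get_type_hints(takes_rows)["data"] == Iterable[RowLike]
print(takes_rows([RowLike(), RowLike()]))  # 2
```

The main practical difference is whether the name must already be importable at annotation-evaluation time; as a string it can be resolved lazily.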
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]