tomvanbussel commented on code in PR #38979:
URL: https://github.com/apache/spark/pull/38979#discussion_r1043433892
##########
python/pyspark/sql/connect/session.py:
##########
@@ -264,9 +294,67 @@ def createDataFrame(self, data: "pd.DataFrame") -> "DataFrame":
"""
assert data is not None
- if len(data) == 0:
+ if isinstance(data, DataFrame):
+ raise TypeError("data is already a DataFrame")
+ if isinstance(data, Sized) and len(data) == 0:
raise ValueError("Input data cannot be empty")
- return DataFrame.withPlan(plan.LocalRelation(data), self)
+
+ struct: Optional[StructType] = None
+ column_names: List[str] = []
+
+ if isinstance(schema, StructType):
+ struct = schema
+ column_names = struct.names
+
+ elif isinstance(schema, str):
+ struct = _parse_datatype_string(schema) # type: ignore[assignment]
Review Comment:
I'm not sure if this can be used here, as `_parse_datatype_string`
internally calls into the JVM. I think we have to add a field to the
`LocalRelation` message to store the schema string instead, so that the driver
can parse it.
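The suggestion can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not the actual Connect protobuf or plan classes: `LocalRelation` and `create_local_relation` below are stand-ins showing how the client would carry the raw DDL schema string to the driver instead of parsing it locally with `_parse_datatype_string` (which needs the JVM).

```python
from dataclasses import dataclass
from typing import Any, List, Optional


@dataclass
class LocalRelation:
    # Hypothetical stand-in for the Connect LocalRelation plan message,
    # extended with a field for the unparsed schema string.
    data: List[Any]
    # Parsed on the driver (which has a JVM), never on the client.
    schema_string: Optional[str] = None


def create_local_relation(
    data: List[Any], schema: Optional[str] = None
) -> LocalRelation:
    # The client does not call _parse_datatype_string; it forwards the
    # DDL string unmodified so the driver can parse it server-side.
    return LocalRelation(data=data, schema_string=schema)


rel = create_local_relation([(1, "a")], "id INT, name STRING")
```

Here `rel.schema_string` still holds the raw `"id INT, name STRING"` DDL text, deferring all type parsing to the server.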
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]