zhengruifeng commented on code in PR #38979:
URL: https://github.com/apache/spark/pull/38979#discussion_r1043185541


##########
python/pyspark/sql/connect/session.py:
##########
@@ -264,9 +294,67 @@ def createDataFrame(self, data: "pd.DataFrame") -> "DataFrame":
 
         """
         assert data is not None
-        if len(data) == 0:
+        if isinstance(data, DataFrame):
+            raise TypeError("data is already a DataFrame")
+        if isinstance(data, Sized) and len(data) == 0:
             raise ValueError("Input data cannot be empty")
-        return DataFrame.withPlan(plan.LocalRelation(data), self)
+
+        struct: Optional[StructType] = None
+        column_names: List[str] = []
+
+        if isinstance(schema, StructType):
+            struct = schema
+            column_names = struct.names
+
+        elif isinstance(schema, str):
+            struct = _parse_datatype_string(schema)  # type: ignore[assignment]
+            assert isinstance(struct, StructType)
+            column_names = struct.names
+
+        elif isinstance(schema, (list, tuple)):
+        # Must re-encode any unicode strings to be consistent with StructField names
+        column_names = [x.encode("utf-8") if not isinstance(x, str) else x for x in schema]
+
+        # Create the Pandas DataFrame
+        if isinstance(data, pd.DataFrame):
+            pdf = data
+
+        elif isinstance(data, np.ndarray):
+            # `data` of numpy.ndarray type will be converted to a pandas DataFrame,
+            if data.ndim not in [1, 2]:
+                raise ValueError("NumPy array input should be of 1 or 2 dimensions.")
+
+            pdf = pd.DataFrame(data)
+
+            if len(column_names) == 0:
+                if data.ndim == 1 or data.shape[1] == 1:
+                    column_names = ["value"]
+                else:
+                    column_names = ["_%s" % i for i in range(1, data.shape[1] + 1)]
+
+        else:
+            pdf = pd.DataFrame(list(data))
+
+            if len(column_names) == 0:
+                column_names = ["_%s" % i for i in range(1, pdf.shape[1] + 1)]
+
+        # Adjust the column names
+        if len(column_names) > 0:
+            pdf.columns = column_names
+
+        # Casting according to the input schema
+        if struct is not None:
+            for field in struct.fields:
+                name = field.name
+                dt = field.dataType
+                if isinstance(dt, StringType):
+                    pdf[name] = pdf[name].apply(str)
+                else:
+                    pt = PandasConversionMixin._to_corrected_pandas_type(dt)

Review Comment:
   I attempted to make `_to_corrected_pandas_type` support `StringType` by returning `np.str_`.
   
   The `createDataFrame`-related tests then passed as expected, but some other PySpark tests started behaving strangely, so I check `isinstance(dt, StringType)` here instead.
   
   In the future, I think we should create the PyArrow Table directly from the ndarray or list, skipping the intermediate conversions to/from pandas.
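   
   As a rough sketch of that direct path (only an illustration of the idea, not the actual implementation; the helper name `ndarray_to_arrow_table` and the column handling are hypothetical, and it assumes `pyarrow` and `numpy` are available):
   
   ```python
   import numpy as np
   import pyarrow as pa
   
   def ndarray_to_arrow_table(data: np.ndarray, column_names: list) -> pa.Table:
       # Treat a 1-D array as a single column; split a 2-D array column-wise.
       if data.ndim == 1:
           columns = [data]
       elif data.ndim == 2:
           # Ensure each column slice is contiguous before handing it to Arrow.
           columns = [np.ascontiguousarray(data[:, i]) for i in range(data.shape[1])]
       else:
           raise ValueError("NumPy array input should be of 1 or 2 dimensions.")
       # Build Arrow arrays per column, skipping the pandas round trip entirely.
       arrays = [pa.array(col) for col in columns]
       return pa.Table.from_arrays(arrays, names=column_names)
   
   # Example: a 2x2 int array with generated column names.
   table = ndarray_to_arrow_table(np.array([[1, 2], [3, 4]]), ["_1", "_2"])
   ```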


