moskvax commented on a change in pull request #28743:
URL: https://github.com/apache/spark/pull/28743#discussion_r438051033



##########
File path: python/pyspark/sql/pandas/conversion.py
##########
@@ -394,10 +394,11 @@ def _create_from_pandas_with_arrow(self, pdf, schema, timezone):
 
         # Create the Spark schema from list of names passed in with Arrow types
         if isinstance(schema, (list, tuple)):
-            arrow_schema = pa.Schema.from_pandas(pdf, preserve_index=False)
+            inferred_types = [pa.infer_type(s, mask=s.isna(), from_pandas=True)
+                              for s in (pdf[c] for c in pdf)]
             struct = StructType()
-            for name, field in zip(schema, arrow_schema):
-                struct.add(name, from_arrow_type(field.type), nullable=field.nullable)
+            for name, t in zip(schema, inferred_types):
+                struct.add(name, from_arrow_type(t), nullable=True)

Review comment:
       Sounds good, will update with a comment.
   
   Alternatively, `any(s.isna())` could be checked here if we wanted to actively infer nullability. That would change existing behavior, though, and it would also be inconsistent with the non-Arrow path, which likewise defaults to treating inferred types as nullable:
   https://github.com/apache/spark/blob/43063e2db2bf7469f985f1954d8615b95cf5c578/python/pyspark/sql/types.py#L1069
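
   For illustration, here's a rough sketch of what that alternative could look like outside of `_create_from_pandas_with_arrow` (the `pdf` and `schema` values below are made-up stand-ins for the method's arguments, and the nullability check is the one discussed above, not what this PR currently does):

```python
import pandas as pd
import pyarrow as pa

from pyspark.sql.pandas.types import from_arrow_type
from pyspark.sql.types import StructType

# Made-up stand-ins for the pdf/schema arguments of _create_from_pandas_with_arrow.
pdf = pd.DataFrame({"a": [1.0, 2.0, None], "b": ["x", "y", "z"]})
schema = ["a", "b"]

struct = StructType()
for name, col in zip(schema, (pdf[c] for c in pdf)):
    mask = col.isna()
    # mask marks which entries are missing so they are treated as nulls during
    # inference; from_pandas=True makes NaN/None count as nulls as well.
    arrow_type = pa.infer_type(col, mask=mask, from_pandas=True)
    # The alternative discussed above: mark a field nullable only when its
    # column actually contains missing values, instead of always nullable=True.
    struct.add(name, from_arrow_type(arrow_type), nullable=bool(mask.any()))

print(struct)
# With this toy frame, "a" would come out nullable and "b" non-nullable.
```

   With the current patch, both fields would instead be added with `nullable=True`, matching the non-Arrow path linked above.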



