xinrong-databricks commented on PR #36640:
URL: https://github.com/apache/spark/pull/36640#issuecomment-1138893839

   After further consideration, the behaviors below are expected:
   
   ```py
   >>> spark.createDataFrame(spark._sc.parallelize([[1], [2]])).show()
   +---+
   | _1|
   +---+
   |  1|
   |  2|
   +---+
   
   >>> spark.createDataFrame(spark._sc.parallelize([[],[]])).show()
   ++
   ||
   ++
   ||
   ||
   ++
   ```
   That's because each **[]** in [**[]**, ...] marks a row in PySpark, so an empty list is a valid row with zero columns.
   
   So the PR is adjusted to accept `[[],[]]` as input, by simply bypassing the `if not first:` check.
   
   An input of `[None, None]` will still raise a `ValueError`, as before:
   ```py
   >>> spark.createDataFrame(spark._sc.parallelize([None, None])).show()
   Traceback (most recent call last):
   ...
   ValueError: The first row in RDD is empty, can not infer schema
   ```
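
   As a rough illustration (plain Python, not the actual PySpark internals; the function name is hypothetical), the distinction the adjusted check relies on, that `None` means "no row to infer from" while `[]` is a valid zero-column row, could be sketched as:
   ```py
   def infer_first_row(first):
       # Hypothetical sketch: a truthiness check like `if not first:` would
       # reject both None and [], because `not []` is also True. Checking
       # `is None` instead lets the empty row [] through while still
       # rejecting a missing row.
       if first is None:
           raise ValueError("The first row in RDD is empty, can not infer schema")
       return len(first)  # number of columns inferred from the first row

   infer_first_row([])    # accepted: a row with zero columns
   infer_first_row([1])   # accepted: a row with one column
   # infer_first_row(None) would raise ValueError, matching the example above.
   ```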
   
   CC @ueshin @HyukjinKwon @itholic
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
