AlenkaF commented on issue #37802:
URL: https://github.com/apache/arrow/issues/37802#issuecomment-1735425909

   > You can also inspect the schema of the two datasets created with different versions of Apache Spark, see https://arrow.apache.org/docs/python/dataset.html#dataset-discovery. Maybe you will be able to find the difference?
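
   A minimal sketch of the schema inspection suggested above, using PyArrow's dataset discovery. The two tiny Parquet files written here are hypothetical stand-ins for datasets produced by two different Spark versions (here they deliberately differ in the integer width of an `id` column):

   ```python
   import os
   import tempfile

   import pyarrow as pa
   import pyarrow.dataset as ds
   import pyarrow.parquet as pq

   # Hypothetical stand-ins for directories written by two Spark versions.
   tmp = tempfile.mkdtemp()
   path_a = os.path.join(tmp, "spark_old")
   path_b = os.path.join(tmp, "spark_new")
   os.makedirs(path_a)
   os.makedirs(path_b)

   # Simulate a schema difference: int32 "id" vs int64 "id".
   pq.write_table(pa.table({"id": pa.array([1, 2], pa.int32())}),
                  os.path.join(path_a, "part-0.parquet"))
   pq.write_table(pa.table({"id": pa.array([1, 2], pa.int64())}),
                  os.path.join(path_b, "part-0.parquet"))

   # Dataset discovery infers a schema from the files in each directory.
   schema_a = ds.dataset(path_a, format="parquet").schema
   schema_b = ds.dataset(path_b, format="parquet").schema

   print(schema_a)
   print(schema_b)
   print("schemas equal:", schema_a.equals(schema_b))
   ```

   Comparing the two printed schemas field by field should reveal any difference (column types, nullability, extra columns) between the datasets.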
   
   Oh, you mentioned in a later comment that the pyspark version is not the issue. What exactly is the issue, then? Do you run out of memory in every case, no matter which version of pyspark you are using?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
