AlenkaF commented on issue #37802:
URL: https://github.com/apache/arrow/issues/37802#issuecomment-1735422669

   It is hard to debug issues without a reproducible example.
   
   Is the issue caused by the filtering in Polars, or by reading the dataset in PyArrow? That is, if you load the dataset with PyArrow alone, without Polars (`ds.dataset(parq_path + filename, partitioning='hive')`), do you still run into the memory issue?
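
   A minimal sketch of that PyArrow-only check, assuming `parq_path` and `filename` are the path pieces from your snippet (the values below are placeholders):

   ```python
   import pyarrow.dataset as ds

   # Placeholders -- substitute the actual path to your hive-partitioned data.
   parq_path = "/data/parquet/"
   filename = "my_table"

   # Dataset discovery only inspects file metadata; no data is read yet.
   dataset = ds.dataset(parq_path + filename, partitioning="hive")

   # Materializing the data is the step where memory usage should show up.
   # If memory already blows up here, the problem is on the reading side
   # rather than in Polars.
   table = dataset.to_table()
   print(table.num_rows)
   ```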
   
   You can also inspect and compare the schemas of the two datasets created with the different versions of Apache Spark, see https://arrow.apache.org/docs/python/dataset.html#dataset-discovery. Maybe that will reveal the difference, as in the sketch below.
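
   A rough sketch of such a schema comparison (the paths below are hypothetical placeholders):

   ```python
   import pyarrow.dataset as ds

   # Hypothetical paths to the datasets written by each Spark version.
   dataset_old = ds.dataset("/data/written_with_spark_2", partitioning="hive")
   dataset_new = ds.dataset("/data/written_with_spark_3", partitioning="hive")

   # Print the discovered schemas side by side.
   print(dataset_old.schema)
   print(dataset_new.schema)

   # A field-by-field comparison can make a subtle type difference easier to spot.
   for old_field, new_field in zip(dataset_old.schema, dataset_new.schema):
       if not old_field.equals(new_field):
           print("differs:", old_field, "vs", new_field)
   ```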

