[ https://issues.apache.org/jira/browse/ARROW-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wes McKinney updated ARROW-2628:
--------------------------------
Fix Version/s: 1.0.0
> [Python] parquet.write_to_dataset is memory-hungry on large DataFrames
> ----------------------------------------------------------------------
>
> Key: ARROW-2628
> URL: https://issues.apache.org/jira/browse/ARROW-2628
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Reporter: Wes McKinney
> Priority: Major
> Labels: dataset, parquet
> Fix For: 1.0.0
>
>
> See the discussion in https://github.com/apache/arrow/issues/1749. We should
> consider more memory-efficient strategies for writing very large tables to a
> partitioned directory scheme.
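>
> A rough illustration of the chunked-write workaround implied by that discussion
> (a sketch only, not the eventual fix): convert the DataFrame to Arrow once, then
> hand write_to_dataset one row slice at a time, so the copies made while splitting
> rows into partition groups are bounded by the slice size rather than the whole
> table. The function name and chunk size below are illustrative.
>
>     import pyarrow as pa
>     import pyarrow.parquet as pq
>
>     def write_dataframe_chunked(df, root_path, partition_cols,
>                                 chunk_rows=1_000_000):
>         # Convert the pandas DataFrame to an Arrow table once.
>         table = pa.Table.from_pandas(df, preserve_index=False)
>         for offset in range(0, table.num_rows, chunk_rows):
>             # slice() is zero-copy; only this chunk is split and copied
>             # per partition inside write_to_dataset.
>             chunk = table.slice(offset, chunk_rows)
>             pq.write_to_dataset(chunk, root_path=root_path,
>                                 partition_cols=partition_cols)
>
> Each call emits separate files into the partition directories, so this trades
> peak memory for a larger number of smaller files per partition.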