[ https://issues.apache.org/jira/browse/ARROW-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wes McKinney updated ARROW-3538:
--------------------------------
Summary: [Python] ability to override the automated assignment of uuid for
filenames when writing datasets (was: ability to override the automated
assignment of uuid for filenames when writing datasets)
> [Python] ability to override the automated assignment of uuid for filenames
> when writing datasets
> -------------------------------------------------------------------------------------------------
>
> Key: ARROW-3538
> URL: https://issues.apache.org/jira/browse/ARROW-3538
> Project: Apache Arrow
> Issue Type: Wish
> Affects Versions: 0.10.0
> Reporter: Ji Xu
> Priority: Major
> Labels: features, parquet
>
> Say I have a pandas DataFrame {{df}} that I would like to store on disk as a
> dataset using pyarrow parquet. I would do this:
> {code:python}
> import pyarrow
> import pyarrow.parquet
>
> table = pyarrow.Table.from_pandas(df)
> pyarrow.parquet.write_to_dataset(table, root_path=some_path,
>                                  partition_cols=['a']){code}
> On disk the dataset would look something like this:
> {noformat}
> some_path
> ├── a=1
> │   └── 4498704937d84fe5abebb3f06515ab2d.parquet
> └── a=2
>     └── 8bcfaed8986c4bdba587aaaee532370c.parquet
> {noformat}
> *Wished Feature:* It'd be great if I could somehow override the
> auto-assignment of the long UUID filename when writing the *dataset*. My goal
> is to be able to overwrite the dataset on disk whenever I have a new version
> of {{df}}. Currently, if I write the dataset again, another uniquely named
> [UUID].parquet file is placed next to the old one, containing the same,
> redundant data.
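> A possible interim workaround (a sketch only, not a pyarrow API; the helper
> {{write_dataset_fixed_names}} below is hypothetical) is to build the
> Hive-style partition directories yourself and write each partition with
> {{pyarrow.parquet.write_table}} under a fixed filename, so rewriting the
> dataset replaces the files in place:
> {code:python}
> import os
>
> import pandas as pd
> import pyarrow as pa
> import pyarrow.parquet as pq
>
>
> def write_dataset_fixed_names(df, root_path, partition_col,
>                               filename="data.parquet"):
>     # One Hive-style directory per partition value: root_path/<col>=<value>/
>     for value, part in df.groupby(partition_col):
>         part_dir = os.path.join(root_path,
>                                 "{}={}".format(partition_col, value))
>         os.makedirs(part_dir, exist_ok=True)
>         # Drop the partition column, as write_to_dataset does.
>         table = pa.Table.from_pandas(part.drop(columns=[partition_col]),
>                                      preserve_index=False)
>         # Fixed filename instead of a UUID, so a rewrite overwrites in place.
>         pq.write_table(table, os.path.join(part_dir, filename))
>
>
> df = pd.DataFrame({"a": [1, 1, 2], "b": [0.1, 0.2, 0.3]})
> write_dataset_fixed_names(df, "some_path", "a")
> # Yields some_path/a=1/data.parquet and some_path/a=2/data.parquet;
> # calling it again with a new df overwrites those files.{code}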