[ https://issues.apache.org/jira/browse/ARROW-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Francois Saint-Jacques updated ARROW-3538:
------------------------------------------
    Labels: dataset datasets features parquet pull-request-available  (was: datasets features parquet pull-request-available)

> [Python] ability to override the automated assignment of uuid for filenames when writing datasets
> --------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-3538
>                 URL: https://issues.apache.org/jira/browse/ARROW-3538
>             Project: Apache Arrow
>          Issue Type: Wish
>          Components: Python
>    Affects Versions: 0.10.0
>            Reporter: Ji Xu
>            Assignee: Thomas Elvey
>            Priority: Major
>              Labels: dataset, datasets, features, parquet, pull-request-available
>             Fix For: 0.15.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Say I have a pandas DataFrame {{df}} that I would like to store on disk as a dataset using pyarrow parquet. I would do this:
> {code:python}
> table = pyarrow.Table.from_pandas(df)
> pyarrow.parquet.write_to_dataset(table, root_path=some_path, partition_cols=['a'])
> {code}
> On disk the dataset would look something like this:
> {noformat}
> some_path
> ├── a=1
> │   └── 4498704937d84fe5abebb3f06515ab2d.parquet
> ├── a=2
> │   └── 8bcfaed8986c4bdba587aaaee532370c.parquet
> {noformat}
> *Wished Feature:* It would be great if I could override the auto-assignment of the long UUID as the filename during *dataset* writing. My purpose is to be able to overwrite the dataset on disk when I have a new version of {{df}}. Currently, if I write the dataset again, another new uniquely named [UUID].parquet file is placed next to the old one, containing the same, redundant data.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)
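A minimal sketch of the requested usage, assuming the fix exposes a filename callback on `write_to_dataset` (the parameter name `partition_filename_cb` is taken to be the one introduced for this issue; treat it as an assumption, and `df`/`some_path` are placeholders from the description):

```python
# Hypothetical deterministic-filename callback for
# pyarrow.parquet.write_to_dataset (ARROW-3538). The callback receives
# the tuple of partition-key values for a given partition directory
# (e.g. (1,) for the directory a=1) and returns the file name to use,
# so rewriting the dataset overwrites the same files instead of adding
# new UUID-named ones.

def partition_filename(partition_keys):
    # Join the partition values into a stable, human-readable name.
    return "part-{}.parquet".format(
        "-".join(str(key) for key in partition_keys)
    )

# Intended usage (requires pyarrow >= 0.15.0; commented out so the
# sketch stands alone without pyarrow installed):
#
# import pyarrow as pa
# import pyarrow.parquet as pq
# table = pa.Table.from_pandas(df)
# pq.write_to_dataset(table, root_path=some_path,
#                     partition_cols=['a'],
#                     partition_filename_cb=partition_filename)
```

With a callback like this, a second write of a new version of `df` produces `some_path/a=1/part-1.parquet` again rather than a fresh UUID-named file alongside the stale one.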