[jira] [Updated] (ARROW-3538) [Python] ability to override the automated assignment of uuid for filenames when writing datasets
[ https://issues.apache.org/jira/browse/ARROW-3538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wes McKinney updated ARROW-3538:
--------------------------------
    Labels: dataset features parquet pull-request-available  (was: dataset datasets features parquet pull-request-available)

> [Python] ability to override the automated assignment of uuid for filenames
> when writing datasets
> ---------------------------------------------------------------------------
>
>                 Key: ARROW-3538
>                 URL: https://issues.apache.org/jira/browse/ARROW-3538
>             Project: Apache Arrow
>          Issue Type: Wish
>          Components: Python
>    Affects Versions: 0.10.0
>            Reporter: Ji Xu
>            Assignee: Thomas Elvey
>            Priority: Major
>              Labels: dataset, features, parquet, pull-request-available
>             Fix For: 0.15.0
>
>          Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Say I have a pandas DataFrame {{df}} that I would like to store on disk as a
> dataset using pyarrow parquet. I would do this:
> {code:java}
> table = pyarrow.Table.from_pandas(df)
> pyarrow.parquet.write_to_dataset(table, root_path=some_path,
>                                  partition_cols=['a'])
> {code}
> On disk, the dataset would look something like this:
> {color:#14892c}some_path{color}
> {color:#14892c}├── a=1{color}
> {color:#14892c}│   └── 4498704937d84fe5abebb3f06515ab2d.parquet{color}
> {color:#14892c}├── a=2{color}
> {color:#14892c}│   └── 8bcfaed8986c4bdba587aaaee532370c.parquet{color}
> *Wished Feature:* It would be great if I could somehow override the
> auto-assignment of the long UUID as the filename during *dataset* writing.
> My purpose is to be able to overwrite the dataset on disk when I have a new
> version of {{df}}. Currently, if I write the dataset again, another new,
> uniquely named [UUID].parquet file is placed next to the old one, containing
> the same, redundant data.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)
Francois Saint-Jacques updated ARROW-3538:
------------------------------------------
    Labels: dataset datasets features parquet pull-request-available  (was: datasets features parquet pull-request-available)
ASF GitHub Bot updated ARROW-3538:
----------------------------------
    Labels: datasets features parquet pull-request-available  (was: datasets features parquet)
Wes McKinney updated ARROW-3538:
--------------------------------
    Fix Version/s: 0.15.0  (was: 0.14.0)
Wes McKinney updated ARROW-3538:
--------------------------------
    Labels: datasets features parquet  (was: features parquet)
Antoine Pitrou updated ARROW-3538:
----------------------------------
    Fix Version/s: 0.14.0
Antoine Pitrou updated ARROW-3538:
----------------------------------
    Component/s: Python
Wes McKinney updated ARROW-3538:
--------------------------------
    Labels: features parquet  (was: features)
Wes McKinney updated ARROW-3538:
--------------------------------
    Summary: [Python] ability to override the automated assignment of uuid for filenames when writing datasets  (was: ability to override the automated assignment of uuid for filenames when writing datasets)