[ https://issues.apache.org/jira/browse/ARROW-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17475774#comment-17475774 ]

Weston Pace commented on ARROW-12358:
-------------------------------------

Thanks for checking in.  I did some testing on this today.  I might not be 
understanding what you are after.  I just tested with the following:

{code}
import shutil

import pyarrow as pa
import pyarrow.dataset as ds

# Make sure the /tmp/newdataset directory does not exist
shutil.rmtree('/tmp/newdataset', ignore_errors=True)

tab = pa.Table.from_pydict({ 'part': [0, 0, 1, 1], 'value': [0, 1, 2, 3] })
ds.write_dataset(tab,
                 '/tmp/newdataset',
                 partitioning_flavor='hive',
                 partitioning=['part'],
                 existing_data_behavior='delete_matching',
                 format='parquet')
{code}

I used the 6.0.1 release and did not run into any issues.  Am I 
misunderstanding the use case?  Or is it possible you are using a particular 
filesystem?  Or maybe a particular OS?
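(A quick follow-up sketch, not part of the original comment: one way to 
confirm that {{delete_matching}} replaces rather than appends is to run the 
same write a second time and read the dataset back. This reuses the {{tab}} 
table from the snippet above.)

{code}
import pyarrow.dataset as ds

# Second write into the same directory: with 'delete_matching', the files in
# each matching partition directory ('part=0', 'part=1') are deleted before
# the new files are written.
ds.write_dataset(tab,
                 '/tmp/newdataset',
                 partitioning_flavor='hive',
                 partitioning=['part'],
                 existing_data_behavior='delete_matching',
                 format='parquet')

# Reading back yields the original 4 rows, not 8, so nothing was duplicated.
print(ds.dataset('/tmp/newdataset', partitioning='hive').to_table().num_rows)
{code}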

> [C++][Python][R][Dataset] Control overwriting vs appending when writing to 
> existing dataset
> -------------------------------------------------------------------------------------------
>
>                 Key: ARROW-12358
>                 URL: https://issues.apache.org/jira/browse/ARROW-12358
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++
>            Reporter: Joris Van den Bossche
>            Priority: Major
>              Labels: dataset
>             Fix For: 8.0.0
>
>
> Currently, the dataset writing (eg with {{pyarrow.dataset.write_dataset}}) 
> uses a fixed filename template ({{"part\{i\}.ext"}}). This means that when 
> you are writing to an existing dataset, you de facto overwrite previous data 
> when using this default template.
> There is some discussion in ARROW-10695 about how the user can avoid this by 
> ensuring the file names are unique (the user can specify the 
> {{basename_template}} to be something unique). There is also ARROW-7706 about 
> silently doubling data (so _not_ overwriting existing data) with the legacy 
> {{parquet.write_to_dataset}} implementation. 
> It could be good to have a "mode" when writing datasets that controls the 
> different possible behaviours. Erroring when there is pre-existing data in 
> the target directory is maybe the safest default, because both silently 
> appending and silently overwriting can be surprising behaviour depending on 
> your expectations.
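
(An illustrative sketch, not part of the issue text above: with a unique 
{{basename_template}} per write, repeated writes into the same directory 
append new files instead of replacing the default {{part-\{i\}.ext}} names. 
The UUID scheme and the {{/tmp/appenddataset}} path are hypothetical choices, 
and {{existing_data_behavior='overwrite_or_ignore'}} is needed on pyarrow >= 
6.0 so the second write does not error out on pre-existing data.)

{code}
import uuid

import pyarrow as pa
import pyarrow.dataset as ds

tab = pa.Table.from_pydict({ 'part': [0, 0, 1, 1], 'value': [0, 1, 2, 3] })

# '{i}' is the mandatory file-counter placeholder; the UUID prefix makes each
# write's file names unique, so a repeated write appends rather than replaces.
ds.write_dataset(tab,
                 '/tmp/appenddataset',
                 partitioning_flavor='hive',
                 partitioning=['part'],
                 basename_template=f'part-{uuid.uuid4().hex}-{{i}}.parquet',
                 # Skip the pre-existing-data error; the unique template
                 # guarantees nothing is actually overwritten.
                 existing_data_behavior='overwrite_or_ignore',
                 format='parquet')
{code}

Running that block twice leaves eight rows in the dataset, which is exactly 
the silent doubling the issue argues should not be the default behaviour.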



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
