[https://issues.apache.org/jira/browse/ARROW-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329335#comment-17329335]
Weston Pace commented on ARROW-12358:
-------------------------------------
tl;dr: Do what [~jorisvandenbossche] said and interpret "overwrite" as
"overwrite the entire partition".
[https://stackoverflow.com/questions/27033823/how-to-overwrite-the-output-directory-in-spark]
is related (it discusses this issue and how Spark handles it). Even after
reading through all the answers, however, I cannot tell whether Spark's
"overwrite" replaces the entire partition or the entire dataset. It does
appear to do one or the other, and not just replace some of a partition.
Replacing only part of a partition does not seem like it would ever be useful.
Overwriting the entire table can already be achieved without pyarrow by
simply removing the dataset beforehand, so I don't see much value in adding
that capability (a sketch of that workaround follows below). It does bring up
the question of repartitioning, which would require deleting the old data as
it is read, but I think that is a different topic (and related to the update
topic I mention below). Deleting a partition isn't very hard for the user
either; the tricky part is knowing which partition to delete.
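For reference, that whole-table overwrite needs nothing new from us. A minimal
sketch with the current API (the path and schema here are made up for
illustration):
{code:python}
import shutil

import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({"breed": ["poodle", "beagle"], "favorite": [1, 0]})

# "Overwrite the entire dataset" today: remove it, then write fresh.
shutil.rmtree("dataset_root", ignore_errors=True)
ds.write_dataset(table, "dataset_root", format="parquet",
                 partitioning=["breed"])
{code}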
With that in mind I'd suggest the following:
* Overwrite-partition: if the dataset write will write to partition X, then
delete all data in partition X first.
* Append: same as [~jorisvandenbossche] mentioned. Similar to how we behave
today, but add logic to make sure we never overwrite a file that happens to
have the same counter (e.g. detect the max counter value before we start
writing and continue from it; a sketch follows below).
* Error: same as [~jorisvandenbossche] mentioned.
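To make the append idea concrete, here is a rough sketch of the counter
detection. This is my own illustration, not an existing API, and it assumes
the default {{part\{i\}.ext}}-style file names mentioned in the issue:
{code:python}
import os
import re

def max_part_counter(root, ext="parquet"):
    """Return the largest part counter found under a dataset root, or -1.

    A writer implementing the append mode could start its counter at
    max_part_counter(root) + 1 instead of 0.
    """
    # Accept both "part{i}.ext" and "part-{i}.ext" style names.
    pattern = re.compile(r"part-?(\d+)\." + re.escape(ext) + "$")
    best = -1
    for _dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = pattern.match(name)
            if match:
                best = max(best, int(match.group(1)))
    return best
{code}
(Until something like this exists internally, the user-level workaround from
ARROW-10695 is to pass a unique {{basename_template}}, e.g. one containing a
UUID, so counters from different writes can never collide.)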
The overwrite-partition mode is useful for the case of "Load the entire dataset
(or an entire partition), modify it, write it back out".
However, I think the use case that is still missing is:
* Run a filtered scan of the data
* Modify this subset of data
* Write it back out, intending to overwrite the old rows
In other words, something equivalent to the SQL statement "UPDATE dogs SET
favorite=1 WHERE breed='poodle'".
Overwrite-partition won't work because it would delete any non-poodle data.
Append wouldn't work because it would duplicate the poodle data.
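To make the gap concrete, here is a rough sketch of emulating that UPDATE by
hand with today's API. It only works because the filter happens to coincide
with a partition boundary; the layout (directory partitioning on {{breed}}, so
poodle rows live entirely under {{dogs/poodle/}}) and the column names are
assumptions for illustration:
{code:python}
import shutil

import pyarrow as pa
import pyarrow.dataset as ds

dataset = ds.dataset("dogs", format="parquet", partitioning=["breed"])

# 1. Run a filtered scan of the data.
poodles = dataset.to_table(filter=ds.field("breed") == "poodle")

# 2. Modify this subset: the equivalent of SET favorite=1.
idx = poodles.schema.get_field_index("favorite")
poodles = poodles.set_column(
    idx, "favorite", pa.array([1] * poodles.num_rows))

# 3. Write it back out, overwriting the old rows. This is only safe
#    because the filter matched the whole partition being deleted.
shutil.rmtree("dogs/poodle")
ds.write_dataset(poodles, "dogs", format="parquet",
                 partitioning=["breed"])
{code}
For a filter that selects only some rows of a partition, neither this trick
nor any of the proposed modes would help; that is the case that would need a
separate update operation.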
Perhaps, however, that can be a separate operation; it brings up troubling
atomicity and consistency concerns. If we did create such an operation,
though, then presumably there would be no need for an overwrite mode at all.
> [C++][Python][R][Dataset] Control overwriting vs appending when writing to
> existing dataset
> -------------------------------------------------------------------------------------------
>
> Key: ARROW-12358
> URL: https://issues.apache.org/jira/browse/ARROW-12358
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++
> Reporter: Joris Van den Bossche
> Priority: Major
> Labels: dataset
> Fix For: 5.0.0
>
>
> Currently, dataset writing (e.g. with {{pyarrow.dataset.write_dataset}})
> uses a fixed filename template ({{"part\{i\}.ext"}}). This means that when
> you are writing to an existing dataset, you de facto overwrite previous data
> when using this default template.
> There is some discussion in ARROW-10695 about how the user can avoid this by
> ensuring the file names are unique (the user can specify the
> {{basename_template}} to be something unique). There is also ARROW-7706 about
> silently duplicating data (so _not_ overwriting existing data) with the legacy
> {{parquet.write_to_dataset}} implementation.
> It could be good to have a "mode" when writing datasets that controls these
> different possible behaviours. Erroring when there is pre-existing data in
> the target directory is maybe the safest default, because both silently
> appending and silently overwriting can be surprising behaviour depending on
> your expectations.