[
https://issues.apache.org/jira/browse/ARROW-7706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092028#comment-17092028
]
Will Jones commented on ARROW-7706:
-----------------------------------
To add to the idea of write modes, Spark's DataFrame.saveAsTable() method takes a
mode parameter similar to what is being discussed here. It might be a good part of
their API to imitate.
It supports these modes:
{quote} * ??append??: Append contents of this
[{{DataFrame}}|https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame]
to existing data.
* ??overwrite??: Overwrite existing data.
* ??error?? or ??errorifexists??: Throw an exception if data already exists.
* ??ignore??: Silently ignore this operation if data already exists.
{quote}
The default is "error": an exception is raised if the destination already exists.
Reference:
[https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameWriter.saveAsTable]
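For illustration, here is a minimal PySpark sketch of how those modes behave with the path-based writer (the {{mode}} semantics are the same as for {{saveAsTable()}}); the example path {{/tmp/spark_table}} is made up:
{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "col_a"])

# First write succeeds. The default mode is "error"/"errorifexists",
# so repeating this line without a mode raises an AnalysisException.
df.write.partitionBy("col_a").parquet("/tmp/spark_table")

# "append" adds new files, "overwrite" replaces the existing data,
# and "ignore" silently skips the write if the destination exists.
df.write.mode("overwrite").partitionBy("col_a").parquet("/tmp/spark_table")
{code}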
> [Python] saving a dataframe to the same partitioned location silently doubles
> the data
> --------------------------------------------------------------------------------------
>
> Key: ARROW-7706
> URL: https://issues.apache.org/jira/browse/ARROW-7706
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.15.1
> Reporter: Tsvika Shapira
> Priority: Major
> Labels: dataset, parquet
>
> When a user saves a dataframe:
> {code:python}
> df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')
> {code}
> it will create sub-directories named "{{col_a=val1}}", "{{col_a=val2}}" in
> {{/tmp/table}}. Each of them will contain one or more Parquet files with
> random filenames.
> If the user runs the same command again, the code will reuse the existing
> sub-directories but write new files with different random filenames alongside
> the old ones. As a result, any data loaded from this folder will be wrong:
> each row will be present twice.
> For example:
> {code:python}
> df1.to_parquet('/tmp/table', partition_cols=['col_a'], engine='pyarrow')  # second time
> df2 = pd.read_parquet('/tmp/table', engine='pyarrow')
> assert len(df1) == len(df2)  # raises AssertionError: df2 has twice as many rows
> {code}
> This is a subtle change in the data that can pass unnoticed.
>
> I would expect the code to prevent the user from using a non-empty destination
> as a partitioned target. An overwrite flag could also be useful.