Hi, Anthony.

Thank you for trying 1.7.0. It seems that your unit test reuses the test
file name.
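
If the goal is just to make that unit test pass again, a minimal sketch is below. It assumes the `overwrite` option on `OrcFile.WriterOptions` available in recent ORC releases; the path, schema, and class name are hypothetical placeholders for your test setup.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class OverwriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical schema and test-output path; adjust to your test.
    TypeDescription schema = TypeDescription.fromString("struct<x:int>");
    Path path = new Path("/tmp/test-output.orc");

    // Ask the writer to replace an existing file instead of throwing
    // FileAlreadyExistsException when the path already exists.
    Writer writer = OrcFile.createWriter(path,
        OrcFile.writerOptions(conf)
            .setSchema(schema)
            .overwrite(true));
    writer.close();
  }
}
```

Alternatively, the test can delete the file (or use a unique file name) before each run, which also works on older ORC versions.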

Regarding breaking changes, I also raised similar breaking-change issues
against 1.6.x and fixed some of them to help downstream migration.

    TITLE: Apache ORC Versioning (Semantic Versioning)
    https://lists.apache.org/thread/nhw99jh1r0fc7r74cof0nhhdzvcqwvw5

There is another recent discussion on the ORC releases too.

    TITLE: [DISCUSS] Apache ORC Release Cadence
    https://lists.apache.org/thread/ql5o2ndon1b0818d4z5nb6001q09z5ck

AFAIK, Apache ORC didn't officially follow Semantic Versioning until 1.6.x.
We are still in the middle of transitioning toward Semantic Versioning
and enforcing it.

BTW, you are talking about an ancient breaking change from 1.2.3
(2016-12-12) to 1.3.0 (2017-01-23).
Those releases were archived a long time ago, and Apache ORC 1.5 recently
reached EOL. It could serve as an example, but it's beyond the as-is scope
of backward compatibility at 1.6/1.7.

Dongjoon.


On Tue, Nov 2, 2021 at 6:12 PM A L <anthonyn...@gmail.com> wrote:

> Hi,
>
> My project used to use Apache orc-core 1.2.3. In my code I use
> createWriter(Path path, OrcFile.WriterOptions opts) to create a writer;
> the path argument is a file that already exists in the FileSystem. It
> worked well. After I upgraded orc-core to 1.7.0, I found that my unit
> test failed with org.apache.hadoop.fs.FileAlreadyExistsException.
>
> I found that in ORC-119 there was a change that added a PhysicalFsWriter
> into the WriterImpl constructor.
> In PhysicalFsWriter's constructor it checks whether the file exists and
> whether overwrite is set.
> So this change breaks backward compatibility. Has anyone hit this issue
> when upgrading the version?
> How did you fix it?
>
> Thanks,
>
> Anthony
>
