+1 (binding)

Verified signature, checksum, and license. The checksum was generated with the
old buggy release script because it ran from the 0.13.x branch and therefore
still used the full file path. I have updated it to use the relative file
path. If anyone sees a checksum failure, please re-download the checksum
file and verify again.
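For anyone verifying: an absolute path recorded in the checksum file is exactly what makes `sha512sum -c` fail when run from a different directory, since the tool looks up the file by the recorded name. A minimal sketch of the failure and the fix (filenames here are placeholders, not the actual release artifacts):

```shell
# Create a stand-in artifact in a scratch directory
cd "$(mktemp -d)"
echo "release contents" > artifact.tar.gz

# Buggy behavior: checksum recorded with an absolute path only verifies
# from a machine/directory where that exact path exists
sha512sum "$PWD/artifact.tar.gz" > bad.sha512

# Fixed behavior: checksum recorded with a relative path verifies anywhere,
# as long as you run the check next to the downloaded artifact
sha512sum artifact.tar.gz > good.sha512
sha512sum -c good.sha512   # prints "artifact.tar.gz: OK"
```

This is why re-downloading the regenerated checksum file (with relative paths) resolves the failures.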

Ran unit tests for all engine and JDK versions, plus the AWS integration
tests. For the flaky Spark test, given that #4033 fixes the issue and it was
not a bug in the source code, I think we can continue without re-cutting a
candidate.

Tested basic operations, copy-on-write delete, update, and rewriting data
files on AWS EMR with Spark 3.1 and Flink 1.14, and verified that #3986 and
#4024 are fixed.

I did some basic tests for #4023 (the predicate pushdown fix), but I don't
have a large Spark 3.2 installation to further verify the performance. It
would be great if anyone else could do some additional verification.

Best,
Jack Ye

On Fri, Feb 11, 2022 at 8:24 PM Manong Karl <abc549...@gmail.com> wrote:

> It's flaky. This exception only appears on one TeamCity agent; switching
> agents resolves the issue.
>
> Ryan Blue <b...@tabular.io> wrote on Sat, Feb 12, 2022 at 08:57:
>
>> Does that exception fail consistently, or is it a flaky test? We recently
>> fixed another Spark test that was flaky because of sampling and sort order:
>> https://github.com/apache/iceberg/pull/4033
>>
>> On Thu, Feb 10, 2022 at 7:12 PM Manong Karl <abc549...@gmail.com> wrote:
>>
>>> I hit a test failure on Spark 3.2:
>>> TestMergeOnReadDelete.testDeleteWithSerializableIsolation[catalogName =
>>> testhive, implementation = org.apache.iceberg.spark.SparkCatalog, config =
>>> {type=hive, default-namespace=default}, format = orc, vectorized = true,
>>> distributionMode = none] (apache/iceberg issue #4090)
>>> <https://github.com/apache/iceberg/issues/4090>
>>> Am I the only one seeing this exception?
>>>
>>
>>
>> --
>> Ryan Blue
>> Tabular
>>
>