Folks, you have put your vote on the wrong thread. This is the RC0 thread;
you need to put it on the RC1 thread:
https://lists.apache.org/thread/022p4rml5tvsx9xpq6t7b3n1td8lzz1d

-Ayush

Sent from my iPhone

> On 20-Jun-2023, at 10:03 PM, Ashutosh Gupta <ashutoshgupta...@gmail.com> 
> wrote:
> 
> Hi
> 
> Thanks Wei-Chiu for driving the release.
> 
> +1 (non-binding)
> 
> * Build from source looks good.
> * Checksums and signatures look good (a verification sketch follows below).
> * Running basic HDFS commands and simple MapReduce jobs looks good.
> * hadoop-tools/hadoop-aws UTs and ITs look good.
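> 
> For anyone repeating the checksum step, a minimal Java sketch (the tarball
> name and the "<hex>  <name>" layout of the .sha512 file are assumptions;
> the signature check itself still needs GPG and the project KEYS file):
> 
> import java.io.InputStream;
> import java.io.OutputStream;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.security.DigestInputStream;
> import java.security.MessageDigest;
> import java.util.HexFormat;
> 
> public class VerifyChecksum {
>   public static void main(String[] args) throws Exception {
>     MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
>     // Stream the tarball through the digest so it is never fully in memory.
>     try (InputStream in = new DigestInputStream(
>         Files.newInputStream(Path.of("hadoop-3.3.6.tar.gz")), sha512)) {
>       in.transferTo(OutputStream.nullOutputStream());
>     }
>     String actual = HexFormat.of().formatHex(sha512.digest());
>     // Assumed layout: the published .sha512 file starts with the hex digest.
>     String expected = Files.readString(Path.of("hadoop-3.3.6.tar.gz.sha512"))
>         .trim().split("\\s+")[0];
>     System.out.println(actual.equalsIgnoreCase(expected) ? "checksum OK" : "MISMATCH");
>   }
> }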
> 
> Thanks,
> Ash
> 
>> On Tue, Jun 20, 2023 at 5:18 PM Mukund Madhav Thakur
>> <mtha...@cloudera.com.invalid> wrote:
>> 
>> Hi Wei-Chiu,
>> Thanks for driving the release.
>> 
>> +1 (binding)
>> Verified checksum and signature.
>> Built from source successfully.
>> Ran AWS ITests.
>> Ran Azure ITests.
>> Compiled hadoop-api-shim.
>> Compiled the Google Cloud Storage connector.
>> 
>> 
>> I did see the two test failures in the GCS connector as well, but those
>> are harmless.
>> 
>> 
>> 
>> On Thu, Jun 15, 2023 at 8:21 PM Wei-Chiu Chuang
>> <weic...@cloudera.com.invalid> wrote:
>> 
>>> Overall so far so good.
>>> 
>>> hadoop-api-shim:
>>> built, tested successfully.
>>> 
>>> cloudstore:
>>> built successfully.
>>> 
>>> Spark:
>>> built successfully. Passed hadoop-cloud tests.
>>> 
>>> Ozone:
>>> One test failure due to an unrelated Ozone issue. This test is being
>>> disabled in the latest Ozone code.
>>> 
>>> org.apache.hadoop.hdds.utils.NativeLibraryNotLoadedException: Unable
>>> to load library ozone_rocksdb_tools from both java.library.path &
>>> resource file libozone_rocksdb_tools.so from jar.
>>>         at org.apache.hadoop.hdds.utils.db.managed.ManagedSSTDumpTool.<init>(ManagedSSTDumpTool.java:49)
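>>> 
>>> For context, a generic sketch of the dual loading strategy that message
>>> describes (not the Ozone code; class and library names are illustrative):
>>> try java.library.path first, then fall back to extracting the bundled .so
>>> from the jar's resources.
>>> 
>>> import java.io.IOException;
>>> import java.io.InputStream;
>>> import java.io.UncheckedIOException;
>>> import java.nio.file.Files;
>>> import java.nio.file.Path;
>>> import java.nio.file.StandardCopyOption;
>>> 
>>> public class NativeLoader {
>>>   static void load(String name) {
>>>     try {
>>>       System.loadLibrary(name); // searches java.library.path
>>>     } catch (UnsatisfiedLinkError e) {
>>>       // Fall back to the .so shipped as a classpath resource inside the jar.
>>>       String resource = "/lib" + name + ".so";
>>>       try (InputStream in = NativeLoader.class.getResourceAsStream(resource)) {
>>>         if (in == null) {
>>>           throw new IllegalStateException(
>>>               "Unable to load " + name + " from both java.library.path & jar");
>>>         }
>>>         Path tmp = Files.createTempFile(name, ".so");
>>>         Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
>>>         System.load(tmp.toAbsolutePath().toString()); // load by absolute path
>>>       } catch (IOException io) {
>>>         throw new UncheckedIOException(io);
>>>       }
>>>     }
>>>   }
>>> }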
>>> 
>>> 
>>> Google GCS:
>>> There are two test failures. The tests were added recently by
>>> HADOOP-18724 <https://issues.apache.org/jira/browse/HADOOP-18724> in
>>> Hadoop 3.3.6. This is okay; it is not a production code problem and can
>>> be addressed in the GCS connector code.
>>> 
>>> [ERROR] Errors:
>>> [ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testFloatingPointLength:403
>>>   » IllegalArgument Unknown mandatory key for gs://fake-in-memory-test-bucket/contract-test/testFloatingPointLength "fs.option.openfile.length"
>>> [ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testOpenFileApplyAsyncRead:341
>>>   » IllegalArgument Unknown mandatory key for gs://fake-in-memory-test-bucket/contract-test/testOpenFileApplyAsyncRead "fs.option.openfile.length"
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Wed, Jun 14, 2023 at 5:01 PM Wei-Chiu Chuang <weic...@apache.org>
>>> wrote:
>>> 
>>>> The hbase-filesystem tests passed after reverting HADOOP-18596
>>>> <https://issues.apache.org/jira/browse/HADOOP-18596> and HADOOP-18633
>>>> <https://issues.apache.org/jira/browse/HADOOP-18633> from my local tree.
>>>> So I think it's a matter of the default behavior being changed. It's not
>>>> the end of the world. I think we can address it by adding an incompatible
>>>> change flag and a release note.
>>>> 
>>>> On Wed, Jun 14, 2023 at 3:55 PM Wei-Chiu Chuang <weic...@apache.org>
>>>> wrote:
>>>> 
>>>>> Cross-referenced git history and Jira. The changelog needs some updates.
>>>>> 
>>>>> Not in the release:
>>>>> 
>>>>>   1. HDFS-16858 <https://issues.apache.org/jira/browse/HDFS-16858>
>>>>>   2. HADOOP-18532 <https://issues.apache.org/jira/browse/HADOOP-18532>
>>>>>   3. HDFS-16861 <https://issues.apache.org/jira/browse/HDFS-16861>
>>>>>   4. HDFS-16866 <https://issues.apache.org/jira/browse/HDFS-16866>
>>>>>   5. HADOOP-18320 <https://issues.apache.org/jira/browse/HADOOP-18320>
>>>>> 
>>>>> Updated the fix versions. Will generate a new changelog in the next RC.
>>>>> 
>>>>> Was able to build HBase and hbase-filesystem without any code change.
>>>>> 
>>>>> HBase has one unit test failure. This one is reproducible even with
>>>>> Hadoop 3.3.5, so it may be a red herring: local environment or similar.
>>>>> 
>>>>> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.007 s <<< FAILURE! - in org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker
>>>>> [ERROR] org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness
>>>>>   Time elapsed: 3.13 s  <<< ERROR!
>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>         at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker$RandomTestData.<init>(TestSyncTimeRangeTracker.java:91)
>>>>>         at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness(TestSyncTimeRangeTracker.java:156)
>>>>> 
>>>>> hbase-filesystem has three test failures in TestHBOSSContractDistCp,
>>>>> which are not reproducible with Hadoop 3.3.5.
>>>>> 
>>>>> [ERROR] Failures:
>>>>> [ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testDistCpUpdateCheckFileSkip:976->Assert.fail:88
>>>>>   10 errors in file of length 10
>>>>> [ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureNoChange:270->AbstractContractDistCpTest.assertCounterInRange:290->Assert.assertTrue:41->Assert.fail:88
>>>>>   Files Skipped value 0 too below minimum 1
>>>>> [ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:41->Assert.fail:88
>>>>>   Files Copied value 2 above maximum 1
>>>>> [INFO]
>>>>> [ERROR] Tests run: 240, Failures: 3, Errors: 0, Skipped: 58
>>>>> 
>>>>> 
>>>>> Ozone:
>>>>> Test in progress. Will report back.
>>>>> 
>>>>> 
>>>>> On Tue, Jun 13, 2023 at 11:27 PM Wei-Chiu Chuang <weic...@apache.org>
>>>>> wrote:
>>>>> 
>>>>>> I am inviting anyone to try and vote on this release candidate.
>>>>>> 
>>>>>> Note:
>>>>>> This is built off branch-3.3.6 plus PR#5741 (AWS SDK update) and
>>>>>> PR#5740 (LICENSE file update).
>>>>>> 
>>>>>> The RC is available at:
>>>>>> https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/ (for amd64)
>>>>>> https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-arm64/ (for arm64)
>>>>>> 
>>>>>> Git tag: release-3.3.6-RC0
>>>>>> https://github.com/apache/hadoop/releases/tag/release-3.3.6-RC0
>>>>>> 
>>>>>> Maven artifacts are built on an x86 machine and staged at
>>>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1378/
>>>>>> 
>>>>>> My public key:
>>>>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>>>>> 
>>>>>> Changelog:
>>>>>> https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/CHANGELOG.md
>>>>>> 
>>>>>> Release notes:
>>>>>> https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/RELEASENOTES.md
>>>>>> 
>>>>>> This is a relatively small release (by Hadoop standards) containing
>>>>>> about 120 commits.
>>>>>> Please give it a try; this RC vote will run for 7 days.
>>>>>> 
>>>>>> 
>>>>>> Feature highlights:
>>>>>> 
>>>>>> SBOM artifacts
>>>>>> ----------------------------------------
>>>>>> Starting with this release, Hadoop publishes a Software Bill of
>>>>>> Materials (SBOM) using the CycloneDX Maven plugin. For more information
>>>>>> about SBOMs, please go to
>>>>>> [SBOM](https://cwiki.apache.org/confluence/display/COMDEV/SBOM).
>>>>>> 
>>>>>> HDFS RBF: RDBMS based token storage support
>>>>>> ----------------------------------------
>>>>>> HDFS Router-Based Federation now supports storing delegation tokens
>>>>>> in MySQL
>>>>>> ([HADOOP-18535](https://issues.apache.org/jira/browse/HADOOP-18535)),
>>>>>> which improves token operation throughput over the original
>>>>>> ZooKeeper-based implementation.
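>>>>>> 
>>>>>> From the client side the storage backend change is transparent; here is
>>>>>> a hedged sketch of requesting delegation tokens through a router (the
>>>>>> router URI and renewer name are placeholders):
>>>>>> 
>>>>>> import java.net.URI;
>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>>> import org.apache.hadoop.security.Credentials;
>>>>>> import org.apache.hadoop.security.token.Token;
>>>>>> 
>>>>>> public class FetchRouterTokens {
>>>>>>   public static void main(String[] args) throws Exception {
>>>>>>     Configuration conf = new Configuration();
>>>>>>     FileSystem fs = FileSystem.get(URI.create("hdfs://router-fs"), conf);
>>>>>>     Credentials creds = new Credentials();
>>>>>>     // Tokens are minted by the router; 3.3.6 can persist them in MySQL.
>>>>>>     for (Token<?> t : fs.addDelegationTokens("yarn", creds)) {
>>>>>>       System.out.println(t.getKind() + " for " + t.getService());
>>>>>>     }
>>>>>>   }
>>>>>> }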
>>>>>> 
>>>>>> 
>>>>>> New File System APIs
>>>>>> ----------------------------------------
>>>>>> [HADOOP-18671](https://issues.apache.org/jira/browse/HADOOP-18671)
>>>>>> moved a number of HDFS-specific APIs to Hadoop Common to make it
>>>>>> possible for certain applications that depend on HDFS semantics to run
>>>>>> on other Hadoop-compatible file systems.
>>>>>> 
>>>>>> In particular, recoverLease() and isFileClosed() are exposed through
>>>>>> the LeaseRecoverable interface, while setSafeMode() is exposed through
>>>>>> the SafeMode interface.
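>>>>>> 
>>>>>> A short usage sketch of the new interfaces (the path is a placeholder,
>>>>>> and the instanceof checks assume a filesystem that implements them, as
>>>>>> HDFS does):
>>>>>> 
>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>>> import org.apache.hadoop.fs.LeaseRecoverable;
>>>>>> import org.apache.hadoop.fs.Path;
>>>>>> import org.apache.hadoop.fs.SafeMode;
>>>>>> import org.apache.hadoop.fs.SafeModeAction;
>>>>>> 
>>>>>> public class NewFsApisSketch {
>>>>>>   public static void main(String[] args) throws Exception {
>>>>>>     FileSystem fs = FileSystem.get(new Configuration());
>>>>>>     Path p = new Path("/tmp/in-progress-file");
>>>>>>     if (fs instanceof LeaseRecoverable) {
>>>>>>       LeaseRecoverable lr = (LeaseRecoverable) fs;
>>>>>>       boolean closed = lr.recoverLease(p); // true once the lease is released
>>>>>>       System.out.println("closed: " + closed
>>>>>>           + ", isFileClosed: " + lr.isFileClosed(p));
>>>>>>     }
>>>>>>     if (fs instanceof SafeMode) {
>>>>>>       // GET only queries; ENTER/LEAVE/FORCE_EXIT change the state.
>>>>>>       boolean inSafeMode = ((SafeMode) fs).setSafeMode(SafeModeAction.GET);
>>>>>>       System.out.println("in safe mode: " + inSafeMode);
>>>>>>     }
>>>>>>   }
>>>>>> }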
>>>>>> 
>>>>>> 
>>>>>> 