btw. the original PR does mention it to be compatible [2].

Are you trying to compile ZK-3.5 with Hadoop-3.3.6, or Hadoop-3.3.6 with
ZK-3.5? If the latter, and the compilation fails, then it shouldn't be an
incompatible change, right? Or do we need to maintain compat that way as
well?

Hadoop 3.3.6-RC1 compiled against ZooKeeper 3.6 appears to work with a
ZooKeeper 3.5 server (using the ZK 3.6 client).
Only compilation of Hadoop 3.3.6 against ZooKeeper 3.5 fails.
Stating this explicitly could avoid surprising downstream developers.

# The issue surfaced when I built the RC using Bigtop, in which
# products are compiled against the versions of bundled products.
# Bigtop can easily address the issue by bumping ZooKeeper to 3.6 or
# applying a patch to avoid the compilation error.
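For reference, a minimal sketch of rebuilding Hadoop against a pinned ZooKeeper version; the `zookeeper.version` Maven property is defined in Hadoop's hadoop-project/pom.xml, but the version values below are illustrative assumptions:

```shell
# Hypothetical: override the bundled ZooKeeper version at build time.
# Note that -DskipTests only skips *running* tests; test sources still
# compile, so the HADOOP-18515 error would still surface when pinning a
# 3.5.x version. -Dmaven.test.skip=true skips test compilation entirely.
mvn clean install -DskipTests -Dzookeeper.version=3.6.4
```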

On 2023/06/19 12:15, Ayush Saxena wrote:
Hi Masatake,
That ticket is just a backport of:
https://issues.apache.org/jira/browse/HADOOP-17612 and the only java code
change is in test [1], which is causing issues for you. That ticket isn't
marked incompatible either, so maybe that is why it got pulled back.....

btw. the original PR does mention it to be compatible [2].

Are you trying to compile ZK-3.5 with Hadoop-3.3.6, or Hadoop-3.3.6 with
ZK-3.5? If the latter, and the compilation fails, then it shouldn't be an
incompatible change, right? Or do we need to maintain compat that way as
well?

+1 in maintaining compatibility, incompatible changes should be avoided as
far as possible unless excessively necessary even in trunk or like we need
to do it for some "relevant" security issue or so in those thirdparty libs.
The RC1 vote is already up, do you plan to get this change excluded from
that?

Regarding the test: if you pass ``null`` instead of that DisconnectReason,
the test also passes, but I am pretty sure you would then get a
NoSuchMethodError for closeAll, because that closeAll overload isn't
there in ZK-3.5; ZOOKEEPER-3439 removed it

-Ayush

PS. From this doc: https://zookeeper.apache.org/releases.html, even the
ZK-3.6 line is EOL; not sure how those guys operate :-)

[1]
https://github.com/apache/hadoop/pull/3241/files#diff-b273546d6f060e617553eaa49da69039d2c655a77d42022779c2281d0f6cd08eR135
[2] https://github.com/apache/hadoop/pull/3241#issuecomment-889185103

On Mon, 19 Jun 2023 at 06:44, Masatake Iwasaki <iwasak...@oss.nttdata.com>
wrote:

I got a compilation error against ZooKeeper 3.5 due to HADOOP-18515.
Should it be marked as an incompatible change?
https://issues.apache.org/jira/browse/HADOOP-18515

::

    [ERROR]
/home/rocky/srcs/bigtop/build/hadoop/rpm/BUILD/hadoop-3.3.6-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverControllerStress.java:[135,40]
cannot find symbol
      symbol:   variable DisconnectReason
      location: class org.apache.zookeeper.server.ServerCnxn

While ZooKeeper 3.5 is already EoL, it would be nice to keep compatibility
in a patch release, especially if only test code is the cause.

Thanks,
Masatake Iwasaki

On 2023/06/18 4:57, Wei-Chiu Chuang wrote:
I was going to do another RC in case something comes up.
But it looks like the only thing that needs to be fixed is the Changelog.


HADOOP-18596 <https://issues.apache.org/jira/browse/HADOOP-18596> and
HADOOP-18633 <https://issues.apache.org/jira/browse/HADOOP-18633>
are related to cloud store semantics, and I don't want to make a judgement
call on them. As far as I can tell their effect can be addressed by
supplying a config option in the application code.
It looks like the feature improves fault tolerance by ensuring files are
synchronized if the modification time differs between the source and
destination. So to me it's the better behavior.

I can make an RC1 over the weekend to fix the Changelog, but that's
probably the only change it's going to have.
On Sat, Jun 17, 2023 at 2:00 AM Xiaoqiao He <hexiaoq...@apache.org>
wrote:

Thanks Wei-Chiu for driving this release. The next RC will be prepared,
right?
If so, I would like to try and vote on the next RC.
Just noticed that some JIRAs are not included and that we need to revert
some PRs to pass the HBase verification mentioned above.

Best Regards,
- He Xiaoqiao


On Fri, Jun 16, 2023 at 9:20 AM Wei-Chiu Chuang
<weic...@cloudera.com.invalid> wrote:

Overall so far so good.

hadoop-api-shim:
built, tested successfully.

cloudstore:
built successfully.

Spark:
built successfully. Passed hadoop-cloud tests.

Ozone:
One test failure due to an unrelated Ozone issue. This test is being
disabled in the latest Ozone code.

org.apache.hadoop.hdds.utils.NativeLibraryNotLoadedException: Unable
to load library ozone_rocksdb_tools from both java.library.path &
resource file libozone_rocksdb_tools.so from jar.
        at org.apache.hadoop.hdds.utils.db.managed.ManagedSSTDumpTool.<init>(ManagedSSTDumpTool.java:49)


Google gcs:
There are two test failures. The tests were added recently by HADOOP-18724
<https://issues.apache.org/jira/browse/HADOOP-18724> in Hadoop 3.3.6. This
is okay. Not a production code problem. Can be addressed in GCS code.

[ERROR] Errors:
[ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testFloatingPointLength:403
» IllegalArgument Unknown mandatory key for
gs://fake-in-memory-test-bucket/contract-test/testFloatingPointLength "fs.option.openfile.length"
[ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testOpenFileApplyAsyncRead:341
» IllegalArgument Unknown mandatory key for
gs://fake-in-memory-test-bucket/contract-test/testOpenFileApplyAsyncRead "fs.option.openfile.length"





On Wed, Jun 14, 2023 at 5:01 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

The hbase-filesystem tests passed after reverting HADOOP-18596
<https://issues.apache.org/jira/browse/HADOOP-18596> and HADOOP-18633
<https://issues.apache.org/jira/browse/HADOOP-18633> from my local tree.
So I think it's a matter of the default behavior being changed. It's not
the end of the world. I think we can address it by adding an incompatible
change flag and a release note.

On Wed, Jun 14, 2023 at 3:55 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

Cross referenced git history and jira. Changelog needs some update.

Not in the release:

     1. HDFS-16858 <https://issues.apache.org/jira/browse/HDFS-16858>
     2. HADOOP-18532 <https://issues.apache.org/jira/browse/HADOOP-18532>
     3. HDFS-16861 <https://issues.apache.org/jira/browse/HDFS-16861>
     4. HDFS-16866 <https://issues.apache.org/jira/browse/HDFS-16866>
     5. HADOOP-18320 <https://issues.apache.org/jira/browse/HADOOP-18320>

Updated fixed version. Will generate a new Changelog in the next RC.

Was able to build HBase and hbase-filesystem without any code change.

hbase has one unit test failure. This one is reproducible even with
Hadoop 3.3.5, so maybe a red herring. Local env or something.

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
9.007 s <<< FAILURE! - in
org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker
[ERROR] org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness
  Time elapsed: 3.13 s  <<< ERROR!
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker$RandomTestData.<init>(TestSyncTimeRangeTracker.java:91)
        at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness(TestSyncTimeRangeTracker.java:156)

hbase-filesystem has three test failures in TestHBOSSContractDistCp,
which are not reproducible with Hadoop 3.3.5.

[ERROR] Failures:
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testDistCpUpdateCheckFileSkip:976->Assert.fail:88
10 errors in file of length 10
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureNoChange:270->AbstractContractDistCpTest.assertCounterInRange:290->Assert.assertTrue:41->Assert.fail:88
Files Skipped value 0 too below minimum 1
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:41->Assert.fail:88
Files Copied value 2 above maximum 1
[INFO]
[ERROR] Tests run: 240, Failures: 3, Errors: 0, Skipped: 58


Ozone
test in progress. Will report back.


On Tue, Jun 13, 2023 at 11:27 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

I am inviting anyone to try and vote on this release candidate.

Note:
This is built off branch-3.3.6 plus PR#5741 (aws sdk update) and PR#5740
(LICENSE file update)

The RC is available at:
https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/ (for amd64)
https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-arm64/ (for arm64)

Git tag: release-3.3.6-RC0
https://github.com/apache/hadoop/releases/tag/release-3.3.6-RC0

Maven artifacts are built by an x86 machine and are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1378/

My public key:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
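Verifying an RC usually means checking GPG signatures against the KEYS file and validating checksums; a hedged sketch (artifact names are illustrative, and the checksum step is demonstrated on a locally created sample file so the snippet is self-contained and runnable):

```shell
# Signature check against the published KEYS (artifact names illustrative):
#   gpg --import KEYS
#   gpg --verify hadoop-3.3.6.tar.gz.asc hadoop-3.3.6.tar.gz
# Checksum verification, demonstrated end-to-end on a local sample file:
printf 'sample artifact' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
sha512sum -c artifact.tar.gz.sha512   # prints "artifact.tar.gz: OK"
```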

Changelog:

https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/CHANGELOG.md

Release notes:


https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/RELEASENOTES.md

This is a relatively small release (by Hadoop standards) containing about
120 commits.
Please give it a try; this RC vote will run for 7 days.


Feature highlights:

SBOM artifacts
----------------------------------------
Starting from this release, Hadoop publishes a Software Bill of Materials
(SBOM) using the CycloneDX Maven plugin. For more information about SBOM,
please go to [SBOM](https://cwiki.apache.org/confluence/display/COMDEV/SBOM).

HDFS RBF: RDBMS based token storage support
----------------------------------------
HDFS Router-Based Federation now supports storing delegation tokens in MySQL
([HADOOP-18535](https://issues.apache.org/jira/browse/HADOOP-18535)),
which improves token operation throughput over the original ZooKeeper-based
implementation.


New File System APIs
----------------------------------------
[HADOOP-18671](https://issues.apache.org/jira/browse/HADOOP-18671) moved a
number of HDFS-specific APIs to Hadoop Common to make it possible for
certain applications that depend on HDFS semantics to run on other
Hadoop-compatible file systems.

In particular, recoverLease() and isFileClosed() are exposed through the
LeaseRecoverable interface, while setSafeMode() is exposed through the
SafeMode interface.







---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



