Re: Incompatible changes between branch-2.8 and branch-2.9
> * For YARN-7813, not sure why moving from 2.8.4/5 -> 2.8.6 would be
> incompatible with this strategy? It should be OK to remove/add optional
> fields (removing the field with id 12, and adding the field with id 13)

Ah, I misunderstood. I was thinking that the field was overwritten in branch-2.8 as well. Yeah, I think that approach will be fine.

On Wed, Sep 25, 2019 at 2:31 AM Robert Kanter wrote:

> > * For YARN-6050, there's a bit here:
> > https://developers.google.com/protocol-buffers/docs/proto that says
> > "optional is compatible with repeated", so I think we should be OK there.
> > - Optional is compatible with repeated over the wire such that protobuf
> > won't blow up, but does that actually mean that it's compatible in this
> > case? If it's expecting an optional and gets a repeated, it's going to
> > drop everything except for the last value. I don't know enough about
> > YARN-6050 to say if this will be ok or not.
>
> It's been a while since I looked into this, but I think it should be okay.
> If an older client (using optional) sends the message to a newer server
> (using repeated), then there will never be more than one value for the
> field. The server puts these into a list, so the list would simply have a
> single value in it. The server's logic should be able to handle a
> single-valued list here because (a) IIRC we wanted to make sure
> compatibility wasn't a problem (Cloudera supported rolling upgrades
> between CDH 5.x so this was important) and (b) sending a single resource
> request, even in a newer client, is still a valid thing to do.
>
> If a newer client (using repeated) sends the message to an older server
> (using optional), I'm not sure what will happen. My guess is that it will
> drop the extra values (though I wonder if it will keep the first or last
> value...). In any case, I believe most clients will only send the one
> value - in order for a client to send multiple values, you'd have to
> specify some additional MR configs (see MAPREDUCE-6871). IIRC, there's
> also a SPARK JIRA similar to MAPREDUCE-6871, but I can't find it right now.
>
> - Robert
>
> On Tue, Sep 24, 2019 at 9:49 PM Jonathan Hung wrote:
>
> > - I've created YARN-9855 and uploaded patches to fix YARN-6616 in
> > branch-2.8 and branch-2.7.
> > - For YARN-6050, not sure either. Robert/Wangda, can you comment on
> > YARN-6050 compatibility?
> > - For YARN-7813, not sure why moving from 2.8.4/5 -> 2.8.6 would be
> > incompatible with this strategy? It should be OK to remove/add optional
> > fields (removing the field with id 12, and adding the field with id 13).
> > The difficulties I see here are, we would have to leave id 12 blank in
> > 2.8.6 (so we cannot have YARN-6164 in branch-2.8), and users on 2.8.4/5
> > would have to move to 2.8.6 before moving to 2.9+. But rolling upgrade
> > would still work IIUC.
> >
> > Jonathan Hung
> >
> > On Tue, Sep 24, 2019 at 2:52 PM Eric Badger wrote:
> >
> > > * For YARN-6616, for branch-2.8 and below, it was only committed to
> > > 2.7.8/2.8.6 which have not been released (as I understand). Perhaps
> > > we can revert YARN-6616 from branch-2.7 and branch-2.8.
> > > - This seems reasonable. Since we haven't released anything, it should
> > > be no issue to change the 2.7/2.8 protobuf field to have the same
> > > value as 2.9+
> > >
> > > * For YARN-6050, there's a bit here:
> > > https://developers.google.com/protocol-buffers/docs/proto that says
> > > "optional is compatible with repeated", so I think we should be OK
> > > there.
> > > - Optional is compatible with repeated over the wire such that
> > > protobuf won't blow up, but does that actually mean that it's
> > > compatible in this case? If it's expecting an optional and gets a
> > > repeated, it's going to drop everything except for the last value.
> > > I don't know enough about YARN-6050 to say if this will be ok or not.
> > >
> > > * For YARN-7813, it's in 2.8.4 so it seems upgrading from 2.8.4 or
> > > 2.8.5 to a 2.9+ version will be an issue. One option could be to move
> > > the intraQueuePreemptionDisabled field from id 12 to id 13 in
> > > branch-2.8, then users would upgrade from 2.8.4/2.8.5 to 2.8.6
> > > (someone would have to release this), then upgrade from 2.8.6 to 2.9+.
> > > - I'm ok with this, but it should be noted that the upgrade from
> > > 2.8.4/2.8.5 to 2.8.6 (or 2.9+) would not be compatible for a rolling
> > > upgrade. So this would cause some pain to anybody with clusters on
> > > those versions.
> > >
> > > Eric
> > >
> > > On Tue, Sep 24, 2019 at 2:42 PM Jonathan Hung wrote:
> > >
> > > > Sorry, let me edit my first point. We can just create addendums for
> > > > YARN-6616 in branch-2.7 and branch-2.8 to edit the submitTime field
> > > > to the correct id 28. We don't need to revert YARN-6616 from these
> > > > branches completely.
> > > >
> > > > Jonathan
> > > >
> > > > From: Jonathan Hung
> > > > Sent: Tuesday, September
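For readers less familiar with the protobuf compatibility rules being debated here, the two situations look roughly like this. This is a sketch with made-up message and field names, not the actual YARN .proto definitions:

```proto
// Illustrative only - not the real YARN protos.

// YARN-6050-style change: widening an optional field to repeated.
// Per the protobuf docs this is wire-compatible: an old client's single
// value is read by a new server as a one-element list, while extra values
// sent by a new client to an old server are collapsed (the last one wins).
message ResourceRequestV1 {
  optional string resource = 1;
}
message ResourceRequestV2 {
  repeated string resource = 1;  // same id, optional -> repeated
}

// YARN-7813-style change: moving a field from id 12 to id 13.
// This is only safe if id 12 is then left permanently unused; a
// 'reserved' statement makes that explicit so the id cannot be
// silently reassigned later (the concern about YARN-6164 above).
message QueueInfoExample {
  reserved 12;  // formerly intraQueuePreemptionDisabled
  optional bool intra_queue_preemption_disabled = 13;
}
```

The reserved-id trick is also why users on releases that already shipped the old id would need to pass through an intermediate release, as discussed above.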
[jira] [Created] (HADOOP-16611) Make test4tests vote for -0 instead of -1
Duo Zhang created HADOOP-16611:
--------------------------------

Summary: Make test4tests vote for -0 instead of -1
Key: HADOOP-16611
URL: https://issues.apache.org/jira/browse/HADOOP-16611
Project: Hadoop Common
Issue Type: Sub-task
Components: build
Reporter: Duo Zhang

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16610) Upgrade to yetus 0.11.0 and use emoji vote on github pre commit
Duo Zhang created HADOOP-16610:
--------------------------------

Summary: Upgrade to yetus 0.11.0 and use emoji vote on github pre commit
Key: HADOOP-16610
URL: https://issues.apache.org/jira/browse/HADOOP-16610
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Duo Zhang
[jira] [Created] (HADOOP-16609) Add Jenkinsfile for all active branches
Duo Zhang created HADOOP-16609:
--------------------------------

Summary: Add Jenkinsfile for all active branches
Key: HADOOP-16609
URL: https://issues.apache.org/jira/browse/HADOOP-16609
Project: Hadoop Common
Issue Type: Sub-task
Components: build
Reporter: Duo Zhang
[jira] [Created] (HADOOP-16608) Improve the github pre commit job
Duo Zhang created HADOOP-16608:
--------------------------------

Summary: Improve the github pre commit job
Key: HADOOP-16608
URL: https://issues.apache.org/jira/browse/HADOOP-16608
Project: Hadoop Common
Issue Type: Improvement
Components: build
Reporter: Duo Zhang

Now it only works for trunk; we should make it work for all active branches.

Also, on GitHub there is no color in the vote table, so it is really hard to pick out the +1s and -1s. With Yetus 0.11.0 we can use emoji to vote, which will be better.

And the no-new-tests check should vote -0 instead of -1, since some changes, such as modifying a pom, legitimately come without new tests.
[jira] [Created] (HADOOP-16607) s3a attempts to look up password/encryption fail if JCEKS file unreadable
Steve Loughran created HADOOP-16607:
------------------------------------

Summary: s3a attempts to look up password/encryption fail if JCEKS file unreadable
Key: HADOOP-16607
URL: https://issues.apache.org/jira/browse/HADOOP-16607
Project: Hadoop Common
Issue Type: Bug
Reporter: Steve Loughran

Hive deployments can use a JCEKS file to store secrets, which Hive sets up to be readable only by the Hive user and lists under hadoop.credential.providers. When an S3A FS instance is then created as another user, via a doAs{} clause, the S3A getPassword() call fails on the resulting AccessDeniedException - even if the secret being looked up is present in the XML file, or (as with encryption settings or a session key) is legitimately undefined.

You could point the blame at Hive here - it's the one with a forbidden JCEKS file on the provider path - but I think this is easier to fix in S3AUtils than in Hive, and safer than changing Configuration. ABFS is likely to see the same problem.

I propose an option to set the fallback policy. I initially thought about always handling this: catching the exception, attempting to downgrade to reading the XML, and rethrowing the caught exception if that fails. However, that does the wrong thing when the option is completely undefined, as is common with the encryption settings. I don't want to simply default to log-and-continue either, as the failure may be legitimate - such as when you really do want to read secrets from such a source.

Issue: what fallback policies?

* fail: fail fast. Today's policy; the default.
* ignore: log and continue.

We could try to be clever in future. To get away with that, we would have to declare which options are considered compulsory, and rethrow the caught exception if no value was found in the XML file. That can be a future enhancement - but it is why I want the policy to be an enumeration rather than a simple boolean.

Tests: should be straightforward; set hadoop.credential.providers to a non-existent file and expect lookups to be processed according to the settings.
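The proposed enumeration could be sketched roughly as follows. This is illustrative only - the real S3AUtils.getPassword signature, the option names, and the FallbackPolicy type are all for this JIRA to define:

```java
// Hypothetical sketch of the proposed fallback behaviour; the real
// S3AUtils.getPassword has a different signature and uses Configuration.
import java.io.IOException;
import java.util.Map;

public class CredentialFallbackDemo {

    // Policy for what to do when the credential provider itself cannot
    // be read (e.g. a JCEKS file forbidden to the current user).
    enum FallbackPolicy { FAIL, IGNORE }

    /**
     * Look up a secret: consult the (possibly unreadable) credential
     * provider first, and on IGNORE fall back to the plain XML config.
     */
    static String getPassword(Map<String, String> conf,
                              String key,
                              FallbackPolicy policy) throws IOException {
        try {
            return readFromProvider(key);
        } catch (IOException e) {          // e.g. AccessDeniedException
            if (policy == FallbackPolicy.FAIL) {
                throw e;                   // today's behaviour: fail fast
            }
            // IGNORE: log and fall back to the XML configuration; this may
            // legitimately return null for options that are simply unset,
            // such as the encryption settings.
            return conf.get(key);
        }
    }

    // Stand-in for a provider whose JCEKS file is unreadable.
    static String readFromProvider(String key) throws IOException {
        throw new IOException("Permission denied: JCEKS file unreadable");
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> conf = Map.of("fs.s3a.secret.key", "xml-secret");
        // prints "xml-secret": the provider failure is ignored
        System.out.println(getPassword(conf, "fs.s3a.secret.key",
                                       FallbackPolicy.IGNORE));
    }
}
```

With FAIL as the default, existing behaviour is unchanged; deployments hitting the Hive/doAs problem could opt into IGNORE.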
[jira] [Resolved] (HADOOP-16529) Allow AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION to be set from abfs.xml property
[ https://issues.apache.org/jira/browse/HADOOP-16529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Bota resolved HADOOP-16529.
---------------------------------
Resolution: Workaround

> Allow AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION to be set from
> abfs.xml property
> ---
>
> Key: HADOOP-16529
> URL: https://issues.apache.org/jira/browse/HADOOP-16529
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.0
> Reporter: Gabor Bota
> Assignee: Gabor Bota
> Priority: Major
>
> In org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest#AbstractAbfsIntegrationTest
> we do a
> {code:java}
> abfsConfig.setBoolean(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, true);
> {code}
> which is not good for some test cases (e.g. HADOOP-16138) where we want to
> test against a container that does not exist. A property should be added so
> this can be overridden.
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/

[Sep 24, 2019 8:51:11 PM] (jhung) YARN-9730. Support forcing configured partitions to be exclusive based

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

Failed junit tests:
    hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
    hadoop.fs.sftp.TestSFTPFileSystem
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
    hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
    hadoop.registry.secure.TestSecureLogins
    hadoop.yarn.server.resourcemanager.TestAppManager
    hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]
checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-checkstyle-root.txt [16M]
hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/pathlen.txt [12K]
pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-patch-pylint.txt [24K]
shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-patch-shellcheck.txt [72K]
shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-patch-shelldocs.txt [8.0K]
whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/whitespace-eol.txt [12M]
whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/whitespace-tabs.txt [1.3M]
xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/xml.txt [12K]
findbugs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]
unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/patch-unit-hadoop-common-project_hadoop-auth.txt [16K]
unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/455/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [324K]
[jira] [Created] (HADOOP-16606) checksum link from hadoop web site is broken.
Rohith Sharma K S created HADOOP-16606:
---------------------------------------

Summary: checksum link from hadoop web site is broken.
Key: HADOOP-16606
URL: https://issues.apache.org/jira/browse/HADOOP-16606
Project: Hadoop Common
Issue Type: Bug
Reporter: Rohith Sharma K S

Post HADOOP-16494, the artifacts generated for a release no longer include the *mds* file, but the Hadoop web site's binary tarball entries still link to an mds file that doesn't exist. This breaks the Hadoop website. For the 3.2.1 release I manually generated the checksum file and pushed it into the artifacts folder so that the website link is not broken. The same issue will happen for the 3.1.3 release as well.

I am referring to the checksum hyperlinks on the https://hadoop.apache.org/releases.html page.

cc: [~vinodkv] [~tangzhankun] [~aajisaka]
[jira] [Created] (HADOOP-16605) NPE in TestAdlSdkConfiguration failing in yetus
Steve Loughran created HADOOP-16605:
------------------------------------

Summary: NPE in TestAdlSdkConfiguration failing in yetus
Key: HADOOP-16605
URL: https://issues.apache.org/jira/browse/HADOOP-16605
Project: Hadoop Common
Issue Type: Bug
Components: fs/adl
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Sneha Vijayarajan

Yetus builds are failing with an NPE in TestAdlSdkConfiguration whenever they go near hadoop-azure-datalake. Assuming HADOOP-16438 is the cause until proven otherwise, though HADOOP-16371 may have done something too (how?); either way, the breakage wasn't picked up at commit time because Yetus didn't know that hadoop-azure-datalake was affected.
Re: [ANNOUNCE] Apache Hadoop 3.2.1 release
Done: https://twitter.com/hadoop/status/1176787511865008128. If you have tweetdeck, any of the PMC members can do this.

BTW, it looks like we haven't published any releases since Nov 2018. Let's get back to doing this going forward!

Thanks
+Vinod

> On Sep 25, 2019, at 2:44 PM, Rohith Sharma K S wrote:
>
> Updated twitter message:
>
> ``
> Apache Hadoop 3.2.1 is released: https://s.apache.org/96r4h
>
> Announcement: https://s.apache.org/jhnpe
> Overview: https://s.apache.org/tht6a
> Changes: https://s.apache.org/pd6of
> Release notes: https://s.apache.org/ta50b
>
> Thanks to our community of developers, operators, and users.
>
> -Rohith Sharma K S
>
> On Wed, 25 Sep 2019 at 14:15, Sunil Govindan wrote:
>
>> Here the link of the Overview URL is old.
>> We should ideally use https://hadoop.apache.org/release/3.2.1.html
>>
>> Thanks
>> Sunil
>>
>> On Wed, Sep 25, 2019 at 2:10 PM Rohith Sharma K S <rohithsharm...@apache.org> wrote:
>>
>>> Can someone help to post this in the twitter account?
>>>
>>> Apache Hadoop 3.2.1 is released: https://s.apache.org/mzdb6
>>> Overview: https://s.apache.org/tht6a
>>> Changes: https://s.apache.org/pd6of
>>> Release notes: https://s.apache.org/ta50b
>>>
>>> Thanks to our community of developers, operators, and users.
>>>
>>> -Rohith Sharma K S
>>>
>>> On Wed, 25 Sep 2019 at 13:44, Rohith Sharma K S <rohithsharm...@apache.org> wrote:
>>>
>>>> Hi all,
>>>>
>>>> It gives us great pleasure to announce that the Apache Hadoop community
>>>> has voted to release Apache Hadoop 3.2.1.
>>>>
>>>> Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line,
>>>> which includes 493 fixes since the Hadoop 3.2.0 release:
>>>>
>>>> - For major changes included in the Hadoop 3.2 line, please refer to
>>>>   the Hadoop 3.2.1 main page[1].
>>>> - For more details about fixes in the 3.2.1 release, please read the
>>>>   CHANGELOG[2] and RELEASENOTES[3].
>>>>
>>>> The release news is posted on the Hadoop website too; you can go to the
>>>> downloads section directly[4].
>>>>
>>>> Thank you all for contributing to Apache Hadoop!
>>>>
>>>> Cheers,
>>>> Rohith Sharma K S
>>>>
>>>> [1] https://hadoop.apache.org/docs/r3.2.1/index.html
>>>> [2] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
>>>> [3] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
>>>> [4] https://hadoop.apache.org
Re: [ANNOUNCE] Apache Hadoop 3.2.1 release
Updated twitter message:

``
Apache Hadoop 3.2.1 is released: https://s.apache.org/96r4h

Announcement: https://s.apache.org/jhnpe
Overview: https://s.apache.org/tht6a
Changes: https://s.apache.org/pd6of
Release notes: https://s.apache.org/ta50b

Thanks to our community of developers, operators, and users.

-Rohith Sharma K S

On Wed, 25 Sep 2019 at 14:15, Sunil Govindan wrote:

> Here the link of the Overview URL is old.
> We should ideally use https://hadoop.apache.org/release/3.2.1.html
>
> Thanks
> Sunil
>
> On Wed, Sep 25, 2019 at 2:10 PM Rohith Sharma K S <rohithsharm...@apache.org> wrote:
>
>> Can someone help to post this in the twitter account?
>>
>> Apache Hadoop 3.2.1 is released: https://s.apache.org/mzdb6
>> Overview: https://s.apache.org/tht6a
>> Changes: https://s.apache.org/pd6of
>> Release notes: https://s.apache.org/ta50b
>>
>> Thanks to our community of developers, operators, and users.
>>
>> -Rohith Sharma K S
>>
>> On Wed, 25 Sep 2019 at 13:44, Rohith Sharma K S <rohithsharm...@apache.org> wrote:
>>
>>> Hi all,
>>>
>>> It gives us great pleasure to announce that the Apache Hadoop community
>>> has voted to release Apache Hadoop 3.2.1.
>>>
>>> Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line,
>>> which includes 493 fixes since the Hadoop 3.2.0 release:
>>>
>>> - For major changes included in the Hadoop 3.2 line, please refer to
>>>   the Hadoop 3.2.1 main page[1].
>>> - For more details about fixes in the 3.2.1 release, please read the
>>>   CHANGELOG[2] and RELEASENOTES[3].
>>>
>>> The release news is posted on the Hadoop website too; you can go to the
>>> downloads section directly[4].
>>>
>>> Thank you all for contributing to Apache Hadoop!
>>>
>>> Cheers,
>>> Rohith Sharma K S
>>>
>>> [1] https://hadoop.apache.org/docs/r3.2.1/index.html
>>> [2] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
>>> [3] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
>>> [4] https://hadoop.apache.org
Re: [ANNOUNCE] Apache Hadoop 3.2.1 release
Updated announcement

Hi all,

It gives us great pleasure to announce that the Apache Hadoop community has voted to release Apache Hadoop 3.2.1.

Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line, which includes 493 fixes since the Hadoop 3.2.0 release:

- For major changes included in the Hadoop 3.2 line, please refer to the Hadoop 3.2.1 main page [1].
- For more details about fixes in the 3.2.1 release, please read the CHANGELOG [2] and RELEASENOTES [3].

The release news is posted on the Hadoop website too; you can go to the downloads section directly [4]. This announcement itself is also up on the website [0].

Thank you all for contributing to Apache Hadoop!

Cheers,
Rohith Sharma K S

[0] Announcement: https://hadoop.apache.org/release/3.2.1.html
[1] Overview of major changes: https://hadoop.apache.org/docs/r3.2.1/index.html
[2] Detailed change-log: https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
[3] Detailed release-notes: https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
[4] Project Home: https://hadoop.apache.org

On Wed, 25 Sep 2019 at 13:44, Rohith Sharma K S wrote:

> Hi all,
>
> It gives us great pleasure to announce that the Apache Hadoop community
> has voted to release Apache Hadoop 3.2.1.
>
> Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line,
> which includes 493 fixes since the Hadoop 3.2.0 release:
>
> - For major changes included in the Hadoop 3.2 line, please refer to the
>   Hadoop 3.2.1 main page[1].
> - For more details about fixes in the 3.2.1 release, please read the
>   CHANGELOG[2] and RELEASENOTES[3].
>
> The release news is posted on the Hadoop website too; you can go to the
> downloads section directly[4].
>
> Thank you all for contributing to Apache Hadoop!
>
> Cheers,
> Rohith Sharma K S
>
> [1] https://hadoop.apache.org/docs/r3.2.1/index.html
> [2] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
> [3] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
> [4] https://hadoop.apache.org
[jira] [Created] (HADOOP-16604) Provide copy functionality for cloud native applications
Rajesh Balamohan created HADOOP-16604:
--------------------------------------

Summary: Provide copy functionality for cloud native applications
Key: HADOOP-16604
URL: https://issues.apache.org/jira/browse/HADOOP-16604
Project: Hadoop Common
Issue Type: Improvement
Components: fs
Reporter: Rajesh Balamohan

Many cloud native systems provide optimized copy functionality out of the box, performed entirely within the store: they avoid bringing the data over to the client and writing it back to the destination. It would be good to have a cloud native interface (e.g. {{copy(URI srcFile, URI destFile)}}) which the cloud connectors can implement.

This would help applications which make use of these connectors and enhance copy performance within the cloud.
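As a strawman, such a connector-side interface might look like this. All names here are hypothetical, not an agreed Hadoop API:

```java
// Hypothetical sketch of a server-side copy capability for cloud
// connectors; the interface and method names are illustrative only.
import java.io.IOException;
import java.net.URI;

public interface ObjectStoreCopy {

    /**
     * Copy srcFile to destFile inside the store, without routing the
     * bytes through the client (e.g. an S3 CopyObject-style operation).
     */
    void copy(URI srcFile, URI destFile) throws IOException;

    /** Whether this connector can copy between the two URIs natively. */
    default boolean canCopy(URI srcFile, URI destFile) {
        // A simple default: same scheme and same bucket/container host;
        // real connectors would apply store-specific rules.
        return srcFile.getScheme() != null
            && srcFile.getScheme().equals(destFile.getScheme())
            && java.util.Objects.equals(srcFile.getHost(), destFile.getHost());
    }
}
```

A caller could probe canCopy() first and fall back to the classic read-then-write path when the connector cannot perform the copy natively.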
Re: [ANNOUNCE] Apache Hadoop 3.2.1 release
Can someone help to post this in the twitter account?

Apache Hadoop 3.2.1 is released: https://s.apache.org/mzdb6
Overview: https://s.apache.org/tht6a
Changes: https://s.apache.org/pd6of
Release notes: https://s.apache.org/ta50b

Thanks to our community of developers, operators, and users.

-Rohith Sharma K S

On Wed, 25 Sep 2019 at 13:44, Rohith Sharma K S wrote:

> Hi all,
>
> It gives us great pleasure to announce that the Apache Hadoop community
> has voted to release Apache Hadoop 3.2.1.
>
> Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line,
> which includes 493 fixes since the Hadoop 3.2.0 release:
>
> - For major changes included in the Hadoop 3.2 line, please refer to the
>   Hadoop 3.2.1 main page[1].
> - For more details about fixes in the 3.2.1 release, please read the
>   CHANGELOG[2] and RELEASENOTES[3].
>
> The release news is posted on the Hadoop website too; you can go to the
> downloads section directly[4].
>
> Thank you all for contributing to Apache Hadoop!
>
> Cheers,
> Rohith Sharma K S
>
> [1] https://hadoop.apache.org/docs/r3.2.1/index.html
> [2] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
> [3] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
> [4] https://hadoop.apache.org
[ANNOUNCE] Apache Hadoop 3.2.1 release
Hi all,

It gives us great pleasure to announce that the Apache Hadoop community has voted to release Apache Hadoop 3.2.1.

Apache Hadoop 3.2.1 is the stable release of the Apache Hadoop 3.2 line, which includes 493 fixes since the Hadoop 3.2.0 release:

- For major changes included in the Hadoop 3.2 line, please refer to the Hadoop 3.2.1 main page[1].
- For more details about fixes in the 3.2.1 release, please read the CHANGELOG[2] and RELEASENOTES[3].

The release news is posted on the Hadoop website too; you can go to the downloads section directly[4].

Thank you all for contributing to Apache Hadoop!

Cheers,
Rohith Sharma K S

[1] https://hadoop.apache.org/docs/r3.2.1/index.html
[2] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/CHANGELOG.3.2.1.html
[3] https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-common/release/3.2.1/RELEASENOTES.3.2.1.html
[4] https://hadoop.apache.org
[jira] [Created] (HADOOP-16603) Lack of aarch64 platform support of dependent PhantomJS
liusheng created HADOOP-16603:
------------------------------

Summary: Lack of aarch64 platform support of dependent PhantomJS
Key: HADOOP-16603
URL: https://issues.apache.org/jira/browse/HADOOP-16603
Project: Hadoop Common
Issue Type: Bug
Reporter: liusheng

Hadoop depends on the "PhantomJS-2.1.1" library[1], imported via "phantomjs-maven-plugin:0.7", but there is no aarch64 artifact of phantomjs in the "com.github.klieber" group used by Hadoop[2].

[1] https://github.com/apache/hadoop/blob/trunk/hadoop-project/pom.xml#L1703-L1707
[2] https://search.maven.org/artifact/com.github.klieber/phantomjs/2.1.1/N%2FA
Re: Incompatible changes between branch-2.8 and branch-2.9
> * For YARN-6050, there's a bit here:
> https://developers.google.com/protocol-buffers/docs/proto that says
> "optional is compatible with repeated", so I think we should be OK there.
> - Optional is compatible with repeated over the wire such that protobuf
> won't blow up, but does that actually mean that it's compatible in this
> case? If it's expecting an optional and gets a repeated, it's going to
> drop everything except for the last value. I don't know enough about
> YARN-6050 to say if this will be ok or not.

It's been a while since I looked into this, but I think it should be okay. If an older client (using optional) sends the message to a newer server (using repeated), then there will never be more than one value for the field. The server puts these into a list, so the list would simply have a single value in it. The server's logic should be able to handle a single-valued list here because (a) IIRC we wanted to make sure compatibility wasn't a problem (Cloudera supported rolling upgrades between CDH 5.x so this was important) and (b) sending a single resource request, even in a newer client, is still a valid thing to do.

If a newer client (using repeated) sends the message to an older server (using optional), I'm not sure what will happen. My guess is that it will drop the extra values (though I wonder if it will keep the first or last value...). In any case, I believe most clients will only send the one value - in order for a client to send multiple values, you'd have to specify some additional MR configs (see MAPREDUCE-6871). IIRC, there's also a SPARK JIRA similar to MAPREDUCE-6871, but I can't find it right now.

- Robert

On Tue, Sep 24, 2019 at 9:49 PM Jonathan Hung wrote:

> - I've created YARN-9855 and uploaded patches to fix YARN-6616 in
> branch-2.8 and branch-2.7.
> - For YARN-6050, not sure either. Robert/Wangda, can you comment on
> YARN-6050 compatibility?
> - For YARN-7813, not sure why moving from 2.8.4/5 -> 2.8.6 would be
> incompatible with this strategy? It should be OK to remove/add optional
> fields (removing the field with id 12, and adding the field with id 13).
> The difficulties I see here are, we would have to leave id 12 blank in
> 2.8.6 (so we cannot have YARN-6164 in branch-2.8), and users on 2.8.4/5
> would have to move to 2.8.6 before moving to 2.9+. But rolling upgrade
> would still work IIUC.
>
> Jonathan Hung
>
> On Tue, Sep 24, 2019 at 2:52 PM Eric Badger wrote:
>
>> * For YARN-6616, for branch-2.8 and below, it was only committed to
>> 2.7.8/2.8.6 which have not been released (as I understand). Perhaps we
>> can revert YARN-6616 from branch-2.7 and branch-2.8.
>> - This seems reasonable. Since we haven't released anything, it should
>> be no issue to change the 2.7/2.8 protobuf field to have the same value
>> as 2.9+
>>
>> * For YARN-6050, there's a bit here:
>> https://developers.google.com/protocol-buffers/docs/proto that says
>> "optional is compatible with repeated", so I think we should be OK there.
>> - Optional is compatible with repeated over the wire such that protobuf
>> won't blow up, but does that actually mean that it's compatible in this
>> case? If it's expecting an optional and gets a repeated, it's going to
>> drop everything except for the last value. I don't know enough about
>> YARN-6050 to say if this will be ok or not.
>>
>> * For YARN-7813, it's in 2.8.4 so it seems upgrading from 2.8.4 or
>> 2.8.5 to a 2.9+ version will be an issue. One option could be to move
>> the intraQueuePreemptionDisabled field from id 12 to id 13 in
>> branch-2.8, then users would upgrade from 2.8.4/2.8.5 to 2.8.6 (someone
>> would have to release this), then upgrade from 2.8.6 to 2.9+.
>> - I'm ok with this, but it should be noted that the upgrade from
>> 2.8.4/2.8.5 to 2.8.6 (or 2.9+) would not be compatible for a rolling
>> upgrade. So this would cause some pain to anybody with clusters on those
>> versions.
>>
>> Eric
>>
>> On Tue, Sep 24, 2019 at 2:42 PM Jonathan Hung wrote:
>>
>>> Sorry, let me edit my first point. We can just create addendums for
>>> YARN-6616 in branch-2.7 and branch-2.8 to edit the submitTime field to
>>> the correct id 28. We don't need to revert YARN-6616 from these
>>> branches completely.
>>>
>>> Jonathan
>>>
>>> From: Jonathan Hung
>>> Sent: Tuesday, September 24, 2019 11:38 AM
>>> To: Eric Badger
>>> Cc: Hadoop Common; yarn-dev; mapreduce-dev; Hdfs-dev
>>> Subject: Re: Incompatible changes between branch-2.8 and branch-2.9
>>>
>>> Hi Eric, thanks for the investigation.
>>>
>>> * For YARN-6616, for branch-2.8 and below, it was only committed to
>>> 2.7.8/2.8.6 which have not been released (as I understand). Perhaps we
>>> can revert YARN-6616 from branch-2.7 and branch-2.8.
>>> * For YARN-6050, there's a bit here:
>>> https://developers.google.com/protocol-buffers/docs/proto that says
>>> "optional is compatible with repeated", so I think we
[jira] [Created] (HADOOP-16602) mvn package fails in hadoop-aws
Xieming Li created HADOOP-16602:
--------------------------------

Summary: mvn package fails in hadoop-aws
Key: HADOOP-16602
URL: https://issues.apache.org/jira/browse/HADOOP-16602
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Reporter: Xieming Li
Assignee: Xieming Li

mvn package seems to fail in hadoop-aws:

{code:java}
[INFO] BUILD FAILURE
[INFO] Total time: 08:04 min
[INFO] Finished at: 2019-09-25T15:12:44+09:00
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on project hadoop-aws: MavenReportException: Error while generating Javadoc:
[ERROR] Exit code: 1 - /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:356: error: bad use of '>'
[ERROR]    * CustomSigner -> 'CustomSigner:org.apache...CustomSignerClass Multiple
[ERROR]                   ^
[ERROR] /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:357: error: bad use of '>'
[ERROR]    * CustomSigners -> 'CSigner1:CustomSignerClass1,CSigner2:CustomerSignerClass2
[ERROR]                    ^
[ERROR] /Users/sri/projects/hadoop-mirror/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3812: warning: no @param for recursive
[ERROR] public RemoteIterator listFilesAndEmptyDirectories(
{code}
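The "bad use of '>'" errors come from a bare "->" in javadoc text, which javadoc's HTML parser rejects; the usual fix is to wrap the offending text in {@code} (or use {@literal}). A hypothetical sketch of the fix, not the actual Constants.java content:

```java
// Illustrative only; not the real org.apache.hadoop.fs.s3a.Constants.
public final class SignerDocsExample {

  /**
   * Custom signers are configured as a mapping, e.g.
   * {@code CustomSigner -> 'CustomSigner:org.apache...CustomSignerClass'}.
   * Wrapping the text in {@code ...} (or writing {@literal ->}) keeps
   * javadoc's HTML parser from rejecting the bare '>' character.
   */
  public static final String CUSTOM_SIGNERS = "fs.s3a.custom.signers";

  private SignerDocsExample() { }
}
```

An alternative is to escape the character as &gt; in the comment, but {@code} also renders the mapping in a monospace font, which reads better for configuration examples.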