Re: [VOTE] Release Apache Hadoop 3.0.1 (RC0)
Thanks Eddy for driving this! +1 (binding)

- Downloaded src tarball and verified md5
- Built from src
- Started a pseudo-distributed cluster
- Verified basic hdfs operations work
- Verified hdfs encryption basic operations work
- Sanity checked logs

The wiki release page seems to have fewer items than the jira query (p1/p2, fixed, 3.0.1), though...

-Xiao

On Thu, Feb 15, 2018 at 3:36 PM, Lei Xu wrote:
> Hi, all
>
> I've created release candidate 0 for Apache Hadoop 3.0.1.
>
> Apache Hadoop 3.0.1 will be the first bug fix release for the Apache
> Hadoop 3.0 release. It includes 49 bug fixes, 10 of which are blockers
> and 8 critical.
>
> Please note:
> * HDFS-12990. Change default NameNode RPC port back to 8020. This is an
> incompatible change relative to Hadoop 3.0.0. After 3.0.1 is released,
> Apache Hadoop 3.0.0 will be deprecated due to this change.
>
> The release page is:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0+Release
>
> The new RC is available at: http://home.apache.org/~lei/hadoop-3.0.1-RC0/
>
> The git tag is release-3.0.1-RC0, and the latest commit is
> 494d075055b52b0cc922bc25237e231bb3771c90
>
> The maven artifacts are available at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1078/
>
> Please try the release and vote; the vote will run for the usual 5
> days, ending on 2/20/2018 6pm PST.
>
> Thanks!
>
> --
> Lei (Eddy) Xu
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15245) S3AInputStream.skip() to use lazy seek
Steve Loughran created HADOOP-15245:
---

Summary: S3AInputStream.skip() to use lazy seek
Key: HADOOP-15245
URL: https://issues.apache.org/jira/browse/HADOOP-15245
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran

The default skip() does a read and discard of all bytes, no matter how far ahead the skip is. This is very inefficient if the skip() is being done on S3A random IO, though it is less clear what to do in sequential mode.

Proposed:
* add an optimized version of S3AInputStream.skip() which does a lazy seek, which itself will decide when to skip() vs issue a new GET.
* add some more instrumentation to measure how often this gets used

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
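The proposal above can be sketched in simplified form. This is a hypothetical, self-contained illustration of the lazy-seek decision, not the actual S3AInputStream code: the class name, the readahead threshold, and the reopen counter are all assumptions made for the example.

```java
// Hypothetical, self-contained sketch of a lazy skip(): small forward skips
// drain the open HTTP stream, while large ones would abort it and issue a
// new ranged GET. Names and the threshold are illustrative only.
class LazySkipStream {
    private long pos;                            // current logical position
    private final long readahead = 64 * 1024;    // assumed cutoff for read-and-discard
    private int reopens;                         // instrumentation: simulated new GETs

    long getPos()         { return pos; }
    int  getReopenCount() { return reopens; }

    /** Skip n bytes, choosing between read-and-discard and a fresh GET. */
    long skip(long n) {
        if (n <= 0) {
            return 0;
        }
        if (n <= readahead) {
            pos += n;      // stands in for reading and discarding n bytes
        } else {
            reopens++;     // stands in for abort() + a new GET at pos + n
            pos += n;
        }
        return n;
    }
}
```

The `reopens` counter is the kind of instrumentation the second bullet asks for: it lets a test or metric report how often the lazy path chose a new GET over draining the stream.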
[jira] [Resolved] (HADOOP-14606) S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward seek
[ https://issues.apache.org/jira/browse/HADOOP-14606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-14606.
-
Resolution: Invalid
Fix Version/s: 3.1.0

Reviewing the code here, the problem doesn't exist. When we seek forwards we look at the return value, update our position, and then, if it doesn't match the expected number, close the stream and do a new GET.

> S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward
> seek
> ---
>
> Key: HADOOP-14606
> URL: https://issues.apache.org/jira/browse/HADOOP-14606
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.1
> Reporter: Steve Loughran
> Priority: Critical
> Fix For: 3.1.0
>
> There are some hints in the InputStream docs that {{skip(n)}} may skip fewer
> than n bytes. Codepaths only seem to do this if read() returns -1, meaning
> end of stream is reached.
> If that happens when doing a forward seek via skip, then we have got our
> numbers wrong and are in trouble. Look for a negative response, log @ ERROR
> and revert to a close/reopen seek to an absolute position.
> *I have no evidence of this actually occurring*
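The resolution above describes the existing fallback path: skip(), check the return value, and fall back to a close/reopen at the absolute position if the stream skipped fewer bytes than expected. A simplified, self-contained sketch of that logic follows; the class, interface, and method names are assumptions, and the wrapped HTTP stream is simulated rather than real.

```java
// Hypothetical sketch of the forward-seek path described in the resolution.
// Not the real S3AInputStream: "Inner" stands in for the wrapped HTTP stream.
class ForwardSeekSketch {
    interface Inner { long skip(long n); }   // simulated underlying stream

    private long pos;                        // current logical position
    private int reopens;                     // counts close/reopen (new GET) fallbacks
    private final Inner in;

    ForwardSeekSketch(Inner in) { this.in = in; }

    long getPos()         { return pos; }
    int  getReopenCount() { return reopens; }

    void seekForward(long targetPos) {
        long skipped = in.skip(targetPos - pos);
        pos += Math.max(0, skipped);         // a negative response is treated as 0
        if (pos != targetPos) {
            reopens++;                       // stands in for close() + GET at targetPos
            pos = targetPos;
        }
    }
}
```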
[jira] [Created] (HADOOP-15244) s3guard uploads command to add a way to complete outstanding uploads
Steve Loughran created HADOOP-15244:
---

Summary: s3guard uploads command to add a way to complete outstanding uploads
Key: HADOOP-15244
URL: https://issues.apache.org/jira/browse/HADOOP-15244
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran

The AWS API lets you not only list & cancel outstanding uploads (as {{s3guard uploads}} does), but actually list the parts. We may be able to actually complete an outstanding upload through the CLI.

What would that do? It'd let you restore all but the last block of any logs being written to s3 where the app/VM failed before the upload was completed.
[jira] [Created] (HADOOP-15243) test and document use of fs.s3a.signing-algorithm
Steve Loughran created HADOOP-15243:
---

Summary: test and document use of fs.s3a.signing-algorithm
Key: HADOOP-15243
URL: https://issues.apache.org/jira/browse/HADOOP-15243
Project: Hadoop Common
Issue Type: Sub-task
Components: documentation, fs/s3, test
Reporter: Steve Loughran

We suggest changing {{fs.s3a.signing-algorithm}} as a way to deal with some auth problems, but there's no standalone section on what values are available, nor any tests that try switching to different methods. This is hampered by the fact that none of us have owned up to understanding all the choices available, and the AWS docs don't cover it... all we have is the source, which sets up the options in >1 place.

Proposed:
* add some tests of the actual valid options
* document the values in the s3a docs
* and in core-default.xml
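For reference, switching the signer is a single property in core-site.xml. The value shown below, AWSS3V4SignerType, is one of the signer type names registered by the AWS SDK; treat the specific value as an example to be verified against the SDK in use, not a documented recommendation, which is exactly the gap this issue proposes to close.

```xml
<!-- Example only: ask the S3A client to use the SDK's V4 signer.
     Valid values come from the AWS SDK's signer registry and should be
     verified against the SDK version actually on the classpath. -->
<property>
  <name>fs.s3a.signing-algorithm</name>
  <value>AWSS3V4SignerType</value>
</property>
```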
Re: HADOOP-14163 proposal for new hadoop.apache.org
Hi,

I would like to bump this thread up.

TLDR: There is a proposed version of a new hadoop site, available here:
https://elek.github.io/hadoop-site-proposal/ and
https://issues.apache.org/jira/browse/HADOOP-14163

Please let me know what you think about it.

Longer version:

This thread started a long time ago, with the goal of a more modern hadoop site. The goals were:

1. To make it easier to manage (the release entries could be created by a script as part of the release process)
2. To use a better look-and-feel
3. To move it out from svn to git

I proposed to:

1. Move the existing site to git and generate it with hugo (which is a single, standalone binary)
2. Move both the rendered and source branches to git.
3. (Create a jenkins job to generate the site automatically)

NOTE: this is just about the forrest-based hadoop.apache.org, NOT about the documentation, which is generated by mvn-site (as before).

I got multiple pieces of valuable feedback and improved the proposed site according to the comments. Allen had some concerns about the technologies used (hugo vs. mvn-site), and I answered all the questions about why I think mvn-site is best for the documentation and hugo is best for generating the site.

I would like to finish this effort/jira: I would like to start a discussion about using this proposed version and approach as the new site of Apache Hadoop. Please let me know what you think.

Thanks a lot,
Marton
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/

[Feb 15, 2018 11:33:44 AM] (szetszwo) HDFS-13142. Define and Implement a DiifList Interface to store and
[Feb 15, 2018 2:26:00 PM] (stevel) HADOOP-15090. Add ADL troubleshooting doc. Contributed by Steve
[Feb 15, 2018 2:57:56 PM] (stevel) HADOOP-15076. Enhance S3A troubleshooting documents and add a
[Feb 15, 2018 3:57:10 PM] (stevel) HADOOP-15176. Enhance IAM Assumed Role support in S3A client.
[Feb 15, 2018 4:27:31 PM] (stevel) HADOOP-13972. ADLS to support per-store configuration. Contributed by
[Feb 15, 2018 5:11:55 PM] (kihwal) xattr api cleanup
[Feb 15, 2018 9:12:57 PM] (jlowe) MAPREDUCE-7052. TestFixedLengthInputFormat#testFormatCompressedIn is
[Feb 15, 2018 9:32:42 PM] (kihwal) HDFS-13112. Token expiration edits may cause log corruption or deadlock.
[Feb 15, 2018 10:23:38 PM] (kkaranasos) YARN-7920. Simplify configuration for PlacementConstraints. Contributed
[Feb 15, 2018 11:09:00 PM] (jlowe) YARN-7677. Docker image cannot set HADOOP_CONF_DIR. Contributed by Jim

-1 overall

The following subsystems voted -1:
    asflicense findbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
        org.apache.hadoop.yarn.api.records.Resource.getResources() may expose
        internal representation by returning Resource.resources
        At Resource.java:[line 234]

    Failed junit tests:
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
        hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.TestErasureCodingPolicies
        hadoop.hdfs.TestDecommission
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
        hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage
        hadoop.yarn.client.api.impl.TestAMRMClientPlacementConstraints
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestMRIntermediateDataEncryption
        hadoop.mapred.TestJobSysDirWithDFS

    cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-compile-javac-root.txt [280K]
    checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-checkstyle-root.txt [17M]
    pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-patch-shelldocs.txt [12K]
    whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/whitespace-eol.txt [9.2M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/whitespace-tabs.txt [288K]
    xml: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/xml.txt [4.0K]
    findbugs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html [8.0K]
    javadoc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/diff-javadoc-javadoc-root.txt [760K]
    unit: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [384K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [48K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/694/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [904K]