Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on macOS 10.13.6, Java 1.8.0u65
- Deployed a pseudo cluster
- Ran some example jobs

Eric

On Tue, Sep 11, 2018 at 1:39 PM, Gabor Bota wrote:
> Thanks for the work Junping!
>
> +1 (non-binding)
>
> - checked out git tag release-2.8.5-RC0
> - built from source on Mac OS X 10.13.6, java version 8.0.181-oracle
> - deployed on a 3 node cluster
> - verified pi job (yarn), teragen, terasort and teravalidate
>
> Regards,
> Gabor Bota
>
> On Tue, Sep 11, 2018 at 6:31 PM Eric Payne wrote:
> >
> > Thanks a lot Junping!
> >
> > +1 (binding)
> >
> > Tested the following:
> > - Built from source
> > - Installed on a 7 node, multi-tenant, insecure pseudo cluster, running
> >   YARN capacity scheduler
> > - Added a queue via refresh
> > - Verified various GUI pages
> > - Streaming jobs
> > - Cross-queue (Inter) preemption
> > - In-queue (Intra) preemption
> > - Teragen / terasort
> >
> > -Eric
> >
> > On Monday, September 10, 2018, 7:01:46 AM CDT, 俊平堵 <junping...@apache.org> wrote:
> >
> > Hi all,
> >
> > I've created the first release candidate (RC0) for Apache Hadoop 2.8.5.
> > This is our next point release to follow up 2.8.4. It includes 33
> > important fixes and improvements.
> >
> > The RC artifacts are available at:
> > http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
> >
> > The RC tag in git is: release-2.8.5-RC0
> >
> > The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1140
> >
> > Please try the release and vote; the vote will run for the usual 5
> > working days, ending on 9/15/2018 PST time.
> >
> > Thanks,
> >
> > Junping
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15139) [Umbrella] Improvements and fixes for Hadoop shaded client work
[ https://issues.apache.org/jira/browse/HADOOP-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HADOOP-15139.
-----------------------------------------
          Resolution: Fixed
    Target Version/s: 3.1.1, 3.2.0  (was: 3.2.0)

> [Umbrella] Improvements and fixes for Hadoop shaded client work
> ---------------------------------------------------------------
>
>                 Key: HADOOP-15139
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15139
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Junping Du
>            Assignee: Bharat Viswanadham
>            Priority: Critical
>
> In HADOOP-11656 we made great progress on splitting third-party
> dependencies out of the shaded hadoop client jar (hadoop-client-api),
> putting runtime dependencies in hadoop-client-runtime, and providing a
> shaded hadoop-client-minicluster for testing. However, some work remains
> before this feature is fully complete:
> - We don't have comprehensive documentation to guide downstream
>   projects/users in using the shaded JARs instead of the previous JARs.
> - We should consider wrapping up the hadoop tools (distcp, aws, azure)
>   into shaded versions.
> - More issues may be identified as the shaded jars are adopted in more
>   test and production environments, like HADOOP-15137.
> Let's use this umbrella JIRA to track all remaining efforts to improve
> the hadoop shaded client work.
> CC [~busbey], [~bharatviswa] and [~vinodkv].

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
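For downstream projects, the split described above means compiling against hadoop-client-api and pulling hadoop-client-runtime in at runtime only. A rough sketch of what a consuming pom might look like (the version shown is taken from the target versions above as a placeholder, and the scopes reflect the intended split rather than finalized documentation):

```xml
<!-- Sketch: consuming the shaded client artifacts described above.
     Version is a placeholder; scopes follow the intended api/runtime split. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.1.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.1.1</version>
  <scope>runtime</scope>
</dependency>
```

With this layout the downstream classpath only ever sees the relocated third-party classes inside the runtime jar, which is the point of the shading effort.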
[jira] [Created] (HADOOP-15745) Add ABFS configuration to ConfigRedactor
Sean Mackrory created HADOOP-15745:
--------------------------------------

             Summary: Add ABFS configuration to ConfigRedactor
                 Key: HADOOP-15745
                 URL: https://issues.apache.org/jira/browse/HADOOP-15745
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Sean Mackrory
            Assignee: Sean Mackrory

Sensitive information like credentials should be detected by ConfigRedactor so that it never appears in logs or other channels. Not all ABFS credentials are currently detected correctly, so we should amend the default list of config patterns.
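The kind of pattern-based matching ConfigRedactor performs can be sketched as follows. This is a self-contained illustration, not the Hadoop implementation: the class name, the `<redacted>` marker, and the key patterns (including the ABFS one) are all assumptions for the example, not the exact patterns this JIRA will add.

```java
import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of pattern-based config redaction. The patterns below,
// including the hypothetical ABFS credential key, are illustrative only.
public class RedactorSketch {
    private static final String REDACTED = "<redacted>";
    private static final List<Pattern> SENSITIVE = List.of(
        Pattern.compile("secret", Pattern.CASE_INSENSITIVE),
        Pattern.compile("password", Pattern.CASE_INSENSITIVE),
        Pattern.compile("fs\\.azure\\.account\\.key", Pattern.CASE_INSENSITIVE));

    // Return the value unchanged unless the key matches a sensitive pattern.
    static String redact(String key, String value) {
        for (Pattern p : SENSITIVE) {
            if (p.matcher(key).find()) {
                return REDACTED;  // never let the raw credential through
            }
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(redact("fs.azure.account.key.myaccount", "hunter2"));
        System.out.println(redact("fs.defaultFS", "hdfs://nn:8020"));
    }
}
```

The fix described in the JIRA amounts to extending such a default pattern list so the ABFS credential keys match.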
Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)
Thanks for the work Junping!

+1 (non-binding)

- checked out git tag release-2.8.5-RC0
- built from source on Mac OS X 10.13.6, java version 8.0.181-oracle
- deployed on a 3 node cluster
- verified pi job (yarn), teragen, terasort and teravalidate

Regards,
Gabor Bota

On Tue, Sep 11, 2018 at 6:31 PM Eric Payne wrote:
> Thanks a lot Junping!
>
> +1 (binding)
>
> Tested the following:
> - Built from source
> - Installed on a 7 node, multi-tenant, insecure pseudo cluster, running
>   YARN capacity scheduler
> - Added a queue via refresh
> - Verified various GUI pages
> - Streaming jobs
> - Cross-queue (Inter) preemption
> - In-queue (Intra) preemption
> - Teragen / terasort
>
> -Eric
>
> On Monday, September 10, 2018, 7:01:46 AM CDT, 俊平堵 wrote:
>
> Hi all,
>
> I've created the first release candidate (RC0) for Apache Hadoop 2.8.5.
> This is our next point release to follow up 2.8.4. It includes 33
> important fixes and improvements.
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
>
> The RC tag in git is: release-2.8.5-RC0
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1140
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 9/15/2018 PST time.
>
> Thanks,
>
> Junping
Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)
Thanks a lot Junping!

+1 (binding)

Tested the following:
- Built from source
- Installed on a 7 node, multi-tenant, insecure pseudo cluster, running
  YARN capacity scheduler
- Added a queue via refresh
- Verified various GUI pages
- Streaming jobs
- Cross-queue (Inter) preemption
- In-queue (Intra) preemption
- Teragen / terasort

-Eric

On Monday, September 10, 2018, 7:01:46 AM CDT, 俊平堵 wrote:

Hi all,

I've created the first release candidate (RC0) for Apache Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It includes 33 important fixes and improvements.

The RC artifacts are available at:
http://home.apache.org/~junping_du/hadoop-2.8.5-RC0

The RC tag in git is: release-2.8.5-RC0

The maven artifacts are available via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1140

Please try the release and vote; the vote will run for the usual 5 working days, ending on 9/15/2018 PST time.

Thanks,

Junping
[jira] [Created] (HADOOP-15744) AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
Andras Bokor created HADOOP-15744:
-------------------------------------

             Summary: AbstractContractAppendTest fails against HDFS on HADOOP-15407 branch
                 Key: HADOOP-15744
                 URL: https://issues.apache.org/jira/browse/HADOOP-15744
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Andras Bokor
            Assignee: Andras Bokor

{code:java}
mvn test -Dtest=TestHDFSContractAppend#testAppendDirectory,TestRouterWebHDFSContractAppend#testAppendDirectory
{code}
In the case of TestHDFSContractAppend, the test expects FileAlreadyExistsException but HDFS sends the exception wrapped in a RemoteException. In the case of TestRouterWebHDFSContractAppend, the append does not throw an exception at all.
[~ste...@apache.org], [~tmarquardt], any thoughts?
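The first failure mode can be modeled in isolation: HDFS transports server-side exceptions by class name, and the client must unwrap them back into the concrete type. The class below is a simplified, self-contained stand-in for Hadoop's ipc.RemoteException (which has a real `unwrapRemoteException(Class...)` method); it only illustrates why a test that catches FileAlreadyExistsException directly sees the wrapper instead.

```java
import java.nio.file.FileAlreadyExistsException;

// Self-contained model of class-name-based exception unwrapping.
// This is NOT Hadoop's RemoteException, just a sketch of the mechanism.
public class UnwrapSketch {
    static class RemoteException extends RuntimeException {
        final String className;  // class name of the server-side exception
        RemoteException(String className, String msg) {
            super(msg);
            this.className = className;
        }
        // Reconstruct the original exception if it matches a lookup type.
        Exception unwrapRemoteException(Class<?>... lookupTypes) {
            for (Class<?> t : lookupTypes) {
                if (t.getName().equals(className)) {
                    try {
                        return (Exception) t.getConstructor(String.class)
                            .newInstance(getMessage());
                    } catch (ReflectiveOperationException e) {
                        return this;
                    }
                }
            }
            return this;  // unknown type: stay wrapped
        }
    }

    public static void main(String[] args) {
        RemoteException re = new RemoteException(
            "java.nio.file.FileAlreadyExistsException", "/dir exists");
        // Without this unwrap step, a catch (FileAlreadyExistsException e)
        // block never fires -- the caller only sees RemoteException.
        Exception unwrapped =
            re.unwrapRemoteException(FileAlreadyExistsException.class);
        System.out.println(unwrapped.getClass().getSimpleName());
    }
}
```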
Speakers needed for Apache DC Roadshow
We need your help to make the Apache Washington DC Roadshow on Dec 4th a success.

What do we need most? Speakers!

We're bringing a unique DC flavor to this event by mixing Open Source Software with talks about Apache projects as well as OSS CyberSecurity, OSS in Government and OSS Career advice.

Please take a look at: http://www.apachecon.com/usroadshow18/

(Note: You are receiving this message because you are subscribed to one or more mailing lists at The Apache Software Foundation.)

Rich, for the ApacheCon Planners

--
rbo...@apache.org
http://apachecon.com
@ApacheCon
[jira] [Created] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance
Daryn Sharp created HADOOP-15743:
------------------------------------

             Summary: Jetty and SSL tunings to stabilize KMS performance
                 Key: HADOOP-15743
                 URL: https://issues.apache.org/jira/browse/HADOOP-15743
             Project: Hadoop Common
          Issue Type: Bug
          Components: kms
    Affects Versions: 2.8.0
            Reporter: Daryn Sharp

The KMS has very low throughput with high client failure rates. The following config options will "stabilize" the KMS under load:
# Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE.
# Reduce SSL session cache size (unlimited) and ttl (24h). The memory cache has very poor performance and causes extreme GC collection pressure. Load balancing diminishes the effectiveness of the cache to 1/N-hosts anyway.
** -Djavax.net.ssl.sessionCacheSize=1000
** -Djavax.net.ssl.sessionCacheTimeout=6
# Completely disable the jetty LowResourceMonitor to stop jetty from immediately closing incoming connections during connection bursts. Client retries cause jetty to remain in a low resource state until many clients fail, leaving thousands of sockets lingering in various close-related states.
# Set min/max threads to 4x processors. Jetty recommends only 50 to 500 threads. Java's SSL engine has excessive synchronization that limits performance anyway.
# Set https idle timeout to 6s.
# Significantly increase max fds to at least 128k. Recommend using a VIP load balancer with a lower limit.
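The session-cache tunings above are normally passed as -D flags at JVM startup; for illustration, the same effect can be sketched programmatically through the standard SSLSessionContext API. This is a sketch of the mechanism only, the values simply mirror the flags listed above and are a starting point, not a definitive recommendation:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;
import java.security.NoSuchAlgorithmException;

// Sketch: apply the session-cache tunings listed above via the
// SSLSessionContext API, the programmatic equivalent of the -D flags.
public class KmsSslTuningSketch {
    static int applyTunings() throws NoSuchAlgorithmException {
        SSLSessionContext serverCtx =
            SSLContext.getDefault().getServerSessionContext();
        // Equivalent of -Djavax.net.ssl.sessionCacheSize=1000:
        // bound the cache instead of letting it grow without limit.
        serverCtx.setSessionCacheSize(1000);
        // Shorten the session ttl (seconds) from the long default.
        serverCtx.setSessionCacheTimeout(6);
        return serverCtx.getSessionCacheSize();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(applyTunings());
    }
}
```

Either way, the settings must take effect before the first handshake, which is why the JIRA expresses them as startup flags.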
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/

[Sep 10, 2018 5:57:36 AM] (yqlin) HDFS-13884. Improve the description of the setting dfs.image.compress.
[Sep 10, 2018 11:37:48 AM] (elek) HDDS-417. Ambiguous error message when using genconf tool. Contributed
[Sep 10, 2018 1:24:41 PM] (stevel) HADOOP-15677. WASB: Add support for StreamCapabilities. Contributed by
[Sep 10, 2018 3:45:49 PM] (xyao) HDDS-403. Fix createdOn and modifiedOn timestamp for volume, bucket,
[Sep 10, 2018 6:52:52 PM] (elek) HDDS-421. Resilient DNS resolution in datanode-service. Contributed by
[Sep 10, 2018 7:55:20 PM] (ericp) YARN-8709: CS preemption monitor always fails since one under-served

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML: Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs: module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 195]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 195] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests:
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests:
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.datanode.TestBPOfferService
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-compile-javac-root.txt [304K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/branch-findbugs-hadoop-hdds_client.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/893/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [4.0K]
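The two submarine FindBugs warnings above follow well-known patterns, and the generic shape of their fixes can be sketched as follows. This is an illustration of the patterns, not the actual submarine patch; the method names and JSON shape are invented for the example.

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Generic fixes for the two FindBugs patterns reported above:
// default-encoding FileWriter that may leak, and + concatenation in a loop.
public class FindbugsFixSketch {
    // try-with-resources closes the writer even on a checked exception,
    // and Files.newBufferedWriter takes an explicit charset, unlike
    // new FileWriter(File) which relies on the platform default.
    static void writeScript(Path script, String body) throws IOException {
        try (Writer w = Files.newBufferedWriter(script, StandardCharsets.UTF_8)) {
            w.write(body);
        }
    }

    // Build repeated fragments with StringBuilder instead of + in a loop,
    // avoiding one intermediate String per iteration.
    static String componentArrayJson(String name, int count) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < count; i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"name\":\"").append(name).append(i).append("\"}");
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        System.out.println(componentArrayJson("worker", 2));
        // -> [{"name":"worker0"},{"name":"worker1"}]
    }
}
```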
[jira] [Created] (HADOOP-15742) Log if the ipc backoff is enabled in CallQueueManager
Yiqun Lin created HADOOP-15742:
----------------------------------

             Summary: Log if the ipc backoff is enabled in CallQueueManager
                 Key: HADOOP-15742
                 URL: https://issues.apache.org/jira/browse/HADOOP-15742
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 3.1.1
            Reporter: Yiqun Lin
            Assignee: Ryan Wu

Currently we don't log whether ipc backoff is enabled. It would be good to print this as well so that users know whether it is enabled.
{code:java}
  public CallQueueManager(Class<? extends BlockingQueue<E>> backingClass,
      Class<? extends RpcScheduler> schedulerClass,
      boolean clientBackOffEnabled, int maxQueueSize, String namespace,
      Configuration conf) {
    int priorityLevels = parseNumLevels(namespace, conf);
    this.scheduler = createScheduler(schedulerClass, priorityLevels,
        namespace, conf);
    BlockingQueue<E> bq = createCallQueueInstance(backingClass,
        priorityLevels, maxQueueSize, namespace, conf);
    this.clientBackOffEnabled = clientBackOffEnabled;
    this.putRef = new AtomicReference<BlockingQueue<E>>(bq);
    this.takeRef = new AtomicReference<BlockingQueue<E>>(bq);
    LOG.info("Using callQueue: " + backingClass + " queueCapacity: " +
        maxQueueSize + " scheduler: " + schedulerClass);
  }
{code}
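The proposed improvement amounts to extending that LOG.info line with the clientBackOffEnabled flag. A self-contained sketch of what the amended message might look like (the exact wording and label are assumptions, not the committed patch):

```java
// Sketch: include the backoff flag in the CallQueueManager startup log.
// The "ipcBackoff" label is an illustrative choice, not the final patch.
public class BackoffLogSketch {
    static String logLine(String backingClass, int maxQueueSize,
                          String schedulerClass, boolean clientBackOffEnabled) {
        return "Using callQueue: " + backingClass
            + ", queueCapacity: " + maxQueueSize
            + ", scheduler: " + schedulerClass
            + ", ipcBackoff: " + clientBackOffEnabled;
    }

    public static void main(String[] args) {
        System.out.println(
            logLine("FairCallQueue", 100, "DecayRpcScheduler", true));
    }
}
```

With this, an operator reading the NameNode log at startup can confirm at a glance whether backoff is active for each call queue.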
[jira] [Created] (HADOOP-15741) Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
Akira Ajisaka created HADOOP-15741:
--------------------------------------

             Summary: Upgrade Maven Javadoc Plugin from 3.0.0-M1 to 3.0.1
                 Key: HADOOP-15741
                 URL: https://issues.apache.org/jira/browse/HADOOP-15741
             Project: Hadoop Common
          Issue Type: Improvement
          Components: build
            Reporter: Akira Ajisaka