[jira] [Resolved] (HDFS-13084) [SPS]: Fix the branch review comments
[ https://issues.apache.org/jira/browse/HDFS-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil Govindan resolved HDFS-13084.
-----------------------------------
       Resolution: Won't Fix
    Fix Version/s:     (was: 3.2.0)
                       (was: HDFS-10285)

> [SPS]: Fix the branch review comments
> -------------------------------------
>
>                 Key: HDFS-13084
>                 URL: https://issues.apache.org/jira/browse/HDFS-13084
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Uma Maheswara Rao G
>            Assignee: Rakesh R
>            Priority: Major
>
> Fix the review comments provided by [~daryn]

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-13084) [SPS]: Fix the branch review comments
[ https://issues.apache.org/jira/browse/HDFS-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil Govindan reopened HDFS-13084:
-----------------------------------

As there are no patches associated with this task, and the same comments are handled by other JIRAs, reopening this JIRA to close it with the correct reason.

> [SPS]: Fix the branch review comments
> -------------------------------------
>
>                 Key: HDFS-13084
>                 URL: https://issues.apache.org/jira/browse/HDFS-13084
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Uma Maheswara Rao G
>            Assignee: Rakesh R
>            Priority: Major
>
> Fix the review comments provided by [~daryn]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/

[Nov 5, 2018 6:28:21 AM] (aajisaka) HADOOP-15899. Update AWS Java SDK versions in NOTICE.txt.
[Nov 5, 2018 8:51:26 AM] (aajisaka) HADOOP-15900. Update JSch versions in LICENSE.txt.
[Nov 5, 2018 9:31:06 AM] (yqlin) HDDS-796. Fix failed test
[Nov 5, 2018 5:40:00 PM] (arp) HDDS-797. If DN is started before SCM, it does not register. Contributed
[Nov 5, 2018 6:13:22 PM] (shashikant) HDDS-799. Avoid ByteString to byte array conversion cost by using
[Nov 5, 2018 6:41:28 PM] (arp) HDDS-794. Add configs to set StateMachineData write timeout in
[Nov 5, 2018 6:50:57 PM] (shashikant) HDDS-794. addendum patch to fix compilation failure. Contributed by
[Nov 5, 2018 7:02:31 PM] (gifuma) HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by
[Nov 6, 2018 12:48:37 AM] (inigoiri) HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas

-1 overall

The following subsystems voted -1:
    findbugs hadolint pathlen shadedclient unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    Failed CTEST tests :

       test_test_libhdfs_threaded_hdfs_static
       test_libhdfs_threaded_hdfspp_test_shim_static

    cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-compile-cc-root.txt  [4.0K]

    javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-compile-javac-root.txt  [324K]

    checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-checkstyle-root.txt  [17M]

    hadolint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-patch-hadolint.txt  [4.0K]

    pathlen:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/pathlen.txt  [12K]

    pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-patch-pylint.txt  [40K]

    shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-patch-shellcheck.txt  [68K]

    shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-patch-shelldocs.txt  [12K]

    whitespace:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/whitespace-eol.txt  [9.3M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/whitespace-tabs.txt  [1.1M]

    xml:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/xml.txt  [4.0K]

    findbugs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [24K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt  [44K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [8.0K]

    javadoc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/diff-javadoc-javadoc-root.txt  [752K]

    CTEST:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/949/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt  [116K]

    unit:
[jira] [Created] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging
Dinesh Chitlangia created HDDS-808:
--------------------------------------

             Summary: Simplify OMAction and DNAction classes used for AuditLogging
                 Key: HDDS-808
                 URL: https://issues.apache.org/jira/browse/HDDS-808
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Ozone Datanode, Ozone Manager
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia

While reviewing HDDS-120, [~ajayydv] suggested simplifying these classes by removing the constructor and the getAction() method.

Refer review comment: https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495
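The suggested simplification can be sketched as follows. This is a hypothetical before/after shape, not the actual OMAction/DNAction source: when each enum constant's audit string equals its own name, the explicit constructor and stored field become redundant and `name()` can back `getAction()` directly.

```java
// Before (assumed shape): each constant carries an explicit action string.
enum OMActionBefore {
    CREATE_VOLUME("CREATE_VOLUME"),
    CREATE_BUCKET("CREATE_BUCKET");

    private final String action;

    OMActionBefore(String action) {
        this.action = action;
    }

    public String getAction() {
        return action;
    }
}

// After: the constant name itself is the audit action, so the
// constructor and the backing field can be removed.
enum OMActionAfter {
    CREATE_VOLUME,
    CREATE_BUCKET;

    public String getAction() {
        return name();
    }
}

public class AuditActionDemo {
    public static void main(String[] args) {
        // Both forms yield the same audit string.
        System.out.println(OMActionBefore.CREATE_VOLUME.getAction());
        System.out.println(OMActionAfter.CREATE_VOLUME.getAction());
    }
}
```

The enum names and constants above are illustrative; the point is only that the two forms are observably equivalent while the second has less code to maintain.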
branch-2.9.2 is almost closed for commit
Hi folks,

There is now only one critical issue targeted for 2.9.2 (YARN-8233), so I'd like to close branch-2.9.2 except for YARN-8233. I will create RC0 as soon as YARN-8233 is committed to branch-2.9.2.

Thanks,
Akira
[jira] [Created] (HDFS-14052) RBF: Use Router keytab for WebHDFS
Íñigo Goiri created HDFS-14052:
----------------------------------

             Summary: RBF: Use Router keytab for WebHDFS
                 Key: HDFS-14052
                 URL: https://issues.apache.org/jira/browse/HDFS-14052
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Íñigo Goiri
            Assignee: CR Hota

When the RouterHttpServer starts, it does:
{code}
NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), httpServer,
    RouterWebHdfsMethods.class.getPackage().getName());
{code}
This function is in the NN and is pretty generic. However, it then calls NameNodeHttpServer#getAuthFilterParams, which does:
{code}
String httpKeytab = conf.get(DFSUtil.getSpnegoKeytabKey(conf,
    DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY));
{code}
In most cases, the regular web keytab will kick in, but we should make this a parameter and load the Router one just in case.
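The shape of the proposed change can be sketched like this. It is a simplified stand-in, not the actual patch: `java.util.Properties` substitutes for Hadoop's `Configuration`, the helper method is hypothetical, and the Router keytab key name is an assumption about the RBF config naming, while the NameNode key mirrors `DFS_NAMENODE_KEYTAB_FILE_KEY`.

```java
import java.util.Properties;

// Sketch: make the keytab config key a parameter instead of hard-coding
// the NameNode key, so the Router can pass its own key when it reuses
// the NN's WebHDFS initialization path.
public class KeytabLookupSketch {
    // Mirrors DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY.
    static final String NN_KEYTAB_KEY = "dfs.namenode.keytab.file";
    // Assumed RBF key name, for illustration only.
    static final String ROUTER_KEYTAB_KEY = "dfs.federation.router.keytab.file";

    // Caller supplies which component's keytab key to prefer; falls back
    // to the NameNode key, matching today's behavior.
    static String getHttpKeytab(Properties conf, String componentKeytabKey) {
        String keytab = conf.getProperty(componentKeytabKey);
        return keytab != null ? keytab : conf.getProperty(NN_KEYTAB_KEY);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(ROUTER_KEYTAB_KEY, "/etc/security/router.keytab");
        // The Router asks for its own key and gets its own keytab.
        System.out.println(getHttpKeytab(conf, ROUTER_KEYTAB_KEY));
    }
}
```

The design point is the parameterization itself: `getAuthFilterParams` stays generic, and each HTTP server passes the key appropriate to its component.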
[jira] [Created] (HDDS-807) Do not support period as a valid character in bucket names
Arpit Agarwal created HDDS-807:
----------------------------------

             Summary: Do not support period as a valid character in bucket names
                 Key: HDDS-807
                 URL: https://issues.apache.org/jira/browse/HDDS-807
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Arpit Agarwal

ozonefs paths use the following syntax: o3fs://bucket.volume/.. The OM host and port are read from configuration. There is no way to specify a target filesystem with a fully qualified path, e.g. _o3fs://bucket.volume.om-host.example.com:9862/_. Hence there is no way we can hand a fully qualified URL with the OM hostname to a client without setting up config files beforehand. This is inconvenient. It also means there is no way to perform a distcp from one Ozone cluster to another.

We need a way to support fully qualified paths with OM hostname and port: _bucket.volume.om-host.example.com_. If we allow periods in bucket names, then such fully qualified paths cannot be parsed unambiguously. However, if we disallow periods, then we can support all of the following paths unambiguously:
# *o3fs://bucket.volume/key* - The authority has only two period-separated components. These must be the bucket and volume name respectively.
# *o3fs://bucket.volume.om-host.example.com/key* - The authority has more than two components. The first two must be bucket and volume; the rest must be the hostname.
# *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, except with a port number.
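The three cases above reduce to one parsing rule once periods are banned in bucket and volume names: the first two dot-separated components of the URI authority are always bucket and volume, and whatever remains is the OM host (with an optional port). A minimal sketch of that rule, with illustrative class and field names rather than actual Ozone code:

```java
// Parses an o3fs URI authority under the proposed no-periods rule.
public class O3fsAuthorityParser {

    static final class O3fsAddress {
        final String bucket;
        final String volume;
        final String omHost; // may include ":port", or be null if absent
        O3fsAddress(String bucket, String volume, String omHost) {
            this.bucket = bucket;
            this.volume = volume;
            this.omHost = omHost;
        }
    }

    static O3fsAddress parseAuthority(String authority) {
        // Split into at most three parts: bucket, volume, remainder.
        // Because bucket/volume cannot contain '.', the remainder is
        // unambiguously the OM host (which itself may contain dots).
        String[] parts = authority.split("\\.", 3);
        if (parts.length < 2) {
            throw new IllegalArgumentException(
                "o3fs authority needs at least bucket.volume: " + authority);
        }
        String host = parts.length == 3 ? parts[2] : null;
        return new O3fsAddress(parts[0], parts[1], host);
    }

    public static void main(String[] args) {
        O3fsAddress a = parseAuthority("bucket.volume.om-host.example.com:9862");
        System.out.println(a.bucket + " / " + a.volume + " / " + a.omHost);
    }
}
```

Note that the `split` limit of 3 is what keeps dots inside the hostname intact; without it, `om-host.example.com` would be broken apart.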
[jira] [Created] (HDDS-806) writeStateMachineData times out because chunk executors are not scheduled
Mukul Kumar Singh created HDDS-806:
--------------------------------------

             Summary: writeStateMachineData times out because chunk executors are not scheduled
                 Key: HDDS-806
                 URL: https://issues.apache.org/jira/browse/HDDS-806
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
    Affects Versions: 0.3.0
            Reporter: Nilotpal Nandi
            Assignee: Mukul Kumar Singh
             Fix For: 0.3.0
         Attachments: HDDS-799-ozone-0.3.001.patch, HDDS-799-ozone-0.3.002.patch, HDDS-799.001.patch, all-node-ozone-logs-1540979056.tar.gz

datanode stopped due to the following error:

datanode.log
{noformat}
2018-10-31 09:12:04,517 INFO org.apache.ratis.server.impl.RaftServerImpl: 9fab9937-fbcd-4196-8014-cb165045724b: set configuration 169: [9fab9937-fbcd-4196-8014-cb165045724b:172.27.15.131:9858, ce0084c2-97cd-4c97-9378-e5175daad18b:172.27.15.139:9858, f0291cb4-7a48-456a-847f-9f91a12aa850:172.27.38.9:9858], old=null at 169
2018-10-31 09:12:22,187 ERROR org.apache.ratis.server.storage.RaftLogWorker: Terminating with exit status 1: 9fab9937-fbcd-4196-8014-cb165045724b-RaftLogWorker failed.
org.apache.ratis.protocol.TimeoutIOException: Timeout: WriteLog:182: (t:10, i:182), STATEMACHINELOGENTRY, client-611073BBFA46, cid=127-writeStateMachineData
        at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:87)
        at org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:310)
        at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:182)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
        at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
        at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:79)
        ... 3 more
{noformat}
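The failure mode in the stack trace above can be reproduced in miniature. This is a stand-alone illustration under stated assumptions, not Ratis code: writeStateMachineData hands back a future, and if the chunk executor never runs the task, the future never completes, so the RaftLogWorker's bounded `get()` fails with `TimeoutException`.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class StateMachineTimeoutDemo {
    public static void main(String[] args) {
        // Stands in for the future returned by writeStateMachineData;
        // no executor thread ever completes it.
        CompletableFuture<Void> writeFuture = new CompletableFuture<>();
        try {
            // Stands in for IOUtils.getFromFuture's bounded wait.
            writeFuture.get(100, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            // This is the path that terminated the RaftLogWorker above.
            System.out.println("timed out waiting for writeStateMachineData");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```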
[jira] [Created] (HDDS-805) Block token: Client api changes for block token
Ajay Kumar created HDDS-805:
-------------------------------

             Summary: Block token: Client api changes for block token
                 Key: HDDS-805
                 URL: https://issues.apache.org/jira/browse/HDDS-805
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
          Components: Security
            Reporter: Ajay Kumar
            Assignee: Ajay Kumar
[jira] [Created] (HDDS-804) Block token: Add secret token manager
Ajay Kumar created HDDS-804:
-------------------------------

             Summary: Block token: Add secret token manager
                 Key: HDDS-804
                 URL: https://issues.apache.org/jira/browse/HDDS-804
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
          Components: Security
            Reporter: Ajay Kumar
            Assignee: Xiaoyu Yao
[jira] [Resolved] (HDDS-803) Volume creation without leading / fails
[ https://issues.apache.org/jira/browse/HDDS-803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-803.
-------------------------------------
    Resolution: Duplicate

> Volume creation without leading / fails
> ---------------------------------------
>
>                 Key: HDDS-803
>                 URL: https://issues.apache.org/jira/browse/HDDS-803
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Arpit Agarwal
>            Priority: Minor
>              Labels: newbie
>
> {code}
> $ ozone sh vol create vol1
> Volume name is required
>
> $ ozone sh vol create /vol1
> 18/11/05 15:55:08 INFO rpc.RpcClient: Creating Volume: vol1, with hdfs as
> owner and quota set to 1152921504606846976 bytes.
> {code}
[jira] [Created] (HDDS-803) Volume creation without leading / fails
Arpit Agarwal created HDDS-803:
----------------------------------

             Summary: Volume creation without leading / fails
                 Key: HDDS-803
                 URL: https://issues.apache.org/jira/browse/HDDS-803
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Arpit Agarwal

{code}
$ ozone sh vol create vol1
Volume name is required

$ ozone sh vol create /vol1
18/11/05 15:55:08 INFO rpc.RpcClient: Creating Volume: vol1, with hdfs as owner and quota set to 1152921504606846976 bytes.
{code}
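One plausible fix for the inconsistency shown above is to normalize the argument so that `vol1` and `/vol1` resolve to the same volume. This is a hypothetical sketch, not the actual ozone shell code; the method name is illustrative:

```java
// Accept volume names with or without a leading slash, then validate.
public class VolumeNameNormalizer {
    static String normalize(String raw) {
        if (raw == null || raw.isEmpty()) {
            throw new IllegalArgumentException("Volume name is required");
        }
        // "/vol1" and "vol1" should both resolve to "vol1".
        String name = raw.startsWith("/") ? raw.substring(1) : raw;
        if (name.isEmpty()) {
            throw new IllegalArgumentException("Volume name is required");
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(normalize("/vol1"));
        System.out.println(normalize("vol1"));
    }
}
```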
[jira] [Resolved] (HDFS-3455) Add docs for NameNode initializeSharedEdits and bootstrapStandby commands
[ https://issues.apache.org/jira/browse/HDFS-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia resolved HDFS-3455.
-------------------------------------
    Resolution: Cannot Reproduce

> Add docs for NameNode initializeSharedEdits and bootstrapStandby commands
> -------------------------------------------------------------------------
>
>                 Key: HDFS-3455
>                 URL: https://issues.apache.org/jira/browse/HDFS-3455
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 2.0.0-alpha
>            Reporter: Todd Lipcon
>            Assignee: Dinesh Chitlangia
>            Priority: Major
>              Labels: newbie
>
> We've made the HA setup easier by adding new flags to the namenode to
> automatically set up the standby. But, we didn't document them yet. We should
> amend the HDFSHighAvailability.apt.vm docs to include this.
[jira] [Created] (HDDS-802) Container State Manager should get open pipelines for allocating container
Lokesh Jain created HDDS-802:
--------------------------------

             Summary: Container State Manager should get open pipelines for allocating container
                 Key: HDDS-802
                 URL: https://issues.apache.org/jira/browse/HDDS-802
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: SCM
            Reporter: Lokesh Jain
            Assignee: Lokesh Jain
             Fix For: 0.4.0

ContainerStateManager#allocateContainer currently calls getPipelines(type, factor), which returns pipelines of all states. This JIRA aims to add another API, getPipelines(type, factor, state), which can be called by the container state manager to get only the open pipelines.
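The proposed overload can be sketched as below. The enum values and the `Pipeline` holder are simplified stand-ins for the real SCM types, not the actual Ozone classes; the shape of the two methods mirrors the description above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: an overload that additionally filters pipelines by state, so
// allocateContainer only ever sees OPEN pipelines.
public class PipelineSelector {
    enum ReplicationType { RATIS, STAND_ALONE }
    enum PipelineState { ALLOCATED, OPEN, CLOSED }

    static final class Pipeline {
        final ReplicationType type;
        final int factor;
        final PipelineState state;
        Pipeline(ReplicationType type, int factor, PipelineState state) {
            this.type = type;
            this.factor = factor;
            this.state = state;
        }
    }

    // Existing form: returns pipelines of all states.
    static List<Pipeline> getPipelines(List<Pipeline> all,
            ReplicationType type, int factor) {
        List<Pipeline> out = new ArrayList<>();
        for (Pipeline p : all) {
            if (p.type == type && p.factor == factor) {
                out.add(p);
            }
        }
        return out;
    }

    // Proposed overload: narrows the existing result by pipeline state.
    static List<Pipeline> getPipelines(List<Pipeline> all,
            ReplicationType type, int factor, PipelineState state) {
        List<Pipeline> out = new ArrayList<>();
        for (Pipeline p : getPipelines(all, type, factor)) {
            if (p.state == state) {
                out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Pipeline> all = new ArrayList<>();
        all.add(new Pipeline(ReplicationType.RATIS, 3, PipelineState.OPEN));
        all.add(new Pipeline(ReplicationType.RATIS, 3, PipelineState.CLOSED));
        // Only the OPEN pipeline survives the state filter.
        System.out.println(getPipelines(all, ReplicationType.RATIS, 3,
            PipelineState.OPEN).size());
    }
}
```

Building the overload on top of the existing method keeps the two in agreement: any future change to type/factor matching applies to both.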
3.2.0 branch is closed for commits
Hi All,

All blockers for 3.2.0 are closed as of now, and the RC is being prepared. Hence the 3.2.0 branch is closed for commits. Please use branch-3.2 for any commits and set the fix version to 3.2.1.

Thanks,
Sunil