[jira] [Created] (HDFS-15089) RBF: SmallFix for RBFMetrics in doc
luhuachao created HDFS-15089: Summary: RBF: SmallFix for RBFMetrics in doc Key: HDFS-15089 URL: https://issues.apache.org/jira/browse/HDFS-15089 Project: Hadoop HDFS Issue Type: Bug Reporter: luhuachao Assignee: luhuachao SmallFix for RBFMetrics in doc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDDS-2534) scmcli container delete not working
luhuachao created HDDS-2534: --- Summary: scmcli container delete not working Key: HDDS-2534 URL: https://issues.apache.org/jira/browse/HDDS-2534 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: luhuachao Assignee: luhuachao Fix For: 0.5.0 {code:java} java.lang.IllegalArgumentException: Unknown command type: DeleteContainer at org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocolServerSideTranslatorPB.processRequest(StorageContainerLocationProtocolServerSideTranslatorPB.java:219) at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72) at org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocolServerSideTranslatorPB.submitRequest(StorageContainerLocationProtocolServerSideTranslatorPB.java:112) at org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:30454) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) {code}
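The stack points at the request dispatch in StorageContainerLocationProtocolServerSideTranslatorPB.processRequest. A minimal sketch of how such a dispatcher produces "Unknown command type" when one request type has no case in the switch; the enum and return values here are hypothetical stand-ins, not the actual Ozone code:

```java
public class DispatchSketch {
    // Hypothetical stand-in for the protobuf request type enum.
    enum Type { GetContainer, CreateContainer, DeleteContainer }

    // Mirrors the shape of processRequest(): each supported type has a
    // case; anything unhandled falls through to the default branch.
    static String process(Type cmdType) {
        switch (cmdType) {
            case GetContainer:
                return "container info";
            case CreateContainer:
                return "container created";
            // no case for DeleteContainer -> falls through to default
            default:
                throw new IllegalArgumentException(
                    "Unknown command type: " + cmdType);
        }
    }

    public static void main(String[] args) {
        try {
            process(Type.DeleteContainer);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This matches the reported symptom: the delete request reaches the server but is rejected before any container logic runs.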
[jira] [Created] (HDDS-2401) Add robot test file for ozone-hdfs
luhuachao created HDDS-2401: --- Summary: Add robot test file for ozone-hdfs Key: HDDS-2401 URL: https://issues.apache.org/jira/browse/HDDS-2401 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: luhuachao
[jira] [Created] (HDDS-2370) Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' is not exist anymore
luhuachao created HDDS-2370: --- Summary: Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' is not exist anymore Key: HDDS-2370 URL: https://issues.apache.org/jira/browse/HDDS-2370 Project: Hadoop Distributed Data Store Issue Type: Task Components: documentation Reporter: luhuachao In RunningWithHDFS.md {code:java} export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar{code} ozone-hdfs/docker-compose.yaml {code:java} environment: HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar {code} When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with the error below; there is no constructor without parameters. {code:java} 2019-10-21 21:38:56,391 ERROR datanode.DataNode (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.ozone.HddsDatanodeService.() {code} What I suspect is that ozone-0.5 no longer supports running as a plugin in the HDFS DataNode. If so, why don't we remove the doc RunningWithHDFS.md?
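The DataNode instantiates plugins reflectively by class name, which requires a public no-argument constructor. A small sketch of that failure mode, assuming reflective instantiation along the lines of Hadoop's ReflectionUtils; the plugin class here is a hypothetical stand-in for HddsDatanodeService:

```java
public class PluginLoadSketch {
    // Hypothetical stand-in: a plugin type whose only constructor
    // takes a parameter, so no no-arg constructor exists.
    static class NoDefaultCtorPlugin {
        NoDefaultCtorPlugin(String conf) { }
    }

    public static void main(String[] args) {
        try {
            // Mirrors the shape of DataNode.startPlugins(): look up the
            // no-arg constructor and instantiate the plugin with it.
            NoDefaultCtorPlugin.class.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // getDeclaredConstructor() fails because <init>() is missing,
            // matching the NoSuchMethodException ...HddsDatanodeService.()
            // in the report.
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```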
[jira] [Created] (HDDS-2348) remove log4j properties for org.apache.hadoop.ozone
luhuachao created HDDS-2348: --- Summary: remove log4j properties for org.apache.hadoop.ozone Key: HDDS-2348 URL: https://issues.apache.org/jira/browse/HDDS-2348 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: Ozone Manager Affects Versions: 0.5.0 Reporter: luhuachao Log messages from the package org.apache.hadoop.ozone cannot be written to the .log file; for example, the OM startup message (STARTUP_MSG).
[jira] [Created] (HDDS-2095) Submit mr job to yarn failed, Error messegs is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
luhuachao created HDDS-2095: --- Summary: Submit mr job to yarn failed, Error messegs is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found" Key: HDDS-2095 URL: https://issues.apache.org/jira/browse/HDDS-2095 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Filesystem Affects Versions: 0.4.1 Reporter: luhuachao Below is the submit command: {code:java} hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar nnbench -Dfs.defaultFS=o3fs://buc.volume-test -maps 3 -bytesToWrite 1 -numberOfFiles 1000 -blockSize 16 -operation create_write {code} The client fails with this message: {code:java} 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_000119/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576) at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571) at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562) at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873) at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487) at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71) at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144) at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144) at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:308) at org.apache.hadoop.util.RunJar.main(RunJar.java:222)Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304) at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299) at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330) ... 
34 more {code} The log in the ResourceManager: {code:java} 2019-09-06 15:26:51,836 WARN security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer. java.util.ServiceConfigurationError: org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found at java.util.ServiceLoader.fail(ServiceLoader.java:239) at java.util.ServiceLoader.access$300(ServiceLoader.java:185) at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) at java.util.ServiceLoader$1.next(ServiceLoader.java:480) at org.apache.hadoop.security.token.
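Both failures come from java.util.ServiceLoader: YARN discovers TokenRenewer implementations from META-INF/services provider files, and if such a file names a class that cannot be loaded from the classpath, iteration fails with ServiceConfigurationError: "Provider ... not found". A minimal sketch of that discovery mechanism; the interface here is a hypothetical stand-in for org.apache.hadoop.security.token.TokenRenewer:

```java
import java.util.ServiceLoader;

public class RenewerDiscoverySketch {
    // Hypothetical stand-in for the TokenRenewer service interface.
    public interface Renewer {
        boolean handles(String tokenKind);
    }

    public static void main(String[] args) {
        // ServiceLoader reads provider class names from
        // META-INF/services/<interface-name> on the classpath. If a name
        // listed there is not loadable, iteration throws
        // ServiceConfigurationError ("Provider ... not found") -- the
        // failure in this report, where the jar on the RM classpath
        // declares a Renewer class it does not contain.
        int found = 0;
        for (Renewer r : ServiceLoader.load(Renewer.class)) {
            found++;
        }
        // No provider file is registered in this sketch, so nothing loads
        // (and nothing fails): the loader simply yields zero providers.
        System.out.println(found);
    }
}
```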
[jira] [Created] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'
luhuachao created HDFS-14620: Summary: RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user' Key: HDFS-14620 URL: https://issues.apache.org/jira/browse/HDFS-14620 Project: Hadoop HDFS Issue Type: Bug Affects Versions: HDFS-13891 Reporter: luhuachao Fix For: HDFS-13891 Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot be disabled; the error is below. The code judges that the principal is not equal to hdfs, and hdfs does not belong to the supergroup either. {code:java} [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: hdfs-test@EXAMPLE is not a super user at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136) {code}
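The symptom suggests the check compares the caller's name as a raw string against the superuser name; under Kerberos the caller presents a full principal, so the comparison never matches. A small illustration of the mismatch; the comparison logic below is a simplified assumption, not the actual RouterPermissionChecker code:

```java
public class SuperuserCheckSketch {
    public static void main(String[] args) {
        String caller = "hdfs-test@EXAMPLE"; // full Kerberos principal
        String superUser = "hdfs";           // configured superuser name

        // Comparing the full principal string against the short user
        // name fails even for a legitimate superuser principal:
        System.out.println(caller.equals(superUser));

        // Resolving the principal to a short name first (roughly what
        // Hadoop's auth_to_local rules do) yields a comparable value;
        // this split is a simplified stand-in for that resolution:
        String shortName = caller.split("[/@]")[0];
        System.out.println(shortName);
    }
}
```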
[jira] [Created] (HDDS-1714) When restart om with Kerberos, NPException happened at addPersistedDelegationToken
luhuachao created HDDS-1714: --- Summary: When restart om with Kerberos, NPException happened at addPersistedDelegationToken Key: HDDS-1714 URL: https://issues.apache.org/jira/browse/HDDS-1714 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Manager Affects Versions: 0.4.0 Reporter: luhuachao the error stack: {code:java} 2019-06-21 15:17:41,744 [main] INFO - Loaded 11 tokens 2019-06-21 15:17:41,745 [main] INFO - Loading token state into token manager. 2019-06-21 15:17:41,748 [main] ERROR - Failed to start the OzoneManager. java.lang.NullPointerException at org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.addPersistedDelegationToken(OzoneDelegationTokenSecretManager.java:371) at org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.loadTokenSecretState(OzoneDelegationTokenSecretManager.java:358) at org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.(OzoneDelegationTokenSecretManager.java:96) at org.apache.hadoop.ozone.om.OzoneManager.createDelegationTokenSecretManager(OzoneManager.java:608) at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:332) at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:941) at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:859) 2019-06-21 15:17:41,753 [pool-2-thread-1] INFO - SHUTDOWN_MSG: {code}
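The trace shows the NPE occurs while replaying persisted tokens into the in-memory state after a restart, which typically means some per-token lookup returns null and is dereferenced without a check. A generic sketch of that pattern; the map, key id, and lookup are hypothetical illustrations, not the actual OzoneDelegationTokenSecretManager internals:

```java
import java.util.HashMap;
import java.util.Map;

public class TokenStateLoadSketch {
    public static void main(String[] args) {
        // Hypothetical mirror of loading persisted tokens: each token
        // references a master-key id that is looked up in the in-memory
        // key map. After a restart the map may lack that id.
        Map<Integer, String> masterKeys = new HashMap<>();
        int persistedKeyId = 7; // key id stored alongside the token

        String key = masterKeys.get(persistedKeyId); // null after restart
        try {
            key.length(); // dereference without a null check -> NPE
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```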
[jira] [Created] (HDFS-14457) Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
luhuachao created HDFS-14457: Summary: Add order text SPACE in CLI command 'hdfs dfsrouteradmin' Key: HDFS-14457 URL: https://issues.apache.org/jira/browse/HDFS-14457 Project: Hadoop HDFS Issue Type: Bug Components: rbf Affects Versions: HDFS-13891 Reporter: luhuachao When executing the CLI command 'hdfs dfsrouteradmin', the help text for -order does not contain SPACE.
[jira] [Created] (HDFS-13889) The hadoop3.x client have compatible problem with hadoop2.x cluster
luhuachao created HDFS-13889: Summary: The hadoop3.x client have compatible problem with hadoop2.x cluster Key: HDFS-13889 URL: https://issues.apache.org/jira/browse/HDFS-13889 Project: Hadoop HDFS Issue Type: Improvement Reporter: luhuachao When using a hadoop 3.1.0 client to submit a MapReduce job to a hadoop 2.8.2 cluster, the AppMaster fails with 'java.lang.NumberFormatException: For input string: "30s"' on the config dfs.client.datanode-restart.timeout. In hadoop 3.x, hdfs-default.xml sets "dfs.client.datanode-restart.timeout" to the value "30s", while in hadoop 2.x DfsClientConf.java uses the method getLong to read this value. Is it necessary to fix this problem?
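The incompatibility is purely in parsing: a 2.x-style getLong ends up in Long.parseLong, which rejects the unit suffix, while 3.x reads the value with a duration-aware getter. A minimal reproduction of both behaviors; the suffix-stripping below is a simplified stand-in for Configuration.getTimeDuration, handling only the "s" suffix:

```java
import java.util.concurrent.TimeUnit;

public class TimeSuffixSketch {
    public static void main(String[] args) {
        String value = "30s"; // default shipped in Hadoop 3.x hdfs-default.xml

        // Hadoop 2.x path: getLong -> Long.parseLong, which cannot
        // parse a value carrying a time-unit suffix.
        try {
            Long.parseLong(value);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage());
        }

        // Hadoop 3.x path: a duration-aware parse strips the unit and
        // converts; simplified here to the seconds suffix only.
        long seconds = Long.parseLong(value.substring(0, value.length() - 1));
        System.out.println(TimeUnit.SECONDS.toMillis(seconds));
    }
}
```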
[jira] [Created] (HDFS-13626) When the setOwner operation was denied,The logging username is not appropriate
luhuachao created HDFS-13626: Summary: When the setOwner operation was denied,The logging username is not appropriate Key: HDFS-13626 URL: https://issues.apache.org/jira/browse/HDFS-13626 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.0.0-alpha2, 2.7.4, 2.8.0 Environment: hadoop 2.8.2 Reporter: luhuachao When doing a chown operation on the target file /tmp/test as user 'root' to change the owner to user 'hive', the log displays 'User hive is not a super user'; the appropriate log here would be 'User root is not a super user'.
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user (non-super user cannot change owner).
{code}
The last version of the patch for issue HDFS-10455 uses username (the requested new owner) rather than pc.getUser() (the caller) in the log messages:
{code:java}
if (!pc.isSuperUser()) {
  if (username != null && !pc.getUser().equals(username)) {
-   throw new AccessControlException("Non-super user cannot change owner");
+   throw new AccessControlException("User " + username +
+       " is not a super user (non-super user cannot change owner).");
  }
  if (group != null && !pc.isMemberOfGroup(group)) {
-   throw new AccessControlException("User does not belong to " + group);
+   throw new AccessControlException(
+       "User " + username + " does not belong to " + group);
  }
}
{code}