[jira] [Created] (HDDS-1297) Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

2019-03-17 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1297:
---

 Summary: Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization
 Key: HDDS-1297
 URL: https://issues.apache.org/jira/browse/HDDS-1297
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
 Fix For: 0.4.0


Fix IllegalArgumentException thrown with MiniOzoneCluster Initialization

{code}
java.lang.IllegalArgumentException: 30 is not within min = 500 or max = 10
    at org.apache.hadoop.hdds.server.ServerUtils.sanitizeUserArgs(ServerUtils.java:66)
    at org.apache.hadoop.hdds.scm.HddsServerUtil.getStaleNodeInterval(HddsServerUtil.java:256)
    at org.apache.hadoop.hdds.scm.node.NodeStateManager.<init>(NodeStateManager.java:136)
    at org.apache.hadoop.hdds.scm.node.SCMNodeManager.<init>(SCMNodeManager.java:105)
    at org.apache.hadoop.hdds.scm.server.StorageContainerManager.initalizeSystemManagers(StorageContainerManager.java:391)
    at org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:286)
    at org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:218)
    at org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:684)
    at org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:628)
    at org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createSCM(MiniOzoneClusterImpl.java:458)
    at org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:392)
    at org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBothGetandPutSmallFile(TestOzoneContainer.java:237)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
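
The failure comes from SCM's configuration sanity check rejecting the stale-node interval that MiniOzoneCluster passes in. As a rough illustration (not the actual Hadoop source; names and values are simplified), the range validation in ServerUtils.sanitizeUserArgs behaves like this:

{code}
// Simplified sketch of a sanitizeUserArgs-style range check; the real
// implementation lives in org.apache.hadoop.hdds.server.ServerUtils.
public final class RangeCheckSketch {

  // Rejects any value outside [min, max] with the same kind of message
  // seen in the stack trace above.
  static long sanitizeUserArgs(long value, long min, long max) {
    if (value < min || value > max) {
      throw new IllegalArgumentException(
          value + " is not within min = " + min + " or max = " + max);
    }
    return value;
  }

  public static void main(String[] args) {
    sanitizeUserArgs(600, 500, 100000); // within bounds, returns 600
    sanitizeUserArgs(30, 500, 100000);  // below min, throws as above
  }
}
{code}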






[jira] [Created] (HDDS-1296) Fix checkstyle issue from Nightly run

2019-03-17 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1296:


 Summary: Fix checkstyle issue from Nightly run
 Key: HDDS-1296
 URL: https://issues.apache.org/jira/browse/HDDS-1296
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.4.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


https://ci.anzix.net/job/ozone-nightly/28/checkstyle/






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1078/

[Mar 16, 2019 4:06:44 AM] (bharat) HDDS-1281. Fix the findbug issue caused by HDDS-1163. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore 
   org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 159] 
   org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 142] 
   Unread field: TimelineEventSubDoc.java:[line 56] 
   Unread field: TimelineMetricSubDoc.java:[line 44] 
   Switch statement found in org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric, TimelineMetric) where default case is missing At FlowRunDocument.java:[lines 121-136] 
   org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At FlowRunDocument.java:[line 103] 
   Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration) At CosmosDBDocumentStoreReader.java:[lines 73-75] 
   Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration) At CosmosDBDocumentStoreWriter.java:[lines 66-68] 
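
For context on the recurring "keySet iterator instead of entrySet iterator" finding above: it flags a map loop that looks each value up a second time. A minimal illustration (generic Java, not the Hadoop code itself):

{code}
import java.util.HashMap;
import java.util.Map;

public class KeySetVsEntrySet {
  public static void main(String[] args) {
    Map<String, Integer> metrics = new HashMap<>();
    metrics.put("reads", 10);
    metrics.put("writes", 20);

    // Flagged pattern: an extra hash lookup per key inside the loop.
    for (String key : metrics.keySet()) {
      System.out.println(key + "=" + metrics.get(key));
    }

    // Preferred pattern: key and value come from the same entry.
    for (Map.Entry<String, Integer> e : metrics.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}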

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdds.scm.node.TestSCMNodeManager 
   hadoop.ozone.om.TestOzoneManagerHA 
   hadoop.ozone.om.TestOzoneManager 
   hadoop.ozone.scm.TestContainerSmallFile 
   hadoop.ozone.om.TestContainerReportWithKeys 
   hadoop.ozone.TestContainerStateMachineIdempotency 
   hadoop.ozone.container.TestContainerReplication 
   hadoop.hdds.scm.container.TestContainerStateManagerIntegration 
   hadoop.ozone.om.TestOzoneManagerConfiguration 
   hadoop.ozone.container.ozoneimpl.TestOzoneContainer 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestContainerStateMachineFailures 
   hadoop.ozone.TestMiniOzoneCluster 
   hadoop.ozone.om.TestOMDbCheckpointServlet 
   hadoop.ozone.web.TestOzoneRestWithMiniCluster 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.web.client.TestOzoneClient 
   hadoop.ozone.scm.node.TestSCMNodeMetrics 
   hadoop.ozone.TestStorageContainerManager 
   hadoop.ozone.web.client.TestBuckets 
   hadoop.ozone.client.rpc.TestFailureHandlingByClient 
   hadoop.ozone.om.TestOmMetrics 
   

[jira] [Reopened] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-17 Thread Rakesh R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reopened HDFS-14355:
-

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement caching to persistent memory using a pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.
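
A minimal sketch of the pure-Java mapping the description refers to (the file path and size below are illustrative assumptions, not HDFS configuration):

{code}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PmemMappedCacheSketch {
  public static void main(String[] args) throws IOException {
    // Assumed persistent-memory mount point; any file system path works.
    Path cacheFile = Paths.get("/mnt/pmem0/dn-cache.bin");
    long cacheSize = 4L * 1024 * 1024; // 4 MiB for the demo

    try (FileChannel ch = FileChannel.open(cacheFile,
        StandardOpenOption.CREATE, StandardOpenOption.READ,
        StandardOpenOption.WRITE)) {
      // Map the region once; subsequent reads and writes go through the
      // buffer with no native (PMDK) dependency.
      MappedByteBuffer buf =
          ch.map(FileChannel.MapMode.READ_WRITE, 0, cacheSize);
      buf.put(0, (byte) 42);            // write a cached byte
      System.out.println(buf.get(0));   // read it back
      buf.force();                      // flush the mapped region to the device
    }
  }
}
{code}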






[jira] [Resolved] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-17 Thread Rakesh R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R resolved HDFS-14355.
-
Resolution: Unresolved

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch
>
>
> This task is to implement caching to persistent memory using a pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient non-serializable instance field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream, INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client 
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335] 
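
The "boxed value is unboxed and then immediately reboxed" finding flags a needless round-trip through a primitive. A simplified illustration (not the ColumnRWHelper code):

{code}
import java.util.HashMap;
import java.util.Map;

public class ReboxDemo {
  public static void main(String[] args) {
    Map<String, Long> timestamps = new HashMap<>();
    timestamps.put("row1", 100L);

    // Flagged pattern: Long -> long (unbox) -> Long (rebox).
    Long reboxed = (long) timestamps.get("row1");

    // Preferred: keep the boxed value until a primitive is needed.
    Long value = timestamps.get("row1");
    System.out.println(reboxed + " " + value);
  }
}
{code}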

Failed junit tests :

   hadoop.util.TestDiskChecker 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/263/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]
   

[jira] [Created] (HDDS-1295) Add MiniOzoneChaosCluster to mimic long running workload in a unit test environment

2019-03-17 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1295:
---

 Summary: Add MiniOzoneChaosCluster to mimic long running workload in a unit test environment
 Key: HDDS-1295
 URL: https://issues.apache.org/jira/browse/HDDS-1295
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


MiniOzoneChaosCluster will mimic various error cases in an Ozone cluster, from the Ozone client all the way down to the Ozone datanodes.

The test can be controlled via command-line parameters that determine the run duration and failure rates, as sketched below.
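
A hedged sketch of the kind of failure-injection loop such a cluster could drive (the ChaosCluster interface and its methods are hypothetical stand-ins, not the MiniOzoneChaosCluster API):

{code}
import java.util.Random;
import java.util.concurrent.TimeUnit;

// Hypothetical interface standing in for the chaos cluster under test.
interface ChaosCluster {
  int datanodeCount();
  void restartDatanode(int index) throws Exception;
}

public class ChaosLoopSketch {
  // Restart a random datanode once per interval until the run duration
  // elapses; the duration and interval correspond to the command-line
  // parameters mentioned above.
  public static void run(ChaosCluster cluster, long durationMillis,
      long failureIntervalMillis) throws Exception {
    Random random = new Random();
    long deadline = System.currentTimeMillis() + durationMillis;
    while (System.currentTimeMillis() < deadline) {
      int victim = random.nextInt(cluster.datanodeCount());
      cluster.restartDatanode(victim);
      TimeUnit.MILLISECONDS.sleep(failureIntervalMillis);
    }
  }
}
{code}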






[jira] [Created] (HDDS-1294) ExcludeList should be an RPC Client config so that multiple streams can avoid the same error.

2019-03-17 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1294:
---

 Summary: ExcludeList should be an RPC Client config so that multiple streams can avoid the same error.
 Key: HDDS-1294
 URL: https://issues.apache.org/jira/browse/HDDS-1294
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


ExcludeList is currently maintained per BlockOutputStream; as a result, multiple keys created from the same client can run into the same exception.
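
A minimal sketch of the client-scoped sharing this asks for (class names are illustrative, not the actual Ozone types):

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// One exclude list held at RPC-client scope; every stream the client
// creates consults and updates the same set.
class SharedExcludeList {
  private final Set<String> failedDatanodes = ConcurrentHashMap.newKeySet();

  void addDatanode(String uuid) {
    failedDatanodes.add(uuid);
  }

  boolean isExcluded(String uuid) {
    return failedDatanodes.contains(uuid);
  }
}

class BlockStreamSketch {
  private final SharedExcludeList excludeList;

  // The stream receives the client-wide list instead of creating its
  // own, so a failure seen by one stream is visible to all the others.
  BlockStreamSketch(SharedExcludeList clientWideList) {
    this.excludeList = clientWideList;
  }

  void onWriteFailure(String datanodeUuid) {
    excludeList.addDatanode(datanodeUuid);
  }
}
{code}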






[jira] [Created] (HDDS-1293) ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException

2019-03-17 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1293:
---

 Summary: ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
 Key: HDDS-1293
 URL: https://issues.apache.org/jira/browse/HDDS-1293
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because getProtoBuf uses parallel streams.

{code}
2019-03-17 16:24:35,774 INFO  retry.RetryInvocationHandler (RetryInvocationHandler.java:log(411)) - com.google.protobuf.ServiceException: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 3
    at java.util.ArrayList.add(ArrayList.java:463)
    at org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
    at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
    at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
    at java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
    at java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
    at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
    at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
    at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
    at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
    at org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
    at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
    at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
    at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
    at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
    at org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
    at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy28.submitRequest over null(localhost:59024). Trying to failover immediately.
2019-03-17 16:24:35,783 INFO  om.KeyManagerImpl (KeyManagerImpl.java:allocateBlock(271)) - allocate block key:pool-9-thread-7-1581351327 exclude:datanodes:containers:#6#1#9#5pipelines:
{code}
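
The trace shows the fork-join pool calling ArrayList.add through the protobuf builder, which is not thread-safe. A small demonstration of that failure mode (generic Java, not the Ozone code): concurrent add() calls on an unsynchronized list intermittently throw ArrayIndexOutOfBoundsException or lose elements, exactly as the exclude list does here.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

public class ParallelAddDemo {
  public static void main(String[] args) {
    // Unsafe: the same pattern as calling the protobuf builder's add
    // method from parallelStream().forEach(...).
    List<Integer> sink = new ArrayList<>();
    IntStream.range(0, 100_000).parallel().forEach(sink::add);
    System.out.println("unsafe size = " + sink.size()); // often wrong, may throw

    // Fix: keep the stream sequential when the sink is not thread-safe.
    List<Integer> safe = new ArrayList<>();
    IntStream.range(0, 100_000).forEach(safe::add);
    System.out.println("safe size = " + safe.size());   // always 100000
  }
}
{code}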


