[ https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDDS-1293:
--------------------------------------
    Attachment: HDDS-1293.000.patch

> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
> -------------------------------------------------------------
>
>                 Key: HDDS-1293
>                 URL: https://issues.apache.org/jira/browse/HDDS-1293
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDDS-1293.000.patch
>
>
> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because it 
> builds the proto with a parallel stream: the forEach invokes the builder's 
> add methods concurrently, and the builder appends to a plain ArrayList, 
> which is not thread-safe (a standalone sketch of the race follows the 
> stack trace below).
> {code}
> 2019-03-17 16:24:35,774 INFO  retry.RetryInvocationHandler (RetryInvocationHandler.java:log(411)) - com.google.protobuf.ServiceException: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 3
>       at java.util.ArrayList.add(ArrayList.java:463)
>       at org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
>       at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
>       at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>       at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>       at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>       at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>       at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>       at java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
>       at java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
>       at java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
>       at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
>       at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>       at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>       at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>       at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>       at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>       at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
>       at org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
>       at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>       at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
>       at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
>       at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
>       at org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
>       at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
>       at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
>       at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
>       at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
>       at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> , while invoking $Proxy28.submitRequest over null(localhost:59024). Trying to failover immediately.
> 2019-03-17 16:24:35,783 INFO  om.KeyManagerImpl (KeyManagerImpl.java:allocateBlock(271)) - allocate block key:pool-9-thread-7-1581351327 exclude:datanodes:containers:#6#1#9#5pipelines:
> {code}
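> For reference, here is a minimal standalone sketch of the failure mode, assuming only what the stack trace shows (ArrayList#add invoked concurrently from a parallel forEach). The class and names below are hypothetical, not the actual ExcludeList or patch code: ArrayList is not thread-safe, so concurrent adds can race on its internal size/capacity bookkeeping and fail with ArrayIndexOutOfBoundsException, while a sequential traversal only ever mutates the list from one thread.
> {code}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.stream.IntStream;
>
> // Hypothetical demo class (not part of Ozone): reproduces the same kind of
> // race a parallel stream triggers against an unsynchronized ArrayList.
> public class ParallelAddRace {
>   public static void main(String[] args) {
>     List<Long> ids = new ArrayList<>();          // shared, not thread-safe
>     IntStream.range(0, 100_000)
>         .parallel()                              // concurrent forEach, as with parallelStream()
>         .forEach(i -> ids.add((long) i));        // may throw ArrayIndexOutOfBoundsException
>                                                  // or drop elements (nondeterministic)
>     System.out.println("parallel adds: " + ids.size());
>
>     // Keeping the traversal sequential removes the concurrent mutation:
>     List<Long> safe = new ArrayList<>();
>     IntStream.range(0, 100_000).forEach(i -> safe.add((long) i));
>     System.out.println("sequential adds: " + safe.size());
>   }
> }
> {code}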


