Re: Apache Hadoop 3.1.2 release plan

2018-10-24 Thread Vinod Kumar Vavilapalli
231 fixed JIRAs is already quite a bunch!

I only see 7 JIRAs marked with Affects Version 3.1.2, and only one of them is a 
blocker.

Why not just release now as soon as there are no blockers?

Thanks
+Vinod

> On Oct 24, 2018, at 4:36 PM, Wangda Tan  wrote:
> 
> Hi, All
> 
> We have released Apache Hadoop 3.1.1 on Aug 8, 2018. To further
> improve the quality of the release, I plan to release 3.1.2
> by Nov. The focus of 3.1.2 will be fixing blockers / critical bugs
> and other enhancements. So far there are 231 JIRAs [1] with fix
> version marked as 3.1.2.
> 
> I plan to cut branch-3.1 on Nov 15 and vote for RC on the same day.
> 
> Please feel free to share your insights.
> 
> Thanks,
> Wangda Tan
> 
> [1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop Map/Reduce")
> AND fixVersion = 3.1.2


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop 3.1.2 release plan

2018-10-24 Thread Wangda Tan
Hi, All

We have released Apache Hadoop 3.1.1 on Aug 8, 2018. To further
improve the quality of the release, I plan to release 3.1.2
by Nov. The focus of 3.1.2 will be fixing blockers / critical bugs
and other enhancements. So far there are 231 JIRAs [1] with fix
version marked as 3.1.2.

I plan to cut branch-3.1 on Nov 15 and vote for RC on the same day.

Please feel free to share your insights.

Thanks,
Wangda Tan

[1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop Map/Reduce")
AND fixVersion = 3.1.2


[jira] [Created] (HDFS-14027) DFSStripedOutputStream should implement both hsync methods

2018-10-24 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-14027:


 Summary: DFSStripedOutputStream should implement both hsync methods
 Key: HDFS-14027
 URL: https://issues.apache.org/jira/browse/HDFS-14027
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen


In an internal Spark investigation, it appears that when 
[EventLoggingListener|https://github.com/apache/spark/blob/7251be0c04f0380208e0197e559158a9e1400868/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala#L152-L155]
 writes to an EC file, reads may throw exceptions or return odd output. A 
sample exception is:
{noformat}
hdfs dfs -cat /user/spark/applicationHistory/application_1540333573846_0003 | 
head -1
18/10/23 18:12:39 WARN impl.BlockReaderFactory: I/O error constructing remote 
block reader.
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:440)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:408)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:848)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:744)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:264)
at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:299)
at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:330)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:326)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:419)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:92)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:66)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:127)
at 
org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
at 
org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
18/10/23 18:12:39 WARN hdfs.DFSClient: Failed to connect to /HOST2_IP:20002 for 
blockBP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 received 
exception java.io.IOException:  Offset 0 and length 116161 don't match block 
BP-1488936467-HOST_IP-154092519:blk_-9223372036854774960_1085 ( blockLen 
110296 ), for OP_READ_BLOCK, self=/HOST_IP:48610, remote=/HOST2_IP:20002, for 
file /user/spark/applicationHistory/application_1540333573846_0003, for pool 
BP-1488936467-HOST_IP-154092519 block -9223372036854774960_1085
at 
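The fix the summary calls for (implementing both hsync variants) can be sketched as below. These are simplified, hypothetical stand-in classes, not the real DFSStripedOutputStream or FSOutputSummer; the point is only that a striped stream must override both the no-argument hsync() and the flag-taking hsync(EnumSet), since callers such as Spark's EventLoggingListener invoke the flag-taking variant directly.

```java
import java.util.EnumSet;

// Hypothetical stand-ins for the HDFS classes involved (not the real API).
public class HsyncSketch {
    enum SyncFlag { UPDATE_LENGTH, END_BLOCK }

    static class BaseStream {
        int flushedLength = 0;
        public void hsync() { /* base class: syncs current block only */ }
        public void hsync(EnumSet<SyncFlag> flags) { /* base class: same */ }
    }

    // A striped stream must override BOTH variants; overriding only the
    // no-argument one leaves flag-taking callers on the inherited
    // single-block path, so stripe buffers stay unflushed.
    static class StripedStream extends BaseStream {
        int data = 0;

        void write(int n) { data += n; }

        private void flushAllStripes() { flushedLength = data; }

        @Override public void hsync() { flushAllStripes(); }

        @Override public void hsync(EnumSet<SyncFlag> flags) { flushAllStripes(); }
    }

    public static void main(String[] args) {
        StripedStream s = new StripedStream();
        s.write(100);
        s.hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
        System.out.println(s.flushedLength); // prints 100: flushed via either variant
    }
}
```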

[jira] [Resolved] (HDFS-14018) Compilation fails in branch-3.0

2018-10-24 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14018.

Resolution: Done

> Compilation fails in branch-3.0
> ---
>
> Key: HDFS-14018
> URL: https://issues.apache.org/jira/browse/HDFS-14018
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.4
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> HDFS branch-3.0 compilation fails.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/rsharmaks/Repos/Apache/Commit_Repos/branch-3.0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java:[306,9]
>  cannot find symbol
> [ERROR]   symbol:   variable ArrayUtils
> [ERROR]   location: class 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager
> [ERROR]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (HDFS-14026) Overload BlockPoolTokenSecretManager.checkAccess to make storageId and storageType optional

2018-10-24 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-14026:
-

 Summary: Overload BlockPoolTokenSecretManager.checkAccess to make 
storageId and storageType optional
 Key: HDFS-14026
 URL: https://issues.apache.org/jira/browse/HDFS-14026
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: 3.2.0, 3.0.4, 3.1.2, 3.3.0


Change in {{BlockPoolTokenSecretManager.checkAccess}} by 
[HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
compatibility for applications using the private API (we've run into such apps).

Although there is no compatibility guarantee for the private interface, we can 
restore the original version of checkAccess as an overload.
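The overload idea can be sketched as follows. This is a hedged illustration, not the real Hadoop class: the class name, parameters, and validation logic are simplified stand-ins.

```java
// Hypothetical stand-in for BlockPoolTokenSecretManager.checkAccess,
// illustrating how an overload can restore the older signature.
public class CheckAccessSketch {

    // Newer signature: callers also pass storageType and storageId.
    public boolean checkAccess(String token, String user, long blockId,
                               String storageType, String storageId) {
        // The real code validates the token against every field; here we
        // only check that a token is present at all.
        return token != null && !token.isEmpty();
    }

    // Restored overload matching the old private API: delegates to the
    // new method with null storage fields, meaning "skip storage checks".
    public boolean checkAccess(String token, String user, long blockId) {
        return checkAccess(token, user, blockId, null, null);
    }

    public static void main(String[] args) {
        CheckAccessSketch mgr = new CheckAccessSketch();
        // An old-style caller keeps compiling and behaving as before.
        System.out.println(mgr.checkAccess("tok", "hdfs", 1L)); // prints "true"
    }
}
```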






[jira] [Created] (HDDS-732) Add read method which takes offset and length in SignedChunkInputStream

2018-10-24 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-732:
---

 Summary: Add read method which takes offset and length in 
SignedChunkInputStream
 Key: HDDS-732
 URL: https://issues.apache.org/jira/browse/HDDS-732
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


This Jira was created from the comments in HDDS-693.

 
{quote}We have only read(), we don't have read(byte[] b, int off, int len), we 
might see some slow operation during put with SignedInputStream.  
{quote}
100% agree. I haven't checked any performance numbers yet, but we need to do it 
sooner or later. I would implement this method in a separate jira, as it adds 
more complexity; as of now I would like to support the mkdir operations of 
the s3a unit tests (where the size is 0).
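A minimal sketch of the proposed bulk read. SignedChunkLikeStream is a hypothetical stand-in, not the real SignedChunkInputStream (it just wraps another stream rather than decoding signed chunks); the point is only that overriding read(byte[], int, int) forwards a whole range in one call instead of falling back to InputStream's byte-at-a-time loop.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical stand-in for SignedChunkInputStream (simplified).
public class SignedChunkLikeStream extends InputStream {
    private final InputStream wrapped;

    public SignedChunkLikeStream(InputStream wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public int read() throws IOException {
        return wrapped.read(); // existing single-byte path
    }

    // Proposed addition: without this override, InputStream.read(byte[],
    // int, int) loops calling read() once per byte, which is slow for
    // large puts.
    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return wrapped.read(b, off, len);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello chunk".getBytes();
        InputStream in = new SignedChunkLikeStream(new ByteArrayInputStream(data));
        byte[] buf = new byte[5];
        int n = in.read(buf, 0, 5); // one bulk call
        System.out.println(n + " " + new String(buf, 0, n)); // prints "5 hello"
    }
}
```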






[jira] [Created] (HDFS-14025) TestPendingReconstruction.testPendingAndInvalidate fails

2018-10-24 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14025:
---

 Summary: TestPendingReconstruction.testPendingAndInvalidate fails
 Key: HDFS-14025
 URL: https://issues.apache.org/jira/browse/HDFS-14025
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Reference:

[https://builds.apache.org/job/PreCommit-HDFS-Build/25322/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestPendingReconstruction/testPendingAndInvalidate/]

Error Message :
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 1 at 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReconstruction.testPendingAndInvalidate(TestPendingReconstruction.java:457)
{code}
 






[jira] [Reopened] (HDDS-714) Bump protobuf version to 3.5.1

2018-10-24 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDDS-714:


I resolved the wrong issue... reopening.

> Bump protobuf version to 3.5.1
> --
>
> Key: HDDS-714
> URL: https://issues.apache.org/jira/browse/HDDS-714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-714.001.patch
>
>
> This jira proposes to bump the current protobuf version to 3.5.1. This is 
> needed to make Ozone compile on Power PC architecture.






[jira] [Created] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2018-10-24 Thread CR Hota (JIRA)
CR Hota created HDFS-14024:
--

 Summary: RBF: ProvidedCapacityTotal json exception in 
NamenodeHeartbeatService
 Key: HDFS-14024
 URL: https://issues.apache.org/jira/browse/HDFS-14024
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota
Assignee: CR Hota


Routers may be proxying for a downstream NameNode that has NOT been migrated to 
understand "ProvidedCapacityTotal". The updateJMXParameters method in 
NamenodeHeartbeatService should handle this without breaking.

 
{code:java}
jsonObject.getLong("MissingBlocks"),
jsonObject.getLong("PendingReplicationBlocks"),
jsonObject.getLong("UnderReplicatedBlocks"),
jsonObject.getLong("PendingDeletionBlocks"),
jsonObject.getLong("ProvidedCapacityTotal"));
{code}
One way to do this is to create a JSON wrapper which gives back a default value 
if the JSON node is not found.
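That wrapper idea can be sketched like this. LenientMetrics is a hypothetical name, and it is backed by a plain Map here rather than the real JMX JSON object, but the shape is the same: getLong falls back to a default instead of throwing when a key (e.g. ProvidedCapacityTotal from a non-migrated NameNode) is absent.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical wrapper: look up a metric, but return a caller-supplied
// default instead of throwing when the downstream NameNode does not
// report that key.
public class LenientMetrics {
    private final Map<String, Long> values;

    public LenientMetrics(Map<String, Long> values) {
        this.values = values;
    }

    public long getLong(String key, long defaultValue) {
        return values.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, Long> fromOldNameNode = new HashMap<>();
        fromOldNameNode.put("MissingBlocks", 3L);
        LenientMetrics m = new LenientMetrics(fromOldNameNode);
        System.out.println(m.getLong("MissingBlocks", 0L));         // prints 3
        System.out.println(m.getLong("ProvidedCapacityTotal", 0L)); // prints 0, no exception
    }
}
```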

 

 






[jira] [Created] (HDDS-731) Add shutdown hook to shutdown XceiverServerRatis on daemon stop

2018-10-24 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-731:
--

 Summary: Add shutdown hook to shutdown XceiverServerRatis on 
daemon stop
 Key: HDDS-731
 URL: https://issues.apache.org/jira/browse/HDDS-731
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Currently, on shutting down an Ozone datanode using "ozone --daemon stop 
datanode", the XceiverServerRatis is not shut down properly. This jira proposes 
to add a shutdown hook to take a Ratis snapshot on shutdown.
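A minimal sketch of the shutdown-hook idea, under the assumption that the server exposes some clean-stop call; RatisServer and snapshotAndStop are hypothetical names, not the real XceiverServerRatis API.

```java
// Hypothetical sketch: register a JVM shutdown hook that stops the Ratis
// server cleanly (taking a snapshot) when the daemon is stopped.
public class RatisShutdownHookSketch {

    // Stand-in for whatever clean-stop call the real server provides.
    public interface RatisServer {
        void snapshotAndStop();
    }

    // Returns the hook thread so a caller (or test) can inspect it.
    public static Thread install(RatisServer server) {
        Thread hook = new Thread(server::snapshotAndStop, "ratis-snapshot-hook");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        Thread hook = install(() -> System.out.println("snapshot taken, server stopped"));
        System.out.println("hook installed: " + hook.getName());
        // On normal JVM exit the hook runs and prints the message above.
    }
}
```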






[jira] [Created] (HDFS-14023) TestBalancerWithMultipleNameNodes#test1OutOf2BlockpoolsWithBlockPoolPolicy times out sometimes

2018-10-24 Thread JIRA
Íñigo Goiri created HDFS-14023:
--

 Summary: 
TestBalancerWithMultipleNameNodes#test1OutOf2BlockpoolsWithBlockPoolPolicy 
times out sometimes
 Key: HDFS-14023
 URL: https://issues.apache.org/jira/browse/HDFS-14023
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri


While running the tests for HDFS-14021, it seems like the test times out:
{code}
java.lang.Exception: test timed out after 60 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.sleep(TestBalancerWithMultipleNameNodes.java:353)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.wait(TestBalancerWithMultipleNameNodes.java:159)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.runBalancer(TestBalancerWithMultipleNameNodes.java:175)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.runTest(TestBalancerWithMultipleNameNodes.java:550)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.test1OutOf2BlockpoolsWithBlockPoolPolicy(TestBalancerWithMultipleNameNodes.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/936/

[Oct 23, 2018 6:16:06 AM] (sunilg) YARN-8873. [YARN-8811] Add CSI java-based 
client library. Contributed by
[Oct 23, 2018 7:28:41 AM] (rohithsharmaks) YARN-8826. Fix lingering timeline 
collector after serviceStop in
[Oct 23, 2018 11:22:17 AM] (shashikant) HDDS-708. Validate BCSID while reading 
blocks from containers in
[Oct 23, 2018 3:47:00 PM] (xiao) HADOOP-15873. Add JavaBeans Activation 
Framework API to LICENSE.txt.
[Oct 23, 2018 4:23:03 PM] (inigoiri) Revert "HADOOP-15836. Review of 
AccessControlList. Contributed by BELUGA
[Oct 23, 2018 5:49:15 PM] (jlowe) YARN-8904. TestRMDelegationTokens can fail in
[Oct 23, 2018 8:37:17 PM] (rkanter) YARN-8919. Some tests fail due to 
NoClassDefFoundError for
[Oct 23, 2018 9:28:37 PM] (inigoiri) HDFS-14004. 
TestLeaseRecovery2#testCloseWhileRecoverLease fails
[Oct 23, 2018 9:53:45 PM] (cliang) HDFS-13566. Add configurable additional RPC 
listener to NameNode.
[Oct 23, 2018 10:28:13 PM] (haibochen) MAPREDUCE-4669. MRAM web UI does not 
work with HTTPS. (Contributed by
[Oct 23, 2018 11:04:58 PM] (eyang) YARN-8814. Yarn Service Upgrade: Update the 
swagger definition. 




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-registry 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
At RegistryDNS.java:[line 900] 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
At RegistryDNS.java:[line 926] 
   Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
 InetAddress, int) At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
 InetAddress, int) At RegistryDNS.java:[line 850] 

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi 
   Useless control flow in 
csi.v0.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java: 
At Csi.java:[line 14406] 
   Useless control flow in 
csi.v0.Csi$ControllerGetCapabilitiesRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 36068] 
   Class csi.v0.Csi$ControllerPublishVolumeRequest defines non-transient 
non-serializable instance field controllerPublishSecrets_ In Csi.java:instance 
field controllerPublishSecrets_ In Csi.java 
   Class csi.v0.Csi$ControllerPublishVolumeRequest defines non-transient 
non-serializable instance field volumeAttributes_ In Csi.java:instance field 
volumeAttributes_ In Csi.java 
   Useless control flow in 
csi.v0.Csi$ControllerPublishVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 24945] 
   Class csi.v0.Csi$ControllerPublishVolumeResponse defines non-transient 
non-serializable instance field publishInfo_ In Csi.java:instance field 
publishInfo_ In Csi.java 
   Useless control flow in 
csi.v0.Csi$ControllerPublishVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 26369] 
   Useless control flow in 
csi.v0.Csi$ControllerServiceCapability$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 38230] 
   Useless control flow in 
csi.v0.Csi$ControllerServiceCapability$RPC$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 37737] 
   Class csi.v0.Csi$ControllerUnpublishVolumeRequest defines non-transient 
non-serializable instance field controllerUnpublishSecrets_ In 
Csi.java:instance field controllerUnpublishSecrets_ In Csi.java 
   Useless control flow in 
csi.v0.Csi$ControllerUnpublishVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 27383] 
   Useless control flow in 
csi.v0.Csi$ControllerUnpublishVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 28203] 
   Class csi.v0.Csi$CreateSnapshotRequest defines non-transient 

[jira] [Resolved] (HDDS-706) Invalid Getting Started docker-compose YAML

2018-10-24 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-706.
---
Resolution: Fixed

Thank you very much [~aperepel] for reporting this issue. I updated the wiki page 
with your fixed docker-compose.yaml 
([https://cwiki.apache.org/confluence/display/HADOOP/Getting+Started+with+docker]).

Just two minor notes:

1.) You can request wiki page access on the hdfs-dev mailing list (or here from 
[~anu]), and then you can modify any of the wiki pages.

2.) We now also have docker-compose files which are included in the release 
distribution (./compose) and are always tested. You can also use them (I will 
update the wiki page with this information very soon...)

Thank you again for the report and fix.

 

> Invalid Getting Started docker-compose YAML
> ---
>
> Key: HDDS-706
> URL: https://issues.apache.org/jira/browse/HDDS-706
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: website
>Reporter: Andrew Grande
>Priority: Major
> Attachments: docker-compose.yaml
>
>
> Consistent indentation is critical to the YAML file structure. The page here 
> lists a docker-compose file which is invalid.
> Here's the type of error one gets:
> {noformat}
> > docker-compose up -d
> ERROR: yaml.scanner.ScannerError: mapping values are not allowed here
>  in "./docker-compose.yaml", line 5, column 12{noformat}
>  I'm attaching a fixed YAML file; please ensure the getting started page 
> preserves the correct indentation and formatting.
>  






[jira] [Created] (HDDS-730) ozone fs cli prints hadoop fs in usage

2018-10-24 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-730:
---

 Summary: ozone fs cli prints hadoop fs in usage
 Key: HDDS-730
 URL: https://issues.apache.org/jira/browse/HDDS-730
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav
 Attachments: image-2018-10-24-17-15-39-097.png

The ozone fs cli help/usage page contains "Usage: hadoop fs [ generic options ]".

I believe the usage string should be updated.

See line 3 of the screenshot.

!image-2018-10-24-17-15-39-097.png|width=1693,height=1512!






[jira] [Created] (HDDS-729) OzoneFileSystem doesn't support modifyAclEntries

2018-10-24 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-729:
---

 Summary: OzoneFileSystem doesn't support modifyAclEntries
 Key: HDDS-729
 URL: https://issues.apache.org/jira/browse/HDDS-729
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


The Hive service performs a modifyAcl operation while starting, and since that 
isn't supported, it fails to start.
{code:java}
hdfs dfs -setfacl -m default:user:hive:rwx 
/warehouse/tablespace/external/hive{code}
Exception encountered :
{code:java}
[hdfs@ctr-e138-1518143905142-541600-02-02 ~]$ hdfs dfs -setfacl -m 
default:user:hive:rwx /warehouse/tablespace/external/hive
18/10/24 08:39:35 INFO conf.Configuration: Removed undeclared tags:
18/10/24 08:39:37 INFO conf.Configuration: Removed undeclared tags:
-setfacl: Fatal internal error
java.lang.UnsupportedOperationException: OzoneFileSystem doesn't support 
modifyAclEntries
at org.apache.hadoop.fs.FileSystem.modifyAclEntries(FileSystem.java:2926)
at 
org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processPath(AclCommands.java:256)
at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
18/10/24 08:39:37 INFO conf.Configuration: Removed undeclared tags:
{code}






[jira] [Created] (HDDS-728) Datanodes are going to dead state after some interval

2018-10-24 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HDDS-728:
---

 Summary: Datanodes are going to dead state after some interval
 Key: HDDS-728
 URL: https://issues.apache.org/jira/browse/HDDS-728
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.3.0
Reporter: Soumitra Sulav


Set up a 5-datanode Ozone cluster with HDP on top of it.

After restarting all HDP services a few times, I encountered the issue below, 
which is making the HDP services fail.

The same exception was observed in an old setup, but I thought it could have 
been an issue with that setup; now I have encountered the same issue in the new 
setup as well.
{code:java}
2018-10-24 10:42:03,308 WARN 
org.apache.ratis.grpc.server.GrpcServerProtocolService: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: Failed requestVote 
1672d28e-800f-4318-895b-1648976acff6->2974da2b-e765-43f9-8d30-45fe40dcb9ab#0
org.apache.ratis.protocol.GroupMismatchException: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: group-CE87A994686F not found.
at 
org.apache.ratis.server.impl.RaftServerProxy$ImplMap.get(RaftServerProxy.java:114)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImplFuture(RaftServerProxy.java:252)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImpl(RaftServerProxy.java:261)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImpl(RaftServerProxy.java:256)
at 
org.apache.ratis.server.impl.RaftServerProxy.requestVote(RaftServerProxy.java:411)
at 
org.apache.ratis.grpc.server.GrpcServerProtocolService.requestVote(GrpcServerProtocolService.java:54)
at 
org.apache.ratis.proto.grpc.RaftServerProtocolServiceGrpc$MethodHandlers.invoke(RaftServerProtocolServiceGrpc.java:319)
at 
org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:707)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at 
org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-10-24 10:42:03,342 WARN 
org.apache.ratis.grpc.server.GrpcServerProtocolService: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: Failed requestVote 
7839294e-5657-447f-b320-6b390fffb963->2974da2b-e765-43f9-8d30-45fe40dcb9ab#0
org.apache.ratis.protocol.GroupMismatchException: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: group-CE87A994686F not found.
at 
org.apache.ratis.server.impl.RaftServerProxy$ImplMap.get(RaftServerProxy.java:114)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImplFuture(RaftServerProxy.java:252)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImpl(RaftServerProxy.java:261)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImpl(RaftServerProxy.java:256)
at 
org.apache.ratis.server.impl.RaftServerProxy.requestVote(RaftServerProxy.java:411)
at 
org.apache.ratis.grpc.server.GrpcServerProtocolService.requestVote(GrpcServerProtocolService.java:54)
at 
org.apache.ratis.proto.grpc.RaftServerProtocolServiceGrpc$MethodHandlers.invoke(RaftServerProtocolServiceGrpc.java:319)
at 
org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:707)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at 
org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-10-24 10:42:04,466 WARN 
org.apache.ratis.grpc.server.GrpcServerProtocolService: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: Failed requestVote 
1672d28e-800f-4318-895b-1648976acff6->2974da2b-e765-43f9-8d30-45fe40dcb9ab#0
org.apache.ratis.protocol.GroupMismatchException: 
2974da2b-e765-43f9-8d30-45fe40dcb9ab: group-CE87A994686F not found.
at 
org.apache.ratis.server.impl.RaftServerProxy$ImplMap.get(RaftServerProxy.java:114)
at 
org.apache.ratis.server.impl.RaftServerProxy.getImplFuture(RaftServerProxy.java:252)
at 

[jira] [Created] (HDDS-727) ozone.log is not getting created in logs directory

2018-10-24 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-727:
---

 Summary: ozone.log is not getting created in logs directory
 Key: HDDS-727
 URL: https://issues.apache.org/jira/browse/HDDS-727
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.3.0
Reporter: Nilotpal Nandi


ozone.log is no longer present in the logs directory of datanodes.

It needs to be added back.






[jira] [Created] (HDDS-726) Ozone Client should update SCM to move the container out of allocation path in case a write transaction fails

2018-10-24 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-726:


 Summary: Ozone Client should update SCM to move the container out 
of allocation path in case a write transaction fails
 Key: HDDS-726
 URL: https://issues.apache.org/jira/browse/HDDS-726
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


Once a container write transaction fails, the container will be marked 
corrupted. When the Ozone client gets an exception in such a case, it should 
tell SCM to move the container out of the allocation path. SCM will 
eventually close the container.
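
The proposed client-side flow could be sketched roughly as below. `ScmClient` and `excludeContainer` are hypothetical stand-ins for the real Ozone/SCM interfaces; this sketch does not claim to match the actual API:

```java
// Hypothetical sketch of the proposed flow: on a failed write, the client
// reports the container to SCM so it is taken out of the allocation path.
// ScmClient/excludeContainer are illustrative names, not the real Ozone API.
class FailedWriteHandler {
    /** Minimal stand-in for the SCM client interface (an assumption). */
    interface ScmClient {
        void excludeContainer(long containerId);
    }

    private final ScmClient scm;

    FailedWriteHandler(ScmClient scm) {
        this.scm = scm;
    }

    /** Reports the failed container to SCM; returns true once reported. */
    boolean onWriteFailure(long containerId, Exception cause) {
        // Tell SCM to take the container out of the allocation path;
        // SCM is then expected to close the container eventually.
        scm.excludeContainer(containerId);
        return true;
    }
}
```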






[jira] [Created] (HDDS-725) Exception thrown in loop while trying to write a file in ozonefs

2018-10-24 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-725:
---

 Summary: Exception thrown in loop while trying to write a file in 
ozonefs
 Key: HDDS-725
 URL: https://issues.apache.org/jira/browse/HDDS-725
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.3.0
 Environment: Ran the following command:



ozone fs -put 2GB /testdir5/

Exceptions are thrown continuously in a loop. Please note that there are 8 
datanodes alive in the cluster.
{noformat}
root@ctr-e138-1518143905142-53-01-08 logs]# /root/allssh.sh 'jps -l | 
grep Datanode'

Host::172.27.20.96

411564 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.20.91

472897 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.38.9

351139 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.24.90

314304 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.15.139

324820 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.10.199


Host::172.27.15.131


Host::172.27.57.0


Host::172.27.23.139

627053 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.68.65

557443 org.apache.hadoop.ozone.HddsDatanodeService

Host::172.27.19.74


Host::172.27.85.64

508121 org.apache.hadoop.ozone.HddsDatanodeService{noformat}
 
{noformat}
 
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: Election REJECTED; received 0 response(s) 
[] and 2 exception(s); 7c3b2fb1-cf16-4e5f-94dc-8a089492ad57:t16296, 
leader=null, voted=7c3b2fb1-cf16-4e5f-94dc-8a089492ad57, raftlog=[(t:37, 
i:271)], conf=271: [7c3b2fb1-cf16-4e5f-94dc-8a089492ad57:172.27.85.64:9858, 
86f9e313-ae49-4675-95d7-27856641aee1:172.27.15.131:9858, 
9524f4e2-9031-4852-ab7c-11c2da3460db:172.27.57.0:9858], old=null
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 0: 
java.util.concurrent.ExecutionException: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.LeaderElection: 1: 
java.util.concurrent.ExecutionException: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RaftServerImpl: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes role from CANDIDATE to FOLLOWER at 
term 16296 for changeToFollower
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RoleInfo: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: shutdown LeaderElection
2018-10-24 09:49:47,093 INFO org.apache.ratis.server.impl.RoleInfo: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: start FollowerState
2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.FollowerState: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes to CANDIDATE, lastRpcTime:1078, 
electionTimeout:1078ms
2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.RoleInfo: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: shutdown FollowerState
2018-10-24 09:49:48,171 INFO org.apache.ratis.server.impl.RaftServerImpl: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 changes role from FOLLOWER to CANDIDATE at 
term 16296 for changeToCandidate
2018-10-24 09:49:48,172 INFO org.apache.ratis.server.impl.RoleInfo: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: start LeaderElection
2018-10-24 09:49:48,173 INFO org.apache.ratis.server.impl.LeaderElection: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57: begin an election in Term 16297
2018-10-24 09:49:48,174 INFO org.apache.ratis.server.impl.LeaderElection: 
7c3b2fb1-cf16-4e5f-94dc-8a089492ad57 got exception when requesting votes: {}
java.util.concurrent.ExecutionException: 
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
exception
 at java.util.concurrent.FutureTask.report(FutureTask.java:122)
 at java.util.concurrent.FutureTask.get(FutureTask.java:192)
 at 
org.apache.ratis.server.impl.LeaderElection.waitForResults(LeaderElection.java:214)
 at 
org.apache.ratis.server.impl.LeaderElection.askForVotes(LeaderElection.java:146)
 at org.apache.ratis.server.impl.LeaderElection.run(LeaderElection.java:102)
Caused by: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: 
UNAVAILABLE: io exception
 at 
org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:222)
 at 

[jira] [Created] (HDDS-724) Delimiters (/) should not be allowed in bucket name when executing bucket update/delete command.

2018-10-24 Thread chencan (JIRA)
chencan created HDDS-724:


 Summary: Delimiters (/) should not be allowed in the bucket name when 
executing the bucket update/delete command.
 Key: HDDS-724
 URL: https://issues.apache.org/jira/browse/HDDS-724
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: chencan


When executing the following commands, the delimiters (/) after the bucket 
name are ignored.
    ozone sh bucket delete /volume1/bucket1/name1
    ozone sh bucket update /volume1/bucket1/name1
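
One possible fix shape is to validate the bucket address and reject trailing segments instead of silently dropping them. The class and method names below are illustrative, not the actual Ozone shell code:

```java
// Hypothetical validation sketch: a bucket address of the form
// /volume/bucket is valid; anything after the bucket segment
// (e.g. /volume/bucket/name1) is rejected rather than ignored.
final class BucketAddressValidator {
    private BucketAddressValidator() {}

    static boolean isValidBucketAddress(String address) {
        if (address == null || !address.startsWith("/")) {
            return false;
        }
        // Split into segments; expect exactly volume and bucket, both non-empty.
        String[] parts = address.substring(1).split("/", -1);
        return parts.length == 2 && !parts[0].isEmpty() && !parts[1].isEmpty();
    }
}
```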






[jira] [Created] (HDDS-723) CloseContainerCommandHandler throwing NullPointerException

2018-10-24 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-723:
---

 Summary: CloseContainerCommandHandler throwing NullPointerException
 Key: HDDS-723
 URL: https://issues.apache.org/jira/browse/HDDS-723
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.3.0
Reporter: Nilotpal Nandi


Seeing a NullPointerException while CloseContainerCommandHandler is trying 
to close a container.

{noformat}
2018-10-24 04:22:04,699 INFO org.apache.ratis.server.storage.RaftLogWorker: 
8a61160b-8985-412e-9f25-9e65ceafa824-RaftLogWorker got closed and hit exception
java.io.IOException: java.lang.InterruptedException
 at org.apache.ratis.util.IOUtils.asIOException(IOUtils.java:51)
 at 
org.apache.ratis.server.storage.RaftLogWorker.flushWrites(RaftLogWorker.java:232)
 at 
org.apache.ratis.server.storage.RaftLogWorker.access$600(RaftLogWorker.java:51)
 at 
org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:309)
 at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:179)
 at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
 at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:347)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
 at 
org.apache.ratis.server.storage.RaftLogWorker.flushWrites(RaftLogWorker.java:230)
 ... 4 more
2018-10-24 04:22:04,712 INFO org.apache.ratis.server.storage.RaftLogWorker: 
8a61160b-8985-412e-9f25-9e65ceafa824-RaftLogWorker close()
2018-10-24 04:22:31,293 ERROR 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler:
 Can't close container 18
java.lang.NullPointerException
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:78)
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
 at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:381)
 at java.lang.Thread.run(Thread.java:745)
2018-10-24 04:22:31,293 ERROR 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler:
 Can't close container 10
java.lang.NullPointerException
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:78)
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
 at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:381)
 at java.lang.Thread.run(Thread.java:745)
2018-10-24 04:22:31,293 ERROR 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler:
 Can't close container 14
java.lang.NullPointerException
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:78)
 at 
org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
 at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:381)
 at java.lang.Thread.run(Thread.java:745){noformat}
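
A defensive sketch of the handler path: look the container up first and log-and-skip when the id is unknown, rather than dereferencing a null container. The names below are illustrative, not the actual CloseContainerCommandHandler code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical null-guard sketch for the path that currently throws
// NullPointerException when a close command arrives for an unknown container.
class CloseContainerGuard {
    private final Map<Long, Object> containers = new ConcurrentHashMap<>();

    void addContainer(long id, Object container) {
        containers.put(id, container);
    }

    /** Returns true if a close was attempted, false if the id was unknown. */
    boolean handleClose(long containerId) {
        Object container = containers.get(containerId);
        if (container == null) {
            // Previously this fell through and dereferenced null.
            System.err.println("Can't close container " + containerId
                + ": not found on this datanode");
            return false;
        }
        // ... proceed to close the container ...
        return true;
    }
}
```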
 


