[jira] [Commented] (HDDS-1909) Use new HA code for Non-HA in OM

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922525#comment-16922525
 ] 

Hudson commented on HDDS-1909:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17223/])
HDDS-1909. Use new HA code for Non-HA in OM. (#1225) (github: rev 
f25fe9274323298afa476b3dd282bd71d4d1944f)
* (edit) hadoop-ozone/dist/src/main/compose/ozone/test.sh
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/bucket/TestOMBucketSetPropertyResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/acl/TestOMVolumeAddAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/bucket/TestS3BucketCreateResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/recovery/TestReconOmMetadataManagerImpl.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone-topology/test.sh
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeSetQuotaResponse.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/TableCacheImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMGetDelegationTokenRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmAcls.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmMetrics.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/volume/TestOMVolumeCreateResponse.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestScmSafeMode.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestS3BucketManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/acl/TestOMVolumeSetAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/bucket/TestOMBucketCreateResponse.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/security/OMGetDelegationTokenResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/

[jira] [Commented] (HDDS-1810) SCM command to Activate and Deactivate pipelines

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922520#comment-16922520
 ] 

Hudson commented on HDDS-1810:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17223/])
HDDS-1810. SCM command to Activate and Deactivate pipelines. (#1224) (nanda: 
rev 0b9704f6106587d9df06c8b3860a23afbd43)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ActivatePipelineSubcommand.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/SCMAction.java
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/DeactivatePipelineSubcommand.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineStateManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java


> SCM command to Activate and Deactivate pipelines
> 
>
> Key: HDDS-1810
> URL: https://issues.apache.org/jira/browse/HDDS-1810
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM, SCM Client
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> It will be useful to have an SCM command to temporarily deactivate and 
> re-activate a pipeline. This will help a lot when debugging pipelines.
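
For context, the patch wires new ActivatePipelineSubcommand / DeactivatePipelineSubcommand classes into the SCM CLI. Below is a minimal sketch of what such a picocli subcommand could look like, assuming the ScmClient#activatePipeline(HddsProtos.PipelineID) API this change adds; the class shape and wiring are illustrative, not the committed code:

{code:java}
import java.util.concurrent.Callable;

import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.client.ScmClient;

import picocli.CommandLine.Command;
import picocli.CommandLine.Parameters;

/** Sketch of a subcommand that asks SCM to activate a pipeline. */
@Command(name = "activate", description = "Activates the given pipeline")
public class ActivatePipelineSketch implements Callable<Void> {

  @Parameters(description = "ID of the pipeline to activate")
  private String pipelineId;

  private final ScmClient scmClient;

  public ActivatePipelineSketch(ScmClient scmClient) {
    this.scmClient = scmClient;
  }

  @Override
  public Void call() throws Exception {
    // Wrap the raw ID string into the protobuf PipelineID the SCM protocol expects.
    scmClient.activatePipeline(
        HddsProtos.PipelineID.newBuilder().setId(pipelineId).build());
    return null;
  }
}
{code}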






[jira] [Commented] (HDDS-2077) Add maven-gpg-plugin.version to pom.ozone.xml

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922528#comment-16922528
 ] 

Hudson commented on HDDS-2077:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17223/])
HDDS-2077. Add maven-gpg-plugin.version to pom.ozone.xml. (#1396) (nanda: rev 
1ae775975bc43bfc20ca0e61ad045a521e227f7c)
* (edit) pom.ozone.xml


> Add maven-gpg-plugin.version to pom.ozone.xml
> -
>
> Key: HDDS-2077
> URL: https://issues.apache.org/jira/browse/HDDS-2077
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{pom.ozone.xml}} is missing maven-gpg-plugin.version.






[jira] [Commented] (HDDS-1783) Latency metric for applyTransaction in ContainerStateMachine

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922519#comment-16922519
 ] 

Hudson commented on HDDS-1783:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17223/])
HDDS-1783 : Latency metric for applyTransaction in ContainerStateMachine 
(31469764+bshashikant: rev b53d19a343e110dbcf0ec710e9d491ec6bd77a51)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java


> Latency metric for applyTransaction in ContainerStateMachine
> 
>
> Key: HDDS-1783
> URL: https://issues.apache.org/jira/browse/HDDS-1783
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> applyTransaction is invoked from the Ratis pipeline, and the 
> ContainerStateMachine uses an async executor to complete the task.
>  
> We require a latency metric to track the performance of log apply operations 
> in the state machine. This will measure the end-to-end latency of apply, which 
> includes the queueing delay in the executor queues. Combined with the latency 
> measurement in HddsDispatcher, this will indicate whether the executors are 
> overloaded.
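
As an illustration of the idea (not the committed CSMMetrics code), end-to-end latency including executor queueing can be captured by starting a timer at submission and stopping it only when the returned future completes. Class and metric names below are invented:

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.util.Time;

/** Sketch: end-to-end applyTransaction latency, executor queueing included. */
@Metrics(about = "Container state machine latency sketch", context = "dfs")
public class CsmLatencySketch {

  @Metric private MutableRate applyTransactionNanos;

  /** Start the clock at submission; stop it only when the future completes. */
  public <T> CompletableFuture<T> timeApply(CompletableFuture<T> applyFuture) {
    final long start = Time.monotonicNowNanos();
    return applyFuture.whenComplete(
        (result, error) -> applyTransactionNanos.add(
            Time.monotonicNowNanos() - start));
  }
}
{code}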






[jira] [Commented] (HDFS-14630) Configuration.getTimeDurationHelper() should not log time unit warning in info log.

2019-09-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921259#comment-16921259
 ] 

Hudson commented on HDFS-14630:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17222 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17222/])
HDFS-14630. Configuration.getTimeDurationHelper() should not log time 
(surendralilhore: rev 5ff76cb8bc69d68ba7c9487d00b1dc753d616bb2)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Configuration.getTimeDurationHelper() should not log time unit warning in 
> info log.
> ---
>
> Key: HDFS-14630
> URL: https://issues.apache.org/jira/browse/HDFS-14630
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14630.001.patch, HDFS-14630.patch
>
>
> To solve the [HDFS-12920|https://issues.apache.org/jira/browse/HDFS-12920] issue 
> we configured "dfs.client.datanode-restart.timeout" without a time unit. Now the 
> log file is full of
> {noformat}
> 2019-06-22 20:13:14,605 | INFO  | pool-12-thread-1 | No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS 
> org.apache.hadoop.conf.Configuration.logDeprecation(Configuration.java:1409){noformat}
> No need to log this at INFO level; just document the behavior in the property 
> description.
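
For illustration, a minimal sketch of the Configuration.getTimeDuration behavior involved: a bare value falls back to the unit supplied by the caller (which is when the warning above is logged), while a value with an explicit unit suffix parses without any warning. The property name is taken from the description; the rest is a self-contained example.

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class TimeDurationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // No unit suffix: falls back to the caller's unit (SECONDS here) and
    // logs the "No unit for ... assuming SECONDS" line this issue demotes.
    conf.set("dfs.client.datanode-restart.timeout", "30");
    long bare = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);

    // Explicit unit suffix: parsed directly, no warning at all.
    conf.set("dfs.client.datanode-restart.timeout", "30s");
    long suffixed = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30, TimeUnit.SECONDS);

    System.out.println(bare + " " + suffixed); // prints: 30 30
  }
}
{code}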






[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-09-03 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921256#comment-16921256
 ] 

Hudson commented on HDFS-14706:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17222 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17222/])
Revert "HDFS-14706. Checksums are not checked if block meta file is less 
(weichiu: rev d207aba0265e786904ee2ac4e612c5537401c90d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
HDFS-14706. Checksums are not checked if block meta file is less than 7 
(weichiu: rev 915cbc91c0a12cc7b4d3ef4ea951941defbbcb33)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/DataChecksum.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/CorruptMetaHeaderException.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestCorruptMetadataFile.java


> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch, HDFS-14706.008.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that it is between zero and 
> seven bytes in length, then the header is incomplete. In BlockSender.java the 
> logic checks whether the meta file length is at least the size of the header; 
> if it is not, it does not raise an error, but instead returns a NULL checksum 
> type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means the 
> corruption will go unnoticed and HDFS will never repair it. Even the Volume 
> Scanner will not notice the corruption, as the checksums are silently ignored.
> Additionally, if the meta file does have enough bytes that it attempts to load 
> the header, and the header is corrupted such that it is not valid, it can 
> cause the datanode Volume Scanner to exit with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:4

[jira] [Commented] (HDFS-14654) RBF: TestRouterRpc#testNamenodeMetrics is flaky

2019-09-02 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920696#comment-16920696
 ] 

Hudson commented on HDFS-14654:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17221/])
HDFS-14654. RBF: TestRouterRpc#testNamenodeMetrics is flaky. Contributed 
(ayushsaxena: rev 040f6e93bb803287bfc73424ec5a64745938d712)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java


> RBF: TestRouterRpc#testNamenodeMetrics is flaky
> ---
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14654.001.patch, HDFS-14654.002.patch, 
> HDFS-14654.003.patch, HDFS-14654.004.patch, HDFS-14654.005.patch, error.log
>
>
> The test sometimes passes and sometimes fails.






[jira] [Commented] (HDFS-13843) RBF: Add optional parameter -d for detailed listing of mount points.

2019-09-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920600#comment-16920600
 ] 

Hudson commented on HDFS-13843:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17220 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17220/])
HDFS-13843. RBF: Add optional parameter -d for detailed listing of mount 
(ayushsaxena: rev c3abfcefdd256650b2a45ae2aac53c4a22721a46)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java


> RBF: Add optional parameter -d for detailed listing of mount points.
> 
>
> Key: HDFS-13843
> URL: https://issues.apache.org/jira/browse/HDFS-13843
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0
>
> Attachments: HDFS-13843-03.patch, HDFS-13843-04.patch, 
> HDFS-13843.01.patch, HDFS-13843.02.patch
>
>
> *Scenario:*
> Execute the below add/update commands for a single mount entry for a single 
> nameservice pointing to multiple destinations.
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1,/tmp2,/tmp3
>  # hdfs dfsrouteradmin -update /apps1 hacluster /tmp1,/tmp2,/tmp3 -order 
> RANDOM
> *Actual:* With the above commands, the mount entry is successfully updated, 
> but order information such as HASH or RANDOM is not displayed in the mount 
> entries and is also not displayed in the federation router UI. However, order 
> information is updated properly when there are multiple nameservices; the 
> issue is with a single nameservice having multiple destinations.
> *Expected:* 
> *Order information should be updated in the mount entries so that the user 
> knows which order has been set.*






[jira] [Commented] (HDFS-14711) RBF: RBFMetrics throws NullPointerException if stateStore disabled

2019-09-01 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920325#comment-16920325
 ] 

Hudson commented on HDFS-14711:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17218 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17218/])
HDFS-14711. RBF: RBFMetrics throws NullPointerException if stateStore 
(ayushsaxena: rev 18d74fe41c0982dc1540367805b0c3d0d4fc29d3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java


> RBF: RBFMetrics throws NullPointerException if stateStore disabled
> --
>
> Key: HDFS-14711
> URL: https://issues.apache.org/jira/browse/HDFS-14711
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14711.001.patch, HDFS-14711.002.patch, 
> HDFS-14711.003.patch, HDFS-14711.004.patch, HDFS-14711.005.patch
>
>
> In the current implementation, if {{stateStore}} initialization fails, only an 
> error message is logged. In fact, RBFMetrics cannot work normally in this 
> state.
> {code:java}
> 2019-08-08 22:43:58,024 [qtp812446698-28] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute FilesTotal of 
> Hadoop:service=NameNode,name=FSNamesystem-2 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
> at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
> at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
> at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:539)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.ja

[jira] [Commented] (HDDS-2060) Create Ozone specific LICENSE file for the Ozone source and binary packages

2019-08-31 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920225#comment-16920225
 ] 

Hudson commented on HDDS-2060:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17214 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17214/])
HDDS-2060. Create Ozone specific LICENSE file for the Ozone source (aengineer: 
rev c187d2cb9f34c772fb12d6468f39fa8d987a2d0a)
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-jquery.txt
* (add) hadoop-ozone/dist/src/main/license/src/licenses/LICENSE-angular-nvd3.txt
* (edit) hadoop-ozone/pom.xml
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.servlet.jsp-jsp-api.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-ratis-thirdparty-misc.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.sun.xml.bind.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.openjdk.jmh-jmh-generator-annprocess.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.activation-activation.txt
* (add) hadoop-ozone/dist/src/main/license/src/licenses/IMPORTANT.md
* (delete) hadoop-ozone/assemblies/pom.xml
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.sun.jersey.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-jetty.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-angular-nvd3.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/NOTICE-ratis-thirtparty-misc.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.google.code.findbugs-jsr305.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-jakarta.annotation-jakarta.annotation-api.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-net.sf.jopt-simple-jopt-simple.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.servlet-javax.servlet-api.txt
* (add) hadoop-ozone/dist/src/main/license/bin/LICENSE.txt
* (add) hadoop-ozone/dist/src/main/license/src/licenses/LICENSE-jquery.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-angular.txt
* (add) hadoop-ozone/dist/src/main/license/src/licenses/LICENSE-angular.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.codehaus.mojo-animal-sniffer-annotations.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.slf4j.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.ow2.asm-asm.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.google.re2j-re2j.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.ws.rs-javax.ws.rs-api.txt
* (add) hadoop-ozone/dist/src/main/license/src/licenses/LICENSE-d3.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.jcraft-jsch.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.openjdk.jmh-jmh-core.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.glassfish.hk2.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.fusesource.leveldbjni-leveldbjni-all.txt
* (add) hadoop-ozone/dist/src/main/assemblies/ozone-src.xml
* (add) hadoop-ozone/dist/src/main/license/bin/NOTICE.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-d3.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-org.codehaus.woodstox-stax2-api.txt
* (add) hadoop-ozone/dist/src/main/license/src/NOTICE.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.el-javax.el-api.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-dnsjava-dnsjava.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.xml.bind-jaxb-api.txt
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-jersey.txt
* (edit) pom.ozone.xml
* (delete) hadoop-hdds/framework/src/main/resources/webapps/static/dfs-dust.js
* (add) hadoop-ozone/dist/src/main/license/src/licenses/LICENSE-nvd3.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-nvd3.txt
* (delete) hadoop-ozone/assemblies/src/main/resources/assemblies/ozone-src.xml
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.interceptor-javax.interceptor-api.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.ws.rs-jsr311-api.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-com.thoughtworks.paranamer-paranamer.txt
* (add) 
hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-javax.annotation-javax.annotation-api.txt
* (add) hadoop-ozone/dist/src/main/license/src/LICENSE.txt
* (add) hadoop-ozone/dist/src/main/license/bin/licenses/LICENSE-protobuf.txt


> Create Ozone specific LICENSE file for the Ozone source and binary packages
> -

[jira] [Commented] (HDDS-1413) TestCloseContainerCommandHandler is flaky

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919989#comment-16919989
 ] 

Hudson commented on HDDS-1413:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17210 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17210/])
HDDS-1413. Attempt to fix TestCloseContainerCommandHandler by adjusting 
(aengineer: rev a2d083f2c546ef9e0a543ea287c2435c6440d9aa)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerCommandHandler.java


> TestCloseContainerCommandHandler is flaky
> -
>
> Key: HDDS-1413
> URL: https://issues.apache.org/jira/browse/HDDS-1413
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: ozone-flaky-test, pull-request-available
> Fix For: 0.5.0
>
> Attachments: ci.log
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestCloseContainerCommandHandler.testCloseContainerViaStandalone is flaky; we 
> get the exception below when it fails.
> {code}
> org.apache.ratis.protocol.NotLeaderException: Server 
> a200dff7-f26d-4be3-addd-e8e0ca569ae0 is not the leader (null). Request must 
> be sent to leader.
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.generateNotLeaderException(RaftServerImpl.java:448)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.checkLeaderState(RaftServerImpl.java:419)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.submitClientRequestAsync(RaftServerImpl.java:514)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$submitClientRequestAsync$7(RaftServerProxy.java:333)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$null$5(RaftServerProxy.java:328)
>   at org.apache.ratis.util.JavaUtils.callAsUnchecked(JavaUtils.java:109)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$submitRequest$6(RaftServerProxy.java:328)
>   at 
> java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:981)
>   at 
> java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2124)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.submitRequest(RaftServerProxy.java:327)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.submitClientRequestAsync(RaftServerProxy.java:333)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.submitRequest(XceiverServerRatis.java:485)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler.createContainer(TestCloseContainerCommandHandler.java:310)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler.testCloseContainerViaStandalone(TestCloseContainerCommandHandler.java:111)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.
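
The commit subject says the test was fixed "by adjusting" something (truncated above). Independent of that, a common way to de-flake tests that hit NotLeaderException is to poll until a leader is actually elected before submitting the request. The following is a generic, hypothetical wait-loop sketch, not the committed fix:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

/** Hypothetical sketch: poll until the Ratis server reports a leader so a
 *  test does not race leader election and trip NotLeaderException. */
public class WaitForLeaderSketch {
  static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Leader was not elected in time");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{code}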

[jira] [Commented] (HDDS-2042) Avoid log on console with Ozone shell

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919983#comment-16919983
 ] 

Hudson commented on HDDS-2042:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17209 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17209/])
HDDS-2042. Avoid log on console with Ozone shell (aengineer: rev 
c4411f7fdf745eefac32749dad4388635a0a9aae)
* (add) hadoop-ozone/dist/src/main/conf/ozone-shell-log4j.properties
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (edit) hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (edit) hadoop-ozone/dist/src/main/smoketest/createmrenv.robot


> Avoid log on console with Ozone shell
> -
>
> Key: HDDS-2042
> URL: https://issues.apache.org/jira/browse/HDDS-2042
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> HDDS-1489 fixed several sample docker compose configs to avoid unnecessary 
> messages on the console when running e.g. {{ozone sh key put}}. The goal of 
> this task is to fix the remaining ones.






[jira] [Commented] (HDDS-2061) Add hdds.container.chunk.persistdata as exception to TestOzoneConfigurationFields

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919841#comment-16919841
 ] 

Hudson commented on HDDS-2061:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17208 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17208/])
HDDS-2061. Add hdds.container.chunk.persistdata as exception to (bharat: rev 
70855126d16c42d2c18bb6c190901e4912b96cec)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java


> Add hdds.container.chunk.persistdata as exception to 
> TestOzoneConfigurationFields
> -
>
> Key: HDDS-2061
> URL: https://issues.apache.org/jira/browse/HDDS-2061
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HDDS-1094 introduced a new config key 
> ([hdds.container.chunk.persistdata|https://github.com/apache/hadoop/blob/96f7dc1992246a16031f613e55dc39ea0d64acd1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java#L241-L245]),
>  which needs to be added to {{ozone-default.xml}}, too.
> https://github.com/elek/ozone-ci/blob/master/trunk/trunk-nightly-20190830-rr75b/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestOzoneConfigurationFields.txt
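
TestOzoneConfigurationFields builds on Hadoop's TestConfigurationFieldsBase, which cross-checks the config-key constant classes against the default XML file and keeps a skip list for keys that intentionally have no XML entry. A minimal sketch of registering such an exception, assuming the base class's field names (configurationPropsToSkipCompare, xmlFilename, configurationClasses):

{code:java}
import java.util.HashSet;

import org.apache.hadoop.conf.TestConfigurationFieldsBase;
import org.apache.hadoop.hdds.HddsConfigKeys;

/** Sketch following the TestConfigurationFieldsBase conventions. */
public class TestOzoneConfigurationFieldsSketch
    extends TestConfigurationFieldsBase {

  @Override
  public void initializeMemberVariables() {
    xmlFilename = "ozone-default.xml";
    configurationClasses = new Class[] { HddsConfigKeys.class };

    // Keys that deliberately have no ozone-default.xml entry.
    configurationPropsToSkipCompare = new HashSet<>();
    configurationPropsToSkipCompare.add("hdds.container.chunk.persistdata");
  }
}
{code}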






[jira] [Commented] (HDDS-2063) Integration tests create untracked file audit.log

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919839#comment-16919839
 ] 

Hudson commented on HDDS-2063:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17207 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17207/])
HDDS-2063. Integration tests create untracked file audit.log (#1384) (bharat: 
rev 472a26d2b8a5f4c91ba851f48345d33481f5bb24)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
* (delete) hadoop-ozone/integration-test/src/test/resources/log4j2.properties
* (add) hadoop-ozone/integration-test/src/test/resources/auditlog.properties


> Integration tests create untracked file audit.log
> -
>
> Key: HDDS-2063
> URL: https://issues.apache.org/jira/browse/HDDS-2063
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> An untracked {{audit.log}} file is created during an integration test run, 
> e.g.:
> {code}
> $ mvn -Phdds -pl :hadoop-ozone-integration-test test 
> -Dtest=Test2WayCommitInRatis
> ...
> $ git status
> ...
> Untracked files:
>   (use "git add ..." to include in what will be committed)
>   hadoop-ozone/integration-test/audit.log
> {code}






[jira] [Commented] (HDDS-2047) Datanodes fail to come up after 10 retries in a secure environment

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919730#comment-16919730
 ] 

Hudson commented on HDDS-2047:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17206 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17206/])
HDDS-2047. Datanodes fail to come up after 10 retries in a secure env… (github: 
rev ec34cee5e37ca48bf61403655eba8b89dba0ed57)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java


> Datanodes fail to come up after 10 retries in a secure environment
> --
>
> Key: HDDS-2047
> URL: https://issues.apache.org/jira/browse/HDDS-2047
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, Security
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 10:06:36.585 PM ERROR HddsDatanodeService
> Error while storing SCM signed certificate.
> java.net.ConnectException: Call From 
> jmccarthy-ozone-secure-2.vpc.cloudera.com/10.65.50.127 to 
> jmccarthy-ozone-secure-1.vpc.cloudera.com:9961 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy15.getDataNodeCertificate(Unknown Source)
> at 
> org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB.getDataNodeCertificateChain(SCMSecurityProtocolClientSideTranslatorPB.java:156)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.getSCMSignedCert(HddsDatanodeService.java:278)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.initializeCertificateClient(HddsDatanodeService.java:248)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:211)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:168)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:143)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:70)
> at picocli.CommandLine.execute(CommandLine.java:1173)
> at picocli.CommandLine.access$800(CommandLine.java:141)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
> at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:126)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:690)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:794)
> at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
> at org.apache.hadoo

[jira] [Commented] (HDDS-2014) Create Symmetric Key for GDPR

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919731#comment-16919731
 ] 

Hudson commented on HDDS-2014:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17206 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17206/])
HDDS-2014. Create Symmetric Key for GDPR (#1362) (bharat: rev 
46696bd9b0118dc49d4f225d668a7e8cbdd3a6a0)
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/security/TestGDPRSymmetricKey.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java


> Create Symmetric Key for GDPR
> -
>
> Key: HDDS-2014
> URL: https://issues.apache.org/jira/browse/HDDS-2014
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDDS-2058) Remove hadoop dependencies in ozone build

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919458#comment-16919458
 ] 

Hudson commented on HDDS-2058:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17204 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17204/])
HDDS-2058. Remove hadoop dependencies in ozone build (elek: rev 
22a58615a26afd26eb00f0dd8ba47876ee58d0a9)
* (add) hadoop-hdds/common/src/main/proto/FSProtos.proto
* (edit) hadoop-ozone/dist/dev-support/bin/dist-tar-stitching
* (edit) hadoop-hdds/common/pom.xml
* (add) hadoop-hdds/common/src/main/conf/hadoop-env.cmd
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (edit) pom.ozone.xml
* (add) hadoop-hdds/common/src/main/bin/hadoop-daemons.sh
* (add) hadoop-hdds/common/src/main/bin/hadoop-config.sh
* (add) hadoop-hdds/common/src/main/conf/hadoop-env.sh
* (add) hadoop-hdds/common/src/main/conf/hadoop-policy.xml
* (add) hadoop-hdds/common/src/main/bin/workers.sh
* (add) hadoop-hdds/common/src/main/proto/Security.proto
* (delete) 
hadoop-assemblies/src/main/resources/assemblies/hadoop-src-with-hdds.xml
* (add) hadoop-ozone/assemblies/src/main/resources/assemblies/ozone-src.xml
* (edit) hadoop-ozone/common/pom.xml
* (add) hadoop-hdds/common/src/main/bin/hadoop-functions.sh
* (add) hadoop-hdds/common/src/main/bin/hadoop-config.cmd
* (add) hadoop-hdds/common/src/main/conf/core-site.xml
* (add) hadoop-hdds/common/src/main/conf/hadoop-metrics2.properties
* (add) hadoop-ozone/assemblies/pom.xml
* (edit) pom.xml


> Remove hadoop dependencies in ozone build
> --
>
> Key: HDDS-2058
> URL: https://issues.apache.org/jira/browse/HDDS-2058
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Ozone build currently depends on Hadoop code; this makes it difficult to 
> create an Ozone-only source tar for release. Ozone should not depend on Hadoop 
> code during the build; it should depend on Hadoop only via Maven artifacts.






[jira] [Commented] (HDFS-14796) Define LOG instead of BlockManager.LOG in ErasureCodingWork/ReplicationWork

2019-08-30 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919459#comment-16919459
 ] 

Hudson commented on HDFS-14796:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17204 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17204/])
HDFS-14796. Define LOG instead of BlockManager.LOG in (surendralilhore: rev 
96f7dc1992246a16031f613e55dc39ea0d64acd1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ErasureCodingWork.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReconstructionWork.java


> Define LOG instead of BlockManager.LOG in ErasureCodingWork/ReplicationWork
> ---
>
> Key: HDFS-14796
> URL: https://issues.apache.org/jira/browse/HDFS-14796
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14796.001.patch, HDFS-14796.002.patch, 
> HDFS-14796.003.patch
>
>
> There are too many noisy logs under BlockManager.LOG, which makes it hard to 
> debug problems. Define LOG instead of it in ErasureCodingWork/ReplicationWork.
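
The change amounts to giving each work class its own logger so that log lines are attributable to the emitting class. A minimal sketch (class name illustrative):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch: a per-class logger makes output attributable to this class
 *  instead of everything appearing under BlockManager. */
class ErasureCodingWorkSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ErasureCodingWorkSketch.class);

  void addTaskToDatanode() {
    LOG.debug("Reconstruction work scheduled"); // logged under this class's name
  }
}
{code}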






[jira] [Commented] (HDFS-12212) Options.Rename.To_TRASH is considered even when Options.Rename.NONE is specified

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919186#comment-16919186
 ] 

Hudson commented on HDFS-12212:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17202 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17202/])
HDFS-12212. Options.Rename.To_TRASH is considered even when (ayushsaxena: rev 
e220dac15cc9972ebdd54ea9c82f288f234fca51)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java


> Options.Rename.To_TRASH is considered even when Options.Rename.NONE is 
> specified
> 
>
> Key: HDFS-12212
> URL: https://issues.apache.org/jira/browse/HDFS-12212
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.2
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.4
>
> Attachments: HDFS-12212-01.patch
>
>
> HDFS-8312 introduced {{Options.Rename.TO_TRASH}} to differentiate moves to 
> trash from other renames for permission checks.
> When {{Options.Rename.NONE}} is passed, TO_TRASH is also considered for the 
> rename, and the wrong permissions are checked.
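
A sketch of the corrected server-side conversion, where TO_TRASH is included only when the client actually requested it; the boolean flag names stand in for the real protobuf getters and are assumptions:

{code:java}
import java.util.ArrayList;

import org.apache.hadoop.fs.Options.Rename;

public class RenameOptionsSketch {
  /**
   * Builds the Rename options from two request flags. TO_TRASH is added
   * only when the client asked for it, so a plain NONE rename is no
   * longer permission-checked as a move to trash.
   */
  static Rename[] toRenameOptions(boolean overwrite, boolean moveToTrash) {
    ArrayList<Rename> options = new ArrayList<>();
    if (overwrite) {
      options.add(Rename.OVERWRITE);
    }
    if (moveToTrash) {
      options.add(Rename.TO_TRASH);
    }
    if (options.isEmpty()) {
      options.add(Rename.NONE);
    }
    return options.toArray(new Rename[0]);
  }
}
{code}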






[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919184#comment-16919184
 ] 

Hudson commented on HDFS-14706:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17202 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17202/])
HDFS-14706. Checksums are not checked if block meta file is less than 7 
(weichiu: rev 7bebad61d9c3dbff81fdcf243585fd3e9ae59dde)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch, 
> HDFS-14706.003.patch, HDFS-14706.004.patch, HDFS-14706.005.patch, 
> HDFS-14706.006.patch, HDFS-14706.007.patch, HDFS-14706.008.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the meta file gets corrupted in such a way that it is between zero and 
> seven bytes in length, then the header is incomplete. In BlockSender.java the 
> logic checks whether the meta file length is at least the size of the header; 
> if it is not, it does not raise an error, but instead returns a NULL checksum 
> type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the 
> Volume Scanner will not notice the corruption, as the checksums are silently 
> ignored.
> Additionally, if the meta file does have enough bytes that the header is 
> loaded, but the header is corrupted and invalid, it can cause the datanode 
> Volume Scanner to exit with an exception like the following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
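
A minimal sketch of the guard the fix implies, with hypothetical names (not 
the actual BlockSender code): a meta file whose length is shorter than the 
7-byte header (2-byte version plus 1-byte checksum type plus 4-byte 
bytesPerChecksum) is reported as corrupt instead of silently mapping to a 
NULL checksum type.

{code:java}
import java.io.File;
import java.io.IOException;

class MetaFileGuardSketch {
  // The meta file header is 7 bytes: a 2-byte version plus a 5-byte
  // DataChecksum header (1-byte type, 4-byte bytesPerChecksum).
  static final int HEADER_SIZE = 7;

  static void checkMetaFile(File metaFile) throws IOException {
    long len = metaFile.length();
    if (len > 0 && len < HEADER_SIZE) {
      // Before the fix, this case fell through and a NULL checksum type
      // was sent to the client, disabling checksum verification entirely.
      throw new IOException("Corrupt meta file (truncated header): "
          + metaFile + ", length = " + len);
    }
  }
}
{code}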



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14256) Review Logging of NameNode Class

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919061#comment-16919061
 ] 

Hudson commented on HDFS-14256:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17201/])
HDFS-14256. Review Logging of NameNode Class. Contributed by David (inigoiri: 
rev 3b22fcd377eecedacceb6e37368463b48e0133c8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


> Review Logging of NameNode Class
> 
>
> Key: HDFS-14256
> URL: https://issues.apache.org/jira/browse/HDFS-14256
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14256.1.patch
>
>
> * Clean up and standardize some of the logging
>  * Change some of the logging from STDERR/STDOUT to the logging facilities
>  * A little bit of general cleanup
> What brought this to my attention in the first place was a logging message to 
> STDERR in a sea of SLF4J logging during the execution of a unit test suite 
> for another project.
>  
> {quote}Formatting using clusterid: testClusterID
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8178) QJM doesn't move aside stale inprogress edits files

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919062#comment-16919062
 ] 

Hudson commented on HDFS-8178:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17201/])
HDFS-8178. QJM doesn't move aside stale inprogress edits files. (weichiu: rev 
fcb7884bfc0146b083f928a223069bc0acaf6133)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java


> QJM doesn't move aside stale inprogress edits files
> ---
>
> Key: HDFS-8178
> URL: https://issues.apache.org/jira/browse/HDFS-8178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm
>Reporter: Zhe Zhang
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: BB2015-05-TBR
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-8178.000.patch, HDFS-8178.002.patch, 
> HDFS-8178.003.patch, HDFS-8178.004.patch, HDFS-8178.005.patch, 
> HDFS-8178.006.patch, HDFS-8178.007.patch, HDFS-8178.008.addendum, 
> HDFS-8178.008.merged, HDFS-8178.008.patch, HDFS-8178.branch-3.2.patch
>
>
> When a QJM crashes, the in-progress edit log file at that time remains in the 
> file system. When the node comes back, it will accept new edit logs and those 
> stale in-progress files are never cleaned up. QJM treats them as regular 
> in-progress edit log files and tries to finalize them, which potentially 
> causes high memory usage. This JIRA aims to move aside those stale edit log 
> files to avoid this scenario.
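
A minimal sketch of what "moving aside" could look like, assuming a 
hypothetical .stale suffix (the patch's actual naming and logic may differ): 
the stale segment is renamed so it is no longer recognized as a live 
in-progress segment.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

class StaleEditsSketch {
  // Rename the stale segment with a marker suffix so it is no longer
  // picked up as a live in-progress segment and never gets "finalized".
  static void moveAsideStaleInprogress(File editsFile) throws IOException {
    File dst = new File(editsFile.getParentFile(),
        editsFile.getName() + ".stale");
    Files.move(editsFile.toPath(), dst.toPath(),
        StandardCopyOption.ATOMIC_MOVE);
  }
}
{code}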



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14104) Review getImageTxIdToRetain

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919060#comment-16919060
 ] 

Hudson commented on HDFS-14104:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17201/])
HDFS-14104. Review getImageTxIdToRetain. Contributed by David Mollitor. 
(inigoiri: rev ffca734c62fba26211f22232ddb5e80eae4b5d51)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java


> Review getImageTxIdToRetain
> ---
>
> Key: HDFS-14104
> URL: https://issues.apache.org/jira/browse/HDFS-14104
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14104.1.patch, HDFS-14104.1.patch, 
> HDFS-14104.1.patch, HDFS-14104.2.patch, HDFS-14104.3.patch, 
> HDFS-14104.4.patch, HDFS-14104.5.patch
>
>
> {code:java|title=NNStorageRetentionManager.java}
>   private long getImageTxIdToRetain(
>       FSImageTransactionalStorageInspector inspector) {
>
>     List<FSImageFile> images = inspector.getFoundImages();
>     TreeSet<Long> imageTxIds = Sets.newTreeSet();
>     for (FSImageFile image : images) {
>       imageTxIds.add(image.getCheckpointTxId());
>     }
>
>     List<Long> imageTxIdsList = Lists.newArrayList(imageTxIds);
>     if (imageTxIdsList.isEmpty()) {
>       return 0;
>     }
>
>     Collections.reverse(imageTxIdsList);
>     int toRetain = Math.min(numCheckpointsToRetain, imageTxIdsList.size());
>     long minTxId = imageTxIdsList.get(toRetain - 1);
>     LOG.info("Going to retain " + toRetain + " images with txid >= " +
>         minTxId);
>     return minTxId;
>   }
> {code}
> # Fix checkstyle issues
> # Use SLF4J parameterized logging
> # A lot of work gets done before checking whether the list actually contains 
> any entries and returning 0. That check should be the first thing that 
> happens
> # Instead of building up the {{TreeSet}} in its natural order and then 
> reversing the collection, simply use a reverse natural ordering to begin 
> with and save a step (see the sketch below).
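
A minimal sketch of what points 3 and 4 suggest, assuming the checkpoint 
txids arrive as a plain list (hypothetical names, not the committed patch):

{code:java}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.TreeSet;

class RetainTxIdSketch {
  static long getImageTxIdToRetain(List<Long> checkpointTxIds,
      int numCheckpointsToRetain) {
    // Point 3: return early, before any other work is done.
    if (checkpointTxIds.isEmpty()) {
      return 0;
    }
    // Point 4: build the set in reverse natural order up front,
    // so there is no Collections.reverse() pass afterwards.
    TreeSet<Long> imageTxIds = new TreeSet<>(Comparator.reverseOrder());
    imageTxIds.addAll(checkpointTxIds);

    int toRetain = Math.min(numCheckpointsToRetain, imageTxIds.size());
    Iterator<Long> it = imageTxIds.iterator();
    long minTxId = 0;
    for (int i = 0; i < toRetain; i++) {
      minTxId = it.next();
    }
    // Point 2: SLF4J parameterized logging instead of concatenation:
    // LOG.info("Going to retain {} images with txid >= {}",
    //     toRetain, minTxId);
    return minTxId;
  }
}
{code}

The reverse-order iterator yields the largest txids first, so after toRetain 
steps the cursor sits on the smallest txid that should be kept.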



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1935) Improve the visibility with Ozone Insight tool

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919064#comment-16919064
 ] 

Hudson commented on HDDS-1935:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17201/])
HDDS-1935. Improve the visibility with Ozone Insight tool (#1255) (aengineer: 
rev 4f5f46eb4af721a5cef2543a78ba6b3812331e3b)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightPoint.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ProtocolMessageMetrics.java
* (edit) hadoop-ozone/dev-support/intellij/ozone-site.xml
* (add) hadoop-ozone/insight/pom.xml
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/package-info.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/Component.java
* (edit) hadoop-ozone/pom.xml
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/NodeManagerInsight.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/Insight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/LoggerSource.java
* (add) 
hadoop-ozone/insight/src/test/java/org/apache/hadoop/ozone/insight/LogSubcommandTest.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ReplicaManagerInsight.java
* (add) hadoop-ozone/insight/dev-support/findbugsExcludeFile.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/ScmBlockLocationProtocolServerSideTranslatorPB.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/datanode/package-info.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/package-info.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/ListSubCommand.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/datanode/RatisInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/om/KeyManagerInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/EventQueueInsight.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/MetricGroupDisplay.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/BaseInsightSubCommand.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/MetricDisplay.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMBlockProtocolServer.java
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/LogStreamServlet.java
* (edit) 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/InsightPoint.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/om/package-info.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/scm/ScmProtocolBlockLocationInsight.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) hadoop-ozone/dist/pom.xml
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/ConfigurationSubCommand.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/LogSubcommand.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/om/OmProtocolInsight.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (add) 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/MetricsSubCommand.java


> Improve the visibility with Ozone Insight tool
> --
>
> Key: HDDS-1935
> URL: https://issues.apache.org/jira/browse/HDDS-1935
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature

[jira] [Commented] (HDDS-2050) Error while compiling ozone-recon-web

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919063#comment-16919063
 ] 

Hudson commented on HDDS-2050:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17201/])
HDDS-2050. Error while compiling ozone-recon-web (#1374) (aengineer: rev 
7b3fa4fcaa2942c7c3d622c2f696c543b5d39296)
* (edit) 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/src/components/Breadcrumbs/Breadcrumbs.tsx
* (edit) 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/src/components/NavBar/NavBar.tsx
* (edit) 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/package.json
* (edit) 
hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/yarn.lock


> Error while compiling ozone-recon-web
> -
>
> Key: HDDS-2050
> URL: https://issues.apache.org/jira/browse/HDDS-2050
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Nanda kumar
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The following error is seen while compiling {{ozone-recon-web}}
> {noformat}
> [INFO] Running 'yarn install' in 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web
> [INFO] yarn install v1.9.2
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due 
> to security and usability issues. Please use the Buffer.alloc(), 
> Buffer.allocUnsafe(), or Buffer.from() methods instead.
> [INFO] [3/4] Linking dependencies...
> [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency 
> "webpack@^2.0.0 || ^3.0.0 || ^4.0.0".
> [INFO] [4/4] Building fresh packages...
> [ERROR] warning Error running install script for optional dependency: 
> "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents:
>  Command failed.
> [ERROR] Exit code: 1
> [ERROR] Command: node install
> [ERROR] Arguments:
> [ERROR] Directory: 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents
> [ERROR] Output:
> [ERROR] node-pre-gyp info it worked if it ends with ok
> [INFO] info This module is OPTIONAL, you can safely ignore this error
> [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0
> [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64
> [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download
> [ERROR] node-pre-gyp info check checked for 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\"
>  (not found)
> [ERROR] node-pre-gyp http GET 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp http 404 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Tried to download(404): 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and 
> node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with 
> node-gyp)
> [ERROR] node-pre-gyp http 404 status code downloading tarball 
> https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz
> [ERROR] node-pre-gyp ERR! build error
> [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' 
> (Error: spawn node-gyp ENOENT)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess. 
> (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29)
> [ERROR] node-pre-gyp ERR! stack at ChildProcess.emit (events.js:196:13)
> [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit 
> (internal/child_process.js:254:12)
> [ERROR] node-pre-gyp ERR! stack at onErrorNT 
> (internal/child_process.js:431:16)
> [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections 
> (internal/process/task_queues.js:84:17)
> [ERROR] node-pre-gyp ERR! System Darwin 18.5.0
> [ERROR] node-pre-gyp ERR! command 
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\"
>  
> \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone

[jira] [Commented] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918788#comment-16918788
 ] 

Hudson commented on HDFS-11246:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17200/])
HDFS-11246. FSNameSystem#logAuditEvent should be called outside the read 
(weichiu: rev f600fbb6c4987c69292faea6b5abf022bb213ffd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> FSNameSystem#logAuditEvent should be called outside the read or write locks
> ---
>
> Key: HDFS-11246
> URL: https://issues.apache.org/jira/browse/HDFS-11246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kuhu Shukla
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-11246.001.patch, HDFS-11246.002.patch, 
> HDFS-11246.003.patch, HDFS-11246.004.patch, HDFS-11246.005.patch, 
> HDFS-11246.006.patch, HDFS-11246.007.patch, HDFS-11246.008.patch, 
> HDFS-11246.009.patch, HDFS-11246.010.patch, HDFS-11246.011.patch
>
>
> {code}
> readLock();
> boolean success = true;
> ContentSummary cs;
> try {
>   checkOperation(OperationCategory.READ);
>   cs = FSDirStatAndListingOp.getContentSummary(dir, src);
> } catch (AccessControlException ace) {
>   success = false;
>   logAuditEvent(success, operationName, src);
>   throw ace;
> } finally {
>   readUnlock(operationName);
> }
> {code}
> It would be nice to have audit logging outside the lock, especially in 
> scenarios where applications hammer a given operation many times.
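
A minimal sketch of the pattern the issue asks for, using a plain 
ReentrantReadWriteLock and hypothetical names: the outcome is recorded while 
holding the lock, and the audit event is emitted only after the lock is 
released.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class AuditOutsideLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  long getContentSummary(String src) {
    boolean success = true;
    long result = 0;
    lock.readLock().lock();
    try {
      result = computeSummary(src);  // stand-in for the real read operation
    } catch (RuntimeException e) {
      success = false;
      throw e;
    } finally {
      lock.readLock().unlock();
      // The audit event is emitted only after the lock is released, so a
      // slow audit appender no longer extends the lock hold time.
      logAuditEvent(success, "contentSummary", src);
    }
    return result;
  }

  private long computeSummary(String src) {
    return src.length();  // placeholder computation
  }

  private void logAuditEvent(boolean success, String op, String src) {
    System.out.println("audit: op=" + op + " src=" + src
        + " success=" + success);
  }
}
{code}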



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14721) RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918681#comment-16918681
 ] 

Hudson commented on HDFS-14721:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17199 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17199/])
HDFS-14721. RBF: ProxyOpComplete is not accurate in (ayushsaxena: rev 
8e779a151e20528ceda1b5b44812412f4ae7f83d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java


> RBF: ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor
> ---
>
> Key: HDFS-14721
> URL: https://issues.apache.org/jira/browse/HDFS-14721
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14721-trunk-001.patch, HDFS-14721-trunk-002.patch, 
> HDFS-14721-trunk-003.patch, HDFS-14721-trunk-004.patch
>
>
> ProxyOpComplete is not accurate in FederationRPCPerformanceMonitor when a 
> RemoteException is returned, because the RemoteException is unwrapped in the 
> invoke method, and invokeMethod then records proxyOpComplete(false) even 
> though the server actually processed the call.
> {code:java}
> // invoke method
> if (ioe instanceof RemoteException) {
>   RemoteException re = (RemoteException) ioe;
>   ioe = re.unwrapRemoteException();
>   ioe = getCleanException(ioe);
> }
> // invokeMethod method
> if (this.rpcMonitor != null) {
>   this.rpcMonitor.proxyOpFailureCommunicate();
>   this.rpcMonitor.proxyOpComplete(false);
> }
> throw ioe;{code}
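
A minimal sketch of the accounting the fix implies, with a stub standing in 
for Hadoop's RemoteException (hypothetical names, not the committed patch): a 
RemoteException means the remote NameNode actually processed the call, so 
the proxy op should be counted as complete.

{code:java}
import java.io.IOException;

class ProxyOpMonitorSketch {
  interface RpcMonitor {
    void proxyOpComplete(boolean success);
    void proxyOpFailureCommunicate();
  }

  // Stub standing in for org.apache.hadoop.ipc.RemoteException.
  static class RemoteExceptionStub extends IOException { }

  static void recordOutcome(RpcMonitor monitor, IOException ioe) {
    if (ioe instanceof RemoteExceptionStub) {
      // The server processed the call and returned an exception, so the
      // proxy op did complete; don't count it as a communication failure.
      monitor.proxyOpComplete(true);
    } else {
      monitor.proxyOpFailureCommunicate();
      monitor.proxyOpComplete(false);
    }
  }
}
{code}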



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2045) Partially started compose cluster left running

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918373#comment-16918373
 ] 

Hudson commented on HDDS-2045:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17197 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17197/])
HDDS-2045. Partially started compose cluster left running (elek: rev 
c749f6247075274954f8302dd45feee984d9bd10)
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> Partially started compose cluster left running
> --
>
> Key: HDDS-2045
> URL: https://issues.apache.org/jira/browse/HDDS-2045
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> If any container in the sample cluster [fails to 
> start|https://github.com/elek/ozone-ci/blob/5c64f77f3ab64aed0826d8f40991fe621f843efd/pr/pr-hdds-2026-p4f6m/acceptance/output.log#L24],
>  all successfully started containers are left running.  This 
> [prevents|https://github.com/elek/ozone-ci/blob/5c64f77f3ab64aed0826d8f40991fe621f843efd/pr/pr-hdds-2026-p4f6m/acceptance/output.log#L59]
>  any further acceptance tests from completing normally.  This is only a minor 
> inconvenience, since the acceptance test run as a whole fails either way.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-29 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918352#comment-16918352
 ] 

Hudson commented on HDDS-1596:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17196 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17196/])
Revert "HDDS-1596. Create service endpoint to download configuration (elek: rev 
371c9eb6a69de8f45008ff6f4033a5fa78ccf2f6)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
* (edit) hadoop-hdds/pom.xml
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
* (edit) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestServerUtils.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryApplication.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXml.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerHttpServer.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
* (delete) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerStarter.java
* (edit) hadoop-hdds/server-scm/pom.xml
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXmlEntry.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (delete) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationEndpoint.java
* (edit) hadoop-ozone/ozonefs/pom.xml


> Create service endpoint to download configuration from SCM
> --
>
> Key: HDDS-1596
> URL: https://issues.apache.org/jira/browse/HDDS-1596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> As written in the design doc (see the parent issue), it was proposed that 
> the other services download the configuration from the SCM.
> I propose to create a separate endpoint to provide the ozone configuration. 
> /conf can't be used, as it contains *all* the configuration and we need only 
> the modified configuration.
> The easiest way to implement this feature is:
>  * Create a simple rest endpoint which publishes all the configuration
>  * Download the configurations to $HADOOP_CONF_DIR/ozone-global.xml during 
> the service startup.
>  * Add ozone-global.xml as an additional config source (before ozone-site.xml 
> but after ozone-default.xml)
>  * The download can be optional
> With this approach we keep support for the existing manual configuration 
> (ozone-site.xml has higher priority), but we can download the configuration 
> to a separate file during startup, which will then be loaded (a load-order 
> sketch follows below).
> There is no magic: the configuration file is saved and it's easy to debug 
> what's going on as the OzoneConfiguration is loaded from the $HADOOP_CONF_DIR 
> as before.
> Possible follow-up steps:
>  * Migrate all the other services (recon, s3g) to the new approach. (possible 
> newbie jiras)
>  * Improve the CLI to define the SCM address. (As of now we use 
> ozone.scm.names)
>  * Create a service/hostname registration mechanism and autofill some of the 
> configuration based on the topology information.
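
A minimal sketch of the proposed load order, assuming Hadoop's standard 
Configuration resource semantics (resources added later override earlier 
ones); the names are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

class OzoneConfigLoadSketch {
  // In Hadoop's Configuration, resources added later override earlier ones
  // (for non-final properties), so ozone-site.xml keeps the highest
  // priority while the downloaded ozone-global.xml overrides only the
  // defaults.
  static Configuration load(boolean globalConfigDownloaded) {
    Configuration conf = new Configuration(false);
    conf.addResource("ozone-default.xml");
    if (globalConfigDownloaded) {
      // Written to $HADOOP_CONF_DIR by the (optional) startup download.
      conf.addResource("ozone-global.xml");
    }
    conf.addResource("ozone-site.xml");
    return conf;
  }
}
{code}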



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918188#comment-16918188
 ] 

Hudson commented on HDDS-1941:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17194/])
HDDS-1941. Unused executor in SimpleContainerDownloader (#1367) (bharat: rev 
872cdf48a638236441669ca6fa4d4077c39370aa)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java


> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2051) Rat check failure in decommissioning.md

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918099#comment-16918099
 ] 

Hudson commented on HDDS-2051:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-2051. Rat check failure in decommissioning.md (#1372) (aengineer: rev 
3e6a0166f4707ec433e2cdbc04c054b81722c073)
* (edit) hadoop-hdds/docs/content/design/decommissioning.md


> Rat check failure in decommissioning.md
> ---
>
> Key: HDDS-2051
> URL: https://issues.apache.org/jira/browse/HDDS-2051
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> hadoop-hdds/docs/target/rat.txt: !? 
> /var/jenkins_home/workspace/ozone/hadoop-hdds/docs/content/design/decommissioning.md
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918098#comment-16918098
 ] 

Hudson commented on HDDS-1950:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1950. S3 MPU part-list call fails if there are no parts (aengineer: rev 
aef6a4fe0d04fe0d42fa36dc04cac2cc53ae8efd)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java


> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded, the part 
> list can't be retrieved because the call throws HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side because, in 
> KeyManagerImpl.listParts, the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> which is not yet available in this use case.
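
A minimal sketch of the guard the fix implies, with hypothetical names (part 
metadata modeled as a plain map): fall back to the key's own metadata when 
no part exists yet.

{code:java}
import java.util.TreeMap;

class ListPartsGuardSketch {
  // Part metadata modeled as partNumber -> replication type.
  static String replicationTypeFor(TreeMap<Integer, String> partTypes,
      String keyDefaultType) {
    if (partTypes.isEmpty()) {
      // Before the fix, firstEntry() returned null here and the resulting
      // NPE surfaced as HTTP 500 to the S3 client.
      return keyDefaultType;
    }
    return partTypes.firstEntry().getValue();
  }
}
{code}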



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1937) Acceptance tests fail if scm webui shows invalid json

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918096#comment-16918096
 ] 

Hudson commented on HDDS-1937:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1937. Acceptance tests fail if scm webui shows invalid json (aengineer: 
rev addfb7ff7d4124db93d7713516f5890811cad9b2)
* (edit) hadoop-ozone/dist/src/main/compose/testlib.sh


> Acceptance tests fail if scm webui shows invalid json
> -
>
> Key: HDDS-1937
> URL: https://issues.apache.org/jira/browse/HDDS-1937
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The acceptance test of a nightly build failed with the following error:
> {code}
> Creating ozonesecure_datanode_3 ... 
> Creating ozonesecure_kdc_1  ... done
> Creating ozonesecure_om_1   ... done
> Creating ozonesecure_scm_1  ... done
> Creating ozonesecure_datanode_3 ... done
> Creating ozonesecure_kms_1  ... done
> Creating ozonesecure_s3g_1  ... done
> Creating ozonesecure_datanode_2 ... done
> Creating ozonesecure_datanode_1 ... done
> parse error: Invalid numeric literal at line 2, column 0
> {code}
> https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-5b87q/acceptance/output.log
> The problem is in the script which checks the number of available datanodes.
> If the HTTP endpoint of the SCM is already started BUT not ready yet, it may 
> return a simple HTML error message instead of JSON, which cannot be parsed 
> by jq:
> In testlib.sh:
> {code}
>   if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
>     docker-compose -f "${compose_file}" exec -T scm bash -c "kinit -k HTTP/scm@EXAMPLE.COM -t /etc/security/keytabs/HTTP.keytab && curl --negotiate -u : -s '${jmx_url}'"
>   else
>     docker-compose -f "${compose_file}" exec -T scm curl -s "${jmx_url}"
>   fi \
>     | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | .value'
> {code}
> One possible fix is to adjust the error handling (set +x / set -x) per 
> method instead of using a generic set -x at the beginning. That would give 
> more predictable behavior. In our case count_datanode should not fail ever 
> (the caller method, wait_for_datanodes, can retry anyway).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918095#comment-16918095
 ] 

Hudson commented on HDFS-8631:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao Sun. 
(surendralilhore: rev 29bd6f3fc3bd78b439d61768885c9f3e7f31a540)
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/StorageTypeParam.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/NameSpaceQuotaParam.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/StorageSpaceQuotaParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> WebHDFS : Support setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, 
> HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch
>
>
> Users are able to do quota management through the FileSystem object. The 
> same operation can be allowed through the REST API.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-738) Removing REST protocol support from OzoneClient

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918091#comment-16918091
 ] 

Hudson commented on HDDS-738:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-738. Removing REST protocol support from OzoneClient. Contributed 
(aengineer: rev dc72782008b2c66970dc3dee47fe12e4850bfefe)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/RatisTestHelper.java
* (delete) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/DefaultRestServerSelector.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/netty/RequestDispatchObjectStoreChannelHandler.java
* (edit) hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeysRatis.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/UserAuth.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/contract/OzoneContract.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (delete) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/web/TestUtils.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolumeRatis.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/VolumeHandler.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/package-info.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/InfoVolumeHandler.java
* (edit) hadoop-ozone/datanode/pom.xml
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/messages/StringMessageBodyWriter.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/ObjectPrinter.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/OzoneException.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/ServiceInfo.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/handlers/package-info.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/package-info.java
* (delete) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/web/package-info.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/response/KeyInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestVolume.java
* (edit) hadoop-ozone/pom.xml
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/package-info.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/KeyLocation.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/Volume.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/messages/LengthInputStreamMessageBodyWriter.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/VolumeOwner.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/interfaces/Accounting.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/UserHandlerBuilder.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/client/rest/response/KeyInfoDetails.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/web/ozShell/TestOzoneAddress.java
* (delete) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/handlers/KeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
* (edit) 
hadoo

[jira] [Commented] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918093#comment-16918093
 ] 

Hudson commented on HDDS-1881:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1881. Design doc: decommissioning in Ozone (#1196) (aengineer: rev 
c7d426daf0aeda808c2a4a70fb89146c50305ee3)
* (add) hadoop-hdds/docs/content/design/decommissioning.md


> Design doc: decommissioning in Ozone
> 
>
> Key: HDDS-1881
> URL: https://issues.apache.org/jira/browse/HDDS-1881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: design, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 43h
>  Remaining Estimate: 0h
>
> The design doc can be attached to the documentation. In this jira the design 
> doc will be attached and merged into the documentation page.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918097#comment-16918097
 ] 

Hudson commented on HDDS-1942:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1942. Support copy during S3 multipart upload part creation (aengineer: 
rev 2fcd0da7dcbc15793041efb079210e06272482a4)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneBucketStub.java
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestObjectEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestMultipartUploadWithCopy.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/CopyPartResult.java


> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as the data source.
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918090#comment-16918090
 ] 

Hudson commented on HDDS-1596:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1596. Create service endpoint to download configuration from SCM. 
(aengineer: rev c0499bd70455e67bef9a1e00da73e25c9e2cc0ff)
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ServerUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXml.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestServerUtils.java
* (edit) hadoop-hdds/pom.xml
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerHttpServer.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/package-info.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXmlEntry.java
* (edit) hadoop-hdds/server-scm/pom.xml
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryApplication.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerStarter.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationEndpoint.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
* (edit) hadoop-ozone/ozonefs/pom.xml


> Create service endpoint to download configuration from SCM
> --
>
> Key: HDDS-1596
> URL: https://issues.apache.org/jira/browse/HDDS-1596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> As written in the design doc (see the parent issue), it was proposed that 
> the other services download the configuration from the SCM.
> I propose to create a separate endpoint to provide the ozone configuration. 
> /conf can't be used, as it contains *all* the configuration and we need only 
> the modified configuration.
> The easiest way to implement this feature is:
>  * Create a simple rest endpoint which publishes all the configuration
>  * Download the configurations to $HADOOP_CONF_DIR/ozone-global.xml during 
> the service startup.
>  * Add ozone-global.xml as an additional config source (before ozone-site.xml 
> but after ozone-default.xml)
>  * The download can be optional
> With this approach we keep support for the existing manual configuration 
> (ozone-site.xml has higher priority), but we can download the configuration 
> to a separate file during startup, which will then be loaded.
> There is no magic: the configuration file is saved and it's easy to debug 
> what's going on as the OzoneConfiguration is loaded from the $HADOOP_CONF_DIR 
> as before.
> Possible follow-up steps:
>  * Migrate all the other services (recon, s3g) to the new approach. (possible 
> newbie jiras)
>  * Improve the CLI to define the SCM address. (As of now we use 
> ozone.scm.names)
>  * Create a service/hostname registration mechanism and autofill some of the 
> configuration based on the topology information.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14710) RBF: Improve some RPC performance by using previous block

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918094#comment-16918094
 ] 

Hudson commented on HDFS-14710:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDFS-14710. RBF: Improve some RPC performance by using previous block. 
(inigoiri: rev 48cb58390655b87506fb8b620e4aafd11e38bb34)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java


> RBF: Improve some RPC performance by using previous block
> -
>
> Key: HDFS-14710
> URL: https://issues.apache.org/jira/browse/HDFS-14710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14710-trunk-001.patch, HDFS-14710-trunk-002.patch, 
> HDFS-14710-trunk-003.patch, HDFS-14710-trunk-004.patch, 
> HDFS-14710-trunk-005.patch
>
>
> We can improve the performance of some RPCs, such as addBlock, 
> getAdditionalDatanode and complete, when the extendedBlock is not null.
> Since HDFS encourages users to write large files, the extendedBlock is not 
> null in most cases.
> In the scenario of multiple destinations and large files, the effect is even 
> more pronounced.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1094) Performance test infrastructure : skip writing user data on Datanode

2019-08-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918092#comment-16918092
 ] 

Hudson commented on HDDS-1094:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17193/])
HDDS-1094. Performance test infrastructure : skip writing user data on (arp7: 
rev 1407414a5212e38956c13984e5daf32199175e83)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerDummyImpl.java
* (add) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestDataValidateWithDummyContainers.java


> Performance test infrastructure : skip writing user data on Datanode
> 
>
> Key: HDDS-1094
> URL: https://issues.apache.org/jira/browse/HDDS-1094
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Goal:
> It can be useful to exercise the IO and control paths in Ozone for simulated 
> large datasets without having huge disk capacity at hand. For example, this 
> will allow us to get things like container reports and incremental container 
> reports, while not needing huge cluster capacity. The 
> [SimulatedFsDataset|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java]
>  does something similar in HDFS. It has been an invaluable tool to simulate 
> large data stores.
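
As a rough illustration, here is a self-contained sketch of the approach, assuming a dummy chunk manager is substituted behind a common interface (the interface below is simplified, not the real HDDS ChunkManager): writes are accounted for so reports stay meaningful, but the payload is discarded.

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

public class DummyChunkManagerSketch {

  interface ChunkManager {
    void writeChunk(String chunkName, ByteBuffer data);
    long bytesWritten();
  }

  // "dummy" implementation: tracks sizes for reports, performs no disk IO
  static final class ChunkManagerDummy implements ChunkManager {
    private final AtomicLong bytes = new AtomicLong();

    @Override
    public void writeChunk(String chunkName, ByteBuffer data) {
      bytes.addAndGet(data.remaining());  // account for the size only
      // intentionally no write: the user payload is dropped here
    }

    @Override
    public long bytesWritten() { return bytes.get(); }
  }

  public static void main(String[] args) {
    // the real code would select the dummy via a configuration flag
    ChunkManager cm = new ChunkManagerDummy();
    cm.writeChunk("chunk_1", ByteBuffer.allocate(4096));
    System.out.println(cm.bytesWritten() + " bytes accounted, 0 bytes on disk");
  }
}
{code}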






[jira] [Commented] (HDFS-14760) Log INFO mode if snapshot usage and actual usage differ

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917432#comment-16917432
 ] 

Hudson commented on HDFS-14760:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17192 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17192/])
HDFS-14760. Log INFO mode if snapshot usage and actual usage differ. (weichiu: 
rev 6e37d65b03ff57cca25a46695ca3852da795d6f7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java


> Log INFO mode if snapshot usage and actual usage differ
> ---
>
> Key: HDFS-14760
> URL: https://issues.apache.org/jira/browse/HDFS-14760
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14760.001.patch, HDFS-14760.002.patch
>
>
> DirectoryWithQuotaFeature#checkStoragespace logs at ERROR level without 
> throwing any exception or taking any action, which pollutes the logs. It 
> should log at INFO level instead.
> {code}
>   private void checkStoragespace(final INodeDirectory dir, final long 
> computed) {
> if (-1 != quota.getStorageSpace() && usage.getStorageSpace() != computed) 
> {
>   NameNode.LOG.error("BUG: Inconsistent storagespace for directory "
>   + dir.getFullPathName() + ". Cached = " + usage.getStorageSpace()
>   + " != Computed = " + computed);
> }
>   }
> {code}
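
For reference, a sketch of the proposed change against the snippet above (same surrounding class, shown out of context and not compilable on its own): only the log level and the alarming "BUG:" prefix change.

{code:java}
  // proposed: same check and message, logged at INFO instead of ERROR
  private void checkStoragespace(final INodeDirectory dir, final long computed) {
    if (-1 != quota.getStorageSpace() && usage.getStorageSpace() != computed) {
      NameNode.LOG.info("Inconsistent storagespace for directory "
          + dir.getFullPathName() + ". Cached = " + usage.getStorageSpace()
          + " != Computed = " + computed);
    }
  }
{code}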






[jira] [Commented] (HDDS-1946) CertificateClient should not persist keys/certs to ozone.metadata.dir

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917433#comment-16917433
 ] 

Hudson commented on HDDS-1946:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17192 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17192/])
HDDS-1946. CertificateClient should not persist keys/certs to ozone.m… (xyao: 
rev b1eee8b52eecf45827abbe8fe16ab48eade46cc8)
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestCertificateClientInit.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/TestHddsSecureDatanodeInit.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/OMCertificateClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DNCertificateClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestSecureOzoneManager.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/utils/TestCertificateCodec.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/CertificateCodec.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/KeyCodec.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestKeyCodec.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java


> CertificateClient should not persist keys/certs to ozone.metadata.dir
> -
>
> Key: HDDS-1946
> URL: https://issues.apache.org/jira/browse/HDDS-1946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> For example, when OM and SCM are deployed on the same host with only 
> ozone.metadata.dir defined, SCM can start successfully but OM cannot, 
> because the keys/certs from OM collide with those of SCM.
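
One way to picture the fix, as a minimal sketch with a hypothetical helper (this is not the actual SecurityConfig API): derive per-component subdirectories under the shared metadata directory so the two services never write keys/certs to the same location.

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

public class PerComponentSecurityDirs {

  // e.g. <metadata>/om/keys vs <metadata>/scm/keys -- no collision
  static Path keyDir(Path metadataDir, String component) {
    return metadataDir.resolve(component).resolve("keys");
  }

  static Path certDir(Path metadataDir, String component) {
    return metadataDir.resolve(component).resolve("certs");
  }

  public static void main(String[] args) {
    Path meta = Paths.get("/var/lib/ozone/metadata");
    System.out.println(keyDir(meta, "om"));   // /var/lib/ozone/metadata/om/keys
    System.out.println(keyDir(meta, "scm"));  // /var/lib/ozone/metadata/scm/keys
  }
}
{code}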






[jira] [Commented] (HDDS-1753) Datanode unable to find chunk while replication data using ratis.

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917434#comment-16917434
 ] 

Hudson commented on HDDS-1753:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17192 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17192/])
HDDS-1753. Datanode unable to find chunk while replication data using (ljain: 
rev 5d31a4eff785ba4da22bf0b30c9b995495c98844)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/testutils/BlockDeletingServiceTestImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java


> Datanode unable to find chunk while replication data using ratis.
> -
>
> Key: HDDS-1753
> URL: https://issues.apache.org/jira/browse/HDDS-1753
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Attachments: HDDS-1753.000.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> The leader datanode is unable to read a chunk while replicating data from 
> the leader to a follower.
> Please note that deletion of keys is also happening while the data is 
> being replicated.
> {code}
> 2019-07-02 19:39:22,604 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl 
> (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : 
> ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3
> -4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot 
> (9770) already h
> as the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl 
> (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 
> 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. 
> Reply:76a3eb0f-d7cd-477b-8973-db1
> 014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace 
> ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the c
> hunk file. chunk info 
> ChunkInfo{chunkNa

[jira] [Commented] (HDDS-2037) Fix hadoop version in pom.ozone.xml

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917244#comment-16917244
 ] 

Hudson commented on HDDS-2037:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17191/])
HDDS-2037. Fix hadoop version in pom.ozone.xml. (aengineer: rev 
2b9cc7eb95a455ba927d395fac91010980d99707)
* (edit) hadoop-ozone/ozone-recon/pom.xml
* (edit) hadoop-hdds/pom.xml
* (edit) pom.ozone.xml
* (edit) hadoop-hdds/server-scm/pom.xml


> Fix hadoop version in pom.ozone.xml
> ---
>
> Key: HDDS-2037
> URL: https://issues.apache.org/jira/browse/HDDS-2037
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The hadoop version in pom.ozone.xml points to a SNAPSHOT version; this has 
> to be fixed.






[jira] [Commented] (HDDS-2026) Overlapping chunk region cannot be read concurrently

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917241#comment-16917241
 ] 

Hudson commented on HDDS-2026:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17191/])
HDDS-2026. Overlapping chunk region cannot be read concurrently (aengineer: rev 
0883ce102113cdc9527ab8aa548895a8418cb6bb)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/helpers/TestChunkUtils.java


> Overlapping chunk region cannot be read concurrently
> 
>
> Key: HDDS-2026
> URL: https://issues.apache.org/jira/browse/HDDS-2026
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-2026-repro.patch, changes.diff, 
> first-cut-proposed.diff
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Concurrent requests to datanode for the same chunk may result in the 
> following exception in datanode:
> {code}
> java.nio.channels.OverlappingFileLockException
>at java.base/sun.nio.ch.FileLockTable.checkList(FileLockTable.java:229)
>at java.base/sun.nio.ch.FileLockTable.add(FileLockTable.java:123)
>at 
> java.base/sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)
>at 
> java.base/sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)
>at 
> java.base/sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)
>at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:175)
>at 
> org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:213)
>at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:574)
>at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:195)
>at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:271)
>at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> {code}
> This seems to be covered by retry logic, as the key read is eventually 
> successful on the client side.
> The problem is that:
> bq. File locks are held on behalf of the entire Java virtual machine. They 
> are not suitable for controlling access to a file by multiple threads within 
> the same virtual machine. 
> ([source|https://docs.oracle.com/javase/8/docs/api/java/nio/channels/FileLock.html])
> code ref: 
> [{{ChunkUtils.readData}}|https://github.com/apache/hadoop/blob/c92de8209d1c7da9e7ce607abeecb777c4a52c6a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java#L175]
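
The JVM-wide nature of the lock is easy to reproduce in isolation. This standalone snippet mirrors the failure mode: two shared locks on the same region from one process, as two concurrent readers would take, and the second is rejected immediately.

{code:java}
import java.io.File;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class OverlappingLockRepro {
  public static void main(String[] args) throws Exception {
    File f = File.createTempFile("chunk", ".data");
    f.deleteOnExit();
    try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
        f.toPath(), StandardOpenOption.READ, StandardOpenOption.WRITE)) {
      Future<FileLock> first = ch.lock(0, 2048, true);  // "reader 1", shared
      first.get();                                      // lock is now held
      try {
        ch.lock(0, 2048, true);                         // "reader 2", same region
      } catch (OverlappingFileLockException e) {
        // thrown even for a shared lock: locks are held per JVM, not per thread
        System.out.println("second in-JVM lock rejected: " + e);
      }
    }
  }
}
{code}

The implication is that concurrent readers within one datanode process need in-process coordination (for example, a per-file lock object) rather than java.nio file locks.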






[jira] [Commented] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917240#comment-16917240
 ] 

Hudson commented on HDFS-14497:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17191/])
HDFS-14497. Addendum: Write lock held by metasave impact following RPC 
(weichiu: rev dde9399b37bffb77da17c025f0b9b673d7088bc6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497-addendum.001.patch, HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try 
> to acquire the global read/write lock, and they have to wait until metasave 
> releases it.
> I propose to change the write lock to a read lock so that read requests can 
> be processed normally. Allowing concurrent reads cannot change the 
> information that metasave is trying to collect.
> We also need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or share the same output stream.
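
A minimal sketch of the proposal, using hypothetical names rather than FSNamesystem itself: take the read lock so read RPCs keep flowing, and guard against concurrent metaSave runs with a compare-and-set flag.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MetaSaveSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final AtomicBoolean metaSaveRunning = new AtomicBoolean(false);

  void metaSave(String filename) {
    if (!metaSaveRunning.compareAndSet(false, true)) {
      return;                      // another metaSave is already in progress
    }
    fsLock.readLock().lock();      // read lock: read RPCs are not blocked
    try {
      dumpNamespaceState(filename);
    } finally {
      fsLock.readLock().unlock();
      metaSaveRunning.set(false);
    }
  }

  private void dumpNamespaceState(String filename) {
    System.out.println("dumping namespace state to " + filename);
  }

  public static void main(String[] args) {
    new MetaSaveSketch().metaSave("metasave.out");
  }
}
{code}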






[jira] [Commented] (HDFS-14779) Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917238#comment-16917238
 ] 

Hudson commented on HDFS-14779:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17191/])
HDFS-14779. Fix logging error in (jhung: rev 
8ab7020e641e65deb002a10732d23bb22802c09d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java


> Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
> 
>
> Key: HDFS-14779
> URL: https://issues.apache.org/jira/browse/HDFS-14779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14779.001.patch
>
>
> {noformat}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdfs: Compilation failure
> [ERROR] 
> /Users/jhung/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java:[364,31]
>  incompatible types: java.lang.String cannot be converted to 
> java.lang.Throwable
> [ERROR] {noformat}
> Logger changed from o.a.commons.logging.Log to slf4j logger in branch-3.2 
> (ref: HDFS-13695), so HDFS-14674 did not apply cleanly to branch-3.1.
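
A self-contained sketch of the overload mismatch using mock method shapes (not the real logging libraries): commons-logging's two-argument overload requires a Throwable, while slf4j accepts format arguments, which is why the backported line failed to compile on branch-3.1.

{code:java}
import java.util.Arrays;

public class LoggerOverloadSketch {

  // shape of org.apache.commons.logging.Log#error(Object, Throwable)
  static void commonsError(Object message, Throwable t) {
    System.out.println(message + " / " + t);
  }

  // shape of org.slf4j.Logger#error(String, Object...)
  static void slf4jError(String format, Object... args) {
    System.out.println(format + " / " + Arrays.toString(args));
  }

  public static void main(String[] args) {
    slf4jError("expected txid {} but got {}", 5, 7);     // fine with slf4j
    // commonsError("expected txid {} but got {}", "7"); // does not compile:
    // "String cannot be converted to Throwable" -- the branch-3.1 failure
    commonsError("expected txid 5 but got 7", new Exception("gap"));
  }
}
{code}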






[jira] [Commented] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917239#comment-16917239
 ] 

Hudson commented on HDDS-1610:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17191/])
Revert "HDDS-1610. applyTransaction failure should not be lost on (shashikant: 
rev ce8eb1283acbebb990a4f1e40848d78700309222)
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestFreonWithDatanodeFastRestart.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
HDDS-1610. applyTransaction failure should not be lost on restart. (shashikant: 
rev 66cfa482c450320f7326b2568703bae0d4b39e3c)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestFreonWithDatanodeFastRestart.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java


> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions
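
One way to visualize the required invariant, sketched with a hypothetical durable marker file (the actual fix lives in the ContainerStateMachine and snapshot path): record the apply failure durably before any snapshot can be taken, and check for it on restart.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ApplyFailureMarkerSketch {
  private final Path marker;

  ApplyFailureMarkerSketch(Path containerDir) {
    this.marker = containerDir.resolve("apply-failure");
  }

  void onApplyTransactionFailure() throws IOException {
    Files.createFile(marker);      // durable record, survives a restart
  }

  boolean canAcceptWrites() {
    return !Files.exists(marker);  // checked when the container comes back up
  }

  public static void main(String[] args) throws IOException {
    ApplyFailureMarkerSketch c =
        new ApplyFailureMarkerSketch(Files.createTempDirectory("container"));
    System.out.println("accept writes? " + c.canAcceptWrites()); // true
    c.onApplyTransactionFailure();
    System.out.println("accept writes? " + c.canAcceptWrites()); // false
  }
}
{code}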






[jira] [Commented] (HDDS-1998) TestSecureContainerServer#testClientServerRatisGrpc is failing

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916874#comment-16916874
 ] 

Hudson commented on HDDS-1998:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17190 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17190/])
HDDS-1998. TestSecureContainerServer#testClientServerRatisGrpc is 
(31469764+bshashikant: rev 3329257d99d2808e66ae6c2fe87a9c4f8877026f)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java


> TestSecureContainerServer#testClientServerRatisGrpc is failing
> --
>
> Key: HDDS-1998
> URL: https://issues.apache.org/jira/browse/HDDS-1998
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {{TestSecureContainerServer#testClientServerRatisGrpc}} is failing on trunk 
> with the following error.
> {noformat}
> [ERROR] 
> testClientServerRatisGrpc(org.apache.hadoop.ozone.container.server.TestSecureContainerServer)
>   Time elapsed: 7.544 s  <<< ERROR!
> java.io.IOException:
> Failed to command cmdType: CreateContainer
> containerID: 1566379872577
> datanodeUuid: "87ebf146-2a8f-4060-8f06-615ed61a9fe0"
> createContainer {
> }
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.java:113)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServer(TestSecureContainerServer.java:206)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.runTestClientServerRatis(TestSecureContainerServer.java:157)
>   at 
> org.apache.hadoop.ozone.container.server.TestSecureContainerServer.testClientServerRatisGrpc(TestSecureContainerServer.java:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Block token verification failed. Fail to find any token (empty or null.)
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientSpi.sendCommand(XceiverClientSpi.

[jira] [Commented] (HDFS-14772) RBF: hdfs-rbf-site.xml can't be loaded automatically

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916873#comment-16916873
 ] 

Hudson commented on HDFS-14772:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17190 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17190/])
HDFS-14772. RBF: hdfs-rbf-site.xml can't be loaded automatically. (tasanuma: 
rev b69ac575a1a02d39c64c6cf998ec2ef4eb5918cd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java


> RBF: hdfs-rbf-site.xml can't be loaded automatically
> 
>
> Key: HDFS-14772
> URL: https://issues.apache.org/jira/browse/HDFS-14772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14772.001.patch, HDFS-14772.002.patch, 
> HDFS-14772.003.patch, HDFS-14772.004.patch
>
>
> ISSUE:
> hdfs-rbf-site.xml can't be loaded automatically
> WHY:
> Currently the code is 
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   static {
> Configuration.addDefaultResource(HDFS_RBF_SITE_XML);
>   }
> {code}
> But it will never be executed unless we explicitly load the class.
> HOW TO FIX:
> Following the pattern of class *HdfsConfiguration*, add a method
> {code:title=RBFConfigKeys.java|borderStyle=solid}
>   public static void init() {
>   }
> {code}
> and call it from another class.
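
A self-contained demo of both the problem and the fix idiom (class names are illustrative): the static block runs only when the class is first loaded, and the no-op init() exists purely to force that load.

{code:java}
public class StaticInitDemo {

  static class RBFConfigKeysLike {
    static {
      // in the real class this registers hdfs-rbf-site.xml as a default resource
      System.out.println("static initializer ran: default resource registered");
    }
    // no-op; calling it forces the class (and its static block) to load
    public static void init() { }
  }

  public static void main(String[] args) {
    System.out.println("before init(): static block has not run yet");
    RBFConfigKeysLike.init();  // triggers class loading; static block runs once
    RBFConfigKeysLike.init();  // second call: the static block does not run again
  }
}
{code}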






[jira] [Commented] (HDDS-1981) Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916872#comment-16916872
 ] 

Hudson commented on HDDS-1981:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17190 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17190/])
HDDS-1981: Datanode should sync db when container is moved to CLOSED or 
(github: rev 4379370fb1102de222b810b91e6b8c758a3affc2)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStore.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStore.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerMarkUnhealthy.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetadataStore.java


> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state
> ---
>
> Key: HDDS-1981
> URL: https://issues.apache.org/jira/browse/HDDS-1981
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED 
> state. This will ensure that the metadata is persisted.
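
As a sketch of the intended behavior with a simplified store interface (not the actual MetadataStore API): the transition into CLOSED or QUASI_CLOSED forces a sync so the metadata is durable before the new state is acted upon.

{code:java}
public class ContainerCloseSyncSketch {

  enum State { OPEN, CLOSED, QUASI_CLOSED }

  interface MetadataStore {
    void sync();  // flush pending writes (WAL/memtable) to disk
  }

  static final class Container {
    private final MetadataStore db;
    private State state = State.OPEN;

    Container(MetadataStore db) { this.db = db; }

    void updateState(State newState) {
      state = newState;
      if (newState == State.CLOSED || newState == State.QUASI_CLOSED) {
        db.sync();  // persist metadata before reporting the closed state
      }
    }
  }

  public static void main(String[] args) {
    MetadataStore db = () -> System.out.println("db synced");
    new Container(db).updateState(State.CLOSED);  // prints "db synced"
  }
}
{code}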






[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916247#comment-16916247
 ] 

Hudson commented on HDFS-2470:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17188 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17188/])
HDFS-2470. NN should automatically set permissions on (arp: rev 
07e3cf952eac9e47e7bd5e195b0f9fc28c468313)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JNStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, 
> HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.
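
A minimal sketch of the behavior using plain java.nio (the permission key name is an assumption; the real patch wires this through Storage and hdfs-default.xml): on startup, compare the storage directory's POSIX permissions with the configured value and correct them if they differ.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StorageDirPermissions {

  // e.g. perm = "rwx------" (700), per a dfs.namenode.storage.dir.perm-style key
  static void enforce(Path storageDir, String perm) throws Exception {
    Set<PosixFilePermission> wanted = PosixFilePermissions.fromString(perm);
    if (!Files.getPosixFilePermissions(storageDir).equals(wanted)) {
      Files.setPosixFilePermissions(storageDir, wanted);
    }
  }

  public static void main(String[] args) throws Exception {
    Path dir = Files.createTempDirectory("name");
    enforce(dir, "rwx------");
    System.out.println(Files.getPosixFilePermissions(dir)); // [OWNER_READ, ...]
  }
}
{code}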






[jira] [Commented] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916020#comment-16916020
 ] 

Hudson commented on HDDS-1975:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17187 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17187/])
HDDS-1975. Implement default acls for bucket/volume/key for OM HA code. 
(github: rev d1aa8596e0e5929ecf0865f4bb008cc1769a3546)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java


> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.






[jira] [Commented] (HDDS-2002) Update documentation for 0.4.1 release

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914845#comment-16914845
 ] 

Hudson commented on HDDS-2002:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17182 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17182/])
HDDS-2002. Update documentation for 0.4.1 release. (aengineer: rev 
b661dcf563c0b3cb6fe6f22bb3a39f87e3ec1c57)
* (edit) hadoop-hdds/docs/content/concept/Overview.md
* (edit) hadoop-hdds/docs/content/concept/Hdds.md
* (edit) hadoop-hdds/docs/content/start/Kubernetes.md
* (edit) hadoop-hdds/docs/content/security/SecuringDatanodes.md
* (edit) hadoop-hdds/docs/content/security/SecuringS3.md
* (edit) hadoop-hdds/docs/content/shell/KeyCommands.md
* (edit) hadoop-hdds/docs/content/security/SecurityAcls.md
* (edit) hadoop-hdds/docs/content/beyond/DockerCheatSheet.md
* (edit) hadoop-hdds/docs/content/recipe/SparkOzoneFSK8S.md
* (edit) hadoop-hdds/docs/content/beyond/Containers.md
* (edit) hadoop-hdds/docs/content/start/OnPrem.md
* (edit) hadoop-hdds/docs/content/interface/S3.md
* (edit) hadoop-hdds/docs/content/interface/JavaApi.md
* (edit) hadoop-hdds/docs/content/interface/OzoneFS.md
* (edit) hadoop-hdds/docs/content/concept/OzoneManager.md
* (edit) hadoop-hdds/docs/content/shell/BucketCommands.md
* (edit) hadoop-hdds/docs/content/shell/VolumeCommands.md
* (edit) hadoop-hdds/docs/content/beyond/RunningWithHDFS.md
* (edit) hadoop-hdds/docs/content/start/StartFromDockerHub.md
* (edit) hadoop-hdds/docs/content/recipe/Prometheus.md
* (edit) hadoop-hdds/docs/content/security/SecuityWithRanger.md
* (edit) hadoop-hdds/docs/content/security/SecureOzone.md
* (edit) hadoop-hdds/docs/content/concept/Datanodes.md
* (edit) hadoop-hdds/docs/content/recipe/_index.md
* (edit) hadoop-hdds/docs/content/security/SecuringTDE.md


> Update documentation for 0.4.1 release
> --
>
> Key: HDDS-2002
> URL: https://issues.apache.org/jira/browse/HDDS-2002
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We have to update Ozone documentation based on the latest changes/features.






[jira] [Commented] (HDFS-14722) RBF: GetMountPointStatus should return mountTable information when getFileInfoAll throw IOException

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914846#comment-16914846
 ] 

Hudson commented on HDFS-14722:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17182 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17182/])
HDFS-14722. RBF: GetMountPointStatus should return mountTable (ayushsaxena: rev 
d2225c8ca8f9bdc5cef7266697518394d8763c88)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java


> RBF: GetMountPointStatus should return mountTable information when 
> getFileInfoAll throw IOException
> ---
>
> Key: HDFS-14722
> URL: https://issues.apache.org/jira/browse/HDFS-14722
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14722-trunk-001.patch, HDFS-14722-trunk-002.patch, 
> HDFS-14722-trunk-003.patch, HDFS-14722-trunk-004.patch, 
> HDFS-14722-trunk-005.patch, HDFS-14722-trunk-006.patch, 
> HDFS-14722-trunk-bug-discuss.patch
>
>
> When getFileInfoAll throws an IOException, we should return the mount 
> table information instead of the superuser information.
> The current code looks like:
> {code:java}
> // RouterClientProtocol#getMountPointStatus
> try {
>   String mName = name.startsWith("/") ? name : "/" + name;
>   MountTableResolver mountTable = (MountTableResolver) subclusterResolver;
>   MountTable entry = mountTable.getMountPoint(mName);
>   if (entry != null) {
> RemoteMethod method = new RemoteMethod("getFileInfo",
> new Class[] {String.class}, new RemoteParam());
> HdfsFileStatus fInfo = getFileInfoAll(
> entry.getDestinations(), method, mountStatusTimeOut);
> if (fInfo != null) {
>   permission = fInfo.getPermission();
>   owner = fInfo.getOwner();
>   group = fInfo.getGroup();
>   childrenNum = fInfo.getChildrenNum();
> } else {
>   permission = entry.getMode();
>   owner = entry.getOwnerName();
>   group = entry.getGroupName();
> }
>   }
> } catch (IOException e) {
>   LOG.error("Cannot get mount point: {}", e.getMessage());
> }
> {code}
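
A sketch of the fix against the snippet above (same context, not compilable on its own): the catch block falls back to the mount table entry's attributes instead of only logging.

{code:java}
    } catch (IOException e) {
      LOG.error("Cannot get mount point: {}", e.getMessage());
      // fall back to the mount table information rather than the
      // superuser defaults
      permission = entry.getMode();
      owner = entry.getOwnerName();
      group = entry.getGroupName();
    }
{code}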






[jira] [Commented] (HDFS-14766) RBF: MountTableStoreImpl#getMountTableEntries returns extra entry

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914720#comment-16914720
 ] 

Hudson commented on HDFS-14766:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17181 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17181/])
HDFS-14766. RBF: MountTableStoreImpl#getMountTableEntries returns extra 
(inigoiri: rev 0b796754b9d746c0389782f1a5e3ee9ef673e54c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MountTableStoreImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java


> RBF: MountTableStoreImpl#getMountTableEntries returns extra entry
> -
>
> Key: HDFS-14766
> URL: https://issues.apache.org/jira/browse/HDFS-14766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14766.001.patch, HDFS-14766.002.patch
>
>
> Similar issue to HDFS-14756: we should use {{FederationUtil.isParentEntry()}} 
> instead of {{String.startsWith()}} to identify a parent path.
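
A self-contained demo of the difference (isParentEntry below is a simplified re-implementation for illustration, not the actual FederationUtil method): "/foo1" starts with "/foo" but is not under it.

{code:java}
public class ParentEntryDemo {

  // true if path equals parent or lies strictly under it
  static boolean isParentEntry(String path, String parent) {
    if (!path.startsWith(parent)) {
      return false;
    }
    // exact match, root parent, or the boundary falls on a path separator
    return path.length() == parent.length()
        || parent.equals("/")
        || path.charAt(parent.length()) == '/';
  }

  public static void main(String[] args) {
    System.out.println("/foo1".startsWith("/foo"));      // true  (wrong answer)
    System.out.println(isParentEntry("/foo1", "/foo"));  // false (right answer)
    System.out.println(isParentEntry("/foo/a", "/foo")); // true
  }
}
{code}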






[jira] [Commented] (HDDS-1827) Load Snapshot info when OM Ratis server starts

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914718#comment-16914718
 ] 

Hudson commented on HDDS-1827:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17181 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17181/])
HDDS-1827. Load Snapshot info when OM Ratis server starts. (#1130) (github: rev 
3f887f3b925cf2a80f426d2c528e3c035a6cf58b)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMRatisSnapshots.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/snapshot/TestOMRatisSnapshotInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/snapshot/TestOzoneManagerSnapshotProvider.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisSnapshotInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMDBCheckpointServlet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java


> Load Snapshot info when OM Ratis server starts
> --
>
> Key: HDDS-1827
> URL: https://issues.apache.org/jira/browse/HDDS-1827
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> When the Ratis server starts, it looks for the latest snapshot to load. 
> Even though OM does not save snapshots via Ratis, we need to load the saved 
> snapshot index into Ratis so that the LogAppender knows not to look for 
> logs before the snapshot index. Otherwise, Ratis will replay the logs from 
> the beginning every time it starts up.
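
A minimal sketch of the startup step, assuming a simple persisted-index file (the file format and names are hypothetical, not the actual OMRatisSnapshotInfo): load the saved index and start log replay after it.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SnapshotIndexSketch {

  static long loadSnapshotIndex(Path file) throws IOException {
    if (!Files.exists(file)) {
      return -1;  // no snapshot recorded yet: replay the log from the start
    }
    return Long.parseLong(Files.readString(file).trim());
  }

  public static void main(String[] args) throws IOException {
    Path f = Files.createTempDirectory("om").resolve("snapshotInfo");
    Files.writeString(f, "9770");  // index persisted by the OM outside Ratis
    long idx = loadSnapshotIndex(f);
    System.out.println("replay Ratis log starting at index " + (idx + 1));
  }
}
{code}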






[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914558#comment-16914558
 ] 

Hudson commented on HDFS-14674:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17180 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17180/])
HDFS-14674. [SBN read] Got an unexpected txid when tail editlog. (cliang: rev 
ebef99dcf41a7538d44db6c8d14d5376c7a065f8)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> HDFS-14674-007.patch, HDFS-14674-008.patch, HDFS-14674-009.patch, 
> HDFS-14674-010.patch, HDFS-14674-011.patch, 
> image-2019-08-22-16-24-06-518.png, image.png
>
>
> Add the following configuration
> !image-2019-08-22-16-24-06-518.png|width=451,height=80!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit lo

[jira] [Commented] (HDFS-13977) NameNode can kill itself if it tries to send too many txns to a QJM simultaneously

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914557#comment-16914557
 ] 

Hudson commented on HDFS-13977:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17180 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17180/])
HDFS-13977. Override shouldForceSync in QuorumOutputStream to allow for 
(xkrogen: rev d699022fce756d25956d33e022100111aa0dd22e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


> NameNode can kill itself if it tries to send too many txns to a QJM 
> simultaneously
> --
>
> Key: HDFS-13977
> URL: https://issues.apache.org/jira/browse/HDFS-13977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Affects Versions: 2.7.7
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13977-branch-2.003.patch, HDFS-13977.000.patch, 
> HDFS-13977.001.patch, HDFS-13977.002.patch, HDFS-13977.003.patch
>
>
> h3. Problem & Logs
> We recently encountered an issue on a large cluster (running 2.7.4) in which 
> the NameNode killed itself because it was unable to communicate with the JNs 
> via QJM. We discovered that it was the result of the NameNode trying to send 
> a huge batch of over 1 million transactions to the JNs in a single RPC:
> {code:title=NameNode Logs}
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote 
> journal X.X.X.X: failed to
>  write txns 1000-11153636. Will try to write to this JN again after the 
> next log roll.
> ...
> WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1098ms 
> to send a batch of 1153637 edits (335886611 bytes) to remote journal 
> X.X.X.X:
> {code}
> {code:title=JournalNode Logs}
> INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8485: 
> readAndProcess from client X.X.X.X threw exception [java.io.IOException: 
> Requested data length 335886776 is longer than maximum configured RPC length 
> 67108864.  RPC came from X.X.X.X]
> java.io.IOException: Requested data length 335886776 is longer than maximum 
> configured RPC length 67108864.  RPC came from X.X.X.X
> at 
> org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1610)
> at 
> org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1672)
> at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:897)
> at 
> org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:753)
> at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:724)
> {code}
> The JournalNodes rejected the RPC because it had a size well over the 64MB 
> default {{ipc.maximum.data.length}}.
> This was triggered by a huge number of files all hitting a hard lease timeout 
> simultaneously, causing the NN to force-close them all at once. This can be a 
> particularly nasty bug as the NN will attempt to re-send this same huge RPC 
> on restart, as it loads an fsimage which still has all of these open files 
> that need to be force-closed.
> h3. Proposed Solution
> To solve this we propose to modify {{EditsDoubleBuffer}} to add a "hard 
> limit" based on the value of {{ipc.maximum.data.length}}. When {{writeOp()}} 
> or {{writeRaw()}} is called, first check the size of {{bufCurrent}}. If it 
> exceeds the hard limit, block the writer until the buffer is flipped and 
> {{bufCurrent}} becomes {{bufReady}}. This gives some self-throttling to 
> prevent the NameNode from killing itself in this way.
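
A minimal sketch of the proposed self-throttling with simplified buffers (not the actual EditsDoubleBuffer): writers block once bufCurrent would exceed the hard limit derived from ipc.maximum.data.length, and are released when the buffers are flipped for a flush.

{code:java}
public class BoundedDoubleBufferSketch {
  private final int hardLimitBytes;
  private StringBuilder bufCurrent = new StringBuilder();
  private StringBuilder bufReady = new StringBuilder();

  BoundedDoubleBufferSketch(int hardLimitBytes) {
    this.hardLimitBytes = hardLimitBytes;
  }

  // called by edit-log writers; blocks instead of growing without bound
  synchronized void writeOp(String op) throws InterruptedException {
    while (bufCurrent.length() + op.length() > hardLimitBytes) {
      wait();            // block the writer until a flip frees up space
    }
    bufCurrent.append(op);
  }

  // swap buffers; the returned content is what a flush thread would send
  synchronized String flip() {
    StringBuilder tmp = bufReady;
    bufReady = bufCurrent;
    bufCurrent = tmp;
    bufCurrent.setLength(0);
    notifyAll();         // wake writers blocked in writeOp
    return bufReady.toString();
  }

  public static void main(String[] args) throws Exception {
    BoundedDoubleBufferSketch buf = new BoundedDoubleBufferSketch(64);
    buf.writeOp("OP_CLOSE txid=1;");
    System.out.println("flushed: " + buf.flip());
  }
}
{code}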






[jira] [Commented] (HDFS-14761) RBF: MountTableResolver cannot invalidate cache correctly

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914509#comment-16914509
 ] 

Hudson commented on HDFS-14761:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17179 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17179/])
HDFS-14761. RBF: MountTableResolver cannot invalidate cache correctly (elgoiri: 
rev 894e2300d60f6222b80ed5afca22e4e17551cf6a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java


> RBF: MountTableResolver cannot invalidate cache correctly
> -
>
> Key: HDFS-14761
> URL: https://issues.apache.org/jira/browse/HDFS-14761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0
>
> Attachments: draft-reproduce-patch-HDFS-14761.patch
>
>
> STEPS TO REPRODUCE:
> Add mount table entry 1->/.
> mountTable.getDestinationForPath("/foo/a") returns "1->/foo/a", which is 
> correct.
> Add mount table entry 2->/foo.
> mountTable.getDestinationForPath("/foo/a") should now return "2->/foo/a", 
> but it still returns "1->/foo/a".
> WHY:
> {code:title=MountTableResolver.java|borderStyle=solid}
> private void invalidateLocationCache(...)
> {
> ...
> String src = loc.getSourcePath();
> if (src != null) {
> if (isParentEntry(src, path)) {
>   LOG.debug("Removing {}", src);
>   it.remove();
> }
> }
> ...
> }
> {code}
> *path* is the new entry, in our case "/foo".
> But *src* is the mount point path, in our case "/", which isn't a child of 
> "/foo".
> So it can't invalidate the cache entry.
> HOW TO FIX:
> Just reverse the parameters of *isParentEntry* .
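> A tiny standalone illustration of the swap (the local {{isParentEntry}} here 
> is a hypothetical stand-in with the semantics described above, not the actual 
> implementation):
> {code:java}
> public class InvalidateDemo {
>   // Stand-in: is "path" equal to or below "parent"?
>   static boolean isParentEntry(String path, String parent) {
>     return path.equals(parent)
>         || path.startsWith(parent.endsWith("/") ? parent : parent + "/");
>   }
> 
>   public static void main(String[] args) {
>     String path = "/foo"; // the newly added mount entry
>     String src = "/";     // source path of a cached location
>     System.out.println(isParentEntry(src, path)); // buggy order: false, cache kept
>     System.out.println(isParentEntry(path, src)); // reversed order: true, cache invalidated
>   }
> }
> {code}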
> PS:
> *PathLocation#getSourcePath()* returns *PathLocation#sourcePath*, which 
> carries the comment "Source path in global namespace." But after reviewing 
> the code, I think the field actually denotes the mount point path. I find 
> this confusing.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1978) Create helper script to run blockade tests

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914473#comment-16914473
 ] 

Hudson commented on HDDS-1978:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17178 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17178/])
HDDS-1978. Create helper script to run blockade tests. (#1310) (github: rev 
20064b69a8a7926f2d80776b029da28d5f98f730)
* (add) hadoop-ozone/dev-support/checks/blockade.sh
* (edit) 
hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/ozone/cluster.py


> Create helper script to run blockade tests
> --
>
> Key: HDDS-1978
> URL: https://issues.apache.org/jira/browse/HDDS-1978
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> To run blockade tests as part of the Jenkins job, we need a helper script.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14747) RBF: IsFileClosed should be return false when the file is open in multiple destination

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914283#comment-16914283
 ] 

Hudson commented on HDFS-14747:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17177 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17177/])
HDFS-14747. RBF: IsFileClosed should be return false when the file is 
(ayushsaxena: rev c92de8209d1c7da9e7ce607abeecb777c4a52c6a)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java


> RBF: IsFileClosed should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14747
> URL: https://issues.apache.org/jira/browse/HDFS-14747
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14747-trunk-001.patch, HDFS-14747-trunk-002.patch
>
>
> *IsFileClosed* should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 and is being written; ns1 doesn't have this file.
> In this case *IsFileClosed* should return false instead of throwing 
> FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2023) Fix rat check failures in trunk

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914171#comment-16914171
 ] 

Hudson commented on HDDS-2023:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17176 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17176/])
HDDS-2023. Fix rat check failures in trunk (addendum) (elek: rev 
d3fe993e60c2cef04232a0ca8ef3a4b60cdabf8b)
* (edit) hadoop-hdds/docs/pom.xml
* (edit) hadoop-hdds/pom.xml


> Fix rat check failures in trunk
> ---
>
> Key: HDDS-2023
> URL: https://issues.apache.org/jira/browse/HDDS-2023
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Several files in hadoop-ozone do not have Apache license headers and cause a 
> failure in trunk. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2024) rat.sh: grep: warning: recursive search of stdin

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914172#comment-16914172
 ] 

Hudson commented on HDDS-2024:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17176 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17176/])
HDDS-2024. rat.sh: grep: warning: recursive search of stdin (elek: rev 
75bf090990d5237e2f76f83d00dce5259c39a294)
* (edit) hadoop-ozone/dev-support/checks/rat.sh


> rat.sh: grep: warning: recursive search of stdin
> 
>
> Key: HDDS-2024
> URL: https://issues.apache.org/jira/browse/HDDS-2024
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Running {{rat.sh}} locally fails with the following error message (after the 
> two Maven runs):
> {code:title=./hadoop-ozone/dev-support/checks/rat.sh}
> ...
> grep: warning: recursive search of stdin
> {code}
> This happens if {{grep}} is not the GNU one.
> Further, {{rat.sh}} runs into {{cat: target/rat-aggregated.txt: No such file 
> or directory}} in a subshell due to a typo, and so it always exits with success:
> {code}
> $ ./hadoop-ozone/dev-support/checks/rat.sh
> ...
> cat: target/rat-aggregated.txt: No such file or directory
> $ echo $?
> 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2023) Fix rat check failures in trunk

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914132#comment-16914132
 ] 

Hudson commented on HDDS-2023:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17175 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17175/])
HDDS-2023. Fix rat check failures in trunk (elek: rev 
e2a55482ee59624a3c1d6cd16d0acb8104201071)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/BooleanBiFunction.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/multipart/TestS3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientForAclAuditLog.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/bucket/TestS3BucketDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/multipart/TestS3MultipartUploadAbortRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerDoubleBufferHelper.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCompleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyPurgeResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheResult.java
* (edit) 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/haproxy-conf/haproxy.cfg
* (edit) hadoop-hdds/pom.xml
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyPurgeRequest.java


> Fix rat check failures in trunk
> ---
>
> Key: HDDS-2023
> URL: https://issues.apache.org/jira/browse/HDDS-2023
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Several files in hadoop-ozone do not have Apache license headers and cause a 
> failure in trunk. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2000) Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot

2019-08-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16914110#comment-16914110
 ] 

Hudson commented on HDDS-2000:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17174 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17174/])
HDDS-2000. Don't depend on bootstrap/jquery versions from hadoop-trunk (elek: 
rev b4a95a2b00f2fb560de9c462fba25b9dad37aca4)
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css.map
* (edit) hadoop-hdds/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/js/bootstrap.min.js
* (edit) hadoop-hdds/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.eot
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.ttf
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff2
* (edit) hadoop-hdds/docs/themes/ozonedoc/static/css/bootstrap.min.css
* (add) hadoop-hdds/docs/themes/ozonedoc/static/js/jquery-3.4.1.min.js
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css.map
* (edit) hadoop-hdds/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff
* (edit) hadoop-hdds/pom.xml
* (edit) hadoop-hdds/docs/themes/ozonedoc/layouts/partials/navbar.html
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css
* (add) hadoop-hdds/framework/src/main/resources/webapps/static/hadoop.css
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/js/bootstrap.js
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap.css.map
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/jquery-3.4.1.min.js
* (delete) hadoop-hdds/docs/themes/ozonedoc/static/js/jquery.min.js
* (edit) hadoop-hdds/framework/pom.xml
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.svg
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap.css
* (edit) hadoop-ozone/pom.xml
* (edit) hadoop-hdds/docs/themes/ozonedoc/static/js/bootstrap.min.js
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap-editable.css
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css
* (edit) hadoop-hdds/docs/themes/ozonedoc/layouts/partials/footer.html
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/js/bootstrap-editable.min.js
* (add) 
hadoop-hdds/framework/src/main/resources/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css.map


> Don't depend on bootstrap/jquery versions from hadoop-trunk snapshot
> 
>
> Key: HDDS-2000
> URL: https://issues.apache.org/jira/browse/HDDS-2000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, SCM
>Reporter: Elek, Marton
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 16h
>  Remaining Estimate: 0h
>
> The OM/SCM web pages are broken due to the upgrade in HDFS-14729 (which is a 
> great patch on the Hadoop side). For more stability, I propose to use our own 
> copy of jquery/bootstrap instead of copying the current version from hadoop 
> trunk, which is a SNAPSHOT build.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1948) S3 MPU can't be created with octet-stream content-type

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913983#comment-16913983
 ] 

Hudson commented on HDDS-1948:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17173/])
HDDS-1948. S3 MPU can't be created with octet-stream content-type  (bharat: rev 
edd708527d34d0bf3b09dc35a7f645f49e7becb3)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestInitiateMultipartUpload.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/VirtualHostStyleFilter.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestAbortMultipartUpload.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestMultipartUploadComplete.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/HeaderPreprocessor.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPartUpload.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/S3GatewayHttpServer.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestListParts.java


> S3 MPU can't be created with octet-stream content-type 
> ---
>
> Key: HDDS-1948
> URL: https://issues.apache.org/jira/browse/HDDS-1948
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This problem was reported offline by [~shaneku...@gmail.com].
> When aws-sdk-go is used to access the s3 gateway of Ozone, it sends the Multi 
> Part Upload initialize message with the "application/octet-stream" Content-Type. 
> This Content-Type is missing from the aws-cli, which is used to reimplement the 
> s3 endpoint.
> The problem is that we use the same REST endpoint for the initialize and 
> complete Multipart Upload requests. For completion we need the 
> CompleteMultipartUploadRequest parameter, which is parsed from the body.
> For initialize we have an empty body, which can't be deserialized to 
> CompleteMultipartUploadRequest.
> The workaround is to set a specific content type from a filter, which lets us 
> create two different REST methods for the initialize and completion messages.
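> A minimal sketch of such a filter, assuming plain JAX-RS (the marker media 
> type and class name are illustrative, not necessarily what the patch uses):
> {code:java}
> import javax.ws.rs.container.ContainerRequestContext;
> import javax.ws.rs.container.ContainerRequestFilter;
> import javax.ws.rs.container.PreMatching;
> import javax.ws.rs.core.HttpHeaders;
> 
> @PreMatching
> public class MpuContentTypeFilter implements ContainerRequestFilter {
>   @Override
>   public void filter(ContainerRequestContext ctx) {
>     // An MPU initiate request is a POST with an "uploads" query parameter
>     // and an empty body; rewrite its content type so it is routed to a
>     // dedicated @Consumes method instead of the completion endpoint.
>     if ("POST".equals(ctx.getMethod())
>         && ctx.getUriInfo().getQueryParameters().containsKey("uploads")) {
>       ctx.getHeaders().putSingle(HttpHeaders.CONTENT_TYPE,
>           "application/x-mpu-initiate");
>     }
>   }
> }
> {code}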
> Here is an example to test (using bogus AWS credentials).
> {code}
> curl -H 'Host:yourhost' -H 'User-Agent:aws-sdk-go/1.15.11 (go1.11.2; linux; 
> amd64)' -H 'Content-Length:0' -H 'Authorization:AWS4-HMAC-SHA256 
> Credential=qwe/20190809/ozone/s3/aws4_request, 
> SignedHeaders=content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-storage-class,
>  Signature=7726ed63990ba3f4f1f796d4ab263f5d9c3374528840f5e49d106dbef491f22c' 
> -H 'Content-Type:application/octet-stream' -H 'X-Amz-Acl:private' -H 
> 'X-Amz-Content-Sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
>  -H 'X-Amz-Date:20190809T070142Z' -H 'X-Amz-Storage-Class:STANDARD' -H 
> 'Accept-Encoding:gzip' -X POST 
> 'http://localhost:/docker/docker/registry/v2/repositories/apache/ozone-runner/_uploads/2173f019-09c3-466b-bb7d-c31ce749d826/data?uploads
> {code}
> Without the patch it returns with HTTP 405 (Not supported Media Type).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14396) Failed to load image from FSImageFile when downgrade from 3.x to 2.x

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913984#comment-16913984
 ] 

Hudson commented on HDFS-14396:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17173/])
HDFS-14396. Failed to load image from FSImageFile when downgrade from 
(aajisaka: rev bd7baea5a5d4ff351645e34c0ef09b7ba82f4285)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java


> Failed to load image from FSImageFile when downgrade from 3.x to 2.x
> 
>
> Key: HDFS-14396
> URL: https://issues.apache.org/jira/browse/HDFS-14396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14396.001.patch, HDFS-14396.002.patch
>
>
> After fixing HDFS-13596, we tried to downgrade from 3.x to 2.x, but the 
> namenode can't start because an exception occurs. The message follows:
> {code:java}
> 2019-01-23 17:22:18,730 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Failed to load image from 
> FSImageFile(file=/data1/hadoopdata/hadoop-namenode/current/fsimage_0025310,
>  cpktTxId=00
> 25310)
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:869)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:742)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:673)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:672)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:839)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1517)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1583)
> 2019-01-23 17:22:19,023 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: Failed to load FSImage file, see error(s) above for more 
> info.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> {code}
> This issue occurs because the 3.x namenode saves the image with EC fields 
> during the upgrade.
> This jira tries to fix it.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913957#comment-16913957
 ] 

Hudson commented on HDFS-13596:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17172 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17172/])
HDFS-13596. NN restart fails after RollingUpgrade from 2.x to 3.x. (aajisaka: 
rev abc8fde4caea0e197568ee28392c46f1ce0d42e1)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditsDoubleBuffer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditsDoubleBuffer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogBackupOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/QJMTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java


> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Blocker
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch, HDFS-13596.008.patch, 
> HDFS-13596.009.patch, HDFS-13596.010.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
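> A self-contained toy demo of the misalignment described above (illustrative 
> only, not HDFS code): a record written with the new extra field but parsed 
> with the old rules reads every later field off by one byte.
> {code:java}
> import java.io.*;
> 
> public class LayoutMismatchDemo {
>   public static void main(String[] args) throws IOException {
>     ByteArrayOutputStream bos = new ByteArrayOutputStream();
>     DataOutputStream out = new DataOutputStream(bos);
>     out.writeLong(42L); // txid
>     out.writeByte(7);   // new-format erasure coding field
>     out.writeInt(16);   // clientId length
> 
>     DataInputStream in = new DataInputStream(
>         new ByteArrayInputStream(bos.toByteArray()));
>     long txid = in.readLong();
>     // An old-format reader knows nothing about the EC byte, so the next
>     // int starts one byte early and comes back as garbage:
>     int clientIdLen = in.readInt();
>     System.out.println(txid + " " + clientIdLen); // prints "42 117440512", not 16
>   }
> }
> {code}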
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.<init>(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.<init>(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs

[jira] [Commented] (HDDS-1808) TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913870#comment-16913870
 ] 

Hudson commented on HDDS-1808:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17171 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17171/])
HDDS-1808. TestRatisPipelineCreateAndDestory times out (#1338) (bharat: rev 
f6af7d0fd7ad0d1780bd3e37ee587918653c265e)
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestory.java


> TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out
> -
>
> Key: HDDS-1808
> URL: https://issues.apache.org/jira/browse/HDDS-1808
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> Error Message
> test timed out after 3 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 3 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
>   at 
> org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.waitForPipelines(TestRatisPipelineCreateAndDestory.java:126)
>   at 
> org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.testPipelineCreationOnNodeRestart(TestRatisPipelineCreateAndDestory.java:121)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14675) Increase Balancer Defaults Further

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913818#comment-16913818
 ] 

Hudson commented on HDFS-14675:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17170/])
HDFS-14675. Increase Balancer Defaults Further. Contributed by Stephen 
(weichiu: rev 93daf69f90df650a6c5fb33f79e51878ad8985c9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> raising concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14617) Improve fsimage load time by writing sub-sections to the fsimage index

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913819#comment-16913819
 ] 

Hudson commented on HDFS-14617:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17170/])
HDFS-14617. Improve fsimage load time by writing sub-sections to the (weichiu: 
rev b67812ea2111fa11bdd76096b923c93e1bdf2923)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Improve fsimage load time by writing sub-sections to the fsimage index
> --
>
> Key: HDFS-14617
> URL: https://issues.apache.org/jira/browse/HDFS-14617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14617.001.patch, ParallelLoading.svg, 
> SerialLoading.svg, dirs-single.svg, flamegraph.parallel.svg, 
> flamegraph.serial.svg, inodes.svg
>
>
> Loading an fsimage is basically a single threaded process. The current 
> fsimage is written out in sections, eg iNode, iNode_Directory, Snapshots, 
> Snapshot_Diff etc. Then at the end of the file, an index is written that 
> contains the offset and length of each section. The image loader code uses 
> this index to initialize an input stream to read and process each section. It 
> is important that one section is fully loaded before another is started, as 
> the next section depends on the results of the previous one.
> What I would like to propose is the following:
> 1. When writing the image, we can optionally output sub_sections to the 
> index. That way, a given section would effectively be split into several 
> sections, eg:
> {code:java}
>inode_section offset 10 length 1000
>  inode_sub_section offset 10 length 500
>  inode_sub_section offset 510 length 500
>  
>inode_dir_section offset 1010 length 1000
>  inode_dir_sub_section offset 1010 length 500
>  inode_dir_sub_section offset 1010 length 500
> {code}
> Here you can see we still have the original section index, but then we also 
> have sub-section entries that cover the entire section. Then a processor can 
> either read the full section in serial, or read each sub-section in parallel.
> 2. In the Image Writer code, we should set a target number of sub-sections, 
> and then based on the total inodes in memory, it will create that many 
> sub-sections per major image section. I think the only sections worth doing 
> this for are inode, inode_reference, inode_dir and snapshot_diff. All others 
> tend to be fairly small in practice.
> 3. If there are under some threshold of inodes (eg 10M) then don't bother 
> with the sub-sections as a serial load only takes a few seconds at that scale.
> 4. The image loading code can then have a switch to enable 'parallel loading' 
> and a 'number of threads' where it uses the sub-sections, or if not enabled 
> falls back to the existing logic to read the entire section in serial.
> Working with a large image of 316M inodes and 35GB on disk, I have a proof of 
> concept of this change working, allowing just inode and inode_dir to be 
> loaded in parallel, but I believe inode_reference and snapshot_diff can be 
> made parallel with the same technique.
> Some benchmarks I have are as follows:
> {code:java}
> Threads   1 2 3 4 
> 
> inodes448   290   226   189 
> inode_dir 326   211   170   161 
> Total 927   651   535   488 (MD5 calculation about 100 seconds)
> {code}
> The above table shows the time in seconds to load the inode section and the 
> inode_directory section, and then the total load time of the image.
> With 4 threads using the above technique, we are able to better than halve 
> the total load time of the image.
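> As a rough sketch, the loader side could look like this (hypothetical 
> {{SubSection}}/{{loadOneSubSection}} names, not the committed code):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> 
> class ParallelSectionLoader {
>   static final class SubSection {
>     final long offset, length;
>     SubSection(long offset, long length) { this.offset = offset; this.length = length; }
>   }
> 
>   // Sub-sections cover disjoint byte ranges of one image section, so they
>   // can be deserialized concurrently; the section as a whole still
>   // completes before the next section starts.
>   void loadSection(List<SubSection> subSections, int threads) throws Exception {
>     ExecutorService pool = Executors.newFixedThreadPool(threads);
>     try {
>       List<Future<?>> futures = new ArrayList<>();
>       for (SubSection s : subSections) {
>         futures.add(pool.submit(() -> loadOneSubSection(s.offset, s.length)));
>       }
>       for (Future<?> f : futures) {
>         f.get(); // propagate failures and wait for the whole section
>       }
>     } finally {
>       pool.shutdown();
>     }
>   }
> 
>   private void loadOneSubSection(long offset, long length) {
>     // Open a positioned stream over [offset, offset + length) and
>     // deserialize the entries in that range (details elided).
>   }
> }
> {code}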
> 

[jira] [Commented] (HDDS-1347) Implement GetS3Secret to use double buffer and cache.

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913817#comment-16913817
 ] 

Hudson commented on HDDS-1347:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17170/])
HDDS-1347. In OM HA getS3Secret call Should happen only leader OM. (github: rev 
4028cac56d469c566f2dbad9e9f11c36c53f5ee9)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/security/package-info.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/security/S3GetSecretResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/S3GetSecretRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/security/package-info.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java


> Implement GetS3Secret to use double buffer and cache.
> -
>
> Key: HDDS-1347
> URL: https://issues.apache.org/jira/browse/HDDS-1347
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In OM HA, getS3Secret should happen only on the leader OM.
>  
>  
> The reason is similar to initiateMultipartUpload. For more info, refer to 
> HDDS-1319.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14763) Fix package name of audit log class in Dynamometer document

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913587#comment-16913587
 ] 

Hudson commented on HDFS-14763:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17169/])
HDFS-14763. Fix package name of audit log class in Dynamometer document 
(github: rev ee7c261e1e81f836bb18ca7f92a72abb056faf8a)
* (edit) hadoop-tools/hadoop-dynamometer/src/site/markdown/Dynamometer.md


> Fix package name of audit log class in Dynamometer document
> ---
>
> Key: HDFS-14763
> URL: https://issues.apache.org/jira/browse/HDFS-14763
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, tools
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2008) Wrong package for RatisHelper class in hadoop-hdds/common module.

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913592#comment-16913592
 ] 

Hudson commented on HDDS-2008:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17169/])
HDDS-2008 : Wrong package for RatisHelper class in hadoop-hdds/common (bharat: 
rev 28fb4b527afec93926127a93e4b94a157c0f64f1)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (delete) hadoop-hdds/common/src/main/java/org/apache/ratis/RatisHelper.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/RatisTestHelper.java
* (delete) hadoop-hdds/common/src/main/java/org/apache/ratis/package-info.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisHelper.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/package-info.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerCommandHandler.java


> Wrong package for RatisHelper class in hadoop-hdds/common module.
> -
>
> Key: HDDS-2008
> URL: https://issues.apache.org/jira/browse/HDDS-2008
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> It is currently org.apache.ratis.RatisHelper. 
> It should be org.apache.hadoop.hdds.ratis.RatisHelper.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14755) [Dynamometer] Hadoop-2 DataNode fail to start

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913590#comment-16913590
 ] 

Hudson commented on HDFS-14755:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17169/])
HDFS-14755. [Dynamometer] Enhance compatibility of Dynamometer with (xkrogen: 
rev 63c295e29840587eb6eb4a0fa258c55002e3229a)
* (edit) 
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/SimulatedDataNodes.java


> [Dynamometer] Hadoop-2 DataNode fail to start
> -
>
> Key: HDFS-14755
> URL: https://issues.apache.org/jira/browse/HDFS-14755
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
>
> When using an fsimage of Hadoop-2 with hadoop-dynamometer, datanodes fail to 
> start with the following error.
> {noformat}
> Exception in thread "main" java.lang.IllegalAccessError: tried to access 
> method 
> org.apache.hadoop.hdfs.server.datanode.StorageLocation.getUri()Ljava/net/URI; 
> from class org.apache.hadoop.tools.dynamometer.SimulatedDataNodes
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(SimulatedDataNodes.java:113)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.main(SimulatedDataNodes.java:88)
> ./start-component.sh: line 317: kill: (9876) - No such process
> {noformat}
> The cause of this error is an incompatibility of StorageLocation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14583) FileStatus#toString() will throw IllegalArgumentException

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913591#comment-16913591
 ] 

Hudson commented on HDFS-14583:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17169/])
HDFS-14583. FileStatus#toString() will throw IllegalArgumentException. 
(inigoiri: rev e04dcfdc57434858884601ac647522f1160830f7)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsNamedFileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java


> FileStatus#toString() will throw IllegalArgumentException
> -
>
> Key: HDFS-14583
> URL: https://issues.apache.org/jira/browse/HDFS-14583
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
>  Labels: HDFS
> Fix For: 3.3.0
>
> Attachments: HDFS-14583-trunk-0001.patch, HDFS-14583-trunk-002.patch, 
> HDFS-14583-trunk-003.patch
>
>
> FileStatus#toString() will throw IllegalArgumentException; the stack trace and 
> error message look like this:
> {code:java}
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:172)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:184)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus.getSymlink(HdfsLocatedFileStatus.java:117)
>   at org.apache.hadoop.fs.FileStatus.toString(FileStatus.java:462)
>   at 
> org.apache.hadoop.hdfs.web.TestJsonUtil.testHdfsFileStatus(TestJsonUtil.java:123)
> {code}
> The test code is as follows:
> {code:java}
> @Test
> public void testHdfsFileStatus() throws IOException {
>   HdfsFileStatus hdfsFileStatus = new HdfsFileStatus.Builder()
>   .replication(1)
>   .blocksize(1024)
>   .perm(new FsPermission((short) 777))
>   .owner("owner")
>   .group("group")
>   .symlink(new byte[0])
>   .path(new byte[0])
>   .fileId(1010)
>   .isdir(true)
>   .build();
>   System.out.println("HdfsFileStatus = " + hdfsFileStatus.toString());
> }{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-08-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913137#comment-16913137
 ] 

Hudson commented on HDFS-14358:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17168 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17168/])
HDFS-14358. Provide LiveNode and DeadNode filter in DataNode UI. 
(surendralilhore: rev 76790a1e671c3c10c6083d13fb4fb8b1b3326ccf)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/hadoop.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js


> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14358 (4).patch, HDFS-14358(2).patch, 
> HDFS-14358(3).patch, HDFS-14358.005.patch, HDFS14358.JPG, hdfs-14358.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14741) RBF: RecoverLease should be return false when the file is open in multiple destination

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912940#comment-16912940
 ] 

Hudson commented on HDFS-14741:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17166 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17166/])
HDFS-14741. RBF: RecoverLease should be return false when the file is 
(ayushsaxena: rev 52c77bc1607421037f6f84f695f607bb89b97cb6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java


> RBF: RecoverLease should be return false when the file is open in multiple 
> destination
> --
>
> Key: HDFS-14741
> URL: https://issues.apache.org/jira/browse/HDFS-14741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14741-trunk-001.patch, HDFS-14741-trunk-002.patch, 
> HDFS-14741-trunk-003.patch, HDFS-14741-trunk-004.patch, 
> HDFS-14741-trunk-005.patch, HDFS-14741-trunk-006.patch, 
> HDFS-14741-trunk-007.patch
>
>
> RecoverLease should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file is in ns0 and is being written; ns1 doesn't have this file.
> In this case *recoverLease* should return false instead of throwing 
> FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1927) Consolidate add/remove Acl into OzoneAclUtil class

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912809#comment-16912809
 ] 

Hudson commented on HDDS-1927:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17164 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17164/])
HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. (aengineer: rev 
d58eba867234eaac0e229feb990e9dab3912e063)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCompleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3InitiateMultipartUploadRequest.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneAclUtil.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOzoneAclUtil.java


> Consolidate add/remove Acl into OzoneAclUtil class
> --
>
> Key: HDDS-1927
> URL: https://issues.apache.org/jira/browse/HDDS-1927
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> This Jira was created based on @xiaoyu's comment on HDDS-1884.
> Can we abstract this add/remove logic into a common AclUtil class, as we can 
> see similar logic in both the bucket manager and the key manager? For example:
> public static boolean addAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
> public static boolean removeAcl(List<OzoneAcl> existingAcls, OzoneAcl newAcl)
>  
> But to do this, we need both OmKeyInfo and OmBucketInfo to use a list of 
> OzoneAcl/OzoneAclInfo.
> This Jira is to do that refactor, and also to address the above comment by 
> moving the common logic to AclUtils.
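> A minimal sketch of what those helpers could look like ({{Acl}} is a 
> hypothetical stand-in for OzoneAcl, and the merge semantics are illustrative, 
> not the committed behavior):
> {code:java}
> import java.util.HashSet;
> import java.util.Iterator;
> import java.util.List;
> import java.util.Objects;
> import java.util.Set;
> 
> final class Acl {
>   final String identity;    // e.g. "user:alice"
>   final Set<String> rights; // e.g. {"READ", "WRITE"}
>   Acl(String identity, Set<String> rights) {
>     this.identity = identity;
>     this.rights = new HashSet<>(rights);
>   }
> }
> 
> final class AclUtil {
>   private AclUtil() { }
> 
>   // Returns true if the list changed: merges rights into an existing
>   // entry for the same identity, or appends a new entry.
>   static boolean addAcl(List<Acl> existing, Acl newAcl) {
>     for (Acl acl : existing) {
>       if (Objects.equals(acl.identity, newAcl.identity)) {
>         return acl.rights.addAll(newAcl.rights);
>       }
>     }
>     return existing.add(newAcl);
>   }
> 
>   // Returns true if the list changed: removes the given rights and drops
>   // the entry once no rights remain.
>   static boolean removeAcl(List<Acl> existing, Acl toRemove) {
>     for (Iterator<Acl> it = existing.iterator(); it.hasNext();) {
>       Acl acl = it.next();
>       if (Objects.equals(acl.identity, toRemove.identity)) {
>         boolean changed = acl.rights.removeAll(toRemove.rights);
>         if (acl.rights.isEmpty()) {
>           it.remove();
>         }
>         return changed;
>       }
>     }
>     return false;
>   }
> }
> {code}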



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14744) RBF: Non secured routers should not log in error mode when UGI is default.

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912765#comment-16912765
 ] 

Hudson commented on HDFS-14744:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17163/])
HDFS-14744. RBF: Non secured routers should not log in error mode when 
(ayushsaxena: rev f9029c4070e8eb046b403f5cb6d0a132c5d58448)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java


> RBF: Non secured routers should not log in error mode when UGI is default.
> --
>
> Key: HDFS-14744
> URL: https://issues.apache.org/jira/browse/HDFS-14744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14744.001.patch
>
>
> RouterClientProtocol#getMountPointStatus logs an error when groups are not 
> found for the default web user dr.who. The line should be logged at "error" 
> level for a secured cluster; for unsecured clusters, we may want to log at 
> "debug" level instead, or else the logs fill up with this non-critical line:
> {{ERROR org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: 
> Cannot get the remote user: There is no primary group for UGI dr.who 
> (auth:SIMPLE)}}
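> A minimal sketch of the guard described above (assuming Hadoop's 
> {{UserGroupInformation.isSecurityEnabled()}}; not the committed patch):
> {code:java}
> import org.apache.hadoop.security.UserGroupInformation;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> final class RemoteUserLogging {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(RemoteUserLogging.class);
> 
>   // Keep ERROR for secured clusters; drop to DEBUG under simple auth so
>   // the non-critical line doesn't flood the logs.
>   static void logCannotGetRemoteUser(Exception e) {
>     if (UserGroupInformation.isSecurityEnabled()) {
>       LOG.error("Cannot get the remote user: {}", e.getMessage());
>     } else {
>       LOG.debug("Cannot get the remote user: {}", e.getMessage());
>     }
>   }
> }
> {code}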
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14276) [SBN read] Reduce tailing overhead

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912764#comment-16912764
 ] 

Hudson commented on HDFS-14276:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17163/])
HDFS-14276. [SBN read] Reduce tailing overhead. Contributed by Wei-Chiu 
(ayushsaxena: rev 0f598aed13d0fc55908bab3f1653f20084153299)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java


> [SBN read] Reduce tailing overhead
> --
>
> Key: HDFS-14276
> URL: https://issues.apache.org/jira/browse/HDFS-14276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Affects Versions: 3.3.0
> Environment: Hardware: 4-node cluster, each node has 4 core, Xeon 
> 2.5Ghz, 25GB memory.
> Software: CentOS 7.4, CDH 6.0 + Consistent Reads from Standby, Kerberos, SSL, 
> RPC encryption + Data Transfer Encryption.
>Reporter: Wei-Chiu Chuang
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14276-01.patch, HDFS-14276.000.patch, Screen Shot 
> 2019-02-12 at 10.51.41 PM.png, Screen Shot 2019-02-14 at 11.50.37 AM.png
>
>
> When the Observer sets {{dfs.ha.tail-edits.period}} = {{0ms}}, it tails the edit
> log continuously in order to fetch the latest edits, but there is a lot of
> overhead in doing so.
> Critically, the edit log tailer should _not_ update the NameDirSize metric every
> time. That metric has nothing to do with fetching edits, and it involves a lot of
> directory space calculation.
> Profiling suggests a non-trivial chunk of time is spent for nothing.
> Other than this, the biggest overhead is in the communication to 
> serialize/deserialize messages to/from JNs. I am looking for ways to reduce 
> the cost because it's burning 30% of my CPU time even when the cluster is 
> idle.
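One way to remove the per-iteration cost is to throttle the metric refresh. A JDK-only sketch of that idea follows; the interval and the wiring into the tailer loop are assumptions, not the committed patch:

{code:java}
import java.util.concurrent.TimeUnit;

final class ThrottledUpdater {
  private final Runnable update;
  private final long intervalNanos;
  private long lastRunNanos;

  ThrottledUpdater(Runnable update, long intervalMs) {
    this.update = update;
    this.intervalNanos = TimeUnit.MILLISECONDS.toNanos(intervalMs);
  }

  /** Runs the expensive update at most once per interval. */
  void maybeRun() {
    long now = System.nanoTime();
    if (now - lastRunNanos >= intervalNanos) {
      lastRunNanos = now;
      update.run(); // e.g. the NameDirSize directory-space calculation
    }
  }
}
{code}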



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1999) Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912647#comment-16912647
 ] 

Hudson commented on HDDS-1999:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17162/])
HDDS-1999. Basic acceptance test and SCM/OM web UI broken by Bootstrap (bharat: 
rev 2ae7f444bdef15fda202f920232bcc1b639e8900)
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/basic.robot
* (edit) hadoop-ozone/s3gateway/src/main/resources/webapps/static/index.html
* (edit) hadoop-hdds/server-scm/src/main/resources/webapps/scm/index.html
* (edit) 
hadoop-ozone/ozone-manager/src/main/resources/webapps/ozoneManager/index.html
* (edit) hadoop-ozone/s3gateway/src/main/resources/browser.html


> Basic acceptance test and SCM/OM web UI broken by Bootstrap upgrade
> ---
>
> Key: HDDS-1999
> URL: https://issues.apache.org/jira/browse/HDDS-1999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: Screen Shot 2019-08-21 at 2.34.23 PM.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-9stkx/acceptance/smokeresult/log.html#s1-s8-t1}
> $ curl --negotiate -u : -s -I 
> http://scm:9876/static/bootstrap-3.3.7/js/bootstrap.min.js 2>&1
> HTTP/1.1 404 Not Found
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1871) Remove anti-affinity rules from k8s minkube example

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912648#comment-16912648
 ] 

Hudson commented on HDDS-1871:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17162 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17162/])
HDDS-1871. Remove anti-affinity rules from k8s minkube example (aengineer: rev 
8fc6567b946f1d536ffed4798b5403a365021464)
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/getting-started/s3g-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone/csi/csi-ozone-serviceaccount.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/definitions/ozone/datanode-ss.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/getting-started/scm-statefulset.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone/datanode-statefulset.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/Flekszible
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/prometheus-operator-clusterrolebinding.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone/csi/csi-ozone-clusterrole.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone/csi/csi-ozone-clusterrolebinding.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/om-statefulset.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/scm-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/csi/csi-ozone-clusterrole.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/csi/csi-ozone-clusterrolebinding.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/minikube/datanode-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/datanode-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/getting-started/om-statefulset.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/s3g-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/getting-started/datanode-statefulset.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/csi/csi-ozone-serviceaccount.yaml
* (edit) 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/prometheus-clusterrole.yaml


> Remove anti-affinity rules from k8s minkube example
> ---
>
> Key: HDDS-1871
> URL: https://issues.apache.org/jira/browse/HDDS-1871
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: kubernetes
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDDS-1646 introduced real persistence for the k8s example deployment files, which
> means we need anti-affinity scheduling rules: even when we use a statefulset
> instead of a daemonset, we would like to start one datanode per real node.
> With minikube we have only one node, so the scheduling rule should be removed to
> allow at least 3 datanodes on the same physical node.
> How to test:
> {code}
> mvn clean install -DskipTests -f pom.ozone.xml
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/kubernetes/examples/minikube
> minikube start
> kubectl apply -f .
> kubectl get pod
> {code}
> You should see 3 datanode instances.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1973) Implement OM RenewDelegationToken request to use Cache and DoubleBuffer

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912611#comment-16912611
 ] 

Hudson commented on HDDS-1973:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17161/])
HDDS-1973. Implement OM RenewDelegationToken request to use Cache and (github: 
rev 217e74816c4035d8d4e7645ab91089fd5bf6af66)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/security/OMRenewDelegationTokenRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/security/OMRenewDelegationTokenResponse.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java


> Implement OM RenewDelegationToken request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1973
> URL: https://issues.apache.org/jira/browse/HDDS-1973
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Implement the OM RenewDelegationToken request to use the OM cache and double buffer.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14756) RBF: getQuotaUsage may ignore some folders

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912610#comment-16912610
 ] 

Hudson commented on HDFS-14756:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17161/])
HDFS-14756. RBF: getQuotaUsage may ignore some folders. Contributed by 
(inigoiri: rev 93595febaa6673eea369911c3f7fcd75d4915dbc)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java


> RBF: getQuotaUsage may ignore some folders
> --
>
> Key: HDFS-14756
> URL: https://issues.apache.org/jira/browse/HDFS-14756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14756.001.patch, HDFS-14756.002.patch
>
>
> {{getValidQuotaLocations}} wants to filter out duplicate subfolders, but it uses
> the wrong check to determine the parent folder. With this logic, if we have 2
> mount points like /miui and /miuiads, then /miuiads will be ignored.
> {code:java}
> private List<RemoteLocation> getValidQuotaLocations(String path)
>     throws IOException {
>   final List<RemoteLocation> locations = getQuotaRemoteLocations(path);
>   // NameService -> Locations
>   ListMultimap<String, RemoteLocation> validLocations =
>       ArrayListMultimap.create();
>   for (RemoteLocation loc : locations) {
>     final String nsId = loc.getNameserviceId();
>     final Collection<RemoteLocation> dests = validLocations.get(nsId);
>     // Ensure the paths in the same nameservice is different.
>     // Do not include parent-child paths.
>     boolean isChildPath = false;
>     for (RemoteLocation d : dests) {
>       if (StringUtils.startsWith(loc.getDest(), d.getDest())) {
>         isChildPath = true;
>         break;
>       }
>     }
>     if (!isChildPath) {
>       validLocations.put(nsId, loc);
>     }
>   }
>   return Collections
>       .unmodifiableList(new ArrayList<>(validLocations.values()));
> }
> {code}
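A separator-aware comparison avoids the false positive; a hedged sketch of such a check, not necessarily the committed fix:

{code:java}
final class PathChecks {
  /** /miui is a parent of /miui and /miui/x, but not of /miuiads. */
  static boolean isParentEntry(String path, String parent) {
    if (!path.startsWith(parent)) {
      return false;
    }
    return path.length() == parent.length()
        || parent.equals("/")
        || path.charAt(parent.length()) == '/';
  }
}
{code}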



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14714) RBF: implement getReplicatedBlockStats interface

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912609#comment-16912609
 ] 

Hudson commented on HDFS-14714:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17161 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17161/])
HDFS-14714. RBF: implement getReplicatedBlockStats interface. (inigoiri: rev 
5eeb6da2d44335a27dc79e59e6ca561247b46a31)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ReplicatedBlockStats.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java


> RBF: implement getReplicatedBlockStats interface
> 
>
> Key: HDFS-14714
> URL: https://issues.apache.org/jira/browse/HDFS-14714
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14714.001.patch, HDFS-14714.002.patch, 
> HDFS-14714.003.patch, HDFS-14714.004.patch, HDFS-14714.005.patch
>
>
> It's not implemented now; we sometimes need this interface for cluster monitoring:
> {code:java}
> // current implementation
> public ReplicatedBlockStats getReplicatedBlockStats() throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> }
> {code}
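An RBF implementation would fan the call out to every nameservice and sum the per-namespace counters. An illustrative toy merge follows, using a hypothetical SimpleBlockStats holder rather than the real ReplicatedBlockStats accessors:

{code:java}
import java.util.Collection;

final class SimpleBlockStats {
  long lowRedundancyBlocks;
  long corruptBlocks;
  long missingBlocks;

  /** Sums the per-nameservice counters into one cluster-wide view. */
  static SimpleBlockStats merge(Collection<SimpleBlockStats> perNameservice) {
    SimpleBlockStats total = new SimpleBlockStats();
    for (SimpleBlockStats s : perNameservice) {
      total.lowRedundancyBlocks += s.lowRedundancyBlocks;
      total.corruptBlocks += s.corruptBlocks;
      total.missingBlocks += s.missingBlocks;
    }
    return total;
  }
}
{code}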



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-08-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912475#comment-16912475
 ] 

Hudson commented on HDFS-14476:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17160 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17160/])
Revert "HDFS-14476. lock too long when fix inconsistent blocks between 
(weichiu: rev 57f737017465cccb0f6b5ab6e3130ef49a02d4c2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java


> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0, 3.0.3
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Fix For: 2.10.0, 2.8.6, 2.9.3
>
> Attachments: HDFS-14476-branch-2.01.patch, HDFS-14476.00.patch, 
> HDFS-14476.002.patch, HDFS-14476.01.patch, HDFS-14476.branch-3.2.001.patch, 
> datanode-with-patch-14476.png
>
>
> When the directoryScanner has the results of differences between disk and
> in-memory blocks, it will try to run {{checkAndUpdate}} to fix them. However,
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> I have about 6 million blocks on every datanode, and every 6-hour scan finds
> about 25000 abnormal blocks to fix. That leads to a long lock being held on the
> FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of SAS disk latency); that
> will take 250 seconds to finish, meaning all reads and writes on that datanode
> are blocked for roughly 4 minutes.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are
> blocked, and the namenode will see a long lastContact time for this datanode.
> This probably affects all HDFS versions.
> *How to fix:*
> Just as invalidate commands from the namenode are processed with a batch size of
> 1000, fixing these abnormal blocks should be batched too, sleeping 2 seconds
> between batches to allow normal block reads and writes (see the sketch below).
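A minimal sketch of that batching idea; the batch size, sleep, and lock handling follow the description above and are not taken from the committed patch:

{code:java}
import java.util.List;
import java.util.function.Consumer;

final class BatchedFixer {
  private static final int BATCH_SIZE = 1000;
  private static final long SLEEP_MS = 2000;

  /** Fixes diff entries in batches, releasing the lock between batches. */
  static <T> void fixInBatches(Object datasetLock, List<T> diffs,
      Consumer<T> fixOne) throws InterruptedException {
    for (int i = 0; i < diffs.size(); i += BATCH_SIZE) {
      List<T> batch =
          diffs.subList(i, Math.min(i + BATCH_SIZE, diffs.size()));
      synchronized (datasetLock) { // held per batch, not for the whole scan
        batch.forEach(fixOne);
      }
      Thread.sleep(SLEEP_MS); // let queued reads/writes acquire the lock
    }
  }
}
{code}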



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14759) HDFS cat logs an info message

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911919#comment-16911919
 ] 

Hudson commented on HDFS-14759:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17157 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17157/])
HDFS-14759. HDFS cat logs an info message. Contributed by Eric Badger. 
(aengineer: rev 8aaf5e1a14e577a7d8142bc7d49bb94014032afd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java


> HDFS cat logs an info message
> -
>
> Key: HDFS-14759
> URL: https://issues.apache.org/jira/browse/HDFS-14759
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14759.001.patch
>
>
> HDFS-13699 changed a debug log line into an info log line, and this line is
> printed during {{hadoop fs -cat}} operations. This makes it very difficult to
> figure out where the log line ends and where the catted file begins,
> especially when the output is sent to a tool for parsing.
> {noformat}
> [ebadger@foobar bin]$ hadoop fs -cat /foo 2>/dev/null
> 2019-08-20 22:09:45,907 INFO  [main] sasl.SaslDataTransferClient 
> (SaslDataTransferClient.java:checkTrustAndSend(230)) - SASL encryption trust 
> check: localHostTrusted = false, remoteHostTrusted = false
> bar
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1870) ConcurrentModification at PrometheusMetricsSink

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911860#comment-16911860
 ] 

Hudson commented on HDDS-1870:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17156 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17156/])
HADOOP-16496. Apply HDDS-1870 (ConcurrentModification at (aajisaka: rev 
30ce8546f13209e7272617178f3f2f8753a6c3f2)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java


> ConcurrentModification at PrometheusMetricsSink
> ---
>
> Key: HDDS-1870
> URL: https://issues.apache.org/jira/browse/HDDS-1870
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Encountered on {{ozoneperf}} compose env when running low on CPU:
> {code}
> om_1  | java.util.ConcurrentModificationException
> om_1  |   at 
> java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1493)
> om_1  |   at 
> java.base/java.util.HashMap$ValueIterator.next(HashMap.java:1521)
> om_1  |   at 
> org.apache.hadoop.hdds.server.PrometheusMetricsSink.writeMetrics(PrometheusMetricsSink.java:123)
> om_1  |   at 
> org.apache.hadoop.hdds.server.PrometheusServlet.doGet(PrometheusServlet.java:43)
> {code}
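One common way to avoid this failure mode is to back the sink with a concurrent map, so the servlet can iterate while the metrics system keeps publishing updates. A sketch of that idea, not necessarily the committed fix:

{code:java}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class SafeMetricsSink {
  private final Map<String, String> metricLines = new ConcurrentHashMap<>();

  void putMetricLine(String key, String renderedLine) {
    metricLines.put(key, renderedLine); // safe under concurrent readers
  }

  /** Iteration is weakly consistent, so it never throws CME. */
  void writeMetrics(Writer writer) throws IOException {
    for (String line : metricLines.values()) {
      writer.write(line);
    }
  }
}
{code}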



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911824#comment-16911824
 ] 

Hudson commented on HDDS-1965:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17155/])
Revert "HDDS-1965. Compile error due to leftover (xyao: rev 
7653ebdbb21e0f489cb113e6b878e40aa54c3b3a)
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java
HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient (xyao: 
rev 10b4997b42e64ae33b2e26636f449d3dfee6169f)
* (delete) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java


> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14582) Failed to start DN with ArithmeticException when NULL checksum used

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911801#comment-16911801
 ] 

Hudson commented on HDFS-14582:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17154 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17154/])
HDFS-14582. Failed to start DN with ArithmeticException when NULL (weichiu: rev 
3a145e2918b66b5776a22eeffba41fc000611936)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


> Failed to start DN with ArithmeticException when NULL checksum used
> ---
>
> Key: HDFS-14582
> URL: https://issues.apache.org/jira/browse/HDFS-14582
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14582.001.patch
>
>
> {code}
> Caused by: java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(BlockPoolSlice.java:823)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addReplicaToReplicasMap(BlockPoolSlice.java:627)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:702)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice$AddReplicaProcessor.compute(BlockPoolSlice.java:1047)
> at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> {code}
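With the NULL checksum type the per-chunk checksum size is 0, so any chunk-count division has to be guarded. A hedged sketch of such a guard; the committed patch may differ:

{code:java}
final class LengthCheck {
  /** Length of the replica that whole checksum chunks actually cover. */
  static long validDataLength(long dataFileLen, int bytesPerChecksum,
      int checksumSize) {
    if (checksumSize <= 0 || bytesPerChecksum <= 0) {
      // CHECKSUM_NULL: no checksum data exists, so the divisor would be 0;
      // accept the on-disk length as-is instead of dividing.
      return dataFileLen;
    }
    long fullChunks = dataFileLen / bytesPerChecksum; // was "/ by zero"
    return fullChunks * bytesPerChecksum;
  }
}
{code}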



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14311) Multi-threading conflict at layoutVersion when loading block pool storage

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911692#comment-16911692
 ] 

Hudson commented on HDFS-14311:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-14311. Multi-threading conflict at layoutVersion when loading block 
(weichiu: rev 4cb22cd867a9295efc815dc95525b5c3e5960ea6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Multi-threading conflict at layoutVersion when loading block pool storage
> -
>
> Key: HDFS-14311
> URL: https://issues.apache.org/jira/browse/HDFS-14311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Yicong Cai
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14311.1.patch, HDFS-14311.2.patch, 
> HDFS-14311.branch-2.1.patch
>
>
> When a DataNode upgrades from 2.7.3 to 2.9.2, there is a conflict at
> StorageInfo.layoutVersion in the block pool storage loading process.
> It causes this exception:
>  
> {panel:title=exceptions}
> 2019-02-15 10:18:01,357 [13783] - INFO [Thread-33:BlockPoolSliceStorage@395] 
> - Restored 36974 block files from trash before the layout upgrade. These 
> blocks will be moved to the previous directory during the upgrade
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:BlockPoolSliceStorage@226] 
> - Failed to analyze storage directories for block pool 
> BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-15 10:18:01,358 [13784] - WARN [Thread-33:DataStorage@472] - Failed 
> to add storage directory [DISK]file:/mnt/dfs/2/hadoop/hdfs/data/ for block 
> pool BP-1216718839-10.120.232.23-1548736842023
> java.io.IOException: Datanode state: LV = -57 CTime = 0 is newer than the 
> namespace state: LV = -63 CTime = 0
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.doTransition(BlockPoolSliceStorage.java:406)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:177)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:221)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:250)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:460)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:390)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:388)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
>  at java.lang.Thread.run(Thread.java:748) 

[jira] [Commented] (HDFS-13201) Fix prompt message in testPolicyAndStateCantBeNull

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911691#comment-16911691
 ] 

Hudson commented on HDFS-13201:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-13201. Fix prompt message in testPolicyAndStateCantBeNull. (weichiu: rev 
aa6995fde289719e0b300e11568c5e68c36b5d05)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/protocol/TestErasureCodingPolicyInfo.java


> Fix prompt message in testPolicyAndStateCantBeNull
> --
>
> Key: HDFS-13201
> URL: https://issues.apache.org/jira/browse/HDFS-13201
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: chencan
>Assignee: chencan
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13201.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14729) Upgrade Bootstrap and jQuery versions used in HDFS UIs

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911689#comment-16911689
 ] 

Hudson commented on HDFS-14729:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17152 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17152/])
HDFS-14729. Upgrade Bootstrap and jQuery versions used in HDFS UIs. (sunilg: 
rev bd9246232123416201eb8c257b3cd8ab0ad32664)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-editable.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/bootstrap.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.ttf
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/bootstrap-editable.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/npm.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.min.css
* (delete) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.svg
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.svg
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.woff2
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.woff
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.3.1.min.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/js/bootstrap.js
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.eot
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.min.css.map
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/fonts/glyphicons-halflings-regular.eot
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.ttf
* (edit) LICENSE.txt
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/bootstrap.js
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/js/npm.js
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.min.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff2
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/fonts/glyphicons-halflings-regular.woff
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-editable.css
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap.min.css.map
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.min.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap.css
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.3.7/css/bootstrap-theme.css
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1/css/bootstrap-theme.css
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (delete) 

[jira] [Commented] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911134#comment-16911134
 ] 

Hudson commented on HDDS-1610:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17151 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17151/])
HDDS-1610. applyTransaction failure should not be lost on restart. (shashikant: 
rev 62445021d5d57b0d49adcb1bd4365c13532328fc)
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestFreonWithDatanodeFastRestart.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java


> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the container
> should not accept new writes on restart.
> This can occur if:
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions
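A minimal sketch of the invariant this asks for; the field and method names are assumptions, not the committed patch. The idea is to remember the first apply failure and refuse to take a snapshot past it, so a restart cannot silently skip the failed transaction:

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

final class ApplyFailureGuard {
  private final AtomicBoolean applyFailed = new AtomicBoolean(false);
  private volatile long lastAppliedIndex;

  void onApplySuccess(long index) {
    lastAppliedIndex = index;
  }

  void onApplyFailure() {
    applyFailed.set(true);
  }

  /** Refuses to snapshot once an applyTransaction has failed. */
  long takeSnapshot() throws IOException {
    if (applyFailed.get()) {
      throw new IOException("applyTransaction failed; refusing snapshot "
          + "so the failure survives a restart");
    }
    return lastAppliedIndex; // the real code path persists this index
  }
}
{code}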



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1972) Provide example ha proxy with multiple s3 servers back end.

2019-08-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910780#comment-16910780
 ] 

Hudson commented on HDDS-1972:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17150 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17150/])
HDDS-1972. Provide example ha proxy with multiple s3 servers back end. (github: 
rev 4f925afa820b607f3479a310b84f351812542833)
* (add) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-config
* (add) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/docker-compose.yaml
* (add) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/test.sh
* (add) hadoop-ozone/dist/src/main/compose/ozones3-haproxy/.env
* (add) 
hadoop-ozone/dist/src/main/compose/ozones3-haproxy/haproxy-conf/haproxy.cfg


> Provide example ha proxy with multiple s3 servers back end.
> ---
>
> Key: HDDS-1972
> URL: https://issues.apache.org/jira/browse/HDDS-1972
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we provide docker-compose files that start 3 S3 gateway servers,
> with haproxy used to load-balance them.
>  
> For now, all proxy configurations are hardcoded; scaling and automatic
> configuration via environment variables can come as a future improvement. This
> is just a starter example.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13709) Report bad block to NN when transfer block encounter EIO exception

2019-08-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910744#comment-16910744
 ] 

Hudson commented on HDFS-13709:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17149 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17149/])
HDFS-13709. Report bad block to NN when transfer block encounter EIO (weichiu: 
rev 360a96f342f3c8cb8246f011abb9bcb0b6ef3eaa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskFileCorruptException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Report bad block to NN when transfer block encounter EIO exception
> --
>
> Key: HDFS-13709
> URL: https://issues.apache.org/jira/browse/HDFS-13709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-13709.002.patch, HDFS-13709.003.patch, 
> HDFS-13709.004.patch, HDFS-13709.005.patch, HDFS-13709.patch
>
>
> In our online cluster, the BlockPoolSliceScanner is turned off, and sometimes a
> bad disk track may cause data loss.
> For example, say there are 3 replicas on 3 machines A/B/C. If a bad track occurs
> in A's replica data, and someday B and C crash at the same time, the NN will try
> to replicate data from A but fail. The block is now corrupt, but no one knows,
> because the NN thinks there is at least 1 healthy replica and keeps trying to
> replicate it.
> When reading a replica that has data on a bad track, the OS returns an EIO
> error. If the DN reports the bad block as soon as it gets an EIO, we can find
> this case ASAP and try to avoid data loss.
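A JDK-only sketch of the reporting idea; the EIO detection and the report callback are assumptions, while the real patch hooks the DataNode transfer path:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class TransferWithBadBlockReport {
  /** Copies a replica, reporting it as bad if the disk returns an EIO. */
  static void transfer(InputStream replica, OutputStream out,
      Runnable reportBadBlock) throws IOException {
    byte[] buf = new byte[64 * 1024];
    try {
      int n;
      while ((n = replica.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    } catch (IOException e) {
      // Linux surfaces a bad track as EIO ("Input/output error").
      if (String.valueOf(e.getMessage()).contains("Input/output error")) {
        reportBadBlock.run(); // tell the NN before failing the transfer
      }
      throw e;
    }
  }
}
{code}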



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14746) Trivial test code update after HDFS-14687

2019-08-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910639#comment-16910639
 ] 

Hudson commented on HDFS-14746:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17148 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17148/])
HDFS-14746. Trivial test code update after HDFS-14687. Contributed by (weichiu: 
rev abae6ff2a2760500b7e7d4414a43069ed4a45930)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingDataNodeMessages.java


> Trivial test code update after HDFS-14687
> -
>
> Key: HDFS-14746
> URL: https://issues.apache.org/jira/browse/HDFS-14746
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14746.001.patch
>
>
> Instead of getting the erasure coding policy instance by its position in the
> policy list, the test should look it up by a constant policy ID.
> Change
> {code}
> ErasureCodingPolicy ecPolicy = SystemErasureCodingPolicies.getPolicies()
> .get(3);
> {code}
> to
> {code}
> ErasureCodingPolicy ecPolicy = 
> SystemErasureCodingPolicies.getByID(XOR_2_1_POLICY_ID);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910640#comment-16910640
 ] 

Hudson commented on HDFS-14687:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17148 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17148/])
HDFS-14746. Trivial test code update after HDFS-14687. Contributed by (weichiu: 
rev abae6ff2a2760500b7e7d4414a43069ed4a45930)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingDataNodeMessages.java


> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14687.001.patch, HDFS-14687.002.patch, 
> HDFS-14687.003.patch, HDFS-14687.004.patch
>
>
> When a huge number of EC files are being written and the SBN is restarted, it
> never comes out of safe mode and the required block count keeps increasing.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


