[jira] [Updated] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-817:

Description: 
# Disk usage HDD and SSD
 # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT for 
this implementation which already exists)

  was:
# Disk usage HDD and SSD
 # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT for 
this implementation)


> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-817.00.patch
>
>
> # Disk usage HDD and SSD
>  # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT 
> for this implementation which already exists)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-9:
--
Attachment: HDDS-9-HDDS-4.003.patch

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-284) CRC for ChunksData

2018-11-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-284:

Description: 
This Jira is to add CRC for chunks data.
 Right now a Chunk Info structure looks like this:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional string checksum = 4;
  repeated KeyValue metadata = 5;
}

Proposal is to change ChunkInfo structure as below:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  repeated KeyValue metadata = 4;
  required ChecksumData checksumData = 5;
}

The ChecksumData structure would be as follows:

message ChecksumData {
  required ChecksumType type = 1;
  required uint32 bytesPerChecksum = 2;
  repeated bytes checksums = 3;
}

Instead of changing disk format, we put the checksum into chunkInfo.
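
A minimal Java sketch of how per-segment checksums could be computed under
this layout, assuming CRC32 as the ChecksumType (illustrative only, not the
actual patch):

import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

class ChunkChecksums {
  // Split the chunk data into bytesPerChecksum-sized segments and checksum
  // each one; the results map to the "repeated bytes checksums" field above.
  static List<Long> computeChecksums(byte[] data, int bytesPerChecksum) {
    List<Long> checksums = new ArrayList<>();
    for (int off = 0; off < data.length; off += bytesPerChecksum) {
      int len = Math.min(bytesPerChecksum, data.length - off);
      CRC32 crc = new CRC32();
      crc.update(data, off, len);
      checksums.add(crc.getValue());
    }
    return checksums;
  }
}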

  was:
This Jira is to add CRC for chunks data.
Right now a Chunk Info structure looks like this:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional string checksum = 4;
  repeated KeyValue metadata = 5;
}

Proposal is to change ChunkInfo structure as below:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional bytes checksum = 4;
  optional CRCType checksumType = 5;
  optional string legacyMetadata = 6;
  optional string legacyData = 7;
  repeated KeyValue metadata = 8;
}

Instead of changing disk format, we put the checksum, checksumtype and legacy
data fields into chunkInfo.


> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.01.patch, HDDS-284.02.patch, 
> HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and Error Detection 
> for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  Right now a Chunk Info structure looks like this:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>
> Proposal is to change ChunkInfo structure as below:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   repeated KeyValue metadata = 4;
>   required ChecksumData checksumData = 5;
> }
>
> The ChecksumData structure would be as follows:
>
> message ChecksumData {
>   required ChecksumType type = 1;
>   required uint32 bytesPerChecksum = 2;
>   repeated bytes checksums = 3;
> }
>
> Instead of changing disk format, we put the checksum into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-857) Adding more DN permission info into the block token identifier

2018-11-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-857:
---

 Summary: Adding more DN permission info into the block token 
identifier
 Key: HDDS-857
 URL: https://issues.apache.org/jira/browse/HDDS-857
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


Based on comments on HDDS-9, we want to include more DN permission info, such 
as the pipeline ID, in the block token identifier in the future so that the DN 
can enforce permission/rule checks on client I/O. (An illustrative sketch of 
such fields is below.)
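
An illustrative sketch of the identifier fields being discussed; all names
here are assumptions, not a final design:

class OzoneBlockTokenFields {
  String ownerId;     // existing: principal the token was issued to
  String blockId;     // existing: block covered by the token
  long expiryDate;    // existing: token lifetime
  long maxLength;     // from HDDS-9 v3: bound for consistent reads
  String pipelineId;  // proposed here: restrict the token to one pipeline
}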



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2018-11-19 Thread Wei Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-13762:

Attachment: HDFS-13762.002.patch

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Wei Zhou
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, SCMCacheDesign-2018-11-08.pdf, SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that keeps its data 
> content across power failures and power cycles. A non-volatile storage class 
> memory device usually offers access speed close to a memory DIMM at a lower 
> cost than memory, so today it is typically used as a supplement to memory to 
> hold long-term persistent data, such as cached data.
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory is non-volatile, we do not currently 
> use its persistence, in order to keep the same behavior as the current 
> read-only cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692411#comment-16692411
 ] 

Ajay Kumar commented on HDDS-808:
-

[~dineshchitlangia] My comment was about the enum classes. We don't need those 
String constructors; we can keep getAction() and return the corresponding 
string value of the enum constant (see the sketch below). I am also fine with 
making AuditAction a marker interface.
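
A minimal sketch of the suggested shape (constant names are illustrative, not
the full set from the actual classes):

interface AuditAction {
  // Could alternatively be reduced to a marker interface with no method.
  String getAction();
}

enum OMAction implements AuditAction {
  CREATE_VOLUME,
  CREATE_BUCKET,
  ALLOCATE_KEY;

  // No String constructor needed: the audit string is the constant's own name.
  @Override
  public String getAction() {
    return name();
  }
}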

> Simplify OMAction and DNAction classes used for AuditLogging
> 
>
> Key: HDDS-808
> URL: https://issues.apache.org/jira/browse/HDDS-808
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
>
> While reviewing HDDS-120, [~ajayydv] suggested to simplify these class by 
> removing the constructor and the getAction().
> Refer review comment: 
> https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692428#comment-16692428
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSPermission |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | 

[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-19 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692444#comment-16692444
 ] 

Jitendra Nath Pandey commented on HDDS-9:
-

+1 for the latest patch.

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-284) CRC for ChunksData

2018-11-19 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-284:

Description: 
This Jira is to add CRC for chunks data.
Right now a Chunk Info structure looks like this:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional string checksum = 4;
  repeated KeyValue metadata = 5;
}

Proposal is to change ChunkInfo structure as below:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional bytes checksum = 4;
  optional CRCType checksumType = 5;
  optional string legacyMetadata = 6;
  optional string legacyData = 7;
  repeated KeyValue metadata = 8;
}

Instead of changing disk format, we put the checksum, checksumtype and legacy
data fields into chunkInfo.

  was:
This Jira is to add CRC for chunks data.
Right now a Chunk Info structure looks like this:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional string checksum = 4;
  repeated KeyValue metadata = 5;
}

Proposal is to change ChunkInfo structure as below:

message ChunkInfo {
  required string chunkName = 1;
  required uint64 offset = 2;
  required uint64 len = 3;
  optional bytes checksum = 4;
  optional CRCType checksumType = 5;
  optional string legacyMetadata = 6;
  optional string legacyData = 7;
  repeated KeyValue metadata = 8;
}

Instead of changing disk format, we put the checksum, checksumtype and legacy
data fields into chunkInfo.


> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.01.patch, HDDS-284.02.patch, 
> HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and Error Detection 
> for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
> Right now a Chunk Info structure looks like this:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>
> Proposal is to change ChunkInfo structure as below:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional bytes checksum = 4;
>   optional CRCType checksumType = 5;
>   optional string legacyMetadata = 6;
>   optional string legacyData = 7;
>   repeated KeyValue metadata = 8;
> }
>
> Instead of changing disk format, we put the checksum, checksumtype and
> legacy data fields into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692545#comment-16692545
 ] 

Hadoop QA commented on HDDS-808:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948782/HDDS-808.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux daf24c048f5e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b5d7b29 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1765/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
| unit | 

[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692583#comment-16692583
 ] 

Ajay Kumar commented on HDDS-9:
---

[~xyao] thanks for working on this. 
XceiverClientGrpc 
* Shall we make X509Certificate a class field instead of initializing it in 
{{verify}}?
* L65: verifySignature returns false if the signature check fails, but that 
return value is not handled currently. We should throw an exception when 
signature verification fails.
* L71: shall we rethrow the caught exception? (A rough sketch of these last 
two points follows below.)
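
A minimal Java sketch of that handling, with hypothetical method and class
names (not the actual patch):

import java.security.GeneralSecurityException;

abstract class BlockTokenVerifier {
  // Assumed helper from the patch; returns false when the signature is bad.
  abstract boolean verifySignature(byte[] tokenId, byte[] signature)
      throws GeneralSecurityException;

  void verify(byte[] tokenId, byte[] signature) {
    try {
      // A false return must fail the call instead of being silently ignored.
      if (!verifySignature(tokenId, signature)) {
        throw new SecurityException("Block token signature verification failed");
      }
    } catch (GeneralSecurityException e) {
      // Rethrow the caught exception rather than swallowing it.
      throw new SecurityException("Error verifying block token", e);
    }
  }
}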


> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-817:

Description: 
# Disk usage HDD and SSD
 # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT for 
this implementation)

  was:
# Disk usage HDD and SSD
 # Total no of datanodes in cluster (Running, Unhealthy, Failed)


> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-817.00.patch
>
>
> # Disk usage HDD and SSD
>  # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT 
> for this implementation)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692276#comment-16692276
 ] 

Bharat Viswanadham edited comment on HDDS-816 at 11/19/18 9:03 PM:
---

Thank you [~elek] for the review.
{quote}I am not sure if we can shutdown the whole metrics system from one 
simple metric...
{quote}
This unregister is called during OM stop(); that is why it is done there. I 
have removed the shutdown() in OmMetrics.java in patch v07. (A short sketch of 
the difference is below.)
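
A short sketch of the distinction, assuming the Hadoop metrics2 API and an
assumed SOURCE_NAME constant (OmMetrics internals are not shown in this
thread):

import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

final class OmMetricsLifecycle {
  private static final String SOURCE_NAME = "OMMetrics"; // assumed name

  // Called from OM stop(): remove only this source and leave the shared
  // metrics system running for other components in the same process.
  static void unRegister() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    ms.unregisterSource(SOURCE_NAME);
    // DefaultMetricsSystem.shutdown() would stop the whole system, which is
    // what the review asked to avoid.
  }
}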

 


was (Author: bharatviswa):
{quote}Thank You [~elek] for review.

I am not sure if we can shutdown the whole metrics system from one simple 
metric...
{quote}
 

This unregister is called during OM stop(), that is the reason for doing it. I 
have removed the shutdown() in OmMetrics.java in patch v07.

 

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> Metrics for number of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2018-11-19 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692378#comment-16692378
 ] 

Siyao Meng commented on HDFS-13101:
---

[~arpitagarwal] I'm working on the root cause, but I have no conclusions so 
far. I will definitely post a patch once I fix it.

> Yet another fsimage corruption related to snapshot
> --
>
> Key: HDFS-13101
> URL: https://issues.apache.org/jira/browse/HDFS-13101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>
> Lately we saw case similar to HDFS-9406, even though HDFS-9406 fix is 
> present, so it's likely another case not covered by the fix. We are currently 
> trying to collect good fsimage + editlogs to replay to reproduce it and 
> investigate. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-19 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692434#comment-16692434
 ] 

Xiaoyu Yao commented on HDDS-9:
---

Uploaded a new patch based on an offline discussion. Delta in v3: added 
maxLength to the block token identifier so that the DN can enforce consistent 
reads with the block token.

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-808:
---
Attachment: HDDS-808.001.patch
Status: Patch Available  (was: Open)

> Simplify OMAction and DNAction classes used for AuditLogging
> 
>
> Key: HDDS-808
> URL: https://issues.apache.org/jira/browse/HDDS-808
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-808.001.patch
>
>
> While reviewing HDDS-120, [~ajayydv] suggested to simplify these class by 
> removing the constructor and the getAction().
> Refer review comment: 
> https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692319#comment-16692319
 ] 

Bharat Viswanadham commented on HDDS-817:
-

Thank you [~linyiqun] for the review.

I have addressed all of your review comments in patch v01.

> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-817.00.patch, HDDS-817.01.patch
>
>
> # Disk usage HDD and SSD
>  # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT 
> for this implementation which already exists)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692446#comment-16692446
 ] 

Hadoop QA commented on HDDS-817:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 38m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 13s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-817 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948776/HDDS-817.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fa6bb33b6389 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HDFS-13101) Yet another fsimage corruption related to snapshot

2018-11-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692381#comment-16692381
 ] 

Arpit Agarwal commented on HDFS-13101:
--

Thanks for the update [~smeng]. Do you have any further details to share beyond 
delete trash + delete snapshots?

We can also try to repro and debug it.

> Yet another fsimage corruption related to snapshot
> --
>
> Key: HDFS-13101
> URL: https://issues.apache.org/jira/browse/HDFS-13101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>
> Lately we saw case similar to HDFS-9406, even though HDFS-9406 fix is 
> present, so it's likely another case not covered by the fix. We are currently 
> trying to collect good fsimage + editlogs to replay to reproduce it and 
> investigate. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-115) GRPC: Support secure gRPC endpoint with mTLS

2018-11-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-115 started by Xiaoyu Yao.
---
> GRPC: Support secure gRPC endpoint with mTLS 
> -
>
> Key: HDDS-115
> URL: https://issues.apache.org/jira/browse/HDDS-115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-856) Add Channelz support for Ozone GRPC endpoints

2018-11-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-856:
---

 Summary: Add Channelz support for Ozone GRPC endpoints
 Key: HDDS-856
 URL: https://issues.apache.org/jira/browse/HDDS-856
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Xiaoyu Yao


Based on the description here: 
[https://grpc.io/blog/a_short_introduction_to_channelz], Channelz can provide 
a lot of runtime information for live troubleshooting of Ozone gRPC endpoints.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692522#comment-16692522
 ] 

Hadoop QA commented on HDDS-9:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
29s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
56s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
12s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
55s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 19m 
56s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
32s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} 

[jira] [Commented] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692585#comment-16692585
 ] 

Dinesh Chitlangia commented on HDDS-808:


The test failure is unrelated to the patch.

> Simplify OMAction and DNAction classes used for AuditLogging
> 
>
> Key: HDDS-808
> URL: https://issues.apache.org/jira/browse/HDDS-808
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
> Attachments: HDDS-808.001.patch
>
>
> While reviewing HDDS-120, [~ajayydv] suggested to simplify these class by 
> removing the constructor and the getAction().
> Refer review comment: 
> https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-817:

Attachment: HDDS-817.01.patch

> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-817.00.patch, HDDS-817.01.patch
>
>
> # Disk usage HDD and SSD
>  # Total no of datanodes in cluster (Running, Unhealthy, Failed) (Add a UT 
> for this implementation which already exists)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692398#comment-16692398
 ] 

Hadoop QA commented on HDDS-816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | 

[jira] [Commented] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692452#comment-16692452
 ] 

Dinesh Chitlangia commented on HDDS-808:


[~ajayydv] - Gotcha. Attached patch 001 for your review.

> Simplify OMAction and DNAction classes used for AuditLogging
> 
>
> Key: HDDS-808
> URL: https://issues.apache.org/jira/browse/HDDS-808
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
>
> While reviewing HDDS-120, [~ajayydv] suggested to simplify these class by 
> removing the constructor and the getAction().
> Refer review comment: 
> https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-284) CRC for ChunksData

2018-11-19 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692490#comment-16692490
 ] 

Hanisha Koneru commented on HDDS-284:
-

Thank you for the review [~shashikant].
{quote}2. With the patch it always seems to be computing the checksum in 
writeChunkToContainer call. With HTTP headers, if the checksum is already 
available in a Rest call, we might not need to recompute it. Are we going to 
address such cases later?{quote}
We can add support for this later. If the checksum is already provided in the 
HTTP header, we can use it and skip the computation. Let's open a new Jira to 
track this?
{quote}3. ChunkManagerImpl#writeChunk: while handling overwrites of a chunk 
file, we can just verify the checksum if it is already present and return 
accordingly without actually doing I/O (addressed as a TODO in the code). We 
can also add the checksum verification here, though these can be addressed in 
a separate patch as well.{quote}
Yes, let's address this in a separate patch as well.
{quote}4. ChunkInputStream.java : L213-215 : why is this change specifically 
required? Is it just for making the added tests work?{quote}
I added this to propagate the actual exception to the client. Otherwise, we 
just get "Unexpected OzoneException" without the actual reason for the 
failure. (A small sketch of read-side verification follows below.)
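
A small sketch of read-side verification that surfaces a specific cause,
assuming CRC32 segments as in the proposed ChecksumData (names are
illustrative):

import java.io.IOException;
import java.util.zip.CRC32;

final class ChecksumVerifier {
  static void verify(byte[] data, int bytesPerChecksum, long[] expected)
      throws IOException {
    int i = 0;
    for (int off = 0; off < data.length; off += bytesPerChecksum, i++) {
      int len = Math.min(bytesPerChecksum, data.length - off);
      CRC32 crc = new CRC32();
      crc.update(data, off, len);
      if (crc.getValue() != expected[i]) {
        // Report the concrete failure instead of a generic
        // "Unexpected OzoneException".
        throw new IOException("Checksum mismatch in segment " + i
            + " at offset " + off);
      }
    }
  }
}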

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.01.patch, HDDS-284.02.patch, 
> HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and Error Detection 
> for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
> Right now a Chunk Info structure looks like this:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>
> Proposal is to change ChunkInfo structure as below:
>
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional bytes checksum = 4;
>   optional CRCType checksumType = 5;
>   optional string legacyMetadata = 6;
>   optional string legacyData = 7;
>   repeated KeyValue metadata = 8;
> }
>
> Instead of changing disk format, we put the checksum, checksumtype and
> legacy data fields into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692668#comment-16692668
 ] 

Hadoop QA commented on HDFS-13762:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
4s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m  4s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  4s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 35s{color} | {color:orange} root: The patch generated 16 new + 784 unchanged 
- 3 fixed = 800 total (was 787) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692675#comment-16692675
 ] 

Shashikant Banerjee commented on HDDS-835:
--

Thanks [~msingh] for the review.
{code:java}
ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well{code}
Since OZONE_SCM_CHUNK_MAX_SIZE is a constant, I moved it to OzoneConsts.java.
{code:java}
TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
{code}
The block size is already set to the required value when the MiniOzoneCluster 
instance is created, so there is no need to set it here.

The rest of the review comments are addressed.
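
For reference, a small sketch of the change being reviewed: reading a buffer size 
via Configuration#getStorageSize, which accepts unit suffixes, instead of a raw 
long. The key name below is illustrative.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

public class StorageSizeConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Old style: the configured value must be a plain number of bytes.
    long flushSizeBytes =
        conf.getLong("ozone.client.stream.buffer.flush.size", 64L * 1024 * 1024);

    // getStorageSize style: the value may carry a unit suffix such as "64MB"
    // and is converted to the requested StorageUnit.
    double flushSizeMb =
        conf.getStorageSize("ozone.client.stream.buffer.flush.size", "64MB", StorageUnit.MB);

    System.out.println(flushSizeBytes + " bytes vs " + flushSizeMb + " MB");
  }
}
{code}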

 

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-835:
-
Attachment: HDDS-835.001.patch

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692730#comment-16692730
 ] 

Hudson commented on HDDS-817:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15469 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15469/])
HDDS-817. Create SCM metrics for disk from node report. Contributed by (yqlin: 
rev d0cc679441da436d7004b38d0eb83af3891e6e09)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMNodeManagerMXBean.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
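
As a rough sketch of the MXBean pattern these files follow (the exact method names 
in the patch may differ), the node report metrics are exposed over JMX via an 
interface like this:
{code:java}
import java.util.Map;

/** Hypothetical JMX view of SCM node state, in the style of NodeManagerMXBean. */
public interface ExampleNodeManagerMXBean {

  /** Node state (e.g. HEALTHY, STALE, DEAD) to number of datanodes in that state. */
  Map<String, Integer> getNodeCount();

  /** Disk metric name (e.g. DiskCapacity, DiskUsed, DiskRemaining) to bytes. */
  Map<String, Long> getNodeInfo();
}
{code}
An implementation registered with the JMX bean server is then visible to tests 
such as TestSCMNodeManagerMXBean and to external monitoring.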


> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-817.00.patch, HDDS-817.01.patch
>
>
> # Disk usage (HDD and SSD)
>  # Total number of datanodes in the cluster (Running, Unhealthy, Failed) (add a UT 
> for this implementation, which already exists)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-19 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692596#comment-16692596
 ] 

Yiqun Lin edited comment on HDFS-14082 at 11/20/18 7:28 AM:


Hi [~elgoiri], would you mind addressing my previous comments? I suppose you 
may have missed them. Here I think we only need to check whether the whole 
set of files is returned or not.
{quote}I mean we make the following change and do not use a specific number.
 // Test the behavior when everything is fine
 + FileSystem fs = getRouterFileSystem();
 + FileStatus[] files = fs.listStatus(new Path("/"));
 int originalCount = files.length;
 
 // simplify the assertion logic
 assertNotEquals("...", originalCount, files.length);
 One more place I missed before:
 fail("I should throw an exception");
 Can we reword this to "listStatus call should throw an exception"?
{quote}
Others look good to me.


was (Author: linyiqun):
Hi [~elgoiri], would you mind addressing my previous comments? I suppose you 
may have missed them.
{quote}I mean we make the following change and do not use a specific number.
 // Test the behavior when everything is fine
 + FileSystem fs = getRouterFileSystem();
 + FileStatus[] files = fs.listStatus(new Path("/"));
 int originalCount = files.length;
 
 // simplify the assertion logic
 assertNotEquals("...", originalCount, files.length);
 One more place I missed before:
 fail("I should throw an exception");
 Can we reword this to "listStatus call should throw an exception"?
{quote}
Others look good to me.

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14082-HDFS-13891.002.patch, HDFS-14082.000.patch, 
> HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, we succeed operations like 
> {{getListing()}}. We should add an option to fail the operation if one of the 
> subclusters is unavailable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-372) There are two buffer copies in ChunkOutputStream

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-372:
--
Status: In Progress  (was: Patch Available)

> There are two buffer copies in ChunkOutputStream
> 
>
> Key: HDDS-372
> URL: https://issues.apache.org/jira/browse/HDDS-372
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: HDDS-372.20180829.patch
>
>
> Currently, there are two buffer copies in ChunkOutputStream
> # from byte[] to ByteBuffer, and
> # from ByteBuffer to ByteString.
> We should eliminate the ByteBuffer in the middle.
> For zero copy io, we should support WritableByteChannel instead of 
> OutputStream.  It won't be done in this JIRA.
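
To make the copies concrete, a minimal sketch against the protobuf ByteString API, 
assuming protobuf 3.x for UnsafeByteOperations (which trades the copy for a promise 
not to mutate the source afterwards):
{code:java}
import java.nio.ByteBuffer;
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

public class ByteStringCopySketch {
  public static void main(String[] args) {
    byte[] data = {1, 2, 3, 4};

    // Today: two copies, byte[] -> ByteBuffer -> ByteString.
    ByteBuffer buffer = ByteBuffer.allocate(data.length);
    buffer.put(data);
    buffer.flip();
    ByteString twoCopies = ByteString.copyFrom(buffer);

    // Eliminating the middle ByteBuffer: one copy, straight from the byte[].
    ByteString oneCopy = ByteString.copyFrom(data);

    // Zero copy: wrap the array without copying; 'data' must not be mutated later.
    ByteString zeroCopy = UnsafeByteOperations.unsafeWrap(data);

    System.out.println(twoCopies.equals(oneCopy) && oneCopy.equals(zeroCopy));  // true
  }
}
{code}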



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692621#comment-16692621
 ] 

Ranith Sardar commented on HDFS-14089:
--

Error details in the log:

2018-11-19 14:52:54,270 ERROR 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
Cannot fetch HA status for hacluster-nn1:vm1:65110: DestHost:destPort *:65110 , 
LocalHost:localPort */*.*.*.*:0. Failed on local exception: 
java.io.IOException: Couldn't set up IO streams: 
java.lang.IllegalArgumentException: Failed to specify server's Kerberos 
principal name
java.io.IOException: DestHost:destPort *:65110 , LocalHost:localPort 
*/**.*.*.*:0. Failed on local exception: java.io.IOException: Couldn't set up 
IO streams: java.lang.IllegalArgumentException: Failed to specify server's 
Kerberos principal name
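
For context, DFSHAAdmin works around this by copying the NameNode principal into 
the generic service-user key before building the RPC proxy. A hedged sketch of the 
same idea for the heartbeat service (the actual wiring in the patch may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public final class HeartbeatSecurityConfSketch {

  /** Tell the RPC client which Kerberos principal the NameNode runs as. */
  static Configuration addSecurityConfiguration(Configuration conf) {
    Configuration copy = new Configuration(conf);
    copy.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    return copy;
  }
}
{code}
Without this, the IPC client cannot resolve the server principal, which matches the 
"Failed to specify server's Kerberos principal name" in the log above.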

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same configuration 
> for NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-19 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692737#comment-16692737
 ] 

Surendra Singh Lilhore edited comment on HDFS-14085 at 11/20/18 6:53 AM:
-

{quote}Given that, I would propose to just show those folders as r-xr-xr-x.
{quote}
What if the destination folder has "rwx------" permission and the permission in 
the router mount table is "r-xr-xr-x"?

A user from the directory owner's group will get a permission denied exception, 
and he may get confused when he executes "dfs -ls".


was (Author: surendrasingh):
{quote}Given that, I would propose to just show those folders as r-xr-xr-x.
{quote}
What if the destination folder has "rwx------" permission and the permission in 
the router mount table is "r-xr-xr-x"?

A user from the directory owner's group will get a permission denied exception, 
and he may get confused when he executes "dfs -ls".

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission displayed 
> is the default permission (777) and the owner and group info is the same as that 
> of the user calling it, which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-19 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692596#comment-16692596
 ] 

Yiqun Lin commented on HDFS-14082:
--

Hi [~elgoiri], would you mind addressing my previous comments? I suppose you 
may have missed them.
{quote}I mean we make the following change and do not use a specific number.
 // Test the behavior when everything is fine
 + FileSystem fs = getRouterFileSystem();
 + FileStatus[] files = fs.listStatus(new Path("/"));
 int originalCount = files.length;
 
 // simplify the assertion logic
 assertNotEquals("...", originalCount, files.length);
 One more place I missed before:
 fail("I should throw an exception");
 Can we reword this to "listStatus call should throw an exception"?
{quote}
Others look good to me.
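
A compilable version of the suggested refactor might look like the following; the 
getRouterFileSystem() helper and the subcluster shutdown step are assumed from the 
existing test class:
{code:java}
import static org.junit.Assert.assertNotEquals;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class RouterListingSketchTest {

  // Assumed helper from the existing RBF test class.
  private FileSystem getRouterFileSystem() {
    throw new UnsupportedOperationException("provided by the real test");
  }

  @Test
  public void testListingShrinksWhenSubclusterDown() throws Exception {
    // Baseline when everything is fine: remember the count, no magic numbers.
    FileSystem fs = getRouterFileSystem();
    int originalCount = fs.listStatus(new Path("/")).length;

    // ... take one subcluster down here ...

    FileStatus[] files = fs.listStatus(new Path("/"));
    assertNotEquals("listStatus call should not return the full listing",
        originalCount, files.length);
  }
}
{code}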

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14082-HDFS-13891.002.patch, HDFS-14082.000.patch, 
> HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, we succeed operations like 
> {{getListing()}}. We should add an option to fail the operation if one of the 
> subclusters is unavailable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-817) Create SCM metrics for disk from node report

2018-11-19 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-817:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

LGTM, +1.

Committed to trunk with checkstyle issues fixed. Thanks [~bharatviswa] for the 
contribution!

> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-817.00.patch, HDDS-817.01.patch
>
>
> # Disk usage (HDD and SSD)
>  # Total number of datanodes in the cluster (Running, Unhealthy, Failed) (add a UT 
> for this implementation, which already exists)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692675#comment-16692675
 ] 

Shashikant Banerjee edited comment on HDDS-835 at 11/20/18 5:02 AM:


Thanks [~msingh] for the review.
{code:java}
ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well{code}
Since OZONE_SCM_CHUNK_MAX_SIZE is a constant, I moved it to OzoneConsts.java.
{code:java}
TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
{code}
The block size is already set to the required value when the MiniOzoneCluster 
instance is created, so there is no need to set it here.

The rest of the review comments are addressed.

 


was (Author: shashikant):
Thanks [~msingh] for the review.
{code:java}
ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well{code}
Since OZONE_SCM_CHUNK_MAX_SIZE is a constant, I moved it to OzoneConsts.java.
{code:java}
TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
{code}
The block size is already set to the required value when the MiniOzoneCluster 
instance is created, so there is no need to set it here.

The rest of the review comments are addressed.

 

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692752#comment-16692752
 ] 

Hadoop QA commented on HDDS-835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 16s{color} | {color:orange} root: The patch generated 2 new + 4 unchanged - 
0 fixed = 6 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-19 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692737#comment-16692737
 ] 

Surendra Singh Lilhore commented on HDFS-14085:
---

{quote}Given that, I would propose to just show those folders as r-xr-xr-x.
{quote}
What if the destination folder has "rwx------" permission and the permission in 
the router mount table is "r-xr-xr-x"?

A user from the directory owner's group will get a permission denied exception, 
and he may get confused when he executes "dfs -ls".
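
To pin down the two values being discussed, a tiny sketch in FsPermission terms 
(illustrative only):
{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

public class MountPermissionSketch {
  public static void main(String[] args) {
    FsPermission shownByRouter = new FsPermission((short) 0555);  // r-xr-xr-x
    FsPermission actualOnDest = new FsPermission((short) 0700);   // rwx------
    // ls on / would display 0555 even though the destination enforces 0700,
    // which is the confusing case described above.
    System.out.println(shownByRouter + " vs " + actualOnDest);
  }
}
{code}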

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission displayed 
> is the default permission (777) and the owner and group info is the same as that 
> of the user calling it, which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692016#comment-16692016
 ] 

Shashikant Banerjee commented on HDDS-854:
--

[~nandakumar131], I will take care of this.

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart.
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}
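
For context, the wait that times out is the standard GenericTestUtils.waitFor 
polling loop. A self-contained sketch of that pattern, with a simplified readiness 
predicate in place of the real datanode check:
{code:java}
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForSketch {
  public static void main(String[] args) throws TimeoutException, InterruptedException {
    AtomicInteger readyNodes = new AtomicInteger(0);
    new Thread(() -> readyNodes.set(3)).start();

    // Poll every second, give up after 60s; waitForClusterToBeReady fails with
    // a TimeoutException of exactly this shape when the datanode never re-registers.
    GenericTestUtils.waitFor(() -> readyNodes.get() == 3, 1000, 60_000);
    System.out.println("cluster ready");
  }
}
{code}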



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692017#comment-16692017
 ] 

Dinesh Chitlangia commented on HDDS-849:


The failure is unrelated to the patch.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
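
As a generic illustration of the failure mode (not the actual fix in 
HDDS-849.001.patch, which only adjusts the test setup), the NPE pattern is an 
audit call dereferencing state that a hand-rolled test double never initialized:
{code:java}
// Names here are hypothetical.
public class AuditNpeSketch {

  static final class AuditLogger {
    private final java.io.PrintStream sink;  // stays null under an incomplete test setup

    AuditLogger(java.io.PrintStream sink) {
      this.sink = sink;
    }

    void logWriteFailure(String msg) {
      sink.println("AUDIT FAILURE: " + msg);  // NullPointerException when sink == null
    }
  }

  public static void main(String[] args) {
    AuditLogger logger = new AuditLogger(null);        // mis-initialized, as in the test
    logger.logWriteFailure("close INVALID_CONTAINER"); // throws NullPointerException
  }
}
{code}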



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-849:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692026#comment-16692026
 ] 

Nanda kumar commented on HDDS-854:
--

Thanks [~shashikant]!

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart.
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692036#comment-16692036
 ] 

Dinesh Chitlangia commented on HDDS-849:


Thanks [~nandakumar131] for review and commit.

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-849:
-
Summary: Fix NPE in TestKeyValueHandler because of audit log write  (was: 
fix NPE in TestKeyValueHandler because of audit log write)

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692025#comment-16692025
 ] 

Nanda kumar commented on HDDS-849:
--

Thanks [~dineshchitlangia] for the contribution and to [~msingh] for reporting 
this. I committed it to trunk.

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692043#comment-16692043
 ] 

Hudson commented on HDDS-849:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15463 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15463/])
HDDS-849. Fix NPE in TestKeyValueHandler because of audit log write. (nanda: 
rev e7438a1b38ff1d2bb25aa9d849a227c6f354143b)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java


> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691974#comment-16691974
 ] 

Hadoop QA commented on HDFS-13960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  4s{color} | {color:orange} root: The patch generated 1 new + 203 unchanged 
- 0 fixed = 204 total (was 203) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCLI |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948720/HDFS-13960.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 879d71014b0d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Assigned] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-854:


Assignee: Shashikant Banerjee  (was: Nanda kumar)

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart:
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692029#comment-16692029
 ] 

Hadoop QA commented on HDDS-853:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948738/HDDS-853.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 33221380922e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/testReport/ |
| Max. process+thread count | 417 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDDS-808) Simplify OMAction and DNAction classes used for AuditLogging

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692182#comment-16692182
 ] 

Dinesh Chitlangia commented on HDDS-808:


Currently, the interface AuditAction is implemented by OMAction and DNAction, 
which define the respective actions.
{code:java}
public interface AuditAction {
  /**
   * Implementation must override.
   * @return String
   */
  String getAction();
}
{code}
As proposed in this jira, if we are to remove {{getAction()}}, then the 
interface itself becomes pointless and we might as well remove this interface 
altogether.
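
For illustration, a rough sketch (hypothetical code, not the committed change) 
of what {{OMAction}} could look like if we drop the interface and rely on 
{{Enum#name()}} at audit call sites:
{code:java}
// Hypothetical simplification: OMAction as a plain enum with no
// constructor and no getAction(); the constant name itself is the action.
public enum OMAction {
  CREATE_VOLUME,
  CREATE_BUCKET,
  DELETE_KEY
}

// An audit call site would then use the enum constant's name directly, e.g.:
// AUDIT.logWriteSuccess(OMAction.CREATE_VOLUME.name(), auditMap);
{code}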

I am fine with removing this interface.
[~ajayydv] , [~anu] - thoughts?

> Simplify OMAction and DNAction classes used for AuditLogging
> 
>
> Key: HDDS-808
> URL: https://issues.apache.org/jira/browse/HDDS-808
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: logging
>
> While reviewing HDDS-120, [~ajayydv] suggested to simplify these class by 
> removing the constructor and the getAction().
> Refer review comment: 
> https://issues.apache.org/jira/browse/HDDS-120?focusedCommentId=16670495=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16670495



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Attachment: HDDS-855.00.patch

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-855.00.patch
>
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Status: Patch Available  (was: Open)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-855.00.patch
>
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-804) Block token: Add secret token manager

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692194#comment-16692194
 ] 

Hadoop QA commented on HDDS-804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
13s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
15s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-ozone: The patch generated 14 new + 0 
unchanged - 0 fixed = 14 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
22s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948761/HDDS-804-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aed5fe4c3a4b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / ffe5e7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1760/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
| unit | 

[jira] [Commented] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692190#comment-16692190
 ] 

Hadoop QA commented on HDFS-14082:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
56s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948758/HDFS-14082-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 142d27f33775 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4d8cc85 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25564/testReport/ |
| Max. process+thread count | 1351 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25564/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This 

[jira] [Commented] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692206#comment-16692206
 ] 

Ajay Kumar commented on HDDS-795:
-

[~elek] thanks for updating the patch. 
Patch v5 still has a typo at L109, {{DBStore}}. Either we can remove "a" or 
rephrase the whole line.
{quote}Can't use this name exactly, as I have both put and delete operations 
with and without batch support. I need two new names.{quote}
I was thinking of taking a generic parameter to handle both put and delete, 
but the current approach is fine as well since we don't have too many 
operations; a rough sketch of that alternative is below.
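
A minimal sketch of the generic-parameter idea (hypothetical names, not the 
actual patch): a single operation type covers both puts and deletes, batched 
or not.
{code:java}
// Hypothetical sketch: one value type for both mutation kinds, so a Table
// needs a single method instead of put/putWithBatch/delete/deleteWithBatch.
public final class WriteOp {
  enum Type { PUT, DELETE }

  private final Type type;
  private final byte[] key;
  private final byte[] value; // null for DELETE

  private WriteOp(Type type, byte[] key, byte[] value) {
    this.type = type;
    this.key = key;
    this.value = value;
  }

  public static WriteOp put(byte[] key, byte[] value) {
    return new WriteOp(Type.PUT, key, value);
  }

  public static WriteOp delete(byte[] key) {
    return new WriteOp(Type.DELETE, key, null);
  }
}

// A Table could then expose a single batch entry point, e.g.:
//   void commit(List<WriteOp> ops) throws IOException;
{code}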


> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch
>
>
> org.apache.hadoop.utils.db.RocksDB and Table interfaces provide a 
> vendor-independent way to access any key value store. 
> The default implementation uses RocksDb, but other implementations could 
> also be used (for example, an InMemory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692206#comment-16692206
 ] 

Ajay Kumar edited comment on HDDS-795 at 11/19/18 8:16 PM:
---

[~elek] thanks for updating the patch. 
Patch v5 still has a typo at L109, {{DBStore}}. Either we can remove "a" or 
rephrase the whole line.
{quote}Can't use this name exactly, as I have both put and delete operations 
with and without batch support. I need two new names.{quote}
I was thinking of taking a generic parameter to handle both put and delete, 
but the current approach is fine as well since we don't have too many 
operations.



was (Author: ajayydv):
[~elek] thanks for updating the patch. 
patch v5 still has a typo at L109 , {{DBStore}}. Either we can remove "a" or 
rephrase whole line.
{quote}Can't use this name exactly, as I have both put and delete operations 
with and without batch batch support. I need two new names.{quote}
I was thinking of making taking a generic parameter to handle both put and 
delete but current approach is good as well as we don't have too many 
operations.


> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch
>
>
> org.apache.hadoop.utils.db.RocksDB and Table interfaces provide a 
> vendor-independent way to access any key value store. 
> The default implementation uses RocksDb, but other implementations could 
> also be used (for example, an InMemory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692206#comment-16692206
 ] 

Ajay Kumar edited comment on HDDS-795 at 11/19/18 8:16 PM:
---

[~elek] thanks for updating the patch. 
Patch v5 still has a typo at L109, {{DBStore}}. Either we can remove "a" or 
rephrase the whole line.
{quote}Can't use this name exactly, as I have both put and delete operations 
with and without batch support. I need two new names.{quote}
I was thinking of taking a generic parameter to handle both put and delete, 
but the current approach is fine as well since we don't have too many 
operations.

+1 with that javadoc fixed.


was (Author: ajayydv):
[~elek] thanks for updating the patch. 
patch v5 still has a typo at L109 , {{DBStore}}. Either we can remove "a" or 
rephrase whole line.
{quote}Can't use this name exactly, as I have both put and delete operations 
with and without batch batch support. I need two new names.{quote}
I was thinking of taking a generic parameter to handle both put and delete but 
current approach is good as well as we don't have too many operations.


> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch
>
>
> org.apache.hadoop.utils.db.RocksDB and Table interfaces provide a 
> vendor-independent way to access any key value store. 
> The default implementation uses RocksDb, but other implementations could 
> also be used (for example, an InMemory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Attachment: HDFS-14075-05.patch

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, the following exception was received:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-804) Block token: Add secret token manager

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-804:

Attachment: (was: HDDS-804-HDDS-4_Draft.patch)

> Block token: Add secret token manager
> -
>
> Key: HDDS-804
> URL: https://issues.apache.org/jira/browse/HDDS-804
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-804-HDDS-4.00.patch
>
>
> Add secret manager to process block tokens in OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-804) Block token: Add secret token manager

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-804:

Status: Patch Available  (was: In Progress)

> Block token: Add secret token manager
> -
>
> Key: HDDS-804
> URL: https://issues.apache.org/jira/browse/HDDS-804
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-804-HDDS-4.00.patch
>
>
> Add secret manager to process block tokens in OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-804) Block token: Add secret token manager

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-804:

Attachment: HDDS-804-HDDS-4.00.patch

> Block token: Add secret token manager
> -
>
> Key: HDDS-804
> URL: https://issues.apache.org/jira/browse/HDDS-804
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-804-HDDS-4.00.patch
>
>
> Add secret manager to process block tokens in OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Description: Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
{{SecurityManager}}, which will be in the common module.  (was: The valid URI 
pattern for an Ozone fs URI should be 
{{o3fs://<bucket>.<volume>/<path>}}.

But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} only.
{code:java}
// In OzoneFileSystem.java
private static final Pattern URL_SCHEMA_PATTERN =
    Pattern.compile("(.+)\\.([^\\.]+)");

if (!matcher.matches()) {
  throw new IllegalArgumentException("Ozone file system url should be "
      + "in the form o3fs://bucket.volume");
}{code}
)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Blocker
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Labels:   (was: alpha2)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692134#comment-16692134
 ] 

Ajay Kumar commented on HDDS-855:
-

cc: [~anu]

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Blocker
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Priority: Major  (was: Blocker)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-855:

Component/s: (was: documentation)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-855:
---

Assignee: Ajay Kumar  (was: Anu Engineer)

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-855:
---

 Summary: Move OMMetadataManager from hadoop-ozone/ozone-manager to 
hadoop-ozone/common
 Key: HDDS-855
 URL: https://issues.apache.org/jira/browse/HDDS-855
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Reporter: Ajay Kumar
Assignee: Anu Engineer


The valid URI pattern for an Ozone fs URI should be 
{{o3fs://<bucket>.<volume>/<path>}}.

But OzoneFileSystem accepts URIs of the form {{o3fs://<bucket>.<volume>}} only.
{code:java}
// In OzoneFileSystem.java
private static final Pattern URL_SCHEMA_PATTERN =
    Pattern.compile("(.+)\\.([^\\.]+)");

if (!matcher.matches()) {
  throw new IllegalArgumentException("Ozone file system url should be "
      + "in the form o3fs://bucket.volume");
}{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2018-11-19 Thread CR Hota (JIRA)
CR Hota created HDFS-14090:
--

 Summary: RBF: Improved isolation for downstream name nodes.
 Key: HDFS-14090
 URL: https://issues.apache.org/jira/browse/HDFS-14090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota
Assignee: CR Hota


Router is a gateway to underlying name nodes. Gateway architectures should 
help minimize the impact of unhealthy clusters on clients connecting to 
healthy ones.

For example, if there are 2 name nodes downstream and one of them is heavily 
loaded, with calls spiking RPC queue times, back pressure means the same will 
start reflecting on the router. As a result, clients connecting to 
healthy/faster name nodes will also slow down, since the same RPC queue is 
maintained for all calls at the router layer. Essentially, the same IPC thread 
pool is used by the router to connect to all name nodes.

Currently the router uses one single RPC queue for all calls. Let's discuss 
how we can change the architecture and add some throttling logic for 
unhealthy/slow/overloaded name nodes.

One way could be to read from the current call queue, immediately identify the 
downstream name node, and maintain a separate queue for each underlying name 
node. Another, simpler way is to maintain some sort of rate limiter configured 
for each name node and let routers drop/reject/return errors for requests past 
a certain threshold; a sketch of that idea follows below.
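
A minimal sketch of the simpler rate-limiter option (hypothetical class and 
names; assumes Guava's {{RateLimiter}}, which Hadoop already bundles):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.google.common.util.concurrent.RateLimiter;

/**
 * Hypothetical per-nameservice throttle: each downstream name node gets
 * its own limiter, so calls to an overloaded name node can be rejected
 * instead of clogging the shared RPC queue for everyone.
 */
public class PerNameserviceThrottle {
  private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();
  private final double permitsPerSecond;

  public PerNameserviceThrottle(double permitsPerSecond) {
    this.permitsPerSecond = permitsPerSecond;
  }

  /** Returns true if a call to this nameservice may proceed right now. */
  public boolean tryAcquire(String nsId) {
    return limiters
        .computeIfAbsent(nsId, id -> RateLimiter.create(permitsPerSecond))
        .tryAcquire();
  }
}
{code}
The router would call {{tryAcquire(nsId)}} before forwarding a call and return 
a retriable error when it fails, keeping the shared queue free for healthy 
name nodes.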

This won’t be a simple change, as the router’s ‘Server’ layer would need 
redesign and implementation. Currently this layer is the same as the name 
node’s.

Opening this ticket to discuss, design and implement this feature.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692140#comment-16692140
 ] 

Íñigo Goiri commented on HDFS-13972:


I would do this one after HDFS-13358 (setting it as a dependency) to make it 
easier to handle.
Basically, the changes are:
* RouterJspHelper. Do we want to keep referring to NN there and make it easier 
to merge at the end or should we start using Router naming?
* RouterUserProvider. This looks pretty straightforward. Where do we use this 
class?

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692163#comment-16692163
 ] 

Hadoop QA commented on HDFS-13972:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 18 new + 21 unchanged - 0 fixed = 39 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
26s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948755/HDFS-13972-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f7fceb9803fa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4d8cc85 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25563/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 

[jira] [Created] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-854:


 Summary: 
TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
 Key: HDDS-854
 URL: https://issues.apache.org/jira/browse/HDDS-854
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
times out while waiting for the mini cluster datanode to restart:

{code}
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
at 
org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
{code}
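
For context, a minimal sketch of the polling wait involved (the readiness 
condition here is hypothetical; {{GenericTestUtils.waitFor}} polls the given 
check until it returns true or the timeout elapses, then throws a 
TimeoutException):
{code:java}
// Hypothetical readiness check, for illustration only; the real condition
// lives in MiniOzoneClusterImpl#waitForClusterToBeReady.
GenericTestUtils.waitFor(
    () -> cluster.getHddsDatanodes().stream()
        .allMatch(dn -> dn.getDatanodeStateMachine() != null),
    1000,    // re-check every second
    60000);  // fail with TimeoutException after one minute
{code}
Bumping this timeout or making the readiness check stricter are possible 
directions for de-flaking the test.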



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692055#comment-16692055
 ] 

Hadoop QA commented on HDFS-13369:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 7 new + 168 unchanged 
- 4 fixed = 175 total (was 172) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948723/HDFS-13369.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux acd22b3d596e 4.4.0-138-generic 

[jira] [Comment Edited] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692063#comment-16692063
 ] 

Mukul Kumar Singh edited comment on HDDS-835 at 11/19/18 5:56 PM:
--

Thanks for working on this [~shashikant]; the patch looks really good to me.
There are some checkstyle issues with the patch, and some minor comments on 
the patch.

1) ozone-default.xml:627, this value should be 256MB, I think
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
4) XceiverServerRatis, can we also use the size config in newRaftProperties? 
This will help clean up config handling.


was (Author: msingh):
Thanks for working on this [~shashikant].
There are some checkstyle issues with the patch.

1) ozone-default.xml:627, this value should be 256MB, I think
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
4) XceiverServerRatis, can we also use the size config in newRaftProperties? 
This will help clean up config handling.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.
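
For readers following along, a minimal sketch of the {{getStorageSize}} pattern 
being requested; the key name and default below are illustrative assumptions, 
not the committed change:

{code:java}
import org.apache.hadoop.conf.StorageUnit;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

OzoneConfiguration conf = new OzoneConfiguration();
// getStorageSize() parses values such as "256MB" or "32MB" and converts them
// to the requested unit, so callers stop juggling raw long byte counts.
long streamBufferFlushSize = (long) conf.getStorageSize(
    "ozone.client.stream.buffer.flush.size",  // assumed key name
    "64MB",                                   // assumed default
    StorageUnit.BYTES);
{code}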



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692063#comment-16692063
 ] 

Mukul Kumar Singh commented on HDDS-835:


Thanks for working on this [~shashikant].
There are some checkstyle issues with the patch.

1) ozone-default.xml:627, this value should be 256MB, I think
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
4) XceiverServerRatis, can we also use the size config in newRaftProperties? 
This will help clean up config handling.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14087) RBF: In Router UI NameNode heartbeat printing the negative values

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692071#comment-16692071
 ] 

Íñigo Goiri commented on HDFS-14087:


Can you post a screenshot or give more details about where this happens?

> RBF: In Router UI NameNode heartbeat printing the negative values 
> --
>
> Key: HDFS-14087
> URL: https://issues.apache.org/jira/browse/HDFS-14087
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14087) RBF: In Router UI NameNode heartbeat printing the negative values

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14087:
---
Summary: RBF: In Router UI NameNode heartbeat printing the negative values  
 (was: RBF : In Router UI NameNode heartbeat printing the negative values )

> RBF: In Router UI NameNode heartbeat printing the negative values 
> --
>
> Key: HDFS-14087
> URL: https://issues.apache.org/jira/browse/HDFS-14087
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692074#comment-16692074
 ] 

Íñigo Goiri commented on HDFS-14089:


Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController, DFSHAAdmin setting the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY".  Need to add the configuration for 
> NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692074#comment-16692074
 ] 

Íñigo Goiri edited comment on HDFS-14089 at 11/19/18 6:07 PM:
--

Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

Is there an easy way to test this? Would we need a secure mini ZK cluster?


was (Author: elgoiri):
Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController, DFSHAAdmin setting the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY".  Need to add the configuration for 
> NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692079#comment-16692079
 ] 

Íñigo Goiri commented on HDFS-13369:


Thanks [~RANith] for rebasing.
Can we make the TODOs cleaner?

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch, 
> HDFS-13369.006.patch, HDFS-13369.007.patch
>
>
> Scenario:-
> 1.Configure the RequestHedgingProxy
> 2. write some files in file system
> 3. Take FSCK report for the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  
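
The cast fails because {{RequestHedgingProxyProvider}} installs its own 
{{InvocationHandler}}, which does not implement {{RpcInvocationHandler}}. A 
hedged sketch of a defensive check, as one possible direction rather than the 
committed fix:

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import org.apache.hadoop.ipc.RpcInvocationHandler;

// Inspect the handler before casting instead of assuming its concrete type.
InvocationHandler handler = Proxy.getInvocationHandler(proxy);
if (handler instanceof RpcInvocationHandler) {
  return ((RpcInvocationHandler) handler).getConnectionId();
}
// Otherwise the proxy is wrapped (e.g. by the hedging provider); the caller
// must unwrap it or tolerate a missing connection id.
return null;
{code}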



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-19 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13972:
---
Attachment: HDFS-13972-HDFS-13891.001.patch
Status: Patch Available  (was: Open)

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14082:
---
Attachment: HDFS-14082-HDFS-13891.002.patch

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14082-HDFS-13891.002.patch, HDFS-14082.000.patch, 
> HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, operations like {{getListing()}} still 
> succeed. We should add an option to fail the operation if one of the 
> subclusters is unavailable.
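
A hedged sketch of what such an option could look like; the config key and the 
surrounding variables are hypothetical, not the committed change:

{code:java}
import java.io.IOException;

// Hypothetical key; the real name may differ.
boolean allowPartialList = conf.getBoolean(
    "dfs.federation.router.allow-partial-list", true);

if (!allowPartialList && !unavailableSubclusters.isEmpty()) {
  // Fail fast instead of silently returning a partial listing.
  throw new IOException("Cannot list " + src
      + ": unavailable subclusters: " + unavailableSubclusters);
}
{code}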



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-718:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for working on this [~ljain]. I have committed this to trunk.

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692068#comment-16692068
 ] 

Íñigo Goiri commented on HDFS-14011:


[~surendrasingh], I think this is a reasonable interface for the mount points.
In HDFS-14085 we may want to have richer semantics.
I'll put my thoughts in that JIRA.

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of the 
> mount point; therefore, 'hdfs dfs -ls' on a directory that includes a mount 
> point returns incorrect information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692067#comment-16692067
 ] 

Hudson commented on HDDS-718:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15464/])
HDDS-718. Introduce new SCM Commands to list and close Pipelines. (msingh: rev 
b5d7b292c988de6a8555d472a4448275522b7622)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ClosePipelineSubcommand.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/package-info.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java


> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14088:
---
Summary: RequestHedgingProxyProvider can throw NullPointerException when 
failover due to no lock on currentUsedProxy  (was: RequestHedgingProxyProvider 
can throw NullPointerException when failvoer due to no lock on currentUsedProxy)

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Priority: Major
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread runs the try block and another thread then triggers a failover 
> by calling the method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.
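
One common fix for this pattern, sketched under the assumption that 
{{currentUsedProxy}} stays a field that {{performFailover}} may null out 
concurrently: read the field once into a local so the null check and the use 
observe the same value.

{code:java}
// Hedged sketch, not the committed patch, inside the handler's
// invoke(Object, Method, Object[]) which declares throws Throwable.
// Take a single snapshot of the field so a concurrent performFailover()
// cannot null it between the check and the use.
ProxyInfo<T> current = currentUsedProxy;
if (current != null) {
  Object retVal = method.invoke(current.proxy, args);
  LOG.debug("Invocation successful on [{}]", current.proxyInfo);
  return retVal;
}
{code}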



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14079) RBF: RouterAdmin should have failover concept for router

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692070#comment-16692070
 ] 

Íñigo Goiri commented on HDFS-14079:


[~surendrasingh], for the solution that [~crh] is talking about, there is no 
code change.
It would be a matter of putting the admin port behind a load balancer and 
setting the config to point to that endpoint.
Anyway, we probably want to set a full HA endpoint in addition.

> RBF: RouterAdmin should have failover concept for router
> 
>
> Key: HDFS-14079
> URL: https://issues.apache.org/jira/browse/HDFS-14079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> Currently {{RouterAdmin}} connects to only one router for admin operations; 
> if the configured router is down, the router admin command fails. It 
> should allow configuring all the router admin addresses.
> {code}
> // Initialize RouterClient
> try {
>   String address = getConf().getTrimmed(
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
>   InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
>   client = new RouterClient(routerSocket, getConf());
> } catch (RPC.VersionMismatch v) {
>   System.err.println(
>   "Version mismatch between client and server... command aborted");
>   return exitCode;
> }
> {code}
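
A hedged sketch of the failover idea, reusing the existing key as a 
comma-separated list; the key handling and error policy here are assumptions, 
not a patch:

{code:java}
// Initialize RouterClient, trying each configured admin address in turn.
String[] addresses = getConf().getTrimmedStrings(
    RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
    RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
RouterClient client = null;
for (String address : addresses) {
  try {
    InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
    client = new RouterClient(routerSocket, getConf());
    break;  // connected; stop trying further routers
  } catch (IOException e) {
    System.err.println("Router " + address + " unreachable, trying the next one");
  }
}
if (client == null) {
  return exitCode;  // no router admin endpoint was reachable
}
{code}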



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692076#comment-16692076
 ] 

Íñigo Goiri commented on HDFS-14075:


I have to say that Whitebox is pretty convenient, and in the end spy ends up 
doing the same thing with more steps.
Anyway, let's avoid it if that's the call.
For  [^HDFS-14075-04.patch], can we use {{LambdaTestUtils#intercept}}?
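
For reference, a minimal sketch of the {{LambdaTestUtils#intercept}} pattern 
being suggested; the call under test and its arguments are illustrative:

{code:java}
import java.io.IOException;
import org.apache.hadoop.test.LambdaTestUtils;

// Asserts that the lambda throws an IOException whose message contains the
// given fragment; the test fails if nothing (or anything else) is thrown.
LambdaTestUtils.intercept(IOException.class,
    "too few journals successfully started",
    () -> editLog.startLogSegmentAndWriteHeaderTxn(txId, layoutVersion));
{code}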

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14088:
--

Assignee: Yuxuan Wang

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread runs the try block and another thread then triggers a failover 
> by calling the method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-19 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692086#comment-16692086
 ] 

CR Hota commented on HDFS-13972:


[~elgoiri]  [~brahmareddy] Thanks for helping with the merge.

I could make WebHDFS work in my environment, and I am uploading the patch here 
for early review. This patch is not final and needs changes, especially after 
the RPC patch is committed. Please take a look and see if things look fine to 
you folks.

Also, as discussed in previous threads, we can do optimizations to reuse 
NameNode code, but I have kept it simple for now.

 

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-718:
--
Fix Version/s: 0.4.0

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.4.0
>
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692119#comment-16692119
 ] 

Íñigo Goiri commented on HDFS-14085:


Internally, the mount point is not a folder and we have to map it to something 
that looks like a folder when we do {{ls}}.
The current implementation in the HDFS-13891 branch is that the mount point 
shows up as a folder with the user, group, and permissions from the mount point 
in the table.
I think conceptually this is correct for the user and the group.
I agree that the permissions are kind of confusing; if we were able to do move 
and delete for those folders we could keep the current syntax.
However, we currently only allow this through dfsrouteradmin, so there is no 
point in showing them this way.
(At some point, we may want to do some admin ops like mv, chown, and rm through 
the ClientProtocol interface.)

Given that, I would propose to just show those folders as r-xr-xr-x.
I would keep the user/group as is right now.
In addition, we may want to add an xattr with something that indicates this is a 
mount point.
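
A tiny sketch of the proposal, assuming the mount point status is assembled in 
{{getMountPointStatus}} (an assumption, not a patch):

{code:java}
import org.apache.hadoop.fs.permission.FsPermission;

// Show mount points as read-only directories; owner/group stay as-is.
FsPermission mountPointPerm = new FsPermission((short) 0555);  // r-xr-xr-x
{code}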


> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries but the permission displayed 
> is the default permission (777) and the owner and group info same as that of 
> the user calling it; Which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691912#comment-16691912
 ] 

Nanda kumar commented on HDDS-849:
--

+1, pending Jenkins.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691929#comment-16691929
 ] 

Hadoop QA commented on HDDS-849:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948734/HDDS-849.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f05a5ce5411f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix NPE in TestKeyValueHandler because of audit log write
> 

[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Status: Patch Available  (was: Open)

> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-853.000.patch
>
>
> We need an option to force close a container in Datanode. When the container 
> is marked as QuasiClosed, SCM will decide the latest container replica based 
> on the blockCommitSequenceId, and it will try to close the QuasiClosed 
> container.
> For this, we need force close support in Datanode which will close the 
> QuasiClosed container.
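
A hedged sketch of the replica-selection step described above; the type and 
accessor names are assumptions, not the attached patch:

{code:java}
import java.util.Comparator;

// Pick the replica with the highest blockCommitSequenceId as the
// authoritative copy; the remaining QuasiClosed replicas get force closed.
ContainerReplica latest = replicas.stream()
    .max(Comparator.comparingLong(ContainerReplica::getSequenceId))
    .orElseThrow(IllegalStateException::new);
{code}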



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16691950#comment-16691950
 ] 

Mukul Kumar Singh commented on HDDS-718:


Thanks for updating the patch [~ljain].
+1, the v3 patch looks good to me.

There are the following nitpicks in the patch; I will fix them while 
committing it.
1) ScmClient.java:176,178 PipelineID -> Pipeline
2) StorageContainerLocationProtocol:130,132 -> PipelineID -> Pipeline
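
For context, a hedged sketch of how the renamed client calls might be used; the 
exact signatures are assumed from the review notes above, not taken from the 
committed code:

{code:java}
import java.util.List;
import org.apache.hadoop.hdds.scm.client.ScmClient;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

// Assumed shapes: listPipelines() returns the pipelines known to SCM, and
// closePipeline(...) tears one of them down; pipelineId is a placeholder.
List<Pipeline> pipelines = scmClient.listPipelines();
for (Pipeline pipeline : pipelines) {
  System.out.println(pipeline);
}
scmClient.closePipeline(pipelineId);
{code}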


> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692218#comment-16692218
 ] 

Ayush Saxena commented on HDFS-14075:
-

Thanks [~elgoiri] for giving it a look.

bq. For HDFS-14075-04.patch, can we use LambdaTestUtils#intercept?

I have uploaded v5 using it.

Please review :)

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692268#comment-16692268
 ] 

Hadoop QA commented on HDDS-855:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-855 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948767/HDDS-855.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a0a23c29fa68 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b5d7b29 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
