[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929889#comment-16929889
 ] 

Ayush Saxena commented on HDFS-14849:
-

Thanks [~marvelrock] for the report. Can you give a brief description of the fix 
and of why exactly it is happening?


> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.






[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-14 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14849:

Fix Version/s: (was: 3.3.0)

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.






[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929885#comment-16929885
 ] 

HuangTao commented on HDFS-14847:
-

I have submitted HDFS-14849 to fix the bug I hit.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated while EC decommissioning. Log 
> messages are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, we find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group spanning the 2 nodes. After creating an ErasureCodingWork to reconstruct, 
> it will create 2 replication works. 
> If dn0 replicates successfully and dn1 fails to replicate, then it 
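
The quoted description is cut off by the archive, but the failure mode it 
describes can be made concrete. Below is a toy, self-contained Java sketch 
(invented names; this is not the actual ErasureCodingWork/NameNode code): one 
replication task is created per decommissioning source, and when any task in the 
group fails, the whole group is re-queued, so the copy that already succeeded 
gets replicated again.

{code}
import java.util.ArrayList;
import java.util.List;

/** Toy model of re-queueing a whole EC block group after a partial failure. */
public class EcDecommissionRaceSketch {

  /** Pretend replication attempt: dn1 fails in round 1, succeeds in round 2. */
  static boolean replicate(String source, int round) {
    return !("dn1".equals(source) && round == 1);
  }

  public static void main(String[] args) {
    List<String> sources = List.of("dn0", "dn1"); // both decommissioning
    List<String> copies = new ArrayList<>();

    // Round 1: one replication task per source; dn0 succeeds, dn1 fails.
    // Round 2: the whole group is re-queued, so dn0 is replicated again.
    for (int round = 1; round <= 2; round++) {
      for (String src : sources) {
        if (replicate(src, round)) {
          copies.add("copy-of-" + src);
        }
      }
    }
    // Prints [copy-of-dn0, copy-of-dn0, copy-of-dn1]: three copies where two
    // were needed, i.e. the over-replication reported in this issue.
    System.out.println(copies);
  }
}
{code}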

[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-14 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

       Attachment: HDFS-14849.001.patch
    Fix Version/s: 3.3.0
 Target Version/s: 3.3.0
Affects Version/s: (was: 3.1.2)
                   (was: 3.2.0)
           Labels: EC HDFS NameNode  (was: )
           Status: Patch Available  (was: Open)

[~ayushtkn] PTAL

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: HDFS, EC, NameNode
> Fix For: 3.3.0
>
> Attachments: HDFS-14849.001.patch
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.






[jira] [Updated] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-14 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-6524:
--
Attachment: HDFS-6524.003.patch

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which defaults to 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to have 
> another option tied to the block replication factor, e.g. for a cluster 
> configured with only two block replicas, or a Reed-Solomon encoding solution 
> with a single replica. This helps to reduce long-tail latency.
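
A minimal sketch of that idea (hypothetical names; the real client retry logic 
lives in chooseDataNode()): cap the retry count by the number of replicas that 
can actually exist, rather than always using the fixed default.

{code}
public class ChooseDataNodeRetrySketch {

  static final int DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3;

  /**
   * Retry at most once per additional replica, never more than the configured
   * cap. With 2 replicas, or Reed-Solomon data with a single replica, the
   * extra retries of the fixed default are wasted wait time.
   */
  static int effectiveRetries(int replication) {
    return Math.min(DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT,
        Math.max(replication - 1, 0));
  }

  public static void main(String[] args) {
    System.out.println(effectiveRetries(3)); // 2 -> default replication
    System.out.println(effectiveRetries(2)); // 1 -> two-replica cluster
    System.out.println(effectiveRetries(1)); // 0 -> single replica (RS data)
  }
}
{code}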






[jira] [Created] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-14 Thread HuangTao (Jira)
HuangTao created HDFS-14849:
---

 Summary: Erasure Coding: replicate block infinitely when datanode 
being decommissioning
 Key: HDFS-14849
 URL: https://issues.apache.org/jira/browse/HDFS-14849
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.2, 3.2.0, 3.3.0
Reporter: HuangTao
Assignee: HuangTao


When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
that datanode will be replicated infinitely.








[jira] [Commented] (HDFS-14836) FileIoProvider should not increase FileIoErrors metric in datanode volume metric

2019-09-14 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929875#comment-16929875
 ] 

Aiphago commented on HDFS-14836:


Hi [~jojochuang], I updated the code. Can you help review it again? Thanks very 
much.

> FileIoProvider should not increase FileIoErrors metric in datanode volume 
> metric
> 
>
> Key: HDFS-14836
> URL: https://issues.apache.org/jira/browse/HDFS-14836
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Minor
> Attachments: HDFS-14836-trunk-001.patch, HDFS-14836.patch
>
>
> I found that the FileIoErrors metric increases in 
> BlockSender.sendPacket() when fileIoProvider.transferToSocketFully() is used. 
> But in https://issues.apache.org/jira/browse/HDFS-2054 such exceptions, like 
> "Broken pipe" and "Connection reset", are deliberately ignored.
> So should we filter these out when fileIoProvider increments the FileIoErrors count?
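
A rough sketch of such a filter (hypothetical helper, not the actual 
FileIoProvider code; the message strings mirror the ones HDFS-2054 already 
ignores): skip the metric increment for client-side network errors.

{code}
import java.io.IOException;

public class FileIoErrorFilterSketch {

  static int fileIoErrors = 0; // stand-in for the datanode volume metric

  /** Client-side network errors, per the HDFS-2054 behavior quoted above. */
  static boolean isNetworkRelated(IOException e) {
    String msg = e.getMessage();
    return msg != null
        && (msg.contains("Broken pipe") || msg.contains("Connection reset"));
  }

  /** Only count failures that actually indicate a bad volume. */
  static void onTransferFailure(IOException e) {
    if (isNetworkRelated(e)) {
      return; // the client went away; this is not a disk problem
    }
    fileIoErrors++;
  }

  public static void main(String[] args) {
    onTransferFailure(new IOException("Broken pipe"));        // skipped
    onTransferFailure(new IOException("Input/output error")); // counted
    System.out.println(fileIoErrors); // 1
  }
}
{code}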






[jira] [Updated] (HDFS-14836) FileIoProvider should not increase FileIoErrors metric in datanode volume metric

2019-09-14 Thread Aiphago (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiphago updated HDFS-14836:
---
Attachment: HDFS-14836-trunk-001.patch

> FileIoProvider should not increase FileIoErrors metric in datanode volume 
> metric
> 
>
> Key: HDFS-14836
> URL: https://issues.apache.org/jira/browse/HDFS-14836
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Aiphago
>Assignee: Aiphago
>Priority: Minor
> Attachments: HDFS-14836-trunk-001.patch, HDFS-14836.patch
>
>
> I found that the FileIoErrors metric increases in 
> BlockSender.sendPacket() when fileIoProvider.transferToSocketFully() is used. 
> But in https://issues.apache.org/jira/browse/HDFS-2054 such exceptions, like 
> "Broken pipe" and "Connection reset", are deliberately ignored.
> So should we filter these out when fileIoProvider increments the FileIoErrors count?






[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-14 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929793#comment-16929793
 ] 

Eric Yang commented on HDFS-14845:
--

[~Prabhu Joseph] Would it be possible for HttpFSAuthenticationFilter to be only a 
parameter-passing filter that triggers filter initialization, like 
ProxyUserAuthenticationFilterInitializer, and then internally routes all doGet 
and doPost calls to the initialized filter? (A sketch of this delegation follows 
the list below.)

1. If the httpfs.authentication.* properties are not defined, fall back to the 
default behavior, to be consistent with hadoop.http.authentication.type.
2. This gives the appearance that, if httpfs.authentication.type is configured to 
use a custom filter, the system responds consistently with the rest of the Hadoop 
web endpoints.
3. If httpfs.authentication.type=kerberos, HttpFSAuthenticationFilter is a 
combo of Kerberos + DelegationToken + Proxy support.
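
As a sketch of that delegation pattern (illustrative only, with invented class 
names; the real HttpFSAuthenticationFilter and its configuration differ): resolve 
the filter class from httpfs.authentication.type at init time and forward every 
request to it, which is what makes point 2 above hold.

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class DelegatingAuthFilterSketch implements Filter {

  private Filter delegate;

  @Override
  public void init(FilterConfig conf) throws ServletException {
    // For brevity this sketch treats the type value as a filter class name;
    // real code would map "simple"/"kerberos" to the built-in handlers first.
    String type = conf.getInitParameter("httpfs.authentication.type");
    if (type == null || type.isEmpty()) {
      type = "org.example.DefaultAuthFilter"; // placeholder default
    }
    try {
      delegate = (Filter) Class.forName(type)
          .getDeclaredConstructor().newInstance();
      delegate.init(conf);
    } catch (ReflectiveOperationException e) {
      throw new ServletException("Cannot initialize filter for " + type, e);
    }
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    delegate.doFilter(req, resp, chain); // every request goes to the delegate
  }

  @Override
  public void destroy() {
    if (delegate != null) {
      delegate.destroy();
    }
  }
}
{code}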


> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}






[jira] [Assigned] (HDDS-2131) Optimize replication type and creation time calculation in S3 MPU list call

2019-09-14 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-2131:
-

Assignee: Siddharth Wagle

> Optimize replication type and creation time calculation in S3 MPU list call
> ---
>
> Key: HDDS-2131
> URL: https://issues.apache.org/jira/browse/HDDS-2131
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>
> Based on the review from [~bharatviswa]:
> {code}
> // hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
> Table<String, OmKeyInfo> openKeyTable =
>     metadataManager.getOpenKeyTable();
> OmKeyInfo omKeyInfo =
>     openKeyTable.get(upload.getDbKey());
> {code}
> {quote}Here we are reading the openKeyTable only to get the creation time. If we 
> can have this information in omMultipartKeyInfo, we could avoid DB calls to 
> the openKeyTable.
> To do this, we can set creationTime in OmMultipartKeyInfo during 
> initiateMultipartUpload. That way, we can get all the required 
> information from the MultipartKeyInfo table.
> Also, StorageClass is missing from the returned OmMultipartUpload, even though 
> listMultipartUploads shows StorageClass information. For this, we can 
> return replicationType and, depending on its value, set StorageClass 
> in the listMultipartUploads response.
> {quote}
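
A small sketch of the proposed change (hypothetical types and method names, not 
the actual KeyManagerImpl/OmMultipartKeyInfo API, and an assumed 
replication-to-StorageClass mapping): capture creationTime, and the replication 
type from which StorageClass is derived, when the upload is initiated, so the 
list call reads a single table.

{code}
class MultipartUploadInfo {
  final long creationTime;       // set once, at initiateMultipartUpload time
  final String replicationType;  // lets the S3 layer derive StorageClass

  MultipartUploadInfo(long creationTime, String replicationType) {
    this.creationTime = creationTime;
    this.replicationType = replicationType;
  }
}

public class MpuListSketch {

  /** initiateMultipartUpload: record both fields up front. */
  static MultipartUploadInfo initiate(String replicationType) {
    return new MultipartUploadInfo(System.currentTimeMillis(), replicationType);
  }

  /** listMultipartUploads: no openKeyTable lookup needed any more. */
  static String storageClassOf(MultipartUploadInfo info) {
    // Assumed mapping for illustration: RATIS ~ STANDARD, else reduced.
    return "RATIS".equals(info.replicationType)
        ? "STANDARD" : "REDUCED_REDUNDANCY";
  }

  public static void main(String[] args) {
    MultipartUploadInfo info = initiate("RATIS");
    System.out.println(info.creationTime + " " + storageClassOf(info));
  }
}
{code}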






[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-14 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929777#comment-16929777
 ] 

hemanthboyina commented on HDFS-14762:
--

thanks for the new test case condition [~ayushtkn]

Updated new patch

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch, HDFS-14762.004.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270
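
For reference, a minimal reproduction of the failure (requires hadoop-common on 
the classpath): the colon in the child name is parsed as a URI scheme separator, 
so the two-argument Path constructor throws exactly the exception quoted above.

{code}
import org.apache.hadoop.fs.Path;

public class PathColonRepro {
  public static void main(String[] args) {
    Path parent = new Path("/tmp");
    // Throws java.lang.IllegalArgumentException:
    //   java.net.URISyntaxException: Relative path in absolute URI: a:b
    Path child = new Path(parent, "a:b");
    System.out.println(child);
  }
}
{code}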






[jira] [Commented] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-14 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929764#comment-16929764
 ] 

Hadoop QA commented on HDFS-14762:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14762 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980323/HDFS-14762.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 700d6629710c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e04b8a4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27874/testReport/ |
| Max. process+thread count | 1341 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27874/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
>   

[jira] [Work logged] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?focusedWorklogId=312537&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312537
 ]

ASF GitHub Bot logged work on HDDS-2129:


Author: ASF GitHub Bot
Created on: 14/Sep/19 12:02
Start Date: 14/Sep/19 12:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1449: HDDS-2129. Using 
dist profile fails with pom.ozone.xml as parent pom
URL: https://github.com/apache/hadoop/pull/1449#issuecomment-531474159
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1117 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 771 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2782 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1449/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1449 |
   | Optional Tests | dupname asflicense xml |
   | uname | Linux 3775f2f7e567 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e04b8a4 |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1449/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 312537)
Time Spent: 20m  (was: 10m)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929750#comment-16929750
 ] 

Elek, Marton commented on HDDS-2129:


AFAIK the problem was introduced by switching to the javadoc 3.0 maven plugin. Now 
we need to ignore the warning in a different way.

Patch is uploaded (if you don't mind, I also fixed the assembly plugin version; 
let me know if you prefer to do it in a separate patch...)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Updated] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2129:
-
Labels: pull-request-available  (was: )

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Work logged] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?focusedWorklogId=312536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312536
 ]

ASF GitHub Bot logged work on HDDS-2129:


Author: ASF GitHub Bot
Created on: 14/Sep/19 11:14
Start Date: 14/Sep/19 11:14
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1449: HDDS-2129. Using 
dist profile fails with pom.ozone.xml as parent pom
URL: https://github.com/apache/hadoop/pull/1449
 
 
   The build fails with the {{dist}} profile. Details in a comment below.
   
   See: https://issues.apache.org/jira/browse/HDDS-2129
 



Issue Time Tracking
---

Worklog Id: (was: 312536)
Remaining Estimate: 0h
Time Spent: 10m

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Updated] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2129:
---
Status: Patch Available  (was: Open)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929748#comment-16929748
 ] 

Elek, Marton commented on HDDS-2129:


Interesting: I tried it with 56b7571131b, which is the last commit before 
HDDS-2106, and it's failing in the same way. Still investigating...

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.






[jira] [Updated] (HDFS-14762) "Path(Path/String parent, String child)" will fail when "child" contains ":"

2019-09-14 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14762:
-
Attachment: HDFS-14762.004.patch

> "Path(Path/String parent, String child)" will fail when "child" contains ":"
> 
>
> Key: HDFS-14762
> URL: https://issues.apache.org/jira/browse/HDFS-14762
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shixiong Zhu
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14762.001.patch, HDFS-14762.002.patch, 
> HDFS-14762.003.patch, HDFS-14762.004.patch
>
>
> When the "child" parameter contains ":", "Path(Path/String parent, String 
> child)" will throw the following exception:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: ...
> {code}
> Not sure if this is a legit bug. But the following places will hit this error 
> when seeing a Path with a file name containing ":":
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java#L101
> https://github.com/apache/hadoop/blob/f9029c4070e8eb046b403f5cb6d0a132c5d58448/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L270






[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-14 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929738#comment-16929738
 ] 

Prabhu Joseph commented on HDFS-14845:
--

[~eyang] HttpFSAuthenticationFilter supports JWTRedirectAuthenticationHandler by 
setting it in httpfs.authentication.type (similar to simple or kerberos).

AuthenticationFilterInitializer or ProxyUserAuthenticationFilterInitializer could 
be made the default for HttpFS, but that would lose support for the 
httpfs.authentication-specific configs and the WebHdfs Delegation Token provided 
by HttpFSAuthenticationFilter.










> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}






[jira] [Created] (HDDS-2133) TestOzoneContainer is failing

2019-09-14 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2133:
-

 Summary: TestOzoneContainer is failing
 Key: HDDS-2133
 URL: https://issues.apache.org/jira/browse/HDDS-2133
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestOzoneContainer}} is failing with the following exception
{noformat}
[ERROR] 
testBuildContainerMap(org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer)
  Time elapsed: 2.031 s  <<< FAILURE!
java.lang.AssertionError: expected:<10> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.testBuildContainerMap(TestOzoneContainer.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{noformat}






[jira] [Created] (HDDS-2132) TestKeyValueContainer is failing

2019-09-14 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2132:
-

 Summary: TestKeyValueContainer is failing
 Key: HDDS-2132
 URL: https://issues.apache.org/jira/browse/HDDS-2132
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar


{{TestKeyValueContainer}} is failing with the following exception 
{noformat}
[ERROR] 
testContainerImportExport(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer)
  Time elapsed: 0.173 s  <<< ERROR!
java.lang.NullPointerException
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:201)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.importContainerData(KeyValueContainer.java:500)
at 
org.apache.hadoop.ozone.container.keyvalue.TestKeyValueContainer.testContainerImportExport(TestKeyValueContainer.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{noformat}






[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=312532&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312532
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 14/Sep/19 08:03
Start Date: 14/Sep/19 08:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-531459900
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 60 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1696 | trunk passed |
   | +1 | compile | 1359 | trunk passed |
   | -1 | mvnsite | 1093 | root in trunk failed. |
   | +1 | shadedclient | 4937 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 482 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | -1 | mvninstall | 26 | root in the patch failed. |
   | -1 | mvninstall | 25 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 22 | build-tools in the patch failed. |
   | -1 | compile | 28 | root in the patch failed. |
   | -1 | javac | 28 | root in the patch failed. |
   | -1 | mvnsite | 27 | root in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) with tabs. |
   | +1 | xml | 8 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 772 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 30 | root in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | root in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6735 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1435 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 81b07cd59e9d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e04b8a4 |
   | Default Java | 1.8.0_222 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/branch-mvnsite-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-mvninstall-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-mvninstall-hadoop-hdds_build-tools.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-mvnsite-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/whitespace-tabs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-javadoc-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/testReport/ |
   | Max. process+thread count | 455 (vs. ulimit of 5500) |
   | modules | C: . hadoop-hdds hadoop-hdds/build-tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/3/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=312531&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312531
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 14/Sep/19 08:02
Start Date: 14/Sep/19 08:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r324416600
 
 

 ##
 File path: 
hadoop-hdds/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
 
 Review comment:
   whitespace:tabs in line
   
 



Issue Time Tracking
---

Worklog Id: (was: 312531)
Time Spent: 2h 20m  (was: 2h 10m)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we have 
> to use a separate checkstyle.xml and suppressions.xml in the hdds/ozone projects 
> for checkstyle validation.






[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929715#comment-16929715
 ] 

HuangTao commented on HDFS-14847:
-

I just verified with [~ferhui]'s UT and my snippet, and it failed.

I will file a new issue to record my scenario later.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated while EC decommissioning. Log 
> messages are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, we find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group spanning the 2 nodes. After creating an ErasureCodingWork to reconstruct, 
> it will create 2 

[jira] [Updated] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2129:
--
Status: Open  (was: Patch Available)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929706#comment-16929706
 ] 

Fei Hui commented on HDFS-14847:


After reading the comments on HDFS-14699, I think the issue here is not the 
same as that one.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. Messages 
> in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, I find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group with the 2 nodes. After creating an ErasureCodingWork to reconstruct, 
> it will create 2 replication works. 
> If dn0 replicates in 

[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929705#comment-16929705
 ] 

Fei Hui commented on HDFS-14847:


[~ayushtkn] The unit test in HDFS-14847.002.patch times out without the fix on 
the current trunk branch, even after HDFS-14699.
I think it may not be the same issue.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. Messages 
> in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, I find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group with the 2 nodes. After creating an ErasureCodingWork to 

[jira] [Commented] (HDFS-13736) BlockPlacementPolicyDefault can not choose favored nodes when 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false

2019-09-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929703#comment-16929703
 ] 

Ayush Saxena commented on HDFS-13736:
-

[~xiaodong.hu] any plans of working on this? If not, I can take over...

> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false
> --
>
> Key: HDFS-13736
> URL: https://issues.apache.org/jira/browse/HDFS-13736
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Major
> Attachments: HDFS-13736.001.patch
>
>
> BlockPlacementPolicyDefault can not choose favored nodes when 
> 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false. 
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929702#comment-16929702
 ] 

Nanda kumar commented on HDDS-2129:
---

The initial problem occurs while enabling the {{src}} profile.

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929701#comment-16929701
 ] 

Ayush Saxena commented on HDFS-14847:
-

I guess [~marvelrock] means that after HDFS-14699 the issue reported here 
will not occur, if I am reading it correctly?
Well, anyway, if not, I will try to confirm in a couple of days.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. Messages 
> in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, I find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group with the 2 nodes. After 

[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2019-09-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929700#comment-16929700
 ] 

Ayush Saxena commented on HDFS-13522:
-

Thanx [~crh] for the design. I started reading but have an initial doubt 
regarding the need to split read and write routers. I think we can use just one 
kind of router. The reason for the split here, too, seems to be to 
differentiate calls between the active NN for writes and the Observer NN for 
reads. Can this not be done in the existing routers? We can check whether the 
stateId is set, which means the client is using {{ObserverProxyProvider}}, and 
direct the call to the Observer NN; if not, we can follow the normal flow as it 
is. A rough sketch of this idea follows below.
Let me know if I missed some fact here.
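
To make it concrete, a minimal, self-contained sketch of the single-router 
idea. All names here are illustrative stand-ins, not the actual RBF classes:
{code:java}
public class RouterReadSketch {

  interface Namenode { String process(String call); }

  private final Namenode active;
  private final Namenode observer;

  RouterReadSketch(Namenode active, Namenode observer) {
    this.active = active;
    this.observer = observer;
  }

  // stateId < 0 stands for "not set": the client is not using
  // ObserverProxyProvider, so the call follows the normal flow.
  String route(String call, long stateId, boolean isRead) {
    if (isRead && stateId >= 0) {
      // A set stateId implies ObserverProxyProvider on the client side,
      // so reads can be directed to the Observer NN.
      return observer.process(call);
    }
    return active.process(call);
  }

  public static void main(String[] args) {
    RouterReadSketch r = new RouterReadSketch(
        c -> "active:" + c, c -> "observer:" + c);
    System.out.println(r.route("getBlockLocations", 42L, true)); // observer
    System.out.println(r.route("create", -1L, false));           // active
  }
}
{code}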

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13522.001.patch, RBF_ Observer support.pdf, 
> Router+Observer RPC clogging.png, ShortTerm-Routers+Observer.png
>
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929698#comment-16929698
 ] 

Fei Hui commented on HDFS-14847:


[~ayushtkn][~marvelrock] Thanks for your comments.
The failed tests are unrelated.

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. Messages 
> in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode entering maintenance: 
> false
> {quote}
> Decommissions hang for a long time.
> Digging into the code, I find that there is a problem in ErasureCodingWork.java.
> For example, there are 2 nodes (dn0, dn1) in decommission and an EC block 
> group with the 2 nodes. After creating an ErasureCodingWork to reconstruct, 
> it will create 2 replication works. 
> If dn0 replicates successfully and 

[jira] [Updated] (HDDS-2126) Ozone 0.4.1 branch build issue

2019-09-14 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2126:
--
Fix Version/s: 0.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Ozone 0.4.1 branch build issue
> --
>
> Key: HDDS-2126
> URL: https://issues.apache.org/jira/browse/HDDS-2126
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{ozone-0.4.1}} branch build is failing with the below error:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-ozone-integration-test: Compilation 
> failure
> [ERROR] 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java:[202,9]
>  cannot find symbol
> [ERROR]   symbol:   method getBlockCommitSequenceId()
> [ERROR]   location: class 
> org.apache.hadoop.ozone.container.common.impl.ContainerData
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2129) Using dist profile fails with pom.ozone.xml as parent pom

2019-09-14 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2129:
--
Status: Patch Available  (was: Open)

> Using dist profile fails with pom.ozone.xml as parent pom
> -
>
> Key: HDDS-2129
> URL: https://issues.apache.org/jira/browse/HDDS-2129
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Major
>
> The build fails with the {{dist}} profile. Details in a comment below.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2126) Ozone 0.4.1 branch build issue

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2126?focusedWorklogId=312517=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312517
 ]

ASF GitHub Bot logged work on HDDS-2126:


Author: ASF GitHub Bot
Created on: 14/Sep/19 06:18
Start Date: 14/Sep/19 06:18
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1438: 
HDDS-2126. Ozone 0.4.1 branch build issue.
URL: https://github.com/apache/hadoop/pull/1438
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312517)
Time Spent: 50m  (was: 40m)

> Ozone 0.4.1 branch build issue
> --
>
> Key: HDDS-2126
> URL: https://issues.apache.org/jira/browse/HDDS-2126
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The {{ozone-0.4.1}} branch build is failing with the below error:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-ozone-integration-test: Compilation 
> failure
> [ERROR] 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java:[202,9]
>  cannot find symbol
> [ERROR]   symbol:   method getBlockCommitSequenceId()
> [ERROR]   location: class 
> org.apache.hadoop.ozone.container.common.impl.ContainerData
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2126) Ozone 0.4.1 branch build issue

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2126?focusedWorklogId=312516=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312516
 ]

ASF GitHub Bot logged work on HDDS-2126:


Author: ASF GitHub Bot
Created on: 14/Sep/19 06:18
Start Date: 14/Sep/19 06:18
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1438: HDDS-2126. 
Ozone 0.4.1 branch build issue.
URL: https://github.com/apache/hadoop/pull/1438#issuecomment-531453780
 
 
   Thanks for the review @anuengineer 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312516)
Time Spent: 40m  (was: 0.5h)

> Ozone 0.4.1 branch build issue
> --
>
> Key: HDDS-2126
> URL: https://issues.apache.org/jira/browse/HDDS-2126
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The {{ozone-0.4.1}} branch build is failing with the below error:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-ozone-integration-test: Compilation 
> failure
> [ERROR] 
> /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java:[202,9]
>  cannot find symbol
> [ERROR]   symbol:   method getBlockCommitSequenceId()
> [ERROR]   location: class 
> org.apache.hadoop.ozone.container.common.impl.ContainerData
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=312515=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312515
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 14/Sep/19 06:15
Start Date: 14/Sep/19 06:15
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-531453630
 
 
   Since we are not using the maven artifact anymore, we don't exactly need the 
new maven module `hadoop-hdds-build-tools`. I'm happy to leave it as it is if 
we can use it to add more build tools; otherwise we can remove this module and 
move the files somewhere else.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312515)
Time Spent: 2h 10m  (was: 2h)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml files in the 
> hdds/ozone projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929693#comment-16929693
 ] 

HuangTao edited comment on HDFS-14847 at 9/14/19 6:13 AM:
--

[~ayushtkn] Yes, we had the same issue; however, we fixed it with the 
following code.
{code:java}
// org.apache.hadoop.hdfs.server.blockmanagement.BlockManager#chooseSourceDatanodes
if (state == StoredReplicaState.LIVE) { // move here
  if (!bitSet.get(blockIndex)) {
bitSet.set(blockIndex);
//} else if (state == StoredReplicaState.LIVE) { // from here
  } else {
numReplicas.subtract(StoredReplicaState.LIVE, 1);
numReplicas.add(StoredReplicaState.REDUNDANT, 1);
  }
}
{code}
[~ferhui] I think this code can fix it too.:)


was (Author: marvelrock):
[~ayushtkn] Yes, we had the same issue, however we have fixed it with the 
following code.
{code:java}
if (state == StoredReplicaState.LIVE) { // move here
  if (!bitSet.get(blockIndex)) {
bitSet.set(blockIndex);
//} else if (state == StoredReplicaState.LIVE) { // from here
  } else {
numReplicas.subtract(StoredReplicaState.LIVE, 1);
numReplicas.add(StoredReplicaState.REDUNDANT, 1);
  }
}
{code}
[~ferhui] I think this code can fix it too.:)

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that Some blocks are over-replicated while ec decommissioning. Messages 
> in log as follow
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 

[jira] [Commented] (HDFS-14847) Blocks are over-replicated while EC decommissioning

2019-09-14 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929693#comment-16929693
 ] 

HuangTao commented on HDFS-14847:
-

[~ayushtkn] Yes, we had the same issue; however, we fixed it with the 
following code.
{code:java}
if (state == StoredReplicaState.LIVE) { // move here
  if (!bitSet.get(blockIndex)) {
bitSet.set(blockIndex);
//} else if (state == StoredReplicaState.LIVE) { // from here
  } else {
numReplicas.subtract(StoredReplicaState.LIVE, 1);
numReplicas.add(StoredReplicaState.REDUNDANT, 1);
  }
}
{code}
[~ferhui] I think this code can fix it too.:)
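
To spell out what moving the check buys, here is a toy, self-contained 
walk-through, with plain ints standing in for the NameNode-internal 
StoredReplicaState and NumberReplicas types: only LIVE replicas should touch 
the bit set, so a duplicate LIVE internal block counts as REDUNDANT, while a 
non-LIVE replica (e.g. decommissioning) no longer steals the bit from a later 
LIVE one.
{code:java}
import java.util.BitSet;

public class RedundantCountSketch {
  public static void main(String[] args) {
    // Replicas of one EC block group: {blockIndex, isLive(1/0)}.
    // Index 1 is stored twice (both live); index 2 is decommissioning.
    int[][] replicas = { {0, 1}, {1, 1}, {1, 1}, {2, 0} };

    BitSet seen = new BitSet();
    int live = 0;
    int redundant = 0;

    for (int[] r : replicas) {
      if (r[1] == 1) {          // only LIVE replicas enter the bit-set logic
        if (!seen.get(r[0])) {
          seen.set(r[0]);
          live++;
        } else {
          redundant++;          // duplicate index: redundant, not live
        }
      }
      // non-LIVE replicas no longer set the bit, so a later LIVE replica
      // of the same index is still counted as LIVE
    }
    System.out.println("live=" + live + " redundant=" + redundant);
    // prints: live=2 redundant=1
  }
}
{code}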

> Blocks are over-replicated while EC decommissioning
> ---
>
> Key: HDFS-14847
> URL: https://issues.apache.org/jira/browse/HDFS-14847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-14847.001.patch, HDFS-14847.002.patch
>
>
> Found that some blocks are over-replicated during EC decommissioning. Messages 
> in the log are as follows:
> {quote}
> INFO BlockStateChange: Block: blk_-9223372035714984112_363779142, Expected 
> Replicas: 9, live replicas: 8, corrupt replicas: 0, decommissioned replicas: 
> 0, decommissioning replicas: 3, maintenance replicas: 0, live entering 
> maintenance replicas: 0, excess replicas: 0, Is Open File: false, Datanodes 
> having this block: 10.254.41.34:50010 10.254.54.53:50010 10.254.28.53:50010 
> 10.254.56.55:50010 10.254.32.21:50010 10.254.33.19:50010 10.254.63.17:50010 
> 10.254.31.19:50010 10.254.35.29:50010 10.254.51.57:50010 10.254.40.58:50010 
> 10.254.69.31:50010 10.254.47.18:50010 10.254.51.18:50010 10.254.43.57:50010 
> 10.254.50.47:50010 10.254.42.37:50010 10.254.57.29:50010 10.254.67.40:50010 
> 10.254.44.16:50010 10.254.59.38:50010 10.254.53.56:50010 10.254.45.11:50010 
> 10.254.39.22:50010 10.254.30.16:50010 10.254.35.53:50010 10.254.22.30:50010 
> 10.254.26.34:50010 10.254.17.58:50010 10.254.65.53:50010 10.254.60.39:50010 
> 10.254.61.20:50010 10.254.64.23:50010 10.254.21.13:50010 10.254.37.35:50010 
> 10.254.68.30:50010 10.254.62.37:50010 10.254.25.58:50010 10.254.52.54:50010 
> 10.254.58.31:50010 10.254.49.11:50010 10.254.55.52:50010 10.254.19.19:50010 
> 10.254.36.40:50010 10.254.18.30:50010 10.254.20.39:50010 10.254.66.52:50010 
> 10.254.56.32:50010 10.254.24.55:50010 10.254.34.11:50010 10.254.29.58:50010 
> 10.254.27.40:50010 10.254.46.33:50010 10.254.23.19:50010 10.254.74.12:50010 
> 10.254.74.13:50010 10.254.41.35:50010 10.254.67.58:50010 10.254.54.11:50010 
> 10.254.68.14:50010 10.254.27.14:50010 10.254.51.29:50010 10.254.45.21:50010 
> 10.254.50.56:50010 10.254.47.31:50010 10.254.40.14:50010 10.254.65.21:50010 
> 10.254.62.22:50010 10.254.57.16:50010 10.254.36.52:50010 10.254.30.13:50010 
> 10.254.35.12:50010 10.254.69.34:50010 10.254.34.58:50010 10.254.17.50:50010 
> 10.254.63.12:50010 10.254.28.21:50010 10.254.58.30:50010 10.254.24.57:50010 
> 10.254.33.50:50010 10.254.44.52:50010 10.254.32.48:50010 10.254.43.39:50010 
> 10.254.20.37:50010 10.254.56.59:50010 10.254.22.33:50010 10.254.60.34:50010 
> 10.254.49.19:50010 10.254.52.21:50010 10.254.23.59:50010 10.254.21.16:50010 
> 10.254.42.55:50010 10.254.29.33:50010 10.254.53.17:50010 10.254.19.14:50010 
> 10.254.64.51:50010 10.254.46.20:50010 10.254.66.22:50010 10.254.18.38:50010 
> 10.254.39.17:50010 10.254.37.57:50010 10.254.31.54:50010 10.254.55.33:50010 
> 10.254.25.17:50010 10.254.61.33:50010 10.254.26.40:50010 10.254.59.23:50010 
> 10.254.59.35:50010 10.254.66.48:50010 10.254.41.15:50010 10.254.54.31:50010 
> 10.254.61.50:50010 10.254.62.31:50010 10.254.17.56:50010 10.254.29.18:50010 
> 10.254.45.16:50010 10.254.63.48:50010 10.254.22.34:50010 10.254.37.51:50010 
> 10.254.65.49:50010 10.254.58.21:50010 10.254.42.12:50010 10.254.55.17:50010 
> 10.254.27.13:50010 10.254.57.17:50010 10.254.67.18:50010 10.254.31.31:50010 
> 10.254.28.12:50010 10.254.36.12:50010 10.254.21.59:50010 10.254.30.30:50010 
> 10.254.26.50:50010 10.254.40.40:50010 10.254.32.17:50010 10.254.47.55:50010 
> 10.254.60.55:50010 10.254.49.33:50010 10.254.68.47:50010 10.254.39.21:50010 
> 10.254.56.14:50010 10.254.33.54:50010 10.254.69.57:50010 10.254.43.50:50010 
> 10.254.50.13:50010 10.254.25.49:50010 10.254.18.20:50010 10.254.52.23:50010 
> 10.254.19.11:50010 10.254.20.21:50010 10.254.74.16:50010 10.254.64.55:50010 
> 10.254.24.48:50010 10.254.46.29:50010 10.254.51.12:50010 10.254.23.56:50010 
> 10.254.44.59:50010 10.254.53.58:50010 10.254.34.38:50010 10.254.35.37:50010 
> 10.254.35.16:50010 10.254.36.23:50010 10.254.41.47:50010 10.254.54.12:50010 
> 10.254.20.59:50010 , Current Datanode: 10.254.56.55:50010, Is current 
> datanode decommissioning: true, Is current datanode 

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=312514=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312514
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 14/Sep/19 06:11
Start Date: 14/Sep/19 06:11
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-531453399
 
 
   Thanks for the suggestion, @elek. Modified `pom.ozone.xml` to use the files 
from the project instead of the maven artifact.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 312514)
Time Spent: 2h  (was: 1h 50m)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml files in the 
> hdds/ozone projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org