[jira] [Updated] (HDDS-2525) Sonar : replace lambda with method reference in SCM BufferPool

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2525:

Status: Patch Available  (was: Open)

> Sonar : replace lambda with method reference in SCM BufferPool
> --
>
> Key: HDDS-2525
> URL: https://issues.apache.org/jira/browse/HDDS-2525
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As per Sonar, method references are more compact than lambdas; this applies 
> to Java 8 and later, not to older versions.
> Sonar report:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_5KcVY8lQ4ZsVn=AW5md-_5KcVY8lQ4ZsVn
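
For illustration, a minimal, self-contained sketch of the rule Sonar references. The names here are illustrative only and are not taken from the SCM BufferPool code:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class MethodReferenceExample {
  public static void main(String[] args) {
    List<String> names = Arrays.asList("scm", "om", "datanode");

    // Lambda form flagged by Sonar: it only forwards its argument.
    names.forEach(name -> System.out.println(name));

    // Equivalent, more compact method reference.
    names.forEach(System.out::println);

    // The same substitution works for instance methods.
    Function<String, Integer> lambdaLength = s -> s.length();
    Function<String, Integer> refLength = String::length;
    System.out.println(lambdaLength.apply("ozone") + " " + refLength.apply("ozone"));
  }
}
{code}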






[jira] [Updated] (HDDS-2524) Sonar : clumsy error handling in BlockOutputStream validateResponse

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2524:

Status: Patch Available  (was: Open)

> Sonar : clumsy error handling in BlockOutputStream validateResponse
> ---
>
> Key: HDDS-2524
> URL: https://issues.apache.org/jira/browse/HDDS-2524
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Link to Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVk=AW5md-_2KcVY8lQ4ZsVk






[jira] [Updated] (HDDS-2525) Sonar : replace lambda with method reference in SCM BufferPool

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2525:
-
Labels: pull-request-available  (was: )

> Sonar : replace lambda with method reference in SCM BufferPool
> --
>
> Key: HDDS-2525
> URL: https://issues.apache.org/jira/browse/HDDS-2525
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>
> As per Sonar, method references are more compact than lambdas; this applies 
> to Java 8 and later, not to older versions.
> Sonar report:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_5KcVY8lQ4ZsVn=AW5md-_5KcVY8lQ4ZsVn






[jira] [Work logged] (HDDS-2525) Sonar : replace lambda with method reference in SCM BufferPool

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2525?focusedWorklogId=345091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-345091
 ]

ASF GitHub Bot logged work on HDDS-2525:


Author: ASF GitHub Bot
Created on: 18/Nov/19 06:53
Start Date: 18/Nov/19 06:53
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #210: HDDS-2525. 
Sonar : replace lambda with method reference in SCM BufferPool
URL: https://github.com/apache/hadoop-ozone/pull/210
 
 
   https://issues.apache.org/jira/browse/HDDS-2525
   
   TestBlockOutPutStream unit tests execute the changed function.
 



Issue Time Tracking
---

Worklog Id: (was: 345091)
Remaining Estimate: 0h
Time Spent: 10m

> Sonar : replace lambda with method reference in SCM BufferPool
> --
>
> Key: HDDS-2525
> URL: https://issues.apache.org/jira/browse/HDDS-2525
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As per Sonar, method references are more compact than lambdas; this applies 
> to Java 8 and later, not to older versions.
> Sonar report:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_5KcVY8lQ4ZsVn=AW5md-_5KcVY8lQ4ZsVn






[jira] [Assigned] (HDDS-1573) Add scrubber metrics and pipeline metrics

2019-11-17 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng reassigned HDDS-1573:
--

Assignee: Li Cheng

> Add scrubber metrics and pipeline metrics
> -
>
> Key: HDDS-1573
> URL: https://issues.apache.org/jira/browse/HDDS-1573
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> - Add metrics for how many pipelines per datanode
> - Add metrics for pipelines that are chosen by the scrubber
> - Add metrics for pipelines that are in violation






[jira] [Resolved] (HDDS-2396) OM rocksdb core dump during writing

2019-11-17 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-2396.

  Assignee: Li Cheng
Resolution: Fixed

> OM rocksdb core dump during writing
> ---
>
> Key: HDDS-2396
> URL: https://issues.apache.org/jira/browse/HDDS-2396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
> Attachments: hs_err_pid9340.log
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone to 
> a path on VM0, then read data from VM0's local disk and write it to the mount 
> path. The dataset contains roughly 50,000 files of various sizes, from 0 bytes 
> to GB-level. 
>  
> A core dump occasionally happens in RocksDB during the writes. 
>  
> Stack: [0x7f5891a23000,0x7f5891b24000], sp=0x7f5891b21bb8, free 
> space=1018k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C [libc.so.6+0x151d60] __memmove_ssse3_back+0x1ae0
> C [librocksdbjni3192271038586903156.so+0x358fec] 
> rocksdb::MemTableInserter::PutCFImpl(unsigned int, rocksdb::Slice const&, 
> rocksdb::Slice const&, rocksdb:
> :ValueType)+0x51c
> C [librocksdbjni3192271038586903156.so+0x359d17] 
> rocksdb::MemTableInserter::PutCF(unsigned int, rocksdb::Slice const&, 
> rocksdb::Slice const&)+0x17
> C [librocksdbjni3192271038586903156.so+0x3513bc] 
> rocksdb::WriteBatch::Iterate(rocksdb::WriteBatch::Handler*) const+0x45c
> C [librocksdbjni3192271038586903156.so+0x354df9] 
> rocksdb::WriteBatchInternal::InsertInto(rocksdb::WriteThread::WriteGroup&, 
> unsigned long, rocksdb::ColumnFamilyMemTables*, rocksdb::FlushScheduler*, 
> bool, unsigned long, rocksdb::DB*, bool, bool, bool)+0x1f9
> C [librocksdbjni3192271038586903156.so+0x29fd79] 
> rocksdb::DBImpl::WriteImpl(rocksdb::WriteOptions const&, 
> rocksdb::WriteBatch*, rocksdb::WriteCallback*, unsigned long*, unsigned long, 
> bool, unsigned long*, unsigned long, rocksdb::PreReleaseCallback*)+0x24b9
> C [librocksdbjni3192271038586903156.so+0x2a0431] 
> rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, 
> rocksdb::WriteBatch*)+0x21
> C [librocksdbjni3192271038586903156.so+0x1a064c] 
> Java_org_rocksdb_RocksDB_write0+0xcc
> J 7899 org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x7f58f1872dbe 
> [0x7f58f1872d00+0xbe]
> J 10093% C1 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions()V
>  (400 bytes) @ 0x7f58f2308b0c [0x7f58f2307a40+0x10cc]
> j 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer$$Lambda$29.run()V+4
> j java.lang.Thread.run()V+11






[jira] [Commented] (HDDS-2396) OM rocksdb core dump during writing

2019-11-17 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976315#comment-16976315
 ] 

Li Cheng commented on HDDS-2396:


This is resolved in [https://github.com/apache/hadoop-ozone/pull/100]. Thanks to 
[~bharat] for the fix.

> OM rocksdb core dump during writing
> ---
>
> Key: HDDS-2396
> URL: https://issues.apache.org/jira/browse/HDDS-2396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Li Cheng
>Priority: Major
> Attachments: hs_err_pid9340.log
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone to 
> a path on VM0, then read data from VM0's local disk and write it to the mount 
> path. The dataset contains roughly 50,000 files of various sizes, from 0 bytes 
> to GB-level. 
>  
> A core dump occasionally happens in RocksDB during the writes. 
>  
> Stack: [0x7f5891a23000,0x7f5891b24000], sp=0x7f5891b21bb8, free 
> space=1018k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> C [libc.so.6+0x151d60] __memmove_ssse3_back+0x1ae0
> C [librocksdbjni3192271038586903156.so+0x358fec] 
> rocksdb::MemTableInserter::PutCFImpl(unsigned int, rocksdb::Slice const&, 
> rocksdb::Slice const&, rocksdb:
> :ValueType)+0x51c
> C [librocksdbjni3192271038586903156.so+0x359d17] 
> rocksdb::MemTableInserter::PutCF(unsigned int, rocksdb::Slice const&, 
> rocksdb::Slice const&)+0x17
> C [librocksdbjni3192271038586903156.so+0x3513bc] 
> rocksdb::WriteBatch::Iterate(rocksdb::WriteBatch::Handler*) const+0x45c
> C [librocksdbjni3192271038586903156.so+0x354df9] 
> rocksdb::WriteBatchInternal::InsertInto(rocksdb::WriteThread::WriteGroup&, 
> unsigned long, rocksdb::ColumnFamilyMemTables*, rocksdb::FlushScheduler*, 
> bool, unsigned long, rocksdb::DB*, bool, bool, bool)+0x1f9
> C [librocksdbjni3192271038586903156.so+0x29fd79] 
> rocksdb::DBImpl::WriteImpl(rocksdb::WriteOptions const&, 
> rocksdb::WriteBatch*, rocksdb::WriteCallback*, unsigned long*, unsigned long, 
> bool, unsigned long*, unsigned long, rocksdb::PreReleaseCallback*)+0x24b9
> C [librocksdbjni3192271038586903156.so+0x2a0431] 
> rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, 
> rocksdb::WriteBatch*)+0x21
> C [librocksdbjni3192271038586903156.so+0x1a064c] 
> Java_org_rocksdb_RocksDB_write0+0xcc
> J 7899 org.rocksdb.RocksDB.write0(JJJ)V (0 bytes) @ 0x7f58f1872dbe 
> [0x7f58f1872d00+0xbe]
> J 10093% C1 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions()V
>  (400 bytes) @ 0x7f58f2308b0c [0x7f58f2307a40+0x10cc]
> j 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer$$Lambda$29.run()V+4
> j java.lang.Thread.run()V+11






[jira] [Work logged] (HDDS-2524) Sonar : clumsy error handling in BlockOutputStream validateResponse

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2524?focusedWorklogId=345082=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-345082
 ]

ASF GitHub Bot logged work on HDDS-2524:


Author: ASF GitHub Bot
Created on: 18/Nov/19 06:31
Start Date: 18/Nov/19 06:31
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #209: HDDS-2524. 
Sonar : clumsy error handling in BlockOutputStream validateResponse
URL: https://github.com/apache/hadoop-ozone/pull/209
 
 
   https://issues.apache.org/jira/browse/HDDS-2524
   
   Removed the error log spam and replaced it with a debug log.
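
As a rough sketch of the shape of that change, assuming an SLF4J logger and a hypothetical validateResponse-style method (this is not the actual BlockOutputStream code):

{code:java}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: log details at DEBUG and let the exception carry the error.
public class ResponseValidator {
  private static final Logger LOG =
      LoggerFactory.getLogger(ResponseValidator.class);

  public void validateResponse(String response) throws IOException {
    if (response == null || response.contains("ERROR")) {
      IOException ioe = new IOException("Unexpected response: " + response);
      // Before the fix this was an ERROR-level log, duplicating information the
      // caller already sees when the exception propagates.
      LOG.debug("Failed to validate response {}", response, ioe);
      throw ioe;
    }
  }
}
{code}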
   
 



Issue Time Tracking
---

Worklog Id: (was: 345082)
Remaining Estimate: 0h
Time Spent: 10m

> Sonar : clumsy error handling in BlockOutputStream validateResponse
> ---
>
> Key: HDDS-2524
> URL: https://issues.apache.org/jira/browse/HDDS-2524
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Link to Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVk=AW5md-_2KcVY8lQ4ZsVk






[jira] [Updated] (HDDS-2524) Sonar : clumsy error handling in BlockOutputStream validateResponse

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2524:
-
Labels: pull-request-available  (was: )

> Sonar : clumsy error handling in BlockOutputStream validateResponse
> ---
>
> Key: HDDS-2524
> URL: https://issues.apache.org/jira/browse/HDDS-2524
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
>
> Link to Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVk=AW5md-_2KcVY8lQ4ZsVk






[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-17 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976301#comment-16976301
 ] 

Aiphago commented on HDFS-14986:


Thanks for your advice, I'll fix it later.

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Attachments: HDFS-14986.001.patch
>
>
> Running du across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the following exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}
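
To illustrate the failure mode, here is a generic sketch (not the HDFS code, and not necessarily the fix applied in the patch): iterating or copying a collection while another thread modifies it trips the fail-fast check, so the copy has to be taken under the same lock that guards the writers, or from a snapshot:

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class DeepCopyUnderLock {
  private final Set<Long> replicas = new TreeSet<>();
  private final Object lock = new Object();

  public void addReplica(long id) {
    synchronized (lock) {          // writers take the lock
      replicas.add(id);
    }
  }

  public Set<Long> deepCopy() {
    // Copying without the lock (new HashSet<>(replicas)) can throw
    // ConcurrentModificationException if a writer runs concurrently.
    synchronized (lock) {          // readers copy under the same lock
      return new HashSet<>(replicas);
    }
  }
}
{code}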






[jira] [Commented] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976291#comment-16976291
 ] 

Hadoop QA commented on HDFS-14955:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
39s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986072/HDFS-14955.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 05a2b1e69b99 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c4520 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28322/testReport/ |
| Max. process+thread count | 2731 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28322/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> 

[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976290#comment-16976290
 ] 

Hadoop QA commented on HDFS-14651:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14651 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986063/HDFS-14651.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  

[jira] [Created] (HDDS-2532) Sonar : fix issues in OzoneQuota

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2532:
---

 Summary: Sonar : fix issues in OzoneQuota
 Key: HDDS-2532
 URL: https://issues.apache.org/jira/browse/HDDS-2532
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar issues: 
Remove the runtime exception from the method declaration.
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4NKcVY8lQ4ZsO_=AW5md-4NKcVY8lQ4ZsO_

Use a primitive boolean expression.
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4NKcVY8lQ4ZsO-=AW5md-4NKcVY8lQ4ZsO-
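
A hedged illustration of both rules as paraphrased above (the class and method names are hypothetical, not the actual OzoneQuota code, and the exact Sonar rule wording is an assumption):

{code:java}
public final class QuotaExample {
  private QuotaExample() { }

  // Rule 1: don't declare unchecked exceptions in 'throws'; they propagate anyway.
  // Before: public static long parseQuota(String value) throws IllegalArgumentException
  public static long parseQuota(String value) {
    if (value == null || value.isEmpty()) {
      throw new IllegalArgumentException("Quota value cannot be empty");
    }
    return Long.parseLong(value.trim());
  }

  // Rule 2: work with the primitive boolean in expressions rather than a boxed
  // Boolean, which may be null, and return the expression directly.
  public static boolean isValid(Boolean enabled, long quotaInBytes) {
    // Before: if (enabled) { return true; } else { return false; }
    return Boolean.TRUE.equals(enabled) && quotaInBytes > 0;
  }
}
{code}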







[jira] [Updated] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14955:
---
Attachment: HDFS-14955.003.patch

> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch, 
> HDFS-14955.003.patch
>
>
> When getQuotaUsage() on a mount point path, the quota part should be the 
> global quota. 






[jira] [Commented] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976270#comment-16976270
 ] 

Jinglun commented on HDFS-14955:


Hi [~ayushtkn], thanks for your nice suggestion! Uploaded v03.

> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch
>
>
> When getQuotaUsage() on a mount point path, the quota part should be the 
> global quota. 






[jira] [Updated] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14955:
---
Attachment: (was: HDFS-14955.003.patch)

> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch
>
>
> When getQuotaUsage() on a mount point path, the quota part should be the 
> global quota. 






[jira] [Updated] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14955:
---
Attachment: HDFS-14955.003.patch

> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch, 
> HDFS-14955.003.patch
>
>
> When getQuotaUsage() on a mount point path, the quota part should be the 
> global quota. 






[jira] [Updated] (HDDS-2531) Sonar : remove duplicate string literals in BlockOutputStream

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2531:

Description: 
Sonar issue in executePutBlock, duplicate string literal "blockID" :

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa

format specifiers in Log:
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVg=AW5md-_2KcVY8lQ4ZsVg

define string constant instead of duplicate string literals.
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVb=AW5md-_2KcVY8lQ4ZsVb
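
A minimal sketch of the constant-extraction part (class and field names are hypothetical, not the actual BlockOutputStream code):

{code:java}
public class PutBlockLogger {
  // Define the literal once instead of repeating "blockID" at every call site.
  private static final String BLOCK_ID = "blockID";

  public String describe(long containerId, long localId) {
    return BLOCK_ID + ": " + containerId + "#" + localId;
  }

  public String failureMessage(long containerId, long localId) {
    // Reuses the constant rather than another copy of the "blockID" literal.
    return "PutBlock failed for " + BLOCK_ID + " " + containerId + "#" + localId;
  }
}
{code}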


  was:
Sonar issue in executePutBlock, duplicate string literal "blockID" :

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa



> Sonar : remove duplicate string literals in BlockOutputStream
> -
>
> Key: HDDS-2531
> URL: https://issues.apache.org/jira/browse/HDDS-2531
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>
> Sonar issue in executePutBlock, duplicate string literal "blockID" :
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa
> format specifiers in Log:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVg=AW5md-_2KcVY8lQ4ZsVg
> define string constant instead of duplicate string literals.
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVb=AW5md-_2KcVY8lQ4ZsVb






[jira] [Created] (HDDS-2531) Sonar : remove duplicate string literals in BlockOutputStream

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2531:
---

 Summary: Sonar : remove duplicate string literals in 
BlockOutputStream
 Key: HDDS-2531
 URL: https://issues.apache.org/jira/browse/HDDS-2531
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar issue in executePutBlock, duplicate string literal "blockID" :

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa







[jira] [Updated] (HDDS-2530) Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2530:

Description: 
Sonar report : 
Reduce cognitive complexity from 33 to 15
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWJ=AW5md_APKcVY8lQ4ZsWJ
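
As an illustration of the kind of refactoring Sonar asks for (a generic sketch under assumed validation rules, not the actual HddsClientUtils logic), deeply nested checks are pulled out into small named helpers so each method stays simple:

{code:java}
public final class ResourceNameValidator {
  private ResourceNameValidator() { }

  // The top-level method stays flat: each rule is a named helper.
  public static void verifyResourceName(String name) {
    checkLength(name);
    checkCharacters(name);
    checkEdges(name);
  }

  private static void checkLength(String name) {
    if (name == null || name.length() < 3 || name.length() > 63) {
      throw new IllegalArgumentException("Name length must be between 3 and 63");
    }
  }

  private static void checkCharacters(String name) {
    for (char c : name.toCharArray()) {
      if (!(Character.isDigit(c) || Character.isLowerCase(c) || c == '-' || c == '.')) {
        throw new IllegalArgumentException("Illegal character in name: " + c);
      }
    }
  }

  private static void checkEdges(String name) {
    if (name.startsWith(".") || name.startsWith("-")
        || name.endsWith(".") || name.endsWith("-")) {
      throw new IllegalArgumentException("Name cannot start or end with '.' or '-'");
    }
  }
}
{code}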





  was:
Sonar report : 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ





> Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors
> --
>
> Key: HDDS-2530
> URL: https://issues.apache.org/jira/browse/HDDS-2530
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Sonar report : 
> Reduce cognitive complexity from 33 to 15
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWJ=AW5md_APKcVY8lQ4ZsWJ






[jira] [Updated] (HDDS-2520) Sonar: Avoid temporary variable scmSecurityClient

2019-11-17 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2520:

Status: Patch Available  (was: Open)

> Sonar: Avoid temporary variable scmSecurityClient
> -
>
> Key: HDDS-2520
> URL: https://issues.apache.org/jira/browse/HDDS-2520
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWL=AW5md_APKcVY8lQ4ZsWL






[jira] [Updated] (HDDS-2520) Sonar: Avoid temporary variable scmSecurityClient

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2520:
-
Labels: pull-request-available  (was: )

> Sonar: Avoid temporary variable scmSecurityClient
> -
>
> Key: HDDS-2520
> URL: https://issues.apache.org/jira/browse/HDDS-2520
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWL=AW5md_APKcVY8lQ4ZsWL






[jira] [Updated] (HDDS-2530) Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2530:

Summary: Sonar : refactor verifyResourceName in HddsClientUtils to fix 
Sonar errors  (was: Sonar : refactor verifyResourceName in HddsClientUtils to 
reduce Cognitive Complexity )

> Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors
> --
>
> Key: HDDS-2530
> URL: https://issues.apache.org/jira/browse/HDDS-2530
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ






[jira] [Updated] (HDDS-2530) Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive Complexity

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2530:

Description: 
Sonar report : 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ




  was:
Sonar report : 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR



> Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive 
> Complexity 
> --
>
> Key: HDDS-2530
> URL: https://issues.apache.org/jira/browse/HDDS-2530
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWQ=AW5md_APKcVY8lQ4ZsWQ






[jira] [Work logged] (HDDS-2520) Sonar: Avoid temporary variable scmSecurityClient

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2520?focusedWorklogId=345057=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-345057
 ]

ASF GitHub Bot logged work on HDDS-2520:


Author: ASF GitHub Bot
Created on: 18/Nov/19 03:52
Start Date: 18/Nov/19 03:52
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #208: 
HDDS-2520. Sonar: Avoid temporary variable scmSecurityClient
URL: https://github.com/apache/hadoop-ozone/pull/208
 
 
   ## What changes were proposed in this pull request?
   Eliminated a temporary variable which served no purpose.
   Now we return the created instance immediately, without first assigning it 
to a temporary variable.
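
The pattern, sketched on a hypothetical factory method (not the actual Ozone code):

{code:java}
public final class ClientFactory {
  private ClientFactory() { }

  // Before: the result was parked in a local variable and returned on the next line.
  //   SecurityClient scmSecurityClient = new SecurityClient(address);
  //   return scmSecurityClient;

  // After: return the created instance directly.
  public static SecurityClient createSecurityClient(String address) {
    return new SecurityClient(address);
  }

  // Minimal placeholder type so the sketch compiles on its own.
  static final class SecurityClient {
    private final String address;
    SecurityClient(String address) {
      this.address = address;
    }
    String getAddress() {
      return address;
    }
  }
}
{code}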
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2520
   
   ## How was this patch tested?
   No code logic was changed, just verified mvn install.
   
 



Issue Time Tracking
---

Worklog Id: (was: 345057)
Remaining Estimate: 0h
Time Spent: 10m

> Sonar: Avoid temporary variable scmSecurityClient
> -
>
> Key: HDDS-2520
> URL: https://issues.apache.org/jira/browse/HDDS-2520
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWL=AW5md_APKcVY8lQ4ZsWL






[jira] [Updated] (HDDS-2530) Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive Complexity

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2530:

Component/s: Ozone Client

> Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive 
> Complexity 
> --
>
> Key: HDDS-2530
> URL: https://issues.apache.org/jira/browse/HDDS-2530
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR






[jira] [Updated] (HDDS-2530) Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive Complexity

2019-11-17 Thread Supratim Deka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-2530:

Summary: Sonar : refactor verifyResourceName in HddsClientUtils to reduce 
Cognitive Complexity   (was: Sonar : refactor method to reduce Cognitive )

> Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive 
> Complexity 
> --
>
> Key: HDDS-2530
> URL: https://issues.apache.org/jira/browse/HDDS-2530
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Sonar report : 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR






[jira] [Created] (HDDS-2530) Sonar : refactor method to reduce Cognitive

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2530:
---

 Summary: Sonar : refactor method to reduce Cognitive 
 Key: HDDS-2530
 URL: https://issues.apache.org/jira/browse/HDDS-2530
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar report : 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWR=AW5md_APKcVY8lQ4ZsWR







[jira] [Created] (HDDS-2529) Sonar : return interface instead of implementation class in XceiverClientRatis getCommintInfoMap

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2529:
---

 Summary: Sonar : return interface instead of implementation class 
in XceiverClientRatis getCommintInfoMap
 Key: HDDS-2529
 URL: https://issues.apache.org/jira/browse/HDDS-2529
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar report :
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AKKcVY8lQ4ZsWH=AW5md_AKKcVY8lQ4ZsWH







[jira] [Created] (HDDS-2528) Sonar : change return type to interface instead of implementation in CommitWatcher

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2528:
---

 Summary: Sonar : change return type to interface instead of 
implementation in CommitWatcher
 Key: HDDS-2528
 URL: https://issues.apache.org/jira/browse/HDDS-2528
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar report :
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVq=AW5md-_8KcVY8lQ4ZsVq

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVr=AW5md-_8KcVY8lQ4ZsVr
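
A generic sketch of the rule (field and method names are hypothetical, not the actual CommitWatcher code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CommitIndexTracker {
  private final Map<Long, String> commitIndexMap = new ConcurrentHashMap<>();

  // Before: the getter exposed the implementation type.
  //   public ConcurrentHashMap<Long, String> getCommitIndexMap() { ... }

  // After: callers only depend on the Map interface, so the backing
  // implementation can change without touching them.
  public Map<Long, String> getCommitIndexMap() {
    return commitIndexMap;
  }

  public void track(long index, String blockId) {
    commitIndexMap.put(index, blockId);
  }
}
{code}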







[jira] [Created] (HDDS-2527) Sonar : remove redundant temporary assignment in HddsVersionProvider

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2527:
---

 Summary: Sonar : remove redundant temporary assignment in 
HddsVersionProvider
 Key: HDDS-2527
 URL: https://issues.apache.org/jira/browse/HDDS-2527
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar report :
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4AKcVY8lQ4ZsO6=AW5md-4AKcVY8lQ4ZsO6







[jira] [Created] (HDDS-2526) Sonar : use format specifiers in Log inside HddsConfServlet

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2526:
---

 Summary: Sonar : use format specifiers in Log inside 
HddsConfServlet 
 Key: HDDS-2526
 URL: https://issues.apache.org/jira/browse/HDDS-2526
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Supratim Deka
Assignee: Supratim Deka


Sonar report :
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4jKcVY8lQ4ZsPQ=AW5md-4jKcVY8lQ4ZsPQ
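
A small sketch of the logging rule with SLF4J (hypothetical code, not the actual HddsConfServlet):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ConfLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(ConfLoggingExample.class);

  public void logRequest(String name, String format) {
    // Flagged: string concatenation is evaluated even when INFO is disabled.
    //   LOG.info("Conf request for " + name + " in format " + format);

    // Preferred: {} placeholders defer formatting until the level is enabled.
    LOG.info("Conf request for {} in format {}", name, format);
  }
}
{code}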







[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976258#comment-16976258
 ] 

Li Cheng commented on HDDS-2356:


I checked out [https://github.com/apache/hadoop-ozone/pull/163] and compiled 
a jar to deploy onto my cluster. [~bharat]

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone to 
> a path on VM0, then read data from VM0's local disk and write it to the mount 
> path. The dataset contains roughly 50,000 files of various sizes, from 0 bytes 
> to GB-level. 
> The writing is slow (1 GB in ~10 minutes) and it stops after around 4 GB. When I 
> look at the hadoop-root-om-VM_50_210_centos.out log, I see the OM throwing errors 
> related to multipart upload. This error eventually causes the writing to 
> terminate and the OM to shut down. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  ozone-testkey: 20191012/plc_1570863541668_9278
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:732)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.completeMultipartUpload(OzoneManagerProtocolClientSideTranslatorPB
>  .java:1104)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:497)
>  at 
> 

[jira] [Created] (HDDS-2525) Sonar : replace lambda with method reference in SCM BufferPool

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2525:
---

 Summary: Sonar : replace lambda with method reference in SCM 
BufferPool
 Key: HDDS-2525
 URL: https://issues.apache.org/jira/browse/HDDS-2525
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Supratim Deka
Assignee: Supratim Deka


As per Sonar, method references are more compact than lambdas; this applies to 
Java 8 and later, not to older versions.

Sonar report:
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_5KcVY8lQ4ZsVn=AW5md-_5KcVY8lQ4ZsVn







[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976256#comment-16976256
 ] 

Bharat Viswanadham commented on HDDS-2356:
--

[~timmylicheng] And for testing with the PR, did you use the branch to set up a 
new cluster, or did you replace the jars? Could you provide some information on this? 

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone to 
> a path on VM0, then read data from VM0's local disk and write it to the mount 
> path. The dataset contains roughly 50,000 files of various sizes, from 0 bytes 
> to GB-level. 
> The writing is slow (1 GB in ~10 minutes) and it stops after around 4 GB. When I 
> look at the hadoop-root-om-VM_50_210_centos.out log, I see the OM throwing errors 
> related to multipart upload. This error eventually causes the writing to 
> terminate and the OM to shut down. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  ozone-testkey: 20191012/plc_1570863541668_9278
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:732)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.completeMultipartUpload(OzoneManagerProtocolClientSideTranslatorPB
>  .java:1104)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at 

[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976255#comment-16976255
 ] 

Li Cheng commented on HDDS-2356:


[~bharat] Makes sense. I failed to find any info related to
Key:plc_1570869510243_5542 in the OM audit logs. It might have been rotated. I
can try again today to get some fresh logs.

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related with Multipart upload. This error eventually causes the  writing to 
> terminate and OM to be closed. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  ozone-testkey: 20191012/plc_1570863541668_9278
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:732)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.completeMultipartUpload(OzoneManagerProtocolClientSideTranslatorPB
>  .java:1104)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at 

[jira] [Comment Edited] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976253#comment-16976253
 ] 

Bharat Viswanadham edited comment on HDDS-2356 at 11/18/19 3:01 AM:


Before my PR [https://github.com/apache/hadoop-ozone/pull/163] I had also seen
this error. I think goofys has logic where, if Complete Multipart Upload fails,
it aborts and re-uploads. (An upload after an abort fails with
NO_SUCH_MULTIPART_UPLOAD_ERROR; this is expected from the Ozone/S3 perspective.)

With the above PR, I was able to upload 1GB, 2GB, ..., 6GB files. Please have a
look at the comment on PR #163. Also, for testing with the PR, did you use the
branch to set up a new cluster, or did you replace the jars? Could you provide
some information on this.

So, we need to check whether there is any failure with
COMPLETE_MULTIPART_UPLOAD_ERROR for the key. The reason for this is explained
in HDDS-2477. Can you also upload the OM audit log if there is still an
occurrence of COMPLETE_MULTIPART_UPLOAD_ERROR.


was (Author: bharatviswa):
Before my PR  [https://github.com/apache/hadoop-ozone/pull/163] I have too seen 
this error.  I think in goofys there is a logic if complete Multipart upload 
failed, it aborts and uploads. (Upload after abort, fails with 
No_SUCH_MULTIPART_ERROR, this is expected from Ozone/S3 perspective)

 

With the above PR, I was able to upload 1GB,2GB, ... ,6GB files. Please have a 
look in to PR #163 comment. And for testing with PR, have you used the branch 
and set up a new cluster or replaced jars. Could you provide some information 
on this.

 

So, we need to look for is there any failure for 
COMPLETE_MULTIPART_UPLOAD_ERROR for the key. The reason for this cause is 
explained in HDDS-2477. Can you also upload om-audit log, if there is an 
occurrence of COMPLETE_MULTIPART_UPLOAD_ERROR still.

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related with Multipart upload. This error eventually causes the  writing to 
> terminate and OM to be closed. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> 

[jira] [Created] (HDDS-2524) Sonar : clumsy error handling in BlockOutputStream validateResponse

2019-11-17 Thread Supratim Deka (Jira)
Supratim Deka created HDDS-2524:
---

 Summary: Sonar : clumsy error handling in BlockOutputStream 
validateResponse
 Key: HDDS-2524
 URL: https://issues.apache.org/jira/browse/HDDS-2524
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Supratim Deka
Assignee: Supratim Deka


Link to Sonar report : 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVk=AW5md-_2KcVY8lQ4ZsVk




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976253#comment-16976253
 ] 

Bharat Viswanadham commented on HDDS-2356:
--

Before my PR [https://github.com/apache/hadoop-ozone/pull/163] I had also seen
this error. I think goofys has logic where, if Complete Multipart Upload fails,
it aborts and re-uploads. (An upload after an abort fails with
NO_SUCH_MULTIPART_UPLOAD_ERROR; this is expected from the Ozone/S3 perspective.)

With the above PR, I was able to upload 1GB, 2GB, ..., 6GB files. Please have a
look at the comment on PR #163. Also, for testing with the PR, did you use the
branch to set up a new cluster, or did you replace the jars? Could you provide
some information on this.

So, we need to check whether there is any failure with
COMPLETE_MULTIPART_UPLOAD_ERROR for the key. The reason for this is explained
in HDDS-2477. Can you also upload the OM audit log if there is still an
occurrence of COMPLETE_MULTIPART_UPLOAD_ERROR.
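
For illustration, here is a minimal sketch of the abort-then-upload sequence
described above. It assumes the AWS SDK for Java v1 pointed at an S3-compatible
endpoint such as the Ozone S3 gateway; the bucket and key names are made up,
and this is not the goofys code itself.

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.ByteArrayInputStream;

public class AbortThenUploadDemo {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    String bucket = "ozone-test";          // illustrative names only
    String key = "20191012/sample-key";

    String uploadId = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

    // Recovery path attributed to goofys: abort the multipart upload ...
    s3.abortMultipartUpload(
        new AbortMultipartUploadRequest(bucket, key, uploadId));

    // ... after which any uploadPart with the stale uploadId is rejected,
    // surfaced by Ozone as NO_SUCH_MULTIPART_UPLOAD_ERROR on the OM side.
    byte[] part = new byte[5 * 1024 * 1024];
    try {
      s3.uploadPart(new UploadPartRequest()
          .withBucketName(bucket).withKey(key)
          .withUploadId(uploadId).withPartNumber(1)
          .withInputStream(new ByteArrayInputStream(part))
          .withPartSize(part.length));
    } catch (AmazonS3Exception e) {
      System.out.println(e.getErrorCode()); // NoSuchUpload
    }
  }
}
{code}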

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related with Multipart upload. This error eventually causes the  writing to 
> terminate and OM to be closed. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  

[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976237#comment-16976237
 ] 

Li Cheng commented on HDDS-2356:


Tried running with [https://github.com/apache/hadoop-ozone/pull/163]. It did
last longer, which is a good thing. But eventually it still failed due to a
NO_SUCH_MULTIPART_UPLOAD_ERROR. The logs are attached. Interestingly, the first
error happened earlier but did not prevent writing. After a few hours, it
failed, and below is the last error log.

 

2019-11-15 22:13:56,493 ERROR 
org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
 MultipartUpload Commit is failed for Key:plc_1570869510243_5542 in 
Volume/Bucket s325d55ad283aa400af464c76d713c07ad/ozone-test
NO_SUCH_MULTIPART_UPLOAD_ERROR 
org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload is 
with specified uploadId 69162f8b-a923-4247-bb67-b1d6f9fa0d9d-103141824303150377
 at 
org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:159)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:217)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related with Multipart upload. This error eventually causes the  writing to 
> terminate and OM to be closed. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> 

[jira] [Updated] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-17 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-2356:
---
Attachment: 2018-11-15-OM-logs.txt

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related with Multipart upload. This error eventually causes the  writing to 
> terminate and OM to be closed. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  ozone-testkey: 20191012/plc_1570863541668_9278
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:732)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.completeMultipartUpload(OzoneManagerProtocolClientSideTranslatorPB
>  .java:1104)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:497)
>  at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>  at com.sun.proxy.$Proxy82.completeMultipartUpload(Unknown 

[jira] [Updated] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14651:
---
Attachment: HDFS-14651.002.patch

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14651.001.patch, HDFS-14651.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976229#comment-16976229
 ] 

Tsz-wo Sze commented on HDDS-2523:
--

I have checked the code before HDDS-2375.  It has the same problem.

> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have 
> remaining() == 0 so that equals(..) returns true and the precondition does 
> not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDDS-2523:
-
Description: 
{code}
//BufferPool
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have 
remaining() == 0 so that equals(..) returns true and the precondition does not 
catch the bug.


  was:
{code}
//BufferPool
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer and byteBuffer have 
remaining() == 0 so that equals(..) returns true and the precondition does not 
catch the bug.



> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have 
> remaining() == 0 so that equals(..) returns true and the precondition does 
> not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976206#comment-16976206
 ] 

Tsz-wo Sze commented on HDDS-2523:
--

If we change the precondition to check == as below, TestContainerMapper will 
fail.
{code}
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BufferPool.java
@@ -93,7 +93,7 @@ public void releaseBuffer(ByteBuffer byteBuffer) {
 // always remove from head of the list and append at last
 ByteBuffer buffer = bufferList.remove(0);
 // Ensure the buffer to be removed is always at the head of the list.
-Preconditions.checkArgument(buffer.equals(byteBuffer));
+Preconditions.checkArgument(buffer == byteBuffer);
 buffer.clear();
 bufferList.add(buffer);
 Preconditions.checkArgument(currentBufferIndex >= 0);
{code}
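
For illustration, a small self-contained example (not Ozone code) of why an
equals()-based precondition cannot catch this: two distinct, fully consumed
ByteBuffers compare equal, because ByteBuffer.equals() only considers the
remaining elements, while an identity check distinguishes them.

{code}
import java.nio.ByteBuffer;

public class ByteBufferEqualityDemo {
  public static void main(String[] args) {
    ByteBuffer head = ByteBuffer.allocate(16);
    ByteBuffer other = ByteBuffer.allocate(32);

    // Advance both positions to the limit so that remaining() == 0 for each.
    head.position(head.limit());
    other.position(other.limit());

    System.out.println(head.equals(other)); // true  -> precondition passes
    System.out.println(head == other);      // false -> identity check catches it
  }
}
{code}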


> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, the both buffer and byteBuffer have 
> remaining() == 0 so that equals(..) returns true and the precondition does 
> not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDDS-2523:
-
Description: 
{code}
//BufferPool
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 0 
so that equals(..) returns true and the precondition does not catch the bug.


  was:
{code}
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 0 
so that equals(..) returns true and the precondition does not catch the bug.



> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 
> 0 so that equals(..) returns true and the precondition does not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDDS-2523:
-
Description: 
{code}
//BufferPool
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer and byteBuffer have 
remaining() == 0 so that equals(..) returns true and the precondition does not 
catch the bug.


  was:
{code}
//BufferPool
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 0 
so that equals(..) returns true and the precondition does not catch the bug.



> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, the both buffer and byteBuffer have 
> remaining() == 0 so that equals(..) returns true and the precondition does 
> not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDDS-2523:
-
Description: 
{code}
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 0 
so that equals(..) returns true and the precondition does not catch the bug.


  was:
{code}
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately
, the both buffer have remaining() == 0 so that equals(..) returns true and the 
precondition does not catch the bug.



> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Tsz-wo Sze
>Priority: Major
>
> {code}
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer are the same object, i.e. 
>  buffer == byteBuffer.  However the precondition is checking 
> buffer.equals(byteBuffer). Unfortunately, the both buffer have remaining() == 
> 0 so that equals(..) returns true and the precondition does not catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-17 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HDDS-2523:


 Summary: BufferPool.releaseBuffer may release a buffer different 
than the head of the list
 Key: HDDS-2523
 URL: https://issues.apache.org/jira/browse/HDDS-2523
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Tsz-wo Sze


{code}
  public void releaseBuffer(ByteBuffer byteBuffer) {
// always remove from head of the list and append at last
ByteBuffer buffer = bufferList.remove(0);
// Ensure the buffer to be removed is always at the head of the list.
Preconditions.checkArgument(buffer.equals(byteBuffer));
buffer.clear();
bufferList.add(buffer);
Preconditions.checkArgument(currentBufferIndex >= 0);
currentBufferIndex--;
  }
{code}
In the code above, it expects buffer and byteBuffer are the same object, i.e.  
buffer == byteBuffer.  However the precondition is checking 
buffer.equals(byteBuffer). Unfortunately
, the both buffer have remaining() == 0 so that equals(..) returns true and the 
precondition does not catch the bug.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2519) Sonar: Double Brace Initialization should not be used

2019-11-17 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2519.
-
Resolution: Duplicate

> Sonar: Double Brace Initialization should not be used
> -
>
> Key: HDDS-2519
> URL: https://issues.apache.org/jira/browse/HDDS-2519
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=AW5md_APKcVY8lQ4ZsWN
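
For reference, a minimal sketch of the pattern this rule flags and one common
replacement; the names are illustrative and not taken from the Ozone code.

{code}
import java.util.HashMap;
import java.util.Map;

public class DoubleBraceDemo {
  // Double-brace initialization: the {{ ... }} creates an anonymous subclass
  // of HashMap with an instance initializer, which is what Sonar flags.
  static final Map<String, String> FLAGGED = new HashMap<String, String>() {{
    put("ozone.example.key", "value");
  }};

  // Preferred: plain initialization in a small factory method (or a builder).
  static final Map<String, String> PREFERRED = createMap();

  private static Map<String, String> createMap() {
    Map<String, String> m = new HashMap<>();
    m.put("ozone.example.key", "value");
    return m;
  }
}
{code}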



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2519) Sonar: Double Brace Initialization should not be used

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2519?focusedWorklogId=345027=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-345027
 ]

ASF GitHub Bot logged work on HDDS-2519:


Author: ASF GitHub Bot
Created on: 17/Nov/19 22:56
Start Date: 17/Nov/19 22:56
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #201: 
HDDS-2519. Sonar: Double Brace Initialization should not be used
URL: https://github.com/apache/hadoop-ozone/pull/201
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 345027)
Time Spent: 20m  (was: 10m)

> Sonar: Double Brace Initialization should not be used
> -
>
> Key: HDDS-2519
> URL: https://issues.apache.org/jira/browse/HDDS-2519
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=AW5md_APKcVY8lQ4ZsWN



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2489) Sonar: Anonymous class based initialization in HddsClientUtils

2019-11-17 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2489.
-
Target Version/s: 0.5.0
  Resolution: Fixed

Thanks [~swagle] for reporting and fixing the issue, and thanks [~adoroszlai]
for the reviews.

> Sonar: Anonymous class based initialization in HddsClientUtils
> --
>
> Key: HDDS-2489
> URL: https://issues.apache.org/jira/browse/HDDS-2489
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=false=BUG



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2489) Sonar: Anonymous class based initialization in HddsClientUtils

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2489?focusedWorklogId=345026=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-345026
 ]

ASF GitHub Bot logged work on HDDS-2489:


Author: ASF GitHub Bot
Created on: 17/Nov/19 22:53
Start Date: 17/Nov/19 22:53
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #172: 
HDDS-2489. Change anonymous class based initialization in HddsUtils.
URL: https://github.com/apache/hadoop-ozone/pull/172
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 345026)
Time Spent: 20m  (was: 10m)

> Sonar: Anonymous class based initialization in HddsClientUtils
> --
>
> Key: HDDS-2489
> URL: https://issues.apache.org/jira/browse/HDDS-2489
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=false=BUG



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-11-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976187#comment-16976187
 ] 

Hadoop QA commented on HDFS-14519:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14519 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986053/HDFS-14519.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3ab5ff326e13 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c4520 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28320/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28320/testReport/ |
| Max. process+thread count | 2839 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Commented] (HDFS-14519) NameQuota is not update after concat operation, so namequota is wrong

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976136#comment-16976136
 ] 

Ayush Saxena commented on HDFS-14519:
-

Thanx [~RANith] for the contribution. I fixed the test in your patch on your 
behalf, just tweaked a line. Will push this once I get someone else to put 
their eyes on it as well.

[~weichiu] can you check once?

> NameQuota is not updated after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch, 
> HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14519) NameQuota is not updated after concat operation, so namequota is wrong

2019-11-17 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14519:

Attachment: HDFS-14519.003.patch

> NameQuota is not updated after concat operation, so namequota is wrong
> -
>
> Key: HDFS-14519
> URL: https://issues.apache.org/jira/browse/HDFS-14519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14519.001.patch, HDFS-14519.002.patch, 
> HDFS-14519.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2522:
---
Status: Patch Available  (was: In Progress)

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976130#comment-16976130
 ] 

Ayush Saxena commented on HDFS-14983:
-

Thanx [~aajisaka] for the report. Indeed, that is something missing; though the 
cost of restarting the router is low, this is still good to have.

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---
>
> Key: HDFS-14983
> URL: https://issues.apache.org/jira/browse/HDFS-14983
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>
> NameNode can update proxyuser config by -refreshSuperUserGroupsConfiguration 
> without restarting but DFSRouter cannot. It would be better for DFSRouter to 
> have such functionality to be compatible with NameNode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14967) TestWebHDFS fails in Windows

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976128#comment-16976128
 ] 

Ayush Saxena commented on HDFS-14967:
-

Thanx [~prasad-acit] for the fix.

v002 LGTM.

> TestWebHDFS  fails in Windows 
> --
>
> Key: HDFS-14967
> URL: https://issues.apache.org/jira/browse/HDFS-14967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-14967.001.patch, HDFS-14967.002.patch
>
>
> In the TestWebHDFS test class, a few test cases are not closing the 
> MiniDFSCluster, which results in the remaining test failures on Windows. Once 
> the cluster is left open, all consecutive test cases fail to get the lock on 
> the data dir, which results in test case failures.
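For illustration, a minimal sketch of the usual clean-up pattern (hypothetical 
test body, not the actual patch):

{code:java}
// Hypothetical sketch: always shut the cluster down so later tests can
// re-acquire the data-dir lock on Windows.
MiniDFSCluster cluster = null;
try {
  cluster = new MiniDFSCluster.Builder(conf).build();
  // WebHDFS assertions go here
} finally {
  if (cluster != null) {
    cluster.shutdown();
  }
}
{code}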



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14860) Clean Up StoragePolicySatisfyManager.java

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976125#comment-16976125
 ] 

Ayush Saxena commented on HDFS-14860:
-

[~belugabehr] can you give a check here once more?

> Clean Up StoragePolicySatisfyManager.java
> -
>
> Key: HDFS-14860
> URL: https://issues.apache.org/jira/browse/HDFS-14860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14860.1.patch, HDFS-14860.2.patch, 
> HDFS-14860.3.patch
>
>
> * Remove superfluous debug log guards
> * Use {{java.util.concurrent}} package for internal structure instead of 
> external synchronization.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14528) Failover from Active to Standby Failed

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976124#comment-16976124
 ] 

Ayush Saxena commented on HDFS-14528:
-

The introduced test fails with an NPE, please give it a check once. I don't 
think it is related to the fix; it must be some miss in the test.
 Secondly, do we need the static block? Can't we do the same in the {{@Before}} 
part?
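For illustration, a minimal sketch of what moving the setup into the JUnit 
lifecycle could look like (field and method names are hypothetical, not from 
the patch):

{code:java}
// Hypothetical sketch: setup moved from a static initializer into @Before.
private Configuration conf;

@Before
public void setUp() {
  conf = new Configuration();
  // whatever the static block prepared would go here
}
{code}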

> Failover from Active to Standby Failed  
> 
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
>  Labels: multi-sbnn
> Attachments: HDFS-14528.003.patch, HDFS-14528.004.patch, 
> HDFS-14528.005.patch, HDFS-14528.006.patch, HDFS-14528.2.Patch, 
> ZKFC_issue.patch
>
>
>  *In a cluster with more than one Standby namenode, manual failover throws an 
> exception in some cases*
> *When trying to execute the failover command from active to standby* 
> *._/hdfs haadmin  -failover nn1 nn2, the below Exception is thrown_*
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused
> This is encountered in the following cases :
>  Scenario 1 : 
> Namenodes - NN1(Active) , NN2(Standby), NN3(Standby)
> When trying to manually failover from NN1 to NN2 if NN3 is down, Exception is 
> thrown
> Scenario 2 :
>  Namenodes - NN1(Active) , NN2(Standby), NN3(Standby)
> ZKFC's -              ZKFC1,            ZKFC2,            ZKFC3
> When trying to manually failover from NN1 to NN3 if NN3's ZKFC (ZKFC3) is 
> down, Exception is thrown



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13811) RBF: Race condition between router admin quota update and periodic quota update service

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976121#comment-16976121
 ] 

Ayush Saxena commented on HDFS-13811:
-

[~linyiqun] you have been following this from the start. The idea seems fair 
enough; do you have any concerns with the approach in v03 by [~LiJinglun]?

> RBF: Race condition between router admin quota update and periodic quota 
> update service
> ---
>
> Key: HDFS-13811
> URL: https://issues.apache.org/jira/browse/HDFS-13811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-13811-000.patch, HDFS-13811-HDFS-13891-000.patch, 
> HDFS-13811.001.patch, HDFS-13811.002.patch, HDFS-13811.003.patch
>
>
> If we try to update quota of an existing mount entry and at the same time 
> periodic quota update service is running on the same mount entry, it is 
> leading the mount table to _inconsistent state._
> Here transactions are:
> A - Quota update service is fetching mount table entries.
> B - Quota update service is updating the mount table with current usage.
> A' - User is trying to update quota using admin cmd.
> and the transaction sequence is [ A A' B ]
> quota update service is updating the mount table with old quota value.
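For illustration only, one common way to serialize the two writers described 
above; this is a hedged sketch with hypothetical names, not the approach taken 
in any attached patch:

{code:java}
// Sketch: guard both read-modify-write paths with one lock so the admin
// update and the periodic usage update cannot interleave as in [ A A' B ].
private final Object quotaLock = new Object();

void setQuotaFromAdmin(String mountPoint, long nsQuota, long ssQuota) {
  synchronized (quotaLock) {
    // read the latest mount table entry, apply the new quota, write back
  }
}

void refreshQuotaUsage(String mountPoint) {
  synchronized (quotaLock) {
    // read the latest entry, update only the usage fields, write back
  }
}
{code}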



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14967) TestWebHDFS fails in Windows

2019-11-17 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14967:

Summary: TestWebHDFS  fails in Windows   (was: TestWebHDFS - Many test 
cases are failing in Windows )

> TestWebHDFS  fails in Windows 
> --
>
> Key: HDFS-14967
> URL: https://issues.apache.org/jira/browse/HDFS-14967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-14967.001.patch, HDFS-14967.002.patch
>
>
> In the TestWebHDFS test class, a few test cases are not closing the 
> MiniDFSCluster, which results in the remaining test failures on Windows. Once 
> the cluster is left open, all consecutive test cases fail to get the lock on 
> the data dir, which results in test case failures.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976119#comment-16976119
 ] 

Hadoop QA commented on HDFS-14651:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
30 unchanged - 0 fixed = 36 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocol.DatanodeInfo is incompatible with 
expected argument type String in 
org.apache.hadoop.hdfs.DeadNodeDetector.probeCallBack(DeadNodeDetector$Probe, 
boolean)  At DeadNodeDetector.java:argument type String in 
org.apache.hadoop.hdfs.DeadNodeDetector.probeCallBack(DeadNodeDetector$Probe, 
boolean)  At DeadNodeDetector.java:[line 312] |
|  |  org.apache.hadoop.hdfs.protocol.DatanodeInfo is incompatible with 
expected argument type String in 

[jira] [Commented] (HDFS-14955) RBF: getQuotaUsage() on mount point should return global quota.

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976118#comment-16976118
 ] 

Ayush Saxena commented on HDFS-14955:
-

Thanx [~LiJinglun] for the patch.

v002 LGTM. Minor stuff for the test :

{code:java}
+// clear normal path.
+routerFs.delete(new Path("/dir-1/dir-normal"), true);
{code}

This clearing should be in a finally block, whose try block should start just 
after:

{code:java}
routerFs.mkdirs(new Path("/dir-1/dir-normal"));
{code}

Otherwise, if the test fails after this line, the directory won't be cleaned up.

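A minimal sketch of the suggested structure, for illustration only (the 
assertion body is elided; this is not the actual test code):

{code:java}
// Sketch: clean-up in finally so the directory is removed even if an
// assertion between mkdirs() and delete() fails.
routerFs.mkdirs(new Path("/dir-1/dir-normal"));
try {
  // assertions on getQuotaUsage() for the normal path go here
} finally {
  // clear normal path.
  routerFs.delete(new Path("/dir-1/dir-normal"), true);
}
{code}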

> RBF: getQuotaUsage() on mount point should return global quota.
> ---
>
> Key: HDFS-14955
> URL: https://issues.apache.org/jira/browse/HDFS-14955
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14955.001.patch, HDFS-14955.002.patch
>
>
> When getQuotaUsage() on a mount point path, the quota part should be the 
> global quota. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14283) DFSInputStream to prefer cached replica

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976114#comment-16976114
 ] 

Ayush Saxena commented on HDFS-14283:
-

Thanx [~leosun08] for the patch.

Can you address [~smeng]'s comments?

Apart from that:
 
{code:java}
2902If the cached replica of the datanode is preferred, set this value 
is true.
2903Otherwise the replica of the closest datanode is preffered, set 
this value is false.
{code}

This sounds a little grammatically incorrect; can you have a check once?

[~weichiu] This JIRA got initiated from you. It would be good if you could also 
give it a check once, to ensure we don't miss anything expected!
 

> DFSInputStream to prefer cached replica
> ---
>
> Key: HDFS-14283
> URL: https://issues.apache.org/jira/browse/HDFS-14283
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
> Environment: HDFS Caching
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14283.001.patch, HDFS-14283.002.patch, 
> HDFS-14283.003.patch, HDFS-14283.004.patch, HDFS-14283.005.patch
>
>
> HDFS Caching offers performance benefits. However, currently NameNode does 
> not treat cached replica with higher priority, so HDFS caching is only useful 
> when cache replication = 3, that is to say, all replicas are cached in 
> memory, so that a client doesn't randomly pick an uncached replica.
> HDFS-6846 proposed to let NameNode give higher priority to cached replica. 
> Changing a logic in NameNode is always tricky so that didn't get much 
> traction. Here I propose a different approach: let client (DFSInputStream) 
> prefer cached replica.
> A {{LocatedBlock}} object already contains cached replica location so a 
> client has the needed information. I think we can change 
> {{DFSInputStream#getBestNodeDNAddrPair()}} for this purpose.
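For illustration, a rough sketch of the client-side preference described above; 
it assumes {{LocatedBlock#getCachedLocations()}} as the source of cached 
replica locations, and the method and variable names are hypothetical:

{code:java}
// Rough sketch: prefer a replica cached in memory, otherwise keep the
// usual closest-replica choice.
DatanodeInfo choose(LocatedBlock block, List<DatanodeInfo> byDistance) {
  DatanodeInfo[] cached = block.getCachedLocations();
  if (cached != null && cached.length > 0) {
    return cached[0];          // cached replica wins
  }
  return byDistance.get(0);    // closest replica otherwise
}
{code}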



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14974) RBF: Make tests use free ports

2019-11-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16976097#comment-16976097
 ] 

Ayush Saxena commented on HDFS-14974:
-

Seems fair and safe enough to me.

If no objections, will push this by EOD today.

> RBF: Make tests use free ports
> --
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} create a 
> Router with the default ports. However, these ports might be used. We should 
> set it to :0 for it to be assigned dynamically.
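For illustration, a minimal sketch of the dynamic-port idea; the configuration 
keys shown are assumptions for the example and may not be the exact ones 
touched by the patch:

{code:java}
// Sketch: bind to port 0 so the OS assigns a free port for the test Router.
Configuration conf = new Configuration();
conf.set("dfs.federation.router.rpc-address", "0.0.0.0:0");
conf.set("dfs.federation.router.http-address", "0.0.0.0:0");
{code}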



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?focusedWorklogId=344963=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-344963
 ]

ASF GitHub Bot logged work on HDDS-2522:


Author: ASF GitHub Bot
Created on: 17/Nov/19 16:05
Start Date: 17/Nov/19 16:05
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #207: HDDS-2522. 
Fix TestSecureOzoneCluster
URL: https://github.com/apache/hadoop-ozone/pull/207
 
 
   ## What changes were proposed in this pull request?
   
   Fix `TestSecureOzoneCluster`, failing because it used the wrong principal 
for SCM.
   
   Plus code cleanup in an additional commit.
   
   https://issues.apache.org/jira/browse/HDDS-2522
   
   ## How was this patch tested?
   
   `TestSecureOzoneCluster` now passes.  No other code changed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 344963)
Remaining Estimate: 0h
Time Spent: 10m

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2522:
-
Labels: pull-request-available  (was: )

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14651:
---
Attachment: HDFS-14651.001.patch
Status: Patch Available  (was: Open)

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14651.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-17 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14651:
---
Summary: DeadNodeDetector checks dead node periodically  (was: 
DeadNodeDetector periodically detects Dead Node)

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-17 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2522:
--

 Summary: Fix TestSecureOzoneCluster
 Key: HDDS-2522
 URL: https://issues.apache.org/jira/browse/HDDS-2522
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


TestSecureOzoneCluster is failing with {{failure to login}}.

{code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
---
Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
---
Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
elapsed: 2.474 s  <<< ERROR!
org.apache.hadoop.security.KerberosAuthException: 
failure to login: for principal: scm/pr-hdds-2291-5997d-4279494...@example.com 
from keytab 
/workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
 javax.security.auth.login.LoginException: Unable to obtain password from user

at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212)
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
at 
org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
at 
org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2522 started by Attila Doroszlai.
--
> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2521) Multipart upload failing with NPE

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2521:
---
Status: Patch Available  (was: In Progress)

> Multipart upload failing with NPE
> -
>
> Key: HDDS-2521
> URL: https://issues.apache.org/jira/browse/HDDS-2521
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> S3 multipart upload is 
> [failing|https://elek.github.io/ozone-ci-03/pr/pr-hdds-2501-b5dhd/acceptance/summary.html#s1-s11-s5]
>  with 
> [NPE|https://github.com/elek/ozone-ci-03/blob/ddbaf4dd92ee5f855fea3e84c59b702fb2dda663/pr/pr-hdds-2501-b5dhd/acceptance/docker-ozones3-ozones3-s3-scm.log#L740-L747].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2274) Avoid buffer copying in Codec

2019-11-17 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975978#comment-16975978
 ] 

Attila Doroszlai commented on HDDS-2274:


Thanks [~szetszwo] for confirmation.  Let me reassign this back to you for 
planning.

> Avoid buffer copying in Codec
> -
>
> Key: HDDS-2274
> URL: https://issues.apache.org/jira/browse/HDDS-2274
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>
> Codec declares byte[] as a parameter in fromPersistedFormat(..) and a return 
> type in toPersistedFormat(..).  It leads to buffer copying when using it with 
> ByteString.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2274) Avoid buffer copying in Codec

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2274:
--

Assignee: Tsz-wo Sze  (was: Attila Doroszlai)

> Avoid buffer copying in Codec
> -
>
> Key: HDDS-2274
> URL: https://issues.apache.org/jira/browse/HDDS-2274
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>
> Codec declares byte[] as a parameter in fromPersistedFormat(..) and a return 
> type in toPersistedFormat(..).  It leads to buffer copying when using it with 
> ByteString.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2521) Multipart upload failing with NPE

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2521:
-
Labels: pull-request-available  (was: )

> Multipart upload failing with NPE
> -
>
> Key: HDDS-2521
> URL: https://issues.apache.org/jira/browse/HDDS-2521
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>
> S3 multipart upload is 
> [failing|https://elek.github.io/ozone-ci-03/pr/pr-hdds-2501-b5dhd/acceptance/summary.html#s1-s11-s5]
>  with 
> [NPE|https://github.com/elek/ozone-ci-03/blob/ddbaf4dd92ee5f855fea3e84c59b702fb2dda663/pr/pr-hdds-2501-b5dhd/acceptance/docker-ozones3-ozones3-s3-scm.log#L740-L747].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2521) Multipart upload failing with NPE

2019-11-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2521?focusedWorklogId=344915=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-344915
 ]

ASF GitHub Bot logged work on HDDS-2521:


Author: ASF GitHub Bot
Created on: 17/Nov/19 10:00
Start Date: 17/Nov/19 10:00
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #206: HDDS-2521. 
Multipart upload failing with NPE
URL: https://github.com/apache/hadoop-ozone/pull/206
 
 
   ## What changes were proposed in this pull request?
   
* Fixed NPE in `ObjectEndpoint`: `OzoneOutputStream` needs to be closed (to 
get the key committed) to make upload part info available; see the sketch 
after this list.
* Changed `OzoneOutputStreamStub` to simulate this "no part info before 
commit" properly.  This makes the unit test fail with the previous code.
   
   https://issues.apache.org/jira/browse/HDDS-2521
   
   ## How was this patch tested?
   
   Ran acceptance test `ozones3` and S3 Gateway unit tests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 344915)
Remaining Estimate: 0h
Time Spent: 10m

> Multipart upload failing with NPE
> -
>
> Key: HDDS-2521
> URL: https://issues.apache.org/jira/browse/HDDS-2521
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> S3 multipart upload is 
> [failing|https://elek.github.io/ozone-ci-03/pr/pr-hdds-2501-b5dhd/acceptance/summary.html#s1-s11-s5]
>  with 
> [NPE|https://github.com/elek/ozone-ci-03/blob/ddbaf4dd92ee5f855fea3e84c59b702fb2dda663/pr/pr-hdds-2501-b5dhd/acceptance/docker-ozones3-ozones3-s3-scm.log#L740-L747].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2521) Multipart upload failing with NPE

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2521 started by Attila Doroszlai.
--
> Multipart upload failing with NPE
> -
>
> Key: HDDS-2521
> URL: https://issues.apache.org/jira/browse/HDDS-2521
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>
> S3 multipart upload is 
> [failing|https://elek.github.io/ozone-ci-03/pr/pr-hdds-2501-b5dhd/acceptance/summary.html#s1-s11-s5]
>  with 
> [NPE|https://github.com/elek/ozone-ci-03/blob/ddbaf4dd92ee5f855fea3e84c59b702fb2dda663/pr/pr-hdds-2501-b5dhd/acceptance/docker-ozones3-ozones3-s3-scm.log#L740-L747].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2521) Multipart upload failing with NPE

2019-11-17 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2521:
--

 Summary: Multipart upload failing with NPE
 Key: HDDS-2521
 URL: https://issues.apache.org/jira/browse/HDDS-2521
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Affects Versions: 0.5.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


S3 multipart upload is 
[failing|https://elek.github.io/ozone-ci-03/pr/pr-hdds-2501-b5dhd/acceptance/summary.html#s1-s11-s5]
 with 
[NPE|https://github.com/elek/ozone-ci-03/blob/ddbaf4dd92ee5f855fea3e84c59b702fb2dda663/pr/pr-hdds-2501-b5dhd/acceptance/docker-ozones3-ozones3-s3-scm.log#L740-L747].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2485) Disable XML external entity processing

2019-11-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2485:
---
Status: Patch Available  (was: In Progress)

> Disable XML external entity processing
> --
>
> Key: HDDS-2485
> URL: https://issues.apache.org/jira/browse/HDDS-2485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Security
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Disable XML external entity processing in
> * NodeSchemaLoader: 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2nKcVY8lQ4ZsNm=AW5md-2nKcVY8lQ4ZsNm
> * ConfigFileAppender:
> ** 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_uKcVY8lQ4ZsVY=AW5md-_uKcVY8lQ4ZsVY
> ** 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_uKcVY8lQ4ZsVZ=AW5md-_uKcVY8lQ4ZsVZ
> * MultiDeleteRequestUnmarshaller: 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-kDKcVY8lQ4Zr-N=AW5md-kDKcVY8lQ4Zr-N
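For illustration, the usual hardening applied for this class of issue; a hedged 
sketch of standard {{DocumentBuilderFactory}} settings, not necessarily the 
exact change made in each of the classes above:

{code:java}
// Sketch: disable DOCTYPEs and external entities before parsing XML.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
dbf.setXIncludeAware(false);
dbf.setExpandEntityReferences(false);
// setFeature throws ParserConfigurationException, handled by the caller.
{code}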



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org