[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-19 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933118#comment-16933118
 ] 

Elek, Marton commented on HDDS-2149:


The non-Jenkins CI scripts use 
./hadoop-ozone/dev-support/checks/findbugs.sh.

As long as that shell script can still be run, it will work...

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> FindBugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and, if feasible, transition Ozone to SpotBugs.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]
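
On the Maven side this kind of migration typically amounts to replacing findbugs-maven-plugin with spotbugs-maven-plugin while keeping the existing exclude filter; a minimal sketch, with the version and exclude-file path being illustrative assumptions rather than values taken from this issue:

```xml
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
  <version>3.1.12</version>
  <configuration>
    <!-- SpotBugs can reuse an existing FindBugs exclude filter file -->
    <excludeFilterFile>${basedir}/dev-support/findbugsExcludeFile.xml</excludeFilterFile>
  </configuration>
</plugin>
```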



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corrupted when they are reconstructed.

2019-09-19 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933115#comment-16933115
 ] 

Surendra Singh Lilhore commented on HDFS-14768:
---

Thanks [~gjhkael] for pinging me. I am on leave for a week; I will review this 
next week.

> In some cases, erasure blocks are corrupted when they are reconstructed.
> 
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K; the version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it exceeds 
> replicationStreamsHardLimit (we set 14). Then, after 
> BlockManager#chooseSourceDatanodes, liveBlockIndices is 
> [0,1,2,3,4,5,7,8] and the block counters are Live: 7, Decommission: 2.
> In BlockManager#scheduleReconstruction, additionalReplRequired 
> is 9 - 7 = 2. After the namenode chooses two target datanodes, it assigns an 
> erasure-coding reconstruction task to them.
> When a datanode receives the task, it builds targetIndices from 
> liveBlockIndices and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
>
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, while targetIndices[1] always stays 0, its initial 
> value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using indices [0,1,2,3,4,5] to reconstruct target indices [6,0] triggers the 
> ISA-L bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably.
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
>   final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>   decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
>   assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());
>   
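
The failure mode above (targetIndices[1] staying 0) comes straight from Java's default array initialization; a minimal standalone illustration, not Hadoop code:

```java
public class TargetIndicesDemo {
  public static void main(String[] args) {
    // Java zero-initializes numeric arrays, so any slot the reconstruction
    // loop fails to fill silently keeps the value 0 -- which is also a
    // perfectly valid block index.
    short[] targetIndices = new short[2]; // two reconstruction targets expected
    targetIndices[0] = 6;                 // only one target was actually found
    // targetIndices[1] was never assigned; it still reads as 0, so the
    // reconstruction would also treat block index 0 as a target.
    System.out.println(targetIndices[0] + "," + targetIndices[1]); // prints 6,0
  }
}
```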

[jira] [Work logged] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?focusedWorklogId=314866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314866
 ]

ASF GitHub Bot logged work on HDDS-2101:


Author: ASF GitHub Bot
Created on: 19/Sep/19 07:22
Start Date: 19/Sep/19 07:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1473: HDDS-2101. 
Ozone filesystem provider doesn't exist
URL: https://github.com/apache/hadoop/pull/1473#issuecomment-533002895
 
 
   /label ozone
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314866)
Time Spent: 20m  (was: 10m)

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
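
For context on the fix: Hadoop discovers FileSystem implementations through the JDK ServiceLoader mechanism, so the missing provider file just needs to list the implementation class names, one per line. A sketch of plausible contents, assuming the OzoneFileSystem class name mentioned later in this thread (the exact list in the merged patch may differ):

```
org.apache.hadoop.fs.ozone.OzoneFileSystem
```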






[jira] [Updated] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2101:
-
Labels: pull-request-available  (was: )

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}






[jira] [Work logged] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?focusedWorklogId=314867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314867
 ]

ASF GitHub Bot logged work on HDDS-2101:


Author: ASF GitHub Bot
Created on: 19/Sep/19 07:22
Start Date: 19/Sep/19 07:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1473: HDDS-2101. 
Ozone filesystem provider doesn't exist
URL: https://github.com/apache/hadoop/pull/1473#issuecomment-533002998
 
 
   @elek @anuengineer Please review
 



Issue Time Tracking
---

Worklog Id: (was: 314867)
Time Spent: 0.5h  (was: 20m)

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}






[jira] [Work logged] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?focusedWorklogId=314865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314865
 ]

ASF GitHub Bot logged work on HDDS-2101:


Author: ASF GitHub Bot
Created on: 19/Sep/19 07:22
Start Date: 19/Sep/19 07:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1473: 
HDDS-2101. Ozone filesystem provider doesn't exist
URL: https://github.com/apache/hadoop/pull/1473
 
 
   Ozone did not have a filesystem provider in META-INF and this PR adds a 
filesystem provider for both ozonefs-lib-legacy and ozonefs-lib-current.
   
   Testing done: 
   I ran the MapReduce robot tests for Hadoop27 and Hadoop32 after removing 
`fs.o3fs.impl=org.apache.hadoop.fs.ozone.OzoneFileSystem` and 
`fs.o3fs.impl=org.apache.hadoop.fs.ozone.BasicOzoneFileSystem` from the docker 
configs, and verified that the tests pass.
   
 



Issue Time Tracking
---

Worklog Id: (was: 314865)
Remaining Estimate: 0h
Time Spent: 10m

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}






[jira] [Updated] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2101:
-
Status: Patch Available  (was: In Progress)

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We don't have a filesystem provider in META-INF, i.e. the following file 
> doesn't exist:
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See, for example:
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}






[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=314916&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314916
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 19/Sep/19 09:15
Start Date: 19/Sep/19 09:15
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-533041700
 
 
   Currently all the checkstyle checks are reporting false negatives. I 
will commit this with the suggested change (moving the checkstyle XML files to 
dev-support) if there are no objections...
   
   We can improve it (or switch back to a dedicated project) in follow-up 
jiras, but we need to fix the checks ASAP.
 



Issue Time Tracking
---

Worklog Id: (was: 314916)
Time Spent: 3h  (was: 2h 50m)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml files in the 
> hdds/ozone projects for checkstyle validation.
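
The change under discussion can be sketched as a maven-checkstyle-plugin configuration that points at project-local rule files instead of the hadoop parent pom's; the paths below are illustrative assumptions (the review suggests moving the XML files under dev-support):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- project-local rules instead of the ones inherited from hadoop -->
    <configLocation>${basedir}/dev-support/checkstyle/checkstyle.xml</configLocation>
    <suppressionsLocation>${basedir}/dev-support/checkstyle/suppressions.xml</suppressionsLocation>
  </configuration>
</plugin>
```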






[jira] [Commented] (HDFS-14832) RBF : Add Icon for ReadOnly False

2019-09-19 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933104#comment-16933104
 ] 

Takanobu Asanuma commented on HDFS-14832:
-

Thanks for sharing your thoughts, [~hemanthboyina] and [~elgoiri].
All right, let's use icons with _federationhealth-mounttable-legend._
{quote}I think glyphicon edit matches for read-write scenario.
{quote}
That looks good to me too.

> RBF : Add Icon for ReadOnly False
> -
>
> Key: HDFS-14832
> URL: https://issues.apache.org/jira/browse/HDFS-14832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: Screenshot from 2019-09-18 23-55-17.png
>
>
> In the Router Web UI's mount table information, add an icon for the 
> read-only state "false".






[jira] [Assigned] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines

2019-09-19 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng reassigned HDDS-1574:
--

Assignee: Li Cheng  (was: Siddharth Wagle)

> Ensure same datanodes are not a part of multiple pipelines
> --
>
> Key: HDDS-1574
> URL: https://issues.apache.org/jira/browse/HDDS-1574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> Details in design doc.






[jira] [Assigned] (HDDS-1570) Refactor heartbeat reports to report all the pipelines that are open

2019-09-19 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng reassigned HDDS-1570:
--

Assignee: Li Cheng  (was: Siddharth Wagle)

> Refactor heartbeat reports to report all the pipelines that are open
> 
>
> Key: HDDS-1570
> URL: https://issues.apache.org/jira/browse/HDDS-1570
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>
> Presently the pipeline report only reports a single pipeline id.






[jira] [Work logged] (HDDS-2016) Add option to enforce gdpr in Bucket Create command

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2016?focusedWorklogId=314927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314927
 ]

ASF GitHub Bot logged work on HDDS-2016:


Author: ASF GitHub Bot
Created on: 19/Sep/19 09:52
Start Date: 19/Sep/19 09:52
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1458: HDDS-2016. Add 
option to enforce gdpr in Bucket Create command.
URL: https://github.com/apache/hadoop/pull/1458
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314927)
Time Spent: 40m  (was: 0.5h)

> Add option to enforce gdpr in Bucket Create command
> ---
>
> Key: HDDS-2016
> URL: https://issues.apache.org/jira/browse/HDDS-2016
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> e2e flow where user can enforce GDPR for a bucket during creation only.
> Add/update audit logs as this will be a useful action for compliance purpose.
> Add docs to show usage.
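
A usage sketch of the end-to-end flow described above; treat the flag name `--enforcegdpr` and the info output as assumptions based on this feature description, not verbatim CLI documentation:

```
# Create a bucket with GDPR enforcement turned on at creation time
ozone sh bucket create --enforcegdpr=true /vol1/gdpr-bucket

# Bucket info should then report GDPR as enabled
ozone sh bucket info /vol1/gdpr-bucket
```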






[jira] [Updated] (HDDS-2016) Add option to enforce gdpr in Bucket Create command

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2016:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add option to enforce gdpr in Bucket Create command
> ---
>
> Key: HDDS-2016
> URL: https://issues.apache.org/jira/browse/HDDS-2016
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> e2e flow where user can enforce GDPR for a bucket during creation only.
> Add/update audit logs as this will be a useful action for compliance purpose.
> Add docs to show usage.






[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Attachment: scheduleReconstruction.png

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode are replicated infinitely.






[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=314870&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314870
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 07:31
Start Date: 19/Sep/19 07:31
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1464: HDDS-730. Ozone 
fs cli prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1464
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314870)
Time Spent: 1h  (was: 50m)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!






[jira] [Updated] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-730:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!






[jira] [Work logged] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-730?focusedWorklogId=314872&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314872
 ]

ASF GitHub Bot logged work on HDDS-730:
---

Author: ASF GitHub Bot
Created on: 19/Sep/19 07:31
Start Date: 19/Sep/19 07:31
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1464: HDDS-730. Ozone fs cli 
prints hadoop fs in usage.
URL: https://github.com/apache/hadoop/pull/1464#issuecomment-533005884
 
 
   Merged. Thanks @cxorm for the contribution.
 



Issue Time Tracking
---

Worklog Id: (was: 314872)
Time Spent: 1h 10m  (was: 1h)

> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains "Usage: hadoop fs [ generic options ]". 
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!






[jira] [Updated] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-2153:
--
Status: Patch Available  (was: Open)

> Add a config to tune max pending requests in Ratis leader
> -
>
> Key: HDDS-2153
> URL: https://issues.apache.org/jira/browse/HDDS-2153
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933185#comment-16933185
 ] 

Ayush Saxena commented on HDFS-14853:
-

Thanx [~RANith] for the patch. The fix makes sense to me.
For the UT you may use the existing cluster in the test class; something like 
this may work:

{code:java}
  @Test
  public void testChooseRandomWithStorageTypeWithInvalidExcludedNode() {
Set<Node> excluded = new HashSet<>();
excluded.add(new DatanodeInfoBuilder()
.setNodeID(DatanodeID.EMPTY_DATANODE_ID).build());
Node node = CLUSTER.chooseRandomWithStorageType("/", "/l1/d1/r1", excluded,
StorageType.ARCHIVE);
assertNotNull(node);
  }
{code}


> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2119:
---
Fix Version/s: 0.5.0

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml files in the 
> hdds/ozone projects for checkstyle validation.






[jira] [Commented] (HDDS-2016) Add option to enforce gdpr in Bucket Create command

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933233#comment-16933233
 ] 

Hudson commented on HDDS-2016:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17331 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17331/])
HDDS-2016. Add option to enforce GDPR in Bucket Create command (elek: rev 
5c963a75d648cb36e7e36884f61616831229b25a)
* (add) hadoop-hdds/docs/content/gdpr/_index.md
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/InfoKeyHandler.java
* (edit) hadoop-hdds/docs/content/shell/BucketCommands.md
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (add) hadoop-hdds/docs/content/gdpr/GDPR in Ozone.md


> Add option to enforce gdpr in Bucket Create command
> ---
>
> Key: HDDS-2016
> URL: https://issues.apache.org/jira/browse/HDDS-2016
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> End-to-end flow where a user can enforce GDPR for a bucket at creation time only.
> Add/update audit logs, as this will be a useful action for compliance purposes.
> Add docs to show usage.






[jira] [Commented] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933234#comment-16933234
 ] 

Hudson commented on HDDS-2119:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17331 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17331/])
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone (elek: rev 
e78848fc3cb113733ea640f0aa3abbb271b16005)
* (edit) hadoop-hdds/pom.xml
* (edit) pom.ozone.xml
* (add) hadoop-hdds/dev-support/checkstyle/checkstyle.xml
* (add) hadoop-hdds/dev-support/checkstyle/suppressions.xml
* (add) hadoop-hdds/dev-support/checkstyle/checkstyle-noframes-sorted.xsl
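For reference, wiring a shared checkstyle config into a Maven build typically looks like the sketch below; the exact plugin configuration committed to hadoop-hdds/pom.xml and pom.ozone.xml may differ from this.

```xml
<!-- sketch only; check the actual hadoop-hdds/pom.xml for the real wiring -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <configLocation>dev-support/checkstyle/checkstyle.xml</configLocation>
    <suppressionsLocation>dev-support/checkstyle/suppressions.xml</suppressionsLocation>
  </configuration>
</plugin>
```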


> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone
> projects for checkstyle validation.






[jira] [Commented] (HDDS-730) ozone fs cli prints hadoop fs in usage

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933132#comment-16933132
 ] 

Hudson commented on HDDS-730:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17329 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17329/])
HDDS-730. ozone fs cli prints hadoop fs in usage (elek: rev 
ef478fe73e72692b660de818d8c8faa9a155a10b)
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (add) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java


> ozone fs cli prints hadoop fs in usage
> --
>
> Key: HDDS-730
> URL: https://issues.apache.org/jira/browse/HDDS-730
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.3.0
>Reporter: Soumitra Sulav
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
> Attachments: fscmd.png, fswith_nonexsitcmd.png, 
> image-2018-10-24-17-15-39-097.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The ozone fs CLI help/usage page contains Usage: hadoop fs [generic options].
> I believe the usage should be updated.
> Check line 3 of the screenshot.
> !image-2018-10-24-17-15-39-097.png|width=1693,height=1512!






[jira] [Updated] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2153:
-
Labels: pull-request-available  (was: )

> Add a config to tune max pending requests in Ratis leader
> -
>
> Key: HDDS-2153
> URL: https://issues.apache.org/jira/browse/HDDS-2153
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>







[jira] [Work logged] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2153?focusedWorklogId=314899=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314899
 ]

ASF GitHub Bot logged work on HDDS-2153:


Author: ASF GitHub Bot
Created on: 19/Sep/19 08:26
Start Date: 19/Sep/19 08:26
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1474: HDDS-2153. 
Add a config to tune max pending requests in Ratis leader.
URL: https://github.com/apache/hadoop/pull/1474
 
 
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314899)
Remaining Estimate: 0h
Time Spent: 10m

> Add a config to tune max pending requests in Ratis leader
> -
>
> Key: HDDS-2153
> URL: https://issues.apache.org/jira/browse/HDDS-2153
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933212#comment-16933212
 ] 

Hadoop QA commented on HDFS-14853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 1 unchanged - 0 fixed = 9 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980677/HDFS-14853.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebb73db2a4ab 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4ed0aef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27907/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27907/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27907/testReport/ |
| Max. process+thread count | 2842 (vs. ulimit of 5500) |
| 

[jira] [Work logged] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?focusedWorklogId=314885=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314885
 ]

ASF GitHub Bot logged work on HDDS-2101:


Author: ASF GitHub Bot
Created on: 19/Sep/19 08:09
Start Date: 19/Sep/19 08:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1473: HDDS-2101. Ozone 
filesystem provider doesn't exist
URL: https://github.com/apache/hadoop/pull/1473#issuecomment-533018777
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1044 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 47 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 67 | hadoop-hdds generated 118 new + 16 unchanged - 0 fixed 
= 134 total (was 16) |
   | -1 | javadoc | 92 | hadoop-ozone generated 212 new + 3 unchanged - 149 
fixed = 215 total (was 152) |
   ||| _ Other Tests _ |
   | +1 | unit | 255 | hadoop-hdds in the patch passed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2750 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1473 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux 3b1338b8a2df 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ed0aef |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/testReport/ |
   | Max. process+thread count | 473 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current 
hadoop-ozone/ozonefs-lib-legacy U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1473/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Work logged] (HDDS-2150) Update dependency versions to avoid security vulnerabilities

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2150?focusedWorklogId=314919=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314919
 ]

ASF GitHub Bot logged work on HDDS-2150:


Author: ASF GitHub Bot
Created on: 19/Sep/19 09:28
Start Date: 19/Sep/19 09:28
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1472: HDDS-2150. 
Update dependency versions to avoid security vulnerabilities.
URL: https://github.com/apache/hadoop/pull/1472#discussion_r326073658
 
 

 ##
 File path: pom.ozone.xml
 ##
 @@ -127,6 +127,9 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 1.9.13
 2.9.9
 
+
+1.0.0
 
 Review comment:
   Jaeger 1.0 depends on newer OpenTracing (0.33), which is not backwards 
compatible.
   
   https://github.com/opentracing/opentracing-java/pull/339
   https://github.com/opentracing/opentracing-java#deprecated-members-since-031
   
   `hadoop-hdds-common` compiles only due to explicit dependency on 
`opentracing-util` 0.31.0.  However, it fails at runtime with 
[`NoSuchMethodError`](https://github.com/elek/ozone-ci/blob/259712a9df53dd8531786e23676ebed13f527918/pr/pr-hdds-2150-pzdq9/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp.txt#L6).
   
   For the security fix I think it is enough to upgrade to Jaeger 0.34, which 
[updated Apache Thrift to 
0.12](https://github.com/jaegertracing/jaeger-client-java/blob/136a849202e8d0a95e007e6faae38f1519cdba55/build.gradle#L22).
  [Latest Jaeger Client 
release](https://github.com/jaegertracing/jaeger-client-java/releases/latest) 
0.35.2 should be OK, too, as it depends on OpenTracing 0.32, which still has 
the deprecated methods.  In this case OpenTracing version should be changed to 
0.32.0.
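Based on that analysis, the suggested pin could look roughly like this in pom.ozone.xml; the property names below are assumptions, so check the actual pom keys before applying.

```xml
<!-- hypothetical property names; verify against pom.ozone.xml -->
<properties>
  <!-- Jaeger 0.35.2 pulls Apache Thrift 0.12, addressing the reported vulnerability -->
  <jaeger.version>0.35.2</jaeger.version>
  <!-- Jaeger 0.35.x depends on OpenTracing 0.32, which keeps the deprecated 0.31 methods -->
  <opentracing.version>0.32.0</opentracing.version>
</properties>
```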
 



Issue Time Tracking
---

Worklog Id: (was: 314919)
Time Spent: 20m  (was: 10m)

> Update dependency versions to avoid security vulnerabilities
> 
>
> Key: HDDS-2150
> URL: https://issues.apache.org/jira/browse/HDDS-2150
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following dependency versions have known security vulnerabilities. We 
> should update them to recent/ later versions.
>  * Apache Thrift 0.11.0
>  * Apache Zookeeper 3.4.13
>  * Jetty Servlet 9.3.24






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Status: Patch Available  (was: Open)

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14853:
-
Attachment: HDFS-14853.001.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is deleted

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933100#comment-16933100
 ] 

Ranith Sardar commented on HDFS-14853:
--

I have attached the patch. Please review it.

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is deleted
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14853.001.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}






[jira] [Work logged] (HDDS-1879) Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1879?focusedWorklogId=314880=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314880
 ]

ASF GitHub Bot logged work on HDDS-1879:


Author: ASF GitHub Bot
Created on: 19/Sep/19 07:52
Start Date: 19/Sep/19 07:52
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1194: HDDS-1879.  Support 
multiple excluded scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-533012656
 
 
   Thanks @xiaoyuyao for the code review and @elek for monitoring the build result.
 



Issue Time Tracking
---

Worklog Id: (was: 314880)
Time Spent: 5h  (was: 4h 50m)

> Support multiple excluded scopes when choosing datanodes in NetworkTopology
> ---
>
> Key: HDDS-1879
> URL: https://issues.apache.org/jira/browse/HDDS-1879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>







[jira] [Work started] (HDDS-2101) Ozone filesystem provider doesn't exist

2019-09-19 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2101 started by Vivek Ratnavel Subramanian.

> Ozone filesystem provider doesn't exist
> ---
>
> Key: HDDS-2101
> URL: https://issues.apache.org/jira/browse/HDDS-2101
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Jitendra Nath Pandey
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>
> We don't have a filesystem provider in META-INF. 
> i.e. following file doesn't exist.
> {{hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
> See for example
> {{hadoop-tools/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}}
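For reference, such a ServiceLoader provider-configuration file contains one fully-qualified implementation class name per line. For Ozone it would presumably look like the following (the class name is assumed from the ozonefs module layout):

```text
# hadoop-ozone/ozonefs/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.ozone.OzoneFileSystem
```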






[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=314915=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314915
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 19/Sep/19 09:11
Start Date: 19/Sep/19 09:11
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-533041700
 
 
   Currently all the checkstyle checks are showing false negatives. I will commit this with the suggested change (moving the checkstyle XML files to dev-support) if there are no objections...
 



Issue Time Tracking
---

Worklog Id: (was: 314915)
Time Spent: 2h 50m  (was: 2h 40m)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone
> projects for checkstyle validation.






[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=314934=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314934
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 19/Sep/19 10:01
Start Date: 19/Sep/19 10:01
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314934)
Time Spent: 3h 10m  (was: 3h)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone
> projects for checkstyle validation.






[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2119:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml files in the 
> hdds/ozone projects for checkstyle validation.






[jira] [Created] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)
Elek, Marton created HDDS-2154:
--

 Summary: Fix Checkstyle issues
 Key: HDDS-2154
 URL: https://issues.apache.org/jira/browse/HDDS-2154
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Unfortunately, the checkstyle checks didn't work correctly from HDDS-2106 to 
HDDS-2119. 

This patch fixes all the issues that were accidentally merged in the meantime. 






[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933237#comment-16933237
 ] 

Hadoop QA commented on HDFS-14849:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-14849 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14849 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27908/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.






[jira] [Created] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2153:
-

 Summary: Add a config to tune max pending requests in Ratis leader
 Key: HDDS-2153
 URL: https://issues.apache.org/jira/browse/HDDS-2153
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0









[jira] [Work logged] (HDDS-2147) Include dumpstream in test report

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2147?focusedWorklogId=314909=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314909
 ]

ASF GitHub Bot logged work on HDDS-2147:


Author: ASF GitHub Bot
Created on: 19/Sep/19 08:48
Start Date: 19/Sep/19 08:48
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1470: HDDS-2147. Include 
dumpstream in test report
URL: https://github.com/apache/hadoop/pull/1470#issuecomment-533032996
 
 
   Thanks @elek for reviewing and merging it.
 



Issue Time Tracking
---

Worklog Id: (was: 314909)
Time Spent: 50m  (was: 40m)

> Include dumpstream in test report
> -
>
> Key: HDDS-2147
> URL: https://issues.apache.org/jira/browse/HDDS-2147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Include {{*.dumpstream}} in the unit test report, which may help find the 
> cause of the {{Corrupted STDOUT}} warning in the forked JVM.
> {noformat:title=https://github.com/elek/ozone-ci/blob/5429d0982c3b13d311ec353dba198f2f5253757c/pr/pr-hdds-2141-4zm8s/unit/output.log#L333-L334}
> [INFO] Running org.apache.hadoop.utils.TestMetadataStore
> [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
> 1. See FAQ web page and the dump file 
> /workdir/hadoop-hdds/common/target/surefire-reports/2019-09-18T12-58-05_531-jvmRun1.dumpstream
> {noformat}






[jira] [Commented] (HDDS-2147) Include dumpstream in test report

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933182#comment-16933182
 ] 

Hudson commented on HDDS-2147:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17330 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17330/])
HDDS-2147. Include dumpstream in test report (elek: rev 
1029060e616358449aa3919739116883085208d8)
* (edit) hadoop-ozone/dev-support/checks/_mvn_unit_report.sh


> Include dumpstream in test report
> -
>
> Key: HDDS-2147
> URL: https://issues.apache.org/jira/browse/HDDS-2147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Include {{*.dumpstream}} in the unit test report, which may help find the 
> cause of the {{Corrupted STDOUT}} warning in the forked JVM.
> {noformat:title=https://github.com/elek/ozone-ci/blob/5429d0982c3b13d311ec353dba198f2f5253757c/pr/pr-hdds-2141-4zm8s/unit/output.log#L333-L334}
> [INFO] Running org.apache.hadoop.utils.TestMetadataStore
> [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
> 1. See FAQ web page and the dump file 
> /workdir/hadoop-hdds/common/target/surefire-reports/2019-09-18T12-58-05_531-jvmRun1.dumpstream
> {noformat}






[jira] [Commented] (HDDS-1933) Datanode should use hostname in place of ip addresses to allow DN's to work when ipaddress change

2019-09-19 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933136#comment-16933136
 ] 

Li Cheng commented on HDDS-1933:


After discussion, we propose that the change remain as it is for now. 
[~msingh], please try setting DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT = true in 
the Kubernetes environment and see whether it works. 
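For reference, a hedged sketch of the setting being suggested: the constant DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT corresponds to the {{dfs.datanode.use.datanode.hostname}} property in Hadoop; the exact key and file (hdfs-site.xml vs. an Ozone override) should be verified for the deployed version:

```xml
<!-- Assumed placement: hdfs-site.xml (or the corresponding Ozone override). -->
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <description>Register and address datanodes by hostname instead of IP,
  so pod restarts that change the IP address keep the cluster working.
  </description>
</property>
```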

> Datanode should use hostname in place of ip addresses to allow DN's to work 
> when ipaddress change
> -
>
> Key: HDDS-1933
> URL: https://issues.apache.org/jira/browse/HDDS-1933
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Blocker
>
> This was noticed by [~elek] while deploying Ozone in a Kubernetes-based 
> environment.
> When the datanode IP address changes on restart, the datanode details cease 
> to be correct for the datanode, and this prevents the cluster from 
> functioning after a restart.






[jira] [Work logged] (HDDS-2147) Include dumpstream in test report

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2147?focusedWorklogId=314907=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314907
 ]

ASF GitHub Bot logged work on HDDS-2147:


Author: ASF GitHub Bot
Created on: 19/Sep/19 08:42
Start Date: 19/Sep/19 08:42
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1470: HDDS-2147. 
Include dumpstream in test report
URL: https://github.com/apache/hadoop/pull/1470
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314907)
Time Spent: 40m  (was: 0.5h)

> Include dumpstream in test report
> -
>
> Key: HDDS-2147
> URL: https://issues.apache.org/jira/browse/HDDS-2147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Include {{*.dumpstream}} in the unit test report, which may help find the 
> cause of the {{Corrupted STDOUT}} warning in the forked JVM.
> {noformat:title=https://github.com/elek/ozone-ci/blob/5429d0982c3b13d311ec353dba198f2f5253757c/pr/pr-hdds-2141-4zm8s/unit/output.log#L333-L334}
> [INFO] Running org.apache.hadoop.utils.TestMetadataStore
> [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
> 1. See FAQ web page and the dump file 
> /workdir/hadoop-hdds/common/target/surefire-reports/2019-09-18T12-58-05_531-jvmRun1.dumpstream
> {noformat}






[jira] [Updated] (HDDS-2147) Include dumpstream in test report

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2147:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Include dumpstream in test report
> -
>
> Key: HDDS-2147
> URL: https://issues.apache.org/jira/browse/HDDS-2147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Include {{*.dumpstream}} in the unit test report, which may help find the 
> cause of the {{Corrupted STDOUT}} warning in the forked JVM.
> {noformat:title=https://github.com/elek/ozone-ci/blob/5429d0982c3b13d311ec353dba198f2f5253757c/pr/pr-hdds-2141-4zm8s/unit/output.log#L333-L334}
> [INFO] Running org.apache.hadoop.utils.TestMetadataStore
> [WARNING] Corrupted STDOUT by directly writing to native stream in forked JVM 
> 1. See FAQ web page and the dump file 
> /workdir/hadoop-hdds/common/target/surefire-reports/2019-09-18T12-58-05_531-jvmRun1.dumpstream
> {noformat}






[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=314931=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314931
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 19/Sep/19 09:57
Start Date: 19/Sep/19 09:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-533058852
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3434 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 125 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 47 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 164 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | cc | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 6 new + 8 
unchanged - 2 fixed = 14 total (was 10) |
   | -0 | checkstyle | 96 | hadoop-ozone: The patch generated 370 new + 2478 
unchanged - 15 fixed = 2848 total (was 2493) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 51 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 249 | hadoop-hdds in the patch passed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 6687 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 68f3b34ec47c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1029060 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/13/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit 

[jira] [Created] (HDDS-2155) Fix checkstyle errors

2019-09-19 Thread Doroszlai, Attila (Jira)
Doroszlai, Attila created HDDS-2155:
---

 Summary: Fix checkstyle errors
 Key: HDDS-2155
 URL: https://issues.apache.org/jira/browse/HDDS-2155
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


The checkstyle CI check has been producing false negative results recently, so 
some checkstyle violations have crept in.

{noformat}
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/RocksDBStoreIterator.java
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/LevelDBStoreIterator.java
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/TestMetadataStore.java
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/S3KeyGenerator.java
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
{noformat}






[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=314964=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314964
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 19/Sep/19 10:50
Start Date: 19/Sep/19 10:50
Worklog Time Spent: 10m 
  Work Description: timmylicheng commented on issue #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-533075841
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 314964)
Time Spent: 40m  (was: 0.5h)

> Add ability to SCM for creating multiple pipelines with same datanode
> -
>
> Key: HDDS-1569
> URL: https://issues.apache.org/jira/browse/HDDS-1569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Siddharth Wagle
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines 
> with datanodes that are not yet part of a sufficient number of pipelines
> - Define soft and hard upper bounds for pipeline membership
> - Create SCMAllocationManager that can be leveraged to get a candidate set of 
> datanodes based on placement policies
> - Add the datanodes to internal datastructures






[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933376#comment-16933376
 ] 

HuangTao commented on HDFS-14849:
-

I found a clue:

`chooseSourceDatanodes` returns
{quote}LIVE=2, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=22{quote}
All block indices (0-8) exist; three blocks (3, 4, and 8) have no redundant 
copy; the datanode storing block 8 is DECOMMISSIONING, while the other two 
datanodes' adminState is null. 

`countNodes(block)` returns
{quote}LIVE=8, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=16{quote}

So we need to replicate block 8, but there are no more racks to place it on.
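To make the failure mode above concrete, here is a toy model (assumed names, not actual BlockManager code): an EC block group whose index 8 exists only on a DECOMMISSIONING node keeps being scheduled for reconstruction, but with no remaining rack to place the new replica, the work never completes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the scenario above (hypothetical names, not NameNode code):
// an EC block group with indices 0-8, where index 8 is stored only on a
// DECOMMISSIONING datanode and no extra rack is available as a target.
public class EcDecommissionSketch {

  enum AdminState { LIVE, DECOMMISSIONING }

  // A block index is "safe" only if at least one replica is on a LIVE node.
  static boolean needsReconstruction(Map<Integer, List<AdminState>> replicas) {
    for (List<AdminState> states : replicas.values()) {
      if (!states.contains(AdminState.LIVE)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    Map<Integer, List<AdminState>> replicas = new HashMap<>();
    for (int i = 0; i < 8; i++) {
      replicas.put(i, new ArrayList<>(List.of(AdminState.LIVE)));
    }
    // index 8 exists only on a decommissioning node
    replicas.put(8, new ArrayList<>(List.of(AdminState.DECOMMISSIONING)));

    boolean targetAvailable = false; // no additional rack to host the copy
    int rounds = 0;
    // each redundancy-monitor round schedules the same reconstruction again,
    // because the previous attempt could not place the new replica
    while (needsReconstruction(replicas) && rounds < 3) {
      rounds++;
      if (targetAvailable) {
        replicas.get(8).add(AdminState.LIVE);
      }
    }
    System.out.println("scheduled " + rounds
        + " rounds, still pending: " + needsReconstruction(replicas));
    // prints: scheduled 3 rounds, still pending: true
  }
}
```

The cap of 3 rounds stands in for what is, in the reported cluster, an unbounded loop: decommissioning never finishes because the reconstruction it waits for can never be placed.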


> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 
> nodes simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-2148) Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2148?focusedWorklogId=315058=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315058
 ]

ASF GitHub Bot logged work on HDDS-2148:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:48
Start Date: 19/Sep/19 13:48
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1471: HDDS-2148. 
Remove redundant code in CreateBucketHandler.java
URL: https://github.com/apache/hadoop/pull/1471#issuecomment-533138999
 
 
   Thanks @elek & @adoroszlai  for reviewing, @elek for committing.
   
 



Issue Time Tracking
---

Worklog Id: (was: 315058)
Time Spent: 1h 10m  (was: 1h)

> Remove redundant code in CreateBucketHandler.java
> -
>
> Key: HDDS-2148
> URL: https://issues.apache.org/jira/browse/HDDS-2148
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> if (isVerbose()) {
>   System.out.printf("Volume Name : %s%n", volumeName);
>   System.out.printf("Bucket Name : %s%n", bucketName);
>   if (bekName != null) {
> bb.setBucketEncryptionKey(bekName);
> System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
> bekName);
>   }
> }
> {code}
> This jira aims to remove the redundant line 
> {{bb.setBucketEncryptionKey(bekName);}}, as the same operation is performed 
> in the preceding code block; this block only prints additional details when 
> the verbose option is specified.
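For illustration, a minimal self-contained sketch of the verbose branch after removing the redundant call. The surrounding handler class and method names here are hypothetical scaffolding; only the print logic from the snippet above is kept:

```java
// Simplified sketch of the fixed verbose block (hypothetical scaffolding):
// the encryption key is applied once while building the bucket args earlier,
// and the verbose branch below only prints, instead of calling
// setBucketEncryptionKey(bekName) a second time.
public class CreateBucketVerboseSketch {

  static String verboseOutput(String volumeName, String bucketName,
      String bekName) {
    StringBuilder out = new StringBuilder();
    out.append(String.format("Volume Name : %s%n", volumeName));
    out.append(String.format("Bucket Name : %s%n", bucketName));
    if (bekName != null) {
      // print-only: no second setBucketEncryptionKey call here
      out.append(String.format(
          "Bucket Encryption enabled with Key Name: %s%n", bekName));
    }
    return out.toString();
  }

  public static void main(String[] args) {
    System.out.print(verboseOutput("vol1", "bucket1", "enckey1"));
  }
}
```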






[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Attachment: fsck-file.png

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, fsck-file.png, 
> scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.






[jira] [Work logged] (HDDS-2148) Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2148?focusedWorklogId=314950=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314950
 ]

ASF GitHub Bot logged work on HDDS-2148:


Author: ASF GitHub Bot
Created on: 19/Sep/19 10:26
Start Date: 19/Sep/19 10:26
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1471: HDDS-2148. Remove 
redundant code in CreateBucketHandler.java
URL: https://github.com/apache/hadoop/pull/1471
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314950)
Time Spent: 1h  (was: 50m)

> Remove redundant code in CreateBucketHandler.java
> -
>
> Key: HDDS-2148
> URL: https://issues.apache.org/jira/browse/HDDS-2148
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> if (isVerbose()) {
>   System.out.printf("Volume Name : %s%n", volumeName);
>   System.out.printf("Bucket Name : %s%n", bucketName);
>   if (bekName != null) {
> bb.setBucketEncryptionKey(bekName);
> System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
> bekName);
>   }
> }
> {code}
> This jira aims to remove the redundant line 
> {{bb.setBucketEncryptionKey(bekName);}}, as the same operation is performed 
> in the preceding code block; this block only prints additional details when 
> the verbose option is specified.






[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=314997=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314997
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:11
Start Date: 19/Sep/19 12:11
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1462: HDDS-2141. 
Missing total number of operations
URL: https://github.com/apache/hadoop/pull/1462
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 314997)
Time Spent: 1h 10m  (was: 1h)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.






[jira] [Updated] (HDDS-2141) Missing total number of operations

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2141:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.






[jira] [Assigned] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2154:
--

Assignee: Elek, Marton

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Unfortunately, the checkstyle checks didn't work correctly from HDDS-2106 to 
> HDDS-2119. This patch fixes all the issues that were accidentally merged in 
> the meantime. 






[jira] [Updated] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2154:
-
Labels: pull-request-available  (was: )

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> Unfortunately, the checkstyle checks didn't work from HDDS-2106 to HDDS-2119. 
> This patch fixes all the issues that were accidentally merged in the 
> meantime. 






[jira] [Commented] (HDDS-2148) Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933250#comment-16933250
 ] 

Hudson commented on HDDS-2148:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17332 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17332/])
HDDS-2148. Remove redundant code in CreateBucketHandler.java (elek: rev 
28913f733e53c75e97397953a71f06191308c9b8)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/CreateBucketHandler.java


> Remove redundant code in CreateBucketHandler.java
> -
>
> Key: HDDS-2148
> URL: https://issues.apache.org/jira/browse/HDDS-2148
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> if (isVerbose()) {
>   System.out.printf("Volume Name : %s%n", volumeName);
>   System.out.printf("Bucket Name : %s%n", bucketName);
>   if (bekName != null) {
> bb.setBucketEncryptionKey(bekName);
> System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
> bekName);
>   }
> }
> {code}
> This Jira aims to remove the redundant line 
> {{bb.setBucketEncryptionKey(bekName);}}, as the same operation is performed in 
> the preceding code block. The code block shown above only prints additional 
> details when the verbose option is specified.
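The fix described above amounts to hoisting the builder mutation out of the print-only verbose block. A minimal, self-contained sketch of that pattern; the `Builder` here is a stub, and `bb`, `bekName`, and the verbose flag mirror the quoted snippet rather than the actual Ozone classes:

```java
public class VerbosePrintSketch {
    // Stub standing in for the bucket-args builder; counts mutations so the
    // sketch can show the key is set exactly once.
    static class Builder {
        String encryptionKey;
        int setCalls = 0;
        Builder setBucketEncryptionKey(String k) {
            setCalls++;
            encryptionKey = k;
            return this;
        }
    }

    public static void main(String[] args) {
        Builder bb = new Builder();
        String bekName = "key1";
        boolean verbose = true;

        // Main path: the builder is mutated once, outside the verbose block.
        if (bekName != null) {
            bb.setBucketEncryptionKey(bekName);
        }
        // Verbose block: print only, no second setBucketEncryptionKey call.
        if (verbose) {
            System.out.printf("Volume Name : %s%n", "vol1");
            System.out.printf("Bucket Name : %s%n", "bucket1");
            if (bekName != null) {
                System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
                    bekName);
            }
        }
        System.out.println("setCalls=" + bb.setCalls);
    }
}
```

Running the sketch shows the encryption key is applied a single time regardless of verbosity.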






[jira] [Updated] (HDDS-2148) Remove redundant code in CreateBucketHandler.java

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2148:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove redundant code in CreateBucketHandler.java
> -
>
> Key: HDDS-2148
> URL: https://issues.apache.org/jira/browse/HDDS-2148
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> if (isVerbose()) {
>   System.out.printf("Volume Name : %s%n", volumeName);
>   System.out.printf("Bucket Name : %s%n", bucketName);
>   if (bekName != null) {
> bb.setBucketEncryptionKey(bekName);
> System.out.printf("Bucket Encryption enabled with Key Name: %s%n",
> bekName);
>   }
> }
> {code}
> This Jira aims to remove the redundant line 
> {{bb.setBucketEncryptionKey(bekName);}}, as the same operation is performed in 
> the preceding code block. The code block shown above only prints additional 
> details when the verbose option is specified.






[jira] [Commented] (HDDS-2151) Ozone client prints the entire request payload in DEBUG level.

2019-09-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933257#comment-16933257
 ] 

Steve Loughran commented on HDDS-2151:
--



I think you might want to consider how much of that payload to print at all. 
Does it ever include secrets?

> Ozone client prints the entire request payload in DEBUG level.
> --
>
> Key: HDDS-2151
> URL: https://issues.apache.org/jira/browse/HDDS-2151
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Priority: Major
>
> In XceiverClientRatis.java:221, we have the following snippet, in which a 
> DEBUG line prints out the entire Container Request proto. 
> {code}
>   ContainerCommandRequestProto finalPayload =
>   ContainerCommandRequestProto.newBuilder(request)
>   .setTraceID(TracingUtil.exportCurrentSpan())
>   .build();
>   boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>   ByteString byteString = finalPayload.toByteString();
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest, finalPayload);
>   return isReadOnlyRequest ?
>   getClient().sendReadOnlyAsync(() -> byteString) :
>   getClient().sendAsync(() -> byteString);
> {code}
> This causes OOM while writing large (~300MB) keys. 
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
>   at java.lang.StringBuilder.append(StringBuilder.java:202)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:75)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:94)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.escapeBytes(TextFormat.java:1836)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:436)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:449)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.access$000(TextFormat.java:307)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.print(TextFormat.java:68)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.printToString(TextFormat.java:148)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.AbstractMessage.toString(AbstractMessage.java:117)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:252)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:221)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:302)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:601)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:459)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:240)
>   at 
> 
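The stack trace above comes from eagerly stringifying the whole proto inside the logging call. One mitigation is to defer the expensive `toString()` behind a level check; the sketch below illustrates that pattern with made-up names and is not the actual Ozone fix, which (per the discussion) may instead avoid printing the payload at all:

```java
import java.util.function.Supplier;

public class GuardedDebugLog {
    static boolean debugEnabled = false;  // DEBUG level off, as in production
    static int toStringCalls = 0;

    // Stand-in for a huge protobuf message whose toString() builds a giant
    // String (the source of the OOM in the trace above).
    static String expensivePayloadToString() {
        toStringCalls++;
        return "<~300MB of chunk data>";
    }

    // Only evaluate the supplier when the level is enabled, mirroring an
    // if (LOG.isDebugEnabled()) guard around the logging call.
    static void debug(Supplier<String> message) {
        if (debugEnabled) {
            System.out.println("DEBUG " + message.get());
        }
    }

    public static void main(String[] args) {
        debug(() -> "sendCommandAsync " + expensivePayloadToString());
        // With DEBUG disabled, the payload was never stringified.
        System.out.println("toString calls: " + toStringCalls);
    }
}
```

With the level disabled, the lambda is never invoked, so the payload's `toString()` never runs.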

[jira] [Work logged] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2153?focusedWorklogId=314966=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314966
 ]

ASF GitHub Bot logged work on HDDS-2153:


Author: ASF GitHub Bot
Created on: 19/Sep/19 11:02
Start Date: 19/Sep/19 11:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1474: HDDS-2153. Add a 
config to tune max pending requests in Ratis leader.
URL: https://github.com/apache/hadoop/pull/1474#issuecomment-533080522
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3497 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 963 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 49 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 189 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 24 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 51 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 361 | hadoop-hdds in the patch passed. |
   | -1 | unit | 33 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6970 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1474 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux b201fc79c4c7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e78848f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/testReport/ |
   | Max. process+thread count | 438 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 

[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Description: 
When the datanode stays in DECOMMISSION_INPROGRESS status, the EC block in 
that datanode will be replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
simultaneously. 
 !scheduleReconstruction.png! 

 !fsck-file.png! 

  was:
When the datanode stays in DECOMMISSION_INPROGRESS status, the EC block in 
that datanode will be replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
simultaneously.
 
 !scheduleReconstruction.png! 

 !fsck-file.png! 


> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-2153) Add a config to tune max pending requests in Ratis leader

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2153?focusedWorklogId=315010=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315010
 ]

ASF GitHub Bot logged work on HDDS-2153:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:35
Start Date: 19/Sep/19 12:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1474: HDDS-2153. Add a 
config to tune max pending requests in Ratis leader.
URL: https://github.com/apache/hadoop/pull/1474#issuecomment-53311
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3567 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 49 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 176 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 24 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 27 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 48 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 25 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 237 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 6653 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1474 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5a09aeb564a8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 28913f7 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1474/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 

[jira] [Work logged] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?focusedWorklogId=315055=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315055
 ]

ASF GitHub Bot logged work on HDDS-2127:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:43
Start Date: 19/Sep/19 13:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1476: HDDS-2127. 
Detailed Tools doc not reachable
URL: https://github.com/apache/hadoop/pull/1476#issuecomment-533136748
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 904 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 1895 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1476/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1476 |
   | Optional Tests | dupname asflicense mvnsite |
   | uname | Linux 8470fef3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c9900a0 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1476/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1476/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | Max. process+thread count | 439 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1476/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315055)
Time Spent: 20m  (was: 10m)

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.






[jira] [Work logged] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?focusedWorklogId=314995=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314995
 ]

ASF GitHub Bot logged work on HDDS-1982:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:06
Start Date: 19/Sep/19 12:06
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#issuecomment-533099643
 
 
   @sodonnel Can you please rebase and push (some of the integration tests are 
fixed on trunk, we can double check the test results with a new, updated push)
 



Issue Time Tracking
---

Worklog Id: (was: 314995)
Time Spent: 6h 20m  (was: 6h 10m)

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node that is decommissioning or entering 
> maintenance can also be HEALTHY, STALE, or DEAD.
> Therefore in this Jira I propose we should model a node state with two 
> different sets of values. The first, is effectively the liveliness of the 
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the total number of states for a node is the cross-product 
> of the two lists above; however, it probably makes sense to keep the two 
> states separate internally.
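The two-dimensional model proposed above can be sketched as a pair of independent enums plus a small holder type; the names follow the Jira text and are illustrative, not the real SCM classes:

```java
public class NodeStatusSketch {
    // Liveness of the node (largely what exists today, per the text above).
    enum NodeHealth { HEALTHY, STALE, DEAD }

    // Operational state, tracked independently of liveness.
    enum NodeOperationalState {
        IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
        ENTERING_MAINTENANCE, IN_MAINTENANCE
    }

    // A node carries one value from each dimension; the combined state space
    // is the cross-product, but the two dimensions stay separate internally.
    static final class NodeStatus {
        final NodeHealth health;
        final NodeOperationalState opState;
        NodeStatus(NodeHealth health, NodeOperationalState opState) {
            this.health = health;
            this.opState = opState;
        }
    }

    public static void main(String[] args) {
        // A decommissioning node can still be HEALTHY.
        NodeStatus s = new NodeStatus(NodeHealth.HEALTHY,
            NodeOperationalState.DECOMMISSIONING);
        System.out.println(s.health + "/" + s.opState);
        System.out.println("combined states: "
            + NodeHealth.values().length * NodeOperationalState.values().length);
    }
}
```

Keeping the dimensions as separate fields lets each transition (health heartbeat vs. admin action) update only its own axis.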






[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Attachment: liveBlockIndices.png

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-1949) Missing or error-prone test cleanup

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?focusedWorklogId=315084=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315084
 ]

ASF GitHub Bot logged work on HDDS-1949:


Author: ASF GitHub Bot
Created on: 19/Sep/19 14:11
Start Date: 19/Sep/19 14:11
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1365: HDDS-1949. Missing 
or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-533148984
 
 
   Thanks @arp7 for the review.  Conflict is now resolved.
 



Issue Time Tracking
---

Worklog Id: (was: 315084)
Time Spent: 1h 40m  (was: 1.5h)

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=314945=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314945
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 19/Sep/19 10:10
Start Date: 19/Sep/19 10:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-533063582
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 608 | HDDS-1564 passed |
   | +1 | compile | 399 | HDDS-1564 passed |
   | +1 | checkstyle | 75 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 991 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | HDDS-1564 passed |
   | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 652 | HDDS-1564 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 630 | the patch passed |
   | +1 | compile | 424 | the patch passed |
   | +1 | javac | 424 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 755 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | -1 | findbugs | 260 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 310 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2026 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 8347 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Dead store to dnDetails in 
org.apache.hadoop.hdds.scm.safemode.HealthyPipelineSafeModeRule.process(SCMDatanodeHeartbeatDispatcher$PipelineReportFromDatanode)
  At 
HealthyPipelineSafeModeRule.java:org.apache.hadoop.hdds.scm.safemode.HealthyPipelineSafeModeRule.process(SCMDatanodeHeartbeatDispatcher$PipelineReportFromDatanode)
  At HealthyPipelineSafeModeRule.java:[line 119] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestroy |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a3e22edef6ea 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 326b5ac |
   | Default Java | 

[jira] [Work started] (HDDS-2155) Fix checkstyle errors

2019-09-19 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2155 started by Doroszlai, Attila.
---
> Fix checkstyle errors
> -
>
> Key: HDDS-2155
> URL: https://issues.apache.org/jira/browse/HDDS-2155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Checkstyle CI check has been providing false negative results recently, so 
> some checkstyle violations have crept in.
> {noformat}
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/RocksDBStoreIterator.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/LevelDBStoreIterator.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/TestMetadataStore.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.RightCurlyCheck"/>
> hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/S3KeyGenerator.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.LeftCurlyCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.LeftCurlyCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.javadoc.JavadocStyleCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.ArrayTypeStyleCheck"/>
> {noformat}






[jira] [Work logged] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?focusedWorklogId=314946=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314946
 ]

ASF GitHub Bot logged work on HDDS-2154:


Author: ASF GitHub Bot
Created on: 19/Sep/19 10:10
Start Date: 19/Sep/19 10:10
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1475: HDDS-2154. Fix 
Checkstyle issues
URL: https://github.com/apache/hadoop/pull/1475
 
 
   Unfortunately checkstyle checks didn't work well from HDDS-2106 to 
HDDS-2119. 
   
   This patch fixes all the issues which were accidentally merged in the 
meantime. 
   
   See: https://issues.apache.org/jira/browse/HDDS-2154
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 314946)
Remaining Estimate: 0h
Time Spent: 10m

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Unfortunately checkstyle checks didn't work well from HDDS-2106 to HDDS-2119. 
> This patch fixes all the issues which were accidentally merged in the 
> meantime. 






[jira] [Updated] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2154:
---
Status: Patch Available  (was: Open)

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Unfortunately checkstyle checks didn't work well from HDDS-2106 to HDDS-2119. 
> This patch fixes all the issues which were accidentally merged in the 
> meantime. 






[jira] [Resolved] (HDDS-2155) Fix checkstyle errors

2019-09-19 Thread Doroszlai, Attila (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved HDDS-2155.
-
Resolution: Duplicate

> Fix checkstyle errors
> -
>
> Key: HDDS-2155
> URL: https://issues.apache.org/jira/browse/HDDS-2155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Checkstyle CI check has been providing false negative result recently, so 
> some checkstyle violations have crept in.
> {noformat}
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/RocksDBStoreIterator.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/LevelDBStoreIterator.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/utils/TestMetadataStore.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.RightCurlyCheck"/>
> hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/S3KeyGenerator.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
> hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
>  source="com.puppycrawl.tools.checkstyle.checks.sizes.LineLengthCheck"/>
> hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFsShell.java
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.imports.UnusedImportsCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.LeftCurlyCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.blocks.LeftCurlyCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.javadoc.JavadocStyleCheck"/>
>  source="com.puppycrawl.tools.checkstyle.checks.ArrayTypeStyleCheck"/>
> {noformat}






[jira] [Work logged] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?focusedWorklogId=314980=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314980
 ]

ASF GitHub Bot logged work on HDDS-2154:


Author: ASF GitHub Bot
Created on: 19/Sep/19 11:37
Start Date: 19/Sep/19 11:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1475: HDDS-2154. Fix 
Checkstyle issues
URL: https://github.com/apache/hadoop/pull/1475#issuecomment-533091052
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 46 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 157 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 28 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 28 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 9 fixed = 0 total (was 9) |
   | +1 | checkstyle | 30 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 24 fixed = 0 total (was 24) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 46 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 265 | hadoop-hdds in the patch failed. |
   | -1 | unit | 30 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3183 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1475 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 48876ff3c5af 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 28913f7 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1475/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Work logged] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?focusedWorklogId=314988=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-314988
 ]

ASF GitHub Bot logged work on HDDS-1982:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:05
Start Date: 19/Sep/19 12:05
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#issuecomment-533099312
 
 
   LGTM
   
   If I understood correctly, everybody agreed with this approach, and AFAIK 
all of the comments are addressed.
   
   @anuengineer @nandakumar131 please let us know if you have any further 
comments.
   
   I am planning to commit it tomorrow if there are no more objections.
   
   I think we can commit it to trunk; I am not sure if we need a separate 
branch (let me know if you prefer a feature branch).
   
* It's smaller than or the same size as the OM HA work
* Complexity is smaller (at least for the existing code base); most of the 
code will be new and independent.
 



Issue Time Tracking
---

Worklog Id: (was: 314988)
Time Spent: 6h 10m  (was: 6h)

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node that is decommissioning or entering 
> maintenance can also be HEALTHY, STALE, or go DEAD.
> Therefore in this Jira I propose we should model a node state with two 
> different sets of values. The first, is effectively the liveliness of the 
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall number of states for a node is the cross-product of 
> the two lists above; however, it probably makes sense to keep the two 
> states separate internally.
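The two-dimensional node state model proposed above can be sketched as a pair of Java enums whose cross-product gives the full state space. This is a minimal illustrative sketch; the class and enum names are hypothetical and not the actual SCM code.

```java
// Illustrative sketch of the two-dimensional node state model.
// All names here are hypothetical, not the actual Hadoop SCM classes.
public class NodeStatusSketch {

  /** Liveness of the node, largely what SCM tracks today. */
  enum NodeState { HEALTHY, STALE, DEAD }

  /** Administrative (operational) state of the node. */
  enum NodeOperationalState {
    IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
    ENTERING_MAINTENANCE, IN_MAINTENANCE
  }

  private final NodeState health;
  private final NodeOperationalState opState;

  NodeStatusSketch(NodeState health, NodeOperationalState opState) {
    this.health = health;
    this.opState = opState;
  }

  NodeState getHealth() { return health; }
  NodeOperationalState getOperationalState() { return opState; }

  public static void main(String[] args) {
    // A node can be decommissioning and dead at the same time.
    NodeStatusSketch status = new NodeStatusSketch(
        NodeState.DEAD, NodeOperationalState.DECOMMISSIONING);
    // The overall state space is the cross-product: 3 * 5 = 15 states,
    // but the two dimensions are kept separate internally.
    int total = NodeState.values().length
        * NodeOperationalState.values().length;
    System.out.println(status.getHealth() + "/"
        + status.getOperationalState() + ", total states = " + total);
  }
}
```

Keeping the two enums separate avoids enumerating all fifteen combined states explicitly, while still allowing any liveness/operational pairing.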






[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-19 Thread Lokesh Jain (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933383#comment-16933383
 ] 

Lokesh Jain commented on HDDS-1868:
---

[~swagle] I think there is a case which is not handled. There can be an 
elected leader s1 with two followers s2 and s3. The pipeline reports from s2 
and s3 can arrive after the pipeline action, or may not arrive at all. In both 
of these cases we would have opened the pipeline in SCM. I think we need to 
send either only the pipeline report or only the pipeline action from the 
datanodes in this case. Once we get this action or report from all the 
datanodes after a leader has been elected and acknowledged by all of them, 
SCM can open the pipeline?
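The suggestion above, opening the pipeline only once every member datanode has reported after leader election, could be tracked on the SCM side roughly as follows. This is a minimal sketch under that assumption; the class and method names are made up, not the actual SCM implementation.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch: open a pipeline only after every member datanode has
// reported following leader election. All names are hypothetical; this is
// not the actual SCM code.
public class PipelineOpenTracker {

  private final Set<String> members;              // datanodes in the pipeline
  private final Set<String> reported = new HashSet<>();
  private boolean open = false;

  public PipelineOpenTracker(Set<String> members) {
    this.members = new HashSet<>(members);
  }

  /** Called when a pipeline report (or action) arrives from a datanode. */
  public void onReport(String datanodeId) {
    if (members.contains(datanodeId)) {
      reported.add(datanodeId);
      // Open only once all members have confirmed the elected leader.
      if (reported.containsAll(members)) {
        open = true;
      }
    }
  }

  public boolean isOpen() { return open; }
}
```

With this shape, a report that arrives late simply completes the set later, and a report that never arrives leaves the pipeline unopened, matching the concern in the comment above.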

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch
>
>
> On restart, Ozone pipelines start in the allocated state; they are moved into 
> the open state after all the pipeline members have reported. However, this 
> can potentially lead to an issue where the pipeline is still not ready to 
> accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.






[jira] [Commented] (HDFS-14855) client always print standbyexception info with multi standby namenode

2019-09-19 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933384#comment-16933384
 ] 

Ayush Saxena commented on HDFS-14855:
-

Thanx [~shenyinjie] for putting this up. Agreed, this actually appears to be 
noise. There has been a similar discussion on HDFS-14271. You may follow the 
discussion there and take that over itself.

> client always print standbyexception info with multi standby namenode
> -
>
> Key: HDFS-14855
> URL: https://issues.apache.org/jira/browse/HDFS-14855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: image-2019-09-19-20-04-54-591.png
>
>
> When the cluster has more than two standby namenodes, client shell executions 
> will print StandbyException info. May we change the log level from INFO to 
> DEBUG?
>  !image-2019-09-19-20-04-54-591.png! 






[jira] [Issue Comment Deleted] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Comment: was deleted

(was:  !liveBlockIndices.png! )

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Comment Edited] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933376#comment-16933376
 ] 

HuangTao edited comment on HDFS-14849 at 9/19/19 1:35 PM:
--

I found a clue:

`chooseSourceDatanodes` gets 
{quote}LIVE=2, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=22{quote}
and all block indices (0-8) exist; three blocks (3/4/8) have no redundant 
block, the datanode where block 8 is stored is in DECOMMISSIONING, and the 
other two datanodes' adminState is null. 
{quote}[0, 1, 2, 3, 4, 5, 6, 7, 8, 6, 7, 6, 6, 5, 0, 1, 5, 0, 2, 5, 2, 5, 1, 2, 
1, 5, 2, 7, 5, 2, 0]{quote}

`countNodes(block)` gets
{quote}LIVE=8, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=16{quote}

so we need to replicate block 8, but there are no racks anymore.

Now I have a doubt: why are some blocks replicated more than once instead of 
replicating block 8?


was (Author: marvelrock):
I find a clue:

the `chooseSourceDatanodes` get 
{quote}LIVE=2, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=22{quote}
and all block index (0-8) exists, and three blocks 3/4/8 have no redundant 
block, and the datanode where block 8 stored is in DECOMMISSIONING, other two 
datanode adminState is null. 

the `countNodes(block)` get
{quote}LIVE=8, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=16{quote}

so we need to replicate block 8, but there is no racks anymore.

Now, I have a doubt why replicate some block more than once other than 
replicate the block 8 ?
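The redundancy conclusion in the comment above can be reproduced directly from the quoted liveBlockIndices list: counting occurrences of each EC block index shows exactly indices 3, 4 and 8 have a single copy. This is a quick illustrative check, not NameNode code; the class and method names are made up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

// Quick check, not NameNode code: count each EC block index in the quoted
// liveBlockIndices list and find the indices with no redundant copy.
public class RedundancyCheck {

  static TreeSet<Integer> nonRedundant(int[] indices) {
    Map<Integer, Integer> counts = new HashMap<>();
    for (int i : indices) {
      counts.merge(i, 1, Integer::sum);  // occurrence count per block index
    }
    TreeSet<Integer> single = new TreeSet<>();
    counts.forEach((idx, c) -> {
      if (c == 1) {
        single.add(idx);  // index stored on exactly one datanode
      }
    });
    return single;
  }

  public static void main(String[] args) {
    // The block index list quoted in the comment above.
    int[] indices = {0, 1, 2, 3, 4, 5, 6, 7, 8, 6, 7, 6, 6, 5, 0, 1, 5, 0, 2,
        5, 2, 5, 1, 2, 1, 5, 2, 7, 5, 2, 0};
    System.out.println(nonRedundant(indices));  // prints [3, 4, 8]
  }
}
```

The output matches the observation that blocks 3, 4 and 8 have no redundant copy, so those are the indices reconstruction should prioritize.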

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-2141) Missing total number of operations

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2141?focusedWorklogId=315001=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315001
 ]

ASF GitHub Bot logged work on HDDS-2141:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:16
Start Date: 19/Sep/19 12:16
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1462: HDDS-2141. Missing 
total number of operations
URL: https://github.com/apache/hadoop/pull/1462#issuecomment-533102961
 
 
   Thanks @elek for reviewing and merging it.
 



Issue Time Tracking
---

Worklog Id: (was: 315001)
Time Spent: 1h 20m  (was: 1h 10m)

> Missing total number of operations
> --
>
> Key: HDDS-2141
> URL: https://issues.apache.org/jira/browse/HDDS-2141
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: missing_total.png, total-new.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Total number of operations is missing from some metrics graphs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2127:
-
Labels: pull-request-available  (was: )

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-2127:
---
Status: Patch Available  (was: Open)

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?focusedWorklogId=315013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315013
 ]

ASF GitHub Bot logged work on HDDS-2127:


Author: ASF GitHub Bot
Created on: 19/Sep/19 12:42
Start Date: 19/Sep/19 12:42
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1476: HDDS-2127. 
Detailed Tools doc not reachable
URL: https://github.com/apache/hadoop/pull/1476
 
 
   There are two doc pages for tools:
* docs/beyond/tools.html
* docs/tools.html
   
   The latter is more detailed (has subpages for several tools), but it is not 
reachable (even indirectly) from the start page.  Not sure if this is 
intentional.
   
   On a related note, it has two "Testing tools" sub-pages. One of them is 
empty and should be removed.
   
   See: https://issues.apache.org/jira/browse/HDDS-2127
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315013)
Remaining Estimate: 0h
Time Spent: 10m

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-2127:
--

Assignee: Elek, Marton

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2043) "VOLUME_NOT_FOUND" exception thrown while listing volumes

2019-09-19 Thread Elek, Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-2043.

Resolution: Duplicate

Tested and worked well. HDDS-1926 fixed the same problem IMHO.

> "VOLUME_NOT_FOUND" exception thrown while listing volumes
> -
>
> Key: HDDS-2043
> URL: https://issues.apache.org/jira/browse/HDDS-2043
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI, Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> ozone list volume command throws OMException
> bin/ozone sh volume list --user root
>  VOLUME_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Volume 
> info not found for vol-test-putfile-1566902803
>  
> On enabling DEBUG log , here is the console output :
>  
>  
> {noformat}
> bin/ozone sh volume create /n1 ; echo $?
> 2019-08-27 11:47:16 DEBUG ThriftSenderFactory:33 - Using the UDP Sender to 
> send spans to the agent.
> 2019-08-27 11:47:16 DEBUG SenderResolver:86 - Using sender UdpSender()
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate 
> of successful kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, value=[Rate 
> of failed kerberos logins and latency (milliseconds)])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, 
> value=[GetGroups])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal
>  with annotation 
> @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, 
> valueName=Time, about=, interval=10, type=DEFAULT, value=[Renewal failures 
> since startup])
> 2019-08-27 11:47:16 DEBUG MutableMetricsFactory:43 - field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures 
> with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, 
> always=false, valueName=Time, about=, interval=10, type=DEFAULT, 
> value=[Renewal failures since last successful login])
> 2019-08-27 11:47:16 DEBUG MetricsSystemImpl:231 - UgiMetrics, User and group 
> related metrics
> 2019-08-27 11:47:16 DEBUG SecurityUtil:124 - Setting 
> hadoop.security.token.service.use_ip to true
> 2019-08-27 11:47:16 DEBUG Shell:821 - setsid exited with exit code 0
> 2019-08-27 11:47:16 DEBUG Groups:449 - Creating new Groups object
> 2019-08-27 11:47:16 DEBUG Groups:151 - Group mapping 
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30; warningDeltaMs=5000
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:254 - hadoop login
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:187 - hadoop login commit
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:215 - using local 
> user:UnixPrincipal: root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:221 - Using user: 
> "UnixPrincipal: root" with name root
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:235 - User entry: "root"
> 2019-08-27 11:47:16 DEBUG UserGroupInformation:766 - UGI loginUser:root 
> (auth:SIMPLE)
> 2019-08-27 11:47:16 DEBUG OzoneClientFactory:287 - Using 
> org.apache.hadoop.ozone.client.rpc.RpcClient as client protocol.
> 2019-08-27 11:47:16 DEBUG Server:280 - rpcKind=RPC_PROTOCOL_BUFFER, 
> rpcRequestWrapperClass=class 
> org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, 
> rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@710f4dc7
> 2019-08-27 11:47:16 DEBUG Client:63 - getting client out of cache: 
> org.apache.hadoop.ipc.Client@24313fcc
> 2019-08-27 11:47:16 DEBUG Client:487 - The ping interval is 6 ms.
> 2019-08-27 11:47:16 DEBUG Client:785 - Connecting to 
> nnandi-1.gce.cloudera.com/172.31.117.213:9862
> 2019-08-27 11:47:16 DEBUG Client:1064 - IPC Client (580871917) connection to 
> 

[jira] [Work logged] (HDDS-2151) Ozone client prints the entire request payload in DEBUG level.

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2151?focusedWorklogId=315034&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315034
 ]

ASF GitHub Bot logged work on HDDS-2151:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:09
Start Date: 19/Sep/19 13:09
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1477: HDDS-2151. 
Ozone client logs the entire request payload at DEBUG level
URL: https://github.com/apache/hadoop/pull/1477
 
 
   ## What changes were proposed in this pull request?
   
   Remove byte data from container command request before logging it 
(applicable to `PutSmallFile` and `WriteChunk`).
   
   https://issues.apache.org/jira/browse/HDDS-2151
   
   ## How was this patch tested?
   
   Set root log level to DEBUG in `ozone-shell-log4j.properties`.  Created 
small and large keys via `ozone sh`.  Verified that `ozone-shell.log` contains 
detailed request without actual `data`.
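The fix can be sketched as logging a compact summary of the request instead of the request object itself, so the byte payload is never rendered by `toString()`. A minimal illustrative sketch only: the `Request` class and `summarize` helper below are hypothetical stand-ins, not the actual Ozone protobuf types or the code merged in this PR.

```java
// Hypothetical sketch of "strip the payload before logging":
// Request stands in for ContainerCommandRequestProto. Only a short
// summary string is ever built, so a multi-hundred-MB payload is
// never expanded into a log message.
public class SafeRequestLogging {

    static final class Request {
        final String type;    // e.g. "WriteChunk" or "PutSmallFile"
        final byte[] data;    // the bulk payload to keep out of logs
        Request(String type, byte[] data) {
            this.type = type;
            this.data = data;
        }
    }

    // Log-safe summary: command type plus payload size, never the bytes.
    static String summarize(Request r) {
        return r.type + " (data: " + r.data.length + " bytes)";
    }

    public static void main(String[] args) {
        Request write = new Request("WriteChunk", new byte[16 * 1024 * 1024]);
        // A real caller would log: LOG.debug("sendCommandAsync {}", summarize(write));
        System.out.println(summarize(write)); // prints "WriteChunk (data: 16777216 bytes)"
    }
}
```

Because SLF4J only defers formatting until the level check passes, a `{}` placeholder alone does not help once DEBUG is enabled; summarizing before logging is what keeps the payload out of memory.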
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315034)
Remaining Estimate: 0h
Time Spent: 10m

> Ozone client prints the entire request payload in DEBUG level.
> --
>
> Key: HDDS-2151
> URL: https://issues.apache.org/jira/browse/HDDS-2151
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In XceiverClientRatis.java:221, we have the following snippet where we have a 
> DEBUG line that prints out the entire Container Request proto. 
> {code}
>   ContainerCommandRequestProto finalPayload =
>       ContainerCommandRequestProto.newBuilder(request)
>           .setTraceID(TracingUtil.exportCurrentSpan())
>           .build();
>   boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>   ByteString byteString = finalPayload.toByteString();
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest, finalPayload);
>   return isReadOnlyRequest ?
>       getClient().sendReadOnlyAsync(() -> byteString) :
>       getClient().sendAsync(() -> byteString);
> {code}
> This causes OOM while writing large (~300MB) keys. 
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
>   at java.lang.StringBuilder.append(StringBuilder.java:202)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:75)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:94)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.escapeBytes(TextFormat.java:1836)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:436)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:449)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.access$000(TextFormat.java:307)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.print(TextFormat.java:68)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.printToString(TextFormat.java:148)
>   at 
> 

[jira] [Work logged] (HDDS-2151) Ozone client prints the entire request payload in DEBUG level.

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2151?focusedWorklogId=315035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315035
 ]

ASF GitHub Bot logged work on HDDS-2151:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:09
Start Date: 19/Sep/19 13:09
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1477: HDDS-2151. Ozone 
client logs the entire request payload at DEBUG level
URL: https://github.com/apache/hadoop/pull/1477#issuecomment-533122964
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315035)
Time Spent: 20m  (was: 10m)

> Ozone client prints the entire request payload in DEBUG level.
> --
>
> Key: HDDS-2151
> URL: https://issues.apache.org/jira/browse/HDDS-2151
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In XceiverClientRatis.java:221, we have the following snippet where we have a 
> DEBUG line that prints out the entire Container Request proto. 
> {code}
>   ContainerCommandRequestProto finalPayload =
>       ContainerCommandRequestProto.newBuilder(request)
>           .setTraceID(TracingUtil.exportCurrentSpan())
>           .build();
>   boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>   ByteString byteString = finalPayload.toByteString();
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest, finalPayload);
>   return isReadOnlyRequest ?
>       getClient().sendReadOnlyAsync(() -> byteString) :
>       getClient().sendAsync(() -> byteString);
> {code}
> This causes OOM while writing large (~300MB) keys. 
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
>   at java.lang.StringBuilder.append(StringBuilder.java:202)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:75)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:94)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.escapeBytes(TextFormat.java:1836)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:436)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:449)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.access$000(TextFormat.java:307)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.print(TextFormat.java:68)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.printToString(TextFormat.java:148)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.AbstractMessage.toString(AbstractMessage.java:117)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at 

[jira] [Updated] (HDDS-2151) Ozone client prints the entire request payload in DEBUG level.

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2151:
-
Labels: pull-request-available  (was: )

> Ozone client prints the entire request payload in DEBUG level.
> --
>
> Key: HDDS-2151
> URL: https://issues.apache.org/jira/browse/HDDS-2151
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>
> In XceiverClientRatis.java:221, we have the following snippet where we have a 
> DEBUG line that prints out the entire Container Request proto. 
> {code}
>   ContainerCommandRequestProto finalPayload =
>       ContainerCommandRequestProto.newBuilder(request)
>           .setTraceID(TracingUtil.exportCurrentSpan())
>           .build();
>   boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
>   ByteString byteString = finalPayload.toByteString();
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest, finalPayload);
>   return isReadOnlyRequest ?
>       getClient().sendReadOnlyAsync(() -> byteString) :
>       getClient().sendAsync(() -> byteString);
> {code}
> This causes OOM while writing large (~300MB) keys. 
> {code}
> SLF4J: Failed toString() invocation on an object of type 
> [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
>   at java.lang.StringBuilder.append(StringBuilder.java:202)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:75)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:94)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.escapeBytes(TextFormat.java:1836)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:436)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:449)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.access$000(TextFormat.java:307)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.print(TextFormat.java:68)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.printToString(TextFormat.java:148)
>   at 
> org.apache.ratis.thirdparty.com.google.protobuf.AbstractMessage.toString(AbstractMessage.java:117)
>   at 
> org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at 
> org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at 
> org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:252)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:221)
>   at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:302)
>   at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:601)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:459)
>   at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:240)
>   at 
> 

[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933382#comment-16933382
 ] 

HuangTao commented on HDFS-14849:
-

 !liveBlockIndices.png! 

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode are replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2154) Fix Checkstyle issues

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2154?focusedWorklogId=315062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315062
 ]

ASF GitHub Bot logged work on HDDS-2154:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:51
Start Date: 19/Sep/19 13:51
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1475: HDDS-2154. 
Fix Checkstyle issues
URL: https://github.com/apache/hadoop/pull/1475#issuecomment-533140298
 
 
   +1 (non-binding), thanks for filing the patch
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 315062)
Time Spent: 0.5h  (was: 20m)

> Fix Checkstyle issues
> -
>
> Key: HDDS-2154
> URL: https://issues.apache.org/jira/browse/HDDS-2154
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Unfortunately checkstyle checks didn't work well from HDDS-2106 to HDDS-2119. 
> This patch fixes all the issues which are accidentally merged in the mean 
> time. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Description: 
When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
that datanode are replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
simultaneously.
 
 !scheduleReconstruction.png! 

 !fsck-file.png! 

  was:
When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
that datanode are replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 165-node cluster by decommissioning 100 nodes 
simultaneously.
 
 !scheduleReconstruction.png! 

 !fsck-file.png! 


> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, fsck-file.png, 
> scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode are replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously.
>  
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Description: 
When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
that datanode are replicated infinitely.

// added 2019/09/19
I reproduced this scenario in a 165-node cluster by decommissioning 100 nodes 
simultaneously.
 
 !scheduleReconstruction.png! 

 !fsck-file.png! 

  was:
When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
that datanode are replicated infinitely.




> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, fsck-file.png, 
> scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC blocks on 
> that datanode are replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 165-node cluster by decommissioning 100 nodes 
> simultaneously.
>  
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HuangTao updated HDFS-14849:

Attachment: HDFS-14849.002.patch

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously.
>  
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933266#comment-16933266
 ] 

HuangTao commented on HDFS-14849:
-

[~ferhui] We hit the same scenario, but neither of our fixes passes the other's UT.

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously.
>  
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Created] (HDFS-14855) client always print standbyexception info with multi standby namenode

2019-09-19 Thread Shen Yinjie (Jira)
Shen Yinjie created HDFS-14855:
--

 Summary: client always print standbyexception info with multi 
standby namenode
 Key: HDFS-14855
 URL: https://issues.apache.org/jira/browse/HDFS-14855
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Shen Yinjie
 Attachments: image-2019-09-19-20-04-54-591.png

When a cluster has more than two standby NameNodes, shell commands executed by 
the client print StandbyException info. May we change the log level from INFO 
to DEBUG?
 !image-2019-09-19-20-04-54-591.png! 
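
A minimal, hypothetical sketch of the proposed change (names like `findActive` are illustrative, not the actual HDFS failover-proxy code): the client probes NameNodes in order, each standby throws StandbyException, so with N standbys every shell command logs N exceptions at INFO; demoting that per-probe log to DEBUG (`Level.FINE` here) keeps the shell output clean while preserving the information for debugging:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: simplified stand-in for the real failover proxy logic.
public class FailoverLogSketch {
    private static final Logger LOG = Logger.getLogger("FailoverProxy");

    // Probe each NameNode in order; standbys would throw StandbyException,
    // which we log at DEBUG (Level.FINE) instead of INFO.
    static String findActive(String[] namenodes, String active) {
        for (String nn : namenodes) {
            if (!nn.equals(active)) {
                // Before the proposed change this was an INFO log, printed
                // once per standby NameNode on every shell command.
                LOG.log(Level.FINE, "StandbyException from {0}, failing over", nn);
                continue;
            }
            return nn; // reached the active NameNode
        }
        return null;
    }

    public static void main(String[] args) {
        // With two standbys (nn1, nn2) ahead of the active nn3, the old
        // behaviour printed two StandbyException lines per command.
        System.out.println(findActive(new String[] {"nn1", "nn2", "nn3"}, "nn3"));
    }
}
```

The trade-off debated below still applies: at DEBUG, a user who reaches no active NameNode sees less context about why the command failed.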






[jira] [Commented] (HDFS-14855) client always print standbyexception info with multi standby namenode

2019-09-19 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933313#comment-16933313
 ] 

hemanthboyina commented on HDFS-14855:
--

hi [~shenyinjie], IMO the StandbyException should stay at INFO level.
The user should know why the exception was thrown for this basic command.

> client always print standbyexception info with multi standby namenode
> -
>
> Key: HDFS-14855
> URL: https://issues.apache.org/jira/browse/HDFS-14855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Priority: Major
> Attachments: image-2019-09-19-20-04-54-591.png
>
>
> When cluster has more than two standby namenodes,  client executes shell will 
> print standbyexception info. May we change the log level from INFO to DEBUG,  
>  !image-2019-09-19-20-04-54-591.png! 






[jira] [Assigned] (HDFS-14855) client always print standbyexception info with multi standby namenode

2019-09-19 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie reassigned HDFS-14855:
--

Assignee: Shen Yinjie

> client always print standbyexception info with multi standby namenode
> -
>
> Key: HDFS-14855
> URL: https://issues.apache.org/jira/browse/HDFS-14855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: image-2019-09-19-20-04-54-591.png
>
>
> When cluster has more than two standby namenodes,  client executes shell will 
> print standbyexception info. May we change the log level from INFO to DEBUG,  
>  !image-2019-09-19-20-04-54-591.png! 






[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=315031=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315031
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 19/Sep/19 13:05
Start Date: 19/Sep/19 13:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-533121382
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 2739 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 26 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 735 | HDDS-1564 passed |
   | +1 | compile | 450 | HDDS-1564 passed |
   | +1 | checkstyle | 86 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 1056 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | HDDS-1564 passed |
   | 0 | spotbugs | 527 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 761 | HDDS-1564 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 39 | Maven dependency ordering for patch |
   | +1 | mvninstall | 682 | the patch passed |
   | +1 | compile | 451 | the patch passed |
   | +1 | javac | 451 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 813 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 780 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 330 | hadoop-hdds in the patch passed. |
   | -1 | unit | 4231 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 98 | The patch does not generate ASF License warnings. |
   | | | 13973 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestroy |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cf162ac69e58 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 326b5ac |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/3/testReport/ |
   | Max. process+thread count | 5410 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[jira] [Comment Edited] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread HuangTao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933376#comment-16933376
 ] 

HuangTao edited comment on HDFS-14849 at 9/19/19 1:30 PM:
--

I found a clue:

`chooseSourceDatanodes` returns
{quote}LIVE=2, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=22{quote}
All block indices (0-8) exist; three blocks (3, 4, and 8) have no redundant 
copy; the datanode storing block 8 is DECOMMISSIONING, while the other two 
datanodes' adminState is null.

`countNodes(block)` returns
{quote}LIVE=8, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=16{quote}

So we need to replicate block 8, but there are no more racks available.

Now I have a doubt: why are some blocks replicated more than once instead of 
block 8?


was (Author: marvelrock):
I find a clue:

the `chooseSourceDatanodes` get 
{quote}LIVE=2, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=22{quote}
and all block index (0-8) exists, and three blocks 3/4/8 have no redundant 
block, and the datanode where block 8 stored is in DECOMMISSIONING, other two 
datanode adminState is null. 

the `countNodes(block)` get
{quote}LIVE=8, READONLY=0, DECOMMISSIONING=7, DECOMMISSIONED=0, 
MAINTENANCE_NOT_FOR_READ=0, MAINTENANCE_FOR_READ=0, CORRUPT=0, EXCESS=0, 
STALESTORAGE=0, REDUNDANT=16{quote}

so we need to replicate block 8, but there is no racks anymore.
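
The counting asymmetry above can be illustrated with a small, hypothetical sketch (class and method names are illustrative, not the actual BlockManager API): an EC block index only counts as redundant if at least one of its replicas lives on a fully live datanode, so an index whose sole copy sits on a DECOMMISSIONING node keeps being scheduled for reconstruction:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: simplified stand-in for the NameNode's redundancy check.
public class EcRedundancySketch {

    // replicasByIndex maps an EC block index (0-8 for RS-6-3) to the admin
    // states of the datanodes holding that index ("LIVE" or "DECOMMISSIONING").
    static Set<Integer> indicesNeedingReconstruction(
            Map<Integer, List<String>> replicasByIndex) {
        Set<Integer> needed = new HashSet<>();
        for (Map.Entry<Integer, List<String>> e : replicasByIndex.entrySet()) {
            // An index counts as redundant only if some replica is on a LIVE node.
            boolean hasLive = e.getValue().contains("LIVE");
            if (!hasLive) {
                needed.add(e.getKey());
            }
        }
        return needed;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> replicas = new HashMap<>();
        for (int i = 0; i < 9; i++) {
            replicas.put(i, new ArrayList<>(List.of("LIVE")));
        }
        // Block index 8 exists only on a decommissioning datanode.
        replicas.put(8, new ArrayList<>(List.of("DECOMMISSIONING")));
        System.out.println(indicesNeedingReconstruction(replicas)); // [8]
    }
}
```

Under this model, index 8 is flagged on every scan; if the scheduler then cannot place the new replica (no rack available), the block is re-queued indefinitely, matching the reported behaviour.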


> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933399#comment-16933399
 ] 

Hadoop QA commented on HDFS-14849:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}124m 
44s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:39e82acc485 |
| JIRA Issue | HDFS-14849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980709/HDFS-14849.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f069cb69415f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 28913f7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27909/testReport/ |
| Max. process+thread count | 2743 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27909/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> 

[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933412#comment-16933412
 ] 

Hadoop QA commented on HDFS-14849:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDFS-14849 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14849 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27910/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Commented] (HDFS-14849) Erasure Coding: replicate block infinitely when datanode being decommissioning

2019-09-19 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933429#comment-16933429
 ] 

Fei Hui commented on HDFS-14849:


[~marvelrock] Thanks for your patch. 
Could you please give more detail on why the block is replicated infinitely?
I want to check whether it is the same scenario as HDFS-14847.

> Erasure Coding: replicate block infinitely when datanode being decommissioning
> --
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC block in 
> that datanode will be replicated infinitely.
> // added 2019/09/19
> I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 






[jira] [Work logged] (HDDS-2151) Ozone client prints the entire request payload in DEBUG level.

2019-09-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2151?focusedWorklogId=315094=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315094
 ]

ASF GitHub Bot logged work on HDDS-2151:


Author: ASF GitHub Bot
Created on: 19/Sep/19 14:36
Start Date: 19/Sep/19 14:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1477: HDDS-2151. Ozone 
client logs the entire request payload at DEBUG level
URL: https://github.com/apache/hadoop/pull/1477#issuecomment-533160601
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1999 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 47 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 198 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 51 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 234 | hadoop-hdds in the patch passed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 5167 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1477 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7cb4771f2bb4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d4205dc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1477/1/testReport/ |
   | Max. process+thread count | 498 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client U: hadoop-hdds/client |
   | Console output | 
