[jira] [Work logged] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?focusedWorklogId=311874&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311874
 ]

ASF GitHub Bot logged work on HDDS-2117:


Author: ASF GitHub Bot
Created on: 13/Sep/19 04:54
Start Date: 13/Sep/19 04:54
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on issue #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430#issuecomment-531097629
 
 
   @bshashikant Thanks for working on this! Can we also check if the 
TestContainerSmallFile failure is related?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311874)
Time Spent: 50m  (was: 40m)

> ContainerStateMachine#writeStateMachineData times out
> -----------------------------------------------------
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails in 
> case two writeChunk calls get executed in parallel, and the runtime exception 
> thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}
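For context on the failure mode: when the supplier passed to CompletableFuture.supplyAsync throws, the exception is captured in that future, but unless some stage forwards it to the future handed back to Ratis, the caller waits until it times out. A minimal, self-contained sketch of forwarding the failure explicitly (stand-in names, not the committed patch):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WriteChunkFutureSketch {
  // Stand-in for runCommand(); the real dispatcher may throw
  // IllegalArgumentException when the precondition above fails.
  static String runCommand() {
    throw new IllegalArgumentException("container already in container2BCSIDMap");
  }

  public static void main(String[] args) {
    ExecutorService chunkExecutor = Executors.newSingleThreadExecutor();
    CompletableFuture<String> raftFuture = new CompletableFuture<>();
    CompletableFuture.supplyAsync(() -> {
      try {
        return runCommand();
      } catch (RuntimeException e) {
        // Forward the failure so the caller does not hang waiting on
        // raftFuture -- the timeout this issue describes.
        raftFuture.completeExceptionally(e);
        throw e;
      }
    }, chunkExecutor).thenAccept(raftFuture::complete);
    raftFuture.whenComplete((r, t) ->
        System.out.println(t != null ? "completed exceptionally: " + t : "ok: " + r));
    chunkExecutor.shutdown();
  }
}
{code}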



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-12 Thread Elek, Marton (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928993#comment-16928993
 ] 

Elek, Marton commented on HDDS-2121:


Thanks for opening this issue [~arp] 

1. HDDS-2120 removes the included Hadoop classes from the current jar (yes, 
it's very short, so it can also be merged into this one)
2. I agree with the idea: we can shade (package-relocate) all the remaining 3rd 
party classes inside the current jar to be sure they are not conflicting.


P.S.: the legacy jar is still required to support older versions of Spark/Hadoop (if we 
would like to support them...)



> Create a shaded ozone filesystem (client) jar
> ---------------------------------------------
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Priority: Blocker
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add the Ozone filesystem jar to the classpath; however, we 
> want the Hadoop ecosystem dependencies to be `provided` and already 
> expected to be in the client classpath.
> Note that this is different from the legacy jar, which bundles a shaded Hadoop 
> 3.2.0.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928988#comment-16928988
 ] 

Hadoop QA commented on HDFS-14833:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 12s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:f4f9f0fe4f2 |
| JIRA Issue | HDFS-14833 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980231/HDFS-14833-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3624acab3d50 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4852a90 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27861/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27861/testReport/ |
| Max. process+thread count | 1593 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14833:

Attachment: HDFS-14833-02.patch

> RBF: Router Update Doesn't Sync Quota
> -------------------------------------
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch, HDFS-14833-02.patch
>
>
> HDFS-14777 added a check to prevent the RPC call; it checks whether the 
> quota is changing in the present state. 
> But it ignores the case where the locations are changed: if the location is 
> changed, the new destination should be synchronized with the mount entry 
> quota. 
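To make the intended condition concrete, here is a hypothetical sketch (illustrative names, not the actual RouterAdminServer code) of the check the report argues for: the quota-sync RPC should be skipped only when neither the quota nor the destination locations changed.

{code:java}
import java.util.List;
import java.util.Objects;

class QuotaSyncSketch {
  static final class MountEntry {
    final List<String> destinations;
    final long nsQuota;
    final long ssQuota;
    MountEntry(List<String> destinations, long nsQuota, long ssQuota) {
      this.destinations = destinations;
      this.nsQuota = nsQuota;
      this.ssQuota = ssQuota;
    }
  }

  /** True when the router must push quota to the (possibly new) destinations. */
  static boolean needsQuotaSync(MountEntry oldEntry, MountEntry newEntry) {
    boolean quotaChanged = oldEntry.nsQuota != newEntry.nsQuota
        || oldEntry.ssQuota != newEntry.ssQuota;
    // The point of this issue: a changed location list alone must also
    // trigger the sync, so the new destination inherits the mount quota.
    boolean locationsChanged =
        !Objects.equals(oldEntry.destinations, newEntry.destinations);
    return quotaChanged || locationsChanged;
  }
}
{code}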



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928978#comment-16928978
 ] 

Ayush Saxena commented on HDFS-14833:
-

Test failures are unrelated. Checkstyle issues shall be fixed in the next patch.
 [~elgoiri], can you help review?

> RBF: Router Update Doesn't Sync Quota
> -------------------------------------
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch
>
>
> HDFS-14777 added a check to prevent the RPC call; it checks whether the 
> quota is changing in the present state. 
> But it ignores the case where the locations are changed: if the location is 
> changed, the new destination should be synchronized with the mount entry 
> quota. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14837) Review of Block.java

2019-09-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928975#comment-16928975
 ] 

Íñigo Goiri commented on HDFS-14837:


We may want to keep the comment in hashCode().
I think it is relevant to mention that we ignore the generation stamp there.
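For readers following along, the comment in question documents a deliberate asymmetry. A stripped-down sketch (not the actual patch) of what Block's identity methods look like when the generation stamp is excluded:

{code:java}
class Block {
  private final long blockId;
  private final long genStamp; // kept on the class but ignored below

  Block(long blockId, long genStamp) {
    this.blockId = blockId;
    this.genStamp = genStamp;
  }

  @Override
  public int hashCode() {
    // Deliberately ignores genStamp: all generations of a block must hash
    // to the same bucket -- the comment worth keeping.
    return Long.hashCode(blockId);
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof Block && ((Block) o).blockId == blockId;
  }
}
{code}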

> Review of Block.java
> --------------------
>
> Key: HDFS-14837
> URL: https://issues.apache.org/jira/browse/HDFS-14837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14837.1.patch, HDFS-14837.2.patch
>
>
> The {{Block}} class is such a core class in the project that I just wanted to 
> make sure it was super clean and the documentation was correct.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14768) In some cases, erasure blocks are corrupted when they are reconstructed.

2019-09-12 Thread guojh (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928962#comment-16928962
 ] 

guojh commented on HDFS-14768:
--

[~zhaoyim] Yes, after you sleep 3 min, the NN will reconstruct a new, correct block, 
because I stop the DN holding the block with index 6. But in production, because two 
index-6 blocks exist, the NN may delete the correct one. If a data block is then 
missing, the incorrect block will be used in the encoding that regenerates the data 
block; after that, the file is corrupted and not usable.

> In some cases, erasure blocks are corrupted when they are reconstructed.
> -------------------------------------------------------------------------
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Fix For: 3.3.0
>
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 
> HDFS-14768.000.patch, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is Hadoop 3.0.2.
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8]. We decommission 
> indices [3,4] and increase the index-6 datanode's 
> pendingReplicationWithoutTargets so that it is larger than 
> replicationStreamsHardLimit (we set 14). Then, after 
> BlockManager#chooseSourceDatanodes, liveBlockIndices is 
> [0,1,2,3,4,5,7,8] and the block counters are Live: 7, Decommission: 2. 
> In BlockManager#scheduleReconstruction, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target datanodes, it will assign an 
> erasure-coding task to the target datanode.
> When the datanode gets the task, it builds targetIndices from liveBlockIndices 
> and the target length. The code is below.
> {code:java}
> targetIndices = new short[targets.length];
> private void initTargetIndices() {
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0;
>   hasValidTargets = false;
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short) i;
>           hasValidTargets = true;
>         }
>       }
>     }
>   }
> }
> {code}
> targetIndices[0] = 6, and targetIndices[1] is always 0 from its initial value.
> The StripedReader always creates readers from the first six block indices, 
> i.e. [0,1,2,3,4,5].
> Using indices [0,1,2,3,4,5] to build target indices [6,0] triggers the ISA-L 
> bug: block index 6's data is corrupted (all data is zero).
> I wrote a unit test that reproduces this reliably (a minimal standalone 
> illustration of the index construction follows this message).
> {code:java}
> // code placeholder
> private int replicationStreamsHardLimit = 
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_STREAMS_HARD_LIMIT_DEFAULT;
> numDNs = dataBlocks + parityBlocks + 10;
> @Test(timeout = 24)
> public void testFileDecommission() throws Exception {
>   LOG.info("Starting test testFileDecommission");
>   final Path ecFile = new Path(ecDir, "testFileDecommission");
>   int writeBytes = cellSize * dataBlocks;
>   writeStripedFile(dfs, ecFile, writeBytes);
>   Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
>   FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);
>   final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
>   .getINode4Write(ecFile.toString()).asFile();
>   LocatedBlocks locatedBlocks =
>   StripedFileTestUtil.getLocatedBlocks(ecFile, dfs);
>   LocatedBlock lb = dfs.getClient().getLocatedBlocks(ecFile.toString(), 0)
>   .get(0);
>   DatanodeInfo[] dnLocs = lb.getLocations();
>   LocatedStripedBlock lastBlock =
>   (LocatedStripedBlock)locatedBlocks.getLastLocatedBlock();
>   DatanodeInfo[] storageInfos = lastBlock.getLocations();
>   //
>   DatanodeDescriptor datanodeDescriptor = 
> cluster.getNameNode().getNamesystem()
>   
> .getBlockManager().getDatanodeManager().getDatanode(storageInfos[6].getDatanodeUuid());
>   BlockInfo firstBlock = fileNode.getBlocks()[0];
>   DatanodeStorageInfo[] dStorageInfos = bm.getStorages(firstBlock);
>   // the first heartbeat will consume 3 replica tasks
>   for (int i = 0; i <= replicationStreamsHardLimit + 3; i++) {
> BlockManagerTestUtil.addBlockToBeReplicated(datanodeDescriptor, new 
> Block(i),
> new DatanodeStorageInfo[]{dStorageInfos[0]});
>   }
>   assertEquals(dataBlocks + parityBlocks, dnLocs.length);
>   int[] decommNodeIndex = {3, 4};
> final List<DatanodeInfo> decommisionNodes = new ArrayList<>();
>   // add the node which will be decommissioning
>   decommisionNodes.add(dnLocs[decommNodeIndex[0]]);
>   decommisionNodes.add(dnLocs[decommNodeIndex[1]]);
>  
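The promised illustration: a self-contained sketch of the quoted index-construction loop, assuming the scenario's liveBlockIndices ([0,1,2,3,4,5,7,8]) and two requested targets (simplified: the getBlockLen check is omitted):

{code:java}
import java.util.Arrays;
import java.util.BitSet;

public class TargetIndicesSketch {
  public static void main(String[] args) {
    int dataBlkNum = 6, parityBlkNum = 3;
    // liveBlockIndices from the scenario: index 6 is the only missing one.
    int[] liveIndices = {0, 1, 2, 3, 4, 5, 7, 8};
    BitSet live = new BitSet();
    for (int i : liveIndices) {
      live.set(i);
    }
    // additionalReplRequired = 9 - 7 = 2, so two target slots are allocated
    // even though only one internal block is actually missing.
    short[] targetIndices = new short[2];
    int m = 0;
    for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {
      if (!live.get(i) && m < targetIndices.length) {
        targetIndices[m++] = (short) i;
      }
    }
    // targetIndices[1] keeps its default 0 -- a LIVE index -- yielding the
    // [6, 0] target pair the reporter says trips the ISA-L encoder.
    System.out.println(Arrays.toString(targetIndices)); // prints [6, 0]
  }
}
{code}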

[jira] [Commented] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928961#comment-16928961
 ] 

Hadoop QA commented on HDFS-14846:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
48m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
32s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1436/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1436 |
| JIRA Issue | HDFS-14846 |
| Optional Tests | dupname asflicense compile cc mvnsite javac unit |
| uname | Linux 404d9186ce3c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4852a90 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1436/1/testReport/ |
| Max. process+thread count | 1392 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1436/1/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> libhdfs tests are failing on trunk due to jni usage bugs
> --------------------------------------------------------
>
> Key: HDFS-14846
> URL: 

[jira] [Commented] (HDFS-14837) Review of Block.java

2019-09-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928955#comment-16928955
 ] 

Hadoop QA commented on HDFS-14837:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client: The patch 
generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:f4f9f0fe4f2 |
| JIRA Issue | HDFS-14837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980229/HDFS-14837.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0bdbc489e256 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4852a90 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27860/testReport/ |
| Max. process+thread count | 447 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27860/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311818
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:39
Start Date: 12/Sep/19 23:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1435: HDDS-2119. Use 
checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
validation.
URL: https://github.com/apache/hadoop/pull/1435#issuecomment-531046239
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 136 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 2041 | trunk passed |
   | +1 | compile | 1366 | trunk passed |
   | -1 | mvnsite | 1285 | root in trunk failed. |
   | +1 | shadedclient | 5556 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 376 | root in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | root in the patch failed. |
   | -1 | mvninstall | 18 | hadoop-ozone in the patch failed. |
   | -1 | mvninstall | 16 | build-tools in the patch failed. |
   | -1 | compile | 20 | root in the patch failed. |
   | -1 | javac | 20 | root in the patch failed. |
   | -1 | mvnsite | 20 | root in the patch failed. |
   | -1 | whitespace | 0 | The patch has 5 line(s) with tabs. |
   | +1 | xml | 7 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 856 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | root in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 23 | root in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 7318 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1435 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux f31165afa566 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4852a90 |
   | Default Java | 1.8.0_212 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/branch-mvnsite-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/branch-javadoc-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-mvninstall-root.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-mvninstall-hadoop-ozone_build-tools.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-mvnsite-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/whitespace-tabs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-javadoc-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/testReport/ |
   | Max. process+thread count | 333 (vs. ulimit of 5500) |
   | modules | C: . hadoop-ozone hadoop-ozone/build-tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1435/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311817
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:39
Start Date: 12/Sep/19 23:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r323990902
 
 

 ##
 File path: 
hadoop-ozone/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+[... the remaining added lines of the stylesheet were stripped of their XML tags by the mail archive ...]

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311814&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311814
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:38
Start Date: 12/Sep/19 23:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r323990878
 
 

 ##
 File path: 
hadoop-ozone/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+[... the remaining added lines of the stylesheet were stripped of their XML tags by the mail archive ...]

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311815&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311815
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:38
Start Date: 12/Sep/19 23:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r323990885
 
 

 ##
 File path: 
hadoop-ozone/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+[... the remaining added lines of the stylesheet were stripped of their XML tags by the mail archive ...]

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311816
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:39
Start Date: 12/Sep/19 23:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r323990891
 
 

 ##
 File path: 
hadoop-ozone/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+[... the remaining added lines of the stylesheet were stripped of their XML tags by the mail archive ...]

[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311813&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311813
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:38
Start Date: 12/Sep/19 23:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435#discussion_r323990870
 
 

 ##
 File path: 
hadoop-ozone/build-tools/src/main/resources/checkstyle/checkstyle-noframes-sorted.xsl
 ##
 @@ -0,0 +1,189 @@
+<?xml version="1.0"?>
+[... the rest of this message was truncated in the archive ...]
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the Hadoop parent pom, so we have to 
> use a separate checkstyle.xml and suppressions.xml in the hdds/ozone projects for 
> checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14837) Review of Block.java

2019-09-12 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928937#comment-16928937
 ] 

David Mollitor commented on HDFS-14837:
---

[~elgoiri] Thanks for the feedback!

{{EqualsBuilder}} and {{HashCodeBuilder}} are overkill here because there are 
only one or two fields to consider.  Also, this {{Block}} class is used in many 
{{HashMap}}s and {{HashSet}}s, so the performance of the hash code and equals 
matters here.

I did put back in some comments.  I hope they help.  Feel free to update them to 
something that is meaningful and helpful to you after the patch is committed.
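As a rough illustration of the trade-off being described (assuming commons-lang3 on the classpath; the class and method names here are illustrative, not from the patch):

{code:java}
import org.apache.commons.lang3.builder.HashCodeBuilder;

class HashStyleSketch {
  private final long id;

  HashStyleSketch(long id) {
    this.id = id;
  }

  // Builder style: allocates a HashCodeBuilder object on every call.
  int builderHash() {
    return new HashCodeBuilder(17, 37).append(id).toHashCode();
  }

  // Direct style: allocation-free and trivially inlined -- preferable for a
  // one-field key used heavily in HashMap/HashSet, as argued above.
  int directHash() {
    return Long.hashCode(id);
  }
}
{code}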

> Review of Block.java
> --------------------
>
> Key: HDFS-14837
> URL: https://issues.apache.org/jira/browse/HDFS-14837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14837.1.patch, HDFS-14837.2.patch
>
>
> The {{Block}} class is such a core class in the project that I just wanted to 
> make sure it was super clean and the documentation was correct.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14837) Review of Block.java

2019-09-12 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14837:
--
Attachment: HDFS-14837.2.patch

> Review of Block.java
> --------------------
>
> Key: HDFS-14837
> URL: https://issues.apache.org/jira/browse/HDFS-14837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14837.1.patch, HDFS-14837.2.patch
>
>
> The {{Block}} class is such a core class in the project that I just wanted to 
> make sure it was super clean and the documentation was correct.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14837) Review of Block.java

2019-09-12 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14837:
--
Status: Open  (was: Patch Available)

> Review of Block.java
> --------------------
>
> Key: HDFS-14837
> URL: https://issues.apache.org/jira/browse/HDFS-14837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14837.1.patch, HDFS-14837.2.patch
>
>
> The {{Block}} class is such a core class in the project that I just wanted to 
> make sure it was super clean and the documentation was correct.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14837) Review of Block.java

2019-09-12 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14837:
--
Status: Patch Available  (was: Open)

> Review of Block.java
> --------------------
>
> Key: HDFS-14837
> URL: https://issues.apache.org/jira/browse/HDFS-14837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14837.1.patch, HDFS-14837.2.patch
>
>
> The {{Block}} class is such a core class in the project that I just wanted to 
> make sure it was super clean and the documentation was correct.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2089) Add CLI createPipeline

2019-09-12 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-2089:
-
Fix Version/s: HDDS-1564
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~timmylicheng] for the contribution. I've merged the PR to the feature 
branch. 

> Add CLI createPipeline
> ----------------------
>
> Key: HDDS-2089
> URL: https://issues.apache.org/jira/browse/HDDS-2089
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: HDDS-1564
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add an SCMCLI command to create a pipeline for Ozone.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?focusedWorklogId=311807&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311807
 ]

ASF GitHub Bot logged work on HDDS-2117:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:06
Start Date: 12/Sep/19 23:06
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430#discussion_r323984092
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -706,9 +711,15 @@ public void notifyIndexUpdate(long term, long index) {
   // Ensure the command gets executed in a separate thread than
   // stateMachineUpdater thread which is calling applyTransaction here.
  CompletableFuture<ContainerCommandResponseProto> future =
-  CompletableFuture.supplyAsync(
-  () -> runCommand(requestProto, builder.build()),
-  getCommandExecutor(requestProto));
+  CompletableFuture.supplyAsync(() -> {
+try {
+  return runCommand(requestProto, builder.build());
+} catch (Exception e) {
+  e.printStackTrace();
 
 Review comment:
   same as above. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311807)
Time Spent: 40m  (was: 0.5h)

> ContainerStateMachine#writeStateMachineData times out
> -----------------------------------------------------
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails in 
> case two writeChunk calls get executed in parallel, and the runtime exception 
> thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2117) ContainerStateMachine#writeStateMachineData times out

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2117?focusedWorklogId=311805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311805
 ]

ASF GitHub Bot logged work on HDDS-2117:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:04
Start Date: 12/Sep/19 23:04
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430#discussion_r323983697
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -435,13 +435,18 @@ private ExecutorService getCommandExecutor(
 .setStage(DispatcherContext.WriteChunkStage.WRITE_DATA)
 .setContainer2BCSIDMap(container2BCSIDMap)
 .build();
+CompletableFuture<Message> raftFuture = new CompletableFuture<>();
 // ensure the write chunk happens asynchronously in writeChunkExecutor pool
 // thread.
CompletableFuture<ContainerCommandResponseProto> writeChunkFuture =
-CompletableFuture.supplyAsync(() ->
-runCommand(requestProto, context), chunkExecutor);
-
-CompletableFuture raftFuture = new CompletableFuture<>();
+CompletableFuture.supplyAsync(() -> {
+  try {
+return runCommand(requestProto, context);
+  } catch (Exception e) {
+e.printStackTrace();
 
 Review comment:
   Can we remove this, or add the exception as a trace log instead?
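   A sketch of what that suggestion could look like (SLF4J assumed; the class name and runLogged helper are stand-ins, not code from the PR):

{code:java}
import java.util.function.Supplier;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RunCommandLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(RunCommandLogging.class);

  static <T> T runLogged(Supplier<T> command) {
    try {
      return command.get();
    } catch (RuntimeException e) {
      // Log at trace instead of printStackTrace(), then rethrow so the
      // surrounding CompletableFuture still completes exceptionally.
      LOG.trace("runCommand failed", e);
      throw e;
    }
  }
}
{code}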
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311805)
Time Spent: 0.5h  (was: 20m)

> ContainerStateMachine#writeStateMachineData times out
> -----------------------------------------------------
>
> Key: HDDS-2117
> URL: https://issues.apache.org/jira/browse/HDDS-2117
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The issue seems to be happening because the below precondition check fails in 
> case two writeChunk calls get executed in parallel, and the runtime exception 
> thrown is not handled correctly in ContainerStateMachine.
>  
> HddsDispatcher.java:239
> {code:java}
> Preconditions
> .checkArgument(!container2BCSIDMap.containsKey(containerID));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2089) Add CLI createPipeline

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2089?focusedWorklogId=311804&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311804
 ]

ASF GitHub Bot logged work on HDDS-2089:


Author: ASF GitHub Bot
Created on: 12/Sep/19 23:01
Start Date: 12/Sep/19 23:01
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1418: HDDS-2089: 
Add createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311804)
Time Spent: 1h 20m  (was: 1h 10m)

> Add CLI createPipeline
> ----------------------
>
> Key: HDDS-2089
> URL: https://issues.apache.org/jira/browse/HDDS-2089
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add an SCMCLI command to create a pipeline for Ozone.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-12 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928913#comment-16928913
 ] 

CR Hota commented on HDFS-14090:


[~elgoiri] Thanks for the final review.

[~brahmareddy] [~aajisaka] [~xkrogen] [~hexiaoqiao] [~linyiqun] [~tanyuxin] 
Gentle ping.

Let me know if you folks have any final thoughts on v014.patch. I am trying to 
see if we can target this for the 3.3 release.

> RBF: Improved isolation for downstream name nodes. {Static}
> -----------------------------------------------------------
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, 
> HDFS-14090.012.patch, HDFS-14090.013.patch, HDFS-14090.014.patch, RBF_ 
> Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact of clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same will start reflecting on the router. As a result of this, clients 
> connecting to healthy/faster name nodes will also slow down, as the same rpc queue 
> is maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss how we 
> can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject/send error requests after a 
> certain threshold (see the sketch after this message). 
> This won't be a simple change, as the router's 'Server' layer would need redesign 
> and implementation. Currently this layer is the same as the name node's.
> Opening this ticket to discuss, design and implement this feature.
>  
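A hypothetical sketch of that second option, a per-name-node limiter that rejects once a threshold is hit (names and structure are illustrative, not the eventual design):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Illustrative per-nameservice permit limiter for router call handling. */
class PerNameserviceLimiter {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int permitsPerNameservice;

  PerNameserviceLimiter(int permitsPerNameservice) {
    this.permitsPerNameservice = permitsPerNameservice;
  }

  /** Reject (rather than queue) calls for an overloaded downstream NN. */
  boolean tryAcquire(String nsId) {
    return permits
        .computeIfAbsent(nsId, k -> new Semaphore(permitsPerNameservice))
        .tryAcquire();
  }

  /** Return the permit once the downstream call completes. */
  void release(String nsId) {
    Semaphore s = permits.get(nsId);
    if (s != null) {
      s.release();
    }
  }
}
{code}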



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-12 Thread Sahil Takiar (Jira)
Sahil Takiar created HDFS-14846:
---

 Summary: libhdfs tests are failing on trunk due to jni usage bugs
 Key: HDFS-14846
 URL: https://issues.apache.org/jira/browse/HDFS-14846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs, native
Reporter: Sahil Takiar
Assignee: Sahil Takiar


While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
trunk (both on hadoop-yetus and locally). I did some digging and found out that 
the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able to 
pinpoint what caused this regression, but my best guess is that an upgrade in 
the JDK we use in hadoop-yetus started causing these failures. I looked back at 
some old JIRAs and it looks like the tests work on Java 1.8.0_212, but yetus is 
running 1.8.0_222 (as is my local env) (I couldn't confirm this theory because 
I'm having trouble installing 1.8.0_212 next to 1.8.0_222 on my Ubuntu 
machine) (even after re-winding the commit history back to a known good commit 
where the libhdfs tests passed, the tests still fail, so I don't think a code change 
caused the regressions).

The failures are a bunch of "FATAL ERROR in native method: Bad global or local 
ref passed to JNI" errors. After doing some debugging, it looks like 
{{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
{{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
have some checks to avoid this, but it looks like they don't work as expected).

There are a few places in the libhdfs code where this pattern causes a crash, 
as well as one place in {{JniBasedUnixGroupsMapping}}.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928895#comment-16928895
 ] 

Hadoop QA commented on HDFS-14833:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 18s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:f4f9f0fe4f2 |
| JIRA Issue | HDFS-14833 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980225/HDFS-14833-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 98d84ec6af42 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4852a90 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27859/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 

[jira] [Commented] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-09-12 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928891#comment-16928891
 ] 

Hadoop QA commented on HDFS-14778:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 121 unchanged - 0 fixed = 127 total (was 121) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:f4f9f0fe4f2 |
| JIRA Issue | HDFS-14778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980223/HDFS-14778.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 74295cae275e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1505d3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27858/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27858/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27858/testReport/ |
| Max. process+thread count | 2882 (vs. ulimit of 5500) |

[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2119:
--
Status: Patch Available  (was: Open)

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?focusedWorklogId=311705=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311705
 ]

ASF GitHub Bot logged work on HDDS-2119:


Author: ASF GitHub Bot
Created on: 12/Sep/19 20:46
Start Date: 12/Sep/19 20:46
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1435: 
HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for 
checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435
 
 
   After #1423, hdds/ozone no longer relies on the hadoop parent pom, so we have 
to use separate checkstyle.xml and suppressions.xml in the hdds/ozone projects 
for checkstyle validation.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311705)
Remaining Estimate: 0h
Time Spent: 10m

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2119) Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2119:
-
Labels: pull-request-available  (was: )

> Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle 
> validation
> 
>
> Key: HDDS-2119
> URL: https://issues.apache.org/jira/browse/HDDS-2119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>
> After HDDS-2106, hdds/ozone no longer relies on the hadoop parent pom, so we 
> have to use separate checkstyle.xml and suppressions.xml in the hdds/ozone 
> projects for checkstyle validation.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes. {Static}

2019-09-12 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928872#comment-16928872
 ] 

Íñigo Goiri commented on HDFS-14090:


+1 on  [^HDFS-14090.014.patch].

> RBF: Improved isolation for downstream name nodes. {Static}
> ---
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, HDFS-14090-HDFS-13891.003.patch, 
> HDFS-14090-HDFS-13891.004.patch, HDFS-14090-HDFS-13891.005.patch, 
> HDFS-14090.006.patch, HDFS-14090.007.patch, HDFS-14090.008.patch, 
> HDFS-14090.009.patch, HDFS-14090.010.patch, HDFS-14090.011.patch, 
> HDFS-14090.012.patch, HDFS-14090.013.patch, HDFS-14090.014.patch, RBF_ 
> Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs 
> unhealthy clusters.
> For example - if there are 2 name nodes downstream and one of them is 
> heavily loaded, with calls spiking rpc queue times, back pressure makes the 
> same slowdown show up on the router. As a result, clients connecting to 
> healthy/faster name nodes also slow down, as the same rpc queue is 
> maintained for all calls at the router layer. Essentially the same IPC 
> thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain a rate limiter configured for 
> each name node and let routers drop/reject/send error responses after a 
> certain threshold (see the sketch below).
> This won't be a simple change, as the router's 'Server' layer would need a 
> redesign and reimplementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
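A minimal sketch of the per-name-node rate-limiter idea from the description, 
using a plain permit pool. The class and method names are illustrative, not 
the actual patch:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Illustrative per-nameservice permit pool; not the HDFS-14090 patch. */
public class NsPermitLimiter {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int permitsPerNs;

  public NsPermitLimiter(int permitsPerNs) {
    this.permitsPerNs = permitsPerNs;
  }

  /** Try to take a permit for the target nameservice; reject, don't queue. */
  public boolean tryAcquire(String nsId) {
    return permits
        .computeIfAbsent(nsId, k -> new Semaphore(permitsPerNs))
        .tryAcquire();
  }

  public void release(String nsId) {
    Semaphore s = permits.get(nsId);
    if (s != null) {
      s.release();
    }
  }
}
{code}

A caller that fails {{tryAcquire()}} would reject the RPC with a retriable 
error, so one overloaded subcluster cannot exhaust the shared handler pool.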



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) HDFS RBF stabilization phase I

2019-09-12 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928857#comment-16928857
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

Once again thanks to all.

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF (Router Based Federation) shipped in 3.0+ and 2.9.
> Now that it's out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) HDFS RBF stabilization phase I

2019-09-12 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928856#comment-16928856
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

Hopefully I set the fix version for all the jiras under this umbrella. Hence, 
going to close this now.

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF (Router Based Federation) shipped in 3.0+ and 2.9.
> Now that it's out, various corner cases, scale and error handling issues are 
> surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14526) RBF: Update the document of RBF related metrics

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14526:

Fix Version/s: 3.3.0

> RBF: Update the document of RBF related metrics
> ---
>
> Key: HDFS-14526
> URL: https://issues.apache.org/jira/browse/HDFS-14526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14526-HDFS-13891.1.patch, 
> HDFS-14526-HDFS-13891.2.patch, HDFS-14526-HDFS-13891.3.patch, 
> federationmetrics_v1.png
>
>
> This is a follow-on task of HDFS-14508. We need to update 
> {{HDFSRouterFederation.md#Metrics}} and {{Metrics.md}}.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14545:

Fix Version/s: 3.3.0

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891-10.patch, HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14550) RBF: Failed to get statistics from NameNodes before 2.9.0

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14550:

Fix Version/s: 3.3.0

> RBF: Failed to get statistics from NameNodes before 2.9.0
> -
>
> Key: HDFS-14550
> URL: https://issues.apache.org/jira/browse/HDFS-14550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14550-HDFS-13891.001.patch
>
>
> DFSRouter fails to get stats from NameNodes that do not have HDFS-7877 (a 
> defensive sketch follows the trace):
> {noformat}
> 2019-06-03 17:40:15,407 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Cannot get stat from nn1:nn01:8022 using JMX
> org.codehaus.jettison.json.JSONException: 
> JSONObject["NumInMaintenanceLiveDataNodes"] not found.
> at org.codehaus.jettison.json.JSONObject.get(JSONObject.java:360)
> at org.codehaus.jettison.json.JSONObject.getInt(JSONObject.java:421)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateJMXParameters(NamenodeHeartbeatService.java:345)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:278)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:206)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:160)
> at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
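The failing call is {{getInt()}}, which throws when the attribute is absent on 
pre-HDFS-7877 NameNodes. A minimal defensive sketch against the jettison 
{{JSONObject}} from the trace (the helper name is illustrative):

{code:java}
import org.codehaus.jettison.json.JSONObject;

final class JmxValues {
  // Older NameNodes do not publish some bean attributes; fall back to a
  // default instead of letting getInt() throw JSONException.
  static int getIntOrDefault(JSONObject jmx, String key, int dflt) {
    return jmx.has(key) ? jmx.optInt(key, dflt) : dflt;
  }
}
{code}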



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14508:

Fix Version/s: 3.3.0

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch, HDFS-14508-HDFS-13891.3.patch, 
> HDFS-14508-HDFS-13891.4.patch, HDFS-14508-HDFS-13891.5.patch
>
>
> Router UI has tags that are not used or are incorrectly set. The code should 
> be cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13480) RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13480:

Fix Version/s: 3.3.0

> RBF: Separate namenodeHeartbeat and routerHeartbeat to different config key
> ---
>
> Key: HDFS-13480
> URL: https://issues.apache.org/jira/browse/HDFS-13480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13480-HDFS-13891-05.patch, 
> HDFS-13480-HDFS-13891-06.patch, HDFS-13480-HDFS-13891-07.patch, 
> HDFS-13480-HDFS-13891-08.patch, HDFS-13480.001.patch, HDFS-13480.002.patch, 
> HDFS-13480.002.patch, HDFS-13480.003.patch, HDFS-13480.004.patch
>
>
> Now, if I enable heartbeat.enable but do not want to monitor any 
> namenode, I get an ERROR log like:
> {code:java}
> [2018-04-19T14:00:03.057+08:00] [ERROR] 
> federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
> enabled but there are no namenodes to monitor
> {code}
> and if I disable heartbeat.enable, we cannot get any mount table updates, 
> because of the following logic in Router.java (a sketch separating the two 
> flags follows the block):
> {code:java}
> if (conf.getBoolean(
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
> RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {
>   // Create status updater for each monitored Namenode
>   this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
>   for (NamenodeHeartbeatService hearbeatService :
>   this.namenodeHeartbeatServices) {
> addService(hearbeatService);
>   }
>   if (this.namenodeHeartbeatServices.isEmpty()) {
> LOG.error("Heartbeat is enabled but there are no namenodes to 
> monitor");
>   }
>   // Periodically update the router state
>   this.routerHeartbeatService = new RouterHeartbeatService(this);
>   addService(this.routerHeartbeatService);
> }
> {code}
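A rough sketch of gating the two heartbeat services on separate flags, reusing 
the identifiers from the block above; the namenode-side key name is an 
assumption, not a committed constant:

{code:java}
// Fragment of Router.serviceInit(), illustrative only.
boolean nnHeartbeat = conf.getBoolean(
    "dfs.federation.router.namenode.heartbeat.enable", true); // assumed key
boolean routerHeartbeat = conf.getBoolean(
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
    RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT);
if (nnHeartbeat) {
  // Monitored-namenode heartbeats run only when explicitly wanted.
  for (NamenodeHeartbeatService s : createNamenodeHeartbeatServices()) {
    addService(s);
  }
}
if (routerHeartbeat) {
  // Router state updates keep running even with no monitored namenodes.
  this.routerHeartbeatService = new RouterHeartbeatService(this);
  addService(this.routerHeartbeatService);
}
{code}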



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14516:

Fix Version/s: 3.3.0

> RBF: Create hdfs-rbf-site.xml for RBF specific properties
> -
>
> Key: HDFS-14516
> URL: https://issues.apache.org/jira/browse/HDFS-14516
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14516.1.patch, HDFS-14516.2.patch
>
>
> Currently, users write rbf properties in {{hdfs-site.xml}} even though the 
> definitions are in {{hdfs-rbf-default.xml}}. Like other modules, it would be 
> better if there were a specific configuration file, {{hdfs-rbf-site.xml}}.
> {{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
> configuration directory; at the moment it serves only as documentation.
> There is an early discussion in HDFS-13215.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14490:

Fix Version/s: 3.3.0

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar ones.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14457:

Fix Version/s: 3.3.0

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14457-HDFS-13891-01.patch, 
> HDFS-14457-HDFS-13891-02.patch, HDFS-14457.01.patch
>
>
> When executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14454:

Fix Version/s: 3.3.0

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch, HDFS-14454-HDFS-13891.004.patch, 
> HDFS-14454-HDFS-13891.005.patch, HDFS-14454-HDFS-13891.006.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() throws FileNotFoundException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14447:

Fix Version/s: 3.3.0

> RBF: Router should support RefreshUserMappingsProtocol
> --
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, 
> HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, 
> HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, 
> HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
> and it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, otherwise a 
> proxyuser client is refused when trying to impersonate, as shown in the 
> screenshot.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14351) RBF: Optimize configuration item resolving for monitor namenode

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14351:

Fix Version/s: 3.3.0

> RBF: Optimize configuration item resolving for monitor namenode
> ---
>
> Key: HDFS-14351
> URL: https://issues.apache.org/jira/browse/HDFS-14351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14351-HDFS-13891.001.patch, 
> HDFS-14351-HDFS-13891.002.patch, HDFS-14351-HDFS-13891.003.patch, 
> HDFS-14351-HDFS-13891.004.patch, HDFS-14351-HDFS-13891.005.patch, 
> HDFS-14351-HDFS-13891.006.patch, HDFS-14351.001.patch, HDFS-14351.002.patch
>
>
> We invoke {{configuration.get}} to resolve the configuration item 
> `dfs.federation.router.monitor.namenode` in `Router.java`, then split the 
> value by comma to get nsid and nnid. This may confuse users since, unlike 
> other common parameters, it does not tolerate blank space. The 
> following segment shows an example where resolution fails; a 
> whitespace-tolerant sketch follows the example.
> {code:java}
>   <property>
>     <name>dfs.federation.router.monitor.namenode</name>
>     <value>nameservice1.nn1, nameservice1.nn2</value>
>     <description>
>       The identifier of the namenodes to monitor and heartbeat.
>     </description>
>   </property>
> {code}
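One whitespace-tolerant alternative is Hadoop's 
{{Configuration.getTrimmedStrings()}}, which splits on commas and trims each 
token. A minimal sketch, not necessarily the committed fix:

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

final class MonitorNamenodeParser {
  // "nameservice1.nn1, nameservice1.nn2" -> [ns1, nn1], [ns1, nn2]
  static List<String[]> parse(Configuration conf) {
    List<String[]> result = new ArrayList<>();
    for (String entry :
        conf.getTrimmedStrings("dfs.federation.router.monitor.namenode")) {
      result.add(entry.split("\\."));  // [nsId] or [nsId, nnId]
    }
    return result;
  }
}
{code}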



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14369) RBF: Fix trailing "/" for webhdfs

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14369:

Fix Version/s: 3.3.0

> RBF: Fix trailing "/" for webhdfs
> -
>
> Key: HDFS-14369
> URL: https://issues.apache.org/jira/browse/HDFS-14369
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14369-HDFS-13891-regressiontest-001.patch, 
> HDFS-14369-HDFS-13891.001.patch, HDFS-14369-HDFS-13891.002.patch, 
> HDFS-14369-HDFS-13891.003.patch, HDFS-14369-HDFS-13891.004.patch, 
> HDFS-14369-HDFS-13891.005.patch, HDFS-14369-HDFS-13891.006.patch
>
>
> WebHDFS doesn't trim the trailing slash, causing a discrepancy in operations.
> An example is below; a normalization sketch follows it.
> --
> Using HDFS API, two directory are listed.
> {code}
> $ hdfs dfs -ls hdfs://:/tmp/
> Found 2 items
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp1
> drwxrwxrwx   - hdfs supergroup  0 2018-11-09 17:50 
> hdfs://:/tmp/tmp2
> {code}
> Using WebHDFS API, only one directory is listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp/?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16387,"group":"supergroup","length":0,"modificationTime":1552016766769,"owner":"hdfs","pathSuffix":"tmp1","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
> ]}}
> {code}
> The mount table is as follows:
> {code}
> $ hdfs dfsrouteradmin -ls /tmp
> Mount Table Entries:
> SourceDestinations  Owner 
> Group Mode  Quota/Usage  
> /tmp  ns1->/tmp aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp1 ns1->/tmp/tmp1aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> /tmp/tmp2 ns2->/tmp/tmp2aajisaka  
> users rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> {code}
> Without the trailing slash, two directories are listed.
> {code}
> $ curl -u : --negotiate -i 
> "http://:50071/webhdfs/v1/tmp?op=LISTSTATUS"
> (snip)
> {"FileStatuses":{"FileStatus":[
> {"accessTime":1541753421917,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753421917,"owner":"hdfs","pathSuffix":"tmp1","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"},
> {"accessTime":1541753429812,"blockSize":0,"childrenNum":0,"fileId":0,"group":"supergroup","length":0,"modificationTime":1541753429812,"owner":"hdfs","pathSuffix":"tmp2","permission":"777","replication":0,"storagePolicy":0,"symlink":"","type":"DIRECTORY"}
> ]}}
> {code}
> [~ajisakaa] Thanks for reporting this, I borrowed the text from 
> HDFS-13972
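A minimal sketch of the normalization WebHDFS could apply before resolving the 
mount table (illustrative only, not the committed patch):

{code:java}
// Strip trailing slashes so "/tmp/" and "/tmp///" resolve like "/tmp",
// while keeping the lone root "/" intact.
static String trimTrailingSlashes(String path) {
  int end = path.length();
  while (end > 1 && path.charAt(end - 1) == '/') {
    end--;
  }
  return path.substring(0, end);
}
{code}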



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14440:

Fix Version/s: 3.3.0

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch, 
> HDFS-14440-HDFS-13891-06.patch
>
>
> In case of multiple destinations, we need to check whether the file already 
> exists in one of the subclusters, for which we use the existing 
> getBlockLocation() API, which is by default a sequential call.
> In the common scenario where the file needs to be created, each subcluster is 
> checked sequentially; this can be done concurrently to save time (see the 
> sketch below).
> In the other case, where the file is found but its last block is null, we 
> need to do getFileInfo on all the locations to find where the file exists. 
> This can also be avoided by using a concurrent call, since we already have 
> the remoteLocation for which getBlockLocation returned a non-null entry.
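A generic sketch of the concurrent probe using a plain executor; the Router's 
own {{invokeConcurrent}} plumbing is more involved, and the names here are 
illustrative:

{code:java}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;

final class ConcurrentProbe {
  // Probe every candidate destination in parallel and return the first
  // non-null answer, i.e. the subcluster where the file already exists.
  static <T> T firstExisting(List<Callable<T>> probes, ExecutorService pool)
      throws InterruptedException, ExecutionException {
    CompletionService<T> cs = new ExecutorCompletionService<>(pool);
    for (Callable<T> p : probes) {
      cs.submit(p);
    }
    for (int i = 0; i < probes.size(); i++) {
      T result = cs.take().get();  // a probe returns null when absent there
      if (result != null) {
        return result;             // later completions are simply ignored
      }
    }
    return null;  // present in no subcluster: safe to create the file
  }
}
{code}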



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14422) RBF: Router shouldn't allow READ operations in safe mode

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14422:

Fix Version/s: 3.3.0

> RBF: Router shouldn't allow READ operations in safe mode
> 
>
> Key: HDFS-14422
> URL: https://issues.apache.org/jira/browse/HDFS-14422
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14422-HDFS-13891.000.patch, 
> HDFS-14422-HDFS-13891.001.patch
>
>
> We are currently seeing:
> org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException:
>  Mount Table not initialized
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.verifyMountTable(MountTableResolver.java:521)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:394)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.getDestinationForPath(MultipleDestinationMountTableResolver.java:87)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1258)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:747)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:749)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:881)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:513)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1011)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1915)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2621)
> The Namenode allows READ operations in safe mode, but for the Router, being 
> unable to access the State Store also affects the read operations.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14388) RBF: Prevent loading metric system when disabled

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14388:

Fix Version/s: 3.3.0

> RBF: Prevent loading metric system when disabled
> 
>
> Key: HDFS-14388
> URL: https://issues.apache.org/jira/browse/HDFS-14388
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14388-HDFS-13891.000.patch, 
> HDFS-14388-HDFS-13891.001.patch
>
>
> Currently, the Router and the State Store try to initialize the metrics even 
> when they are explicitly disabled. This produces a lot of verbose logs in 
> tests without metrics.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14343) RBF: Fix renaming folders spread across multiple subclusters

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14343:

Fix Version/s: HDFS-13891
   3.3.0

> RBF: Fix renaming folders spread across multiple subclusters
> 
>
> Key: HDFS-14343
> URL: https://issues.apache.org/jira/browse/HDFS-14343
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14343-HDFS-13891-01.patch, 
> HDFS-14343-HDFS-13891-02.patch, HDFS-14343-HDFS-13891-03.patch, 
> HDFS-14343-HDFS-13891-04.patch, HDFS-14343-HDFS-13891-05.patch
>
>
> The {{RouterClientProtocol#rename()}} function assumes that we are renaming 
> files and only renames one of them (i.e., {{invokeSequential()}}). In the 
> case of folders which are in all subclusters (e.g., HASH_ALL) we should 
> rename all locations (i.e., {{invokeAll()}}), as in the sketch below.
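A rough sketch of the dispatch change; {{isMultiDestDirectory}}, 
{{rpcClient}}, {{locations}} and {{renameMethod}} are illustrative names 
standing in for the Router internals:

{code:java}
// Sketch only: choose the invocation strategy per mount entry type.
if (isMultiDestDirectory(src)) {
  // Folder present in every subcluster (e.g. HASH_ALL): rename everywhere.
  rpcClient.invokeAll(locations, renameMethod);
} else {
  // Regular file: rename the first matching location only.
  rpcClient.invokeSequential(locations, renameMethod);
}
{code}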



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14334) RBF: Use human readable format for long numbers in the Router UI

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14334:

Fix Version/s: 3.3.0

> RBF: Use human readable format for long numbers in the Router UI
> 
>
> Key: HDFS-14334
> URL: https://issues.apache.org/jira/browse/HDFS-14334
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14334-HDFS-13891.000.patch, 
> HDFS-14334-HDFS-13891.001.patch, block-files-numbers-after.png, 
> block-files-numbers.png
>
>
> Currently, for the number of files, we show the raw number. When it gets 
> into the millions, it is hard to read. We should use a human readable format, 
> similar to what we do with PB, GB, MB, ... (see the sketch below).
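The actual change lands in the UI templates, but the formatting rule is simple 
to state. A Java sketch of the rounding, illustrative only:

{code:java}
// Render 12,345,678 as "12.3 M" for display purposes.
static String humanReadable(long n) {
  if (n < 1000) {
    return Long.toString(n);
  }
  final String[] units = {"K", "M", "B", "T"};
  double v = n;
  int i = -1;
  while (v >= 1000 && i < units.length - 1) {
    v /= 1000;
    i++;
  }
  return String.format("%.1f %s", v, units[i]);
}
{code}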



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14331) RBF: IOE While Removing Mount Entry

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14331:

Fix Version/s: 3.3.0

> RBF: IOE While Removing Mount Entry
> ---
>
> Key: HDFS-14331
> URL: https://issues.apache.org/jira/browse/HDFS-14331
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14331-HDFS-13891-01.patch, 
> HDFS-14331-HDFS-13891-02.patch, HDFS-14331-HDFS-13891-03.patch
>
>
> IOException while trying to remove the mount entry when the actual 
> destination doesn't exist.
> {noformat}
> java.io.IOException: Directory does not exist: /mount at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:59)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetQuota(FSDirAttrOp.java:334)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setQuota(FSDirAttrOp.java:244)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setQuota(FSNamesystem.java:3352)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setQuota(NameNodeRpcServer.java:1484)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setQuota(ClientNamenodeProtocolServerSideTranslatorPB.java:1042)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:37182)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at 
> org.apache.hadoop.ipc.Server$Call.run(Server.java:1) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2825)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14316:

Fix Version/s: 3.3.0

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, 
> HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, 
> HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, 
> HDFS-14316-HDFS-13891.013.patch, HDFS-14316-HDFS-13891.014.patch, 
> HDFS-14316-HDFS-13891.015.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14329) RBF: Add maintenance nodes to federation metrics

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14329:

Fix Version/s: 3.3.0

> RBF: Add maintenance nodes to federation metrics
> 
>
> Key: HDFS-14329
> URL: https://issues.apache.org/jira/browse/HDFS-14329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14329-HDFS-13891-01.patch, 
> HDFS-14329-HDFS-13891-02.patch
>
>
> Extend datanode maintenance related metrics into federation metrics.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14268) RBF: Fix the location of the DNs in getDatanodeReport()

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14268:

Fix Version/s: 3.3.0

> RBF: Fix the location of the DNs in getDatanodeReport()
> ---
>
> Key: HDFS-14268
> URL: https://issues.apache.org/jira/browse/HDFS-14268
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14268-HDFS-13891.000.patch, 
> HDFS-14268-HDFS-13891.001.patch, HDFS-14268-HDFS-13891.002.patch, 
> HDFS-14268-HDFS-13891.003.patch, HDFS-14268-HDFS-13891.004.patch
>
>
> When getting all the DNs in the federation, the Router queries each of the 
> subclusters and aggregates them assigning the subcluster id to the location. 
> This query uses a {{HashSet}} which provides a "random" order for the results.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14230:

Fix Version/s: 3.3.0

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch, 
> HDFS-14230-HDFS-13891.006.patch
>
>
> Failover usually happens when upgrading namenodes, and there may be no active 
> namenode for some seconds. Accessing HDFS through the router fails at that 
> moment, which can make jobs fail or hang. Some Hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> {code}
> Digging into the code: maybe we can throw StandbyException when no namenodes 
> are available. Client will fail after some 
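The ticket title proposes RetriableException; a minimal sketch of surfacing 
one during failover ({{activeNamenodes}} and {{nsId}} are illustrative names):

{code:java}
import java.util.List;
import org.apache.hadoop.ipc.RetriableException;

final class NamenodeAvailability {
  // Signal a retriable condition while failover is in progress so clients
  // back off and retry instead of failing the job outright.
  static void check(List<String> activeNamenodes, String nsId)
      throws RetriableException {
    if (activeNamenodes.isEmpty()) {
      throw new RetriableException(
          "No namenode available under nameservice " + nsId + ", retrying");
    }
  }
}
{code}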

[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14259:

Fix Version/s: 3.3.0

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch, HDFS-14259-HDFS-13891.002.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the safe mode message when the status is different from SAFEMODE:
> {code}
>   public String getSafemode() {
> try {
>   if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
> return "Safe mode is ON. " + this.getSafeModeTip();
>   }
> } catch (IOException e) {
>   return "Failed to get safemode status. Please check router"
>   + "log for more detail.";
> }
> return "";
>   }
> {code}
> The condition should be reversed.
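Reversed, the check would read roughly as follows (a sketch of the one-line 
fix, keeping the surrounding try/catch):

{code:java}
public String getSafemode() {
  try {
    if (getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
      return "Safe mode is ON. " + this.getSafeModeTip();
    }
  } catch (IOException e) {
    return "Failed to get safemode status. Please check router "
        + "log for more detail.";
  }
  return "";
}
{code}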



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14252) RBF : Exceptions are exposing the actual sub cluster path

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14252:

Fix Version/s: HDFS-13891
   3.3.0

> RBF : Exceptions are exposing the actual sub cluster path
> -
>
> Key: HDFS-14252
> URL: https://issues.apache.org/jira/browse/HDFS-14252
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14252-HDFS-13891-01.patch, 
> HDFS-14252-HDFS-13891-02.patch, HDFS-14252-HDFS-13891-03.patch
>
>
> In case of a file-not-found exception, if only one destination is available 
> (either only one was mounted, or multiple were mounted but only one is 
> available during the operation, e.g. a disabled NS), the exception is not 
> processed and is thrown directly. This exposes the actual sub-cluster 
> destination path instead of the path w.r.t. the mount point.
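A minimal sketch of the kind of remapping needed (names and paths are hypothetical; the real fix belongs in the router's exception processing):

{code:java}
// Sketch: rewrite a sub-cluster path in an exception message back to the
// mount-table path before the exception reaches the client.
class PathRemapSketch {
  static String remap(String message, String subclusterPath, String mountPath) {
    // e.g. a message mentioning /data1/file becomes /mount/file
    return message.replace(subclusterPath, mountPath);
  }

  public static void main(String[] args) {
    System.out.println(
        remap("File does not exist: /data1/file", "/data1", "/mount"));
  }
}
{code}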



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14249:

Fix Version/s: HDFS-13891
   3.3.0

> RBF: Tooling to identify the subcluster location of a file
> --
>
> Key: HDFS-14249
> URL: https://issues.apache.org/jira/browse/HDFS-14249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14249-HDFS-13891.000.patch, 
> HDFS-14249-HDFS-13891.001.patch, HDFS-14249-HDFS-13891.002.patch
>
>
> Mount points can spread files across multiple subclusters depending on a 
> policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify 
> the location.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311676&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311676
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:59
Start Date: 12/Sep/19 19:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r315708896
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,24 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli        cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli        rm s3://${BUCKET}/mpyawscli
     Compare files           /tmp/part1        /tmp/part1.result
 +
 +Test Multipart Upload list
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key1
 +    ${uploadID1} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key1
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key2
 +    ${uploadID2} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key2
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    list-multipart-uploads --bucket ${BUCKET} --prefix listtest
 +    Should contain      ${result}    ${uploadID1}
 +    Should contain      ${result}    ${uploadID2}
 +
 +    ${count} =          Execute and checkrc    echo '${result}' | jq -r '.Uploads | length'    0
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311676)
Time Spent: 7h 40m  (was: 7.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  
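For reference, the call being implemented can be exercised with the stock AWS CLI against the S3 gateway (the endpoint, bucket, and key below are placeholders):

{noformat}
aws s3api --endpoint-url http://localhost:9878 create-multipart-upload \
    --bucket bucket1 --key dir/key1
aws s3api --endpoint-url http://localhost:9878 list-multipart-uploads \
    --bucket bucket1 --prefix dir
{noformat}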



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311675&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311675
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:59
Start Date: 12/Sep/19 19:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r314769222
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,24 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli        cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli        rm s3://${BUCKET}/mpyawscli
     Compare files           /tmp/part1        /tmp/part1.result
 +
 +Test Multipart Upload list
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key1
 +    ${uploadID1} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key1
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key2
 +    ${uploadID2} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key2
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    list-multipart-uploads --bucket ${BUCKET} --prefix listtest
 +    Should contain      ${result}    ${uploadID1}
 +    Should contain      ${result}    ${uploadID2}
 +
 +    ${count} =          Execute and checkrc    echo '${result}' | jq -r '.Uploads | length'    0
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311675)
Time Spent: 7.5h  (was: 7h 20m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14833:

Attachment: HDFS-14833-01.patch

> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch
>
>
> HDFS-14777 added a check to avoid an unnecessary RPC call: it checks 
> whether, in the present state, the quota is changing. But it ignores the 
> case where the locations are changed; if the location is changed, the new 
> destination should be synchronized with the mount entry quota. 
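A minimal self-contained sketch of the intended check (type and field names are hypothetical, not the actual router admin code):

{code:java}
import java.util.List;

// Sketch: an updated mount entry needs a quota sync when either the quota
// values or the destination locations differ from the stored entry.
class MountEntrySketch {
  List<String> destinations;
  long nsQuota;
  long ssQuota;

  static boolean needsQuotaSync(MountEntrySketch oldEntry,
      MountEntrySketch newEntry) {
    boolean quotaChanged = oldEntry.nsQuota != newEntry.nsQuota
        || oldEntry.ssQuota != newEntry.ssQuota;
    boolean locationsChanged =
        !oldEntry.destinations.equals(newEntry.destinations);
    // The HDFS-14777 check effectively covered only quotaChanged.
    return quotaChanged || locationsChanged;
  }
}
{code}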



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928855#comment-16928855
 ] 

Ayush Saxena commented on HDFS-14833:
-

Uploaded patch v1.
Pls Review!!!

> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch
>
>
> HDFS-14777 added a check to avoid an unnecessary RPC call: it checks 
> whether, in the present state, the quota is changing. But it ignores the 
> case where the locations are changed; if the location is changed, the new 
> destination should be synchronized with the mount entry quota. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14833) RBF: Router Update Doesn't Sync Quota

2019-09-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14833:

Status: Patch Available  (was: Open)

> RBF: Router Update Doesn't Sync Quota
> -
>
> Key: HDFS-14833
> URL: https://issues.apache.org/jira/browse/HDFS-14833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14833-01.patch
>
>
> HDFS-14777 added a check to avoid an unnecessary RPC call: it checks 
> whether, in the present state, the quota is changing. But it ignores the 
> case where the locations are changed; if the location is changed, the new 
> destination should be synchronized with the mount entry quota. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311674&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311674
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:59
Start Date: 12/Sep/19 19:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1277: 
HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r312740496
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,24 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli        cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli        rm s3://${BUCKET}/mpyawscli
     Compare files           /tmp/part1        /tmp/part1.result
 +
 +Test Multipart Upload list
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key1
 +    ${uploadID1} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key1
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key listtest/key2
 +    ${uploadID2} =      Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
 +    Should contain      ${result}    ${BUCKET}
 +    Should contain      ${result}    listtest/key2
 +    Should contain      ${result}    UploadId
 +
 +    ${result} =         Execute AWSS3APICli    list-multipart-uploads --bucket ${BUCKET} --prefix listtest
 +    Should contain      ${result}    ${uploadID1}
 +    Should contain      ${result}    ${uploadID2}
 +
 +    ${count} =          Execute and checkrc    echo '${result}' | jq -r '.Uploads | length'    0
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311674)
Time Spent: 7h 20m  (was: 7h 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311671
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:58
Start Date: 12/Sep/19 19:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-526197265
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 597 | trunk passed |
   | +1 | compile | 383 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 917 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 470 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 701 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 593 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | cc | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 639 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 303 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1591 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7646 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 376784018ab2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8c0759d |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/testReport/ |
   | Max. process+thread count | 5019 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311671)
Time Spent: 7h  (was: 6h 50m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  


[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311669&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311669
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:58
Start Date: 12/Sep/19 19:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-525862802
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1277 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311669)
Time Spent: 6h 40m  (was: 6.5h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14226) RBF: Setting attributes should set on all subclusters' directories.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14226:

Fix Version/s: 3.3.0

> RBF: Setting attributes should set on all subclusters' directories.
> ---
>
> Key: HDFS-14226
> URL: https://issues.apache.org/jira/browse/HDFS-14226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14226-HDFS-13891-01.patch, 
> HDFS-14226-HDFS-13891-02.patch, HDFS-14226-HDFS-13891-03.patch, 
> HDFS-14226-HDFS-13891-04.patch, HDFS-14226-HDFS-13891-05.patch, 
> HDFS-14226-HDFS-13891-06.patch, HDFS-14226-HDFS-13891-07.patch, 
> HDFS-14226-HDFS-13891-WIP1.patch
>
>
> Only one subcluster is set now.
> {noformat}
> // create a mount point of multiple subclusters
> hdfs dfsrouteradmin -add /all_data ns1 /data1
> hdfs dfsrouteradmin -add /all_data ns2 /data2
> hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
> RS-3-2-1024k
> Set RS-3-2-1024k erasure coding policy on /all_data
> hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
> RS-3-2-1024k
> hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
> The erasure coding policy of /data2 is unspecified
> {noformat}
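A minimal self-contained sketch of the intended behaviour (names are hypothetical; the real router would resolve the mount point's destinations and issue the namenode RPCs):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.function.BiConsumer;

// Sketch: apply an attribute (e.g. an EC policy) to every resolved
// destination of a mount point instead of only the first one.
class SetOnAllSketch {
  static void setOnAll(List<String> destinations, String policy,
      BiConsumer<String, String> setPolicyRpc) {
    for (String dest : destinations) {
      setPolicyRpc.accept(dest, policy);  // one RPC per sub-cluster path
    }
  }

  public static void main(String[] args) {
    setOnAll(Arrays.asList("/data1", "/data2"), "RS-3-2-1024k",
        (path, p) -> System.out.println("setPolicy " + p + " on " + path));
  }
}
{code}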



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311672&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311672
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:58
Start Date: 12/Sep/19 19:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-527276378
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 388 | trunk passed |
   | +1 | checkstyle | 99 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 922 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 431 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 639 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 423 | the patch passed |
   | +1 | cc | 423 | the patch passed |
   | +1 | javac | 423 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 778 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 770 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 310 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2359 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8614 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 371835c48863 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/testReport/ |
   | Max. process+thread count | 4430 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311672)
Time Spent: 7h 10m  (was: 7h)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>   

[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311670&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311670
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:58
Start Date: 12/Sep/19 19:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-52616
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 386 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 926 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 471 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 709 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 610 | the patch passed |
   | +1 | compile | 409 | the patch passed |
   | +1 | cc | 409 | the patch passed |
   | +1 | javac | 409 | the patch passed |
   | +1 | checkstyle | 95 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | the patch passed |
   | +1 | findbugs | 685 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 374 | hadoop-hdds in the patch passed. |
   | -1 | unit | 262 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6658 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestKeyManagerUnit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux fc03450f9b59 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c749f62 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/testReport/ |
   | Max. process+thread count | 1237 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311670)
Time Spent: 6h 50m  (was: 6h 40m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  


[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311668
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:57
Start Date: 12/Sep/19 19:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-525885449
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1277 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/7/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311668)
Time Spent: 6.5h  (was: 6h 20m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14223:

Fix Version/s: 3.3.0

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14223-HDFS-13891.1.patch, 
> HDFS-14223-HDFS-13891.2.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set 
> {{dfs.federation.router.file.resolver.client.class}} to 
> {{MultipleDestinationMountTableResolver}}. The current documents lack this 
> explanation. We should add it to HDFSRouterFederation.md and 
> hdfs-rbf-default.xml.
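For reference, the setting the documentation should describe would look like this (the fully qualified class name below is an assumption):

{noformat}
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver</value>
</property>
{noformat}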



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14215) RBF: Remove dependency on availability of default namespace

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14215:

Fix Version/s: 3.3.0

> RBF: Remove dependency on availability of default namespace
> ---
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch, HDFS-14215-HDFS-13891-05.patch, 
> HDFS-14215-HDFS-13891-05.patch, HDFS-14215-HDFS-13891-06.patch, 
> HDFS-14215-HDFS-13891-07.patch, HDFS-14215-HDFS-13891-08.patch, 
> HDFS-14215-HDFS-13891-09.patch
>
>
> Remove the dependency of all APIs on the availability of the default 
> namespace.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14224) RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14224:

Fix Version/s: 3.3.0

> RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple 
> destinations
> --
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch, 
> HDFS-14224-HDFS-13891-04.patch, HDFS-14224-HDFS-13891-05.patch, 
> HDFS-14224-HDFS-13891-06.patch
>
>
> A NullPointerException occurs in getContentSummary() for the EC policy when 
> there are multiple destinations.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14225:

Fix Version/s: 3.3.0

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting UnknownHostException in UT.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14209) RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14209:

Fix Version/s: 3.3.0

> RBF: setQuota() through router is working for only the mount Points under the 
> Source column in MountTable
> -
>
> Key: HDFS-14209
> URL: https://issues.apache.org/jira/browse/HDFS-14209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14209-HDFS-13891.002.patch, 
> HDFS-14209-HDFS-13891.003.patch, HDFS-14209.001.patch
>
>
> Through the router we are only able to setQuota for the directories under 
> the Source column of the mount table.
>  For any other directory apart from a mount table entry, a "No remote 
> locations available" IOException is thrown.
>  We should be able to setQuota for all directories if they are present.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14210) RBF: ACL commands should work over all the destinations

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14210:

Fix Version/s: 3.3.0

> RBF: ACL commands should work over all the destinations
> ---
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14210-HDFS-13891-04.patch, 
> HDFS-14210-HDFS-13891-05.patch, HDFS-14210-HDFS-13891.002.patch, 
> HDFS-14210-HDFS-13891.003.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) The command works for only one destination.
> The ACL should be set on both destinations.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14156) RBF: rollEdit() command fails with Router

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14156:

Fix Version/s: 3.3.0

> RBF: rollEdit() command fails with Router
> -
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14156-HDFS-13891.006.patch, 
> HDFS-14156-HDFS-13891.007.patch, HDFS-14156.001.patch, HDFS-14156.002.patch, 
> HDFS-14156.003.patch, HDFS-14156.004.patch, HDFS-14156.005.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}
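The root cause can be reproduced outside the router: {{Class.cast}} with a primitive class literal always fails for a boxed value, which matches the reported message (a minimal sketch):

{code:java}
public class CastDemo {
  public static void main(String[] args) {
    Object boxed = Long.valueOf(42L);
    long ok = (Long) boxed;             // explicit cast + auto-unboxing works
    System.out.println(ok);
    // long.class.isInstance(boxed) is false for a boxed Long, so this throws
    // ClassCastException: Cannot cast java.lang.Long to long
    long bad = long.class.cast(boxed);
    System.out.println(bad);
  }
}
{code}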



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Updated] (HDFS-14193) RBF: Inconsistency with the Default Namespace

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14193:

Fix Version/s: 3.3.0

> RBF: Inconsistency with the Default Namespace
> -
>
> Key: HDFS-14193
> URL: https://issues.apache.org/jira/browse/HDFS-14193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14193-HDFS-13891-01.patch, 
> HDFS-14193-HDFS-13891-02.patch
>
>
> In the present scenario, if the default nameservice is not explicitly 
> mentioned, each router falls back to its local namespace as the default, so 
> each router can have a different default namespace. This leads to 
> inconsistencies in operations and even blocks maintaining a global uniform 
> state. The output becomes specific to which router is serving the request 
> and differs across routers.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14206) RBF: Cleanup quota modules

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14206:

Fix Version/s: 3.3.0

> RBF: Cleanup quota modules
> --
>
> Key: HDFS-14206
> URL: https://issues.apache.org/jira/browse/HDFS-14206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14206-HDFS-13891.000.patch, 
> HDFS-14206-HDFS-13891.001.patch, HDFS-14206-HDFS-13891.002.patch
>
>
> The quota part needs some cleanup.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14191) RBF: Remove hard coded router status from FederationMetrics.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14191:

Fix Version/s: 3.3.0

> RBF: Remove hard coded router status from FederationMetrics.
> 
>
> Key: HDFS-14191
> URL: https://issues.apache.org/jira/browse/HDFS-14191
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14191-HDFS-13891.002.patch, 
> HDFS-14191-HDFS-13891.003.patch, HDFS-14191.001.patch, 
> IMG_20190109_023713.jpg, image-2019-01-08-16-05-34-736.png, 
> image-2019-01-08-16-09-46-648.png
>
>
> The status values in "Router Information" and in the Overview tab do not 
> match for the "SAFEMODE" condition.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14161:

Fix Version/s: 3.3.0

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161-HDFS-13891.005.patch, 
> HDFS-14161-HDFS-13891.006.patch, HDFS-14161.001.patch
>
>
> The Hive client may hang when it gets an IOException; the stack trace follows:
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> 
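A minimal sketch of the proposed change (the connection helper is hypothetical; StandbyException is retriable by the client's failover policy, while a plain IOException is not):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.StandbyException;

// Sketch: surface a StandbyException when no connection can be obtained,
// so the client retries/fails over instead of failing immediately.
class ConnectionSketch {
  Object getConnection(String nnAddress) throws IOException {
    Object conn = tryGetConnection(nnAddress);  // assumed helper
    if (conn == null) {
      throw new StandbyException("Cannot get a connection to " + nnAddress);
    }
    return conn;
  }

  private Object tryGetConnection(String nnAddress) {
    return null;  // placeholder: real code pulls from the connection pool
  }
}
{code}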

[jira] [Updated] (HDFS-14150) RBF: Quotas of the sub-cluster should be removed when removing the mount point

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14150:

Fix Version/s: 3.3.0

> RBF: Quotas of the sub-cluster should be removed when removing the mount point
> --
>
> Key: HDFS-14150
> URL: https://issues.apache.org/jira/browse/HDFS-14150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14150-HDFS-13891.2.patch, 
> HDFS-14150-HDFS-13891.3.patch, HDFS-14150.1.patch, HDFS-14150.2.patch
>
>
> From HDFS-14143
> {noformat}
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -setQuota /ns1_data -nsQuota 10 -ssQuota 10
> $ hdfs dfsrouteradmin -ls /ns1_data
> SourceDestinations  Owner 
> Group Mode  Quota/Usage
> /ns1_datans1->/data tasanuma
> users  rwxr-xr-x [NsQuota: 10/1, SsQuota: 
> 10 B/0 B]
> $ hdfs dfsrouteradmin -rm /ns1_data
> $ hdfs dfsrouteradmin -add /ns1_data ns1 /data
> $ hdfs dfsrouteradmin -ls /ns1_data
> SourceDestinations  Owner 
> Group Mode  Quota/Usage
> /ns1_datans1->/data tasanuma
> users  rwxr-xr-x [NsQuota: -/-, SsQuota: 
> -/-]
> $ hadoop fs -put file1 /ns1_data/file1
> put: The DiskSpace quota of /data is exceeded: quota = 10 B = 10 B but 
> diskspace consumed = 402653184 B = 384 MB
> {noformat}
> This is because the quotas of the subclusters still remain after "hdfs 
> dfsrouteradmin -rm", and "hdfs dfsrouteradmin -add" doesn't reflect the 
> existing quotas.
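A minimal self-contained sketch of the cleanup the removal path needs (names are hypothetical; -1 mirrors the HdfsConstants.QUOTA_RESET convention for clearing a quota):

{code:java}
import java.util.List;
import java.util.function.BiConsumer;

// Sketch: when a mount entry is removed, clear the quota on every
// destination so a later -add does not inherit stale sub-cluster quotas.
class QuotaCleanupSketch {
  static final long QUOTA_RESET = -1;  // same value as HdfsConstants.QUOTA_RESET

  static void clearQuotas(List<String> destinations,
      BiConsumer<String, Long> setQuotaRpc) {
    for (String dest : destinations) {
      setQuotaRpc.accept(dest, QUOTA_RESET);
    }
  }
}
{code}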



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14152) RBF: Fix a typo in RouterAdmin usage

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14152:

Fix Version/s: 3.3.0

> RBF: Fix a typo in RouterAdmin usage
> 
>
> Key: HDFS-14152
> URL: https://issues.apache.org/jira/browse/HDFS-14152
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF, newbie
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14152-HDFS-13891-01.patch
>
>
> {{routeradmin}} is wrong; the command is {{dfsrouteradmin}}.
> {noformat}
> Usage: hdfs routeradmin
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14167) RBF: Add stale nodes to federation metrics

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14167:

Fix Version/s: 3.3.0

> RBF: Add stale nodes to federation metrics
> --
>
> Key: HDFS-14167
> URL: https://issues.apache.org/jira/browse/HDFS-14167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14167-HDFS-13891.000.patch
>
>
> The federation metrics mimic the Namenode FSNamesystemState. However, the 
> stale datanodes are not collected.
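>
> A minimal sketch of the missing aggregation, assuming a hypothetical
> getNumOfStaleDatanodes() accessor on the per-nameservice membership stats:
> {code:java}
> // Sum stale datanodes over all sub-clusters, mirroring how the live and
> // dead counts are already collected for FSNamesystemState.
> public int getNumStaleNodes() {
>   int stale = 0;
>   for (MembershipStats stats : getNameserviceStats()) {
>     stale += stats.getNumOfStaleDatanodes();
>   }
>   return stale;
> }
> {code}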



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13856:

Fix Version/s: 3.3.0

> RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command
> -
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856-HDFS-13891.003.patch, 
> HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the namenode, the router should support refreshing policies 
> individually. For example, we have implemented simple password 
> authentication per RPC connection. The password dict can be refreshed by 
> the generic refresh policy. We also want to support this in 
> RouterAdminServer.
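>
> A minimal sketch of the idea using Hadoop's generic refresh registry; the
> exact RPC wiring in the patch may differ:
> {code:java}
> import java.util.Collection;
> import org.apache.hadoop.ipc.RefreshRegistry;
> import org.apache.hadoop.ipc.RefreshResponse;
>
> // In RouterAdminServer: dispatch "-refreshRouterArgs" requests to the
> // handler registered under the given identifier (e.g. a password-dict
> // refresher), like the NameNode's generic refresh support does.
> public Collection<RefreshResponse> refresh(String identifier, String[] args) {
>   return RefreshRegistry.defaultRegistry().dispatch(identifier, args);
> }
> {code}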



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14085:

Fix Version/s: 3.3.0

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch, 
> HDFS-14085-HDFS-13891-04.patch, HDFS-14085-HDFS-13891-05.patch, 
> HDFS-14085-HDFS-13891-06.patch, HDFS-14085-HDFS-13891-07.patch, 
> HDFS-14085-HDFS-13891-08.patch, HDFS-14085-HDFS-13891-09.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info 
> are the same as those of the user calling it, whereas they should 
> actually match the destination of the mount point.
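>
> A minimal sketch of the intended behavior; the per-subcluster lookup and
> the fallback helper are assumptions for illustration:
> {code:java}
> private HdfsFileStatus getMountPointStatus(MountTable entry)
>     throws IOException {
>   for (RemoteLocation loc : entry.getDestinations()) {
>     // Ask the destination sub-cluster for the real attributes.
>     HdfsFileStatus dest = getFileInfoAt(loc);  // hypothetical helper
>     if (dest != null) {
>       return new HdfsFileStatus.Builder()
>           .isdir(true)
>           .owner(dest.getOwner())        // instead of the caller's user
>           .group(dest.getGroup())
>           .perm(dest.getPermission())    // instead of the default 777
>           .build();
>     }
>   }
>   return defaultMountPointStatus(entry);  // hypothetical fallback
> }
> {code}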



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14151:

Fix Version/s: 3.3.0

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14151.1.patch, HDFS-14151.2.patch, 
> HDFS-14151.3.patch, mount_table_3rd_patch.png, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14024) RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14024:

Fix Version/s: 3.3.0

> RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService
> -
>
> Key: HDFS-14024
> URL: https://issues.apache.org/jira/browse/HDFS-14024
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14024-HDFS-13891.0.patch, HDFS-14024.0.patch
>
>
> Routers may be proxying for a downstream namenode that has NOT been 
> migrated to understand "ProvidedCapacityTotal". The updateJMXParameters 
> method in NamenodeHeartbeatService should handle this without breaking.
>  
> {code:java}
> jsonObject.getLong("MissingBlocks"),
> jsonObject.getLong("PendingReplicationBlocks"),
> jsonObject.getLong("UnderReplicatedBlocks"),
> jsonObject.getLong("PendingDeletionBlocks"),
> jsonObject.getLong("ProvidedCapacityTotal"));
> {code}
> One way to do this is to create a JSON wrapper which gives back some 
> default if the JSON node is not found, as sketched below.
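>
> A minimal sketch of that wrapper around the Jettison JSONObject already
> used by NamenodeHeartbeatService:
> {code:java}
> import org.codehaus.jettison.json.JSONObject;
>
> // Unlike getLong(), optLong() does not throw when the key is absent, so
> // routers can proxy older namenodes that predate ProvidedCapacityTotal.
> private static long getLongOrDefault(JSONObject json, String name,
>     long dflt) {
>   return json.optLong(name, dflt);
> }
>
> // Call site:
> //   getLongOrDefault(jsonObject, "ProvidedCapacityTotal", 0L)
> {code}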
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14114) RBF: MIN_ACTIVE_RATIO should be configurable

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14114:

Fix Version/s: 3.3.0

> RBF: MIN_ACTIVE_RATIO should be configurable
> 
>
> Key: HDFS-14114
> URL: https://issues.apache.org/jira/browse/HDFS-14114
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14114-HDFS-13891.001.patch, 
> HDFS-14114-HDFS-13891.002.patch, HDFS-14114.001.patch, HDFS-14114.002.patch, 
> HDFS-14114.003.patch, HDFS-14114.004.patch, HDFS-14114.005.patch, 
> HDFS-14114.006.patch, HDFS-14114.007.patch, HDFS-14114.008.patch
>
>
> The following code contains 
> {code:java}
>   if (timeSinceLastActive > connectionCleanupPeriodMs ||
>   active < MIN_ACTIVE_RATIO * total) {
> // Remove and close 1 connection
> List<ConnectionContext> conns = pool.removeConnections(1);
> for (ConnectionContext conn : conns) {
>   conn.close();
> }
> LOG.debug("Removed connection {} used {} seconds ago. " +
> "Pool has {}/{} connections", pool.getConnectionPoolId(),
> TimeUnit.MILLISECONDS.toSeconds(timeSinceLastActive),
> pool.getNumConnections(), pool.getMaxSize());
>   }
> ...
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
> {code}
> It affects cleaning up and creating connections. It should probably be 
> configurable so that we can tune it to improve performance, as sketched 
> below.
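>
> A minimal sketch of making the ratio configurable; the key name and
> default shown are illustrative, not necessarily the committed ones:
> {code:java}
> public static final String DFS_ROUTER_MIN_ACTIVE_RATIO_KEY =
>     "dfs.federation.router.connection.min-active-ratio";
> public static final float DFS_ROUTER_MIN_ACTIVE_RATIO_DEFAULT = 0.5f;
>
> // In the ConnectionManager constructor, replace the constant with:
> this.minActiveRatio = conf.getFloat(
>     DFS_ROUTER_MIN_ACTIVE_RATIO_KEY,
>     DFS_ROUTER_MIN_ACTIVE_RATIO_DEFAULT);
> {code}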



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14011:

Fix Version/s: 3.3.0

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of 
> the mount point; therefore, "hdfs dfs -ls" on a directory that includes 
> a mount point returns incorrect information.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1054) List Multipart uploads in a bucket

2019-09-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1054?focusedWorklogId=311664&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-311664
 ]

ASF GitHub Bot logged work on HDDS-1054:


Author: ASF GitHub Bot
Created on: 12/Sep/19 19:52
Start Date: 12/Sep/19 19:52
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r323920792
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
 ##
 @@ -555,6 +555,16 @@ public OzoneOutputStream createFile(String keyName, long 
size,
 .listStatus(volumeName, name, keyName, recursive, startKey, 
numEntries);
   }
 
+  /**
+   * Return with the list of the in-flight multipart uploads.
+   *
+   * @param prefix Optional string to filter for the selected keys.
+   */
+  public OzoneMultipartUploadList listMultpartUploads(String prefix)
 
 Review comment:
   thanks, fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 311664)
Time Spent: 6h 20m  (was: 6h 10m)

> List Multipart uploads in a bucket
> --
>
> Key: HDDS-1054
> URL: https://issues.apache.org/jira/browse/HDDS-1054
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement listing of in-progress multipart uploads in a 
> bucket in Ozone.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html]
>  
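> A minimal client-side usage sketch of the API added in the PR above,
> assuming an OzoneVolume handle "volume"; accessor names on the returned
> objects are assumptions (and note the method-name typo flagged in review):
> {code:java}
> OzoneBucket bucket = volume.getBucket("bucket1");
> // List in-flight multipart uploads, optionally filtered by key prefix.
> OzoneMultipartUploadList uploads = bucket.listMultipartUploads("docs/");
> for (OzoneMultipartUpload u : uploads.getUploads()) {
>   System.out.println(u.getKeyName() + " -> " + u.getUploadID());
> }
> {code}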



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13845) RBF: The default MountTableResolver should fail resolving multi-destination paths

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13845:

Fix Version/s: 3.3.0

> RBF: The default MountTableResolver should fail resolving multi-destination 
> paths
> -
>
> Key: HDFS-13845
> URL: https://issues.apache.org/jira/browse/HDFS-13845
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13845.001.patch, HDFS-13845.002.patch, 
> HDFS-13845.003.patch, HDFS-13845.004.patch, HDFS-13845.005.patch
>
>
> When we use the default MountTableResolver to resolve the path, we cannot get 
> the destination paths for the default DestinationOrder.HASH. 
> {code:java}
> // Some comments here
> private static PathLocation buildLocation(
>   ..
> List<RemoteLocation> locations = new LinkedList<>();
> for (RemoteLocation oneDst : entry.getDestinations()) {
>   String nsId = oneDst.getNameserviceId();
>   String dest = oneDst.getDest();
>   String newPath = dest;
>   if (!newPath.endsWith(Path.SEPARATOR) && !remainingPath.isEmpty()) {
> newPath += Path.SEPARATOR;
>   }
>   newPath += remainingPath;
>   RemoteLocation remoteLocation = new RemoteLocation(nsId, newPath, path);
>   locations.add(remoteLocation);
> }
> DestinationOrder order = entry.getDestOrder();
> return new PathLocation(srcPath, locations, order);
>   }
> {code}
> The default order will be HASH, but the HashFirstResolver will not be 
> invoked to order the locations.
> It is ambiguous that the web UI shows the HASH order for a 
> multi-destination path under the MountTableResolver, yet we cannot get 
> the result.
> In my opinion, the MountTableResolver should be a simple resolver that 
> implements 1-to-1 mappings, not 1-to-n destinations. So we should check 
> buildLocation: if the entry has multiple destinations, we should reject 
> it, as sketched below.
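>
> A minimal sketch of the proposed guard in buildLocation; the exception
> type and message are illustrative:
> {code:java}
> private static PathLocation buildLocation(MountTable entry,
>     String remainingPath) throws IOException {
>   List<RemoteLocation> dests = entry.getDestinations();
>   if (dests.size() > 1) {
>     // The plain MountTableResolver cannot order multiple destinations,
>     // so fail fast instead of returning an unordered location.
>     throw new IOException("Invalid entry, multiple destinations not "
>         + "supported by MountTableResolver: " + entry.getSourcePath());
>   }
>   // ... existing single-destination logic from above ...
> }
> {code}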



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13852:

Fix Version/s: 3.3.0

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes "getDataNodeReport" 
> periodically, and we can set dfs.federation.router.dn-report.time-out 
> and dfs.federation.router.dn-report.cache-expire to avoid timeouts. But 
> when we start the router, FederationMetrics also invokes the method to 
> get node usage, and if a timeout error happens there, we cannot adjust 
> the time-out parameter. The time-out in FederationMetrics and 
> NamenodeBeanMetrics should be the same.
>  
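> A minimal sketch of the shared keys in RBFConfigKeys; the defaults shown
> are illustrative:
> {code:java}
> public static final String DN_REPORT_TIME_OUT =
>     FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
> public static final long DN_REPORT_TIME_OUT_MS_DEFAULT = 1000;
> public static final String DN_REPORT_CACHE_EXPIRE =
>     FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
> public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT =
>     TimeUnit.SECONDS.toMillis(10);
> {code}
> Both FederationMetrics and NamenodeBeanMetrics would then read the same
> two keys instead of keeping private copies.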



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13906) RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13906:

Fix Version/s: 3.3.0

> RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands
> ---
>
> Key: HDFS-13906
> URL: https://issues.apache.org/jira/browse/HDFS-13906
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13906-01.patch, HDFS-13906-02.patch, 
> HDFS-13906-03.patch, HDFS-13906-04.patch
>
>
> Currently we have the option to delete only one mount entry at a time.
> If we have multiple mount entries, it is cumbersome for the user to 
> execute the command N times.
> It would be better if the "rm" and "clrQuota" commands supported 
> multiple entries; then the user could provide all the required entries 
> in one single command (a sketch follows).
> The Namenode already supports "rm" and "clrQuota" with multiple 
> destinations.
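>
> A minimal sketch of the multi-path handling in RouterAdmin, assuming the
> existing single-entry removeMount helper:
> {code:java}
> // "hdfs dfsrouteradmin -rm /a /b /c": iterate instead of N invocations.
> private boolean removeMounts(String[] paths) throws IOException {
>   boolean allRemoved = true;
>   for (String mount : paths) {
>     if (!removeMount(mount)) {
>       System.err.println("Cannot remove mount point " + mount);
>       allRemoved = false;
>     }
>   }
>   return allRemoved;
> }
> {code}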



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13834) RBF: Connection creator thread should catch Throwable

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13834:

Fix Version/s: 3.3.0

> RBF: Connection creator thread should catch Throwable
> -
>
> Key: HDFS-13834
> URL: https://issues.apache.org/jira/browse/HDFS-13834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Critical
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13834-HDFS-13891.0.patch, 
> HDFS-13834-HDFS-13891.1.patch, HDFS-13834-HDFS-13891.2.patch, 
> HDFS-13834-HDFS-13891.3.patch, HDFS-13834-HDFS-13891.4.patch, 
> HDFS-13834-HDFS-13891.5.patch, HDFS-13834.0.patch, HDFS-13834.1.patch
>
>
> The connection creator thread is a single thread that is responsible for 
> creating all downstream namenode connections.
> This is a very critical thread and hence should not die under 
> exception/error scenarios.
> We saw this behavior in production systems where the thread died, 
> leaving the router process in a bad state.
> The thread should also catch a generic error/exception, as sketched 
> after the quoted code below.
> {code}
> @Override
> public void run() {
>   while (this.running) {
> try {
>   ConnectionPool pool = this.queue.take();
>   try {
> int total = pool.getNumConnections();
> int active = pool.getNumActiveConnections();
> if (pool.getNumConnections() < pool.getMaxSize() &&
> active >= MIN_ACTIVE_RATIO * total) {
>   ConnectionContext conn = pool.newConnection();
>   pool.addConnection(conn);
> } else {
>   LOG.debug("Cannot add more than {} connections to {}",
>   pool.getMaxSize(), pool);
> }
>   } catch (IOException e) {
> LOG.error("Cannot create a new connection", e);
>   }
> } catch (InterruptedException e) {
>   LOG.error("The connection creator was interrupted");
>   this.running = false;
> }
>   }
> {code}
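>
> A minimal sketch of the widened catch; the structure mirrors the inner
> try of the loop above:
> {code:java}
>   try {
>     // ... existing connection-creation logic ...
>   } catch (IOException e) {
>     LOG.error("Cannot create a new connection", e);
>   } catch (Throwable t) {
>     // Last-resort guard so an Error or RuntimeException from the pool
>     // handling cannot kill the single creator thread.
>     LOG.error("Unexpected error while creating connection for {}", pool, t);
>   }
> {code}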



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13869:

Fix Version/s: 3.3.0

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics
> 
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}
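>
> A minimal sketch of the null guard in NamenodeBeanMetrics#getUsed; the
> accessor names on Router are assumptions:
> {code:java}
> public long getUsed() {
>   // During startup the router may not have federation metrics yet;
>   // return 0 instead of dereferencing null.
>   FederationMetrics metrics =
>       (router == null) ? null : router.getMetrics();
>   return (metrics == null) ? 0 : metrics.getUsedCapacity();
> }
> {code}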



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13443:

Fix Version/s: 3.3.0

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-HDFS-13891-002.patch, HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the 
> cache is updated every minute. After a change in the mount table, user 
> operations may still use the old mount table, which is wrong.
> To update the mount table cache, maybe we can do the following (a sketch 
> appears after the list):
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server can 
> update its cache and ask other routers to update their caches*. For example, if 
> there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, 
> at the admin server side, will perform the following sequence of actions:
>  ## user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user
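>
> A minimal sketch of the fan-out on the admin server; RPC plumbing and
> helper names are simplified for illustration:
> {code:java}
> public boolean addEntry(MountTable newEntry) throws IOException {
>   boolean added = getMountTableStore().addEntry(newEntry);  // step 2
>   if (added) {
>     for (RouterState remote : getOtherRouters()) {          // steps 3-4
>       try {
>         getAdminClient(remote).refreshMountTableEntries();
>       } catch (IOException e) {
>         LOG.warn("Cannot refresh mount table cache on {}", remote, e);
>       }
>     }
>     refreshLocalCache();                                    // step 5
>   }
>   return added;                                             // step 6
> }
> {code}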



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13255:

Fix Version/s: 3.3.0

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13255-HDFS-13891-002.patch, 
> HDFS-13255-HDFS-13891-003.patch, HDFS-13255-HDFS-13891-004.patch, 
> HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting an ns-fed path which includes mount point paths, an error 
> will be issued. Each mount point path needs to be deleted independently.
> Operation step:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13853) RBF: RouterAdmin update cmd is overwriting the entry not updating the existing

2019-09-12 Thread Brahma Reddy Battula (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-13853:

Fix Version/s: 3.3.0

> RBF: RouterAdmin update cmd is overwriting the entry not updating the existing
> --
>
> Key: HDFS-13853
> URL: https://issues.apache.org/jira/browse/HDFS-13853
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, HDFS-13891
>
> Attachments: HDFS-13853-HDFS-13891-01.patch, 
> HDFS-13853-HDFS-13891-02.patch, HDFS-13853-HDFS-13891-03.patch, 
> HDFS-13853-HDFS-13891-04.patch, HDFS-13853-HDFS-13891-05.patch, 
> HDFS-13853-HDFS-13891-06.patch, HDFS-13853-HDFS-13891-07.patch, 
> HDFS-13853-HDFS-13891-08.patch, HDFS-13853-HDFS-13891-09.patch
>
>
> {code:java}
> // Create a new entry
> Map<String, String> destMap = new LinkedHashMap<>();
> for (String ns : nss) {
>   destMap.put(ns, dest);
> }
> MountTable newEntry = MountTable.newInstance(mount, destMap);
> {code}
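>
> A minimal sketch of updating in place instead of overwriting; the lookup
> and destination-setter helpers are assumptions:
> {code:java}
> // Start from the existing entry so "update" keeps attributes such as
> // quota, order and read-only that the user did not pass on the CLI.
> MountTable existing = getMountEntry(mount);
> MountTable newEntry;
> if (existing != null) {
>   newEntry = existing;
>   newEntry.setDestinations(toRemoteLocations(mount, destMap));
> } else {
>   newEntry = MountTable.newInstance(mount, destMap);
> }
> {code}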



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13891) HDFS RBF stabilization phase I

2019-09-12 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928850#comment-16928850
 ] 

Brahma Reddy Battula commented on HDFS-13891:
-

[~jojochuang] thanks for the reminder and sorry for the late reply. Will do now.

> HDFS RBF stabilization phase I  
> 
>
> Key: HDFS-13891
> URL: https://issues.apache.org/jira/browse/HDFS-13891
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Priority: Major
>  Labels: RBF
>
> RBF (Router Based Federation) shipped in 3.0+ and 2.9.
> Now that it is out, various corner cases, scale and error handling 
> issues are surfacing.
> We are also targeting the security feature (HDFS-13532).
> This umbrella is to fix all those issues and support the missing 
> protocols (HDFS-13655) before the next 3.3 release.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


